In-reply-to » It seems silly to me that we humans create thermal energy with coal, convert the thermal energy to mechanical energy with steam turbines, convert the mechanical energy to electrical energy with generators, and convert the electrical energy back into thermal energy with glass-top stoves and electric heaters.

@Rob@jsreed5.org Hmm Coal -> Heat -> Steam -> Generator -> Electricity -> Resistance -> Heat

You do have an interesting point there 🤔 Seems rather wasteful just to produce some heat 🔥
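
As a rough back-of-the-envelope illustration of how much is lost along that chain, here is a sketch using typical textbook efficiency figures (the numbers are my assumptions, not from either post):

```python
# Ballpark efficiency of each stage in the coal -> electric-heat chain.
# All figures are typical textbook values, assumed for illustration.
boiler_and_turbine = 0.38  # coal heat -> shaft work in a steam plant
generator = 0.98           # shaft work -> electricity
transmission = 0.95        # grid losses between plant and home
resistive_heater = 1.00    # electricity -> heat, essentially lossless

overall = boiler_and_turbine * generator * transmission * resistive_heater
print(f"Heat delivered per unit of coal heat: {overall:.0%}")  # ~35%
```

So on those assumptions only about a third of the coal’s thermal energy comes back out as heat at the stove, which is exactly the wastefulness the original post is pointing at.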


Bad surprise waking up this morning: my server was down. The dmesg shows a lot of ‘scsi_xfer pool exhausted!’ messages. I don’t know what that means or even what happened #openbsd


It seems silly to me that we humans create thermal energy with coal, convert the thermal energy to mechanical energy with steam turbines, convert the mechanical energy to electrical energy with generators, and convert the electrical energy back into thermal energy with glass-top stoves and electric heaters.

In-reply-to » @bender This is basically the problem. Even if you wanted to, there generally isn't any state for feeds stored on behalf of the user; in other words, a read status.

@lyse@lyse.isobeef.org tracking read/unread status is something Yarn could benefit from. It has been thought about before, but never got anywhere. Yarn just doesn’t keep track of those; it will be something that @prologic@twtxt.net will need to implement. Maybe if I keep poking him he will! 😂
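
Purely as a hypothetical sketch (none of these names exist in Yarn, and this reflects nothing about its actual storage), per-user read tracking could be as simple as a set of seen twt hashes kept server-side:

```python
# Hypothetical sketch of per-user read state; not Yarn's actual code.
from dataclasses import dataclass, field

@dataclass
class ReadState:
    seen: set[str] = field(default_factory=set)  # twt hashes already read

    def mark_read(self, twt_hash: str) -> None:
        self.seen.add(twt_hash)

    def unread(self, timeline: list[str]) -> list[str]:
        # Everything in the timeline not yet marked as read.
        return [h for h in timeline if h not in self.seen]

state = ReadState()
state.mark_read("abcdefg")
print(state.unread(["abcdefg", "hijklmn"]))  # ['hijklmn']
```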

In-reply-to » They promised rain. I ain’t seeing any rain so far. 🫤

@lyse@lyse.isobeef.org we had a huge thunder/lightning storm last night here too. The kids got really scared (it struck something very close by), and the dog panicked (he opened all the doors and would only sleep in the kitchen). It woke us up around 2 at night, but luckily the kids fell asleep again.

In-reply-to » New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat ZipNada writes: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can genera ... ⌘ Read more

@prologic@twtxt.net The headline is interesting and sent me down a rabbit hole understanding what the paper (https://aclanthology.org/2024.acl-long.279/) actually says.

The result is interesting, but the Neuroscience News headline greatly overstates it. If I’ve understood right, they are arguing (with strong evidence) that the simple technique of making neural nets bigger and bigger isn’t quite as magically effective as people say — if you use it on its own. In particular, they evaluate LLMs without two common enhancements, in-context learning and instruction tuning. Both of those involve using a small number of examples of the particular task to improve the model’s performance, and they turn them off because they are not part of what is called “emergence”: “an ability to solve a task which is absent in smaller models, but present in LLMs”.
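
To make that distinction concrete, here is a toy sketch of the difference (the task and example prompts are invented, not from the paper): zero-shot means the model gets only the bare task, while in-context learning prepends a few demonstrations to the same prompt.

```python
# Toy illustration of zero-shot vs. few-shot (in-context) prompting.
# The task and demonstrations below are invented for illustration only.
task = "Translate to French: 'good morning' ->"

# Zero-shot: the bare task, no demonstrations in the prompt.
zero_shot_prompt = task

# In-context learning: the same task, preceded by a few worked examples.
few_shot_prompt = "\n".join([
    "Translate to French: 'thank you' -> 'merci'",
    "Translate to French: 'good night' -> 'bonne nuit'",
    task,
])

print(zero_shot_prompt, few_shot_prompt, sep="\n---\n")
```

As I read it, the paper evaluates models in the first setting only, since the second smuggles examples of the task into the prompt.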

They show that these restricted LLMs only outperform smaller models (i.e. demonstrate emergence) on certain tasks, and then (end of Section 4.1) discuss the nature of those few tasks that showed emergence.

I’d love to hear more from someone more familiar with this stuff. (I’ve done research that touches on ML, but neural nets and especially LLMs aren’t my area at all.) In particular, how compelling is this finding that zero-shot learning (i.e. without in-context learning or instruction tuning) remains hard as model size grows?
