🕜 It’s half past one.
Whoops, my backup script calling #rsync wasn’t actually syncing all new files, for reasons (-a flag, I hate you)
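For context, a minimal sketch of the kind of invocation involved; the original script isn’t shown, so the paths and extra flags here are assumptions, not the actual fix:

```sh
# Hypothetical backup invocation (paths made up, not the original script).
# -a is shorthand for -rlptgoD (recursive, symlinks, perms, mtimes, group,
# owner, devices/specials) -- it says nothing about verbosity or what got
# skipped, so silently missed files are easy to overlook.
# A -v / --itemize-changes / --dry-run pass makes the sync visible.
rsync -av --itemize-changes --dry-run /home/user/ backup-host:/srv/backups/user/
```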
🕐 It’s one o’clock.
🕧 It’s half past twelve.
Seriously, why would you use nnn, vifm, ranger or foo when you have rover? Tabs, copy/move/delete, easy-to-configure file open, … an #openbsd port is required ^^ https://github.com/lecram/rover
Well, no, aesgcm URLs still don’t work #profanity
This feed has been discontinued — please unsubscribe. The following posts are strings of purely random words.
I was wondering why /url open no longer worked in #profanity. It turns out the /executable urlopen command gave me the solution.
@Rob@jsreed5.org Hmm, Coal -> Heat -> Steam -> Generator -> Electricity -> Resistance -> Heat
You do have an interesting point there 🤔 Seems rather wasteful just to produce some heat 🔥
I should have learned to use #libreoffice styles much earlier, it saves me a huge amount of time!
Bad surprise when waking up this morning: my server was down. dmesg shows a lot of ‘scsi_xfer pool exhausted!’. I don’t know what that means or even what happened #openbsd
It seems silly to me that we humans create thermal energy with coal, convert the thermal energy to mechanical energy with steam turbines, convert the mechanical energy to electrical energy with generators, and convert the electrical energy back into thermal energy with glass-top stoves and electric heaters.
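To put rough numbers on the “wasteful” intuition: the overall efficiency is the product of the stage efficiencies, and with illustrative values (assumed here, not taken from the thread) roughly two thirds of the coal’s heat never comes back out of the stove:

```latex
% Illustrative stage efficiencies (assumptions, not measured values):
% boiler ~0.85, steam turbine + generator ~0.45, transmission ~0.95, resistive heater ~1.0
\eta_{\text{overall}} = \eta_{\text{boiler}} \cdot \eta_{\text{turbine+gen}}
                        \cdot \eta_{\text{grid}} \cdot \eta_{\text{heater}}
                      \approx 0.85 \times 0.45 \times 0.95 \times 1.0 \approx 0.36
```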
@golang_news@feeds.twtxt.net Cool! 🥳
Go 1.23 is Released
1 point posted by John Doak ⌘ Read more
@lyse@lyse.isobeef.org Yeah, this is why I haven’t done it yet: I don’t know how to build it 🤣
Is shellcheck being used here? It would have picked this (contrived) example up?
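For illustration, a hypothetical example of the sort of contrived mistake shellcheck catches; this is not the example from the thread, just an unquoted-variable bug invented to make the point:

```sh
#!/bin/sh
# Hypothetical contrived example (not the one referenced above).
dir="$HOME/My Documents"
ls $dir      # shellcheck flags SC2086: "Double quote to prevent globbing and word splitting"
ls "$dir"    # the quoted version passes the check
```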
@bender@twtxt.net Shellcheck is great but I hope you don’t care about a low package count for screenshots like some people.
How I love these illustrations, they’re magnificent: https://www.peppercarrot.com
@lyse@lyse.isobeef.org tracking read/unread is something that Yarn could benefit from. It has been thought of before, it just never got anywhere. Yarn just doesn’t keep track of those; it will be something that @prologic@twtxt.net will need to implement. Maybe if I keep poking him he will! 😂
@lyse@lyse.isobeef.org we had a huge thunder/lightning storm last night here too. The kids got really scared (it struck something very close by), and the dog panicked (he opened all the doors and would only sleep in the kitchen). We woke up around 2 at night from it. But luckily the kids fell asleep again.
@prologic@twtxt.net The headline is interesting and sent me down a rabbit hole understanding what the paper (https://aclanthology.org/2024.acl-long.279/) actually says.
The result is interesting, but the Neuroscience News headline greatly overstates it. If I’ve understood right, they are arguing (with strong evidence) that the simple technique of making neural nets bigger and bigger isn’t quite as magically effective as people say — if you use it on its own. In particular, they evaluate LLMs without two common enhancements, in-context learning and instruction tuning. Both of those involve using a small number of examples of the particular task to improve the model’s performance, and they turn them off because they are not part of what is called “emergence”: “an ability to solve a task which is absent in smaller models, but present in LLMs”.
They show that these restricted LLMs only outperform smaller models (i.e., demonstrate emergence) on certain tasks, and then (end of Section 4.1) discuss the nature of those few tasks that showed emergence.
I’d love to hear more from someone more familiar with this stuff. (I’ve done research that touches on ML, but neural nets and especially LLMs aren’t my area at all.) In particular, how compelling is this finding that zero-shot learning (i.e., without in-context learning or instruction tuning) remains hard as model size grows?