@falsifian@www.falsifian.org It’s also astonishing how much power these things use and how incredibly inefficient they are 🤣

But seriously though, we have come a long way in machine learning science and tech, and we’ve managed to build ever more powerful and power-hungry massively parallel matrix computation hardware 😅


LLMs though, whilst good at understanding the “model” (or shape) of things (not just natural language), are generally still stochastic parrots.
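To make the “stochastic” part concrete, here’s a toy sketch (not any real LLM’s code; the logits and vocabulary are made up for illustration) of how generation works at each step: the model scores every token, and the next token is *sampled* from that distribution rather than reasoned about.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Draw one token index from a softmax over raw scores.

    This is the core of "stochastic" generation: the output is a
    random draw from a learned distribution over tokens.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # shift for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical scores over a tiny 3-token vocabulary.
logits = [2.0, 1.0, 0.1]
token = sample_next_token(logits)  # usually 0, but sometimes 1 or 2
```

Lowering `temperature` sharpens the distribution toward the highest-scoring token (more “parroting” of the most likely continuation); raising it flattens the distribution and makes output more random.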


@prologic@twtxt.net I don’t know what you mean when you call them stochastic parrots, or how you define understanding. It’s certainly true that current language models show an obvious lack of understanding in many situations, but I find the trend impressive. I would love to see someone achieve similar results with much less power or training data.

