@movq@www.uninformativ.de The success of large neural nets. People love to criticize today’s LLMs and image models, but if you compare them to what we had before, the progress is astonishing.
@falsifian@www.falsifian.org It’s also astonishing how much power these things use and how incredibly inefficient they are 🤣
But seriously though, we have come a long way in machine learning science and tech, and we’ve managed to build ever more powerful and power-hungry massively parallel matrix computation hardware 😅
LLMs though, whilst good at understanding the “model” (or shape) of things (not just natural language), are generally still stochastic parrots.
@prologic@twtxt.net I thought “stochastic parrot” meant a complete lack of understanding.
@falsifian@www.falsifian.org I don’t believe so. But then again we’d have to define what cognitive understanding really is 😅 LLM(s) have none.
@prologic@twtxt.net I don’t know what you mean when you call them stochastic parrots, or how you define understanding. It’s certainly true that current language models show an obvious lack of understanding in many situations, but I find the trend impressive. I would love to see someone achieve similar results with much less power or training data.
@falsifian@www.falsifian.org Can’t argue with some of the feats we’ve achieved, for sure 😅 I think some of the good stuff is in smarter auto-completion: summarization and pattern reproduction.
But “intelligent” it ain’t 🤣