@movq@www.uninformativ.de The success of large neural nets. People love to criticize today’s LLMs and image models, but if you compare them to what we had before, the progress is astonishing.
@falsifian@www.falsifian.org It’s also astonishing how much power these things use and how incredibly inefficient they are 🤣
But seriously though, we have come a long way in machine learning science and tech, and we’ve managed to build ever more powerful and power-hungry massively parallel matrix computation hardware 😅
LLMs though, whilst good at understanding the “model” (or shape) of things (not just natural language), are generally still stochastic parrots.
@falsifian@www.falsifian.org I don’t believe so. But then again, we’d have to define what cognitive understanding really is 😅 LLMs have none.
@falsifian@www.falsifian.org Can’t argue with some of the feats we’ve achieved, for sure 😅 I think some of the good stuff is in smarter auto-completion: summarization and pattern reproduction.
But “intelligent” it ain’t 🤣