Generative AI Doesn’t Have a Coherent Understanding of the World, MIT Researchers Find
Long-time Slashdot reader Geoffrey.landis writes: Despite its impressive output, a recent study from MIT suggests generative AI doesn’t have a coherent understanding of the world. While the best-performing large language models have surprising capabilities that make it seem like the models are implicitly learn … ⌘ Read more
@eldersnake@we.loveprivacy.club With enough data and enough computing power you can simulate almost anything, or create grand illusions so convincing they're hard to tell apart from the real thing 😅 – But yes, at the end of the day today's LLMs are just large probabilistic models, stochastic parrots.
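The "stochastic parrot" point can be illustrated with a toy next-word predictor: a bigram model that generates text purely from counts of which word followed which in its training data, with no understanding at all. (The corpus and function names here are made up for illustration; real LLMs condition on far longer contexts with learned weights, but the sampling idea is the same.)

```python
import random
from collections import defaultdict

# Tiny "stochastic parrot": predict the next word purely from
# counts of what followed it in the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)  # store every observed follower

def generate(start, n, seed=0):
    """Sample up to n words, each drawn from the followers of the last."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:  # dead end: word never seen mid-corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the", 5))
```

The output is always locally plausible (every pair of adjacent words occurred in training) while the model has no notion of cats, mats, or the world, which is the gist of the critique.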
They are pretty good at auto-complete though. If you wire up Continue.dev in VSCode with a local Ollama-powered Codestral model, it's pretty decent. Or you can use the open-source-friendly Codeium.
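For anyone wanting to try the Continue.dev + Ollama setup, a minimal config sketch might look like this (assuming you've pulled the model with `ollama pull codestral`; the exact config schema may differ between Continue versions, so treat this as a starting point, not gospel):

```json
{
  "tabAutocompleteModel": {
    "title": "Codestral (local)",
    "provider": "ollama",
    "model": "codestral"
  }
}
```

This goes in Continue's `config.json`, after which tab-completion requests are served by the local Ollama instance instead of a hosted API.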