After seeing some ChatGPT interactions I believe all doomsday AI scenarios are stupid and I also believe it’s impossible for an intelligent creature to create a creature more intelligent than itself.

This is a case of GIGO, right? Garbage In, Garbage Out? I mean, the hype around these stupid LLMs (Large Language Models) is just that: a trained model. It will spit out stuff based on patterns it already has defined. Right @abucci@anthony.buc.ci ? 🤔 (who is more knowledgeable about this than I am) – I have yet to see anyone come even remotely close to the kind of intelligence we see in sci-fi films, this so-called AGI.

@abucci@anthony.buc.ci You are right, of course. I don’t think we can consider anything thus far to be remotely close to “intelligence”. It actually frustrates me that we call these fields “AI”; we should call them what they are, “machine learning”. They’re just fancy algorithms, many of which are pretty good at “pattern matching”.
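To make the “fancy pattern matching” point concrete, here is a toy Markov-chain text generator, the crudest ancestor of this idea: it records which words follow which contexts in its training text, and can only ever recombine patterns it has already seen (the function names and the tiny corpus are mine, purely for illustration):

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each word context of length `order` to the words seen after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Walk the model: start from a known context, emit a seen follower, repeat."""
    rng = random.Random(seed)
    context = rng.choice(list(model.keys()))
    out = list(context)
    for _ in range(length):
        followers = model.get(tuple(out[-len(context):]))
        if not followers:
            break  # context never seen in training: the model is stuck
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the rat on the mat"
model = build_model(corpus)
print(generate(model))
```

Everything it emits is a rearrangement of its input: garbage in, garbage out.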

As for how we define “intelligence”, fucked if I know 😅 I doubt anyone else can define it either. I tend to believe that until we figure out how to create “something” that has a sense of self-awareness and self-growth, and a way to expand and “reprogram” itself, we’ll never get very far. Really, “evolutionary life” simulations or “artificial life” simulations are much closer, I think.
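For the “evolutionary life” angle, here is a minimal sketch of the idea (a toy genetic algorithm on the classic “OneMax” problem; all names and parameters are made up for illustration): a population of bitstrings is repeatedly selected and mutated, and fitness climbs without anyone hand-coding the answer:

```python
import random

def fitness(genome):
    # Toy objective: count the 1-bits (the "OneMax" problem).
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # selection: keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1   # mutation: flip one bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Nothing here “understands” anything either, but the improvement loop is open-ended in a way a frozen trained model is not.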

Say what you want, I speak for myself. People much, much, much smarter than me are working on this. Not one, but many. AI will have its uses, which will only increase, and it will get better. It is barely in its infancy.

About perceived impossibilities, we are very good at achieving things that previously seemed impossible.

@abucci@anthony.buc.ci Oh I don’t accept the marketing hype at all. The thing I always fall back on is the insane amount of power it takes to run these fucking stupid-ass models, which are nothing more than “algorithms” (okay, admittedly a bit fancier than the ones from a few decades ago, but mostly based on the same mechanics) that take data in and spit data out. The shocking part for me is comparing the insane power and energy requirements of even the largest “AI” models in the world with the energy/power requirements of running (for example) the brain of a rat.

Basically what I’m trying to say is this… If it takes multiple gigawatts of power to run even the “smartest” and “most useful” AI models today, we’re fucked.
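A rough back-of-envelope on that comparison (illustrative round numbers: the human brain is commonly estimated to draw about 20 W at rest, and a rat’s brain far less; the gigawatt figure is the one claimed above):

```python
# Back-of-envelope power comparison using illustrative round numbers.
brain_watts = 20      # commonly cited resting estimate for a human brain
model_watts = 1e9     # 1 GW, the order of magnitude claimed above
ratio = model_watts / brain_watts
print(f"A 1 GW model draws {ratio:,.0f}x the power of a human brain")
```

Even granting the model a whole human brain as the yardstick, the gap is seven orders of magnitude.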

My view is, if we ever get to the point where a true “AI” can be created, something that can learn entirely new concepts by itself and exponentially expand its own knowledge base without being told to do so (basically what I would consider sentient at that point), humans won’t know about it until it’s far too late to stop it. I think that’s where the general hysteria comes from, but for now I’ll use these LLMs to spit out lists of cyber security controls to make my work that little bit easier.

⤋ Read More
