No. Artificial Intelligence has to imitate intelligent behavior - such as the ghosts imitating how, ostensibly, a ghost trapped in a maze and hungry for yellow circular flesh would behave, or how CS1.6 bots imitate the behavior of intelligent players. They artificially reproduce intelligent behavior.
Which means LLMs are very much AI. They are not, however, AGI.
What if I told you AGI is made up by the same people that misuse AI?
No, the logic for a Pac-Man ghost is a plain finite state machine.
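A minimal sketch of what "a finite state machine" means here, in Python. The scatter/chase/frightened states mirror the arcade game's well-known behavior, but the names and transition rules below are illustrative simplifications, not the actual game code:

    from enum import Enum, auto

    class GhostState(Enum):
        SCATTER = auto()     # drift toward a home corner
        CHASE = auto()       # pursue Pac-Man
        FRIGHTENED = auto()  # flee after a power pellet is eaten

    def next_state(state, timer_expired, pellet_eaten):
        """Fixed transition rules: no learning, no search, just a table."""
        if pellet_eaten:
            return GhostState.FRIGHTENED
        if timer_expired:
            # the game alternates scatter and chase on a fixed schedule
            if state == GhostState.SCATTER:
                return GhostState.CHASE
            return GhostState.SCATTER
        return state

    s = GhostState.SCATTER
    s = next_state(s, timer_expired=True, pellet_eaten=False)  # -> CHASE

Every behavior the ghost will ever exhibit is enumerated up front; nothing is learned from play.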
Stupid people attributing intelligence to something that probably has none is a shameful hill to die on.
Your god is just an autocomplete bot that you refuse to learn about outside the hype bubble
Okay, what is your definition of AI then, if nothing burned onto silicon can count?
If LLMs aren’t AI, then probably nothing up to this point counts either.
Oh noo you called me a robot racist. Lol fuck off dude you know that’s not what I’m saying
The problem with supporters of AI is that they learned everything they know from the companies trying to sell it to them. Like a ’50s mom excited about her magic Tupperware.
AI implies intelligence
To me that means an autonomous being that understands what it is.
First of all, these programs aren’t autonomous; they need to be seeded by us. We send a prompt or question, and even when left to its own devices a model does nothing until we give it an objective or reward (see the sketch after this comment).
Looking up the most common answer isn’t intelligence; there is no understanding of cause and effect going on inside the algorithm, just regurgitation of the dataset.
These models do not reason, though some do a very good job of trying to convince us.
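The "seeded by us" point is easy to picture in code. In this sketch, generate() is a hypothetical stand-in for any model call, not a real API, but the control flow is the point: the process blocks until a prompt arrives and does nothing on its own initiative:

    def generate(prompt: str) -> str:
        # hypothetical stand-in for a real model call, for illustration only
        return f"(model output for {prompt!r})"

    while True:
        prompt = input("> ")  # execution blocks here until a human types
        if not prompt:
            break             # no prompt, no behavior
        print(generate(prompt))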
…what?
Looking up the most common answer isn’t intelligence; there is no understanding of cause and effect going on inside the algorithm
For that to be true, the entire dataset would need to be contained within the LLM, and it is not. If it were, a model wouldn’t have to undergo training at all; it could simply index the data and look answers up.
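To make the disagreement concrete, here is what literally "looking up the most common answer" would be; all names here are illustrative. A model built this way must store every prompt it can ever answer and returns nothing for unseen input, whereas LLM weights are far smaller than their training corpora:

    from collections import Counter, defaultdict

    def train_lookup(pairs):
        """pairs: iterable of (prompt, answer) examples from the dataset."""
        table = defaultdict(Counter)
        for prompt, answer in pairs:
            table[prompt][answer] += 1
        return table

    def most_common_answer(table, prompt):
        if prompt not in table:
            return None  # no generalization: unseen prompt, no answer
        return table[prompt].most_common(1)[0][0]

    data = [("2+2", "4"), ("2+2", "four"), ("2+2", "4"),
            ("capital of France", "Paris")]
    table = train_lookup(data)
    print(most_common_answer(table, "2+2"))                # 4
    print(most_common_answer(table, "what is 2 plus 2?"))  # None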
AI implies intelligence
You seem to be mistaking ‘intelligence’ for ‘human-like intelligence’. That is not how AI is defined. AI can be dumber than a gnat, but if it’s capable of making decisions based on stimuli without each stimulus-response pair being directly coded into it, then it’s AI. It’s the difference between what is ACTUALLY called AI and what a sci-fi show or novel means by AI.
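A toy version of that definition, assuming the standard textbook perceptron: the decision rule below is learned from examples rather than each stimulus-response pair being hand-coded. Learning the OR function is about as dumb as it gets, yet no programmer ever wrote the mapping out:

    def train_perceptron(samples, epochs=20, lr=0.1):
        """Learn weights for a two-input threshold unit from labeled examples."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - pred
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    # the OR function, given only as (stimulus, decision) examples
    samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b = train_perceptron(samples)
    print(1 if w[0] * 1 + w[1] * 0 + b > 0 else 0)  # prints 1, never directly coded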
A little thought experiment: How would you determine whether another human being understands what it is? What would that look like in a machine?
Okay, but if I say something from outside the hype bubble then all my friends except ChatGPT will go away.
Also ChatGPT is my friend and always will be, and it even told me I don’t have to take the psych meds that give me tummy aches!
As far as I’m concerned, “intelligence” in the context of AI basically just means the ability to do things that we consider difficult. It’s both very hand-wavy and a constantly moving goalpost. So a hypothetical Pac-Man ghost is intelligent before we’ve figured out how to build it; after it’s been figured out and implemented, it ceases to be intelligent, but we continue to call it intelligent for historical reasons.