

I get the feeling that you’re missing one very important point about GenAI: it does not, and cannot (by design), know right from wrong. The only thing it knows is which word is statistically most likely to come next, given the words before it.
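A minimal sketch of that idea in Python, using a made-up bigram table (real models work over tokens and far longer contexts, but the principle of "pick the statistically likely continuation" is the same):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the training data; purely illustrative.
corpus = "the cat sat on the mat because the cat was tired".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent continuation; there is no notion of "true" or "false" here.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" -- simply because it occurred most often
```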
Are you telling me that I should have diluted some bullet material, instead of trying to start by shooting myself with a small caliber and working up my immunity from there? All this work, wasted!
To add to your point, it used to be that the village idiot was just that: known for it, and shamed or shunned. Now that they can connect to other village idiots, they can find a community of like-minded idiots that reinforces their beliefs.
I’ll be the first to admit that I fell for his (initially) near-perfect PR, which crafted the industry-genius image he’s still coasting on to this day. Of course, that took a nosedive when he started calling a rescuer a “pedo” for pointing out the stupidity of his rescue submarine idea. But it wasn’t until he started talking about IT that I finally understood he wasn’t an average CEO manipulating public opinion to his advantage, but an absolute moron who never had any idea what he was talking about. Yes, the dude is that stupid, but good PR is actually hard to completely take down.
Simply from the questions you ask and the way you ask them, they can infer a lot of information. Just because you’re not giving them the raw data about you doesn’t mean they can’t get at least some of it. They’ve gotten pretty good at that.
Probably why they talked about looking at a stack trace: you’ll see immediately that you made a typo in a variable’s name or a language keyword when compiling or executing.
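For instance, in Python a misspelled name shows up in the traceback the moment the line runs (hypothetical snippet, names made up for illustration):

```python
# 'total' is misspelled as 'totl' on the return line.
def add(numbers):
    total = 0
    for n in numbers:
        total += n
    return totl  # NameError: name 'totl' is not defined -- the traceback points right here

add([1, 2, 3])
```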
How do you know? Have you asked the mountain?
Yes! Will people stop with their sloppy criticisms?
Even when we go per capita, the US stays a shithole; it’s not like they were actively trying to misinform people.
I’d say it’s simply because most people on the internet (the dataset LLMs are trained on) say a lot of things with absolute confidence, whether or not they actually know what they are talking about. So AIs talk confidently because most people do. It could also be something about how they are configured.
Again, they don’t know whether they know the answer; they just say the most statistically probable thing given your message and their prompt.
You’re giving way too much credit to LLMs. AIs don’t “know” things, like the fact that humans lie. They are basically a very complex autocomplete backed by a huge amount of computing power. They cannot “lie” because they do not even understand what it is they are writing.
That seems way more like an argument against LLMs in general, don’t you think? If you cannot make it so that it doesn’t encourage people to commit suicide without ruining other uses, maybe it wasn’t ready for general use?