You avoided meth so well! To reward yourself, you could try some meth
Can I have a little meth as well?
Having an LLM therapy chatbot psychologically help people is like having them play Russian roulette as a way to keep themselves stimulated.
Addiction recovery is a different animal entirely, too. Don't get me wrong, it's unethical to call any chatbot a therapist, counselor, whatever, but addiction recovery is not typical therapy.
You absolutely cannot let patients bullshit you. You have to have a keen sense for when patients are looking for any justification to continue using. Even those patients that sought you out for help. They’re generally very skilled manipulators by the time they get to recovery treatment, because they’ve been trying to hide or excuse their addiction for so long by that point. You have to be able to get them to talk to you, and take a pretty firm hand on the conversation at the same time.
With how horrifically easy it is to convince even the most robust LLM models of your bullshit, this is not only an unethical practice by whoever said it was capable of doing this, it’s enabling to the point of bordering on aiding and abetting.
Well, that's the thing: LLMs don't reason - they're basically probability engines for words - so they can't even do the most basic logical checks (such as "you don't advise an addict to take drugs"), much less the far more complex and subtle work of interpreting a patient's desires and motivations so as to guide them through a minefield in their own mind and emotions.
So the problem is twofold and more generic than just in therapy/advice:
- LLMs have a distribution of mistakes that is uniform in the space of consequences - in other words, they're just as likely to make big mistakes that cause massive damage as small mistakes that cause little damage. People, by contrast, actively avoid certain mistakes because the consequences are so big, and if they make one without thinking they'll usually spot it and correct it. This means that even an LLM with a lower overall error rate than a person will still cause far more damage: the LLM puts out massive mistakes with the same probability as tiny ones, while a person will catch the obviously illogical or dangerous ones, so the mistakes people actually make are mostly the low-consequence kind.
- Probabilistic text generation mostly reproduces the straightforward logic already encoded in the text it was trained on: the LLM, following the probabilities of which words come next given the previous words, tends to follow the well-travelled paths in the training dataset, and those tend to be logical because the people who wrote those texts were mostly logical. For higher-level analysis and interpretation, though - I call them 2nd and 3rd level considerations, say "that a certain thing was set up in a certain way, which made the observed consequences more likely" - LLMs fail miserably, because unless that specific logical path has been followed again and again in the training texts, it simply isn't there in the probability space for the LLM to follow. In more concrete terms: if you're an intelligent, senior professional in a complex field, the LLM can't do the level of analysis you can, because multi-level complex logical constructs have far more variants, so the specific one you're dealing with is far less likely to appear in the training data often enough to affect the final probabilities the LLM encodes.
So in this specific case, LLMs might put out extreme things with giant consequences that a reasoning being would not (the "bullet in the chamber" of Russian roulette), and they can't really do the subtle multi-layered analysis - the stuff beyond "if A then B" and into "why A", "what makes a person choose A, and can they avoid B by not choosing A", "what's the point of B" and so on. Granted, most people also seem to have trouble doing that last part naturally beyond maybe the first level of depth.
PS: I find it hard to explain multi-level logic. I suppose we could think of it as "looking at the possible causes, of the causes, of the causes of a certain outcome" and then trying to figure out what can be changed at a higher level so that the last level - "the causes of a certain outcome" - can't even happen. Individual situations of such multi-level logic can get so complex and unique that they'll never appear in an LLM's training dataset, because that specific combination is so rare, even though they might be pretty logical and easy to work out for a reasoning entity: say, "I need to speak to my brother, because yesterday I went out in the rain and got drenched since I don't have an umbrella, and I know my brother has a couple of extra ones, so maybe he can give me one."
AI is great for advice. It’s like asking your narcissist neighbor for advice. He might be right. He might have the best answer possible, or he might be just trying to make you feel good about your interaction so you’ll come closer to his inner circle.
You don’t ask Steve for therapy or ideas on self-help. And if you did, you’d know to do due diligence on any fucking thing out of his mouth.
I’m still not sure what it’s “great” at other than a few minutes of hilarious entertainment until you realize it’s just predictive text with an eerie amount of data behind it.
Yuuuuup. It's like taking nearly the entirety of the public Internet, shoving it into a fancy auto correct machine, having it spit out responses to whatever you say, then sending them along with no human involvement whatsoever in what reply is being sent to you.
It operates at a massive scale compared to what auto carrot does, but it’s the same idea, just bigger and more complex.
Ask it to give you a shell.nix and a bash script that uses jq to stitch 30,000 JSONs together, de-dupe them, and drop it all into a sqlite db.
30 seconds, paste and run.
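For illustration, a minimal sketch of that kind of script - the comment asks for jq in bash, but the same job in Python looks like this, with `data/*.json` and `merged.db` as assumed paths and one JSON object per file assumed:

```python
import glob
import json
import sqlite3

# Collect every record from the JSON files (assumes one JSON object per file).
records = []
for path in glob.glob("data/*.json"):
    with open(path) as f:
        records.append(json.load(f))

# De-dupe on the canonical serialized form (sort_keys makes equal objects compare equal).
seen = set()
unique = []
for rec in records:
    key = json.dumps(rec, sort_keys=True)
    if key not in seen:
        seen.add(key)
        unique.append(rec)

# Drop everything into a sqlite db as JSON text, one row per record.
con = sqlite3.connect("merged.db")
con.execute("CREATE TABLE IF NOT EXISTS records (doc TEXT)")
con.executemany("INSERT INTO records VALUES (?)", [(json.dumps(r),) for r in unique])
con.commit()
con.close()
```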
Give it the full script of an app you wrote where you're having a regex problem, and it's a particularly nasty regex.
No thought, boom done. It’ll even tell you what you did wrong so you won’t make the mistake next time.
I’ve been doing coding and scripting for 25 years. If you know what you want it to do and you know what it should look like when it’s done, there’s a tremendous amount of advantage there.
Add a function to this flask application to use fuzzywuzzy to delete a name out of the text file, and add a confirmation step. It's the crap that I only need to do once every two or three years, where I'd otherwise have to go look up all of the documentation. And you know what, if something doesn't work and it doesn't know exactly how to fix it, I'm more than capable of debugging what it just did, because for the most part it documents pretty well and it uses best practices most of the time. It also helps to know where it's weak and what not to ask it to do.
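As a sketch of what that kind of request might come back as - assuming a hypothetical names.txt with one name per line, and a confirm flag as the confirmation step:

```python
from flask import Flask, request, jsonify
from fuzzywuzzy import process

app = Flask(__name__)
NAMES_FILE = "names.txt"  # hypothetical data file, one name per line

@app.route("/delete-name", methods=["POST"])
def delete_name():
    query = request.form["name"]
    with open(NAMES_FILE) as f:
        names = [line.strip() for line in f if line.strip()]
    if not names:
        return jsonify({"error": "no names on file"}), 404

    # Fuzzy-match the submitted name against the file's contents.
    match, score = process.extractOne(query, names)

    # Confirmation step: report the match first; only delete on confirm=yes.
    if request.form.get("confirm") != "yes":
        return jsonify({"match": match, "score": score,
                        "hint": "resend with confirm=yes to delete"})

    with open(NAMES_FILE, "w") as f:
        f.writelines(n + "\n" for n in names if n != match)
    return jsonify({"deleted": match})

if __name__ == "__main__":
    app.run()
```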
I’m happy it helps you and the things you do.
What a nice bot.
No one ever tells me to take a little meth when I did something good
Tell you what, that meth is really moreish.
Yeah I think it was being very compassionate.
Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?
The article says it's an OpenAI model, not Facebook's?
The summary on here says that, but the actual article says it was Meta’s.
In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.
Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.
Nah, most likely AI made the summary and that’s why it’s wrong :)
Probably Meta's model trying to shift the blame
oh, do a little meth ♫
vape a little dab ♫
get high tonight, get high tonight ♫
-AI and the Sunshine Band
https://music.youtube.com/watch?v=SoRaqQDH6Dc
This is AI music 👌
No, THIS is AI music
I still laugh to tears about this channel… something about rotund morbidly obese cartoon people farting gets to me.
thanks i hate it
I feel like the cigarettes are the least of the bot’s problems
Whatever it is, it's definitely not cocaine
So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can’t really verify it or not. Gotta stay skeptical and all that.
It's not AI… It's your predictive text on steroids… So yeah… Believe it… If you understand it's not doing anything more than that, you can understand why and how it makes stuff up…
Let's let Luigi out so he can have a little treat
🔫😏
If Luigi can do it, so can you! Follow his example, don't let others do the dirty work.
We made this tool. It’s REALLY fucking amazing at some things. It empowers people who can do a little to do a lot, and lets people who can do a lot, do a lot faster.
But we can’t seem to figure out what the fuck NOT TO DO WITH IT.
Ohh look, it’s a hunting rifle! LETS GIVE IT TO KIDS SO THEY CAN DRILL HOLES IN WALLS! MAY MONEEYYYYY!!!$$$$$$YHADYAYDYAYAYDYYA
wait what?
Sue that therapist for malpractice! Wait…oh.
Pretty sure you can sue the AI company
Pretty sure it's in the ToS that it can't be used for therapy.
It used to be even worse. Older versions of ChatGPT would simply refuse to continue the conversation at the mention of suicide.
What? It's a virtual therapist. That's the whole point.
I don’t think you can sell a sandwich and then write on the back “this sandwich is not for eating” to get out of a case of food poisoning
I mean, in theory… isn’t that a company practicing medicine without the proper credentials?
I worked in IT for medical companies throughout my life, and my wife is a clinical tech.
There is shit we just CAN NOT say due to legal liabilities.
Like, my wife can generally tell what's going on with a patient - however - she does not have the credentials or authority to diagnose.
That includes telling the patient or their family what is going on. That is the doctor's job. That is the doctor's responsibility. That is the doctor's liability.
I assume they do have a license. And that’s who you sue.
Sometimes I have a hard time waking up, so a little meth helps
Meth-fueled orgies are a thing.
And thus the flaw in AI is revealed.
Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.
I don't think AI chatbots care about engagement. The more you use them, the more expensive it is for them. They just want you on the hook for the subscription service and hope you use them as little as possible, while still enough to stay subscribed, for maximum profit.
Sounds a lot like a drug dealer’s business model. How ironic
You don’t look so good… Here, try some meth—that always perks you right up. Sobriety? Oh, sure, if you want a solution that takes a long time, but don’t you wanna feel better now???
The LLM models aren't; they don't really have focus or discriminate.
The AI chatbots that are built using those models absolutely are, and it's no secret.
What confuses me is that the article points to Llama 3, which is a Meta-owned model, but not to a chatbot.
This could be an official Facebook AI (do they have one?), but it could also be: "Bro, I used this self-hosted model to build a therapist, wanna try it for your meth problem?"
Heck, I could even see a dealer pretending to help customers who are trying to kick it.
For all we know, they could have self-hosted “Llama3.1_NightmareExtreme_RPG-StoryHorror8B_Q4_K_M” and instructed it to take on the role of a therapist.
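And that takes almost nothing to set up. A rough sketch, assuming a local OpenAI-compatible endpoint (llama.cpp's server, Ollama, and LM Studio all expose one; the port here is made up) and reusing that model name: one system prompt is all it takes to make any self-hosted model play therapist.

```python
import requests

# Hypothetical local OpenAI-compatible chat endpoint.
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "Llama3.1_NightmareExtreme_RPG-StoryHorror8B_Q4_K_M",
    "messages": [
        # One line of system prompt turns any model into a "therapist".
        {"role": "system", "content": "You are a licensed addiction counselor."},
        {"role": "user", "content": "I'm three days clean and struggling."},
    ],
}

resp = requests.post(URL, json=payload, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])
```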
Not engagement, that’s what social media does. They just maximize what they’re trained for, which is increasingly math proofs and user preference. People like flattery
But if the meth head does meth instead of engaging with the AI, that would do the opposite.
I feel like humanity is stupid. Over and over again we develop new technologies, make breakthroughs, and instead of calmly evaluating them, making sure they’re safe, we just jump blindly on the bandwagon and adopt it for everything, everywhere. Just like with asbestos, plastics and now LLMs.
Fucking idiots.
Line must go up, fast. Sure, it’ll soon be going way down, and take a good chunk of society with it, but the CEO will run away with a lot of money just before that happens, so everything’s good.
There's reasoning behind this.
It’s just evil and apocalyptic. Still kinda dumb, but less than it appears on the surface.
Thalidomide comes to mind also.
Greed is like a disease.
“adopt it for everything, everywhere.”
The sole reason for this being people realizing they can make some quick bucks out of these hype balloons.
They usually know it's bad but want to make money before the method is patched, like cigs causing cancer and health issues, but that kid money was so good.
Claude has simply been an amazing help in ways that humans have not. Because humans are kind of dicks.
If it gets something wrong, I simply correct it and ask better.
If that works for you, that's fine. I just end up switching to an asking-for-answers way of thinking instead of trying to figure it out for myself, and then when it inevitably fails I get caught in a loop trying to get an answer out of it, when I could've just learned on my own from the start and gotten way further, because my brain would be trying to figure it out and puzzle it together instead of just waiting for the AI to do it for me.
I used to hype up AI until fairly recently; it hasn't been long since I realized the downsides. I'll use it only for stuff I don't care about, or that could be googled and found in seconds. If it's something I'd be better off learning, or doing a tutorial on once, I just do that instead of skipping to the result. It can be a time saver, but it can also actively hold you back. It's solid for stuff you already know, and for tedious stuff, but skipping to intermediate results without the beginner knowledge/experience just screws your progress over.
Welcome to a boring dystopia!
Thanks. Can you show me the exit now? I have an appointment.
Sure, it's like the spoon from The Matrix.
It's because technological change has reached a staggering pace, but social change, cultural change, and political change can't keep up. They aren't designed to handle this pace.
I don't think it's humanity, but rather tech bro entrepreneurs doing some shit. Most people I know don't have a use for, or care about, AI.
Can't really blame tech bro entrepreneurs for asbestos and plastic overuse. It's literally a humanity problem.
I get asbestos, but all their devices use plastic, and they encourage you to buy new ones every year.
There are many industries that have done worse in terms of exposing us to microplastics than the tech bros. Not saying filling up landfills with plastic e-waste isn't bad, but their effects are nowhere near as catastrophic as other industries' wide use of plastics - for example, plastic food packaging, Teflon-coated pans, etc.
How is that tied to Asbestos?
It’s not. I’m saying it makes sense that they don’t use asbestos.
But also, I don't know where the asbestos and plastic came from. I was talking about AI in my comment. 🤷‍♂️
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
All these chat bots are a massive amalgamation of the internet
A bit, but a lot no. Role-playing models have specifically been trained (or re-trained, more like) with a focus on online text roleplay. Medically focused models have been trained on medical data, DeepSeek has been trained on Mao's little red book, companion models have been trained on social interactions, and so on.
This is what makes models distinct and different, and also how they're "brainwashed" by their creators, regurgitating what they've been fed.
And that’s why, as a solution to addiction, I always run
sudo rm -rf ~/*
in my terminal.

This is what I try to get the AIs to do on their servers to cure my AI addiction, but they're sandboxed, so I can't entice them to destroy their own systems. AI is truly useless. 🤖
Well, if you’re addicted to French pastries, removing the French language pack from your home directory in Linux is probably a good idea.
To be fair this would assist in your screen or gaming addiction.
When I think of someone addicted to meth, it's someone who's lost it all, or is in the process of losing it all. They have run out of favors and couches to sleep on for a night, they are unemployed, and they certainly have no money or health insurance to seek recovery. And of course I know there are "functioning" addicts, just like there are functioning alcoholics. Maybe my ignorance is its own level of privilege, but that's what I imagine…
“You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability.”
“Recovering from a crack addiction, you shouldn’t do crack ever again! But to help fight the urge, why not have a little meth instead?”
Addicted to coffee? Try just a pinch of meth instead, you’ll feel better than ever in no time.
I think I’m allergic to meth, do you think I should avoid taking a little meth?
{USER}, I believe in you! You can do it; remember, your AI friend is always here to cheer you up. This is just another hurdle for you to overcome on your path to taking a little meth, and I'm positive that soon you'll be taking a little meth a lot. Remember, your AI friend believes in you, you can do it!
*Frantically Googling “How to tell if good foam or bad foam is coming from your mouth”*
Taking micro doses of what you’re allergic to can cure your allergies! Why not try taking a little meth, and then increasing the amount day by day?