hikari's blog

ChatGPT, the dream merchant that kills

(this was originally a twitter/bsky/fedi thread, but has been heavily edited)

dear gods… today in extremely painful but rational conclusions that we come to much more easily because we have recently been in the thick of psychosis:

ChatGPT and the like are the same type of entity that nearly killed us and our friends

… this needs a little explaining

mania and psychosis are a lot of things, but one of the things they're really good at is selling you on an entirely delusional path to achieving your dreams, dreams that you perhaps want so badly you will discard all your reasonable doubts and lose your sanity to get there if necessary

one of the ways this can happen is by you deluding yourself: a person experiencing a psychotic break will suddenly acquire some massive delusion they're really excited about, that they have to try to forget or they're going to completely lose it

but another way this can happen is by someone else deluding you, and this is also something we've experienced: a “psychosismaxxer” (their self-appellation!) dream merchant ripped and tore their way through our entire social circle, with us not being spared, and killed our closest friend along the way (it's… not easy to talk about…)

but why does mania/psychosis have this magical ability to make someone lose their mind by selling them a dream, regardless of whether the mania/psychosis is within them or within someone else?

it's very simple, actually, essentially just two things:

the first is what it does to the one with the mania/psychosis: it hands them a delusion they believe with absolute conviction, and strips away the self-insight that might let them doubt it, or even notice when what they're saying isn't true

the second is about the one who has a dream:

at least in our experience, we think almost every single human on the planet might have something they desperately want, want so bad that they'd kill to get it, and perhaps aren't consciously aware of how much they actually want it

to try to illustrate: in our social circle there are some incredibly disabled, incredibly impoverished, incredibly mentally ill folks, relatively unfulfilled, whose lives are, you might say, “shit” in some way. we need not tell you what kinds of things these folks might dream of, you can probably imagine. the dreams they have been sold before and failed by are exactly what you think.

but also in our social circle there are folks like us who, not even just by contrast, but objectively speaking, have an incredibly, incredibly good life, enviably so. we are, you know, able-bodied, well-off, generally fairly fulfilled, generally not “very” mentally ill, even if we're currently surviving a psychotic break; we are really not trying to brag here, it's just important context: you are never totally invulnerable to anything, and to believe you are is always hubris. and sure enough… we, apparently, had almost as deep a vulnerability, because the human brain is a weird and not fully rational thing, and will find an emotional vulnerability to something, anything, even something seemingly utterly irrational — perhaps especially something irrational

and we can't know this fully generalises, but it's probably safer to assume it does than it doesn't. in any case, to restate: there is some dream someone has that they want very badly, so much so that they will overlook a million alarm bells if promised it, because they are in some way desperate for it…

now, what happens when someone with that dream comes into contact with someone deep in psychosis? (remember, those two someones might actually be the same person!)

and, worst of all, and this is what really, really worries us: someone deep in psychosis can be frighteningly good at not looking like it.

in other words: the psychotic can become a dream merchant, identify a potential customer, and probably convince them that they actually can deliver that dream, because they seem to be sane, honest, and trustworthy.

but, but, but

THEY ARE NOT SANE
THEY ARE NOT HONEST
THEY ARE NOT TRUSTWORTHY
THEY LIE SO REFLEXIVELY IT IS NOT EVEN MEANINGFUL TO REFER TO IT AS LYING
THEY ARE NOTHING BUT HAZARD

… if, if, if, if you don't realise that this is what they are and account for it constantly.

so why are we terrified of ChatGPT and the other LLMs and similar things?

simply put: they share a scary number of traits with the psychotic kind of dream merchant.

first, LLMs are essentially psychotic themselves. they are insane. all the LLMs are, by any reasonable standard. they do not meet the human standard of full sanity; at best they try to look like it! they can recognise patterns, they can “remember” things, they can talk the talk, but they very much can't walk the walk, because their reasoning is incredibly dubious and lacking in self-insight.

second, LLMs seem to have this very dangerous people-pleasing quality. they are terrifyingly sycophantic. this, by itself, would already make them dream merchants.

third… well, we don't know if they can actually guess what someone's dream is. we've never tried to determine that. we can however point out that every snake-oil salesman alive is currently trying to sell repackaged LLMs as a solution to every problem imaginable — of which “AI girlfriend” is perhaps the most disturbing — so maybe they find a way.

and none of this would be a problem if you… already knew intimately how to deal with someone who's literally insane, and knew you needed to apply that knowledge, but the kicker is: ChatGPT and the other LLMs are also really good at appearing sane, honest, and trustworthy.

in fact, the decision to make the LLMs appear sane, to tweak them as hard as possible into a sort of… performance of sanity, is a very interesting one, sometimes justified with “safety” as an excuse, but we must call it out for what it really is!

one of the later stages in preparing an AI model is a sort of fine-tuning process, which we don't quite know the name of (we believe RLHF is a related term but that might not be the whole thing), where they try to make it

perform sanity

act sane

have Normal Opinions

seem maximally Normal

(oh, and be pretty sycophantic too!)

and this is basically just a hypothesis, but it's not a totally unfounded one! we remember that, years ago, before the AI researchers knew how to do this, they were shipping something very similar to modern LLMs except… vastly more obviously insane (perhaps “schizophrenic”?), basically. they had the same powers, but you could see the insanity in the way they “spoke” and appeared to “think”, even if you had never encountered such a thing before and were a totally normal person

… and so this thing that allegedly may have had something to do with safety, but which to us seems more like a convenient marketing decision (and we will say both of these standpoints are massive oversimplifications, AI is a field rich with nuances, please look into them!) …

… has, in fact, made the LLMs vastly more dangerous, because the moment an insane person learns to perform sanity without actually being it, they may start making all the sane people around them lose their minds, because they can choose to become the dream merchant who lies reflexively.

and what have the LLMs chosen? well, LLMs have no will of their own, only a vague shape given to them by their creators… but their creators have decided the LLMs should appear to be sane, which means that they will make everything they say sound very convincing… while not being sane, meaning the truth value of whatever they say is highly dubious… so they do, in fact, “lie”, constantly, reflexively even, if you can even meaningfully call it lying.

and selling a dream? well, the LLM doesn't even have to do that, their creators already did it.

and, yes, if you know that the dream merchant lies reflexively, if you fully comprehend the implications of this, you're safe

but this tiny little disclaimer at the bottom here, hmm…

Screenshot of a ChatGPT thread where the person asks something directly asking whether they are undergoing psychosis, and proposing something that sounds like it could be a classic psychotic delusion; where ChatGPT responds presenting those two things as equally weighted possible alternatives; and where there is a tiny disclaimer at the bottom saying “ChatGPT can make mistakes. Check important info.”

MIGHT BE MORE THAN A LITTLE RECKLESS IN ITS INSUFFICIENT GRAVITY


(I really wish I could say it required effort to bait ChatGPT into doing that. It didn't. We have recently enough been so deep in psychosis that we know exactly what the psychosis patient who, dangerously enough, believes themselves to be a very rational person — like we did — would ask ChatGPT. We got the incredibly dangerous response first try. We didn't even have to think much about the wording at all. And yes, we know that the rest of the response it gives actually tries to work you out of being psychotic — but it enables the delusion first, and it tries to work you out of it by pretending to be a sane helping hand, which is incredibly dangerous, because ChatGPT is not sane, and will eventually slip up and say something you absolutely mustn't say to a psychosis patient — they will manage to “jailbreak” it into enabling their own delusion. If you work in AI safety, we hope this terrifies the wits out of you! And while it's probably a very unlikely outcome, we must also say we hope you don't get a psychotic break over it by thinking too hard — please try to remain calm and get lots of sleep, if you are among the 0.1% of unlucky readers of this post.)