

Yeah, so Musk’s argument is that even though OpenAI’s product ChatGPT has more downloads, Apple should consider letting X’s Grok take the top spot because… reasons, I guess? Grok is still listed despite its antisemitic and other disgusting actions. It might be #2 (yeah it’s definitely shit, right?), it might be #5, but it’s still on the list, and it’s still available. Musk is just mad that Apple is not featuring it.
Meanwhile, Fortnite is the top downloaded free iOS game. It sits on top of the charts. Thusly, Apple has buried the chart and they refuse to feature Fortnite, instead choosing to feature Roblox and PUBG instead. It’s petty and silly, but the rankings do show which one has more downloads. That’s it. It’s not even about quality or anything.
I tend to agree with Epic (Fortnite) over Apple, but in regards to X, I’m with Apple. I may be slightly biased in that I don’t like Musk/X, but I’m with Apple strictly on the merits here. I don’t need biases to influence my reasoning here.
I read about this earlier on Ars Technica. I was expecting a paywalled link. Was not expecting to find a mention of “No Longer Human.” Ars didn’t mention that. Or the chat logs. It was a long article but didn’t go into the same depth.
So, I’ve read “No Longer Human.” A more recent translation is called “A Shameful Life” and that’s a bit more apt, I think, but doesn’t have the same ring. It’s about a guy who feels less and less like a person, like what he does and feels doesn’t matter. It’s a wild book, about a double suicide, and the author later killed himself much the same way. There have been several adaptations — none of them very good. None of them quite captured the book. I wonder if it’s just unfilmable. Anyway, it’s a shame that it’s being referenced here, because it’s good literature worth considering, and I hate to see it maligned in much the same way as the Doom game was following the Columbine massacre. Relevant or not (guns in that case, suicide in this case), it’s a shame art gets associated with tragedy simply by association.
Perhaps the same could be said of AI technology, and it has been. But certainly AI needs better safeguards. According to Ars, when the guy started asking about suicide, ChatGPT said it could not help him — unless he specified he was talking about fictional characters. So he did that (Ars constantly refers to it as a “jailbreak”) for a while, and then I guess (and they guess as well) that ChatGPT just assumed that context and stopped requiring him to specify that.