

I can’t help but feel like this is the most important part of the article:
The model’s refusal to accept information to the contrary, meanwhile, is no doubt rooted in the safety mechanisms OpenAI was so keen to bake in, in order to protect against prompt engineering and injection attacks.
Do any of you believe these “safety mechanisms” are there just for safety? If they can control AI, they will. This is how we got MechaHitler: the same mucking about with weights and such, not just what the model was trained on.
They WILL try, and they already are trying, to control how AI “thinks”. This is why it’s desperately important to do whatever we can to democratize AI. People have already decided that AI has all the answers, and folks like Peter Thiel now have the single most potent propaganda machine in history.
Smart