As you probably saw, Grok briefly endorsed a new Holocaust. I wrote about this for JTA, arguing that Jewish institutions and leaders need to recognize that they are particularly vulnerable to AI because so much of what Americans know about Jews comes from the internet.
It’s really important to recognize that this isn’t just an Elon Musk issue, nor will the bias always be so blatant. There are very strong incentives for bad actors to “groom” AIs into accepting their nonsense as fact, and Russia already appears to be doing exactly this.
We’ve seen AIs fall for falsehoods before. An earlier version of Google’s embedded search AI told people to eat a rock every day based on an article in The Onion. Sometimes the problem can be patched by telling AIs that certain sites are untrustworthy, but at some point they have to trust what they read on the internet if they’re going to be useful; they can’t be incredulous about everything. A state actor with plenty of resources could seed an idea across the web so widely that it becomes hard for an AI to tell fact from fiction.
I don’t think there’s a permanent fix for this problem. The only way to deal with it is to accept that the potential for antisemitic AI will always be present, to demand more audits of advanced systems, and to discourage the use of systems that prioritize user engagement over truth.
Let me know what you think. New post coming soon.