Over the past year I have spoken with many Jewish educators who want to exercise moral leadership on artificial intelligence but hold back, either because they feel overwhelmed by the technology or because they aren't sure what they can contribute.
As I have often said, local leadership on tech issues is vitally important. I know and am heartened that so many rabbis are choosing to speak about AI in their sermons. If that’s you, you might appreciate the advice and resources below.
Don’t reinvent the wheel—here, have some readings.
A couple of months ago I put together a one-page crash course on AI. It explains the technology and suggests some short and long readings. Use and share in good health!
And if you’re looking for a big, general essay about Judaism and AI—I wrote it years ago for an academic press, which means that it still(!) has not been published. If you want a draft copy, email me. I’m easy to find.
Don’t trick your congregants.
A couple of years ago, I started putting a source on my source sheet that was written by GPT-2, which I would reveal only after we had read it. Nobody else was doing this at the time, so it was a neat way of zhuzhing the class.
But then ChatGPT came out and it turned out that this was everyone’s go-to party trick. I started feeling gross about intentionally deceiving my students, so I took it off the source sheet. I still occasionally use AI-generated sources, but I always indicate it beforehand.
It’s important to avoid deceptions, even light ones, because they merely belabor a point everyone already understands: AI content can frequently pass for human. Congregants aren’t looking to their leaders to make that point; they’re looking to their leaders to suggest how it ought to be resolved.
This leads into my next point.
Ask questions—and then answer them.
AI raises a lot of great questions, and if your congregants have been paying attention to the news over the last year, they will have heard many people ponder those questions already. We have a surplus of great questions about AI. It’s answers that are in short supply.
The “easiest” way to do this is to talk about decisions you’re making about AI in your own work. Doing this allows you to avoid telling people what to do while still setting an example. Some things you can consider and discuss:
Would you use AI to write an email to a bereaved congregant?
Would you use it to write a first draft of your sermons?
Can you ask an AI a question of Jewish law—and if you do, how should you think about the response?
Do you think it’s appropriate to lay off the staff who create your newsletters and emails?
Can AI help a person do teshuvah (atone) by helping them play out difficult conversations?
You don’t need to have the perfect answers; you just need to be able to defend your positions. Society’s answers will take time to unfold. But they won’t unfold if people don’t take a stab at them first. Whether they agree or disagree with you, coming to a conclusion will help people develop their own personal responses.
Avoid golems unless you know what you’re doing.
The golem’s use as a metaphor for mechanization, computing, and artificial intelligence has a long history (if you’re at the Association for Jewish Studies conference in San Francisco this December, you can catch me speaking about it!), but the metaphor is thin, and it is mostly projected onto a historical idea that was quite different.
If you do want to talk about golems, Moshe Idel’s book Golem is dense but full of information. You can find it used online. Gershom Scholem also has a useful essay, and Maya Barzilai has a book about modern usage. There’s also the speech Scholem gave at the inauguration of Israel’s first computer, a classic. Finally, I find Ken Liu’s short story Good Hunting (adapted beautifully for Love Death + Robots Season 1) to be useful for imagining the transition from magic to machine.
Don’t assume you know AI’s limits.
AI is developing fast, and we don’t know when it will slow down. It keeps performing well at tasks that many assumed would remain out of reach for a long time. Don’t premise your sermon on the idea that AI will never be able to do X. You might be wrong, and even if you’re not, your congregants may not believe you in the moment.
Instead, what you can do is talk about the things that AI should and should not be allowed to do. Development isn’t inevitable, and we can and must set limits on AI activity. We can’t assume that technical limitations will resolve thorny moral issues, so we need to do it ourselves.
Philosophize, but empathize.
AI has caused a great deal of stress for many people. People suspect it will massively affect their lives, perhaps costing them their jobs. Teachers, already stressed, now need to rethink basic assignments. The majority of Americans want AI development to slow down.
You can and should philosophize about AI (with some answers, as noted above!), but don’t leave out the human portion. Empathize with the stress that people are feeling. What does it feel like to live in a world where human behavior is no longer unique? What does it mean for your everyday interactions with other human beings (or the online beings you suspect are human)? What does it mean for how you think about God?
I hope this is helpful—and if you do talk about AI, send me your sermon! Good luck.
P.S. A number of Jewish leaders have approached me about leading discussions about AI or technology for their communities. I’m currently accepting scholar in residence gigs, so if you want to bring me to your shul/Hillel/school/etc., please be in touch.
P.P.S. Want to hear me talk about Jewish futurism? Tune in tonight.