
AI, GOLEM AND FRANKENSTEIN’S MONSTER
I. AI Interfaces
One of the most persistent motifs in science fiction is the Frankenstein plot: a human creates a machine, which then becomes independent and ultimately turns against its creator. The underlying question is one of responsibility — who is accountable for the damage caused by Frankenstein’s monster? In contemporary terms, who bears responsibility for harm caused by artificial intelligence (AI)?
Currently, AI consists of advanced software systems that perform complex tasks, though they remain fundamentally tools rather than autonomous beings. At some point in the future, artificial general intelligence (AGI) may emerge — entities capable of independent thought and possibly even embodied in robotic form. In a previous essay, I explored how such hypothetical AGI might be compared to the halachic category of a golem. In the meantime, however, we are dealing with limited-function AI, such as ChatGPT.
How should we understand such AI interfaces from a halachic perspective? In some respects, these tools appear more “intelligent” than the traditional golem: they can carry on conversations, generate creative content, and even mimic reasoning (albeit based on pre-existing data). On the other hand, they lack autonomy and have no physical form. Can we compare an AI interface to a golem, which has a body and performs physical tasks? The answer depends on how we conceptualize a golem.
II. AI Interfaces and Golems
Rav J. David Bleich (cont., US) identifies four major halachic views concerning the nature of a golem (Jewish Law and Contemporary Issues, pp. 373–382): 1) a golem is a Jew; 2) a golem is a person but not a Jew; 3) a golem is an animal; and 4) a golem is an inanimate object.
According to the first two views, a golem has the appearance or status of a human. It may even be halachically Jewish if created through acts of sanctity. However, these positions generally presume a golem with a physical body and humanoid behavior. While some people cannot speak and are nonetheless considered human, there is no precedent for a human being who interacts solely through digital means without any corporeal form. Therefore, an AI interface cannot reasonably be compared to a golem under the views that classify it as a person or Jew.
Others liken a golem to an animal: it moves independently and possesses a kind of life force, yet lacks a soul. However, even this comparison falls short. An AI interface does not move, has no physical form, and possesses no biological vitality. It lacks the very characteristics that led these authorities to compare a golem to an animal.
The remaining view is that a golem is an inanimate object. According to this perspective, the golem is akin to a stick or stone, artificially animated but devoid of true life or halachic status. If a golem is an inanimate object, then the same could be said of an AI interface, which consists of code and data operating within a hardware system. It may not even rise to the status of an object in the tangible sense, but for lack of a better halachic model, we might classify AI as an inanimate object with functionality.
III. Liability of an AI Interface
If we categorize an AI interface as an inanimate object, we can explore the resulting halachic implications. Just as one cannot commit murder by smashing a rock, there is no halachic prohibition of homicide in destroying an AI interface. Nevertheless, other prohibitions may apply. We may not destroy inanimate objects without reason, due to the prohibition against needless destruction, bal tashchis. Additionally, an AI interface has an owner. If you damage the AI, you cause a financial loss to the owner.
The more complicated question is liability for damage caused by an AI interface. Since AI systems often function unpredictably or semi-independently, any harm they cause would likely be classified as gerama, indirect causation. An owner is generally exempt from paying for indirect damage caused by his property. This would mean that even if ChatGPT reveals the secret recipe for Coca-Cola, causing millions of dollars in damage, halachically the owner of ChatGPT would not be liable because the damage is indirect.
Rav Asher Weiss (cont., Israel) was asked whether someone who bounces checks is exempt from repaying the loss caused by the bad check (Responsa Minchas Asher, vol. 1, no. 114). After all, the damage is indirect, gerama. Rav Weiss focuses on the ruling that while we do not force payment of gerama, we do force payment of garmi (Shulchan Aruch, Choshen Mishpat 386:1). It is not clear what the difference is between these two categories; Rav Weiss lists eight approaches discussed in the commentaries. Perhaps most important is that of the Ritzba (quoted in Tosafos, Bava Basra 22b s.v. zos), which Rav Moshe Isserles (Rema; 16th cen., Poland) follows (Choshen Mishpat 387:3). The Ritzba says that there is no conceptual difference between gerama and garmi. Rather, garmi is a punishment for any indirect damage that is common and frequent.
Rav Weiss says that in the modern economy, most damage is caused indirectly. If we never forced repayment of indirect damage, halachah would be unable to guide an economy. Rather, we follow the Rema and Ritzba, who rule that when indirect damage is common and frequent, we enforce payment for that damage because it is classified as garmi. He quotes a number of authorities over the generations who invoked this type of concern in requiring payment for indirect damage.
Based on this reasoning, I suggest that halachic liability should apply to damage caused by AI. While the causation may be indirect, it is increasingly common and frequent. As AI becomes integral to business, journalism, healthcare, and other sectors, the halachic system must account for such damage within the framework of garmi. Otherwise, we would create vast areas of economic harm with no accountability. Therefore, owners of AI interfaces should be held halachically liable for damage they cause when that damage is typical and foreseeable.