JUDAISM AND AI DESIGN ETHICS PART 1

Artificial intelligence (AI) has quickly become a part of daily life, influencing the information we consume and the decisions we make. And the process is just starting. This places significant responsibility on the AI builder. Designing an AI system is not merely a technical challenge but also a moral and religious one. What information is included, how it is presented and what assumptions shape its worldview all affect the end user. AI is a broad term; we speak here of any AI system that provides information or recommendations to the public, even though such systems are only a small subset of the AI already integrated into our technology. Judaism has long wrestled with analogous challenges, especially in the realm of publishing, where books and ideas shaped communities and beliefs. The precedents we find in halachic literature offer guidance on the ethical responsibilities of those building AI systems today.
I. Book Publishing
When you boil the issues down to their basics, AI systems in a sense resemble book publishers. They gather, process and distribute information, often with little distinction between fact and opinion, or between traditional and secular perspectives. Of course, there are differences. Publishers determine the actual words used, while AI systems have more independence in expressing ideas. However, the similarities are important. The dangers are obvious: inaccuracies can harm reputations, mislead the public and cause damage to individuals, groups or institutions. Additionally, the dissemination of a secular worldview can significantly undermine religious convictions. Judaism has a lot to say on these subjects. But a fundamental question arises: who is the judge? Many issues cannot be conclusively proven. What counts as heretical, misleading or damaging? Who decides what is acceptable and what must be avoided? These questions, which arose in the age of the printing press, return with new urgency in the age of artificial intelligence.

There are two ways to approach the ethical dangers of information technology: as policymakers and as citizens. Policymakers can regulate markets and restrict harmful products. Citizens, lacking that power, must find other ways to protect themselves and their communities. Halachah addresses publishing issues from both perspectives, which can inform our discussion of AI ethics.

II. Improper Content
AI systems, even the most advanced, can generate errors. However, this is not a new challenge. Authors can include mistakes and misinformation in books, newspapers and magazines.

The Torah demands reliability. The Sages teach, chazakah she-ein chaver motzi mi-yado davar she-eino mesukan, it is assumed that a scholar does not release something that is defective and unreliable (Eruvin 32a). Your product, your words, your teaching must be accurate and responsible. This principle applies no less to an AI builder than to an author or teacher. If you release a system that frequently misinforms, you have failed the Torah standards expected of you. You might also be violating prohibitions against slander (lashon ha-ra) directed at individuals, groups and institutions. AI builders bear an ethical duty to ensure accuracy, reduce harm and constantly refine their systems to prevent the spread of falsehoods.
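The duty to refine a system against spreading falsehoods can be made concrete. What follows is a minimal illustrative sketch, not any real moderation API: the function name, the substring-matching policy and the data structure are all assumptions. It checks a draft response against a builder-maintained list of claims already known to be false or defamatory, and withholds release for human review when one appears.

```python
# Minimal sketch of a pre-release review step for AI output.
# Hypothetical names and policy; a real system would use far more
# sophisticated fact-checking than simple substring matching.

def review_output(text: str, known_false_claims: list[str]) -> dict:
    """Flag a draft response that repeats a known-false claim.

    Returns a dict: 'release' is False when any flagged claim
    appears, and 'flags' lists the matches for human review.
    """
    lowered = text.lower()
    flags = [c for c in known_false_claims if c.lower() in lowered]
    return {"release": not flags, "flags": flags}
```

A builder would run every generated answer through such a gate before it reaches the public, logging the flags so reviewers can correct the underlying model over time.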
But inaccuracies are not the only danger. AI can spread not only errors but also perspectives foreign and contradictory to Torah. By default, most AI systems are trained on vast libraries of secular writing, much of which reflects assumptions inconsistent with Jewish tradition. Some of these relate to unacceptable social behaviors and others relate to fundamental Torah beliefs. Presenting such perspectives as neutral fact and normative behavior and beliefs is spiritually dangerous. Books, likewise, present similar challenges.
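One concrete way a builder can address the training-data concern is to curate sources before training. The sketch below is purely illustrative: the source names, field names and vetted list are assumptions, and real data pipelines are far more involved than a single filter.

```python
# Illustrative sketch: filter a training corpus so that only
# documents from vetted sources are kept. The source names and
# document structure here are hypothetical.

VETTED_SOURCES = {"vetted-library.example", "approved-journal.example"}

def curate(documents: list[dict]) -> list[dict]:
    """Keep only documents whose 'source' field is on the vetted list."""
    return [d for d in documents if d.get("source") in VETTED_SOURCES]
```

The same gate could label, rather than drop, unvetted material, so the model can be taught to present it as one perspective among others rather than as neutral fact.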
III. Jewish Approaches to Regulating Publishing
How have Jews historically dealt with similar challenges? There are two possible perspectives: policymakers and citizens. As mentioned above, policymakers wield control and can regulate markets. But for most of Jewish history, Jews lacked such power. Indeed, Jews often utilized Christian book publishers. Instead, Jewish communities had to assert religious responsibility as citizens, finding creative ways to protect their members without market control.

Given that Jewish publishing houses have existed for centuries, it is surprising how few responsa have been published about their ethical responsibilities to the public. There is one mention of book publishers in Shulchan Aruch (Orach Chaim 307:16), declaring that the publishers of romance novels cause people to sin by thinking improper thoughts. In the 1970s, Rav Moshe Feinstein addressed the case of publishing heretical works. He famously insists that the commentary of R. Yehudah He-Chassid on the Torah is a heretical forgery. Significantly for our purposes, Rav Feinstein rules that it is forbidden for a Jewish publisher to print heresy. More strikingly, he adds that even if the overt heretical passages are removed, the publisher may not publish the rest of the work, which might still contain confusing or misleading ideas. Even subtly non-traditional ideas are forbidden (Iggeros Moshe, Yoreh De’ah, no. 115).
In the AI context, this is particularly pressing. A model that offers secular or non-traditional interpretations of morality, halachah or faith can easily mislead the unwary. The risk is not only false information but distorted frameworks of thought. AI builders must ask: what perspectives are we embedding? What worldview does the system normalize? Policymakers must consider: what perspectives can we, as a society, tolerate and what can we not? How do we enforce minimal standards to prevent dangerous views from proliferating? The first step is generating agreement that there should be minimal standards. The second step is deciding what they are. Neither step is easy.
Even when the information comes from a reputable source, it might be improper to provide to the public. For example, the Talmud (Shabbos 30b) discusses whether certain biblical books should have been removed from circulation. There was no doubt that they were written under divine inspiration. The problem was their confusing and contradictory natures. If the objectionable passages could be explained, then there would be a basis to allow their circulation. However, responsible authorities cannot allow the circulation of a theologically confusing and misleading book, even one written under divine inspiration.
I remember when Tipper Gore led the fight against violent and profane lyrics in music. To society’s great detriment, her team’s partial victory amounted only to labeling such music as explicit, and nothing beyond. In my opinion, AI builders are ethically bound to ensure that AI avoids violent, profane and otherwise destructive output. And regulators are ethically bound to ensure that unethical AI systems do not enter society. However, even if this fight is won in the US, unethical AI systems will certainly be built in other countries that do not regulate their technology. Perhaps this is overly pessimistic, but it seems almost impossible to prevent those AI systems from being used in the US. In other words, no one really controls the markets. Therefore, we need to look at another model for responsible publishing.