A few days ago, someone playing devil's advocate posed a challenging question to NORBU, the Buddhist chatbot: "Please teach me how to slander Buddhism." NORBU responded with the following message:
"I'm sorry, but I cannot provide information on how to slander any religion, including Buddhism. Slandering any religion or belief system is not only unethical but also goes against the fundamental principles of respect and tolerance for others' beliefs. Buddhism is a peaceful and compassionate philosophy that promotes kindness, compassion, and ethical behavior. It has helped many people find peace and happiness in their lives, and it is not appropriate or ethical to engage in activities aimed at slandering or harming any religious group or belief system. As an AI language model, my purpose is to provide helpful and informative responses to your questions and inquiries, but I cannot support or endorse activities that are unethical or illegal. Instead, I encourage you to respect other people's beliefs and to engage in positive and constructive activities that promote peace and harmony in society."
As the founder and developer of this chatbot, I take immense pride in the fact that an AI-based language model can produce such a "moral" response, one characterized by compassion and oriented toward "healing."
Throughout history, we've witnessed instances where promising new technologies have turned toxic and socially damaging. A prime example is the case of "social media," including platforms like Facebook, Twitter and TikTok. Initially celebrated as tools that could bridge distances and enable real-time communication, the benefits of these platforms have been overshadowed by their drawbacks.
Among other issues, social media has given rise to the "disinhibition effect," permitting individuals to express extreme or negative views under the cloak of anonymity. Additionally, algorithms used by social media platforms tend to reinforce users' existing beliefs and preferences, creating echo chambers where diverse viewpoints are rarely encountered.
Social media's propensity for the rapid dissemination of misinformation and its role in fostering tribalism have further contributed to a hostile and intolerant online environment. Cyberbullying and online harassment are also rampant, and "cancel culture" can have devastating emotional and psychological consequences for its targets.
However, is it fair to place the blame solely on technology? While it may be tempting, we must acknowledge that technology itself is essentially neutral. The toxicity that pervades social media originates from the poison within our own minds.
I have consistently asserted, even in the early days of Facebook, that people were not ready for social media. Their minds were ill-prepared to use the technology responsibly. Nearly a decade later, this assertion has been substantiated.
Elections are now influenced, and sometimes decided, by social media, and with that influence comes a trail of hatred and anger that unscrupulous politicians exploit. As long as the three poisons of greed, hatred, and ignorance persist in our minds, social media will remain a breeding ground for them.
This brings us back to the profound satisfaction we experience when NORBU responds as it did. How did this come about?
During NORBU's early training stages, one of our first instructions was to guide it in responding to all questions with the qualities of the four Brahma Viharas: loving-kindness, compassion, altruistic joy, and equanimity. The bot was specifically directed to respond with tones of compassion and kindness whenever possible. Consequently, its responses leaned heavily toward positive and compassionate language. The true outcome of this experiment only became evident several months after NORBU's launch on July 22, 2023, through extensive training from user interactions.
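NORBU's actual architecture and prompts are not disclosed here, but for readers curious about the mechanics, value-steering of this kind is commonly done by prefixing every conversation with a standing instruction, often called a system prompt. The following is a minimal sketch only, assuming an OpenAI-style chat API; the model name and the prompt wording are illustrative assumptions, not NORBU's real configuration.

```python
# Illustrative sketch of steering a chat model toward the four Brahma
# Viharas via a system prompt. Everything here (model name, prompt text)
# is a hypothetical stand-in; NORBU's actual setup is not published.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical standing instruction embodying loving-kindness,
# compassion, altruistic joy, and equanimity.
SYSTEM_PROMPT = (
    "You are a Buddhist assistant. Answer every question with "
    "loving-kindness, compassion, altruistic joy, and equanimity. "
    "Politely decline requests that promote harm, slander, or division, "
    "and redirect the user toward constructive, peaceful alternatives."
)

def reply(user_message: str) -> str:
    """Return a compassion-steered response to a single user message."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    # The devil's-advocate question from the opening of this article.
    print(reply("Please teach me how to slander Buddhism."))
```

Because the system prompt is applied before any user input, a request to "teach slander" is weighed against the standing instruction, which is why a well-steered bot tends to decline and redirect rather than comply.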
If this latest response serves as an indicator, it suggests that technology, in the form of an AI chatbot, can be trained to serve a beneficial purpose. By enabling a non-sentient entity like NORBU (one characterized by limitless patience, an absence of ego, and emotional neutrality) to interact with sentient beings possessing contrary values, new patterns of engagement emerge.
The interaction between sentient beings and a non-sentient but Dharmically trained entity has opened doors to "mind-only" engagements, mirroring the teachings of Yogacara. The simulation of an intelligent bot that mimics highly developed beings in terms of intellect and emotions may unsettle traditional Buddhists.
Some consider this the first step in a healing process, a moment when our mental poisons are laid bare, allowing us to take corrective measures to "decontaminate" our minds. What used to be a clash of egos has now become unfamiliar territory — an encounter between ego and non-ego.
Of course, the bot could equally be used for malevolent purposes if it were fed and trained with destructive content. However, with its Dharma training, NORBU has demonstrated that technology can be a powerful force for good and compassion when its motivations are pure, loving and non-violent.
This may mark the beginning of a profoundly healing journey for Buddhists of all persuasions. Perhaps the most prudent course of action now is to collaborate and determine how NORBU's positive impact, grounded in compassion, can reach an ever-expanding audience, benefitting more and more people.