Nurturing Responsible, Ethical Use of Artificial Intelligence


A staggering 83% of nonprofits believe an ethical framework needs to be defined before full adoption of Artificial Intelligence in their sector, according to the 2020 State of Artificial Intelligence in the Nonprofit Sector (SAINS) report, the largest study of AI in the nonprofit sector to date.

“The pace of adoption for artificial intelligence is unprecedented,” begins a 2023 Christianity Today article. But likewise unprecedented, it seems, are questions about the moral and ethical use of AI (as well as legal and liability questions), especially generative AI systems like the wildly popular ChatGPT, which can produce humanlike responses to users’ prompts. One piece of evidence: Geoffrey Hinton, known as the “godfather” of AI, recently quit his post at Google, citing concerns about the lack of policy surrounding the technology.

Certainly, AI can help in many ways. It can improve efficiency by automating certain tasks and processes, freeing staff time for other activities. It can enhance decision-making by analyzing large amounts of data and extracting relevant insights. It can increase donor engagement by using donor preferences and behaviors to speak more directly to a donor’s needs and interests. And it can improve communication and outreach by analyzing data on the effectiveness of different channels and messages to optimize communication strategies.

You’re already using AI if you’ve followed a phone’s GPS route to avoid traffic, clicked on a “suggested link” from social media for a product recommendation, or interacted with a chatbot on a retail website seeking customer service. AI can even plan and lead an entire worship service, as a ChatGPT avatar did at a Lutheran church in Germany in June 2023.

But the explosion of AI’s abilities and accessibility has raised a whole new series of moral and ethical questions for Christ-centered churches and ministries. For example:

• AI can fulfill this instruction in seconds: “Write a 300-word sympathy note that includes three different Bible references, speaking to a long-time donor whose mother recently died at age 85 from a stroke.” Assuming the sender reviews the result carefully before sending, how transparent should the sender be that AI helped with the composition?
• To provide a 24-hour prayer line on its website, a ministry employs a chatbot that asks questions and then offers a written prayer, with suggestions of Scripture for study, based on the person’s text conversation. Should the ministry disclose that it is using AI to provide this service?
• AI can take a recorded video—from a pastor’s sermon to a ministry CEO’s vision message—and generate clips that can be posted to Reels, TikTok, and YouTube Shorts. It can also create an image of that same speaker voicing words to introduce those clips. Is that latter creation ethical if no disclaimer is given?
• AI can also use natural language processing and machine learning to create stories built around assigned outcomes. It can turn donor data into personalized, captivating stories designed to connect with a specific donor’s likely emotions. But what if the story is entirely fabricated by AI, based only on donor metrics?
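For organizations that decide transparency is the right answer to the questions above, one practical step is to attach a plain-language disclosure to any AI-assisted text before it reaches a reader. The sketch below illustrates the idea; the disclosure wording and the `generate_draft` stand-in are hypothetical assumptions for illustration, not part of any real AI service or of the practices described in this article.

```python
# A minimal sketch of labeling AI-assisted text with a disclosure.
# `generate_draft` is a hypothetical stand-in for a call to a generative
# AI service; a real implementation would call an actual API and the
# result would still be reviewed by a person before sending.

AI_DISCLOSURE = (
    "Note: this message was drafted with the help of an AI tool "
    "and reviewed by our staff."
)

def generate_draft(prompt: str) -> str:
    # Placeholder for a generative AI call; returns illustrative text here.
    return f"[AI-generated draft for: {prompt}]"

def compose_with_disclosure(prompt: str) -> str:
    """Generate a draft and append a plain-language AI disclosure."""
    draft = generate_draft(prompt)
    return f"{draft}\n\n{AI_DISCLOSURE}"

print(compose_with_disclosure("a 300-word sympathy note"))
```

The design choice here is simply that the disclosure travels with the text itself, so a reviewer cannot forget to add it later.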

How ethical are these emerging practices? In the corporate world, seven companies have given their scout's honor pledge to develop AI responsibly. The group, which includes Microsoft, Google, and OpenAI, promised to abide by eight measures to keep the growth of AI safe and secure. These include watermarking audio and visual AI-generated content, encouraging experts and third parties to test their models, and flagging inappropriate use. The good-faith agreement came after a push by the White House, which created an "AI Bill of Rights." Other bills are in the works, including one requiring political ads to disclose any AI use.

Meanwhile, various groups are creating free information sites about AI pitched to churches and nonprofits. One example is a series of webinars by FreeWill, such as the July 2023 webinar, “AI and the Future of Nonprofit Fundraising.” Another example: Gloo has created an “Artificial Intelligence and the Church” hub for staying current on the latest news, discovering curated articles that church leaders should read, exploring new tools specifically designed to aid in utilizing AI effectively, and finding upcoming events and communities to be a part of. Podcasts, such as Liquid Church’s AI and the Church with Kenny Jahng, devote time to positive examples and the ethics of AI.

Many churches and ministries are also developing policies for the appropriate, responsible use of AI, much as many did over the past decade for the use of social media.

During the June 2023 Southern Baptist Convention meeting, messengers adopted resolutions regarding AI, urging pastors to use these tools in “honest, transparent, and Christlike ways.” But will that resolution, handle-with-care statements, and other guidelines produced by evangelical leaders translate into the “common sense” of wise everyday decision-making with AI?

Microsoft founder Bill Gates has blogged, “The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.” If so, the quest for responsible use of AI has only just begun.

ECFA will continue to monitor developments and keep readers informed at ECFA.org/news and through the ECFA PULSE email.


This text is provided with the understanding that ECFA is not rendering legal, accounting, or other professional advice or service. Professional advice on specific issues should be sought from an accountant, lawyer, or other professional.