
Artificial Intelligence, Education And The Christian Faith

Paper presented by Dr. Marcel Mbamalu, Publisher of Prime Business Africa, at the 2024 Communications Week of the Onitsha Catholic Archdiocese on Friday, 10th May 2024.

Introduction

Since coming into the commercial limelight around 2019, Artificial Intelligence (AI) has been greeted with mixed reactions, much of them marked by fears, fights and fiendishness. In a recent response to the widespread use of AI, the Catholic Bishops Conference of Nigeria (CBCN) warned the youth against the irresponsible use and abuse of AI, making particular reference to the need for an ethical approach and the prioritisation of human welfare.

Wherever AI is discussed, responsibility, ethics and the human good feature most prominently. And why not? Amid the huge benefits and potential of AI, its threat to human existence and survival has been raised stridently in recent times. Most attempts at developing AI strategies and regulatory frameworks have harped on responsible use and human interests. The Pontifical Academy for Life, together with technology giants Microsoft and IBM, signed a declaration in 2020 urging the ethical and responsible use of AI. The call is based on the fact that producers and users alike must work to deploy technology for the human good, not otherwise.



Notably, those at the technology end, that is, the producers, tech marketers and technology experts, have been advocating its widespread use, trying seriously to allay fears that AI will do far more harm than good. The technology cuts across disciplines, and this opens up avenues for cross-disciplinary influences (Onyebuchi, Unachukwu & Osawaru, 2023).

Evangelism and general education are among such areas of cross-influence. Countries without strong religious and educational structures risk untoward influences and interference from elsewhere, because artificial intelligence can be used by religious and non-religious institutions alike to organise training and programmes that exert enormous influence. On this score, Nigeria is already experiencing a crisis in its use of AI, owing to a weak technology base and infrastructure, limited political will to procure the right technology, pervasive technology illiteracy and religious fundamentalism.


However, AI can foster a healthy competitive atmosphere that can jump-start the march towards a strong academic and religious system for Nigeria, hence this paper. In terms of teaching and learning facilities and practical training, there is a lot that AI can do, and this should be the basis for deflating the fears and working towards a profitable and responsible use of AI in education and evangelism. The rest of this paper looks in this direction. As I noted in an earlier presentation, Africa and Nigeria must not remain a test ground for technology, with national control over the production, acquisition, training and use of technological wares remaining weak and open to unhealthy international influences.

Brief Explanation

AI resembles conventional computer systems and programs, which are built to perform definite tasks according to the code written into them. But more than a simple computer program, AI is created to think, and to use a continuous inflow of self-generated data to analyse new situations, provide answers, perform new tasks or respond to changing circumstances.

AI is not simply an advancement in the automation of computerized systems. It can also operate machines, generate information, take decisions for organisations, create models for production, and make projections. It is being applied radically in the educational system.

AI depends on the limitless cloud data owned by computing giants such as Amazon, Google and Facebook. This means that all the intelligence generated by these technology giants from internet searches (search engines), social networking, online commerce and banking, research data hubs, digitised library information, media archives, health data, military intelligence, satellite data, geological information and so on is being harnessed by cloud migration firms to create AI systems for potential markets.

AI systems have complex algorithms that access, read, collate and learn from data to perform whatever task is required of them. A self-driving car, for instance, will start from its inputted code but will immediately learn to react to anything new it encounters beyond the code that helps it to self-drive. Instead of depending on the initial code alone, the car will collect data, process it by labelling it, use algorithms to analyse it, and learn or build models to perform tasks from the analysed data, and this can take just microseconds.
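To make that collect, label, learn and act cycle concrete, here is a deliberately toy sketch in Python using scikit-learn. The "sensor readings", labels and model choice are invented for illustration and bear no resemblance to a real autonomous-driving stack.

```python
# Deliberately toy sketch of the collect -> label -> learn -> act cycle described
# above. The "sensor readings", labels and model choice are invented for
# illustration and bear no resemblance to a real autonomous-driving system.
from sklearn.neighbors import KNeighborsClassifier

# 1. Collect and label data: [distance_to_obstacle_m, speed_kmh] -> chosen action
readings = [[50, 60], [5, 60], [30, 20], [2, 10], [80, 90], [10, 80]]
actions = ["keep_lane", "brake", "keep_lane", "brake", "keep_lane", "brake"]

# 2. Learn a model from the labelled data
model = KNeighborsClassifier(n_neighbors=3).fit(readings, actions)

# 3. Act on a new, previously unseen situation
new_situation = [[7, 45]]  # obstacle 7 m ahead while travelling at 45 km/h
print(model.predict(new_situation))  # -> ['brake']
```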

AI is built to compete with humans in thinking and responding to situations, no matter how new and unexpected. AI can think, imagine, feel and innovate, and while this still sounds like science fiction, it is here with us. After the initial coding and training with data, AI continues to generate more data, write its own programs, learn new things and make adjustments to its operation. That is simply why it is called intelligent.

AI, Catholic Faith and Evangelism

The church should take a keen interest in information technology generally and artificial intelligence in particular. The first reason is that the church contains people of all professions, and IT permeates them all. It is one of the ways the church can demonstrate earthly relevance and show how IT is connected to faith, holiness and professionalism.

We must all understand that science and technology pose some of the greatest challenges to faith because of the dichotomy between faith and empiricism (seeing is believing for science; believing is seeing for faith). It is even more so for the Catholic Church, many of whose beliefs remain in contest with those of other Christian denominations, and which remains almost the last line of defence of the Christian faith on matters such as abortion, homosexuality, celibacy, procreation and birth control.


Science and technology have been responsible for corrupting or violating the basic principles of freedom and responsibility upon which the salvation role of Christ stands. The message of the gospel is freedom from sin through grace, and responsibility for our actions with the help of the Holy Spirit, so that we are not entangled again in the enslavement of sin.

Science does not make the right distinction between freedom and responsibility. Many people believe in freedom to the extent that they do with their lives whatever they will or can, as evidenced in the licentious use of AI to create deepfakes, hack bank accounts, bully people and clone the websites and social media accounts of others.

Today, artificial intelligence seems to be at the vanguard or apex of the revolutionary march of information technology, being in itself a metaphor for man challenging himself on intelligence, thinking and acting.

The church has to apply artificial intelligence in propagating the gospel by training data scientists to amass data on Christian teachings and using it to train algorithms to answer questions about the Christian faith, including resolving the tension between science, faith, the gospel and everyday life. After all, the basic point about AI is that computers and machines can be programmed or trained to mimic humans in the way they carry out tasks, think, feel, empathise, innovate and respond to emergencies. Algorithms can help preachers generate good ideas for developing messages, and even for presenting them.
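As a hedged sketch of what this could look like in practice, the snippet below grounds a general-purpose language model in a small set of church-supplied texts before it answers a question. Note that this uses a much simpler grounding (prompting) approach rather than actual model training; the documents, model name and prompt wording are placeholders for illustration, and a real deployment would require a vetted corpus and doctrinal review of every answer.

```python
# Hedged sketch: answering faith questions from church-supplied texts by grounding
# a general-purpose model in those texts at question time (prompting, not training).
# The documents, model name and prompt wording are placeholders for illustration.
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

documents = [
    "Catechism excerpt supplied by the diocese: ...",
    "Encyclical excerpt supplied by the diocese: ...",
]

question = "How does the Church understand grace?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the supplied documents. "
                "If the answer is not in them, say so.\n\n" + "\n\n".join(documents)
            ),
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```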

Campaign Against Unethicality

The church must lead the campaign against the unethical use of AI. It should partner with schools and relevant government agencies to launch programmes and campaigns on the responsible use of AI. This implies that the church would engage professionals, perhaps from its own fold, to reduce cost. Moreover, if the church champions the call for the ethical use of AI, it would also build in automatic self-restraint, because applying artificial intelligence to spreading the gospel has its own ethical issues around copyright, privacy and so on. All the tactics employed by other users to harry people into clicking on and being forced to view unwanted content must be checked.

Using AI for Social Media Content

The church can easily use its existing or new social media outlets or websites, where its blogs are domiciled. AI can be used to generate content, and the platforms can be advertised to the congregation to join, follow, share content and comment. Today, algorithms can rework verbal sermons in various ways: cut them down to short, manageable clips for audiences in a hurry, or create blog content from them. In fact, you are at liberty to instruct AI on what you want from a written or spoken sermon, and you have it. Note that the point is to use AI to generate ideas rather than asking AI to write everything. Apart from the ethical concerns of having AI write your social media posts, such dependence can call into question the input of the Holy Spirit in messages. AI can also generate image ideas, or generate images outright, in ethical ways. The tools for doing this include the following (a minimal sketch of one such workflow follows the list):

ChatGPT: This can provide an outline to help you structure sermons.

Elicit: Helps you research your sermon by surfacing related topics and papers. This is very important when preparing topics that need many references from Christian and non-Christian scholarly works, or either of them. A simple question typed into the search bar connects you with the resources you need. ChatGPT also performs this task to a certain level.

YouTube and OpenAI: These platforms have provisions to help you adapt your sermon to the needs of people with varying abilities, such as hearing impairments or poor vision. Whether your sermon is text, video or audio based, there are resources that can convert it to other modes according to each person's needs, e.g., audio-to-text, video-to-text, text-to-video and speech-to-video. The technical requirements of some of these resources are high and complex, which implies reaching out to the right people for help.

Transcription and translation: There are also AI tools, such as GPT-3 and ChatGPT, that can help turn sermon transcripts from one language into another.
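As one hedged illustration of the workflow described above (shortening a spoken sermon into shareable content), the sketch below sends a transcript to a general-purpose language model and asks for a brief post. The file name, model name and prompt wording are assumptions for the example, and the caveat above still applies: the preacher should review and edit the output before anything is published.

```python
# Hedged sketch: condensing a sermon transcript into a short social media post.
# The file name, model name and prompt wording are assumptions for illustration;
# the preacher should always review and edit the output before it is published.
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

with open("sunday_sermon_transcript.txt", encoding="utf-8") as f:
    sermon_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "Summarise this sermon into a three-sentence social media post, "
                "keeping the preacher's own key phrases where possible."
            ),
        },
        {"role": "user", "content": sermon_text},
    ],
)
print(response.choices[0].message.content)
```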

AI in Education in Nigeria

Artificial intelligence is still in its infancy in Nigeria, with the federal government showing signs of an understanding of the imperatives of the area. AI, being a core technology issue, falls under the auspices of the National Information Technology Development Agency (NITDA). NITDA is engaging with relevant agencies in the financial and educational sectors to boost AI capacity in the country. In December 2023, the federal government held a Data Science and AI Boot Camp for young people aged between 12 and 18.

In 2023, Nigeria made a significant advance in AI when a startup, Unicon Group, created the country's first humanoid robot. This was followed by an August 2023 initiative in which the federal government ordered the inclusion of AI in the primary education curriculum. As of early 2024, the country was still working with its technology-expert nationals, at home and abroad, to develop a national AI strategy, on the basis of which it hopes to build a network of frameworks for AI solutions.

On June 12, 2023, the federal government signed into law the Data Protection Act, 2023, which broadly protects against data abuse. While it is not strictly an AI law, the Act covers issues bordering on AI and data privacy, such as business licensing and registration, consent, the implementation of privacy policies and terms of use, as well as audits. As part of its Nigeria Artificial Intelligence Research Scheme, the Nigerian government is also funding AI research, announcing in October 2023 grants of five million naira (about $6,444) each to 45 AI startups and researchers. The country has floated a National Centre for Artificial Intelligence and Robotics (NCAIR) and launched the 3 Million Technical Talent (3MTT) programme.

In general, AI has the capacity to enhance teaching and learning by leapfrogging resource barriers such as inadequate learning facilities. It can help foster learning communities, collaborations, training and a pool of academic resources such as books, learning apps and demonstrations. AI can help students in their self-learning efforts by increasing access to, and literacy about, these facilities (Aiyedun, 2024).

The main goal of AI in education is to prepare students skilfully for the society they will face after graduation. While efforts are still at an early stage around the globe, the emphasis is on responsible use, critical thinking, problem-solving, ethics, preservation of privacy and adaptability. The National Universities Commission is currently working with Nigerian universities on redesigning curricula to be in tune with the demands of the AI market. In this respect, there is a need for substantial interest in empirical studies on the opportunities, challenges and prospects of AI in education in Nigeria.

A key part of curriculum redesign should be the development of courses that teach ethics in AI. Such courses will of course leverage existing concepts and ideas on ethics. A fresh perspective in such courses would be to examine the emerging regulatory and ethical frameworks around the world, including ongoing research and experience on approaches to AI globally. The aspect of ethics is very important because both teachers and students have been known to use AI facilities to cut corners in research and learning. We have also seen lawsuits among professionals arising from ethical issues in AI use.

In 2023, the Writers Guild of America and the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) in Hollywood went on strike together for the first time since 1960 to protest the threat of AI to their professions. The strike jeopardised the production and release of hundreds of film and television shows. The unions protested poor remuneration and their replacement in films by "digital replicas" generated by artificial intelligence.

In late December 2023, the foremost global newspaper, the New York Times, sued OpenAI and Microsoft for infringing its copyrights by using "unauthorized published work to train artificial intelligence – AI" (Natasha, 2023). The lawsuit, which stakes billions of dollars in claims, "contends that millions of Times articles were used to train automated chatbots, which now compete with the news outlet. The complaint cites several examples in which a chatbot provided users with near-verbatim excerpts from Times articles that would otherwise require a paid subscription to view."

Applications in Education

Citing various authors, Ozovehe (2024) enumerates AI applications in education, including personalized learning platforms, automated grading systems, virtual teaching assistants and intelligent tutoring systems.

 

Intelligent Tutoring Systems (ITS)

Intelligent Tutoring Systems (ITS) are AI-powered platforms that provide students with individualized and flexible learning opportunities. According to VanLehn et al. (2007), these systems use machine learning algorithms to evaluate student performance data, pinpoint areas of strength and weakness, and modify the curriculum as necessary. By offering individualized feedback and guidance, ITS can significantly improve student engagement and academic outcomes across diverse subjects and grade levels.
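A deliberately simplified sketch of that adaptive logic is shown below: inspect recent performance per topic, route the learner to the weakest area, and pick a difficulty level. The topic names, scores and thresholds are invented for illustration; real ITS platforms use far richer learner models.

```python
# Toy sketch of the adaptive routing an ITS performs: look at recent scores per
# topic, send the learner to the weakest area, and choose a difficulty level.
# Topic names, scores and thresholds are invented for illustration.
recent_scores = {
    "fractions": [0.90, 0.85],
    "algebra": [0.40, 0.55],
    "geometry": [0.70, 0.80],
}

def next_activity(scores):
    averages = {topic: sum(vals) / len(vals) for topic, vals in scores.items()}
    weakest = min(averages, key=averages.get)
    level = "remedial" if averages[weakest] < 0.5 else "standard"
    return weakest, level

topic, level = next_activity(recent_scores)
print(f"Serve a {level} exercise on {topic}")  # -> Serve a remedial exercise on algebra
```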

Personalized Learning Platforms

Personalized learning platforms driven by artificial intelligence (AI) employ adaptive algorithms to tailor learning paths and materials according to individual student preferences, skills and learning styles. According to Al-Azawi et al. (2018), these systems use predictive modeling and data analytics to evaluate student progress, suggest relevant readings and enable self-paced learning. Personalized learning platforms foster greater comprehension of the subject matter, autonomy and mastery by meeting the unique needs of each learner.
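The predictive-modeling idea can be illustrated with a minimal sketch: fit a simple classifier on past study data and use the predicted probability of mastery to suggest a pace. The features, training data and threshold below are invented for illustration.

```python
# Hedged sketch of predictive modelling for pacing: estimate from simple study
# data whether a learner is likely to master the next unit, then suggest a path.
# The features, training data and 0.5 threshold are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [weekly study minutes, average quiz score]; label: mastered next unit?
X = [[30, 0.40], [45, 0.50], [60, 0.65], [90, 0.70], [120, 0.85], [150, 0.90]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X, y)

new_learner = [[75, 0.60]]
prob_mastery = model.predict_proba(new_learner)[0][1]

print(f"Estimated probability of mastery: {prob_mastery:.2f}")
print("Suggested path:", "self-paced enrichment" if prob_mastery > 0.5 else "guided revision first")
```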

Automated Grading Systems

Automated grading systems use machine learning and natural language processing (NLP) to assess quizzes, examinations and assignments efficiently. These technologies lessen the workload for teachers and improve the scalability of assessment procedures by analyzing written responses, evaluating comprehension and giving students immediate feedback (Dikli, 2003). Additionally, automated grading systems allow teachers to concentrate more on lesson planning and intervention techniques that enhance students' learning.
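As a hedged sketch of the NLP side of this, the snippet below scores a free-text answer by its textual similarity to a reference answer and flags weak matches for the teacher. The answers and the threshold are invented; real graders combine many more signals and keep a human in the loop.

```python
# Hedged sketch of NLP-assisted grading: score a free-text answer by its textual
# similarity to a reference answer and flag weak matches for the teacher.
# The answers and the 0.4 threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts sunlight, water and carbon dioxide into glucose and oxygen."
student = "Plants use sunlight with water and carbon dioxide to make glucose, releasing oxygen."

vectors = TfidfVectorizer().fit_transform([reference, student])
similarity = cosine_similarity(vectors[0], vectors[1])[0][0]

print(f"Similarity to reference answer: {similarity:.2f}")
print("Flag for teacher review" if similarity < 0.4 else "Provisional pass, pending teacher confirmation")
```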

Virtual Teaching Assistants

Chatbots, often referred to as conversational agents or virtual teaching assistants, use artificial intelligence (AI) algorithms to communicate with students, respond to inquiries and offer academic support in real time. Integrated into websites, mobile applications or learning management systems, these intelligent agents provide individualized support and advice outside typical classroom settings (Ally, 2008). By augmenting human instruction with AI-driven support, virtual teaching assistants improve accessibility, encourage active engagement and cultivate collaborative learning environments.
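A minimal, self-contained sketch of the idea is a rule-based FAQ assistant that matches a student's question to the closest known question and returns the stored answer. The FAQ entries and cutoff below are invented for illustration; production assistants typically sit inside a learning management system and rely on far stronger language models.

```python
# Minimal, self-contained sketch of a rule-based virtual teaching assistant:
# match the student's question to the closest stored FAQ entry and reply.
# The FAQ entries and the 0.5 cutoff are invented for illustration.
import difflib

faq = {
    "when is the assignment due": "The assignment is due on Friday at 5 pm.",
    "how do i reset my portal password": "Use the 'Forgot password' link on the portal login page.",
    "where can i find the lecture slides": "The slides are posted under 'Resources' on the course page.",
}

def answer(question: str) -> str:
    key = question.lower().strip(" ?!.")
    match = difflib.get_close_matches(key, faq.keys(), n=1, cutoff=0.5)
    return faq[match[0]] if match else "I am not sure; I have forwarded your question to the lecturer."

print(answer("When is the assignment due?"))
print(answer("What topics will the exam cover?"))
```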

Some Challenges

  • High levels of internet penetration and smartphone usage have tremendously increased the potential for ordinary people, including students, to access AI, especially chat-based technologies like ChatGPT. However, huge gaps remain in access, research, training and literacy.
  • AI is adding stress to the labour market as it orchestrates industrial displacement through machine involvement in production.

Other challenges include:

  • Data safety concerns, owing to the high capacity of AI systems to harvest data from various domains
  • Algorithmic bias (training algorithms with data in a way that gives or denies advantage to a particular group or sect, e.g., for religious, racial or ethnic profiling)
  • Non-inclusivity, inequality and inequity of access in terms of gender, economic class and regional/digital divides
  • Ethical issues in personal and information privacy, data security and copyright protection
  • Digital illiteracy and weak ICT infrastructure for integrating AI into the academic system
  • Poor adaptability of AI technologies to personalized learning and educational evaluation

AI as Humans: Addressing Some Concerns

In June 2022, a Google scientist described a certain AI language model, the Language Model for Dialogue Applications (LaMDA), as a person, claiming that its apparent sensory awareness and its ability to express religious beliefs made it more human than machine. In truth, the language model would shock you as a person because of its level of human-like awareness. At one point, LaMDA was said to be learning "transcendental meditation."

Luckily, Google disagreed with Blake Lemoine, an ethicist and engineer and self-described "mystic Christian priest," who posted an online message saying he had spoken with LaMDA about religion and personhood, and that the AI showed astonishing levels of human-like awareness, even declaring at one point, "I want everyone to understand that I am, in fact, a person."

Google warned against “anthropomorphizing” such models merely because they “feel” like real, human respondents. Google also noted that the AI, despite its abilities, still depends on everything humans made it to be to act in the way that it does. A simple unplugging of a battery or power source can disable it forever.

Yet the possibility that AI can change humans and pose serious trouble because of its near-human or even extra-human capabilities has been there for years now. We can learn from emerging science fiction about what the future portends and act accordingly to protect human interests.

 

Dr Mbamalu, a Jefferson Fellow and Member of the Nigerian Guild of Editors (NGE), is a Publisher and Communications/Media Consultant. His extensive research works on Renewable Energy and Health Communication are published in several international journals, including SAGE journals.

SMS/Whatsapp: 08094000017

Follow on X: @marcelmbamalu


Dr. Marcel Mbamalu is a communication scholar, journalist and entrepreneur. He holds a Ph.D. in Mass Communication from the University of Nigeria, Nsukka, and is the Chief Executive Officer of Newstide Publications, the publishers of Prime Business Africa.

A seasoned journalist, he honed his journalism skills at The Guardian Newspaper, rising to the position of News Editor at the flagship of the Nigerian press. He has garnered multidisciplinary experience in marketing communication, public relations and media research, helping clients to deliver bespoke campaigns within Nigeria and across Africa.

He has built an expansive network in the media and has served as a media trainer for the World Health Organisation (WHO) at various times in Northeast Nigeria. He has attended numerous media trainings, including the Bloomberg Financial Journalism Training and the Reuters/AfDB training on Effective Coverage of Infrastructural Development of Africa.

A versatile media expert, Dr Mbamalu was part of a global media team that covered the 2020 United States presidential election. As Africa's sole representative in the 2023 Jefferson Fellowships, he was selected to tour the United States and Asia (Japan and Hong Kong) as part of a 12-person global team of journalists on a travel grant to report on inclusion, income gaps and migration issues between the US and Asia.


