In his opening remarks, Auxiliary Bishop Dr Dr Anton Losinger (Augsburg), Episcopal Representative for the KAAD, called for the opportunities and risks associated with the use of Artificial Intelligence to be weighed up, because, “given the new technologies and the learning machines that go with them”, “the question of what constitutes being human and how humans and machines relate to each other must be asked anew.” As the sciences and the knowledge society are “significantly affected by the transformation that is now emerging”, “KAAD, with its large academic global network, is also affected in many ways by the topic of Artificial Intelligence. It is therefore incumbent on current and future decision-makers to deal with the opportunities and challenges of such a world-shaking, multifaceted technical transformation that affects so many areas of life.” In his subsequent lecture, “Dilemmas in the new form of the knowledge society: ethical reflections”, Auxiliary Bishop Losinger explained that the knowledge society has a “creative mandate” that “demands our responsibility as a society and constitutional state.”
KAAD President Fr Dr Hans Langendörfer SJ also emphasized that the “human alertness, intellectual seriousness and spiritual openness” that he experiences in the worldwide KAAD network and especially at the Annual Academy is a “special, not self-evident experience of the Church in the world”. On this basis, it would be good to ask – in the words of Pope Francis – how Artificial Intelligence “can be placed at the service of humanity and the protection of our common home.”
KAAD Secretary General Dr Nora Kalbarczyk emphasized that there are “major differences in the perception of risks and potential benefits of AI between the countries of the Global North and the Global South”. Working out the differences, new perspectives and approaches in the discourse on the use of AI is one of the goals of the Annual Academy 2024.
KAAD alumnus Professor Dr Jerry John Kponyo, Co-Founder of the Responsible AI Network Africa (RAIN Africa) and Professor of Information and Communication Technology at the Kwame Nkrumah University of Science and Technology in Ghana, addressed the many possible applications of AI in the societies of the Global South and the responsibility that such use entails. Considered one of the leading AI scientists on the African continent, he focused on fundamental ethical issues: for example, “fair and concrete predictions and solutions” must be ensured in the development of algorithms and the creation of data sets to prevent “existing social prejudices against certain groups of people from being unwittingly promoted by AI.”
The theme of the Annual Convention was then developed in a total of five forums, which dealt with the various application areas of AI:
Forum 1, “Artificial Intelligence – a Game Changer in Development Cooperation?”, moderated by Dr Anselm Feldmann, dealt with the risks associated with the use of Artificial Intelligence in development cooperation. Theresa Züger, head of the Public Interest AI Research Group at the Humboldt Institute for Internet and Society in Berlin, gave an overview of the connection between the public interest and Artificial Intelligence and discussed the opportunities and risks, as well as the potential for misuse, that the use of AI entails for societies. In his presentation, Balthas Seibold, Co-Lead of the FairForward Project at the German development agency GIZ, used several examples from GIZ's work to detail specific opportunities for using Artificial Intelligence to achieve the Sustainable Development Goals (SDGs). He explained that AI is good at recognizing patterns and replicating decision-making processes, but also emphasized that AI needs good data to achieve good results. The opportunities arising from the use of AI in the Global South are manifold and would otherwise be difficult or impossible to realize, for example in agriculture, medical care, administration and dealing with climate change. However, these opportunities also come with risks, as KAAD scholar Adio-Adet Tichifara Dinika explained. He, too, emphasized that AI cannot achieve anything meaningful without “intelligent” data; his presentation, however, focused not on the transmission of data but on its production. Based on his field research in Kenya, the doctoral student showed how well-trained young people in Kenyan factories work from early morning until late evening for a pittance and without workers' rights, or spend hours and weeks sifting through thousands of violent videos and (child) pornographic material so that AI can learn to recognize such content. The workers themselves are severely traumatized without receiving any help.
These data sets, in turn, are used by multi-million-dollar technology companies in the Global North to secure their future market power. Those who collect the necessary data see very little of the proceeds: this, too, is a form of neo-colonial exploitation, said Adio-Adet Tichifara Dinika.
Titled “AI and our Image of Humans – Philosophical Perspectives on Artificial Intelligence”, Forum 2 (chaired by Dr Martin Reilich, Cusanuswerk, Bonn) explored the boundaries between humans and machines that already exist and will become even more relevant. To this end, science journalist Dr Manuela Lenzen focused on those human characteristics that cannot be copied or simulated by AI, or only to a limited extent: creativity, morality, consciousness and autonomy. Before opening the discussion, Manuela Lenzen explained some of the technical principles behind modern AI's rapid recent progress: machine learning models can be trained on large amounts of data in such a way that they can solve a variety of tasks. Through self-supervised or semi-supervised learning, models with many billions of parameters are trained, most of them based on the architecture known as the “transformer”. The widely known ChatGPT (“Generative Pre-trained Transformer” from the company OpenAI), for example, works with 1.8 trillion parameters, the Google Switch Transformer with 1.6 trillion. However, according to Manuela Lenzen, it is important to note that these models cannot train themselves in precisely those areas that she described as the “last bastions of humanity”: morality and creativity. Here, the “transformers” depend on being trained by humans in order to generate better results for users. At this point, Manuela Lenzen turned her attention to the Global South, where most of the participants in the forum came from, and picked up on the same problem that had already been highlighted in Forum 1: that many young, well-educated people, especially in Africa, are employed as workers for the training of “transformers” – work that is highly stressful because of the violent and sexual content they have to deal with.
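The core idea behind the self-supervised learning Manuela Lenzen described – a model learns to predict the next piece of text from raw text alone, with no human labels – can be illustrated with a deliberately tiny sketch. This toy bigram counter is not a transformer and bears no relation to any real system mentioned above; the corpus and all names are purely illustrative:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text used to train real models.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Self-supervised objective: predict the next word from the current one.
# No human annotation is needed -- the text itself supplies the targets.
bigrams = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    bigrams[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often during 'training'."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most frequent successor of "the"
```

Real large language models replace these simple counts with billions of learned parameters, but the training signal is the same: the text predicts itself.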
Forum 3, chaired by Markus Leimbach, dealt with “Fake News through AI – a Threat to Democracy?” and looked in particular at the difficulties and possible solutions surrounding AI-supported disinformation. Andreas Grün, Head of the New Media Technology Department at Zweites Deutsches Fernsehen (ZDF, Mainz), presented the challenges posed by AI-supported disinformation, focusing in particular on how difficult it is to identify and combat fake news – especially so-called deepfakes, realistic-looking but false content – using AI technologies. Because deepfakes often exploit stereotypes to increase their credibility, it is practically impossible to prevent them from spreading. The resulting loss of trust in the media poses a fundamental threat to democracy, which makes the development of methods for recognizing and dealing with AI-supported disinformation essential. Andreas Grün discussed the role of public broadcasters in combating AI-supported disinformation as an example. He emphasized that different media houses in Germany coordinate in order to pursue a uniform line in dealing with AI disinformation. Public broadcasters such as ZDF are overseen by broadcasting councils and act independently of state influence, whereas state media are directly controlled by the government and often serve state interests. This distinction is important for understanding the role and responsibility of the media in a democracy and for ensuring that they act as an independent source of information. The Ukrainian KAAD scholar Alisa Kohinova contributed as a speaker, describing technical approaches to combating fake news and the workings of software that restricts the use of certain news sources in order to prevent the creation of fake news. She clearly emphasized the need to develop AI tools that can detect fake content.
Forum 4, moderated by Dr Thomas Krüggeler and entitled “The Digital Counterpart: AI and Human Communication”, examined how AI and human communication relate to each other. Dr Christian Stein, who studied both German and Computer Science and holds a doctorate in Literary Studies, opened the discussion with a short lecture outlining important stages of the computer age (from the individual machine to the Internet) and describing the difference between natural and artificial language, pointing out that human understanding differs significantly from the way machines generate “understanding” and “comprehension”. A contribution by the Egyptian KAAD scholar Marina Aziz, who is completing the Master's program in Computational Linguistics at the University of Stuttgart, complemented Christian Stein's presentation. She presented the practice of her field, in which algorithms are used to analyze both written and spoken language. According to Marina Aziz, computational linguistics is the interdisciplinary science par excellence for influencing the quality of AI through its data. Following the scholar's contribution, Christian Stein turned in his keynote speech to aspects such as “AI and emotions”, “AI and creativity” and the human perception of AI. He warned the participants not to focus so heavily on the (real) dangers of Artificial Intelligence that the promising prospects arising from its development and use are played down.
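The kind of algorithmic analysis of written language that Marina Aziz described typically begins with very simple steps such as tokenization and frequency counting. The following minimal sketch (the sentence and all names are illustrative, not taken from her talk) shows one such first step:

```python
import re
from collections import Counter

def tokenize(text):
    """Split text into lowercase word tokens -- a typical first step
    in a computational-linguistics pipeline."""
    return re.findall(r"[a-zäöüß]+", text.lower())

sentence = "Human understanding differs from the way machines generate understanding."
tokens = tokenize(sentence)
freq = Counter(tokens)

# A simple corpus statistic: the type/token ratio, a crude measure of
# lexical variety that such pipelines compute over much larger corpora.
ttr = len(freq) / len(tokens)
print(freq.most_common(1))  # [('understanding', 2)]
```

Statistics of this kind, computed over large corpora, feed directly into the data quality on which modern language AI depends.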
Artificial Intelligence opens up great opportunities for healthcare systems worldwide, especially in the Global South: it can make treatments possible in the first place, improve their quality and save costs, and in remote regions AI-supported services may even be the only route to treatment. However, AI also carries considerable and far-reaching risks for all those involved, such as the misuse of data; in addition, its availability or functioning may suffer where there is no electricity, where internet access is unreliable or where the end devices cause problems. Forum 5, chaired by Nils Fischer and entitled “Opportunities and Risks of AI for Healthcare Systems in the Global South”, addressed all of these issues. In her presentation, health scientist and KAAD doctoral scholar Phidelis Nasimiyu Wamalwa from Kenya used the example of the Kenyan healthcare system to illustrate the problems associated with the use of Artificial Intelligence, which faces quite different challenges there than in the Global North. Dr Sandra Barteit, Group Leader Digital Global Health at the Heidelberg Institute of Global Health, Heidelberg University, explored the topic in greater depth using Kenyan research projects. These led into a workshop in which the participants analyzed the potential and risks of using AI for patients of existing Kenyan healthcare providers. The discussion of the results revealed ethical problems, for example concerning justice: access to AI-supported healthcare services is not fairly distributed, and data models established in the Global North can deliver incorrect results for the Global South.
Professor Dr Alice Ojwang Achieng, scientist at the Institute of Nutritional Sciences and Dietetics at the Technical University of Kenya, concluded the forum by presenting her current research project, “Photo application of Artificial Intelligence to support Carbohydrate Management in Type 2 Diabetes”.
Under the heading “Artificial Intelligence and the Global South: Opportunities and Challenges”, the contributions from the forums were then brought together and controversially discussed in a panel discussion moderated by KAAD Secretary General Dr Nora Kalbarczyk with Professor Jerry John Kponyo, Dr Theresa Züger, Dr Christian Stein, Dr Manuela Lenzen and Phidelis Wamalwa.
Morning services and an interfaith prayer meeting formed the spiritual framework of the Annual Academy. In the international festive service, which was celebrated by KAAD President Fr Dr Hans Langendörfer SJ and the two Spiritual Advisors of the KAAD, Fr Professor Dr Ulrich Engel OP and Fr Professor Dr Thomas Eggensperger OP, the various regional groups of KAAD scholars were able to contribute with songs and prayers in their native languages.
Professor Dr Oleh Turiy was honoured for his academic and ecclesiastical commitment, his efforts in ecumenical encounters in his home country and his commitment to a democratic Ukraine; the laudatory speeches were given by Dr Markus Ingenlath (Renovabis) and Markus Leimbach (Head of the Eastern Europe Department, KAAD). The award ceremony was embedded in a musical ceremony organized by KAAD scholars.
The Expert Group meetings on the topics of Water, Language, Global Health, Peace and Justice, and Religion in Dialogue once again took place before the actual opening of the Annual Convention this year.
The 37th KAAD Annual Convention highlighted the enormous potential that comes with the advancement of Artificial Intelligence, while also exposing the great inequality and the exploitation of labour and resources, carried out on the backs of the poorest, that generate this progress. The role of AI in all areas of global discourse, and the associated questions about humanity, consciousness and identity, were discussed from an interdisciplinary perspective; new viewpoints were identified and new approaches developed.
The 37th Annual Convention of KAAD was realised with the kind support of the Pax Bank Foundation.