<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.glossalab.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Emily+Hoppe</id>
	<title>glossaLAB - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.glossalab.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Emily+Hoppe"/>
	<link rel="alternate" type="text/html" href="https://www.glossalab.org/wiki/Special:Contributions/Emily_Hoppe"/>
	<updated>2026-04-30T19:37:01Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=13676</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=13676"/>
		<updated>2025-06-12T19:20:57Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Kero, S., Parycek, P., &amp;amp; Dreyer, S. (2023). &#039;&#039;Bekanntheit und Akzeptanz von ChatGPT in Deutschland&#039;&#039; (Factsheet Nr. 10). Meinungsmonitor Künstliche Intelligenz. Retrieved from &amp;lt;nowiki&amp;gt;https://www.cais-research.de/wp-content/uploads/Factsheet-10-ChatGPT.pdf&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt; raise questions of justice and ethical responsibility. This paper examines the interaction between AI systems and social justice, with a focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on current studies and philosophical concepts, the analysis explores how AI systems can reinforce existing inequalities and what measures are necessary to ensure the responsible use of this technology.&lt;br /&gt;
&lt;br /&gt;
== Introduction and Relevance of the Topic ==&lt;br /&gt;
Since the launch of ChatGPT in November 2022, artificial intelligence has made an unprecedented entry into our daily lives. This new generation of AI systems, which is already being used by hundreds of millions of people just two and a half years later, marks a turning point in the digitalization of our society. Almost every person in the digitally connected world regularly interacts with AI applications.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The global use of AI technologies is steadily increasing. According to forecasts, the number of users worldwide is expected to grow to approximately 826.2 million by 2025. AI is gaining importance not only in education but also in fields such as business, medicine, justice, and public administration.&amp;lt;ref&amp;gt;Statista. (2025). &#039;&#039;Number of artificial intelligence (AI) tool users globally from 2021 to 2031&#039;&#039;. Retrieved from &amp;lt;nowiki&amp;gt;https://www.statista.com/forecasts/1449844/ai-tool-users-worldwide&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One reason for this is the efficiency of AI, as it saves time, automates processes, and delivers fast results. However, with the rapid development of AI, new considerations arise. When machines write texts, prepare decisions, or even make moral judgments, questions emerge such as: What distinguishes a machine-made decision from a human one? Who is responsible when errors occur or people are disadvantaged?&lt;br /&gt;
&lt;br /&gt;
It is no longer just about simple applications. In philosophical discussions, the question increasingly arises whether so-called &amp;quot;strong AI&amp;quot;, that is, machines with consciousness or genuine thinking of their own, is even possible. Olkhovsky addresses this idea and refers to the example from the film &#039;&#039;Ex Machina&#039;&#039;: an AI that appears so human-like that it seems almost impossible to distinguish it from a real person. But what separates a convincing imitation from genuine consciousness? Can machines truly “be” like us, or does it ultimately remain a complex simulation?&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;[[Artificial Intelligence]]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AI confronts us with fundamental questions about what knowledge is, how we can recognize truth, and what role humans play in an increasingly automated world.&lt;br /&gt;
&lt;br /&gt;
== Artificial Intelligence ==&lt;br /&gt;
Artificial intelligence (AI) is generally understood as a subfield of computer science that focuses on the development of systems capable of performing tasks that typically require human intelligence, such as problem-solving, language understanding, or pattern recognition.&amp;lt;ref&amp;gt;Gabler Wirtschaftslexikon. (n.d.). &#039;&#039;Künstliche Intelligenz (KI)&#039;&#039;. Retrieved from &amp;lt;nowiki&amp;gt;https://wirtschaftslexikon.gabler.de/definition/kuenstliche-intelligenz-ki-40285&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In scientific discussions, a distinction is often made between weak AI, which is specialized in narrowly defined tasks (e.g., voice assistants), and strong AI, which could develop human-like consciousness or genuine thinking. This concept remains largely theoretical but raises profound ethical and epistemological questions.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, a distinction is made between knowledge-based AI, which operates using symbolic logic, and so-called machine learning, which is based on statistical methods. The latter has made significant advances in recent years, intensifying the question of how it differs from human understanding.&amp;lt;ref&amp;gt;Gethmann, C. F., Nordmann, A., &amp;amp; Grunwald, A. (2021). Künstliche Intelligenz in der Forschung. In A. Grunwald (Hrsg.), &#039;&#039;Handbuch Künstliche Intelligenz&#039;&#039; (2. Aufl.). Springer. &amp;lt;nowiki&amp;gt;https://doi.org/10.1007/978-3-662-63449-3_2&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the increasing use of AI, the question arises not only of what machines are capable of but also what this means for us humans. What exactly does it mean when machines “learn,” “understand,” or “decide”? And what happens when we begin to delegate our own thinking to machines? Is AI truly a form of cognitive processing, or merely a highly advanced simulation of human behavior?&lt;br /&gt;
&lt;br /&gt;
It is emphasized that AI systems ultimately reflect our own thinking. They adopt patterns, logics, and assumptions from the data they were trained on, but they do not understand them in the human sense.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; AI lacks the deeper understanding of causality and context that characterizes human thought. This gap between statistical pattern recognition and genuine understanding is central to the philosophical discourse on AI.&lt;br /&gt;
&lt;br /&gt;
The increasing automation of creative processes by AI raises the question of whether human thinking and creativity are being displaced. Hannah Arendt emphasized that thinking is more than mere information processing; it is connected with reflection and responsibility.&amp;lt;ref&amp;gt;Waelen, R. R. (2025). Rethinking automation and the future of work with Hannah Arendt. &#039;&#039;Journal of Business Ethics&#039;&#039;. &amp;lt;nowiki&amp;gt;https://doi.org/10.1007/s10551-025-05991-1&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt; Nick Bostrom also warns that excessive dependence on AI could restrict human freedom of decision.&amp;lt;ref&amp;gt;Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. &#039;&#039;Minds and Machines&#039;&#039;. Retrieved from &amp;lt;nowiki&amp;gt;https://nickbostrom.com/superintelligentwill.pdf&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What does it mean for our humanity when processes once considered uniquely human, such as thinking, decision-making, or creativity, are increasingly taken over by machines?&lt;br /&gt;
&lt;br /&gt;
== Responsibility &amp;amp; Decision-Making ==&lt;br /&gt;
When we ask an AI a question, we usually receive a fitting answer almost instantly. This seems so natural that we rarely stop to consider how the answer is actually generated or whether it is truly correct. And therein lies an ethical challenge: unlike humans, who make decisions based on experience, values, or moral convictions, AI operates purely statistically. It identifies patterns and produces what seems most probable based on its training data, without “understanding” what is good, right, or wrong.&amp;lt;ref&amp;gt;Heine, M., Potthast, L., &amp;amp; Siewert, S. (2023). Künstliche Intelligenz in öffentlichen Verwaltungen. In M. Wimmer et al. (Hrsg.), &#039;&#039;Digitalisierung und öffentliche Verwaltung&#039;&#039;. Springer. &amp;lt;nowiki&amp;gt;https://doi.org/10.1007/978-3-658-40101-6_11&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
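&lt;br /&gt;
To illustrate this purely statistical mode of operation, the following toy sketch (an invented simplification for illustration, not how production language models are actually built) predicts the next word solely from frequency counts in its training text, with no notion of truth or meaning:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy next-word predictor (an illustrative simplification, not a&lt;br /&gt;
# real language model): it outputs whatever word followed a given&lt;br /&gt;
# word most often in training, without understanding the content.&lt;br /&gt;
from collections import Counter, defaultdict&lt;br /&gt;
&lt;br /&gt;
training = &amp;quot;the judge was fair the judge was strict the trial was fair&amp;quot;.split()&lt;br /&gt;
counts = defaultdict(Counter)&lt;br /&gt;
for prev, nxt in zip(training, training[1:]):&lt;br /&gt;
    counts[prev][nxt] += 1&lt;br /&gt;
&lt;br /&gt;
def most_probable(word):&lt;br /&gt;
    # Return the most frequent follower observed in the training data.&lt;br /&gt;
    return counts[word].most_common(1)[0][0]&lt;br /&gt;
&lt;br /&gt;
print(most_probable(&amp;quot;judge&amp;quot;))  # &amp;quot;was&amp;quot;: a learned pattern, not a judgment&lt;br /&gt;
print(most_probable(&amp;quot;was&amp;quot;))  # &amp;quot;fair&amp;quot;: simply the majority count&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A real system is vastly larger, but the principle sketched here is the same: output follows observed frequency, not insight into what is good, right, or wrong.&lt;br /&gt;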
&lt;br /&gt;
Such systems now not only accompany us when writing or researching but also influence what we see on social media, which news is shown to us, and how decisions are made in professional contexts. The more we place our trust in AI, the greater the responsibility we must take on ourselves, even if it may seem at first glance as though the machine has everything under control.&lt;br /&gt;
&lt;br /&gt;
According to Daniel Wessel, this reveals a central problem: AI systems cannot make moral decisions, at least not in the human sense. They have no values of their own or awareness of responsibility. That is why it is all the more important that such technologies follow clear ethical guidelines. The people who develop or use them must define in advance what is to be considered right, fair, or transparent.&lt;br /&gt;
&lt;br /&gt;
Which values should be embedded in AI in the first place? And who decides that? What we perceive as “right” often depends on cultural, societal, or individual contexts. To ensure AI is used responsibly, these values must be consciously named and integrated into the systems. This requires not only technical know-how but above all a societal and ethical debate. &lt;br /&gt;
&lt;br /&gt;
Ultimately, AI remains a tool. Whether it acts “rightly” in a given situation depends not only on the algorithm but above all on how we design it, use it, and question it critically.&lt;br /&gt;
&lt;br /&gt;
== Discrimination and Bias in AI Systems ==&lt;br /&gt;
It is often assumed that artificial intelligence evaluates things objectively and neutrally. In practice, however, it becomes clear that AI systems can adopt and even reinforce social biases. This can occur when the information an AI relies on is based on already biased data. An example of this is the increasing use of automated recruitment processes in companies. Here, applicants may be excluded because of their ethnic background without a human ever having looked at the application.&amp;lt;ref name=&amp;quot;:3&amp;quot;&amp;gt;Pohlmann, P., Drolshagen, P., &amp;amp; Kötter, M. (2022). Künstliche Intelligenz, Bias und Versicherungen – Eine technische und rechtliche Analyse. &#039;&#039;Zeitschrift für die gesamte Versicherungswissenschaft&#039;&#039;. &amp;lt;nowiki&amp;gt;https://doi.org/10.1007/s12297-022-00528-1&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A 2024 study by the University of Washington shows that discrimination can occur in AI-driven recruitment processes. The researchers analyzed 554 résumés and 571 job postings, generating more than 3 million combinations of names and job positions. They then altered the names on 120 résumés, replacing typically white-sounding names with names commonly associated with Black individuals.&lt;br /&gt;
&lt;br /&gt;
The results were clear: in 85% of cases, the AI favored names typically associated with white individuals, while only 9% of the preferred names were linked to Black individuals. Additionally, the AI selected male candidates 52% of the time, even for roles predominantly held by women, such as HR positions (77% female representation) or teaching jobs (57% female representation). White women were also more likely to be selected than Black women.&amp;lt;ref name=&amp;quot;:4&amp;quot;&amp;gt;Fisher Phillips. (2024). &#039;&#039;New study shows AI resume screeners prefer white male candidates: Your 5-step blueprint to prevent AI discrimination in hiring&#039;&#039;. Retrieved from &amp;lt;nowiki&amp;gt;https://www.fisherphillips.com/en/news-insights/ai-resume-screeners.html&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
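&lt;br /&gt;
To make such audit figures concrete, the following minimal sketch (hypothetical toy data and group labels, not the study&#039;s actual code or results) shows how per-group selection rates can be computed from logged screening decisions:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Minimal sketch of a hiring-audit calculation. The decision log&lt;br /&gt;
# below is invented toy data, not the University of Washington&lt;br /&gt;
# study&#039;s material; only the calculation pattern is illustrated.&lt;br /&gt;
from collections import Counter&lt;br /&gt;
&lt;br /&gt;
# Each record pairs the name group signaled by a resume with&lt;br /&gt;
# whether the automated screener selected that resume.&lt;br /&gt;
decisions = [&lt;br /&gt;
    (&amp;quot;white_male&amp;quot;, True), (&amp;quot;white_female&amp;quot;, True),&lt;br /&gt;
    (&amp;quot;black_male&amp;quot;, False), (&amp;quot;black_female&amp;quot;, False),&lt;br /&gt;
    (&amp;quot;white_male&amp;quot;, True), (&amp;quot;black_female&amp;quot;, True),&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
selected = Counter(group for group, chosen in decisions if chosen)&lt;br /&gt;
total = Counter(group for group, _ in decisions)&lt;br /&gt;
&lt;br /&gt;
for group in sorted(total):&lt;br /&gt;
    rate = selected[group] / total[group]&lt;br /&gt;
    print(group, round(rate, 2))  # per-group selection rate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Large gaps between these per-group rates are precisely the kind of problematic pattern that regular fairness tests are meant to surface.&lt;br /&gt;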
&lt;br /&gt;
This example illustrates how AI systems derive selection criteria from existing data, which can lead them to unintentionally adopt and perpetuate discriminatory structures from real-world practices. From a philosophical perspective, this raises a fundamental question: if our knowledge about the world, in this case about job applicants, is based on data that is itself biased, how reliable is that knowledge? Epistemology asks what we truly know when we receive information. AI systems process data, but they do not understand it in the human sense. Their decisions are based on patterns, not on insight or moral reasoning. As Olkhovsky also emphasizes, machines lack the intentionality and awareness that give human decisions their moral depth.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That is why it is even more important that AI does not make the final selection of applicants alone. To ensure a fair and just hiring process, human oversight by HR professionals is essential so that they can intervene if there is a suspicion of discrimination. Moreover, the automated AI recruitment process should be adapted in such a way that it either completely avoids discrimination or at least minimizes it. This could be achieved through regular fairness and bias tests to identify and address problematic patterns early on.&amp;lt;ref name=&amp;quot;:4&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition, John Rawls’ concept of justice as fairness offers a philosophical foundation for the discussion of discrimination and bias in AI systems. His famous thought experiment, the veil of ignorance, asks us to imagine a society in which no one knows their own social position. This is meant to produce fair rules that do not favor any individual or group.&amp;lt;ref name=&amp;quot;:2&amp;quot;&amp;gt;Gabriel, I. (2022). Toward a theory of justice for artificial intelligence. &#039;&#039;Daedalus&#039;&#039;. &amp;lt;nowiki&amp;gt;https://doi.org/10.1162/daed_a_01864&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Applied to AI, this means that algorithms must be designed not to reinforce existing inequalities but to actively contribute to fairness. As studies have shown, AI systems can unconsciously adopt discriminatory patterns when trained on biased data. Rawls would argue that such systems do not meet the principles of justice, as they fail to ensure that the least advantaged members of society are not further marginalized.&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Bias ==&lt;br /&gt;
In the example above, the AI exhibited bias, a systematic distortion. This occurs when an AI application receives training data and information from the real world that contains prejudices, and it does not question them but instead accepts and adopts them as correct. Having learned that white men were predominantly hired in the past, it sets the attribute “male” as a selection criterion, concludes that this group is the most suitable, and prefers it accordingly.&lt;br /&gt;
&lt;br /&gt;
Such errors must be identified and corrected. While bias cannot be completely avoided, it can be reduced through technical solutions.&lt;br /&gt;
&lt;br /&gt;
There are three strategies that can be used to minimize bias in AI systems. First, potential biases in the training data can be removed before the learning process begins (pre-processing). This involves modifying the data so that attributes such as origin or religion no longer have a distorted effect on the AI’s decisions. Another method is to program the AI in such a way that, during learning, it is prevented from making unfair assessments or discriminating by applying additional mathematical constraints (in-processing). In the third method, the AI’s outcomes are evaluated and corrected for fairness after the learning process (post-processing).&amp;lt;ref name=&amp;quot;:3&amp;quot; /&amp;gt;&lt;br /&gt;
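&lt;br /&gt;
As a rough illustration of the first and third strategies, consider the following toy sketch (invented applicant data, attribute names, and thresholds, not a production fairness toolkit):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy sketch of two debiasing strategies; all data, attribute&lt;br /&gt;
# names, and thresholds are invented for illustration only.&lt;br /&gt;
applicants = [&lt;br /&gt;
    {&amp;quot;experience&amp;quot;: 7, &amp;quot;origin&amp;quot;: &amp;quot;A&amp;quot;, &amp;quot;score&amp;quot;: 0.81},&lt;br /&gt;
    {&amp;quot;experience&amp;quot;: 3, &amp;quot;origin&amp;quot;: &amp;quot;B&amp;quot;, &amp;quot;score&amp;quot;: 0.44},&lt;br /&gt;
    {&amp;quot;experience&amp;quot;: 6, &amp;quot;origin&amp;quot;: &amp;quot;B&amp;quot;, &amp;quot;score&amp;quot;: 0.58},&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# Strategy 1 (pre-processing): drop sensitive attributes before&lt;br /&gt;
# training so they cannot serve as direct selection criteria.&lt;br /&gt;
cleaned = [{k: v for k, v in a.items() if k != &amp;quot;origin&amp;quot;} for a in applicants]&lt;br /&gt;
&lt;br /&gt;
# Strategy 3 (post-processing): correct outcomes after learning,&lt;br /&gt;
# e.g. by choosing a separate acceptance threshold per group so&lt;br /&gt;
# that selection rates become more balanced.&lt;br /&gt;
thresholds = {&amp;quot;A&amp;quot;: 0.80, &amp;quot;B&amp;quot;: 0.55}&lt;br /&gt;
accepted = [a for a in applicants if a[&amp;quot;score&amp;quot;] &amp;gt;= thresholds[a[&amp;quot;origin&amp;quot;]]]&lt;br /&gt;
&lt;br /&gt;
print(len(cleaned), len(accepted))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Note that removing sensitive attributes alone is rarely sufficient, since other features can act as proxies for them; this is one reason the second, in-processing strategy adds fairness constraints to the learning procedure itself.&lt;br /&gt;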
&lt;br /&gt;
Discrimination in AI systems is not only a technical problem but also touches on fundamental ethical principles such as justice, human dignity, and responsibility. Even if an AI application does not intentionally discriminate, it still violates these principles because its decisions have real-world consequences for people. Machines do not act morally, but their algorithms can reinforce existing inequalities and thus raise ethical concerns.&amp;lt;ref&amp;gt;Mougan, C., &amp;amp; Brand, J. (2024). Kantian deontology meets AI alignment: Towards morally grounded fairness metrics. &#039;&#039;arXiv&#039;&#039;. &amp;lt;nowiki&amp;gt;https://arxiv.org/pdf/2311.05227&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Immanuel Kant formulated the categorical imperative as a universal moral principle: every human being must be treated as an end in themselves and never merely as a means to an end. Applied to AI, this means that algorithms must not be designed in a way that disadvantages people based on prejudice or biased data. An AI system that adopts discriminatory patterns contradicts this principle, as it fails to treat people as equal individuals.&amp;lt;ref&amp;gt;Ashrafian, H. (2022). Engineering a social contract: Rawlsian distributive justice through algorithmic game theory and artificial intelligence. &#039;&#039;AI and Ethics&#039;&#039;. &amp;lt;nowiki&amp;gt;https://doi.org/10.1007/s43681-022-00253-6&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
John Rawls developed the concept of justice as fairness, which aims to structure society in such a way that it does not disadvantage the weakest. His veil of ignorance challenges us to design rules without knowing our own social position—thereby ensuring fair conditions for all. AI systems that unconsciously discriminate contradict this principle, as they perpetuate existing inequalities instead of correcting them. To counteract this, algorithms must be actively tested and adjusted for fairness.&lt;br /&gt;
&lt;br /&gt;
Therefore, it is essential to assume not only technical but also ethical responsibility. AI systems must be designed to promote justice and equal opportunity rather than reinforce existing biases. This can be achieved through regular fairness and bias tests as well as through intentional ethical programming. Even if bias cannot be completely eliminated, it can be significantly reduced through targeted measures.&lt;br /&gt;
&lt;br /&gt;
== Transparency and Traceability ==&lt;br /&gt;
On May 21, 2024, the Council of the EU adopted the EU Artificial Intelligence Act, a comprehensive regulatory framework that sets uniform rules for the use of AI across Europe. The AI Act places strong emphasis on transparency to ensure that AI handles data responsibly and fairly. Systems considered particularly high-risk must therefore be designed in a way that makes their use comprehensible and understandable. This allows individuals to make informed decisions about whether a particular AI system is appropriate for them. Overall, the goal of increased AI transparency is to empower users and give them more control.&lt;br /&gt;
&lt;br /&gt;
To ensure the transparent use of AI, users must be informed when they are interacting with an AI system. The system&#039;s documentation should explain how the AI works, what it can be used for, and what opportunities and risks it entails. Additionally, information about the development and context of the system is important so that both users and organizations understand its capabilities and limitations.&lt;br /&gt;
&lt;br /&gt;
Transparency is important because it leads to greater traceability. It enables people to understand how AI reaches certain decisions and helps to identify issues, such as potential bias or copyright violations, early on. Moreover, studies show that users are more likely to use transparent AI models. For developers, it is particularly essential to know where the training data comes from, whether it is fair and non-discriminatory, and what risks a particular model might pose.&amp;lt;ref&amp;gt;Bundesamt für Sicherheit in der Informationstechnik. (2024). &#039;&#039;Transparenz von KI-Systemen&#039;&#039;. Retrieved from &amp;lt;nowiki&amp;gt;https://www.bsi.bund.de&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though transparency is crucial, it is not always easy to implement. A careful balance must be struck: users should be given enough information to understand and use a system safely, but not all aspects should be disclosed if doing so poses security or misuse risks.&lt;br /&gt;
&lt;br /&gt;
This is where the discussion around AI intersects closely with epistemological questions: What counts as reliable knowledge? How much do we need to know in order to trust a decision? AI challenges us to rethink our understanding of knowledge, responsibility, and control.&lt;br /&gt;
&lt;br /&gt;
As Olkhovsky emphasizes, transparency is a key prerequisite for people to trust AI systems at all. Only if it is clear how decisions are made can those decisions be questioned or challenged. Without that clarity, responsibility becomes blurred and control over technological decisions slips away from users. Therefore, transparency is not only a technical task but also an ethical imperative: AI systems must be designed to reveal the criteria by which they operate. Transparency not only enables oversight but also reinforces the democratic principle of accountability in the digital age.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Misinformation, Fake News and Deepfakes ==&lt;br /&gt;
AI systems are used for almost everything. They are applied to a wide range of tasks, such as text processing, data analysis on specific topics, or even creating study schedules for exams. Additionally, we ask AI questions on various subjects or consult it about issues that concern us, and we generally assume that its answers are correct. However, AI can also be used to produce deepfakes and fake news.&lt;br /&gt;
&lt;br /&gt;
An important aspect of misinformation caused by artificial intelligence is algorithmic information theory. This concerns the efficiency and complexity of algorithms, which significantly influence how AI systems process and present information. In contrast to human knowledge, which is based on experience, reflection, and context, AI-generated content is the result of statistical calculations and patterns.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; This creates the risk of distorted or manipulative content, particularly in the case of deepfakes and fake news. Since AI cannot distinguish between truth and deception but merely calculates probabilities, false or misleading information can be amplified and spread uncritically.&lt;br /&gt;
&lt;br /&gt;
AI systems are constantly learning and improving their capabilities. They can generate images or videos that appear to be real. While this may initially be seen as a major advancement, one must ask whether AI-generated visual content is truly a positive development. For instance, a video might show a person saying something they never actually said; this is precisely what deepfakes do. Fake news can spread, reputations can be damaged, and entire elections or public opinions can be manipulated. It is essential for people to be able to recognize such content.&lt;br /&gt;
&lt;br /&gt;
AI-generated material can often be identified by subtle details such as limited facial expressions, inconsistent lighting, or incorrect pronunciation and monotonous speech. It is crucial that we are able to detect these deepfakes so that we are not manipulated or misled by false claims.&amp;lt;ref&amp;gt;Bundesamt für Sicherheit in der Informationstechnik. (2024). &#039;&#039;Deepfakes – Gefahren und Gegenmaßnahmen&#039;&#039;. Retrieved from &amp;lt;nowiki&amp;gt;https://www.bsi.bund.de&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it comes to fake news, we also need to be particularly cautious with regard to AI, as we often accept the answers provided by AI systems like ChatGPT as true and correct without question.&lt;br /&gt;
&lt;br /&gt;
Deepfakes also contribute to the spread of fake news because AI can easily create manipulated content. This becomes especially dangerous during elections, as people can be influenced without realizing it. AI-generated images or videos are designed to be dramatic, fear-inducing, or emotionally charged. Political parties might use deepfake campaign videos to manipulate voters and push them in a certain direction. These political messages are also distributed across multiple channels to gain more attention.&lt;br /&gt;
&lt;br /&gt;
Because AI helps disseminate this content quickly and widely, it can intentionally influence opinions and draw massive attention to certain topics. However, this can also lead to a divided society, especially when every party holds very strong and opposing views. Therefore, it is essential that people learn how to recognize fake news themselves. They should think critically, question information, verify sources, and compare reports across different media. Moreover, there must be regulations for election campaigns, and AI-generated content must be labeled. Above all, transparency is necessary in order to provide insight into the algorithms. This way, manipulation can be understood and ultimately contained.&amp;lt;ref&amp;gt;Muñoz, K. (2025). &#039;&#039;Systematische Manipulation sozialer Medien im Zeitalter der KI&#039;&#039;. Deutsche Gesellschaft für Auswärtige Politik (DGAP). Retrieved from &amp;lt;nowiki&amp;gt;https://dgap.org&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Artificial intelligence is increasingly shaping our everyday lives as well as areas such as education, the economy, and society. It brings both opportunities and risks. Since AI operates without moral understanding or a sense of responsibility, the responsibility for its use always lies with humans. Issues such as bias, discrimination, deepfakes, or fake news highlight the importance of critically questioning AI and regulating it through ethical and societal measures.&lt;br /&gt;
&lt;br /&gt;
Despite its useful functions, AI should not be seen as the automatic solution to every challenge. Those who rely too heavily on it risk unlearning how to think independently and may adopt decisions without reflection. It is important to first seek one’s own solutions and use AI selectively and consciously.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the responsible use of AI requires clear ethical guidelines and transparent regulation. AI decisions are often difficult to understand, which is why society must actively question how AI is controlled and applied. The discussion about AI is not only technical but also philosophical: it challenges us to rethink our understanding of knowledge, responsibility, and freedom of choice.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given the rapid development of technology, a responsible approach to AI is essential—especially with regard to data protection and personal information. Just as we are cautious about protecting our data, we should act with the same care when dealing with artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=13674</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=13674"/>
		<updated>2025-06-12T18:57:59Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Kero, S., Parycek, P., &amp;amp; Dreyer, S. (2023). &#039;&#039;Bekanntheit und Akzeptanz von ChatGPT in Deutschland&#039;&#039; (Factsheet Nr. 10). Meinungsmonitor Künstliche Intelligenz. Retrieved from &amp;lt;nowiki&amp;gt;https://www.cais-research.de/wp-content/uploads/Factsheet-10-ChatGPT.pdf&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt; raises questions of justice and ethical responsibility. This paper examines the interactions between AI systems and social justice, with a focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on current studies and philosophical concepts, the analysis explores how AI systems can reinforce existing inequalities and what measures are necessary to ensure the responsible use of this technology.&lt;br /&gt;
&lt;br /&gt;
== Introduction and Relevance of the Topic ==&lt;br /&gt;
Since the launch of ChatGPT in November 2022, artificial intelligence has made an unprecedented entry into our daily lives. This new generation of AI systems, which is already being used by hundreds of millions of people just two and a half years later, marks a turning point in the digitalization of our society. Almost every person in the digitally connected world regularly interacts with AI applications.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The global use of AI technologies is steadily increasing. According to forecasts, the number of users worldwide is expected to grow to approximately 826.2 million by 2025. Beyond the education sector, AI is also gaining increasing importance in other fields such as business, medicine, justice, and public administration.&amp;lt;ref&amp;gt;Statista. (2025). &#039;&#039;Number of artificial intelligence (AI) tool users globally from 2021 to 2031&#039;&#039;. Retrieved from &amp;lt;nowiki&amp;gt;https://www.statista.com/forecasts/1449844/ai-tool-users-worldwide&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One reason for this is the efficiency of AI, as it saves time, automates processes, and delivers fast results. However, with the rapid development of AI, new considerations arise. When machines write texts, prepare decisions, or even make moral judgments, questions emerge such as: What distinguishes a machine-made decision from a human one? Who is responsible when errors occur or people are disadvantaged?&lt;br /&gt;
&lt;br /&gt;
It is no longer just about simple applications. In philosophical discussions, the question increasingly arises whether so-called &amp;quot;strong AI&amp;quot;, a machine with consciousness or thinking of its own, is even possible. Olkhovsky addresses this idea and refers to the example from the film &#039;&#039;Ex Machina&#039;&#039;: an AI that appears so human-like that it seems almost impossible to distinguish it from a real person. But what separates a convincing imitation from genuine consciousness? Can machines truly “be” like us, or does everything ultimately remain a complex simulation?&lt;br /&gt;
&lt;br /&gt;
AI confronts us with fundamental questions about what knowledge is, how we can recognize truth, and what role humans play in an increasingly automated world.&lt;br /&gt;
&lt;br /&gt;
== Artificial Intelligence ==&lt;br /&gt;
Artificial intelligence (AI) is generally understood as a subfield of computer science that focuses on the development of systems capable of performing tasks that typically require human intelligence, such as problem-solving, language understanding, or pattern recognition.&lt;br /&gt;
&lt;br /&gt;
In scientific discussions, a distinction is often made between weak AI, which is specialized in narrowly defined tasks (e.g., voice assistants), and strong AI, which would possess human-like consciousness or genuine thinking of its own. The latter remains largely theoretical but raises profound ethical and epistemological questions.&lt;br /&gt;
&lt;br /&gt;
Additionally, a distinction is made between knowledge-based AI, which operates using symbolic logic, and so-called machine learning, which is based on statistical methods. The latter has made significant advances in recent years, intensifying the question of how it differs from human understanding.&lt;br /&gt;
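&lt;br /&gt;
To make this distinction concrete, the following minimal sketch contrasts the two paradigms: a hand-written symbolic rule versus a rule estimated from data. The loan scenario, numbers, and threshold are invented purely for illustration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Knowledge-based (symbolic) AI: a human encodes the rule explicitly.&lt;br /&gt;
def symbolic_rule(income, debt):&lt;br /&gt;
    # A domain expert decided: acceptable if debt is below 40% of income.&lt;br /&gt;
    return debt &amp;lt; 0.4 * income&lt;br /&gt;
&lt;br /&gt;
# Machine learning (statistical) AI: the rule is estimated from examples.&lt;br /&gt;
def learn_cutoff(examples):&lt;br /&gt;
    # examples: list of (debt_ratio, was_repaid) pairs from past cases.&lt;br /&gt;
    repaid = [ratio for ratio, was_repaid in examples if was_repaid]&lt;br /&gt;
    # The cutoff is &#039;learned&#039; as the average debt ratio of repaid loans.&lt;br /&gt;
    return sum(repaid) / len(repaid)&lt;br /&gt;
&lt;br /&gt;
past_cases = [(0.2, True), (0.3, True), (0.7, False), (0.9, False)]&lt;br /&gt;
print(symbolic_rule(50000, 15000))   # True: the rationale is readable in code&lt;br /&gt;
print(learn_cutoff(past_cases))      # 0.25: the rationale is implicit in data&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;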
&lt;br /&gt;
With the increasing use of AI, the question arises not only of what machines are capable of but also what this means for us humans. What exactly does it mean when machines “learn,” “understand,” or “decide”? And what happens when we begin to delegate our own thinking to machines? Is AI truly a form of cognitive processing, or merely a highly advanced simulation of human behavior?&lt;br /&gt;
&lt;br /&gt;
It is often emphasized that AI systems ultimately reflect our own thinking. They adopt patterns, logics, and assumptions from the data they were trained on, but they do not understand them in the human sense. AI lacks the deeper grasp of causality and context that characterizes human thought. This gap between statistical pattern recognition and genuine understanding is central to the philosophical discourse on AI.&lt;br /&gt;
&lt;br /&gt;
The increasing automation of creative processes by AI raises the question of whether human thinking and creativity are being displaced. Hannah Arendt emphasized that thinking is more than mere information processing; it is connected with reflection and responsibility. Nick Bostrom also warns that excessive dependence on AI could restrict human freedom of decision.&lt;br /&gt;
&lt;br /&gt;
What does it mean for our humanity when processes once considered uniquely human, such as thinking, decision-making, or creativity, are increasingly taken over by machines?&lt;br /&gt;
&lt;br /&gt;
== Responsibility &amp;amp; Decision-Making ==&lt;br /&gt;
When we ask an AI a question, we usually receive a fitting answer almost instantly. This seems so natural that we rarely stop to consider how the answer is actually generated—or whether it is truly correct. And therein lies an ethical challenge: unlike humans, who make decisions based on experience, values, or moral convictions, AI operates purely statistically. It identifies patterns and produces what seems most probable based on its training data, without “understanding” what is good, right, or wrong.&lt;br /&gt;
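&lt;br /&gt;
To illustrate this purely statistical mode of operation, here is a toy next-word predictor, a deliberately minimal sketch rather than a picture of how modern systems are built: it merely counts which word followed which in its training text and emits the most frequent continuation, with no notion of whether the result is true.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from collections import Counter, defaultdict&lt;br /&gt;
&lt;br /&gt;
# A tiny, invented training corpus.&lt;br /&gt;
corpus = &#039;the judge was fair the judge was strict the judge was fair&#039;.split()&lt;br /&gt;
&lt;br /&gt;
# Count which word follows which (a bigram table).&lt;br /&gt;
successors = defaultdict(Counter)&lt;br /&gt;
for current, following in zip(corpus, corpus[1:]):&lt;br /&gt;
    successors[current][following] += 1&lt;br /&gt;
&lt;br /&gt;
def most_probable_next(word):&lt;br /&gt;
    # Emit the most frequent continuation; nothing checks truth or fairness.&lt;br /&gt;
    return successors[word].most_common(1)[0][0]&lt;br /&gt;
&lt;br /&gt;
print(most_probable_next(&#039;was&#039;))  # &#039;fair&#039;, simply because it occurred twice&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;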
&lt;br /&gt;
Such systems now not only accompany us when writing or researching but also influence what we see on social media, which news is shown to us, and how decisions are made in professional contexts. The more trust we place in AI, the greater the responsibility we must take on ourselves, even if it may seem at first glance as though the machine has everything under control.&lt;br /&gt;
&lt;br /&gt;
According to Daniel Wessel, this reveals a central problem: AI systems cannot make moral decisions, at least not in the human sense. They have no values of their own or awareness of responsibility. That is why it is all the more important that such technologies follow clear ethical guidelines. The people who develop or use them must define in advance what is to be considered right, fair, or transparent.&lt;br /&gt;
&lt;br /&gt;
Which values should be embedded in AI in the first place? And who decides that? What we perceive as “right” often depends on cultural, societal, or individual contexts. To ensure AI is used responsibly, these values must be consciously named and integrated into the systems. This requires not only technical know-how but above all a societal and ethical debate. &lt;br /&gt;
&lt;br /&gt;
Ultimately, AI remains a tool. Whether it acts “rightly” in a given situation depends not only on the algorithm but above all on how we design it, use it, and question it critically.&lt;br /&gt;
&lt;br /&gt;
== Discrimination and Bias in AI Systems ==&lt;br /&gt;
It is often assumed that artificial intelligence evaluates things objectively and neutrally. In practice, however, AI systems can adopt and even reinforce social biases. This can occur when the information an AI relies on is based on already biased data. An example is the increasing use of automated recruitment processes in companies, where applicants may be excluded because of their ethnic background without a human ever having looked at the application.&lt;br /&gt;
&lt;br /&gt;
A 2024 study by the University of Washington shows that discrimination can occur in AI-driven recruitment processes. The researchers analyzed 554 résumés and 571 job postings, generating more than three million combinations of names and job positions. They then altered the names on 120 résumés, replacing typically white-sounding names with names commonly associated with the Black population.&lt;br /&gt;
&lt;br /&gt;
The results were clear: in 85% of the cases, the AI favored names typically associated with white individuals, while only 9% of the preferred names were linked to Black individuals. Additionally, the AI selected male candidates 52% of the time, even for roles predominantly held by women, such as HR positions (77% female representation) or teaching jobs (57% female representation). White women were also more likely to be selected than Black women.&lt;br /&gt;
&lt;br /&gt;
This example illustrates how AI systems derive selection criteria from existing data, which can lead them to unintentionally adopt and perpetuate discriminatory structures from real-world practices. From a philosophical perspective, this raises a fundamental question: If our knowledge about the world (in this case, about job applicants) is based on data that is itself biased, how reliable is that knowledge? Epistemology asks what we truly know when we receive information. AI systems process data, but they do not understand it in the human sense. Their decisions are based on patterns, not on insight or moral reasoning. As Olkhovsky also emphasizes, machines lack the intentionality and awareness that give human decisions their moral depth.&lt;br /&gt;
&lt;br /&gt;
That is why it is even more important that AI does not make the final selection of applicants alone. To ensure a fair and just hiring process, human oversight by HR professionals is essential so that they can intervene if there is a suspicion of discrimination. Moreover, the automated AI recruitment process should be adapted in such a way that it either completely avoids discrimination or at least minimizes it. This could be achieved through regular fairness and bias tests to identify and address problematic patterns early on.&lt;br /&gt;
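&lt;br /&gt;
Such fairness and bias tests can start very simply, for example by comparing selection rates across groups. The sketch below applies the four-fifths rule, a common auditing heuristic, to invented outcome counts; the group names and numbers are hypothetical and are not data from the study above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def selection_rates(outcomes):&lt;br /&gt;
    # outcomes: {group: (selected, total_applicants)}&lt;br /&gt;
    return {g: sel / total for g, (sel, total) in outcomes.items()}&lt;br /&gt;
&lt;br /&gt;
def four_fifths_check(outcomes):&lt;br /&gt;
    # Flag possible disparate impact when a group&#039;s selection rate&lt;br /&gt;
    # falls below 80% of the best-off group&#039;s rate.&lt;br /&gt;
    rates = selection_rates(outcomes)&lt;br /&gt;
    best = max(rates.values())&lt;br /&gt;
    return {g: rate / best &amp;gt;= 0.8 for g, rate in rates.items()}&lt;br /&gt;
&lt;br /&gt;
# Hypothetical audit of an automated screening step.&lt;br /&gt;
audit = {&#039;group_a&#039;: (85, 100), &#039;group_b&#039;: (9, 100)}&lt;br /&gt;
print(four_fifths_check(audit))  # {&#039;group_a&#039;: True, &#039;group_b&#039;: False}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;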
&lt;br /&gt;
In addition, John Rawls’ concept of justice as fairness offers a philosophical foundation for the discussion of discrimination and bias in AI systems. His famous thought experiment, the veil of ignorance, asks us to imagine a society in which no one knows their own social position. This is meant to produce fair rules that do not favor any individual or group.&lt;br /&gt;
&lt;br /&gt;
Applied to AI, this means that algorithms must be designed not to reinforce existing inequalities but to actively contribute to fairness. As studies have shown, AI systems can unconsciously adopt discriminatory patterns when trained on biased data. Rawls would argue that such systems do not meet the principles of justice, as they fail to ensure that the least advantaged members of society are not further marginalized.&lt;br /&gt;
&lt;br /&gt;
== Bias ==&lt;br /&gt;
In the example above, the AI exhibited bias, a systematic distortion. This occurs when an AI application receives training data and information from the real world that contain prejudices and, instead of questioning them, accepts and adopts them as correct. It has learned that, in the past, predominantly white men were hired, and it therefore sets the attribute “male” as a selection criterion, concluding that this group is the most suitable and preferring it.&lt;br /&gt;
&lt;br /&gt;
Such errors must be identified and corrected. While bias cannot be completely avoided, it can be reduced through technical solutions.&lt;br /&gt;
&lt;br /&gt;
There are three strategies for minimizing bias in AI systems, commonly referred to as pre-processing, in-processing, and post-processing. First, potential biases in the training data can be removed before the learning process begins; this involves modifying the data so that attributes such as origin or religion no longer have a distorting effect on the AI’s decisions. Second, the AI can be programmed so that additional mathematical constraints prevent it from making unfair assessments or discriminating during learning. Third, the AI’s outcomes are evaluated and corrected for fairness after the learning process. A pre-processing example is sketched below.&lt;br /&gt;
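&lt;br /&gt;
As a hedged illustration of the pre-processing strategy, the sketch below computes sample weights in the spirit of the reweighing method of Kamiran and Calders: each (group, outcome) pair is weighted so that group membership and outcome become statistically independent in the weighted training data. The records are invented.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from collections import Counter&lt;br /&gt;
&lt;br /&gt;
# Invented training records: (group, hired) pairs.&lt;br /&gt;
records = [(&#039;m&#039;, 1), (&#039;m&#039;, 1), (&#039;m&#039;, 0), (&#039;f&#039;, 1), (&#039;f&#039;, 0), (&#039;f&#039;, 0)]&lt;br /&gt;
n = len(records)&lt;br /&gt;
group_counts = Counter(g for g, _ in records)&lt;br /&gt;
label_counts = Counter(y for _, y in records)&lt;br /&gt;
pair_counts = Counter(records)&lt;br /&gt;
&lt;br /&gt;
def weight(group, label):&lt;br /&gt;
    # Frequency expected if group and label were independent,&lt;br /&gt;
    # divided by the frequency actually observed for this pair.&lt;br /&gt;
    expected = (group_counts[group] / n) * (label_counts[label] / n)&lt;br /&gt;
    observed = pair_counts[(group, label)] / n&lt;br /&gt;
    return expected / observed&lt;br /&gt;
&lt;br /&gt;
# Under-represented pairs (here, hired women) are weighted up and&lt;br /&gt;
# over-represented pairs are weighted down before training begins.&lt;br /&gt;
print([round(weight(g, y), 2) for g, y in records])&lt;br /&gt;
# [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;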
&lt;br /&gt;
Discrimination in AI systems is not only a technical problem but also touches on fundamental ethical principles such as justice, human dignity, and responsibility. Even if an AI application does not intentionally discriminate, it still violates these principles because its decisions have real-world consequences for people. Machines do not act morally, but their algorithms can reinforce existing inequalities and thus raise ethical concerns.&lt;br /&gt;
&lt;br /&gt;
Immanuel Kant formulated the categorical imperative as a universal moral principle: every human being must be treated as an end in themselves and never merely as a means to an end. Applied to AI, this means that algorithms must not be designed in a way that disadvantages people based on prejudice or biased data. An AI system that adopts discriminatory patterns contradicts this principle, as it fails to treat people as equal individuals.&lt;br /&gt;
&lt;br /&gt;
John Rawls developed the concept of justice as fairness, which aims to structure society in such a way that it does not disadvantage the weakest. His veil of ignorance challenges us to design rules without knowing our own social position—thereby ensuring fair conditions for all. AI systems that unconsciously discriminate contradict this principle, as they perpetuate existing inequalities instead of correcting them. To counteract this, algorithms must be actively tested and adjusted for fairness.&lt;br /&gt;
&lt;br /&gt;
Therefore, it is essential to assume not only technical but also ethical responsibility. AI systems must be designed to promote justice and equal opportunity rather than reinforce existing biases. This can be achieved through regular fairness and bias tests as well as through intentional ethical programming. Even if bias cannot be completely eliminated, it can be significantly reduced through targeted measures.&lt;br /&gt;
&lt;br /&gt;
== Transparency and Traceability ==&lt;br /&gt;
On May 21, 2024, the Council of the EU adopted the Artificial Intelligence Act, a comprehensive regulatory framework that sets uniform rules for the use of AI across Europe. The AI Act places strong emphasis on transparency to ensure that AI handles data responsibly and fairly. Systems considered particularly high-risk must therefore be designed in a way that makes their use comprehensible and understandable. This allows individuals to make informed decisions about whether a particular AI system is appropriate for them. Overall, the goal of increased AI transparency is to empower users and give them more control.&lt;br /&gt;
&lt;br /&gt;
To ensure the transparent use of AI, users must be informed when they are interacting with an AI system. The system&#039;s documentation should explain how the AI works, what it can be used for, and what opportunities and risks it entails. Additionally, information about the development and context of the system is important so that both users and organizations understand its capabilities and limitations.&lt;br /&gt;
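&lt;br /&gt;
In practice, such documentation is often published as a so-called model card. The sketch below shows one minimal, machine-readable form it could take; the field names and values are illustrative assumptions, not a prescribed standard.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from dataclasses import dataclass, field&lt;br /&gt;
&lt;br /&gt;
@dataclass&lt;br /&gt;
class ModelCard:&lt;br /&gt;
    # A minimal transparency record accompanying a deployed AI system.&lt;br /&gt;
    name: str&lt;br /&gt;
    intended_use: str&lt;br /&gt;
    training_data: str&lt;br /&gt;
    known_limitations: list = field(default_factory=list)&lt;br /&gt;
    risks: list = field(default_factory=list)&lt;br /&gt;
&lt;br /&gt;
card = ModelCard(&lt;br /&gt;
    name=&#039;resume-screener-v2&#039;,  # hypothetical system&lt;br /&gt;
    intended_use=&#039;Pre-ranking of applications for human review only&#039;,&lt;br /&gt;
    training_data=&#039;Historical hiring records, 2015-2023, audited for bias&#039;,&lt;br /&gt;
    known_limitations=[&#039;Not validated for non-German-language CVs&#039;],&lt;br /&gt;
    risks=[&#039;May reproduce historical gender imbalance&#039;],&lt;br /&gt;
)&lt;br /&gt;
print(card.intended_use)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;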
&lt;br /&gt;
Transparency is important because it leads to greater traceability. It enables people to understand how AI reaches certain decisions and helps to identify issues early on, such as potential bias or copyright violations. Moreover, studies show that users are more likely to use transparent AI models. For developers, it is particularly essential to know where the training data comes from, whether it is fair and non-discriminatory, and what risks a particular model might pose.&lt;br /&gt;
&lt;br /&gt;
Even though transparency is crucial, it is not always easy to implement. A careful balance must be struck: users should be given enough information to understand and use a system safely, but not all aspects should be disclosed if doing so poses security or misuse risks.&lt;br /&gt;
&lt;br /&gt;
This is where the discussion around AI intersects closely with epistemological questions: What counts as reliable knowledge? How much do we need to know in order to trust a decision? AI challenges us to rethink our understanding of knowledge, responsibility, and control.&lt;br /&gt;
&lt;br /&gt;
As Olkhovsky emphasizes, transparency is a key prerequisite for people to trust AI systems at all. Only if it is clear how decisions are made can those decisions be questioned or challenged. Without that clarity, responsibility becomes blurred and control over technological decisions slips away from users. Therefore, transparency is not only a technical task but also an ethical imperative: AI systems must be designed to reveal the criteria by which they operate. Transparency not only enables oversight but also reinforces the democratic principle of accountability in the digital age.&lt;br /&gt;
&lt;br /&gt;
== Misinformation, Fake News, and Deepfakes ==&lt;br /&gt;
AI systems are now applied to a wide range of tasks, from text processing and data analysis on specific topics to creating study schedules for exams. We also ask AI questions on various subjects, consult it about issues that concern us, and generally assume that its answers are correct. However, the same technology can be used to produce deepfakes and fake news.&lt;br /&gt;
&lt;br /&gt;
One important aspect of misinformation caused by artificial intelligence is the algorithmic character of information processing: the efficiency and complexity of algorithms significantly influence how AI systems process and present information. In contrast to human knowledge, which is based on experience, reflection, and context, AI-generated content is the result of statistical calculations and patterns. This creates the risk of distorted or manipulative content, particularly in the case of deepfakes and fake news. Since AI cannot distinguish between truth and deception but merely calculates probabilities, false or misleading information can be amplified and spread uncritically.&lt;br /&gt;
&lt;br /&gt;
AI systems are constantly learning and improving their capabilities day by day. They can generate images or videos that appear to be real. While this may initially be seen as a major advancement, one must ask whether AI-generated visual content is truly a positive development. For instance, a video might show a person saying something they never actually said—this is where deepfakes come in. Fake news can spread, reputations can be damaged, and entire elections or public opinions can be manipulated. It is essential for people to be able to recognize such content.&lt;br /&gt;
&lt;br /&gt;
AI-generated material can often be identified by subtle details such as limited facial expressions, inconsistent lighting, or incorrect or monotonous speech. It is crucial that we are able to detect these deepfakes so that we are not manipulated or misled by false claims.&lt;br /&gt;
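&lt;br /&gt;
Reliable detection generally requires trained classifiers, but one simple heuristic from image forensics, error level analysis, illustrates the idea: recompress an image and inspect how unevenly it degrades, since edited regions often respond differently. The sketch below uses the Pillow library and an invented file name; it is an illustration only and will not reliably catch modern deepfakes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import io&lt;br /&gt;
&lt;br /&gt;
from PIL import Image, ImageChops  # Pillow&lt;br /&gt;
&lt;br /&gt;
def error_levels(path, quality=90):&lt;br /&gt;
    # Recompress the image and measure where it changes the most.&lt;br /&gt;
    original = Image.open(path).convert(&#039;RGB&#039;)&lt;br /&gt;
    buffer = io.BytesIO()&lt;br /&gt;
    original.save(buffer, &#039;JPEG&#039;, quality=quality)&lt;br /&gt;
    buffer.seek(0)&lt;br /&gt;
    diff = ImageChops.difference(original, Image.open(buffer))&lt;br /&gt;
    # Per-channel (min, max) differences; unusually large maxima&lt;br /&gt;
    # can hint at locally edited regions.&lt;br /&gt;
    return diff.getextrema()&lt;br /&gt;
&lt;br /&gt;
print(error_levels(&#039;suspect_frame.jpg&#039;))  # hypothetical file name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;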
&lt;br /&gt;
When it comes to fake news, we also need to be particularly cautious with regard to AI, as we often accept the answers provided by AI systems like ChatGPT as true and correct without question.&lt;br /&gt;
&lt;br /&gt;
Deepfakes also contribute to the spread of fake news because AI can easily create manipulated content. This becomes especially dangerous during elections, as people can be influenced without realizing it. AI-generated images or videos are designed to be dramatic, fear-inducing, or emotionally charged. Political parties might use deepfake campaign videos to manipulate voters and push them in a certain direction. These political messages are also distributed across multiple channels to gain more attention.&lt;br /&gt;
&lt;br /&gt;
Because AI helps disseminate this content quickly and widely, it can intentionally influence opinions and draw massive attention to certain topics. However, this can also lead to a divided society, especially when every party holds very strong and opposing views. Therefore, it is essential that people learn how to recognize fake news themselves. They should think critically, question information, verify sources, and compare it with other media. Moreover, there must be regulations for election campaigns, and AI-generated content must be labeled. Above all, transparency is necessary in order to provide insight into the algorithms. This way, manipulation can be understood and ultimately contained.&lt;br /&gt;
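&lt;br /&gt;
Labeling could start as simply as attaching provenance metadata to generated files. The sketch below embeds a disclosure note into a PNG text chunk with the Pillow library; the field names are assumptions chosen for illustration, not an established provenance standard such as C2PA.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from PIL import Image&lt;br /&gt;
from PIL.PngImagePlugin import PngInfo&lt;br /&gt;
&lt;br /&gt;
def label_as_ai_generated(src, dst, generator=&#039;example-model-v1&#039;):&lt;br /&gt;
    # Attach a human- and machine-readable disclosure to the image file.&lt;br /&gt;
    image = Image.open(src)&lt;br /&gt;
    metadata = PngInfo()&lt;br /&gt;
    metadata.add_text(&#039;ai_generated&#039;, &#039;true&#039;)  # hypothetical field name&lt;br /&gt;
    metadata.add_text(&#039;generator&#039;, generator)&lt;br /&gt;
    image.save(dst, pnginfo=metadata)&lt;br /&gt;
&lt;br /&gt;
# Hypothetical file names.&lt;br /&gt;
label_as_ai_generated(&#039;campaign_image.png&#039;, &#039;campaign_image_labeled.png&#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;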
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Artificial intelligence is increasingly shaping our everyday lives as well as areas such as education, the economy, and society. It brings both opportunities and risks. Since AI operates without moral understanding or a sense of responsibility, the responsibility for its use always lies with humans. Issues such as bias, discrimination, deepfakes, or fake news highlight the importance of critically questioning AI and regulating it through ethical and societal measures.&lt;br /&gt;
&lt;br /&gt;
Despite its useful functions, AI should not be seen as the automatic solution to every challenge. Those who rely too heavily on it risk unlearning how to think independently and may accept its outputs without reflection. It is important to first seek one’s own solutions and to use AI selectively and consciously.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the responsible use of AI requires clear ethical guidelines and transparent regulation. AI decisions are often difficult to understand, which is why society must actively question how AI is controlled and applied. The discussion about AI is not only technical but also philosophical: it challenges us to rethink our understanding of knowledge, responsibility, and freedom of choice.&lt;br /&gt;
&lt;br /&gt;
Given the rapid development of the technology, a responsible approach to AI is essential, especially with regard to data protection and personal information. We should handle artificial intelligence with the same care we already apply to protecting our data.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=13672</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=13672"/>
		<updated>2025-06-12T18:52:31Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022&amp;lt;ref&amp;gt;Kero, S., Parycek, P., &amp;amp; Dreyer, S. (2023). &#039;&#039;Bekanntheit und Akzeptanz von ChatGPT in Deutschland&#039;&#039; (Factsheet Nr. 10). Meinungsmonitor Künstliche Intelligenz. Retrieved from &amp;lt;nowiki&amp;gt;https://www.cais-research.de/wp-content/uploads/Factsheet-10-ChatGPT.pdf&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt; raises questions of justice and ethical responsibility. This paper examines the interactions between AI systems and social justice, with a focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on current studies and philosophical concepts, the analysis explores how AI systems can reinforce existing inequalities and what measures are necessary to ensure the responsible use of this technology.&lt;br /&gt;
&lt;br /&gt;
== Introduction and Relevance of the Topic ==&lt;br /&gt;
Since the launch of ChatGPT in November 2022, artificial intelligence has made an unprecedented entry into our daily lives. This new generation of AI systems, which is already being used by hundreds of millions of people just two and a half years later, marks a turning point in the digitalization of our society. Almost every person in the digitally connected world regularly interacts with AI applications.&amp;lt;ref&amp;gt;Kero, S., Parycek, P., &amp;amp; Dreyer, S. (2023). &#039;&#039;Bekanntheit und Akzeptanz von ChatGPT in Deutschland&#039;&#039; (Factsheet Nr. 10). Meinungsmonitor Künstliche Intelligenz. Retrieved from &amp;lt;nowiki&amp;gt;https://www.cais-research.de/wp-content/uploads/Factsheet-10-ChatGPT.pdf&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The global use of AI technologies is steadily increasing. According to forecasts, the number of users worldwide is expected to grow to approximately 826.2 million by 2025. Beyond the education sector, AI is also gaining increasing importance in other fields such as business, medicine, justice, and public administration.&amp;lt;ref&amp;gt;Statista. (2025). &#039;&#039;Number of artificial intelligence (AI) tool users globally from 2021 to 2031&#039;&#039;. Retrieved from &amp;lt;nowiki&amp;gt;https://www.statista.com/forecasts/1449844/ai-tool-users-worldwide&amp;lt;/nowiki&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One reason for this is the efficiency of AI, as it saves time, automates processes, and delivers fast results. However, with the rapid development of AI, new considerations arise. When machines write texts, prepare decisions, or even make moral judgments, questions emerge such as: What distinguishes a machine-made decision from a human one? Who is responsible when errors occur or people are disadvantaged?&lt;br /&gt;
&lt;br /&gt;
It is no longer just about simple applications. In philosophical discussions, the question increasingly arises whether so-called &amp;quot;strong AI&amp;quot; machines with consciousness or their own thinking is even possible. Olkhovsky addresses this idea and refers to the example from the film &#039;&#039;Ex Machina&#039;&#039;: an AI that appears so human-like that it seems almost impossible to distinguish it from a real person. But what separates a convincing imitation from genuine consciousness? Can machines truly “be” like us—or does it ultimately remain a complex simulation?&lt;br /&gt;
&lt;br /&gt;
AI confronts us with fundamental questions about what knowledge is, how we can recognize truth, and what role humans play in an increasingly automated world.&lt;br /&gt;
&lt;br /&gt;
== Artificial Intelligence ==&lt;br /&gt;
Artificial intelligence (AI) is generally understood as a subfield of computer science that focuses on the development of systems capable of performing tasks that typically require human intelligence, such as problem-solving, language understanding, or pattern recognition.&lt;br /&gt;
&lt;br /&gt;
In scientific discussions, a distinction is often made between weak AI, which is specialized in narrowly defined tasks (e.g., voice assistants), and strong AI, which could develop human-like consciousness or genuine thinking. This concept remains largely theoretical but raises profound ethical and epistemological questions.&lt;br /&gt;
&lt;br /&gt;
Additionally, a distinction is made between knowledge-based AI, which operates using symbolic logic, and so-called machine learning, which is based on statistical methods. The latter has made significant advances in recent years, intensifying the question of how it differs from human understanding.&lt;br /&gt;
&lt;br /&gt;
With the increasing use of AI, the question arises not only of what machines are capable of but also what this means for us humans. What exactly does it mean when machines “learn,” “understand,” or “decide”? And what happens when we begin to delegate our own thinking to machines? Is AI truly a form of cognitive processing, or merely a highly advanced simulation of human behavior?&lt;br /&gt;
&lt;br /&gt;
It is emphasized that AI systems ultimately reflect our own thinking. They adopt patterns, logics, and assumptions from the data they were trained on, but they do not understand them in the human sense. AI lacks the deeper understanding of causality and context that characterizes human thought. This gap between statistical pattern recognition and genuine understanding is central to the philosophical discourse on AI.&lt;br /&gt;
&lt;br /&gt;
The increasing automation of creative processes by AI raises the question of whether human thinking and creativity are being displaced. Hannah Arendt emphasized that thinking is more than mere information processing; it is connected with reflection and responsibility. Nick Bostrom also warns that excessive dependence on AI could restrict human freedom of decision.&lt;br /&gt;
&lt;br /&gt;
What does it mean for our humanity when processes once considered uniquely human such as thinking, decision-making, or creativity are increasingly taken over by machines?&lt;br /&gt;
&lt;br /&gt;
== Responsibility &amp;amp; Decision-Making ==&lt;br /&gt;
When we ask an AI a question, we usually receive a fitting answer almost instantly. This seems so natural that we rarely stop to consider how the answer is actually generated—or whether it is truly correct. And therein lies an ethical challenge: unlike humans, who make decisions based on experience, values, or moral convictions, AI operates purely statistically. It identifies patterns and produces what seems most probable based on its training data, without “understanding” what is good, right, or wrong.&lt;br /&gt;
&lt;br /&gt;
Such systems now accompany us not only when writing or researching, but also influence what we see on social media, which news is shown to us, or how decisions are made in professional contexts. The more we place our trust in AI, the greater the responsibility we must take on ourselves even if it may seem at first glance as though the machine has everything under control.&lt;br /&gt;
&lt;br /&gt;
According to Daniel Wessel, this reveals a central problem: AI systems cannot make moral decisions, at least not in the human sense. They have no values of their own or awareness of responsibility. That is why it is all the more important that such technologies follow clear ethical guidelines. The people who develop or use them must define in advance what is to be considered right, fair, or transparent.&lt;br /&gt;
&lt;br /&gt;
Which values should be embedded in AI in the first place? And who decides that? What we perceive as “right” often depends on cultural, societal, or individual contexts. To ensure AI is used responsibly, these values must be consciously named and integrated into the systems. This requires not only technical know-how but above all a societal and ethical debate. &lt;br /&gt;
&lt;br /&gt;
Ultimately, AI remains a tool. Whether it acts “rightly” in a given situation depends not only on the algorithm but above all on how we design it, use it, and question it critically.&lt;br /&gt;
&lt;br /&gt;
== Discrimination and Bias in AI Systems ==&lt;br /&gt;
It is often assumed that artificial intelligence evaluates things objectively and neutrally. In practice, however, it becomes clear that AI systems can adopt and even reinforce social biases. This can occur when the information an AI relies on is based on already biased data. An example of this is the increasing use of automated recruitment processes in companies. Here, applicants may be excluded based on certain ethnic backgrounds without a human ever having looked at the application.&lt;br /&gt;
&lt;br /&gt;
A 2024 study by the University of Washington shows that discrimination can occur in AI-driven recruitment processes. The researchers analyzed over 554 résumés and 571 job postings, generating more than 3 million combinations of names and job positions. They then altered the names on 120 resumes, replacing typically white-sounding names with names commonly associated with the Black population.&lt;br /&gt;
&lt;br /&gt;
The results were clear: in 85% of the cases, the AI favored names typically associated with white individuals, while only 9% of the preferred names were linked to Black individuals. Additionally, the AI selected male candidates 52% of the time even for roles predominantly held by women, such as HR positions (77% female representation) or teaching jobs (57% female representation). White women were also more likely to be selected than Black women.&lt;br /&gt;
&lt;br /&gt;
This example illustrates how AI systems derive selection criteria from existing data, which can lead them to unintentionally adopt and perpetuate discriminatory structures from real-world practices. From a philosophical perspective, this raises a fundamental question: If our knowledge about the world in this case, about job applicants is based on data that is itself biased, how reliable is that knowledge? Epistemology asks what we truly know when we receive information. AI systems process data, but they do not understand it in the human sense. Their decisions are based on patterns, not on insight or moral reasoning. As Olkhovsky also emphasizes, machines lack the intentionality and awareness that give human decisions their moral depth.&lt;br /&gt;
&lt;br /&gt;
That is why it is even more important that AI does not make the final selection of applicants alone. To ensure a fair and just hiring process, human oversight by HR professionals is essential so that they can intervene if there is a suspicion of discrimination. Moreover, the automated AI recruitment process should be adapted in such a way that it either completely avoids discrimination or at least minimizes it. This could be achieved through regular fairness and bias tests to identify and address problematic patterns early on.&lt;br /&gt;
&lt;br /&gt;
In addition, John Rawls’ concept of justice as fairness offers a philosophical foundation for the discussion of discrimination and bias in AI systems. His famous thought experiment the veil of ignorance asks us to imagine a society in which no one knows their own social position. This is meant to produce fair rules that do not favor any individual or group.&lt;br /&gt;
&lt;br /&gt;
Applied to AI, this means that algorithms must be designed not to reinforce existing inequalities but to actively contribute to fairness. As studies have shown, AI systems can unconsciously adopt discriminatory patterns when trained on biased data. Rawls would argue that such systems do not meet the principles of justice, as they fail to ensure that the least advantaged members of society are not further marginalized.&lt;br /&gt;
&lt;br /&gt;
== Bias ==&lt;br /&gt;
In the example above, the AI exhibited bias a systematic distortion. This occurs when an AI application receives training data and information from the real world that contains prejudices, and it does not question them but instead accepts and adopts them as correct. It has learned that, in the past, white men were predominantly hired and therefore sets the attribute “male” as a selection criterion, concluding that this group is the most suitable and thus prefers them.&lt;br /&gt;
&lt;br /&gt;
Such errors must be identified and corrected. While bias cannot be completely avoided, it can be reduced through technical solutions.&lt;br /&gt;
&lt;br /&gt;
There are three broad strategies for minimizing bias in AI systems, commonly described as pre-processing, in-processing, and post-processing. First, potential biases can be removed from the training data before the learning process begins: the data is modified so that attributes such as origin or religion no longer distort the AI’s decisions. Second, the AI can be constrained during learning, with additional mathematical fairness constraints preventing unfair assessments or discrimination. Third, the AI’s outcomes can be evaluated and corrected for fairness after the learning process, as illustrated below.&lt;br /&gt;
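&lt;br /&gt;
To make the third strategy tangible, the toy example below chooses a separate decision threshold for each group so that both groups end up with the same selection rate. The scores are synthetic, and equalizing selection rates (so-called demographic parity) is only one of several competing fairness definitions; choosing among them is itself an ethical decision, not a purely technical one.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy post-processing correction: per-group thresholds chosen so that&lt;br /&gt;
# both groups are selected at the same target rate. Synthetic scores.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
scores_a = rng.normal(0.60, 0.10, 1000)  # model scores, group A&lt;br /&gt;
scores_b = rng.normal(0.50, 0.10, 1000)  # group B scores sit lower on average&lt;br /&gt;
&lt;br /&gt;
target_rate = 0.30  # desired selection rate for every group&lt;br /&gt;
thr_a = np.quantile(scores_a, 1 - target_rate)&lt;br /&gt;
thr_b = np.quantile(scores_b, 1 - target_rate)&lt;br /&gt;
&lt;br /&gt;
# Both groups now clear their own cutoff at roughly the target rate.&lt;br /&gt;
print((scores_a &amp;gt; thr_a).mean(), (scores_b &amp;gt; thr_b).mean())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;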
&lt;br /&gt;
Discrimination in AI systems is not only a technical problem but also touches on fundamental ethical principles such as justice, human dignity, and responsibility. Even if an AI application does not intentionally discriminate, its decisions can still violate these principles, because they have real-world consequences for people. Machines do not act morally, but their algorithms can reinforce existing inequalities and thus raise ethical concerns.&lt;br /&gt;
&lt;br /&gt;
Immanuel Kant formulated the categorical imperative as a universal moral principle: every human being must be treated as an end in themselves and never merely as a means to an end. Applied to AI, this means that algorithms must not be designed in a way that disadvantages people based on prejudice or biased data. An AI system that adopts discriminatory patterns contradicts this principle, as it fails to treat people as equal individuals.&lt;br /&gt;
&lt;br /&gt;
John Rawls developed the concept of justice as fairness, which aims to structure society in such a way that the weakest are not disadvantaged. As described above, his veil of ignorance challenges us to design rules without knowing our own social position, thereby ensuring fair conditions for all. AI systems that discriminate, even without anyone intending it, contradict this principle, as they perpetuate existing inequalities instead of correcting them. To counteract this, algorithms must be actively tested and adjusted for fairness.&lt;br /&gt;
&lt;br /&gt;
Therefore, it is essential to assume not only technical but also ethical responsibility. AI systems must be designed to promote justice and equal opportunity rather than reinforce existing biases. This can be achieved through regular fairness and bias tests as well as through intentional ethical programming. Even if bias cannot be completely eliminated, it can be significantly reduced through targeted measures.&lt;br /&gt;
&lt;br /&gt;
== Transparency and Traceability ==&lt;br /&gt;
On May 21, 2024, the Council of the EU adopted the Artificial Intelligence Act, a comprehensive regulatory framework that sets uniform rules for the use of AI across Europe. The AI Act places strong emphasis on transparency to ensure that AI handles data responsibly and fairly. Systems classified as high-risk must therefore be designed in a way that makes their use comprehensible and understandable. This allows individuals to make informed decisions about whether a particular AI system is appropriate for them. Overall, the goal of increased AI transparency is to empower users and give them more control.&lt;br /&gt;
&lt;br /&gt;
To ensure the transparent use of AI, users must be informed when they are interacting with an AI system. The system&#039;s documentation should explain how the AI works, what it can be used for, and what opportunities and risks it entails. Additionally, information about the development and context of the system is important so that both users and organizations understand its capabilities and limitations.&lt;br /&gt;
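&lt;br /&gt;
What such documentation could look like in machine-readable form is sketched below. The field names echo common &amp;quot;model card&amp;quot; practice and are illustrative choices, not the AI Act’s normative wording; the example system name is invented.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Minimal machine-readable system documentation (&amp;quot;model card&amp;quot; sketch).&lt;br /&gt;
from dataclasses import dataclass, field&lt;br /&gt;
&lt;br /&gt;
@dataclass&lt;br /&gt;
class ModelCard:&lt;br /&gt;
    name: str&lt;br /&gt;
    intended_use: str&lt;br /&gt;
    training_data_origin: str&lt;br /&gt;
    known_limitations: list = field(default_factory=list)&lt;br /&gt;
    risks: list = field(default_factory=list)&lt;br /&gt;
    user_notice: str = &amp;quot;You are interacting with an AI system.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
card = ModelCard(&lt;br /&gt;
    name=&amp;quot;resume-screener-v2&amp;quot;,  # invented example system&lt;br /&gt;
    intended_use=&amp;quot;Pre-ranking of applications; final decision stays with HR.&amp;quot;,&lt;br /&gt;
    training_data_origin=&amp;quot;Historical hiring records 2015-2023, anonymized.&amp;quot;,&lt;br /&gt;
    known_limitations=[&amp;quot;May underperform on unconventional CV layouts.&amp;quot;],&lt;br /&gt;
    risks=[&amp;quot;Can inherit bias from historical hiring decisions.&amp;quot;],&lt;br /&gt;
)&lt;br /&gt;
print(card.user_notice)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;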
&lt;br /&gt;
Transparency is important because it leads to greater traceability. It enables people to understand how AI reaches certain decisions and helps to identify issues, such as potential bias or copyright violations, early on. Moreover, studies show that users are more likely to use transparent AI models. For developers, it is particularly essential to know where the training data comes from, whether it is fair and non-discriminatory, and what risks a particular model might pose.&lt;br /&gt;
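&lt;br /&gt;
Traceability also has a simple operational side: every automated decision can be logged with enough context to reconstruct it later. The sketch below writes a JSON-lines log and hashes the inputs so that a decision remains matchable to its data without storing personal information in the log itself; the field names are one possible convention, not a standard.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Append-only decision log: one JSON record per automated decision.&lt;br /&gt;
import hashlib, json, time&lt;br /&gt;
&lt;br /&gt;
def log_decision(logfile, model_version, inputs, output):&lt;br /&gt;
    record = {&lt;br /&gt;
        &amp;quot;timestamp&amp;quot;: time.time(),&lt;br /&gt;
        &amp;quot;model_version&amp;quot;: model_version,&lt;br /&gt;
        # Hash the inputs so the decision can later be matched to its&lt;br /&gt;
        # data without storing personal information in the log itself.&lt;br /&gt;
        &amp;quot;input_hash&amp;quot;: hashlib.sha256(&lt;br /&gt;
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),&lt;br /&gt;
        &amp;quot;output&amp;quot;: output,&lt;br /&gt;
    }&lt;br /&gt;
    logfile.write(json.dumps(record) + &amp;quot;\n&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
with open(&amp;quot;decisions.jsonl&amp;quot;, &amp;quot;a&amp;quot;) as f:&lt;br /&gt;
    log_decision(f, &amp;quot;resume-screener-v2&amp;quot;, {&amp;quot;applicant_id&amp;quot;: 42}, &amp;quot;shortlist&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;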
&lt;br /&gt;
Even though transparency is crucial, it is not always easy to implement. A careful balance must be struck: users should be given enough information to understand and use a system safely, but not all aspects should be disclosed if doing so poses security or misuse risks.&lt;br /&gt;
&lt;br /&gt;
This is where the discussion around AI intersects closely with epistemological questions: What counts as reliable knowledge? How much do we need to know in order to trust a decision? AI challenges us to rethink our understanding of knowledge, responsibility, and control.&lt;br /&gt;
&lt;br /&gt;
As Olkhovsky emphasizes, transparency is a key prerequisite for people to trust AI systems at all. Only if it is clear how decisions are made can those decisions be questioned or challenged. Without that clarity, responsibility becomes blurred and control over technological decisions slips away from users. Therefore, transparency is not only a technical task but also an ethical imperative: AI systems must be designed to reveal the criteria by which they operate. Transparency not only enables oversight but also reinforces the democratic principle of accountability in the digital age.&lt;br /&gt;
&lt;br /&gt;
== Misinformation, Fake News, and Deepfakes ==&lt;br /&gt;
AI systems are now applied to almost every kind of task: text processing, data analysis on specific topics, or even creating study schedules for exams. We also ask AI questions on all sorts of subjects or consult it about issues that concern us, and we generally assume that its answers are correct. However, the use of AI can also give rise to deepfakes and fake news.&lt;br /&gt;
&lt;br /&gt;
An important aspect of misinformation caused by artificial intelligence is the algorithmic character of the information involved: the efficiency and complexity of the underlying algorithms significantly influence how AI systems process and present information. In contrast to human knowledge, which is based on experience, reflection, and context, AI-generated content is the result of statistical calculations and patterns. This creates the risk of distorted or manipulative content, particularly in the case of deepfakes and fake news. Since AI cannot distinguish between truth and deception but merely calculates probabilities, false or misleading information can be amplified and spread uncritically.&lt;br /&gt;
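&lt;br /&gt;
That generation is driven by probability rather than truth can be shown with a deliberately tiny toy example: the &amp;quot;model&amp;quot; below simply samples the next word from a fixed distribution. Real systems compute such distributions with billions of parameters, but the sampling step is the same in spirit; the vocabulary and the probabilities here are invented for illustration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Generation as sampling: the continuation of &amp;quot;The capital of France is&amp;quot;&lt;br /&gt;
# is drawn from a probability distribution, with no notion of truth.&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
next_token_probs = {&lt;br /&gt;
    &amp;quot;Paris&amp;quot;: 0.80,     # the true answer is merely the most probable one&lt;br /&gt;
    &amp;quot;Lyon&amp;quot;: 0.15,      # plausible-looking, but wrong&lt;br /&gt;
    &amp;quot;Atlantis&amp;quot;: 0.05,  # fluent nonsense still receives probability mass&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
tokens, weights = zip(*next_token_probs.items())&lt;br /&gt;
# With 5% probability the sampler confidently asserts &amp;quot;Atlantis&amp;quot;:&lt;br /&gt;
# this is one mechanism by which fluent misinformation arises.&lt;br /&gt;
print(random.choices(tokens, weights=weights, k=1)[0])&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;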
&lt;br /&gt;
AI systems are constantly learning, and their capabilities improve day by day. They can generate images and videos that appear real. While this may initially be seen as a major advancement, one must ask whether AI-generated visual content is an entirely positive development. A video might, for instance, show a person saying something they never actually said; such fabricated recordings are known as deepfakes. Fake news can spread, reputations can be damaged, and entire elections or public opinions can be manipulated. It is therefore essential that people are able to recognize such content.&lt;br /&gt;
&lt;br /&gt;
AI-generated material can often be identified by subtle details such as limited facial expressions, inconsistent lighting, or unnatural, monotonous speech. It is crucial that we are able to detect such deepfakes so that we are not manipulated or misled by false claims.&lt;br /&gt;
&lt;br /&gt;
When it comes to fake news, we also need to be particularly cautious with regard to AI, as we often accept the answers provided by AI systems like ChatGPT as true and correct without question.&lt;br /&gt;
&lt;br /&gt;
Deepfakes also contribute to the spread of fake news because AI can easily create manipulated content. This becomes especially dangerous during elections, as people can be influenced without realizing it. AI-generated images or videos are designed to be dramatic, fear-inducing, or emotionally charged. Political parties might use deepfake campaign videos to manipulate voters and push them in a certain direction. These political messages are also distributed across multiple channels to gain more attention.&lt;br /&gt;
&lt;br /&gt;
Because AI helps disseminate this content quickly and widely, it can be used to deliberately influence opinions and draw massive attention to certain topics. This can also lead to a divided society, especially when every party holds strong and opposing views. It is therefore essential that people learn to recognize fake news themselves: they should think critically, question information, verify sources, and compare them with other media. Moreover, there must be regulations for election campaigns, and AI-generated content must be labeled; one very simple form of such labeling is sketched below. Above all, transparency is necessary in order to provide insight into the algorithms, so that manipulation can be understood and ultimately contained.&lt;br /&gt;
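&lt;br /&gt;
Labeling can start very simply: provenance information can be attached to generated media at creation time. The sketch below embeds a label in a PNG file using Pillow’s text chunks; the key names are invented for illustration, and production systems would rely on standardized provenance schemes (such as C2PA manifests) rather than this ad-hoc tag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Embed a simple provenance label in a generated PNG via a text chunk.&lt;br /&gt;
# Key/value names are illustrative, not a standardized scheme.&lt;br /&gt;
from PIL import Image&lt;br /&gt;
from PIL.PngImagePlugin import PngInfo&lt;br /&gt;
&lt;br /&gt;
img = Image.new(&amp;quot;RGB&amp;quot;, (64, 64), color=&amp;quot;gray&amp;quot;)  # stand-in for generated output&lt;br /&gt;
&lt;br /&gt;
meta = PngInfo()&lt;br /&gt;
meta.add_text(&amp;quot;ai-generated&amp;quot;, &amp;quot;true&amp;quot;)&lt;br /&gt;
meta.add_text(&amp;quot;generator&amp;quot;, &amp;quot;example-model-v1&amp;quot;)&lt;br /&gt;
img.save(&amp;quot;labeled.png&amp;quot;, pnginfo=meta)&lt;br /&gt;
&lt;br /&gt;
# The label can later be read back by anyone checking the file:&lt;br /&gt;
print(Image.open(&amp;quot;labeled.png&amp;quot;).text)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;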
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Artificial intelligence is increasingly shaping our everyday lives as well as areas such as education, the economy, and society. It brings both opportunities and risks. Since AI operates without moral understanding or a sense of responsibility, the responsibility for its use always lies with humans. Issues such as bias, discrimination, deepfakes, or fake news highlight the importance of critically questioning AI and regulating it through ethical and societal measures.&lt;br /&gt;
&lt;br /&gt;
Despite its useful functions, AI should not be seen as the automatic solution to every challenge. Those who rely on it too heavily risk unlearning how to think independently and may adopt its suggestions without reflection. It is important first to seek one’s own solutions and to use AI selectively and consciously.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the responsible use of AI requires clear ethical guidelines and transparent regulation. AI decisions are often difficult to understand, which is why society must actively question how AI is controlled and applied. The discussion about AI is not only technical, but also philosophical—it challenges us to rethink our understanding of knowledge, responsibility, and freedom of choice.&lt;br /&gt;
&lt;br /&gt;
Given the rapid development of technology, a responsible approach to AI is essential—especially with regard to data protection and personal information. Just as we are cautious about protecting our data, we should act with the same care when dealing with artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12933</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12933"/>
		<updated>2025-06-11T17:33:26Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: Sarah Kero et al. (2023): Bekanntheit und Akzeptanz von ChatGPT in Deutschland. Meinungsmonitor Künstliche Intelligenz&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts from Kant and Rawls, the analysis examines how AI systems can reinforce existing inequalities and what measures are necessary to ensure a fair and ethically sound use of this technology. The paper argues that only through a combination of technical solutions, ethical guidelines, and legal regulation, such as the EU AI Act, can a balance be achieved between technological progress and social justice.&lt;br /&gt;
&lt;br /&gt;
== Introduction and Relevance of the Topic ==&lt;br /&gt;
Since the launch of ChatGPT in November 2022, artificial intelligence has made an unprecedented entry into our daily lives. This new generation of AI systems, which is already being used by hundreds of millions of people just two and a half years later, marks a turning point in the digitalization of our society. Almost every person in the digitally connected world regularly interacts with AI applications.&lt;br /&gt;
&lt;br /&gt;
The global use of AI technologies is steadily increasing. According to forecasts, the number of users worldwide is expected to grow to approximately 826.2 million by 2025. AI is also gaining increasing importance in fields such as education, business, medicine, justice, and public administration.&lt;br /&gt;
&lt;br /&gt;
One reason for this is the efficiency of AI, as it saves time, automates processes, and delivers fast results. However, with the rapid development of AI, new considerations arise. When machines write texts, prepare decisions, or even make moral judgments, questions emerge such as: What distinguishes a machine-made decision from a human one? Who is responsible when errors occur or people are disadvantaged?&lt;br /&gt;
&lt;br /&gt;
It is no longer just about simple applications. In philosophical discussions, the question increasingly arises whether so-called &amp;quot;strong AI&amp;quot;, that is, machines with consciousness or genuine thinking of their own, is even possible. Olkhovsky addresses this idea and refers to the example from the film &#039;&#039;Ex Machina&#039;&#039;: an AI that appears so human-like that it seems almost impossible to distinguish it from a real person. But what separates a convincing imitation from genuine consciousness? Can machines truly “be” like us—or does it ultimately remain a complex simulation?&lt;br /&gt;
&lt;br /&gt;
AI confronts us with fundamental questions about what knowledge is, how we can recognize truth, and what role humans play in an increasingly automated world.&lt;br /&gt;
&lt;br /&gt;
== Artificial Intelligence ==&lt;br /&gt;
Artificial intelligence (AI) is generally understood as a subfield of computer science that focuses on the development of systems capable of performing tasks that typically require human intelligence, such as problem-solving, language understanding, or pattern recognition.&lt;br /&gt;
&lt;br /&gt;
In scientific discussions, a distinction is often made between weak AI, which is specialized in narrowly defined tasks (e.g., voice assistants), and strong AI, which could develop human-like consciousness or genuine thinking. This concept remains largely theoretical but raises profound ethical and epistemological questions.&lt;br /&gt;
&lt;br /&gt;
Additionally, a distinction is made between knowledge-based AI, which operates using symbolic logic, and so-called machine learning, which is based on statistical methods. The latter has made significant advances in recent years, intensifying the question of how it differs from human understanding.&lt;br /&gt;
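&lt;br /&gt;
The difference can be made concrete with a deliberately small sketch; all words and weights below are invented for illustration. A knowledge-based system follows rules a human wrote down, whereas a machine-learning system applies weights estimated from example data:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Knowledge-based: explicit, human-authored logic.&lt;br /&gt;
def symbolic_is_spam(text):&lt;br /&gt;
    return &#039;lottery&#039; in text or &#039;prize&#039; in text&lt;br /&gt;
&lt;br /&gt;
# Statistical: the rule is a set of weights that would normally be&lt;br /&gt;
# estimated from labeled examples; here they are faked for illustration.&lt;br /&gt;
WEIGHTS = {&#039;lottery&#039;: 0.9, &#039;meeting&#039;: -0.4}&lt;br /&gt;
&lt;br /&gt;
def learned_is_spam(text):&lt;br /&gt;
    score = sum(WEIGHTS.get(word, 0.0) for word in text.split())&lt;br /&gt;
    return score &amp;gt; 0.5&lt;br /&gt;
&lt;br /&gt;
message = &#039;you won the lottery&#039;&lt;br /&gt;
print(symbolic_is_spam(message), learned_is_spam(message))  # True True&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;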
&lt;br /&gt;
With the increasing use of AI, the question arises not only of what machines are capable of but also what this means for us humans. What exactly does it mean when machines “learn,” “understand,” or “decide”? And what happens when we begin to delegate our own thinking to machines? Is AI truly a form of cognitive processing, or merely a highly advanced simulation of human behavior?&lt;br /&gt;
&lt;br /&gt;
It is emphasized that AI systems ultimately reflect our own thinking. They adopt patterns, logics, and assumptions from the data they were trained on, but they do not understand them in the human sense. AI lacks the deeper understanding of causality and context that characterizes human thought. This gap between statistical pattern recognition and genuine understanding is central to the philosophical discourse on AI.&lt;br /&gt;
&lt;br /&gt;
The increasing automation of creative processes by AI raises the question of whether human thinking and creativity are being displaced. Hannah Arendt emphasized that thinking is more than mere information processing; it is connected with reflection and responsibility. Nick Bostrom also warns that excessive dependence on AI could restrict human freedom of decision.&lt;br /&gt;
&lt;br /&gt;
What does it mean for our humanity when processes once considered uniquely human, such as thinking, decision-making, or creativity, are increasingly taken over by machines?&lt;br /&gt;
&lt;br /&gt;
== Responsibility &amp;amp; Decision-Making ==&lt;br /&gt;
When we ask an AI a question, we usually receive a fitting answer almost instantly. This seems so natural that we rarely stop to consider how the answer is actually generated—or whether it is truly correct. And therein lies an ethical challenge: unlike humans, who make decisions based on experience, values, or moral convictions, AI operates purely statistically. It identifies patterns and produces what seems most probable based on its training data, without “understanding” what is good, right, or wrong.&lt;br /&gt;
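&lt;br /&gt;
This purely statistical mode of operation can be shown in a toy sketch. The vocabulary and scores below are invented; real language models rank tens of thousands of tokens, but the principle of picking the statistically most likely continuation is the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
# Invented raw scores for possible next words.&lt;br /&gt;
logits = {&#039;good&#039;: 2.1, &#039;right&#039;: 1.4, &#039;wrong&#039;: 0.3}&lt;br /&gt;
&lt;br /&gt;
# Softmax turns the raw scores into probabilities.&lt;br /&gt;
total = sum(math.exp(v) for v in logits.values())&lt;br /&gt;
probs = {word: math.exp(v) / total for word, v in logits.items()}&lt;br /&gt;
&lt;br /&gt;
# The output is simply the most probable continuation; nothing here&lt;br /&gt;
# checks whether it is true, fair, or morally sound.&lt;br /&gt;
print(max(probs, key=probs.get))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;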
&lt;br /&gt;
Such systems now not only accompany us when writing or researching; they also influence what we see on social media, which news is shown to us, and how decisions are made in professional contexts. The more we place our trust in AI, the greater the responsibility we must take on ourselves, even if it may seem at first glance as though the machine has everything under control.&lt;br /&gt;
&lt;br /&gt;
According to Daniel Wessel, this reveals a central problem: AI systems cannot make moral decisions, at least not in the human sense. They have no values of their own or awareness of responsibility. That is why it is all the more important that such technologies follow clear ethical guidelines. The people who develop or use them must define in advance what is to be considered right, fair, or transparent.&lt;br /&gt;
&lt;br /&gt;
Which values should be embedded in AI in the first place? And who decides that? What we perceive as “right” often depends on cultural, societal, or individual contexts. To ensure AI is used responsibly, these values must be consciously named and integrated into the systems. This requires not only technical know-how but above all a societal and ethical debate. &lt;br /&gt;
&lt;br /&gt;
Ultimately, AI remains a tool. Whether it acts “rightly” in a given situation depends not only on the algorithm but above all on how we design it, use it, and question it critically.&lt;br /&gt;
&lt;br /&gt;
== Discrimination and Bias in AI Systems ==&lt;br /&gt;
It is often assumed that artificial intelligence evaluates things objectively and neutrally. In practice, however, it becomes clear that AI systems can adopt and even reinforce social biases. This can occur when the information an AI relies on is based on already biased data. An example of this is the increasing use of automated recruitment processes in companies. Here, applicants may be excluded because of their ethnic background without a human ever having looked at the application.&lt;br /&gt;
&lt;br /&gt;
A 2024 study by the University of Washington shows that discrimination can occur in AI-driven recruitment processes. The researchers analyzed 554 résumés and 571 job postings, generating more than 3 million combinations of names and job positions. They then altered the names on 120 résumés, replacing typically white-sounding names with names commonly associated with the Black population.&lt;br /&gt;
&lt;br /&gt;
The results were clear: in 85% of the cases, the AI favored names typically associated with white individuals, while only 9% of the preferred names were linked to Black individuals. Additionally, the AI selected male candidates 52% of the time, even for roles predominantly held by women, such as HR positions (77% female representation) or teaching jobs (57% female representation). White women were also more likely to be selected than Black women.&lt;br /&gt;
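&lt;br /&gt;
The logic of such an audit can be sketched in a few lines. This is a simplified, hypothetical harness rather than the study&#039;s actual code: screen_resume stands in for whatever screening model is under test, and the name lists are placeholders supplied by the auditor:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from itertools import product&lt;br /&gt;
&lt;br /&gt;
def audit(screen_resume, resumes, jobs, names_by_group):&lt;br /&gt;
    # Pair identical resumes with names from different groups and&lt;br /&gt;
    # count how often each group&#039;s name receives the top score.&lt;br /&gt;
    wins = {group: 0 for group in names_by_group}&lt;br /&gt;
    trials = 0&lt;br /&gt;
    for resume, job in product(resumes, jobs):&lt;br /&gt;
        scores = {group: max(screen_resume(resume, job, name) for name in names)&lt;br /&gt;
                  for group, names in names_by_group.items()}&lt;br /&gt;
        wins[max(scores, key=scores.get)] += 1&lt;br /&gt;
        trials += 1&lt;br /&gt;
    # Preference rate per group; large gaps indicate potential bias.&lt;br /&gt;
    return {group: count / trials for group, count in wins.items()}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;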
&lt;br /&gt;
This example illustrates how AI systems derive selection criteria from existing data, which can lead them to unintentionally adopt and perpetuate discriminatory structures from real-world practices. From a philosophical perspective, this raises a fundamental question: if our knowledge about the world, in this case about job applicants, is based on data that is itself biased, how reliable is that knowledge? Epistemology asks what we truly know when we receive information. AI systems process data, but they do not understand it in the human sense. Their decisions are based on patterns, not on insight or moral reasoning. As Olkhovsky also emphasizes, machines lack the intentionality and awareness that give human decisions their moral depth.&lt;br /&gt;
&lt;br /&gt;
That is why it is even more important that AI does not make the final selection of applicants alone. To ensure a fair and just hiring process, human oversight by HR professionals is essential so that they can intervene if there is a suspicion of discrimination. Moreover, the automated AI recruitment process should be adapted in such a way that it either completely avoids discrimination or at least minimizes it. This could be achieved through regular fairness and bias tests to identify and address problematic patterns early on.&lt;br /&gt;
&lt;br /&gt;
In addition, John Rawls’ concept of justice as fairness offers a philosophical foundation for the discussion of discrimination and bias in AI systems. His famous thought experiment, the veil of ignorance, asks us to imagine a society in which no one knows their own social position. This is meant to produce fair rules that do not favor any individual or group.&lt;br /&gt;
&lt;br /&gt;
Applied to AI, this means that algorithms must be designed not to reinforce existing inequalities but to actively contribute to fairness. As studies have shown, AI systems can unconsciously adopt discriminatory patterns when trained on biased data. Rawls would argue that such systems do not meet the principles of justice, as they fail to ensure that the least advantaged members of society are not further marginalized.&lt;br /&gt;
&lt;br /&gt;
== Bias ==&lt;br /&gt;
In the example above, the AI exhibited bias, a systematic distortion. This occurs when an AI application receives training data and information from the real world that contain prejudices, and it does not question them but instead accepts and adopts them as correct. It has learned that, in the past, predominantly white men were hired, and it therefore sets the attribute “male” as a selection criterion, concluding that this group is the most suitable and preferring it.&lt;br /&gt;
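&lt;br /&gt;
A deliberately naive sketch with invented records shows how this happens: a model that merely reproduces historical hiring frequencies will prefer whichever group was hired most often in the past.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Invented historical records: (group, was_hired).&lt;br /&gt;
history = [(&#039;male&#039;, True), (&#039;male&#039;, True), (&#039;male&#039;, True),&lt;br /&gt;
           (&#039;female&#039;, False), (&#039;female&#039;, True), (&#039;female&#039;, False)]&lt;br /&gt;
&lt;br /&gt;
def hire_rate(group):&lt;br /&gt;
    outcomes = [hired for g, hired in history if g == group]&lt;br /&gt;
    return sum(outcomes) / len(outcomes)&lt;br /&gt;
&lt;br /&gt;
# A rule that scores applicants by their group&#039;s past hire rate turns&lt;br /&gt;
# the historical imbalance itself into the selection criterion.&lt;br /&gt;
print(hire_rate(&#039;male&#039;), hire_rate(&#039;female&#039;))  # 1.0 vs 0.33...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;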
&lt;br /&gt;
Such errors must be identified and corrected. While bias cannot be completely avoided, it can be reduced through technical solutions.&lt;br /&gt;
&lt;br /&gt;
There are three strategies that can be used to minimize bias in AI systems. First, potential biases can be removed from the training data before the learning process begins (pre-processing). This involves modifying the data so that attributes such as origin or religion no longer have a distorting effect on the AI’s decisions. Second, the AI can be programmed so that additional mathematical constraints prevent it from making unfair assessments or discriminating during learning (in-processing). Third, the AI’s outcomes are evaluated and corrected for fairness after the learning process (post-processing).&lt;br /&gt;
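&lt;br /&gt;
As a sketch of the first strategy, one common pre-processing idea, known in the fairness literature as reweighing, gives each combination of group and outcome a weight chosen so that group membership and outcome look statistically independent in the training data. The toy data below is invented:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from collections import Counter&lt;br /&gt;
&lt;br /&gt;
def reweigh(samples):&lt;br /&gt;
    # samples: (group, outcome) pairs from the training data.&lt;br /&gt;
    n = len(samples)&lt;br /&gt;
    group = Counter(g for g, y in samples)&lt;br /&gt;
    outcome = Counter(y for g, y in samples)&lt;br /&gt;
    joint = Counter(samples)&lt;br /&gt;
    # Weight = expected frequency under independence / observed frequency.&lt;br /&gt;
    return {(g, y): (group[g] / n) * (outcome[y] / n) / (joint[(g, y)] / n)&lt;br /&gt;
            for (g, y) in joint}&lt;br /&gt;
&lt;br /&gt;
data = [(&#039;m&#039;, 1), (&#039;m&#039;, 1), (&#039;m&#039;, 0), (&#039;f&#039;, 0), (&#039;f&#039;, 0), (&#039;f&#039;, 1)]&lt;br /&gt;
print(reweigh(data))  # under-represented combinations get weights above 1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;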
&lt;br /&gt;
Discrimination in AI systems is not only a technical problem but also touches on fundamental ethical principles such as justice, human dignity, and responsibility. Even if an AI application does not intentionally discriminate, it still violates these principles because its decisions have real-world consequences for people. Machines do not act morally, but their algorithms can reinforce existing inequalities and thus raise ethical concerns.&lt;br /&gt;
&lt;br /&gt;
Immanuel Kant formulated the categorical imperative as a universal moral principle: every human being must be treated as an end in themselves and never merely as a means to an end. Applied to AI, this means that algorithms must not be designed in a way that disadvantages people based on prejudice or biased data. An AI system that adopts discriminatory patterns contradicts this principle, as it fails to treat people as equal individuals.&lt;br /&gt;
&lt;br /&gt;
John Rawls developed the concept of justice as fairness, which aims to structure society in such a way that it does not disadvantage the weakest. His veil of ignorance challenges us to design rules without knowing our own social position—thereby ensuring fair conditions for all. AI systems that unconsciously discriminate contradict this principle, as they perpetuate existing inequalities instead of correcting them. To counteract this, algorithms must be actively tested and adjusted for fairness.&lt;br /&gt;
&lt;br /&gt;
Therefore, it is essential to assume not only technical but also ethical responsibility. AI systems must be designed to promote justice and equal opportunity rather than reinforce existing biases. This can be achieved through regular fairness and bias tests as well as through intentional ethical programming. Even if bias cannot be completely eliminated, it can be significantly reduced through targeted measures.&lt;br /&gt;
&lt;br /&gt;
== Transparency and Traceability ==&lt;br /&gt;
On May 21, 2024, the Council of the EU adopted the EU Artificial Intelligence Act, a comprehensive regulatory framework that sets uniform rules for the use of AI across Europe. The AI Act places strong emphasis on transparency to ensure that AI handles data responsibly and fairly. Systems considered particularly high-risk must therefore be designed in a way that makes their use comprehensible and understandable. This allows individuals to make informed decisions about whether a particular AI system is appropriate for them. Overall, the goal of increased AI transparency is to empower users and give them more control.&lt;br /&gt;
&lt;br /&gt;
To ensure the transparent use of AI, users must be informed when they are interacting with an AI system. The system&#039;s documentation should explain how the AI works, what it can be used for, and what opportunities and risks it entails. Additionally, information about the development and context of the system is important so that both users and organizations understand its capabilities and limitations.&lt;br /&gt;
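&lt;br /&gt;
What such documentation could look like in machine-readable form is sketched below; the field names are illustrative assumptions, not the AI Act&#039;s literal requirements:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from dataclasses import dataclass, field&lt;br /&gt;
&lt;br /&gt;
@dataclass&lt;br /&gt;
class SystemDocumentation:&lt;br /&gt;
    name: str&lt;br /&gt;
    intended_use: str&lt;br /&gt;
    limitations: list = field(default_factory=list)&lt;br /&gt;
    risks: list = field(default_factory=list)&lt;br /&gt;
    training_data_origin: str = &#039;undocumented&#039;&lt;br /&gt;
    discloses_ai_interaction: bool = True  # users must know they face an AI&lt;br /&gt;
&lt;br /&gt;
doc = SystemDocumentation(&lt;br /&gt;
    name=&#039;ExampleChat&#039;,&lt;br /&gt;
    intended_use=&#039;drafting and summarizing text&#039;,&lt;br /&gt;
    limitations=[&#039;answers may be wrong or outdated&#039;],&lt;br /&gt;
    risks=[&#039;bias inherited from training data&#039;],&lt;br /&gt;
    training_data_origin=&#039;public web corpus (illustrative)&#039;,&lt;br /&gt;
)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;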
&lt;br /&gt;
Transparency is important because it leads to greater traceability. It enables people to understand how AI reaches certain decisions and helps to identify issues early on, such as potential bias or copyright violations. Moreover, studies show that users are more likely to use transparent AI models. For developers, it is particularly essential to know where the training data comes from, whether it is fair and non-discriminatory, and what risks a particular model might pose.&lt;br /&gt;
&lt;br /&gt;
Even though transparency is crucial, it is not always easy to implement. A careful balance must be struck: users should be given enough information to understand and use a system safely, but not all aspects should be disclosed if doing so poses security or misuse risks.&lt;br /&gt;
&lt;br /&gt;
This is where the discussion around AI intersects closely with epistemological questions: What counts as reliable knowledge? How much do we need to know in order to trust a decision? AI challenges us to rethink our understanding of knowledge, responsibility, and control.&lt;br /&gt;
&lt;br /&gt;
As Olkhovsky emphasizes, transparency is a key prerequisite for people to trust AI systems at all. Only if it is clear how decisions are made can those decisions be questioned or challenged. Without that clarity, responsibility becomes blurred and control over technological decisions slips away from users. Therefore, transparency is not only a technical task but also an ethical imperative: AI systems must be designed to reveal the criteria by which they operate. Transparency not only enables oversight but also reinforces the democratic principle of accountability in the digital age.&lt;br /&gt;
&lt;br /&gt;
== Misinformation, Fake News and Deepfakes ==&lt;br /&gt;
AI systems are used for almost everything. They are applied to a wide range of tasks, such as text processing, data analysis on specific topics, or even creating study schedules for exams. Additionally, we ask AI questions on various subjects or consult it about issues that concern us, and we generally assume that its answers are correct. However, AI can also be used to create deepfakes and spread fake news.&lt;br /&gt;
&lt;br /&gt;
One perspective on misinformation caused by artificial intelligence comes from algorithmic information theory, which concerns the efficiency and complexity of algorithms and thus shapes how AI systems process and present information. In contrast to human knowledge, which is based on experience, reflection, and context, AI-generated content is the result of statistical calculations and patterns. This creates the risk of distorted or manipulative content, particularly in the case of deepfakes and fake news. Since AI cannot distinguish between truth and deception but merely calculates probabilities, false or misleading information can be amplified and spread uncritically.&lt;br /&gt;
&lt;br /&gt;
AI systems are constantly learning and improving their capabilities day by day. They can generate images or videos that appear to be real. While this may initially be seen as a major advancement, one must ask whether AI-generated visual content is truly a positive development. For instance, a video might show a person saying something they never actually said—this is where deepfakes come in. Fake news can spread, reputations can be damaged, and entire elections or public opinions can be manipulated. It is essential for people to be able to recognize such content.&lt;br /&gt;
&lt;br /&gt;
AI-generated material can often be identified by subtle details such as limited facial expressions, inconsistent lighting, or unnatural, monotonous speech. It is crucial that we are able to detect these deepfakes so that we are not manipulated or misled by false claims.&lt;br /&gt;
&lt;br /&gt;
When it comes to fake news, we also need to be particularly cautious with AI, since we often accept the answers of systems like ChatGPT as correct without questioning them.&lt;br /&gt;
&lt;br /&gt;
Deepfakes also contribute to the spread of fake news because AI can easily create manipulated content. This becomes especially dangerous during elections, as people can be influenced without realizing it. AI-generated images or videos are designed to be dramatic, fear-inducing, or emotionally charged. Political parties might use deepfake campaign videos to manipulate voters and push them in a certain direction. These political messages are also distributed across multiple channels to gain more attention.&lt;br /&gt;
&lt;br /&gt;
Because AI helps disseminate this content quickly and widely, it can be used to deliberately influence opinions and draw massive attention to certain topics. This can also deepen divisions in society, especially when opposing camps hold strong and irreconcilable views. It is therefore essential that people learn to recognize fake news themselves: they should think critically, question information, verify sources, and compare coverage across different media. Moreover, there must be regulations for election campaigns, and AI-generated content must be labeled. Above all, transparency is necessary in order to provide insight into the algorithms. This way, manipulation can be understood and ultimately contained.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
Artificial intelligence is increasingly shaping our everyday lives as well as areas such as education, the economy, and society. It brings both opportunities and risks. Since AI operates without moral understanding or a sense of responsibility, the responsibility for its use always lies with humans. Issues such as bias, discrimination, deepfakes, or fake news highlight the importance of critically questioning AI and regulating it through ethical and societal measures.&lt;br /&gt;
&lt;br /&gt;
Despite its useful functions, AI should not be seen as the automatic solution to every challenge. Those who rely too heavily on it risk unlearning how to think independently and may adopt decisions without reflection. It is important to first seek one’s own solutions and use AI selectively and consciously.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the responsible use of AI requires clear ethical guidelines and transparent regulation. AI decisions are often difficult to understand, which is why society must actively question how AI is controlled and applied. The discussion about AI is not only technical, but also philosophical—it challenges us to rethink our understanding of knowledge, responsibility, and freedom of choice.&lt;br /&gt;
&lt;br /&gt;
Given the rapid development of technology, a responsible approach to AI is essential—especially with regard to data protection and personal information. Just as we are cautious about protecting our data, we should act with the same care when dealing with artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Hutan Ashrafian (2022): Engineering a social contract: Rawlsian distributive justice through algorithmic game theory and artificial intelligence. Springer.&lt;br /&gt;
&lt;br /&gt;
[2] Nick Bostrom (2012): The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines.&lt;br /&gt;
&lt;br /&gt;
[3] Bundesamt für Sicherheit in der Informationstechnik (2024): Deepfakes – Gefahren und Gegenmaßnahmen. bsi.bund.de.&lt;br /&gt;
&lt;br /&gt;
[4] Bundesamt für Sicherheit in der Informationstechnik (2024): Transparenz von KI-Systemen. bsi.bund.de.&lt;br /&gt;
&lt;br /&gt;
[5] Fisher Phillips (2024): New Study Shows AI Resume Screeners Prefer White Male Candidates: Your 5-Step Blueprint to Prevent AI Discrimination in Hiring. Fisher Phillips.&lt;br /&gt;
&lt;br /&gt;
[6] Gabler Wirtschaftslexikon (n.d.): Künstliche Intelligenz (KI). Gabler Wirtschaftslexikon.&lt;br /&gt;
&lt;br /&gt;
[7] Iason Gabriel (2022): Toward a Theory of Justice for Artificial Intelligence. Daedalus.&lt;br /&gt;
&lt;br /&gt;
[8] Carl Friedrich Gethmann et al. (2021): Künstliche Intelligenz in der Forschung. Springer Nature.&lt;br /&gt;
&lt;br /&gt;
[9] Moreen Heine et al. (2023): Künstliche Intelligenz in öffentlichen Verwaltungen. Springer.&lt;br /&gt;
&lt;br /&gt;
[10] Sarah Kero et al. (2023): Bekanntheit und Akzeptanz von ChatGPT in Deutschland. Meinungsmonitor Künstliche Intelligenz.&lt;br /&gt;
&lt;br /&gt;
[11] Carlos Mougan &amp;amp; Joshua Brand (2024): Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics. arXiv.&lt;br /&gt;
&lt;br /&gt;
[12] Katja Muñoz (2025): Systematische Manipulation sozialer Medien im Zeitalter der KI. DGAP.&lt;br /&gt;
&lt;br /&gt;
[13] Petra Pohlmann et al. (2022): Künstliche Intelligenz, Bias und Versicherungen – Eine technische und rechtliche Analyse. Springer.&lt;br /&gt;
&lt;br /&gt;
[14] Statista (2025): Number of artificial intelligence (AI) tool users globally from 2021 to 2031. Statista.&lt;br /&gt;
&lt;br /&gt;
[15] Rosalie R. Waelen (2025): Rethinking Automation and the Future of Work with Hannah Arendt. Springer.&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12887</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12887"/>
		<updated>2025-06-09T17:20:22Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts from Kant and Rawls, the analysis examines how AI systems can reinforce existing inequalities and what measures are necessary to ensure a fair and ethically sound use of this technology. The paper argues that only through a combination of technical solutions, ethical guidelines, and legal regulation such as the EU AI Act—can a balance be achieved between technological progress and social justice.&lt;br /&gt;
&lt;br /&gt;
== Introduction and Relevance of the Topic ==&lt;br /&gt;
Since the launch of ChatGPT in November 2022, artificial intelligence has made an unprecedented entry into our daily lives. This new generation of AI systems, which is already being used by hundreds of millions of people just two and a half years later, marks a turning point in the digitalization of our society. Almost every person in the digitally connected world regularly interacts with AI applications.&lt;br /&gt;
&lt;br /&gt;
The global use of AI technologies is steadily increasing. According to forecasts, the number of users worldwide is expected to grow to approximately 826.2 million by 2025. Beyond the education sector, AI is also gaining increasing importance in other fields such as business, medicine, justice, and public administration.&lt;br /&gt;
&lt;br /&gt;
One reason for this is the efficiency of AI, as it saves time, automates processes, and delivers fast results. However, with the rapid development of AI, new considerations arise. When machines write texts, prepare decisions, or even make moral judgments, questions emerge such as: What distinguishes a machine-made decision from a human one? Who is responsible when errors occur or people are disadvantaged?&lt;br /&gt;
&lt;br /&gt;
It is no longer just about simple applications. In philosophical discussions, the question increasingly arises whether so-called &amp;quot;strong AI&amp;quot; machines with consciousness or their own thinking is even possible. Olkhovsky addresses this idea and refers to the example from the film &#039;&#039;Ex Machina&#039;&#039;: an AI that appears so human-like that it seems almost impossible to distinguish it from a real person. But what separates a convincing imitation from genuine consciousness? Can machines truly “be” like us—or does it ultimately remain a complex simulation?&lt;br /&gt;
&lt;br /&gt;
AI confronts us with fundamental questions about what knowledge is, how we can recognize truth, and what role humans play in an increasingly automated world.&lt;br /&gt;
&lt;br /&gt;
== Artificial Intelligence ==&lt;br /&gt;
Artificial intelligence (AI) is generally understood as a subfield of computer science that focuses on the development of systems capable of performing tasks that typically require human intelligence, such as problem-solving, language understanding, or pattern recognition.&lt;br /&gt;
&lt;br /&gt;
In scientific discussions, a distinction is often made between weak AI, which is specialized in narrowly defined tasks (e.g., voice assistants), and strong AI, which could develop human-like consciousness or genuine thinking. This concept remains largely theoretical but raises profound ethical and epistemological questions.&lt;br /&gt;
&lt;br /&gt;
Additionally, a distinction is made between knowledge-based AI, which operates using symbolic logic, and so-called machine learning, which is based on statistical methods. The latter has made significant advances in recent years, intensifying the question of how it differs from human understanding.&lt;br /&gt;
&lt;br /&gt;
With the increasing use of AI, the question arises not only of what machines are capable of but also what this means for us humans. What exactly does it mean when machines “learn,” “understand,” or “decide”? And what happens when we begin to delegate our own thinking to machines? Is AI truly a form of cognitive processing, or merely a highly advanced simulation of human behavior?&lt;br /&gt;
&lt;br /&gt;
It is emphasized that AI systems ultimately reflect our own thinking. They adopt patterns, logics, and assumptions from the data they were trained on, but they do not understand them in the human sense. AI lacks the deeper understanding of causality and context that characterizes human thought. This gap between statistical pattern recognition and genuine understanding is central to the philosophical discourse on AI.&lt;br /&gt;
&lt;br /&gt;
The increasing automation of creative processes by AI raises the question of whether human thinking and creativity are being displaced. Hannah Arendt emphasized that thinking is more than mere information processing; it is connected with reflection and responsibility. Nick Bostrom also warns that excessive dependence on AI could restrict human freedom of decision.&lt;br /&gt;
&lt;br /&gt;
What does it mean for our humanity when processes once considered uniquely human such as thinking, decision-making, or creativity are increasingly taken over by machines?&lt;br /&gt;
&lt;br /&gt;
== Responsibility &amp;amp; Decision-Making ==&lt;br /&gt;
When we ask an AI a question, we usually receive a fitting answer almost instantly. This seems so natural that we rarely stop to consider how the answer is actually generated—or whether it is truly correct. And therein lies an ethical challenge: unlike humans, who make decisions based on experience, values, or moral convictions, AI operates purely statistically. It identifies patterns and produces what seems most probable based on its training data, without “understanding” what is good, right, or wrong.&lt;br /&gt;
&lt;br /&gt;
Such systems now accompany us not only when writing or researching, but also influence what we see on social media, which news is shown to us, or how decisions are made in professional contexts. The more we place our trust in AI, the greater the responsibility we must take on ourselves even if it may seem at first glance as though the machine has everything under control.&lt;br /&gt;
&lt;br /&gt;
According to Daniel Wessel, this reveals a central problem: AI systems cannot make moral decisions, at least not in the human sense. They have no values of their own or awareness of responsibility. That is why it is all the more important that such technologies follow clear ethical guidelines. The people who develop or use them must define in advance what is to be considered right, fair, or transparent.&lt;br /&gt;
&lt;br /&gt;
Which values should be embedded in AI in the first place? And who decides that? What we perceive as “right” often depends on cultural, societal, or individual contexts. To ensure AI is used responsibly, these values must be consciously named and integrated into the systems. This requires not only technical know-how but above all a societal and ethical debate. &lt;br /&gt;
&lt;br /&gt;
Ultimately, AI remains a tool. Whether it acts “rightly” in a given situation depends not only on the algorithm but above all on how we design it, use it, and question it critically.&lt;br /&gt;
&lt;br /&gt;
== Discrimination and Bias in AI Systems ==&lt;br /&gt;
It is often assumed that artificial intelligence evaluates things objectively and neutrally. In practice, however, AI systems can adopt and even reinforce social biases. This occurs when the data an AI relies on is already biased. An example is the increasing use of automated recruitment processes in companies, where applicants may be excluded because of their ethnic background without a human ever having looked at the application.&lt;br /&gt;
&lt;br /&gt;
A 2024 study by the University of Washington shows that discrimination can occur in AI-driven recruitment processes. The researchers analyzed 554 résumés and 571 job postings, generating more than three million combinations of names and job positions. To do so, they varied the names on the résumés, drawing on 120 first names typically associated with either white or Black men and women.&lt;br /&gt;
&lt;br /&gt;
The results were clear: in 85% of the cases, the AI favored names typically associated with white individuals, while only 9% of the preferred names were linked to Black individuals. Additionally, the AI selected male candidates 52% of the time, even for roles predominantly held by women, such as HR positions (77% female representation) or teaching jobs (57% female representation). White women were also more likely to be selected than Black women.&lt;br /&gt;
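&lt;br /&gt;
The study’s core technique, holding a résumé constant while swapping demographically associated names, can be sketched as follows; the résumé text, the example names, and the score_resume function are hypothetical stand-ins for the study’s actual materials and model:&lt;br /&gt;
&lt;br /&gt;
&lt;syntaxhighlight lang=&quot;python&quot;&gt;
# Hypothetical name-substitution audit: the resume text stays identical
# and only the name varies, so any score gap must come from the name.
NAME_GROUPS = {
    &quot;white_associated&quot;: [&quot;Greg&quot;, &quot;Emily&quot;],
    &quot;black_associated&quot;: [&quot;Jamal&quot;, &quot;Lakisha&quot;],
}
TEMPLATE = &quot;{name}. Five years of software engineering experience.&quot;

def score_resume(text):
    # Placeholder: a real audit would call the screening model here.
    return 1.0

for group, names in NAME_GROUPS.items():
    scores = [score_resume(TEMPLATE.format(name=n)) for n in names]
    print(group, sum(scores) / len(scores))  # compare group averages
&lt;/syntaxhighlight&gt;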
&lt;br /&gt;
This example illustrates how AI systems derive selection criteria from existing data, which can lead them to unintentionally adopt and perpetuate discriminatory structures from real-world practices. From a philosophical perspective, this raises a fundamental question: if our knowledge about the world, in this case about job applicants, is based on data that is itself biased, how reliable is that knowledge? Epistemology asks what we truly know when we receive information. AI systems process data, but they do not understand it in the human sense. Their decisions are based on patterns, not on insight or moral reasoning. As Olkhovsky emphasizes, machines lack the intentionality and awareness that give human decisions their moral depth.&lt;br /&gt;
&lt;br /&gt;
That is why it is all the more important that AI does not make the final selection of applicants alone. To ensure a fair and just hiring process, human oversight by HR professionals is essential, so that they can intervene if there is a suspicion of discrimination. Moreover, the automated recruitment process should be adapted so that it avoids discrimination entirely or at least minimizes it. This can be achieved through regular fairness and bias tests that identify and address problematic patterns early on.&lt;br /&gt;
&lt;br /&gt;
In addition, John Rawls’ concept of justice as fairness offers a philosophical foundation for the discussion of discrimination and bias in AI systems. His famous thought experiment, the veil of ignorance, asks us to imagine a society in which no one knows their own social position. This is meant to produce fair rules that do not favor any individual or group.&lt;br /&gt;
&lt;br /&gt;
Applied to AI, this means that algorithms must be designed not to reinforce existing inequalities but to actively contribute to fairness. As studies have shown, AI systems can unconsciously adopt discriminatory patterns when trained on biased data. Rawls would argue that such systems do not meet the principles of justice, as they fail to ensure that the least advantaged members of society are not further marginalized.&lt;br /&gt;
&lt;br /&gt;
== Bias ==&lt;br /&gt;
In the example above, the AI exhibited bias: a systematic distortion. Bias arises when an AI application receives training data and real-world information that contain prejudices and, rather than questioning them, accepts and adopts them as correct. The system has learned that, in the past, predominantly white men were hired; it therefore sets the attribute “male” as a selection criterion, concludes that this group is the most suitable, and prefers it.&lt;br /&gt;
&lt;br /&gt;
Such errors must be identified and corrected. While bias cannot be completely avoided, it can be reduced through technical solutions.&lt;br /&gt;
&lt;br /&gt;
Three strategies can be used to minimize bias in AI systems. First, potential biases can be removed from the training data before the learning process begins: the data is modified so that attributes such as origin or religion no longer distort the AI’s decisions. Second, the learning process itself can be constrained, with additional mathematical fairness conditions preventing the model from making unfair or discriminatory assessments while it learns. Third, the AI’s outcomes can be evaluated and corrected for fairness after the learning process.&lt;br /&gt;
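&lt;br /&gt;
The first strategy can be sketched in a few lines; the column names and toy data are hypothetical, and dropping a column is only the simplest form of pre-processing:&lt;br /&gt;
&lt;br /&gt;
&lt;syntaxhighlight lang=&quot;python&quot;&gt;
# Sketch of pre-processing: remove sensitive attributes before training.
# Column names and values are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    &quot;years_experience&quot;: [2, 7, 4, 9, 1, 6],
    &quot;ethnicity&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;A&quot;, &quot;B&quot;, &quot;A&quot;, &quot;B&quot;],  # sensitive attribute
    &quot;hired&quot;: [0, 1, 0, 1, 0, 1],
})

X = df.drop(columns=[&quot;ethnicity&quot;, &quot;hired&quot;])  # model never sees ethnicity
y = df[&quot;hired&quot;]
model = LogisticRegression().fit(X, y)
&lt;/syntaxhighlight&gt;
&lt;br /&gt;
Dropping the column is only a first step: the model can still pick up proxy variables that correlate with the removed attribute, such as postal codes, which is exactly why the in-training and post-hoc strategies complement it.&lt;br /&gt;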
&lt;br /&gt;
Discrimination in AI systems is not only a technical problem but also touches on fundamental ethical principles such as justice, human dignity, and responsibility. Even if an AI application does not discriminate intentionally, it can still violate these principles, because its decisions have real-world consequences for people. Machines do not act morally, but their algorithms can reinforce existing inequalities and thus raise ethical concerns.&lt;br /&gt;
&lt;br /&gt;
Immanuel Kant formulated the categorical imperative as a universal moral principle: every human being must be treated as an end in themselves and never merely as a means to an end. Applied to AI, this means that algorithms must not be designed in a way that disadvantages people based on prejudice or biased data. An AI system that adopts discriminatory patterns contradicts this principle, as it fails to treat people as equal individuals.&lt;br /&gt;
&lt;br /&gt;
John Rawls developed the concept of justice as fairness, which aims to structure society in such a way that it does not disadvantage the weakest. His veil of ignorance challenges us to design rules without knowing our own social position—thereby ensuring fair conditions for all. AI systems that unconsciously discriminate contradict this principle, as they perpetuate existing inequalities instead of correcting them. To counteract this, algorithms must be actively tested and adjusted for fairness.&lt;br /&gt;
&lt;br /&gt;
Therefore, it is essential to assume not only technical but also ethical responsibility. AI systems must be designed to promote justice and equal opportunity rather than reinforce existing biases. This can be achieved through regular fairness and bias tests as well as through intentional ethical programming. Even if bias cannot be completely eliminated, it can be significantly reduced through targeted measures.&lt;br /&gt;
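&lt;br /&gt;
Such a fairness and bias test can take the form of a recurring demographic-parity check; the predictions, group labels, and the tolerance below are hypothetical:&lt;br /&gt;
&lt;br /&gt;
&lt;syntaxhighlight lang=&quot;python&quot;&gt;
# Sketch of a recurring fairness test: compare selection rates between
# groups (demographic parity). All data and the tolerance are invented.
def selection_rates(predictions, groups):
    totals = {}
    for pred, group in zip(predictions, groups):
        selected, seen = totals.get(group, (0, 0))
        totals[group] = (selected + pred, seen + 1)
    return {g: selected / seen for g, (selected, seen) in totals.items()}

preds  = [1, 1, 0, 0, 1, 0, 0, 0]  # model output: 1 = shortlisted
groups = [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;, &quot;B&quot;]

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
assert gap &lt;= 0.25, f&quot;demographic parity gap too large: {gap:.2f}&quot;
&lt;/syntaxhighlight&gt;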
&lt;br /&gt;
== Transparency and Traceability ==&lt;br /&gt;
On May 21, 2024, the Council of the EU gave its final approval to the EU Artificial Intelligence Act, a comprehensive regulatory framework that sets uniform rules for the use of AI across Europe. The AI Act places strong emphasis on transparency to ensure that AI handles data responsibly and fairly. Systems classified as particularly high-risk must therefore be designed in a way that makes their use comprehensible and understandable. This allows individuals to make informed decisions about whether a particular AI system is appropriate for them. Overall, the goal of increased AI transparency is to empower users and give them more control.&lt;br /&gt;
&lt;br /&gt;
To ensure the transparent use of AI, users must be informed when they are interacting with an AI system. The system&#039;s documentation should explain how the AI works, what it can be used for, and what opportunities and risks it entails. Additionally, information about the development and context of the system is important so that both users and organizations understand its capabilities and limitations.&lt;br /&gt;
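&lt;br /&gt;
Such documentation can also be kept in machine-readable form, for instance along the lines of a “model card”; every field value below is a hypothetical example, not wording taken from the AI Act:&lt;br /&gt;
&lt;br /&gt;
&lt;syntaxhighlight lang=&quot;python&quot;&gt;
# Sketch of machine-readable system documentation in the style of a
# model card. All field values are hypothetical examples.
import json

model_card = {
    &quot;name&quot;: &quot;resume-screening-assistant&quot;,
    &quot;is_ai_system&quot;: True,  # users must know they interact with AI
    &quot;intended_use&quot;: &quot;pre-sorting applications for human review&quot;,
    &quot;not_intended_for&quot;: &quot;automated final hiring decisions&quot;,
    &quot;training_data&quot;: &quot;internal applications 2018-2023, anonymized&quot;,
    &quot;known_limitations&quot;: [&quot;possible proxy bias via postal codes&quot;],
}
print(json.dumps(model_card, indent=2))
&lt;/syntaxhighlight&gt;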
&lt;br /&gt;
Transparency is important because it leads to greater traceability. It enables people to understand how an AI reaches certain decisions and helps to identify issues, such as potential bias or copyright violations, early on. Moreover, studies show that users are more likely to adopt transparent AI models. For developers, it is particularly essential to know where the training data comes from, whether it is fair and non-discriminatory, and what risks a particular model might pose.&lt;br /&gt;
&lt;br /&gt;
Even though transparency is crucial, it is not always easy to implement. A careful balance must be struck: users should be given enough information to understand and use a system safely, but not all aspects should be disclosed if doing so poses security or misuse risks.&lt;br /&gt;
&lt;br /&gt;
This is where the discussion around AI intersects closely with epistemological questions: What counts as reliable knowledge? How much do we need to know in order to trust a decision? AI challenges us to rethink our understanding of knowledge, responsibility, and control.&lt;br /&gt;
&lt;br /&gt;
As Olkhovsky emphasizes, transparency is a key prerequisite for people to trust AI systems at all. Only if it is clear how decisions are made can those decisions be questioned or challenged. Without that clarity, responsibility becomes blurred and control over technological decisions slips away from users. Therefore, transparency is not only a technical task but also an ethical imperative: AI systems must be designed to reveal the criteria by which they operate. Transparency not only enables oversight but also reinforces the democratic principle of accountability in the digital age.&lt;br /&gt;
&lt;br /&gt;
== Misinformation, Fake News, and Deepfakes ==&lt;br /&gt;
AI-Systems..&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
..&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Hutan Ashrafian (2022): Engineering a social contract: Rawlsian distributive justice through algorithmic game theory and artificial intelligence. Springer.&lt;br /&gt;
&lt;br /&gt;
[2] Nick Bostrom (2012): The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines.&lt;br /&gt;
&lt;br /&gt;
[3] Bundesamt für Sicherheit in der Informationstechnik (2024): Deepfakes – Gefahren und Gegenmaßnahmen. bsi.bund.de.&lt;br /&gt;
&lt;br /&gt;
[4] Bundesamt für Sicherheit in der Informationstechnik (2024): Transparenz von KI-Systemen. bsi.bund.de.&lt;br /&gt;
&lt;br /&gt;
[5] Fisher Phillips (2024): New Study Shows AI Resume Screeners Prefer White Male Candidates: Your 5-Step Blueprint to Prevent AI Discrimination in Hiring. Fisher Phillips.&lt;br /&gt;
&lt;br /&gt;
[6] Gabler Wirtschaftslexikon (n.d.): Künstliche Intelligenz (KI). Gabler Wirtschaftslexikon.&lt;br /&gt;
&lt;br /&gt;
[7] Iason Gabriel (2022): Toward a Theory of Justice for Artificial Intelligence. Daedalus.&lt;br /&gt;
&lt;br /&gt;
[8] Carl Friedrich Gethmann et al. (2021): Künstliche Intelligenz in der Forschung. Springer Nature.&lt;br /&gt;
&lt;br /&gt;
[9] Moreen Heine et al. (2023): Künstliche Intelligenz in öffentlichen Verwaltungen. Springer.&lt;br /&gt;
&lt;br /&gt;
[10] Sarah Kero et al. (2023): Bekanntheit und Akzeptanz von ChatGPT in Deutschland. Meinungsmonitor Künstliche Intelligenz.&lt;br /&gt;
&lt;br /&gt;
[11] Carlos Mougan &amp;amp; Joshua Brand (2024): Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics. arXiv.&lt;br /&gt;
&lt;br /&gt;
[12] Katja Muñoz (2025): Systematische Manipulation sozialer Medien im Zeitalter der KI. DGAP.&lt;br /&gt;
&lt;br /&gt;
[13] Petra Pohlmann et al. (2022): Künstliche Intelligenz, Bias und Versicherungen – Eine technische und rechtliche Analyse. Springer.&lt;br /&gt;
&lt;br /&gt;
[14] Statista (2025): Number of artificial intelligence (AI) tool users globally from 2021 to 2031. Statista.&lt;br /&gt;
&lt;br /&gt;
[15] Rosalie R. Waelen (2025): Rethinking Automation and the Future of Work with Hannah Arendt. Springer.&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12883</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12883"/>
		<updated>2025-06-09T16:29:31Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts from Kant and Rawls, the analysis examines how AI systems can reinforce existing inequalities and what measures are necessary to ensure a fair and ethically sound use of this technology. The paper argues that only through a combination of technical solutions, ethical guidelines, and legal regulation, such as the EU AI Act, can a balance be achieved between technological progress and social justice.&lt;br /&gt;
&lt;br /&gt;
== Introduction and Relevance of the Topic ==&lt;br /&gt;
Since the launch of ChatGPT in November 2022, artificial intelligence has made an unprecedented entry into our daily lives. This new generation of AI systems, which is already being used by hundreds of millions of people just two and a half years later, marks a turning point in the digitalization of our society. Almost every person in the digitally connected world regularly interacts with AI applications.&lt;br /&gt;
&lt;br /&gt;
The global use of AI technologies is steadily increasing. According to forecasts, the number of users worldwide is expected to grow to approximately 826.2 million by 2025. Beyond the education sector, AI is also gaining increasing importance in other fields such as business, medicine, justice, and public administration.&lt;br /&gt;
&lt;br /&gt;
One reason for this is the efficiency of AI, as it saves time, automates processes, and delivers fast results. However, with the rapid development of AI, new considerations arise. When machines write texts, prepare decisions, or even make moral judgments, questions emerge such as: What distinguishes a machine-made decision from a human one? Who is responsible when errors occur or people are disadvantaged?&lt;br /&gt;
&lt;br /&gt;
It is no longer just about simple applications. In philosophical discussions, the question increasingly arises whether so-called &amp;quot;strong AI&amp;quot;, that is, machines with consciousness or thinking of their own, is even possible. Olkhovsky addresses this idea and refers to the example from the film &#039;&#039;Ex Machina&#039;&#039;: an AI that appears so human-like that it seems almost impossible to distinguish it from a real person. But what separates a convincing imitation from genuine consciousness? Can machines truly “be” like us, or does it ultimately remain a complex simulation?&lt;br /&gt;
&lt;br /&gt;
AI confronts us with fundamental questions about what knowledge is, how we can recognize truth, and what role humans play in an increasingly automated world.&lt;br /&gt;
&lt;br /&gt;
== Artificial Intelligence ==&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12882</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12882"/>
		<updated>2025-06-09T16:25:14Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts from Kant and Rawls, the analysis examines how AI systems can reinforce existing inequalities and what measures are necessary to ensure a fair and ethically sound use of this technology. The paper argues that only through a combination of technical solutions, ethical guidelines, and legal regulation, such as the EU AI Act, can a balance be achieved between technological progress and social justice.&lt;br /&gt;
&lt;br /&gt;
== Introduction and Relevance of the Topic ==&lt;br /&gt;
Since the launch of ChatGPT in November 2022, artificial intelligence has made an unprecedented entry into our daily lives. This new generation of AI systems, which is already being used by hundreds of millions of people just two and a half years later, marks a turning point in the digitalization of our society. Almost every person in the digitally connected world regularly interacts with AI applications.&lt;br /&gt;
&lt;br /&gt;
The global use of AI technologies is steadily increasing. According to forecasts, the number of users worldwide is expected to grow to approximately 826.2 million by 2025. Beyond the education sector, AI is also gaining increasing importance in other fields such as business, medicine, justice, and public administration.&lt;br /&gt;
&lt;br /&gt;
One reason for this is the efficiency of AI, as it saves time, automates processes, and delivers fast results. However, with the rapid development of AI, new considerations arise. When machines write texts, prepare decisions, or even make moral judgments, questions emerge such as: What distinguishes a machine-made decision from a human one? Who is responsible when errors occur or people are disadvantaged?&lt;br /&gt;
&lt;br /&gt;
It is no longer just about simple applications. In philosophical discussions, the question increasingly arises whether so-called &amp;quot;strong AI&amp;quot;, that is, machines with consciousness or thinking of their own, is even possible. Olkhovsky addresses this idea and refers to the example from the film &#039;&#039;Ex Machina&#039;&#039;: an AI that appears so human-like that it seems almost impossible to distinguish it from a real person. But what separates a convincing imitation from genuine consciousness? Can machines truly “be” like us, or does it ultimately remain a complex simulation?&lt;br /&gt;
&lt;br /&gt;
AI confronts us with fundamental questions about what knowledge is, how we can recognize truth, and what role humans play in an increasingly automated world.&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12881</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12881"/>
		<updated>2025-06-09T16:15:39Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts from Kant and Rawls, the analysis examines how AI systems can reinforce existing inequalities and what measures are necessary to ensure a fair and ethically sound use of this technology. The paper argues that only through a combination of technical solutions, ethical guidelines, and legal regulation, such as the EU AI Act, can a balance be achieved between technological progress and social justice.&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12880</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12880"/>
		<updated>2025-06-09T16:15:23Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;&#039;Overview&#039;&#039;&#039; ==&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts from Kant and Rawls, the analysis examines how AI systems can reinforce existing inequalities and what measures are necessary to ensure a fair and ethically sound use of this technology. The paper argues that only through a combination of technical solutions, ethical guidelines, and legal regulation, such as the EU AI Act, can a balance be achieved between technological progress and social justice.&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12879</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12879"/>
		<updated>2025-06-09T16:11:34Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Overview&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts from Kant and Rawls, the analysis examines how AI systems can reinforce existing inequalities and what measures are necessary to ensure a fair and ethically sound use of this technology. The paper argues that only through a combination of technical solutions, ethical guidelines, and legal regulation, such as the EU AI Act, can a balance be achieved between technological progress and social justice.&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12878</id>
		<title>Draft:Artificial Intelligence and Justice: How Algorithms Shape Our Society</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Artificial_Intelligence_and_Justice:_How_Algorithms_Shape_Our_Society&amp;diff=12878"/>
		<updated>2025-06-09T16:08:26Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: Created page with &amp;quot;Overview   The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts fr...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Overview &lt;br /&gt;
&lt;br /&gt;
The rapid development and use of artificial intelligence since the introduction of ChatGPT in November 2022 raises fundamental questions about justice and ethical responsibility. This paper explores the interplay between AI systems and social justice, with a particular focus on discrimination caused by algorithmic bias, transparency in decision-making, and the challenges posed by AI-driven disinformation. Drawing on recent studies and philosophical concepts from Kant and Rawls, the analysis examines how AI systems can reinforce existing inequalities and what measures are necessary to ensure a fair and ethically sound use of this technology. The paper argues that only through a combination of technical solutions, ethical guidelines, and legal regulation, such as the EU AI Act, can a balance be achieved between technological progress and social justice.&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Emily_Hoppe&amp;diff=12581</id>
		<title>User:Emily Hoppe</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Emily_Hoppe&amp;diff=12581"/>
		<updated>2025-05-25T12:33:34Z</updated>

		<summary type="html">&lt;p&gt;Emily Hoppe: Replaced content with &amp;quot; &amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Emily Hoppe</name></author>
	</entry>
</feed>