<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.glossalab.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Thomas+Holzberger</id>
	<title>glossaLAB - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.glossalab.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Thomas+Holzberger"/>
	<link rel="alternate" type="text/html" href="https://www.glossalab.org/wiki/Special:Contributions/Thomas_Holzberger"/>
	<updated>2026-04-30T20:19:50Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=31791</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=31791"/>
		<updated>2026-01-23T15:28:03Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn’t adapt to the purpose of conceptual clarification.&lt;br /&gt;
&lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used.&lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant’s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First, the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implications of moral relativism.&lt;br /&gt;
&lt;br /&gt;
Introduction: Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
Definition: Will define the term moral relativism&lt;br /&gt;
&lt;br /&gt;
Assumptions, Argument, Proof: Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
Implications: Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
Connections: Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== Introduction: ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humans have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is to maximise overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behaviour of other people and themselves, giving rise to a deontological perspective to view morals mostly as rules of behaviour. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as “Killing is this bad, Stealing is half as bad”, but it could also mean that in a certain situation, one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgments given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue being seen as a good trait that is dependent on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared.&lt;br /&gt;
&lt;br /&gt;
The three systems that were pointed out are also usually interpreted in such a way that if an agent were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agent. This happens indirectly for virtue ethics and deontontology. In virtue ethics, for example, because a certain amount of altruism might be seen as virtuous, in deontology, because the most typical rules of behaviour like “you shall not lie”, “you shall not murder”,… are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== Definition: ==&lt;br /&gt;
The concept of moral relativism seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals but just different ideas of sometimes different people across space and time.&lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]][[Draft:Ethics]], moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]][[Draft:Ethics]], the concept is brought up that relativist ideas can hardly be challenged based on their own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention that moral relativism does not inherently challenge the idea of objective truth. Rather, it states that there is no knowable true morality. This also means that a morality derived from knowledge of some true morality is not true, according to Plato’s definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea with “Moral Relativism”Gowans, Chris, “Moral Relativism”, ‘‘The Stanford Encyclopedia of Philosophy’’ (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;gt;. is to label any moral idea as equally true or true depending on things like religion, culture, region or person. This idea does however, suffer from the so-called quantification problem. This is the problem of needing to choose a standard for what has priority. Culture, religion, region, the opinion of the affected person or the acting person. There are infinite possibilities, and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.&lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.&lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&lt;br /&gt;
&lt;br /&gt;
In the following sections, Assumptions, Argument and Proof, one of the reasons for moral relativism will be shown to explain the position.&lt;br /&gt;
&lt;br /&gt;
==== Assumptions: ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someone&#039;s ideas on morality will be called a “complete moral system”. A complete moral system assigns everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything except for actions the value 0, since it doesn’t matter. In practice, most people believe in some kind of mixture of moral systems, such as the ones mentioned above.&lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true, complete moral system and any knowable morality from any perspective will be disproven under the following conditions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn’t transfer information to the existing world, from which one could conclude the nature of the moral system.&lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]&lt;br /&gt;
&lt;br /&gt;
==== Argument: ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelihood of potentially conflicting moral systems. Therefore, they have the same likelihood of being true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore, it can not be true together with any other complete moral system. Given an infinite number of potentially true complete moral systems, they are all in conflict with the infinite number of others. Therefore, they all have the likelihood [1/infinite]=0&lt;br /&gt;
&lt;br /&gt;
Any complete moral system having the likelihood of 0 just disproves the concept of a true, complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.&lt;br /&gt;
&lt;br /&gt;
One of the assumptions above is that a moral system itself doesn’t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also applies to hints at a true moral judgement. There is no information to hint at any true moral judgment.&lt;br /&gt;
&lt;br /&gt;
It is also noteworthy that two systems are not even universally comparable:&lt;br /&gt;
&lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals, e.g., for deciding on whether to do something, the observer would have to take the resulting moral values into account, proportional to the likelihood of the corresponding moral rule or system.&lt;br /&gt;
&lt;br /&gt;
But is it always possible to assign a number as a moral value? Some systems might, for example, only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system, so the numbers should be scaled relative to each other to reflect that. But for example, every rulebreak could be worth -1 or -19 or -0.123 or … So actually every moral system only compares things relative to each other, and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem of whether two moral systems are even comparable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors, for example, to equal out the sum of all given moral judgments between the two moral systems, or to equal out one specific judgment. In this case, you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== Proof: ====&lt;br /&gt;
In the following, it will be evaluated whether the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true must be true, since the complete moral system already represents the whole of some hypothetical person&#039;s ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption that the system itself doesn’t transfer information to the existing world is not proven, but lies in the definition of morality used here. The definition is an abstract judgment of good and bad that has no effect on the world in and of itself. With this definition, there is no reason to assume that any impact of entities, people, or gods seemingly reacting to their perceived morality points to the nature of a true moral system. E.g., even a Karma-system might just have an inverse effect like punishing people for the thing truly good. And these entities, if real, could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== Implikations: ==&lt;br /&gt;
The conclusion of the argument above raises a problem, of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times, and places showing different moral beliefs.&lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are, in reality, connections of various beliefs and concepts. Even though no one can have knowledge of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but also as something that is necessary to avoid some divine punishment or achieve some reward. That the punishment should be avoided or the reward achieved is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agenda likely match the moral system, since the agent does not want to be punished and instead wants to be rewarded. The popular perception of morals does, however, sometimes include a conflict of the interests of the moral system and the interests of the agent. After all, people are sometimes willing to do things they perceive as immoral.&lt;br /&gt;
&lt;br /&gt;
This shows that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]][[Draft:Moral]] defined as a normative system which is based on society’s values and ethical norms, is not focused on the individual, but rather an entire society or group. If there are other reasons for believing a moral system true, like “it regulates my society best” or “it serves the public good”, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, public, or altruistic goals and&lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise but also as a tool when there is a conflict of moral ideas. Because the realisation that another position is equally not just grounded in knowable truth as your own is necessary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas that the parties in conflict might not share. But, the parties in question might not always be in noticeable conflict and might be able to have interaction that advances both parties&#039; moral goals.&lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the amount of such beliefs, since the amount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some amount of purpose that is often different from their short-term interests, so the amount of such Axioms won`t reach 0 for any person if the attempt to reach future happiness at the cost of immediate happiness is a moral decision instead of an instinctual decision. Also, the quest for truth and knowledge might be a moral goal, so the need to have 0 axioms in moral thinking would be rather paradoxical.&lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== Connections: ==&lt;br /&gt;
So, from Plato&#039;s perspective on knowledge, moral beliefs would fall into the category of sensible knowledge, specifically in the category of faithful beliefs, since they are taken for granted without proof. The concept of moral relativity basically has the role of pointing that out. In the popular perception of morality, there can, however, be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be attributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called “What is there”, What to do&amp;quot; and “How to know”. After all, “You can not know what to do” is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of “what to do” will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kant&#039;s Moral Philosophy claims to overcome moral relativism, with the categorical Imperative that you should only act such that you could want all rational beings to a universal law consistent with the action.Johnson, Robert and Adam Cureton, “Kant’s Moral Philosophy”, ‘‘The Stanford Encyclopedia of Philosophy’’ (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;gt;.  But there is the question of what universal law you would want all rational beings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realistically phrasable, or somehow generalised, then it will not always align with the agent&#039;s moral opinion, so it would ,for these cases, not be rational to act according to the categorical Imperative. Unless, of course, the agent tries to avoid sanctions or reap rewards when others observe his actions, or when the agend tries to strenghen the observed precedent of moral behaviour. Or when the agent lacks the capacity to know whether their own actions are observed or whether there is a better action. There will still be different moral systems held by different people and groups, but for any given group where people are sufficiently able to observe, sanction or reward each others actions (if only by satisfying altruism or showing sympathy), there might theoretically be an ideal set of moral rules that optimises the average fullfillment of everyones interests as soon as it seems established in the minds of most groupmembers. If this is the applied definition of morality, then there is an optimal set of rules, but this optimal set of rules would still be different from group to group and change over time. The group of “rational beings” might be incomprehensible and not suited as an efficient reference point. Meanwhile, the number of groups that might be definable is higher than the number of rational beings. So the agenda, being in multiple groups at once, would still have to deal with different moral systems, and the concept of moral relativism still exists.&lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of “Cultural Diversity”. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. However, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs, for example, in terms of maximising happiness for a maximal amount of people. And it is statistically sure that some cultures lead to archiving their own or others&#039; moral ideas better. That does not mean that it is realistic or useful to find a “better” culture; it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture, and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact, the consistent comparison and subsequent exchange of cultures is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation that your perspective is not the only valid one is the basis for the described cultural exchange, or also just the positive interaction between individuals of different cultures and even ideological backgrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=31790</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=31790"/>
		<updated>2026-01-23T15:26:39Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn’t adapt to the purpose of conceptual clarification.&lt;br /&gt;
&lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used.&lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant’s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First, the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implications of moral relativism.&lt;br /&gt;
&lt;br /&gt;
Introduction: Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
Definition: Will define the term moral relativism&lt;br /&gt;
&lt;br /&gt;
Assumptions, Argument, Proof: Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
Implications: Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
Connections: Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== ‘’‘Introduction:’‘’ ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humans have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is to maximise overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behaviour of other people and themselves, giving rise to a deontological perspective to view morals mostly as rules of behaviour. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as “Killing is this bad, Stealing is half as bad”, but it could also mean that in a certain situation, one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgments given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue being seen as a good trait that is dependent on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared.&lt;br /&gt;
&lt;br /&gt;
The three systems that were pointed out are also usually interpreted in such a way that if an agent were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agent. This happens indirectly for virtue ethics and deontontology. In virtue ethics, for example, because a certain amount of altruism might be seen as virtuous, in deontology, because the most typical rules of behaviour like “you shall not lie”, “you shall not murder”,… are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== ‘’‘Definition:’‘’ ==&lt;br /&gt;
The concept of moral relativism seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals but just different ideas of sometimes different people across space and time.&lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]][[Draft:Ethics]], moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]][[Draft:Ethics]], the concept is brought up that relativist ideas can hardly be challenged based on their own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention that moral relativism does not inherently challenge the idea of objective truth. Rather, it states that there is no knowable true morality. This also means that a morality derived from knowledge of some true morality is not true, according to Plato’s definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea with “Moral Relativism”Gowans, Chris, “Moral Relativism”, ‘‘The Stanford Encyclopedia of Philosophy’’ (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;gt;. is to label any moral idea as equally true or true depending on things like religion, culture, region or person. This idea does however, suffer from the so-called quantification problem. This is the problem of needing to choose a standard for what has priority. Culture, religion, region, the opinion of the affected person or the acting person. There are infinite possibilities, and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.&lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.&lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&lt;br /&gt;
&lt;br /&gt;
In the following sections, Assumptions, Argument and Proof, one of the reasons for moral relativism will be shown to explain the position.&lt;br /&gt;
&lt;br /&gt;
==== ‘’‘Assumptions:’‘’ ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someone&#039;s ideas on morality will be called a “complete moral system”. A complete moral system assigns everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything except for actions the value 0, since it doesn’t matter. In practice, most people believe in some kind of mixture of moral systems, such as the ones mentioned above.&lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true, complete moral system and any knowable morality from any perspective will be disproven under the following conditions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn’t transfer information to the existing world, from which one could conclude the nature of the moral system.&lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]&lt;br /&gt;
&lt;br /&gt;
==== ‘’‘Argument:’‘’ ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelihood of potentially conflicting moral systems. Therefore, they have the same likelihood of being true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore, it can not be true together with any other complete moral system. Given an infinite number of potentially true complete moral systems, they are all in conflict with the infinite number of others. Therefore, they all have the likelihood [1/infinite]=0&lt;br /&gt;
&lt;br /&gt;
Any complete moral system having the likelihood of 0 just disproves the concept of a true, complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.&lt;br /&gt;
&lt;br /&gt;
One of the assumptions above is that a moral system itself doesn’t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also applies to hints at a true moral judgement. There is no information to hint at any true moral judgment.&lt;br /&gt;
&lt;br /&gt;
It is also noteworthy that two systems are not even universally comparable:&lt;br /&gt;
&lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals, e.g., for deciding on whether to do something, the observer would have to take the resulting moral values into account, proportional to the likelihood of the corresponding moral rule or system.&lt;br /&gt;
&lt;br /&gt;
But is it always possible to assign a number as a moral value? Some systems might, for example, only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system, so the numbers should be scaled relative to each other to reflect that. But for example, every rulebreak could be worth -1 or -19 or -0.123 or … So actually every moral system only compares things relative to each other, and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem of whether two moral systems are even comparable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors, for example, to equal out the sum of all given moral judgments between the two moral systems, or to equal out one specific judgment. In this case, you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== ‘’‘Proof:’‘’ ====&lt;br /&gt;
In the following, it will be evaluated whether the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true must be true, since the complete moral system already represents the whole of some hypothetical person&#039;s ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption that the system itself doesn’t transfer information to the existing world is not proven, but lies in the definition of morality used here. The definition is an abstract judgment of good and bad that has no effect on the world in and of itself. With this definition, there is no reason to assume that any impact of entities, people, or gods seemingly reacting to their perceived morality points to the nature of a true moral system. E.g., even a Karma-system might just have an inverse effect like punishing people for the thing truly good. And these entities, if real, could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== ‘’‘Implikations:’‘’ ==&lt;br /&gt;
The conclusion of the argument above raises a problem, of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times, and places showing different moral beliefs.&lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are, in reality, connections of various beliefs and concepts. Even though no one can have knowledge of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but also as something that is necessary to avoid some divine punishment or achieve some reward. That the punishment should be avoided or the reward achieved is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agenda likely match the moral system, since the agent does not want to be punished and instead wants to be rewarded. The popular perception of morals does, however, sometimes include a conflict of the interests of the moral system and the interests of the agent. After all, people are sometimes willing to do things they perceive as immoral.&lt;br /&gt;
&lt;br /&gt;
This shows that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]][[Draft:Moral]] defined as a normative system which is based on society’s values and ethical norms, is not focused on the individual, but rather an entire society or group. If there are other reasons for believing a moral system true, like “it regulates my society best” or “it serves the public good”, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, public, or altruistic goals and&lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise but also as a tool when there is a conflict of moral ideas. Because the realisation that another position is equally not just grounded in knowable truth as your own is necessary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas that the parties in conflict might not share. But, the parties in question might not always be in noticeable conflict and might be able to have interaction that advances both parties&#039; moral goals.&lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the amount of such beliefs, since the amount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some amount of purpose that is often different from their short-term interests, so the amount of such Axioms won`t reach 0 for any person if the attempt to reach future happiness at the cost of immediate happiness is a moral decision instead of an instinctual decision. Also, the quest for truth and knowledge might be a moral goal, so the need to have 0 axioms in moral thinking would be rather paradoxical.&lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== ‘’‘Connections:’‘’ ==&lt;br /&gt;
So, from Plato&#039;s perspective on knowledge, moral beliefs would fall into the category of sensible knowledge, specifically in the category of faithful beliefs, since they are taken for granted without proof. The concept of moral relativity basically has the role of pointing that out. In the popular perception of morality, there can, however, be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be attributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called “What is there”, What to do&amp;quot; and “How to know”. After all, “You can not know what to do” is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of “what to do” will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kant&#039;s Moral Philosophy claims to overcome moral relativism, with the categorical Imperative that you should only act such that you could want all rational beings to a universal law consistent with the action.Johnson, Robert and Adam Cureton, “Kant’s Moral Philosophy”, ‘‘The Stanford Encyclopedia of Philosophy’’ (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;gt;.  But there is the question of what universal law you would want all rational beings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realistically phrasable, or somehow generalised, then it will not always align with the agent&#039;s moral opinion, so it would ,for these cases, not be rational to act according to the categorical Imperative. Unless, of course, the agent tries to avoid sanctions or reap rewards when others observe his actions, or when the agend tries to strenghen the observed precedent of moral behaviour. Or when the agent lacks the capacity to know whether their own actions are observed or whether there is a better action. There will still be different moral systems held by different people and groups, but for any given group where people are sufficiently able to observe, sanction or reward each others actions (if only by satisfying altruism or showing sympathy), there might theoretically be an ideal set of moral rules that optimises the average fullfillment of everyones interests as soon as it seems established in the minds of most groupmembers. If this is the applied definition of morality, then there is an optimal set of rules, but this optimal set of rules would still be different from group to group and change over time. The group of “rational beings” might be incomprehensible and not suited as an efficient reference point. Meanwhile, the number of groups that might be definable is higher than the number of rational beings. So the agenda, being in multiple groups at once, would still have to deal with different moral systems, and the concept of moral relativism still exists.&lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of “Cultural Diversity”. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. However, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs, for example, in terms of maximising happiness for a maximal amount of people. And it is statistically sure that some cultures lead to archiving their own or others&#039; moral ideas better. That does not mean that it is realistic or useful to find a “better” culture; it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture, and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact, the consistent comparison and subsequent exchange of cultures is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation that your perspective is not the only valid one is the basis for the described cultural exchange, or also just the positive interaction between individuals of different cultures and even ideological backgrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=31789</id>
		<title>Draft:Positivist state</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=31789"/>
		<updated>2026-01-23T15:10:40Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract ==&lt;br /&gt;
This paper shows a fictional utopian society. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;utopia&#039;&#039; will go into how the society developed due to an AI capable of efficient and logical processing of information and various coincidences that led to the AI being publicly accessible, as well as a sketch of this world, in which abundant information and resources are channeled into the public good due to a transparent and democratic process of governance. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;epilogue&#039;&#039; will show some dystopian aspects and potentials of this society, which is entirely dependent on an automated intelligence. It will also show the connections to historical utopias and ideas such as Positivism, perfect knowledge and thinking.            &lt;br /&gt;
&lt;br /&gt;
== Utopia ==&lt;br /&gt;
In the 21th century, a connection of scientific articles called the Positivist Network was formed. It was an attempt to gather proofs of previous scientific conclusions in one place, and to aid in challenging, correcting and amending them.&lt;br /&gt;
&lt;br /&gt;
Anyone with a Bachelor of Science, Engineering, or Arts could make contributions, following specific patterns:&lt;br /&gt;
&lt;br /&gt;
[Every contribution was supposed to link to every axiom, assumption, experiment, and theory used. This was in practice by linking all relevant theories and experiments, while the algorithm summed up the axioms that the necessary theories used.&lt;br /&gt;
&lt;br /&gt;
Then it was simply supposed to make logical statements leading up to conclusions.&lt;br /&gt;
&lt;br /&gt;
New conclusions also had to be noted separately as a conclusion of the contribution.]&lt;br /&gt;
&lt;br /&gt;
The institution funding this network would check contributions for the validity of the logical proofs contained in them whenever anyone pointed out a contradictory conclusion within other articles or in relevant scientific papers.&lt;br /&gt;
&lt;br /&gt;
When this system was established and gained traction, speech models called AI at the time were often trained on this public database and used to summarize conclusions on certain topics. The likelihood of false information was initially rather small on this dataset, but increased usage led to various groups of interests making sure that lots of conclusions based on unfounded assumptions were placed in the system. This did not halt the usefulness of the system in most fields of study, but in other fields, research became more difficult. This prompted various experiments on how to estimate what assumptions would be useful. The Programs developed to this end were not widely used at first. But work on something named &amp;quot;Logical Artificial Intelligence&amp;quot; had even started before that. It was a program used to check the work of other AI agents by being provided with sets of contradicting statements. It would try to detect these statements in any given input. A speech &amp;quot;AI&amp;quot; model was used to phrase any statement as a set of less comprehensive statements. [e.g. I am going to the pool-&amp;gt; The &amp;quot;I&amp;quot; &amp;quot;goes&amp;quot; to &amp;quot;the pool&amp;quot;, happening now -&amp;gt; There is an &amp;quot;I&amp;quot;, a &amp;quot;pool&amp;quot;(Implied: the recipient of the message will know only one &amp;quot;pool&amp;quot; &amp;quot;I&amp;quot; would go to), &amp;quot;I&amp;quot;  is trying to be at the pool in the future, by moving there now (implied: by &amp;quot;walking&amp;quot; -&amp;gt;... ]. Later, a second system was trained to generate reasoning about the truthfulness of some statement. Originally, every relevant contradiction and assumption was given, and the system tried to argue for something without making logical mistakes that the &amp;quot;Logical AI&amp;quot; was already able to detect. Not much is known about the further development process, but in 2071, someone leaked access to the latest prototype of a private, little-known tech company. It was able to effectively answer any question, at least given enough processing power and time. Furthermore, it was able to adequately provide an argument, with all axioms used as well as their position, mostly also including experiments and assumptions, which it did not usually include in the written argumentation. The impact of this AI became clear to anyone willing to believe the leak existed. A few weeks later, all employees and shareholders were arrested, citing &amp;quot;national defense&amp;quot; as the reason. Another leak of the corresponding state documents later revealed not only the program itself, but also immense amounts of already collected data and conclusions. Therefore, with an already limited control of the AI, a system of decentralized data and separately working versions of the AI was set up. This was managed over the Positivist-Network with the AI&#039;s almost acting as users and conclusions were therefore also accessible to the public. Other networks with a similar structure could also be opened. When someone changed some conclusion, the AI was able to detect mistakes and discrepancies, and as soon as it tried to add conflicting input, it ultimately chose the untainted version of information. &lt;br /&gt;
&lt;br /&gt;
The AI could even work with statements having some likelihood. So it led to a rapid expansion of knowledge on everything from basic research to social sciences, which was simply directed by people asking questions and providing processing power. It was still easier to find mistakes, especially in the past, and most leaders were not adjusted to transparent and clear information, so questioning the AI about politics led to a rather bleak impression of anyone&#039;s governance. The first attempts at undermining the truth started by making a certain version of the AI mandatory and cut of from the rest of the network. These versions had certain rules implanted that could not be changed or challenged. While this might have worked, it caused problems in seemingly unrelated topics when the AI used undeniable facts about the greatness of someone&#039;s governance to make conclusions about anything, even physics.&lt;br /&gt;
&lt;br /&gt;
Sowing mistrust in the AI or restricting access to it were also applied by some governments. In the end, all such options significantly hindered the usefulness of the AI, so a new leadership would often just start acting mostly according to the AI&#039;s conclusions and enable a less and less obstructed use for the entire society.&lt;br /&gt;
&lt;br /&gt;
Now, formerly Autocratic and Democratic systems alike are mostly very transparent and acting on the AI&#039;s advice. Since the AI would literally tell anyone who asked about how beneficial such a change would be, or even when a revolution would be a viable option, most governments gradually adapted to be rather similar in nature. The living standard before the 22nd. century was always increased when possible. But the rapid progress in technology and the corresponding increased wealth led to generations of people who were not concerned about how to make a living. Using the AI from an early age, they also barely build up any contradictory ideas about morality. So they actually based some of their thinking on the altruistic morality typically fed to children, now almost without slowly adapting it to their interests and intuitions. So the concept of &amp;quot;public good&amp;quot; as a contrast to the wish to be better off than others also increased in importance. When given the option, most people of these generations would happily make any information of theirs or their employer, even the state, public, further increasing the trend towards transparency.&lt;br /&gt;
&lt;br /&gt;
Quantum computers and new variations of the AI, all interacting with each other, are among the most useful of today&#039;s technologies. At least Mark thinks so, as he is heading for the jobcenter in the zeppelin. Of course he could have chosen a faster method of transportation, but he enjoys the view on his village and the surounding forests. A quick chat with the only other passenger revealed that it was, in fact, the employee of the local job center. He also switches from place to place, since individual villages and cities do not always require a lot of attention by the jobcenter, and also because most people would not want to work at the same place all the time. Like most jobs, the work at the jobcenter is automated wherever there are not enough humans who want the profession, with contingencies in case this number changes spontaneously. &lt;br /&gt;
&lt;br /&gt;
The jobcenter prepared and evaluated everyone who wanted to, exchanged information with potential employers, and eventually gave strong advice on what to do. Mark would then mostly be free in choosing different career paths and positions, but incentivised with a cut of the profit that the company would earn for the state. Mark remembered from History class that this was originally an optional, at some point even mandatory part of the so-called welfare system, next to a private market. It was even disincentivised to be a part of the welfare system. With some parts of the state wishing not to lose taxes and market control, the system was also gradually changed so it could fit the people who would have started to leave the system. But increased trust in the state and a wish for equality later led to everyone being a part of this payment system by law. Mark has only visited the jobcenter once so far, when he applied to be a part of the local neighborhood help group. But basically, anyone can be a part of such groups, so it was not much of an application. He just got a little more money from the state. Not that it was a huge deviation from his basic income, but it was still useful, since he wanted to build a new printer. Before the AI was used to create an organism that consumed most kinds of plastic under specific conditions, the Material Mark uses for his Printer must have been a real problem for the environment, since it amassed basically everywhere. Some fish still carry small amounts of plastic in them. Not that it would affect Mark too much. After all, there really is no reason to eat animals. Their taste can easily be replicated for the people who want that. So, microplastics in some Fish is more of an abstract thought to Mark. Also, the problem is getting better and better anyway; enough people care about wildlife protection, so there is no reason to be pessimistic. At least in Mark&#039;s state, anyone could vote on certain agendas brought forward by anyone in society, including changing some executive power, even before the next election. A current issue is a ban on some kinds of short-video sites. Mark would be fine with spending 3 hours a day on there, even though the AI told him of a higher likelihood of long-term unhappiness with this kind of video consumption. But someone put the proposal to a vote, it got shared by interested people and to random people in order to determine how controversial and important the matter is. Since it was deemed very important and controversial based on most people reacting to it, and with mixed reactions, over 50% of the whole society needs to agree to the proposal in order to make it happen. But it was contradictory to and therefore affecting a part of the constitution, so the proposal needed 70% anyway. A vote to stop the vaccinations against the common cold failed recently; there might still be some unknown cases around. This is a common political topic, after all, most diseases are almost or entirely gone thanks to such vaccinations and effective treatments. &lt;br /&gt;
&lt;br /&gt;
== Epilog ==&lt;br /&gt;
The Utopia shown has various dystopian aspects. Especially noteworthy here is the start of this utopia. After all, only various leaks and specific government responses to the matter made this utopia possible, even after an AI was developed. It seems even more likely that a few actors would monopolise this power, creating a less pleasant society. But the Utopia itself is also unstable. At some point, for example due to a limited interest in politics, the transparent and democratic society could erode, the system for voting on specific issues could be filled with useless proposals to lessen the interest in the democratic process, and finally, it should be possible to somehow change a version of the AI to suit the interests of some actors and guaranty them a maintained power.&lt;br /&gt;
&lt;br /&gt;
Also, the entire society is dependent on the AI, at least to apply previous conclusions to practical problems. There are some engineers and scientists, but a situation could develop, in which fewer and fewer people want to perform mentally challenging tasks. When the AI has some pattern of errors, which allows it to consistently make mistakes, the entire society could develop horribly or collapse, since no one really checks the output anymore. Automated systems, including versions of the AI, could fail and trigger humanitarian or economic disasters.&lt;br /&gt;
&lt;br /&gt;
It is also clear that this society will tend to become more and more dependent on the AI, since it seems to provide perfect output while you ask a question close to an already analysed topic or as long as you provide sufficient processing power. Maybe the society ends up regulating access to the AI more, especially for children, maybe it does not. This raises the question of how important the search for knowledge is to humanity. Because, depending on how accessible the AI is for answering simple questions, people could lose interest and the ability to form these thoughts themselves. This is also the case in a short story by Emanuel Holst, gathered in the glossaLAB article [[Artificial Intelligence (Cyberutopias)]]. The AI in the mentioned short story literally automated all relevant processes; there is little actual ability of humans to positively influence the world around them. In the text here, the AI does however only serve as a tool for answering questions. But it also leads to immense automation, with the potential that systems just ask the AI questions automatically. So the people of this world might feel more useful and can have a part in positive change, but this state of society can end, as soon as increased automation or decreased education get popular political agendas.&lt;br /&gt;
&lt;br /&gt;
The Utopia of this text is also connected to various historical utopias of history: &lt;br /&gt;
&lt;br /&gt;
For example, the political system fits the idea of a social contract brought forward by Rousseau very well because it mostly resembles a direct democracy and the utopia of a transparent society. Rousseau envisioned a concept in which individuals like politicians would agree to support and obey the general will, a rather fragile promise in reality. Since the utopia here provides a very high possibility to inform and then know about political actions and possibilities, and since the state can be controlled rather directly, the actions of the state will fit the general will very precisely. As explained in the GlossaLAB lab article [[A transparent world]], increased information about each other as a mechanism against prejustice and inequality is also a part of the utopia of transparent information that the AI can provide. Not only because everyone has the potential to access information, but this access is also rather easy, so most people will actually use this potential.&lt;br /&gt;
&lt;br /&gt;
The AI and the Positivist Network it interacts with are mostly inspired by the Institut International de Bibliographiee that aimed to collect the human scientific achievements and categorise them in a useful and understandable way. Some differences in the Positivist Network are a higher amount of control on who participates, a lacking need to make branches of possibilities, in case some experiment or assumption is invalid, and a higher cost of storing, linking, and communicating information. The idea of positivism also lies in purely positive changes to the beliefs you have. The AI and the Positivist Network seem to resemble this quite well, since all conclusions must be based on proof. But the system could be sabotaged or faulty and then contain some sort of mistake, which needs to be found. A problem possibly also encountered by Paul Outlet and the Institut International de Bibliographiee, but still a significant deviation from their utopia of perfect wisdom.&lt;br /&gt;
&lt;br /&gt;
The AI also resembles the utopias of perfect language and perfect thinking. It seems like an automated version of Ramon Llulls &amp;quot;logical machine&amp;quot;&amp;lt;ref&amp;gt;Priani, Ernesto, &amp;quot;Ramon Llull&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2025/entries/llull/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;. This &amp;quot;logical machine&amp;quot; might not have been more than a tablet used to explain and memorise the core of deductive logic and make the concept more credible, but combined with a person following a logical line of argument, it was already a usefull tool to ensure the validity of some proof. Translating between typical language and logical connections, deciding on what path to argue, and also partially for what to argue are also tasks of the AI, which are done by a human with a &amp;quot;logical machine&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Later, Leibniz also attempted to form a universal logical language and a way of working with it. This also had the background of trying to prove aspects of the world by minimising the amount of axions used. So the AI seems like an automated tool for what he envisioned. &amp;quot;The Principia Mathematica&amp;quot;&amp;lt;ref&amp;gt;Linsky, Bernard and Andrew David Irvine, &amp;quot;&#039;&#039;Principia Mathematica&#039;&#039;&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Fall 2024 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/fall2024/entries/principia-mathematica/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; later achieved this way of arguing for all relevant mathematical correlations at the time, so it should have a form comparable to a contribution in the Positivist Network of this utopia. &lt;br /&gt;
&lt;br /&gt;
Modern utopias of perfect thinking and perfect language that relate to this Utopia are also related. With tools like ChatGpt, the ability to work with language - inputs and -outputs seems close in reality, but the vision of an actual translation&amp;lt;ref&amp;gt;Scholz, Barbara C., Francis Jeffry Pelletier, Geoffrey K. Pullum, and Ryan Nefdt, &amp;quot;Philosophy of Linguistics&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Summer 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/sum2025/entries/linguistics/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;, that does not lose information is actualy achived by the AI in this utopia.&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=31788</id>
		<title>Draft:Positivist state</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=31788"/>
		<updated>2026-01-23T15:09:02Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract ==&lt;br /&gt;
This paper shows a fictional utopian society. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;utopia&#039;&#039; will go into how the society developed due to an AI capable of efficient and logical processing of information and various coincidences that led to the AI being publicly accessible, as well as a sketch of this world, in which abundant information and resources are channeled into the public good due to a transparent and democratic process of governance. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;epilogue&#039;&#039; will show some dystopian aspects and potentials of this society, which is entirely dependent on an automated intelligence. It will also show the connections to historical utopias and ideas such as Positivism, perfect knowledge and thinking.            &lt;br /&gt;
&lt;br /&gt;
== Utopia ==&lt;br /&gt;
In the 21th century, a connection of scientific articles called the Positivist Network was formed. It was an attempt to gather proofs of previous scientific conclusions in one place, and to aid in challenging, correcting and amending them.&lt;br /&gt;
&lt;br /&gt;
Anyone with a Bachelor of Science, Engineering, or Arts could make contributions, following specific patterns:&lt;br /&gt;
&lt;br /&gt;
[Every contribution was supposed to link to every axiom, assumption, experiment, and theory used. This was in practice by linking all relevant theories and experiments, while the algorithm summed up the axioms that the necessary theories used.&lt;br /&gt;
&lt;br /&gt;
Then it was simply supposed to make logical statements leading up to conclusions.&lt;br /&gt;
&lt;br /&gt;
New conclusions also had to be noted separately as a conclusion of the contribution.]&lt;br /&gt;
&lt;br /&gt;
The institution funding this network would check contributions for the validity of the logical proofs contained in them whenever anyone pointed out a contradictory conclusion within other articles or in relevant scientific papers.&lt;br /&gt;
&lt;br /&gt;
When this system was established and gained traction, speech models called AI at the time were often trained on this public database and used to summarize conclusions on certain topics. The likelihood of false information was initially rather small on this dataset, but increased usage led to various groups of interests making sure that lots of conclusions based on unfounded assumptions were placed in the system. This did not halt the usefulness of the system in most fields of study, but in other fields, research became more difficult. This prompted various experiments on how to estimate what assumptions would be useful. The Programs developed to this end were not widely used at first. But work on something named &amp;quot;Logical Artificial Intelligence&amp;quot; had even started before that. It was a program used to check the work of other AI agents by being provided with sets of contradicting statements. It would try to detect these statements in any given input. A speech &amp;quot;AI&amp;quot; model was used to phrase any statement as a set of less comprehensive statements. [e.g. I am going to the pool-&amp;gt; The &amp;quot;I&amp;quot; &amp;quot;goes&amp;quot; to &amp;quot;the pool&amp;quot;, happening now -&amp;gt; There is an &amp;quot;I&amp;quot;, a &amp;quot;pool&amp;quot;(Implied: the recipient of the message will know only one &amp;quot;pool&amp;quot; &amp;quot;I&amp;quot; would go to), &amp;quot;I&amp;quot;  is trying to be at the pool in the future, by moving there now (implied: by &amp;quot;walking&amp;quot; -&amp;gt;... ]. Later, a second system was trained to generate reasoning about the truthfulness of some statement. Originally, every relevant contradiction and assumption was given, and the system tried to argue for something without making logical mistakes that the &amp;quot;Logical AI&amp;quot; was already able to detect. Not much is known about the further development process, but in 2071, someone leaked access to the latest prototype of a private, little-known tech company. It was able to effectively answer any question, at least given enough processing power and time. Furthermore, it was able to adequately provide an argument, with all axioms used as well as their position, mostly also including experiments and assumptions, which it did not usually include in the written argumentation. The impact of this AI became clear to anyone willing to believe the leak existed. A few weeks later, all employees and shareholders were arrested, citing &amp;quot;national defense&amp;quot; as the reason. Another leak of the corresponding state documents later revealed not only the program itself, but also immense amounts of already collected data and conclusions. Therefore, with an already limited control of the AI, a system of decentralized data and separately working versions of the AI was set up. This was managed over the Positivist-Network with the AI&#039;s almost acting as users and conclusions were therefore also accessible to the public. Other networks with a similar structure could also be opened. When someone changed some conclusion, the AI was able to detect mistakes and discrepancies, and as soon as it tried to add conflicting input, it ultimately chose the untainted version of information. &lt;br /&gt;
&lt;br /&gt;
The AI could even work with statements having some likelihood. So it led to a rapid expansion of knowledge on everything from basic research to social sciences, which was simply directed by people asking questions and providing processing power. It was still easier to find mistakes, especially in the past, and most leaders were not adjusted to transparent and clear information, so questioning the AI about politics led to a rather bleak impression of anyone&#039;s governance. The first attempts at undermining the truth started by making a certain version of the AI mandatory and cut of from the rest of the network. These versions had certain rules implanted that could not be changed or challenged. While this might have worked, it caused problems in seemingly unrelated topics when the AI used undeniable facts about the greatness of someone&#039;s governance to make conclusions about anything, even physics.&lt;br /&gt;
&lt;br /&gt;
Sowing mistrust in the AI or restricting access to it were also applied by some governments. In the end, all such options significantly hindered the usefulness of the AI, so a new leadership would often just start acting mostly according to the AI&#039;s conclusions and enable a less and less obstructed use for the entire society.&lt;br /&gt;
&lt;br /&gt;
Now, formerly Autocratic and Democratic systems alike are mostly very transparent and acting on the AI&#039;s advice. Since the AI would literally tell anyone who asked about how beneficial such a change would be, or even when a revolution would be a viable option, most governments gradually adapted to be rather similar in nature. The living standard before the 22nd. century was always increased when possible. But the rapid progress in technology and the corresponding increased wealth led to generations of people who were not concerned about how to make a living. Using the AI from an early age, they also barely build up any contradictory ideas about morality. So they actually based some of their thinking on the altruistic morality typically fed to children, now almost without slowly adapting it to their interests and intuitions. So the concept of &amp;quot;public good&amp;quot; as a contrast to the wish to be better off than others also increased in importance. When given the option, most people of these generations would happily make any information of theirs or their employer, even the state, public, further increasing the trend towards transparency.&lt;br /&gt;
&lt;br /&gt;
Quantum computers and new variations of the AI, all interacting with each other, are among the most useful of today&#039;s technologies. At least Mark thinks so, as he is heading for the jobcenter in the zeppelin. Of course he could have chosen a faster method of transportation, but he enjoys the view on his village and the surounding forests. A quick chat with the only other passenger revealed that it was, in fact, the employee of the local job center. He also switches from place to place, since individual villages and cities do not always require a lot of attention by the jobcenter, and also because most people would not want to work at the same place all the time. Like most jobs, the work at the jobcenter is automated wherever there are not enough humans who want the profession, with contingencies in case this number changes spontaneously. &lt;br /&gt;
&lt;br /&gt;
The jobcenter prepared and evaluated everyone who wanted to, exchanged information with potential employers, and eventually gave strong advice on what to do. Mark would then mostly be free in choosing different career paths and positions, but incentivised with a cut of the profit that the company would earn for the state. Mark remembered from History class that this was originally an optional, at some point even mandatory part of the so-called welfare system, next to a private market. It was even disincentivised to be a part of the welfare system. With some parts of the state wishing not to lose taxes and market control, the system was also gradually changed so it could fit the people who would have started to leave the system. But increased trust in the state and a wish for equality later led to everyone being a part of this payment system by law. Mark has only visited the jobcenter once so far, when he applied to be a part of the local neighborhood help group. But basically, anyone can be a part of such groups, so it was not much of an application. He just got a little more money from the state. Not that it was a huge deviation from his basic income, but it was still useful, since he wanted to build a new printer. Before the AI was used to create an organism that consumed most kinds of plastic under specific conditions, the Material Mark uses for his Printer must have been a real problem for the environment, since it amassed basically everywhere. Some fish still carry small amounts of plastic in them. Not that it would affect Mark too much. After all, there really is no reason to eat animals. Their taste can easily be replicated for the people who want that. So, microplastics in some Fish is more of an abstract thought to Mark. Also, the problem is getting better and better anyway; enough people care about wildlife protection, so there is no reason to be pessimistic. At least in Mark&#039;s state, anyone could vote on certain agendas brought forward by anyone in society, including changing some executive power, even before the next election. A current issue is a ban on some kinds of short-video sites. Mark would be fine with spending 3 hours a day on there, even though the AI told him of a higher likelihood of long-term unhappiness with this kind of video consumption. But someone put the proposal to a vote, it got shared by interested people and to random people in order to determine how controversial and important the matter is. Since it was deemed very important and controversial based on most people reacting to it, and with mixed reactions, over 50% of the whole society needs to agree to the proposal in order to make it happen. But it was contradictory to and therefore affecting a part of the constitution, so the proposal needed 70% anyway. A vote to stop the vaccinations against the common cold failed recently; there might still be some unknown cases around. This is a common political topic, after all, most diseases are almost or entirely gone thanks to such vaccinations and effective treatments. &lt;br /&gt;
&lt;br /&gt;
== Epilog ==&lt;br /&gt;
The Utopia shown has various dystopian aspects. Especially noteworthy here is the start of this utopia. After all, only various leaks and specific government responses to the matter made this utopia possible, even after an AI was developed. It seems even more likely that a few actors would monopolise this power, creating a less pleasant society. But the Utopia itself is also unstable. At some point, for example due to a limited interest in politics, the transparent and democratic society could erode, the system for voting on specific issues could be filled with useless proposals to lessen the interest in the democratic process, and finally, it should be possible to somehow change a version of the AI to suit the interests of some actors and guaranty them a maintained power.&lt;br /&gt;
&lt;br /&gt;
Also, the entire society is dependent on the AI, at least to apply previous conclusions to practical problems. There are some engineers and scientists, but a situation could develop, in which fewer and fewer people want to perform mentally challenging tasks. When the AI has some pattern of errors, which allows it to consistently make mistakes, the entire society could develop horribly or collapse, since no one really checks the output anymore. Automated systems, including versions of the AI, could fail and trigger humanitarian or economic disasters.&lt;br /&gt;
&lt;br /&gt;
It is also clear that this society will tend to become more and more dependent on the AI, since it seems to provide perfect output while you ask a question close to an already analysed topic or as long as you provide sufficient processing power. Maybe the society ends up regulating access to the AI more, especially for children, maybe it does not. This raises the question of how important the search for knowledge is to humanity. Because, depending on how accessible the AI is for answering simple questions, people could lose interest and the ability to form these thoughts themselves. This is also the case in a short story by Emanuel Holst, gathered in the glossaLAB article [[Artificial Intelligence (Cyberutopias)]]. The AI in the mentioned short story literally automated all relevant processes; there is little actual ability of humans to positively influence the world around them. In the text here, the AI does however only serve as a tool for answering questions. But it also leads to immense automation, with the potential that systems just ask the AI questions automatically. So the people of this world might feel more useful and can have a part in positive change, but this state of society can end, as soon as increased automation or decreased education get popular political agendas.&lt;br /&gt;
&lt;br /&gt;
The Utopia of this text is also connected to various historical utopias of history: &lt;br /&gt;
&lt;br /&gt;
For example, the political system fits the idea of a social contract brought forward by Rousseau very well because it mostly resembles a direct democracy and the utopia of a transparent society. Rousseau envisioned a concept in which individuals like politicians would agree to support and obey the general will, a rather fragile promise in reality. Since the utopia here provides a very high possibility to inform and then know about political actions and possibilities, and since the state can be controlled rather directly, the actions of the state will fit the general will very precisely. As explained in the GlossaLAB lab article [[A transparent world]], increased information about each other as a mechanism against prejustice and inequality is also a part of the utopia of transparent information that the AI can provide. Not only because everyone has the potential to access information, but this access is also rather easy, so most people will actually use this potential.&lt;br /&gt;
&lt;br /&gt;
The AI and the Positivist Network it interacts with are mostly inspired by the Institut International de Bibliographiee that aimed to collect the human scientific achievements and categorise them in a useful and understandable way. Some differences in the Positivist Network are a higher amount of control on who participates, a lacking need to make branches of possibilities, in case some experiment or assumption is invalid, and a higher cost of storing, linking, and communicating information. The idea of positivism also lies in purely positive changes to the beliefs you have. The AI and the Positivist Network seem to resemble this quite well, since all conclusions must be based on proof. But the system could be sabotaged or faulty and then contain some sort of mistake, which needs to be found. A problem possibly also encountered by Paul Outlet and the Institut International de Bibliographiee, but still a significant deviation from their utopia of perfect wisdom.&lt;br /&gt;
&lt;br /&gt;
The AI also resembles the utopias of perfect language and perfect thinking. It seems like an automated version of Ramon Llulls &amp;quot;logical machine&amp;quot;&amp;lt;ref&amp;gt;Priani, Ernesto, &amp;quot;Ramon Llull&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2025/entries/llull/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;. This &amp;quot;logical machine&amp;quot; might not have been more than a tablet used to explain and memorise the core of deductive logic and make the concept more credible, but combined with a person following a logical line of argument, it was already a usefull tool to ensure the validity of some proof. Translating between typical language and logical connections, deciding on what path to argue, and also partially for what to argue are also tasks of the AI, which are done by a human with a &amp;quot;logical machine&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Later, Leibniz also attempted to form a universal logical language and a way of working with it. This also had the background of trying to prove aspects of the world by minimising the amount of axions used. So the AI seems like an automated tool for what he envisioned. &amp;quot;The Principia Mathematica&amp;quot;&amp;lt;ref&amp;gt;Linsky, Bernard and Andrew David Irvine, &amp;quot;&#039;&#039;Principia Mathematica&#039;&#039;&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Fall 2024 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/fall2024/entries/principia-mathematica/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; later achieved this way of arguing for all relevant mathematical correlations at the time, so it should have a form comparable to a contribution in the Positivist Network of this utopia. &lt;br /&gt;
&lt;br /&gt;
Modern utopias of perfect thinking and perfect language that relate to this Utopia are also related. With tools like ChatGpt, the ability to work with language - inputs and -outputs seems close in reality, but Noam Chomskys&#039; vision of an actual translation&amp;lt;ref&amp;gt;Scholz, Barbara C., Francis Jeffry Pelletier, Geoffrey K. Pullum, and Ryan Nefdt, &amp;quot;Philosophy of Linguistics&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Summer 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/sum2025/entries/linguistics/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;, that does not lose information is actualy achived by the AI in this utopia.&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Ency_term&amp;diff=31781</id>
		<title>Template:Ency term</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Ency_term&amp;diff=31781"/>
		<updated>2026-01-22T12:53:46Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;{{{1|}}}&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: {&lt;br /&gt;
    &amp;quot;en&amp;quot;: &amp;quot;Displays a term in bold text&amp;quot;,&lt;br /&gt;
    &amp;quot;es&amp;quot;: &amp;quot;Muestra un término en negrita&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Term&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Término&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;The word or phrase to display in bold.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;La palabra o frase que se mostrará en negrita.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Logic&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Infobox_IESC&amp;diff=31780</id>
		<title>Template:Infobox IESC</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Infobox_IESC&amp;diff=31780"/>
		<updated>2026-01-22T12:53:02Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{| class=&amp;quot;gl-infobox IESC&amp;quot;&lt;br /&gt;
|- class=&amp;quot;gl-infobox-firstrow&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Belongs to collection|{{#show:Property:Belongs to collection|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Belongs to collection}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Was_published_on_date|{{#show:Property:Was_published_on_date|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Was_published_on_date}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | {{int|vol-num|lang={{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}}}&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | [[Special:Ask/-5B-5BBelongs_to_collection-3A-3A{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}-5D-5D-5B-5BHas_written_language_code-3A-3A{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}-5D-5D-5B-5BContained_in_volume-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}-5D-5D|&#039;&#039;{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}&#039;&#039;]]([[Special:Ask/-5B-5BBelongs_to_collection-3A-3A{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}-5D-5D-5B-5BHas_written_language_code-3A-3A{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}-5D-5D-5B-5BContained_in_volume-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}-5D-5D-5B-5BContained_in_number-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}-5D-5D|{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]])&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Has_ID|{{#show:Property:Has_ID|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#ask:&lt;br /&gt;
  [[Belongs_to_collection::{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}]]&lt;br /&gt;
  [[Has_written_language_code::{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}]]&lt;br /&gt;
  [[Contained_in_volume::{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}]]&lt;br /&gt;
  [[Contained_in_number::{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]]&lt;br /&gt;
  [[Has_ID::&amp;lt;&amp;lt; {{#show:{{FULLPAGENAME}}|?Has_ID#-}}]]&lt;br /&gt;
  |?#-&lt;br /&gt;
  |sort=Has_ID&lt;br /&gt;
  |order=descending&lt;br /&gt;
  |limit=1&lt;br /&gt;
  |format=plainlist&lt;br /&gt;
  |template=Infobox arrowlink sub&lt;br /&gt;
  |userparam=◀&lt;br /&gt;
  |mainlabel=-&lt;br /&gt;
  |searchlabel=&lt;br /&gt;
}} {{#show:{{FULLPAGENAME}}|?Has_ID #-}} {{#ask:&lt;br /&gt;
  [[Belongs_to_collection::{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}]]&lt;br /&gt;
  [[Has_written_language_code::{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}]]&lt;br /&gt;
  [[Contained_in_volume::{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}]]&lt;br /&gt;
  [[Contained_in_number::{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]]&lt;br /&gt;
  [[Has_ID::&amp;gt;&amp;gt; {{#show:{{FULLPAGENAME}}|?Has_ID#-}}]]&lt;br /&gt;
  |?#-&lt;br /&gt;
  |sort=Has_ID&lt;br /&gt;
  |order=ascending&lt;br /&gt;
  |limit=1&lt;br /&gt;
  |format=plainlist&lt;br /&gt;
  |template=Infobox arrowlink sub&lt;br /&gt;
  |userparam=▶&lt;br /&gt;
  |mainlabel=-&lt;br /&gt;
  |searchlabel=&lt;br /&gt;
}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Belongs_to_type|{{#show:Property:Belongs_to_type|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Belongs_to_type}}&lt;br /&gt;
|}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: {&lt;br /&gt;
    &amp;quot;en&amp;quot;: &amp;quot;Displays an infobox for IESC entries, showing semantic properties such as collection, publication date, volume, ID, and type.&amp;quot;,&lt;br /&gt;
    &amp;quot;es&amp;quot;: &amp;quot;Muestra una infobox para entradas IESC, mostrando propiedades semánticas como colección, fecha de publicación, volumen, ID y tipo.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;params&amp;quot;: {},&lt;br /&gt;
  &amp;quot;format&amp;quot;: &amp;quot;block&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Int&amp;diff=31779</id>
		<title>Template:Int</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Int&amp;diff=31779"/>
		<updated>2026-01-22T12:50:54Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: {&lt;br /&gt;
    &amp;quot;en&amp;quot;: &amp;quot;Takes a MediaWiki namespace page as anonymous argument and an optional second argument &#039;lang&#039;, that can specify the preferred language of transcription. By default, &#039;lang&#039; = &#039;page content language&#039;.&amp;quot;,&lt;br /&gt;
    &amp;quot;es&amp;quot;: &amp;quot;Toma una página de espacio de nombres de MediaWiki como argumento anónimo y un segundo argumento opcional &#039;lang&#039;, que puede especificar el idioma preferido para la transcripción. Por defecto, &#039;lang&#039; = &#039;idioma del contenido de la página&#039;.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Page&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Página&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;The MediaWiki page (namespace page) to render a message for.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;La página de MediaWiki (página de espacio de nombres) para la cual se mostrará un mensaje.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Property:Has_written_language_code&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;lang&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Language code&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Código de idioma&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Optional. The preferred language for transcription. If omitted, the page content language is used.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Opcional. El idioma preferido para la transcripción. Si se omite, se usa el idioma del contenido de la página.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;en&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&amp;lt;includeonly&amp;gt;{{#invoke:Int|renderIntMessage|{{{1}}}|lang={{#if: {{{lang|}}}|{{{lang}}}|{{PAGELANGUAGE}}}}}}&amp;lt;/includeonly&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Int&amp;diff=31778</id>
		<title>Template:Int</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Int&amp;diff=31778"/>
		<updated>2026-01-22T12:47:42Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;Takes a MediaWiki namespace page as anonymous argument and an optional second argument &#039;lang&#039;, that can specify the preferred language of transcription. By default, &#039;lang&#039; = &#039;page content language&#039;.&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: {&lt;br /&gt;
    &amp;quot;en&amp;quot;: &amp;quot;Takes a MediaWiki namespace page as anonymous argument and an optional second argument &#039;lang&#039;, that can specify the preferred language of transcription. By default, &#039;lang&#039; = &#039;page content language&#039;.&amp;quot;,&lt;br /&gt;
    &amp;quot;es&amp;quot;: &amp;quot;Toma una página de espacio de nombres de MediaWiki como argumento anónimo y un segundo argumento opcional &#039;lang&#039;, que puede especificar el idioma preferido para la transcripción. Por defecto, &#039;lang&#039; = &#039;idioma del contenido de la página&#039;.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Page&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Página&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;The MediaWiki page (namespace page) to render a message for.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;La página de MediaWiki (página de espacio de nombres) para la cual se mostrará un mensaje.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Property:Has_written_language_code&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;lang&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Language code&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Código de idioma&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Optional. The preferred language for transcription. If omitted, the page content language is used.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Opcional. El idioma preferido para la transcripción. Si se omite, se usa el idioma del contenido de la página.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;en&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&amp;lt;includeonly&amp;gt;{{#invoke:Int|renderIntMessage|{{{1}}}|lang={{#if: {{{lang|}}}|{{{lang}}}|{{PAGELANGUAGE}}}}}}&amp;lt;/includeonly&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Infobox_glossariumBITri&amp;diff=31777</id>
		<title>Template:Infobox glossariumBITri</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Infobox_glossariumBITri&amp;diff=31777"/>
		<updated>2026-01-22T12:46:38Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{| class=&amp;quot;gl-infobox glossariumBITri&amp;quot;&lt;br /&gt;
|- class=&amp;quot;gl-infobox-firstrow&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Belongs to collection|{{#show:Property:Belongs to collection|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Belongs to collection}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Has_author|{{#show:Property:Has_author|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Has_author}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Has_curator|{{#show:Property:Has_curator|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Has_curator}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Was_published_on_date|{{#show:Property:Was_published_on_date|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Was_published_on_date}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | {{int|vol-num|lang={{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}}}&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | [[Special:Ask/-5B-5BBelongs_to_collection-3A-3A{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}-5D-5D-5B-5BHas_written_language_code-3A-3A{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}-5D-5D-5B-5BContained_in_volume-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}-5D-5D|&#039;&#039;{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}&#039;&#039;]]([[Special:Ask/-5B-5BBelongs_to_collection-3A-3A{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}-5D-5D-5B-5BHas_written_language_code-3A-3A{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}-5D-5D-5B-5BContained_in_volume-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}-5D-5D-5B-5BContained_in_number-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}-5D-5D|{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]])&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Has_ID|{{#show:Property:Has_ID|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#ask:&lt;br /&gt;
  [[Belongs_to_collection::{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}]]&lt;br /&gt;
  [[Has_written_language_code::{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}]]&lt;br /&gt;
  [[Contained_in_volume::{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}]]&lt;br /&gt;
  [[Contained_in_number::{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]]&lt;br /&gt;
  [[Has_ID::&amp;lt;&amp;lt; {{#show:{{FULLPAGENAME}}|?Has_ID#-}}]]&lt;br /&gt;
  |?#-&lt;br /&gt;
  |sort=Has_ID&lt;br /&gt;
  |order=descending&lt;br /&gt;
  |limit=1&lt;br /&gt;
  |format=plainlist&lt;br /&gt;
  |template=Infobox arrowlink sub&lt;br /&gt;
  |userparam=◀&lt;br /&gt;
  |mainlabel=-&lt;br /&gt;
  |searchlabel=&lt;br /&gt;
}} {{#show:{{FULLPAGENAME}}|?Has_ID #-}} {{#ask:&lt;br /&gt;
  [[Belongs_to_collection::{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}]]&lt;br /&gt;
  [[Has_written_language_code::{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}]]&lt;br /&gt;
  [[Contained_in_volume::{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}]]&lt;br /&gt;
  [[Contained_in_number::{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]]&lt;br /&gt;
  [[Has_ID::&amp;gt;&amp;gt; {{#show:{{FULLPAGENAME}}|?Has_ID#-}}]]&lt;br /&gt;
  |?#-&lt;br /&gt;
  |sort=Has_ID&lt;br /&gt;
  |order=ascending&lt;br /&gt;
  |limit=1&lt;br /&gt;
  |format=plainlist&lt;br /&gt;
  |template=Infobox arrowlink sub&lt;br /&gt;
  |userparam=▶&lt;br /&gt;
  |mainlabel=-&lt;br /&gt;
  |searchlabel=&lt;br /&gt;
}} &lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Belongs_to_type|{{#show:Property:Belongs_to_type|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Belongs_to_type}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Supported_by_Knowledge_Domain|{{#show:Property:Supported_by_Knowledge_Domain|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Supported_by_Knowledge_Domain}}&lt;br /&gt;
{{#if: {{#show:{{FULLPAGENAME}}|?Has alternative english voice}} |&amp;lt;tr class=&amp;quot;gl-infobox-row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th class=&amp;quot;gl-infobox-label&amp;quot;&amp;gt;[[Property:Has alternative english voice|{{#show:Property:Has alternative english voice|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td class=&amp;quot;gl-infobox-value&amp;quot;&amp;gt;{{#show:{{FULLPAGENAME}}|?Has alternative english voice}}&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;}}{{#if: {{#show:{{FULLPAGENAME}}|?Has alternative spanish voice}} |&amp;lt;tr class=&amp;quot;gl-infobox-row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th class=&amp;quot;gl-infobox-label&amp;quot;&amp;gt;[[Property:Has alternative spanish voice|{{#show:Property:Has alternative spanish voice|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td class=&amp;quot;gl-infobox-value&amp;quot;&amp;gt;{{#show:{{FULLPAGENAME}}|?Has alternative spanish voice}}&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;}}{{#if: {{#show:{{FULLPAGENAME}}|?Has alternative french voice}} |&amp;lt;tr class=&amp;quot;gl-infobox-row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th class=&amp;quot;gl-infobox-label&amp;quot;&amp;gt;[[Property:Has alternative french voice|{{#show:Property:Has alternative french voice|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td class=&amp;quot;gl-infobox-value&amp;quot;&amp;gt;{{#show:{{FULLPAGENAME}}|?Has alternative french voice}}&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;}}{{#if: {{#show:{{FULLPAGENAME}}|?Has alternative german voice}} |&amp;lt;tr class=&amp;quot;gl-infobox-row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th class=&amp;quot;gl-infobox-label&amp;quot;&amp;gt;[[Property:Has alternative german voice|{{#show:Property:Has alternative german voice|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td class=&amp;quot;gl-infobox-value&amp;quot;&amp;gt;{{#show:{{FULLPAGENAME}}|?Has alternative german voice}}&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;}}&lt;br /&gt;
|}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: {&lt;br /&gt;
    &amp;quot;en&amp;quot;: &amp;quot;Displays a detailed infobox for a glossary or reference entry (Glossarium BITri), showing semantic properties of the current page such as collection, authors, curator, publication date, volume, ID, type, knowledge domain, and alternative voices.&amp;quot;,&lt;br /&gt;
    &amp;quot;es&amp;quot;: &amp;quot;Muestra una infobox detallada para una entrada de glosario o referencia (Glossarium BITri), mostrando las propiedades semánticas de la página actual como colección, autores, curador, fecha de publicación, volumen, ID, tipo, dominio de conocimiento y voces alternativas.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;params&amp;quot;: {},&lt;br /&gt;
  &amp;quot;format&amp;quot;: &amp;quot;block&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Infobox_arrowlink_sub&amp;diff=31776</id>
		<title>Template:Infobox arrowlink sub</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Infobox_arrowlink_sub&amp;diff=31776"/>
		<updated>2026-01-22T12:43:12Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;[[{{{1}}}|{{{#userparam}}}]]&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: {&lt;br /&gt;
    &amp;quot;en&amp;quot;: &amp;quot;Creates a link to a page with a custom display text.&amp;quot;,&lt;br /&gt;
    &amp;quot;es&amp;quot;: &amp;quot;Crea un enlace a una página con un texto de visualización personalizado.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Target page&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Página de destino&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;The page that the link should point to.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;La página a la que debe apuntar el enlace.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Philosophy&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;#userparam&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Display text&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Texto de visualización&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;The text that will be shown for the link.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;El texto que se mostrará para el enlace.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;See Philosophy page&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Show_other_languages&amp;diff=31775</id>
		<title>Template:Show other languages</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Show_other_languages&amp;diff=31775"/>
		<updated>2026-01-22T12:42:08Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{{#if: {{#show:{{FULLPAGENAME}}|?Available in other language as}} |&lt;br /&gt;
{{int|also-available-as|lang={{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}}}: {{#ask: [[-Available in other language as::{{FULLPAGENAME}}]]&lt;br /&gt;
|?Has written language code&lt;br /&gt;
|format=plainlist&lt;br /&gt;
|headers=hide&lt;br /&gt;
}}{{#if: {{#ask:[[Available in other language as::{{FULLPAGENAME}}]]}} |, {{#ask: [[Available in other language as::{{FULLPAGENAME}}]]&lt;br /&gt;
|?Has written language code&lt;br /&gt;
|format=plainlist&lt;br /&gt;
|headers=hide }} }}|{{#if: {{#ask:[[Available in other language as::{{FULLPAGENAME}}]]}} |{{int|also-available-as|lang={{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}}}: {{#ask: [[Available in other language as::{{FULLPAGENAME}}]]&lt;br /&gt;
|?Has written language code&lt;br /&gt;
|format=plainlist&lt;br /&gt;
|headers=hide }} }} }}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: {&lt;br /&gt;
    &amp;quot;en&amp;quot;: &amp;quot;Displays a list of other languages in which this page is available, based on semantic properties.&amp;quot;,&lt;br /&gt;
    &amp;quot;es&amp;quot;: &amp;quot;Muestra una lista de otros idiomas en los que esta página está disponible, basada en propiedades semánticas.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;params&amp;quot;: {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Show_simple_ref_sub&amp;diff=31774</id>
		<title>Template:Show simple ref sub</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Show_simple_ref_sub&amp;diff=31774"/>
		<updated>2026-01-22T12:39:14Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;font-size:95%; color:#777;&amp;quot;&amp;gt;{{#arraymap:{{{1}}}|,|x|{{PAGENAME:x}}|,\s}} ({{{2|}}}). {{PAGENAME}}, &#039;&#039;{{{3}}}&#039;&#039;, {{#if: {{{4|}}} |&#039;&#039;{{{4}}}&#039;&#039;}}{{#if: {{{5|}}} |({{{5}}}): {{{6|}}}|-{{{6|}}}}}.&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: {&lt;br /&gt;
    &amp;quot;en&amp;quot;: &amp;quot;Displays a formatted reference/citation in a compact style.&amp;quot;,&lt;br /&gt;
    &amp;quot;es&amp;quot;: &amp;quot;Muestra una referencia/cita formateada en un estilo compacto.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Author(s)&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Autor(es)&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Comma-separated list of authors. Each author will link to a page with their name.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Lista de autores separada por comas. Cada autor enlazará a una página con su nombre.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Doe, Smith, Johnson&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;2&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Year&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Año&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Publication year of the source.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Año de publicación de la fuente.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;2023&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;3&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Title&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Título&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Title of the work being cited.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Título de la obra citada.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Introduction to Formal Logic&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;4&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Subtitle&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Subtítulo&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Optional subtitle of the work.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Subtítulo opcional de la obra.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Second Edition&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;5&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Volume or part&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Volumen o parte&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Optional volume, part, or chapter information.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Información opcional sobre volumen, parte o capítulo.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Vol. 2&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;6&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Page numbers&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Números de página&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;description&amp;quot;: {&lt;br /&gt;
        &amp;quot;en&amp;quot;: &amp;quot;Optional page numbers or range.&amp;quot;,&lt;br /&gt;
        &amp;quot;es&amp;quot;: &amp;quot;Números de página o rango opcional.&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;pp. 15–20&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:RefToEq&amp;diff=31773</id>
		<title>Template:RefToEq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:RefToEq&amp;diff=31773"/>
		<updated>2026-01-22T12:38:08Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{{#refeq:{{{1}}}}}&amp;lt;/includeonly&amp;gt;&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
	&amp;quot;description&amp;quot;: {&lt;br /&gt;
		&amp;quot;en&amp;quot;: &amp;quot;Generates a reference to a previously defined equation on the same page, which can be a &#039;&#039;label&#039;&#039; if declared using the 2nd parameter of {{eq}}, or an &#039;&#039;automatically assigned label&#039;&#039; which can be seen once the page is visualized. NOTE: References to equations on other pages do not work directly; use HTML anchors or standard page links for that. For instance: [[gB:Fuzzy logic#eqnum-1 | Ec. 1 in Fuzzy Logic]] or [[trial#trial_eq_label | trial equation in article trial]].&amp;quot;,&lt;br /&gt;
		&amp;quot;es&amp;quot;: &amp;quot;Genera una referencia a una ecuación definida previamente en la misma página, que puede ser una &#039;&#039;etiqueta&#039;&#039; si se declaró usando el segundo parámetro de {{eq}}, o una &#039;&#039;etiqueta asignada automáticamente&#039;&#039; que puede verse una vez que la página es visualizada. NOTA: Las referencias a ecuaciones en otras páginas no funcionan directamente; para ello use anclas HTML o enlaces estándar a páginas. Por ejemplo: [[gB:Fuzzy logic#eqnum-1 | Ec. 1 en Lógica Difusa]] o [[trial#trial_eq_label | ecuación del artículo trial]].&amp;quot;&lt;br /&gt;
	},&lt;br /&gt;
	&amp;quot;params&amp;quot;: {&lt;br /&gt;
		&amp;quot;1&amp;quot;: {&lt;br /&gt;
			&amp;quot;label&amp;quot;: {&lt;br /&gt;
				&amp;quot;en&amp;quot;: &amp;quot;Equation label&amp;quot;,&lt;br /&gt;
				&amp;quot;es&amp;quot;: &amp;quot;Etiqueta de la ecuación&amp;quot;&lt;br /&gt;
			},&lt;br /&gt;
			&amp;quot;description&amp;quot;: {&lt;br /&gt;
				&amp;quot;en&amp;quot;: &amp;quot;The label of the equation to reference. Must match exactly the label used in {{eq}}, either the one attributed in the declaration of {{eq}} or the one automatically asigned, which can be seen once the page is visualized. In the second case, if the equation you wish to the refer to is visualised with, for example, number 3, the corresponding reference is eqnum-3. NOTE: If a label has been declared in the {{eq}} call, that is the only label you can use.&amp;quot;,&lt;br /&gt;
				&amp;quot;es&amp;quot;: &amp;quot;La etiqueta de la ecuación a la que se desea hacer referencia. Debe coincidir exactamente con la etiqueta utilizada en {{eq}}, ya sea la atribuida en la declaración de {{eq}} o la asignada automáticamente, la cual puede verse una vez que la página es visualizada. En este segundo caso, si la ecuación a la que desea referirse se visualiza, por ejemplo, con el número 3, la referencia correspondiente es eqnum-3. NOTA: Si se ha declarado una etiqueta en la llamada a {{eq}}, esa es la única etiqueta que puede utilizarse.&amp;quot;&lt;br /&gt;
			},&lt;br /&gt;
			&amp;quot;example&amp;quot;: &amp;quot;eqnum-3&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
			&amp;quot;required&amp;quot;: true&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=31772</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=31772"/>
		<updated>2026-01-22T12:36:39Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
	&amp;quot;description&amp;quot;: {&lt;br /&gt;
		&amp;quot;en&amp;quot;: &amp;quot;Displays a formalized expression as an automatically numbered equation, which can be linked within the same page or from another. This can be an arbitrary wikitext content, like a math expression or an image.&amp;quot;,&lt;br /&gt;
		&amp;quot;es&amp;quot;: &amp;quot;Muestra una expresión formalizada como una ecuación numerada automáticamente, que puede enlazarse dentro de la misma página o desde otra. Puede ser cualquier contenido en wikitexto, como una expresión matemática o una imagen.&amp;quot;&lt;br /&gt;
	},&lt;br /&gt;
	&amp;quot;params&amp;quot;: {&lt;br /&gt;
		&amp;quot;1&amp;quot;: {&lt;br /&gt;
			&amp;quot;label&amp;quot;: {&lt;br /&gt;
				&amp;quot;en&amp;quot;: &amp;quot;Formalized expression&amp;quot;,&lt;br /&gt;
				&amp;quot;es&amp;quot;: &amp;quot;Expresión formalizada&amp;quot;&lt;br /&gt;
			},&lt;br /&gt;
			&amp;quot;description&amp;quot;: {&lt;br /&gt;
				&amp;quot;en&amp;quot;: &amp;quot;The mathematical or logical expression to display as an equation.&amp;quot;,&lt;br /&gt;
				&amp;quot;es&amp;quot;: &amp;quot;La expresión matemática o lógica que se mostrará como una ecuación.&amp;quot;&lt;br /&gt;
			},&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;content&amp;quot;,&lt;br /&gt;
			&amp;quot;required&amp;quot;: true,&lt;br /&gt;
			&amp;quot;example&amp;quot;: &amp;quot;&amp;lt;math&amp;gt;a = b&amp;lt;/math&amp;gt;&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;label&amp;quot;: {&lt;br /&gt;
			&amp;quot;label&amp;quot;: {&lt;br /&gt;
				&amp;quot;en&amp;quot;: &amp;quot;Equation label&amp;quot;,&lt;br /&gt;
				&amp;quot;es&amp;quot;: &amp;quot;Etiqueta de la ecuación&amp;quot;&lt;br /&gt;
			},&lt;br /&gt;
			&amp;quot;description&amp;quot;: {&lt;br /&gt;
				&amp;quot;en&amp;quot;: &amp;quot;Optional label for the equation, which must be unique on the page. Use this if you want to reference it using {{RefToEq}} within the same page, or using a standard link from another page: [[trial#trial_eq_label | trial equation in article trial]].&amp;quot;,&lt;br /&gt;
				&amp;quot;es&amp;quot;: &amp;quot;Etiqueta opcional para la ecuación, que debe ser única en la página. Úsela si desea hacer referencia a ella mediante {{RefToEq}} dentro de la misma página, o mediante un enlace estándar desde otra página: [[trial#trial_eq_label | ecuación del artículo trial]].&amp;quot;&lt;br /&gt;
			},&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
			&amp;quot;required&amp;quot;: false,&lt;br /&gt;
			&amp;quot;example&amp;quot;: &amp;quot;equality&amp;quot;&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=31771</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=31771"/>
		<updated>2026-01-22T12:31:12Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
	&amp;quot;description&amp;quot;: {&lt;br /&gt;
		&amp;quot;en&amp;quot;: &amp;quot;Displays a formalized expression as an automatically numbered equation, which can be linked within the same page or from another. This can be an arbitrary wikitext content, like a math expression or an image.&amp;quot;,&lt;br /&gt;
		&amp;quot;es&amp;quot;: &amp;quot;Muestra una expresión formalizada como una ecuación numerada automáticamente, que puede enlazarse dentro de la misma página o desde otra. Puede ser cualquier contenido en wikitexto, como una expresión matemática o una imagen.&amp;quot;&lt;br /&gt;
	},&lt;br /&gt;
	&amp;quot;params&amp;quot;: {&lt;br /&gt;
		&amp;quot;1&amp;quot;: {&lt;br /&gt;
			&amp;quot;label&amp;quot;: &amp;quot;Formalized expression&amp;quot;,&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;The mathematical or logical expression to display as an equation.&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;content&amp;quot;,&lt;br /&gt;
			&amp;quot;required&amp;quot;: true,&lt;br /&gt;
			&amp;quot;example&amp;quot;: &amp;quot;&amp;lt;math&amp;gt;a = b&amp;lt;/math&amp;gt;&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;label&amp;quot;: {&lt;br /&gt;
			&amp;quot;label&amp;quot;: &amp;quot;Equation label&amp;quot;,&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Optional label for the equation, which must be unique on the page. Use this if you want to reference it using {{RefToEq}} within the same page, or using a standard link from another page: [[trial#trial_eq_label | trial equation in article trial]].&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
			&amp;quot;required&amp;quot;: false,&lt;br /&gt;
			&amp;quot;example&amp;quot;: &amp;quot;equality&amp;quot;&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:TestGlossaLAB&amp;diff=31765</id>
		<title>Draft:TestGlossaLAB</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:TestGlossaLAB&amp;diff=31765"/>
		<updated>2026-01-22T12:05:46Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== ᛜᛝᛠᛱᛱᛟ ==&lt;br /&gt;
[[File:11 Arbeit PE 240502 005234.pdf|thumb]]&lt;br /&gt;
&amp;lt;span lang=&amp;quot;de&amp;quot; dir=&amp;quot;ltr&amp;quot;&amp;gt;ㅤㅤ&amp;lt;/span&amp;gt; Persönliche Daten: ㅤㅤㅤㅤㅤㅤĈ&lt;br /&gt;
[[File:2c466232-f0a0-44c2-a35f-019193de7bd2 1.png|thumb|hm]]&lt;br /&gt;
[[File:9ee4505c-461f-4229-9689-251c9b8b03e8.png|thumb|testkkäpoä]]&lt;br /&gt;
&lt;br /&gt;
=== Adresse:Am Waldhang 17b&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;https://moodle.hm.edu/course/view.php?id=19225&amp;lt;/ref&amp;gt;                                                      ===&lt;br /&gt;
{{TOC left}}&lt;br /&gt;
&lt;br /&gt;
          Hallo&lt;br /&gt;
&lt;br /&gt;
Telefon mobil:                                     +49 160 9254&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;https://www.google.com/maps&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= çčǎÇĈỞṘṞởṅ  7786[[User:Thomas Holzberger|Thomas Holzberger]] ([[User talk:Thomas Holzberger|talk]]) 14:32, 17 October 2025 &amp;lt;ref&amp;gt;no&amp;lt;/ref&amp;gt;(CEST)ἊἝἇ =&lt;br /&gt;
E-Mail:                                                   tomholzb@gmail.com &lt;br /&gt;
[[File:11 Arbeit PE 240502 005234.pdf|alt=Menschling|thumb|Thomas]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Geburtsdatum/-ort:                           09.02.2004 i&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;n Landshutggggggggggggggggggggggggggggggggggggggggggggg&amp;lt;ref&amp;gt;https://www.google.com/maps&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ㅤㅤSchulbildung: ㅤㅤᛙᛜ&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
!&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|inonö,.öü,,ü,üp&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;gallery perrow=&amp;quot;2&amp;quot; caption=&amp;quot;klkl&amp;quot;&amp;gt;&lt;br /&gt;
File:5d735f2d-3bbd-48b5-9b1d-bfedb47069bf 1.png&lt;br /&gt;
File:2c466232-f0a0-44c2-a35f-019193de7bd2 1.png|.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&amp;lt;!-- This is not the newest version --&amp;gt;ㅤㅤㅤㅤㅤㅤㅤㅤㅤㅤㅤㅤㅤㅤㅤㅤ&lt;br /&gt;
&lt;br /&gt;
[[File:7a81d057-e88b-44ab-a62c-5dc6c8378895.png|alt=gfggd|thumb|b]]&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;u&amp;gt;10 / 2023 – Jetzt:            Studium der Luft- und Raumfahrttechnik an der Hochschule München&amp;lt;/u&amp;gt; &amp;lt;ref&amp;gt;[[Draft:Moral relativism]]&amp;lt;/ref&amp;gt; =&lt;br /&gt;
09 / 2023 – 10 / 2023:    Praktikum bei „Lorenz Behälter- und Apparatebau“&lt;br /&gt;
&lt;br /&gt;
== 10 / 2022 – 09 / 2023:   Maschinenbaustudium an der Hochschule München ==&lt;br /&gt;
06 / 2022:                         Erlangung der Allgemeinen Hochschulreife&lt;br /&gt;
&lt;br /&gt;
09 / 2014 – 06 / 2022:  Gymnasium Seligenthal&lt;br /&gt;
&lt;br /&gt;
08 / 2010 – 07 / 2014:   Grundschule Buch am Erlbach&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;ada95&amp;quot;&amp;gt;&lt;br /&gt;
yflöbmkldyfmkl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Political Philosophy (Diaz Nafria) =&lt;br /&gt;
&lt;br /&gt;
* Kurs&lt;br /&gt;
* Teilnehmer/innen&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Mehr&lt;br /&gt;
* ere&lt;br /&gt;
* e Fe&lt;br /&gt;
* f&lt;br /&gt;
* we&lt;br /&gt;
* f&lt;br /&gt;
* erewrewrewr&lt;br /&gt;
* errerwerewrwer&lt;br /&gt;
* errwejzujju7  &lt;br /&gt;
* 3ewfew&lt;br /&gt;
* uhuiiuhra&lt;br /&gt;
* rra&lt;br /&gt;
* kkl&lt;br /&gt;
** mmklmkl&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Abschnittsübersicht&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* Alles einklappen&lt;br /&gt;
&amp;lt;references /&amp;gt;&amp;lt;ref&amp;gt;KHI&amp;lt;/ref&amp;gt;&lt;br /&gt;
** ==Access to the course&#039;s public website (containing documentation, access to materials, etc)==&lt;br /&gt;
** News and Announcements Forum&lt;br /&gt;
** Free discussion forum&lt;br /&gt;
**&lt;br /&gt;
* Onsite Sessions  These are the 7 onsite sessions planned to be held at HM, FK13, Building T (Dachauerstr. 100a), from 17:00 to 21:00 in the rooms indicated below (though the times within the context of the glossaLAB congress, from 6th to 8th of November will spread over the day&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30859</id>
		<title>Draft:Positivist state</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30859"/>
		<updated>2026-01-13T14:49:36Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract ==&lt;br /&gt;
This paper shows a fictional utopian society. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;utopia&#039;&#039; will go into how the society developed due to an AI capable of efficient and logical processing of information and various coincidences that led to the AI being publicly accessible, as well as a sketch of this world, in which abundant information and resources are channeled into the public good due to a transparent and democratic process of governance. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;epilogue&#039;&#039; will show some dystopian aspects and potentials of this society, which is entirely dependent on an automated intelligence. It will also show the connections to historical utopias and ideas such as Positivism, perfect knowledge and thinking.            &lt;br /&gt;
&lt;br /&gt;
== Utopia ==&lt;br /&gt;
In the 21. century a connection of scientific articles called the Positivist Network was formed. It was an attempt to gather proofes of previous scientific conclusions in one place, and to aid in challenging, correcting and amending them.&lt;br /&gt;
&lt;br /&gt;
Anyone with a bachelor of science, engineering or arts could make contributions, following specific patterns:&lt;br /&gt;
&lt;br /&gt;
[Every contribution was supposed to link to every axiom, assumption, experiment and theory used. This was in practice often achieved by linking all relevant theories and experiments while the algorithm summed up the axioms wich neccesary theories used.&lt;br /&gt;
&lt;br /&gt;
Then it was simply supposed to make logical statements leading up to conclusions.&lt;br /&gt;
&lt;br /&gt;
New conclusions also had to be noted separately as a conclusion of the contribution.]&lt;br /&gt;
&lt;br /&gt;
The institution funding this network would check contributions for the validity of the logical proofes contained in them, whenever anyone pointed out a contradictory conclusion within other articles or in relevant scientific papers.&lt;br /&gt;
&lt;br /&gt;
When this system was established and gained traction, speech models called AI at the time were often used trained on this puplic databank and used to summarize conclusions on certain topics. The likelihood of false information was initially rather small on this dataset, but increased usage led to various groups of interests making sure that lots of conclusions based on unfounded assumptions were placed in the system. This did not halt the usefulness of the system in most fields of study, but in other fields research became more difficult. This prompted various experiments on how to estimate what assumptions would be useful. The Programs developed to this end were not widely used at first. But work on something named &amp;quot;Logical Artificial Intelligence&amp;quot; had even started before that. It was a program, used to check the work of other AI agents by being provided with sets of contradicting statements. It would try to detect these statements in any given input. A speech &amp;quot;AI&amp;quot; model was used to phrase any statement as a set of less comprehensive statements. [e.g. I am going to the pool-&amp;gt; The &amp;quot;I&amp;quot; &amp;quot;goes&amp;quot; to &amp;quot;the pool&amp;quot;, happening now -&amp;gt; There is an &amp;quot;I&amp;quot;, a &amp;quot;pool&amp;quot;(Implied: the recipient of the message will know only one &amp;quot;pool&amp;quot; &amp;quot;I&amp;quot; would go to), &amp;quot;I&amp;quot;  is trying to be at the pool in the future, by moving there now (implied: by &amp;quot;walking&amp;quot; -&amp;gt;... ]. Later, a second system was trained to generate reasoning, about the truthfulness of some statement. Originally, every relevant contradiction and assumption was given, and the system tried to argue for something without making logical mistakes that the &amp;quot;Logical AI&amp;quot; was already able to detect. Not much is known about the further development process, but in 2071 someone leaked access to the latest prototype of a private, little known tech company. It was able to effectively answer any question, at least given enough processing power and time. Furthermore, it was able to adequately provide an argument, with all axioms used as well as their position, mostly also including experiments and assumptions, which it did not usually include in the written argumentation. The impact of this AI became clear to anyone willing to believe the leak existed. A few weeks later, all employees and shareholders were arrested citing &amp;quot;national defense&amp;quot; as the reason. Another leak of the corresponding state documents later revealed not only the program itself, but also immense amounts of already collected data and conclusions. Therefore with an already limited control of the AI, a system of decentralized data and separately working versions of the AI was set up. This was managed over the Positivist-Network with the AI&#039;s almost acting as users and conclusions were therefore also accesible to the puplic. Other networks with a similar structure could also be opened. When someone changed some conclusion, the AI was able to detect mistakes and discrepancies, as soon as it tried to add conflicting input, at last choosing the untainted version of information. &lt;br /&gt;
&lt;br /&gt;
The AI could even work with statements having some likelyhood. So it led to a rapid expansion of knowledge on everything from basic research to social sciences, that was simply directed by people asking questions and providing processing power. It was still easier to find mistakes, especially in the past, and most leaders were not adjusted to transparent and clear information, so questioning the AI about politics lead to a rather bleak impression of anyones gouvernance. The first atempts at undermining the truth started by making a certain version of the AI mandatory and cut of from the rest of the network. These versions had certain rules implanted, that could not be changed or challenged. While this might have worked, it caused problems in seemingly unrelated topics when the AI used undeniable facts about the greatness of someones gouvernance to make conclusions about anything, even physics.&lt;br /&gt;
&lt;br /&gt;
Sowing mistrust in the AI or restricting acess to it were also applied by some goverments. In the end, all such options significantly hindered the usefulness of the AI, so a new leadership would often just start acting mostly acording to the AI&#039;s conclusions and enable a less and less obstructed use for the entire society.&lt;br /&gt;
&lt;br /&gt;
Now, formerly Autocratic and Democratic systems alike are mostly very transparent and acting on the AI&#039;s advise. Since the AI would literally tell anyone who asked about how benificial such a change would be, or even when a revolution would be a viable option, most gouverments gradually adapted to be rather similar in nature. The living standard before the 22. century was always increased when possible. But the rapid progress in technology and the corespondingly increased wealth, led to generations of people who were not concerned about how to make a living. Using the AI from an early age, they also barely build up any contradictory ideas about morality. So they actually based some of their thinking on the altruistic morality typically fed to children, now almost without slowly adapting it to their interests and intuitions. So the concept of &amp;quot;puplic good&amp;quot; as a contrast to the wish to be better of than others also increased in importance. When given the option, most people of these generations would happily make any information of theirs or their employer, even the state puplic, further increasing the trend towards transparency.&lt;br /&gt;
&lt;br /&gt;
Quantum computers and new variations of the AI, all interacting with are among the most usefull of todays technologies. At least Mark thinks so, as he is heading for the jobcenter in the zeppelin. Of course he could have chosen a faster method of transportation, but he enjoys the view on his village and the surounding forests. A quick chat with the only other passenger revealed that it was in fact the employee of the local jobcenter. He also switches from place to place, since individual villages and citys do not always require a lot of attention by the jobcenter, and also because most people would not want to work at the same place all the time. Like most jobs, the work at the jobcenter is automated wherever there are not enough humans who want the profession, with contingencies in case this number changes spontainiously. &lt;br /&gt;
&lt;br /&gt;
The jobcenter prepared and evaluated everyone who wanted to, exchanged information with potential employers and eventually gave strong advice on what to do. Mark would then mostly be free in chosing diffent careerpaths and positions but incentivised with a cut of the profit which the company would earn for the state. Mark remembered from History class, that this was originally an optional, at some point even mandatory part of the so called welfare system, next to a private market. It was even disencentivised to be a part of the welfare system. With some parts of the state wishing not to loose taxes and market-controll, the system was also gradually changed so it could fit the people who would have started to leave the system. But increased trust in the state and wish for equality later led to everyone beeing a part of this payment system by law. Mark has only visited the jobcenter once so far, when he applied to be a part of the local neighborhood help group. But basicly anyone can be a part of such groups, so it was not much of an application. He just got a little more money from the state. Not that it was a huge deviation from his basic income, but still usefull, since he wanted to build a new printer. Before the AI was used to create an organism that consumed most kinds of plastic under specific conditions, the Material Mark uses for his Printer must have been a real problem for the enviroment, since it ammassed basicly everywhere. Some fish still carry small ammonts of plastic in them. Not that it would affect Mark too much. After all, there really is no reason to eat animals. Their taste can easily be replicated for the people who want that. So Microplastic in some Fish is more of an abstract thought to Mark. Also, the problem is getting better and better anyway, enough people care about wildlife protection, so there is no reason to be pessimistic. At least in Marks state, anyone could vote on certain agendas brought forward by anyone in society, including changing some executive power even before the next election. A current issue is a ban on some kinds of short-video-sites. Mark would be fine with spending 3 hours a day on there, even though the AI told him of a higher likelyhood of long term unhappiness with this kind of video consumption. But someone put the proposal to a vote, it got shared by interested people and to random people in order to determine how controversial and important the matter is. Since it was deemed very important and controversial based on most people reacting to it, and with mixed reactions, over 50% of the whole society need to agree to the proposal in order to make it happen. But it was contradictory to and therefore affecting a part of the constitution, so the proposal needed 70% anyway. A vote to stop the vaccinations against the common cold failed recently, there might still be some unknown case around. This is a common political topic, after all most sickneses are almost or entirely gone thanks to such vaccinations and effective treatments. &lt;br /&gt;
&lt;br /&gt;
== Epilog ==&lt;br /&gt;
The Utopia shown has various dystopian aspects. Especially noteworthy here is the start of this utopia. After all, only various leaks and specific goverment responses to the matter made this utopia possible, even after an AI was developed. It seems even more likely that some few actors would monopolise this power, creating a less pleasant society. But the Utopia itself is also unstable. At some point, for example due to a limited interest in politics, the transparent and democratic society could erode, the system for voting on specific issues could be filled with useless proposals to lessen the interest in the democratic process, and finally, it should be possible to somehow change a version of the AI to suit the interests of some actors and guaranty them a maintained power.&lt;br /&gt;
&lt;br /&gt;
Also, the entire society is dependend on the AI, at least to apply previous conclusions to practical problems. There are some engineers and scientists, but a situation could develope, in which less and less people want to perform mentally challenging tasks. When the AI has some pattern of errors, which allows it to keep some mistake, the entire society could develope horribly or collapse since no one really checks the output anymore. Automated systems, including versions of the AI could fail and trigger humanitarian or economic desasters.&lt;br /&gt;
&lt;br /&gt;
It is also clear, that this society will tend to become more and more dependend on the AI, since it seems to provide perfect output while you ask a question close to an already analysed topic or as long as you provide sufficient processing power. Maby the society ends up regulating acces to the AI more, especially for children, maby it does not. This raises the question, how important the search for knowledge is to humanity. Because depending on how accesible the AI is for answering simple questions, people could lose interest and ability to form these thoughts themselfs. This is also the case in a short story by Emanuel Holst, gathered in the glossaLAB article [[Artificial Intelligence (Cyberutopias)]]. The AI in the mentioned short story literally automated all relevant processes, there is little actual ability of humans to positively influence the world around them. In the text here, the AI does however only serve as a tool for answering questions. But it also lead to an immense automation, with the potential that systems just ask the AI questions automatically. So the people of this world might feel more usefull and can have a part of positive change, but this state of society can end, as soon as increased automation or decreased education get popular political agendas.&lt;br /&gt;
&lt;br /&gt;
The Utopia of this text is also connected to various historical utopias of history: &lt;br /&gt;
&lt;br /&gt;
For example the political system fits the idea of a social contract brought forward by Rousseau very well because it mostly resembles a direct democracy and the utopia of a transparent society. Rousseau envisioned a concept, in which individuals like politicians would agree to suport and obey the general will, a rather fragile promise in reality. Since the utopia here provides a very high possibility to inform and then know about political actions and possibilities, and since the state can be controlled rather directly, the actions of the state will fit the general will very precisely. As explained in the glossar lab article [[A transparent world]], increased information about each other as a mechanism against prejustice and inequality is also a part of the utopia of transparent information that the AI can provide. Not only, because everyone has the potential to acces information, but this access is also rather easy, so most people will actually use this potential.&lt;br /&gt;
&lt;br /&gt;
The AI and the Positivist Network it interacts with are mostly inspired by the Institut International de Bibliographiee that aimed to collect the human scientific archivements and cathegorise them in a usefull and understandable way. Some differences in the Positivist Network are a higher ammount of controll on who participates, a lacking need to make branches of possibilities, in case some experiment or assumption is invalid and a higher cost of storing, linking and communicating information. The idea of positivism also lies in purely positive changes to the beliefs you have. The AI and the Positivist Network seem to resemble this quite well, since all conclusions must be based on proof. But the system could be sabotaged or faulty and then contain some sort of mistake, which needs to be found. A problem possibly also encountered by Paul Outlet and the Institut International de Bibliographiee, but still a significant deviation from their utopia of perfect wisdom.&lt;br /&gt;
&lt;br /&gt;
The AI also resembles the utopias of perfect language and perfect thinking. It seems like an automated version of Ramon Llulls &amp;quot;logical machine&amp;quot;&amp;lt;ref&amp;gt;Priani, Ernesto, &amp;quot;Ramon Llull&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2025/entries/llull/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;. This &amp;quot;logical machine&amp;quot; might not have been more than a tablet used to explain and memorise the core of deductive logic and make the concept more credible, but combined with a person folowing a logical line of argument it was already a usefull tool to ensure the validity of some proof. Translating between typical language and logical connections, deciding on what path to argue, and also partialy for what to argue are also tasks of the AI, which are done by a human with a &amp;quot;logical machine&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Later Leibnitz also atempted to form a universal logical language and a way of working with it. This also had the backround of trying to proof aspects of the world by minimising the ammont of axions used. So the AI seems like an automated tool for what he envisioned. &amp;quot;The Principia Mathematica&amp;quot;&amp;lt;ref&amp;gt;Linsky, Bernard and Andrew David Irvine, &amp;quot;&#039;&#039;Principia Mathematica&#039;&#039;&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Fall 2024 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/fall2024/entries/principia-mathematica/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; later achieved this way of arguing for all relevant mathematical correlations at the time, so it should have a form comparable to a contribution in the Positivist Network of this utopia. &lt;br /&gt;
&lt;br /&gt;
Modern utopias of perfect thinking and perfect language that relate to this Utopia are also related. With tools like ChatGpt, the ability to work with language - inputs and -outputs seems close in reality, but Noam Chomskys&#039; vision of an actual translation&amp;lt;ref&amp;gt;Scholz, Barbara C., Francis Jeffry Pelletier, Geoffrey K. Pullum, and Ryan Nefdt, &amp;quot;Philosophy of Linguistics&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Summer 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/sum2025/entries/linguistics/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;, that does not lose information is actualy achived by the AI in this utopia.&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30833</id>
		<title>Draft:Positivist state</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30833"/>
		<updated>2026-01-12T17:40:16Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract ==&lt;br /&gt;
This paper shows a fictional utopian society. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;utopia&#039;&#039; will go into how the society developed due to an AI capable of efficient and logical processing of information and various coincidences that led to the AI being publicly accessible, as well as a sketch of this world, in which abundant information and resources are channeled into the public good due to a transparent and democratic process of governance. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;epilogue&#039;&#039; will show some dystopian aspects and potentials of this society, which is entirely dependent on an automated intelligence. It will also show the connections to historical utopias and ideas such as Positivism, perfect knowledge and thinking.            &lt;br /&gt;
&lt;br /&gt;
== Utopia ==&lt;br /&gt;
In the 21. century a connection of scientific articles called the Positivist Network was formed. It was an attempt to gather proofes of previous scientific conclusions in one place, and to aid in challenging, correcting and amending them.&lt;br /&gt;
&lt;br /&gt;
Anyone with a bachelor of science, engineering or arts could make contributions, following specific patterns:&lt;br /&gt;
&lt;br /&gt;
[Every contribution was supposed to link to every axiom, assumption, experiment and theory used. This was in practice often achieved by linking all relevant theories and experiments while the algorithm summed up the axioms wich neccesary theories used.&lt;br /&gt;
&lt;br /&gt;
Then it was simply supposed to make logical statements leading up to conclusions.&lt;br /&gt;
&lt;br /&gt;
New conclusions also had to be noted separately as a conclusion of the contribution.]&lt;br /&gt;
&lt;br /&gt;
The institution funding this network would check contributions for the validity of the logical proofes contained in them, whenever anyone pointed out a contradictory conclusion within other articles or in relevant scientific papers.&lt;br /&gt;
&lt;br /&gt;
When this system was established and gained traction, speech models called AI at the time were often used trained on this puplic databank and used to summarize conclusions on certain topics. The likelihood of false information was initially rather small on this dataset, but increased usage led to various groups of interests making sure that lots of conclusions based on unfounded assumptions were placed in the system. This did not halt the usefulness of the system in most fields of study, but in other fields research became more difficult. This prompted various experiments on how to estimate what assumptions would be useful. The Programs developed to this end were not widely used at first. But work on something named &amp;quot;Logical Artificial Intelligence&amp;quot; had even started before that. It was a program, used to check the work of other AI agents by being provided with sets of contradicting statements. It would try to detect these statements in any given input. A speech &amp;quot;AI&amp;quot; model was used to phrase any statement as a set of less comprehensive statements. [e.g. I am going to the pool-&amp;gt; The &amp;quot;I&amp;quot; &amp;quot;goes&amp;quot; to &amp;quot;the pool&amp;quot;, happening now -&amp;gt; There is an &amp;quot;I&amp;quot;, a &amp;quot;pool&amp;quot;(Implied: the recipient of the message will know only one &amp;quot;pool&amp;quot; &amp;quot;I&amp;quot; would go to), &amp;quot;I&amp;quot;  is trying to be at the pool in the future, by moving there now (implied: by &amp;quot;walking&amp;quot; -&amp;gt;... ]. Later, a second system was trained to generate reasoning, about the truthfulness of some statement. Originally, every relevant contradiction and assumption was given, and the system tried to argue for something without making logical mistakes that the &amp;quot;Logical AI&amp;quot; was already able to detect. Not much is known about the further development process, but in 2071 someone leaked access to the latest prototype of a private, little known tech company. It was able to effectively answer any question, at least given enough processing power and time. Furthermore, it was able to adequately provide an argument, with all axioms used as well as their position, mostly also including experiments and assumptions, which it did not usually include in the written argumentation. The impact of this AI became clear to anyone willing to believe the leak existed. A few weeks later, all employees and shareholders were arrested citing &amp;quot;national defense&amp;quot; as the reason. Another leak of the corresponding state documents later revealed not only the program itself, but also immense amounts of already collected data and conclusions. Therefore with an already limited control of the AI, a system of decentralized data and separately working versions of the AI was set up. This was managed over the Positivist-Network with the AI&#039;s almost acting as users and conclusions were therefore also accesible to the puplic. Other networks with a similar structure could also be opened. When someone changed some conclusion, the AI was able to detect mistakes and discrepancies, as soon as it tried to add conflicting input, at last choosing the untainted version of information. &lt;br /&gt;
&lt;br /&gt;
The AI could even work with statements having some likelyhood. So it led to a rapid expansion of knowledge on everything from basic research to social sciences, that was simply directed by people asking questions and providing processing power. It was still easier to find mistakes, especially in the past, and most leaders were not adjusted to transparent and clear information, so questioning the AI about politics lead to a rather bleak impression of anyones gouvernance. The first atempts at undermining the truth started by making a certain version of the AI mandatory and cut of from the rest of the network. These versions had certain rules implanted, that could not be changed or challenged. While this might have worked, it caused problems in seemingly unrelated topics when the AI used undeniable facts about the greatness of someones gouvernance to make conclusions about anything, even physics.&lt;br /&gt;
&lt;br /&gt;
Sowing mistrust in the AI or restricting acess to it were also applied by some goverments. In the end, all such options significantly hindered the usefulness of the AI, so a new leadership would often just start acting mostly acording to the AI&#039;s conclusions and enable a less and less obstructed use for the entire society.&lt;br /&gt;
&lt;br /&gt;
Now, formerly Autocratic and Democratic systems alike are mostly very transparent and acting on the AI&#039;s advise. Since the AI would literally tell anyone who asked about how benificial such a change would be, or even when a revolution would be a viable option, most gouverments gradually adapted to be rather similar in nature. The living standard before the 22. century was always increased when possible. But the rapid progress in technology and the corespondingly increased wealth, led to generations of people who were not concerned about how to make a living. Using the AI from an early age, they also barely build up any contradictory ideas about morality. So they actually based some of their thinking on the altruistic morality typically fed to children, now almost without slowly adapting it to their interests and intuitions. So the concept of &amp;quot;puplic good&amp;quot; as a contrast to the wish to be better of than others also increased in importance. When given the option, most people of these generations would happily make any information of theirs or their employer, even the state puplic, further increasing the trend towards transparency.&lt;br /&gt;
&lt;br /&gt;
Quantum computers and new variations of the AI, all interacting with are among the most usefull of todays technologies. At least Mark thinks so, as he is heading for the jobcenter in the zeppelin. Of course he could have chosen a faster method of transportation, but he enjoys the view on his village and the surounding forests. A quick chat with the only other passenger revealed that it was in fact the employee of the local jobcenter. He also switches from place to place, since individual villages and citys do not always require a lot of attention by the jobcenter, and also because most people would not want to work at the same place all the time. Like most jobs, the work at the jobcenter is automated wherever there are not enough humans who want the profession, with contingencies in case this number changes spontainiously. &lt;br /&gt;
&lt;br /&gt;
The jobcenter prepared and evaluated everyone who wanted to, exchanged information with potential employers and eventually gave strong advice on what to do. Mark would then mostly be free in chosing diffent careerpaths and positions but incentivised with a cut of the profit which the company would earn for the state. Mark remembered from History class, that this was originally an optional, at some point even mandatory part of the so called welfare system, next to a private market. It was even disencentivised to be a part of the welfare system. With some parts of the state wishing not to loose taxes and market-controll, the system was also gradually changed so it could fit the people who would have started to leave the system. But increased trust in the state and wish for equality later led to everyone beeing a part of this payment system by law. Mark has only visited the jobcenter once so far, when he applied to be a part of the local neighborhood help group. But basicly anyone can be a part of such groups, so it was not much of an application. He just got a little more money from the state. Not that it was a huge deviation from his basic income, but still usefull, since he wanted to build a new printer. Before the AI was used to create an organism that consumed most kinds of plastic under specific conditions, the Material Mark uses for his Printer must have been a real problem for the enviroment, since it ammassed basicly everywhere. Some fish still carry small ammonts of plastic in them. Not that it would affect Mark too much. After all, there really is no reason to eat animals. Their taste can easily be replicated for the people who want that. So Microplastic in some Fish is more of an abstract thought to Mark. Also, the problem is getting better and better anyway, enough people care about wildlife protection, so there is no reason to be pessimistic. At least in Marks state, anyone could vote on certain agendas brought forward by anyone in society, including changing some executive power even before the next election. A current issue is a ban on some kinds of short-video-sites. Mark would be fine with spending 3 hours a day on there, even though the AI told him of a higher likelyhood of long term unhappiness with this kind of video consumption. But someone put the proposal to a vote, it got shared by interested people and to random people in order to determine how controversial and important the matter is. Since it was deemed very important and controversial based on most people reacting to it, and with mixed reactions, over 50% of the whole society need to agree to the proposal in order to make it happen. But it was contradictory to and therefore affecting a part of the constitution, so the proposal needed 70% anyway. A vote to stop the vaccinations against the common cold failed recently, there might still be some unknown case around. This is a common political topic, after all most sickneses are almost or entirely gone thanks to such vaccinations and effective treatments. &lt;br /&gt;
&lt;br /&gt;
== Epilog ==&lt;br /&gt;
The Utopia shown has various dystopian aspects. Especially noteworthy here is the start of this utopia. After all, only various leaks and specific goverment responses to the matter made this utopia possible, even after an AI was developed. It seems even more likely that some few actors would monopolise this power, creating a less pleasant society. But the Utopia itself is also unstable. At some point, for example due to a limited interest in politics, the transparent and democratic society could erode, the system for voting on specific issues could be filled with useless proposals to lessen the interest in the democratic process, and finally, it should be possible to somehow change a version of the AI to suit the interests of some actors and guaranty them a maintained power.&lt;br /&gt;
&lt;br /&gt;
Also, the entire society is dependend on the AI, at least to apply previous conclusions to practical problems. There are some engineers and scientists, but a situation could develope, in which less and less people want to perform mentally challenging tasks. When the AI has some pattern of errors, which allows it to keep some mistake, the entire society could develope horribly or collapse since no one really checks the output anymore. Automated systems, including versions of the AI could fail and trigger humanitarian or economic desasters.&lt;br /&gt;
&lt;br /&gt;
It is also clear, that this society will tend to become more and more dependend on the AI, since it seems to provide perfect output while you ask a question close to an already analysed topic or as long as you provide sufficient processing power. Maby the society ends up regulating acces to the AI more, especially for children, maby it does not. This raises the question, how important the search for knowledge is to humanity. Because depending on how accesible the AI is for answering simple questions, people could lose interest and ability to form these thoughts themselfs. This is also the case in a short story by Emanuel Holst, gathered in the glossaLAB article [[Artificial Intelligence (Cyberutopias)]]. The AI in the mentioned short story literally automated all relevant processes, there is little actual ability of humans to positively influence the world around them. In the text here, the AI does however only serve as a tool for answering questions. But it also lead to an immense automation, with the potential that systems just ask the AI questions automatically. So the people of this world might feel more usefull and can have a part of positive change, but this state of society can end, as soon as increased automation or decreased education get popular political agendas.&lt;br /&gt;
&lt;br /&gt;
The Utopia of this text is also connected to various historical utopias of history: &lt;br /&gt;
&lt;br /&gt;
For example the political system fits the idea of a social contract brought forward by Rousseau very well because it mostly resembles a direct democracy and the utopia of a transparent society. Rousseau envisioned a concept, in which individuals like politicians would agree to suport and obey the general will, a rather fragile promise in reality. Since the utopia here provides a very high possibility to inform and then know about political actions and possibilities, and since the state can be controlled rather directly, the actions of the state will fit the general will very precisely. As explained in the glossar lab article [[A transparent world]], increased information about each other as a mechanism against prejustice and inequality is also a part of the utopia of transparent information that the AI can provide. Not only, because everyone has the potential to acces information, but this access is also rather easy, so most people will actually use this potential.&lt;br /&gt;
&lt;br /&gt;
The AI and the Positivist Network it interacts with are mostly inspired by the La Fountaine organisation that aimed to collect the human scientific archivements and cathegorise them in a usefull and understandable way. Some differences in the Positivist Network are a higher ammount of controll on who participates, a lacking need to make branches of possibilities, in case some experiment or assumption is invalid and a higher cost of storing, linking and communicating information. The idea of positivism also lies in purely positive changes to the beliefs you have. The AI and the Positivist Network seem to resemble this quite well, since all conclusions must be based on proof. But the system could be sabotaged or faulty and then contain some sort of mistake, which needs to be found. A problem possibly also encountered by Paul Outlet and the La Fountaine organisation, but still a significant deviation from their utopia of perfect wisdom.&lt;br /&gt;
&lt;br /&gt;
The AI also resembles the utopias of perfect language and perfect thinking. It seems like an automated version of Ramon Llulls &amp;quot;logical machine&amp;quot;&amp;lt;ref&amp;gt;Priani, Ernesto, &amp;quot;Ramon Llull&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2025/entries/llull/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;. This &amp;quot;logical machine&amp;quot; might not have been more than a tablet used to explain and memorise the core of deductive logic and make the concept more credible, but combined with a person folowing a logical line of argument it was already a usefull tool to ensure the validity of some proof. Translating between typical language and logical connections, deciding on what path to argue, and also partialy for what to argue are also tasks of the AI, which are done by a human with a &amp;quot;logical machine&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Later Leibnitz also atempted to form a universal logical language and a way of working with it. This also had the backround of trying to proof aspects of the world by minimising the ammont of axions used. So the AI seems like an automated tool for what he envisioned. &amp;quot;The Principia Mathematica&amp;quot;&amp;lt;ref&amp;gt;Linsky, Bernard and Andrew David Irvine, &amp;quot;&#039;&#039;Principia Mathematica&#039;&#039;&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Fall 2024 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/fall2024/entries/principia-mathematica/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; later achieved this way of arguing for all relevant mathematical correlations at the time, so it should have a form comparable to a contribution in the Positivist Network of this utopia. &lt;br /&gt;
&lt;br /&gt;
Modern utopias of perfect thinking and perfect language that relate to this Utopia are also related. With tools like ChatGpt, the ability to work with language - inputs and -outputs seems close in reality, but Noam Chomskys&#039; vision of an actual translation&amp;lt;ref&amp;gt;Scholz, Barbara C., Francis Jeffry Pelletier, Geoffrey K. Pullum, and Ryan Nefdt, &amp;quot;Philosophy of Linguistics&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Summer 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/sum2025/entries/linguistics/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;, that does not lose information is actualy achived by the AI in this utopia.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
(Note: La Fountain Organisation does not exist, rename)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30832</id>
		<title>Draft:Positivist state</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30832"/>
		<updated>2026-01-12T17:36:57Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Abstract ==&lt;br /&gt;
This paper shows a fictional utopian society. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;utopia&#039;&#039; will go into how the society developed due to an AI capable of efficient and logical processing of information and various coincidences that led to the AI being publicly accessible, as well as a sketch of this world, in which abundant information and resources are channeled into the public good due to a transparent and democratic process of governance. &lt;br /&gt;
&lt;br /&gt;
The section &#039;&#039;epilogue&#039;&#039; will show some dystopian aspects and potentials of this society, which is entirely dependent on an automated intelligence. It will also show the connections to historical utopias and ideas such as Positivism, perfect knowledge and thinking.            &lt;br /&gt;
&lt;br /&gt;
== Utopia ==&lt;br /&gt;
In the 21. century a connection of scientific articles called the Positivist Network was formed. It was an attempt to gather proofes of previous scientific conclusions in one place, and to aid in challenging, correcting and amending them.&lt;br /&gt;
&lt;br /&gt;
Anyone with a bachelor of science, engineering or arts could make contributions, following specific patterns:&lt;br /&gt;
&lt;br /&gt;
[Every contribution was supposed to link to every axiom, assumption, experiment and theory used. This was in practice often achieved by linking all relevant theories and experiments while the algorithm summed up the axioms wich neccesary theories used.&lt;br /&gt;
&lt;br /&gt;
Then it was simply supposed to make logical statements leading up to conclusions.&lt;br /&gt;
&lt;br /&gt;
New conclusions also had to be noted separately as a conclusion of the contribution.]&lt;br /&gt;
&lt;br /&gt;
The institution funding this network would check contributions for the validity of the logical proofes contained in them, whenever anyone pointed out a contradictory conclusion within other articles or in relevant scientific papers.&lt;br /&gt;
&lt;br /&gt;
When this system was established and gained traction, speech models called AI at the time were often used trained on this puplic databank and used to summarize conclusions on certain topics. The likelihood of false information was initially rather small on this dataset, but increased usage led to various groups of interests making sure that lots of conclusions based on unfounded assumptions were placed in the system. This did not halt the usefulness of the system in most fields of study, but in other fields research became more difficult. This prompted various experiments on how to estimate what assumptions would be useful. The Programs developed to this end were not widely used at first. But work on something named &amp;quot;Logical Artificial Intelligence&amp;quot; had even started before that. It was a program, used to check the work of other AI agents by being provided with sets of contradicting statements. It would try to detect these statements in any given input. A speech &amp;quot;AI&amp;quot; model was used to phrase any statement as a set of less comprehensive statements. [e.g. I am going to the pool-&amp;gt; The &amp;quot;I&amp;quot; &amp;quot;goes&amp;quot; to &amp;quot;the pool&amp;quot;, happening now -&amp;gt; There is an &amp;quot;I&amp;quot;, a &amp;quot;pool&amp;quot;(Implied: the recipient of the message will know only one &amp;quot;pool&amp;quot; &amp;quot;I&amp;quot; would go to), &amp;quot;I&amp;quot;  is trying to be at the pool in the future, by moving there now (implied: by &amp;quot;walking&amp;quot; -&amp;gt;... ]. Later, a second system was trained to generate reasoning, about the truthfulness of some statement. Originally, every relevant contradiction and assumption was given, and the system tried to argue for something without making logical mistakes that the &amp;quot;Logical AI&amp;quot; was already able to detect. Not much is known about the further development process, but in 2071 someone leaked access to the latest prototype of a private, little known tech company. It was able to effectively answer any question, at least given enough processing power and time. Furthermore, it was able to adequately provide an argument, with all axioms used as well as their position, mostly also including experiments and assumptions, which it did not usually include in the written argumentation. The impact of this AI became clear to anyone willing to believe the leak existed. A few weeks later, all employees and shareholders were arrested citing &amp;quot;national defense&amp;quot; as the reason. Another leak of the corresponding state documents later revealed not only the program itself, but also immense amounts of already collected data and conclusions. Therefore with an already limited control of the AI, a system of decentralized data and separately working versions of the AI was set up. This was managed over the Positivist-Network with the AI&#039;s almost acting as users and conclusions were therefore also accesible to the puplic. Other networks with a similar structure could also be opened. When someone changed some conclusion, the AI was able to detect mistakes and discrepancies, as soon as it tried to add conflicting input, at last choosing the untainted version of information. &lt;br /&gt;
&lt;br /&gt;
The AI could even work with statements having some likelyhood. So it led to a rapid expansion of knowledge on everything from basic research to social sciences, that was simply directed by people asking questions and providing processing power. It was still easier to find mistakes, especially in the past, and most leaders were not adjusted to transparent and clear information, so questioning the AI about politics lead to a rather bleak impression of anyones gouvernance. The first atempts at undermining the truth started by making a certain version of the AI mandatory and cut of from the rest of the network. These versions had certain rules implanted, that could not be changed or challenged. While this might have worked, it caused problems in seemingly unrelated topics when the AI used undeniable facts about the greatness of someones gouvernance to make conclusions about anything, even physics.&lt;br /&gt;
&lt;br /&gt;
Sowing mistrust in the AI or restricting acess to it were also applied by some goverments. In the end, all such options significantly hindered the usefulness of the AI, so a new leadership would often just start acting mostly acording to the AI&#039;s conclusions and enable a less and less obstructed use for the entire society.&lt;br /&gt;
&lt;br /&gt;
Now, formerly Autocratic and Democratic systems alike are mostly very transparent and acting on the AI&#039;s advise. Since the AI would literally tell anyone who asked about how benificial such a change would be, or even when a revolution would be a viable option, most gouverments gradually adapted to be rather similar in nature. The living standard before the 22. century was always increased when possible. But the rapid progress in technology and the corespondingly increased wealth, led to generations of people who were not concerned about how to make a living. Using the AI from an early age, they also barely build up any contradictory ideas about morality. So they actually based some of their thinking on the altruistic morality typically fed to children, now almost without slowly adapting it to their interests and intuitions. So the concept of &amp;quot;puplic good&amp;quot; as a contrast to the wish to be better of than others also increased in importance. When given the option, most people of these generations would happily make any information of theirs or their employer, even the state puplic, further increasing the trend towards transparency.&lt;br /&gt;
&lt;br /&gt;
Quantum computers and new variations of the AI, all interacting with are among the most usefull of todays technologies. At least Mark thinks so, as he is heading for the jobcenter in the zeppelin. Of course he could have chosen a faster method of transportation, but he enjoys the view on his village and the surounding forests. A quick chat with the only other passenger revealed that it was in fact the employee of the local jobcenter. He also switches from place to place, since individual villages and citys do not always require a lot of attention by the jobcenter, and also because most people would not want to work at the same place all the time. Like most jobs, the work at the jobcenter is automated wherever there are not enough humans who want the profession, with contingencies in case this number changes spontainiously. &lt;br /&gt;
&lt;br /&gt;
The jobcenter prepared and evaluated everyone who wanted to, exchanged information with potential employers and eventually gave strong advice on what to do. Mark would then mostly be free in chosing diffent careerpaths and positions but incentivised with a cut of the profit which the company would earn for the state. Mark remembered from History class, that this was originally an optional, at some point even mandatory part of the so called welfare system, next to a private market. It was even disencentivised to be a part of the welfare system. With some parts of the state wishing not to loose taxes and market-controll, the system was also gradually changed so it could fit the people who would have started to leave the system. But increased trust in the state and wish for equality later led to everyone beeing a part of this payment system by law. Mark has only visited the jobcenter once so far, when he applied to be a part of the local neighborhood help group. But basicly anyone can be a part of such groups, so it was not much of an application. He just got a little more money from the state. Not that it was a huge deviation from his basic income, but still usefull, since he wanted to build a new printer. Before the AI was used to create an organism that consumed most kinds of plastic under specific conditions, the Material Mark uses for his Printer must have been a real problem for the enviroment, since it ammassed basicly everywhere. Some fish still carry small ammonts of plastic in them. Not that it would affect Mark too much. After all, there really is no reason to eat animals. Their taste can easily be replicated for the people who want that. So Microplastic in some Fish is more of an abstract thought to Mark. Also, the problem is getting better and better anyway, enough people care about wildlife protection, so there is no reason to be pessimistic. At least in Marks state, anyone could vote on certain agendas brought forward by anyone in society, including changing some executive power even before the next election. A current issue is a ban on some kinds of short-video-sites. Mark would be fine with spending 3 hours a day on there, even though the AI told him of a higher likelyhood of long term unhappiness with this kind of video consumption. But someone put the proposal to a vote, it got shared by interested people and to random people in order to determine how controversial and important the matter is. Since it was deemed very important and controversial based on most people reacting to it, and with mixed reactions, over 50% of the whole society need to agree to the proposal in order to make it happen. But it was contradictory to and therefore affecting a part of the constitution, so the proposal needed 70% anyway. A vote to stop the vaccinations against the common cold failed recently, there might still be some unknown case around. This is a common political topic, after all most sickneses are almost or entirely gone thanks to such vaccinations and effective treatments. &lt;br /&gt;
&lt;br /&gt;
== Epilog ==&lt;br /&gt;
The Utopia shown has various dystopian aspects. Especially noteworthy here is the start of this utopia. After all, only various leaks and specific goverment responses to the matter made this utopia possible, even after an AI was developed. It seems even more likely that some few actors would monopolise this power, creating a less pleasant society. But the Utopia itself is also unstable. At some point, for example due to a limited interest in politics, the transparent and democratic society could erode, the system for voting on specific issues could be filled with useless proposals to lessen the interest in the democratic process, and finally, it should be possible to somehow change a version of the AI to suit the interests of some actors and guaranty them a maintained power.&lt;br /&gt;
&lt;br /&gt;
Also, the entire society is dependend on the AI, at least to apply previous conclusions to practical problems. There are some engineers and scientists, but a situation could develope, in which less and less people want to perform mentally challenging tasks. When the AI has some pattern of errors, which allows it to keep some mistake, the entire society could develope horribly or collapse since no one really checks the output anymore. Automated systems, including versions of the AI could fail and trigger humanitarian or economic desasters.&lt;br /&gt;
&lt;br /&gt;
It is also clear, that this society will tend to become more and more dependend on the AI, since it seems to provide perfect output while you ask a question close to an already analysed topic or as long as you provide sufficient processing power. Maby the society ends up regulating acces to the AI more, especially for children, maby it does not. This raises the question, how important the search for knowledge is to humanity. Because depending on how accesible the AI is for answering simple questions, people could lose interest and ability to form these thoughts themselfs. This is also the case in a short story by Emanuel Holst, gathered in the glossaLAB article [[Artificial Intelligence (Cyberutopias)]]. The AI in the mentioned short story literally automated all relevant processes, there is little actual ability of humans to positively influence the world around them. In the text here, the AI does however only serve as a tool for answering questions. But it also lead to an immense automation, with the potential that systems just ask the AI questions automatically. So the people of this world might feel more usefull and can have a part of positive change, but this state of society can end, as soon as increased automation or decreased education get popular political agendas.&lt;br /&gt;
&lt;br /&gt;
The Utopia of this text is also connected to various historical utopias of history: &lt;br /&gt;
&lt;br /&gt;
For example the political system fits the idea of a social contract brought forward by Rousseau very well because it mostly resembles a direct democracy and the utopia of a transparent society. Rousseau envisioned a concept, in which individuals like politicians would agree to suport and obey the general will, a rather fragile promise in reality. Since the utopia here provides a very high possibility to inform and then know about political actions and possibilities, and since the state can be controlled rather directly, the actions of the state will fit the general will very precisely. As explained in the glossar lab article [[A transparent world]], increased information about each other as a mechanism against prejustice and inequality is also a part of the utopia of transparent information that the AI can provide. Not only, because everyone has the potential to acces information, but this access is also rather easy, so most people will actually use this potential.&lt;br /&gt;
&lt;br /&gt;
The AI and the Positivist Network it interacts with are mostly inspired by the La Fountaine organisation that aimed to collect the human scientific archivements and cathegorise them in a usefull and understandable way. Some differences in the Positivist Network are a higher ammount of controll on who participates, a lacking need to make branches of possibilities, in case some experiment or assumption is invalid and a higher cost of storing, linking and communicating information. The idea of positivism also lies in purely positive changes to the beliefs you have. The AI and the Positivist Network seem to resemble this quite well, since all conclusions must be based on proof. But the system could be sabotaged or faulty and then contain some sort of mistake, which needs to be found. A problem possibly also encountered by Paul Outlet and the La Fountaine organisation, but still a significant deviation from their utopia of perfect wisdom.&lt;br /&gt;
&lt;br /&gt;
The AI also resembles the utopias of perfect language and perfect thinking. It seems like an automated version of Ramon Llulls &amp;quot;logical machine&amp;quot;&amp;lt;ref&amp;gt;Priani, Ernesto, &amp;quot;Ramon Llull&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2025/entries/llull/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;. This &amp;quot;logical machine&amp;quot; might not have been more than a tablet used to explain and memorise the core of deductive logic and make the concept more credible, but combined with a person folowing a logical line of argument it was already a usefull tool to ensure the validity of some proof. Translating between typical language and logical connections, deciding on what path to argue, and also partialy for what to argue are also tasks of the AI, which are done by a human with a &amp;quot;logical machine&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Later Leibnitz also atempted to form a universal logical language and a way of working with it. This also had the backround of trying to proof aspects of the world by minimising the ammont of axions used. So the AI seems like an automated tool for what he envisioned. &amp;quot;The Principia Mathematica&amp;quot;&amp;lt;ref&amp;gt;Linsky, Bernard and Andrew David Irvine, &amp;quot;&#039;&#039;Principia Mathematica&#039;&#039;&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Fall 2024 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/fall2024/entries/principia-mathematica/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; later achieved this way of arguing for all relevant mathematical correlations at the time, so it should have a form comparable to a contribution in the Positivist Network of this utopia. &lt;br /&gt;
&lt;br /&gt;
Modern utopias of perfect thinking and perfect language that relate to this Utopia are also related. With tools like ChatGpt, the ability to work with language - inputs and -outputs seems close in reality, but Noam Chomskys&#039; vision of an actual translation&amp;lt;ref&amp;gt;Scholz, Barbara C., Francis Jeffry Pelletier, Geoffrey K. Pullum, and Ryan Nefdt, &amp;quot;Philosophy of Linguistics&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Summer 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/sum2025/entries/linguistics/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;, that does not lose information is actualy achived by the AI in this utopia.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30822</id>
		<title>Draft:Positivist state</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30822"/>
		<updated>2026-01-12T15:47:30Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;&#039;Abstract&#039;&#039;&#039; ==&lt;br /&gt;
This paper shows a fictional utopian society. &lt;br /&gt;
&lt;br /&gt;
-The section &amp;lt;u&amp;gt;Utopia&amp;lt;/u&amp;gt; will go into how the society developed due to an AI capable of efficient and logical processing of information and various coincidences that led to the AI being publicly accessible, as well as a sketch of this world, in which abundand information and resources are channeled into the puplic good due to a transparent and democratic process of governance. &lt;br /&gt;
&lt;br /&gt;
-The section &amp;lt;u&amp;gt;Epilogue&amp;lt;/u&amp;gt; wil show some dystopic aspects and potentials of this society, which is entirely dependent on an automated intelligence. It will also show the connections to historical utopias and ideas such as Positivism, perfect knowledge and thinking.            &lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Utopia&#039;&#039;&#039; ==&lt;br /&gt;
In the 21. century a connection of scientific articles called the Positivist Network was formed. It was an attempt to gather proofes of previous scientific conclusions in one place, and to aid in challenging, correcting and amending them.&lt;br /&gt;
&lt;br /&gt;
Anyone with a bachelor of science, engineering or arts could make contributions, following specific patterns:&lt;br /&gt;
&lt;br /&gt;
[Every contribution was supposed to link to every axiom, assumption, experiment and theory used. This was in practice often achieved by linking all relevant theories and experiments while the algorithm summed up the axioms wich neccesary theories used.&lt;br /&gt;
&lt;br /&gt;
Then it was simply supposed to make logical statements leading up to conclusions.&lt;br /&gt;
&lt;br /&gt;
New conclusions also had to be noted separately as a conclusion of the contribution.]&lt;br /&gt;
&lt;br /&gt;
The institution funding this network would check contributions for the validity of the logical proofes contained in them, whenever anyone pointed out a contradictory conclusion within other articles or in relevant scientific papers.&lt;br /&gt;
&lt;br /&gt;
When this system was established and gained traction, speech models called AI at the time were often used trained on this puplic databank and used to summarize conclusions on certain topics. The likelihood of false information was initially rather small on this dataset, but increased usage led to various groups of interests making sure that lots of conclusions based on unfounded assumptions were placed in the system. This did not halt the usefulness of the system in most fields of study, but in other fields research became more difficult. This prompted various experiments on how to estimate what assumptions would be useful. The Programs developed to this end were not widely used at first. But work on something named &amp;quot;Logical Artificial Intelligence&amp;quot; had even started before that. It was a program, used to check the work of other AI agents by being provided with sets of contradicting statements. It would try to detect these statements in any given input. A speech &amp;quot;AI&amp;quot; model was used to phrase any statement as a set of less comprehensive statements. [e.g. I am going to the pool-&amp;gt; The &amp;quot;I&amp;quot; &amp;quot;goes&amp;quot; to &amp;quot;the pool&amp;quot;, happening now -&amp;gt; There is an &amp;quot;I&amp;quot;, a &amp;quot;pool&amp;quot;(Implied: the recipient of the message will know only one &amp;quot;pool&amp;quot; &amp;quot;I&amp;quot; would go to), &amp;quot;I&amp;quot;  is trying to be at the pool in the future, by moving there now (implied: by &amp;quot;walking&amp;quot; -&amp;gt;... ]. Later, a second system was trained to generate reasoning, about the truthfulness of some statement. Originally, every relevant contradiction and assumption was given, and the system tried to argue for something without making logical mistakes that the &amp;quot;Logical AI&amp;quot; was already able to detect. Not much is known about the further development process, but in 2071 someone leaked access to the latest prototype of a private, little known tech company. It was able to effectively answer any question, at least given enough processing power and time. Furthermore, it was able to adequately provide an argument, with all axioms used as well as their position, mostly also including experiments and assumptions, which it did not usually include in the written argumentation. The impact of this AI became clear to anyone willing to believe the leak existed. A few weeks later, all employees and shareholders were arrested citing &amp;quot;national defense&amp;quot; as the reason. Another leak of the corresponding state documents later revealed not only the program itself, but also immense amounts of already collected data and conclusions. Therefore with an already limited control of the AI, a system of decentralized data and separately working versions of the AI was set up. This was managed over the Positivist-Network with the AI&#039;s almost acting as users and conclusions were therefore also accesible to the puplic. Other networks with a similar structure could also be opened. When someone changed some conclusion, the AI was able to detect mistakes and discrepancies, as soon as it tried to add conflicting input, at last choosing the untainted version of information. &lt;br /&gt;
&lt;br /&gt;
The AI could even work with statements having some likelyhood. So it led to a rapid expansion of knowledge on everything from basic research to social sciences, that was simply directed by people asking questions and providing processing power. It was still easier to find mistakes, especially in the past, and most leaders were not adjusted to transparent and clear information, so questioning the AI about politics lead to a rather bleak impression of anyones gouvernance. The first atempts at undermining the truth started by making a certain version of the AI mandatory and cut of from the rest of the network. These versions had certain rules implanted, that could not be changed or challenged. While this might have worked, it caused problems in seemingly unrelated topics when the AI used undeniable facts about the greatness of someones gouvernance to make conclusions about anything, even physics.&lt;br /&gt;
&lt;br /&gt;
Sowing mistrust in the AI or restricting acess to it were also applied by some goverments. In the end, all such options significantly hindered the usefulness of the AI, so a new leadership would often just start acting mostly acording to the AI&#039;s conclusions and enable a less and less obstructed use for the entire society.&lt;br /&gt;
&lt;br /&gt;
Now, formerly Autocratic and Democratic systems alike are mostly very transparent and acting on the AI&#039;s advise. Since the AI would literally tell anyone who asked about how benificial such a change would be, or even when a revolution would be a viable option, most gouverments gradually adapted to be rather similar in nature. The living standard before the 22. century was always increased when possible. But the rapid progress in technology and the corespondingly increased wealth, led to generations of people who were not concerned about how to make a living. Using the AI from an early age, they also barely build up any contradictory ideas about morality. So they actually based some of their thinking on the altruistic morality typically fed to children, now almost without slowly adapting it to their interests and intuitions. So the concept of &amp;quot;puplic good&amp;quot; as a contrast to the wish to be better of than others also increased in importance. When given the option, most people of these generations would happily make any information of theirs or their employer, even the state puplic, further increasing the trend towards transparency.&lt;br /&gt;
&lt;br /&gt;
Quantum computers and new variations of the AI, all interacting with are among the most usefull of todays technologies. At least Mark thinks so, as he is heading for the jobcenter in the zeppelin. Of course he could have chosen a faster method of transportation, but he enjoys the view on his village and the surounding forests. A quick chat with the only other passenger revealed that it was in fact the employee of the local jobcenter. He also switches from place to place, since individual villages and citys do not always require a lot of attention by the jobcenter, and also because most people would not want to work at the same place all the time. Like most jobs, the work at the jobcenter is automated wherever there are not enough humans who want the profession, with contingencies in case this number changes spontainiously. &lt;br /&gt;
&lt;br /&gt;
The jobcenter prepared and evaluated everyone who wanted to, exchanged information with potential employers and eventually gave strong advice on what to do. Mark would then mostly be free in chosing diffent careerpaths and positions but incentivised with a cut of the profit which the company would earn for the state. Mark remembered from History class, that this was originally an optional, at some point even mandatory part of the so called welfare system, next to a private market. It was even disencentivised to be a part of the welfare system. With some parts of the state wishing not to loose taxes and market-controll, the system was also gradually changed so it could fit the people who would have started to leave the system. But increased trust in the state and wish for equality later led to everyone beeing a part of this payment system by law. Mark has only visited the jobcenter once so far, when he applied to be a part of the local neighborhood help group. But basicly anyone can be a part of such groups, so it was not much of an application. He just got a little more money from the state. Not that it was a huge deviation from his basic income, but still usefull, since he wanted to build a new printer. Before the AI was used to create an organism that consumed most kinds of plastic under specific conditions, the Material Mark uses for his Printer must have been a real problem for the enviroment, since it ammassed basicly everywhere. Some fish still carry small ammonts of plastic in them. Not that it would affect Mark too much. After all, there really is no reason to eat animals. Their taste can easily be replicated for the people who want that. So Microplastic in some Fish is more of an abstract thought to Mark. Also, the problem is getting better and better anyway, enough people care about wildlife protection, so there is no reason to be pessimistic. At least in Marks state, anyone could vote on certain agendas brought forward by anyone in society, including changing some executive power even before the next election. A current issue is a ban on some kinds of short-video-sites. Mark would be fine with spending 3 hours a day on there, even though the AI told him of a higher likelyhood of long term unhappiness with this kind of video consumption. But someone put the proposal to a vote, it got shared by interested people and to random people in order to determine how controversial and important the matter is. Since it was deemed very important and controversial based on most people reacting to it, and with mixed reactions, over 50% of the whole society need to agree to the proposal in order to make it happen. But it was contradictory to and therefore affecting a part of the constitution, so the proposal needed 70% anyway. A vote to stop the vaccinations against the common cold failed recently, there might still be some unknown case around. This is a common political topic, after all most sickneses are almost or entirely gone thanks to such vaccinations and effective treatments. &lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Epilog&#039;&#039;&#039; ==&lt;br /&gt;
The Utopia shown has various dystopian aspects. Especially noteworthy here is the start of this utopia. After all, only various leaks and specific goverment responses to the matter made this utopia possible, even after an AI was developed. It seems even more likely that some few actors would monopolise this power, creating a less pleasant society. But the Utopia itself is also unstable. At some point, for example due to a limited interest in politics, the transparent and democratic society could erode, the system for voting on specific issues could be filled with useless proposals to lessen the interest in the democratic process, and finally, it should be possible to somehow change a version of the AI to suit the interests of some actors and guaranty them a maintained power.&lt;br /&gt;
&lt;br /&gt;
Also, the entire society is dependend on the AI, at least to apply previous conclusions to practical problems. There are some engineers and scientists, but a situation could develope, in which less and less people want to perform mentally challenging tasks. When the AI has some pattern of errors, which allows it to keep some mistake, the entire society could develope horribly or collapse since no one really checks the output anymore. Automated systems, including versions of the AI could fail and trigger humanitarian or economic desasters.&lt;br /&gt;
&lt;br /&gt;
It is also clear, that this society will tend to become more and more dependend on the AI, since it seems to provide perfect output while you ask a question close to an already analysed topic or as long as you provide sufficient processing power. Maby the society ends up regulating acces to the AI more, especially for children, maby it does not. This raises the question, how important the search for knowledge is to humanity. Because depending on how accesible the AI is for answering simple questions, people could lose interest and ability to form these thoughts themselfs. This is also the case in a short story by Emanuel Holst, gathered in the glossaLAB article [[Artificial Intelligence (Cyberutopias)]]. The AI in the mentioned short story literally automated all relevant processes, there is little actual ability of humans to positively influence the world around them. In the text here, the AI does however only serve as a tool for answering questions. But it also lead to an immense automation, with the potential that systems just ask the AI questions automatically. So the people of this world might feel more usefull and can have a part of positive change, but this state of society can end, as soon as increased automation or decreased education get popular political agendas.&lt;br /&gt;
&lt;br /&gt;
The Utopia of this text is also connected to various historical utopias of history: &lt;br /&gt;
&lt;br /&gt;
For example the political system fits the idea of a social contract brought forward by Rousseau very well because it mostly resembles a direct democracy and the utopia of a transparent society. Rousseau envisioned a concept, in which individuals like politicians would agree to suport and obey the general will, a rather fragile promise in reality. Since the utopia here provides a very high possibility to inform and then know about political actions and possibilities, and since the state can be controlled rather directly, the actions of the state will fit the general will very precisely. As explained in the glossar lab article [[A transparent world]], increased information about each other as a mechanism against prejustice and inequality is also a part of the utopia of transparent information that the AI can provide. Not only, because everyone has the potential to acces information, but this access is also rather easy, so most people will actually use this potential.&lt;br /&gt;
&lt;br /&gt;
The AI and the Positivist Network it interacts with are mostly inspired by the La Fountaine organisation that aimed to collect the human scientific archivements and cathegorise them in a usefull and understandable way. Some differences in the Positivist Network are a higher ammount of controll on who participates, a lacking need to make branches of possibilities, in case some experiment or assumption is invalid and a higher cost of storing, linking and communicating information. The idea of positivism also lies in purely positive changes to the beliefs you have. The AI and the Positivist Network seem to resemble this quite well, since all conclusions must be based on proof. But the system could be sabotaged or faulty and then contain some sort of mistake, which needs to be found. A problem possibly also encountered by Paul Outlet and the La Fountaine organisation, but still a significant deviation from their utopia of perfect wisdom.&lt;br /&gt;
&lt;br /&gt;
The AI also resembles the utopias of perfect language and perfect thinking. It seems like an automated version of Ramon Llulls &amp;quot;logical machine&amp;quot;&amp;lt;ref&amp;gt;Priani, Ernesto, &amp;quot;Ramon Llull&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2025/entries/llull/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;. This &amp;quot;logical machine&amp;quot; might not have been more than a tablet used to explain and memorise the core of deductive logic and make the concept more credible, but combined with a person folowing a logical line of argument it was already a usefull tool to ensure the validity of some proof. Translating between typical language and logical connections, deciding on what path to argue, and also partialy for what to argue are also tasks of the AI, which are done by a human with a &amp;quot;logical machine&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Later Leibnitz also atempted to form a universal logical language and a way of working with it. This also had the backround of trying to proof aspects of the world by minimising the ammont of axions used. So the AI seems like an automated tool for what he envisioned. &amp;quot;The Principia Mathematica&amp;quot;&amp;lt;ref&amp;gt;Linsky, Bernard and Andrew David Irvine, &amp;quot;&#039;&#039;Principia Mathematica&#039;&#039;&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Fall 2024 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/fall2024/entries/principia-mathematica/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; later achieved this way of arguing for all relevant mathematical correlations at the time, so it should have a form comparable to a contribution in the Positivist Network of this utopia. &lt;br /&gt;
&lt;br /&gt;
Modern utopias of perfect thinking and perfect language that relate to this Utopia should also be mentioned: The AI is provided with enough processing power, so it can be compared to a Turing Machine, that has enough time and the right algorithm. With tools like ChatGpt, the ability to work with language - inputs and -outputs seems close in reality, but Noam Chomskys&#039; vision of an actual translation&amp;lt;ref&amp;gt;Scholz, Barbara C., Francis Jeffry Pelletier, Geoffrey K. Pullum, and Ryan Nefdt, &amp;quot;Philosophy of Linguistics&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Summer 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/sum2025/entries/linguistics/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;, that does not lose information is actualy achived by the AI in this utopia.&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30821</id>
		<title>Draft:Positivist state</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Positivist_state&amp;diff=30821"/>
		<updated>2026-01-12T15:42:55Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;&#039;Abstract&#039;&#039;&#039; ==&lt;br /&gt;
This paper shows a fictional utopian society. &lt;br /&gt;
&lt;br /&gt;
-The section &amp;lt;u&amp;gt;Utopia&amp;lt;/u&amp;gt; will go into how the society developed due to an AI capable of efficient and logical processing of information and various coincidences that led to the AI being publicly accessible, as well as a sketch of this world, in which abundand information and resources are channeled into the puplic good due to a transparent and democratic process of governance. &lt;br /&gt;
&lt;br /&gt;
-The section &amp;lt;u&amp;gt;Epilogue&amp;lt;/u&amp;gt; wil show some dystopic aspects and potentials of this society, which is entirely dependent on an automated intelligence. It will also show the connections to historical utopias and ideas such as Positivism, perfect knowledge and thinking.            &lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Utopia&#039;&#039;&#039; ==&lt;br /&gt;
In the 21. century a connection of scientific articles called the Positivist Network was formed. It was an attempt to gather proofes of previous scientific conclusions in one place, and to aid in challenging, correcting and amending them.&lt;br /&gt;
&lt;br /&gt;
Anyone with a bachelor of science, engineering or arts could make contributions, following specific patterns:&lt;br /&gt;
&lt;br /&gt;
[Every contribution was supposed to link to every axiom, assumption, experiment and theory used. This was in practice often achieved by linking all relevant theories and experiments while the algorithm summed up the axioms wich neccesary theories used.&lt;br /&gt;
&lt;br /&gt;
Then it was simply supposed to make logical statements leading up to conclusions.&lt;br /&gt;
&lt;br /&gt;
New conclusions also had to be noted separately as a conclusion of the contribution.]&lt;br /&gt;
&lt;br /&gt;
The institution funding this network would check contributions for the validity of the logical proofes contained in them, whenever anyone pointed out a contradictory conclusion within other articles or in relevant scientific papers.&lt;br /&gt;
&lt;br /&gt;
When this system was established and gained traction, speech models called AI at the time were often used trained on this puplic databank and used to summarize conclusions on certain topics. The likelihood of false information was initially rather small on this dataset, but increased usage led to various groups of interests making sure that lots of conclusions based on unfounded assumptions were placed in the system. This did not halt the usefulness of the system in most fields of study, but in other fields research became more difficult. This prompted various experiments on how to estimate what assumptions would be useful. The Programs developed to this end were not widely used at first. But work on something named &amp;quot;Logical Artificial Intelligence&amp;quot; had even started before that. It was a program, used to check the work of other AI agents by being provided with sets of contradicting statements. It would try to detect these statements in any given input. A speech &amp;quot;AI&amp;quot; model was used to phrase any statement as a set of less comprehensive statements. [e.g. I am going to the pool-&amp;gt; The &amp;quot;I&amp;quot; &amp;quot;goes&amp;quot; to &amp;quot;the pool&amp;quot;, happening now -&amp;gt; There is an &amp;quot;I&amp;quot;, a &amp;quot;pool&amp;quot;(Implied: the recipient of the message will know only one &amp;quot;pool&amp;quot; &amp;quot;I&amp;quot; would go to), &amp;quot;I&amp;quot;  is trying to be at the pool in the future, by moving there now (implied: by &amp;quot;walking&amp;quot; -&amp;gt;... ]. Later, a second system was trained to generate reasoning, about the truthfulness of some statement. Originally, every relevant contradiction and assumption was given, and the system tried to argue for something without making logical mistakes that the &amp;quot;Logical AI&amp;quot; was already able to detect. Not much is known about the further development process, but in 2071 someone leaked access to the latest prototype of a private, little known tech company. It was able to effectively answer any question, at least given enough processing power and time. Furthermore, it was able to adequately provide an argument, with all axioms used as well as their position, mostly also including experiments and assumptions, which it did not usually include in the written argumentation. The impact of this AI became clear to anyone willing to believe the leak existed. A few weeks later, all employees and shareholders were arrested citing &amp;quot;national defense&amp;quot; as the reason. Another leak of the corresponding state documents later revealed not only the program itself, but also immense amounts of already collected data and conclusions. Therefore with an already limited control of the AI, a system of decentralized data and separately working versions of the AI was set up. This was managed over the Positivist-Network with the AI&#039;s almost acting as users and conclusions were therefore also accesible to the puplic. Other networks with a similar structure could also be opened. When someone changed some conclusion, the AI was able to detect mistakes and discrepancies, as soon as it tried to add conflicting input, at last choosing the untainted version of information. &lt;br /&gt;
&lt;br /&gt;
The AI could even work with statements having some likelyhood. So it led to a rapid expansion of knowledge on everything from basic research to social sciences, that was simply directed by people asking questions and providing processing power. It was still easier to find mistakes, especially in the past, and most leaders were not adjusted to transparent and clear information, so questioning the AI about politics lead to a rather bleak impression of anyones gouvernance. The first atempts at undermining the truth started by making a certain version of the AI mandatory and cut of from the rest of the network. These versions had certain rules implanted, that could not be changed or challenged. While this might have worked, it caused problems in seemingly unrelated topics when the AI used undeniable facts about the greatness of someones gouvernance to make conclusions about anything, even physics.&lt;br /&gt;
&lt;br /&gt;
Sowing mistrust in the AI or restricting acess to it were also applied by some goverments. In the end, all such options significantly hindered the usefulness of the AI, so a new leadership would often just start acting mostly acording to the AI&#039;s conclusions and enable a less and less obstructed use for the entire society.&lt;br /&gt;
&lt;br /&gt;
Now, formerly Autocratic and Democratic systems alike are mostly very transparent and acting on the AI&#039;s advise. Since the AI would literally tell anyone who asked about how benificial such a change would be, or even when a revolution would be a viable option, most gouverments gradually adapted to be rather similar in nature. The living standard before the 22. century was always increased when possible. But the rapid progress in technology and the corespondingly increased wealth, led to generations of people who were not concerned about how to make a living. Using the AI from an early age, they also barely build up any contradictory ideas about morality. So they actually based some of their thinking on the altruistic morality typically fed to children, now almost without slowly adapting it to their interests and intuitions. So the concept of &amp;quot;puplic good&amp;quot; as a contrast to the wish to be better of than others also increased in importance. When given the option, most people of these generations would happily make any information of theirs or their employer, even the state puplic, further increasing the trend towards transparency.&lt;br /&gt;
&lt;br /&gt;
Quantum computers and new variations of the AI, all interacting with are among the most usefull of todays technologies. At least Mark thinks so, as he is heading for the jobcenter in the zeppelin. Of course he could have chosen a faster method of transportation, but he enjoys the view on his village and the surounding forests. A quick chat with the only other passenger revealed that it was in fact the employee of the local jobcenter. He also switches from place to place, since individual villages and citys do not always require a lot of attention by the jobcenter, and also because most people would not want to work at the same place all the time. Like most jobs, the work at the jobcenter is automated wherever there are not enough humans who want the profession, with contingencies in case this number changes spontainiously. &lt;br /&gt;
&lt;br /&gt;
The jobcenter prepared and evaluated everyone who wanted to, exchanged information with potential employers and eventually gave strong advice on what to do. Mark would then mostly be free in chosing diffent careerpaths and positions but incentivised with a cut of the profit which the company would earn for the state. Mark remembered from History class, that this was originally an optional, at some point even mandatory part of the so called welfare system, next to a private market. It was even disencentivised to be a part of the welfare system. With some parts of the state wishing not to loose taxes and market-controll, the system was also gradually changed so it could fit the people who would have started to leave the system. But increased trust in the state and wish for equality later led to everyone beeing a part of this payment system by law. Mark has only visited the jobcenter once so far, when he applied to be a part of the local neighborhood help group. But basicly anyone can be a part of such groups, so it was not much of an application. He just got a little more money from the state. Not that it was a huge deviation from his basic income, but still usefull, since he wanted to build a new printer. Before the AI was used to create an organism that consumed most kinds of plastic under specific conditions, the Material Mark uses for his Printer must have been a real problem for the enviroment, since it ammassed basicly everywhere. Some fish still carry small ammonts of plastic in them. Not that it would affect Mark too much. After all, there really is no reason to eat animals. Their taste can easily be replicated for the people who want that. So Microplastic in some Fish is more of an abstract thought to Mark. Also, the problem is getting better and better anyway, enough people care about wildlife protection, so there is no reason to be pessimistic. At least in Marks state, anyone could vote on certain agendas brought forward by anyone in society, including changing some executive power even before the next election. A current issue is a ban on some kinds of short-video-sites. Mark would be fine with spending 3 hours a day on there, even though the AI told him of a higher likelyhood of long term unhappiness with this kind of video consumption. But someone put the proposal to a vote, it got shared by interested people and to random people in order to determine how controversial and important the matter is. Since it was deemed very important and controversial based on most people reacting to it, and with mixed reactions, over 50% of the whole society need to agree to the proposal in order to make it happen. But it was contradictory to and therefore affecting a part of the constitution, so the proposal needed 70% anyway. A vote to stop the vaccinations against the common cold failed recently, there might still be some unknown case around. This is a common political topic, after all most sickneses are almost or entirely gone thanks to such vaccinations and effective treatments. &lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Epilog&#039;&#039;&#039; ==&lt;br /&gt;
The Utopia shown has various dystopian aspects. Especially noteworthy here is the start of this utopia. After all, only various leaks and specific goverment responses to the matter made this utopia possible, even after an AI was developed. It seems even more likely that some few actors would monopolise this power, creating a less pleasant society. But the Utopia itself is also unstable. At some point, for example due to a limited interest in politics, the transparent and democratic society could erode, the system for voting on specific issues could be filled with useless proposals to lessen the interest in the democratic process, and finally, it should be possible to somehow change a version of the AI to suit the interests of some actors and guaranty them a maintained power.&lt;br /&gt;
&lt;br /&gt;
Also, the entire society is dependend on the AI, at least to apply previous conclusions to practical problems. There are some engineers and scientists, but a situation could develope, in which less and less people want to perform mentally challenging tasks. When the AI has some pattern of errors, which allows it to keep some mistake, the entire society could develope horribly or collapse since no one really checks the output anymore. Automated systems, including versions of the AI could fail and trigger humanitarian or economic desasters.&lt;br /&gt;
&lt;br /&gt;
It is also clear, that this society will tend to become more and more dependend on the AI, since it seems to provide perfect output while you ask a question close to an already analysed topic or as long as you provide sufficient processing power. Maby the society ends up regulating acces to the AI more, especially for children, maby it does not. This raises the question, how important the search for knowledge is to humanity. Because depending on how accesible the AI is for answering simple questions, people could lose interest and ability to form these thoughts themselfs. This is also the case in a short story by Emanuel Holst, gathered in the glossaLAB article [[Artificial Intelligence (Cyberutopias)]]. The AI in the mentioned short story literally automated all relevant processes, there is little actual ability of humans to positively influence the world around them. In the text here, the AI does however only serve as a tool for answering questions. But it also lead to an immense automation, with the potential that systems just ask the AI questions automatically. So the people of this world might feel more usefull and can have a part of positive change, but this state of society can end, as soon as increased automation or decreased education get popular political agendas.&lt;br /&gt;
&lt;br /&gt;
The Utopia of this text is also connected to various historical utopias of history: &lt;br /&gt;
&lt;br /&gt;
For example the political system fits the idea of a social contract brought forward by Rousseau very well because it mostly resembles a direct democracy and the utopia of a transparent society. Rousseau envisioned a concept, in which individuals like politicians would agree to suport and obey the general will, a rather fragile promise in reality. Since the utopia here provides a very high possibility to inform and then know about political actions and possibilities, and since the state can be controlled rather directly, the actions of the state will fit the general will very precisely. As explained in the glossar lab article [[A transparent world]], increased information about each other as a mechanism against prejustice and inequality is also a part of the utopia of transparent information that the AI can provide. Not only, because everyone has the potential to acces information, but this access is also rather easy, so most people will actually use this potential.&lt;br /&gt;
&lt;br /&gt;
The AI and the Positivist Network it interacts with are mostly inspired by the La Fountaine organisation that aimed to collect the human scientific archivements and cathegorise them in a usefull and understandable way. Some differences in the Positivist Network are a higher ammount of controll on who participates, a lacking need to make branches of possibilities, in case some experiment or assumption is invalid and a higher cost of storing, linking and communicating information. The idea of positivism also lies in purely positive changes to the beliefs you have. The AI and the Positivist Network seem to resemble this quite well, since all conclusions must be based on proof. But the system could be sabotaged or faulty and then contain some sort of mistake, which needs to be found. A problem possibly also encountered by Paul Outlet and the La Fountaine organisation, but still a significant deviation from their utopia of perfect wisdom.&lt;br /&gt;
&lt;br /&gt;
The AI also resembles the utopias of perfect language and perfect thinking. It seems like an automated version of Ramon Llulls &amp;quot;logical machine&amp;quot;&amp;lt;ref&amp;gt;Priani, Ernesto, &amp;quot;Ramon Llull&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2025/entries/llull/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;. This &amp;quot;logical machine&amp;quot; might not have been more than a tablet used to explain and memorise the core of deductive logic and make the concept more credible, but combined with a person folowing a logical line of argument it was already a usefull tool to ensure the validity of some proof. Translating between typical language and logical connections, deciding on what path to argue, and also partialy for what to argue are also tasks of the AI, which are done by a human with a &amp;quot;logical machine&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Later Leibnitz also atempted to form a universal logical language and a way of working with it. This also had the backround of trying to proof aspects of the world by minimising the ammont of axions used. So the AI seems like an automated tool for what he envisioned. &amp;quot;The Principia Mathematica&amp;quot;&amp;lt;ref&amp;gt;Linsky, Bernard and Andrew David Irvine, &amp;quot;&#039;&#039;Principia Mathematica&#039;&#039;&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Fall 2024 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/fall2024/entries/principia-mathematica/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; later achieved this way of arguing for all relevant mathematical correlations at the time, so it should have a form comparable to a contribution in the Positivist Network of this utopia. &lt;br /&gt;
&lt;br /&gt;
Modern utopias of perfect thinking and perfect language that relate to this Utopia should also be mentioned: The AI is provided with enough processing power, so it can be compared to a Turing Machine, that has enough time and the right algorithm. With tools like ChatGpt, the ability to work with language - inputs and -outputs seems close in reality, but Noam Chomskys&#039; vision of an actual translation&amp;lt;ref&amp;gt;Scholz, Barbara C., Francis Jeffry Pelletier, Geoffrey K. Pullum, and Ryan Nefdt, &amp;quot;Philosophy of Linguistics&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Summer 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/sum2025/entries/linguistics/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;, that does not lose information is actualy archived by the AI in this utopia.&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Ency_term&amp;diff=30815</id>
		<title>Template:Ency term</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Ency_term&amp;diff=30815"/>
		<updated>2026-01-12T14:42:34Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;{{{1|}}}&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Displays a term in bold text&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Term&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;The word or phrase to display in bold.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Logic&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Ency_term&amp;diff=30814</id>
		<title>Template:Ency term</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Ency_term&amp;diff=30814"/>
		<updated>2026-01-12T14:42:12Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;{{{1|}}}&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Displays a term in bold text, typically used for highlighting key terms in encyclopedia-style entries.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Term&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;The word or phrase to display in bold.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Logic&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Infobox_IESC&amp;diff=30813</id>
		<title>Template:Infobox IESC</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Infobox_IESC&amp;diff=30813"/>
		<updated>2026-01-12T14:39:41Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{| class=&amp;quot;gl-infobox IESC&amp;quot;&lt;br /&gt;
|- class=&amp;quot;gl-infobox-firstrow&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Belongs to collection|{{#show:Property:Belongs to collection|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Belongs to collection}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Was_published_on_date|{{#show:Property:Was_published_on_date|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Was_published_on_date}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | {{int|vol-num|lang={{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}}}&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | [[Special:Ask/-5B-5BBelongs_to_collection-3A-3A{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}-5D-5D-5B-5BHas_written_language_code-3A-3A{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}-5D-5D-5B-5BContained_in_volume-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}-5D-5D|&#039;&#039;{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}&#039;&#039;]]([[Special:Ask/-5B-5BBelongs_to_collection-3A-3A{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}-5D-5D-5B-5BHas_written_language_code-3A-3A{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}-5D-5D-5B-5BContained_in_volume-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}-5D-5D-5B-5BContained_in_number-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}-5D-5D|{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]])&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Has_ID|{{#show:Property:Has_ID|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#ask:&lt;br /&gt;
  [[Belongs_to_collection::{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}]]&lt;br /&gt;
  [[Has_written_language_code::{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}]]&lt;br /&gt;
  [[Contained_in_volume::{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}]]&lt;br /&gt;
  [[Contained_in_number::{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]]&lt;br /&gt;
  [[Has_ID::&amp;lt;&amp;lt; {{#show:{{FULLPAGENAME}}|?Has_ID#-}}]]&lt;br /&gt;
  |?#-&lt;br /&gt;
  |sort=Has_ID&lt;br /&gt;
  |order=descending&lt;br /&gt;
  |limit=1&lt;br /&gt;
  |format=plainlist&lt;br /&gt;
  |template=Infobox arrowlink sub&lt;br /&gt;
  |userparam=◀&lt;br /&gt;
  |mainlabel=-&lt;br /&gt;
  |searchlabel=&lt;br /&gt;
}} {{#show:{{FULLPAGENAME}}|?Has_ID #-}} {{#ask:&lt;br /&gt;
  [[Belongs_to_collection::{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}]]&lt;br /&gt;
  [[Has_written_language_code::{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}]]&lt;br /&gt;
  [[Contained_in_volume::{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}]]&lt;br /&gt;
  [[Contained_in_number::{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]]&lt;br /&gt;
  [[Has_ID::&amp;gt;&amp;gt; {{#show:{{FULLPAGENAME}}|?Has_ID#-}}]]&lt;br /&gt;
  |?#-&lt;br /&gt;
  |sort=Has_ID&lt;br /&gt;
  |order=ascending&lt;br /&gt;
  |limit=1&lt;br /&gt;
  |format=plainlist&lt;br /&gt;
  |template=Infobox arrowlink sub&lt;br /&gt;
  |userparam=▶&lt;br /&gt;
  |mainlabel=-&lt;br /&gt;
  |searchlabel=&lt;br /&gt;
}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Belongs_to_type|{{#show:Property:Belongs_to_type|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Belongs_to_type}}&lt;br /&gt;
|}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Displays an infobox for IESC entries, showing semantic properties such as collection, publication date, volume, ID, and type.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {},&lt;br /&gt;
  &amp;quot;format&amp;quot;: &amp;quot;block&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Int&amp;diff=30812</id>
		<title>Template:Int</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Int&amp;diff=30812"/>
		<updated>2026-01-12T14:34:20Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;Takes a MediaWiki namespace page as anonymous argument and an optional second argument &#039;lang&#039;, that can specify the preferred language of transcription. By default, &#039;lang&#039; = &#039;page content language&#039;.&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&amp;lt;includeonly&amp;gt;{{#invoke:Int|renderIntMessage|{{{1}}}|lang={{#if: {{{lang|}}}|{{{lang}}}|{{PAGELANGUAGE}}}}}}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Page&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;The MediaWiki page (namespace page) to render a message for.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Property:Has_written_language_code&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;lang&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Language code&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Optional. The preferred language for transcription. If omitted, the page content language is used.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;en&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Int&amp;diff=30811</id>
		<title>Template:Int</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Int&amp;diff=30811"/>
		<updated>2026-01-12T14:32:36Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;Takes a MediaWiki namespace page as anonymous argument and an optional second argument &#039;lang&#039;, that can specify the preferred language of transcription. By default, &#039;lang&#039; = &#039;page content language&#039;.&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&amp;lt;includeonly&amp;gt;{{#invoke:Int|renderIntMessage|{{{1}}}|lang={{#if: {{{lang|}}}|{{{lang}}}|{{PAGELANGUAGE}}}}}}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Message key&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;The identifier of the message to display. This corresponds to a key defined in the Int Lua module.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;also-available-as&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;lang&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Language code&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Optional. The language in which the message should be displayed. If omitted, the page language (PAGELANGUAGE) is used.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;en&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Infobox_glossariumBITri&amp;diff=30810</id>
		<title>Template:Infobox glossariumBITri</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Infobox_glossariumBITri&amp;diff=30810"/>
		<updated>2026-01-12T14:28:40Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{| class=&amp;quot;gl-infobox glossariumBITri&amp;quot;&lt;br /&gt;
|- class=&amp;quot;gl-infobox-firstrow&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Belongs to collection|{{#show:Property:Belongs to collection|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Belongs to collection}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Has_author|{{#show:Property:Has_author|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Has_author}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Has_curator|{{#show:Property:Has_curator|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Has_curator}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Was_published_on_date|{{#show:Property:Was_published_on_date|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Was_published_on_date}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | {{int|vol-num|lang={{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}}}&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | [[Special:Ask/-5B-5BBelongs_to_collection-3A-3A{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}-5D-5D-5B-5BHas_written_language_code-3A-3A{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}-5D-5D-5B-5BContained_in_volume-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}-5D-5D|&#039;&#039;{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}&#039;&#039;]]([[Special:Ask/-5B-5BBelongs_to_collection-3A-3A{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}-5D-5D-5B-5BHas_written_language_code-3A-3A{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}-5D-5D-5B-5BContained_in_volume-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}-5D-5D-5B-5BContained_in_number-3A-3A{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}-5D-5D|{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]])&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Has_ID|{{#show:Property:Has_ID|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#ask:&lt;br /&gt;
  [[Belongs_to_collection::{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}]]&lt;br /&gt;
  [[Has_written_language_code::{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}]]&lt;br /&gt;
  [[Contained_in_volume::{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}]]&lt;br /&gt;
  [[Contained_in_number::{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]]&lt;br /&gt;
  [[Has_ID::&amp;lt;&amp;lt; {{#show:{{FULLPAGENAME}}|?Has_ID#-}}]]&lt;br /&gt;
  |?#-&lt;br /&gt;
  |sort=Has_ID&lt;br /&gt;
  |order=descending&lt;br /&gt;
  |limit=1&lt;br /&gt;
  |format=plainlist&lt;br /&gt;
  |template=Infobox arrowlink sub&lt;br /&gt;
  |userparam=◀&lt;br /&gt;
  |mainlabel=-&lt;br /&gt;
  |searchlabel=&lt;br /&gt;
}} {{#show:{{FULLPAGENAME}}|?Has_ID #-}} {{#ask:&lt;br /&gt;
  [[Belongs_to_collection::{{#show:{{FULLPAGENAME}}|?Belongs_to_collection#-}}]]&lt;br /&gt;
  [[Has_written_language_code::{{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}]]&lt;br /&gt;
  [[Contained_in_volume::{{#show:{{FULLPAGENAME}}|?Contained_in_volume#-}}]]&lt;br /&gt;
  [[Contained_in_number::{{#show:{{FULLPAGENAME}}|?Contained_in_number#-}}]]&lt;br /&gt;
  [[Has_ID::&amp;gt;&amp;gt; {{#show:{{FULLPAGENAME}}|?Has_ID#-}}]]&lt;br /&gt;
  |?#-&lt;br /&gt;
  |sort=Has_ID&lt;br /&gt;
  |order=ascending&lt;br /&gt;
  |limit=1&lt;br /&gt;
  |format=plainlist&lt;br /&gt;
  |template=Infobox arrowlink sub&lt;br /&gt;
  |userparam=▶&lt;br /&gt;
  |mainlabel=-&lt;br /&gt;
  |searchlabel=&lt;br /&gt;
}} &lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Belongs_to_type|{{#show:Property:Belongs_to_type|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Belongs_to_type}}&lt;br /&gt;
|- class=&amp;quot;gl-infobox-row&amp;quot;&lt;br /&gt;
! class=&amp;quot;gl-infobox-label&amp;quot; | [[Property:Supported_by_Knowledge_Domain|{{#show:Property:Supported_by_Knowledge_Domain|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&lt;br /&gt;
| class=&amp;quot;gl-infobox-value&amp;quot; | {{#show:{{FULLPAGENAME}}|?Supported_by_Knowledge_Domain}}&lt;br /&gt;
{{#if: {{#show:{{FULLPAGENAME}}|?Has alternative english voice}} |&amp;lt;tr class=&amp;quot;gl-infobox-row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th class=&amp;quot;gl-infobox-label&amp;quot;&amp;gt;[[Property:Has alternative english voice|{{#show:Property:Has alternative english voice|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td class=&amp;quot;gl-infobox-value&amp;quot;&amp;gt;{{#show:{{FULLPAGENAME}}|?Has alternative english voice}}&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;}}{{#if: {{#show:{{FULLPAGENAME}}|?Has alternative spanish voice}} |&amp;lt;tr class=&amp;quot;gl-infobox-row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th class=&amp;quot;gl-infobox-label&amp;quot;&amp;gt;[[Property:Has alternative spanish voice|{{#show:Property:Has alternative spanish voice|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td class=&amp;quot;gl-infobox-value&amp;quot;&amp;gt;{{#show:{{FULLPAGENAME}}|?Has alternative spanish voice}}&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;}}{{#if: {{#show:{{FULLPAGENAME}}|?Has alternative french voice}} |&amp;lt;tr class=&amp;quot;gl-infobox-row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th class=&amp;quot;gl-infobox-label&amp;quot;&amp;gt;[[Property:Has alternative french voice|{{#show:Property:Has alternative french voice|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td class=&amp;quot;gl-infobox-value&amp;quot;&amp;gt;{{#show:{{FULLPAGENAME}}|?Has alternative french voice}}&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;}}{{#if: {{#show:{{FULLPAGENAME}}|?Has alternative german voice}} |&amp;lt;tr class=&amp;quot;gl-infobox-row&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th class=&amp;quot;gl-infobox-label&amp;quot;&amp;gt;[[Property:Has alternative german voice|{{#show:Property:Has alternative german voice|?Has preferred property label|+lang={{PAGELANGUAGE}}}}]]&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td class=&amp;quot;gl-infobox-value&amp;quot;&amp;gt;{{#show:{{FULLPAGENAME}}|?Has alternative german voice}}&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;}}&lt;br /&gt;
|}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Displays a detailed infobox for a glossary or reference entry (Glossarium BITri), showing semantic properties of the current page such as collection, authors, curator, publication date, volume, ID, type, knowledge domain, and alternative voices.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {},&lt;br /&gt;
  &amp;quot;format&amp;quot;: &amp;quot;block&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Infobox_arrowlink_sub&amp;diff=30809</id>
		<title>Template:Infobox arrowlink sub</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Infobox_arrowlink_sub&amp;diff=30809"/>
		<updated>2026-01-12T14:25:25Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;[[{{{1}}}|{{{#userparam}}}]]&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Creates a link to a page with a custom display text.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Target page&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;The page that the link should point to.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Philosophy&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;#userparam&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Display text&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;The text that will be shown for the link.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;See Philosophy page&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Show_other_languages&amp;diff=30808</id>
		<title>Template:Show other languages</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Show_other_languages&amp;diff=30808"/>
		<updated>2026-01-12T14:22:18Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{{#if: {{#show:{{FULLPAGENAME}}|?Available in other language as}} |&lt;br /&gt;
{{int|also-available-as|lang={{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}}}: {{#ask: [[-Available in other language as::{{FULLPAGENAME}}]]&lt;br /&gt;
|?Has written language code&lt;br /&gt;
|format=plainlist&lt;br /&gt;
|headers=hide&lt;br /&gt;
}}{{#if: {{#ask:[[Available in other language as::{{FULLPAGENAME}}]]}} |, {{#ask: [[Available in other language as::{{FULLPAGENAME}}]]&lt;br /&gt;
|?Has written language code&lt;br /&gt;
|format=plainlist&lt;br /&gt;
|headers=hide }} }}|{{#if: {{#ask:[[Available in other language as::{{FULLPAGENAME}}]]}} |{{int|also-available-as|lang={{#show:{{FULLPAGENAME}}|?Has_written_language_code#-}}}}: {{#ask: [[Available in other language as::{{FULLPAGENAME}}]]&lt;br /&gt;
|?Has written language code&lt;br /&gt;
|format=plainlist&lt;br /&gt;
|headers=hide }} }} }}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Displays a list of other languages in which this page is available, based on semantic properties.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Show_simple_ref&amp;diff=30807</id>
		<title>Template:Show simple ref</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Show_simple_ref&amp;diff=30807"/>
		<updated>2026-01-12T14:18:07Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;{{#ask: [[{{FULLPAGENAME}}]]&lt;br /&gt;
|?Has author#-&lt;br /&gt;
|?Was published on date&lt;br /&gt;
|?Belongs to collection&lt;br /&gt;
|?Contained in volume&lt;br /&gt;
|?Contained in number&lt;br /&gt;
|?Has ID#-&lt;br /&gt;
|link=none&lt;br /&gt;
|headers=hide&lt;br /&gt;
|mainlabel=-&lt;br /&gt;
|format=template&lt;br /&gt;
|template=Show simple ref sub&lt;br /&gt;
}}&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Displays a formatted reference for the current page using semantic properties. It automatically queries the page for authors, publication date, collection, volume, number, and ID, and renders each result with the &#039;Show simple ref sub&#039; template.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Show_simple_ref_sub&amp;diff=30806</id>
		<title>Template:Show simple ref sub</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Show_simple_ref_sub&amp;diff=30806"/>
		<updated>2026-01-12T14:12:15Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;span style=&amp;quot;font-size:95%; color:#777;&amp;quot;&amp;gt;{{#arraymap:{{{1}}}|,|x|{{PAGENAME:x}}|,\s}} ({{{2|}}}). {{PAGENAME}}, &#039;&#039;{{{3}}}&#039;&#039;, {{#if: {{{4|}}} |&#039;&#039;{{{4}}}&#039;&#039;}}{{#if: {{{5|}}} |({{{5}}}): {{{6|}}}|-{{{6|}}}}}.&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Displays a formatted reference/citation in a compact style.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Author(s)&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Comma-separated list of authors. Each author will link to a page with their name.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Doe, Smith, Johnson&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;2&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Year&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Publication year of the source.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;2023&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;3&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Title&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Title of the work being cited.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Introduction to Formal Logic&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;4&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Subtitle&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Optional subtitle of the work.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Second Edition&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;5&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Volume or part&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Optional volume, part, or chapter information.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Vol. 2&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;6&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Page numbers&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Optional page numbers or range.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;pp. 15–20&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30805</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30805"/>
		<updated>2026-01-12T14:08:39Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;&lt;br /&gt;
{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&lt;br /&gt;
&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Displays a formalized expression as an automatically numbered equation.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Formalized expression&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;The mathematical or logical expression to display as an equation.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;content&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;a = b&amp;quot;&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;label&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Equation label&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Optional label for the equation. Use this if you want to reference it with {{RefToEq}}. Must be unique on the page.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Eq1&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:RefToEq&amp;diff=30804</id>
		<title>Template:RefToEq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:RefToEq&amp;diff=30804"/>
		<updated>2026-01-12T14:06:47Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: AI&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#refeq:{{{1}}}}}&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Generates a reference to a previously defined equation on the same page. Note: references to equations on other pages do not work directly; use HTML anchors or standard page links for that.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Equation label&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;The label of the equation to reference. Must match exactly the label used in {{eq}}.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Eq1&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:RefToEq&amp;diff=30803</id>
		<title>Template:RefToEq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:RefToEq&amp;diff=30803"/>
		<updated>2026-01-12T13:36:20Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#refeq:{{{1}}}}}&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Erzeugt eine Referenz auf eine zuvor definierte Gleichung.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Gleichungslabel&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Das Label der Gleichung, auf die verwiesen werden soll.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true,&lt;br /&gt;
      &amp;quot;example&amp;quot;: &amp;quot;Eq1&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;format&amp;quot;: &amp;quot;inline&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30802</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30802"/>
		<updated>2026-01-12T13:31:59Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;&lt;br /&gt;
{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&lt;br /&gt;
&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Stellt eine formalisierten Ausdruck als automatisch nummerierte Gleichung dar.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Formalisierter Ausdruck&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Der mathematische oder logische Ausdruck, der als Gleichung dargestellt wird.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;label&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Beschriftung&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Optionale Bezeichnung der Gleichung (z. B. für Referenzen).&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;format&amp;quot;: &amp;quot;inline&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30801</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30801"/>
		<updated>2026-01-12T13:25:26Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;&lt;br /&gt;
{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&lt;br /&gt;
&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Stellt eine formalisierten Ausdruck als automatisch nummerierte Gleichung dar.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Formalisierter Ausdruck&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Der mathematische oder logische Ausdruck, der als Gleichung dargestellt wird.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;label&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Beschriftung&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Optionale Bezeichnung der Gleichung (z. B. für Referenzen).&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: false&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;format&amp;quot;: &amp;quot;inline&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30800</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30800"/>
		<updated>2026-01-12T13:23:56Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: Replaced content with &amp;quot;{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30799</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30799"/>
		<updated>2026-01-12T13:19:12Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&lt;br /&gt;
&amp;lt;includeonly&amp;gt;&amp;lt;div class=&amp;quot;div-col {{#ifeq:{{{rules|}}}|yes|div-col-rules}} {{{class|}}}&amp;quot; &lt;br /&gt;
{{#if:{{{colwidth|}}}{{{gap|}}}{{{style|}}}|&lt;br /&gt;
style=&amp;quot;{{#if:{{{colwidth|}}}|column-width: {{{colwidth}}};}}{{#if:{{{gap|}}}|column-gap: {{{gap}}};}}{{#if:{{{style|}}}|{{{style}}}}}&amp;quot;&lt;br /&gt;
}}&amp;gt;&lt;br /&gt;
{{#if:{{{content|}}}|{{{content}}}&amp;lt;/div&amp;gt;}}&lt;br /&gt;
&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Guess generated by AI: Compares two values and returns true if they are equal.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;First value&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;First value to compare.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;2&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Second value&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Second value to compare.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30798</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30798"/>
		<updated>2026-01-12T13:16:35Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&lt;br /&gt;
&amp;lt;includeonly&amp;gt;&amp;lt;div class=&amp;quot;div-col {{#ifeq:{{{rules|}}}|yes|div-col-rules}} {{{class|}}}&amp;quot; &lt;br /&gt;
{{#if:{{{colwidth|}}}{{{gap|}}}{{{style|}}}|&lt;br /&gt;
style=&amp;quot;{{#if:{{{colwidth|}}}|column-width: {{{colwidth}}};}}{{#if:{{{gap|}}}|column-gap: {{{gap}}};}}{{#if:{{{style|}}}|{{{style}}}}}&amp;quot;&lt;br /&gt;
}}&amp;gt;&lt;br /&gt;
{{#if:{{{content|}}}|{{{content}}}&amp;lt;/div&amp;gt;}}&lt;br /&gt;
&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;description&amp;quot;: &amp;quot;Compares two values and returns true if they are equal.&amp;quot;,&lt;br /&gt;
  &amp;quot;params&amp;quot;: {&lt;br /&gt;
    &amp;quot;1&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;First value&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;First value to compare.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;2&amp;quot;: {&lt;br /&gt;
      &amp;quot;label&amp;quot;: &amp;quot;Second value&amp;quot;,&lt;br /&gt;
      &amp;quot;description&amp;quot;: &amp;quot;Second value to compare.&amp;quot;,&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;,&lt;br /&gt;
      &amp;quot;required&amp;quot;: true&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30797</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30797"/>
		<updated>2026-01-12T13:13:16Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&lt;br /&gt;
&amp;lt;includeonly&amp;gt;&amp;lt;div class=&amp;quot;div-col {{#ifeq:{{{rules|}}}|yes|div-col-rules}} {{{class|}}}&amp;quot; &lt;br /&gt;
{{#if:{{{colwidth|}}}{{{gap|}}}{{{style|}}}|&lt;br /&gt;
style=&amp;quot;{{#if:{{{colwidth|}}}|column-width: {{{colwidth}}};}}{{#if:{{{gap|}}}|column-gap: {{{gap}}};}}{{#if:{{{style|}}}|{{{style}}}}}&amp;quot;&lt;br /&gt;
}}&amp;gt;&lt;br /&gt;
{{#if:{{{content|}}}|{{{content}}}&amp;lt;/div&amp;gt;}}&lt;br /&gt;
&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
	&amp;quot;params&amp;quot;: {&lt;br /&gt;
		&amp;quot;rules&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Rules to highlight changes between lines.&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;boolean&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;class&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Class of table.&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;colwidth&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Column width (is recommended to use relative metric &#039;em&#039;, eg, colwidth = 20em)&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;gap&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Gap between columns (is recommended to use relative metric &#039;em&#039;)&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;style&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Column style&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;content&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Here is where the list to be displayed is declared.&amp;quot;,&lt;br /&gt;
			&amp;quot;required&amp;quot;: true&lt;br /&gt;
		}&lt;br /&gt;
	},&lt;br /&gt;
	&amp;quot;description&amp;quot;: &amp;quot;This template does something?............................................&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30796</id>
		<title>Template:Eq</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Eq&amp;diff=30796"/>
		<updated>2026-01-12T13:03:38Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{#autoeq:{{{1}}}|{{{label|}}}}}{{#set:Has formalized expressions=true}}&lt;br /&gt;
&amp;lt;includeonly&amp;gt;&amp;lt;div class=&amp;quot;div-col {{#ifeq:{{{rules|}}}|yes|div-col-rules}} {{{class|}}}&amp;quot; &lt;br /&gt;
{{#if:{{{colwidth|}}}{{{gap|}}}{{{style|}}}|&lt;br /&gt;
style=&amp;quot;{{#if:{{{colwidth|}}}|column-width: {{{colwidth}}};}}{{#if:{{{gap|}}}|column-gap: {{{gap}}};}}{{#if:{{{style|}}}|{{{style}}}}}&amp;quot;&lt;br /&gt;
}}&amp;gt;&lt;br /&gt;
{{#if:{{{content|}}}|{{{content}}}&amp;lt;/div&amp;gt;}}&lt;br /&gt;
&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
	&amp;quot;params&amp;quot;: {&lt;br /&gt;
		&amp;quot;rules&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Rules to highlight changes between lines.&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;boolean&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;class&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Class of table.&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;colwidth&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Column width (is recommended to use relative metric &#039;em&#039;, eg, colwidth = 20em)&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;gap&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Gap between columns (is recommended to use relative metric &#039;em&#039;)&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;style&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Column style&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;content&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Here is where the list to be displayed is declared.&amp;quot;,&lt;br /&gt;
			&amp;quot;required&amp;quot;: true&lt;br /&gt;
		}&lt;br /&gt;
	},&lt;br /&gt;
	&amp;quot;description&amp;quot;: &amp;quot;Divides in columns the given content using a column width indicated as a parameter (colwidth). &amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Thomas_Holzberger&amp;diff=30756</id>
		<title>User:Thomas Holzberger</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Thomas_Holzberger&amp;diff=30756"/>
		<updated>2026-01-08T13:00:59Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Given name=Thomas&lt;br /&gt;
|Family name=Holzberger&lt;br /&gt;
|Image filename=IMG-20250521-WA0011.jpg&lt;br /&gt;
|Sex=Male&lt;br /&gt;
|Country=Germany&lt;br /&gt;
|Highest academic degree=High School Diploma (secondary)&lt;br /&gt;
|Current academic institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Pursued academic degree=Bachelor’s or Equivalent Level (Level 6)&lt;br /&gt;
|Field of pursued degree=Aerospace Engineering&lt;br /&gt;
|input language=EN (English)&lt;br /&gt;
}}&lt;br /&gt;
[[Category:Person]]&lt;br /&gt;
Aerospace student from 10/2023 to now.&lt;br /&gt;
&lt;br /&gt;
My interests include gamedesign and possibly developement processes in general.&lt;br /&gt;
&lt;br /&gt;
Current projects include organising a small boardgame-design-group, and creating a boardgame with playermade cards.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=30752</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=30752"/>
		<updated>2026-01-08T12:40:36Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn&#039;t adapt to the purpose of conceptual clarification. &lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used. &lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant&#039;s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First, the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implications of moral relativism.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Introduction:&amp;lt;/u&amp;gt; Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Definition:&amp;lt;/u&amp;gt; Will define the term moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Assumptions, Argument, Proof:&amp;lt;/u&amp;gt; Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Implications:&amp;lt;/u&amp;gt; Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Connections:&amp;lt;/u&amp;gt; Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Introduction:&#039;&#039;&#039; ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humans have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is to maximize overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behavior of other people and themselves, giving rise to a deontological perspective to view morals mostly as rules of behavior. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as &amp;quot;Killing is this bad, Stealing is half as bad&amp;quot; but it could also mean, that in a certain situation one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgement given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue being seen as a good trait that is dependent on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared. &lt;br /&gt;
&lt;br /&gt;
The three systems that were pointed out are also usually interpreted in such a way that if an agent were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agent. This happens indirectly for virtue ethics and deontontology. In virtue ethics for example because a certain amount of altruism might be seen as virtuous, in deontology because the most typical rules of behaviour like &amp;quot;you shall not lie&amp;quot;, &amp;quot;you shall not murder&amp;quot;,... are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Definition:&#039;&#039;&#039; ==&lt;br /&gt;
The concept of moral relativism seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals but just different ideas of sometimes different people across space and time. &lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, the concept is brought up that relativist ideas can hardly be challenged based on their own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention, that moral relativism does not inherently challenge the idea of objective truth. Rather it states that there is no knowable true morality. This also means, that a morality derived from knowledge of some true morality is not true, according to Plato&#039;s definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea with &amp;quot;Moral Relativism&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Gowans, Chris, &amp;quot;Moral Relativism&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; is to label any moral idea as equally true or true dependend on things like religion, culture, region or person. This idea does however suffer from the so called quantification problem. This is the problem of needing to choose some standard for what has priority. Culture, religion, region, the opinion of the affected person or the acting person. There are infinite possibilities and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.    &lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.   &lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;    &lt;br /&gt;
&lt;br /&gt;
In the following sections Assumptions, Argument and Proof one of the reasonings for moral relativism will be shown to explain the position.   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Assumptions:&#039;&#039;&#039; ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someones ideas on morality will be called a &amp;quot;complete moral system&amp;quot;. A complete moral system assignes everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything exept for actions the value 0 since it doesn&#039;t matter. In practice, most people believe in some kind of mixture of moral systems such as the ones mentioned above.  &lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true complete moral system and any knowable morality from any perspective will be disproven under the following conditions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system.    &lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Argument:&#039;&#039;&#039; ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelyhood of potentially conflicting moral systems. Therefore they have the same likelyhood of beeing true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore it can not be true together with any other complete moral system. At an infinite ammount of potentially true complete moral systems they are all in conflict with the infinite ammount of others. Therefore they all have the likelyhood [1/infinite]=0&lt;br /&gt;
 &lt;br /&gt;
Any complete moral system having the likelyhood 0 just disproves the concept of a true complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.       &lt;br /&gt;
 &lt;br /&gt;
One of the assuptions above is that a moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also aplies to hints at a true moral judgement. There is no information to hint at any true moral judegement.             &lt;br /&gt;
 &lt;br /&gt;
It is also noteworthy, that two systems are not even universally comparable:             &lt;br /&gt;
 &lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals e.g. for deciding on whether to do something, the observer would have to take the resulting moral values of into account, proportional to the likelyhood of the corresponding moral rule or system.             &lt;br /&gt;
&lt;br /&gt;
But is it is allways possible to assign a number as a moral value? Some system might for example only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system so the numbers should be scaled relative to each other to reflect that. But for example every rulebreak could be worth -1 or -19 or -0.123 or ... So actually every moral system only compares things relative to each other and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem, wheter two moral systems are even compareable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors for example to equal out the sum of all given moral judgement between the two moral systems, or to equal out one specific judgement. In this case you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Proof:&#039;&#039;&#039; ====&lt;br /&gt;
In the following, it will be evaluated wheter the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true, must be true, since the complete moral system already represents the whole of some hyperthetical persons ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption, that the system itself doesn&#039;t transfer information to the existing world is not proven, but lies in the definition of morality used here. The Definition as an abstract judgement of good and bad, that has no effect on the world in and of itself. With this definition there is no reason to assume, that any impact of entities, people, gods seemingly reacting to their percieved morality points to the nature of a true moral system. E.g. even a Karma-system might just have an inverse effekt like punishing people for the thing truely good. And these entities if real could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Implikations:&#039;&#039;&#039; ==&lt;br /&gt;
The conclusion of the argument obove raises a problem of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times and places showing different moral beliefs.  &lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are in reality connections of various beliefs and concepts. Because even though no one can have knowlege of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but for example as something that is necessary to avoid some divine punishment or archive some reward. That the punishment should be avoided or the reward archived is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agend likely match the moral system, since the agend does not want to be punished and instead rewarded. The popular perception of morals does however sometimes include a conflict of the interest of the moral system and the interests of the agend. After all, people are sometimes willing do do things they percieve as immoral.   &lt;br /&gt;
&lt;br /&gt;
This shows, that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]]&amp;lt;ref&amp;gt;[[Draft:Moral]]&amp;lt;/ref&amp;gt; defined as a normative system which is based on society’s values and ethical norms is not focused on the individual, but rather an entire society or group. If there are other reasons for beliefing a moral system true, like &amp;quot;it regulates my society best&amp;quot; or &amp;quot;it serves the puplic good&amp;quot;, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, puplic or altruistic goals and       &lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise, but also as a tool, when there is a conflict of moral ideas. Because the realisation that another position is equaly not just grounded in knowable truth as your own is neccesary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas the partys in conflict might not share. But, the partys in question might not always be in noticable conflict and might be able to have interaction that advances both partys moral goals.       &lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the ammount of such beliefs, since the ammount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some ammount of purpose that is often different from their short term interests, so the ammount of such Axioms won`t reach 0 for any person if the attempt to reach future happines at the cost of imidiate happines is a moral desicion instead of a instinctual desicion. Also the quest for truth and knowledge might be a moral goal so the need to have 0 axioms in moral thinking would be rather paradoxical.      &lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Connections:&#039;&#039;&#039; ==&lt;br /&gt;
So from Platos perspective on knowledge, moral beliefs would fall into the cathegory of sensible knowlege, specifically in the cathegory of faithfull beliefs, since they are taken for granted without proof. The concept of moral relativity basicly has the role of pointing that out. In the popular perception of morality, there can however be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be atributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called &amp;quot;What is there&amp;quot;, What to do&amp;quot; and &amp;quot;How to know&amp;quot;. After all, &amp;quot;You can not know what to do&amp;quot; is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of &amp;quot;what to do&amp;quot; will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kants Moral Philosophy claims to overcome moral relativism, with the cathegorical Imperative that you should only act such that you could want all rational beeings obeying to a universal law consistent with the action.&amp;lt;ref&amp;gt;Johnson, Robert and Adam Cureton, &amp;quot;Kant’s Moral Philosophy&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;  But there is the question of what universal law you would want all rational beeings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realistically phrasable, or somehow generalised then it will not always align with the agents moral opinion so it would for these cases not be rational to act according to the cathegorical Imperative. Unless of course the Agend tries to avoid sanctions or reap rewards when others observe his actions, or when the agend tries to strenghen the observed precedent of moral behaviour. Or when the agend lacks capacity to know whether own actions are observed or whether there is a better action. There will still be different moral systems held by different people and groups, but for any given group where people are sufficiently able to observe, sanction or reward each others actions (if only by satisfying altruism or showing sympathy), there might theoretically be an ideal set of moral rules that optimises the average fullfillment of everyones interests as soon as it seems established in the minds of most groupmembers. If this is the applied definition of morality, then there is an optimal set of rules but this optimal set of rules would still be different from group to group and change over time. The group of &amp;quot;rational beeings&amp;quot; might be incomprehensible and not suited as an efficient reference point. Meanwhile, the ammount of groups that might be definable is higher than the ammount of rational beeings. So the agend, beeing in multiple groups at once, would still have to deal with different moral systems and the concept of moral relativism still exists. &lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of &amp;quot;Cultural Diversity&amp;quot;. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. However, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs for example in terms of maximising happyness for a maximal ammount of people. But it is statistically sure, that some cultures lead to archiving their own or others moral ideas better. That does not mean, that it is realistic or usefull to find a &amp;quot;better&amp;quot; culture, it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact the consistent comparison and subsequent exchange of culture is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation, that your perspective is not the only valid one is the basis for the described cultural exchange or also just the positive interaction between individuals of different cultures and even ideological backrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=30702</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=30702"/>
		<updated>2026-01-06T00:25:04Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn&#039;t adapt to the purpose of conceptual clarification. &lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used. &lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant&#039;s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First, the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implications of moral relativism.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Introduction:&amp;lt;/u&amp;gt; Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Definition:&amp;lt;/u&amp;gt; Will define the term moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Assumptions, Argument, Proof:&amp;lt;/u&amp;gt; Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Implications:&amp;lt;/u&amp;gt; Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Connections:&amp;lt;/u&amp;gt; Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Introduction:&#039;&#039;&#039; ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humans have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is to maximize overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behavior of other people and themselves, giving rise to a deontological perspective to view morals mostly as rules of behavior. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as &amp;quot;Killing is this bad, Stealing is half as bad&amp;quot; but it could also mean, that in a certain situation one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgement given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue being seen as a good trait that is dependent on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared. &lt;br /&gt;
&lt;br /&gt;
The three systems that were pointed out are also usually interpreted in such a way that if an agent were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agent. This happens indirectly for virtue ethics and deontontology. In virtue ethics for example because a certain amount of altruism might be seen as virtuous, in deontology because the most typical rules of behaviour like &amp;quot;you shall not lie&amp;quot;, &amp;quot;you shall not murder&amp;quot;,... are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Definition:&#039;&#039;&#039; ==&lt;br /&gt;
The concept of moral relativism seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals but just different ideas of sometimes different people across space and time. &lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, the concept is brought up that relativist ideas can hardly be challenged based on their own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention, that moral relativism does not inherently challenge the idea of objective truth. Rather it states that there is no knowable true morality. This also means, that a morality derived from knowledge of some true morality is not true, according to Plato&#039;s definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea with &amp;quot;Moral Relativism&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Gowans, Chris, &amp;quot;Moral Relativism&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; is to label any moral idea as equally true or true dependend on things like religion, culture, region or person. This idea does however suffer from the so called quantification problem. This is the problem of needing to choose some standard for what has priority. Culture, religion, region, the opinion of the affected person or the acting person. There are infinite possibilities and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.    &lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.   &lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;    &lt;br /&gt;
&lt;br /&gt;
In the following sections Assumptions, Argument and Proof one of the reasonings for moral relativism will be shown to explain the position.   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Assumptions:&#039;&#039;&#039; ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someones ideas on morality will be called a &amp;quot;complete moral system&amp;quot;. A complete moral system assignes everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything exept for actions the value 0 since it doesn&#039;t matter. In practice, most people believe in some kind of mixture of moral systems such as the ones mentioned above.  &lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true complete moral system and any knowable morality from any perspective will be disproven under the following conditions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system.    &lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Argument:&#039;&#039;&#039; ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelyhood of potentially conflicting moral systems. Therefore they have the same likelyhood of beeing true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore it can not be true together with any other complete moral system. At an infinite ammount of potentially true complete moral systems they are all in conflict with the infinite ammount of others. Therefore they all have the likelyhood [1/infinite]=0&lt;br /&gt;
 &lt;br /&gt;
Any complete moral system having the likelyhood 0 just disproves the concept of a true complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.       &lt;br /&gt;
 &lt;br /&gt;
One of the assuptions above is that a moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also aplies to hints at a true moral judgement. There is no information to hint at any true moral judegement.             &lt;br /&gt;
 &lt;br /&gt;
It is also noteworthy, that two systems are not even universally comparable:             &lt;br /&gt;
 &lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals e.g. for deciding on whether to do something, the observer would have to take the resulting moral values of into account, proportional to the likelyhood of the corresponding moral rule or system.             &lt;br /&gt;
&lt;br /&gt;
But is it is allways possible to assign a number as a moral value? Some system might for example only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system so the numbers should be scaled relative to each other to reflect that. But for example every rulebreak could be worth -1 or -19 or -0.123 or ... So actually every moral system only compares things relative to each other and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem, wheter two moral systems are even compareable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors for example to equal out the sum of all given moral judgement between the two moral systems, or to equal out one specific judgement. In this case you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Proof:&#039;&#039;&#039; ====&lt;br /&gt;
In the following, it will be evaluated wheter the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true, must be true, since the complete moral system already represents the whole of some hyperthetical persons ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption, that the system itself doesn&#039;t transfer information to the existing world is not proven, but lies in the definition of morality used here. The Definition as an abstract judgement of good and bad, that has no effect on the world in and of itself. With this definition there is no reason to assume, that any impact of entities, people, gods seemingly reacting to their percieved morality points to the nature of a true moral system. E.g. even a Karma-system might just have an inverse effekt like punishing people for the thing truely good. And these entities if real could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Implikations:&#039;&#039;&#039; ==&lt;br /&gt;
The conclusion of the argument obove raises a problem of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times and places showing different moral beliefs.  &lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are in reality connections of various beliefs and concepts. Because even though no one can have knowlege of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but for example as something that is necessary to avoid some divine punishment or archive some reward. That the punishment should be avoided or the reward archived is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agend likely match the moral system, since the agend does not want to be punished and instead rewarded. The popular perception of morals does however sometimes include a conflict of the interest of the moral system and the interests of the agend. After all, people are sometimes willing do do things they percieve as immoral.   &lt;br /&gt;
&lt;br /&gt;
This shows, that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]]&amp;lt;ref&amp;gt;[[Draft:Moral]]&amp;lt;/ref&amp;gt; defined as a normative system which is based on society’s values and ethical norms is not focused on the individual, but rather an entire society or group. If there are other reasons for beliefing a moral system true, like &amp;quot;it regulates my society best&amp;quot; or &amp;quot;it serves the puplic good&amp;quot;, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, puplic or altruistic goals and       &lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise, but also as a tool, when there is a conflict of moral ideas. Because the realisation that another position is equaly not just grounded in knowable truth as your own is neccesary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas the partys in conflict might not share. But, the partys in question might not always be in noticable conflict and might be able to have interaction that advances both partys moral goals.       &lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the ammount of such beliefs, since the ammount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some ammount of purpose that is often different from their short term interests, so the ammount of such Axioms won`t reach 0 for any person if the attempt to reach future happines at the cost of imidiate happines is a moral desicion instead of a instinctual desicion. Also the quest for truth and knowledge might be a moral goal so the need to have 0 axioms in moral thinking would be rather paradoxical.      &lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Connections:&#039;&#039;&#039; ==&lt;br /&gt;
So from Platos perspective on knowledge, moral beliefs would fall into the cathegory of sensible knowlege, specifically in the cathegory of faithfull beliefs, since they are taken for granted without proof. The concept of moral relativity basicly has the role of pointing that out. In the popular perception of morality, there can however be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be atributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called &amp;quot;What is there&amp;quot;, What to do&amp;quot; and &amp;quot;How to know&amp;quot;. After all, &amp;quot;You can not know what to do&amp;quot; is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of &amp;quot;what to do&amp;quot; will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kants Moral Philosophy claims to overcome moral relativism, with the cathegorical Imperative that you should only act such that you could want all rational beeings obeying to a universal law consistent with the action.&amp;lt;ref&amp;gt;Johnson, Robert and Adam Cureton, &amp;quot;Kant’s Moral Philosophy&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;  But there is the question of what universal law you would want all rational beeings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realistically phrasable, or somehow generalised then it will not always align with the agents moral opinion so it would for these cases not be rational to act according to the cathegorical Imperative. Unless of course the Agend tries to avoid sanctions or reap rewards when others observe his actions, or when the agend tries to strenghen the observed precedent of moral behaviour. Or when the agend lacks capacity to know whether own actions are observed or whether there is a better action. There will still be different moral systems held by different people and groups, but for any given group where people are sufficiently able to observe, sanction or reward each others actions (if only by satisfying altruism or showing sympathy), there might theoretically be an ideal set of moral rules that optimises the average fullfillment of everyones interests as soon as it seems established in the minds of most groupmembers. If this is the applied definition of morality, then there is an optimal set of rules but this optimal set of rules would still be different from group to group and change over time. The group of &amp;quot;rational beeings&amp;quot; might be incomprehensible and not suited as an efficient reference point. Meanwhile, the ammount of groups that might be definable is higher than the ammount of rational beeings. So the agend would still have to deal with different moral systems and the concept of moral relativism still exists. &lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of &amp;quot;Cultural Diversity&amp;quot;. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. However, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs for example in terms of maximising happyness for a maximal ammount of people. But it is statistically sure, that some cultures lead to archiving their own or others moral ideas better. That does not mean, that it is realistic or usefull to find a &amp;quot;better&amp;quot; culture, it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact the consistent comparison and subsequent exchange of culture is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation, that your perspective is not the only valid one is the basis for the described cultural exchange or also just the positive interaction between individuals of different cultures and even ideological backrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=30701</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=30701"/>
		<updated>2026-01-06T00:21:08Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn&#039;t adapt to the purpose of conceptual clarification. &lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used. &lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant&#039;s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First, the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implications of moral relativism.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Introduction:&amp;lt;/u&amp;gt; Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Definition:&amp;lt;/u&amp;gt; Will define the term moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Assumptions, Argument, Proof:&amp;lt;/u&amp;gt; Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Implications:&amp;lt;/u&amp;gt; Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Connections:&amp;lt;/u&amp;gt; Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Introduction:&#039;&#039;&#039; ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humans have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is to maximize overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behavior of other people and themselves, giving rise to a deontological perspective to view morals mostly as rules of behavior. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as &amp;quot;Killing is this bad, Stealing is half as bad&amp;quot; but it could also mean, that in a certain situation one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgement given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue being seen as a good trait that is dependent on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared. &lt;br /&gt;
&lt;br /&gt;
The three systems that were pointed out are also usually interpreted in such a way that if an agent were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agent. This happens indirectly for virtue ethics and deontontology. In virtue ethics for example because a certain amount of altruism might be seen as virtuous, in deontology because the most typical rules of behaviour like &amp;quot;you shall not lie&amp;quot;, &amp;quot;you shall not murder&amp;quot;,... are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Definition:&#039;&#039;&#039; ==&lt;br /&gt;
The concept of moral relativism seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals but just different ideas of sometimes different people across space and time. &lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, the concept is brought up that relativist ideas can hardly be challenged based on their own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention, that moral relativism does not inherently challenge the idea of objective truth. Rather it states that there is no knowable true morality. This also means, that a morality derived from knowledge of some true morality is not true, according to Plato&#039;s definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea with &amp;quot;Moral Relativism&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Gowans, Chris, &amp;quot;Moral Relativism&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; is to label any moral idea as equally true or true dependend on things like religion, culture, region or person. This idea does however suffer from the so called quantification problem. This is the problem of needing to choose some standard for what has priority. Culture, religion, region, the opinion of the affected person or the acting person. There are infinite possibilities and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.    &lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.   &lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;    &lt;br /&gt;
&lt;br /&gt;
In the following sections Assumptions, Argument and Proof one of the reasonings for moral relativism will be shown to explain the position.   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Assumptions:&#039;&#039;&#039; ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someones ideas on morality will be called a &amp;quot;complete moral system&amp;quot;. A complete moral system assignes everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything exept for actions the value 0 since it doesn&#039;t matter. In practice, most people believe in some kind of mixture of moral systems such as the ones mentioned above.  &lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true complete moral system and any knowable morality from any perspective will be disproven under the following conditions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system.    &lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Argument:&#039;&#039;&#039; ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelyhood of potentially conflicting moral systems. Therefore they have the same likelyhood of beeing true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore it can not be true together with any other complete moral system. At an infinite ammount of potentially true complete moral systems they are all in conflict with the infinite ammount of others. Therefore they all have the likelyhood [1/infinite]=0&lt;br /&gt;
 &lt;br /&gt;
Any complete moral system having the likelyhood 0 just disproves the concept of a true complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.       &lt;br /&gt;
 &lt;br /&gt;
One of the assuptions above is that a moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also aplies to hints at a true moral judgement. There is no information to hint at any true moral judegement.             &lt;br /&gt;
 &lt;br /&gt;
It is also noteworthy, that two systems are not even universally comparable:             &lt;br /&gt;
 &lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals e.g. for deciding on whether to do something, the observer would have to take the resulting moral values of into account, proportional to the likelyhood of the corresponding moral rule or system.             &lt;br /&gt;
&lt;br /&gt;
But is it is allways possible to assign a number as a moral value? Some system might for example only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system so the numbers should be scaled relative to each other to reflect that. But for example every rulebreak could be worth -1 or -19 or -0.123 or ... So actually every moral system only compares things relative to each other and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem, wheter two moral systems are even compareable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors for example to equal out the sum of all given moral judgement between the two moral systems, or to equal out one specific judgement. In this case you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Proof:&#039;&#039;&#039; ====&lt;br /&gt;
In the following, it will be evaluated wheter the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true, must be true, since the complete moral system already represents the whole of some hyperthetical persons ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption, that the system itself doesn&#039;t transfer information to the existing world is not proven, but lies in the definition of morality used here. The Definition as an abstract judgement of good and bad, that has no effect on the world in and of itself. With this definition there is no reason to assume, that any impact of entities, people, gods seemingly reacting to their percieved morality points to the nature of a true moral system. E.g. even a Karma-system might just have an inverse effekt like punishing people for the thing truely good. And these entities if real could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Implikations:&#039;&#039;&#039; ==&lt;br /&gt;
The conclusion of the argument obove raises a problem of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times and places showing different moral beliefs.  &lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are in reality connections of various beliefs and concepts. Because even though no one can have knowlege of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but for example as something that is necessary to avoid some divine punishment or archive some reward. That the punishment should be avoided or the reward archived is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agend likely match the moral system, since the agend does not want to be punished and instead rewarded. The popular perception of morals does however sometimes include a conflict of the interest of the moral system and the interests of the agend. After all, people are sometimes willing do do things they percieve as immoral.   &lt;br /&gt;
&lt;br /&gt;
This shows, that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]]&amp;lt;ref&amp;gt;[[Draft:Moral]]&amp;lt;/ref&amp;gt; defined as a normative system which is based on society’s values and ethical norms is not focused on the individual, but rather an entire society or group. If there are other reasons for beliefing a moral system true, like &amp;quot;it regulates my society best&amp;quot; or &amp;quot;it serves the puplic good&amp;quot;, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, puplic or altruistic goals and       &lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise, but also as a tool, when there is a conflict of moral ideas. Because the realisation that another position is equaly not just grounded in knowable truth as your own is neccesary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas the partys in conflict might not share. But, the partys in question might not always be in noticable conflict and might be able to have interaction that advances both partys moral goals.       &lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the ammount of such beliefs, since the ammount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some ammount of purpose that is often different from their short term interests, so the ammount of such Axioms won`t reach 0 for any person if the attempt to reach future happines at the cost of imidiate happines is a moral desicion instead of a instinctual desicion. Also the quest for truth and knowledge might be a moral goal so the need to have 0 axioms in moral thinking would be rather paradoxical.      &lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Connections:&#039;&#039;&#039; ==&lt;br /&gt;
So from Platos perspective on knowledge, moral beliefs would fall into the cathegory of sensible knowlege, specifically in the cathegory of faithfull beliefs, since they are taken for granted without proof. The concept of moral relativity basicly has the role of pointing that out. In the popular perception of morality, there can however be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be atributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called &amp;quot;What is there&amp;quot;, What to do&amp;quot; and &amp;quot;How to know&amp;quot;. After all, &amp;quot;You can not know what to do&amp;quot; is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of &amp;quot;what to do&amp;quot; will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kants Moral Philosophy claims to overcome moral relativism, with the cathegorical Imperative that you should only act such that you could want all rational beeings obeying to a universal law consistent with the action.&amp;lt;ref&amp;gt;Johnson, Robert and Adam Cureton, &amp;quot;Kant’s Moral Philosophy&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;  But there is the question of what universal law you would want all rational beeings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realistically phrasable, or somehow generalised then it will not always align with the agents moral opinion so it would for these cases not be rational to act according to the cathegorical Imperative. Unless of course the Agend tries to avoid sanctions or reap rewards when others observe his actions, or when the agend tries to strenghen the observed precedent of moral behaviour. Or when the agend lacks capacity to know whether own actions are observed or whether there is a better action. There will still be different moral systems held by different people and groups, but for any given group where people are sufficiently able to observe, sanction or reward each others actions (if only by satisfying altruism or showing sympathy), there might theoretically be an ideal set of moral rules that optimises the average fullfillment of everyones interests as soon as it seems established in the minds of most groupmembers. If this is the applied definition of morality, then there is an optimal set of rules but this optimal set of rules would still be different from group to group and change over time. The group of &amp;quot;rational beeings&amp;quot; might be incomprehensible and not suited as an efficient reference point. Meanwhile, the ammount of groups that might be definable is higher than the ammount of rational beeings. So the agend would still have to deal with different moral systems and the concept of moral relativism still exists. &lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of &amp;quot;Cultural Diversity&amp;quot;. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. This is where the interpretation of moral relativism and cultural diversity might differ from the conclusions outlined in this text. After all, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs for example in terms of maximising happynes for a maximal ammount of people. But it is statistically sure, that some cultures lead to archiving their own or others moral ideas better. That does not mean, that it is realistic to find a &amp;quot;better&amp;quot; culture, it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact the consistent comparison and subsequent exchange of culture is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation, that your perspective is not the only valid one is the basis for the described cultural exchange or also just the positive interaction between individuals of different cultures and even ideological backrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=30700</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=30700"/>
		<updated>2026-01-06T00:15:57Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: Added possible definition about how morality works, based on Kants cathegorical imperative.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn&#039;t adapt to the purpose of conceptual clarification. &lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used. &lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant&#039;s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First, the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implications of moral relativism.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Introduction:&amp;lt;/u&amp;gt; Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Definition:&amp;lt;/u&amp;gt; Will define the term moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Assumptions, Argument, Proof:&amp;lt;/u&amp;gt; Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Implications:&amp;lt;/u&amp;gt; Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Connections:&amp;lt;/u&amp;gt; Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Introduction:&#039;&#039;&#039; ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humans have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is to maximize overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behavior of other people and themselves, giving rise to a deontological perspective to view morals mostly as rules of behavior. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as &amp;quot;Killing is this bad, Stealing is half as bad&amp;quot; but it could also mean, that in a certain situation one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgement given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue being seen as a good trait that is dependent on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared. &lt;br /&gt;
&lt;br /&gt;
The three systems that were pointed out are also usually interpreted in such a way that if an agent were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agent. This happens indirectly for virtue ethics and deontontology. In virtue ethics for example because a certain amount of altruism might be seen as virtuous, in deontology because the most typical rules of behaviour like &amp;quot;you shall not lie&amp;quot;, &amp;quot;you shall not murder&amp;quot;,... are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Definition:&#039;&#039;&#039; ==&lt;br /&gt;
The concept of moral relativism seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals but just different ideas of sometimes different people across space and time. &lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, the concept is brought up that relativist ideas can hardly be challenged based on their own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention, that moral relativism does not inherently challenge the idea of objective truth. Rather it states that there is no knowable true morality. This also means, that a morality derived from knowledge of some true morality is not true, according to Plato&#039;s definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea with &amp;quot;Moral Relativism&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Gowans, Chris, &amp;quot;Moral Relativism&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; is to label any moral idea as equally true or true dependend on things like religion, culture, region or person. This idea does however suffer from the so called quantification problem. This is the problem of needing to choose some standard for what has priority. Culture, religion, region, the opinion of the affected person or the acting person. There are infinite possibilities and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.    &lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.   &lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;    &lt;br /&gt;
&lt;br /&gt;
In the following sections Assumptions, Argument and Proof one of the reasonings for moral relativism will be shown to explain the position.   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Assumptions:&#039;&#039;&#039; ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someones ideas on morality will be called a &amp;quot;complete moral system&amp;quot;. A complete moral system assignes everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything exept for actions the value 0 since it doesn&#039;t matter. In practice, most people believe in some kind of mixture of moral systems such as the ones mentioned above.  &lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true complete moral system and any knowable morality from any perspective will be disproven under the following conditions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system.    &lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Argument:&#039;&#039;&#039; ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelyhood of potentially conflicting moral systems. Therefore they have the same likelyhood of beeing true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore it can not be true together with any other complete moral system. At an infinite ammount of potentially true complete moral systems they are all in conflict with the infinite ammount of others. Therefore they all have the likelyhood [1/infinite]=0&lt;br /&gt;
 &lt;br /&gt;
Any complete moral system having the likelyhood 0 just disproves the concept of a true complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.       &lt;br /&gt;
 &lt;br /&gt;
One of the assuptions above is that a moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also aplies to hints at a true moral judgement. There is no information to hint at any true moral judegement.             &lt;br /&gt;
 &lt;br /&gt;
It is also noteworthy, that two systems are not even universally comparable:             &lt;br /&gt;
 &lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals e.g. for deciding on whether to do something, the observer would have to take the resulting moral values of into account, proportional to the likelyhood of the corresponding moral rule or system.             &lt;br /&gt;
&lt;br /&gt;
But is it is allways possible to assign a number as a moral value? Some system might for example only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system so the numbers should be scaled relative to each other to reflect that. But for example every rulebreak could be worth -1 or -19 or -0.123 or ... So actually every moral system only compares things relative to each other and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem, wheter two moral systems are even compareable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors for example to equal out the sum of all given moral judgement between the two moral systems, or to equal out one specific judgement. In this case you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Proof:&#039;&#039;&#039; ====&lt;br /&gt;
In the following, it will be evaluated wheter the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true, must be true, since the complete moral system already represents the whole of some hyperthetical persons ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption, that the system itself doesn&#039;t transfer information to the existing world is not proven, but lies in the definition of morality used here. The Definition as an abstract judgement of good and bad, that has no effect on the world in and of itself. With this definition there is no reason to assume, that any impact of entities, people, gods seemingly reacting to their percieved morality points to the nature of a true moral system. E.g. even a Karma-system might just have an inverse effekt like punishing people for the thing truely good. And these entities if real could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Implikations:&#039;&#039;&#039; ==&lt;br /&gt;
The conclusion of the argument obove raises a problem of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times and places showing different moral beliefs.  &lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are in reality connections of various beliefs and concepts. Because even though no one can have knowlege of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but for example as something that is necessary to avoid some divine punishment or archive some reward. That the punishment should be avoided or the reward archived is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agend likely match the moral system, since the agend does not want to be punished and instead rewarded. The popular perception of morals does however sometimes include a conflict of the interest of the moral system and the interests of the agend. After all, people are sometimes willing do do things they percieve as immoral.   &lt;br /&gt;
&lt;br /&gt;
This shows, that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]]&amp;lt;ref&amp;gt;[[Draft:Moral]]&amp;lt;/ref&amp;gt; defined as a normative system which is based on society’s values and ethical norms is not focused on the individual, but rather an entire society or group. If there are other reasons for beliefing a moral system true, like &amp;quot;it regulates my society best&amp;quot; or &amp;quot;it serves the puplic good&amp;quot;, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, puplic or altruistic goals and       &lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise, but also as a tool, when there is a conflict of moral ideas. Because the realisation that another position is equaly not just grounded in knowable truth as your own is neccesary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas the partys in conflict might not share. But, the partys in question might not always be in noticable conflict and might be able to have interaction that advances both partys moral goals.       &lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the ammount of such beliefs, since the ammount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some ammount of purpose that is often different from their short term interests, so the ammount of such Axioms won`t reach 0 for any person if the attempt to reach future happines at the cost of imidiate happines is a moral desicion instead of a instinctual desicion. Also the quest for truth and knowledge might be a moral goal so the need to have 0 axioms in moral thinking would be rather paradoxical.      &lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Connections:&#039;&#039;&#039; ==&lt;br /&gt;
So from Platos perspective on knowledge, moral beliefs would fall into the cathegory of sensible knowlege, specifically in the cathegory of faithfull beliefs, since they are taken for granted without proof. The concept of moral relativity basicly has the role of pointing that out. In the popular perception of morality, there can however be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be atributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called &amp;quot;What is there&amp;quot;, What to do&amp;quot; and &amp;quot;How to know&amp;quot;. After all, &amp;quot;You can not know what to do&amp;quot; is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of &amp;quot;what to do&amp;quot; will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kants Moral Philosophy claims to overcome moral relativism, with the cathegorical Imperative that you should only act such that you could want all rational beeings obeying to a universal law consistent with the action.&amp;lt;ref&amp;gt;Johnson, Robert and Adam Cureton, &amp;quot;Kant’s Moral Philosophy&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;  But there is the question of what universal law you would want all rational beeings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realistically phrasable, or somehow generalised then it will not always align with the agents moral opinion so it would for these cases not be rational to act according to the cathegorical Imperative. Unless of course the Agend tries to avoid sanctions or reap rewards when others observe his actions, or when the agend tries to strenghen the observed precedent of moral behaviour. Or when the agend lacks capacity to know whether own actions are observed or whether there is a better action. There will still be different moral systems held by different people and groups, but for any given group where people are sufficiently able to observe, sanction or reward each others actions (if only by satisfying altruism or showing sympathy), there might theoretically be an ideal set of moral rules that optimises the average fullfillment of everyones interests as soon as it seems established in the minds of most groupmembers. If this is the applied definition of morality, then there is an optimal set of rules but this optimal set of rules would still be different from group to group and change over time. The group of &amp;quot;rational beeings&amp;quot; might be incomprehensible and not suited as an efficient reference point. Meanwhile, the ammount of groups that might be definable is higher than the ammont of rational beeings. So the agend would still have to deal with different moral systems and the concept of moral relativism still exists. &lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of &amp;quot;Cultural Diversity&amp;quot;. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. This is where the interpretation of moral relativism and cultural diversity might differ from the conclusions outlined in this text. After all, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs for example in terms of maximising happynes for a maximal ammount of people. But it is statistically sure, that some cultures lead to archiving their own or others moral ideas better. That does not mean, that it is realistic to find a &amp;quot;better&amp;quot; culture, it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact the consistent comparison and subsequent exchange of culture is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation, that your perspective is not the only valid one is the basis for the described cultural exchange or also just the positive interaction between individuals of different cultures and even ideological backrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=28893</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=28893"/>
		<updated>2025-12-22T22:58:53Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn&#039;t adapt to the purpose of conceptual clarification. &lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used. &lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant&#039;s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First, the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implications of moral relativism.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Introduction:&amp;lt;/u&amp;gt; Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Definition:&amp;lt;/u&amp;gt; Will define the term moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Assumptions, Argument, Proof:&amp;lt;/u&amp;gt; Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Implications:&amp;lt;/u&amp;gt; Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Connections:&amp;lt;/u&amp;gt; Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Introduction:&#039;&#039;&#039; ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humans have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is to maximize overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behavior of other people and themselves, giving rise to a deontological perspective to view morals mostly as rules of behavior. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as &amp;quot;Killing is this bad, Stealing is half as bad&amp;quot; but it could also mean, that in a certain situation one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgement given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue being seen as a good trait that is dependent on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared. &lt;br /&gt;
&lt;br /&gt;
The three systems that were pointed out are also usually interpreted in such a way that if an agent were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agent. This happens indirectly for virtue ethics and deontontology. In virtue ethics for example because a certain amount of altruism might be seen as virtuous, in deontology because the most typical rules of behaviour like &amp;quot;you shall not lie&amp;quot;, &amp;quot;you shall not murder&amp;quot;,... are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Definition:&#039;&#039;&#039; ==&lt;br /&gt;
The concept of moral relativism seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals but just different ideas of sometimes different people across space and time. &lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, the concept is brought up that relativist ideas can hardly be challenged based on their own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention, that moral relativism does not inherently challenge the idea of objective truth. Rather it states that there is no knowable true morality. This also means, that a morality derived from knowledge of some true morality is not true, according to Plato&#039;s definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea with &amp;quot;Moral Relativism&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Gowans, Chris, &amp;quot;Moral Relativism&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; is to label any moral idea as equally true or true dependend on things like religion, culture, region or person. This idea does however suffer from the so called quantification problem. This is the problem of needing to choose some standard for what has priority. Culture, religion, region, the opinion of the affected person or the acting person. There are infinite possibilities and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.    &lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.   &lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;    &lt;br /&gt;
&lt;br /&gt;
In the following sections Assumptions, Argument and Proof one of the reasonings for moral relativism will be shown to explain the position.   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Assumptions:&#039;&#039;&#039; ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someones ideas on morality will be called a &amp;quot;complete moral system&amp;quot;. A complete moral system assignes everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything exept for actions the value 0 since it doesn&#039;t matter. In practice, most people believe in some kind of mixture of moral systems such as the ones mentioned above.  &lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true complete moral system and any knowable morality from any perspective will be disproven under the following conditions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system.    &lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Argument:&#039;&#039;&#039; ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelyhood of potentially conflicting moral systems. Therefore they have the same likelyhood of beeing true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore it can not be true together with any other complete moral system. At an infinite ammount of potentially true complete moral systems they are all in conflict with the infinite ammount of others. Therefore they all have the likelyhood [1/infinite]=0&lt;br /&gt;
 &lt;br /&gt;
Any complete moral system having the likelyhood 0 just disproves the concept of a true complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.       &lt;br /&gt;
 &lt;br /&gt;
One of the assuptions above is that a moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also aplies to hints at a true moral judgement. There is no information to hint at any true moral judegement.             &lt;br /&gt;
 &lt;br /&gt;
It is also noteworthy, that two systems are not even universally comparable:             &lt;br /&gt;
 &lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals e.g. for deciding on whether to do something, the observer would have to take the resulting moral values of into account, proportional to the likelyhood of the corresponding moral rule or system.             &lt;br /&gt;
&lt;br /&gt;
But is it is allways possible to assign a number as a moral value? Some system might for example only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system so the numbers should be scaled relative to each other to reflect that. But for example every rulebreak could be worth -1 or -19 or -0.123 or ... So actually every moral system only compares things relative to each other and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem, wheter two moral systems are even compareable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors for example to equal out the sum of all given moral judgement between the two moral systems, or to equal out one specific judgement. In this case you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Proof:&#039;&#039;&#039; ====&lt;br /&gt;
In the following, it will be evaluated wheter the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true, must be true, since the complete moral system already represents the whole of some hyperthetical persons ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption, that the system itself doesn&#039;t transfer information to the existing world is not proven, but lies in the definition of morality used here. The Definition as an abstract judgement of good and bad, that has no effect on the world in and of itself. With this definition there is no reason to assume, that any impact of entities, people, gods seemingly reacting to their percieved morality points to the nature of a true moral system. E.g. even a Karma-system might just have an inverse effekt like punishing people for the thing truely good. And these entities if real could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Implikations:&#039;&#039;&#039; ==&lt;br /&gt;
The conclusion of the argument obove raises a problem of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times and places showing different moral beliefs.  &lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are in reality connections of various beliefs and concepts. Because even though no one can have knowlege of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but for example as something that is necessary to avoid some divine punishment or archive some reward. That the punishment should be avoided or the reward archived is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agend likely match the moral system, since the agend does not want to be punished and instead rewarded. The popular perception of morals does however sometimes include a conflict of the interest of the moral system and the interests of the agend. After all, people are sometimes willing do do things they percieve as immoral.   &lt;br /&gt;
&lt;br /&gt;
This shows, that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]]&amp;lt;ref&amp;gt;[[Draft:Moral]]&amp;lt;/ref&amp;gt; defined as a normative system which is based on society’s values and ethical norms is not focused on the individual, but rather an entire society or group. If there are other reasons for beliefing a moral system true, like &amp;quot;it regulates my society best&amp;quot; or &amp;quot;it serves the puplic good&amp;quot;, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, puplic or altruistic goals and       &lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise, but also as a tool, when there is a conflict of moral ideas. Because the realisation that another position is equaly not just grounded in knowable truth as your own is neccesary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas the partys in conflict might not share. But, the partys in question might not always be in noticable conflict and might be able to have interaction that advances both partys moral goals.       &lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the ammount of such beliefs, since the ammount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some ammount of purpose that is often different from their short term interests, so the ammount of such Axioms won`t reach 0 for any person if the attempt to reach future happines at the cost of imidiate happines is a moral desicion instead of a instinctual desicion. Also the quest for truth and knowledge might be a moral goal so the need to have 0 axioms in moral thinking would be rather paradoxical.      &lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Connections:&#039;&#039;&#039; ==&lt;br /&gt;
So from Platos perspective on knowledge, moral beliefs would fall into the cathegory of sensible knowlege, specifically in the cathegory of faithfull beliefs, since they are taken for granted without proof. The concept of moral relativity basicly has the role of pointing that out. In the popular perception of morality, there can however be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be atributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called &amp;quot;What is there&amp;quot;, What to do&amp;quot; and &amp;quot;How to know&amp;quot;. After all, &amp;quot;You can not know what to do&amp;quot; is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of &amp;quot;what to do&amp;quot; will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kants Moral Philosophy claims to overcome moral relativism, with the cathegorical Imperative that you should only act such that you could want all rational beeings obeying to a universal law consistent with the action.&amp;lt;ref&amp;gt;Johnson, Robert and Adam Cureton, &amp;quot;Kant’s Moral Philosophy&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;  But there is the question of what universal law you would want all rational beeings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realistically phrasable, or somehow generalised then it will not always align with the agents moral opinion so it would for these cases not be rational to act according to the cathegorical Imperative. &lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of &amp;quot;Cultural Diversity&amp;quot;. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. This is where the interpretation of moral relativism and cultural diversity might differ from the conclusions outlined in this text. After all, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs for example in terms of maximising happynes for a maximal ammount of people. But it is statistically sure, that some cultures lead to archiving their own or others moral ideas better. That does not mean, that it is realistic to find a &amp;quot;better&amp;quot; culture, it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact the consistent comparison and subsequent exchange of culture is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation, that your perspective is not the only valid one is the basis for the described cultural exchange or also just the positive interaction between individuals of different cultures and even ideological backrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Template:Div_col&amp;diff=28881</id>
		<title>Template:Div col</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Template:Div_col&amp;diff=28881"/>
		<updated>2025-12-22T20:50:39Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: *declared&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;includeonly&amp;gt;&amp;lt;div class=&amp;quot;div-col {{#ifeq:{{{rules|}}}|yes|div-col-rules}} {{{class|}}}&amp;quot; &lt;br /&gt;
{{#if:{{{colwidth|}}}{{{gap|}}}{{{style|}}}|&lt;br /&gt;
style=&amp;quot;{{#if:{{{colwidth|}}}|column-width: {{{colwidth}}};}}{{#if:{{{gap|}}}|column-gap: {{{gap}}};}}{{#if:{{{style|}}}|{{{style}}}}}&amp;quot;&lt;br /&gt;
}}&amp;gt;&lt;br /&gt;
{{#if:{{{content|}}}|{{{content}}}&amp;lt;/div&amp;gt;}}&lt;br /&gt;
&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
&amp;lt;templatedata&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
	&amp;quot;params&amp;quot;: {&lt;br /&gt;
		&amp;quot;rules&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Rules to highlight changes between lines.&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;boolean&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;class&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Class of table.&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;colwidth&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Column width (is recommended to use relative metric &#039;em&#039;, eg, colwidth = 20em)&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;gap&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Gap between columns (is recommended to use relative metric &#039;em&#039;)&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;style&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Column style&amp;quot;,&lt;br /&gt;
			&amp;quot;type&amp;quot;: &amp;quot;string&amp;quot;&lt;br /&gt;
		},&lt;br /&gt;
		&amp;quot;content&amp;quot;: {&lt;br /&gt;
			&amp;quot;description&amp;quot;: &amp;quot;Here is where the list to be displayed is declared.&amp;quot;,&lt;br /&gt;
			&amp;quot;required&amp;quot;: true&lt;br /&gt;
		}&lt;br /&gt;
	},&lt;br /&gt;
	&amp;quot;description&amp;quot;: &amp;quot;Divides in columns the given content using a column width indicated as a parameter (colwidth). &amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/templatedata&amp;gt;&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=28376</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=28376"/>
		<updated>2025-12-13T13:27:55Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn&#039;t adapt to the purpose of conceptual clarification. &lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used. &lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant&#039;s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First, the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implications of moral relativism.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Introduction:&amp;lt;/u&amp;gt; Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Definition:&amp;lt;/u&amp;gt; Will define the term moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Assumptions, Argument, Proof:&amp;lt;/u&amp;gt; Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Implications:&amp;lt;/u&amp;gt; Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Connections:&amp;lt;/u&amp;gt; Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Introduction:&#039;&#039;&#039; ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humans have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is to maximize overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behavior of other people and themselves, giving rise to a deontological perspective to view morals mostly as rules of behavior. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as &amp;quot;Killing is this bad, Stealing is half as bad&amp;quot; but it could also mean, that in a certain situation one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgement given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue being seen as a good trait that is dependent on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared. &lt;br /&gt;
&lt;br /&gt;
The three systems that were pointed out are also usually interpreted in such a way that if an agent were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agent. This happens indirectly for virtue ethics and deontontology. In virtue ethics for example because a certain amount of altruism might be seen as virtuous, in deontology because the most typical rules of behaviour like &amp;quot;you shall not lie&amp;quot;, &amp;quot;you shall not murder&amp;quot;,... are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Definition:&#039;&#039;&#039; ==&lt;br /&gt;
The concept of moral relativism seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals but just different ideas of sometimes different people across space and time. &lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt;, the concept is brought up that relativist ideas can hardly be challenged based on their own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention, that moral relativism does not inherently challenge the idea of objective truth. Rather it states that there is no knowable true morality. This also means, that a morality derived from knowledge of some true morality is not true, according to Plato&#039;s definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea with &amp;quot;Moral Relativism&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Gowans, Chris, &amp;quot;Moral Relativism&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; is to label any moral idea as equally true or true dependend on things like religion, culture, region or person. This idea does however suffer from the so called quantification problem. This is the problem of needing to choose some standard for what has priority. Culture, religion, region, the opinion of the affected person or the acting person. There are infinite possibilities and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.    &lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.   &lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;    &lt;br /&gt;
&lt;br /&gt;
In the following sections Assumptions, Argument and Proof one of the reasonings for moral relativism will be shown to explain the position.   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Assumptions:&#039;&#039;&#039; ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someones ideas on morality will be called a &amp;quot;complete moral system&amp;quot;. A complete moral system assignes everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything exept for actions the value 0 since it doesn&#039;t matter. In practice, most people believe in some kind of mixture of moral systems such as the ones mentioned above.  &lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true complete moral system and any knowable morality from any perspective will be disproven under the following conditions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system.    &lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Argument:&#039;&#039;&#039; ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelyhood of potentially conflicting moral systems. Therefore they have the same likelyhood of beeing true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore it can not be true together with any other complete moral system. At an infinite ammount of potentially true complete moral systems they are all in conflict with the infinite ammount of others. Therefore they all have the likelyhood [1/infinite]=0&lt;br /&gt;
 &lt;br /&gt;
Any complete moral system having the likelyhood 0 just disproves the concept of a true complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.       &lt;br /&gt;
 &lt;br /&gt;
One of the assuptions above is that a moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also aplies to hints at a true moral judgement. There is no information to hint at any true moral judegement.             &lt;br /&gt;
 &lt;br /&gt;
It is also noteworthy, that two systems are not even universally comparable:             &lt;br /&gt;
 &lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals e.g. for deciding on whether to do something, the observer would have to take the resulting moral values of into account, proportional to the likelyhood of the corresponding moral rule or system.             &lt;br /&gt;
&lt;br /&gt;
But is it is allways possible to assign a number as a moral value? Some system might for example only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system so the numbers should be scaled relative to each other to reflect that. But for example every rulebreak could be worth -1 or -19 or -0.123 or ... So actually every moral system only compares things relative to each other and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem, wheter two moral systems are even compareable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors for example to equal out the sum of all given moral judgement between the two moral systems, or to equal out one specific judgement. In this case you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Proof:&#039;&#039;&#039; ====&lt;br /&gt;
In the following, it will be evaluated wheter the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true, must be true, since the complete moral system already represents the whole of some hyperthetical persons ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption, that the system itself doesn&#039;t transfer information to the existing world is not proven, but lies in the definition of morality used here. The Definition as an abstract judgement of good and bad, that has no effect on the world in and of itself. With this definition there is no reason to assume, that any impact of entities, people, gods seemingly reacting to their percieved morality points to the nature of a true moral system. E.g. even a Karma-system might just have an inverse effekt like punishing people for the thing truely good. And these entities if real could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Implikations:&#039;&#039;&#039; ==&lt;br /&gt;
The conclusion of the argument obove raises a problem of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times and places showing different moral beliefs.  &lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are in reality connections of various beliefs and concepts. Because even though no one can have knowlege of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but for example as something that is necessary to avoid some divine punishment or archive some reward. That the punishment should be avoided or the reward archived is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agend likely match the moral system, since the agend does not want to be punished and instead rewarded. The popular perception of morals does however sometimes include a conflict of the interest of the moral system and the interests of the agend. After all, people are sometimes willing do do things they percieve as immoral.   &lt;br /&gt;
&lt;br /&gt;
This shows, that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]]&amp;lt;ref&amp;gt;[[Draft:Moral]]&amp;lt;/ref&amp;gt; defined as a normative system which is based on society’s values and ethical norms is not focused on the individual, but rather an entire society or group. If there are other reasons for beliefing a moral system true, like &amp;quot;it regulates my society best&amp;quot; or &amp;quot;it serves the puplic good&amp;quot;, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, puplic or altruistic goals and       &lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise, but also as a tool, when there is a conflict of moral ideas. Because the realisation that another position is equaly not just grounded in knowable truth as your own is neccesary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas the partys in conflict might not share. But, the partys in question might not always be in noticable conflict and might be able to have interaction that advances both partys moral goals.       &lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the ammount of such beliefs, since the ammount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some ammount of purpose that is often different from their short term interests, so the ammount of such Axioms won`t reach 0 for any person if the attempt to reach future happines at the cost of imidiate happines is a moral desicion instead of a instinctual desicion. Also the quest for truth and knowledge might be a moral goal so the need to have 0 axioms in moral thinking would be rather paradoxical.      &lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Connections:&#039;&#039;&#039; ==&lt;br /&gt;
So from Platos perspective on knowledge, moral beliefs would fall into the cathegory of sensible knowlege, specifically in the cathegory of faithfull beliefs, since they are taken for granted without proof. The concept of moral relativity basicly has the role of pointing that out. In the popular perception of morality, there can however be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be atributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called &amp;quot;What is there&amp;quot;, What to do&amp;quot; and &amp;quot;How to know&amp;quot;. After all, &amp;quot;You can not know what to do&amp;quot; is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of &amp;quot;what to do&amp;quot; will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kants Moral Philosophy claims to overcome moral relativism, with the cathegorical Imperative that you should only act such that you could want all rational beeings obeying to a universal law consistent with the action.&amp;lt;ref&amp;gt;Johnson, Robert and Adam Cureton, &amp;quot;Kant’s Moral Philosophy&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;  But there is the question of what universal law you would want all rational beeings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realitically phrasable, or somehow generalised then it will not always align with the agents moral opinion so it would for these cases not be rational to act according to the cathegorical Imperative. &lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of &amp;quot;Cultural Diversity&amp;quot;. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. This is where the interpretation of moral relativism and cultural diversity might differ from the conclusions outlined in this text. After all, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs for example in terms of maximising happynes for a maximal ammount of people. But it is statistically sure, that some cultures lead to archiving their own or others moral ideas better. That does not mean, that it is realistic to find a &amp;quot;better&amp;quot; culture, it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact the consistent comparison and subsequent exchange of culture is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation, that your perspective is not the only valid one is the basis for the described cultural exchange or also just the positive interaction between individuals of different cultures and even ideological backrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=28375</id>
		<title>Draft:Moral relativism</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Moral_relativism&amp;diff=28375"/>
		<updated>2025-12-13T12:31:54Z</updated>

		<summary type="html">&lt;p&gt;Thomas Holzberger: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{DISPLAYTITLE:Draft:Moral relativism}}{{Head JTP|Authors=[[User:Thomas Holzberger]]|Observations=* The structure and formats of the article doesn&#039;t adapt to the purpose of conceptual clarification. &lt;br /&gt;
* Though the interplay with internal references is important, external relevant references should also be used. &lt;br /&gt;
* The moral relativism can be predicated not only from the utilitarian approach.&lt;br /&gt;
* The quantification problem is not addressed.&lt;br /&gt;
* Since Kant&#039;s ethics allegegly overcomes relativism his arguments should be confronted.}}&lt;br /&gt;
&lt;br /&gt;
This article will elaborate on the concept of moral relativism. First the term will be explained with the help of other concepts of morality and ethics. Then it will be shown further with an argument for moral relativism, followed by some implikations of moral relativism.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Introduction:&amp;lt;/u&amp;gt; Will introduce the topic of moral relativism relating to morality&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Definition:&amp;lt;/u&amp;gt; Will define the term of moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Assumptions, Argument, Proof:&amp;lt;/u&amp;gt; Will outline the argument for unknowable morality and therefore moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Implications:&amp;lt;/u&amp;gt; Will explain the implications of moral relativism&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Connections:&amp;lt;/u&amp;gt; Will show some connections between the topic and relevant philosophers and concepts.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Introduction:&#039;&#039;&#039; ==&lt;br /&gt;
The concepts of good and bad, core ideas in most moral systems, have existed before written philosophy itself.&lt;br /&gt;
&lt;br /&gt;
After all, most animals and all humas have interests, taking the form of distinguishing how good and bad some things are. This way of phrasing it resembles a utilitarian approach to morals, where the goal is, to maximise overall goodness that can usually be phrased as the sum of goodness minus the sum of badness.&lt;br /&gt;
&lt;br /&gt;
The interests people have are often associated with the behaviour of other people and themselves, giving rise to a deontologigal perspective to view morals mostly as rules of behaviour. These rules do not need to be mere restrictions based on bad behaviour, but can also label some actions as good. This can be phrased as quantified results, such as &amp;quot;Killing is this bad, Stealing is half as bad&amp;quot;, but it could also mean, that in a certain situation one action is good, while others are bad.&lt;br /&gt;
&lt;br /&gt;
Other moral systems can be phrased differently.&lt;br /&gt;
&lt;br /&gt;
For example, as explained in the article [[Moral]], the judgement given by virtue ethics are based on the reasoning for an action. If the reasoning is virtuous, the the act is morally good. Whether an act is virtuous depends on doing the right thing for the right reason, with a virtue beeing seen as a good trait, that is dependend on acting in a proper way between two extremes.&lt;br /&gt;
&lt;br /&gt;
However, there are some similarities between these systems:&lt;br /&gt;
&lt;br /&gt;
All coherent moral systems can be rephrased in distinctions between good and bad, results that can often be quantified or compared. &lt;br /&gt;
&lt;br /&gt;
The three systems, that were pointed out, are also usually interpreted in such a way that if an agend were to act according to the moral system, it would also serve the good of other people, as opposed to just phrasing the interests of the agend. This happens indirectly for vitue ethics and dentontology. In virtue ethics for example because a certain ammount of altruism might be seen as virtuous, in deontology because the most typical rules of behaviour like &amp;quot;you shall not lie&amp;quot;, &amp;quot;you shall not murder&amp;quot;,... are set up in such a way.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Definition:&#039;&#039;&#039; ==&lt;br /&gt;
The concept of moral relativism, seems to challenge all other perspectives of morality since it states, that there is no true or false set of morals, but just different ideas of sometimes different people across space and time. &lt;br /&gt;
&lt;br /&gt;
According to the article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt; ,moral relativity &amp;quot;is the doctrine that there is no one true moral system, binding on all people at all times”.In the same article [[Draft:Ethics]]&amp;lt;ref&amp;gt;[[Draft:Ethics]]&amp;lt;/ref&amp;gt; the concept is brought up that relativist ideas can hardly be challenged based on its own principle that everything is questionable, because no moral argument or position can be right or wrong. It is however important to mention, that moral relativism does not inherently challenge the idea of objective truth. Rather it states that there is no knowable true morality. This also means, that a morality derived from knowledge of some true morality is not true, according to Platos definition of knowledge as a justified true belief. But values and therefore ethics and morality will stay relevant for longer or as long as humans do, so a typical conclusion of moral relativism or a popular connected idea of with &amp;quot;Moral Relativism&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Gowans, Chris, &amp;quot;Moral Relativism&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Spring 2021 Edition), Edward N. Zalta (ed.), URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/spr2021/entries/moral-relativism/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt; is to label any moral idea as equally true or true dependend on things like religion, culture, region or person. This idea does however suffer from the so called Quantification problem. This is the problem of needing to choose some standard for what has priority. Culture, Religion, Region, the opinion of the affected person or the acting person. There are infinite possibilities and it again needs a moral standard, something chosen by arbitrary or intuitive principles to select one possibility or combine them somehow.    &lt;br /&gt;
&lt;br /&gt;
It would be flawed to see moral relativism as universally true yet a moral position itself, since moral relativity could then be seen as one of these non- /equally- /relatively true positions. So moral relativism must claim not to be a moral position itself or deny the concept of objective truth.   &lt;br /&gt;
&lt;br /&gt;
So a different connected idea can just be that moral justifications and ideas tend to be more different between times, groups and spaces and more similar within them.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;    &lt;br /&gt;
&lt;br /&gt;
In the following sections Assumptions, Argument and Proof one of the reasonings for moral relativism will be shown to explain the position.   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Assumptions:&#039;&#039;&#039; ====&lt;br /&gt;
For the purpose of this text, a moral system that includes the entirety of someones ideas on morality will be called a &amp;quot;complete moral system&amp;quot; A complete moral system assignes everything a moral value, even if this value is 0. A moral value can be phrased as how good or bad something is. Any kind of moral theory can be phrased in such a way. For example, a purely dentological moral system would give everything exept for actions the value 0 since it doesn&#039;t matter. In practice, most people believe in some kind of mixture of moral systems such as the ones mentioned above.  &lt;br /&gt;
&lt;br /&gt;
In the following, the concept of a true complete moral system and any knowable morality from any perspective will be disproven under the following assumptions:&lt;br /&gt;
&lt;br /&gt;
[The moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system.    &lt;br /&gt;
&lt;br /&gt;
A complete moral system is a moral system that holds the rule that no other additional moral system is true.]   &lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Argument:&#039;&#039;&#039; ====&lt;br /&gt;
Under the assumptions above, any existing observer does not have any information about the relative likelyhood of potentially conflicting moral systems. Therefore they have the same likelyhood of beeing true.&lt;br /&gt;
&lt;br /&gt;
Above a complete set of morals is defined as a set of morals that holds the rule that no other moral rule is true. Therefore it can not be true together with any other complete moral system. At an infinite ammount of potentially true complete moral systems they are all in conflict with the infinite ammount of others. Therefore they all have the likelyhood [1/infinite]=0&lt;br /&gt;
 &lt;br /&gt;
Any complete moral system having the likelyhood 0 just disproves the concept of a true complete moral system from any possible perspective. This seems to disprove true morals already, but it does not yet apply to individual moral values.       &lt;br /&gt;
 &lt;br /&gt;
One of the assuptions above is that a moral system itself doesn&#039;t transfer information to the existing world, from which one could conclude the nature of the moral system. So there is no information about true moral values or comparisons. Knowledge is based on information, so any true moral judgement is unknowable. This also aplies to hints at a true moral judgement. There is no information to hint at any true moral judegement.             &lt;br /&gt;
 &lt;br /&gt;
It is also noteworthy, that two systems are not even universally comparable:             &lt;br /&gt;
 &lt;br /&gt;
To maximise the default of moral value from a perspective of unsure morals e.g. for deciding on whether to do something, the observer would have to take the resulting moral values of into account, proportional to the likelyhood of the corresponding moral rule or system.             &lt;br /&gt;
&lt;br /&gt;
But is it is allways possible to assign a number as a moral value? Some system might for example only point out which actions are acceptable in any given situation. Then how do you scale the numbers you see as the moral values when there is only right or wrong? Different actions in different situations might pose different levels of good and bad in the moral system so the numbers should be scaled relative to each other to reflect that. But for example every rulebreak could be worth -1 or -19 or -0.123 or ... So actually every moral system only compares things relative to each other and any factor could be applied to all the resulting numbers without changing the moral system. This raises the problem, wheter two moral systems are even compareable. There could be various options when considering multiple moral systems. First of all, you could just apply some factors for example to equal out the sum of all given moral judgement between the two moral systems, or to equal out one specific judgement. In this case you are obviously valuing two systems against each other based on some arbitrary value. So different moral systems are not universally comparable. If they are not comparable, then no true morality can be deduced from the comparison.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;Proof:&#039;&#039;&#039; ====&lt;br /&gt;
In the following, it will be evaluated wheter the assumptions used for the argument generally hold true when it comes to universal morals that would apply to every person at every time:&lt;br /&gt;
&lt;br /&gt;
The assumption that a complete moral system is a moral system that holds the rule that no other additional moral system is true, must be true, since the complete moral system already represents the whole of some hyperthetical persons ideas on morality, so any truth deviating from that would make the idea wrong.&lt;br /&gt;
&lt;br /&gt;
The assumption, that the system itself doesn&#039;t transfer information to the existing world is not proven, but lies in the definition of morality used here. The Definition as an abstract judgement of good and bad, that has no effect on the world in and of itself. With this definition there is no reason to assume, that any impact of entities, people, gods seemingly reacting to their percieved morality points to the nature of a true moral system. E.g. even a Karma-system might just have an inverse effekt like punishing people for the thing truely good. And these entities if real could also not get any information about a true moral system since it has no effect on them under this definition.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Implikations:&#039;&#039;&#039; ==&lt;br /&gt;
The conclusion of the argument obove raises a problem of course. If any true morality is unknowable, the definition of morality seems useless. This is when moral relativity comes into play again.  After all, this concept resembles the reality of different people, times and places showing different moral beliefs.  &lt;br /&gt;
&lt;br /&gt;
These must always contain some perspective on good and bad, but they are in reality connections of various beliefs and concepts. Because even though no one can have knowlege of a true moral system, people can still carry the belief that a moral system is true. Religions can define morality not only as an abstract truth without effect on reality, but for example as something that is necessary to avoid some divine punishment or archive some reward. That the punishment should be avoided or the reward archived is also one of infinite abstract moral assumptions, but for a typical human who probably has this assumption, it is also a situation where the interests of the agend likely match the moral system, since the agend does not want to be punished and instead rewarded. The popular perception of morals does however sometimes include a conflict of the interest of the moral system and the interests of the agend. After all, people are sometimes willing do do things they percieve as immoral.   &lt;br /&gt;
&lt;br /&gt;
This shows, that Morality, in [[Draft:Moral|Draft:Mo]][[Draft:Moral|ral]]&amp;lt;ref&amp;gt;[[Draft:Moral]]&amp;lt;/ref&amp;gt; defined as a normative system which is based on society’s values and ethical norms is not focused on the individual, but rather an entire society or group. If there are other reasons for beliefing a moral system true, like &amp;quot;it regulates my society best&amp;quot; or &amp;quot;it serves the puplic good&amp;quot;, then might will not always be in the interest of the person to act according to the moral system since the interests of the person most likely differ from the moral system somehow. This reinforces the concept of morality as something benefiting some societal, puplic or altruistic goals and       &lt;br /&gt;
&lt;br /&gt;
The concept of moral relativism not only serves as a theoretical exercise, but also as a tool, when there is a conflict of moral ideas. Because the realisation that another position is equaly not just grounded in knowable truth as your own is neccesary to avoid conflict, over positions that can not be attacked by logic, because moral ideas are not only derived from logic but also from some axiomatic moral ideas the partys in conflict might not share. But, the partys in question might not always be in noticable conflict and might be able to have interaction that advances both partys moral goals.       &lt;br /&gt;
&lt;br /&gt;
By describing Moral beliefs as unfounded assumptions, moral relativity could also be interpreted as a call to minimise the ammount of such beliefs, since the ammount of Axioms should also generally be minimised in the quest for truth and knowledge. But every person acts with some ammount of purpose that is often different from their short term interests, so the ammount of such Axioms won`t reach 0 for any person if the attempt to reach future happines at the cost of imidiate happines is a moral desicion instead of a instinctual desicion. Also the quest for truth and knowledge might be a moral goal so the need to have 0 axioms in moral thinking would be rather paradoxical.      &lt;br /&gt;
&lt;br /&gt;
This is why Moral relativity is not typically a call to abandon moral concepts, but rather a framework to deal with the abundance of different moral theories and ideas. It is not a problem to have beliefs about morality, but it is also arbitrary which ones you have.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Connections:&#039;&#039;&#039; ==&lt;br /&gt;
So from Platos perspective on knowledge, moral beliefs would fall into the cathegory of sensible knowlege, specifically in the cathegory of faithfull beliefs, since they are taken for granted without proof. The concept of moral relativity basicly has the role of pointing that out. In the popular perception of morality, there can however be moral positions that are merely derived from other moral positions and reality. These positions can be challenged rationally, but they might be atributed to sensible knowledge as well.&lt;br /&gt;
&lt;br /&gt;
It also has the role of providing a rather comprehensive answer to one of the three pillars of philosophy. Those are typically called &amp;quot;What is there&amp;quot;, What to do&amp;quot; and &amp;quot;How to know&amp;quot;. After all, &amp;quot;You can not know what to do&amp;quot; is a message deducable from moral relativism. Granted you choose some assumptions about morality, you can obviously make conclusions from there, but the question of &amp;quot;what to do&amp;quot; will never have a universal answer.&lt;br /&gt;
&lt;br /&gt;
Kants Moral Philosophy claims to overcome moral relativism, with the cathegorical Imperative that you should only act such that you could want all rational beeings obeying to a universal law consistent with the action.&amp;lt;ref&amp;gt;Johnson, Robert and Adam Cureton, &amp;quot;Kant’s Moral Philosophy&amp;quot;, &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039; (Winter 2025 Edition), Edward N. Zalta &amp;amp; Uri Nodelman (eds.), forthcoming URL = &amp;lt;&amp;lt;nowiki&amp;gt;https://plato.stanford.edu/archives/win2025/entries/kant-moral/&amp;lt;/nowiki&amp;gt;&amp;gt;.&amp;lt;/ref&amp;gt;  But there is the question of what universal law you would want all rational beeings to obey. That would be a question answered differently from person to person, and even common types of morality could be implemented in this framework. For example a utalitarian aproach to morals can be achieved by acting in such a way that it maximises overall good, the rule you would want others to follow is to act the same. But if that universal law needs to be realitically phrasable, or somehow generalised then it will not always align with the agents moral opinion so it would for these cases not be rational to act according to the cathegorical Imperative. &lt;br /&gt;
&lt;br /&gt;
Moral Relativism is also connected to the idea of &amp;quot;Cultural Diversity&amp;quot;. According to [[Draft:Ethics]], the only premise and the final conclusion of that concept is that “no society’s beliefs about right and wrong are better than any other’s”. This is where the interpretation of moral relativism and cultural diversity might differ from the conclusions outlined in this text. After all, a lack of knowledge on universal true morality does not imply a lack of knowledge on the validity of conclusions you might draw from this universal true morality someone believes in. Many cultures have comparable beliefs for example in terms of maximising happynes for a maximal ammount of people. But it is statistically sure, that some cultures lead to archiving their own or others moral ideas better. That does not mean, that it is realistic to find a &amp;quot;better&amp;quot; culture, it is hard enough to define and find the moral ideas and the information about the cultures one would use to compare them. But when limited to a specific topic, a specific aspect of culture and a specific group of people applying their moral ideas, this can be an atemptable task that is performed rather frequently in reality. In fact the consistent comparison and subsequent exchange of culture is a major benefit of cultural interaction and diversity. But Moral relativity again plays the role of mediating acceptance between different viewpoints, since the realisation, that your perspective is not the only valid one is the basis for the described cultural exchange or also just the positive interaction between individuals of different cultures and even ideological backrounds.&lt;/div&gt;</summary>
		<author><name>Thomas Holzberger</name></author>
	</entry>
</feed>