<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.glossalab.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Leon+Zipfel</id>
	<title>glossaLAB - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.glossalab.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Leon+Zipfel"/>
	<link rel="alternate" type="text/html" href="https://www.glossalab.org/wiki/Special:Contributions/Leon_Zipfel"/>
	<updated>2026-05-12T03:13:41Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=30774</id>
		<title>Draft:Autonomous agent</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=30774"/>
		<updated>2026-01-09T11:51:46Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Proposal&lt;br /&gt;
|Was created on date=2025-12-23&lt;br /&gt;
|Belongs to clarus=Understanding Complexity&lt;br /&gt;
|Has author=Leon Zipfel&lt;br /&gt;
|Has publication status=glossaLAB:In review&lt;br /&gt;
}}&lt;br /&gt;
&#039;&#039;Leon Zipfel (2025). Autonomous agent, Understanding Complexity&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The idea of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; historically has its roots in the [[IESC:SYSTEMS THEORY|&#039;&#039;&#039;systems theory&#039;&#039;&#039;]] as well as &#039;&#039;&#039;cybernetics&#039;&#039;&#039;, where autonomous behavior was described in terms of feedback, regulation, and adaptation. However the term and formal concept of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; were later introduced and refined in &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;multi-agent systems&#039;&#039;&#039; research. In order to understand the concept of an autonomous agent, it is first necessary to take a closer look at the two underlying ideas that make up the concept.  The first step is to clarify the concept of an agent itself. This will be followed by an explanation of the term autonomy. Finally, it will be discussed whether combining these two ideas is as simple as it seems in order to arrive at a valid definition of an autonomous agent, or if it is not quite as simple as that. Additionally new tools such as large language models, smart home devices and interactive AI systems, will be inspected, in order to see if autonomous agents are part of our daily life, or just a topic that is discussed by scientist and researchers.  &lt;br /&gt;
&lt;br /&gt;
== Agent ==&lt;br /&gt;
The concept of an &#039;&#039;&#039;agent&#039;&#039;&#039; is described in detail in the conceptual clarification [[IESC:AGENT|&#039;&#039;&#039;agent&#039;&#039;&#039;]] by &#039;&#039;&#039;Charles François&#039;&#039;&#039; (2004)&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;François, C. (2004). &#039;&#039;International encyclopedia of systems and cybernetics&#039;&#039; (2nd ed.)&amp;lt;/ref&amp;gt;. Because it is essential to this article however, it will be shortly explained here as well.  &lt;br /&gt;
&lt;br /&gt;
There is no single, universally accepted definition of an &#039;&#039;&#039;agent&#039;&#039;&#039;, as the term is used across multiple disciplines, including &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039;, &#039;&#039;&#039;systems theory&#039;&#039;&#039;, and &#039;&#039;&#039;network science&#039;&#039;&#039;, each emphasizing distinct aspects of agency. Consequently, there are many different definitions of the term, all of which are valid within their respective fields of research. &lt;br /&gt;
&lt;br /&gt;
One widely cited general definition describes an &#039;&#039;&#039;agent&#039;&#039;&#039; as an entity that perceives its environment through sensors and acts upon that environment through actuators (Stuart J. Russell &amp;amp; Peter Norvig 2010)&amp;lt;ref&amp;gt;Russell, S. J., &amp;amp; Norvig, P. (2010). &#039;&#039;Artificial intelligence: A modern approach&#039;&#039; (3rd ed.). Prentice Hall.&amp;lt;/ref&amp;gt;. This formulation is intentionally broad and highlights the interaction between an &#039;&#039;&#039;agent&#039;&#039;&#039; and its environment, without imposing assumptions about internal structure or cognitive capabilities. &lt;br /&gt;
&lt;br /&gt;
From a &#039;&#039;&#039;multi-element systems&#039;&#039;&#039; view, &#039;&#039;&#039;agents&#039;&#039;&#039; may also be characterized as &#039;&#039;&#039;active elements within multi-element systems or networks&#039;&#039;&#039;, distinguished from passive components by its capacity to influence system states through its actions. Building on this view, &#039;&#039;&#039;ERCEAU&#039;&#039;&#039; and &#039;&#039;&#039;FERBER&#039;&#039;&#039; proposed a hierarchical classification of agents, ranging from &#039;&#039;&#039;reactive agents&#039;&#039;&#039; to &#039;&#039;&#039;intentional agents&#039;&#039;&#039; with explicit goals and plans. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;J. FERBER&#039;&#039;&#039; later proposes a more detailed definition with nine properties that can be fulfilled by an agent, such as possessing resources or being driven by a set of tendencies. An entity that complies with all nine of those properties can be described as an intelligent system. (Ferber, 1999)&amp;lt;ref&amp;gt;Ferber, J. (1999). &#039;&#039;Multi-agent systems: An introduction to distributed artificial intelligence&#039;&#039;. Addison-Wesley.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;B. HAYES-ROTH&#039;&#039;&#039; on the other hand does not have quite as many requirements to declare an &#039;&#039;&#039;agent&#039;&#039;&#039; as an &#039;&#039;&#039;intelligent agent&#039;&#039;&#039;. She states in order to be considered as such the &#039;&#039;&#039;agent&#039;&#039;&#039; must perform three functions continuously.&lt;br /&gt;
&lt;br /&gt;
1) precepting dynamic conditions in the environment &lt;br /&gt;
&lt;br /&gt;
2)acting to affect conditions in the environment &lt;br /&gt;
&lt;br /&gt;
3) interpreting perceptions, solving problems, drawing inferences and determining actions &lt;br /&gt;
&lt;br /&gt;
(B. Hayes-Roth,1992)&amp;lt;ref&amp;gt;Hayes-Roth, B. (1992). An architecture for adaptive intelligent systems. &#039;&#039;Artificial Intelligence&#039;&#039;, p.329–365.&amp;lt;/ref&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All of these definitions state that an &#039;&#039;&#039;agent&#039;&#039;&#039; must be in an environment, they must be able to receive information from and about this environment and they have to be able to act upon this environment. For the purpose of this conceptual clarification, this will be the minimum definition for an entity to be called an agent.&lt;br /&gt;
&lt;br /&gt;
While the research discussed so far primarily addressed agents as objects of scientific investigation, recent advances in artificial intelligence have made agent-like systems widely accessible. Tools such as Gemini, Chat GPT, Notebook LM and others are now used daily by a broad population. Based on the definitions discussed above, such systems can reasonably be described as agents. Whether they should also be regarded as &#039;&#039;&#039;autonomous agents&#039;&#039;&#039;, however, depends critically on how the term autonomy is understood and used. Addressing this question therefore requires a more detailed examination of the concept of autonomy itself.&lt;br /&gt;
&lt;br /&gt;
== Autonomy ==&lt;br /&gt;
In the conceptual clarification [[IESC:AUTONOMY|autonomy]] the concept is defined as &amp;quot;The capacity of a system to select and decide, within limits, its own behavior&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;. This definition is most commonly used in fields such as system theory. &lt;br /&gt;
&lt;br /&gt;
The concept was introduced by the French biologist &#039;&#039;&#039;P. VENDRYÈS&#039;&#039;&#039; in the early 1940s and represents one of the earliest systematic attempts to explain autonomous behaviour in biological and artificial systems.(P. Vendryes, 1942)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;VENDRYÈS&#039;&#039;&#039; conception of autonomy is fundamentally grounded in the system’s &#039;&#039;&#039;relative control over its relations with the environment&#039;&#039;&#039;. Autonomy is therefore not absolute or the same concept as independence; rather, it emerges from regulatory mechanisms that allow a system to manage environmental influences while preserving internal coherence.&lt;br /&gt;
&lt;br /&gt;
A central element of Vendryès’ theory is a &#039;&#039;&#039;probabilistic conception of time and choice&#039;&#039;&#039;. While the past is considered strictly fixed, the future is only in part determined and presents a limited set of possible outcomes. Autonomy is manifested in the system’s capacity to select one possibility out of  several possible ones. So the entity chooses one specific trajectory and excludes others. This view anticipates later developments in systems theory and complexity science, including probabilistic and chaotic dynamics. (P. Vendryes, 1942)&amp;lt;ref&amp;gt;Vendryès, P. (1942). &#039;&#039;Autonomie et mécanismes&#039;&#039;. Presses Universitaires de France.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Subsequent authors expanded this framework. &#039;&#039;&#039;Kenneth Berrien&#039;&#039;&#039; emphasized the analogy between human choice and probabilistic system outputs(K. Berrien,1968)&amp;lt;ref&amp;gt;Berrien, K. F. (1968). &#039;&#039;General and social systems&#039;&#039;. Rutgers University Press.&amp;lt;/ref&amp;gt;, while &#039;&#039;&#039;Robert H. Howe&#039;&#039;&#039; defined autonomy in 1975 as the unity of computation and construction, linking internal information processing with self-production&amp;lt;ref&amp;gt;Howe, R. H. (1975). Autonomy and self-regulation in complex systems. &#039;&#039;Systems Research&#039;&#039;, 20(2), 85–98.&amp;lt;/ref&amp;gt;. Finally, &#039;&#039;&#039;A. S. Iberall&#039;&#039;&#039; stressed the thermodynamic dimension of autonomy, arguing that autonomous systems must be understood as energy-processing engines.   (A. S. Iberall, 1973)&amp;lt;ref&amp;gt;Iberall, A. S. (1973). Toward a general science of viable systems. &#039;&#039;McGraw-Hill&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Overall, autonomy in systems theory, and other fields of research, implies a &#039;&#039;&#039;relational, graded, and energetically grounded property&#039;&#039;&#039;, arising from internal organization, regulation, and sustained interaction with a structured environment.&lt;br /&gt;
&lt;br /&gt;
== Autonomous Agent ==&lt;br /&gt;
Having clarified the concepts of &#039;&#039;agent&#039;&#039; and &#039;&#039;autonomy&#039;&#039; separately, it would appear straightforward to define an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; by combining these two notions. As with the concept of an agent itself, however, definitions of autonomous agents vary depending on the disciplinary context and the specific subject or field under investigation. Different fields emphasize different aspects of autonomy, purpose, and interaction with the environment, leading to multiple complementary definitions.&lt;br /&gt;
&lt;br /&gt;
One early and influential definition is provided by &#039;&#039;&#039;Jose C. Brustoloni&#039;&#039;&#039;, who characterizes autonomous agents as &#039;&#039;“systems capable of autonomous, purposeful action in the real world”&#039;&#039; (Brustoloni, 1991)&amp;lt;ref&amp;gt;Brustoloni, J. C. (1991). Autonomous agents: Characterization and requirements. &#039;&#039;Technical Report&#039;&#039;, University of Pittsburgh.&amp;lt;/ref&amp;gt;. This definition highlights two essential features: autonomy, understood as self-directed control of behavior, and purposefulness, referring to action directed toward goals or objectives. While it is a short definition, that is fairly easy to understand, it leaves open how such purpose is internally represented or realized.&lt;br /&gt;
&lt;br /&gt;
A more explicit account of the autonomous agent–environment relationship is given by &#039;&#039;&#039;Stan Franklin&#039;&#039;&#039; and &#039;&#039;&#039;Art Graesser&#039;&#039;&#039;, who define an autonomous agent as &#039;&#039;“a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future”&#039;&#039; (Franklin &amp;amp; Graesser, 1997, p. 25)&amp;lt;ref&amp;gt;Franklin, S., &amp;amp; Graesser, A. (1997). Is it an agent, or just a program? A taxonomy for autonomous agents. In J. P. Müller, M. J. Wooldridge, &amp;amp; N. R. Jennings (Eds.), &#039;&#039;Intelligent agents III&#039;&#039; (pp. 21–35). Springer.&amp;lt;/ref&amp;gt;. This definition emphasizes the &#039;&#039;&#039;situatedness&#039;&#039;&#039; of the agent, its &#039;&#039;&#039;temporal continuity&#039;&#039;&#039;, and the presence of an internally driven agenda that guides action beyond immediate perception and response behavior.&lt;br /&gt;
&lt;br /&gt;
Similarly, &#039;&#039;&#039;Pattie Maes&#039;&#039;&#039; describes autonomous agents as &#039;&#039;“computational systems that inhabit some complex, dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed”&#039;&#039; (Maes, 1995, p. 108)&amp;lt;ref&amp;gt;Maes, P. (1995). Artificial life meets entertainment: Life-like autonomous agents. &#039;&#039;Communications of the ACM&#039;&#039;, 38(11), 108–114. &amp;lt;/ref&amp;gt;. This formulation situates autonomous agents explicitly within &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;artificial life&#039;&#039;&#039; research, focusing on their operation in dynamic environments and their role in achieving goals set by the designer of the agent.&lt;br /&gt;
&lt;br /&gt;
Taken together, these definitions converge on a common understanding of autonomous agents as systems that are situated in an environment, capable of perceiving and acting upon that environment, and able to regulate their behavior over time in pursuit of internal objectives or agendas. While differing in emphasis—ranging from purposefulness and environmental embedding to computational realization—all definitions reflect the view that autonomy arises from an agent’s capacity to control its own actions within environmental constraints, rather than from external, continuous control. &lt;br /&gt;
&lt;br /&gt;
Previously it was stated that many AI-tools used today can be considered as agents, but in regard of the definitions that have been given for autonomous agents, it is clear to see that these tools do not fullfill the requirements of autonomy, and therefore can not be called autonmous agents. An example for an autonomous agent in our daily life is a thermostat used for heating or air conditioning in a house or room. It fulfills all the requirements that have been stated as essential:&lt;br /&gt;
&lt;br /&gt;
* it is Situated in an environment (the room or building)&lt;br /&gt;
* it perceives the environment (through temperature sensors)&lt;br /&gt;
* it can choose from different actions (heating, cooling or doing nothing)&lt;br /&gt;
* it takes action upon the environment (controlling heating or cooling systems)&lt;br /&gt;
* it is influenced by these actions in the future and they shape its own behavior in the future&lt;br /&gt;
* it regulates its behavior autonomously (maintaining target temperature)&lt;br /&gt;
* it operates without continuous human input (once it is configured)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As the example of a thermostat demonstrates, autonomous agents are not confined to theoretical models but are already part of everyday life and can be expected to become increasingly relevant in the years to come. They espacially have a huge potential in the research fields of artificial intelligence and computing, prompting further advances in those areas. &lt;br /&gt;
&lt;br /&gt;
== Statement on the useage of Artificial Intelligence (AI) ==&lt;br /&gt;
ChatGPT was used to support the editorial process by suggesting stylistic improvements and identifying relevant sections to include. It was not used as a primary author and did not generate the core content, arguments, or structure of the text.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references responsive=&amp;quot;0&amp;quot; /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=30773</id>
		<title>Draft:Autonomous agent</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=30773"/>
		<updated>2026-01-09T11:46:17Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Proposal&lt;br /&gt;
|Was created on date=2025-12-23&lt;br /&gt;
|Belongs to clarus=Understanding Complexity&lt;br /&gt;
|Has author=Leon Zipfel&lt;br /&gt;
|Has publication status=glossaLAB:In review&lt;br /&gt;
}}&lt;br /&gt;
&#039;&#039;Leon Zipfel (2025). Autonomous agent, Understanding Complexity&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The idea of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; historically has its roots in the [[IESC:SYSTEMS THEORY|&#039;&#039;&#039;systems theory&#039;&#039;&#039;]] as well as &#039;&#039;&#039;cybernetics&#039;&#039;&#039;, where autonomous behavior was described in terms of feedback, regulation, and adaptation. However the term and formal concept of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; were later introduced and refined in &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;multi-agent systems&#039;&#039;&#039; research. In order to understand the concept of an autonomous agent, it is first necessary to take a closer look at the two underlying ideas that make up the concept.  The first step is to clarify the concept of an agent itself. This will be followed by an explanation of the term autonomy. Finally, it will be discussed whether combining these two ideas is as simple as it seems in order to arrive at a valid definition of an autonomous agent, or if it is not quite as simple as that. Additionally new tools such as large language models, smart home devices and interactive AI systems, will be inspected, in order to see if autonomous agents are part of our daily life, or just a topic that is discussed by scientist and researchers.  &lt;br /&gt;
&lt;br /&gt;
== Agent ==&lt;br /&gt;
The concept of an &#039;&#039;&#039;agent&#039;&#039;&#039; is described in detail in the conceptual clarification [[IESC:AGENT|&#039;&#039;&#039;agent&#039;&#039;&#039;]] by &#039;&#039;&#039;Charles François&#039;&#039;&#039; (2004)&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;François, C. (2004). &#039;&#039;International encyclopedia of systems and cybernetics&#039;&#039; (2nd ed.)&amp;lt;/ref&amp;gt;. Because it is essential to this article however, it will be shortly explained here as well.  &lt;br /&gt;
&lt;br /&gt;
There is no single, universally accepted definition of an &#039;&#039;&#039;agent&#039;&#039;&#039;, as the term is used across multiple disciplines, including &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039;, &#039;&#039;&#039;systems theory&#039;&#039;&#039;, and &#039;&#039;&#039;network science&#039;&#039;&#039;, each emphasizing distinct aspects of agency. Consequently, there are many different definitions of the term, all of which are valid within their respective fields of research. &lt;br /&gt;
&lt;br /&gt;
One widely cited general definition describes an &#039;&#039;&#039;agent&#039;&#039;&#039; as an entity that perceives its environment through sensors and acts upon that environment through actuators (Stuart J. Russell &amp;amp; Peter Norvig 2010)&amp;lt;ref&amp;gt;Russell, S. J., &amp;amp; Norvig, P. (2010). &#039;&#039;Artificial intelligence: A modern approach&#039;&#039; (3rd ed.). Prentice Hall.&amp;lt;/ref&amp;gt;. This formulation is intentionally broad and highlights the interaction between an &#039;&#039;&#039;agent&#039;&#039;&#039; and its environment, without imposing assumptions about internal structure or cognitive capabilities. &lt;br /&gt;
&lt;br /&gt;
From a &#039;&#039;&#039;multi-element systems&#039;&#039;&#039; view, &#039;&#039;&#039;agents&#039;&#039;&#039; may also be characterized as &#039;&#039;&#039;active elements within multi-element systems or networks&#039;&#039;&#039;, distinguished from passive components by its capacity to influence system states through its actions. Building on this view, &#039;&#039;&#039;ERCEAU&#039;&#039;&#039; and &#039;&#039;&#039;FERBER&#039;&#039;&#039; proposed a hierarchical classification of agents, ranging from &#039;&#039;&#039;reactive agents&#039;&#039;&#039; to &#039;&#039;&#039;intentional agents&#039;&#039;&#039; with explicit goals and plans. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;J. FERBER&#039;&#039;&#039; later proposes a more detailed definition with nine properties that can be fulfilled by an agent, such as possessing resources or being driven by a set of tendencies. An entity that complies with all nine of those properties can be described as an intelligent system. (Ferber, 1999)&amp;lt;ref&amp;gt;Ferber, J. (1999). &#039;&#039;Multi-agent systems: An introduction to distributed artificial intelligence&#039;&#039;. Addison-Wesley.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;B. HAYES-ROTH&#039;&#039;&#039; on the other hand does not have quite as many requirements to declare an &#039;&#039;&#039;agent&#039;&#039;&#039; as an &#039;&#039;&#039;intelligent agent&#039;&#039;&#039;. She states in order to be considered as such the &#039;&#039;&#039;agent&#039;&#039;&#039; must perform three functions continuously.&lt;br /&gt;
&lt;br /&gt;
1) precepting dynamic conditions in the environment &lt;br /&gt;
&lt;br /&gt;
2)acting to affect conditions in the environment &lt;br /&gt;
&lt;br /&gt;
3) interpreting perceptions, solving problems, drawing inferences and determining actions &lt;br /&gt;
&lt;br /&gt;
(B. Hayes-Roth,1992)&amp;lt;ref&amp;gt;Hayes-Roth, B. (1992). An architecture for adaptive intelligent systems. &#039;&#039;Artificial Intelligence&#039;&#039;, p.329–365.&amp;lt;/ref&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All of these definitions state that an &#039;&#039;&#039;agent&#039;&#039;&#039; must be in an environment, they must be able to receive information from and about this environment and they have to be able to act upon this environment. For the purpose of this conceptual clarification, this will be the minimum definition for an entity to be called an agent.&lt;br /&gt;
&lt;br /&gt;
While the research discussed so far primarily addressed agents as objects of scientific investigation, recent advances in artificial intelligence have made agent-like systems widely accessible. Tools such as Gemini, Chat GPT, Notebook LM and others are now used daily by a broad population. Based on the definitions discussed above, such systems can reasonably be described as agents. Whether they should also be regarded as &#039;&#039;&#039;autonomous agents&#039;&#039;&#039;, however, depends critically on how the term autonomy is understood and used. Addressing this question therefore requires a more detailed examination of the concept of autonomy itself.&lt;br /&gt;
&lt;br /&gt;
== Autonomy ==&lt;br /&gt;
In the conceptual clarification [[IESC:AUTONOMY|autonomy]] the concept is defined as &amp;quot;The capacity of a system to select and decide, within limits, its own behavior&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;. This definition is most commonly used in fields such as system theory. &lt;br /&gt;
&lt;br /&gt;
The concept was introduced by the French biologist &#039;&#039;&#039;P. VENDRYÈS&#039;&#039;&#039; in the early 1940s and represents one of the earliest systematic attempts to explain autonomous behaviour in biological and artificial systems.(P. Vendryes, 1942)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;VENDRYÈS&#039;&#039;&#039; conception of autonomy is fundamentally grounded in the system’s &#039;&#039;&#039;relative control over its relations with the environment&#039;&#039;&#039;. Autonomy is therefore not absolute or the same concept as independence; rather, it emerges from regulatory mechanisms that allow a system to manage environmental influences while preserving internal coherence.&lt;br /&gt;
&lt;br /&gt;
A central element of Vendryès’ theory is a &#039;&#039;&#039;probabilistic conception of time and choice&#039;&#039;&#039;. While the past is considered strictly fixed, the future is only in part determined and presents a limited set of possible outcomes. Autonomy is manifested in the system’s capacity to select one possibility out of  several possible ones. So the entity chooses one specific trajectory and excludes others. This view anticipates later developments in systems theory and complexity science, including probabilistic and chaotic dynamics. (P. Vendryes, 1942)&amp;lt;ref&amp;gt;Vendryès, P. (1942). &#039;&#039;Autonomie et mécanismes&#039;&#039;. Presses Universitaires de France.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Subsequent authors expanded this framework. &#039;&#039;&#039;Kenneth Berrien&#039;&#039;&#039; emphasized the analogy between human choice and probabilistic system outputs(K. Berrien,1968)&amp;lt;ref&amp;gt;Berrien, K. F. (1968). &#039;&#039;General and social systems&#039;&#039;. Rutgers University Press.&amp;lt;/ref&amp;gt;, while &#039;&#039;&#039;Robert H. Howe&#039;&#039;&#039; defined autonomy in 1975 as the unity of computation and construction, linking internal information processing with self-production&amp;lt;ref&amp;gt;Howe, R. H. (1975). Autonomy and self-regulation in complex systems. &#039;&#039;Systems Research&#039;&#039;, 20(2), 85–98.&amp;lt;/ref&amp;gt;. Finally, &#039;&#039;&#039;A. S. Iberall&#039;&#039;&#039; stressed the thermodynamic dimension of autonomy, arguing that autonomous systems must be understood as energy-processing engines.   (A. S. Iberall, 1973)&amp;lt;ref&amp;gt;Iberall, A. S. (1973). Toward a general science of viable systems. &#039;&#039;McGraw-Hill&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Overall, autonomy in systems theory, and other fields of research, implies a &#039;&#039;&#039;relational, graded, and energetically grounded property&#039;&#039;&#039;, arising from internal organization, regulation, and sustained interaction with a structured environment.&lt;br /&gt;
&lt;br /&gt;
== Autonomous Agent ==&lt;br /&gt;
Having clarified the concepts of &#039;&#039;agent&#039;&#039; and &#039;&#039;autonomy&#039;&#039; separately, it would appear straightforward to define an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; by combining these two notions. As with the concept of an agent itself, however, definitions of autonomous agents vary depending on the disciplinary context and the specific subject or field under investigation. Different fields emphasize different aspects of autonomy, purpose, and interaction with the environment, leading to multiple complementary definitions.&lt;br /&gt;
&lt;br /&gt;
One early and influential definition is provided by &#039;&#039;&#039;Jose C. Brustoloni&#039;&#039;&#039;, who characterizes autonomous agents as &#039;&#039;“systems capable of autonomous, purposeful action in the real world”&#039;&#039; (Brustoloni, 1991)&amp;lt;ref&amp;gt;Brustoloni, J. C. (1991). Autonomous agents: Characterization and requirements. &#039;&#039;Technical Report&#039;&#039;, University of Pittsburgh.&amp;lt;/ref&amp;gt;. This definition highlights two essential features: autonomy, understood as self-directed control of behavior, and purposefulness, referring to action directed toward goals or objectives. While it is a short definition, that is fairly easy to understand, it leaves open how such purpose is internally represented or realized.&lt;br /&gt;
&lt;br /&gt;
A more explicit account of the autonomous agent–environment relationship is given by &#039;&#039;&#039;Stan Franklin&#039;&#039;&#039; and &#039;&#039;&#039;Art Graesser&#039;&#039;&#039;, who define an autonomous agent as &#039;&#039;“a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future”&#039;&#039; (Franklin &amp;amp; Graesser, 1997, p. 25)&amp;lt;ref&amp;gt;Franklin, S., &amp;amp; Graesser, A. (1997). Is it an agent, or just a program? A taxonomy for autonomous agents. In J. P. Müller, M. J. Wooldridge, &amp;amp; N. R. Jennings (Eds.), &#039;&#039;Intelligent agents III&#039;&#039; (pp. 21–35). Springer.&amp;lt;/ref&amp;gt;. This definition emphasizes the &#039;&#039;&#039;situatedness&#039;&#039;&#039; of the agent, its &#039;&#039;&#039;temporal continuity&#039;&#039;&#039;, and the presence of an internally driven agenda that guides action beyond immediate perception and response behavior.&lt;br /&gt;
&lt;br /&gt;
Similarly, &#039;&#039;&#039;Pattie Maes&#039;&#039;&#039; describes autonomous agents as &#039;&#039;“computational systems that inhabit some complex, dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed”&#039;&#039; (Maes, 1995, p. 108)&amp;lt;ref&amp;gt;Maes, P. (1995). Artificial life meets entertainment: Life-like autonomous agents. &#039;&#039;Communications of the ACM&#039;&#039;, 38(11), 108–114. &amp;lt;/ref&amp;gt;. This formulation situates autonomous agents explicitly within &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;artificial life&#039;&#039;&#039; research, focusing on their operation in dynamic environments and their role in achieving goals set by the designer of the agent.&lt;br /&gt;
&lt;br /&gt;
Taken together, these definitions converge on a common understanding of autonomous agents as systems that are situated in an environment, capable of perceiving and acting upon that environment, and able to regulate their behavior over time in pursuit of internal objectives or agendas. While differing in emphasis—ranging from purposefulness and environmental embedding to computational realization—all definitions reflect the view that autonomy arises from an agent’s capacity to control its own actions within environmental constraints, rather than from external, continuous control. &lt;br /&gt;
&lt;br /&gt;
Previously it was stated that many AI-tools used today can be considered as agents, but in regard of the definitions that have been given for autonomous agents, it is clear to see that these tools do not fullfill the requirements of autonomy, and therefore can not be called autonmous agents. An example for an autonomous agent in our daily life is a thermostat used for heating or air conditioning in a house or room. It fulfills all the requirements that have been stated as essential:&lt;br /&gt;
&lt;br /&gt;
* it is Situated in an environment (the room or building)&lt;br /&gt;
* it perceives the environment (through temperature sensors)&lt;br /&gt;
* it can choose from different actions (heating, cooling or doing nothing)&lt;br /&gt;
* it takes action upon the environment (controlling heating or cooling systems)&lt;br /&gt;
* it is influenced by these actions in the future and they shape its own behavior in the future&lt;br /&gt;
* it regulates its behavior autonomously (maintaining target temperature)&lt;br /&gt;
* it operates without continuous human input (once it is configured)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As the example of a thermostat demonstrates, autonomous agents are not confined to theoretical models but are already part of everyday life and can be expected to become increasingly revalent in the years to come. This is because they can make our life easier and have huge potential in the fields of artificial intelligence and computing. &lt;br /&gt;
&lt;br /&gt;
== Statement on the useage of Artificial Intelligence (AI) ==&lt;br /&gt;
ChatGPT was used to support the editorial process by suggesting stylistic improvements and identifying relevant sections to include. It was not used as a primary author and did not generate the core content, arguments, or structure of the text.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references responsive=&amp;quot;0&amp;quot; /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=30768</id>
		<title>User:Leon Zipfel</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=30768"/>
		<updated>2026-01-09T09:01:50Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Given name=Leon&lt;br /&gt;
|Family name=Zipfel&lt;br /&gt;
|Sex=Male&lt;br /&gt;
|Country=Germany&lt;br /&gt;
|Institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Highest academic degree=High School Diploma (secondary)&lt;br /&gt;
|KD of expertise=Aerospace Engineering&lt;br /&gt;
|Current academic institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Pursued academic degree=Bachelor’s Degree&lt;br /&gt;
|Field of pursued degree=Aerospace Engineering Bachelor&lt;br /&gt;
|input language=EN (English)&lt;br /&gt;
}}&lt;br /&gt;
Leon Simeon Zipfel (*2001, Starnberg) is a student at Hochschule München (HM) – University of Applied Sciences.  &lt;br /&gt;
[[Category:Person]]&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=29373</id>
		<title>Draft:Autonomous agent</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=29373"/>
		<updated>2025-12-28T12:44:40Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: Added Statement on the useage of Artificial Intelligence (AI)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;Leon Zipfel (2025). Autonomous agent, Understanding Complexity&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{{Proposal&lt;br /&gt;
|Was created on date=2025-12-23&lt;br /&gt;
|Belongs to clarus=Understanding Complexity&lt;br /&gt;
|Has author=Leon Zipfel&lt;br /&gt;
|Has publication status=glossaLAB:Open&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The idea of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; historically has its roots in the [[IESC:SYSTEMS THEORY|&#039;&#039;&#039;systems theory&#039;&#039;&#039;]] as well as &#039;&#039;&#039;cybernetics&#039;&#039;&#039;, where autonomous behavior was described in terms of feedback, regulation, and adaptation. However the term and formal concept of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; were later introduced and refined in &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;multi-agent systems&#039;&#039;&#039; research. In order to understand the concept of an autonomous agent, it is first necessary to take a closer look at the two underlying ideas that make up the concept.  The first step is to clarify the concept of an agent itself. This will be followed by an explanation of the term autonomy. Finally, it will be discussed whether combining these two ideas is as simple as it seems in order to arrive at a valid definition of an autonomous agent, or if it is not quite as simple as that. Additionally new tools such as language models, smart home devices and interactive AI systems, will inspected, in order to see if autonomous agents are part of our daily life, or just a topic that is discussed by scientist and researchers.  &lt;br /&gt;
&lt;br /&gt;
== Agent ==&lt;br /&gt;
The concept of an &#039;&#039;&#039;agent&#039;&#039;&#039; is described in detail in the conceptual clarification [[IESC:AGENT|&#039;&#039;&#039;agent&#039;&#039;&#039;]] by &#039;&#039;&#039;Charles François&#039;&#039;&#039; (2004)&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;François, C. (2004). &#039;&#039;International encyclopedia of systems and cybernetics&#039;&#039; (2nd ed.)&amp;lt;/ref&amp;gt;. Because it is essential to this article however, it will be shortly explained here as well.  &lt;br /&gt;
&lt;br /&gt;
There is no single, universally accepted definition of an &#039;&#039;&#039;agent&#039;&#039;&#039;, as the term is used across multiple disciplines, including &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039;, &#039;&#039;&#039;systems theory&#039;&#039;&#039;, and &#039;&#039;&#039;network science&#039;&#039;&#039;, each emphasizing distinct aspects of agency. Consequently, there are many different definitions of the term, all of which are valid within their respective fields of research. &lt;br /&gt;
&lt;br /&gt;
One widely cited general definition describes an &#039;&#039;&#039;agent&#039;&#039;&#039; as an entity that perceives its environment through sensors and acts upon that environment through actuators (Stuart J. Russell &amp;amp; Peter Norvig 2010)&amp;lt;ref&amp;gt;Russell, S. J., &amp;amp; Norvig, P. (2010). &#039;&#039;Artificial intelligence: A modern approach&#039;&#039; (3rd ed.). Prentice Hall.&amp;lt;/ref&amp;gt;. This formulation is intentionally broad and highlights the interaction between an &#039;&#039;&#039;agent&#039;&#039;&#039; and its environment, without imposing assumptions about internal structure or cognitive capabilities. &lt;br /&gt;
&lt;br /&gt;
From a &#039;&#039;&#039;multi-element systems&#039;&#039;&#039; view, &#039;&#039;&#039;agents&#039;&#039;&#039; may also be characterized as &#039;&#039;&#039;active elements within multi-element systems or networks&#039;&#039;&#039;, distinguished from passive components by its capacity to influence system states through its actions. Building on this view, &#039;&#039;&#039;ERCEAU&#039;&#039;&#039; and &#039;&#039;&#039;FERBER&#039;&#039;&#039; proposed a hierarchical classification of agents, ranging from &#039;&#039;&#039;reactive agents&#039;&#039;&#039; to &#039;&#039;&#039;intentional agents&#039;&#039;&#039; with explicit goals and plans. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;J. FERBER&#039;&#039;&#039; later proposes a more detailed definition with nine properties that can be fulfilled by an agent, such as possessing resources or being driven by a set of tendencies. An entity that complies with all nine of those properties can be described as an intelligent system. (Ferber, 1999)&amp;lt;ref&amp;gt;Ferber, J. (1999). &#039;&#039;Multi-agent systems: An introduction to distributed artificial intelligence&#039;&#039;. Addison-Wesley.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;B. HAYES-ROTH&#039;&#039;&#039; on the other hand does not have quite as many requirements to declare an &#039;&#039;&#039;agent&#039;&#039;&#039; as an &#039;&#039;&#039;intelligent agent&#039;&#039;&#039;. She states in order to be considered as such the &#039;&#039;&#039;agent&#039;&#039;&#039; must perform three functions continuously.&lt;br /&gt;
&lt;br /&gt;
1) precepting dynamic conditions in the environment &lt;br /&gt;
&lt;br /&gt;
2)acting to affect conditions in the environment &lt;br /&gt;
&lt;br /&gt;
3) interpreting perceptions, solving problems, drawing inferences and determining actions &lt;br /&gt;
&lt;br /&gt;
(B. Hayes-Roth,1992)&amp;lt;ref&amp;gt;Hayes-Roth, B. (1992). An architecture for adaptive intelligent systems. &#039;&#039;Artificial Intelligence&#039;&#039;, p.329–365.&amp;lt;/ref&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All of these definitions state that an &#039;&#039;&#039;agent&#039;&#039;&#039; must be in an environment, they must be able to receive information from and about this environment and they have to be able to act upon this environment. For the purpose of this conceptual clarification, this will be the minimum definition for an entity to be called an agent.&lt;br /&gt;
&lt;br /&gt;
While the research discussed so far primarily addressed agents as objects of scientific investigation, recent advances in artificial intelligence have made agent-like systems widely accessible. Tools such as Gemini, Chat GPT, Notebook LM and others are now used daily by a broad population. Based on the definitions discussed above, such systems can reasonably be described as agents. Whether they should also be regarded as &#039;&#039;&#039;autonomous agents&#039;&#039;&#039;, however, depends critically on how the term autonomy is understood and used. Addressing this question therefore requires a more detailed examination of the concept of autonomy itself.&lt;br /&gt;
&lt;br /&gt;
== Autonomy ==&lt;br /&gt;
In the conceptual clarification [[IESC:AUTONOMY|autonomy]] the concept is defined as &amp;quot;The capacity of a system to select and decide, within limits, its own behavior&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;. This definition is most commonly used in fields such as system theory. &lt;br /&gt;
&lt;br /&gt;
The concept was introduced by the French biologist &#039;&#039;&#039;P. VENDRYÈS&#039;&#039;&#039; in the early 1940s and represents one of the earliest systematic attempts to explain autonomous behaviour in biological and artificial systems.(P. Vendryes, 1942)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;VENDRYÈS&#039;&#039;&#039; conception of autonomy is fundamentally grounded in the system’s &#039;&#039;&#039;relative control over its relations with the environment&#039;&#039;&#039;. Autonomy is therefore not absolute or the same concept as independence; rather, it emerges from regulatory mechanisms that allow a system to manage environmental influences while preserving internal coherence.&lt;br /&gt;
&lt;br /&gt;
A central element of Vendryès’ theory is a &#039;&#039;&#039;probabilistic conception of time and choice&#039;&#039;&#039;. While the past is considered strictly fixed, the future is only in part determined and presents a limited set of possible outcomes. Autonomy is manifested in the system’s capacity to select one possibility out of  several possible ones. So the entity chooses one specific trajectory and excludes others. This view anticipates later developments in systems theory and complexity science, including probabilistic and chaotic dynamics. (P. Vendryes, 1942)&amp;lt;ref&amp;gt;Vendryès, P. (1942). &#039;&#039;Autonomie et mécanismes&#039;&#039;. Presses Universitaires de France.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Subsequent authors expanded this framework. &#039;&#039;&#039;Kenneth Berrien&#039;&#039;&#039; emphasized the analogy between human choice and probabilistic system outputs(K. Berrien,1968)&amp;lt;ref&amp;gt;Berrien, K. F. (1968). &#039;&#039;General and social systems&#039;&#039;. Rutgers University Press.&amp;lt;/ref&amp;gt;, while &#039;&#039;&#039;Robert H. Howe&#039;&#039;&#039; defined autonomy in 1975 as the unity of computation and construction, linking internal information processing with self-production&amp;lt;ref&amp;gt;Howe, R. H. (1975). Autonomy and self-regulation in complex systems. &#039;&#039;Systems Research&#039;&#039;, 20(2), 85–98.&amp;lt;/ref&amp;gt;. Finally, &#039;&#039;&#039;A. S. Iberall&#039;&#039;&#039; stressed the thermodynamic dimension of autonomy, arguing that autonomous systems must be understood as energy-processing engines.   (A. S. Iberall, 1973)&amp;lt;ref&amp;gt;Iberall, A. S. (1973). Toward a general science of viable systems. &#039;&#039;McGraw-Hill&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Overall, autonomy in systems theory, and other fields of research, implies a &#039;&#039;&#039;relational, graded, and energetically grounded property&#039;&#039;&#039;, arising from internal organization, regulation, and sustained interaction with a structured environment.&lt;br /&gt;
&lt;br /&gt;
== Autonomous Agent ==&lt;br /&gt;
Having clarified the concepts of &#039;&#039;agent&#039;&#039; and &#039;&#039;autonomy&#039;&#039; separately, it would appear straightforward to define an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; by combining these two notions. As with the concept of an agent itself, however, definitions of autonomous agents vary depending on the disciplinary context and the specific subject or field under investigation. Different fields emphasize different aspects of autonomy, purpose, and interaction with the environment, leading to multiple complementary definitions.&lt;br /&gt;
&lt;br /&gt;
One early and influential definition is provided by &#039;&#039;&#039;Jose C. Brustoloni&#039;&#039;&#039;, who characterizes autonomous agents as &#039;&#039;“systems capable of autonomous, purposeful action in the real world”&#039;&#039; (Brustoloni, 1991)&amp;lt;ref&amp;gt;Brustoloni, J. C. (1991). Autonomous agents: Characterization and requirements. &#039;&#039;Technical Report&#039;&#039;, University of Pittsburgh.&amp;lt;/ref&amp;gt;. This definition highlights two essential features: autonomy, understood as self-directed control of behavior, and purposefulness, referring to action directed toward goals or objectives. While it is a short definition, that is fairly easy to understand, it leaves open how such purpose is internally represented or realized.&lt;br /&gt;
&lt;br /&gt;
A more explicit account of the autonomous agent–environment relationship is given by &#039;&#039;&#039;Stan Franklin&#039;&#039;&#039; and &#039;&#039;&#039;Art Graesser&#039;&#039;&#039;, who define an autonomous agent as &#039;&#039;“a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future”&#039;&#039; (Franklin &amp;amp; Graesser, 1997, p. 25)&amp;lt;ref&amp;gt;Franklin, S., &amp;amp; Graesser, A. (1997). Is it an agent, or just a program? A taxonomy for autonomous agents. In J. P. Müller, M. J. Wooldridge, &amp;amp; N. R. Jennings (Eds.), &#039;&#039;Intelligent agents III&#039;&#039; (pp. 21–35). Springer.&amp;lt;/ref&amp;gt;. This definition emphasizes the &#039;&#039;&#039;situatedness&#039;&#039;&#039; of the agent, its &#039;&#039;&#039;temporal continuity&#039;&#039;&#039;, and the presence of an internally driven agenda that guides action beyond immediate perception and response behavior.&lt;br /&gt;
&lt;br /&gt;
Similarly, &#039;&#039;&#039;Pattie Maes&#039;&#039;&#039; describes autonomous agents as &#039;&#039;“computational systems that inhabit some complex, dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed”&#039;&#039; (Maes, 1995, p. 108)&amp;lt;ref&amp;gt;Maes, P. (1995). Artificial life meets entertainment: Life-like autonomous agents. &#039;&#039;Communications of the ACM&#039;&#039;, 38(11), 108–114. &amp;lt;/ref&amp;gt;. This formulation situates autonomous agents explicitly within &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;artificial life&#039;&#039;&#039; research, focusing on their operation in dynamic environments and their role in achieving goals set by the designer of the agent.&lt;br /&gt;
&lt;br /&gt;
Taken together, these definitions converge on a common understanding of autonomous agents as systems that are situated in an environment, capable of perceiving and acting upon that environment, and able to regulate their behavior over time in pursuit of internal objectives or agendas. While differing in emphasis—ranging from purposefulness and environmental embedding to computational realization—all definitions reflect the view that autonomy arises from an agent’s capacity to control its own actions within environmental constraints, rather than from external, continuous control. &lt;br /&gt;
&lt;br /&gt;
Previously it was stated that many AI-tools used today can be considered as agents, but in regard of the definitions that have been given for autonomous agents, it is clear to see that these tools do not fullfill the requirements of autonomy, and therefore can not be called autonmous agents. An example for an autonomous agent in our daily life is a thermostat used for heating or air conditioning in a house or room. It fulfills all the requirements that have been stated as essential:&lt;br /&gt;
&lt;br /&gt;
* it is Situated in an environment (the room or building)&lt;br /&gt;
* it perceives the environment (through temperature sensors)&lt;br /&gt;
* it can choose from different actions (heating, cooling or doing nothing)&lt;br /&gt;
* it takes action upon the environment (controlling heating or cooling systems)&lt;br /&gt;
* it is influenced by these actions in the future and they shape its own behavior in the future&lt;br /&gt;
* it regulates its behavior autonomously (maintaining target temperature)&lt;br /&gt;
* it operates without continuous human input (once it is configured)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As the example of a thermostat demonstrates, autonomous agents are not confined to theoretical models but are already part of everyday life and can be expected to become increasingly revalent in the years to come. This is because they can make our life easier and have huge potential in the fields of artificial intelligence and computing. &lt;br /&gt;
&lt;br /&gt;
== Statement on the useage of Artificial Intelligence (AI) ==&lt;br /&gt;
ChatGPT was used to support the editorial process by suggesting stylistic improvements and identifying relevant sections to include. It was not used as a primary author and did not generate the core content, arguments, or structure of the text.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references responsive=&amp;quot;0&amp;quot; /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=29371</id>
		<title>Draft:Autonomous agent</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=29371"/>
		<updated>2025-12-28T12:33:11Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: Added reference list&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;Leon Zipfel (2025). Autonomous agent, Understanding Complexity&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{{Proposal&lt;br /&gt;
|Was created on date=2025-12-23&lt;br /&gt;
|Belongs to clarus=Understanding Complexity&lt;br /&gt;
|Has author=Leon Zipfel&lt;br /&gt;
|Has publication status=glossaLAB:Open&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The idea of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; historically has its roots in the [[IESC:SYSTEMS THEORY|&#039;&#039;&#039;systems theory&#039;&#039;&#039;]] as well as &#039;&#039;&#039;cybernetics&#039;&#039;&#039;, where autonomous behavior was described in terms of feedback, regulation, and adaptation. However the term and formal concept of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; were later introduced and refined in &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;multi-agent systems&#039;&#039;&#039; research. In order to understand the concept of an autonomous agent, it is first necessary to take a closer look at the two underlying ideas that make up the concept.  The first step is to clarify the concept of an agent itself. This will be followed by an explanation of the term autonomy. Finally, it will be discussed whether combining these two ideas is as simple as it seems in order to arrive at a valid definition of an autonomous agent, or if it is not quite as simple as that. Additionally new tools such as language models, smart home devices and interactive AI systems, will inspected, in order to see if autonomous agents are part of our daily life, or just a topic that is discussed by scientist and researchers.  &lt;br /&gt;
&lt;br /&gt;
== Agent ==&lt;br /&gt;
The concept of an &#039;&#039;&#039;agent&#039;&#039;&#039; is described in detail in the conceptual clarification [[IESC:AGENT|&#039;&#039;&#039;agent&#039;&#039;&#039;]] by &#039;&#039;&#039;Charles François&#039;&#039;&#039; (2004)&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;François, C. (2004). &#039;&#039;International encyclopedia of systems and cybernetics&#039;&#039; (2nd ed.)&amp;lt;/ref&amp;gt;. Because it is essential to this article however, it will be shortly explained here as well.  &lt;br /&gt;
&lt;br /&gt;
There is no single, universally accepted definition of an &#039;&#039;&#039;agent&#039;&#039;&#039;, as the term is used across multiple disciplines, including &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039;, &#039;&#039;&#039;systems theory&#039;&#039;&#039;, and &#039;&#039;&#039;network science&#039;&#039;&#039;, each emphasizing distinct aspects of agency. Consequently, there are many different definitions of the term, all of which are valid within their respective fields of research. &lt;br /&gt;
&lt;br /&gt;
One widely cited general definition describes an &#039;&#039;&#039;agent&#039;&#039;&#039; as an entity that perceives its environment through sensors and acts upon that environment through actuators (Stuart J. Russell &amp;amp; Peter Norvig 2010)&amp;lt;ref&amp;gt;Russell, S. J., &amp;amp; Norvig, P. (2010). &#039;&#039;Artificial intelligence: A modern approach&#039;&#039; (3rd ed.). Prentice Hall.&amp;lt;/ref&amp;gt;. This formulation is intentionally broad and highlights the interaction between an &#039;&#039;&#039;agent&#039;&#039;&#039; and its environment, without imposing assumptions about internal structure or cognitive capabilities. &lt;br /&gt;
&lt;br /&gt;
From a &#039;&#039;&#039;multi-element systems&#039;&#039;&#039; view, &#039;&#039;&#039;agents&#039;&#039;&#039; may also be characterized as &#039;&#039;&#039;active elements within multi-element systems or networks&#039;&#039;&#039;, distinguished from passive components by its capacity to influence system states through its actions. Building on this view, &#039;&#039;&#039;ERCEAU&#039;&#039;&#039; and &#039;&#039;&#039;FERBER&#039;&#039;&#039; proposed a hierarchical classification of agents, ranging from &#039;&#039;&#039;reactive agents&#039;&#039;&#039; to &#039;&#039;&#039;intentional agents&#039;&#039;&#039; with explicit goals and plans. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;J. FERBER&#039;&#039;&#039; later proposes a more detailed definition with nine properties that can be fulfilled by an agent, such as possessing resources or being driven by a set of tendencies. An entity that complies with all nine of those properties can be described as an intelligent system. (Ferber, 1999)&amp;lt;ref&amp;gt;Ferber, J. (1999). &#039;&#039;Multi-agent systems: An introduction to distributed artificial intelligence&#039;&#039;. Addison-Wesley.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;B. HAYES-ROTH&#039;&#039;&#039; on the other hand does not have quite as many requirements to declare an &#039;&#039;&#039;agent&#039;&#039;&#039; as an &#039;&#039;&#039;intelligent agent&#039;&#039;&#039;. She states in order to be considered as such the &#039;&#039;&#039;agent&#039;&#039;&#039; must perform three functions continuously.&lt;br /&gt;
&lt;br /&gt;
1) precepting dynamic conditions in the environment &lt;br /&gt;
&lt;br /&gt;
2)acting to affect conditions in the environment &lt;br /&gt;
&lt;br /&gt;
3) interpreting perceptions, solving problems, drawing inferences and determining actions &lt;br /&gt;
&lt;br /&gt;
(B. Hayes-Roth,1992)&amp;lt;ref&amp;gt;Hayes-Roth, B. (1992). An architecture for adaptive intelligent systems. &#039;&#039;Artificial Intelligence&#039;&#039;, p.329–365.&amp;lt;/ref&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All of these definitions state that an &#039;&#039;&#039;agent&#039;&#039;&#039; must be in an environment, they must be able to receive information from and about this environment and they have to be able to act upon this environment. For the purpose of this conceptual clarification, this will be the minimum definition for an entity to be called an agent.&lt;br /&gt;
&lt;br /&gt;
While the research discussed so far primarily addressed agents as objects of scientific investigation, recent advances in artificial intelligence have made agent-like systems widely accessible. Tools such as Gemini, Chat GPT, Notebook LM and others are now used daily by a broad population. Based on the definitions discussed above, such systems can reasonably be described as agents. Whether they should also be regarded as &#039;&#039;&#039;autonomous agents&#039;&#039;&#039;, however, depends critically on how the term autonomy is understood and used. Addressing this question therefore requires a more detailed examination of the concept of autonomy itself.&lt;br /&gt;
&lt;br /&gt;
== Autonomy ==&lt;br /&gt;
In the conceptual clarification [[IESC:AUTONOMY|autonomy]] the concept is defined as &amp;quot;The capacity of a system to select and decide, within limits, its own behavior&amp;quot;&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;. This definition is most commonly used in fields such as system theory. &lt;br /&gt;
&lt;br /&gt;
The concept was introduced by the French biologist &#039;&#039;&#039;P. VENDRYÈS&#039;&#039;&#039; in the early 1940s and represents one of the earliest systematic attempts to explain autonomous behaviour in biological and artificial systems.(P. Vendryes, 1942)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;VENDRYÈS&#039;&#039;&#039; conception of autonomy is fundamentally grounded in the system’s &#039;&#039;&#039;relative control over its relations with the environment&#039;&#039;&#039;. Autonomy is therefore not absolute or the same concept as independence; rather, it emerges from regulatory mechanisms that allow a system to manage environmental influences while preserving internal coherence.&lt;br /&gt;
&lt;br /&gt;
A central element of Vendryès’ theory is a &#039;&#039;&#039;probabilistic conception of time and choice&#039;&#039;&#039;. While the past is considered strictly fixed, the future is only in part determined and presents a limited set of possible outcomes. Autonomy is manifested in the system’s capacity to select one possibility out of  several possible ones. So the entity chooses one specific trajectory and excludes others. This view anticipates later developments in systems theory and complexity science, including probabilistic and chaotic dynamics. (P. Vendryes, 1942)&amp;lt;ref&amp;gt;Vendryès, P. (1942). &#039;&#039;Autonomie et mécanismes&#039;&#039;. Presses Universitaires de France.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Subsequent authors expanded this framework. &#039;&#039;&#039;Kenneth Berrien&#039;&#039;&#039; emphasized the analogy between human choice and probabilistic system outputs(K. Berrien,1968)&amp;lt;ref&amp;gt;Berrien, K. F. (1968). &#039;&#039;General and social systems&#039;&#039;. Rutgers University Press.&amp;lt;/ref&amp;gt;, while &#039;&#039;&#039;Robert H. Howe&#039;&#039;&#039; defined autonomy in 1975 as the unity of computation and construction, linking internal information processing with self-production&amp;lt;ref&amp;gt;Howe, R. H. (1975). Autonomy and self-regulation in complex systems. &#039;&#039;Systems Research&#039;&#039;, 20(2), 85–98.&amp;lt;/ref&amp;gt;. Finally, &#039;&#039;&#039;A. S. Iberall&#039;&#039;&#039; stressed the thermodynamic dimension of autonomy, arguing that autonomous systems must be understood as energy-processing engines.   (A. S. Iberall, 1973)&amp;lt;ref&amp;gt;Iberall, A. S. (1973). Toward a general science of viable systems. &#039;&#039;McGraw-Hill&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Overall, autonomy in systems theory, and other fields of research, implies a &#039;&#039;&#039;relational, graded, and energetically grounded property&#039;&#039;&#039;, arising from internal organization, regulation, and sustained interaction with a structured environment.&lt;br /&gt;
&lt;br /&gt;
== Autonomous Agent ==&lt;br /&gt;
Having clarified the concepts of &#039;&#039;agent&#039;&#039; and &#039;&#039;autonomy&#039;&#039; separately, it would appear straightforward to define an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; by combining these two notions. As with the concept of an agent itself, however, definitions of autonomous agents vary depending on the disciplinary context and the specific subject or field under investigation. Different fields emphasize different aspects of autonomy, purpose, and interaction with the environment, leading to multiple complementary definitions.&lt;br /&gt;
&lt;br /&gt;
One early and influential definition is provided by &#039;&#039;&#039;Jose C. Brustoloni&#039;&#039;&#039;, who characterizes autonomous agents as &#039;&#039;“systems capable of autonomous, purposeful action in the real world”&#039;&#039; (Brustoloni, 1991)&amp;lt;ref&amp;gt;Brustoloni, J. C. (1991). Autonomous agents: Characterization and requirements. &#039;&#039;Technical Report&#039;&#039;, University of Pittsburgh.&amp;lt;/ref&amp;gt;. This definition highlights two essential features: autonomy, understood as self-directed control of behavior, and purposefulness, referring to action directed toward goals or objectives. While it is a short definition, that is fairly easy to understand, it leaves open how such purpose is internally represented or realized.&lt;br /&gt;
&lt;br /&gt;
A more explicit account of the autonomous agent–environment relationship is given by &#039;&#039;&#039;Stan Franklin&#039;&#039;&#039; and &#039;&#039;&#039;Art Graesser&#039;&#039;&#039;, who define an autonomous agent as &#039;&#039;“a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future”&#039;&#039; (Franklin &amp;amp; Graesser, 1997, p. 25)&amp;lt;ref&amp;gt;Franklin, S., &amp;amp; Graesser, A. (1997). Is it an agent, or just a program? A taxonomy for autonomous agents. In J. P. Müller, M. J. Wooldridge, &amp;amp; N. R. Jennings (Eds.), &#039;&#039;Intelligent agents III&#039;&#039; (pp. 21–35). Springer.&amp;lt;/ref&amp;gt;. This definition emphasizes the &#039;&#039;&#039;situatedness&#039;&#039;&#039; of the agent, its &#039;&#039;&#039;temporal continuity&#039;&#039;&#039;, and the presence of an internally driven agenda that guides action beyond immediate perception and response behavior.&lt;br /&gt;
&lt;br /&gt;
Similarly, &#039;&#039;&#039;Pattie Maes&#039;&#039;&#039; describes autonomous agents as &#039;&#039;“computational systems that inhabit some complex, dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed”&#039;&#039; (Maes, 1995, p. 108)&amp;lt;ref&amp;gt;Maes, P. (1995). Artificial life meets entertainment: Life-like autonomous agents. &#039;&#039;Communications of the ACM&#039;&#039;, 38(11), 108–114. &amp;lt;/ref&amp;gt;. This formulation situates autonomous agents explicitly within &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;artificial life&#039;&#039;&#039; research, focusing on their operation in dynamic environments and their role in achieving goals set by the designer of the agent.&lt;br /&gt;
&lt;br /&gt;
Taken together, these definitions converge on a common understanding of autonomous agents as systems that are situated in an environment, capable of perceiving and acting upon that environment, and able to regulate their behavior over time in pursuit of internal objectives or agendas. While differing in emphasis—ranging from purposefulness and environmental embedding to computational realization—all definitions reflect the view that autonomy arises from an agent’s capacity to control its own actions within environmental constraints, rather than from external, continuous control. &lt;br /&gt;
&lt;br /&gt;
Previously it was stated that many AI-tools used today can be considered as agents, but in regard of the definitions that have been given for autonomous agents, it is clear to see that these tools do not fullfill the requirements of autonomy, and therefore can not be called autonmous agents. An example for an autonomous agent in our daily life is a thermostat used for heating or air conditioning in a house or room. It fulfills all the requirements that have been stated as essential:&lt;br /&gt;
&lt;br /&gt;
* it is Situated in an environment (the room or building)&lt;br /&gt;
* it perceives the environment (through temperature sensors)&lt;br /&gt;
* it can choose from different actions (heating, cooling or doing nothing)&lt;br /&gt;
* it takes action upon the environment (controlling heating or cooling systems)&lt;br /&gt;
* it is influenced by these actions in the future and they shape its own behavior in the future&lt;br /&gt;
* it regulates its behavior autonomously (maintaining target temperature)&lt;br /&gt;
* it operates without continuous human input (once it is configured)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As the example of a thermostat demonstrates, autonomous agents are not confined to theoretical models but are already part of everyday life and can be expected to become increasingly revalent in the years to come. This is because they can make our life easier and have huge potential in the fields of artificial intelligence and computing. &lt;br /&gt;
&amp;lt;references responsive=&amp;quot;0&amp;quot; /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=29355</id>
		<title>Draft:Autonomous agent</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=29355"/>
		<updated>2025-12-28T11:33:20Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: Defined Autonmous agents&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;Leon Zipfel (2025). Autonomous agent, Understanding Complexity&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{{Proposal&lt;br /&gt;
|Was created on date=2025-12-23&lt;br /&gt;
|Belongs to clarus=Understanding Complexity&lt;br /&gt;
|Has author=Leon Zipfel&lt;br /&gt;
|Has publication status=glossaLAB:Open&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The idea of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; historically has its roots in the [[IESC:SYSTEMS THEORY|&#039;&#039;&#039;systems theory&#039;&#039;&#039;]] as well as &#039;&#039;&#039;cybernetics&#039;&#039;&#039;, where autonomous behavior was described in terms of feedback, regulation, and adaptation. However the term and formal concept of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; were later introduced and refined in &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;multi-agent systems&#039;&#039;&#039; research. In order to understand the concept of an autonomous agent, it is first necessary to take a closer look at the two underlying ideas that make up the concept.  The first step is to clarify the concept of an agent itself. This will be followed by an explanation of the term autonomy. Finally, it will become clear whether combining these two ideas is as simple as it seems in order to arrive at a valid definition of an autonomous agent,or if it is not quite as easy as that. Additionally new tools such as language models, smart home devices and interactive AI systems, will be looked at, to see if autonmous agents are part of our daily lifes, or merly a topic that is discussed by scientist and researchers.  &lt;br /&gt;
&lt;br /&gt;
== Agent ==&lt;br /&gt;
The concept of an &#039;&#039;&#039;agent&#039;&#039;&#039; is described in detail in the conceptual clarification [[IESC:AGENT|&#039;&#039;&#039;agent&#039;&#039;&#039;]] by &#039;&#039;&#039;Charles François&#039;&#039;&#039; (2004). Because it is essential to this article however, it will be shortly explained here as well.  &lt;br /&gt;
&lt;br /&gt;
There is no single, universally accepted definition of an &#039;&#039;&#039;agent&#039;&#039;&#039;, as the term is used across multiple disciplines, including &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039;, &#039;&#039;&#039;systems theory&#039;&#039;&#039;, and &#039;&#039;&#039;network science&#039;&#039;&#039;, each emphasizing distinct aspects of agency. Consequently, there are many different definitions of the term, all of which are valid within their respective fields of research. &lt;br /&gt;
&lt;br /&gt;
One widely cited general definition describes an &#039;&#039;&#039;agent&#039;&#039;&#039; as an entity that perceives its environment through sensors and acts upon that environment through actuators. (Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach Third Edition p.34) This formulation is intentionally broad and highlights the interaction between an &#039;&#039;&#039;agent&#039;&#039;&#039; and its environment, without imposing assumptions about internal structure or cognitive capabilities. &lt;br /&gt;
&lt;br /&gt;
From a &#039;&#039;&#039;multi-element systems&#039;&#039;&#039; view, &#039;&#039;&#039;agents&#039;&#039;&#039; may also be characterized as &#039;&#039;&#039;active elements within multi-element systems or networks&#039;&#039;&#039;, distinguished from passive components by its capacity to influence system states through its actions. Building on this view, &#039;&#039;&#039;ERCEAU&#039;&#039;&#039; and &#039;&#039;&#039;FERBER&#039;&#039;&#039; proposed a hierarchical classification of agents, ranging from &#039;&#039;&#039;reactive agents&#039;&#039;&#039; to &#039;&#039;&#039;intentional agents&#039;&#039;&#039; with explicit goals and plans.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;J. FERBER&#039;&#039;&#039; later proposes a more detailed definition with nine properties that can be fulfilled by an agent, such as possessing resources or being driven by a set of tendencies. An entity that complys with all nine of those properties can be described as an intelligent system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;B. HAYES-ROTH&#039;&#039;&#039; on the other hand does not have quite as many requirements to declare an &#039;&#039;&#039;agent&#039;&#039;&#039; as an &#039;&#039;&#039;intelligent agent&#039;&#039;&#039;. She states in order to be considered as such the &#039;&#039;&#039;agent&#039;&#039;&#039; must perform three functions continuously.&lt;br /&gt;
&lt;br /&gt;
1) precepting dynamic conditions in the environment &lt;br /&gt;
&lt;br /&gt;
2)acting to affect conditions in the environment &lt;br /&gt;
&lt;br /&gt;
3) interpreting perceptions, solving problems, drawing inferences and determining actions &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All of these definitions state that an &#039;&#039;&#039;agent&#039;&#039;&#039; must be in an environment, they must be able to receive information from and about this environment and they have to be able to act upon this environment. For the purpose of this conceptual clarification, this will be the minimum definition for an entity to be called an agent.&lt;br /&gt;
&lt;br /&gt;
While the research discussed so far primarily addressed agents as objects of scientific investigation, recent advances in artificial intelligence have made agent-like systems widely accessible. Tools such as Gemini, Chat GPT, Notebook LM and others are now used daily by a broad population. Based on the definitions discussed above, such systems can reasonably be described as agents. Whether they should also be regarded as &#039;&#039;&#039;autonomous agents&#039;&#039;&#039;, however, depends critically on how the term autonomy is understood and used. Addressing this question therefore requires a more detailed examination of the concept of autonomy itself.&lt;br /&gt;
&lt;br /&gt;
== Autonomy ==&lt;br /&gt;
In the conceptual clarification [[IESC:AUTONOMY|autonomy]] is defined as &amp;quot;The capacity of a system to select and decide, within limits, its own behavior&amp;quot;. This definition is most commonly used in fields such as system theory. &lt;br /&gt;
&lt;br /&gt;
The concept was introduced by the French biologist &#039;&#039;&#039;P. VENDRYÈS&#039;&#039;&#039; in the early 1940s and represents one of the earliest systematic attempts to explain autonomous behaviour in biological and artificial systems.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;VENDRYÈS&#039;&#039;&#039; conception of autonomy is fundamentally grounded in the system’s &#039;&#039;&#039;relative control over its relations with the environment&#039;&#039;&#039;. Autonomy is therefore not absolute or the same concept as independence; rather, it emerges from regulatory mechanisms that allow a system to manage environmental influences while preserving internal coherence.&lt;br /&gt;
&lt;br /&gt;
A central element of Vendryès’ theory is a &#039;&#039;&#039;probabilistic conception of time and choice&#039;&#039;&#039;. While the past is considered strictly determined, the future is only partially determined and presents a limited set of possible outcomes. Autonomy is manifested in the system’s capacity to select one possibility out of  several possible ones. So the entity chooses one specific trajectory and excludes others. This view anticipates later developments in systems theory and complexity science, including probabilistic and chaotic dynamics.&lt;br /&gt;
&lt;br /&gt;
Subsequent authors expanded this framework. &#039;&#039;&#039;Kenneth Berrien&#039;&#039;&#039; emphasized the analogy between human choice and probabilistic system outputs, while &#039;&#039;&#039;Robert H. Howe&#039;&#039;&#039; defined autonomy as the unity of computation and construction, linking internal information processing with self-production. Finally, &#039;&#039;&#039;A. S. Iberall&#039;&#039;&#039; stressed the thermodynamic dimension of autonomy, arguing that autonomous systems must be understood as energy-processing engines.&lt;br /&gt;
&lt;br /&gt;
Overall, autonomy in systems theory, and other fields of resarch, implies a &#039;&#039;&#039;relational, graded, and energetically grounded property&#039;&#039;&#039;, arising from internal organization, regulation, and sustained interaction with a structured environment.&lt;br /&gt;
&lt;br /&gt;
== Autonomous Agent ==&lt;br /&gt;
Having clarified the concepts of &#039;&#039;agent&#039;&#039; and &#039;&#039;autonomy&#039;&#039; separately, it would appear straightforward to define an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; by combining these two notions. As with the concept of an agent itself, however, definitions of autonomous agents vary depending on the disciplinary context and the specific subject or field under investigation. Different fields emphasize different aspects of autonomy, purpose, and interaction with the environment, leading to multiple complementary definitions.&lt;br /&gt;
&lt;br /&gt;
One early and influential definition is provided by &#039;&#039;&#039;Jose C. Brustoloni&#039;&#039;&#039;, who characterizes autonomous agents as &#039;&#039;“systems capable of autonomous, purposeful action in the real world”&#039;&#039; (Brustoloni, 1991). This definition highlights two essential features: autonomy, understood as self-directed control of behavior, and purposefulness, referring to action directed toward goals or objectives. While it is a short definition, that is fairly easy to understand, it leaves open how such purpose is internally represented or realized.&lt;br /&gt;
&lt;br /&gt;
A more explicit account of the autonomous agent–environment relationship is given by &#039;&#039;&#039;Stan Franklin&#039;&#039;&#039; and &#039;&#039;&#039;Art Graesser&#039;&#039;&#039;, who define an autonomous agent as &#039;&#039;“a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future”&#039;&#039; (Franklin &amp;amp; Graesser, 1997, p. 25). This definition emphasizes the &#039;&#039;&#039;situatedness&#039;&#039;&#039; of the agent, its &#039;&#039;&#039;temporal continuity&#039;&#039;&#039;, and the presence of an internally driven agenda that guides action beyond immediate perception and respopnse behavior.&lt;br /&gt;
&lt;br /&gt;
Similarly, &#039;&#039;&#039;Pattie Maes&#039;&#039;&#039; describes autonomous agents as &#039;&#039;“computational systems that inhabit some complex, dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed”&#039;&#039; (Maes, 1995, p. 108). This formulation situates autonomous agents explicitly within &#039;&#039;&#039;artificial intelligence&#039;&#039;&#039; and &#039;&#039;&#039;artificial life&#039;&#039;&#039; research, focossing on their operation in dynamic environments and their role in achieving goals set by the desingner of the agent.&lt;br /&gt;
&lt;br /&gt;
Taken together, these definitions converge on a common understanding of autonomous agents as &#039;&#039;&#039;systems that are situated in an environment, capable of perceiving and acting upon that environment, and able to regulate their behavior over time in pursuit of internal objectives or agendas&#039;&#039;&#039;. While differing in emphasis—ranging from purposefulness and environmental embedding to computational realization—all definitions reflect the view that autonomy arises from an agent’s capacity to control its own actions within environmental constraints, rather than from external, continuous control. &lt;br /&gt;
&lt;br /&gt;
Previously it was stated that many AI-tools used today can be considered as agents, but in regard of the definitions that have been given for autonmous agents, it is clear to see that these tools do not fullfill the requirements of autonomy, and therefore can not be called autonmous agents. An exampe for an autonmous agent in our daily lifes is a thermostat used for heating or air conditioning in a house or room. It fullfills all the requirements that have been stated as essential:&lt;br /&gt;
&lt;br /&gt;
* it is Situated in an enviroment (the room or building)&lt;br /&gt;
* it perceives the enviroment (through temperature sensors)&lt;br /&gt;
* it can choose from differnt actions (heating, cooling or doing nothing)&lt;br /&gt;
* it takes action upon the enviroment (controlling heating or cooling systems)&lt;br /&gt;
* it is influenced by these actions in the future and they shape its own behaviour in the future &lt;br /&gt;
* it regulates its behaviour autonomously (maintaining target temperature)&lt;br /&gt;
* it operates without continous human input (once it is configured)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As the example of a thermostat demonstrates, autonomous agents are not confined to theoretical models but are already part of everyday life and can be expected to become increasingly revalent in the years to come.&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=IESC:AGENT&amp;diff=29288</id>
		<title>IESC:AGENT</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=IESC:AGENT&amp;diff=29288"/>
		<updated>2025-12-27T18:03:09Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{article&lt;br /&gt;
 | Collection = International Encyclopedia of Systems and Cybernetics&lt;br /&gt;
 | Volume = 2&lt;br /&gt;
 | Number = 1&lt;br /&gt;
 | ID = 0060&lt;br /&gt;
 | Type = IESC:General information&lt;br /&gt;
 | Curator = Charles François&lt;br /&gt;
 | Author = Charles François&lt;br /&gt;
 | Date = 2004&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
There is no single, universally accepted definition of an agent, but rather several valid ones. The term is used in many different fields of study, such as artificial intelligence, system theory and network science. This makes it quite difficult to identify one definitive definition, since each field emphasizes different aspects of what constitutes an agent and defines it according to its specific context. However, by examining these different definitions, it is possible to develop a more general understanding of what the term agent means.&lt;br /&gt;
&lt;br /&gt;
== Agent ==&lt;br /&gt;
[[File:Depition of an Agent.jpg|thumb|448x448px|&#039;&#039;&#039;Figure 1, T&#039;&#039;&#039;he concept of an agent based on the definition and depiction of an agent in Artifical Intelligence: A Modern Approach by Stuart J. Rusell and Peter Norvig]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the most general definitions of the term comes from &#039;&#039;&#039;Stuart J. Russell&#039;&#039;&#039; and &#039;&#039;&#039;Peter Norvig&#039;&#039;&#039;. They state in their paper &#039;&#039;Artificial Intelligence: A Modern Approach&#039;&#039; that,“An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators” &amp;lt;ref&amp;gt;Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach Third Edition p.34&amp;lt;/ref&amp;gt;. This definition is deliberately broad and depends heavily on how the concept of the environment is specified. Because it is so general, it is fairly easy to depict the idea, as shown in &#039;&#039;&#039;Figure 1&#039;&#039;&#039;, which is based heavly on the dipiction Stuart J. Russell and Peter Norvig used in their paper. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An agent could also be characterized as an active &#039;&#039;&#039;element&#039;&#039;&#039; in a &#039;&#039;&#039;multi-elements system&#039;&#039;&#039; or &#039;&#039;&#039;network&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{{ency_person|J. ERCEAU}} and {{ency_person|J. FERBER}} describe the following types of agents, at different {{ency_term|hierarchical}} {{ency_term|levels}} in the active multi-agents {{ency_term|system}}:&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;reactive agents&amp;lt;/u&amp;gt;: these are at the lower {{ency_term|levels}}. They merely dispose of a reduced {{ency_term|protocole}} and {{ency_term|communication}} {{ency_term|language}} and … their abilities rely only on a {{ency_term|stimulus/action}} {{ency_term|rule}}. The reactive agents {{ency_term|class}} include various {{ency_term|levels}}, according to their {{ency_term|group}}-forming ability and capacity to produce global {{ency_term|behavior}};&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;communicating agents&amp;lt;/u&amp;gt;, which possess a complete {{ency_term|communication}} {{ency_term|protocole}}, but whose conversational and behavioral {{ency_term|parts}} are {{ency_term|interdependent}};&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;rational agents&amp;lt;/u&amp;gt;, which possess precise abilities, beliefs and a partial {{ency_term|representation}} of their {{ency_term|environment}}, specially of the other agents within the system;&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;intentional agents&amp;lt;/u&amp;gt;, at the highest {{ency_term|level}}, possessing explicit {{ency_term|goals}}, specific plans which allow them to fulfill their {{ency_term|goals}}, as well as the possibility to commit themselves to specific tasks, that they are obliged to carry out, or to contract other agents to execute certain {{ency_term|actions}}”. (1991, p.757-8)&lt;br /&gt;
&lt;br /&gt;
This could be a stimulating {{ency_term|description}} for a {{ency_term|model}} of any society.&lt;br /&gt;
&lt;br /&gt;
More recently, {{ency_person|J. FERBER}} (1999) has given another much more precise definition of an agent:&lt;br /&gt;
&lt;br /&gt;
According to Ferber, an Agent is a virtual or physical entity which:&lt;br /&gt;
&lt;br /&gt;
1) is capable of acting in an {{ency_term|environment}}&lt;br /&gt;
&lt;br /&gt;
2) can communicate directly with other agents&lt;br /&gt;
&lt;br /&gt;
3) is driven by a set of tendencies (in the form of individual {{ency_term|objectives}} or of a satisfaction/survival {{ency_term|function}} which it tries to optimize)&lt;br /&gt;
&lt;br /&gt;
4) possesses {{ency_term|resources}} of its own&lt;br /&gt;
&lt;br /&gt;
5) is capable to perceive its {{ency_term|environment}} (but up to a limited extent)&lt;br /&gt;
&lt;br /&gt;
6) has only a partial representation of this {{ency_term|environment}} (and perhaps none at all)&lt;br /&gt;
&lt;br /&gt;
7) possesses skills and can offer services&lt;br /&gt;
&lt;br /&gt;
8) may be able to reproduce itself&lt;br /&gt;
&lt;br /&gt;
9) whose {{ency_term|behavior}} tends towards satisfying its {{ency_term|objectives}} , taking account of the {{ency_term|resources}} and skills available to it and depending on its {{ency_term|perception}} , its {{ency_term|representation}} and the {{ency_term|communication}} it receives &lt;br /&gt;
&lt;br /&gt;
“Having the properties 1-9) an agent can be considered as an ”intelligent system“(Ibid)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another influential characterization of &#039;&#039;&#039;intelligent agents&#039;&#039;&#039; is provided by &#039;&#039;&#039;B. HAYES-ROTH&#039;&#039;&#039; in &#039;&#039;An Architecture for Adaptive Intelligent Systems&#039;&#039;. &amp;quot;Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.&amp;quot;&amp;lt;ref&amp;gt;Barbara Hayes-Roth. An architecture for adaptive intelligent systems p.329&amp;lt;/ref&amp;gt; This definition concentrates on  the &#039;&#039;&#039;functional coupling of perception&#039;&#039;&#039;, &#039;&#039;&#039;reasoning&#039;&#039;&#039;, and &#039;&#039;&#039;action&#039;&#039;&#039; as the basis of &#039;&#039;&#039;adaptive intelligent behavior&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
By contrast, in the previously discussed definition by Jacques Ferber adopts a more structural and property-based perspective. Ferber defines agents as &#039;&#039;&#039;physical&#039;&#039;&#039; or &#039;&#039;&#039;virtual entities&#039;&#039;&#039; characterized by a set of capabilities, including &#039;&#039;&#039;action&#039;&#039;&#039;, &#039;&#039;&#039;communication&#039;&#039;&#039;, &#039;&#039;&#039;goal-oriented behavior&#039;&#039;&#039;, &#039;&#039;&#039;partial environmental representation&#039;&#039;&#039;, and the &#039;&#039;&#039;possession of resources and skills.&#039;&#039;&#039; Within this framework, intelligence is not presupposed but may emerge from the combination of these properties (Ferber, 1999).&lt;br /&gt;
&lt;br /&gt;
These definitions reflect different theoretical emphases: Hayes-Roth’s approach, rooted in artificial intelligence, focuses on internal functional processes underlying intelligent behavior, whereas Ferber’s definition, developed in the context of multi-agent systems, highlights interaction and functional roles within distributed systems. &lt;br /&gt;
&lt;br /&gt;
This example illustrates perfectly how different fields of resarch employ definitions of agents that are precisely adapted to their respective research goals, methodological assumptions, and domains of application. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From the description of &#039;&#039;&#039;J. FERBER&#039;&#039;&#039;, {{ency_person|N. SAFFARPOUR}} (2000, p. 75) deduces the following characteristics of agents&lt;br /&gt;
&lt;br /&gt;
“- Agents are {{ency_term|autonomous}} , i.e. have {{ency_term|control}} over their own actions&lt;br /&gt;
&lt;br /&gt;
- Agents contain some level of intelligence, from fixed {{ency_term|rule}} to {{ency_term|learning}} engine that allows them to adapt to {{ency_term|change}} in the {{ency_term|environment}}&lt;br /&gt;
&lt;br /&gt;
- Agents don&#039;t only act reactively, but sometimes also proactively and don&#039;t simply act in {{ency_term|response}} to {{ency_term|environment}} , in other words agents are {{ency_term|goal}} oriented&lt;br /&gt;
&lt;br /&gt;
- Agents have social ability, that is they communicate with the user, the system and other agents as required&lt;br /&gt;
&lt;br /&gt;
- Agents may also cooperate with other agents to carry out more complex tasks that those they themselves can handle&lt;br /&gt;
&lt;br /&gt;
- Agents may move from one system to another to access remote recourse or even to meet other agents&lt;br /&gt;
&lt;br /&gt;
- Agents are adaptive, that is change their {{ency_term|behavior}} based on previous experience&lt;br /&gt;
&lt;br /&gt;
While some of these terms may appear ambiguous—such as &#039;&#039;&#039;intelligence&#039;&#039;&#039; or &#039;&#039;&#039;mobility&#039;&#039;&#039;—they remain significant for understanding agents in a general sense.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In conclusion, the concept of an agent should be understood as an &#039;&#039;&#039;analytical abstraction&#039;&#039;&#039; rather than a rigid classification. As emphasized by &#039;&#039;&#039;Stuart J. Russell&#039;&#039;&#039; and &#039;&#039;&#039;Peter Norvig&#039;&#039;&#039; in &#039;&#039;Artificial Intelligence: A Modern Approach&#039;&#039;, &amp;quot;The notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents.&amp;quot; &amp;lt;ref&amp;gt;Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach Third Edition p.36&amp;lt;/ref&amp;gt; Whether something is treated as an agent depends on the perspective, purpose, and level of analysis adopted. This flexibility is precisely what makes the agent concept powerful: it allows complex systems to be studied in a structured way without imposing artificial boundaries between agents and non-agents.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
{{ency_term|Adaptability}}, {{ency_term|Artificial life}}, {{ency_term|Autonomy}}, {{ency_term|Behavior (Anticipatory)}}, {{ency_term|Intelligence (Distributed artificial)}}, {{ency_term|Stigmergy}}, {{ency_term|Swarm}}&lt;br /&gt;
&amp;lt;references responsive=&amp;quot;0&amp;quot; /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=IESC:AGENT&amp;diff=29257</id>
		<title>IESC:AGENT</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=IESC:AGENT&amp;diff=29257"/>
		<updated>2025-12-27T14:15:38Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: Added figure 1 and definition by Hayes-Roth&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{article&lt;br /&gt;
 | Collection = International Encyclopedia of Systems and Cybernetics&lt;br /&gt;
 | Volume = 2&lt;br /&gt;
 | Number = 1&lt;br /&gt;
 | ID = 0060&lt;br /&gt;
 | Type = IESC:General information&lt;br /&gt;
 | Curator = Charles François&lt;br /&gt;
 | Author = Charles François&lt;br /&gt;
 | Date = 2004&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
There is no single, universally accepted definition of an agent, but rather several valid ones. The term is used in many different fields of study, such as artificial intelligence, system theory and network science. This makes it quite difficult to identify one definitive definition, since each field emphasizes different aspects of what constitutes an agent and defines it according to its specific context. However, by examining these different definitions, it is possible to develop a more general understanding of what the term agent means.&lt;br /&gt;
&lt;br /&gt;
== Agent ==&lt;br /&gt;
[[File:Depition of an Agent.jpg|thumb|448x448px|&#039;&#039;&#039;Figure 1, T&#039;&#039;&#039;he concept of an agent based on the definition and depiction of an agent in Artifical Intelligence: A Modern Approach by Stuart J. Rusell and Peter Norvig]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the most general definitions of the term comes from &#039;&#039;&#039;Stuart J. Russell&#039;&#039;&#039; and &#039;&#039;&#039;Peter Norvig&#039;&#039;&#039;. They state in their paper &#039;&#039;Artificial Intelligence: A Modern Approach&#039;&#039; that,“An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators” &amp;lt;ref&amp;gt;Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach Third Edition p.34&amp;lt;/ref&amp;gt;. This definition is deliberately broad and depends heavily on how the concept of the environment is specified. Because it is so general, it is fairly easy to depict the idea, as shown in &#039;&#039;&#039;Figure 1&#039;&#039;&#039;, which is based heavly on the dipiction Stuart J. Russell and Peter Norvig used in their paper. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An agent could also be characterized as an active &#039;&#039;&#039;element&#039;&#039;&#039; in a multi-&#039;&#039;&#039;elements system&#039;&#039;&#039; or &#039;&#039;&#039;network&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{{ency_person|J. ERCEAU}} and {{ency_person|J. FERBER}} describe the following types of agents, at different {{ency_term|hierarchical}} {{ency_term|levels}} in the active multi-agents {{ency_term|system}}:&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;reactive agents&amp;lt;/u&amp;gt;: these are at the lower {{ency_term|levels}}. They merely dispose of a reduced {{ency_term|protocole}} and {{ency_term|communication}} {{ency_term|language}} and … their abilities rely only on a {{ency_term|stimulus/action}} {{ency_term|rule}}. The reactive agents {{ency_term|class}} include various {{ency_term|levels}}, according to their {{ency_term|group}}-forming ability and capacity to produce global {{ency_term|behavior}};&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;communicating agents&amp;lt;/u&amp;gt;, which possess a complete {{ency_term|communication}} {{ency_term|protocole}}, but whose conversational and behavioral {{ency_term|parts}} are {{ency_term|interdependent}};&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;rational agents&amp;lt;/u&amp;gt;, which possess precise abilities, beliefs and a partial {{ency_term|representation}} of their {{ency_term|environment}}, specially of the other agents within the system;&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;intentional agents&amp;lt;/u&amp;gt;, at the highest {{ency_term|level}}, possessing explicit {{ency_term|goals}}, specific plans which allow them to fulfill their {{ency_term|goals}}, as well as the possibility to commit themselves to specific tasks, that they are obliged to carry out, or to contract other agents to execute certain {{ency_term|actions}}”. (1991, p.757-8)&lt;br /&gt;
&lt;br /&gt;
This could be a stimulating {{ency_term|description}} for a {{ency_term|model}} of any society.&lt;br /&gt;
&lt;br /&gt;
More recently, {{ency_person|J. FERBER}} (1999) has given another much more precise definition of an agent:&lt;br /&gt;
&lt;br /&gt;
According to Ferber, an Agent is a virtual or physical entity which:&lt;br /&gt;
&lt;br /&gt;
1) is capable of acting in an {{ency_term|environment}}&lt;br /&gt;
&lt;br /&gt;
2) can communicate directly with other agents&lt;br /&gt;
&lt;br /&gt;
3) is driven by a set of tendencies (in the form of individual {{ency_term|objectives}} or of a satisfaction/survival {{ency_term|function}} which it tries to optimize)&lt;br /&gt;
&lt;br /&gt;
4) possesses {{ency_term|resources}} of its own&lt;br /&gt;
&lt;br /&gt;
5) is capable to perceive its {{ency_term|environment}} (but up to a limited extent)&lt;br /&gt;
&lt;br /&gt;
6) has only a partial representation of this {{ency_term|environment}} (and perhaps none at all)&lt;br /&gt;
&lt;br /&gt;
7) possesses skills and can offer services&lt;br /&gt;
&lt;br /&gt;
8) may be able to reproduce itself&lt;br /&gt;
&lt;br /&gt;
9) whose {{ency_term|behavior}} tends towards satisfying its {{ency_term|objectives}} , taking account of the {{ency_term|resources}} and skills available to it and depending on its {{ency_term|perception}} , its {{ency_term|representation}} and the {{ency_term|communication}} it receives &lt;br /&gt;
&lt;br /&gt;
“Having the properties 1-9) an agent can be considered as an ”intelligent system“(Ibid)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another influential characterization of &#039;&#039;&#039;intelligent agents&#039;&#039;&#039; is provided by &#039;&#039;&#039;B. HAYES-ROTH&#039;&#039;&#039; in &#039;&#039;An Architecture for Adaptive Intelligent Systems&#039;&#039;. &amp;quot;Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.&amp;quot;&amp;lt;ref&amp;gt;Barbara Hayes-Roth. An architecture for adaptive intelligent systems p.329&amp;lt;/ref&amp;gt; This definition concentrates on  the &#039;&#039;&#039;functional coupling of perception&#039;&#039;&#039;, &#039;&#039;&#039;reasoning&#039;&#039;&#039;, and &#039;&#039;&#039;action&#039;&#039;&#039; as the basis of &#039;&#039;&#039;adaptive intelligent behavior&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
By contrast, in the previously discussed definition by Jacques Ferber adopts a more structural and property-based perspective. Ferber defines agents as &#039;&#039;&#039;physical&#039;&#039;&#039; or &#039;&#039;&#039;virtual entities&#039;&#039;&#039; characterized by a set of capabilities, including &#039;&#039;&#039;action&#039;&#039;&#039;, &#039;&#039;&#039;communication&#039;&#039;&#039;, &#039;&#039;&#039;goal-oriented behavior&#039;&#039;&#039;, &#039;&#039;&#039;partial environmental representation&#039;&#039;&#039;, and the &#039;&#039;&#039;possession of resources and skills.&#039;&#039;&#039; Within this framework, intelligence is not presupposed but may emerge from the combination of these properties (Ferber, 1999).&lt;br /&gt;
&lt;br /&gt;
These definitions reflect different theoretical emphases: Hayes-Roth’s approach, rooted in artificial intelligence, focuses on internal functional processes underlying intelligent behavior, whereas Ferber’s definition, developed in the context of multi-agent systems, highlights interaction and functional roles within distributed systems. &lt;br /&gt;
&lt;br /&gt;
This example illustrates perfectly how different fields of resarch employ definitions of agents that are precisely adapted to their respective research goals, methodological assumptions, and domains of application. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
From the description of &#039;&#039;&#039;J. FERBER&#039;&#039;&#039;, {{ency_person|N. SAFFARPOUR}} (2000, p. 75) deduces the following characteristics of agents&lt;br /&gt;
&lt;br /&gt;
“- Agents are {{ency_term|autonomous}} , i.e. have {{ency_term|control}} over their own actions&lt;br /&gt;
&lt;br /&gt;
- Agents contain some level of intelligence, from fixed {{ency_term|rule}} to {{ency_term|learning}} engine that allows them to adapt to {{ency_term|change}} in the {{ency_term|environment}}&lt;br /&gt;
&lt;br /&gt;
- Agents don&#039;t only act reactively, but sometimes also proactively and don&#039;t simply act in {{ency_term|response}} to {{ency_term|environment}} , in other words agents are {{ency_term|goal}} oriented&lt;br /&gt;
&lt;br /&gt;
- Agents have social ability, that is they communicate with the user, the system and other agents as required&lt;br /&gt;
&lt;br /&gt;
- Agents may also cooperate with other agents to carry out more complex tasks that those they themselves can handle&lt;br /&gt;
&lt;br /&gt;
- Agents may move from one system to another to access remote recourse or even to meet other agents&lt;br /&gt;
&lt;br /&gt;
- Agents are adaptive, that is change their {{ency_term|behavior}} based on previous experience&lt;br /&gt;
&lt;br /&gt;
While some of these terms may appear ambiguous—such as &#039;&#039;&#039;intelligence&#039;&#039;&#039; or &#039;&#039;&#039;mobility&#039;&#039;&#039;—they remain significant for understanding agents in a general sense.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In conclusion, the concept of an agent should be understood as an &#039;&#039;&#039;analytical abstraction&#039;&#039;&#039; rather than a rigid classification. As emphasized by &#039;&#039;&#039;Stuart J. Russell&#039;&#039;&#039; and &#039;&#039;&#039;Peter Norvig&#039;&#039;&#039; in &#039;&#039;Artificial Intelligence: A Modern Approach&#039;&#039;, &amp;quot;The notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents.&amp;quot; &amp;lt;ref&amp;gt;Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach Third Edition p.36&amp;lt;/ref&amp;gt; Whether something is treated as an agent depends on the perspective, purpose, and level of analysis adopted. This flexibility is precisely what makes the agent concept powerful: it allows complex systems to be studied in a structured way without imposing artificial boundaries between agents and non-agents.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
{{ency_term|Adaptability}}, {{ency_term|Artificial life}}, {{ency_term|Autonomy}}, {{ency_term|Behavior (Anticipatory)}}, {{ency_term|Intelligence (Distributed artificial)}}, {{ency_term|Stigmergy}}, {{ency_term|Swarm}}&lt;br /&gt;
&amp;lt;references responsive=&amp;quot;0&amp;quot; /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=File:Depition_of_an_Agent.jpg&amp;diff=29255</id>
		<title>File:Depition of an Agent.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=File:Depition_of_an_Agent.jpg&amp;diff=29255"/>
		<updated>2025-12-27T13:43:06Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: describtion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This depiction shows the concept of an agent, as it is introduced by Stuart J. Rusell and Peter Norvig in Artifical Intelligence: A Modern Approach. It is heavily based on Figure 2.1, shown in that paper on page 35.&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=File:Depition_of_an_Agent.jpg&amp;diff=29251</id>
		<title>File:Depition of an Agent.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=File:Depition_of_an_Agent.jpg&amp;diff=29251"/>
		<updated>2025-12-27T13:25:39Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 1 shows the concept of an agent based on the definition and depiction of an agent in Artifical Intelligence: A Modern Approach by Stuart J. Rusell and Peter Norvig&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=IESC:AGENT&amp;diff=29183</id>
		<title>IESC:AGENT</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=IESC:AGENT&amp;diff=29183"/>
		<updated>2025-12-25T14:05:33Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: Added a conclusion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{article&lt;br /&gt;
 | Collection = International Encyclopedia of Systems and Cybernetics&lt;br /&gt;
 | Volume = 2&lt;br /&gt;
 | Number = 1&lt;br /&gt;
 | ID = 0060&lt;br /&gt;
 | Type = IESC:General information&lt;br /&gt;
 | Curator = Charles François&lt;br /&gt;
 | Author = Charles François&lt;br /&gt;
 | Date = 2004&lt;br /&gt;
}}&lt;br /&gt;
There is no single, universally accepted definition of an agent, but rather several valid ones. The term is used in many different fields of study, such as artificial intelligence, system theory and network science. This makes it difficult to identify one definitive definition, since each field emphasizes different aspects of what constitutes an agent and defines it according to its specific context. However, by examining these different definitions, it is possible to develop a more general understanding of what the term agent means.&lt;br /&gt;
&lt;br /&gt;
one of the most gerneral definitions is “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators” &amp;lt;ref&amp;gt;Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach Third Edition p.34&amp;lt;/ref&amp;gt; by Stuart J. Russell and Peter Norvig. This definition is deliberately broad and depends heavily on how the concept of the &#039;&#039;environment&#039;&#039; is specified.&lt;br /&gt;
&lt;br /&gt;
An agent could also be characterized as an active &#039;&#039;&#039;element&#039;&#039;&#039; in a multi-&#039;&#039;&#039;elements system&#039;&#039;&#039; or &#039;&#039;&#039;network&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{{ency_person|J. ERCEAU}} and {{ency_person|J. FERBER}} describe the following types of agents, at different {{ency_term|hierarchical}} {{ency_term|levels}} in the active multi-agents {{ency_term|system}}:&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;reactive agents&amp;lt;/u&amp;gt;: these are at the lower {{ency_term|levels}}. They merely dispose of a reduced {{ency_term|protocole}} and {{ency_term|communication}} {{ency_term|language}} and … their abilities rely only on a {{ency_term|stimulus/action}} {{ency_term|rule}}. The reactive agents {{ency_term|class}} include various {{ency_term|levels}}, according to their {{ency_term|group}}-forming ability and capacity to produce global {{ency_term|behavior}};&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;communicating agents&amp;lt;/u&amp;gt;, which possess a complete {{ency_term|communication}} {{ency_term|protocole}}, but whose conversational and behavioral {{ency_term|parts}} are {{ency_term|interdependent}};&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;rational agents&amp;lt;/u&amp;gt;, which possess precise abilities, beliefs and a partial {{ency_term|representation}} of their {{ency_term|environment}}, specially of the other agents within the system;&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;intentional agents&amp;lt;/u&amp;gt;, at the highest {{ency_term|level}}, possessing explicit {{ency_term|goals}}, specific plans which allow them to fulfill their {{ency_term|goals}}, as well as the possibility to commit themselves to specific tasks, that they are obliged to carry out, or to contract other agents to execute certain {{ency_term|actions}}”. (1991, p.757-8)&lt;br /&gt;
&lt;br /&gt;
This could be a stimulating {{ency_term|description}} for a {{ency_term|model}} of any society.&lt;br /&gt;
&lt;br /&gt;
More recently, {{ency_person|J. FERBER}} (1999) has given another much more precise definition of an agent:&lt;br /&gt;
&lt;br /&gt;
According to Ferber, an Agent is a virtual or physical entity which:&lt;br /&gt;
&lt;br /&gt;
1) is capable of acting in an {{ency_term|environment}}&lt;br /&gt;
&lt;br /&gt;
2) can communicate directly with other agents&lt;br /&gt;
&lt;br /&gt;
3) is driven by a set of tendencies (in the form of individual {{ency_term|objectives}} or of a satisfaction/survival {{ency_term|function}} which it tries to optimize)&lt;br /&gt;
&lt;br /&gt;
4) possesses {{ency_term|resources}} of its own&lt;br /&gt;
&lt;br /&gt;
5) is capable to perceive its {{ency_term|environment}} (but up to a limited extent)&lt;br /&gt;
&lt;br /&gt;
6) has only a partial representation of this {{ency_term|environment}} (and perhaps none at all)&lt;br /&gt;
&lt;br /&gt;
7) possesses skills and can offer services&lt;br /&gt;
&lt;br /&gt;
8) may be able to reproduce itself&lt;br /&gt;
&lt;br /&gt;
9) whose {{ency_term|behavior}} tends towards satisfying its {{ency_term|objectives}} , taking account of the {{ency_term|resources}} and skills available to it and depending on its {{ency_term|perception}} , its {{ency_term|representation}} and the {{ency_term|communication}} it receives&lt;br /&gt;
&lt;br /&gt;
:“Having the properties 1-9) an agent can be considered as an ”intelligent system“(Ibid)&lt;br /&gt;
&lt;br /&gt;
From this description {{ency_person|N. SAFFARPOUR}} (2000, p. 75) deduces the following characteristics of agents&lt;br /&gt;
&lt;br /&gt;
“- Agents are {{ency_term|autonomous}} , i.e. have {{ency_term|control}} over their own actions&lt;br /&gt;
&lt;br /&gt;
- Agents contain some level of intelligence, from fixed {{ency_term|rule}} to {{ency_term|learning}} engine that allows them to adapt to {{ency_term|change}} in the {{ency_term|environment}}&lt;br /&gt;
&lt;br /&gt;
- Agents don&#039;t only act reactively, but sometimes also proactively and don&#039;t simply act in {{ency_term|response}} to {{ency_term|environment}} , in other words agents are {{ency_term|goal}} oriented&lt;br /&gt;
&lt;br /&gt;
- Agents have social ability, that is they communicate with the user, the system and other agents as required&lt;br /&gt;
&lt;br /&gt;
- Agents may also cooperate with other agents to carry out more complex tasks that those they themselves can handle&lt;br /&gt;
&lt;br /&gt;
- Agents may move from one system to another to access remote recourse or even to meet other agents&lt;br /&gt;
&lt;br /&gt;
- Agents are adaptive, that is change their {{ency_term|behavior}} based on previous experience&lt;br /&gt;
&lt;br /&gt;
While some of these terms may appear ambiguous—such as &#039;&#039;intelligence&#039;&#039; or &#039;&#039;mobility&#039;&#039;—they remain significant for understanding agents in a general sense.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In conclusion, the concept of an agent should be understood as an &#039;&#039;&#039;analytical abstraction&#039;&#039;&#039; rather than a rigid classification. As emphasized by Stuart J. Russell and Peter Norvig in &#039;&#039;Artificial Intelligence: A Modern Approach&#039;&#039;, &amp;quot;The notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents.&amp;quot;&amp;lt;ref&amp;gt;Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach Third Edition p.36&amp;lt;/ref&amp;gt; Whether something is treated as an agent depends on the perspective, purpose, and level of analysis adopted. This flexibility is precisely what makes the agent concept powerful: it allows complex systems to be studied in a structured way without imposing artificial boundaries between agents and non-agents.&lt;br /&gt;
== See also ==&lt;br /&gt;
{{ency_term|Adaptability}}, {{ency_term|Artificial life}}, {{ency_term|Autonomy}}, {{ency_term|Behavior (Anticipatory)}}, {{ency_term|Intelligence (Distributed artificial)}}, {{ency_term|Stigmergy}}, {{ency_term|Swarm}}&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=IESC:AGENT&amp;diff=29177</id>
		<title>IESC:AGENT</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=IESC:AGENT&amp;diff=29177"/>
		<updated>2025-12-25T12:17:08Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: added introduction and one definition&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{article&lt;br /&gt;
 | Collection = International Encyclopedia of Systems and Cybernetics&lt;br /&gt;
 | Volume = 2&lt;br /&gt;
 | Number = 1&lt;br /&gt;
 | ID = 0060&lt;br /&gt;
 | Type = IESC:General information&lt;br /&gt;
 | Curator = Charles François&lt;br /&gt;
 | Author = Charles François&lt;br /&gt;
 | Date = 2004&lt;br /&gt;
}}&lt;br /&gt;
There is no single, universally accepted definition of an agent, but rather several valid ones. The term is used in many different fields of study, such as artificial intelligence, system theory and network science. This makes it difficult to identify one definitive definition, since each field emphasizes different aspects of what constitutes an agent and defines it according to its specific context. However, by examining these different definitions, it is possible to develop a more general understanding of what the term agent means.&lt;br /&gt;
&lt;br /&gt;
“An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators” &amp;lt;ref&amp;gt;Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach Third Edition p.34&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a very general definition by Stuart J. Russell and Peter Norvig which depends on clarification of the term environment. &lt;br /&gt;
&lt;br /&gt;
An agent could also be characterized as an active &#039;&#039;&#039;element&#039;&#039;&#039; in a multi-&#039;&#039;&#039;elements system&#039;&#039;&#039; or &#039;&#039;&#039;network&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{{ency_person|J. ERCEAU}} and {{ency_person|J. FERBER}} describe the following types of agents, at different {{ency_term|hierarchical}} {{ency_term|levels}} in the active multi-agents {{ency_term|system}}:&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;reactive agents&amp;lt;/u&amp;gt;: these are at the lower {{ency_term|levels}}. They merely dispose of a reduced {{ency_term|protocole}} and {{ency_term|communication}} {{ency_term|language}} and … their abilities rely only on a {{ency_term|stimulus/action}} {{ency_term|rule}}. The reactive agents {{ency_term|class}} include various {{ency_term|levels}}, according to their {{ency_term|group}}-forming ability and capacity to produce global {{ency_term|behavior}};&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;communicating agents&amp;lt;/u&amp;gt;, which possess a complete {{ency_term|communication}} {{ency_term|protocole}}, but whose conversational and behavioral {{ency_term|parts}} are {{ency_term|interdependent}};&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;rational agents&amp;lt;/u&amp;gt;, which possess precise abilities, beliefs and a partial {{ency_term|representation}} of their {{ency_term|environment}}, specially of the other agents within the system;&lt;br /&gt;
&lt;br /&gt;
- “&amp;lt;u&amp;gt;intentional agents&amp;lt;/u&amp;gt;, at the highest {{ency_term|level}}, possessing explicit {{ency_term|goals}}, specific plans which allow them to fulfill their {{ency_term|goals}}, as well as the possibility to commit themselves to specific tasks, that they are obliged to carry out, or to contract other agents to execute certain {{ency_term|actions}}”. (1991, p.757-8)&lt;br /&gt;
&lt;br /&gt;
This could be a stimulating {{ency_term|description}} for a {{ency_term|model}} of any society.&lt;br /&gt;
&lt;br /&gt;
More recently, {{ency_person|J. FERBER}} (1999) has given a much more precise definition of an agent:&lt;br /&gt;
&lt;br /&gt;
Agent is a virtual or physical entity which:&lt;br /&gt;
&lt;br /&gt;
1) is capable of acting in an {{ency_term|environment}}&lt;br /&gt;
&lt;br /&gt;
2) can communicate directly with other agents&lt;br /&gt;
&lt;br /&gt;
3) is driven by a set of tendencies (in the form of individual {{ency_term|objectives}} or of a satisfaction/survival {{ency_term|function}} which it tries to optimize)&lt;br /&gt;
&lt;br /&gt;
4) possesses {{ency_term|resources}} of its own&lt;br /&gt;
&lt;br /&gt;
5) is capable to perceive its {{ency_term|environment}} (but up to a limited extent)&lt;br /&gt;
&lt;br /&gt;
6) has only a partial representation of this {{ency_term|environment}} (and perhaps none at all)&lt;br /&gt;
&lt;br /&gt;
7) possesses skills and can offer services&lt;br /&gt;
&lt;br /&gt;
8) may be able to reproduce itself&lt;br /&gt;
&lt;br /&gt;
9) whose {{ency_term|behavior}} tends towards satisfying its {{ency_term|objectives}} , taking account of the {{ency_term|resources}} and skills available to it and depending on its {{ency_term|perception}} , its {{ency_term|representation}} and the {{ency_term|communication}} it receives&lt;br /&gt;
&lt;br /&gt;
:“Having the properties 1-9) an agent can be considered as an ”intelligent system“(Ibid)&lt;br /&gt;
&lt;br /&gt;
From this description {{ency_person|N. SAFFARPOUR}} (2000, p. 75) deduces the following characteristics of agents&lt;br /&gt;
&lt;br /&gt;
“- Agents are {{ency_term|autonomous}} , i.e. have {{ency_term|control}} over their own actions&lt;br /&gt;
&lt;br /&gt;
- Agents contain some level of intelligence, from fixed {{ency_term|rule}} to {{ency_term|learning}} engine that allows them to adapt to {{ency_term|change}} in the {{ency_term|environment}}&lt;br /&gt;
&lt;br /&gt;
- Agents don&#039;t only act reactively, but sometimes also proactively and don&#039;t simply act in {{ency_term|response}} to {{ency_term|environment}} , in other words agents are {{ency_term|goal}} oriented&lt;br /&gt;
&lt;br /&gt;
- Agents have social ability, that is they communicate with the user, the system and other agents as required&lt;br /&gt;
&lt;br /&gt;
- Agents may also cooperate with other agents to carry out more complex tasks that those they themselves can handle&lt;br /&gt;
&lt;br /&gt;
- Agents may move from one system to another to access remote recourse or even to meet other agents&lt;br /&gt;
&lt;br /&gt;
- Agents are adaptive, that is change their {{ency_term|behavior}} based on previous experience&lt;br /&gt;
&lt;br /&gt;
All these specifications are quite significant in a general sense, even if some terms used may seem ambiguous (as f. ex. “intelligence”, “move from one system to another”)&lt;br /&gt;
== See also ==&lt;br /&gt;
{{ency_term|Adaptability}}, {{ency_term|Artificial life}}, {{ency_term|Autonomy}}, {{ency_term|Behavior (Anticipatory)}}, {{ency_term|Intelligence (Distributed artificial)}}, {{ency_term|Stigmergy}}, {{ency_term|Swarm}}&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=28983</id>
		<title>Draft:Autonomous agent</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=28983"/>
		<updated>2025-12-23T22:57:50Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: Started article&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;Leon Zipfel (2025). Autonomous agent, Understanding Complexity&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{{Proposal&lt;br /&gt;
|Was created on date=2025-12-23&lt;br /&gt;
|Belongs to clarus=Understanding Complexity&lt;br /&gt;
|Has author=Leon Zipfel&lt;br /&gt;
|Has publication status=glossaLAB:Open&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The idea of an autonomous agent historically has its roots in the [[IESC:SYSTEMS THEORY|systems theory]] as well as cybernetics, where autonomous behavior was described in terms of feedback, regulation, and adaptation. However the term and formal concept of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039; were later introduced and refined in artificial intelligence and multi-agent systems research. To understand the concept of an &#039;&#039;&#039;autonomous agent&#039;&#039;&#039;, it is necessary to first clarify the concept of an &#039;&#039;&#039;agent&#039;&#039;&#039; itself. &lt;br /&gt;
&lt;br /&gt;
== Agent ==&lt;br /&gt;
The concept of an &#039;&#039;&#039;agent&#039;&#039;&#039; is described in detail in the conceptual clarification [[IESC:AGENT|&#039;&#039;&#039;agent&#039;&#039;&#039;]] by Charles François (2004). Because it is essential to this article however, it will be shortly explained here as well. Charles François defines an agent as &amp;quot;An active &#039;&#039;&#039;element&#039;&#039;&#039; in a multi-&#039;&#039;&#039;elements system&#039;&#039;&#039; or &#039;&#039;&#039;network&#039;&#039;&#039;.&amp;quot;&amp;lt;ref&amp;gt;An active &#039;&#039;&#039;element&#039;&#039;&#039; in a multi-&#039;&#039;&#039;elements system&#039;&#039;&#039; or &#039;&#039;&#039;network&#039;&#039;&#039;.&amp;lt;/ref&amp;gt;  This definition emphasizes the functional role of an &#039;&#039;&#039;agent&#039;&#039;&#039; as a system component that is not merely a passive object, but one that can activly effect a system. Additionally it could be able to perceive aspects of that environment and posses objectives or tendencies that guide its actions. &lt;br /&gt;
&lt;br /&gt;
== Autonomy ==&lt;br /&gt;
Defined in the conceptual clarification [[IESC:AUTONOMY|autonomy]] as &amp;quot;The capacity of a system to select and decide, within &#039;&#039;&#039;limits&#039;&#039;&#039;, its own &#039;&#039;&#039;behavior&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Autonomous Agent ==&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=28965</id>
		<title>Draft:Autonomous agent</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=Draft:Autonomous_agent&amp;diff=28965"/>
		<updated>2025-12-23T19:14:39Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: Created page with &amp;quot;{{Proposal |Was created on date=2025-12-23 |Belongs to clarus=Understanding Complexity |Has author=Leon Zipfel |Has publication status=glossaLAB:Open }}&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Proposal&lt;br /&gt;
|Was created on date=2025-12-23&lt;br /&gt;
|Belongs to clarus=Understanding Complexity&lt;br /&gt;
|Has author=Leon Zipfel&lt;br /&gt;
|Has publication status=glossaLAB:Open&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27311</id>
		<title>User:Leon Zipfel</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27311"/>
		<updated>2025-11-06T16:40:11Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Given name=Leon&lt;br /&gt;
|Family name=Zipfel&lt;br /&gt;
|Sex=Male&lt;br /&gt;
|Country=Germany&lt;br /&gt;
|Institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Academic degree=High School Diploma (secondary)&lt;br /&gt;
|KD of expertise=Aerospace Engeneering&lt;br /&gt;
|Current academic institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Current academic level=Bachelor’s Degree&lt;br /&gt;
|Current academic degree=Aerospace Engeneering Bachelor&lt;br /&gt;
|input language=EN (English)&lt;br /&gt;
}}&lt;br /&gt;
Leon Simeon Zipfel (*2001, Starnberg) is a student at Hochschule München (HM) – University of Applied Sciences.  &lt;br /&gt;
[[Category:Person]]&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27281</id>
		<title>User:Leon Zipfel</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27281"/>
		<updated>2025-11-06T16:30:15Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Given name=Leon Simeon&lt;br /&gt;
|Family name=Zipfel&lt;br /&gt;
|Sex=Male&lt;br /&gt;
|Country=Germany&lt;br /&gt;
|Institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Academic degree=High School Diploma (secondary)&lt;br /&gt;
|KD of expertise=Aerospace Engeneering&lt;br /&gt;
|Current academic institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Current academic level=Bachelor’s Degree&lt;br /&gt;
|Current academic degree=Aerospace Engeneering Bachelor&lt;br /&gt;
|input language=EN (English)&lt;br /&gt;
}}&lt;br /&gt;
Leon Simeon Zipfel (*2001, Starnberg) is a student at Hochschule München (HM) – University of Applied Sciences.  &lt;br /&gt;
[[Category:Person]]&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27279</id>
		<title>User:Leon Zipfel</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27279"/>
		<updated>2025-11-06T16:28:51Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Given name=Leon Simeon&lt;br /&gt;
|Family name=Zipfel&lt;br /&gt;
|Sex=Male&lt;br /&gt;
|Country=Germany&lt;br /&gt;
|Institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Academic degree=High School Diploma (secondary)&lt;br /&gt;
|KD of expertise=Aerospace Engeneering&lt;br /&gt;
|Current academic institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Current academic level=Bachelor’s Degree&lt;br /&gt;
|Current academic degree=Aerospace Engeneering Bachelor&lt;br /&gt;
|input language=EN (English)&lt;br /&gt;
}}&lt;br /&gt;
Leon Simeon Zipfel (*2001, Starnberg), is a student at Hochschule München.  &lt;br /&gt;
[[Category:Person]]&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27270</id>
		<title>User:Leon Zipfel</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27270"/>
		<updated>2025-11-06T16:25:51Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Given name=Leon Simeon&lt;br /&gt;
|Family name=Zipfel&lt;br /&gt;
|Sex=Male&lt;br /&gt;
|Country=Germany&lt;br /&gt;
|Institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Academic degree=High School Diploma (secondary)&lt;br /&gt;
|KD of expertise=Aerospace Engeneering&lt;br /&gt;
|Current academic institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Current academic level=Bachelor’s Degree&lt;br /&gt;
|Current academic degree=Aerospace Engeneering Bachelor&lt;br /&gt;
|input language=EN (English)&lt;br /&gt;
}}&lt;br /&gt;
Leon Simeon Zipfel (*2001, Starnberg)&lt;br /&gt;
[[Category:Person]]&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27260</id>
		<title>User:Leon Zipfel</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27260"/>
		<updated>2025-11-06T16:22:37Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Given name=Leon&lt;br /&gt;
|Family name=Zipfel&lt;br /&gt;
|Sex=Male&lt;br /&gt;
|Country=Germany&lt;br /&gt;
|Institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Academic degree=High School Diploma (secondary)&lt;br /&gt;
|Current academic institution=Hochschule München (HM) – University of Applied Sciences&lt;br /&gt;
|Current academic level=Bachelor’s Degree&lt;br /&gt;
|Current academic degree=Aerospace Engeneering Bachelor&lt;br /&gt;
|input language=EN (English)&lt;br /&gt;
}}&lt;br /&gt;
[[Category:Person]]&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
	<entry>
		<id>https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27243</id>
		<title>User:Leon Zipfel</title>
		<link rel="alternate" type="text/html" href="https://www.glossalab.org/w/index.php?title=User:Leon_Zipfel&amp;diff=27243"/>
		<updated>2025-11-06T15:12:16Z</updated>

		<summary type="html">&lt;p&gt;Leon Zipfel: create user page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person}}[[Category:Person]]&lt;/div&gt;</summary>
		<author><name>Leon Zipfel</name></author>
	</entry>
</feed>