Other terms
Revision as of 12:57, 20 February 2022
AI systems
"The notion of an “intelligent system” is neither defined precisely nor demarcated sharply from other systems, artifacts, or technical devices. Instead, the perception of what is judged to be intelligent changes with progress and exposure to such a system. Broadly speaking, an intelligent system is commonly understood as a computational system – such as a search engine, an online shopping assistant, a chat bot, or a cleaning robot – that leverages concepts, tools, and techniques from artificial intelligence in order to establish capabilities that are commonly attributed to humans while (still) being less typical of other soft- and hardware systems. Most notably, these capabilities let an AI system learn from experience and adapt to specific environmental conditions. As a consequence, an intelligent system exhibits a certain degree of autonomy, and its behavior is not completely prespecified. Importantly, intelligent systems are able to interact with humans or other systems through various modalities, for example, textual, visual, acoustic, or haptic signals. Whereas we share contemporary definitions of intelligent systems, we specifically focus on the abilities of AI systems to learn not only from prespecified data but also through interacting with humans." [1]
Co-construction
"Co-construction refers to an interactive and iterative process of negotiating both the explanandum and the form of understanding for explanations. It is a process that is performed mutually by sequentially building on, refining, and modifying the interaction: Each partner elaborates upon the other partner’s last contribution. The processes of scaffolding and monitoring (see below) guide this elaboration and direct it toward a specific form of understanding. In effect, what is achieved is the participation of both partners in moving toward a goal. Whereas the co-construction of an explanation takes place on the microlevel of an unfolding interaction and can thus be accessed directly, the process is modulated crucially on the macrolevel of the interaction."[1]
Explainee
"The addressee of an explanation"[1]
Explainer
"The person who steers an explanation forward"[1]
Explanandum
"The entity (event, phenomenon) that is the subject of an explanation"[1]
Explanans
"The (verbal) way that an explanation can be expressed and co-constructed by both partners"[1]
Monitoring
"In this multimodal process, the observed outcome is compared to what was predicted (Pickering and Garrod 2013). Via monitoring, the partners multimodally (i.e., using speech, gestures and nonverbal behavior) keep track of the progress in a joint task. For example, the explainer will monitor the explainee’s understanding by evaluating whether her or his way of explaining has been successful or whether further elaboration or modification is needed. Vice versa, the explainee will monitor the explainer by accepting a level of detail that is needed for a particular explanation."[1]
Scaffolding
"In developmental literature, scaffolding refers to the way an expert provides guidance to a learner within a learning process by increasing or reducing the level of assistance in accordance with the partner’s performance. In our approach, we transfer the term from the area of learning to understanding. In accordance with the idea that understanding is constructed by both partners, both partners can scaffold each other—that is, provide the other partner with the information needed to arrive at a joint construction of the explanandum and the desired form of understanding. Together with the process of monitoring, it is not only a form of guidance but also supervision, and both together aid the active participation of both partners."[1]
Understanding
"Whereas in the current debate on explainable systems (XAI), understanding refers to the problem of receiving “enough information” (Miller 2019, p. 11), in our approach, understanding is linked to what is relevant for the explainee. To account for variations in the progress of and the varying goals of explanations, we will differentiate between practices of enabling and comprehension. With enabling, we refer to explanations in the context of choosing or performing an action. Comprehension, in contrast, accounts for a reflexive awareness that may lead to a conceptual framework for a phenomenon that goes beyond what is immediately perceivable. We expect further differentiations that will be explored in the individual projects."[1]
Social-practice
"Social practice determines the social relations and power structures in a given situation and thereby provides a specific (normative) background for the way interaction will play out in order to ‘place’ the explanation appropriately, and finally how that explanation will be interpreted. Social practice is a product of our actions with respect to each other that often has both social consequences and social presuppositions. The consequences on the one hand and the presuppositions on the other hand speak to the two timescales that constitute a social practice: In terms of consequences, every explaining process re-establishes the relevant social practice; in terms of presuppositions, in turn, the experience of an explaining process will confirm or make a new contribution to our expectations, roles, and partner models in relation to this particular social practice."[1]
Dyad
Two individuals (such as husband and wife) maintaining a sociologically significant relationship [2]
Partner model
A partner model is a main resource for ‘placing’ explanations: it contains knowledge and assumptions about the explainee with regard to her/his dialogical role, general characteristics, or even this specific person.
Obligation
Obligations represent what an agent should do, according to some set of norms. The notion of obligation has been studied for many centuries, and its formal aspects are examined using Deontic Logic.
Obligor
One who is bound by a legal obligation
Obligee
One to whom another is obligated (as by a contract); specifically, one who is protected by a surety bond
Audience design
Audience design is a process in a symmetric interaction by which speakers tailor what they say in order for the addressee to understand it. Critically, audience design involves taking into account a representation of the addressee’s perspective, and how it differs from one’s own perspective.
Interlocutor
One who takes part in dialogue or conversation
Reappraisal
Reinterpreting or reanalyzing the emotional situation and/or goals
Persuasion
Persuasion can be seen as a further strategy to achieve a decision or behavior that is congruent with logical argumentation and not influenced by emotional processes.
Feedback signals
Feedback signals are generally (i) short (i.e., they consist of minimal verbal/vocal expressions), (ii) locally adapted to their prosodic context (i.e., the speaker’s utterance) by being more similar in pitch to their immediate surroundings than regular utterances, or (iii) realized in the visual modality, for example as head gestures or facial expressions.
Verbal feedback
We consider feedback ‘verbal/vocal’ if it is spoken, i.e., produced as a speech sound in the vocal tract of a listener. Examples of such feedback found in the ALICO corpus are genau (‘exactly’), ja (‘yes’), mhm (‘uh-huh’), and m.
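The two feedback entries above suggest a simple operational test: a verbal/vocal feedback signal is a short utterance built from minimal expressions such as those listed for the ALICO corpus. The sketch below is a hypothetical heuristic, not part of any corpus tooling; the token list and the `max_tokens` threshold are illustrative assumptions.

```python
# Illustrative token set drawn from the ALICO examples named above.
VERBAL_FEEDBACK_TOKENS = {"genau", "ja", "mhm", "m"}

def is_verbal_feedback(utterance: str, max_tokens: int = 2) -> bool:
    """Heuristic: feedback signals are short and consist only of
    minimal verbal/vocal expressions (criterion (i) above)."""
    tokens = utterance.strip().lower().split()
    return 0 < len(tokens) <= max_tokens and all(
        t in VERBAL_FEEDBACK_TOKENS for t in tokens
    )
```

A real classifier would also use prosodic similarity (criterion (ii)) and visual cues (criterion (iii)), which plain text cannot capture.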
Explanation purpose
Explanations are provided to support transparency, where users can see some aspects of the inner state or functionality of the AI system. When AI is used as a decision aid, users would seek to use explanations to improve their decision making. If the system behaved unexpectedly or erroneously, users would want explanations for scrutability and debugging to be able to identify the offending fault and take control to make corrections. Indeed, this goal is important and has been well studied regarding user models and debugging intelligent agents. Finally, explanations are often proposed to improve trust in the system and specifically moderate trust to an appropriate level.
Transparency
The level to which a system provides information about its internal workings or structure and the data it has been trained with. This is similar to Lipton’s definition of transparency.
Fact
That which happened
Foil
That which was expected or plausible to happen
Causal explanation
Refers to an explanation that is focused on selected causes relevant to interpreting the observation with respect to existing knowledge.
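The fact/foil pair above is the backbone of contrastive explanation: the question answered is not simply "Why fact?" but "Why fact rather than foil?". A minimal sketch of that framing, with a hypothetical helper name:

```python
def contrastive_question(fact: str, foil: str) -> str:
    """Frame a contrastive why-question from a fact (what happened)
    and a foil (what was expected or plausible instead)."""
    return f"Why {fact} rather than {foil}?"

# e.g. contrastive_question("was the application rejected", "accepted")
```

A causal explanation then selects only the causes that discriminate the fact from the foil, rather than listing every contributing cause.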
EXPLAINING-WHY
It is a semantic type of explanation which explicates how a complex matter comes into being (e.g., explaining natural phenomena by reference to physical principles, or explaining a person’s action by explicating possible motives).
EXPLAINING-HOW
It is a semantic type of explanation which outlines procedural knowledge about processes and coordinations of actions in order to achieve a specific goal.
EXPLAINING-WHAT
It is a semantic type of explanation which describes, for example, the meaning of a term or a proverb. We consider these distinctions to be useful for describing ways of explaining technical artifacts because they reflect their intrinsic duality.
Deontic logic
Deontic logic is the field of philosophical logic that is concerned with obligation, permission, and related concepts. Alternatively, a deontic logic is a formal system that attempts to capture the essential logical features of these concepts.
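In standard deontic logic (SDL), the concepts named above are interdefinable: permission is the dual of obligation, and prohibition is obligation of the negation. The standard axiom D rules out deontic dilemmas:

```latex
% O = obligation, P = permission, F = prohibition
P\varphi \leftrightarrow \lnot O \lnot \varphi
\qquad
F\varphi \leftrightarrow O \lnot \varphi
\qquad
O\varphi \rightarrow P\varphi \quad \text{(axiom D)}
```

Read: something is permitted iff its negation is not obligatory; something is forbidden iff its negation is obligatory; and whatever is obligatory is permitted.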
Dialog act
In linguistics and in particular in natural language understanding, a dialog act is an utterance, in the context of a conversational dialog, that serves a function in the dialog. Types of dialog acts include a question, a statement, or a request for action. Dialog acts are a type of speech act.
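The dialog act types named above (question, statement, request for action) can be represented as a small tagged type. The sketch below is a toy, rule-based tagger for illustration only; real dialog systems use trained classifiers, and the function and rules here are hypothetical.

```python
from enum import Enum

class DialogAct(Enum):
    QUESTION = "question"
    STATEMENT = "statement"
    REQUEST = "request"

def tag_dialog_act(utterance: str) -> DialogAct:
    """Toy surface-form heuristic for the three act types above."""
    text = utterance.strip().lower()
    if text.endswith("?"):
        return DialogAct.QUESTION
    if text.startswith(("please", "could you", "would you")):
        return DialogAct.REQUEST
    return DialogAct.STATEMENT
```

Note the mismatch this heuristic ignores: an utterance's surface form and its function can diverge ("Could you open the window?" is formally a question but functionally a request), which is exactly why dialog acts are classified by function, not form.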