Other terms
AI systems
The notion of an “intelligent system” is neither defined precisely nor demarcated sharply from other systems, artifacts, or technical devices. Instead, the perception of what is judged to be intelligent changes with progress and exposure to such a system. Broadly speaking, an intelligent system is commonly understood as a computational system – such as a search engine, an online shopping assistant, a chatbot, or a cleaning robot – that leverages concepts, tools, and techniques from artificial intelligence in order to establish capabilities that are commonly attributed to humans while (still) being less typical of other software and hardware systems. Most notably, these capabilities let an AI system learn from experience and adapt to specific environmental conditions. As a consequence, an intelligent system exhibits a certain degree of autonomy, and its behavior is not completely prespecified. Importantly, intelligent systems are able to interact with humans or other systems through various modalities, for example, textual, visual, acoustic, or haptic signals. Whereas we share contemporary definitions of intelligent systems, we specifically focus on the abilities of AI systems to learn not only from prespecified data but also through interacting with humans.
Co-construction
Co-construction refers to an interactive and iterative process of negotiating both the explanandum and the form of understanding for explanations. It is a process that is performed mutually by sequentially building on, refining, and modifying the interaction: Each partner elaborates upon the other partner’s last contribution. The processes of scaffolding and monitoring (see below) guide this elaboration and direct it toward a specific form of understanding. In effect, both partners participate in moving toward a goal. Whereas the co-construction of an explanation takes place on the microlevel of an unfolding interaction and can thus be accessed directly, the process is crucially modulated on the macrolevel of the interaction.
Explainee
The addressee of an explanation
Explainer
The person who steers an explanation forward
Explanandum
The entity (event, phenomenon) that is the subject of an explanation
Explanans
The (verbal) way that an explanation can be expressed and co-constructed by both partners
Monitoring
Monitoring is a multimodal process in which the observed outcome is compared to what was predicted (Pickering and Garrod 2013). Via monitoring, the partners multimodally (i.e., using speech, gestures, and nonverbal behavior) keep track of the progress in a joint task. For example, the explainer will monitor the explainee’s understanding by evaluating whether her or his way of explaining has been successful or whether further elaboration or modification is needed. Vice versa, the explainee will monitor the explainer, for example, by indicating whether the offered level of detail is the one needed for a particular explanation.
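As a minimal sketch of this predict-compare-adapt loop (the function name, the scalar “understanding” score, and the threshold are illustrative assumptions, not an implementation from the literature):

\begin{verbatim}
# Illustrative sketch only: monitoring as a predict-compare-adapt loop.
# The scalar "understanding" score and the threshold are hypothetical.

def monitor_step(predicted_understanding, observed_feedback, threshold=0.5):
    """Compare the predicted outcome with the observed feedback and
    decide whether the current way of explaining needs modification."""
    error = abs(predicted_understanding - observed_feedback)
    if error > threshold:
        return "elaborate_or_modify"  # the explanation was not successful
    return "continue"                 # the explanation is on track

# The explainer expected good uptake (0.9), but feedback signals low
# understanding (0.2), so the way of explaining is adapted.
print(monitor_step(0.9, 0.2))  # -> elaborate_or_modify
\end{verbatim}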
Scaffolding
In the developmental literature, scaffolding refers to the way an expert provides guidance to a learner within a learning process by increasing or reducing the level of assistance in accordance with the partner’s performance. In our approach, we transfer the term from the area of learning to that of understanding. In accordance with the idea that understanding is constructed by both partners, both partners can scaffold each other—that is, provide the other partner with the information needed to arrive at a joint construction of the explanandum and the desired form of understanding. Together with the process of monitoring, scaffolding is not only a form of guidance but also of supervision, and the two together support the active participation of both partners.
Understanding
Whereas in the current debate on explainable systems (XAI), understanding refers to the problem of receiving “enough information” (Miller 2019, p. 11), in our approach, understanding is linked to what is relevant for the explainee. To account for variations in both the progress and the goals of explanations, we differentiate between practices of enabling and comprehension. With enabling, we refer to explanations in the context of choosing or performing an action. Comprehension, in contrast, accounts for a reflexive awareness that may lead to a conceptual framework for a phenomenon that goes beyond what is immediately perceivable. We expect further differentiations that will be explored in the individual projects.
Social-practice
Social practice determines the social relations and power structures in a given situation and thereby provides a specific (normative) background for how the interaction will play out in order to ‘place’ the explanation appropriately and, finally, for how that explanation will be interpreted. Social practice is a product of our actions with respect to each other that often has both social consequences and social presuppositions. These consequences and presuppositions speak to the two timescales that constitute a social practice: In terms of consequences, every explaining process re-establishes the relevant social practice; in terms of presuppositions, in turn, the experience of an explaining process will confirm or newly contribute to our expectations, roles, and partner models in relation to this particular social practice.
%-------------------------- Project A01 ----------------------------------------------
dyad
In sociology, a dyad (from the Greek δυάς dyás, “pair”) is a group of two people, the smallest possible social group. As an adjective, “dyadic” describes their interaction. The two individuals in a dyad can be linked via romantic interest, family relation, shared interests, work, partnership in crime, and so on. The relation can be based on equality but may also be asymmetrical or hierarchical (master–servant).
partner-model
a partner model is a main resource for ‘placing’ explanations; it contains knowledge and assumptions about the explainee with regard to her/his dialogical role, her/his general characteristics, or even this specific person
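Purely as an illustration of the three kinds of information just named, a partner model could be recorded as a simple data structure; all field names and values below are hypothetical:

\begin{verbatim}
# Illustrative sketch only; field names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PartnerModel:
    dialogical_role: str                 # e.g., "explainee"
    general_characteristics: dict = field(default_factory=dict)
    person_specific: dict = field(default_factory=dict)

model = PartnerModel(
    dialogical_role="explainee",
    general_characteristics={"expertise": "novice"},
    person_specific={"prefers_examples": True},
)
\end{verbatim}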
Obligation
Obligations represent what an agent should do, according to some set of norms. The notion of obligation has been studied for many centuries, and its formal aspects are examined using deontic logic.
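For illustration, in standard deontic logic (the modal system KD, one common formalization), permission ($P$) and prohibition ($F$) are defined from obligation ($O$), and axiom D states that whatever is obligatory is also permitted, ruling out conflicting obligations:

\[
P\varphi \;\equiv\; \neg O \neg \varphi, \qquad
F\varphi \;\equiv\; O \neg \varphi, \qquad
O\varphi \rightarrow P\varphi \quad \text{(D)}.
\]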
obligor
one who is bound by a legal obligation
obligee
one to whom another is obligated (as by a contract); specifically, one who is protected by a surety bond
Audience-design
Audience design is a process in a symmetric interaction by which speakers tailor what they say in order for the addressee to understand it. Critically, audience design involves taking into account a representation of the addressee’s perspective, and how it differs from one’s own perspective.
interlocutor
one who takes part in dialogue or conversation
reappraisal
re-interpreting or re-analyzing the emotional situation and/or goals
Persuasion
Persuasion can be seen as a further strategy to achieve a decision or behavior that is congruent with logical argumentation and not influenced by emotional processes.
feedback signals
Feedback signals are generally (i) short (i.e., they consist of minimal verbal/vocal expressions), (ii) locally adapted to their prosodic context (i.e., the speaker’s utterance) by being more similar in pitch to their immediate surroundings than regular utterances, or (iii) realized in the visual modality, for example, as head gestures or facial expressions.
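A toy heuristic can make properties (i)–(iii) concrete; the thresholds, field names, and the scalar pitch representation below are invented for illustration and are not taken from the project:

\begin{verbatim}
# Hedged sketch: checking the three feedback properties on a toy record.
# All field names and thresholds are hypothetical.

def looks_like_feedback(utterance):
    short = len(utterance["tokens"]) <= 2                   # (i) minimal
    pitch_adapted = abs(utterance["pitch"] -
                        utterance["context_pitch"]) < 20.0  # (ii) Hz
    visual = utterance.get("modality") == "visual"          # (iii)
    return visual or (short and pitch_adapted)

print(looks_like_feedback(
    {"tokens": ["mhm"], "pitch": 118.0, "context_pitch": 112.0}))  # True
\end{verbatim}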
verbal-feedback
We consider feedback ‘verbal/vocal’ if it is spoken, i.e., produced as a speech sound in the vocal tract of a listener. Examples of such feedback found in the ALICO corpus are genau (‘exactly’), ja (‘yes’), mhm (‘uh-huh’), and m.
explanation-purpose
Explanations are provided to support transparency, whereby users can see some aspects of the inner state or functionality of the AI system. When AI is used as a decision aid, users seek explanations to improve their decision making. If the system behaves unexpectedly or erroneously, users want explanations for scrutability and debugging, so that they can identify the offending fault and take control to make corrections. Indeed, this goal is important and has been well studied with regard to user models and debugging intelligent agents. Finally, explanations are often proposed to improve trust in the system and, specifically, to moderate trust to an appropriate level.
transparency
The degree to which a system provides information about its internal workings or structure and about the data it has been trained with – this is similar to Lipton’s definition of transparency
fact
that which actually happened
foil
that which was expected or plausible to happen instead
causal-explanation
refers to an explanation that focuses on selected causes relevant to interpreting the observation with respect to existing knowledge.
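Relating this entry to fact and foil above: on a simple difference-based reading of contrastive explanation (a sketch under that assumption, not the project’s model), the selected causes are those that distinguish the fact from the foil:

\begin{verbatim}
# Sketch assuming causes are available as plain sets (a strong
# simplification); names and values are invented for illustration.
fact_causes = {"rain", "no_umbrella", "long_walk"}
foil_causes = {"rain", "umbrella"}

# Candidate explanantia: causes of the fact that do not hold for the foil.
contrastive = fact_causes - foil_causes
print(contrastive)  # e.g., {'no_umbrella', 'long_walk'}
\end{verbatim}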
EXPLAINING-WHY
It is a semantic type of explanation which explicates how a complex matter comes into being (e.g., explaining natural phenomena by reference to physical principles, or explaining a person’s action by pointing to possible motives).
EXPLAINING-HOW
It is a semantic type of explanation which outlines procedural knowledge about processes and the coordination of actions in order to achieve a specific goal.
EXPLAINING-WHAT
It is a semantic type of explanation which describes, for example, the meaning of a term or a proverb. We consider these distinctions to be useful for describing ways of explaining technical artifacts because they reflect the intrinsic duality of such artifacts.
dialog-act
In linguistics and in particular in natural language understanding, a dialog act is an utterance, in the context of a conversational dialog, that serves a function in the dialog. Types of dialog acts include a question, a statement, or a request for action. Dialog acts are a type of speech act.
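As an illustration of how such acts can be represented for annotation or processing, here is a minimal data structure; the act inventory and all names are a hypothetical example, not a standard dialog-act tagset:

\begin{verbatim}
# Illustrative sketch only; the act inventory below is minimal and
# hypothetical, not a standard dialog-act tagset.
from dataclasses import dataclass
from enum import Enum

class DialogActType(Enum):
    QUESTION = "question"
    STATEMENT = "statement"
    REQUEST_FOR_ACTION = "request_for_action"

@dataclass
class DialogAct:
    speaker: str
    utterance: str
    act_type: DialogActType

turn = DialogAct("explainee", "Why did the system do that?",
                 DialogActType.QUESTION)
\end{verbatim}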