
Di Prisco, Domenico; Dello Russo, Silvia. (2026). Sensemaking and AI: Unraveling individuals' reactions to the black box in a three-study investigation. Technological Forecasting and Social Change (ISSN 0040-1625), 226: 1-16. DOI: 10.1016/j.techfore.2025.124491.

Sensemaking and AI: Unraveling individuals' reactions to the black box in a three-study investigation

di Prisco, D.; Dello Russo, S.
2026

Abstract

Artificial intelligence (AI) technologies promise to transform how people perform tasks and make decisions within organizations. Yet their impact on human reasoning processes remains poorly understood. When encountering an unexpected AI suggestion, individuals may either attempt to understand the reasoning behind it or blindly accept or reject it. What drives these different reactions, however, remains unexplored. Unpacking these factors is essential to advance our understanding of augmentation and to prevent major decision-making failures. This study addresses this gap through three experimental studies. In Study 1, we find that, when performing a task, the unexpected failure of one's own frames increases the likelihood of individuals both blindly accepting AI suggestions and effortfully trying to explain them. In Study 2, we shed light on the underlying reasons for these results by analyzing qualitative insights. We find that the unexpected failure of frames promotes "problematization pivoting", a phenomenon wherein individuals anchor their reasoning to opaque AI suggestions while ignoring other available cues. In Study 3, we add evidence of the potential negative performance implications associated with the effects documented in the earlier studies. Overall, these findings contribute to the literature on human-AI augmentation and sensemaking theory, while also alerting managers and policymakers to the perils associated with AI use.
Keywords

Artificial intelligence
Experiment
AI opacity
Sensemaking theory
Human-AI interaction
Augmentation
File in this record:
di Prisco & Dello Russo_2026_Tech Forec & Soc Change.pdf — Open Access; Type: Publisher's version; License: Creative Commons; Size: 1.97 MB; Format: Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11385/259858