
Beware of botshit: How to manage the epistemic risks of generative chatbots / Hannigan, Timothy R.; Mccarthy, Ian Paul; Spicer, André. - In: BUSINESS HORIZONS. - ISSN 0007-6813. - 67:5(2024), pp. 471-486. [10.1016/j.bushor.2024.03.001]

Beware of botshit: How to manage the epistemic risks of generative chatbots


Abstract

Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by predicting responses rather than knowing the meaning of their responses. As a result, chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as hallucinations. When humans uncritically use this untruthful content, it becomes what we call botshit. This article focuses on how to use chatbots for content generation work while mitigating the epistemic risks (i.e., risks relating to the process of producing knowledge) associated with botshit. Drawing on risk management research, we introduce a typology that frames how chatbots can be used based on two dimensions: response veracity verifiability and response veracity importance. The framework identifies four modes of chatbot work (authenticated, autonomous, automated, and augmented), each with an associated botshit-related risk (ignorance, miscalibration, routinization, and black boxing). We describe and illustrate each mode and offer advice to help chatbot users guard against the botshit risks that come with each mode.
Artificial intelligence
Botshit
Bullshit
Chatbots
Natural language processing
File attached to this record:

File: 2024 Botshit in BH.pdf
Type: Publisher's version
License: All rights reserved
Size: 678.74 kB
Format: Adobe PDF
Access: Repository administrators only

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11385/245822
Citations
  • Scopus: 18
  • Web of Science (ISI): 18
  • OpenAlex: not available