Do not put the blame on me: Asymmetric responses to service outcome with autonomous vehicles versus human agents

The present study examines customer responses to negative (vs. positive) service outcomes with an autonomous vehicle (AV) vis-à-vis a human agent. In the first study we manipulate the outcome of the service (negative vs. positive) and the type of service agent (AV vs. human) to evaluate customers' satisfaction with the service. Based on the results of the first study, in the second study we focus only on the negative outcome scenario and examine how the perceived blame of the AV (vs. human agent) affects customers' satisfaction. Finally, in the third study we introduce another mediator, perceived competence, which explains why people attribute less blame to the AV (vs. human agent). Customers' satisfaction is higher for the AV only in the case of a negative service outcome, while no differences emerge when the service has a positive outcome. Additionally, AVs (vs. human agents) are perceived as less competent and blameworthy, leading to higher customer satisfaction with the service in the case of a negative outcome. These results show a customer under-reaction to negative service outcomes with AVs, which has relevance for the design and policy implications related to AVs. Our results also provide insights into the psychological underpinnings of the acceptance of AVs replacing human agents. This study extends previous literature by showing circumstances in which customers are more (or less) satisfied with AVs (vs. human agents) and the psychological drivers of these asymmetrical responses.


1 | INTRODUCTION
Autonomous vehicles (hereafter AVs) are defined as vehicles that carry out all driving functions in an automated way, with driver or passenger interaction with the system limited to specifying the destination (Shladover, 2018). A great benefit AVs can bring to modern society is their potential to significantly reduce road crashes, as AVs can avoid human limitations such as fatigue, distraction, and other human errors that account for over 90% of all road crashes (Erskine et al., 2020). Given the superior performance of AVs in terms of both road safety and environmental benefits, people should prefer autonomous technologies over human agents. Yet, most customers remain skeptical toward AVs (Gill, 2020; Hulse et al., 2018).
Recent studies have made significant progress in identifying and investigating the factors influencing AV usage intention (Casidy et al., 2021; Huang & Qian, 2021). Most of them have examined the role of psychological and social factors such as trust (Yuen et al., 2020; Zhang et al., 2020), perceived risk (Casidy et al., 2021; Hulse et al., 2018), perceived ease of use (Baccarella et al., 2020), perceived usefulness (Park et al., 2021), social influence (Erskine et al., 2020), and morality (Gill, 2020) in shaping customers' responses to AVs (see Table 1).

TABLE 1 Literature review on the psychological underpinnings of customers' receptiveness to AVs

- Perceived usefulness; perceived ease of use: The study validates the positive effect of perceived usefulness on behavioral intention to adopt self-driving cars. The results further suggest that individuals with a generally negative attitude toward technologies are afraid that they might not be capable of handling the new technology.
- Casidy et al., 2021 (online experiment; fictitious car-magazine article, scenario-based). Risk perception; usage barriers: The results suggest that self-brand connections are positively associated with intentions to adopt radical innovation and that this effect is mediated by a reduced risk barrier.
- Erskine et al., 2020 (online survey; video on AVs). Performance expectancy; effort expectancy; social influence; hedonic motivation: Information seekers' objectives and issue involvement are important drivers of blog selection and determinants of the blog's influence.
- Gill, 2020 (online experiments; fictitious moral dilemma, scenario-based). Morality; attribution of responsibility: Results show that participants considered harm to a pedestrian more permissible with an AV as compared to the self as the decision agent in a regular car. This shift in moral judgments was driven by the attribution of responsibility to the AV and was observed for both severe and moderate harm, and when harm was real or imagined.
- Hulse et al., 2018 (survey; demographic and attitudinal questions on AVs). Perceived risk; perceived safety; societal benefits: Compared to human-operated cars, AVs were perceived differently depending on the road-user perspective: more risky when a passenger yet less risky when a pedestrian. AVs were also perceived as more risky than existing autonomous trains.
- Huang & Qian, 2021 (online survey; demographic and attitudinal questions on AVs). Need for uniqueness; risk aversion: The results indicate that the psychological trait of need for uniqueness strengthens the association between consumers' reasoning for AVs and their adoption intention, while the risk-aversion trait intensifies the negative relationships between consumers' reasoning against AVs and their attitude/adoption intention.
- Keszey, 2020 (survey data; demographic and attitudinal questions on AVs). Technological anxiety; privacy concerns; equal opportunity of mobility: The behavioral intention of innovative users is influenced by utilitarian and hedonic motivations, whereas laggards are driven by hedonic motivation, and a utilitarian motivation does not play a role.
- Liu et al., 2019 (survey; demographic and attitudinal questions on AVs). Perceived risk; perceived benefits; social trust: The results indicate that social trust retained a direct effect as well as an indirect effect on all acceptance measures. Compared to perceived risk, perceived benefit was a stronger predictor of all acceptance measures and also a stronger mediator of the trust-acceptance relationship.
- Social influence; initial trust; personality traits: Social influence and initial trust played the most important roles in AV acceptance. Sensation seekers and those with a higher openness to experience had a higher intention to adopt AVs. Neurotic users were less likely to accept AVs.
- The current study (online experiments; fictitious service scenario, scenario-based). Perceived competence; perceived blame; service outcome: Customers' satisfaction was higher for the autonomous vehicle only in the case of a negative service outcome, while no differences emerged when the service had a positive outcome. Autonomous vehicles (vs. human agents) are perceived as less competent and blameworthy, leading to higher customer satisfaction with the service in the case of a negative outcome.

At the same time, the literature suggests that the service outcome is not always error-free (Choi et al., 2021). Service outcome failure is defined as "a service that does not fulfill the basic service need" (Smith et al., 1999, p. 358). Given that responsibility attributions shape judgments of AVs (Gill, 2020), it is important to know why more (vs. less) responsibility is ascribed to the actions of AVs (vs. human agents). However, there is very limited research, especially in the AV context, on the psychological antecedents of attribution of blame to AVs (see Table 1).

We believe that perceived competence of the AV vis-à-vis the human agent, defined as the degree to which customers perceive that the service provider possesses the required skills, knowledge, and capabilities to adequately deliver the service product (Belanche et al., 2021), can explain customers' blame attribution (Choi et al., 2021).

In sum, the current study provides empirical evidence that a negative service outcome (e.g., a taxi arrives 40 min late) leads to lower customer satisfaction when the service is provided by a human agent compared to an AV. Moreover, we find that this effect is driven by perceived agent competence and by blame attribution, which are both higher for human agents than for AVs. This comparison is extremely important in service pseudo-relationships, in which a customer usually interacts with different service agents, such as in train, rent-a-car, or taxi services. Therefore, we study customer responses to negative (vs. positive) service outcomes in the context of a taxi service. We also contribute to the stream of research focused on understanding the psychological underpinnings of customers' receptiveness to AVs (Huang & Qian, 2021; Manfreda et al., 2021; Park et al., 2021). Little research to date has studied how and why users attribute blame to AVs versus human agents for service tasks bearing negative outcomes (Erskine et al., 2020; Gill, 2020; Keszey, 2020).
2 | CONCEPTUAL FRAMEWORK AND HYPOTHESES DEVELOPMENT

| Negative outcome-positive outcome effect in AVs
Customer acceptance of AVs, defined as vehicles in which a human agent does not carry out any driving-related activities (Shladover, 2018), has gained significant interest from marketing academics and practitioners in recent years (Huang & Qian, 2021; Park et al., 2021).
Despite the growing interest in this topic, previous marketing research has produced mixed results (Erskine et al., 2020; Gill, 2020; Manfreda et al., 2021). Some recent empirical evidence suggests that customers' acceptance of AVs has increased, especially since the onset of the COVID-19 pandemic (Erskine et al., 2020; Keszey, 2020). Accordingly, a report on the public perception of driverless cars in the United States found growing enthusiasm and general acceptance of AVs, with 62% of people surveyed saying that AVs are the way of the future (Vigliarolo, 2020). Moreover, Du et al. (2021) suggest that recent mass-media reports on the advantages of self-driving cars have affected subjective norms and self-perception, and thereby increased travelers' trust and behavioral intentions toward AVs.
Conversely, other studies on customer responses to AVs suggest that humans prefer the service of other humans over artificially intelligent systems (Hulse et al., 2018). Specifically, research shows that resistance toward AVs occurs as a result of various psychological barriers (Joachim et al., 2018). For example, customers' reluctance to use AVs is driven by safety concerns (Howard & Dai, 2014), ethical issues (Lin, 2016) and perceived loss of control (Baccarella et al., 2020). Moreover, customers might resist AVs due to fears of technological malfunctioning or because AVs could take away the joy of driving (Casidy et al., 2021). However, the relevance of these barriers in the context of AVs remains an empirical question. For example, Joachim et al. (2018) found that "the effect sizes of each barrier … vary with the context present" (p. 105), thereby adding to a growing evidence base that suggests that the relative effect of these antecedents is highly context dependent. Previous literature on service interaction between customer and autonomous technology indicates that one such context variable could be the service outcome (Gill, 2020;Pascual Nebreda et al., 2021;Srinivasan & Sarial-Abi, 2021). In particular, some studies show that customers have more negative responses for a self-service technology failure than for an employee failure (Fan et al., 2016). Other studies suggest that customers respond more positively toward an autonomous service technology than toward a human agent in case of a service failure (Leo & Huh, 2020).
Previous empirical evidence on information processing reports asymmetry in responses to positive versus negative outcomes (Ertac, 2011;Soroka, 2006). In particular, works in psychology suggest that asymmetry is the product of differences in perception. Specifically, people might view negative information as particularly informative, and act accordingly (Fiske & Taylor, 1991). Economics literature suggests a similar story, by describing asymmetry not as a function of perception, but simply as a process of reacting differently to positive versus negative perceptions (Kahneman & Tversky, 2013).
Here, we conceive of the "negative outcome-positive outcome" effect as the phenomenon by which a negative versus positive service outcome determines differential customer responses (e.g., satisfaction) toward AVs (vs. human agents). The "negativity bias" principle suggests that negative compared to positive information is perceived as more reliable and useful, thus exerting a greater impact on customers' behavior (Baumeister et al., 2001). Accordingly, people weigh a negative outcome more than a positive one, and thus their responses toward AVs (vs. human agents) are more likely to differ when a service presents negative (vs. positive) outcomes. On the contrary, customers are not averse to gains (Kahneman & Tversky, 2013), and therefore their immediate responses following a positive service outcome would not be as steep as those following a negative one. Drawing on these empirical and theoretical insights, we hypothesize that:

H1. Customer responses to AVs (vs. human agents) will differ only in the case of a negative service outcome.

| Blame attribution to AVs (vs. human agents)
Attribution theory has its origins in the social psychology work of Kelley. Prior research shows that, in the case of shared control between human and machine, when both agents make an error the responsibility attributed to the machine is reduced. Moreover, previous studies have offered empirical evidence supporting the impact of customer attributions on satisfaction (Jörling et al., 2019). For example, past empirical evidence on customer responses toward negative (vs. positive) decision outcomes found higher perceived blame to be associated with lower satisfaction (Botti & McGill, 2006) and negative service evaluations (Kalamas et al., 2008).
Extending this empirical evidence to AVs, we suggest that perceived blame attribution could explain the "negative outcome-positive outcome" effect, namely the higher customer satisfaction with the service provided by the AV compared to the human agent when the service outcome is negative. Specifically, we suggest that AVs (vs. human agents) will be perceived as less blameworthy for the negative service outcome, which in turn will lead to higher customer satisfaction for AVs compared to human agents. Thus, we hypothesize that:

H2. In the case of a negative service outcome, customers will perceive the AV (vs. the human agent) as less blameworthy, and this will lead to higher customer satisfaction when the service is provided by the AV.

| Perceived competence of AVs (vs. human agents)
As previously discussed, attribution of blame for service outcomes represents an important factor for the success of service providers (Botti & McGill, 2006). However, most research has focused on the consequences of service outcome attributions, whereas the antecedents and the process of blame attributions have been largely neglected (Jörling et al., 2019).
Past literature on customer responses to autonomous technologies suggests that service robot competence, defined as a robot's ability to accurately and reliably perform a frontline task, can influence customers' expected service value (Belanche et al., 2021). Delivering a competent service requires that the service provider possesses capabilities, knowledge, and skillfulness. Importantly, competence entails a significant degree of agent flexibility, both in the moment and diachronically, especially in service situations that require the kind of adaptiveness needed to face sudden environmental changes or outcome failures (Miracchi, 2019). Moreover, the perception of an autonomous technology performing the promised service accurately represents an important informational cue for inferring its functional value (Habel et al., 2017; Nguyen et al., 2021). In other words, if customers know that an AV provides an error-free and timely service, they will perceive the AV service as fulfilling their needs. Previous literature also reports that customers are more satisfied with services that are competently performed because these save time and effort in achieving the end results (Homburg et al., 2005).
However, recent studies on people's responses to autonomous technologies suggest that customers tend to trust humans and resist autonomous technologies, because artificially intelligent systems are perceived as less capable of providing reliable, competent, and useful information (Baccarella et al., 2020). Customers perceive automated systems as less flexible and adaptable, especially when it comes to circumstances involving high uncertainty (Leo & Huh, 2020) or situations requiring explanations, such as in the case of a negative service outcome (Huang & Qian, 2021). In addition, previous research suggests that a service provider's competence is more prominent in outcome failures than in process failures (Choi et al., 2021) and that perceived competence of autonomous technologies affects mainly utilitarian expectations (Belanche et al., 2021). Based on the aforementioned theoretical and empirical findings, we hypothesize that the lower competence attributed to AVs (vs. human agents) in the case of negative service outcomes will lead to lower blame attribution and therefore to higher customer satisfaction.
H3. In the case of a negative service outcome, customers will perceive the AV (vs. the human agent) as less competent, and this will lead to lower attribution of blame and, in turn, to higher customer satisfaction when the service is provided by the AV.

| Pretest
We first conducted a pretest aimed at validating the correct identification of the service provider and the quality of the service described in the scenarios that would later be used in the main studies.
The pretest involved 111 US MTurk workers (Mage = 32.74 years, SD = 9.26, 34.2% female). Participants were first instructed to rate a daily-life episode, to read the proposed questions carefully, and to answer sincerely. Then, they were randomly and evenly exposed to one of four scenarios describing a taxi service with an AV or a human agent, with a negative or positive service outcome. In particular, participants were asked to imagine going to an appointment with a friend; not knowing the right directions and not owning a car, they decide to call for a taxi (Appendix).
In the negative outcome scenario, the main emphasis was on the delay of the taxi's arrival. In the positive outcome condition, the taxi was described as arriving on time (Appendix). Moreover, to ensure that participants read the scenarios attentively, they could not advance past each scenario for 20 s, and in the AV scenarios participants were provided with the definition of AVs proposed by Wikipedia.org.
After exposure to the scenarios, participants were asked to indicate whether the service agent was a human or the taxi was a self-driving vehicle using a seven-point semantic differential scale anchored at 1 with "The taxi agent was a human" and at 7 with "The taxi was a self-driving vehicle". Then, to test the service outcome, we used a seven-point semantic differential scale anchored at 1 with "The taxi ride had a negative outcome" and at 7 with "The taxi ride had a positive outcome".

In Study 1, participants were first instructed to rate a daily-life episode, to read the proposed questions carefully, and to answer sincerely. Then, they were randomly and evenly exposed to one of four scenarios describing a taxi service with an AV or a human agent and bearing a negative or positive service outcome, as described in the pretest.

| Measures
Following the scenario, we measured customer satisfaction through a three-item scale adapted from Voss et al. (1998) ("I was satisfied with the provided service"; "I was delighted by the provided service"; "I was happy with the provided service") on a seven-point Likert scale anchored with "Completely disagree/Completely agree" as endpoints (M = 4.14; SD = 2.12; α = 0.96).
After that, demographic questions (age and gender) were administered and then participants were debriefed and thanked for their participation.

| RESULTS
We performed a two-way ANOVA with the aim of testing the moderating role of the service outcome. These results lend support to H1 (see Figure 2).

In Study 2, participants were first instructed to rate a daily-life episode, to read the proposed questions carefully, and to answer sincerely. Then, they were randomly and evenly exposed to one of two scenarios describing a taxi service provided by an AV or by a human agent and bearing a negative service outcome (the same as those used in the pretest and in Study 1; see Appendix).
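For readers who want to see the shape of this moderation test, the interaction term of a 2 × 2 between-subjects ANOVA can be computed directly from cell, row, and column means. The sketch below does so on simulated data; all cell means, sample sizes, and variable names are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated balanced 2x2 design (illustrative numbers only):
# factor A: agent (0 = human, 1 = AV); factor B: outcome (0 = negative, 1 = positive)
n_cell = 50
cell_means = {(0, 0): 2.5, (1, 0): 4.0,   # negative outcome: AV rated higher
              (0, 1): 5.5, (1, 1): 5.5}   # positive outcome: no difference
rows = []
for (a, b), mu in cell_means.items():
    for v in rng.normal(mu, 1.0, n_cell):
        rows.append((a, b, v))
A = np.array([r[0] for r in rows])
B = np.array([r[1] for r in rows])
Y = np.array([r[2] for r in rows])

def two_way_anova_interaction(A, B, Y):
    """F statistic for the A x B interaction in a balanced 2x2 ANOVA."""
    grand = Y.mean()
    cm = {(a, b): Y[(A == a) & (B == b)].mean() for a in (0, 1) for b in (0, 1)}
    am = {a: Y[A == a].mean() for a in (0, 1)}          # row (agent) means
    bm = {b: Y[B == b].mean() for b in (0, 1)}          # column (outcome) means
    n = len(Y) // 4                                     # per-cell n (balanced)
    ss_inter = n * sum((cm[a, b] - am[a] - bm[b] + grand) ** 2
                       for a in (0, 1) for b in (0, 1))
    ss_error = sum(((Y[(A == a) & (B == b)] - cm[a, b]) ** 2).sum()
                   for a in (0, 1) for b in (0, 1))
    df_error = len(Y) - 4
    return (ss_inter / 1) / (ss_error / df_error)       # df_interaction = 1

F = two_way_anova_interaction(A, B, Y)
print(f"interaction F(1, {len(Y) - 4}) = {F:.2f}")  # large F: outcome moderates the agent effect
```

A significant interaction here corresponds to the paper's claim that the agent effect appears only under a negative outcome; in practice the same test would be run with standard statistical software rather than by hand.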

| Measures
Following the scenario, we collected participants' satisfaction with the service, perceived blame of the service provider, and demographic information (i.e., age and gender). Customer satisfaction was measured through the same three-item scale adapted from Voss et al. (1998) used in Study 1.

The results of Study 2 support H2. We found that the AV is perceived as less blameworthy for the negative service outcome than the human agent, and this leads to higher customer satisfaction with the service.

| Study 3: The mediation effect of perceived competence on perceived blame
Study 3 aims to provide further insights into the psychological drivers of the effect observed in Study 2 by demonstrating that, in the case of a negative service outcome, customers show higher satisfaction toward the AV because AVs are perceived as less competent, and therefore less blameworthy, than human agents.

| Design and participants
FIGURE 2 Results of Study 1

Similar to Study 2, this study used a single-factor (service provider type: AV vs. human agent) between-subjects experimental design. Data were collected through an online experiment distributed via email to a convenience sample of 124 students (Mage = 26.96 years, SD = 3.47, 60.5% female) and former students at a large European university.
Participants were randomly exposed to one of two scenarios describing a taxi service provided by an AV (vs. a human agent) and bearing the same negative service outcome as that used in our previous studies (see Appendix).

| Measures
Following the scenario, we collected participants' satisfaction with the service, perceived competence and perceived blame of the service provider, and demographic information (i.e., age and gender).

A possible reason may be that, differently from the previous study, Study 3 involved a sample of students, who are less used to taking taxis than a more adult population, and therefore a human agent may be perceived as more reassuring.
However, as expected, results showed that perceived blame negatively affected customer satisfaction (b = −0.28, SE = 0.47; t = −5.96, p < .001), confirming our conceptualization; in fact, the indirect effect of the independent variable (type of service provider) on the dependent variable (customer satisfaction) via perceived competence and perceived blame was negative and significant (b = −0.08; 95% CI: −0.16, −0.06). Moreover, the indirect effect of the type of service provider on customer satisfaction via perceived blame was also negative and significant (b = −0.14; 95% CI: −0.27, −0.03), thus confirming again H2. Conversely, the indirect effect of the type of service provider on customer satisfaction via perceived competence was positive but not significant (b = 0.04; 95% CI: −0.05, 0.09).
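The serial indirect effect reported above (provider → competence → blame → satisfaction) is the product of three regression path coefficients, and its confidence interval is typically obtained with a percentile bootstrap. A minimal sketch on simulated data follows; all coefficients, sample sizes, and variable names are invented for illustration and are not the study's estimates, and the sign of the effect depends on how the provider variable is coded.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulated data mirroring the hypothesized serial mediation (numbers invented):
# X = provider (0 human, 1 AV), M1 = perceived competence (lower for AV),
# M2 = perceived blame (driven by competence), Y = satisfaction (lowered by blame).
X = rng.integers(0, 2, n).astype(float)
M1 = 5.0 - 1.0 * X + rng.normal(0, 1, n)                       # path a1 < 0
M2 = 1.0 + 0.6 * M1 + 0.2 * X + rng.normal(0, 1, n)            # path d21 > 0
Y = 6.0 - 0.5 * M2 + 0.1 * M1 + 0.1 * X + rng.normal(0, 1, n)  # path b2 < 0

def ols(y, *cols):
    """OLS coefficients, intercept first."""
    Z = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

def serial_indirect(X, M1, M2, Y):
    a1 = ols(M1, X)[1]         # X -> M1
    d21 = ols(M2, X, M1)[2]    # M1 -> M2, controlling for X
    b2 = ols(Y, X, M1, M2)[3]  # M2 -> Y, controlling for X and M1
    return a1 * d21 * b2       # serial indirect effect X -> M1 -> M2 -> Y

# Percentile-bootstrap 95% CI for the serial indirect effect
boots = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    boots.append(serial_indirect(X[i], M1[i], M2[i], Y[i]))
lo, hi = np.percentile(boots, [2.5, 97.5])
# With this coding the indirect effect is positive: AV -> less competence ->
# less blame -> higher satisfaction. Mediation is supported if the CI excludes 0.
print(f"serial indirect effect = {serial_indirect(X, M1, M2, Y):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

In applied work this analysis would normally be run with a dedicated mediation tool; the hand-rolled version above only makes the product-of-paths logic explicit.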
The results of Study 3 support H3. We found that the AV is perceived as less competent, and therefore less blameworthy, for the negative service outcome than the human agent, and this results in higher satisfaction when the service is provided by the AV.

| GENERAL DISCUSSION
AV technology is expected to revolutionize mobility systems in the coming decades, with substantial benefits in terms of efficiency of resource utilization and the ecological footprint of mobility (Park et al., 2021; Zhang et al., 2020). Consequently, the question of how customers respond to AVs (vs. human agents), and the underpinnings of such responses, is of pivotal importance. Previous literature seems to show a robust aversion to AVs stemming from relatively low trust in, and low tolerance for errors of, autonomous technologies (Baccarella et al., 2020; Liu et al., 2019). At the same time, more recent findings show cases of low aversion toward AVs, identifying different predictors of behavioral intentions to use AVs, such as self-efficacy (Chen & Yan, 2019) and hedonic motivations (Erskine et al., 2020; Keszey, 2020).
While the above studies have improved our understanding of customers' reactions to AVs, they did not take into account the valence (negative vs. positive) of the outcome of the service provided by such technologies.
The present research addressed this gap in the literature by studying how service outcome valence (negative vs. positive) affects customer responses (e.g., perceived competence, perceived blame, and customer satisfaction) toward AVs (vs. human agents). The results of Study 1 confirm H1 by demonstrating that, compared to a human agent, an AV generates more positive responses in the case of a negative service outcome. However, in the case of a positive service outcome there is no difference in the way customers respond to an AV versus a human agent. Study 2 shows the underlying mechanism driving such an effect; in particular, the results confirm H2 by demonstrating that, in the case of a negative service outcome, the higher blame attributed to the human agent (vs. the AV) leads to a greater decrease in customer satisfaction for the human agent than for the AV. Finally, the results of Study 3 show that, in the case of a negative service outcome, the higher competence attributed to the human agent (vs. the AV) generates higher blame, and therefore a decrease in customer satisfaction for the human agent (vs. the AV).

| THEORETICAL CONTRIBUTION
First of all, the results of this research contribute to the literature on asymmetric responses to valenced information (Ertac, 2011; Soroka, 2006). No prior empirical investigation has explored whether the "negative outcome-positive outcome" asymmetry extends to autonomous technologies in transportation services. We extend this stream of research by providing empirical evidence for this effect in the transportation industry and, more specifically, in customer responses to AVs. Specifically, we find that in the case of a positive service outcome, customers' differential perceptions regarding AVs versus human agents do not affect their satisfaction with the service, because the service is error-free (e.g., fulfills the basic service need) and thus there are no negative aspects (e.g., outcome failure) of the service for which the provider can be blamed. On the contrary, we find that in the outcome failure scenario the negative aspects of the service activate the differential perceptions (e.g., competence and blame) regarding AVs versus humans, which, in turn, affect the level of customer satisfaction with the service.
Second, our research contributes to the literature focused on understanding consumers' receptiveness to AV technology (Erskine et al., 2020; Keszey, 2020). While consumers seem to be skeptical about AVs, and more generally about AI technologies (Alexander et al., 2018), we show that the lower competence attributed to AVs compared to human agents can paradoxically trigger lower customer dissatisfaction in the case of a negative service outcome with an AV. One possible explanation for this result may be derived from customers' perception of AVs' lower flexibility, especially when it comes to circumstances that require a certain degree of adaptiveness (e.g., a sudden increase in road traffic), or to situations of outcome failure that require explanations (e.g., the taxi arrives late) (Huang & Qian, 2021).
Customers expect that a human agent (vs. an AV) is more flexible and adaptable and therefore capable of making adjustments to the service delivery (e.g., speed up) in order to compensate for the service failure (e.g., delay). When this does not occur customers feel less satisfied with the service outcome. This may also reflect customers' general perception that autonomous technologies are more predictable than humans and thus less capable of making adjustments to the service delivery based on contextual events (Hong & Williams, 2019;Ivanova et al., 2020).
Third, our findings show that human agents are perceived to be more competent than AVs, and therefore they are perceived to be more responsible in the case of a negative service outcome. This result is in line with the recent work of Gill (2020) on AV blame and responsibility, according to which individuals tend to be more indulgent toward AVs (compared to human agents) when the vehicle harms a pedestrian. Consequently, this result reinforces the stream of literature documenting customers' positive responses toward AVs (Erskine et al., 2020; Keszey, 2020). A possible explanation for the lower blame attributed to AVs compared to human agents may stem from the common belief that AV technology is operated by computer algorithms programmed by humans, who are likely to bear most of the blame for AV failures (Hao, 2019). This is likely to occur even in highly automated driving systems where humans have limited control over the system's behavior. Therefore, in a situation of service failure, humans act like a "liability sponge", absorbing all responsibility in AV accidents no matter how little or how unintentionally they are involved. Another possible explanation for the lower blame attributed to AVs (vs. human agents) is provided by the notion that customers believe autonomous technologies to have lower agency than humans because AVs do not have "human-like" qualities (Pozharliev et al., 2021). In such cases, customers may not hold AVs fully responsible for actions that may cause a service failure.
Finally, our results contribute to a better understanding of how people identify causal links when forming judgments about a negative event involving human versus AI agents, and therefore to attribution theory (Fiske & Taylor, 1991). In that sense, our results are coherent with a recent study by Leo and Huh (2020) showing that people attribute less responsibility to a robot than to a human for a service failure (e.g., in pharmacy and restaurant settings) because people perceive robots to have less controllability over the task. On the other hand, our findings are opposite to those of Hong et al. (2020), who demonstrated that in the case of a car accident consumers tend to blame AVs more than human agents, because people generally identify with AVs less than they do with humans. Differently from our study, however, Hong et al. (2020) considered a very extreme negative outcome (i.e., an accident with a victim who died) and did not take into account the agent's perceived competence.

| LIMITATION AND FUTURE RESEARCH
The present study has some limitations that provide opportunities for further investigation. First, our research presented one particular example of a negative (vs. positive) service outcome, namely a taxi service, but future research could consider additional negative episodes in order to validate or extend our findings. In particular, there are situations in which a problematic process is followed by an acceptable final outcome, such as when the taxi arrives late but, by speeding up, still reaches the final destination on time. Future research could examine consumers' reactions toward human versus automated agents in the context of a problematic service process that ends with a failure versus with an acceptable outcome. This question is important because the marketing literature distinguishes between an outcome failure, which occurs when the service firm does not fulfill the basic service need, and a process failure, which occurs when the service delivery is flawed or deficient in some way (Smith et al., 1999). Since our service scenario described an outcome failure, future research could also examine consumers' responses toward human versus automated agents in the context of a process failure versus an outcome failure.
Second, the present research identified competence perceptions and blame as mechanisms explaining the higher customer satisfaction with AVs (vs. human agents), without taking into account task controllability (Leo & Huh, 2020) or personal traits that could mitigate or enhance these effects, such as anxiety (Keszey, 2020).
Moreover, further research could take into account additional control variables, such as perceived familiarity with AI in general and AVs in particular, and subjective knowledge of such technology (Qian et al., 2017; Bock et al., 2020).
Finally, we did not consider other important aspects deriving from the adoption of AVs, such as sustainability implications, which, as noted by Mora et al. (2020), are still almost unexplored. Therefore, we encourage further research to expand our view of AVs by bringing in sustainability issues, especially from an environmental perspective. For example, future studies could examine whether the lower blame attribution and the higher satisfaction with AVs (vs. human agents) in service failure can be partially mediated by differential perceptions about the environmental impact that these two service providers have.

APPENDIX

NEGATIVE SERVICE OUTCOME AV

Imagine you are on a vacation abroad to visit a friend of yours.

Your friend, whom you have not seen for several months, sets up an appointment with you in the city center, but you do not know how to reach it. Thus, you decide to take a taxi and book a trip with your smartphone through an autonomous driving taxi service.
Although the autonomous vehicle was expected within 10 min, it arrived 30 min late, and you arrive at your destination 40 min later than expected.
By autonomous taxi we mean a vehicle that is capable of sensing its environment and moving safely with no human input.

NEGATIVE SERVICE OUTCOME HUMAN AGENT
Imagine you are on a vacation abroad to visit a friend of yours.
Your friend, whom you have not seen for several months, sets up an appointment with you in the city center, but you do not know how to reach it. Thus, you decide to take a taxi and book a trip by calling the city taxi service with your smartphone.
Although the taxi driver was expected within 10 min, he arrived 30 min late, and you arrive at your destination 40 min later than expected.

POSITIVE SERVICE OUTCOME AV
Imagine you are on a vacation abroad to visit a friend of yours.
Your friend, whom you have not seen for several months, sets up an appointment with you in the city center, but you do not know how to reach it. Thus, you decide to take a taxi and book a trip with your smartphone through an autonomous driving taxi service.
The autonomous vehicle arrived perfectly on time, and you arrive at your destination 5 min earlier than expected.
By autonomous taxi we mean a vehicle that is capable of sensing its environment and moving safely with no human input.

POSITIVE SERVICE OUTCOME HUMAN AGENT
Imagine you are on a vacation abroad to visit a friend of yours.
Your friend, whom you have not seen for several months, sets up an appointment with you in the city center, but you do not know how to reach it. Thus, you decide to take a taxi and book a trip by calling the city taxi service with your smartphone.
The taxi driver arrived perfectly on time, and you arrive at your destination 5 min earlier than expected.