
Study Shows Humans Overtrust AI In Life-And-Death Scenarios

A recent study conducted by scientists at the University of California, Merced, has demonstrated that humans are prone to placing excessive trust in artificial intelligence (AI) when making critical life-and-death decisions, even when the AI is known to be unreliable.

The research, published in Scientific Reports, used a simulation in which test subjects were asked to make rapid decisions about drone strikes, classifying targets as “friend” or “foe.” Participants were briefly shown photos of eight individuals before making their decisions. An AI system then offered a second opinion, but its advice was in fact random.

Despite being warned about the AI’s fallibility, two-thirds of the participants were influenced by the system’s random guidance. According to the study’s lead investigator, Professor Colin Holbrook, the results highlight growing concern about overtrust in AI, particularly in high-risk situations involving uncertainty.

Holbrook emphasized that the implications of the findings go beyond military contexts. Similar risks of overreliance on AI could emerge in situations where police might use AI to decide on lethal force or where paramedics might rely on AI in medical emergencies. The research also suggests that AI influence could extend to major life decisions, such as purchasing a home.

The study underscores the need for caution when integrating AI into critical decision-making processes. Holbrook noted, “We should have a healthy skepticism about AI, especially in life-or-death decisions. These are still devices with limited abilities.”

This research serves as a reminder that despite AI’s remarkable capabilities, its reliability across different contexts cannot be assumed without thorough evaluation.

Can AI be improved?

Reducing AI bias is a critical issue in ensuring fairness and reliability in AI systems. Here are several strategies to mitigate bias:

1. Diverse and Representative Data

AI models often reflect the biases present in their training data. One of the key ways to reduce bias is to ensure that the data used to train AI systems is diverse, inclusive, and representative of different populations. This includes:

  • Auditing Data: Regularly auditing datasets for imbalances or underrepresentation of specific groups can help identify potential sources of bias; a minimal audit is sketched after this list.
  • Synthetic Data: In cases where real-world data is biased or limited, synthetic data can be used to fill gaps, ensuring that the AI system is exposed to a wider range of scenarios.
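
As a concrete illustration of such an audit, the sketch below uses pandas to check group representation and label balance in a small dataset. The column names, the example records, and the 30% representation threshold are all assumptions made for the example, not part of the study.

import pandas as pd

# Hypothetical training data; in practice, load the real dataset.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "F", "M", "M", "M"],
    "label":  [1,    0,   1,   1,   0,   1,   0,   1],
})

# Representation audit: how large is each group?
group_share = df["gender"].value_counts(normalize=True)
print("Group share:\n", group_share)

# Label-balance audit: does the positive rate differ across groups?
print("Positive rate per group:\n", df.groupby("gender")["label"].mean())

# Flag groups below an assumed 30% representation threshold.
underrepresented = group_share[group_share < 0.30]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))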

2. Algorithmic Transparency

Ensuring transparency in how AI models are trained and making the algorithms understandable can help mitigate bias. This can involve:

  • Open Sourcing Models: Allowing public scrutiny of models can enable communities to identify biases.
  • Explainability Techniques: Techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) make AI decisions more transparent, enabling humans to detect and correct biased decisions; a short SHAP example follows this list.
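
To make the explainability point concrete, here is a minimal SHAP sketch, assuming the shap and scikit-learn packages are installed; the public diabetes dataset merely stands in for real application data.

import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model on a public dataset (a stand-in for real data).
data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])  # shape: (100, n_features)

# Mean absolute Shapley value per feature gives a global importance ranking;
# comparing such rankings across demographic groups can expose biased
# reliance on sensitive or proxy features.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")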

3. Bias Audits and Testing

Conducting bias audits and regular testing throughout the AI development process can help detect bias early on. This involves:

  • Pre-launch Testing: Evaluating AI models against known benchmarks for fairness, ensuring they perform equally well across different demographic groups; two common fairness checks are sketched after this list.
  • Post-launch Monitoring: Continuous monitoring of AI performance in the real world, followed by updates and corrections as new biases emerge.
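
A minimal pre-launch fairness check can be written with plain NumPy, as below, computing two standard gaps between demographic groups; the prediction and group arrays are hypothetical stand-ins for real evaluation data.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # Gap in positive-prediction rate between the best- and worst-treated groups.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    # Gap in true-positive rate (recall) between groups.
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Hypothetical audit data: labels, model predictions, sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])

print("Demographic parity gap:", demographic_parity_difference(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_difference(y_true, y_pred, group))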

4. Bias Mitigation Algorithms

Certain techniques are designed specifically to reduce bias in AI. These include:

  • Fairness Constraints: Building fairness constraints directly into the optimization process of the AI system. For example, models can be penalized during training for producing biased outcomes; a minimal training sketch follows this list.
  • Adversarial Debiasing: A technique in which one model learns to perform a task while an adversary model tries to predict a sensitive attribute from its outputs, and the first model is trained to defeat the adversary, improving fairness during training.
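
As one illustration of a fairness constraint, the PyTorch sketch below adds a demographic-parity penalty (the gap in mean predicted scores between two groups) to a standard classification loss. The synthetic data, the linear model, and the penalty weight lam are all assumptions of the example, not a prescribed recipe.

import torch

# Synthetic data: features X, binary labels y, binary sensitive attribute a.
torch.manual_seed(0)
X = torch.randn(200, 5)
y = (X[:, 0] + 0.5 * torch.randn(200) > 0).float()
a = (torch.rand(200) > 0.5).float()

model = torch.nn.Linear(5, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
bce = torch.nn.BCEWithLogitsLoss()
lam = 1.0  # fairness-penalty weight; tuning it trades accuracy for parity

for _ in range(200):
    opt.zero_grad()
    logits = model(X).squeeze(1)
    scores = torch.sigmoid(logits)
    task_loss = bce(logits, y)
    # Demographic-parity penalty: gap in mean predicted score between groups.
    parity_gap = (scores[a == 1].mean() - scores[a == 0].mean()).abs()
    (task_loss + lam * parity_gap).backward()
    opt.step()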

5. Human Oversight and Inclusive Development

Incorporating human oversight, especially from diverse teams, can help reduce AI bias. Ensuring that the development teams reflect the diversity of the end-users can provide better insights into potential biases.

  • Diverse Development Teams: A diverse group of developers and researchers can better anticipate and address biases in the AI systems they create.
  • Human-in-the-Loop Systems: These systems allow human intervention in critical AI decisions, ensuring that biased AI outputs can be overridden; a simple routing sketch follows this list.
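
A human-in-the-loop system can be as simple as a routing rule: the sketch below defers low-confidence or high-stakes predictions to a reviewer instead of acting on them automatically. The 0.90 threshold and the review function are hypothetical placeholders.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; would be set per application

def request_human_review(prediction: str, confidence: float) -> str:
    # Placeholder: a real system would enqueue the case for a trained
    # reviewer and wait for their judgment.
    print(f"Deferred to human: model suggested {prediction!r} "
          f"at confidence {confidence:.2f}")
    return "PENDING_HUMAN_REVIEW"

def decide(prediction: str, confidence: float, high_stakes: bool) -> str:
    # High-stakes calls are always reviewed; others only when uncertain.
    if high_stakes or confidence < CONFIDENCE_THRESHOLD:
        return request_human_review(prediction, confidence)
    return prediction

print(decide("foe", 0.97, high_stakes=True))      # always reviewed
print(decide("friend", 0.72, high_stakes=False))  # low confidence, reviewed
print(decide("friend", 0.95, high_stakes=False))  # auto-approved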

6. Regulation and Ethical Guidelines

Governments and organizations can implement policies and guidelines that mandate fairness in AI systems. Ethical AI frameworks, such as the EU’s AI Act or the responsible-AI principles published by companies like Google and Microsoft, provide rules on fairness, transparency, and accountability that help reduce bias.

By applying these strategies—improving data quality, maintaining transparency, conducting regular bias audits, using bias-mitigation algorithms, fostering inclusive development teams, and adhering to ethical guidelines—organizations can significantly reduce bias in AI systems.

News Team