
Socially conscious AI in AVs may prevent fatalities in accidents: Research

In this post:

  • Researchers reported that “social sensitivity” in autonomous vehicle AI can reduce total harm in crashes by over 17%, with harm to vulnerable road users cut by more than 50%.
  • Real-world incidents involving AVs, such as fatal crashes with Uber and Tesla vehicles, have led to calls for ethically aware decision-making systems.
  • Ethical programming and legal accountability are becoming critical issues for the future of self-driving technology.

A new study published in the Proceedings of the National Academy of Sciences (PNAS) demonstrates that autonomous vehicles (AVs) trained to think more like humans, specifically to recognize and weigh the vulnerability of road users, are significantly safer in high-risk scenarios.

When equipped with what the researchers called “social sensitivity,” vehicles were found to reduce total harm in crashes by over 17%, with harm to pedestrians, cyclists, and other vulnerable groups slashed by more than 50%.

Bringing human-like reasoning and machine precision to AVs

AVs have been pushed as the future of safe, efficient transportation. However, as development in the space accelerates, ethical concerns continue to surface, especially in life-or-death scenarios.

The study led by Hongliang Lu of the Hong Kong University of Science and Technology suggests that giving driverless cars a dose of social consciousness could drastically reduce harm in road accidents. The research integrates insights from neuroscience and behavioral psychology into AV programming.

The model was layered onto an existing decision-making framework, EthicalPlanner, enabling AVs to behave in a way that not only avoids collisions but also minimizes harm when a crash is inevitable.

The AI prioritizes the protection of the most vulnerable road users, which is a major shift away from the conventional model that often treats all objects or persons equally from a purely mechanical risk standpoint.
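The idea of weighting risk by vulnerability rather than treating all road users equally can be sketched as a cost function over candidate trajectories. The following is a minimal, hypothetical illustration; the agent names, weights, and data layout are assumptions for clarity, not the study's actual EthicalPlanner implementation.

```python
# Hypothetical sketch of vulnerability-weighted harm minimization.
# Weights and structures are illustrative, not from the PNAS study.

# Higher weight = more vulnerable road user.
VULNERABILITY = {
    "pedestrian": 3.0,
    "cyclist": 2.5,
    "motorcyclist": 2.0,
    "car_occupant": 1.0,
    "static_object": 0.1,
}

def expected_harm(trajectory):
    """Sum collision probability x severity, scaled by vulnerability."""
    total = 0.0
    for event in trajectory["risk_events"]:
        weight = VULNERABILITY.get(event["agent"], 1.0)
        total += weight * event["collision_prob"] * event["severity"]
    return total

def choose_trajectory(candidates):
    """Pick the candidate trajectory with the lowest weighted harm."""
    return min(candidates, key=expected_harm)

# Example: hitting a barrier (low vulnerability) scores better than
# risking a cyclist, even when the raw collision risk is comparable.
candidates = [
    {"name": "brake_late",
     "risk_events": [{"agent": "cyclist",
                      "collision_prob": 0.2, "severity": 0.8}]},
    {"name": "swerve_to_barrier",
     "risk_events": [{"agent": "static_object",
                      "collision_prob": 0.3, "severity": 0.6}]},
]
best = choose_trajectory(candidates)
print(best["name"])  # swerve_to_barrier
```

A conventional risk model with uniform weights would score these options almost equally; the vulnerability weighting is what shifts harm away from pedestrians and cyclists.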


Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at Massachusetts Institute of Technology (MIT), praised the approach, saying, “The proposed framework offers a potential path toward AVs that can navigate complex, multi-agent scenarios with an awareness of differing levels of vulnerability among road users.”

Self-driving vehicles have faced criticism for ethical blind spots and accidents

The need for innovations like this stems from real-world tragedies that have affected lives and businesses. For instance, in 2018, an Uber self-driving test vehicle struck and killed a pedestrian in Arizona, highlighting the need for AVs to make nuanced decisions in real time.

Similarly, Tesla’s Autopilot has been under intense scrutiny after several high-profile crashes, including fatal collisions involving its semi-autonomous systems.

Also, General Motors shut down its AV taxi subsidiary, Cruise, after a series of setbacks, a major one involving an accident that left a pedestrian gravely injured. General Motors executives tried to downplay the impact of the incident, but the blowback contributed to the decision to close the unit despite the billions of dollars invested in it.

By contrast, Waymo, Alphabet's self-driving subsidiary, has made significant strides in deploying AVs with enhanced safety features. Waymo's vehicles have reportedly demonstrated a 96% reduction in vehicle-to-vehicle accidents at intersections compared to human drivers.


Robotaxis on the rise

Waymo now operates driverless rides in cities like San Francisco, Phoenix, and Los Angeles.

Internationally, cities like Dubai and Beijing are investing in autonomous public transport as part of their smart city goals.

Despite the concerns surrounding AVs, autonomous taxi services are gaining traction, which makes the push for smarter, safer vehicles all the more urgent.

Beyond technology, autonomous decision-making poses serious legal and moral questions. If an AI system “chooses” to harm one road user to save another, who is accountable: the manufacturer, the software engineer, or the owner? Legal frameworks around the world have yet to catch up to the realities of machine decision-making in public spaces.

A panel convened by the European Commission recently called for AVs to ensure a “fair distribution of risk” and uphold “the protection of basic rights, including those of vulnerable users.”
