Australian government launches consultation to assess ban on “high-risk” AI

TL;DR

  • The Australian government has initiated an unexpected eight-week consultation period aimed at determining whether certain “high-risk” artificial intelligence (AI) tools should be prohibited.
  • The Australian government seeks feedback on strategies to promote the “safe and responsible use of AI,” exploring options such as voluntary ethical frameworks, specific regulations, or a combination of both approaches.
  • The document emphasizes both the positive applications of AI in sectors like medicine, engineering, and law, as well as the potential harms associated with deepfake tools.

The Australian government has initiated an unexpected eight-week consultation period aimed at determining whether certain “high-risk” artificial intelligence (AI) tools should be prohibited. This move follows similar measures taken by other regions, including the United States, the European Union, and China, in addressing the risks associated with rapid AI development.

On June 1, Industry and Science Minister Ed Husic unveiled two papers for public review: one on “Safe and Responsible AI in Australia” and another on generative AI from the National Science and Technology Council. These papers were released alongside a consultation period that will remain open until July 26.

The Australian government seeks feedback on strategies to promote the “safe and responsible use of AI,” exploring options such as voluntary ethical frameworks, specific regulations, or a combination of both approaches. Notably, the consultation directly asks whether certain high-risk AI applications or technologies should be completely banned and seeks input on the criteria for identifying such tools.

The comprehensive discussion paper includes a draft risk matrix for AI models, categorizing self-driving cars as “high risk” and generative AI tools for creating medical patient records as “medium risk.” The document emphasizes both the positive applications of AI in sectors like medicine, engineering, and law, as well as the potential harms associated with deepfake tools, fake news generation, and instances where AI bots have encouraged self-harm.

Australian government vs AI

Concerns regarding bias in AI models, as well as the generation of nonsensical or false information known as “hallucinations” by AI systems, are also addressed in the discussion paper. It acknowledges that AI adoption in Australia is currently limited due to low levels of public trust. The paper references AI regulations implemented in other jurisdictions and Italy’s temporary ban on ChatGPT as examples.

Additionally, the National Science and Technology Council report highlights Australia’s advantageous capabilities in robotics and computer vision but notes relative weaknesses in core areas such as large language models. It raises concerns about the concentration of generative AI resources in a small number of predominantly US-based tech companies, which poses potential risks for Australia.

The report further explores global AI regulation, provides examples of generative AI models, and suggests that such models will likely have far-reaching impacts on sectors ranging from banking and finance to public services, education, and creative industries.

Lacton Muriuki

Lacton is an experienced journalist specializing in blockchain-based technologies, including NFTs and cryptocurrency. He covers daily crypto news backed by well-researched statistics, bringing a human perspective to technology.
