
In an unexpected consultation, Australia asked whether “high risk” AI should be banned.
The Australian government has unexpectedly launched a rapid eight-week consultation to determine how strictly the AI industry should be regulated, including whether “high risk” AI tools should be banned outright.
In recent months, other jurisdictions, including the United States, the European Union, and China, have introduced measures to identify and mitigate risks arising from the rapid development of AI.
On June 1, Industry and Science Minister Ed Husic released a discussion paper on “Safe and Responsible AI in Australia” together with a report on generative AI from the National Science and Technology Council.
The document is part of a consultation that will last until July 26.
The government is seeking input on how to assist the “safe and responsible use of AI” and is debating whether to adopt voluntary strategies such as ethical frameworks, implement specific regulations, or combine the two.
Among the questions posed by the consultation are: “Should high-risk AI applications or technologies be banned completely?” and what criteria should be used to determine which AI tools should be banned.
The discussion paper also puts forward a draft risk matrix for AI models for comment. As examples, it classifies AI used in self-driving cars as “high risk” and generative AI tools used for tasks such as creating medical patient records as “moderate risk”.
The paper highlights “harmful” uses of AI, such as deepfake tools used to produce fake news and cases where AI chatbots have encouraged self-harm, alongside “positive” uses in the medical, engineering, and legal industries.
Bias in AI models and “hallucinations” – information generated by AI that is erroneous or unintelligible – were also mentioned as problems.
According to the discussion paper, AI adoption in Australia is “relatively low” due to “low levels of public trust”. It also points to other countries’ AI regulations and Italy’s temporary ban on ChatGPT.
Australia has some advantageous AI capabilities in robotics and computer vision, but its “core fundamental capacity in (large language model) and related fields is relatively weak,” according to the National Science and Technology Council report, which also stated:
“Australia faces potential risks due to the concentration of generative AI resources within a small number of large multinational technology companies with US-based operations.”
The paper goes on to explore international AI policies, provides examples of generative AI models, and predicts that these technologies “are likely to impact everything from banking and finance to public services, education, and everything in between.”