Government Amends AI Advisory, Eases Regulations for Industry Players

In this post:

  • The government eases regulations for AI tools; under the advisory revised on 15 March, prior approval is no longer required.
  • Industry welcomes the move, saying the earlier rules would have slowed innovation.
  • Despite the revisions, platforms still need government approval for deepfake-creation services and must warn users of potential errors.

In a significant development aimed at fostering innovation in the artificial intelligence (AI) sector, the government has revised its advisory on the release of GenAI- and AI-based tools and features into the market. The move, welcomed by industry players, comes as a relief: companies will no longer be required to seek explicit government consent before launching their products.

Industry applauds revisions

The amended advisory, issued on 15 March, notably removes the requirement for companies to comply within a strict 15-day timeframe. The change has been welcomed by industry experts who had voiced concerns that the initial regulations could hinder the pace of innovation.

Rohit Kumar, founding partner at The Quantum Hub, a public policy consulting firm, commended the government’s responsiveness to industry feedback. He emphasized that the earlier advisory could have significantly impeded speed to market and stifled the innovation ecosystem. Kumar also pointed out that the now-removed requirement to submit an action-taken report had effectively turned the advisory from a mere suggestion into a binding directive.

Key revisions and continuity in requirements

Under the revised advisory, platforms and intermediaries with AI and GenAI capabilities, such as Google and OpenAI, are still required to obtain government approval before offering services that enable the creation of deepfakes. They must also continue to label under-tested models as ‘under testing’ and secure explicit consent from users, informing them about the potential errors inherent in the technology.
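To make the consent requirement concrete, here is a minimal sketch of how an intermediary might gate an under-tested model behind an ‘under testing’ notice and explicit user consent. The function names, the in-memory consent store, and the placeholder generate() call are all illustrative assumptions, not part of the advisory or of any vendor’s API.

```python
# Hypothetical sketch of an "under testing" consent gate. The advisory
# requires labelling under-tested models and obtaining explicit user
# consent; everything below (names, storage, model call) is illustrative.

UNDER_TESTING_NOTICE = (
    "This AI model is under testing and may produce unreliable or "
    "erroneous output. Do you consent to proceed? [y/N] "
)

_consented_users: set[str] = set()  # hypothetical in-memory consent store


def serve_with_consent(user_id: str, prompt: str) -> str:
    """Serve a generation only after the user has explicitly consented."""
    if user_id not in _consented_users:
        answer = input(UNDER_TESTING_NOTICE)
        if answer.strip().lower() != "y":
            return "Request declined: consent not given."
        _consented_users.add(user_id)
    return generate(prompt)


def generate(prompt: str) -> str:
    # Stand-in for a real GenAI call (e.g., an LLM API request).
    return f"[under-testing model output for: {prompt!r}]"
```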

The directive extends to all platforms and intermediaries using large language models (LLMs) and foundation models. Their services must also not produce content that compromises the integrity of the electoral process or violates Indian law, underscoring concerns that misinformation and deepfakes could influence election outcomes.

Emphasis on procedural safeguards

While acknowledging the advisory revision as a positive step, some executives stress the importance of procedural safeguards in policymaking. They advocate a consultative approach to prevent knee-jerk reactions to incidents and to ensure that regulations are well considered.

Executives, speaking on condition of anonymity, highlighted the need for intermediaries to exercise caution during high-risk periods such as elections. They supported the government’s push for intermediaries to be vigilant before releasing untested models and to label outputs appropriately.
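One way an intermediary might label outputs appropriately is to attach provenance metadata to every generation. The sketch below assumes a hypothetical LabeledOutput record with illustrative field names; it is not a format prescribed by the advisory.

```python
# Hypothetical provenance label attached to every generated output.
# The record structure and field names are illustrative, not a format
# prescribed by the advisory.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LabeledOutput:
    """Illustrative provenance wrapper for AI-generated content."""
    content: str
    model_name: str
    ai_generated: bool = True   # explicit synthetic-content flag
    under_testing: bool = True  # mirrors the advisory's 'under testing' label
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


# Example: every response carries its label alongside the content.
record = LabeledOutput(content="Sample generated text.", model_name="demo-llm-beta")
print(record)
```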

The original advisory was prompted by a series of controversies, including criticism of Google’s AI platform Gemini over answers it generated about Prime Minister Narendra Modi. ‘Hallucinations’ by GenAI models, such as those seen on Ola’s beta GenAI platform Krutrim, also contributed to the regulatory intervention.
