
Regulating AI: Feasible Approaches to Address Growing Concerns


TL;DR

  • Regulating AI is difficult because of technological limits.
  • Restricting training data to authorized material can reduce output quality.
  • Cryptographic signatures can attribute AI content to vendors but raise concerns about vendor dependence.

As the influence of generative artificial intelligence (AI) technologies continues to expand, so does the apprehension surrounding their potential consequences. These concerns encompass various issues, from the proliferation of disinformation and job displacement to the loss of creative control and even existential threats posed by superintelligent AI. 

In response to these mounting fears, governments worldwide are grappling with how to regulate AI. This article explores various approaches to AI regulation, considering their technological and economic feasibility.

Restricting training data: A technologically and partially economically feasible approach

One viable approach to AI regulation is to limit the training data used by AI models to public domain material and copyrighted content for which explicit permission has been obtained. This approach allows AI companies to control the quality and legality of the data used in their models. 

Technologically, this restriction is entirely feasible, as AI developers can precisely curate their training datasets.

From an economic perspective, there is a trade-off between using restricted data and achieving the highest possible content quality. AI systems benefit from diverse and extensive training data, but adhering to permissions may limit the variety and richness of the data. 

Nonetheless, some AI companies, such as Adobe with its Firefly image generator, already market models trained exclusively on authorized content, demonstrating that this approach can be at least partially economically viable.

Attributing AI output to creators: A technologically infeasible endeavor

Another potential avenue for AI regulation is attributing the output of AI technology to specific creators or groups of creators for compensation. However, this approach faces significant technological challenges. 

The complex algorithms used in generative AI make it nearly impossible to pinpoint which input samples directly influenced the output. Even if such attribution were possible, determining the extent to which each input sample contributed to the output remains a daunting task.

The complexity of AI systems and the lack of a clear link between input and output render this form of regulation technologically infeasible. This is a critical concern, as the ability to attribute AI-generated content could determine whether creators and rights holders embrace or oppose AI technology.

Distinguishing human from AI-generated content: A technologically evolving challenge

Concerns about AI-generated disinformation campaigns have underscored the need to distinguish between human-generated content and AI-generated content. While technology startups are actively developing solutions to address this issue, the current state of technology lags behind generative AI advancements.

Existing approaches primarily rely on identifying patterns specific to generative AI, but this method is akin to chasing a moving target. Technologically, it remains a challenge to reliably differentiate between human and AI-generated content. However, rapid progress in this field suggests that a solution may emerge in the near future.

Attributing AI output to AI firms: A technologically and economically feasible regulation

An alternative approach is to attribute AI-generated content to the specific AI vendor responsible for its creation. This can be accomplished through cryptographic signatures, a well-understood and mature technology. 

AI companies could cryptographically sign all output from their systems, allowing anyone to verify the authenticity of the content’s source.

This regulation is both technologically and economically feasible, as the necessary technology is already embedded in basic computational infrastructure. 

Nevertheless, it raises questions about whether it is desirable to rely exclusively on content generated by a limited number of well-established vendors whose signatures can be verified.

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decision.


Benson Mawira

Benson is a blockchain reporter who covers industry news, on-chain analysis, non-fungible tokens (NFTs), artificial intelligence (AI), and more. His areas of expertise are the cryptocurrency markets and fundamental and technical analysis. With his insightful coverage of everything in financial technologies, Benson has garnered a global readership.
