A recent investigation by Stanford University researchers suggests that major tech companies, including OpenAI (the creator of ChatGPT), Meta (formerly Facebook), and Google, may run afoul of the EU's draft regulations governing the deployment and use of artificial intelligence (AI).
EU’s AI regulation drama
The Stanford study, led by AI researcher Rishi Bommasani, highlights a brewing conflict between a multi-billion-dollar industry, championed by some politicians as vital to national security, and international authorities committed to mitigating its potential risks.
The paper spotlights that these corporations are not adequately adhering to the draft guidelines, particularly with respect to copyright law.
Generative AI tools, software applications trained on vast datasets to produce human-like text, code, and images, have been growing at an exponential rate.
These AI models, like ChatGPT, Bard, and Midjourney, often rely on copyrighted content for training purposes.
Under the EU's proposed AI Act, developers of such tools would be required to disclose AI-generated content and publish summaries of the copyrighted data used during training, helping ensure that creators are compensated for their contributions.
The compliance challenge
Stanford’s research measured ten AI models against the EU’s provisional regulations on various parameters, such as describing data sources, summarizing copyrighted data, disclosing the technology’s energy consumption and computing needs, and detailing assessments, tests, and anticipated risks associated with the technology.
The study revealed that every model failed to meet the proposed rules in several significant areas. More than half of the evaluated providers didn’t achieve even a 50% compliance score.
Proprietary AI models such as OpenAI's ChatGPT and Google's PaLM 2 were criticized for their lack of transparency concerning copyrighted data. Their open-source competitors, by contrast, showed greater transparency but posed greater control challenges.
Addressing these concerns, Rumman Chowdhury from Harvard University noted that AI is not intrinsically neutral, reliable, or beneficial, and a coordinated and directed effort is required to ensure its appropriate use. Trustworthiness, she argued, is the real competitive edge in this arena.
Global implications and future considerations
Findings from the Stanford study, presented at a recent US Congress committee hearing on AI, will guide regulators worldwide as they attempt to navigate this transformative technology projected to revolutionize industries from financial services to media.
However, this research also illuminates the persistent tension between the pace of AI development and responsible growth. According to Frank Lucas, the committee’s Republican chair, while the US must retain its AI leadership role, it should also uphold values of trustworthiness, transparency, and fairness.
As the EU’s AI Act is poised to enforce specific rules and the US plans to introduce related legislation, Bommasani advocates for more transparency within the industry to better regulate AI.
Yet he also acknowledges that enforcing these laws will be challenging: it is not immediately evident, for instance, how to summarize the copyrighted portions of such enormous training datasets.
As regulations become more concrete, lobbying efforts are expected to intensify both in Brussels and Washington.