A recent executive briefing from GlobalData has shed light on pivotal concerns surrounding the regulation and environmental implications of artificial intelligence (AI). The report underscores AI's transformative impact across diverse industries, forecasting a compound annual growth rate of 35% between 2022 and 2030 that would take the market to $909 billion by the end of the forecast period. Amid this rapid growth, however, questions remain about the feasibility of establishing a global standard for AI regulation and about the intricate relationship between AI and environmental sustainability.
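As a quick sanity check of the forecast, reading the 35% figure as a compound annual growth rate (an assumption on our part; the briefing does not spell out the compounding), the implied 2022 market size can be recovered by reversing the projection:

```python
def implied_base_value(final_value: float, cagr: float, years: int) -> float:
    """Reverse a compound-growth projection to recover the starting value."""
    return final_value / (1 + cagr) ** years

# $909bn by 2030 at a 35% CAGR implies a 2022 market of roughly $82bn.
base_2022 = implied_base_value(909e9, 0.35, 2030 - 2022)
print(f"Implied 2022 market size: ${base_2022 / 1e9:.1f}bn")
```

If the 35% were instead total growth over the whole period, the implied 2022 base would be about $673 billion, which seems implausibly high; the CAGR reading is the more natural one.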
The report highlights notable strides in AI regulation, particularly the EU's forthcoming AI Act, slated to take effect in late 2023 or 2024. The Act aims to ensure AI models are safe, transparent, traceable, non-discriminatory, and environmentally sound. In addition, the UK government is poised to host the AI Safety Summit in November, intended to facilitate discussion of risk assessment and mitigation strategies, with the ambition of fostering international cooperation on AI regulation.
GlobalData’s Thematic Research Director, Josep Bori, voiced concerns about the UK’s potential to spearhead the global AI safety movement, given the current geopolitical climate post-Brexit. Bori emphasized the necessity for large international organizations and regulatory bodies to play a pivotal role in setting global standards, cautioning against potential limitations posed by single-country initiatives.
AI regulation amidst technological advancements
Assessing the evolving landscape of AI regulation, Bori pointed to the rapid advances in generative AI and predicted that the regulatory sphere will remain fluid for some time. Anticipating divergence across jurisdictions and frequent regulatory change, he noted that it remains unclear whether the global regulatory framework will ultimately converge or fragment.
Benjamin Chin, Associate Analyst in Thematic Intelligence at GlobalData, added a note of realism, asserting that while governments might enact regulations to elevate technical and ethical standards, the inherent differences in national approaches, alongside the influence of major tech companies, could impede the achievement of a unified vision for AI regulation.
The multifaceted relationship between AI and the environment remains a focal point of GlobalData's executive brief. Acknowledging the technology's potential both to harm and to benefit the environment, the report highlights the substantial energy consumed in training large language models (LLMs). While that energy consumption raises sustainability concerns, AI can also contribute positively, for example by monitoring renewable energy use in smart grids and similar applications.
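The briefing does not give energy figures, but the scale of the concern can be sketched with a back-of-the-envelope model: training energy is roughly GPU count × run time × per-GPU power, scaled up by datacentre overhead (PUE) and converted to emissions via the grid's carbon intensity. All inputs below are illustrative assumptions, not figures from the report:

```python
def training_emissions_tonnes(gpu_count: int, hours: float, gpu_watts: float,
                              pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions (in tonnes) of one training run."""
    energy_kwh = gpu_count * hours * (gpu_watts / 1000) * pue  # total kWh drawn
    return energy_kwh * grid_kg_co2_per_kwh / 1000             # kg -> tonnes

# Hypothetical run: 1,000 GPUs for 30 days at 300 W each,
# datacentre PUE of 1.1, grid intensity of 0.4 kg CO2 per kWh.
print(training_emissions_tonnes(1000, 30 * 24, 300, 1.1, 0.4))  # ~95 tonnes
```

Even this modest hypothetical run lands in the tens of tonnes of CO2-equivalent; frontier-scale training runs use far more hardware for longer, which is why the report flags the issue.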
Bori emphasized the growing weight of environmental considerations for LLMs, anticipating a shift towards models with lower carbon footprints as emphasis on compliance with environmental, social, and governance (ESG) frameworks mounts.
Chin highlighted a specific concern from the sixth edition of the AI Index Report 2023, published by Stanford University: the carbon dioxide-equivalent emissions produced by training GPT-3.
To contextualize, the report puts GPT-3's training emissions at around 502 tonnes of CO2-equivalent, roughly 500 times the emissions attributable to a single passenger on a round-trip flight from New York to San Francisco.
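The underlying figures from the AI Index Report 2023 (502 tonnes of CO2-equivalent for GPT-3's training, about 0.99 tonnes for a single passenger's New York–San Francisco round trip) make the ratio easy to verify:

```python
# Figures as reported in Stanford's AI Index Report 2023.
gpt3_tonnes = 502      # CO2-equivalent emissions from training GPT-3
flight_tonnes = 0.99   # one passenger, round-trip New York-San Francisco

print(round(gpt3_tonnes / flight_tonnes))  # ~507, i.e. roughly 500 flights
```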
Underlining the ethical complexities tied to LLMs, Chin emphasized that these models can perpetuate existing social biases, largely because they are trained on vast volumes of data that encode those biases.
GlobalData’s analysis underscores the need for thoughtful, comprehensive regulatory frameworks to guide the development and deployment of AI, accounting for both ethical considerations and environmental impacts to ensure a sustainable and equitable technological landscape. As the industry evolves, the global community must pursue collaborative efforts and cohesive strategies to harness AI's full potential while mitigating the associated risks.