In 2023, generative Artificial Intelligence (AI) surged into the mainstream, reshaping everything from homework assistance to social media content creation. High-profile moments, such as its mention in the King’s Speech, highlighted its growing influence. However, this rapid integration of AI into daily life has sparked concern, and public narratives often veer towards dystopian fears that overshadow the technology’s potential benefits. A critical issue has been the speed of AI deployment, which has outpaced the development of the necessary legislative and ethical frameworks.
Big Tech companies have faced criticism for insufficient self-regulation, described as ‘marking their own AI homework’. In response, the US rallied industry leaders like Amazon, Google, OpenAI, and Microsoft to sign a groundbreaking voluntary agreement. This pact allows independent security experts to evaluate their latest AI models. Similarly, in the UK, Prime Minister Rishi Sunak announced plans for the world’s first AI safety institute, aiming to address national security risks. This move coincided with an international accord among the US, UK, and several other nations to safeguard AI from misuse and ensure ‘secure by design’ systems.
The global scene witnessed an unofficial competition among governments to lead in AI regulation. A significant milestone was the UK hosting the inaugural AI Safety Summit at Bletchley Park. Discussions centered around AI’s darker aspects, with comparisons to nuclear war risks. However, concerns were raised about the exclusion of communities most likely to be impacted by AI in areas like employment or algorithmic decision-making.
The challenge of regulating the unknown
The UK’s strategy, as articulated by Rishi Sunak, revolves around the dilemma of legislating for a technology that is not yet fully understood. This was a focal point at the Open Data Institute Summit, where I, along with co-founder Tim Berners-Lee, emphasized the importance of transparency in AI models and the data behind them. Effective scrutiny of AI, we argued, must involve those who understand its intricacies, a group that does not typically include political figures.
Public anxiety, particularly over AI’s impact on jobs, has been palpable. It was evident in the 146-day strike by the Writers Guild of America and in Getty Images’ lawsuit against Stability AI. Despite these concerns, the creative industry has embraced generative AI, producing notable imagery and even award-winning art. Tools such as Nightshade emerged to protect creators’ rights by ‘poisoning’ training data, subtly altering images so that models trained on them struggle to replicate human work.
A significant undercurrent in 2023 was the issue of data literacy. Surveys among Fortune 1000 companies revealed stagnation in data, analytics, and AI initiatives. The UK’s digital and data skill shortages, particularly in the civil service, were highlighted as an urgent concern.
2024 and beyond: Data as the foundation of AI
Looking towards 2024, it is clear that the foundation of effective and safe AI lies in data: AI’s power and safety are directly tied to the quality and governance of the data it is built on. With a General Election on the horizon, it is vital that political party manifestos recognize data as the cornerstone of technological advances and commit to its responsible collection and distribution. Another critical requirement is maintaining human oversight of AI-driven decisions, ensuring that people can always appeal an algorithmic decision to a human reviewer.
The governance challenges OpenAI faced towards the end of 2023, as it struggled to manage and oversee rapid AI development, underscored the need for diverse and skilled human involvement in AI training and application. Meanwhile, the education sector is starting to view AI as a universal learning tool rather than merely a means for plagiarism. The hope for 2024 is to foster a data-literate generation capable of discerning fact from AI-generated fiction and equipped to fill the emerging job roles of an AI-augmented world.