The rapid growth of generative AI has created a competition between cybersecurity stakeholders and hackers. This dynamic led US President Joe Biden to issue an executive order (EO) in October 2023 focused on the safe, secure and trustworthy development and use of artificial intelligence.
Who will gain the upper hand over the next five years: defenders or attackers? At this point, there is no certain answer.
The Cyber Arms Race
Generative AI empowers both defenders and attackers, offering unprecedented speed and power to social engineering and impersonation attacks.
For attackers, phishing campaigns targeting high-profile individuals become vastly scalable as AI rapidly mimics communication styles, enabling numerous threat campaigns to run simultaneously. This poses a serious challenge for defenders, given the increased intensity and severity of attacks.
In response to these attacks, the cybersecurity industry is using AI to detect and counteract them. However, creating effective countermeasures takes time, leaving companies exposed in the interim.
It is an arms race: a continuous cycle of innovation in which attackers and defenders try to outdo each other.
The Role of Legislation in Adapting to AI’s Evolution
Public-private collaboration is essential in this landscape. The EO offers a starting point for regulation while ongoing tech industry-government collaboration remains necessary.
As AI-based products start coming out of tech companies, customer feedback becomes invaluable in moulding regulations that balance innovation, data protection and societal concerns.
Public-private partnerships are vital to creating a secure environment that cultivates AI innovation while addressing safety concerns.
It is notable, however, that legislative frameworks must keep pace with the changing nature of AI technology, as stated in the EO.
For instance, on content labelling, the US Department of Commerce is developing guidelines on watermarking and authentication for AI-generated content. Alphabet, Meta and OpenAI have committed to such measures, much as digital watermarks were embedded in colour copiers and printers, with the involvement of the US Secret Service, to fight counterfeiting.
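The core idea behind authenticating AI-generated content can be illustrated with a toy example: the generator attaches a keyed signature to its output, and anyone holding the verification capability can later check whether the content is genuine and unmodified. This is a minimal sketch, not any vendor's actual watermarking scheme; the key and function names are assumptions for illustration.

```python
import hmac
import hashlib

# Hypothetical signing key held by the AI provider.
SECRET_KEY = b"provider-signing-key"

def sign_content(text: str) -> str:
    """Return a hex tag binding the text to the provider's key."""
    return hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_content(text: str, tag: str) -> bool:
    """Check the tag against the text using a constant-time comparison."""
    expected = sign_content(text)
    return hmac.compare_digest(expected, tag)

output = "This paragraph was produced by a generative model."
tag = sign_content(output)

print(verify_content(output, tag))             # True: content is intact
print(verify_content(output + " edit", tag))   # False: content was altered
```

Note that production watermarking for generated text typically embeds statistical signals in the token choices themselves rather than attaching a detachable tag, since a separate tag is trivially stripped; the sketch only illustrates the verification concept.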
Being proactive about AI development and implementation requires a long-term commitment to transparency, visibility and understanding. With the advent of AI-driven cyber warfare, a new arms race has begun.
As defenders both in industry and government enter uncharted territory, the joint endeavour towards improving defensive AI strategies becomes important.
Cybersecurity is at a crucial juncture, with generative AI poised to change the course of the discipline. The race is still on, underscoring the importance of holistic and cooperative measures to guarantee that AI-based technologies are designed and used responsibly.