Uncertainty abounds as generative AI use grows, ISACA poll reveals

TL;DR

  • A recent poll conducted by ISACA has shed light on the growing uncertainty surrounding generative AI.
  • Many organizations allow employees to use generative AI even without formal policies; 40 percent report such use despite having no policy in place.
  • Only six percent of respondents’ organizations are offering AI training to all staff members.

A recent poll conducted by ISACA, the global digital trust association, has shed light on the growing uncertainty surrounding generative artificial intelligence (AI). The poll, titled “Generative AI 2023: An ISACA Pulse Poll,” surveyed over 2,300 professionals working in fields such as cybersecurity, IT audit, governance, privacy, and risk. It revealed a notable discrepancy between the high use of generative AI and the absence of comprehensive company policies governing its usage.

The survey found that a significant share of organizations allow employees to use generative AI even in the absence of explicit policies. Only 28 percent of organizations reported having policies that expressly permit the use of generative AI, and a mere 10 percent claimed to have a formal, comprehensive policy in place.

Shockingly, more than one in four organizations stated that they lacked both a policy and any plans to create one. Moreover, 40 percent reported that employees were using generative AI despite the absence of formal policies, and another 35 percent were uncertain about whether such usage was occurring.

Generative AI is being employed across various domains within organizations, including content creation, productivity enhancement, task automation, customer service, and decision-making.

Lack of familiarity and training

While employees are actively embracing generative AI, the survey found that organizations are lagging in providing adequate training. Only six percent of respondents’ organizations are offering AI training to all staff members. More alarmingly, 54 percent reported that no AI training whatsoever was provided, even to teams directly impacted by AI implementation. Furthermore, only 25 percent of respondents felt they had a high level of familiarity with generative AI.

Jason Lau, ISACA board director and CISO at Crypto.com, emphasized the need for organizations to catch up and provide policies, guidance, and training to ensure the responsible and ethical use of generative AI. He stated that employees were not waiting for permission to leverage generative AI for their work, emphasizing the necessity for greater alignment between employers and staff regarding the technology.

Risk and exploitation concerns

The survey delved into ethical concerns and risks associated with generative AI. An alarming 41 percent of respondents expressed the belief that insufficient attention was being paid to ethical standards for AI implementation. Surprisingly, fewer than one-third of organizations considered managing AI risk an immediate priority. Twenty-nine percent regarded it as a longer-term concern, while 23 percent stated that their organizations had no current plans to address AI risk.

Respondents identified several top risks associated with generative AI, including misinformation and disinformation, privacy violations, social engineering, loss of intellectual property, and job displacement.

Of particular concern was the fear that generative AI could be exploited by malicious actors, with 57 percent of respondents indicating a high level of worry in this regard. Additionally, 69 percent believed that adversaries were using AI as successfully or even more successfully than digital trust professionals.

Impact on jobs and optimism

Examining the impact of AI on job roles, respondents believed that security, IT operations, and risk and compliance were primarily responsible for ensuring the safe deployment of AI. Looking ahead, 19 percent of organizations stated that they would be creating job roles related to AI functions in the next 12 months. 

However, 45 percent believed that a significant number of jobs would be eliminated due to AI. Interestingly, 70 percent of digital trust professionals remained optimistic about the positive impact of AI on their roles, but 80 percent acknowledged the need for additional training to retain their jobs or advance their careers.

Despite the challenges and uncertainties surrounding AI, a significant proportion of respondents expressed optimism. Eighty percent believed that AI would have a positive or neutral impact on their industry, organizations, and careers. Moreover, 85 percent viewed AI as a tool that extended human productivity, and 62 percent believed it would have a positive or neutral impact on society as a whole.

Benson Mawira

Benson is a blockchain reporter who has delved into industry news, on-chain analysis, non-fungible tokens (NFTs), artificial intelligence (AI), and more. His area of expertise is the cryptocurrency markets, including fundamental and technical analysis. With his insightful coverage of everything in financial technologies, Benson has garnered a global readership.
