Academics Apologize for False AI-Generated Allegations Against Big Four Consultancy Firms


  • Academics apologize for false AI-generated accusations against big consultancy firms, highlighting risks of relying on AI-generated content. 
  • Big four firms KPMG and Deloitte demand a public correction after erroneous allegations were made in a parliamentary submission. 
  • Lesson learned: The incident underscores the need for rigorous fact-checking and responsible use of AI in shaping public discourse.

A group of academics specializing in accounting has issued an unreserved apology to the big four consultancy firms – KPMG, Deloitte, PwC, and EY – after admitting to using artificial intelligence to make false allegations of serious wrongdoing in a submission to a parliamentary inquiry. The allegations, based on AI-generated content, have raised concerns about the misuse of technology in shaping public discourse and the potential harm caused to the reputation of these prominent firms.

AI’s role in false accusations

In their submission to a parliamentary inquiry into the ethics and professional accountability of the consultancy industry, the academics advocated regulatory changes that included breaking up the big four. It later emerged that part of their original submission relied on the Google Bard AI tool, which the responsible academic had only recently begun using. The AI program generated several case studies about alleged misconduct, which were subsequently highlighted in the submission.

False accusations against KPMG

One of the false allegations targeted KPMG, falsely accusing the firm of being complicit in a “KPMG 7-Eleven wage theft scandal” that supposedly led to the resignation of several partners. The submission also accused KPMG of auditing the Commonwealth Bank during a financial planning scandal, a claim that was categorically untrue: KPMG has never audited the Commonwealth Bank.

Deloitte’s concerns

Deloitte also found itself wrongly accused in the submission. The academics falsely claimed that Deloitte had been sued by the liquidators of the collapsed building company Probuild over alleged audit failures, when Deloitte had never audited Probuild. The submission also raised concerns about a “Deloitte NAB financial planning scandal,” which Deloitte vehemently denied, stating that no such scandal existed. It further alleged that Deloitte had falsified the accounts of a company called Patisserie Valerie, another claim with no basis in reality.

Repercussions and correcting the record

In response to these false accusations, KPMG and Deloitte have taken steps to protect the reputation of their staff. KPMG has written to the academics’ employers, expressing its intent to publicly correct the record while respecting academic freedom. The incident marks the first time a parliamentary inquiry has had to address AI-generated false accusations, which are protected by parliamentary privilege and therefore shielded from defamation lawsuits.

Removing false information

To rectify the situation, the sections of the submission containing false information generated by artificial intelligence will be removed. A new, accurate document is expected to be uploaded to the Senate inquiry website.

Apology from emeritus professor

Emeritus Professor James Guthrie, one of the academics involved, took responsibility for the error in a letter to the Senate, absolving the other academics of blame. He acknowledged that AI tools can produce authoritative-sounding but incorrect, incomplete, or biased output, and emphasized their capacity to spread misinformation.

Ensuring accountability and transparency

Guthrie expressed deep regret for the factual errors but stressed that their substantive arguments and recommendations for reform in the consultancy sector remained important for ensuring a sustainable industry built on shared community values. The incident is a cautionary tale about the potential pitfalls of relying on AI-generated content without thorough verification.

Lessons learned

Liberal Senator Richard Colbeck, who chairs a separate inquiry into the consulting industry, commented on the incident, emphasizing the importance of fact-checking and the potential consequences of incorrect information in public discourse. This incident is a salient reminder of the need for rigor and accuracy in academic and public discourse.

The false AI-generated allegations against the Big Four consultancy firms have exposed the risks of using artificial intelligence to shape public discourse. The academics’ apology and efforts to correct the record underscore the importance of accountability and transparency in both academia and industry, and remind us all of the critical need to verify information against reliable sources.


Glory Kaburu

Glory is an extremely knowledgeable journalist proficient with AI tools and research. She is passionate about AI and has authored several articles on the subject. She keeps herself abreast of the latest developments in Artificial Intelligence, Machine Learning, and Deep Learning and writes about them regularly.
