
A new wave of black market chatbots emerges and thrives

In this post:

  • Illicit LLMs can make as much as $28,000 in two months for their creators.
  • These models are “powerhouses” for creating phishing emails and dodging spam detectors and antivirus software.
  • According to the study, many are built by manipulating legally registered LLMs.

Black market chatbots have become one of the latest trends emerging in the AI sector and are thriving as malicious large language models (LLMs) earn money for their creators.

An American study has found a thriving AI black market in which unregistered LLMs are making a killing selling their services underground. By contrast, registered, public-facing companies such as OpenAI and Perplexity, which offer similar services, must follow regulatory requirements.

The underground LLMs can dodge antivirus software

The study, which focused mainly on Malla, an ecosystem built around repurposing legitimate LLMs and AI platforms for profit, scrutinized 212 black-market LLMs listed on underground marketplaces between April and October 2023 and assessed their attributes.

One of the authors, Zilong Lin from Indiana University, said:

“We found that most of the malla services on underground forums exist mainly to earn profit.”

The study also found that some platforms are making as much as $28,000 in two months from subscriptions and purchases from people who want to escape the confines of regulated AI.

Some, like DarkGPT and EscapeGPT, are simply jailbroken derivatives of legitimate LLMs that are otherwise used legally.

A separate study has also shown that these illicit LLMs can be used for many tasks, such as creating phishing emails. DarkGPT and EscapeGPT, for instance, could produce correct code nearly 75% of the time, yet antivirus software failed to detect that code.


Another model known as WolfGPT was seen as a “powerhouse” for creating phishing emails and dodging spam detectors.

According to Andrew Hundt, an expert in computing innovation at Carnegie Mellon University, the booming AI underground market should be a cause for concern.

He called for a legal requirement that compels companies developing LLMs to have robust systems that prevent replication by people with sinister motives.

“We also need legal frameworks to ensure that companies that create these models and provide services do so more responsibly in a way that mitigates the risks posed by malicious actors,” said Hundt.

Understanding how black market chatbots function is key

Professor Xiaofeng Wang from Indiana University, who was part of the study, said it is important to understand how these black market LLMs work so that solutions can target the specific issues identified.

Professor Wang added that the manipulation of these LLMs cannot be avoided entirely, which gives the sector scope to build strong guardrails that minimize the potential harm from cyberattacks in all their forms.

“We believe now is a good stage to start to study these because we don’t want to wait until the big harm has already been done,” said Professor Wang.

According to the professor, every technology, including LLMs, has two sides: a good one and a bad one.


Cybersecurity experts are already calling on the AI industry to slow its race and reflect on the growing dangers of its technology being adulterated by anonymous internet communities.

A Forbes article published late last year said new ideas, especially around LLMs, should be introduced responsibly. It also emphasized the role of regulation in promoting responsible AI innovation, countering the misconception that regulation stifles development.

“Developers and providers of LLMs need to hit pause on a frantic arms race that aims to churn out ever more complex models and should instead work with regulators on adopting frameworks for the ethical design and use of generative AI applications,” read part of the report.

Meanwhile, thanks to regulatory loopholes, the AI black market is fast growing into a multimillion-dollar industry and may require extensive resources to contain, according to Fast Company.
