China’s RedNote enters the low-cost AI sweepstakes with open-source model

In this post:

  • RedNote, also known as Xiaohongshu, has open-sourced its first large language model, “dots.llm1” amid global AI expansion.
  • The Shanghai-based social media firm saw US user spikes following TikTok ban threats and plans further international growth.
  • Chinese tech companies, including Alibaba and DeepSeek, are accelerating open-source AI development to compete with Western counterparts.

Chinese social media platform RedNote, also known domestically as Xiaohongshu, released its first open-source large language model (LLM) last Friday. The new model, dubbed “dots.llm1,” contains 142 billion parameters in total, but only 14 billion are supposedly activated for each response. 

According to Asian news outlet the South China Morning Post, this architecture could help the LLM balance performance with cost efficiency, rivaling competitors such as OpenAI’s ChatGPT while reducing the expense of both training and inference.
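Activating only a fraction of a model’s parameters per response is the hallmark of a sparse “mixture-of-experts” design. The sketch below illustrates that general technique only; the expert counts, top-k value, and parameter sizes are hypothetical stand-ins, since the article does not detail dots.llm1’s actual configuration.

```python
# Illustrative sketch of sparse mixture-of-experts activation.
# All numbers here are hypothetical, not dots.llm1's real config.
import random

TOTAL_EXPERTS = 64                  # hypothetical pool of expert sub-networks
ACTIVE_EXPERTS = 6                  # only a few are routed to per token
PARAMS_PER_EXPERT = 2_000_000_000   # hypothetical size of each expert

def route(token_seed: int) -> list[int]:
    """Pick the top-k experts for a token. Random scores stand in
    for a learned router's logits in a real model."""
    rng = random.Random(token_seed)
    scores = [(rng.random(), i) for i in range(TOTAL_EXPERTS)]
    return [i for _, i in sorted(scores, reverse=True)[:ACTIVE_EXPERTS]]

active = route(token_seed=42)
active_params = len(active) * PARAMS_PER_EXPERT
total_params = TOTAL_EXPERTS * PARAMS_PER_EXPERT
print(f"{len(active)}/{TOTAL_EXPERTS} experts active per token")
print(f"fraction of parameters used: {active_params / total_params:.3f}")
```

Because the router selects only a handful of experts per token, compute at inference scales with the active parameters rather than the full parameter count, which is the cost-efficiency argument made for models of this kind.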

RedNote’s internal Humane Intelligence Lab, or “hi lab,” which evolved from the company’s previous artificial intelligence team, developed the LLM. RedNote said its model outperforms other open-source systems in Chinese language understanding, surpassing Alibaba’s Qwen2.5-72B-Instruct and DeepSeek-V3.

No synthetic data used in pretraining

RedNote issued a statement to explain the standards behind the training of its LLM. The company asserted that, unlike some other models on the market, no synthetic data was used during pretraining.

Developers insisted that dots.llm1 was trained on 11.2 trillion tokens of non-synthetic data, an approach RedNote says is imperative for the model to achieve higher fidelity and more reliable results.

The company has also begun trialing an AI research assistant called Diandian on its platform. Diandian, launched via a dialogue box within the app, features a “deep research” function and is powered by one of RedNote’s in-house models. Still, the company has yet to confirm if this assistant is based on dots.llm1.

RedNote expands global reach after LLM launch

RedNote’s open-source AI announcement came just a day prior to the company’s opening of a new office in Hong Kong, its first outside of mainland China. The new location is situated in Times Square, a commercial area in Causeway Bay. 

“RedNote’s presence will improve the interactions between local content creators, brands and organisations, and promote East-meets-West cultural exchanges and content marketing development among Hong Kong, the Mainland and the global markets,” InvestHK’s Director-General of Investment Promotion Alpha Lau told reporters during a press conference last Saturday.

RedNote, headquartered in Shanghai, is one of China’s most widely used social media platforms, with 300 million monthly active users. Per company officials, the expansion is part of plans to increase RedNote’s overseas reach, in preparation for a potential TikTok ban in the United States.

Chinese AI companies choose open-source tech

RedNote joins a growing list of Chinese firms that have moved towards open-sourcing their large language models. More companies are trying to mirror the success of low-cost, high-performance models like those released by the startup DeepSeek.

Earlier this year, DeepSeek launched its open-source R1 model, which topped downloads on several app stores for delivering strong results at a fraction of the cost associated with Western LLMs.

Tech giants Alibaba, Tencent, and ByteDance have made significant investments in AI infrastructure. Alibaba, for instance, has released several new LLMs as part of its Qwen series, including the latest Qwen3 Embedding models, which support more than 100 languages and handle tasks such as text and code retrieval.

Alibaba said the Qwen3 models have improved efficiency and performance in embedding and reranking systems. Speaking earlier this year, Wang Jian, founder of Alibaba Cloud, claimed that the progress of large language models is exceeding expectations and will continue to do so. 

Wang mentioned startups like DeepSeek as examples of how young innovators solve problems with creative approaches.

According to Wang, Alibaba’s ZEROSEARCH demonstrates how innovation can significantly lower development costs. ZEROSEARCH, showcased in May, is designed to simulate search engine behavior during training without making actual API calls. The company claims this can reduce training costs by up to 90%.
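The cost-saving idea described above — standing in a local simulator for a paid search API during training — can be sketched in a few lines. This is a hedged illustration of the general approach attributed to ZEROSEARCH, not Alibaba’s implementation; the corpus, function names, and the document-count “reward” below are entirely hypothetical.

```python
# Hypothetical sketch: swap a costly live search-API call for a
# zero-cost local simulator during training. Corpus and names are
# illustrative, not ZEROSEARCH's actual design.
from typing import Callable

CANNED_CORPUS = {
    "llm": ["doc: open-source LLMs reduce inference cost"],
    "moe": ["doc: mixture-of-experts models activate few parameters"],
}

def simulated_search(query: str) -> list[str]:
    """Stand-in for a paid search API: keyword lookup, no API calls."""
    q = query.lower()
    return [doc for key, docs in CANNED_CORPUS.items()
            if key in q for doc in docs]

def training_step(query: str, search: Callable[[str], list[str]]) -> int:
    # A real trainer would feed retrieved documents to the model;
    # here the number of hits stands in for a reward signal.
    return len(search(query))

reward = training_step("how do MoE LLMs work?", simulated_search)
```

Since every `training_step` call hits the local simulator instead of a metered API, per-query cost during training drops to effectively zero, which is the mechanism behind the claimed savings.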
