Last week, DeepSeek AI released DeepSeek R1, its first reasoning-focused Large Language Model (LLM). DeepSeek R1 competes favorably with other reasoning models, such as OpenAI’s o1 and Anthropic’s Claude Sonnet, despite far lower development costs. Other innovations behind DeepSeek R1 include 8-bit precision, multi-token prediction, a dual reward system, and the use of smaller models embedded within the larger model.

The DeepSeek release has rightfully focused attention on AI’s high cost and energy consumption. If society is truly going to realize the full potential of AI in the near and medium term, we need to address these factors.

At Lumina AI, we’ve recently developed a breakthrough AI algorithm, Random Contrast Learning (RCL), which provides a near-term path to vastly less expensive and more energy-efficient AI training for LLMs and numerous other AI and machine learning workloads. Our pioneering startup is American made, headquartered in Tampa, FL. We achieve these cost and energy savings because our algorithm was built from the ground up to run on standard CPU-based computers, rather than the expensive, energy-hungry GPU-based systems required to train LLMs such as ChatGPT or DeepSeek.

RCL’s efficiency also derives from the speed of the algorithm itself. Recent test results demonstrate training speeds up to 98.3x faster for LLMs compared to transformer-based models, and Lumina AI anticipates even further gains from upcoming hardware optimizations.

The DeepSeek announcement has also shed light on the cost-effective and efficient use of smaller models that can be combined into a larger one. In the words of the DeepSeek team: “Smaller models can be powerful too.” Likewise, Lumina AI has developed RCL to derive insight more efficiently from smaller datasets, which can then be combined, all at far lower cost. Uniquely, RCL also allows different models to be combined, providing truly federated training that improves data privacy and security.
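To make the federated idea concrete, here is a minimal, generic sketch of combining models that were each trained on a separate, private dataset, with only model parameters shared between parties. It uses scikit-learn’s SGDClassifier and simple weight averaging purely as an illustration of the pattern; it is not Lumina’s RCL implementation, whose internals are not described in this post.

```python
# Generic illustration (not Lumina's proprietary RCL code): three parties train
# models locally on their own data, then share only the learned parameters --
# never the raw data -- which are averaged into one combined model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Simulate three private datasets held by different parties.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
partitions = [(X[i::3], y[i::3]) for i in range(3)]

# Each party trains a small model locally; raw data never leaves the party.
local_models = []
for X_part, y_part in partitions:
    clf = SGDClassifier(loss="log_loss", random_state=0)
    clf.fit(X_part, y_part)
    local_models.append(clf)

# Combine the local models by averaging their learned parameters
# (classic federated averaging), then evaluate the combined model.
combined = SGDClassifier(loss="log_loss", random_state=0)
combined.partial_fit(X[:10], y[:10], classes=np.unique(y))  # initialize shapes
combined.coef_ = np.mean([m.coef_ for m in local_models], axis=0)
combined.intercept_ = np.mean([m.intercept_ for m in local_models], axis=0)

print("combined-model accuracy:", round(combined.score(X, y), 3))
```

Because only parameters travel between parties in a setup like this, the underlying training data stays local, which is what yields the privacy and security benefits of federated training.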

“We strongly believe that the future of AI doesn’t lie in more computation but in smarter, more efficient approaches that don’t demand excessive energy or hardware to improve performance,” said Allan Martin, Lumina AI’s co-founder and CEO.

For all of these reasons, we expect that organizations re-evaluating their approaches to AI and machine learning will find that RCL’s advances not only resemble the advantages of DeepSeek but are also more multifaceted and profound.

This post, part of our ‘Executive Insights’ series, was authored by Ed Ingle and offers perspectives on Lumina AI’s positions regarding news in AI and machine learning. Delve into expert insights and stay informed about our company’s stance on the latest developments in the field.

Ed Ingle

Board Member, Lumina AI