My 35-year career is a tale of two worlds in the same city. The first half of my career was spent in the environmental policy arena, in both the public and private sectors, while the second half has been in the technology industry.

I started my career at the White House Office of Management and Budget (OMB) as an analyst overseeing EPA’s programs. I had just received a master’s degree in public affairs from Indiana University’s School of Public and Environmental Affairs (the O’Neill School), where I also served as a graduate teaching assistant in the school’s microcomputer lab. After OMB, I was hired by a WPP-owned bipartisan government affairs firm, where I ran its environment and energy practice for 12 years. After returning to the White House for three years as Deputy Assistant for Cabinet Affairs, I joined Microsoft for a 16-year run serving as General Manager for Government Affairs based in Washington, DC.

On a personal level, throughout my adult life I have subscribed to the notion of sensible environmentalism: practical lifestyle decisions that reduce environmental impact and promote resource conservation. For example, my wife and I are avid gardeners and have been composting our food and yard waste for 30 years. I was an early adopter of hybrid car technology, driving a hybrid for 14 years and an all-electric car for the last six. We installed a deep-well geothermal system during our new house construction 12 years ago to provide 100% of our heating and cooling, and it has long since paid for itself. We also installed lower-flow toilets than code requires and an energy-saving white TPO membrane on the flat roof areas of our home.

So I was excited recently to join the board of Lumina AI, an opportunity to bring my two professional worlds, technology and the environment, together while furthering my personal passion for conservation.

Technology and the Environment Are Mutually Inclusive

Technology and the environment are not mutually exclusive; in fact, they are mutually inclusive. Technological advances over the last 35 years have driven tremendous reductions in environmental impact through greater energy efficiency, achieved via software and hardware innovations across a broad range of products in the transportation, industrial, commercial, and residential sectors.

Many of the nation’s largest technology companies have poured billions of dollars into carbon-reducing innovations and programs, which have contributed significantly to environmental improvements over the last decade. These technology companies have also publicly committed to bold carbon reduction targets for their own operations within the next decade.

Yet these laudable corporate goals now risk being upended by the recent AI bonanza, as these same companies chase the holy grail of replicating the human brain and the enormous business opportunities of generative AI. In that quest, the planet’s largest technology companies will use inordinate amounts of energy to power the GPU-based computers and data centers needed to train massive large language models. And it will be our planet that pays the price.

The GPU Chip’s Impact on the Environment

Needless to say, the title of last week’s Popular Science article caught my attention: “AI Companies Eye Fossil Fuels to Meet Booming Energy Demand.” The article cites a fresh round of estimates from the International Energy Agency noting that by 2027 the energy demand from AI servers will be on par with the combined energy consumption of Argentina, the Netherlands, and Sweden. It goes on to say that renewable energy sources alone will not cover this increased electricity demand any time soon, leading to an increased reliance on fossil fuels.

GPU (graphics processing unit) chips were originally designed to accelerate graphics-intensive applications (e.g., games and CAD), working in combination with the CPU (central processing unit), which handles the primary functions of a computer. In recent years, the GPU’s ability to handle numerous calculations simultaneously for non-graphics applications has made it the go-to chip for training large AI models. But the GPU architecture comes at a cost: it consumes much more energy than a computer’s main CPU, and the newest GPUs can cost $30,000 to $40,000 per unit.
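To make the distinction concrete, here is a minimal, illustrative sketch of a single model-training step in PyTorch (this is not Lumina code; the tiny model, synthetic data, and parameters are invented purely for illustration). The same code runs on either chip; the GPU is preferred for large models only because it can execute the many matrix multiplications involved in parallel.

import torch
import torch.nn as nn

# Pick the hardware: frameworks fall back to the CPU when no GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny classifier and a batch of synthetic data (illustrative only).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
inputs = torch.randn(256, 64, device=device)      # 256 samples, 64 features each
labels = torch.randint(0, 2, (256,), device=device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step: identical code on CPU or GPU; only the execution speed
# and the energy drawn by the chip differ.
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
print(f"device: {device}, loss: {loss.item():.4f}")

The point of the sketch is simply that the hardware is a deliberate design choice: approaches built to run well on the CPU avoid both the purchase price and the energy draw of the GPU.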

With the explosion of generative AI in the last 18 months, innovations in GPU chip design have focused primarily on processing power, not energy efficiency. Although NVIDIA’s newly announced Blackwell GPU does tout greater energy efficiency, it still demands far more energy than a general-purpose CPU.

Lumina AI’s CPU-Driven Environmental and Societal Benefits

Against this backdrop, Lumina AI’s latest CPU-based model-training algorithm, Random Contrast Learning (Lumina RCL), holds such promise. Lumina’s RCL classification innovation uniquely leverages randomness and phenomenology (i.e., the study of lived or conscious experience) to mimic how the mind actually works rather than seeking to duplicate the neural intricacies of the human brain. In doing so, RCL outcompetes most machine learning (ML) and deep learning (DL) neural network models in accuracy, and it does so in a fraction of the time and at a fraction of the expense, because it requires less training data and less data preparation and tolerates data abnormalities or duplication, all on a much less expensive and less energy-consuming CPU-based computer.

Admittedly, Lumina RCL’s CPU-driven approach is not going to replace every use case where GPU-driven model training is needed, e.g., large language model generative AI applications. However, it can replace a large portion of current GPU-driven AI, ML, and DL models for purpose-specific workloads, and it allows smaller models to serve myriad high-value use cases (e.g., image classification for cancer identification) across industry sectors including healthcare, transportation, agriculture and food safety, finance, national security, manufacturing, robotics, logistics, and building management.

As an added benefit, by greatly reducing the overall expense of running high-performing AI tools, Lumina RCL’s CPU-driven approach scores big on accessibility. It puts the full power of AI within reach of a much broader set of users, including small and medium-sized universities, community colleges, hospitals and clinics, small and medium-sized businesses, nonprofit organizations, and state and local governments.

This represents a genuine democratization of artificial intelligence that will benefit the smallest to the very largest technology companies and their customers in achieving worthy environmental and societal objectives — an intelligent outcome that is good for all of us.

This post, part of our ‘Executive Insights’ series, was authored by Ed Ingle and offers his perspective on Lumina’s positions regarding the latest developments in AI and machine learning.

Ed Ingle

Board Member, Lumina AI
