Artificial intelligence is a very real data center problem

Artificial Intelligence (AI) is becoming more deeply integrated into our daily activities, with applications that promise to make us more efficient and possibly smarter. It would be foolish not to consider the consequences as tools like OpenAI’s ChatGPT or Google’s Bard proliferate and introduce machine intelligence to everyday people. That includes how our data centers are evolving amid the rapid growth in data that needs to be stored, processed, managed, and transferred.

AI could be the Achilles heel of data centers unable to evolve in the face of the massive datasets AI requires. Like the storied character from Greek mythology, for decades we’ve recognized the threats that could topple our ever-expanding digital society and learned how to build strong, robust data centers to be our undefeatable heroes. The flipside of the myth is the vulnerability inherent in even the mightiest defenders. Today, AI can be that unforeseen fatal flaw.

Dr. Michael Lebby is Chairperson and CEO at Lightwave Logic.

From the agora to hyperconnected global markets: the rise of AI and modulators

While Homer hoped to impart a lesson on flaws with tales like Achilles squaring up to Agamemnon in an ancient agora at the beginning of the Iliad, defending against weakness today happens on a much larger landscape. Three macro market areas are driving the need for a faster, more universal internet and more effective, efficient data center infrastructure. These macro market areas are density, speed, and low power, and they are driving engineers to design higher-performance optical and photonic components supported by the fiber optic cables that interconnect data centers as part of the IT infrastructure. Optical components today must address higher switch density from a smaller size or footprint, higher and faster information flow and computational processing, and the need for green, low-power operation. One new flavor of upcoming optical component that supports these three macro areas is the polymer modulator, which is positioned in front of the lasers that send light into the fiber optic cables.

Modulators are devices that encode optical signals, and they are critical for increasing information capacity, sending digital signals faster, lowering power consumption, and shrinking footprint. Modulators in general are becoming the critical component for next-generation optical networks and the internet. Newer electro-optic polymer modulators, as opposed to semiconductor variants, have the potential to make the end user’s business more competitive because they can address the problems these large data center companies are having and create a positive impact for the industry, especially across the three macro markets: size, power consumption, and speed.
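
A useful back-of-the-envelope way to connect the speed and power arguments is energy per bit: the drive power a modulator consumes divided by the data rate it supports. The short Python sketch below illustrates the calculation; the power and data-rate figures are purely hypothetical placeholders, not measured values for any particular semiconductor or polymer device.

```python
# Back-of-the-envelope energy-per-bit comparison for optical modulators.
# All operating points below are illustrative assumptions, not vendor specs.

def energy_per_bit_pj(power_mw: float, data_rate_gbps: float) -> float:
    """Energy per bit in picojoules: mW divided by Gb/s gives pJ/bit."""
    return power_mw / data_rate_gbps

devices = [
    ("hypothetical semiconductor modulator", 200.0, 100.0),          # mW, Gb/s (assumed)
    ("hypothetical electro-optic polymer modulator", 50.0, 200.0),   # mW, Gb/s (assumed)
]

for name, power_mw, rate_gbps in devices:
    print(f"{name}: {energy_per_bit_pj(power_mw, rate_gbps):.2f} pJ/bit at {rate_gbps:.0f} Gb/s")
```

At data center scale, millions of such devices multiply every picojoule saved per bit into megawatts of facility power, which is why the power-per-bit metric matters as much as raw speed.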

In more detail, the first macro area, increased switch density and size, has already become a big issue for data centers with their need for space. The second area, higher information flow for increased computational processing, has become a topical subject over the last few months with its need for speed. Lastly, the third area calls for low power consumption, lower heat generation, and, in general, ‘greener’ solutions.

The end users that own and operate data centers as part of the internet need to strike an optical balance, making sure these macro drivers shape next-generation systems while choosing optical components that support the metrics that will keep their data centers competitive and sustainable.

Survival by the numbers: measuring the strain of AI

The internet continues to grow, as do the performance issues caused by the demand for information, so what must data centers do to survive? Three measures bubble quickly to the surface, since it is well known that AI generates more computational output, and that information needs to be communicated and routed efficiently by data centers to its various destinations:

  • Data centers need to handle higher data rates, higher bandwidth, and more information transfer. Information ‘traffic jams’ are not a long-term solution.
  • Data centers need to operate with lower power consumption. Consuming a major share of the national electricity grid is not a long-term solution.
  • Data centers need to shrink and utilize space more effectively. Data centers are already the size of power stations: physically growing is simply not practical for communities.

Avoiding data traffic jams

Consider the internet as a highway system connecting every person and everything coast-to-coast using lanes that crisscross and overlap. Computational processing using neural network-based architectures for AI can be viewed as the intersections where those lanes meet.

For data to travel well, those intersections need to accommodate dozens of lanes to avoid congestion at the points where data must be routed in the correct direction. Computing power is currently doubling every two to four months, which means that the data centers that switch and route the traffic to its destination are essentially clogging up. That means the folks who run and operate the data centers need to update and upgrade them more quickly than before. Data center upgrades are typically undertaken every two to four years, depending on data center design and company, but with AI and the surge in computational processing demand, the process could need to happen an order of magnitude faster, which could generate significant operational complexity.
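
A rough calculation shows why this mismatch is so painful: if demand doubles every few months while a facility is refreshed only every few years, demand grows by orders of magnitude between upgrades. The Python sketch below uses the doubling periods quoted above; the three-year refresh interval is simply an illustrative assumption.

```python
# How much does compute demand grow between data center upgrades?
# Doubling periods follow the ranges quoted in the text; the 3-year
# refresh interval is an illustrative assumption.

UPGRADE_INTERVAL_MONTHS = 36  # assume one major refresh every three years

def growth_between_upgrades(doubling_period_months: float) -> float:
    """Factor by which demand grows over one refresh interval."""
    return 2 ** (UPGRADE_INTERVAL_MONTHS / doubling_period_months)

for label, doubling in [("historical pace (doubling every ~4 years)", 48),
                        ("AI-era pace (doubling every ~3 months)", 3)]:
    print(f"{label}: ~{growth_between_upgrades(doubling):,.0f}x per upgrade cycle")
```

Under the historical pace a facility absorbs roughly one doubling per refresh; under the AI-era pace it would face thousands of times more demand before the next refresh, which is why upgrade cadence itself becomes the bottleneck.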

The rate and complexity of forklift upgrades are expected to be an ongoing headache for data center system architects well into the next decade. The increased computing demands and information flow are driving power consumption up, along with traffic on the optical network and internet. On top of the steady, consistent underlying increase in traffic, the impact of AI is expected to be significant and to strain the system.

These strains can be measured in three ways: power consumption, increased traffic, and size.

As power consumption increases, more heat is generated, which data centers would like to suppress with more efficient components. The power consumed in data centers is rising exponentially, with no end in sight; it is on a trajectory that is not sustainable. The traffic, the data sent over the internet as information and now generated in part by AI, is also on an unsustainable trajectory and, like power consumption, has no plateau in sight. The size of data centers is already becoming unmanageable, with facilities now located outside the communities that need access to their capabilities. While there are network architectures that distribute data centers across metro areas, their size is increasing, not declining. This density metric is not sustainable.

These metrics are intended to expose any Achilles heel. Driven in part by the rise of AI, additional metrics are increasingly needed to avoid data center strain, including data protection and the rising cost of the competent employees who keep data center operations on track. AI has progressed from a scientific curiosity explored by a few thousand engineers to wholesale infatuation among roughly 100 million end users (and growing). With that many users, creativity and innovation will accelerate, and new and innovative ways will emerge for AI to become more valuable and useful. On data protection, the sheer volume of users may prompt governments to close these paths down and limit access to what might be considered sensitive information. That, too, is a situation that is not sustainable, and it may well have a huge negative impact on the utility of data centers.

Gauging the impact of AI

We’re only two decades removed from dial-up modems that could take an average of 10-20 minutes for the novelty of downloading a single image. Only 10 years ago we might wait 10-20 minutes to download a short video clip or TV show. As the wait times shortened, image and video notoriously became the largest generators of internet traffic. Within the last year it has become apparent that AI and raw computational processing are likely to take up just as much room, and much faster.

For example, Netflix reached the milestone of one million subscribers in 2003, roughly 3.5 years after launching its subscription service. ChatGPT, by contrast, gained one million users in five days. ChatGPT is an AI platform, similar in concept to Bing, Bard, and the like, that lets users experiment with artificial intelligence prompts for simple projects. On a more generic social platform scale, Meta’s Threads is reported to have attracted over 100 million users in a little over a week. One observation is that when something takes off in approximately a week, it may mean it has already had an impact on society and is set to accelerate that impact very quickly indeed, possibly in ways we can’t imagine today.
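
Normalizing those milestones to average sign-ups per day makes the steepening adoption curve easier to see. A minimal sketch using the figures quoted above:

```python
# Average sign-ups per day implied by the adoption milestones quoted above.
milestones = [
    ("Netflix (1M subscribers in ~3.5 years)", 1_000_000, 3.5 * 365),
    ("ChatGPT (1M users in 5 days)", 1_000_000, 5),
    ("Threads (100M users in ~1 week)", 100_000_000, 7),
]

for name, users, days in milestones:
    print(f"{name}: ~{users / days:,.0f} sign-ups per day")
```

The jump from hundreds of sign-ups per day to millions per day is the scale shift that data center infrastructure now has to absorb.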

If we look at the growth of computing power in high-end computational processing systems over the past 60 years, we know that this power initially doubled roughly every 3-5 years. Then, from about 2020 onwards, the pace increased by over an order of magnitude, or 10X, to a doubling of computational power every 3-4 months (measured in petaflops, a metric of computational processing throughput). This increase has been driven, for the most part, by the neural networks that make up artificial intelligence, and it places significant strain on the infrastructure supporting computational processing. Key components enabling this level of processing are ICs such as graphics processing units (GPUs) and microprocessor units (MPUs). The capacity of optical signals traveling down fibers needs to increase, and the places where the electronic processing takes place, typically data centers, clearly need to deal with higher amounts of information. The economic cost for internet and optical network operators will rise.
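
Expressed as an annual growth factor, the shift is dramatic, as the small calculation below shows; the specific doubling periods are taken from the ranges quoted above.

```python
# Annual compute growth implied by different doubling periods.
def annual_growth(doubling_period_months: float) -> float:
    """Growth factor over 12 months for a given doubling period."""
    return 2 ** (12 / doubling_period_months)

print(f"Doubling every 4 years    -> ~{annual_growth(48):.2f}x per year")
print(f"Doubling every 3.5 months -> ~{annual_growth(3.5):.1f}x per year")
```

Roughly 20% growth per year is what fiber capacity and data center refresh cycles were designed around; roughly 10x per year is not.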

Alleviating data center strain

This article reviews the negative effects, or strain, that increasing computational processing from the rise and popularity of AI is placing on the internet and optical network. AI is driving higher computational processing, which in turn is generating more traffic and a lot of heat. Higher traffic and power consumption are becoming a problem for the internet’s architectural infrastructure in places such as data centers, and they need to be addressed quickly before they become a weakness or vulnerability, even the Achilles heel for the industry.

A new electro-optic polymer material for an optical component called a modulator is being developed to replace the semiconductor modulators used across the internet today. A modulator, in general, switches and modulates light; there are millions of these devices in the internet today, and they are semiconductor based. The established incumbent semiconductor solutions are struggling to deal with the higher data rates, higher traffic volumes, and low power requirements necessary to cope with AI.

Electro-optic polymer modulator devices offer superior speed and lower power than existing technologies. These polymer modulators are fast, stable, reliable, low in power consumption, and very small in size. They are physically positioned in front of the lasers that generate the light for the internet, and the modulator function encodes and sends information down the fiber optic cable for the network and internet. The electro-optic polymer materials at the heart of the modulators are reliable and stable in performance, similar in a simple way to OLEDs (organic LEDs, which are also polymers, with a different chemical composition, that emit red, green, or blue light when a voltage is applied). Electro-optic polymer modulators also have a voltage applied to them, and when this happens the performance is not just exciting but remarkable, which is ideal for addressing some of the strains and Achilles heels that data center operators are facing. It may well be that the negative impacts of AI on data centers could be alleviated using a simple class of material we are all comfortable with: electro-optic polymers.
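
For readers who want a feel for how applying a voltage actually encodes light, the sketch below models a generic Mach-Zehnder-style electro-optic modulator, a common structure for this class of device (the article does not specify the exact device geometry, and the half-wave voltage used here is a hypothetical value, not a product specification). Output intensity follows a cos² function of the drive voltage, so swinging the voltage between 0 and Vπ turns the light fully on and off.

```python
import math

# Generic Mach-Zehnder-style electro-optic modulator model (illustrative only).
# V_PI, the half-wave voltage, is a hypothetical value, not a device spec.
V_PI = 1.5  # volts (assumed)

def transmission(voltage: float) -> float:
    """Normalized optical output power for a given drive voltage."""
    return math.cos(math.pi * voltage / (2 * V_PI)) ** 2

# Encode a bit pattern: 0 V lets light through ('1'), V_PI extinguishes it ('0').
bits = [1, 0, 1, 1, 0]
for b in bits:
    v = 0.0 if b else V_PI
    print(f"bit {b}: drive {v:.1f} V -> output {transmission(v):.2f} (normalized)")
```

The lower a material’s half-wave voltage, the less drive voltage and power are needed to switch the light, which is exactly the advantage being claimed for electro-optic polymers over incumbent semiconductor materials.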

Achilles helped us learn to identify vulnerability as part of an overall strength. There are five clear vulnerabilities for data centers, driven in part by the rise of AI, which are listed below:

  • How will data centers handle higher data rates?
  • How will data centers handle the rise in power consumption?
  • How will data centers keep size in check?
  • How will data centers look to reduce salaries and operating expenses through the advancement of AI?
  • How will data centers deal with data protection when individual countries and governments tighten the availability of information?

The first three can be handled by new technologies such as polymers. The fourth and fifth vulnerabilities are storm clouds on the horizon.

We are all aware that data center employees command extremely high salaries today. Advanced, mature AI has the potential to reduce the data center workforce and save operators huge amounts in operating expenses. Why would data center operators keep paying such high salaries if AI can carry the design and architecture load for next-generation solutions? Operators will strive to streamline and reduce operational expenses, and this may not be sustainable; it could have a negative impact on the attractiveness of working in the data center field.

Further, data protection issues are likely to become a major headache: how will data centers deal with the effects of social media and open access on data protection internationally? Data protection is generally governed country by country, with selected information restricted in each jurisdiction and some countries restricting more than others. AI requires significant amounts of information to be effective, as its neural network architectures implement learning algorithms that depend on large training datasets. Will government restrictions and national boundaries on data protection eventually limit the use and impact of AI? Will this limitation affect the role of data centers and their operational business model? The vectors for these issues are not looking good today, and this could very well be an unsustainable situation with a huge negative impact on the utility of data centers overall.

Summary

While AI is expected to mature and accelerate in popularity, the impact on data centers is serious and will place an incredible level of strain on future data center architecture. Five negative impacts have been outlined in this article, with one alleviation being the design and implementation of very high-performance polymer optical modulators, which have already demonstrated the capability to modulate light faster, reduce power consumption, and fit in a tiny footprint the size of a grain of salt. Together with data centers operating more efficiently with lower operating expenses, and innovative approaches to data protection, there might be a route forward where data centers survive the Achilles arrow to the tendon.

We've featured the best productivity tool.

Dr. Michael Lebby is Chairperson and CEO, Lightwave Logic.