Investors pour money into photonics startups to stop data centers from hogging energy and to speed up AI



Oriole Networks, a British company planning a completely new networking infrastructure for AI supercomputing clusters, one based on light rather than electricity to transmit data, has raised $22 million from the London-based venture capital firm Plural.

Photonics, the science of generating, manipulating, and detecting light, is suddenly a hot topic in the tech industry as a potential solution to two big problems facing AI data centers: their colossal electricity demands and the time it can take to train the largest AI models on massive datasets. Just this week, two other companies working on photonic networking for AI chips announced major funding rounds.

Lightmatter announced it had raised $400 million in a venture capital deal led by T. Rowe Price that values the seven-year-old company at $4.4 billion. And Xscape Photonics announced it had closed a $44 million investment round led by IAG Capital, with the venture capital arm of network equipment maker Cisco and Nvidia among its other investors.

No valuation figures were announced as part of either Xscape’s or Oriole Networks’ fundraises, both of which were Series A rounds.

The reason photonics is suddenly in vogue has to do with a series of challenges tech companies are encountering as they seek to build ever larger data centers stuffed with hundreds of thousands of specialized chips, in most cases graphics processing units (GPUs), used for training and running AI applications.

Conventional networking and switching equipment, which primarily uses copper wiring carrying electrical signals to convey information, is itself becoming a bottleneck to how quickly and easily large AI models can be trained. In other cases, fiber-optic cables are used, but they carry only a few colors of light per cable, which also constrains how much information can be transmitted.

AI models based on neural networks must shuttle a lot of data continuously back and forth through the entire network. But moving all this data between GPUs, including those that might be located in distant server racks, depends on wiring pathways and the capacity of switching equipment to send data zipping to the right place.

The way many large AI supercomputing clusters are wired, data traveling from one chip to another located elsewhere in the cluster might have to make as many as nine hops through different network switches before it reaches its destination, said George Zervas, Oriole Networks’ cofounder and chief technology officer.

The larger the AI model and the more server racks involved, the more likely it is that this roadway of wiring becomes congested, much as traffic jams delay commuters. For the largest AI models, as much as 90% of training time can be spent waiting for data in transit across the supercomputing cluster rather than on the computations the chips actually run.
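To see why that matters, here is a rough back-of-envelope sketch, using made-up numbers rather than figures from Oriole or this article, of how a communication-dominated training step limits what faster chips alone can deliver: when most of a step is spent waiting on the network, shrinking network time pays off far more than speeding up the GPUs.

```python
# Back-of-envelope illustration (assumed numbers, not vendor figures):
# when network transfers dominate a training step, faster compute alone
# barely helps, while faster interconnects change the picture.

def step_time(compute_s: float, comm_s: float) -> float:
    """Total time for one training step: compute plus communication."""
    return compute_s + comm_s

# Hypothetical step in which 90% of the time is data in transit.
compute_s = 0.1   # seconds of actual GPU math per step
comm_s = 0.9      # seconds waiting on inter-GPU network transfers

baseline = step_time(compute_s, comm_s)

# Doubling compute speed barely moves the needle...
faster_compute = step_time(compute_s / 2, comm_s)

# ...whereas cutting communication time 10x, the kind of gain photonic
# interconnects target, matters far more.
faster_network = step_time(compute_s, comm_s / 10)

print(f"baseline step:      {baseline:.2f} s")
print(f"2x faster GPUs:     {faster_compute:.2f} s "
      f"({baseline / faster_compute:.2f}x speedup)")
print(f"10x faster network: {faster_network:.2f} s "
      f"({baseline / faster_network:.2f}x speedup)")
```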

Conventional networking equipment, which uses electricity to transmit data, also contributes significantly to the energy requirements of data centers, both by consuming power directly and because the copper wiring dissipates heat, meaning more energy is required to cool the facility. In some data centers, the networking equipment alone can account for 20% of overall energy consumption.

Depending on what energy source is used to power the data center, this electrical demand can result in a colossal carbon footprint. Meanwhile, many data centers require vast quantities of water to help cool the racks of chips used to run AI applications.

Cloud computing companies anticipate power needs for future AI data centers so large that they are going to extreme lengths to secure enough energy. Google, Amazon, and Microsoft have all struck deals that would see nuclear reactors dedicated solely to powering their data centers. Meanwhile, OpenAI has briefed the U.S. government on a plan to possibly construct multiple data centers that would each require five gigawatts of power, more than the entire city of Miami currently uses.

Photonics could potentially address all of these challenges. Using fiber optics to transmit data as light instead of electricity makes it possible to connect more of the chips in a supercomputing cluster directly to one another, reducing or eliminating the need for switching equipment. Photonics also uses far less electricity to transmit data than electronics, and photonic signals generate essentially no heat in transit.

Different photonics companies have different ideas about how to use the technology to revamp data centers. Lightmatter is creating a product called Passage, a light-conducting surface onto which multiple AI chips can be mounted, allowing photonic data transmission between any of the chips on that surface without cabled connections or copper wiring. Fiber-optic cabling would then be used to connect multiple Passage products within a single server rack and to link racks together. Xscape envisions photonic equipment and cabling that can transmit and detect hundreds of different colors of light through a single cable, vastly increasing the amount of data that can flow through the network at any one time.
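As a rough illustration of why packing more colors into a fiber matters, and using assumed per-channel data rates rather than Xscape’s actual specifications, a cable’s aggregate bandwidth scales roughly with the number of wavelength channels multiplied by the rate each channel carries:

```python
# Rough wavelength-division multiplexing arithmetic (assumed figures,
# not Xscape specifications): a fiber's aggregate bandwidth scales with
# the number of light "colors" (wavelength channels) it carries.

def fiber_bandwidth_gbps(channels: int, gbps_per_channel: float) -> float:
    """Aggregate bandwidth = wavelength channels x per-channel data rate."""
    return channels * gbps_per_channel

# A conventional link carrying only a handful of colors...
few_colors = fiber_bandwidth_gbps(channels=4, gbps_per_channel=100.0)

# ...versus a hypothetical link carrying hundreds of colors.
many_colors = fiber_bandwidth_gbps(channels=400, gbps_per_channel=100.0)

print(f"4 channels:   {few_colors / 1000:.1f} Tbps per fiber")
print(f"400 channels: {many_colors / 1000:.1f} Tbps per fiber")
```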

But Oriole Networks may have the most sweeping vision: using photonics to connect every AI chip in a supercomputing cluster to every other chip in the entire cluster. This could make training times for the largest AI models, such as OpenAI’s GPT-4, 10 to 100 times faster, Oriole Networks said. It could also mean models can be trained using a fraction of the power that today’s AI supercomputing clusters consume.

To accomplish this, Oriole envisions not just new photonic communication equipment but also new software to help program the network, and a new hardware device that can act as the “brain” for the entire network, determining which packets of information will need to be sent between which chips at exactly what moment.

“It’s completely radical,” Oriole CEO James Regan said. “There’s no electrical packet switching in the network at all.”

Oriole Networks was spun out of University College London in 2023, but it relies on technology that its founders, in particular Zervas, pioneered over the past two decades. In addition to Zervas, a veteran photonics researcher, the company was cofounded by UCL PhD student Alessandro Ottino and postdoctoral fellow Joshua Benjamin, an expert in designing communication networks. They brought on Regan, an experienced entrepreneur who helped create a previous photonics company, as CEO.

The company currently employs 30 people. It raised an initial seed round of $13 million in March from a group of investors that includes the venture capital arm of XTX Markets, which operates one of the largest GPU clusters in Europe. UCL Technology Fund, XTX Ventures, Clean Growth Fund, and Dorilton Ventures all participated in both the seed round and the most recent Series A investment.

Regan said that Oriole is using other companies to manufacture the photonic equipment it designs, which keeps its capital requirements lower than they would otherwise be and allows it to move faster. He said the company aims to have initial equipment with potential customers to test in 2025.

The company has held discussions with most of the “hyperscale” cloud service providers as well as a number of semiconductor companies manufacturing GPUs and AI chips.

Ian Hogarth, the partner at Plural who led the Series A investment, said that he was drawn to Oriole Networks because it represented “a paradigm shift” rather than an incremental approach to making AI data centers more energy and resource efficient. Hogarth, who is also the chair of the U.K.’s AI Safety Institute, said he was impressed by the “raw ambition and speed that [Oriole’s] founders have brought to the problem.”

He said the company fit in with other investments Plural has made into companies helping to combat climate change. Finally, he said he felt it was important for Europe “to have really hard assets when it comes to the evolution of the compute stack, and to not squander the opportunity to translate brilliant inventions from European universities, UK universities, into iconic companies.”

Of course, there has been hype about photonics before, and it hasn’t always panned out. During the first internet boom of the late 1990s and early 2000s, there was also great excitement about the potential for photonics to become the primary backbone of the internet, including for switching equipment. Venture capitalists back then poured money into the sector, too. But most of those investments failed because the photonics industry lacked maturity: parts were difficult and expensive to manufacture and had higher failure rates than semiconductors and more conventional electronic switching equipment. Then, when the dot-com bubble burst, it largely took the photonics boom down with it.

Regan says that things are different today. The ecosystem of companies making photonic integrated circuits and photonic equipment is more robust than it was, and the technology is far more reliable, he said. A decade ago, a company like Oriole Networks would have had to manufacture much of the equipment it wants to produce itself, a far more capital-intensive and risky proposition. Today, there is a reliable supply chain of contract manufacturers that can execute designs developed by Oriole, he said.


