
Photonic "Engines" for Data Centers

Building Coherent Optical I/O at Scale with Hitesh Sahoo from Phanofi

Hello friends, colleagues and enemies. Last issue we talked about stopping data movement at the chip level. The memory wall. Compute and memory sitting too far apart, shuttling bits back and forth like some kind of digital logistics nightmare. Manu had the framing right: data movement is the meta-problem.

Well. Same problem exists one level up. Getting data between chips, between racks, between buildings. And copper is dying. Not literally, obviously; it’s inert. The physics is straightforward: as you push bandwidth higher, copper’s reach shrinks. A decade ago you could run copper across the data center floor. Now it doesn’t make it out of the rack. Next gen, it won’t make it off the board. So we get optical interconnects: converting electrons to photons, shipping them down fiber, converting back.

But conversion is expensive. In power and in latency. Which is money. And interestingly, the optical links inside data centers today are basic compared to what the telecoms industry has been doing for decades. Long-haul networks use ‘coherent optics’, encoding data not just in the intensity of light but in its phase and polarisation, which means you can pack more data onto each wavelength. But coherent systems require monstrous digital signal processors (DSPs) that consume 3-4x more power and cost 4-5x more than intensity-based systems. Too expensive for the volume game inside a hyperscaler.

Phanofi, a Danish startup I spoke to for this issue, claims they can bridge that gap. Their bet isn’t on exotic new modulator materials like lithium niobate or barium titanate. It’s on the detection side: an architecture for recovering optical signals that maintains coherent efficiency while working with standard DSPs and standard foundries. No new manufacturing processes, no supply chain disruption. The pragmatist in me loves it.

Hitesh Kumar Sahoo, the CEO, did his PhD in integrated photonics and has been deep in the foundry ecosystem. His argument: the industry doesn’t want disruption, it wants compatibility. Hyperscalers are spending billions building supply chains around specific DSP vendors, specific foundries, specific packaging houses. Any solution that requires them to rebuild that infrastructure is dead on arrival.

The interview gets into the technical details of coherent versus intensity-based systems, why the detection side is the real bottleneck, and where co-packaged optics fits into all of this.

What did I learn?

  • Data movement is the meta-problem at every level of abstraction. Last issue it was compute-to-memory. This issue it’s chip-to-chip, rack-to-rack. The principle is the same: stop shuffling bits around unnecessarily, and when you must shuffle them, do it as efficiently as physics allows.

  • Coherent optics inside data centers is when, not if. The bandwidth requirements are pushing past what intensity-based systems can deliver cost-effectively. The question is who captures the value: incumbent DSP vendors who add coherent capability, new entrants with novel architectures, or vertically integrated hyperscalers who build their own. Value capture!

  • Foundry compatibility is the moat. Exotic materials make for exciting papers but conservative supply chains. Phanofi’s focus on working with existing foundry processes is a strategic choice. It’s the classic trade-off: something novel that’s 10x better but asks customers to adapt, versus something 2-3x better with no adaptation required.

It’s a tricky business.


The State of the Future Show

Tell me about what you do at Phanofi.

At Phanofi we’re building photonic engines that help data centers save energy when moving data from one point to another. Computing is done in electronics with zeros and ones, but optics is the preferred approach for sending data. You need to convert efficiently from electronics to optics, and this is what our engine does—it converts data from zeros and ones to light and then back from light to zeros and ones so another computer can read and process it.

How does this differ from an analog-to-digital converter?

An ADC generally operates within the electronics domain where you have digital and analog processing. What we do is very similar, except we’re going from electrons to photons rather than staying within electrons.

When you convert from electrons to photons, don’t you need specific materials? Can’t you just do it in silicon?

You can actually do it in silicon, and until now silicon was the default. There are new materials in the ecosystem which are much more efficient. But when we’re aiming for gigabits or terabits of data being moved, we’re running out of power budget, and this has pushed the industry to find new materials.

Let’s think really basically about the components required. What does converting to optics actually mean?

Start with electronics—zeros and ones. How do you put that onto light? You have a continuous wave laser that’s switched on, and you have an element called a modulator that’s modulating the intensity of that laser. On the other side, you would see the laser turning on and off, and the modulator is doing that job based on the zeros and ones coming from the data. On the receiving side, you have a photodiode or photodetector that’s detecting whether there’s light or no light, giving a signal. That’s the very simple implementation: on the transmit side you have a laser with a modulator, the light carries the data, and on the receiving side the data is extracted by detecting the presence or absence of light.
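The laser–modulator–photodiode loop described above can be sketched as a toy on-off-keying simulation. This is purely illustrative (real links contend with dispersion, jitter, and clock recovery, none of which are modeled here), but it shows the essential idea: intensity carries the bits, and the receiver just thresholds what the photodiode sees.

```python
import numpy as np

def ook_transmit(bits, samples_per_bit=8, high=1.0, low=0.0):
    """Modulator model: switch the laser intensity high for a 1, low for a 0."""
    levels = np.where(np.asarray(bits) == 1, high, low)
    return np.repeat(levels, samples_per_bit)

def ook_receive(intensity, samples_per_bit=8, threshold=0.5):
    """Photodiode model: average the intensity in each bit slot, then threshold."""
    slots = intensity.reshape(-1, samples_per_bit)
    return (slots.mean(axis=1) > threshold).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 0, 1, 0]
waveform = ook_transmit(bits)
# Add a little receiver noise; averaging over the bit slot suppresses it.
noisy = waveform + np.random.default_rng(0).normal(0, 0.1, waveform.size)
recovered = ook_receive(noisy)
assert recovered == bits
```

The threshold decision is why intensity-based receivers are simple: no phase recovery, no heavy DSP, just "light or no light".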

So traditional electronic computing uses digital zeros and ones, and you’re saying on-off is the equivalent—flashing as fast as you can?

Exactly. This isn’t new—it’s been used for long-distance communication. You used to have a lantern switching on and off for signaling.

The speed is probably why photonics companies exist now. A decade ago, modulating and detecting light was fine, but now we need it faster?

Yes. It’s leapfrogged significantly in the last couple of years. There’s a limit to how much you can push the technology with a basic laser-modulator-photodetector structure—how fast you can put data on the light source and how fast you can extract it. That’s why there’s tremendous focus on building high-speed modulators. Companies like Hyperlight are building lithium niobate modulators, Lumiphase is using barium titanate—lots of interesting approaches.

Why do new materials make modulators faster?

There’s a limitation on how fast you can modulate in silicon because of how modulation works—you have movement of carriers across metal plates, which creates fundamental physics limits. Lithium niobate and barium titanate operate on different mechanisms, so they can go faster. There are even newer materials like organic hybrids that can go faster still. The industry is experimenting and testing new modulators.

Beyond faster modulators, there’s also parallelization. Can you explain how that works with light?

The industry is exploring multiple approaches rather than just focusing on one component. They’re using multiple wavelengths—this is where CWDM [Coarse Wavelength Division Multiplexing] and DWDM [Dense Wavelength Division Multiplexing] come in. But there’s another architecture: coherent systems, which use intensity, phase, and polarization of light. This isn’t about different wavelengths—for each wavelength you can maximize the amount of data by using its polarization and phase. This technology is used for long-distance communication outside data centers. It’s just expensive and power-consuming, so it doesn’t scale well when brought inside data centers.
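A minimal sketch of why coherent formats pack more bits per wavelength: dual-polarization QPSK carries two phase-encoded bits on each of two polarizations, so four bits per symbol where on-off keying carries one. The constellation and Gray coding here are textbook, not anything specific to Phanofi's architecture.

```python
import numpy as np

# Gray-coded QPSK constellation: 2 bits -> one unit-amplitude complex symbol.
QPSK = {
    (0, 0): np.exp(1j * np.pi / 4),
    (0, 1): np.exp(1j * 3 * np.pi / 4),
    (1, 1): np.exp(1j * 5 * np.pi / 4),
    (1, 0): np.exp(1j * 7 * np.pi / 4),
}

def dp_qpsk_modulate(bits):
    """Map groups of 4 bits to one dual-polarization symbol (x_pol, y_pol)."""
    assert len(bits) % 4 == 0
    return [
        (QPSK[tuple(bits[i:i + 2])], QPSK[tuple(bits[i + 2:i + 4])])
        for i in range(0, len(bits), 4)
    ]

def dp_qpsk_demodulate(symbols):
    """Recover bits via nearest-constellation-point decision per polarization."""
    inverse = {point: pair for pair, point in QPSK.items()}
    bits = []
    for x_pol, y_pol in symbols:
        for received in (x_pol, y_pol):
            nearest = min(QPSK.values(), key=lambda c: abs(c - received))
            bits.extend(inverse[nearest])
    return bits

bits = [0, 1, 1, 0, 1, 1, 0, 0]
symbols = dp_qpsk_modulate(bits)
assert len(symbols) == 2  # 4 bits per symbol, vs 1 bit for on-off keying
assert dp_qpsk_demodulate(symbols) == bits
```

The catch, as the interview goes on to explain, is that recovering phase and polarization at the receiver is exactly what demands the expensive coherent DSP.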

At a system level, what’s the core problem? Why can’t we just keep using electronics?

There’s a limit to how far you can go with copper. The losses increase significantly as you go to higher speeds, and the link length over which a signal can be transmitted on copper shrinks. When speeds were lower, copper links could be much longer. Now as we go to higher bandwidth, copper is just behind the rack. For the next generation of bandwidth, it’s going to be even shorter. That’s why people are trying to bring optical interconnects behind the rack as well.
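As a back-of-envelope illustration of the shrinking-reach argument: skin-effect attenuation per meter in copper grows roughly with the square root of frequency, so for a fixed loss budget, reach falls as bandwidth climbs. The reference point (about 3 m at a 25 Gb/s lane) is my illustrative assumption; real reach depends on cable gauge, equalization, and FEC.

```python
import math

def copper_reach_m(data_rate_gbps, ref_rate_gbps=25.0, ref_reach_m=3.0):
    """Toy skin-effect model: loss/m ~ sqrt(frequency), so for a fixed
    loss budget, reach scales as 1/sqrt(data rate). Reference point is
    an illustrative assumption, not a measured spec."""
    return ref_reach_m * math.sqrt(ref_rate_gbps / data_rate_gbps)

for rate in (25, 100, 200):
    print(f"{rate:>4} Gb/s lane -> ~{copper_reach_m(rate):.1f} m of copper")
```

Doubling lane rate twice roughly halves reach in this model, which matches the trajectory Hitesh describes: across the floor, then behind the rack, then not even off the board.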

How does what Phanofi is doing fit alongside other photonics companies?

At Phanofi, we’re coming up with an alternate architecture for how to put data on light and take it out. We’re competing at a higher abstraction level. Current implementations inside data centers only use intensity-based modulation. Outside data centers, systems put data on intensity, phase, and polarization, but they use very complex, expensive, and power-hungry equipment.

Why is coherent technology used outside but not inside data centers?

Outside data centers, there’s a need for high bandwidth efficiency—you pack more data per laser for long-distance communication. The receivers are complex and power-expensive, but the number of deployments is significantly lower, so the cost is absorbed. Inside data centers, the volume is significantly higher, which is why coherent technology that exists outside cannot just come into data centers. There’s also a gray zone—today’s data centers aren’t just 500 meters anymore. Links are going to 2 kilometers, 10 kilometers. There’s this space between intensity-based data center interconnects and coherent systems where both struggle. Intensity-based systems face power walls and are very expensive going to 1.6T or 3.2T implementations. Coherent systems, even though bandwidth-efficient, can’t get in because of cost and power constraints. We’re saying we can bring the efficiency of coherent systems at the simplicity and cost of intensity-based systems.

You mentioned three types of modulation used outside data centers—intensity, phase, and polarization. Why is that equipment so much bigger, bulkier, and less efficient?

I wouldn’t say inefficient—it’s not inefficient for the purpose outside. But when you bring that technology inside data centers, it becomes inefficient relative to the requirements.

So Phanofi is proposing to bring coherent systems into data centers by improving the modulator?

Actually, our main innovation lies in the detection side. Modulation has largely been solved—people have been able to use similar architectures to do phase and polarization modulation. The detection side is the problem. If you break open these interconnect boxes, there are two parts: the electronic DSP and the optical part. The electronic DSP is the real challenge. If you compare an intensity-based DSP and a coherent-based DSP, the coherent DSP consumes 3-4 times more power and costs 4-5 times more.

Why does it consume so much more power?

In an intensity-based system, you’re only modulating light, so you have a photodiode detecting zeros and ones really fast. The DSP does some cleanup and error correction. But in a coherent system, the DSP handles a significant part of the decoding. When the photodiode receives information before the DSP processes it, you cannot make sense of it—it’s almost noise. The DSP takes it and runs through extensive algorithms to extract the real data. It’s a marvelous piece of engineering, but it’s overkill for what we’re trying to achieve within a data center. Because so much computation happens inside that chip, it ends up being expensive and power-consuming.
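To see why "more power but more bits" is a genuine gray zone, here is a toy per-bit energy comparison using the multiples from the interview. The absolute wattage, the 400G rate, and the cost units are illustrative assumptions, not vendor numbers.

```python
# Illustrative only: the interview cites 3-4x power and 4-5x cost for a
# coherent DSP vs an intensity-based one; the rate gain depends on format.
intensity = {"power_w": 5.0, "gbps": 100, "cost": 1.0}        # assumed baseline
coherent = {"power_w": 5.0 * 3.5, "gbps": 400, "cost": 4.5}   # DP-QPSK-like 4x rate

def picojoules_per_bit(link):
    # watts per (Gb/s) equals nanojoules per gigabit, i.e. picojoules per bit
    return link["power_w"] / link["gbps"] * 1000

print(f"intensity: {picojoules_per_bit(intensity):.1f} pJ/bit")
print(f"coherent : {picojoules_per_bit(coherent):.1f} pJ/bit")
```

On per-bit energy, coherent can come out ahead; on absolute power, cost, and deployment volume, it doesn't. That gap between the two metrics is exactly the space Phanofi's detection architecture is aimed at.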

So you’re focused on the photodiode, the detector side. How have you made it better for this use case?

We’re building on an industry platform—an industry-validated foundry model. We’re using industry foundries to manufacture our chips.

Why is using existing foundries important?

Any new material requires a new manufacturing process. Big foundries at high volumes don’t readily adopt new processes because of contamination risk and the need to develop entirely new tool sets. What we’re saying is you can take all the tools you already have, and we can make our device with those materials—no additional contamination risk. You can just make our stuff as you would normally. That’s the key thing we’ve done that nobody else in the industry has done.

What’s the actual technical achievement that colleagues wouldn’t have thought possible?

This is a highly conservative market built on supply chains. They want compatibility. We’re talking about engines for optics, but there’s also the DSP sitting next to us in the interconnect, and we need to be compatible with that. Last September, we demonstrated this. We collaborated with one of the leaders in DSP manufacturing, got their evaluation board, interfaced it with our photonic chip, and showed we can do 400 gigabits per second per laser module together with their equipment. That’s a big proof point to the industry—we’re not disrupting your supply chain. We can use what you’re doing and show this architecture can work.

Can you explain pluggables and co-packaged optics (CPO)?

This comes from the need for low-power interconnects. Current implementations have pluggables—essentially large Ethernet cables that plug into switch boxes. Inside the switch, there’s a CPU in the center with a path routing from the interface to the main chip. CPO wants to eliminate that path—they don’t want copper traces going from the central CPU to the interconnect. Instead, they want to bring optics closer to the CPU. It’s not just a new implementation, it’s an architecture change. CPO is being pushed by big industry players for future data center architectures where instead of switch boxes, they bring their own CPU boxes.

Why do this? What’s the benefit?

Right now you have a CPU connected to a pluggable, and the pluggable has its own DSP. Instead of having two different places working with electronics, they want one place where only the CPU can directly drive data conversion into optics. You reduce that redundant DSP and eliminate copper traces, which improves noise and recovers some losses. There are benefits. But we should look at it holistically. CPO is more energy efficient in cost per bit transferred, but it requires expensive investment and big buy-in. If something goes wrong, tens of thousands of dollars is wasted—you have to throw away the whole unit. With pluggables, if one goes bad, you swap it quickly.

This sounds similar to the debate about integrating lasers on photonic chips versus using external lasers.

Yes, exactly. It’s the same trade-off at a different abstraction level—efficiency versus reliability.

What about Google’s optical circuit switching (OCS) implementation? Will the whole industry move toward optical circuit switching, or will it remain a proprietary Google advantage?

I’m not the best person to comment definitively, but I feel optics and photonics are much more powerful than what we see right now. We’re just getting started on how we can use photonics to improve efficiency in data communication. Wherever you have communication, optics and photonics has an edge over electronics. I’m closer to believing OCS systems will emerge as winners eventually.

You describe what you’re building as an engine. What will the product look like in five years?

We’re making optical engines—think of them as high-speed LEGO blocks. The reason I call it a LEGO block is because we want it to be modular. If you want to put it in a pluggable, we should be able to do that. If you want it in a CPO engine, we can do that too. That’s our approach to market. We have a proprietary way of putting data on light and taking it off light, so we need to make the whole engine—laser, modulator, detector side with our patented design, and photodiodes. All the photonics is ours; all the electronics is standardized. We talk to everyone who adheres to industry-defined standards.

Will you be shipping chiplets?

Exactly, yes. The advantage is that you unlock possibilities. When you’re a vendor designing one thing, you have a specific application and focus. But in a chiplet ecosystem, you’re opening up possibilities. People can combine your I/O module with something they’re building. You focus on what you do best, and the ecosystem automatically takes care of the application space.

If someone wants the world’s best laser but also needs your modulator and detector, could they license your IP?

Yes, exactly. It depends on where in the value chain you sit. For example, if we’re integrating a laser on our chiplet, we would need to license a laser from wherever we’re manufacturing. But someone building a CPO engine would license from us the I/O module block they need next to their CPU or XPU. Where you sit in the value chain determines how you interact or license. The beauty is that this works if standardizations are built as they’re planning. The silicon electronics industry was built on standardization, which accelerated growth so much that we’re seeing diverse forms of electronics at very cheap prices. Electronics is so ubiquitous we don’t even think about it.

But that only works if interfaces are standardized. Where are we with standards in photonics and chiplets?

It’s a pain point. There’s tremendous debate around it. If you put all the leaders from switch companies, NVIDIA, and other big companies—bring their photonic experts to the table—the one thing they’d agree on is we need standards. It helps everyone.

What specific standard would help you most reduce costs?

Packaging. Packaging standardization is important—how we get fiber from our chip, how we put multiple platforms together so the electronic interfaces work. For CPO implementation, how are those electronic lanes designed? Right now you have multiple implementations with very different I/O ports for RF, DC, or fiber connections. I’m starting to see some packaging houses begin with standards. Swiss Peak, for example, launched in November with a small initiative toward standardization. Their approach is to customize if you want, but they’re starting with some standards for packaging. It’s a very small step, but it’s progress. If we start designing PCB boards or modules where you can place your chips and everyone agrees on standards, it’s easier for chip manufacturers—we know what we’re designing to, and time to market adoption is much faster.

From NVIDIA or Broadcom’s perspective, don’t they want to build their moat and commoditize suppliers? Wouldn’t they want modulators and photodetectors to be commodities?

Yes, and this is where we differ from our competition. We’re not designing any single component. We’re coming with a new architecture—how do you take those components and make the function more efficient? We’re focused on modulation and demodulation efficiency. The hyperscalers are pushing suppliers to standardize and commoditize. But as an industry, we benefit from new innovations. When we’re trying a new architecture, we’re using industry components. The moment we have standardization—even though silicon photonics is non-standard, it was born out of old CMOS foundries, so there’s a certain level of standardization that exists. This has helped us as an early-stage startup tape out three times, get access to foundries, and test chips. There’s tremendous value even though there’s a downside. It enables a much bigger ecosystem to move forward.

Doesn’t getting components into engineers’ hands help build adoption faster than waiting for theoretical standards?

Exactly. With any new technology, there’s a learning phase. Photonics is going through this where we’re building PDKs [Process Design Kits] and more complex libraries. It’s definitely simpler compared to what electronic libraries look like today, but it’s not fair to compare given the resources that went into electronics. We’re already seeing the impact of photonics not just in communication but also in biosensing and quantum. Quantum will be a big enabling area for photonics. We’re seeing photonics, foundries, and PDK design at a very early stage. I’m very hopeful it’s moving in a positive direction. We see the pull in the industry. It’s only time that will decide how big it turns out to be.

Many photonics companies are building foundries predicated on new materials like lithium niobate. It seems like there won’t be a single dominant material like silicon in electronics. Wouldn’t we be better off with an integrated facility under one roof with multiple material processes?

Unfortunately, silicon photonics is more complicated than electronics. It’s not a one-to-one material platform solution. We need different materials because of different performance characteristics. For lasers being active, we need indium phosphide. For modulation, different materials work better. We need to create an ecosystem with different materials. But we should acknowledge foundries are insanely capital-intensive businesses. When you’re talking about a manufacturing facility, someone is putting in an insane amount of money to create a pilot plant and ensure quality control—producing the same thing every time. That’s the challenge. Different foundries are trying different approaches. GlobalFoundries has tried integrating silicon with CMOS layers on top. Tower Semi has integrated indium phosphide on top of silicon. Silicon is the base in everything because of cost and infrastructure. These foundries are opening up to different new materials being integrated on top. Some are testing lithium niobate and sharing results internally—they just haven’t announced publicly. TSMC wasn’t in photonics, and suddenly it’s doing a lot. They don’t want to say it out loud until they’ve proven it because there’s tremendous money going in. They need to generate revenue after that, so they’re very careful choosing which material platforms to integrate. But you’re correct—it’s moving toward that. XFAB, for example, is looking into transfer printing multiple platforms onto silicon. Foundries recognize this and many are taking that path.

You’re a startup trying to sell into one of the fastest-moving markets in history with data center buildout. What could NVIDIA or Broadcom announce that would make your business non-viable?

If they found another way to communicate data better than light, yes. If the industry somehow finds a new approach alternative to copper that’s not fiber or light—this is where carbon nanotubes come in, where you’re sending electrons rather than converting to photons. That makes more sense because you don’t have to convert and don’t pay for efficiency loss every time you convert to optics and back. Definitely that would impact us.

Could open-source transceiver designs kill the business?

That wouldn’t affect much because, as I said, we could take those elements and put them together. Our IP is on how we arrange those elements, which results in an efficient approach.

What about wafer-scale computing where you don’t need to move data off-chip?

Light has proved it’s the most efficient way to transmit data. When we’re talking about AI computing especially, we’re limited now on how much data we can process. These big wafers where you can do tremendous computing within one wafer without going to another—there are challenges with reliability, yield, and cost aspects. But that’s one way you don’t need to get data out, though at some point you do.

That’s interesting because it ties to the previous interview about compute-in-memory—doing more on-chip without going off-chip. But we’ll always need to go off-chip eventually.

The real question isn’t whether light is the best way to transmit data, but where does the cost-performance curve stop for photonics? To answer that, just see what the current implementation is. The industry is very cost-conscious, so it won’t have an architecture where it’s not cost-effective. Behind the rack is where optics currently doesn’t become more cost-effective. But going forward, in the next generation—we’re talking 500 meters to 2 kilometers, which is away from rack-to-rack—it’s getting more interesting to have optical fiber within the rack for efficiency gains in cost and power. There’s a reasonable argument for why optics can come very close to the CPU. It unlocks a significant bottleneck: how cramped do you want to make your system? Heat is the other challenge. With copper, you can have two XPUs or GPUs connected very close, but getting heat out becomes challenging. With fiber, you’re not limited by length—it’s essentially lossless—so you can stretch it significantly, spread things around for cooling, but still gain the same computation. The other thing is power. There’s a limit to how much power each rack can get, which limits how much compute you can pack in one rack. These system-level boundary conditions make optics much better in terms of cost-performance.

Let me summarize to see if I’ve understood correctly. At a system level, the problem is no longer compute—it’s moving data. Copper is hitting physical power limits, so optics is being pushed closer to the rack and eventually toward the package. You’re not just building a faster modulator or new laser—you’re proposing an architectural change. Long-haul networks already use coherent techniques with intensity, phase, and polarization to pack more data onto each wavelength, but this is complex, power-hungry, and expensive. Your bet is on the detection side—changing how optical signals are recovered to reduce DSP complexity and bring coherent-like efficiency from outside the data center into the data center at something like intensity-based cost and power. Importantly, you’ve stayed compatible with existing foundries, DSP vendors, and supply chains. It works with pluggables today and can work as a chiplet or part of CPO architectures over time.

Absolutely. I should use you for pitching all my fundraising. You nailed it.

Debrief

Some solid synergies with the chat with Manu, right? The through-line from the Synthara conversation to this one is almost too clean. Manu’s framing was “stop moving data” at the chip level: compute and memory are too far apart, so bring them together. Hitesh is solving the same problem one abstraction layer up: chips need to talk to each other, and copper can’t keep up, so optics has to come closer to the processor. It’s almost like I planned a narrative in advance.

What’s interesting is how both companies have made similar choices despite operating in different domains. Neither is betting on exotic new physics. Synthara isn’t doing analog compute-in-memory; they’re doing digital design with standard bit cells. Phanofi isn’t building lithium niobate modulators; they’re working with existing silicon photonics platforms. Both are saying: the industry doesn’t want revolution, it wants evolution that’s compatible with supply chains.

The co-packaged optics question is genuinely unresolved. Hitesh is diplomatic, but you can read between the lines. CPO makes sense on paper: eliminate the copper traces between the switch ASIC and the optical transceiver, reduce power, improve density. But the reliability and serviceability concerns are serious. If a pluggable fails, you swap it. If a CPO engine fails, you throw away the whole package. At tens of thousands of dollars per unit.

Phanofi’s chiplet approach is somewhat of a hedge. If pluggables win, they can sell into that market. If CPO wins, they can sell into that market. If some hybrid emerges, which seems likely, they can adapt. The modular “LEGO block” framing is never quite true in reality, but it does hedge against architectural uncertainty. Sort of like an FPGA instead of an ASIC.

The standardisation point deserves emphasis. Hitesh says if you put all the photonics experts from the hyperscalers in a room, the one thing they’d agree on is the need for standards. Packaging, interfaces, fibre attachment, all of it. The silicon electronics industry was built on standardisation, which is why you can buy commodity chips at scale. Photonics is still very much in the bespoke era, which keeps costs high and iteration cycles slow. Whoever drives standardisation will shape the industry for decades.

One question I didn’t push hard enough on: what happens when NVIDIA or Broadcom decides to vertically integrate optical I/O? They have the resources, the customer relationships, and the incentive. Hitesh’s answer, that they’re offering an architectural innovation rather than a component, is reasonable but not entirely satisfying. Architectural innovations can be copied. The real moat is probably speed to market and foundry relationships, which are harder to replicate than any single technical insight. They are also harder to diligence as a pre-seed/seed investor. I mean should I be asking for the names of the TSMC execs they know? I joke. Or do I?

Data movement is expensive at every level of abstraction. Last issue was the chip. This issue was the rack. Next might be the building, the campus, the continent. At some point we hit the speed of light and then what? Quantum interconnects? Satellite relay? Free-space optics between buildings? Honestly I don’t know. Nobody does. But the pattern holds: whoever figures out how to stop moving data, or move it more efficiently when you must, captures enormous value. The specific technology matters less than the principle.

