❄️Qubits in a Fridge: Make Quantum Great Again (feat. Himadri of SemiQon)❄️
What if research.. was marketing all along?
The Promise
So, quantum computing. With all the attention on AI, you might rightly ask: whatever happened to quantum computing? Back in 1981, physicist Richard Feynman proposed a revolutionary idea: using quantum mechanics to build computers that could solve problems impossible for classical computers. By the 1990s, researchers had developed the first quantum algorithms, showing these theoretical machines could crack problems that would take classical computers billions of years to solve. Quantum computing was seen as the next major computing paradigm, one that would revolutionize everything from drug discovery to cryptography. Yet while AI is scaling toward 100GW clusters and AGI, quantum computers remain largely confined to research labs and early pilot programs.
Slow Progress
Well, it's harder. We are talking about figuring out physics. Traditional computers use well-understood physics - we know exactly how to control electric current flow through semiconductors, how to amplify signals, and how to store charge in capacitors. It's all based on classical physics where bits are either on or off, 1 or 0, and stay that way until we change them. But quantum computers operate in the weird realm of quantum mechanics, using quantum bits or "qubits" that can exist in multiple states simultaneously (superposition) and remain correlated with each other no matter how far apart they are (entanglement). Think of a classical bit as a coin lying flat - it's either heads or tails. A qubit is like a spinning coin - it's in both states at once until you measure it, and that very act of measuring changes its state. This property is what gives quantum computers their potential power - a quantum computer with just 300 qubits could theoretically represent more states simultaneously than there are atoms in the universe.
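To make that scaling concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, nothing to do with SemiQon's work); the 10^80 figure for atoms in the observable universe is the commonly cited order-of-magnitude estimate:

```python
# Back-of-the-envelope: n qubits span 2**n basis states, and a classical
# simulator must track one complex amplitude per basis state.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # commonly cited order-of-magnitude estimate

for n in (10, 50, 100, 300):
    states = 2**n
    comparison = "more" if states > ATOMS_IN_OBSERVABLE_UNIVERSE else "fewer"
    print(f"{n:>3} qubits -> {states:.3e} basis states ({comparison} than ~10^80 atoms)")
```

At 300 qubits the count of basis states (~2e90) passes the atom estimate, which is all the sentence above is claiming.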
But we're still learning how to reliably create and control these quantum states, and every different approach - whether it's trapping ions, using superconducting circuits, or manipulating electron spins - comes with its own set of physics puzzles we haven't fully solved yet. While classical computers basically figured out their physics in the 1950s, quantum computers are still very much in the "figuring out the basic physics" stage.
Today's quantum computers, while impressive, are still primitive. In 2019, Google achieved "quantum supremacy" by performing a specific calculation in minutes that would take classical supercomputers thousands of years - though the calculation itself wasn't practically useful. We're in the NISQ era - "Noisy Intermediate-Scale Quantum" - with machines containing hundreds of qubits but too error-prone for practical use.
Quantum computers have to reinvent the basics of computing. You need to build new components that can maintain quantum states, figure out how to connect them without losing quantum coherence, and scale up to enough qubits for practical computations while maintaining precise control over each one. Only after solving all these hardware fundamentals can you even begin thinking about running quantum algorithms. It's like having to rebuild the entire computing stack from scratch while fighting against the fundamental instability of quantum states. By comparison, advancing AI today is mainly about scaling up existing architectures using well-established classical computing infrastructure.
Nearly There?
But quantum computers could solve problems that are fundamentally intractable for classical computers. To break modern encryption, a classical computer would need to factor a 2048-bit number, which would take billions of years even on our fastest supercomputers. A sufficiently powerful quantum computer could theoretically do it in hours. But it's not just about breaking codes. Quantum computers could revolutionize drug discovery by perfectly simulating molecular interactions. While a classical computer trying to simulate a molecule with just 100 atoms would require more memory than atoms in the universe, a quantum computer could handle it naturally.
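The "more memory than atoms in the universe" point is easy to sanity-check. A minimal sketch, assuming a brute-force classical simulation stores one double-precision complex amplitude per basis state of an n-qubit register (a rough proxy for why exact molecular simulation blows up classically):

```python
# Rough memory cost of brute-force classical simulation: one double-precision
# complex amplitude (16 bytes) per basis state of an n-qubit register.
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits: int) -> int:
    return BYTES_PER_AMPLITUDE * 2**n_qubits

for n in (30, 50, 100):
    b = state_vector_bytes(n)
    print(f"{n:>3} qubits: {b:.3e} bytes (~{b / 2**40:.3e} TiB)")
# ~30 qubits already needs ~16 GiB, ~50 qubits needs ~16 PiB, and 100 qubits
# needs ~2e31 bytes - the "more memory than you could ever build" regime.
```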
The quantum computing field has evolved dramatically since its early commercial days. D-Wave Systems, founded in 1999, was one of the first companies to build and sell quantum computers, ultimately raising hundreds of millions for their quantum annealing approach - good for optimization problems but not general-purpose quantum computing. While D-Wave helped put quantum computing on the map, their specialized technology turned out to be just one piece of the puzzle.
Today in the NISQ era, IBM has a quantum computer with 433 qubits, Google has one with 72, and others are racing to add more. But these qubits are "noisy" - prone to errors and losing quantum states quickly. The next breakthrough needed is "fault-tolerant" quantum computation - systems that can correct their own errors. Current estimates suggest we'll need around 1,000 physical qubits to create just one reliable "logical" qubit.
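Putting that overhead into numbers, a minimal sketch using the ~1,000-physical-per-logical estimate quoted above (the logical-qubit counts are illustrative targets, not requirements of any particular algorithm):

```python
# The error-correction overhead in numbers: with ~1,000 physical qubits per
# logical qubit (the rough estimate quoted above), machine sizes grow fast.
PHYSICAL_PER_LOGICAL = 1_000  # assumption taken from the estimate in the text

def physical_qubits_needed(logical_qubits: int) -> int:
    return logical_qubits * PHYSICAL_PER_LOGICAL

# Illustrative logical-qubit counts only:
for logical in (100, 1_000, 4_000):
    print(f"{logical:>5} logical -> ~{physical_qubits_needed(logical):,} physical qubits")
```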
This challenge has sparked a shift in the industry toward specialized startups focusing on specific technical problems - better qubit control, reduced error rates, and rethinking basic building blocks like transistors that can work at quantum scales. The field has matured enough to recognize it needs its own ecosystem of specialized solutions.
While we might still be years from the quantum computers science fiction promised, progress is accelerating. DeepMind has shown that AI can help optimize quantum algorithms and control sequences, while AI systems are now being used to predict and correct quantum errors in real time, leading to promising hybrid approaches. It is still hard, though, to reconcile the commonly cited mid-2030s timeline for practical quantum computers with the assumption that trillion-dollar AI datacenters will already be running advanced models by the early 2030s.
There are 8 different approaches to building quantum computers today, with superconducting, trapped ion, and silicon spin the most advanced, and photonic, neutral atom, topological, diamond NV, and molecular approaches at lower technology readiness levels (TRL). Each approach has its advantages and disadvantages, outlined below.
I spoke with Himadri, CEO of SemiQon, a portfolio company taking the silicon spin path. Their argument is: the other approaches have merit, but silicon spin will win because it is the only way to get cheap and scalable computers. Superconducting and trapped ion can't scale and will be too expensive for the mainstream. The only pathway to cost-efficient quantum computing, they say, is using cheap materials and processes, specifically semiconductor CMOS. If you can get CMOS electronics working at very low temperatures alongside the quantum processor, aka cryo-CMOS, then you are on to a winner.
TLDR
🔍 Scale and price: Silicon spin qubits are 1000x smaller than superconducting qubits and more cost-effective thanks to existing CMOS supply chains. They require fewer expensive components such as control cables (€3,000 each) and room-temperature electronics (€100,000 per 16 channels).
❄️ Temperature applications: SemiQon enables operation at 4 Kelvin and below, suitable for quantum computing, space applications (1-4K), and data centers (77K). This could help reduce data center cooling costs, currently $16-30B annually.
🖥️ Practical quantum computing: Cryo-CMOS integration with silicon quantum dots enables computation inside the cryostat, unlike superconducting approaches. Industry may evolve toward hybrid solutions combining different technologies' strengths. Multiple approaches may coexist for different applications.
🏭 Silicon spin advantage: Having their own fab enables faster iteration in this exploratory phase. Their facility can meet demand for the next decade until quantum computing becomes mainstream.
📊 Practical metrics shift: Focus is moving from qubit numbers to gate operations and speeds - shifting from theoretical to commercial performance metrics.
1. Scale and Price
Hi Himadri, let’s start by placing SemiQon in the broader quantum field and why you are pursuing that approach.
We are developing silicon spin qubits because we believe it is the most scalable solution, both architecturally and in terms of price.
What do you mean by price? Why are we talking about price when we don’t even know if we can make these computers at all?
Okay, we have to answer three questions: can we make quantum computers? Can we make them small enough? And can we make them cheap enough?
First, can we actually make these machines work at scale? This is our starting point - can we build a quantum computer with enough qubits to be useful? Recently, we've seen encouraging progress with superconducting quantum computers showing they can link processors together. This proves scaling is possible.
The second question is more practical: what about the physical size? These machines can't fill entire buildings if they're going to be practical. This is where superconducting quantum computers face a real challenge - they need massive cooling systems and infrastructure. Neutral atom approaches offer more promise here, as they can pack more computing power into manageable spaces.
Finally, we get to the price question, and this is where we come in. Today's working quantum computers cost over $50 million. Yes, you can buy a small 5-qubit machine from IQM for $1 million, but that's just for research. This is where silicon quantum computing might be the game-changer: the target is a price point under $10 million for a machine that can actually solve real problems. This is why silicon is starting to look like the only approach that can hit all three targets: it could work at scale, fit in a reasonable space, and not cost more than a small rocket launch.
Sure, why are you cheaper? Because you use the existing CMOS semiconductor supply chain?
Exactly. The primary cost driver for all modalities is, first of all, the room-temperature electronics. How many channels do you need to read all the qubits on your chips? The more channels you need, the more units you need to buy from Qblox or Quantum Machines. So that stack of room-temperature electronics keeps growing. Then the next cost is how you connect them to the cryostat with cables, and the cables are also very expensive. Just to give you a quick sense: if somebody is building a 1,000-qubit superconducting system, they would need at least two and a half thousand cables to do that, and each cable costs €3,000. So it's around €12 to 14 million just for cables. And then you have the control electronics. Each 16-channel unit costs about €100,000, and for 1,000 qubits you would need 1,000 divided by 16, so around 70 or 80 of those. That's another €7,000,000 just in electronics to read out the whole thing. These are costs which cannot be ignored or neglected the way they are now. But when we get the cryo-CMOS going, when we get the switches we are building as our first products going, everything moves into the cryostat, reducing the number of cables, which reduces the amount of electronics you need outside. And that means a reduction in price and a reduction in the footprint of the whole thing.
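As an author's aside, here is the arithmetic behind those figures as a small Python sketch. The per-unit prices are the interview's rough estimates, not vendor list prices, and the output should be read as a lower bound: the cabling total quoted in the conversation is somewhat higher, presumably covering connectors, attenuators and spares not modelled here.

```python
# Rough cost model for the room-temperature readout stack of a superconducting
# system, using the interview's ballpark per-unit prices.
def readout_cost_eur(n_qubits: int,
                     cables_per_qubit: float = 2.5,   # ~2,500 cables per 1,000 qubits
                     cable_cost_eur: int = 3_000,
                     channels_per_unit: int = 16,
                     unit_cost_eur: int = 100_000) -> dict:
    n_cables = int(n_qubits * cables_per_qubit)
    n_units = -(-n_qubits // channels_per_unit)       # ceiling division
    return {"cables_eur": n_cables * cable_cost_eur,
            "electronics_eur": n_units * unit_cost_eur}

print(readout_cost_eur(1_000))
# {'cables_eur': 7500000, 'electronics_eur': 6300000}
```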
My understanding on the size question is that with silicon spin we are thinking in terms of nanometre processes, as with chips today, 7 and 5nm for example, whereas superconducting or ion-trap qubits are on the order of 1,000x larger. So yes, the sheer size of the machine shrinks because you need less equipment, but you can also fit more transistors on the processors.
In terms of the processors, even if we don't go to the latest 7 nanometer CMOS, even if we stay a generation or a few generations back at 20 nanometers, we can fit a million qubits in the same dimensions in which the superconductors can fit a thousand. So we already have a huge advantage in processor dimensions. And why that is important: if you have a larger processor, you need a bigger fridge. This is why the IBM system keeps growing, because they have to fit all the qubits in the fridge, and it isn't big enough unless they keep building bigger and bigger fridges. In our case, the same fridge that can hold 20 superconducting qubits can hold a million semiconducting qubits. So the dimension of the fridge is significantly smaller. And another factor on top of that: superconductors need to operate at millikelvin, so you need a dilution fridge, which is power-hungry and expensive. But if we are operating at 1 kelvin, like we are, you only need a cryostat, which is much less expensive and also smaller.
And for completeness, we've spoken about superconducting, but trapped ion is another popular approach. Am I right in thinking the same size and cost benefits exist, just not to the same order of magnitude as with superconducting?
Exactly. This cryostat size benefit applies to every modality that uses low temperatures.
2. Cryo-CMOS and New Applications
So the big question is why has a cryogenic transistor not existed before? Why has it not been done, or maybe why couldn't you do it faster?
So they do exist. Others do build cryo-CMOS. STMicroelectronics, GlobalFoundries, Intel, they have all built cryo-CMOS. It's not anything new. What they have done is take a top-down approach: they have taken traditional transistors and the existing stack and tried to use it at cryogenic temperatures, optimizing it for those temperatures. And with that kind of approach, they have reached performance which is quite okay down to about 20 kelvin. Below that, they become impractical due to power requirements. And if somebody wants to operate anything at 4 kelvin or lower, it's very difficult to supply the power their cryo-CMOS needs to work at that temperature. So that has been the bottleneck.
What we have done with this process is build it bottom-up. We knew what materials to build on, what properties of silicon to optimize, how to do the design, how to build the stack. And this allows us to go to cryogenic temperatures of 4 kelvin and below and still have energy requirements that are 100 times lower than others'. This is the key factor: we are breaking this inflection point and taking transistors to a level where nobody has gone before in terms of temperature and performance.
Got it, so basically cryo-CMOS does exist, but existing solutions cannot hit the performance requirements below, say, 20 kelvin. You needed to build this yourselves to get to 4 kelvin and below. How does this relate to an existing data center server rack running today?
The data center is the long-term market for cryo-CMOS, and TSMC and others are working on that solution; they want to address the data center aspect. Obviously, regular processors generate a lot of heat. This is why you need so much cooling equipment, even putting entire datacentres under the ocean. Cooling costs can reach $16 to 30 billion every year, just to cool down processors. It's a huge waste.
The point of using cryo-CMOS is that instead of spending money on cooling an overheated system, you spend it on building a cryo setup where you put the chips inside so they don't heat up in the first place. That requires the cryogenic system, and the chips inside it, to be less expensive than the $16 to 30 billion in cooling costs, while delivering the performance datacentres need. Modern cryo-CMOS works at 77 kelvin, which is liquid-nitrogen temperature, and that is not enough to get the price and performance advantage. This is where we come in. Our work is to get cryo-CMOS working at high performance at low enough temperatures to fit into a datacentre and remove the need for extra cooling of overheated systems altogether.
Follow-up: to what extent is making cryo-CMOS work at these temperatures relevant only to quantum? Or are there spillover effects for traditional electronic computers?
There are. Cryo-CMOS is a very generic term, because it just means CMOS operating at low temperatures, but the temperature requirement differs by application. For data centers and high-performance computing, the need is 77 kelvin, which is liquid-nitrogen temperature. For space applications, the need is between 1 and 4 kelvin. It's the same for the spin qubit systems that we are building. But if the cryo-CMOS has to be usable in a superconducting setup, then it also has to operate at less than 1 kelvin. The requirements vary across temperatures for different applications. So think quantum, space, and then datacentres in terms of applicability.
Got it. And to tie that back to your recent breakthrough, the transistor specifically: will you be able to operate at the 1 kelvin mark?
Yes and below 1 Kelvin as well.
I noticed that the framing is the cryogenic transistor. Not the cryogenic circuit or processor. What needs to be done to create a full circuit?
We need circuit designs that build up to more and more complex circuits. How complex can we get with these transistors? First putting a few tens of transistors together, then a few hundred, a few thousand, a few hundred thousand, and so on. And at the same time, there are different market segments that can be addressed at these different levels of complexity. A few tens of transistors is good enough for quantum applications, for example, because all you need there is a simple multiplexer, which takes a few tens of transistors to build. Then with a few hundred or even a few thousand transistors, you can address space applications, where you need to build arrays for reading sensors in satellites or telescopes. Those are simpler circuits, but a bit more complicated than multiplexers, so that will be the second application domain. And when the circuit goes to the extreme dimension, say a million or a billion transistors, that's when HPC becomes a potential market.
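The same roadmap, written down as a rough lookup table in Python (the tier boundaries are the interview's ballpark figures, not a product spec):

```python
# Complexity tiers from the interview, mapped to the markets they could address.
COMPLEXITY_TIERS = [
    # (approx. transistor count, addressable application)
    (10,        "quantum: cryogenic multiplexers for qubit readout/control"),
    (100,       "space: readout arrays for sensors on satellites and telescopes"),
    (1_000,     "space: larger sensor-readout circuits"),
    (1_000_000, "HPC / data-centre cryo-logic (long term)"),
]

def addressable_market(transistor_count: int) -> str:
    markets = [app for threshold, app in COMPLEXITY_TIERS if transistor_count >= threshold]
    return markets[-1] if markets else "below the simplest useful circuit"

print(addressable_market(40))      # quantum multiplexer territory
print(addressable_market(5_000))   # space sensor arrays
```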
What are the R&D challenges to scale up to 100 transistors? I assume interconnects, et cetera?
So the transistor consumes less power. That we have realized; that we understand. But when you take ten transistors, you also have to build interconnects: you have to connect those transistors with wires. How much heat is generated in those? It's something we have calculated, but we won't know for certain until we make them. Because, as you know from Ohm's law, thinner wires and thinner lines start to generate more heat. It's a trade-off in those situations. And it's the same with going to more and more complex circuits, because that means thinner and thinner wires, and that means more heat potentially coming out of them, not from the transistors, but just from the wiring. And when I say wires, I mean the printed lines, or whatever they are, on the PCBs or circuits.
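That trade-off is just Ohm's law plus Joule heating. A minimal sketch (the room-temperature copper resistivity, the 1 µA current and the 1 mm line length are all illustrative assumptions; resistivity behaves differently at cryogenic temperatures):

```python
# Joule heating P = I^2 * R with R = rho * L / A: shrinking a wire's
# cross-section raises its resistance and, at fixed current, its heat output.
RHO_COPPER_OHM_M = 1.7e-8  # room-temperature copper, illustrative only

def joule_heating_watts(current_a: float, length_m: float,
                        cross_section_m2: float,
                        resistivity: float = RHO_COPPER_OHM_M) -> float:
    resistance = resistivity * length_m / cross_section_m2
    return current_a**2 * resistance

for width_nm in (1_000, 100, 10):
    area_m2 = (width_nm * 1e-9) ** 2                # square cross-section
    p = joule_heating_watts(1e-6, 1e-3, area_m2)    # 1 uA through a 1 mm line
    print(f"{width_nm:>5} nm line: {p:.3e} W dissipated")
```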
Is there a role for silicon photonics as an alternative interconnect? Networking is one of the key advantages of Photonic Quantum Computing.
It's a potential option, but you then have to convert the electronic signal to an optical signal, so you need a converter, which adds power, complexity and cost. But remember, we can always use the best of other approaches to solve the problem. There is no reason we couldn't use superconducting lines as an interconnect, taking advantage of superconductivity at low temperatures, because they don't dissipate heat. There are lots of options, but we always make sure we don't add unnecessary cost and complexity.
But I guess the simulations and the theory suggest you can get to an economically viable product with ten transistors, maybe a hundred, before you have to worry about these exotic combinations?
Exactly.
3. Towards A Practical Quantum Computer
What is the importance of the cryogenic circuit to making a practical quantum computer? Is it foundational or one of many lines of R&D?
The cryo-CMOS is an essential part of the processor that we are building. The processors are silicon quantum dots, which are also transistors; they are single-electron transistors. And they will be coupled with these cryo-CMOS circuits. The difference between, for example, the superconductors and the silicon processors is that with superconductors you only have the dumb qubits inside the cryostat; everything you need for readout has to go outside to the control electronics. There are no electronics inside the cryostat for superconductors. But in our case, we will already do the initial computation inside the cryostat. That's the eventual huge advantage of using cryo-CMOS together with the quantum dots. So absolutely, the cryo-CMOS is interesting by itself for various reasons, but with our silicon quantum dots it enables the eventual winner in terms of quantum processors.
Help me understand the quantum dot side of this and why you need cryogenic temperatures?
A quantum dot here is essentially a special type of transistor that works with single electrons. When you cool this transistor to very low temperatures, it can create a "quantum well" - think of it like a tiny trap that can hold a single electron. In this trapped state, the electron can be used as a qubit (the basic unit of quantum computing). So what we do is build an array of these quantum wells, one after another. Each well holds an electron that acts as a qubit. And next to each quantum well, we place an electrometer (another transistor), whose job is to read out the state of the electron, either its charge or its spin.
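To make the layout easier to picture, here is a toy sketch of that arrangement in Python: an array of wells, each trapping one electron used as a qubit, each paired with an electrometer. It contains no real device physics (the spin values and the readout-error rate are arbitrary stand-ins); it only mirrors the wiring of the idea.

```python
import random

class QuantumWell:
    def __init__(self) -> None:
        self.spin = random.choice([0, 1])  # stand-in for the trapped electron's spin

class Electrometer:
    def read(self, well: QuantumWell, error_rate: float = 0.01) -> int:
        # A real electrometer senses the electron's charge or spin; here we just
        # return the stored value, occasionally flipped to mimic readout error.
        value = well.spin
        return value ^ 1 if random.random() < error_rate else value

# One electrometer per well, exactly as described above.
array = [(QuantumWell(), Electrometer()) for _ in range(8)]
print([meter.read(well) for well, meter in array])
```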
Right, and at higher temperatures the electrons would have too much energy and wouldn't stay trapped. So extremely cold temperatures are necessary because they help create and maintain these quantum wells that can trap single electrons. Okay, now, this is all great, but superconducting quantum computers are probably going to reach the market first, yes? Is your bet that these computers will be too expensive for mass market applications?
Yes. So there might be a success coming from other modalities because some aspects of building them are "easier". And this is the reason why superconductors have been the most successful so far. So if Google or whoever at some point can demonstrate supremacy with their superconducting qubits, that is very much possible. But it will just be a demonstration experiment and not a product as such.
Reminds me of nuclear fusion with the tokamak approach, in that, it’s more mature and likely to reach the market first, but it will likely be cost prohibitive for mass market.
Exactly.
So when you speak at quantum conferences, and to colleagues who are maybe pursuing photonic quantum because of its room-temperature operation and networking capabilities, or people working on topological qubits, which have the very clear benefit of requiring less error correction, how do you view the other approaches? If we think 30 years out, do you imagine we have heterogeneous quantum computing, a bunch of different approaches all in the market solving different applications? And if so, where do you imagine the niche for silicon spin?
That's a very good question. In terms of modalities, I think the topological or neutral atom approaches have the capacity, or at least the theoretical capacity, to provide better solutions in terms of qubit quality and performance. But having said that, they are at a lower TRL, and their viability at larger scale is yet to be demonstrated. So if somebody comes up with a scientific breakthrough, fine, it might be the ultimate best solution. But it's unlikely, going by the progress that we have seen so far in those domains. Photonics is interesting and a bit different, because there are multiple approaches within photonics as well. For example, the PsiQuantum approach uses single-photon detectors, nanowire detectors that actually require very low temperatures as well: they need a detector operating in the millikelvin range. So it's not that everything is at room temperature in those cases. Some operations are room temperature, but some require low-temperature operation, which increases the costs. Another approach, for example, is Xanadu. They are talking about silicon photonic integrated circuits, and that's a fascinating approach, but they have a double challenge: first they have to build the silicon photonic integrated circuit, which has been a challenge for 20 years, and second they have to use it for quantum purposes. If they can do it, it's a very elegant solution. Then the third approach is Photonic, out of Canada. They are using spins, but with laser or optical-waveguide readouts for each qubit. Again, a very elegant solution, but is it scalable when every individual qubit has to be read out by optical fibers? So there are various ways it is being looked at from a photonic perspective. However, taking all things together, we do understand their benefits, but we don't see a pathway for those solutions to achieve the same price point and scalability as we do.
Can you imagine a world of hybrid approaches? You are beginning to see it with photonics and electronics on SoCs today. We basically use electrons to process information and photons to move them. Why will trapped ion, superconducting, and silicon spin not combine to use the best of all?
There will be. I think it will happen because of the various benefits, exactly as you said. Some aspects will benefit from using an optical readout, for example, rather than an electronic readout, because single photons have much longer coherence times than anything else. So these could be a mid-stage evolution of the whole industry. My way of looking at it is that in some time, let's say five years or so, there will be industry consolidation happening as well, and it will be driven by hybrid approaches, where expertise in different aspects can be combined to build a competitive edge eventually.
4. Winning in Silicon Spin
Let's say, there are others who agree that silicon spin is the most plausible, economical approach. We just have to solve some science problems first. Quantum Motion for example. What are you doing to win in that specific approach?
The fundamental difference between us and the likes of Quantum Motion is the ability to iterate faster, because we have a very similar technical approach. In their case, they are reliant on the GlobalFoundries chip stack. GlobalFoundries has its own technology and PDKs with certain materials, and when using those PDKs, Quantum Motion cannot change them. So they are reliant; all they can do is the design. And this is a severe limitation. You have to live with the limitations of the transistors that come out of that system and operate with that.
In our case, we have our own dedicated fab, which allows us to experiment not just with the design of the circuits but with the design of the stack as well: how to change the dimensions, how to change the ratios of the transistor gate and channel, the widths, and so on. There are multiple variations we can bring in. What we can then do in the fab is try these different designs, test them, measure quickly, and if something doesn't work, we go to the next cycle and start measuring again. These are the crucial advantages which bring us to the point where we are right now. The transistors that we developed went through a few iterations already when we fabbed them. We saw problems in the early generations, we rectified those, and now we are there.
What are the fab limitations? And how should I think about the process nodes? Should I think of you as having a legacy node, like 90nm, and that over time you will develop a leading-edge fab to do high-volume quantum processor manufacturing?
This is a question we often discuss with colleagues at IQM. The reason they have to invest in a fab is that there is no fab anywhere in the world which can do their processes. So the only way they could create a business, potentially, is to build their own fab and protect their own designs and structures. In our case, we do it because it gives us a speed and design-space advantage. But we don't want to be a TSMC for quantum. We cannot expect to build a new TSMC-like fab; it's kind of extreme to ask for that at this moment in time, when quantum is not mainstream at all. When we have well-developed PDKs and significant volumes, the easiest and simplest thing would be to go and license the PDK to the likes of TSMC and others, and they can then do the manufacturing, the same as with semiconductors today.
So will you be able to build a practical quantum computer without needing the external fabs? You don't need to tape out with the high-volume fabs because the market isn't large enough yet?
Right, we don't need to think about working with external fabs until quantum computing is a real market. We think we can likely serve demand from our own fab for the next decade or so. Remember, we don't need to persuade the fabs to work with us and prove demand. We can, unlike most others, continue to serve smaller markets because our cost base will be so much lower than our competitors'.
Fascinating. The market positioning is interesting. I've been thinking about vertical integration and horizontal integration as it relates to AI today, where you have model owners that are building their own chips and some have their own data centers, and some are now partnering with utilities to power those data centers. It's sort of like a full stack AI. You arguably are the only company that is building the transistors, the circuits, the components, the chip, and the computer, but also the factory to make it! Is there any other Quantum company that also has their own fab to tape out their own processor?
Not in silicon spin, that we know of. In superconducting, IBM of course has that, and IQM has built its own, so there has been investment going into superconducting fabs. But not in silicon. Their expenses will be quite exorbitant, though.
And then to follow that thread, up the stack so to speak, once you have a processor, how will people use it? Will you offer cloud access? If so, how?
So that would be the intermediate solution, where we build these test systems with our partners, and one can access them through the cloud. One of the things we are actively pursuing is the tenders in Europe right now to combine HPC with spin-qubit processors. That would be the first step in that direction: you combine an HPC system with this spin-qubit processor and get the benefit out of it. And it's already in the cloud because of the HPC setup, so it's easy to access and a low threshold for getting an application up and running.
I'm thinking all the way up to the top of the stack, where there's actual value, not just consulting for the military. Are there any particular quantum algorithms for which silicon spin is better suited than any other approach?
Not yet. That's a place silicon spin has not reached. It's a sequencing challenge: we need to make sure we can make these computers work first, and once we know they do, we will start to understand how they differ from other quantum computers and from traditional computers. It's very important, because the algorithms being developed now depend significantly on the hardware; an algorithm written for one machine cannot simply run on another. Our ambition is to create a more universal computer that can run any algorithm, rather than only very specific ones. You will need some more general-purpose approach to enable a flourishing software ecosystem to grow on top.
5. Practical metrics shift
Right, this is the challenge for analog computers, in that they can't run the same algorithms as digital computers. It's even a challenge for computers using the same electronic substrate but a different physical layout, like in-memory compute and neuromorphic chips. How do you want to close off?
I think one aspect that is, or might be, very important from the UK's perspective: if you look at the UK's national quantum strategy, published this month, the one thing that really resonated with me was that nobody is talking about the number of qubits or qubit performance anymore. The point made in that strategy document, from a quantum computing perspective, is how many gate operations you can perform and what the gate speed will be. And that is the true measure of quantum computing from a computing perspective, because so far it has only been about how good your hardware is; now it's about how well you can build hardware that operates the software. This is another factor where spin qubits actually have a bit of an advantage, because the T1 and T2 times in spin qubits are long and the gate operation speeds are high. So you can operate multiple gates while you retain the coherence of the spin, which is a distinct advantage compared to others, where you have to calibrate and recalibrate the system and the computation time is long.
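As a closing aside, that metric shift is easy to express in code. A minimal sketch (the example coherence and gate times are assumed orders of magnitude, not measurements of any particular device):

```python
# The metric shift in one line: how many sequential gate operations fit inside
# a coherence window, rather than how many qubits a machine has.
def ops_per_coherence_window(t2_seconds: float, gate_time_seconds: float) -> float:
    """Rough upper bound on back-to-back gates before coherence is lost."""
    return t2_seconds / gate_time_seconds

# Purely illustrative numbers (assumptions, not measurements):
print(ops_per_coherence_window(t2_seconds=1e-3, gate_time_seconds=1e-7))  # ~10,000
```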
Thanks Himadri, keep it real.