🔮 E05: Neuromorphic Computing & The Future of Edge AI
By 2030, neuromorphic chip designs win 20% of the edge AI hardware market
It’s Friday. Friday is a better day for a newsletter. I’m out here hustling, synthesising all that deep tech and 39% of readers said the last newsletter was too long. Harsh. I’ll bounce the interviews and news but I’m going to give you a mailbag anyway, because you can’t trust people (Peep Show, Super Hans, Series 3, episode 2).
As I implore you to share so those subscriber numbers go up and to the right, I’m forever minded of Eminem, 50 Cent and Nate Dogg (Ask your parents), who once profoundly said:
“No matter how many battles I been in and won. No matter how many magazines on my nuts. No matter how many MC's I eat up. Ooh ooh, it's never enough”
Take that with you into your day. Lawrence
P.S. if you are doing anything in deep geothermal, assisted reproductive technologies, or neuromorphic computing, or know someone who is, please get in touch in the comments, let’s speak, open a dialogue; here’s my calendly, or send your availability, etc. et al.
📬 Mailbag
Deep Geothermal
Seriously people don’t sleep on deep geothermal.
@DeepSight sent me this from the FT: UK’s first deep geothermal energy project for 37 years switched on. “The UK’s first deep geothermal energy project in nearly four decades will start operating on Monday, a scheme that proponents hope will bolster the case for geothermal energy despite its high costs.” A government white paper on deep geothermal energy is expected in the coming weeks, which will assess its potential in the UK and make policy recommendations. Also: “We would love to turn it into electricity. But it’s a nightmare — my grid connection is for December 2036,” said Grand. “That is a big, big issue.”
Right, yeah, I mean technology is all very well and good. But if it can’t connect to the grid, then we have a problem… Has anyone thought about this bottleneck in the context of fusion? Imagine if we actually got an operational, reliable fusion plant by 2032, after all. And then the waiting list for a grid connection was ten years lol.
Assisted Reproductive Technologies
I’ve noted:
“An estimated one in seven couples have trouble conceiving, and between 48 million couples and 186 million individuals live with infertility globally.”
Global sperm counts are falling. This scientist believes she knows why (FT, paywalled)
“Sperm count appeared to have declined 52 per cent in 38 years, or something over 1 per cent a year.”
Dr Shanna Swan blames endocrine-disrupting chemicals (EDCs):
“The chemicals she has been able to link most directly to reproductive health are phthalates and pesticides, where she and others have found convincing evidence of a causal link between reproductive disorders and the “triazine” category of herbicides.”
If her hypothesis is correct, we must overhaul how we cook, eat, produce and package consumer goods and rethink industrial processes. Not deep tech, but she tries to buy unwrapped, organic fruit and vegetables, and her water is always filtered. She recommends using stainless steel or glass water bottles and microwaving food in glass or ceramic containers — never plastic. Also, something something something discovering new materials using AI. One Word: Plastics. I’m Dustin Hoffman, and you are Mrs Robinson at the back of the bus; what am I telling you about? (Ask your parents)
🦖Friends
Our friends over at Inflection are hiring a senior engineer to build a “computer aided venture investing machine (CAVI)” Link.
Jan Baerswyl launched a new crypto fund called very early Ventures, which is a cool name, Link. I love Jan and worked with him at his startup a few years ago. He really knows his crypto-economic onions, as *they* say.
I’ve spoken to lots of amazing people doing things in deep tech, I wanted to shout out Daniel Smith of DeepSight and Eden Djanashvili of Deeptech Community; both are doing great stuff in the deep tech community. The latest post from Gael Amouyal on VTOLs at Deeptech Community is strong, although I still think VTOLs, like crypto, were mainly a ZIRP thing. The economics are a nightmare, and even Uber can’t turn a profit. But to be fair, I do like to say most things were ZIRP, and I only graduated in 2009, so like what do I know?
This week you get neuromorphic computing. Is it the solution to the AI chip crunch? Or? I don’t actually have an OR because it is. It shouldn’t have been a question.
✍️ Neuromorphic Computing
Explain in 3 words
“Brain-inspired” Computing Architecture
Explain in 3 sentences
Neuromorphic computing is a computing architecture that takes the biological brain as inspiration. Traditional architectures in all of our CPUs and GPUs today separate memory (data storage) and processing (data computation), leading to a bottleneck, known as the von Neumann bottleneck, when data is transferred between the two. Biological brains co-locate memory and processing in neurons and their synapses. Neuromorphic design is an attempt to replicate this architecture. Basically, it’s like bunk beds instead of two singles. It’s quicker to pass a note to a friend in a bunk bed. The note is data. And this metaphor is bad. And now, not three sentences either.
A little more technical detail
In neuromorphic designs, memory and processing are integrated in the same location, often using specialised hardware such as memristive or memcapacitive devices. These devices adjust their resistance state based on the history of the voltage applied to them, which allows them to emulate the plasticity of biological synapses - their ability to strengthen or weaken over time in response to activity. The change in resistance can itself be used to store information, so memory and processing genuinely live in the same place. This co-location leads to more efficient and powerful computing systems, particularly for tasks such as pattern recognition, decision-making, and sensory processing. Additionally, neuromorphic hardware typically requires spiking neural networks (SNNs), a type of artificial neural network that emulates biological neurons by processing information in a discrete, time-dependent manner through binary signals, or “spikes”. The event-driven, power-efficient nature of SNNs aligns well with neuromorphic hardware’s parallel, low-power processing capabilities, enabling more efficient real-time sensory data processing and decision-making.
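If you want to see what a spiking neuron actually does, here’s a minimal Python sketch of a leaky integrate-and-fire (LIF) neuron, the usual building block of an SNN. All the constants (weight, decay, threshold) are numbers I picked for illustration; they aren’t from any particular chip or framework.

```python
def lif_neuron(input_spikes, weight=0.6, decay=0.9, threshold=1.0):
    """One leaky integrate-and-fire neuron; returns its output spike train."""
    membrane = 0.0            # membrane potential (the neuron's short-term memory)
    out = []
    for spike in input_spikes:
        membrane = decay * membrane + weight * spike  # leak, then integrate the input
        if membrane >= threshold:                     # fire once the threshold is crossed
            out.append(1)
            membrane = 0.0                            # reset after spiking
        else:
            out.append(0)
    return out

# A sparse, binary input train: the neuron only "does" anything on the 1s.
print(lif_neuron([0, 1, 0, 0, 1, 1, 0, 1, 0, 0]))
```

Note how information lives in the timing of the spikes rather than in a big dense matrix, which is why this style of processing maps so well onto event-driven hardware.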
---
(i) viability: how technically mature is the technology? (4)
Neuromorphic chips are only just reaching the commercialisation stage as of June 2023: BrainChip’s Akida is commercially available, while Intel and IBM have research chips at different levels of adoption. Many startups are developing solutions, including Rain AI, Innatera, and SynSense. But the space has not yet attracted the kind of VC investment that AI ASICs did over the past decade or so. From an R&D perspective, neuromorphic designs are not standard, and fabrication continues to be more challenging than for von Neumann chips. This is an issue across the entire fabrication process, especially at the design stage, where most chip design software from Cadence, Synopsys and Mentor (Siemens) isn’t suitable. The software ecosystem for these chips is also still nascent, despite efforts from Intel and IBM.
(ii) drivers: how powerful are adoption forces? (4)
On the demand side, neuromorphic designs have always been interesting because they have the potential to deliver orders of magnitude improvements in power consumption. Depending on the design and substrate used (analog, electronic, photonic), power consumption can be reduced by 100-10,000x. These sorts of leaps forward in efficiency aren’t possible with von Neumann designs, and these sorts of numbers get you to remote IoT sensors running for years or smart glasses running for a full day.
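To make those power numbers concrete, here’s a back-of-envelope sketch in Python. Every figure in it (battery size, baseline inference power) is an assumption picked for illustration, not a measured value, but it shows why 100x gets you roughly a month and 10,000x gets you to years on a small battery.

```python
# Back-of-envelope battery life for an always-on edge AI sensor.
# Every number here is an assumption for illustration only.
battery_wh = 3.7            # roughly a 1,000 mAh cell at 3.7 V
baseline_power_w = 0.5      # assumed always-on inference power for a conventional chip

for reduction in (1, 100, 10_000):          # 1x is the von Neumann baseline
    hours = battery_wh / (baseline_power_w / reduction)
    print(f"{reduction:>6}x lower power -> {hours / 24:,.1f} days per charge")
```

With these made-up inputs the baseline lasts a few hours, a 100x reduction gets you about a month, and 10,000x gets you to several years, which is the whole pitch for remote sensors and all-day smart glasses.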
But it is the supply side that materially changed the market. Until the development of resistive random-access memory (RRAM), a type of memristor, neuromorphic chips were not viable. Conventional static random-access memory (SRAM) or dynamic random-access memory (DRAM) are not well-suited for neuromorphic requirements regarding density, power consumption, and non-volatility. RRAM functions by changing the resistance across a dielectric solid-state material. The change in resistance can then be read out as a binary value, representing digital data. Developments in other memory technologies, notably PCM (Phase-Change Memory), MRAM (Magnetic Random-Access Memory), and memcapacitors, are also giving designers more product options. With RRAM in the market at sufficient volumes, neuromorphic chips can be fabricated cost-effectively.
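As a mental model of how RRAM stores state, here’s a toy memristor cell in Python: resistance drifts with the history of applied voltage, and reading it back against a threshold gives you a bit. The device physics are vastly simplified and every constant is made up, so treat it as a cartoon rather than a model of any real cell.

```python
# Toy RRAM/memristor cell: resistance moves with the history of applied voltage.
# Constants are invented for illustration; real devices are messier and non-linear.
class ToyMemristorCell:
    def __init__(self, r_on=1e3, r_off=1e5):
        self.r_on, self.r_off = r_on, r_off   # low/high resistance bounds (ohms)
        self.resistance = r_off               # start in the high-resistance state

    def apply_voltage(self, volts, step=0.2):
        # Positive pulses nudge the cell towards low resistance (SET),
        # negative pulses push it back towards high resistance (RESET).
        self.resistance -= step * volts * (self.r_off - self.r_on)
        self.resistance = max(self.r_on, min(self.r_off, self.resistance))

    def read_bit(self):
        # Read the stored value as binary against a mid-point threshold.
        return 1 if self.resistance < (self.r_on + self.r_off) / 2 else 0

cell = ToyMemristorCell()
for _ in range(3):
    cell.apply_voltage(1.0)     # three SET pulses "write" a 1
print(cell.read_bit())          # -> 1
for _ in range(3):
    cell.apply_voltage(-1.0)    # RESET pulses erase it again
print(cell.read_bit())          # -> 0
```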
(iii) novelty: how much better relative to alternatives? (5)
Neuromorphic computing is an architectural innovation and competes with von Neumann architectures. Neuromorphic designs can use different substrates like electrons (electronics), photons (photonics), qubits (quantum), and molecules (biological), so they don’t compete with these exotic computing methods; rather, neuromorphic design is a complementary technology. If a photonic chip gets you, say, 100x lower power consumption, a neuromorphic design on top of it might get you to 10,000x. Rough numbers, of course, but you get the point.
The best way to think of novelty is that we are at the pre-GPU days, when the CPU was the only type of processor in computing. When the GPU came along, many tasks that required parallelism moved to the GPU instead of the CPU. If you strike lucky on timing, you can ride your way all the way to a trillion-dollar company. The CPU is still the workhorse of computing, performing the majority of processing in a computer. Neuromorphic designs are similar. They will not replace von Neumann designs but will "win market share" in tasks where parallelism, real-time processing, and pattern recognition are required. Neuromorphic designs are particularly suited to edge computing and AI, two huge growth areas.
(iv) diffusion: how easily will it be adopted? (3)
Despite the massive opportunity, neuromorphic adoption will be slow. We are likely to see electronic neuromorphic chips first because they fit more easily into the existing semiconductor supply chain and slot in easily(ish) to SoCs. Analog and photonic neuromorphic designs will come later, once their component and supply chains are more mature. The first wave of chips will still be relatively complicated to deliver at scale, although the growth of the RRAM market and standardised production processes make it easier. The biggest bottleneck, though, will be on the software side. Designing efficient algorithms tailored to exploit the parallelism and distributed nature of neural architectures is complex. We can’t just use the transformer models we have today; LLaMA isn’t going to just run on an NPU. Neuromorphic designs are event-driven: processing occurs only when an event (like a spike in a neural network) happens. Traditional ML algorithms, on the other hand, are generally developed for conventional clock-based, sequential processing architectures. There may be some compilation work that enables existing ML algorithms to run on an NPU, but we will have to see what the performance trade-off looks like. The slow development and adoption of Intel’s NxSDK and IBM’s TrueNorth ecosystem demonstrate the complexity and steep learning curve of writing software for a new computing architecture.
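To show what “event-driven” means in practice versus clock-based processing, here’s a toy comparison in Python. The weights and inputs are made up; the point is that the dense pass touches every input on every tick, while the event-driven pass only does work when a spike actually arrives.

```python
# Toy contrast: clock-driven dense processing vs event-driven spike processing.
# Weights and spike data are invented; the point is where the work happens.
weights = [0.2, -0.5, 0.8, 0.1]

def dense_step(inputs):
    # Clock-based: multiply-accumulate over every input on every tick.
    return sum(w * x for w, x in zip(weights, inputs))

def event_driven_step(spike_indices):
    # Event-driven: only the synapses that received a spike do any work.
    return sum(weights[i] for i in spike_indices)

tick_inputs = [0, 1, 0, 1]                             # a mostly-silent sensor frame
spikes = [i for i, x in enumerate(tick_inputs) if x]   # just the events: [1, 3]

print(dense_step(tick_inputs))      # 4 multiply-adds
print(event_driven_step(spikes))    # 2 additions: same answer, less work
```

On a mostly-silent sensor stream the savings compound dramatically, which is why porting today’s dense, clock-based models onto this kind of hardware is not a straightforward compile step.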
(v) impact: how much value is created? (4) Medium certainty
The high-impact scenario sees neuromorphic architectures as the best-in-class chip for edge devices and AI. AI chips are predicted to be worth something like $300 billion in 2030, and edge computing about $150 billion. Both of these predictions are almost certainly underestimates, and neuromorphic chips can be an unlock for new applications; I wouldn’t be surprised to see a potential market upwards of $750bn by 2030. But that would be a market constrained by supply, as it will take five years to build out production capacity and a software ecosystem to take full advantage of the new architectures. A low-impact case relies on alternative substrates, most probably photonics, delivering low-power chips that serve the AI and edge markets. However, they would have to become more adept at variable connectivity and adaptive computation.
(vi) timing: when will the market be large enough or growing fast enough for scale capital? (2025-2030) High certainty
Mass production of RRAM is the catalyst for the neuromorphic computing industry. AI, and especially AI at the edge, has further catalysed demand and will drive more resources, R&D and money into the industry, speeding up timelines. That said, the supply chain and software ecosystem need to mature before volumes can be meaningful. I expect a relatively slow build-up, but extremely fast, hockey-stick-like growth once the first products can be shipped in large volumes.
(vii) What are the major open questions in the industry?
What will be the equivalent of backpropagation for neuromorphic architectures? Do we need a new algorithm before we will see adoption?
Or could we find a way to run a transformer on neuromorphic hardware? Is that just a silly idea because it’s not optimised in any way for it? Or is the performance loss worth the benefits of the growing LLM software ecosystem?
Is the electronic neuromorphic pathway, which trades off some performance for easier fabrication, a useful middle ground? Or is it just too middle-ground(y)? Transformer ASICs will get us maybe 10x better performance and efficiency and can be rolled out in 2-3 years. Add on all the other software optimisations, cooling management techniques, compression, etc. Is the status quo just too powerful on the software side for electronic neuromorphic adoption?
How will the software ecosystem play out? A start-up sort of needs to develop its own software to run on the hardware to demonstrate capabilities. This is especially true for AI: there is no point offering a new piece of hardware and hoping developers will put in the time to figure out how to write software for it, especially when they can just go to Nvidia, deploy, and start making fun and profit. My bet is that any neuromorphic startup needs to launch with an algorithm that is competitive with transformers.
(viii) What are some of the most important startups?
Rain.ai (I’m an angel investor) (US)
Innatera Nanosystems (The Netherlands)
GrAI Matter Labs (France)
Intrinsic Semiconductor Technologies (ReRam) (UK)
Semron (Germany)
Old Guard: Intel, IBM, BrainChip, SpiNNcloud Systems
(ix) Underrated or Overrated
Underrated. I don’t think the market has understood what the development of RRAM enables for neuromorphic chip production.
(x) 2030 Prediction
Neuromorphic chip designs will win 20% of the edge AI hardware market.
That’s your lot; no interviews or news this week because none of you have the stomach for it. Sad.