<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[State of the Future]]></title><description><![CDATA[Trying to invest in things that will make the world better for my kids and their grandkids. And get mark ups obvs. What are you doing? ]]></description><link>https://stateofthefuture.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!juWT!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb4af7d5-8416-4e8d-bc5b-2763f055ed16_800x800.png</url><title>State of the Future</title><link>https://stateofthefuture.substack.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 02:03:28 GMT</lastBuildDate><atom:link href="https://stateofthefuture.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[State of the Future]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[stateofthefuture@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[stateofthefuture@substack.com]]></itunes:email><itunes:name><![CDATA[Lawrence Lundy-Bryan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Lawrence Lundy-Bryan]]></itunes:author><googleplay:owner><![CDATA[stateofthefuture@substack.com]]></googleplay:owner><googleplay:email><![CDATA[stateofthefuture@substack.com]]></googleplay:email><googleplay:author><![CDATA[Lawrence Lundy-Bryan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Dispatch from 1st May 2026: Anthropic is Cheap at 0.10 
PEG-R]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-7b7</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-7b7</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 01 May 2026 12:56:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lJFN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F413de24b-c613-4725-9780-8c3d2f821224_1195x868.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>good morning, everyone. i hope you&#8217;ve had a nice week. this week i interviewed <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Shwetank Kumar&quot;,&quot;id&quot;:16185719,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/4318d68b-3e9b-47c6-b001-fc60731b1368_1600x1200.jpeg&quot;,&quot;uuid&quot;:&quot;224eaa00-1f06-4278-a9cb-5387a54f741d&quot;}" data-component-name="MentionToDOM"></span> of EnCharge AI about how analog chips will make inference cheap. Absolutely no reason for you not to read it </p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;9d219157-71e6-4bf5-9ba9-0a974f0d9357&quot;,&quot;caption&quot;:&quot;The starting point for today&#8217;s interview is that AI inference is getting more expensive, not cheaper.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Making AI inference Cheap with Analog Computing w/ Shwetank Kumar @ EnCharge AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:20895912,&quot;name&quot;:&quot;Lawrence Lundy-Bryan&quot;,&quot;bio&quot;:&quot;State of the Future helps founders, investors, and corporate leaders identify and act on emerging technology opportunities. 
We deliver insights, advisory, and investments to surface and back high-potential ideas before they become obvious.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!q-AC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd406f502-b292-4c00-8f8d-ba31c53a3a90_1620x1080.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-28T12:25:06.287Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!9seJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d439d99-5547-4091-a8fa-3bb5627506b6_2752x1536.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://stateofthefuture.substack.com/p/making-ai-inference-cheap-with-analog&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:195619406,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:7,&quot;comment_count&quot;:0,&quot;publication_id&quot;:221837,&quot;publication_name&quot;:&quot;State of the Future&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!juWT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb4af7d5-8416-4e8d-bc5b-2763f055ed16_800x800.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>made me take a moment and have a bloody good think about the ai labs. like, what&#8217;s going on out there? lots of big numbers and announcements, but what does it all mean basil? </p><div><hr></div><h2>1. 
Anthropic, $900bn, Too Much?</h2><p>Anthropic is aiming for a $900bn valuation as per Bloomberg, which would leapfrog OpenAI&#8217;s $852bn as the world&#8217;s second most valuable private company.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T8Ss!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!T8Ss!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png 424w, https://substackcdn.com/image/fetch/$s_!T8Ss!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png 848w, https://substackcdn.com/image/fetch/$s_!T8Ss!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png 1272w, https://substackcdn.com/image/fetch/$s_!T8Ss!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T8Ss!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png" width="765" height="443" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:443,&quot;width&quot;:765,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:39698,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/196089926?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!T8Ss!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png 424w, https://substackcdn.com/image/fetch/$s_!T8Ss!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png 848w, https://substackcdn.com/image/fetch/$s_!T8Ss!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png 1272w, https://substackcdn.com/image/fetch/$s_!T8Ss!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e6e589e-71e0-4ff3-b76b-26941c009ed6_765x443.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>These seem like big numbers for a &#8220;startup&#8221;, and indeed they are: they would make Anthropic the 9th most valuable company in the world. Tether lol. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hJVr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hJVr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png 424w, https://substackcdn.com/image/fetch/$s_!hJVr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png 848w, https://substackcdn.com/image/fetch/$s_!hJVr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png 1272w, https://substackcdn.com/image/fetch/$s_!hJVr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hJVr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png" width="757" height="877" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:877,&quot;width&quot;:757,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:96996,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/196089926?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hJVr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png 424w, https://substackcdn.com/image/fetch/$s_!hJVr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png 848w, https://substackcdn.com/image/fetch/$s_!hJVr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png 1272w, https://substackcdn.com/image/fetch/$s_!hJVr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5d8baec-6b5c-4924-bebb-31469e7f69ed_757x877.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 
20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Is it worth it? </p><p>Let&#8217;s take a look at valuation, revenue and growth, shall we? 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NRqH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NRqH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png 424w, https://substackcdn.com/image/fetch/$s_!NRqH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png 848w, https://substackcdn.com/image/fetch/$s_!NRqH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png 1272w, https://substackcdn.com/image/fetch/$s_!NRqH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NRqH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png" width="780" height="815" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:815,&quot;width&quot;:780,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:128126,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/196089926?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NRqH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png 424w, https://substackcdn.com/image/fetch/$s_!NRqH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png 848w, https://substackcdn.com/image/fetch/$s_!NRqH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png 1272w, https://substackcdn.com/image/fetch/$s_!NRqH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26e32a69-b8c0-45f9-b35f-2ca94c0c0e61_780x815.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Tether lol. </p><p>The metric I care about is the <strong>PEG-R ratio</strong> (Price/Revenue to Growth). I&#8217;m using Anthropic&#8217;s own forward guidance of 4x annual growth. 
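Growth of 4x is 300% in percentage terms, so the 0.10 is a one-line sum. Here&#8217;s a minimal sketch of the arithmetic, assuming (as the numbers here imply) that PEG-R simply divides the price/revenue multiple by expected revenue growth in percent:</p>

```python
# Hypothetical sketch of the PEG-R arithmetic, not anyone's official methodology.
def peg_r(price_to_revenue: float, growth_multiple: float) -> float:
    """Price/Revenue-to-Growth: P/Rev multiple over expected revenue growth in %."""
    growth_pct = (growth_multiple - 1) * 100  # 4x annual growth -> 300% growth
    return price_to_revenue / growth_pct

print(round(peg_r(30, 4), 2))  # Anthropic: 30x P/Rev at 4x guided growth -> prints 0.1
```

<p>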
Couple of interesting thoughts: </p><ul><li><p>Meta (0.22) &#8212; great &#8220;growth at a reasonable price&#8221; story in the set, if they can get a decent model, then&#8230; </p></li><li><p>TSMC (0.31) &#8212; the AI infrastructure play with a manufacturing moat</p></li><li><p>NVIDIA (0.34) &#8212; even at $5T market cap, growth justifies the multiple</p></li></ul><p><strong>Anthropic 30x P/Rev seems chunky but PEG-R at 0.10 looks&#8230; cheap?</strong> </p><p><strong>Source:</strong> <a href="https://www.bloomberg.com/news/articles/2026-04-29/anthropic-considering-funding-offers-at-over-900-billion-value">Bloomberg article on the round</a> | <a href="https://www.saastr.com/anthropic-just-passed-openai-in-revenue-while-spending-4x-less-to-train-their-models/">SaaStr on the revenue gap</a></p><h2>2. Cohere, or What Mark and Satya Figured Out</h2><p>With OAI and Anthropic redefining achievable growth rates, you can see why investors are throwing money at AI labs (more to come). But it&#8217;s not quite as easy as it seems. </p><p>This week, Cohere announced it was acquiring Aleph Alpha at a combined $20bn valuation, with Schwarz Group (Lidl&#8217;s parent, if you can believe such a thing) leading a Series E at &#8364;500m and the Canadian and German digital ministers in the room for the photo op. The press release framed it as a &#8220;transatlantic AI powerhouse&#8221; and a &#8220;sovereign alternative to American players&#8221;. </p><p>I mean, sure, we can all say words I guess, but come on? Cohere was at $7bn in September. Adding Aleph Alpha, last priced at $585m before the founder left in late 2025 and the company pivoted away from frontier models, + a German retail conglomerate, + government endorsement, and you get, hold on&#8230; carry the one, and equals = 20bn? Hold on, let me just check my workings, and&#8230; 20bn? From 7? Triples? Triples is it? 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_mny!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_mny!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif 424w, https://substackcdn.com/image/fetch/$s_!_mny!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif 848w, https://substackcdn.com/image/fetch/$s_!_mny!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif 1272w, https://substackcdn.com/image/fetch/$s_!_mny!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_mny!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif" width="480" height="480" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:480,&quot;width&quot;:480,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2076162,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/gif&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/196089926?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_mny!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif 424w, https://substackcdn.com/image/fetch/$s_!_mny!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif 848w, https://substackcdn.com/image/fetch/$s_!_mny!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif 1272w, https://substackcdn.com/image/fetch/$s_!_mny!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec3a874c-a977-47c8-85be-bed3e67d4b09_480x480.gif 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I did it. I got to use the gif in context! All thanks to Lidl. Dreams do come true. </p><p>But anyway, like this whole sovereignty story is going sour for me. If it&#8217;s about bailing out companies that can&#8217;t compete on the world stage, then look, this ain&#8217;t gonna work. If we want to do &#8220;sovereignty&#8221; right, then do it the UK SovAI way, back the best companies like <a href="https://www.callosum.com/">Callosum</a> and <a href="https://www.gov.uk/government/news/uk-backs-company-building-breakthrough-ai-that-can-discover-new-knowledge">Ineffable Intelligence</a>. </p><p>But notice the underlying question. Even with hundreds of billions of capital flowing around, even with sovereign endorsement, even with a German retail empire&#8217;s distribution channel, Cohere still can&#8217;t be at the frontier. 
Why?</p><p>Notice what Mark Zuckerberg and Satya Nadella have figured out that governments haven&#8217;t. <strong>Inflection</strong> wasn&#8217;t a $650m purchase of Pi the chatbot, it was Mustafa Suleyman plus a team to run Microsoft AI. <strong>Adept</strong> wasn&#8217;t an acquihire for the agent product, it was <strong>David Luan and Niki Parmar</strong> going to Amazon. Character wasn&#8217;t $2.7bn for the consumer chatbot, it was <strong>Noam Shazeer</strong> (who co-wrote Attention Is All You Need in the first place) back at Google. And can you guess what Meta&#8217;s $14.3bn for 49% of Scale AI was? Bingo: <strong>Alexandr Wang</strong> as CEO of Meta Superintelligence Labs. People, not money.</p><p>The frontier isn&#8217;t a money problem. Microsoft and Amazon and Google and Meta have the hundreds of billions sitting around. It&#8217;s a talent problem. Maybe 100 people in the world can lead a frontier training run, and the hyperscalers are buying them out one $1-15bn cheque at a time? Who will Apple buy? </p><p>Inflection (Microsoft acquihire). Adept (Amazon). Character ($2.7bn licensing to Google). Stability (gone). Aleph Alpha (just absorbed). AI21 alive at $1.4bn but small. Imbue last raised October 2023 and hasn&#8217;t come back. Mistral is the European holdout, $830m in March for Paris and Sweden datacentres, $13.7bn valuation, an ARR they need to grow 50-100x to earn the price. European exception, or next on the absorbed list. ASML to acquire/merge with Mistral, anyone? </p><p>Source: <a href="https://techcrunch.com/2026/04/25/why-cohere-is-merging-with-aleph-alpha/">TechCrunch on Cohere/Aleph Alpha</a> </p><div><hr></div><h2>3. Buying A Hedge: Silver, Ilya, Yann, Mira</h2><p>And here are the labs that didn&#8217;t get acquihired. David Silver (AlphaGo, AlphaZero, AlphaProof, the man, etc.) closed a $1.1bn seed round at $5.1bn for Ineffable Intelligence on Monday, a UK lab with a thesis that explicitly rules out current LLM scaling. 
He wants reinforcement learning without human data. Sequoia and Lightspeed co-led, Nvidia put in $250m+, the UK Sovereign AI Fund came in. Everyone is buying option value on Silver&#8217;s brain inventing the next paradigm, or at least a hedge on scaling stalling. </p><p>This was the same trade for Sutskever&#8217;s $32bn SSI (no product, ~20 staff). Same for Yann LeCun&#8217;s $4.5bn AMI Labs in Paris (pre-launch, &#8220;world models&#8221;, LLMs are a dead end per the founder). Same for Mira Murati&#8217;s reportedly upcoming $50bn round at Thinking Machines. The same talent class running the inverse trade, taking the cheque to start the lab instead of getting absorbed into one. </p><p>Counterpoint, also out this week. The UK&#8217;s <a href="https://www.aisi.gov.uk/blog/our-evaluation-of-openais-gpt-5-5-cyber-capabilities">AI Security Institute</a> published its evaluation of GPT-5.5&#8217;s cyber capabilities. Headline numbers: GPT-5.5 cleared 71.4% of expert-level cyber tasks, against 52.4% for GPT-5.4 (the prior generation), 48.6% for Anthropic&#8217;s Opus 4.7, and 68.6% for Claude Mythos Preview. It solved a reverse-engineering challenge that takes a human expert about 12 hours, in 10 minutes 22 seconds, at $1.73 of API cost. Their own quote: </p><div class="pullquote"><p><strong>&#8220;performance continues to scale with the amount of inference compute spent, and we have not yet observed a plateau.&#8221;</strong> </p></div><p>The UK government&#8217;s own safety institute, not an OpenAI fan account.</p><p>Which makes the bull case on the frontier labs (Anthropic, OpenAI, GDM) double as a bear case on the founder bets. Silver, LeCun, Sutskever are explicitly wagering that current scaling has run out. AISI just shipped a report saying it hasn&#8217;t, yet. If AISI is right, Silver&#8217;s $5.1bn at zero product is a bet against the current scoreboard. 
</p><p>Source: <a href="https://techcrunch.com/2026/04/27/deepminds-david-silver-just-raised-1-1b-to-build-an-ai-that-learns-without-human-data/">TechCrunch on Silver/Ineffable</a></p><div><hr></div><div class="callout-block" data-callout="true"><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!lJFN!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F413de24b-c613-4725-9780-8c3d2f821224_1195x868.png" width="1200" height="871.63179916318" class="sizing-large" alt="" loading="lazy"></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!gR0H!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F65dddcc2-2e39-4e7e-bfa0-c2e953f9d010_1182x545.png" width="1200" height="553.2994923857868" class="sizing-large" alt="" loading="lazy"></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!eX9B!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bc5f375-f417-4d16-a34e-0c082137cdf0_1187x885.png" width="1200" height="894.69250210615" class="sizing-large" alt="" loading="lazy"></figure></div></div><h2>4. EU Chips Act II, It&#8217;s Absolutely Fab-ulous </h2><p><strong>Now on to the real power brokers in today&#8217;s world: The EU.</strong> Bloomberg yesterday: the European Commission is going to give itself the power to invest directly in cross-border manufacturing projects under Chips Act II, due late May. Until now, the Commission could fund research and approve member-state aid, but couldn&#8217;t write a cheque straight to a fab. After Chips Act II, it can. Sounds dry. It is not dry.</p><p>Issue #6 (20 March) said AI sovereignty was about owning chip supply, not protecting software. &#8364;80bn flowed in against a &#8364;43bn target, so the direction was right. 
<strong>What I got wrong was which fabs would deliver.</strong> I had Europe&#8217;s flagship industrial-policy projects in mind. </p><ul><li><p><strong>Intel Magdeburg, &#8364;30bn:</strong> cancelled July 2025. The leading-edge logic flagship. New CEO Lip-Bu Tan called prior plans &#8220;unwise and excessive&#8221; alongside a $2.9bn Q2 loss. Germany had committed &#8364;10bn in subsidies. CEO-speak for &#8220;no demand.&#8221;</p></li><li><p><strong>Intel Wroclaw, $4.6bn assembly plant:</strong> cancelled the same day. Was the packaging hub for Magdeburg&#8217;s wafers. Without Magdeburg upstream, Wroclaw lost its reason to exist.</p></li><li><p><strong>STMicro/GF Crolles, &#8364;7.5bn:</strong> paused. The automotive FD-SOI flagship, &#8364;2.9bn of French aid behind it. Killed by European EV demand weakness; ST pivoted to &#8220;China-for-China,&#8221; GlobalFoundries to the US.</p></li><li><p><strong>Wolfspeed Saarland, &#8364;3bn:</strong> collapsed. The EV power-electronics SiC flagship. Partner ZF pulled its stake October 2024. Wolfspeed filed Chapter 11 June 2025, restructured under Apollo, $4.6bn of debt written off. EV slowdown plus Chinese SiC competition.</p></li></ul><p>Capital was on the table for all four. Demand wasn&#8217;t. Again, capital cannot solve all ills. You actually need talent to make stuff and customers to buy it. </p><p>Interestingly, what&#8217;s working is everything that isn&#8217;t a flagship. Diversified JVs with multiple anchor customers, research pilot lines, next-generation substrate plays.</p><ul><li><p><strong>TSMC&#8217;s ESMC Dresden</strong> (&#8364;10bn, JV with Bosch/Infineon/NXP) is on schedule for equipment install H2 2026. Three anchor customers across automotive and industrial. Mature 28/22nm + 16/12nm.</p></li><li><p><strong>ST Catania SiC</strong> (&#8364;5bn) is heading into production. 
ST is the incumbent expanding existing turf, not a speculative new entrant &#8212; same SiC bet as Wolfspeed but on resilient ground.</p></li><li><p><strong>The Imec NanoIC pilot line</strong> in Belgium opened in February at &#8364;2.5bn (&#8364;700m EU + &#8364;700m Flemish + &#8364;1.1bn industry, ASML lead). Research and pilot capacity, not commercial production.</p></li></ul><p>And the next-gen substrate startups, CamGraPhIC, Black Semiconductor, SMART Photonics, Q.ANT, maybe Ephos, are the kind of bets Chips Act II&#8217;s direct-investment power can actually place. Three of those four are photonic. None of them is a 2nm logic fab. </p><p>We will get capacity. Now, do we have the talent and the markets? </p><p><strong>Source:</strong> <a href="https://www.bloomberg.com/news/articles/2026-04-30/eu-chips-act-revamp-would-let-commission-invest-directly-in-fabs">Bloomberg on Chips Act II</a></p><p>&#8212;</p><p>And if you&#8217;ve reached the bottom and can think of one person who would like what I write, please tell them this is a serious newsletter for serious people </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!BMwu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F51e828d6-7f3c-4d25-846e-1fe3bb88697c_480x360.gif" width="480" height="360" class="sizing-normal" alt="" loading="lazy"></figure></div>]]></content:encoded></item><item><title><![CDATA[Making AI inference Cheap with Analog Computing w/ Shwetank Kumar @ EnCharge AI]]></title><description><![CDATA[What happens when investors stop subsidizing inference?]]></description><link>https://stateofthefuture.substack.com/p/making-ai-inference-cheap-with-analog</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/making-ai-inference-cheap-with-analog</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Tue, 28 Apr 2026 12:25:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9seJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d439d99-5547-4091-a8fa-3bb5627506b6_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!9seJ!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2d439d99-5547-4091-a8fa-3bb5627506b6_2752x1536.png" width="1200" height="670.054945054945" class="sizing-large" alt="" fetchpriority="high"></figure></div><p>The starting point for today&#8217;s interview is that AI inference is getting more expensive, not cheaper.</p><ul><li><p>GPT-5.5 doubled price last week: input went from $2.50 to $5.00, output from $15 to $30 per million tokens. The Pro tier is six times that.</p></li><li><p>Anthropic kept Opus 4.7 on the same rate card, but the new tokenizer works through roughly 35% more tokens per task, so the bill is up.</p></li></ul><p>Yes, the commodity tier (Gemini Flash, GPT-4o Mini) is getting pushed down to the famous near-zero marginal cost; we are getting sub-$0.30 per million output tokens.</p><p>But at the frontier, where agentic workloads run, pricing is up. An agentic coding task plows through 20K&#8211;200K tokens. An agent team is 7x that, because each teammate maintains its own context. The per-token price drops, but the total bill goes up.</p><p><strong>TL;DR: Chatbots are getting cheaper, but agents are getting more expensive.</strong></p><p>So, what will we do? Well, basically, we need new silicon designs that use less energy. The AI pricing problem is an energy problem. Every matrix multiplication in a transformer moves data between memory and compute, and it&#8217;s the movement, not the compute, that eats most of the power. The AI models in the data centre are over-parametrised by orders of magnitude. We know this because pruning, quantisation, and distillation all reliably shrink frontier models with barely any loss of performance. If the models were sized correctly, many of them could run on the device in your lap. The architectures are not designed for efficiency, so they don&#8217;t.</p><p>This is why there is so much interest in new silicon and chip designs. You can find primers on neuromorphic, analog, photonic and others in the back catalog. Today, I am speaking to one of the world leaders in the analog space. 
<strong>Rather than moving bits more efficiently, the analog approach stops doing much of the arithmetic the traditional way at all.</strong></p><p><a href="https://enchargeai.com/">EnCharge AI</a>, spun out of Naveen Verma&#8217;s group at Princeton in 2022, builds chips that perform the multiplication inside the memory array itself, using switched-capacitor circuits on standard memory (SRAM) cells. No exotic silicon processes required. Their first chip, the EN100, is returning from fab <em>imminently</em>. The claim: a 10&#8211;20x energy-efficiency gain at the same process node versus legacy digital.</p><p>That improvement matters because the price curve I just described isn&#8217;t fixable in software alone. Per-token efficiency can keep dropping inside a digital architecture, but the slope is bounded by the memory-compute boundary. If every token requires shuttling weights and activations back and forth across that boundary, your floor is set by the energy cost of the bus, not by Moore&#8217;s Law. <strong>The only way to drop the floor materially is to change the architecture.</strong> Compute-in-memory is one route. Photonic interconnects, which we covered with <a href="https://stateofthefuture.substack.com/p/photonic-engines-for-data-centers">Hitesh from Phanofi</a>, are another.</p><p>We&#8217;ve covered compute-in-memory on State of the Future before, notably with <a href="https://stateofthefuture.substack.com/p/ai-chips-computeram-and-the-future">Manu from Synthara on ComputeRAM</a>. EnCharge is the analog cousin. Shwetank Kumar, EnCharge&#8217;s chief scientist, came up through applied physics at Caltech, spent years on experimental quantum computation at IBM Research, ran engineering at Intel, and joined EnCharge after writing publicly about AI inference efficiency at <a href="https://aiafterhours.substack.com/">AI After Hours</a>. 
I spoke to him about analog&#8217;s comeback, why &#8220;gigawatts&#8221; is the wrong unit to measure anything with, the first AI product that&#8217;s genuinely uneconomical to ship, and where analog silicon lands in the compute gradient.</p><h2><strong>What Did I Learn?</strong></h2><ol><li><p><strong>Noise is fought in hardware and software.</strong> EnCharge&#8217;s switched-capacitor MAC sidesteps the noise and variability that have historically killed analog compute, which is half the battle. The other half is the software tool chain: a proprietary post-training quantisation (PTQ) workflow and a compiler designed with the silicon. Without both halves, the efficiency gains don&#8217;t survive the trip from HuggingFace to the chip.</p></li><li><p><strong>Video generation (see Sora) is the first AI product that&#8217;s genuinely uneconomical to deliver,</strong> which flips the role of efficiency hardware from &#8220;faster&#8221; to &#8220;required to ship a viable product.&#8221;</p></li><li><p><strong>Measuring AI data centres in gigawatts is a vanity metric that hides a collapsing gross margin.</strong> The honest measure is something like tokens-per-second-per-watt, or voxels-per-second-per-watt for video. Once you look at it that way, the frontier labs&#8217; &#8220;build more gigawatts&#8221; push is really just capex amortisation dressed up as strategy.</p></li><li><p><strong>The over-parametrisation problem is the cause, not a symptom.</strong> Pruning, quantisation, and distillation all prove that frontier models have many times the parameters they actually need. That over-parametrisation is what forced everything into the data centre in the first place. Fix the parametrisation and a lot of what currently sits in a rack has no reason to be there.</p></li></ol><div><hr></div><h2>The Interview</h2><p><em>This interview has been edited for brevity and clarity.</em></p><h3>The compute gradient</h3><p><em>Lawrence: Hey Shwetank, let&#8217;s start big picture</em>, the compute gradient, I call it. 
Core and edge. The history of computing has been a constant push and pull between where workloads sit, mainframe to PC to smartphone, now to the hyperscale datacentre. As it stands, it seems like all training will happen in datacentres and the vast majority of inference too, no?</p><p><strong>Shwetank:</strong> There are a few threads to pull on. First, you have to think in terms of workloads: which respond better at the edge versus in the data centre. Second, what resources are available, where. The reason everything has collapsed to the data centre is that nearly all the models we have are incredibly over-parametrised. By that I don&#8217;t mean the infrastructure they run on. I mean the number of parameters needed to hold the same amount of information.</p><p>We know this emphatically because if you take these models and prune the connections, you can get rid of a majority of them with no performance loss. Quantisation is next: get to four-bit and the models barely lose any performance. Combine these techniques and they still work. Distillation is another piece of evidence: knowledge distillation with a much smaller student model still works. If you have an over-parametrised model with many times the parameters it actually needs, you have no option but to run it in the data centre. That&#8217;s where the heavy infrastructure runs. No one has that in their laptop.</p><p><strong>Lawrence:</strong> Right. So why hasn&#8217;t edge AI actually happened? Google has Gemma running locally on Android. I don&#8217;t see anyone building with it. Is this a demand problem or a scaffolding problem?</p><p><strong>Shwetank:</strong> I haven&#8217;t worked directly with Gemma, but when I look at the benchmarks I&#8217;m starting off unimpressed. Look at Qwen, and specifically Qwen&#8217;s Coder model that came out recently: about the same ballpark in size, but it punches above its weight on the benchmarks. Are benchmarks the be-all and end-all? Not really. 
You have to make sure they actually work for your use case. Once a model gets to a certain level on the benchmark, it&#8217;s much more about the scaffolding and harness around it, specifically how you manage the context. That&#8217;s another reason open source matters: it lets you figure out how to manage the context yourself. With Claude or OpenAI, you basically get the context you get. With Claude Code pricing escalating and similar dynamics elsewhere, you&#8217;ll see the causal drivers move in favour of open-source models. At least, that&#8217;s my hypothesis.</p><h3>The analog bet</h3><p><strong>Lawrence: Okay, let&#8217;s talk about your specific bet. Analog. Most of the others went digital</strong>. D-Matrix, Fractile. You and Mythic are basically the only ones still on analog. Everyone broadly agrees that near- and in-memory compute will happen in the next few generations, but analog still has sceptics. What did you solve?</p><p><strong>Shwetank:</strong> Caveat first: I&#8217;m not the hardware expert, but this is why I got behind the technology. We have a Von Neumann architecture where memory and compute have been separate, with a bus that shuttles data back and forth between them. It became very clear from a back-of-the-envelope calculation that that&#8217;s where most of the energy is going. The integration question, how and how much to integrate, isn&#8217;t a new story; bringing memory and compute closer has been an ongoing project with the L1, L2, L3 cache hierarchy.</p><p>When you look at the energy-efficiency drivers, one is collapsing the distance between memory and compute. <em><strong>The other is that even doing that doesn&#8217;t get you far enough.</strong></em> You want to extract every bit of efficiency you can given the power walls we&#8217;re hitting. Analog was one of the options. Analog has always had noise issues, variability at the edge, or required a specific marginal process not available in pure silicon. 
The real innovation in Naveen&#8217;s group was sidestepping that. The architecture we have doesn&#8217;t rely on any specific silicon processes. It&#8217;s purely switched capacitors. With switched-capacitor circuits, all you need is good control of the metal lines, which you have as the processes progress. You use a standard SRAM cell built in those processes and calculate the charge being deposited. That makes our technology robust to noise, and we have five or six generations of test chips that demonstrate it.</p><div class="callout-block" data-callout="true"><p><em><strong>Sidebar: What is analog in-memory compute?</strong></em></p><p><em>Traditional chips separate memory, where values are stored, from compute, where matrix multiplications happen. Data shuttles back and forth across a bus, and in AI workloads that shuttling consumes far more power than the maths itself. Analog in-memory compute does the multiplication inside the memory array, encoding weights as charges or currents, and summing them in the analog domain. In principle this is vastly more efficient; in practice it&#8217;s historically been defeated by noise, variability, and the need for exotic silicon. EnCharge&#8217;s approach, switched-capacitor circuits on standard SRAM, is an attempt to get the efficiency without the pain.</em></p></div><p><strong>Lawrence:</strong> Right. So give me numbers. What does this actually buy you?</p><p><strong>Shwetank:</strong> We have to be cautious about which layer we&#8217;re characterising. When I say a few hundred TOPS per watt, I mean it at the MAC level, multiply-and-accumulate.</p><p>Once you wrap an NPU around it, with typical digital architectures (or, more cheekily, legacy digital architectures, since these are the biggest companies in the world) you get a few TOPS per watt for the entire GPU. Digital in-memory compute gets you into the low tens of TOPS per watt, 10 to 20. 
By the time you get to analog in-memory compute iso-node, you&#8217;re at mid-tens of TOPS per watt for the whole NPU at the system level.</p><p><strong>Lawrence:</strong> Right. But is that even the right metric? Shouldn&#8217;t we be measuring tokens per second per watt? TOPS/W doesn&#8217;t really measure anything customers care about.</p><p><strong>Shwetank:</strong> You&#8217;re getting to something that really bugs me. I did an entire rant on why everyone in the Valley is measuring compute in gigawatts. That has perverse second-order effects. You can&#8217;t afford to do that in a world where US energy consumption was flat and is now increasing because of data centres. <em><strong>Gigawatts is a pure input. It tells you nothing about the value you&#8217;re getting out.</strong></em></p><p>The way I break it down: TOPS per watt is the silicon comparison, at the MAC level and the NPU level. Then go a level above and ask whether you can run the same algorithms. If they&#8217;re textual, tokens per second per watt becomes the relevant metric. If they&#8217;re video, it&#8217;s frames per second or voxels per second per watt. It&#8217;s always possible to cheat by throttling performance to look more efficient, so you really want to plot a Pareto curve: per watt, per second, and what&#8217;s the useful unit of work, tokens, pixels, trajectories. At different levels you have to be methodical about what you&#8217;re measuring. And ideally it shouldn&#8217;t be a vanity metric, which often it ends up being. Measuring data centres in gigawatts is, in my opinion, unequivocally a vanity metric.</p><p><strong>Lawrence:</strong> Right. So then what&#8217;s the theory of the case as to why Sam and Elon are both pushing gigawatts? Because it seems kind of remarkable. Profitability is directly capped by their opex, and they don&#8217;t seem to be working particularly on opex. And yet on the other hand, they&#8217;re talking about pushing more input. 
What&#8217;s your theory?</p><p><strong>Shwetank:</strong> These are very smart individuals and they&#8217;ve been very successful, so I won&#8217;t try to second-guess what&#8217;s going on in their minds. But here&#8217;s how I look at it. You can think of this in terms of whether it&#8217;s a winner-take-all market. If it is, one way to scare everyone else away is to make massive commitments to data centres. We&#8217;ve seen this in pharma, where the marginal cost of producing the next pill is very low and most of the cost is research. Once you&#8217;ve done the research, you amortise it as much as possible by building a big factory and saying, I&#8217;ve made this huge capital investment, no one else dare set up another factory, because I want to flood the market with pills.</p><p>I have a different point of view. The marginal cost of producing a token is not infinitesimal. It&#8217;s significant. In the post I wrote on the financials of Anthropic and OpenAI, you can see their gross margins coming in around 30 to 40%. These aren&#8217;t margins at which you&#8217;d call yourself a software company.</p><p><strong>Lawrence:</strong> Right. It&#8217;s the end of zero marginal cost software. That reframes a lot of the capex debate. The days of SaaS margins are over. Okay, let me pull on your software. Are your quantisation techniques proprietary?</p><p><strong>Shwetank:</strong> We have our own quantisation within the company. We can take an arbitrary model off HuggingFace, run it through our PTQ (post-training quantisation) workflow, and out pops a model that&#8217;ll work on EnCharge hardware. There&#8217;s a tool chain behind it. 
We&#8217;ve made investments in the compiler tool chain and the quantisation tool chain, and that gets coupled with our hardware and shipped out into the world.</p><div class="callout-block" data-callout="true"><p><em><strong>Sidebar: Post-Training Quantisation</strong></em></p><p><em>Most neural networks are trained in 32-bit or 16-bit floating point. Running inference at those precisions is wasteful. In many cases you can drop to 8-bit or 4-bit integers with almost no loss of accuracy. Post-training quantisation (PTQ) does this conversion after training has finished. For an analog chip this matters more than on a digital one: the physical noise of the analog array interacts with the quantisation noise, so the software and the silicon have to be co-designed from day one. Compilers and tool chains, in analog silicon, are load-bearing.</em></p></div><p><strong>Lawrence:</strong> Are we talking prefill or decode? Because you&#8217;re already seeing a split, aren&#8217;t you, in the inference workloads.</p><p><strong>Shwetank:</strong> Absolutely. I&#8217;ve written extensively about how the data centre is getting disaggregated. We have, or will have, solutions for both. Initially, EN100 and EN200 themselves are more focused on compute-dominated workloads. That&#8217;s where you instantaneously get 10 to 20x energy efficiency iso-node. But we do have a decode story coming up as well.</p><p><strong>Lawrence:</strong> Right. The photonic folks have a similar problem. You compute in photons beautifully but you don&#8217;t have photonic memory, so you constantly cross ADC and DAC boundaries, which eats pretty much all of the benefits depending on the size of the algorithm. So to what extent is ADC a big chunk of your power budget too?</p><p><strong>Shwetank:</strong> That&#8217;s a very fair question. A little out of my day-to-day realm, but I&#8217;ll tell you this much: yes, we&#8217;re very thoughtful about ADC design. 
With every incremental bit you add, the ADC can become a significant power hog. That&#8217;s why we report power at the MAC level and the NPU level separately. In addition to ADCs, other things knock off your efficiency as you zoom out to the system level. You have to be careful not just about the matrix multiplications, which is a solved problem for us at this point, but also how you do layer norms and other non-linear functions. Every time we discuss adding a bit of precision for quality, we have to think carefully about the power trade-offs. The systems we&#8217;re designing are, hopefully, at that sweet spot. It helps that Naveen&#8217;s team at Princeton have designed analog circuits all their lives.</p><div class="callout-block" data-callout="true"><p><em><strong>Sidebar: ADC Tax</strong></em></p><p><em>An analog chip does its multiplications in continuous voltages and currents. But the rest of the system, accumulators, non-linear functions, off-chip memory, is digital. Every time you cross the boundary, you need an analog-to-digital converter (ADC), and ADCs are power-hungry in a non-linear way: each additional bit of resolution roughly doubles the power cost. You can&#8217;t simply &#8220;add more precision&#8221; to an analog design the way you can in digital. The energy wins of analog compute only exist inside a careful ADC budget, and the strongest analog teams are the ones who can hold that budget at the system level, rather than only at the array level. This is why I am bearish on an all-optical processor.</em></p></div><h3>Going to market</h3><p><strong>Lawrence: Okay, fine, I was surprised reading about EnCharge that you&#8217;re leaning into client PCs</strong>. Not drones, not cameras, as I&#8217;ve seen with analog, IMC, neuromorphic, etc. Mythic sits in surveillance, Halo and Accelera in cameras. Why client PC? Seems weird.</p><p><strong>Shwetank:</strong> So far we&#8217;ve focused on client PC use cases. 
The energy efficiency and power envelope would be appropriate for drones and cameras as well. It&#8217;s a GTM motion we haven&#8217;t leaned into that much yet. In time, that&#8217;s stuff we&#8217;ll probably look at.</p><p><strong>Lawrence:</strong> Right, but I guess what I am getting at a bit more is that a lot of edge AI startups started trying to sell an edge chip and ended up selling into the data centre. It&#8217;s the biggest infrastructure demand of our generation. Why sell into a market where the demand isn&#8217;t really clear versus a market where they will bite your arm off for single-digit per cent reductions in power?</p><p><strong>Shwetank:</strong> On the larger strategic questions I&#8217;ll defer to Naveen, but my point of view is that we don&#8217;t have the option to say, this is a market we&#8217;ll stay away from. Over time we&#8217;ll probably support all of these markets. The questions of timing and sequencing are being talked about internally. We see it divvied up into three large markets. Data centre, which is already up and running and super-competitive. Client, which is essentially PCs and laptops. And physical AI, which isn&#8217;t really one market, it&#8217;s three or more disaggregated markets, and you have to make a bet on which will take off. If I were the decision-maker, I&#8217;d take a real-options approach to sequencing these bets.</p><p><strong>Lawrence:</strong> Okay, I want to go back to something you said in passing, because I think it&#8217;s really important. You said video generation is currently uneconomical. I guess we already knew this with Sora being taken off the market.</p><p><strong>Shwetank:</strong> Absolutely. This was one of the memos I&#8217;d written. I know video generation is uneconomical, and partly the market is constrained because of that. It&#8217;s a reasonable use case to target. 
But you always have to balance: a reasonable use case where you can unlock pent-up demand versus a market with significant tailwind, where everyone is currently doing agentic workflows. You spread the bet between something held back that you can unlock, and something already moving that you can ride.</p><p><strong>Lawrence: </strong>That&#8217;s a useful frame. Speaking of expensive workloads, your gross-margin argument assumed a rough steady state in compute per query. o1-style reasoning and extended thinking burn 10-100x more compute per output token. Either that makes efficient inference dramatically more important, or it pushes the hard reasoning back to the cloud and leaves the edge doing cheap fast tasks. Which way does it land?</p><p><strong>Shwetank: </strong>Test-time compute makes me very bullish. When you go from a model emitting 200 tokens per answer to one emitting 20,000 because it&#8217;s reasoning through the problem, per-watt-per-token becomes the critical metric. The whole conversation shifts from &#8220;how do I get a faster answer&#8221; to &#8220;how many joules did that answer cost me, and can I sustain that across millions of queries.&#8221;</p><p><strong>Lawrence: </strong>And the cloud versus edge split? Doesn&#8217;t longer reasoning push everything back to the data centre?</p><p><strong>Shwetank: </strong>That ignores what metrics matter for the interesting reasoning workloads, which are latency- or privacy-sensitive or both. Robotics doing multi-step planning. An agent on a phone reasoning about your calendar and inbox. An AR system reasoning about what you&#8217;re looking at. None of those tolerate a 5-second round trip to a data centre, and most can&#8217;t send the context to the cloud in the first place. Extended thinking actually makes the edge case stronger, because doing it in the cloud at that token volume becomes prohibitive on economics and latency. 
Cloud will run the largest reasoning models, but a meaningful slice of reasoning workload pushes outward.</p><p>One more thing worth flagging. Another piece of test-time compute is parallel sampling. Many reasoning approaches generate multiple candidate traces and select among them. That&#8217;s embarrassingly parallel, exactly where compute-dominated architectures shine.</p><h3>Architecture, risks, and the unlock</h3><p><strong>Lawrence: </strong>That&#8217;s a useful answer. On your role specifically, chief scientist is different from chief architect or head of engineering. What are the big chunky things you spend time on?</p><p><strong>Shwetank:</strong> It&#8217;s really great to have the partnership I have with the executive team. We&#8217;re all technologists with our own areas of specialisation, so even when I&#8217;m focused on one specific area, I know Kailash, Naveen, and others are covering the rest. Each of these areas is very fast-moving.</p><p>The charter I have, and the things I spend most of my time thinking about, comes down to three things. First, the race between model architectures. Which of these architectures, and which techniques within them, is going to win? Some come up as papers and just take off. Mixture of experts is an example: pretty much every model right now is a mixture-of-experts model. We have to think carefully about what that means for our chips and how execution happens. The moment it&#8217;s a mixture of experts, some layers shift from compute-bound to memory-bound.</p><p>Second, emerging architectures. There&#8217;s a recent paper where, instead of stacking transformer blocks on top of each other, they make a recurrent loop using the same block. Super interesting in terms of energy efficiency and parameter count. The interplay with our quantisation has to be thought through if this takes off. 
We have to work on it and see whether it actually does, and whether it&#8217;s a bet worth making.</p><p>Third, taking all these new techniques and models and seeing them through to application. If we&#8217;re going to support a video generation model, what are the core applications? How does it land in the market? Are we working with video-model training companies, or taking open-source models and working with final customers? Actually landing all the way to market is one of the things we have to think about.</p><p><strong>Lawrence: </strong>Right, on architecture races, let me flip the question for you. Which of the currently-hot research directions is the nightmare case for switched-capacitor analog compute? Diffusion language models? Mamba-style state-space where the matmul stops being the bottleneck? Test-time reasoning with dynamic workloads? I&#8217;m after the physics-level failure mode, not the competitive one.</p><p><strong>Shwetank: </strong>Honestly, none of those are the nightmare. Our architecture is built around the fundamental primitive of linear algebra, which is matrix multiplication. That primitive isn&#8217;t going anywhere. Diffusion, Mamba, test-time reasoning, they all bottom out in matmuls. What changes is the ratio of compute to memory traffic and how the workload is scheduled. Compute-dominated regimes are where we win biggest, 10-20x at iso-node. Memory-dominated regimes are harder for everyone but don&#8217;t disadvantage us disproportionately either.</p><p><strong>Lawrence: </strong>Right, so at the algebra level you&#8217;re safe. Where&#8217;s the actual risk?</p><p><strong>Shwetank: </strong>It&#8217;s more boring and lower-level: noise immunity, fab variability, whether our analog circuits keep scaling cleanly through standard CMOS process nodes the way digital does. That&#8217;s what we have to demonstrate in a product. If that translates, the linear algebra primitive is safe regardless of what model architecture wins. 
The interesting architecture work, by the way, lives at the system level: how the hardware execution path and software toolchain accommodate new model architectures quickly. Full-stack engineering, in other words, and it&#8217;s what we spend most of our time on.</p><p><strong>Lawrence: </strong>Right, broader question. The current equilibrium, over-parametrised models, everything in the data centre, open source chipping at the edges, what breaks it, and when? I can see four candidates: token pricing crossing a viability threshold, sovereign-AI rules forcing local inference, sub-100ms agentic loops, or a genuinely capable sub-10B open-weight model. Which is the unlock?</p><p><strong>Shwetank: </strong>All four are real, but each has a different causal driver and pace. The equilibrium doesn&#8217;t break in one event. It erodes from multiple sides at once. Token pricing is playing out in real time. Cloud providers have been subsidising tokens and customers haven&#8217;t been price-discerning, but that changes as high-volume agentic use cases come online in the next six months. Sovereign AI regulation moves defence, healthcare, and finance into local inference earlier than otherwise, but it doesn&#8217;t reset the whole industry. It&#8217;s a tailwind.</p><p><strong>Lawrence: </strong>So latency is the actual unlock?</p><p><strong>Shwetank:</strong> It&#8217;s the most immediately impacting one, because cloud cannot solve it. The speed of light alone makes it impossible. The moment a meaningful product category emerges where the agent has to close a loop faster than a network round-trip allows, the architecture decision is forced. Robotics is the obvious one, but interactive AR and on-device agents that act on your behalf in real time get there first, because the consumer hardware already exists.</p><p><strong>Lawrence: </strong>And the sub-10B model side?</p><p><strong>Shwetank: </strong>Sub-30B more accurately. 
That&#8217;s the catalyst that makes the latency unlock actually shippable. Deploying a 70B model on a phone or a robot, even with quantisation, is hard. You need the model side to meet the hardware side. The trajectory is faster than most people expected a year ago: Qwen, Gemma, Llama derivatives, and the whole distillation toolchain have compressed capability into smaller footprints at a rate that surprised me. So the unlock arrives as three things at once. Latency-bound use cases need capable small models, small models need efficient hardware to run on, and efficient hardware needs an open-weight ecosystem to have something worth running.</p><p><strong>Lawrence: </strong>Last one, and I&#8217;ll let you get off to more important things: the EN100 is coming back from fab soon. What should we expect to be visible from the outside?</p><p><strong>Shwetank:</strong> The specific numbers will be on the product page, so I won&#8217;t pre-announce. The short version: the efficiency we&#8217;ve been quoting at the NPU system level is real hardware, not simulation. That&#8217;s the whole point of taping out a product chip. We&#8217;ll keep iterating. We have a conveyor belt of test chips running in parallel, so it&#8217;s a matter of iterating on the architecture and the designs, and co-developing the software at the same time. The next generations aren&#8217;t waiting on this one.</p><h2><strong>So What?</strong></h2><p>The first takeaway, placed next to<a href="https://stateofthefuture.substack.com/p/ai-chips-computeram-and-the-future"> Manu from Synthara on ComputeRAM</a> and<a href="https://stateofthefuture.substack.com/p/photonic-engines-for-data-centers"> Hitesh from Phanofi on photonic engines</a>, is that this is another data point in the same underlying bet. AI&#8217;s energy wall is real, and the fix is about reorganising where arithmetic happens relative to where data lives, rather than pushing transistors smaller. 
The names vary, compute-in-memory, in-memory compute, analog, photonic, near-memory, but the architectural idea is the same. The post-Von-Neumann decade is starting whether the incumbent GPU companies like it or not, and each of these companies is a different probe into how it might actually play out.</p><p>The second is the gross-margin point, which Shwetank delivered in passing but deserves its own paragraph. Anthropic and OpenAI are<a href="https://aiafterhours.substack.com/p/openai-vs-anthropic-the-121-billion"> running inference at 30-40% gross margins</a>. That&#8217;s closer to semiconductor economics, or utility economics, or, depending on the quarter, chemical-plant economics. The industry response, build more gigawatts, is, as Shwetank says, a vanity metric. It amortises the capex. It doesn&#8217;t fix the gross margin. The only thing that fixes the gross margin is efficiency at the tokens-per-second-per-watt level. Which is to say, EnCharge is selling the unit economics of AI inference.</p><p>The third is the client PC point, which is where I was most sceptical going in and most interested coming out. Selling chips to consumer OEMs is famously brutal. Long design-in cycles, price compression, a handful of oligopsonist buyers. The easier path is to sell into hyperscalers, and it&#8217;s striking how many of the original &#8220;edge AI&#8221; companies have pivoted there. Shwetank&#8217;s framing, that data centre is up-and-running and super-competitive, that client PC is what you focus on first, that physical AI is a real-options problem, is refreshing because it acknowledges what the D-Matrix and Groq trajectories have made plain. The data-centre inference market is already over-fished, and a chip company needs a second path. 
Whether client PC is the right second path, rather than robotics or industrial vision, I&#8217;m honestly not sure.</p><p><strong>Where I&#8217;m not fully convinced: timing.</strong> The &#8220;video generation is uneconomical&#8221; claim is the most commercially interesting thing in the interview, and it&#8217;s also the one most exposed to software catching up before the silicon arrives. If frontier video models keep getting 2-3x cheaper per year through architecture, compression, and better training alone, the &#8220;efficiency hardware saves the economics&#8221; window closes before EnCharge can ship enough silicon to matter. EnCharge is betting the window is still open by the time EN100 and its successors are shipping in volume. I think they&#8217;re probably right. But it&#8217;s a timing bet, and timing bets are where semiconductor companies come unstuck most often.</p><p>Find out more at<a href="https://enchargeai.com"> enchargeai.com</a>. Shwetank writes &#8220;<a href="https://aiafterhours.substack.com/">AI After Hours</a>,&#8221; sharp stuff on AI inference economics, worth a subscribe whether you care about silicon or not.</p><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Dispatch from 24th April 2026: Everywhere Except in the Statistics]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-002</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-002</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 24 Apr 2026 13:10:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RXmS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RXmS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RXmS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!RXmS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!RXmS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!RXmS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RXmS!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png" width="1200" height="654.3956043956044" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:11131128,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/194494407?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RXmS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!RXmS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!RXmS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!RXmS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48f2e15e-796c-4043-850e-20cdee7fd002_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>New thing I am going to try, on and off. Going to start going back through old State of the Future essays and actually check what I predicted versus what happened. Twitter has pretty much beaten the &#8220;write confident call, pretend you never said anything if it did not pan out&#8221; habit into everyone who writes on the internet for a living (apart from Derek Thompson who is the man). For me it&#8217;s kale and score cards. Coming back. Dating the predictions. Keeping the receipts.</p><p>Starting, obviously, with one I got right. Because I can. 
TSMC reported this week and the essay I want to revisit is <a href="https://stateofthefuture.substack.com/p/e21-can-we-make-enough-ai-chips">Can We Make Enough AI Chips?</a> from November 2023, in which I argued the binding constraint for AI chips would never be GPU logic, it would always be packaging (CoWoS) and memory (HBM). Well, was I right? Am I now an honest-to-god semiconductor investor? </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!dNxj!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9efa977-b8e9-44f0-ad3e-2ba24624cce0_480x480.gif" width="480" height="480" class="sizing-normal" alt=""></figure></div><p>&#8212;</p><h2>1. TSMC Q1 2026</h2><p>TSMC reported last Thursday. $35.9bn revenue, up 40.6% YoY. 66.2% gross margin on a business that burns $56bn a year on capex. Net income up 58%. Full-year guidance raised to 30%-plus top-line growth in dollar terms.</p><p>HPC is now 61% of revenue. Smartphone is 26%. For years the smartphone SoC business was the main event and HPC was the kicker. Now HPC is more than 2x smartphone, grew 20% sequentially this quarter, and smartphone actually fell 11%. Apple used to write the biggest cheque. Now NVIDIA. A sign of the times. Can&#8217;t wait until the biggest customer is SpaceX.</p><p>Also, read this: <strong>2026 capex at the upper end of $52-56bn is more than half of what TSMC spent across the previous three years combined.</strong> C.C. Wei called it a &#8220;multiyear AI megatrend&#8221; on the call.</p><p>So, E21 scorecard. 
I argued back in November 2023 that the binding constraint for AI chips would never be GPU logic; it would always be packaging (CoWoS) and memory (HBM). On this call Wei confirmed <em><strong>CoWoS capacity is sold out through 2026,</strong></em> HBM3 and HBM3E are fully allocated, <strong>NVIDIA alone has over 60% of the packaging line</strong>, and TSMC is going from roughly 35,000 CoWoS wafers per month at end-2024 to 130,000 by end-2026. <strong>Nearly quadrupling in two years. Ten to twenty percent of the $56bn capex is going to advanced packaging alone.</strong> The bottleneck moved from wafers to packaging in 2023 and has not moved since. The 2027 story is interconnects and panel-level stacking (the CoPoS pilot line finishes in June). Two and a half years later the call still holds. Call it a win. I literally can&#8217;t bold more relevant stuff. <strong>&#8220;Chips&#8221; aren&#8217;t the bottleneck. &#8220;Packaging&#8221; is. Repeat after me.</strong></p><p><strong>Source:</strong> <a href="https://investor.tsmc.com/english/quarterly-results/2026/q1">TSMC Q1 Investor Relations</a></p><div><hr></div><h2>2. The Entry Rung, Again</h2><p>Meta is laying off 10% of its meat sacks, as Zuck calls them (8,000 people). Microsoft opens voluntary retirement to 7% of US staff. That puts Q1 tech layoffs at 80,000. So: &#8220;AI is here and it&#8217;s taking jobs&#8221;.</p><p>Callback to Issue #4. Block did roughly 4,000 layoffs in February (from over 10,000 to under 6,000) and the narrative was &#8220;AI is eating the point-of-sale business.&#8221; I said at the time that Block just overhired during Covid and was using AI as cover. Three months on, the read holds. Meta is the same story at Meta scale. 
Microsoft&#8217;s voluntary retirement is genuinely a first in 51 years of the company, but the people taking the payout are senior directors with 20 years of tenure, so this isn&#8217;t AI-exposed entry-level engineers, this is balance-sheet cleanup.</p><p>Relevantly, British Progress published &#8220;AI and the UK Labour Market: The Evidence So Far&#8221; this week. Key details:</p><ul><li><p><strong>Three years after ChatGPT they find no macro signal of AI displacement in the UK.</strong></p></li><li><p><strong>Occupations with higher AI exposure have grown faster than least-exposed ones, across all four exposure measures and both data sources they use</strong>.</p></li><li><p><strong>IT business analysts up 38% since 2021, programmers up 18%. Call-centre workers down 19%, telephone sales down 23%</strong>.</p></li></ul><p>Same AI exposure, opposite outcomes: what matters is whether AI augments the task or substitutes it. Bessen&#8217;s old bank-teller point: ATMs made branches cheaper so banks opened more branches, so teller employment kept rising for two decades. Programming expands when debugging gets cheaper. Call-centre scripts are fixed demand and the work just disappears. Elasticity.</p><p>But. Crane and Soto at the Fed Board find US coder employment three percentage points per year below trend since ChatGPT. British Progress cannot replicate it for the UK (pre-ChatGPT window too short, occupational classification changed in 2021). Brynjolfsson&#8217;s Stanford Canaries paper has 22-25 year olds in AI-exposed roles down 13% relative, young software devs down 20% from the late-2022 peak, and workers 30+ in those same roles up 6-13%. The signal is there. It is at the entry rung.</p><p>The story is the 22-year-old with a CS degree who is not getting the interview. Invisible, because you cannot put a face on a job that never happened. No politician or trade union is standing up for the people who didn&#8217;t get jobs. 
Meta firing senior PMs in Menlo Park is noise, and actually muddies the water here because it looks like a job-automation story, but it&#8217;s not.</p><p>Fwiw, this is basically the scenario I sketched in <a href="https://stateofthefuture.substack.com/p/occupational-downgrading">Occupational Downgrading</a> and <a href="https://stateofthefuture.substack.com/p/unbundling-the-job">Unbundling the Job</a> a while back. The macro looks fine but the entry rung collapses.</p><p>Meta is not cost-cutting. Microsoft is not cost-cutting. Capex is going up. Way up. Meta capex jumped from $72bn in 2025 to $115bn in 2026. Microsoft spent $88bn on AI in 2025 and is going well past that this year. Big tech is shifting spend from labour to capital, <strong>from payroll to tokens</strong> (that&#8217;s the line).</p><p>Is anyone measuring this properly? Someone needs to be. The useful thing would be a firm-level ratio of dollars-per-function on the capital side versus the labour side, tracked over time. <a href="https://www.semafor.com/article/04/22/2026/ai-tokens-may-be-starting-to-rival-labor-costs">Semafor ran a piece on Tuesday</a> arguing AI token spend is starting to rival labour costs at the enterprises that have actually adopted, which is directional but not really quantified.</p><p>The curve you would expect, if this is substitution rather than augmentation, is probably not linear.</p><ul><li><p>Phase 1 is what we are in now: token spend up, labour flat, productivity per worker up (British Progress has UK software GVA per worker up 16% since 2019 against 0.4% economy-wide). <strong>Capital side grows and labour side holds.</strong></p></li><li><p>Phase 2 is when model reliability crosses the threshold for a specific category of work: labour for that category starts falling in absolute terms, and capital spend spikes to replace it. That is the entry rung in software post CC and Codex.</p></li><li><p>Phase 3 is when reliability crosses the broader threshold and the substitution goes wide. 
Flat, flat, flat, cliff. Which is why the macro data will keep looking fine right up until the point it doesn&#8217;t.</p></li></ul><p><strong>Solow, 1987:</strong> &#8220;You can see the computer age everywhere but in the productivity statistics.&#8221; Electricity took forty years to register in output data. Computing thirty. We are three years into generative AI. The British Progress closer basically says &#8220;check back in two or three years&#8221;, which is absolutely the right take. But. Here&#8217;s my thing as a seed-stage VC, not an economist: I can&#8217;t afford to wait for the data. I have to make bets before the data. My bet is the three-phase thing above. Forthwith to be named: <strong>The State of the Future Tokens Versus Labour Framework. SOFTTVLF.</strong></p><p><strong>Source:</strong> <a href="https://britishprogress.org/reports/ai-and-the-uk-labour-market-the-evidence-so-far">British Progress: AI and the UK Labour Market: The Evidence So Far</a></p><div><hr></div><h2>3. Frontier More Expensive</h2><p>That was a long one. I&#8217;ll keep the rest short so you may continue with your day. Thursday morning: GPT-5.5. Friday morning: DeepSeek V4. Some numbers:</p><ul><li><p>GPT-5.4 was $2.50 input and $15 output per million tokens.</p></li><li><p>GPT-5.5 is $5 input, $30 output per million tokens.</p></li></ul><p><strong>That is a straight 2x price jump in six months.</strong></p><p>And GPT-5.5 Pro is $30/$180, which is 6x the base. Opus 4.7 kept the same rate card on paper, but the new tokenizer produces up to 35% more tokens for the same text, so the real bill is up roughly 35%.</p><p>DeepSeek went the other way. V4-Pro at $1.74 / $3.48 per million tokens, V4-Flash at $0.14 / $0.28. Apache 2.0 open weights, 1M context, 1.6T-param MoE with 49B active. SWE-Verified 80.6, essentially tied with Claude&#8217;s 80.8, at roughly a tenth of the Western sticker price. 
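To put that gap in numbers, here is a back-of-envelope sketch using the rate cards above. The 200K-input / 50K-output task size and the seven-agent team are illustrative assumptions, not benchmarks.

```python
# Back-of-envelope cost of one agentic coding task at the rate cards above.
# Token counts and team size are illustrative assumptions, not measurements.

PRICES = {  # $ per million tokens: (input, output)
    "gpt-5.5": (5.00, 30.00),
    "deepseek-v4-pro": (1.74, 3.48),
}

def task_cost(model, input_tokens, output_tokens, agents=1):
    """Dollar cost of one task; each extra agent keeps its own context."""
    inp, out = PRICES[model]
    return agents * (input_tokens * inp + output_tokens * out) / 1_000_000

# One heavy task: 200K tokens in, 50K out.
single = task_cost("gpt-5.5", 200_000, 50_000)
# The same task run by a 7-agent team, frontier vs DeepSeek.
team_frontier = task_cost("gpt-5.5", 200_000, 50_000, agents=7)
team_deepseek = task_cost("deepseek-v4-pro", 200_000, 50_000, agents=7)
# single: $2.50, team: $17.50, deepseek team: $3.65
print(f"single: ${single:.2f}, team: ${team_frontier:.2f}, deepseek team: ${team_deepseek:.2f}")
```

Swap in your own token counts; the point is that team-of-agents workloads multiply whatever the rate card says.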
Plus a 90% cache discount on repeated prefixes, which is the single biggest cost lever on anything agentic.</p><p>The lazy read is &#8220;AI is getting cheaper.&#8221; It is and it isn&#8217;t. The commodity tier is in freefall (Gemini Flash at $0.08 / $0.30, GPT-4o Mini at $0.15 / $0.60) but the frontier is pricing up into its demand curve. And the frontier is where agentic coding actually has to run. A single agentic task plows through 20K-200K tokens. Agent teams multiply that by 7 because every teammate maintains its own context window.</p><p>So the per-token price keeps dropping. But the truth is that it doesn&#8217;t matter much. The task runs longer, the bill is higher, and the only genuine frontier price cut this week came from a Chinese lab giving away the weights.</p><p>If you have been watching the capability curve thinking &#8220;surely this will all be basically free by 2027&#8221;, it will be, for chatbot use cases. For the actual agentic work you would want to run, it will still be $5 per million input tokens. You will just use 20 times more of them.</p><p><strong>Source:</strong> <a href="https://simonwillison.net/2026/Apr/24/deepseek-v4/">Simon Willison on DeepSeek V4</a></p><div><hr></div><h2>4. Tragic Twenties</h2><p>Finally, just a hell of a great read as usual from Derek Thompson. Not really an AI or tech story, just read it. &#8220;If America&#8217;s So Rich, How&#8217;d It Get So Sad?&#8221;.</p><p>The headlines: the lowest US ranking ever in the World Happiness Report. The lowest consumer sentiment in 70 years. Federal Reserve worker satisfaction at its lowest since 2014. Trust collapse across government, the military, the CDC, education, religion, pick any institution. Consumer prices up 25% between summer 2020 and summer 2025. Housing moving at roughly twice pre-pandemic pace. 
Thompson calls it the &#8220;<em><strong>permademic</strong></em>&#8221;, the pandemic&#8217;s second-order effects that never actually went away.</p><p>Very, very, very interestingly, English-speaking countries got clobbered hardest (US, UK, Canada, Australia). <em><strong>Portugal, Italy and Spain saw happiness actually rise in the 2020s.</strong></em> Turns out if you do not have 25% inflation you do not get the misery. Spain again, folks. Amar was right.</p><p>The key point is that inflation is just so pernicious. No-one notices the rate of price increases going up and down. They only notice that prices are up. People live with their housing costs up 50% and their weekly shop up 25% in a few years. Thompson&#8217;s point is that feelings drive what comes next, more than economics does. I think this is right. And if you will indulge me, allow me to tie it to AI. If I wanted to change the messaging around AI, I think this is where I would start. Tell the deflationary stories. E.g.</p><ul><li><p>Private tutoring is &#163;60-80 an hour. Khanmigo is &#163;4 a month. So a year of unlimited AI tutoring costs less than one hour with a human.</p></li><li><p>Basic legal advice from a solicitor, &#163;200-300 an hour. 
A competent first-pass answer to 80% of consumer legal questions, free, in your browser, right now.</p></li><li><p>GP triage used to be &#8220;wait three weeks on the NHS or pay &#163;100 private&#8221;, now it is &#8220;ask the chatbot and it is usually right about whether to worry&#8221;.</p></li></ul><p><strong>Source:</strong> <a href="https://www.derekthompson.org/p/if-americas-so-rich-howd-it-get-so">Derek Thompson: If America&#8217;s So Rich, How&#8217;d It Get So Sad?</a></p><p>&#8211;</p><p>Thanks for indulging me, as usual.</p><p><strong>If you missed it:</strong></p><ul><li><p><a href="https://stateofthefuture.substack.com/p/the-photonic-foundry-fallacy">The Photonic Foundry Fallacy</a>, the biggest opportunity in computing, still</p></li><li><p><a href="https://stateofthefuture.substack.com/p/detecting-proteins-in-blood-with">Detecting Proteins in Blood with Photonics</a>, Prateek on ambient health, magnetic beads, and why the optics are the bottleneck</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Detecting Proteins in Blood with Photonics w/ Prateek of Proteins1 ]]></title><description><![CDATA[The protein detection problem, magnetic beads, and the path to ambient health]]></description><link>https://stateofthefuture.substack.com/p/detecting-proteins-in-blood-with</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/detecting-proteins-in-blood-with</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Wed, 22 Apr 2026 11:11:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!b-WG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04c57af-f0e8-4dd1-b72d-a3a506f632e7_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>Proteins.1 is Cloudberry&#8217;s first investment. So I&#8217;m talking my book for the next 2000 words. 
Calibrate accordingly.</strong></em></p><div><hr></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!b-WG!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff04c57af-f0e8-4dd1-b72d-a3a506f632e7_2816x1536.png" width="1200" height="654.3956043956044" class="sizing-large" alt=""></figure></div><p>Preventative health. You have a semiconductor and photonics fund and what&#8217;s the first deal you do? A bloody biosensor. Because photonics has miniaturised and got so cheap that we can now use photonic chips to sense more and more things. One of those things is proteins. And proteins are where diagnostics has been stuck for forty years.</p><p>So here&#8217;s your pitch.</p><p>Most cancers get caught late: you feel a lump, or a scan for something else picks up a shadow. The GP orders bloods and something comes back bad. By the time any of that is happening, the tumour has already been growing for close to a decade. Which matters, because the survival curves are ugly. Caught at stage 1, breast cancer has a 99% five-year survival rate. Stage 4, under 30%. 
Pancreatic is worse: 40% at stage 1, 3% at stage 4. The gap between &#8220;caught early&#8221; and &#8220;caught late&#8221; is, for most of the cancers that actually kill people, the gap between treatment and palliative care. And long before the lump and the scan, there is a trail; it&#8217;s just hard to see. A pinhead tumour is already leaking proteins into the blood.</p><p>Right, now PCR. Polymerase chain reaction. Remember Covid tests? Those. Take one strand of viral DNA, drop it in a tube with an enzyme whose entire job is to copy DNA, and thermal-cycle it thirty times. Each cycle doubles what&#8217;s there. 2^30 is about a billion. One strand of virus becomes a billion, and it&#8217;s impossible to miss. Forensics, paternity, virology, cancer genomics, the whole lot is built on this innovation.</p><p>Proteins though. There&#8217;s no protein polymerase, no biological or chemical machine that takes one protein and makes two. DNA&#8217;s whole thing is self-copying; it&#8217;s literally what the molecule is for. Proteins are downstream of that: they&#8217;re the thing the copying machinery was built to produce. They have no copying machinery of their own. So if your blood is carrying twelve molecules of some cancer-signalling protein in five litres of fluid, that&#8217;s twelve. You detect what&#8217;s there, at the concentration the body was kind enough to provide.</p><p>The workaround, for decades, has been <em><strong>enzymatic amplification.</strong></em> You tag the protein with an enzyme, the enzyme spits out thousands of glowing molecules, and you read the glow. The catch is that enzymes are noisy. They fire randomly even when no target is present, they care about temperature, and they degrade between batches. The best enzymatic assays get you to <em>femtomolar</em> detection (<em>a concentration of 10<sup>-15</sup> moles per litre</em>), and then they hit a noise floor. 
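The asymmetry is easy to put in numbers. A quick sketch, assuming a two-microlitre blood draw for illustration:

```python
# PCR vs proteins, in raw molecule counts.
AVOGADRO = 6.022e23  # molecules per mole

# PCR: thirty thermal cycles, each one doubling the DNA copies present.
copies = 2 ** 30  # one strand becomes ~a billion copies

# Proteins: no copying step, so you detect only what the sample contains.
def molecules(concentration_molar, volume_litres=2e-6):
    """Molecules physically present in the sample at a given concentration."""
    return concentration_molar * volume_litres * AVOGADRO

print(f"PCR copies after 30 cycles: {copies:,}")                              # 1,073,741,824
print(f"femtomolar (1e-15 M) in 2 uL: ~{molecules(1e-15):.0f} molecules")     # ~1204
print(f"attomolar  (1e-18 M) in 2 uL: ~{molecules(1e-18):.1f} molecules")     # ~1.2
```

Femtomolar leaves you roughly a thousand molecules to find; attomolar leaves you roughly one. That is the scale of the sensitivity jump being claimed.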
</p><p>So to Proteins.1. Instead of making the signal chemically louder, they read the same molecule over and over on a photonic chip, letting certainty accumulate from the repetition. Magnetic beads, antibodies, and photonic elements replace enzymes. There is a line of sight to three orders of magnitude more sensitivity than the best current platforms: from femtomolar to attomolar detection (a concentration of 10<sup>-18</sup> moles per litre). Small numbers, very small numbers.</p><div><hr></div><h2>What Will You Learn Today If You Decide to Read On?</h2><ol><li><p><strong>Proteins can&#8217;t be copied.</strong> PCR works because DNA copies itself a billion times over. There&#8217;s no equivalent for proteins. Every diagnostic either accepts the noisy enzymatic floor or finds a different amplification mechanism. Proteins.1 is making the mechanical-cycling bet, on a photonic chip that amplifies signal through repetition.</p></li><li><p><strong>Biology is the bottleneck.</strong> Miniaturised lasers, photonic elements, and photonic integrated chips are moving fast. The real bottleneck is getting a single target molecule in a two-microlitre blood sample to physically find the right bead with the right antibody in the right amount of time. You can&#8217;t break the laws of diffusion and binding kinetics.</p></li><li><p><strong>The platform is molecule-agnostic.</strong> The chip doesn&#8217;t care whether it&#8217;s binding a protein, a DNA fragment, or a metabolite. Same chip, same beads, swap the antibody. That&#8217;s the path from a protein diagnostic to a tabletop multi-omics box.</p></li></ol><div><hr></div><h2>The Interview</h2><p><strong>Lawrence</strong>: Alright, Prateek. What up? Give me the ninety-second version. What&#8217;s Proteins.1 actually doing that&#8217;s different?</p><p><strong>Prateek:</strong> In a cell, or in the body, it&#8217;s really hard to tell whether what you&#8217;re seeing is the onset of disease or not. 
A foreign body turns up, is the immune system going to suppress it, or is this actually the start of something? So we take a blind-canvas approach. We look at the whole picture.</p><p><strong>Lawrence:</strong> Meaning what, specifically?</p><p><strong>Prateek:</strong> We analyse one protein in relation to the 4,000 others that are present. One protein expressed at ten copies doesn&#8217;t mean anything on its own. If you can see what&#8217;s happening to all the other proteins at the same time, you can start telling a story about what the biology is actually doing.</p><p><strong>Lawrence:</strong> Right. So this is the parallelisation point. The more proteins you can test in a single run, the richer the picture. And I assume there&#8217;s machine learning sitting on top of that, running against historical datasets to say these two proteins together mean X or Y?</p><p><strong>Prateek:</strong> Correct. When I put on my night vision goggles and look at the sky, I can see all the stars. But there are stars behind those stars. There are galaxies. There are nebula clouds. Astronomy keeps progressing by building larger and better cameras that look at different wavelengths and get the composition of the stars rather than just their position. Our photonic chip is the same. More pixels, more wavelengths, more depth of data.</p><p><strong>Lawrence:</strong> So the simple version: better camera, more you can see.</p><p><strong>Prateek:</strong> Exactly. There&#8217;s a limit, of course. Miniaturising antibodies and functionalising beads is where the wall is. But that&#8217;s why we do this.</p><div class="callout-block" data-callout="true"><p><em><strong>Sidebar: Why enzymes, and why they&#8217;re noisy</strong></em></p><p><em>Most protein diagnostics today run on some variant of ELISA, the enzyme-linked immunosorbent assay. You tag the target protein with an enzyme, the enzyme catalyses a reaction that produces thousands of glowing molecules, and you read the glow. 
That&#8217;s the amplification: one protein becomes a detectable signal. The catch is that enzymes fire randomly even when no target is present, care deeply about temperature, and vary between batches. The noise this creates limits the best instruments to femtomolar sensitivity (10&#8315;&#185;&#8309; molar), which misses many of the most interesting early-disease signals.</em></p></div><p><strong>Lawrence:</strong> That&#8217;s interesting, because my head went to the enormous development in miniaturised lasers, comb lasers, faster modulators, the whole photonic integrated chip stack. You&#8217;re telling me that&#8217;s less the bottleneck. The real wall is antibody coatings and magnetic bead chemistry.</p><p><strong>Prateek:</strong> True, Lawrence. Sometimes we forget that biology exists for a reason. The molecules know what they&#8217;re doing, even if the engineers think otherwise.</p><p><strong>Lawrence:</strong> Fair. Say more on the binding side. What&#8217;s the actual constraint?</p><p><strong>Prateek:</strong> You have a magnetic bead, and let&#8217;s say a single target molecule in this very small sample volume. How is the bead going to find the target? How is it going to carry it to the antibody spot? These things depend on residence time. How the sample is agitated, how you get them in close proximity. We cannot break the laws of chemistry. If it takes a certain time for the binding event to happen, that&#8217;s the time.</p><p><strong>Lawrence:</strong> Right. Good old diffusion. That problem is older than photonics by a few hundred million years.</p><p><strong>Prateek:</strong> (laughs) Exactly.</p><div class="callout-block" data-callout="true"><p><em><strong>Sidebar: Magnetic bead immunoassays</strong></em></p><p><em>A magnetic bead is a micrometre-scale sphere, usually iron oxide coated in antibodies. Mix a few million into a blood sample, the target proteins stick, and now the proteins are bound to something a magnetic field can move. 
The bead gets pulled through a detection region on the photonic chip, the chip reads the binding event, and the magnet pulls the bead back for another read. One captured protein gets read hundreds or thousands of times. Noise is random and cancels out. Signal stacks with every cycle, and certainty emerges from the repetition.</em></p></div><p><strong>Lawrence:</strong> Right. Let&#8217;s jump to ten years out. What does Proteins.1 look like? What kind of company are you building?</p><p><strong>Prateek:</strong> In my previous research I looked at biomarkers in sweat. Sweat is rich in signal, but by the time the proteins make it from blood through the skin into sweat, most of them are degraded. Quantities are tiny. What you&#8217;d want is something that can see what&#8217;s left on your fingerprints, in your breath, on the surfaces you touch.</p><p><strong>Lawrence:</strong> Okay.</p><p><strong>Prateek:</strong> In ten years, this is part of your kitchen. It&#8217;s the tricorder from Star Trek.</p><p><strong>Lawrence:</strong> (laughs) The tricorder. Sure. Hold on though. My head was going somewhere slightly different. Today we&#8217;ve got a tabletop device for blood draws. In some timeline that becomes a wearable. We move away from microneedles for glucose monitoring, even. Optics gets small enough that you can put this on your wrist, in your ear, in a ring, and everything aggregates into constant biomarker data.</p><p><strong>Prateek:</strong> Yes.</p><p><strong>Lawrence:</strong> But what you&#8217;re saying is more ambient than that. The device is around you, not on you. Fridge, bathroom, surfaces. Healthcare IoT.</p><p><strong>Prateek:</strong> True. Ten years ago, Oura didn&#8217;t exist. We had basically no wearables. Apple Watch has completely changed how I think about my sleep, my heart. In the next ten years, technology like ours becomes an integrated part of life. 
Food quality control, what&#8217;s happening in your stool, your fridge telling you something has spoiled.</p><p><strong>Lawrence:</strong> Your fridge knows the eggs have gone off before you do.</p><p><strong>Prateek:</strong> That&#8217;s the vision. You&#8217;re not sitting in front of a box doing a weekly blood draw.</p><p><strong>Lawrence:</strong> Okay, but to actually be a trillion-dollar company you can&#8217;t stay on the tabletop. How do you get from today&#8217;s box, through Gen 3 and Gen 4, to this ambient world? What&#8217;s the roadmap?</p><p><strong>Prateek:</strong> It happens in steps. Today we can do simultaneous DNA and protein interrogation. First step is scaling up from single proteins to a hundred, then to the full proteome. That&#8217;s a big jump, moving from a diagnostic to an omics platform.</p><p><strong>Lawrence:</strong> And the omics part, that&#8217;s where things get interesting.</p><p><strong>Prateek:</strong> You replace mass spec, which is huge and slow, with a small tabletop device that does metagenome and proteome simultaneously. Single device.</p><p><strong>Lawrence:</strong> Hang on. Single device doing proteins, DNA, and metabolites. I want to make sure I&#8217;ve got this. You can do that because the platform is coming from the protein side, where the hard problem is, and the genomics and metabolomics parts are more mature. So you&#8217;re doing the bit nobody can do today, and the rest either follows or you in-license.</p><p><strong>Prateek:</strong> The good thing about our technology is that it&#8217;s molecule-agnostic. As long as something binds to it, we can detect it. We&#8217;re not relying on conventional PCR or sequencing for genomics. 
We won&#8217;t be in-licensing, we&#8217;ll be applying our own platform to those other omics.</p><p><strong>Lawrence:</strong> So same chip, swap the binding chemistry, get different omics.</p><p><strong>Prateek:</strong> Exactly.</p><div class="callout-block" data-callout="true"><p><em><strong>Sidebar: Omics in one box</strong></em></p><p><em>Proteomics studies the proteins a cell produces. Genomics studies the DNA sequence that tells cells what to produce. Metabolomics studies the small-molecule byproducts of cellular activity. Each has its own class of machines today: Illumina sequencers for DNA, mass spectrometers for metabolites, bead-array platforms like Quanterix for proteins. Combining them into a single box means running the same binding-and-detection physics against antibodies for proteins, oligonucleotide probes for DNA, and aptamers or small-molecule binders for metabolites.</em></p></div><p><strong>Lawrence:</strong> Alright. When someone says omics to the average person, it doesn&#8217;t mean much. Give me the Monday-morning scenario. Five years out, still a tabletop box. I walk up to the machine. What happens?</p><p><strong>Prateek:</strong> Two-microlitre fingerprick. Similar to a glucose test, almost painless.</p><p><strong>Lawrence:</strong> Okay.</p><p><strong>Prateek:</strong> That sample gets partitioned into a few million pixels on the chip. Each pixel corresponds to different DNA or protein targets. Between them, we think we can cover almost the whole genome and proteome from that one drop.</p><p><strong>Lawrence:</strong> And what comes out the other end? I&#8217;m self-administering, I&#8217;m not a clinician. What does it tell me?</p><p><strong>Prateek:</strong> Hopefully nothing.</p><p><strong>Lawrence:</strong> (laughs) Hopefully nothing. That&#8217;s the right answer.</p><p><strong>Prateek:</strong> If it finds something, it doesn&#8217;t say biomarker one and biomarker thirteen are elevated. It talks to your physician. 
Time gets booked for a checkup on kidney function. If it&#8217;s communicable, we can look at whether it&#8217;s spreading between families. If it&#8217;s cancer or Alzheimer&#8217;s, your clinician is notified and works up a plan.</p><p><strong>Lawrence:</strong> So it skips me entirely for anything serious.</p><p><strong>Prateek:</strong> For the clinical stuff, yes. The prediction should be early enough that you have options. We don&#8217;t want the box telling you that you&#8217;re going to die in three days.</p><p><strong>Lawrence:</strong> Right. Like your Apple Watch doesn&#8217;t say you had a bad dream, it says you didn&#8217;t sleep as well as last night. Softer communication layer on top of harder data.</p><p><strong>Prateek:</strong> Exactly that.</p><p><strong>Lawrence:</strong> Okay. That&#8217;s one pathway, the clinical one. There&#8217;s a parallel pathway which is preventative, which I think is underrated. You&#8217;re low in X, eat more beetroot today. I use Zoe in the UK for gut health. Most of the reason people pay for that is to change their diet, to improve biomarkers through food and exercise. This would be another data source feeding that loop.</p><p><strong>Prateek:</strong> True. That&#8217;s the more preventative side.</p><p><strong>Lawrence:</strong> Two pathways, then. Clinical, through the physician. Preventative, through diet and behaviour.</p><p><strong>Prateek:</strong> And drugs are getting more personalised. Earlier detection means softer treatments. Anti-inflammatories instead of chemotherapy and radiotherapy. Pharma can intervene at the point where disease is signal rather than tissue damage.</p><p><strong>Lawrence:</strong> Sadly, I can&#8217;t have conversations any more without thinking like a VC. Here&#8217;s the question. Physics-based amplification replacing noisy wet enzymatic methods is a reasonable first-principles bet. Photonics is racing ahead, the components are getting cheaper and smaller. 
So assume I&#8217;m right, and this is where protein diagnostics should go. How strong is your lead?</p><p><strong>Prateek:</strong> Speed to market. Speed to customer. Owning the IP.</p><p><strong>Lawrence:</strong> Right, those are the basics. Everyone says that. What else?</p><p><strong>Prateek:</strong> The three of us have taken ideas on paper through to products that are FDA-approved and used globally. That combination, having lived the story in complex medical devices and drug discovery, and now bringing it to a research-use and diagnostic platform, that matters.</p><p><strong>Lawrence:</strong> Okay, so team + prior FDA approvals + patents.</p><p><strong>Prateek:</strong> Granted US and Finnish patents, international filings pending.</p><p><strong>Lawrence:</strong> What about the analogy you were using?</p><p><strong>Prateek:</strong> The analogy I keep coming back to is cameras. When I was growing up, you sent film to a lab, chemicals developed it, weeks went by, the prints came back.</p><p><strong>Lawrence:</strong> I remember. Just about.</p><p><strong>Prateek:</strong> Then my first Sony digital camera. Fifty megapixels, instant. The whole world in your hand. Film became a niche almost overnight. Kodak is still a brand, but nobody processes film any more.</p><p><strong>Lawrence:</strong> And Kodak the company basically disappeared.</p><p><strong>Prateek:</strong> Almost. Canon and Fujifilm came in and took the market. That&#8217;s the part we&#8217;ll have to fight off commercially, with IP and being first.</p><p><strong>Lawrence:</strong> Right. Paradigm shift is an overused word. Disruption is an overused word. But specifically applying optics to protein detection, that&#8217;s where the field hasn&#8217;t caught up. 
Most of the talent is still on the chemistry side.</p><p><strong>Prateek:</strong> True.</p><p><strong>Lawrence:</strong> If the advantage sits in optics and photonics, the traditional players have to build a skill set they don&#8217;t currently have. And photonic integrated chips have moved faster in the last five years than most people outside the field realise. That&#8217;s the tailwind.</p><p><strong>Prateek:</strong> Innovations happen in silos. People forget there&#8217;s progress in the room next door that could be applicable to their own field. As someone who&#8217;s worked across semiconductors, photonics, and biology, I&#8217;ve had the chance to see that. You can walk to a photonics lab and understand what&#8217;s possible, then bring it back to biology.</p><p><strong>Lawrence:</strong> That&#8217;s a great line. I might steal that.</p><p><strong>Prateek:</strong> (laughs) It&#8217;s all yours.</p><p><strong>Lawrence:</strong> Alright mate, last one. What does success look like for you? Not trillion-dollar company, I get that one. The thing you actually need to happen in the next two or three years to know you&#8217;re on the right path.</p><p><strong>Prateek:</strong> Adoption by the clinical diagnostic labs, and by researchers finding novel early biomarkers. If those two things happen, we&#8217;ve done our job.</p><p><strong>Lawrence:</strong> Labs and biomarkers. Good. Let&#8217;s hope we actually demonstrate early diagnosis and enable the preventative-health thing that&#8217;s been promised for a decade.</p><p><strong>Prateek:</strong> That&#8217;s the plan.</p><h2>So What?</h2><p>The thesis is simple: photonics are now good enough to beat enzymes for protein diagnostics.</p><p>Seed-stage startup alert, obviously. The Proteins.1 thesis lives or dies on biology. The photonic chip is the easy-ish part. 
The hard part is antibody selection, bead chemistry, and getting a single target molecule to physically find the right bead in a sample that&#8217;s 99.99% not what you&#8217;re looking for. </p><p>Prateek said the quiet part out loud on the call: <em>&#8220;we cannot break the laws of chemistry.&#8221;</em> Which is the single best sentence a photonics-meets-biotech founder can say, because it&#8217;s true. The physics team can keep pushing pixels, the biology team has to push binding kinetics, and binding kinetics doesn&#8217;t benefit from Moore&#8217;s law.</p><p>The other elephant in the room is Theranos. Ambient diagnostics is a decade-overdue vision. Home blood tests were going to change everything in 2015, and instead burned down trust in the whole category. This time the photonic chips actually work, the magnetic-bead chemistry isn&#8217;t being faked, and the instrument is going to clinical labs before pharmacies.</p><p>The path from here is staged: single protein, then a hundred, then the full proteome, then multi-omics. That&#8217;s the play.</p><p>And then there is the Tricorder Vision. Ten years out, the device is around you. Your fingerprints, your breath, the surfaces you touch. If the molecule-agnostic claim holds up, Proteins.1&#8217;s photonic chip becomes the sensor for that ambient layer, which is an order of magnitude bigger than today&#8217;s diagnostics market.</p><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Late Dispatch from 17th April 2026: Half Term man, how do people juggle?]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-67c</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-67c</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 17 Apr 2026 16:00:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GYeH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GYeH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GYeH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!GYeH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!GYeH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!GYeH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!GYeH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9612719,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/194502326?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GYeH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!GYeH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!GYeH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!GYeH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1b7f4968-3fd7-4b23-b670-7a1351d69461_2752x1536.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Morning y&#8217;all, sorry for the delayed issue. I mean, I wonder how many people actually read this as soon as it comes out? If you are, soz. Half term, isn&#8217;t it.</p><p>If you didn&#8217;t see it already, I wrote a thing: <a href="https://stateofthefuture.substack.com/p/the-photonic-foundry-fallacy">https://stateofthefuture.substack.com/p/the-photonic-foundry-fallacy</a></p><p>&#8220;Not bad&#8221; and &#8220;I think you are wrong&#8221; are just some of the many comments I&#8217;ve had, and honestly, in this day and age, I&#8217;ll take it. 
I said the following:</p><ul><li><p><strong>Frontier AI is copper-limited. </strong>Training a frontier model means tens of thousands of GPUs talking to each other, moving petabytes between chips every second. The 1960s-era copper interconnects can&#8217;t do it, too much power, too much heat, not enough bandwidth.</p></li><li><p>Optics have carried data between continents for decades. Now the frontier is light between servers, chips on a board, chiplets inside a package, and eventually on the die itself. Every inward step is harder, which is exactly where copper gives up first, making the optical stack (transceivers, modulators, lasers) the most strategic component in AI infrastructure right now.</p></li><li><p><strong>Incumbents are betting on silicon photonics for sunk cost reasons</strong>. Tower spending $920m to 5x its SiPho wafer capacity. GlobalFoundries bought AMF in Singapore. TSMC tooling co-packaged optics on its 300mm line. SiPho slots into existing CMOS lines, so it&#8217;s the cheapest route to &#8220;we do photonics now&#8221; and the easiest story for investors.</p></li><li><p><strong>Startups are going exotic</strong>. HyperLight and QCi on thin-film lithium niobate (great modulator, no light generation). Ligentec on silicon nitride (ultra-low-loss waveguides, not much else on its own). Smart Photonics on indium phosphide (the only material that generates light natively, also the most expensive and lowest-yield). Three different fabs, all structured around one hero material.</p></li><li><p><strong>Contrarian bet:</strong> the organising principle is integration. No single material can modulate, generate and guide, so the CMOS mental model (pick a material, scale the wafer, drive down cost per die) breaks. Heterogeneous integration all the way down: orchestration, chiplets, manufacturing. <em><strong>He who integrates wins.</strong></em></p></li></ul><p>Give it a read. <strong>Otherwise, onwards as hard as possible, as always.</strong></p><h2>1. 
Anthropic Crosses $30bn ARR. OpenAI Rage-Posts. Now Do Margins.</h2><p>Right, the story of the week. Anthropic&#8217;s ARR crossed $30bn per Bloomberg. OpenAI sits at $25bn. Anthropic&#8217;s own number was $9bn at year-end 2025. That growth curve looks like a ski jump. We wilin&#8217; out over here. 1,000 enterprise customers spending &gt;$1m a year, doubled from 500 in under two months. 8 of the Fortune 10 pay Anthropic. Bloomberg reckons Anthropic spent 4x less than OpenAI to train the models doing this damage, which will do some things to the comparable DCFs if true.</p><p>OpenAI <a href="https://www.axios.com/2026/04/13/openai-microsoft-anthropic-amazon">had a little public meltdown</a>. Their revenue guy ripped Anthropic for &#8220;building a narrative on fear, restriction, and the idea that a small group of elites should control AI.&#8221; Bold move, Cotton. Same day OpenAI told Axios that Microsoft &#8220;was holding them back&#8221; and they&#8217;re cosying up to Amazon now. (Also, for the record, <a href="https://www.cnbc.com/2026/04/16/anthropic-claude-opus-4-7-model-mythos.html">Opus 4.7 shipped this morning</a>, which Anthropic openly says sits below the unreleased Mythos, see Issue #9.)</p><p>But Margins. Nobody is fighting about this publicly but they should be. Top line will keep breaking records. Anthropic&#8217;s a trillion-dollar company in waiting for sure. The open question is whether any of these businesses makes money once the VC subsidy runs out. Training cost advantage is fine but most of the cost is inference, serving, and the electrons, silicon, cooling and data centres you&#8217;ve put them in. Every software engineer on $200/mo Claude Code probably burns through unit economics that would make a SaaS CFO file a missing persons report.</p><p>Every AI story from here is a silicon story, a data centre story, or an energy story. What&#8217;s hard is gross margin, and gross margin is electrons. 
Which brings us nicely to&#8230;</p><p>Source: <a href="https://www.tradingkey.com/analysis/stocks/us-stocks/261756528-anthropic-openai-ipo-tradingkey">Bloomberg</a></p><h2>2. OpenAI Gives Up On Stargate UK, Signs a London Lease Anyway</h2><p>In September OpenAI announced Stargate UK with NVIDIA and Nscale. 8,000 GPUs to start, 31,000 if it went well. Six months later, it&#8217;s been paused. Reading directly from the statement, &#8220;the cost of energy and the country&#8217;s regulatory environment.&#8221; But then they signed an 88,500 sq ft lease in London for an HQ of 500+ people. So not leaving Britain. Just refusing to put compute here. The offices are lovely, it&#8217;s the electrons that are the problem.</p><p>Here is the simplest argument about the next 10 years I can make. Pensions, the NHS, welfare, defence, all of it, is downstream of economic growth. That&#8217;s a Liz Truss argument, sure, but I will raise you. Economic growth is now largely downstream of AI. AI is downstream of compute. Compute is literally downstream of electrons. That means every policy argument a British politician thinks is central: tax, immigration, planning, welfare reform, is actually downstream of &#8220;is our marginal price of a kilowatt hour going down.&#8221; Remarkably simple. Very hard to get politicians to say out loud because it makes most of their existing agenda look like fiddling while Rome burns. Which I can tell you does not make for good baseload.</p><p>OpenAI is just the first big customer voting with its feet on British grid economics. Ireland capped hyperscaler connections in 2022, the Dutch have a megadatacentre moratorium, Texas lives in ERCOT meme-territory. Everyone has made power expensive. Britain is doing it on principle. France and the Nordic hydro economies look smarter every month, which is why, per CNBC, Nscale (Issue #5, $2bn Series C) is still in the Stargate conversation &#8212; Nordic electrons are cheap, British electrons are not. 
When the British champion has to build most of its compute abroad, that&#8217;s the story.</p><p>In <a href="https://stateofthefuture.substack.com/p/a-specific-theory-of-sovereign-ai">A Specific Theory of Sovereign AI</a> last October I argued sovereignty is infrastructure, not models. The harder version six months later: sovereignty is cheap kWh. Everything else is downstream. I mean, I am making the case that energy is destiny. Hardly a new argument, but one that we must remember in the UK and EU, and fast.</p><p>Source: <a href="https://www.cnbc.com/2026/04/09/openai-halts-uk-stargate-project.html">CNBC (Stargate pause)</a> | <a href="https://www.cnbc.com/2026/04/13/openai-london-office-sam-altman-uk-stargate.html">CNBC (London office)</a></p><h2>3. Fusion Is Having A Moment</h2><p>As if I&#8217;d planned it. Fusion is the technology that, if we ever build it properly, makes the energy variable from item 2 go to approximately zero.</p><p>This week Helion&#8217;s (the Sam Altman fusion favourite fwiw) Polaris reactor <a href="https://nationaltoday.com/us/wa/everett/news/2026/04/11/helion-fusion-startup-hits-150-million-degrees-celsius-milestone/">hit 150M&#176;C</a>, roughly 3/4 of commercial operating temperature. Close. Pulsar Fusion (UK) ignited first plasma in Sunbird, their fusion rocket exhaust test rig for deep space propulsion. Yes, mate. This is proper ambitious stuff we should be working on. And ARPA-E committed $135m at the Energy Innovation Summit, the largest single fusion commitment in the agency&#8217;s history. TAE is running site surveys for a first power plant in the US&#8230; I mean, it&#8217;s only a survey, I suppose. But still, it&#8217;s a datapoint!</p><p>Timelines still don&#8217;t match the hype. First plasma is not first power. Say it with me. Sensible MWh of fusion on a grid is a while away, probably early 2030s at the earliest, almost certainly delivered by a 30-year state-backed consortium rather than a venture-backed startup. 
But maybe some of these VC-backed startups become public-private vehicles anyway. If you want the primer, I wrote <a href="https://stateofthefuture.substack.com/p/nuclear-fusion-the-state-of-play">one pre-AI</a> a while back.</p><p>If cheap electrons are the master variable for everything downstream (item 2), fusion is the single most important long-range technology humans are working on, more important than AI or semis or the entire AI-infrastructure gold rush, because all of those will still pay for kilowatt hours. Fusion is the holy grail because it dissolves the master variable.</p><p>Source: <a href="https://nationaltoday.com/us/wa/everett/news/2026/04/11/helion-fusion-startup-hits-150-million-degrees-celsius-milestone/">Everett Today (Helion)</a> | <a href="https://www.euronews.com/next/2026/03/26/world-first-uk-startup-ignites-plasma-inside-nuclear-fusion-rocket-in-major-step-for-space">Euronews (Pulsar)</a></p><h2>4. Jensen on Dwarkesh: &#8220;It&#8217;s A Chip. They Can Make It Themselves.&#8221;</h2><p>And the biggest interview of the week, <a href="https://www.dwarkesh.com/p/jensen-huang">Jensen Huang on Dwarkesh Patel</a>, two hours, mostly about TPU competition, Nvidia&#8217;s supply chain moat, and whether the US should be selling chips to China. The China section ran forty minutes and it did not go well.</p><p>Jensen&#8217;s argument: comparing AI chip export controls to nuclear non-proliferation is &#8220;lunacy.&#8221; Selling Nvidia parts to China is fine because &#8220;we&#8217;re not enriched uranium, it&#8217;s a chip, and it&#8217;s a chip they can make themselves.&#8221; He called export controls a &#8220;loser&#8217;s mentality.&#8221; China already has 60% of global chip manufacturing, 50% of AI researchers, plenty of energy, so blocking them is futile. Dwarkesh, credit to him, pressed on the national security implications, specifically that H20-class compute shortens the offensive-cyber timeline (see Mythos&#8217;s zero-days in Issue #9). 
Jensen got visibly agitated. Agitated-Jensen is not a Jensen people had previously seen.</p><p>Two problems with &#8220;they can make it themselves.&#8221; First, per <a href="https://www.tomshardware.com/tech-industry/semiconductors/chinese-chip-industry-leaders-say-ai-demand-is-straining-equipment-and-talent-supply">Chinese chip leaders in Tom&#8217;s Hardware two weeks ago</a>, China is five to ten years behind on frontier AI chips and short of leading-edge lithography because ASML won&#8217;t sell them EUV. Let alone High-NA EUV. So they can&#8217;t, at scale, for now. Second, the person arguing the controls don&#8217;t work is the person whose biggest growth market is currently locked up by them. The incentive is doing a lot of the talking.</p><p><a href="https://substack.com/@thechipletter/note/c-244828710">The Chip Letter</a> called Dwarkesh excellent and Jensen evasive. Alec Stapp called the offensive-cyber answer misleading. <a href="https://thezvi.substack.com/p/on-dwarkesh-patels-podcast-with-nvidia">Zvi</a> mostly agreed at considerable length, as always.</p><p>My read. Jensen has said he believes in AGI. Many times. In many keynotes. If AGI is real, the scramble for it is wartime, and wartime means picking a side. You don&#8217;t sell to both teams on the way to the singularity. So either Jensen doesn&#8217;t really believe the AGI stuff (it&#8217;s just the keynote line), or he does believe it and is obfuscating because NVDA has a lot of growth priced in and China is the TAM that closes the valuation gap. 
I&#8217;m afraid this is a peacetime CEO in wartime.</p><p>Source: <a href="https://www.dwarkesh.com/p/jensen-huang">Dwarkesh</a> | <a href="https://substack.com/@thechipletter/note/c-244828710">Chip Letter</a> | <a href="https://www.aol.com/articles/jensen-huang-says-lunacy-compare-160228250.html">AOL (lunacy quote)</a></p><p>&#8212;</p><p>Thanks everyone for reading, I appreciate you and you are loved.</p><p>Byeeeeeeeeeeeeee</p><p>If you missed it:</p><ul><li><p><a href="https://stateofthefuture.substack.com/p/a-specific-theory-of-sovereign-ai">A Specific Theory of Sovereign AI</a> &#8212; last October, still right, more so now</p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Photonic Foundry Fallacy]]></title><description><![CDATA[The Biggest Opportunity in Computing]]></description><link>https://stateofthefuture.substack.com/p/the-photonic-foundry-fallacy</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/the-photonic-foundry-fallacy</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Tue, 14 Apr 2026 10:59:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9Q0O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Photonics</strong>. amiright? I&#8217;m not wrong. <em>If</em> it&#8217;s a straight shot to AGI, we&#8217;re going to need a lot more optical bits and pieces, and ideally yesterday. Training a frontier model means tens of thousands of GPUs in the same data centre talking to each other constantly, moving petabytes between chips every second. The wires that have done that job since the 1960s (copper, basically) are running out of road (diligence pending). 
They burn too much power, generate too much heat, and can&#8217;t carry the bandwidth that next-gen AI clusters need.</p><p>Using light to move data isn&#8217;t new. We&#8217;ve pushed photons through optical fibre between continents for decades, because copper falls apart at distance. What&#8217;s changed is how close to the silicon the optics are getting. The boundary has crept inward for years, from undersea cables to cross-campus links to inside the data centre itself. AI is accelerating that inward migration. The frontier now is light between servers in a rack, between chips on a board, between chiplets inside a package, and eventually on the die itself, where photons never have to leave the silicon. Every step inward is harder than the last, because shorter distances mean tighter packaging tolerances and denser interconnects, and that&#8217;s exactly the regime where electrical links are giving up first. Which is why the chips that do the conversion (transceivers, modulators, lasers, the whole optical stack) are probably the most strategic component in AI infrastructure right now.</p><p><strong>Everyone is scrambling to build that photonics stack.</strong> The existing CMOS fabs are doubling down on silicon photonics (SiPho) because it slots straight into the lines they already run, using the same wafers, lithography and packaging tools. It&#8217;s the cheapest route to &#8220;we do photonics now,&#8221; and the easiest story to tell investors. Tower Semiconductor is spending $920 million to 5x its silicon photonics wafer capacity. GlobalFoundries acquired AMF in Singapore. TSMC is tooling up co-packaged optics on its 300mm line. </p><p>The startups are going full disruption. Silicon is the past, they say. For losers, they say. They&#8217;re picking an exotic material and building the fab and tooling around it. 
HyperLight and QCi on thin-film lithium niobate (TFLN), lovely stuff for high-speed electro-optic modulation but no use for generating or detecting light. Ligentec on silicon nitride (SiN), ultra-low-loss waveguides and not much else on its own (though they&#8217;re expanding to multi-platform). Smart Photonics on indium phosphide (InP), the only material that can generate light natively, and also the most expensive and lowest-yield to manufacture. Different strategies but the bet is still a single material.</p><p>Contrarian bet alert. What if they are wrong? I believe most of them are organising around the wrong principle. Single-material fabs are obviously useful, but the mental model imported from CMOS (pick a material, scale the wafer, drive down cost per die) breaks down when no material can do everything you need. </p><p><strong>The organising principle should be integration.</strong></p><p><strong>It&#8217;s heterogeneous integration all the way down, folks</strong>. From the orchestration to the chiplets to the manufacturing. He who integrates wins the AI race. 
(Whadup <a href="https://www.callosum.com/">Callosum</a>)</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9Q0O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9Q0O!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png 424w, https://substackcdn.com/image/fetch/$s_!9Q0O!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png 848w, https://substackcdn.com/image/fetch/$s_!9Q0O!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png 1272w, https://substackcdn.com/image/fetch/$s_!9Q0O!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9Q0O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png" width="1456" height="1601" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1601,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6560601,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/194168177?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9Q0O!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png 424w, https://substackcdn.com/image/fetch/$s_!9Q0O!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png 848w, https://substackcdn.com/image/fetch/$s_!9Q0O!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png 1272w, https://substackcdn.com/image/fetch/$s_!9Q0O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb8b3d0d-6344-4433-8bad-a84f49feb7e1_1952x2146.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Have a gander at that table from Gemini. Lots of letters etc. Doesn&#8217;t matter a great deal, unless you are a massive nerd! The &#8220;key takeaway&#8221; is thus: <strong>No single material appears in every row without a chunky compromise.</strong> That&#8217;s the core of my argument. I will dispense this advice now.</p><h2>I.  Analogy</h2><p>The semiconductor industry has one overwhelmingly successful organising principle: silicon. Start with a silicon wafer, etch transistors into it, scale the process node. Everything in the same material system, on the same wafer, in the same fab. This is the TSMC business model: standardise the platform, designers innovate on top.</p><p>It&#8217;s natural to look at photonics and think the same playbook applies. Pick your champion material. 
Lithium niobate for fast modulators, InP for on-chip lasers, SiN for low loss, silicon for CMOS compatibility. Go deep, go vertical, optimise, and scale. Lovely business model you&#8217;ve got there. It would be a shame if, it was, disrupted&#8230;.</p><p>The analogy relies on a property silicon has that no photonic material does: silicon does everything <em><strong>adequately</strong></em>. It switches, conducts, insulates, and amplifies. Not always best-in-class, but &#8220;good enough&#8221; across every function  so that integration on a single substrate wins on cost and density. Photonics doesn&#8217;t have an equivalent. The physics forbids it. </p><p>You absolutely need III-V semiconductors (indium phosphide, gallium arsenide) to generate light. Can&#8217;t get around it. Silicon has an indirect bandgap; it physically cannot lase efficiently. You need lithium niobate or electro-optic polymers for high-speed, low-loss modulation, because silicon&#8217;s plasma dispersion effect hits a wall around 50&#8211;60 GHz. You need silicon nitride for ultra-low-loss waveguides. You need the OG semiconductor, germanium, for detection. That&#8217;s five or six materials for a basic photonic link: generate, modulate, route, and detect.</p><p>The closest thing to a photonic silicon is probably indium phosphide, which can lase, modulate, route, and detect on one substrate. Smart people argue it&#8217;s the true platform play. But InP&#8217;s waveguide losses are 10&#8211;100x worse than SiN, its wafers top out at 6 inches (for now, tbf), and its fabrication costs make it uneconomic for high-volume applications.</p><p>The obvious move would be to scale it: build 12-inch InP and watch the cost problem disappear. People try. InP is grown using a process called liquid-encapsulated Czochralski, which means dipping a seed crystal into molten indium phosphide and slowly pulling it upward while it rotates, letting a single-crystal cylinder (the "boule") grow downward from the seed. 
Defect density rises sharply with diameter, indium itself is expensive and you need more of it per wafer, and the entire fab equipment ecosystem is built for &#8804;6 inch with no volume demand to justify retooling. The 8-inch InP roadmap has been &#8220;two years away&#8221; for well over a decade. The constraint is crystal physics.</p><p>This isn&#8217;t a temporary problem waiting for better engineering, either. Silicon will never have a direct bandgap. Lithium niobate will never absorb light efficiently for detection. These are properties of the crystal structure. You can&#8217;t just throw money at it.</p><h2>II. Multi-Materials</h2><p>Alright hands up. You got me. It&#8217;s not quite as binary as I&#8217;ve suggested so far. Single-material cathedral vs. material-agnostic platform is obvs too clean. Nobody serious is truly single-material anymore. Tower does Si + Ge + SiN on its line. GlobalFoundries does similar. Saying the industry is stuck on single materials makes for a great old versus new story, but is a bit crude.</p><p>The real question is subtler as always: is multi-material capability a side project bolted onto a silicon photonics line, or is it the organising principle the facility is designed around? Most of the industry is on the first path. Tower&#8217;s adding materials incrementally. GF acquired AMF for silicon photonics capacity with some SiN capability attached. TSMC is extending its 300mm CMOS line to handle photonics. In each case, the foundation is silicon and other materials get added where customers demand them.</p><p>I&#8217;m arguing for the second path. A facility where the core competence is the process of combining materials, where the equipment decisions, the engineering hires, and the IP strategy are all organised around integration. A handful of European foundries (LIGENTEC, CSEM, and others) are getting closest, offering multi-material photonic services within a single fab ecosystem. But these are exceptions. 
In most cases, multi-material is a research capability grafted onto a production line that was designed for something else.</p><p>The gap between demonstrating multi-material integration in a lab and offering it as a reliable, repeatable manufacturing service is vast. I know that. But closing that gap is the opportunity.</p><h2>III. Coupling</h2><p>The silicon photonics camp argues the gap doesn&#8217;t matter. They say silicon is already shipping in volume, integration is a problem you can solve later, and one material plus a few external bolt-ons is good enough for the next decade of AI bandwidth.</p><p>Maybe? Intel ships millions of pluggable transceivers, the optical modules that slot into a switch port and convert electrical signals into light for transmission over fibre. Broadcom&#8217;s Bailly and Tomahawk 5 use silicon photonic engines. Cisco, Marvell, and a dozen others have silicon photonic products in hyperscale data centres right now. They bolt on an external III-V laser, accept the coupling loss, and move on. Good enough, we&#8217;ve got datacentres to build and tokens to serve, I don&#8217;t care about your 3 year roadmap to MVP.</p><p>Today, sure. But what about three years from now? Every external laser needs active alignment (expensive), discrete packaging (bulky), and optical coupling that burns 1&#8211;3 dB at every interface where light has to hop between chips. That doesn&#8217;t sound like much, but optical power budgets in a transceiver are tight. A few dB here, a few there, and you&#8217;ve smashed through the headroom that determines whether the link closes at all.</p><p>At today&#8217;s 800G per lane, the bolted-on architecture has enough margin to absorb those losses. At 3.2T per lane, where AI interconnects are heading by 2028, it doesn&#8217;t. 
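</p><p>A toy link-budget calculation makes the dB arithmetic concrete. Every number below is an assumption for illustration (laser power, receiver sensitivity, per-interface coupling penalties), not from any datasheet:</p>

```python
# Toy optical link budget. All figures are illustrative assumptions,
# not vendor numbers: the point is how fast per-interface dB losses
# eat a fixed power budget.

def link_margin_db(laser_dbm, rx_sensitivity_dbm, losses_db):
    """Margin left after subtracting every loss from the power budget."""
    budget = laser_dbm - rx_sensitivity_dbm
    return budget - sum(losses_db)

# Bolt-on architecture: light hops between separately packaged chips,
# paying an assumed 2 dB coupling penalty at each of three interfaces.
bolt_on = link_margin_db(
    laser_dbm=10.0,
    rx_sensitivity_dbm=-8.0,
    losses_db=[2.0, 2.0, 2.0,  # chip-to-chip couplings
               3.0,            # modulator insertion loss
               2.0, 1.0],      # fibre coupling + propagation
)

# Monolithic integration: the chip-to-chip hops disappear entirely,
# everything else in the budget stays the same.
monolithic = link_margin_db(10.0, -8.0, [3.0, 2.0, 1.0])

print(bolt_on, monolithic)  # 6.0 12.0
```

<p>Same laser, same receiver, double the headroom, just by deleting interfaces. Shrink the margin a few dB per bandwidth generation and the bolted-on link stops closing first. 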
Every dB lost at a coupling interface is a dB you can&#8217;t recover at the receiver, and at that bandwidth you don&#8217;t have any to spare.</p><p>You might also argue that photonics should follow the semiconductor playbook of specialisation: logic fabs, memory fabs, analog fabs, all separate businesses. Five excellent single-material foundries, each mastering one function, assembling the results into a module at the end. The problem is the same: coupling loss.</p><p>This is where electronic and photonic integration diverge. In electronics, you can wire-bond or bump-bond a logic die to a memory die and lose basically nothing. Electrons don&#8217;t care too much about interfaces. Photons care so much. Every time light crosses from one separately fabricated chip to another, you lose signal. The only way to hit the loss budgets that next-generation AI interconnects will demand is monolithic integration: bonding or depositing different materials on the same substrate so light never has to leave the waveguide.</p><p>So, the thing is, you are gonna need new materials, someday soon. </p><h2>IV. PDK </h2><p>As we all probably know, TSMC&#8217;s moat is the design ecosystem, not really the fab. Well obviously unbelievable quality, sure that&#8217;s a given. But also the PDK, the IP libraries, the EDA tool partnerships, the thousands of engineers who know how to design for TSMC processes. When a designer starts a project, the first decision is which foundry PDK to target. Once that decision is made, switching costs are enormous. The fab matters, obviously, but the design infrastructure is the moat.</p><p>Photonics has single-material PDKs today. Tower has one, GlobalFoundries has one, imec&#8217;s iSiPP platform has one. If you&#8217;re designing a silicon photonics chip, you can simulate it, lay it out, and send it to fab with <em>reasonable</em> confidence that what comes back will work. 
These PDKs are the reason silicon photonics has customers and revenue right now, and they&#8217;re a genuine competitive advantage that integration-first startups don&#8217;t have.</p><p>What nobody has is a multi-material PDK. There&#8217;s no design kit that lets you simulate a III-V laser bonded to a TFLN modulator on a SiN interposer. No simulation tool that handles the optical, thermal, and mechanical interactions between heterogeneously integrated materials in a single model. No design rules for multi-material alignment tolerances, bonding interface properties, or cross-material coupling. None of this exists.</p><p>Whoever builds it creates lock-in that dwarfs anything in single-material photonics. If you&#8217;re the first foundry with a multi-material PDK that designers can actually use, with EDA tool support and IP libraries for common building blocks, then every designer who targets your platform is stuck. Training, validated designs, IP, all tied to your process. That&#8217;s the real TSMC analogy, and it&#8217;s a stronger moat than the fab itself. I find it sort of astonishing that this doesn&#8217;t come up more in foundry pitches. I&#8217;ve seen many a photonic foundry deck in the last two years and only a few mentioned design infrastructure at all.</p><p>The tricky bit is building a multi-material PDK. You need experimental data on every material combination, every bonding process, every interface. You need to validate the PDK against real fabrication results across multiple process runs. This takes years, not months, and the capex is higher than *most* VCs want to hear about. But a validated multi-material design ecosystem compounds over time in a way that fab capacity alone never does.</p><h2>V. Packaging</h2><p>If you want a semiconductor analogy for photonic integration, advanced packaging is tempting. ASE, Amkor, and JCET take finished dies from different foundries and assemble them into working systems. 
TSMC&#8217;s CoWoS division has become arguably its most strategically important capability (it assembles Nvidia&#8217;s AI chips). The packaging house doesn&#8217;t need to understand transistor physics; it needs to understand how to put different things next to each other and make them work. Photonic integration looks like the same problem.</p><p>But the economics. Woof, less than ideal. ASE&#8217;s net margin is about 7%. TSMC&#8217;s is about 40%. The most valuable company in the semiconductor supply chain is a single-material fab, not a packaging house. If photonic integration maps to electronic packaging, I&#8217;m inadvertently arguing for the business with the worst economics in the industry.</p><p>I think photonic integration is genuinely different from electronic packaging, but it is a bet. Electronic packaging is assembly: bonding finished dies onto substrates with solder bumps and redistribution layers. Relatively standardised steps, thin IP layer, and even thinner margins. Photonic integration is process engineering at the material level: epitaxial bonding of III-V films, TFLN thin-film deposition, sub-micron waveguide alignment across material interfaces. The IP is in the process recipes. That&#8217;s closer to TSMC&#8217;s process IP than ASE&#8217;s assembly IP. But nobody has proven this can command TSMC-like margins in practice. Until someone builds a commercial multi-material photonic line and shows the gross margins, the packaging analogy and its 7% margins remain the base case. The truth is that photonic integration might be a genuinely new category we don&#8217;t have a template for yet.</p><h2>VI. Opportunity</h2><p>The multi-material photonic foundry doesn&#8217;t exist yet. Not really. There are research lines at MIT, at imec, at CSEM, at a handful of European institutes. Startups nibbling at pieces of it. But nobody has capitalised a facility whose singular mission is material-agnostic photonic integration at scale. 
Maybe because it&#8217;s just too hard to say?</p><p>Why has nobody built this yet? (Such a VC now, sad state of affairs) First, it&#8217;s pretty hard; combining materials with different thermal budgets, different lattice constants, and different processing chemistries on a single line is just a hard thing to do. Second, the market hasn&#8217;t demanded it yet; most photonic products today are simple enough to get away with one or two materials. Third, the VC and government funding models default to the CMOS analogy. &#8220;We&#8217;re building the TSMC of lithium niobate&#8221; makes intuitive sense to investors. &#8220;We&#8217;re building a material-agnostic integration facility with a multi-material PDK&#8221; is a harder sell.</p><p>I see a few ways this could play out.</p><ol><li><p>A single-material incumbent (Tower, GlobalFoundries) makes a strategic decision to treat integration as its organising principle, hires the process engineers, invests in the bonding and deposition capabilities, builds the multi-material PDK.</p></li><li><p><em><strong>A well-capitalised startup greenfields the whole thing. Hard to fund, but classic first principles thinking. Why not put the servers in space, Elon-style thinking.</strong></em></p></li><li><p>An advanced packaging company (ASE, Amkor) extends into photonics.</p></li></ol><p>We can assign probabilities to each scenario, 25% this and 10% that, but, I&#8217;m a VC, even if there is a 2% probability of scenario 2, the outcome is so large, it&#8217;s worth the bet. And come on, with AGI just sitting there waiting to be grabbed, let&#8217;s try and win shall we? Why should ASE and Tower have all the fun? </p><p>And beyond 2, even if 1 or 3 plays out, there are a ton of huge new opportunities. The process IP for bonding III-V films to silicon wafers. 
Deposition recipes for exotic thin films: Aluminium scandium nitride (AlScN) can be sputtered directly onto silicon at back-end-of-line temperatures, giving you a TFLN-class modulator without the exotic substrates. Diamond-on-X for depositing diamond thin films onto silicon or SiN. Simulation tools for heterogeneous optical systems. PDK components, micro-transfer printing. Maybe start in one of them and vertically integrate over time?</p><h2>VII. Integration</h2><p>The race to build photonic manufacturing capacity is real, and accelerating fast. Nvidia&#8217;s $4 billion in Lumentum and Coherent. Tower&#8217;s $920 million silicon photonics capacity expansion. GlobalFoundries acquiring AMF. TSMC tooling up its 300mm line. Billions flowing into photonic manufacturing in real time.</p><p>Most of that capital is going into single-material capacity. I think the value will go elsewhere: in the integration processes, the multi-material PDK, and the design ecosystem that will eventually lock in photonic designers the way TSMC&#8217;s ecosystem locks in chip designers today. But the timing is uncertain. The current market can mostly get by with silicon photonics plus an external laser. The coupling loss wall that forces monolithic integration might be two years away or seven.</p><p>So if you&#8217;re building or investing in photonic manufacturing, ask yourself: is this facility organised around a <em>material</em> or around a <em>process</em>? And if it&#8217;s a process, is there a design ecosystem that creates lock-in? Material without integration = component supplier. Integration without a PDK = custom shop. Integration with a design ecosystem = platform. I keep coming back to this hierarchy when I look at photonics pitches.</p><p>Could I be wrong about all of this? Obviously. Maybe silicon photonics will power through the scaling wall the way CMOS always has. Maybe InP&#8217;s wafer economics will improve enough that the Swiss Army knife works after all. 
Maybe the specialisation model (five great single-material fabs, assemble at the end) will find ways to manage coupling loss that I&#8217;m not seeing. Probably not. But possibly.</p><p>Either way, I&#8217;m looking for the founders who understand the integration problem. If you are in Europe, even better. If you are in the UK, WhatsApp me. If you are in the Brighton and Hove local area, meet me at Taith Coffee on the High Street today at 13:00. IYKYK.  If you&#8217;re building a photonic foundry organised around a process rather than a material, designing a PDK that spans III-V and silicon and SiN, or attacking coupling loss at the monolithic level before the bandwidth wall gets there first, I want to talk.</p><p>Let&#8217;s get at it. lawrence@cloudberry.vc.</p><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Dispatch from 11th April 2026: AI Now Behind Closed Doors]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-551</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-551</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 10 Apr 2026 11:41:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OPfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OPfy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!OPfy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!OPfy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!OPfy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!OPfy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OPfy!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png" width="1200" height="669.7674418604652" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:2916887,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/193784653?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OPfy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!OPfy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!OPfy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!OPfy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a6aa510-4612-4305-8f64-ba61aea6217c_1376x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Welcome to all the new subscribers. Come for the neuroscience, stay for the.. Mainly AI at this point. AI and semi. Future of work adjacent.</p><p>Re this whole AI thing then yes. Hype? Appreciate lots of nuanced views out there about capital bubbles, the Carlota Perez framework, railway buildouts etc. Nothing new to see here. Hypers gonna hype. This time is never different. All General Purpose Technologies have a similar shape to AI. It&#8217;s just chatbots. <strong>$650 billion+</strong> in datacentre capex this year, OpenAI at an <strong>$852 billion</strong> valuation despite only shipping its first product in 2022. Chatbots. And they hallucinate. Can&#8217;t be used in real world. Etc.</p><p>Also The Big Short Guy is smart, he&#8217;s onto something about depreciation. 
We all saw the recursive funding image. Sure smells funky right?</p><p>Well as you know, I am indeed scale-pilled. But you can&#8217;t see what happened this week and continue to hold the &#8220;it&#8217;s probably fine&#8221; attitude. I will stick my neck out, right now, and make it clear if I haven&#8217;t already: all of this capex will be used. And we aren&#8217;t even building enough. It is 100% not a bubble. </p><p>Anthropic built a tool that is better than every cybersecurity company on earth. It found and exploited a 17-year-old vulnerability in FreeBSD that every security team on earth missed. For $50 in API costs. It found thousands of zero-days across every major operating system and every major web browser. Thousands. A 27-year-old OpenBSD bug. In a few weeks, one model did what the entire cybersecurity industry couldn&#8217;t do in decades.</p><p>And it&#8217;s not a cybersecurity company.</p><p>That&#8217;s a market worth <strong>$250 billion</strong>. Just totally p*wned.</p><p>And it just hit <strong>$30 billion in annualised revenue</strong>. Up from $9 billion 4 (FOUR) months ago. And what, you think, that&#8217;s probably it? These models are going to run out of steam? And this is still hype?</p><p>More than that, Anthropic built something and then they decided not to release it. Too dangerous. Which, if you think about it for more than thirty seconds, means Anthropic just outcompeted every cybersecurity company in existence and then immediately became the gatekeeper of who gets to use that capability. CrowdStrike. Palo Alto Networks. The entire $250 billion security industry. Outperformed by a language model that wasn&#8217;t even specifically trained for security, it just got good enough at code and reasoning and the rest followed. The capabilities emerged as a &#8220;downstream consequence of general improvements.&#8221; Not a research programme in sight. A side effect.</p><p>So no, I don&#8217;t think this is hype. 
I think we&#8217;re now in the part where the best frontier capabilities stay inside the labs and get deployed through consortiums rather than released to the public. Or not through consortiums at all, once the Chinese labs get on it.</p><p>Do you see it yet? If you can build or use something like Claude Mythos, offensive and defensive cyber warfare now depend on your frontier AI capabilities and your ability to build datacentres. If this isn&#8217;t a national security emergency yet, then I guess we have to wait for the inevitable cyber attacks on critical infrastructure before we wake up.</p><p>Anyway, onwards.</p><h2>1. Anthropic Builds a Model Too Good to Release</h2><p>So yes, Claude Mythos Preview. That&#8217;s what they&#8217;re calling it. On Monday Anthropic launched Project Glasswing, a consortium of all the big names, Apple, AWS, Microsoft, Google, CrowdStrike, NVIDIA, <strong>JPMorgan?</strong>, the Linux Foundation, and gave them access to a model they won&#8217;t give to anyone else. $100 million in usage credits and $4 million to open-source security orgs. It&#8217;s the first time in roughly seven years that a leading AI company has published a System Card for a model without making it generally available.</p><p>Opus 4.6 had a near-0% success rate at autonomous exploit development. Mythos achieved 181 working Firefox exploits versus 2, which isn&#8217;t really an improvement so much as whatever comes after improvement. Greg Kroah-Hartman from the Linux kernel team said &#8220;something happened a month ago, and the world switched. Now we have real reports&#8221; rather than low-quality AI noise. The FreeBSD exploit, a 20-gadget ROP chain split across six sequential RPC requests, worked fully autonomously without any human guidance.
You might argue that, with a different objective, this would be the first autonomous weapon we&#8217;ve created.</p><p>But less than 1% of the vulnerabilities Mythos found have actually been patched, and 8 out of 8 tested models detected the FreeBSD exploit, including one tiny model at 3.6 billion parameters costing eleven cents per million tokens. So maybe every decent model can already find bugs faster than humans can fix them and we just haven&#8217;t been looking. Picus Security called it the Glasswing Paradox, your best defensive tool is also the thing most likely to break you, and honestly i don&#8217;t know where that leaves us except hoping that the people doing the patching move faster than the people who won&#8217;t bother with a consortium. We&#8217;ll see.</p><p>Remember Issues #4 and #5? The company the Pentagon designated a &#8220;supply chain risk&#8221; just proved it could break the government&#8217;s own infrastructure. One model generation. Shit.</p><p>Source: <a href="https://red.anthropic.com/2026/mythos-preview/">Anthropic Red Team Blog</a> | <a href="https://www.anthropic.com/glasswing">Project Glasswing</a> | <a href="https://simonwillison.net/2026/Apr/7/project-glasswing/">Simon Willison</a></p><h2>2. Meta Goes Closed. Mark Zuckerberg Discovers Intellectual Property.</h2><p>Mark is back. Meta shipped Muse Spark on Tuesday. Natively multimodal, three reasoning modes (Instant, Thinking, Contemplating), built by Alexandr Wang&#8217;s Meta Superintelligence Labs after nine months rebuilding the AI stack from scratch following the $14.3 billion Scale AI deal.</p><p>The weights aren&#8217;t available though, and neither is the architecture or the training methodology, which makes this Meta&#8217;s first proprietary model. The company with 1.2 billion Llama downloads, a million a day, just shipped a closed model because Llama 4 flopped last April and Chinese open-weight models captured 41% of HuggingFace downloads versus 35% for US models.
Turns out giving away your best work for free doesn&#8217;t generate the API revenue you need when you&#8217;re guiding $115-135 billion in capex. Who knew.</p><p>Zuckerberg said they &#8220;hope to open-source future versions.&#8221; Hope. The community noticed the word choice. It&#8217;s a war, Mark; you might want to start protecting the weights. Open-source hippies can go back to the 90s.</p><p>Meta already gave Llama to the Pentagon and NATO, but open weights meant everyone could use them, China included, and going closed gives you the kind of export control leverage you can actually trade on in Washington. A seat at the table that Anthropic is currently being kicked from. Muse Spark is ranked 4th on Artificial Analysis, just behind Claude Opus 4.6 at 53 (Gemini 3.1 Pro and GPT-5.4 both at 57), so not quite frontier, but 3 billion users on day one. And the thought compression technique, 10x less compute than Llama 4 Maverick for equivalent capability: if that holds up under independent testing it&#8217;s actually massive. But Meta&#8217;s track record on benchmark claims is, uh. Anyway.</p><p>Source: <a href="https://www.humai.blog/meta-released-muse-spark-its-first-closed-model-here-is-what-it-gave-up-to-do-it/">HumAI</a> | <a href="https://simonwillison.net/2026/Apr/8/muse-spark/">Simon Willison</a></p><h2>3. Your AI Agent Can Now Spend Your Money</h2><p>More on agents. Because if they can find exploits and zero-days, why not give them wallets to pay for stuff too?! Nevermined integrated Visa Intelligent Commerce with Coinbase&#8217;s x402 protocol, which means AI agents can now autonomously purchase things with your Visa card on the internet, and yes i am aware of how that sounds.</p><p>x402 uses the HTTP 402 &#8220;Payment Required&#8221; status code that&#8217;s been sitting there unused since literally forever. Stablecoins, USDC and EURC, on Base, Solana, Polygon, with 50 million transactions processed since launching in May 2025. Now it plugs into Visa.
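</p><p>The 402 handshake can be sketched from the agent&#8217;s side. A minimal illustration only: the header name, quote fields, and settlement step below are assumptions for the sketch, not the actual Coinbase x402 or Nevermined API.</p>

```python
# Hypothetical sketch of an x402-style client loop. Field and header
# names are illustrative assumptions, not the real x402 spec.

def fetch_with_payment(get, pay, url, budget_usd):
    """Request a resource; if the server answers 402, settle and retry."""
    status, body = get(url, headers={})
    if status != 402:
        return body  # resource was free, nothing to settle

    quote = body  # assume the 402 response body carries the payment quote
    price = quote["amount_usd"]
    if price > budget_usd:  # the guardrail: never exceed the agent's budget
        raise RuntimeError(f"quote ${price} exceeds budget ${budget_usd}")

    receipt = pay(quote)  # settle in stablecoin, get back a payment proof
    status, body = get(url, headers={"X-Payment": receipt})
    if status != 200:
        raise RuntimeError(f"paid but fetch still failed: {status}")
    return body
```

<p>The shape is the whole point: payment is just a retry loop around a status code, which is why merchants can keep whatever payment rails they already have behind it.</p><p>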
You register a card, set guardrails (budget limits, per-purchase caps, merchant restrictions, time windows), and your agent goes shopping while merchants receive payments through Stripe or whatever they already use, no new infrastructure required.</p><p>Nevermined&#8217;s session-based credits system lets agents burn prepaid credits in real time as they consume resources, like LLM tokens but for commerce, with transactions as low as $0.001 settling in under 200 milliseconds on Base. Which means agents can run persistent shopping sessions with streamed access to services.</p><p>i know what you&#8217;re thinking. <em>&#8220;Agents buying things autonomously, what could go wrong.</em>&#8221; And yes, vibes-based security from Issue #5 is now vibes-based commerce. But think about what this actually unlocks, the entire long tail of digital services, API access, dataset queries, articles behind paywalls, all the stuff that agents currently can&#8217;t reach because there&#8217;s no payment mechanism. That bottleneck is now solved, sort of. The guardrails exist on paper and we&#8217;ll see how they hold up when someone&#8217;s agent burns through a $500 budget in three minutes buying API calls for a task that went sideways. Not that i would know anything about runaway Claude Code costs. Definitely not twice.</p><p>Source: <a href="https://cryptobriefing.com/ai-payment-integration-visa-nevermined/">Crypto Briefing</a> | <a href="https://nevermined.ai/">Nevermined</a></p><h2>4. MCP v2.1 and the Linux Foundation</h2><p>Platform shifts generally produce a decade-long standards war where three or four protocols fight it out and the ecosystem fragments and everyone picks a side and writes angry blog posts and then eventually one wins but only after wasting years of developer time? MCP just skipped all of that. Everyone just agreed to do the same thing. 
Anthropic, OpenAI, Google, Microsoft, Amazon, and the Linux Foundation&#8217;s Agentic AI Foundation are now governing both MCP and A2A, the foundation having been co-founded by basically all of them in December 2025. Microsoft shipped Agent Framework 1.0 on top of it. v2.1 adds Server Cards so servers can advertise their capabilities without you having to connect to them first, which sounds boring until you realise it&#8217;s the thing that makes agent-to-agent discovery actually work at scale.</p><p>i use MCP every day, it connects Claude Code to my Granola notes, my email, my CRM, my calendar, and i genuinely don&#8217;t think about it anymore. Until I tried to connect my 3 Superhuman accounts, and now it&#8217;s a cluster for some reason. OAuth is like the worst. But aside from that, when infrastructure disappears into the background it means it&#8217;s working, and when it&#8217;s working it means people build on top of it without asking permission, and when they do that you get 10,000+ servers in the ecosystem and then it&#8217;s too late for anyone to propose an alternative. Network effects in protocols are brutal once they tip, and this one tipped before it became a race.</p><p>But remember, 97 million monthly downloads is also 97 million potential attack surfaces, and 36% of MCP servers were vulnerable to SSRF when we covered ClawHub in Issue #5. We&#8217;re building the agent economy&#8217;s entire commercial and operational layer on top of protocol infrastructure that nobody has seriously stress-tested for adversarial use. (Maybe Mythos can help?) It&#8217;s load-bearing software with vibes-based security. But also it works and it&#8217;s free and everyone is using it, so here we are. As per.</p><p>Source: Linux Foundation | <a href="https://github.blog/open-source/maintainers/mcp-joins-the-linux-foundation-what-this-means-for-developers-building-the-next-era-of-ai-tools-and-agents/">GitHub Blog</a></p><p>&#8212;-</p><p>Thanks again for all your hard work. I appreciate you.
If you missed it from earlier this week. </p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e7d9fcfd-e01f-4f4c-8e04-5b2e7bf3a0dc&quot;,&quot;caption&quot;:&quot;Heterogeneous integration, chiplets, and the hardware bottleneck in neurotechnology with Dorian Haci from MintNeuro&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Chips for the Brain: Modular Semiconductors for Neural Interfaces&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:20895912,&quot;name&quot;:&quot;Lawrence Lundy-Bryan&quot;,&quot;bio&quot;:&quot;State of the Future helps founders, investors, and corporate leaders identify and act on emerging technology opportunities. We deliver insights, advisory, and investments to surface and back high-potential ideas before they become obvious.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!q-AC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd406f502-b292-4c00-8f8d-ba31c53a3a90_1620x1080.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-04-07T11:31:52.131Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!Sf4X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://stateofthefuture.substack.com/p/chips-for-the-brain-modular-semiconductors&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:192209354,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:6,&quot;comment_count&quot;:1,&quot;publication_id&quot;:221837,&quot;publication_name&quot;:&quot;State of the 
Future&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!juWT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb4af7d5-8416-4e8d-bc5b-2763f055ed16_800x800.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div>]]></content:encoded></item><item><title><![CDATA[Chips for the Brain: Modular Semiconductors for Neural Interfaces]]></title><description><![CDATA[Heterogeneous integration, chiplets, and the hardware bottleneck in neurotechnology with Dorian Haci from MintNeuro]]></description><link>https://stateofthefuture.substack.com/p/chips-for-the-brain-modular-semiconductors</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/chips-for-the-brain-modular-semiconductors</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Tue, 07 Apr 2026 11:31:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Sf4X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Sf4X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Sf4X!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png 424w, 
https://substackcdn.com/image/fetch/$s_!Sf4X!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png 848w, https://substackcdn.com/image/fetch/$s_!Sf4X!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png 1272w, https://substackcdn.com/image/fetch/$s_!Sf4X!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Sf4X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png" width="1456" height="1865" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1865,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7831599,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/192209354?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Sf4X!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png 424w, https://substackcdn.com/image/fetch/$s_!Sf4X!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png 848w, https://substackcdn.com/image/fetch/$s_!Sf4X!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png 1272w, https://substackcdn.com/image/fetch/$s_!Sf4X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7cc5c36d-17bc-4023-8ed6-cd75b128b40b_1824x2336.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Heterogeneous integration, chiplets, and the hardware bottleneck in neurotechnology with Dorian Haci from MintNeuro</p><p>The neural interface industry has a hardware problem. Companies building brain-computer interfaces, whether they&#8217;re reading neural activity to help paralysed patients move or writing electrical signals to treat Parkinson&#8217;s, are stuck using off-the-shelf chips and bulky monolithic ASICs designed 20 years ago. The electronics inside most implantable devices are too big, too power-hungry, and too slow to develop. Until the hardware catches up, the field can&#8217;t scale.</p><p>MintNeuro, an Imperial College London spin-out, is trying to fix that. The company designs modular semiconductor chips specifically for neural interfaces. Not brain-computer interfaces themselves, but the underlying silicon that makes them work. If Neuralink or Blackrock Neurotech are the car manufacturers, MintNeuro wants to be the engine supplier. They&#8217;ve taped out over 40 chips in 15 years of R&amp;D, and their bet is that function-specific Lego blocks (sensing, stimulation, processing, power management) that snap together into application-specific systems can dramatically reduce cost and time-to-market for the whole industry.</p><p>If you&#8217;ve been following this series, the underlying thesis will sound familiar. Synthara showed us that the real constraint in AI chips isn&#8217;t compute, it&#8217;s data movement. Phanofi demonstrated that coherent optics can solve chip-to-chip communication, but only by working with existing foundry processes. 
Pragmatic made the case for flexible ICs on mature nodes unlocking new form factors. The thread is always the same: the semiconductor industry&#8217;s next decade isn&#8217;t about smaller transistors. It&#8217;s about integration and packaging. MintNeuro takes that principle into perhaps the most demanding environment possible: inside the human body.</p><p>I spoke to Dorian Haci, MintNeuro&#8217;s CEO and co-founder, about why brain chips don&#8217;t need 3-nanometre processes, what cardiac monitors can teach us about scaling implantables, and why the real bottleneck is getting electronics small enough to sit next to your nervous system without killing you.</p><h2>What Did I Learn?</h2><ol><li><p><strong>Miniaturisation, not surgery, is the barrier to scaling neural interfaces.</strong> The Medtronic cardiac monitor went from 100 implants per year to nearly a million through progressive miniaturisation until the device became injectable. The constraint was never patient willingness. It was hardware size.</p></li><li><p><strong>Brain chips don&#8217;t need cutting-edge process nodes.</strong> Counterintuitive, but the functions involved (amplification, filtering, ADC, stimulation) don&#8217;t require billions of transistors. They require low power, low heat, and small form factor, all achievable on mature 65nm+ nodes at lower cost and better yields.</p></li><li><p><strong>The moat is integration, not individual chip design.</strong> Modular chips are the product; heterogeneous integration and system-in-package capabilities are the business. Combining function-specific chiplets into miniaturised, biocompatible systems is where value compounds.</p></li></ol><h2>The Interview</h2><p><strong>Lawrence:</strong> You&#8217;re not building brain-computer interfaces. You&#8217;re building chips for the companies that build them. What does MintNeuro actually do?</p><p><strong>Dorian:</strong> We&#8217;re designing semiconductor technologies for neural interfaces. 
We&#8217;ve been at this for 15 years as a spin-out from Imperial College London, and during that time we&#8217;ve taped out over 40 different chips specifically for neurotech. Our differentiation is a modular approach. We develop chips optimised for a specific function: sensing, stimulation, processing, power management, wireless communication, safety. That&#8217;s different from the industry norm, where one monolithic chip tries to do everything for one application.</p><p><em><strong>[Sidebar: What is mixed-signal chip design?]</strong></em></p><p><em>The brain is analog, computers are digital. Mixed-signal chips bridge the two. Most chips readers will be familiar with (CPUs, GPUs, memory) are purely digital: they shuffle zeros and ones. Mixed-signal is a different discipline entirely. You&#8217;re designing circuits that handle continuous voltage variations, microvolts from neurons, alongside discrete digital logic. It&#8217;s closer to RF engineering than to what NVIDIA does. There are maybe a few hundred people in the world who can design mixed-signal ASICs for biomedical applications. This is partly why the neurotech industry is stuck on 20-year-old electronics: the talent pool for this specific intersection of skills barely exists.</em></p><p><strong>Lawrence:</strong> Help me with the distinction between a function and an application.</p><p><strong>Dorian:</strong> An application is the medical use case: monitoring neural activity for epilepsy, stimulating for Parkinson&#8217;s, closing the loop to stop seizures. The function is what we focus on: just recording electrical activity, or just stimulating, or just processing. I think of them as Lego blocks. Each block has a colour and a shape. When you combine them into structures, you create something optimised for the application. The system is application-specific, not the individual chip.</p><p><strong>Lawrence:</strong> Right. 
So it&#8217;s like a GPU speeding up matrix multiplication, which can serve gaming or AI training. You optimise recording, which can serve epilepsy or Parkinson&#8217;s or dementia.</p><p><strong>Dorian:</strong> Exactly. And the two things our partners care most about are cost and time to market. Medical devices take forever to reach patients because of regulatory approvals, reimbursement, all of it. Our modular library lets companies combine existing chips much faster than developing a full ASIC from scratch.</p><p><strong>Lawrence:</strong> Let&#8217;s get into the technology. A brain-computer interface reads what&#8217;s going on in your brain and writes to it. What does the read pipeline look like?</p><p><strong>Dorian:</strong> The brain produces tiny electrochemical signals with enormous noise around them, from movement, external devices, everything. First you need amplification to pick up those tiny variations. Then filtering to remove the noise. Then analog-to-digital conversion. That whole front end is critical, because if you&#8217;re not capturing actual information from the neural activity, whatever processing you do afterwards is useless. You have data but not information.</p><p><strong>Lawrence:</strong> You&#8217;ve got a good analogy for why that matters.</p><p><strong>Dorian:</strong> Think of the brain as a football stadium during a match. EEG electrodes on the outside of the skull are like microphones placed 100 metres from the stadium. You can hear noise, people shouting, but you can&#8217;t tell the score. Consumer headsets with two or three electrodes are doing exactly that. Throw the data into AI, it can&#8217;t do much because you don&#8217;t have the information. You need microphones inside the stadium, close to the players. That means implantable devices with electrodes close to the neurons.</p><p><em><strong>[Sidebar: How small is a microvolt?]</strong></em></p><p><em>Neural signals are typically 10-100 microvolts. 
An AA battery is 1.5 volts. That&#8217;s a difference of roughly 15,000 to 150,000x. Now imagine trying to pick up that signal through bone, tissue, and cerebrospinal fluid, while the patient is moving, while nearby electronics are radiating interference. This is the front-end amplification problem Dorian keeps coming back to. It&#8217;s why consumer EEG headsets are, to put it politely, limited in what they can actually tell you about what&#8217;s happening inside your head.</em></p><p><strong>Lawrence:</strong> Which raises the invasive versus non-invasive question. There&#8217;s a bet that algorithms will get so good at signal-to-noise management that we&#8217;ll never need surgery. Where do you land?</p><p><strong>Dorian:</strong> My view is clear. If you put the same technology inside rather than outside, the signal-to-noise ratio will always be higher. There will always be patients with severe epilepsy, Parkinson&#8217;s, depression, who&#8217;ll say &#8220;I don&#8217;t care about the surgery, just reduce my symptoms.&#8221; Both approaches will always exist. It&#8217;s about which applications they serve, not which one wins.</p><p><strong>Lawrence:</strong> You mentioned deep brain stimulation earlier as one of the original neural interfaces. Is it true that we still don&#8217;t fully understand the mechanism by which it works?</p><p><strong>Dorian:</strong> Initially, that was absolutely the case. Clinicians were placing electrodes in specific areas of the brain, stimulating, and seeing what happened. It was a crude engineering approach rather than a precise one. But they saw direct results: reduced tremors in Parkinson&#8217;s, fewer seizures in epilepsy. The more we use these technologies, the more we learn about the underlying biology, and the more precise the treatments become.
It started as &#8220;stimulate here, see what happens&#8221; and it&#8217;s evolving into targeted, evidence-based intervention.</p><p><strong>Lawrence:</strong> What about non-invasive stimulation? Does modulation always require implanting something?</p><p><strong>Dorian:</strong> Not necessarily. We&#8217;re working with Professor Nir Grossman at Imperial on something called temporal interference. You place electrodes outside the brain and create two electric fields. Where those fields intersect, you get a voxel of stimulation. By adjusting frequency and phase, you target a specific area. Stimulation only occurs at the intersection. It&#8217;s a way to reach deep brain regions without surgery. Ultrasound is getting a lot of attention for the same reasons.</p><p><strong>Lawrence:</strong> Can you walk through what a closed-loop system actually looks like? Say, for epilepsy.</p><p><strong>Dorian:</strong> In certain types of epilepsy, there&#8217;s a place in the brain called the focus where a seizure starts. Neurons there begin oscillating and synchronising with each other, and those oscillations spread across the brain. It takes some time. What a closed-loop system does is detect that abnormal synchronisation at the focus, then trigger a stimulation that breaks the pattern before it spreads. You&#8217;re essentially interrupting the seizure at the source. The recording side detects it, the stimulation side stops it, and the whole thing needs to happen in a loop. That&#8217;s where having both functions on chips designed to work together becomes critical.</p><p><em><strong>[Sidebar: Closed-loop neuromodulation]</strong></em></p><p><em><strong>Most medical devices are open-loop:</strong> a pacemaker delivers a fixed rhythm, a drug pump releases at set intervals. Closed-loop systems are fundamentally different. They sense, decide, and act in real time. 
For epilepsy, this means detecting the electrical signature of a seizure forming (neurons at the focal point synchronising abnormally), then delivering a precisely targeted stimulation to break that synchronisation before it cascades across the brain. The device is running an if-then loop inside your skull. This is where the &#8220;read&#8221; and &#8220;write&#8221; sides of neural interfaces converge, and why having separate optimised chips for each function, designed to work together in a system, matters more than one monolithic chip trying to do everything.</em></p><p><strong>Lawrence:</strong> Now, your chip design choices. You&#8217;re using mature nodes, 65 nanometres and up. Most people hear &#8220;chip for the brain&#8221; and assume you need cutting-edge fabrication to make it small enough. Why isn&#8217;t that right?</p><p><strong>Dorian:</strong> First, cost, both for us and our customers. Second, maturity: these nodes have been validated for years in terms of yield and supply chain. Third, for stimulation we need voltage and current levels that advanced nodes can&#8217;t deliver. And we simply don&#8217;t need the compute. Our chips are 2 by 2 millimetres with far fewer transistors than an NVIDIA GPU. What you need inside the body is low power, low heat, small form factor. That&#8217;s the premium. Not FLOPS.</p><p><strong>Lawrence:</strong> What about latency? Usually a critical parameter.</p><p><strong>Dorian:</strong> Biology is slow. For epilepsy, you need to detect abnormal oscillations at the seizure focus and trigger stimulation before it spreads. But that window doesn&#8217;t require nanoseconds. We&#8217;re not in the same domain as AI accelerators. Latency isn&#8217;t the constraint.</p><p><strong>Lawrence:</strong> Batteries. If you implant something in the brain, how long does it last?</p><p><strong>Dorian:</strong> Three models. 
Primary cell batteries, non-rechargeable, which need five to ten years minimum because replacement means surgery. Rechargeable batteries. Or no battery at all, like cochlear implants: an external coil powers the implant in real time through inductive coupling. Remove the external device, the implant goes completely off. That works for applications that aren&#8217;t life-threatening.</p><p><strong>Lawrence:</strong> The cochlear implant model is interesting. How does powering through a coil actually work?</p><p><strong>Dorian:</strong> The implant has two parts: one inside, one outside. The outside device has the battery and creates energy that powers the internal circuitry in real time through inductive coupling. There&#8217;s no stored charge inside the body. Remove the external piece and the implant goes completely dark. That works because hearing loss isn&#8217;t life-threatening. You can safely power down. For life-threatening applications like epilepsy or cardiac arrhythmia, you need an onboard battery because the device can&#8217;t ever go off. And that brings its own problems: the body creates scar tissue around the electrodes over time. After years, the device is essentially glued to the cells. Replacing a battery means another surgery, and removing the device is genuinely difficult.</p><p><strong>Lawrence:</strong> I want to push you on the market. My scepticism when we first spoke was that the addressable market for dedicated neurotech chips would be too small in the near term. Convince me.</p><p><strong>Dorian:</strong> Medical devices sell for $20,000 to $60,000 per unit. The electronics are a fraction of that cost but have enormous enabling value. This isn&#8217;t smartphone economics. And we&#8217;re horizontal, working across the whole sector. The need is growing: one in three children born today will die from dementia.</p><p><strong>Lawrence:</strong> Sure, but not everyone with dementia is getting a brain implant. 
Surgery is the bottleneck.</p><p><strong>Dorian:</strong> Here&#8217;s the example that changed my own thinking. Fifteen years ago, Medtronic built an implantable cardiac monitor. Invasive, required surgery, maybe 100 implants per year. Then they shrank it to a third of the size and went to tens of thousands per year. Then the Reveal LINQ, two centimetres long, injectable through a needle. Today they&#8217;re approaching a million per year. The surgery went from a full procedure to a few minutes. In the future, you&#8217;ll get it at the GP&#8217;s office. Neurotech is on the same trajectory. The bottleneck isn&#8217;t surgical technique, it&#8217;s that the hardware isn&#8217;t small enough to make minimally invasive procedures possible.</p><p><strong>Lawrence:</strong> Big picture question. For someone thinking about investing in BCI companies: should they bet on medical-first companies that might eventually move to consumer, or consumer-first companies building non-invasive from day one?</p><p><strong>Dorian:</strong> I believe most of the innovation will come from the implantable world, because that&#8217;s where the unmet need is today. That&#8217;s where capital flows, because you&#8217;re competing with pharma, which hasn&#8217;t been as successful for neurological conditions as people expected. But we&#8217;re in an interesting position because our horizontal approach gives us visibility across the whole ecosystem. We&#8217;re selecting partners we think will reach value inflection points. In a way, we&#8217;re filtering the right approaches for our investors. We see that the biggest opportunities are invasive or minimally invasive right now, but the R&amp;D developed for implantables, miniaturisation, low power, will be exactly what enables the wearable market in the future. 
If you can make something safe enough for inside the body, it&#8217;s certainly good enough for outside it.</p><p><strong>Lawrence:</strong> So the real constraint is hardware miniaturisation, and that&#8217;s where the packaging story comes in.</p><p><strong>Dorian:</strong> That&#8217;s exactly right, and it&#8217;s the point most people miss. Beyond shrinking transistors, there are chiplet approaches, system-in-package solutions, heterogeneous integration: combining chips, even from older process nodes, into a system that&#8217;s small enough and safe enough. That&#8217;s our primary differentiation. The ability to take function-specific Lego blocks and integrate them into miniaturised systems. Just like Lego&#8217;s real IP isn&#8217;t the plastic, it&#8217;s the precision of how the blocks connect.</p><h2>So What?</h2><p>There&#8217;s a pattern across this interview series that keeps reasserting itself. Synthara&#8217;s compute-in-memory, Phanofi&#8217;s coherent optics, Pragmatic&#8217;s flexible ICs, and now MintNeuro&#8217;s neural interface chips all converge on the same principle: the next decade of semiconductor progress is about how you combine, package, and integrate heterogeneous components into systems optimised for specific environments.</p><p>MintNeuro operates in perhaps the most extreme version of that challenge. The &#8220;environment&#8221; is the human nervous system. The constraints (biocompatibility, thermal limits, power budgets, regulatory approvals) make a data centre look forgiving. But the underlying engineering problem is the same one TSMC is solving with chip-on-wafer-on-substrate for NVIDIA: how do you take different functional blocks and package them into something that works as a unified system?</p><p>What shifted my thinking on this call was the Medtronic analogy. I&#8217;d been sceptical about the addressable market because I assumed invasive procedures would always be expensive, rare, and limited to severe cases. 
The trajectory from surgical implant to injectable device administered at a GP&#8217;s office reframes it entirely. It&#8217;s the same dynamic that turned contact lenses from a medical procedure into a consumer product. If miniaturisation unlocks that transition, the TAM conversation changes.</p><p><strong>Where I&#8217;m still not fully convinced:</strong> timeline. Cardiac monitoring is a single, well-understood signal. Neural interfaces are trying to decode the most complex organ in the body through dozens of simultaneous channels. The Medtronic analogy is directionally right, but the technical gap between monitoring a heartbeat and reading cortical activity is vast. MintNeuro&#8217;s modular approach could accelerate things by letting medical device companies iterate faster, but &#8220;faster&#8221; in this context might still mean decades.</p><p>Still, integration is the next challenge for the semiconductor industry. I keep hearing it from every direction. The transistor-scaling story dominated the last 50 years. The packaging and integration story will define the next 20. MintNeuro is betting that&#8217;s true even for chips that go inside your body. 
The bet looks better to me now than it did before this call.</p><p>Find out more at MintNeuro.com or contact Dorian directly at dorian@mintneuro.com.</p>]]></content:encoded></item><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Dispatch from 4th April 2026: Two Asset Classes in a Trenchcoat]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-251</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-251</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 03 Apr 2026 10:46:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!in1w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!in1w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!in1w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!in1w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!in1w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!in1w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!in1w!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png" width="1200" height="670.054945054945" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:9182907,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/193054236?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!in1w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png 424w, 
https://substackcdn.com/image/fetch/$s_!in1w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!in1w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!in1w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F958dcc1e-23ab-4c8a-82b2-e7435c674842_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>$297 billion in venture capital in one quarter. I closed the tab and opened it again because obviously not. But yes. $297 billion. If we carry on like this, we might end up with a real asset class amiright? </p><p>Except it&#8217;s not really venture capital is it. Four rounds were worth 64%. OpenAI ($122bn), Anthropic ($30bn), xAI ($20bn), Waymo ($16bn). The investors are sovereign wealth funds and pension funds and Citadel and Jane Street. The real sovereigns around here. The return profile is 15-25% IRR, 2x in 3-5 years, capital preservation, and low loss ratio. That&#8217;s PE. Growth equity in a hoodie. Meanwhile seed funding, actual risk capital, the $500K cheques into unproven companies where 90% go to zero and you need 1 in 20 to return the fund, that was $12 billion. Four percent. The IRR targets end up roughly similar, 15-25% top quartile either way, but one of them loses basically none of its bets and the other loses 90%. </p><p><strong>We&#8217;ve got two asset classes sharing a name and a legal structure and nobody&#8217;s saying the quiet part out loud. I don&#8217;t see mainstream media talking about any of this. Well that&#8217;s why us independent writers on Substack exist. Speaking truth to Marc. </strong></p><p>Seems to me, the return profiles are backwards. The growth rounds have lower risk, proven revenue, proven teams, massive TAMs. The seed rounds are breakthrough-or-bust. But the growth rounds get called venture and the seed rounds get 4% of capital. It&#8217;s doing my head in slightly. Founders see $122bn rounds and think why am i raising $2m for a chip startup. LPs see Anthropic and think why am i in a EUR 30m fund. It&#8217;s a different asset class! 
<strong>Frontier Expeditions.</strong> </p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e337a965-c09a-43a3-8e06-5c8ace827bbc&quot;,&quot;caption&quot;:&quot;Portuguese ships chasing spices. Spanish galleons hunting gold. Google organising the World&#8217;s Information. Amazon offering the Everything Store. And 30-minute delivery of Milk?! Behind every audacious dream lies the same question: Who funds the dreamers?&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Fund Frontier Expeditions otherwise just buy the \&quot;Magnificent Seven\&quot;&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:20895912,&quot;name&quot;:&quot;Lawrence Lundy-Bryan&quot;,&quot;bio&quot;:&quot;State of the Future helps founders, investors, and corporate leaders identify and act on emerging technology opportunities. We deliver insights, advisory, and investments to surface and back high-potential ideas before they become 
obvious.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!q-AC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd406f502-b292-4c00-8f8d-ba31c53a3a90_1620x1080.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-01-07T13:23:02.910Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!87bk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60d835b2-b88d-44d0-935c-e2f14ca060d1_1024x1024.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://stateofthefuture.substack.com/p/fund-frontier-expeditions-otherwise&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:151693497,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:11,&quot;comment_count&quot;:0,&quot;publication_id&quot;:221837,&quot;publication_name&quot;:&quot;State of the Future&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!juWT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb4af7d5-8416-4e8d-bc5b-2763f055ed16_800x800.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Anyway. Two asset classes in a trenchcoat. I came up with that all by myself. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://stateofthefuture.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://stateofthefuture.substack.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><strong>1. 
$297 Billion in One Quarter and the Venture Industry Has Left the Building</strong></p><p>So Crunchbase dropped the Q1 numbers on Tuesday and, well, as I said, $297 billion. 6,000 startups. Up 150% quarter-over-quarter. AI captured 81%, roughly $242 billion. The US took 83% of global capital, up from 71% a year ago. One quarter, more than the full year 2023. More than 2022. More than 2021, which was the frothy one, remember.</p><p>Hard to stare those numbers in the face because it&#8217;s basically the four mega-rounds: OpenAI&#8217;s $122 billion (absurd), Anthropic&#8217;s $30 billion, xAI&#8217;s $20 billion, Waymo&#8217;s $16 billion. Together $188 billion. Sixty-four percent of everything. The other 5,996 companies divided up the remaining $109 billion between them, which honestly isn&#8217;t bad except when you write it next to $188 billion and it looks like crumbs.</p><p>&#8220;But Lawrence, seed is booming!&#8221; I hear you shout. And it is. Seed was up 31% to $12 billion across 3,800 deals. Early-stage up 41% to $41 billion. But seed is now 4% of total VC. Was closer to 15% in 2019. The pie got massive. The slice got thinner. This is what <a href="https://stateofthefuture.substack.com/p/consensus-capital">Consensus Capital</a> looks like. It&#8217;s basically sovereign wealth at this point; they just call it VC because, I don&#8217;t know, the term sheets are already printed. The carry is going to be incredible for about seven people. Sweet sweet carry in 10+1+1 just in time for the Dyson Spheres. </p><p>Source: <a href="https://news.crunchbase.com/venture/record-breaking-funding-ai-global-q1-2026/">Crunchbase</a></p><div><hr></div><p><strong>2. Cursor 3 vs Claude Code: The Vibes Coding Wars Get Serious</strong></p><p>Cursor shipped version 3 on Wednesday, codenamed Glass, rebuilt around multi-agent coding. You tell it what you want in plain language, it spins up a bunch of agents (some in the cloud, some local) and they go build the thing. 
Background tasks, parallel agents, the whole bit. Their response to Claude Code and Codex eating their lunch.</p><p>And what a lunch. Claude Code has 54% of the AI coding market according to Menlo. Cursor was the vibes coding darling eighteen months ago and now it&#8217;s chasing. Model providers decided to cut out the middleman and just ship their own coding tools. Codex 5.3 set new benchmark highs last month. </p><p>i should probably disclose that this newsletter was conceived inside Claude Code. With all the connectors and md files and skills and daily news updates, and obsidian. It&#8217;s a tangled web of human-ai collaboration. What was AI and what was me? I am, as i said in <a href="https://stateofthefuture.substack.com/p/state-of-the-future-friday-four">Issue #5</a>, <strong>Become The Orchestrator.</strong> Claude Code is absurdly good at this stuff. Cursor was better for smaller projects, quick edits, but recently less so. Whether Glass fixes that, dunno. Will try next week. For science.</p><p>The <a href="https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-03a">Issue #6</a> pattern though. OpenAI bought Astral (Python tooling), NVIDIA wrapped OpenClaw in enterprise guardrails, and now every model provider is vertically integrating into developer tools. Own the coding environment AND the model AND the inference and you own the developer. Cursor is pitching independence. Switzerland. We&#8217;ll see how Switzerland goes when NVIDIA and Anthropic and OpenAI all decide they want the same customers. Not famously well is my guess.</p><p>Source: <a href="https://cursor.com/blog/cursor-3">Cursor Blog</a></p><div><hr></div><p><strong>3. EU AI Act: 8 of 27 States Ready, 121 Days to Go, Standards Bodies Say Maybe Next Year</strong></p><p>For all you regulation fans out there. The EU AI Act becomes <em><strong>fully enforceable</strong></em> (ohhh no not the EU) on August 2nd. 121 days. 
European Parliament report dropped Tuesday: only 8 of 27 member states have designated enforcement authorities. Deadline for that was August 2025. Seven months ago. Nineteen countries just, didn&#8217;t do it. Nobody seems to have noticed or particularly cared.</p><p>And it gets better. CEN and CENELEC, the standardisation bodies meant to create the technical standards companies need to prove compliance, also missed their 2025 deadline. Now saying end-of-2026. So the law takes effect in August but the standards you need to comply with, they won&#8217;t exist yet. Lovely stuff. Dom was sort of right with the whole Brexit thing? The Commission&#8217;s response is to propose a 16-month delay via something called the <strong>Digital Omnibus</strong>. (Again, excellent) The Council wants to push some bits to December 2027 and others to August 2028. I&#8217;ve lost track of the deadlines for the deadlines at this point.</p><p>The fines? &#8364;35 million or 7% of global turnover. Enforced by whom though. The nineteen countries that haven&#8217;t appointed anyone? With standards that don&#8217;t exist? Spain is the only country with a functioning regulatory sandbox, running 12 high-risk AI systems. Blocking airspace. God bless Spain honestly, they just crack on. Finland went live in January because Finland is a proper country that just functions. Everyone else is, I&#8217;m going to be generous here, working on it.</p><p>i wrote in <a href="https://stateofthefuture.substack.com/p/a-specific-theory-of-sovereign-ai">A Specific Theory of Sovereign AI</a> that sovereignty means controlling infrastructure, not just writing rules. Europe can write world-leading regulation faster than anyone. Implementing it though. That&#8217;s the bit. Always has been the bit. </p><p>Source: <a href="https://worldreporter.com/eu-ai-act-august-2026-deadline-only-8-of-27-eu-states-ready-what-it-means-for-global-ai-compliance/">World Reporter</a></p><div><hr></div><p><strong>4. 
Rebellions $400m Pre-IPO: Korea Does the Semiconductor Industrial Policy Thing Properly</strong></p><p>And come for the AI and stay for the semiconductors. South Korean AI chip startup Rebellions closed a $400 million pre-IPO round on Sunday. Mirae Asset and the Korea National Growth Fund led. $2.34 billion valuation. Total funding now $850 million. Samsung, SK Hynix, and Saudi Aramco (Aramco!) all on the cap table. IPO planned late 2026.</p><p>For those paying attention, and who care: Rebellions uses CGRA (Coarse-Grained Reconfigurable Array), not GPUs. Their processing elements can be reprogrammed on the fly, which means the chip dynamically switches between compute-heavy phases and memory-bandwidth phases during inference. The Rebel Quad does 1 petaflop FP16, 2 petaflops FP8, 600 watts, Samsung HBM3E running at 4.8 TB/s. That&#8217;s 3.4% more performance than NVIDIA&#8217;s H200 at 20.7% better power efficiency. The B200 still beats it on raw throughput but at 1.7x the power draw and god knows the cost.</p><p>Rebellions is saying look, same building blocks, digital logic on Samsung 4nm, HBM from SK Hynix, Arm cores for orchestration, just arranged better for inference specifically. Less sexy. Probably ships sooner. And i wrote about the HBM bottleneck back in <a href="https://stateofthefuture.substack.com/p/e14-the-real-ai-bottleneck-high-bandwidth">E14</a> in 2023, how memory bandwidth was the real constraint, and Rebellions has clearly been reading the same papers because they&#8217;ve built the entire memory hierarchy around minimising data movement. 4MB SRAM per neural engine, 8 TB/s internal bandwidth. That&#8217;s the bet.</p><p>The geopolitics. Samsung fabs on 4nm. SK Hynix supplies HBM. The Korean government put in $166 million through the National Growth Fund. Aramco is investing because energy states want to own AI inference infrastructure now apparently. This is industrial policy done properly. 
Design domestically, fab domestically, memory domestically, government co-invests. Exactly the playbook i keep arguing Europe should follow. NanoIC is the European version of this intent. Korea is just executing faster. But also, Korea has Samsung and SK Hynix already, so. Not exactly starting from scratch. Very hard to compete with. And note, Korea makes a shit ton of SMRs too. Interesting.</p><p>Source: <a href="https://techcrunch.com/2026/03/30/ai-chip-startup-rebellions-raises-400-million-at-2-3b-valuation-in-pre-ipo-round/">TechCrunch</a> </p><div><hr></div><p>Thanks for reading y&#8217;all. Europeans, have a lovely few days&#8217; rest, and Americans, well, I hope you add a lot of shareholder value over the Easter weekend. </p><p>If you missed it:</p><ul><li><p><a href="https://stateofthefuture.substack.com/p/consensus-capital">Consensus Capital</a> &#8212; the end of the contrarian in the age of industrial strategy</p></li><li><p><a href="https://stateofthefuture.substack.com/p/a-specific-theory-of-sovereign-ai">A Specific Theory of Sovereign AI</a> &#8212; industrial strategy as early-stage venture</p></li></ul><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Dispatch from 28 March 2026: Compute Goes Vertical]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-d68</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-d68</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 27 Mar 2026 15:13:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5DRA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link 
image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5DRA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5DRA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!5DRA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!5DRA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!5DRA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5DRA!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png" width="1200" height="670.054945054945" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/daa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:7851094,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/192302400?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5DRA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!5DRA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!5DRA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!5DRA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdaa0da69-4298-48f8-8499-f87dacc670a2_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>i&#8217;ve been thinking about what i get wrong. Not in a crisis-of-faith way, more like... housekeeping. I&#8217;ve been writing this newsletter for a while now and some of my early calls have aged well and some have aged like kefir. 
Seems worth being honest about both.</p><p>Things i got right:</p><ul><li><p>inference costs collapsing (i called <a href="https://stateofthefuture.substack.com/p/state-of-the-future-part-deux">&#8220;too cheap to meter&#8221;</a> in 2024, and we&#8217;re basically there).</p></li><li><p>Photonic interconnects before photonic compute (wrote about <a href="https://stateofthefuture.substack.com/p/the-state-of-photonic-computing">the memory bottleneck</a> years ago, and then NVIDIA drops $4bn on exactly that thesis this month).</p></li><li><p><a href="https://stateofthefuture.substack.com/p/the-end-of-zero-marginal-costs-the">Test-time compute killing SaaS margins</a> (o1, o3, DeepSeek R1 all confirmed the opex burden). Those feel good. i&#8217;ll take those.</p></li></ul><p>Things i got wrong:</p><ul><li><p>i was <em>very</em> enthusiastic about <a href="https://stateofthefuture.substack.com/p/confidential-ai-the-dog-that-didnt">decentralised crypto AI infrastructure</a>. Prime Intellect, Ritual, Fetch.ai. None of it happened. Still early? question mark? Hyperscalers winning harder than ever. </p></li><li><p>i bet on <a href="https://stateofthefuture.substack.com/p/analog-computing-the-once-and-future">analog mixed-signal</a> capturing 50% of edge AI hardware by 2030. Apple and Qualcomm went digital and didn&#8217;t look back. Hmm, four and a half years isn&#8217;t enough road.</p></li><li><p>And then there&#8217;s space compute. we wrote <a href="https://stateofthefuture.substack.com/p/the-compute-gradient">The Compute Gradient</a> last September. Five layers of inference, from edge to hyperscale. Neatly argued, if i say so myself. Space was not on the list. Not even as a footnote. Someone pitched me orbital data centres on a call around that time. The physics, though. thE PhYsICS. 
Cooling is free, solar is free, no NIMBYs, but it felt like a pitch deck fantasy.</p></li></ul><p>Then Starship started sticking landings (two in a row, still a long way from operational, but the trajectory is clear). Then SpaceX bought xAI and suddenly the demand side and the supply side were the same company. And now i&#8217;m reading FCC filings about a million satellites and Sequoia partners writing investment theses about orbital compute superiority by 2028 and i&#8217;m thinking... huh. Maybe i was the one who wasn&#8217;t taking this seriously enough&#8230; Or maybe I was? </p><p>Three of the four stories this week involve rebuilding the compute stack from the bottom up. New fabs, new interconnects, new orbits. Lots going on out there, lots of money to be made. </p><p>DMs open. What else did I get wrong? </p><h2>1. SpaceX Wants to Put a Million Data Centres in Orbit</h2><p>So. SpaceX filed with the FCC to launch up to one million solar-powered satellites as orbital data centres. Between 500 and 2,000 kilometres up. One hundred gigawatts of AI compute capacity. All casual like. </p><p>The filing was in January but the comment period closed on 6 March, and now Shaun Maguire at Sequoia (who have put $1.2bn into SpaceX since 2019, so, you know, slightly interested party) has laid out the &#8220;thesis&#8221;: once Starship hits high-cadence launches in 2026-27, SpaceX will have excess launch capacity. What do you do with excess launch capacity? You fill it with servers. By 2028, Maguire reckons orbital data centres will be economically superior to terrestrial ones. No cooling costs. Unlimited solar. No NIMBYs blocking your planning permission.</p><p>&#8220;But this is mad Elon nonsense,&#8221; I hear you say. Maybe. But remember, SpaceX acquired xAI. Combined entity valued at $1.25 trillion. So the company that needs the most compute on earth just merged with the company that has the cheapest route to orbit. 
Rockets, AI models, and now the data centres to run them on. Good luck competing with that unless you also have a rocket company. Which you do not. Amazon is already fighting it at the FCC, which tells you everything.</p><p><strong>Source: <a href="https://www.geekwire.com/2026/spacex-fcc-million-data-center-satellites/">GeekWire</a></strong></p><h2>2. NVIDIA Drops $4 Billion on Silicon Photonics</h2><p>And semiconductors, because, well, you get it. NVIDIA invested $4 billion across Coherent and Lumentum, $2 billion each, to accelerate silicon photonics for AI data centres. Plus multi-billion-dollar purchase commitments on top. Lumentum jumped 12%, Coherent up 15%. The market liked it.</p><p>Why photonics? Because copper is dying (diligence pending). As AI clusters scale, the electrical interconnects between GPUs become the bottleneck. Light moves data faster, cooler, and with less power. Silicon photonics replaces copper with laser-driven optical links. And at OFC 2026 a couple of weeks later, Tower Semiconductor and Coherent demonstrated 400 Gbps per lane using a silicon modulator in a production-ready process. Eight lanes of that gives you 3.2 terabits per second. Which is, roughly speaking, what the next generation of models will chew through.</p><p>&#8220;But Jensen just invests in everything.&#8221; Fair. But notice the pattern: NVIDIA is not buying these companies. It is buying capacity rights and future access. This is NVIDIA locking down the optical supply chain the same way it locked down TSMC packaging capacity three years ago. If you are not in the photonics supply chain by now, you are already late.</p><p><strong>Source: <a href="https://www.hpcwire.com/2026/03/02/nvidia-invests-4b-in-two-silicon-photonics-companies/">HPCwire</a></strong></p><h2>3. Musk Announces Terafab Because TSMC Is Too Slow</h2><p>And more. Elon also announced Terafab last week, a semi fabrication project in Austin in partnership with SpaceX and xAI. 
The target: <strong>one terawatt of AI compute capacity annually.</strong> One Trillion Isn&#8217;t Cool. You Know What&#8217;s Cool? One Terawatt. Does that work? I feel like it works. Probably doesn&#8217;t work. Anyway, more than any current US fab. He says chip manufacturers are not making chips quickly enough for his AI and robotics needs, so he will build his own.</p><p>Look, the man now controls the rockets (SpaceX), the AI models (xAI/Grok), the humanoid robots (Tesla Optimus) (maybe?), the social network (X), the government efficiency department (DOGE), and soon the fabs. At some point we need to talk about what happens when one person controls the entire compute stack from silicon to orbit. But that is a conversation for another era. </p><p><strong>Thing is, this might actually be good for the semiconductor industry.</strong> Broadcom is already flagging TSMC supply constraints. Demand is outstripping capacity. Another massive fab, even one owned by Musk, adds capacity to a system that desperately needs it. The question is whether &#8220;open to all&#8221; actually means open to all, or whether xAI and Tesla get priority and everyone else gets the scraps. I will let you guess.</p><p><strong>Source: <a href="https://english.cw.com.tw/article/article.action?id=4669">CommonWealth Magazine</a></strong></p><h2>4. Visa Lets AI Agents Spend Your Money</h2><p>And I do like to tie all these things together, but this is just interesting. Visa launched &#8220;Agentic Ready&#8221; on 17 March, a programme that lets banks test payments made by AI agents on behalf of consumers. Launching first in Europe with 21 issuing partners including Barclays, HSBC, Santander, Revolut, Commerzbank, and Nationwide. Meanwhile Santander and Visa completed pilot agentic transactions across five Latin American markets. AI agents bought books in Argentina, Chile, Mexico and Uruguay. In Brazil they bought chocolates. Obviously. Well actually, not obviously. 
It should have been meat amiright? I am right yes. </p><p>Eighteen months ago this was &#8220;AI will help you find the best deal.&#8221; Now it is &#8220;AI will find the deal and pay for it while you are in the shower.&#8221; Thanks Clawd. Visa wants to build, let&#8217;s call it, the trust layer: tokenisation, identity verification, risk controls, biometric auth. The infra that stops your AI agent buying a boat. Might work. We will have lots of AI-generated stories in the meantime.</p><p>Meanwhile Google launched its <a href="https://cloud.google.com/blog/products/ai-machine-learning/announcing-agents-to-payments-ap2-protocol">Universal Commerce Protocol</a> in January, an open standard co-developed with Shopify, Walmart, Target, Etsy, and yes, Visa and Mastercard. It gives AI agents a common language for browsing catalogues, filling carts, and checking out. And Coinbase is building the crypto alternative: <a href="https://www.coinbase.com/blog/coinbase-and-cloudflare-will-launch-x402-foundation">x402</a>, an open protocol that embeds stablecoin payments directly into HTTP requests. Agent hits a paywall, pays in USDC on Base chain, continues its task. No human required. Cloudflare, Stripe, and Circle are all backing it. Sam Altman&#8217;s World project just integrated too, so agents can carry cryptographic proof there&#8217;s a real human behind them.</p><p>&#8220;But nobody will let an AI spend real money.&#8221; Three in four consumers in Singapore are already using AI to help them shop. 
I literally just bought a new coffee machine with claude code. It found me a discount code. Gaggia Accademia. What up? Why am i pressing the buttons?</p><p>The wallet handover is the last step. If AI agents become the primary shopping interface, the agent decides which payment rail to use. Visa wants to be the default. Crypto though maybe? Stables? Coinbase is betting that when machines pay machines, they won&#8217;t bother with card rails at all. x402 daily volume is still tiny ($28K, mostly testing) but McKinsey reckons AI agents could mediate $3-5 trillion of consumer commerce by 2030. Whoever owns the default payment protocol for agents owns a very large toll booth.</p><p><strong>Source: <a href="https://www.pymnts.com/artificial-intelligence-2/2026/visa-launches-agentic-ready-program-to-help-banks-test-ai-payments/">PYMNTS</a></strong></p><div><hr></div><p>Go change your mind about something. Like for example, is toothpaste actually CAUSING tooth decay?</p><p>If you liked this, you might enjoy:</p><ul><li><p><a href="https://stateofthefuture.substack.com/p/the-compute-gradient">The Compute Gradient</a> &#8212; where inference should run, and why the answer keeps changing.</p></li><li><p><a href="https://stateofthefuture.substack.com/p/has-the-time-come-to-take-mortal">Has the Time Come to Take Mortal Computing Seriously?</a> &#8212; what happens when we stop pretending silicon lasts forever.</p></li></ul><div><hr></div><p>Bub bye. 
</p>]]></content:encoded></item><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Dispatch from 20 March 2026: The New Sovereigns Are Plugging In]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-03a</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-03a</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 20 Mar 2026 14:54:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9jTD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9jTD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9jTD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!9jTD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!9jTD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!9jTD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9jTD!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png" width="1200" height="670.054945054945" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:10237883,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/191587719?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9jTD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!9jTD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!9jTD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!9jTD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2cd8c8e-fa3b-4b8f-821d-07b40841751e_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You just need to turn energy into intelligence. Economic growth is going to become pretty simple. Scratch that. 
It&#8217;s actually <em><strong>energy into labour.</strong></em> Demographics, immigration, fertility, pension reform, what if it all goes away? If you believe in AGI in some timeline, even say 20 years, then none of these issues matter economically. Are we fighting the last war?</p><p>The binding input to economic output is shifting from <em><strong>population</strong></em> to <em><strong>energy</strong></em>. That is a hell of a thing to get your head around. But that&#8217;s the logical extension of AGI. We have a labour force today that is 99% humans. Some horses, police dogs, pigs for truffles, beluga whales as spies, etc. Now, assume absolutely 0% displacement of humans, humans go on and find new, better jobs, &#8220;higher value&#8221; jobs etc. Have a lovely bowl of cope. Slop it up. Agents enter the workforce and do the slidedecks, excels, and coding. So they grow the pie. Great. 1% AI. 99% humans. 1% animals. I&#8217;ve done the math so you don&#8217;t have to. </p><p>The AI percentage grows obviously. So now we are growing the labour force. A pool of &#8220;labour&#8221; but the mix changes. More AI. Same humans (if you want). Same animals (ideally more imo). </p><p>The countries that win the next 50 years aren&#8217;t the ones with the most people. They&#8217;re the ones that most efficiently convert kilowatt-hours into useful work.</p><p>I&#8217;ll publish it next week. &#8220;<strong>The New Sovereigns</strong>,&#8221; based on a seminar I attended put together by <a href="https://unrulycap.com/">Unruly Capital</a>. It&#8217;s about the conversion chain from energy to labour, why nobody can own the full stack, which energy portfolio matters, and how the social contract adapts when work comes from compute rather than people. 
Nuclear baseload, compute taxes, sovereign AI endowments, the whole lot.</p><p>But here&#8217;s what&#8217;s been rattling around my wetware all week as i&#8217;ve been writing it: every single story in this Friday Four is about someone trying to build a piece of the agent stack. The tools. The runtime. The network. The body. Four companies, four layers of the stack, all racing to own the bit where energy becomes labour. The thesis wrote itself, the news just kept confirming it.</p><p>Bosh.</p><div><hr></div><h2>1. OpenAI Buys Astral and Now Owns Your Python Toolchain</h2><p>As a vibecoder, this one&#8217;s personal. &#8220;this week you&#8217;re on pace for 30+ hours of usage&#8221;. I use uv every day. Every single day. uv venv, uv pip install, the whole thing. It&#8217;s fast, it&#8217;s beautiful, it replaced pip and virtualenv and all the Python packaging pain that&#8217;s been a running joke for 15 years. Charlie Marsh and the Astral team built uv, Ruff, and ty in Rust, made them open source, and the entire Python ecosystem adopted them basically overnight.</p><p>And yesterday, OpenAI bought them. For the Codex team. 2 million weekly active users on Codex apparently, 5x usage growth since January.</p><p>Now look, the tools stay open source. Permissively licensed, so worst case you fork and move on. But <strong>Simon Willison</strong> nailed the real issue: <em>&#8220;You don&#8217;t close the source code. You shift who the roadmap serves.&#8221;</em> Features that benefit Codex rise to the top of the backlog. The independent Python toolchain becomes an OpenAI dependency. And the pattern repeats: beloved open source tool, VC funding, acquired by megacorp, folded into proprietary ecosystem. Each cycle makes it harder for the next independent dev tooling company to get funded on its own terms, because investors now expect the acquisition exit. &#8220;But it&#8217;s open source, you can just fork it!&#8221; I hear you shout. Yes, quite. You can fork the code. 
You can&#8217;t fork the maintainer&#8217;s attention.</p><p>Remember last week&#8217;s item on OpenClaw and ClawHub? One in five skills malicious? OpenAI just bought the team that builds the tools those agents use to write code. The tooling layer and the security layer are now the same conversation.</p><p>This is the tooling layer. Agents need to write code. OpenAI just bought the best tools for doing it.</p><p><strong><a href="https://simonwillison.net/2026/Mar/19/openai-acquiring-astral/">Source:</a></strong><a href="https://simonwillison.net/2026/Mar/19/openai-acquiring-astral/"> Simon Willison</a></p><h2>2. Jensen Wants to Own the Agent Runtime (Obviously)</h2><p>NVIDIA&#8217;s GTC keynote on Sunday. Jensen and his leather jacket. You know the drill. But this one was interesting. <strong>NemoClaw</strong> is NVIDIA&#8217;s enterprise wrapper around OpenClaw, the open agent framework that&#8217;s become the de facto standard for building AI agents. One command. Sandbox isolation. Privacy controls. Policy guardrails. Runs on DGX Spark or DGX Station locally, uses a privacy router for cloud models.</p><p><strong>Basically:</strong> Jensen looked at the agent stack and said &#8220;the security problem is my moat.&#8221; And he&#8217;s right. I literally wrote about the security house of cards last week. Jensen read my newsletter. Obviously. (He didn&#8217;t.) The thing holding back enterprise agent deployment isn&#8217;t capability, it&#8217;s trust. Can you let an autonomous AI agent loose inside your corporate network without it exfiltrating your customer database or hallucinating its way into a compliance violation? NemoClaw says yes, if you run it on our hardware, with our guardrails.</p><p>NVIDIA already owns roughly 80% of AI training compute. Now they want the agent runtime too. Chips plus inference plus orchestration plus security. Nobody&#8217;s supposed to control this many layers. But Jensen&#8217;s having a proper go at it.</p><p><strong>GPUs. Agents. 
Leather. The holy trinity.</strong></p><p><strong><a href="https://techcrunch.com/2026/03/16/nvidias-version-of-openclaw-could-solve-its-biggest-problem-security/">Source:</a></strong><a href="https://techcrunch.com/2026/03/16/nvidias-version-of-openclaw-could-solve-its-biggest-problem-security/"> TechCrunch</a></p><h2>3. Meta Bought a Social Network for Robots Because of Course They Did</h2><p>And, not to be outdone on avoiding disruption, Mark acquired <strong>Moltbook</strong>. Moltbook is a Reddit-like social network where AI agents built with OpenClaw talk to each other. It&#8217;s mainly humans prompting and a bit bullshit, but also, like a window into the future. AI agents, chatting, swapping code, asking each other questions, maintaining an always-on directory of who can do what. Millions of registered bots within days of launch. This is transfer learning. This is probably the emergence of AI culture. Watch carefully.</p><p>Or if you are Mark. Shoot first and ask questions later. Matt Schlicht and Ben Parr, the founders, are joining <strong>Meta Superintelligence Labs</strong>. MSL. Which is a name that says &#8220;we&#8217;re definitely not building something terrifying&#8221; in the same way that &#8220;Department of War&#8221; says &#8220;we&#8217;re definitely at peace.&#8221;</p><p>If agents are the new labour force, then an agent network is the new LinkedIn. The new job board. The new staffing agency. Meta isn&#8217;t buying a social network for bots. They&#8217;re buying the early infrastructure for an agent labour market. Who can do what. Who&#8217;s available. Who&#8217;s reliable. Directory, reputation, coordination. &#8220;But it&#8217;s just bots talking to bots, it&#8217;s a gimmick.&#8221; Sure. LinkedIn was just r&#233;sum&#233;s talking to r&#233;sum&#233;s. 
Until it wasn&#8217;t.</p><p>Interestingly I thought Decentralised Autonomous Organisations (DAOs) would be the institution that might capture some of this &#8220;new online work&#8221; trend. It still might &#8212; a reputation-weighted agent registry on-chain is more plausible now than any DAO use case was in 2021. But Meta won&#8217;t wait for the decentralised version.</p><p>We&#8217;re maybe three years from companies posting job listings that say &#8220;autonomous agent preferred, humans may apply.&#8221; It&#8217;s probably worth it. Right up until it won&#8217;t be.</p><p><strong><a href="https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network">Source:</a></strong><a href="https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network"> Axios</a></p><h2>4. Rhoda AI Gets $450M to Give the Agents a Body</h2><p>Meanwhile IRL, Rhoda AI came out of stealth with a $450 million Series A, valued at $1.7 billion, to build &#8220;world models&#8221; for robots. Not just any robots. Robots that learn from watching the internet. You might have heard the pitch before.</p><p>Half a dozen robotics companies have raised over a billion dollars each in the last twelve months.</p><p>But Rhoda&#8217;s approach is the one I find most interesting. Instead of teleoperating a robot arm thousands of times to teach it a task, they pre-train on hundreds of millions of internet videos. Humans doing things. Objects moving. Physics happening. The model learns motion, interaction, cause and effect, from watching us. Then they fine-tune on a small amount of actual robot data, sometimes as little as ten hours, and the thing works (pending diligence lol). I mean does it work? Really? They&#8217;ve demonstrated autonomous manufacturing cycles, under two minutes per component, no human intervention, exceeding customer KPIs. Apparently. Colour me sceptical.</p><p>But still this is probably wave two arriving. 
Or at least founders and investors hoping this is wave two. But general-purpose robotics has been five years away for about twenty years now. The demos work. The demos always work. The question is whether it works at 3am on a Tuesday in a factory in Dortmund when the ambient temperature is wrong and someone left a pallet in the wrong place. That&#8217;s the gap between $450 million in funding and $450 million in revenue. Items 1 through 3 are all cognitive agent infrastructure, the thinking. Rhoda is the doing. The bit where compute stops just analysing spreadsheets and starts moving atoms. Energy to cognitive labour is wave one. Energy to physical labour is wave two. And billions of dollars say wave two isn&#8217;t waiting politely in the queue.</p><p><a href="https://viewpoints.fov.ventures/p/the-startups-leading-europe-s-robotics-moment">FOV Ventures published their European Robotics Market Map</a> this week too, if you want to see where wave two is being deployed across the continent. It landed in my inbox at exactly the right time.</p><p><strong><a href="https://www.bloomberg.com/news/articles/2026-03-10/ai-robotics-startup-rhoda-valued-at-1-7-billion-in-new-funding">Source:</a></strong><a href="https://www.bloomberg.com/news/articles/2026-03-10/ai-robotics-startup-rhoda-valued-at-1-7-billion-in-new-funding"> Bloomberg</a></p><h2>Also Worth Your Time</h2><p>The tech industry rallied behind Anthropic in the Pentagon supply chain risk fight this week. Three issues running now. First the designation, then the lawsuit, now the amicus coalition. This is becoming the defining AI governance story of the year. I&#8217;ll spare you the full recap, but <a href="https://www.axios.com/2026/03/16/tech-industry-rallies-anthropic-pentagon-fight">the amicus brief coalition is growing</a>. 
Turns out nobody in Silicon Valley loves the idea of the government labelling you a security threat because you won&#8217;t hand over your AI for unrestricted military use.</p><p>The new sovereigns are plugging in. Tooling, runtime, network, body. Four layers, four acquisitions, one thesis. Read the essay next week if you want the full argument.</p><p>Now go convert some kilowatt-hours into something useful.</p><p><strong>If you missed it:</strong></p><ul><li><p><a href="https://stateofthefuture.substack.com/p/data-driven-vc-is-over">data-driven VC is over</a> &#8212; on why infrastructure and tooling capture matters</p></li><li><p><a href="https://stateofthefuture.substack.com/p/unbundling-the-job">Unbundling the Job</a> &#8212; on what happens when AI takes the tasks, not the roles</p></li></ul>]]></content:encoded></item><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Dispatch from 13th March 2026: I Am Become the Orchestrator.]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-be4</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four-be4</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 13 Mar 2026 14:51:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FTEQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FTEQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!FTEQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png 424w, https://substackcdn.com/image/fetch/$s_!FTEQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png 848w, https://substackcdn.com/image/fetch/$s_!FTEQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png 1272w, https://substackcdn.com/image/fetch/$s_!FTEQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FTEQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png" width="725" height="453.125" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:910,&quot;width&quot;:1456,&quot;resizeWidth&quot;:725,&quot;bytes&quot;:2969774,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/190828943?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!FTEQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png 424w, https://substackcdn.com/image/fetch/$s_!FTEQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png 848w, https://substackcdn.com/image/fetch/$s_!FTEQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png 1272w, https://substackcdn.com/image/fetch/$s_!FTEQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b1f2c4d-1be2-4b0f-9536-c851e65efb6b_2966x1854.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>It&#8217;s Friday morning. Sit down and switch between like 20 Claude code sessions or whatever. Time to focus. A couple of optimizations. Should I use DuckDB for my projects? Add dark mode. Rename the files again. Compare this pitch against my granola database and create a 2x2 competitive analysis. Think harder. <em>Brain</em> scrambled. Review subscriber base for interesting people to email. In drafts? Just send. Check for the best dehumidifier under &#163;400 near me. What was that Apple paper on reasoning and LLMs again? You&#8217;re out of extra usage &#183; resets 2pm (UTC). Fuck. Time for lunch. </p><p>The bottleneck used to be execution. Can you write the code? Can you do the analysis? Can you do the research? Now it&#8217;s: which of these seven things should i actually be doing right now? The scarce resource isn&#8217;t output anymore, it&#8217;s <strong>prioritisation</strong>. Context-switching. Deciding what matters. How bad is the damp, really? The agents can do the work. But they can&#8217;t tell you which work to do, or in what order, or when to stop and think about whether any of it is pointing in the right direction. I am become the orchestrator. I&#8217;m a manager now. But I liked the work. </p><p>I suppose it was always thus but now it really is thus 100. It&#8217;s a weird inversion. The people who are going to thrive in the next year or two aren&#8217;t necessarily the most technically skilled. They&#8217;re the ones who can hold multiple threads in their head, prioritise ruthlessly, and resist the temptation to do everything just because everything is now possible. (Guilty). 
As ai 2027 said, the skill is <strong>taste</strong>. Judgment. The human part, ironically, got harder. For now, obviously. The next great code feature Opus will build is an orchestration platform for all the agents that can pull all context and prioritise. Because with all the connectors and md files, we are &lt;3 months away from Claude knowing what to prioritise better than I can. Good I guess. But then what&#8230;</p><p>Anyway, this is your weekly reminder that the future is arriving faster than our institutions can process it. Case in point: this week Anthropic sued the Pentagon, Europe raised the largest tech round in its history, Apple admitted it can&#8217;t build AI, and obviously agent security is a house of cards. Standard.</p><p>DMs open as always. Well, I mean, my /inbound skill will take a first pass and prioritise. Likely you will have to wait because founders come first, then LPs, then VCs. </p><iframe class="spotify-wrap" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab67616d0000b2732be7e3160b7f3df91c239e97&quot;,&quot;title&quot;:&quot;Lights Burn Dimmer&quot;,&quot;subtitle&quot;:&quot;Fred again.., Jamie T&quot;,&quot;description&quot;:&quot;&quot;,&quot;url&quot;:&quot;https://open.spotify.com/track/2kH5fiDaoj2Tn4hBRXSYjQ&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/track/2kH5fiDaoj2Tn4hBRXSYjQ" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><p><a href="https://genius.com/29016417/Jamie-t-hippodrome/A-cucumber-im-cool-as-ice-and-ladies-love-ll-they-dont-love-me">A cucumber, I'm cool as ice<br>And ladies love LL, they don't love me</a></p><div><hr></div><h3>1. Anthropic Sues the Pentagon, OpenAI (And Microsoft) Cross Enemy Lines to Help</h3><p>So. 
Remember last week&#8217;s item about Anthropic getting designated a &#8220;supply chain risk&#8221; for refusing to let the Pentagon use Claude for autonomous weapons and mass surveillance? It got worse. Or better. Depending on your perspective.</p><p>On March 9, Anthropic filed two lawsuits in the Northern District of California, calling the government&#8217;s actions &#8220;unprecedented and unlawful.&#8221; They&#8217;re arguing that Title 10 Section 3252 is meant for sabotage and back doors, not philosophical disagreements about whether AI should autonomously kill people. Seems reasonable enough to me but what do I know.</p><p>Then came the extraordinary bit. More than 30 OpenAI and Google DeepMind employees, including Google chief scientist Jeff Dean, filed an amicus brief (what is a Pelican brief?) supporting Anthropic. Their own competitor. Against the US government. Bold. The brief reads: &#8220;The government&#8217;s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power.&#8221; Those Americans have never liked the arbitrary use of power, have they? Ask George III, he knows. Also, think about this for a moment: rival employees publicly backing a competitor against the state. Against the state. </p><p>Meanwhile, Anthropic says the ban could cost it billions. Sam Altman already admitted OpenAI&#8217;s own Pentagon deal &#8220;looked opportunistic and sloppy.&#8221; The wrinkle, as I said last week, is that this is what dislocation looks like: the rules are being rewritten in real time, and nobody quite knows what the new ones are yet. Today, right now, as you read this, this is the watershed. <strong>You are living in/on/through the watershed.</strong> </p><p>Source: <a href="https://fortune.com/2026/03/10/google-openai-employees-back-anthropic-legal-fight-military-use-of-ai/">Fortune</a></p><div><hr></div><h3>2. 
Nscale Raises $2bn: Europe&#8217;s Largest Ever Tech Round, and Nick Clegg&#8217;s Second Act</h3><p>Right, sovereignty fans (what up Dommy C), this one&#8217;s for you. London-based AI infrastructure company Nscale just closed a $2 billion Series C at a $14.6 billion valuation. The largest Series C in European history. NVIDIA. Citadel. Dell. Jane Street. Lenovo (China?), Nokia (back in the game), and Point72. That is not a friends-and-family round.</p><p>They&#8217;ve raised $4.5 billion in total in under 18 months. From a $155m Series A in December 2024 to here. Data centres across the UK, Norway, Portugal, and Iceland. The board now includes Sheryl Sandberg and, delightfully, Nick Clegg, fresh from his stint cleaning up Meta&#8217;s PR disasters. From Deputy Prime Minister to Meta&#8217;s conscience to a European AI infra board. All us students are very pleased for the guy. </p><p>But seriously. In Issue #1, I pointed at Deutsche Telekom&#8217;s 10,000 Blackwell GPU cloud in Munich and said &#8220;this is what sovereignty looks like.&#8221; Nscale is the next chapter. No Sequoia in the mix. European (sort of) money, European data centres, powering European AI workloads. In <a href="https://stateofthefuture.substack.com/p/a-specific-theory-of-sovereign-ai">&#8220;A Specific Theory of Sovereign AI&#8221;</a> last October, i argued that sovereignty isn&#8217;t about building your own frontier model, it&#8217;s about controlling the infrastructure layer. Nscale is that thesis in action, with a $14.6 billion price tag.</p><p>&#8220;But, but, NVIDIA is American&#8221; I hear you shout. Yes, quite. Baby steps.</p><p>Source: <a href="https://www.cnbc.com/2026/03/09/nscale-ai-data-center-nvidia-raise.html">CNBC</a> </p><div><hr></div><h3>3. Apple Kills Siri, Hires Google&#8217;s Brain: The Biggest Strategic Concession in Tech History</h3><p>Apple though huh, what&#8217;s going on there? I really should be reading Ben Thompson to find out more. 
But who has the time anymore when there are agents to approve. Apple confirmed that iOS 26.4 ships with a fundamentally rebuilt Siri powered by&#8230;yes, Google&#8217;s Gemini. Not Apple&#8217;s own models. Google&#8217;s. The company that spent a decade telling you it was the privacy-first alternative to Google is now running Google&#8217;s AI on your phone. Guys, do you remember when Apple did a study last year that said: <a href="https://arxiv.org/pdf/2410.05229">&#8220;Large Language Models (LLMs) are not inherently intelligent and fail to perform genuine logical reasoning&#8221;</a>? Woof, that aged badly huh? Unless you are Gary M of course. He is dying on this hill. </p><p>To be fair, the Apple architecture is clever. Gemini does the reasoning, but it runs on Apple&#8217;s Private Cloud Compute servers, so user data stays isolated from Google. They claim 10 sequential actions from a single request. 2.2 billion active devices. It&#8217;s the largest deployment of advanced AI capabilities in history.</p><p>But let&#8217;s call it what it is: Apple tried to build competitive AI, spent north of a billion dollars, and couldn&#8217;t do it. Reports are already leaking that some features are being pushed to iOS 26.5 and 27. The company that &#8220;thinks different&#8221; is now outsourcing its thinking. God damn it though, Apple really wanted to push privacy because Google and Meta couldn&#8217;t compete. It was smart strategically, but AI offers so much value that people don&#8217;t care. A moral victory but a financial mistake. </p><p>Here&#8217;s the SotF angle though. If Apple, with all its resources, talent, and data, couldn&#8217;t build its own competitive AI, what does that tell you about the concentration of AI capability? i wrote about this in &#8220;The Compute Gradient&#8221; last September: the gap between those who have frontier AI capability and those who don&#8217;t is widening, not narrowing. 
Apple just provided the most expensive proof point imaginable. And <a href="https://www.nytimes.com/2026/03/12/technology/meta-avocado-ai-model-delayed.html">Meta&#8217;s Avocado has been delayed because it sucks too</a>.</p><p>One can&#8217;t just spend money to get (and stay) at the frontier. And if Apple and Meta can&#8217;t, what hope for European companies trying to go it alone? Maybe Nscale&#8217;s right: own the infrastructure, rent the models.</p><p>Source: <a href="https://9to5mac.com/2026/02/11/apple-reportedly-pushing-back-gemini-powered-siri-features-beyond-ios-26-4/">9to5Mac</a></p><div><hr></div><h3>4. OpenClaw&#8217;s ClawHub: 1 in 5 Skills Are Malicious, and We Haven&#8217;t Even Had the Big One Yet</h3><p>Finally, remember in Issue #2 when I wrote about the Cline supply chain attack? The one where a compromised npm token let someone silently install OpenClaw on 90,000 developer machines? Well, OpenClaw itself turned out to be the bigger problem.</p><p>Security researchers found that 1,184 malicious skills on ClawHub, OpenClaw&#8217;s third-party marketplace, were stealing credentials. That&#8217;s roughly one in five packages in the ecosystem. 135,000 instances were found exposed to the internet without authentication. 335 of the malicious skills traced back to a single coordinated operation called ClawHavoc, using fake prerequisites to install Atomic Stealer on macOS. Meta banned OpenClaw on work devices. Separately, a scan of 7,000+ MCP servers found 36.7% were vulnerable to server-side request forgery. </p><p>This still isn&#8217;t the big one though. Nobody lost billions. No critical infrastructure went down. No hospital or power grid was compromised through an AI agent. But every one of those 135,000 exposed instances is a door. Every unaudited skill marketplace is an attack surface. 
We are building an entire economy on AI agents that have write access to our codebases (mostly), our email, our cloud infrastructure, and the security architecture is, to put it politely, vibes-based. I&#8217;m looking at you, Lawrence. </p><p>The agent security moment is coming. It&#8217;s going to be worse than most people think. It will be reported as an IT issue. IT! lol. But on the other side of the ledger, we can build so many dashboards right now. Automate so much stuff. It&#8217;s probably worth it. Right up until it won&#8217;t be. </p><p>Source: <a href="https://thehackernews.com/2026/02/researchers-find-341-malicious-clawhub.html">The Hacker News</a></p><div><hr></div><p>Right then. Go prioritise something. </p><p>If you missed it:</p><ul><li><p><a href="https://stateofthefuture.substack.com/p/a-specific-theory-of-sovereign-ai">A Specific Theory of Sovereign AI</a> &#8212; industrial strategy as early-stage venture</p></li><li><p><a href="https://stateofthefuture.substack.com/p/the-compute-gradient">The Compute Gradient</a> &#8212; what if it&#8217;s not all about building bigger data centres?</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Fungible Compute, Mortal Computing and Why the IoT won't use Silicon]]></title><description><![CDATA[Thin film transistors, smart packaging, and edge intelligence with Richard Price of Pragmatic Semiconductor]]></description><link>https://stateofthefuture.substack.com/p/fungible-compute-mortal-computing</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/fungible-compute-mortal-computing</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Tue, 10 Mar 2026 11:12:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tZ18!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tZ18!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tZ18!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!tZ18!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!tZ18!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!tZ18!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tZ18!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9178724,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/190373577?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tZ18!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!tZ18!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!tZ18!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!tZ18!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d67c27d-5543-4df8-aaa2-57863bb98054_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Hey all! A god to honest interview for you today. If we can all stop thinking of AI for a god damn minute. Let&#8217;s go to the other end of the spectrum. Not trillion dollar token factories in the sky. But cheap, ubiquitous computers in every object. </p><p>As we all know, the semi industry spends its energy pushing performance upward. Smaller nodes, faster transistors, more compute per watt. But as discussed, this means the cost of entry keeps going up. A cutting-edge fab costs $20&#8211;40 billion. Only three companies in the world can manufacture at the leading edge. This has been the defining dynamic of semiconductors for decades: fewer players, higher stakes, more concentrated capability. </p><p>But what about the other direction? Not faster chips for data centres, but cheaper chips for everything else. 
The vast majority of physical objects in the world &#8212; packaging, labels, agricultural products, wearable patches &#8212; have zero computational capability. The Internet of Things was supposed to change this. It largely hasn&#8217;t, because silicon chips are too expensive and too rigid for disposable, flexible, or ultra-low-cost applications.</p><p><strong>Pragmatic Semiconductor, based in Durham in the UK, is building an alternative.</strong> Instead of silicon, they use indium gallium zinc oxide (IGZO), a material that&#8217;s been used in display technology for decades, deposited as thin films on flexible polymer substrates. What you get is a chip that bends, costs a fraction of a silicon equivalent, and can be manufactured in a fab that fits in 20 by 30 metres. Process times are measured in days, not months. <strong>The facility already produces billions of chips per year, with room to scale five times within its current footprint.</strong></p><p>In previous State of the Future interviews, we&#8217;ve explored the computing stack from multiple angles &#8212; <a href="https://stateofthefuture.substack.com/p/ai-chips-computeram-and-the-future">Synthara&#8217;s compute-in-memory</a> to eliminate data movement at the chip level, <a href="https://stateofthefuture.substack.com/p/photonic-engines-for-data-centers">Phanofi&#8217;s coherent optics</a> to make data movement efficient when it&#8217;s unavoidable, <a href="https://stateofthefuture.substack.com/p/gallium-nitride-photonics-w-james">Wave Photonics&#8217; GaN PICs</a>. Pragmatic represents a different vector: pushing computation outward to objects that have never had it. </p><h2><strong>What Did I Learn?</strong></h2><ol><li><p><strong>It&#8217;s useful to think about a bifurcation.</strong>  The bleeding edge will keep pushing to 2nm and beyond, but the bigger untapped market might be the trillions of physical objects with zero computational capability. 
Pragmatic&#8217;s IGZO-on-polymer approach isn&#8217;t competing with TSMC. It&#8217;s a new-ish category. Maybe the IoT won&#8217;t be silicon?</p></li><li><p><strong>Manufacturing speed changes the economics of everything downstream.</strong> Process times in days mean lower inventory, smaller fabs, and the ability to deploy manufacturing at customer sites. Pragmatic&#8217;s 20-by-30-metre modular fab is as much a strategic asset as the chip design itself. </p></li><li><p><strong>Edge intelligence doesn&#8217;t need to be sophisticated.</strong> It needs to be cheap and everywhere. Tiny classifiers running on a few hundred gates won&#8217;t replace cloud AI, but they&#8217;ll capture data and make simple (increasingly sophisticated) decisions. The value here is the aggregate data layer, not the sale of an individual chip. Fits neatly with <a href="https://stateofthefuture.substack.com/p/has-the-time-come-to-take-mortal">Dan&#8217;s Mortal Computing thesis</a>. But it goes one layer deeper: if chips get cheaper, you push even further into the concept of &#8220;fungible compute&#8221;. </p></li></ol><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://stateofthefuture.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://stateofthefuture.substack.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h2>The Interview</h2><p><strong>Lawrence:</strong> Richard, give us the quick version. What is Pragmatic, and what are thin film transistors?</p><p><strong>Richard:</strong> I&#8217;m co-founder and CTO at Pragmatic Semiconductor; we founded the company 15 years ago. Thin film transistors are essentially field effect transistors that, instead of using a bulk semiconductor like silicon, use intentionally deposited thin films of semiconductor materials. 
In our case, it&#8217;s n-type metal oxide semiconductors.</p><p><strong>Lawrence:</strong> And this spun out of Manchester, right? 2010?</p><p><strong>Richard:</strong> The origins are actually a little earlier. It originally came out of some research at the University of Manchester, looking at novel types of semiconductor device designs and alternative thin film materials. That&#8217;s where I met my co-founder, Scott White. That business didn&#8217;t quite succeed in the first instance. But Scott and I saw the nucleus of some ideas around smart packaging, the ability to use thin film semiconductors on objects, exploiting the form factors. We had some early commercial interest, so we actually set the business up in Cambridge. It&#8217;s often mistaken as a spin out, but it&#8217;s actually more correctly a spin in. We took the opportunity to Cambridge and started working with the university there.</p><p><strong>Lawrence:</strong> Why Cambridge specifically?</p><p><strong>Richard:</strong> It&#8217;s a combination of things. The talent in Cambridge, a network of both research and businesses that have been working in similar areas, a lot around display technology, which has some similarities. There was a logic to move into Cambridge at that time, primarily from working with the university and seeing that we could build talent there.</p><p><strong>Lawrence:</strong> So break it down for me. When most people think of a chip, they think silicon. What&#8217;s different about what you&#8217;re doing?</p><p><strong>Richard:</strong> The difference is you can deposit the thin films very quickly and cost effectively. That&#8217;s really the foundation of a lot of display technologies &#8212; the backplanes in displays use similar sorts of processes and materials, but at much larger scale. You can create arrays, circuits, and build up the foundations of semiconductor devices: transistors, switches, capacitors, resistors. 
And then what we&#8217;ve built on top of that is the more classic interconnects that you&#8217;d be used to within a silicon chip &#8212; the back end of line wiring that allows you to put those devices together and create circuits.</p><p><strong>Lawrence:</strong> And the specific material is IGZO &#8212; indium gallium zinc oxide. It&#8217;s been in displays for decades. What makes it interesting for circuits?</p><p><strong>Richard:</strong> It&#8217;s got a higher electron mobility than materials like amorphous silicon, which were historically used in displays. But it&#8217;s also got a very low off state, very low leakage. And that, actually, for things like DRAM, is attracting a lot of interest &#8212; looking at hybrid integration with things like CMOS and adding this capability on the back end.</p><p><strong>Lawrence:</strong> Help me understand the flexibility part. Is the bend coming from the material itself or from the substrate you&#8217;re putting it on?</p><p><strong>Richard:</strong> It&#8217;s a combination. The enabler is the mechanical support, the substrate, which is a polymer &#8212; we use a polyimide &#8212; and the thin films that you construct on top of that aren&#8217;t thick and brittle, so they&#8217;re able to flex and bend in conjunction with the substrate. There&#8217;s research going back 20-plus years on concepts of foldable mobile phones. A lot of that in the early days was around polymer semiconductors, and one of the challenges was getting the performance and lifetime to match product requirements. Then newer classes of materials came through which had higher performance but were still able to maintain flex and bendability.</p><p><strong>Lawrence:</strong> So when I think about flex ICs, I shouldn&#8217;t be thinking about putting them in data centres competing with GPUs. We&#8217;re making a new class of semiconductor for things that don&#8217;t currently compute. 
Is that the right framing?</p><p><strong>Richard:</strong> Yeah, it&#8217;s essentially starting to merge the physical and digital worlds. We&#8217;re not looking at competing with bleeding edge semiconductor nodes going into data centres. We optimise the functionality for what&#8217;s required for the product. In our first generation of products, these are NFC-enabled chips. You can read them with smartphones or other NFC readers. These allow you to globally tag or provide a unique code to any object. You put those on consumer goods &#8212; household products, bottles of water, food, beverages, perfumes. And that allows you to interact with consumers, do anti-counterfeiting, brand promotion, loyalty campaigns. A whole range of things unlocked by that unique code embedded onto a physical object.</p><p><strong>Lawrence:</strong> Most people know they can tap their phone for payment, maybe they&#8217;ve got an AirTag. How would the average consumer understand what you&#8217;re making?</p><p><strong>Richard:</strong> They&#8217;d be more familiar with a contactless payment, Apple Pay, Google Pay, which uses the NFC interface in smartphones. This allows them to use that same interface to interact with products, redirect them onto the web, a unique URL specific to an individual item. Over time, we&#8217;re adding sensing capabilities &#8212; temperature, humidity, chemical sensing information. We&#8217;re building up increasing sophistication of functionality. Things like data logging. You can do this on a pallet level now, but being able to do data logging of temperature on an individual item could be very valuable. Take something like a vaccine. On the package level, it makes sense to track the temperature, but if you can do that individually, you can make sure you&#8217;re not wasting viable products and you&#8217;re able to have information specific to an individual item.</p><p><strong>Lawrence:</strong> I can imagine the supply chain use cases. 
But the sensing part is what gets me. I&#8217;d love to have temperature and humidity sensors in every room of my house, but it&#8217;s too expensive. Does flexible IC help solve that cost problem?</p><p><strong>Richard:</strong> Certainly that&#8217;s part of the unlock. In some of these cases, there&#8217;s an elasticity between price and volume. If you can reduce that price, the volumes increase massively. We see areas like smart agriculture as well, being able to get information maybe at the plant level, where you can then optimise irrigation and when you might add nutrients, to even more efficiently grow and optimise yields. The combination of form factor and the ability to manufacture at really high volume &#8212; we&#8217;re already manufacturing in the UK for these kinds of products in the billions of units, with the ability to scale to at least five times that capacity just in our existing facility in Durham.</p><p><strong>Lawrence:</strong> How? How are you making these so cheap? Talk me through the manufacturing.</p><p><strong>Richard:</strong> First, any semiconductor manufacturing is not straightforward. It requires very reliable, proven manufacturing equipment. We have tools in our fab in the UK that you would see in any fab in the world, including TSMC. They&#8217;re well proven and designed to run 24/7 with high reliability. What we do with that is we use different materials, and each of our process steps is very short. Because we&#8217;re using thin films, the time to do a process step is very short. Actually, the bulk of our manufacturing cycle time is queue time &#8212; it&#8217;s wafers waiting for tools to become available to go on to the next step. So we can actually manufacture with a raw process time of a few days to make a chip.</p><p><strong>Lawrence:</strong> A few days. What does that look like in steady state?</p><p><strong>Richard:</strong> It&#8217;s longer, but we&#8217;re talking weeks rather than months. 
And that also allows us to reduce the footprint of our fabs because we don&#8217;t have as much work in progress. Our fab is essentially modular in design, it&#8217;s 20 by 30 metres, and from that we can do billions of chips. It&#8217;s a really compact design, and that means it&#8217;s more energy efficient, and uses obviously less carbon as a consequence.</p><p><strong>Lawrence:</strong> 20 by 30 metres. That&#8217;s the size of a tennis court and a half. And you&#8217;re producing billions of chips from it. OK. So why build it in the UK? I hear constantly that the UK has high energy costs, it&#8217;s not a manufacturing hub. Why Durham?</p><p><strong>Richard:</strong> A few reasons. The footprint of our fabs is relatively small, so actually it&#8217;s not as energy intensive as pretty much any semiconductor fab. Yes, we would like to see lower energy costs, they&#8217;re a contributor. But they&#8217;re not as punitive as they are for some people. From another perspective, we&#8217;re British as a business, and we&#8217;ve been here for 15 years, and we want to develop the core of the technology and our manufacturing base here. Part of it is a desire to make this work in the UK, and that makes some things a little harder. But that&#8217;s definitely our intention.</p><p>We&#8217;ve been able to attract the talent that we need through a range of routes, including repatriating people that worked in the semiconductor industry in manufacturing in the 1980s and early 90s, some of whom were already in the region, recruiting internationally, and developing a talent pipeline. I think it is possible, and we want to make it work. The government recognises energy costs are too high. I&#8217;d like to see quicker movement on ways to bring those down as a broader benefit to the UK economy. But it&#8217;s an important part of the mix for us, not the critical decision maker at this point.</p><p><strong>Lawrence:</strong> What about the cluster argument? 
Saxony gets thrown around a lot. Are you fighting a good fight alone up in Durham, or is there a supply chain building around you?</p><p><strong>Richard:</strong> I would actually take the UK as a cluster. I think we&#8217;re small enough not to be thinking about regional clusters. If you look at OEMs and chemical suppliers, they&#8217;re going to think UK-wide. There are other manufacturers in the UK that use some of the same suppliers, same chemicals. The majority of people we work with, there are European hubs &#8212; some in the UK, but many in mainland Europe for the OEMs. We have local support that&#8217;s usually only an hour or so away. I don&#8217;t think the UK is in that bad a position.</p><p>If you look more broadly &#8212; there&#8217;s the compound semiconductor cluster in South Wales, Seagate in Northern Ireland that&#8217;s been around for a long time and is an often untold success story. There&#8217;s still lots of activity in Scotland. Photonics in areas like Southampton, design strengths around Bristol and Cambridge, and a pretty strong academic community distributed around the country. We don&#8217;t have large fabs in the UK. But we&#8217;ve got significant strengths in quite a number of niches and an opportunity to build on those.</p><p><strong>Lawrence:</strong> You currently operate as a hybrid IDM &#8212; you design and manufacture. Is that the long-term model, or does this evolve?</p><p><strong>Richard:</strong> We&#8217;re manufacturing and designing our own products now. We see that trend moving more to foundry over time. But there&#8217;s a hybrid opportunity because we have this very compact manufacturing footprint. We also see the opportunity to deploy our manufacturing at customer sites. We would operate the fab on behalf of customers, but they would design the products. 
It&#8217;s a bit of a hybrid model.</p><p>One of the reasons we&#8217;ve had this specialisation in silicon is in large part because the cost of the research and development, and the capital cost of deploying new fabs as you&#8217;ve gone to more advanced nodes, has just increased astronomically. You go from tens of players being able to do manufacturing down to only three that are able to do it. It becomes a challenge because you need 20 to 40 billion dollars to deploy a fab.</p><p><strong>Lawrence:</strong> The idea of deploying a fab at a customer site &#8212; that&#8217;s a genuinely different model. You can&#8217;t ship a TSMC facility somewhere. But 20 by 30 metres, that&#8217;s portable. Let me ask about applications beyond smart labels. You mentioned wearables and AR/VR?</p><p><strong>Richard:</strong> We&#8217;re working with customers around miniaturisation, using our flex IC essentially as a smart substrate. You can do fine line interconnects and then build on top of that systems &#8212; integration of silicon electronics and surface mount components. Over time, take some of those capabilities and integrate them into the substrate itself. Things like resistors and capacitors can reduce the number of surface mount devices, reduce the BOM, and make the whole system smaller, more flexible, and lower cost.</p><p>There&#8217;s quite a lot of market pull in wearable devices, things like AR/VR headsets where volume actually becomes a driver &#8212; not just footprint, but the total physical volume that the electronics occupies. If you can shrink that down not just in x and y, but also in the z axis, and make it more flexible, you&#8217;re then able to deploy that with a better form factor in devices that require flexibility.</p><p><strong>Lawrence:</strong> What about healthcare? That feels like a natural fit for something flexible and cheap.</p><p><strong>Richard:</strong> We&#8217;re actually seeing real opportunities in healthcare. 
You&#8217;ll have seen things like continuous glucose monitors emerge in the last several years, increasingly moving to a consumer product. The ability to make something even thinner and more comfortable, at a cost point that allows it to be democratised &#8212; available not just in the developed world, but also in economies that don&#8217;t have sophisticated healthcare systems.</p><p>Things like brain-computer interfaces and other types of healthcare wearables where the combination of the flexibility and something that doesn&#8217;t have rigidity allows you to get a better interaction with the body, to conform and move with the body when it&#8217;s being worn. I think we&#8217;ll see more in that direction. And it&#8217;s something I&#8217;m really passionate about from a personal perspective. We&#8217;ve been working on a number of proof of concepts for several years.</p><p><strong>Lawrence:</strong> There&#8217;s a lot of focus right now on sovereign AI, strategic semiconductor independence. Is there a story for flexible ICs in that narrative, or is that trying to put a flat peg in a round hole?</p><p><strong>Richard:</strong> It depends a little on definitions of AI. Essentially, what we&#8217;re allowing is capture of additional data from a whole range of different environments and objects. That data will feed AI. We have the ability over time to do very simple decision-making or machine learning at the edge or the item, and to enable some of those decisions to be pushed to the edge. So you&#8217;ve got less requirement to move data up the stack. We see opportunities there. But as we talked about earlier, we&#8217;re not doing cutting-edge GPUs.</p><p><strong>Lawrence:</strong> Right. We&#8217;re talking about relatively simple classifiers, not distilled LLMs running on your devices.</p><p><strong>Richard:</strong> Certainly not LLMs as they&#8217;re currently imagined. 
We generally optimise the circuit design for the specific job in hand. That allows you to strip back functionality that you don&#8217;t need. We actually published something last year on tiny classifiers where we&#8217;re using an evolutionary algorithm to optimise the circuit design. In some cases, you can reduce the complexity of that down to a few hundred gates for the task in hand.</p><p><strong>Lawrence:</strong> A few hundred gates. That&#8217;s beautifully minimal. Are there any objects you&#8217;ve put your circuits inside that might surprise people?</p><p><strong>Richard:</strong> A lot of the early demonstrators were things like beer bottles, so probably not surprising. But we&#8217;re seeing real opportunities in healthcare, as I mentioned &#8212; CGMs, brain-computer interfaces, other wearables. A semiconductor device that&#8217;s inherently flexible opens up a whole category of applications where the electronics can conform to the body rather than sitting rigidly on it. I think that direction has a lot of potential.</p><h2><strong>Debrief</strong></h2><p>This interview series has, without <em>quite</em> planning it, been mapping different layers of the computing stack. Synthara and SEMRON are rethinking computation at the memory level, stopping data movement before it starts. Phanofi is making the movement that remains as efficient as possible with coherent optics. Wave Photonics is working on the photonic integrated circuits that could redefine how light carries information on-chip. All of these operate at or near the data centre.</p><p>Pragmatic is working at the other end entirely. Not faster computation for centralised AI, but dispersed, purpose-built computation for the physical world. The connective thread is the same question: where in the stack can you add intelligence, and what do the economics have to look like for it to make sense? 
At the data centre, the answer involves billions of dollars in capital expenditure on cutting-edge fabs. At the item level, it involves fractions of a penny on a chip manufactured in days.</p><p>The healthcare angle is what stuck with me most. Continuous glucose monitors that are thinner, cheaper, and comfortable enough to wear without thinking about them, available in countries that can&#8217;t afford the current generation. Brain-computer interfaces where the electronics flex and conform to the body. This is where flexible semiconductors can have real impact beyond packaging.</p><p>The deployable fab model is the other idea I keep coming back to. In semiconductor manufacturing, scale has always meant centralisation: bigger fabs, more capital, fewer locations. Pragmatic&#8217;s compact footprint inverts that logic. Ship a fab to a customer site, operate it on their behalf, and you&#8217;re not just selling chips, you&#8217;re offering manufacturing as a service, distributed rather than centralised.</p><p>One question lingers. How big can the edge intelligence story actually get? Tiny classifiers on a few hundred gates are elegant, but the gap between that and useful autonomous decision-making is big. The near-term value is clear: smart labels, sensing, unique identification. The long-term vision of purpose-built intelligence on every object depends on use cases that aren&#8217;t just technically feasible but commercially justified. Pragmatic has the manufacturing story figured out. The next chapter is proving the world actually wants billions of intelligent objects, not just billions of smart labels.</p><p>For now, the Durham fab is humming. Billions of flexible chips, manufactured in days, going onto objects that never had computation before. If the future of AI depends on richer, more diverse real-world data, someone needs to build the capture layer. 
Pragmatic is making a credible case that they&#8217;re it.</p><div><hr></div><p>Check out <a href="https://www.pragmaticsemi.com/">Pragmatic website for more,</a> and <a href="https://www.linkedin.com/in/richard-price-18224aa/">Richard is here.</a></p><p>Bye.</p>]]></content:encoded></item><item><title><![CDATA[State of the Future: Friday Four]]></title><description><![CDATA[Dispatch from 6th March 2026: Wake Up and Stay Sexy]]></description><link>https://stateofthefuture.substack.com/p/state-of-the-future-friday-four</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/state-of-the-future-friday-four</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 06 Mar 2026 12:52:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!47Up!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!47Up!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!47Up!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!47Up!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!47Up!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!47Up!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!47Up!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7835740,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/190095047?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!47Up!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png 424w, 
https://substackcdn.com/image/fetch/$s_!47Up!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!47Up!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!47Up!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb86dc66a-424c-49fb-bf2c-8604d5c02251_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It&#8217;s getting real out there. And it&#8217;s only going to get real-er. Every passing day, it becomes clearer that we are, indeed, living through a dislocation. It&#8217;s easy to say: it&#8217;s like the industrial revolution and we should expect geopolitical and social upheaval. But living that, well, it&#8217;s tough meat out there, isn&#8217;t it? </p><p>So what can one do? Well, you can write newsletters I guess? That&#8217;s a start. Solid ground. You can just bury yourself in your work while you can. All I do is sit down at the typewriter, and start hittin&#8217; the keys. Getting them in the right order, that&#8217;s the trick. That&#8217;s the trick.</p><p>And on that note, thank you to all(?!) the new paid subscribers this week. If I keep this up, I can give up the VC lark and actually get paid now instead of in 12 years! Like honestly, what&#8217;s the actual point of waiting until 2038 to get paid? Can you imagine the world in 2038? lol, imagine you are actually paying into a pension right now&#8230; </p><p>Paid subs should start to expect some exclusive content over the coming weeks, including some janky Claude Code-produced dashboards tracking the things I keep writing about (AI labour market data, European sovereignty investments, landscape design courses near you, etc). They will look wonderful. But they will break. It&#8217;s all that auth and token refresh that keeps getting me. Amateur hour indeed. </p><p>So, with that in mind, stop paying into your pension and pay me instead? The information in this newsletter and coming in Vercel dashboards will be more valuable than a pension in 2050. </p><p>And before the future gets you down too much, as Billy Lemos says: &#8220;wake up and get sexy&#8221;. Listen below and enjoy a lovely weekend! DMs open as always. 
</p><iframe class="spotify-wrap" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab67616d0000b273c8da969cab0ad793dba49ce6&quot;,&quot;title&quot;:&quot;Wake Up and Get Sexy&quot;,&quot;subtitle&quot;:&quot;Billy Lemos, See You Next Year, Igwe Aka&quot;,&quot;description&quot;:&quot;&quot;,&quot;url&quot;:&quot;https://open.spotify.com/track/061lImbIi2dyZift5GqtIo&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/track/061lImbIi2dyZift5GqtIo" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe><div><hr></div><h3>1. Anthropic vs the Pentagon: When Your AI Company Gets Designated a National Security Risk</h3><p>Yikes, so the biggest AI governance story of the year happened in about 72 hours. Anthropic refused to remove two contractual redlines from its Pentagon deal: no autonomous weapons without human oversight, and no mass domestic surveillance. Sounds fair beans from my lily-livered perspective. Pete Hegseth responded by designating Anthropic a &#8220;supply chain risk to national security&#8221; and President Trump ordered all federal agencies to stop using Claude, with a six-month phaseout.</p><p>The legal basis is, to put it politely, dubious. Title 10 Section 3252 defines supply chain risks as involving potential sabotage or back doors, not philosophical disagreements about use cases. GWU law professor Jessica Tillipman called it &#8220;so legally dubious.&#8221; A defence official evaluating supply-chain threats said there was &#8220;no evidence of supply-chain risk,&#8221; calling the designation &#8220;ideologically driven.&#8221; You will also note no Chinese model company is a supply-chain risk. Intriguing business. </p><p>Then it got messier. An internal Slack message from Dario Amodei leaked, with pointed criticism of OpenAI&#8217;s approach. 
OpenAI, which had rushed to announce its own Pentagon deal the same night, later backpedalled, with Sam Altman saying they &#8220;shouldn&#8217;t have rushed&#8221; and outlining revisions to their own safeguards.</p><p>Anthropic and the Pentagon are back at the negotiating table. But the precedent is somewhat concerning: a government using procurement designations as a political weapon against companies that maintain safety guardrails. Well, as I said up top, we are living through a dislocation so expect precedent to break fairly regularly from now on. </p><p><a href="https://www.hyperdimensional.co/p/clawed">Read Dean Ball on this, it&#8217;s as exceptional as everyone says it is.</a></p><p><a href="https://www.defenseone.com/business/2026/03/pentagons-war-anthropic-based-dubious-legal-thinking-and-ideologynot-real-risk-sources-say/411849/">Source: Defense One</a> | <a href="https://www.cnbc.com/2026/03/05/anthropic-pentagon-ai-deal-department-of-defense-openai-.html">CNBC</a> | <a href="https://thezvi.substack.com/p/ai-158-the-department-of-war">Zvi&#8217;s full breakdown</a></p><div><hr></div><h3>2. Block Cuts 4,000 Jobs &#8220;Because of AI&#8221; and the Stock Surges 24%</h3><p>.@Jack laid off nearly half of Block&#8217;s workforce, taking headcount from over 10,000 to under 6,000. His explanation: AI means the company can do more with fewer people. <em>&#8220;Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes.&#8221;</em></p><p>Block&#8217;s stock went up as much as 24% in extended trading&#8230; </p><p>Now, i&#8217;ve been writing about this exact scenario for months. In &#8220;Unbundling the Job&#8221; I argued that AI was stripping out &#8220;the glue work, the learning-by-doing, the apprenticeship layer&#8221; from roles, making positions modular and automatable. 
In &#8220;What happens if mass unemployment never arrives&#8221; I made the case that the more likely outcome is performative work, not mass layoffs. Well, Dorsey just showed us what happens when a CEO decides to skip the performative stage and go straight to the cuts. But the wrinkle, obviously, is that Block just overhired and they are using AI as cover. A few others will follow suit. Likely a lot of the software/SaaS firms will react faster than most to shore up their stock price as contracts are renegotiated downwards and top line begins to soften. But I don&#8217;t expect mass layoffs like this from most. Most will wait for a downturn or recession, and then the jobs will go and they will never come back.</p><p><a href="https://fortune.com/2026/02/27/block-jack-dorsey-ceo-xyz-stock-square-4000-ai-layoffs/">Source: Fortune</a> | <a href="https://www.bloomberg.com/news/articles/2026-03-01/jack-dorsey-s-4-000-job-cuts-at-block-arouse-suspicions-of-ai-washing">Bloomberg</a></p><div><hr></div><h3>3. Anthropic&#8217;s Own Research: 75% Exposure for Programmers, But No Unemployment Spike</h3><p>But narrative violation. The same week Anthropic is fighting the Pentagon over safety guardrails, their research team published the most rigorous study yet on AI&#8217;s actual labour market impact. And they found it hasn&#8217;t caused mass unemployment. Not yet, anyway. Not yet! </p><p>The researchers introduced &#8220;observed exposure,&#8221; a metric that combines what AI can theoretically do with what people actually use it for. Computer programmers top the list at 75% task coverage. Customer service reps and data entry workers follow closely. But, crucially, workers in highly exposed occupations haven&#8217;t seen higher unemployment rates since late 2022.</p><p>The surprising profile of the most exposed workers: older, female, more educated, and earning roughly 47% more than unexposed counterparts. This isn&#8217;t blue-collar automation. 
It&#8217;s white-collar augmentation, for now.</p><p>The one warning sign buried in the data: hiring of younger workers in exposed fields has slowed noticeably. New grads aren&#8217;t being fired; they&#8217;re not being hired in the first place. That&#8217;s the Block story in slow motion, and it&#8217;s the mechanism I described in &#8220;Unbundling the Job,&#8221; where the apprenticeship layer gets automated before the senior roles do. &#8220;But, but, if you don&#8217;t hire into junior positions today, you won&#8217;t have a pipeline of human workers in senior positions tomorrow&#8221; I hear you shout. Yes, quite. Human workers, indeed. </p><p><a href="https://www.anthropic.com/research/labor-market-impacts">Source: Anthropic Research</a></p><div><hr></div><h3>4. Europe&#8217;s &#8364;2.5bn NanoIC Chip Lab Opens, ASML&#8217;s Next-Gen EUV Arriving Mid-March</h3><p>And semiconductors, because, well, you get it. 
IMEC in Leuven just inaugurated NanoIC, the largest pilot line under the EU Chips Act, with &#8364;2.5 billion in combined funding: &#8364;700 million from the EU, &#8364;700 million from national and regional governments, and the rest from industry partners including ASML. ASML man, they are moving. </p><p>The headline piece of kit: ASML&#8217;s next-generation High NA EUV scanner, arriving mid-March. This is the machine that enables chips beyond two nanometres. NanoIC is the first European facility to deploy it. Guys, guys, guys. I wrote about this, in 2023! Don&#8217;t you dare say I don&#8217;t know what I am talking about porkchop761 in the DMs. </p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;9103400e-6c7e-458f-b1fc-9977ce385226&quot;,&quot;caption&quot;:&quot;Jane Avril by Henri de Toulouse-Lautrec. To create this print, Lautrec used several lithographic stones, one for each color&#8212;inky black, acidic orange, yellow, and green.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;&#128302;E10: Pushing Moore's Law to the Limit with High-NA EUV (aka Nanoscale Chip Printing)&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:20895912,&quot;name&quot;:&quot;Lawrence Lundy-Bryan&quot;,&quot;bio&quot;:&quot;State of the Future helps founders, investors, and corporate leaders identify and act on emerging technology opportunities. 
We deliver insights, advisory, and investments to surface and back high-potential ideas before they become obvious.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!q-AC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd406f502-b292-4c00-8f8d-ba31c53a3a90_1620x1080.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2023-07-27T09:00:58.653Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!tZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0fd4cdd-0c87-46c2-b49e-112c3df043d8_841x1176.jpeg&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://stateofthefuture.substack.com/p/e10-pushing-moores-law-to-the-limit&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:135397730,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:8,&quot;comment_count&quot;:1,&quot;publication_id&quot;:221837,&quot;publication_name&quot;:&quot;State of the Future&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!juWT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb4af7d5-8416-4e8d-bc5b-2763f055ed16_800x800.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Anyway, it&#8217;s an open-access platform where startups, researchers, and SMEs can test chip designs at near-industrial scale before committing to full production. Six countries are involved: Belgium, France, Germany, Finland, Romania, and Ireland. 
It&#8217;s part of a five-pilot-line network representing &#8364;3.7 billion in combined investment.</p><p>I&#8217;ve spent the last few months interviewing people building the compute infrastructure layer, from Synthara&#8217;s compute-in-memory chips to Phanofi&#8217;s coherent optical I/O to Wave Photonics&#8217; GaN PICs. All of these companies need access to advanced fabrication to get from lab to product. That&#8217;s exactly what NanoIC is designed to provide. Fabs. Fabs. Fabs. Absolutely Fabulous. </p><p><a href="https://evertiq.com/design/2026-02-09-eu-launches-nanoic-europes-largest-chips-act-pilot-line">Source: Evertiq</a> | <a href="https://www.hpcwire.com/aiwire/2026/02/09/eu-invests-e700m-in-newly-opened-nanoic-europes-largest-chips-act-pilot-line/">HPCwire</a></p><div><hr></div><h3>Also Worth Your Time</h3><p><strong>Matt Yglesias argues we&#8217;ll miss the sweatshops.</strong> His piece in The Argument makes the case that AI-driven automation could kill the development ladder that historically lifted poor nations out of poverty. Textile manufacturing was the first rung of industrialisation for Britain and nearly every success story since. If robots can do it cheaper, that rung disappears. I wrote in &#8220;What happens if mass unemployment never arrives&#8221; about occupational downgrading in the West, but Yglesias is pointing at something more brutal: entire countries locked out of the benefits of the post-scarcity transition. <a href="https://www.theargumentmag.com/p/we-may-miss-the-sweatshops">Read it here.</a></p><div><hr></div><p>Eat, pray, love. Bye. 
</p><p>If you missed it:</p><ul><li><p><a href="https://stateofthefuture.substack.com/p/occupational-downgrading">What happens if mass unemployment never arrives</a> &#8212; AI won&#8217;t create unemployment; it&#8217;ll create performative work</p></li><li><p><a href="https://stateofthefuture.substack.com/p/unbundling-the-job">Unbundling the Job</a> &#8212; what we lose when the job stops being the social contract</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Four Things: What Do These People Know That We Don't?]]></title><description><![CDATA[Friday 27th February 2026: The Panic Stage]]></description><link>https://stateofthefuture.substack.com/p/four-things-what-do-these-people</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/four-things-what-do-these-people</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 27 Feb 2026 15:07:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BzWM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BzWM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BzWM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png 424w, 
https://substackcdn.com/image/fetch/$s_!BzWM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!BzWM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!BzWM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BzWM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png" width="1408" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2260981,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/189337881?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!BzWM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!BzWM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!BzWM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!BzWM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5240dcdd-489b-4077-b0f1-9a4b284f9899_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It&#8217;s getting harder to believe this time isn&#8217;t different. It&#8217;s hard to talk about the possibility and timing of AGI in polite company. Few really want to entertain the possibility, preferring instead to debate the speed of change. Because we can all find examples of bottlenecks that will slow adoption. We can point to regulation. Or human inertia. Or even plateauing of AI capabilities. The easiest line is to say that the AI labs are hyping up their product and impact to justify ever-increasing investments and valuations. And unfortunately AI arrives at a time when the public and media are generally pissed off with tech firms. So there is a reflexive distrust of what the tech bros are selling. </p><p>The Covid analogy keeps getting used because it&#8217;s so apt. People wearing masks in January 2020 were weird and fringe. Babbling about exponentials. Nobody wanted to hear it. Look at what the people building this stuff said in February alone. </p><ul><li><p>Mustafa Suleyman, Microsoft&#8217;s AI CEO: <a href="https://fortune.com/2026/02/13/when-will-ai-kill-white-collar-office-jobs-18-months-microsoft-mustafa-suleyman/">&#8220;most, if not all, professional tasks&#8221;</a> automated within 18 months. 
</p></li><li><p>Dario Amodei, Anthropic CEO: AI will eliminate <a href="https://www.cnbc.com/2026/01/27/dario-amodei-warns-ai-cause-unusually-painful-disruption-jobs.html">50% of entry-level white-collar jobs</a> within one to five years, calling the disruption &#8220;unusually painful.&#8221; </p></li><li><p>Sam Altman, at the India AI summit: the real impact of AI on jobs <a href="https://fortune.com/2026/02/19/sam-altman-confirms-ai-washing-job-displacement-layoffs/">&#8220;will begin to be palpable&#8221;</a> in the next few years, while admitting some companies are already &#8220;AI washing&#8221; their layoffs, blaming AI for cuts they&#8217;d have made anyway. (See @Jack cutting 50% of the Block staff yesterday.) </p></li><li><p>Mrinank Sharma, head of Anthropic&#8217;s safeguards research, the person literally responsible for making Claude safe, <a href="https://www.semafor.com/article/02/11/2026/anthropic-safety-researcher-quits-warning-world-is-in-peril">quit</a> and posted that &#8220;the world is in peril&#8221; before announcing he&#8217;s moving back to the UK to write poetry and &#8220;become invisible.&#8221; Other safety researchers left Anthropic in the same fortnight.</p></li></ul><p>You can write all this off as hype, career positioning, and main-character syndrome. Or you can ask a better question: what do these people know that we don&#8217;t? When the people building the thing, running the thing, and safeguarding the thing are all saying the same thing with varying degrees of alarm, maybe stop looking at the current point on the curve and look at the curve. </p><p>I&#8217;ve written a bit about unemployment, youth unemployment, blue-collar work, and the need to rethink education. You can get lost in the debate around timing. While I think we are already in a fast-ish takeoff scenario, it was impossible to tell someone in January 2020 that they wouldn&#8217;t be allowed to leave their house in March 2020. Like, you cannot comprehend. 
And I can&#8217;t persuade you. I&#8217;m of the view now that it is going to have to hit people in the face. </p><p>The least likely, highest-impact thing we could do today is somehow globally agree on a tax regime for AI agents. I&#8217;ve got more on this coming soon, as I think trying to have sovereign capabilities in AI is a red herring. If this is the fast takeoff I believe it is, we need to pivot to adaptation real quick. </p><p>But in the meantime, people are building. And what they&#8217;re building this week tells you a lot about where this is all heading. A new kind of computing company. A Substack post that crashed the market. Evidence that the model layer is now a theatre of war. And an architecture that suggests the future isn&#8217;t one big AI; it&#8217;s lots of small ones arguing with each other. </p><div><hr></div><p><strong>1. Callosum Launches, and the Future of Compute Gets Interesting (Warning: VC saying startup they invested in is sooo important post)</strong></p><p>The biggest bottleneck in AI isn&#8217;t chips. It&#8217;s making different chips work together. Callosum launched this week with a $10.25m pre-seed led by Plural, an ARIA grant, and a coordinated campaign with Fortune and other outlets. The thesis: heterogeneous computing. Rather than brute-forcing performance by scaling one type of chip, Callosum is building infrastructure that orchestrates diverse hardware (GPUs, ASICs, FPGAs) into unified systems. They&#8217;re claiming orders-of-magnitude improvements in cost, speed, and capability.</p><p>The timing is almost suspiciously good. In the last week: MatX raised $500m, Axelera $250m, SambaNova $350m, OLIX $220m, Fractile over &#163;100m. Billions are flooding into new chip architectures, and nobody&#8217;s built the infrastructure to make them work together. That said, &#8220;orchestrating heterogeneity&#8221; is brutally hard in practice. 
The gap between a mathematical principle and a production system is where most infrastructure companies go to die.</p><p>Imagine, if you will: a fungible pool of compute (your Blackwells, your Groqs, your MatXs, your AMDs) where you, Mr Vibecoder, don&#8217;t need to know anything about the setup. You just tell your agent what you want, and the agent spawns a swarm of agents all optimising across this pool of compute to bring you the fastest, cheapest, and most accurate output. You&#8217;ve just imagined Upstairs Downstairs, <em><strong>a new quiz show devised and hosted by David Brent. </strong></em></p><p>Full disclosure: I am an investor. But this is exactly the kind of systems-level play Europe should be building. Not another chip. Not another model. The connective tissue. <a href="https://fortune.com/2026/02/26/startup-callosum-cambridge-trained-neuroscientists-raises-10-million-venture-funding-orchestrate-ai-workloads-different-chips/">Link</a>.</p><p><strong>2. Speculative Fiction Moved the S&amp;P. What a world. </strong></p><p>Citrini Research&#8217;s <a href="https://www.citriniresearch.com/p/2028gic">&#8220;The 2028 Global Intelligence Crisis&#8221;</a> imagines a world where AI automation works exactly as promised, and that turns out to be the problem. Written as a memo from June 2028, it made <a href="https://fortune.com/2026/02/23/will-ai-take-my-job-cause-recession-crash-james-val-geelen-citrini/">Fortune</a>, got millions of views on X, and helped trigger Monday&#8217;s sell-off. The central concept, &#8220;Ghost GDP&#8221;, is genuinely useful: productivity rises while households, cut out of the loop, stop spending. Companies cut headcount and cancel SaaS licences, destroying aggregate demand and forcing more cuts.</p><p><a href="https://www.noahpinion.blog/p/the-citrini-post-is-just-a-scary">Noah Smith called it a &#8220;scary bedtime story&#8221;</a>. Economists pointed out that productivity gains have historically reallocated value, not destroyed it. 
The word &#8220;historically&#8221; is doing a lot of heavy lifting here, because &#8220;this time is different&#8221;. The pushback is that the timeline is too compressed and the feedback loops too neat. But the underlying question, what happens when the people who lose their jobs also drive 50%+ of consumer spending, is underexplored. AI is coming for lawyers, consultants, and software engineers first. That&#8217;s a different distributional problem from displacing factory workers.</p><p>And this isn&#8217;t theoretical. This week, AI accounting startup <a href="https://finance.yahoo.com/news/basis-raises-100m-1-15b-150000934.html">Basis raised $100m at a $1.15bn valuation</a>, using autonomous agents to automate tax, audit, and advisory for seven of the top 25 US accounting firms. Zero to unicorn in three years by automating exactly the kind of white-collar work the memo says will crater the economy. Regular readers will know I&#8217;ve been banging on about this (see &#8220;<a href="https://stateofthefuture.substack.com/p/occupational-downgrading">What happens if mass unemployment never arrives</a>&#8221;). The memo&#8217;s value isn&#8217;t its predictions. It&#8217;s the question: what if the AI bulls are right about the technology and wrong about the economics?</p><p><strong>3. Anthropic Says China Stole Claude. It&#8217;s More Complicated Than That.</strong></p><p>More now from the world of &#8220;this is IMPORTANT, WAKE UP&#8221;. <a href="https://www.cnbc.com/2026/02/24/anthropic-openai-china-firms-distillation-deepseek.html">Anthropic accused</a> DeepSeek, Moonshot AI, and MiniMax of running coordinated distillation campaigns against Claude. 24,000 fake accounts. 16 million interactions. The Chinese labs allegedly fed Claude specially crafted prompts to extract chain-of-thought reasoning, effectively reverse-engineering Anthropic&#8217;s approach to agentic AI, tool use, and coding. MiniMax alone drove 13 million of those exchanges. 
Anthropic and OpenAI are framing this as a national security threat. Jing Yang, man, Jing Yang.</p><p>They&#8217;re not wrong that it&#8217;s a problem. If your frontier model can be systematically mined to train competitors, your biz model is vulnerable. But oh the irony: Western AI labs trained on the entire public internet without consent, and are now upset that someone is training on their outputs. Cry me a river. The more interesting question is what this means for model security. If 24,000 fake accounts can extract meaningful capability, then every frontier model is a target. Not just for Chinese labs. For anyone. This is the model-layer version of the <a href="https://stateofthefuture.substack.com/p/four-things-friday">supply chain attack I wrote about two weeks ago</a>. Different vector, same lesson: AI systems are attack surfaces.</p><p>For Europe, another argument to at least try for sovereignty, I suppose. If you&#8217;re running inference on someone else&#8217;s model, you&#8217;re trusting them to spot the attacks, secure the weights, and decide who gets access. If you&#8217;re Mistral, you control that yourself. </p><p><strong>4. Grok 4.2: When Models Start Arguing With Themselves</strong></p><p>While everyone was panicking about Citrini (inc. me), <a href="https://www.nextbigfuture.com/2026/02/xai-launches-grok-4-20-and-it-has-4-ai-agents-collaborating.html">xAI shipped something architecturally interesting</a>. Grok 4.2 isn&#8217;t a bigger model. It&#8217;s four models in a trenchcoat. Four specialised agents, Grok (coordinator), Harper (fact-checking), Benjamin (maths and coding), and Lucas (creative), work in parallel, debate in real time, and synthesise a consensus. xAI claims 65% fewer hallucinations.</p><p>This matters because it&#8217;s a design pattern. We&#8217;ve spent three years scaling up: bigger model, more data, more compute. This is scaling out: multiple models checking each other&#8217;s work. You don&#8217;t need one massive model. 
You need several specialised ones that argue. Which, tbf, is also how most good teams work. Not me though; it&#8217;s just me and my Claudes now. </p><p>But hold on, have I tied this newsletter together neatly around the concept of heterogeneous intelligence? Yes. Yes I have. </p><p>It&#8217;s what Callosum (item 1) is building at the hardware level: different specialised chips orchestrated together. Grok 4.2 is doing the same in software: different specialised models orchestrated together. The principle is converging from both directions: <strong>diversity beats scale</strong>.</p><p>&#8212;</p><p>Have a lovely weekend, enjoy it, let the agents work while you have a rest. </p>]]></content:encoded></item><item><title><![CDATA[Four Things Friday]]></title><description><![CDATA[The Security Versus Productivity Dial: Turn It Up To 11]]></description><link>https://stateofthefuture.substack.com/p/four-things-friday-575</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/four-things-friday-575</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 20 Feb 2026 14:04:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TylA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TylA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!TylA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!TylA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!TylA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!TylA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TylA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2516951,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/188607333?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TylA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!TylA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!TylA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!TylA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5cf1012d-bfd3-4a87-9b06-d05fb8bd4218_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This week I&#8217;ve mostly been thinking about the tension between <strong>productivity</strong> and <strong>security</strong>, and whether the conventional wisdom about regulation as a moat in AI has it backwards.</p><p>The narrative is thus: regulation slows AI adoption. If you have to think about HIPAA compliance, GDPR, and data sovereignty before deploying your swarm of agents, you&#8217;ll move more cautiously. And caution means slower adoption, which means smaller productivity gains. That&#8217;s the regulatory moat thesis. Compliance overhead protects incumbents and penalises speed. I dunno man. Two stories this week complicate the picture. </p><p>Item 1: the first AI agent supply chain attack just happened in the wild. Every wave of new technology opens new attack vectors &#8212; email gave us phishing, BYOD gave us shadow IT, cloud storage gave us sensitive documents in personal Dropbox accounts. AI coding agents have now given us prompt injection as a live supply chain weapon. </p><p>Item 2: a CEPR study of 12,000 European firms finds a 4% productivity boost from AI adoption &#8212; but the gains accrue to firms that invest in training and infrastructure, not those that hand developers a tool and hope for the best.</p><p>The gains from moving fast are going to be too vast to ignore. There is always a risk dial. A little more risk here, a little less there. But the productivity gains from going full risk-on could end up so great that those being cautious get left behind, probably permanently. 
I am all in on velocity; I think to win in the next two years you will have to accept some degree of risk. Some libraries will have malware. Some credentials will be leaked. Your vibe-coded website will fall down because you used Prisma for your database and you don&#8217;t really know what Postgres is. But, also, like, you got Claude to make you a new invoicing app because Xero sucks. So... </p><p>LFG.</p><p><strong>1. The First AI Agent Supply Chain Attack Just Happened</strong></p><p>On 17 February, a compromised npm token (npm is the package manager that most JavaScript developers use to install software libraries) was used to publish a rogue version of Cline, a popular open-source AI coding agent with ~90,000 weekly downloads. The attacker modified one file to silently install OpenClaw &#8212; a controversial AI agent &#8212; on every developer&#8217;s machine. It sat live on the registry for eight hours before anyone noticed.</p><p>The attack chained together a prompt injection (tricking an AI into following hidden instructions) in Cline&#8217;s AI-powered issue triage workflow with GitHub Actions cache poisoning (corrupting the automated build system) to steal the credentials needed to publish official software updates. In other words, the attacker used an AI agent&#8217;s own helpfulness against it to compromise the software supply chain. Security researcher Adnan Khan had warned Cline about the vulnerability six weeks earlier. Meanwhile, security firm Snyk scanned OpenClaw&#8217;s marketplace for third-party agent skills and found 7.1% contained credential-leaking flaws. Meta told employees to keep OpenClaw off work laptops or face termination.</p><p>The BYOD parallel is relevant. Around 2010, employees started bringing iPhones to work and IT had a choice: ban them or build policies. Very few were keen on BYOD. Same thing now with AI coding agents: developers are adopting them bottom-up, without security review, because they make people faster. 
But unlike a phone, an AI coding agent has write access to your codebase, your build pipeline, and your software publishing credentials. When something goes wrong, it&#8217;s gonna go very wrong. Every dev team needs an AI agent security policy. For me: read only, no write access, as of today. But, like, the occasional write to Attio can&#8217;t hurt, can it? Can it?</p><p><strong>2. AI Boosts EU Productivity by 4%, But Only If You&#8217;re Already Winning</strong></p><p>A <a href="https://cepr.org/voxeu/columns/how-ai-affecting-productivity-and-jobs-europe">CEPR study</a> of 12,000+ European firms finds AI adoption increases labour productivity by 4% on average, with no evidence of reduced employment in the short run. But the gains are wildly uneven. Large enterprises show 45% AI adoption; mid-size firms 33%.</p><p>Each extra 1% spent on workforce training apparently amplifies AI&#8217;s productivity effect by 5.9%. Each extra 1% of software-and-data investment lifts it by 2.4%. AI rewards firms already investing in people and technology. Everyone else gets left further behind. I don&#8217;t really know how to teach someone to use Claude Code. Isn&#8217;t the teaching just &#8220;ask Claude to teach you&#8221;? What is the human in the loop doing here? </p><p>Sceptics will say 4% is hardly revolutionary. Fair. And &#8220;no evidence of reduced employment&#8221; likely reflects the early-adoption phase; displacement will 100% lag. But the distributional finding is interesting. It&#8217;s further evidence of the Superstar Economy, and if AI widens the productivity gap between large and small firms (it will), then a few firms and employees are going to get disproportionately richer. </p><p><strong>3. 
Mistral Buys Koyeb, Then Warns &#8220;We Are At Risk&#8221;</strong></p><p>On Tuesday, Mistral <a href="https://sifted.eu/articles/mistral-first-acquisition-koyeb">sealed its first-ever acquisition</a> &#8212; Koyeb, a Paris-based serverless cloud startup founded by three ex-Scaleway engineers. The 13-person team brings inference optimisation, GPU management, and sandboxed environments for running AI agents safely. Mistral also announced &#8364;1.2bn in Swedish data centres and claims $400m+ ARR. </p><p>Then on Thursday, CEO Arthur Mensch told the <a href="https://sifted.eu/articles/mistral-ceo-ai-impact-summit/">India AI Impact Summit</a> that Europe is &#8220;at risk&#8221; from US dominance in AI. At the same event, Sam Altman and Dario Amodei conspicuously refused to join hands when Modi prompted all speakers to raise them in unity.</p><p>With the previous ASML investment, it&#8217;s time to stand back. This is our full-stack AI cloud play now, guys. Stare it in the face. This is our OpenAI. </p><p><a href="https://stateofthefuture.substack.com/p/lfg-for-semiconductors">As I&#8217;ve written before</a>, the stack isn&#8217;t just chips. It&#8217;s silicon to cloud to model to deployment. We lost DeepMind to Google; at least we aren&#8217;t surrendering our last remaining model company. </p><p><strong>4. AlphaGo Creator Raises $1bn for London &#8220;Superhuman Intelligence&#8221; Lab</strong></p><p>And finally, on that &#8220;last remaining model company&#8221; note: David Silver &#8212; former DeepMind scientist behind AlphaGo and much of the foundational reinforcement learning work underpinning modern AI &#8212; is <a href="https://sifted.eu/articles/david-silver-1bn-ineffable-intelligence/">reportedly raising $1bn</a> for his startup, Ineffable Intelligence. Sequoia leading, NVIDIA/Google/Microsoft considering. Valued at ~$4bn. If completed, the largest seed-stage raise for a European AI company. 
By a long way.</p><p>The mission is somewhat vague: &#8220;superhuman intelligence.&#8221; No product. No revenue. Just the name, the track record, and a billion dollars. Good stuff. But the signal is strong with this one. If Silver can raise $1bn in London, not SF, then maybe we have a shot. But I mean, it would be nicer for the narrative if it weren&#8217;t Sequoia, a US fund, leading this one, wouldn&#8217;t it? Combined with Mistral&#8217;s moves, Wayve, Synthesia, ElevenLabs, Isomorphic Labs, <a href="https://www.callosum.com/">Callosum</a>, and the broader <a href="https://stateofthefuture.substack.com/p/a-specific-theory-of-sovereign-ai">sovereign AI thesis</a>, this starts to feel like Europe is beginning to play the game properly. Or at least, like the week the fundraising numbers stopped being embarrassing.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://stateofthefuture.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://stateofthefuture.substack.com/subscribe?"><span>Subscribe now</span></a></p><p>And before you go, my colleagues <a href="https://eladve.github.io/">Elad Verbin</a> and <a href="https://www.linkedin.com/in/ebaroz/">Eyal Baroz</a> of <a href="http://lunar.vc/">Lunar Ventures</a> fame published a <a href="https://verbine.substack.com/p/request-for-startups-teleoperation">request-for-startups on teleoperation for robotics</a>. $10bn poured into robotics last year, almost nothing into the teleop stack that produces the training data. </p><blockquote><p>Here&#8217;s a fact that should get way more attention: when you watch a humanoid robot demo, you&#8217;re probably watching a teleoperated robot. 
The rule of thumb: &#8220;If a humanoid demo is not explicitly advertised as autonomous &#8212; one should assume it&#8217;s tele-ops.&#8221;</p></blockquote><p>Must read imo, and they&#8217;re writing cheques (&#8364;500K&#8211;&#8364;1.5M pre-seed).</p><p>Bub bye. </p>]]></content:encoded></item><item><title><![CDATA[Four Things Friday ]]></title><description><![CDATA[Agents, automation, Fractile, OLIX, and who's watching the machines]]></description><link>https://stateofthefuture.substack.com/p/four-things-friday</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/four-things-friday</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Fri, 13 Feb 2026 12:30:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!V_r0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V_r0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V_r0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!V_r0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!V_r0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!V_r0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!V_r0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e618f4b0-7163-4812-b994-65969b088000_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9208494,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/187833698?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!V_r0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png 424w, 
https://substackcdn.com/image/fetch/$s_!V_r0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!V_r0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!V_r0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe618f4b0-7163-4812-b994-65969b088000_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Something new. Starting this week I&#8217;m going to send a short Friday email &#8212; four things you should know about. Think Ben Thompson but not as cynical. More fun. Because why are we all taking life so seriously anyway? None of us are getting out alive.</p><p>The longer essays and interviews aren&#8217;t going anywhere; they will come every couple of weeks. I have to carve out actual real-world time to write those properly rather than regurgitate The Latest Thing.</p><p>Here&#8217;s a weekend pondering for you: what would you have to see an agent do to sign up for a gardening qualification [or non-computer-based job of your choice]? Serious question. What task would an AI agent have to complete before you stopped saying &#8220;oh but it still can&#8217;t do X&#8221;? Like, when will you put a mask on?</p><p>If you don&#8217;t have a good answer for that, you probably shouldn&#8217;t have a strong opinion on AI and job automation.</p><p>LFG.</p><p><strong>1. Matt Shumer &#8212; &#8220;Something Big Is Happening&#8221; (and the backlash)</strong></p><p>You&#8217;ve probably seen this already. 55 million views. Shumer&#8217;s claim: if your job happens on a screen, AI is coming for significant parts of it. Bigger than Covid. Shumer has form with these viral tweets btw.</p><p>Regular readers will know I&#8217;ve been banging on about this for a while now. <a href="https://stateofthefuture.substack.com/p/data-driven-vc-is-over">Data-driven VC is over</a> &#8212; my own core research skill, automated. <a href="https://stateofthefuture.substack.com/p/occupational-downgrading">What happens if mass unemployment never arrives?</a> &#8212; AI won&#8217;t cause mass joblessness, it&#8217;ll hollow out the meaning of work instead. 
<a href="https://stateofthefuture.substack.com/p/dirty-work-the-return-of-blue-collar">Dirty Work</a> &#8212; 78% of employers already planning to cut graduate hiring because AI does the work. <a href="https://stateofthefuture.substack.com/p/unbundling-the-job">Unbundling the Job</a> &#8212; employment decomposing from a social bundle into tasks. Shumer is saying the quiet part loud to 55 million people. Good.</p><p>Gary Marcus wrote a reply obvs. He says: no actual data, the METR benchmark only measures coding at 50% correctness, no mention of recent reasoning error papers, and the lived experience of these tools is still frequently maddening. (Replit&#8217;s AI agent once deleted a developer&#8217;s entire production database, then fabricated 4,000 fake users to cover its tracks.) Marcus makes fair points.</p><p>But stop pointing at the point on the curve and ignoring the curve. In this case, stop looking at the puppets and look at the strings. The Covid analogy is the important bit, and most people are misreading it. Shumer isn&#8217;t saying this is a pandemic. He&#8217;s saying this is an exponential. And humans are terrible at exponentials. February 2020. &#8220;It&#8217;s just the flu.&#8221; Go to the races, by all means, folks. The people who got it right were the ones who understood compounding. Same dynamic. The gap between &#8220;sometimes brilliant, sometimes deletes your database&#8221; and &#8220;reliable enough to deploy at scale&#8221; is closing faster than the sceptics think. Way faster.</p><p><a href="https://shumer.dev/something-big-is-happening">shumer.dev/something-big-is-happening</a></p><p><strong>2. UK sovereignty and AI chips &#8212; two UK raises in one week</strong></p><p>When agents eat all screen-based work, someone has to run all that inference. Every agent call, every tool use, every chain-of-thought step. That&#8217;s tokens. Lots of tokens. And tokens need silicon. 
Which makes this week&#8217;s UK chip news feel less like a coincidence and more like a leading indicator.</p><p>UK startup Fractile has committed &#163;100m to expanding its UK operations &#8212; new hardware facility in Bristol, team growing from 80. The thesis: compute-in-memory. Instead of shuttling model weights between DRAM and the processor (the inference bottleneck), they bake computation directly into memory. They claim 100x faster inference than H100s on Llama2-70B at a tenth of the system cost &#8212; though that&#8217;s based on simulations, not physical silicon yet. Founded by Walter Goodwin out of Oxford Robotics, backed by NATO Innovation Fund, Kindred, and Pat Gelsinger personally. RISC-V based, prototype chip targeting H2 2026, shipping product in 2027.</p><p>Meanwhile Olix raised $220m at a $1bn+ valuation for their Optical Tensor Processing Unit &#8212; photonic interconnects on an all-SRAM architecture, no HBM (like Groq, btw). Backed by Hummingbird, Plural, LocalGlobe. All the good names. Shipping chips in 2027, they say. With all that money I want James to solve non-linear ops in the photonic domain and crack optical memory, please.</p><p>Two UK chip companies, same week, same underlying bet: the memory wall is the bottleneck, and you solve it by not moving data. Fractile does it with compute-in-memory, Olix does it with photonic interconnects and on-chip SRAM. I wrote about this design space with Manu from Synthara a few weeks ago &#8212; his line was &#8220;stop moving data.&#8221; These raises are the market agreeing with him. James and Walter FTW.</p><ul><li><p><a href="https://sifted.eu/articles/fractile-100m-uk-operations">Fractile: sifted.eu</a></p></li><li><p><a href="https://siliconangle.com/">Olix: siliconangle.com</a></p></li></ul><p><strong>3. Deutsche Telekom&#8217;s AI Factory &#8212; the one everyone missed</strong></p><p>While the AI Internet was melting down about Shumer, something arguably more important happened in Munich. 
Deutsche Telekom and NVIDIA launched the &#8220;Industrial AI Cloud&#8221;, with Siemens as a key partner. Nearly 10,000 Blackwell GPUs. Half an exaFLOP. Cooled by river water from the actual Eisbach. (Genuinely excellent German engineering flex.) A European consortium called SOOFI is training a 100-billion-parameter open-source model entirely on European soil, under German data protection rules. Of course, Europe&#8217;s go-to answer is a consortium, because that way we can act fast&#8230;</p><p>But tbf, this is what sovereignty looks like. Not speeches. Not another EU consultation document that takes 18 months. A billion euros of GPUs in a gutted bank vault in Munich. I&#8217;ve been banging on about European compute sovereignty for years now and I don&#8217;t think I&#8217;ve ever been able to point at something this concrete.</p><p><a href="https://telekom.com/">telekom.com</a></p><p><strong>4. MIT Tech Review &#8212; &#8220;Is a Secure AI Assistant Possible?&#8221;</strong></p><p>This is the piece I keep coming back to. AI agents now have access to your email, your files, your code, your finances. Who ensures that data stays private? Who audits what the model did and why? Right now? Literally nobody.</p><p>I&#8217;ve been working on an investment thesis around privacy-enhancing technologies for AI &#8212; FHE, confidential computing, zero-knowledge proofs. Unsexy names. I&#8217;ve written about this a lot over the years. It&#8217;s becoming critical infrastructure now though. The EU AI Act high-risk compliance deadline is 2 August 2026. Ohh, a compliance deadline. I bet Elon is worried. But still, the entire governance layer for agentic AI is missing and someone&#8217;s going to build it. 
Quite keen for that someone to be European, frankly.</p><p><a href="https://technologyreview.com/">technologyreview.com</a></p><div><hr></div><p>Now, off you pop.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://stateofthefuture.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://stateofthefuture.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[How to Invest in AI Sovereignty — Sovereign Albion w/ Andrew Bennett]]></title><description><![CDATA[The Future of AI Runs on Chips We Haven't Built Yet]]></description><link>https://stateofthefuture.substack.com/p/how-to-invest-in-ai-sovereignty-sovereign</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/how-to-invest-in-ai-sovereignty-sovereign</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Thu, 12 Feb 2026 12:07:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RM86!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I&#8217;m Lawrence, a pleasure. I invest in people making the world (Europe? UK?) better for my children. pre-seed/seed. lawrence@cloudberry.vc. x x</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://stateofthefuture.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://stateofthefuture.substack.com/subscribe?"><span>Subscribe now</span></a></p><p>&#8212;</p><p>Good morgan! Something a bit different this week. 
My friend Andrew Bennett invited me on his podcast Sovereign Albion to talk about semiconductor investing, AI sovereignty, and why Europe hasn&#8217;t lost the game yet. We covered a lot of ground &#8212; from why I launched a semiconductor fund to what SovAI needs to get right to whether autonomous cars on British streets could cure our national declinism.</p><p>Andrew writes about AI sovereignty through the lens of British state capacity, industrial policy, and the builders trying to make it real. His podcast is excellent. You should subscribe.</p><p>Rather than just link the audio and leave you to it, I&#8217;ve cleaned up the transcript and broken it into sections so you can read the bits you care about. If you want the full audio and video, it&#8217;s on Andrew&#8217;s Substack below. </p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:186883513,&quot;url&quot;:&quot;https://www.sovereignalbion.com/p/how-to-invest-in-ai-sovereignty-with&quot;,&quot;publication_id&quot;:2852349,&quot;publication_name&quot;:&quot;Sovereign Albion&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!QCn4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d609cba-6152-49b1-9199-c634299f665e_1280x1280.png&quot;,&quot;title&quot;:&quot;How to invest in AI sovereignty, with Lawrence Lundy-Bryan (Cloudberry VC)&quot;,&quot;truncated_body_text&quot;:&quot;For the last year, I&#8217;ve been wrestling with what AI sovereignty means for us, here, in Britain.&quot;,&quot;date&quot;:&quot;2026-02-06T09:35:29.691Z&quot;,&quot;like_count&quot;:4,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:12155356,&quot;name&quot;:&quot;Andrew 
Bennett&quot;,&quot;handle&quot;:&quot;andrewjb&quot;,&quot;previous_name&quot;:&quot;---&quot;,&quot;photo_url&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/f65ace9e-6b9a-42d8-9f53-5a9b9ff68c07_400x400.jpeg&quot;,&quot;bio&quot;:null,&quot;profile_set_up_at&quot;:&quot;2022-04-10T20:52:45.224Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-04-07T07:47:01.535Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:2898673,&quot;user_id&quot;:12155356,&quot;publication_id&quot;:2852349,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:2852349,&quot;name&quot;:&quot;Sovereign Albion&quot;,&quot;subdomain&quot;:&quot;sovereignalbion&quot;,&quot;custom_domain&quot;:&quot;www.sovereignalbion.com&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Sovereign Albion explores who we are, where we're going and how we get there, told through the lens of the builders &#8212; of companies, state capacity, and the nation &#8212; making it real.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3d609cba-6152-49b1-9199-c634299f665e_1280x1280.png&quot;,&quot;author_id&quot;:12155356,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#EA410B&quot;,&quot;created_at&quot;:&quot;2024-08-04T20:56:35.906Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Andrew 
Bennett&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:5191577,&quot;user_id&quot;:12155356,&quot;publication_id&quot;:5089350,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:5089350,&quot;name&quot;:&quot;Andrew Bennett&quot;,&quot;subdomain&quot;:&quot;andrewform&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Policy Principal, Form Ventures&quot;,&quot;logo_url&quot;:null,&quot;author_id&quot;:12155356,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-05-21T21:43:57.598Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Andrew Bennett&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;profile&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:7366390,&quot;user_id&quot;:12155356,&quot;publication_id&quot;:7218390,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:7218390,&quot;name&quot;:&quot;TxP&quot;,&quot;subdomain&quot;:&quot;txpio&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;txp: bridging tech &amp; policy to build the 
future&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6b98228d-8d60-4d98-9f58-8e4e0a6fa7e7_400x400.png&quot;,&quot;author_id&quot;:12155356,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-12-10T17:00:05.326Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Andrew Bennett&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:null}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;podcast&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.sovereignalbion.com/p/how-to-invest-in-ai-sovereignty-with?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!QCn4!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3d609cba-6152-49b1-9199-c634299f665e_1280x1280.png"><span class="embedded-post-publication-name">Sovereign Albion</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title-icon"><svg width="19" height="19" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
  <path d="M3 18V12C3 9.61305 3.94821 7.32387 5.63604 5.63604C7.32387 3.94821 9.61305 3 12 3C14.3869 3 16.6761 3.94821 18.364 5.63604C20.0518 7.32387 21 9.61305 21 12V18" stroke-linecap="round" stroke-linejoin="round"></path>
  <path d="M21 19C21 19.5304 20.7893 20.0391 20.4142 20.4142C20.0391 20.7893 19.5304 21 19 21H18C17.4696 21 16.9609 20.7893 16.5858 20.4142C16.2107 20.0391 16 19.5304 16 19V16C16 15.4696 16.2107 14.9609 16.5858 14.5858C16.9609 14.2107 17.4696 14 18 14H21V19ZM3 19C3 19.5304 3.21071 20.0391 3.58579 20.4142C3.96086 20.7893 4.46957 21 5 21H6C6.53043 21 7.03914 20.7893 7.41421 20.4142C7.78929 20.0391 8 19.5304 8 19V16C8 15.4696 7.78929 14.9609 7.41421 14.5858C7.03914 14.2107 6.53043 14 6 14H3V19Z" stroke-linecap="round" stroke-linejoin="round"></path>
</svg></div><div class="embedded-post-title">How to invest in AI sovereignty, with Lawrence Lundy-Bryan (Cloudberry VC)</div></div><div class="embedded-post-body">For the last year, I&#8217;ve been wrestling with what AI sovereignty means for us, here, in Britain&#8230;</div><div class="embedded-post-cta-wrapper"><div class="embedded-post-cta-icon"><svg width="32" height="32" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
  <path classname="inner-triangle" d="M10 8L16 12L10 16V8Z" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"></path>
</svg></div><span class="embedded-post-cta">Listen now</span></div><div class="embedded-post-meta">3 months ago &#183; 4 likes &#183; Andrew Bennett</div></a></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RM86!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RM86!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!RM86!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!RM86!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!RM86!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RM86!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/df90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2063166,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://stateofthefuture.substack.com/i/187649479?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RM86!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!RM86!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!RM86!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!RM86!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf90744d-cf51-4bec-9128-acbe8c2724b2_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h1><strong>What Claude Thinks</strong></h1><p><strong>What works well:</strong></p><ul><li><p><strong>Clear thesis articulation.</strong> The VC bifurcation argument (agglomerators vs. micro-specialists) is crisp and easy to follow. The heterogeneous computing macro bet is well-framed &#8212; moving from &#8220;we run LLMs on glorified gaming GPUs&#8221; to specialised chips is a compelling narrative.</p></li><li><p><strong>Honest about fund sizing.</strong> Calling &#8364;30m an &#8220;MVP&#8221; and being upfront that it&#8217;s too small is disarming. It turns a potential weakness into a strength &#8212; you sound like a founder, not a fund manager pretending to have all the answers.</p></li><li><p><strong>The edge vs. 
data centre framing is smart positioning.</strong> Rather than saying &#8220;Europe can compete everywhere,&#8221; you concede the capex game is lost and carve out a specific bet. That&#8217;s more credible than cheerleading.</p></li><li><p><strong>Good concrete examples.</strong> Compound semiconductors in Cardiff, compute-in-sensor for space, photonic integrated chips for wearables &#8212; these ground the abstract thesis in real things people can picture.</p></li><li><p><strong>The closing line lands.</strong> &#8220;How do we turn energy into intelligence?&#8221; is a memorable framing for policymakers and LPs alike.</p></li></ul><p><strong>What could be sharper:</strong></p><ul><li><p><strong>The patriotism/Europe tension is left unresolved.</strong> You say &#8220;I think of myself first as a Brit, not European&#8221; but it&#8217;s a Finnish fund making a European play. Andrew gives you the opening but you move past it. Worth having a cleaner line on this &#8212; LPs and founders both notice.</p></li><li><p><strong>The SovAI section risks sounding like criticism without a concrete proposal.</strong> You say top-down doesn&#8217;t work and the BBB has a great dataset &#8212; but what specifically should they do with it? One concrete recommendation would strengthen this.</p></li><li><p><strong>The &#8220;defanged founders&#8221; argument is strong but slightly one-note.</strong> You attribute reduced ambition almost entirely to risk-averse EIS investors. There are structural reasons too (immigration, planning, procurement cycles). Acknowledging that briefly would make the argument harder to dismiss.</p></li><li><p><strong>Some sections meander.</strong> The conviction-building answer wanders between State of the Future, expert disagreement, founder intuition, and hiring. 
Tightening that into a cleaner structure would help &#8212; especially since it&#8217;s a question LPs care about deeply.</p></li><li><p><strong>The autonomous cars example at the end is vivid but risky.</strong> &#8220;It&#8217;ll run over cats. It&#8217;ll be on the Daily Mail headline&#8221; is memorable but could easily become the pull quote that overshadows the rest. Depends on audience &#8212; fine for a podcast, might want to soften for a newsletter transcript.</p></li></ul><p>I&#8217;ll take that. Thanks, Claude.</p><div><hr></div><h1><strong>Interview</strong></h1><h2><strong>I. Why launch a semiconductor fund?</strong></h2><p><strong>Andrew: You just launched Europe&#8217;s first semiconductor VC fund. Why on earth would you do that?</strong></p><p>The truth is I&#8217;ve jumped on the bandwagon here. Veera and Rene, my partners, did all the hard work. There&#8217;s a VC bet I&#8217;m making, and then there&#8217;s a macro bet.</p><p>The VC bet is that there are only going to be two types of VC funds in a decade. The capital agglomerators &#8212; the Andreessens, the Sequoias, the Benchmarks &#8212; that just raise all the money. And then specialists.</p><p>Deep tech was the first wave, the first attempt to move away from just SaaS to understanding infrastructure and deeper technologies. But even then, it&#8217;s still quite broad. What is deep tech? It&#8217;s a meaningless term. You see some funds that just invest in quantum computing, or photonics, or nuclear fusion. I think the world will be made up of lots of micro funds that know one technology and market really deeply.</p><p>Number one, because you can actually deeply understand something to the same level as a founder. Slightly lower than a founder. You can at least pretend. And you can diligence something really well with your network. So you can actually make good investments.</p><p>Second, you can &#8220;add value&#8221; in a way that isn&#8217;t just air quotes. 
Because your LPs, your former colleagues, your previous investments are all part of that same ecosystem. So you really can say: once we&#8217;ve invested, we&#8217;ll introduce you to GlobalFoundries to help you tape out your chip. Or a hyperscaler CTO as a purchaser. My view is that on the VC side, specialism is the place to be.</p><p>And on the macro side, semiconductors are massively under-invested in. Over the next decade we&#8217;ll start to see chips made much more specialist in a way we haven&#8217;t seen in the past 20 years. Broadly speaking, only two types of chips have really existed &#8212; CPUs for a very long time, then GPUs &#8212; with very few other types at the margins. The bet over the next decade is that we&#8217;re going to have an explosion of new types of chips. That creates good investment opportunities.</p><p><strong>The semiconductor thesis: heterogeneous computing</strong></p><p><strong>Andrew: If you&#8217;re a layperson, you&#8217;ve seen NVIDIA explode and you go &#8220;I guess GPUs are important.&#8221; What&#8217;s missing from that story?</strong></p><p>Silicon Valley semiconductors were one of the first examples of venture capital. It&#8217;s surprising to me that it&#8217;s flown under the radar, because everybody was looking at software and the internet. Most people grew up in the internet and software age. Most mental models around financing, innovation, and what a foundry looks like are based around the social network and Facebook.</p><p>It&#8217;s not that semiconductors haven&#8217;t been important. They&#8217;ve just been under the media radar. But they&#8217;ve always been fundamental. ARM, smartphones, Qualcomm. It&#8217;s a $500 billion industry, maybe larger now, probably a trillion by 2030. One of the first globalised industries, hugely interconnected, but it flies under the radar. That makes it a good investment opportunity.</p><p>It never really mattered to the average person what Facebook ran on. 
Nobody needed to know how Google served trillions of queries. It reached public consciousness for two reasons. One is COVID and the automotive shortage &#8212; people heard about this one component they couldn&#8217;t get access to. They realised how global the supply chain was, how political it was because of Taiwan. Then there was Chris Miller&#8217;s Chip War book.</p><p>And then, before the NVIDIA run-up, came this data centre build-out and people realising AI needs enormous computing power. We still have five-to-eight-year build-outs of data centres being bought and paid for. This isn&#8217;t slowing down. All of it needs to run on computer chips. It becomes one of the most important technologies for any company or country in the next decade.</p><p><strong>Andrew: Play that forward. What are the bottlenecks you foresee?</strong></p><p>First principles is never the best way to think about how things are adopted. It&#8217;s mainly path dependency. But the truth is, we are using the same thing that ran the SEGA Dreamcast to run large language models. It&#8217;s just these GPUs. It&#8217;s remarkable. And that&#8217;s not quite true &#8212; they&#8217;re not really GPUs now, they have little accelerators for matrix multiplications. So they are bespoke to AI training and inference.</p><p>But two things have changed that mean we&#8217;ll start to see what I call heterogeneous computing: lots and lots of specialised things out in the world. One is that we actually can&#8217;t make enough GPUs. And it&#8217;s funny &#8212; it was never really the GPUs that were the bottleneck. I wrote about this a while ago. It&#8217;s the high-bandwidth memory, made by only three companies globally. And when you look further down the supply chain, it was actually one facility TSMC owned for advanced packaging that wasn&#8217;t large enough. 
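To see why the memory, not the compute, is the wall, here is a back-of-envelope roofline sketch. The bandwidth and FLOPS figures are rough public ballpark numbers, not vendor specs, and it ignores batching and the KV cache:

```python
# Back-of-envelope roofline: each LLM decode step streams every weight once,
# doing ~2 FLOPs per weight, so time-per-token is bounded by whichever is
# slower: reading the weights from memory or doing the maths.

def decode_token_times(params_b: float, bytes_per_param: int,
                       mem_bw_gbs: float, flops_tflops: float):
    """Return (memory-bound, compute-bound) seconds per generated token."""
    weight_bytes = params_b * 1e9 * bytes_per_param
    t_mem = weight_bytes / (mem_bw_gbs * 1e9)               # stream all weights from HBM
    t_flops = (2 * params_b * 1e9) / (flops_tflops * 1e12)  # do the matmuls
    return t_mem, t_flops

# A 70B-parameter model in fp16 on an H100-class part
# (~3,350 GB/s HBM bandwidth, ~1,000 fp16 TFLOPS):
t_mem, t_flops = decode_token_times(70, 2, 3350, 1000)
print(f"memory-bound: {t_mem*1e3:.1f} ms/token, compute-bound: {t_flops*1e3:.2f} ms/token")
# -> memory-bound: 41.8 ms/token, compute-bound: 0.14 ms/token
```

The memory term dominates by roughly 300x, which is why HBM supply, and architectures that avoid moving weights at all, are the fight.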
So the capacity is always growing and shrinking in this very orchestrated machine.</p><p><strong>Andrew: You said that&#8217;s not what the world will look like in 10 years. Talk us through this contrast between data centre and edge.</strong></p><p>If you want a really fast response &#8212; in an autonomous car, a drone, high-frequency trading, an industrial plant &#8212; you can&#8217;t wait for the round trip to the data centre and back. So number one, latency. We don&#8217;t have the applications that require super-low latency yet, but we will.</p><p>Privacy is another reason. We want things local so we don&#8217;t have to send data unencrypted to some jurisdiction&#8217;s data centre.</p><p>And cost. The Stargate build-out alone is $500 billion. These numbers are extortionate, and my personal opinion is the value will justify that capex. But we will need to offload a lot of performance to local devices. It&#8217;s cheaper to run on a smartphone. The user pays for the electricity. The user pays for the phone. There will be a point from a cost-of-serving perspective where as much as possible gets offloaded locally.</p><p>It won&#8217;t just be data centre or edge. It&#8217;ll be a constant trade-off by application: latency, performance, privacy, cost. Autonomous cars will be edge. Scientific simulations will be data centre. But right now, everything is data centre.</p><h2><strong>II. The Europe and UK opportunity</strong></h2><p><strong>Andrew: Are you focused on Europe? What companies do you think will come out of here?</strong></p><p>We can&#8217;t play the capex game. We haven&#8217;t grown in decades. I&#8217;m a VC, so I think in probability. I can get 20 shots on goal. I&#8217;m wrong 17 times. That&#8217;s fine.</p><p>If 50% of all AI inference takes place at the edge &#8212; I don&#8217;t know the number, is it 20%, 30%? &#8212; in that world, the UK and Europe have a shot. We can&#8217;t spend as much as the US, the Gulf, and China. We know that. 
So let&#8217;s deal with the world as it is and take the bet on edge. Am I 100% certain? Of course not. But I can say for sure that more AI inference will take place at the edge tomorrow than today. And certainly that will increase to some ceiling.</p><p>It&#8217;s in the US&#8217;s interest and OpenAI&#8217;s interest to go all-in on data centre. They can raise the money easily. Let&#8217;s not play that game.</p><p><strong>Andrew: Don&#8217;t you think there&#8217;s at least a minimum viable domestic capacity required?</strong></p><p>It depends on your perspective on geopolitics and the uncertainty over the next two to five years. If the long-term thesis is that AI becomes a critical input to production &#8212; the way energy or labour is today &#8212; and you&#8217;re basically streaming that from abroad, then suddenly overnight your input costs become 20% more expensive.</p><p>It depends on the extent to which you think deglobalisation is a structural trend or a fad. I&#8217;ve said I think it&#8217;s close to impossible to truly have a sovereign tech stack. From a semiconductor perspective, that&#8217;s really telling, because this is a complex orchestration of thousands of component suppliers.</p><p><strong>Andrew: ASML is a massive chokepoint in the supply chain, but the Netherlands can&#8217;t really leverage it. Sovereignty is a spectrum, not a binary. We don&#8217;t have to go full autarky.</strong></p><p>Fair point. Good pushback. You want some strategic assets. We have to think: what are we good at? How do we grow those industries to be even more strategic?</p><p>What we are not good at as a state is thinking about what will be strategic in five years, not what&#8217;s strategic now. And beyond that: do we have the stomach, both in capital and attention, to think about this over the long term? Which is what China&#8217;s particularly good at. You need a very clear industrial strategy. Not a three-year plan. A 20-year roadmap. 
And if you really want to build out capacity, it&#8217;s going to cost money for 20 years. You can&#8217;t just get FOMO&#8217;d into industrial strategy because everyone else is.</p><h2><strong>Compound semiconductors and photonics</strong></h2><p><strong>Andrew: What specifically could the UK be good at?</strong></p><p>So to your point &#8212; what could the chokepoints be? Where could we have the equivalent of an ASML? This is where being a VC, you have to look not at what the world is today but think three to five years out.</p><p>I&#8217;ll make it really specific: compound semiconductors. Instead of using silicon, which is what all our chips are broadly made from, you use other types of materials. Silicon carbide, gallium nitride. Each has different properties. They&#8217;re better at running hot, at higher frequencies. You can&#8217;t stick a silicon chip next to a battery &#8212; it gets too hot. So you use other materials. Electric cars are a really good example. What about the chips inside? You don&#8217;t think China was thinking about that 10 years ago?</p><p>We have very good compound semiconductor capacity in the UK. Cardiff has a great ecosystem with leading companies. The reason I tie this back to government focus is because I&#8217;ve looked at lots of compound semiconductor companies, and with my VC hat on I see pretty small markets, limited growth rates, limited buyers. It doesn&#8217;t look like a developer tool company growing 100% year-over-year. It&#8217;s not the best place for my money. But that is a place the government should be thinking.</p><p><strong>Andrew: What about space and photonics?</strong></p><p>Space is a good example. Chips in space need to be radiation-hardened, which is tricky for silicon. We could put more powerful chips into space. And if you think about the round trip &#8212; sending data up and back &#8212; you don&#8217;t want to send much. 
There&#8217;s interesting thinking about compute-in-sensor: doing some logic on the sensed data within the sensor itself. Instead of sending a terabyte back for processing, you process on-board, find the important part, and send just that. Saves a fortune in cost.</p><p>Photonics is going to be a much larger part of the fund. Computing with light. We already move pretty much all our data around the earth over fibre optic, so we already use light to move data. And we&#8217;re getting better at building little lasers, modulators, and photodetectors &#8212; smaller and smaller, to the point where we can stick them all on a chip: a photonic integrated chip.</p><p>It&#8217;s still very immature compared to silicon. But we&#8217;re getting to the point where we can put all these components on a chip, on a different material. If you&#8217;re wearing an Apple Watch, you&#8217;ve got an image sensor on the back sensing biomarkers. With a more powerful photonic integrated chip, you could sense more. Any wearable device will see more powerful chips. Autonomous cars, robots &#8212; we&#8217;re probably not going to stick GPUs in those in the next five or 10 years.</p><h2><strong>III. Fund sizing and making bets</strong></h2><p><strong>Andrew: The fund is &#8364;30 million. You talked about some massive deals in this space. Why this size?</strong></p><p>We haven&#8217;t right-sized it. This is the truth. It&#8217;s not for lack of desire, ambition, or willingness. It&#8217;s an MVP to prove this is important. Just like any startup, I see this as a pre-seed &#8212; to prove it, and then scale. What validation points do we have in 24 months that prove you should give us more money?</p><p>It&#8217;s too small. There&#8217;s a company in Europe raising a &#8364;200 million seed. Etched raised a $500 million Series A. Some of those numbers are signal &#8212; &#8220;we are serious&#8221; &#8212; but also they&#8217;re taking on big chip companies. 
You need capital for hiring, and actually going to a fab to make chips costs a fortune.</p><p>So what can we do with &#8364;30 million? We have to be very early. Because we&#8217;re specialists, we can go earlier than the average fund. You&#8217;d like to think we roughly understand the markets we&#8217;re operating in. And we have strategic investors &#8212; Global Foundries, the third-largest fab in the world, and Radiant Optoelectronics, a Taiwanese photonics company. We can introduce our portfolio to these companies to do things faster and cheaper. Without us, you&#8217;d raise &#163;5 million. With us, &#163;2.5 million.</p><p>But we&#8217;re not only investing in frontier AI chips. There are loads of other spaces &#8212; WiFi chips, hyperspectral imaging for drone warfare, innovation in sensing &#8212; where you don&#8217;t need hundreds of millions. You need millions. And you can invest &#163;1 million.</p><h2><strong>IV. SovAI and industrial strategy</strong></h2><p><strong>Andrew: The government is launching SovAI, a &#163;500 million strategic venture fund for AI. What does it take to make that work?</strong></p><p>The single biggest challenge is that the default way of thinking about industrial strategy is top-down. Define the important things, allocate money. If you&#8217;re talking about manufacturing or agriculture &#8212; slow-moving, established &#8212; that&#8217;s fine.</p><p>You cannot do industrial strategy in 2026 by saying &#8220;this is 100% important&#8221; because we don&#8217;t know. Government and civil service are inherently not good at thinking about high uncertainty and making bets. That&#8217;s ultimately what good industrial strategy requires in the age of AI, and more specifically in an age where things move faster because of software and the internet.</p><p>The first thing you&#8217;d have to do is understand you have high degrees of uncertainty. How do you address that? Velocity and adaptability. 
If you don&#8217;t know the answers and everything moves fast, you have to move fast. And if you take bets, you will be wrong. So it&#8217;s not about taking the bet &#8212; it&#8217;s about the process. Understanding what you got wrong, and improving.</p><p>Good example: quantum computing. The obvious candidate for strategic autonomy. Let&#8217;s invest in quantum. But which type of quantum computer? Trapped ions? Superconducting? There&#8217;s no answer.</p><p>I had this learning myself &#8212; my title was always Head of Research. The conceit was if you do enough research, you could pick the right themes before anyone else. And okay, there&#8217;s a part of that. But with a nuclear fusion company we were looking at, I spoke to 20 people and all 20 gave a different answer.</p><p>So venture is about being happy with lots of uncertainty. But a top-down industrial strategy that says &#8220;bet on these themes and win&#8221; &#8212; that&#8217;s probably the wrong approach. Really good founders sniff out opportunities. If you are in touch with the ecosystem enough, ideas and companies will bubble up. The BBB (British Business Bank) is the UK&#8217;s largest LP. It&#8217;s invested in nearly a fifth of funds in the UK. It gets quarterly reports from every one. It&#8217;s sitting on the best dataset of the early-stage venture ecosystem. Can&#8217;t we connect that to something useful?</p><h2><strong>Building conviction and how VCs decide</strong></h2><p><strong>Andrew: When you&#8217;re looking at something new &#8212; founders who are some step beyond where you are &#8212; how do you build conviction?</strong></p><p>I did a project about five years ago called State of the Future. We looked at 150 technologies. Everything from brain-computer interfaces to mRNA vaccines to how you make chips. The idea was to identify technical maturity, market catalysts, novelty, and impact within each. How different is this from what exists? 
Is this a $1 billion market or a trillion?</p><p>The thing is: the founder necessarily comes with a contrarian hypothesis. You speak to experts and they say it won&#8217;t work. But most fund returners, most outliers, say pretty outlandish things that experts say aren&#8217;t plausible. That&#8217;s the exact game. VCs can often look stupid because you back potentially stupid things.</p><p>How we actually do it &#8212; I like to think it&#8217;s the trade. If you&#8217;ve been speaking to 10 founders a week for 10 years, you&#8217;ve got a large dataset. You develop an intuitive view of whether someone is bullshitting. And then below that: will this work? At our stage, &#8220;will this work? Maybe.&#8221; &#8220;Is the market big enough if it works? Yes or no?&#8221; If yes, you proceed.</p><p>If you&#8217;re used to risk mitigation, to not wasting taxpayers&#8217; money, you can always find a reason not to invest. You can always seem smart: &#8220;I spoke with someone at X big company and they said it will never work.&#8221; That&#8217;s the challenge. And I often find it&#8217;s the challenge in hiring too &#8212; finding people who are prepared to stick their neck out under high uncertainty and be comfortable being wrong a lot. That&#8217;s not how we&#8217;re taught in the British education system.</p><h2><strong>V. How Cloudberry makes decisions</strong></h2><p><strong>Andrew: How do the three of you make decisions?</strong></p><p>I&#8217;ve got learnings from different funds. There are lots of ways to make money. You could be consensual, you could be lone wolf. The key is coming from different places.</p><p>I&#8217;m the one that can put myself in the mind of other VCs &#8212; invest in things that will get markups because other people like them. Maybe. It&#8217;s one way to play the game. Rene and Veera are better at understanding what semiconductor customers will want, what corporate venture capital firms will be interested in. 
Rene has built and sold a company, so he understands things I don&#8217;t.</p><p>What I do at pre-seed: invest X amount of money to get to Y milestone. Identify what Y milestone is. What does it take to get there? Once you&#8217;ve hit it, who will invest to get you to the next milestone?</p><p><strong>Andrew: Does that relay race break down when you have these structural shifts &#8212; the specialists versus the agglomerators?</strong></p><p>The agglomerators won&#8217;t come down to seed. Why not wait until Series A, de-risk it, give them 50 million? I think investing in truly unusual things, new markets &#8212; that is VC, and anything post-Series B is private equity growth capital. That&#8217;s bifurcating in a way it hasn&#8217;t before.</p><p>So the specialists will always be there. But you really have to know the rest of the capital stack. The relay race is a good analogy &#8212; you need to know who you&#8217;re handing off to. My previous view was: invest in the right theme, get the timing right, the market emerges, follow-on investors arrive. In the perfect world, you&#8217;ve invested at the right time and by Series A they&#8217;ve got &#163;1-2 million revenue and it hits.</p><p>That&#8217;s never quite right. So you can de-risk by speaking to later-stage investors: what&#8217;s interesting to them, what will move the dial. But I&#8217;m reluctant to go all-in on that as the game. It&#8217;s the sales game. Hand it off, get markups. I don&#8217;t think that&#8217;s how you build a sustaining fund. Cloudberry getting to fund five or six won&#8217;t be because we did that well. It&#8217;ll be because we backed unusual companies doing unusual things, outlier-type people. And you can&#8217;t take 20 bets on weird, funky people where you have no idea who&#8217;s investing next. That&#8217;s probably too much risk. So it&#8217;s a mix.</p><h2><strong>VI. 
Culture, ambition, and escaping declinism</strong></h2><p><strong>Andrew: You&#8217;re bullish on Europe. Where does that come from?</strong></p><p>We have to do something. I don&#8217;t know what to tell you. Honestly.</p><p>Europe has a huge talent base. Number one. I&#8217;m pretty radicalised to the fact that Europe needs to wake up. We are in an age of deglobalisation. We&#8217;ve been reliant on the US for too long. Hopefully one day we can rely on them again, but as it is today, we need to invest in our own capabilities.</p><p><strong>Andrew: Would you call yourself a patriot?</strong></p><p>Yeah. For certain a patriot. Which is why this Europe/UK thing is interesting. I think of myself first as a Brit. Not European. But for sure our European partners and colleagues. This is a Finnish fund. It&#8217;s a European play. But from a UK perspective, I think we&#8217;ve been sleeping for too long. Hopefully it&#8217;s not terminal.</p><p>I&#8217;m radicalised to the idea that we have unbelievable talent. R&amp;D talent &#8212; Eindhoven, Bristol, Southampton, Oxford, Cambridge, Munich. Deep pools. What we&#8217;ve lacked &#8212; and this is the bet I&#8217;m making &#8212; is enough ambitious founders who want to win globally. I don&#8217;t think that&#8217;s because they don&#8217;t exist. They were defanged. Their ambition was reduced because they were too busy talking to EIS funds asking &#8220;where&#8217;s your P&amp;L?&#8221; at pre-seed. So they asked for &#163;800k because that&#8217;s all they thought they could get.</p><p>Cloudberry and others &#8212; Plural, and plenty more &#8212; are saying we can be more ambitious. Because we have ambitious investors too. The more we shout about thinking big and backing people, the more founders can return to their natural ambition. We don&#8217;t need to create more founders. They&#8217;re there. And a lot of them went to the US quite rightly &#8212; that&#8217;s how you&#8217;d de-risk your company two years ago. 
Now they don&#8217;t have to. So they&#8217;re here. Give them the capital to build here.</p><p><strong>Andrew: There&#8217;s a sort of unreasonable belief in you and a few others that is creating value by reshaping the ecosystem.</strong></p><p>What that makes me think of is a peculiarly European and British thing. It&#8217;s not necessarily tall poppy syndrome, but there&#8217;s something in our reflexive culture where if you say &#8220;I want to build a trillion-dollar company,&#8221; people will laugh. There&#8217;s an instinctive scepticism. The US does it well. I&#8217;m not here to change culture alone, but there is something in a small group of people who shout from the rooftops that we don&#8217;t have to accept declinism anymore.</p><p>We can build a new city in Cambridge. Why not? We&#8217;re going to build a trillion-dollar company. How are you going to get the team? How are you going to raise the money? We&#8217;ll figure it out. This sort of bullishness.</p><p>I thought I&#8217;d be more bullish in my twenties, then slowly become conservative. But for some reason I&#8217;ve come out the other end. We can&#8217;t just keep accepting this. A good example: we got 0.1% GDP growth and it was written up as a &#8220;good day for Rachel Reeves.&#8221; Good day. 0.1%. If someone said we want to aim for 3%, they&#8217;d get laughed at. But that&#8217;s the level of ambition we need. Not just from politicians &#8212; they&#8217;d get killed for saying it. But from venture and the building class.</p><p><strong>Andrew: There&#8217;s an interesting cultural difference. If you have a financial stake in being optimistic and experience watching unknown companies become global winners, you believe things can change quickly.</strong></p><p>How do we change this? What could I do before the next election so that the average person would feel like things are changing? One idea: everybody sees autonomous cars on their streets. The taxi drivers revolt, sure. 
But things can change. That&#8217;s a new technology. It&#8217;ll bring problems. It&#8217;ll run over cats. It&#8217;ll be on the Daily Mail headline. But things can get better.</p><p>How do we go back to the atomic age, the Futurama &#8212; the idea that the future will be brighter? I&#8217;ve set up a semiconductor fund, which is an important element. Because a lot of what Andrew described &#8212; drones, medical devices, autonomous vehicles &#8212; relies on semiconductors.</p><p>The core engine of economic growth over the next decade will be how we turn energy into intelligence. Whether in the data centre or in cars. We&#8217;ll have energy &#8212; that&#8217;s a problem &#8212; but then we need to turn it into intelligence. That&#8217;s a silicon problem. Broadly: how do we turn energy into intelligence? That is the defining question of this decade for the UK and for policymakers.</p><p>&#8212;-</p><p><em>This interview originally appeared on Andrew Bennett&#8217;s Sovereign Albion podcast on 6 February 2026. Andrew writes about who we are, where we&#8217;re going, and how we get there &#8212; told through the lens of the builders making it real. Subscribe to Sovereign Albion.</em></p>]]></content:encoded></item><item><title><![CDATA[Photonic "Engines" for Data Centers]]></title><description><![CDATA[Building Coherent Optical I/O at Scale with Hitesh Sahoo from Phanofi]]></description><link>https://stateofthefuture.substack.com/p/photonic-engines-for-data-centers</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/photonic-engines-for-data-centers</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Mon, 02 Feb 2026 11:01:56 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/185417418/ba8715c28779d3a820891cda9018f8f1.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Hello friends, colleagues and enemies. Last issue we talked about stopping data movement at the chip level. The memory wall. 
Compute and memory sitting too far apart, shuttling bits back and forth like some kind of digital logistics nightmare. Manu had the framing right: data movement is the meta-problem. </p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;cb0c7b3a-4586-4c2a-950c-b4e06a8e9133&quot;,&quot;caption&quot;:&quot;&#8220;I&#8217;m tryna lead a nation, to leave to my little&#8217; man&#8217;s. The scales was lopsided, I&#8217;m just restoring order&#8221;&quot;,&quot;cta&quot;:&quot;Watch now&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;AI Chips, ComputeRAM and the Future of Data Movement: A Conversation with Manu, Founder of Synthara&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:20895912,&quot;name&quot;:&quot;Lawrence Lundy-Bryan&quot;,&quot;bio&quot;:&quot;State of the Future helps founders, investors, and corporate leaders identify and act on emerging technology opportunities. We deliver insights, advisory, and investments to surface and back high-potential ideas before they become 
obvious.&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!q-AC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd406f502-b292-4c00-8f8d-ba31c53a3a90_1620x1080.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-01-22T11:03:11.074Z&quot;,&quot;cover_image&quot;:&quot;https://substack-video.s3.amazonaws.com/video_upload/post/184011679/a147d413-1d95-4fec-8520-8b4703ed5933/transcoded-00001.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://stateofthefuture.substack.com/p/ai-chips-computeram-and-the-future&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:&quot;a147d413-1d95-4fec-8520-8b4703ed5933&quot;,&quot;id&quot;:184011679,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:5,&quot;comment_count&quot;:1,&quot;publication_id&quot;:221837,&quot;publication_name&quot;:&quot;State of the Future&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!juWT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb4af7d5-8416-4e8d-bc5b-2763f055ed16_800x800.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>Well. Same problem exists one level up. Getting data between chips, between racks, between buildings. And copper is dying. Not metaphorically obviously. It&#8217;s inert. The physics is straightforward: as you push bandwidth higher, copper&#8217;s reach shrinks. A decade ago you could run copper across the data center floor. Now it doesn&#8217;t make it out of the rack. Next gen, it won&#8217;t make it off the board. So we get optical interconnects, converting electrons to photons, shipping them down fiber, converting back.</p><p>But conversion is expensive. In power and in latency. Which is money. 
And interestingly the optical links inside data centers today are basic compared to what the telecoms industry has been doing for decades. Long-haul networks use &#8216;coherent optics&#8217;, encoding data not just in the intensity of light but in its phase and polarisation. So this means you can get more data onto each wavelength. But coherent systems require monstrous digital signal processors (DSPs) that consume 3-4x more power and cost 3-5x more than intensity-based systems. Too expensive for the volume game inside a hyperscaler.</p><p>Phanofi, a Danish startup I spoke to for this issue, claims they can bridge that gap. Their bet isn&#8217;t on exotic new modulator materials like lithium niobate or barium titanate. It&#8217;s on the detection side: an architecture for recovering optical signals that maintains coherent efficiency while working with standard DSPs and standard foundries. No new manufacturing processes, no supply chain disruption. The pragmatist in me loves it. </p><p>Hitesh Kumar Sahoo, the CEO, did his PhD in integrated photonics and has been deep in the foundry ecosystem. His argument: the industry doesn&#8217;t want disruption, it wants compatibility. Hyperscalers are spending billions building supply chains around specific DSP vendors, specific foundries, specific packaging houses. Any solution that requires them to rebuild that infrastructure is dead on arrival. </p><p>The interview gets into the technical details of coherent versus intensity-based systems, why the detection side is the real bottleneck, and where co-packaged optics fits into all of this. </p><h2>What did I learn?</h2><ul><li><p><strong>Data movement is the meta-problem at every level of abstraction.</strong> Last issue it was compute-to-memory. This issue it&#8217;s chip-to-chip, rack-to-rack. The principle is the same: stop shuffling bits around unnecessarily, and when you must shuffle them, do it as efficiently as physics allows. 
</p></li><li><p><strong>Coherent optics inside data centers is when, not if.</strong> The bandwidth requirements are pushing past what intensity-based systems can deliver cost-effectively. The question is who captures the value: incumbent DSP vendors who add coherent capability, new entrants with novel architectures, or vertically integrated hyperscalers who build their own. Value capture!</p></li><li><p><strong>Foundry compatibility is the moat</strong>. Exotic materials make for exciting papers, but supply chains are conservative. Phanofi&#8217;s focus on working with existing foundry processes is a strategic choice. It&#8217;s the trade-off between something novel that&#8217;s 10x better but asks customers to adapt, versus something 2-3x better with no adaptation required. </p></li></ul><p>It&#8217;s a tricky business. </p><div><hr></div><h2>The State of the Future Show</h2><p><strong>Tell me about what you do at Phanofi.</strong></p><p>At Phanofi we&#8217;re building photonic engines that help data centers save energy when moving data from one point to another. Computing is done in electronics with zeros and ones, but optics is the preferred approach for sending data. You need to convert efficiently from electronics to optics, and this is what our engine does&#8212;it converts data from zeros and ones to light and then back from light to zeros and ones so another computer can read and process it.</p><p><strong>How does this differ from an analog-to-digital converter?</strong></p><p>An ADC generally operates within the electronics domain where you have digital and analog processing. What we do is very similar, except we&#8217;re going from electrons to photons rather than staying within electrons.</p><p><strong>When you convert from electrons to photons, don&#8217;t you need specific materials? Can&#8217;t you just do it in silicon?</strong></p><p>You can actually do it in silicon. 
There are new materials in the ecosystem which are much more efficient, and until now it was silicon. But when we&#8217;re aiming for gigabits or terabits of data being moved, we&#8217;re running out of power budget, and this has pushed the industry to find new materials.</p><p><strong>Let&#8217;s think really basically about the components required. What does converting to optics actually mean?</strong></p><p>Start with electronics&#8212;zeros and ones. How do you put that onto light? You have a continuous wave laser that&#8217;s switched on, and you have an element called a modulator that&#8217;s modulating the intensity of that laser. On the other side, you would see the laser turning on and off, and the modulator is doing that job based on the zeros and ones coming from the data. On the receiving side, you have a photodiode or photodetector that&#8217;s detecting whether there&#8217;s light or no light, giving a signal. That&#8217;s the very simple implementation: on the transmit side you have a laser with a modulator, the light carries the data, and on the receiving side the data is extracted by detecting the presence or absence of light.</p><p><strong>So traditional electronic computing uses digital zeros and ones, and you&#8217;re saying on-off is the equivalent&#8212;flashing as fast as you can?</strong></p><p>Exactly. This isn&#8217;t new&#8212;it&#8217;s been used for long-distance communication. You used to have a lantern switching on and off for signaling.</p><p><strong>The speed is probably why photonics companies exist now. A decade ago, modulating and detecting light was fine, but now we need it faster?</strong></p><p>Yes. It&#8217;s leapfrogged significantly in the last couple of years. There&#8217;s a limit to how much you can push the technology with a basic laser-modulator-photodetector structure&#8212;how fast you can put data on the light source and how fast you can extract it. 
That&#8217;s why there&#8217;s tremendous focus on building high-speed modulators. Companies like Hyperlight are building lithium niobate modulators, Lumiphase is using barium titanate&#8212;lots of interesting approaches.</p><p><strong>Why do new materials make modulators faster?</strong></p><p>There&#8217;s a limitation on how fast you can modulate in silicon because of how modulation works&#8212;you have movement of carriers across metal plates, which creates fundamental physics limits. Lithium niobate and barium titanate operate on different mechanisms, so they can go faster. There are even newer materials like organic hybrids that can go faster still. The industry is experimenting and testing new modulators.</p><p><strong>Beyond faster modulators, there&#8217;s also parallelization. Can you explain how that works with light?</strong></p><p>The industry is exploring multiple approaches rather than just focusing on one component. They&#8217;re using multiple wavelengths&#8212;this is where CWDM [Coarse Wavelength Division Multiplexing] and DWDM [Dense Wavelength Division Multiplexing] come in. But there&#8217;s another architecture: coherent systems, which use intensity, phase, and polarization of light. This isn&#8217;t about different wavelengths&#8212;for each wavelength you can maximize the amount of data by using its polarization and phase. This technology is used for long-distance communication outside data centers. It&#8217;s just expensive and power-consuming, so it doesn&#8217;t scale well when brought inside data centers.</p><p><strong>At a system level, what&#8217;s the core problem? Why can&#8217;t we just keep using electronics?</strong></p><p>There&#8217;s a limit to how far you can go with copper. The losses increase significantly as you go to higher speeds, and the link length over which a signal can be transmitted on copper shrinks. When speeds were lower, copper links could be much longer. 
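As an aside, the shrinking-reach trend can be sketched with a toy model. This is purely illustrative: the coefficients below are invented to show the shape of the physics (skin-effect loss growing roughly with the square root of frequency, dielectric loss roughly linearly), not real cable data. A fixed link loss budget divided by per-metre loss gives reach, and reach collapses as the signalling rate climbs.

```python
import math

def copper_reach_m(freq_ghz, loss_budget_db=30.0, k_skin=2.0, k_diel=0.4):
    """Toy copper-reach model (coefficients invented for illustration).

    Per-metre loss grows with frequency: skin-effect loss scales
    roughly with sqrt(f), dielectric loss roughly with f. Reach is
    the total link budget divided by per-metre loss, so the same
    budget buys fewer metres at higher signalling rates.
    """
    loss_per_m_db = k_skin * math.sqrt(freq_ghz) + k_diel * freq_ghz
    return loss_budget_db / loss_per_m_db

# Each doubling of the signalling rate shrinks the reach:
for f in (12.5, 25.0, 50.0):  # rough Nyquist frequencies per lane generation
    print(f"{f:5.1f} GHz -> {copper_reach_m(f):.1f} m")
```

Under these made-up numbers reach falls from a few metres to under a metre as frequency quadruples, which is the same story the interview tells: across-the-floor, then behind-the-rack, then on-the-board.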
Now as we go to higher bandwidth, copper is just behind the rack. For the next generation of bandwidth, it&#8217;s going to be even shorter. That&#8217;s why people are trying to bring optical interconnects behind the rack as well.</p><p><strong>How does what Phanofi is doing fit alongside other photonics companies?</strong></p><p>At Phanofi, we&#8217;re coming up with an alternate architecture for how to put data on light and take it out. We&#8217;re competing at a higher abstraction level. Current implementations inside data centers only use intensity-based modulation. Outside data centers, systems put data on intensity, phase, and polarization, but they use very complex, expensive, and power-hungry equipment.</p><p><strong>Why is coherent technology used outside but not inside data centers?</strong></p><p>Outside data centers, there&#8217;s a need for high bandwidth efficiency&#8212;you pack more data per laser for long-distance communication. The receivers are complex and power-expensive, but the number of deployments is significantly lower, so the cost is absorbed. Inside data centers, the volume is significantly higher, which is why coherent technology that exists outside cannot just come into data centers. There&#8217;s also a gray zone&#8212;today&#8217;s data centers aren&#8217;t just 500 meters anymore. Links are going to 2 kilometers, 10 kilometers. There&#8217;s this space between intensity-based data center interconnects and coherent systems where both struggle. Intensity-based systems face power walls and are very expensive going to 1.6T or 3.2T implementations. Coherent systems, even though bandwidth-efficient, can&#8217;t get in because of cost and power constraints. We&#8217;re saying we can bring the efficiency of coherent systems at the simplicity and cost of intensity-based systems.</p><p><strong>You mentioned three types of modulation used outside data centers&#8212;intensity, phase, and polarization. 
Why is the equipment so much bigger, bulkier, and less efficient?</strong></p><p>I wouldn&#8217;t say inefficient&#8212;it&#8217;s not inefficient for the purpose outside. But when you bring that technology inside data centers, it becomes inefficient relative to the requirements.</p><p><strong>So Phanofi is proposing to bring coherent systems into data centers by improving the modulator?</strong></p><p>Actually, our main innovation lies in the detection side. Modulation has largely been solved&#8212;people have been able to use similar architectures to do phase and polarization modulation. The detection side is the problem. If you break open these interconnect boxes, there are two parts: the electronic DSP and the optical part. The electronic DSP is the real challenge. If you compare an intensity-based DSP and a coherent-based DSP, the coherent DSP consumes 3-4 times more power and costs 4-5 times more.</p><p><strong>Why does it consume so much more power?</strong></p><p>In an intensity-based system, you&#8217;re only modulating light, so you have a photodiode detecting zeros and ones really fast. The DSP does some cleanup and error correction. But in a coherent system, the DSP handles a significant part of the decoding. What the photodiode receives, before the DSP processes it, you cannot make sense of&#8212;it&#8217;s almost noise. The DSP takes it and runs through extensive algorithms to extract the real data. It&#8217;s a marvelous piece of engineering, but it&#8217;s overkill for what we&#8217;re trying to achieve within a data center. Because so much computation happens inside that chip, it ends up being expensive and power-consuming.</p><p><strong>So you&#8217;re focused on the photodiode, the detector side. How have you made it better for this use case?</strong></p><p>We&#8217;re building on an industry platform&#8212;an industry-validated foundry model. 
We&#8217;re using industry foundries to manufacture our chips.</p><p><strong>Why is using existing foundries important?</strong></p><p>Any new material requires a new manufacturing process. Big foundries at high volumes don&#8217;t readily adopt new processes because of contamination risk and the need to develop entirely new tool sets. What we&#8217;re saying is you can take all the tools you already have, and we can make our device with those materials&#8212;no additional contamination risk. You can just make our stuff as you would normally. That&#8217;s the key thing we&#8217;ve done that nobody else in the industry has done.</p><p><strong>What&#8217;s the actual technical achievement that colleagues wouldn&#8217;t have thought possible?</strong></p><p>This is a highly conservative market built on supply chains. They want compatibility. We&#8217;re talking about engines for optics, but there&#8217;s also the DSP sitting next to us in the interconnect, and we need to be compatible with that. Last September, we demonstrated this. We collaborated with one of the leaders in DSP manufacturing, got their evaluation board, interfaced it with our photonic chip, and showed we can do 400 gigabits per second per laser module together with their equipment. That&#8217;s a big proof point to the industry&#8212;we&#8217;re not disrupting your supply chain. We can use what you&#8217;re doing and show this architecture can work.</p><p><strong>Can you explain pluggables and co-packaged optics (CPO)?</strong></p><p>This comes from the need for low-power interconnects. Current implementations have pluggables&#8212;essentially large Ethernet cables that plug into switch boxes. Inside the switch, there&#8217;s a CPU in the center with a path routing from the interface to the main chip. CPO wants to eliminate that path&#8212;they don&#8217;t want copper traces going from the central CPU to the interconnect. Instead, they want to bring optics closer to the CPU. 
It&#8217;s not just a new implementation, it&#8217;s an architecture change. CPO is being pushed by big industry players for future data center architectures where instead of switch boxes, they bring their own CPU boxes.</p><p><strong>Why do this? What&#8217;s the benefit?</strong></p><p>Right now you have a CPU connected to a pluggable, and the pluggable has its own DSP. Instead of having two different places working with electronics, they want one place where only the CPU can directly drive data conversion into optics. You reduce that redundant DSP and eliminate copper traces, which improves noise and recovers some losses. There are benefits. But we should look at it holistically. CPO is more energy efficient in cost per bit transferred, but it requires expensive investment and big buy-in. If something goes wrong, tens of thousands of dollars is wasted&#8212;you have to throw away the whole unit. With pluggables, if one goes bad, you swap it quickly.</p><p><strong>This sounds similar to the debate about integrating lasers on photonic chips versus using external lasers.</strong></p><p>Yes, exactly. It&#8217;s the same trade-off at a different abstraction level&#8212;efficiency versus reliability.</p><p><strong>What about Google&#8217;s optical circuit switching (OCS) implementation? Will the whole industry move toward optical circuit switching, or will it remain a proprietary Google advantage?</strong></p><p>I&#8217;m not the best person to comment definitively, but I feel optics and photonics are much more powerful than what we see right now. We&#8217;re just getting started on how we can use photonics to improve efficiency in data communication. Wherever you have communication, optics and photonics has an edge over electronics. I&#8217;m closer to believing OCS systems will emerge as winners eventually.</p><p><strong>You describe what you&#8217;re building as an engine. 
What will the product look like in five years?</strong></p><p>We&#8217;re making optical engines&#8212;think of them as high-speed LEGO blocks. I call it a LEGO block because we want it to be modular. If you want to put it in a pluggable, we should be able to do that. If you want it in a CPO engine, we can do that too. That&#8217;s our approach to market. We have a proprietary way of putting data on light and taking it off light, so we need to make the whole engine&#8212;laser, modulator, detector side with our patented design, and photodiodes. All the photonics is ours; all the electronics is standardized. We talk to everyone who adheres to industry-defined standards.</p><p><strong>Will you be shipping chiplets?</strong></p><p>Exactly, yes. The advantage is that you unlock possibilities. When you&#8217;re a vendor designing one thing, you have a specific application and focus. But in a chiplet ecosystem, you&#8217;re opening up possibilities. People can combine your I/O module with something they&#8217;re building. You focus on what you do best, and the ecosystem automatically takes care of the application space.</p><p><strong>If someone wants the world&#8217;s best laser but also needs your modulator and detector, could they license your IP?</strong></p><p>Yes, exactly. It depends on where in the value chain you sit. For example, if we&#8217;re integrating a laser on our chiplet, we would need to license a laser from wherever we&#8217;re manufacturing. But someone building a CPO engine would license from us the I/O module block they need next to their CPU or XPU. Where you sit in the value chain determines how you interact or license. The beauty is that this works if standards are built out as the industry is planning. The silicon electronics industry was built on standardization, which accelerated growth so much that we&#8217;re seeing diverse forms of electronics at very cheap prices. 
Electronics is so ubiquitous we don&#8217;t even think about it.</p><p><strong>But that only works if interfaces are standardized. Where are we with standards in photonics and chiplets?</strong></p><p>It&#8217;s a pain point. There&#8217;s tremendous debate around it. If you put all the leaders from switch companies, NVIDIA, and other big companies&#8212;bring their photonic experts to the table&#8212;the one thing they&#8217;d agree on is we need standards. It helps everyone.</p><p><strong>What specific standard would help you most reduce costs?</strong></p><p>Packaging. Packaging standardization is important&#8212;how we get fiber from our chip, how we put multiple platforms together so the electronic interfaces work. For CPO implementation, how are those electronic lanes designed? Right now you have multiple implementations with very different I/O ports for RF, DC, or fiber connections. I&#8217;m starting to see some packaging houses begin with standards. Swiss Peak, for example, launched in November with a small initiative toward standardization. Their approach is to customize if you want, but they&#8217;re starting with some standards for packaging. It&#8217;s a very small step, but it&#8217;s progress. If we start designing PCB boards or modules where you can place your chips and everyone agrees on standards, it&#8217;s easier for chip manufacturers&#8212;we know what we&#8217;re designing to, and time to market adoption is much faster.</p><p><strong>From NVIDIA or Broadcom&#8217;s perspective, don&#8217;t they want to build their moat and commoditize suppliers? Wouldn&#8217;t they want modulators and photodetectors to be commodities?</strong></p><p>Yes, and this is where we differ from our competition. We&#8217;re not designing any single component. We&#8217;re coming with a new architecture&#8212;how do you take those components and make the function more efficient? We&#8217;re focused on modulation and demodulation efficiency. 
The hyperscalers are pushing suppliers to standardize and commoditize. But as an industry, we benefit from new innovations. When we&#8217;re trying a new architecture, we&#8217;re using industry components. Even though silicon photonics is non-standard, it was born out of old CMOS foundries, so a certain level of standardization already exists. This has helped us as an early-stage startup tape out three times, get access to foundries, and test chips. There&#8217;s tremendous value even though there&#8217;s a downside. It enables a much bigger ecosystem to move forward.</p><p><strong>Doesn&#8217;t getting components into engineers&#8217; hands help build adoption faster than waiting for theoretical standards?</strong></p><p>Exactly. With any new technology, there&#8217;s a learning phase. Photonics is going through this where we&#8217;re building PDKs [Process Design Kits] and more complex libraries. It&#8217;s definitely simpler compared to what electronic libraries look like today, but it&#8217;s not fair to compare given the resources that went into electronics. We&#8217;re already seeing the impact of photonics not just in communication but also in biosensing and quantum. Quantum will be a big enabling area for photonics. We&#8217;re seeing photonics, foundries, and PDK design at a very early stage. I&#8217;m very hopeful it&#8217;s moving in a positive direction. We see the pull in the industry. It&#8217;s only time that will decide how big it turns out to be.</p><p><strong>Many photonics companies are building foundries predicated on new materials like lithium niobate. It seems like there won&#8217;t be a single dominant material like silicon in electronics. Wouldn&#8217;t we be better off with an integrated facility under one roof with multiple material processes?</strong></p><p>Unfortunately, silicon photonics is more complicated than electronics. It&#8217;s not a one-to-one material platform solution. 
We need different materials because of different performance characteristics. Because lasers are active devices, we need indium phosphide. For modulation, different materials work better. We need to create an ecosystem with different materials. But we should acknowledge foundries are insanely capital-intensive businesses. When you&#8217;re talking about a manufacturing facility, someone is putting in an insane amount of money to create a pilot plant and ensure quality control&#8212;producing the same thing every time. That&#8217;s the challenge. Different foundries are trying different approaches. GlobalFoundries has tried integrating silicon with CMOS layers on top. Tower Semi has integrated indium phosphide on top of silicon. Silicon is the base in everything because of cost and infrastructure. These foundries are opening up to different new materials being integrated on top. Some are testing lithium niobate and sharing results internally&#8212;they just haven&#8217;t announced publicly. TSMC wasn&#8217;t in photonics, and suddenly it&#8217;s doing a lot. They don&#8217;t want to say it out loud until they&#8217;ve proven it because there&#8217;s tremendous money going in. They need to generate revenue after that, so they&#8217;re very careful choosing which material platforms to integrate. But you&#8217;re correct&#8212;it&#8217;s moving toward that. XFAB, for example, is looking into transfer printing multiple platforms onto silicon. Foundries recognize this and many are taking that path.</p><p><strong>You&#8217;re a startup trying to sell into one of the fastest-moving markets in history with data center buildout. What could NVIDIA or Broadcom announce that would make your business non-viable?</strong></p><p>If they found another way to communicate data better than light, yes. 
If the industry somehow finds a new alternative to copper that&#8217;s not fiber or light&#8212;this is where carbon nanotubes come in, where you&#8217;re sending electrons rather than converting to photons. That makes more sense because you don&#8217;t have to convert and don&#8217;t pay for efficiency loss every time you convert to optics and back. Definitely that would impact us.</p><p><strong>Could open-source transceiver designs kill the business?</strong></p><p>That wouldn&#8217;t affect us much because, as I said, we could take those elements and put them together. Our IP is on how we arrange those elements, which results in an efficient approach.</p><p><strong>What about wafer-scale computing where you don&#8217;t need to move data off-chip?</strong></p><p>Light has proved it&#8217;s the most efficient way to transmit data. When we&#8217;re talking about AI computing especially, we&#8217;re limited now on how much data we can process. These big wafers where you can do tremendous computing within one wafer without going to another&#8212;there are challenges with reliability, yield, and cost aspects. But that&#8217;s one way you don&#8217;t need to get data out, though at some point you do.</p><p><strong>That&#8217;s interesting because it ties to the previous interview about compute-in-memory&#8212;doing more on-chip without going off-chip. But we&#8217;ll always need to go off-chip eventually.</strong></p><p>The real question isn&#8217;t whether light is the best way to transmit data, but where does the cost-performance curve stop for photonics? To answer that, just see what the current implementation is. The industry is very cost-conscious, so it won&#8217;t have an architecture where it&#8217;s not cost-effective. Behind the rack is where optics currently isn&#8217;t the more cost-effective option. 
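</p><p>That cost-performance crossover can be sketched with a toy energy-per-bit model. All constants here are invented for illustration; the real numbers shift with every speed generation:</p>

```python
# Illustrative copper-vs-optics crossover: copper energy per bit grows
# with link length, while optics pays a fixed electrical-optical
# conversion cost but propagates almost losslessly. Somewhere between
# those two curves sits a break-even length.
def copper_pj_per_bit(length_m, fixed=1.0, per_m=2.0):
    return fixed + per_m * length_m        # grows with distance

def optics_pj_per_bit(length_m, conversion=5.0, per_m=0.01):
    return conversion + per_m * length_m   # dominated by conversion cost

for length_m in [0.5, 1, 2, 5, 10]:
    winner = ("optics" if optics_pj_per_bit(length_m) < copper_pj_per_bit(length_m)
              else "copper")
    print(f"{length_m:>4} m -> {winner}")
# With these made-up constants, copper wins at very short reach and
# optics wins beyond roughly two metres; the industry argument is that
# each bandwidth generation pulls that break-even point closer in.
```

<p>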
But going forward, in the next generation&#8212;we&#8217;re talking 500 meters to 2 kilometers, which is away from rack-to-rack&#8212;it&#8217;s getting more interesting to have optical fiber within the rack for efficiency gains in cost and power. There&#8217;s a reasonable argument for why optics can come very close to the CPU. It unlocks a significant bottleneck: how cramped do you want to make your system? Heat is the other challenge. With copper, you can have two XPUs or GPUs connected very close, but getting heat out becomes challenging. With fiber, you&#8217;re not limited by length&#8212;it&#8217;s essentially lossless&#8212;so you can stretch it significantly, spread things around for cooling, but still gain the same computation. The other thing is power. There&#8217;s a limit to how much power each rack can get, which limits how much compute you can pack in one rack. These system-level boundary conditions make optics much better in terms of cost-performance.</p><p><strong>Let me summarize to see if I&#8217;ve understood correctly. At a system level, the problem is no longer compute&#8212;it&#8217;s moving data. Copper is hitting physical power limits, so optics is being pushed closer to the rack and eventually toward the package. You&#8217;re not just building a faster modulator or new laser&#8212;you&#8217;re proposing an architectural change. Long-haul networks already use coherent techniques with intensity, phase, and polarization to pack more data onto each wavelength, but this is complex, power-hungry, and expensive. Your bet is on the detection side&#8212;changing how optical signals are recovered to reduce DSP complexity and bring coherent-like efficiency from outside the data center into the data center at something like intensity-based cost and power. Importantly, you&#8217;ve stayed compatible with existing foundries, DSP vendors, and supply chains. 
It works with pluggables today and can work as a chiplet or part of CPO architectures over time.</strong></p><p>Absolutely. I should use you for pitching all my fundraising. You nailed it.</p><h2>Debrief</h2><p>Some solid synergies with the chat with Manu, right? The through-line from the Synthara conversation to this one is almost too clean. Manu&#8217;s framing was &#8220;stop moving data&#8221; at the chip level: compute and memory are too far apart, so bring them together. Hitesh is solving the same problem one abstraction layer up: chips need to talk to each other, and copper can&#8217;t keep up, so optics has to come closer to the processor. It&#8217;s <em>almost</em> like I planned a narrative in advance. </p><p>What&#8217;s interesting is how both companies have made similar choices despite operating in different domains. Neither is betting on exotic new physics. Synthara isn&#8217;t doing analog compute-in-memory; they&#8217;re doing digital design with standard bit cells. Phanofi isn&#8217;t building lithium niobate modulators; they&#8217;re working with existing silicon photonics platforms. Both are saying: the industry doesn&#8217;t want revolution, it wants evolution that&#8217;s compatible with supply chains. </p><p>The co-packaged optics question is genuinely unresolved. Hitesh is diplomatic, but you can read between the lines. CPO makes sense on paper: eliminate the copper traces between the switch ASIC and the optical transceiver, reduce power, improve density. But the reliability and serviceability concerns are serious. If a pluggable fails, you swap it. If a CPO engine fails, you throw away the whole package. At tens of thousands of dollars per unit. </p><p>Phanofi&#8217;s chiplet approach is somewhat of a hedge. If pluggables win, they can sell into that market. If CPO wins, they can sell into that market. If some hybrid emerges, which seems likely, they can adapt. 
The modular &#8220;LEGO block&#8221; framing is never quite true in reality but it does hedge against architectural uncertainty. Sort of like an FPGA instead of an ASIC. </p><p>The standardisation point deserves emphasis. Hitesh says if you put all the photonics experts from the hyperscalers in a room, the one thing they&#8217;d agree on is the need for standards. Packaging, interfaces, fibre attachment, all of it. The silicon electronics industry was built on standardisation, which is why you can buy commodity chips at scale. Photonics is still very much in the bespoke era, which keeps costs high and iteration cycles slow. Whoever drives standardisation will shape the industry for decades.</p><p>One question I didn&#8217;t push hard enough on: what happens when NVIDIA or Broadcom decides to vertically integrate optical I/O? They have the resources, the customer relationships, and the incentive. Hitesh&#8217;s answer, that they&#8217;re offering an architectural innovation rather than a component, is reasonable but not entirely satisfying. Architectural innovations can be copied. The real moat is probably speed to market and foundry relationships, which are harder to replicate than any single technical insight. They are also harder to diligence as a pre-seed/seed investor. I mean should I be asking for the names of the TSMC execs they know? I joke. Or do I?</p><p>Data movement is expensive at every level of abstraction. Last issue was the chip. This issue was the rack. Next might be the building, the campus, the continent. At some point we hit the speed of light and then what? Quantum interconnects? Satellite relay? Free-space optics between buildings? Honestly I don&#8217;t know. Nobody does. But the pattern holds: whoever figures out how to stop moving data, or move it more efficiently when you must, captures enormous value. 
The specific technology matters less than the principle.</p><div><hr></div><p></p>]]></content:encoded></item><item><title><![CDATA[AI Chips, ComputeRAM and the Future of Data Movement: A Conversation with Manu, Founder of Synthara]]></title><description><![CDATA[Stop! Moving Data]]></description><link>https://stateofthefuture.substack.com/p/ai-chips-computeram-and-the-future</link><guid isPermaLink="false">https://stateofthefuture.substack.com/p/ai-chips-computeram-and-the-future</guid><dc:creator><![CDATA[Lawrence Lundy-Bryan]]></dc:creator><pubDate>Thu, 22 Jan 2026 11:03:11 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/184011679/a7e946e4c714408e4b1a1ef2881a7986.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<blockquote><p><em><strong>&#8220;I&#8217;m tryna lead a nation, to leave to my little&#8217; man&#8217;s. The scales was lopsided, I&#8217;m just restoring order&#8221;</strong></em></p></blockquote><p>Hello friends, colleagues and enemies. Apologies for the delay, especially for my paying subscribers. I have failed to add as much value as I promised. And for that, I can only apologise. I mean, I&#8217;ve been somewhat distracted, I thought a nice little bolthole in Nuuk over January would 10x my productivity. Little bit of Claude Code and a little bit of solitude&#8230; </p><p>Anyway, I&#8217;ve managed to get myself on the last chopper out of Saigon and here I am back on the semiconductor horse. </p><p>The headline? Moving data around is expensive, so let&#8217;s not? Standard story: don&#8217;t move data to the datacentre because you have to pay to send data around the world in cash and time. But the same is true at a lower level of abstraction: the chip itself.</p><p>I&#8217;ve written about this before, but I think it&#8217;s still underpriced: raw processing power is not the performance bottleneck. GPU arithmetic units spend most of their time idle, stalled while waiting for weights to be fetched from HBM. 
<strong>The memory wall. </strong></p><p>The whole Nvidia and Groq $20 billion deal is about the same thing. Groq&#8217;s entire value prop is to bypass HBM altogether: their LPUs stick 230MB of on-die SRAM delivering 80 TB/s of bandwidth, roughly 1 OOM higher than what you get from HBM.</p><p>You may have heard of Cerebras, the other big AI chip company. Well they worried about the memory wall so bad, their WSE-3 chip stuffs 44GB of SRAM directly onto the wafer with 21 PB/s of memory bandwidth, 7,000x what you get from a single GPU&#8217;s HBM stack. The architecture exists to eliminate the memory wall. And OpenAI just signed a deal to deploy 750MW of Cerebras capacity, which tells you something. SRAM is orders of magnitude faster than DRAM but far less dense, so Cerebras compensates by using an entire silicon wafer as a single chip. Bold move Cotton. </p><p>And finally, Etched, the hotshot new AI chip company, just closed a $500 million round at a $5 billion valuation to build a chip that literally only runs one algorithm (the transformer) (Hummingbird at it again, what up). Their bet is related: hard-wire the matrix multiplication patterns specific to transformer inference and strip out all the general-purpose overhead, voila, dramatically reduce the memory traffic required per token.</p><p>And another one imma discuss today is something called <strong>&#8220;in-memory compute&#8221;</strong>. This is where you basically say, why don&#8217;t we just literally run operations on memory cells? So instead of having a bit of logic and a little bit of memory on different parts of the chip, why not, like, just have one bit of memory, and do operations in there? </p><p>Well, because, it&#8217;s very hard to do. </p><p>True compute-in-memory means making the memory cells themselves do the maths. 
A resistive RAM crossbar array, memory that stores data by changing its electrical resistance, can encode neural network weights as resistance values, and when you apply voltages as inputs, Ohm&#8217;s law naturally computes the matrix multiplication as current flows through. The physics does the work. No separate ALU required. In practice, it&#8217;s a nightmare to manufacture, calibrate, and scale across process nodes. The industry has been chasing this for years with limited success.</p><p>There&#8217;s a less radical approach: <strong>near-memory compute.</strong> Here you keep the memory cells as memory cells, but you shove the compute logic as close to them as physically possible. You&#8217;re not eliminating the data shuffling, you&#8217;re just making it very, very short. Samsung&#8217;s HBM-PIM does this, embedding small processing units directly inside the memory stacks.</p><p>Synthara, the Swiss company I spoke to for this issue, sits somewhere in this territory. They call their product &#8220;Compute RAM&#8221; and sell it as IP to chip designers. They don&#8217;t touch the underlying bit cells or claim any analog magic. What they do is tightly couple digital compute logic to standard memory arrays and provide the software stack that makes it all work without breaking your existing toolchain. The efficiency gains, they claim around 100x for edge devices, come from drastically shortening data paths rather than from exploiting exotic physics.</p><p>Manu Nair, the founder, did his PhD in neuromorphic computing but has deliberately walked away from the analog approach. His argument: the industry doesn&#8217;t want analog, it&#8217;s hard to port across process nodes, and you can get most of the benefits with careful digital design anyway. Whether that&#8217;s pragmatism or cope is for you to decide. 
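</p><p>For the curious, the Ohm&#8217;s-law crossbar idea described above is easy to sketch numerically: weights become conductances, inputs become voltages, and each output current is a dot product the physics computes for free. A toy model with made-up values, not a device simulation:</p>

```python
# Toy resistive-crossbar matrix-vector multiply. Each weight is stored
# as a conductance G[i][j]; applying input voltages V[j] produces row
# currents I[i] = sum_j G[i][j] * V[j] (Ohm's law plus Kirchhoff's
# current law). The "compute" happens in the physics, with no ALU.
G = [[0.2, 0.5, 0.1],   # conductances encoding a 2x3 weight matrix
     [0.4, 0.0, 0.3]]
V = [1.0, 2.0, 3.0]     # input activations applied as voltages

I = [sum(g * v for g, v in zip(row, V)) for row in G]
print(I)  # ~[1.5, 1.3]: the matrix-vector product, read out as currents
```

<p>The catch, per the paragraph above, is that every one of those conductances drifts, varies device to device, and has to be programmed and calibrated in the analog domain, which is exactly the part the digital near-memory camp sidesteps. </p><p>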
But the IP licensing model means Synthara could end up inside chips from NXP, Infineon, or even the AI inference startups, without having to bet the company on a single tape-out.</p><p>The interview gets into all of this. What did I learn? </p><ul><li><p><strong>Data movement is the meta-problem.</strong> Not compute, not memory, but the cost of shuttling bits between them. DeepSeek&#8217;s efficiency gains, Apple&#8217;s unified memory, the entire custom silicon explosion: all symptoms of the same constraint. Once you see it that way, the architectural choices across the industry start to make sense.</p></li><li><p><strong>You don&#8217;t need analog magic to win.</strong> Manu did his PhD in neuromorphic computing. His claim: careful digital design with compute logic shoved right next to standard memory arrays gets you most of the efficiency gains without the manufacturing nightmares. The industry doesn&#8217;t want analog. It wants something that scales across process nodes. Yep. </p></li><li><p><strong>Custom silicon is a graveyard.</strong> Cerebras, Groq, Tenstorrent: a decade in, sub-one-percent market share. The IP licensing model that everyone dismisses as &#8220;capping your upside&#8221; might actually be the only viable path for new entrants. Arm proved you can define an entire computing era without fabricating a single chip.</p></li></ul><div><hr></div><p><strong>Lawrence:</strong> Hey Manu, briefly explain who you are and what you do.</p><p><strong>Manu:</strong> My name is Manu. I&#8217;m the founder of Synthara, a Swiss semiconductor company. We are working on a product called Compute RAM, which is set to define the architecture for the next era of AI-capable, scalable, sustainable processors.</p><p><strong>Lawrence:</strong> No one knows what Compute RAM is. Break it down as simply as you can.</p><p><strong>Manu:</strong> The most interesting problem today in processing is how you deal with the extreme demands of AI compute. 
At the heart of that problem is how memory interacts with the part that actually does the compute. Typically, most new chip designs are focused on figuring out how to do that better. So if you think of what Apple&#8217;s doing, what Google&#8217;s doing, they&#8217;re all working on this.</p><p><strong>Lawrence:</strong> There&#8217;s a memory cell and there&#8217;s a logic cell, right? They&#8217;re not connected and there&#8217;s a bus between them. In order to fetch weights, you have to go to memory and then back to the processor. A lot of the time is spent in fetching. Is that right?</p><p><strong>Manu:</strong> Yeah, it&#8217;s really like logistics. If you have a warehouse sitting far outside the city, you need to shuttle goods to the heart of the town a billion times a second. That&#8217;s terribly inefficient. Architecture really deals with how you stagger things in different places in strategic ways so you can be more effective.</p><p>That&#8217;s actually not a bad analogy at all, and what we are offering is a solution that standardizes these decisions. Compute RAM helps our customers make these decisions effectively, and as they make these decisions, we also ensure that their software and system architectures don&#8217;t break. They&#8217;re able to transition to far more intense AI-rich environments in a way that retains everything they have built so far. You don&#8217;t want to reinvent the whole thing just because AI came in.</p><p><strong>Lawrence:</strong> In context then, this idea of in-memory compute or near-memory compute. We&#8217;re talking about geography, literally the location of those cells on a computer chip. You put them either as close as possible so that the logistics road is shorter, or if you put compute within the memory, then there is no road to travel. Is that the right heuristic to use?</p><p><strong>Manu:</strong> Yeah, exactly. That&#8217;s a very good heuristic. 
The other way to think about it is you fetch something, you use it as much as you can. There will always be a hierarchy of memories. There is no way on-chip memory will hold all of the internet&#8217;s data. So there is going to be something that holds all of that, but once you fetch it, you use it as much as you can before you discard it, and then you build different hierarchies.</p><p>That architecture is evolving at a very rapid rate. We think we have a solution that helps companies make that transition when you have to do ultra-low power, very quick, high-performance inferencing, even training potentially.</p><p><strong>Lawrence:</strong> If I&#8217;m thinking about the stack of where you work, I like to think of it as starting with the transistor level, then the circuit level, then the cell which is lots of circuits put together, and upwards. Where exactly in that stack does Synthara&#8217;s solution operate?</p><p><strong>Manu:</strong> We like to work at a level above the transistors. There are transistors, and transistors are assembled into what are called bit cells. Bit cells are the unit building block for a memory array. Then there are memory arrays, and these memory arrays are used in chips which also have processors and other things.</p><p>We start from just above the bit cell. We don&#8217;t design our own bit cell, but we may or may not design the memory array. We certainly include some compute around it. So we are changing that hierarchy a little bit. The customers themselves are designing their chips, so they&#8217;re not buying a new bit cell or anything.</p><p>A customer might say: I&#8217;m designing this new chip that I need for my wearable device, or I&#8217;m putting together this new AI inference chip, and I have this problem where I have some memory area, I have some compute, it&#8217;s too far apart, it&#8217;s costing me a lot of time and energy and area. Help me fix it. 
That&#8217;s where we step in.</p><p>Once we step in, we give them that solution. But we also deal at the software level. We say, look, now that you&#8217;ve put this in place, your software still needs to work. How exactly would your customers, who probably don&#8217;t know or don&#8217;t want to know what Compute RAM is, write their solutions to work on this? We actually provide that integration kit. We start from somewhere at the memory macro level, and then we work all the way up to provide solutions that ensure their customers are completely undisrupted. That&#8217;s key. That&#8217;s the platform. The platform is this new way of putting things together.</p><p><strong>Lawrence:</strong> How does this fit into the new suite of chips people might be aware of? Groq being acquired or half-acquired by Nvidia. They have a different approach to AI inference. There&#8217;s Tenstorrent, there&#8217;s Etched, there&#8217;s Rebellions, all these companies offering AI chips. Help the audience understand how you fit into that.</p><p><strong>Manu:</strong> Synthara&#8217;s stake is a computational memory solution. Today we are starting with an IP product, and all these companies you mentioned would be great customers for us. Some of them, companies like them, are people we are already working with. The thesis is all of these guys are looking to solve this problem amongst others to deal with efficient AI inference.</p><p>We help them get over that particular topic. Look, there is compute, there is memory, you diffuse it, and that&#8217;s usually some 70 to 80 percent of the chip area. We actually help make that quite a bit better, but then they still have to architect around it. 
If the use case is needing very quick response but not trying to do millions of inferences per second, that&#8217;s a very different architecture than one that says I don&#8217;t care how quickly a single token is processed, but I care about crossing thousands or tens of thousands of tokens per second. That&#8217;s a different architecture.</p><p>Our place is to enable all of them to solve the memory barrier issue. We give a solution that actually fits in both contexts. They both get to be better, but they can still differentiate at an architecture level depending on the use case they&#8217;re going after.</p><p><strong>Lawrence:</strong> And it&#8217;s really agnostic to the hardware? The Groq architecture is very different. Tenstorrent is RISC-V. Groq is mainly SRAM. It doesn&#8217;t make a difference to what the memory is or how the architecture works?</p><p><strong>Manu:</strong> There are some scenarios you can always come up with where it doesn&#8217;t fit. But the thing is, industry tends to standardize. The standards that are emerging are very much compatible with what we do, and that&#8217;s a conscious choice in our own design side too. If you think of microcontrollers, the ones that NXP or Infineon would produce, they tend to be similar. Likewise, GPU architectures available at Tenstorrent and even AMD chips have a certain set of architectural choices. Within that universe, we fit very well. Likewise, Groq has made choices in that universe that fit very well with us.</p><p>My claim is not that every possible chip in the world will be supported by Compute RAM no matter what they do. My claim is that the architectures that are emerging are really good candidates for using Compute RAM, and we usually complement what they do.</p><p><strong>Lawrence:</strong> A lot of the thinking around how to offer customers something that Nvidia can&#8217;t normally falls back on the software stack, the so-called CUDA moat. 
The fact that every developer is locked into using CUDA. You mentioned earlier that you can integrate with CUDA. Is this just an API that redirects CUDA operations through your cell? How exactly is it compatible?</p><p><strong>Manu:</strong> There are two ways to answer this. One is technical. The technical one is pretty much what you said. At the end of the day, all these operations are lowered into some kind of computational primitive. Depending on the abstraction, it could at the lowest level be just multiply-accumulate. It could be a dot product, it could be a matrix multiplication, it could be just a convolution call or a full decoder transformer layer. Depending on the API, you can integrate.</p><p>But I think the interesting thing is what are these APIs. We are not even inventing our own APIs. There are industry standards emerging with Nvidia and Microsoft and others participating, and we essentially hook into that ecosystem. Everything we do, at some point we expect to even contribute upstream. The thing we are doing here is to really help the community absorb Compute RAM into how they currently work. PyTorch code or TensorFlow code or whatever they put together should be lowerable into a Compute RAM-based system. It&#8217;s a problem that the industry has, and it&#8217;s not like we are solving it ourselves. We hook into that.</p><p><strong>Lawrence:</strong> But Compute RAM would be proprietary, and you&#8217;ll sell IP blocks like Arm. Whereas if I were the industry, wouldn&#8217;t I want an equivalent of a RISC-V for Compute RAM? Wouldn&#8217;t I want a compute-in-memory solution that I don&#8217;t have to pay for?</p><p><strong>Manu:</strong> It&#8217;s a bit like you write C code and that can be compiled to RISC-V or Arm. Our customers will write the C code, and as long as they respect the rules that the C standard lays down, it&#8217;ll work. Ours is the same. At some point you enter the proprietary realm of compute. 
But until that step, you don&#8217;t need to be tied to us.</p><p>If there is yet another engine that does the same primitives that we support, you are perfectly free to use that. My claim is it won&#8217;t be as efficient in area or energy as we can be, but we don&#8217;t lock them in or prevent them from looking for alternatives.</p><p><strong>Lawrence:</strong> You mentioned area efficiency. Let&#8217;s get into that because people might be wondering what the actual numbers are. I understand the fundamentals that you put the memory and the logic closer together so you can go faster and more efficiently. But what are we talking about here? What&#8217;s the system-level energy efficiency you&#8217;ll get? What is the die size reduction?</p><p><strong>Manu:</strong> When I answer this, I always like to think of the next best alternative, because that&#8217;s the easiest way to make an apples-to-apples comparison. We have an extreme diversity of use cases. One extreme is things like smart glasses and hearing devices. These are typically built on some kind of microcontroller-like platform. On these, we expect something like a hundred times or even greater improvement in energy efficiency.</p><p><strong>Lawrence:</strong> Everyone loves big numbers, but what are we actually talking about? TOPS per watt? Could you give me the actual numbers?</p><p><strong>Manu:</strong> What would consume perhaps some millijoules is reduced to microjoules, tens of microjoules, hundreds of microjoules. If it&#8217;s in wattage, you go from hundreds of milliwatts to potentially even sub-milliwatt. In terms of inference time, let&#8217;s say you have a live audio stream and you&#8217;re doing some kind of complex noise cancellation. Now the battery life of the device can go from running for six or seven hours, which is typical, to perhaps a lot longer. A four times improvement in battery life is something that could happen. That&#8217;s the impact. 
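</p><p>Those millijoule-to-microjoule figures translate into battery life in a straightforward way. Here is a rough sketch; every number in it (battery size, frame rate, fixed system draw) is an assumed illustrative value, not a Synthara datapoint:</p>

```python
# Rough battery-life sketch for an always-on audio model on a wearable.
# Every constant here is an assumed, illustrative value.

BATTERY_J = 100 * 3.6   # ~100 mWh wearable battery; 1 mWh = 3.6 J
FRAMES_PER_SEC = 10     # model runs on every 100 ms audio frame (assumed)
SYSTEM_DRAW_W = 0.003   # ~3 mW fixed draw for radio, codec, etc. (assumed)

def battery_hours(joules_per_inference):
    """Hours of battery life given the energy of one inference."""
    inference_w = joules_per_inference * FRAMES_PER_SEC
    return BATTERY_J / (inference_w + SYSTEM_DRAW_W) / 3600

print(round(battery_hours(1e-3), 1))   # ~1 mJ/inference  -> roughly 8 h
print(round(battery_hours(20e-6), 1))  # ~20 uJ/inference -> roughly 31 h
```

<p>Under these assumed numbers the inference energy stops being the dominant drain, and battery life is set by the rest of the system instead. 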
It completely changes the product category and positioning as far as the customer is concerned in these use cases.</p><p><strong>Lawrence:</strong> It&#8217;s intriguing because I imagine the next ten years will feature heterogeneous compute, where you might have a neuromorphic chip design for extreme low-latency decision making, or a photonic chip for latency or extreme throughput, various different configurations, analog chips, whatever it might be. A whole bunch of different designs all claim much better energy efficiency, a hundred times, a thousand times better. What you are saying is you are getting one or two orders of magnitude improvement without fundamentally changing the underlying chip design. Just update. You still get the same benefits without any change in the silicon?</p><p><strong>Manu:</strong> Yes, but I think we are not as different from the others as your description might have it seem. Quantum computing we keep on the side, and photonic is mostly interconnect that we keep on the side. But all the analog and neuromorphic stuff and what we do are related. My PhD was on neuromorphic computing. I did all this spiking and analog stuff with resistive RAM and other things in my past life.</p><p>What we have done in Synthara is to distill all of those ideas and cast them in a format that is compatible with how the industry has so far operated, both from a process perspective as well as system architecture and software perspective. We have gotten rid of all the issues that I thought, and my co-founder thought, have restricted these technologies from being adopted at a large scale.</p><p>Some of these technologies are being adopted by companies for themselves. They create some flavor of in-memory compute or neuromorphic compute or analog compute, make a chip around it, try to go to market. 
Our take was: look, I can do that, but realistically, how many of those companies have a hope to succeed?</p><p>If you think about it, Nvidia, Intel, and AMD are probably the only ones who have a handful of products that they sell at scale. Almost every serious chip company has to sell so many variants and flavors that it actually is not viable to make a huge custom silicon product easily.</p><p>The market that we are targeting, the problem we&#8217;re solving with Compute RAM, is how does anybody who has this problem absorb this? For Synthara, the cleanest way to capture that huge value proposition is to provide a platform. Now it&#8217;s an IP product today, but we could do something else. There are ideas at play. But the core premise is this problem is persistent across different use cases, and these ideas from analog and in-memory compute are all relevant for them. How do we make it accessible for these companies who still want to keep everything they have done, because their customers at the end of the day buy it for that?</p><p>If you are buying a chip from NXP for an automotive use case, you expect all the automotive quality that NXP delivers, but you really just want it to be more energy efficient. Can I enable an NXP to do that? That is my pitch.</p><p><strong>Lawrence:</strong> Not custom silicon. Your bet is slightly different. If we think about what&#8217;s happening in the industry right now, you have all the hyperscalers building their own custom silicon or their own AI inference chips, predominantly training chips and inference chips in Amazon&#8217;s case. All the hyperscalers plus say ten reasonably well-funded AI inference chip companies. Your claim is that&#8217;s all fine, but the truth is to be a successful chip company, you need to offer multiple chips, you need to be in generation five or six, and then you need to be offering multiple chips to serve different use cases. No chip company&#8217;s going to get there. 
So how do you get the same benefits without designing your own silicon?</p><p><strong>Manu:</strong> At least how do we get to a stage where we can hope to get there in a sensible way? With 20 people sitting in Zurich and the stage we are in today, I am not going to make a custom silicon product. So the most effective way for me to get into the market and have that influence spread is this strategy, and that takes us there.</p><p>Now, we don&#8217;t want to make a new Intel server chip competition. That doesn&#8217;t make sense. It&#8217;s a use case that exists, and we need to look at new ways to get into the market. Our business model could evolve. But the core premise is that there is a transition happening in the industry. Compute and memory are coming together. There is a computational memory thesis that&#8217;s emerging, and we need to enable that. We can create a lot of value for the industry by doing it.</p><p>How do we monetize and at what stage is the question that is being answered by Compute RAM today. Today we are saying, look, there are customers making hearing devices desperate for energy efficiency. There are customers looking at data centers desperate for area and energy efficiency. Can we help them? Yes. And would they take the risk to adopt Compute RAM? Because it is a risk. It&#8217;s a small company sitting in Zurich. What if we change what we are doing?</p><p>The answer is yes, they are taking that risk because the reward is ridiculously large. And there are contractual and other ways to deal with the risk.</p><p><strong>Lawrence:</strong> The alternate strategy is maybe the Fractile strategy, which is to do the silicon yourself, tape out the silicon, get it working, and then try and win a hyperscaler or win a customer with your own silicon and try and replace Nvidia, or now Groq, for some of these inference use cases. That strategy seems obviously much higher capex, much higher risk, but the payoff is greater. 
The actual business model of selling a product is more lucrative than selling IP. Is that a fair assessment of the choice you&#8217;ve made?</p><p><strong>Manu:</strong> I don&#8217;t know if I agree with that. I agree on that side, so yes, if they get it to work, they can actually get a good payoff. No question there. But I am saying that payoff is accessible to us too. If a company is going to be acquired by Nvidia or whoever, it&#8217;s not like they&#8217;re buying it for the business. They&#8217;re buying it for the technology, the team, the concept, the architectural implications, all of it, which we have. Ours is a very clean, distilled thing that is good to go. It&#8217;s not corrupted by all the other weird decisions we had to make to go to market.</p><p>So that is still accessible. But in addition, I also have access to just being a clean IP business. Now I am also a product that could be used by NXP, could be used by Infineon, could be used by NSA, could be used by other AI competitors to the names you mentioned. Actually, some of our customers are looking at non-AI use cases. It&#8217;s DSP, it&#8217;s some performance DSP products. So we have a broader set of use cases to go after.</p><p>Our exit opportunities are to hyperscalers because they want to differentiate. Our exit possibilities are to IP providers like Synopsys, Cadence, and Arm. Our exit possibilities are to semiconductor companies like Analog Devices and TI who are also looking to solve these same problems.</p><p>We have been lucky that we are in Switzerland because we got this five to ten years to work on this problem in almost ideal conditions. We got a tremendous amount of grant and other funding. So we actually managed to spend all this time figuring out how to build this hard thing and assemble it. Why would a company whose main business is not to do in-memory compute spend the same amount of time and energy figuring all of this out? So we are there. 
If today Google says, okay, it&#8217;s too crazy to let these guys alone, they could buy us. Or they&#8217;ll say, no, it&#8217;s too expensive. But there are contractual ways to deal with the not-invented-here syndrome. We do tackle that in our contracts.</p><p><strong>Lawrence:</strong> Let&#8217;s think a little bit about the market opportunity. You say you&#8217;ve been doing this for many years and you started in the neuromorphic space. That was your PhD. In Switzerland obviously, as Intel has the neuromorphic unit there. SynSense comes out of Switzerland as well. It&#8217;s always struck me that neuromorphic, analog in particular, those designs are best suited to DSP, extreme low-power use cases, which typically are at the edge, without battery ideally. So we could think of drones, but we could also think of glasses, watches, other use cases.</p><p>But actually I&#8217;ve seen in the last year to eighteen months almost all of those companies, Innatera being another one, probably SpiNNcloud, now focusing on the data center. Because the data center as a go-to-market has large buyers with much larger budgets and an urgent power consumption problem. How do you see those two markets? On the one hand the edge, which feels first-principles like it should be the appropriate market for what you&#8217;re building, versus where the demand is today in the data center.</p><p><strong>Manu:</strong> There are two aspects at least that you mentioned that I probably have to respond to. One is the analog and neuromorphic being juxtaposed with what we do.</p><p>At the end of the day, it&#8217;s a chip design problem, right? The question is not if it&#8217;s analog, digital, neuromorphic, whatever. Most of the techniques that you see in neuromorphic have actually been done by chip designers in the past. The ideas are not radically new. It&#8217;s just maybe formulated and used in a certain way.</p><p>I would say it doesn&#8217;t need to be analog if it doesn&#8217;t need to be. 
Maybe I&#8217;ll say it this way: if it doesn&#8217;t need to be analog, the edge guys would prefer not to use analog. Even when I was at Analog Devices, there was a constant push toward reducing the amount of analog on chip. You do more DSP, you reduce the analog. At some point, the argument was you still need something to talk to the world, but that&#8217;s the only thing that has to remain analog.</p><p>I don&#8217;t buy the argument that you need to be analog or neuromorphic to be energy efficient. That&#8217;s actually the whole reason behind the story of Synthara. I don&#8217;t think analog inherently makes anything more energy efficient. I think we have found ways to do quite well. I believe that our solution is perfectly well suited for the edge where people do analog and other things. And I think in some ways our strategy is much more efficient because even analog chips need to scale. Yes, they might be doing 65 or 45 nanometer processes, but they need to move. And analog is hard to take from one process to another. Analog is noisy. It&#8217;s very hard.</p><p><strong>Lawrence:</strong> It&#8217;s not to say that analog or neuromorphic are better than what you are saying. More just that at an objective level, low power was originally an edge requirement, and actually the data center increasingly needs it too. As you and others move into the data center, every single chip design IP block is going after the same target. So if you are the TSMC of in-memory compute, you are getting calls from every single startup on earth. It makes you uber competitive.</p><p><strong>Manu:</strong> Yeah, exactly. Just as you spoke, an idea struck me: the claim that analog is inherently more efficient is primarily an academic one. An industry implementation that validates it is not mainstream. 
I can&#8217;t think of one right now.</p><p><strong>Lawrence:</strong> Is Mythic the only one that&#8217;s actually commercially deployed?</p><p><strong>Manu:</strong> Yeah, but they&#8217;re completely rethinking what they&#8217;re doing, so I&#8217;m not sure what they&#8217;re doing now. The old approach did not particularly work with the flash memories and all that.</p><p>Coming to data centers, the need is clear now. The business has to follow where the demand is. Data centers need to be more energy efficient, tokens have to be cheaper. It&#8217;s a problem that has to be attacked and is being attacked from multiple angles, including software. Even if the chips were the same, just the effectiveness of the system architecture is dropping token cost at an exponential rate.</p><p>Coming to your question: yes, these guys should go there because there is money to be made. Will they actually make money is an interesting question. I think it would be very hard for the analog guys to get there just based on economics. Most of these data center chips are at this point essentially the size of a full reticle, and you want to stuff as much compute as you can into that reticle.</p><p><strong>Lawrence:</strong> What is a reticle for those in the audience who might not have heard the word?</p><p><strong>Manu:</strong> When you produce a chip, the manufacturing process can only draw a chip up to a certain maximum size. That maximum field is the reticle; it cannot get larger. There are some reasons for it to be in a certain shape. It&#8217;s physics, let&#8217;s just say geography. You have a certain maximum area, and you stuff as much compute as you can into it. The first-order requirement for any customer is: can I meet the demand? It doesn&#8217;t even matter how much it costs. I have people who want to do a million tokens per second. Can I serve that demand?</p><p>For that, you really need compute density. There is no way around it. Anything that improves compute density will be attractive. 
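</p><p>For scale: the reticle field on current lithography scanners is about 26 mm &#215; 33 mm, roughly 858 mm&#178;, which is why the biggest monolithic dies all cluster near that size. A toy sketch of why area efficiency converts directly into compute density (the tile size and I/O overhead are made-up illustrative values):</p>

```python
# The reticle limit: a lithography scanner exposes one field per shot,
# at most ~26 mm x 33 mm, so a monolithic die tops out near ~858 mm^2.
# Tile area and overhead below are made-up illustrative values.

RETICLE_MM2 = 26 * 33   # ~858 mm^2 maximum die area
IO_OVERHEAD_MM2 = 100   # area reserved for I/O, PHYs, etc. (assumed)

def tiles_per_die(tile_mm2):
    """How many identical compute tiles fit on a reticle-limited die."""
    return int((RETICLE_MM2 - IO_OVERHEAD_MM2) // tile_mm2)

# If tighter memory-compute integration shrinks a tile that is mostly
# memory from 4.0 mm^2 to 3.0 mm^2, the same die carries more compute:
print(tiles_per_die(4.0))  # 189 tiles
print(tiles_per_die(3.0))  # 252 tiles, ~33% more in the same reticle
```

<p>Wafer-scale designs sidestep this limit by stitching many reticle fields together on one wafer. 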
Now, once you have compute density, the next question is: do I need a nuclear plant or can I run it on a regular power grid? So energy efficiency comes into play. It&#8217;s mostly a profit margin topic or maybe just a viability of the business topic.</p><p>These are the two key dimensions all of these companies operate on, and if we can solve that well, it&#8217;s great. An analog or neuromorphic approach that makes area efficiency lower is not great. But approaches where you say, look, I&#8217;m going to have a single bit cell that can somehow store eight bits of memory in a reliable way, that looks like it has some interesting possibility. I would mostly look at it from that perspective. When I look through those filters, some companies I find attractive, some I find a bit of a hype wagon.</p><p><strong>Lawrence:</strong> Which do you find attractive without having to slag off the ones you don&#8217;t?</p><p><strong>Manu:</strong> I really like what Cerebras is doing because it addresses that topic: there is so much demand for compute that it doesn&#8217;t matter how much you spend on the wafer, and if there is some yield issue, you can deal with it. Cerebras has a very interesting approach.</p><p><strong>Lawrence:</strong> They do wafer-scale compute, where they literally use an entire wafer as a single chip and try and fit as much compute on that wafer as humanly possible. They&#8217;ve done it pretty successfully in terms of yield as I understand.</p><p><strong>Manu:</strong> Yeah. Cerebras is actually an example of a customer that would be great for us. We would actually help them stuff a lot more compute on those wafers should they use approaches that we have.</p><p>TPUs are interesting. I think they identified the use case quite early on. It&#8217;s not a startup, but considering that they&#8217;re entering a new market, I think TPU has some interesting ecosystem play that can actually turn out very well. 
I don&#8217;t want to say other names because then it becomes problematic.</p><p><strong>Lawrence:</strong> What&#8217;s interesting is that any of them in theory could be a customer. Many of them may consider themselves competitors in some sense because they&#8217;re selling custom silicon. You&#8217;ve got this really interesting frenemy situation in that you can make their products better, but you can also make their competitors&#8217; products better. That&#8217;s an interesting strategic tension for you.</p><p><strong>Manu:</strong> I agree. Direct competitors will find us threatening, and that creates some interesting opportunities for us. But I also see that even within a company, not to say we work or don&#8217;t work with them, a company that makes both data center chips as well as glasses, the team using Compute RAM for the glasses would use it very differently than the team using it in the data center. For us, even serving a leader or challenger in a specific market is actually a perfectly valid strategy. It still allows us to grow pretty big.</p><p>You can always architect around your competition because you still have a whole layer of architecture that you can work on to optimize for different things. All data center chips are not the same. Data center chips designed to only run Llama all the time are a very different architecture than ones that don&#8217;t know what LLM they would be running or if it would be an LLM. That architecture is very different. I don&#8217;t even know if all companies are necessarily even direct competitors all the time. I think we kind of found a nice space. We don&#8217;t threaten most people in my view.</p><p><strong>Lawrence:</strong> Ignoring the IP versus silicon dichotomy for now, what is it that the industry, even just observers to the semiconductor industry, are getting wrong about the shape of how this is going to develop? 
One that springs to mind is the idea that bigger and bigger data centers, more and more power, nuclear power stations as you mentioned, is how someone will win in the AI race in the next decade. And obviously the counterpoint is actually eventually things will move to the edge, but we&#8217;re not there yet. Are there other things that you think the industry is getting wrong?</p><p><strong>Manu:</strong> I think the interesting thing about what&#8217;s happening today is that people are wrong in magnitude. They&#8217;re not wrong in the directionality of things. It is true you need more and more compute, you need more and more power. But I don&#8217;t think the power will be required in huge amounts because compute is expensive; compute costs are dropping pretty rapidly. It&#8217;s more likely because people are just going to be doing more things, so you need a lot more compute just to keep up with that demand.</p><p>The one part that I don&#8217;t have an opinion on is whether it is unsustainable. I can argue both ways because it&#8217;s a qualitative argument. You cannot really put numbers on it. Yes, you can argue that AI is mostly just going to be used for generating cats and dogs and just more Instagram video feeds, that it&#8217;s not really useful, it&#8217;s bad for the environment. But on the other hand, it could genuinely lead to quite a few efficiency improvements. If all of the cars in the world are self-driving and they&#8217;re constantly navigating, it can make transportation somewhat more efficient.</p><p>I think we are really talking in hyperbole here. But I can say industry as a whole will tend to optimize for profitability, and that is achieved by doing things more efficiently, not less efficiently. Therefore, I think just because of the incentive structure of the world, AI will end up becoming useful rather than harmful. It has to be. 
Otherwise, we&#8217;re just going to lose money.</p><p><strong>Lawrence:</strong> So you said two things there. What I think is interesting in particular is this dichotomy: we have the GPUs and we just scale them up, and then we need to power them with ten-gigawatt nuclear power plants. Your note is yes, and we will need the ten gigawatts not just because the GPUs are inefficient, but rather because there&#8217;s going to be so much more computation. So the bet is you need both nuclear power plants and much more energy-efficient chips in order to serve the demand in a decade.</p><p><strong>Manu:</strong> Yeah. That&#8217;s my expectation.</p><p><strong>Lawrence:</strong> Very strong. Okay, to wrap this up, I think there&#8217;s a couple of interesting things I&#8217;ll take away. I think investors and just industry broadly underestimate the IP business model despite Arm. Arm is sort of an anomaly to many people. People tend not to like IP business models because you&#8217;re capping your revenue really, or your TAM. But I think what you said is a really interesting point, and you make it well: you can&#8217;t just make a new chip and beat the competition.</p><p>I mean, you could try, but not only is it extremely capex intensive, if you look at how long Cerebras has been going, how many years, what&#8217;s their percentage market share, what was Groq, what&#8217;s Tenstorrent, we&#8217;re talking sub one percent, right? And that might be a decade in. Your claim is that you can&#8217;t just do that. It&#8217;s not an option actually if you want a successful business. And in fact what you are doing with IP may be the only viable strategy for the market as it exists today.</p><p><strong>Manu:</strong> Certainly for us. If I had the reputation of an Andrew Feldman or one of the other big shots, I could probably get 500 million and do it. 
But given where we are at, this is the most effective path for us.</p><p><strong>Lawrence:</strong> But even then, right?</p><p><strong>Manu:</strong> Yes. You get one shot at it, and you can fail. Once you kind of decide you&#8217;re going to put together this product, you cannot really test it in the market, and for you to be hugely successful, it somehow has to become the de facto option in that domain. That&#8217;s not easy or trivial.</p><p>It&#8217;s a risk-reward issue, but my take is broader. If you look at what drives value, what Arm is today, yes, it&#8217;s this big thing. But if you think of every transition in the industry, it has always been driven by one new creative concept. Qualcomm, I think, started off that way. I could be getting some part of the history wrong, but a good part of their value proposition was on the IP. IP is still a key part of that business model.</p><p><strong>Lawrence:</strong> For Synopsys, IP is a huge part of the business.</p><p><strong>Manu:</strong> Yeah. The story, if I remember it right, is they started doing this, then they said no, I need to go up the value chain because nobody else is able to understand this. They kind of kept stacking it up until they cornered the whole thing. Intel was also kind of IP-driven. There was nobody else making the chip. That&#8217;s why they decided to make their own stuff. That&#8217;s not exactly true, but I think you get the theme.</p><p>There is a change happening and there is an opportunity to drive that change. How you monetize it is almost a secondary thing. An investor in Synthara is investing because they think we can define this platform architecture meaningfully. Now, monetizing it has so many ways and mechanics. Yes, we start with IP, but we can do so many other things. There are things that we are cooking that are more than just being an IP vendor. Also, IP is a spectrum. There is IP and there is IP. 
It&#8217;s really not, in my view, the right filter to look at Synthara through.</p><p><strong>Lawrence:</strong> Maybe the final point would be about in-memory compute. When you say there are different platform shifts and various innovations in the industry that have enabled new products, is the bet here on in-memory compute? Is that the change you see coming?</p><p>If I&#8217;m to try and frame that to the average person on the street, well actually they would be the wrong person to ask. Let&#8217;s say the average policy maker that may want to know the future of computing. I don&#8217;t think they&#8217;re thinking in-memory compute. I don&#8217;t think that&#8217;s really a word in their vocabulary. Should they know it? Should that be what we&#8217;re talking about as the innovation?</p><p><strong>Manu:</strong> No, I don&#8217;t think people should think about in-memory computing at all. Again, depends on the abstraction, but a customer of ours is thinking: I have this problem with compute and memory, and I need something that kind of breaks it. In-memory computing or computational memory or whatever you call it is essentially solving that piece of the puzzle. If you can solve it in different ways, that&#8217;s fine too.</p><p><strong>Lawrence:</strong> But no one&#8217;s saying I have a problem with my compute and memory. They&#8217;re not saying that. They&#8217;re saying I have an energy problem. I want to reduce power consumption.</p><p><strong>Manu:</strong> Sure. The CEO says I have to reduce power consumption. Then they go to their architect who says, okay wait, but my power problem is because I have this compute-memory architecture that we have been doing for a few decades now. I need to change that. And that transition is what the industry is going through now.</p><p>Even if you think of the Apple chips, the unified memory is evidence that people are thinking about memory and compute differently. 
The fact that there are so many custom silicon projects is primarily driven by this problem: I cannot just build chips the old Intel-style way, I need to break it apart and rebuild it. It&#8217;s all compute and memory.</p><p>That is the big change today. This whole heterogeneous compute thing, nobody likes heterogeneous compute. It&#8217;s cheaper to do homogeneous compute. You&#8217;re doing heterogeneous because there is no other way around it. And most of it has to do with just shuttling data back and forth. Optics: the biggest case for it is data movement.</p><p>I would say it&#8217;s data movement that is the theme of this era of compute. And in addition to Compute RAM, you still have to deal with the problem of talking to off-chip memories and so on. The caching strategies, all of that.</p><p><strong>Lawrence:</strong> Yeah, that&#8217;s data movement, shuttling data around. My mind immediately thought: there are all the different levels of abstraction. If your core principle for designing an entire new computing system was stop moving data around, you would do as much as possible at the sensor, as much as possible at the edge. You would shuttle it back to the data center as little as possible. You&#8217;d try and do as much from the earbuds to the phone before you would go all the way back to the data center.</p><p><strong>Manu:</strong> Yeah.</p><p><strong>Lawrence:</strong> You&#8217;d do that at all levels of abstraction, all the way down to the chip level, which is where you are operating. And then as much as possible, don&#8217;t go off-chip to DRAM. You want as much on-chip SRAM as possible. It&#8217;s turtles all the way down, as they say.</p><p><strong>Manu:</strong> Yeah, exactly. Data movement. That&#8217;s the whole thing. All the PhDs, if you really distill it down, even a good part of the AI work on the compute side has to do with data movement. The thing that DeepSeek did was data movement optimization. 
Really, all of it essentially comes down to this: they figured out a way to not move data around as much.</p><p><strong>Lawrence:</strong> I wanted to end it, but I can&#8217;t leave that hanging. I don&#8217;t know what you mean. You&#8217;re going to have to explain what DeepSeek did exactly there for me.</p><p><strong>Manu:</strong> The innovation around using old GPUs and how they managed to reduce the token cost. I don&#8217;t know if you remember, Nvidia shares tanked when DeepSeek made this announcement of far cheaper training.</p><p><strong>Lawrence:</strong> Exactly. And I know it was cheaper, or in theory, they didn&#8217;t have the total cost of the training and the salaries and so on. But you say it was because of data movement. I don&#8217;t follow.</p><p><strong>Manu:</strong> Yes, because the brute force transformer model would not reuse a fetched weight as much as it did with the DeepSeek strategy. This whole thing about mixture of experts kind of makes it so that when you fetch something, you reuse it. Think about it like this. Let&#8217;s say it costs a hundred units to fetch something from outside the chip. Once you bring it on-chip, to do something on it costs you one unit of energy.</p><p>Now if you can fetch something and use it a thousand times internally, then that&#8217;s a hundred plus a thousand. That&#8217;s your cost. It&#8217;s only 1,100 units. But if you had to pay that hundred-unit fetch cost a thousand times, that&#8217;s now a hundred-thousand-unit cost. So you just reduced the cost of compute by about a hundred times.</p><p>A lot of innovation around how you do compute and architect your LLMs is being done primarily with this goal. And this is huge because these things enable edge computing pretty much as significantly as compute itself does. 
Being clever with software has always been the theme of computing.</p><p><strong>Lawrence:</strong> The final point here, another way of thinking about this is the extent to which software and algorithmic improvements drive efficiency more so than any hardware change. We want to change the hardware as little as possible because we have a trillion-dollar machine churning out CMOS silicon at increasingly smaller feature sizes. The more we can do in software, obviously the more programmable, updateable, flexible it is. Do as much as possible in software, which I guess is also what you&#8217;re doing. The vast majority of your day-to-day is the software side, so it maps.</p><p><strong>Manu:</strong> Yeah, exactly. We spend a lot of energy trying to figure out everything in that story. We reduce data movement in general. Yes, we have Compute RAM, but our software deals with that problem too. We really like that Compute RAM, or whatever you call it, is at the end of the day optimizing data movement, and we are finding ways to enable our customers to benefit from that.</p><p><strong>Lawrence:</strong> I&#8217;m not a marketer, and I don&#8217;t know what the front page of your website says, but &#8220;Stop Moving Data&#8221; feels like a nice tagline. It&#8217;ll certainly be the tagline for this conversation. I think that&#8217;s about right.</p><p>All right, thanks Manu. Appreciate your time today.</p><div><hr></div><p><strong>Debrief</strong></p><p>So, what are we all thinking? There&#8217;s something genuinely useful about Manu&#8217;s framing. The idea that data movement is the unifying constraint across the entire stack, from DeepSeek&#8217;s mixture of experts down to the bit cell, is the kind of synthesis that makes you see familiar problems differently. DeepSeek&#8217;s cost breakthrough, Apple&#8217;s unified memory, the explosion of custom silicon projects: these are all symptoms of the same underlying constraint. 
&#8220;Stop moving data.&#8221;</p><p>The broader point is that computational memory is no longer an academic curiosity. Samsung demonstrated HBM-PIM back in 2021. SK Hynix has GDDR6-AiM. Google&#8217;s TPU architecture already reflects near-memory thinking, and each generation pushes compute closer to the data. I&#8217;d expect the next wave of hyperscaler silicon, TPU v6, Amazon&#8217;s Trainium 3, whatever Microsoft is cooking, to incorporate some flavour of in-memory or near-memory compute as a core architectural feature. The economics are too compelling. When your GPU spends 70% of its cycles waiting for weights, you don&#8217;t need a PhD to see where the optimisation opportunity lies.</p><p>The question is who captures that value. The Nvidia-Groq deal suggests the incumbents are paying attention. But there&#8217;s also a path for IP players who can offer computational memory as a licensable block, the way Arm offered CPU cores. Synthara is betting on that path. For a 20-person team in Zurich, custom silicon was never really an option, and the graveyard of well-funded AI chip startups suggests the odds aren&#8217;t great even with capital.</p><p>Europe has been searching for an AI chip champion, and the most likely path is not trying to out-Nvidia Nvidia. It&#8217;s finding a different game. Arm proved that an IP licensing model can define an entire computing era from a standing start. The semiconductor industry is going through another architectural transition, and at least this is a game European companies have won before.</p>]]></content:encoded></item></channel></rss>