🔭 E01: Launching State of the Future: The World's First Deep Tech Tracker
Computer vision, metaverse, generative AI, quantum hardware, data infrastructure, drones, vertical farming, and single-cell sequencing. If it's deep tech, it's here
Welcome to the first edition of the State of the Future substack!
This is a spin-off from Lunar Ventures' Deep Thoughts newsletter, which you know and love. This one is the sidecar to our new deep tech tracker: State of the Future.
Today's edition is a good example of the breadth of coverage you can expect. We cover computer vision, metaverse, generative AI, quantum hardware, data infrastructure, drones, vertical farming, and single-cell sequencing. If it's deep tech, you will find it here.
Next week, I'll explore three of the most undervalued technologies today: deep geothermal energy, IVF, and brain recording. You will have to wait to hear about the most *over*valued. It may or may not include Bitcoin and nuclear fusion…
Today, you get 2+ years of work condensed into 3 macro trends.
Peace, Lawrence
🔮 What is State of the Future?
✍️ Essay: Three Things We Learned from the Future
Solving the productivity paradox
Software ate the digital world; now itās coming for the real one
How to Stop Worrying and Love the Future
✍️ Interviews
Ben Mildenhall, Research Scientist at Google and Co-inventor of Neural Radiance Fields (NeRFs) on the State of Neural Rendering, Generative AI, and the Metaverse
Dirk Zechiel, CEO of Quantagonia on the State of Quantum Optimization
Cesare Cugnasco, CEO of Qbeast on the State of Data Lakes and Big Data
🗞️ News
Suppressing quantum errors by scaling a surface code logical qubit
Amazon's Drone Delivery Dream Is Crashing
Upward Farms throws in the towel ten years after founding the vertical-farming business
🔮 What is State of the Future?
The result of over two years of work, State of the Future is Lunar Ventures' attempt to bring some rigour to deep tech investing. We started the project as an internal tool. As a venture fund, we are very good at assessing technologies in the context of investment opportunities. That means we have many extremely deep but unconnected analyses of markets and technologies. I wanted a framework that connected our existing research and extended it to new technologies. At one point, I considered calling it "epistemic infrastructure," which I still like but maybe slightly oversells what we've done.
As we collected information on a range of diverse technologies under the broad banner of "deep tech," we struggled to find comprehensive, non-paywalled, credible sources of information from a startup and investor perspective. It became clear that the research would be useful for many people, and generally, what is good for deep tech is good for us. So we are essentially open-sourcing all our work and putting it into a "deep tech commons."
State of the Future is now a deep tech tracker with nearly 100 technology assessments. Everything from fully homomorphic encryption to Bitcoin to VTOLs. We still have 50 technologies to complete, including fan favourites like chiplets, flow batteries, and space-based solar power. Each technology is summarised in about 1,000 words and scored across six dimensions: viability, drivers, novelty, diffusion, impact, and timing. You should expect spicy predictions, too: "By 2030, large language models will have increased labour productivity by 20% in OECD countries versus 2022 benchmark levels."
We also augmented the assessments with expert interviews, news stories, research, and relevant startups. We intend to add some quantitative data in 2023, but State of the Future is predominantly a qualitative tool. We remain unconvinced that the complexity and uncertainty of technological progress can be captured in metrics. The intention behind a straightforward 1-5 scoring system is that it gives some sense of scale but ultimately acts as a starting point for a discussion.
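If you want to picture the shape of an assessment, here is a rough sketch in code. The field names and example scores are illustrative only, not our published schema:

```python
from dataclasses import dataclass, field

# The six 1-5 scoring dimensions used in State of the Future.
DIMENSIONS = ("viability", "drivers", "novelty", "diffusion", "impact", "timing")

@dataclass
class Assessment:
    technology: str
    summary: str                        # the ~1,000-word qualitative write-up
    scores: dict[str, int]              # each dimension scored 1-5
    predictions: list[str] = field(default_factory=list)

    def __post_init__(self):
        for dim, score in self.scores.items():
            assert dim in DIMENSIONS, f"unknown dimension: {dim}"
            assert 1 <= score <= 5, f"{dim} score must be between 1 and 5"

# Illustrative example only -- these are not our published scores.
llms = Assessment(
    technology="Large Language Models",
    summary="...",
    scores={"viability": 5, "drivers": 5, "novelty": 4,
            "diffusion": 4, "impact": 5, "timing": 5},
    predictions=["By 2030, large language models will have increased labour "
                 "productivity by 20% in OECD countries versus 2022 levels."],
)
```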
At launch, the site will be maintained by the Lunar team. Contributions will mainly be through interviews and the Substack community forum. As we move into Phase 3: Open, we want to let the community submit new content and enable all the content to be accessed through an API. Eventually, we want the community to maintain State of the Future.
While we work on that, get in touch to learn more and tell me how the assessments are wrong. This is, after all, a starting point for discussion. I'm lawrence@lunar.vc.
✍️ Three Things We Learned from the Future
Future positivism: Five decades in, one decade left
We learned many things after assessing over 100 technologies for State of the Future. The most important insight is that technologies do not progress in isolation; they combine and reinforce one another in novel ways. Honestly, this was the conceit behind the project, but it was exciting to have it validated over the last few years. The best opportunities for investors, founders and storytellers are at the intersections. Of course, this is already a well-understood idea, but you have to do the work to find those intersections. They aren't always obvious, especially to experts in the field. What are the crossovers between optogenetics and surface-acoustic chips? Or spatial audio and the Metaverse? Or Web3, DAOs and LLMs?
Aside from this meta point, three trends jump out: first, large language models will likely solve the productivity paradox. Second, the last two decades were dominated by the relentless growth of bits, aka "software is eating the world." The low-hanging fruit has been eaten; software now meets atoms, and deep tech will be the main story of the 2020s. Last, despite the prevailing pessimistic narrative of tech and its failures, we are on the cusp of huge changes. The last 50 years were spent laying the foundations for what's to come.
We have built a global computing grid. Now we're connecting AI. If we survive AGI, we should be very excited about our future.
Solving the productivity paradox
One of our most important conclusions we reached last year, but now, at launch, the rest of the world has caught up: artificial intelligence driven by large language models (LLMs) will probably be one of the most impactful technologies of this decade and of our lifetimes. It became clear last year that the performance of AI models could be improved by making them bigger. The emergent capabilities of these larger models have surprised almost everyone. And now, in the last few weeks, we are beginning to see what we can do by connecting these LLMs (see AutoGPT, BabyAGI, and Microsoft's JARVIS).
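For the unfamiliar, "connecting LLMs" mostly means a plan-act-observe loop in which each model call sees the results of the previous ones. Here is a minimal sketch of the pattern behind tools like AutoGPT; the `call_llm` stub is a placeholder for a real model API, not an actual client:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model API call (e.g. an HTTP request)."""
    return "DONE: example answer"  # stubbed response for illustration

def agent(goal: str, max_steps: int = 5) -> str:
    """Plan-act-observe loop: each step feeds prior results back into the prompt."""
    history: list[str] = []
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nProgress so far: {history}\nWhat is the next step?"
        result = call_llm(prompt)
        history.append(result)
        if result.startswith("DONE"):  # model signals it has finished
            return result
    return history[-1]

print(agent("Summarise the state of quantum optimisation"))
```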
We will finally see the long-promised productivity gains expected from the technological revolution. Despite the hype, the consequences are still being underestimated. In early 2023, the tech industry is no longer in the ascendancy. Tech changed the world, but many argue it wasn't for the better. The smartphone revolution is over. Crypto was overhyped. The Metaverse is a decade away. The talk is of regulation, layoffs, and efficiency. It won't stay that way for long. I predict that by 2030, LLMs will have increased labour productivity by 20% in OECD countries. We will see this in economic growth, lower prices, and higher living standards.
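For scale: a 20% cumulative gain over the eight years from 2022 to 2030 works out to roughly 2.3% a year, compounded:

```python
# What a 20% cumulative productivity gain by 2030 (vs 2022) implies annually.
total_gain = 0.20
years = 2030 - 2022                     # 8 years
annual = (1 + total_gain) ** (1 / years) - 1
print(f"{annual:.2%} per year")         # ~2.31% per year, compounded
```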
Software ate the digital world; now it's coming for the real one
The age of bits is coming to an end. Many of the most important companies of the last few decades have excelled at manipulating bits. Software won't quite become a commodity, but it will get cheaper and less important. People often say that technological progress has never been faster. That hasn't been true. Computers, the Internet, and smartphones have changed work and leisure. But if you were living in the 1920s and 1930s, you would have seen the automobile, electrification, and artificial lighting. It's less useful to rank which technologies were most transformative than to note that the 1920s and 1930s saw dramatic changes in the physical environment. Things were transported, powered, and illuminated. The daily existence of someone living in a city was radically transformed. I claim that the period from the 1970s to roughly 2022 should be considered the infrastructure phase of the IT revolution.
In the last 50 years, we've built out a global computing and connectivity grid. But it's hard to use. The Cloud, microservices, and APIs made it slightly easier. But still, all software is hand-crafted and manually operated. LLMs are a mass-manufacturing innovation. The age of zero marginal cost of digital production is upon us. As software becomes automated, everyone will be able to plug into the computing grid as easily as plugging into a wall socket. When people say, "Technological progress has never been faster," they will be right. As we plug in 3D printers, drones, wind turbines, DNA sequencers, and brain-computing devices, the convergence of atoms and bits will lead to the sorts of leaps forward in materials science, synthetic biology, and energy systems we have imagined for decades.
How to Stop Worrying and Love the Future
After immersing myself in 100 technologies for two years, my overwhelming conclusion is optimism. Positive progress is evident, whether in protein sequencing, small modular reactors, or organoids. The day-to-day drumbeat of politics, Twitter spats, and economic data is a distraction. They say, "Crypto is dead" and "the Metaverse was a fad." The perception of the technology industry, particularly Silicon Valley, has undeniably been tainted in the eyes of many, for reasons ranging from privacy breaches to mental health concerns to, unfortunately, outright wrongdoing at companies such as Theranos and FTX. In the 2010s, tech founders' declarations of their intention to improve the world were often met with optimism and belief. Today, the media and the public approach such claims with greater scepticism, if not outright hostility.
Society needs stories. And today, the dominant story is that tech has failed us. We wanted flying cars, but we got 140 characters, etc. Or that smartphones and social media are ruining the next generation's minds. But you can't look at 100+ technologies and not be excited about our future. Much of the forecasting exercise considers the barriers to adoption and how they will affect when a technology reaches the market. When you work on a ten-year timeframe, you realise that ten years is nothing for our children. Low-Earth-orbit satellites will remain expensive in the 2020s. Drones will face strong regulatory headwinds before adoption. Gene and cell therapies will take the best part of a decade to win regulatory approval. But these technologies will be a big part of our lives in the 2030s.
It's easy to focus on the negatives: the risks of warfare in space, or drones over our heads, or the costs of gene and cell therapies, or, with AI, the jobs that will be lost. But what about solving digital privacy with fully homomorphic encryption? Or tackling pollution with direct air capture? Or even curing neurodegenerative diseases, cancer, and maybe even ageing with new genome editing tools like prime editing? The world needs positive stories to help people embrace the future rather than fear it. I'm going to call it futurepositivism[dot]xyz. This was the idea behind a16z's future.com and Derek Thompson's Abundance Agenda. I think this is right, and it is part of what I hope to achieve with State of the Future.
✍️ Interviews
✍️ Ben Mildenhall, Research Scientist at Google and Co-inventor of NeRF: Neural Radiance Fields, on the State of Neural Rendering, Generative AI, and the Metaverse [computer vision], [generative AI], [metaverse]
To populate digital worlds, we must create high-quality 3D objects and scenes. But collecting 3D data is expensive, and there isn't much of it available. Ben Mildenhall, Research Scientist at Google and co-inventor of NeRF: Neural Radiance Fields, thinks the answer is to extract 3D data from something we have a lot of: 2D images. Ben is pioneering work on generating synthetic 3D renders from 2D images and believes this is the only way to build 3D worlds. Digital twins and the Metaverse will not be possible without the emerging field of neural rendering.
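For the technically curious, the heart of NeRF is a volume-rendering sum: a network predicts density and colour at samples along each camera ray, and the pixel colour is the opacity-weighted blend of those samples. Here is a minimal numpy sketch of that compositing step, with random values standing in for the network's outputs:

```python
import numpy as np

def composite(densities, colors, deltas):
    """NeRF-style volume rendering along one camera ray.

    densities: (N,) non-negative density at each sample
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) distance between consecutive samples
    """
    alpha = 1.0 - np.exp(-densities * deltas)            # per-sample opacity
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)       # final pixel RGB

rng = np.random.default_rng(0)
n = 64  # samples along the ray; in a real NeRF these come from the network
pixel = composite(rng.uniform(0, 2, n),
                  rng.uniform(0, 1, (n, 3)),
                  np.full(n, 0.05))
print(pixel)
```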
✍️ Dirk Zechiel, CEO of Quantagonia on the State of Quantum Optimization [quantum algorithms]
For the last twenty years, people have joked that if quantum computers were around, they might be able to solve hard optimization problems. Still, no quantum computer has outperformed a classical supercomputer on a practical task. This might change sooner than expected.
✍️ Cesare Cugnasco, CEO of Qbeast on the State of Data Lakes and Big Data
Big data is the key ingredient for many of today's products and services, from training recommender systems for e-commerce stores to computational chemistry simulations for developing new materials. Moving from local databases to data lakes in the cloud has made data storage and handling as scalable as spinning up new virtual machines for processing. Yet data lakes are, by nature, unstructured, and retrieving data is a pain because you want to avoid loading large chunks of your data into memory. Qbeast was founded in 2020 to optimize the organization of data lakes, making it faster and cheaper to access data.
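To make the pain point concrete, here is a toy sketch with pyarrow: reading only the columns and rows you need from a Parquet file, rather than loading everything. The file and column names are invented for illustration; layout optimisers like Qbeast go much further than this, with multidimensional indexing across many files:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Write a toy "data lake" file (in reality: many large files in object storage).
table = pa.table({
    "user_id": list(range(1_000)),
    "country": ["DE" if i % 4 == 0 else "US" for i in range(1_000)],
    "spend":   [float(i) for i in range(1_000)],
})
pq.write_table(table, "events.parquet")

# Naive: load everything into memory, then filter.
everything = pq.read_table("events.parquet")

# Better: read only the needed columns, pushing the filter down so that
# row groups that cannot match are skipped entirely.
de_spend = pq.read_table(
    "events.parquet",
    columns=["user_id", "spend"],
    filters=[("country", "==", "DE")],
)
print(everything.num_rows, "->", de_spend.num_rows)
```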
🗞️ News
🗞️ Suppressing quantum errors by scaling a surface code logical qubit [quantum hardware]
Quantum computing has the potential to vastly outperform classical computers. Still, one of its major challenges is that quantum bits (qubits), the fundamental units of quantum information, are very prone to errors. Quantum error correction is a method to reduce these errors by distributing the information of one 'logical' qubit across many 'physical' qubits. However, adding more qubits also introduces more potential sources of error, so there's a balance to be struck.
In this study, the researchers demonstrate that their system, which uses superconducting qubits, can achieve a low enough error rate to benefit from quantum error correction. They show that a larger 'distance-5' surface code logical qubit performs better on average than a group of smaller 'distance-3' logical qubits. The 'distance' is the code distance, a measure of how many errors the code can tolerate: a distance-5 surface code spreads one logical qubit across more physical qubits than a distance-3 code (49 versus 17 in this experiment) and can correct more simultaneous errors. They also ran a much larger 'distance-25' repetition code to investigate rare but damaging error sources and identified a particular high-energy event that caused errors. A larger code allows for better detection of rare errors because the more physical qubits used, the more likely it is to catch a low-probability error that would be missed in a smaller system.
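To put numbers on "bigger codes", here is the standard back-of-envelope: a distance-d surface code needs 2d² − 1 physical qubits per logical qubit, and below the error threshold the logical error rate is expected to fall exponentially with distance (the Λ factor in the paper). The Λ and C values in this sketch are illustrative placeholders, not the paper's measurements:

```python
# A distance-d surface code uses d^2 data qubits plus d^2 - 1 measure
# qubits, i.e. 2*d^2 - 1 physical qubits per logical qubit.
def physical_qubits(d: int) -> int:
    return 2 * d * d - 1

# Textbook scaling assumption: below threshold, the logical error rate
# shrinks by a factor Lambda each time the distance grows by 2:
#   eps(d) ~ C / Lambda ** ((d + 1) / 2)
def logical_error(d: int, Lambda: float = 2.0, C: float = 0.1) -> float:
    return C / Lambda ** ((d + 1) / 2)   # Lambda, C are illustrative only

for d in (3, 5, 7):
    print(f"d={d}: {physical_qubits(d)} physical qubits, "
          f"logical error ~{logical_error(d):.4f}")
```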
This research represents a significant milestone, showing for the first time that increasing the number of qubits in a quantum error correction code can improve performance, bringing us one step closer to practical quantum computing. The researchers also identified the primary sources of errors in their system, providing valuable insights for the design of future quantum computers.
Weights and Biases: This is a step towards a fault-tolerant quantum computer, although it must be noted the research doesn't prove it is possible to build one. It demonstrates that increasing the number of qubits in a quantum error correction code can reduce error rates. And the researchers identified the primary sources of errors in their system, which should help improve performance across the entire field of quantum computing. We should slightly increase the probability that a fault-tolerant quantum computer is practical, and on a sooner timeline than expected. We still can't be anything other than low certainty regarding impact or timing. However, we may be slightly more confident in our prediction that by 2030, 500-qubit quantum computers will perform practical calculations more efficiently than classical computers.
🗞️ Amazon's Drone Delivery Dream Is Crashing [drones]
Amazon has spent $2 billion over a decade on the Prime Air drone delivery project. After several technical issues, it still hasn't received type certification from the FAA to fly over active roadways and people. Its drones operate as experimental aircraft under a tangle of federal exemptions (18601B and 18602B among the most recent) that severely restrict its Part 135 authorization, which allows a company to operate on-demand air deliveries. This starkly contrasts with competitors. As of January 2022, UPS had completed 10,000 flights using the Matternet M2 delivery drone and system, the first to be issued FAA type certification. Wing, the drone delivery subsidiary of Alphabet, was the first in the industry to obtain a Part 135 certificate in April 2019 and now has delivery programs in Virginia, Texas, and parts of Finland, Ireland, and Australia. Wing has completed more than 300,000 commercial deliveries worldwide. In partnership with DroneUp, Flytrex, and Zipline, Walmart conducted over 6,000 paid deliveries in 2022 and recently expanded to 34 stores across seven US states.
Despite setbacks and staff cuts, Amazon still aims to deliver 500 million packages by drone annually by 2030, shifting resources away from flight-testing the MK27-2 towards developing the MK30, a lighter, smaller drone that can fly in light rain, which is set to go into service in 2024. Proposed federal legislation, the Increasing Competitiveness for American Drones Act of 2023, may change Amazon's fortunes if it passes and changes the FAA's drone licensing requirements.
Weights and Biases: There was probably a bias when thinking about drones: if Amazon were working hard on it, they would succeed and catalyse the drone market. That does not appear to have happened, with many startups (Matternet, DroneUp, Flytrex, Zipline, and Wing, albeit an Alphabet spin-out) succeeding where Amazon has so far failed. From this report, we learn that FAA type certification is hard to get. This reinforces the view that diffusion should be scored low and that adoption is a slow, painful, state-by-state, country-by-country path. Still, I'm comfortable with a 2020-2025 timeline, as adoption has proven speedy once certification is gained. This doesn't alter my view that the air cargo market will be the primary market segment for drones in the next few years. Still, more operators flying means more issues get resolved with regulators, manufacturing costs can come down, and the market opportunity grows, creating a virtuous cycle.
🗞️ Upward Farms throws in the towel ten years after founding the vertical-farming business [vertical farming]
Upward Farms, a US vertical farming business, announced its closure a decade after it was established in New York. Originally known as Edenworks, the firm used aquaponics technology to grow microgreens like kale and mustard, supplying them to Whole Foods Market in New York City. Despite securing $20m in funding and announcing a new farming facility in Brooklyn in 2020, the founders stated that the complexities of vertical farming led to their decision to shut down. The vertical farming industry is capital-intensive, and many companies remain unprofitable due to the high costs and the need for scale. Upward Farms' closure follows similar outcomes for other vertical farming startups, such as Infarm in Germany, Future Crops in the Netherlands, and Agricool in France.
Weights and Biases: More evidence supporting our conclusion on vertical farming: "The most likely scenario is low impact, in which vertical farming is a niche part of the food production supply chain." Or at least, the timing is too soon. Vertical farming won't be a material part of the food production system until energy costs come down substantially. Any low-cost energy system will take five-plus years to materialise, simply as a function of the time it takes to deploy solar, wind, and batteries at scale and for small modular reactors to be deployed.
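To see why energy prices dominate the economics, a deliberately hypothetical back-of-envelope (the energy-per-kilogram figure and the prices are placeholders for illustration, not sourced estimates):

```python
# Hypothetical illustration: electricity cost embedded in 1 kg of greens.
kwh_per_kg = 10.0                     # placeholder: lighting + HVAC energy per kg
for price in (0.30, 0.10, 0.03):      # EUR/kWh: today-ish, cheap, very cheap
    print(f"at {price:.2f} EUR/kWh: {kwh_per_kg * price:.2f} EUR/kg in energy")
```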
More
🗞️ What is a Vector Database? [Vector Databases] [Link]
🗞️ More than 200 people have been treated with experimental CRISPR therapies [Cell and Gene Therapies] [Link]
🗞️ Bacteria can be engineered to fight cancer in mice. Human trials are coming. [Link]
🗞️ Synthetic embryos have been implanted into monkey wombs [Synthetic Embryos] [Link]
🗞️ Ageing studies in five animals suggest how to reverse decline [Single-cell Sequencing] [Link]