
Inside Google’s quantum computing breakthrough

Welcome to The Nightcrawler — a weekly newsletter from Eric Markowitz covering tech, innovation, and long-term thinking.
Credit: Sundry Photography / Adobe Stock

Key Takeaways
  • Main Story: Google’s newest state-of-the-art Willow quantum chip is performing certain calculations at almost inconceivable speed.
  • The Google Quantum AI team spent years honing their ability to manipulate many qubits (quantum bits) at once.
  • Also among this week’s stories: Why volatile stocks are not always risky bets, unlocking the potential of the lithium-ion battery, and Amazon’s AI “ultracluster.”
This is an installment of The Nightcrawler, a weekly collection of thought-provoking articles on tech, innovation, and long-term investing by Eric Markowitz of Nightview Capital. You can get articles like this one straight to your inbox every Friday evening by subscribing above. Follow him on X: @EricMarkowitz.

This week, Google announced that its new state-of-the-art quantum chip, Willow, could perform a computation in under five minutes that would otherwise take one of today’s fastest supercomputers 10 septillion years to solve. (For the record: that’s 10,000,000,000,000,000,000,000,000 years.)

“This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe,” Google’s researchers wrote. “It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.”
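For a rough sense of scale, a couple of lines of arithmetic show how far that figure outruns the age of the universe (roughly 13.8 billion years).

```python
# Back-of-the-envelope scale check for the quoted benchmark figure.
classical_runtime_years = 10**25   # "10 septillion years" from Google's estimate
age_of_universe_years = 1.38e10    # ~13.8 billion years, the commonly cited figure

ratio = classical_runtime_years / age_of_universe_years
print(f"~{ratio:.1e} times the age of the universe")  # roughly 7e+14
```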

My mind, technically speaking, was blown by this announcement. I had known about this project for several years — The New York Times published a thoughtful report on Google’s quantum computing effort back in 2019 — but perhaps like many of you, I wanted an outsider’s take to put this milestone into context. Ben Brubaker, a science journalist at Quanta Magazine, published a smart, technical, and well-reported article about Google’s achievement — and what it could mean for the future.

Key quote: “The Google Quantum AI team spent years improving their qubit design and fabrication procedures, scaling up from a handful of qubits to dozens, and honing their ability to manipulate many qubits at once. In 2021, they were finally ready to try error correction with the surface code for the first time. They knew they could build individual physical qubits with error rates below the surface-code threshold. But they had to see if those qubits could work together to make a logical qubit that was better than the sum of its parts. Specifically, they needed to show that as they scaled up the code — by using a larger patch of the physical-qubit grid to encode the logical qubit — the error rate would get lower.”
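To make the quoted scaling concrete, here is a minimal, purely illustrative sketch. It assumes the commonly cited surface-code relationship in which the logical error rate falls roughly as 1/Λ^((d+1)/2) as the code distance d grows; the suppression factor and prefactor below are placeholder values, not Google's measured numbers.

```python
# Illustrative model of surface-code error suppression (placeholder numbers,
# not Google's data). Assumes the commonly cited scaling: the logical error
# rate shrinks roughly as 1 / Lambda ** ((d + 1) / 2), where d is the code
# distance and Lambda is the suppression factor achieved once physical qubits
# perform below the surface-code threshold.

def logical_error_rate(distance: int, suppression: float = 2.0, prefactor: float = 0.03) -> float:
    """Rough logical error rate per cycle for a distance-`distance` surface code."""
    return prefactor / suppression ** ((distance + 1) / 2)

for d in (3, 5, 7):
    print(f"distance {d}: ~{logical_error_rate(d):.1e} logical errors per cycle")
```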

Rethinking risk from a long-term perspective

My colleague Dan Crowley recently posed an interesting question: do we think about risk incorrectly?

In his essay, Dan walks us through how modern finance has come to treat volatility as a measure of risk itself. Volatile stocks, for instance, are often portrayed as “risky” bets. But this isn’t necessarily the case. In fact, volatile stocks can often be the greatest investments over time.

“Short-term sentiment, news cycles, and macroeconomic noise can cause prices to swing wildly, creating dislocations between a stock’s price and its intrinsic value,” Dan writes. “For the long-term investor, however, this is an incredible feature — not a bug.”

Key quote: “Volatility tells only part of the story. Financial markets don’t follow the neat patterns of a normal distribution, which is what these models assume. Extreme events occur far more often than traditional models predict. We’ve seen this play out time and again—from the collapse of Long-Term Capital Management to the Great Financial Crisis. The models couldn’t account for the market’s tendency to behave irrationally and with far greater extremes than the math suggested. That’s why I’ve come to view volatility not as risk itself but as a signal, an invitation to investigate further.”
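Dan's point about extreme events can be made concrete with a quick comparison. The sketch below contrasts how likely a five-standard-deviation move is under the normal distribution classic models assume versus under a fat-tailed Student's t distribution; the three degrees of freedom are an illustrative stand-in for fat-tailed returns, not a calibrated market model, and SciPy is assumed to be available.

```python
# How much more likely is an extreme move under a fat-tailed distribution than
# under the normal distribution that classic risk models assume?
# Student's t with 3 degrees of freedom is an illustrative stand-in, not a
# calibrated model of market returns.
import math
from scipy.stats import norm, t

df = 3
move_sigmas = 5                          # a "five-sigma" move
t_scale = math.sqrt(df / (df - 2))       # std dev of a Student's t with df=3

p_normal = norm.sf(move_sigmas)          # tail probability under the normal model
p_fat = t.sf(move_sigmas * t_scale, df)  # same move in the t-model's own sigmas

print(f"Normal model:    P(> {move_sigmas} sigma) ≈ {p_normal:.1e}")
print(f"Fat-tailed (t3): P(> {move_sigmas} sigma) ≈ {p_fat:.1e}")
print(f"The fat-tailed model makes the move ~{p_fat / p_normal:,.0f}x more likely.")
```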


Why Inversion Is Indispensable for Long-Term Success – via John Mihaljevic

Key quote: “‍Unfortunately, the typical investor cannot help but get swayed by the prevailing market sentiment. He feels better on a day when the ticker tape is flashing green. He experiences a magnetic pull toward the market’s favorite story stocks. As indices hit new highs, he asks, ‘How high can we go?’ As indices hit new lows, he worries, ‘How low can we go?’ A rational investor would take these two questions in reverse.”

How We Got the Lithium-ion Battery – via Construction Physics

Key quote: “Not unlike the solar photovoltaic cell, the lithium-ion battery was a novel energy technology that was incredibly expensive in terms of cost per unit of energy delivered. Its high energy density made it useful for many applications like portable electronics, but its full potential wouldn’t be unlocked until it could fall down the learning curve, and get orders of magnitude less expensive.”
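The learning curve referenced here is typically modeled with Wright's law: each doubling of cumulative production cuts unit cost by a roughly constant fraction. The sketch below shows how a hypothetical 20% learning rate compounds into the orders-of-magnitude decline the piece describes; the rate and normalized starting cost are illustrative assumptions, not figures from the article.

```python
# Sketch of Wright's law: each doubling of cumulative production cuts unit cost
# by a roughly constant fraction. The 20% learning rate is an illustrative
# assumption, not a figure taken from the Construction Physics piece.
import math

def unit_cost(cumulative_units: float, first_unit_cost: float = 1.0, learning_rate: float = 0.20) -> float:
    """Cost per unit after `cumulative_units` have been produced in total."""
    b = -math.log2(1 - learning_rate)    # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

for doublings in (0, 10, 20, 30):
    produced = 2 ** doublings
    print(f"after {doublings} doublings of output: cost falls to {unit_cost(produced):.4f}x the first unit")
```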

AWS CEO Matt Garman Talks Amazon’s Big Bets in AI Chips, Reasoning, and Nuclear Energy – via Alex Kantrowitz / Big Technology

Key quote: “Amazon made a slew of big AI announcements early Tuesday. It said its Trainium2 chip would be generally available for AI training and inference. It debuted a new reasoning tool to limit hallucinations. It said it was building an AI ‘ultracluster’ with hundreds of thousands of GPUs that Anthropic will use for AI training, among others. And it shared that it developed a new foundational set of AI models, called Nova. A day before the news hit, I sat down with AWS CEO Matt Garman at the company’s re:Invent conference in Las Vegas, Nevada to talk through Amazon’s AI strategy and plans for the future.”


From the archives:

This little-known physics law silently controls your life (2018) – via BigThink (H/T Paul Higgins)

Key quote: “Ever noticed that shapes in nature tend to repeat themselves? Leafless tree branches, for instance, resemble the branching nerve endings inside the human body, which also resemble forked lightning strikes, subway maps, and even the tributaries of a river basin. Scientists have noticed these similarities too, and one has identified the properties that come along with it.”
