The UNDO Button Part 2 of 7

Prepare yourself for a voyage into the impossible. We assure you, we will arrive at specific strategies for business growth and profitability, but the road to our destination leads us through the darkest forests of theoretical physics. Don’t be afraid. There are no equations to memorize and no test at the end, but you must understand the logic behind hypothesis-based innovation so you can design hypothesis-driven experiments of your own.

Otherwise, you might end up taking advice from experts without testing whether their reasoning is sound. Bad advice from people who call themselves experts may be the single most damaging thing for your business.

In the quest to test whether your reasoning is sound and your results are valid, the Universal Undo Button will be your greatest ally. This seven-part series shows you how to build one.

The Physicist and the Angry AI

Max Tegmark is a genial Swedish physicist with a roguish smile and sudden bursts of intensity that tend to unsettle people. While his meticulously detailed work on hydrogen tomography and gravitational lensing might lull you to sleep, his plausible scenarios of genocidal AI have terrified business leaders and academics alike, from Elon Musk to the late Stephen Hawking. Tegmark told IEEE Spectrum, “Just because we don’t know quite what will go wrong doesn’t mean we shouldn’t think about it. That’s the basic idea of safety engineering: You think hard about what might go wrong to prevent it from happening.”

Tegmark wrote about safety engineering for runaway AI in his book Life 3.0: Being Human in the Age of Artificial Intelligence. Given the speed of AI’s development, he argues, we may never get a chance to hit the Universal Undo Button before it erases humanity from the world’s hard drive.

He wrote that the emergence of a non-human intelligence is fundamentally different from any innovation humanity has developed before. Tegmark said, “We invented the car, screwed up a bunch of times, and invented the safety belt. But with things like nuclear weapons and super-intelligent AI, we don’t want to learn from mistakes. We need to get it right the first time, because that might be the only time we have.”

Tegmark sees himself as an optimist and makes the point that if AI were to wipe out humanity but then go on to do wonderful things, it might not be the worst outcome. Like children fulfilling the dreams of their parents, the intelligent algorithms we create may further our values, or they could develop an intelligence without a conscience. A tiny variation in initial conditions could lead to vastly different outcomes, and it’s up to us to do as much as we can now to prevent the darker one.

From Tegmark’s lofty perspective of eons and star systems, the problems of a few startups might seem insignificant, but the same principles apply at the personal level as they do at the grandest of scales. As above, so below.

Tegmark’s ability to draw out detailed scenarios from minor variations in assumptions is closely tied to his work on multiple universes, which underpins the logic of the Universal Undo Button.