How to learn from failure and quit the blame game
After a setback occurs, you have two choices: blame someone, or get wiser. Executive coach Alisa Cohn explains why a 'learning lab' is more productive than pointing fingers.
Alisa Cohn is an executive coach who works with senior executives and high-potential leaders to help them create positive, permanent shifts in their leadership impact and the results they achieve. She works one-on-one with CEOs and executives, and with senior teams to help them work together better and create more impact as a team. She works with Fortune 500 companies as well as start-ups.
She also works with executive teams to help them be stronger as a team, have the right conversations and take the right actions to move forward faster.
Alisa provides practical tools and serves as a thought partner to support the challenging process of change. Leaders get the chance to practice their new behaviors and troubleshoot before doing them live.
Prior to becoming a coach, Alisa, a CPA, was the CFO of Clairvergent Technology Group and a Vice President at two high-tech start-ups. She was a manager and consultant at PricewaterhouseCoopers and The Monitor Group. Alisa holds an MBA from Cornell University and a BS from Boston University. She is a guest lecturer at Harvard and Cornell Universities and the Naval War College. She is a coach for the prestigious Linkage Global Institute for Leadership Development and for the Center for Inclusive Security, Harvard University.
Alisa is the executive coach for Runway - the incubator at Cornell NYC Tech that helps post-docs commercialize their technology and build companies. She serves on the Entrepreneurship at Cornell Advisory Committee and the President’s Council of Cornell Women.
She was selected as one of the Top 10 Coaches by Women’s Business, which called her “absolutely brilliant, laugh-out-loud hilarious and a superhero.” A dynamic speaker and skilled facilitator, she is known for her humor, energy, results-orientation and motivational style, along with a propensity to burst into song without warning.
Get in touch with Alisa Cohn at alisacohn.com.
Alisa Cohn: The thing about building a company is that inevitably things go wrong, and bad things happen.
You don’t want it to happen, and you can’t anticipate when it will, but it is inevitable in the lifecycle of any company.
When that happens, the best way to react is to use it as a learning lab. Use it as an opportunity to call everybody together and really have a laboratory, a workshop, an understanding of: how do we unpack what happened and why, with no blame, but with an understanding of the systems that got us here? Then, how do we respond right now, together, and how do we move forward from there? That means establishing maybe new procedures, new policies, some even new ways of thinking, some new operational tactics. But then, and this is equally important, how do the company and the CEO and the team around him or her successfully move on emotionally and create a new point of view, recognizing that that was in the past and there’s the future to look to? You can’t change the past; you can only change the future.
So the best way to debrief any bad thing that happened, any problem, is just to go down the tiers of “why.”
And so you start with—so let’s assume that the project that you’re working on is late, let’s assume it’s a product release, and that it is now definitely not going to make its deadline, and it’s probably three or six months late.
First of all it’s important just to create an environment where people can talk freely and not feel blamed, because we’re just debriefing to understand what happened.
So it’s about understanding the structure of it not looking to finger point.
But the first question is, why? So why was the release late? Well, engineering, for example, didn’t deliver the code on time.
Why didn’t engineering deliver the code on time? Because they weren’t given the specs early enough. Why weren’t they given the specs early enough? Because product didn’t get them to them early enough. Why didn’t product get them to them early enough? Because product didn’t understand from marketing the requirements early enough.
So you keep going down those tiers: why did marketing not get them early enough? Because they didn’t have a good plan to get the customer data they needed. Then you can take a look at what went wrong here.
Is it about we need to tighten up our process (which is very often true, especially with startups)?
Is it that we need to have a better timeframe for deliverables (which is often very true when you’re working on complicated multi-domain projects)? Or did we just forecast incorrectly? Did we not take into account all the multiple steps that lead to the product release? That’s often very true as well. And maybe: why didn’t we take into account the multiple steps? Because there wasn’t one person in charge.
Great. So going forward we know that we need to all take into account the multiple steps, and declare one person the owner of the project overall, and let’s try those two interventions, those two changes, and that’s going to help us have more excellence in operations.
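The walk down the tiers of “why” that Cohn describes can be sketched as a simple chain of causes. This is a minimal illustration only; the causal chain and the helper `five_whys` are hypothetical examples modeled on the transcript, not part of any real post-mortem tooling.

```python
# A minimal sketch of the "tiers of why" debrief described above.
# The chain maps each problem to its immediate cause; walking it
# surfaces the root cause without assigning blame to any person.

causes = {
    "release was late": "engineering didn't deliver the code on time",
    "engineering didn't deliver the code on time": "specs arrived too late",
    "specs arrived too late": "product didn't understand the requirements early enough",
    "product didn't understand the requirements early enough":
        "marketing had no plan to gather customer data",
}

def five_whys(problem: str, chain: dict) -> list:
    """Follow the chain of causes until a root cause is reached."""
    steps = []
    current = problem
    while current in chain:
        current = chain[current]
        steps.append(current)
    return steps

for i, cause in enumerate(five_whys("release was late", causes), start=1):
    print(f"Why #{i}: because {cause}")
```

Note that the output is a chain of systems and handoffs, not names: the last entry (“no plan to gather customer data”) points directly at the process fixes Cohn suggests, such as tightening the process and naming a single owner.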
Executive coach Alisa Cohn explains why it's important not to point fingers or shut down and never mention the fiasco again—tempting as that may be. Instead, it's critical to run a disaster debrief. Reframing the conversation as a 'learning lab' can defuse tension and build a stronger team that has found and fixed its weak spots. What is the key to running an excellent debrief after a failed project or difficult delivery? Using an example, Cohn explains why working down the multiple tiers of "why?" is so powerful when you're trying to learn actionable lessons from failure.
The experience of life flashing before one's eyes has been reported for well over a century, but where's the science behind it?
At the age of 16, when Tony Kofi was an apprentice builder living in Nottingham, he fell from the third story of a building. Time seemed to slow down massively, and he saw a complex series of images flash before his eyes.
As he described it, “In my mind's eye I saw many, many things: children that I hadn't even had yet, friends that I had never seen but are now my friends. The thing that really stuck in my mind was playing an instrument". Then Tony landed on his head and lost consciousness.
When he came to at the hospital, he felt like a different person and didn't want to return to his previous life. Over the following weeks, the images kept flashing back into his mind. He felt that he was “being shown something" and that the images represented his future.
Later, Tony saw a picture of a saxophone and recognized it as the instrument he'd seen himself playing. He used his compensation money from the accident to buy one. Now, Tony Kofi is one of the UK's most successful jazz musicians, having won the BBC Jazz awards twice, in 2005 and 2008.
Though Tony's belief that he saw into his future is uncommon, it's by no means uncommon for people to report witnessing multiple scenes from their past during split-second emergency situations. After all, this is where the phrase “my life flashed before my eyes" comes from.
But what explains this phenomenon? Psychologists have proposed a number of explanations, but I'd argue the key to understanding Tony's experience lies in a different interpretation of time itself.
When life flashes before our eyes
The experience of life flashing before one's eyes has been reported for well over a century. In 1892, a Swiss geologist named Albert Heim fell from a precipice while mountain climbing. In his account of the fall, he wrote it was “as if on a distant stage, my whole past life [was] playing itself out in numerous scenes".
More recently, in July 2005, a young woman called Gill Hicks was sitting near one of the bombs that exploded on the London Underground. In the minutes after the explosion, she hovered on the brink of death where, as she describes it: “my life was flashing before my eyes, flickering through every scene, every happy and sad moment, everything I have ever done, said, experienced".
In some cases, people don't see a review of their whole lives, but a series of past experiences and events that have special significance to them.
Explaining life reviews
Perhaps surprisingly, given how common it is, the “life review experience" has been studied very little. A handful of theories have been put forward, but they're understandably tentative and rather vague.
For example, a group of Israeli researchers suggested in 2017 that our life events may exist as a continuum in our minds, and may come to the forefront in extreme conditions of psychological and physiological stress.
Another theory is that, when we're close to death, our memories suddenly “unload" themselves, like the contents of a skip being dumped. This could be related to “cortical disinhibition" – a breaking down of the normal regulatory processes of the brain – in highly stressful or dangerous situations, causing a “cascade" of mental impressions.
But the life review is usually reported as a serene and ordered experience, completely unlike the kind of chaotic cascade of experiences associated with cortical disinhibition. And none of these theories explain how it's possible for such a vast amount of information – in many cases, all the events of a person's life – to manifest themselves in a period of a few seconds, and often far less.
Thinking in 'spatial' time
An alternative explanation is to think of time in a “spatial" sense. Our commonsense view of time is as an arrow that moves from the past through the present towards the future, in which we only have direct access to the present. But modern physics has cast doubt on this simple linear view of time.
Indeed, since Einstein's theory of relativity, some physicists have adopted a “spatial" view of time. They argue we live in a static “block universe" in which time is spread out in a kind of panorama where the past, the present and the future co-exist simultaneously.
The modern physicist Carlo Rovelli – author of the best-selling The Order of Time – also holds the view that linear time doesn't exist as a universal fact. This idea reflects the view of the philosopher Immanuel Kant, who argued that time is not an objectively real phenomenon, but a construct of the human mind.
This could explain why some people are able to review the events of their whole lives in an instant. A good deal of previous research – including my own – has suggested that our normal perception of time is simply a product of our normal state of consciousness.
In many altered states of consciousness, time slows down so dramatically that seconds seem to stretch out into minutes. This is a common feature of emergency situations, as well as states of deep meditation, experiences on psychedelic drugs and when athletes are “in the zone".
The limits of understanding
But what about Tony Kofi's apparent visions of his future? Did he really glimpse scenes from his future life? Did he see himself playing the saxophone because somehow his future as a musician was already established?
There are obviously some mundane interpretations of Tony's experience. Perhaps, for instance, he became a saxophone player simply because he saw himself playing it in his vision. But I don't think it's impossible that Tony did glimpse future events.
If time really does exist in a spatial sense – and if it's true that time is a construct of the human mind – then perhaps in some way future events may already be present, just as past events are still present.
Admittedly, this is very difficult to make sense of. But why should everything make sense to us? As I have suggested in a recent book, there must be some aspects of reality that are beyond our comprehension. After all, we're just animals, with a limited awareness of reality. And perhaps more than any other phenomenon, this is especially true of time.
Might as well face it, you're addicted to love.
- Many writers have commented on the addictive qualities of love. Science agrees.
- The reward system of the brain reacts similarly to both love and drugs.
- Someday, it might be possible to treat "love addiction."
Since people started writing, they've written about love. The oldest love poem known dates back to the 21st century BCE. For most of that time, writers also apparently have been of two (or more) minds about it, announcing that love can be painful, impossible to quit, or even addictive — while also mentioning how nice it is.
The idea of love as an addiction is one that is both familiar and unsettling. Surely it can't be the case that our mutual love with our partner — a thing that can produce euphoria, consumes a great deal of our time, and which we fear losing — can be compared to a drug habit? But indeed, many scientists have turned their attention to the idea of "love addiction" and how your brain on drugs might resemble your brain in love.
Love and other drugs
In a 2017 article published in the journal Philosophy, Psychiatry, & Psychology, a team of neuroethicists considered the idea that love is addicting and held the idea up to science for scrutiny.
They point out that the leading model of addiction rests on the notion of a drug causing the brain to release an unnatural level of reward chemicals, such as dopamine, effectively hijacking the brain's reward system. This phenomenon isn't strictly limited to drugs, though they are more effective at this process than other things. Rats can get a similar rush from sugar as from cocaine, and they can have terrible withdrawal symptoms when the sugar crash kicks in.
On the structural level, there is a fair amount of overlap between the parts of the brain that handle love and pair-bonding and the parts that deal with addiction and reward processing. When inside an MRI machine and asked to think about the person they love romantically, the reward centers of people's brains light up like Broadway.
Love as an addiction
These facts lead the authors to consider two ideas, dubbed the "narrow" and "broad" views of love as an addiction.
The narrow view holds that addiction is the result of abnormal brain processes that simply don't exist in non-addicts. Under this paradigm, "food-seeking or love-seeking behaviors are not truly the result of addiction, no matter how addiction-like they may outwardly appear." It could be that abnormal processes cause the brain's reward system to misfire when exposed to love and to react to it excessively.
If this model is accurate, love addiction would be a rare thing — one study puts it around five to ten percent of the population — but could be considered a disorder similar to others and caused by faulty wiring in the brain. As with other addictions, this malfunction of the reward system could lead to an inability to fully live a typical life, difficulty having healthy relationships, and a number of other negative consequences.
The broad view looks at addiction differently, perhaps even radically.
It begins with the idea that addiction exists on a spectrum of motivations. All of our appetites, including those for food and water, exist on this spectrum and activate similar parts of the brain when satisfied. We can have appetites for anything that taps into our reward system, including food, gambling, sex, drugs, and love. For most people most of the time, our appetites are fairly temperate, if recurring. I might be slightly "addicted" to food — I do need some a few times per day — but that "addiction" doesn't have any negative effects on my health.
An appetite for cocaine, however, is rarely temperate and usually dangerous. Likewise, a person's appetite for love could reach addiction levels, and a person could be considered "hooked" on relationships (or on a particular person). This would put love addiction at the extreme end of the spectrum.
None of this is to say that the authors think that love is bad for you just because it can resemble an addiction. Love addiction is not the same as cocaine addiction at the neurological level: important differences, like how long it takes for the desire for another "hit" to occur, do exist. Rather, the authors see this as an opportunity to reconsider our approach to addiction in general and to think about how we can help the heartsick when they just can't seem to get over their last relationship.
Is "love addiction" a treatable disorder?
Hypothetically, a neurological basis for an addiction to love could point toward interventions that "correct" for it. If the narrow view of addiction is accurate, perhaps some people will be able to seek treatment for love addiction in the same way that others seek help to quit smoking. If the broad view of addiction is correct, the treatment of love addiction would be unlikely as it may be difficult to properly identify where the cutoff of acceptability on a spectrum should be.
Either way, since love is generally held in high regard by all cultures and doesn't quite seem to be in the same category as a bad cocaine habit in terms of social undesirability, the authors doubt we'll be treating anyone for "love addiction" anytime soon.
A school lesson leads to more precise measurements of the extinct megalodon shark, one of the largest fish ever.
- A new method estimates the ancient megalodon shark was as long as 65 feet.
- The megalodon was one of the largest fish that ever lived.
- The new model uses the width of shark teeth to estimate its overall size.
A Florida student figured out a way to more accurately measure the size of one of the largest fish that ever lived – the extinct megalodon shark – and found that it was even larger than previously estimated.
The megalodon (officially named Otodus megalodon, which means "Big Tooth") lived between 3.6 and 23 million years ago and was thought to be about 34 feet long on average, reaching a maximum length of 60 feet. Now a new study puts that number at up to 65 feet (20 meters).
Homework assignment leads to a discovery
The study, published in Palaeontologia Electronica, used new equations extrapolated from the width of megalodon's teeth to make the improved estimates. The paper's lead author, Victor Perez, developed the revised methodology while he was a doctoral student at the Florida Museum of Natural History. He got the idea while teaching students, noticing a range of discrepancies in the results they were getting.
Students were supposed to calculate the size of megalodon based on the ancient fish's similarities to the modern great white shark. They utilized the commonly accepted method of linking the height of a shark's tooth to its total body length. As the press release from the Florida Museum of Natural History expounds, this method involves locating the anatomical position of a tooth in the shark's jaw, measuring the tooth "from the tip of the crown to the line where root and crown meet," and using that number in an appropriate equation.
But while carrying out calculations in this way, some of Perez's students thought the shark would have been just 40 feet long, while others were calculating 148 feet. Teeth located toward the back of the mouth were yielding the largest estimates.
"I was going around, checking, like, did you use the wrong equation? Did you forget to convert your units?" said Perez, currently the assistant curator of paleontology at the Calvert Marine Museum in Maryland. "But it very quickly became clear that it was not the students that had made the error. It was simply that the equations were not as accurate as we had predicted."
Found in North Carolina, these 46 fossils are the most complete set of megalodon teeth ever excavated. Credit: Jeff Gage/Florida Museum
The new approach
Perez's math exercise demonstrated that the equations in use since 2002 were generating different size estimates for the same shark based on which tooth was being measured. Because megalodon teeth are most often found as standalone fossils, Perez focused on a nearly complete set of teeth donated by a fossil collector to design a new approach.
Perez also had help from Teddy Badaut, an avocational paleontologist in France, who suggested using tooth width instead of height, which would be proportional to the length of its body. Another collaborator on the revised method was Ronny Maik Leder, then a postdoctoral researcher at the Florida Museum, who aided in the development of the new set of equations.
The research team analyzed the widths of fossil teeth that came from 11 individual sharks of five species, which included megalodon and modern great white sharks, and created a model that connects how wide a tooth was to the size of the jaw for each species.
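The core of the method is a per-species linear relationship between tooth width and body size, fitted to individuals with fairly complete dentitions and then applied to isolated fossil teeth. The sketch below shows the general idea with a simple least-squares fit; all numbers are illustrative placeholders, and `fit_linear` is a hypothetical helper, not the equations from the published study.

```python
# A minimal sketch of the kind of model described above: fit a linear
# relationship between tooth width and body length, then use it to
# estimate size from a single isolated fossil tooth. The data points
# here are made up for illustration only.

def fit_linear(widths, lengths):
    """Ordinary least-squares fit: length ~ slope * width + intercept."""
    n = len(widths)
    mean_w = sum(widths) / n
    mean_l = sum(lengths) / n
    slope = (sum((w - mean_w) * (l - mean_l) for w, l in zip(widths, lengths))
             / sum((w - mean_w) ** 2 for w in widths))
    intercept = mean_l - slope * mean_w
    return slope, intercept

# Hypothetical calibration data for one species:
# tooth width (cm) vs. total body length (m).
tooth_widths = [3.0, 4.5, 6.0, 9.0, 12.0]
body_lengths = [5.0, 7.4, 9.9, 14.8, 19.7]

slope, intercept = fit_linear(tooth_widths, body_lengths)

# Estimate body length from a single isolated tooth 11 cm wide.
estimate = slope * 11.0 + intercept
print(f"Estimated body length: {estimate:.1f} m")
```

The advantage over the older height-based equations is that one calibration per species gives consistent answers regardless of which tooth position the fossil came from, which is exactly the inconsistency Perez's students stumbled onto.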
"I was quite surprised that indeed no one had thought of this before," shared Leder, who is now director of the Natural History Museum in Leipzig, Germany. "The simple beauty of this method must have been too obvious to be seen. Our model was much more stable than previous approaches. This collaboration was a wonderful example of why working with amateur and hobby paleontologists is so important."
Why use teeth?
In general, almost nothing of the super-shark survived to this day, other than a few vertebrae and a large number of big teeth. The megalodon's skeleton was made of lightweight cartilage that decomposed after death. But teeth, with enamel that preserves very well, are "probably the most structurally stable thing in living organisms," Perez said. Considering that megalodons lost thousands of teeth during a lifetime, these are the best resources we have in trying to figure out information about these long-gone giants.
Researchers suggest megalodon's large jaws were very thick, made for grabbing prey and breaking its bones, exerting a bite force of 108,500 to 182,200 newtons.
Megalodon tooth compared to two great white shark teeth. Credit: Brocken Inaglory / Wikimedia.
Limitations of the new model
While the new model is better than previous methods, it's still far from perfect in precisely figuring out the sizes of animals which lived so long ago and left behind few if any full remains. Because individual sharks come in a variety of sizes, Perez warned that even their new estimates have an error range of about 10 feet when it comes to the largest animals.
Other ambiguities may affect the results, such as the width of the megalodon's jaw and the size of the gaps between its teeth, neither of which are accurately known. "There's still more that could be done, but that would probably require finding a complete skeleton at this point," Perez pointed out.
How did the megalodon go extinct?
Environmental changes that led to fluctuations in sea levels and disturbed ecosystems in the oceans likely led to the demise of these enormous ancient sharks. They were just too big to be sustained by diminishing food resources, says the ReefQuest Centre for Shark Research.
A 2018 study suggested that a supernova 2.6 million years ago hit Earth's atmosphere with so much cosmic energy that it resulted in climate change. The cosmic rays that included particles called muons might have caused a mass extinction of giant ocean animals ("the megafauna") that included the megalodon by causing mutations and cancer.
Scientists, led by Adrian Melott, professor emeritus of physics and astronomy at the University of Kansas, estimated that "the cancer rate would go up about 50 percent for something the size of a human — and the bigger you are, the worse it is. For an elephant or a whale, the radiation dose goes way up," as he explained in a press release.
A brief passage from a recent UN report describes what could be the first-known case of an autonomous weapon, powered by artificial intelligence, killing on the battlefield.
- Autonomous weapons have been used in war for decades, but artificial intelligence is ushering in a new category of autonomous weapons.
- These weapons are not only capable of moving autonomously but also identifying and attacking targets on their own without oversight from a human.
- There are currently no clear international restrictions on the use of new autonomous weapons, but some nations are calling for preemptive bans.
Nothing transforms warfare more violently than new weapons technology. In prehistoric times, it was the club, the spear, the bow and arrow, the sword. The 16th century brought rifles. The World Wars of the 20th century introduced machine guns, planes, and atomic bombs.
Now we might be seeing the first stages of the next battlefield revolution: autonomous weapons powered by artificial intelligence.
In March, the United Nations Security Council published an extensive report on the Second Libyan War that describes what could be the first-known case of an AI-powered autonomous weapon killing people on the battlefield.
The incident took place in March 2020, when soldiers with the Government of National Accord (GNA) were battling troops supporting the Libyan National Army of Khalifa Haftar (called Haftar Affiliated Forces, or HAF, in the report). One passage describes how GNA troops may have used an autonomous drone to kill retreating HAF soldiers:
"Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2... and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."
Still, because the GNA forces were also firing surface-to-air missiles at the HAF troops, it's currently difficult to know how many, if any, troops were killed by autonomous drones. It's also unclear whether this incident represents anything new. After all, autonomous weapons have been used in war for decades.
Lethal autonomous weapons
Lethal autonomous weapon systems (LAWS) are weapon systems that can search for and fire upon targets on their own. It's a broad category whose definition is debatable. For example, you could argue that land mines and naval mines, used in battle for centuries, are LAWS, albeit relatively passive and "dumb." Since the 1970s, navies have used active protection systems that identify, track, and shoot down enemy projectiles fired toward ships, if the human controller chooses to pull the trigger.
Then there are drones, an umbrella term that commonly refers to unmanned weapons systems. Introduced in 1991 with unmanned (yet human-controlled) aerial vehicles, drones now represent a broad suite of weapons systems, including unmanned combat aerial vehicles (UCAVs), loitering munitions (commonly called "kamikaze drones"), and unmanned ground vehicles (UGVs), to name a few.
Some unmanned weapons are largely autonomous. The key question to understanding the potential significance of the March 2020 incident is: what exactly was the weapon's level of autonomy? In other words, who made the ultimate decision to kill: human or robot?
The Kargu-2 system
One of the weapons described in the UN report was the Kargu-2 system, which is a type of loitering munitions weapon. This type of unmanned aerial vehicle loiters above potential targets (usually anti-air weapons) and, when it detects radar signals from enemy systems, swoops down and explodes in a kamikaze-style attack.
Kargu-2 is produced by the Turkish defense contractor STM, which says the system can be operated both manually and autonomously using "real-time image processing capabilities and machine learning algorithms" to identify and attack targets on the battlefield.
In other words, STM says its robot can detect targets and autonomously attack them without a human "pulling the trigger." If that's what happened in Libya in March 2020, it'd be the first-known attack of its kind. But the UN report isn't conclusive.
It states that HAF troops suffered "continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems," which were "programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."
What does that last bit mean? Basically, that a human operator might have programmed the drone to conduct the attack and then sent it a few miles away, where it didn't have connectivity to the operator. Without connectivity to the human operator, the robot would have had the final call on whether to attack.
To be sure, it's unclear if anyone died from such an autonomous attack in Libya. In any case, LAWS technology has evolved to the point where such attacks are possible. What's more, STM is developing swarms of drones that could work together to execute autonomous attacks.
Noah Smith, an economics writer, described what these attacks might look like on his Substack:
"Combined with A.I., tiny cheap little battery-powered drones could be a huge game-changer. Imagine releasing a networked swarm of autonomous quadcopters into an urban area held by enemy infantry, each armed with little rocket-propelled fragmentation grenades and equipped with computer vision technology that allowed it to recognize friend from foe."
But could drones accurately discern friend from foe? After all, computer-vision systems like facial recognition don't identify objects and people with perfect accuracy; one study found that very slightly tweaking an image can lead an AI to miscategorize it. Can LAWS be trusted to differentiate between a soldier with a rifle slung over his back and, say, a kid wearing a backpack?
Opposition to LAWS
Unsurprisingly, many humanitarian groups are concerned about introducing a new generation of autonomous weapons to the battlefield. One such group is the Campaign to Stop Killer Robots, whose 2018 survey of roughly 19,000 people across 26 countries found that 61 percent of respondents said they oppose the use of LAWS.
In 2018, the United Nations Convention on Certain Conventional Weapons issued a rather vague set of guidelines aiming to restrict the use of LAWS. One guideline states that "human responsibility must be retained when it comes to decisions on the use of weapons systems." Meanwhile, at least a couple dozen nations have called for preemptive bans on LAWS.
The U.S. and Russia oppose such bans, while China's position is a bit ambiguous. It's impossible to predict how the international community will regulate AI-powered autonomous weapons in the future, but among the world's superpowers, one assumption seems safe: If these weapons provide a clear tactical advantage, they will be used on the battlefield.