Why a neurodivergent team will be a golden asset in the AI workplace
- In the absence of some hypothetical “awake” AGI (artificial general intelligence), AI systems will remain inescapably chained to linear reasoning.
- Neurodivergence broadly correlates with atypical cognitive strategies.
- Organizations that commit to the inclusion of different kinds of minds may achieve a sustained differentiating edge.
The world now sits on the precipice of transformational change driven by the emergence of new technology. Perhaps counterintuitively, this transformation will carve out a special and complementary place for human neurodivergence, powerfully elevating neurodiversity inclusion into a crucial organizational asset.
AI is set to redefine how we understand our role in the world when it comes to linear thinking tasks. In fact, artificial intelligence will consume the domain of linear thinking, in all its forms, over the coming 15 years, especially as we move gradually closer to viable quantum computation, which should turbocharge the productivity and speed of AI research systems to unimaginable heights.
However—and this is a critical point—modes of “thinking” exist that are not derivative of the complexification of linear processing.
My experience as a Silicon Valley entrepreneur, VC advisor, technology investor, and MIT machine-learning student has equipped me with a basic understanding of AI systems, machine learning, and neural networks. Just enough to contribute to the conversation among insiders. But an Oxford PhD in cognitive science and a lifetime working in neurodiversity has granted me legitimate claim to something approximating expertise—if there is such a thing—in human cognition.
That combination has left me somewhat skeptical about whether the current paradigm in AI systems development will ever produce something “conscious” in the sense we mean when we refer to our own feelings of self-awareness and identity. But perhaps we will reach that point. If it’s possible, then it will probably be in the not-too-distant future given the exponential curve we are now traversing.
This sentiment matches the messaging we have heard over the past year from the likes of Sam Altman, Elon Musk, Ilya Sutskever (the “mastermind” behind ChatGPT), Marc Andreessen, and Geoffrey Hinton, the so-called godfather of modern AI. In other words, the leading minds in the field admit that AGI (artificial general intelligence) is a mysterious concept: possibly, but not definitely, in our future at some non-immediate but relatively near point in time, with capabilities that cannot be easily predicted beyond the possession of vastly superhuman linear intelligence.
At their base, all AI systems are, as Jaron Lanier has described, merely innovative forms of “social collaboration.” Lanier—a prodigy mentored by Marvin Minsky, and a foundational thinker in the field of AI—notes that something like ChatGPT is an astonishingly proficient curator, but nothing more. Its creations are simply mashups of existing human expression.
Imagine such a system being trained on baroque music in the early 18th century—on Bach, Handel, Vivaldi, Purcell, and Corelli. Such a system would no doubt astonish us with seemingly original concertos indistinguishable from other notable pieces of the period in complexity, originality, and quality. But it would never give us the Moonlight Sonata.
Imagine such a system being trained on the work of naturalists in the early 19th century. No doubt, it would quickly compile an amazing compendium of extant and fossilized species, and possibly even arrive at general conclusions about regions and traits. But it would never give us On the Origin of Species.
The next decade is going to feature an explosion in applied use cases for AI systems that drive incredible advances in most fields of research and supplant or redefine many roles in the economy. But, crucially, in the absence of some hypothetical “awake” AGI that has somehow domain-hopped into a capacity for lateral reasoning and intuitive leaps of insight, AI systems will remain inescapably chained to linear reasoning.
Even if we aren’t barreling toward an epochal hand-off from human to machine intelligence, we are certainly on the cusp of transformative change in how we experience life as humans, driven by the emergence of non-human intelligence and the likely arrival of revolutionary advances in computational power. This will be not only the period of most rapid change in human history, but perhaps also the most unpredictable: our ability to extrapolate from the immediate past to form basic assumptions about the near future has never been weaker.
As an executive or organizational leader, what do you do to plan for a period where the ground is shifting beneath your feet in a fundamental sense every step of the way, and long-range strategic planning is subject to the type of confidence coefficient physicists admit to when hypothesizing about the interior of black holes?
Neurodivergence broadly correlates with atypical cognitive strategies, including evidence of reduced cognitive bias and groupthink, particularly among autistic people. While support needs vary among and within the neurodivergent population, there is also evidence of increased prevalence of lateral thinking, greater access to nontraditional solution pathways, creative problem solving, and other atypical approaches to perceiving and thinking about the surrounding world and its challenges and opportunities.
A recent article in The Military Times revealed that autistic leaders already hold senior positions in the intelligence community, and argued that matters of national security are too important and challenging to leave only to people who see the world in typical ways. Autistic workers, for example, have been shown to detect sensitive geospatial imagery patterns with significantly higher precision rates. Studies show that people with ADHD display increased originality of thought driven by flashes of intuitive insight, with access to a wider range of semantic choices. Dyslexic people have shown a higher likelihood for associative and systems thinking, as well as creativity.
As we blaze ever faster toward a world inextricably entwined with artificial intelligence, these properties—lateral thinking, intuitive insight, inductive leaps of creativity, resistance to manipulation or social pressure—will become increasingly important, because they represent pathways of thought complementary to those produced by AI systems, which remain chained to linear progressions of reason.
No matter how advanced, the paradigm of artificial intelligence that is ascending to dominance during the current period is bound, at its core, to linear progressions—to lightspeed processes that ultimately involve merely answering yes or no to billions or even trillions of inputs as data pathways are navigated toward a result that fits some predetermined objective.
You can’t design such a system to include lateral leaps of creative thought. It’s just not possible. Given what we now understand about the fundamental processes underlying AI, you can’t get out of Flatland simply by building an infinite number of two-dimensional ladders infinitely fast. These systems will move faster and more competently within the boundaries of linear rationality than any human can. But they will always be bound within the confines of that map.
Perhaps over time such systems will get better at simulating something that approximates this type of process, after studying enough historical examples of creative genius at work among human beings. But there is, at present, no reason to believe that such a goal even exists among those creating these systems. And even if it did, the result still wouldn’t be an actual instance of “lateral thinking.” It would merely be a “lateral thinking” simulation performed through a linear progression.
Such a process is not capable of extrapolating Beethoven or Mozart from Bach and Handel. As they say in Maine, you can’t get there from here.
As every organization falls down the gravity well of history into the warm embrace of an eternity of increasing reliance on increasingly powerful AI systems—the ultimate Red Queen’s Race—it may well be that the only source of sustained differentiating edge, of tapping into the N+1 axis that extends perpendicularly out of Flatland, lies in building an organizational culture fundamentally committed to the proactive and authentic inclusion of different kinds of minds.