Transforming the Workplace: Critical Skills and Learning Methods for the Successful 21st Century Worker
We need to better prepare, train, and inspire successful self-directed learners to meet today’s challenges.
There are many explanations for today’s uncertain economy. But Nobel economist Joseph Stiglitz of Columbia University has advanced an analysis that’s starting to resonate. In a recent article, Stiglitz says that our problem is “rooted in the kinds of jobs we have, the kind we need, and the kind we're losing, and rooted as well in the kind of workers we want, and the kind we don't know what to do with.” To advance our economy, Stiglitz believes that wrenching, fundamental change is required – no less dramatic than the shifts experienced by an earlier generation during the Great Depression.
While Stiglitz and I work in different worlds, I see evidence in all types of organizations that we need to better prepare, train, and inspire successful self-directed learners to meet today’s challenges.
As I see it, there are two big questions to consider. First, what are the critical 21st century skills that the workforce of tomorrow needs to develop and master today? Second, how can we improve our learning methods to enable the self-directed learner to thrive in this new environment?
A Workplace in Transition
Today’s workforce exists in a riptide of change. At one end of the spectrum, where new entrants are joining the workforce, there exists a significant, distressing skills gap. A 2011 Pew Research study found that only 55 percent of graduates from four-year colleges say that their education was “very useful in helping prepare them for a job or career.” These findings dovetail with ASTD research conducted in 2009, in which 51 percent of those surveyed said that the skills of the current workforce do not match changes in their company’s strategy, goals, markets or business models.
At the other end of the spectrum, we have educated knowledge workers who came of age in the last few decades of the 20th century. Once prized as middle managers who kept steady, slowly evolving corporate bureaucracies running, these same individuals are increasingly dispensable in a globally connected marketplace characterized by rapid change. The competencies that once equipped middle managers to run steady-state operations are now insufficient at best, and in most cases profoundly counterproductive. Today, organizations and their managers are forced to continually re-invent themselves in the context of continuous change and complexity. An apt, if unfortunate, example of an organization that failed to do this is Eastman Kodak. As the world became more digital, the company struggled to keep pace. Now it finds itself in bankruptcy. Or consider RIM—just a few years ago, the Blackberry seemed untouchable as the world’s dominant smartphone brand. Today, thanks to the advent of the iPhone, Android, Windows Phone and other devices, RIM may be closer to RIP.
The fading ranks of middle management have lost their edge, thanks to revolutions in both technology and globalization. Indeed, the latest wave of technology advances—cloud computing, advanced mobile applications and devices, and rapidly expanding social networks, to name a few—has greatly eased access to knowledge work. Nowhere is this change seen more dramatically than with the rapidly ascending workforce in high-growth markets outside the United States. Business writer Seth Godin remarks ominously, “If you're the average person out there doing average work, there's going to be someone else out there doing the exact same thing as you, but cheaper.” The game has shifted to a far more competitive, globally-connected field of play, requiring individuals to differentiate themselves in authentic, compelling ways like never before. Godin concludes, “If you're different somehow and have made yourself unique, people will find you and pay you more.”
I believe the accelerated pace of change, the increased complexity of the business environment, and the requirement for individual differentiation will place a new premium on individual, self-directed learning. Continual, personalized learning will be the key to individual growth and differentiation, and could eventually lead to a personalized learning and development revolution.
As leaders responsible for designing and implementing successful business strategies, and preparing our workforces to shape and execute them, we face clear challenges. We must find new ways to help yesterday’s knowledge workers take responsibility for their own development, and to see that development as central to not just their employer’s value creation, but their own value creation as well. We must help the knowledge workers of yesterday move beyond their comfort zones to become the innovation workers of tomorrow. It is critical that they embrace self-directed, continuous development in order to differentiate their skills and make themselves invaluable contributors to our economy. This, in turn, will help their organizations accelerate business results, spark innovation, improve performance, and drive growth.
What We Need: Agility, Curiosity, and a Commitment to Continuous Learning
To achieve these ends, workers must become more agile, curious, and committed to continuous learning. For the past few decades, knowledge workers have been required to master only their particular areas of expertise, but subject matter expertise is no longer enough. Applying one’s knowledge to an organization’s existing strategy or operations is only part of today’s business challenge. We need leaders and employees with foresight who can identify new opportunities, design creative solutions, and bring them to market. Simply put, we need innovation workers, not just knowledge workers. Innovation workers differentiate themselves through their ability to understand context, to judge situations, and to deviate from established norms in order to craft creative solutions to the challenges they face. This requirement for adaptability is identified in ASTD’s report, Bridging the Skills Gap, as one of the three highest-demand skills that will be required of the workforce in the future (and the future is today!).
Curiosity is the spark behind innovation and the driver that can differentiate a highly-valued employee from a mediocre performer. Curiosity is a fundamental attribute associated with many of the behaviors possessed by our greatest innovators. In their groundbreaking book The Innovator’s DNA, Jeff Dyer, Hal Gregersen, and Clayton Christensen share research demonstrating core innovation skills such as experimentation, questioning, exploring a broad range of interests, and meeting different types of people. According to their research, leaders who demonstrate these behaviors have a superior track record of leading innovation in their organizations.
In my own experience, I think of the curiosity I admire in my colleagues—those who embrace the opportunity to learn an entirely new subject. The best performers combine curiosity with a certain moxie that gives them the confidence to step into a new field and not just learn, but master and eventually promote their expertise. For example, a colleague of mine showed interest in social networking a few years ago, became a passionate user of multiple tools and applications, and read voraciously on the topic. Then he started sharing his experiences and ideas through presentations and articles, and eventually he wrote a book. Today, he is a nationally-sought-after speaker and expert on the topic. His success began with curiosity and a commitment to continuous learning. My colleague knows that stasis isn’t an option. To remain relevant in today’s workforce, he followed his natural curiosity to identify and master new areas of expertise.
How We Will Learn: Technology-Enabled Informal Learning
When we talk about fostering agility, curiosity and continuous learning, we’re fortunate because today we have a host of Web-based technologies (including social, mobile, video, games, and personalized portals) that can serve as perfect tools to support the self-directed learner.
By utilizing technology-enabled informal learning resources, collaborative learners can easily share and exchange knowledge, and self-directed learners can continuously teach themselves. These tools allow us to gain and share knowledge when, where and how we want it.
Technology-enabled informal learning (that is, technology-based learning that takes place outside a formal classroom environment) also makes sense for organizations because we know that people learn in a variety of ways, and they usually like to learn on their own terms. This insight is derived from Howard Gardner, the influential educational thinker, who has argued that all of us have multiple intelligences. Adjusting and adapting to this cognitive diversity, Gardner explains, will generally result in greater skill development and sharper problem solving.
I’m seeing an increasing number of organizations in a wide range of industries begin to facilitate informal learning programs for their employees. For example, the use of social learning, which allows people to leverage their personal and social networks for knowledge, is rapidly growing. According to ASTD’s Learning Executive’s Confidence Index for the fourth quarter of 2011, almost 55% of learning executives expect an increase in the use of informal learning and Web 2.0 tools in their organizations over the next 6 months.
At Intrepid Learning, we recently crowdsourced best-practice videos on the topic of client communication skills from our top learning consultants. Rather than designing and delivering a traditional course, we identified the critical required skills, and asked our community of experts (that is, our employees) to share a best practice. And the results were simply fantastic. We had nearly twenty contributors post their videos to our internal social learning network, and from there, the community rated, commented on, and voted for the best videos. This approach highlighted a strong cultural value for us: it reinforced that we’re all peers, and we can learn from each other – no matter where we sit in the organization.
Another key element of informal learning involves the concept of agility. Quick learning that doesn’t dilute quality is paramount right now in an incredibly time-constrained business environment where lost time equals lost opportunity. According to Nucleus Research, the average salesperson spends 3 to 5 hours per week searching for information across five corporate systems, and two out of every three report feeling overwhelmed by the volume of information they must process. Recent research from the University of Texas concludes that a mere 10% increase in information accessibility results in a 14.4% increase in sales. In this spirit of learning agility and instant access, we work with a financial services client to make short-form, expert-designed informal learning content about financial products and industry trends available to their partners at the moment of need—such as minutes before an important client meeting!
The Self-Directed Learner Is an Inspired Learner
Self-directed learners are intrinsically motivated. They understand that their passion for learning is fundamentally connected to their ability to differentiate themselves and succeed in the workplace. They know where they need to get smarter to add even more value to their organizations and to advance their careers. They take responsibility for their own learning because they are passionate, inspired and curious.
It’s these passionate, self-directed learners who will help drive the 21st century workforce transformation that our global economy requires. As I’ve shared, I believe adaptability, curiosity and commitment are the hallmarks of self-directed learners who will differentiate and succeed by leveraging the fantastic array of informal learning technologies available to them today. When individuals exhibit these characteristics, they will become highly valued and indispensable to their organizations. And in turn, the organizations that employ them will succeed – even in the most uncertain economy.
Sam Herring is the CEO of Intrepid Learning. Since 1999, Intrepid Learning has leveraged its extensive expertise in the field of learning to help blue-chip companies excel. Its services help organizations respond quickly to unpredictable market conditions and seize new opportunities.
A Harvard professor's study identifies the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun, as well as the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has felt like the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most had never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I, when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered for this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor to one of the worst periods in human history. Early that year, a volcanic eruption took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of the University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that the sun seemed to be perpetually in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." Even creepier, he noted: "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of cold, with summer temperatures falling by 1.5°C to 2.5°C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and widespread hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until around 640, when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
Quantum theory has weird implications. Trying to explain them just makes things weirder.
- The weirdness of quantum theory flies in the face of what we experience in our everyday lives.
- Quantum weirdness quickly created a split in the physics community, each side championed by a giant: Albert Einstein and Niels Bohr.
- As two recent books espousing opposing views show, the debate still rages on nearly a century afterward. Each "resolution" comes with a high price tag.
Albert Einstein and Niels Bohr, two giants of 20th century science, espoused very different worldviews.
To Einstein, the world was ultimately rational. Things had to make sense. They should be quantifiable and expressible through a logical chain of cause-and-effect interactions, from what we experience in our everyday lives all the way to the depths of reality. To Bohr, we had no right to expect any such order or rationality. Nature, at its deepest level, need not follow any of our expectations of well-behaved determinism. Things could be weird and non-deterministic, so long as they become more like what we expect as we travel from the world of atoms to our world of trees, frogs, and cars. Bohr divided the world into two realms: the familiar classical world and the unfamiliar quantum world. They should be complementary to one another but with very different properties.
The two scientists spent decades arguing about the impact of quantum physics on the nature of reality. Each had groups of physicists as followers, all giants in their own right. Einstein's group of quantum weirdness deniers included quantum physics pioneers Max Planck, Louis de Broglie, and Erwin Schrödinger, while Bohr's group had Werner Heisenberg (of uncertainty principle fame), Max Born, Wolfgang Pauli, and Paul Dirac.
Almost a century afterward, the debate rages on.
Einstein vs. Bohr, Redux
Two books — one by Sean Carroll, published last fall, and another by Carlo Rovelli, published very recently — perfectly illustrate how current leading physicists still cannot come to terms with the nature of quantum reality. The opposing positions still echo, albeit with many modern twists and experimental updates, the original Einstein-Bohr debate.
I summarized the ongoing dispute in my book The Island of Knowledge: Are the equations of quantum physics a computational tool that we use to make sense of the results of experiments (Bohr), or are they supposed to be a realistic representation of quantum reality (Einstein)? In other words, are the equations of quantum theory the way things really are or just a useful map?
Einstein believed that quantum theory, as it stood in the 1930s and 1940s, was an incomplete description of the world of the very small. There had to be an underlying level of reality, still unknown to us, that made sense of all its weirdness. De Broglie and, later, David Bohm proposed an extension of quantum theory known as hidden variable theory that tried to fill in the gap. It was a brilliant attempt to appease the urge Einstein and his followers had for an orderly natural world, predictable and reasonable. The price — and every attempt to deal with the problem of figuring out quantum theory has a price tag — was that the entire universe had to participate in determining the behavior of every single electron and all other quantum particles, implying the existence of a strange cosmic order.
Later, in the 1960s, physicist John Bell proved a theorem that put such ideas to the test. A series of remarkable experiments, starting in the 1970s and still ongoing, have essentially disproved the de Broglie-Bohm hypothesis, at least in any form one would call "reasonable," that is, as a theory with local interactions and causes. Omnipresence — what physicists call nonlocality — is a hard pill to swallow in physics.
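For readers who want a taste of the mathematics, the most common experimental test uses the CHSH form of Bell's inequality (a standard textbook statement, not something drawn from the books discussed here). Two distant detectors each choose between two measurement settings (a or a' on one side, b or b' on the other), and the measured correlations E between outcomes are combined into a single number:

$$ S = E(a, b) - E(a, b') + E(a', b) + E(a', b') $$

Any theory with local hidden variables requires $|S| \le 2$, while quantum mechanics allows values as large as $2\sqrt{2} \approx 2.83$. The experiments consistently find values above 2, which is precisely what rules out the "reasonable" local alternatives.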
Yet, the quantum phenomenon of superposition insists on keeping things weird. Here's one way to picture quantum superposition. In a kind of psychedelic dream state, imagine that you had a magical walk-in closet filled with identical shirts, the only difference between them being their color. What's magical about this closet? Well, as you enter this closet, you split into identical copies of yourself, each wearing a shirt of a different color. There is a you wearing a blue shirt, another a red, another a white, etc., all happily coexisting. But as soon as you step out of the closet or someone or something opens the door, only one you emerges, wearing a single shirt. Inside the closet, you are in a superposition state with your other selves. But in the "real" world, the one where others see you, only one copy of you exists, wearing a single shirt. The question is whether the inside superposition of the many yous is as real as the one you that emerges outside.
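For those who like symbols, the closet story has a compact expression in standard quantum notation (a generic textbook formula, with shirt colors standing in for the possible states):

$$ |\text{you}\rangle = c_{\text{blue}} |\text{blue}\rangle + c_{\text{red}} |\text{red}\rangle + c_{\text{white}} |\text{white}\rangle + \cdots $$

Inside the closet, all the amplitudes $c_i$ coexist. When the door opens, you emerge wearing color $i$ with probability $|c_i|^2$, and the probabilities sum to one: $\sum_i |c_i|^2 = 1$. The debate that follows is over whether the left-hand side describes reality itself or merely our information about it.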
The (modern version of the) Einstein team would say yes. The equations of quantum physics must be taken as the real description of what's going on, and if they predict superposition, so be it. The so-called wave function that describes this superposition is an essential part of physical reality. This point is most dramatically exposed by the many-worlds interpretation of quantum physics, espoused in Carroll's book. In this interpretation, reality is even weirder: the closet has many doors, each to a different universe. Once you step out, all of your copies step out together, each into a parallel universe. So, if I happen to see you wearing a blue shirt in this universe, in another, I'll see you wearing a red one. The price tag for the many-worlds interpretation is to accept the existence of an uncountable number of non-communicating parallel universes that enact all possibilities from a superposition state. In a parallel universe, there was no COVID-19 pandemic. Not too comforting.
Bohr's team would say take things as they are. If you stepped out of the closet and someone saw you wearing a shirt of a given color, then this is the one. Period. The weirdness of your many superposing selves remains hidden in the quantum closet. Rovelli defends his version of this worldview, called the relational interpretation, in which events are defined by the interactions between the objects involved, be they observers or not. In this example, the color of your shirt is the property at stake, and when I see it, I am entangled with this specific shirt of yours. It could have been another color, but it wasn't. As Rovelli puts it, "Entanglement… is the manifestation of one object to another, in the course of an interaction, in which the properties of the objects become actual." The price to pay here is to give up the hope of ever truly understanding what goes on in the quantum world. What we measure is what we get, and that is all we can say about it.
What should we believe?
Both Carroll and Rovelli are master expositors of science to the general public, with Rovelli being the more lyrical of the pair.
There is no resolution to be expected, of course. I, for one, am more inclined to Bohr's worldview and thus to Rovelli's, although the interpretation I am most sympathetic to, called QBism, is not properly explained in either book. It is much closer in spirit to Rovelli's, in that relations are essential, but it places the observer on center stage, given that information is what matters in the end. (Although, as Rovelli acknowledges, information is a loaded word.)
We create theories as maps for us human observers to make sense of reality. But in the excitement of research, we tend to forget the simple fact that theories and models are not nature but our representations of nature. Unless we nurture hopes that our theories are really how the world is (the Einstein camp) and not how we humans describe it (the Bohr camp), why should we expect much more than this?
Maybe eyes really are windows into the soul — or at least into the brain, as a new study finds.
- Researchers find a correlation between pupil size and differences in cognitive ability.
- The larger the pupil, the higher the intelligence.
- The explanation for why this happens lies within the brain, but more research is needed.
What can you tell by looking into someone's eyes? You can spot a glint of humor, signs of tiredness, or maybe that they don't like something or someone.
But outside of assessing an emotional state, a person's eyes may also provide clues about their intelligence, suggests new research. A study carried out at the Georgia Institute of Technology shows that pupil size is "closely related" to differences in intelligence between individuals.
The scientists found that larger pupils may be connected to higher intelligence, as demonstrated by tests that gauged reasoning skills, memory, and attention. In fact, the researchers claim that the relationship between intelligence and pupil size is so pronounced that it showed up in their two previous studies as well, and that it can be spotted with the naked eye, without any additional scientific instruments. You should be able to tell who scored the highest or the lowest on the cognitive tests just by looking at them, say the researchers.
The pupil-IQ link
The connection was first noticed across memory tasks, looking at pupil dilations as signs of mental effort. The studies involved more than 500 people aged 18 to 35 from the Atlanta area. The subjects' pupil sizes were measured by eye trackers, which use a camera and a computer to capture light reflecting off the pupil and cornea. As the scientists explained in Scientific American, pupil diameters range from two to eight millimeters. To determine average pupil size, they took measurements of the pupils at rest when the participants were staring at a blank screen for a few minutes.
Another part of the experiment involved having the subjects take a series of cognitive tests that evaluated "fluid intelligence" (the ability to reason when confronted with new problems), "working memory capacity" (how well people could remember information over time), and "attention control" (the ability to keep focusing attention even while being distracted). An example of the latter is a test that attempts to divert a person's focus from a disappearing letter by showing a flickering asterisk on another part of the screen. If a person pays too much attention to the asterisk, they might miss the letter.
The researchers concluded that a larger baseline pupil size was related to greater fluid intelligence, better attention control, and, to a lesser extent, greater working memory capacity. In an email exchange with Big Think, author Jason Tsukahara pointed out, "It is important to consider that what we find is a correlation — which should not be confused with causation."
The researchers also found that pupil size seemed to decrease with age. Older people had more constricted pupils, but when the scientists standardized for age, the pupil-size-to-intelligence connection remained.
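To make "standardized for age" concrete, here is a minimal Python sketch of one common approach: regress the linear effect of age out of both variables, then correlate the residuals. The data below is simulated purely for illustration; it is not the study's data, and the study's actual statistical pipeline may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, hypothetical data: age shrinks pupils,
# and pupil size tracks a cognitive score.
n = 500
age = rng.uniform(18, 35, n)                          # years
pupil = 6.0 - 0.05 * age + rng.normal(0, 0.5, n)      # diameter in mm
score = 50 + 5.0 * (pupil - pupil.mean()) + rng.normal(0, 3, n)

def residualize(y, x):
    """Remove the linear effect of x from y via least squares."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Raw correlation vs. correlation with age regressed out
raw_r = np.corrcoef(pupil, score)[0, 1]
adjusted_r = np.corrcoef(residualize(pupil, age),
                         residualize(score, age))[0, 1]

print(f"raw correlation:          {raw_r:.2f}")
print(f"age-adjusted correlation: {adjusted_r:.2f}")
```

If the age-adjusted correlation stays well above zero, age alone cannot explain the pupil-score link, which is the pattern the authors report.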
Why are pupils linked to intelligence?
The connection between pupil size and IQ likely resides within the brain. Pupil size has been previously connected to the locus coeruleus, a part of the brain that's responsible for synthesizing the hormone and neurotransmitter norepinephrine (noradrenaline), which mobilizes the brain and body for action. Activity in the locus coeruleus affects our perception, attention, memory, and learning processes.
As the authors explain, this region of the brain "also helps maintain a healthy organization of brain activity so that distant brain regions can work together to accomplish challenging tasks and goals." Because it is so important, loss of function in the locus coeruleus has been linked to conditions like Alzheimer's disease, Parkinson's, clinical depression, and attention deficit hyperactivity disorder (ADHD).
The researchers hypothesize that people who have larger pupils while in a restful state, like staring at a blank computer screen, have "greater regulation of activity by the locus coeruleus." This leads to better cognitive performance. More research is necessary, however, to truly understand why having larger pupils is related to higher intelligence.
In an email to Big Think, Tsukahara shared, "If I had to speculate, I would say that it is people with greater fluid intelligence that develop larger pupils, but again at this point we only have correlational data."
Do other scientists believe this?
As the scientists point out at the beginning of their paper, their conclusions are controversial and, so far, other researchers haven't been able to duplicate their results. The research team addresses this criticism by explaining that those other studies had methodological issues and examined only memory capacity, not the fluid intelligence their own work measured.