Amendments to the U.S. Constitution

Do you know your rights? Hit refresh on your constitutional knowledge!


The 2nd Amendment: How the gun control debate went crazy

The gun control debate has been at fever pitch for several years now, and as things fail to change, the stats get grimmer. The New York Times reports that there have been 239 school shootings nationwide since the 2012 Sandy Hook Elementary School massacre, in which 20 first graders and six adults were killed. In the six years since, 438 more people have been shot in schools, 138 of them fatally. Here, journalist and author Kurt Andersen reads the Second Amendment and explains its history from 1791 all the way to now. "What people need to know is that the Second Amendment only recently became such a salient amendment," says Andersen. It's only in the last 50 years that the gun debate has gone haywire, and the turning point came when the NRA went from reasonable to absolutist. So what does the "right to bear arms" really mean? What was a firearm in the 1790s, and what is a firearm now? "Compared to [the] many, many, many rounds-per-second firearms that we have today, it's the same word but virtually a different machine." Kurt Andersen is the author of Fantasyland: How America Went Haywire.

The 5th Amendment: Do not break in case of emergency

The Fifth Amendment of the United States Constitution is often talked about but rarely read in full. The reason? Counterterrorism expert Amaryllis Fox explains that these days it has simply become shorthand for not saying anything in court that might incriminate you. But the full text lays out how important the due process of law is to every American. So perhaps learning the full text, not just the shorthand, is an important part of being an American citizen. You can find out more about Amaryllis Fox here.

The 13th Amendment: The unjust prison to profit pipeline

The 13th Amendment to the U.S. Constitution abolished slavery—but it remains legal under one condition. The amendment reads: "Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction." Today in America, big corporations profit from cheap prison labor in both privatized and state-run prisons. Shaka Senghor knows this second wave of slavery well—he spent 19 years in prison, working for a starting wage of 17 cents per hour, in a facility where a 15-minute phone call cost between $3 and $15. In this video, he describes the exploitation that goes on in American prisons and how the 13th Amendment allows slavery to continue. He also questions the profit incentive to incarcerate in this country: why does America hold less than 5% of the world's population but almost 25% of the world's prisoners? Shaka Senghor's latest venture is Mind Blown Media.

The 14th Amendment: History's most radical idea?

In 1868, three years after slavery was abolished, the 14th Amendment to the U.S. Constitution was adopted, granting citizenship to everyone born or naturalized in the United States and equal protection under the law to every person within a state's jurisdiction. For CNN news commentator Van Jones, this amendment is, in his words, the "whole enchilada." It's not the most popular amendment—it doesn't get name-dropped in TV courtroom dramas, or fiercely debated before elections—but to Jones it is a weighty principle that was far ahead of its time. "It doesn't say equal protection under the law unless you're a lesbian. That's not what it says. It doesn't say equal protection under the law unless you're African American. That's not what it says. It says if you're in the jurisdiction you get equal protection under the law. That's radical. In 10,000 years of human history, that's radical." Van Jones is the author of Beyond the Messy Truth: How We Came Apart, How We Come Together.

The 26th Amendment: The act of voting should empower people

Is a 55.7% voter turnout really enough? Bryan Cranston was disappointed with the 2016 presidential election, not for the outcome but for the process. According to Census Bureau figures, it was a bumper year for voter engagement, with 137.5 million total ballots cast—but is just over half of eligible voters really that impressive? The Pew Research Center shows that the U.S. still trails most developed nations in voter registration and turnout. "I think we've devalued the honor and privilege of voting and we've become complacent, and maybe a bit cynical about our place and rights as citizens and our duties and responsibilities," says Cranston. The good news? Millennials and Gen Xers are on an upward trend in civic engagement, casting more votes than Boomers and older generations in the 2016 election. Cranston reminds us of how empowering the 26th Amendment is in granting voting rights to Americans aged 18 and older. "We can't take that lightly," says Cranston. It's a timely reminder, too, as 40 million of the people behind that 55.7% figure are expected to sit out the midterm elections, mostly millennials, unmarried women, and people of color. Bryan Cranston's new book is the spectacular memoir A Life in Parts.

  • Polarization and extreme partisanship have been on the rise in the United States.
  • Political psychologist Diana Mutz argues that we need more deliberation, not political activism, to keep our democracy robust.
  • Despite increased polarization, Americans still have more in common than it might appear.


Imagine everyday citizens engaging in the democratic process. What images spring to mind? Maybe you thought of town hall meetings where constituents address their representatives. Maybe you imagined mass sit-ins or marches in the streets to protest unpopular legislation. Maybe it's grassroots organizations gathering signatures for a popular referendum. Though they vary in means and intensity, all of these have one thing in common: participation.

Participatory democracy is a democratic model that emphasizes civic engagement as paramount for a robust government. For many, it's both the "hallmark of social movements" and the gold standard of democracy.

But all that glitters may not be gold. While we can all point to historical moments in which participatory democracy was critical to necessary change, such activism can have deleterious effects on the health of a democracy, too. One such byproduct, political psychologist Diana Mutz argues, can be a lessening of political tolerance.

Participation or deliberation?

In her book Hearing the Other Side: Deliberative Versus Participatory Democracy, Mutz argues that participatory democracy is best supported by close-knit groups of like-minded people. Political activism requires fervor to rouse people to action. To support such passions, people surround themselves with others who believe in the cause and view it as unassailable.

Alternative voices and ideologies — what Mutz calls "cross-cutting exposures" — are counterproductive to participation because they don't reinforce the group's beliefs and may soften the image of the opposing side. This can dampen political zeal and discourage participation, particularly among those averse to conflict. To prevent this from happening, groups can become increasingly intolerant of the other side.

"You can have a coup and maximize levels of participation, but that wouldn't be a great thing to do. It wouldn't be a sign of health and that things were going well."

As the book's title suggests, deliberative democracy fosters a different outlook for those who practice it. This model looks toward deliberation, communication, compromise, and consensus as the signs of a resilient democracy. While official deliberation is the purview of politicians and members of the court, it's worth noting that deliberative democracy doesn't mean inactivity from constituents. It's a philosophy we can use in our daily lives, from community memberships to interactions on social media.

"The idea is that people learn from one another," Mutz tells Big Think. "They learn arguments from the other side as well as learn more about the reasons behind their own views. [In turn], they develop a respect for the other side as well as moderate their own views."

Mutz's analysis leads her to support deliberation over activism in U.S. politics. She notes that the homogeneous networks required for activism can lead to positive changes — again, there are many historical examples to choose from. But such networks also risk developing intolerance and extremism within their ranks, examples of which are also readily available on both the right and left.

Meanwhile, the cross-cutting networks required for deliberative democracy offer a bounty of benefits, with the only risk being lowered levels of participation.

As Mutz writes: "Hearing the other side is also important for its indirect contributions to political tolerance. The capacity to see that there is more than one side to an issue, that political conflict is, in fact, a legitimate controversy with rationales on both sides, translates to greater willingness to extend civil liberties to even those groups whose political views one dislikes a great deal."

Of politics and summer camp

Take that! A boxing bout between two members of a schoolboys' summer camp at Pendine, South Wales, takes place in a field within a ring of cheering campmates. (Photo by Fox Photos/Getty Images)

Of course, listening openly and honestly to the other side doesn't come naturally. Red versus blue. Religious versus secular. Rural versus cosmopolitan. We divide ourselves into polarized groups that seek to silence cross-cutting communication in the pursuit of political victory.

"The separation of the country into two teams discourages compromise and encourages an escalation of conflict," Lilliana Mason, assistant professor of Government and Politics at the University of Maryland, writes in her book Uncivil Agreement: How Politics Became Our Identity. "The cooperation and compromise required by democracy grow less attainable as partisan isolation and conflict increase."

Mason likens the current situation to Muzafer Sherif's famous Robbers Cave Experiment.

In the early 1950s, Sherif gathered a group of boys for a fun summer camp at Robbers Cave State Park, Oklahoma. At least, that was the pretense. In reality, Sherif and his counselors were performing an experiment in intergroup conflict that would now be considered unethical.

The boys were divided into two groups, the Rattlers and the Eagles. For a while, the counselors kept the groups separate, allowing the boys to bond only with their assigned teammates. Then the two groups were brought together to compete in a tournament. They played competitive games, such as baseball and tug-of-war, with the winning team promised the summer camp trophy.

Almost immediately, the boys identified members of the other team as intruders. As the tournament continued, the conflict escalated beyond sport. The Eagles burned a Rattlers flag. The Rattlers raided the Eagles' cabin. When asked to describe the other side, both groups showed in-group favoritism and out-group aggression.

Most troubling, the boys wholly assumed the identity of an Eagle or Rattler despite having never been either before that very summer.

"We, as modern Americans, probably like to think of ourselves as more sophisticated and tolerant than a group of fifth-grade boys from 1954. In many ways, of course, we are," Mason writes. "But the Rattlers and the Eagles have a lot more in common with today's Democrats and Republicans than we would like to believe."

Like at Robbers Cave, signs of incendiary conflict are easy to spot in U.S. politics today.

"Political Polarization in the American Public", Pew Research Center, Washington, D.C. (June 12, 2014)

A 2014 Pew survey found that the ideological overlap between Democrats and Republicans has shrunk dramatically: more Republicans now sit to the right of the median Democrat than in the past, and vice versa. The survey also found that partisan animosity had doubled since 1994.

In her book, Mason points to research that shows an "increasing number of partisans don't want party leaders to compromise," blame "the other party for all incivility in government," and abhor the idea of dating someone from outside their ideological group.

And let's not forget Congress, which has grown increasingly divided along ideological lines over the past 60 years.

A dose of daily deliberation

Horace, Virgil and Varius at the house of Maecenas. Painting by Charles Francois Jalabert (1819-1901), 1846. Beaux-Arts museum, Nimes, France. (Photo by Leemage/Corbis via Getty Images)

A zero-sum mindset may be inevitable in a summer camp tournament, but it's detrimental when carried into wider society and politics. Yet if participatory democracy leads to the silencing of oppositional voices, a zero-sum mindset is exactly what we get. Conversely, creating networks that tolerate and support differing opinions offers positive-sum benefits, like greater tolerance and a better understanding of complicated issues.

Mutz wrote her book in 2006, but as she told us in our interview, the intervening years have only strengthened her resolve that deliberation improves democratic health:

"Right now, I'm definitely on the side of greater deliberation rather than just do whatever we can to maximize levels of participation. You can have a coup and maximize levels of participation, but that wouldn't be a great thing to do. It wouldn't be a sign of health and that things were going well. Democracy [must be] able to absorb differences in opinion and funnel them into a means of governing that people were okay with, even when their side didn't win."

Unfortunately, elected officials and media personalities play up incivility and a sense of national crisis for attention and ratings, respectively. That certainly doesn't help promote deliberation, but as Mutz reminded us, people perceive political polarization to be much higher than it actually is. In our daily lives, deliberative democracy is more commonplace than we realize and something we can promote in our communities and social groups.

Remember that 2014 Pew survey that found increased levels of partisan animosity? Its results showed the divide to be strongest among those most engaged and active in politics. The majority of those surveyed did not hold uniform left or right views, did not see the opposing party as an existential threat, and believed in the deliberative process in government. In other words, the extremes were pulling hard at the poles.

Then there's social media. The popular narrative is that social media is a morass of political hatred and clashing identities. But most social media posts have nothing to do with politics. An analysis of Facebook posts from September 2016, the middle of an election year, found the most popular topics centered on football, Halloween, Labor Day, country music, and slow cookers.

And what of political partisanship and prejudice? In an analysis of polarization and ideological identity, Mason found that labels like "liberal" and "conservative" had less to do with values and policy attitudes – as the majority of Americans agree on a substantial number of issues – and more to do with social group identification.

Yes, we all know those maps that media personalities dust off every election year, the ones that show the U.S. carved up into competing camps of red and blue. The reality is far more intricate and complex, and Americans' intolerance for the other side varies substantially from place to place and across demographics.

So while participation has its place, a healthy democracy requires deliberation, a recognition of the other side's point of view, and the willingness to compromise. Tolerance may not make for good TV or catchy political slogans, but it's something we all can foster in our own social groups.


Imagine you're dining out at a casual restaurant with some friends. After looking over the menu, you decide to order the steak. But then, after a dinner companion orders a salad for their main course, you declare: "I'll have the salad too."


This kind of situation – making choices you probably wouldn't make if you were alone – happens more often than you might think, in a wide variety of settings, from eating out to shopping and even donating to charity. And it's not just a matter of suddenly realizing the salad sounds more appetizing.

Prior research has shown that people have a tendency to mimic the choices and behaviors of others. But other work suggests people sometimes do the exact opposite, making a different choice from others to signal their uniqueness within a group.

As scholars who examine consumer behavior, we wanted to resolve this discrepancy: What makes people more likely to copy others' behavior, and what leads them to do their own thing?

A social signal

We developed a theory that how and why people match or mimic others' choices depends a lot on the attributes of the thing being selected.

Choices have what we call "ordinal" attributes that can be ranked objectively – such as size or price – as well as "nominal" attributes that are not as easily ranked – such as flavor or shape. We hypothesized that ordinal attributes have more social influence, alerting others to what may be seen as "appropriate" in a given context.

Nominal attributes, on the other hand, would seem to be understood as a reflection of one's personal preferences.

So we performed 11 studies to test our theory.

Size may be social, but flavor remains a personal choice.

One scoop or two

In one study conducted with 190 undergraduate students, we told participants that they were on their way to an ice cream parlor with a friend to get a cone. We then told our would-be ice cream consumers that their companion was getting either one scoop of vanilla, one scoop of chocolate, two scoops of vanilla or two scoops of chocolate. We then asked participants what they wanted to order.

We found that people were much more likely to order the same size as their companion but not the same flavor.

The participants seemed to interpret the number of scoops the companion ordered as an indication of what's appropriate. For example, ordering two scoops might signal "permission" to indulge or seem the more financially savvy – if less healthy – choice, since it usually costs only marginally more than one. Or a single scoop might suggest "let's enjoy some ice cream – but not too much."

The choice of chocolate or vanilla, on the other hand, is readily understood as a personal preference and thus signals nothing about which is better or more appropriate. I like vanilla, you like chocolate – everyone's happy.

We also asked participants to rate how important avoiding social discomfort was in their decision. Those who ordered the same number of scoops as their companion rated it as more important than those who picked a different amount.

Examining other contexts

In the other studies, we replicated our results using different products, in various settings and with a variety of ordinal and nominal attributes.

For example, in another experiment, we gave participants US$1 to buy one of four granola bars from a mock store we set up inside the University of Pittsburgh's Katz/CBA Business Research Center. As the ordinal attribute, we used brand prestige: They could pick either a more expensive well-known national brand or a cheaper one sold by a grocery store under its own label. Our nominal attribute was chocolate or peanut butter.

Before making the choice, a "store employee" stationed behind the checkout register told participants she or he had tested out a granola bar, randomly specifying one of the four – without saying anything about how it tasted. We rotated which granola bar the employee mentioned every hour during the five-day experiment.

Similar to the ice cream study, participants tended to choose the brand that the employee said he or she had chosen – whether it was the cheaper or pricier one – but ignored the suggested flavor.

Moving away from food, we also examined influences on charitable donations. In this study, we recruited online participants who were paid for their time. In addition, we gave each participant 50 cents to either keep or donate to charity.

If they chose to donate the money, they could give all of it or half to a charity focused on saving either elephants or polar bears. Before they made their choice, we told them what another participant had supposedly decided to do with their money – randomly based on one of the four possibilities.

The results were the same as in all our other studies, including ones we conducted involving different brands and shapes of pasta and varieties and taste profiles of wine. People matched the ordinal attribute – in this case the amount – but paid little heed to the nominal attribute – the chosen charity – which remained a personal preference.

These kinds of social cues regarding others' choices are everywhere, from face-to-face interactions with friends to online tweets or Instagram posts, making it difficult to escape the influence of what others do on our own consumption choices.

And if we believe we're making our companions feel more comfortable while still choosing something we like, what's the harm in that?

Kelly L. Haws, Associate Professor of Marketing, Vanderbilt University; Brent McFerran, W. J. Van Duse Associate Professor, Marketing, Simon Fraser University, and Peggy Liu, Assistant Professor of Business Administration, University of Pittsburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.

  • Along with Maslow, Carl Rogers helped pioneer the field of humanistic psychology.
  • Although most associate the term "self-actualization" with Maslow, it's a concept that's frequently found in humanistic psychological literature.
  • What's the difference between Maslow's and Rogers' versions of self-actualization, and what can we learn from Rogers?

One could be forgiven for thinking that the term "self-actualization" was developed entirely by Abraham Maslow. Today, there are very few contexts where one can hear the term outside of Maslow's famous hierarchy of needs. But in fact, the twentieth century featured many humanistic psychologists who used the term in one sense or another. It was coined by the psychologist Kurt Goldstein, who used it to refer to something very similar to what Maslow would later focus on: the tendency for human beings to become all that they can be, that "what a man can be, he must be."

But this isn't the only take on self-actualization. Carl Rogers, a peer of Maslow's, thought of humanistic psychology and self-actualization in an entirely distinct way.

Rogers' theory of personality and behavior

A sketch of Carl Rogers. (Jan Rieckhoff/ullstein bild via Getty Images)

Along with Maslow, Rogers was one of the pioneers of humanistic psychology. Specifically, Rogers' greatest contribution was to the practice of psychotherapy, particularly in the development of what's known as "person-centered therapy," which is thought of today as one of the major approaches to therapy, along with cognitive behavioral therapy, psychoanalysis, and so on.

At the core of this therapeutic approach was Rogers' theory of personality and behavior. Just as Maslow had his hierarchy of needs, with self-actualization at the top, Rogers had his own model of human development, although self-actualization played a very different role in Rogers' system. Rogers actually had 19 separate propositional statements upon which he built his theory, but we'll just summarize the major components.

In Rogers' theory, reality for an individual (which he refers to as an organism) is the sum of subjective perceptions that the organism experiences. A developing organism will take some of these perceptions and separate them, labeling them as the self. As an example, you might perceive your body and a box of paperclips on your desk, but you would only consider your perception of your body to fall under the designation of "self." This happens with concepts and beliefs, too. Some of these things become part of the self, while others are perceived as belonging to the environment.

This idea of what counts as the self and what doesn't isn't fixed; it's fluid. Different concepts, perceptions, and experiences occur as a result of interacting with the environment, and the organism has to sort out how to relate their identity to these experiences.

Naturally, this isn't a smooth process. As a result of these interactions, most of us invent an "ideal" self, the person we think we should be, rather than the person we actually are. In Rogers' system, the broader the gap between the real self and the ideal self, the greater the sense of incongruence. All sorts of behaviors and experiences might occur that seem unacceptable to who we think we are. If this incongruence is severe enough, the organism might develop a psychopathology. If, on the other hand, the person we actually are and the person we think we should be are congruent with one another, we become more open to experiences and have to do less work defending ourselves from the outside world.

Where does self-actualization fit into all of this?

Where Maslow had self-actualization at the very top of a hierarchy of motivations, Rogers argued that self-actualization was the only motivation and that it was constantly driving the organism forward. "The organism has one basic tendency and striving — to actualize, maintain, and enhance the experiencing organism," wrote Rogers. For Rogers, every behavior and motivation is directed in pursuit of actualization, of this constant negotiation between the self and the perceptual field that composes an individual's reality.

Immediately, we can see that Maslow's version of self-actualization is a lot more aspirational. In Rogers' system, self-actualization is sort of just the default way of life — the only way of life, in fact. And where Maslow's version is sort of an endpoint, Rogers saw self-actualization as a never-ending process. But Rogers does have his own version of an ideal way of living, which he called, appropriately enough, "the good life."

Living the good life

In order to live the good life, an organism must symbolically assimilate all experiences into a consistent relationship with the self. To be fair, that's not exactly an intuitive definition. Consider, for instance, a narcissist hearing criticism. The narcissist perceives themselves to be perfect, and criticism is a threat that cannot be assimilated into their concept of their perfect self. Somebody living the "good life," however, might take that criticism as potentially true — potentially false, too, but worth considering at the very least.

In this regard, somebody living the good life matches up neatly with Maslow's idea of the self-actualized individual. Like Maslow, Rogers also believed that individuals living the good life would exemplify certain characteristics that would make them distinct from the fragile, neurotic, rank-and-file folks that most of us are. According to Rogers, the fully functioning person living the good life would have these characteristics:

  • An increasing openness to experience, as no experience would threaten the individual's self-concept;
  • An increasingly existential and present lifestyle, since they wouldn't need to distort the present in a way that fits with their self-concept;
  • Greater trust in their own values rather than those imposed upon them by, say, their parents or their society;
  • Openness to a wide variety of choices, as they wouldn't be restricted by possible threats to their self-concept (such as a narcissist might be if they engaged in some activity that could make them appear foolish);
  • More creativity, as they wouldn't feel the need to conform;
  • More constructive rather than destructive behavior;
  • And a rich and full life.

Seems like a pretty good life, all in all. But Rogers also warned that not everybody is ready for the good life. He wrote,

"This process of the good life is not, I am convinced, a life for the faint-hearted. It involves the stretching and growing of becoming more and more of one's potentialities. It involves the courage to be. It means launching oneself fully into the stream of life."

The quote is from Rogers' book On Becoming a Person: A Therapist's View of Psychotherapy.


  • "Super-agers" seem to escape the decline in cognitive function that affects most of the elderly population.
  • New research suggests this is because of higher functional connectivity in key brain networks.
  • It's not clear what the specific reason for this is, but research has uncovered several activities that encourage greater brain health in old age.

At some point in our 20s or 30s, something starts to change in our brains. They begin to shrink a little bit. The myelin that insulates our nerves begins to lose some of its integrity. Fewer and fewer chemical messages get sent as our brains make fewer neurotransmitters.

As we get older, these processes accelerate. Brain weight decreases by about 5 percent per decade after 40. The frontal lobe and hippocampus — areas related to memory encoding — begin to shrink around age 60 or 70. But this is just an unfortunate reality; you can't always be young, and things will begin to break down eventually. That's part of the reason why some individuals think that we should all hope for a life that ends by 75, before the worst effects of time sink in.

But this might be a touch premature. Some lucky individuals seem to resist these destructive forces working on our brains. In cognitive tests, these 80-year-old "super-agers" perform just as well as individuals in their 20s.

Just as sharp as the whippersnappers

To find out what's behind the phenomenon of super-agers, researchers conducted a study examining the brains and cognitive performance of two groups: 41 young adults between the ages of 18 and 35, and 40 older adults between the ages of 60 and 80.

First, the researchers administered a series of cognitive tests, like the California Verbal Learning Test (CVLT) and the Trail Making Test (TMT). Seventeen members of the older group scored at or above the mean scores of the younger group. That is, these 17 could be considered super-agers, performing at the same level as the younger study participants. The remaining members of the older group tended to perform worse on the cognitive tests. Then, the researchers scanned all participants' brains with fMRI, paying special attention to two brain networks: the default mode network and the salience network.

The default mode network is, as its name might suggest, a set of brain regions that are active by default — when we're not engaged in a task, they tend to show higher levels of activity. It also appears to be closely involved in thinking about one's self and about others, as well as in aspects of memory and thinking about the future.

The salience network is another network of brain regions, so named because it appears deeply linked to detecting and integrating salient emotional and sensory stimuli. (In neuroscience, saliency refers to how much an item "sticks out"). Both of these networks are also extremely important to overall cognitive function, and in super-agers, the activity in these networks was more coordinated than in their peers.

An image of the brain highlighting the regions associated with the default mode network. (Wikimedia Commons)

How to ensure brain health in old age

While prior research has identified some genetic influences on how "gracefully" the brain ages, there are likely activities that can encourage brain health. "We hope to identify things we can prescribe for people that would help them be more like a superager," said Bradford Dickerson, one of the researchers in this study, in a statement. "It's not as likely to be a pill as more likely to be recommendations for lifestyle, diet, and exercise. That's one of the long-term goals of this study — to try to help people become superagers if they want to."

To date, there is some preliminary evidence of ways that you can keep your brain younger for longer. For instance, more education and a cognitively demanding job predict higher cognitive abilities in old age. Generally speaking, the adage of "use it or lose it" appears to hold true; maintaining a cognitively active lifestyle helps protect your brain in old age. So, it might be tempting to fill your golden years with beer and reruns of CSI, but that's unlikely to help you keep your edge.

Aside from these intuitive ways to keep your brain healthy, regular exercise appears to boost cognitive health in old age, as Dickerson mentioned. Diet is also a protective factor, especially diets rich in omega-3 fatty acids (found in fish oil), polyphenols (found in dark chocolate!), vitamin D (egg yolks and sunlight), and the B vitamins (meat, eggs, and legumes). There's also evidence that having a healthy social life in old age can protect against cognitive decline.

For many, the physical decline associated with old age is an expected side effect of a life well-lived. But the idea that our intellect will also degrade can be a much scarier reality. Fortunately, the existence of super-agers shows that at the very least, we don't have to accept cognitive decline without a fight.


  • Right now cybercrime is basically a financial crime — it's a business of stealing people's money or stealing their data. Data has value.
  • We develop a lot of technology — we always need to ask how a new innovation could be misused and build in safeguards so that it can't be.
  • Because we currently don't do these things, we have hackable vehicles, pacemakers, and laptops.