Steve Jobs changes computing (again)
Today marked an historic announcement. Surprisingly, I’m not talking about Obama’s first State of the Union, but rather Steve Jobs’ unveiling of the new iPad. So, how has Uncle Steve changed the game? Let’s take a look.
A perfect machine for Baby Boomers
I’m convinced the iPad is the perfect machine for selling into a large market that hasn’t been catered to yet, has plenty of disposable income, and would benefit most immediately from what we will all come to recognize as a new type of computer: Baby Boomers.
At the time of the 2000 census, there were more than 79 million Baby Boomers in the US, who are now starting to slow the pace of their daily lives as they transition toward retirement. Their personal computing needs (outside the office) aren’t very intensive — they communicate via email, read the news, share photos, maybe use video chat, and do light research.
So, it would seem that current laptop or desktop computers do far more than is necessary for this audience. And since added complexity often causes frustration, there may be a better solution. What would the perfect “home computer” for a boomer look like?
That machine would be:
- Simple to understand and use
- Quick to complete common tasks (see below)
- Available whenever and wherever a need to interact with the digital world arises
Here’s what that computer should be able to accomplish:
- Booking movie tickets or reservations online
- Looking up references (online recipes, fact checking, manuals, etc.)
- Video chatting with family
- Storing pictures of family trips or events
- Occasionally doing light amounts of work
- Online banking

(Note: this is not intended to be an exhaustive list.)
When you think about a machine that handles those common tasks well, and does so in a very responsive and always accessible way, the iPad is really the first good answer (more will follow if the iPad is successful).
Apple creates Ambient Computing
This type of machine represents a new concept — Ambient Computing. Ambient Computing is robust enough to handle most computing tasks but requires much less effort to access than a traditional computer.
The most impressive innovation, and the one that truly makes Ambient Computing possible, is the A4 chip. That chip is at the heart of the new device’s speed and responsiveness. While I hope this chip design extends to the iPhone in the future, it currently makes the iPad capable of near-instant boot and empowers applications to be incredibly responsive. It strips away everything in the computing experience other than getting into your desired program and completing your goal.
If Apple has built a machine that almost entirely removes the startup cost of completing an action on a traditional computer (which, even in good scenarios, often takes 20-30 seconds on non-Apple machines), then it has created a machine that’s much more capable of capturing cognitive inspiration from its owner – making you, as the user, more likely to act on your ideas. Apple is already good at this (going from sleep/closed to working on a new MacBook is generally a sub-10-second proposition), but carrying a laptop with you everywhere is a nuisance, and pulling a computer out of your bag for a one-minute task is awkward in most situations (and often rude). Smartphones already handle these issues well, but they are generally sluggish and unreliable for anything but the simplest tasks.
Bridging the accessibility of a mobile device with the robustness and trustworthiness of a full computer will appeal to a large audience — one that will grow over time. But Apple’s best bet for establishing this device category is to put up impressive sales numbers for the first model. There’s also a huge immediate opportunity to replace the standard machine for lightweight home PC users – like Baby Boomers, as outlined above, or families, as outlined by Kottke. If I were Scott Forstall, I’d be focused on empowering applications that resonate heavily with this crowd: cookbooks come to mind, board games too; news/photos/communication will be killer (and already are on the machine). What else?
Sure, there are fairly unacceptable limitations — no camera, no easy solution for printing/scanning peripherals, and questionable support for other screens (TV) for media content — which will have to be ironed out in V2. There are also broader-reaching issues that might cause trouble for Apple, like the lack of Flash support and the inability to show and track most web advertisements in mobile Safari. But with the hardware improvements announced today, the content and consumer-billing relationships Apple has built, and the knowledge that Apple can improve the device over several generations (do you remember the first iPod?), I think we are looking at a large market that Apple has a good chance of succeeding in.
That’s why I’m bullish on the iPad. With the keyboard dock, this could be a full-on replacement PC for some non-power consumers (think of WebTV — and trust me, WebTV users didn’t need multi-tasking). For heavier users, this still provides a great “ambient computing” experience that can allow someone to act on their immediate thoughts with far lower effort (creating more personal value), while still having a more robust machine capable of handling more demanding tasks.
I’m concerned about the movement away from open systems, but that doesn’t change the writing on the wall for this type of device — kudos to Apple for seeing and defining a great first step at an ambient computing device that I expect to become a category definer.
Great job, Apple.

Ancillary thoughts that might be interesting to you:
- Who called this first? Carl Howe back in 2005?
- I think the computing setup of the future looks like cheapish, durable long-term machines at home and work (think Mac mini), a smartphone for always-there access, and a “slate” for heavier-duty work that can travel with you. Phones and slates will change every 1-2 years; the stable machines will go 4-6. Heavy-duty tasks (e.g. QuickBooks) will migrate toward the slate over time. At some point, you’ll see home/work machines becoming just docks/enhancements to the “brain” of your slate. Slates will have to allow for more open computing for this future to occur (i.e. the iPad technology will have to run/support full OS X).
- Many of my friends hate the lack of multi-tasking. Let me make a bold statement: multi-tasking is not important in ambient computing, which, by its nature, will be most useful for single tasking. Multi-tasking is a nice-to-have, but one that threatens Apple’s music sales (streaming Pandora vs. using iTunes) and encourages pundits to classify the machine as a replacement computer (hmm, kind of like I’m doing above), which Apple doesn’t want, as it would set consumer expectations for the device too high and possibly cannibalize laptop sales (which are much higher margin right now).
Why mega-eruptions like the ones that covered North America in ash are the least of your worries.
- The supervolcano under Yellowstone produced three massive eruptions over the past few million years.
- Each eruption covered much of what is now the western United States in an ash layer several feet deep.
- The last eruption was 640,000 years ago, but that doesn't mean the next eruption is overdue.
The end of the world as we know it
Panoramic view of Yellowstone National Park
Image: Heinrich Berann for the National Park Service – public domain
Of the many freak ways to shuffle off this mortal coil – lightning strikes, shark bites, falling pianos – here's one you can safely scratch off your worry list: an eruption of the Yellowstone supervolcano.
As the map below shows, previous eruptions at Yellowstone were so massive that the ash fall covered most of what is now the western United States. A similar event today would not only claim countless lives directly, but also create enough subsidiary disruption to kill off global civilisation as we know it. A relatively recent eruption of the Toba supervolcano in Indonesia may have come close to killing off the human species (see further below).
However, just because a scenario is grim does not mean that it is likely (insert topical political joke here). In this case, the doom mongers claiming an eruption is 'overdue' are wrong. Yellowstone is not a library book or an oil change. Just because the previous mega-eruption happened long ago doesn't mean the next one is imminent.
Ash beds of North America
Ash beds deposited by major volcanic eruptions in North America.
Image: USGS – public domain
This map shows the location of the Yellowstone plateau and the ash beds deposited by its three most recent major outbreaks, plus two other eruptions – one similarly massive, the other the most recent one in North America.
The Huckleberry Ridge eruption occurred 2.1 million years ago. It ejected 2,450 km3 (588 cubic miles) of material, making it the largest known eruption in Yellowstone's history and in fact the largest eruption in North America in the past few million years.
This is the oldest of the three most recent caldera-forming eruptions of the Yellowstone hotspot. It created the Island Park Caldera, which lies partially in Yellowstone National Park, Wyoming, and extends westward into Idaho. Ash from this eruption covered an area from southern California to North Dakota, and from southern Idaho to northern Texas.
About 1.3 million years ago, the Mesa Falls eruption ejected 280 km3 (67 cubic miles) of material and created the Henry's Fork Caldera, located in Idaho, west of Yellowstone.
It was the smallest of the three major Yellowstone eruptions, both in terms of material ejected and area covered: 'only' most of present-day Wyoming, Colorado, Kansas and Nebraska, and about half of South Dakota.
The Lava Creek eruption was the most recent major eruption of Yellowstone: about 640,000 years ago. It was the second-largest eruption in North America in the past few million years, creating the Yellowstone Caldera.
It ejected only about 1,000 km3 (240 cubic miles) of material, i.e. less than half of the Huckleberry Ridge eruption. However, its debris was spread over a significantly wider area: basically, the Huckleberry Ridge footprint plus larger slices of both Canada and Mexico, plus most of Texas, Louisiana, Arkansas, and Missouri.
Long Valley
This eruption occurred about 760,000 years ago. It was centered on southern California, where it created the Long Valley Caldera, and spewed out 580 km3 (139 cubic miles) of material. This makes it North America's third-largest eruption of the past few million years.
The material ejected by this eruption is known as the Bishop ash bed, and covers the central and western parts of the Lava Creek ash bed.
Mount St Helens
The eruption of Mount St Helens in 1980 was the deadliest and most destructive volcanic event in U.S. history: it left a mile-wide crater, killed 57 people, and caused economic damage in the neighborhood of $1 billion.
Yet by Yellowstone standards, it was tiny: Mount St Helens only ejected 0.25 km3 (0.06 cubic miles) of material, most of the ash settling in a relatively narrow band across Washington State and Idaho. By comparison, the Lava Creek eruption left a large swathe of North America in up to two metres of debris.
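The scale gap between these eruptions is hard to picture from prose alone. A quick back-of-the-envelope comparison (a sketch in Python, using only the ejecta volumes cited in this article) puts them side by side:

```python
# Ejecta volumes in cubic kilometres, as cited above.
eruptions = {
    "Huckleberry Ridge (2.1 Ma)": 2450,
    "Lava Creek (640 ka)": 1000,
    "Long Valley (760 ka)": 580,
    "Mesa Falls (1.3 Ma)": 280,
    "Mount St Helens (1980)": 0.25,
}

# Express each eruption as a multiple of the 1980 Mount St Helens event.
baseline = eruptions["Mount St Helens (1980)"]
for name, volume in sorted(eruptions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {volume:,} km^3 ({volume / baseline:,.0f}x Mount St Helens)")
# Huckleberry Ridge works out to 9,800 times the Mount St Helens eruption.
```

Even the smallest of the three Yellowstone events, Mesa Falls, was over a thousand times larger than the deadliest volcanic disaster in U.S. history.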
The difference between quakes and volcanoes
The volume of dense rock equivalent (DRE) ejected by the Huckleberry Ridge event dwarfs all other North American eruptions. It is itself overshadowed by the DRE ejected at the most recent eruption at Toba (present-day Indonesia). This was one of the largest known eruptions ever and a relatively recent one: only 75,000 years ago. It is thought to have caused a global volcanic winter which lasted up to a decade and may be responsible for the bottleneck in human evolution: around that time, the total human population suddenly and drastically plummeted to between 1,000 and 10,000 breeding pairs.
Image: USGS – public domain
So, what are the chances of something that massive happening anytime soon? The aforementioned mongers of doom often claim that major eruptions occur at intervals of 600,000 years and point out that the last one was 640,000 years ago. Except that (a) the first interval was about 200,000 years longer, (b) two intervals is not a lot to base a prediction on, and (c) those intervals don't really mean anything anyway. Not in the case of volcanic eruptions, at least.
Earthquakes can be 'overdue' because the stress on fault lines is built up consistently over long periods, which means quakes can be predicted with a relative degree of accuracy. But this is not how volcanoes behave. They do not accumulate magma at constant rates. And the subterranean pressure that causes the magma to erupt does not follow a schedule.
What's more, previous super-eruptions do not necessarily imply future ones. Scientists are not convinced that there ever will be another big eruption at Yellowstone. Smaller eruptions, however, are much likelier. Since the Lava Creek eruption, there have been about 30 smaller outbreaks at Yellowstone, the last lava flow being about 70,000 years ago.
As for the immediate future (give or take a century): the magma chamber beneath Yellowstone is only 5 to 15 percent molten. Most scientists agree that is as un-alarming as it sounds, and that it's statistically more relevant to worry about death by lightning, shark, or piano.
Strange Maps #1041
Got a strange map? Let me know at firstname.lastname@example.org.
The potential of CRISPR technology is incredible, but the threats are too serious to ignore.
- CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is a revolutionary technology that gives scientists the ability to alter DNA. On the one hand, this tool could mean the elimination of certain diseases. On the other, there are concerns (both ethical and practical) about its misuse and the yet-unknown consequences of such experimentation.
"The technique could be misused in horrible ways," says counter-terrorism expert Richard A. Clarke. Clarke lists biological weapons as one of the potential threats, "Threats for which we don't have any known antidote." CRISPR co-inventor, biochemist Jennifer Doudna, echoes the concern, recounting a nightmare involving the technology, eugenics, and a meeting with Adolf Hitler.
- Should this kind of tool even exist? Do the positives outweigh the potential dangers? How could something like this ever be regulated, and should it be? These questions and more are considered by Doudna, Clarke, evolutionary biologist Richard Dawkins, psychologist Steven Pinker, and physician Siddhartha Mukherjee.
Measuring a person's movements and poses, smart clothes could be used for athletic training, rehabilitation, or health-monitoring.
In recent years there have been exciting breakthroughs in wearable technologies, like smartwatches that can monitor your breathing and blood oxygen levels.
But what about a wearable that can detect how you move as you do a physical activity or play a sport, and could potentially even offer feedback on how to improve your technique?
And, as a major bonus, what if the wearable were something you'd actually already be wearing, like a shirt or a pair of socks?
That's the idea behind a new set of MIT-designed clothes that use special fibers to sense a person's movement via touch. Among other things, the researchers showed that their clothes can determine whether someone is sitting, walking, or doing particular poses.
The group from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) says that their clothes could be used for athletic training and rehabilitation. With patients' permission, they could even help passively monitor the health of residents in assisted-care facilities and determine if, for example, someone has fallen or is unconscious.
The researchers have developed a range of prototypes, from socks and gloves to a full vest. The team's "tactile electronics" use a mix of more typical textile fibers alongside a small amount of custom-made functional fibers that sense pressure from the person wearing the garment.
According to CSAIL graduate student Yiyue Luo, a key advantage of the team's design is that, unlike many existing wearable electronics, theirs can be incorporated into traditional large-scale clothing production. The machine-knitted tactile textiles are soft, stretchable, breathable, and can take a wide range of forms.
"Traditionally it's been hard to develop a mass-production wearable that provides high-accuracy data across a large number of sensors," says Luo, lead author on a new paper about the project that is appearing in this month's edition of Nature Electronics. "When you manufacture lots of sensor arrays, some of them will not work and some of them will work worse than others, so we developed a self-correcting mechanism that uses a self-supervised machine learning algorithm to recognize and adjust when certain sensors in the design are off-base."
The team's clothes have a range of capabilities. Their socks predict motion by looking at how different sequences of tactile footprints correlate to different poses as the user transitions from one pose to another. The full-sized vest can also detect the wearer's pose, activity, and the texture of the contacted surfaces.
The authors imagine a coach using the sensor to analyze people's postures and give suggestions on improvement. It could also be used by an experienced athlete to record their posture so that beginners can learn from them. In the long term, they even imagine that robots could be trained to learn how to do different activities using data from the wearables.
"Imagine robots that are no longer tactilely blind, and that have 'skins' that can provide tactile sensing just like we have as humans," says corresponding author Wan Shou, a postdoc at CSAIL. "Clothing with high-resolution tactile sensing opens up a lot of exciting new application areas for researchers to explore in the years to come."
The paper was co-written by MIT professors Antonio Torralba, Wojciech Matusik, and Tomás Palacios, alongside PhD students Yunzhu Li, Pratyusha Sharma, and Beichen Li; postdoc Kui Wu; and research engineer Michael Foshey.
The work was partially funded by Toyota Research Institute.