Researchers were even able to store and read a 767-kilobit full-color short movie file in the fabric.
MIT researchers have created the first fiber with digital capabilities, able to sense, store, analyze, and infer activity after being sewn into a shirt.
Yoel Fink, who is a professor in the departments of materials science and engineering and electrical engineering and computer science, a Research Laboratory of Electronics principal investigator, and the senior author on the study, says digital fibers expand the possibilities for fabrics to uncover the context of hidden patterns in the human body that could be used for physical performance monitoring, medical inference, and early disease detection.
Or, you might someday store your wedding music in the gown you wore on the big day — more on that later.
Fink and his colleagues describe the features of the digital fiber today in Nature Communications. Until now, electronic fibers have been analog — carrying a continuous electrical signal — rather than digital, where discrete bits of information can be encoded and processed in 0s and 1s.
"This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally," Fink says.
MIT PhD student Gabriel Loke and MIT postdoc Tural Khudiyev are the lead authors on the paper. Other co-authors include MIT postdoc Wei Yan; MIT undergraduates Brian Wang, Stephanie Fu, Ioannis Chatziveroglou, Syamantak Payra, Yorai Shaoul, Johnny Fung, and Itamar Chinn; John Joannopoulos, the Francis Wright Davis Chair Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT; Harrisburg University of Science and Technology master's student Pin-Wen Chou; and Rhode Island School of Design Associate Professor Anna Gitelson-Kahn. The fabric work was facilitated by Professor Anais Missakian, who holds the Pevaroff-Cohn Family Endowed Chair in Textiles at RISD.
Memory and more
The new fiber was created by placing hundreds of square silicon microscale digital chips into a preform that was then used to create a polymer fiber. By precisely controlling the polymer flow, the researchers were able to create a fiber with continuous electrical connection between the chips over a length of tens of meters.
The fiber itself is thin and flexible and can be passed through a needle, sewn into fabrics, and washed at least 10 times without breaking down. According to Loke, "When you put it into a shirt, you can't feel it at all. You wouldn't know it was there."
Making a digital fiber "opens up different areas of opportunities and actually solves some of the problems of functional fibers," he says.
For instance, it offers a way to control individual elements within a fiber, from one point at the fiber's end. "You can think of our fiber as a corridor, and the elements are like rooms, and they each have their own unique digital room numbers," Loke explains. The research team devised a digital addressing method that allows them to "switch on" the functionality of one element without turning on all the elements.
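Loke's corridor analogy maps naturally onto an addressed-bus scheme. The sketch below is only a hypothetical illustration of that idea (the chip IDs, command names, and bus model are assumptions, not the team's actual addressing method): every element sees every message, but only the element whose "room number" matches the address acts on it.

```python
# Hypothetical sketch of per-element digital addressing on a shared line.
# The addresses, commands, and bus model are illustrative assumptions,
# not the protocol described in the paper.

class FiberElement:
    def __init__(self, address):
        self.address = address      # the element's unique "room number"
        self.active = False

    def handle(self, message):
        # Every element sees every message, but only the addressed one reacts.
        if message["address"] != self.address:
            return
        if message["command"] == "on":
            self.active = True
        elif message["command"] == "off":
            self.active = False

class FiberBus:
    """A single corridor shared by all elements."""
    def __init__(self, n_elements):
        self.elements = [FiberElement(addr) for addr in range(n_elements)]

    def send(self, address, command):
        for element in self.elements:
            element.handle({"address": address, "command": command})

bus = FiberBus(n_elements=8)
bus.send(address=3, command="on")          # switch on element 3 only
print([e.active for e in bus.elements])
# [False, False, False, True, False, False, False, False]
```

Scaled up, switching on a sensor halfway down a garment becomes a matter of sending the right address along the shared line rather than wiring each element separately.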
A digital fiber can also store a lot of information in memory. The researchers were able to write, store, and read information on the fiber, including a 767-kilobit full-color short movie file and a 0.48 megabyte music file. The files can be stored for two months without power.
When they were dreaming up "crazy ideas" for the fiber, Loke says, they thought about applications like a wedding gown that would store digital wedding music within the weave of its fabric, or even writing the story of the fiber's creation into its components.
Fink notes that the research at MIT was in close collaboration with the textile department at RISD led by Missakian. Gitelson-Kahn incorporated the digital fibers into a knitted garment sleeve, thus paving the way to creating the first digital garment.
On-body artificial intelligence
The fiber also takes a few steps forward into artificial intelligence by including, within the fiber memory, a neural network of 1,650 connections. After sewing it around the armpit of a shirt, the researchers used the fiber to collect 270 minutes of surface body temperature data from a person wearing the shirt, and analyze how these data corresponded to different physical activities. Trained on these data, the fiber was able to determine with 96 percent accuracy what activity the person wearing it was engaged in.
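As a rough picture of this kind of on-fiber inference, the sketch below trains a small classifier on windows of synthetic skin-temperature readings and predicts an activity label. The window length, activity labels, temperature model, and network size are illustrative assumptions; the paper's 1,650-connection network and the real 270 minutes of data are not reproduced here.

```python
# Minimal sketch, assuming synthetic data: classify activities from
# short windows of body-temperature samples with a tiny neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
ACTIVITIES = ["sitting", "walking", "running"]   # illustrative labels
WINDOW = 30                                      # samples per example (assumed)

def synthetic_window(activity):
    # Toy model: each activity gets a different mean temperature and variability.
    base = {"sitting": 33.0, "walking": 34.0, "running": 35.5}[activity]
    noise = {"sitting": 0.1, "walking": 0.3, "running": 0.5}[activity]
    return base + noise * rng.standard_normal(WINDOW)

X = np.array([synthetic_window(a) for a in ACTIVITIES for _ in range(200)])
y = np.array([a for a in ACTIVITIES for _ in range(200)])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([synthetic_window("running")]))   # -> ['running']
```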
Adding an AI component to the fiber further increases its possibilities, the researchers say. Fabrics with digital components can collect a lot of information across the body over time, and these "lush data" are perfect for machine learning algorithms, Loke says.
"This type of fabric could give quantity and quality open-source data for extracting out new body patterns that we did not know about before," he says.
With this analytic power, the fibers someday could sense and alert people in real-time to health changes like a respiratory decline or an irregular heartbeat, or deliver muscle activation or heart rate data to athletes during training.
The fiber is controlled by a small external device, so the next step will be to design a new chip as a microcontroller that can be connected within the fiber itself.
"When we can do that, we can call it a fiber computer," Loke says.
This research was supported by the U.S. Army Institute of Soldier Nanotechnologies, National Science Foundation, the U.S. Army Research Office, the MIT Sea Grant, and the Defense Threat Reduction Agency.
The helicopter's sixth mission almost ended in disaster.
- The Ingenuity Mars Helicopter was out on a photo-taking mission when it started to act strangely.
- It kept changing its speed and tipping back and forth.
- A single error threw its entire navigation system into confusion.
Something went wrong on the Ingenuity Mars Helicopter's sixth flight. Not to worry, though: the copter is fine. The story of what went wrong and why it's okay now reminds us once again just how impressively smart space engineers have to be and usually are.
An image taken by the helicopter during its sixth mission. Credit: NASA / JPL-Caltech
The helicopter was sent aloft to take stereo images of a region of interest. The plan was for it to ascend to a height of ten meters and then travel at a speed of four meters per second for 150 meters to the southwest, capturing images as it flew. Next, it was to travel 15 meters to the south with its camera facing westward, and then finally 50 meters to the northeast where it was to land.
At the end of the mission's first leg, however, telemetry revealed that the helicopter had begun adjusting its velocity and repeatedly tilting backward and forward. It kept on with this strange behavior before successfully landing at the end of the mission's third leg.
How the helicopter knows where it is
Here's how things normally work.
The helicopter's navigation system has two parts. The first is an onboard inertial measurement unit (IMU). This device keeps track of the helicopter's acceleration and rotation. It monitors these aspects of its motion 500 times per second, allowing the craft to estimate where it is, how fast it's traveling, and its attitude. (IMUs also feature prominently in the navigation systems of autonomous cars back here on Earth.)
However, this is just an estimate, and since small errors build up over time, the IMU alone is not enough to keep the helicopter on course. A second system confirms the IMU estimate or alerts the craft that something has gone wrong.
This system involves a downward-pointing camera that takes time-stamped images of the ground beneath the helicopter during most of a flight. It fires each image directly to the craft's navigation system, where:
- The copter makes note of the timestamp to know when the image was captured.
- An algorithm predicts what the image should show, based on the previous image and the time that has elapsed since it was taken. (The system tracks terrain features such as sand ripples and rocks.)
- The algorithm examines the newest image for the predicted features.
- If it doesn't see what it expects (in other words, there's some kind of discontinuity), it corrects its IMU estimates of the craft's position, velocity, and attitude and makes adjustments accordingly.
This all happens incredibly quickly — the down-facing camera takes 30 images per second.
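Stripped of the real control theory, the loop resembles the toy sketch below: dead-reckon with the IMU many times per second, and whenever a camera frame arrives, predict where the ground should appear at that frame's timestamp and nudge the estimate when the prediction misses. Every name and number here is an illustrative assumption, not flight software.

```python
# Toy 1-D sketch of IMU dead reckoning corrected by camera observations.
# Gains, rates, and the "feature position" model are illustrative assumptions.

IMU_RATE = 500.0      # IMU updates per second
CAM_RATE = 30.0       # navigation-camera images per second
GAIN = 0.3            # how strongly a camera mismatch corrects the estimate

def integrate_imu(position, velocity, accel, dt):
    # Dead reckoning: small errors here accumulate over time.
    velocity += accel * dt
    position += velocity * dt
    return position, velocity

def camera_correction(estimated_pos, image_timestamp, ground_truth):
    # Predict where a ground feature should be at this timestamp,
    # compare with what the image actually implies, and correct.
    observed_pos = ground_truth(image_timestamp)
    error = observed_pos - estimated_pos
    return estimated_pos + GAIN * error

# Example: constant 4 m/s flight, with a slightly biased accelerometer.
ground_truth = lambda t: 4.0 * t
pos, vel, t = 0.0, 4.0, 0.0
for step in range(int(IMU_RATE) * 5):                 # five seconds of flight
    pos, vel = integrate_imu(pos, vel, accel=0.02, dt=1 / IMU_RATE)
    t += 1 / IMU_RATE
    if step % int(IMU_RATE / CAM_RATE) == 0:          # a camera frame arrives
        pos = camera_correction(pos, image_timestamp=t, ground_truth=ground_truth)

print(f"estimate after 5 s: {pos:.2f} m, truth: {ground_truth(t):.2f} m")
```

The timestamp is doing real work here: the correction only makes sense if each image is compared against a prediction made for the moment it was actually taken.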
What went wrong
Apparently, for unknown reasons, about 54 seconds into the flight a glitch occurred in the system responsible for transferring the down-facing images to the navigation system, and a single image was lost along the way. That threw off the timestamps of every subsequent image.
For the rest of the flight, the Ingenuity Mars Helicopter was unsure where it was. Its weird behavior was a frantic (not really, it's a machine) attempt to respond as the discrepancy compounded over time.
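The failure mode is easiest to see as a bookkeeping bug. If images and timestamps travel as two parallel streams and one image is dropped, every later image gets paired with the wrong time. The toy snippet below is an assumption about how such a pairing could go wrong, not the actual flight software:

```python
# Toy illustration of the off-by-one pairing after one lost image.
timestamps = [0.00, 0.03, 0.07, 0.10, 0.13]   # when the images were taken
images = ["img0", "img1", "img2", "img3", "img4"]

images_received = images[:2] + images[3:]      # "img2" is lost in transfer
for t, img in zip(timestamps, images_received):
    print(t, img)
# 0.00 img0
# 0.03 img1
# 0.07 img3   <- from here on, every image carries the wrong timestamp
# 0.10 img4
```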
Anticipating such surprises, the designers built into the algorithm a stability margin that allows the craft to remain relatively stable even if it encounters a significant number of errors, as happened here. As the craft's chief pilot, Håvard Grip, puts it: "This built-in margin was not fully needed in Ingenuity's previous flights, because the vehicle's behavior was in-family with our expectations, but this margin came to the rescue in Flight Six."
The system also had one final trick up its sleeve that allowed the confused craft to land safely. When a craft is close to the Martian surface, either landing or taking off, a lot of dust gets kicked up. Because flying dust could create problems for the image-matching algorithm, the craft is programmed to ignore camera images once its altitude is one meter or less.
In this case, that meant that the helicopter set aside the confused image system during landing, relying solely on its IMU. We'll give Grip the final word:
"In a very real sense, Ingenuity muscled through the situation, and while the flight uncovered a timing vulnerability that will now have to be addressed, it also confirmed the robustness of the system in multiple ways."
The unique light signatures of nautical beacons translate into hypnotic cartography.
- Many of the world's 23,000 lighthouses feature a distinct combination of color, frequency, and range.
- These unique light signatures help ships verify their positions and safeguard maritime traffic.
- But they also translate into this map, visualizing the ingenuity and courage of lighthouse builders and keepers.
Land and sea are both shaded dark, so it's a bit hard at first to make out that this collection of merrily blinking lights is actually a map. Once the coastal contours pop, though, all becomes clear: these are lighthouses!
The Age of Big Data
The map not only shows where they are, but how they are: static or blinking in various colors, with the size of each circle corresponding to the range of its light.
Up until the 20th century, a map of lighthouses would have been a subdued affair: just a string of dots strung along lines of coast. But this is the 21st century! We're in the Age of Big Data, ruled by the clever boffins who know how to stitch one dataset to another. Zap it with electricity and presto: it's alive!
That's what the folks did over at Geodienst, the spatial expertise center of the University of Groningen (Netherlands). Back in 2018, student/assistant Jelmer van der Linde (currently with the University of Edinburgh) came across OpenSeaMap, an open-source resource for nautical information similar to its more famous landlubber cousin, OpenStreetMap.
OpenSeaMap contained a database with detailed information on nautical beacons and lighthouses, which included not just their location, but also the frequency, range, and even the color of their signals. Would it be possible to visualize all those data points on a map? Yes, it would!
The result is this riot of a map. It's important that ships don't mistake one lighthouse for another. That's why they come in various colors and their lights flicker with a distinct frequency. Norway in particular is lit up with beacons and lighthouses, as its fjord-indented coast warrants. And the rest of Europe is well provided with nautical warning lights.
However, while the map is reminiscent of other global traffic trackers for flights (like Flightradar24 or FlightAware) or shipping (such as VesselFinder or MarineTraffic), it is neither live nor global. The flickering lights aren't a real-time report; they merely repeat the code in the original database. And that database is incomplete.
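Repeating that code is a small computation. Given a light's characteristic (say, three flashes every ten seconds), you only need a function that says whether the light is on at a given moment. The sketch below invents a simplified record format for illustration; OpenSeaMap's actual schema is richer and also encodes color and range.

```python
# Minimal sketch: decide whether a simplified lighthouse record is "on" at time t.
# The record format is an illustrative simplification, not OpenSeaMap's schema.

def is_lit(light, t):
    """light: {'flashes': 3, 'period': 10.0, 'flash_len': 0.8} -> bool"""
    if light["flashes"] == 0:            # fixed (non-flashing) light
        return True
    phase = t % light["period"]          # position within the repeating cycle
    for i in range(light["flashes"]):
        start = i * 2 * light["flash_len"]
        if start <= phase < start + light["flash_len"]:
            return True
    return False

# "Fl(3) 10s": three flashes every ten seconds
light = {"flashes": 3, "period": 10.0, "flash_len": 0.8}
print([is_lit(light, t / 2) for t in range(20)])   # sample at 0.5 s steps
```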
Zoom out, and the map gets a bit too dark. According to the Lighthouse Directory, there are at least 23,000 lighthouses in the world. And even though the United States has more lighthouses than any other nation – 700 by some counts – the map only shows a handful of lights in North America.
Like its parent, the lighthouse map is open source too, so if anyone out there is capable of filling in the gaps, they can. Lighthouse enthusiasts, get to it!
Not one yet yourself? Below are 10 lighthouse facts to help you come over to the light side.
Trapped in a giant phallus and other true facts about lighthouses
- The world's smallest lighthouse is the North Queensferry Light Tower, near the Forth Bridge in Scotland. A mere 16 feet (5 m) tall, it was built in 1817 by Robert Stevenson, the famed lighthouse engineer. His son Thomas followed him into the trade and was the father of the novelist Robert Louis Stevenson.
- Reaching a height of 436 ft (133 m), Jeddah Light in Saudi Arabia is the world's tallest lighthouse.
- The 2019 movie The Lighthouse, starring Willem Dafoe and Robert Pattinson, was based on a true incident, known as the Smalls Lighthouse Tragedy. In 1801, a storm trapped two Welsh lighthouse keepers, both named Thomas, in their lighthouse. One died, the other went mad. Asked to summarize his film, writer/director Robert Eggers said, "Nothing good can happen when two men are trapped alone in a giant phallus."
- From its inauguration in 1886 until 1901, the Statue of Liberty also served as a lighthouse. Its nine electric arc lamps, located in the torch, could be seen 24 miles out to sea.
- All U.S. lighthouses are now automated – save for Boston Light, the oldest continually used lighthouse in the country. For historical reasons, Congress has decided it shall remain staffed year-round.
- Hook Lighthouse, on Hook Head in Ireland's County Wexford, claims to be the world's oldest lighthouse still in use. It was first built by a medieval lord in the early decades of the 13th century.
- The Tower of Hercules in La Coruña, Spain, has a slightly better claim. It was built by the Romans in the 1st century AD and still functions as a lighthouse.
- Stannard Rock Lighthouse is also known as "the loneliest place in the world." It is located in Lake Superior, Michigan. At 24 miles (39 km) from shore, it is the most remote lighthouse in the U.S. and one of the most remote in the world. It opened in 1883 and was staffed for parts of the year until 1962.
- A lighthouse on Märket is the reason for the weird border on the island, divided between Sweden and Finland. In 1885, the Finns built a lighthouse on the highest part of the island – on the Swedish half. Thanks to a complicated land swap, the lighthouse is back on the Finnish side.
- In the United States, August 7 is National Lighthouse Day.
Strange Maps #1082
Many thanks to Toon Wassenberg for sending in this map. Got a strange map? Let me know at firstname.lastname@example.org.
The EmDrive turns out to be the "um..." drive after all, as a new study dubs any previous encouraging EmDrive results "false positives."
- The proposed EmDrive captured the public's imagination with the promise of super-fast space travel that broke the laws of physics.
- Some researchers have detected thrusts from the EmDrive that seemed to prove its validity as a technology.
- A new, authoritative study says, no, those results were just "false positives."
Now it seems that, yep, it was too good to be true. Scientists at Dresden University of Technology (TU Dresden) appear to have conclusively proven that the EmDrive does not, in fact, produce any thrust. They provide some compelling evidence that small indications of thrust in previous research were simply false positives produced by outside forces.
How the EmDrive is supposed to work
In the EmDrive, says the company that owns rights to the invention, "Thrust is produced by the amplification of the radiation pressure of an electromagnetic wave propagated through a resonant waveguide assembly." In simpler words, trapped microwaves bounce around a specially shaped enclosed container, producing thrust that pushes the whole thing forward.
The company also asserts that while the EmDrive is not exactly on speaking terms with Newton's Third Law, it is perfectly in line with the second one:
"This relies on Newton's Second Law where force is defined as the rate of change of momentum. Thus, an electromagnetic (EM) wave, traveling at the speed of light has a certain momentum which it will transfer to a reflector, resulting in a tiny force."
Interest in the EmDrive has been understandable considering what it was supposed to do. Speaking to Popular Mechanics last year, Mike McCulloch, the leader of DARPA's EmDrive investigation, described how the engine could "transform space travel and see craft lifting silently off from launchpads and reaching beyond the solar system." He mentioned his excitement at being able to get from here to Proxima Centauri — 4.2465 light years away — in just 90 human years.
It doesn't work. Yes it does. No, it doesn't.
NASA Eagleworks' EmDrive. Credit: NASA/Wikimedia Commons
DARPA, part of the U.S. Department of Defense, is only one of the organizations investigating the claims made for the EmDrive. In 2018 the agency invested $1.3 million to study the device in research that will be wrapping up this May barring any significant last-minute breakthroughs.
Teams from all over the world have been testing inventor Roger Shawyer's idea since it was introduced and releasing often contradictory test results. This may have to do with the fact that teams detecting any EmDrive thrust at all have reported vanishingly small amounts of it, measured in millinewtons (mN). One mN equals about 0.00022 pounds of force.
"Ever since the introduction of the EmDrive concept in 2001, every few years a group claims to have measured a net force coming from its device. But these researchers are measuring an incredibly tiny effect: a force so small it couldn't even budge a piece of paper. This leads to significant statistical uncertainty and measurement error."
For a sense of how minuscule these results are, consider that the possible thrust force reported by NASA in 2014 of 30-50 micronewtons is roughly equivalent to the weight of a big ant. Chinese researchers have claimed detection of 720 mN in their tests. That would be about 73 grams of thrust. An iPhone 11 with a case weighs 219 grams.
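Those comparisons are straightforward unit conversions, reproduced below with the reported thrusts treated as weights under Earth gravity.

```python
# Converting reported thrusts into more tangible units (Earth gravity assumed).
g = 9.81                                    # m/s^2

print(0.001 / 4.448)                        # 1 mN in pounds-force  -> ~0.00022
print(40e-6 / g * 1000, "grams")            # ~40 uN -> ~0.004 g, a large ant
print(0.720 / g * 1000, "grams")            # 720 mN -> ~73 g, a third of an iPhone 11
```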
Too small to stand out against background noise
These tiny amounts of EmDrive thrust lie at the heart of what the TU Dresden researchers are saying: the measured effects are simply too small to rule out forces that don't come from the EmDrive at all. The researchers have just published three papers; the title of one of them, "High-Accuracy Thrust Measurements of the EmDrive and Elimination of False-Positive Effects," tells the story.
When the TU Dresden team powered up their own EmDrive, a replica based on NASA's design, they too witnessed tiny amounts of apparent thrust.
However, Martin Tajmar of TU Dresden told the German media outlet GreWi that they soon realized what was going on: "When power flows into the EmDrive, the engine warms up. This also causes the fastening elements on the scale to warp, causing the scale to move to a new zero point. We were able to prevent that in an improved structure."
Putting the kibosh on other researchers' results, the authors of the studies write:
"Using a geometry and operating conditions close to the model by White et al. that reported positive results published in the peer-reviewed literature, we found no thrust values within a wide frequency band including several resonance frequencies. Our data limits any anomalous thrust to below the force equivalent from classical radiation for a given amount of power. This provides strong limits to all proposed theories and rules out previous test results by more than three orders of magnitude."
This would seem to be the definitive end of the EmDrive story.
The bird demonstrates cutting-edge technology for devising self-folding nanoscale robots.
Cornell University has just announced what may be the smallest origami bird ever folded. While a typical origami animal is the product of an artist's dexterous hands, the Cornell bird was folded by the strategic application of small electrical voltages. It had to be: the material from which the bird is made is just 30 atoms thick.
Creative expression isn't the point of the university's little avian — its construction previews principles and techniques that will lead to new generations of moving, nano-scaled robots that "can enable smart material design and interaction with the molecular biological world," says Dean Culver of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory, which supported the research.
According to Cornell's Paul McEuen, "We humans, our defining characteristic is we've learned how to build complex systems and machines at human scales, and at enormous scales as well. But what we haven't learned how to do is build machines at tiny scales. And this is a step in that basic, fundamental evolution in what humans can do, of learning how to construct machines that are as small as cells."
The lead author of the paper describing the tiny bird is postdoctoral researcher Qingkun Liu. The paper, "Micrometer-Sized Electrically Programmable Shape Memory Actuators for Low-Power Microrobotics," is the cover story of the March 17 issue of the journal Science Robotics.
A minuscule swarm of helpers
The project is the result of a collaboration between physical scientist McEuen and physicist Itai Cohen, both of Cornell's College of Arts and Sciences. It has already resulted in a (very) small herd of nanoscale machines and devices.
Cohen explains, "We want to have robots that are microscopic but have brains on board. So that means you need to have appendages that are driven by complementary metal-oxide-semiconductor (CMOS) transistors, basically a computer chip on a robot that's 100 microns on a side."
The idea is that these minuscule workhorses (a metaphor; no nanoscale origami horses exist yet) are released from a wafer, fold themselves into the desired form factor, and then go about their business. Additional folding would let them move as they work, changing shape to flex their limbs and manipulate microscopic objects. The researchers anticipate that these nanobots will eventually achieve functionality similar to that of their larger brethren.
How a tiny robot is made and works
The project combines materials science with chemistry, since the folding is achieved with the strategic deployment of electrochemical reactions. Liu explains, "At this small scale, it's not like traditional mechanical engineering, but rather chemistry, material science, and mechanical engineering all mixed together."
"The hard part," says Cohen, "is making the materials that respond to the CMOS circuits. And this is what Qingkun and his colleagues have done with this shape memory actuator that you can drive with voltage and make it hold a bent shape."
The bots are constructed from a nanometer-thick platinum layer that's coated with a titanium oxide film. Rigid panels of silicon oxide glass are affixed to the platinum. A positive voltage creates oxidation, forcing oxygen atoms into the platinum seams between the glass panels, and forcing platinum atoms out. This causes the platinum to expand, which bends the entire glass-platinum structure to a desired angle.
Because the oxygen atoms collect to form a barrier, a bend is retained even after the charge is switched off. To undo a fold, a negative charge can be applied that removes the oxygen atoms from the seam, allowing it to relax and unbend.
This all happens very quickly — a machine can fold itself within just 100 milliseconds. The process is also repeatable. The team reports that a bot can flatten and refold itself thousands of times, and all it takes is a single volt of electricity.
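In behavior, each hinge acts like a tiny nonvolatile latch: a positive pulse writes a bend, the bend persists with no power applied, and a negative pulse erases it. The toy model below captures only that logic; the class name and thresholds are illustrative, not taken from the paper.

```python
# Toy model of the electrochemical hinge: voltage pulses set or clear a bend
# that persists without power. Names and thresholds are illustrative.

class Hinge:
    def __init__(self):
        self.bent = False            # oxygen in the seam -> panel held at an angle

    def pulse(self, volts):
        if volts > 0:                # positive bias oxidizes the platinum: fold
            self.bent = True
        elif volts < 0:              # negative bias removes the oxygen: unfold
            self.bent = False
        # volts == 0: nothing changes; the fold is retained without power

hinge = Hinge()
hinge.pulse(+1.0); print(hinge.bent)   # True  (folded)
hinge.pulse(0.0);  print(hinge.bent)   # True  (fold retained, no power)
hinge.pulse(-1.0); print(hinge.bent)   # False (unfolded)
```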
Artistry after all
None of this really removes what one might consider the artistry. Working out how and where to apply voltages to effect the desired shape is not a simple thing to do. McEuen says, "One thing that's quite remarkable is that these little tiny layers are only about 30 atoms thick, compared to a sheet of paper, which might be 100,000 atoms thick. So it's an enormous engineering challenge to figure out how to make something like that have the kind of functionalities we want."
Still, the group is getting quite good at microscopic robotics, and has already been awarded the Guinness World Record for assembling the smallest-ever walking robot. The little 4-legged dude is 40 microns wide and between 40 and 70 microns long. They're angling for a new record with their 60-micron-wide origami bird.
Says Cohen, "These are major advances over current state-of-the-art devices. We're really in a class of our own."