Researchers were even able to store and read a 767-kilobit full-color short movie file in the fabric.
MIT researchers have created the first fiber with digital capabilities, able to sense, store, analyze, and infer activity after being sewn into a shirt.
Yoel Fink, who is a professor in the departments of materials science and engineering and electrical engineering and computer science, a Research Laboratory of Electronics principal investigator, and the senior author on the study, says digital fibers expand the possibilities for fabrics to uncover the context of hidden patterns in the human body that could be used for physical performance monitoring, medical inference, and early disease detection.
Or, you might someday store your wedding music in the gown you wore on the big day — more on that later.
Fink and his colleagues describe the features of the digital fiber today in Nature Communications. Until now, electronic fibers have been analog — carrying a continuous electrical signal — rather than digital, where discrete bits of information can be encoded and processed in 0s and 1s.
"This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally," Fink says.
MIT PhD student Gabriel Loke and MIT postdoc Tural Khudiyev are the lead authors on the paper. Other co-authors include MIT postdoc Wei Yan; MIT undergraduates Brian Wang, Stephanie Fu, Ioannis Chatziveroglou, Syamantak Payra, Yorai Shaoul, Johnny Fung, and Itamar Chinn; John Joannopoulos, the Francis Wright Davis Chair Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT; Harrisburg University of Science and Technology master's student Pin-Wen Chou; and Rhode Island School of Design Associate Professor Anna Gitelson-Kahn. The fabric work was facilitated by Professor Anais Missakian, who holds the Pevaroff-Cohn Family Endowed Chair in Textiles at RISD.
Memory and more
The new fiber was created by placing hundreds of square silicon microscale digital chips into a preform that was then used to create a polymer fiber. By precisely controlling the polymer flow, the researchers were able to create a fiber with continuous electrical connection between the chips over a length of tens of meters.
The fiber itself is thin and flexible and can be passed through a needle, sewn into fabrics, and washed at least 10 times without breaking down. According to Loke, "When you put it into a shirt, you can't feel it at all. You wouldn't know it was there."
Making a digital fiber "opens up different areas of opportunities and actually solves some of the problems of functional fibers," he says.
For instance, it offers a way to control individual elements within a fiber, from one point at the fiber's end. "You can think of our fiber as a corridor, and the elements are like rooms, and they each have their own unique digital room numbers," Loke explains. The research team devised a digital addressing method that allows them to "switch on" the functionality of one element without turning on all the elements.
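The "corridor and rooms" addressing scheme can be sketched in a few lines of Python. This is a toy model for illustration only: the class names, element IDs, and message format are invented here, not the team's actual fiber protocol.

```python
# Hypothetical sketch of per-element digital addressing along a fiber.
# All element names and the message format are illustrative.

class FiberElement:
    def __init__(self, address):
        self.address = address   # the element's unique "room number"
        self.active = False

    def on_message(self, address, command):
        # Every element sees every message on the shared line,
        # but only the addressed "room" responds.
        if address == self.address:
            self.active = (command == "on")

class DigitalFiber:
    def __init__(self, n_elements):
        self.elements = [FiberElement(i) for i in range(n_elements)]

    def broadcast(self, address, command):
        # One shared electrical corridor: all elements hear the same bits.
        for element in self.elements:
            element.on_message(address, command)

fiber = DigitalFiber(8)
fiber.broadcast(3, "on")   # switch on element 3; all others stay off
print([e.active for e in fiber.elements])
```

The key idea is that a single shared line carries every message, and the address field decides which element acts on it, so one chip can be switched on without waking the rest.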
A digital fiber can also store a lot of information in memory. The researchers were able to write, store, and read information on the fiber, including a 767-kilobit full-color short movie file and a 0.48 megabyte music file. The files can be stored for two months without power.
When they were dreaming up "crazy ideas" for the fiber, Loke says, they thought about applications like a wedding gown that would store digital wedding music within the weave of its fabric, or even writing the story of the fiber's creation into its components.
Fink notes that the research at MIT was in close collaboration with the textile department at RISD led by Missakian. Gitelson-Kahn incorporated the digital fibers into a knitted garment sleeve, thus paving the way to creating the first digital garment.
On-body artificial intelligence
The fiber also takes a few steps forward into artificial intelligence by including, within the fiber memory, a neural network of 1,650 connections. After sewing it around the armpit of a shirt, the researchers used the fiber to collect 270 minutes of surface body temperature data from a person wearing the shirt, and analyze how these data corresponded to different physical activities. Trained on these data, the fiber was able to determine with 96 percent accuracy what activity the person wearing it was engaged in.
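The pattern of training on labeled temperature traces and then classifying new ones can be sketched with a deliberately simple stand-in. Everything below is invented for illustration: the activities, the synthetic temperature windows, and the nearest-centroid rule, which stands in for the fiber's actual 1,650-connection neural network.

```python
# Toy sketch of inferring activity from surface-temperature windows.
# Data and method are illustrative; the real fiber stores a trained
# neural network in its memory.

# Synthetic "surface temperature" windows (degrees C) per activity.
training = {
    "sitting": [[33.1, 33.2, 33.1], [33.0, 33.1, 33.2]],
    "walking": [[34.0, 34.3, 34.5], [34.1, 34.4, 34.6]],
    "running": [[35.2, 35.8, 36.1], [35.0, 35.7, 36.0]],
}

def centroid(windows):
    # Mean temperature profile across a class's training windows.
    n = len(windows)
    return [sum(col) / n for col in zip(*windows)]

centroids = {label: centroid(ws) for label, ws in training.items()}

def classify(window):
    # Nearest-centroid: pick the activity whose mean profile is closest.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(window, centroids[label]))

print(classify([35.1, 35.6, 36.0]))  # closest to the "running" profile
```

The real system replaces the centroid rule with a trained neural network stored in the fiber's own memory, but the pipeline shape (collect labeled windows, fit a model, classify new windows) is the same.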
Adding an AI component to the fiber further increases its possibilities, the researchers say. Fabrics with digital components can collect a lot of information across the body over time, and these "lush data" are perfect for machine learning algorithms, Loke says.
"This type of fabric could give quantity and quality open-source data for extracting out new body patterns that we did not know about before," he says.
With this analytic power, the fibers someday could sense and alert people in real-time to health changes like a respiratory decline or an irregular heartbeat, or deliver muscle activation or heart rate data to athletes during training.
The fiber is controlled by a small external device, so the next step will be to design a new chip as a microcontroller that can be connected within the fiber itself.
"When we can do that, we can call it a fiber computer," Loke says.
This research was supported by the U.S. Army Institute of Soldier Nanotechnologies, National Science Foundation, the U.S. Army Research Office, the MIT Sea Grant, and the Defense Threat Reduction Agency.
As droughts threaten water supplies across the planet, some municipalities aim to utilize an untapped resource: sewage water.
- Water recycling, or water reclamation, involves cleaning water with filters and chemicals to make it environmentally safe.
- In Texas, El Paso's water utility is taking this a step further by building a closed-loop system that will directly convert sewage water into drinkable water.
- Unsurprisingly, surveys show that most people don't like the idea of drinking recycled water, but public outreach programs seem able to change minds.
Of all the projects aiming to make the world more sustainable, none is less appealing than toilet to tap, a water recycling process where wastewater is converted into potable water.
But despite the gross-out factor, a handful of governments have already invested in the technology, including those in Singapore, South Africa, Belgium, California, and Texas. Soon, others may have few other options. El Paso is leading the way.
Depletion of resources and climate change are threatening to dry up parts of the global water supply. By the late 21st century, the number of people impacted by extreme droughts is projected to double, a shortage that would not only affect the health of millions of people but also potentially create catastrophic socioeconomic problems and geopolitical conflicts.
The U.S. is already feeling the heat. In May, California declared a drought emergency in 39 counties. It wasn't really a shock to the state, which has endured severe droughts throughout its history, including a historic five-year drought from 2012 to 2016. The U.S. Forest Service has warned that droughts like these could render half of the nation's freshwater basins unable to consistently meet monthly water demand by 2071.
The causes are twofold. One is a growing population that will demand more water. The other is that global warming is evaporating more water from soil, lakes, reservoirs, and rivers, while climate change alters patterns of precipitation and snowmelt, which feed the rivers and lakes from which we get much of our drinking water.
Facing a dry future, some municipalities have accepted the crappy-sounding reality: Converting sewage water into drinking water through water recycling may be the best way to prevent a crisis.
The average adult flushes about 320 pounds of poop down the toilet every year. Where does it all go?
When you flush your toilet, the water swirls through a U-shaped pipe, called a trap, that prevents sewage gases from entering your home. That toilet water — along with other wastewater from your sinks, washer, and shower — flows into a sewer line, which is connected to the buildings and homes in the immediate area. These sewer lines can be big. In New York City, for example, combined sewer lines can span more than 12 feet wide, enough space for a subway car.
These pipes carry wastewater to municipal water treatment plants for cleaning. In the U.S., the water treating process typically involves steps like:
- Odor control: Chemicals help mute foul odors.
- Screening: Wastewater is moved through screens to separate larger solids and trash.
- Primary treatment: Water sits in large tanks, allowing heavier solids to settle to the bottom while grease and scum rise to the surface, where they are skimmed off and disposed of.
- Aeration: Water is stirred to release gases, and air is pumped through the water to allow bacteria to act on organic matter, which helps it decay.
- Sludge removal: Solid material settles to the bottom and is removed.
- More filtration: Water is filtered through sand to reduce bacteria, odors, iron, and other solids.
- "Digest" the solid material: Solid material is heated to break it down to nutrient-rich biosolids and methane gas.
- Disinfection: Water is treated with chlorine to kill bacteria.
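The steps above amount to a fixed pipeline, which can be sketched as a chain of stage functions. The stage names mirror the list; the "water" record and the reduction numbers are purely illustrative, not real treatment figures.

```python
# Illustrative sketch of the treatment steps as a pipeline of stages.
# Contaminant fields and reduction factors are invented for clarity.

def screen(water):       water["trash"] = 0; return water
def settle(water):       water["settleable_solids"] = 0; return water
def aerate(water):       water["organic_matter"] *= 0.1; return water
def sand_filter(water):  water["bacteria"] *= 0.05; return water
def disinfect(water):    water["bacteria"] = 0; return water

def treat(water):
    # Each stage removes or reduces one class of contaminant, in order.
    for stage in (screen, settle, aerate, sand_filter, disinfect):
        water = stage(water)
    return water

raw = {"trash": 5, "settleable_solids": 8,
       "organic_matter": 100.0, "bacteria": 1e6}
print(treat(raw))
```

The point of the sketch is the ordering: coarse physical separation first, then biological breakdown, then filtration, then chemical disinfection at the end.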
After wastewater is treated and deemed clean enough for the environment, it's used for crop irrigation, or it's discharged back into streams, rivers, and lakes. But some municipalities take water reclamation several steps further, purifying wastewater to the point where it's safe to drink.
Today, drinking water in places like Northern Virginia, Phoenix, and Southern California is, at least in part, reclaimed wastewater. But in some parts of the U.S., climate change poses such a severe threat to the water supply that more drastic measures are required.
A closed-loop water recycling system
El Paso, Texas, is an exceptionally dry city. Located in the Chihuahuan Desert where only nine inches of rain falls per year, it's drier than some parts of sub-Saharan Africa. The city has historically received half of its water supply from the Rio Grande, but the river has been steadily drying up, forcing officials to turn to other solutions, like building the nation's largest inland desalination plant and establishing incentives that encourage residents to use less water.
In recent years, El Paso has been working on what officials call the next logical step: Creating a closed-loop water recycling system that purifies wastewater and sends it right back into the drinking water supply.
El Paso and other U.S. cities already clean wastewater and pump it back into the aquifer, an underground layer of rock. But while this water reclamation process is environmentally safe, it can take years for the recycled water to make its way back into the drinking supply. A closed-loop system would speed things up.
The process will begin at El Paso's conventional water treatment facility, which cleans water according to long-established standards. But then the water will be piped nearby to the city's Advanced Water Purification Facility to undergo several additional cleaning steps:
- Water is filtered through thin sheets of material that remove salts, viruses, and contaminants, in a process known as reverse osmosis.
- Water is treated with hydrogen peroxide and UV light, both of which deactivate or destroy pathogens.
- Finally, the water is passed through granular-activated carbon that's been superheated to help trap any remaining particles.
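The three advanced stages can be sketched as a chain that tracks a hypothetical pathogen count. The reduction factors below are invented for illustration and are not El Paso Water's actual performance figures.

```python
# Sketch of the three advanced purification stages in order, tracking a
# hypothetical pathogen count. Reduction factors are illustrative only.

def reverse_osmosis(pathogens):
    return pathogens / 10_000      # membranes reject salts, viruses, contaminants

def uv_peroxide(pathogens):
    return pathogens / 1_000_000   # H2O2 + UV deactivates or destroys survivors

def activated_carbon(pathogens):
    return pathogens / 10          # superheated GAC traps remaining particles

count = 1e9
for stage in (reverse_osmosis, uv_peroxide, activated_carbon):
    count = stage(count)
print(count)  # drops far below one surviving pathogen
```

Chaining the stages is what makes the design robust: each barrier multiplies the reduction of the one before it, so even a partial failure in one stage leaves the overall count extremely low.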
As El Paso's reclaimed water goes through these additional purification stages, technicians at El Paso Water will monitor the water in real-time to ensure it meets safety standards.
"The water we're going to produce out of the Advanced Water Purification plant is the safest water that could be produced through treatment processes these days," Gilbert Trejo, EPWater's chief technician officer, told Freethink.
Freethink recently visited El Paso Water to get an up-close look at what is set to be the first closed-loop water recycling system in a major U.S. city. (See video above.)
In addition to producing cleaner water, water recycling facilities like El Paso's would also be cheaper and more practical than solutions like desalination. After all, not every city is close to the ocean, and even those that are have to pay to transport saltwater to treatment plants. But practical benefits aside, toilet to tap is tough to sell to the public.
Clean but spiritually contaminated?
The prospect of drinking recycled water unsurprisingly elicits a disgust response in many people, some more so than others. A 2015 survey of more than 2,000 U.S. residents across the nation found that: "Approximately 13% of our adult American sample definitely refuses to try recycled water, while 49% are willing to try it, with 38% uncertain," the researchers wrote. "Both disgust and contamination sensitivity predict resistance to consumption of recycled water."
For a minority of people, it seems no amount of purification through technical means will render the water potable. That's because of "spiritual contagion," which the researchers said is "conceived of in terms of the entrance into the target of some spiritual essence which does not resemble standard physical entities. It does not respond to washing, boiling or filtering, but remains as a permanent essence."
But even though water reclamation is generally unpopular, and some people may always resist it, research suggests that people become more accepting of water recycling as they learn more about the process.
That's why El Paso has aimed to be transparent and proactive in explaining the process to residents through public outreach programs. In 2016, nearly 90 percent of El Pasoans supported the idea of producing more drinking water through the city's Advanced Water Purification Facility.
Trejo said it's about establishing trust with residents:
"I think it's very exciting for El Pasoans to know that what we're doing here in El Paso is going to change the water industry. The engineering community and the water community knows and understands that these treatment processes treat the water and produce a very high-quality water. It's a matter of which community is going to be the first one to have absolute trust in their water utility, and in the water, and that's what we're about to do here in El Paso.
It uses radio waves to pinpoint items, even when they're hidden from view.
"Researchers have been giving robots human-like perception," says MIT Associate Professor Fadel Adib. In a new paper, Adib's team is pushing the technology a step further. "We're trying to give robots superhuman perception," he says.
The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper's lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That's in part because robots struggle to locate and grasp objects in such a crowded environment. "Perception and picking are two roadblocks in the industry today," says Rodriguez. Using optical vision alone, robots can't perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don't pass through walls.
But radio waves can.
For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.
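The reader-and-tag exchange can be sketched as backscatter in miniature: the tag has no transmitter of its own, it just modulates the reader's carrier with its ID bits. The encoding below (flipping the carrier's sign per bit) is a simplification invented for illustration, not a real RFID air-interface protocol.

```python
# Minimal sketch of RFID backscatter: the tag modulates the reader's
# carrier with its ID bits. The encoding is invented for illustration.

class Tag:
    def __init__(self, tag_id):
        self.bits = format(tag_id, "08b")   # 8-bit ID, e.g. 42 -> "00101010"

    def reflect(self, carrier):
        # Modulate the carrier: flip its sign for each '1' bit.
        return [carrier if b == "0" else -carrier for b in self.bits]

class Reader:
    def read(self, tag, carrier=1.0):
        # Demodulate the echo back into bits, then into the tag's ID.
        echo = tag.reflect(carrier)
        bits = "".join("0" if sample > 0 else "1" for sample in echo)
        return int(bits, 2)

reader = Reader()
print(reader.read(Tag(42)))  # the reader recovers the ID: 42
```

Real systems layer anti-collision protocols and error coding on top, but the core loop is the same: emit a carrier, read the tag's modulation, decode the identity.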
"RF is such a different sensing modality than vision," says Rodriguez. "It would be a mistake not to explore what RF can do."
RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they're fully blocked from the camera's view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot's wrist. The RF reader stands independent of the robot and relays tracking information to the robot's control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot's decision making was one of the biggest challenges the researchers faced.
"The robot has to decide, at each point in time, which of these streams is more important to think about," says Boroushaki. "It's not just eye-hand coordination, it's RF-eye-hand coordination. So, the problem gets very complicated."
The robot initiates the seek-and-pluck process by pinging the target object's RF tag for a sense of its whereabouts. "It starts by using RF to focus the attention of vision," says Adib. "Then you use vision to navigate fine maneuvers." The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren's source.
With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot's decision making.
RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to "declutter" its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp's "unfair advantage" over robots without penetrative RF sensing. "It has this guidance that other systems simply don't have."
RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item's identity without the need to manipulate the item, expose its barcode, then scan it. "RF has the potential to improve some of those limitations in industry, especially in perception and localization," says Rodriguez.
Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. "Or you could imagine the robot finding lost items. It's like a super-Roomba that goes and retrieves my keys, wherever the heck I put them."
The research is sponsored by the National Science Foundation, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).
A machine learning system lets visitors at a Kandinsky exhibition hear the artwork.
Have you ever heard colors?
As part of a new exhibition, the worlds of culture and technology collide, bringing sound to the colors of abstract art pioneer Wassily Kandinsky.
Kandinsky had synesthesia, where looking at colors and shapes causes some with the condition to hear associated sounds. With the help of machine learning, virtual visitors to the Sounds Like Kandinsky exhibition, a partnership project by Centre Pompidou in Paris and Google Arts & Culture, can have an aural experience of his art.
An eye for music
Kandinsky's synesthesia is thought to have heavily influenced his painting. Seeing yellow summoned up trumpets, evoking emotions like cheekiness; reds produced violins, portraying restlessness; and blues he associated with organs, representing heavenliness, according to the exhibition notes.
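A color-to-sound mapping in the spirit of the exhibition can be sketched as a simple lookup. The color-instrument-mood pairings follow the exhibition notes quoted above; the function and its fallback behavior are invented for illustration.

```python
# Illustrative color-to-sound mapping inspired by the exhibition notes.
# The pairings follow the article; the lookup itself is invented.

PALETTE = {
    "yellow": ("trumpet", "cheekiness"),
    "red":    ("violin", "restlessness"),
    "blue":   ("organ", "heavenliness"),
}

def sonify(color):
    # Unknown colors fall back to silence rather than failing.
    instrument, mood = PALETTE.get(color, ("silence", "neutral"))
    return f"{color} -> {instrument} ({mood})"

print(sonify("yellow"))
print(sonify("blue"))
```

A real tool like Play a Kandinsky works from the painting's pixels rather than color names, but the mapping step at its heart has this shape.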
Virtual visitors are invited to take part in an experiment called Play a Kandinsky, which allows them to see and hear the world through the artist's eyes.
Kandinsky's synesthesia is thought to have heavily influenced his 1925 painting "Yellow, Red, Blue." (Image: Guillaume Piolle/Wikimedia Commons)
In 1925, the artist's masterpiece, "Yellow, Red, Blue", broke new ground in the world of abstract art, guiding the viewer from left to right with shifting shapes and shades. Almost a century after it was painted, Google's interactive tool lets visitors click different parts of the artwork to journey through the artist's description of the colors, associated sounds and moods that inspired the work.
But Google's new toy is not the only tool developed to enhance the artistic experience.
Artist Neil Harbisson has developed an artificial way to emulate Kandinsky by turning colors into sounds. He has a rare form of color blindness and sees the world in greyscale. But a smart antenna attached to his head translates dominant colors into musical notes, creating a real-world soundtrack of what's in front of him. The invention could open up a new world for people who are color blind.
A new method could make holograms for virtual reality, 3D printing, and more. It can even run on a smartphone.
Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing.
One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.
Holograms deliver an exceptional representation of the 3D world around us. Plus, they're beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer's position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.
Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.
"People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations," says Liang Shi, the study's lead author and a PhD student in MIT's Department of Electrical Engineering and Computer Science (EECS). "It's often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades."
Shi believes the new approach, which the team calls "tensor holography," will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing.
Shi worked on the study, published today in Nature, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).
The quest for better 3D
A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene's colors, but it ultimately yields a flat image.
In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene's parallax and depth. So, while a photograph of Monet's "Water Lilies" can highlight the painting's color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.
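The brightness-versus-phase distinction is exactly the difference between a real number and a complex one. The sketch below illustrates this with a single light-wave sample; the specific amplitude and phase values are arbitrary.

```python
# A photo stores only brightness; a hologram stores brightness and phase.
# In code, a hologram sample is naturally a complex number.

import cmath

amplitude = 0.8          # brightness of the light wave
phase = cmath.pi / 3     # phase, which carries the depth information

# One complex-valued hologram sample: amplitude * e^(i * phase).
sample = amplitude * cmath.exp(1j * phase)

photo_value = abs(sample) ** 2         # a camera records only this
recovered_phase = cmath.phase(sample)  # a hologram preserves this too

print(round(photo_value, 3), round(recovered_phase, 3))
```

Squaring the magnitude throws the phase away, which is why an ordinary photograph is irreversibly flat: the depth cue lives in the part the sensor never records.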
First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves' phase. This reference generates a hologram's unique sense of depth. The resulting images were static, so they couldn't capture motion. And they were hard copy only, making them difficult to reproduce and share.
Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. "Because each point in the scene has a different depth, you can't apply the same operations for all of them," says Shi. "That increases the complexity significantly." Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don't model occlusion with photorealistic precision. So Shi's team took a different approach: letting the computer teach physics to itself.
They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn't previously exist for 3D holograms.
The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.
By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations. The efficiency surprised even the team.
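The supervised setup can be sketched in miniature: learn a mapping from inputs to targets by nudging parameters against each example pair. A single scalar weight stands in for the tensor network, and the example pairs are invented; only the shape of the training loop reflects the paper's approach.

```python
# Toy sketch of learning from (input, target) pairs by gradient descent.
# One scalar weight stands in for the tensor network; data is invented.

pairs = [((1.0,), (2.0,)), ((2.0,), (4.0,)), ((3.0,), (6.0,))]

w = 0.0       # the network's single "parameter"
lr = 0.05     # learning rate
for epoch in range(200):
    for x, target in pairs:
        pred = w * x[0]
        # Gradient of the squared error (pred - target)^2 w.r.t. w.
        grad = 2 * (pred - target[0]) * x[0]
        w -= lr * grad

print(round(w, 2))  # converges to the true mapping, w = 2.0
```

The real network has vastly more parameters and learns a nonlinear image-to-hologram map, but each training step has this same structure: predict, compare against the paired ground truth, and adjust.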
"We are amazed at how well it performs," says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What's more, the compact tensor network requires less than 1 MB of memory. "It's negligible, considering the tens and hundreds of gigabytes available on the latest cell phone," he says.
The research "shows that true 3D holographic displays are practical with only moderate computational requirements," says Joel Kollin, a principal optical architect at Microsoft who was not involved with the research. He adds that "this paper shows marked improvement in image quality over previous work," which will "add realism and comfort for the viewer." Kollin also hints at the possibility that holographic displays like this could even be customized to a viewer's ophthalmic prescription. "Holographic displays can correct for aberrations in the eye. This makes it possible for a display image sharper than what the user could see with contacts or glasses, which only correct for low order aberrations like focus and astigmatism."
"A considerable leap"
Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.
Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.
"It's a considerable leap that could completely change people's attitudes toward holography," says Matusik. "We feel like neural networks were born for this task."
The work was supported, in part, by Sony.