The way an elephant manipulates its trunk to eat and drink could lead to better robots, researchers say.
Elephants dilate their nostrils to create more space in their trunks, allowing them to store up to 5.5 liters (1.45 gallons) of water, according to their new study.
They can also suck up three liters (0.79 gallons) of water per second, inhaling at speeds of around 150 meters per second (330 mph), nearly 30 times faster than a human sneeze, the researchers found.
The researchers wanted to better understand the physics of how elephants use their trunks to move and manipulate air, water, food, and other objects. They also wanted to learn if the mechanics could inspire the creation of more efficient robots that use air motion to hold and move things.
Photo by David Clode on Unsplash
While octopuses use jets of water to propel themselves and archer fish shoot water above the surface to catch insects, elephants are the only animals able to use suction both on land and underwater.
"An elephant eats about 400 pounds of food a day, but very little is known about how they use their trunks to pick up lightweight food and water for 18 hours, every day," says lead author Andrew Schulz, a mechanical engineering PhD student at the Georgia Institute of Technology. "It turns out their trunks act like suitcases, capable of expanding when necessary."
Sucking up tortilla chips without breaking them
Schulz and his colleagues worked with veterinarians at Zoo Atlanta, studying elephants as they ate various foods. For large rutabaga cubes, for example, the animal grabbed and collected them. It sucked up smaller cubes and made a loud vacuuming sound, like the sound of a person slurping noodles, before transferring the vegetables to its mouth.
To learn more about suction, the researchers gave elephants a tortilla chip and measured the applied force. Sometimes the animal pressed down on the chip and breathed in, suspending the chip on the tip of its trunk without breaking it, similar to a person inhaling a piece of paper onto their mouth. Other times the elephant applied suction from a distance, drawing the chip to the edge of its trunk.
"An elephant uses its trunk like a Swiss Army knife," says David Hu, Schulz's advisor and a professor in Georgia Tech's School of Mechanical Engineering. "It can detect scents and grab things. Other times it blows objects away like a leaf blower or sniffs them in like a vacuum."
By watching elephants inhale liquid from an aquarium, the team was able to time the process and measure the volume inhaled. In just 1.5 seconds, the trunk sucked up 3.7 liters (just shy of 1 gallon), a flow rate equivalent to 20 toilets flushing simultaneously.
Soft robots and elephant conservation
The researchers used an ultrasonic probe to take trunk wall measurements and see how the trunk's inner muscles work. By contracting those muscles, the animal dilates its nostrils up to 30%. This decreases the thickness of the walls and expands nasal volume by 64%.
"At first it didn't make sense: an elephant's nasal passage is relatively small and it was inhaling more water than it should," Schulz says. "It wasn't until we saw the ultrasonographic images and watched the nostrils expand that we realized how they did it. Air makes the walls open, and the animal can store far more water than we originally estimated."
Based on the pressures applied, Schulz and the team suggest that elephants inhale at speeds comparable to Japan's 300-mph bullet trains.
"By investigating the mechanics and physics behind trunk muscle movements, we can apply the physical mechanisms—combinations of suction and grasping—to find new ways to build robots," Schulz says.
"In the meantime, the African elephant is now listed as endangered because of poaching and loss of habitat. Its trunk makes it a unique species to study. By learning more about them, we can learn how to better conserve elephants in the wild."
The paper appears in the Journal of the Royal Society Interface. The US Army Research Laboratory and the US Army Research Office Mechanical Sciences Division, Complex Dynamics and Systems Program, funded the work. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the sponsoring agency.
Source: Georgia Tech
Original Study DOI: 10.1098/rsif.2021.0215
A new robot uses radio waves to pinpoint items, even when they're hidden from view.
"Researchers have been giving robots human-like perception," says MIT Associate Professor Fadel Adib. In a new paper, Adib's team is pushing the technology a step further. "We're trying to give robots superhuman perception," he says.
The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper's lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That's in part because robots struggle to locate and grasp objects in such a crowded environment. "Perception and picking are two roadblocks in the industry today," says Rodriguez. Using optical vision alone, robots can't perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don't pass through walls.
But radio waves can.
For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.
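The reader-and-tag exchange can be caricatured in a few lines. The toy sketch below is my own illustration of backscatter modulation, not the real RFID air protocol, and the function names are invented:

```python
# Toy sketch of RFID backscatter: the tag encodes its ID by switching its
# antenna load, which modulates how strongly the reader's carrier wave is
# reflected. Real tags use far more elaborate coding and anti-collision logic.
def tag_backscatter(tag_id_bits, carrier=1.0):
    """Reflect the carrier for a 1 bit, absorb it for a 0 bit."""
    return [carrier * bit for bit in tag_id_bits]

def reader_decode(reflected_levels, threshold=0.5):
    """Threshold the reflected signal strength back into ID bits."""
    return [1 if level > threshold else 0 for level in reflected_levels]

tag_id = [1, 0, 1, 1, 0, 0, 1, 0]
received = reader_decode(tag_backscatter(tag_id))
assert received == tag_id   # the reader recovers the tag's identity
```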
"RF is such a different sensing modality than vision," says Rodriguez. "It would be a mistake not to explore what RF can do."
RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they're fully blocked from the camera's view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot's wrist. The RF reader stands independent of the robot and relays tracking information to the robot's control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot's decision making was one of the biggest challenges the researchers faced.
"The robot has to decide, at each point in time, which of these streams is more important to think about," says Boroushaki. "It's not just eye-hand coordination, it's RF-eye-hand coordination. So, the problem gets very complicated."
The robot initiates the seek-and-pluck process by pinging the target object's RF tag for a sense of its whereabouts. "It starts by using RF to focus the attention of vision," says Adib. "Then you use vision to navigate fine maneuvers." The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren's source.
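That attention handoff can be sketched as a simple sensor-fusion rule. The snippet below is a hypothetical illustration, not code from the paper; the function, weights, and uncertainty values are invented, and the real system integrates the streams inside its control algorithm:

```python
import numpy as np

# Hypothetical sketch: fuse a coarse RF location estimate with a fine
# visual one by inverse-variance weighting. All numbers are illustrative.

def choose_target_estimate(rf_xyz, rf_sigma, cam_xyz=None, cam_sigma=None):
    """Return a fused 3D position estimate of the tagged target.

    Far from the target the camera may not see it at all (cam_xyz is None),
    so the robot relies on RF alone; up close, vision's much smaller
    uncertainty dominates the fused estimate.
    """
    if cam_xyz is None:            # target occluded: RF is the only modality
        return np.asarray(rf_xyz)
    w_rf = 1.0 / rf_sigma ** 2     # inverse-variance weights
    w_cam = 1.0 / cam_sigma ** 2
    fused = w_rf * np.asarray(rf_xyz) + w_cam * np.asarray(cam_xyz)
    return fused / (w_rf + w_cam)

# Far away: only the RF ping locates the occluded, tagged object.
coarse = choose_target_estimate([1.0, 2.0, 0.5], rf_sigma=0.15)
# Close in: vision (sigma ~5 mm) outweighs RF (sigma ~15 cm).
fine = choose_target_estimate([1.0, 2.0, 0.5], 0.15,
                              cam_xyz=[1.02, 1.98, 0.51], cam_sigma=0.005)
```

Inverse-variance weighting is just one standard way to combine two noisy estimates; the point is only that whichever stream is currently more certain dominates the decision.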
With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot's decision making.
RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to "declutter" its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp's "unfair advantage" over robots without penetrative RF sensing. "It has this guidance that other systems simply don't have."
RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item's identity without needing to manipulate the item, expose its barcode, and scan it. "RF has the potential to improve some of those limitations in industry, especially in perception and localization," says Rodriguez.
Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. "Or you could imagine the robot finding lost items. It's like a super-Roomba that goes and retrieves my keys, wherever the heck I put them."
The research is sponsored by the National Science Foundation, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).
For an easy and efficient way to keep your hard floors spotless, snag this robot vacuum cleaner while it's on sale.
- There are convenient ways to start your spring cleaning without lifting a finger.
- For an easy and efficient way to keep your space spotless, upgrade to a robot vacuum cleaner.
- The Hard Wood & Tile Cleaning Robot Vacuum is only $39.95, which is a 69% drop from its regular price of $129.
Dust, dirt, and everything else has a way of building up — especially if you have pets. If you're still plugging in and lugging your vacuum around to clean things up, it's no secret that there's a better way. Upgrade to this Hard Wood & Tile Cleaning Robot Vacuum for more convenient and efficient cleaning.
Unlike the Roomba and other competing devices on the market, this vacuum cleaner fits any budget. But that doesn't mean it sacrifices quality. It features 1,600 Pa of suction, two side brushes, and a double-layer turbofan that work together to clean anything in its path.
Additionally, the vacuum is multi-functional, with several useful cleaning modes. You won't have to worry about constantly emptying it either, as it has a larger-than-average dust bin. As for battery life, it automatically returns to its dock to recharge when its power drops to 20 percent. That means even less stress for you.
One thing to note: This vacuum cleaner only works on smooth surfaces and doesn't clean carpeted surfaces. However, it does have an ultra-thin body that allows it to easily get under beds and sofas for a deeper, reliable clean.
The Hard Wood & Tile Cleaning Robot Vacuum is an incredible value at an extra-low $39.95. That's nearly a 70% markdown from its original price of $129. Stop making cleaning difficult for yourself. Start your spring cleaning without lifting a finger by snagging this robot vacuum before it sells out.
Prices subject to change.
When you buy something through a link in this article or from our shop, Big Think earns a small commission. Thank you for supporting our team's work.
The bird demonstrates cutting-edge technology for devising self-folding nanoscale robots.
Cornell University has just announced what may be the smallest origami bird ever folded. While a typical origami animal is the product of an artist's dexterous hands, the Cornell bird was folded by the strategic application of small electrical voltages. It had to be: the material from which the bird is made is just 30 atoms thick.
Creative expression isn't the point of the university's little avian — its construction previews principles and techniques that will lead to new generations of moving, nano-scaled robots that "can enable smart material design and interaction with the molecular biological world," says Dean Culver of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory, which supported the research.
According to Cornell's Paul McEuen, "We humans, our defining characteristic is we've learned how to build complex systems and machines at human scales, and at enormous scales as well. But what we haven't learned how to do is build machines at tiny scales. And this is a step in that basic, fundamental evolution in what humans can do, of learning how to construct machines that are as small as cells."
The lead author of the paper describing the tiny bird is postdoctoral researcher Qingkun Liu. The paper, "Micrometer-Sized Electrically Programmable Shape Memory Actuators for Low-Power Microrobotics," is the cover story of the March 17 issue of the journal Science Robotics.
A minuscule swarm of helpers
The project is the result of a collaboration between physical scientist McEuen and physicist Itai Cohen, both of Cornell's College of Arts and Sciences. It has already produced a (very) small herd of nanoscale machines and devices.
Cohen explains, "We want to have robots that are microscopic but have brains on board. So that means you need to have appendages that are driven by complementary metal-oxide-semiconductor (CMOS) transistors, basically a computer chip on a robot that's 100 microns on a side."
The idea is that these minuscule workhorses—a metaphor, no nanoscale origami horses yet exist—are released from a wafer, fold themselves into the desired form factor, and then go about their business. Additional folding would give them motion as they work, letting them change shape to move their limbs and manipulate microscopic objects. The researchers anticipate that these nanobots will eventually achieve functionality similar to that of their larger brethren.
Credit: nobeastsofierce/Adobe Stock
How a tiny robot is made and works
The project combines materials science with chemistry, since the folding is achieved with the strategic deployment of electrochemical reactions. Liu explains, "At this small scale, it's not like traditional mechanical engineering, but rather chemistry, material science, and mechanical engineering all mixed together."
"The hard part," says Cohen, "is making the materials that respond to the CMOS circuits. And this is what Qingkun and his colleagues have done with this shape memory actuator that you can drive with voltage and make it hold a bent shape."
The bots are constructed from a nanometer-thick platinum layer that's coated with a titanium oxide film. Rigid panels of silicon oxide glass are affixed to the platinum. A positive voltage creates oxidation, forcing oxygen atoms into the platinum seams between the glass panels, and forcing platinum atoms out. This causes the platinum to expand, which bends the entire glass-platinum structure to a desired angle.
Because the oxygen atoms collect to form a barrier, a bend is retained even after the charge is switched off. To undo a fold, a negative charge can be applied that removes the oxygen atoms from the seam, allowing it to relax and unbend.
This all happens very quickly — a machine can fold itself within just 100 milliseconds. The process is also repeatable. The team reports that a bot can flatten and refold itself thousands of times, and all it takes is a single volt of electricity.
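The set-and-hold behavior described above acts like a nonvolatile latch: a positive voltage writes a bend, a negative voltage erases it, and zero voltage retains the state. This toy state model is my own caricature, not the authors' control code:

```python
# Toy model of the shape-memory hinge: positive voltage oxidizes the platinum
# seam and latches a bend; negative voltage strips the oxygen and flattens it;
# with the voltage off, the current fold is retained.
class SeamActuator:
    def __init__(self):
        self.bent = False

    def apply(self, volts):
        if volts > 0:       # oxidation: oxygen wedges into the seam, bending it
            self.bent = True
        elif volts < 0:     # reduction: oxygen leaves, the seam relaxes flat
            self.bent = False
        return self.bent    # volts == 0: no change, the fold holds

hinge = SeamActuator()
hinge.apply(+1.0)   # fold
hinge.apply(0.0)    # power off: stays folded
hinge.apply(-1.0)   # unfold
```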
Artistry after all
None of this really removes what one might consider the artistry. Working out how and where to apply voltages to effect the desired shape is not a simple thing to do. McEuen says, "One thing that's quite remarkable is that these little tiny layers are only about 30 atoms thick, compared to a sheet of paper, which might be 100,000 atoms thick. So it's an enormous engineering challenge to figure out how to make something like that have the kind of functionalities we want."
Still, the group is getting quite good at microscopic robotics, and has already been awarded the Guinness World Record for assembling the smallest-ever walking robot. The little 4-legged dude is 40 microns wide and between 40 and 70 microns long. They're angling for a new record with their 60-micron-wide origami bird.
Says Cohen, "These are major advances over current state-of-the-art devices. We're really in a class of our own."
Creating an afterlife—or a simulation of one—would take vast amounts of energy. Some scientists think the best way to capture that energy is by building megastructures around stars.
- In a 2018 paper, researchers Alexey Turchin and Maxim Chernyakov outlined various ways humans might someday be able to achieve immortality or resurrection.
- One way involves creating a simulated afterlife, in which artificial intelligence would build simulations of past human lives.
- Getting the necessary power for the simulation might require building a Dyson sphere, which is a theoretical megastructure that orbits a star and captures its energy.
Is there an afterlife?
Despite centuries of inquiry, nobody has settled this fundamental question, and perhaps nobody ever will. So, maybe a better question is: Can humans create an afterlife?
Some scientists think so.
In 2018, Alexey Turchin and Maxim Chernyakov, both members of the Russian Transhumanist Movement, wrote a paper outlining the main ways science might someday make immortality and resurrection possible. Called the "Immortality Roadmap," the project describes the ways people might be able to extend lifespan or live forever, from using cryonics to freeze themselves, to constructing nanobots for "treatment of injuries and cell cyborgization."
But the Immortality Roadmap mentions one particularly grandiose road to immortality. Outlined in "Plan C" of the project, the idea is to create a simulation of humanity's past through artificial intelligence that's able to digitally reconstruct people.
The AI would use DNA and other information about individuals to create models of those individuals within a simulation, allowing recently deceased people to experience another chance at life — or, at least an approximation of life.
"The main idea of a resurrection-simulation is that if one takes the DNA of a past person and subjects it to the same developmental condition, as well as correcting the development based on some known outcomes, it is possible to create a model of a past person which is very close to the original," the researchers wrote.
"DNA samples of most people who lived in past 1 to 2 centuries could be extracted via global archeology. After the moment of death, the simulated person is moved into some form of the afterlife, perhaps similar to his religious expectations, where he meets his relatives."
But would that digital copy really be you, or rather a fundamentally different digital being that resembles you? What about the other "people" that inhabit the simulation, would they be "real"? And would people actually want to repeat their lives over again, perhaps forever?
Of course, these are questions that the Immortality Roadmap can't answer. But what's clear is that, if technology ever becomes able to create a "resurrection simulation," it will require vast amounts of computing power — far more than what currently exists on Earth. That's where Dyson spheres come into play.
In 1960, the theoretical physicist Freeman Dyson published a paper describing a peculiar strategy scientists could use to detect signs of alien life: look for stars encompassed by gigantic megastructures.
Why? Dyson figured that if spacefaring alien civilizations do exist, then they must have figured out a way to generate vast amounts of energy. One theoretical way aliens could do that is through harnessing the power of stars: By surrounding a star with orbiting structures that capture solar energy, a civilization could theoretically generate far more energy than they could on a planet.
That's the basic idea behind Dyson spheres. Of course, modern science is far from being able to build such a complex megastructure, and it's unclear whether it'll ever be possible.
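The appeal is easiest to see with a rough scale comparison. The sketch below uses standard astronomical constants (figures I am supplying, not numbers from the article) to compare the sun's total output with the thin slice a planet intercepts:

```python
# Rough scale comparison: total solar output vs. the sunlight a planet catches.
SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the sun
EARTH_RADIUS_M = 6.371e6
SUN_EARTH_DIST_M = 1.496e11   # one astronomical unit

# A planet intercepts only the fraction of sunlight crossing its disk:
# pi * R^2 out of the full sphere 4 * pi * d^2.
fraction = EARTH_RADIUS_M ** 2 / (4 * SUN_EARTH_DIST_M ** 2)
planet_power = SOLAR_LUMINOSITY_W * fraction

print(f"Earth intercepts ~{planet_power:.1e} W")   # ~1.7e17 W
print(f"a full sphere captures ~{1 / fraction:.1e}x more")
```

A planet catches only about one part in two billion of the sun's output, which is why full or partial stellar enclosure keeps reappearing in these proposals.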
"An actual sphere around the sun is completely impractical," Stuart Armstrong, a research fellow at Oxford University's Future of Humanity Institute who has studied megastructure concepts, told Popular Mechanics in 2020.
There are many questions about and arguments against the feasibility of Dyson spheres. Obviously, our modern engineering capabilities wouldn't enable us to build a structure that big and complex, and then transport it to the sun. And even if engineers could build an enormous sun shell, we don't have materials with enough tensile strength to hold together the structure once it's surrounding the sun.
Other potential problems: space debris colliding with the sphere, inefficiencies in transporting the energy back to Earth, and having to perform maintenance on a megastructure that's dangerously close to the sun. In short, the Dyson sphere is a very theoretical concept.
Credit: vexworldwide via Adobe Stock
But some people think building a Dyson sphere is more feasible than it seems. In 2012, the bioethicist and transhumanist George Dvorsky published a blog post titled "How to build a Dyson sphere in five (relatively) easy steps." His strategy, in short, calls for sending autonomous robots into space, where they would:
- Get energy
- Mine Mercury
- Get materials into orbit
- Make solar collectors
- Extract energy
"The idea is to build the entire swarm in iterative steps and not all at once. We would only need to build a small section of the Dyson sphere to provide the energy requirements for the rest of the project. Thus, construction efficiency will increase over time as the project progresses," Dvorsky wrote.
"We're going to have to mine materials from Mercury. Actually, we'll likely have to take the whole planet apart. The Dyson sphere will require a horrendous amount of material—so much so, in fact, that, should we want to completely envelope the sun, we are going to have to disassemble not just Mercury, but Venus, some of the outer planets, and any nearby asteroids as well."
Credit: ALEXEY TURCHIN
Turchin echoed a similar idea to Popular Mechanics, acknowledging that while humans currently can't build a Dyson sphere, "nanorobots could do it."
Still, even if scientists someday manage to create a Dyson sphere that's able to power a resurrection simulation, there's a good chance many people won't take part: Surveys repeatedly show that most people would not opt to live forever if given the choice.