An accident left this musician with one arm. Now he is helping create future tech for others with disabilities.
- Meet the world's first bionic drummer. Rock musician Jason Barnes lost his arm in a terrible accident... and then he became the fastest drummer in the world.
- Working with Gil Weinberg, a Georgia Tech professor and inventor of musical robots, Barnes has used electromyography and ultrasound technology to break musical records.
- Weinberg and Barnes hope to perfect the technology so that it can one day be used to help other people with disabilities realize that "they're not only not disabled, they're actually super-able."
Duke University researchers might have solved a half-century-old problem.
- Duke University researchers created a hydrogel that appears to be as strong and flexible as human cartilage.
- The blend of three polymers provides enough flexibility and durability to mimic the knee.
- The next step is to test this hydrogel in sheep; human use is at least three years away.
Duke researchers have developed the first gel-based synthetic cartilage with the strength of the real thing. A quarter-sized disc of the material can withstand the weight of a 100-pound kettlebell without tearing or losing its shape.
Photo: Feichen Yang.<p>That's the word from a team in the Department of Chemistry and Department of Mechanical Engineering and Materials Science at Duke University. Their <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/adfm.202003451" target="_blank">new paper</a>, published in the journal <em>Advanced Functional Materials</em>, details an exciting advance for this frustrating joint.<br></p><p>Researchers have sought materials strong and versatile enough to repair a knee since at least the 1970s. This new hydrogel, composed of three polymers, might be it. When two of the polymers are stretched, a third keeps the entire structure intact. Stretched 100,000 times, the hydrogel held up as well as materials used in bone implants. The team also rubbed it against natural cartilage a million times and found it to be as wear-resistant as the real thing.</p><p>The hydrogel looks like Jell-O and is 60 percent water. Co-author Feichen Yang <a href="https://today.duke.edu/2020/06/lab-first-cartilage-mimicking-gel-strong-enough-knees" target="_blank">says</a> this network of polymers is particularly durable: "Only this combination of all three components is both flexible and stiff and therefore strong."</p><p>As with any new material, a lot of testing must be conducted. The team doesn't foresee the hydrogel being implanted into human bodies for at least three years; the next step is to test it in sheep.</p><p>Still, this is an exciting step forward in the rehabilitation of one of our trickiest joints. Given the potential reward, the wait is worth it.</p>
The old idea of running with springs on your feet gets a high-tech makeover.
The precursor to the modern bicycle, dubbed the hobby horse, was invented in 1817 by Baron Karl von Drais.
Researchers at UCSF have trained an algorithm to parse meaning from neural activity.
Eavesdropping<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjkxMDY1MC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYxNDQ4MjM4MX0.xn8aljMO7UFbibI2-B0AoniAfvkrOWiDx7diBVEdMBc/img.jpg?width=980" id="b7ede" class="rm-shortcode" data-rm-shortcode-id="8a54d456f0a8a9594f4db1512286b564" data-rm-shortcode-name="rebelmouse-image" data-width="1440" data-height="961" />
Image source: Teeradej/Shutterstock<p>To train their AI, Makin and co-author <a href="http://changlab.ucsf.edu/our-team" target="_blank">Edward F. Chang</a> "listened in" on the neural activity of four participants. Each participant, an epilepsy patient, had had brain electrodes implanted for seizure monitoring.</p><p>The participants were supplied 50 sentences they were to read aloud at least three times. As they did, the researchers collected neural data. (Audio recordings were also made.)</p><p>The study lists a handful of the sentences the participants recited, among them:</p><ul><li>"Those musicians harmonize marvelously."</li><li>"She wore warm fleecy woolen overalls."</li><li>"Those thieves stole thirty jewels."</li><li>"There is chaos in the kitchen."</li></ul><p>The algorithm's task was to analyze the collected neural data and predict what was being said when the data was generated. (Data associated with non-verbal sounds captured in the participants' audio recordings was factored out first.)</p><p>The algorithm quickly learned to predict the words associated with chunks of neural data. It predicted that the data generated when "A little bird is watching the commotion" was spoken meant "The little bird is watching watching the commotion," quite close, while "The ladder was used to rescue the cat and the man" was predicted as "Which ladder will be used to rescue the cat and the man."</p><p>The accuracy varied from participant to participant. Makin and Chang found that an algorithm trained on one participant had a head start when being trained for another, suggesting that training the AI could get easier with repeated use.</p><p><em>The Guardian</em> spoke with expert Christian Herff, who found the system impressive for using less than 40 minutes of training data for each participant rather than the far greater amount required by other attempts to derive text from neural data.
He says, "By doing so they achieve levels of accuracy that haven't been achieved so far."</p><p>Previous attempts to derive speech from neural activity focused on the phonemes from which spoken words are built; Makin and Chang focused on whole words instead. There are far more words than phonemes, which makes the task harder in principle, but as the study notes, "the production of any particular phoneme in continuous speech is strongly influenced by the phonemes preceding it, which decreases its distinguishability." To keep the word-based approach manageable, the spoken sentences drew on a vocabulary of just 250 words.</p>
Through the neural fog<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yMjkxMDY5MC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY3MjkwMDY2Mn0.QR8hrwTdsq28J5Jo1oCitznh4AKogX3EvJNJKZhqltw/img.jpg?width=980" id="c5608" class="rm-shortcode" data-rm-shortcode-id="b74ddd0c88ddb97d7bcb5b723905a856" data-rm-shortcode-name="rebelmouse-image" data-width="1440" data-height="809" />
Image source: whitehoune/Shutterstock/Big Think<p>Clearly, though, there's room for improvement. The AI also predicted that "Those musicians harmonize marvelously" was "The spinach was a famous singer." "She wore warm fleecy woolen overalls" was mis-predicted as "The oasis was a mirage." "Those thieves stole thirty jewels" was misconstrued as "Which theatre shows mother goose," while the algorithm predicted the data for "There is chaos in the kitchen" meant "There is is helping him steal a cookie."</p><p>Of course, the vocabulary involved in this research is limited, as are the sentence exemplars. "If you try to go outside the [50 sentences used] the decoding gets much worse," notes Makin, citing the limitations of his study. Another obvious caveat comes from the fact that the AI was trained on sentences spoken aloud by each participant, an impossibility with locked-in patients.</p><p>Still, the research by Makin and Chang is encouraging. Predictions for one of their participants had an error rate of just 3 percent, better than the roughly 5 percent error rate found in human transcriptions.</p>
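Error figures like these are typically computed as word error rate: the minimum number of word insertions, deletions, and substitutions needed to turn the prediction into the reference sentence, divided by the reference length. A minimal sketch of that standard calculation (this is the textbook Levenshtein computation, not the authors' code; the example sentences are taken from the article):

```python
def word_error_rate(reference: str, prediction: str) -> float:
    """Minimum edit distance between word sequences, normalized by reference length."""
    ref, hyp = reference.lower().split(), prediction.lower().split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j predicted words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

wer = word_error_rate(
    "The ladder was used to rescue the cat and the man",
    "Which ladder will be used to rescue the cat and the man",
)
```

For this near-miss prediction, two substitutions and one insertion across an 11-word reference yield a word error rate of about 27 percent; the participant the article cites averaged around 3 percent across their test sentences.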
Our clever human hands may soon be outdone.
- MIT-affiliated researchers develop a hypersensitive glove that can capture the way in which we handle objects.
- The data captured by the glove can be "learned" by a neural net.
- Smart tactile interaction will be invaluable when A.I.-based robots start to interact with objects — and us.
The STAG<p>The "scalable tactile glove," or "STAG," that the CSAIL scientists are using for data-gathering contains 550 tiny pressure sensors. These track and capture how hands interact with objects as they touch, move, pick up, put down, drop, and feel them. The resulting data is fed into a <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" target="_blank">convolutional neural network</a> for learning. So far, the team has taught their system to recognize 26 everyday objects — among them a soda can, pair of scissors, tennis ball, spoon, pen, and mug — with an impressive 76 percent accuracy rate. The STAG system can also predict objects' weights to within roughly 60 grams.</p><p>There are other tactile gloves available, but the CSAIL gloves are different. While other versions tend to be very expensive — costing in the thousands of dollars — these are made from just $10 worth of readily available materials. In addition, other gloves typically sport a mingy 50 sensors, and are thus not nearly as sensitive or informative as this glove.</p><p>The STAG is laminated with an electrically conductive polymer whose resistance changes as pressure is applied. Woven into the glove are conductive threads that overlap; each crossing point serves as a pressure sensor, registering the change in resistance, or <a href="https://www.quora.com/What-is-meant-by-Delta-in-computer-terms" target="_blank">delta</a>, at that spot. When the wearer touches an object, the glove picks up each point of contact as a pressure point.</p>
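The sensing principle above can be sketched in a few lines. Assume, hypothetically, that the 550 sensor readings arrive as raw resistance values; since a piezoresistive polymer's resistance drops as pressure rises, inverting and normalizing the readings gives a per-frame pressure map a neural network could consume. The layout and value ranges below are illustrative, not the actual STAG electronics:

```python
import numpy as np

N_SENSORS = 550  # sensor count reported for the STAG

def resistances_to_pressure_frame(resistances: np.ndarray) -> np.ndarray:
    """Map raw resistance readings (ohms) to normalized pressures in [0, 1].

    Piezoresistive sensors drop in resistance as pressure rises,
    so we convert to conductance, then min-max normalize the frame.
    """
    conductance = 1.0 / resistances  # higher pressure -> higher conductance
    lo, hi = conductance.min(), conductance.max()
    if hi == lo:                     # no contact anywhere this frame
        return np.zeros_like(conductance)
    return (conductance - lo) / (hi - lo)

# Simulated frame of raw readings between 1 kOhm and 100 kOhm
rng = np.random.default_rng(0)
frame = resistances_to_pressure_frame(rng.uniform(1e3, 1e5, size=N_SENSORS))
```

A sequence of such frames is what gets rendered as the "tactile map" videos described below.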
Touching stuff<img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8xOTU2NjE3NS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY0MzEwMzU1OX0.MziYVOFa9SEMYGa4sNkV6mQX4vNJqSQcT6Ku0MisJWI/img.jpg?width=980" id="c047d" class="rm-shortcode" data-rm-shortcode-id="efbc2262a50481c982243806f5f2ff72" data-rm-shortcode-name="rebelmouse-image" />
Image source: Jackie Niam/Shutterstock<p>An external circuit creates "tactile maps" of pressure points, brief videos that depict each contact point as a dot sized according to the amount of pressure applied. The 26 objects assessed so far were mapped out to some 135,000 video frames that show dots growing and shrinking at different points on the hand. That raw dataset had to be massaged in a few ways for optimal recognition by the neural network. (A separate dataset of around 11,600 frames was developed for weight prediction.) </p><p>In addition to capturing pressure information, the researchers also measured the manner in which a hand's joints interact while handling an object. Certain relationships turn out to be predictable: When someone engages the middle joint of their index finger, for example, they seldom use their thumb. On the other hand (no pun intended), using the index and middle fingertips always means that the thumb will be involved. "We quantifiably show for the first time," says Sundaram, "that if I'm using one part of my hand, how likely I am to use another part of my hand."</p>
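The joint-correlation finding amounts to estimating, across many frames, the conditional probability that one hand region is active given that another is. A toy version over boolean activation frames (the region names and the five sample frames are made up for illustration; the real dataset has some 135,000 frames):

```python
# Each frame records which (hypothetical) hand regions exceeded a
# pressure threshold while an object was being handled.
frames = [
    {"index_tip", "thumb"},
    {"index_tip", "middle_tip", "thumb"},
    {"index_middle_joint"},
    {"index_tip", "thumb"},
    {"index_middle_joint", "pinky"},
]

def p_given(region_b: str, region_a: str, frames) -> float:
    """Estimate P(region_b active | region_a active) by counting frames."""
    with_a = [f for f in frames if region_a in f]
    if not with_a:
        return 0.0
    return sum(region_b in f for f in with_a) / len(with_a)

# How often is the thumb engaged when the index fingertip is?
p = p_given("thumb", "index_tip", frames)
```

In this toy data the thumb accompanies the index fingertip in every frame, mirroring the "fingertips imply thumb" pattern the researchers describe, while the index finger's middle joint never co-occurs with the thumb.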