A study from McGill University reveals the secret of musicians with excellent timing.
- When a person locks onto a beat, it's because their brain rhythms have become aligned with it.
- Listening and physically performing are handled by brain functions that are not directly related to rhythm synchronization.
- The study tracked EEG brain activity during listening, playing along, and recreating rhythms.
Listening and tapping

[Image: TR-808 rhythm composer, a beat machine that produces notes similar to those used by the researchers.]
Credit: Steve Harvey/Unsplash

Palmer and her colleagues worked with 29 adult musicians (21 female and 6 male, aged 18 to 30), each of whom was proficient with an instrument, having studied for a minimum of six years. With electroencephalogram (EEG) electrodes affixed to their scalps, the participants listened to and tapped along with different versions of three basic rhythms as the scientists captured their brain activity.

Each rhythm was preceded by a four-beat count-off.

- [Rhythm 1:1](https://www.mcgill.ca/newsroom/files/newsroom/simple1-1.mp3): a simple series of evenly spaced clicks, played repeatedly.
- [Rhythm 1:2](https://www.mcgill.ca/newsroom/files/newsroom/moderate1-2.mp3): a repeated two-beat phrase with a higher-pitched sound on the first beat of each phrase and a lower-pitched sound on the second.
- [Rhythm 3:2](https://www.mcgill.ca/newsroom/files/newsroom/complex3-2.mp3): the most complex rhythm of the three, a repeated series of triplets in which a lower-pitched sound plays the quarter notes while a higher-pitched sound plays the triplet notes.

(Tap or click each rhythm's name above to listen to its complete version with no beats or sounds omitted. A sketch synthesizing similar click patterns appears after the task list below.)

The participants were assigned Listen, Synchronize, and Motor tasks.

- In the Listen task, participants were played a dozen modified versions of the rhythms and asked to report any missing beats they noticed.
- In the Synchronize task, they played along with a dozen versions of the rhythms, in some cases supplying sounds the researchers had removed from the patterns.
- In the Motor task, they were asked to reproduce a dozen rhythm variations after hearing each one.
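To make the three stimulus patterns concrete, here is a minimal sketch that synthesizes comparable click tracks. The tempo, click pitches, and durations are illustrative assumptions, not the study's actual stimulus parameters.

```python
# A rough sketch of the three stimulus patterns as click tracks.
# Tempo, pitches, and click durations are illustrative guesses.
import numpy as np
from scipy.io import wavfile

SR = 44100    # sample rate (Hz)
BEAT = 0.5    # assumed beat length in seconds (120 BPM)
HIGH, LOW = 880.0, 440.0   # assumed high/low click pitches

def click(freq, dur=0.03):
    """A short sine click at the given frequency, with a linear fade-out."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t) * np.linspace(1, 0, t.size)

def render(events, total_beats):
    """Place (onset_in_beats, frequency) click events into a silent buffer."""
    buf = np.zeros(int(SR * BEAT * total_beats))
    for onset, freq in events:
        start = int(SR * BEAT * onset)
        c = click(freq)
        end = min(start + c.size, buf.size)
        buf[start:end] += c[: end - start]
    return buf

# Rhythm 1:1 -- identical, evenly spaced clicks on every beat.
r11 = render([(b, HIGH) for b in range(8)], 8)
# Rhythm 1:2 -- two-beat phrase: a high click, then a low click.
r12 = render([(b, HIGH if b % 2 == 0 else LOW) for b in range(8)], 8)
# Rhythm 3:2 -- low clicks mark the quarter notes while high clicks
# play triplets: three evenly spaced clicks per beat.
r32 = render([(b, LOW) for b in range(8)] +
             [(b + k / 3, HIGH) for b in range(8) for k in range(3)], 8)

for name, sig in [("r11", r11), ("r12", r12), ("r32", r32)]:
    wavfile.write(f"{name}.wav", SR,
                  (sig / np.abs(sig).max() * 32767).astype(np.int16))
```

Playing the three files back to back makes it easy to hear why 3:2, with its triplets crossing the duple beat, is the hardest of the three to lock onto.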
Beat markers

[Image: chart with wave lines.]
Credit: Chaikom/Shutterstock

The scientists were able to identify neural markers representing each musician's beat perception, revealing the degree of synchronicity between the researchers' rhythms and the brain's own rhythms. Surprisingly, this synchronicity turned out to be unrelated to brain activity associated with either listening or playing.

Said the study's first authors, PhD students Brian Mathias and Anna Zamm, "We were surprised that even highly trained musicians sometimes showed reduced ability to synchronize with complex rhythms, and that this was reflected in their EEGs."

While the musician participants were all reasonably competent at tapping along to the rhythms, the degree to which the markers aligned with the beats was what separated the good players from the best. "Most musicians are good synchronizers," say Mathias and Zamm. "Nonetheless, this signal was sensitive enough to distinguish the 'good' from the 'better' or 'super-synchronizers,' as we sometimes call them."

When Palmer is asked whether a person can develop the ability to become a super-synchronizer, she answers: "The range of musicians we sampled suggests that the answer would be 'yes.' And the fact that only 2-3% of the population are 'beat deaf' is also encouraging. Practice definitely improves your ability and improves the alignment of the brain rhythms with the musical rhythms. But whether everyone is going to be as good as a drummer is not clear."
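The article doesn't spell out the analysis behind these markers, but one standard way to quantify how tightly a brain rhythm locks to a beat is a phase-locking value (PLV). The sketch below is a generic illustration of that idea, with every parameter assumed; it is not the study's published pipeline.

```python
# Generic phase-locking value (PLV) between an EEG trace and a beat:
# band-pass the EEG around the beat frequency, extract its instantaneous
# phase, and measure how consistently that phase tracks an ideal beat.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv_to_beat(eeg, beat_hz, fs):
    """PLV between a 1-D EEG channel and a reference beat at beat_hz (Hz),
    sampled at fs (Hz). Returns a value from 0 (no locking) to 1 (perfect)."""
    # Narrow band-pass around the beat frequency (bounds are assumptions).
    b, a = butter(2, [beat_hz * 0.8, beat_hz * 1.2], btype="band", fs=fs)
    narrow = filtfilt(b, a, eeg)

    # Instantaneous EEG phase vs. the phase of an ideal beat oscillation.
    eeg_phase = np.angle(hilbert(narrow))
    beat_phase = 2 * np.pi * beat_hz * np.arange(eeg.size) / fs

    # Magnitude of the mean phase-difference vector.
    return np.abs(np.mean(np.exp(1j * (eeg_phase - beat_phase))))

# Toy usage: a noisy 2 Hz "brain rhythm" locks strongly to a 2 Hz beat.
fs = 250
t = np.arange(0, 30, 1 / fs)
fake_eeg = np.sin(2 * np.pi * 2.0 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(round(plv_to_beat(fake_eeg, 2.0, fs), 3))  # high, near 1.0
```

A measure like this is indifferent to where in the cycle the brain rhythm sits; it only rewards consistency, which is what separates a "good" synchronizer from a "super-synchronizer."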
A Mercury-bound spacecraft's noisy flyby of our home planet.
- There is no sound in space, but if there were, this is what a spacecraft passing by Earth might sound like.
- A spacecraft bound for Mercury recorded data while swinging around our planet, and that data was converted into sound.
- Yes, in space no one can hear you scream, but this is still some chill stuff.
First off, let's be clear about what we mean by "hear" here. (Hear, hear!)
Sound, as we know it, requires air. What our ears capture is actually oscillating waves of fluctuating air pressure. Cilia, tiny hair-like fibers in our inner ears, respond to these fluctuations by firing off corresponding signals to our brains, which interpret them as tones at different pitches. This is what we perceive as sound.
All of which is to say, sound requires air, and space is notoriously devoid of it. So, in terms of human-perceivable sound, it's silent out there. Nonetheless, there can be cyclical events in space, such as oscillating values in streams of captured data, that can be mapped to pitches and thus made audible.
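Here is a minimal sketch of that mapping, usually called sonification: each reading in a data stream becomes a short tone whose pitch tracks the reading's value. It is a generic illustration of the technique, not the pipeline INAF used on BepiColombo's data, and all parameters are assumptions.

```python
# Minimal sonification sketch: map a stream of sensor-style readings to
# audible pitches. Generic illustration only; not INAF's actual pipeline.
import numpy as np
from scipy.io import wavfile

SR = 44100  # output sample rate (Hz)

def sonify(samples, low_hz=220.0, high_hz=880.0, note_dur=0.05):
    """Map each reading onto a pitch between low_hz and high_hz and
    render it as a short sine tone; concatenate the tones."""
    s = np.asarray(samples, dtype=float)
    rng = np.ptp(s) or 1.0                       # guard against flat data
    norm = (s - s.min()) / rng                   # scale readings to 0..1
    freqs = low_hz * (high_hz / low_hz) ** norm  # log-spaced pitch mapping
    t = np.linspace(0, note_dur, int(SR * note_dur), endpoint=False)
    env = np.hanning(t.size)                     # click-free fade in/out
    return np.concatenate([np.sin(2 * np.pi * f * t) * env for f in freqs])

# Toy data: a slow oscillation with a brief "vibration" in the middle,
# loosely in the spirit of a spacecraft crossing into shadow.
x = np.sin(np.linspace(0, 8 * np.pi, 400))
x[180:220] += 0.6 * np.sin(np.linspace(0, 40 * np.pi, 40))
wavfile.write("sonified.wav", SR, (sonify(x) * 32767).astype(np.int16))
```

The logarithmic pitch mapping is a deliberate choice: our ears judge pitch on a log scale, so equal changes in the data come out as equal musical intervals.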
Image source: European Space Agency
The European Space Agency's BepiColombo spacecraft took off from Kourou, French Guiana on October 20, 2018, on its way to Mercury. To reduce its speed and settle into the proper trajectory to Mercury, BepiColombo executed a "gravity-assist flyby," slinging itself around the Earth before leaving home. Over the course of its 34-minute flyby, its two data recorders captured five data sets that Italy's National Institute for Astrophysics (INAF) enhanced and converted into sound waves.
Into and out of Earth's shadow
In April, BepiColombo made its approach to Earth, closing from 256,393 kilometers (159,315 miles) away to 129,488 kilometers (80,460 miles). The first clip starts as BepiColombo begins to sneak into the Earth's shadow, on the side facing away from the sun.
The data was captured by BepiColombo's Italian Spring Accelerometer (ISA) instrument. Says Carmelo Magnafico of the ISA team, "When the spacecraft enters the shadow and the force of the Sun disappears, we can hear a slight vibration. The solar panels, previously flexed by the Sun, then find a new balance. Upon exiting the shadow, we can hear the effect again."
In addition to making for some cool sounds, the phenomenon allowed the ISA team to confirm just how sensitive their instrument is. "This is an extraordinary situation," says Carmelo. "Since we started the cruise, we have only been in direct sunshine, so we did not have the possibility to check effectively whether our instrument is measuring the variations of the force of the sunlight."
When the craft arrives at Mercury, the ISA will be tasked with studying the planet's gravity.
The second clip is derived from data captured by BepiColombo's MPO-MAG magnetometer, a.k.a. MERMAG, as the craft traveled through Earth's magnetosphere, the region surrounding the planet that's shaped by its magnetic field.
BepiColombo eventually entered the hellish magnetosheath, the region battered by plasma from the solar wind, before passing through the relatively peaceful magnetopause, the boundary where the magnetosheath gives way to the region dominated by Earth's own magnetic field.
MERMAG will map Mercury's magnetosphere, as well as the magnetic state of the planet's interior. As a secondary objective, it will assess how the solar wind, Mercury's magnetic field, and the planet itself interact, analyzing the dynamics of the magnetosphere.
Recording session over, BepiColombo is now slipping silently through space, with its arrival at Mercury planned for 2025.
Why finding joy is more attainable than pursuing happiness.
- Joy and happiness are often used synonymously, but designer Ingrid Fetell Lee argues that there is an important distinction between the two: time. Happiness measures how good we feel over time, while joy is about feeling good in the moment.
- Noticing visual and sensorial patterns in the things that brought people joy, Lee was able to identify 10 "aesthetics": abundance, harmony, energy, freedom, play, surprise, transcendence, magic, renewal, and celebration.
- In this video, we learn more about each aesthetic and why focusing on joyful moments is the key to getting the most out of life.
Relax Melodies is the most positively reviewed app in the history of the App Store.
- Blue light negatively impacts the secretion of melatonin, making it harder for smartphone and computer users to sleep.
- Meditation has been shown to have a positive impact on sleep by increasing specific neurochemicals while decreasing stress.
- Relax Melodies can help you get the sleep you need, and it's on sale.
Audio recordings reveal cows have unique voices and share emotions with each other.
- New audio recordings of cows reveal rich communication and unique individual voices.
- Cows do more than vocalize to their calves — they share emotions with each other.
- A better understanding of what cows are saying and feeling can help in the formulation of humane cattle-care standards.
The herd is heard
Image source: The Feed

Alexandra Green, a PhD student, is lead author of the study published in [Scientific Reports](https://www.nature.com/articles/s41598-019-54968-4#Abs1). For her research, she [recorded](https://sydney.edu.au/content/dam/corporate/documents/sydney-institute-of-agriculture/outreach-engagement/launch-and-research-showcase/Alexandra%20Green.pdf) 333 vocalizations of 13 Holstein-Friesian heifers. She tells [University of Sydney News](https://sydney.edu.au/news-opinion/news/2019/12/19/stand-out-from-herd-how-cows-communicate.html), "We hope that through gaining knowledge of these vocalizations, farmers will be able to tune into the emotional state of their cattle, improving animal welfare."

"This study shows that cattle vocal individuality of high-frequency calls is stable across different emotionally loaded farming contexts. Individual distinctiveness is likely to attract social support from conspecifics, and knowledge of these individuality cues could assist farmers in detecting individual cattle for welfare or production purposes." (Green, et al.)

The study's recordings were captured across five months at an Australian farm. Green recorded the cows during estrus, during feed anticipation (a presumably happy moment), and during feed frustration as cattle were denied expected food. Vocalizations were also recorded when these social animals were individually isolated from their herd.
Analysis
Audio analysis of a moo: the yellow arrow shows the blue indicator of the voice's fundamental pitch, and the red arrow marks where the cow begins to close her mouth post-moo.
Image source: The Feed

Green traveled to Saint-Etienne, France, where she worked with co-authors psychologist [David Reby](http://www.sussex.ac.uk/profiles/115148) and bioacoustician and animal behaviorist [Livio Favaro](https://unito.academia.edu/LivioFavaro). Together, they analyzed her field recordings using [Praat](http://www.fon.hum.uva.nl/praat/) phonetics software, which produced visual representations of the audio, including an indicator of each voice's fundamental pitch.

These analyses confirmed the uniqueness of each cow's voice. For cattle farmer Neville Catt, on whose grounds the research was conducted, there's no doubt who he's hearing when a cow begins vocalizing. "Not only do I talk to cattle, I think they talk to me," he says. One of the new insights Green's study contributes is that the sound of each heifer's voice is not limited to specific circumstances like parenting, but in fact remains constant for life. Says Green, "We found that cattle vocal individuality is relatively stable across different emotionally loaded farming contexts."

"Cows are gregarious, social animals," says Green. "In one sense it isn't surprising they assert their individual identity throughout their life and not just during mother-calf imprinting. But this is the first time we have been able to analyze voice to have conclusive evidence of this trait."
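For a sense of what a fundamental-pitch analysis involves, here is a rough Python analogue. The study used Praat; below, librosa's pYIN pitch tracker stands in for it, and the filename and frequency bounds are illustrative placeholders rather than values from the paper.

```python
# Rough analogue of a fundamental-pitch (f0) analysis. The study used
# Praat; librosa's pYIN tracker stands in here. The filename and the
# fmin/fmax bounds are placeholders, not values from the paper.
import numpy as np
import librosa

y, sr = librosa.load("moo.wav", sr=None)  # hypothetical recording

# Track f0 frame by frame; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=50, fmax=500, sr=sr)

# Summarize the voiced portion of the call: the kind of per-individual
# statistics that could help distinguish one cow's voice from another's.
voiced_f0 = f0[~np.isnan(f0)]
print(f"mean f0:  {voiced_f0.mean():.1f} Hz")
print(f"f0 range: {voiced_f0.min():.1f} to {voiced_f0.max():.1f} Hz")
```

Summary statistics like these, computed per animal across many calls, are one simple way a voice's stable individuality could be demonstrated.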