In the future, you might voluntarily share your social media data with your psychiatrist to inform a more accurate diagnosis.
- About one in five people suffer from a psychiatric disorder, and many go years without treatment, if they receive it at all.
- In a new study, researchers developed machine-learning algorithms that analyzed the relationship between psychiatric disorders and Facebook messages.
- The algorithms correctly predicted psychiatric diagnoses at rates significantly better than chance, suggesting digital tools may someday help clinicians identify mental illnesses in their early stages.
Identifying psychiatric disorders<p>The goal was for the algorithms to analyze patterns in these datasets, then predict which group participants belonged to: schizophrenia spectrum disorders (SSD), mood disorders (MD), or healthy volunteers (HV). The results were promising, showing that the algorithms correctly identified:</p><ul><li>The SSD group with an accuracy of 52% (chance was 33%)</li><li>The MD group with an accuracy of 57% (chance was 37%)</li><li>The HV group with an accuracy of 56% (chance was 29%)</li></ul><p>The study also showed interesting differences in Facebook activity among the groups, such as:</p><ul><li>The SSD group was more likely to use language related to perception (hear, see, feel).</li><li>The MD and SSD groups were far more likely to use swear words and anger-related language.</li><li>The MD group was more likely to use language related to biological processes (blood, pain).</li><li>The SSD group was more likely to express negative emotions, use second-person pronouns and write in netspeak (lol, btw, thx).</li><li>The MD group was more likely to post photos containing more blues and fewer yellows.</li></ul><p>These differences tended to become more apparent in the months before a patient was hospitalized. But even 18 months before hospitalization, the results revealed signals that hinted participants might be on the path to developing a psychiatric disorder. That's where these tools may someday help improve early-identification efforts.</p><p>"In psychiatry, we often get a snapshot of somebody's life, for 30 minutes once a month or so," he said. "There's the potential to get much greater granularity with some of these new assessment tools. Facebook, for example, can allow us to understand somebody's thoughts and behaviors in a more real-time, longitudinal fashion, as opposed to cross-sectional moments in time."</p><p>Dr. 
Birnbaum noted that everyone has a unique style of <a href="https://www.northwell.edu/behavioral-health/news/insights/digital-activity-provides-more-clues-to-its-impact-on-mental-health" target="_blank">online behavior</a> and that certain behavioral changes may contain clues about mental health.</p><p>"The way that we're understanding this is that everybody has a digital baseline, a way they typically act and behave on social media and the internet," he said. "So, ultimately here we would want to identify this baseline for each individual—a fingerprint—and then monitor for changes over time, and identify which changes are concerning, and which are not."</p><p>Using digital tools to better identify psychiatric conditions could someday reduce the number of people who suffer without treatment.</p><p>"There's an alarming gap between the number of people who experience mental illness and those who receive care," said Michael Dowling, president and CEO of Northwell Health. "It's especially troubling when you consider that the health disparity between people with mental illness and those without is larger than disparities attributable to race, ethnicity, geography or socioeconomic status."</p>
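The study's own modeling pipeline is not spelled out in this article, but the core idea, scoring a message against per-group language profiles, can be sketched in a few lines of Python. Everything below (the sample messages, the word-frequency scoring) is invented for illustration, not taken from the study:

```python
from collections import Counter

# Hypothetical training messages per group; the real study used
# participants' actual Facebook messages, which are not public.
train = {
    "SSD": ["i hear voices i see things", "you feel it you hear it"],
    "MD":  ["the pain in my blood", "so much pain today"],
    "HV":  ["great day at the park", "dinner with friends today"],
}

def profile(texts):
    """Relative word frequencies for one group's messages."""
    counts = Counter(word for text in texts for word in text.split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def classify(message, profiles):
    """Assign the message to the group whose language profile scores it highest."""
    words = message.split()
    return max(profiles, key=lambda g: sum(profiles[g].get(w, 0.0) for w in words))

profiles = {group: profile(texts) for group, texts in train.items()}
```

On this toy data, `classify("i hear things", profiles)` lands on "SSD" because perception words dominate that group's invented profile; the real algorithms also drew on image features and how activity changed over time.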
A step toward the future of psychiatry<img class="rm-lazyloadable-image rm-shortcode" type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTU1NzkzNy9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYzMjMyNTU2OX0.EP0V-l7aldnzNJKupUq4otg8r3UIE_f7vH7M4Pdisg4/img.jpg?width=980" id="6c141" width="2000" height="1125" data-rm-shortcode-id="9b2303ef4ce0c88f0669e2d72a04b63d" data-rm-shortcode-name="rebelmouse-image" />
Credit: Jewel Samad/AFP via Getty Images<p>Although previous research has examined the relationship between online activity and psychiatric disorders, the new study is unique because it paired online behavior with clinically confirmed cases of psychiatric disorders.</p><p>"The vast majority of the data thus far has been extracted from anonymous, or semi-anonymous individuals online, without any real way to validate the diagnosis or confirm the authenticity of the symptoms," Dr. Birnbaum said.</p><p>But before clinicians can use these kinds of digital approaches, researchers have more work to do.</p><p>"I think that we need much larger datasets," Dr. Birnbaum said. "We need to repeat these findings. We need to better understand how demographic differences, like age, ethnicity and gender, can play a role."</p><p>Privacy is another consideration. Dr. Birnbaum emphasized that these kinds of approaches would only be conducted on a voluntary basis, and that the Facebook data used in the recent study was anonymized, and the algorithms examined only individual words, not the context or meaning of sentences.</p><p>"This isn't about surveillance, or that Facebook should somehow be monitoring us," Dr. Birnbaum said. "It's about giving the power to the patient. I imagine a world where patients could come into the doctor's office and express their concerns, but also provide some additional clinically meaningful information that they own."</p><p>Dr. Birnbaum said the long-term goal isn't for algorithms to make official diagnoses or replace physicians, but rather to serve as supplementary tools. He added that these tools would be used only for people seeking help or information about their risk of developing a psychiatric condition, or suffering a relapse.</p><p>"Hopefully one day, we'll be able to incorporate this and other information to inform what we do, the same way you go to a doctor and you get an X-ray or a blood test to inform the diagnosis," he said. 
"It doesn't make the diagnosis, but it informs the doctor. That is where psychiatry is heading, and hopefully this is a step in that direction."</p>
Max Planck Institute scientists crash into a computing wall there seems to be no way around.
- Artificial intelligence that's smarter than us could potentially solve problems beyond our grasp.
- Self-learning AIs can absorb whatever information they need from the internet, a Pandora's box if ever there was one.
- The nature of computing itself prevents us from limiting the actions of a super-intelligent AI if it gets out of control.
Why worry?<img class="rm-lazyloadable-image rm-shortcode" type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTUwNzc3OS9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY2OTYyMDE5MX0.EN9QQ0BTIiHBvD3XJ0D1n2OhmCOfzyf40MocBiV6Y68/img.jpg?width=980" id="b2c31" width="1440" height="682" data-rm-shortcode-id="20cdee3dcd8dad17fe3e4e3ff784a97a" data-rm-shortcode-name="rebelmouse-image" />
Credit: @nt/Adobe Stock<p>"A super-intelligent machine that controls the world sounds like science fiction," says paper co-author <a href="https://www.mpib-berlin.mpg.de/staff/manuel-cebrian" target="_blank">Manuel Cebrian</a> in a <a href="https://www.mpg.de/16231640/0108-bild-computer-scientists-we-wouldn-t-be-able-to-control-superintelligent-machines-149835-x?c=2249" target="_blank">press release</a>. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."</p><p>The lure of AI is clear. Its ability to "see" the patterns in data makes it a promising agent for solving problems too complex for us to wrap our minds around. Could it cure cancer? Solve the climate crisis? The possibilities are nearly endless.</p><p>Connected to the internet, AI can grab whatever information it needs to achieve its task, and therein lies a big part of the danger. With access to every bit of human data—and responsible for its own education—who knows what lessons it would learn regardless of any ethical constraints built into its programming? Who knows what goals it would embrace and what it might do to achieve them?</p><p>Even assuming benevolence, there's danger. Suppose that an AI is confronted by an either/or choice akin to the <a href="https://bigthink.com/culture-religion/trolley-problem-solution" target="_blank">Trolley Dilemma</a>, maybe even on a grand scale: Might an AI decide to annihilate millions of people if it decided the remaining billions would stand a better chance of survival?</p>
A pair of flawed options<img class="rm-lazyloadable-image rm-shortcode" type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTUwNzc5MC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY1NzM3NDQ2Mn0.0GYCRvvo--LWLlRkpxm1fYxEWjK8DWyMSuU-bLdhtlE/img.jpg?width=980" id="044f3" width="1440" height="1080" data-rm-shortcode-id="0bdc3512c25e1b1f79bacef29e5e6222" data-rm-shortcode-name="rebelmouse-image" />
Credit: Maxim_Kazmin/Adobe Stock<p>The most obvious way to keep a super-intelligent AI from getting ahead of us is to limit its access to information by preventing it from connecting to the internet. The problem with limiting access to information, though, is that it would make any problem we assign the AI more difficult for it to solve. We would be weakening its problem-solving promise, possibly to the point of uselessness.</p><p>A second approach would be to limit what a super-intelligent AI is capable of doing by programming certain boundaries into it. This might be akin to writer Isaac Asimov's <a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics" target="_blank">Laws of Robotics</a>, the first of which goes: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."</p><p>Unfortunately, says the study, a series of logical tests reveals that it's impossible to create such limits. Any such containment algorithm, it turns out, would be self-defeating.</p>
Containment is impossible<img class="rm-lazyloadable-image rm-shortcode" type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTUwNzc5OC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMzA0NDM1Mn0.ukZgrtJYO_SyrMH21-Y_UTanTh4fJjHtTCdXTsQBOA8/img.jpg?width=980" id="e2ad4" width="1440" height="753" data-rm-shortcode-id="7eb6ca2dbb8d14d52264133b4d0af7a0" data-rm-shortcode-name="rebelmouse-image" />
Credit: UncleFredDesign/Adobe Stock<p>"If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable."</p><p>The team investigated stacking containment algorithms, with each monitoring the behavior of the previous one, but eventually the same problem arises: The final check halts itself, rendering it unreliable.</p>
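The obstacle the researchers describe is a relative of Turing's halting problem. The following Python sketch of the classic diagonal argument is an illustration of that general idea, not the paper's formal proof: given any claimed halting-decider, one can construct a program the decider must get wrong.

```python
def make_paradox(halts):
    """Given any claimed halting-decider halts(f) -> bool, construct a
    program that the decider must misjudge (Turing's diagonal argument)."""
    def paradox():
        if halts(paradox):
            while True:       # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return paradox

# A decider that always answers "never halts" is refuted simply by
# running the constructed program, which returns at once:
q = make_paradox(lambda f: False)
q()
```

The same self-reference defeats any candidate decider, no matter how it is built, which mirrors why the stacked containment checks described above keep running into the same wall.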
Too smart?<p>The Planck researchers also concluded that a similar bit of logic makes it impossible for us to know when a self-learning computer's intelligence has come to exceed our own. Essentially, we're not smart enough to be able to develop tests for intelligence superior to ours.</p><p style="margin-left: 20px;">"Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do." — Alan Turing</p><p>This means that it's entirely conceivable that an AI capable of self-learning may well quietly ascend to super-intelligence without our even knowing it — a scary reason all by itself to slow down our hurly-burly race to artificial intelligence.</p><p>In the end, we're left with a dangerous bargain to make or not make: Do we risk our safety in exchange for the possibility that AI will solve problems we can't?</p>
Northwell Health is using insights from website traffic to forecast COVID-19 hospitalizations two weeks in the future.
- The machine-learning algorithm works by analyzing the online behavior of visitors to the Northwell Health website and comparing that data to future COVID-19 hospitalizations.
- The tool, which uses anonymized data, has so far predicted hospitalizations with an accuracy rate of 80 percent.
- Machine-learning tools are helping health-care professionals worldwide better constrain and treat COVID-19.
The value of forecasting<img class="rm-lazyloadable-image rm-shortcode" type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNTA0Njk2OC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMzM2NDQzOH0.rid9regiDaKczCCKBsu7wrHkNQ64Vz_XcOEZIzAhzgM/img.jpg?width=980" id="2bb93" width="1546" height="1056" data-rm-shortcode-id="c0dacb7750a94ea663539c6f5b44916e" data-rm-shortcode-name="rebelmouse-image" />
Northwell emergency departments use the dashboard to monitor in real time.
Credit: Northwell Health<p>One unique benefit of forecasting COVID-19 hospitalizations is that it allows health systems to better prepare, manage and allocate resources. For example, if the tool forecasted a surge in COVID-19 hospitalizations in two weeks, Northwell Health could begin:</p><ul><li>Making space for an influx of patients</li><li>Moving personal protective equipment to where it's most needed</li><li>Strategically allocating staff during the predicted surge</li><li>Increasing the number of tests offered to asymptomatic patients</li></ul><p>The health-care field is increasingly using machine learning. It's already helping doctors develop <a href="https://care.diabetesjournals.org/content/early/2020/06/09/dc19-1870" target="_blank">personalized care plans for diabetes patients</a>, improving cancer screening techniques, and enabling mental health professionals to better predict which patients are at <a href="https://healthitanalytics.com/news/ehr-data-fuels-accurate-predictive-analytics-for-suicide-risk" target="_blank" rel="noopener noreferrer">elevated risk of suicide</a>, to name a few applications.</p><p>Health systems around the world have already begun exploring how <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7315944/" target="_blank" rel="noopener noreferrer">machine learning can help battle the pandemic</a>, including better COVID-19 screening, diagnosis, contact tracing, and drug and vaccine development.</p><p>Dr. Cruzen said these kinds of tools represent a shift in how health systems can tackle a wide variety of problems.</p><p>"Health care has always used the past to predict the future, but not in this mathematical way," Dr. Cruzen said. "I think [Northwell Health's new predictive tool] really is a great first example of how we should be attacking a lot of things as we go forward."</p>
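The essence of a two-week-ahead forecaster is a lagged regression: today's web-traffic signal predicts admissions fourteen days out. The sketch below is a minimal illustration of that idea on synthetic data; the 14-day lag, the linear model, and all the numbers are assumptions for demonstration, not Northwell's actual implementation:

```python
import numpy as np

def fit_lagged(traffic, admissions, lag=14):
    """Least-squares fit of admissions at day t+lag against traffic at day t."""
    X = np.column_stack([traffic[:-lag], np.ones(len(traffic) - lag)])
    y = admissions[lag:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # (slope, intercept)

# Synthetic demo: admissions track 5% of site traffic two weeks later.
rng = np.random.default_rng(1)
traffic = rng.poisson(1000, 100).astype(float)
admissions = np.concatenate([np.zeros(14), 0.05 * traffic[:-14]]) + rng.normal(0.0, 1.0, 100)

coef = fit_lagged(traffic, admissions, lag=14)
forecast = coef[0] * traffic[-1] + coef[1]   # predicted admissions 14 days out
```

On this synthetic series the fitted slope recovers the planted 5% relationship, and the last day's traffic yields the hospitalization estimate two weeks ahead.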
Making machine-learning tools openly accessible<p>Northwell Health has made its predictive tool <a href="https://github.com/northwell-health/covid-web-data-predictor" target="_blank">available for free</a> to any health system that wishes to utilize it.</p><p>"COVID is everybody's problem, and I think developing tools that can be used to help others is sort of why people go into health care," Dr. Cruzen said. "It was really consistent with our mission."</p><p>Open collaboration is something the world's governments and health systems should be striving for during the pandemic, said Michael Dowling, Northwell Health's president and CEO.</p><p>"Whenever you develop anything and somebody else gets it, they improve it and they continue to make it better," Dowling said. "As a country, we lack data. I believe very, very strongly that we should have been and should be now working with other countries, including China, including the European Union, including England and others to figure out how to develop a health surveillance system so you can anticipate way in advance when these things are going to occur."</p><p>In all, Northwell Health has treated more than 112,000 COVID patients. During the pandemic, Dowling said he's seen an outpouring of goodwill, collaboration, and sacrifice from the community and the tens of thousands of staff who work across Northwell.</p><p>"COVID has changed our perspective on everything—and not just those of us in health care, because it has disrupted everybody's life," Dowling said. "It has demonstrated the value of community, how we help one another."</p>
From 260-year-old ciphers to the most recent Zodiac Killer solution, these unbreakable codes just needed time.
- After 51 years, the Zodiac Killer's infamous "340 code" has been solved.
- Humans have a natural passion for puzzles, making cryptography a lifelong pursuit for some.
- Other famous cracked codes include Poe's Challenge and Zimmermann's Letter.
How I cracked the Zodiac Killer's cipher<span style="display:block;position:relative;padding-top:56.25%;" class="rm-shortcode" data-rm-shortcode-id="cae87dd9ac4bc213bd6bc12ddb67d557"><iframe type="lazy-iframe" data-runner-src="https://www.youtube.com/embed/3sLFRm29eto?rel=0" width="100%" height="auto" frameborder="0" scrolling="no" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></span><h2>Zodiac Killer</h2><p>After the Zodiac Killer's first cryptogram was quickly solved in 1969, he followed up with a 340-character puzzle that has baffled cryptographers ever since. Three men worked tirelessly on the letter and <a href="https://www.wired.com/story/zodiac-killers-cipher-finally-cracked-after-51-years/" target="_blank" rel="noopener noreferrer">finally revealed the encoded message</a>: </p><p>I HOPE YOU ARE HAVING LOTS OF FUN IN TRYING TO CATCH ME THAT WASN'T ME ON THE TV SHOW WHICH BRINGS UP A POINT ABOUT ME I AM NOT AFRAID OF THE GAS CHAMBER BECAUSE IT WILL SEND ME TO PARADICE ALL THE SOONER BECAUSE I NOW HAVE ENOUGH SLAVES TO WORK FOR ME WHERE EVERYONE ELSE HAS NOTHING WHEN THEY REACH PARADICE SO THEY ARE AFRAID OF DEATH I AM NOT AFRAID BECAUSE I KNOW THAT MY NEW LIFE WILL BE AN EASY ONE IN PARADICE DEATH</p><p>While the San Francisco branch of the FBI has acknowledged the puzzle has been solved, they're providing no further comment because the case remains open. </p><h2>Poe's Challenge </h2><p>Edgar Allan Poe's "The Gold Bug" was based on a cipher mystery, as Poe himself was fascinated with puzzles. In 1840, he offered a free subscription to Graham's Magazine to anyone who could stump him. He claimed to have solved a hundred entries, ending the contest by publishing a challenging code written by W.B. 
Tyler—who many at the time suspected was a pseudonym.</p><p>It wasn't until 2000 that a <a href="https://www.scientificamerican.com/article/a-cipher-from-poe-solved/" target="_blank" rel="noopener noreferrer">software engineer decoded the message</a>, which opened with, "It was early spring, warm and sultry glowed the afternoon. The very breezes seemed to share the delicious languor of universal nature..."</p><p>Given the numerous typesetting mistakes, recent researchers aren't convinced that Poe actually wrote it. The author will likely remain a mystery, but the code itself is in the books. </p><h2>Copiale cipher</h2><p>An entire team spanning two countries was needed to crack the 260-year-old mystery of the <a href="https://cl.lingfil.uu.se/~bea/copiale/" target="_blank" rel="noopener noreferrer">Copiale cipher</a>. Unlike a few lines of prose, this 75,000-character manuscript filled 105 pages and was written by a group of ophthalmologists. The book was encrypted in German and relied on a complex substitution code that used symbols and letters for spaces as well as text. </p><p>Dating from the second half of the eighteenth century, the first 16 pages discuss a masonic initiation ceremony by the Oculists. The strange ritual involves initiates "reading" a blank piece of paper before being given a pair of glasses—those wily eye doctors. After their eyes are washed, the referees then pluck a single eyebrow of each recruit. </p><p>Better than college hazing, though still an odd text to keep so secretive. Then again, maybe that was the point. </p>
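Both the Zodiac's 340 cipher and the Copiale manuscript are substitution ciphers at heart (the 340 adds homophones and a transposition step on top). A minimal monoalphabetic version in Python, with an arbitrary invented key, shows the basic mechanism and the classic line of attack:

```python
import string
from collections import Counter

KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"   # an arbitrary illustrative key, not the Zodiac's
ENC = str.maketrans(string.ascii_uppercase, KEY)
DEC = str.maketrans(KEY, string.ascii_uppercase)

ciphertext = "I AM NOT AFRAID".translate(ENC)
plaintext = ciphertext.translate(DEC)

# Frequency analysis, the classic attack: in a long enough message, the
# most common ciphertext symbol likely stands for a common letter like E or T.
letter_counts = Counter(c for c in ciphertext if c.isalpha()).most_common()
```

Homophonic schemes like the Zodiac's blunt exactly this attack by giving frequent letters several interchangeable symbols, which is part of why the 340 held out for 51 years.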
Slate statue of Mathematician Alan Turing at Bletchley Park
Credit: lenscap50 / Adobe Stock<h2>The Zimmermann Telegram</h2><p>Not all codes are so playful, or strange. Some are insidious. Such is the case with the <a href="https://www.history.com/news/what-was-the-zimmermann-telegram" target="_blank" rel="noopener noreferrer">Zimmermann Telegram</a>, a note sent from Germany to Mexico in 1917. Addressed to Heinrich von Eckardt, the German ambassador to Mexico, it proposed preparing America's southern neighbor for battle—on Germany's side. In exchange for weapons and funding, the Mexicans would reclaim Arizona, New Mexico, and Texas upon victory. </p><p>The cipher was cracked about a month after interception by Britain's "Room 40." The text read, in part:</p><p style="margin-left: 20px;">"We make Mexico a proposal of alliance on the following basis: make war together, make peace together, generous financial support and an understanding on our part that Mexico is to reconquer the lost territory in Texas, New Mexico, and Arizona. The settlement in detail is left to you."</p><p>Tensions between the US and Germany were already high; this message pushed America over the edge. A month later, President Wilson abandoned his intention of remaining neutral and entered World War I on the side of the Allies. </p><h2>The Enigma Code </h2><p>One of the most famous cracks in history is certainly the <a href="https://www.iwm.org.uk/history/how-alan-turing-cracked-the-enigma-code" target="_blank" rel="noopener noreferrer">Enigma Code</a>. If the Zimmermann Telegram helped us get into World War I, the second chapter only ended in our favor thanks to Alan Turing's unforgettable machine. </p><p>The Germans were utilizing an enciphering machine to pass messages to their Axis partners. Perhaps learning from past mistakes, they changed the entire cipher system on a daily basis. </p><p>Turing responded with machinery of his own, most famously the Bombe, which built on the theoretical groundwork of his Universal Turing Machine. 
Thanks to his inventions, alongside tireless efforts by British cryptologists, the Allied forces exploited procedural flaws and operator mistakes by the Germans. The Enigma Code was cracked, saving countless Allied lives and helping turn the tide of the war. </p><p>--</p><p><em>Stay in touch with Derek on <a href="http://www.twitter.com/derekberes" target="_blank">Twitter</a> and <a href="https://www.facebook.com/DerekBeresdotcom" target="_blank" rel="noopener noreferrer">Facebook</a>. His new book is</em> "<em><a href="https://www.amazon.com/gp/product/B08KRVMP2M?pf_rd_r=MDJW43337675SZ0X00FH&pf_rd_p=edaba0ee-c2fe-4124-9f5d-b31d6b1bfbee" target="_blank" rel="noopener noreferrer">Hero's Dose: The Case For Psychedelics in Ritual and Therapy</a>."</em></p>
A new theory suggests that dreams' illogical logic has an important purpose.
Overfitting<p>The goal of machine learning is to supply an algorithm with a data set — a "training set" — in which patterns can be recognized and from which predictions that apply to other, unseen data sets can be derived.</p><p>If an algorithm learns its training set too well, it merely spits out predictions that precisely — and uselessly — match that data, rather than capturing the underlying patterns that are likely to hold for other, as-yet-unseen data. In such a case, the algorithm describes what the data set <em>is</em> rather than what it <em>means</em>. This is called "overfitting."</p><img class="rm-lazyloadable-image rm-shortcode" type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDc4NTQ4Ni9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTY2NDM4NDk1Mn0.bMHbBbt7Nz0vmmQ8fdBKaO-Ycpme5eOCxbjPLEHq9XQ/img.jpg?width=980" id="5049a" width="1440" height="585" data-rm-shortcode-id="10fc10e636fcb55325a1f4f1f8bf9db3" data-rm-shortcode-name="rebelmouse-image" />
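Overfitting is easy to produce on purpose. In this short numerical sketch (the sine-plus-noise data and the polynomial model are invented for illustration), a polynomial with one coefficient per training point memorizes the data perfectly yet fails on fresh points from the same curve:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.3, 10)
x_test = np.linspace(0.05, 0.95, 10)    # unseen points from the same curve
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0.0, 0.3, 10)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial fit evaluated at x against y."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Degree 9 gives ten coefficients for ten points: the fit memorizes the data.
overfit = np.polyfit(x_train, y_train, 9)

train_err = mse(overfit, x_train, y_train)   # essentially zero
test_err = mse(overfit, x_test, y_test)      # much larger: no generalization
```

The near-zero training error describes what the training set <em>is</em>; the large test error shows the fit never learned what the data <em>means</em>.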
The value of noise<p>To keep machine learning from becoming too fixated on the specific data points in the set being analyzed, programmers may introduce extra, unrelated data as noise or corrupted inputs that are less self-similar than the real data being analyzed.</p><p>This noise typically has nothing to do with the project at hand. It's there, metaphorically speaking, to "distract" and even confuse the algorithm, forcing it to step back a bit to a vantage point at which patterns in the data may be more readily perceived and not drawn from the specific details within the data set.</p><p>Unfortunately, overfitting also occurs a lot in the real world as people race to draw conclusions from insufficient data points — xkcd has a fun example of how this can happen with <a href="https://xkcd.com/1122/" target="_blank">election "facts."</a></p><p>(In machine learning, there's also "underfitting," where an algorithm is too simple to track enough aspects of the data set to glean its patterns.)</p><img class="rm-lazyloadable-image rm-shortcode" type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNDc4NTQ5My9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYyMDE5NjY1M30.iS2bq7WEQLeS34zNFPnXwzAZZn9blCyI-KVuXmcHI6o/img.jpg?width=980" id="cd486" width="1440" height="810" data-rm-shortcode-id="debb36da6eff5a4f368914f6bac5054d" data-rm-shortcode-name="rebelmouse-image" />
Credit: agsandrew/Adobe Stock
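The noise trick can be shown in a few lines: refit the same memorizing model after augmenting the training set with jittered copies of each input. All the numbers here, including the jitter scale and the degree-9 polynomial, are illustrative assumptions, not a recipe from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, 10)

def mse(coeffs, xs, ys):
    """Mean squared error of a polynomial fit evaluated at xs against ys."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

# Clean fit: one coefficient per point, so it threads the noise exactly.
clean = np.polyfit(x, y, 9)

# Jittered fit: 50 copies of each input, perturbed; targets reused unchanged.
reps = 50
x_aug = np.repeat(x, reps) + rng.normal(0.0, 0.05, 10 * reps)
y_aug = np.repeat(y, reps)
jittered = np.polyfit(x_aug, y_aug, 9)

# The jittered model can no longer hit every original point exactly,
# which is the point: it is nudged toward the broader pattern instead.
fit_err_clean = mse(clean, x, y)
fit_err_jittered = mse(jittered, x, y)
```

The jitter acts like the "distraction" described above: by refusing to let the model treat each exact data point as gospel, it trades a little training accuracy for a smoother, more general fit.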