A clever new design introduces a way to image the vast ocean floor.
- Neither light- nor sound-based imaging devices can penetrate the deep ocean from above.
- Stanford scientists have invented a new system that incorporates both light and sound to overcome the challenge of mapping the ocean floor.
- Deployed from a drone or helicopter, it may finally reveal what lies beneath our planet's seas.
Most of the ocean floor, which covers about 70 percent of the Earth's surface, remains unmapped. With current technology, mapping it is an extremely arduous and time-consuming task, accomplished only by trawling unmapped areas with sonar equipment dangling from boats. Advanced imaging technologies that work so well on land are stymied by the relative impenetrability of water.
That may be about to change. Scientists at Stanford University have announced an innovative system that combines the strengths of light-based devices and those of sound-based devices to finally make mapping the entire sea floor possible from the sky.
The new system is detailed in a study published in IEEE Xplore.
"Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth's landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water," says lead study author and electrical engineer Amin Arbabian of Stanford's School of Engineering in Stanford News.
One of the most reliable ways to map a terrain is by using sonar, which deduces the features of a surface by analyzing sound waves that bounce off it. However, if one were to project sound waves from above into the sea, more than 99.9 percent of those sound waves would be lost as they passed into water. If they managed to reach the seabed and bounce upward out of the water, another 99.9 percent would be lost.
Electromagnetic devices—using light, microwaves, or radar signals—are also fairly useless for ocean-floor mapping from above. Says first author Aidan Fitzpatrick, "Light also loses some energy from reflection, but the bulk of the energy loss is due to absorption by the water." (Ever try to get phone service underwater? Not gonna happen.)
The solution presented in the study is the Photoacoustic Airborne Sonar System (PASS). Its core idea is the combining of sound and light to get the job done. "If we can use light in the air, where light travels well, and sound in the water, where sound travels well, we can get the best of both worlds," says Fitzpatrick.
An imaging session begins with a laser fired down to the water from a craft above the area to be mapped. When it hits the ocean surface, it's absorbed and converted into fresh sound waves that travel down to the target. When these bounce back up through the surface and into the air toward the PASS receiver, they still suffer a loss. However, using light on the way in and sound only on the way out cuts that loss in half.
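To see why that halving matters, it helps to express the losses in decibels. This is a back-of-the-envelope sketch based on the article's 99.9 percent figure; the exact numbers are illustrative, not the study's own:

```python
import math

# ~99.9% of acoustic energy is lost at each air-water crossing (per the article)
transmitted_fraction = 1 - 0.999

# Loss for one boundary crossing, in decibels
loss_db_one_way = -10 * math.log10(transmitted_fraction)   # ~30 dB

# Pure airborne sonar: sound crosses the boundary twice (down and back up)
loss_db_sonar = 2 * loss_db_one_way                        # ~60 dB

# PASS: the laser enters the water with little boundary loss; only the
# returning sound wave pays the air-water penalty
loss_db_pass = loss_db_one_way                             # ~30 dB

print(loss_db_sonar, loss_db_pass)
```

Half the loss in decibel terms is a factor-of-1,000 improvement in the linear energy that actually reaches the receiver, which is what makes detection feasible at all.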
This means that the PASS transducers that ultimately retrieve the sound waves have plenty to work with. "We have developed a system," says Arbabian, "that is sensitive enough to compensate for a loss of this magnitude and still allow for signal detection and imaging." From there, software assembles a 3D image of the submerged target from the acoustic signals.
PASS was initially designed to help scientists image underground plant roots.
Although its developers are confident that PASS will be able to see down thousands of meters into the ocean, so far it's only been tested in an "ocean" about the size of a fish tank—tiny and obviously free of real-world ocean turbulence.
Fitzpatrick says that "current experiments use static water, but we are currently working toward dealing with water waves. This is a challenging, but we think feasible, problem."
Scaling up, Fitzpatrick adds, "Our vision for this technology is on-board a helicopter or drone. We expect the system to be able to fly at tens of meters above the water."
A small proof-of-concept study shows smartphones could help detect drunkenness based on the way you walk.
- The legal blood alcohol concentration (BAC) limit for driving in the U.S. is 0.08 percent. You can measure your BAC 15 minutes after your first drink, and your levels will remain safe if you consume no more than one standard drink per hour.
- Portable breathalyzers can be used to measure BAC, but not many people own these devices.
- A small proof-of-concept study suggests that your smartphone could detect your drunkenness based on the way you walk.
The legal limit for driving within the United States is a blood alcohol concentration of 0.08 percent. According to BAC Track, you can measure your BAC (blood alcohol content) as soon as 15 minutes after your first drink. BAC Track suggests your BAC level will remain within safe limits if you consume one standard drink per hour.
According to BAC Track, one standard drink is half an ounce of alcohol, which can be:
- One 12 ounce beer
- One 5 ounce glass of wine
- One 1.5 ounce shot of distilled alcohol
There are many things that influence a person's BAC, including how quickly you drink, your body weight, altitude, how much food you've eaten, whether you're male or female, and what kind of medications you're currently on.
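Several of those factors appear in the classic Widmark formula that drinking guides use for rough BAC estimates. Here's a minimal sketch; the 14-gram standard drink, the distribution ratios, and the 0.015-percent-per-hour elimination rate are textbook approximations, not values from this article, and this is an illustration only, never a substitute for an actual test:

```python
def estimate_bac(standard_drinks, weight_kg, hours_since_first_drink, r=0.68):
    """Rough Widmark-style BAC estimate, in percent.

    r is the body-water distribution ratio: ~0.68 for men, ~0.55 for women.
    """
    alcohol_grams = standard_drinks * 14.0        # ~14 g pure alcohol per US standard drink
    peak_bac = alcohol_grams / (r * weight_kg * 1000) * 100
    eliminated = 0.015 * hours_since_first_drink  # body clears roughly 0.015% per hour
    return max(peak_bac - eliminated, 0.0)

# Two drinks over two hours for an 80 kg man: still well under 0.08 percent
print(estimate_bac(standard_drinks=2, weight_kg=80, hours_since_first_drink=2))
```

The one-drink-per-hour rule of thumb works because, for many adults, elimination roughly keeps pace with a single standard drink's contribution.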
A new study has found that your smartphone could actually tell you if your blood alcohol concentration exceeds the limit of 0.08 percent.
The small study that could mean big things for alcohol testing
Image by gritsalak karalak on Shutterstock
While devices such as portable breath analyzers are available, not many people own them, due to their expense and the social stigma surrounding them. This 2020 study suggests smartphones could be an alternative. According to Pew Research, up to 81 percent of people own a smartphone.
For this small-scale study, 22 participants visited the lab to consume a vodka-based drink calibrated to raise their breath alcohol concentration to 0.20 percent.
Dr. Brian Suffoletto of the Stanford Medical School's Department of Emergency Medicine, the study's corresponding author, tells Medical News Today: "I lost a close friend to a drinking and driving crash in college. And as an emergency physician, I have taken care of scores of adults with injuries related to acute alcohol intoxication. Because of this, I have dedicated the past 10 years to testing digital interventions to prevent deaths and injury related to excessive alcohol consumption."
How it works:
Before having the drink, each participant had a smartphone strapped to their back and was asked to walk 10 steps in a straight line and then back again. Every hour for the next 7 hours, the participants repeated this walk.
The sensors on the smartphone measured each person's acceleration and their movements (both from side to side and up and down).
This is not the first study of its kind.
Previous research (such as this 2016 study) has used machine learning to determine whether a person was intoxicated. That data, gathered from 34 'intoxicated' participants, generated time and frequency domain features such as sway area and cadence, which were classified using supervised machine learning.
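A sketch of what such a pipeline might look like: extract simple sway and cadence features from a phone's accelerometer trace, then hand them to any supervised classifier. The 100 Hz sampling rate and the specific features below are illustrative assumptions, not either study's actual pipeline:

```python
import numpy as np

SAMPLE_RATE_HZ = 100  # assumed accelerometer sampling rate

def gait_features(accel):
    """Extract simple gait features from an (N, 3) acceleration array
    whose columns are [forward, lateral, vertical] acceleration."""
    features = {}
    for name, axis in zip(("forward", "lateral", "vertical"), accel.T):
        features[f"{name}_sway"] = float(axis.std())  # variability as a sway proxy
    # Dominant step frequency (a cadence proxy) from the vertical axis
    vertical = accel[:, 2] - accel[:, 2].mean()
    spectrum = np.abs(np.fft.rfft(vertical))
    freqs = np.fft.rfftfreq(len(vertical), d=1 / SAMPLE_RATE_HZ)
    features["cadence_hz"] = float(freqs[spectrum.argmax()])
    return features
```

A feature vector like this, computed per walking bout and labeled sober or intoxicated, is exactly the kind of input a supervised learning model can be trained on.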
This 2020 study showed promising results of the smartphone analysis: over 90 percent accuracy.
Analyzing the data, researchers were able to determine whether a participant had exceeded the legal BAC limit 92.5 percent of the time.
Of course, the study had some limitations.
In real life, a person is very unlikely to keep their smartphone strapped to their back. Placing the phone in your pocket (or carrying it) could impact the accuracy.
This study also measured breath alcohol concentrations, which are on average 15 percent lower than blood alcohol concentrations.
The implications of this small-scale study are exciting.
While this was a relatively small study, it is being used as a "proof of concept" marker for further research. Researchers on this project explain that future research would ideally be done in real-world settings with more volunteers.
Dr. Brian Suffoletto explains to Medical News Today:
"In 5 years, I would like to imagine a world in which, if people go out with friends and drink at risky levels, they get an alert at the first sign of impairment and are sent strategies to help them stop drinking and protect them from high-risk events, like driving, interpersonal violence, and unprotected sexual encounters."
The Silicon Valley titan has promised scholarships for its tech-focused certificate courses alongside $10 million in job training grants.
America has a "middle skills" gap. Good jobs requiring only a high school diploma have contracted since the 1990s, while workers with a college education continue to fare well. But according to a report out of Georgetown University, two out of three entry-level jobs today require some training and education beyond high school but not a bachelor's degree. This demand for middle-skilled workers has grown as the digital revolution transforms work faster than people can adapt to the technology they rely on.
As Stephane Kasriel, former CEO of Upwork, wrote for the World Economic Forum: "Our current education system adapts to change too slowly and operates too ineffectively for this new world. […] Skills, not college pedigree, will be what matters for the future workforce." To bridge the skills gap, many employers and institutions have turned to online education and other non-traditional models. One such employer is Google.
Unable to find enough qualified candidates to fill necessary positions, the Silicon Valley titan created its own certification course on Coursera to teach people IT support skills. The program proved so successful that earlier this week, Google announced it would expand the program to include three new courses. It's also offering scholarships to help in-need people enroll.
An improved educational pipeline?
A chart showing the increase and decrease of "good jobs" based on level of education required.
The new suite of courses will train students in skills necessary for data analyst, project manager, and UX design positions. While Google has released no specifics on these courses, they will likely follow the current certificate course template. This means they won't require a degree to enroll, will be entirely online, and will be taught by Google staff.
Like other massive open online courses (or MOOCs), they will likely be self-paced. According to Coursera, Google's current IT support course takes between three and six months to complete at $49 a month. To offset those costs, Google is also offering 100,000 need-based scholarships.
"College degrees are out of reach for many Americans, and you shouldn't need a college diploma to have economic security. We need new, accessible job-training solutions—from enhanced vocational programs to online education—to help America recover and build," wrote Kent Walker, SVP of Global Affairs at Google, in a release.
By the courses' end, students will have created hands-on projects to build their portfolios and will receive a certificate of completion. In the release, Walker states that Google will consider the certification the "equivalent of a four-year degree" for job seekers. The current IT support course has a credit recommendation from the American Council on Education, meaning it may be possible for students to translate the certificate into some college credits. No word on whether the new courses will also have credit recommendations.
"Launched in 2018, the Google IT Certificate program has become the single most popular certificate on Coursera, and thousands of people have found new jobs and increased their earnings after completing the course," Walker added.
As part of the initiative, Google.org, the company's charity branch, has committed $10 million in job training grants. The grants will go to Google's nonprofit partners, such as YWCA, JFF, and NPower, to help women, veterans, and underrepresented groups obtain job skills relevant to today's in-demand positions.
The need for middle skills will grow as the American workforce continues to digitize at an extraordinary rate. According to the Brookings Institution, in 2002 just 5 percent of jobs studied—which covered 90 percent of the workforce—required high-digital skills, while 40 percent required medium-level skills. By 2016, those figures had risen to 23 and 48 percent, respectively. In the same period, jobs requiring low-digital skills fell precipitously, from 56 to 30 percent. Beyond rapid job growth and competitive advantage, those with the skills are set to reap the economic rewards.
But more needs to be done.
As of this writing, more than 275,000 people have enrolled in Google's IT Support course, but it's unclear how many companies will accept the certificate as proof of capability. While Google and its Employer Consortium, a group of employers who connect with Google to find prospective candidates, may consider the certificate equivalent to a four-year degree, MOOC certifications lack the universality of either associate's or bachelor's degrees. Without mainstream acceptance, graduates may be contending with each other within a puddle of prospective companies, not the vast, oceanic marketplace of corporate America.
And the COVID-19 pandemic hasn't halted but accelerated digitalization as companies widely adopt new technological trends to survive. Many of the 20 million unemployed Americans may suddenly need to upskill or even find their jobs outsourced to the digital realm. They'll need a quick yet employer-recognized means of acquiring new skills to help find work.
Ten million dollars will buy Google—a company valued at one trillion dollars—a nice commemorative brick in the path to a solution and hopefully help many lives. But we have many miles of work to go.
The programming giant exits the space due to ethical concerns.
- IBM sent a letter to Congress stating it will no longer research, develop, or sell facial recognition software.
- AI-based facial recognition software remains widely available to law enforcement and private industry.
- Facial recognition software is far from infallible, and often reflects its creators' bias.
In what strikes one as a classic case of shutting the stable door long after the horse has bolted, IBM's CEO Arvind Krishna has announced the company will no longer sell general-purpose facial recognition software, citing ethical concerns, in particular with the technology's potential for use in racial profiling by police. They will also cease research and development of this tech.
While laudable, this announcement arguably arrives about five years later than it might have, as numerous companies sell AI-based facial recognition software, often to law enforcement. Anyone who uses Facebook or Google also knows all about this technology, as we watch both companies tag friends and associates for us. (Facebook recently settled a lawsuit regarding the unlawful use of facial recognition for $550 million.)
It's worth noting that no one other than IBM has offered to cease developing and selling facial recognition software.
Image source: Tada Images/Shutterstock
Krishna made the announcement in a public letter to Senators Cory Booker (D-NJ) and Kamala Harris (D-CA), and Representatives Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY). Democrats in Congress are considering legislation to ban facial-recognition software as reported abuses pile up.
IBM's letter states:
"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."
Prior to their exit entirely from facial recognition, IBM had a mixed record. The company scanned nearly a million Creative Commons images from Flickr without their owners' consent. On the other hand, IBM released a public data set in 2018 in an attempt at transparency.
Image source: Best-Backgrounds/Shutterstock
Privacy issues aside — and there definitely are privacy concerns here — the currently available software is immature and prone to errors. Worse, it often reflects the biases of its programmers, who work for private companies with little regulation or oversight. And since commercial facial recognition software is sold to law enforcement, the frequent identification errors and biases are dangerous: They can ruin the lives of innocent people.
The website Gender Shades offers an enlightening demonstration of the type of inaccuracies to which facial recognition is inclined. The page was put together by Joy Buolamwini and Timnit Gebru in 2018 and doesn't reflect the most recent iterations of the software it tests from three companies: Microsoft, the now-presumably-late IBM Watson, and Face++. Nonetheless, it's telling. To begin with, all three programs did significantly better at identifying men than women. And when it came to gender identification—simplified to binary designations for simplicity—across skin colors, the unimpressive results were genuinely troubling for the bias they reflected.
Amazon's Rekognition facial recognition software is the one most frequently sold to law enforcement, and an ACLU test run in 2018 revealed it also to be pretty bad: It incorrectly identified 28 members of Congress as people in a public database of 28,000 mugshots.
Update, 6/11/2020: Amazon today announced a 12-month moratorium on law-enforcement use of Rekognition, expressing the company's hope that Congress will in the interim enact "stronger regulations to govern the ethical use of facial recognition technology."
In 2019, a federal study by the National Institute of Standards and Technology reported empirical evidence of bias relating to age, gender, and race in the 189 facial recognition algorithms they analyzed. Members of certain groups of people were 100 times more likely to be misidentified. This study is ongoing.
Facial rec's poster child
Image source: Gian Cescon/Unsplash
The company most infamously associated with privacy-invading facial recognition software has to be Clearview AI, about whom we've previously written. This company scraped over 3 billion social media images without posters' permission to develop identification software sold to law enforcement agencies.
The ACLU sued Clearview AI in May of 2020 for engaging in "unlawful, privacy-destroying surveillance activities" in violation of Illinois' Biometric Information Privacy Act. The organization wrote to CNN, "Clearview is as free to look at online photos as anyone with an internet connection. But what it can't do is capture our faceprints — uniquely identifying biometrics — from those photos without consent." The ACLU's complaint alleges "In capturing these billions of faceprints and continuing to store them in a massive database, Clearview has failed, and continues to fail, to take the basic steps necessary to ensure that its conduct is lawful."
The longer term
Though it undoubtedly sends a chill down the spine, the onrush of facial recognition technologies — encouraged by the software industry's infatuation with AI — suggests that we can't escape being identified by our faces for long, legislation or not. Advertisers want to know who we are, law enforcement wants to know who we are, and as our lives revolve ever more decisively around social media, many will no doubt welcome technology that automatically brings us together with friends and associates old and new. Concerns about the potential for abuse may wind up taking a back seat to convenience.
It's been an open question for some time whether privacy is even an issue for those who've grown up surrounded by connected devices. These generations don't care so much about privacy because they — realistically — don't expect it, particularly in the U.S. where very little is legally private.
IBM's principled stand may ultimately be more pyrrhic than anything else.
Get your finances in shape with this powerful money manager.
- Emma is a personal finance and budgeting app to help you better control your money.
- Emma organizes and analyzes all your financial accounts to save you cash.
- A $299.99 lifetime subscription is on sale now for just $39.
Quick...how many monthly subscriptions do you have? Subscriptions for streaming services and cable; website, newspaper, magazine or app access; subscription boxes; or services like a food prep supplier or the gym? It’s probably even more than that number you just blurted out.
Welcome to the subscription economy, where companies are increasingly moving to charging you monthly or annual fees to continue providing you with goods and services that you may not always need.
That’s just one of the ways money can slip out of your pocket without you even realizing it. Emma is a money management and budgeting app that can help you stem that outgoing cashflow and streamline your expenses so money doesn’t get wasted. Right now, a lifetime subscription to Emma is nearly 90 percent off, just $39.
Launched last year and already featured in outlets like TechCrunch, Forbes and the Financial Times, the Emma app is described as a fitness tracker for your money. Emma syncs financial statements from all your bank accounts, credit cards and investments, tracks your payments and analyzes your personal finances to help you make smarter decisions about where your money goes.
With Emma, you can set up budgets for all your regular expenditures like monthly bills, groceries, transportation and more. Once it has an overview of your finances, Emma will point out potential problems like overdrafts or an upcoming payment. It’ll also help you spot waste like subscriptions that you can cancel, all to help you keep a tighter rein on your money.
Emma uses end-to-end, bank-grade 256-bit TLS encryption, so your sensitive financial information won't fall into the wrong hands.
Right now, a lifetime of Emma Personal Finance and Budgeting app service, a $299.99 value, is on sale for only $39.
Prices are subject to change.
When you buy something through a link in this article or from our shop, Big Think earns a small commission. Thank you for supporting our team's work.