Innovative uses of blockchain tech, data trusts, algorithm assessments, and cultural shifts abound.
- A study published last year by the Pew Research Center found that most Americans distrust the federal government, and there's plenty of evidence to suggest that the situation has yet to improve.
- Governments have more access than ever to our private information, which creates an inherent tension between using data for the public good and respecting citizens' privacy rights.
- As emerging technologies mature, it will become more evident to the public which models are the most effective ways for governments to achieve the levels of transparency they've committed to delivering.
Using blockchain to secure citizens’ data<p>The Austrian government has recently turned to blockchain as a means of establishing transparent communications about the COVID-19 crisis between authorities, institutions, and citizens. Communication specialist A-Trust has launched the <a href="https://cointelegraph.com/news/austrian-govt-pilot-aims-to-secure-covid-19-communications-with-blockchain-tech" target="_blank">QualiSig project</a> on Ignis, part of the Ardor blockchain platform.</p><p>The project will use transparent, encrypted communications visible on the blockchain, along with decentralized data storage, to secure data against attacks. Citizens can control the use of their own data through qualified digital signatures. </p><p>Alexander Pfeiffer, a researcher at Danube University Krems and a partner to A-Trust, is confident that blockchain can help increase trust in governments. "The more such solutions are used by government agencies and their partners, the more likely it is that citizens will regain confidence in the operations of these government authorities," he wrote in an email. "In addition, it will also be possible to work much more efficiently and on a much higher level of mutual trust between the parties involved."</p><p>This is the second time the Austrian government has engaged Jelurida, the Swiss firm that operates Ardor, on projects designed to improve transparency. In May this year, the Austrian government <a href="https://cointelegraph.com/news/austrian-government-backed-project-will-use-blockchain-to-find-waste-heat-spots" target="_blank">announced</a> funding for a sustainability project designed to pinpoint sources of waste heat that could be redirected back into the energy grid. The "Hot City" project, a collaboration with the Austrian Institute of Technology, plans to use the Ignis chain to reward citizens who submit data about waste heat that can be harnessed for the public good. 
</p><p>An outspoken advocate of using blockchain to increase transparency, Lior Yaffe is the co-founder and director of Jelurida. "For the Austrian government, funding applied blockchain technologies has been a major priority for several years," he told Big Think. "Now, the Hot City and QualiSig projects show how a public blockchain can be used to store and display specific datasets, thus increasing transparency." </p>
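The core property these projects rely on, a record that anyone can verify and no one can quietly alter, can be illustrated with a minimal hash chain. This is a generic sketch, not A-Trust's or Jelurida's actual implementation; the record format and payloads are invented for illustration:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def chain_append(log, payload):
    """Append a record whose hash covers the previous record's hash,
    so any later edit to an earlier record breaks the chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def chain_verify(log):
    """Recompute every hash; True only if no record was altered or reordered."""
    prev = GENESIS
    for rec in log:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        if rec["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
chain_append(log, "advisory: stay home if symptomatic")
chain_append(log, "update: testing centers open")
assert chain_verify(log)

log[0]["payload"] = "tampered advisory"  # any edit is now detectable
assert not chain_verify(log)
```

A public blockchain adds decentralized replication and consensus on top of this basic tamper-evidence, so no single party, including the government itself, can rewrite the log.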
Demonstrating transparency in the electoral process<p>The potential for using blockchain to demonstrate electoral transparency has been hotly discussed for years now. The first such experiment took place in Denmark back in 2014, when the Liberal Alliance party used blockchain for one of its local elections.</p> <p>At the time, the chair of the party's IT group made a bold prediction. "Voting is the most important process in a democratic society," <a href="https://www.version2.dk/artikel/liberal-alliance-holder-e-valg-med-bitcoin-teknologi-57645" target="_blank">he said</a>. "Here, there is no doubt that new technology will play an increasing role going forward."</p>
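One building block behind verifiable electronic voting is a commit-reveal scheme: a voter publishes a hash commitment first and reveals the vote later, so the tally can be checked without the vote being changeable after the fact. The sketch below is a generic illustration of that idea, not the system the Liberal Alliance actually used:

```python
import hashlib
import secrets

def commit(vote: str) -> tuple[str, str]:
    """Publish the commitment now; keep the nonce secret until the reveal phase.
    The nonce prevents guessing the vote from the hash alone."""
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256(f"{nonce}:{vote}".encode()).hexdigest()
    return commitment, nonce

def verify(commitment: str, nonce: str, vote: str) -> bool:
    """Anyone can check that a revealed (nonce, vote) pair matches the commitment."""
    return hashlib.sha256(f"{nonce}:{vote}".encode()).hexdigest() == commitment

c, n = commit("candidate-a")
assert verify(c, n, "candidate-a")      # honest reveal checks out
assert not verify(c, n, "candidate-b")  # the vote can't be swapped after committing
```

Recording the commitments on a blockchain timestamps them publicly, which is what makes the process auditable by voters rather than only by election officials.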
Using transparency to combat corruption<p>Countries that have had problems with corruption going back generations have an especially steep mountain to climb when it comes to gaining public trust. Ukraine is one such example. As part of a <a href="https://www.cnbc.com/2019/03/01/ukraine-rows-back-anti-corruption-law-imf-aid-in-jeopardy.html" target="_blank">bailout agreement</a> in 2015, the International Monetary Fund demanded that the country's government do more to fight corruption.</p> <p>In 2017, the Ukrainian government engaged blockchain firm Bitfury to store all of its data on the blockchain, in an attempt to demonstrate better transparency. In September that year, the justice ministry successfully used the technology for auctioning seized assets, and later transferred state property and land registries to the platform. </p> <p>"We want to make the system of selling seized assets more transparent and secure," Deputy Justice Minister Serhiy Petukhov <a href="https://www.reuters.com/article/us-ukraine-blockchain/ukrainian-ministry-carries-out-first-blockchain-transactions-idUSKCN1BH2ME" target="_blank">told Reuters</a>, "so that the information there is accessible to everyone so that there aren't concerns about possible manipulation."</p>
Creating a culture of transparency<p>While technology can be a useful tool for governments to demonstrate transparency, it's not the only means. Some countries, particularly those in northern Europe such as Norway and Denmark, are renowned for their culture of governmental transparency.</p> <p>Canada also ranks highly in transparency from an international perspective, although its <a href="https://www.transparency.org/en/cpi/2019/press-and-downloads" target="_blank">Corruption Perceptions Index score</a> has been dropping. </p>
Inside the algorithms<p>One government that has been almost universally lauded for its handling of the coronavirus pandemic is New Zealand's. Even before the crisis, the government there had been taking some impressive measures to demonstrate transparency. One example is its "algorithm assessment" program, launched in 2018, designed to shed light on how the government is deploying AI for its citizens. </p> <p>Fourteen government agencies used a self-assessment method, underpinned by the government's own "principles for safe and effective use of data and analytics." The outcome was a <a href="https://www.data.govt.nz/use-data/analyse-data/government-algorithm-transparency-and-accountability/algorithm-assessment-report/" target="_blank">report</a> that acknowledged the need to retain human oversight over machine-led decisions and recommended using independent experts in privacy, ethics, and data. </p> <p>"We must prepare for the ethical challenges AI poses to our legal and political systems," <a href="https://www.beehive.govt.nz/release/government-will-move-quickly-ai-action-plan" target="_blank">stated Clare Curran</a>, New Zealand's Minister for Digital Services, "as well as the impact AI will have on workforce planning, the wider issues of digital rights, data bias, transparency, and accountability are also important for this Government to consider."</p>
Data trusts, a work in progress<p>In the UK, the Open Data Institute (ODI) has been working on several pilots to implement "<a href="https://theodi.org/project/data-trusts/" target="_blank">data trusts</a>" in collaboration with various government agencies, in an attempt to create more transparency. The ODI defines a data trust as a "legal structure that provides independent stewardship of data." Data trusts aim to increase access to data while providing confidence in how it is used.</p><p>The Institute worked on three pilots with varying degrees of success. The pilots attempted to bring transparency to food waste, the illegal wildlife trade, and smart city implementation, with a focus on parking data for green vehicles. </p>
Trust in flux<p>The events of 2020 so far have amounted to a perfect storm as far as testing government trust goes. Social distancing and shelter in place rules mean that there's a more significant reliance than ever on technology. However, governments need to continue to walk a tightrope of ensuring that they deploy the best technology available while demonstrating transparency.</p> <p>As emerging technologies mature, it will become more evident to the public which models are the most effective ways for governments to achieve the levels of transparency they've committed to delivering. </p>
The programming giant exits the space due to ethical concerns.
- IBM sent a letter to Congress stating it will no longer research, develop, or sell facial recognition software.
- AI-based facial recognition software remains widely available to law enforcement and private industry.
- Facial recognition software is far from infallible, and often reflects its creators' bias.
In what strikes one as a classic case of shutting the stable door long after the horse has bolted, IBM CEO Arvind Krishna has announced the company will no longer sell general-purpose facial recognition software, citing ethical concerns, in particular the technology's potential for use in racial profiling by police. The company will also cease research and development of the technology.
While laudable, this announcement arguably arrives about five years later than it might have, as numerous companies sell AI-based facial recognition software, often to law enforcement. Anyone who uses Facebook or Google also knows all about this technology, as we watch both companies tag friends and associates for us. (Facebook recently settled a lawsuit regarding the unlawful use of facial recognition for $550 million.)
It's worth noting that no one other than IBM has offered to cease developing and selling facial recognition software.
Image source: Tada Images/Shutterstock
Krishna made the announcement in a public letter to Senators Cory Booker (D-NJ) and Kamala Harris (D-CA), and Representatives Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY). Democrats in Congress are considering legislation to ban facial-recognition software as reported abuses pile up.
IBM's letter states:
"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."
Prior to its exit from facial recognition, IBM had a mixed record. The company scanned nearly a million Creative Commons images from Flickr without their owners' consent. On the other hand, IBM released a public data set in 2018 in an attempt at transparency.
Image source: Best-Backgrounds/Shutterstock
Privacy issues aside — and there definitely are privacy concerns here — the currently available software is immature and prone to errors. Worse, it often reflects the biases of its programmers, who work for private companies with little regulation or oversight. And since commercial facial recognition software is sold to law enforcement, the frequent identification errors and biases are dangerous: They can ruin the lives of innocent people.
The website Gender Shades offers an enlightening demonstration of the kinds of inaccuracies to which facial recognition is prone. The page was put together by Joy Buolamwini and Timnit Gebru in 2018 and doesn't reflect the most recent iterations of the software it tests, which comes from three companies: Microsoft, the now-presumably-late IBM Watson, and Face++. Nonetheless, it's telling. To begin with, all three programs did significantly better at identifying men than women. When the results were broken down by both gender (simplified to binary designations) and skin color, they were genuinely troubling for the bias they reflected.
Amazon's Rekognition facial recognition software is the one most frequently sold to law enforcement, and an ACLU test run in 2018 revealed it to be similarly error-prone: It incorrectly identified 28 members of Congress as people in a public database of 28,000 mugshots.
Update, 6/11/2020: Amazon today announced a 12-month moratorium on law-enforcement use of Rekognition, expressing the company's hope that Congress will in the interim enact "stronger regulations to govern the ethical use of facial recognition technology."
In 2019, a federal study by the National Institute of Standards and Technology reported empirical evidence of bias relating to age, gender, and race in the 189 facial recognition algorithms it analyzed. Members of some demographic groups were up to 100 times more likely to be misidentified than others. This study is ongoing.
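The kind of disaggregated analysis behind such findings can be illustrated with a toy computation: count misidentifications separately per demographic group instead of reporting a single overall accuracy. The data, group labels, and IDs below are entirely hypothetical:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples.
    Returns each group's misidentification rate, so disparities that a
    single aggregate accuracy number would hide become visible."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical toy data: (demographic group, match result, ground truth)
sample = [
    ("group_a", "id1", "id1"), ("group_a", "id2", "id2"),
    ("group_a", "id3", "id3"), ("group_a", "id4", "id9"),
    ("group_b", "id5", "id7"), ("group_b", "id6", "id8"),
]
rates = error_rates_by_group(sample)
# group_a: 1 error in 4 (0.25); group_b: 2 errors in 2 (1.0)
```

Real evaluations like NIST's are far more involved (false-match vs. false-non-match rates, score thresholds, millions of image pairs), but the principle of reporting errors per group rather than in aggregate is the same.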
Facial rec's poster child
Image source: Gian Cescon/Unsplash
The company most infamously associated with privacy-invading facial recognition software has to be Clearview AI, about which we've previously written. The company scraped over 3 billion images from social media without posters' permission to develop software sold to law enforcement agencies.
The ACLU sued Clearview AI in May of 2020 for engaging in "unlawful, privacy-destroying surveillance activities" in violation of Illinois' Biometric Information Privacy Act. The organization wrote to CNN, "Clearview is as free to look at online photos as anyone with an internet connection. But what it can't do is capture our faceprints — uniquely identifying biometrics — from those photos without consent." The ACLU's complaint alleges "In capturing these billions of faceprints and continuing to store them in a massive database, Clearview has failed, and continues to fail, to take the basic steps necessary to ensure that its conduct is lawful."
The longer term
Though it undoubtedly sends a chill down the spine, the onrush of facial recognition technologies — encouraged by the software industry's infatuation with AI — suggests that we can't escape being identified by our faces for long, legislation or not. Advertisers want to know who we are, law enforcement wants to know who we are, and as our lives revolve ever more decisively around social media, many will no doubt welcome technology that automatically brings us together with friends and associates old and new. Concerns about the potential for abuse may wind up taking a back seat to convenience.
It's been an open question for some time whether privacy is even an issue for those who've grown up surrounded by connected devices. These generations don't care so much about privacy because they — realistically — don't expect it, particularly in the U.S. where very little is legally private.
IBM's principled stand may ultimately prove more symbolic than anything else.
Got any embarrassing old posts collecting dust on your profile? Facebook wants to help you delete them.
- The feature is called Manage Activity, and it's currently available through mobile and Facebook Lite.
- Manage Activity lets users sort old content by filters like date and posts involving specific people.
- Some companies now use AI-powered background checking services that scrape social media profiles for problematic content.
Social media background checks<p>Now, the feature could bring users some peace of mind. After all, the platform currently has more than <a href="https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/#:~:text=How%20many%20users%20does%20Facebook,network%20ever%20to%20do%20so." target="_blank">2.6 billion monthly active users</a>, and some of these users created accounts in their teens, around the time Facebook became widely available in 2006. As these veteran users get older, it seems likely that many would want to delete years-old posts, whether because content is embarrassing, outdated or professionally jeopardizing.</p><p>Some employers now use automated or <a href="https://www.goodegg.io/blog/is-this-legal-and-other-social-media-screening-faqs" target="_blank">third-party background</a> checks that scrape candidates' social media accounts. These checks can search for content that's racist, sexually explicit, criminal or otherwise offensive. </p><p>But they're not always accurate. One AI-powered background service called <a href="https://www.vox.com/recode/2020/5/11/21166291/artificial-intelligence-ai-background-check-checkr-fama" target="_blank">Checkr has even faced lawsuits</a> from people who claim the company's algorithms made mistakes that cost them job opportunities.</p>
How to use Manage Activity<iframe src="https://www.facebook.com/plugins/video.php?href=https%3A%2F%2Fwww.facebook.com%2Ffacebookapp%2Fvideos%2F707969696627907%2F&show_text=0&width=560" width="560" height="353" style="border:none;overflow:hidden" scrolling="no" frameborder="0" allowTransparency="true" allowFullScreen="true"></iframe><p>It's unclear when Manage Activity will become available on desktop. But to learn how to use it on mobile or Facebook Lite, check out this instructional video from Facebook.</p>
The program aims to notify people after they've come in close contact with someone who tested positive.
- The program currently involves 25,000 contact tracers who are capable of tracing 10,000 contacts per day.
- Participation in the program is voluntary, though officials said it may become mandatory if necessary.
- The program will eventually include a smartphone app that records who you've come in close proximity to.
TIMOTHY A. CLARY / Getty<p>Hancock said the program will be voluntary at first, but that the government will make it mandatory if "that's what it takes."</p><p style="margin-left: 20px;">"If we don't collectively make this work, the only way forward is to keep the lockdown," he said. "The more people who follow the instructions, the safer we can be and the faster we can lift the lockdown."</p><p>The NHS wants people who are experiencing symptoms to visit <a href="https://www.nhs.uk/conditions/coronavirus-covid-19/" target="_blank">nhs.uk/conditions/coronavirus-covid-19</a>. The agency also wants to automate its Test and Trace program through the <a href="https://www.telegraph.co.uk/technology/2020/05/27/nhs-app-covid-19-uk-coronavirus-track-trace/" target="_blank">NHS COVID-19 app</a>, which is currently being tested by more than 52,000 people on the Isle of Wight. If the test on the Isle of Wight is successful, the app is expected to be available for the rest of England in June.</p>
How the contact-tracing app works<p>The app doesn't ask for names or personal information, except for a partial postal code. Rather, each user's phone is assigned a randomized identifier number that's transmitted to a centralized database. The app doesn't do much else, besides ask users how they're feeling each day.</p><p>Other governments have already been using digital contact tracing apps to limit the spread of COVID-19. South Korea, for example, made a tracing app mandatory for new arrivals to the country, and people who violate quarantine are required to wear location-tracking bracelets. As of May 29, South Korea has reported fewer than 300 deaths. The U.K. has suffered <a href="https://www.reuters.com/article/us-health-coronavirus-britain-casualties/uks-covid-19-death-toll-tops-40000-worst-in-europe-idUSKBN22O16T" target="_blank">more than 40,000</a>.</p>
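The random-identifier approach described above can be sketched in a few lines: phones broadcast short-lived random IDs, and exposure is detected by comparing the IDs a phone overheard against the IDs of users who later report testing positive. This is a deliberately simplified illustration, not the NHS app's actual protocol; real systems derive identifiers cryptographically from rotating keys, and the NHS design does the matching on a central server rather than on the phone as shown here:

```python
import secrets

def new_daily_ids(n_intervals=96):
    """One fresh random identifier per 15-minute interval (96 per day).
    Nothing in the identifier links back to a person or a place."""
    return [secrets.token_hex(16) for _ in range(n_intervals)]

def check_exposure(ids_heard, ids_reported_positive):
    """Did any identifier this phone overheard belong to someone
    who later reported testing positive?"""
    return bool(set(ids_heard) & set(ids_reported_positive))

alice_ids = new_daily_ids()
bob_heard = [alice_ids[10]]  # Bob's phone overheard one of Alice's broadcasts

# Alice tests positive and uploads her identifiers for the day
assert check_exposure(bob_heard, alice_ids)       # Bob gets an exposure alert
assert not check_exposure([], alice_ids)          # no contact, no alert
```

Because the identifiers are random and rotate frequently, observers who collect them can't directly recover a name, which is the privacy property Levy defends in the quote below.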
Privacy concerns<p>Manual contact tracing has been used for decades to help contain viruses — the NHS describes it as a "tried and tested method used to slow down the spread of infectious diseases." But the prospect of digital contact tracing has <a href="https://www.washingtonpost.com/politics/2020/04/28/contact-tracing-apps-can-help-stop-coronavirus-they-can-hurt-privacy/" target="_blank">raised concerns for privacy advocates</a> who question how governments and private companies might use the technology. </p><p>Speaking about the new NHS app, Ian Levy, the technical director of the National Cyber Security Centre (NCSC), told <a href="https://www.wired.co.uk/article/nhs-covid-19-tracking-app-contact-tracing" target="_blank">Wired U.K.</a>:</p><p style="margin-left: 20px;">"In theory, that's a privacy risk, but it's only stored on the NHS app system and there's no way to link device 123456 to 'Ian Levy' or a particular place," Levy said. "If you discover that my app ID is 123456, there are some theoretical things you can do to try to understand my contacts if you've followed me round. But if you've followed me round, you've probably seen my contacts anyway."</p><p>In the U.S., federal officials haven't indicated that they're developing a national contact-tracing app. But several states — <a href="https://www.forbes.com/sites/rachelsandler/2020/05/20/alabama-north-dakota-and-south-carolina-to-debut-apple-and-googles-covid-19-contact-tracing/#4147ac591732" target="_blank">Alabama, North Dakota, and South Carolina</a> — are working individually with Apple and Google to implement their own contact-tracing apps. </p><p>Similar to the NHS app, Apple and Google's system uses Bluetooth signals to record when users come in close proximity with each other. The companies said their system won't collect users' personal information. </p><p>Apple and Google don't develop the contact-tracing apps themselves. Rather, they've made the technology available so that individual health agencies can do so. In addition to the three U.S. states, <a href="https://www.forbes.com/sites/rachelsandler/2020/05/20/alabama-north-dakota-and-south-carolina-to-debut-apple-and-googles-covid-19-contact-tracing/#49f900e1732e" target="_blank">22 countries have also signed on to use Apple and Google's system.</a></p>
The system can even be designed to send alerts to employees when they've come too close to a coworker.
- Since the pandemic began, nations have been using technology in varying degrees to contain the outbreak.
- This new tool is able to place moving people on a map and estimate the distance between them.
- Some privacy advocates are raising concerns about private companies and governments installing surveillance technologies.
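Proximity systems like the one described typically estimate distance from Bluetooth signal strength (RSSI) using a log-distance path-loss model. The article doesn't specify this tool's method, so the sketch below is a generic illustration; the reference power and path-loss exponent are common assumed values that vary widely by device and environment:

```python
def estimate_distance(rssi_dbm, measured_power_dbm=-59, path_loss_exponent=2.0):
    """Rough distance in meters from a Bluetooth RSSI reading.
    measured_power_dbm is the expected RSSI at 1 m; path_loss_exponent
    models how quickly the signal decays (2.0 is free space).
    Both constants are assumptions, not calibrated values."""
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def too_close(rssi_dbm, threshold_m=2.0):
    """Flag a contact closer than the (assumed) 2-meter distancing threshold."""
    return estimate_distance(rssi_dbm) < threshold_m

# A reading at the 1 m reference power implies roughly 1 m
assert 0.5 < estimate_distance(-59) < 1.5
assert too_close(-55)       # stronger signal, closer contact
assert not too_close(-80)   # weaker signal, farther away
```

In practice RSSI is noisy (bodies, walls, and phone orientation all attenuate the signal), so real systems smooth readings over time rather than trusting a single sample, which is one reason such distance estimates, and alerts built on them, can misfire.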