Due to weak oversight, we don’t really know how tech companies are using facial recognition data - Rickey J. White, Jr. | RJW™

For years, activists in the privacy and security communities have warned that biometric data, including photo IDs, fingerprints, and other information, could be hacked by bad actors. This past week, those fears were confirmed when U.S. Customs and Border Protection (CBP) announced that hackers had gained access to a database of traveler photo IDs and license plate images managed by subcontractor Perceptics. In recent years, CBP has asked foreign travelers for facial recognition data, fingerprints, and other biometric information, so that information may also be at risk of being obtained by hackers.

If facial recognition (FR) data is compromised, along with other personal information like names and social security numbers, a person’s identity can easily be stolen for financial fraud. Beyond this type of criminal activity, there is the specter of physical risks—such as revealing an individual’s location to a stalker, or handing over home security FR data to a burglar. And, of course, if a government maintains a database of face scans, it can be used to identify and control activists, which is how China is now surveilling its Muslim Uighur minority community.

Agencies are rushing to collect as much information as they can, and it’s outpacing their ability to protect the data, says Dave Maass, a senior investigative researcher at the Electronic Frontier Foundation. The same holds true for biometric vendors marketing their systems to private sector companies.

“To be honest, they should’ve seen this coming, considering that India’s biometric system had been breached just a year before,” Maass says. “We’ve also seen law enforcement misplacing trust in vendors, for whom public safety and cybersecurity may not be their primary concerns.”

Maass expects breaches of surveillance systems like facial recognition tech to continue to grow. Several years ago, the Electronic Frontier Foundation found that automated license plate readers were exposed online, a problem that reporters recently confirmed is proliferating.

“If CBP’s systems were breached, then what threats lie ahead for all these surveillance systems run by local law enforcement around the country that don’t have the resources of the federal government?” says Maass. “Perceptics has been going around using its CBP contracts to establish credibility. We don’t yet know who else contracted with them based on that endorsement.”

Jay Stanley, a senior policy analyst at the American Civil Liberties Union (ACLU), is uncertain if biometric companies are using face data obtained in the private sector beyond simply identifying people. But, he says, this data could be sold to chains of retail stores, whose facial recognition systems could identify customers the moment they set foot in the building. The system could use video analytics to log other information, like how long they stay in the store and where they focused their attention.

Due to lax regulation and weak government oversight, Stanley adds, we don’t really know how biometric tech companies are handling and securing FR data.

“It’s every company for itself, it’s the Wild West—there are no rules, there aren’t any industry best practices,” says Stanley. “There is nothing to stop a company from using a very high-resolution surveillance camera in a store, taking face prints of everyone who walks in, storing it in a database, and using it for whatever purposes their imagination can come up with. The advertising world’s appetite for information on people is boundless.”

Once you create these big databases, Stanley explains, they become honeypots that attract hackers, and ultimately it proves easier to hack into a system than to keep hackers out. But it’s not just hackers who are interested in such databases. With “mission creep,” FR gets normalized, spreading from airport check-in to a public agency like CBP, tasked with identifying and deporting illegal immigrants from the United States.

“When you create these big [biometric] databases, it’s just a recipe for privacy invasions and abuses by hackers, in addition to whatever abuses the people who are collecting the data are engaged in,” says Stanley.

Operating in the shadows

In the United States, dozens of companies make facial recognition systems, serving both government and corporate clients. These companies are building their systems in the absence of any real regulation.

According to the Electronic Frontier Foundation, one of the largest vendors of face recognition and other biometric ID technology in the U.S. is Idemia (formerly MorphoTrust), maker of the FR system IdentoGO. The company has designed systems for federal and state law enforcement agencies, state DMVs, border control and airports, and the State Department. “Other common vendors include 3M, Cognitec, DataWorks Plus, Dynamic Imaging Systems, FaceFirst, and NEC Global,” writes the EFF.

EFF’s Jennifer Lynch raised the alarm about Idemia back in 2017, when the TSA announced its intention to use FR to track people around airports. Idemia is the vendor for the TSA’s PreCheck program, which Lynch noted had already moved beyond airports to include expedited entry for “PreCheck approved travelers” at concerts and stadiums across the U.S.

“Idemia says it will equip stadiums with biometric-based technology, not just for security, but also ‘to assist in fan experience,’” Lynch writes. “Adding face recognition would allow Idemia to track fans as they move throughout the stadium, just as another company, NEC, is already doing at a professional soccer stadium in Medellín, Colombia, and at an LPGA championship event in California earlier this year.”

In April, Idemia announced a partnership with the Boston Red Sox. As the first MLB team to collaborate with Idemia, Fenway Park will be equipped with the company’s Fast Pass lane, which fans can only use if they submit to fingerprint scanning. The company boasts of providing a wide range of “identity-related services” for commercial customers through its IdentoGo centers. “Our primary service is the secure capture and transmission of electronic fingerprints for employment, certification, licensing and other verification purposes,” the company says. “Additional services, such as passport photos, identity history checks and fingerprint cards are available at participating locations today, and new identity-related products and services are being developed for the future.”

In 2016, the Minnesota-based multinational conglomerate 3M announced its entry into the face recognition game with the 3M Live Face Identification System. Unlike Idemia’s product, the system uses live video to match identities in real time, for public and private sector customers. In a press release, 3M Cogent noted that the system “automatically recognizes multiple faces simultaneously from live or imported footage” in order to identify individual people from “dynamic, uncontrolled environments.” In real time, faces are captured and matched, and any desktop PC or mobile device can be notified immediately when there is a match in the database.

3M Cogent doesn’t see its Live Face Identification System being used only for law enforcement, border control, and private security forces. They are marketing it to be used in spaces such as casinos, cruise line boarding areas, sports stadiums, and banks.

While many FR vendors are busy marketing to U.S. government agencies and private companies, IBM is looking abroad. In May, BuzzFeed’s Megha Rajagopalan reported that the tech giant, along with China’s Hikvision and Huawei, is marketing facial recognition technology to the United Arab Emirates, a regime well known for using mobile spyware against dissidents. The product, known as Oyoon (Arabic for “eyes”), is currently being rolled out by Dubai police. It combines facial recognition and AI analysis, with the stated goal of identifying traffic accidents. But, as Rajagopalan noted, government procurement and regulatory documents make it clear that Dubai police want to be able to scan people’s faces and analyze them, among other things like voice recording.

Other FR ventures, like Verint’s product Face-Int, have raised concerns among human rights activists. Verint is a spyware maker whose smartphone snooping systems have been used by governments suspected of human rights abuses, like the UAE, South Sudan, and Mexico. Developed by Terrogence, a subsidiary of Verint, Face-Int uses scans of videos and photos from Facebook, YouTube, and other social media to build an FR database used to identify terrorists. But, as Forbes’s Thomas Brewster reported in April 2018, a former Terrogence staffer noted on their LinkedIn profile that Face-Int had also been used to profile activist groups.

Biometric data at risk

Unlike a password, which can be changed if stolen, a person’s face or irises cannot be replaced. Storing and transmitting this data securely should therefore be of paramount importance to biometric tech companies. But, as the Perceptics database breach and other exposures illustrate, this really doesn’t seem to be the case.

Earlier this year, Dutch security researcher Victor Gevers discovered that SenseNets, a Chinese company that sells video-based crowd analysis and facial recognition technology, left its database open on the internet for months. The company’s face recognition technology is used by the Chinese government to track the Uyghur Muslim population in the Xinjiang region. The open database exposed 2,565,724 users, according to Gevers, as well as GPS coordinates.

After making the discovery, Gevers tweeted that the database contains personal information: “This database contains over 2.565.724 records of people with personal information like ID card number (issue & expire date, sex, nation, address, birthday, passphoto, employer and which locations with trackers they have passed in the last 24 hours.”

But this wasn’t the only facial recognition database left exposed in China, a nation with grand hopes of using this technology to identify people in seconds. Two months ago, security researcher John Wethington found and accessed a Chinese smart city database on a web browser that didn’t have a password. As Wethington told TechCrunch, he found gigabytes’ worth of exposed data on an Elasticsearch database that included facial recognition scans of hundreds of people. The database, used by an unnamed company, was hosted on Chinese tech giant Alibaba’s cloud service.

While these examples might seem like outliers, Wethington tells Fast Company that the databases aren’t all that hard to obtain. He says that facial recognition systems rely on existing database technology—products like Elasticsearch, MySQL, SQL, and other data storage solutions—and most of them are prone to poor configurations.

“Facial recognition companies are often leveraging cloud storage and services to scale in a cheap and efficient manner,” Wethington says. “Very few of these organizations are placing an emphasis on security and privacy. Frankly, the prevailing attitude has been that consumers give up that right in public.”

How to find a facial recognition database within seconds

Wethington calls these face recognition databases “poorly protected assets”—ones that are easily discoverable using things like BinaryEdge, Shodan, and similar tools. He says that some of these systems are even designed to be “open” to users by default, so it’s up to the deployer to actually secure it. A typical SQL server, which could be used to store facial recognition data, runs on a set of standard ports that can easily be scanned using free tools like Masscan. “Discovering these databases is trivial,” says Wethington.
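Wethington’s point about standard ports can be illustrated with a short sketch. This is a minimal, illustrative port check, assuming a hypothetical target address; tools like Masscan and Shodan do the same thing across the entire internet, far faster:

```python
import socket

# Default ports for the database products mentioned above; any of these
# answering on a public address is worth a closer look.
DB_PORTS = {
    "elasticsearch": 9200,
    "mysql": 3306,
    "mongodb": 27017,
}

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_host(host: str) -> dict:
    """Check which well-known database ports answer on one host."""
    return {name: port_is_open(host, port) for name, port in DB_PORTS.items()}
```

Nothing here involves exploitation: a connection either succeeds or it doesn’t, which is exactly why Wethington calls discovery “trivial.”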

As he demonstrated, a malicious hacker can execute a search for an Elasticsearch database with these tools by defining a port, an index, and a keyword. In less than a minute, Wethington found 33,000 open Elasticsearch databases, a small percentage of which would be related to facial recognition or artificial intelligence. “The point is that the data is easy to find if it’s even remotely insecure,” he says. In short order, Wethington found a database in China that is using facial recognition technology. He says the tools to access these systems are freely available and often require zero authentication. “Technically, they aren’t being hacked—they are simply not being secured,” says Wethington.
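As an illustration of how little tooling that search requires, the sketch below only constructs the REST URL that an unsecured Elasticsearch node will answer without credentials; it sends nothing. The port, index pattern, and keyword mirror the three parameters Wethington describes, and the host is a placeholder:

```python
from urllib.parse import urlencode

def build_search_url(host: str, port: int = 9200,
                     index: str = "*", keyword: str = "") -> str:
    """Build an Elasticsearch URI-search URL. An open node answers this
    GET request with matching documents and asks for no authentication."""
    query = urlencode({"q": keyword, "size": 10})
    return f"http://{host}:{port}/{index}/_search?{query}"

# Example (203.0.113.5 is a documentation-only address):
# build_search_url("203.0.113.5", keyword="face")
# -> "http://203.0.113.5:9200/*/_search?q=face&size=10"
```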

Security researcher Gevers, who works at GDI Foundation, an organization that is trying to protect a free and open internet by making it safer, says that the reason there are a lot of open-source databases is that they can be installed quickly. The problem, he says, is that engineers don’t seem to be reading the manual when it comes to securing the data.

“I think that’s the main reason why China is where the second-most data leaks are,” says Gevers. “The first being in the United States because of the Amazon Web Services, Google Online Services, and Azure. People deploy systems very quickly, forget to firewall it or to put some credentials on it so you have to log in.”

“The moment you deploy your prediction systems in the cloud online, you have to make double sure that these systems are not going to be exposed because a system administrator left something open, or a web developer built something that introduced a vulnerability,” he adds.
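The check Gevers describes, telling a node that demands credentials apart from one left wide open, takes a single unauthenticated request. A minimal sketch (interpreting 401/403 as "credentials required" is common HTTP convention, not any vendor's documented guarantee):

```python
from http.client import HTTPConnection

def probe_auth(host: str, port: int = 9200, timeout: float = 2.0) -> str:
    """Send one unauthenticated GET and classify the endpoint as
    'secured' (asks for credentials), 'open', or 'unreachable'."""
    conn = HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", "/")
        status = conn.getresponse().status
    except OSError:
        return "unreachable"  # firewalled, down, or refusing connections
    finally:
        conn.close()
    if status in (401, 403):
        return "secured"  # the node demands a login before answering
    return "open"         # the node answered with no credentials at all
```

A deployment that was firewalled or given credentials, as Gevers urges, shows up here as "unreachable" or "secured"; the exposed databases in these stories would come back "open."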

Gevers thinks it’s most likely that there will be an uptick in data breaches that include photos or other material that is being used for facial recognition, or iris recognition. And even if the systems are locked down with login credentials, it only takes one mistake to create a vulnerability, and then any hacker has access to the database.

“The problem with facial recognition is that you need a lot of data, and to store a lot of data and make it quickly searchable,” says Gevers. “We see a lot of people that tend to use [free, open-source products]. These are fine products if you know what you’re doing. And I think that is the problem: most people don’t know what they’re doing. They just take a Lego building block, stack it up, and say, ‘Hey, look, the product is finished.’”

“If you look at the amount of data leaks there were in the last three years, and you add the word ‘MongoDB’ or ‘Elasticsearch’, you will see those rose up very quickly,” he says. “This is because the products are so easy to use and set up, but it’s a bit more tricky to make them more secure.”

Gevers blames weak FR database security on a competitive marketplace, where a proof of concept is built and the product quickly marketed and sold. And though a government or company may learn from vulnerabilities, as time passes and people get promoted or leave their jobs, organizations tend to forget what they learned about systems security. That is why he would like to see cloud providers like AWS, Google, and Azure give programmers secure templates, like those available from the Open Web Application Security Project, so that they can build safely and securely on those platforms.

“It costs money to properly protect data, and companies don’t want to pay the money,” says ACLU’s Jay Stanley. “And except for a PR hit, they generally don’t pay the cost of the data breaches. It’s the customers who pay the costs—customers who may not even know that the data is being collected, let alone had a choice [to consent].”

‘An existential threat to fundamental freedoms’

In recent testimony before the House Committee on Oversight and Reform, Clare Garvie, a senior associate with the Center on Privacy & Technology at Georgetown Law, said that just a few years ago she thought that strong regulations were adequate to protect people from the misuse of facial recognition technology. Now she believes that federal, state, and local governments should place a moratorium on police use of face recognition.

“Communities need time to consider whether they want face recognition in their streets and neighborhoods,” Garvie said. “The power that this technology gives to law enforcement, combined with the secrecy with which the technology has been deployed, its persistent inaccuracy and race and gender bias, and the way it has been misused and abused, make face recognition an existential threat to fundamental freedoms in our society.”

Except for biometric laws in Illinois and Texas, and San Francisco’s ban, there typically aren’t legal restrictions preventing companies from selling the data they collect, says John Verdi, vice president of policy at the Future of Privacy Forum, a D.C.-based think tank focused on data privacy.

One reason for this is that FR tech systems aren’t interoperable. That is, there is no universal standard that would allow FR data to be shared among systems. So, if a retailer who has a face print of a customer from an NEC system wants to market to an individual who walked into another store that uses Hitachi FR technology, the two systems would not be able to speak to one another.

“Here is the other reason: when you’re talking about real-world tracking for marketing purposes, most marketers and ecosystems already have an identifier that is almost as good and is interoperable, and is well understood and used often, and that’s your mobile device ID,” says Verdi. “So, if I’m a retailer or data broker, I could . . . try to engage in a tremendous consensus-building lift with a bunch of facial recognition companies, and I might get data that is slightly more accurate than the data I get from mobile phones, though maybe I won’t.”

But just because FR systems aren’t interoperable doesn’t mean that biometric companies won’t find ways to sell face data. And if people aren’t giving their consent to facial scans, then the question of privacy inevitably and immediately arises. Like Garvie, EFF’s Dave Maass believes it’s time for a moratorium on facial recognition.

“Vendors are aggressively pushing agencies to adopt this technology very quickly, not necessarily because it’s good for public safety, but probably because they’re vying for market dominance,” he says. “They’re making promises about all the crimes it could miraculously solve or prevent, and officials aren’t asking about what could go wrong. We support San Francisco’s decision to outright ban the technology, but at the very least, decisions about surveillance should be made by the community and their elected representatives, not by police in partnership with surveillance salespeople.”

The Future of Privacy Forum recently published a paper on the regulation of and best practices for facial recognition technology. The paper proposes that common-sense regulation needs to start from a position of “opt-in, explicit affirmative consent” for enrollment in the systems. Verdi says there will be exceptions, as in law enforcement, where criminals won’t opt in, which will require another paradigm. He also says that opt-in doesn’t make sense for FR systems in schools, which may mean there are some situations where the technology shouldn’t be used at all. FPF supports a moratorium in certain cases, such as schools, but not for useful applications that raise few privacy risks, like unlocking mobile devices with faceprints.

In the House Oversight and Reform Committee hearings in late May, there was strong interest from Democrats and Republicans alike in putting the brakes on facial recognition technology before it alters American life. “We’ve called for a moratorium [on face recognition technology],” says Stanley. “And we got a lot of support in the room during those hearings.”


Source: Fast Company
