List of police, govt, uni orgs in Clearview AI’s facial-recognition trials

In brief Clearview AI’s controversial facial-recognition system has been trialed by, at the very least, police, government agencies, and universities around the world, according to newly leaked files.

Internal documents revealed by BuzzFeed News show that Clearview offered its technology to law enforcement agencies, governments, and academic institutions in 24 countries, including the UK, Brazil, and Saudi Arabia, on a try-before-you-buy basis.

The facial-recognition biz scraped billions of photos from public social media profiles, including Instagram and Facebook, and put them all into a massive database. Clearview’s customers can submit pictures of people and the system will automatically try to locate those people in the database, using facial recognition, and return any details picked up from their personal pages if successful. Thus, the police can, for example, give the service a CCTV camera still of someone, and if it matches a face in the database, the system will report back their information, such as their name, social media handles, and so on.
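Clearview has not published how its matching works, though the general shape of such a system is well understood: each face is reduced to an embedding vector, and a query face is matched against stored vectors by distance. Below is a minimal, purely illustrative sketch using the open-source face_recognition library; the database structure, function names, and tolerance value are assumptions for illustration, not Clearview’s code.

```python
# Illustrative sketch of a face-lookup pipeline, NOT Clearview's actual code.
# Faces become 128-dimensional embeddings; a query face is matched to the
# nearest stored embedding by Euclidean distance.
import face_recognition
import numpy as np

# Hypothetical database: embeddings from scraped profile photos, each paired
# with the details found on the source page.
database = []  # list of (embedding, {"name": ..., "profile_url": ...})

def enroll(photo_path, details):
    image = face_recognition.load_image_file(photo_path)
    for encoding in face_recognition.face_encodings(image):
        database.append((encoding, details))

def lookup(query_path, tolerance=0.6):
    if not database:
        return None
    image = face_recognition.load_image_file(query_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None  # no face detected in, say, the CCTV still
    query = encodings[0]
    known = [emb for emb, _ in database]
    distances = face_recognition.face_distance(known, query)
    best = int(np.argmin(distances))
    # Smaller distance means more similar; below the tolerance counts as a match
    return database[best][1] if distances[best] <= tolerance else None
```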

Canada, for one, cracked down on the operation. Meanwhile, in Britain, the Metropolitan Police, the Ministry of Defence, and the National Crime Agency, as well as police forces in North Yorkshire, Northamptonshire, Suffolk, and Surrey, plus a university, tested or were given access to Clearview’s face-recognition algorithms, according to BuzzFeed.

More Clearview news

An Illinois state court this week denied Clearview’s motion to dismiss a lawsuit brought against it by the American Civil Liberties Union.

Illinois law is quite tough on collecting data for biometric applications, including facial recognition. The state’s Biometric Information Privacy Act (BIPA) requires companies to obtain written consent from people before collecting and storing data that can be used for identification purposes.

The ACLU sued Clearview in May 2020, claiming it had violated BIPA. Clearview tried to get the case thrown out by saying its business practices were protected under the First Amendment. But an Illinois court didn’t agree; Judge Pamela Meyerson dismissed [PDF] the startup’s claims and the lawsuit will go ahead.

“Today’s decision shows that it is still possible for individuals to take control of their personal information from Big Tech, and legislation like BIPA is the key,” Rebecca Glenberg, senior staff counsel with the ACLU of Illinois, said in a statement. “We must continue to fight for the right to protect our privacy through control of our personal information.”

Waymo expands autonomous taxi fleet to select SF residents, kills off Lidar business

Google self-driving car spinoff Waymo has launched its Waymo One Trusted Tester program in San Francisco.

The program allows a selected group of people in the California city to hail rides in Waymo’s white electric Jaguar I-PACE vehicles running on the upstart’s fifth-generation Waymo Driver software through a smartphone app. Ideally, the car is able to use computer vision to drive itself throughout the whole journey with no hiccups. A human driver, or “autonomous specialist,” will be behind the wheel, however, ready to take over at any point.

Waymo has also decided to stop selling its lidar sensors, known as the Laser Bear Honeycomb, to other companies, according to The Information. The components were touted to people making robots and suchlike, though now Waymo is keeping all production in-house.

Mortgage application algorithms favor White applicants over people of color

Algorithms used by mortgage brokers in the US were 80 per cent more likely to reject Black applicants looking to own homes compared to their White counterparts, according to a probe by The Markup.

A team of reporters analyzed data from more than two million mortgage applications across America in 2019. They controlled for 17 factors, such as income, so that “the prospective borrowers of color looked almost exactly the same on paper as the White applicants, except for their race.”

They found that, compared to White applicants, lenders were 40 per cent more likely to deny loans to Latino people, 50 per cent more likely for Asian and Pacific Islander people, 70 per cent for Native Americans, and 80 per cent for Black people. The gap also varied by city.
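The Markup’s analysis rested on the public Home Mortgage Disclosure Act records. As a purely illustrative sketch of what “controlling for” factors means, here is how one might fit a logistic regression of loan denial on race while holding other recorded variables constant. The column names, file name, and factor list below are hypothetical; this is not The Markup’s actual code.

```python
# Illustrative sketch of "controlling for" factors via logistic regression.
# Column names and data file are hypothetical, not The Markup's analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the public HMDA mortgage-application data
applications = pd.read_csv("hmda_2019.csv")

# Model denial as a function of race while holding income, loan amount, and
# debt-to-income ratio constant. Each race coefficient then estimates the
# change in denial odds relative to the White baseline, all else equal.
model = smf.logit(
    "denied ~ C(race, Treatment(reference='White'))"
    " + income + loan_amount + debt_to_income",
    data=applications,
).fit()

# Exponentiated coefficients read as odds ratios: a value of ~1.8 on a race
# term would mean roughly 80 per cent higher odds of denial.
print(np.exp(model.params).round(2))
```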

The investigation was long and complicated. Reporters could only probe the algorithms’ effects through public data; the systems’ inner workings are proprietary and secret. Bank and lender associations criticized the study for not taking people’s credit scores into account, though The Markup said it could not include them because credit scores are private.

A play written by GPT-3 to be shown in London

A play, imaginatively titled AI, is set to feature a team of human actors on stage and another group interacting with OpenAI’s GPT-3 behind the scenes. The software will generate text based on prompts written by humans, and the actors will then improvise and play out the scene described by GPT-3, Time reported.
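Time’s report does not detail the production’s prompts or settings. As a rough illustration of the mechanism, this is how a GPT-3 completion was typically requested through OpenAI’s API of that era; the model choice, prompt text, and sampling parameters below are assumptions, not the show’s actual setup.

```python
# Illustrative sketch of prompting GPT-3 via OpenAI's completion API of the
# time (openai-python pre-1.0); the show's actual prompts are not public.
import openai

openai.api_key = "sk-..."  # assumes an OpenAI API key

prompt = (
    "Write a short scene for two actors on a theatre stage.\n"
    "Setting: a dimly lit waiting room.\n"  # hypothetical human-written prompt
    "SCENE:"
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 model name
    prompt=prompt,
    max_tokens=300,
    temperature=0.9,    # higher values yield more varied, less predictable text
)

# The generated scene the actors would then improvise around
print(response.choices[0].text)
```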

Large language models capable of generating text are unpredictable; they can often say things that are offensive due to biases picked up in training data. GPT-3 was trained on swathes of text scraped from the internet, so the play script written by the neural network may contain racist and sexist themes.

The performance art show will run in London’s Young Vic theater for three nights starting next week. ®
