In Brief
Highway patrol officers in California arrested a man this week accused of riding in the backseat of his Tesla while it was under Autopilot.
The super-cruise-control software should have disengaged with no one in the driver's seat, yet it is claimed 25-year-old Param Sharma managed to bypass that safeguard so that the vehicle would drive itself with him in the back. Drivers are also supposed to keep their hands on the wheel while Autopilot is active so they can take over from the computer system as necessary.
Following reports of a driverless Tesla Model 3, a highway patrol officer spotted the vehicle travelling eastbound towards the Bay Bridge in San Francisco, and attempted to stop it. It is alleged Sharma climbed back into the driver's seat before pulling over for the police.
“The safety of all who share our roadways is the primary concern of the [California Highway Patrol],” the police force posted on Facebook. “The Department thanks the public for providing valuable information that aided in this investigation and arrest.”
Saying no to Google money
Three non-profit groups, Black in AI, Queer in AI, and Widening NLP, which focus on supporting underrepresented groups in the machine learning community, said they won’t accept any funding from Google after its two ethical AI co-leads were fired.
Timnit Gebru and Margaret Mitchell were controversially pushed out of the Chocolate Factory after they wrote a paper criticizing large language models, like the ones used by Google. The episode blew up into a PR disaster and triggered a reshuffle of AI research at Google. Now, three non-profit organizations have decided to sever their relationship with the advertising giant.
The three groups have accepted funding in the past to organize events for Black, queer, and female researchers at conferences. Gebru is a co-founder of Black in AI.
“While we cannot prevent individuals or organizations from using their influence and resources to diminish and cause damage to members of our respective communities, we can control how we engage with organizations that clearly are not willing to engage in challenging yet necessary conversations,” the orgs said in a joint statement.
“Until Google addresses the harm they’ve caused by undermining both inclusion and critical research, we are unable to reconcile Google’s actions with our organizational missions.”
Google wants to hire more ethical AI researchers anyway
In related news, Google said it wants to hire more machine-learning ethics researchers in a fresh attempt to repair its image after the debacle.
Marian Croak, the VP of engineering at Google who was ushered in to oversee Responsible AI, a new research unit encompassing ethical AI, told the Wall Street Journal: “Being responsible in the way that you develop and deploy AI technology is fundamental to the good of the business. It severely damages the brand if things aren’t done in an ethical way.”
Google’s reputation has obviously taken a huge hit among the machine-learning community. Not only have the aforementioned non-profit groups shunned the corporation, but the Conference on Fairness, Accountability, and Transparency, which focuses on ethics and technology, also dropped Google as a sponsor earlier this year. Individual computer-science researchers have rejected grants from the biz, too.
It may be difficult to woo ethics researchers to Google in the near future. Croak said she wants the ethics team to focus on AI applications in healthcare, and on why such systems are less effective for people with darker skin tones. ®