Here Come the AI Regulations  


Proposed laws to govern AI are under consideration in the US and Europe, with China following a government-first approach. (Credit: Getty Images)  

By IAIDL Staff 

New laws will soon shape how companies use AI.   

The five largest federal financial regulators in the US recently released a request for information on how banks use AI, signaling that new guidance is coming for the financial sector. Soon after, the US Federal Trade Commission released a set of guidelines on “truth, fairness and equity” in AI, defining the illegal use of AI as any act that “causes more harm than good,” according to a recent account in Harvard Business Review.  

And on April 21, the European Commission issued its own proposal for the regulation of AI (see IAIDL, April 22, 2021).  

Andrew Burt, Managing Partner, bnh.ai

While we don’t know what these regulations will require, “Three central trends unite nearly all current and proposed laws on AI, which means that there are concrete actions companies can undertake right now to ensure their systems don’t run afoul of any existing and future laws and regulations,” stated article author Andrew Burt, the managing partner of bnh.ai, a boutique law firm focused on AI and analytics.  

First, conduct assessments of AI risks. As part of the effort, document how the risks have been minimized or resolved. Regulatory frameworks refer to these as “algorithmic impact assessments,” or “IA for AI.”  

For example, Virginia’s recently passed Consumer Data Protection Act requires assessments for certain types of high-risk algorithms. 

The EU’s new proposal requires an eight-part technical document to be completed for high-risk AI systems that outlines “the foreseeable unintended outcomes and sources of risks” of each AI system, Burt states. The EU proposal is similar to the Algorithmic Accountability Act filed in the US Congress in 2019. The bill did not go anywhere but is expected to be reintroduced.  
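In practice, an algorithmic impact assessment boils down to a structured record of what a system does, which risks are foreseeable, and how each risk was minimized or resolved. The following is a minimal, hypothetical sketch of such a record in Python; the field names and example entry are illustrative assumptions, not the schema required by the EU proposal or the Virginia law.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """One entry in an 'IA for AI' register: what the system does,
    which risks are foreseeable, and how each was minimized or resolved.
    All fields here are hypothetical, for illustration only."""
    system_name: str
    owner: str
    intended_use: str
    foreseeable_risks: list[str] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> how it was addressed
    reviewed_on: date = field(default_factory=date.today)

# Hypothetical example entry for a high-risk system.
assessment = ImpactAssessment(
    system_name="credit-scoring-v2",
    owner="model-risk-team",
    intended_use="Rank loan applications for manual underwriter review",
    foreseeable_risks=["disparate impact across protected groups"],
    mitigations={
        "disparate impact across protected groups":
            "quarterly fairness audit against a demographically labeled holdout set",
    },
)
print(assessment.reviewed_on, assessment.system_name)
```

Keeping records like this versioned alongside the model makes it easier to show regulators how each identified risk was minimized or resolved over time.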

Second, accountability and independence. The suggestion here is that the data scientists, lawyers, and others evaluating an AI system have incentives different from those of the frontline data scientists who built it. This could mean that the AI is tested and validated by different technical personnel than those who originally developed it, or organizations may choose to hire outside experts to assess the system.   

“Ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI,” Burt states.  

Third, continuous review. AI systems are “brittle and subject to high rates of failure,” with risks that grow and change over time, making it difficult to mitigate risk at a single point in time. “Lawmakers and regulators alike are sending the message that risk management is a continual process,” Burt stated.  
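One concrete form continuous review can take is automated monitoring of a deployed model’s inputs for drift away from the data it was validated on. The sketch below is illustrative and not drawn from the article: it uses a two-sample Kolmogorov–Smirnov test to flag when a live feature distribution has shifted enough to warrant a fresh risk review.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when a two-sample KS test rejects the hypothesis that
    the live window and the reference (validation-time) sample come
    from the same distribution."""
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Synthetic data for illustration: a shifted production window.
rng = np.random.default_rng(seed=0)
reference_scores = rng.normal(0.0, 1.0, size=5_000)  # captured at validation time
live_scores = rng.normal(0.4, 1.2, size=1_000)       # recent production inputs

if drifted(reference_scores, live_scores):
    print("Input drift detected: schedule a fresh risk review.")
```

In a real pipeline, a check like this would run on a schedule, with flagged drift feeding back into the impact-assessment record described above rather than ending at a one-time sign-off.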

Approaches in US, Europe and China Differ  

The US, Europe, and China differ in their approaches to AI regulation, according to a recent account in The Verdict, based on analysis by GlobalData, the data analytics and consulting company based in London. 

“Europe appears more optimistic about the benefits of regulation, while the US has warned of the dangers of over-regulation,” the account states. Meanwhile, “China continues to follow a government-first approach” and has been widely criticized for the use of AI technology to monitor citizens. The account noted the rollout by Tencent last year of an AI-based credit scoring system to determine the “trust value” of people, and the installation of surveillance cameras outside people’s homes to monitor the quarantine imposed after the outbreak of COVID-19. 

“Whether the US’ tech industry-led efforts, China’s government-first approach, or Europe’s privacy and regulation-driven approach is the best way forward remains to be seen,” the account stated. 

In the US, many companies are aware of the risk that new AI regulation could stifle innovation and their ability to grow in the digital economy, suggested a recent report from PwC, the multinational professional services firm.   

“It’s in a company’s interests to tackle risks related to data, governance, outputs, reporting, machine learning and AI models, ahead of regulation,” the PwC analysts state. They recommended business leaders assemble people from across the organization to oversee accountability and governance of technology, with oversight from a diverse team that includes members with business, IT, and specialized AI skills.  

Critics of European AI Act Cite Too Much Gray Area 

While some argue that the proposed AI Act leaves too much gray area, the European Commission hopes it will provide guidance for businesses wanting to pursue AI, as well as a degree of legal certainty.   

Thierry Breton, European Commissioner for the Internal Market

“Trust… we think is vitally important to allow the development we want of artificial intelligence,” stated Thierry Breton, European Commissioner for the Internal Market, in an account in TechCrunch. AI applications “need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.” 

“What we need is to have guidance. Especially in a new technology… We are, we will be, the first continent where we will give guidelines—we’ll say ‘hey, this is green, this is dark green, this is maybe a little bit orange and this is forbidden’. So now if you want to use artificial intelligence applications, go to Europe! You will know what to do, you will know how to do it, you will have partners who understand pretty well and, by the way, you will come also to the continent where you will have the largest amount of industrial data created on the planet for the next ten years.” 

“So come here—because artificial intelligence is about data—we’ll give you the guidelines. We will also have the tools to do it and the infrastructure,” Breton suggested. 

Reactions to the Commission’s proposal included plenty of criticism of overly broad exemptions for law enforcement’s use of remote biometric surveillance (such as facial recognition tech) as well as concerns that measures in the regulation to address the risk of AI systems discriminating don’t go nearly far enough. 

“The legislation lacks any safeguards against discrimination, while the wide-ranging exemption for ‘safeguarding public security’ completely undercuts what little safeguards there are in relation to criminal justice,” stated Griff Ferris, legal and policy officer for Fair Trials, the global criminal justice watchdog based in London. “The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial. This should include restricting the use of systems that attempt to profile people and predict the risk of criminality.”  

To accomplish this, he suggested, “The EU’s proposals need radical changes to prevent the hard-wiring of discrimination in criminal justice outcomes, protect the presumption of innocence and ensure meaningful accountability for AI in criminal justice.” 

Read the source articles and information in Harvard Business Review, in The Verdict and in TechCrunch. 
