Mitigating Artificial Intelligence Bias for the EU

Critics say it’s overzealous and will stifle innovation. Supporters say its strict limits will boost trust and allow innovation to flourish. But regardless of what companies think of the European Union’s draft AI law,1 they must be ready for it.

If you work in a law firm or corporate legal team, you may already be following the European Commission’s proposals to create the world’s first regional-scale legal framework for artificial intelligence (AI) systems. And if not, maybe you should be.

The proposed AI law, published in April 2021, bans “unacceptable” uses of AI that violate basic human rights and safety, and sets strict rules for “high-risk” applications. Companies that fail to comply can be fined up to €30 million or 6% of global revenue,2 whichever is higher.

In September 2022, the Commission followed the draft with proposals for an AI Liability Directive and a revised Product Liability Directive, which would make it easier for people to receive compensation if they are harmed by artificial intelligence, including through discrimination.

Together with the General Data Protection Regulation, the Digital Services Act and the Digital Markets Act,3 the proposed rules are part of the EU’s strategy to set the global gold standard for ethical and reliable technology.

The world is watching

The implications are far-reaching not only for software developers, but also for companies large and small that use AI, including those outside the EU whose systems are used in the bloc. According to a 2020 survey by the European Commission, more than four out of 10 companies in Europe reported using at least one artificial intelligence technology.4 In March of this year, China introduced tighter controls on how tech companies can use recommendation algorithms. The US is considering a draft Algorithmic Accountability Act of 2022, and in October the White House released its Blueprint for an AI Bill of Rights.5

With the strict rules potentially applying in the EU as early as 2025, organizations developing and deploying AI systems need to ensure they have robust governance structures and risk-management systems in place to mitigate risk, particularly AI bias. In recent months, major law firms have begun advising their clients accordingly, but many organizations have yet to lay the groundwork for compliance.

The rise of AI bias

The draft AI law takes a very broad view of what qualifies as an AI system. It covers:

  • Machine learning approaches including supervised, unsupervised and reinforcement learning using various techniques including deep learning.
  • Logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems.
  • Statistical approaches, including Bayesian estimation, and search and optimization methods.

The argument for such a broad definition is that the system itself is less important than the use case. In other words, the law takes a broad view of what an AI system is, but a narrow view of its use case.6


Beyond chatbots and self-driving vehicles, narrow AI is used in its many forms every day to make decisions that affect people’s lives, from credit scores to job applications to university admissions to health care expenses.

Along with the lauded successes, there are many horror stories about the unintended consequences of algorithmic decision-making. A recruiting tool that discriminates against women, a chatbot that talks like a Nazi, facial recognition platforms that perform significantly worse for dark-skinned women than for lighter-skinned men:7 these are examples that have left some of the world’s biggest companies red-faced.

One of the challenges when AI fails is the lack of “explainability”. In today’s era of deep learning algorithms and pervasive AI, the biases that human error can create at any stage of an AI’s lifecycle are reproduced and amplified on a scale that is sometimes impossible to control and mitigate. When this happens, the forensics required to identify the trigger is not trivial and requires advanced technical expertise.

Shared responsibility

So who is to blame if the results are unfair and discriminatory? The people who trained the system? The people who selected the data? The people who created the software, often including code written over many years by different practitioners and for different intended purposes?

The draft AI law takes a strong stance on liability, imposing obligations on providers, importers, distributors and even “users” of AI systems deemed to pose a high risk to human safety or fundamental rights. The word “users” here does not mean end users (such as the general public), but individuals or entities using an AI system under their own authority, such as a recruitment firm deploying an automated CV-screening tool. As for “service providers”, the definition covers not only software developers but also companies that commission them and place an AI system on the market or put it into service in the EU under their own name or trademark. In other words, companies will not be able to avoid responsibility by outsourcing projects. If they are involved in creating or deploying a high-risk AI system, they will have at least some legal obligations.

The bill divides AI systems into four risk categories: unacceptable, high risk, limited risk, and minimal risk.

Unacceptable applications are prohibited outright and include subliminal manipulation techniques, systems that exploit the vulnerabilities of specific groups, social scoring systems used by public authorities, and real-time remote biometric identification systems used by law enforcement in publicly accessible spaces.

The bulk of the proposed regulation addresses the “high-risk” category, which includes applications related to transportation, biometric identification, education, employment, welfare, private-sector credit, law enforcement, and immigration, among others. Before placing a high-risk AI system on the market or putting it into service, EU service providers would have to undergo a conformity assessment and meet a long list of requirements. These include establishing a risk management system and complying with rules on technical robustness, testing, training data, data governance and cyber security. Human oversight throughout the system’s life cycle would be mandatory, and transparency requirements would allow users to interpret the system’s output and challenge it if necessary. Frameworks have already been developed to serve as management tools. A good example is capAI, a procedure for assessing the conformity of AI systems with the EU’s AI Act, developed jointly by the Oxford Internet Institute and Saïd Business School.8


Service providers would also have to register their systems in an EU-wide database and implement an appropriate quality management system, including post-market monitoring.

Users, including companies deploying AI technology, would have to ensure that input data is relevant to the AI system’s intended purpose and monitor the system’s performance. In addition to keeping accurate records and conducting data protection impact assessments, they would also have to inform the service provider or distributor of any risk of malfunction or serious incident.

How bias infiltrates systems

Companies planning to develop or implement AI systems should act now to ensure their AI management and risk mitigation infrastructure meets the upcoming regulations.

This work should not be left to technical experts or limited to legal and compliance departments. Rather, it will require a cross-functional team, led from the top, ideally including diversity, equality and inclusion specialists who can provide a people-centred perspective.

We expect machines to be unbiased decision makers, but the reality is that human biases can and will inevitably infiltrate AI systems at multiple stages in their lifecycle, from their creation to many years into their implementation.

First, there is the quality of the raw material: the data. The old adage “garbage in, garbage out” applies to all data-driven analytics, so the consistency, representativeness and usefulness of the data must be assessed. For example, if the data contains too few observations for a particular group, the training samples may include too few such cases and, although representative of the overall population, may not train the algorithm effectively, leading to biased decisions. The appropriate questions must therefore be asked. How old is the data? How was it collected, and for what original purpose? Does it accurately represent the population that will be affected by the technology, or does it favor a particular gender, ethnicity, geographic area or other demographic? Implicit stereotypes, reporting bias, selection bias, group attribution errors, the halo effect and other problems can creep in at this stage, threatening an AI system before it is even built.
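One practical starting point is a simple pre-training representativeness check. The sketch below, in Python, counts each demographic group’s share of a dataset and flags groups that fall below a threshold; the column names and the thresholds are illustrative assumptions, not requirements drawn from the draft law.

```python
# A minimal sketch of a pre-training representativeness check.
# Column names ("gender", "ethnicity") and the threshold are
# illustrative assumptions only.
import pandas as pd

def flag_underrepresented_groups(df: pd.DataFrame,
                                 group_cols: list[str],
                                 min_share: float = 0.05) -> dict:
    """Return groups whose share of the dataset falls below min_share."""
    flags = {}
    for col in group_cols:
        shares = df[col].value_counts(normalize=True)
        low = shares[shares < min_share]
        if not low.empty:
            flags[col] = low.to_dict()
    return flags

# Example usage with a hypothetical applicants dataset:
applicants = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M", "M", "M", "M", "M", "M"],
    "ethnicity": ["A", "A", "B", "A", "A", "A", "A", "A", "A", "A"],
})
print(flag_underrepresented_groups(applicants, ["gender", "ethnicity"], min_share=0.2))
```

A check like this does not prove a dataset is fit for purpose, but it surfaces obvious gaps early enough to collect more data or document the limitation.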


Then there is the model: a coded algorithm trained to perform a specific task on the data provided. In a supervised scenario, when people annotate data to train a model, they can compound issues related to sampling bias, non-sampling error, confirmation bias, anecdotal fallacies and more. These biases can then propagate and be amplified during model training, and may go unnoticed when humans evaluate the model’s performance. When humans see the output of an AI system, they may mistakenly assume it is objective and correct (so-called automation bias), creating a feedback loop in which biased data is fed back into the system, repeating and amplifying the original flaws. A data scientist coding AI is not immune to these biases, and is often further misled by a performance-based culture in which headline model accuracy is paramount.
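One way to counter that headline-accuracy culture is to report model performance disaggregated by group rather than as a single figure. The sketch below shows one possible approach using scikit-learn metrics; the group labels and predictions are hypothetical.

```python
# A minimal sketch of disaggregated model evaluation, so aggregate
# accuracy cannot hide a large gap between groups. Labels are illustrative.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

def per_group_metrics(y_true, y_pred, groups):
    """Report accuracy and recall separately for each demographic group."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = {
            "n": int(mask.sum()),
            "accuracy": accuracy_score(y_true[mask], y_pred[mask]),
            "recall": recall_score(y_true[mask], y_pred[mask], zero_division=0),
        }
    return results

# Hypothetical predictions from a binary screening model:
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["F", "F", "F", "M", "M", "M", "M", "M"])
for group, stats in per_group_metrics(y_true, y_pred, groups).items():
    print(group, stats)
```

Reviewing per-group results alongside the overall score makes it harder for automation bias to paper over systematic disadvantage to one group.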

The risks do not end there. Over time, an AI system can be scaled to serve a wider population or adapted to a new purpose. The data informing the system can become outdated, and without monitoring and intervention the potential grows for discrimination with real-world consequences for individuals and damage to the reputation of the system’s providers.
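Monitoring for this kind of drift can be partly automated. The sketch below compares the distribution of a feature at training time with its distribution in production using the population stability index (PSI), a common drift measure; the thresholds and the income data are illustrative assumptions, not figures from the draft regulation.

```python
# A minimal sketch of drift monitoring via the population stability index (PSI).
# Thresholds of 0.1 / 0.25 are common rules of thumb, not regulatory values.
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a feature's distribution at training time with its current one."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, avoiding division by zero.
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical example: an income feature at training time vs. in production.
rng = np.random.default_rng(0)
training_income = rng.normal(50_000, 10_000, 5_000)
live_income = rng.normal(58_000, 12_000, 5_000)
psi = population_stability_index(training_income, live_income)
print(f"PSI = {psi:.3f} (values above 0.25 are often treated as significant drift)")
```

A high PSI on a sensitive or outcome-relevant feature is a prompt to retrain, re-validate per-group performance, or pause the system, not an automatic verdict.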

Reducing bias

However, companies can do a lot to protect themselves by implementing a robust AI risk assessment framework. This should address the risk of bias at every stage of the project, from design to retirement. That means understanding and documenting the inherent characteristics of the data, carefully defining the algorithm’s goal, using relevant data to train it, and capturing all model parameters and performance metrics in a model registry (a minimal sketch of such a registry entry follows the list below). Service providers can be confident of creating ethical systems that meet the expected regulations and ethical standards if they take a holistic approach characterized by:

  • A technical system for evaluating and monitoring AI
  • Adequate management infrastructure to ensure systems comply with applicable regulations
  • Diversity of skills in the teams involved throughout the AI lifecycle
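As an illustration of the model registry mentioned above, the sketch below records a model’s purpose, data characteristics, parameters and per-group metrics in a single structured entry. The field names and example values are hypothetical; dedicated tools (MLflow, for instance) provide richer tracking out of the box.

```python
# A minimal sketch of a model-registry record. Field names and values
# are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelRecord:
    name: str
    version: str
    intended_purpose: str
    training_data_description: str  # provenance, collection dates, known gaps
    parameters: dict = field(default_factory=dict)
    overall_metrics: dict = field(default_factory=dict)
    per_group_metrics: dict = field(default_factory=dict)
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ModelRecord(
    name="cv-screening",
    version="1.3.0",
    intended_purpose="Shortlisting applications for human review",
    training_data_description="2019-2022 applications; under-represents part-time applicants",
    parameters={"model_type": "gradient_boosting", "max_depth": 4},
    overall_metrics={"accuracy": 0.87},
    per_group_metrics={"F": {"recall": 0.78}, "M": {"recall": 0.88}},
)
print(json.dumps(asdict(record), indent=2))
```

Keeping this kind of record per model version gives compliance teams and auditors something concrete to review when questions about bias or intended purpose arise later.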

After attracting more than 3,000 amendments, the draft AI law could face major revisions over the next year, possibly including changes to the Commission’s very broad definition of an AI system. It is unlikely to become binding law before mid-2023, by which time tech-friendly Sweden will hold the presidency of the Council of the European Union, which could further influence the final version. A grace period of 24-36 months should then give companies time to comply with the new legislation.

In the meantime, there are many decisions to be made. And they cannot be left to robots.
