Bias still dominates the discussion of AI adoption in business. So it should.

As international regulation starts to take shape, organisations must learn from the past, writes James Duez. In this post, he explains why predictive AI technology still requires human judgement and offers three principles all AI-enabled organisations must live by.

SwissCognitive Guest Blogger: James Duez, Technologist, Futurist, Co-Founder & CEO of Rainbird Technologies, Official Member of the Forbes Technology Council



As international regulation starts to take shape, organisations must educate themselves on lessons from the past. One thing is clear: implementing AI and automated decision-making should not mean imposing biased decisions on the public.

While the ethical imperative for this principle is (hopefully) obvious, the legal and regulatory imperative is gathering pace.

In April 2021, the EU put forward “the first ever legal framework on AI”. It warns against bias in AI systems and argues that firms must do their bit to monitor and prevent it. Failure to comply could result in fines as large as six per cent of a company’s global revenue.

The framework is not only relevant to multinational companies that operate in the EU; as the first of its kind, it signals where things might head for North America. It follows the essence of GDPR, which states that firms must “regularly check [their] systems for accuracy and bias and feed any changes back into the design process.”

Why bias is an important issue

The reality is that when automated decision-making involves AI, bias embedded in data poses significant challenges. According to HBR, 97 per cent of business data is flawed. And 10 of the most widely used data sets for machine learning contain errors, according to MIT Technology Review.
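
To make the problem concrete, here is a minimal sketch of one common bias check: comparing a model’s approval rates across groups (a demographic parity check). The data and group names below are hypothetical, purely for illustration, not a description of any particular system.

```python
# A minimal sketch of a demographic parity check: does the rate of
# positive outcomes differ substantially between groups?
# All data and group names here are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """Return the approval rate per group.

    `decisions` is a list of (group, approved) pairs, where
    `approved` is True or False.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical credit decisions produced by a model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
# Parity gap: the spread between the highest and lowest group
# approval rates. A large gap is a signal to investigate, not proof
# of bias on its own.
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")
```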

In late 2019, Apple’s credit card was investigated by the US financial regulator for sexism, after it emerged that women were being offered lower credit limits than men. Not long before, Amazon scrapped its AI recruitment tool after finding it favoured male candidates over female ones.

Without proper, carefully implemented governance systems, organisations risk causing significant harm to society.

Why data poses a risk

Ultimately, the predictions those algorithms made depended on the quality of the data they were trained on. That leaves the question: why does the quality of data vary so much?

One of the biggest reasons is that data is the historical output of past human judgement.

In part, this explains one of the major challenges of applying AI to real-world scenarios: firms can inadvertently end up magnifying the errors and biases in human judgement that are locked up in the data used to build machine-learnt models. The Nobel Prize-winning economist and psychologist Daniel Kahneman has produced a wealth of work on the flaws of human decision-making. If his research speaks to anything in the context of applied AI in business, it is that we need to think very carefully about how we go about making automated decisions using AI.

There are therefore three principles organisations implementing AI must live by:

  1. Minimise bias as far as possible. Any automated decision-making system should proactively seek to reduce bias created at the level of system structure, which means using the highest-quality data available.
  2. Be interpretable, so that those implementing the automation have complete visibility over how decisions are reached and can explain every decision when necessary.
  3. Put governance protocols in place, by ensuring that machine-learnt predictions inform (rather than determine) decisions, so that human expertise is always in control (see the sketch below this list).
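
To illustrate the third principle, here is a minimal sketch of how a machine-learnt prediction might inform rather than determine a decision: an explicit rules layer handles clear-cut cases, records its reasoning, and defers uncertain cases to a human reviewer. All function names, fields and thresholds are hypothetical, chosen for illustration only.

```python
# A minimal sketch of "predictions inform, humans decide": a
# machine-learnt score passes through an explicit, human-authored
# rules layer, and uncertain cases are deferred to a human reviewer.
# All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str   # "approve", "decline" or "refer_to_human"
    reasons: list  # interpretable trail for audit and explanation

def decide(model_score: float, applicant_income: float) -> Decision:
    reasons = [f"model score = {model_score:.2f}"]

    # Explicit, human-authored rule: a hard eligibility check the
    # model cannot override.
    if applicant_income < 10_000:
        reasons.append("income below hard minimum")
        return Decision("decline", reasons)

    # High-confidence predictions may be automated...
    if model_score >= 0.90:
        reasons.append("score above auto-approve threshold")
        return Decision("approve", reasons)
    if model_score <= 0.10:
        reasons.append("score below auto-decline threshold")
        return Decision("decline", reasons)

    # ...but the uncertain middle band always goes to a person.
    reasons.append("score in uncertain band; human judgement required")
    return Decision("refer_to_human", reasons)

print(decide(0.55, 42_000))
```

Because every branch appends to the `reasons` trail, each decision carries its own explanation, which is what makes the second principle (interpretability) achievable in practice.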

How the risks can be prevented

Organisations are at last beginning to take ethical standpoints on AI and its role in automated decision-making. According to HBR, companies (including Google, Microsoft, BMW and Deutsche Telekom) are creating internal policies, making commitments to fairness, safety, privacy and diversity.

Organisations must recognise AI as a predictive technology that requires the application of judgement (a key part of any such policy), ensuring interpretability and, consequently, trust.

While it might be hard to remove bias from your data entirely, you can effectively minimise the effects of that bias by applying a layer of systemised judgement. This turns predictions into decisions that can be trusted. To achieve this you need technology that can efficiently and transparently automate that governance process. New platforms enable firms to apply machine-learnt predictions safely by incorporating a layer of automated human judgement into their systems.

Bias will continue to dominate the discussion of AI adoption in business long after the first tranches of international regulation and legislation are implemented. But by getting the governance of AI implementations right today, organisations can ensure they won’t be at the centre of that discussion tomorrow.


About the Author:

James is a business and technology innovator with over 25 years’ experience building companies. His passion is guiding talented and ambitious organisations to accelerate growth through exponential technology adoption. James is recognised as astute, committed and results-driven, with a focus on growth, sustainability and profit.
