Bias still dominates the discussion of AI adoption in business. So it should.
As international regulation starts to take shape, organisations must learn from the past, writes James Duez. In his blog post, he explains why predictive technology still requires human judgement, and offers three principles all AI-enabled organisations must live by.
SwissCognitive Guest Blogger: James Duez, Technologist, Futurist, Co-Founder & CEO, Rainbird Technologies; Official Member, Forbes Technology Council
As international regulation starts to take shape, organisations must educate themselves on lessons from the past. One thing is clear: implementing AI and automated decision-making should not mean imposing biased decisions on the public.
While the ethical imperative for this principle is (hopefully) obvious, the legal and regulatory imperative is gathering pace.
In April 2021, the EU put forward “the first ever legal framework on AI”. It warns against bias in AI systems, and argues that firms must do their bit to monitor and prevent it. Failure to comply could result in fines as large as six per cent of a company’s global revenue.
The framework is not only relevant to multinational companies that operate in the EU; as the first of its kind, it signals where things might head for North America. It follows the essence of GDPR, which states that firms must “regularly check [their] systems for accuracy and bias and feed any changes back into the design process.”
Why bias is an important issue
The reality is that when automated decision-making involves AI, bias embedded in data poses significant challenges. According to HBR, 97 per cent of business data is flawed, and labelling errors have been found in 10 of the most widely used machine-learning data sets.
In late 2019, Apple’s credit card was investigated by the US financial regulator for sexism, after it turned out women were being offered lower credit limits than men. Not long before, Amazon scrapped its AI recruitment tool after discovering it discriminated against women.
Without proper, carefully implemented governance, more failures like these are inevitable.
Why data poses risk
Ultimately, the predictions those algorithms made depended on the quality of the data used. But that leaves the question: Why does the quality of data vary so much?
Well, one of the biggest reasons is that data is the historical output of previous human judgement.
In part this explains one of the major challenges when applying AI: models trained on historical data inherit the biases embedded in past human decisions.
Therefore, there are three principles organisations implementing AI must live by:
- Minimise bias as far as possible. Any automated decision-making system should proactively seek to diminish bias created at the level of system structure—i.e. aim to use the highest quality data.
- Be interpretable, so that those implementing the automation have complete visibility over how decisions are being reached and can easily explain each and every decision if/when necessary.
- Put governance protocols in place, by ensuring that machine-learnt predictions inform (rather than determine) decisions, so human expertise is always in control.
How the risks can be prevented
Organisations are at last beginning to take ethical standpoints on AI.
Organisations must recognise, however, that bias can never be fully eliminated from historical data.
While it might be hard to remove bias from your data entirely, you can effectively minimise the effects of that bias by applying a layer of systemised judgement. This turns predictions into decisions that can be trusted. To achieve this you need technology that can efficiently and transparently automate that governance process. New platforms enable firms to apply machine-learnt predictions safely by incorporating a layer of automated human judgement into their systems.
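The "layer of systemised judgement" described above can be sketched in code. The following is a minimal, illustrative example only: the function name, thresholds, and rules are hypothetical assumptions, not any vendor's actual platform. It shows the core idea that a machine-learnt score *informs* the outcome while human-authored, fully inspectable rules *determine* it, producing an audit trail for every decision.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                                   # "approve", "refer" or "decline"
    rationale: list = field(default_factory=list)  # audit trail, for interpretability

def governed_decision(model_score: float, applicant: dict) -> Decision:
    """Apply human-authored rules on top of a model score.

    The score informs the decision; the rules determine it, so every
    outcome can be explained step by step (hypothetical thresholds).
    """
    rationale = []

    # Rule 1: protected attributes must never reach the decision logic.
    if "gender" in applicant:
        raise ValueError("protected attribute must be excluded from input")
    rationale.append("checked: no protected attributes in input")

    # Rule 2: borderline scores are referred to a human reviewer
    # rather than automatically declined.
    if 0.4 <= model_score < 0.6:
        rationale.append(f"score {model_score:.2f} is borderline -> human review")
        return Decision("refer", rationale)

    # Rule 3: clear-cut scores are decided automatically, with the reason logged.
    outcome = "approve" if model_score >= 0.6 else "decline"
    rationale.append(f"score {model_score:.2f} -> {outcome}")
    return Decision(outcome, rationale)

# A borderline case is referred to a person instead of being auto-declined.
d = governed_decision(0.55, {"income": 42000})
print(d.outcome)      # refer
print(d.rationale)
```

The design choice here is that the rules layer, not the model, has the final say, and that it writes down why it decided what it did, which is what makes each decision explainable after the fact.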
Bias will continue to dominate the discussion of AI adoption in business, and so it should.
About the Author:
James is a business and technology innovator with over 25 years’ experience building companies. His passion is guiding talented and ambitious organisations to accelerate growth through exponential technology adoption. James is recognised as astute, committed and results-driven, with a focus on growth, sustainability and profit.