Building Trust In AI: The Case For Transparency


Exploring the critical role of transparency in AI, Bernard Marr’s article highlights how clear and understandable AI practices are essential for building trust and ensuring ethical deployment.

Copyright: forbes.com – “Building Trust In AI: The Case For Transparency”
AI is rapidly transforming the world of business as it becomes increasingly woven into the fabric of organizations and the day-to-day lives of customers.

However, the speed of this transformation creates risks, as organizations struggle to deploy AI in ways that are responsible and minimize the potential for harm.

One of the cornerstones of responsible AI is transparency. AI systems – the algorithms themselves as well as their data sources – should be understandable, so we can comprehend how decisions are made and ensure they are made in a fair, unbiased and ethical way.

Today, many businesses that use AI are taking steps towards ensuring this happens. However, there have been cases where the use of AI has been worryingly opaque.

Here we will look at real-world examples, good and bad, that illustrate the benefits of transparent AI and the dangers of opaque or unexplainable algorithms.

Transparent AI Done Well

When Adobe released its Firefly generative AI toolset, it reassured users that, unlike other generative AI tools (e.g., OpenAI’s Dall-E), it is open and transparent about the data used to train its models. It published information on all of the images that were used, along with reassurance that it either owned the rights to these images or that they were in the public domain. This means users can make an informed choice about whether to trust that the tool hasn’t been trained in a way that infringes copyright.

Salesforce includes transparency as an important element of “accuracy” – one of its five guidelines for developing trustworthy AI. This means it takes steps to make it clear when AI provides answers that it isn’t sure are completely correct, including citing sources and highlighting areas that users of its tools might want to double-check to ensure there haven’t been mistakes. […]

Read more: www.forbes.com

The post Building Trust In AI: The Case For Transparency appeared first on SwissCognitive | AI Ventures, Advisory & Research.
