Large language models, such as ChatGPT, tend to make things up. A new approach now allows the systems to explain their responses—at least partially.
Copyright: scientificamerican.com – “New Tool Reveals How AI Makes Decisions”
When I asked ChatGPT to name the editor in chief of Spektrum der Wissenschaft, the German-language sister publication of Scientific American, the answer was, “I have the latest information until 2021, and at that time Dr. Florian Freistetter was the editor-in-chief of Spektrum der Wissenschaft. However, it is possible that the information has changed since then.” This highlights one of the biggest drawbacks of current language-generating artificial intelligence programs: they “hallucinate.” While Spektrum der Wissenschaft features a lot of work by Freistetter, he was never a staff member, let alone editor in chief, of the magazine. That’s why it is important to work on so-called explainable AI (XAI) models that can justify their answers and thus become more transparent.
Most AI programs function like a “black box.” “We know exactly what a model does but not why it has now specifically recognized that a picture shows a cat,” computer scientist Kristian Kersting of the Technical University of Darmstadt in Germany told the German-language newspaper Handelsblatt. That dilemma prompted Kersting to team up with computer scientists Patrick Schramowski of the Technical University of Darmstadt and Björn Deiseroth, Mayukh Deb and Samuel Weinbach, all at the Heidelberg, Germany–based AI company Aleph Alpha, and introduce an algorithm called AtMan earlier this year. AtMan allows large AI systems such as ChatGPT, Dall-E and Midjourney to finally explain their outputs.
In mid-April 2023 Aleph Alpha integrated AtMan into its own language model, Luminous, allowing the AI to explain its outputs. Those who want to try their hand at it can use the Luminous playground for free for tasks such as summarizing text or completing an input. For example, the input “I like to eat my burger with” is completed with “fries and salad.” Then, thanks to AtMan, it is possible to determine which input words led to the output: in this case, “burger” and “like.”
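The playground only shows the result, but the general idea can be sketched in a few lines of code. What follows is a simplified, hypothetical illustration, not AtMan itself and not Aleph Alpha’s API: AtMan works by suppressing the attention the model pays to individual input tokens, whereas this toy script simply removes one prompt token at a time and measures how much less likely the completion becomes. The GPT-2 model and the Hugging Face transformers library are stand-ins chosen only so the sketch runs; they are not part of the Luminous stack.

```python
# Deliberately simplified, hypothetical sketch of input attribution. It is not
# AtMan and does not use Aleph Alpha's API: AtMan suppresses attention to
# individual tokens inside the model, while this toy version deletes one prompt
# token at a time ("leave-one-out") and measures how much less likely the
# completion becomes. "gpt2" is a stand-in model so the script actually runs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt_ids = tok("I like to eat my burger with", return_tensors="pt").input_ids[0]
cont_ids = tok(" fries and salad", return_tensors="pt").input_ids[0]

def continuation_logprob(prompt: torch.Tensor) -> float:
    """Summed log-probability of the continuation tokens, given the prompt."""
    ids = torch.cat([prompt, cont_ids]).unsqueeze(0)
    with torch.no_grad():
        logp = torch.log_softmax(model(ids).logits, dim=-1)
    total = 0.0
    for i, tok_id in enumerate(cont_ids):
        pos = len(prompt) + i                      # position of the predicted token
        total += logp[0, pos - 1, tok_id].item()   # probability assigned one step earlier
    return total

baseline = continuation_logprob(prompt_ids)

# Remove each prompt token in turn; a large drop in log-probability marks the
# tokens (e.g. "burger" and "like") that drove the completion "fries and salad".
for i in range(len(prompt_ids)):
    reduced = torch.cat([prompt_ids[:i], prompt_ids[i + 1:]])
    drop = baseline - continuation_logprob(reduced)
    print(f"{tok.decode(int(prompt_ids[i])):>8}  influence ≈ {drop:+.3f}")
```

Tokens whose removal causes the largest drop in the completion’s log-probability, ideally “burger” and “like” in this example, come out as the most influential, which mirrors the kind of attribution the Luminous playground displays.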
AtMan’s explanatory power is limited to the input data, however. It can indeed show that the words “burger” and “like” most strongly led Luminous to complete the input with “fries and salad.” But it cannot explain how Luminous knows that burgers are often consumed with fries and salad. That knowledge remains in the data with which the model was trained.
AtMan also cannot debunk all of the lies (the so-called hallucinations) told by AI systems—such as that Florian Freistetter is my boss. Nevertheless, the ability to explain AI reasoning from input data offers enormous advantages. For example, it is possible to quickly check whether an AI-generated summary is correct—and to ensure the AI hasn’t added anything. Such an ability also plays an important role from an ethical perspective. “If a bank uses an algorithm to calculate a person’s creditworthiness, for example, it is possible to check which personal data led to the result: Did the AI use discriminatory characteristics such as skin color, gender, and so on?” says Deiseroth, who co-developed AtMan.[…]
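As a purely hypothetical illustration of such an audit (the feature names and attribution scores below are invented; in practice they would come from an explanation method applied to the bank’s model), a check for protected attributes could look like this:

```python
# Purely hypothetical illustration of the credit-scoring audit described above.
# The feature names and attribution scores are invented; in practice they would
# come from an explanation method such as AtMan applied to the bank's model.
PROTECTED = {"skin_color", "gender", "religion"}

attributions = {          # share of the decision attributed to each input feature
    "income": 0.41,
    "payment_history": 0.35,
    "gender": 0.16,       # a protected attribute that should carry no weight
    "postal_code": 0.08,
}

flagged = {name: score for name, score in attributions.items()
           if name in PROTECTED and score > 0}
if flagged:
    print("Decision relied on protected attributes:", flagged)
else:
    print("No protected attributes contributed to the decision.")
```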
Read more: www.scientificamerican.com
The post New Tool Reveals How AI Makes Decisions appeared first on SwissCognitive, World-Leading AI Network.