Efficient Technique Improves Machine Learning Models’ Reliability


Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new technique that enables a machine learning model to quantify how confident it is in its predictions, without requiring vast troves of new data and using far less computation than other techniques.

Copyright: news.mit.edu – “Efficient technique improves machine learning models’ reliability”
The method enables a model to determine its confidence in a prediction while using no additional data and far fewer computing resources than other methods.
Powerful machine learning models are being used to help people tackle tough problems such as identifying disease in medical images or detecting road obstacles for autonomous vehicles. But machine learning models can make mistakes, so in high-stakes settings it’s critical that humans know when to trust a model’s predictions.

Uncertainty quantification is one tool that improves a model’s reliability; the model produces a score along with the prediction that expresses a confidence level that the prediction is correct. While uncertainty quantification can be useful, existing methods typically require retraining the entire model to give it that ability. Training involves showing a model millions of examples so it can learn a task. Retraining then requires millions of new data inputs, which can be expensive and difficult to obtain, and also uses huge amounts of computing resources.
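To make the idea of such a score concrete, here is a minimal, generic illustration, assuming an ordinary classifier; this is not the researchers' technique, and the numbers are made up. A classifier's softmax output can be read as a prediction together with a naive confidence value.

```python
import torch

# Toy logits for a single input over three classes; in practice these
# would come from a trained classifier's forward pass.
logits = torch.tensor([2.1, 0.3, -1.2])

# Softmax converts the logits into a probability distribution over classes.
probs = torch.softmax(logits, dim=0)

prediction = torch.argmax(probs).item()  # predicted class index
confidence = torch.max(probs).item()     # naive confidence score in [0, 1]

print(f"predicted class {prediction} with confidence {confidence:.2f}")
```

A raw softmax score like this is often overconfident, which is part of why dedicated uncertainty quantification methods exist in the first place.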

Researchers at MIT and the MIT-IBM Watson AI Lab have now developed a technique that enables a model to perform more effective uncertainty quantification, while using far fewer computing resources than other methods, and no additional data. Their technique, which does not require a user to retrain or modify a model, is flexible enough for many applications.

The technique involves creating a simpler companion model that assists the original machine learning model in estimating uncertainty. This smaller model is designed to identify different types of uncertainty, which can help researchers drill down on the root cause of inaccurate predictions.
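The article does not describe the implementation, but the general "companion model" idea can be sketched as follows: keep the original model frozen and train a much smaller network on its intermediate features to output an uncertainty score. Everything below (class names, layer sizes, the single-score output) is an illustrative assumption rather than the published design; in particular, the researchers' companion model also distinguishes different types of uncertainty, which this sketch does not.

```python
import torch
import torch.nn as nn

class BaseClassifier(nn.Module):
    """Stand-in for a large, already-trained model that stays frozen."""
    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        feats = self.features(x)        # intermediate representation
        return self.head(feats), feats  # logits plus features

class CompanionModel(nn.Module):
    """Small companion network mapping the base model's features to a
    single uncertainty score (higher means less confident)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Softplus(),  # keep the score non-negative
        )

    def forward(self, feats):
        return self.net(feats)

base = BaseClassifier()
for p in base.parameters():   # the original model is never retrained or modified
    p.requires_grad_(False)

companion = CompanionModel()  # only this small model would be trained

x = torch.randn(4, 32)        # a batch of four dummy inputs
logits, feats = base(x)
uncertainty = companion(feats)
print(uncertainty.squeeze(-1))  # one uncertainty score per input
```

In a setup like this, only the small companion network requires training, which matches the article's point that the original model does not need to be retrained or modified.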

“Uncertainty quantification is essential for both developers and users of machine learning models. Developers can utilize uncertainty measurements to help develop more robust models, while for users, it can add another layer of trust and reliability when deploying models in the real world. Our work leads to a more flexible and practical solution for uncertainty quantification,” says Maohao Shen, an electrical engineering and computer science graduate student and lead author of a paper on this technique.

Shen wrote the paper with Yuheng Bu, a former postdoc in the Research Laboratory of Electronics (RLE) who is now an assistant professor at the University of Florida; Prasanna Sattigeri, Soumya Ghosh, and Subhro Das, research staff members at the MIT-IBM Watson AI Lab; and senior author Gregory Wornell, the Sumitomo Professor in Engineering who leads the Signals, Information, and Algorithms Laboratory in RLE and is a member of the MIT-IBM Watson AI Lab. The research will be presented at the AAAI Conference on Artificial Intelligence.[…]

Read more: www.news.mit.edu

The post Efficient Technique Improves Machine Learning Models’ Reliability first appeared on SwissCognitive, World-Leading AI Network.

Source: SwissCognitive