Artificial intelligence is changing the face of healthcare. The technology has already made its way into every facet of the industry, from diagnosis and treatment to testing and research. It can help predict the onset of serious ailments, pinpoint disease outbreaks, reduce errors in in-hospital procedures, and more. AI has already had a considerable impact on the whole sector. Now, the conversation has shifted to yet another domain: compliance. If we want AI technologies to operate to the fullest, if we want AI to truly revolutionize the field, we need to make them compliant with the regulations proposed for the healthcare industry.
SwissCognitive Guest Blogger: Layla Li, Co-founder and CEO of KOSA AI
Current AI Regulations for the Healthcare Sector
Healthcare is generally a heavily regulated industry, and AI systems built for it must meet equally high standards. As AI is increasingly developed for healthcare applications, the regulatory bar is high because, ultimately, human lives are at stake. On top of this, there are plenty of documented cases of AI bias in which a patient's diagnosis and treatment are affected by their gender or ethnic background, exacerbating existing inequalities in healthcare. Analyses estimate that AI bias in the healthcare system may amount to approximately $93 billion in excess medical care costs and $42 billion in lost productivity per year as whole minority groups are denied basic care.

Skin cancer, for example, is one of the most widespread types of cancer. Dermatologists use AI systems to scan for it in patients and suggest treatment. It is estimated that over 100,000 new melanoma cases will be diagnosed in 2021. However, existing AI models are prone to false negatives in these diagnostics due to gender bias against males, which may result in dreadful losses. (Read more here: Case study: Improve skin cancer diagnostics accuracy through algorithmic bias mitigation)
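To make this kind of bias concrete, here is a minimal sketch, in Python, of the per-group check an audit might run: computing the false-negative rate separately for each gender. The dataset, column names, and values are synthetic assumptions for illustration, not data from the case study.

```python
# Minimal sketch: per-group false-negative rates as a bias signal.
# All data below is synthetic and purely illustrative.
import pandas as pd

# Hypothetical evaluation set: true melanoma labels, model predictions,
# and a gender attribute for each patient (assumed column names).
df = pd.DataFrame({
    "y_true": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "y_pred": [1, 0, 0, 0, 0, 1, 1, 0, 0, 0],
    "gender": ["M", "M", "M", "M", "F", "F", "F", "F", "M", "F"],
})

def false_negative_rate(group: pd.DataFrame) -> float:
    """FNR = missed positives / all true positives within a group."""
    positives = group[group["y_true"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["y_pred"] == 0).mean())

for gender, group in df.groupby("gender"):
    print(gender, false_negative_rate(group))
# A large gap between groups (here, a much higher FNR for "M") is the
# kind of disparity a compliance audit would flag for mitigation.
```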
This illustrates rather well why healthcare is a heavily regulated sector. Even so, the legal framework for AI in healthcare still lacks concrete provisions to tackle the systemic bias inherent in AI systems. In our previous article, "Breaking down the AI healthcare regulations," we outlined the major regulatory frameworks for the use of AI in healthcare, which pave the path to compliance for healthcare companies and AI software providers. The three main EU regulatory frameworks are the EU Medical Devices Regulation, the General Data Protection Regulation (GDPR), and the EU's AI-specific regulations. They suggest that documentation describing the algorithms used in an AI system, and the logic behind them, is crucial, so that faulty diagnoses and unacceptable uses of personal data are prevented and bias built into healthcare algorithms is eliminated. Read this article for more.
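One way to read this documentation requirement is to keep the description of each algorithm in a structured, machine-readable record. Below is a minimal sketch of what that could look like, loosely in the spirit of "model cards." Every field name and value is an illustrative assumption, not a format mandated by any of the regulations above.

```python
# Minimal sketch of structured algorithm documentation.
# Fields and values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    algorithm: str            # the "logic of the algorithm" in plain terms
    training_data: str        # provenance and GDPR basis
    known_limitations: list[str] = field(default_factory=list)
    fairness_evaluations: list[str] = field(default_factory=list)

card = ModelCard(
    name="melanoma-classifier-v2",
    intended_use="Decision support for dermatologists; not standalone diagnosis.",
    algorithm="Convolutional neural network, binary classification.",
    training_data="De-identified dermoscopy images; consent basis documented.",
    known_limitations=["Lower sensitivity reported for male patients."],
    fairness_evaluations=["False-negative rate by gender, audited quarterly."],
)
print(card)
```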
Compliance as the Next Step for AI in Healthcare
Bearing all this in mind, it is crucial to highlight that developments in AI will likely accelerate over the next decade. A 2019 report found that 41 percent of healthcare executives ranked AI as the technology that will have the highest impact on improving their organizations' operations in the next three years. As healthcare organizations figure out how to scale up AI-led innovations, they should also start paying attention to compliance requirements, to better manage AI's inherent risks and mitigate bias in the systems themselves. One approach is to ensure consistency and accuracy in the AI system's data, as sketched below, and to build responsible AI whose outcomes and predictions are ethical, fair, and inclusive, building products for the future.
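As one illustration of a data consistency check, the sketch below verifies that every demographic group clears a minimum share of the training data before a model is retrained. The 5 percent floor and the column name are assumptions chosen for the example, not regulatory thresholds.

```python
# Minimal sketch: flag under-represented groups in training data.
# The threshold and column name are illustrative assumptions.
import pandas as pd

MIN_GROUP_SHARE = 0.05  # assumed audit floor, not a legal requirement

def check_representation(df: pd.DataFrame, column: str) -> dict:
    """Return each group's share of the data and whether it clears the floor."""
    shares = df[column].value_counts(normalize=True)
    return {group: (share, share >= MIN_GROUP_SHARE)
            for group, share in shares.items()}

# Synthetic stand-in for a training set that skews heavily to one group.
train = pd.DataFrame({"gender": ["M"] * 90 + ["F"] * 10})
for group, (share, ok) in check_representation(train, "gender").items():
    print(f"{group}: {share:.0%} {'ok' if ok else 'UNDER-REPRESENTED'}")
```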
Conclusion
AI healthcare software solution providers need to comply with industry regulations to gain trust. AI developments in healthcare must be guided by the principles of designing for the patient's benefit, safety, and privacy with a no-one-left-behind approach, and must be transparent and produce explainable outputs. In parallel, being AI compliant means understanding the "why" behind every system's decision and continuously uprooting risks and potential harm. To borrow some techie jargon, this translates into decoding the past to create a better future for healthcare development.
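For the "why" behind a system's decisions, one widely used starting point is feature importance. The sketch below uses scikit-learn's permutation importance on synthetic data; the dataset and model choice are assumptions standing in for a real clinical pipeline, not a prescribed method.

```python
# Minimal sketch: permutation feature importance as a first step
# toward explainable outputs. Data and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a clinical dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade model performance?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```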
About the Author:
Layla Li, Co-founder and CEO of KOSA AI, is a full-stack developer, data scientist, and data-driven global business strategist. She has built her own machine learning SaaS products from scratch and successfully run proofs of concept with multiple clients, and has done data consulting for a wide range of companies in e-commerce, consumer products, NGOs, telecom, and more. She worked for three years as a data expert at Tesla. Beyond her technical background, she has experience in social media marketing and business development, plus a degree in fashion design, so she considers herself a techie with a holistic business understanding.
The post The Future of AI in Healthcare: Preparing for Compliance first appeared on SwissCognitive, World-Leading AI Network.
Source: SwissCognitive