AI and automation are proving to be effective shields against data breaches, reducing breach lifecycles and costs, according to a recent IBM study.
Copyright: venturebeat.com – “IBM Study Reveals How AI, Automation Protect Enterprises Against Data Breaches”
The more integrated AI, automation and threat intelligence are across tech stacks and SecOps teams, the better an enterprise can withstand breaches. Follow-on benefits include greater cyber-resilience and lower breach costs than enterprises with no AI or automation defenses at all.
IBM Security’s 2023 Cost of a Data Breach Report provides compelling evidence that investing in AI, automation and threat intelligence delivers shorter breach lifecycles, lower breach costs and a stronger, more resilient security posture company-wide. The report is based on analysis of 553 actual breaches between March 2022 and March 2023.
The findings are good news for CISOs and their teams, many of whom are short-staffed and juggling multiple priorities, balancing support for new business initiatives with protecting virtual workforces. As IBM found, the average total cost of a data breach reached an all-time high of $4.45 million globally, a 15% increase over the last three years. That adds pressure to identify and contain breaches faster.
IBM’s Institute for Business Value study of AI and automation in cybersecurity also finds that enterprises using AI as part of their broader cybersecurity strategy concentrate on gaining a more holistic view of their digital landscapes. Thirty-five percent are applying AI and automation to discover endpoints and improve how they manage assets, a use case they predict will increase by 50% in three years. Endpoints are an ideal place to apply AI against breaches because of the proliferating number of new identities on every endpoint.
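The report doesn’t prescribe specific tooling for this use case, but the endpoint-discovery workflow it describes is easy to picture. The sketch below is a minimal, hypothetical Python illustration of the kind of automated asset discovery such programs build on: probe a subnet for responsive endpoints and record them in an inventory. The subnet, probe ports and record format are illustrative assumptions, not details from IBM’s study.

```python
# Minimal sketch of automated endpoint discovery (illustrative only).
# The subnet, ports and inventory schema are assumptions, not IBM's methodology.
import socket
import ipaddress
from datetime import datetime, timezone

SUBNET = "192.168.1.0/28"      # hypothetical management subnet
PROBE_PORTS = [22, 443, 3389]  # SSH, HTTPS, RDP -- common signs of a live endpoint
TIMEOUT_SECONDS = 0.5

def discover_endpoints(subnet: str) -> list[dict]:
    """TCP-connect probe of each address in the subnet; return a basic asset inventory."""
    inventory = []
    for host in ipaddress.ip_network(subnet).hosts():
        open_ports = []
        for port in PROBE_PORTS:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(TIMEOUT_SECONDS)
                if sock.connect_ex((str(host), port)) == 0:
                    open_ports.append(port)
        if open_ports:
            inventory.append({
                "address": str(host),
                "open_ports": open_ports,
                "first_seen": datetime.now(timezone.utc).isoformat(),
            })
    return inventory

if __name__ == "__main__":
    for asset in discover_endpoints(SUBNET):
        print(asset)
```

In practice the AI and automation the report describes sit on top of inventories like this one, correlating newly discovered endpoints and identities with threat intelligence rather than simply listing them.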
Why AI needs to be cybersecurity’s new DNA
Scanning public cloud instances for gaps in cloud security (including misconfigurations), inventing new malware and ransomware strains, and using generative AI and ChatGPT to fine-tune social engineering and pretexting attacks are just a few of the ways attackers try to evade detection.
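Defenders can automate the same misconfiguration checks that attackers script. As a hedged illustration (not something drawn from the report), the sketch below uses the standard boto3 AWS SDK to flag S3 buckets whose public-access settings deserve review; the selection logic and remediation steps are assumptions.

```python
# Minimal defensive sketch: flag S3 buckets that may be publicly exposed.
# Illustrative only -- the flagging criteria are assumptions, not part of the IBM report.
# Requires boto3 and AWS credentials with permission to read bucket ACLs and settings.
import boto3
from botocore.exceptions import ClientError

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_potentially_public_buckets() -> list[str]:
    """Return buckets with an incomplete public-access block or an AllUsers ACL grant."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            fully_blocked = all(block.values())
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                fully_blocked = False  # no public-access block configured at all
            else:
                raise
        acl = s3.get_bucket_acl(Bucket=name)
        acl_public = any(
            grant.get("Grantee", {}).get("URI") == ALL_USERS_URI
            for grant in acl["Grants"]
        )
        if not fully_blocked or acl_public:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    for name in find_potentially_public_buckets():
        print(f"review bucket: {name}")
```

Running checks like this continuously, rather than ad hoc, is the kind of automation the report credits with shortening breach lifecycles: the gap is found and closed before an attacker's own scan gets there.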
– SwissCognitive