Druva CSO on ransomware’s impact on cyber insurance

Interview. Yogesh Badwe, chief security officer at SaaS-based data protector Druva, caught up with Blocks & Files for a Q&A session to discuss how ransomware as a data security problem is affecting cyber insurance.

Blocks & Files: Is ransomware a data security problem, rather than a firewall, anti-phishing, or credential-stealing exercise?

Yogesh Badwe: Ransomware is a serious concern for businesses, and data security is absolutely a major part of that. If an attack is successful, ransomware can impact confidentiality, integrity and/or availability of data, and a strong approach to data security can reduce the probability of the negative outcomes that are associated with ransomware. While there is no silver bullet for preventing ransomware, a strong approach to data security, alongside other critical cyber hygiene practices – properly segmented firewalls, anti-phishing measures, and strong password policies and management to prevent tactics like credential stealing – is a critical piece of the puzzle in reducing the likelihood of being impacted by ransomware.

Yogesh Badwe, Druva

Blocks & Files: As ransomware attacks are increasing, what will happen to cyber insurance premiums?

Yogesh Badwe: As ransomware attacks increase, we have also seen an increasing trend of ransomware victims paying the ransom. Average ransomware payments are also on the uptick. This likely changes the calculus for insurance underwriters. In response, we have seen, and will continue to see, a few things:

  1. Scoped-down cyber insurance policies, with sublimits enforced on ransomware payments
  2. Increased premiums
  3. Premiums that are tightly tied to real-time risk postures (as opposed to a one-time understanding of a client's risks)
  4. Increased stringency on risk assessments during the initial policy/premium formulation
  5. Requirements for continuous monitoring – it is in the best interest of cyber insurance providers to monitor for, and inform their clients of, the outside-in cyber weaknesses they see. We will see increasing use of this outside-in, open source security monitoring to mitigate the risk faced by clients (a minimal example of such a check is sketched below).
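
To make item 5 concrete, outside-in monitoring is built from simple external probes against a client's public footprint. Below is a minimal sketch of one such probe in Python – checking how close a host's TLS certificate is to expiry – using only the standard library. A real monitoring service would combine many more signals (open ports, leaked credentials, unpatched services), and the host name here is a placeholder.

```python
# Minimal outside-in check: how many days until a host's TLS
# certificate expires? Standard library only.
import socket
import ssl
import time

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Fetch the host's TLS certificate and return days until expiry."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # ssl.cert_time_to_seconds parses the 'notAfter' field to epoch seconds
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

if __name__ == "__main__":
    remaining = days_until_cert_expiry("example.com")  # placeholder host
    status = "WARN" if remaining < 30 else "OK"
    print(f"{status}: certificate expires in {remaining} days")
```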

Blocks & Files: Do open source supply chains contribute to the risks here and why?

Yogesh Badwe: Yes, vulnerable OSS supply chains can be a surface area that is targeted by ransomware threat actors for initial intrusion or lateral movement inside an organization. We have also seen persistent and well-resourced threat actors stealthily insert backdoors inside commonly used libraries. 
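
One widely used control against tampered dependencies is hash pinning: the build fails closed if a downloaded artifact does not match a digest recorded in advance. A minimal sketch, with a placeholder file name and digest:

```python
# Verify a downloaded dependency artifact against a pinned hash.
# The file name and digest below are placeholders, not real values.
import hashlib
from pathlib import Path

PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Raise if the artifact's SHA-256 differs from the pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"checksum mismatch for {path}: got {digest}, "
            f"expected {expected_sha256} -- refusing to install"
        )

verify_artifact(Path("somelib-1.2.3.tar.gz"), PINNED_SHA256)
```

Package managers support this natively; pip, for example, offers a `--require-hashes` mode for requirements files.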

Blocks & Files: What role does non-human identity (NHI) security play in this?

Yogesh Badwe: NHI is an increasing area of focus for security practitioners. From a ransomware perspective, NHIs are yet another vector for initial intrusion or lateral movement inside an organization. Organizations have spent a lot of time securing human identities via SSO and strong policies around password hygiene – rotation, session lifetime, etc.

To put it into perspective, there are more than 10x as many NHIs as human identities, and as an industry we haven't spent enough time improving NHI security posture. In fact, over the last 18 months, the majority of public breaches have had some sort of NHI component associated with them.

The reality is that NHIs cannot have the same security policy enforcement that we assume for human identities. For example, across all the NHIs in an organization, it is difficult to enforce strict provisioning and de-provisioning processes, to enforce MFA and credential rotation, or to notice misuse, in the way we can for human identities.

Due to NHI sprawl, it is trivial for an attacker to get their hands on an NHI, and typically NHIs have broad sets of permissions that are not monitored to the extent that human identities are. We're seeing a number of startup companies focused on securing NHIs get top-tier VC funding due to the nature and uniqueness of this problem.
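
As a concrete illustration of that monitoring gap, the sketch below flags service-account keys that are past a rotation window or long unused. The inventory format and thresholds here are hypothetical; in practice the data would come from a cloud provider's IAM APIs.

```python
# Flag NHI credentials that are overdue for rotation or de-provisioning.
# Inventory entries are hypothetical stand-ins for IAM API output.
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)   # rotation window
MAX_IDLE = timedelta(days=30)      # unused-key threshold

inventory = [
    {"id": "svc-ci-deploy", "created": "2024-01-10", "last_used": "2025-05-01"},
    {"id": "svc-report-gen", "created": "2025-04-02", "last_used": "2025-04-03"},
]

now = datetime.now(timezone.utc)
for key in inventory:
    created = datetime.fromisoformat(key["created"]).replace(tzinfo=timezone.utc)
    last_used = datetime.fromisoformat(key["last_used"]).replace(tzinfo=timezone.utc)
    findings = []
    if now - created > MAX_KEY_AGE:
        findings.append("older than rotation window")
    if now - last_used > MAX_IDLE:
        findings.append("unused; candidate for de-provisioning")
    if findings:
        print(f"{key['id']}: {', '.join(findings)}")
```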

Blocks & Files: Should there be a federal common standard for cybersecurity?

Yogesh Badwe: Absolutely. Decades ago we had GAAP (generally accepted accounting principles) come out, which laid down a clear set of guidelines, rules, expectations, and processes for the bare-minimum, baseline accounting standard in the finance world.

We don’t have a GAAP for security. What we have is a variety of overlapping (and sometimes subjective) industry standards and frameworks that different organizations use differently. Duty of care, as it relates to the reasonable security measures an entity should take, is left to the judgment and discretion of each individual entity, without any common federally accepted definition of what good security looks like.

Only a federal common standard on cybersecurity will help convert that tribal knowledge of what good looks like into an enforceable and auditable framework like GAAP.

Blocks & Files: Can AI be used to improve data security, and how do you ensure it works well?

Yogesh Badwe: Generative AI can be leveraged to improve a number of security paradigms, including data security. It can play a transformative role in everything from gathering and generating relevant context about data and its classification, to surfacing anomalies in permissions and activity patterns, to helping security practitioners prioritize, action, and remediate data security concerns.
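
As one illustration of the anomaly-surfacing idea, the sketch below compares an identity's activity today against its own historical baseline. The counts are made-up illustrative data; real systems would use far richer features than a daily access count.

```python
# Flag activity that exceeds an identity's own baseline
# (mean + 3 standard deviations over recent history).
from statistics import mean, stdev

history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]  # daily accesses, last 10 days
today = 87  # today's access count (illustrative)

threshold = mean(history) + 3 * stdev(history)
if today > threshold:
    print(f"ANOMALY: {today} accesses today vs. threshold {threshold:.1f}")
```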

One simple example is a security analyst reviewing a data security alert around activity related to sensitive data. He or she can leverage generative AI to get context about the data, context about the activity, and the precise next steps to triage or mitigate the risk. The possibilities to leverage AI to improve data security are limitless.

How do we ensure it works well? Generative AI is itself a data security problem. We have to be careful in ensuring the security of the data that is leveraged by GenAI technologies. As an example, we have to think about how to enforce the permissions and authorization that exist on source data, as well as on the output generated by the AI models.
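
A minimal sketch of that principle – enforcing source-data permissions before anything reaches the model – assuming a hypothetical document store and ACL shape:

```python
# Permission-aware retrieval: drop every document the requesting user
# is not already authorized to read before it becomes model context.
# The Document shape and ACLs here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set = field(default_factory=set)

def retrieve_for_user(user: str, candidates: list) -> list:
    """Enforce source-data permissions on retrieval results."""
    return [d for d in candidates if user in d.allowed_users]

docs = [
    Document("d1", "Quarterly revenue summary", {"alice", "bob"}),
    Document("d2", "M&A target shortlist", {"alice"}),
]
context = retrieve_for_user("bob", docs)  # bob sees only d1
```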

It’s essential to continue with human-in-the-loop processes, at least initially, until the use cases and technology mature to the point where we can rely on them 100 percent and allow them to make state changes in response to data security concerns.
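
A human-in-the-loop gate can be as simple as refusing to execute any AI-proposed state change without explicit approval. A minimal sketch, with hypothetical remediation actions:

```python
# Queue AI-proposed remediations until a human explicitly approves them.
from typing import Optional

def execute_remediation(action: str, approved_by: Optional[str]) -> None:
    if approved_by is None:
        print(f"QUEUED for review: {action}")
        return
    print(f"EXECUTING (approved by {approved_by}): {action}")

execute_remediation("revoke token svc-report-gen", approved_by=None)
execute_remediation("revoke token svc-report-gen", approved_by="alice")
```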