The risks of cloud storage and backup failures

KordaMentha

Contributor

KordaMentha, an independent firm in Asia-Pacific, specializes in cybersecurity, financial crime, forensic, performance improvement, real estate, and restructuring services. With a diverse team of almost 400 specialists, they provide customised solutions to help clients grow, protect from financial loss, and recover value. Trusted since 2002, they deliver bold, impactful solutions for clients.

The critical need for businesses to safeguard against IT failures, such as disruption by a third-party provider, was dramatically highlighted recently by Google Cloud's accidental deletion of UniSuper's entire company and customer data.

The incident, caused by an "inadvertent misconfiguration during provisioning", could have been catastrophic for UniSuper and its 647,000 members, had the $140 billion superannuation fund not held backups with another cloud provider.

The failure is a stark reminder of many organisations' heavy dependence on outsourced cloud and IT service providers. It also highlights the potential consequences organisations face when they fail to address the inherent risks of cloud and IT outsourcing, including reputational and financial damage, reduced stakeholder trust and potential legal liabilities. Even if fault is found to lie with the third-party service provider, affected customers will typically hold responsible the company with which they have the direct relationship.

As most Australian businesses store data in the cloud, it's a timely prompt to examine the critical areas for risk mitigation.

Firstly, many service level agreements (SLAs) lack a matrix that clearly allocates responsibilities between the client and the provider. Defining responsibilities through tools like a RACI chart (Responsible, Accountable, Consulted and Informed) can ensure accountability and prevent mismatched or unrealistic expectations. For example, responsibility for comprehensive, current backups is vital and – unfortunately for the client – it is in the provider's interests not to take accountability for this area.
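A RACI allocation like the one described can be sketched as a small data structure, together with a check that every activity has exactly one Accountable party – the core RACI rule. The activities and parties below are purely illustrative, not drawn from any real SLA:

```python
# Minimal sketch of an SLA responsibility matrix using the RACI model.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
# Activities and parties are hypothetical examples only.

matrix = {
    "daily backups":            {"provider": "R", "client": "A"},
    "backup restoration tests": {"client": "R", "provider": "C"},  # no "A" assigned
    "incident response":        {"client": "A", "provider": "R"},
    "patching cloud platform":  {"provider": "A", "client": "I"},
}

def validate(matrix):
    """Return activities lacking exactly one Accountable party."""
    problems = []
    for activity, roles in matrix.items():
        accountable = [party for party, role in roles.items() if role == "A"]
        if len(accountable) != 1:
            problems.append(activity)
    return problems

print(validate(matrix))  # ['backup restoration tests']
```

Running the check surfaces exactly the kind of gap the article warns about: no one is accountable for backup restoration testing.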

Secondly, business interruption plans such as incident response and disaster recovery plans must be tested with input and buy-in from the provider. Otherwise, should a provider experience a cyber breach or failure, the client may find their supplier unavailable until the incident is resolved. Moreover, cyber criminals might also access the client's systems through a breach of the supplier.

Additionally, incident response plans must set recovery time and recovery point objectives aligned with the maximum downtime and data loss the business can tolerate. For example, it took more than two weeks to rectify the problems that affected UniSuper – a delay clearly unacceptable to many of its members.
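The relationship between these objectives and an actual incident can be sketched as a simple assessment: the gap between the last good backup and the incident is measured against the recovery point objective (RPO), and the outage duration against the recovery time objective (RTO). The objectives and timeline below are assumed figures for illustration:

```python
from datetime import datetime, timedelta

# Illustrative objectives; real values come from a business impact analysis.
RPO = timedelta(hours=4)    # maximum tolerable data loss
RTO = timedelta(hours=12)   # maximum tolerable downtime

def assess(last_backup, incident_start, service_restored):
    """Compare an incident timeline against the stated objectives."""
    data_loss = incident_start - last_backup
    downtime = service_restored - incident_start
    return {"rpo_met": data_loss <= RPO, "rto_met": downtime <= RTO}

# Hypothetical timeline with a two-week outage, as in the UniSuper case.
result = assess(
    last_backup=datetime(2024, 5, 2, 0, 0),
    incident_start=datetime(2024, 5, 2, 9, 0),
    service_restored=datetime(2024, 5, 16, 9, 0),
)
print(result)  # {'rpo_met': False, 'rto_met': False}
```

A two-week restoration fails a twelve-hour RTO by a wide margin, which is precisely why these objectives need to be agreed and tested with the provider in advance.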

Organisations also need to consider whether the cloud is appropriate for them in the long term. Cost-benefit analysis may show it is more cost-effective initially, particularly for organisations in start-up and scale-up mode, but studies show that maintaining data in-house can be more cost-effective after year four. Some analysts even suggest 'you're crazy if you don't start in the cloud; you're crazy if you stay on it'.1

Cost creep is a significant issue because cheap storage tends to lead to the retention of excessive data, which leads to another, perhaps more significant, issue. Cyber risks increase exponentially if personal data is retained unnecessarily and a cyber breach occurs. Therefore, data minimisation and optimisation are more likely if data is kept in-house, where there is a higher awareness of costs. This approach not only reduces costs but mitigates risk.

Cyber resilience also rests heavily on comprehensive, current backups. Although having backups in multiple locations may offer some comfort, backup restoration testing often occurs infrequently or not at all. Organisations commonly rely on automated backup systems, which can fail silently. A crisis then reveals that data hasn't been backed up properly for months.
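The silent-failure problem described above can be caught with a routine freshness check: scan the backup job log for the last *successful* run and alert when it is older than the expected schedule. The log entries and schedule here are hypothetical:

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=1)  # assumed nightly backup schedule

# Hypothetical backup job log: (timestamp, completed_ok).
# Jobs can start on schedule yet never complete successfully.
backup_log = [
    (datetime(2024, 5, 1, 2, 0), True),
    (datetime(2024, 5, 2, 2, 0), False),  # started but failed silently
    (datetime(2024, 5, 3, 2, 0), False),
]

def last_good_backup(log):
    """Timestamp of the most recent successful backup, or None."""
    good = [ts for ts, ok in log if ok]
    return max(good) if good else None

def is_stale(log, now):
    """True when no successful backup exists within MAX_AGE."""
    last = last_good_backup(log)
    return last is None or now - last > MAX_AGE

print(is_stale(backup_log, now=datetime(2024, 5, 4, 9, 0)))  # True
```

The key design point is checking the age of the last *successful* job rather than the last *scheduled* one – the distinction that lets silent failures accumulate unnoticed.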

Additionally, it is critical to ensure that backups themselves are well protected from cyber criminals. We see more and more ransomware attacks in which hackers are fully aware that organisations keep backups as insurance against such attacks, to avoid paying ransoms. As a result, hackers actively search for and delete backups to worsen the victim's position.

To assess an organisation's overall cyber resilience, a very high level of expertise is required. If this is not held in-house, qualified and highly skilled consultants can advise on mitigating the many risks inherent in cybersecurity. In-house expertise can also be augmented with external cyber advisory support on an ongoing basis through services such as vCISO (virtual chief information security officer).

Organisations considering how to mitigate third-party and supplier risk, particularly from cloud services providers, should undertake a due diligence assessment and review, which should include:

  • Ensuring that service level agreements (SLAs) include a responsibility and accountability matrix.
  • Checking that incident response plans include recovery time and recovery point objectives that align with business and operational needs.
  • Ensuring disaster recovery plans have meaningful input, buy-in and documented acceptance from the cloud provider.
  • Verifying that the service provider itself has strong cyber resilience and its own comprehensive business continuity and disaster recovery plans.
  • Scheduling regular testing of backups to ensure they are current, comprehensive and effective.
  • Implementing appropriate protection of backups from ransomware attacks. A popular approach is the '3-2-1 rule' of backup – three copies of your data on two different types of media (such as disk, tape or cloud) with one copy offsite.
  • Evaluating the long-term cost-effectiveness of cloud compute versus traditional on-premises compute.
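The '3-2-1 rule' from the checklist above lends itself to a simple automated check over an inventory of backup copies: at least three copies, on at least two media types, with at least one offsite. The copy records below are illustrative:

```python
# Minimal check of the '3-2-1' backup rule described above.
# Copy records are hypothetical examples.

copies = [
    {"location": "primary-dc",   "media": "disk",  "offsite": False},
    {"location": "tape-library", "media": "tape",  "offsite": False},
    {"location": "cloud-vault",  "media": "cloud", "offsite": True},
]

def meets_3_2_1(copies):
    """True when the inventory satisfies all three conditions of the rule."""
    enough_copies = len(copies) >= 3                        # three copies
    enough_media = len({c["media"] for c in copies}) >= 2   # two media types
    one_offsite = any(c["offsite"] for c in copies)         # one offsite
    return enough_copies and enough_media and one_offsite

print(meets_3_2_1(copies))  # True
```

A check like this is most useful when run regularly against a live inventory, so that decommissioning a tape library or an offsite vault immediately flags the loss of 3-2-1 coverage.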

Of course, whether your organisation's risk assessment results in managing your data in-house or in the cloud, ongoing and expert vigilance is the key. Because accidents happen.

Footnote

1. Andreessen Horowitz, 'The Cost of Cloud, A Trillion Dollar Paradox', 27 May 2021, https://a16z.com/the-cost-of-cloud-a-trillion-dollar-paradox/.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
