Disaster Recovery and Business Continuity for Australian SMBs: RTO, RPO, and What They Mean for Your Business

Managed IT & Cybersecurity

Most Australian small and medium-sized businesses have some form of backup in place. Cloud sync is running, an external drive is plugged in somewhere, or a server is quietly copying files to a network folder each night. That is enough — right?

Not quite. Having data stored somewhere is not the same as being able to recover quickly when something goes wrong. The difference between a minor disruption and a business-ending event often comes down to whether a business has defined what recovery actually means, tested that it is achievable, and built the infrastructure to support it.

This article explains the fundamentals of disaster recovery planning for Australian SMBs — specifically the two numbers that define your recovery position (RTO and RPO), the 3-2-1 backup rule, and what a real disaster recovery plan looks like in practice.


Why Most SMBs Don't Have a Real Disaster Recovery Plan

Ask most SMB owners whether they have backups. The answer is usually yes. Ask them when they last tested a restore. The answer is usually never, or "we think IT tested it a while ago."

Ask them how long the business can operate without its accounting system, customer database, or email — and you will often get a pause, then an estimate. Ask them how much data loss is acceptable if a failure occurs at 4:59 pm on a Tuesday, right before the backup runs. The answer is rarely a specific, documented figure.

This is the gap at the centre of SMB disaster recovery. It is not a lack of awareness — most business owners know backups matter. It is the absence of formal thinking about what recovery actually requires, expressed in concrete, measurable terms.

The consequences of this gap become visible at the worst possible time. A ransomware attack lands. A server fails. A staff member accidentally deletes a critical database. The business turns to IT — internal or external — and discovers that the backup has not been running properly for three months, or that the restore process takes 36 hours, or that the backup system itself was encrypted along with everything else.

The Australian Cyber Security Centre (ACSC) consistently identifies backup and recovery failures as a key factor that turns a manageable cyber incident into a catastrophic one. The businesses that recover quickly are not the ones that had backups — they are the ones that had tested, documented recovery processes built around defined recovery objectives.


What Is Disaster Recovery (DR)?

Disaster recovery is the process of restoring IT systems, data, and operations following a disruptive event. That event might be a ransomware attack, a hardware failure, accidental deletion, a natural disaster (flood, fire, cyclone), a power outage, or a building access issue that prevents staff from reaching on-premises systems.

DR is often conflated with business continuity planning (BCP), but they are distinct — though closely related. Business continuity planning is the broader framework for keeping a business operational during and after a disruption. It covers manual workarounds, alternate working locations, staff communication plans, customer-facing continuity arrangements, and regulatory notification requirements.

Disaster recovery is the IT restoration component within that broader framework: how do we get our systems and data back, how fast, and at what point in time?

Both are necessary. A business that can restore its systems in four hours but has no plan for how staff will work during those four hours — or how to notify customers — has only half a plan. But for most SMBs, the IT recovery component is the least defined and the highest risk, and that is where this article focuses.


RTO and RPO — The Two Numbers Every Business Needs

If you take one thing from this article, make it this: every business needs to define its Recovery Time Objective and its Recovery Point Objective before it buys a single piece of backup infrastructure. These two numbers determine everything else.

Recovery Time Objective (RTO)

The Recovery Time Objective is the maximum amount of time your business can operate without its IT systems following an incident before the impact becomes unacceptable.

RTO is expressed in time — hours, typically. It is not how long you want recovery to take. It is the maximum time the business can sustain before the disruption causes material harm: loss of revenue, failure to meet contractual obligations, inability to service customers, or regulatory breach.

Examples of RTO in practice:

  • A professional services firm with largely paper-capable workflows might define an RTO of 8 hours — they can manage client calls and urgent matters manually for most of a business day.
  • A medical practice with electronic health records as the primary system and no paper fallback might define an RTO of 2 hours — beyond that, clinical safety and patient care are at risk.
  • An e-commerce business where the website is the revenue source might define an RTO of 1 hour or less — every hour of downtime is direct, measurable revenue loss.

RTO drives decisions about recovery infrastructure. A business with an 8-hour RTO can restore from cloud backup manually, which is slower but cheaper. A business with a 1-hour RTO needs near-instant failover infrastructure — virtualised standby systems, hot failover, or Disaster Recovery as a Service (DRaaS) — which costs more but delivers the recovery speed required.

Recovery Point Objective (RPO)

The Recovery Point Objective is the maximum amount of data loss the business can accept, expressed as a period of time. Specifically, it is the age of the most recent backup that would be acceptable to restore from if a failure occurred right now.

If your RPO is 24 hours and you back up daily at midnight, a failure at 11:55 pm means restoring from last night's backup — losing nearly 24 hours of transactions, emails, and changes. If that is acceptable, your backup schedule matches your RPO. If it is not acceptable, your backup frequency needs to increase.
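The relationship between backup frequency and worst-case data loss can be sketched in a few lines. This is an illustration only — the function names are ours, not from any backup product:

```python
def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    """Worst-case loss: a failure strikes just before the next backup runs,
    so everything since the previous backup is gone."""
    return backup_interval_hours

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """A schedule meets the RPO only if its worst-case loss fits inside the RPO."""
    return worst_case_data_loss_hours(backup_interval_hours) <= rpo_hours

# Nightly backups (24-hour interval) against a 24-hour RPO: just acceptable.
print(meets_rpo(24, 24))  # True
# The same nightly schedule against a 1-hour RPO: a 24x gap.
print(meets_rpo(24, 1))   # False
```

Running the check against your own backup interval and RPO makes the gap (or the match) explicit rather than assumed.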

Examples of RPO in practice:

  • A retail business using cloud-based point-of-sale with all sales data stored in the cloud in real time might have an RPO of minutes or seconds — they can tolerate almost no data loss because the cloud system maintains continuous state.
  • An architectural practice where designers work in large local files that sync to the server periodically might define an RPO of 4 hours — they can re-do up to four hours of design work if necessary, but a full day's loss would be unacceptable.
  • A payroll team running fortnightly payroll runs might define an RPO of 1 hour for payroll data specifically, given the regulatory and contractual obligations associated with accurate payroll records.

RPO drives decisions about backup frequency and method. Daily snapshots are sufficient for a 24-hour RPO. An hourly RPO requires continuous data protection, real-time replication, or near-real-time cloud backup agents.

A Worked Example

Consider a law firm with five staff. After reviewing its operations, the firm defines:

  • RTO of 4 hours. The firm can use paper-based processes and phone calls for up to four hours before the inability to access precedents, billing records, and court deadlines creates real client risk.
  • RPO of 1 hour. Any more than one hour of billing data, file notes, and correspondence loss is unacceptable given the firm's regulatory obligations and billing accuracy requirements.

These two numbers immediately expose a problem: the firm's current backup — a nightly copy to a NAS on the office network — produces a 24-hour RPO, not a 1-hour RPO. It also has no documented restore process, making the actual RTO unknown.

To meet a 1-hour RPO, the firm needs continuous or near-continuous data protection — a cloud backup agent running every 15–30 minutes, or real-time replication to a cloud environment. To meet a 4-hour RTO, they need a tested restore procedure, not just a copy of data sitting on a drive.

The numbers make the gap concrete. Without them, the firm would continue believing its backup was adequate.


The 3-2-1 Backup Rule

The 3-2-1 rule is the most widely cited framework for backup architecture, and for good reason — it is simple, technology-agnostic, and resilient against the most common failure modes.

The rule states: maintain 3 copies of your data, on 2 different storage media types, with 1 copy stored offsite.

3 copies

This means your original production data plus two backups. Not one backup — two. The reasoning is straightforward: a single backup eliminates single-point-of-failure risk from the production system, but creates a new single point of failure in the backup itself. Two independent backups mean one backup can fail silently (as they often do) without leaving you with nothing.

2 different media types

The two backups should be on different storage technologies — for example, a local NAS plus a cloud backup service, or an on-premises backup server plus encrypted cloud storage. The purpose is to prevent a single failure mode — a firmware bug, a compatibility issue, a ransomware strain that targets a specific storage type — from taking out both backups simultaneously.

Common combinations used by Australian SMBs:

  • Local NAS (for fast local restores) plus cloud backup service such as Veeam Cloud Connect, Acronis Cyber Cloud, or Datto Backupify
  • On-premises backup server plus encrypted cloud storage (Backblaze B2, AWS S3, Azure Blob)
  • Windows Backup to internal drive plus a cloud backup agent

1 copy offsite

At least one backup must be physically and logically separated from your office and your production systems. This protects against physical disasters (fire, flood, theft) and, critically, against ransomware — which, as discussed below, specifically targets backup systems connected to the production network.

Cloud backup satisfies the offsite requirement if the cloud storage is not connected to your production network and cannot be modified or deleted by ransomware that has compromised your systems. This is where immutability becomes important.

The modern extension: 3-2-1-1-0

The 3-2-1 rule has been extended by modern backup vendors to address the ransomware threat specifically. The updated rule adds:

  • 1 additional copy that is either immutable (cannot be modified or deleted for a defined retention period) or air-gapped (physically disconnected from any network)
  • 0 errors confirmed through verified, tested restores

The zero is the most important addition. A backup that has never been tested is not a backup — it is an assumption. Backup jobs fail silently. Media degrades. Restore procedures that have never been rehearsed take far longer than expected. The "0 errors" in 3-2-1-1-0 means your backup is only valid if you have confirmed it restores successfully.


Common Backup Approaches and Their Limitations

Not all backup methods are equal. The table below summarises the most common approaches used by Australian SMBs, their recovery characteristics, and their resilience against the most damaging current threat: ransomware.

| Backup method | Recovery speed | Data loss risk | Ransomware resilience | Relative cost |
| --- | --- | --- | --- | --- |
| External hard drive (manual rotation) | Slow | High — depends on manual discipline | Very low — drive often connected at time of attack | Low |
| On-site NAS (automated) | Moderate | Moderate — depends on snapshot frequency | Low — if network-accessible, ransomware can encrypt it | Moderate |
| Cloud backup (Veeam, Acronis, Datto) | Moderate to fast | Low — frequent automated snapshots | High if immutable storage is configured | Moderate |
| Hybrid local + cloud | Fast local restore + offsite protection | Low | High — cloud copy survives local compromise | Moderate to high |
| DRaaS (Disaster Recovery as a Service) | Fast — near-instant failover possible | Very low — near-real-time replication | High — production environment replicated offsite | High |

A few critical observations.

External hard drives remain common in small businesses. They are also the backup method most vulnerable to ransomware — if the drive is connected when the ransomware executes, it will be encrypted alongside everything else. If the last rotation was three weeks ago, that is three weeks of data loss. Manual processes require manual discipline, which is difficult to sustain.

On-site NAS systems configured as a network share are similarly vulnerable. Ransomware that has compromised a Windows machine with access to a NAS share will encrypt that share. NAS systems with snapshot capabilities (such as QNAP or Synology) offer some protection if snapshots are retained in a location the compromised machine cannot access — but this requires deliberate configuration, not default settings.

Cloud backup with immutable storage is the current best practice for ransomware resilience. Immutable storage means that once written, the backup data cannot be modified or deleted for a defined retention period — even by an administrator account that has been compromised. Ransomware cannot destroy what it cannot modify.
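To make "immutable" concrete in configuration terms, the sketch below builds a default-retention object in the shape used by AWS S3's Object Lock feature. The field names (`ObjectLockEnabled`, `Rule`, `DefaultRetention`, `Mode`, `Days`) are real S3 API fields; the helper function and its 30-day floor are our own illustration, and equivalents exist in other clouds:

```python
def object_lock_config(retention_days: int, mode: str = "COMPLIANCE") -> dict:
    """Build an S3 Object Lock default-retention configuration.

    COMPLIANCE mode means no account -- including a compromised administrator
    account -- can shorten or delete the retention before it expires. That is
    the property that stops ransomware from purging backups.
    """
    if mode not in ("COMPLIANCE", "GOVERNANCE"):
        raise ValueError("mode must be COMPLIANCE or GOVERNANCE")
    if retention_days < 30:
        # 30 days is an assumed floor, matching common attacker dwell times.
        raise ValueError("retention under 30 days leaves little margin over dwell time")
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": mode, "Days": retention_days}},
    }

print(object_lock_config(90))
```

In practice this dictionary would be passed to the cloud provider's API when the backup bucket is created; the point of the sketch is that immutability is a deliberate configuration choice, not a default.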

For most Australian SMBs, a hybrid approach — fast local backup for quick restores of individual files combined with immutable cloud backup for disaster-level recovery — offers the best balance of recovery speed, data protection, and cost.

The Essential Eight framework from the ACSC specifically addresses backups under Mitigation Strategy 8, recommending that backups be disconnected from the network, tested regularly, and retained for at least three months. Compliance with the Essential Eight is increasingly a baseline expectation for Australian businesses in regulated sectors.


Testing Your Recovery — The Step Most Businesses Skip

The single most common failure in SMB disaster recovery is not a technical failure — it is the failure to test. Backup jobs that have been silently failing for months. Restore procedures that exist only in someone's head and have never been rehearsed. Full system restores that take 18 hours when the RTO was defined as 4.

Testing is the only way to know whether your recovery capability is real.

A minimum testing schedule for Australian SMBs:

Monthly — file-level restore test. Restore a sample of files from the previous night's backup to a different location and verify they are intact and readable. This confirms the backup job is running, the data is accessible, and the restore process works at a basic level. This test takes 15 minutes and should be logged.
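A monthly file-level restore test can be partly automated. The sketch below compares a restored sample against the originals by checksum — the function names and directory layout are our own illustration, not part of any backup product:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_dir: Path, restored_dir: Path) -> list[str]:
    """Compare every file in the sample against its restored copy.
    Returns a list of problems; an empty list means the restore is verified."""
    failures = []
    for src in original_dir.rglob("*"):
        if not src.is_file():
            continue
        dst = restored_dir / src.relative_to(original_dir)
        if not dst.is_file():
            failures.append(f"missing: {dst}")
        elif sha256(src) != sha256(dst):
            failures.append(f"corrupt: {dst}")
    return failures
```

Logging the output of each run (date, files checked, failures found, elapsed time) produces exactly the test documentation described below, and the elapsed time feeds directly into your RTO evidence.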

Quarterly — system-level restore test. Restore a critical system (file server, accounting server, email) to a test environment — either a separate physical machine or a virtual machine — and confirm it boots, connects to required services, and operates correctly. Measure how long the restore takes. Compare to your RTO.

Annually — full DR exercise. Simulate a complete loss of the production environment. Restore from backup to a clean environment, bring all critical systems online, verify connectivity and data integrity, and document the actual recovery time and data loss. Identify gaps and adjust the backup strategy accordingly.

Document every test: what was restored, when, how long it took, what the actual recovery time and data loss would have been, and any issues encountered. This documentation serves two purposes — it drives continuous improvement, and it demonstrates due diligence to cyber insurers and regulators.

The cyber incident response process your business follows during an actual incident will be dramatically more effective if the recovery steps have been rehearsed. Two in the morning, mid-ransomware attack, is not the time to discover that the restore procedure is undocumented and the only person who knows how it works is overseas.


What Your Disaster Recovery Plan Should Contain

A disaster recovery plan is a document — a real, written, accessible-offline document — not a mental model or an informal understanding. It needs to be specific enough that a competent person who has never seen your systems before could execute it under pressure.

At minimum, your DR plan should cover the following.

Inventory of critical systems with individual RTO/RPO targets. Not all systems have the same recovery priority. Your customer database may need to be online within two hours; your internal document archive can wait 24 hours. Rank your systems, define individual targets, and sequence your recovery accordingly.
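A ranked inventory can be as simple as a small table sorted by RTO. The systems and targets below are hypothetical examples, not recommendations:

```python
from dataclasses import dataclass

@dataclass
class SystemTarget:
    name: str
    rto_hours: float
    rpo_hours: float

# Hypothetical inventory -- every name and number here is illustrative.
inventory = [
    SystemTarget("Document archive", rto_hours=24, rpo_hours=24),
    SystemTarget("Customer database", rto_hours=2, rpo_hours=1),
    SystemTarget("Email", rto_hours=4, rpo_hours=1),
]

# Restore sequence: tightest RTO first.
recovery_order = sorted(inventory, key=lambda s: s.rto_hours)
for rank, system in enumerate(recovery_order, start=1):
    print(f"{rank}. {system.name} (RTO {system.rto_hours}h, RPO {system.rpo_hours}h)")
```

Sorting by RTO gives the restoration sequence; dependencies (such as Active Directory before the file server) still need to be layered on top by hand.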

Backup schedule and method for each system. Document what is backed up, how often, where the backup is stored, how long backups are retained, and who is responsible for each backup job. This should be specific enough to verify.

Step-by-step restoration procedures. For each critical system, document the exact procedure to restore from backup: which backup to use, in what order, what dependencies exist (Active Directory before file server, for example), what credentials are required, and what verification steps confirm the restore was successful. These procedures must be stored somewhere accessible if your systems are down — printed and secured, or stored in a cloud document outside your production environment.

Contact list. Include your IT provider (Pickle: 1300 688 588 / [email protected]), your cloud backup vendor's emergency line, your cyber insurer's claims line, and the ACSC's reporting portal at reportcyber.gov.au. If the incident involves personal information, you will also need your privacy officer's contact and an understanding of your notifiable data breach obligations under the Privacy Act.

Communication plan. Decide in advance who notifies staff that systems are down and what the message is. Who contacts customers if service is affected? Who speaks to the media if the incident becomes public? Who notifies regulators if required? These decisions should not be made under pressure during an incident.

Escalation and decision-making authority. Define who has the authority to declare a DR event (as opposed to continuing to troubleshoot). Define who decides to pay a ransom demand versus invoking recovery. Define who authorises external forensic assistance. Without clear authority, incidents stall.


Ransomware and Disaster Recovery

Ransomware deserves specific attention in any discussion of disaster recovery, because ransomware is not a bolt-from-the-blue event. It is a deliberate, targeted process designed to maximise the attacker's leverage — and backup systems are a primary target.

Modern ransomware operators do not simply encrypt systems as soon as they gain access. They dwell inside the network, sometimes for weeks, before deploying the encryption payload. During that time, they are mapping the environment, identifying backup systems, and either encrypting, corrupting, or exfiltrating backup data to eliminate the victim's recovery options. By the time the ransom note appears, the attacker has often ensured that "just restore from backup" is no longer straightforward.

This is why backup resilience is not just about having backups — it is about having backups the attacker cannot reach. Immutable cloud backups that are logically isolated from the production network cannot be encrypted or deleted by ransomware running on a compromised machine. Air-gapped backups (physically disconnected from any network) offer the highest protection but are operationally complex for most SMBs.

The practical implications:

Any backup system that is continuously connected to the production network and accessible via standard file protocols (SMB, NFS) is vulnerable to ransomware. This includes NAS shares, mapped network drives, and some on-premises backup agents if not properly configured.

Cloud backups must use immutable storage with a retention period longer than the dwell time of a typical attacker — 30 to 90 days minimum. Immutable storage with a 7-day retention period is insufficient if the attacker has been in the network for 14 days.
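The retention-versus-dwell-time arithmetic is simple enough to state directly (the function name is our own illustration):

```python
def clean_restore_point_exists(retention_days: int, attacker_dwell_days: int) -> bool:
    """An immutable backup only saves you if at least one retained restore
    point predates the intrusion -- i.e. retention outlasts the dwell time."""
    return retention_days > attacker_dwell_days

# 7-day immutable retention against a 14-day dwell: every surviving
# restore point was written while the attacker was already inside.
print(clean_restore_point_exists(7, 14))   # False
# 90-day retention against the same dwell: clean points survive.
print(clean_restore_point_exists(90, 14))  # True
```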

Having a tested, documented DR plan means the difference between a four-hour recovery — isolate the compromised systems, restore from immutable cloud backup, resume operations — and a three-week crisis involving forensic investigation, ransom negotiation, and regulatory notification.

For more on preventing ransomware from reaching this point, see Pickle's guide to ransomware prevention. Prevention reduces the probability of an incident; a robust DR plan limits the damage when prevention is not enough.


How Pickle Approaches Disaster Recovery for Australian SMBs

Disaster recovery planning is not a one-size-fits-all exercise. The right backup frequency, the right storage architecture, and the right recovery infrastructure depend entirely on your business's specific RTO and RPO requirements — which in turn depend on how your business operates, what your systems are, and what data loss and downtime actually cost you.

Pickle works with Australian SMBs, strata buildings, and commercial properties to design and manage backup and recovery solutions built around each business's actual recovery objectives. That means:

  • Conducting an RTO/RPO assessment to define recovery requirements for each critical system
  • Designing a backup architecture that meets those requirements — combining local and cloud backup with immutable storage where ransomware resilience is a priority
  • Implementing and monitoring backup jobs to ensure they run correctly and alerts fire when they do not
  • Managing and documenting restore procedures so that recovery can be executed by any competent person, not just whoever set the system up
  • Conducting regular restore tests and providing documented test results

The goal is to move businesses from "we have backups" to "we know we can recover, in this amount of time, with this amount of data loss, and we have proved it."

For Australian businesses that want to understand their current recovery position — or that have never formally defined their RTO and RPO — Pickle offers an initial assessment to identify gaps and recommend a path forward.

To get started, call 1300 688 588 or email [email protected].

You can also learn more about Pickle's broader managed IT services offering for Australian SMBs.


FAQ

Q: What is the difference between a backup and a disaster recovery plan?

A: A backup is a copy of your data stored separately from your production systems. A disaster recovery plan is the documented process for using that backup — and other resources — to restore your business operations following a disruptive event. Having a backup without a tested DR plan means you have the raw material for recovery but no confirmed ability to use it under pressure. Many businesses discover this distinction only after a serious incident.

Q: How often should we test our backups?

A: At minimum, test a file-level restore monthly to confirm the backup is running and readable. Test a full system restore at least annually — restore to a clean environment, confirm the system operates correctly, and measure how long it takes. Businesses with tight RTOs or high ransomware exposure should test full restores quarterly. Every test should be documented, and results should be compared to defined RTO/RPO targets.

Q: What is a realistic RTO for a small business?

A: It depends entirely on the business. A tradie with a cloud-based job management system and a mobile-capable team might function for 24 hours without core IT systems with limited impact. A medical practice or legal firm with regulatory obligations tied to electronic record access might have an RTO of 2 to 4 hours. An e-commerce business may measure its RTO in minutes. The right question is not "what is realistic" — it is "how long can we actually afford to be down?" Start there, then build the recovery infrastructure to meet that requirement.

Q: Can ransomware encrypt cloud backups?

A: Yes — if the cloud backup is accessed via a mapped drive or connected folder that is reachable from a compromised machine, ransomware can encrypt it like any other network resource. The protection against this is immutable cloud storage: a configuration where written backup data cannot be modified or deleted for a defined retention period, even by an account with administrative credentials. Immutable backups stored in a cloud environment isolated from the production network are the current best practice for ransomware-resilient backups.

Q: Does cyber insurance require a disaster recovery plan?

A: Most Australian cyber insurers now ask detailed questions about backup and recovery practices during underwriting. A documented, tested disaster recovery plan — including defined RTO/RPO, regular restore testing, and immutable offsite backups — can reduce premiums and, more importantly, demonstrates the controls insurers expect to see before paying a claim. Policies with poor recovery controls may face coverage disputes, particularly if a claim arises from a ransomware attack that exploited inadequate backup practices. Insurers are increasingly requiring proof of tested recovery capability, not just the presence of a backup system.