For many organizations, data is their most valuable asset. And while the cloud facilitates many data-driven processes, well-thought-out backup strategies are more important today than ever. No hyperscaler in the world prevents data loss or outages effortlessly: behind reliable availability there are always complex, well-planned concepts.
Why are backups so important?
There are many reasons why applications and websites can fail – from human error like misconfiguration to cyberattacks (ransomware or other malware). The backup you have in place can make the difference between getting back up and running quickly or suffering long-term damage.
A good backup helps you:
- Prevent data loss
- Minimize downtime
- Stay operational in an emergency
The 3-2-1 rule – a must, even in the cloud
With the tried-and-tested 3-2-1 rule, you have a simple and effective way to keep your data secure – even in the cloud:
- 3 copies of your data: the original plus 2 backups
- 2 different storage types: for example exports to object storage, storage gateways, or copies to separate backup vaults
- 1 copy at another location: off-site protection against physical incidents such as fire or theft
Using this strategy, you will be reliably protected against a wide range of failure scenarios – from accidental deletion to regional disasters.
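On AWS, the rule maps directly onto standard services. Here is a minimal boto3 sketch, assuming an existing EBS volume; the regions, volume ID, and descriptions are hypothetical:

```python
import boto3

# Copy 1: the original data on an EBS volume (region and ID are hypothetical).
ec2 = boto3.client("ec2", region_name="eu-central-1")

# Copy 2: a snapshot in the same region, i.e. a second storage type.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="3-2-1: second copy as an EBS snapshot",
)
# Wait until the snapshot completes before copying it.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Copy 3: replicate the snapshot to another region as the off-site copy.
ec2_dr = boto3.client("ec2", region_name="eu-west-1")
ec2_dr.copy_snapshot(
    SourceRegion="eu-central-1",
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="3-2-1: off-site copy in a second region",
)
```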
Backup strategies – customized instead of one-size-fits-all
Especially in complex system landscapes, one thing holds true: There is no universal backup solution. Instead, define clear requirements, particularly with regard to:
- RTO (Recovery Time Objective): How quickly do your systems need to be up and running again after a failure?
- RPO (Recovery Point Objective): How much data loss is still acceptable to you in a worst-case scenario?
Define these metrics together with the responsible business unit managers. That is the only way to choose the right methodology and technology.
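A quick worked example: an RPO of 4 hours means backups must run at least every 4 hours, while an RTO of 1 hour means the complete, tested restore procedure has to finish within that hour. The small helper below (purely illustrative, not part of any AWS API) turns an RPO target into an AWS cron schedule expression:

```python
def schedule_from_rpo(rpo_hours: int) -> str:
    """Derive an AWS cron schedule expression from an RPO target.

    To never lose more than `rpo_hours` of data, backups must run at
    least that often; the interval must divide 24 evenly so the cycle
    stays aligned from day to day.
    """
    if rpo_hours < 1 or 24 % rpo_hours != 0:
        raise ValueError("RPO must be a divisor of 24 hours for a simple daily cycle")
    return f"cron(0 */{rpo_hours} * * ? *)"

# An RPO of 4 hours yields cron(0 */4 * * ? *), i.e. a backup every 4 hours.
print(schedule_from_rpo(4))
```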
Tools and methods in the AWS cloud
Amazon Web Services provides a wide range of building blocks for robust backup strategies; two of them are shown in the sketch after this list. These include:
- AWS Backup: scheduled and event-driven backups
- Amazon Machine Images: complete images of EC2 instances
- Snapshots: of databases and volumes
- Log exports and versioning: especially useful for object storage
- Cross-region replication: backups replicated to a second region for greater resilience, up to triple redundancy if required
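Two of these building blocks in a short boto3 sketch; the bucket name and instance ID are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# Versioning keeps every object revision, so overwrites and deletions
# can be rolled back -- especially useful for object storage.
s3.put_bucket_versioning(
    Bucket="example-data-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)

# An Amazon Machine Image captures a complete image of an EC2 instance,
# including its attached EBS volumes.
ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Name="nightly-ami-backup",
    NoReboot=True,  # capture the image without stopping the instance
)
```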
Example of an AWS-based strategy
- Scheduled backups: every 4 hours, around the clock
- Event-driven backups: triggered automatically before maintenance windows and software updates
- Cross-region replication: snapshots and object versions copied to a second region to protect against regional outages (a sketch of the scheduled part follows this list)
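Here is how the scheduled part of such a strategy could be expressed as an AWS Backup plan via boto3; the plan name, vault names, account ID, and retention period are assumptions:

```python
import boto3

backup = boto3.client("backup", region_name="eu-central-1")

backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "every-4-hours-with-dr-copy",  # hypothetical name
        "Rules": [
            {
                "RuleName": "scheduled-4h",
                "TargetBackupVaultName": "primary-vault",  # hypothetical vault
                # Every 4 hours, around the clock.
                "ScheduleExpression": "cron(0 */4 * * ? *)",
                "StartWindowMinutes": 60,
                "Lifecycle": {"DeleteAfterDays": 35},  # assumed retention
                # Replicate each recovery point to a vault in a second region.
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:eu-west-1:123456789012"
                            ":backup-vault:dr-vault"  # hypothetical ARN
                        ),
                        "Lifecycle": {"DeleteAfterDays": 35},
                    }
                ],
            }
        ],
    }
)
```

The event-driven backups can then be triggered on demand with the same client's start_backup_job, for example from a pipeline step that runs just before a software update.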
What makes a good backup?
A backup is only useful if it works reliably in the event of an emergency. The following characteristics are of particular importance:
- Regularity: automation based on defined RPO/RTO criteria
- Redundancy: multiple geographically separated storage locations
- Security: encryption in transit and at rest (see the vault sketch after this list)
- Recoverability: reliable and tested restore procedures
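For the security characteristic, an AWS Backup vault can be created with a customer-managed KMS key so that all recovery points in it are encrypted at rest; the vault name and key ARN below are placeholders:

```python
import boto3

backup = boto3.client("backup")

# Recovery points stored in this vault are encrypted at rest with the
# given KMS key; traffic to the AWS Backup API is encrypted in transit via TLS.
backup.create_backup_vault(
    BackupVaultName="encrypted-vault",  # hypothetical name
    EncryptionKeyArn=(
        "arn:aws:kms:eu-central-1:123456789012:"
        "key/00000000-0000-0000-0000-000000000000"  # placeholder key ARN
    ),
)
```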
What is often overlooked: Backups are only as good as the restore processes. In regulated industries, companies must even provide evidence of regular restore tests.
Restore tests – the ultimate test in daily operations
Restore tests are an essential tool to:
- Check the performance of the backup system
- Validate internal processes and responsibilities
- Identify weaknesses in documentation
What about your organization? Can you restore your sales database within the expected timeframe? Do your employees know in which configuration file they need to enter the correct endpoint?
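One way to make restore tests part of daily operations is to script them. Below is a minimal boto3 sketch that restores the most recent recovery point from a vault; the vault name and IAM role are assumptions, and the restore metadata keys depend on the resource type:

```python
import boto3

backup = boto3.client("backup")
VAULT = "primary-vault"  # hypothetical vault name

# Pick the most recent recovery point in the vault.
points = backup.list_recovery_points_by_backup_vault(BackupVaultName=VAULT)
latest = max(points["RecoveryPoints"], key=lambda p: p["CreationDate"])

# Fetch the restore metadata recorded at backup time; its keys vary
# by resource type (EBS, RDS, S3, ...).
meta = backup.get_recovery_point_restore_metadata(
    BackupVaultName=VAULT,
    RecoveryPointArn=latest["RecoveryPointArn"],
)

# Start the test restore into a disposable resource, then verify and clean up.
job = backup.start_restore_job(
    RecoveryPointArn=latest["RecoveryPointArn"],
    Metadata=meta["RestoreMetadata"],
    # Hypothetical role that AWS Backup assumes for the restore.
    IamRoleArn="arn:aws:iam::123456789012:role/backup-restore-role",
)
print("Restore job started:", job["RestoreJobId"])
```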
When reviewing and updating documentation and procedures, pay attention not only to their quality but also to where they are stored. Keeping the wiki's restore instructions in the wiki itself, for example, would not be advisable.
Backups are like insurance policies: You hope you will never need them. But in the event of an emergency, you will quickly see who is prepared – and who remains capable of acting.