Every major cloud breach in the last five years has one thing in common: the organization that got breached thought their cloud provider was handling something that was actually their responsibility. The shared responsibility model isn't a suggestion — it's a contract, and misunderstanding it has cost companies hundreds of millions of dollars.
The Core Principle
Cloud providers secure the infrastructure — the physical data centers, hypervisors, managed service internals, and the global network. You secure everything you put on that infrastructure — your data, your configurations, your access controls, and your applications.
AWS puts it simply: they handle "security of the cloud." You handle "security in the cloud."
What Changes by Service Model
Your responsibility shifts depending on how much abstraction you're using:
IaaS — You Own Almost Everything
With services like EC2, Azure VMs, or GCE instances, you're responsible for:
```shell
# OS patching — this is on you
sudo apt update && sudo apt upgrade -y

# Firewall rules — default-deny, open only what's needed
sudo ufw default deny incoming
sudo ufw allow 443/tcp
sudo ufw enable

# Disk encryption — not always on by default
# You must verify and enable it
```
The provider gives you a virtual machine. Everything from the OS up is yours to secure.
PaaS — Shared But Not Gone
With managed services like RDS, Cloud Functions, or App Engine, the provider handles OS patching and runtime updates. But you still own:
- Access controls — who can connect to your RDS instance
- Encryption configuration — enabling encryption at rest isn't always the default
- Network exposure — a publicly accessible database is still your mistake
- Backup strategy — managed doesn't mean backed up the way you need
```hcl
resource "aws_db_instance" "main" {
  engine         = "postgres"
  instance_class = "db.t3.medium"

  # YOUR responsibility: encryption at rest
  storage_encrypted = true
  kms_key_id        = aws_kms_key.db.arn

  # YOUR responsibility: not making it public
  publicly_accessible = false

  # YOUR responsibility: backup retention
  backup_retention_period = 14
}
```
SaaS — Less Surface, Same Core Duties
Even with SaaS products, you're responsible for:
- User access management — who has admin access to your SaaS tools
- MFA enforcement — your provider offers it, you must enable it
- Data classification — knowing what sensitive data lives in the service
- Integration security — API keys and OAuth tokens connecting your systems
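These duties are auditable even when the service itself is a black box. Most SaaS admin consoles can export a user list; below is a minimal sketch of the MFA check against such an export (the CSV format, path, and user names are invented for illustration):

```shell
# Hypothetical user export from a SaaS admin console (format is illustrative)
cat > /tmp/users.csv <<'EOF'
user,role,mfa_enabled
alice,admin,true
bob,admin,false
carol,member,false
EOF

# Flag accounts without MFA; admins without it are the highest risk
awk -F, 'NR > 1 && $3 == "false" { print $1 " (" $2 ")" }' /tmp/users.csv
```

Run this on a schedule, not once: SaaS user lists churn, and MFA coverage decays silently as new accounts are created.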
Real Breaches That Prove the Point
Capital One (2019)
A misconfigured WAF on AWS allowed an attacker to exploit an SSRF vulnerability and access S3 buckets containing 106 million customer records. The root cause was overly permissive IAM roles — the compromised service had access to every S3 bucket in the account.
AWS infrastructure was not compromised. Capital One's IAM configuration was.
Cost: $80 million fine + $190 million settlement.
Twitch (2021)
125GB of internal data — including source code, internal tools, and creator payout information — was leaked due to a misconfigured server. The data was stored on infrastructure Twitch controlled and was responsible for securing.
Lesson: your provider secures the storage service. You secure what you put in it.
The Four Things You Must Get Right
1. IAM — Least Privilege, Always
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-app-uploads/*"
    }
  ]
}
```
Rules to enforce on every project:
- No wildcard permissions — "Action": "*" is never acceptable in production
- MFA on every human account — no exceptions
- IAM roles over static credentials — EC2 instance profiles, not access keys
- Regular access reviews — permissions accumulate; prune them quarterly
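The no-wildcard rule is easy to check mechanically in CI. A minimal sketch, assuming policies live in the repository as JSON files (the path and policy below are invented for illustration):

```shell
# Hypothetical policy file: a wildcard Action that should never ship
cat > /tmp/bad-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "*", "Resource": "*" }
  ]
}
EOF

# Fail loudly on wildcard actions before the policy reaches production
if grep -q '"Action": *"\*"' /tmp/bad-policy.json; then
  echo "WILDCARD ACTION: /tmp/bad-policy.json"
fi
```

A real gate should use a proper policy parser, such as IAM Access Analyzer's policy validation, rather than grep, but even this level of check catches the worst offenders.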
2. Encryption — At Rest and In Transit
```shell
# Verify S3 bucket encryption
aws s3api get-bucket-encryption --bucket my-bucket

# Check if RDS encryption is enabled
aws rds describe-db-instances \
  --query "DBInstances[*].[DBInstanceIdentifier,StorageEncrypted]" \
  --output table
```
3. Network Configuration
```shell
# Find security groups with 0.0.0.0/0 on sensitive ports
aws ec2 describe-security-groups \
  --filters "Name=ip-permission.cidr,Values=0.0.0.0/0" \
  --query "SecurityGroups[*].[GroupId,GroupName]" \
  --output table
```
A security group open to 0.0.0.0/0 on port 22 or 3389 is a breach waiting to happen. These should be audited weekly.
4. Patching — The IaaS Tax
If you run IaaS, OS patching is your problem:
```shell
#!/bin/bash
# Automated security patching for Ubuntu servers
set -euo pipefail

apt-get update

# Log what is about to be upgraded (do this before upgrading, or the list is empty)
apt list --upgradeable 2>/dev/null | tee "/var/log/patch-$(date +%F).log"

apt-get upgrade -y
```
For managed services, the provider handles this — which is one of the strongest arguments for using PaaS/SaaS when you can.
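A one-shot script still depends on someone running it. On Ubuntu, the stock alternative is the unattended-upgrades package, switched on by a small APT configuration fragment:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Security updates then install automatically on the distribution's schedule, with no human in the loop.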
Operationalizing the Model
Understanding the model isn't enough. You need to enforce it continuously:
- Infrastructure as Code — define security controls in Terraform/Pulumi so they're versioned and reviewable
- Policy as Code — use AWS Config Rules, Azure Policy, or OPA to automatically detect misconfigurations
- CSPM tools — AWS Security Hub, Azure Defender, or third-party tools like Wiz scan continuously for compliance drift
- Audit logging — CloudTrail, Azure Monitor, and GCP Audit Logs should be enabled on every account, with alerts on high-signal events
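Policy as Code does not require a full OPA deployment on day one. Below is a minimal sketch of the idea, scanning Terraform source for known-bad settings before apply (the directory, file, and resource are invented for illustration):

```shell
# Hypothetical Terraform file containing two misconfigurations
mkdir -p /tmp/tf-check
cat > /tmp/tf-check/db.tf <<'EOF'
resource "aws_db_instance" "legacy" {
  publicly_accessible = true
  storage_encrypted   = false
}
EOF

# Block the pipeline if any .tf file exposes a database or skips encryption
grep -rn -e 'publicly_accessible *= *true' -e 'storage_encrypted *= *false' /tmp/tf-check
```

Production-grade tools (Conftest, tfsec, Checkov, AWS Config Rules) evaluate the parsed configuration rather than raw text, but the principle of failing the pipeline on a misconfiguration is the same.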
Key Takeaways
- Your provider secures their infrastructure — you secure your configurations — this is non-negotiable
- The less abstraction you use, the more you own — IaaS means you own almost everything
- IAM misconfigurations cause the majority of breaches — invest time here first
- Automate compliance checking — manual audits don't scale and drift happens between reviews
- When in doubt, assume it's your responsibility — this mindset prevents the gaps that cause breaches