AWS MAJOR

Increased Error Rates

March 7, 2026 · 07:53 PM UTC – 09:04 PM UTC · Duration: 1h 11min

Affected Services

Amazon Managed Workflows for Apache Airflow, AWS AppConfig, AWS AppSync, Amazon Athena, AWS WAF, AWS Batch, AWS Client VPN, AWS CloudTrail, Amazon CloudWatch, AWS CodeDeploy, AWS DataSync, Amazon Elastic Compute Cloud, Amazon Elastic Container Registry, Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, Amazon OpenSearch Service, Amazon Elastic Load Balancing, Amazon Elastic MapReduce, AWS Fargate, Amazon Kinesis Firehose, Amazon Glacier, AWS Glue, Amazon Managed Streaming for Apache Kafka, Amazon Kinesis Data Streams, AWS Key Management Service, AWS Lambda, AWS Application Migration Service, AWS VPCE PrivateLink, Amazon Route 53, Amazon SageMaker, AWS Cloud Map, AWS Storage Gateway, AWS Transfer Family, AWS Transit Gateway, Amazon VPC Lattice

Timeline

07:53 PM
We are investigating increased error rates in the EU-CENTRAL-2 Region.
08:17 PM
We can confirm substantial error rates for PUT and GET requests to Amazon S3 in the EU-CENTRAL-2 Region. Engineers were engaged immediately based on automated alarming. We have isolated the issue to a subsystem responsible for assembling objects from bytes in storage. We have begun implementing mitigations and are observing some improvement in error rates. We continue to work to identify the root cause and are pursuing multiple parallel paths to fully mitigate the issue. Other AWS services (such as EC2 launches) that rely on S3 are also affected by this issue. Existing EC2 instances are unaffected. We will provide another update by 12:45 PM PST (08:45 PM UTC), or sooner if we have additional information to share.
08:28 PM
We are seeing early signs of recovery and continue to monitor and work toward full recovery.
09:04 PM
Between 11:27 AM and 12:20 PM PST (07:27 PM – 08:20 PM UTC), we experienced substantial error rates for S3 PUT and GET requests in the EU-CENTRAL-2 Region. Engineers were engaged immediately based on automated alarming. We identified the root cause as an issue with a subsystem responsible for assembling objects from bytes in storage. At 12:04 PM PST (08:04 PM UTC), we implemented mitigations and began observing early signs of recovery for S3. Error rates continued to improve, and other AWS services continued to recover until 12:50 PM PST (08:50 PM UTC), when we observed full recovery. We continue to work toward backfilling CloudWatch logs, and expect that to continue over the next couple of hours. We recommend that customers retry any failed requests. The issue has been resolved and all services are operating normally.
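
For customers retrying failed requests, the sketch below shows one way to do so using boto3's built-in retry support. The bucket and key names are placeholders, and the retry settings are illustrative assumptions, not a recommendation from this notice.

import boto3
from botocore.config import Config

# "standard" retry mode retries transient faults (including 5xx errors
# and throttling) with exponential backoff; max_attempts caps total tries.
config = Config(retries={"max_attempts": 5, "mode": "standard"})

s3 = boto3.client("s3", region_name="eu-central-2", config=config)

# Placeholder bucket/key: a GET that boto3 retries automatically
# if it encounters a transient error.
response = s3.get_object(Bucket="example-bucket", Key="example-object")
data = response["Body"].read()

The same Config object can be passed to any boto3 client, so the retry behavior applies equally to other affected services such as Lambda or Kinesis.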