AWS CRITICAL
Increased Connectivity Issues and API Error Rates
March 2, 2026 · 05:56 AM UTC · Duration: Ongoing
Affected Services
Multiple Services
Timeline
05:56 AM
We are investigating increased API error rates in a single Availability Zone (mes1-az2) in the ME-SOUTH-1 Region.
05:56 AM
The following AWS services have been affected by this event:
Service: appsync Status: Informational
Service: autoscaling Status: Informational
Service: awsiot Status: Informational
Service: awsiotdevicemanagement Status: Informational
Service: awswaf Status: Informational
Service: clientvpn Status: Informational
Service: cloud9 Status: Informational
Service: cloudformation Status: Informational
Service: cloudfront Status: Informational
Service: cloudhsm Status: Informational
Service: cloudshell Status: Informational
Service: cloudtrail Status: Informational
Service: cloudwan Status: Informational
Service: cloudwatch Status: Informational
Service: codebuild Status: Informational
Service: codedeploy Status: Informational
Service: codepipeline Status: Informational
Service: cognito Status: Informational
Service: computeoptimizer Status: Informational
Service: controltower Status: Informational
Service: datasync Status: Informational
Service: directoryservice Status: Informational
Service: dms Status: Informational
Service: drs Status: Informational
Service: ec2 Status: Informational
Service: ecr Status: Informational
Service: ecs Status: Informational
Service: eks Status: Informational
Service: elasticache Status: Informational
Service: elasticbeanstalk Status: Informational
Service: elasticfilesystem Status: Informational
Service: elasticsearch Status: Informational
Service: elb Status: Informational
Service: emr Status: Informational
Service: emrserverless Status: Informational
Service: events Status: Informational
Service: fargate Status: Informational
Service: firehose Status: Informational
Service: fsx Status: Informational
Service: globalaccelerator Status: Informational
Service: glue Status: Informational
Service: iamidentitycenter Status: Informational
Service: inspector Status: Informational
Service: iotdevicedefender Status: Informational
Service: kafka Status: Informational
Service: kinesis Status: Informational
Service: kms Status: Informational
Service: lakeformation Status: Informational
Service: lambda Status: Informational
Service: management-console Status: Degradation
Service: mq Status: Informational
Service: natgateway Status: Informational
Service: networkfirewall Status: Informational
Service: privatelink Status: Informational
Service: rds Status: Degradation
Service: redshift Status: Informational
Service: resourceexplorer Status: Informational
Service: resourcegroups Status: Informational
Service: resourcegroupstaggingapi Status: Informational
Service: route53 Status: Informational
Service: sagemaker Status: Informational
Service: scheduler Status: Informational
Service: servicecatalog Status: Informational
Service: sns Status: Informational
Service: ssmsap Status: Informational
Service: state Status: Informational
Service: storagegateway Status: Informational
Service: swf Status: Informational
Service: transcribe Status: Informational
Service: transfer Status: Informational
Service: transitgateway Status: Informational
Service: vpclattice Status: Informational
Service: vpnvpc Status: Informational
07:09 AM
We are investigating connectivity issues and increased API error rates affecting APIs and instances in a single Availability Zone (mes1-az2) in the ME-SOUTH-1 Region, caused by a localized power issue. Existing instances in this zone are also affected. Other AWS services may be experiencing increased errors and latencies for their workflows, and we are working to route requests away from the affected Availability Zone. We recommend customers make use of other Availability Zones at this time. We are also experiencing delays in propagating Route53 DNS changes to PoPs (Points of Presence) in ME-SOUTH-1. New instance launches that target the remaining AZs using RunInstances should succeed, as sketched below. Existing instances in the other AZs are not affected.
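As an illustration of launching into the remaining AZs, a minimal AWS CLI sketch follows. The AMI ID, instance type, and zone name are placeholders, and the mapping of zone names to zone IDs such as mes1-az2 differs per account, so confirm the mapping first:

    # Confirm which zone name maps to the impaired zone ID (mes1-az2) in your account
    aws ec2 describe-availability-zones --region me-south-1

    # Launch a replacement instance pinned to an unaffected zone (placeholder values)
    aws ec2 run-instances \
        --region me-south-1 \
        --image-id ami-0123456789abcdef0 \
        --instance-type t3.micro \
        --placement AvailabilityZone=me-south-1a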
09:03 AM
We continue to work on a localized power issue affecting a single Availability Zone (mes1-az2) in the ME-SOUTH-1 Region. In the impacted Availability Zone, EC2 Instances, DB Instances, EBS Volumes, and other AWS Services are also experiencing elevated error rates and latencies for some workflows. As part of our recovery effort, we have shifted traffic away from the impacted Availability Zone for most services. We recommend customers utilize one of the other Availability Zones in the ME-SOUTH-1 Region, as existing instances in other AZs remain unaffected by this issue. We are actively working to restore power and connectivity, at which time we will begin recovering affected resources. Currently, we expect recovery to take many hours. We will provide an update by 2:30 AM PST, or sooner if we have additional information to share.
10:41 AM
We continue to work toward restoring power in the affected Availability Zone (mes1-az2) in the ME-SOUTH-1 Region. At this time, some AWS services have shifted traffic away from the affected Availability Zone and are seeing recovery for their affected operations and workflows. EC2 Instances, EBS Volumes, and other resources impacted in the affected Availability Zone will require a longer recovery timeline. Power has not yet been restored to the affected Availability Zone. If immediate recovery is required, we recommend customers restore from EBS Snapshots and/or launch replacement resources in one of the unaffected Availability Zones or an alternate Region. In parallel, we are actively working on reducing the error rates and latencies that some customers are experiencing with EC2 APIs. For now, we recommend continuing to retry any failed API requests. We will provide an update by 6:00 AM PST on March 2, or sooner if we have additional information to share.
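As a sketch of the snapshot-restore path described above, the AWS CLI can recreate an EBS volume from an existing snapshot in an unaffected Availability Zone; the snapshot ID and zone name below are placeholders:

    # List your snapshots to find the most recent one for the affected volume
    aws ec2 describe-snapshots --region me-south-1 --owner-ids self

    # Recreate the volume from a snapshot in an unaffected zone (placeholder values)
    aws ec2 create-volume \
        --region me-south-1 \
        --snapshot-id snap-0123456789abcdef0 \
        --availability-zone me-south-1a

The new volume can then be attached to a replacement instance launched in the same zone.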
02:23 PM
We continue to work toward restoring power in the impacted Availability Zone (mes1-az2) in the ME-SOUTH-1 Region. Meanwhile, EC2 instance and networking APIs have been restored for the other Availability Zones. Additionally, we have made improvements to the availability of RDS multi-AZ databases while operating with the impaired Availability Zone. These improvements will help customers create database exports to preserve data, and we recommend customers with databases in the affected Availability Zone consider creating exports as a precautionary measure. EC2 Instances, EBS Volumes, and other resources impacted in the affected Availability Zone will require a longer recovery timeline, as power has not yet been restored. We are expecting recovery to take at least a day, as it requires repair of facilities, cooling and power systems, coordination with local authorities, and careful assessment to ensure the safety of our operators. If immediate recovery is required, we recommend customers restore from EBS Snapshots and/or launch replacement resources in one of the unaffected Availability Zones or an alternate AWS Region. We will provide an update by 11:00 AM PST on March 2, or sooner if we have additional information to share.
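For customers following the precautionary export guidance above, one possible AWS CLI sequence is sketched below; the database, snapshot, bucket, role, and key identifiers are all placeholders, and the export role must already have write access to the bucket:

    # Take a manual DB snapshot as a precaution (placeholder identifiers)
    aws rds create-db-snapshot \
        --region me-south-1 \
        --db-instance-identifier my-database \
        --db-snapshot-identifier my-database-precaution

    # Export the snapshot to Amazon S3 once it is available (placeholder ARNs)
    aws rds start-export-task \
        --region me-south-1 \
        --export-task-identifier my-database-export \
        --source-arn arn:aws:rds:me-south-1:123456789012:snapshot:my-database-precaution \
        --s3-bucket-name my-export-bucket \
        --iam-role-arn arn:aws:iam::123456789012:role/rds-s3-export-role \
        --kms-key-id arn:aws:kms:me-south-1:123456789012:key/placeholder-key-id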
06:52 PM
We continue to work toward restoring power in the affected Availability Zone (mes1-az2) in the ME-SOUTH-1 Region. We currently expect our recovery efforts to take at least a day. Our guidance regarding immediate recovery remains unchanged from our previous update. Customers are able to disassociate Elastic IP addresses from resources in the affected Availability Zone and associate them with resources in the unaffected Availability Zones; this can be done by specifying --allow-reassociation when associating the Elastic IP with the new resource. We will provide further updates by 2:00 PM PST, or sooner if new information becomes available.
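A minimal invocation of the flag named above might look like the following; the allocation ID and instance ID are placeholders:

    # Move an Elastic IP to a resource in an unaffected zone (placeholder IDs)
    aws ec2 associate-address \
        --region me-south-1 \
        --allocation-id eipalloc-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 \
        --allow-reassociation

With --allow-reassociation, the address is detached from its current resource automatically, so a separate disassociate-address call is not required.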
10:29 PM
We continue to work toward restoring power in the affected Availability Zone (mes1-az2) in the ME-SOUTH-1 Region. We have no updated guidance on expected recovery times and still expect it to take at least a day to fully restore power and connectivity. We continue to advise customers to launch replacement resources in one of the unaffected Availability Zones or an alternate AWS Region. At this time, we recommend that customers who are able to back up data outside of the Region consider doing so. You can view the current status of affected AWS services below. We will provide you with another update by 7:00 PM PST, or sooner if we have additional information to share.
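For customers able to copy data out of the Region, one simple approach is an S3-to-S3 copy into a bucket in another Region; both bucket names below are placeholders:

    # Copy bucket contents to a bucket in another Region (placeholder names)
    aws s3 sync s3://my-mesouth1-bucket s3://my-backup-bucket \
        --source-region me-south-1 \
        --region eu-west-1

This copies objects that are currently readable; requests for objects on impaired infrastructure may fail and can be retried once access is restored.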
12:22 AM
We are providing an update on the ongoing service disruptions affecting the AWS Middle East (UAE) Region (ME-CENTRAL-1) and the AWS Middle East (Bahrain) Region (ME-SOUTH-1). Due to the ongoing conflict in the Middle East, both affected regions have experienced physical impacts to infrastructure as a result of drone strikes. In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impacts to our infrastructure. These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage. We are working closely with local authorities and prioritizing the safety of our personnel throughout our recovery efforts.
In the ME-CENTRAL-1 (UAE) Region, two of our three Availability Zones (mec1-az2 and mec1-az3) remain significantly impaired. The third Availability Zone (mec1-az1) continues to operate normally, though some services have experienced indirect impact due to dependencies on the affected zones. In the ME-SOUTH-1 (Bahrain) Region, one facility has been impacted. Across both regions, customers are experiencing elevated error rates and degraded availability for services including Amazon EC2, Amazon S3, Amazon DynamoDB, AWS Lambda, Amazon Kinesis, Amazon CloudWatch, Amazon RDS, and the AWS Management Console and CLI. We are working to restore full service availability as quickly as possible, though we expect recovery to be prolonged given the nature of the physical damage involved.
In parallel with efforts to restore the physical infrastructure at the affected sites, we are pursuing multiple software-based recovery paths that do not depend on the underlying facilities being fully brought back online. For Amazon S3 and Amazon DynamoDB, we are actively working to restore data access and service availability through software mitigations, including deploying updates that enable S3 to operate within the current infrastructure constraints and remediating impaired DynamoDB tables to restore read and write availability for dependent services. Our focus on these foundational services is deliberate: recovery of Amazon S3 and Amazon DynamoDB will in turn enable a broad range of dependent AWS services to recover. For other affected service APIs, we are deploying targeted software updates to reduce error rates and restore functionality where possible, independent of the physical recovery timeline. We are also working to restore access to the AWS Management Console and CLI through network-level changes that route traffic away from the affected infrastructure. While these software-based mitigations can address many of the service-level impacts, some recovery actions are constrained by the physical state of the affected facilities, so full restoration of certain services will require the underlying infrastructure to be repaired and brought back online. Our teams are working on the physical restoration of the affected facilities and these software-based mitigations in parallel, with the goal of restoring as much customer access as possible ahead of full infrastructure recovery. In addition, we are prioritizing the restoration of services and tools that enable customers to back up and migrate their data and applications out of the affected Regions.
Finally, even as we work to restore these facilities, the ongoing conflict in the region means that the broader operating environment in the Middle East remains unpredictable. We recommend that customers with workloads running in the Middle East take action now to back up their data and consider migrating workloads to alternate AWS Regions. We recommend customers exercise their disaster recovery plans, recover from remote backups stored in other Regions, and update their applications to direct traffic away from the affected Regions. For customers requiring guidance on alternate Regions, we recommend considering AWS Regions in the United States, Europe, or Asia Pacific, as appropriate for your latency and data residency requirements.
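As one illustration of directing traffic away at the DNS layer, assuming a hosted zone with a simple CNAME record, the record can be repointed at an endpoint in an unaffected Region; the hosted zone ID, record name, and target below are all placeholders:

    # Repoint a DNS record at an endpoint in another Region (placeholder values)
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z0123456789EXAMPLE \
        --change-batch '{
          "Comment": "Shift traffic away from the affected Regions",
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "app.example.com",
              "Type": "CNAME",
              "TTL": 60,
              "ResourceRecords": [{"Value": "app-eu.example.com"}]
            }
          }]
        }'

A low TTL keeps the cutover responsive; applications already using health-checked failover records may shift automatically without this manual step.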
We will continue to provide updates as recovery progresses and as the situation evolves. Our next update will be provided by 9:00 PM PST on March 2, 2026, or sooner if new information becomes available.
06:27 AM
We continue to work toward restoring power in the affected Availability Zone (mes1-az2) in the ME-SOUTH-1 Region. We have no updated guidance on expected recovery times and still expect it to take at least a day to fully restore power and connectivity. AWS infrastructure is designed to be highly resilient, but given the uncertainty of the current situation, we encourage customers to replicate Amazon S3 data and other critical data from the ME-SOUTH-1 Region to another AWS Region. For customers requiring guidance on alternate Regions, we recommend considering AWS Regions in the United States, Europe, or Asia Pacific, as appropriate for your latency and data residency requirements. We will provide another update by March 3 at 3:00 AM PST, or sooner if new information becomes available.
For more information on Cross-Region Replication, refer to [1]. For more information on S3 Batch Replication, see [2]. For a simple script to quickly set up and start S3 Replication, see [3]. A minimal CLI sketch for enabling replication follows the references below. If you have questions or concerns, please contact AWS Support [4].
[1] https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html
[2] https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-batch-replication-batch.html
[3] https://github.com/awslabs/aws-support-tools/blob/master/S3/Setup_Replication/setup_replication.py
[4] https://aws.amazon.com/support
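The following is a minimal CLI sketch for enabling Cross-Region Replication, assuming versioning can be enabled on both buckets and an IAM replication role already exists; bucket names and the role ARN are placeholders. Per [1], replication applies to newly written objects, and existing objects require S3 Batch Replication [2]:

    # Versioning must be enabled on both source and destination buckets
    aws s3api put-bucket-versioning \
        --bucket my-mesouth1-bucket \
        --versioning-configuration Status=Enabled
    aws s3api put-bucket-versioning \
        --bucket my-backup-bucket \
        --versioning-configuration Status=Enabled

    # Attach a replication rule pointing at the destination bucket (placeholder ARNs)
    aws s3api put-bucket-replication \
        --bucket my-mesouth1-bucket \
        --replication-configuration '{
          "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
          "Rules": [{
            "ID": "replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::my-backup-bucket"}
          }]
        }'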
11:10 AM
We continue to work toward restoring power in the affected Availability Zone (mes1-az2) in the ME-SOUTH-1 Region. The overall state of the region remains largely unchanged from our previous update. At this time, we have no updated guidance on expected timelines for fully restoring power and connectivity. We are taking all necessary steps to support the recovery process. While progress is being made, significant work remains before full restoration is complete.
Given the ongoing uncertainty, we encourage customers to replicate their Amazon S3 data and other critical data from the ME-SOUTH-1 Region to another AWS Region, using the guidance provided in our previous update. We will continue to provide updates as recovery progresses and as the situation evolves. Our next update will be provided by 6:00 AM PST on March 3, or sooner if new information becomes available.
02:02 PM
Recovery efforts in the affected Availability Zone (mes1-az2) in the ME-SOUTH-1 Region are ongoing, with the situation remaining consistent with our last update. We have no change to expected timelines for fully restoring power and connectivity. While progress is being made, significant work remains before full restoration is complete. We continue to recommend customers launch replacement resources in one of the unaffected Availability Zones or an alternate AWS Region.
Given the extended nature of this event, we continue to encourage customers to replicate Amazon S3 data and other critical workloads from ME-SOUTH-1 to another AWS Region using the guidance shared previously. We will provide our next update by 12:00 PM PST on March 3, or sooner if conditions change.
04:40 PM
We are providing an update on the ongoing service disruptions affecting the AWS Middle East (Bahrain) Region (ME-SOUTH-1). We continue to make progress on recovery efforts across multiple workstreams. With the immediate phase of this event now better understood, we are moving to a more targeted communication model. Going forward, updates will be delivered directly to affected customers through the AWS Personal Health Dashboard. Customers who require assistance with this event are encouraged to contact AWS Support through the AWS Management Console or the AWS Support Center.
We continue to strongly recommend that customers with workloads running in the Middle East take action now to migrate those workloads to alternate AWS Regions. Customers should enact their disaster recovery plans, recover from remote backups stored in other Regions, and update their applications to direct traffic away from the affected Regions. For customers requiring guidance on alternate regions, we recommend considering AWS Regions in the United States, Europe, or Asia Pacific, as appropriate for your latency and data residency requirements.
01:21 AM
The following AWS services have been affected by this event:
Service: appsync Status: Operating normally
Service: autoscaling Status: Informational
Service: awsiot Status: Operating normally
Service: awsiotdevicemanagement Status: Operating normally
Service: awswaf Status: Operating normally
Service: clientvpn Status: Informational
Service: cloud9 Status: Informational
Service: cloudformation Status: Operating normally
Service: cloudfront Status: Operating normally
Service: cloudhsm Status: Informational
Service: cloudshell Status: Operating normally
Service: cloudtrail Status: Operating normally
Service: cloudwan Status: Informational
Service: cloudwatch Status: Operating normally
Service: codebuild Status: Informational
Service: codedeploy Status: Operating normally
Service: codepipeline Status: Operating normally
Service: cognito Status: Operating normally
Service: computeoptimizer Status: Informational
Service: controltower Status: Informational
Service: datasync Status: Operating normally
Service: directoryservice Status: Informational
Service: dms Status: Informational
Service: drs Status: Informational
Service: ec2 Status: Informational
Service: ecr Status: Operating normally
Service: ecs Status: Informational
Service: eks Status: Informational
Service: elasticache Status: Informational
Service: elasticbeanstalk Status: Informational
Service: elasticfilesystem Status: Informational
Service: elasticsearch Status: Informational
Service: elb Status: Informational
Service: emr Status: Informational
Service: emrserverless Status: Operating normally
Service: events Status: Operating normally
Service: fargate Status: Informational
Service: firehose Status: Informational
Service: fsx Status: Informational
Service: globalaccelerator Status: Operating normally
Service: glue Status: Operating normally
Service: iamidentitycenter Status: Operating normally
Service: inspector Status: Informational
Service: iotdevicedefender Status: Operating normally
Service: kafka Status: Informational
Service: kinesis Status: Operating normally
Service: kms Status: Operating normally
Service: lakeformation Status: Operating normally
Service: lambda Status: Informational
Service: management-console Status: Informational
Service: mq Status: Informational
Service: natgateway Status: Informational
Service: networkfirewall Status: Informational
Service: privatelink Status: Informational
Service: rds Status: Informational
Service: redshift Status: Informational
Service: resourceexplorer Status: Informational
Service: resourcegroups Status: Operating normally
Service: resourcegroupstaggingapi Status: Informational
Service: route53 Status: Operating normally
Service: sagemaker Status: Operating normally
Service: scheduler Status: Operating normally
Service: servicecatalog Status: Operating normally
Service: sns Status: Informational
Service: ssmsap Status: Operating normally
Service: state Status: Operating normally
Service: storagegateway Status: Operating normally
Service: swf Status: Operating normally
Service: transcribe Status: Operating normally
Service: transfer Status: Operating normally
Service: transitgateway Status: Informational
Service: vpclattice Status: Informational
Service: vpnvpc Status: Informational