AWS · Minor
SQS Increased API Error Rates
September 25, 2025 · 09:56 PM UTC – 05:25 AM UTC · Duration: 7h 28min
Affected Services
AWS IoT Core, AWS WAF, AWS CloudTrail, Amazon DynamoDB, Amazon EventBridge, Amazon Kinesis Firehose, AWS Ground Station, Amazon Managed Streaming for Apache Kafka, AWS Lake Formation, AWS Lambda, Amazon SageMaker, AWS Step Functions
Timeline
09:56 PM
We are investigating increased API error rates & latencies for SQS requests in the EU-NORTH-1 Region.
10:25 PM
Beginning at 1:25 PM PDT we began experiencing increased API error rates & latencies for SQS requests in the EU-NORTH-1 Region. Engineers were automatically engaged and immediately began investigating. We are actively working to mitigate this issue; in parallel, we are working to identify the root cause. This issue is impacting the SQS ReceiveMessage API for queues that are configured with server-side encryption. This issue is also affecting some APIs and workflows for other AWS Services. We will provide another update by 4:00 PM.
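For customers assessing whether their queues fall into the impacted configuration, the following is a minimal sketch (not an official AWS script) that uses boto3 to flag queues with server-side encryption enabled; the region and credential setup are assumptions.

import boto3

# Flag SQS queues with server-side encryption enabled, since this event
# impacts ReceiveMessage only for SSE-configured queues. Assumes boto3
# credentials are configured for the affected account.
sqs = boto3.client("sqs", region_name="eu-north-1")

for page in sqs.get_paginator("list_queues").paginate():
    for url in page.get("QueueUrls", []):
        attrs = sqs.get_queue_attributes(
            QueueUrl=url,
            AttributeNames=["KmsMasterKeyId", "SqsManagedSseEnabled"],
        ).get("Attributes", {})
        # SSE-KMS populates KmsMasterKeyId; SQS-managed SSE sets
        # SqsManagedSseEnabled to "true".
        if attrs.get("KmsMasterKeyId") or attrs.get("SqsManagedSseEnabled") == "true":
            print(f"SSE enabled (potentially affected): {url}")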
11:02 PM
We continue to investigate increased API error rates & latencies for SQS ReceiveMessage APIs configured with server-side encryption in the EU-NORTH-1 Region. We are actively working on multiple parallel paths to mitigate the issue and identify its root cause. While we have not yet seen recovery for SQS ReceiveMessage API requests, some AWS Services have seen full recovery for their impacted operations and workflows. We continue to work toward full recovery and will provide an update by 4:45 PM.
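While recovery is in progress, callers can reduce the impact of elevated error rates with retries and backoff. The following is an illustrative sketch using boto3's built-in retry configuration; the retry mode, attempt count, and queue URL are assumptions, not AWS guidance specific to this event.

import boto3
from botocore.config import Config

# Illustrative client-side handling of elevated API error rates: boto3's
# adaptive retry mode backs off and rate-limits automatically on errors.
retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
sqs = boto3.client("sqs", region_name="eu-north-1", config=retry_config)

# ReceiveMessage on an SSE-configured queue; the queue URL is a placeholder.
resp = sqs.receive_message(
    QueueUrl="https://sqs.eu-north-1.amazonaws.com/123456789012/my-queue",
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # long polling reduces empty responses
)
for msg in resp.get("Messages", []):
    print(msg["MessageId"])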
11:59 PM
We continue all efforts to fully mitigate the increased API error rates & latencies for SQS ReceiveMessage APIs configured with server-side encryption in the EU-NORTH-1 Region and to identify the root cause. We are making slower progress than we initially anticipated. Some AWS Services have seen full recovery for their impacted operations and workflows. We have not yet established an ETA for full recovery and will continue to provide regular updates while the team works on multiple parallel paths. We will provide our next update by 5:45 PM PDT.
01:10 AM
We are making steady progress in our ongoing efforts to fully mitigate the increased API error rates & latencies for SQS ReceiveMessage APIs configured with server-side encryption in the EU-NORTH-1 Region. We have identified the root cause to be an issue with a subsystem responsible for SQS metadata. We are pursuing multiple parallel paths and are rolling out changes to address the issue. We currently expect recovery in approximately two hours. We will continue to provide updates as these changes progress. Some AWS Services have fully recovered for their impacted operations and workflows. We will provide our next update by 7:30 PM PDT.
02:32 AM
We continue to make steady progress in our ongoing efforts to fully mitigate the increased API error rates & latencies for SQS ReceiveMessage APIs configured with server-side encryption in the EU-NORTH-1 Region. The mitigation efforts are taking longer than expected and we are currently monitoring the progress of a change that was initiated to address the underlying issue. Some AWS Services have fully recovered for their impacted operations and workflows. We will continue to monitor the progress and will provide our next update by 9:30 PM PDT.
04:42 AM
We continue to make steady progress in our ongoing efforts to fully mitigate the increased API error rates & latencies for SQS ReceiveMessage APIs configured with server-side encryption in the EU-NORTH-1 Region. While the investigation began immediately, it took us longer than expected to determine a path to recovery. We are continuing to monitor the progress of the change that we made, and we are seeing some early signs of recovery. Some AWS Services have fully recovered for their impacted operations and workflows. We will continue to monitor progress and will provide our next update by 10:30 PM PDT.
05:25 AM
Between 1:28 PM and 9:27 PM PDT, we experienced increased API error rates and latencies for SQS ReceiveMessage APIs configured with server-side encryption in the EU-NORTH-1 Region. Engineers were automatically engaged at 1:31 PM and immediately began working on mitigation while simultaneously investigating the root cause of this impact. We identified the root cause to be an issue with a subsystem responsible for SQS metadata and applied mitigations to address it. Once the change took effect, we observed initial signs of recovery and continued to monitor until full recovery at 9:27 PM. We recommend that customers whose queues have a dead-letter queue configured redrive affected messages to the source queue or a custom destination for processing [1]; a sketch follows the reference below. Other AWS Services that were affected by this event have also fully recovered for their impacted operations and workflows. We do not expect this issue to reoccur. The issue has been resolved and all AWS Services are operating normally.
[1] https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue-redrive.html
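The redrive recommendation above can be performed from the console (see [1]) or programmatically. As a minimal sketch, using the SQS StartMessageMoveTask API via boto3; the dead-letter queue ARN and the rate cap are placeholders.

import boto3

sqs = boto3.client("sqs", region_name="eu-north-1")

# Start a redrive from a dead-letter queue; omitting DestinationArn moves
# messages back to their original source queue(s). The ARN is a placeholder.
task = sqs.start_message_move_task(
    SourceArn="arn:aws:sqs:eu-north-1:123456789012:my-dlq",
    MaxNumberOfMessagesPerSecond=50,  # optional rate cap
)
print("Redrive task started:", task["TaskHandle"])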