May 12, 23:58 UTC
Resolved - This incident has been resolved.
May 12, 23:51 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 12, 23:38 UTC
Identified - The issue has been identified and a fix is being implemented.
May 12, 20:13 UTC
Resolved - This incident has been resolved.
May 12, 19:36 UTC
Investigating - We are investigating elevated errors on requests to Claude Sonnet 4.6 and Haiku 4.5. We will provide an update as soon as possible.
May 12, 18:57 UTC
Resolved - This incident has been resolved.
May 12, 18:51 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 12, 18:47 UTC
Investigating - We are currently investigating this issue.
May 9, 23:51 UTC
Resolved - This incident has been resolved.
May 9, 23:33 UTC
Investigating - We are currently investigating this issue.
May 6, 21:26 UTC
Resolved - This incident has been resolved.
May 6, 17:54 UTC
Update - We are continuing to monitor for any further issues.
May 6, 17:54 UTC
Monitoring - GitHub is reporting a major incident for degraded pull request availability. This does not impact running CircleCI builds for newly created pull requests or push events.
May 6, 20:13 UTC
Resolved - This incident is resolved. Thank you for your patience, and we apologize for any inconvenience.
May 6, 20:03 UTC
Monitoring - The rollback was successful. Pipelines are flowing normally again. Monitoring for a bit.
May 6, 19:43 UTC
Update - We are continuing to investigate this issue.
May 6, 19:42 UTC
Investigating - We are having an issue with some pipelines being lost. We are rolling back the affected service, and will share more information when we have it.
May 1, 14:02 UTC
Resolved - This issue has been resolved. All data is up to date through yesterday.
May 1, 10:04 UTC
Investigating - We are currently investigating.
May 1, 11:08 UTC
Resolved - This incident has been resolved.
May 1, 11:02 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 1, 10:58 UTC
Identified - The issue has been identified and a fix is being implemented.
May 1, 10:51 UTC
Investigating - We are investigating the cause.
Apr 24, 18:36 UTC
Resolved - Customers running macOS jobs on m4pro.medium and m4pro.large with the xcode:26.4.0 image experienced job rejections with the error: "Job was rejected because resource class
Customers whose jobs failed may rerun affected jobs.
We thank you for your patience while our team worked on implementing a fix.
Apr 24, 18:01 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 24, 17:25 UTC
Identified - There are build failures for Xcode 26.4.0; we have identified the issue and are currently implementing a fix.
May 13, 06:01 UTC
Resolved - This incident has been resolved.
May 13, 03:31 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 13, 00:57 UTC
Identified - One of Cloudflare's upstream transit providers is currently experiencing a route leak, impacting a subset of traffic reaching Cloudflare's network. Customers whose traffic traverses affected paths may experience connection timeouts or errors. We have identified the source and are working with the transit provider to resolve the issue.
May 13, 04:30 UTC
Resolved - Between 04:30 and 04:56 UTC, customers reaching Bangkok, Thailand would have experienced an increase in 5xx errors and performance degradation due to congestion.
May 12, 21:20 UTC
Resolved - This incident has been resolved.
May 12, 21:00 UTC
Investigating - Cloudflare is investigating issues with the DNS resolver component within Gateway in our Chennai (MAA) and Mumbai (BOM) data centers. Performance and connectivity through Gateway may be impacted, and identity-based Gateway policies may not be enforced correctly for users in these regions. We are working to understand the full impact and mitigate this problem.
May 12, 21:17 UTC
Resolved - This incident has been resolved.
May 12, 20:23 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 12, 19:57 UTC
Investigating - Cloudflare is investigating an issue with https://www.cloudflare.com/ips-v4. Customers attempting to view the published list of Cloudflare IPv4 ranges may receive an HTTP 500 error. This issue is limited to the IP-ranges page on www.cloudflare.com and does not affect Cloudflare's CDN, DNS, security, or any other production services. We are working to mitigate this problem.
May 12, 19:48 UTC
Resolved - This incident has been resolved.
May 12, 19:20 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 12, 19:07 UTC
Identified - The issue has been identified and a fix is being implemented.
May 12, 19:04 UTC
Investigating - Cloudflare is investigating issues with network performance near Johannesburg (JNB). We are working to analyse and mitigate this problem. More updates to follow shortly.
May 9, 01:15 UTC
Resolved - This incident has been resolved.
May 8, 20:08 UTC
Investigating - We have identified that Let's Encrypt is experiencing an API service outage. Their team is currently investigating the issue. We will provide updates as soon as more information becomes available. We sincerely apologize for any inconvenience this may cause.
May 8, 11:54 UTC
Resolved - This incident has been resolved.
May 8, 08:07 UTC
Monitoring - The service disruption affecting Cloudways customers hosted on VULTR infrastructure in the Sydney and Melbourne regions has now been resolved by the VULTR team.
We are continuing to monitor the situation closely to ensure services remain stable. Further updates will be shared if required.
Thank you for your patience and understanding.
May 8, 05:45 UTC
Investigating - We are currently experiencing service disruption affecting Cloudways customers hosted on VULTR infrastructure in the Sydney and Melbourne regions.
The VULTR team is actively investigating the issue, and we are closely monitoring the situation. We will share further updates as soon as more information becomes available.
Thank you for your patience and understanding.
Apr 30, 08:32 UTC
Resolved - This incident has been resolved.
Apr 29, 21:58 UTC
Investigating - Our upstream provider Vultr is currently experiencing a partial outage in their Seoul - Korea location. Their engineering team is actively working to resolve the issue as soon as possible. We regret any inconvenience this may cause.
Apr 29, 23:27 UTC
Resolved - This incident has been resolved.
Apr 29, 19:46 UTC
Monitoring - We have implemented a fix for this issue and are currently monitoring the results. We will provide a final update once the resolution is confirmed.
Apr 29, 19:37 UTC
Identified - We are currently investigating an issue related to the Cloudways Platform dashboard. While our servers remain fully operational and all hosted websites are working fine, some users may experience slow loading times or temporary errors when accessing their account or platform details.
Our engineering team has implemented a fix and we are currently monitoring the situation to ensure full stability. We will provide further updates as they become available. Thank you for your patience.
Apr 21, 02:15 UTC
Resolved - This incident has been resolved.
Apr 20, 14:51 UTC
Monitoring - A fix has been implemented by the cloud provider team and they are monitoring the results.
Apr 20, 12:33 UTC
Investigating - Our upstream provider Linode is currently investigating an emerging service issue affecting connectivity in their Frankfurt (DE-FRA-2) region. We will share further updates as soon as more information becomes available from the provider.
May 5, 14:41 UTC
Resolved - We've recovered the logs for all customers for the affected dates and these have been sent through to customer storage.
May 5, 13:13 UTC
Identified - Due to an internal change in data permissions, some audit logging data was not delivered to customers between April 30 and May 4. The pipeline has since recovered, and today’s logs are being delivered as expected. We are now working to restore the missing logs from the past few days.
May 4, 16:30 UTC
Resolved - The issue has been resolved.
May 4, 14:08 UTC
Monitoring - All issues have been resolved, we are monitoring the situation.
May 4, 14:06 UTC
Identified - We have identified the issue and are addressing it. Most emails are being sent again. SSO login issues have been resolved.
May 4, 13:20 UTC
Investigating - Some customers are experiencing issues logging in/receiving email communication such as invitations and password resets. We are investigating.
Apr 28, 12:22 UTC
Resolved - The issue has been resolved.
Apr 28, 12:05 UTC
Monitoring - This was a transient issue and has since resolved. We continue to monitor the situation.
Apr 28, 11:59 UTC
Investigating - Some customers are reporting issues accessing their spaces/environments in the Contentful web app. We are investigating the issue.
Apr 23, 14:18 UTC
Resolved - The issue has been resolved.
Apr 23, 14:09 UTC
Identified - We are working to improve the situation and are seeing reduced error rates on workflow management.
Apr 23, 13:39 UTC
Investigating - Some customers are experiencing issues with workflows; we are investigating.
Apr 23, 11:57 UTC
Resolved - The issue has been resolved.
Apr 23, 11:52 UTC
Monitoring - The situation has improved. We will continue to monitor.
Apr 23, 11:38 UTC
Investigating - Some customers are experiencing 500 errors on our APIs. We are investigating.
Apr 17, 00:30 EDT
Resolved - Earlier this morning, some accounts hosted on our Sydney servers experienced errors when loading forms, including “There was a problem on our end trying to load forms” and “An unknown error occurred.”
The issue has been resolved, and services have returned to normal. We will continue to monitor.
Apr 8, 10:49 EDT
Resolved - The issue affecting email notification delivery has been resolved. A temporary third-party blocklist listing was identified and removed. Email notifications are now functioning normally.
If you continue to experience issues, please contact support.
Apr 7, 19:56 EDT
Monitoring - We have identified the cause of the email notification delivery issue as a temporary listing of one of our sending IPs on a third-party blocklist, which resulted in email rejections for some customers.
The listing has since been removed, and we have not observed any new rejection logs since 2:21 PM ET. Email notifications should now be functioning normally.
We are continuing to monitor the situation to ensure stability. We will provide another update once we confirm the issue is fully resolved.
Apr 7, 15:35 EDT
Investigating - Some customers are reporting that email notifications aren't being sent as expected upon form completion, and our team is investigating now.
We will post more information as it becomes available.
Mar 26, 11:57 EDT
Resolved - This incident has been resolved.
Mar 26, 11:18 EDT
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 26, 10:50 EDT
Investigating - We are investigating reports of inaccessible form lists affecting accounts in the Frankfurt region. We understand the disruption this may cause and are actively working to resolve it as quickly as possible. We will provide updates as more information becomes available.
Feb 12, 15:22 EST
Resolved - This incident has been resolved.
Feb 12, 14:55 EST
Monitoring - A fix has been implemented. Existing authorizations should now function as expected. If you’re still seeing an error, please re-authenticate the connector.
Feb 12, 14:10 EST
Investigating - We are currently investigating an issue impacting Microsoft Excel and Sharepoint connectors on the platform.
At this time, customers who encounter connector errors are unable to successfully reconnect to Microsoft. Once the error appears, reconnection attempts are failing, and affected connectors remain blocked.
Our team is actively working to identify the root cause and restore reconnection functionality as quickly as possible.
We will provide updates here as soon as more information becomes available.
Feb 4, 15:27 EST
Resolved - The fix for this issue has been deployed. We are currently monitoring to ensure normal behavior has been fully restored.
Feb 2, 15:58 EST
Identified - We have identified the cause of the issue impacting form submission redirects.
Our engineering team is actively working on a fix, which will be implemented as soon as possible. We will continue to provide updates here as progress is made.
Feb 2, 15:27 EST
Investigating - We are currently investigating an issue where some form submissions are not redirecting respondents as expected after submission.
Under normal circumstances, respondents are redirected back to the form if there is a validation or connector error, allowing them to correct their information and resubmit. If no errors occur, respondents are redirected to the configured thank-you page or redirect URL.
Some submissions may remain on the response processing page and display a default confirmation message instead of showing potential errors.
Our team is actively investigating the root cause of this issue. We will continue to post updates here as more information becomes available.
May 12, 17:43 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 12, 17:43 UTC
Update - All services have fully recovered.
May 12, 16:59 UTC
Update - CodeQL has fully recovered. We're continuing to work on recovery for the remaining impacted services.
May 12, 16:29 UTC
Update - Webhooks have fully recovered. Continuing to work on recovery for the other services.
May 12, 16:28 UTC
Update - Webhooks is operating normally.
May 12, 16:18 UTC
Update - We've established that most delays are related to a queuing service and are working to scale out. Early signals from the scale-out are showing signs of recovery for some services. We'll provide an update when services are fully recovered.
May 12, 15:44 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
May 12, 15:42 UTC
Update - We're continuing to investigate issues with CodeQL actions workflows. We're additionally seeing delays for notifications, webhooks, and the Slack integration.
May 12, 15:13 UTC
Update - CodeQL actions are currently experiencing delays, which may result in those actions being stuck in a pending state or having failed due to a timeout.
May 12, 14:38 UTC
Investigating - We are investigating reports of degraded performance for CodeQL
May 11, 14:33 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 11, 14:25 UTC
Investigating - We are investigating reports of degraded performance for Git Operations
May 7, 06:56 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 7, 06:14 UTC
Update - Copilot code review and cloud agents are starting again for pull requests, we are monitoring for full recovery.
May 7, 06:13 UTC
Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
May 7, 05:02 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
May 6, 19:04 UTC
Resolved - On May 6, 2026 between 15:12 and 19:02 UTC creation of new pull request review threads on GitHub.com failed. This included new line comments and file comments on pull requests. Existing PRs and previously created comments were unaffected.
This incident was caused by a 32-bit integer key reaching its maximum value in a Vitess lookup table used during PR thread creation. The primary table had been migrated to a 64-bit integer key, but the Vitess lookup table remained 32-bit. Once the values in the primary table passed the available 32-bit ID space in the lookup table, attempts to create new review threads began failing, resulting in a near-100% failure rate for new thread creation requests. We mitigated the issue by updating the impacted lookup table definitions across all shards to use 64-bit integer column types, increasing the available ID range and restoring normal operation. Service was fully restored once the schema changes completed globally.
To help prevent similar incidents, we are expanding existing monitoring of database columns to include Vitess lookup tables, to notify in advance of any tables that are approaching a column size limit. This work is intended to provide earlier detection of columns approaching size limits before customer impact occurs.
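The failure mode described in this root cause — a 64-bit primary key outgrowing a 32-bit lookup-table column — can be sketched as follows. This is a minimal illustration, not GitHub's actual schema or code; the function name and the notion of a "lookup row" here are hypothetical.

```python
# Max value of a signed 32-bit integer column (e.g. MySQL INT):
INT32_MAX = 2**31 - 1  # 2,147,483,647

def write_lookup_row(thread_id: int) -> int:
    """Simulate inserting a primary-table ID into a 32-bit lookup column.

    Once the primary table (64-bit IDs) issues values past INT32_MAX,
    every insert into the 32-bit lookup column fails — which is why new
    thread creation failed at a near-100% rate until the column was widened.
    """
    if thread_id > INT32_MAX:
        raise OverflowError(f"id {thread_id} exceeds 32-bit column range")
    return thread_id

# IDs at or below the ceiling succeed; the first ID past it is rejected.
write_lookup_row(INT32_MAX)
try:
    write_lookup_row(INT32_MAX + 1)
except OverflowError:
    pass
```

The mitigation corresponds to widening the column (e.g. `INT` to `BIGINT`), which raises the ceiling to 2^63 - 1; the planned monitoring corresponds to alerting well before an auto-incrementing value approaches the column's maximum.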
May 6, 19:04 UTC
Update - Mitigations have been fully applied and we are seeing full recovery of functionality on Pull Request threads. We are continuing to monitor to ensure sustained recovery.
May 6, 17:52 UTC
Update - Creation of new Pull Request threads (including line and file comments) continues to be affected although we are seeing partial recovery.
A mitigation is being applied to continue to accelerate recovery with complete recovery expected by 8:00pm UTC.
Top-level comments on pull requests still function and should remain usable during recovery. Opening and merging pull requests, actions, and other pull request operations remain functional.
May 6, 16:20 UTC
Update - Creation of new Pull Request threads (including line and file comments) continues to be affected.
Top-level comments on pull requests still function and should remain usable during recovery. Opening and merging pull requests, actions, and other pull request operations remain functional.
A mitigation is being applied. Recovery is expected to be gradual, with complete recovery expected by 8:00pm UTC.
May 6, 16:07 UTC
Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
May 6, 15:55 UTC
Update - Creation of new Pull Request threads (including line and file comments) continues to be affected. We have identified the cause of the issue and have started taking steps to mitigate this issue.
May 6, 15:28 UTC
Update - We are investigating failures for new thread creation on Pull Requests. Responses to existing pull request threads are unaffected.
May 6, 15:25 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests
May 6, 11:59 UTC
Resolved - On May 6, 2026 between 11:02 UTC and 11:13 UTC, users were unable to start or view Copilot Cloud Agent or remote sessions. During this time, requests to the session API returned errors, preventing users from creating new sessions or viewing existing ones. The issue was caused by a configuration change to the service's network routing that inadvertently removed the ingress path for the service. The team reverted the change at 11:13 UTC which restored service. The incident remained open until 11:59 UTC while the team verified full recovery. We are taking steps to improve our deployment validation process to prevent similar configuration changes from impacting production traffic in the future.
May 6, 11:59 UTC
Update - We have applied a mitigation and Copilot services have recovered.
May 6, 11:25 UTC
Update - We are investigating issues with the ability to start Copilot Cloud Agent sessions and view them.
May 6, 11:21 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
May 12, 10:55 EDT
Resolved - Between 02:47 PM (UTC +01:00) and 03:15 PM (UTC +01:00), some customers in the affected regions experienced issues with the impaired tools. This was caused by a database impairment. As of 03:15 PM (UTC +01:00), all impaired tools are working properly and the incident has been fully resolved.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
May 12, 09:57 EDT
Investigating - We're investigating reports that HubSpot may be unavailable for some users. We'll update this page when we have more information.
May 7, 23:14 EDT
Resolved - Between 05:23 PM (UTC -07:00) and 07:56 PM (UTC -07:00), HubSpot experienced degradation in portals in the U.S. This was related to an AWS outage. HubSpot is now working properly and the incident has been resolved.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
May 7, 22:07 EDT
Update - We are experiencing degradation in portals in the U.S. which is related to an AWS outage. We are working with AWS on the full resolution.
We will be back with an update within 30 minutes.
The information on this page reflects our understanding of the incident and impact at the time of the update.
May 7, 21:36 EDT
Identified - We are experiencing degradation in portals in the U.S. which we believe is related to an AWS outage.
We are working with AWS to resolve the issue. We will be back with an update within 30 minutes.
The information on this page reflects our understanding of the incident and impact at the time of the update.
May 2, 15:31 EDT
Resolved - HubSpot has continued to monitor the fix and believe we have addressed the underlying issue. Some messages deployed before May 1, 5 pm EDT may still experience delayed deliveries or failures over the next ~48 hours as retries and queues fully clear.
May 1, 12:34 EDT
Update - In April, some customers may have experienced intermittent DMARC failures on messages to Microsoft-hosted inboxes. Impact was most visible for senders with a DMARC policy of p=reject or p=quarantine. The root cause was related to how certain email headers were being handled.
We have implemented a fix and will continue to monitor closely. While there may still be some residual impact to messages with elongated headers and as outstanding retries and queue delays clear over the next ~72 hours, we expect impact to remain limited.
May 1, 11:52 EDT
Monitoring - In April, some customers may have experienced intermittent DMARC failures on messages to Microsoft-hosted inboxes. Impact was most visible for senders with a DMARC policy of p=reject or p=quarantine. The root cause was related to how certain email headers were being handled.
We have implemented a fix and will continue to monitor closely. While there may still be some residual impact to messages as outstanding retries and queue delays clear over the next ~72 hours, we expect impact to remain limited.
Apr 30, 11:10 EDT
Resolved - Between April 23 and April 29, some customers experienced issues with publishing Instagram posts. This was caused by an integration issue. As of April 29 6:17 PM, our social publishing tools are working properly and the incident has been fully resolved.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Apr 29, 22:15 EDT
Monitoring - We have mitigated an issue with publishing of some Instagram posts since April 23. We're monitoring performance closely to make sure the tools recover properly.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Apr 29, 16:02 EDT
Identified - We estimate that we will restore service in the next several days.
We are mitigating impact from an issue with a third-party integration, and monitoring the situation closely. If you have trouble posting, please retry or post directly on Instagram.
We will be back with an update within 6 hours.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Apr 30, 10:44 EDT
Resolved - Between 9:45 AM EDT (UTC -04:00) on Apr 30, 2026 and 10:30 AM EDT (UTC -04:00) on Apr 30, 2026, some customers experienced errors when creating or updating workflows. The root cause has been resolved, and no data was lost. We apologize for any inconvenience and appreciate your patience as we resolved the issue.
Apr 30, 09:59 EDT
Investigating - We are investigating an issue impacting Workflows. We will provide an update when we have more information.
The information on this page reflects our understanding of the incident and impact at the time of the update.
May 13, 09:59 UTC
Resolved - We appreciate your patience during this incident. Our services are now fully operational and all delayed automation events have been processed.
May 13, 08:42 UTC
Update - Our teams are actively processing events that were delayed during this incident. We will provide a further update once all delayed events have been processed.
May 13, 07:42 UTC
Update - While new automation events are being processed correctly, our teams continue working through the processing of historical events that were delayed by this incident.
We will provide an update on the progress within one hour.
May 13, 06:38 UTC
Update - Our team has been able to confirm that new automation events should now be processed correctly.
We are working through the processing of historical events that have been delayed by this incident and will provide an update on progress within one hour.
May 13, 05:38 UTC
Update - The root cause of this issue has now been confirmed by our incident response team.
We are working urgently to bring back normal automation processing to impacted users, and process historical events that were delayed by the impact of the incident.
We will provide a further update within one hour, or sooner if available.
May 13, 02:41 UTC
Identified - Our team has identified a likely root cause of this issue and is actively working on a fix.
At this point in time the required steps to mitigate this issue are expected to take approximately three hours.
We will provide a further update within three hours, or sooner if further information becomes available.
May 13, 01:37 UTC
Investigating - At this time, impacted users may be experiencing delays of multiple hours on their automation events.
Our team is continuing to investigate this issue with urgency. We will provide a further update within an hour.
May 8, 19:45 UTC
Resolved - On May 8, 2026, some customers utilizing Atlassian products experienced elevated error rates and degraded performance. The issue has now been resolved, and the service is operating normally for all affected customers.
May 8, 11:12 UTC
Update - Our services are now fully operational. We continue to replay any events that were missed during the incident, and are making good progress. We will provide a further update once replays are complete. If you experience any ongoing issues, please contact our support team.
We apologise for the disruption and thank you for your patience.
May 8, 08:03 UTC
Update - We continue to monitor the situation as services recover. We are currently in the process of clearing the backlog of queued events. We will provide a further update in approximately one hour.
May 8, 05:53 UTC
Monitoring - The underlying issue in public infrastructure which affected asynchronous event processing has been mitigated and all the affected services are recovering. We are now working on clearing the backlog of queued events, which means some actions (such as notifications, automation triggers, and data syncs) may be in degraded state. We will continue to monitor and provide updates as the backlog is cleared.
May 8, 04:10 UTC
Update - We are continuing to work with our public cloud provider to mitigate this issue. We are starting to see some recovery in regions outside of Eastern USA, however, users globally may still be experiencing issues with certain product features. These are listed at the bottom of each product page.
May 8, 03:00 UTC
Update - Our teams continue to work on mitigating the infrastructure outage from our public cloud provider. We will provide further updates when they are available.
May 8, 03:00 UTC
Update - We have identified that the root cause of the issue is related to an infrastructure outage from our public cloud provider. We are working closely with them to mitigate this issue. We will provide further updates when they become available.
May 8, 02:15 UTC
Identified - We have identified that the root cause of the issue is related to an infrastructure outage from our public cloud provider. We are working closely with them to mitigate this issue. We will provide further updates when they become available.
May 8, 01:31 UTC
Investigating - We are experiencing issues with multiple Atlassian products. Our teams are investigating further, and more updates will be shared within 1 hour.
May 6, 11:17 UTC
Resolved - The issue causing failures in the lookup objects within JSM Automation has been resolved. A fix was implemented to address the problem and the service is now operating normally for all affected customers.
May 6, 10:18 UTC
Monitoring - A fix has been implemented to address the failures observed in the lookup objects within JSM Automation. We are actively monitoring the situation to ensure stability. We will share a final update within the next hour.
May 6, 09:35 UTC
Investigating - We are investigating reports that lookup objects in Automation for Jira Service Management are not functioning as expected. Our engineering teams are actively investigating and working to resolve the issue. We will provide further updates within the next hour.
May 2, 02:48 UTC
Resolved - The Jira Work Item View experience has been restored to normal service. Our teams are continuing to investigate the root cause affecting this issue.
We will provide more details once we identify the root cause.
May 2, 02:00 UTC
Update - We are actively investigating reports of a service disruption affecting the Work Item viewing experience in Jira. This is also impacting the accessibility of support tickets.
We will share updates here as more information becomes available.
May 2, 01:38 UTC
Update - We are actively investigating reports of a service disruption affecting the issue viewing experience in Jira. This is also impacting the accessibility of support tickets.
We will share updates here as more information becomes available.
May 2, 00:55 UTC
Investigating - We are investigating an incident affecting Jira Software (viewBoard, viewIssue, createIssue). Our team is working to identify the cause and restore service to normal levels. We will provide the next update within 60 minutes.
Apr 14, 16:11 UTC
Resolved - On April 14, 2026, affected users may have experienced some service disruption with automation rules that use Rovo agents. The issue has now been resolved, and the service is operating normally for all affected customers.
Apr 14, 15:46 UTC
Monitoring - The issue has been resolved, and services are now operating normally for all affected customers. We will continue to monitor closely to confirm stability.
Apr 14, 14:11 UTC
Identified - We have identified the issue, and our teams are working to resolve it and restore normal operations as quickly as possible. We will provide further updates as they become available.
Apr 14, 12:25 UTC
Update - We have identified that this incident also affects automation rules in Confluence that use Rovo agents. We are investigating and will provide updates as we learn more.
Apr 14, 12:12 UTC
Investigating - We are actively investigating reports of a partial service disruption affecting Rovo, specifically for customers using automation rules that invoke Rovo agents. Some customers may find that these automations are not completing as expected. We'll share updates here as more information becomes available.
Apr 27, 21:04 UTC
Resolved - We have confirmed that the issue has been resolved. We will conduct an internal review of this issue to help prevent or minimize future recurrence.
Apr 15, 14:57 UTC
Monitoring - Our engineers have corrected the issue and we are confirming that the latest extension release has been approved. The fastest way to ensure you are running the latest version is to manually update your LastPass browser extension. We will continue monitoring the situation and provide a final update shortly.
Apr 14, 20:12 UTC
Update - The fix has been validated and submitted to the browser extension store for review. We are continuing additional testing during the review process and will share further updates as available.
Apr 14, 17:14 UTC
Update - A hotfix has been implemented and is under validation. After validation, we plan to release the update via the browser stores and begin a broader rollout. Further updates will follow.
Apr 13, 15:35 UTC
Identified - The issue has been identified and our engineering team is actively working through final validation.
Additional updates will be shared as more information becomes available.
Apr 10, 08:18 UTC
Investigating - We are aware of a recent change made in the latest Chrome browser version, which likely affects Edge functionality as well (based on Chromium). This change affected LastPass site launch and autofill capabilities. If you're experiencing problems with the extension, fully quitting and restarting your browser resolves the issue and allows you to continue using LastPass normally. If you haven't yet updated to browser version 147, we recommend holding off until further notice.
Our team has identified a likely cause and is actively working on a fix. Once confirmed, we'll prepare a release and submit it to the browser stores. We'll post updates here as they become available.
Apr 8, 10:39 UTC
Resolved - We have confirmed that the issue has been resolved with the latest Chrome extension version 4.152.1. Affected users are advised to upgrade to this version to ensure proper functionality.
We will conduct an internal investigation of this issue to help prevent or minimize future recurrence.
Apr 8, 08:10 UTC
Identified - Our engineers have identified an issue where a subset of Business Chrome Extension users are being logged out frequently when the extension setting "Logout on browser close" is turned on via policy. A fix has been identified to mitigate the issue and we are actively working to implement this fix. We will provide another update shortly.
Apr 6, 19:04 UTC
Resolved - We have confirmed that the issue has been resolved. We will conduct an internal review of this issue to help prevent or minimize future recurrence.
Apr 1, 17:16 UTC
Update - We are continuing to monitor for any further issues.
Apr 1, 00:25 UTC
Update - We are continuing to monitor for any further issues.
Mar 31, 21:06 UTC
Update - We are continuing to monitor for any further issues.
Mar 31, 18:38 UTC
Update - We are continuing to monitor for any other issues.
Mar 31, 17:19 UTC
Monitoring - A fix has been implemented to mitigate the issue. We are now monitoring the system to ensure continued stability.
Mar 31, 15:58 UTC
Identified - Our engineers have identified the issue and are now actively working towards a resolution. In the meantime, impacted users should ensure they are running the latest version of the browser extension, and try using a different browser if issues persist. We will provide another update shortly.
Mar 31, 15:30 UTC
Investigating - We are actively investigating reports that some LastPass customers may be experiencing intermittent login issues. Our engineers are working to identify the issue and will provide another update shortly.
Mar 26, 13:26 UTC
Resolved - We have confirmed that the issue has been resolved. We will conduct an internal review of this issue to help prevent or minimize future recurrence.
Mar 25, 15:00 UTC
Monitoring - We have deployed a fix to production and are continuing to monitor the issue.
Mar 25, 13:49 UTC
Update - We are continuing to investigate this issue.
Mar 25, 13:15 UTC
Investigating - We are investigating reports that some LastPass customers are experiencing login issues. Our engineering team is working to identify the cause and will provide an update soon.
Mar 25, 12:11 UTC
Resolved - We have confirmed that the issue has been resolved. We will conduct an internal review of this issue to help prevent or minimize future recurrence.
Mar 25, 11:48 UTC
Update - We are continuing to investigate this issue.
Mar 25, 11:27 UTC
Investigating - We are actively investigating reports that some LastPass customers may be experiencing issues logging in and accessing the vault and Admin Console. Our engineers are working to identify the issue and will provide another update shortly.
May 12, 15:38 UTC
Resolved - This incident has been resolved.
May 12, 15:20 UTC
Monitoring - We have implemented a fix and are monitoring system performance to ensure services continue to recover as expected.
May 12, 14:30 UTC
Investigating - We are currently investigating increased latency affecting our API and build systems. Some users may experience slower-than-normal response times for API requests and builds while we work to identify the source of the issue.
May 11, 15:27 UTC
Resolved - This incident has been resolved.
May 11, 15:19 UTC
Update - We are continuing to monitor for any further issues.
May 11, 14:47 UTC
Monitoring - We are monitoring the results of our fix.
May 11, 14:10 UTC
Identified - We have identified other causes of this latency, and are continuing to work on resolutions.
May 11, 13:47 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 11, 13:36 UTC
Investigating - We are currently experiencing increased latency in our API and builds. We are investigating the source.
May 11, 15:27 UTC
Resolved - This incident has been resolved.
May 11, 15:20 UTC
Update - We are continuing to monitor for any further issues.
May 11, 14:47 UTC
Monitoring - We are monitoring improvements.
May 11, 14:35 UTC
Identified - The issue has been identified and posted on GitHub's status page.
May 11, 14:31 UTC
Investigating - We are experiencing issues with our build pipeline due to degraded performance in GitHub.
May 10, 16:33 UTC
Resolved - Our network experienced degraded service in the IAD region between 12:53 UTC and 13:53 UTC. This issue is now resolved.
May 10, 08:56 UTC
Resolved - This incident has been resolved.
May 10, 08:55 UTC
Update - We are continuing to monitor for any further issues.
May 10, 08:32 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 10, 08:07 UTC
Identified - We're experiencing an increase in DNS resolution errors impacting domains on our Standard Edge Network. A fix has been deployed, and we're monitoring the results.
Apr 29, 20:49 UTC
Resolved - This incident has been resolved.
Apr 29, 20:49 UTC
Update - We are continuing to investigate this issue.
Apr 29, 20:07 UTC
Investigating - We are currently investigating this issue.
Apr 29, 19:35 UTC
Resolved - This incident has been resolved.
Apr 29, 18:47 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 29, 16:48 UTC
Investigating - We are currently investigating this issue.
Apr 28, 22:24 UTC
Resolved - This incident has been resolved.
Apr 28, 21:04 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 28, 19:55 UTC
Investigating - We are currently investigating this issue.
Apr 27, 22:30 UTC
Resolved - This incident has been resolved.
Apr 27, 22:28 UTC
Update - We are continuing to investigate this issue.
Apr 27, 21:06 UTC
Investigating - We are currently investigating this issue.
Apr 1, 13:12 UTC
Resolved - This incident has been resolved.
Apr 1, 12:20 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 1, 10:05 UTC
Investigating - We are observing increased errors in viewing packages and authentication on npmjs website. We are investigating this issue.
All impacted services have now fully recovered.
Affected components
- Codex Web (Operational)
- CLI (Operational)
- App (Operational)
- VS Code extension (Operational)
- Codex API (Operational)
All impacted services have now fully recovered.
Affected components
- Audio (Operational)
All impacted services have now fully recovered.
Affected components
- Chat Completions (Operational)
- Responses (Operational)
All impacted services have now fully recovered.
Affected components
- Codex Web (Operational)
- File uploads (Operational)
All impacted services have now fully recovered. Between 4:05pm and 4:40pm PT on May 8, some customers using the Responses API may have experienced elevated 404 errors. We identified the issue as related to a recent deploy and rolled it back. Responses API traffic has recovered, and we are marking this incident as resolved.
Affected components
- Responses (Operational)
May 12, 17:22 PDT
Resolved - Our engineers have monitored the fix and confirmed all services are now operating normally.
May 12, 16:48 PDT
Monitoring - Between 16:10 PDT and 16:27 PDT on 05/12/2026, our engineers began investigating an issue with a subset of services experiencing 5xx errors. Users may have experienced issues when accessing subusers, authenticating domains and link branding records, accessing the account, and checking account stats.
This does not impact mail send.
Our engineers have implemented a fix and are monitoring system performance. We will provide another update in an hour or as soon as more information becomes available.
May 12, 11:20 PDT
Resolved - Our engineers have monitored the fix and confirmed the issue with teammate account creation has been resolved. All services are now operating normally at this time.
May 12, 10:13 PDT
Monitoring - Our engineers have implemented a fix and are monitoring system performance. Teammates that were impacted by the issue need to request a new invite link and delete cache and cookies on their browsers before attempting to join again. We will provide another update in one hour or as soon as more information becomes available.
May 12, 09:59 PDT
Identified - Our engineers have identified the issue and are working toward a fix. We will provide another update in one hour or as soon as more information becomes available.
May 12, 09:31 PDT
Investigating - Starting around 9 AM PDT on Tuesday, 5/12/2026, our engineers started investigating an issue with teammates, where invitees could not join their organizations and were instead led to create standalone Twilio Unified accounts. New teammates are therefore unable to join their organizations at this time. This issue does not impact mail send. We will provide another update in an hour or as soon as more information becomes available.
May 8, 01:09 PDT
Resolved - Our engineers have monitored the fix and confirmed the mail delay issue has been resolved. All services are now operating normally.
May 8, 00:04 PDT
Update - Our engineering team has identified the root cause of the issue and is actively working on a resolution. Please note that new Mail Send requests are processing as expected and are not impacted. We will share another update within the next two hours, or sooner if additional information becomes available.
May 7, 20:38 PDT
Identified - Our engineers have identified the issue and are working toward a fix. New Mail Send requests are not affected and are processing normally. We are still actively working on delivering previously delayed mails. We will provide another update within the next 2 hours or as soon as more information becomes available.
May 7, 19:53 PDT
Update - New mail send requests are not affected. Our engineers are actively working on remediating the delayed mails. We will provide another update in 30 minutes or as soon as we have more information.
May 7, 19:04 PDT
Investigating - Customers may have experienced mail delay between 5pm and 5:15pm PDT. Our engineers have been alerted and are investigating. We will provide another update in 30 minutes or as soon as we have more information.
May 6, 16:22 PDT
Resolved - Our engineers have monitored the fix and confirmed the issue with Twilio SendGrid invoices and the tax amounts has been resolved. All services are now operating normally at this time.
May 6, 15:04 PDT
Monitoring - Our engineers have implemented a fix and are monitoring system performance. We have resolved the tax calculation issue, and customer invoices have been corrected with the appropriate tax amounts. If you were overcharged, your refund will be automatically processed to your card on file within 7–10 business days. If you were undercharged, the corrected tax balance has been applied; please log in to your console to validate the updated invoice details. We will provide another update in an hour or as soon as more information becomes available.
May 6, 12:47 PDT
Update - Our engineers have identified the issue and are continuing to work toward a fix. We will provide another update in 2 hours or as soon as more information becomes available.
May 6, 11:48 PDT
Update - Our engineers have identified the issue and are continuing to work toward a fix. We will provide another update in 1 hour or as soon as more information becomes available.
May 6, 10:46 PDT
Identified - Our engineers have identified the issue and are working toward a fix. We will provide another update in 1 hour or as soon as more information becomes available.
May 6, 10:34 PDT
Investigating - Starting around 9am PST on May 5, 2026, our engineers began investigating an issue where the tax amounts on the most recent Twilio SendGrid invoices were not charged correctly. Users may experience issues with the tax amounts on the most recent invoices. This does not impact mail send. We will provide another update in 1 hour or as soon as more information becomes available.
May 5, 09:17 PDT
Resolved - Our engineers have successfully completed processing all delayed Engagement Tracking and Segmentation data for Legacy Marketing Campaigns. All contact engagement data and segmentation features are now fully up-to-date and functioning normally. We sincerely apologize for any inconvenience this delay caused. This issue is now resolved, but if you experience any further issues please reach out to our SendGrid Technical Support Team for further assistance.
May 4, 09:43 PDT
Update - Our system is still processing the delayed Engagement Tracking and Segmentation data for Legacy Marketing Campaigns; currently, the data should be up to date through May 1. We will provide another update in 24 hours or as soon as more information becomes available.
May 3, 07:54 PDT
Update - Our system is still processing the missing Engagement Tracking and Segmentation data for Legacy Marketing Campaigns. We will provide another update in 24 hours or as soon as more information becomes available.
May 2, 17:59 PDT
Update - We've identified the issue and are still working toward a fix. We will provide another update in 12 hours or as soon as more information becomes available.
May 1, 17:58 PDT
Update - Our engineers have implemented a fix and are continuing to monitor system performance and recovery. During this time a subset of Legacy Marketing customers may continue to see delays in updating contact engagement data. We will be providing a daily update moving forward, or as soon as we have new information to share.
May 1, 12:02 PDT
Monitoring - Our engineers have implemented a fix and are monitoring system performance. We will provide another update in 6 hours or as soon as more information becomes available.
May 1, 08:31 PDT
Update - We've identified the issue and are still working toward a fix. We will provide another update in 6 hours or as soon as more information becomes available.
Apr 30, 13:58 PDT
Update - We've identified the issue and are still working toward a fix. We will provide another update in 4 hours or as soon as more information becomes available.
Apr 30, 11:50 PDT
Identified - Our engineers have identified an issue that is causing delays in updating individual contacts with recent engagement data (such as clicks and opens) for a subset of Legacy Marketing customers (TNE Marketing Campaigns are NOT affected). As a result, affected customers may experience delays when segmenting lists based on recent engagement, and the option to export campaign click data may appear disabled. This does not impact mail send. We will provide another update in 2 hours or as soon as more information becomes available.
May 12, 22:37 UTC
Resolved - This incident has been resolved.
May 12, 17:00 UTC
Monitoring - We've applied a configuration change to mitigate the API/UI errors being experienced.
May 12, 15:43 UTC
Update - We are continuing to investigate increased UI and API timeouts and errors.
May 12, 13:51 UTC
Update - We are continuing to investigate increased UI and API timeouts and errors.
May 12, 12:28 UTC
Investigating - We are seeing increased UI and API timeouts and errors, due to an overloaded database cluster. Ingestion and alerting should not be affected.
May 11, 22:11 UTC
Resolved - Database performance has returned to normal.
May 11, 21:39 UTC
Monitoring - We have implemented a fix and are monitoring the situation
May 11, 21:08 UTC
Update - We are working on restoring normal service. API and application performance should be improved; ingestion still has delays.
May 11, 20:25 UTC
Update - Our event database continues to have issues, and we are actively investigating.
May 11, 19:28 UTC
Update - Ingestion for spans, replays, and crons is delayed by about 15 minutes.
May 11, 18:57 UTC
Update - We are continuing to investigate degraded database performance.
May 11, 15:32 UTC
Investigating - We are seeing increased UI and API timeouts and errors, due to an overloaded database cluster. Ingestion and alerting should not be affected.
May 6, 07:55 UTC
Resolved - This incident has been resolved.
May 6, 07:42 UTC
Investigating - We are currently investigating the issue.
Apr 27, 20:59 UTC
Resolved - This incident has been resolved.
Apr 27, 17:48 UTC
Monitoring - The system is operating normally, and we are monitoring the fix.
Apr 27, 17:31 UTC
Update - Ingestion is now back to normal; we are slowly increasing write traffic to reduce intermittent API failures.
Apr 27, 16:52 UTC
Investigating - We are experiencing delays on our ingestion pipeline. Explorer views are in a degraded state.
Apr 23, 02:17 UTC
Resolved - All application features are now operating as normal.
Apr 23, 02:12 UTC
Monitoring - Ingestion for spans and replays is about 10 minutes behind and catching up. API and Dashboard performance has returned to normal.
Apr 23, 02:06 UTC
Investigating - We are currently investigating this issue.
May 8, 00:09 EDT
Resolved - This incident has been resolved.
May 7, 21:13 EDT
Monitoring - There is an ongoing AWS outage impacting some of the services.
Our platform and APIs remain fully operational and stable.
We're keeping a close eye on the situation as AWS works through recovery.
May 7, 20:18 EDT
Investigating - We're experiencing an elevated level of API errors and are currently looking into the issue.
Apr 27, 12:44 EDT
Resolved - This incident has been resolved.
Apr 27, 10:52 EDT
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 27, 06:36 EDT
Investigating - Our Engineering team is currently investigating intermittent performance degradations to the Smartling translation platform and API. We will provide more information as it becomes available. Global Delivery Network services remain operational.
Feb 19, 12:53 EST
Resolved - This incident has been resolved.
Feb 19, 12:16 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 19, 11:18 EST
Identified - The issue has been identified and a fix is being implemented.
Feb 19, 10:45 EST
Investigating - We are currently investigating delays in processing file published callbacks and webhooks.
Feb 17, 18:10 EST
Resolved - This incident has been resolved.
Feb 17, 17:26 EST
Monitoring - A fix has been applied, and we are monitoring queue processing.
Feb 17, 08:03 EST
Investigating - We are currently investigating possible delays in job authorization and processing.
Feb 10, 19:46 EST
Resolved - This incident has been resolved.
Feb 10, 17:41 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 10, 16:36 EST
Investigating - We are currently investigating reports of users experiencing difficulty accessing the Translation Workbench.
Our customers are reporting that they're able to access Tempo now.
Affected components
- Capacity Planner (Operational)
- Timesheets (Operational)
- Financial Manager (Operational)
Incident has been resolved.
Affected components
- Capacity Planner (Operational)
- Timesheets (Operational)
- Financial Manager (Operational)
The migration of Timesheets, Capacity Planner, and Financial Manager to Forge (https://www.tempo.io/product-news/tempo-moves-to-forge) is underway, but the automatic rollout did not start on March 31, 2026 as planned. The automatic migration to Forge is tentatively scheduled to begin on June 15, 2026. You have the option to do either of the following:
• Perform a manual update: Review the expected changes (see links below) and assess any impacts to your workflows. If the impact is manageable, we recommend updating manually to control the timing of your instance's migration.
• Wait for the automatic update: The automatic rollout will occur gradually, so we are unable to confirm the exact date your instance will be updated. It will not be possible to schedule, delay, or opt out once started. After your instance is updated, you will receive an in-app notification confirming the change. You can also verify whether your instance has migrated to Forge by checking the app versions: https://tempo-io.atlassian.net/wiki/spaces/DRAFTTIMESHEETS/pages/5679415354/Timesheets+Migration+to+Forge+FAQs#How-to-Check-Your-Current-Version
Breaking changes
Most of the migration is seamless, but a few things will change that may affect you, particularly around JQL filters:
• JQL filter syntax – issueInternal, Account, Tempo Team, and custom field IDs
• Tempo Panels in Jira issue view
• Keyboard shortcuts
• OAuth app URLs
Please review the relevant documentation to prepare for changes that might affect you:
• Expected Changes in Timesheets on Forge: https://help.tempo.io/timesheets/latest/expected-changes-in-timesheets-on-forge
• Expected Changes in Capacity Planner on Forge: https://help.tempo.io/planner/latest/expected-changes-in-capacity-planner-on-forge
• Expected Changes in Financial Manager on Forge: https://help.tempo.io/financialmanager/latest/expected-changes-in-financial-manager-on-forge
We have identified and addressed the root cause of the delays. The issue is now resolved.
Affected components
- Timesheets (Operational)
May 12, 13:48 UTC
Resolved - This incident has been resolved.
May 12, 12:05 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 12, 10:50 UTC
Identified - The issue has been identified and the fix is being implemented.
May 12, 09:49 UTC
Investigating - Some regions are experiencing connectivity issues due to an ongoing network problem. We are currently investigating.
May 11, 18:22 UTC
Resolved - The incident has been resolved. We are working with the Fly.io team on a root cause analysis (RCA).
May 11, 17:19 UTC
Update - We are working with the Fly.io team to investigate the root cause.
May 11, 15:07 UTC
Update - We are continuing to investigate the issue.
May 11, 15:05 UTC
Investigating - Some databases may experience increased latency or timeouts in Fly.io’s FRA region.
May 8, 10:08 UTC
Resolved - This incident has been resolved; we will publish an RCA soon.
May 8, 10:06 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 8, 09:46 UTC
Update - We are continuing to investigate this issue.
May 8, 09:46 UTC
Investigating - We are currently investigating the issue.
May 6, 08:05 UTC
Resolved - This incident has been resolved.
May 5, 21:31 UTC
Update - Main schedule functionality is back to normal. We are currently checking if previously created schedules are delivered as expected before marking the incident as resolved.
May 5, 15:44 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
May 5, 14:29 UTC
Update - We are continuing to work on a fix for this issue.
May 5, 14:24 UTC
Identified - We are currently experiencing issues in the US region.
- Duplicate Deliveries: During this period, some scheduled jobs may be executed twice.
- Schedule Disruption: Schedules created between April 24, 2026 and May 2, 2026 are currently not running.
Our team is actively working on a fix. Once the migration is complete, affected schedules will resume normal operation.
We will provide updates as progress continues.
Apr 2, 19:53 UTC
Resolved - Replication complete, incident resolved.
Apr 2, 18:45 UTC
Identified - Servers in the iad region experienced unexpected disk load, resulting in elevated latencies and a temporary read-only state. We are migrating replicas to new instances to mitigate the issue and expect to have it fully resolved shortly.
May 9, 05:40 UTC
Resolved - This incident has been resolved.
May 9, 04:45 UTC
Monitoring - Between 1:30 and 3:44 UTC, messages in Vercel Queues in iad1 were enqueued but not processed, and Vercel Workflows were blocked from making progress (i.e., remaining in pending / active states). Service is recovering: Vercel Queues backlogs are being processed, and Vercel Workflows are unblocked.
May 8, 19:15 UTC
Resolved - Between 18:36 and 19:04 UTC, issuing SSL certificates was delayed for new domains. These certificates have now been issued. Certificate renewals were not affected.
May 8, 16:41 UTC
Resolved - This incident has been resolved.
May 8, 16:33 UTC
Monitoring - We have applied a fix and are observing recovery across all builds. We'll provide additional updates as needed.
May 8, 16:30 UTC
Identified - We've identified an issue where some customers may experience delays in builds starting and/or builds stuck in an initializing state. We are applying a fix and will provide additional updates as they become available.
May 8, 06:39 UTC
Resolved - We've deployed mitigations and all services are operating normally. Traffic may continue to reroute to nearby regions for the next few hours, but we will restore traffic gradually after verifying system availability in the IAD1 region.
May 8, 04:54 UTC
Monitoring - We implemented a fix and are monitoring the results. Backlogs for Workflows and Queues will be processed within the next few hours.
May 8, 02:33 UTC
Update - New workflow runs and queue messages are being processed now. Processing the backlog of existing workflow runs and queued messages is still being investigated.
May 8, 02:18 UTC
Update - We are continuing to work on a fix for this issue.
May 8, 02:13 UTC
Update - New messages to Vercel Workflows and Vercel Queues are being queued, but message processing is paused. Queued messages will be processed when service is restored.
May 8, 01:47 UTC
Update - Traffic to the IAD1 region has been re-routed to nearby regions. Functions configured in IAD1 will be invoked in a different region if failover regions are configured, but if no other regions are configured, the function will still be invoked in IAD1.
May 8, 01:18 UTC
Identified - Some Vercel functions that run in the IAD1 region are experiencing elevated invocation failures. We are investigating the issue and will share more information as it becomes available.
May 7, 21:48 UTC
Resolved - This issue has been resolved and new deployments are being created successfully.
May 7, 21:14 UTC
Investigating - We are investigating an issue affecting new deployments that are stuck in the provisioning state. We will share more information as it becomes available.
May 13, 02:18 PDT
Update - We continue to work on a fix for the Web Mail issue and will provide updates as soon as there is more information to share.
We appreciate your patience as we work to resolve this issue.
May 12, 22:02 PDT
Update - We have fixed the issue where a subset of users may have experienced issues with Web Calendar.
We continue to work on the fix for the Web Mail issue and will provide updates as soon as there is more information to share.
We appreciate your patience as we work to resolve this issue.
May 12, 20:54 PDT
Update - We continue to work on a fix for this issue and will provide updates as soon as there is more information to share.
We appreciate your patience as we work to resolve this issue.
May 12, 19:54 PDT
Update - We continue to work on a fix for this issue and will provide updates as soon as there is more information to share.
We appreciate your patience as we work to resolve this issue.
May 12, 18:54 PDT
Update - We continue to work on a fix for this issue and will provide updates as soon as there is more information to share.
We appreciate your patience as we work to resolve this issue.
May 12, 17:53 PDT
Identified - We have identified the root cause of the issue affecting a subset of users in Web Mail and Calendar.
Our team is actively working on a resolution, and we will keep you informed with timely updates as progress is made.
Thank you for your patience.
May 12, 16:01 PDT
Investigating - We are currently investigating a service degradation with a subset of users in Web Mail and Calendar.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
May 11, 17:48 PDT
Resolved - The service degradation with inbound and outbound SMS messages affecting a subset of users in Australia has been successfully resolved.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
May 11, 17:24 PDT
Monitoring - The service degradation with inbound and outbound SMS messages affecting a subset of users in Australia has been successfully resolved.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
May 11, 16:44 PDT
Identified - We have identified the root cause of the service degradation with inbound and outbound SMS messages affecting a subset of users in Australia.
Our team is actively working on a resolution, and we will keep you informed with timely updates as progress is made.
Thank you for your patience.
May 11, 16:19 PDT
Investigating - We are currently investigating a service degradation with inbound and outbound SMS messages affecting a subset of users in Australia.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
May 10, 23:45 PDT
Resolved - Between 05/11/2026 05:05 UTC and 08:24 UTC, a subset of Airtel subscribers may have experienced service degradation with inbound call connectivity to Zoom India phone numbers for Zoom Phone and Zoom Contact Center.
This incident has been resolved and the affected services have been restored.
May 10, 23:24 PDT
Monitoring - The service degradation affecting a subset of Airtel subscribers for inbound call connectivity to Zoom India phone numbers for Zoom Phone and Zoom Contact Center has been successfully resolved.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
May 10, 23:04 PDT
Investigating - We are currently investigating a service degradation affecting a subset of Airtel subscribers for inbound call connectivity to Zoom India phone numbers for Zoom Phone and Zoom Contact Center. Outbound calls continue to function normally.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
May 9, 13:53 PDT
Resolved - This incident has been resolved.
May 9, 07:53 PDT
Monitoring - The issue affecting users' ability to schedule and list medical appointments in the North America region has been successfully resolved.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
May 9, 07:46 PDT
Identified - We have identified the root cause of the issue affecting users' ability to schedule and list medical appointments in the North America region.
Our team is actively working on a resolution, and we will keep you informed with timely updates as progress is made.
Thank you for your patience.
May 9, 06:13 PDT
Update - We continue to investigate a service degradation affecting users' ability to schedule and list medical appointments in the North America region.
Our team is actively working with our vendor to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
May 9, 04:37 PDT
Update - We continue to investigate a service degradation affecting users' ability to schedule and list medical appointments in the North America region.
Our team is actively working with our vendor to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
May 9, 03:36 PDT
Investigating - We are currently investigating a service degradation affecting users' ability to schedule and list medical appointments in the North America region.
Our team is actively working with our vendor to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.