Mar 11, 17:28 UTC
Resolved - The incident is now resolved. To recap, the incident started at 07:17 PT / 14:17 UTC, and errors returned to baseline at 10:11 PT / 17:11 UTC.
Mar 11, 17:22 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 11, 17:22 UTC
Update - We are continuing to work on a fix for this issue.
Mar 11, 17:19 UTC
Identified - The issue has been identified and a fix is being implemented.
Mar 11, 15:44 UTC
Update - We're currently investigating issues with Claude Code and Claude.ai. Some users may be unable to log in, and others may experience slower than usual performance. The Claude API is not affected.
Mar 11, 15:27 UTC
Update - We are continuing to investigate this issue.
Mar 11, 14:47 UTC
Update - This is affecting the login/logout actions in Claude Code as well.
Mar 11, 14:44 UTC
Investigating - We are currently investigating this issue.
Mar 10, 16:52 UTC
Resolved - This incident has been resolved.
Mar 8, 19:42 UTC
Monitoring - Root cause: Users with scheduled tasks in Claude Cowork or Claude Code who are in a timezone that observed daylight saving time last night were affected by an infinite loop. When the app tried to locate tasks scheduled during the “skipped” hour, it couldn’t resolve them and got stuck.
Fix: Update to version 1.1.5749 via https://claude.com/download. If you’re unable to update right away, temporarily switching to a timezone that doesn’t observe daylight saving time will also resolve the issue.
We’re working on a backend fix as well. We know disruptions to Claude affect your work, and we apologize for the trouble.
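For illustration, the "skipped hour" failure mode described above can be reproduced with a short sketch. The function below is hypothetical (it is not Claude's actual scheduler code); it shows why a wall-clock time scheduled inside a spring-forward gap never resolves: the local time simply does not occur, so it fails a UTC round-trip check.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def local_time_exists(naive, tz):
    """True if the naive wall-clock time actually occurs in tz.
    Times inside a spring-forward DST gap fail the UTC round-trip:
    converting to UTC and back lands on a different wall-clock time."""
    aware = naive.replace(tzinfo=tz)
    round_trip = aware.astimezone(ZoneInfo("UTC")).astimezone(tz)
    return round_trip.replace(tzinfo=None) == naive

tz = ZoneInfo("America/New_York")
# US clocks jumped from 02:00 to 03:00 on Mar 8, 2026, so 02:30 never occurred.
print(local_time_exists(datetime(2026, 3, 8, 2, 30), tz))  # False
print(local_time_exists(datetime(2026, 3, 8, 3, 30), tz))  # True
```

A scheduler that waits for the stored local time to "arrive" without a check like this can spin forever on a nonexistent time, which matches the infinite loop described in the root cause.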
Mar 8, 18:27 UTC
Identified - The issue has been identified and a fix is being implemented.
Mar 8, 17:57 UTC
Investigating - We have identified the issue and are working on a mitigation. Currently, scheduled tasks are disabled for Cowork and Claude Code desktop.
Mar 8, 15:49 UTC
Resolved - The Filesystem extension has been restored to allowlists.
Mar 6, 17:16 UTC
Identified - The issue has been identified and a fix has been implemented. Users may need to re-add the Filesystem connector to their organization allowlist in order to reactivate it.
Mar 7, 16:58 UTC
Resolved - Today, between 5:35 PT / 13:35 UTC and 6:44 PT / 14:44 UTC, Haiku 4.5 had an elevated error rate. We are now back at the baseline error rate.
Mar 7, 13:58 UTC
Investigating - We are currently investigating this issue.
Mar 6, 16:23 UTC
Resolved - This incident has been resolved.
Mar 6, 16:01 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 6, 15:34 UTC
Investigating - Some API requests to api.anthropic.com are experiencing connection timeouts due to network degradation at an upstream peering point. Requests that successfully connect are unaffected. We are working with our upstream providers to resolve the issue.
Mar 11, 02:24 UTC
Resolved - The issue affecting the orb list on the Organization Settings page has been resolved. Our engineers have deployed a fix and users should now be able to view their orb list as expected. Orb usage in pipelines and builds was not impacted during this incident. If you continue to experience issues, please contact CircleCI support.
Thank you for your patience while we worked on resolving this issue.
Mar 11, 01:58 UTC
Investigating - We are investigating an issue where users are unable to view their organization's orb list on the Organization Settings page. The page may appear blank or fail to load the expected list of public and private orbs. Other CircleCI functionality, including pipeline execution and orb usage in builds, is not affected. Our engineering team is actively investigating and working toward a resolution. We will provide an update within 30 minutes.
Mar 2, 18:21 UTC
Resolved - Looks like everything is good. Thank you for your patience.
Mar 2, 18:15 UTC
Monitoring - The fix has gone out. We will continue to monitor the situation. Sorry for any inconvenience.
Mar 2, 17:59 UTC
Identified - We're seeing issues with Docker on some convenience images (cimg:*) following a Docker update we pushed this morning. We are working on a fix and will let you know when we have more information. Thank you for your patience.
Feb 27, 00:17 UTC
Resolved - GitHub has acknowledged and resolved an incident affecting their platform.
https://www.githubstatus.com/incidents/vd3xqfq36rgm
Users should no longer experience intermittent checkout failures.
Feb 26, 22:57 UTC
Update - We are continuing to investigate this issue.
Feb 26, 22:56 UTC
Update - We are continuing to investigate this issue.
Feb 26, 22:50 UTC
Identified - Some users are currently experiencing intermittent failures when checking out code.
We believe this to be an issue with an upstream VCS provider and are investigating.
Feb 22, 19:05 UTC
Resolved - Our authentication provider has confirmed the earlier service disruption has been fully mitigated: https://status.auth0.com/incidents/kknl0nbdzbvx
Email/password login functionality has been restored and is operating normally.
Feb 22, 18:53 UTC
Monitoring - Our downstream authentication provider has implemented mitigation actions and is reporting a reduction in errors. We are seeing corresponding improvement in email/password login success rates. We are continuing to monitor closely to ensure stability before marking this incident resolved.
Feb 22, 17:37 UTC
Identified - We’ve identified that the issue affecting email/password sign-ins is caused by a degradation in service from our downstream authentication provider, Auth0: https://status.auth0.com/incidents/kknl0nbdzbvx
Users should still be able to sign in using GitHub or Bitbucket OAuth, which remain operational. We continue to monitor and will update as we learn more.
Feb 22, 17:25 UTC
Investigating - We are currently investigating an issue affecting users attempting to log in with email and password credentials. Sign-in via GitHub or Bitbucket OAuth is not affected and remains fully operational.
This issue appears to be related to an outage with a downstream authentication provider: https://status.auth0.com/
Feb 20, 17:44 UTC
Resolved - This incident has been resolved. Between approximately 16:00 UTC and 17:10 UTC, users may have experienced slow load times on the Pipelines page, errors when viewing pipeline and workflow history, and delays in GitHub commit status checks being delivered.
Jobs and builds were not impacted during this time.
We apologize for the disruption and thank you for your patience.
Feb 20, 17:31 UTC
Monitoring - We have deployed a fix to stabilize internal services that were experiencing elevated load. Affected components are recovering and we are closely monitoring to confirm full resolution.
Users may still experience some residual slow load times on the Pipelines page, errors when viewing pipeline and workflow history, and delays in GitHub commit status checks being delivered as systems continue to catch up and stabilize.
Jobs and builds remain unaffected and continue to run as expected.
We thank you for your patience while our engineers worked to stabilize the affected services. We will provide an update within 30 minutes or sooner.
Feb 20, 17:19 UTC
Identified - We have deployed a fix to stabilize internal services that were experiencing elevated load since approximately 16:00 UTC. Affected components appear to be recovering and we are closely monitoring the situation.
Users may still experience some slow load times on the Pipelines page, errors when viewing pipeline and workflow history, and delays in GitHub commit status checks being delivered as systems continue to stabilize.
Jobs and builds remain unaffected and continue to run as expected.
We will provide our next update within 30 minutes or sooner if things change. We thank you for your patience while we continue to work to resolve this issue.
Feb 20, 16:56 UTC
Update - At approximately 16:00 UTC, we began seeing elevated load on internal data systems that serve read operations across CircleCI. This has resulted in degraded performance on the Pipelines page, difficulty loading historical workflow and pipeline data, and delays in GitHub commit status checks being posted.
Write operations and job execution are unaffected — builds are queuing and running as expected. We are actively working to stabilize affected internal services and restore full read performance across the platform.
We will provide an update within 30 minutes. Thank you for your patience while we work to investigate this issue.
Feb 20, 16:25 UTC
Investigating - We are currently investigating two issues affecting CircleCI services. Users may experience slow load times on the Pipelines page and errors when attempting to view workflow details. Additionally, GitHub commit status checks may not be updating as expected for some users.
Builds and jobs are continuing to run and are not impacted at this time.
We will provide an update within 30 minutes. Thank you for your patience while we work to investigate this issue.
Mar 11, 18:12 UTC
Resolved - This incident has been resolved.
Mar 11, 18:01 UTC
Identified - The issue has been identified and a fix is being implemented.
Mar 11, 17:35 UTC
Investigating - Some customers may find that challenge pages are impossible to solve.
Mar 9, 19:11 UTC
Update - We are still working closely with AWS to resolve this issue. AWS services remain impacted, and our team continues to monitor the situation and coordinate with AWS for a resolution. We will share further updates on this status page as more information becomes available. We sincerely apologize for the inconvenience caused and appreciate your patience.
Mar 2, 18:28 UTC
Update - AWS is currently experiencing a power outage affecting two of its three data centers in Bahrain. The remaining center is currently over capacity, leading to widespread service failures. Since backups are hosted in the same region, recovery is dependent on AWS restoring local power and connectivity. We are monitoring the situation 24/7 and will update you the moment services begin to recover.
Mar 2, 08:05 UTC
Identified - Our upstream provider, Amazon Web Services (AWS), is currently experiencing a connectivity and power outage in the Bahrain region, impacting instances hosted in the ME-SOUTH-1 region.
Their technical teams are actively investigating the issue and working to restore services as quickly as possible. We regret the inconvenience this may cause.
Mar 9, 19:10 UTC
Update - We are still working closely with AWS to resolve this issue. AWS services remain impacted, and our team continues to monitor the situation and coordinate with AWS for a resolution. We will share further updates on this status page as more information becomes available. We sincerely apologize for the inconvenience caused and appreciate your patience.
Mar 2, 18:30 UTC
Identified - AWS is reporting a power outage affecting two Availability Zones in the UAE region. The massive load on the remaining zone is causing widespread disruptions. As backups are localized to this region, they are also impacted. We are tracking AWS updates closely and will notify you as soon as the service is restored.
Mar 1, 13:53 UTC
Investigating - Our upstream provider, Amazon Web Services (AWS), is currently experiencing a connectivity and power outage in the UAE region, impacting instances hosted in the ME-CENTRAL-1 region.
Their technical teams are actively investigating the issue and working to restore services as quickly as possible. We regret the inconvenience this may cause.
Feb 13, 07:16 UTC
Resolved - The issue has been resolved. Please contact our support if you continue to experience any related issues.
Feb 12, 22:47 UTC
Monitoring - Storage service is stable and fully restored. Our team continues to monitor system performance.
Feb 12, 14:48 UTC
Update - We are continuing to investigate this issue.
Feb 12, 14:43 UTC
Investigating - Some Cloudways Autonomous applications hosted in the London region may experience temporary connectivity issues due to a partial disruption in the underlying storage service. Our team is actively working with the infrastructure provider to restore full stability as quickly as possible.
Jan 28, 15:34 UTC
Resolved - This incident has been resolved.
Jan 28, 13:04 UTC
Investigating - Our upstream provider Vultr is currently experiencing a partial outage in their Singapore region. Their engineering team is actively working to resolve the issue as soon as possible. We regret any inconvenience this may cause.
Jan 21, 08:20 UTC
Resolved - This incident has been resolved.
Jan 20, 10:51 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 19, 10:07 UTC
Investigating - Our SMTP Add-on Service (Elastic Email) is currently experiencing performance issues, leading to delayed outbound emails for some accounts. We are in active communication with the Elastic Email team to investigate and resolve the root cause. We appreciate your patience and will keep this page updated with our progress.
Mar 3, 13:47 UTC
Resolved - This issue should now be resolved for all customers.
Mar 3, 13:01 UTC
Monitoring - We have identified the issue and customers should see errors reduced to normal levels.
Mar 3, 12:25 UTC
Investigating - Some customers may be seeing elevated error rates from Contentful APIs. We are currently investigating the issue.
Feb 17, 22:16 UTC
Resolved - The issue has been resolved.
Feb 17, 22:07 UTC
Monitoring - We have resolved the issue and are monitoring the situation.
Feb 17, 21:54 UTC
Identified - We are still observing issues for some customers and are working on a resolution.
Feb 17, 21:33 UTC
Monitoring - We addressed the issue and are monitoring the situation.
Feb 17, 20:55 UTC
Identified - Some customers will find certain features, such as Studio, disabled. We have identified the issue and are working on a fix.
Feb 9, 20:54 UTC
Resolved - This issue has been resolved.
Feb 9, 20:21 UTC
Monitoring - We were able to mitigate the issue and have re-enabled the regular file uploader.
Feb 9, 13:20 UTC
Update - We have enabled a simpler fallback uploader that provides less functionality while we investigate the issue (XML-like file types, e.g. SVG, are working again though). Customers will need to reload the Contentful WebApp to use the fallback uploader.
Feb 9, 12:48 UTC
Identified - Uploads of XML-like file types, e.g. SVG, are failing. We are working on a resolution.
Feb 9, 12:31 UTC
Investigating - Customers are experiencing issues with uploading assets, specifically SVGs. We are investigating.
Feb 6, 17:19 UTC
Resolved - The issue is now resolved.
Feb 6, 16:46 UTC
Update - We think the impact from the issue is over.
Feb 6, 14:25 UTC
Monitoring - We have deployed a fix which has resolved the issue. We will continue to monitor.
Feb 6, 14:09 UTC
Update - We are still working to mitigate the issue and expect to have a fix in place within the next 30 minutes.
Feb 6, 11:35 UTC
Identified - We have identified the cause of the issue and are working on a fix.
Feb 6, 11:03 UTC
Investigating - We are investigating an issue where cache purges are not being processed (which may result in customers seeing stale data).
Feb 6, 10:44 UTC
Resolved - The issue is now resolved.
Feb 6, 10:38 UTC
Update - We are continuing to monitor for any further issues.
Feb 6, 10:37 UTC
Monitoring - We have put a fix in place and the Typeform app should now be functional again.
Feb 6, 09:34 UTC
Investigating - Customers are experiencing issues loading the Typeform app. We are investigating.
Feb 12, 15:22 EST
Resolved - This incident has been resolved.
Feb 12, 14:55 EST
Monitoring - A fix has been implemented. Existing authorizations should now function as expected. If you’re still seeing an error, please re-authenticate the connector.
Feb 12, 14:10 EST
Investigating - We are currently investigating an issue impacting Microsoft Excel and Sharepoint connectors on the platform.
At this time, customers who encounter connector errors are unable to successfully reconnect to Microsoft. Once the error appears, reconnection attempts are failing, and affected connectors remain blocked.
Our team is actively working to identify the root cause and restore reconnection functionality as quickly as possible.
We will provide updates here as soon as more information becomes available.
Feb 4, 15:27 EST
Resolved - The fix for this issue has been deployed. We are currently monitoring to ensure normal behavior has been fully restored.
Feb 2, 15:58 EST
Identified - We have identified the cause of the issue impacting form submission redirects.
Our engineering team is actively working on a fix, which will be implemented as soon as possible. We will continue to provide updates here as progress is made.
Feb 2, 15:27 EST
Investigating - We are currently investigating an issue where some form submissions are not redirecting respondents as expected after submission.
Under normal circumstances, respondents are redirected back to the form if there is a validation or connector error, allowing them to correct their information and resubmit. If no errors occur, respondents are redirected to the configured thank-you page or redirect URL.
Some submissions may remain on the response processing page and display a default confirmation message instead of redirecting respondents or displaying potential errors.
Our team is actively investigating the root cause of this issue. We will continue to post updates here as more information becomes available.
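The expected redirect behavior described above can be sketched as a small decision function. The names and signature here are hypothetical, for illustration only, and are not the platform's actual implementation:

```python
def submission_redirect(has_validation_error, has_connector_error,
                        thank_you_url, form_url):
    """Expected behavior: errors send the respondent back to the form
    to correct their information and resubmit; otherwise they land on
    the configured thank-you page or redirect URL."""
    if has_validation_error or has_connector_error:
        return form_url
    return thank_you_url

# An errored submission should return to the form, not a confirmation page.
print(submission_redirect(True, False, "/thanks", "/form"))  # /form
```

The reported incident corresponds to submissions taking neither branch, instead stalling on the processing page with a default confirmation message.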
Jan 29, 09:51 EST
Resolved - We experienced processing delays with Salesforce connectors in the London region due to increased system load. Our team has upgraded system capacity and the issue is now resolved. All connectors are operating normally.
Jan 23, 14:11 EST
Resolved - This incident has been resolved. Please reach out to [email protected] if you run into any further issues.
Jan 22, 10:00 EST
Identified - We confirmed a secondary issue related to yesterday's SAML concern where SAML-authenticated forms are not preserving URL query parameters after authentication. The team has identified the root cause and is working to resolve the issue. Additional details will be provided here as they become available.
Jan 21, 18:02 EST
Resolved - The issue impacting SAML authentication has been fully resolved. Respondents can now access SAML-authenticated forms as expected, and SAML-based login is functioning normally.
Our team has confirmed service restoration and is continuing to monitor the system to ensure stability. Thank you for your patience while we worked to resolve this issue.
Jan 21, 16:52 EST
Identified - The issue impacting SAML authentication has been identified, and a fix is currently being implemented. Some respondents may still be unable to access SAML-authenticated forms, and SAML-based login may continue to be affected during this time.
Our engineering team is actively deploying the fix and monitoring the situation closely. We will provide another update once the fix has been fully applied and we’ve confirmed service restoration.
Jan 21, 15:40 EST
Investigating - We are currently investigating an issue that is impacting SAML authentication. At this time, some respondents are unable to access SAML-authenticated forms. This issue may also affect SAML-based login.
Our engineering team is actively working to identify the root cause and implement a fix. We will provide updates as more information becomes available.
Mar 11, 15:53 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 11, 15:53 UTC
Update - Copilot Code Review queue processing has returned to normal levels.
Mar 11, 15:31 UTC
Update - We experienced degraded performance with Copilot Code Review starting at 14:01 UTC. Customers experienced extended review times and occasional failures. Some extended processing times may continue briefly. We are monitoring for full recovery. We'll post another update by 16:30 UTC.
Mar 11, 14:28 UTC
Monitoring - We are investigating degraded performance with Copilot Code Review. Customers may experience extended review times or occasional failures. We are seeing signs of improvement as our team works to restore normal service. We'll post another update by 15:30 UTC.
Mar 11, 14:25 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 11, 15:02 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 11, 15:02 UTC
Update - We are investigating elevated timeouts that affected GitHub API requests. The incident began at 14:37 UTC. Some users experienced slower response times and request failures. System metrics have returned to normal levels, and we are now investigating the root cause to prevent recurrence.
Mar 11, 14:37 UTC
Investigating - We are investigating reports of degraded performance for API Requests
Mar 9, 17:03 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 9, 17:03 UTC
Update - Webhooks is operating normally.
Mar 9, 15:56 UTC
Update - We are experiencing latency on the API and UI endpoints. We are working to resolve the issue.
Mar 9, 15:50 UTC
Investigating - We are investigating reports of degraded performance for Webhooks
Mar 9, 03:51 UTC
Resolved - On March 9, 2026, between 01:23 UTC and 03:25 UTC, users attempting to create or resume codespaces in the Australia East region experienced elevated failures, peaking at a 100% failure rate for this region. Codespaces in other regions were not affected.
The create and resume failures were caused by degraded network connectivity between our control plane services and the VMs hosting the codespaces. This was resolved by redirecting traffic to an alternate site within the region. While we are addressing the core network infrastructure issue, we have also improved our observability of components in this area to improve detection. This will also enable our existing automated failovers to cover this failure mode. These changes will prevent or significantly reduce the time any similar incident causes user impact.
Mar 9, 03:51 UTC
Update - This incident has been resolved. New Codespace creation requests are now completing successfully.
Mar 9, 03:32 UTC
Update - We are seeing recovery, with the failure rate for new Codespace creation requests dropping from 5% to about 3%.
Mar 9, 03:04 UTC
Update - We are seeing about 5% of new Codespace creation requests failing. We are investigating the root cause and identifying the impacted regions.
Mar 9, 03:04 UTC
Investigating - We are investigating reports of degraded performance for Codespaces
Mar 6, 23:28 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 6, 23:28 UTC
Update - Webhooks is operating normally.
Mar 6, 23:26 UTC
Update - We have deployed a fix and are observing a full recovery. The affected endpoint was the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. We will continue monitoring to confirm stability.
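Since the reported failure rate stayed under 1% and was transient, integrators polling this endpoint during the incident could work around it with simple retries. The helper below is an illustrative sketch, not GitHub-recommended settings; the endpoint path is the documented deliveries API, but the retry policy is an assumption:

```python
import time
import urllib.request
import urllib.error

API = "https://api.github.com/repos/{owner}/{repo}/hooks/{hook_id}/deliveries"

def fetch_with_retry(fetch, retries=3, sleep=time.sleep):
    """Call fetch(); retry HTTP 5xx responses with exponential backoff
    (1s, 2s, ...). Client errors (4xx) are raised immediately."""
    for attempt in range(retries):
        try:
            return fetch()
        except urllib.error.HTTPError as err:
            if err.code < 500 or attempt == retries - 1:
                raise
            sleep(2 ** attempt)

def list_deliveries(owner, repo, hook_id, token):
    """List recent deliveries for a repository webhook, retrying
    transient server errors."""
    req = urllib.request.Request(
        API.format(owner=owner, repo=repo, hook_id=hook_id),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
            "X-GitHub-Api-Version": "2022-11-28",
        },
    )
    return fetch_with_retry(lambda: urllib.request.urlopen(req).read())
```

Separating the retry loop from the request also makes the backoff logic testable without network access, by injecting a fake `fetch` and `sleep`.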
Mar 6, 22:35 UTC
Update - We are preparing a new mitigation for the issue affecting the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.
Mar 6, 21:34 UTC
Update - The previous mitigation did not resolve the issue. We are investigating further. The affected endpoint is the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.
Mar 6, 20:18 UTC
Update - We have deployed a fix for the issue causing some users to experience intermittent failures when accessing the Webhooks API and configuration pages. We are monitoring to confirm full recovery.
Mar 6, 19:39 UTC
Update - We continue working on mitigations to restore service.
Mar 6, 19:07 UTC
Update - We continue working on mitigations to restore service.
Mar 6, 18:39 UTC
Update - We continue working on mitigations to restore service.
Mar 6, 18:07 UTC
Update - We continue working on mitigations to restore full service.
Mar 6, 17:43 UTC
Update - Our engineers have identified the root cause and are actively implementing mitigations to restore full service.
Mar 6, 17:19 UTC
Update - This problem is impacting less than 1% of UI and webhook API calls.
Mar 6, 17:12 UTC
Update - We are investigating an issue affecting a subset of customers experiencing errors when viewing webhook delivery histories and retrying webhook deliveries. The UI and webhook API are impacted. Engineers have identified the cause and are actively working on mitigation.
Mar 6, 16:58 UTC
Investigating - We are investigating reports of degraded performance for Webhooks
Mar 5, 07:26 EST
Resolved - The affected areas of HubSpot are now fully accessible. We've identified and addressed the root cause of the issue. The incident has been fully resolved, and the affected areas of HubSpot should be working properly. No data was lost.
Mar 5, 07:12 EST
Monitoring - We've addressed the issue that caused multiple areas in HubSpot to be unavailable since 6:00 AM EST (UTC -05:00) on Mar 5, 2026. We're monitoring performance closely to ensure the tools recover properly. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Mar 5, 06:57 EST
Update - We've identified the issue that's caused multiple areas in HubSpot to be unavailable since 6:00 AM EST (UTC -05:00) on Mar 5, 2026. We're addressing the cause of this issue and will update this page when we have more information. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Mar 5, 06:53 EST
Update - We've identified the issue that's caused multiple areas in HubSpot to be unavailable since 6:15 AM EST (UTC -05:00) on Mar 5, 2026. We're addressing the cause of this issue and will update this page when we have more information. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Mar 5, 06:39 EST
Identified - We estimate that we will restore service in the next several hours.
We are mitigating impact from a server impairment which we believe is caused by resource exhaustion. We are reverting the change.
We will be back with an update within 1 hour.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Feb 25, 16:37 EST
Resolved - Workflows are now fully accessible. We've identified and addressed the root cause of the issue. The incident has been fully resolved and Workflows should be working properly. No data was lost.
Feb 25, 16:10 EST
Investigating - We are investigating an issue impacting workflows. We will provide an update when we have more information.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Feb 19, 13:54 EST
Resolved - CRM record creation is now fully accessible. We've identified and addressed the root cause of the issue. The incident has been fully resolved and CRM record creation should be working properly. No data was lost.
Feb 19, 13:53 EST
Monitoring - We've addressed the issue that caused CRM record creation to be unavailable since 1:11 PM EST (UTC -05:00) on Feb 19, 2026. We're monitoring performance closely to ensure the tools recover properly. We will be back with an update within 15 minutes.
Feb 18, 07:24 EST
Resolved - Between 08:48 AM (UTC +00:00) and 12:20 PM (UTC +00:00), many customers in all affected regions experienced issues with session invalidation. This was caused by a load balancer impairment. As of 12:20 PM (UTC +00:00), session handling is working properly and the incident has been fully resolved.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Feb 18, 06:19 EST
Monitoring - We've addressed the issue that caused sessions to be invalidated in North America, Europe, and Asia Pacific since 08:48 AM (UTC +00:00). We're monitoring performance closely to ensure the tools recover properly.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Feb 18, 06:01 EST
Investigating - We are investigating an issue impacting session expirations and disconnections. We will provide an update when we have more information.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Feb 10, 06:42 EST
Resolved - Between 10:30 AM (UTC +01:00) and 12:20 PM (UTC +01:00), some customers in all affected regions experienced issues with our Google Meet integration. This was caused by a rollout issue. As of 12:20 PM (UTC +01:00), our Google Meet integration is working properly and the incident has been fully resolved.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Feb 10, 06:21 EST
Update - The option to add "Google Meet" links in meetings isn't available right now for some customers. We're investigating what's causing this issue and will update this page when we have more information. Only customers hosted in North America and Europe are currently impacted. We will be back with an update within 30 minutes.
Feb 10, 06:16 EST
Investigating - Scheduling meetings with Google Meet isn't available right now. We're investigating what's causing this issue and will update this page when we have more information. Only customers hosted in North America and Europe are currently impacted. We will be back with an update within 30 minutes.
Mar 10, 00:59 UTC
Resolved - On 9 March UTC, Automation users in the APAC region may have experienced performance degradation within Jira, Jira Product Discovery, Jira Service Management, Jira Work Management, and Confluence. The issue has now been resolved, and the service is operating normally for all affected customers.
Mar 10, 00:51 UTC
Monitoring - The performance degradation of automations has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor performance closely to confirm stability.
Mar 10, 00:15 UTC
Identified - Our team has now identified the cause of this issue relating to delayed automation events and has put a hotfix in place to help restore automation performance. We will continue to monitor performance as the backlog of issues is now being processed.
Mar 10, 00:06 UTC
Investigating - We are aware of customers experiencing delays with their automations within Jira, Jira Service Management, Jira Work Management, Jira Product Discovery and Confluence. Our team is investigating with urgency and we will provide an update within one hour.
Feb 26, 18:55 UTC
Resolved - On February 26, 2026, JSM experienced a disruption, and Atlassian Assist service was unavailable to affected users. The issue has now been resolved, and the service is operating normally for all affected customers.
Feb 26, 18:33 UTC
Monitoring - The issue has now been resolved, and services are operating normally for all affected customers. We will continue to monitor closely to confirm stability.
Feb 26, 18:05 UTC
Identified - We have identified the likely cause of the issue, and our teams are diligently working on a mitigation. Affected users may experience Atlassian Assist unavailability in JSM.
We will continue to share additional updates here as more information is available.
Feb 26, 16:21 UTC
Update - We continue to investigate the issue, and will share an update within the next 2 hours or sooner.
Feb 26, 14:25 UTC
Update - We are still actively investigating this issue and working to restore Atlassian Assist in Jira Service Management. We will share another update here within the next 2 hours or sooner.
Feb 26, 13:22 UTC
Investigating - Atlassian Assist in Jira Service Management is currently unavailable. We are actively investigating the problem. We will share an update here within the next 60 minutes, or sooner as more information becomes available.
Feb 20, 12:26 UTC
Resolved - This issue has been resolved and all services are functional.
Feb 20, 12:26 UTC
Update - We understand that customers using Customer Service Management, Jira, Jira Service Management and Confluence may be experiencing difficulties with user and team selection in certain system and custom fields.
We are continuing to work towards mitigating this issue for all users and products.
We anticipate our next update to be posted within 6 hours or sooner based on significant progress.
Feb 20, 07:13 UTC
Update - We understand that customers using Customer Service Management, Jira, Jira Service Management and Confluence may be experiencing difficulties with user and team selection in certain system and custom fields.
We are continuing to work towards mitigating this issue for all users and products.
We anticipate our next update to be posted within 6 hours or sooner based on significant progress.
Feb 20, 04:14 UTC
Update - Customers using Jira, Jira Service Management and Confluence may be experiencing difficulties with user and team selection in certain system and custom fields.
We are continuing to work towards mitigating this issue for all users and products.
We anticipate our next update to be posted within 6 hours as we continue our investigation.
Feb 20, 02:52 UTC
Identified - Our team has identified components of Jira Service Management that are also impacted by this incident and we have updated our notifications here appropriately.
We anticipate our next update to be posted within 8 hours, as our team continues working with urgency towards mitigation of this issue.
Feb 20, 2026 - 02:52 UTC
Please see prior notifications relating to this incident below:
Update - Several regions have fully recovered, and the team continues to diligently work on full mitigation. Some affected users may be unable to select users and teams in specified fields on certain sites.
We anticipate the next update to be posted in approximately 8 hours.
Feb 19, 2026 - 21:04 UTC
Update - We continue to make progress on full mitigation, and more users should see their experience returning to normal. Some affected users may be unable to select users and teams in specified fields on certain sites.
We will continue to share additional updates here as more information is available.
Feb 19, 2026 - 18:56 UTC
Identified - Mitigation is progressing, and user experience is returning to normal. Some affected users may be unable to select users and teams in specified fields on certain sites.
Feb 19, 2026 - 17:34 UTC
Update - We have identified the likely cause of the issue, and our teams are diligently working on a mitigation. Affected users may be unable to select users and teams in specified fields on certain sites.
We will continue to share additional updates here as more information is available.
Feb 19, 2026 - 17:14 UTC
Investigating - We are actively investigating reports of performance degradation affecting the ability to select users and teams in specified fields on certain sites.
We will share updates here as more information is available.
Feb 19, 2026 - 16:44 UTC
Feb 7, 05:38 UTC
Resolved - On February 6, 2026, JIRA and JSM experienced a disruption, and services were unavailable to affected users. The issue has now been resolved, and the service is operating normally for all affected customers.
Feb 7, 05:28 UTC
Monitoring - The issue has now been resolved, and services are operating normally for all affected customers. We will continue to monitor closely to confirm stability.
Feb 7, 03:43 UTC
Update - Our teams have restored access for most users in us-east-1 region, and continue to work diligently towards full mitigation.
Feb 7, 02:22 UTC
Update - Our teams are in final stages of mitigation. During this time, some users in us-east-1 region may not be able to access JIRA and JSM. We will continue to share additional updates here as more information is available.
Feb 6, 22:33 UTC
Update - Our teams are diligently working on a mitigation. Some users in us-east-1 region may not be able to access JIRA and JSM. We will continue to share additional updates here as more information is available.
Feb 6, 20:26 UTC
Update - We have identified the likely cause of the issue, and our teams are diligently working on a mitigation. Some users in us-east-1 region might not be able to access Jira and JSM.
We will continue to share additional updates here as more information is available.
Feb 6, 20:24 UTC
Update - Impact
The incident is affecting users of the Jira product, resulting in connection failures in specific impacted areas. Customers might be experiencing difficulties accessing or operating certain features within Jira due to this disruption.
Current Status
Our teams are actively addressing the situation, with restoration activities underway. Key efforts include initiating a point-in-time recovery to restore service as efficiently as possible.
Next Steps
The incident response team is focusing on the recovery process to resolve the issue. Further communication will be provided in 30 minutes.
Feb 6, 20:24 UTC
Identified - We have identified the likely cause of the issue, and our teams are diligently working on a mitigation. Some users in us-east-1 region may not be able to access Jira and JSM.
We will continue to share additional updates here as more information is available.
Feb 6, 20:01 UTC
Resolved - The team has verified that the fix has successfully propagated to all accounts. Impacted customers will receive an email follow-up containing additional information.
Feb 6, 14:41 UTC
Monitoring - The team has corrected entitlements for impacted customers and reverted plans to Service Collection Free.
We are now monitoring the rollout to confirm the fix has successfully propagated to all accounts.
Feb 6, 10:25 UTC
Update - Our team has identified the steps required to restore the affected sites and is actively working to restore original entitlements.
We will provide our next update in four hours, or sooner if we have meaningful progress to share.
Feb 6, 07:52 UTC
Update - Our team continues active testing of the fix to restore customer accounts affected by this incident.
We will provide further update within two hours or sooner.
Feb 6, 05:57 UTC
Update - Our team is now actively testing the fix required to properly restore customer accounts impacted by this incident.
We will provide further update within two hours to provide guidance on our expected timeline for the resolution to be fully completed.
Feb 6, 03:02 UTC
Identified - During an update designed to enable Service Collection for our Jira Service Management Free customers, some of these customers were incorrectly updated to a Standard Service Collection no-cost trial, and received a notification that they would now need to pay for this service at the end of their trial period.
Please rest assured that requiring payment was not an intended change for these Free customers, and we are working to restore these accounts back to their Free status.
We will provide a further update within two hours, or earlier if there is important information to share.
Feb 6, 01:47 UTC
Investigating - We are aware of reports that Free JSM customers received emails stating that their plans have been adjusted to Standard Trial Service Collection accounts.
Our team is working urgently to address this issue and we plan to revert the change for all customers that are impacted, to reinstate their Free plan status.
Please note that if your plan has been incorrectly updated, you will be placed into a no-cost trial status.
There is currently no action required from those customers that have incorrectly received these emails.
We will provide a further update within an hour as our efforts continue.
Feb 18, 14:29 UTC
Resolved - This incident has been resolved.
Feb 18, 03:21 UTC
Update - We are continuing to investigate this issue.
Feb 17, 22:22 UTC
Update - We are continuing to investigate this issue.
Feb 17, 21:09 UTC
Investigating - We are aware of an issue where some customers are not receiving login verification emails required for verification. As a result, impacted users may be unable to complete the login process.
If you are affected, please contact Support using the Request Help Logging in button from this article: https://support.lastpass.com/s/document-item?language=en_US&bundleId=lastpass&topicId=LastPass/having-trouble-logging-in-faq.html&_LANG=enus for assistance, and our team will help verify your identity and restore access.
We will provide updates as the team continues the investigation.
Feb 3, 17:49 UTC
Resolved - We have confirmed that the issue has been resolved completely and all systems are 100% operational at this time. Affected users may need to log out and log back in.
We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.
Feb 3, 16:24 UTC
Monitoring - Our engineering team has identified the underlying issue and completed a rollback, which has mitigated the issue. Affected users may need to log out and log back in to resolve the issue.
We will provide an additional update shortly.
Feb 3, 16:01 UTC
Investigating - We are actively investigating reports that some LastPass users may be experiencing issues launching saved sites from their vault. Access to Vaults is not affected. Engineers continue to troubleshoot the situation and we will update once resolved.
Oct 30, 00:04 UTC
Resolved - Systems are operational and the incident is resolved.
Oct 29, 21:34 UTC
Monitoring - Our third-party provider has applied fixes that are gradually resolving the issue. While many users are seeing improvements, some connectivity issues remain. We are actively monitoring the situation.
Oct 29, 20:01 UTC
Update - LastPass engineers continue to work with the cloud provider to resolve the issue.
Oct 29, 17:44 UTC
Identified - Our third-party cloud provider has identified the issue and is now actively working towards a resolution. We will provide another update shortly.
Oct 29, 16:23 UTC
Investigating - We’re currently experiencing service degradation on LastPass's marketing site due to an issue with our external cloud provider. Our team is actively working to resolve the issue and minimize the impact.
Oct 21, 09:42 UTC
Resolved - Service degradation issue has been resolved. Vault access and login functionality, including for federated users, are now fully operational.
Oct 21, 09:19 UTC
Update - We’re currently investigating an issue that may prevent some users from logging in. Our team is working to restore full access as quickly as possible.
Oct 21, 09:12 UTC
Investigating - We’re currently experiencing system degradation. LastPass Vaults may take longer to load. Our team is actively working to resolve the problem and minimize impact.
Oct 20, 13:25 UTC
Resolved - This incident has been resolved.
Oct 20, 11:12 UTC
Update - All of our services are now running and operational. However, we are closely monitoring the situation, as some of our external providers who were also affected by this incident are still in the process of recovery.
We’re keeping a close eye on our integrations to ensure continued stability and prevent further disruptions.
Oct 20, 10:41 UTC
Monitoring - Most services have now been restored and are operational, including our phone lines. We continue to monitor the situation closely to ensure full stability.
Oct 20, 09:57 UTC
Investigating - We’re currently experiencing service degradation due to an issue with our external cloud provider, which has also impacted our phone lines. For support, please reach out through our Support Center. Our team is actively working to resolve the issue and minimize the impact.
Mar 11, 11:33 UTC
Resolved - This incident has been resolved.
Mar 11, 11:19 UTC
Update - We are continuing to monitor for any further issues.
Mar 11, 10:57 UTC
Monitoring - The cause of the issue was identified, and we are monitoring the fix that was applied.
Mar 11, 10:33 UTC
Update - We are continuing to investigate this issue.
Mar 11, 10:08 UTC
Update - We are continuing to investigate this issue.
Mar 11, 09:38 UTC
Investigating - We continue to investigate the rise in latencies that is causing high error rates.
Mar 11, 09:09 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 11, 08:49 UTC
Identified - The issue has been identified and we are working on a resolution.
Mar 11, 08:44 UTC
Investigating - We are experiencing high latencies in our origin. This may cause high error rates and elevated response times, as well as build failures. We are currently investigating.
Mar 10, 16:43 UTC
Resolved - From 16:43 UTC to 16:58 UTC, we served HTTP 500 responses for paths invoking Edge Functions (including framework-generated ones like Next.js Middleware/Proxy). This affected Free/Starter users as well as some Pro tier users on the Standard Edge Network. The issue has since been resolved.
Mar 4, 10:26 UTC
Resolved - Between February 25 and March 4, we saw an increased number of edge function errors related to requests with larger payloads on some sites on the standard network. This has been resolved.
Mar 3, 01:25 UTC
Resolved - This incident has been resolved.
Mar 3, 01:14 UTC
Monitoring - A fix has been implemented, and we are monitoring the results.
Mar 3, 00:51 UTC
Investigating - Beginning at 23:58 UTC, we saw an increased rate of delayed builds for plans on the Free tier. We are currently investigating this issue.
Mar 5, 17:35 UTC
Resolved - This incident has been resolved.
Mar 5, 13:01 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 5, 11:41 UTC
Investigating - We are currently investigating this issue.
Feb 25, 20:03 UTC
Resolved - This incident has been resolved.
Feb 25, 19:28 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 25, 19:24 UTC
Identified - The issue has been identified and a fix is being implemented.
Feb 25, 18:43 UTC
Investigating - We are currently investigating this issue.
Feb 17, 22:28 UTC
Resolved - This incident has been resolved.
Feb 17, 21:17 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 17, 20:01 UTC
Investigating - We are currently investigating this issue.
Jan 29, 20:49 UTC
Resolved - This incident has been resolved.
Jan 29, 20:18 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 29, 20:04 UTC
Investigating - We are currently investigating this issue.
A mitigation has been applied; however, some EU workspaces may still experience the issue. An additional mitigation is currently being investigated to fully resolve the problem.
Affected components
- File uploads (Degraded performance)
The issue has been identified and a mitigation is being deployed.
All impacted services have now fully recovered.
Affected components
- Codex (Operational)
All impacted services have now fully recovered.
Mar 5, 11:14 PST
Resolved - Between 8:53 AM and 9:37 AM PST, customers may have experienced delays in email event processing, including click and open data. Our engineers identified a bottleneck and have resolved the issue.
While email delivery continued with low execution times, the reporting of these events to the Stats API and your dashboard was delayed.
All downstream queues, including Engagement Stats, fully caught up as of 9:45 AM PST, and the incident is resolved.
Mar 4, 08:44 PST
Resolved - Our engineers have investigated and resolved an issue that caused delays in timezone updates. Starting on 03/03 at ~2:09 PM, customers would have seen errors in the console when trying to update their timezone setting or while fetching timezones via /v3/timezones. A fix was deployed on 03/04 at ~7:48 AM. This issue has now been resolved.
Mar 4, 07:30 PST
Resolved - Our engineers have investigated and resolved an issue, occurring from 3/4/2026 at 8 AM PST to 3/9/2026 at 5 PM PST, that impacted reporting for a subset of group unsubscribe and group resubscribe Event Webhooks. Users may have seen group unsubscribe and group resubscribe events that were not processed, although the API calls for those events were processed. The issue has been resolved and all impacted services are operating normally.
Mar 2, 17:17 PST
Resolved - Between 4:00 PM and 5:04 PM PST, customers may have experienced delays when viewing Global Stats. Our engineers identified and resolved the issue.
Mar 9, 20:37 UTC
Resolved - Ingestion and alert latency is back to normal.
Mar 9, 18:17 UTC
Monitoring - We are burning down the backlog and actively monitoring the progress.
Mar 9, 17:59 UTC
Identified - US customers may experience delay in alerts. We've identified the issue and will be putting in a fix.
Mar 2, 21:09 UTC
Resolved - This incident has been resolved.
Mar 2, 20:57 UTC
Update - EU ingestion has been restored and latency is back to normal levels. US ingestion continues to recover and will likely be caught up within the next hour.
Mar 2, 20:29 UTC
Monitoring - We have implemented a fix and are monitoring.
Mar 2, 20:09 UTC
Identified - The issue has been identified and the fix is being implemented.
Mar 2, 19:57 UTC
Investigating - We are currently investigating this issue.
Feb 26, 20:56 UTC
Resolved - Ingestion backlog has finished processing and our system is now operating normally.
Feb 26, 20:02 UTC
Monitoring - Our cloud provider has resolved an underlying problem and our dashboard availability issues have been resolved. We're continuing to process our ingestion backlog and monitor the situation.
Feb 26, 19:15 UTC
Investigating - We're investigating intermittent failures loading our dashboard (all regions) and increased ingestion latency for all event types in the US.
Feb 26, 19:41 UTC
Resolved - We have identified that the core problem is related to the intermittent dashboard failures. Please follow https://status.sentry.io/incidents/z3g2bjxxwv9l for the latest updates. In the meantime, this incident will be marked as resolved.
Feb 26, 19:07 UTC
Update - We identified that transaction ingestion was also affected.
Feb 26, 18:59 UTC
Investigating - We are experiencing an ingestion issue with spans, logs, and metrics. Our teams are currently investigating the problem.
Feb 19, 12:53 EST
Resolved - This incident has been resolved.
Feb 19, 12:16 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 19, 11:18 EST
Identified - The issue has been identified and a fix is being implemented.
Feb 19, 10:45 EST
Investigating - We are currently investigating delays in processing file published callbacks and webhooks.
Feb 17, 18:10 EST
Resolved - This incident has been resolved.
Feb 17, 17:26 EST
Monitoring - A fix has been applied and we're monitoring queue processing.
Feb 17, 08:03 EST
Investigating - We are currently investigating possible delays in job authorization and processing.
Feb 10, 19:46 EST
Resolved - This incident has been resolved.
Feb 10, 17:41 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 10, 16:36 EST
Investigating - We are currently investigating reports of users experiencing difficulty accessing the Translation Workbench.
Feb 9, 22:41 EST
Resolved - This incident has been resolved.
Feb 9, 18:45 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 9, 11:46 EST
Identified - Due to a disruption on GitHub’s side, the GitHub Connector may experience increased latency in deliveries and request processing. Dashboard, Smartling API, Global Delivery Network services remain operational.
Jan 14, 12:04 EST
Resolved - This incident has been resolved.
Jan 14, 11:34 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 14, 10:54 EST
Investigating - We are currently investigating reports of delays in job authorization and processing.
We are currently experiencing delays in calendar event synchronization for users with Microsoft Outlook (O365) and Google Calendar integrations. Events may not appear in Tempo Timesheets as expected. Our engineering team is actively investigating the issue. We will provide further updates as we learn more. We apologize for the inconvenience.
Affected components
- Timesheets (Degraded performance)
The incident was resolved. Root cause analysis for a permanent solution is in progress.
Affected components
- Capacity Planner (Operational)
- Timesheets (Operational)
- Financial Manager (Operational)
The disruption has now been resolved.
Affected components
- Financial Manager (Operational)
- Jira (Operational)
- Tempo for Slack (Operational)
- Tempo for Jira Cloud Help Center (Operational)
- Adaptive Planner (Operational)
- Time Tracker (Operational)
- Capacity Planner (Operational)
- Timesheets (Operational)
Resolved
Affected components
- Tempo for Slack (Operational)
- Tempo for Jira Cloud Help Center (Operational)
- Adaptive Planner (Operational)
- Capacity Planner (Operational)
- Timesheets (Operational)
- Financial Manager (Operational)
- Jira (Operational)
Mar 6, 14:49 UTC
Resolved - This incident has been resolved.
Mar 6, 14:08 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 6, 13:50 UTC
Investigating - We are currently investigating this issue.
Jan 23, 15:30 UTC
Resolved - We identified the cause of elevated latency impacting some databases in the us-east-1 region between 15:30 and 15:35 UTC: a sudden surge of connection attempts hit OS-level connection limits on our proxy layer. This resulted in slower new-connection establishment and increased latency for some requests. The databases themselves were not impacted. We are implementing additional proxy-level metrics and safeguards to detect and manage similar edge cases earlier.
Dec 10, 12:08 UTC
Resolved - The issue has been identified and replicas were successfully reconnected.
Dec 10, 11:50 UTC
Investigating - We identified an issue in Regional Databases in US-East-1 where database replicas have connectivity issues with each other.
Other regions are not impacted. Global databases are not impacted.
We are working on the issue.
Dec 5, 09:18 UTC
Resolved - This incident has been resolved.
Dec 5, 09:03 UTC
Investigating - Upstream provider confirmed an incident. We are investigating the impact and potential resolutions.
Dec 1, 07:00 UTC
Resolved - We identified and fixed a bug that could cause messages with Flow Control enabled to be delayed longer than their configured delay, resulting in unexpectedly long pending times.
The fix is in place and the issue should not recur. If you’re still seeing unusually long-delayed messages, please contact [email protected] and we can help with remediation.
Mar 2, 18:23 UTC
Monitoring - We are monitoring the situation and continue to work toward restoring capacity in the dxb1 region. We will send further updates when new information is available.
Mar 2, 15:29 UTC
Identified - Due to operational issues in the dxb1 region, traffic is currently re-routed to bom1. Additionally, dxb1 is currently unavailable as a Function Region for new deployments.
If your existing deployments that use the dxb1 region are experiencing elevated function invocation errors, we strongly recommend switching to the nearest region (such as bom1) and redeploying until capacity is restored in dxb1. Deployments using multiple regions or failover regions are not affected, since traffic is automatically routed to the nearest region based on the configured settings.
Mar 6, 21:38 UTC
Resolved - This incident has been resolved.
Mar 6, 21:20 UTC
Monitoring - A fix has been applied and we are seeing recovery for affected deployments. We are continuing to monitor.
Mar 6, 19:18 UTC
Update - We are applying a fix for the deployments experiencing elevated errors.
We continue to recommend redeploying if you are seeing errors on deployments created between 11:20 UTC and 15:14 UTC. Deployments created outside of this window are unaffected and no action is required. Additionally, deployments with middleware on the Node runtime are unaffected.
Mar 6, 15:25 UTC
Identified - Some deployments created between 11:20 UTC and 15:14 UTC with Edge Middleware may be seeing elevated errors. Deployments created outside of this time window are unaffected. If you are experiencing issues, we recommend redeploying.
Mar 6, 21:22 UTC
Resolved - This incident has been resolved.
Mar 6, 21:10 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 6, 21:00 UTC
Identified - The issue has been identified and a fix is being implemented.
Mar 6, 20:53 UTC
Investigating - We are investigating reports of elevated queue message latency and message retry latency in iad1, as well as elevated sleep times and increased step retries in Workflow. We will provide additional updates as they become available.
Mar 3, 21:10 UTC
Resolved - This incident has been resolved.
Mar 3, 19:14 UTC
Monitoring - A fix has been implemented and we are seeing recovery for data loading and ingestion across services. We are continuing to monitor and will provide additional updates as they become available.
Mar 3, 17:10 UTC
Investigating - Dashboard pages that use Observability data, including Observability, Speed Insights, Web Analytics, Usage, Firewall, and Activity, are experiencing delays while loading data. These pages are also experiencing delays ingesting new data. We are investigating this issue and will provide additional updates as they become available.
Mar 2, 15:20 UTC
Resolved - This incident has been resolved.
Mar 2, 14:59 UTC
Monitoring - We have rolled out a second mitigation for elevated Build errors and are seeing recovery. All builds are now excluding the Dubai region (dxb1) from their deployment targets as a temporary measure. We will provide additional updates as they become available.
Mar 2, 13:00 UTC
Update - We have rolled out a first mitigation for elevated Build errors. Builds that use Middleware are now excluding the Dubai region (dxb1) from their deployment targets as a temporary measure, and should complete successfully again. We are now working on a mitigation for Builds that are using Edge Functions.
Mar 2, 11:59 UTC
Update - We are currently deploying a mitigation for elevated Build errors. Builds that use Middleware or Edge Functions will exclude the Dubai region (dxb1) from their deployment targets as a temporary measure. We will provide additional updates as they become available.
Mar 2, 10:50 UTC
Update - We are still seeing elevated errors in Builds in all regions, because Middleware and Edge Functions may be deployed globally. Builds that don't use Middleware and Edge Functions are not impacted. We are continuing to work on a fix for this issue.
Mar 2, 08:43 UTC
Update - The dxb1 Edge traffic is currently being rerouted to the nearest Edge region (bom1) to mitigate the impact. We will provide additional updates as they become available.
Mar 2, 06:24 UTC
Update - We have rolled out mitigations and are seeing recovery. If you are still seeing build failures and are using Dubai (dxb1) as your primary Vercel Functions region, you can switch to another region as a workaround.
Mar 2, 06:06 UTC
Identified - Starting from 5:00 am UTC, we have started seeing failures to deploy and invoke functions in Dubai region (dxb1). Deployments with Middleware Functions are also impacted in all regions, because Middleware Functions are deployed globally for production deployments. Our team is actively investigating the issue.
Mar 8, 23:11 PDT
Resolved - This incident has been resolved and the affected services have been restored.
Mar 8, 23:01 PDT
Monitoring - On 03/09/2026, between 01:07 UTC and 01:49 UTC, a subset of users experienced one-way audio on inbound and outbound PSTN calls in the Japan region. Our vendor has resolved the issue and the affected services have been restored.
Mar 6, 06:25 PST
Resolved - This incident has been resolved and the affected services have been restored.
Mar 6, 06:14 PST
Monitoring - An issue where Zoom Rooms users are not able to join Microsoft Teams Meetings has been successfully resolved by our vendor.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
Mar 6, 03:02 PST
Update - Our vendor is actively working on a resolution, and we will keep you informed with timely updates as progress is made. Thank you for your patience.
Mar 5, 23:53 PST
Update - Our vendor is actively working on a resolution, and we will keep you informed with timely updates as progress is made. Thank you for your patience.
Mar 5, 21:48 PST
Update - Our vendor is actively working on a resolution, and we will keep you informed with timely updates as progress is made. Thank you for your patience.
Mar 5, 19:49 PST
Update - Our vendor is actively working on a resolution, and we will keep you informed with timely updates as progress is made. Thank you for your patience.
Mar 5, 17:48 PST
Update - We are continuing to work on a fix for this issue and will provide updates as soon as there is more information to share.
Mar 5, 16:45 PST
Update - We are continuing to work on a fix for this issue and will provide updates as soon as there is more information to share.
Mar 5, 15:43 PST
Update - We are continuing to work on a fix for this issue and will provide updates as soon as there is more information to share.
Mar 5, 14:43 PST
Update - We are continuing to work on a fix for this issue and will provide updates as soon as there is more information to share.
Mar 5, 13:30 PST
Identified - We have identified the root cause of the issue where Zoom Rooms are not able to join Microsoft Teams Meetings.
Our team is actively working on a resolution, and we will keep you informed with timely updates as progress is made.
Thank you for your patience.
Mar 5, 12:43 PST
Investigating - We are currently investigating an issue where Zoom Rooms are not able to join Microsoft Teams Meetings.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Mar 5, 15:09 PST
Resolved - This incident has been resolved.
Mar 5, 14:16 PST
Monitoring - The service degradation with Offline status for Zoom Rooms in the admin dashboard has been successfully resolved.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
Thank you for your patience.
Mar 5, 13:16 PST
Investigating - We are currently investigating a service degradation for a subset of users that is only impacting the offline status for Zoom Rooms in the admin dashboard.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Mar 5, 12:45 PST
Resolved - This incident has been resolved and the affected services have been restored.
Mar 5, 12:26 PST
Monitoring - The service degradation affecting Zoom Billing and Payments web pages in all regions has been successfully resolved.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
Mar 5, 12:13 PST
Update - We have identified a service degradation affecting Zoom Billing and Payments web pages in all regions.
We continue to work to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Mar 5, 09:56 PST
Update - We have identified a service degradation affecting Zoom Billing and Payments web pages in all regions.
Our team continues to work to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Mar 5, 09:56 PST
Update - We continue to investigate a service degradation affecting Zoom Billing and Payments web pages in all regions.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Mar 5, 07:03 PST
Identified - We have identified a service degradation affecting Zoom Billing and Payments web pages in all regions.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Mar 5, 06:42 PST
Investigating - We are currently investigating a service degradation affecting Zoom Billing and Payments web pages in all regions.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Mar 5, 10:19 PST
Resolved - This incident has been resolved.
Mar 5, 10:01 PST
Monitoring - The issue causing customers' messages to be delayed for delivery has been successfully resolved.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
Mar 5, 09:12 PST
Investigating - A subset of customers globally is observing delays in message delivery. Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.