Feb 3, 23:06 UTC
Resolved - Users experienced errors on Opus 4.5 between 12:08 PM and 2:38 PM PT (20:08 and 22:38 UTC).
Feb 3, 21:02 UTC
Investigating - We are currently investigating this issue.
Feb 3, 20:47 UTC
Resolved - This incident has been resolved.
Feb 3, 20:26 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 3, 18:42 UTC
Investigating - We are currently investigating this issue.
Feb 3, 20:30 UTC
Resolved - SSO and magic link sign-in were degraded on Claude Desktop. This incident has been resolved.
Feb 3, 18:03 UTC
Resolved - This is a repeat of the earlier incident today. The incident started at 9:52 PT / 17:52 UTC and ended at 9:56 PT / 17:56 UTC. The percentage of requests impacted was smaller than in the earlier incident.
Feb 3, 18:00 UTC
Investigating - We are currently investigating this issue.
Feb 3, 18:58 UTC
Resolved - This incident has been resolved. Thank you for your patience.
Feb 3, 18:19 UTC
Monitoring - The capacity constraints affecting Linux and Remote Docker job execution have been mitigated. Jobs are now starting within expected timeframes. We continue to monitor the situation to ensure stability.
- What's impacted: Linux and Remote Docker job execution - working within normal parameters
- What's happening: Service levels have returned to normal after implementing mitigation measures
We will provide an update within 15 minutes or sooner if conditions change. Thank you for your patience while our engineers worked to resolve this issue.
Feb 3, 17:46 UTC
Update - We are continuing to work on a fix for this issue.
Feb 3, 17:46 UTC
Identified - We have identified delays affecting Linux and Remote Docker job execution. Customers are currently experiencing approximately 3-6 minute delays for these jobs to start due to capacity constraints in our infrastructure provider. All other compute resource classes are operating normally.
We are actively mitigating this issue by routing a portion of traffic to an alternate region and continue working to restore normal service levels.
- What's impacted: Linux and Remote Docker job execution
- What's happening: Jobs are experiencing 3-6 minute delays starting execution due to upstream capacity constraints
We will provide an update within 30 minutes. Thank you for your patience while we work to reduce these delays.
Feb 2, 22:18 UTC
Resolved - The issue affecting email notifications has been resolved. Build completion emails and plan-related notifications are now being delivered normally. We apologize for any inconvenience this may have caused.
Feb 2, 22:05 UTC
Update - We are continuing to monitor for any further issues.
Feb 2, 22:05 UTC
Monitoring - Our upstream provider has resolved the issue affecting their system. We are currently monitoring email notification delivery to confirm full restoration. Build completion emails and plan-related notifications should begin flowing normally. All other notification types and build results through the CircleCI web interface and GitHub checks continue to function normally.
We will provide a final update within 15 minutes.
Feb 2, 21:48 UTC
Update - We are continuing to work with our upstream provider to restore email notification delivery. Build completion emails and plan-related notifications remain impacted. All other notification types and build results through the CircleCI web interface and GitHub checks continue to function normally.
We will provide an update within 30 minutes.
Feb 2, 21:12 UTC
Update - We continue to work with our vendor on restoring email notification delivery. Build completion emails and plan-related notifications remain impacted. All other notification types and build results through the CircleCI web interface and GitHub checks continue to function normally.
We will provide an update within 30 minutes.
Feb 2, 20:34 UTC
Identified - We have identified the issue affecting email notifications. Our notification delivery system is experiencing disruptions that are preventing build completion emails and plan-related notifications from being sent. We are actively working with our vendor to restore service. All other notification types, including Slack and webhook notifications, continue to function normally, and build results remain accessible through the CircleCI web interface and GitHub checks.
We will provide an update within 30 minutes.
Feb 2, 20:07 UTC
Investigating - We are currently experiencing issues with email notifications across CircleCI. Build completion emails and plan-related notifications are not being delivered as expected. All other notification types, including Slack and webhook notifications, continue to function normally. Build results remain accessible through the CircleCI web interface and GitHub checks.
We are actively investigating the issue with our notification delivery system and will provide an update within 30 minutes.
Jan 29, 19:37 UTC
Resolved - This incident has been resolved.
Jan 29, 19:35 UTC
Update - We are continuing to monitor for any further issues.
Jan 29, 19:35 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 29, 19:29 UTC
Identified - We are experiencing capacity constraints affecting the arm-medium, large, xlarge, and 2xl resource classes, resulting in job start delays of up to 5 minutes.
Current Status:
- arm-medium, large, xlarge, and 2xl: Experiencing delays of up to 5 minutes due to capacity constraints
- Docker, Mac, Windows, and Android jobs: Operating normally without delays
Our engineering team is actively working to address these constraints and expand available capacity. All jobs will continue to run normally after the initial delay.
We appreciate your patience as we work to resolve this issue. Next Update: Within 30 minutes or as the situation changes.
Jan 12, 18:08 UTC
Resolved - We identified and resolved an issue that caused jobs to be delayed. During this period, some customers experienced longer than normal job start times while we performed database optimization work.
Our team has completed the necessary tuning and service performance has returned to normal levels.
We apologize for any inconvenience this may have caused.
Jan 9, 23:25 UTC
Resolved - This issue has been resolved. Organizations using Bitbucket are now able to build successfully following mitigation actions we implemented earlier today.
What happened: We identified an issue where retrieving user identities from Bitbucket was encountering rate limiting for accounts with extensive project configurations. This was caused by recent changes to Bitbucket's rate limiting behavior that were enforced in late December 2025. Our mitigating actions helped resolve the issue for most of the affected organizations.
Current status: Organizations that were previously unable to build are now building successfully. A small number of individual users with access to very large numbers of repositories may still occasionally encounter rate limiting by Bitbucket during the identity retrieval process. These users can have another team member trigger builds as a workaround.
If you continue to experience issues: Please reach out to us via your regular support channels.
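For context on the rate limiting described above: Bitbucket signals throttling with HTTP 429 responses, which clients typically handle by backing off and retrying. Below is a minimal sketch of that general pattern; the retry budget and the use of the /2.0/user endpoint are illustrative assumptions, not CircleCI's actual implementation.

```typescript
// Minimal sketch of retrying a rate-limited Bitbucket API call.
// The endpoint, retry budget, and backoff schedule are assumptions
// for illustration only.
async function fetchWithBackoff(url: string, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429) return res; // success, or a non-rate-limit error

    // Honor Retry-After when Bitbucket provides it; otherwise back off
    // exponentially (1s, 2s, 4s, ...).
    const retryAfter = Number(res.headers.get("Retry-After"));
    const delayMs =
      Number.isFinite(retryAfter) && retryAfter > 0
        ? retryAfter * 1000
        : 2 ** attempt * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Still rate limited after ${maxRetries} retries: ${url}`);
}

// Hypothetical usage: a user-identity lookup of the kind that was being
// throttled (authentication omitted for brevity).
const res = await fetchWithBackoff("https://api.bitbucket.org/2.0/user");
console.log(res.status);
```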
Jan 9, 19:05 UTC
Monitoring - We have implemented mitigation measures to address the issue affecting Bitbucket users and are seeing significant improvement. Organizations that were previously unable to build are now building successfully.
What's impacted: Bitbucket users with a large number of projects may experience job failures during the build start process. Failed jobs will present with a 404 error page.
What's happening: We identified an issue where retrieving user identities from Bitbucket was encountering rate limiting for accounts with extensive project configurations. We have taken mitigating actions, which have resolved the blocking issue for most customers.
What to expect: Organizations that were previously unable to build should now be able to build successfully. Individual users with access to very large numbers of repositories may still occasionally encounter rate limiting by Bitbucket during the identity retrieval process.
Next update: Our engineering team is continuing to investigate the recent changes in rate limiting behavior by Bitbucket. We will update our users as we have new information to share. If you are impacted by this issue and have any questions, please reach out to us via your regular support channels.
Jan 8, 23:29 UTC
Update - We are continuing to investigate an issue affecting Bitbucket users with a large number of projects. Jobs are failing to start for these users, and we are working to understand and mitigate the underlying rate limiting behavior.
What's impacted: Bitbucket users with a large number of projects may experience job failures during the build start process. Failed jobs will present with a 404 error page.
What's happening: We're continuing to investigate an issue where retrieving user identities from Bitbucket is encountering rate limiting for accounts with extensive project configurations. This prevents the user profile build process from completing, which blocks jobs from starting.
What to expect: Bitbucket users with large numbers of projects may continue to experience intermittent job start failures until we can resolve the rate limiting issue.
Next update: Our engineering team is continuing to investigate and work toward mitigation. We will update our users as we have new information to share or by Jan 09, 2026 - 22:00 UTC (within 24 hours). If you are impacted by this issue and have any other questions, please reach out to us via your regular support channels.
Jan 8, 15:48 UTC
Update - We are continuing to investigate and are working with upstream providers to determine a resolution.
Jan 8, 13:21 UTC
Update - We have identified an issue with retrieving user identities from Bitbucket and are continuing to investigate.
Jan 8, 11:23 UTC
Investigating - A small subset of Bitbucket users are currently affected by an issue causing jobs to fail to start.
This presents as a failed job that leads to a 404 page.
We are currently investigating this and the affected user accounts.
Jan 28, 15:34 UTC
Resolved - This incident has been resolved.
Jan 28, 13:04 UTC
Investigating - Our upstream provider Vultr is currently experiencing a partial outage in their Singapore region. Their engineering team is actively working to resolve the issue as soon as possible. We regret any inconvenience this may cause.
Jan 21, 08:20 UTC
Resolved - This incident has been resolved.
Jan 20, 10:51 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 19, 10:07 UTC
Investigating - Our SMTP Add-on Service (Elastic Email) is currently experiencing performance issues, leading to delayed outbound emails for some accounts. We are in active communication with the Elastic Email team to investigate and resolve the root cause. We appreciate your patience and will keep this page updated with our progress.
Jan 19, 10:01 UTC
Resolved - This incident has been resolved.
Jan 19, 09:12 UTC
Investigating - We are currently investigating an issue where users are experiencing significant delays in receiving two-factor authentication (2FA) codes via email.
This issue has been traced to a service degradation with one of our upstream email delivery providers. We are in contact with their support team to monitor their progress on a fix. We apologize for the delay and will provide an update as soon as the provider shares more information.
Jan 9, 20:54 UTC
Resolved - This incident has been resolved.
Jan 9, 19:48 UTC
Identified - We have identified an issue with our support chat tool that is affecting communication with our customers. As a result, there may be delays when reporting support issues via chat during this period.
For immediate assistance, please reach out to us by submitting a support ticket.
We greatly appreciate your patience and understanding as we work to resolve this issue as quickly as possible. Thank you for your continued support.
Jan 5, 15:50 UTC
Resolved - This incident has been resolved.
Jan 5, 14:24 UTC
Investigating - We have identified an issue with our support chat tool that is impacting communication with our customers. As a result, customers may encounter delays when reporting support issues via chat during this period. For immediate assistance, please reach out to us by submitting a support ticket.
We greatly appreciate your patience and understanding as we work to resolve this issue promptly. Thank you for your continued support.
Jan 27, 16:28 UTC
Resolved - This issue has been resolved.
Jan 27, 16:06 UTC
Monitoring - We have deployed a fix which has mitigated the issue for customers. We are continuing to monitor the situation.
Jan 27, 15:41 UTC
Investigating - We are investigating an issue where some customers are experiencing errors containing “COLLIDING_TYPE_NAMES” when using Contentful GraphQL API. This is occurring in particular when customers have content types named “Pages”. We are working on a fix for this issue.
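For readers checking whether a space was affected: the error surfaces in the standard errors array of the GraphQL response. A minimal sketch under that assumption follows; the space ID, token, and the exact shape of the error entries are placeholders for illustration, not a documented payload format.

```typescript
// Minimal sketch: probe the Contentful GraphQL API and look for the
// COLLIDING_TYPE_NAMES error. SPACE_ID and TOKEN are placeholders.
const SPACE_ID = "your-space-id";
const TOKEN = "your-delivery-api-token";

const res = await fetch(
  `https://graphql.contentful.com/content/v1/spaces/${SPACE_ID}`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${TOKEN}`,
    },
    // Any trivial query works: the collision is a schema-generation
    // error, so it appears before any fields are resolved.
    body: JSON.stringify({ query: "{ __typename }" }),
  },
);

const payload = await res.json();
// The exact error structure is an assumption; matching on the error
// code string keeps the check robust to shape differences.
const colliding = (payload.errors ?? []).some((e: unknown) =>
  JSON.stringify(e).includes("COLLIDING_TYPE_NAMES"),
);
console.log(colliding ? "space affected by this incident" : "schema builds fine");
```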
Jan 22, 21:00 UTC
Resolved - The issue was resolved.
Jan 22, 20:38 UTC
Monitoring - We have resolved the issue and are monitoring the situation.
Jan 22, 20:33 UTC
Update - Scheduled publishing is affected when publishing entries in certain time zones. We are continuing the investigation.
Jan 22, 19:59 UTC
Investigating - Some customers are experiencing issues with scheduling actions. We are investigating the situation.
Jan 22, 13:21 UTC
Resolved - This issue is now resolved.
Jan 22, 12:57 UTC
Monitoring - We have implemented a fix and the Contentful Management API is now operating normally again in the EU region. We are continuing to monitor the situation.
Jan 22, 12:44 UTC
Update - We are continuing to investigate this issue.
Jan 22, 12:41 UTC
Investigating - We are seeing elevated errors for Management API traffic in our EU region and are investigating the issue.
Jan 21, 10:59 UTC
Resolved - This issue has been resolved.
Jan 21, 10:08 UTC
Monitoring - We have rolled out a fix and are monitoring the results.
Jan 21, 10:07 UTC
Update - We are still working on a fix for this issue: for some customers, the licences page in Org Settings is failing to load.
Jan 21, 10:05 UTC
Investigating - We are investigating an issue where errors/warnings are inadvertently showing up in the Contentful WebApp.
Jan 16, 14:34 UTC
Resolved - The problem with assets being stuck in processing has been addressed, and everything is back to operational.
Jan 16, 14:03 UTC
Monitoring - The issue causing assets to be stuck in a processing state has been addressed, and we are monitoring the fix.
Jan 16, 13:14 UTC
Investigating - We're investigating an issue reported by customers where an uploaded asset stays in processing until the page is refreshed.
Feb 2, 15:58 EST
Identified - We have identified the cause of the issue impacting form submission redirects.
Our engineering team is actively working on a fix, which will be implemented as soon as possible. We will continue to provide updates here as progress is made.
Feb 2, 15:27 EST
Investigating - We are currently investigating an issue where some form submissions are not redirecting respondents as expected after submission.
Under normal circumstances, respondents are redirected back to the form if there is a validation or connector error, allowing them to correct their information and resubmit. If no errors occur, respondents are redirected to the configured thank-you page or redirect URL.
Some submissions may remain on the response processing page and display a default confirmation message instead of displaying potential errors.
Our team is actively investigating the root cause of this issue. We will continue to post updates here as more information becomes available.
Jan 29, 09:51 EST
Resolved - We experienced processing delays with Salesforce connectors in the London region due to increased system load. Our team has upgraded system capacity and the issue is now resolved. All connectors are operating normally.
Jan 23, 14:11 EST
Resolved - This incident has been resolved. Please reach out to [email protected] if you run into any further issues.
Jan 22, 10:00 EST
Identified - We confirmed a secondary issue related to yesterday's SAML concern, where SAML-authenticated forms are not preserving URL query parameters after authentication. The team has identified the root cause and is working to resolve the issue. Additional details will be provided here as they become available.
Jan 21, 18:02 EST
Resolved - The issue impacting SAML authentication has been fully resolved. Respondents can now access SAML-authenticated forms as expected, and SAML-based login is functioning normally.
Our team has confirmed service restoration and is continuing to monitor the system to ensure stability. Thank you for your patience while we worked to resolve this issue.
Jan 21, 16:52 EST
Identified - The issue impacting SAML authentication has been identified, and a fix is currently being implemented. Some respondents may still be unable to access SAML-authenticated forms, and SAML-based login may continue to be affected during this time.
Our engineering team is actively deploying the fix and monitoring the situation closely. We will provide another update once the fix has been fully applied and we’ve confirmed service restoration.
Jan 21, 15:40 EST
Investigating - We are currently investigating an issue that is impacting SAML authentication. At this time, some respondents are unable to access SAML-authenticated forms. This issue may also affect SAML-based login.
Our engineering team is actively working to identify the root cause and implement a fix. We will provide updates as more information becomes available.
Jan 21, 13:30 EST
Resolved - What's Happening: We've deployed an update to improve Fai, our AI agent. During this enhancement, some users may experience a brief disconnect message.
Quick Reconnection Steps: To reconnect to Fai, please try one of the following:
- Refresh your browser page
- Clear your browser cookies
- Log out and log back in
Still Having Trouble? If you continue to experience issues after trying these steps, please contact our support team and let us know which troubleshooting options you've already attempted so we can help you get back up and running.
Feb 3, 19:28 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 3, 18:06 UTC
Update - Our telemetry shows improvement in latency for job status updates. We will continue monitoring until full recovery.
Feb 3, 16:51 UTC
Update - We've applied a mitigation to improve system throughput and are monitoring for reduced latency for job status updates.
Feb 3, 16:10 UTC
Investigating - We are investigating reports of degraded performance for Actions
Feb 3, 10:56 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 3, 10:55 UTC
Update - We are now seeing recovery.
Feb 3, 10:21 UTC
Update - We are investigating elevated 500s across Copilot services.
Feb 3, 10:16 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Feb 3, 00:56 UTC
Resolved - On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted. Self-hosted runners on other providers were not impacted.
This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn’t timed out.
We are working with our compute provider to improve our incident response and engagement time, improve early detection before such issues impact our customers, and ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub's workloads, and we apologize for the impact this had.
Feb 3, 00:56 UTC
Update - Actions is operating normally.
Feb 2, 23:50 UTC
Update - Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners.
We are monitoring closely to confirm complete recovery.
Other GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot) should also see recovery.
Feb 2, 23:43 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Feb 2, 23:42 UTC
Update - Copilot is operating normally.
Feb 2, 23:31 UTC
Update - Pages is operating normally.
Feb 2, 22:53 UTC
Update - Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners.
Telemetry shows improvement, and we are monitoring closely for full recovery.
Feb 2, 22:10 UTC
Update - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.
We're waiting on our upstream provider to apply the identified mitigations, and we're preparing to resume job processing as safely as possible.
Feb 2, 21:27 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Feb 2, 21:13 UTC
Update - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.
We have identified the root cause and are working with our upstream provider to mitigate.
This is also impacting GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot).
Feb 2, 20:27 UTC
Update - The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.
Feb 2, 19:48 UTC
Update - Pages is experiencing degraded performance. We are continuing to investigate.
Feb 2, 19:44 UTC
Update - The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.
Feb 2, 19:43 UTC
Update - Actions is experiencing degraded availability. We are continuing to investigate.
Feb 2, 19:07 UTC
Update - GitHub Actions hosted runners are experiencing high wait times across all labels. Self-hosted runners are not impacted.
Feb 2, 19:03 UTC
Investigating - We are investigating reports of degraded performance for Actions
Feb 3, 00:54 UTC
Resolved - On February 2, 2026, GitHub Codespaces were unavailable between 18:55 and 22:20 UTC and degraded until the service fully recovered at February 3, 2026 00:15 UTC. During this time, Codespaces creation and resume operations failed in all regions.
This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn’t timed out.
We are working with our compute provider to improve our incident response and engagement time, improve early detection before such issues impact our customers, and ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub's workloads, and we apologize for the impact this had.
Feb 3, 00:54 UTC
Update - Codespaces is operating normally.
Feb 3, 00:25 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Feb 2, 23:52 UTC
Update - Codespaces is seeing steady recovery
Feb 2, 20:19 UTC
Update - Users may see errors creating or resuming codespaces. We are investigating and will provide further updates as we have them.
Feb 2, 20:17 UTC
Investigating - We are investigating reports of degraded availability for Codespaces
Feb 2, 18:46 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Feb 2, 17:58 UTC
Update - Dependabot is currently experiencing an issue that may cause scheduled update jobs to fail when creating pull requests.
Our team has identified the problem and deployed a fix. We’re seeing signs of recovery and expect full resolution within the next few hours.
Feb 2, 17:41 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 29, 09:53 EST
Resolved - We've resolved the issue causing event processing delays since 1:30 AM EST (UTC -05:00) on Jan 29, 2026. These delays affected tools such as email events, workflow enrollment and CRM updates in HubSpot.
Jan 29, 09:41 EST
Update - We're continuing to address the cause of the event processing delays since 1:30 AM EST (UTC -05:00) on Jan 29, 2026. These delays may impact tools such as email events, workflow enrollment and CRM updates in HubSpot. Our team is actively working to mitigate this issue and will update this page when more information is available. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Jan 29, 09:10 EST
Update - We're continuing to address the cause of the event processing delays of up to 3 hours since 1:30 AM EST (UTC -05:00) on Jan 29, 2026. These delays may impact tools such as email events, workflow enrollment and CRM updates in HubSpot. Our team is actively working to mitigate this issue and will update this page when more information is available. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Jan 29, 08:43 EST
Update - We're continuing to address the cause of the event processing delays of up to 3 hours since 1:30 AM EST (UTC -05:00) on Jan 29, 2026. These delays may impact tools such as email events, workflow enrollment and CRM updates in HubSpot. Our team is actively working to mitigate this issue and will update this page when more information is available. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Jan 29, 08:16 EST
Update - We're addressing the cause of the event processing delays of up to 3 hours since 1:30 AM EST (UTC -05:00) on Jan 29, 2026. These delays may impact tools such as email events, workflow enrollment and CRM updates in HubSpot. Our team is actively working to mitigate this issue and will update this page when more information is available. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Jan 29, 07:47 EST
Update - We've identified the cause of the event processing delays of up to 3 hours since 1:30 AM EST (UTC -05:00) on Jan 29, 2026. These delays may impact tools such as email events, workflow enrollment and CRM updates in HubSpot. Our team is actively investigating the cause of this issue and will update this page when more information is available. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Jan 29, 06:58 EST
Update - We've identified the cause of the event processing delays of up to 3 hours since 1:30 AM EST (UTC -05:00) on Jan 29, 2026. These delays may impact tools such as email events, workflow enrollment and CRM updates in HubSpot. Our team is investigating the cause of this issue and will update this page when more information is available. Only customers hosted in North America are currently impacted. We will be back with an update within 30 minutes.
Jan 29, 06:28 EST
Identified - We estimate that we will restore service in the next hour.
We are mitigating impact from a database impairment which we believe is caused by resource exhaustion. We are scaling the system.
We will be back with an update within 30 minutes.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 20, 13:17 EST
Resolved - Between 11:47 AM (UTC -05:00) and 01:05 PM (UTC -05:00), customers in all regions experienced issues loading app cards and UI extensions in HubSpot. This was caused by a load balancer misconfiguration. As of 01:05 PM (UTC -05:00), our app cards and UI extensions are working properly and the incident has been fully resolved. No data was lost.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 20, 12:43 EST
Investigating - We are investigating an issue impacting integration UI extensions. We will provide an update when we have more information.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 15, 02:06 EST
Resolved - Between 05:25 AM (UTC +00:00) and 07:00 AM (UTC +00:00), some customers in North America and Europe experienced issues with content and marketing tools. This was caused by a load balancer impairment. As of 07:00 AM (UTC +00:00), our content and marketing tools are working properly and the incident has been fully resolved.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 15, 01:19 EST
Investigating - We are investigating an issue impacting content and marketing tools. We will provide an update when we have more information.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 15, 00:47 EST
Identified - We've identified the issue that caused marketing tools to be partially unavailable in North America and Europe since 05:25 AM (UTC +00:00). We're addressing the cause of this issue and will update this page when we have more information. We will be back with an update within 30 minutes.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 12, 18:47 EST
Resolved - Between 03:39 PM (UTC -05:00) and 06:40 PM (UTC -05:00), a small number of customers in all affected regions experienced issues with marketing emails. This was caused by a database impairment. As of 06:40 PM (UTC -05:00), our marketing emails are working properly and the incident has been fully resolved.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 12, 17:46 EST
Monitoring - We've addressed the issue that caused marketing emails to be partially unavailable in North America, Europe, and the Asia Pacific region since 03:39 PM (UTC -05:00). We're monitoring performance closely to ensure the tools recover properly.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 12, 17:35 EST
Identified - We estimate that we will restore service in the next hour.
We are mitigating impact from a database impairment which we believe is caused by resource exhaustion. We are scaling the system.
We will be back with an update within 1 hour.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 12, 17:18 EST
Investigating - We are investigating an issue impacting marketing emails. We will provide an update when we have more information.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 9, 13:19 EST
Resolved - Between 12:44 PM (UTC -05:00) and 12:56 PM (UTC -05:00), some customers in North America, Europe, and the Asia Pacific region experienced issues with HubSpot APIs. This was caused by a server impairment. As of 12:54 PM (UTC -05:00), our HubSpot APIs are working properly and the incident has been fully resolved.
HubSpot conducts a thorough review after each incident to understand the cause and prevent it from happening again. Learn more about HubSpot's commitment to reliability at www.HubSpot.com/reliability.
The information on this page reflects our understanding of the incident and impact at the time of the update.
Jan 9, 13:01 EST
Investigating - We're investigating reports that HubSpot may be unavailable for some users. We'll update this page when we have more information.
Jan 28, 19:42 UTC
Resolved - On Jan 28, 2026, Jira, Jira Product Discovery, and Jira Service Management users in the eu-west-1 region may have experienced delays in viewing recently submitted updates on the web page and/or mobile apps. Updates continued to process successfully during the incident. No action is needed from customers. The issue has now been resolved, and the service is operating normally for all affected customers.
Jan 28, 18:50 UTC
Update - We are continuing to investigate an issue affecting Jira, Jira Product Discovery, and Jira Service Management. Affected users may experience delayed issue data in some experiences, such as issue view. We will share updates here in one hour or as more information becomes available.
Jan 28, 18:12 UTC
Identified - We are actively investigating an issue affecting Jira, Jira Product Discovery, and Jira Service Management. Affected users may experience delayed issue data in some experiences, such as issue view. We will share updates here in one hour or as more information becomes available.
Jan 27, 11:49 UTC
Resolved - Deployment of the fix is complete and services are back to normal.
This issue is resolved.
Jan 27, 11:33 UTC
Identified - The incident has affected the Assets product, with users encountering intermittent unavailability.
Our team has identified the cause of the issue and deployed a fix. We see that the error rate has decreased, and the team is actively monitoring to ensure further stability.
Further updates will follow in 60 minutes or upon significant progress.
Jan 27, 11:07 UTC
Investigating - We understand that some of our customers are facing intermittent issues with Assets. Our team is investigating the issue.
We will update you on the progress within 60 minutes or sooner.
Jan 22, 16:47 UTC
Resolved - On January 22, 2026, affected Jira Service Management users may have experienced some service disruption where assets were not loading, affecting their interactions with service management functionality.
The issue has now been resolved, and the service is operating normally for all affected customers.
Jan 22, 16:32 UTC
Monitoring - The issue has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor closely to confirm stability.
Jan 22, 15:41 UTC
Identified - We have identified the cause of the issue, and our teams are diligently working on a mitigation. Affected users may experience an issue where assets are not loading, affecting their interactions with service management functionality. We will continue to share additional updates here as more information becomes available.
Jan 22, 14:44 UTC
Investigating - Jira Service Management users are currently experiencing an issue where assets are not loading, affecting their interactions with service management functionality.
We are currently investigating the issue, and the next update will be shared in 60 minutes or sooner.
Jan 20, 23:09 UTC
Resolved - On January 20, 2026, some customers may have experienced performance degradation of Assets. The issue has now been resolved, and the service is operating normally for all affected customers.
Jan 20, 21:14 UTC
Monitoring - The performance degradation of Assets has been resolved, and services are now operating normally for all affected customers. We'll continue to monitor performance closely to confirm stability.
Jan 20, 20:08 UTC
Update - We continue to investigate the issue, and the next communication will be issued in 60 minutes, or sooner if a significant milestone is achieved.
Jan 20, 19:10 UTC
Investigating - Impact
Users of the Assets product are experiencing intermittent accessibility issues, resulting in gateway timeouts and internal server errors. These technical disruptions are causing difficulties in performing queries, which may manifest as slow performance or failure to load certain pages, particularly those involving schemas. This issue affects related features that depend on the Assets product, such as specific custom fields and automation processes.
Current Status
The support teams are actively working to diagnose the issue and are focused on identifying the underlying problems causing the disruption. Efforts are ongoing to restore normal service levels and minimize any further inconvenience to users.
Next Steps
The incident team is continuing its investigation to determine the full extent of the impact and to identify effective resolutions. Further communications will be issued in 60 minutes or when significant progress is made.
Jan 7, 11:32 UTC
Resolved - Rerunning the remaining Automation for Jira rules that failed for affected customers is complete.
This update resolves the issue.
Jan 7, 08:37 UTC
Update - We have replayed about 7 hours of historical events for affected customers to rerun Automation for Jira rules that previously failed. This covers most of the APAC business hours.
Based on customer feedback, our team continues monitoring the situation and plans to complete rerunning the remaining Automation for Jira rules that failed earlier.
We will provide the next update once replaying the historical events is complete.
Jan 7, 04:43 UTC
Update - We are currently replaying historical events for affected customers to rerun Automation for Jira rules that failed to execute earlier. Our teams are closely monitoring the results of this replay to confirm that all affected automations are running as expected.
We will provide the next update in about two hours or sooner.
Jan 6, 14:04 UTC
Monitoring - The issue has been fixed for all new events; we are waiting for confirmation from some customers when they come online.
Customers using Jira and Jira Service Management in the Asia Pacific and South East regions had faced issues with Automation for Jira. Other regions were not impacted.
During Asia Pacific hours tomorrow, the team plans to replay historical events.
The team will continue monitoring. The next update will be tomorrow.
Jan 6, 12:19 UTC
Update - Some automation rules using specific triggers failed to execute, causing rules to appear unresponsive. Customers using Jira and Jira Service Management in Asia Pacific and South East regions have faced issues with Automation for Jira. Other regions are not impacted.
The cause has been identified, and the team is actively working on restoring the operations back to normal as soon as possible.
Workaround: to manage critical rules, disable and re-enable each automation rule. This triggers the rule and must be done for every critical rule.
We will provide the next update in about two hours or sooner.
Jan 6, 10:11 UTC
Update - Customers using Jira and Jira Service Management in Asia Pacific and South East regions have faced issues with Automation for Jira. Other regions are not impacted.
Some automation rules using specific triggers failed to execute, causing rules to appear unresponsive.
We have identified the cause and we continue working on restoring the operations back to normal as soon as possible.
In the meantime, the workaround to manage business-critical rules is to disable and then re-enable each automation rule. This action triggers the rule and must be done for every critical rule.
We will provide the next update in about two hours or sooner.
Jan 6, 07:58 UTC
Update - Our engineers are continuing work toward the resolution of the incident impacting Automation for Jira. There are no new updates to share at this time. We will provide the next update within 2 hours.
Jan 6, 06:17 UTC
Identified - We have identified the cause of the issue, and our teams are diligently working on a mitigation. We will continue to share additional updates here as more information becomes available.
Jan 6, 05:18 UTC
Investigating - We are investigating an incident impacting Automation for Jira across Jira and Jira Service Management. Some automation rules that use specific triggers may fail to execute. Our team is working to identify the cause and restore normal service. We will provide the next update within 60 minutes.
Feb 3, 17:49 UTC
Resolved - We have confirmed that the issue has been resolved completely and all systems are 100% operational at this time. Affected users may need to log out and log back in.
We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.
Feb 3, 16:24 UTC
Monitoring - Our engineering team has identified the underlying issue and completed a rollback, which has mitigated the issue. Affected users may need to log out and log back in to resolve the issue.
We will provide an additional update shortly.
Feb 3, 16:01 UTC
Investigating - We are actively investigating reports that some LastPass users may be experiencing issues launching saved sites from their vault. Access to Vaults is not affected. Engineers continue to troubleshoot the situation and we will update once resolved.
Oct 30, 00:04 UTC
Resolved - Systems are operational and the incident is resolved
Oct 29, 21:34 UTC
Monitoring - Our third-party provider has applied fixes that are gradually resolving the issue. While many users are seeing improvements, some connectivity issues remain. We are actively monitoring the situation.
Oct 29, 20:01 UTC
Update - LastPass engineers continue to work with the cloud provider to resolve the issue.
Oct 29, 17:44 UTC
Identified - Our third-party cloud provider has identified the issue and is now actively working toward a resolution. We will provide another update shortly.
Oct 29, 16:23 UTC
Investigating - We’re currently experiencing service degradation on LastPass's marketing site due to an issue with our external cloud provider. Our team is actively working to resolve the issue and minimize the impact.
Oct 21, 09:42 UTC
Resolved - The service degradation issue has been resolved. Vault access and login functionality, including for federated users, are now fully operational.
Oct 21, 09:19 UTC
Update - We’re currently investigating an issue that may prevent some users from logging in. Our team is working to restore full access as quickly as possible.
Oct 21, 09:12 UTC
Investigating - We’re currently experiencing system degradation. LastPass Vaults may take longer to load. Our team is actively working to resolve the problem and minimize impact.
Oct 20, 13:25 UTC
Resolved - This incident has been resolved.
Oct 20, 11:12 UTC
Update - All of our services are now running and operational. However, we are closely monitoring the situation, as some of our external providers who were also affected by this incident are still in the process of recovery.
We’re keeping a close eye on our integrations to ensure continued stability and prevent further disruptions.
Oct 20, 10:41 UTC
Monitoring - Most services have now been restored and are operational, including our phone lines. We continue to monitor the situation closely to ensure full stability.
Oct 20, 09:57 UTC
Investigating - We’re currently experiencing service degradation due to an issue with our external cloud provider, this has also impacted our phone lines. For support, please reach out through our Support Center. Our team is actively working to resolve the issue and minimize the impact.
Oct 15, 13:52 UTC
Resolved - We have confirmed that the issue has been resolved. We will conduct an internal review of this issue to help prevent or minimize future recurrence.
Oct 15, 13:21 UTC
Update - We are continuing to work on a fix for this issue.
Oct 15, 13:20 UTC
Identified - We are currently experiencing an issue that is preventing users from downloading invoices. Our engineering team is actively investigating the root cause and working to restore full functionality as quickly as possible.
Jan 28, 19:39 UTC
Resolved - The issue was caused by errors related to a service worker. The solution for the blank page issue is to clear the browser cache on the system where the issue is occurring. This incident is now resolved.
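As a supplement to the cache-clearing guidance: the stale service worker can usually also be removed directly from the browser DevTools console using the standard Service Worker and Cache Storage APIs. This is generic browser tooling, not an official Netlify remediation; a sketch:

```typescript
// Run in the DevTools console on app.netlify.com to drop the stale
// service worker, then reload the page. Standard browser APIs only.
const registrations = await navigator.serviceWorker.getRegistrations();
for (const registration of registrations) {
  await registration.unregister();
}
// Optionally clear any Cache Storage entries the worker may have populated.
for (const key of await caches.keys()) {
  await caches.delete(key);
}
location.reload();
```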
Jan 28, 18:51 UTC
Update - We continue to investigate the issue, which appears to be related to a service worker the website uses. If you are seeing a blank page for app.netlify.com, clearing the browser cache will resolve the issue.
Jan 28, 18:23 UTC
Investigating - We are investigating an issue that is causing app.netlify.com to show a blank page for some users.
Jan 26, 23:00 UTC
Resolved - Between 16:51 UTC and 17:05 UTC we saw increased Function latency on both the High Performance Edge Network and the Standard Edge Network. The issue has been resolved.
Jan 26, 16:16 UTC
Resolved - This incident has been resolved.
Jan 26, 16:03 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 26, 15:52 UTC
Investigating - We're currently experiencing increased errors and latencies in the Netlify UI. We are investigating the problem.
Jan 22, 15:58 UTC
Resolved - This incident has been resolved.
Jan 22, 14:19 UTC
Investigating - We're experiencing increased build failures due to an issue with GitHub: https://www.githubstatus.com/incidents/cqb5hcy0gx18
Jan 29, 20:49 UTC
Resolved - This incident has been resolved.
Jan 29, 20:18 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 29, 20:04 UTC
Investigating - We are currently investigating this issue.
Dec 5, 09:42 UTC
Resolved - This incident has been resolved.
Dec 5, 09:17 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Dec 5, 09:02 UTC
Investigating - We are currently investigating this issue.
Nov 18, 19:11 UTC
Resolved - We have confirmed that all systems are now fully operational and the connectivity issues have been completely resolved.
Nov 18, 19:09 UTC
Update - This incident has been resolved. All services are now operating normally. We apologize for the disruption and appreciate your patience.
Nov 18, 15:23 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 18, 13:34 UTC
Investigating - We are currently investigating intermittent connectivity issues affecting npmjs.com. Our team is actively working to restore full service. We apologize for any inconvenience and will provide updates as we learn more.
Nov 6, 14:28 UTC
Resolved - This incident has been resolved.
Nov 6, 14:08 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 6, 13:58 UTC
Update - We are continuing to investigate this issue.
Nov 6, 13:56 UTC
Investigating - We are currently investigating this issue.
Oct 20, 20:15 UTC
Resolved - This incident has been resolved.
Oct 20, 19:20 UTC
Monitoring - We are seeing recovery in package publishing. We will continue to monitor the results.
Oct 20, 17:20 UTC
Investigating - We are currently investigating this issue.
All impacted services have now fully recovered.
Affected components
- Fine-tuning (Operational)
All impacted services have now fully recovered.
Affected components
- GPTs (Operational)
- Login (Operational)
- Codex (Operational)
- Connectors/Apps (Operational)
- Search (Operational)
- Conversations (Operational)
- Image Generation (Operational)
- Agent (Operational)
- Deep Research (Operational)
- ChatGPT Atlas (Operational)
- File uploads (Operational)
- Voice mode (Operational)
- Compliance API (Operational)
All impacted services have now fully recovered.
Affected components
- Login (Operational)
All impacted services have now fully recovered.
Affected components
- Login (Operational)
Feb 3, 11:38 PST
Resolved - Our engineering team identified a processing gap which occurred within the Advanced Stats dataset on January 27, 2026, between 07:00 AM PT and 09:00 AM PT. During this two-hour window, data may be incomplete. Impacted reports may show inaccuracies of approximately 8.3% for daily, 1.2% for weekly, and 0.3% for monthly stats.
Our engineering team has confirmed that the Advanced Stats API is fully operational.
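The percentages above are consistent with the two-hour gap taken as a share of each reporting window (assuming 24-hour, 7-day, and 30-day windows); a quick sanity check:

```typescript
// Sanity check: a 2-hour data gap as a fraction of each reporting window.
// The window lengths are assumptions about how the stats are bucketed.
const gapHours = 2;
const windows = { daily: 24, weekly: 24 * 7, monthly: 24 * 30 };

for (const [name, hours] of Object.entries(windows)) {
  const pct = (gapHours / hours) * 100;
  console.log(`${name}: ${pct.toFixed(1)}%`);
}
// Prints: daily: 8.3%, weekly: 1.2%, monthly: 0.3% — matching the figures above.
```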
Feb 3, 07:02 PST
Update - Our engineering team is actively investigating this issue, and we will provide an update as soon as more information becomes available.
Feb 2, 23:14 PST
Update - Our engineering team is currently investigating this issue. Since the team is still working to identify the root cause, it may take some additional time to resolve. We will provide the next update in 8 hours or so.
Feb 2, 19:45 PST
Update - Our engineering team is actively investigating this issue and we will provide an update as soon as more information becomes available.
Feb 2, 15:56 PST
Update - Our engineering team is actively investigating this issue and we will provide an update as soon as more information becomes available.
Feb 2, 13:51 PST
Investigating - Our engineering team is actively investigating this issue and we will provide another update as soon as more information becomes available.
Feb 2, 13:28 PST
Identified - Our engineers have identified the issue and are working on and testing a fix. We will provide another update in an hour or as soon as more information becomes available.
Feb 2, 12:30 PST
Update - Our engineering team is actively investigating this issue and we will provide another update in one hour or as soon as more information becomes available.
Feb 2, 11:29 PST
Investigating - Starting around 11:00 AM PST, February 2nd, 2026, our engineers began investigating an issue affecting Advanced stats.
Our engineers have identified that a processing delay prevented about two hours of Advanced Stats data from January 27, 2026 from being included in the dataset. As a result, customers may see incomplete Advanced Stats results when querying reports that include that time window. The Advanced Stats API remains available, but the data returned may be missing information for that period. This is currently being investigated and our team is working on a fix.
Email sending is not affected.
We’ll share another update within the next hour, or sooner as we learn more.
Feb 3, 10:49 PST
Resolved - On February 3, 2026 between 10:08 and 10:26AM PT, our engineering team investigated a service disruption that impacted Marketing Campaigns creating Single Sends, Automations, Contact uploads, and API configuration requests. Customers would have experienced 500 errors. Mail send was not impacted and Legacy Marketing Campaigns were not impacted. Our engineering teams have identified the root cause and restored full functionality. No mail was lost. Services are now operational. We apologize for any inconvenience.
Jan 29, 12:08 PST
Resolved - Our Engineering team has completed monitoring and confirmed that no SendGrid-specific issues remain at this time. Microsoft has acknowledged that some senders may continue to experience elevated throttling, and has confirmed that this behavior is not isolated to SendGrid nor specific to our sending configuration.
Microsoft has advised that any additional updates will be communicated via their Smart Network Data Service (SNDS) page. Customers who continue to experience deferrals or blocks when sending to Microsoft Outlook Consumer domains (for example, hotmail.com or outlook.com) are encouraged to open a support request directly with Microsoft for further assistance:
https://olcsupport.office.com/
All SendGrid services are operating normally, and we will continue to monitor for any relevant upstream updates.
Jan 27, 16:05 PST
Update - We have received acknowledgment from Microsoft about the change in delivery behavior on their consumer email platform. Microsoft has acknowledged that the behavior is not isolated to SendGrid's environment and is not specific to our sending configuration.
Our engineering team is seeing a reduction in deferrals since the start of the incident on 1/14. Blocks have also reduced but are still at elevated levels; we will keep monitoring the behavior.
Customers that feel they are continuing to observe excessive deferrals and/or blocks should fill out the OLC Support request form at https://olcsupport.office.com/
Jan 27, 15:24 PST
Monitoring - Our engineers continue to work with Microsoft and will provide additional details and guidance shortly.
Jan 27, 08:00 PST
Update - Our engineering team is still actively investigating this issue, and we will provide more updates as soon as they become available.
Jan 26, 07:56 PST
Update - Our engineering team is still actively investigating this issue, and we will provide more updates as soon as they become available.
Jan 24, 10:55 PST
Update - Our engineering team is still actively investigating this issue and we will provide more updates as soon as they become available.
Jan 23, 15:35 PST
Update - Our engineering team is still actively investigating this issue and we will provide more updates as soon as they become available.
Jan 23, 08:00 PST
Update - Our engineering team is still actively investigating this issue and we will provide more updates as soon as they become available.
Jan 23, 03:22 PST
Update - Our engineering team is still actively investigating this issue and we will provide more updates as soon as they become available.
Jan 22, 16:18 PST
Update - Our engineering team is actively investigating this issue and we will provide more updates as soon as they become available.
Jan 22, 14:07 PST
Investigating - Starting around 3:00 am PST on January 14, our engineers began investigating an issue with emails sent to Microsoft Outlook Consumer domains. Some customers may experience an increase in email block rates to Microsoft Outlook Consumer domains which include Hotmail and Outlook domains. Our Engineers are working with Microsoft. We will provide another update in two hours or as soon as more information becomes available.
Jan 24, 19:00 PST
Resolved - Our engineers have monitored the fix and confirmed the issue with email delivery to Gmail recipients has been resolved. All services are now operating normally at this time.
Jan 24, 12:43 PST
Update - We are observing that email delivery to Gmail recipients is recovering. However, we will continue to monitor the situation closely and await Google’s next official update. Additional updates will be shared as more information becomes available.
Jan 24, 08:58 PST
Monitoring - Gmail has confirmed that their services are currently affected, so we are continuing to monitor the situation closely while awaiting their next official update. We will provide another update as soon as more information becomes available.
Jan 24, 08:42 PST
Investigating - Starting around 1:00 PM UTC on January 24, 2026, our engineers began investigating an issue with email delivery to Gmail recipients.
Users may experience delivery delays, with emails taking up to 5x longer than usual to reach Gmail addresses. This is due to an issue on the Gmail/Google side, as other email providers remain unaffected.
This does not impact mail send: emails are successfully delivered to Gmail's mail servers; however, Gmail is experiencing delays in delivering those messages to recipient inboxes. You can monitor the status of Google services via the official Google Status Page: https://www.google.com/appsstatus/dashboard/
We will provide another update as soon as more information becomes available.
Jan 23, 10:13 PST
Resolved - Our engineers have monitored the fix and confirmed the issue impacting Event Webhook deliveries and engagement tracking events has been resolved. All services are now operating normally at this time.
Jan 23, 09:10 PST
Monitoring - Our engineers have implemented a fix and are monitoring system performance. We will provide another update in 1 hour or as soon as more information becomes available.
Jan 23, 08:58 PST
Investigating - Starting at approximately 8:34 AM PST on January 23, our engineers began investigating an issue causing delays in engagement tracking events and Event Webhook deliveries.
Some customers may experience increased latency in receiving these events. Email sending is not impacted at this time. We will provide another update within two hours, or sooner as additional information becomes available.
Jan 29, 23:43 UTC
Resolved - We have resolved the issue and all systems are working as expected.
Jan 29, 23:24 UTC
Monitoring - We have identified an issue related to database contention and have issued a fix. We are continuing to monitor the system as it returns to health.
Jan 29, 23:05 UTC
Update - We are investigating an issue where the Sentry Dashboard may be slow to load.
Jan 29, 23:02 UTC
Investigating - We are currently investigating this issue.
Jan 27, 18:56 UTC
Resolved - Ingestion has recovered.
Jan 27, 18:03 UTC
Identified - The issue has been identified and a fix is being implemented.
Jan 27, 14:51 UTC
Investigating - We are investigating a delay in ingesting spans in the US region.
Jan 24, 00:27 UTC
Resolved - The backlog has been processed and crons should be running as normal.
Jan 23, 21:46 UTC
Identified - The issue has been identified and we are working through the backlogs.
Jan 23, 18:38 UTC
Investigating - We are currently investigating this issue.
Jan 12, 17:56 UTC
Resolved - This incident has been resolved.
Jan 12, 16:53 UTC
Monitoring - Our UI has recovered from intermittent availability issues caused by an increase in load on our DB. The load has returned to normal, and the UI shouldn't experience any further issues. We're continuing to monitor.
Jan 12, 16:10 UTC
Investigating - Our UI is unavailable again; we're investigating the issue.
Jan 12, 15:44 UTC
Monitoring - The UI availability issue has been resolved; we're continuing to monitor the situation.
Jan 12, 15:30 UTC
Investigating - We are currently investigating this issue.
Jan 12, 15:16 UTC
Resolved - All backlogs have been processed and no data was lost.
Jan 12, 14:21 UTC
Update - We are now ingesting recent transaction data again and are processing old data in the background.
Jan 12, 14:16 UTC
Update - We have implemented a fix and are monitoring the situation closely. No data has been lost.
Jan 12, 13:33 UTC
Investigating - We are investigating an issue affecting transaction ingestion in the EU region.
Jan 14, 12:04 EST
Resolved - This incident has been resolved.
Jan 14, 11:34 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Jan 14, 10:54 EST
Investigating - We are currently investigating reports of delays in job authorization and processing.
Nov 21, 08:25 EST
Resolved - This incident has been resolved.
Nov 21, 06:00 EST
Update - All affected systems are up and running. We are monitoring associated systems.
Nov 21, 05:59 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 21, 05:15 EST
Identified - Our data processing is running behind due to an issue on GitHub’s side. No data has been lost, and the system should catch up shortly.
Nov 2, 09:31 EST
Resolved - This incident has been resolved.
Nov 2, 09:19 EST
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 2, 08:58 EST
Update - We are continuing to investigate this issue.
Nov 2, 08:58 EST
Investigating - We are currently investigating reports of a performance issue with Jobs-related functionality.
Oct 28, 19:55 EDT
Resolved - This incident has been resolved.
Oct 28, 16:59 EDT
Monitoring - A fix has been implemented and we are monitoring the results.
Oct 28, 16:44 EDT
Investigating - We’re experiencing a higher rate of TM Management API errors and are actively investigating the issue.
Oct 24, 17:18 EDT
Resolved - This incident has been resolved.
Oct 24, 13:57 EDT
Update - We are continuing to monitor for any further issues.
Oct 24, 13:57 EDT
Monitoring - A fix has been implemented and we are monitoring the results.
Oct 24, 13:07 EDT
Identified - The issue has been identified and a fix is being implemented.
Oct 24, 12:59 EDT
Investigating - We are currently investigating delays in data processing. Our team is actively working on resolving the issue. We will provide more information as it becomes available. Global Delivery Network services remain operational.
The disruption has now been resolved.
Affected components: Tempo for Jira Cloud Help Center, Adaptive Planner, Time Tracker, Capacity Planner, Timesheets, Financial Manager, Jira, Tempo for Slack (all Operational)
Resolved
Affected components: Capacity Planner, Timesheets, Financial Manager, Jira, Tempo for Slack, Tempo for Jira Cloud Help Center, Adaptive Planner (all Operational)
Incident has been resolved.
Affected components: Capacity Planner, Timesheets, Financial Manager, Tempo for Slack (all Operational)
Jan 23, 15:30 UTC
Resolved - We identified the cause of elevated latency impacting some databases in the us-east-1 region between 15:30 and 15:35 UTC: a sudden surge of connection attempts hit OS-level connection limits on our proxy layer. This resulted in slower new connection establishment and increased latency for some requests. Databases themselves were not impacted. We are implementing additional proxy-level metrics and safeguards to detect and manage similar edge cases earlier.
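For context, a minimal sketch of the kinds of OS-level limits that can cap new connection establishment on a Linux proxy host (commands are illustrative; the specific limits involved in this incident were not disclosed):
```
# Kernel accept-queue depth: pending connections beyond this are delayed or dropped
sysctl net.core.somaxconn
# Ephemeral port range available for outbound connections to backends
sysctl net.ipv4.ip_local_port_range
# Per-process open file descriptor limit (each socket consumes one)
ulimit -n
# Snapshot of current socket usage
ss -s
```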
Dec 10, 12:08 UTC
Resolved - The issue has been identified and the replicas were successfully reconnected.
Dec 10, 11:50 UTC
Investigating - We identified an issue in Regional Databases in us-east-1 where database replicas have connectivity issues with each other.
Other regions are not impacted. Global databases are not impacted.
We are working on the issue.
Dec 5, 09:18 UTC
Resolved - This incident has been resolved.
Dec 5, 09:03 UTC
Investigating - Upstream provider confirmed an incident. We are investigating the impact and potential resolutions.
Dec 1, 07:00 UTC
Resolved - We identified and fixed a bug that could cause messages with Flow Control enabled to be delayed longer than their configured delay, resulting in unexpectedly long pending times.
The fix is in place and the issue should not recur. If you’re still seeing unusually long-delayed messages, please contact [email protected] and we can help with remediation.
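As background, here is a hedged sketch of publishing a delayed message with Flow Control enabled, the combination affected by the bug above. The header names and values are assumptions and should be verified against the QStash documentation; the destination URL and token are placeholders:
```
# Publish a message with a 60-second delay under a flow-control key
# that caps delivery rate (header names are assumptions, not confirmed).
curl -X POST "https://qstash.upstash.io/v2/publish/https://example.com/endpoint" \
  -H "Authorization: Bearer $QSTASH_TOKEN" \
  -H "Upstash-Delay: 60s" \
  -H "Upstash-Flow-Control-Key: my-key" \
  -H "Upstash-Flow-Control-Value: rate=10" \
  -d '{"hello": "world"}'
```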
Oct 20, 18:50 UTC
Resolved - A fix has been deployed as a workaround so that our systems are not affected by the ongoing cloud provider incident.
Oct 20, 18:07 UTC
Update - Developer API (api.upstash.com) is having availability issues alongside Upstash Console. While we are monitoring the underlying cloud provider's status updates, we are also working on a remediation.
Oct 20, 15:05 UTC
Monitoring - The Console is back to normal. We are currently monitoring.
Oct 20, 14:52 UTC
Investigating - We are currently experiencing errors in the Upstash Console due to issues with one of our upstream providers. This may affect access to the dashboard and related operations.
Our team is actively monitoring the situation and working to mitigate the impact. We will provide updates as soon as more information becomes available.
Feb 3, 21:54 UTC
Resolved - This issue has been resolved and sandboxes are creating normally.
Feb 3, 20:14 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 3, 19:51 UTC
Update - We are releasing a fix and are continuing to work on stabilizing sandbox creation.
Feb 3, 18:49 UTC
Update - We are continuing to work on a fix for this issue.
Feb 3, 18:38 UTC
Identified - We have identified an issue while creating new sandboxes. We are working on a fix for the issue.
Feb 3, 21:36 UTC
Resolved - The issue has been resolved and all deployments are building normally.
Feb 3, 21:12 UTC
Monitoring - A fix has been applied, clearing the queue of new deployments. We are continuing to monitor the results.
Feb 3, 20:48 UTC
Investigating - We are investigating elevated latency creating new deployments. Some deployments may be delayed.
Feb 3, 17:30 UTC
Resolved - This incident has been resolved.
Feb 3, 16:50 UTC
Monitoring - A fix has been implemented, and we are monitoring results.
Feb 3, 16:22 UTC
Update - We have begun applying fixes and are seeing partial recovery across API, dashboard, and builds. We are continuing to apply fixes. We will share additional updates as they become available.
Feb 3, 15:43 UTC
Identified - The issue has been identified, and we are working on a fix.
Feb 3, 15:43 UTC
Update - We are continuing to investigate this issue.
Feb 3, 15:22 UTC
Investigating - We are currently investigating reports of elevated error rates and slowness on the Vercel Dashboard and APIs. Existing deployments and live traffic are not affected by this issue. We will share updates as they become available.
Feb 1, 08:57 UTC
Resolved - There were elevated error rates in the sin1 (Singapore) edge region between 08:57 and 09:04 UTC. The issue has been resolved, and all systems are operating normally.
Jan 30, 11:08 UTC
Resolved - This incident has been resolved.
Jan 30, 10:42 UTC
Monitoring - The new build routing has addressed the issue. We are monitoring the results.
Jan 30, 10:24 UTC
Update - We have mitigated the impact of the underlying issue by moving builds from pdx1 to iad1 while we continue to investigate the underlying cause.
Jan 30, 09:02 UTC
Update - We are continuing to investigate this issue.
Jan 30, 06:21 UTC
Update - We are continuing to investigate this issue.
Jan 30, 03:34 UTC
Update - We’ve confirmed the issue is intermittent, and redeploying a failed build may succeed.
Alternatively, as a temporary workaround, builds can be run locally and deployed as prebuilt output using Vercel CLI:
`vercel build && vercel deploy --prebuilt`
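A slightly fuller sketch of that workaround (assuming the Vercel CLI is installed and the project is already linked):
```
# Pull project settings and environment so the local build matches Vercel's
vercel pull
# Build locally, producing the .vercel/output directory
vercel build
# Upload the prebuilt output, skipping the remote build step
vercel deploy --prebuilt
```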
Jan 29, 16:00 UTC
Investigating - We are currently observing elevated `git clone` durations on a subset of builds. Some builds are taking significantly longer than expected to clone repositories, resulting in slower overall build start times.
We are actively analyzing clone performance metrics and infrastructure behavior to identify the root cause.
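As general context (not an official workaround for this incident), shallow clones are a common way to reduce clone time on large repositories; the repository URL and branch below are placeholders:
```
# Fetch only the latest commit on a single branch instead of full history
git clone --depth 1 --single-branch --branch main https://github.com/example/repo.git
```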
Feb 3, 04:52 PST
Resolved - This incident has been resolved and the affected services have been restored.
Feb 3, 04:29 PST
Monitoring - On 02/03/2026, between 11:20 UTC and 11:25 UTC, a subset of users in Jeddah may have experienced call disconnections.
This incident has been resolved, and the affected services have been restored.
Feb 2, 18:34 PST
Resolved - This incident has been resolved, and inbound and outbound calling services for Zoom Phone and Zoom Contact Center are fully operational.
Feb 2, 17:38 PST
Monitoring - The service degradation affecting some inbound and outbound Toll Free Calls in the Japan Region has been resolved.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
Feb 2, 16:49 PST
Identified - We have successfully identified the root cause affecting some inbound and outbound Toll Free Calls in the Japan Region.
Our team is actively working on a resolution, and we will keep you informed with timely updates as progress is made.
Thank you for your patience.
Feb 2, 16:45 PST
Investigating - We are currently investigating a service degradation affecting some inbound and outbound Toll Free Calls in the Japan Region.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Feb 2, 02:58 PST
Resolved - This incident has been resolved.
Feb 2, 02:09 PST
Monitoring - On 02/02/2026, between 9:12 and 9:22 UTC, users experienced service degradation on Zoom Phone Outbound Calls, Inbound Calls, and Cloud Recordings in the Frankfurt, Germany region.
This incident has been resolved and the affected services have been restored.
Jan 31, 21:53 PST
Resolved - This incident has been resolved and the affected services have been restored.
Jan 31, 21:25 PST
Investigating - On Feb 01, between 04:10 and 05:06 UTC, free users might have experienced issues with the Zoom Whiteboard Engagement feature in the US region. We have successfully resolved the issue.
Our team will continue to monitor the situation closely and keep you informed of any further developments.
Jan 30, 22:29 PST
Resolved - This issue lies with the far-end termination carrier and is outside of Zoom's control. As a result, we will be closing this incident.
Thank you for your patience.
Jan 30, 19:01 PST
Update - We are continuing to work with our underlying carrier on the root cause affecting Zoom Phone outbound calls for a subset of users in the United Kingdom.
We will keep you informed with timely updates as progress is made.
Thank you for your patience.
Jan 30, 08:12 PST
Update - We are continuing to work with our underlying carrier on the root cause affecting Zoom Phone outbound calls for a subset of users in the United Kingdom.
Our team is actively working on a resolution, and we will keep you informed with timely updates as progress is made.
Thank you for your patience.
Jan 30, 06:12 PST
Identified - We have identified the root cause with our underlying carrier affecting Zoom Phone outbound calls for a subset of users in the United Kingdom.
Our team is actively working on a resolution, and we will keep you informed with timely updates as progress is made.
Thank you for your patience.
Jan 30, 05:37 PST
Update - We are continuing to investigate a service degradation with Zoom Phone outbound calls affecting a subset of users in the United Kingdom.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.
Jan 30, 04:39 PST
Investigating - We are currently investigating a service degradation with Zoom Phone outbound calls affecting a subset of users in the United Kingdom.
Our team is actively working to identify the impact and root cause. We will provide an update as soon as more information becomes available.
We appreciate your patience as we work to resolve this issue.