Cosmic Global - Notice history

Los Angeles, California - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 100.0% | Apr 2026 · 99.98%

Dallas, Texas - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 99.77% | Apr 2026 · 99.99%

Ashburn, Virginia - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 100.0% | Apr 2026 · 100.0%

London, United Kingdom - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 99.77% | Apr 2026 · 99.98%

Amsterdam, Netherlands - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 100.0% | Apr 2026 · 99.98%

Transit - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 100.0% | Apr 2026 · 100.0%

Proxies - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 100.0% | Apr 2026 · 100.0%

API - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 100.0% | Apr 2026 · 100.0%

Customer Portal - Operational
Uptime: Feb 2026 · 100.0% | Mar 2026 · 100.0% | Apr 2026 · 100.0%

Notice history

Apr 2026

Network Outage
  • Postmortem

    At approximately 1:25 AM EST, during a routine filter re-deployment (a hotfix to address concerns surfaced by our monitoring), the deployment process for our Lua filter code failed. This was not caused by the logic of the hotfix itself, as we redeployed it moments later without issue, but by what we believe to be an extremely unlucky bug in the deployment tooling that cascaded through all PoPs because of the way the syncing system works. This was not a full outage, but we would classify it as a major outage, with more than 80% of traffic being dropped at the time.

    The issue was 95% resolved within the first 10 minutes by bringing Ashburn, Dallas, London, and Amsterdam back up without issue. Los Angeles took about 12 additional minutes because the filtering appliances there needed to be rebooted and traffic had to be shifted back online gradually. Frankfurt was not affected during this time, and its traffic continued to flow through the network.

    We are currently investigating why this specific filter deployment failed. We deployed code more than 30 times in the month of March without a single issue, so we understand this situation is a concern to our customers; we acknowledge that, and we are investigating precisely because of how peculiar it is. An illustrative sketch of a staged, per-PoP rollout safeguard appears at the end of this postmortem.

    We also acknowledge that three outages within the last 30 days are a significant cause for concern, and we do not want to make excuses. We do want to reiterate that all three outages were caused by factors outside our control at the time, and we are now implementing ways to bring those factors under control. We understand that, to you, our customers, these are within our control, since you trust us to maintain the stability of the network you use. We take full responsibility for the outages even when they were not directly caused by us, and we are making it our goal to implement increased redundancy and resiliency.

    We are currently in the process of shipping upgraded filtering hardware and software, upgraded routers, and upgraded router components to critical locations, not only to introduce further redundancy but, as previously mentioned, to improve resiliency overall.

    We will have more to share on this in the coming weeks, but rest assured we are working on the issues, and we completely understand your frustrations as a customer. Do not hesitate to voice any concerns to us, and we will be glad to respond.

    Please also take a look at the RFO for the outage in Dallas last week and our plans for the future:

    https://status.as30456.net/cmn9oqp8405n49b3k562hrh6i
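
    For illustration only, and not a description of our actual deployment tooling: the minimal Python sketch below shows the kind of staged, per-PoP rollout safeguard that keeps a single bad filter deployment from syncing to every PoP at once. The PoP names are taken from this page; the deploy_to_pop and health_check callables are hypothetical placeholders.

    # Illustrative sketch only: a canary-style, per-PoP rollout.
    # `deploy_to_pop` and `health_check` are hypothetical callables,
    # not part of any real deployment system described above.
    from typing import Callable

    POPS = ["ashburn", "dallas", "london", "amsterdam", "los-angeles", "frankfurt"]
    CANARY = "ashburn"  # deploy here first and verify before touching the rest

    def staged_rollout(deploy_to_pop: Callable[[str], None],
                       health_check: Callable[[str], bool]) -> bool:
        """Deploy to a single canary PoP, verify traffic is healthy,
        then sync the remaining PoPs one at a time, aborting on failure."""
        deploy_to_pop(CANARY)
        if not health_check(CANARY):
            return False  # stop: the bad deployment never leaves the canary
        for pop in POPS:
            if pop == CANARY:
                continue
            deploy_to_pop(pop)
            if not health_check(pop):
                return False  # abort before the remaining PoPs are touched
        return True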

  • Resolved
    This incident has been resolved.
  • Update
    All PoPs have recovered except for LAX. We are working on restoring traffic to LAX.
  • Monitoring
    We implemented a fix and are currently monitoring the result.
  • Investigating
    We are currently investigating this incident.

Mar 2026

DFW Outage
  • Postmortem

    Our explanation:

    Over the past few weeks, some of you may have experienced brief service interruptions across parts of our network. We want to be upfront about what happened, what we've learned, and most importantly what we've done about it.

    Our edge routers have always been built with internal redundancy in mind - redundant supervisors, redundant power supplies, multiple line cards, and redundant fabric modules. That level of hardware resilience handles the vast majority of failure scenarios well.

    However, the recent outages exposed a gap: when an issue affects the chassis itself, such as a software defect, a firmware upgrade that requires a full reload, or a software process crashing on the router (as has happened twice recently in London), there was no second device to immediately absorb the traffic. The router was redundant in every way except the one that mattered in these incidents.

    What we're doing:

    We're rolling out a dual-router design across all six of our points of presence: Dallas, Ashburn, Los Angeles, London, Amsterdam, and Frankfurt. Once complete, every PoP will operate with two independent edge routers in an active/active configuration, with full BGP session redundancy to all upstream and peering partners. If an entire chassis needs to be taken offline for maintenance, a software upgrade, or an unexpected failure, traffic will automatically reconverge on the second device with no customer-facing impact (a simplified sketch of this failover behavior appears at the end of this postmortem).

    Each router in the pair will run on independent power feeds with independent management and control planes. We're also using this as an opportunity to standardize failover testing procedures across all PoPs, so this architecture is validated continuously, not just at deployment. It also protects against cases where a configuration change (possibly involving human error) ends up knocking out a large portion of traffic. The investments behind these changes were made over the last couple of weeks and were already in the works independently of the March incidents, but with summer right around the corner we wanted to let you know you'll be in good hands.

    These changes will also allow DDoS mitigation changes to be rolled out in a more controlled fashion (for example, to only part of the traffic), enable zero-downtime maintenance windows for core networking equipment, and provide a stronger foundation for the capacity expansions we have planned for the rest of 2026.
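
    To make the failover behavior concrete, here is a minimal, purely illustrative Python sketch (not our routing code) of a prefix with two next-hops, one per edge router, that keeps forwarding when one chassis is withdrawn. The router names, prefix, and next-hop labels are hypothetical placeholders.

    # Illustrative sketch only: active/active edge routers advertising the
    # same prefix, with traffic reconverging when one chassis goes away.
    # Router names and the prefix are hypothetical placeholders.

    rib = {
        # prefix -> set of usable next-hops (one per edge router)
        "203.0.113.0/24": {"edge1.dfw", "edge2.dfw"},
    }

    def withdraw(router: str) -> None:
        """Simulate taking one chassis offline: remove its next-hops."""
        for prefix, next_hops in rib.items():
            next_hops.discard(router)

    def best_next_hops(prefix: str) -> set[str]:
        """Remaining usable next-hops; traffic keeps flowing as long as
        at least one edge router still advertises the prefix."""
        return rib.get(prefix, set())

    withdraw("edge1.dfw")                      # chassis reload or failure
    assert best_next_hops("203.0.113.0/24")    # edge2.dfw still carries traffic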

  • Resolved
    This incident has been resolved.
  • Monitoring
    The incident was resolved shortly after onset, and we have been monitoring since. A full RFO will be posted when this status is closed. In short, the root cause was a cascading failure triggered by a bug in the routing software itself, not by any action taken by our team. The issue was entirely outside our control, and we responded quickly to restore normal operation.
  • Identified
    We identified the root cause, and have been working on resolution. Recovery efforts are showing progress as traffic is beginning to restore in Dallas.
  • Investigating
    We are currently investigating this incident.

London Connectivity Issues
  • Resolved

    Following emergency maintenance yesterday that required a reboot of a core router in our London facility, an Arista runtime software bug caused the router's ARP entries to gradually decay from active memory.

    Although the router's configuration remained correct throughout, the hardware chip (ASIC) responsible for directing network traffic failed to correctly reload the address mappings after the reboot. These mappings are what tell the router how to reach a set of internal endpoints used for multicast traffic forwarding. With them missing from the hardware's active memory, traffic that should have been flowing through those paths was silently dropped.

    Because the configuration itself was never corrupted, the root cause was not immediately obvious. A number of other potential causes were investigated before the true issue was identified: a desync between the router's stored configuration and what the hardware had actually loaded into memory (a simplified illustration of this kind of desync check appears at the end of this update).

    We sincerely apologize for the impact this had on your services and for the time it took to identify the root cause. We understand how frustrating extended investigations can be, and we appreciate your patience while our engineers worked methodically through the contributing factors to reach a definitive resolution.
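
    For illustration only: the short Python sketch below shows the general idea of detecting such a desync by comparing the control plane's ARP table against what the forwarding hardware has actually programmed. The control_plane_arp and hardware_arp inputs are hypothetical data sources, not output from any specific vendor CLI or API.

    # Illustrative sketch only: find ARP entries the control plane knows
    # about but the hardware has not programmed; traffic toward those
    # destinations is silently dropped.

    def find_missing_hw_entries(control_plane_arp: dict[str, str],
                                hardware_arp: dict[str, str]) -> dict[str, str]:
        """Return IP -> MAC entries present in the control plane but
        missing (or stale) in the hardware's active memory."""
        return {ip: mac for ip, mac in control_plane_arp.items()
                if hardware_arp.get(ip) != mac}

    # Example: two entries in the control plane, only one made it into
    # hardware after the reboot (addresses are placeholders).
    control = {"10.0.0.1": "aa:bb:cc:00:00:01", "10.0.0.2": "aa:bb:cc:00:00:02"}
    hardware = {"10.0.0.1": "aa:bb:cc:00:00:01"}
    print(find_missing_hw_entries(control, hardware))  # -> {'10.0.0.2': 'aa:bb:cc:00:00:02'}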

  • Monitoring

    We have implemented another round of fixes and connectivity is recovering. Please reach out to us if you are still having issues while we continue to monitor.

    Thank you again for your patience in this matter. We will provide a full report when we confirm all is well.

  • Update

    We are continuing to investigate TCP issues in the London PoP. We apologize for the continued problems today and are making progress toward a full resolution for this location.

  • Identified

    We are continuing to monitor reports of elevated issues and are still working towards a permanent resolution.

  • Monitoring

    We have rolled out a batch of fixes and are seeing connectivity recover. We are continuing to monitor the situation closely.

  • Identified

    We have identified an issue with our filtering software in our London PoP and are working on a resolution as quickly as possible.

Feb 2026

Upstream Issues
  • Resolved
  • Investigating

    We are currently investigating this incident. One of our upstreams is temporarily losing announcements of our prefixes. We are working with them on an emergency basis. You may see brief blips as traffic fails over and back.
