<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Cosmic Global Status - Incident history</title>
    <link>https://status.as30456.net</link>
    <description>Cosmic Global</description>
    <pubDate>Fri, 3 Apr 2026 05:32:47 +0000</pubDate>
    
<item>
  <title>Network Outage</title>
  <description>
    Type: Incident
    Duration: 37 minutes

    Affected Components: London, United Kingdom; Dallas, Texas; Los Angeles, California; Amsterdam, Netherlands; Ashburn, Virginia
    Apr 3, 05:32:47 GMT+0 - Investigating - We are currently investigating this incident.
    Apr 3, 05:41:05 GMT+0 - Monitoring - We implemented a fix and are currently monitoring the result.
    Apr 3, 05:50:54 GMT+0 - Monitoring - All PoPs have recovered except for LAX. We are working on restoring traffic to LAX.
    Apr 3, 06:09:40 GMT+0 - Resolved - This incident has been resolved.
    Apr 3, 06:32:20 GMT+0 - Postmortem - At approximately 1:25 AM EST, during a routine filter re-deployment (a hotfix) to address concerns surfaced by our monitoring, the deployment process for our Lua script code failed. This was not due to any logic error or bug introduced in the code itself - we redeployed the same hotfix moments later without issue - but to what we believe to be an extremely unlucky software bug that cascaded through all PoPs because of the way the syncing system works. This was not a full outage, but we would classify it as a major outage, with more than 80% of traffic being dropped at the time.

The issue was 95% resolved within the first 10 minutes by bringing Ashburn, Dallas, London, and Amsterdam back up without issue. Los Angeles took about 12 minutes longer because the filtering appliances there needed to be rebooted and traffic had to be shifted back online gradually. Frankfurt was not affected during this time, and its traffic continued to flow through the network.

We are currently investigating why this specific filter deployment failed. We deployed code over 30 times in the month of March without a single issue, so we understand this situation is a concern to our customers; we acknowledge that, and are investigating thoroughly given how peculiar the situation is.

We also acknowledge that three outages within the last 30 days are a significant cause for concern. We do not want to make excuses, but we do want to reiterate that all three outages were caused by factors outside of our control at the time, and we are now implementing ways to bring those factors under control. We understand that to you, our customers, these factors are within our control, since you trust us to maintain the stability of the network you use. We therefore take full responsibility for the outages, even those not directly caused by us, and are making it our goal to implement increased redundancy and resiliency.

We are currently in the process of shipping upgraded filtering hardware and software, upgraded routers, and upgraded router components to critical locations, not only to introduce further redundancy but, as previously mentioned, to improve resiliency overall.

We will have more to share on this in the coming weeks, but rest assured that we are working on these issues, and we completely understand your frustration as customers. Do not hesitate to voice any concerns to us, and we will be glad to respond.

Please also take a look at the RFO for the outage in Dallas last week and our plans for the future:

&lt;https://status.as30456.net/cmn9oqp8405n49b3k562hrh6i&gt; 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 37 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom; Dallas, Texas; Los Angeles, California; Amsterdam, Netherlands; Ashburn, Virginia</p>
    <p><small>Apr <var data-var='date'>3</var>, <var data-var='time'>05:32:47</var> GMT+0</small><br><strong>Investigating</strong> -
  We are currently investigating this incident.</p>
<p><small>Apr <var data-var='date'>3</var>, <var data-var='time'>05:41:05</var> GMT+0</small><br><strong>Monitoring</strong> -
  We implemented a fix and are currently monitoring the result.</p>
<p><small>Apr <var data-var='date'>3</var>, <var data-var='time'>05:50:54</var> GMT+0</small><br><strong>Monitoring</strong> -
  All PoPs have recovered except for LAX. We are working on restoring traffic to LAX.</p>
<p><small>Apr <var data-var='date'>3</var>, <var data-var='time'>06:09:40</var> GMT+0</small><br><strong>Resolved</strong> -
  This incident has been resolved.</p>
<p><small>Apr <var data-var='date'>3</var>, <var data-var='time'>06:32:20</var> GMT+0</small><br><strong>Postmortem</strong> -
  At approximately 1:25 AM EST, during a routine filter re-deployment (a hotfix) to address concerns surfaced by our monitoring, the deployment process for our Lua script code failed. This was not due to any logic error or bug introduced in the code itself - we redeployed the same hotfix moments later without issue - but to what we believe to be an extremely unlucky software bug that cascaded through all PoPs because of the way the syncing system works. This was not a full outage, but we would classify it as a major outage, with more than 80% of traffic being dropped at the time.</p>

<p>The issue was 95% resolved within the first 10 minutes by bringing Ashburn, Dallas, London, and Amsterdam back up without issue. Los Angeles took about 12 minutes longer because the filtering appliances there needed to be rebooted and traffic had to be shifted back online gradually. Frankfurt was not affected during this time, and its traffic continued to flow through the network.</p>

<p>We are currently investigating why this specific filter deployment failed. We deployed code over 30 times in the month of March without a single issue, so we understand this situation is a concern to our customers; we acknowledge that, and are investigating thoroughly given how peculiar the situation is.</p>

<p>We also acknowledge that three outages within the last 30 days are a significant cause for concern. We do not want to make excuses, but we do want to reiterate that all three outages were caused by factors outside of our control at the time, and we are now implementing ways to bring those factors under control. We understand that to you, our customers, these factors are within our control, since you trust us to maintain the stability of the network you use. We therefore take full responsibility for the outages, even those not directly caused by us, and are making it our goal to implement increased redundancy and resiliency.</p>

<p>We are currently in the process of shipping upgraded filtering hardware and software, upgraded routers, and upgraded router components to critical locations, not only to introduce further redundancy but, as previously mentioned, to improve resiliency overall.</p>

<p>We will have more to share on this in the coming weeks, but rest assured that we are working on these issues, and we completely understand your frustration as customers. Do not hesitate to voice any concerns to us, and we will be glad to respond.</p>

<p>Please also take a look at the RFO for the outage in Dallas last week and our plans for the future:</p>

<p><a href="https://status.as30456.net/cmn9oqp8405n49b3k562hrh6i">https://status.as30456.net/cmn9oqp8405n49b3k562hrh6i</a></p>
]]>
  </content:encoded>
  <pubDate>Fri, 3 Apr 2026 05:32:47 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmnigw4ki0ejswkl8ij6nn44f</link>
  <guid>https://status.as30456.net/incident/cmnigw4ki0ejswkl8ij6nn44f</guid>
</item>

<item>
  <title>DFW Outage</title>
  <description>
    Type: Incident
    Duration: 12 hours and 8 minutes

    Affected Components: Dallas, Texas
    Mar 28, 01:55:00 GMT+0 - Investigating - We are currently investigating this incident.
    Mar 28, 02:08:45 GMT+0 - Identified - We identified the root cause and have been working on resolution. Recovery efforts are showing progress as traffic begins to restore in Dallas.
    Mar 28, 04:43:41 GMT+0 - Monitoring - The incident was resolved shortly after onset, and we have been monitoring since. A full RFO will be posted when this status is closed. In short, the root cause was a cascading failure triggered by a bug in the routing software itself, not by any action taken by our team. The issue was entirely outside our control, and we responded quickly to restore normal operation.
    Mar 28, 14:03:21 GMT+0 - Resolved - This incident has been resolved.
    Apr 3, 06:31:01 GMT+0 - Postmortem - ### Our explanation:

Over the past few weeks, some of you may have experienced brief service interruptions across parts of our network. We want to be upfront about what happened, what we&#039;ve learned, and most importantly what we&#039;ve done about it.

Our edge routers have always been built with internal redundancy in mind - redundant supervisors, redundant power supplies, multiple line cards, and redundant fabric modules. That level of hardware resilience handles the vast majority of failure scenarios well.

However, the recent outages exposed a gap: when an issue affects the chassis itself - such as a software defect, a firmware upgrade that requires a full reload, or a software process on the router crashing (as has happened twice recently in London) - there was no second device to immediately absorb the traffic. The router was redundant in every way except the one that mattered in these incidents.

### What we&#039;re doing:

We're rolling out a dual router design across all six of our points of presence: Dallas, Ashburn, Los Angeles, London, Amsterdam, and Frankfurt. Once complete, every PoP will operate with two independent edge routers in an active/active configuration, with full BGP session redundancy to all upstream and peering partners. If an entire chassis needs to be taken offline for maintenance, a software upgrade, or an unexpected failure, traffic will automatically reconverge on the second device with no customer-facing impact.

Each router in the pair will run on independent power feeds with independent management and control planes. We're also using this as an opportunity to standardize failover testing procedures across all PoPs, so this architecture is validated continuously, not just at deployment. It also provides protection against cases where a configuration change (possibly involving human error) ends up disrupting a significant amount of traffic. The investments for these changes were made during the last couple of weeks, so they were already in the works and unrelated to the incidents in March, but with summer right around the corner we wanted to let you know you'll be in good hands.

These changes will also allow DDoS mitigation changes to be rolled out in a more controlled fashion (e.g. to only part of the traffic), enable zero-downtime maintenance windows for core networking equipment, and provide a stronger foundation for the capacity expansions we have planned for the rest of 2026.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 12 hours and 8 minutes</p>
    <p><strong>Affected Components:</strong> Dallas, Texas</p>
    <p><small>Mar <var data-var='date'>28</var>, <var data-var='time'>01:55:00</var> GMT+0</small><br><strong>Investigating</strong> -
  We are currently investigating this incident.</p>
<p><small>Mar <var data-var='date'>28</var>, <var data-var='time'>02:08:45</var> GMT+0</small><br><strong>Identified</strong> -
  We identified the root cause and have been working on resolution. Recovery efforts are showing progress as traffic begins to restore in Dallas.</p>
<p><small>Mar <var data-var='date'>28</var>, <var data-var='time'>04:43:41</var> GMT+0</small><br><strong>Monitoring</strong> -
  The incident was resolved shortly after onset, and we have been monitoring since. A full RFO will be posted when this status is closed. In short, the root cause was a cascading failure triggered by a bug in the routing software itself, not by any action taken by our team. The issue was entirely outside our control, and we responded quickly to restore normal operation.</p>
<p><small>Mar <var data-var='date'>28</var>, <var data-var='time'>14:03:21</var> GMT+0</small><br><strong>Resolved</strong> -
  This incident has been resolved.</p>
<p><small>Apr <var data-var='date'>3</var>, <var data-var='time'>06:31:01</var> GMT+0</small><br><strong>Postmortem</strong> -
  ### Our explanation:</p>

<p>Over the past few weeks, some of you may have experienced brief service interruptions across parts of our network. We want to be upfront about what happened, what we've learned, and most importantly what we've done about it.</p>

<p>Our edge routers have always been built with internal redundancy in mind - redundant supervisors, redundant power supplies, multiple line cards, and redundant fabric modules. That level of hardware resilience handles the vast majority of failure scenarios well.</p>

<p>However, the recent outages exposed a gap: when an issue affects the chassis itself - such as a software defect, a firmware upgrade that requires a full reload, or a software process on the router crashing (as has happened twice recently in London) - there was no second device to immediately absorb the traffic. The router was redundant in every way except the one that mattered in these incidents.</p>

<p>### What we're doing:</p>

<p>We're rolling out a dual router design across all six of our points of presence: Dallas, Ashburn, Los Angeles, London, Amsterdam, and Frankfurt. Once complete, every PoP will operate with two independent edge routers in an active/active configuration, with full BGP session redundancy to all upstream and peering partners. If an entire chassis needs to be taken offline for maintenance, a software upgrade, or an unexpected failure, traffic will automatically reconverge on the second device with no customer-facing impact.</p>

<p>Each router in the pair will run on independent power feeds with independent management and control planes. We're also using this as an opportunity to standardize failover testing procedures across all PoPs, so this architecture is validated continuously, not just at deployment. It also provides protection against cases where a configuration change (possibly involving human error) ends up disrupting a significant amount of traffic. The investments for these changes were made during the last couple of weeks, so they were already in the works and unrelated to the incidents in March, but with summer right around the corner we wanted to let you know you'll be in good hands.</p>

<p>These changes will also allow DDoS mitigation changes to be rolled out in a more controlled fashion (e.g. to only part of the traffic), enable zero-downtime maintenance windows for core networking equipment, and provide a stronger foundation for the capacity expansions we have planned for the rest of 2026.</p>
]]>
  </content:encoded>
  <pubDate>Sat, 28 Mar 2026 01:55:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmn9oqp8405n49b3k562hrh6i</link>
  <guid>https://status.as30456.net/incident/cmn9oqp8405n49b3k562hrh6i</guid>
</item>

<item>
  <title>Brief Blip of Users Routing Through AMS</title>
  <description>
    Type: Incident
    Duration: 2 hours and 35 minutes

    Affected Components: Amsterdam, Netherlands
    Mar 26, 18:37:00 GMT+0 - Investigating - We are currently investigating a brief but sizable blip of traffic that occurred for users routing through Amsterdam.
    Mar 26, 19:31:19 GMT+0 - Monitoring - We traced the problem to a private transport link between AMS and LON, which caused the drop due to issues outside of our control. The resulting drop occurred as traffic shifted over. Following the transport link failure, we have switched the traffic mode for this transport and do not expect the issue to recur; however, we are monitoring the situation in case it does. We have also linked this private link to a few other recent small drops in the EU region (unrelated to the last status post, which concerned a hardware issue), and apologize for any impact from that.

Regarding network stability, our London PoP will be undergoing an upgrade to address a routing appliance constraint, and we will be implementing increased redundancy across all PoPs in the near future to improve resiliency during both planned and unplanned outages.
    Mar 26, 21:11:36 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 hours and 35 minutes</p>
    <p><strong>Affected Components:</strong> Amsterdam, Netherlands</p>
    <p><small>Mar <var data-var='date'>26</var>, <var data-var='time'>18:37:00</var> GMT+0</small><br><strong>Investigating</strong> -
  We are currently investigating a brief but sizable blip of traffic that occurred for users routing through Amsterdam.</p>
<p><small>Mar <var data-var='date'>26</var>, <var data-var='time'>19:31:19</var> GMT+0</small><br><strong>Monitoring</strong> -
  We traced the problem to a private transport link between AMS and LON, which caused the drop due to issues outside of our control. The resulting drop occurred as traffic shifted over. Following the transport link failure, we have switched the traffic mode for this transport and do not expect the issue to recur; however, we are monitoring the situation in case it does. We have also linked this private link to a few other recent small drops in the EU region (unrelated to the last status post, which concerned a hardware issue), and apologize for any impact from that.</p>

<p>Regarding network stability, our London PoP will be undergoing an upgrade to address a routing appliance constraint, and we will be implementing increased redundancy across all PoPs in the near future to improve resiliency during both planned and unplanned outages.</p>
<p><small>Mar <var data-var='date'>26</var>, <var data-var='time'>21:11:36</var> GMT+0</small><br><strong>Resolved</strong> -
  This incident has been resolved.</p>
]]>
  </content:encoded>
  <pubDate>Thu, 26 Mar 2026 18:37:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmn7ui65m02iqzuwrzrj7mvg8</link>
  <guid>https://status.as30456.net/incident/cmn7ui65m02iqzuwrzrj7mvg8</guid>
</item>

<item>
  <title>London Connectivity Issues</title>
  <description>
    Type: Incident
    Duration: 7 hours and 46 minutes

    Affected Components: London, United Kingdom
    Mar 23, 09:00:04 GMT+0 - Identified - We have identified an issue with our filtering software in our London PoP and are working on a resolution as quickly as possible.
    Mar 23, 10:40:23 GMT+0 - Monitoring - We have rolled out a batch of fixes and are seeing connectivity recover. We are continuing to monitor the situation closely.
    Mar 23, 11:47:07 GMT+0 - Identified - We are continuing to monitor reports of elevated issues and are still working towards a permanent resolution.
    Mar 23, 14:10:22 GMT+0 - Identified - We are continuing to investigate TCP issues in the London PoP. We apologize for the continued problems today and are making progress toward a full resolution for this location.
    Mar 23, 15:39:38 GMT+0 - Monitoring - We have implemented another round of fixes and connectivity is recovering. Please reach out to us if you are still having issues while we continue to monitor.

Thank you again for your patience in this matter. We will provide a full report when we confirm all is well.
    Mar 23, 16:46:14 GMT+0 - Resolved - Following emergency maintenance yesterday that required a reboot of a core router in our London facility, an Arista runtime software bug caused the router's ARP entries to gradually decay from active memory.

Although the router's configuration remained correct throughout, the hardware chip (ASIC) responsible for directing network traffic failed to correctly reload the address mappings after the reboot. These mappings tell the router how to reach a set of internal endpoints used for multicast traffic forwarding. With them missing from the hardware's active memory, traffic that should have been flowing through those paths was silently dropped.

Because the configuration itself was never corrupted, the root cause was not immediately obvious. A number of other potential causes were investigated before the true issue was identified: a desync between the router's stored configuration and what the hardware had actually loaded into memory.

We sincerely apologize for the impact this had on your services and for the time it took to identify the root cause. We understand how frustrating extended investigations can be, and we appreciate your patience while our engineers worked methodically through the contributing factors to reach a definitive resolution. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 7 hours and 46 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    <p><small>Mar <var data-var='date'>23</var>, <var data-var='time'>09:00:04</var> GMT+0</small><br><strong>Identified</strong> -
  We have identified an issue with our filtering software in our London PoP and are working on a resolution as quickly as possible.</p>
<p><small>Mar <var data-var='date'>23</var>, <var data-var='time'>10:40:23</var> GMT+0</small><br><strong>Monitoring</strong> -
  We have rolled out a batch of fixes and are seeing connectivity recover. We are continuing to monitor the situation closely.</p>
<p><small>Mar <var data-var='date'>23</var>, <var data-var='time'>11:47:07</var> GMT+0</small><br><strong>Identified</strong> -
  We are continuing to monitor reports of elevated issues and are still working towards a permanent resolution.</p>
<p><small>Mar <var data-var='date'>23</var>, <var data-var='time'>14:10:22</var> GMT+0</small><br><strong>Identified</strong> -
  We are continuing to investigate TCP issues in the London PoP. We apologize for the continued problems today and are making progress toward a full resolution for this location.</p>
<p><small>Mar <var data-var='date'>23</var>, <var data-var='time'>15:39:38</var> GMT+0</small><br><strong>Monitoring</strong> -
  We have implemented another round of fixes and connectivity is recovering. Please reach out to us if you are still having issues while we continue to monitor.</p>

<p>Thank you again for your patience in this matter. We will provide a full report when we confirm all is well.</p>
<p><small>Mar <var data-var='date'>23</var>, <var data-var='time'>16:46:14</var> GMT+0</small><br><strong>Resolved</strong> -
  Following emergency maintenance yesterday that required a reboot of a core router in our London facility, an Arista runtime software bug caused the router's ARP entries to gradually decay from active memory.</p>

<p>Although the router's configuration remained correct throughout, the hardware chip (ASIC) responsible for directing network traffic failed to correctly reload the address mappings after the reboot. These mappings tell the router how to reach a set of internal endpoints used for multicast traffic forwarding. With them missing from the hardware's active memory, traffic that should have been flowing through those paths was silently dropped.</p>

<p>Because the configuration itself was never corrupted, the root cause was not immediately obvious. A number of other potential causes were investigated before the true issue was identified: a desync between the router's stored configuration and what the hardware had actually loaded into memory.</p>

<p>We sincerely apologize for the impact this had on your services and for the time it took to identify the root cause. We understand how frustrating extended investigations can be, and we appreciate your patience while our engineers worked methodically through the contributing factors to reach a definitive resolution.</p>
]]>
  </content:encoded>
  <pubDate>Mon, 23 Mar 2026 09:00:04 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmn31qz0511dmt3c6v3eht9fm</link>
  <guid>https://status.as30456.net/incident/cmn31qz0511dmt3c6v3eht9fm</guid>
</item>

<item>
  <title>Quick maintenance to resolve packet loss problems in London</title>
  <description>
    Type: Maintenance
    Duration: 1 hour

    Affected Components: London, United Kingdom
    Mar 22, 15:15:19 GMT+0 - Identified - Maintenance is now in progress.
    Mar 22, 16:00:00 GMT+0 - Identified - Quick maintenance to resolve packet loss problems in London.
    Mar 22, 16:15:00 GMT+0 - Completed - Maintenance has completed successfully. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    <p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>15:15:19</var> GMT+0</small><br><strong>Identified</strong> -
  Maintenance is now in progress.</p>
<p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>16:00:00</var> GMT+0</small><br><strong>Identified</strong> -
  Quick maintenance to resolve packet loss problems in London.</p>
<p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>16:15:00</var> GMT+0</small><br><strong>Completed</strong> -
  Maintenance has completed successfully.</p>
]]>
  </content:encoded>
  <pubDate>Sun, 22 Mar 2026 16:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/cmn1wddx20p8x995p2hmz35sn</link>
  <guid>https://status.as30456.net/maintenance/cmn1wddx20p8x995p2hmz35sn</guid>
</item>

<item>
  <title>Upstream Issues</title>
  <description>
    Type: Incident
    Duration: 5 hours and 58 minutes

    Affected Components: London, United Kingdom; Dallas, Texas; Los Angeles, California; Amsterdam, Netherlands; Ashburn, Virginia
    Feb 20, 17:52:00 GMT+0 - Investigating - We are currently investigating this incident. We are seeing issues with one of our upstreams temporarily losing announcement of prefixes. We are working with them on an emergency basis. You may see blips as traffic fails over and back at times.
    Feb 20, 23:50:00 GMT+0 - Resolved - Cloudflare resolved their incident.

&lt;https://www.cloudflarestatus.com/incidents/kwy3dt82bwbt&gt;
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 5 hours and 58 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom; Dallas, Texas; Los Angeles, California; Amsterdam, Netherlands; Ashburn, Virginia</p>
    <p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>17:52:00</var> GMT+0</small><br><strong>Investigating</strong> -
  We are currently investigating this incident. We are seeing issues with one of our upstreams temporarily losing announcement of prefixes. We are working with them on an emergency basis. You may see blips as traffic fails over and back at times.</p>
<p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>23:50:00</var> GMT+0</small><br><strong>Resolved</strong> -
  Cloudflare resolved their incident.</p>

<p><a href="https://www.cloudflarestatus.com/incidents/kwy3dt82bwbt">https://www.cloudflarestatus.com/incidents/kwy3dt82bwbt</a></p>
]]>
  </content:encoded>
  <pubDate>Fri, 20 Feb 2026 17:52:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmlv8a2wn0009x6oklppemogx</link>
  <guid>https://status.as30456.net/incident/cmlv8a2wn0009x6oklppemogx</guid>
</item>

<item>
  <title>Upstream Provider Outage</title>
  <description>
    Type: Incident
    Duration: 52 minutes

    Affected Components: London, United Kingdom; Dallas, Texas; Los Angeles, California; Amsterdam, Netherlands; Ashburn, Virginia
    Feb 18, 15:38:00 GMT+0 - Investigating - One of our upstream providers appears to be having a brief outage affecting the services we use with them. We are in contact with them to get service restored, but traffic has already failed over.
    Feb 18, 16:30:07 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 52 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom; Dallas, Texas; Los Angeles, California; Amsterdam, Netherlands; Ashburn, Virginia</p>
    <p><small>Feb <var data-var='date'>18</var>, <var data-var='time'>15:38:00</var> GMT+0</small><br><strong>Investigating</strong> -
  One of our upstream providers appears to be having a brief outage affecting the services we use with them. We are in contact with them to get service restored, but traffic has already failed over.</p>
<p><small>Feb <var data-var='date'>18</var>, <var data-var='time'>16:30:07</var> GMT+0</small><br><strong>Resolved</strong> -
  This incident has been resolved.</p>
]]>
  </content:encoded>
  <pubDate>Wed, 18 Feb 2026 15:38:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmls7mtae0b0peookj4gf4or0</link>
  <guid>https://status.as30456.net/incident/cmls7mtae0b0peookj4gf4or0</guid>
</item>

<item>
  <title>Connection Drop in EU &amp; US</title>
  <description>
    Type: Incident
    Duration: 18 hours and 3 minutes

    Affected Components: London, United Kingdom, Dallas, Texas, Los Angeles, California, Amsterdam, Netherlands, Ashburn, Virginia
    Jan 10, 18:57:07 GMT+0 - Investigating - We are currently investigating this incident. We are aware of a mass disconnection in the EU and US regions, primarily affecting the ASH, LAX and LON regions.
    Jan 11, 01:01:11 GMT+0 - Identified - We identified the issue as a software bug within our filtering stack triggered by a specific set of conditions. The issue was quickly brought under control, and no further impact was seen as a result of the event this morning. We are working on a permanent fix, and will provide an update once it has been applied and monitored.
    Jan 11, 13:00:00 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 18 hours and 3 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom, Dallas, Texas, Los Angeles, California, Amsterdam, Netherlands, Ashburn, Virginia</p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:57:07&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident. We are aware of a mass disconnection in the EU and US regions, primarily affecting the ASH, LAX and LON regions.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:01:11&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We identified the issue as a software bug within our filtering stack triggered by a specific set of conditions. The issue was quickly brought under control, and no further impact was seen as a result of the event this morning. We are working on a permanent fix, and will provide an update once it has been applied and monitored.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 10 Jan 2026 18:57:07 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmk8o2ssi1eva9wwudrvoei9h</link>
  <guid>https://status.as30456.net/incident/cmk8o2ssi1eva9wwudrvoei9h</guid>
</item>

<item>
  <title>Widespread Connectivity Issues</title>
  <description>
    Type: Incident
    

    Affected Components: Transit, London, United Kingdom, Dallas, Texas, Customer Portal, Los Angeles, California, Amsterdam, Netherlands, Ashburn, Virginia
    Nov 28, 19:18:57 GMT+0 - Postmortem - At approximately 1:43 PM EST, we detected a service anomaly affecting our filter infrastructure. Upon investigation, we determined this incident was caused by a bug related to the earlier issue reported this morning: &lt;https://status.as30456.net/cmiivbxm6004wt518aafiq5dd&gt;

  
This caused a brief drop in connections. Within about 60 seconds, we pushed a hotfix and connections stabilized. Nov 28, 10:00:00 GMT+0 - Resolved - ## Incident Report: DDoS Mitigation Infrastructure Outage

**Date:** November 28, 2025  
**Duration:** Approximately 2.5 hours  
**Impact:** Full network unavailability  
  
**Summary**

On November 28, 2025, our DDoS mitigation infrastructure experienced a service interruption lasting approximately one hour. As all inbound and outbound traffic flows through our mitigation systems, this caused the network to appear entirely offline from the outside.

---

### What Happened

Several days ago, we deployed an enhancement to our rate limiting module to improve DDoS mitigation coordination across our globally distributed filtering nodes. After running without issue for several days, a latent memory corruption bug manifested under specific production conditions, causing the filtering nodes to crash. Due to the synchronized nature of our filtering infrastructure—where nodes coordinate with each other to provide consistent global protection—once the issue manifested at one location, it propagated to all locations worldwide within seconds.

---

### Root Cause

The enhancement was written in C++, as it extended an existing C++ module. The code contained a memory corruption bug that only surfaced after several days of continuous runtime under production load. This class of bug is notoriously difficult to detect in testing, as it may only manifest under specific memory states that develop over extended periods.

---

### Resolution

Upon detecting the issue, our engineering team immediately began a full rewrite of the affected module in Rust. Within one hour, we deployed the rewritten module, restoring full service.

Rust is a memory-safe language that eliminates entire categories of bugs at compile time—before code ever runs in production. Unlike C++, where programmers must manually manage memory and the compiler does not verify correctness, Rust&#039;s compiler enforces strict ownership and borrowing rules that make memory corruption, buffer overflows, and use-after-free vulnerabilities impossible in safe code. These guarantees are achieved without sacrificing performance; Rust runs as fast as C++ while providing the safety guarantees typically found only in higher-level languages.

For some time now, all new features for our DDoS mitigation systems have been developed in Rust. However, because this particular feature extended an existing C++ module, it was originally written in C++ for consistency.

---

### What we&#039;re doing to ensure this doesn&#039;t happen again

Going forward, we will cease using C++ for any new development in our codebase. All future features—including extensions to existing modules—will be written in Rust to prevent this class of issue from occurring again. We will also be replacing any existing C++ code over the next couple of weeks.

---

### Apology

We sincerely apologize for the disruption this caused to your services. We truly appreciate the confidence you place in us to keep your services online and secure. We are committed to learning from this incident and improving our systems to prevent similar issues in the future. Thank you for your understanding and continued support. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    
    <p><strong>Affected Components:</strong> Transit, London, United Kingdom, Dallas, Texas, Customer Portal, Los Angeles, California, Amsterdam, Netherlands, Ashburn, Virginia</p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;19:18:57&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Postmortem&lt;/strong&gt; -
  At approximately 1:43 PM EST, we detected a service anomaly affecting our filter infrastructure. Upon investigation, we determined this incident was caused by a bug related to the earlier issue reported this morning: &lt;https://status.as30456.net/cmiivbxm6004wt518aafiq5dd&gt;

This caused a brief drop in connections. Within about 60 seconds, we pushed a hotfix and connections stabilized.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  ## Incident Report: DDoS Mitigation Infrastructure Outage

**Date:** November 28, 2025  
**Duration:** Approximately 2.5 hours  
**Impact:** Full network unavailability  
  
**Summary**

On November 28, 2025, our DDoS mitigation infrastructure experienced a service interruption lasting approximately one hour. As all inbound and outbound traffic flows through our mitigation systems, this caused the network to appear entirely offline from the outside.

---

### What Happened

Several days ago, we deployed an enhancement to our rate limiting module to improve DDoS mitigation coordination across our globally distributed filtering nodes. After running without issue for several days, a latent memory corruption bug manifested under specific production conditions, causing the filtering nodes to crash. Due to the synchronized nature of our filtering infrastructure—where nodes coordinate with each other to provide consistent global protection—once the issue manifested at one location, it propagated to all locations worldwide within seconds.

---

### Root Cause

The enhancement was written in C++, as it extended an existing C++ module. The code contained a memory corruption bug that only surfaced after several days of continuous runtime under production load. This class of bug is notoriously difficult to detect in testing, as it may only manifest under specific memory states that develop over extended periods.

---

### Resolution

Upon detecting the issue, our engineering team immediately began a full rewrite of the affected module in Rust. Within one hour, we deployed the rewritten module, restoring full service.

Rust is a memory-safe language that eliminates entire categories of bugs at compile time—before code ever runs in production. Unlike C++, where programmers must manually manage memory and the compiler does not verify correctness, Rust&#039;s compiler enforces strict ownership and borrowing rules that make memory corruption, buffer overflows, and use-after-free vulnerabilities impossible in safe code. These guarantees are achieved without sacrificing performance; Rust runs as fast as C++ while providing the safety guarantees typically found only in higher-level languages.
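
A minimal sketch of the ownership rule described above (illustrative only, not code from the mitigation stack): once ownership of a value moves, the compiler rejects any later use of the old name at compile time, which is exactly the class of use-after-free bug that the C++ module allowed to reach production.

```rust
fn main() {
    // Ownership of the String moves from `state` to `owner` here.
    let state = String::from("filter-state");
    let owner = state;

    // Uncommenting the next line fails to compile (error E0382:
    // use of moved value), so a stale handle can never reach production.
    // println!("{}", state);

    // Only the current owner can access the value.
    println!("{}", owner); // prints "filter-state"
}
```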

For some time now, all new features for our DDoS mitigation systems have been developed in Rust. However, because this particular feature extended an existing C++ module, it was originally written in C++ for consistency.

---

### What we&#039;re doing to ensure this doesn&#039;t happen again

Going forward, we will cease using C++ for any new development in our codebase. All future features—including extensions to existing modules—will be written in Rust to prevent this class of issue from occurring again. We will also be replacing any existing C++ code over the next couple of weeks.

---

### Apology

We sincerely apologize for the disruption this caused to your services. We truly appreciate the confidence you place in us to keep your services online and secure. We are committed to learning from this incident and improving our systems to prevent similar issues in the future. Thank you for your understanding and continued support.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 28 Nov 2025 10:00:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmiivbxm6004wt518aafiq5dd</link>
  <guid>https://status.as30456.net/incident/cmiivbxm6004wt518aafiq5dd</guid>
</item>

<item>
  <title>Emergency Maintenance to Address TCP Issues in LAX</title>
  <description>
    Type: Maintenance
    Duration: 5 hours and 34 minutes

    Affected Components: Dallas, Texas, Los Angeles, California, Ashburn, Virginia
    Nov 27, 18:00:00 GMT+0 - Identified - Maintenance is in progress and has been extended to ASH and Dallas due to BGP reconvergence. Nov 27, 16:00:01 GMT+0 - Identified - Maintenance is now in progress Nov 27, 21:34:18 GMT+0 - Completed - Maintenance has completed successfully.

The maintenance was done to fix a software bug in one of our core routers that put it into a bad state. The maintenance was extended due to a side effect of the router reboot. Immediately after the router came back up we saw complete service restoration in LAX, but we began seeing BGP errors between our PoPs. Eventually, this led to multiple PoPs experiencing a brief but near-total traffic drop.

We immediately began restarting our BGP daemons on all of the core routers so that inter-pop communications could be restored.

We are monitoring the network to ensure no further issues occur, but based on the results of the reboot and our subsequent fixes, we believe we are in good order.

Thanks for your patience, and Happy Thanksgiving to those in the U.S.! 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 5 hours and 34 minutes</p>
    <p><strong>Affected Components:</strong> Dallas, Texas, Los Angeles, California, Ashburn, Virginia</p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is in progress and has been extended to ASH and Dallas due to BGP reconvergence.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:34:18&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.

The maintenance was done to fix a software bug in one of our core routers that put it into a bad state. The maintenance was extended due to a side effect of the router reboot. Immediately after the router came back up we saw complete service restoration in LAX, but we began seeing BGP errors between our PoPs. Eventually, this led to multiple PoPs experiencing a brief but near-total traffic drop.

We immediately began restarting our BGP daemons on all of the core routers so that inter-pop communications could be restored.

We are monitoring the network to ensure no further issues occur, but based on the results of the reboot and our subsequent fixes, we believe we are in good order.

Thanks for your patience, and Happy Thanksgiving to those in the U.S.!&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 27 Nov 2025 16:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/cmih0ifxr0055129ncshhhdrp</link>
  <guid>https://status.as30456.net/maintenance/cmih0ifxr0055129ncshhhdrp</guid>
</item>

<item>
  <title>Line card failure</title>
  <description>
    Type: Incident
    Duration: 1 hour and 15 minutes

    Affected Components: Los Angeles, California
    Nov 15, 11:33:43 GMT+0 - Postmortem - Line card replacement was carried out shortly after, stabilizing services.  Nov 15, 00:50:00 GMT+0 - Investigating - A line card has failed in our LAX router. We are currently investigating this incident.  Nov 15, 01:43:38 GMT+0 - Monitoring - We implemented a fix and are currently monitoring the result. Nov 15, 02:05:00 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 1 hour and 15 minutes</p>
    <p><strong>Affected Components:</strong> Los Angeles, California</p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:33:43&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Postmortem&lt;/strong&gt; -
  Line card replacement was carried out shortly after, stabilizing services.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:50:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  A line card has failed in our LAX router. We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:43:38&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We implemented a fix and are currently monitoring the result.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;02:05:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 15 Nov 2025 00:50:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cmhzm31gi025kzlw0h32h8qtd</link>
  <guid>https://status.as30456.net/incident/cmhzm31gi025kzlw0h32h8qtd</guid>
</item>

<item>
  <title>Emergency maintenance</title>
  <description>
    Type: Maintenance
    Duration: 15 minutes

    Affected Components: Dallas, Texas
    Oct 31, 21:15:00 GMT+0 - Identified - Hi everyone,

In 5 minutes we&#039;ll need to perform an emergency procedure on our primary Dallas router; traffic will likely drop for a few minutes.  
  
We apologize for the inconvenience this might bring to you.  Oct 31, 21:15:01 GMT+0 - Identified - Maintenance is now in progress Oct 31, 21:30:00 GMT+0 - Completed - Maintenance has completed successfully 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 15 minutes</p>
    <p><strong>Affected Components:</strong> Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:15:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Hi everyone,

In 5 minutes we&#039;ll need to perform an emergency procedure on our primary Dallas router; traffic will likely drop for a few minutes.

We apologize for the inconvenience this might bring to you.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:15:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 31 Oct 2025 21:15:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/cmhfcidb80192u5ib7zr51wy8</link>
  <guid>https://status.as30456.net/maintenance/cmhfcidb80192u5ib7zr51wy8</guid>
</item>

<item>
  <title>Cloudflare Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 3 days

    Affected Components: London, United Kingdom, Dallas, Texas, Los Angeles, California, Amsterdam, Netherlands, Ashburn, Virginia
    Oct 29, 05:00:00 GMT+0 - Identified - Hi all,

Cloudflare currently has a very heavy maintenance schedule: &lt;https://www.cloudflarestatus.com/&gt;

This may briefly affect connectivity to our network at times.

We&#039;re publishing this status post as a warning. Oct 29, 05:00:01 GMT+0 - Identified - Maintenance is now in progress Nov 1, 05:00:00 GMT+0 - Completed - Maintenance has completed successfully 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 days</p>
    <p><strong>Affected Components:</strong> London, United Kingdom, Dallas, Texas, Los Angeles, California, Amsterdam, Netherlands, Ashburn, Virginia</p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 29&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Hi all,

Cloudflare currently has a very heavy maintenance schedule: &lt;https://www.cloudflarestatus.com/&gt;

This may briefly affect connectivity to our network at times.

We&#039;re publishing this status post as a warning.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 29&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 29 Oct 2025 05:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/cmhbsahxj01i811bzuhxnewsj</link>
  <guid>https://status.as30456.net/maintenance/cmhbsahxj01i811bzuhxnewsj</guid>
</item>

<item>
  <title>Partial Server Outage</title>
  <description>
    Type: Incident
    Duration: 7 hours and 21 minutes

    Affected Components: London, United Kingdom
    Mar 30, 18:50:00 GMT+0 - Investigating - We are currently investigating this incident. We are aware that some client machines are facing a network outage in London. We have already dispatched technicians to the site and are working diligently to resolve this issue. Mar 30, 19:06:06 GMT+0 - Identified - A redundant line card has failed in our LON Docklands Core Router, primarily impacting 10GE devices. A replacement is currently underway. Mar 30, 19:17:08 GMT+0 - Monitoring - The affected line card has been replaced using a spare from our on-site inventory. Impacted customer services will begin restoring as the new card completes its boot process. Mar 30, 20:18:31 GMT+0 - Identified - We are aware the outage has been prolonged, and our technicians are still onsite working diligently toward a full fix. We will continue to provide updates. Mar 30, 20:36:46 GMT+0 - Monitoring - The LON Docklands Core Router has fully recovered, with all line cards now online. A supervisor switchover was performed, which caused temporary impact. We are actively monitoring to ensure all services remain stable. Mar 31, 02:11:13 GMT+0 - Resolved - We are closing this issue, as it has been 6 hours since the resolution and we have confirmed everything has remained stable since. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 7 hours and 21 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:50:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident. We are aware that some client machines are facing a network outage in London. We have already dispatched technicians to the site and are working diligently to resolve this issue.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;19:06:06&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  A redundant line card has failed in our LON Docklands Core Router, primarily impacting 10GE devices. A replacement is currently underway.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;19:17:08&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  The affected line card has been replaced using a spare from our on-site inventory. Impacted customer services will begin restoring as the new card completes its boot process.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:18:31&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are aware the outage has been prolonged, and our technicians are still onsite working diligently toward a full fix. We will continue to provide updates.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:36:46&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  The LON Docklands Core Router has fully recovered, with all line cards now online. A supervisor switchover was performed, which caused temporary impact. We are actively monitoring to ensure all services remain stable.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;02:11:13&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We are closing this issue, as it has been 6 hours since the resolution and we have confirmed everything has remained stable since.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 30 Mar 2025 18:50:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cm8w06l3d003m411zgdqxsrxh</link>
  <guid>https://status.as30456.net/incident/cm8w06l3d003m411zgdqxsrxh</guid>
</item>

<item>
  <title>Latency Spikes Caused by Upstream Carriers</title>
  <description>
    Type: Incident
    

    Affected Components: London, United Kingdom, Amsterdam, Netherlands
    Mar 21, 16:00:00 GMT+0 - Resolved - We are aware of the latency spikes that have been occurring since Friday and have been diligently routing around our upstream carriers before and during these events. Based on our metrics and the pattern of these events, it appears our upstream carriers are experiencing congestion and latency caused by the large botnet attacks that have been plaguing the internet.

We are continuing to monitor the situation, route around affected paths where possible, and deploy every available mitigation. We will update as needed. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    
    <p><strong>Affected Components:</strong> London, United Kingdom, Amsterdam, Netherlands</p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We are aware of the latency spikes that have been occurring since Friday and have been diligently routing around our upstream carriers before and during these events. Based on our metrics and the pattern of these events, it appears our upstream carriers are experiencing congestion and latency caused by the large botnet attacks that have been plaguing the internet.

We are continuing to monitor the situation, route around affected paths where possible, and deploy every available mitigation. We will update as needed.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 21 Mar 2025 16:00:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cm8opn9jf000z963jczv8cokf</link>
  <guid>https://status.as30456.net/incident/cm8opn9jf000z963jczv8cokf</guid>
</item>

<item>
  <title>London/Amsterdam Transport Outage</title>
  <description>
    Type: Incident
    Duration: 9 minutes

    Affected Components: London, United Kingdom, Amsterdam, Netherlands
    Mar 13, 15:27:00 GMT+0 - Investigating - Mar 13, 15:36:00 GMT+0 - Resolved - We experienced an outage with one of our transport providers between London and Amsterdam, which temporarily affected connectivity to our London PoP for users routed through Amsterdam. We’ve implemented adjustments to reroute traffic around the issue, and no further impact is expected. We sincerely apologize for the disruption and appreciate your patience and understanding.

Impact was between 10:27 AM CT and 10:36 AM CT (9 minutes). 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 9 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom, Amsterdam, Netherlands</p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:27:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  .&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:36:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We experienced an outage with one of our transport providers between London and Amsterdam, which temporarily affected connectivity to our London PoP for users routed through Amsterdam. We’ve implemented adjustments to reroute traffic around the issue, and no further impact is expected. We sincerely apologize for the disruption and appreciate your patience and understanding.

Impact was between 10:27 AM CT and 10:36 AM CT (9 minutes).&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 13 Mar 2025 15:27:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cm87j9fjc002g13ry0rvhagsy</link>
  <guid>https://status.as30456.net/incident/cm87j9fjc002g13ry0rvhagsy</guid>
</item>

<item>
  <title>Multiple PoP Degradation of Service</title>
  <description>
    Type: Incident
    Duration: 37 minutes

    Affected Components: London, United Kingdom, Ashburn, Virginia, Los Angeles, California, Amsterdam, Netherlands, Dallas, Texas
    Feb 15, 16:40:00 GMT+0 - Investigating - We are investigating a multi-pop degradation of service. Feb 15, 17:03:17 GMT+0 - Identified - Ashburn and Amsterdam should be recovered. We are still working diligently. Feb 15, 17:08:45 GMT+0 - Monitoring - All degraded PoPs should have recovered/are recovering. Feb 15, 17:17:21 GMT+0 - Resolved - Hi everyone. During an emergency update to our DDoS mitigation systems, an error caused the automated rollback functionality to fail. As a result, the systems entered a continuous boot loop and, due to limited connectivity, were unable to restore the latest version automatically. What would typically be a five-second operation took over 15 minutes as we manually restored the systems.
  
Although we normally do not perform updates on weekends, a critical bug required an immediate fix to prevent potential issues later today. We appreciate your understanding. The situation has now been resolved, and no further impact is expected over the remainder of the weekend.   
  
Thank you. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 37 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom, Ashburn, Virginia, Los Angeles, California, Amsterdam, Netherlands, Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:40:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are investigating a multi-pop degradation of service.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:03:17&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Ashburn and Amsterdam should be recovered. We are still working diligently.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:08:45&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  All degraded PoPs should have recovered/are recovering.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:17:21&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Hi everyone. During an emergency update to our DDoS mitigation systems, an error caused the automated rollback functionality to fail. As a result, the systems entered a continuous boot loop and, due to limited connectivity, were unable to restore the latest version automatically. What would typically be a five-second operation took over 15 minutes as we manually restored the systems.
  
Although we normally do not perform updates on weekends, a critical bug required an immediate fix to prevent potential issues later today. We appreciate your understanding. The situation has now been resolved, and no further impact is expected over the remainder of the weekend.   
  
Thank you.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 15 Feb 2025 16:40:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cm76fpl170016eqct6zqk4le7</link>
  <guid>https://status.as30456.net/incident/cm76fpl170016eqct6zqk4le7</guid>
</item>

<item>
  <title>London Partial Outage</title>
  <description>
    Type: Incident
    Duration: 16 minutes

    Affected Components: London, United Kingdom
    Feb 15, 00:50:00 GMT+0 - Monitoring - Our network PoP in London had a brief blip in traffic. We were aware of this blip immediately, and a fix has been applied. We will continue monitoring, and provide an update later. Feb 15, 00:58:00 GMT+0 - Resolved - We experienced an outage in London due to an unexpected ASIC restart on our Arista core router. We&#039;re performing a root cause analysis. We believe it is related to new capacity we had provisioned, pending setup next week. 

Everything is back online. Please feel free to reach out to our team if you have any questions. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 16 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:50:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Our network PoP in London had a brief blip in traffic. We were aware of this blip immediately, and a fix has been applied. We will continue monitoring, and provide an update later.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:58:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We experienced an outage in London due to an unexpected ASIC restart on our Arista core router. We&#039;re performing a root cause analysis. We believe it is related to new capacity we had provisioned, pending setup next week. 

Everything is back online. Please feel free to reach out to our team if you have any questions.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 15 Feb 2025 00:50:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cm75horij002gfgi00njh9muz</link>
  <guid>https://status.as30456.net/incident/cm75horij002gfgi00njh9muz</guid>
</item>

<item>
  <title>GTT Maintenance Event (Ashburn, VA)</title>
  <description>
    Type: Maintenance
    Duration: 4 hours

    Affected Components: Ashburn, Virginia
    Feb 12, 11:00:00 GMT+0 - Completed - Maintenance has completed successfully Feb 12, 07:00:01 GMT+0 - Identified - Maintenance is now in progress Feb 12, 07:00:00 GMT+0 - Identified - GTT has scheduled an emergency maintenance in Ashburn, VA (US East).

Service impact is not expected, as traffic will automatically reroute to alternative carriers. However, customers utilizing Anycast proxies may experience a brief disconnect during the traffic transition.

Start: 2025-02-12 07:00:00 GMT

End: 2025-02-12 11:00:00 GMT 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 4 hours</p>
    <p><strong>Affected Components:</strong> Ashburn, Virginia</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  GTT has scheduled an emergency maintenance in Ashburn, VA (US East).

Service impact is not expected, as traffic will automatically reroute to alternative carriers. However, customers utilizing Anycast proxies may experience a brief disconnect during the traffic transition.

Start: 2025-02-12 07:00:00 GMT

End: 2025-02-12 11:00:00 GMT.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 12 Feb 2025 07:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/cm71jsoeb0001x0d9uzdqpzyr</link>
  <guid>https://status.as30456.net/maintenance/cm71jsoeb0001x0d9uzdqpzyr</guid>
</item>

<item>
  <title>LAX Outage</title>
  <description>
    Type: Incident
    Duration: 40 minutes

    Affected Components: Los Angeles, California
    Nov 30, 14:14:41 GMT+0 - Resolved - This incident has been resolved. Nov 30, 13:34:39 GMT+0 - Investigating - We are currently investigating this incident. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 40 minutes</p>
    <p><strong>Affected Components:</strong> Los Angeles, California</p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:14:41&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:34:39&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 30 Nov 2024 13:34:39 +0000</pubDate>
  <link>https://status.as30456.net/incident/cm447s9r7000imv9k9xw3vhke</link>
  <guid>https://status.as30456.net/incident/cm447s9r7000imv9k9xw3vhke</guid>
</item>

<item>
  <title>Network Blip in Dallas</title>
  <description>
    Type: Incident
    Duration: 54 minutes

    Affected Components: Dallas, Texas
    Oct 25, 18:15:00 GMT+0 - Identified - We are currently investigating this incident. We are aware of what caused the issue, and are investigating why. We will update this status page when we have more information. Oct 25, 19:08:43 GMT+0 - Resolved - This incident has been resolved. No further issues or updates are expected.

Thank you. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 54 minutes</p>
    <p><strong>Affected Components:</strong> Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:15:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are currently investigating this incident. We are aware of what caused the issue, and are investigating why. We will update this status page when we have more information.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;19:08:43&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved. No further issues or updates are expected.

Thank you.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 25 Oct 2024 18:15:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/cm2p2qiwq0076miqcgbpcscuc</link>
  <guid>https://status.as30456.net/incident/cm2p2qiwq0076miqcgbpcscuc</guid>
</item>

<item>
  <title>ASH Scheduled Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 6 minutes

    Affected Components: Ashburn, Virginia
    Aug 21, 05:00:01 GMT+0 - Identified - Maintenance is now in progress Aug 15, 03:31:22 GMT+0 - Identified - Revised scheduled time. Aug 21, 05:00:00 GMT+0 - Identified - We are planning maintenance at 1AM ET in our ASH region. We’re applying software updates to our routers. Although we completed DFW last week and AMS before that without interruption, we still advise caution; expect a brief interruption while BGP sessions renegotiate. Aug 21, 05:06:08 GMT+0 - Completed - Maintenance is complete. We observed no impact to our network. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 6 minutes</p>
    <p><strong>Affected Components:</strong> Ashburn, Virginia</p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:31:22&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Revised scheduled time.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning maintenance at 1AM ET in our ASH region. We’re applying software updates to our routers. Although we completed DFW last week and AMS before that without interruption, we still advise caution; expect a brief interruption while BGP sessions renegotiate.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:06:08&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance is complete. We observed no impact to our network.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 21 Aug 2024 05:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clzuo8juv30889hhn5pohzane5</link>
  <guid>https://status.as30456.net/maintenance/clzuo8juv30889hhn5pohzane5</guid>
</item>

<item>
  <title>LAX Scheduled Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 18 minutes

    Affected Components: Los Angeles, California
    Aug 19, 08:00:01 GMT+0 - Identified - Maintenance is now in progress Aug 19, 08:17:37 GMT+0 - Completed - Maintenance is complete. We observed minimal impact to traffic routing via LAX. Aug 15, 03:31:56 GMT+0 - Identified - Revised scheduled time. Aug 19, 08:00:00 GMT+0 - Identified - We are planning maintenance at 1AM PT in our LAX region. We’re applying software updates to our routers. Although we completed DFW last week and AMS before that without interruption, we still advise caution; expect a brief interruption while BGP sessions renegotiate. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 18 minutes</p>
    <p><strong>Affected Components:</strong> Los Angeles, California</p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:17:37&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance is complete. We observed minimal impact to traffic routing via LAX.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:31:56&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Revised scheduled time.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning maintenance at 1AM PT in our LAX region. We’re applying software updates to our routers. Although we completed DFW last week and AMS before that without interruption, we still advise caution; expect a brief interruption while BGP sessions renegotiate.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 19 Aug 2024 08:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clzuo59tg26663han5lio9ylb0</link>
  <guid>https://status.as30456.net/maintenance/clzuo59tg26663han5lio9ylb0</guid>
</item>

<item>
  <title>LON Scheduled Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 16 minutes

    Affected Components: London, United Kingdom
    Aug 16, 01:00:00 GMT+0 - Identified - We are planning maintenance at 2AM BST in our London region. We’re applying software updates to our routers. Although we completed DFW last week and AMS before that without interruption, we still advise caution; expect a brief interruption while BGP sessions renegotiate. Aug 15, 03:32:17 GMT+0 - Identified - Revised scheduled time. Aug 16, 01:00:01 GMT+0 - Identified - Maintenance is now in progress Aug 16, 01:16:06 GMT+0 - Completed - Maintenance is complete. We observed no impact to our network. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 16 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning maintenance at 2AM BST in our London region. We’re applying software updates to our routers. Although we completed DFW last week and AMS before that without interruption, we still advise caution; expect a brief interruption while BGP sessions renegotiate.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:32:17&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Revised scheduled time.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:16:06&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance is complete. We observed no impact to our network.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 16 Aug 2024 01:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clzuo1t1h28668hhn5nn32f34l</link>
  <guid>https://status.as30456.net/maintenance/clzuo1t1h28668hhn5nn32f34l</guid>
</item>

<item>
  <title>DFW Scheduled Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 41 minutes

    Affected Components: Dallas, Texas
    Aug 13, 06:00:00 GMT+0 - Identified - We are planning for a maintenance event early tomorrow morning at 1AM Central Time in our Dallas Fort-Worth region. We expect some turbulence for around 30 minutes to 1 hour while BGP sessions renegotiate.  Aug 13, 06:40:40 GMT+0 - Completed - Maintenance is complete. We observed no impact to our network. Aug 13, 06:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 41 minutes</p>
    <p><strong>Affected Components:</strong> Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning for a maintenance event early tomorrow morning at 1AM Central Time in our Dallas Fort-Worth region. We expect some turbulence for around 30 minutes to 1 hour while BGP sessions renegotiate.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:40:40&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance is complete. We observed no impact to our network.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 13 Aug 2024 06:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clzrclike30721hnodejvro46x</link>
  <guid>https://status.as30456.net/maintenance/clzrclike30721hnodejvro46x</guid>
</item>

<item>
  <title>GTT Outage - London</title>
  <description>
    Type: Incident
    Duration: 9 hours and 53 minutes

    Affected Components: London, United Kingdom
    Jun 27, 20:26:12 GMT+0 - Investigating - We are seeing flaps on our transit BGP session with GTT in London, UK. We’re monitoring and will temporarily drop the session if interruptions continue to occur. Jun 27, 20:29:02 GMT+0 - Monitoring - We have temporarily dropped the BGP session for GTT in London, UK. Traffic routed via GTT will re-route to Amsterdam, NL. We are still peering via BGP with NTT and Liberty Global in London, UK. Jun 28, 06:19:12 GMT+0 - Resolved - We have resumed BGP advertisements over GTT in London, UK. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 9 hours and 53 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:26:12&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are seeing flaps on our transit BGP session with GTT in London, UK. We’re monitoring and will temporarily drop the session if interruptions continue to occur.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:29:02&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We have temporarily dropped the BGP session for GTT in London, UK. Traffic routed via GTT will re-route to Amsterdam, NL. We are still peering via BGP with NTT and Liberty Global in London, UK.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:19:12&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We have resumed BGP advertisements over GTT in London, UK.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 27 Jun 2024 20:26:12 +0000</pubDate>
  <link>https://status.as30456.net/incident/clxxpsmg782774xdn1fe348rao</link>
  <guid>https://status.as30456.net/incident/clxxpsmg782774xdn1fe348rao</guid>
</item>

<item>
  <title>NTT Scheduled Maintenance Event (Dallas, TX)</title>
  <description>
    Type: Maintenance
    Duration: 3 hours

    Affected Components: London, United Kingdom
    May 27, 00:00:00 GMT+0 - Identified - NTT is planning a scheduled maintenance for this time. There is the possibility of downtime during this event. May 27, 03:00:00 GMT+0 - Completed - Maintenance has completed successfully May 27, 00:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 hours</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    &lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  NTT is planning a scheduled maintenance for this time. There is the possibility of downtime during this event.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 27 May 2024 00:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clw6qad393556b2n6zcro2r5n</link>
  <guid>https://status.as30456.net/maintenance/clw6qad393556b2n6zcro2r5n</guid>
</item>

<item>
  <title>Power outage at DFW Evocative</title>
  <description>
    Type: Incident
    Duration: 17 hours and 23 minutes

    Affected Components: API, Dallas, Texas
    May 26, 12:52:59 GMT+0 - Investigating - DFW Evocative’s datacenter staff are investigating an issue with their UPS equipment. The ETA provided by the datacenter has passed, and we have not yet been given a new one. We will update as soon as possible. May 26, 11:51:54 GMT+0 - Investigating - There is a power outage in DFW Evocative related to a storm. Our DFW Equinix location is still online. We have been provided a 30-60 minute ETA by the facility. May 26, 12:28:26 GMT+0 - Investigating - DFW Evocative is investigating an issue with their UPS equipment. They are communicating with the vendor. May 26, 14:03:32 GMT+0 - Monitoring - Services are being restored and we are currently monitoring the result. May 26, 14:20:11 GMT+0 - Monitoring - As of 15 minutes ago, DFW Evocative’s power has recovered and services have started to come online. We will report when all clear. May 26, 15:34:55 GMT+0 - Monitoring - We have restored 95% of customer services in DFW Evocative. We will update when all clear. May 26, 20:09:03 GMT+0 - Monitoring - We are still working on restoring the remaining 5% of customers affected in our Dallas data center. It is isolated to a few racks at this point, and seems to have been caused by the power outage itself. We will continue updating as soon as we can. May 26, 23:06:27 GMT+0 - Monitoring - We are nearing the end of the issues caused by the power outage today. We do apologize that this has taken longer than expected, and are hoping to restore the rest of our affected clients in the next few hours. We will continue to update here when possible. Thank you for your patience. May 27, 05:14:47 GMT+0 - Resolved - We have restored 99% of services under our control, and thus we are closing this issue. Any remaining issues are still being looked into as isolated issues out of our control. We do not expect any further issues. 
The datacenter is in the process of performing a root cause analysis to figure out what went wrong during today&#039;s power outage, and they have already had the UPS vendors come out and check everything over and get it all operational again. Thank you for your patience today. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 17 hours and 23 minutes</p>
    <p><strong>Affected Components:</strong> API, Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:52:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  DFW Evocative’s datacenter staff are investigating an issue with their UPS equipment. The ETA provided by the datacenter has passed, and we have not yet been given a new one. We will update as soon as possible.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:51:54&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  There is a power outage in DFW Evocative related to a storm. Our DFW Equinix location is still online. We have been provided a 30-60 minute ETA by the facility.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:28:26&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  DFW Evocative is investigating an issue with their UPS equipment. They are communicating with the vendor.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:03:32&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Services are being restored and we are currently monitoring the result.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:20:11&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  As of 15 minutes ago, DFW Evocative’s power has recovered and services have started to come online. We will report when all clear.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:34:55&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We have restored 95% of customer services in DFW Evocative. We will update when all clear.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:09:03&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We are still working on restoring the remaining 5% of customers affected in our Dallas data center. It is isolated to a few racks at this point, and seems to have been caused by the power outage itself. We will continue updating as soon as we can.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:06:27&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We are nearing the end of the issues caused by the power outage today. We do apologize that this has taken longer than expected, and are hoping to restore the rest of our affected clients in the next few hours. We will continue to update here when possible. Thank you for your patience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:14:47&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We have restored 99% of services under our control, and thus we are closing this incident. Any remaining problems are isolated cases outside of our control and are still being investigated. We do not expect any further issues. The datacenter is performing a root cause analysis to determine what went wrong during today&#039;s power outage, and the UPS vendors have already come out, checked everything over, and gotten it all operational again. Thank you for your patience today.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 26 May 2024 11:51:54 +0000</pubDate>
  <link>https://status.as30456.net/incident/clwnhbyuj188982axoaqzvrcjis</link>
  <guid>https://status.as30456.net/incident/clwnhbyuj188982axoaqzvrcjis</guid>
</item>

<item>
  <title>NTT Scheduled Maintenance Event (London, UK)</title>
  <description>
    Type: Maintenance
    Duration: 3 hours

    Affected Components: Dallas, Texas
    May 23, 06:00:00 GMT+0 - Identified - NTT is planning a scheduled maintenance for this time. There is the possibility of downtime during this event. May 23, 06:00:01 GMT+0 - Identified - Maintenance is now in progress May 23, 09:00:00 GMT+0 - Completed - Maintenance has completed successfully 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 hours</p>
    <p><strong>Affected Components:</strong> Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  NTT is planning a scheduled maintenance for this time. There is the possibility of downtime during this event.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 23 May 2024 06:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clw6qi6vb7950b2n6gpv9ondo</link>
  <guid>https://status.as30456.net/maintenance/clw6qi6vb7950b2n6gpv9ondo</guid>
</item>

<item>
  <title>Scheduled Maintenance (Amsterdam, NL)</title>
  <description>
    Type: Maintenance
    Duration: 1 hour

    Affected Components: Amsterdam, Netherlands
    May 16, 03:00:01 GMT+0 - Identified - Maintenance is now in progress May 16, 04:00:00 GMT+0 - Completed - Maintenance has completed successfully May 16, 03:00:00 GMT+0 - Identified - We will be performing software upgrades on our routing infrastructure in Amsterdam which will require a hard reboot. Expected downtime is 15 minutes. This may be felt in neighboring PoPs as we re-route traffic between PoPs. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour</p>
    <p><strong>Affected Components:</strong> Amsterdam, Netherlands</p>
    &lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We will be performing software upgrades on our routing infrastructure in Amsterdam which will require a hard reboot. Expected downtime is 15 minutes. This may be felt in neighboring PoPs as we re-route traffic between PoPs.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 16 May 2024 03:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clw79jpa2130547bdogtzycpbyj</link>
  <guid>https://status.as30456.net/maintenance/clw79jpa2130547bdogtzycpbyj</guid>
</item>

<item>
  <title>Scheduled maintenance</title>
  <description>
    Type: Maintenance
    Duration: 57 minutes

    Affected Components: London, United Kingdom, Amsterdam, Netherlands, Ashburn, Virginia, Los Angeles, California, Dallas, Texas
    Apr 26, 05:56:40 GMT+0 - Completed - All set. Apr 26, 05:00:01 GMT+0 - Identified - Maintenance is now in progress Apr 26, 05:00:00 GMT+0 - Identified - We&#039;ll be performing upgrades to our DDoS mitigation platform.  
  
We&#039;re not expecting this to impact user traffic, but we do recommend making your players aware of this window. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 57 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom, Amsterdam, Netherlands, Ashburn, Virginia, Los Angeles, California, Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:56:40&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  All set.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We&#039;ll be performing upgrades to our DDoS mitigation platform.  
  
We&#039;re not expecting this to impact user traffic, but we do recommend making your players aware of this window.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 26 Apr 2024 05:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clvg1hvoy49810bsn6p420fwnc</link>
  <guid>https://status.as30456.net/maintenance/clvg1hvoy49810bsn6p420fwnc</guid>
</item>

<item>
  <title>GTT Planned Work - Reston, VA</title>
  <description>
    Type: Maintenance
    Duration: 4 hours

    Affected Components: Ashburn, Virginia
    Apr 4, 23:00:00 GMT+0 - Identified - GTT is planning a scheduled maintenance during the below timeframe. Service interruption is possible. Maximum service interruption expected is 15 minutes.

**Start: 2024-04-05 00:00:00 GMT**

**End: 2024-04-05 04:00:00 GMT**

**Backup Start: 2024-04-09 00:00:00 GMT**

**Backup End: 2024-04-09 04:00:00 GMT** Apr 5, 03:00:00 GMT+0 - Completed - Maintenance has completed successfully Apr 4, 23:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 4 hours</p>
    <p><strong>Affected Components:</strong> Ashburn, Virginia</p>
    &lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  GTT is planning a scheduled maintenance during the below timeframe. Service interruption is possible. Maximum service interruption expected is 15 minutes.

**Start: 2024-04-05 00:00:00 GMT**

**End: 2024-04-05 04:00:00 GMT**

**Backup Start: 2024-04-09 00:00:00 GMT**

**Backup End: 2024-04-09 04:00:00 GMT**&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 4 Apr 2024 23:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clu8v53e0113985b8myz3ljlkxl</link>
  <guid>https://status.as30456.net/maintenance/clu8v53e0113985b8myz3ljlkxl</guid>
</item>

<item>
  <title>Mass Disconnections in EU and US</title>
  <description>
    Type: Incident
    Duration: 1 day, 6 hours and 7 minutes

    Affected Components: London, United Kingdom, Ashburn, Virginia, Dallas, Texas
    Mar 12, 05:02:57 GMT+0 - Investigating - We are currently investigating this incident. We are aware of some disconnects affecting a large number of users in the last 30 minutes.

We will update ASAP. Mar 13, 00:04:42 GMT+0 - Resolved - This incident is confirmed to be resolved. A post-mortem will follow shortly. We appreciate your patience during this time. Mar 13, 02:44:27 GMT+0 - Investigating - We&#039;re currently investigating a disconnection event similar to last night&#039;s. We&#039;re already working on a fix, and will update everyone soon. Thanks! Mar 13, 05:04:01 GMT+0 - Identified - The issue has been narrowed down, and we are diligently working on a permanent and full fix. We hope to have a more positive update to share soon. Mar 13, 08:26:35 GMT+0 - Monitoring - We have several engineers working on pushing out a full fix ASAP. Updates will follow as we have them. Mar 13, 11:10:16 GMT+0 - Resolved - We have pushed fixes to all locations and are monitoring proactively to ensure all services fully stabilize and remain stable. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 1 day, 6 hours and 7 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom, Ashburn, Virginia, Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:02:57&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident. We are aware of some disconnects affecting a large number of users in the last 30 minutes.

We will update ASAP.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:04:42&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident is confirmed to be resolved. A post-mortem will follow shortly. We appreciate your patience during this time.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;02:44:27&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We&#039;re currently investigating a disconnection event similar to last night&#039;s. We&#039;re already working on a fix, and will update everyone soon. Thanks!&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:04:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The issue has been narrowed down, and we are diligently working on a permanent and full fix. We hope to have a more positive update to share soon.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:26:35&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We have several engineers working on pushing out a full fix ASAP. Updates will follow as we have them.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:10:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We have pushed fixes to all locations and are monitoring proactively to ensure all services fully stabilize and remain stable.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 12 Mar 2024 05:02:57 +0000</pubDate>
  <link>https://status.as30456.net/incident/cltnwp5vx36991buoajbrl5r76</link>
  <guid>https://status.as30456.net/incident/cltnwp5vx36991buoajbrl5r76</guid>
</item>

<item>
  <title>Packet Loss / Disconnects</title>
  <description>
    Type: Incident
    Duration: 8 hours and 9 minutes

    Affected Components: London, United Kingdom, Ashburn, Virginia, Dallas, Texas
    Mar 4, 02:43:15 GMT+0 - Resolved - This incident has been resolved. Mar 3, 18:33:51 GMT+0 - Investigating - Good Morning/Afternoon/Evening, 

We&#039;re aware of the packet loss events and disconnects that are currently happening, and we&#039;ve been looking into them. We&#039;re in the process of implementing mitigations, and we will have things calmed down ASAP. 

Thanks for your patience. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 8 hours and 9 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom, Ashburn, Virginia, Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;02:43:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 3&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:33:51&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Good Morning/Afternoon/Evening, 

We&#039;re aware of the packet loss events and disconnects that are currently happening, and we&#039;ve been looking into them. We&#039;re in the process of implementing mitigations, and we will have things calmed down ASAP. 

Thanks for your patience.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 3 Mar 2024 18:33:51 +0000</pubDate>
  <link>https://status.as30456.net/incident/cltbupbdf28412biogddn28jlw</link>
  <guid>https://status.as30456.net/incident/cltbupbdf28412biogddn28jlw</guid>
</item>

<item>
  <title>Outage in Ashburn Evoque</title>
  <description>
    Type: Incident
    Duration: 3 hours and 44 minutes

    Affected Components: Ashburn, Virginia
    Feb 19, 17:44:59 GMT+0 - Investigating - We&#039;re currently investigating an outage in Evoque. We should have an update shortly. Feb 19, 18:08:51 GMT+0 - Monitoring - Services in Ashburn were restored shortly after this incident was posted. We are currently monitoring. Feb 19, 21:28:36 GMT+0 - Resolved - This issue has been resolved, as we have not experienced any problems since the 10-minute downtime. No further issues are expected. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 3 hours and 44 minutes</p>
    <p><strong>Affected Components:</strong> Ashburn, Virginia</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:44:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We&#039;re currently investigating an outage in Evoque. We should have an update shortly.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:08:51&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Services in Ashburn were restored shortly after this incident was posted. We are currently monitoring.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:28:36&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This issue has been resolved, as we have not experienced any problems since the 10-minute downtime. No further issues are expected.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 19 Feb 2024 17:44:59 +0000</pubDate>
  <link>https://status.as30456.net/incident/clst88eij85293ccoo6pkyzh1y</link>
  <guid>https://status.as30456.net/incident/clst88eij85293ccoo6pkyzh1y</guid>
</item>

<item>
  <title>Moving AMS -&gt; LON traffic to private transport capacity</title>
  <description>
    Type: Maintenance
    Duration: 7 minutes

    Affected Components: London, United Kingdom, Amsterdam, Netherlands
    Feb 17, 06:00:01 GMT+0 - Identified - Maintenance is now in progress Feb 17, 06:00:00 GMT+0 - Identified - Hi everyone,  
  
We&#039;ll be moving our traffic between London and Amsterdam (in both directions) over to private transport capacity rather than the tunneled system we&#039;ve historically relied on.  
  
This should help keep things smooth during the busiest times of the day.  
  
No impact is expected while we route traffic over. Feb 17, 06:06:39 GMT+0 - Completed - Complete. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 7 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom, Amsterdam, Netherlands</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 17&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 17&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Hi everyone,  
  
We&#039;ll be moving our traffic between London and Amsterdam (in both directions) over to private transport capacity rather than the tunneled system we&#039;ve historically relied on.  
  
This should help keep things smooth during the busiest times of the day.  
  
No impact is expected while we route traffic over.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 17&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:06:39&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Complete.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 17 Feb 2024 06:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clspmzr6g57118balh3fc7ez8d</link>
  <guid>https://status.as30456.net/maintenance/clspmzr6g57118balh3fc7ez8d</guid>
</item>

<item>
  <title>Outage in VA/DFW</title>
  <description>
    Type: Incident
    Duration: 17 hours and 38 minutes

    Affected Components: Ashburn, Virginia, Dallas, Texas
    Feb 7, 04:03:59 GMT+0 - Investigating - We are currently investigating this incident. Feb 7, 04:40:49 GMT+0 - Identified - No updates to share yet. We are still investigating the issue. Feb 7, 05:20:48 GMT+0 - Identified - We have located the issue and implemented a temporary fix. As of now, traffic previously routing through Ashburn has been diverted to the next closest PoP for each user. We are working on a full-scale solution, and will have an update ASAP.

For now, things should be stable, with slightly increased latency possible. We will also be sharing a full-scale RFO in the next few days. Feb 7, 21:41:41 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 17 hours and 38 minutes</p>
    <p><strong>Affected Components:</strong> Ashburn, Virginia, Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:03:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:40:49&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  No updates to share yet. We are still investigating the issue.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:20:48&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have located the issue and implemented a temporary fix. As of now, traffic previously routing through Ashburn has been diverted to the next closest PoP for each user. We are working on a full-scale solution, and will have an update ASAP.

For now, things should be stable, with slightly increased latency possible. We will also be sharing a full-scale RFO in the next few days.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:41:41&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved..&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 7 Feb 2024 04:03:59 +0000</pubDate>
  <link>https://status.as30456.net/incident/clsb9md3m16193agn67p64xhem</link>
  <guid>https://status.as30456.net/incident/clsb9md3m16193agn67p64xhem</guid>
</item>

<item>
  <title>Portal + API Degradation </title>
  <description>
    Type: Incident
    Duration: 19 days, 2 hours and 6 minutes

    Affected Components: Customer Portal
    Feb 14, 04:56:05 GMT+0 - Resolved - This incident has been resolved. Jan 26, 02:50:00 GMT+0 - Investigating - Portal + API are currently degraded due to maintenance and changes on backend cluster hardware.

 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 19 days, 2 hours and 6 minutes</p>
    <p><strong>Affected Components:</strong> Customer Portal</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:56:05&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;02:50:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Portal + API are currently degraded due to maintenance and changes on backend cluster hardware.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 26 Jan 2024 02:50:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/clrim6xoa87312b8ogxgxppglr</link>
  <guid>https://status.as30456.net/incident/clrim6xoa87312b8ogxgxppglr</guid>
</item>

<item>
  <title>GTT Major Outage in London</title>
  <description>
    Type: Incident
    Duration: 2 days, 19 hours and 50 minutes

    Affected Components: London, United Kingdom
    Jan 25, 17:30:00 GMT+0 - Resolved - GTT is reporting a Major Outage in London due to a faulty device.

Reported start time is: 2024-01-25 14:00:00 GMT

We will provide updates as they are received. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 days, 19 hours and 50 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  GTT is reporting a Major Outage in London due to a faulty device.

Reported start time is: 2024-01-25 14:00:00 GMT

We will provide updates as they are received.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 25 Jan 2024 17:30:00 +0000</pubDate>
  <link>https://status.as30456.net/incident/clrtf9lgu94492binc615ssgku</link>
  <guid>https://status.as30456.net/incident/clrtf9lgu94492binc615ssgku</guid>
</item>

<item>
  <title>Small Disconnect in DFW Region</title>
  <description>
    Type: Incident
    

    Affected Components: Dallas, Texas
    Jan 7, 04:26:48 GMT+0 - Resolved - Dear customers,

In an attempt to resolve the packet loss issues we&#039;ve been seeing for the last few days in Dallas, we made some routing changes based on the data we have, but unfortunately they caused heavy BGP turbulence, resulting in a brief disconnect on certain connections in Dallas.

We do not expect any further disconnects, and we should hopefully have a fix on our end for most of the packet loss issues you&#039;ve been seeing. Thanks for your patience. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    
    <p><strong>Affected Components:</strong> Dallas, Texas</p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:26:48&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Dear customers,

In an attempt to resolve the packet loss issues we&#039;ve been seeing for the last few days in Dallas, we made some routing changes based on the data we have, but unfortunately they caused heavy BGP turbulence, resulting in a brief disconnect on certain connections in Dallas.

We do not expect any further disconnects, and we should hopefully have a fix on our end for most of the packet loss issues you&#039;ve been seeing. Thanks for your patience.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 7 Jan 2024 04:26:48 +0000</pubDate>
  <link>https://status.as30456.net/incident/clr2zsas944100b6na8a6rx21x</link>
  <guid>https://status.as30456.net/incident/clr2zsas944100b6na8a6rx21x</guid>
</item>

<item>
  <title>Configuration Change in London</title>
  <description>
    Type: Maintenance
    Duration: 30 minutes

    Affected Components: London, United Kingdom
    Nov 14, 01:00:01 GMT+0 - Identified - Maintenance is now in progress Nov 14, 01:00:00 GMT+0 - Identified - We are planning a scheduled maintenance at 1AM GMT on Tuesday, November 14th. This maintenance will not last more than 30 minutes. It could cause a flap and disconnection of traffic. Nov 14, 01:30:00 GMT+0 - Completed - Maintenance has completed successfully 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 30 minutes</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning a scheduled maintenance at 1AM GMT on Tuesday, November 14th. This maintenance will not last more than 30 minutes. It could cause a flap and disconnection of traffic.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 14 Nov 2023 01:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/clow1t9c414812baoe42y6k93k</link>
  <guid>https://status.as30456.net/maintenance/clow1t9c414812baoe42y6k93k</guid>
</item>

<item>
  <title>Activating Peering Capacity to Liberty Global</title>
  <description>
    Type: Maintenance
    Duration: 1 hour

    Affected Components: London, United Kingdom
    Jul 11, 04:00:00 GMT+0 - Completed - Maintenance has completed successfully Jul 11, 03:00:00 GMT+0 - Identified - Hi there,

We&#039;ll be provisioning capacity to Liberty Global B.V. in London to improve residential connectivity in the EU region. Upon activation, a subset of users may temporarily disconnect. Total maintenance will be less than 5 minutes. Jul 11, 03:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour</p>
    <p><strong>Affected Components:</strong> London, United Kingdom</p>
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Hi there,

We&#039;ll be provisioning capacity to Liberty Global B.V. in London to improve residential connectivity in the EU region. Upon activation, a subset of users may temporarily disconnect. Total maintenance will be less than 5 minutes.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 11 Jul 2023 03:00:00 +0000</pubDate>
  <link>https://status.as30456.net/maintenance/cljxn0t6o111803xmojcjm7jrc3</link>
  <guid>https://status.as30456.net/maintenance/cljxn0t6o111803xmojcjm7jrc3</guid>
</item>

  </channel>
  </rss>