
Service Status

Check our service status and past incidents

Overall Service Status

100% Up

All services are operating normally at this time.

Last Incident


Status: Resolved - Posted at: 11th October 2024 - 13:07

All Regions

Data Center        Current   Last Day   Last Month   Last Year
Dallas, TX         Up        100%       100%         100%
Seattle, WA        Up        100%       100%         100%
Piscataway, NJ     Up        100%       100%         100%
Los Angeles, CA    Up        100%       100%         100%
London, UK         Up        100%       100%         100%
Maidenhead, UK     Up        100%       100%         100%
The Netherlands    Up        100%       100%         100%

Past Incidents

Resolved

Network-wide port blocks

Posted at: 6th October 2018 - 14:02

Due to recent reflection exploits involving ports 111 (portmap/NFS), 161 (snmpd), and 389 (ldap), we've implemented a network-wide filter at our network edge to prevent services running on these ports from being exploited and used to attack other computers on the internet.

We understand that this may be disruptive to your usage, and we can provide guidance on how to adjust these services to use different ports so that you may continue to enjoy uninterrupted connectivity.

These services are not installed on our VPS by default, so if you do not recognize any of the above services or ports, you're likely not affected by this action and this message is informational only.
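For reference, a quick way to double-check whether anything on your VPS is listening on the filtered ports is sketched below. This is an illustrative example only (not an official tool), assuming a Linux VPS with Python 3: it simply tries to bind each port and treats "address already in use" as a sign that a local service is running there. Run it as root, since these are privileged ports.

```python
#!/usr/bin/env python3
"""Rough probe for local listeners on the filtered ports (111, 161, 389).

Illustrative only: a bind failure with EADDRINUSE suggests a local service
is already listening on that port/protocol; any other error is reported
as untestable. Run as root so the privileged ports can be bound.
"""
import errno
import socket

FILTERED = {111: "portmap/NFS", 161: "snmpd", 389: "ldap"}

for port, name in FILTERED.items():
    for proto, label in ((socket.SOCK_STREAM, "tcp"), (socket.SOCK_DGRAM, "udp")):
        s = socket.socket(socket.AF_INET, proto)
        try:
            s.bind(("0.0.0.0", port))
        except OSError as exc:
            if exc.errno == errno.EADDRINUSE:
                print(f"{label}/{port} ({name}) appears to be in use -- you may be affected")
            else:
                print(f"{label}/{port}: could not test ({exc})")
        else:
            print(f"{label}/{port} ({name}) is free -- likely not affected")
        finally:
            s.close()
```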

Resolved

New Jersey Maintenance

Posted at: 4th October 2018 - 15:58

Thursday, 04 October 2018, 15:58 - We've just been briefed that our provider in New Jersey has scheduled a 30 minute maintenance window that may have service impacts for our clients on the nj1 node. See notice below:

Event Type: Network Upgrade
Start Time: 10/9/2018 12AM EST
End Time: 10/9/2018 12:30AM EST
Location(s): Piscataway, NJ
Device(s): c8-10-b4-1.pnj1
Client Affecting: Yes

Event Summary: Important maintenance will be performed on one or more devices that may affect network connectivity to your services.

Resolved

Intermittent connectivity affecting all Dallas nodes

Posted at: 27th September 2018 - 18:24

Thursday, 27 September 2018, 18:24 - We're aware of the issue, and are attempting to follow up with our connectivity provider. Further updates will be posted to this issue as we receive them.
Thursday, 27 September 2018, 18:33 - Our provider has confirmed that this was a misconfigured network-wide filter applied on our upstream port(s). The rogue change has now been rolled back, and services are back online.

We apologize for any inconvenience caused.

Resolved

NJ Maintenance

Posted at: 24th September 2018 - 02:27

Our New Jersey Datacenter is currently performing scheduled maintenance.

Event Type: Network Upgrade
Start Time: 2018-09-24 - 03:00:00 AM EST
End Time: 2018-09-24 - 05:00:00 AM EST
Location(s): NYC Metro Datacenter 
Device(s): Core/Edge Network Devices
Client Affecting: Yes

Event Summary: Important maintenance will be performed on one or more devices that may cause intermittent network connectivity issues to your services. We do not anticipate extended outages for this network maintenance.

We will continue to post updates as we receive them.
Resolved

ukhighram3 offline

Posted at: 23rd August 2018 - 06:21

Thursday, 23 August 2018, 06:21 - We have just been made aware of an event concerning this node. Further investigation is ongoing, more details will be posted to this issue as they become available.
Thursday, 23 August 2018, 06:38 - Our datacenter has confirmed that they're experiencing a network event that's affecting this server. An explicit ETA has not been communicated, but we remain hopeful that service will be restored soon.
Thursday, 23 August 2018, 08:40 - This remains in progress. We will update this as we have more information. 
Thursday, 23 August 2018, 10:01 - Datacenter has confirmed that this is still in progress. 
Thursday, 23 August 2018, 11:22 - An update from the datacenter has specified that a PDU in the rack where this server is located is malfunctioning, and they are working on moving the server to a temporary rack to get it back online. 
Thursday, 23 August 2018, 13:49 - We continue to wait for the server to be moved to a different rack to be put back online. 
Thursday, 23 August 2018, 15:15 - We have noted that the server is back online, and we're waiting for confirmation of stability. 
Thursday, 23 August 2018, 15:45 - Confirmed that the server will remain online for the foreseeable future. We will be providing one month of credit for the downtime. All VPS are online and running, so if your server is still offline or inaccessible, please reach out to us and we'll investigate further. 
Resolved

dalstorage4 disk array rebuild

Posted at: 7th August 2018 - 00:08

Tuesday, 07 August 2018, 00:08 - Earlier today, our dalstorage4 node suffered a disk failure and it is currently operating in a degraded state. There unfortunately isn't enough I/O to go around, so I/O-bound programs are taking a very long time to respond (due to the I/O wait). We're investigating; further updates will be posted to this issue. Customers not on the dalstorage4 node are not affected by this issue.
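If you're on this node and want to confirm that the slowness you're seeing is I/O wait rather than CPU load, a rough check of the iowait share from /proc/stat is sketched below. This is illustrative only (not an official tool) and assumes a Linux guest with Python 3.

```python
#!/usr/bin/env python3
"""Rough iowait check: samples /proc/stat twice and prints the share of
CPU time spent waiting on I/O over the interval. Illustrative only."""
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]  # aggregate "cpu" line
    return [int(x) for x in fields]

before = cpu_times()
time.sleep(5)
after = cpu_times()

deltas = [b - a for a, b in zip(before, after)]
iowait = deltas[4]  # field 5 of the cpu line is iowait
print(f"iowait over the last 5s: {100.0 * iowait / sum(deltas):.1f}% of CPU time")
```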

Wednesday, 08 August 2018, 09:03 - This situation remains in progress. Your VPS should be online and accessible, though, since the I/O contention seems to come in short, infrequent waves. If you're having an issue with access beyond being slower than usual, please open a ticket and we can investigate. 

Thursday, 09 August 2018, 08:23 - This continues to be in progress. Reboots via the client area/VM control system have been disabled, as rebooting won't fix this situation and will only contribute to more I/O contention. If you need your VPS rebooted and are truly unable to do so otherwise, please ticket in and we'll review your request. 

Friday, 10 August 2018, 11:46 - We're going to shut down the server to replace the drives and let it rebuild without VM contention; this will take less time than if we left VMs running, resulting in less net time back to normal. This shouldn't have much of an impact on actual usage given the speeds of the server as it stands. The current estimate is 24 hours for the array rebuild.

Saturday, 11 August 2018, 06:25 - All processes completed successfully and we have booted the VPSes back up. Everything should be healthy at this point.
Resolved

Dallas connectivity

Posted at: 24th July 2018 - 10:29

Tuesday, 24 July 2018, 10:19 - the Dallas connectivity issue is acknowledged and being investigated now.
Resolved

nj1 downtime

Posted at: 11th July 2018 - 10:45

11 July 2018, 10:45 - Planned network maintenance for this server has begun

11 July 2018, 11:25 - Planned network maintenance is complete and the server is back online
Resolved

Connectivity Problem in Dallas

Posted at: 30th May 2018 - 12:21

Wednesday, 30 May 2018, 12:21 - we are aware of the Dallas connectivity issue. The datacenter's status can be followed here: https://twitter.com/InceroStatus - we'll update this when we hear back. 

Wednesday, 30 May 2018, 12:29 - connectivity restored
Resolved

nj1 connectivity

Posted at: 22nd May 2018 - 18:12

Tuesday, 22 May 2018, 18:00 - we're investigating an outage of our NJ1 server. We'll update this announcement as we have more information.

Tuesday, 22 May 2018, 18:40 - the server is back online with no further issues noted.
Resolved

DFW intermittent network issues

Posted at: 21st May 2018 - 07:49

 Monday, 21 May 2018, 07:49 - We're aware of the intermittent network issues plaguing our Dallas (DFW) POP. This seems to be due to heavy packet loss on our upstream's end. We're collaborating with our upstream to mitigate this, thank you for your patience.
Resolved

Kernel panic on dalhighram10

Posted at: 2nd May 2018 - 02:13

We have experienced a kernel panic on dalhighram10 and we are currently looking into it.

Update: All the VPSes have booted back up again.
Resolved

Dallas network errors

Posted at: 28th February 2018 - 10:49

Wednesday, 28 February 2018, 10:49 - We're investigating a Dallas connectivity issue: we are experiencing high packet loss on the network, and our datacenter is working on the issue. We'll update this announcement as we have more information.

Wednesday, 28 February 2018, 11:41 - The root cause has been determined: outbound network attacks from insecure memcached installations. More info here: https://blog.cloudflare.com/memcrashed-major-amplification-attacks-from-port-11211/ - we're looking at how to address them now. 

Wednesday, 28 February 2018, 12:28 - This has been mitigated; the root cause was a new memcached amplification attack. If you're still seeing connectivity issues, or truly need internet-accessible memcached for some reason, please ticket in. 
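If you run memcached on your VPS and want to confirm it isn't reachable from the internet, a rough check is sketched below; run it from a machine outside the VPS and point it at your VPS's public IP (the address in the script is only a placeholder). Binding memcached to 127.0.0.1 or firewalling port 11211 is the usual fix if it is exposed. This is an illustrative sketch, not an official tool.

```python
#!/usr/bin/env python3
"""Quick exposure check: can memcached's port be reached from outside?

Illustrative only. Run from a machine *outside* the VPS; a successful TCP
connection on 11211 means the port is publicly reachable and should be
restricted to localhost or firewalled.
"""
import socket
import sys

# Placeholder address -- replace with your VPS's public IP.
host = sys.argv[1] if len(sys.argv) > 1 else "203.0.113.10"

try:
    with socket.create_connection((host, 11211), timeout=3):
        print(f"Port 11211 on {host} is reachable -- restrict memcached to 127.0.0.1 or firewall it.")
except OSError:
    print(f"Port 11211 on {host} did not accept a connection -- memcached does not appear exposed.")
```
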
Resolved

nj1 network connectivity

Posted at: 25th February 2018 - 13:48

Sunday, 25 February 2018, 13:48 - we're investigating an outage of our NJ1 server. We'll update this announcement as we have more information. 

Sunday, 25 February 2018, 14:52 - the server is back online, and we're awaiting confirmation that the network is stable
Resolved

dalstorage1 unreachable

Posted at: 20th February 2018 - 03:35

Tuesday, 20 February 2018, 02:55 - We've been made aware of an event on this node and are currently investigating.
Tuesday, 20 February 2018, 05:16 - The node is back up; this was due to a storage event. VMs are now booting back up. Please expect to be contacted individually about remedies.
Resolved

dalhighram25 unreachable

Posted at: 17th February 2018 - 19:40

The node is currently unreachable from the panel and the VMs are running extremely slowly. We're looking into the issue, but it has already taken more time than expected. We apologize for the inconvenience.

Update: The situation is under control now, please open a ticket if you're still having issues.
Resolved

Dallas network problems

Posted at: 6th February 2018 - 11:38

Tuesday, 06 February 2018 11:34 - We're receiving reports of an unexpected partial routing failure or similar network issue. Please don't stop/start/reboot your VPS; it won't fix the issue. When opening a ticket to inform us that your VPS is offline, please include a plain-text (TEXT) traceroute between yourself and your service. This will assist us with resolving the issue. 
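If you're unsure how to capture a plain-text traceroute, the sketch below shows one way to do it. It is illustrative only: it assumes the standard traceroute tool (tracert on Windows) is installed, and the target address is a placeholder to replace with your VPS IP.

```python
#!/usr/bin/env python3
"""Capture a plain-text traceroute to attach to a ticket.

Illustrative only: shells out to the system traceroute (or tracert on
Windows) and writes the output to trace.txt in the current directory.
"""
import platform
import subprocess
import sys

# Placeholder address -- replace with your VPS's IP.
target = sys.argv[1] if len(sys.argv) > 1 else "203.0.113.10"
cmd = ["tracert", target] if platform.system() == "Windows" else ["traceroute", target]

result = subprocess.run(cmd, capture_output=True, text=True)
with open("trace.txt", "w") as f:
    f.write(result.stdout)
print(f"Saved traceroute to {target} in trace.txt -- attach this file to your ticket.")
```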

Tuesday, 06 February 2018 11:59 - The issue should be rectified now. We'll update this announcement with an RFO.

Friday, 02 March 2018, 10:46 - Formal RFO is available at the following link: https://d.pr/f/RN2AjO

Resolved

ukhighram2 issues

Posted at: 2nd February 2018 - 19:46

ukhighram2 is currently facing issues. We are working on the problem at the moment.

Update: The server is back online now. We are investigating the reason for the lock up.
Resolved

Storage VPS Line Meltdown and Spectre Maintenance

Posted at: 14th January 2018 - 07:06

We are going to reboot storage VPS nodes to patch them against Meltdown and Spectre vulnerabilities.

We will update this announcement as we have more information.

09:56 AM: All the nodes have been rebooted and all VPSes are back online.
Resolved

Reboot for Meltdown / Spectre patching

Posted at: 6th January 2018 - 07:58

Today, Saturday, January 6th, 2018, we will be rebooting the host nodes serving all VPSDime Linux VPS to boot into a new kernel patched against the recently discovered Meltdown and Spectre exploits.

No doubt you have seen news in recent days that two vulnerabilities have been discovered affecting virtually all CPUs, dating back to the introduction of speculative execution in modern processors over 20 years ago. Although both attacks are based on the same general principle, Meltdown allows malicious programs to gain access to higher-privileged parts of a computer's memory, while Spectre steals data from the memory of other applications running on a machine. Nearly every modern computer in use is vulnerable at some level.

Our virtualization vendor for the Linux VPS product has recently produced a patch to mitigate these two issues, and this patch is generally accepted by the computing community at large as an effective mitigation. At this time there is no evidence that these exploits have been used to gain access to other systems, either on our host nodes or elsewhere.

Due to the serious nature of these attacks, we are updating all host nodes and rebooting them immediately to boot into the new kernel. Note that kernel live-patching vendors are either taking a very long time to implement a live patch (CloudLinux's KernelCare) or have stated directly that a live patch won't be available (Canonical's Livepatch). Therefore, a reboot of each host node is mandatory in this situation.
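Once your host node has been rebooted onto the patched kernel, kernels that carry the reporting patches expose a mitigation summary under /sys/devices/system/cpu/vulnerabilities. The sketch below prints it; this is illustrative only, and older kernels simply won't have that directory.

```python
#!/usr/bin/env python3
"""Print the kernel's view of Meltdown/Spectre mitigation status.

Illustrative only: the sysfs directory exists only on kernels that carry
the vulnerability-reporting patches (roughly 4.15+ or distro backports).
"""
import glob
import os

base = "/sys/devices/system/cpu/vulnerabilities"
if not os.path.isdir(base):
    print("This kernel does not expose mitigation status; check with your distribution instead.")
else:
    for path in sorted(glob.glob(os.path.join(base, "*"))):
        with open(path) as f:
            print(f"{os.path.basename(path)}: {f.read().strip()}")
```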

We apologize for the inconvenience this will cause; however, please know that we are committed to the utmost stability and security of your virtual machines.

There are some key points to note:

- We'll use this page to make further announcements regarding this reboot action, so please check back here for updates.

- All VPS will be gracefully shut down, similar to how the "shutdown -h now" or "halt" command works inside your VPS' operating system.

- This update and reboot will make no changes to your VPS itself: nothing about your configuration, software, applications, or data will be touched. We will check that all VPS are running after the reboot; however, if an application or service you've installed is not responding, please log in to your VPS and confirm your services are running before opening a ticket.

- Barring any unexpected issues, we expect each host node to take no more than 30 minutes to reboot. This is the time for all VPS to shut down and for the host node to complete its reboot; your VPS may take a bit longer to boot after the host node comes back online. Your patience is appreciated.

Please note that we are unable to provide an exact time frame for when your VPS will be shut down, because we are applying the security patch one host node at a time. Again, we apologize for the inconvenience of the reboot.

If you have any questions or concerns regarding this, please click here and open a ticket with us.


============

We're working through the reboots now.

If you view your service and see "This node is locked", the node has not yet been rebooted.

If you view your service and see the usual start/stop/reboot buttons and traffic/memory/load graphs, the reboot action has been completed for that node and any services on that node will remain online.

============

All but two of the nodes in Dallas, TX were rebooted on Saturday, January 6th, 2018.

============

Sunday January 7th, 2018:

We are continuing to reboot the nodes in our other locations today.
