
Service Status

Check our service status and past incidents

Overall Service Status

100% Up

All services are operating normally at this time.

Last Incident


Status: Resolved - Posted at: 10th August 2021 - 16:02

All Regions

Data Center        Current   Last Day   Last Month   Last Year
Dallas, TX         Up        100%       100%         100%
Seattle, WA        Up        100%       100%         100%
Piscataway, NJ     Up        100%       100%         100%
Los Angeles, CA    Up        100%       100%         100%
London, UK         Up        100%       100%         100%
Maidenhead, UK     Up        100%       100%         100%
The Netherlands    Up        100%       100%         100%

Past Incidents

Resolved

Reboot for Meltdown / Spectre patching

Posted at: 6th January 2018 - 07:58

Today, Saturday January 6th, 2018, we will be rebooting host nodes serving all VPSDime Linux VPS to execute a new kernel patched against the recently discovered Meltdown and Spectre exploits.

No doubt in recent days you have seen news that two vulnerabilities have been discovered in nearly all CPUs, dating back to the introduction of speculative execution in modern processors over 20 years ago. Although both attacks are based on the same general principle, Meltdown allows malicious programs to gain access to higher-privileged parts of a computer's memory, while Spectre steals data from the memory of other applications running on a machine. Nearly every modern computer in use is vulnerable at some level.

Our virtualization vendor for the Linux VPS product has produced a patch to mitigate these two issues, and the patch has been generally accepted by the computing community at large as an effective mitigation. At this time there is no evidence that these exploits have been used to gain access to other systems, either on our host nodes or elsewhere.

Due to the serious nature of these attacks, we are updating all host nodes and rebooting them immediately to boot into the new kernel. Note that vendors of kernel live patching are either taking a very long time to implement a live patch (CloudLinux's KernelCare) or have stated outright that a live patch won't be available (Canonical's Livepatch). A reboot of each host node is therefore mandatory in this situation.

We apologize for the inconvenience this will cause; however, please know that we are committed to the utmost stability and security of your virtual machines.

There are some key points to note:

- We'll use this page to make further announcements regarding this reboot action, so please check back here for updates.

- All VPS will be gracefully shut down, similar to how the "shutdown -h now" or "halt" command works inside your VPS' operating system.

- This update and reboot will make no changes to your VPS whatsoever: your configuration, software, applications, and data will all be left untouched. We will check that every VPS is running after the reboot. However, if an application or service you've installed is not responding, please log in to your VPS, troubleshoot your software, and make sure your services are running before opening a ticket.

- Barring any unexpected issues, we expect no more than 30 minutes per host node to reboot. This is the time for VPS to shut down and host node to complete a reboot; your VPS may take a bit longer to boot after the host node comes back online. Your patience is appreciated.

Please note that we are unable to provide an exact time frame for your VPS to be shut down, because we are applying the security patch one host node at a time. Again, we apologize for the inconvenience of a reboot.
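Since the host reboot behaves like a clean "shutdown -h now" inside each VPS, the main preparation on the customer side is making sure your services start automatically at boot. As a minimal sketch for systemd-based distributions (the service name, description, and binary path below are hypothetical placeholders, not anything we provision for you):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example unit
[Unit]
Description=Example service that should survive host reboots
# Wait until the network is actually up before starting
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
# Restart the process automatically if it crashes
Restart=on-failure
RestartSec=5

[Install]
# Start at normal multi-user boot
WantedBy=multi-user.target
```

After placing the unit file, `systemctl enable myapp.service` registers it to start at boot, and `systemctl is-active myapp.service` lets you verify it once the node is back online.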

If you have any questions or concerns regarding this, please click here and open a ticket with us.


============

We're working through the reboots now.

If you view your service and see "This node is locked", the node has not yet been rebooted.

If you view your service and see the usual start/stop/reboot buttons and traffic/memory/load graphs, the reboot action has been completed for that node and any services on that node will remain online.

============

All but two of the nodes in Dallas, TX were rebooted on Saturday, January 6th, 2018.

============

Sunday January 7th, 2018:

We are continuing to reboot the nodes in our other locations today.

Resolved

ukhighram1 issues

Posted at: 1st January 2018 - 14:13

ukhighram1 is currently facing issues. We are working on the problem at the moment.

Update: The server is back online now. We are investigating the reason for the lock up.
Resolved

dalhighram9 availability issues

Posted at: 1st January 2018 - 04:40

Monday, 01 January 2018, 04:40 CDT - We are looking into a problem on this node that is affecting availability. Please stay tuned for further updates.
Monday, 01 January 2018, 04:55 CDT - Node has had to be taken down for an emergency reboot, it is currently booting back up.
Monday, 01 January 2018, 04:59 CDT - Node is online. VPS's are booting up at the moment. Your VPS should be online in the next 5 minutes.
Resolved

seapure3 emergency reboot

Posted at: 29th December 2017 - 02:01

Friday, 29 December 2017, 02:01 - seapure3 has kernel panicked, and had to go through an emergency reboot.

Friday, 29 December 2017, 02:09 - The server is back online. VPS's are now booting up. All VPS's should be online in the next 10 minutes.
Resolved

UK Packet loss

Posted at: 14th November 2017 - 02:30

Tuesday, 14 November 2017, 02:30 - It has come to our attention that some of our UK nodes (UKHighram { 1 | 2 } in particular) are currently experiencing rather high levels of packet loss. This appears to be caused by an issue on our upstream provider's side. They have been ticketed accordingly, and we're awaiting a response.
Tuesday, 14 November 2017, 04:41 - This is now fixed; root cause analysis and a formal RFO from our providers are still pending.
Resolved

VPS control system issues

Posted at: 30th September 2017 - 13:35

09/30/2017 13:35 - We are aware of an issue with the communication between our client area and our VM control system. Your VPS is likely online; please check with ping/SSH before opening a ticket reporting that your VPS is offline. Until this is fixed, please open a ticket if you need any start or rebuild actions performed on your VM.
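As a rough sketch of that check (the address below is a documentation placeholder; substitute your own VPS IP, and note that `nc` may not be installed on every system):

```shell
#!/bin/sh
# Quick reachability check before reporting a VPS as offline.
# HOST is a placeholder documentation address -- substitute your VPS IP.
HOST=203.0.113.10

# ICMP: does the VPS answer ping? A failed ping alone does not prove the
# VPS is down, since ICMP may be firewalled.
if ping -c 3 -W 2 "$HOST" >/dev/null 2>&1; then
    echo "ping: reachable"
else
    echo "ping: no reply"
fi

# TCP: does the SSH port accept connections? (requires netcat)
if nc -z -w 3 "$HOST" 22 2>/dev/null; then
    echo "ssh: port 22 open"
else
    echo "ssh: port 22 closed or filtered"
fi
```

If both checks fail from multiple networks, the VPS or node is likely down and a ticket is warranted.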
Resolved

dalpure11 responsiveness

Posted at: 13th August 2017 - 10:41

Sunday, 13 August 2017, 10:41 - We are investigating a responsiveness issue with our dalpure11 server. We will post more information here as we have it. 

Sunday, 13 August 2017, 11:26 - We have discovered a failing RAM module in this server and are working to replace it now. We will post more information here as we have it. 

Sunday, 13 August 2017, 11:45 - We have replaced all the memory modules to avoid any future problems, and the server has been booted into the OS. VPS's are starting at the moment. Your VPS should be online momentarily if not already. Please note there may be some slowness for the first 30 minutes.
Resolved

Scheduled maintenance for ukpure11

Posted at: 29th July 2017 - 04:32

We have been notified by our datacenter that they have to physically move ukpure11 server to another rack to improve their infrastructure.

The date and time booked for this maintenance is Sunday 07/30/2017 10:00 AM GMT+1.

They anticipate a 20 minute downtime due to this maintenance.

We are sorry for any inconvenience this maintenance may cause, and we thank you for your understanding.

If you have any questions or concerns regarding this, please click here and open a ticket with us.

--------------------------------

Update 03:59 AM CST: We are shutting down the VPS's right now.

Update 04:04 AM CST: The server is powered off and the datacenter technicians are now working to move it to the new rack.

Update 04:14 AM CST: The server has been moved and powered back on, and the VPS's are now starting up. The load may be high at this time and your VPS could be slow; this should be resolved in the next few minutes.

Update 04:25 AM CST: Everything is back to normal.
Resolved

dalstorage4 outage

Posted at: 31st May 2017 - 17:34

5/30/2017 17:34 PM - Node dalstorage4 lost connectivity. Upon checking the serial console we found a kernel panic had occurred.
5/30/2017 17:59 PM - All VPS are back online.
Resolved

dalhighram17 degraded performance

Posted at: 24th May 2017 - 04:54

We are aware of a performance degradation on dalhighram17. Our engineers are investigating the issue.
Resolved

lapure5 emergency reboot

Posted at: 3rd May 2017 - 06:23

05/03/2017 06:23 AM CST - We are going to reboot lapure5 to fix the high load average issue on it which was caused by a bug in KernelCare. We utilize KernelCare to do rebootless kernel upgrades but unfortunately a bug in the software is causing lock ups during the patching process. 
We will update this announcement once all VPS's are back online again.

05/03/2017 07:07 AM CST - All the VPS's are booted up now and we have confirmed the issue has been resolved.
Resolved

Dallas network connectivity (RFO Added)

Posted at: 26th April 2017 - 10:24

04/26/2017 10:24 CST - We're aware of the network connectivity issue in Dallas and are working on the situation now. We'll update this announcement as we have more information.

04/26/2017 10:30 CST - We have pinpointed and resolved the issue. It will take around 5 minutes for your VPS to come online if it's not already.

04/26/2017 10:54 CST - Everything should be back online now. A full Reason For Outage (RFO) report will be sent shortly to those who were affected. We are sorry for the inconvenience.

Reason for Outage

This morning at around 10 AM CST we upgraded our Dallas core network switch and added an additional 10 Gbit of capacity.

During the switchover to our new core switch, some VPS took longer than others to re-ARP into the VLAN. Based on the dry runs and tests we performed over the previous weeks, we had expected this to be transparent to all customers, but the VLAN took longer than expected to re-ARP.

Unfortunately, due to the slow re-ARP'ing, some of the IPs had around 5-25 minutes of downtime.

With this upgrade we now have full diverse path fibers configured to failover to avoid any interruptions due to physical factors.

We now also announce IPs under our own ASN, and can connect directly to other peering partners or carriers.

We can assure you that there won't be any more upgrades like this in the future; with the new setup, we have the ability to handle future upgrades completely transparently.

We apologize for the inconvenience that this unexpected network outage caused; however, we're committed to doing everything we can to ensure a consistent and reliable service at a great price.

If you have any questions or comments, please don't hesitate to inquire.
Resolved

Premium VPS line scheduled kernel upgrade

Posted at: 18th March 2017 - 04:13

As scheduled, we are currently upgrading the Linux kernels of the Premium VPS line host nodes.

Update: This has been done. Everything is back online.
Resolved

ukpure7 Maintenance

Posted at: 6th March 2017 - 14:48

06 March 2017 14:40 PM - Due to an unexpected hardware issue with the ukpure7 VPS node, we are going to perform emergency maintenance on it at 22:00 GMT today, around 1.25 hours from now.

Your VPS will be shut down at around 21:50 GMT to prepare the node for the maintenance. Your data and IP address will not change.

We expect the server to be back online in around 30 minutes after the maintenance begins.

We are sorry for the inconvenience caused by this but unfortunately we have to perform this to avoid further performance degradation of the node.

06 March 2017 16:00 PM - The VPS node is now powered off and we are waiting for the datacenter techs to perform the maintenance.

06 March 2017 16:16 PM - The maintenance was completed successfully. The server is now booting back up.

06 March 2017 16:22 PM - All the VPS's are now online. You may experience slowness for a few minutes while the server catches its breath.

07 March 2017 01:40 AM - The server had a kernel panic. We are working on it.

07 March 2017 02:06 AM - We have brought the server back online. The VPS's are booting now. Based on the data we have, we suspect a bad memory module is causing this. We will be monitoring the node.

07 March 2017 02:17 AM - All the VPS's are back online now. We have detected errors on some memory modules on the server. We are coordinating with the datacenter technicians to get them replaced to avoid further kernel panics.

07 March 2017 02:23 AM - We have just had another kernel panic.

07 March 2017 02:58 AM - We are awaiting an update from the datacenter staff. We are holding off on booting the server again to avoid further kernel panic incidents.

07 March 2017 04:20 AM - The memory sticks have been replaced but we still see the same errors on the same DIMM slots. It looks like the motherboard is actually the faulty one here.

07 March 2017 04:56 AM - The datacenter has notified us that they are going to replace the server while keeping its SSDs in. This wasn't something we wanted, because of the risks involved, but due to communication issues they have gone ahead with it, and we are waiting for an update from them now.

07 March 2017 05:15 AM - The datacenter has assured us that this is safe and the best way to rule out any hardware issues, so we are in their capable hands and waiting for an update.

07 March 2017 05:56 AM - The datacenter has notified us that the server has booted but the network is not up yet. We are working on getting the networking up at the moment.

07 March 2017 06:16 AM - We got the networking working and booted the containers, then faced another kernel panic. When we compared the memory module serials, it turned out that the datacenter staff had misunderstood our request and replaced the wrong memory modules four hours ago. We are now waiting for them to take the server down once again and replace the correct memory modules. The confusion occurred because the datacenter staff didn't pay enough attention to the memory locations: we numbered them starting from 0, while they assumed the numbers started from 1. Even though a parenthetical note mentioned this, it was unfortunately overlooked.

07 March 2017 06:51 AM - The correct faulty memory modules have finally been swapped, and we have confirmed that we no longer see any error messages. We are booting VPS's up at the moment. Please allow a few minutes for the slowness to pass.

07 March 2017 07:07 AM - All the VPS's are confirmed online. We continue to monitor the server. An RFO will be emailed to all affected customers.
Resolved

All Premium VPS nodes rebooted

Posted at: 5th March 2017 - 23:23

We had to upgrade the kernel of all Premium VPS nodes due to the CPU lockups we were getting with the old kernel. We are sorry for the inconvenience.
Resolved

dalstorage4 downtime

Posted at: 2nd March 2017 - 18:49

02 March 2017 18:49 PM - We are aware that dalstorage4 is down at the moment. We are currently troubleshooting.

02 March 2017 20:12 PM - We were able to boot the server and we are running fsck on the storage at the moment.

02 March 2017 21:36 PM - The fsck process has completed successfully. We are starting the VPS's on the node now.

02 March 2017 22:01 PM - All but a few of the VPS's have been booted; the rest are recalculating their quotas at the moment.
Resolved

Amazon SES Issues

Posted at: 28th February 2017 - 13:54

Due to issues with Amazon S3, our Amazon SES system, which we use to send emails, is not working at the moment. Please bear with us until this is resolved.

More information: https://status.aws.amazon.com/

Update 2:06 PM: We have switched to another SMTP provider for the time being.

Update 6:50 PM: We have switched back to Amazon SES.
Resolved

Dallas Network

Posted at: 26th January 2017 - 11:55

26 January 2017, 11:55 - We are investigating the issue within our Dallas network. Please do not open a ticket as this would slow us down at the moment.

26 January 2017, 12:02 - The issue is related to our upstreams from what we understand so far.

26 January 2017, 12:19 - Connectivity has returned. We're awaiting confirmation of stability from the datacenter. 
Resolved

seapure5 maintenance

Posted at: 5th January 2017 - 14:02

05 January 2017, 14:02 - The following maintenance is now in progress:

Hello,

You are receiving this message regarding work on our seapure5 server where you have active VPSDime service.

Tomorrow, Thursday January 5 2017, at 10 AM PST (UTC-8), we will be installing additional memory modules in this host node. We will begin shutting down the server approximately 20 minutes in advance to ensure a 10 AM start time with the technicians, so you can expect your VM to go down sometime between 9:40 and 10 AM PST (UTC-8). Please work to ensure your services are prepared to automatically start after a reboot action. Note that no customer data will be modified in any way by this operation.

We are specifying an hour of downtime total for this operation, although we expect it to be much less.

We apologize for the inconvenience, however we appreciate your understanding, and we are committed to providing you with the best service at a great price. Please don't hesitate to open a ticket if you have any questions or comments. Thank you.

-VPSDime Team

05 January 2017, 15:32 - This took longer than expected due to unforeseen problems, but is now complete and VMs are back online. 

Resolved

Rebooting seapure2

Posted at: 1st January 2017 - 14:33

Due to a soft lockup, we are rebooting seapure2. Your VPS should be online in the next 10 minutes.

Update 14:49: All the VPS's are back online and responsive.
