
Service Status

Check our service status and past incidents

Overall Service Status



All services are operating normally at this time.

Last Incident

Status: In-Progress - Posted at: 13th January 2022 - 00:33

All Regions

Data Center        Current   Last Day   Last Month   Last Year
Dallas, TX         Up        100%       100%         100%
Seattle, WA        Up        100%       100%         100%
Piscataway, NJ     Up        100%       100%         100%
Los Angeles, CA    Up        100%       100%         100%
London, UK         Up        100%       100%         100%
Maidenhead, UK     Up        100%       100%         100%
The Netherlands    Up        100%       100%         100%

Past Incidents


dalstorage2 is rebooting

Posted at: 18th December 2014 - 05:44

We are rebooting this server (dalstorage2) due to a soft lockup. Please be patient while your VPS boots back up.

Update 06:59 AM: All VPSes are booted. Please open a ticket if you are having trouble.

lapure2 is having hardware issues

Posted at: 12th December 2014 - 05:46

lapure2 is having network hardware issues. We are investigating.

Update 7:36 AM: No updates yet.

Update 8:35 AM: It turns out the onboard network port was faulty. We are using the other port, and the network is now stable. The server is currently booting. Your VPSes should be online within the next 15 minutes.

Update 8:39 AM: We are still investigating the packet loss issue.

Update 9:17 AM: The issue is resolved.

Here is the RFO:

The issue started with a major packet loss on this node. We couldn't even get into the shell to check.

First, we suspected a DDoS attack and alerted our datacenter. They told us there was no attack.

We then had them replace our cables, as well as their cables to the top-of-rack switch. This did not solve the problem.

We then had them move the network cable to the other Ethernet port. Unfortunately, this did not solve the problem either.

After ruling out hardware issues, we continued troubleshooting over the IPMI console (very slowly, due to the high load caused by the attack). We then realized there was indeed an attack on one of the IPs, which our DC had failed to recognize.

We null-routed the target IP manually, and the packet loss is gone.
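For reference, null-routing an attacked IP on a Linux router is typically a one-line routing change. This is a hedged sketch, not the exact commands we ran; the address below is a placeholder from the TEST-NET range, and the commands require root privileges:

```shell
# Add a blackhole route: traffic to the target IP is silently dropped
# at the routing layer, so the attack never reaches the host.
# 203.0.113.45 is a placeholder address for this illustration.
ip route add blackhole 203.0.113.45/32

# Confirm the null route is in place.
ip route show | grep blackhole

# Remove the null route once the attack subsides.
ip route del blackhole 203.0.113.45/32
```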

Your VPS is currently up and running.

We are terribly sorry for the downtime this has caused you and we want to make it up to you. Please open a ticket to redeem $5 credit on your account for the trouble.


node1ssd kernel panic

Posted at: 12th December 2014 - 02:03

node1ssd had a kernel panic and we are rebooting it now. Your VPS should come back online in around 15 minutes.

Update 02:22 AM: All VPSes are back up.

Rebooting nj1

Posted at: 3rd December 2014 - 10:03

We are rebooting this node due to an instability issue.

Rebooting Colocrossing Los Angeles Server

Posted at: 29th November 2014 - 05:52

We are doing an emergency reboot of this server. Please be patient.

Update 06:34 AM: This has been completed. All VPSes are online.

Update 11/30/2014 8:08 PM: The issue has unfortunately returned, and we are rebooting the node again.

Update 11/30/2014 8:31 PM: The node is back online.

dalstorage3 Problem

Posted at: 26th November 2014 - 10:49

We are currently working to bring the server back online.

Update 11:32: This server has been brought back.

Network problem in Los Angeles

Posted at: 17th November 2014 - 18:09

There is an ongoing network issue on one of the nodes in LA. If you are on it, your VPS is down at the moment. We are working to resolve it as soon as possible.

Update 18:21: This is resolved. Reason for outage will follow...

Dallas issues

Posted at: 17th November 2014 - 13:54

We are having an issue with one server. If you are on that one, your VPS is down at the moment. Please be patient while we look into the issue.

Update 14:10: This is resolved. It was a DDoS attack.

UK Maintenance

Posted at: 16th November 2014 - 16:57

ukpure1 is under emergency maintenance at the moment. The ETA to come back online is 20 minutes. We are sorry for the trouble.

Update 17:53: Unfortunately, this is taking longer than expected. We are still waiting for an update from the datacenter technicians.

Update 17:57: The server is back up.

New Jersey Network Issues

Posted at: 21st October 2014 - 13:41

We are experiencing network problems in our New Jersey location.

Update 13:46: Issue has been resolved.

node3 Los Angeles Network Issues

Posted at: 15th October 2014 - 19:40

One of our Los Angeles nodes is with ColoCrossing and they are having network issues. They are currently working on it.

Update 20:25: The network is back online.

The cause of the issue was: "The cause for the incident was due to a bug in the firmware our switches utilize. It had a bit of an impact on us because we just recently completed a significant network wide upgrade of all cabinet level switches. Subsequently all of our cabinets run the same type of switch now, so when an issue occurs it can happen on more than 1 cabinet at once."

node3ssd ddos

Posted at: 13th October 2014 - 04:31

A large DDoS attack halted the NIC on node3ssd before we had any chance to take action. This is resolved and VMs are in the process of booting up.

DDoS attack on the Dallas storage VPS network

Posted at: 11th October 2014 - 02:35

There is a huge DDoS attack happening on our Dallas Storage VPS network.

node3ssd reboot

Posted at: 8th October 2014 - 01:16

node3ssd had to be rebooted for emergency unplanned maintenance. The server is back online and VMs should be coming up shortly.

More lapure1 problems

Posted at: 4th October 2014 - 12:14

lapure1 went offline again. We are looking into it.

Update: The node is back online. However, we are going to migrate off this server this week.

Please open a ticket if you are having trouble.

lapure1 problems.

Posted at: 2nd October 2014 - 06:53

We are rebooting this server to make it recognize a disk replacement. It should come back in 10 minutes.

Update 17:31: The RAID volume is not being recognized. We are working on it.

Update 17:43: The RAID volume is now recognized. However, the OS is not booting. We are working on it.

This is a bigger issue now. Here are the details:

Description of the issue: A drive backplane issue occurred on this server.

Impact: Potential data loss.

Details: On Tuesday, we were alerted to a drive failure on the server, and we replaced the drive. However, the server didn't recognize it and still showed the port as failed. We then tried two other drives, with no change. Today we tried yet another drive, which had arrived at the DC a few hours earlier; nothing changed. We had been seeing disk read/write errors in the logs and decided to do a cold reboot of the server. After the reboot, the RAID controller marked 4 drives as failed. We shut the server down and reseated the drives. When we powered it back on, the RAID controller offered the option to reactivate the RAID volume after recognizing all the drives, and we went ahead and reactivated it. However, the host OS didn't boot. We booted the server with a live OS and checked the volume status. The RAID utility shows the volume as healthy, but the OS doesn't recognize a partition table. We are now going to try recreating the partition table and the partitions to see if we can view the data on them.

Restore process: We have a backup of the server from ~6 hours ago, and we have a spare node in the DC. We are currently restoring the data to this spare node to get your VPS online with the data from 6 hours ago. We expect this process to take around 2-3 hours. We are terribly sorry for the inconvenience this issue has caused. Please open a ticket if you have any questions.


We have tried to rescue the data from the volume; however, even the filesystems are not being recognized on it.

Unfortunately, the data on the server is lost. As a reminder: we do have backups from ~6 hours ago.

The restore process to the spare node is taking much longer than expected; it appears it will need around 12-16 more hours. This is due to the backup containing many small files.

To avoid that long a wait, we have recreated the RAID array on the old server and verified it is working as expected, in preparation for a bare-metal restore from the backup server. This eliminates recreating each file one by one and instead restores the data at the block level.
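The speed difference comes from copying the volume as one sequential stream instead of walking millions of small files. A minimal illustration of a block-level copy with verification (the /tmp image paths are made up for this demo; a real restore targets raw devices):

```shell
# Create a 4 MiB test image standing in for the source volume.
dd if=/dev/urandom of=/tmp/source.img bs=1M count=4 status=none

# Block-level "restore": one sequential pass, no per-file metadata overhead.
dd if=/tmp/source.img of=/tmp/restore.img bs=1M status=none

# Verify the restored copy is byte-identical to the source.
cmp -s /tmp/source.img /tmp/restore.img && echo "restore verified"
```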

The bare-metal restore process is now running, and luckily it is much faster.

As compensation for this awful and unlucky event, we are going to apply 2 months' worth of free credit to your account.

We'll update you as we make further progress.


The restore process is still running. Based on our calculations, it will unfortunately take around 16-18 more hours.

We realize that waiting for another 16-18 hours for your VPS to come back online may be unbearable.

If you have your own backups and would like a fresh VPS, please open a ticket and we'll provision one for you immediately.

We are extremely frustrated with how things have turned out and would love to make it up to you. Therefore, we are extending the compensation to 3 months: we will apply 3 months' worth of free credit to your account.

We are also going to work on a better disaster recovery strategy that will let us recover much faster if a terrible event like this ever happens again.

We sincerely apologize.


The restore process is still running; however, it has only restored around half of the data so far.

If you can live with a fresh VPS and only need a few folders and databases from the backup, please open a ticket and we'll recreate your VPS on a different node with the same IP and restore your folders and databases manually to get you back online more quickly.

Please use this template when opening a ticket:

Subject: Recreating VPS & Partial Restore

Contents: I'd like my VPS to be recreated on a different node with the same IP address, and I would like the following folders restored:

We'll place your files in the /root/backup/ directory for you to move back to their original locations.

To restore your MySQL databases, please follow the steps:

0. Install the mysql-server package if you have not already.
1. Stop the MySQL server.
2. Delete your current /var/lib/mysql folder.
3. Move the mysql files from your backup into a new /var/lib/mysql folder. (The folder will be named mysql inside /root/backup. Simply run: mkdir -p /var/lib/mysql && cp -R /root/backup/mysql/* /var/lib/mysql)
4. Change the ownership of the database files: chown -R mysql:mysql /var/lib/mysql
5. Then start the MySQL server.
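Put together, the steps above might look like the following shell sketch. This assumes a distribution where MySQL runs as the `mysql` service and user, with the data directory at /var/lib/mysql and the backup at /root/backup/mysql, as in the instructions; adjust paths and service names for your system:

```shell
# Restore MySQL data files from the backup copy (run as root).
BACKUP_DIR=/root/backup/mysql   # backup location, as given above
DATA_DIR=/var/lib/mysql         # MySQL data directory on this distro

service mysql stop              # 1. stop the MySQL server
rm -rf "$DATA_DIR"              # 2. delete the current data directory
mkdir -p "$DATA_DIR"            # 3. recreate it and copy the backup in
cp -R "$BACKUP_DIR"/. "$DATA_DIR"/
chown -R mysql:mysql "$DATA_DIR"  # 4. restore ownership for the mysql user
service mysql start             # 5. start the MySQL server again
```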



We will work on cleaning everything up and confirming everything is as it should be before sending out an announcement email and applying credit.


We are glad to let you know that the restore process finished around 8 hours ago, after which we ran filesystem checks.

Your VPS should have been online for the last 7-8 hours. If you are having problems with your VPS, please open a ticket and we will either fix it or restore it from the backup manually.


ukstorage1 is rebooting

Posted at: 25th September 2014 - 04:23

Due to a CPU lockup, we are rebooting this node.

dalpure3 Downtime

Posted at: 23rd September 2014 - 02:44

@8:45 AM BST: dalpure3 became unresponsive moments ago. We have begun investigating.

@8:49 AM BST: The network is back up. We are investigating the cause.

dalpure3 connectivity issue

Posted at: 22nd September 2014 - 11:10

We have experienced a lock up on this node and had to proceed with a power cycle.

Update 11:30 AM: The node is back up and VPSes are booting.

Update 11:50 AM: All VPSes appear online. Please contact us if you have any issues.

Los Angeles Network Problems

Posted at: 21st September 2014 - 01:14

We are again seeing huge packet loss in this network. We are in contact with the DC.

Update 01:39 AM: The issue appears resolved.
