
Service Status

Check our service status and past incidents

Overall Service Status

100%

Up

All services are operating normally at this time.

Last Incident


Status: In-Progress - Posted at: 26th April 2024 - 13:32

All Regions

Data Center        Current   Last Day   Last Month   Last Year
Dallas, TX         Up        100%       100%         100%
Seattle, WA        Up        100%       100%         100%
Piscataway, NJ     Up        100%       100%         100%
Los Angeles, CA    Up        100%       100%         100%
London, UK         Up        100%       100%         100%
Maidenhead, UK     Up        100%       100%         100%
The Netherlands    Up        100%       100%         100%

Past Incidents

Resolved

node3ssd DDoS

Posted at: 13th October 2014 - 04:31

A large DDoS attack halted the NIC on node3ssd before we had any chance to take action. This is resolved and VMs are in the process of booting up.
Resolved

DDoS attack on the Dallas storage VPS network

Posted at: 11th October 2014 - 02:35

There is a huge DDoS attack happening on our Dallas Storage VPS network.
Resolved

node3ssd reboot

Posted at: 8th October 2014 - 01:16

Node3ssd had to be rebooted for emergency unplanned maintenance. The server is back online and VMs should be coming up shortly.
Resolved

More lapure1 problems

Posted at: 4th October 2014 - 12:14

lapure1 went offline again. We are looking into it.

Update: The node is back online. However, we are going to migrate off this server this week.

Please open a ticket if you are having troubles.
Resolved

lapure1 problems.

Posted at: 2nd October 2014 - 06:53

We are rebooting this server to make it recognize a disk replacement. It should come back in 10 minutes.

Update 17:31: The RAID volume isn't being recognized. We are working on it.

Update 17:43: The RAID volume is now recognized. However, the OS is not booting. We are working on it.

This is a bigger issue now. Here are the details:

Description of the issue: A drive backplane issue occurred on this server.

Impact: Potential data loss.

Details: On Tuesday, we were alerted to a drive failure on the server, so we went ahead and replaced the drive. However, the server didn't recognize it and still showed the port as failed. We then tried two other drives and nothing changed. Today we tried yet another drive, which arrived at the DC a few hours ago; nothing changed. We had been seeing disk read/write errors in the logs and decided to do a cold reboot on the server. After the reboot, the RAID controller marked 4 drives as failed. We shut the server down and reseated the drives. When we powered it back on, the RAID controller recognized all the drives and gave us an option to reactivate the RAID volume. We went ahead and reactivated it. However, the host OS didn't boot. We booted the server with a live OS and tried to view the volume status. The RAID utility shows the volume as healthy, but the OS doesn't recognize a partition table. We are now going to try recreating the partition table and partitions to see if we can view the data on them.

Restore process: We have a backup of the server from ~6 hours ago and a spare node in the DC. We are currently restoring the data to this spare node to get your VPS online with the data from 6 hours ago. We expect this process to take around 2-3 hours.

We are terribly sorry for the inconvenience this issue has caused you. Please open a ticket if you have any questions.

--

We have tried to rescue the data off the volume; however, even the filesystems are not being recognized on it.

Unfortunately, the data on the server is lost. As a reminder: we do have backups from ~6 hours ago.

The restore process to the spare node is taking much longer than expected; it seems it will need around 12-16 more hours. This is due to the backup containing lots of small files.

To avoid that much waiting time, we have recreated the RAID array on the old server and made sure it's working as expected, preparing it for a bare-metal restore from the backup server. This eliminates recreating each file one by one and performs the restore at block level.

The bare-metal restore process is now running, and luckily it's much faster.
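For illustration, the difference is roughly that of copying files one by one versus streaming the underlying device as one continuous copy. A minimal sketch of the two approaches (the host backuphost, the backup paths, and the device /dev/sda are hypothetical placeholders for illustration only, not our actual backup tooling, which handles this internally):

  # File-level restore: every file is created, written, and closed individually,
  # which is slow when the backup contains millions of small files.
  rsync -a root@backuphost:/backups/node/ /mnt/restore/

  # Block-level restore: the device contents are streamed as one continuous copy,
  # regardless of how many files the filesystem contains.
  ssh root@backuphost "cat /backups/node.img" | dd of=/dev/sda bs=4M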

As compensation for this awful and unlucky event, we are going to apply 2 months' worth of free credit to your account.

We'll update you when we get more progress on the matter.

--

The restore process is still going. Based on the calculations we have made, unfortunately, this will take around 16-18 more hours.

We realize that waiting for another 16-18 hours for your VPS to come back online may be unbearable.

If you have your own backups and would like a fresh VPS, please open a ticket and we'll provision one for you immediately.

We are extremely frustrated with how things have turned out and would love to make it up to you. Therefore, we are extending the account credit compensation to 3 months. We are going to apply 3 months' worth of free credit for your VPS to your account.

We are also going to work on a better disaster recovery strategy that would let us recover much faster in the future if a terrible event like this ever happens again.

We sincerely apologize.

--

The restore process is still running. However, it has only restored around half of the data so far.

If you could live with a fresh VPS and only need a few folders and databases from the backup, please open a ticket and we'll recreate your VPS on a different node with the same IP and restore your folders and databases manually to get you online more quickly.

Please use this template when opening a ticket:

Subject: Recreating VPS & Partial Restore

Contents: I'd like my VPS to be recreated on a different node with the same IP address and would like the following folders restored:
/var/lib/mysql
/var/www
...

We'll put your files in the /root/backup/ directory for you to move to their original folders.

To restore your MySQL databases, please follow these steps (a combined command sketch follows the list):

0. Install the mysql-server package if you have not already.
1. Stop the MySQL server.
2. Delete your current /var/lib/mysql folder.
3. Move the mysql files from your backup to a new /var/lib/mysql folder. (The folder will be named mysql in your /root/backup folder. Simply issue the commands: mkdir -p /var/lib/mysql and cp -R /root/backup/mysql/* /var/lib/mysql)
4. Change the ownership of the database files. (chown -R mysql:mysql /var/lib/mysql)
5. Then start the MySQL server.
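
Put together, the restore commands look roughly like this (a sketch assuming the MySQL service is named mysql and is managed with the service command; adjust the service name for your distribution):

  # Stop the MySQL server before touching its data directory
  service mysql stop

  # Remove the data directory left by the fresh install
  rm -rf /var/lib/mysql

  # Recreate it and copy the restored files from the backup location
  mkdir -p /var/lib/mysql
  cp -R /root/backup/mysql/* /var/lib/mysql

  # MySQL must own its data files
  chown -R mysql:mysql /var/lib/mysql

  # Start the MySQL server again
  service mysql start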

--

THE RESTORATION PROCESS HAS BEEN COMPLETED. ALL VPS's ARE ONLINE AND FULLY FUNCTIONAL WITH DATA FULLY RESTORED.

We will work on cleaning everything up and confirming everything is as it should be before sending out an announcement email and applying credit.

--

We are glad to let you know that the restore process finished around 8 hours ago and we have since run filesystem checks and repairs.

Your VPS should have been online for the last 7-8 hours. If you are having problems with your VPS, please open a ticket and we will either try to fix it or restore your VPS from the backup manually.

Resolved

ukstorage1 is rebooting

Posted at: 25th September 2014 - 04:23

Due to a CPU lockup, we are rebooting this node.
Resolved

dalpure3 Downtime

Posted at: 23rd September 2014 - 02:44

@8:45 AM BST: DALPURE3 became unresponsive moments ago. We have begun investigating.

@8:49 AM BST: The network is back up. We are investigating the cause.
Resolved

dalpure3 connectivity issue

Posted at: 22nd September 2014 - 11:10

We have experienced a lockup on this node and had to proceed with a power cycle.

Update: 11:30 AM: The node is back up and VPS's are booting.

Update 11:50 AM: All VPS's look online. Please contact us if you have any issues.
Resolved

Los Angeles Network Problems

Posted at: 21st September 2014 - 01:14

We are again seeing huge packet loss in this network. We are in contact with the DC.

Update: 01:39 AM: The issue looks resolved.
Resolved

Los Angeles Network Issue

Posted at: 12th September 2014 - 03:17

We are having network issues in Los Angeles again. A significant number of packets are being dropped and latency is high.

We are in contact with the DC.

Update: 03:22: The issue has been solved.
Resolved

NODE3SSD Down

Posted at: 11th September 2014 - 05:31

@11:15AM BST 9/11/2014 NODE3SSD in Dallas became unresponsive moments ago. We have begun investigating and will update this announcement ASAP with additional information.

@11:28AM BST 9/11/2014 NODE3SSD has been rebooted. The issue was related to the kernel.

@11:32AM BST 9/11/2014 NODE3SSD is back online and VMs are being started. It may take up to 30 minutes for your VM to appear online because quota checks need to be run.
Resolved

Los Angeles Network Dropping Packets

Posted at: 1st September 2014 - 08:48

Our Los Angeles network is dropping packets. We are in contact with the DC.

Update: 8:52: The connection is stable again.

Update: 20:55: The issue is back again.

Update: 21:04: It looks resolved again.
Resolved

DDoS Happening on dalpure1

Posted at: 30th August 2014 - 15:53

We have an incoming DDoS attack on dalpure1. It should be taken care of shortly.

Update 15:54: This has been solved.
Resolved

Packet Loss in Los Angeles

Posted at: 17th August 2014 - 11:58

We are experiencing some packet loss in this network. We are in contact with the DC. Please bear with us.
Resolved

Los Angeles High Storage Network is Down

Posted at: 15th August 2014 - 19:59

There is a network issue with our IP subnets in this network zone. We are in contact with the DC.

Update: It looks like we are being affected by this issue: http://blogs.cisco.com/sp/global-internet-routing-table-reaches-512k-milestone/ and are being filtered by some internet carriers.

Update 23:08: The issue is temporarily fixed. You may experience slowness until tomorrow.
Resolved

Los Angeles Maintenance

Posted at: 11th August 2014 - 15:55

Date: 11 August 2014, Monday
Start Time: 2:00 PM PST
End Time: 2:30 PM PST
Cause: Expansion
Expected Downtime: 30 Minutes

In order for us to deploy new nodes in our Los Angeles network, we are going to have to move a few servers off their current rack to a new rack.

We apologize for the inconvenience this maintenance will cause.

Update 1:55 PM (PST): Maintenance has started, we are shutting down the VPS's.

Update 2:20 PM (PST): All VPS's are powered down, moving started.

Update 3:05 PM (PST): All the VPS's are back online.

Resolved

node2 Dallas is Down

Posted at: 7th August 2014 - 19:56

node2 Dallas just kernel panicked; we are rebooting it.

Update 20:10: All containers have booted. They'll go down for a few moments and come back online while the server recalculates the quotas used.

Update 20:40: Everything is back to normal.
Resolved

Los Angeles is down

Posted at: 18th July 2014 - 05:27

We are aware that Los Angeles nodes are down. There is a network issue which the DC is working on.

Update 05:30: The network is back up.
Resolved

Offloaded SQL server Maintenance

Posted at: 29th June 2014 - 10:03

From the announcement we sent out last week:

You are receiving this email because you have an active offloaded MySQL service.

We are writing to inform you of a maintenance window for our Dallas offloaded MySQL service. This window is scheduled for Sunday, June 29th from 10 AM to 11 AM CDT (UTC-5).

We do not expect to have a total outage of the service, but connectivity may be intermittent as we restart the SQL server process.

Please open a ticket if you have any questions.

Resolved

node2 issues

Posted at: 24th June 2014 - 17:24

A reboot caused a kernel panic on node2 Dallas, and we are running fsck at the moment. Please be patient.

Update 19:16: fsck is done. The node has booted. VPSes are booting at the moment. To avoid waiting, go to your client area and boot your VPS yourself.
