
Service Status

Check our service status and past incidents

Overall Service Status



All services are operating normally at this time.

Last Incident

Status: In-Progress - Posted at: 13th January 2022 - 00:33

All Regions

Data Center        Current   Last Day   Last Month   Last Year
Dallas, TX         Up        100%       100%         100%
Seattle, WA        Up        100%       100%         100%
Piscataway, NJ     Up        100%       100%         100%
Los Angeles, CA    Up        100%       100%         100%
London, UK         Up        100%       100%         100%
Maidenhead, UK     Up        100%       100%         100%
The Netherlands    Up        100%       100%         100%

Past Incidents


uklegvz7highram1, uklegvz7highram2, uklegvz7highram5 are currently down

Posted at: 23rd May 2020 - 00:46

Saturday, 23 May 2020, 00:43 - We're aware of an outage on uklegvz7highram1, uklegvz7highram2 and uklegvz7highram5 and are currently investigating.

Saturday, 23 May 2020, 01:01 - The servers are powered on and working fine. The issue appears to be network-related. We have contacted the datacenter about this.

Saturday, 23 May 2020, 01:08 - The servers are back online. The DC has informed us that there was a small networking issue in this zone of the facility, and it has been resolved.

seavz7highram1 Kernel Panic Issues

Posted at: 22nd May 2020 - 21:53

Friday, 22 May 2020, 21:53 - We're investigating an outage with seavz7highram1, we'll update the announcement as soon as new information is available.

Friday, 22 May 2020, 22:01 - The node has crashed due to kernel panic and has been rebooted. The VMs should be up and running already. If you're experiencing any issues as a result of this incident, please do not hesitate to contact our support team via ticket.

Friday, 22 May 2020, 23:03 - To avoid further downtime from kernel panics on this node, we are migrating all the VMs to a new node. We are using a special migration method so that each VM is offline for only a few seconds.

Saturday, 23 May 2020, 01:13 - All the VMs have been successfully migrated to the new node.

dalstorage4 connectivity

Posted at: 8th May 2020 - 01:53

Friday, 08 May 2020, 01:53 - We're investigating a network outage on our dalstorage4 server.

Friday, 08 May 2020, 02:00 - The connectivity issue has been resolved; we're awaiting an RFO from the DC.

Hardware issues on dalvz7highram32

Posted at: 27th April 2020 - 00:12

We are seeing kernel panics due to hardware issues on dalvz7highram32. We are currently migrating the VPSes off this server.

Update 2020-04-27 01:27 AM: All the VPSes have been migrated off dalvz7highram32. Most were migrated live with only a few seconds of downtime; a few were migrated using the regular offline method with only a few minutes of downtime.

Emergency reboot of ukpure8 and ukpure9

Posted at: 26th April 2020 - 03:44

We are rebooting ukpure8 and ukpure9 to resolve an instability issue.

dalstorage4 issue

Posted at: 24th April 2020 - 15:59

Friday, 24 April 2020, 15:59 - We're investigating a network outage on our dalstorage4 server.

Friday, 24 April 2020, 16:12 - The connectivity issue has been resolved; we're awaiting an RFO from the DC.


dalvz7highram11 has crashed.

Posted at: 23rd April 2020 - 22:44

dalvz7highram11 has crashed. We are currently resolving the problem.

Update 2020-04-23 22:51: The server is back up. The crash appears to have been caused by a kernel bug; a new kernel was loaded on reboot. We are monitoring.

Update 2020-04-25 00:46: The server has crashed again. We are working on it.

Update 2020-04-25 00:56: The VPSes are back up and we have submitted a bug report to the Virtuozzo team.

Emergency reboot of ukpure11

Posted at: 21st April 2020 - 21:37

We are rebooting ukpure11 due to an emergency configuration problem.

Packet loss in our UK location

Posted at: 20th April 2020 - 17:46

Monday, 20 April 2020, 17:46 - We've been made aware of packet loss (that's originating upstream from us) in the UK location. Please refrain from attempting to take intrusive management action(s) (such as reinstalling systems) at this time until the situation returns to normal.

ukvz7highram2 down

Posted at: 3rd March 2020 - 22:45

Tuesday, 03 March 2020, 22:45 - Our monitoring has alerted us that this node is down. Our team has alerted the datacenter and is awaiting updates.

Wednesday, 04 March 2020, 01:30 - Service has now been restored. We await an RFO from the DC.

Connectivity issues affecting Seattle DC

Posted at: 3rd March 2020 - 00:16

Tuesday, 03 March 2020, 23:38 - We've been alerted to reachability issues affecting our Seattle datacenter. We're currently looking into this. Further updates will be posted here as the situation unfolds. Thank you for your patience; we sincerely apologize for the interruption.

Tuesday, 03 March 2020, 23:56 - The datacenter was under large DDoS attacks; these have been mitigated and connectivity has returned to normal.

Seattle reachability issues

Posted at: 20th February 2020 - 19:05

Thursday, 20 February 2020, 19:05 - We've been alerted to reachability issues affecting our Seattle datacenter. We're currently looking into this. Further updates will be posted here as the situation unfolds. Thank you for your patience; we sincerely apologize for the interruption.

Thursday, 20 February 2020, 19:52 - Services restored. RFO pending.

Seattle network issues

Posted at: 19th February 2020 - 04:16

We are aware of the Seattle network issues. We are working with the datacenter staff to resolve these as soon as possible.

Update 04:36 AM: This has been resolved. The datacenter confirmed that a bad PDU caused the issue. We are sorry for the downtime.

Rebooting dalvz7highram12

Posted at: 26th January 2020 - 10:50

We are currently rebooting the dalvz7highram12 node due to instability issues.

26 Jan 2020 11:15: Reboot has been completed.

NJ network problem

Posted at: 8th January 2020 - 04:22

Wednesday, 08 January 2020, 04:22 - We're investigating a network outage affecting certain IP ranges on our New Jersey servers. We'll update this announcement as we have more information.

Wednesday, 08 January 2020, 05:18 - The issue has been resolved.

seapure1 connectivity

Posted at: 7th January 2020 - 11:54

Tuesday, 07 January 2020, 11:46 - We're investigating connectivity issues with our seapure1 node now.

Tuesday, 07 January 2020, 12:20 - Server connectivity has been restored.


dalvz7highram6 issue

Posted at: 21st December 2019 - 07:25

Saturday, 21 December 2019, 07:17 - We're investigating an issue with our dalvz7highram6 server now.

Saturday, 21 December 2019, 07:39 - The issue has been resolved and the server is back online now.

seavz7highram1 accessibility issues

Posted at: 3rd December 2019 - 00:09

Tuesday, 03 December 2019, 00:03 - Our internal monitoring has alerted us about an outage of our seavz7highram1 node. Our engineers are investigating.

Tuesday, 03 December 2019, 00:25 - Our engineers have resolved the problem.

seavz7highram1 connectivity issues

Posted at: 1st December 2019 - 06:04

Sunday, 01 December 2019, 06:02 - We've been alerted via our internal monitoring to an outage of our seavz7highram1 node. Our engineers are investigating.

Sunday, 01 December 2019, 06:10 - Our engineers have identified and resolved the problem.

Network Issue in Dallas

Posted at: 28th October 2019 - 16:10

Monday, 28 October 2019, 15:53 - We're aware of the Dallas connectivity issues and are investigating them now.

Monday, 28 October 2019, 19:18 - DFW is now completely back online. The official RFO will be posted to this announcement as it becomes available.

Tuesday, 29 October 2019, 14:04 - RFO is as follows:


OFFICIAL RFO - 10/28/2019
Summary of Incident:


Yesterday, Monday October 28th 2019, at approximately 4:23pm, portions of our customers in our TPA1, TPA2 and DAL1 data centers experienced a loss of network connectivity that lasted anywhere from a few minutes to a few hours depending on your server's location. The cause of the issue has been identified and is as follows:

At roughly 4:23pm one of our Network Engineers applied a policy update to our DAL1 edge routers. This policy update was incomplete, which led to the full internet routing table being propagated throughout the aggregation layer of DAL1. The mistake was further exacerbated when that full routing table was automatically injected into the Hivelocity DDoS protection network, resulting in the full routing table being distributed to other Hivelocity facilities, i.e. TPA1 and TPA2. The full internet routing table injection led to multiple network devices having their resources exhausted, which ultimately led to the network disruption. Once our Network Engineers identified the cause of the issue, we began reloading each of the affected network devices to correct the problem. Ultimately, yesterday's network event was the result of human error.

Service Impact Times:

October 28th, 4:23pm - 6:44pm EST

Remediation Plans:


We have implemented new router policies that will prevent full routing tables from being similarly propagated should human error ever occur again. Additionally, we have implemented new review protocols to minimize the likelihood of any human error occurring.

For years most of our customers have experienced 100% uptime thanks to our redundancies and nearly two decades of experience. We take our responsibility to you very seriously, and no one hates it more than we do when we fall short of our goals. We are deeply sorry for the inconvenience and any negative impact this disruption had on your operation.
