How much does data center downtime cost?

Data center downtime is painful for any enterprise. A service interruption prevents users from accessing applications normally, causing direct business losses. Downtime also puts tremendous pressure on service providers, who must identify the root cause and resolve it as quickly as possible.

Frustrating as outages are, the complaints they generate often pale in comparison to their cost. More and more businesses are taking the time to quantify exactly how much downtime costs them, and are using that information to ensure their service providers are doing everything they can to prevent future outages.

But how much does data center downtime cost an enterprise? What strategies do service providers use to prevent downtime?

Industry experts say data center downtime costs U.S. enterprises an average of nearly $8,000 per minute.


The Cost of Data Center Downtime

According to a 2013 study by the Ponemon Institute and Emerson, downtime costs U.S. businesses nearly $8,000 per minute. This is a 41 percent increase over the 2010 figure of $5,600 per minute. While factors such as growing network complexity led some to expect the cost of downtime to rise, the researchers were surprised by the size of the increase they found.

"Given that today's data centers are home to more interdependent equipment and IT systems than ever before, most people expected the cost of unplanned data center outages to increase compared to 2010, but the 41 percent increase was higher than expected," said Ponemon Institute chairman and founder Larry Ponemon.

It is important to note that not every company hit by downtime incurs costs of this magnitude; in the study, the most expensive single incident cost an organization roughly $1.7 million. While more recent statistics are not available, the cost of unplanned downtime is almost certain to keep rising. Using tools such as downtime cost calculators, companies can estimate what a single outage costs them in lost sales and lost business, based on annual revenue and operating hours.
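The calculation such tools perform can be sketched simply: spread annual revenue over the minutes the business actually operates to get a per-minute rate, then multiply by outage duration. The function below is an illustrative sketch of that arithmetic, not any specific vendor's tool; all names and the example figures are assumptions.

```python
def downtime_cost(annual_revenue, operating_hours_per_year, downtime_minutes):
    """Estimate revenue lost during an outage (illustrative only).

    annual_revenue: total yearly revenue in dollars.
    operating_hours_per_year: hours per year the business earns revenue.
    downtime_minutes: duration of the outage in minutes.
    """
    # Revenue earned per minute of normal operation.
    revenue_per_minute = annual_revenue / (operating_hours_per_year * 60)
    # Assume revenue during the outage is lost entirely.
    return revenue_per_minute * downtime_minutes


if __name__ == "__main__":
    # Hypothetical example: $500M annual revenue, 24x7 operation
    # (8,760 hours/year), a 90-minute outage.
    cost = downtime_cost(500_000_000, 8760, 90)
    print(f"Estimated loss: ${cost:,.0f}")
```

Real losses also include indirect costs (reputation, SLA penalties, recovery labor), which is one reason the Ponemon per-minute averages run well above a naive revenue-only estimate for many businesses.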

What causes downtime?

To properly address downtime and unplanned outages, service providers must first understand their main causes. According to Rob McClary, a contributor to the industry publication Data Center Journal, while many people assume that network or equipment design is the leading cause of downtime, more outages are caused by human error each year.

In addition to human error, other major causes of downtime include poor maintenance practices and lifecycle strategies, as well as data center site selection and inadequate risk mitigation measures.

While most outages trace back to human error, poor site selection, or poor maintenance, some are harder to predict: squirrels chewing on power lines outside a data center, an anchor severing a communications cable on the seafloor, or a fire caused by a lit cigarette, industry experts say.

What measures do service providers take to prevent downtime?

Thankfully, service providers have strategies to counter the common causes of downtime. One is to build redundancy into the data center's critical systems. When a facility has backup equipment for power, connectivity, and cooling, staff can quickly switch to redundant systems and keep the data center running even during a power outage or other disruption. Eric Hanselman, chief analyst at research firm 451 Research, argues that organizations need to invest more in redundancy because equipment failure is inevitable.

"People have to have a realistic understanding of what downtime costs their business," Hanselman said. In this way, proactive investments in redundant systems can help prevent costly downtime events.

To address human error, service providers should ensure that all employees are properly trained, not just for their day-to-day work but also for worst-case scenarios, so they can respond quickly and mitigate any damage. Hanselman also recommends leveraging improved automation to reduce manual intervention, and with it the chance of human error.

"No organization should have to manually change even a small part of its infrastructure," Hanselman said. "Routine tasks, upgrading systems, and configuring and managing systems should be automated."

Hanselman also noted that data centers should have more advanced security controls in place to prevent cyber threats and distributed denial-of-service attacks that could cause service disruptions.

"One has to make sure that the entire interaction experience is protected, along the full path to the end customer," he said.

Unplanned downtime is a costly event that can inflict considerable losses on productivity and business. Service providers should strive to prevent these events at every turn.

