ESG Webinar Recap: Storage Performance – Building the Right Foundation for Meeting SLAs

By Raj Patel, Sr Director of Corporate and Field Marketing

In August we and a number of other industry leaders partnered with Enterprise Strategy Group (ESG) to survey IT professionals on their storage system buying decisions in relation to their application workloads – and you can review a summary of the findings of that research in this blog post.

Meanwhile, we launched a three-part webinar series called “De-Clouding the Issues – Managing Performance, Complexity & Cost in the Ever-Evolving Data Center.” The first webinar featured ESG senior analyst Mark Peters, and examined the results of our research. We’re sharing the recap below as we prepare for the next webinar in our series.

In the webinar, titled “Storage Performance – Building the Right Foundation for Meeting SLAs” and co-hosted by Peters and our CTO, John Gentry, we focused on the SLAs that business application owners are demanding from their IT and storage infrastructure teams.

As we analyzed our research findings, we found a fundamental disconnect between application owners who are demanding performance and availability SLAs for their workloads (despite rapid and constant change and compounding complexity) —and the IT teams that are ill-equipped to deliver them.

The research exposed an inconvenient truth that IT teams must face: despite their best efforts, they are trying to manage today’s reality with last century’s technology. The mismatch is that obvious, and the peril looms large.

Imagine if today’s jet pilots were required to fly using the same technology used to fly World War I era biplanes. What if they simply relied on their wits, dexterity, strength and instincts to fly the jet you and your family took to Hawaii—and opted to ditch the fly-by-wire system that automatically stabilizes flight, prevents unsafe operation and ensures performance, precision and control? Would you get on that plane?

Whether the interdependent devices, systems and networks that deliver applications are called an IT infrastructure, digital business infrastructure, The Cloud, or some derivative—the jet is in flight and there is an expectation that everything will perform reliably and predictably because the most advanced, purpose-built systems are enabling it to do so.

Unfortunately, this is far from true in many business-critical enterprise IT shops, where teams are still relying on archaic point tools to help them meet SLAs—and are failing miserably. They are flying blind, trying to monitor and ensure performance of their Tier 0 and Tier 1 application environments with tools that were never intended for this purpose. When application performance degrades and systems crash, yes, it’s good to know that point tools can prove their own innocence—but application and business owners should expect more than this, because livelihoods depend on it.

Fortunately, things are changing, and enterprise leaders are realizing that there are advanced solutions available that can help them confidently deliver and assure even the most demanding performance SLAs no matter what changes occur. They’ve crossed the chasm, and shifted focus to proactively monitoring and managing IT infrastructure performance via a platform approach that helps them keep their Tier 0 and Tier 1 applications flying high, as expected, despite rapid and constant change and compounding complexity.

During the webinar, Gentry discussed commonalities among Virtual Instruments customers in extreme growth mode. Their environments span multiple storage and server vendors, and because of accelerated business growth, their mission is to virtualize everything across hosts, servers and storage. At the same time, they must reliably ensure stable and predictable performance while driving peak utilization at perpetual scale.

Every one of them is accomplishing that mission and delivering on their SLAs by deploying our next-generation, end-to-end monitoring platform—which is both storage vendor-independent and generationally independent (supporting different generations of any storage platform)—alongside their existing device tools.

Elsewhere in the webinar, Peters highlighted industry trends that drove survey responses such as these:

  • Big data analytics is driving a need for ever-greater storage capacity
  • 39% of respondents said their top concern was to use storage more efficiently
  • 91% of respondents still expect 50% or more of their applications to be on-premises in five years
  • 57% of respondents said they have pulled applications/workloads back on premises from a public cloud infrastructure service

Despite these challenges, Peters noted that organizations are increasingly taking a measured approach to their storage usage. While organizations have talked about becoming more storage-efficient for quite a while, he hasn’t heard them attest to it in the past as much as they do now. Peters observed that organizations are spending the same or less on storage today than in the past by running and managing their storage deployments more efficiently. Ultimately, Peters agreed that SLAs will come to dominate discussions about infrastructure teams delivering on performance and availability commitments to their business application owners.

The webinar was chock-full of additional insights, so we encourage you to take the time to watch it here. Look out for Part 2 of the webinar series, which will focus on enabling IT agility, transformation, innovation and business alignment via end-to-end monitoring and analytics. In the meantime, if you want to hear more about what our app-centric IPM solutions can do for your organization’s storage investments, drop us a line or connect with us on Twitter – we’re @Virtual_Inst.