What Can a Performance Analysis and Capacity Planning Review Reveal About Your Technology Investments?

By | March 9, 2013

Importance of Performance Analysis and Capacity Planning

We live and work in a new era of extreme business speed, with heightened customer, partner, and employee expectations. To keep up, businesses demand more innovation, speed, and flexibility from their data centers. Storage CapEx then becomes one of the largest investments in an enterprise-class data center, since every piece of information is ultimately stored on a data storage device in some form. A high-latency infrastructure can degrade the performance of critical applications in an enterprise and seriously impact the business.

This is why performance is a key concern when deploying mission-critical applications in a highly consolidated environment. With multiple application servers relying on a shared storage infrastructure, there is a worry that performance requirements are not being met. In addition to performance, scalability, complexity, and cost are the top storage challenges reported by CIOs and CTOs at enterprises.

Let’s consider an everyday example: mail servers such as MS-Exchange are busy environments where hundreds or even thousands of users log in and check e-mail every morning at about the same time. A well-designed network must ensure fast and consistent response times both during peak load and during relatively idle periods.

Generally, transaction-oriented applications are characterized by largely random I/O and generate both queries (reads) and updates (writes). Examples include OLTP, database operations, and mail server implementations. In a credit card database system, for example, the relevant measure is the number of credit card authorizations that can be executed per second. Response time then becomes an extremely important parameter in deciding quality of service: delayed response times become apparent to users and affect the company’s image.
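
To make the relationship between transaction rate and response time concrete, here is a minimal sketch (not from the article) using Little's Law, which ties sustainable throughput to queue depth and per-I/O response time; the queue depth and latency figures are illustrative:

```python
# Little's Law for a storage device: outstanding I/Os = IOPS * response time.
# Rearranged, it bounds the transaction rate a device can sustain.

def max_iops(queue_depth: int, response_time_ms: float) -> float:
    """Upper bound on sustainable IOPS for a given queue depth
    and per-I/O response time (Little's Law)."""
    return queue_depth / (response_time_ms / 1000.0)

# A credit-card authorization store completing I/Os in 5 ms with
# 32 requests outstanding can sustain at most:
print(max_iops(32, 5.0))  # 6400.0 IOPS
```

The same identity works in reverse: if the required IOPS are known, it predicts the response time users will see at a given concurrency level.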

The downside of all this is a fear among technology managers that the storage environment is not able to meet performance requirements, which leads to knee-jerk investments in storage infrastructure.

Performance analysis of storage networks measures, monitors, and analyzes key performance metrics such as:

  • Input / output or I/O rate, measured in IOPS
  • Actual data moving through the storage array devices in question, also known as the data rate, measured in MB/s
  • Average response time in milliseconds for all I/Os in a sample interval, including both cache hits and misses that go to back-end storage
  • Cache hit rate: this is the number of times that an IO request, either read or write, was satisfied from the device cache or memory, typically shown as a percentage
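
As an illustration of how these four metrics could be derived from raw measurements, the following sketch summarizes a sample interval of per-I/O records; the record fields and sample values are assumptions for the example, not taken from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class IORecord:
    bytes_moved: int    # payload size of the I/O
    latency_ms: float   # time to complete the I/O
    cache_hit: bool     # served from controller cache?

def summarize(records, interval_s: float) -> dict:
    """Compute IOPS, data rate, average response time, and cache hit
    rate for one sample interval of per-I/O records."""
    n = len(records)
    return {
        "iops": n / interval_s,
        "mb_per_s": sum(r.bytes_moved for r in records) / interval_s / 1e6,
        "avg_response_ms": sum(r.latency_ms for r in records) / n,
        "cache_hit_pct": 100.0 * sum(r.cache_hit for r in records) / n,
    }

sample = [IORecord(8192, 0.4, True), IORecord(8192, 6.0, False),
          IORecord(65536, 0.5, True), IORecord(8192, 7.5, False)]
stats = summarize(sample, interval_s=2.0)
print(stats)  # 2 IOPS, ~0.045 MB/s, 3.6 ms average, 50% cache hits
```

Note how the average response time blends fast cache hits with much slower back-end misses, which is why the cache hit rate is reported alongside it.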

This has created a new area of technology, with companies building tools that deliver Performance Analysis and Capacity Planning services. Prominent examples include Perfonics™ by Interscape Technologies and their Performance-as-a-Service™ offerings.

Performance as a Service™ provides an unprecedented opportunity for you to effectively manage the performance of your applications and supporting infrastructure. It helps ensure SLA compliance and reveals areas in your system that require deeper investigation through additional expert analysis.

These tools help:

  • Right-tier hosts and applications by examining their I/O profiles in the back-end storage arrays, recommending the type of storage, RAID levels, and the most efficient use of infrastructure
  • Find performance bottlenecks in large enterprise-class storage from EMC, HDS, IBM, and HP
  • Proactively analyze large storage infrastructures by tracking hundreds of storage performance metrics
  • Examine hardware resource utilization to see how much headroom is left in the existing infrastructure before recommending purchases
  • Create performance baselines for storage arrays
  • Create performance baseline profiles for servers (AIX, Linux, Solaris, Windows)
  • Plan consolidation for data center refresh projects, targeting cost- and performance-optimized storage
  • Analyze "green storage" efficiency of the current infrastructure by looking at cost/GB, IOPS/GB, power draw, footprint, and heating/cooling
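
The headroom and green-storage arithmetic behind several of these bullets can be sketched as follows; the 80% safety ceiling and all sample figures are illustrative assumptions, not values from the article:

```python
def headroom_pct(peak_utilization_pct: float,
                 safe_ceiling_pct: float = 80.0) -> float:
    """Remaining capacity (in percentage points) before a component
    reaches its assumed safe operating ceiling."""
    return max(0.0, safe_ceiling_pct - peak_utilization_pct)

def green_metrics(capacity_gb: float, peak_iops: float,
                  annual_cost: float, power_kva: float) -> dict:
    """Efficiency ratios of the kind a green-storage analysis reports."""
    return {
        "cost_per_gb": annual_cost / capacity_gb,
        "iops_per_gb": peak_iops / capacity_gb,
        "kva_per_tb": power_kva / (capacity_gb / 1000.0),
    }

# An array whose controllers peak at 55% utilization still has room
# to absorb growth before any purchase is justified:
print(headroom_pct(55.0))  # 25.0 percentage points below the ceiling
print(green_metrics(100_000, 250_000, 500_000, 40.0))
```

Tracking these ratios across arrays makes it easy to spot which footprint is the cheapest and most power-efficient consolidation target.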

Performance analysis allows IT departments to evolve into more agile, proactive organizations that can guarantee service delivery to the business, reduce costs, improve IT efficiency, and shift the focus away from reactive firefighting toward innovating for the business.

The findings from performance analysis can help determine what categories of cost reduction are possible within the enterprise. Once identified, there are concise, deterministic methods available to quantify cost differences; apply the time value of money to the savings; and determine the internal rate of return (IRR), payback or ROI, and net present value (NPV) of the future savings. These financial metrics are essential for the financial justification of new storage infrastructure.
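
As an illustration of these financial metrics, the following sketch computes NPV, payback period, and IRR for a hypothetical stream of annual savings from a storage refresh; the cash flows and discount rate are invented for the example:

```python
def npv(rate: float, cashflows) -> float:
    """Net present value; cashflows[0] is the upfront (negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(cashflows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None  # never pays back within the horizon

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6) -> float:
    """Internal rate of return by bisection (assumes a single sign
    change in the cash flows, so NPV decreases as the rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical refresh: $300k outlay, then $120k/year savings for 4 years.
flows = [-300_000, 120_000, 120_000, 120_000, 120_000]
print(round(npv(0.08, flows)))  # ≈ 97455: positive NPV at an 8% discount rate
print(payback_years(flows))     # 3: cumulative savings turn positive in year 3
```

A positive NPV at the firm's discount rate, together with an IRR above it, is the usual threshold for justifying the new infrastructure.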

Some of the things that Performance Analysis and Capacity Planning of the Storage Environment can reveal about your IT Investments are:

  • Effectiveness of your storage administration: Human resources remain the single highest data center cost, so this is a key target area for reducing current and future OPEX. Performance analysis reveals gaps in your storage strategy or architecture that make the network more cumbersome to manage. By revising the strategy or architecture, the network’s performance can be optimized and the environment managed with fewer people.
  • Availability of data: Enterprise-class storage, combined with highly available storage network topologies, can reduce the risk of unscheduled downtime. Higher data-path availability means less exposure to opportunity loss or opportunity cost as a result of an outage. New-generation storage provides higher uptime than locally attached disk and SCSI connections. Disk information available from performance analysis can be converted into opportunity-cost or opportunity-loss calculations.
  • Environmental costs: Newer generations of storage systems take up less floor space and reduce power and A/C costs thanks to lower kVA and BTU per gigabyte of storage capacity. The smaller footprint and reduced electrical costs generate real savings for the IT department. Data from performance analysis and capacity planning can help consolidate storage (fewer, larger storage systems) and make this possible.
  • Hardware maintenance costs: Storage hardware contributes significant maintenance costs, since unused capacity must also be paid for. Data from performance analysis and capacity planning can help consolidate storage into a highly cost- and SLA-optimized target footprint, lowering maintenance costs by paying only for capacity that is needed and used.
  • Software maintenance costs: Software licenses are often based on total capacity or the total number of storage frames (controllers). If this number can be reduced through consolidation, additional savings on license fees and maintenance can be realized. Performance and capacity analysis provides exactly the kind of data required for consolidating storage.
  • Asset Utilization: Storage utilization increases as storage is aggregated and shared among a larger population of servers. Less storage waste is realized, and capacity-on-demand allows storage capacity to be purchased in the future (when it is cheaper). Improved storage utilization reduces future storage procurements, and can provide just-in-time storage provisioning. By providing data around disk capacity utilization, performance analysis helps improve asset utilization.
  • Backup Improvement: Storage networks can provide the necessary infrastructure (backup servers, media servers, Fibre Channel, high-speed connections) to improve backup windows, reduce the backup workload on servers, and generally improve recovery times (RTO) and recovery points (RPO). Performance Analysis can provide data for an optimized storage design, which can lower the impact of backups, minimize backup windows and meet quality of service (QoS) SLAs with clients.
  • Disaster recovery/business continuity provisioning: The separation of storage from servers provides an opportunity to manage data and processing separately, and to plan for recovery with better optimization of data. Data replication can be applied to storage to create multiple copies of critical data for other parts of the SAN. Generally, data center managers understand that data loss is less likely to result from natural disasters than from more common events. Performance analysis can help plan for such high-probability events (with features like replication or snapshot copies) along with target data planning.
  • Development Time: Data replication, shadow volumes and snapshot techniques (enabled by advanced storage architectures) can provide very rapid access to development teams that need access to near-real-time production databases and information. Mirrors can be split, and copies made available to developers to test both code and performance on near-live data. Migration time is reduced, development time is positively impacted, and the need for developers to wait for test data is minimized. Whether the network can support this or not will be evident from data obtained through performance analysis.

In short, IT departments are increasingly being forced to accomplish more at lower cost. Performance analysis helps design storage solutions that perform predictably and cost-effectively to support mission-critical applications in a corporate environment. This allows IT managers to maximize the benefits and operational efficiencies of their data centers and storage networks, creating an agile and profitable enterprise.