Frontend and backend IOPS


I’ve met quite a few customers and colleagues who are unfamiliar with the terms mentioned in the title. Hopefully a blog post can clear up some of the noise around these terms and help people understand us strange storage guys a bit better. First, let’s make clear what the terms frontend IOPS and backend IOPS mean.

For those who don’t know the term, IOPS means I/O operations per second. With that out of the way, we can explain what frontend and backend IOPS are.

Frontend IOPS is the total number of read and write operations per second generated by an application (or applications). These operations are issued against the disk(s) presented to that application.

Backend IOPS is the total number of read and write operations per second that a RAID/storage controller sends to the physical disks. The backend value is usually higher, for the simple reason that every RAID level carries a certain write overhead. This overhead is called the write penalty.

Some values for write penalties are:

  • RAID 1: 2 backend IO’s for every frontend IO
  • RAID 5: 4 backend IO’s for every frontend IO
  • RAID 6: 6 backend IO’s for every frontend IO

The values above can vary based on how a manufacturer implements its RAID controller(s). Caching also plays a part in calculating the backend IOPS, but to keep the message simple, I won’t go into that now.

Also don’t forget your read/write ratio, which mostly depends on the application you are using. A fairly standard ratio to use is 70% reads and 30% writes, but this might not be the ratio in your specific case! Reads are counted as one (1) IO in this calculation.

Total IOPS = Read IOPS + (RAID level based write penalty * Write IOPS)

Given RAID 1 the formula is:
Total IOPS = Read IOPS + (2 * Write IOPS)
And using RAID 5 the formula is:
Total IOPS = Read IOPS + (4 * Write IOPS)

The Total IOPS can be converted into a number of disks once you know how many IOPS a single disk can deliver. Some commonly used values are:

  • 7.2k RPM nearline drive: 75-100 IOPS
  • 10k RPM drive: 125-150 IOPS
  • 15k RPM drive: 175-210 IOPS
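Putting the two steps together, a rough disk count follows directly from the backend IOPS and a per-disk figure from the list above. Again a sketch with names of my own choosing, using the simplified per-disk values rather than any vendor's rated numbers:

```python
import math

def disks_needed(backend_iops, iops_per_disk):
    """Minimum whole number of disks to serve the given backend IOPS."""
    return math.ceil(backend_iops / iops_per_disk)

# 1900 backend IOPS (the RAID 5 example above) on 10k RPM drives,
# assuming roughly 130 IOPS per drive:
print(disks_needed(1900, 130))  # 15
```

This only sizes for performance; capacity requirements and RAID group layout may demand more disks than the IOPS math alone suggests.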

My guess is that the information above can give you ballpark figures for what your current storage system can deliver, or what a new storage system should be able to deliver. The actual performance of a storage system depends strongly on the manufacturer’s implementation, so the formulas above only give you an indication.

About Martien Korenblom

Infrastructure Consultant at Centric IT Solutions, specializing in storage and backup, with additional knowledge of HP-UX, Linux and virtualization. More than 10 years of experience in ICT.

4 thoughts on “Frontend and backend IOPS”

  1. bill villers

    There are some vendors, for example Fusion-IO (now SanDisk) and PernixData, who provide software that sits between the host (for example, Cisco UCS) and the storage array (for example, an EMC VNX5300) and state that their product increases IOPS performance.

    Can you provide your perspective on these types of products? Are they worthwhile?

  2. Martien Korenblom Post author

    I’ve not factored in vendor architectures, as these sometimes differ a great deal. This article describes the simplest relation between frontend and backend IOPS.

    E.g. HP 3PAR calculates with 2.5 backend IOs per frontend IO when using RAID 5; they factor in the cache, the way parity is calculated when multiple writes hit the same RAID 5 stripe, etc.

    Most (if not all) storage manufacturers have some kind of sizing calculator for their arrays; those should give you vendor-specific IOPS values.

  3. Bozo

    I am quite interested: how do you factor in the amount of cache and relate it to the overall IOPS sizing? I guess this all derives from the vendor architecture, but surely we have to assume some IO hits the cache? I’m interested how much it is, as I have never seen this approach.

