VMware Software-Defined Storage. Martin Hosken


In a mirrored configuration, every write of data to a disk is also a write to the mirrored disk, meaning that both physical disks contain exactly the same information at all times. This mechanism is once again fully transparent to the vSphere platform and is managed by the RAID controller or storage controller. If a disk fails, the RAID controller uses the mirrored drive for data recovery while continuing to service I/O operations, with data on the replaced drive being rebuilt from the mirrored drive in the background.

The primary benefits of mirroring are that it provides fast recovery from disk failure and improved read performance (see Figure 2.6). However, the main drawbacks include the following:

      • Degraded write performance, as each block of data is written to multiple disks simultaneously

      • A high financial cost for data protection, in that disk mirroring requires a 100 percent cost increase per gigabyte of data


Figure 2.6 Redundancy in disk mirroring

      Enterprise storage systems typically support multiple RAID levels, and these levels can be mixed within a single storage array. However, once a RAID type is assigned to a set of physical disks, all LUNs carved from that RAID set will be assigned that RAID type.

      Nested RAID

Some RAID levels are referred to as nested RAID, as they are based on a combination of RAID levels. Examples of nested RAID levels include RAID 03 (RAID 0+3, also known as RAID 53, or RAID 5+3) and RAID 50 (RAID 5+0). However, the only two commonly implemented nested RAID levels are RAID 1+0, also commonly known as RAID 10, and RAID 01 (RAID 0+1). These two are similar, but their data organization differs slightly: rather than creating mirrored pairs and then striping across them, as in RAID 1+0, RAID 0+1 creates a stripe set and then mirrors it.
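      To make that distinction concrete, the following Python sketch maps a logical block number to physical disks under each layout. It is illustrative only; the four-disk set and the simple block-addressing scheme are assumptions for demonstration, not taken from the book.

```python
# Illustrative sketch of RAID 1+0 vs. RAID 0+1 data placement on four disks.
# Disk numbering and block addressing are assumptions for demonstration only.

def raid10_placement(block, disks=4):
    """RAID 1+0: disks are grouped into mirrored pairs, and data is
    striped across the pairs. Each block lands on both disks of one pair."""
    pairs = disks // 2                   # number of mirrored pairs
    pair = block % pairs                 # stripe across the pairs
    return (pair * 2, pair * 2 + 1)      # both members of the mirror pair

def raid01_placement(block, disks=4):
    """RAID 0+1: disks are split into two stripe sets, and the second
    stripe set mirrors the first. Each block lands on one disk in each set."""
    half = disks // 2                    # disks per stripe set
    member = block % half                # stripe position within each set
    return (member, member + half)       # same position in the mirrored set

if __name__ == "__main__":
    for blk in range(4):
        print(f"block {blk}: RAID 1+0 -> disks {raid10_placement(blk)}, "
              f"RAID 0+1 -> disks {raid01_placement(blk)}")
```

      The data is duplicated in both cases; the difference lies in which disks form the failure domains, which in turn affects how many simultaneous disk failures each layout can survive.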

      Calculating I/O per Second RAID Penalty

One of the primary ways to measure disk performance is input/output operations per second, also referred to as I/O per second or, more commonly, IOPS. The measure is simple: one read request or one write request is equal to one I/O.

Each physical disk in the storage system is capable of providing a fixed number of IOPS. Disk manufacturers calculate this figure based on the drive's rotational speed, average latency, and seek time. Table 2.1 shows examples of typical physical drive IOPS specifications for the most common drive types.

Table 2.1 Typical average I/O per second (per physical disk)

A storage device’s IOPS capability is calculated as an aggregate of the disks that make up the device. For instance, in a JBOD configuration, three disks rotating at 10,000 RPM (approximately 125 IOPS each) provide the JBOD with a total of 375 IOPS. However, with the exception of RAID 0 (which is simply a set of disks aggregated together to create a larger storage device), all RAID configurations cause each write operation to result in multiple writes to the RAID set, in order to provide the targeted level of availability and performance.

In a RAID 5 disk set, for example, each random write request requires the storage controller to perform multiple disk operations, which has a significant impact on the raw IOPS calculation; typically, a RAID 5 disk set requires four IOPS per write operation. RAID 6, which provides a higher level of protection through double fault tolerance, incurs an even greater penalty of six operations per write. Therefore, as the architect of such a solution, you must also plan for the I/O penalty associated with the RAID type being used in the design.
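      To show how this penalty feeds into a design calculation, the following Python sketch estimates the backend IOPS a RAID set must absorb for a given frontend workload and the number of disks that implies. It is an illustration rather than a formula from the book; the workload figures, read/write ratio, and the 125 IOPS per 10,000 RPM drive are assumptions based on the values discussed above.

```python
import math

# Write penalties for common RAID levels, as discussed in the text
# (RAID 0 = 1, RAID 5 = 4, RAID 6 = 6; RAID 1/10 = 2 is the usual figure).
WRITE_PENALTY = {"RAID 0": 1, "RAID 1/10": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(frontend_iops, read_pct, raid_level):
    """Translate a frontend workload into the IOPS the disks must service.
    Reads carry no penalty; each write costs 'penalty' backend operations."""
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    return reads + writes * WRITE_PENALTY[raid_level]

def disks_required(frontend_iops, read_pct, raid_level, iops_per_disk=125):
    """Minimum number of disks needed, assuming ~125 IOPS per 10,000 RPM drive."""
    return math.ceil(backend_iops(frontend_iops, read_pct, raid_level) / iops_per_disk)

if __name__ == "__main__":
    # Hypothetical workload: 1,000 frontend IOPS, 70 percent reads.
    for level in WRITE_PENALTY:
        print(f"{level}: backend IOPS = {backend_iops(1000, 0.7, level):.0f}, "
              f"disks required = {disks_required(1000, 0.7, level)}")
```

      For the same 1,000 IOPS workload, the sketch shows the RAID 5 set servicing 1,900 backend IOPS and the RAID 6 set 2,500, which is why the penalty must be factored into disk-count sizing.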

Table 2.2 summarizes the read and write RAID penalties for the most common RAID levels. Notice that a read operation requires no parity calculation, so no penalty is associated with this type of I/O; the penalty relates specifically to writes, and it comes into play only in write calculations and formulas. This is true even though, in a parity-based RAID level, reads are performed as part of each write operation. For instance, a write to a RAID 5 disk set, where the data being written is smaller than a single block, requires the following actions to be performed (a sketch of this read-modify-write sequence follows the list):

Table 2.2 RAID I/O penalty impact

      RAID Level     Read Penalty   Write Penalty
      RAID 0         1              1
      RAID 1 / 10    1              2
      RAID 5         1              4
      RAID 6         1              6

      1. Read the old data block.

      2. Read the old parity block.

      3. Compare data in the old block with the newly arrived data. For every changed bit, change the corresponding bit in parity.

      4. Write the new data block.

      5. Write the new parity block.
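      The parity update in steps 3 through 5 is an exclusive-OR (XOR) operation. The following Python sketch is a simplified illustration of that read-modify-write sequence; a real controller works on full blocks in firmware and cache, but the byte-level model shows why one small write turns into two reads and two writes.

```python
# Simplified model of a RAID 5 small-write (read-modify-write) sequence.
# A real controller operates on whole blocks in hardware; this sketch just
# mirrors the five steps listed above using short byte strings.

def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data, old_parity, new_data):
    """Return the new parity block for a partial-stripe write.

    Step 1: read the old data block      (old_data)
    Step 2: read the old parity block    (old_parity)
    Step 3: XOR old and new data, then fold the changed bits into parity
    Step 4: write the new data block     (new_data)
    Step 5: write the new parity block   (returned value)
    """
    changed_bits = xor_bytes(old_data, new_data)      # step 3, part one
    new_parity = xor_bytes(old_parity, changed_bits)  # step 3, part two
    return new_parity                                  # steps 4 and 5 are the writes

if __name__ == "__main__":
    old_data, new_data = b"\x0f\x0f", b"\xff\x00"
    other_disks = b"\x33\x33"                  # data on the remaining stripe members
    old_parity = xor_bytes(old_data, other_disks)
    new_parity = raid5_small_write(old_data, old_parity, new_data)
    # Parity must equal the XOR of all current data blocks in the stripe.
    assert new_parity == xor_bytes(new_data, other_disks)
    print("two reads + two writes per small write; parity check passed")
```

      The two reads and two writes in that sequence are exactly the four-IOPS write penalty listed for RAID 5 in Table 2.2.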

As noted previously, a RAID 0 stripe has no write penalty associated with it, because there is no parity to be calculated. In Table 2.2, the absence of a RAID penalty is expressed as a 1.



1

This calculation is based on 100 TB of storage, deployed at a cost of $50–$100 per gigabyte for flash (1–3 percent as tier 1), plus $7–$20 per gigabyte for fast disk (12–20 percent as tier 2), plus $1–$8 per gigabyte for capacity disk (20–25 percent as tier 3), plus $0.20–$2 per gigabyte for low-performance, high-capacity storage (40–60 percent as tier 4), totaling approximately $482,250. Splitting the same capacity requirement between only tier 2 and tier 3, at the estimated cost range per gigabyte for each type of storage, gives an estimated storage infrastructure cost of $765,000.
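As a rough illustration of how such an estimate is built, the Python sketch below sums capacity share times price per gigabyte across tiers. The tier splits and $/GB values are example points chosen from the ranges quoted above, not the exact figures behind the book's totals, so the printed results will differ somewhat from $482,250 and $765,000.

```python
# Rough tiered-storage cost model. The tier splits and $/GB prices below are
# example values picked from the ranges in the footnote; the book's totals
# reflect its own chosen points within those ranges.

CAPACITY_GB = 100_000  # 100 TB, using decimal gigabytes for simplicity

four_tier = [
    ("tier 1 flash",          0.02, 75.0),  # 1-3 percent at $50-100/GB
    ("tier 2 fast disk",      0.15, 13.0),  # 12-20 percent at $7-20/GB
    ("tier 3 capacity disk",  0.23, 4.0),   # 20-25 percent at $1-8/GB
    ("tier 4 high-capacity",  0.60, 1.0),   # 40-60 percent at $0.20-2/GB
]

two_tier = [
    ("tier 2 fast disk",      0.50, 13.0),  # assumed 50/50 split between
    ("tier 3 capacity disk",  0.50, 4.0),   # tier 2 and tier 3
]

def total_cost(tiers, capacity_gb=CAPACITY_GB):
    """Sum of (capacity share x price per gigabyte) across the tiers."""
    return sum(capacity_gb * share * price for _, share, price in tiers)

if __name__ == "__main__":
    print(f"four-tier estimate: ${total_cost(four_tier):,.0f}")
    print(f"two-tier estimate:  ${total_cost(two_tier):,.0f}")
```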