| Title | VMware Software-Defined Storage |
|---|---|
| Author | Martin Hosken |
| ISBN | 9781119292784 |
Manageability
For this design factor, you should keep in mind KISS: keep it standardized and simple. Making a design unnecessarily complex has a serious impact on the manageability of the environment. Also, a design that is unnecessarily complex can easily contribute to failure, because the operational team might not understand the design, and making a change to one component can have implications for another. Instead, your aim should be to keep the design as simple as possible, while still meeting the business goals. The objective should be to keep the design easy to deploy, easy to administer and maintain for the operational teams, and easy to update and upgrade when the time comes.
Goals
The key goals for the design will be different for each project. However, in general, a good design is not unnecessarily complex, provides detailed documentation (which includes rationales for design decisions), balances the organization’s requirements with technical best practices, and involves key stakeholders and the customer’s subject matter experts in every aspect of the design, delivery, testing, and hand-over of the storage platform.
Security and Governance
Needless to say, in today’s world security is a key deliverable in every enterprise IT or cloud service provider project. On some of the projects I’ve worked on involving government agencies and financial institutions, almost every aspect of the design was governed by security considerations and requirements. This can have a significant impact on both operational considerations and budget.
Standards
An enterprise organization or cloud service provider typically has standards that must be met for every project. Hopefully, these standards include a clear methodology for identifying stakeholders, identifying the most relevant business drivers, and providing transparency and traceability for all decisions. Standards might also include a defined and repeatable approach to design, delivery, testing and verification, and hand-over to operational teams.
Performance
Like availability, performance is often governed by a service-level agreement. The design must meet the performance requirements set out by the customer. Performance is typically measured by achievable throughput, latency, I/O operations per second (IOPS), or other defined metrics the customer deems appropriate. Storage performance is probably less well understood than capacity or availability; however, in a virtualized infrastructure, few components have a greater impact on the overall performance of the environment than the storage platform.
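To make these metrics concrete, here is a minimal sketch, assuming purely hypothetical throughput, I/O size, latency, and queue-depth figures, of how the common SLA metrics relate to one another; none of the numbers come from the book.

```python
# Illustrative sketch: relating the throughput, IOPS, and latency figures a
# storage SLA typically specifies. All numbers are hypothetical examples.

def iops_from_throughput(throughput_mb_s: float, io_size_kb: float) -> float:
    """Approximate IOPS sustained at a given throughput and average I/O size."""
    return (throughput_mb_s * 1024) / io_size_kb

def iops_upper_bound(latency_ms: float, queue_depth: int) -> float:
    """Rough IOPS ceiling for a given service time and queue depth (Little's Law)."""
    return queue_depth / (latency_ms / 1000)

# A workload averaging 8 KB I/Os on a path sustaining 400 MB/s:
print(f"{iops_from_throughput(400, 8):,.0f} IOPS")        # ~51,200 IOPS

# A device responding in 1 ms at an effective queue depth of 32:
print(f"{iops_upper_bound(1.0, 32):,.0f} IOPS ceiling")   # ~32,000 IOPS
```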
Recoverability
Like availability and performance, recoverability is typically governed by a service-level agreement. The design should document how the infrastructure can be recovered from any kind of outage. Typically, two metrics are used to define recoverability: recovery time objective (RTO), the amount of time it takes to restore the service after the disruption began, and recovery point objective (RPO), the point in time to which data must be recovered, which in effect defines the maximum acceptable window of data loss.
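As a minimal illustration, assuming hypothetical outage timestamps and target values, the following sketch shows how an RTO and an RPO translate into concrete pass/fail checks against an actual disruption.

```python
# Minimal sketch: checking a hypothetical outage against stated RTO and RPO
# targets. All timestamps and targets are assumed example values.
from datetime import datetime, timedelta

rpo = timedelta(hours=1)   # acceptable data loss: at most 1 hour
rto = timedelta(hours=4)   # acceptable restore time: at most 4 hours

disruption_began = datetime(2023, 6, 1, 14, 30)
last_good_backup = datetime(2023, 6, 1, 13, 45)
service_restored = datetime(2023, 6, 1, 17, 15)

data_loss_window = disruption_began - last_good_backup  # 45 minutes
outage_duration = service_restored - disruption_began   # 2 hours 45 minutes

print("RPO met:", data_loss_window <= rpo)  # True
print("RTO met:", outage_duration <= rto)   # True
```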
Scalability
The design should be scalable, able to grow as the customer’s data requirements change and the storage platform is required to expand. As part of the project, it is important to determine the business growth plans for data capacity, as well as any future performance requirements. This information is typically provided as a percentage of growth per year, and the design should take these factors into account. Later we address a building-block approach to storage design, but for now, it is essential that the customer provide clear expectations for the growth of their environment, as this will almost certainly impact the design.
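As a rough sketch, assuming an example starting capacity, growth rate, and design horizon, the following shows how a stated annual growth percentage compounds into a multi-year capacity projection.

```python
# Rough sketch: compounding a stated annual growth rate into a multi-year
# capacity projection. Starting capacity, rate, and horizon are assumptions.

def project_capacity(current_tb: float, annual_growth_pct: float, years: int) -> list:
    """Project usable-capacity demand for each year of the design horizon."""
    rate = 1 + annual_growth_pct / 100
    return [current_tb * rate ** year for year in range(years + 1)]

# 200 TB today, growing 25% per year, sized over a five-year horizon:
for year, tb in enumerate(project_capacity(200, 25, 5)):
    print(f"Year {year}: {tb:,.0f} TB")
# Year 5 lands at roughly 610 TB, more than triple the day-one requirement.
```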
Capacity
The design’s capacity can typically be adjusted as the business grows or shrinks. Capacity is generally predictable and can be provisioned on demand, as it is typically a relatively easy procedure to add disks and/or enclosures to most storage arrays or hosts without experiencing downtime. As a result, capacity can be managed relatively easily, but it is still an important aspect of storage design.
The Economics of Storage
At first glance, storage technologies, much like compute resources, should be priced based on a commodity hardware model; however, this is typically not the case. As illustrated in Figure 1.4, the cost of raw physical disk storage, on a per-gigabyte basis, continues to fall each year, and has been doing so since the mid-1980s.
Figure 1.4 Hard disk drive cost per gigabyte
As you might expect, alongside this falling cost per gigabyte, raw disk capacity per drive has continued to increase, and both trends align with the falling per-gigabyte prices charged by cloud service providers. This is illustrated in Figure 1.5, where the increasing capacity available on physical disks closely tracks that falling cost.
Figure 1.5 Hard disk drive capacity improvements
Despite the falling cost of raw disk capacity, and of the chassis, the disk shelves used to create disk arrays, and the storage controllers tasked with organizing disks into large RAID (redundant array of independent disks) or JBOD (just a bunch of disks) sets, vendor prices for storage technologies continue to increase year after year, regardless of the growing commoditization of the components they are built from.
The reason for this is the ongoing development and sophistication of vendor software. For instance, an array made up of commoditized components, including 300 2 TB disks stacked in commodity shelves, may have a hardware cost totaling approximately $4,000. However, the array vendor might assign it a manufacturer’s suggested retail price of $400,000. This price is based on the vendor adding its secret-sauce software, enabling the commodity hardware to include features such as manageability and availability, to provide the performance its customers require, and to differentiate the product from that of its competitors. It is this aspect of storage that often adds the most significant cost component to storage technologies, regardless of the actual value added by the vendor’s software, or of which of those added features are actually used by its customers.
So whether you are buying or leasing, these costs and other factors all contribute to the total cost of acquiring storage resources, which is why IT organizations are increasingly trying to extend the useful life of their storage hardware. A decade ago, IT organizations were purchasing hardware with an expected useful life of three years. Today, the same IT organizations routinely acquire hardware with the aim of achieving a five-to-seven-year useful life. One of the challenges is that most hardware and software ships with a three-year support contract and warranty, and renewing that agreement when it reaches end of life can sometimes cost as much as purchasing an entirely new array.
The next significant aspect of storage ownership to consider is that hardware acquisition accounts for only approximately one-fifth of the estimated annual total cost of ownership (TCO). The remaining operational and management costs (OpEx) therefore clearly outweigh the cost to acquire, or capital expenditure (CapEx), and are a far greater factor than many IT organizations account for in their initial design and planning cost estimates.
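As a back-of-the-envelope illustration, assuming hypothetical dollar figures (only the one-fifth ratio comes from the text above), the split looks like this:

```python
# Back-of-the-envelope sketch: if hardware acquisition is roughly one-fifth of
# annual TCO, the remaining operational share dominates over the service life.
# All dollar figures are hypothetical.

annual_hardware_cost = 40_000                    # amortized acquisition cost per year
annual_tco = annual_hardware_cost / 0.20         # acquisition ~ one-fifth of annual TCO
annual_opex = annual_tco - annual_hardware_cost  # the remaining four-fifths

useful_life_years = 5
print(f"Annual TCO:  ${annual_tco:,.0f}")   # $200,000
print(f"Annual OpEx: ${annual_opex:,.0f}")  # $160,000
print(f"OpEx over {useful_life_years} years: ${annual_opex * useful_life_years:,.0f}")  # $800,000
```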
Calculating the