CompTIA Cloud+ Study Guide. Ben Piper


System Requirements

      After you have completed your assessments and needs analysis, you will have defined your requirements and determined which cloud service and deployment models best meet them. The next step is to select a pilot application to migrate to the cloud from your existing data center.

      Prior to performing the migration, the engineering team should sit down and review the complete design, from the application, configuration, hardware, and networking to storage and security. As part of this verification, it is helpful to stage the system in the cloud as a proof of concept. This allows everyone to test the systems and configurations in a cloud environment before going live.

      Correct Scaling for Your Requirements

      The ability of the cloud to scale resources up or down rapidly to match demand is called elasticity. For IaaS services, this can be done automatically as needed using autoscaling. This allows cloud consumers to scale up automatically as their workload increases and then have the cloud remove the services after the workload subsides. For SaaS and PaaS services, dynamic allocation of resources occurs automatically and is handled by the cloud provider. (Later in the chapter, we'll discuss the division of responsibilities between you and the provider.) With elastic computing, there is no longer any need to deploy servers and storage systems designed to handle peak loads—servers and systems that may otherwise sit idle during normal operations. Now you can scale the cloud infrastructure to the normal load and automatically expand as needed when the occasion arises.
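      The book describes autoscaling generically rather than for any one provider. As a minimal sketch, the snippet below uses AWS's boto3 SDK to create an Auto Scaling group and attach a target-tracking scaling policy; the group name, launch template, subnet IDs, and the 50 percent CPU target are hypothetical values chosen purely for illustration.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Size limits for the group; the platform adds or removes instances within these bounds.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",                      # hypothetical group name
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnet IDs
)

# Target-tracking policy: scale out when average CPU rises above about 50 percent
# and scale back in automatically when the workload subsides.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)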

      Pay as you grow (PAYG) is like a basic utility, such as power or water, where you pay for only what you use. This is very cost-effective because there are minimal up-front costs, and the ongoing costs track your actual consumption of the service. The elasticity of the cloud lets you add resources on demand, so there's no need to overprovision for future growth. In a traditional data center, by contrast, computing must be overprovisioned to account for peak usage or future requirements that may never materialize.
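      As a back-of-the-envelope illustration of the difference, the figures below are hypothetical; real rates vary by provider, instance type, and region.

# Hypothetical figures purely for illustration; actual pricing varies by provider and service.
HOURLY_RATE = 0.10        # cost per server-hour in USD
HOURS_PER_MONTH = 730

peak_servers = 10         # fleet sized for peak load (traditional overprovisioning)
average_servers = 3       # capacity actually needed most of the month (pay as you grow)

overprovisioned_cost = peak_servers * HOURLY_RATE * HOURS_PER_MONTH
payg_cost = average_servers * HOURLY_RATE * HOURS_PER_MONTH

print(f"Peak-sized fleet:  ${overprovisioned_cost:,.2f} per month")
print(f"Pay as you grow:   ${payg_cost:,.2f} per month")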

      Making Sure the Cloud Is Always Available

      In this section, you'll become familiar with common deployment architectures used by many of the leading cloud providers to address availability, survivability, and resilience in their service offerings.

      Regions

Schematic illustration of cloud regions.

      A region is a geographical area that contains one or more cloud data centers. All of the regions are interconnected with one another and with the Internet by high-speed optical networks, but they are isolated from one another, so an outage in one region should not affect the operations of other regions.

      Generally, data and resources in one region aren't replicated to any other regions unless you specifically configure such replication to occur. One of the reasons for this is to address regulatory and compliance issues that require data to remain in its country of origin.

      When you deploy your cloud operations, you'll be given a choice of which region you want to use. For a global presence and to reduce network delays, you can also choose to replicate operations in multiple regions around the world.
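      The exam objectives don't assume any particular provider, but as a minimal sketch, AWS's boto3 SDK shows how region selection works in practice: every client is bound to a region, and anything created through that client lands in that region unless you explicitly replicate it elsewhere. The region names here are real AWS regions; everything else is illustrative.

import boto3

# List the regions the account can use (other providers expose a similar catalog).
ec2 = boto3.client("ec2", region_name="us-east-1")
region_names = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print(region_names)

# Resources are created in whichever region the client points at; serving users in
# Europe with lower latency means deploying (or replicating) into a European region.
ec2_europe = boto3.client("ec2", region_name="eu-west-1")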

      Availability Zones

Schematic illustration of availability zones.

      Cluster Placement

      As I alluded to earlier, you can choose to run VMs on different virtualization hosts for redundancy in case one host fails. However, there are times when you want VMs to run on the same host, such as when the VMs need extremely low-latency network connectivity to one another. To achieve this, you would group these VMs into the same cluster and implement a cluster placement rule to ensure that the VMs always run on the same host. If this sounds familiar, it's because this is the same principle you read about in the discussion of hypervisor affinity rules.

      This approach obviously isn't resilient if that host fails, so you may also create multiple clusters of redundant VMs that run on different hosts, and perhaps even in different AZs.
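      Providers expose this affinity/anti-affinity choice in different ways. As one hedged example, AWS models it with placement groups: a cluster strategy keeps instances on the same underlying hardware for low-latency networking, while a spread strategy forces them onto distinct hardware for fault isolation. The group name and AMI ID below are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "cluster" packs instances together for low-latency networking (affinity);
# "spread" would instead place each instance on distinct hardware (anti-affinity).
ec2.create_placement_group(GroupName="low-latency-db", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="c5.large",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-db"},
)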

      Remote Management of VMs

Schematic illustration of local computer running the hypervisor management application.

Schematic illustration of remote hypervisor management application.

      As we've discussed, your options for managing your VMs and other cloud resources are limited to the web management interface, the command-line interface (CLI), and programmatic APIs.
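      As a small illustration of the programmatic option, the sketch below uses AWS's boto3 SDK to list instances and their states; this is the same information the web management interface displays, just retrieved from a script instead of a browser.

import boto3

# Enumerate VMs and their current state, the same data the web console presents.
ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])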