Title | AWS Certified Solutions Architect Study Guide
---|---
Author | David Higby Clinton
ISBN | 9781119713104
Step Scaling Policies
If the demand on your application is rapidly increasing, a simple scaling policy may not add enough instances quickly enough. Using a step scaling policy, you can instead add instances based on how much the aggregate metric exceeds the threshold.
To illustrate, suppose your group starts out with four instances. You want to add more instances to the group as the average CPU utilization of the group increases. When the utilization hits 50 percent, you want to add two more instances. When it goes above 60 percent, you want to add four more instances.
You'd first create a CloudWatch Alarm to monitor the average CPU utilization and set the alarm threshold to 50 percent, since this is the utilization level at which you want to start increasing the desired capacity.
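As a rough sketch of this first step, the alarm could be created with boto3 along the following lines; the group name web-asg and the alarm name are placeholders. The alarm's action is left empty here, since the step scaling policy's ARN gets attached once that policy exists (see the sketch that follows the warm-up discussion below).

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the Auto Scaling group's average CPU utilization,
# with the threshold set to 50 percent.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-cpu-high",            # placeholder alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,                              # average over five-minute periods
    EvaluationPeriods=1,
    Threshold=50.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)
```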
You must then specify at least one step adjustment. Each step adjustment consists of the following:
A lower bound
An upper bound
The adjustment type
The amount by which to increase the desired capacity
The upper and lower bounds define a range that the metric has to fall within for the step adjustment to execute. Suppose that for the first step you set a lower bound of 50 and an upper bound of 60, with a ChangeInCapacity adjustment of 2. When the alarm triggers, Auto Scaling will consider the metric value of the group's average CPU utilization. Suppose it's 55 percent. Because 55 is between 50 and 60, Auto Scaling will execute the action specified in this step, which is to add two instances to the desired capacity.
Suppose now that you create another step with a lower bound of 60 and an upper bound of infinity. You also set a ChangeInCapacity adjustment of 4. If the average CPU utilization increases to 62 percent, Auto Scaling will note that 60 <= 62 < infinity and will execute the action for this step, adding four instances to the desired capacity.
You might be wondering what would happen if the utilization were exactly 60 percent. Step ranges can't overlap, and the lower bound is inclusive, so a metric value of 60 percent would fall within the second step and Auto Scaling would add four instances.
With a step scaling policy, you can optionally specify a warm‐up time, which is how long Auto Scaling will wait before considering the metrics of newly added instances. The default warm‐up time is 300 seconds. Note that there are no cooldown periods in step scaling policies.
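Here's a minimal boto3 sketch of the two-step policy described above, again using the placeholder group name web-asg. One wrinkle worth noting: in the API, the step bounds are expressed as offsets from the alarm threshold (50 percent here), so the 50 to 60 step becomes 0 to 10, and the 60-to-infinity step becomes a lower bound of 10 with no upper bound.

```python
import boto3

autoscaling = boto3.client("autoscaling")

response = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # placeholder group name
    PolicyName="cpu-step-scale-out",         # placeholder policy name
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[
        # 50% <= average CPU < 60%: add two instances
        {"MetricIntervalLowerBound": 0.0,
         "MetricIntervalUpperBound": 10.0,
         "ScalingAdjustment": 2},
        # average CPU >= 60%: add four instances
        {"MetricIntervalLowerBound": 10.0,
         "ScalingAdjustment": 4},
    ],
    EstimatedInstanceWarmup=300,             # the 300-second default warm-up, set explicitly
)

# Attaching the returned policy ARN to the alarm's AlarmActions is what
# makes the 50 percent alarm trigger this policy.
policy_arn = response["PolicyARN"]
```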
Target Tracking Policies
If step scaling policies are too involved for your taste, you can instead use target tracking policies. All you do is select a metric and target value, and Auto Scaling will create a CloudWatch Alarm and a scaling policy to adjust the number of instances to keep the metric near that target.
The metric you choose must change proportionally to the instance load. Metrics like this include average CPU utilization for the group and request count per target. Aggregate metrics like the total request count for the ALB don't change proportionally to the load on an individual instance and aren't appropriate for use in a target tracking policy.
In addition to scaling out, target tracking will scale in by terminating instances to maintain the target metric value. If you don't want this behavior, you can disable scaling in. Also, just as with a step scaling policy, you can optionally specify a warm‐up time.
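A target tracking policy that keeps the group's average CPU utilization near 50 percent might look like the following boto3 sketch; the group and policy names are placeholders, and Auto Scaling creates the supporting CloudWatch alarms on your behalf.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # placeholder group name
    PolicyName="cpu-target-50",              # placeholder policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                 # keep average CPU utilization near 50 percent
        "DisableScaleIn": False,             # set to True to prevent scaling in
    },
    EstimatedInstanceWarmup=300,             # optional warm-up, as with step scaling
)
```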
Scheduled Actions
Scheduled actions are useful if you have a predictable load pattern and want to adjust your capacity proactively, ensuring you have enough instances before demand hits.
When you create a scheduled action, you must specify the following:
A minimum, maximum, or desired capacity value
A start date and time
You may optionally set the policy to recur at regular intervals, which is useful if you have a repeating load pattern. You can also set an end time, after which the scheduled policy gets deleted.
To illustrate how you might use a scheduled action, suppose you normally run only two instances in your Auto Scaling group during the week. But on Friday, things get busy, and you know you'll need four instances to keep up. You'd start by creating a scheduled action that sets the desired capacity to 2 and recurs every Saturday, as shown in Figure 2.3.
FIGURE 2.3 Scheduled action setting the desired capacity to 2 every Saturday
The start date is January 5, 2019, which is a Saturday. To handle the expected Friday spike, you'd create another weekly recurring policy to set the desired capacity to 4, as shown in Figure 2.4.
FIGURE 2.4 Scheduled action setting the desired capacity to 4 every Friday
This action will run every Friday, setting the desired capacity to 4, prior to the anticipated increased load.
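Here's a sketch of how these two actions might be created with boto3, using the placeholder group name web-asg. The Recurrence field takes a cron expression evaluated in UTC; the 08:00 start times and the action names are illustrative.

```python
from datetime import datetime

import boto3

autoscaling = boto3.client("autoscaling")

# Every Friday at 08:00 UTC, raise the desired capacity to 4 ahead of the spike.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="friday-scale-up",       # placeholder action name
    Recurrence="0 8 * * 5",                      # cron: Fridays at 08:00 UTC
    DesiredCapacity=4,
)

# Every Saturday at 08:00 UTC, drop the desired capacity back to 2.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="saturday-scale-down",   # placeholder action name
    StartTime=datetime(2019, 1, 5),              # the Saturday start date from Figure 2.3
    Recurrence="0 8 * * 6",                      # cron: Saturdays at 08:00 UTC
    DesiredCapacity=2,
)
```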
Note that you can combine scheduled actions with dynamic scaling policies. For example, if you're running an e‐commerce site, you may use a scheduled action to increase the maximum group size during busy shopping seasons and then rely on dynamic scaling policies to increase the desired capacity as needed.
AWS Systems Manager
AWS Systems Manager, formerly known as EC2 Systems Manager and Simple Systems Manager (SSM), lets you automatically or manually perform actions against your AWS resources and on‐premises servers.
From an operational perspective, Systems Manager can handle many of the maintenance tasks that often require manual intervention or writing scripts. For on‐premises and EC2 instances, these tasks include upgrading installed packages, taking an inventory of installed software, and installing a new application. For your other AWS resources, such tasks may include creating an AMI golden image from an EBS snapshot, attaching IAM instance profiles, or disabling public read access to S3 buckets.
Systems Manager provides the following two capabilities:
Actions
Insights
Actions
Actions let you automatically or manually perform actions against your AWS resources, either individually or in bulk. These actions must be defined in documents, which are divided into three types:
Automation—actions you can run against your AWS resources
Command—actions you run against your Linux or Windows instances (a Command document example follows this list)
Policy—defined processes for collecting inventory data from managed instances
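As an example of a Command document in use, the following boto3 sketch runs a shell command on a managed Linux instance using the AWS-provided AWS-RunShellScript document; the instance ID is a placeholder.

```python
import boto3

ssm = boto3.client("ssm")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],          # placeholder managed instance ID
    DocumentName="AWS-RunShellScript",            # AWS-provided Command document
    Parameters={"commands": ["yum update -y"]},   # e.g., upgrade installed packages
)

# The command ID can be used to check execution status later.
print(response["Command"]["CommandId"])
```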
Automation