OpenStack AutoScaling in UrbanCode Deploy with Patterns

The first Fix Pack released for UCD+P added initial support for provisioning to VMware vSphere infrastructure, along with support for auto-scaling groups in OpenStack. The UCD+P Knowledge Center topic describes what is currently supported and this YouTube video shows it in action. This post works through an example of creating and provisioning a UCD+P blueprint with an auto-scaling policy on OpenStack.

Since the support depends on Ceilometer, the first step is to ensure we have a working Ceilometer configuration. When using Devstack to install OpenStack, the Ceilometer service entries in local.conf look like this:

# Enable the ceilometer metering services
enable_service ceilometer-acompute ceilometer-acentral ceilometer-anotification ceilometer-collector

# Enable the ceilometer alarming services
enable_service ceilometer-alarm-evaluator,ceilometer-alarm-notifier

# Enable the ceilometer api services
enable_service ceilometer-api

The default data collection interval is 600 seconds, and listening to my poor cooling fans blowing away at full speed while stressing the CPUs for that long isn’t pleasant. So this setting changes the interval to 10 seconds:

CEILOMETER_PIPELINE_INTERVAL=10
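
To confirm the shorter interval actually took effect (a quick check, assuming the stock Devstack install paths), grep the generated pipeline configuration; each pipeline should now report an interval of 10:

grep interval /etc/ceilometer/pipeline.yaml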

Add the following entries to the Nova configuration meta-section of local.conf:

notification_driver=nova.openstack.common.notifier.rabbit_notifier
notification_driver=ceilometer.compute.nova_notifier
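
If the meta-section syntax is unfamiliar: in a Devstack local.conf these entries go in a [[post-config|$NOVA_CONF]] block (the duplicated key is intentional, since notification_driver is a multi-valued option), so the whole addition looks something like this:

[[post-config|$NOVA_CONF]]
[DEFAULT]
notification_driver=nova.openstack.common.notifier.rabbit_notifier
notification_driver=ceilometer.compute.nova_notifier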

Once stack.sh has done its thing, check that the ceilometer services are up:

ps a | grep ceilometer
 3344 pts/31   S+     0:03 /usr/bin/python /usr/local/bin/ceilometer-alarm-evaluator --config-file /etc/ceilometer/ceilometer.conf
 3379 pts/27   S+     5:16 /usr/bin/python /usr/local/bin/ceilometer-agent-notification --config-file /etc/ceilometer/ceilometer.conf
 3409 pts/26   S+     1:18 /usr/bin/python /usr/local/bin/ceilometer-agent-central --config-file /etc/ceilometer/ceilometer.conf
 3420 pts/29   S+     0:29 /usr/bin/python /usr/local/bin/ceilometer-api -d -v --log-dir=/var/log/ceilometer-api --config-file /etc/ceilometer/ceilometer.conf
 3426 pts/30   S+     0:00 /usr/bin/python /usr/local/bin/ceilometer-alarm-notifier --config-file /etc/ceilometer/ceilometer.conf
 3429 pts/28   S+     5:15 /usr/bin/python /usr/local/bin/ceilometer-collector --config-file /etc/ceilometer/ceilometer.conf
 3557 pts/28   S+     2:13 /usr/bin/python /usr/local/bin/ceilometer-collector --config-file /etc/ceilometer/ceilometer.conf
 3558 pts/27   S+     0:02 /usr/bin/python /usr/local/bin/ceilometer-agent-notification --config-file /etc/ceilometer/ceilometer.conf
19656 pts/25   S+     0:27 /usr/bin/python /usr/local/bin/ceilometer-agent-compute --config-file /etc/ceilometer/ceilometer.conf
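
If the processes are there, a quick functional check is to hit the API with the client; this should return a (possibly empty at this point) meter list rather than a connection or authentication error:

ceilometer meter-list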

Assuming that an Authentication Realm, Cloud project and Team have all been configured (see Security in the UCD+P Knowledge Center), log in as a user defined in the OpenStack realm and click “New..” on the Blueprints page. Give the blueprint a name, select “BluePrint” for Type and click Save.

Drag and drop the “private” Network resource from the Network drawer and a “New AutoScaling Group” from the Policies drawer of the palette on the right onto the blueprint.

[Screenshot: the private network and a new autoscaling group dropped onto the blueprint]

Selecting the autoscaling group shows the Properties, Policy and Alarm settings that can be configured for it. Clicking the Scaling Policies icon brings up a Policy dialog. Leaving the default policies as-is and changing the Max Size property to “2” will cause the group to add an instance when the CPU utilization on the first instance rises above 50% and remove it again once that utilization drops below 15%.

[Screenshot: the Scaling Policies dialog]

Next, drag and drop the “Ubuntu-14.04.x86_64” Compute resource onto the autoscaling group, connect it to the private network element and click the “IP” icon at the bottom right of the Ubuntu compute resource. This creates a new (nested) blueprint to hold the contents of the autoscaling group.

[Screenshot: the new nested blueprint holding the autoscaling group contents]

Save the blueprint, click Provision, provide values for the TODO items and click Provision.

[Screenshot: the Provision dialog]

The Environments page should show the new stack being created and the initial instance details once complete.

[Screenshot: the Environments page showing the new stack being created]

[Screenshot: details of the initial instance in the environment]
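
The same stack can also be inspected from the OpenStack side with the Heat CLI. Judging by the alarm names further down, the stack in this example was called autoscale_env01 (after the environment), so something along these lines lists it and its resources, which should include the autoscaling group, the scaling policies and the two alarms Heat created for them:

heat stack-list
heat resource-list autoscale_env01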

The instance shows up on the OpenStack server:

nova list
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+
| ID                                   | Name                | Status | Task State | Power State | Networks                     |
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+
| 50577402-bbbf-4aab-99d1-66d8bdacc68f | Ubuntu-14.04.x86_64 | ACTIVE | -          | Running     | private=10.0.0.6, 172.24.4.8 |
+--------------------------------------+---------------------+--------+------------+-------------+------------------------------+

Ceilometer will show the meters for this instance:

ceilometer meter-list -q resource=50577402-bbbf-4aab-99d1-66d8bdacc68f
+--------------------------+------------+-----------+--------------------------------------+----------------------------------+----------------------------------+
| Name                     | Type       | Unit      | Resource ID                          | User ID                          | Project ID                       |
+--------------------------+------------+-----------+--------------------------------------+----------------------------------+----------------------------------+
| cpu                      | cumulative | ns        | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| cpu_util                 | gauge      | %         | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.ephemeral.size      | gauge      | GB        | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.read.bytes          | cumulative | B         | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.read.bytes.rate     | gauge      | B/s       | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.read.requests       | cumulative | request   | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.read.requests.rate  | gauge      | request/s | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.root.size           | gauge      | GB        | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.write.bytes         | cumulative | B         | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.write.bytes.rate    | gauge      | B/s       | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.write.requests      | cumulative | request   | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| disk.write.requests.rate | gauge      | request/s | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| instance                 | gauge      | instance  | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| instance.scheduled       | delta      | instance  | 50577402-bbbf-4aab-99d1-66d8bdacc68f | None                             | 6210e0622ba14d2ab9a174317747c74f |
| instance:m1.little       | gauge      | instance  | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| memory                   | gauge      | MB        | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
| vcpus                    | gauge      | vcpu      | 50577402-bbbf-4aab-99d1-66d8bdacc68f | da893e11da6447ff9e49911cee7ee034 | 6210e0622ba14d2ab9a174317747c74f |
+--------------------------+------------+-----------+--------------------------------------+----------------------------------+----------------------------------+
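
The meter the scaling policies act on is cpu_util, and with the 10-second pipeline interval it can be watched almost in real time. A query along these lines (aggregated over 60-second periods to match the alarms) shows the utilization values the alarm evaluator will see:

ceilometer statistics -m cpu_util -q resource=50577402-bbbf-4aab-99d1-66d8bdacc68f -p 60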

Ceilometer also lists the alarms that were created:

ceilometer alarm-list
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+
| Alarm ID                             | Name                                                                          | State | Enabled | Continuous | Alarm condition                | Time constraints |
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+
| 4e8b3801-cbd2-44a9-932a-30df92080e9a | autoscale_env01-autoscaling_group_scaleup_policy_cpu_alarm_high-h3e5hvnlblsj  | ok    | True    | False      | cpu_util > 50.0 during 1 x 60s | None             |
| 9e154ecb-dde4-46c7-bf2e-97fbcabab9b9 | autoscale_env01-autoscaling_group_scaledown_policy_cpu_alarm_low-wauylsnpnyav | ok    | True    | False      | cpu_util < 15.0 during 1 x 60s | None             |
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+

Now for the fun bit: stressing the CPUs on the instance to trigger the “cpu_util > 50.0 during 1 x 60s” alarm.

One simple way to do this is to run

   yes > /dev/null &

once for each CPU on the instance. The CPU utilization maxes out:

[Screenshot: top showing CPU utilization maxed out]
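
To start one yes per CPU in a single go, a small loop does the trick (a convenience sketch; it assumes nproc is available on the image, which it is on Ubuntu 14.04):

for i in $(seq $(nproc)); do yes > /dev/null & done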

Ceilometer shows the alarm being triggered:

ceilometer alarm-list
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+
| Alarm ID                             | Name                                                                          | State | Enabled | Continuous | Alarm condition                | Time constraints |
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+
| 4e8b3801-cbd2-44a9-932a-30df92080e9a | autoscale_env01-autoscaling_group_scaleup_policy_cpu_alarm_high-h3e5hvnlblsj  | alarm | True    | False      | cpu_util > 50.0 during 1 x 60s | None             |
| 9e154ecb-dde4-46c7-bf2e-97fbcabab9b9 | autoscale_env01-autoscaling_group_scaledown_policy_cpu_alarm_low-wauylsnpnyav | ok    | True    | False      | cpu_util < 15.0 during 1 x 60s | None             |
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+

Back in UCD+P, the Environment details page shows a second instance spawned.

[Screenshot: the Environment details page showing a second instance]

By this time the cooling fans are going nuts, so a “killall yes” brings the CPU utilization back down below 15%, which triggers the low alarm:

ceilometer alarm-list
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+
| Alarm ID                             | Name                                                                          | State | Enabled | Continuous | Alarm condition                | Time constraints |
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+
| 4e8b3801-cbd2-44a9-932a-30df92080e9a | autoscale_env01-autoscaling_group_scaleup_policy_cpu_alarm_high-h3e5hvnlblsj  | ok    | True    | False      | cpu_util > 50.0 during 1 x 60s | None             |
| 9e154ecb-dde4-46c7-bf2e-97fbcabab9b9 | autoscale_env01-autoscaling_group_scaledown_policy_cpu_alarm_low-wauylsnpnyav | alarm | True    | False      | cpu_util < 15.0 during 1 x 60s | None             |
+--------------------------------------+-------------------------------------------------------------------------------+-------+---------+------------+--------------------------------+------------------+

UCD+P shows the environment back to a single instance and the cooling fans breathe a (quite literal) sigh of relief.

[Screenshot: the environment back down to a single instance]
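
For a record of exactly when each threshold was crossed, the alarm history can be pulled up with something along these lines (using the scale-up alarm ID from the listing above):

ceilometer alarm-history -a 4e8b3801-cbd2-44a9-932a-30df92080e9a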

Currently only the CPU utilization meter is supported, but I guess more of the others (from the ceilometer meter-list) will be added in future updates to UCD+P.
