Different strokes for different folks. When it comes to full-stack environment provisioning, every customer has their own opinions and their own cobbled-together set of tools: from homegrown scripts to Chef and everything in between (and on either side). Working with UrbanCode customers over the past year, I noticed a trend: they want “everything” on the stack – from infrastructure up through application and configuration, sometimes including the operating system – to be done in, and from, one place. This is particularly true where they don’t already have an investment in, say, Chef, or don’t want to take on yet another technology/tool/solution/language. What they might have instead is a bunch of existing automation in the form of scripts, knowledge in the heads of their development and operations folks, or reams of documentation that must be worked through carefully to get an environment up.
One such example is deploying the IBM MobileFirst Platform in different topologies quickly, easily and in a repeatable fashion. A look at the Knowledge Center makes it obvious that there are a number of steps that need to be followed, in a certain order, depending on which features are required, what middleware is used and what topology is needed. Let’s take just the “core” components: the MF Platform Server, the MF Operational Analytics Server, the Operations Console and Administrative Services applications, the runtime environments and the database that all of this will use. Assuming that the application server will be WebSphere Liberty and the database will be DB2, all running on Linux, I will look at provisioning three different topologies or environments, the UCD blueprints for which are shown below:
- A “development” environment which has everything on a single server
- A “QA” environment which has a 2 node MFP server farm, 2 analytics servers and a single DB2 server
- A “production” environment which has a 2 node MFP server farm, 2 analytics servers and 2 DB2 servers in a primary/standby HADR configuration
The HOT documents for each, as designed in the UCD Blueprint Designer, are relatively simple. The “heavy lifting” is done by the UCD component processes, so let’s take a look at those in a bit of detail.
- IBM Installation Manager
Possibly the simplest of all the components: it has a single Component Environment Property that specifies the IBM IM installation location. Component versions contain the installation media, and there is a simple process that uses the “Install or Upgrade IBM IM” step from the IBMIM plugin. Once installed, IBM IM is used to install/upgrade IBM WebSphere Liberty.
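Conceptually, the “Install or Upgrade IBM IM” step boils down to “upgrade if IM is already there, otherwise install”. A minimal sketch of that decision, assuming IM’s command-line tool lives at `<install dir>/eclipse/tools/imcl`; the silent-install invocation is echoed rather than run, and the `installc` flags and paths shown are illustrative:

```shell
#!/bin/sh
# Sketch of the install-or-upgrade decision behind the IBMIM plugin step.
# Arguments: the IM install location (a Component Environment Property)
# and the directory holding the unpacked IM media from the component
# version. The real install command is echoed, not executed.
im_step() {
  im_dir="$1"
  media_dir="$2"
  if [ -x "$im_dir/eclipse/tools/imcl" ]; then
    # IM already present: silently upgrade from the new media
    echo "upgrade: $media_dir/installc -acceptLicense -installationDirectory $im_dir -silent"
  else
    # Fresh install from the media in the component version
    echo "install: $media_dir/installc -acceptLicense -installationDirectory $im_dir -silent"
  fi
}

im_step /opt/IBM/InstallationManager /tmp/im-media
```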
- IBM WAS Liberty
This component has a couple of Environment Properties: one that points to the IM Repository for the Liberty media and another that specifies the Liberty installation directory on the target server. It also has an IBM IM response file as a Configuration File Template. The process to install Liberty installs the response file – replacing tokens with the values from the environment properties – and invokes IBM IM on the response file.
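Installing a Configuration File Template essentially means replacing @TOKEN@-style placeholders with property values. A minimal sketch of that token replacement using sed; the token names and values here are made up for illustration and are not the real template’s:

```shell
#!/bin/sh
# Sketch of UCD's Configuration File Template installation: swap
# @TOKEN@ placeholders in a template (read from stdin) for the values
# of the component's environment properties. Token names are illustrative.
render_template() {
  im_repo="$1"
  install_dir="$2"
  sed -e "s|@IM_REPOSITORY@|$im_repo|g" \
      -e "s|@LIBERTY_INSTALL_DIR@|$install_dir|g"
}

# Example: a one-line stand-in for the IM response file
echo '<repository location="@IM_REPOSITORY@"/> <profile installLocation="@LIBERTY_INSTALL_DIR@"/>' \
  | render_template "http://content-server/libertyRepo" "/opt/IBM/wlp"
```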
- IBM DB2
This component is used to install DB2 and create a DB2 instance in one go. It has a number of Environment Properties that specify the DB2 credentials and the location of the DB2 install media server, and a couple of Version Properties that specify the precise URL and name of the DB2 install media. As with the Liberty component, UCD Configuration File Templates are used to store tokenised response files for the DB2 installation and the DB2 instance creation. The installation process is (slightly:-) more complex, making use of a “sub-process” – which in turn calls another sub-process – to detect the Linux variant and install the DB2 prerequisites before using the response files to install DB2 and create the instance.
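The “detect the Linux variant” sub-process can be sketched as a small routine that reads the ID field from an os-release style file; the mapping shown is illustrative, and the real process goes on to install the DB2 prerequisites documented for the detected distribution:

```shell
#!/bin/sh
# Sketch of the Linux-variant detection sub-process: source an
# os-release style file and map its ID to a prerequisite-install route.
# The route names are illustrative.
linux_variant() {
  # $1: path to an os-release file (normally /etc/os-release)
  . "$1"
  case "$ID" in
    rhel|centos|fedora) echo "rpm-based" ;;
    ubuntu|debian)      echo "deb-based" ;;
    sles|opensuse*)     echo "rpm-based" ;;
    *)                  echo "unknown" ;;
  esac
}
```

On a real target this would be called as `linux_variant /etc/os-release`, with the result deciding which prerequisite packages to install.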
- IBM MobileFirst DB
This component keeps versions of the DB2 database creation SQL scripts used to create the empty MFP databases. Component Environment properties specify the DB2 credentials as well as the name to use for the MFP database; these are used to replace tokens in the database creation script downloaded from the component version in the database creation process.
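For flavour, here is an illustrative tokenised stand-in for such a creation script; the token names are hypothetical, and the real scripts come with the MFP documentation:

```sql
-- Illustrative tokenised stand-in for an MFP database creation script.
-- @MFP_DB_NAME@ and @DB2_USER@ are replaced from Component Environment
-- properties before the script is run against the DB2 instance.
CREATE DATABASE @MFP_DB_NAME@ COLLATE USING SYSTEM PAGESIZE 32768;
CONNECT TO @MFP_DB_NAME@;
GRANT CONNECT ON DATABASE TO USER @DB2_USER@;
```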
- DB2 HADR
For the “production” topology, this component has a number of “Operational (No Version Needed)” processes that basically implement the steps described in Section 6.2.3 of the DB2 HADR Redbook. There are a bunch of Component Environment Properties for the DB2 credentials, the database to enable for HADR, the DB2 ports to use, and one to indicate whether the server is a primary or a standby. There are two main processes: one to be run on the primary server and the other on the standby. Each of these calls a bunch of sub-processes as needed. Some of those in turn invoke a couple of Generic processes which make it easier to call “db2 update” or “db2 update cfg”.
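Much of the primary-side configuration boils down to a series of “db2 update db cfg” calls setting the standard DB2 HADR parameters. A sketch that emits (rather than runs) those commands; the database name, host names and port below are illustrative:

```shell
#!/bin/sh
# Sketch of the HADR configuration sub-process: emit the
# "db2 update db cfg" commands for the database being enabled.
# The parameter names are standard DB2 HADR configuration parameters;
# the commands are echoed here, not executed.
hadr_cfg_cmds() {
  db="$1"
  local_host="$2"
  remote_host="$3"
  port="$4"
  echo "db2 update db cfg for $db using HADR_LOCAL_HOST $local_host HADR_LOCAL_SVC $port"
  echo "db2 update db cfg for $db using HADR_REMOTE_HOST $remote_host HADR_REMOTE_SVC $port"
  echo "db2 update db cfg for $db using HADR_SYNCMODE NEARSYNC"
}

hadr_cfg_cmds MFPDATA db2-primary db2-standby 55001
```

On the standby the same routine is reused with the local and remote hosts swapped, which is one reason a shared sub-process pays off.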
- IBM MobileFirst Analytics Server
This component creates a server on the Liberty profile and installs the analytics application (EAR) on it. There are a number of Component properties, though the default values will be sufficient for most of them. Notable properties are the ones that specify the IP of the master server in the QA and production topologies, JNDI entries specific to master and data nodes, and one to indicate whether the node is a data node. The deployment process mostly uses steps from the IBM WebSphere Liberty plugin.
- IBM MobileFirst Platform Server
The most complicated of all the components, mostly because it needs to cater to the clustered (server farm) and DB2 HADR variants as well as the single-server setup. Again there are lots of Component Environment Properties, mostly related to paths and credentials. Notable ones include those used to indicate a server farm, DB2 HADR, and the Analytics server connection information. There are also a number of Configuration File Templates used for the Liberty server.xml, for installing the MFP server with IBM IM, and for running the Ant tasks for the console and database.
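As an illustration of what those templates look like, here is a hypothetical tokenised fragment of a Liberty server.xml datasource definition; the token names are made up, and the real template carries many more entries:

```xml
<!-- Illustrative tokenised fragment of the server.xml Configuration
     File Template; @...@ tokens are replaced from Component
     Environment Properties at deployment time. -->
<dataSource jndiName="@MFP_DB_JNDI@">
    <jdbcDriver libraryRef="DB2Lib"/>
    <properties.db2.jcc databaseName="@MFP_DB_NAME@"
                        serverName="@DB2_HOST@" portNumber="@DB2_PORT@"
                        user="@DB2_USER@" password="@DB2_PASSWORD@"/>
</dataSource>
```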
- Pure UCD vs Chef (vs another Chef-like tool)
While we could have (should have?) used Chef for some of the pieces of this puzzle, I found that just putting everything in UCD made it easy to figure out where things were and, most importantly, I didn’t have to dig down through many levels to find where things went wrong when they did.
The good thing is that future iterations of this could very easily be adapted to use Chef: either using the UCD plugin or Chef roles in the Heat template.
- Provisioning from UCD vs from the UCD designer
One of the challenges was to make sure that there aren’t multiple processes required just to cater to differences in topology or deployment method. In some cases, where the UCD Designer capability cannot be used, the servers (VMs or bare metal) are stood up in the “traditional” way and the UCD agent is installed on them. In such cases UCD Resource Templates, Agent Prototypes and Application Blueprints come in handy. Each of the topologies shown in the Blueprint Designer has a corresponding Application Blueprint used to create an environment. When the agents are installed on the servers they are given names to match the prototypes, and when they come online they get slotted into the right location with the right components assigned to them. Then all that needs to be done is to run an application process to orchestrate the component processes. For example, here’s the production Resource Template. Notice the use of tags to determine which process to run, used for example in the DB2 HADR primary/standby processes:
On the other hand, when provisioning to clouds where the servers can be stood up on demand via the UCD Designer and Heat, the agent installation, naming and placement is all done auto-magically. The Designer also makes it easy to provision the same blueprint to different clouds. Instead of using an application process, the component process orchestration is set up using the Deployment Sequence. In addition, setting property values such as dynamic IP addresses is easy using the UCD Designer’s code assists.
- Component Environment properties vs Environment properties
When provisioning from UCD, the environment is created manually and the various properties need to be changed, if required, after the environment is created. This also allows putting common/shared properties (like the DB2 credentials) at the Environment level rather than in Component Environment properties. However, to be able to set these values in the Heat template in the UCD Designer, they must be captured as Component Environment properties.
- Using a content server vs Download Artifacts
Normally the artifacts for a component are stored in a component version in CodeStation and then retrieved during deployment using the Download Artifacts step. In this implementation, however, all of the binaries (except for IBM IM) are stored on and retrieved from an external content server. This is done primarily because the UCD server used here is relatively small, has limited disk space and is used for other deployments/PoCs/demonstrations, sometimes resulting in NIC overload. A simple Nginx-based content server alleviates this.
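The content server itself needs almost no configuration. A minimal Nginx server block of the kind used, with an illustrative document root:

```nginx
# Minimal Nginx configuration for serving install media over HTTP;
# the root path is illustrative.
server {
    listen 80;
    root /srv/install-media;
    autoindex on;   # browsable directory listings of the media
}
```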
- Using Configuration file templates vs storing files in component versions
Most of the templates, such as those for response files, are stored as Configuration File Templates. This is fine because we don’t expect these files to be changed. However, if these files are expected to differ between component versions, then they are better off stored in the component versions, as is the case with the SQL db creation script.
- Generic DB2 processes vs Shell or plugin step
As I mentioned in the description of the DB2 HADR component, a couple of Generic processes make it easier to call “db2 update” or “db2 update cfg”. One reason I did this is that I did not want to specify the DB2 user’s credentials each time I needed to use “db2 update”: the impersonation is specified once, in the step in the Generic process. The db2 command to run is passed in as the value of a property of the Generic process.
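A sketch of the idea: the db2 command text arrives as a property value, and the impersonation happens in exactly one place. The su invocation is echoed rather than executed, and the instance user and command shown are illustrative:

```shell
#!/bin/sh
# Sketch of the Generic "db2 update" process: the command text is
# passed in as a process property and run once under the DB2 instance
# owner's identity, so callers never handle the credentials themselves.
# The su invocation is echoed here to keep the sketch side-effect free.
run_as_db2() {
  instance_user="$1"   # impersonation target, set once in the step
  db2_cmd="$2"         # value of the Generic process property
  echo "su - $instance_user -c 'db2 $db2_cmd'"
}

run_as_db2 db2inst1 "update db cfg for MFPDATA using LOGARCHMETH1 LOGRETAIN"
```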
As with most things the devil is indeed in the detail; while I’ve not gone into extreme detail on each and every aspect of the solution, hopefully there’s enough here to make sense of how it all hangs together.
It would also be extremely remiss of me if I didn’t mention that I knew (know?:-) precious little of the MobileFirst Platform, and Larry Steck (from IBM Cloud Lab Services) provided all of the MFP knowledge for this and worked closely with me to get this to actually work.