Tuesday, September 1, 2015

Automatic Shutdown of Idle Machines with vRealize Operations and vRealize Automation

(Credit to my colleagues for this work: James Polizzi, Scott Stickells)

We were recently engaged with a customer who had an issue with over-consumption of their development platform.  This is probably familiar to lots of us, and the customer was interested in how the vRealize Suite could help solve it.

As we know, vRealize Automation helps to attach ownership to resources, identify an appropriate lease time for non-permanent machines, and manage the entire lifecycle of virtual machines.  vRealize Operations helps even further by identifying inefficiencies in the environment – in particular, those virtual machines that are chewing up resources inappropriately, whether over-sized, idle and unused, or perhaps just hogging storage.

In our customer’s situation, they not only wanted to control the lifecycle of virtual machines, but also ensure that ONLY needed virtual machines were running on the platform.  That meant figuring out which virtual machines were idle and unused, and shutting them down immediately – all without human intervention, review or overhead.  To help address this, we looked at two main elements of extending the vRealize Suite:

  1. A policy in vRealize Operations that could be associated with the development environment, identify idle machines (with the appropriate policy to define what “idle” meant), and AUTOMATICALLY take an action to call vRealize Orchestrator and shut down the idle machines.
  2. A workflow in vRealize Orchestrator that would take the parameters from vRealize Operations, find the machines under vRealize Automation, invoke the clean and controlled “Shutdown” action, and notify the associated machine owner of the action that had been taken.

vRealize Operations

(Just as a preliminary note, one of the key features that is soon to be released in vRealize Operations 6.1 is the ability to fully automate “Smart Alerts” without ANY user intervention.  So for the purposes of this blog, keep in mind that we are using the vRealize Operations 6.1 BETA release.)

To achieve the required outcomes there were a few elements that we configured in vRealize Operations. 

Firstly, we installed a customised version of the vRealize Orchestrator adapter available here.  When I say customised, what we did was add an additional vRealize Operations action called “VM: Auto Power” that references the vRealize Orchestrator workflow we created (detailed below).  In theory, this could reference any workflow you require. While we will see this process simplified in future releases, today it requires assistance from VMware Professional Services – or some detailed know-how of Solution Pack design!

Secondly, we created a “Custom policy” which we will assign to a Custom group of objects. This allows us to selectively enforce our automated alert and define exactly what we consider to be an “Idle VM”. 

Next we created a “Custom group” of objects (VMs) that we wanted to target with this automated remediation action. We have the ability to filter on a number of different aspects.  One of the more popular ways is to create a dynamic group using a vSphere tag, or perhaps using folder structures in vCenter.  In our case, for simplistic testing, we statically assigned particular workloads to this group using “Objects to always include”.  After this group is created, it needs to be referenced in the custom policy specified above.  Take care with the order in which you perform these actions, to ensure you don’t accidentally start to automatically shutdown idle machines across all vRA-managed workloads!

After this we configured a new “Smart Alert” with the condition “VM is Idle = 1” and added the action “VM: Auto Power”, to trigger our Orchestrator workflow when workloads are detected as idle.

Finally, we activated the alert – by “editing” our recently created policy, finding the alert in question (in section 5. Override Alert/Symptoms Definitions) and selecting “enable” for both “state” and “automate”.  In other words, we enable this alert locally in the policy and enable the automated action when the given conditions are true!

Note: the “Automate” option is disabled on all alerts by default – which is definitely a good thing considering the power of this functionality!

Once this is all complete, whenever any workload that is a member of the given group/policy is detected as idle, the Shutdown workflow will automatically be initiated and an email notification sent to the owner of the workload!

vRealize Orchestrator

In this workflow, which you can download here, we still need to manually configure a few items first.  This is done by invoking the “Configuration” workflow first, which makes it a little easier to draw out the static environment inputs needed by the workflow such as your email server, port, etc.

The workflow is invoked by vRealize Operations with a virtual machine identifier, an event correlation identifier (for logs and auditing), and the unique ID for the vCenter Server (in case your vRealize Operations is monitoring multiple vCenters).  Virtual machines have quite a few different types of identifiers – in this case, vRealize Operations is passing the Managed Object ID which is of the type “vm-9110”.  This identifier is not guaranteed to be unique, especially across multiple vCenters.  More importantly, it isn’t the one tracked by vRealize Automation.  The workflow, therefore, needs to use the given identifier to find the “instanceUUID” of the VM from vCenter, which is also a key attribute (“vmUniqueId”) in vRealize Automation.
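The mapping step can be sketched in plain JavaScript.  This is only an illustrative sketch – the record shape and function name are my own, not the vRO plug-in API – assuming a list of VM records has already been pulled from the monitored vCenters:

```javascript
// Illustrative sketch only: map a vCenter Managed Object ID (e.g.
// "vm-9110") to the VM's instanceUUID.  The vmRecords array shape is
// hypothetical; in vRO you would build it via the vCenter plug-in.
function findInstanceUuid(vmRecords, vcenterUuid, morefId) {
    for (var i = 0; i < vmRecords.length; i++) {
        var rec = vmRecords[i];
        // Match on BOTH the vCenter and the MoRef, since MoRefs like
        // "vm-9110" are only unique within a single vCenter Server.
        if (rec.vcenterUuid === vcenterUuid && rec.moref === morefId) {
            return rec.instanceUuid;  // the value vRA tracks as "vmUniqueId"
        }
    }
    return null;  // not found (perhaps already deleted from vCenter)
}
```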

Next, the workflow extracts the matching VM object from vRealize Automation, and also the email address of the object’s owner.  It uses this later, with the in-built mail workflows, to notify the owner that their idle machine has been shut down.  In vRealize Automation, the Shutdown action is not something that can be called by name, but rather has its own UUID which has to be found.  In the workflow, we enumerate all the available actions for the identified virtual machine, and find the one named “Shutdown”.  There is a chance that this action won’t be found – most especially if the virtual machine is already off!  This could also happen if the virtual machine is in the middle of some other state (being reconfigured), or if the Shutdown action isn’t exposed by the blueprint owner.
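The action lookup reduces to a simple scan over the enumerated actions.  A minimal sketch, assuming each action arrives as an object with name and id fields (a simplification of the real catalog response):

```javascript
// Find the UUID of a named vRA resource action (e.g. "Shutdown")
// among the actions currently available on a managed machine.
function findActionId(availableActions, actionName) {
    for (var i = 0; i < availableActions.length; i++) {
        if (availableActions[i].name === actionName) {
            return availableActions[i].id;
        }
    }
    // A null result covers the "action not available" cases: the VM is
    // already off, mid-reconfigure, or the blueprint doesn't expose it.
    return null;
}
```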

As you can see by the below screenshots, this workflow has a relatively small number of steps (if you ignore the helpful logging parts!), and just works as designed.

In the email notifications, we found a way to reconstruct the URL directly to the virtual machine in vRealize Automation, and hence help the VM owner to quickly go and power their machine straight up again if needed.  This leads me onto the conclusions for this article…


While this workflow does exactly what was desired – shutting down idle virtual machines – it doesn’t have the whole picture covered yet.  From a technical perspective, should the workflow look to “Power off” the machine if Shutdown doesn’t work?  How should it handle multi-machine blueprints (we didn’t test those)?

A really helpful consequence of invoking the Shutdown action through the vRealize Automation layer is that any existing Approval steps or further custom extensions will still be invoked, just as if the user had tried to manually shut down their virtual machine.  This opens up the potential for the system to be controlled and reviewed through a business process, using the standard vRealize Automation features.

This opens up a line of questions around whether ANY machines should be subject to automatic shutdown without human intervention and review.  The answer for some environments will be an emphatic “NO”, because the impacts won’t be well understood straight away.  Other environments already do this, however, because the private/public cloud use cases are built within a certain operational culture from the outset.  So, before you take this work and throw it into your own environment, ask some questions about what virtual machine populations this would be suitable for, what would happen if idle machines magically became unavailable, and whether this is really addressing your core problems.  You might find confirmations that this is indeed the right way to go, but you might also find that the basic features of vRealize Automation allow enough accountability, lease control and cost visibility to address most of the wastage inside your infrastructure.

Happy automating!

Thursday, July 9, 2015

Searching within vRealize Automation – Part 2 – REST API

In Part 1 of this article series, I wrote about using some of the more interesting queries possible through the vRealize Automation IaaS component.  Specifically, using the Model Manager calls from vRealize Orchestrator.  Here in Part 2, I will demonstrate yet another approach to searching – but this time from the new vRA REST API.

Getting into the API session

There are already a number of articles about getting into the REST API.  But in short, I used the Firefox plug-in “REST Client”.  See the screenshot below for the example of how to initiate the authenticated session in the REST Client.

URI: https://{vrahost}/identity/api/tokens

Notice the credentials being passed in the POST body?  This indicates the tenant as well, so the authentication URI is consistent across all tenants.

Another good step to complete is to validate your session.  See the screenshot below for the example of this being performed.

URI: https://{vrahost}/vcac/org/{tenant}/tokens/{tokenID}

Note that the “id” returned from the authentication call is now appended to the URI.  The successful validation is merely the HTTP 200 response code, to say that everything is OK.

Getting unfiltered content

The REST API is surprisingly easy to use, once you’ve done a little reading and poking around.  I was able to pick up some basic skills with just the regular documentation and playing with the running product.  I didn’t end up breaking anything, so I guess I did OK!

To get the entire list of currently provisioned items – those that I am entitled to see – it is a simple API call, as shown below.

URI: https://{vrahost}/catalog-service/api/consumer/resources

Note that the “id” token from the initial authentication is now being passed in a request header called “Authorization”.  This represents the current session, and seems to be valid for 24 hours (by default).
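Outside the REST Client plug-in, the same call is easy to script.  A hedged sketch of assembling the request (the host name and token are placeholders; the Bearer prefix is how the token is carried in the Authorization header):

```javascript
// Build the GET request details for the catalog resources call,
// re-using the "id" token returned by /identity/api/tokens.
function buildResourcesRequest(vraHost, token) {
    return {
        method: 'GET',
        url: 'https://' + vraHost + '/catalog-service/api/consumer/resources',
        headers: {
            'Authorization': 'Bearer ' + token,   // the session token
            'Accept': 'application/json'
        }
    };
}
```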

This API call will return ALL my items, and/or my group’s items.  However, in my recent dealings with a customer we needed a way to efficiently look up a specific item in vRA, if it existed at all.

Filtering by VM name – OData queries again!

As it turns out, the vRA REST API also supports some of the OData query syntax that was introduced in Part 1 of this blog.  In particular, you can use the $filter parameter to submit a query to the API call above.  See the below example for searching for a specific item (VM) by name.

URI: https://{vrahost}/catalog-service/api/consumer/resources?$filter=name eq ‘win042’

Pay attention!  In this query, I am actually passing spaces in my URI!  I presume this is being encoded for me by the REST Client plug-in, otherwise I assume planes would start falling out of the sky!  According to the Programming Guide, the query is happy to take encoded or non-encoded form.  The proper encoded call equivalent would look like one of the below.
URI: https://{vrahost}/catalog-service/api/consumer/resources?$filter=name+eq+%27win042%27
URI: https://{vrahost}/catalog-service/api/consumer/resources?$filter=name%20eq%20%27win042%27
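The second encoded form is easy to reproduce.  Note that encodeURIComponent leaves the single quote untouched (it is in JavaScript’s unreserved set), so one extra replace is needed to reach the fully encoded %27 form:

```javascript
// Encode an OData $filter expression for a URI query string.
// encodeURIComponent turns the spaces into %20, but leaves single
// quotes alone, so they are replaced manually to get %27.
function encodeODataFilter(filter) {
    return encodeURIComponent(filter).replace(/'/g, '%27');
}

// encodeODataFilter("name eq 'win042'")
//   returns "name%20eq%20%27win042%27"
```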

As you can check for yourself, if the query succeeds, it will return only the items matching the query – success!  Please see below for an example of the query returning no matches.

As you might notice, the API call still succeeds and returns content – it just returns a JSON document with no members in the content section.

Other query parameters

As per the previous article, the vRA REST API supports a few more of the OData parameter types.  One parameter type – $select – is missing from the API, presumably because the structured JSON data returned is easier to navigate in full, rather than if it were teased apart into flat fields.

The table below is replicated from the in-built documentation within each vRealize Automation appliance, which is a copy of the online documentation.  Go ahead, check it out on your own appliance!

Appliance Source:  https://{vrahost}/catalog-service/api/docs/resource_Resource.html

managedOnly – If true, the returned requests are from the user’s managed subtenants. (query)
page – Page number. (query)
limit – Number of entries per page. (query)
$orderby – Multiple comma-separated properties, sorted in ascending or descending order. (query)
$top – Sets the number of returned entries from the top of the response (total number per page in relation to skip). (query)
$skip – Sets how many entries you would like to skip. (query)
$filter – Boolean expression for whether a particular entry should be included in the response. (query)

That concludes this series of articles on searching with vRealize Automation using both Orchestrator and the REST API.  Please let me know any feedback or comments below!

Monday, July 6, 2015

Searching within vRealize Automation – Part 1 - Orchestrator

I was recently engaged with a customer needing to execute some very particular manipulations of their provisioned virtual machines in their nascent vRealize Automation environment.  I learned a few things about using search functions in both vRealize Orchestrator and the vRealize Automation REST API, and I figured it was worth sharing here.

vRealize Orchestrator

The requirement for this part was to use vRealize Orchestrator to change properties of a specific type of vRA-provisioned machine.  This entailed finding all managed virtual machines that lived on a certain cluster within vCenter, and then filtering those machines by a further custom property that was being used.

It turned out to be a bad idea to use vRealize Orchestrator to do all the grabbing and manual sorting, iterating over scripts and actions to gradually reduce all machines down to the required subset.  It was taking ages, and it was REALLY annoying to have to wait for so long before finding out I hadn’t quite nailed the search anyway!  So I turned to the Model Manager function within the vRA IaaS component.  This is one of the services that runs on Windows servers in the solution, and manages the data model quite directly.

Cluster search

I used a little bit of cheeky searching to discover the right “entity sets” and fields I needed to utilize for the search.  I needed to find the UUID of the target cluster first, taking the “cluster path” that was derived from vCenter’s API structure.  I handled the search as per the below code snippet.
var modelName = 'ManagementModelEntities.svc';   // the standard vRA IaaS entity model
var entitySetName = 'Hosts';                     // hosts AND clusters live in this entity set
var filter = "HostUniqueID eq '" + clusterPath + "'";
var orderBy = null;
var top = 1;                                     // we only expect one matching cluster
var skip = 0;
var headers = null;
var select = null;
var entities = vCACEntityManager.readModelEntitiesBySystemQuery(host.id, modelName, entitySetName, filter, orderBy, select, top, skip, headers);
The important parts to point out are:

  • modelName is indicating the standard entity model.  There is a special one for AWS, for example, if I wanted to search on that instead.
  • entitySetName is “Hosts” in the first search, which is the host or cluster running a given virtual machine.  If you are running clusters, then the cluster is stored as the “host”, rather than a specific runtime host.
  • filter is the magic part.  It turns out that this complies with OData syntax, which you can read about here.  In my case, the query was for a cluster path, but this is where you need to engage your brain to determine what to search for.
  • orderBy, top, skip, headers and select are all modifiers for what gets returned from the search result set and how.  I will come back to these in the follow up blog article where I search in the vRA REST API.
  • vCACEntityManager is the object that executes the search, and this invocation method is the same across all my queries – just varying a couple of parameters.
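One caution with string-concatenated filters: a single quote inside the value will break the OData expression.  OData escapes a literal quote by doubling it, so a small helper of my own keeps the queries safe:

```javascript
// Build an OData "eq" clause, doubling any embedded single quotes
// as the OData literal syntax requires.
function odataEquals(fieldName, value) {
    return fieldName + " eq '" + String(value).replace(/'/g, "''") + "'";
}

// Example: the cluster search filter from the snippet above
// ("domain-c7" is a hypothetical cluster path).
var filter = odataEquals('HostUniqueID', 'domain-c7');
```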

Virtual Machine search
In my use case, I then wanted to find the virtual machines within the matching cluster, which was similar to above, but I used a new entitySet and filter – this time filtering on the ID of the cluster entity found in the first search.
var entitySetName = 'VirtualMachines';
var filter = "HostId eq guid'" + clusterId + "'";

Properties search

And the final search was to find all virtual machines with a given property – a custom property in my case.
var entitySetName = 'VirtualMachineProperties';
var filter = "PropertyName eq '" + propertyName + "' and PropertyValue eq '" + propertyValue + "'";
End result
At the end of these searches, I had two arrays of virtual machine objects.  Merging these arrays in vRO was a piece of regular JavaScript, looping over the data sets looking for common elements.
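The merge itself is nothing vRO-specific.  A sketch, assuming each returned entity can be compared by a unique id field (the real vCAC entities carry key fields you would compare instead):

```javascript
// Intersect two search result sets by entity ID, keeping only the
// virtual machines that appear in BOTH arrays.
function intersectById(listA, listB) {
    var inB = {};
    for (var i = 0; i < listB.length; i++) {
        inB[listB[i].id] = true;
    }
    var common = [];
    for (var j = 0; j < listA.length; j++) {
        if (inB[listA[j].id]) {
            common.push(listA[j]);
        }
    }
    return common;
}
```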

See below for the visual representation of the two searches and final merge.
(Download the vRO actions here, if you like.)

In the next article, I will show a different search use case, where I needed to filter a specific VM from the vRA API.

Tuesday, June 30, 2015

Integrating Infoblox IPAM with vRealize Automation - Part 3

This is the last article in my series on integrating InfoBlox with vRealize Automation.  Part 1 discussed the general setup of the integration, and specifically how the solution looked in vRealize Orchestrator.  Part 2 discussed how some of the common integration elements might be used, regardless of the IP allocation approach.  This last article is the guide to the specific IP allocation methods available, how to use them, and some guidance on which one might be right for you.

InfoBlox integration types

Method 1 – vRA allocates IP, registers in InfoBlox

In hindsight (as mentioned in Part 2), I believe this was the functionality I should have explored the first time!  It is where you allow vRealize Automation to continue to manage its own IP pools, pick addresses for VMs, use Network Profiles and all the other goodness from vRA.  However, once vRA allocates an address, it then calls out to the InfoBlox workflow to register that allocation in InfoBlox.  
This method assumes that there is a range of addresses that you can pre-assign to vRA usage, and that the ranges are matching between vRA and InfoBlox to ensure no conflicting usage of this range!

Specifically, this method picks up the following existing vRA properties that you have probably already taken care of in your vRA Network Profiles – so you don’t have to worry about them!
  • VirtualMachine.IPaddress
  • VirtualMachine.PrimaryDNS
  • VirtualMachine.SecondaryDNS
  • VirtualMachine.DNSSuffix
  • VirtualMachine.SubnetMask
  • VirtualMachine.Gateway
  • VirtualMachine.PrimaryWins
  • VirtualMachine.SecondaryWins
  • VirtualMachine.DnsSearchSuffixes

Method 2 – InfoBlox allocates from specified network

This is the method I initially explored, and unfortunately it had the consequence that I had to disable vRA Network Profiles for my vRA Reservation (well, I created a new reservation for blueprints using IPAM) to avoid the two IP management methods from conflicting with each other.  In particular, when Network Profiles are also present, vRA assigns and manages the VM’s address from its own pool, and while InfoBlox was still invoked, that IPAM address was completely ignored.

You can specify a network in two ways.  One is to specify the network/CIDR identity of the desired network.  This is fine if you are dealing with a small number of distinct networks and the number of machines will not burst beyond that network’s limits.  To use this approach you specify the below two custom properties:
  • Infoblox.IPAM.netaddr – the identity of the Infoblox Network to use, such as “172.16.50.x”
  • Infoblox.IPAM.cidr – the subnet mask, such as “24”
The other way is to search for a set of networks by extended attributes.  This will possibly match several networks (which should be equal in purpose).  The cool thing about this method is that, in very large environments, a virtual machine could be placed in any one of several adjacent network segments, driven by dynamic inputs such as location, environment, security level, etc.  For instance, you may be deploying into multiple 24-bit networks that are all equivalent (internal, data access layer), and you don’t know which one will be chosen because of (a) available addresses, or (b) some request input during provisioning that determines network placement.

The custom properties to use for searching for networks by attributes are those below:
  • Infoblox.IPAM.searchByEa – set this to “true” to use this search method
  • Infoblox.IPAM.searchEa1Name – attribute name
  • Infoblox.IPAM.searchEa1Value – attribute value to compare
  • Infoblox.IPAM.searchEa1Comparison – comparison type, one of the following types:
    • EQUAL
  • …  up to 10 search attributes can be specified, as below
  • Infoblox.IPAM.searchEa10Name
  • Infoblox.IPAM.searchEa10Value
  • Infoblox.IPAM.searchEa10Comparison
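Put together, a blueprint searching by extended attributes might carry a property set like the following (the attribute name and value – “Environment” / “Dev” – are invented examples, not values the plug-in prescribes):

```javascript
// Hypothetical example: custom properties asking Infoblox to pick any
// network whose extended attribute "Environment" equals "Dev".
var ipamSearchProperties = {
    'Infoblox.IPAM.searchByEa': 'true',
    'Infoblox.IPAM.searchEa1Name': 'Environment',   // attribute name (example)
    'Infoblox.IPAM.searchEa1Value': 'Dev',          // value to compare (example)
    'Infoblox.IPAM.searchEa1Comparison': 'EQUAL'    // comparison type
};
```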
Using the method of IP allocation by network, Infoblox expects to fill in the IP details from the DHCP options already existing in the network definition in the IPAM system.  However, the blueprint can specify “default” values for these, in case they are missing from Infoblox.  This would certainly NOT be recommended if you are searching for networks, as different subnet ranges might be returned.  Additionally, any values found in Infoblox will override the provided values, and the blueprint values will be ignored – so it is not an override mechanism.
  • Infoblox.IPAM.defaultGateway
  • Infoblox.IPAM.defaultPrimaryDns
  • Infoblox.IPAM.defaultSecondaryDns
  • Infoblox.IPAM.defaultPrimaryWins
  • Infoblox.IPAM.defaultSecondaryWins
  • Infoblox.IPAM.defaultDnsSuffix
  • Infoblox.IPAM.defaultDnsSearchSuffixes

Method 3 – InfoBlox allocates from specified IP range

As you might guess, this method finds an available IP address within the specified range.  This is similar to Method 2 above, but you can skip the fancy searching if you already roughly know the addressing you want for the machine.  

The unique parameters used for this method are below:
  • Infoblox.IPAM.startAddress
  • Infoblox.IPAM.endAddress
I would warn, however, that this method is taking the least advantage of either vRA or Infoblox functionality.  This method assumes that the blueprint owner or the service requester somehow know more about the available IP environment than does either vRealize Automation or Infoblox.  If this is actually the case, then you still have some major network management challenges to solve!  I recommend you find a way to utilize one of the other approaches, and adapt your provisioning processes to get it right the first time…

This concludes this particular series of articles, all to do with vRealize Automation and Infoblox integration and use cases.  Hopefully you have found some value from it.  Please leave any comments or feedback!

Wednesday, June 24, 2015

Integrating Infoblox IPAM with vRealize Automation - Part 2

Following on from Part 1 of this blog article, I wanted to explore how to implement the InfoBlox integration into vRealize Automation blueprints.

First approach

When first learning about how to utilize the IPAM integration, I believe I actually went down the complicated route first (Method 2, in Part 3!).  I assumed I would NOT use vRA to manage and assign IP addressing at all, and would delegate this entirely to InfoBlox.  In order to do this, I had to disable the vRA Network Profile (whose job it would normally be to create the IP information) in the Reservation.  This had two implications:

  1. In my environment, not all VM requests were going to be IPAM integrated, so I had to create a NEW Reservation in vRA, overlapping with my existing ones, with the Network Profile disabled.  I then had to make duplicate blueprints to be associated with the new reservation.  Of course, I could have made the Reservation Policy be dynamically selected during request, but that is just another complication.
  2. Now that vRA was not supplying the networking parameters to go with the IP address (subnet mask, DNS servers, DNS suffix, etc), I had to supply these in additional vRA Build Profile properties.

So, maybe don’t do it that way…!  There are two other approaches, and I’ll outline each of them in Part 3 of this series of articles.  I’ll also explain below some more details about how the provisioning options work.

But for now, I’ll explain some of the common elements across the integration methods.

Using the Build Profiles

After creating the initial Build Profile (as per Part 1), I went back into vRA and modified the Build Profile to pre-populate the static attributes that I knew I would use.

Common properties

There is one set of common properties that are used no matter which IPAM methodology you use.  There are a number of “create-something” attributes, and you need to choose exactly what type of Infoblox record you wish to create.

DNS-integrated Host record

If you specify this option, you cannot specify any of the other types below.  It is both an InfoBlox record type and a DNS record type.

  • Infoblox.IPAM.createHostRecord

InfoBlox address record type

You can choose between either of the below options, depending on how you plan to use InfoBlox records.  I believe Fixed Address probably fits what most people have in mind for the automation use case.

  • Infoblox.IPAM.createFixedAddress
  • Infoblox.IPAM.createReservation

DNS address record type

If you are not creating an InfoBlox/DNS host record (above), then you can specify either a DNS A record alone, or both the A and PTR records for the entry.  Obviously it is one or the other – you don’t choose both of these options!

  • Infoblox.IPAM.createAddressRecord
  • Infoblox.IPAM.createAddressAndPtrRecords

Additional common properties

There are a list of additional properties created in the Build Profile, most of which I have not used.  One property I did statically populate was “Infoblox.IPAM.comment” with a string such as “Record created by vRealize Automation”, so that when managing my addresses directly in InfoBlox I could quickly determine which entries are under automated management.

A quick word on one other property – “Infoblox.IPAM.vmName”.  In my view, this property is a bit redundant when using vRA, as I was already using vRA Dynamic Hostname workflows to determine a special hostname, and didn’t want to populate this separately in a new field.  In my case, I went and edited a few of the InfoBlox workflows to just pick up the vRA name.

My snippet of additional code is below, which was inserted into the “Retrieve Properties” workflow scripting elements.
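In essence, the change falls back to the vRA machine name whenever the Infoblox.IPAM.vmName property is empty.  A minimal sketch of that logic (the variable and function names here are illustrative, not the workflow’s own):

```javascript
// Fall back to the vRA-generated machine name whenever the
// Infoblox.IPAM.vmName custom property has not been populated.
function resolveIpamVmName(ipamVmNameProperty, vraMachineName) {
    if (ipamVmNameProperty && ipamVmNameProperty.length > 0) {
        return ipamVmNameProperty;   // an explicit name was provided
    }
    return vraMachineName;           // otherwise use the vRA name
}
```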

Property list

  • Infoblox.IPAM.vmName
  • Infoblox.IPAM.dnsView
  • Infoblox.IPAM.networkView
  • Infoblox.IPAM.comment
  • Infoblox.IPAM.enableDHCP (see comments below)
  • Infoblox.IPAM.aliases
  • Infoblox.IPAM.defaultPortGroup

Have a look through InfoBlox to explore some other use cases you can try out with the DNS and Network views – as these will apply well to multi-tenanted usage, overlapping network ranges, etc.  This is something considerably beefed up in the later vNIOS 7.x version of InfoBlox, which I haven’t covered in these articles.

One last word of warning – the “Infoblox.IPAM.enableDHCP” property is defined automatically when you generate the Build Profiles with the provided workflow.  However, this property is never referenced in the later call-outs, and so might give you the wrong impression that there is some DHCP-specific IP assignment behind it.  There is not – static IP allocation and assignment is the only method implemented, at least in the version of the plug-in I have been reviewing.

That's it for this article on the common elements of integration.  Please go ahead and read Part 3 to understand the juicy details of specific integration methods, and how you might actually go about using this for your own environment!

Please let me know any comments or feedback!

Tuesday, March 31, 2015

Integrating Infoblox IPAM with vRealize Automation - Part 1

Many customers love the idea of self-service provisioning through vRealize Automation (vRA), but do not want to give up control of IP address management (IPAM) to the new tool. I frequently see requests for us to integrate vRealize Automation into a customer’s existing IPAM solution – and pretty much every time this is the Infoblox solution. So, I thought I’d give it a go!

In this first blog, I will replay how I initially configured my environment to integrate the solutions. In my next post, I will explain how I used that integration to achieve an effective server build from the vRA blueprint.

In my environment, I am using vRealize Automation 6.2.1, and vRealize Orchestrator 6.0.1. For Infoblox, I used the virtual appliance vNIOS version 6.11, and their vRO plug-in version 2.4.1.

For starters, I had some trouble in my past attempts to do this. The Infoblox plug-in for vRealize Orchestrator (vRO) was finicky and a little hard to work with. But Infoblox have revised their plug-ins and their core solution, and I have to say I had almost no trouble this time around.

Deployment of the appliance is easy enough, documented here: https://www.infoblox.com/sites/infobloxcom/files/resources/vnios-trial-quick-start-guide_1.pdf

Deployment of the plug-in was similarly fairly easy, and documentation is provided with the plug-in itself (for which you need to register, here: https://www.infoblox.com/downloads/software/vmware-vcenter-orchestrator-plug-in).

Two quirks of note that I discovered:
  • I could not register the vNIOS appliance to the plug-in using the FQDN. I believe this was because the appliance FQDN was a “.local” domain, and deemed as invalid. Using the IP address got around this problem, and the error message made it pretty clear that it was not happy with DNS validity, so it wasn’t hard to drill down to what alternatives to try.
  • The self-signed certificate was expired. It appears that the default certificate generated has a lifetime of one year.  This was only an issue for me when trying to connect the plug-in. Again, fairly easily fixed – this time through the Infoblox System Manager web app under System – Certificates.
The vRO plug-in also requires a workflow package to be imported, to support some of the additional functions that the plug-in invokes.  This package is included in the plug-in download, and included in the documented instructions.

Once I installed the plug-in for my setup, I used the provided Infoblox vRO workflows to:
  • “Install vCO customization wrapper”.

    This enabled vRA to call out to Infoblox via vRO (aka vCO) during three distinct lifecycle stages:
    • Building – this stage is where IP addressing is reserved in IPAM and passed back into vRA during the initial provisioning.
    • Provisioned – once the machine is built, this calls out to the workflow “Update MAC address for vCAC VM wrapper”, which appears to grab the as-built MAC address from the VM (nic0) in order to populate Infoblox with this detail.
    • Disposing – when the machine is destroyed, this calls-out to “Remove Host Record or A/PTR/CNAME/Fixed address/Reservation of vCAC VM wrapper”. In essence, this removes the entries made by the previous workflows.

    • CAVEAT: In my environment only, the above Removal workflow does not release the IP address back into the available pool. I am still working on this, and will update this article accordingly. For the moment, I manually review the “Used” records (without any other data associated) and perform a “Reclaim” in the Infoblox management console.  Strangely, this behaviour did NOT happen in a customer's environment, nor in Infoblox's own test environment.  
  • “Create Build Profile for Reserve an IP for a vCAC VM in Network”. This piece of absolute magic sets up a new Build Profile in vRA so that I can merely select it during blueprint definition to enable IPAM integration. Magic!!

I used the “in Network” method of IP allocation, because I just wanted Infoblox to pick the next address within a given subnet range. I already have certain ranges carved out and reserved for other purposes (such as vRA’s own ranges that it manages, more on that later) – so anything that wasn’t already reserved is fair game for Infoblox to grab. The other methods are “in Range” (if you have specifically carved out one in Infoblox for this purpose), or “general” (if the IP address to reserve is already known, perhaps through an external process prior to the request).

Once I had run these initial configuration workflows, everything was almost ready to go for vRealize Automation to utilise.

 This wraps up the first blog covering initial setup. In the next blog article, I will specify how vRA is configured to utilise Infoblox as part of its provisioning.

Tuesday, January 13, 2015

Explaining Hybrid Cloud to a 5 year old?

Someone recently pointed me to this article on Tech Week Europe on "How To Explain Hybrid Cloud To A Five-Year-Old".  It was not shared because of its awesomeness, but because of how ridiculous some of the explanations were.  I completely agree, but unfortunately it made me want to create my own analogies!  What a sorry state of affairs!

Of course, Massimo Re Ferre asked the obvious question "Why would anyone want to explain this to a 5 year old anyway??".  As a father of two kids, brought up to ask questions and challenge their father, I reckon I've probably already been challenged to explain to them what it is that I do!  But more importantly, if you can't explain this fluffy concept in simple enough terms, there's a good chance you might leave your customers, managers, executives or users with an unsettled mind about what Hybrid Cloud is all about, why they should use it, what it is NOT, and how to figure out if it's working the right way.

Anyhow, I came up with two and felt the need to share.  I'm sure they are terrible, but I like them.  So please feel free to let me know better ones!

(1) Hybrid Cloud is like a perfect lunch at school. You have some food you've brought from home, because that's what Mum gives you and maybe it's cheaper, or healthier or maybe you can't eat peanut butter because you're allergic, and your Mum looks after you! And then you also get some money to spend at the school canteen for a nice cold chocolate milk, or a fresh cookie. You can choose what you feel like on each day. But when you put together your healthy lunch from home and your special fun things from the canteen - you have a perfect lunch in front of you!
(2) Hybrid Cloud is like your home. At home you have your own bedroom where you sleep and keep your toys and no one is allowed in if you don't want them to. Sometimes you play there, but sometimes you play out in the family room with everyone else. In the family room there's more space, and the big TV and other people - but that also means sometimes your toys get stepped on, or you fight with your sister, or you can't have the room all to yourself. So, you play sometimes in your room, sometimes in the rest of the house, and you have toys everywhere (but your special ones are safe in your room)! And every day, you can choose where to play!