Server Name Generator Workflow

 

Summary/Use Cases:

This workflow can be used to automatically generate a server name in your environment. Given the inputs, it returns the next unused name available. In my case I used a partial name because we have different environments (Prod, Dev, QA). For example, with a partialname of "Prod", the workflow tries Prod01, Prod02, and so on until it finds a name that is not already in use.

Inputs:

  • partialname: Type = String
  • domainSuffix: Type = String

Outputs:

  • vmName: Type = String
  • fqdnout: Type = String

The Workflow:

ServerNameGenerator

 

The Code:

As you can see, there is only a single scriptable task within the workflow.



// Read temp server name list - prepare to invoke file writing capabilities if needed
var fr = new FileReader("D:\\VCOInstallationPath\\Windows\\IPReservation\\NamesReserved.txt");
var fw = new FileWriter("D:\\VCOInstallationPath\\Windows\\IPReservation\\NamesReserved.txt");

fr.open();
var content = fr.readAll();
fr.close();

// Initialize variables
var number = 1;
var temphost = 0; // If temphost started out null/undefined, the for loop condition would fail immediately

// Function to pad a leading zero onto the number while it is less than 10
function padzero(number) {
    return (number < 10 ? '0' : '') + number;
}

// Increment through hostnames until we find one that does not already exist
System.log("Your partial name is: " + partialname);
System.log("Starting loop --- ");

var vmName;
var fqdn;

for (number = 1; temphost != null; number++) {
    var padded_number = padzero(number);
    vmName = partialname + padded_number;

    if (content.search(vmName) < 0) {
        System.log("Server name: " + vmName + " not found in master list. Recording new name and continuing.");
        fw.open();
        fw.writeLine(" " + vmName);
        fw.close();

        fqdn = vmName + "." + domainSuffix;
        temphost = System.resolveHostName(fqdn); // null when the name does not resolve in DNS, which ends the loop
        System.log(temphost);
        if (temphost != null) {
            System.log("A host by the name of " + fqdn + " exists with the IP address of: " + temphost);
        }
        System.sleep(500);
    } else {
        System.log("Server Name: " + vmName + " found in master list - incrementing to the next number and starting over");
        // temphost is still non-null here, so the loop simply moves on to the next number
    }
}

// Log for debugging purposes
// Output FQDN
fqdnout = fqdn;

System.log("");
System.log("The vmName to pass as output is: " + vmName);
System.log("The FQDN to use is: " + fqdn);

 

Find a Storage DRS Pod from Datastore Name and Refresh Recommendations

Summary/Use Cases:

This workflow can be used as part of a self-healing solution so that SDRS runs whenever you get a datastore alarm (see Self Healing Datacenter). That way SDRS kicks in immediately and can balance the storage in the cluster.

Inputs: 

  • dataStoreName: Type = String

Outputs: None

The Workflow:

SDRSWorkflow

How It Works:

  1. Takes the Datastore string passed to it and searches vCenter for a Datastore object matching that string. This is stored as type VC:Datastore.
  2. Looks through all of the SDRS PODs declared in your general attributes to see if any of them contain the Datastore object found in step 1.
  3. If there is a match, it stores this as the variable "podToRunSDRSOn" and stops searching.
  4. Runs refresh recommendations on the pod "podToRunSDRSOn".

 The Code:

Only one scriptable task has been used above, and I'm considering making it an action item if I don't expand on the workflow further.

//Search vCenter for a Datastore matching the name
var dataStoreObject = System.getModule("com.vmware.library.vc.datastore").getAllDatastoresMatchingRegexp(dataStoreName);
dataStore = dataStoreObject[0];
System.log("Datastore Object Found is " + dataStore);

//Now we have the datastore in question, we need to find out which SDRS POD the Datastore is a member of.

//Checking the array of pods
for (var i in podsArray) {
    System.log("Checking POD: " + i + " in podsArray. The current POD is " + podsArray[i]);

    //Grab the child entities of the pod and put them in the array checkDatastore
    checkDatastore = podsArray[i].childEntity;

    //Now check each datastore against the original one we found to verify the objects are the same
    for (var t in checkDatastore) {
        System.log("Datastore at object " + t + " is: " + checkDatastore[t]);
        if (checkDatastore[t] == dataStore) {
            System.log("Datastore Match. The pod we need to run SDRS on is " + podsArray[i]);
            podToRunSDRSOn = podsArray[i];
            break;
        }
    }
}

if (podToRunSDRSOn == null) {
    System.log("There was no match...we could not find an SDRS POD");
}

System.log("CHECKING COMPLETE: Pod to run SDRS on is: " + podToRunSDRSOn);

//RUN SDRS
if (podToRunSDRSOn != null) {
    var m = podToRunSDRSOn.vimHost.storageResourceManager;
    task = m.refreshStorageDrsRecommendation(podToRunSDRSOn);
}

 

Self Healing Datacenter: Automating HA/DRS Configuration Settings

First of all, thank you if you attended the session I presented with Dan Mitchell on the Self Healing Datacenter. Also a big thanks to Kim Jahnz and Dan for giving me the opportunity to go up there and talk about some of the things I’ve been working on.

Here is the detailed walkthrough of the 1st example I gave.

Summary:

In this example I demonstrated how you could effectively use vCO as a Configuration Management tool for vSphere settings. Now this does not mean that there aren’t more advanced tools out there for configuration management. Puppet, Chef etc. are your big knife tools for more serious configuration management, but vCO can fill some very easy gaps without needing to go to more complex tools.

Goal:

Always start with a goal in mind and then work your way through all the components required.

  • HA Settings
    • HA should be turned on
    • HA admission control should be enabled
    • HA admission control policy should be set to percentage based and allow for 1 host in the cluster to be out of service (e.g. for a 4-host cluster, I would set this to 25%)
  • DRS Settings
    • DRS should be turned on
    • DRS should be set to fully automated

Break it down:

The very first time I created this workflow, I used workflows containing scriptable tasks. Later I improved it ahead of VMworld and turned that workflow of scriptable tasks into an action item.

I don’t think there is a right or wrong answer to this, but using the action item seems cleaner and more reusable, with less chance of error.

The main workflow:

HADRS-vmworld

Now for the action item code…

Part 1: Calculating your HA % for the amount of hosts in your cluster.

var Hosts = System.getModule("com.vmware.library.vc.cluster").getAllHostSystemsOfCluster(cluster);

System.log("Number of Hosts in Cluster: " + Hosts.length);

var HApercent = ((1/Hosts.length)*100);
HApercent = HApercent.toFixed(0);

System.log("HA Percent which will be used for cluster is: " + HApercent);

Part 2: The cluster specifications and task to reconfigure the cluster

//Create variables for DRS/HA config

var clusterConfigSpec = new VcClusterConfigSpecEx();
clusterConfigSpec.drsConfig = new VcClusterDrsConfigInfo();
clusterConfigSpec.dasConfig = new VcClusterDasConfigInfo();

//Enable DRS/HA

clusterConfigSpec.dasConfig.enabled = true;
clusterConfigSpec.drsConfig.enabled = true;

//Set DRS to INPUT (Passed to the Action)

System.log("Setting DRS to Fully Automated");

clusterConfigSpec.drsConfig.defaultVmBehavior = drsBehaviour;

//Fix Admissions control policy

System.log("Updating HA Admission Control policy for " + cluster.name);

clusterConfigSpec.dasConfig.admissionControlPolicy = new VcClusterFailoverResourcesAdmissionControlPolicy();
clusterConfigSpec.dasConfig.admissionControlEnabled = true;

//Set host monitoring to the setting passed to the action

clusterConfigSpec.dasConfig.hostMonitoring = haHostMonitoring;
clusterConfigSpec.dasConfig.admissionControlPolicy.cpuFailoverResourcesPercent = HApercent;
clusterConfigSpec.dasConfig.admissionControlPolicy.memoryFailoverResourcesPercent = HApercent;

//Reconfigure the cluster, by adding the True parameter this ensures any previous settings remain

System.log("Executing Cluster Reconfiguration for " + cluster.name);
task = cluster.reconfigureComputeResource_Task(clusterConfigSpec, true);
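
reconfigureComputeResource_Task returns straight away and vCenter carries out the change asynchronously. If you want the action to wait until the reconfiguration has actually finished, one option (a sketch, assuming the standard vCenter plug-in library is installed) is to hand the returned task to the built-in vim3WaitTaskEnd action:

// Optionally block until the reconfiguration task completes
// vim3WaitTaskEnd(task, showProgress, pollRateInSeconds) ships with the vCO vCenter plug-in library
System.getModule("com.vmware.library.vc.basic").vim3WaitTaskEnd(task, true, 2);
System.log("Cluster reconfiguration finished for " + cluster.name);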

Putting it all together:

So now we have an action item which can take in the following inputs:

1. Cluster (Type: VCCluster)
2. DRS Behaviour (Type: DRS Behaviour)
3. HA Host Monitoring (Type: Boolean)

Now comes the easy part: create a workflow where the 3 inputs required above are either inputs or general attributes. If you use inputs, you will be prompted for them each time; if you use general attributes, they are set permanently in the vCO workflow unless you change them.
I chose to set the DRS Behaviour/HA Host Monitoring settings as general attributes inside the workflow, which I called "Configure HA/DRS Settings for Cluster". Then I would just select the cluster I wanted to apply the settings to when I ran the workflow.
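
If you would rather call the action from a scriptable task than drag the action element into the schema, the call looks roughly like the sketch below. The module path "com.2ninjas1blog.cluster" and the action name "configureHADRS" are placeholders for wherever you saved your own action, and the two attribute values simply mirror the action inputs described above.

// Scriptable-task version of "Configure HA/DRS Settings for Cluster"
// NOTE: the module path and action name are placeholders - use the path your own action lives under
var drsBehaviour = VcDrsBehavior.fullyAutomated;   // DRS Behaviour general attribute
var haHostMonitoring = true;                       // HA Host Monitoring general attribute (Boolean)
System.getModule("com.2ninjas1blog.cluster").configureHADRS(cluster, drsBehaviour, haHostMonitoring);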

Once I decided this worked great, I wanted to push the settings out to ALL of my clusters, so I created the main operational workflow "Configure HA/DRS for ALL Clusters". I made an array of clusters a general attribute, so I can add or remove clusters whenever I want, and the scriptable task inside the workflow just loops through that array, as in the sketch below.
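
This is only a sketch: clustersArray stands for whatever you named the array attribute, drsBehaviour and haHostMonitoring are the other two general attributes from the single-cluster workflow, and the module/action path is again a placeholder for your own action.

// Scriptable task for "Configure HA/DRS for ALL Clusters"
// NOTE: attribute names and the module/action path are placeholders
for (var i in clustersArray) {
    System.log("Applying HA/DRS settings to cluster: " + clustersArray[i].name);
    System.getModule("com.2ninjas1blog.cluster").configureHADRS(clustersArray[i], drsBehaviour, haHostMonitoring);
}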

Finally, schedule the workflow in vCO to run every night at midnight, and you know that all your clusters are exactly as they should be. If you are working on a cluster, just take it out of the array and put it back in when you are done.

vCenter Orchestrator Appliance – Guest File Operations (Copying a file to guest VM)

One of the things you will often find you need to do with vCO is get a file to a guest VM, or just run a file from inside the VM. For Windows you can use PowerShell remoting in many cases, but what if your server isn't on the network yet? Until version 5.1 we had to rely on VIX to do this, but now VMware has added a number of new workflows under "Guest Operations" which are much more reliable.

vCO Guest Operations

“Copy file from vCO to guest” is the one I’m going to be using in this example.

First of all, copy the workflow into a sandbox area. This way you can move a bunch of the inputs to attributes and not have to key them in each time (e.g. the local administrator username, password, and test VM).

In my example, I’m going to create a text file called test.txt in a new folder under /opt called “vcofiles”.

My target machine is a Windows 2008 R2 server, where I will copy the file into the C:\temp\ folder with the name "testcopy.txt".

If you run the workflow then these are my input parameters:

GuestFileOperations-Run

 

The problem is that if you run this you will get an error similar to this:

“No permissions on the file for the attempted operation (Workflow: Copying files from vCO appliance to guest/Scriptable task…”

GuestFileFailure

In order to fix this you first need to give the correct rights to the folder and file on your vCO Appliance.

1. Log in as root on the appliance
2. Give Read/Write/Execute rights to the new folder

FolderRights

3. Give Read/Write rights to the Text file you made

Filerights

 

Unfortunately we aren't quite done yet. You also need to tell Orchestrator which locations it can read, write, and execute from. This involves editing the "js-io-rights.conf" file located in "/opt/vmo/app-server/server/vmo/conf".

Java-FolderRights-2

Add the line “+rwx /opt/vcofiles/” as shown above.

If anyone isn’t too sure on the linux commands to do this:

  • Type "cd /opt/vmo/app-server/server/vmo/conf" and press Enter.
  • Type "vi js-io-rights.conf" and press Enter.
  • Use the arrow keys to move the cursor where you want and press the Insert key.
  • Press Enter and type in the line "+rwx /opt/vcofiles/"
  • Press ESC.
  • Type ":wq" and press Enter.

4. Now, there’s one more thing. You need to restart the vCO service for this to take effect.

Log in to the vCO configuration manager, go to Startup Options, and click Restart service.

ServiceRestarted

5. Now run your workflow and see if your text file copied across.

Success

You can see a quick video demo of this on YouTube (apologies for the mouse pointer issue).

 

Thanks for reading. Let me know if you have any questions.

Nick

 

 

 
