Terraform 101 – What is it? How do I use it?


I’ve been watching Terraform for the past few years and have finally had some time to start getting stuck into it. I must say, I’m impressed by the potential of this product and of others from HashiCorp.

Terraform fits into the infrastructure automation category. Its declarative coding approach is similar to tools like Puppet, while in some ways it operates more like an orchestrator, minus the visual aspect.

What is it?

Essentially it adds a layer of abstraction on top of services like Amazon, Google, etc. Instead of an AWS CloudFormation template, I can use a Terraform configuration. On top of that, and the piece that is more intriguing to me, is the ability to use its module approach as well as the other providers and provisioners.

Providers allow you to use the same declarative state language for other systems. I encourage you to check out the list on the Terraform site.

Provisioners let us determine what tasks we initiate and where, as part of resource creation. For example, you could use local-exec to execute commands locally on the machine running Terraform, or remote-exec to execute commands on a remote server via SSH or WinRM.
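To make that concrete, here is a minimal sketch of both provisioner types attached to an EC2 instance. The AMI ID is the one used later in this post, while the SSH user and key path are placeholders you would swap for your own values.

resource "aws_instance" "provisioner_example" {
  ami           = "ami-13be557e"
  instance_type = "t2.micro"

  # Runs on the new instance over SSH once it is reachable
  provisioner "remote-exec" {
    inline = ["sudo yum -y update"]

    connection {
      type        = "ssh"
      user        = "ec2-user"                           # assumed user for an Amazon Linux AMI
      private_key = "${file("path/to/private_key.pem")}" # placeholder key path
    }
  }

  # Runs on the machine where you execute terraform apply
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> inventory.txt"
  }
}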

The idea behind all of this is that you have one place, and one language to learn, which then works across public cloud providers. You don’t need to learn, say, the AWS CloudFormation template language and then go learn another language for a different cloud provider. You simply use Terraform to deploy to all of them.

How do I use it?

Let’s get stuck in and walk through a very basic Terraform configuration for deploying an AWS instance. At the core of Terraform is the .tf file. This, combined with other files in the same directory or in module directories, forms a Terraform configuration. There are two formats for Terraform files: the Terraform format and JSON. It is recommended that you use the Terraform format, which is easily readable (think Puppet DSL).

Example: Create an AWS EC2 Instance with Terraform

Note: For all activities below you will need an AWS account and will be charged by Amazon accordingly. I try to use the free tier for all demo examples.

  • Create a folder to store your Terraform configuration.
  • Open up notepad or your favorite editor. I use Visual Studio code along with the Terraform Extension.
  • Create the Terraform configuration and save it as a .tf file.
[Screenshot: Terraform example for deploying an AWS instance]

The first piece we declare is the provider, which in this case is AWS. Grab your access key and secret key, and then choose the region you want to provision the EC2 instance into.

provider "aws" {
access_key = "yourkeyhere"
secret_key = "yoursecretkeyhere"
region     = "us-east-1"
}
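As a side note, you don’t have to hard-code credentials in the .tf file. A minimal alternative is to declare variables and supply the values at plan/apply time; the variable names below are just examples.

variable "aws_access_key" {}
variable "aws_secret_key" {}

provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "us-east-1"
}

Terraform will prompt for any variable that has no value, or you can pass values with -var or a .tfvars file.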

Next, we declare our new resource. In this case I am choosing to instantiate an AWS instance called “2ninjasexample1”. I am going to use the Amazon AMI with ID “ami-13be557e”. Finally, I’m choosing t2.micro as the instance type.

resource "aws_instance" "2ninjasexample1" {
ami           = "ami-13be557e"
instance_type = "t2.micro"
}

That’s it for our configuration file. Simply save it in the folder you created in step 1 and browse to that folder.

  • Type terraform plan and you should see a result like the screenshot below.
    [Screenshot: terraform plan output]
    You can see that if we go ahead and apply the configuration, it is going to add the AWS instance.
  • Now it’s time to actually apply the configuration. Type terraform apply to go ahead and create the instance.
    [Screenshot: terraform apply output]

Terraform creates a new AWS EC2 instance as well as 2 additional files in our folder which maintain the state information.

[Screenshot: the state files created in the folder]

If we examine the .tfstate file, you will see it contains all the specific information about our AWS instance.

[Screenshot: the contents of the .tfstate file]

In particular, you can see that it has captured the AWS instance ID which you can also view from your AWS console if you select your EC2 image.

  • Finally let’s destroy the stack. Type terraform destroy. You will be prompted to confirm by typing yes.

[Screenshot: terraform destroy output]

Just like that, it is destroyed! You will also notice your state file updated to reflect this.

Hopefully at this point, you can see the power behind this tool. Stay tuned for more posts on this.

 

 


It’s ON with Turbonomic and vRO

There have been a lot of changes at VMTurbo, now Turbonomic. I believe most in the industry are aware of the bold name change; the new name is more representative of what the product does, based on the economic model it is known for. Beyond that, with the latest version, Turbonomic also released vRealize Automation workflows to integrate with their product. You have to be a member of the Green Circle, which is free, but you can download them here. There are instructions on importing the workflows, setting up Operations Manager as a REST host, and so on. I was excited to see this but, unfortunately, my environment only uses vRealize Orchestrator.

Below is the schema for the vRA workflow:

[Screenshot: the vRA workflow schema]

First is a scriptable task to gather inputs for vRA. The inputs are all vRA-specific, so I could remove these. At the end, the workflow pushes properties back to vRA, so I also removed the “Override vRA Settings” element at the end.

Inputs removed from original VMTurbo Main workflow:

[Screenshot: inputs removed from the original VMTurbo Main workflow]

 

My workflow ended up like this, removing the vRA dependencies and ending with 2 scriptable tasks to convert the datastore and host from strings into VC:Datastore and VC:HostSystem objects. These scripts will be covered in another post.

[Screenshot: the modified workflow]

My inputs moved from general attributes to workflow inputs: templateName, clusterName and datacentreName. In the future I will likely add a scriptable task at the beginning of the workflow to determine these, as they will come from inputs generated by my Windows or Linux Master Build workflow.

Inputs converted from attributes:

[Screenshot: inputs converted from attributes]

I also now have outputs for the actual VC:Datastore and VC:HostSystem objects for your clone workflow in vRO. These were created via the scriptable tasks, which take the strings returned from Turbonomic and do a lookup to match them to the vCenter objects.
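As a rough idea of what those scriptable tasks do (the variable names below are placeholders rather than the exact code, which will come in that follow-up post), the lookup can be as simple as walking the inventory exposed by the vCenter plugin:

// Inputs (strings returned by Turbonomic): datastoreName, hostName
// Outputs: datastore (VC:Datastore), hostSystem (VC:HostSystem)
var datastore = null;
var hostSystem = null;

var datastores = VcPlugin.getAllDatastores();
for (var i in datastores) {
	if (datastores[i].name == datastoreName) {
		datastore = datastores[i];
		break;
	}
}

var hosts = VcPlugin.getAllHostSystems();
for (var j in hosts) {
	if (hosts[j].name == hostName) {
		hostSystem = hosts[j];
		break;
	}
}

if (datastore == null || hostSystem == null) {
	throw "Unable to match the datastore or host returned by Turbonomic to a vCenter object";
}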

Outputs created:

[Screenshot: outputs created]

 

What’s great about having this functionality from Turbonomic is that now the best host and the best datastore will be selected based on analytics from Operations Manager. I originally was picking my datastore based on the amount of free space, but now, using the REST API, I can have the least utilized host and datastore supplied to my clone workflow.

Download the modified workflows here.

I’ll be going over these workflows in the upcoming webinar “Overcoming Private Cloud Challenges in Healthcare IT”, September 29th at 2:00PM EST.  Register here


2 Ninjas and Amazon Web Services

Amy and I spend a good amount of time working on external projects. In fact, we discussed at the beginning of this year what we wanted to focus on. For me it has been wrapping up my Pluralsight course for vRO, as well as working on extending Tintri APIs to meet business use cases. For Amy, it’s been automating the world at UCMC, as well as working on and discussing ideas around community and charity work that we hope to start early next year.

For the rest of this year, we are going to continue our Real World Cloud Series, and given the rise of AWS, which does not seem to be slowing down, we’ve decided to get going on a series focused on AWS. We are going to start with the IaaS services first, then expand into the automation and service catalog discussions that we have on a day-to-day basis. After that we will continue on to gather AWS certifications. I will also be blogging about this on the Ahead blog site from a higher-level, business standpoint. There are tons of useful posts there from many of the colleagues I work with, so definitely check it out.

We have created 2 pages to organize this:

AWS Guides

AWS Solutions Architect Associate Exam

In some cases, both pages will share some of the same blog posts but hopefully this helps if you are just trying to focus on the exam.  It will all become clear as the posts start to come out in the next few months.

 

 

 


AWS Simple Storage Service (S3) Fundamentals

 

Before diving into the other AWS services, it is highly recommended that you gather a strong background in all of the AWS Storage services and their specific use cases. In this post, we will be discussing S3 specifically.

In short, S3 provides highly scalable object storage. In 2013, Jeff Barr wrote a blog post stating that Amazon S3 had reached over 2 trillion objects and was handling 1.1 million requests a second. I’d love to find an updated stat, but this in itself gives an indication of how widely used this service already is.

Object Storage – Quick Primer

For anyone not familiar, object storage provides the ability to store objects (obvious, I know). These are essentially collections of digital bits: a document, a digital photo, an XML file, and so on. Object storage offers highly reliable and easily scalable storage for all of these digital bits, but there is basically no structure at all. It simply provides storage, and it differs from file storage, which provides additional functionality such as the ability to update in place. In a typical file system, you can append information directly to a file. In object storage, this is not the case. You can add an object and retrieve it immediately, but you can’t change it; instead, you have to upload a modified copy of the object to replace it. You can still apply permissions and versioning, as we will see soon, but as you architect applications today you need to consider whether or not you truly need a file system. Amazon did recently release EFS (think NAS, basically), which can potentially satisfy your specific file use cases, but it is still early on and the verdict is still out.

How do I use it? – Creating our first S3 Bucket

First, log in to your AWS console and you will see the icon for S3 on the left-hand side under “Storage & Content Delivery”.

[Screenshot: S3 in the AWS console under Storage & Content Delivery]

You will be presented with the S3 welcome screen.

[Screenshot: the S3 welcome screen]

The first thing to note is the term “Bucket”. It helps to think of a bucket basically as a folder but the name of the bucket is globally unique. Once someone takes the bucket name, it is not available for anyone else to use.

Simply select Create Bucket and type in a name for your new S3 bucket.

If someone else has the name already, it will error out and let you know. The name of the bucket also needs to be in lowercase.

[Screenshot: creating the first S3 bucket]

Once created, you will see the main S3 management screen.

[Screenshot: the S3 management screen]

You can see on the right hand side a number of options which we will come back to in subsequent posts. For now, if we click into our bucket, we will see that it is empty.

[Screenshot: the empty bucket]

 

We can create additional folders inside of our bucket or simply begin to upload files at this point. If you select the Actions menu, you will also see additional options.

[Screenshot: the Actions menu]

Let’s go ahead and upload a file. In my example, I will simply select a PNG image file as per the screenshot below.

[Screenshot: selecting a PNG file to upload]

Before we go ahead and start the upload, it is worth clicking the Set Details button.

[Screenshot: the Set Details options]

You can see here that we have additional storage options we can apply. For now, we are going to select Use Standard Storage but there are ways to further reduce cost if the other storage options apply. There is also an option to use Server Side Encryption.

Go back and select Start Upload.

[Screenshot: the file upload]

Once completed, we will see our image file appear on the left hand side.

Select Properties from the menu on the top right, and you will be able to see the details of the object, including its link.

[Screenshot: object properties showing the link]

Note the link. If I put this into my web browser directly, I get the following Access Denied error.

[Screenshot: the Access Denied error]

This is because the permissions are not set to allow public access. If I go ahead and add Everyone to have Open/Download permissions as follows…

[Screenshot: granting Everyone Open/Download permissions]

…I end up now being able to access this image publicly.

[Screenshot: the image now loading publicly]
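If you prefer scripting over the console, the same flow can be done with the AWS Tools for PowerShell module. This is just a rough sketch; it assumes the module is installed and credentials are already configured, and the bucket name is a placeholder.

# Create a bucket (names are global and must be lowercase)
New-S3Bucket -BucketName "2ninjas-example-bucket" -Region us-east-1

# Upload a file and make it publicly readable
Write-S3Object -BucketName "2ninjas-example-bucket" -File ".\example.png" -Key "example.png" -CannedACLName public-read

# Pull the object back down
Read-S3Object -BucketName "2ninjas-example-bucket" -Key "example.png" -File ".\example-copy.png"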

 

With that, our basic primer comes to an end. In the next post we will discuss the different storage types and permissions we saw above.



Living the cloudy life… #cloudlife

A few people asked me recently why some of us are using the hashtag #cloudlife and what it means. It came out of the Ahead Tech Summit presentation I was preparing for, and working on with Nick Rodriguez, back in June. I was explaining my concept to Nick and he created this great image.

[Image: #cloudlife graphic created by Nick Rodriguez]

So what does it mean?

Ultimately, it comes from a belief that Cloud is about creating a true experience. This means not just changing the way customers of IT consume services via a catalog, but going that extra mile.

I’ll get on to roles and more Cloud Design topics in a future post. The one thing I want to stress over and over is that our goal in creating a Cloud is to create a place people come to for IT services and leave feeling like they got something more.

If you’re an IT person, you must put yourself in the developer’s shoes and try to think of the pain and annoyance they actually go through when submitting a form. They wait weeks for their server to arrive, and then they still have to go to subsequent teams to get various pieces of software installed, DR options approved, extra storage added, and so on. Then they have to make sure that everything they did in Dev works in QA and finally Production. A sysadmin might push out a patch or a VM template that doesn’t work as it did the previous month because someone else made a change.

Follow this up with the sheer amount of Public Cloud PaaS services and other external services the teams wish to consume.  Many of these services require security approvals and perhaps additional firewall and networking configurations.

It all adds up to a frustrated customer and in turn ultimately affects the business’s ability to innovate and grow.

[Image: the developer journey through traditional IT]

The opposite is the #cloudlife experience..

Happy Customer A: “Wow, I came to this catalog and got everything I needed. BAM! Now I can create something awesome today while my idea is hot.”

Happy Customer B: “This Cloud is better than just the AWS or Microsoft Cloud. I get those features and more. Everything I want is here!”

Happy Customer C: “I think…I love this Cloud… #cloudlife”

Happy Customer D: “If I had a Cloud, it would be just like this cloud. I’m telling my friends about DevOps and #cloudlife.”

[Image: happy customer]

It’s not about just having the best programmer and engineering the best back end services but the full end to end experience. How you design the front end menu, how you guide every decision the user makes, and how you can get them what they need to be successful and grow the business are front and center. It takes a combination of people and skills to execute on this successfully.

What does it mean in practice?

Take the example of a developer who has deployed a SugarCRM environment (an open source CRM tool). Great, they deployed it from their request catalog, but what if they want to synchronize data from one environment to another for testing? Previously, they would have had to put in a request for someone to back up and restore the database to the new environment. This could then involve a piece of paper being handed around between teams until the task is completed.

The alternative is an option like the screenshot below in vRealize Automation. We add an Action which is visible in the items list that gives them the ability to execute this operation with one click.

[Screenshot: vRA day 2 action in the Items list]

Clicking the “vRA-DevOpsTeamX-SyncData” button initiates a vRealize Orchestrator workflow. This workflow in turn connects to a Tintri storage array to initiate a SyncVM operation. The workflow will create all the appropriate change controls, shut down the VMs, run the storage array tasks, and so on. Again, think of everything that you need to do to complete the task and provide it as a self-service option.

Essentially, the workflow would look something like this:

[Screenshot: the SyncData workflow schema in vRO]

Other Examples…

Time permitting, some of these will turn into blog posts as well, but here are some examples of clear services you can offer to make people’s lives easier.

  • Complex Environment Deployments (IaaS, PaaS, SaaS mixes)
    • This means getting everything they need. Not just a VM deployed.
  • Event Based Orchestration – e.g. AWS Lambda to SNOW, Orchestration systems etc.
  • Automated Redeployment of Environments on Schedule
  • Self Service Disaster Recovery CheckBox
  • Self Service Backups and Restores
  • Business Discovery Mapping via Parent/Child Relationships created in blueprints
  • Automated Service Account creation and deletion
  • Automated Snapshots before Patching of Systems
  • Automated Firewall Rule creation and deletion

These are just a handful of ideas.  Remember, with each one, we’re taking out the additional paperwork by automating the tasks you’d typically do in your ITIL tool like ServiceNow.

What is #cloudlife…?

It’s certainly also become a #hashtag we use whenever we are working on Cloudy stuff (e.g. creating a cloud proposal while in the dentist chair…wasn’t me) or thinking about a new innovative Cloud idea while drinking a Tim Carr Starbucks Iced Green Tea (#notpropertea). Essentially, it’s a way of thinking beyond our Infrastructure roles and what the requester is asking for to create something more.

#cloudlife is about reaching for the best possible user experience. One that doesn’t feel like it’s forcing you into a box but instead feels refreshing and enjoyable.

 

vRealize Orchestrator Appliance – Guest File Operations Part 1 – (Copying a file to guest VM)

One of the things you will often find you need to do with vRO is to get a file to a guest VM, or just run a file from inside the VM. Now for Windows you can use Powershell remote features in many cases, but what if your server isn’t on the network yet? Until version 5.1 we had to rely on VIX as a way to do this, but now VMware has added a number of new workflows under “Guest Operations” which are much more reliable.

[Screenshot: the Guest Operations workflows in vRO]

“Copy file from vCO to guest” is the one I’m going to be using in this example.

First of all copy the workflow into a sandbox area. This way you can move a bunch of the inputs to attributes and not have to key them in each time (e.g. The local administrator username, password, and test VM).

In my example, I’m going to create a text file called test.txt in a new folder under /opt called “vcofiles”.

My target machine is a Windows 2008 R2 server, where I will copy the file and place it in the C:\temp\ folder with the name “testcopy.txt”

If you run the workflow then these are my input parameters:

[Screenshot: the workflow input parameters]

The problem is that if you run this you will get an error similar to this:

“No permissions on the file for the attempted operation (Workflow: Copying files from vCO appliance to guest/Scriptable task…”

[Screenshot: the permissions error]

In order to fix this you first need to give the correct rights to the folder and file on your vCO Appliance.

1. Log in to the appliance as root
2. Give Read/Write/Execute rights to the new folder

[Screenshot: folder rights on the appliance]

3. Give Read/Write rights to the Text file you made

[Screenshot: file rights on the appliance]
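If you would rather handle steps 2 and 3 from the command line instead of a GUI file browser, something along these lines works (the permission bits here are just one permissive choice to rule permissions out while testing; tighten them once it works):

cd /opt
chmod 777 vcofiles
chmod 666 vcofiles/test.txt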

Unfortunately we aren’t quite done yet. You also need to tell orchestrator which locations it can read/write/execute from. This involves editing the “js-io-rights.conf” file located in “/opt/vmo/app-server/server/vmo/conf”

[Screenshot: the js-io-rights.conf file]

Add the line “+rwx /opt/vcofiles/” as shown above.

If anyone isn’t too sure of the Linux commands to do this:

  • Type “cd /opt/vmo/app-server/server/vmo/conf” and press enter.
  • Type “vi js-io-rights.conf” and press enter.
  • Use the arrow keys to move the cursor where you want and press the insert key
  • Press Enter and type in the line “+rwx /opt/vcofiles”
  • Press ESC
  • Type “:wq” and press enter.

4. Now, there’s one more thing. You need to restart the vCO service for this to take effect.

Login to the vCO configuration manager, go to startup, and click restart service.

[Screenshot: the vCO service restarted]

5. Now run your workflow and see if your text file copied across.

[Screenshot: the file copied to the guest successfully]

You can see a quick video demo of this on youtube. (apologies for the mouse pointer issue..)

Thanks for reading. Let me know if you have any questions.

 

How to match and correlate Windows SCSI Disk IDs with VMware VMDKs

*Note: This is a repost due to moving my posts from SystemsGame.com to 2ninjas1blog.com.

This post comes from a colleague of mine who couldn’t find a great resource on how to correlate the Windows Disk in Disk Management, with the Virtual Disk presented by VMware.

When all the disks are different sizes it is easy, but sometimes they are the same…how can you be sure you are expanding the right disk?

These instructions/steps should allow you to correlate Windows Disks to VMDK Disks.

  1. RDP to the Windows server in question and run this PowerShell script:
Get-WmiObject Win32_DiskDrive | select-object DeviceID,{$_.size/1024/1024/1024},scsiport,scsibus,scsitargetid,scsilogicalunit | out-file -FilePath c:\OutputPhysicalDrive.txt

This script should allow you to match the OS disks to the VMDK Disks. The output will be referenced in later steps.

Example output

DeviceID : \\.\PHYSICALDRIVE3
$_.size/1024/1024/1024 : 9.99680757522583
scsiport : 3
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE6
$_.size/1024/1024/1024 : 49.9993586540222
scsiport : 5
scsibus : 0
scsitargetid : 1
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE4
$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 4
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE0
$_.size/1024/1024/1024 : 59.996166229248
scsiport : 2
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE1
$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 2
scsibus : 0
scsitargetid : 1
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE2
$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 2
scsibus : 0
scsitargetid : 2
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE5
$_.size/1024/1024/1024 : 49.9993586540222
scsiport : 5
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

The second step is to get a list of your VMDK disk information by editing the virtual machine in question. 

The information you will be retrieving for each disk is the Disk Name, the Size, and the SCSI (X:Y) value shown under Virtual Device Node, where:

X = Bus ID
Y = Disk ID

For example:

Disk Name: “Hard disk 1”
Size: “60 GB”
Bus ID (X): 0
Disk ID (Y): 0

Enter the disk information for all VMDK disks into a table with columns for Disk Name, Size, Bus ID and Disk ID.

Reference OutputPhysicalDrive.txt and match up any OS disks to VMDK disks that have a unique size.

For the non-unique drives you will need to match the Windows disk scsitargetid with the VMDK Disk ID.

The first 2 in the example below are both 50GB Drives.

DeviceID : \\.\PHYSICALDRIVE6
$_.size/1024/1024/1024 : 49.9993586540222
scsiport : 5
scsibus : 0
scsitargetid : 1
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE5
$_.size/1024/1024/1024 : 49.9993586540222
scsiport : 5
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

The next 3 are all 20GB drives.

DeviceID : \\.\PHYSICALDRIVE2
$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 2
scsibus : 0
scsitargetid : 2
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE1
$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 2
scsibus : 0
scsitargetid : 1
scsilogicalunit : 0

DeviceID : \\.\PHYSICALDRIVE4
$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 4
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0
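To make the comparison easier to eyeball, here is a slightly tidier variation of the same query, sorted by size and SCSI target ID (it uses the same Win32_DiskDrive properties as the script above):

Get-WmiObject Win32_DiskDrive |
    Select-Object DeviceID,
                  @{Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB, 2) }},
                  SCSIPort, SCSIBus, SCSITargetId, SCSILogicalUnit |
    Sort-Object SizeGB, SCSITargetId |
    Format-Table -AutoSize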

Hope this helps anyone else having the issue. I’ll loop around and update the PowerShell script I ended up using for this soon as well.

Thank you vRad for this great guide!

vRealize Orchestrator Workflow: Change VM Port Group for VM on Standard vSwitch

*Note: This is a repost due to moving my posts from SystemsGame.com to 2ninjas1blog.com.

I was surprised recently to find that no built-in workflow existed for changing the backing information for a VM if you aren’t using a VDS. Now, before I go any further, I’m a big fan of moving to a vSphere Distributed Switch, but there are certainly cases where you might encounter a standard vSwitch environment in which you need to automate port group changes.

The Approach:

Essentially when it comes to changing NIC settings on a VM, you have to change the “Backing” information for the NIC associated with the VM. In my case this was for VMs which were just built as part of an overall automation process, and had only one NIC.

Step 1: Create Action Item.

I created an action item which has 2 inputs.

“vm” of type VC:VirtualMachine – This is basically so you can select the VM in vCO that you want to modify

“vSwitchPGName” of type String – This is so you can pass in the string value of the portgroup name for the vSwitch.

Code:

The code I then used is below. I’ve commented it but please let me know if you have any questions.

var spec = new VcVirtualMachineConfigSpec(); // Initialize a Virtual Machine Config Spec first
var myDeviceChange = new Array(); // Create an array to hold all of your changes
var devices = vm.config.hardware.device;

//Find devices that are VMXNET3 or E1000
for (var i in devices)
	{
		if 	(
				(devices[i] instanceof VcVirtualVmxnet3) ||
				(devices[i] instanceof VcVirtualE1000) 
			)
		{
			System.log("The device we are going to modify is: " + devices[i]);
			var nicChangeSpec = new VcVirtualDeviceConfigSpec(); //This is the specification for the Network adapter we are going to change
			nicChangeSpec.operation = VcVirtualDeviceConfigSpecOperation.edit; //Use edit as we are going to be modifying a NIC
			//Create a device object of the same type as the existing NIC so the edit matches the adapter type
			if (devices[i] instanceof VcVirtualVmxnet3) {
				nicChangeSpec.device = new VcVirtualVmxnet3();
			} else {
				nicChangeSpec.device = new VcVirtualE1000();
			}
			nicChangeSpec.device.key = devices[i].key; 
			System.log("NicChangeSpec key is : " + nicChangeSpec.device.key);

			nicChangeSpec.device.addressType = devices[i].addressType;
			nicChangeSpec.device.macAddress = devices[i].macAddress;

			System.log("Adding backing info" ) ;
			//Add backing information

			nicChangeSpec.device.backing = new VcVirtualEthernetCardNetworkBackingInfo();
			System.log("Backing info for nicChangeSpec is : " + nicChangeSpec.backing);
			nicChangeSpec.device.backing.deviceName = vSwitchPGName; //Change the backing to the portgroup input
			System.log("Backing info for deviceName on nicChangeSpec is : " + nicChangeSpec.device.backing.deviceName);

			//Push change spec to device change variable
			myDeviceChange.push(nicChangeSpec);

		}
	}

spec.deviceChange = myDeviceChange;
System.log("DeviceChange Spec is: " + spec.deviceChange);
return vm.reconfigVM_Task(spec);

Step 2:

I created a simple workflow which calls this action item and then has a vim3WaitTaskEnd so we can be sure the task is completed before moving on to any other workflows. This is useful if you are going to be incorporating this action into a larger process.
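If you would rather call the action from a scriptable task instead of dropping the elements onto the canvas, a rough sketch looks like this (the module path and action name are placeholders for wherever you saved your action):

// vm (VC:VirtualMachine) and vSwitchPGName (string) are the workflow inputs
var task = System.getModule("com.2ninjas.network").updateVmPortGroup(vm, vSwitchPGName);

// Wait for the reconfigure task to finish before moving on
// (vim3WaitTaskEnd ships with the vCenter plugin library)
System.getModule("com.vmware.library.vc.basic").vim3WaitTaskEnd(task, true, 2);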

[Screenshot: the Update Port Group for vSwitch workflow]

Running the workflow gives you this simple presentation.

[Screenshot: the workflow presentation]

And that’s basically all there is to it. Select your VM, type in your PortGroup name, and voila!

For a vDS, VMware included a workflow out of the box in vCO so there is no need to create any of the above.

Enjoy!

vRealize IaaS Essentials: Building your Windows Server 2012 Template on vSphere – Part 3 (OS Tuning)

Now that we have a base OS build completed, we need to start configuring the OS to the settings we want.

Step 1: Get VMware Tools Installed

Without VMware Tools in the OS, many things are sluggish and just annoying. Most importantly, it fixes the annoying mouse cursor tracking issues (this is even more noticeable when you’re in a VDI session into a VMware console).

  • Login to your vSphere Web Client and Locate your VM
  • Select the VM > Actions > Guest OS > Install VMware Tools...

[Screenshot: the Install VMware Tools menu option]

  • You will get a prompt to mount the Tools ISO. Select Mount.

[Screenshot: the prompt to mount the Tools ISO]

  • Now inside the OS, open My Computer/This Computer and tab over to the CD-ROM drive. I found it almost impossible to use the mouse in the VMRC console until Tools was installed, so I had no choice but to use the keyboard to get it done. A combination of Tab and Space did the trick.

[Screenshot: browsing to the CD-ROM drive]

  • Once you are there, run Setup and you should be presented with the VMware Tools installation screen.

[Screenshot: the VMware Tools installation screen]

  • Choose Next
  • Select Typical for your installation type

[Screenshot: selecting the Typical installation type]

 

  • Once installation is complete, reboot the OS

Step 2: Fine tune your OS

First of all, a big thanks to some of my Twitter friends who gave some good suggestions on tweaks here. There is always going to be a debate as to what gets done in the template vs. GPO/configuration management. I’d say the settings below are just the core ones necessary to facilitate deployment of an OS with ease. AD and configuration management should definitely come in after the fact and take care of setting other OS settings to their necessary values.

  1. Patch the OS to the latest (It’s worth automating this in the future)
  2. Set Date/Time
  3. Set the OS Hostname to VM Template Name – this helps to know if sysprep worked etc.
  4. Disable the Windows Firewall
  5. Disable UAC
    1. http://social.technet.microsoft.com/wiki/contents/articles/13953.windows-server-2012-deactivating-uac.aspx
  6. Create a Local User account for use by vRealize (e.g. svc_vrealize). You can make sure this account gets disabled automatically as part of your builds, or via Puppet or GPO, to comply with security requirements. It helps, however, to be able to easily get into a system early on using vRO Guest File Operations via a local service account (see the PowerShell sketch after this list).
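For items 4 through 6, here is a rough PowerShell sketch for Server 2012; the account name matches the example above, and the password is a placeholder you should change.

# Disable the Windows Firewall on all profiles
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False

# Disable UAC (the EnableLUA change requires a reboot to take effect)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' -Name EnableLUA -Value 0

# Create the local service account and make it a local administrator (one choice of rights; adjust to your standards)
net user svc_vrealize "P@ssw0rd-ChangeMe" /add
net localgroup Administrators svc_vrealize /add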

Also here is a useful link provided by Sean Massey who does a lot of tuning on the Desktop side: https://labs.vmware.com/flings/vmware-os-optimization-tool

Finally, remember to disconnect your CD ISO.

After turning your VM back into a template, we now have a template ready to deploy! Now we can get onto the fun stuff.

Upcoming #vBrownBag Webinar Series: AWS Certified Solutions Architect – Associate Exam


It’s finally time to bang out some AWS certifications!

AWS has been on my radar for a long time now, and really this is one of many certifications that are just overdue and need to get done.

Thanks to an invite from Jonathan Frappier to get me motivated and put a date on it, I will be presenting the first session in the #vBrownBag series on the AWS Certified Solutions Architect – Associate exam. Sign up by going to the vBrownBag site. Following Part 1, many of my other colleagues at Ahead (Tim Carr & Bryan Krausen) will be presenting subsequent parts of Domain 1 (Designing highly available, cost-efficient, fault-tolerant, scalable systems).

In part 1, I will be covering the following objectives:

– Identify and recognize cloud architecture considerations, such as fundamental components and effective designs
— How to design cloud services
— Planning and design.

Part 1 is certainly high level and focused on an overview of fundamental components and design. Once I get through the specific requirements, I’m happy to stick around and share real-world experiences for anyone interested, from my time leading the Cloud and Automation practice at Ahead with our clients. I’m sure other Ahead people will be on there if you want to hear their real-world experiences as well. Otherwise, let’s do some cert training!


Look forward to seeing many of you on the vBrownbag!