The post New Pluralsight Course: Automating AWS and vSphere with Terraform appeared first on 2ninjas1blog.com.
I’m proud to announce that as of 10:30pm CST last night, my new Pluralsight course “Automating AWS and vSphere with Terraform” has been released!
You can find all of the details about the course on Pluralsight here.
Terraform has been a technology I was keen to get into, and this course is ultimately a 101 course on using Terraform with AWS and vSphere. I decided to build this course across On-Premises and a Public Cloud because of the massive growth in Hybrid Management technologies we are seeing today (and the amount of bad information in the industry compared to the real world).
As you will see in the course, Terraform can solve many use cases. In addition, if you augment it with other tools in your arsenal, it can be an extremely powerful way to perform most of your Infrastructure Automation.
The course covers the following topics:
As always, please let me know if you have any feedback. Otherwise I hope you enjoy the course!
The post Terraform – Assigning an AWS Key Pair to your EC2 Instance Resource appeared first on 2ninjas1blog.com.
Assign a Key Pair
In order to access an EC2 instance once it is created, you need to assign an AWS EC2 Key Pair at the time you instantiate the instance. If you haven’t already done so, go ahead and create a Key Pair from the AWS Console by clicking the Key Pairs section on the left-hand side. You will see a screen like the one below. Clicking Create Key Pair will walk you through the process.
During the process you will be prompted to save a private key file (.pem). Keep this safe as you will need it.
Now in Terraform, we are going to add one additional line under the resource section for our EC2 Instance. You can see in my screenshot above that my demo key pair is called “AWS EC2 – SEP 2016”, so we simply need to reference this by adding the following line.
key_name = "AWS EC2 - SEP 2016"
The end result looks like this:
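Something like this, assuming the resource from part 1 (same AMI and instance type as used in these posts):

```hcl
# EC2 instance from part 1, now with the key pair assigned
resource "aws_instance" "2ninjasexample1" {
  ami           = "ami-13be557e"
  instance_type = "t2.micro"
  key_name      = "AWS EC2 - SEP 2016"
}
```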
If you execute a terraform apply now, you will see that your new EC2 instance is created and the Key Pair name should appear correctly in the details pane.
Note: if you did not destroy your previous Terraform configuration, and you deployed it just as in part 1 without a key pair, you will notice the following when you execute a terraform plan.
The reason is that you cannot assign a key pair to an already running EC2 instance. Terraform is letting you know that it will be forced to delete the instance and create a new one. When you perform your terraform apply, your end result will reflect this.
Otherwise, that completes this post. Now you know how to use your key pairs. Terraform also has the power to create key pairs on demand, which we will hopefully circle back to in the future.
The post New Pluralsight Course – Introduction to Workflow Development with VMware vRealize Orchestrator appeared first on 2ninjas1blog.com.
Here is a quick video overview of the course:
I aimed this course at getting people into workflow development. This means I don’t focus on product installation and plugin installations, but more specifically on how you can develop and code the workflows.
The course contains the following 7 modules:
In addition to my course, I also work with a large number of customers in my role at Ahead. For anyone looking to get started with Orchestrator, Ahead also now offers an AHEADStart for VMware vRealize Orchestrator which takes care of all the plumbing and gets people up and running with the product.
Please enjoy the course and I would absolutely love any feedback. Teaching in this format was completely new to me and took some learning and getting used to. Comparing the first 2 modules to the last 2, I can certainly tell the difference as I got more comfortable. I plan to circle back and write about my experience for anyone else looking to do a course in this manner.
Finally, I can’t say enough great things about working with the Pluralsight team. Simply great people.
Nick
The post Terraform 101 – What is it? How do I use it? appeared first on 2ninjas1blog.com.
I’ve been watching Terraform over the past few years and have finally had some time to start getting stuck into it. I must say, I’m impressed by the potential of this product and others from HashiCorp.
Terraform essentially fits in the Infrastructure Automation category, and has a similar coding approach to tools like Puppet, while in some ways operating more like an Orchestrator without the visual aspect.
What is it?
Essentially it adds a layer of abstraction to services like Amazon, Google etc. Instead of an AWS CloudFormation template, I can use a Terraform configuration. On top of that, the piece that is more intriguing to me is the ability to use their module approach as well as other providers and provisioners.
Providers allow you to use the same declarative state language for other systems. I encourage you to check out the list on the Terraform site.
Provisioners essentially let us determine what tasks to initiate and where. For example, you could use local-exec to execute commands locally on the Terraform box, or remote-exec to execute them on a remote server via SSH or WinRM.
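As a quick, hypothetical sketch (the commands here are illustrative, not from the original post), a remote-exec provisioner sits inside a resource block and runs once the instance is reachable over SSH:

```hcl
resource "aws_instance" "web" {
  ami           = "ami-13be557e"
  instance_type = "t2.micro"

  # Runs on the new instance over SSH after it is created
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd",
      "sudo service httpd start"
    ]
  }
}
```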
The idea behind all of this is that you have one place, and one language to learn, which then works across public Cloud providers. You don’t need to learn, say, the AWS CloudFormation template language and then go learn another language for another cloud provider. You would simply use Terraform to deploy to all of them.
How do I use it?
Let’s get stuck in and walk through a very basic Terraform configuration for deploying an AWS instance. At the core of Terraform is the .tf file. This, combined with other files in the same directory or module directories, forms a Terraform Configuration. There are 2 formats for Terraform files: Terraform format or JSON. It is recommended that you use the Terraform format, which is easily readable (think Puppet DSL).
Example: Create an AWS EC2 Instance with Terraform
Note: For all activities below you will need an AWS account and will be charged by Amazon accordingly. I try to use the free tier for all demo examples.
The first piece we declare is the provider, which in this case is AWS. Grab your access key and secret key, and then choose the region you want to provision your EC2 instance into.
provider "aws" {
access_key = "yourkeyhere"
secret_key = "yoursecretkeyhere"
region = "us-east-1"
}
Next, we declare our new resource. In this case I am choosing to instantiate an AWS instance called “2ninjasexample1”. I am going to use the Amazon AMI with ID “ami-13be557e”. Finally, I’m choosing t2.micro as my instance type.
resource "aws_instance" "2ninjasexample1" {
ami = "ami-13be557e"
instance_type = "t2.micro"
}
That’s it for our configuration file. Simply save it in the folder you created earlier and browse to that folder.
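From that folder, the basic workflow is a pair of commands (note: newer Terraform releases also require a terraform init first to download the AWS provider):

```shell
terraform plan     # preview what Terraform will create, change, or destroy
terraform apply    # create the EC2 instance described in the .tf file
```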
After you run terraform apply, Terraform creates a new AWS EC2 instance as well as 2 additional files in our folder which maintain the state information.
If we examine the .tfstate file, you will see it contains all the specific information about our AWS instance.
In particular, you can see that it has captured the AWS instance ID which you can also view from your AWS console if you select your EC2 image.
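Heavily trimmed, the relevant part of the state file looks something like this (the instance ID below is made up for illustration; yours will match what you see in the console):

```json
{
  "modules": [
    {
      "path": ["root"],
      "resources": {
        "aws_instance.2ninjasexample1": {
          "type": "aws_instance",
          "primary": {
            "id": "i-0123456789abcdef0",
            "attributes": {
              "ami": "ami-13be557e",
              "instance_type": "t2.micro"
            }
          }
        }
      }
    }
  ]
}
```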
Running terraform destroy tears everything back down, and just like that, it is destroyed! You will also notice your state file is updated to reflect this.
Hopefully at this point, you can see the power behind this tool. Stay tuned for more posts on this.
The post It’s ON with Turbonomic and vRO appeared first on 2ninjas1blog.com.
First is a scriptable task to gather inputs for vRA. The inputs are all vRA-specific, so I could remove these. At the end, the workflow pushes properties back to vRA, so I removed “Override vRA Settings” at the end.
Inputs removed from original VMTurbo Main workflow:
My workflow ended up like this, removing vRA dependencies and ending with 2 scriptable tasks to convert the datastore and host to VC:objects instead of strings. These scripts will be covered in another post.
My inputs moved from general attributes and are now templateName, clusterName and datacentreName. In the future I will likely add a scriptable task at the beginning of the workflow to determine these, as they will come from inputs generated by my Windows or Linux Master Build workflow.
Inputs converted from attributes:
I also now have outputs for the actual VC:Datastore and VC:HostSystem objects for your clone workflow in vRO. These were created via the scriptable tasks, which take the strings returned from Turbonomic and do a lookup to match them to the vCenter objects.
Outputs created:
What’s great about having this functionality from Turbonomic is that now the best host and the best datastore will be selected based on analytics from Operations Manager. I was originally picking my datastore based on the amount of free space, but now, using the REST API, I can have the least utilized host and datastore supplied to my clone workflow.
Download the modified workflows here.
I’ll be going over these workflows in the upcoming webinar “Overcoming Private Cloud Challenges in Healthcare IT”, September 29th at 2:00PM EST. Register here
The post Living the cloudy life… #cloudlife appeared first on 2ninjas1blog.com.
So what does it mean?
Ultimately, it comes from a belief that Cloud is about creating a true experience. This means not just changing the way customers of IT consume services via a catalog, but going that extra mile.
I’ll get on to roles and more Cloud Design topics in a future post. The one thing I want to stress over and over is that our goal in creating a Cloud is to create a place people come to for IT services and leave feeling like they got something more.
If you’re an IT person, you must put yourself in the developer’s shoes and try to think of the pain and annoyance they actually go through when submitting a form. They wait weeks for their server to arrive, and then they still have to go to subsequent teams for various follow-up items: pieces of software installed, DR options approved, extra storage and so on. Then, they have to make sure that everything they did in Dev works in QA and finally Production. A sysadmin might push out a patch or a VM template that doesn’t work as it did the previous month because someone else made a change.
Follow this up with the sheer amount of Public Cloud PaaS services and other external services the teams wish to consume. Many of these services require security approvals and perhaps additional firewall and networking configurations.
It all adds up to a frustrated customer, which in turn ultimately affects the business’s ability to innovate and grow.
The opposite is the #cloudlife experience.
Happy Customer A: “Wow, I came to this catalog and got everything I needed. BAM! Now I can create something awesome today while my idea is hot.”
Happy Customer B: “This Cloud is better than just the AWS or Microsoft Cloud. I get those features and more. Everything I want is here!”
Happy Customer C: “I think…I love this Cloud… #cloudlife”
Happy Customer D: “If I had a Cloud, it would be just like this cloud. I’m telling my friends about DevOps and #cloudlife.”
It’s not just about having the best programmers and engineering the best back-end services, but the full end-to-end experience. How you design the front-end menu, how you guide every decision the user makes, and how you get them what they need to be successful and grow the business are front and center. It takes a combination of people and skills to execute on this successfully.
What does it mean in practice?
Take the example of a developer who has deployed an environment of SugarCRM, an open-source CRM tool. Great, they deployed it from their request catalog, but what if they want to synchronize data from one environment to another for testing? Previously, they would have had to put in a request for someone to back up and restore the database to the new environment. This could then involve a piece of paper being handed around between teams until the task is completed.
The alternative is an option like the screenshot below in vRealize Automation. We add an Action which is visible in the items list that gives them the ability to execute this operation with one click.
Clicking the “vRA-DevOpsTeamX-SyncData” Button initiates a vRealize Orchestrator workflow. This workflow in turn connects to a Tintri Storage Array to initiate a Sync VM. The workflow will create all the appropriate change controls, shutting down of VMs, storage array tasks etc. Again, think of everything that you need to do to complete the task and provide it as a self service option.
Essentially, the workflow chains those steps together: create the appropriate change controls, shut down the VMs, run the storage array tasks, and so on.
Other Examples…
Time permitting, some of these will turn into blog posts as well, but here are some examples of clear services you can offer to make people’s lives easier.
These are just a handful of ideas. Remember, with each one, we’re taking out the additional paperwork by automating the tasks you’d typically do in your ITIL tool like ServiceNow.
What is #cloudlife…?
It’s certainly also become a #hashtag we use whenever we are working on Cloudy stuff (e.g. creating a cloud proposal while in the dentist chair…wasn’t me) or thinking about a new innovative Cloud idea while drinking a Tim Carr Starbucks Iced Green Tea (#notpropertea). Essentially, it’s a way of thinking beyond our Infrastructure roles and what the requester is asking for to create something more.
#cloudlife is about reaching for the best possible user experience. One that doesn’t feel like it’s forcing you into a box but instead feels refreshing and enjoyable.
The post vRealize Orchestrator Appliance – Guest File Operations Part 1 – (Copying a file to guest VM) appeared first on 2ninjas1blog.com.
“Copy file from vCO to guest” is the one I’m going to be using in this example.
First of all, copy the workflow into a sandbox area. This way you can move a bunch of the inputs to attributes and not have to key them in each time (e.g. the local administrator username, password, and test VM).
In my example, I’m going to create a text file called test.txt in a new folder under /opt called “vcofiles”.
My target machine is a Windows 2008 R2 server, where I will copy the file and place it in the C:\temp\ folder with the name “testcopy.txt”
If you run the workflow then these are my input parameters:
The problem is that if you run this you will get an error similar to this:
“No permissions on the file for the attempted operation (Workflow: Copying files from vCO appliance to guest/Scriptable task…”
In order to fix this you first need to give the correct rights to the folder and file on your vCO Appliance.
1. Login as root onto the appliance
2. Give Read/Write/Execution rights to the new folder
3. Give Read/Write rights to the Text file you made
Unfortunately we aren’t quite done yet. You also need to tell Orchestrator which locations it can read/write/execute from. This involves editing the “js-io-rights.conf” file located in “/opt/vmo/app-server/server/vmo/conf”.
Add the line “+rwx /opt/vcofiles/” as shown above.
If anyone isn’t too sure on the linux commands to do this:
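Assuming the same paths as above (run as root on the appliance; the exact modes are a judgment call, these are one reasonable choice), the commands look like this:

```shell
chmod 755 /opt/vcofiles             # read/write/execute on the folder
chmod 644 /opt/vcofiles/test.txt    # read/write on the text file
# Allow Orchestrator to access the path (then restart the vCO service)
echo "+rwx /opt/vcofiles/" >> /opt/vmo/app-server/server/vmo/conf/js-io-rights.conf
```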
4. Now, there’s one more thing. You need to restart the vCO service for this to take effect.
Login to the vCO configuration manager, go to startup, and click restart service.
5. Now run your workflow and see if your text file copied across.
You can see a quick video demo of this on YouTube (apologies for the mouse pointer issue).
Thanks for reading. Let me know if you have any questions.
The post vRealize Orchestrator Workflow: Change VM Port Group for VM on Standard vSwitch appeared first on 2ninjas1blog.com.
I was surprised recently to find that no built-in workflow exists for changing the backing information for a VM if you aren’t using a vDS. Now, before I go any further, I’m a big fan of moving to a vSphere Distributed Switch, but there are certainly cases where you might encounter a standard vSwitch environment in which you need to automate port group changes.
The Approach:
Essentially when it comes to changing NIC settings on a VM, you have to change the “Backing” information for the NIC associated with the VM. In my case this was for VMs which were just built as part of an overall automation process, and had only one NIC.
Step 1: Create Action Item.
I created an action item which has 2 inputs.
“vm” of type VC:VirtualMachine – This is basically so you can select the VM in vCO that you want to modify
“vSwitchPGName” of type String – This is so you can pass in the string value of the portgroup name for the vSwitch.
Code:
The code I then used is below. I’ve commented it but please let me know if you have any questions.
var spec = new VcVirtualMachineConfigSpec(); // Initialize a Virtual Machine Config Spec first
var myDeviceChange = new Array(); // Create an array to hold all of your changes
var devices = vm.config.hardware.device;
//Find devices that are VMXNET3 or E1000
for (var i in devices)
{
if (
(devices[i] instanceof VcVirtualVmxnet3) ||
(devices[i] instanceof VcVirtualE1000)
)
{
System.log("The device we are going to modify is: " + devices[i]);
var nicChangeSpec = new VcVirtualDeviceConfigSpec(); //This is the specification for the Network adapter we are going to change
nicChangeSpec.operation = VcVirtualDeviceConfigSpecOperation.edit; //Use edit as we are going to be modifying a NIC
//Create a new device of the same type as the one we found, so a VMXNET3 NIC stays VMXNET3
if (devices[i] instanceof VcVirtualVmxnet3) {
nicChangeSpec.device = new VcVirtualVmxnet3();
} else {
nicChangeSpec.device = new VcVirtualE1000();
}
nicChangeSpec.device.key = devices[i].key;
System.log("NicChangeSpec key is : " + nicChangeSpec.device.key);
nicChangeSpec.device.addressType = devices[i].addressType;
nicChangeSpec.device.macAddress = devices[i].macAddress;
System.log("Adding backing info" ) ;
//Add backing information
nicChangeSpec.device.backing = new VcVirtualEthernetCardNetworkBackingInfo();
System.log("Backing info for nicChangeSpec is : " + nicChangeSpec.device.backing);
nicChangeSpec.device.backing.deviceName = vSwitchPGName; //Change the backing to the portgroup input
System.log("Backing info for deviceName on nicChangeSpec is : " + nicChangeSpec.device.backing.deviceName);
//Push change spec to device change variable
myDeviceChange.push(nicChangeSpec);
}
}
spec.deviceChange = myDeviceChange;
System.log("DeviceChange Spec is: " + spec.deviceChange);
return vm.reconfigVM_Task(spec);
Step 2:
I created a simple workflow which calls this action item and then has a vim3WaitTaskEnd so we can be sure the task is completed before moving on to any other workflows. This is useful if you are going to be incorporating this action into a larger process.
Running the workflow gives you this simple presentation.
And that’s basically all there is to it. Select your VM, type in your PortGroup name, and voila!
For a vDS, VMware includes a workflow out of the box in vCO, so there is no need to create any of the above.
Enjoy!
The post vRealize IaaS Essentials: Building your Windows Server 2012 Template on vSphere – Part 3 (OS Tuning) appeared first on 2ninjas1blog.com.
Step 1: Get VMware Tools Installed
Without VMware Tools on the OS, many things are sluggish and just annoying. Most importantly, it fixes the annoying mouse cursor tracking issues (this is even more noticeable when you’re in a VDI session into a VMware Console).
Step 2: Fine tune your OS
First of all, a big thanks to some of my Twitter friends who gave some good suggestions on tweaks here. There is always going to be a debate as to what gets done in the template vs GPO/Configuration Management. I’d say the settings below are just the core ones necessary to facilitate deployment of an OS with ease. AD and configuration management should definitely come in after the fact and take care of setting other OS settings to their necessary values.
Also here is a useful link provided by Sean Massey who does a lot of tuning on the Desktop side: https://labs.vmware.com/flings/vmware-os-optimization-tool
Finally, remember to disconnect your CD ISO.
After turning your VM back into a template, we now have a template ready to deploy! Now we can get onto the fun stuff.
The post Putting your Cloud on Autopilot and #CloudLife appeared first on 2ninjas1blog.com.
The Conference
On June 23rd, I was delighted to speak for the 3rd year at the Looking Ahead 2016 summit. I’ve talked about how much I love my job before and I can say that our summit reinforces that for me every single year. I leave feeling energized as we take risk after risk every year and try to show customers where we are heading and how we can improve their lives.
My Session: Putting Your Cloud on Autopilot
First of all, I would absolutely love feedback on the session, so please send me an e-mail or tweet me. I really appreciate it.
Approaching this year’s session, it was clear to me that so many of the customers I deal with on a day-to-day basis have moved beyond what I often call the “plumbing phase” of Cloud. I decided to reinforce the message around Cloud by starting off with what it means to me and the Ahead team in general. I am fairly sure every session I do on Cloud for the rest of my life will start off with 60 seconds on what exactly we mean by it, given how misused the term is in the industry.
Deployment Models
Once we got over the basics of doing Infrastructure as a Service, it was time to move onto newer items. In the past I’ve talked a lot about Self Healing Datacenter and how to actually make that a reality, but this time I wanted to focus on the different ways Automation can help across On-Premises and the Public Cloud.
Essentially going from the IaaS Approach via Puppet…
To a partial refactor using AWS RDS…
To a complete PaaS deployment…
All using the same application. I completed a demo showing this, as well as the various ways AWS failover works. The main point here is to stress the choice and flexibility trade-offs you make by embracing the various deployment models. I remember saying a few years ago “No 2 clouds are the same”, and that seems to have taken off. I think it’s still valid, at least for now.
Self Healing
Then it was time to get back onto the Autopilot theme again, this time using a Google Car to illustrate the mechanisms we use in the real world to create safety. Relating it back to Cloud, I explained an example of event management using AWS Lambda and ServiceNow. I took an AWS Lambda function and used it to connect to ServiceNow so that, as nodes spun up or spun down, ServiceNow Change records would be created automatically. I’ve got a post brewing on the benefits of Orchestration and Event-Driven Automation which I hope to finish up some time. I think this is a key topic, often overlooked these days, and something I’ve been discussing heavily with our team at Ahead.
Finally – The Cloud Experience
If there’s one thing I get fed up with at VMUGs and other user groups, it’s people standing up and saying you need to program and that’s the skill. While important, I feel like many just state the obvious in career development without truly explaining what it means to have a functioning Cloud and how you get there, across On-Premises and the Public Cloud.
Nick Rodriguez and I came up with a new term which we call #CloudLife (also a future blog post). How do you create the awesome experience that truly changes behaviours in an organization? I also talk at length with my colleague, Dave Janusz, on this topic. How do you make someone do something in your IT environment without having to tell them? I love asking this question as it creates all sorts of interesting ideas for design best practices. I’m going to write more on this topic soon, but I hope people start to realize the most successful clouds are the ones that create a user experience that works. I read a book during my University days when I studied a module on Human Computer Interaction. I still say to this day that the book, combined with the module, taught me some of the most important lessons in IT.
If you haven’t got it, check it out below. It’s a fun read and not entirely related to IT, but I loved it:
Remember, programming is important, but it’s not the only major skill.
With that, I’m going to end this post. I hope to finally sit down soon and write 3 posts I’ve been thinking and talking about for a while…
These topics deserve more debate than they get today. I feel like DevOps initiatives, when done as a silo (yup, you heard me, people do DevOps in a silo that they call DevOps), have masked some of the changes IT has to make. Also, IT hasn’t always been able to articulate and truly create the services Developers need. Public Cloud is here, but there’s more to wrap around it. Do Developers use Visual Studio and connect directly to Azure? Do they use Docker + IaaS for more flexibility? How do you present the right services and lego bricks of automation?
Time to dream more about….#CloudLife