vRealize Orchestrator Workflow: Change VM Port Group for VM on Standard vSwitch

*Note: This is a repost due to moving my posts from SystemsGame.com to 2ninjas1blog.com*

I was surprised recently to find that no built-in workflow existed for changing the backing information for a VM's NIC if you aren't using a VDS. Now, before I go any further, I'm a big fan of moving to a vSphere Distributed Switch, but there are certainly cases where you might encounter a standard vSwitch environment in which you need to automate port group changes.

The Approach:

Essentially, when it comes to changing NIC settings on a VM, you have to change the "Backing" information for the NIC associated with the VM. In my case this was for VMs which were just built as part of an overall automation process and had only one NIC.

Step 1: Create Action Item.

I created an action item which has two inputs.

“vm” of type VC:VirtualMachine – This is basically so you can select the VM in vCO that you want to modify

“vSwitchPGName” of type String – This is so you can pass in the string value of the portgroup name for the vSwitch.


The code I then used is below. I’ve commented it but please let me know if you have any questions.

var spec = new VcVirtualMachineConfigSpec(); // Initialize a Virtual Machine Config Spec first
var myDeviceChange = new Array(); // Create an array to hold all of your changes
var devices = vm.config.hardware.device;

// Find devices that are VMXNET3 or E1000
for (var i in devices) {
	if ((devices[i] instanceof VcVirtualVmxnet3) ||
		(devices[i] instanceof VcVirtualE1000)) {

		System.log("The device we are going to modify is: " + devices[i]);
		var nicChangeSpec = new VcVirtualDeviceConfigSpec(); // This is the specification for the network adapter we are going to change
		nicChangeSpec.operation = VcVirtualDeviceConfigSpecOperation.edit; // Use edit as we are modifying an existing NIC
		nicChangeSpec.device = devices[i]; // Reuse the existing device so its key, address type, and MAC address are preserved
		System.log("NicChangeSpec key is: " + nicChangeSpec.device.key);

		System.log("Adding backing info");
		// Replace the backing information to point at the new port group
		nicChangeSpec.device.backing = new VcVirtualEthernetCardNetworkBackingInfo();
		nicChangeSpec.device.backing.deviceName = vSwitchPGName; // Change the backing to the port group input
		System.log("Backing info for deviceName on nicChangeSpec is: " + nicChangeSpec.device.backing.deviceName);

		// Push the change spec onto the device change array
		myDeviceChange.push(nicChangeSpec);
	}
}

spec.deviceChange = myDeviceChange;
System.log("DeviceChange spec is: " + spec.deviceChange);
return vm.reconfigVM_Task(spec);
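If you want to sanity-check the instanceof filtering pattern outside of vCO, you can exercise it in plain JavaScript. The classes below are mock stand-ins I made up for this sketch – the real VcVirtualVmxnet3/VcVirtualE1000 types only exist inside Orchestrator's vCenter plugin:

```javascript
// Mock stand-ins for the vCO vCenter plugin device classes (hypothetical, for testing only)
function VcVirtualVmxnet3(key) { this.key = key; }
function VcVirtualE1000(key) { this.key = key; }
function VcVirtualDisk(key) { this.key = key; }

// Same filtering pattern as the action: keep only the supported NIC types
function findNics(devices) {
  return devices.filter(function (d) {
    return d instanceof VcVirtualVmxnet3 || d instanceof VcVirtualE1000;
  });
}

var devices = [new VcVirtualDisk(2000), new VcVirtualVmxnet3(4000), new VcVirtualE1000(4001)];
var nicKeys = findNics(devices).map(function (d) { return d.key; });
console.log(nicKeys); // [ 4000, 4001 ]
```

In the real action, each matching device then gets its own VcVirtualDeviceConfigSpec pushed onto the deviceChange array before the reconfigure task is submitted.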

Step 2:

I created a simple workflow which calls this action item and then runs vim3WaitTaskEnd so we can be sure the task has completed before moving on to any other workflows. This is useful if you are going to incorporate this action into a larger process.

Update Port Group for vSwitch

Running the workflow gives you this simple presentation.


And that’s basically all there is to it. Select your VM, type in your PortGroup name, and voila!

For a vDS, VMware includes an out-of-the-box workflow in vCO, so there is no need to create any of the above.


vRealize IaaS Essentials: Building your Windows Server 2012 Template on vSphere – Part 3 (OS Tuning)

Now that we have a base OS build completed, we need to start configuring the OS to the settings we want.

Step 1: Get VMware Tools Installed

Without VMware Tools in the OS, many things are sluggish and just annoying. Most importantly, it fixes the annoying mouse cursor tracking issues (this is even more noticeable when you're in a VDI session into a VMware console).

  • Login to your vSphere Web Client and Locate your VM
  • Select the VM > Actions > Guest OS > Install VMware Tools...


  • You will get a prompt to mount the Tools ISO. Select Mount.


  • Now inside the OS, open My Computer/This Computer and tab over to the CD-ROM drive. I found it almost impossible to use the mouse in the VRM console until Tools was installed, so I had no choice but to use the keyboard to get it done. A combination of Tab and Space did the trick.


  • Once you are there, run Setup and you should be presented with the VMware Tools installation screen.


  • Choose Next
  • Select Typical for your installation type



  • Once installation is complete, reboot the OS

Step 2: Fine tune your OS

First of all, a big thanks to some of my Twitter friends who gave some good suggestions on tweaks here. There is always going to be a debate as to what gets done in the template vs. GPO/configuration management. I'd say the settings below are just the core ones necessary to facilitate deployment of an OS with ease. AD and configuration management should definitely come in after the fact and take care of setting other OS settings to their necessary values.

  1. Patch the OS to the latest (It’s worth automating this in the future)
  2. Set Date/Time
  3. Set the OS hostname to the VM template name – this helps you confirm later that sysprep worked, etc.
  4. Disable the Windows Firewall
  5. Disable UAC
    1. http://social.technet.microsoft.com/wiki/contents/articles/13953.windows-server-2012-deactivating-uac.aspx
  6. Create a local user account for use by vRealize (e.g. svc_vrealize). You can make sure this account gets disabled automatically as part of your builds, or via Puppet or GPO, to comply with security requirements. It helps, however, to be able to easily get into a system early on using vRO Guest File Operations via a local service account.

Also here is a useful link provided by Sean Massey who does a lot of tuning on the Desktop side: https://labs.vmware.com/flings/vmware-os-optimization-tool

Finally, remember to disconnect your CD ISO.

After turning your VM back into a template, we now have a template ready to deploy! Now we can get onto the fun stuff.

Upcoming #vBrownBag Webinar Series: AWS Certified Solutions Architect – Associate Exam


It’s finally time to bang out some AWS certifications!

AWS has been on my radar for a long time now, and really this is one of many certifications that are just overdue and need to get done.

Thanks to an invite from Jonathan Frappier, who got me motivated to put a date on it, I will be presenting the first in the #vBrownBag series on the AWS Certified Solutions Architect – Associate exam. Sign up by going to the vBrownBag site. Following Part 1, many of my colleagues at Ahead (Tim Carr & Bryan Krausen) will be presenting subsequent parts of Domain 1 (Designing highly available, cost-efficient, fault-tolerant, scalable systems).

In part 1, I will be covering the following objectives:

– Identify and recognize cloud architecture considerations, such as fundamental components and effective designs
— How to design cloud services
— Planning and design.

Part 1 is certainly high level and focused on an overview of fundamental components and design. Once I get through the specific requirements, I'm happy to stick around and share real-world experiences from my time leading the Cloud and Automation practice at Ahead, for anyone interested. I'm sure other Ahead people will be on there if you want to hear their real-world experiences as well. Otherwise, let's do some cert training!


Look forward to seeing many of you on the vBrownbag!

Putting your Cloud on Autopilot and #CloudLife


The Conference

On June 23rd, I was delighted to speak for the 3rd year at the Looking Ahead 2016 summit. I've talked before about how much I love my job, and I can say that our summit reinforces that for me every single year. I leave feeling energized as we take risk after risk and try to show customers where we are heading and how we can improve their lives.

My Session: Putting Your Cloud on Autopilot

First of all, I would absolutely love feedback on the session, so please send me an e-mail or tweet me. I really appreciate it.

Approaching this year's session, it was clear to me that so many of the customers I deal with on a day-to-day basis have moved beyond what I often call the "plumbing phase" of Cloud. I decided to reinforce the message around Cloud by starting off with what it means to me and the Ahead team in general. I am fairly sure every session I do on Cloud for the rest of my life will start off with 60 seconds on what exactly we mean by it, given how misused the term is in the industry.

Deployment Models

Once we got over the basics of doing Infrastructure as a Service, it was time to move onto newer items. In the past I’ve talked a lot about Self Healing Datacenter and how to actually make that a reality, but this time I wanted to focus on the different ways Automation can help across On-Premises and the Public Cloud.

Essentially going from the IaaS Approach via Puppet…


To a partial refactor using AWS RDS…


To a complete PaaS deployment…


All using the same application. I completed a demo showing this, as well as the various ways AWS failover works. The main point here is to stress the choice and flexibility you give up by embracing the various deployment models. I remember saying a few years ago that "no two clouds are the same," and that seems to have taken off. I think it's still valid, at least for now.

Self Healing


Then it was time to get back onto the Autopilot theme, this time using a Google car to illustrate the mechanisms we use in the real world to create safety. Relating it back to Cloud, I walked through an example of event management using AWS Lambda and ServiceNow: I took an AWS Lambda function and used it to connect to ServiceNow so that as nodes spun up or down, ServiceNow Change records would be created automatically. I've got a post brewing on the benefits of Orchestration and Event-Driven Automation which I hope to finish up some time. I think this is a key topic, often overlooked these days, and something I've been discussing heavily with our team at Ahead.

Finally – The Cloud Experience


If there's one thing I get fed up with at VMUGs and other user groups, it's people standing up and saying you need to program and that's the skill. While programming is important, I feel like many just state the obvious in career development without truly explaining what it means to have a functioning Cloud and how you get there, both on-premises and in the public cloud.

Nick Rodriguez and I came up with a new term which we call #CloudLife (also a future blog post). How do you create the awesome experience that truly changes behaviours in an organization? I also talk at length with my colleague Dave Janusz on this topic. How do you get someone to do something in your IT environment without having to tell them? I love asking this question as it creates all sorts of interesting ideas for design best practices. I'm going to write more on this topic soon, but I hope people start to realize the most successful clouds are the ones that create a user experience that works. I read a book during my University days when I studied a module on Human Computer Interaction. I still state to this day that the book, combined with the module, taught me some of the most important lessons in IT.

If you haven’t got it, check it out below. It’s a fun read and not entirely related to IT, but I loved it:

Remember, programming is important, but it’s not the only major skill.

With that, I’m going to end this post. I hope to finally sit down soon and write 3 posts I’ve been thinking and talking about for a while…

  • What is #CloudLife?
  • Skills of Successful Cloud Deployments
  • Visual Orchestration vs Non-Visual Orchestration

These topics deserve more debate than they get today. I feel like DevOps initiatives, when done as a silo (yup, you heard me, people do DevOps in a silo that they call DevOps), have masked some of the changes IT has to make. Also, IT hasn't always been able to articulate and truly create the services Developers need. Public Cloud is here, but there's more to wrap around it. Do Developers use Visual Studio and connect directly to Azure? Do they use Docker + IaaS for more flexibility? How do you present the right services and lego bricks of automation?

Time to dream more about….#CloudLife


A ninja walks into a #TurboFest


*Very late posting this, but better late than never!*

On June 15th I was fortunate enough to attend my first VMTurbo TurboFest. VMTurbo is a company I've personally been watching for a while, and I have always been excited to see the direction they are taking, not just in the Datacenter, but in Cloud operations in general.

On top of that, Amy, along with her director at UCMC, Jason Cherry, was attending to present their results with VMTurbo as well as the new wave of Automation Amy is leading over there.

About the day

What stood out for me most was just how informal and social the event was. I thought this was a nice touch compared to other events. It was also extremely customer focused, something I think many other events lose.

The kickoff was great as we learned from CEO Benjamin Nye the direction VMTurbo continues to focus on.


VMTurbo truly GETS that Cloud is real, and they are working quickly to adapt to the increased momentum of Public Cloud adoption. VMTurbo treats everything as a commodity, working to create a level of abstraction and allowing automated intelligence to determine workload placement.

Given how many vendors have done a poor job at trying to offer automated decisions based on Public Cloud models, I’m excited to see another company with a proven track record in this space on-premises, fully embracing this gap in the market.

The UCMC Presentation


One of the main reasons I attended was to support my fellow ninja blogger as well as the UCMC team who I’ve had the pleasure of working with.

What strikes me most in all my talks with Amy and Jason is the results they've been able to achieve: $600,000 in cost avoidance due to properly utilizing their VMware environments. They saw instant benefits by adding VMTurbo to their infrastructure management tools and were able to get higher density on their clusters as a direct result of the VMTurbo software.

Amy went on to talk about the work around the UCMC Cloud they have been building, primarily on-premises now, with future expansion to Public.


UCMC will be using a mixture of ServiceNow, vRealize Orchestrator and Puppet to easily automate their deployments and take UCMC IT into the 21st century as they put it. This is similar to the solution we both worked on during our time at a previous employer, and the methodology is clean and simple to use.


In addition, Amy has been developing a number of vRO workflows around the VMTurbo plugin, specifically for workload placement and chargeback. The placement workflow is called before the VM clone workflow is initiated in order to determine the optimal host location for the VM. It's a handy way to integrate the placement decision directly into the workflow, instead of having to go to VMTurbo first, get the answer, and then manually key it in as an input. Awesome stuff.

Stay tuned as Amy plans to release all the workflows on our blog when finished.

After the community sessions, the panel sat back down and had a very lively Q&A session with lots of users in the community asking great questions.


Before I left, the VMTurbo marketing team played one of the most impressive videos I've honestly ever seen. I'm trying to get a YouTube link to post here, as it is pure awesome. Think VMs meet insane action movie with awesome effects.


With that said, I can’t wait to do more with VMTurbo again myself, and look forward to catching up with the team again at VMworld.


Today I woke up to a Brexit…


I woke up this morning, off an insane high of yesterday’s Ahead Tech Summit. I got to present to hundreds of people and more importantly alongside fantastic colleagues who I love working with every day. While the conference was going on, I also knew the vote was going on, and asked people not to tell me.

When I finally checked the news at around 5pm Chicago time, (11pm UK time), it said we were going to remain. Even though I felt in the days leading up to the referendum we should leave, I felt a sense of relief, and togetherness at the stay vote. It said something like 57% remain, 43% leave at the time.

I went out to party and felt like life would just go on as normal. Then, waking up in the morning, I received some texts – "wow, just wow" – only to find the result had changed and it was in fact 52% leave, 48% remain. Suddenly shocked, I started to collect my thoughts and think about the vote again. We had actually decided as a country to leave. With that, we are entering new territory…changing direction after years of integrating more and more with the EU.

The Internet

Sadly, I looked at Twitter. I found people being hateful, Trump somehow thinking this compared to his campaign, and pure idiocy. I won't paste those tweets here because I think everyone is entitled to feel their own emotions, and hopefully some of these viewpoints cool down soon.

I read my cousin’s post on Facebook which I did enjoy and I hope we see more of:

“51.9% of the (voting) population wanted to leave the EU. 48.1% of the (voting) population wanted to stay. Regardless of your political beliefs, ladies and gentlemen, it’s called democracy. Millions of people died so we can enjoy such freedom. This isn’t football, rugby, darts or stamp collecting. This is your country, don’t turn your back on it because the result didn’t go your way. Take a knee, drink water.”

The argument

What I see as I browse Twitter is single viewpoints from people. "The UK is anti-immigration" – and somehow that's what people thought the whole thing was about? The UK is the biggest melting pot in the world. I think if you go to London and spend a day walking around, you will see how welcoming we have been to other countries. I don't expect the British attitude of caring for others to change. There may be a nationalist undertone, as in many countries today, but that does not mean it's the viewpoint of many of the people who voted leave.

For me there are fundamental problems with the EU. I would rather have stayed and worked them out, but it’s clear that path is unlikely.

The only reason I would vote leave is for one major point:

To restore powers back to the UK parliament

EU law has supremacy over UK law. We as a people did not elect the European Commission, or the EU president for that matter. Yes, we have MEPs, but this does not go far enough. UK vetoes get overruled by EU law. We cannot govern ourselves as a people, and when you lose the ability as a people to remove people from government, you have a problem. If we want a closer Europe, then let's have an EU that supports that. Maybe every EU citizen gets a vote for the EU president and the people in the European Commission? I feel as if the EU is built on protectionism and by nature has become undemocratic.

But why would I want to stay?

I wish we could find a different solution. There is something to be said for EU integration. I’ve always loved the idea of the EU from when I was in high school. Single trade, freedom of movement. It all sounds great. I can handle the extra money we give the EU given the UK GDP and unemployment rates being lower than most of Europe, we should help our EU neighbors.

Quite frankly though, I have loved going to Europe many times. I’ve enjoyed the Euro. I’ve enjoyed the idea of being able to one day work there if I wanted to with absolutely no issue. I can see the benefits.

Taking an even further step back. I miss Europe, and not just the UK. I’ve lived in America for some time now, I love it here as well, I love my friends here, and today I feel very far away from what is going on overseas. I have many European friends, saddened by the events of today, because it feels like a breakup. Except the kind where we are still living together and half of you doesn’t want to break up at all. You’ve spent years together working to be closer.

The other reasons for leaving..


There are many other arguments being put forward and I’ll give my viewpoint on immigration.

1. Immigration

I am all for immigration, but I do believe there is a sustainable amount

Immigration is beneficial. I believe in helping out others, but I also believe that, like a company, if you bring in too many people at once, you lose the very reason they are coming there in the first place. We are a successful country, and people want to come to the UK. In the age we are in today, the UK has low unemployment compared to other countries, and if we can offer people work, we should.

Take the scale down for a moment and just look at a company – I can even use my company as an example. We have a culture, one we all love. We look out for each other and work well together. If too many people suddenly joined, our culture could change. We have to hire fast today, but we do a good job of integrating people. However, there is a limit to that; I'd say we've even flirted with that limit before.

Now a country is far more complicated, especially one like the UK where we have what I still view as a fantastic health service. This in itself is an appeal for many who can't get healthcare easily. If too many people immigrate into the UK, we may not be able to sustain those services, amongst many others.

2. National Security

This one is questionable. I still believe working with our European partners we are stronger. Isolating ourselves to an island is a problem. Terrorism is a global problem and I don’t believe Brexit truly in the long term makes us safer here. Others may disagree.

3. Economy

Whether we are better off or not from an economic standpoint is hard to say. Yes, we give £19 billion a year to the EU. We get a rebate of £5 billion and receive EU payments back of around £4 billion, so in effect we give £10 billion to the EU. As a percentage of our GDP I don't think that's unreasonable given the European goals, and I could probably live with it if it meant a better Europe. If we look at the EU economy as a whole, though, it really has not been doing as well as other economies, so the argument for tying ourselves to it is sometimes hard to make.

Either way, if we’re in it as a team, I don’t see this as the major reason for leaving.

In closing…

First of all, I certainly believe the UK isn't turning into a nationalist-type country. I'm not a fan of Nigel Farage and UKIP. He had some good arguments in the debate, but the way he goes about politics is a problem in my opinion. Personal attacks are "Trump-like," and I can't vote for people that quite simply show no respect for others, even when they disagree with them. Example: https://www.youtube.com/watch?v=ViPm0GUxw-M

There are other arguments in the leave camp as well. I wish we could have found a new solution rather than leaving. Isolationism is scary at best, and I hope as the dust settles the UK can become an enthusiastic nation that competes on the international stage while continuing to embrace our neighbours. I believe we have a great culture. I love going to the Lake District, Scotland, London, and most of all I miss Cambridge. I miss the many British people I've met during my life there and know that we are a good, caring nation. I hope that out of this, new ideas begin to form on ways to make the country great. That may be wishful thinking, but in the end I am glad that we will start to be able to govern ourselves more directly, rather than through a central body in Brussels.

Saying that, I can't help but feel extremely sad that we lost something. I just hope we can overcome it and move forward, not backward. Let's continue to holiday in Spain, take the Eurostar to Paris, and overcome any strange feelings we might have as we begin to part ways.

I will miss the EU stars on the UK car license plates. We may not be in the EU, but I hope we can be a united Europe.

What I wish for: a reform of the EU and a way to come back together before it's too late. On the plus side, as my cousin said, I'm thankful we live in a democracy where we can even have a referendum.


I wanted to leave some items here. I spent days reading and watching youtube videos but I highly recommend people start reading and watching the debates before making judgements online.

Remain vs Leave Debate: https://www.youtube.com/watch?v=uYTJGBBjkGo – I honestly did not think the remain campaign did a good job here. Maybe with better people this could have gone differently.

Research from my Uncle on why he’s voting Brexit: Why vote Brexit – I suggest reading this. He’s done a ton of work and research in this area.

PM David Cameron interviewed on Remain: https://www.youtube.com/watch?v=HO6MZcOQH0g

Google around, read the EU web site. Learn how the EU works. I read and watched so many other pieces on this.


PowerCLI: Checking for and removing Virtual Machine Memory Limits

Here is a quick one-liner I found to check for any VMs which have memory limits set on them:

Get-VM | Get-VMResourceConfiguration | where {$_.MemLimitMB -ne -1}

If you want to target a specific cluster, just add Get-Cluster “clustername” to the beginning:

Get-Cluster "Clustername" | Get-VM | Get-VMResourceConfiguration | where {$_.MemLimitMB -ne -1}

Now if you want to get rid of the memory limits, add the following:

Set-VMResourceConfiguration -MemLimitMB $null

Final script for all VMs to find and remove limits:

Get-VM | Get-VMResourceConfiguration | where {$_.MemLimitMB -ne -1} | Set-VMResourceConfiguration -MemLimitMB $null

Next step…setting this up as a scheduled workflow in Orchestrator to run every night/week and send out a report of any limits discovered.

*Note: This is a repost due to move from Systemsgame.com to 2ninjas1blog.com*

PowerCLI: List of VMware Hosts, Clusters, Datacenters

Just a quick one-liner I used to gather a list of VMware hosts, including their cluster and datacenter.

Get-VMHost | Select Name, @{N="Cluster";E={Get-Cluster -VMHost $_}}, @{N="Datacenter";E={Get-Datacenter -VMHost $_}} | Export-Csv c:\temp\inventory.csv


*Note: This is a repost due to move from Systemsgame.com to 2ninjas1blog.com*

IaaS Fundamentals: Creating a fresh Windows Server 2012 Template – Part 2

With our base VMware vSphere VM shell ready, it’s time to continue installing the Windows OS.


Just before we dive in, it is worth noting that, depending on how you are remotely connected to the desktop, you may have issues controlling your mouse. In my case I was going via a View desktop and then into the VRM console. I decided to just use the Tab and Spacebar keys instead to make my selections. This gets much easier later on once VMware Tools is installed in the VM.

  • Select Install now, accept the defaults for language etc. until you get to the type of OS you wish to deploy.
  • I chose the Datacenter Edition with GUI here. Note: you can always remove the GUI and go back to Server Core if needed. I know in my environment our Windows team still generally uses the GUI
  • Click Next once chosen



  • Accept the license terms and click Next
  • Change the installation type to Custom: Install Windows Only (advanced) and click Next.



  • Next you will be prompted for your drive layout. It should look like the screenshot below unless you chose a different drive configuration.


  • Leave Drive 0 selected and click Next

Sit back, relax, and enjoy the show!



Enjoy some tea while you wait…


  • Once finished you will need to enter your Administrator Password for your Windows Template.

Coming soon – Part 3 – Configuring and tuning your OS

Rubrik Announces r528 Cloud Appliance and Sexy New Features

Rubrik announced the r528 cloud appliance today. Yes, Rubrik just got sexier. Not only has Rubrik grown exponentially as a company, they are on their 3rd update and are now quite the global force, with 90+ signed channel partners and 4PB+ protected in the field.


The r528 offers encryption at rest and in flight from VMware. Because the appliance uses hardware encryption, there is no compromise on speed or performance. The self-encrypting drives (SEDs) use AES-256 circuitry. All data written to disk is encrypted automatically, and data read back is decrypted automatically. Eliminating or overwriting the security key performs an instantaneous wipe, and if a drive were taken out, it would be worthless without the key.


Boring stuff you should know about: NIST



This offering is FIPS 140-2 validated. What does that mean? It sounds important. The drives and the Rubrik cryptographic library are FIPS 140-2 certified, and where most backup appliances are Level 1, Level 2 brings the ability to detect physical tampering. If you want to nerd out, you can read up on FIPS 140 here. From there, you can read that FIPS 140-2 Level 1 provides the lowest level of security: basic security requirements are specified for a cryptographic module (e.g., at least one Approved algorithm or Approved security function shall be used), and no specific physical security mechanisms are required beyond production-grade components. FIPS 140-2 Level 2 improves upon the physical security of a Level 1 module by requiring features that show evidence of tampering, including tamper-evident coatings or seals that must be broken to attain physical access to the plaintext cryptographic keys and critical security parameters (CSPs) within the module, or pick-resistant locks on covers or doors to protect against unauthorized physical access.

For key management, Rubrik supports external key standards using KMIP 1.0, or provides a Trusted Platform Module (TPM) so there is no need for a KMS, giving customers options if they don't have a KMS set up in their environment.

But wait there’s more!

Rubrik Converged Data Management 2.2

Enhancing the auto protect and SLA inheritance Rubrik already offers

  • Dynamic Assignment – Set policy on a vCenter, Data Center, Cluster, Folder, Host, and more.
  • Inheritance Options – Any new object or workload created will automatically pick up parent SLA assignment.
  • Do Not Protect – Block SLA policy from being inherited with explicit denial to prevent data protection at any desired level.

Throttle detection!

Most people don't want backups to affect workloads. The software can watch for latency to make sure it's not causing performance issues; if storage latency is rising, it is smart enough to halt additional tasks on the fly. Backups don't pile up on your environment like a WWE Royal Rumble.



  • In testing, Rubrik has scaled out to a 20U, 10-brik, 40-node cluster. That's insane
  • Protect 10,000 VMs using vSphere 6.0
  • Instant recovery: quicker spin-up of clone workloads (thanks to being able to get 20,000 I/O per brik) and faster Storage vMotion to your production environment.

Cluster Policy Enhancements

  • Global Pause gives you the ability to use a maintenance window to perform work on the cluster
  • Recurring First Full Snapshot Window gives you control over when a full backup should be performed within an SLA
  • New Retention Periods bring increased flexibility for SLA policies to meet different customer requirements
  • Blackout Windows define when no operational tasks should be executed by the cluster

NAT Support

For customers that don't want to use site-to-site tunneling, there is now NAT support for public bi-directional replication.

User Experience and Management

You might as well enjoy managing your backups…

Last but not least

Backup physical servers alongside your virtual environment. This includes SQL and Linux. You have all the capabilities for physical recovery that you are used to having for your VMs.

Automation Fun


An oldie but a goodie: as you can tell from this blog, we're all about automating all the things possible.

There are several options to satisfy the automation ninja within you:

GitHub PowerShell-Module Repository
PowerShell Gallery (NuGet)
Continuous Integration with AppVeyor

And a personal favorite:
vRealize Orchestrator Packages

Rubrik maintains its mantra – "don't backup, go forward" – with its continuous improvements to the backup experience.