Terraform – Assigning an AWS Key Pair to your EC2 Instance Resource

In the first post on Terraform, we took a look at how to create and destroy a simple EC2 instance. However, one of the common things we need to do in AWS is to assign a Key Pair, as well as tag instances with names, project codes, etc.

Assign a Key Pair

In order to access an EC2 instance once it is created, you need to assign an AWS EC2 Key Pair at the time of instantiating the instance. If you haven’t already done so, go ahead and create a Key Pair from the AWS Console by clicking the Key Pairs section on the left hand side. You will see a screen like the one below. Clicking Create Key Pair will walk you through the process.


During the process you will be prompted to save a private key file (.pem). Keep this safe as you will need it.

Now in Terraform, we are going to add one additional line under the resource section for our EC2 instance. You can see in my screenshot above that my demo key pair is called "AWS EC2 - SEP 2016", so we simply need to reference it by adding the following line.

key_name = "AWS EC2 - SEP 2016"

The end result looks like this:

EC2 with keypair
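In text form, the full configuration, reconstructed from the snippets in this post and part 1 (the access and secret keys are placeholders), looks roughly like this:

provider "aws" {
  access_key = "yourkeyhere"
  secret_key = "yoursecretkeyhere"
  region     = "us-east-1"
}

resource "aws_instance" "2ninjasexample1" {
  ami           = "ami-13be557e"
  instance_type = "t2.micro"

  # The key pair created in the AWS Console, referenced by name
  key_name = "AWS EC2 - SEP 2016"
}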

If you execute a terraform apply now, you will see that your new EC2 instance is created and the Key Pair name should appear correctly in the details pane.


Note: if you did not destroy your previous Terraform configuration, and you deployed it just like in part 1 without a key pair, you will notice the following when you execute a terraform plan.


The reason is that you cannot assign a key pair to an already running EC2 instance. Terraform is letting you know that it will be forced to destroy the instance and create a new one. When you perform your terraform apply, your end result will reflect this.


That completes this post. Now you know how to use your key pairs. Terraform can also create key pairs on demand, which we will hopefully circle back to in a future post.
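As a preview, one way this can work is the aws_key_pair resource, which uploads an existing public key under a name of your choice. A minimal sketch; the key name and file path are hypothetical:

# Upload an existing public key to AWS as a named key pair
resource "aws_key_pair" "demo" {
  key_name   = "terraform-demo-key"
  public_key = "${file("~/.ssh/id_rsa.pub")}" # path is an assumption
}

# Reference it from the instance instead of a hard-coded name
resource "aws_instance" "example" {
  ami           = "ami-13be557e"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.demo.key_name}"
}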


Identity and Access Management – AWS Part 1

Amazon's Identity and Access Management, commonly referred to as IAM, gives you centralized control over your AWS account. Not only can you grant shared access with role-based permissions, you can also provide temporary access to users. As with Active Directory, you can set up a password rotation policy, plus length and complexity requirements. There is also an option for added security with multi-factor authentication through apps like Google Authenticator.

Log into your AWS account and click on the IAM symbol



When creating your console link, you have to remember it is a global DNS namespace; what you choose may already be in use. Also, your users, groups, roles, etc. are not region-specific. They are shared globally. Here I created 2ninjas1blog by clicking Customize next to my IAM link. There are now 5 steps to complete your setup. You'll see the root access keys are already deleted.

Click Activate MFA on your root account and then Manage MFA.

You can select Virtual or Hardware; here I selected Virtual. There is a link from Amazon showing supported MFA devices. The Google Authenticator app on your smartphone works well and is easy for this demonstration.


A QR code will be displayed. This is where your authenticator device comes in. I used Google Authenticator on my phone and scanned the QR code. You then input 2 consecutive codes from the authenticator


And click Activate Virtual MFA

You will now get a message that the MFA device was successfully associated.

Now that we have activated MFA on the root account, let's go on to the next step and create individual IAM users.
Click on Create individual IAM users on the dashboard, and on the left click Users.


Click Create New Users. Remember, by default users will not have access until permissions are granted to them.

Here I create our 2 users and check Generate an access key for each user. It is very important to save this because you cannot obtain it later. This will prompt a download of the credentials after we create a password.


The next screen shows the Access Key ID and Secret Access Key. These can be used with the CLI to interact with AWS directly. Download the credentials and put them in a safe place, because this is the only chance you get. Below is an example user to show the screen.


Here are both accounts but no password is assigned to either one.

We need to check the box next to the User Name, click User Actions, and select Manage Password.

Here you can have a password auto-generated and set whether you want the user to reset their password at login. Credentials in the form of a CSV file can be downloaded and should be kept in a safe place.

Now these logins are useless without permissions assigned. You can assign permissions directly to a user account, but it's best practice to create a group and assign users to the group.

Now we are on the 4th task of our IAM console, Use groups to assign permissions.
In order to assign permissions, we need to create a group with a policy attached. Policies have the permissions listed within them. Click on Groups, then Create New Group.


Through the wizard, I can Set a Group Name

Then attach the Administrator policy and complete the wizard.


Now I can add my 2 users to my new Administrator group. Going to Groups on my dashboard, I click the check box next to NinjaAdmins and, under Group Actions, select Add Users to Group.

Now I can select NinjaAmy and NinjaNick to be NinjaAdmins

I select my NinjaAmy and NinjaNick users, and now I can see 2 users are part of my NinjaAdmins group.


You can always go back and click on Groups to see who is a member and what sort of permissions they have. You can also remove users from the group.
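As a side note for anyone following our Terraform posts, this users-and-group setup can also be codified instead of clicked through. A minimal sketch using this post's demo names (an illustration, not what we actually ran here):

resource "aws_iam_user" "ninja_amy" {
  name = "NinjaAmy"
}

resource "aws_iam_user" "ninja_nick" {
  name = "NinjaNick"
}

resource "aws_iam_group" "ninja_admins" {
  name = "NinjaAdmins"
}

# Put both users into the group
resource "aws_iam_group_membership" "ninja_admins" {
  name  = "ninja-admins-membership"
  group = "${aws_iam_group.ninja_admins.name}"
  users = [
    "${aws_iam_user.ninja_amy.name}",
    "${aws_iam_user.ninja_nick.name}",
  ]
}

# Attach the AWS-managed administrator policy to the group
resource "aws_iam_group_policy_attachment" "admins" {
  group      = "${aws_iam_group.ninja_admins.name}"
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}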


Yay, we are almost done.  Lastly, we have to Apply an IAM password policy.  Click on Manage Password Policy


Here you can go through a number of requirements


Apply your password policy and you will see you are now finished setting up IAM.
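If you would rather codify the password policy too, here is a hedged Terraform sketch; the specific requirement values are examples, not what this post selected:

resource "aws_iam_account_password_policy" "this" {
  minimum_password_length        = 8
  require_lowercase_characters   = true
  require_uppercase_characters   = true
  require_numbers                = true
  require_symbols                = true
  allow_users_to_change_password = true
}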

I can now log into https://2ninjas1blog.signin.aws.amazon.com/console with my username NinjaAmy and come into the AWS console, no longer using root.


On the top right, you can see NinjaAmy in lieu of root.


This completes Part 1 of IAM. Part 2 will go deeper into the roles and policies.

New Pluralsight Course – Introduction to Workflow Development with VMware vRealize Orchestrator

I'm pleased to announce that as of last week, Pluralsight has released my first training course on VMware vRealize Orchestrator. I've been working with this product since the early days, and I couldn't be more thrilled to have completed a 2-hour course teaching people how to get up and running quickly.

Here is a quick video overview of the course:

I aimed this course at getting people into workflow development. This means I don't focus on product installation and plugin installations, but specifically on how you can develop and code the workflows.

The course contains the following 7 modules:

  1. Getting Started – A tour of vRealize Orchestrator and its components.
  2. Building Workflows – Approach to workflow design, followed by some basic workflow creation.
  3. Scriptable Tasks – Learn some basic JavaScript and start using scriptable tasks.
  4. Input Presentation and Additional JavaScript – Learn some basic ways to use the presentation view, as well as more JavaScript.
  5. Actions – Turn scriptable tasks into reusable actions.
  6. VMware Tools and HW Upgrade – A basic example of combining workflows in a real world use case.
  7. Snapshot Creation and Automated Deletion – Create a snapshot and then schedule it for automatic deletion at a future date.

In addition to my course, I also work with a large number of customers in my role at Ahead. For anyone looking to get started with Orchestrator, Ahead also now offers an AHEADStart for VMware vRealize Orchestrator which takes care of all the plumbing and gets people up and running with the product.

Please enjoy the course; I would absolutely love any feedback. Teaching in this format has been completely new to me and took some learning and getting used to. Comparing the first 2 modules to the last 2, I can certainly tell the difference as I got more comfortable. I plan to circle back and write about my experience for anyone else looking to do a course in this manner.

Finally, I can’t say enough great things about working with the Pluralsight team. Simply great people.



Terraform 101 – What is it? How do I use it?


I've been watching Terraform over the past few years and have finally had some time to start getting stuck into it. I must say, I'm impressed by the potential of this product and others from HashiCorp.

Terraform essentially fits in the Infrastructure Automation category, and has a similar coding approach to tools like Puppet, while in some ways operating more like an Orchestrator without the visual aspect.

What is it?

Essentially it adds a layer of abstraction to services like Amazon, Google, etc. Instead of an AWS CloudFormation template, I can use a Terraform configuration. On top of that, the piece that is more intriguing to me is the ability to use their module approach, as well as other providers and provisioners.

Providers allow you to use the same declarative state language for other systems. I encourage you to check out the list on the Terraform site.

Provisioners essentially let us determine what tasks we initiate and where. For example, you could use local-exec to execute commands locally on the Terraform box, or remote-exec to execute them on a remote server via SSH or WinRM.
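To make that concrete, here is a minimal sketch of both provisioner types attached to an AWS instance; the AMI, SSH user, and key path are assumptions for illustration:

resource "aws_instance" "web" {
  ami           = "ami-13be557e"
  instance_type = "t2.micro"

  # local-exec runs on the machine where you ran terraform apply
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> created_ips.txt"
  }

  # remote-exec runs inside the new instance over SSH (or WinRM)
  provisioner "remote-exec" {
    inline = [
      "sudo yum -y update",
    ]

    connection {
      type        = "ssh"
      host        = "${self.public_ip}"
      user        = "ec2-user"                        # assumed Amazon Linux user
      private_key = "${file("~/.ssh/demo-key.pem")}"  # hypothetical key path
    }
  }
}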

The idea behind all of this is that you have one place, and one language to learn, which then works across public cloud providers. You don't need to learn, say, the AWS CloudFormation template language and then go learn another language for another cloud provider. You simply use Terraform to deploy to all of them.

How do I use it?

Let's get stuck in and walk through a very basic Terraform configuration for deploying an AWS instance. At the core of Terraform is the .tf file. This, combined with other files in the same directory or module directories, forms a Terraform configuration. There are 2 formats for Terraform files: the Terraform format or JSON. It is recommended that you use the Terraform format, which is easily readable (think Puppet DSL).

Example: Create an AWS EC2 Instance with Terraform

Note: For all activities below you will need an AWS account and will be charged by Amazon accordingly. I try to use the free tier for all demo examples.

  • Create a folder to store your Terraform configuration.
  • Open up notepad or your favorite editor. I use Visual Studio code along with the Terraform Extension.
  • Create the Terraform configuration and save it as a .tf file.

Terraform example for deploying AWS Instance

The first piece we declare is the provider, which in this case is AWS. Grab your access key and secret key, and then choose a region you want to provision your EC2 instance into.

provider "aws" {
access_key = "yourkeyhere"
secret_key = "yoursecretkeyhere"
region     = "us-east-1"

Next, we declare our new resource. In this case I am choosing to instantiate an AWS instance called "2ninjasexample1". I am going to use the Amazon AMI with ID "ami-13be557e". Finally, I'm choosing t2.micro as my instance type.

resource "aws_instance" "2ninjasexample1" {
ami           = "ami-13be557e"
instance_type = "t2.micro"

That’s it for our configuration file. Simply save it in the folder you created in step 1 and browse to that folder.
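For reference, here is the whole file with the two blocks above combined:

provider "aws" {
  access_key = "yourkeyhere"
  secret_key = "yoursecretkeyhere"
  region     = "us-east-1"
}

resource "aws_instance" "2ninjasexample1" {
  ami           = "ami-13be557e"
  instance_type = "t2.micro"
}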

  • Type terraform plan and you should see a result like the screenshot below.
    You can see that if we go ahead and run the configuration, it is going to add the AWS instance.
  • Now it's time to actually apply the configuration. Type terraform apply to go ahead and create the instance.

Terraform creates a new AWS EC2 instance as well as 2 additional files in our folder which maintain the state information.


If we examine the .tfstate file, we will see that it contains all the specific information about our AWS instance.


In particular, you can see that it has captured the AWS instance ID which you can also view from your AWS console if you select your EC2 image.

  • Finally let’s destroy the stack. Type terraform destroy. You will be prompted to confirm by typing yes.


Just like that, it is destroyed! You will also notice your state file updated to reflect this.

Hopefully at this point, you can see the power behind this tool. Stay tuned for more posts on this.




It’s ON with Turbonomic and vRO

There have been a lot of changes at VMTurbo, now Turbonomic. I believe most in the industry are aware of the bold name change. The new name is more representative of what the product does, based on the economic model it is known for. Besides that, with the latest version, Turbonomic also released vRealize Automation workflows to integrate with their product. You have to be a member of the Green Circle, which is free, and you can download them here. There are instructions on importing the workflows, setting up Operations Manager as a REST host, etc. I was excited to see this but, unfortunately, my environment only uses vRealize Orchestrator.
Below is the schema for the vRA workflow:


First is a scriptable task to gather inputs for vRA. The inputs are all vRA-specific, so I could remove these. At the end, the workflow pushes properties back to vRA, so I removed "Override vRA Settings" as well.

Inputs removed from original VMTurbo Main workflow:



My workflow ended up like this, with the vRA dependencies removed and ending with 2 scriptable tasks that convert the datastore and host strings to VC: objects. These scripts will be covered in another post.


My inputs moved from general attributes to workflow inputs: templateName, clusterName and datacentreName. In the future I will likely add a scriptable task at the beginning of the workflow to determine these, as they will come from inputs generated by my Windows or Linux Master Build workflow.

Inputs converted from attributes:


I also now have outputs for the actual VC:Datastore and VC:HostSystem objects for your clone workflow in vRO. These were created via the scriptable tasks, which take the strings returned from Turbonomic and do a lookup to match them to the vCenter objects.

Outputs created:



What's great about having this functionality from Turbonomic is that now the best host and the best datastore will be selected based on analytics from Operations Manager. I was originally picking my datastore based on the amount of free space, but now, using the REST API, I can have the least utilized host and datastore supplied to my clone workflow.

Download the modified workflows here.

I'll be going over these workflows in the upcoming webinar "Overcoming Private Cloud Challenges in Healthcare IT", September 29th at 2:00 PM EST. Register here.


2 Ninjas and Amazon Web Services

Amy and I spend a good amount of time working on external projects. In fact, we discussed at the beginning of this year what we wanted to focus on. For me, it has been wrapping up my Pluralsight course for vRO, as well as working on extending Tintri APIs to meet business use cases. For Amy, it has been staying knee-deep in automating the world at UCMC, as well as working on and discussing ideas around community and charity work that we hope to start early next year.

For the rest of this year, we are going to continue our Real World Cloud series and, given the rise of AWS, which does not seem to be slowing down, we've decided to get going on a series focused on AWS. We are going to start with the IaaS services first, then expand into the automation and service catalog discussions that we have on a day-to-day basis. After that we will continue on to gather AWS certifications. I will also be blogging about this on the Ahead blog site from a higher-level, business standpoint. There are tons of useful posts there from many of the colleagues I work with, so definitely check it out.

We have created 2 pages to organize this:

AWS Guides

AWS Solutions Architect Associate Exam

In some cases, both pages will share some of the same blog posts but hopefully this helps if you are just trying to focus on the exam.  It will all become clear as the posts start to come out in the next few months.





AWS Simple Storage Service (S3) – Fundamentals

Before diving into the other AWS services, it is highly recommended that you gather a strong background in all of the AWS Storage services and their specific use cases. In this post, we will be discussing AWS S3 specifically.

In short, S3 provides highly scalable object storage. In 2013, Jeff Barr wrote a blog post stating that Amazon S3 had reached over 2 trillion objects and was handling 1.1 million requests per second. I'd love to find an updated stat, but this in itself gives an indication of how widely used this service already is.

Object Storage – Quick Primer

For anyone not familiar, object storage provides the ability to store objects (obvious, I know). These are essentially collections of digital bits: a document, a digital photo, an XML file, etc. Object storage offers highly reliable and easily scalable storage for all these digital bits, but there is basically no structure at all. It simply provides storage, and it differs from file storage, which provides additional functionality such as update support. In a typical file system, you can append information directly to a file. In object storage, this is not the case. You can add an object and retrieve it immediately, but you can't change it. Rather, you have to update the object and then reinsert it.

You can still apply permissions and versioning, as we will see soon, but as you architect applications today, you need to consider whether or not you truly need a file system. Amazon did recently release EFS (think NAS, basically), and this can potentially satisfy your specific file use cases. It is still early days, though, and the jury is still out.

How do I use it? – Creating our first S3 Bucket

First login to your AWS console and you will see on the left hand side under “Storage & Content Delivery” the icon for S3.


You will be presented with the welcome screen to S3


The first thing to note is the term "bucket". It helps to think of a bucket basically as a folder, except that the name of the bucket is globally unique. Once someone takes a bucket name, it is not available for anyone else to use.

Simply select Create Bucket and type in a name for your new S3 bucket.

If someone else has the name already, it will error out and let you know. The name of the bucket also needs to be in lowercase.
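Tying this back to our Terraform posts: the same step could be codified with the aws_s3_bucket resource. A minimal sketch, with a hypothetical bucket name:

resource "aws_s3_bucket" "demo" {
  # Must be lowercase and globally unique across all AWS accounts
  bucket = "2ninjas1blog-demo-bucket"
}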


Once created, you will see the main S3 management screen.


You can see on the right hand side a number of options which we will come back to in subsequent posts. For now, if we click into our bucket, we will see that it is empty.



We can create additional folders inside of our bucket or simply begin to upload files at this point. If you select the Actions menu, you will also see additional options.


Let’s go ahead and upload a file. In my example, I will simply select a PNG image file as per the screenshot below.


Before we go ahead and start the upload, it is worth clicking the Set Details button.

You can see here that we have additional storage options we can apply. For now, we are going to select Use Standard Storage, but there are ways to further reduce cost if the other storage classes fit your use case. There is also an option to use Server Side Encryption.
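For reference, both of those options also exist if you upload via Terraform. A hedged sketch using the current AWS provider's aws_s3_object resource; the bucket and file names are hypothetical:

resource "aws_s3_object" "image" {
  bucket                 = "2ninjas1blog-demo-bucket"
  key                    = "example.png"
  source                 = "example.png" # local file to upload
  storage_class          = "STANDARD"    # the Standard Storage option
  server_side_encryption = "AES256"      # the Server Side Encryption option
}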

Go back and select Start Upload.


Once completed, we will see our image file appear on the left hand side.

Select Properties from the menu on the top right, and you will be able to see the object's details, including its link.


Note the link. If I put this into my web browser directly, I get the following Access Denied error.


This is because the permissions are not set to allow public access. If I go ahead and add Everyone to have Open/Download permissions as follows…


…I end up now being able to access this image publicly.
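In Terraform terms, that Everyone/Open-Download grant corresponds to a public-read ACL on the object; note that newer AWS accounts block public access by default, so the bucket's public access settings may also need relaxing. A hypothetical sketch:

resource "aws_s3_object" "public_image" {
  bucket = "2ninjas1blog-demo-bucket"
  key    = "example.png"
  source = "example.png"
  acl    = "public-read" # grants anonymous read, like the console permission above
}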



With that, our basic primer comes to an end. In the next post we will discuss the different storage types and permissions we saw above.


Living the cloudy life… #cloudlife

A few people asked me recently why some of us are using the hashtag #cloudlife and what it means. It came out of the Ahead Tech Summit presentation I was preparing with Nick Rodriguez in June. I was explaining my concept to Nick and he created this great image.


So what does it mean?

Ultimately, it comes from a belief that Cloud is about creating a true experience. This means not just changing the way customers of IT consume services via a catalog, but going that extra mile.

I'll get on to roles and more Cloud Design topics in a future post. The one thing I want to stress over and over is that our goal in creating a Cloud is to create a place people come to for IT services and leave feeling like they got something more.

If you're an IT person, you must put yourself in the developer's shoes and try to think of the pain and annoyance they actually go through when submitting a form. They wait weeks for their server to arrive, and then they still have to go to subsequent teams to get various pieces of software installed, DR options approved, extra storage allocated, and so on. Then they have to make sure that everything they did in Dev works in QA and finally in Production. A sysadmin might push out a patch or a VM template that doesn't work as it did the previous month because someone else made a change.

Follow this up with the sheer number of public Cloud PaaS services and other external services the teams wish to consume. Many of these services require security approvals and perhaps additional firewall and networking configuration.

It all adds up to a frustrated customer, which in turn ultimately affects the business's ability to innovate and grow.


The opposite is the #cloudlife experience:

Happy Customer A: “Wow, I came to this catalog and got everything I needed. BAM! Now I can create something awesome today while my idea is hot.”

Happy Customer B: “This Cloud is better than just the AWS or Microsoft Cloud. I get those features and more. Everything I want is here!”

Happy Customer C: “I think…I love this Cloud… #cloudlife”

Happy Customer D: “If I had a Cloud, it would be just like this cloud. I’m telling my friends about DevOps and #cloudlife.”


It's not just about having the best programmers and engineering the best back-end services, but about the full end-to-end experience. How you design the front-end menu, how you guide every decision the user makes, and how you get them what they need to be successful and grow the business are front and center. It takes a combination of people and skills to execute this successfully.

What does it mean in practice?

Take the example of a Developer who has deployed an environment of SugarCRM, an open source CRM tool. Great, they deployed it from their request catalog, but what if they want to synchronize data from one environment to another for testing? Previously, they would have had to put in a request for someone to back up and restore the database to the new environment. This could then involve a piece of paper being handed around between teams until the task is completed.

The alternative is an option like the screenshot below in vRealize Automation. We add an Action which is visible in the items list that gives them the ability to execute this operation with one click.

vRA Day 2

Clicking the "vRA-DevOpsTeamX-SyncData" button initiates a vRealize Orchestrator workflow. This workflow in turn connects to a Tintri storage array to initiate a Sync VM operation. The workflow will create all the appropriate change controls, VM shutdowns, storage array tasks, etc. Again, think of everything that you need to do to complete the task and provide it as a self-service option.

Essentially, the workflow would look something like this:


Other Examples…

Time permitting, some of these will turn into blog posts as well, but here are some examples of clear services you can offer to make people's lives easier.

  • Complex Environment Deployments (IaaS, PaaS, SaaS mixes)
    • This means getting everything they need. Not just a VM deployed.
  • Event Based Orchestration – e.g. AWS Lambda to SNOW, Orchestration systems etc.
  • Automated Redeployment of Environments on Schedule
  • Self Service Disaster Recovery CheckBox
  • Self Service Backups and Restores
  • Business Discovery Mapping via Parent/Child Relationships created in blueprints
  • Automated Service Account creation and deletion
  • Automated Snapshots before Patching of Systems
  • Automated Firewall Rule creation and deletion

These are just a handful of ideas.  Remember, with each one, we’re taking out the additional paperwork by automating the tasks you’d typically do in your ITIL tool like ServiceNow.

What is #cloudlife…?

It's certainly also become a #hashtag we use whenever we are working on Cloudy stuff (e.g. creating a cloud proposal while in the dentist's chair…wasn't me) or thinking about a new innovative Cloud idea while drinking a Tim Carr Starbucks Iced Green Tea (#notpropertea). Essentially, it's a way of thinking beyond our Infrastructure roles and what the requester is asking for, to create something more.

#cloudlife is about reaching for the best possible user experience, one that doesn't feel like it's forcing you into a box but instead feels refreshing and enjoyable.


vRealize Orchestrator Appliance – Guest File Operations Part 1 – (Copying a file to guest VM)

One of the things you will often find you need to do with vRO is get a file to a guest VM, or run a file inside the guest. For Windows you can use PowerShell remoting in many cases, but what if your server isn't on the network yet? Until version 5.1 we had to rely on VIX as a way to do this, but now VMware has added a number of new workflows under "Guest Operations" which are much more reliable.

vCO Guest Operations

vRO Guest Operations

“Copy file from vCO to guest” is the one I’m going to be using in this example.

First of all, copy the workflow into a sandbox area. This way you can move a bunch of the inputs to attributes and not have to key them in each time (e.g. the local administrator username, password, and test VM).

In my example, I’m going to create a text file called test.txt in a new folder under /opt called “vcofiles”.

My target machine is a Windows 2008 R2 server, where I will copy the file and place it in the C:\temp\ folder with the name “testcopy.txt”

If you run the workflow then these are my input parameters:


The problem is that if you run this you will get an error similar to this:

“No permissions on the file for the attempted operation (Workflow: Copying files from vCO appliance to guest/Scriptable task…”



In order to fix this you first need to give the correct rights to the folder and file on your vCO Appliance.

1. Login as root onto the appliance
2. Give Read/Write/Execution rights to the new folder


3. Give Read/Write rights to the Text file you made


Unfortunately we aren’t quite done yet. You also need to tell orchestrator which locations it can read/write/execute from. This involves editing the “js-io-rights.conf” file located in “/opt/vmo/app-server/server/vmo/conf”


Add the line “+rwx /opt/vcofiles/” as shown above.

If anyone isn’t too sure on the linux commands to do this:

  • Type “cd /opt/vmo/app-server/server/vmo/conf” and press enter.
  • Type "vi js-io-rights.conf" and press enter.
  • Use the arrow keys to move the cursor where you want and press the insert key
  • Press Enter and type in the line "+rwx /opt/vcofiles/"
  • Press ESC
  • Type “:wq” and press enter.

4. Now, there’s one more thing. You need to restart the vCO service for this to take effect.

Log in to the vCO configuration manager, go to the Startup Options section, and restart the service.


5. Now run your workflow and see if your text file copied across.


You can see a quick video demo of this on YouTube (apologies for the mouse pointer issue).

Thanks for reading. Let me know if you have any questions.


How to match and correlate Windows SCSI Disk IDs with VMware VMDKs

Note: This is a repost due to moving my posts from SystemsGame.com to 2ninjas1blog.com.

This post comes from a colleague of mine who couldn’t find a great resource on how to correlate the Windows Disk in Disk Management, with the Virtual Disk presented by VMware.

When all the disks are different sizes it is easy, but sometimes they are the same…how can you be sure you are expanding the right disk?

These instructions/steps should allow you to correlate Windows Disks to VMDK Disks.

  1. RDP to the Windows server in question and run this PowerShell script:
Get-WmiObject Win32_DiskDrive |
  Select-Object DeviceID, {$_.size/1024/1024/1024}, scsiport, scsibus, scsitargetid, scsilogicalunit |
  Out-File -FilePath C:\OutputPhysicalDrive.txt

This script should allow you to match the OS disks to the VMDK Disks. The output will be referenced in later steps.

Example output

$_.size/1024/1024/1024 : 9.99680757522583
scsiport : 3
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

$_.size/1024/1024/1024 : 49.9993586540222
scsiport : 5
scsibus : 0
scsitargetid : 1
scsilogicalunit : 0

$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 4
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

$_.size/1024/1024/1024 : 59.996166229248
scsiport : 2
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 2
scsibus : 0
scsitargetid : 1
scsilogicalunit : 0

$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 2
scsibus : 0
scsitargetid : 2
scsilogicalunit : 0

$_.size/1024/1024/1024 : 49.9993586540222
scsiport : 5
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

The second step is to get a list of your VMDK disk information by editing the virtual machine in question. 

The information you will be retrieving is the following (using "Hard disk 1" as an example):
Disk Name: “Hard disk 1”
Size: “60 GB”
Bus ID: 0
Disk ID: 0

The Bus ID and Disk ID come from the SCSI (X:Y) Hard Disk entry under Virtual Device Node, where:

X = Bus ID
Y = Disk ID

Enter the disk information for all VMDK disks into a table with columns for Disk Name, Size, Bus ID, and Disk ID.

Reference OutputPhysicalDrive.txt and match up any OS disks to VMDK disks that have a unique size.

For the non-unique drives, you will need to match the Windows disk scsitargetid with the VMDK Disk ID.

The first 2 in the example below are both 50GB Drives.

$_.size/1024/1024/1024 : 49.9993586540222
scsiport : 5
scsibus : 0
scsitargetid : 1
scsilogicalunit : 0

$_.size/1024/1024/1024 : 49.9993586540222
scsiport : 5
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

The next 3 are all 20GB drives.

$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 2
scsibus : 0
scsitargetid : 2
scsilogicalunit : 0

$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 2
scsibus : 0
scsitargetid : 1
scsilogicalunit : 0

$_.size/1024/1024/1024 : 19.9936151504517
scsiport : 4
scsibus : 0
scsitargetid : 0
scsilogicalunit : 0

Hope this helps anyone else having the issue. I’ll loop around and update the PowerShell script I ended up using for this soon as well.

Thank you vRad for this great guide!