So with vSphere 6.5 now GA, I decided to upgrade my lab. In my environment, I use a vCenter Server with an external Platform Services Controller (PSC), so as part of the upgrade, I have to upgrade the PSC first.
When you run the UI installer provided within the VCSA 6.5 Appliance ISO, you have the option to “Upgrade” a vCenter Server Appliance or a Platform Services Controller. The installer detects the component that you are trying to upgrade and prompts for settings appropriate to that upgrade.
So during the keynote at VMworld in Barcelona on Tuesday morning, 18 October 2016, VMware showed a demo of how a VMware Cloud infrastructure is stood up in AWS and, following that, showed how a virtual machine was migrated with vMotion into the AWS-hosted VMware Cloud. It seemed impressive. However, something's been bothering me, and I've been to the VMware booth to get an answer but came up short.
The question I have is around processor architecture. If I'm running Intel in my local vSphere environment and AWS/VMware decided to run AMD in the VMware Cloud on AWS, how would you get that vMotion migration to work? vMotion requires compatible CPU feature sets between source and destination hosts, and even EVC can't mask the differences between Intel and AMD processors. It can't work, right?
Is there an option to select the processor vendor for the newly deployed VMware Cloud on AWS?
Answers on a postcard or in the comment section below! Go!
And we have an answer!! Thank you Alex Jauch (@ajauch)!
Container technology has been around for quite a while now. Most people will by now have heard about Docker, and a lot of people are using it. What about VMware Photon? What's that? Well, I'd say it's also been around for a while; however, while people have been raving about Docker and the container revolution, VMware has been working on its own implementation of container technology, as well as products that utilise and integrate with existing container technologies such as Docker. At VMworld Europe 2016, VMware announced vSphere 6.5, and one feature that has caught my attention in this release (apart from the long-overdue vSphere HTML5 Client) is vSphere Integrated Containers, or simply VIC. At the moment I'm trying to make sense of all these technologies, how (and if) they fit together, and where you would want to use each one.
In the last 6 months, I've done quite a bit of work with vRA 6, vRA 7 and vRO. During this time, I've had to learn a lot about both products, how they interact with each other, and how they interact with other REST-based APIs, such as ServiceNow. Having been set in my ways in vRA 6, using workflow stubs to break out to vRO in order to extend vRA functionality, I was conscious of the fact that VMware will be removing .NET workflow stubs in future releases of vRA 7, and that the preferred method of extending out to vRO in vRA 7 is to use the event broker service. Also, vRA 7 uses converged blueprints, which, from an extensibility point of view, means that we have to do things slightly differently in code than what we were used to in vRA/vRO 6.
In VMware vRealize Automation 7 (vRA), blueprints are converged, rather than the single vs. multi machine blueprints that we were used to in vRA6. This presents an interesting challenge when requesting new catalog items from vRO.
In vRA6, if you wanted to request a new catalog item from vRO, you would run the "Request a catalog item" workflow and simply pass any property values along with your request, and those values would be applied to the resulting item in vRA. For instance, when requesting a new VM with 2 vCPUs, you could specify the following custom property as part of the request from vRA6:
provider-VirtualMachine.CPU.Count = 2;
In vRA7, you can still use the "Request a catalog item" workflow; however, you'll find that the "provider-<propertyName>" properties passed with the request are not honoured and have no effect on the resulting virtual machine. This happens because of the converged blueprint: you now need to specify which VM the property value is meant to be set on. It's no longer assumed that your blueprint contains only one virtual machine.
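To illustrate the difference, here's a minimal sketch of the two request payload shapes in Python. The component name `vSphere_Machine_1` and the `cpu` property key are assumptions for illustration only; in a real vRA7 request you would read the component names and property keys from the catalog item's own request template rather than hard-coding them:

```python
import json

# Hypothetical component name from a converged blueprint; in practice,
# fetch the catalog item's request template and inspect its components.
component = "vSphere_Machine_1"

# vRA6 style: a flat custom property, applied to the single machine
# that the (non-converged) blueprint provisions.
vra6_properties = {"provider-VirtualMachine.CPU.Count": 2}

# vRA7 style: properties are nested under the component they target,
# because a converged blueprint may contain several machines.
vra7_request = {
    "data": {
        component: {
            "data": {
                "cpu": 2,
            }
        }
    }
}

print(json.dumps(vra7_request, indent=2))
```

The key point is the extra level of nesting: the property no longer floats at the top of the request, it lives under the specific machine component it belongs to.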
So, you've done all the hard work to change your Hyperic Server certificate (or not). Now you browse to your Hyperic server's management page via HTTPS on port 7443 and you're presented with this uninspiring message from your browser:
I've been working intensively with the VMware vRealize product suite over the past 4 months, including Hyperic. One of the things we have to do on our current project is to replace the Hyperic server certificate whenever a new Hyperic instance is introduced into the environment. This is a relatively straightforward task, but one that consists of quite a few steps. In this blog post, I've documented exactly how to go about replacing Hyperic server certificates.
I have identified an issue in Log Insight 2.5 where alerts passed via email or to vROPS contain the following text in the message:
“Notification event – The worker node sending this alert was unable to contact the standalone node. You may receive duplicate notifications for this alert.”
I also confirmed that DNS resolution and reverse lookup functions are working as expected. I was also able to reproduce this issue successfully in a lab environment, with DNS working correctly.
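If you want to script the forward and reverse lookup checks mentioned above, a quick way is Python's standard library. This is just a generic sketch (the hostname here is a placeholder, not one of the Log Insight nodes):

```python
import socket

def check_dns(hostname):
    """Forward-resolve a hostname, then reverse-resolve the resulting IP.

    Returns (ip, reverse_name). Raises socket.gaierror / socket.herror
    if either the forward or the reverse (PTR) lookup fails.
    """
    ip = socket.gethostbyname(hostname)            # forward (A) lookup
    reverse_name, _aliases, _ips = socket.gethostbyaddr(ip)  # reverse lookup
    return ip, reverse_name

# Example: localhost should always resolve both ways.
ip, name = check_dns("localhost")
print(ip, name)
```

Running this against each node in the cluster is a fast way to confirm that both lookup directions work before chasing the issue elsewhere.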
While VMware vRealize Operations Manager makes use of a Gemfire database and vRealize Hyperic makes use of vPostgres, VMware vRealize Log Insight makes use of Cassandra. You might wonder why knowing that even matters. Well, as I've seen again this week, the database engine that drives each of these products essentially dictates the design and deployment of their environments and their limitations.
This week, we had a situation where our newly deployed Log Insight cluster wasn't performing. In fact, it was so bad that it took 20 to 30 minutes simply to log into the admin interface. Yet the CPU and memory usage counters for each of the appliances weren't even being tickled. It was a strange issue for sure, and by 5pm on Monday 31st of August, we were in the process of logging a P1 call with VMware support.
Following on from my previous blog post where I mentioned that we’ve discovered a bug in the Hyperic 5.8.4 client (on both Windows and Linux), I think it’s only fair that I share our findings. It’s a bug that we discovered whilst deploying a very large vRealize Suite (two maximum sized global clusters of vROPS, vRLI, Hyperic and vRA/vRO).
Whilst carrying out some testing in my lab around the impact of replacing SSL certificates in Hyperic, I noticed that if, for whatever reason, authentication between the Hyperic agent and Hyperic server fails, the Hyperic agent drives CPU utilisation on the client machine it's running on up to between 85% and 100%. At first I thought it was an anomaly, but I was then able to reproduce the symptoms a further 3 times in proving to VMware GSS that the issue really does exist. Long story short