I am honoured to have been given the opportunity to be on the legendary Mike Laverick’s chinwag last night. Mike and I hooked up our webcams and logged onto Skype for a relaxed chat about some of the topics affecting virtualisation enthusiasts today. In the session we talked about:
How I got into Virtualisation;
Our home lab environments (Mike is quite serious when it comes to his lab!);
The challenges that new blade systems such as HP Matrix and Cisco UCS bring;
A discussion on NFS, iSCSI and FC;
VMware’s plans to discontinue the Service Console version of ESX, in favour of ESXi;
The Chinwag is available here: http://www.rtfm-ed.co.uk/2010/03/25/chinwag-with-mike-and-rynardt-spies-episode-08/
I would also recommend subscribing to Mike Laverick’s podcast: http://itunes.apple.com/us/podcast/mike-laverick-podcasts/id356669479
I've been passed a press release regarding executive changes at PHD Virtual. The full press release follows below:
Virtual Machine Backup Leader, PHD Virtual, Names Thomas Charlton Chairman and CEO
The Pioneer of Virtual Backup Appliances Adds Technology Management Expert to Corporate Team
MOUNT ARLINGTON, N.J. – March 10, 2010 — PHD Virtual Technologies, award winning provider of esXpress VM Backup, the fastest multi-VM backup and restore solution on the market, today announced that Thomas Charlton has been appointed Chairman and CEO by the PHD Virtual board of directors. Charlton has more than 20 years of leadership experience in emerging technology ventures, leading past companies to increased profitability and successful acquisitions.
“We are pleased to add Thomas to PHD Virtual’s corporate structure,” said Michael Triplett, managing director, Insight Venture Partners and PHD Virtual board member. “Thomas brings a wealth of company management expertise and innovation to PHD Virtual from his years of experience. His amazing track record in corporate growth and visionary thinking will help continue PHD Virtual’s dramatic growth.”
Prior to joining PHD Virtual, Charlton was the CEO of multiple software companies, including Shunra Software (network emulation and appliances), VoiceGenie Technologies (Voice XML speech platform) and Trailblazer Systems (eCommerce EDI software). Responsible for each company’s strategic direction, revenue growth, profitability and global expansion, Charlton led Shunra and VoiceGenie to profitability, and led VoiceGenie and Trailblazer Systems to successful acquisitions by Alcatel and Nu Bridges respectively. Charlton also served as CEO at Tidal Software (enterprise job scheduling), which was recently acquired by Cisco Systems, Inc.
“I am excited about the opportunity that PHD Virtual represents based on the innovative technology it has built to address the growing data protection needs of the virtualization market,” said Charlton. “Customers with virtualized environments cannot adequately protect their growing information through traditional data protection solutions. Virtual server environments need technology that has been purpose-built to meet the unique requirements of virtual machines. PHD Virtual is the only technology to be designed specifically for a virtual environment with the performance and scalability enterprise customers require. It also delivers the unique distinction as a virtual solution that integrates easily with a customer’s physical storage environment for true end-to-end data protection.”
About PHD Virtual Technologies
The fastest multi-VM backup on the market and pioneer of Virtual Backup Appliances (VBAs), PHD Virtual Technologies has been transforming data protection for VMware since 2006. Its award-winning data protection solution, esXpress, is used today by more than 2,000 enterprises worldwide to achieve scalable, high availability and cost effective backup and restore solutions for VMware. esXpress was awarded Best of VMworld Finalist for 2009. In 2008, esXpress was named "Data Protection Product of the Year" by SearchServerVirtualization.com. PHD Virtual also provides a suite of free, virtualization utilities to assist with the administration and management of virtualized environments. PHD Virtual supports global resellers through its Channel Xpress partner program and is a proud VMware Technology Alliance Partner. For more information, please visit www.phdvirtual.com.
This is more of a note for future reference rather than a blog post.
I recently had to replace a RAID-10 member disk as the original disk had developed bad sectors and was causing mostly read-related problems in the array. (That’s a whole other story in its own right and I don’t have time to get into it now.) However, when I tried adding the replacement disk to the server, I found that the disk had a GPT partition table and not an msdos partition table, unlike the other three members in the RAID array. I was therefore unable to add the disk “as-is” to the RAID array, as all disks are required to have the same partition table type. I needed to remove the GUID Partition Table and replace it with an msdos partition table.
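For the record, a minimal sketch of the procedure. The device name /dev/sdX and array name /dev/md0 are placeholders for illustration; double-check which disk you are pointing at before running this, as mklabel destroys the existing partition table:

```shell
# Replace the GPT with an msdos disklabel (destructive - verify the device first!)
parted -s /dev/sdX mklabel msdos

# Recreate a partition spanning the disk and flag it as a RAID member
parted -s /dev/sdX mkpart primary 0% 100%
parted -s /dev/sdX set 1 raid on

# Add the new partition back into the array and let it resync
mdadm --manage /dev/md0 --add /dev/sdX1
```

In practice the new partition should match the size and offset of the existing members exactly; dumping the layout of a healthy member with `sfdisk -d` and piping it into the new disk is another way to achieve that.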
Over the past few weeks I’ve heard a whole lot of arguments around vCenter design considerations. A few of the questions asked were:
- Do I install vCenter on 32 bit or 64 bit?
- vCenter as a physical or Virtual machine?
- vCenter Database – Local or Remote?
- Placement of the Update Manager Server and Database
Before I dig into the vCenter design topic, I think it would be good to put some perspective on this post and why I’ve decided to blog on this. Last week I attended a meeting with some fellow virtualisation consultants, and one of the topics raised was finding a common standard practice between us regarding vCenter Server design, specifically the “default” stance among the consultants with regard to the placement of the vCenter server and whether it should be a physical or a virtual machine. Some consultants were in favour of the idea of a default stance and others were against it, stating that the decision of vCenter being hosted on a physical or virtual machine comes down to the circumstances of each consultancy engagement. Thinking back now, I don’t think we came to an agreement in the end.
This post is basically my opinion on vCenter design, and the steps that I take in deciding what my infrastructure design will look like.
I've received information about Novell’s intention to include support for VMware vNetwork Distributed Switches in vSphere. Currently, when performing migrations to vSphere using Novell PlateSpin Migrate, the tool fails to properly detect VMware vNetwork Distributed Switches, preventing any migration operation from using them.
Novell product management has tentatively scheduled vDS support for June 2010.
If you are going to be using PlateSpin Migrate to perform migrations in a vSphere environment that utilises vDS networking, PlateSpin Migrate will require a standard virtual switch with at least one port group.
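Until vDS support arrives, one workaround is to create a temporary standard vSwitch on the target host as a landing zone for the migration, then move the VM onto the vDS afterwards. A sketch using the ESX service console commands (the vSwitch, port group and uplink names here are placeholders, not anything PlateSpin mandates):

```shell
# Create a temporary standard vSwitch for PlateSpin Migrate to target
esxcfg-vswitch -a vSwitch1

# Add a port group for the migrated VM to land on
esxcfg-vswitch -A "PlateSpin-Migration" vSwitch1

# Uplink a spare physical NIC so the migrated VM has network connectivity
esxcfg-vswitch -L vmnic2 vSwitch1
```

Once the migration completes, reattach the VM’s NIC to the appropriate dvPortGroup and remove the temporary vSwitch with `esxcfg-vswitch -d vSwitch1`.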
As I have now rebuilt my Openfiler 2.3 iSCSI box, I thought it would be wise to document the procedure, as I have installed Openfiler on a USB memory stick. This is something I’ve wanted to do for a while now. Basically, I’m trying to cut back on the number of hard disk drives in my environment. I therefore decided to install Openfiler on a USB memory stick instead of another hard drive. I could then run four 750GB SATA drives in RAID 10 and leave the Openfiler OS to run on the USB stick.
As most servers can boot from USB, I didn’t expect any issues with installing and booting Openfiler from USB. However, by default Openfiler doesn’t load the USB storage drivers when it boots, so you’ll have to tweak the initrd image in order to boot from USB.
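The gist of the tweak is rebuilding the initrd with the USB modules baked in. A rough sketch, assuming a Red Hat-style mkinitrd and the usual 2.6 kernel module names (usb-storage plus the ehci-hcd/uhci-hcd host controller drivers - check lsmod on your own build, as the exact names are an assumption on my part):

```shell
# Rebuild the initrd with the USB host controller and storage modules included,
# so the root filesystem on the USB stick can be found at boot time
mkinitrd --with=ehci-hcd --with=uhci-hcd --with=usb-storage \
    /boot/initrd-$(uname -r).usb.img $(uname -r)
```

Then point the kernel entry in /boot/grub/grub.conf at the new image. Some systems also need a short sleep added to the initrd’s init script to give the USB bus time to enumerate before the root device is mounted.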
I am sure many of you have noticed that this site has been unavailable at times during the last two or so weeks. This is because I’ve been plagued with problems relating to my iSCSI SAN. The annoying thing is that the problems only started when I decided to move from the stable release of vSphere 4, running ESX Server and not ESXi, to a beta release of ESXi. This has now made troubleshooting the issues more complex as I’m not quite sure whether the issues are related to the new beta version or if it’s simply down to my iSCSI SAN, which is running Openfiler 2.3.
The highlight of the day was my wife’s statement: “If AOL, Google, Yahoo and those people can keep their systems online, why can’t you?” Well, let’s see... The difference is that my solution is a few hundred £££, not millions! You get what you pay for!
So, earlier today I decided to install the beta version of ESX rather than ESXi, but the problems still seem to be there. At the moment, I’m working on a slow and painful plan to move all the data (and we’re talking TBs here) from the iSCSI solution to a NAS. This will give me reduced performance, but it will at least allow me to rebuild my iSCSI SAN. I will also be going back to the stable release of ESX 4.0 for this environment and do my beta testing somewhere else (maybe in the solution centre at work). I do apologise if www.virtualvcp.com is down at times, but I’m working as hard as I possibly can with a limited budget to resolve the issues asap.
So you're designing a new Virtual Infrastructure on VMware, right? Ok, one of the first decisions that your client will have to make is whether to virtualise on VI3 or vSphere. At this stage I'd say it would be a rather silly move to go with VI3.5, as VMware vSphere 4 GA has been available for quite some time now. However, I still see new designs based on VI3.5 being signed off. So why would I rather go for vSphere 4 and not VI3.5? Here are some of my reasons:
We all know that vSphere is as stable for production as VI3.5, if not more so.
Although vSphere 4 has more bells and whistles than VI3.5, it can still do what VI3.5 does. It just does it, well, better than VI3.5 in my opinion.
As people have learnt with ESX 2.5 when VI3 was released, you'll have to upgrade eventually. Sooner or later, you'll have to upgrade from VI3.5, so why do all the work twice? Why build a VI3.5 solution only to upgrade to vSphere 4 eventually anyway?
I'm not saying that you should go with the latest release, in fact, my policy is to always hold off one or two months before upgrading to the latest release of anything.
Well, ok, so now you have decided to go with vSphere right? Here's the next question... Do I run a 32-bit or 64-bit OS for my vCenter server? Do I install Windows 2003 32-bit or Windows 2003 64-Bit? Or, do I install Windows 2008 R2, which is 64-bit anyway? Now, I may be able to point you in the right direction here. As I'm bound by non-disclosure agreements for most of the information I have from VMware, I won't be able to say too much about anything I've been working with in the past few weeks. However, the purpose of this post is not to help you design a virtual infrastructure that will work for you today, but to help you design an infrastructure that will work for you today, tomorrow, and that will work for and with you when the time comes to upgrade to the next generation of VMware's Datacentre Virtualisation product. So, here's a tip, and probably the whole idea of this post: WHEN DESIGNING A NEW VIRTUAL INFRASTRUCTURE, BE SURE THAT YOU CHOOSE A 64-BIT WINDOWS OPERATING SYSTEM FOR YOUR VCENTER SERVER DEPLOYMENTS AS IT WILL SAVE YOU A LOT OF TIME AND HASSLE IN THE NEXT YEAR!
I have just downloaded and deployed CapacityIQ and it all went fine until I actually decided to register my vCenter server with the appliance, only to find out that the newest product by VMware does not even support vCenter 4, or in fact vSphere! That will teach me to read the release notes before actually bothering to try something new. This is what the release notes have to say:
CapacityIQ supports VirtualCenter 2.5, Update 4 and Update 5, managing hosts running ESX Server 3.0.2 through 3.5. CapacityIQ 1.0 does not support VMware vSphere 4.0 or vCenter 4.0
Am I dreaming? What's going on here? VMware, why did you even bother? Heck, why did I even bother?
VMware has yet again delivered another value-add component for vCenter. vCenter CapacityIQ provides capacity management capabilities for virtualised data centre and/or desktop environments. The product integrates with vCenter Server, ensuring that your virtualised capacity is always predictable and efficiently used.
The product website states:
“VMware vCenter CapacityIQ balances business demand with IT supply, without compromising performance, availability and security. With CapacityIQ, your IT infrastructure is guaranteed to have sufficient capacity to meet any business service level agreements.”
Once I have had a good play with CapacityIQ (which I intend on doing sometime this week), I will report back with my review of the product.
More information on vCenter CapacityIQ can be found at: http://www.vmware.com/products/vcenter-capacityiq/