Coming from a Cisco background, this was a bit of a change. I am now using Dell Force10 switches, and I have been trying to figure out how to get LLDP to advertise the management IP to its neighbors the way CDP does. CDP does this by default, but LLDP requires non-default configuration: you must add LLDP configuration to each neighbor-facing interface. Below is an example of what that configuration would look like:
In configuration mode:
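Here is a sketch of that per-interface configuration on FTOS (the interface name is just an example; verify the exact `advertise` keywords against the command reference for your FTOS release):

```
! Enter the neighbor-facing interface and enable LLDP on it,
! then advertise the management-address TLV to neighbors.
interface tengigabitethernet 0/1
 protocol lldp
  advertise management-tlv management-address system-name
  no disable
```

Repeat this for every neighbor-facing interface where you want the management IP advertised.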
I spent most of the day today troubleshooting an error that I was getting while configuring the vCAC appliance. This error had me, VMware support, and our consultant all scratching our heads. The error that we were getting was:
Invalid “Host Settings” in the remote SSO server. Expected: ssoservername.domain.dom:7444
As it turns out, the SSO server information that you enter is case sensitive. We finally ran across a very good write-up about the issue and how to resolve it:
It would be very helpful if VMware would note this type of information in the configuration guides or at least provide that information to their support and professional services teams.
Have you tried to connect to an SMBv2 NAS appliance via CIFS, using a UNC path, from a Windows 2012 R2 client? If you have, you have most likely run across the Invalid Signature error. The reason for this error is that your NAS does not support SMB 3.0. SMB 3.0 added a feature called “Secure Negotiate”, which depends on the error responses from all SMBv2 servers being correctly signed. If the error responses are not correctly signed, the Workstation Service immediately drops the connection. Microsoft added this feature to combat man-in-the-middle attacks.
There is, however, a way to disable this functionality so that you can still use an SMBv2 NAS. All you need to do is run the following commands on the Windows 2012 R2/Windows 8.1 client machines:
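The commands below are a sketch of that workaround, run from an elevated PowerShell prompt. The registry path and value name follow Microsoft's documented Secure Negotiate workaround; verify them against current Microsoft guidance before using this in production, since it weakens protection against man-in-the-middle attacks:

```powershell
# Disable Secure Negotiate on the SMB client (documented registry workaround)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" `
    -Name RequireSecureNegotiate -Value 0 -Force

# Ensure the client does not require security signatures
Set-SmbClientConfiguration -RequireSecuritySignature $false -Force
```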
These settings disable the security-signature requirement and turn off Secure Negotiate.
The next part of the NSX deployment is to get the NSX Manager registered with vCenter. The first thing to do for this is to browse to the NSX Manager IP that you chose during the OVF deployment. You will be presented with a login screen as seen below.
VMware NSX for vSphere was announced at VMworld 2013. Surprisingly, there is not a lot of information available on how to install and configure it, so I set off on this venture and thought I would share what I learned. NSX for vSphere combines technologies from the traditional vCloud Networking and Security (vCNS) product with technologies from the Nicira acquisition. The underlying overlay protocol that NSX for vSphere uses is VXLAN, but it takes VXLAN to the next level by supporting unicast VXLAN, eliminating the requirement for multicast routing.
I will have a four-post series covering, from A to Z, how to install and configure NSX for vSphere. This first post focuses on installing the NSX Manager. The NSX Manager is the API interface to the NSX Control Cluster, and it comes in the form of an OVA template.
VMware has made its Disaster Recovery as a Service (DRaaS) offering available. It is served out of the vCloud Hybrid Service datacenters and is powered by the time-proven VMware Site Recovery Manager and the vSphere Replication appliance.
The service is intended for mid-sized companies and promises seamless integration with existing vSphere-based infrastructures. At a starting price of $835/month, the service seems poised to hit the ground running.
When looking into performance on a virtualized workload, one of the most important statistics to watch is CPU Ready Time.
What is CPU Ready Time, you ask?
Well, CPU Ready Time is the amount of time that a vCPU spends waiting for a time slot on a physical CPU core. In other words, if you are seeing high CPU Ready Time, your host is experiencing significant CPU contention and may be oversubscribed.
There is a great VMware KB article that explains how to calculate CPU Ready % from the CPU Ready summation values found on the performance tab in the vSphere Client:
Converting between CPU summation and CPU % ready values
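As a quick illustration of that conversion (a minimal sketch; the 20-second interval applies to realtime charts, and other chart rollups use different intervals):

```python
def cpu_ready_percent(ready_summation_ms, interval_seconds):
    """Convert a CPU Ready summation value (in milliseconds) into a
    percentage of the chart's sampling interval."""
    return (ready_summation_ms / (interval_seconds * 1000)) * 100

# Realtime stats sample every 20 seconds, so a summation of
# 1000 ms works out to 5% CPU Ready for that vCPU:
print(cpu_ready_percent(1000, 20))  # 5.0
```

Anything consistently above a few percent per vCPU is usually worth investigating.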
Sizing VMs appropriately is very important. Just because a physical server has 8 cores does not necessarily mean that its VM counterpart needs as many vCPUs. The best approach is to start small and grow as needed. An oversized VM can actually perform worse than an undersized one.
We all know that VMware NSX brings L2–L4 network services up into the logical space: Layer 2 switching, distributed Layer 3 routing, and distributed firewalling can all now be processed within the hypervisor.
Yeah so? That is so 2013.
What is 2014 going to bring?
What else did NSX enable that is not as broadly spoken about? Network visibility for the hypervisor.
Once you have lifted your network up into the logical space (software), you have end-to-end visibility into the logical traffic flows. When DRS is given insight into these flows, it can use that information (in addition to traditional compute-level information) to place VMs in the environment more intelligently. Enter Network DRS. A very simplistic scenario: you have two VMs on separate hosts that generate a significant amount of traffic between each other. Since DRS would now have insight into the logical network traffic flows, it would detect this and migrate one of the VMs onto the same host as the other. Instead of going out onto the physical wire, the traffic would then move between the two VMs at bus speed.
This is just one very brief example of the exciting things that NSX enables. VMware very briefly mentioned this as a technology preview at VMworld 2013. I, for one, am very excited to see what progress has been made to bring this to a generally available reality.
We are now broadcasting to twitter on @vbootstrap.
After spending the last week in training, I was doing some research on memory management in ESXi. In doing so, I ran across one of the most amazingly clear depictions of memory flow inside ESXi:
If you want to get it in PDF format, there is a downloadable version available on the VMware KB article:
VMware vSphere 5 Memory Management and Monitoring