Category Archives: vSphere 6

vSphere 5.5 or higher and Reliable Memory Technology

As we know, ECC (Error-Correcting Code) memory is a great feature for tackling soft errors in RAM without causing the OS to fail. However, ECC only protects against a single soft error in a memory block at a time; multiple soft errors in a single memory block, or hard errors in one or more cells of main memory, will surely cause the OS kernel to panic. That results in longer downtime: either working through a variety of steps to identify and replace the faulty module, or (in the case of multiple soft errors) resetting the OS along with all its application services. If this is a virtualised environment running VMware ESXi or another hypervisor, there will be multiple VMs on the host, and they all go down along with it.
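To see why a single soft error is recoverable while two are not, here is a toy Hamming(7,4) single-error-correcting code in Python. Real server DIMMs use much wider SECDED or chipkill-style codes, so this is only a sketch of the principle, not what ECC hardware actually runs:

```python
def hamming_encode(d1: int, d2: int, d3: int, d4: int) -> list:
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c: list) -> list:
    """Correct at most one flipped bit, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 + 2 * s2 + 4 * s3   # syndrome: 0 means no error seen
    if error_pos:
        c[error_pos - 1] ^= 1          # flip the suspect bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming_encode(1, 0, 1, 1)
word[4] ^= 1                           # one soft error: corrected
assert hamming_decode(word) == [1, 0, 1, 1]

word = hamming_encode(1, 0, 1, 1)
word[0] ^= 1
word[4] ^= 1                           # two errors: mis-corrected, data lost
assert hamming_decode(word) != [1, 0, 1, 1]
```

The two-error case is exactly the situation described above: the code "corrects" the wrong bit, and the corrupted data reaches the OS.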

This is where Reliable Memory Technology (RMT) comes into the picture. It is a hardware feature that works together with a supported OS (such as ESXi 5.5 or higher). If a multi-bit soft error or a hard error occurs in a DIMM during ongoing operations, RMT detects it and takes corrective action in a way that does not trigger an OS kernel panic.

For example, say there is a hard error in one of the DIMMs. The system detects it and marks the faulty cell, along with some cells around it, as unusable. Current OS operations continue, and after the next reboot the OS no longer sees those faulty cells, because the hardware is not even presenting them anymore.
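The retirement behaviour described above can be modelled very roughly in a few lines. The cell numbers and the size of the retired neighbourhood are made-up illustrations, not what any real DIMM controller uses:

```python
def retire_region(faulty_cell: int, neighbourhood: int, retired: set) -> None:
    """Mark a faulty cell and the cells around it as unusable."""
    for addr in range(faulty_cell - neighbourhood, faulty_cell + neighbourhood + 1):
        retired.add(addr)

def presented_memory(total_cells: int, retired: set) -> list:
    """After reboot, the hardware presents only the non-retired cells to the OS."""
    return [addr for addr in range(total_cells) if addr not in retired]

retired = set()
retire_region(5, 1, retired)   # hard error detected at cell 5; 4-6 retired
```

After the "reboot", `presented_memory(10, retired)` returns every cell except 4, 5 and 6, which is the effect the paragraph above describes: the OS simply never sees the faulty region again.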

RMT proves really valuable for minimising downtime caused by memory-fault-related kernel panics, and for avoiding replacement of a whole memory module because of hard errors in a DIMM.

If you have vSphere 5.5 or higher with an Enterprise or Enterprise Plus edition license, and your hardware supports it, you can see reliable memory on your ESXi host from the shell; for example, `esxcli hardware memory get` reports a Reliable Memory size alongside the host's total physical memory.


Reference: Dell technical white paper on Reliable Memory Technology (RMT)


Long Distance vMotion in vSphere 6

With the help of a dynamically resized network socket buffer, vSphere 6 is capable of supporting vMotion of a VM across vCenter Servers located far apart. A network round-trip latency of up to 150 ms is tolerated/supported.

The basic requirements to achieve long-distance vMotion are as below:

1) L2 stretched VM network
2) Multi-site gateway support
3) Secure vMotion network between the two sites

Use cases of Long Distance vMotion

1) Multi-site load balancing
2) Permanent migration
3) Follow-the-sun scenario support
4) Disaster avoidance

VMware pushes the envelope with vSphere 6.0 vMotion


vSphere 6, Clear state

Memory management in vSphere 5 works with the following four states.

High State: 100% of minFree available (<100% and >=64%, TPS is in action)
Soft State: 64% of minFree is available (<64% and >=32%, Ballooning is in action)
Hard State: 32% of minFree is available (<32% and >= 16%, Compression, Page Swapping)
Low State: 16% of minFree is available (<16%, Page Swapping)

ESXi 5.x Memory state chart

While in vSphere 6 this has changed: a new memory management state called Clear has been introduced, which is equivalent to the previous High state. So in ESXi 6 the following memory management states are available.

High State: 300% of minFree available (<300% and >=100%, TPS is in action)
Clear State: 100% of minFree available (<100% and >=64%, TPS is in action; this is the previous version's High state)
Soft State: 64% of minFree is available (<64% and >=32%, Ballooning is in action)
Hard State: 32% of minFree is available (<32% and >= 16%, Compression, Page Swapping)
Low State: 16% of minFree is available (<16%, Page Swapping)

ESXi 6 Memory states chart

In short, an additional state called Clear has been introduced, and the High state threshold is tripled compared to the last version: it is now 300% of an ESXi host's minFree value. This is purely so that TPS gets a chance to be triggered long before the host hits the Clear state, and can carry on acting on memory accordingly.
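Read as a simple classifier, with each state's headline percentage taken as the boundary at which that state is entered, the thresholds above can be sketched like this. This is one possible reading of the published thresholds, not ESXi's actual implementation, and the exact internal boundaries may differ:

```python
def memory_state(free_mb: float, min_free_mb: float) -> str:
    """Rough ESXi 6 memory-state classifier: free memory vs. minFree."""
    pct = 100.0 * free_mb / min_free_mb   # free memory as a % of minFree
    if pct >= 100:
        # High state; TPS already starts working once free drops below 300%
        return "high"
    if pct >= 64:
        return "clear"   # TPS in action (the old High state)
    if pct >= 32:
        return "soft"    # ballooning kicks in
    if pct >= 16:
        return "hard"    # compression and page swapping
    return "low"         # page swapping
```

For instance, a host whose free memory is 70% of minFree would land in the Clear state, where TPS is still the only reclamation technique in play.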

Reference: Duncan Epping's blog post about vSphere 6 memory states, and the vSphere 6 Resource Management Guide


ESXi 6 – minFree value

minFree is associated with the High memory state of ESXi 6 and earlier versions.

If you are wondering how ESXi calculates the minFree value, here is the math behind it.
The value of minFree is 899 MB for an ESXi host with up to 28 GB RAM.
For a host with more than 28 GB RAM, minFree is calculated as below:
minFree = 899 MB (for the first 28 GB of RAM) + 1% of the remaining RAM capacity in that host.

For example, an ESXi host with 100 GB RAM has minFree ≈ 1636 MB:
899 MB + 737 MB (1% of the remaining 72 GB, taking 1 GB = 1024 MB)
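The calculation above can be sketched in a few lines, taking 1 GB as 1024 MB (if you treat 1 GB as 1000 MB instead, the result lands a little lower):

```python
def min_free_mb(total_ram_gb: float) -> float:
    """ESXi minFree: 899 MB for the first 28 GB of host RAM,
    plus 1% of whatever RAM remains beyond 28 GB (1 GB = 1024 MB)."""
    BASE_MB = 899.0
    if total_ram_gb <= 28:
        return BASE_MB
    return BASE_MB + 0.01 * (total_ram_gb - 28) * 1024

# A 100 GB host: 899 + 0.01 * 72 * 1024 = 899 + 737.28 ≈ 1636 MB
print(min_free_mb(100))
```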


Modify DCUI access list

In vSphere 6 and prior, by default root is the only user who has access to the DCUI, even in lockdown mode. If I wish to add additional users to this list, how do I do it?

Using vSphere Web Client

Select your host, go to Manage -> Settings -> Advanced System Settings and locate the DCUI.Access parameter, as shown in the following screen:

[Screenshot: Advanced System Settings showing the DCUI.Access parameter]

Now click on the Edit button.

It will bring up a screen like the following, where you use a comma as a separator to add more usernames. When done, click OK.
[Screenshot: editing the DCUI.Access value]


vSphere 6, Lockdown mode

What is Lockdown mode?

To improve the security of an ESXi host that is being managed by a vCenter Server, we enable lockdown mode on that host. This restricts any operations from being carried out on the host directly; you are forced to go via the vCenter Server to manage that host.

Now, what do I mean by doing something directly on that host?

  • Using the vSphere Client to connect to the host directly and creating a new VM, removing an existing VM, powering on/off/suspending a VM, or making any configuration change on the host.
  • Using vCLI to log in directly to the host to do any of the above activities.
  • Using an SSH client to log in directly to the host and doing any of the activities listed in the first bullet point.

Since the host is managed by vCenter, and vCenter has its own central access and authentication mechanism in place, why would I want the above things done directly? I would rather use the vCenter Server's access and authentication mechanism for centrally managed security, where I create roles and assign permissions on vCenter inventory objects to user/group accounts.

So what does vSphere 6 have to offer when it comes to enabling lockdown mode?

To reach this setting, log in to your vSphere Web Client 6.0, go to the Hosts and Clusters inventory, select your host, then go to Manage -> Settings -> Security Profile -> Lockdown Mode, as visible in the following screenshot:

[Screenshot: Security Profile showing the Lockdown Mode setting]


Clicking on the Edit button shows the following screen with three options:

[Screenshot: Lockdown Mode edit dialog]

  1. Normal lockdown mode
  2. Strict lockdown mode
  3. No lockdown mode (disabled)

Along with that, there are the Exception Users list and the list of users who have access to the DCUI.

1) Normal lockdown mode

  • This is the same as in versions of vSphere prior to 6.0. Enabling lockdown mode leaves vpxuser as the only user account with full privileges on the host; the rest of the user accounts no longer have any privileges. The exception is root, which is part of DCUI.Access, so the root account can still access the DCUI in case we lose the connection to vCenter.

2) Strict lockdown mode

  • Here the DCUI service on the host is also stopped, so now even the users in the DCUI.Access list no longer have any control, because the DCUI is not running.

In the above two cases, the Exception Users list plays a specific role: those users will still have access to the SSH shell (technical support mode). In the following screenshot I have added the root account to the Exception Users list. If I do that on all the ESXi hosts, then I don't have to worry about disabling lockdown mode when I want to connect to a host directly via SSH.

[Screenshot: Exception Users list with the root account added]

VMware KB 1008077


VMware vCenter 6.0 – deployment

What I like most about it is that the Windows-based vCenter Server and the Linux-based appliance have the same set of features, and the way services are arranged in two parts: Platform Services Controller (PSC) and vCenter Server (Management Node).

It supports an embedded-PSC deployment mode, which is just one VM/physical host with both the PSC and vCenter Server in it. This suits smaller environments and less complex management.

Compared to that, a distributed deployment of vCenter Server goes as follows:
1) PSC deployment on a separate host
2) vCenter Server deployment as the Management Node

We can have one PSC with multiple vCenter management nodes, and to scale out later we can deploy more PSCs in Enhanced Linked Mode.

The major deployment difference between the previous version, vCenter 5.5, and vCenter 6.0 lies in distributed deployment. In the previous version we could install vCenter SSO, Inventory Service, the vCenter Server service, the Web Client and so on, each on separate machines. In a vCenter 6 distributed deployment there are only the PSC and the Management Node: the PSC holds SSO, the Lookup Service, VMCA, VECS and the License Service, while the Management Node holds vCenter Server, Inventory Service, Auto Deploy, Dump Collector and Syslog Collector. In short, we cannot separate these services onto more machines beyond the two roles I have described.

I would strongly advise you to refer to the vSphere Installation and Setup Guide at the following URL:
vSphere 6 Documentation Center