VAAI & Which Virtual Disk Type: Thick vs Thin

How can you decide which virtual disk type to choose? I had a number of questions last week around vSphere 5.x storage, VAAI and the pros and cons of each disk provisioning option, so it makes sense to take the opportunity and summarise it all here to answer that question.

Firstly, we know that there are three types of virtual disk to choose from when creating one:

  • Thin
  • Thick – Lazy Zeroed (Default)
  • Thick – Eager Zeroed

The disk type we select has a direct impact on provisioning time and, in some cases, on the performance of writes to the underlying storage.

To understand the differences we need to look at how the disks are created. Thick disks are fully allocated virtual disk files: a Lazy Zeroed thick disk has its space allocated but not zeroed out upon creation, while an Eager Zeroed thick disk is both allocated and zeroed out upon creation. A Thin disk, in comparison, is not fully allocated up front (it grows as you go, saving disk space initially) and is not zeroed out upon creation. Ultimately this means that Thin and Thick Lazy Zeroed disks have to do additional work on the first write to each new block, i.e. a Thin disk must allocate space and zero it, while a Thick Lazy Zeroed disk must zero it.
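
For reference, the three types map to just two flags on a virtual disk's backing when you create one programmatically: thinProvisioned and eagerlyScrub. Below is a minimal pyVmomi sketch, assuming a vCenter address, credentials, VM name, SCSI controller key and unit number that are purely placeholders for illustration, not values from this post.

```python
# Minimal pyVmomi sketch: add a new VMDK to an existing VM, choosing the
# provisioning type via the backing flags. All names/addresses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def new_disk_spec(size_gb, disk_type, controller_key=1000, unit_number=1):
    """Build a device spec for a new virtual disk of the given provisioning type."""
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    # Thin: allocate on demand. Thick Lazy: pre-allocate, zero on first write.
    # Thick Eager: pre-allocate and zero at creation time.
    backing.thinProvisioned = (disk_type == "thin")
    backing.eagerlyScrub = (disk_type == "eagerzeroedthick")

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.controllerKey = controller_key      # assumes the first SCSI controller
    disk.unitNumber = unit_number            # assumes this slot is free

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")

# Add a 20 GB eager zeroed thick disk; use "thin" or "lazyzeroedthick" instead as needed.
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[new_disk_spec(20, "eagerzeroedthick")]))
Disconnect(si)
```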

After reading the above paragraph we could conclude that Thick Eager Zeroed disks are better for write-intensive workloads. A VMware performance whitepaper illustrates that, whilst the difference is marginal, this was indeed the case. You can read the full performance paper here.

However, that is not enough to draw a conclusion just yet! It may have been the case in 2009 with vSphere 4.0, but that was pre-VAAI. In vSphere 4.1, VMware introduced the vStorage APIs for Array Integration (VAAI) which, if you are using a storage array that supports them, can offload storage-intensive tasks to the array and therefore free ESXi host resources for other work.

[Image: vmware_vaai_image]

Below are the key features of VAAI (a sketch for checking whether they are enabled on your hosts follows the list):

  • Full Copy / XCOPY / Extended Copy – Enables the array to make full copies of data within the array without the ESXi host having to micromanage the operation. Without VAAI, the ESXi host has to broker the copy and touch every single block. With VAAI, the host can offload the task and ask the array to handle it. The net result is that the copy is faster, as the data does not need to travel up to the host and back down to the array, and you also save CPU, memory and network resources on the host. This translates into faster clones and Storage vMotions.
  • Block Zeroing / Write Same / Zero – Enables the array to zero out a large number of blocks itself, which speeds up the provisioning of virtual disks. Without VAAI, the ESXi host has to repeatedly ask the array to zero out individual blocks. With VAAI, the host asks for a whole range of blocks to be zeroed out in a single command. The net result is that you save CPU, memory and network resources on the host, and it mitigates much of the zero-on-first-write latency of the Thin and Thick Lazy Zeroed disks mentioned above.
  • ATS (Atomic Test & Set), aka Hardware Assisted Locking – A method of protecting the metadata of the VMFS file system backing the datastore. Without VAAI, when an ESXi host needs to update VMFS metadata (for example when creating a file or growing a thin disk) it locks the whole LUN with a SCSI reservation, which means that when multiple hosts access the same LUN you can see slower write operations and contention issues. With VAAI, the ATS locking mechanism is far more granular and avoids the contention caused by SCSI reservations. The net result is faster write operations to a datastore.
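
As a quick way to see whether these primitives are enabled on a host, the three map one-to-one to ESXi advanced settings: DataMover.HardwareAcceleratedMove for Full Copy, DataMover.HardwareAcceleratedInit for Block Zeroing and VMFS3.HardwareAcceleratedLocking for ATS. Here is a small pyVmomi sketch that reports them for every host, assuming an existing connected service instance "si" as in the earlier example.

```python
# Sketch: report the VAAI-related advanced settings on each ESXi host.
# Assumes an existing pyVmomi connection "si" as in the earlier example.
from pyVmomi import vim

VAAI_OPTIONS = {
    "DataMover.HardwareAcceleratedMove": "Full Copy / XCOPY",
    "DataMover.HardwareAcceleratedInit": "Block Zeroing / WRITE SAME",
    "VMFS3.HardwareAcceleratedLocking": "ATS / Hardware Assisted Locking",
}

content = si.RetrieveContent()
host_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in host_view.view:
    print(host.name)
    opt_mgr = host.configManager.advancedOption
    for key, primitive in VAAI_OPTIONS.items():
        value = opt_mgr.QueryOptions(key)[0].value  # 1 = enabled, 0 = disabled
        print(f"  {primitive:<35} {key} = {value}")
```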

The full details of VAAI are summarised in a blog post here; I also suggest you read the whitepaper on VAAI here.

Finally, to conclude and answer the question posed at the start: “How can you decide which virtual disk type to choose?”. The short answer is that it depends on the type of storage array you have and, more importantly, whether you have implemented VAAI in your environment.

If you have implemented VAAI then the performance differences are almost non-existent, especially in environments with flash arrays. In that case, the decision comes down to the resource-saving benefits of thin disks versus the predictable nature of thick disks and how comfortable you are managing over-commitment.
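
To sanity-check the "it depends on your array" part, each ESXi host reports a VAAI support status per storage device. The sketch below lists it for every LUN, again assuming an existing connected "si" service instance; it relies on the vStorageSupport property that the vSphere API exposes on ScsiLun objects, so treat it as an illustration rather than a definitive check.

```python
# Sketch: list each host's storage devices with their reported VAAI support status.
# Assumes an existing pyVmomi connection "si" as in the earlier examples.
from pyVmomi import vim

content = si.RetrieveContent()
host_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in host_view.view:
    print(host.name)
    for lun in host.config.storageDevice.scsiLun:
        # vStorageSupport is "vStorageSupported", "vStorageUnsupported" or "vStorageUnknown"
        print(f"  {lun.canonicalName:<40} {lun.vStorageSupport}")
```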

Below is a summary of the pros & cons:

[Image: vmware_disk_types – summary of virtual disk type pros & cons]