White Paper

VMware Horizon View on SUPERMICRO Virtual SAN Ready Node with SanDisk® Flash Storage

This white paper demonstrates and discusses the SUPERMICRO all-flash Virtual SAN Ready Node, a hypervisor-converged infrastructure that fits the needs of today's virtual desktop infrastructure deployments. The paper also shares the results of full-clone and linked-clone virtual desktop infrastructure deployment scenarios running on SUPERMICRO TwinPro2 servers, the two principal deployment models for virtual desktop infrastructure.

Introduction

The SUPERMICRO TwinPro™ Solution architecture builds on SUPERMICRO’s proven Twin technology to provide outstanding storage throughput, networking, I/O, memory, and processing capabilities in a 2U server, allowing customers to further optimize SUPERMICRO solutions for their most challenging IT requirements.

Optimized for high-end Enterprise, Data Center, Hyper-Converged and Cloud Computing environments, the SUPERMICRO TwinPro Solutions are designed for ease of installation and maintenance with high quality for continuous operation at maximum capacity. The resulting benefit is compelling Total Cost of Ownership for customers seeking a competitive advantage from their data center resources.

SanDisk SAS-based flash storage has long helped customers realize new levels of technical, operational and financial efficiencies in virtualized environments. This storage has enabled:

  • Scaling up of Virtual Machine (VM) performance
  • Increased VM densities
  • Higher productivity and service levels

When a VMware® all-flash Virtual SAN™ is combined with Horizon View software on SUPERMICRO TwinPro2 servers, customers have one of the most cost-effective, high-performing solutions for their client virtualization and virtual desktop infrastructure needs.

VMware Virtual SAN is software-defined storage for VMware vSphere. The all-flash Virtual SAN solution clusters server-attached SAS SSDs, using mixed-use SSDs as the flash tier for caching and read-intensive SSDs for capacity. The result is a flash-optimized, highly resilient shared datastore designed for virtual environments.

VMware Horizon View enables users to access all their virtual desktops, applications and online services through a single workspace. Horizon View is the virtual desktop infrastructure platform for vSphere; it runs user desktops as virtual machines on top of ESXi hosts.

This white paper discusses the benefits of the solution described above. A key benefit is the high level of responsiveness of the virtual desktops and applications as measured by response times.

 

Executive Summary

Virtual desktop infrastructure has been adopted extensively by enterprises in the financial, healthcare, engineering, education, and other sectors. As virtual desktop infrastructure is widely deployed, new challenges are emerging. With workforce globalization and desktop consolidation in data centers, user demands and expectations have changed. The boot storm is no longer simply a 9 a.m. phenomenon in a particular time zone, nor is desktop access confined to certain hours. The virtual desktop infrastructure environment now needs to be up and running 24x7, with the promise of consistent, predictable performance under any condition.

With the adoption of cloud deployment, virtual desktop infrastructure needs have become more elastic in nature. The environment grows or shrinks rapidly, and traditional storage approaches are not a good fit for these demands.

These challenges call for elastic, scalable, and predictable performance in a pre-configured environment. This white paper demonstrates and discusses the SUPERMICRO all-flash Virtual SAN Ready Node, a hypervisor-converged infrastructure that fits these needs well. The paper also shares the results of full-clone and linked-clone virtual desktop infrastructure deployment scenarios running on SUPERMICRO TwinPro2 servers, the two principal deployment models for virtual desktop infrastructure.

The application response times for virtual desktop infrastructure desktops are listed below. They show how robust the solution is, as the response times are well below the thresholds.

Linked-Clone Desktop Response Time:
CPU-sensitive applications: 95th percentile: 0.53 seconds (threshold <1 sec.)
CPU- and disk-sensitive applications: 95th percentile: 3.10 seconds (threshold <6 sec.)

Full-Clone Desktop Response Time:
CPU-sensitive applications: 95th percentile: 0.58 seconds (threshold <1 sec.)
CPU- and disk-sensitive applications: 95th percentile: 3.23 seconds (threshold <6 sec.)

 

Solution Summary: SUPERMICRO Virtual SAN Ready Node

As VMware defines it, “Virtual SAN Ready Node is a validated server configuration in a tested, certified hardware form factor for Virtual SAN deployment, jointly recommended by the server OEM and VMware. Virtual SAN Ready Nodes are ideal as hyperconverged building blocks for larger data center environments looking for automation and a need to customize hardware and software configurations.”

SUPERMICRO and SanDisk jointly tested an all-flash VSAN Ready Node. Below are the configuration details.

Figure 1: All-flash VSAN Ready Node using a SUPERMICRO Server and SanDisk SAS SSDs

 

In this Ready Node, the following tasks were done:

  • Installed and configured VMware Horizon View with the virtual desktop infrastructure solution.
  • Executed a virtual desktop infrastructure workload using View Planner, VMware’s workload generator tool.
  • Created two types of desktops: Linked Clone and Full Clone.
  • Validated both deployments, since customers typically compare the performance of both models before making a deployment decision.
  • Used a 60 GB Windows 7 desktop image for validating performance.

 

Below are the application response times for virtual desktop infrastructure desktops under test.

Full-Clone Desktops (230 users) Response Time:
CPU-sensitive applications 95th percentile: 0.58 seconds (threshold target of <= 1 second)
CPU-and disk-sensitive applications 95th percentile: 3.23 seconds (threshold target of <= 6 seconds)

Linked-Clone Desktops (730 users) Response Time:
CPU-sensitive applications 95th percentile: 0.53 seconds (threshold target of <= 1 second)
CPU- and disk-sensitive applications 95th percentile: 3.10 seconds (threshold target of <= 6 seconds)

The above application response times were measured using VMware View Planner, which provides the 95th-percentile value at the end of the test run and generates a report. Individual application response times are shown in the Test Results section.
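View Planner reports these 95th-percentile values automatically at the end of a run. As a minimal illustration of how such a percentile can be derived from raw per-operation timings (a simplified sketch, not View Planner's own implementation, and the sample values are hypothetical), consider the following Python snippet:

    # Minimal sketch: computing a 95th-percentile response time from raw
    # per-operation latencies, similar in spirit to the summary View Planner
    # reports. The sample values below are illustrative, not measured data.

    def percentile(samples, pct):
        """Return the pct-th percentile using nearest-rank on sorted samples."""
        ordered = sorted(samples)
        rank = max(1, int(round(pct / 100.0 * len(ordered))))
        return ordered[rank - 1]

    # Hypothetical Group A (CPU-sensitive) response times in seconds.
    group_a = [0.31, 0.42, 0.38, 0.55, 0.47, 0.52, 0.44, 0.58, 0.49, 0.36]

    p95 = percentile(group_a, 95)
    print(f"95th percentile: {p95:.2f} s (threshold <= 1 s)")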

Scaling the number of users depends on the storage capacity of the Ready Node: with 14 TB of usable capacity, scale-up can reach 230 Full Clone users. Linked Clones, however, share a replica base image and consume far less capacity per desktop, so they can accommodate more users. Additional storage can be added to increase the density of virtual desktop infrastructure users.
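The full-clone figure follows from simple capacity arithmetic. A back-of-the-envelope sketch, using only numbers quoted in this paper (the 60 GB master image and roughly 14 TB of usable capacity) and ignoring swap, snapshot, and slack-space overheads:

    # Back-of-the-envelope full-clone sizing for this Ready Node.
    # Figures come from this paper: ~14 TB usable Virtual SAN capacity and a
    # 60 GB Windows 7 master image; real deployments also need headroom
    # for swap files, snapshots, and slack space.

    TB = 1000  # GB per TB (decimal, as marketed capacities are quoted)

    usable_capacity_gb = 14 * TB      # usable all-flash Virtual SAN capacity
    full_clone_size_gb = 60           # per-desktop full-clone footprint

    max_full_clones = usable_capacity_gb // full_clone_size_gb
    print(f"Approx. full-clone desktops: {max_full_clones}")  # ~233, in line with the 230 tested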

 

Testing Process

For validation testing in the Virtual SAN environment, the VMware View Planner standard benchmark workload was used. Group A (CPU-sensitive) and Group B (CPU- and I/O-sensitive) scores were used to determine the Virtual SAN environment capability for hosting virtual desktop infrastructure.

The View desktop was created using a Windows 7, 32-bit operating system standard image. Necessary configuration changes were done according to the View Planner Installation and Configuration User Guide. All applications included in the View Planner pre-selected workload requirements were installed inside this image.

The VMware Virtual SAN default storage policy was applied to the desktop VM, which provides high availability in case one of the nodes goes down.

A four-node Virtual SAN cluster was created as defined in the Ready Node.

  • A Full Clone desktop pool was created and tested.
  • After testing, the pool was removed, and the Linked Clone pool was built to test.

 

Figure 2: VMware Horizon View pool creation - datastore selection

 

All other Horizon View infrastructure VMs, such as View Planner Appliance, vCenter, AD-DNS, DHCP, and VMware Horizon View, were provisioned outside the Virtual SAN cluster.

 

Figure 3: Test bed architecture

 

The Virtual SAN datastore was configured using a single disk group in each node. Each disk group in each server consisted of one 400 GB Optimus Ascend™ SAS SSD for the caching tier and two 4 TB Optimus MAX SAS SSDs for the capacity tier.
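As a rough cross-check of the usable capacity mentioned earlier, the sketch below estimates capacity from this disk-group layout, assuming the default FTT=1 policy mirrors every object and ignoring Virtual SAN metadata, witness, and slack overheads:

    # Rough capacity arithmetic for the disk-group layout described above.
    # Assumes the default FTT=1 policy mirrors every object (two copies);
    # Virtual SAN metadata, witness, and slack overheads are ignored here.

    nodes = 4
    capacity_ssds_per_node = 2
    capacity_ssd_tb = 4            # Optimus MAX capacity-tier SSD
    cache_ssd_gb = 400             # Optimus Ascend caching-tier SSD (not counted toward capacity)

    raw_capacity_tb = nodes * capacity_ssds_per_node * capacity_ssd_tb   # 32 TB raw
    usable_with_ftt1_tb = raw_capacity_tb / 2                            # ~16 TB before overheads

    print(f"Raw capacity tier: {raw_capacity_tb} TB")
    print(f"Usable with FTT=1 (before overheads): {usable_with_ftt1_tb} TB")
    # The paper cites ~14 TB effectively usable, consistent with this estimate.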

 

Figure 4: All-flash Virtual SAN disk group configuration

 

Test Results

The tests were executed in two phases. First, the Full Clone desktop pool was created, and the desktops were tested for performance and density using View Planner. The Full Clone pool was then deleted, and a Linked Clone pool was created from the same desktop image.

The following section discusses the test results for both environments.

View Planner: Full Clone

Full clone desktops are generally used for engineering or high-end users. For that reason, we ran the View Planner standard benchmark, which represents a power-user profile.

Response Time
The following figures show the response times for CPU-sensitive and disk-sensitive application operations of the View Planner workload.

 

Figure 5: Response time – CPU-sensitive applications operation

 

Figure 6: Response time – disk-sensitive applications operation

 

CPU Utilization Data for VSAN
The following graph shows the CPU core utilization across all nodes, combined. CPU utilization at the hosts remained well below the available CPU cycles.

 

Figure 7: CPU utilization – all nodes shown as combined

 

Disk IOPS
The next graph shows the IOPS collected at the storage controller level, aggregated to give the total IOPS at steady state. Each disk group in a VSAN node is attached to a single storage controller.
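The combined curves in the following charts were produced by summing the per-controller readings from the four nodes at each sampling interval. A minimal sketch of that aggregation (the per-node sample values are placeholders, not the measured data):

    # Minimal sketch: combining per-node storage-controller IOPS samples
    # into the cluster-wide numbers plotted below. The sample values are
    # placeholders; the real data came from per-controller counters on
    # each of the four Virtual SAN nodes.

    # One IOPS sample series per node (hypothetical values).
    node_iops = {
        "node1": [5200, 5400, 5100],
        "node2": [4900, 5300, 5000],
        "node3": [5100, 5200, 5050],
        "node4": [5000, 5100, 4950],
    }

    # Sum across nodes at each sampling interval to get combined IOPS.
    combined = [sum(samples) for samples in zip(*node_iops.values())]
    print("Combined IOPS per interval:", combined)
    print("Steady-state average:", sum(combined) / len(combined))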

 

Figure 8: Disk IOPS (read and write) – combined storage IOPS (controller level)

 

Disk Throughput
Similar to the IOPS collection, the throughput of all disks is gathered at the storage controller level.

 

Figure 9: Disk throughput (read and write) – combined storage throughput (controller level)

 

Disk Latency
The disk latency is well below one millisecond, except for some occasional spikes during steady state.

 

Figure 10: Disk latency (read and write) – combined latency (controller level)

 

View Planner Linked Clone

Linked clone pools are widely deployed, most often for knowledge-user profiles. For that reason, we ran a modified standard benchmark, increasing the think time from 5 sec. (power-user profile) to 10 sec. (knowledge-user profile). There is no standard definition of power-user or knowledge-user think time, but as think time increases, the resource consumption of each desktop decreases.
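To illustrate why a longer think time lowers per-desktop load, the sketch below compares the operation rate a single desktop generates at 5-second and 10-second think times; the 2-second operation service time is a placeholder, not a measured value:

    # Simple illustration of why a longer think time reduces per-desktop load.
    # With a fixed operation service time, each desktop issues fewer
    # operations per minute as think time grows. The 2-second service time
    # is a placeholder, not a measured value.

    def ops_per_minute(think_time_s, service_time_s=2.0):
        """Operations a single desktop completes per minute."""
        return 60.0 / (think_time_s + service_time_s)

    power_user = ops_per_minute(think_time_s=5)       # ~8.6 ops/min
    knowledge_user = ops_per_minute(think_time_s=10)  # ~5.0 ops/min

    print(f"Power user (5 s think time):      {power_user:.1f} ops/min")
    print(f"Knowledge user (10 s think time): {knowledge_user:.1f} ops/min")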

Response Time
The following figure shows the response time for CPU-sensitive and disk-sensitive application operations of the View Planner workload.

 

Figure 11: Response time – CPU-sensitive applications operation

 

Figure 12: Response time – disk-sensitive applications operation

 

Figure 13: CPU utilization – combined nodes

 

Figure 14: Disk IOPS (read and write) – combined storage IOPS (controller level)

 

Figure 15: Disk throughput (read and write) – combined node storage throughput (controller level)

 

Figure 16: Disk latency (read and write) – combined node storage latency (controller level)

 

The disk latency stays near 0 ms for most of the steady-state period. Because of this low latency, application response times remain well below the threshold limits.

 

Test Results Observation

For full-clone desktops, the system was scaled up to 230 users with a power user profile. This is a very high density, considering the resource consumption of these desktops. With the given usable capacity, no additional full-clone desktops could be deployed. If additional storage is added to the four-node all-flash Virtual SAN, further scaling is possible, as resource utilization is well within limits.

For linked clones, the system was scaled up to 730 users with a knowledge user profile. These are commonly used desktops in any mid- to large-size enterprise. The desktop density was kept at a level where application response times were very fast and user experience needs were met.

In both cases, the disk latencies were sub-millisecond, with very few spikes. This enables faster application response, thus improving user experience.

 

Virtual SAN Solution Test Bed

The following tables describe the test bed.

ESXi Host Configuration

HARDWARE SPECIFICATION
Servers
  • 2U 4-Node TwinPro2 server w/ 3008 SAS controller, E5-2670 v3 (12C, 2.3 GHz, 30 MB cache, 9.6 GT/s), Hyper-Threading enabled
  • 4 x 512 GB RAM
Storage Per Node
  • 1 x 400 GB Optimus Ascend SSD
  • 2 x 4 TB Optimus MAX SSDs
  • 1 x SMC3008 12 Gb/s SAS3 HBA, 2 internal mini ports
Network Per Node
  • 2 x 10 Gb NIC in each server
  • 2 x 1 Gb NIC in each server

 

Virtual SAN Configuration

HARDWARE SPECIFICATION
Each disk group configuration
  • Caching tier – 1 x 400 GB Optimus Ascend SSD
  • Capacity tier – 2 x 4 TB Optimus MAX SSDs
Disk groups per node: 1
Total Virtual SAN nodes: 4

 

Virtual Machine Configuration

HARDWARE SPECIFICATION
Desktop
  • Win7 Enterprise 32-bit Edition
  • 1 vCPU, 3 GB RAM
  • VM Virtual SAN storage policy
    • Failures to Tolerate (FTT) = 1
    • Stripe Width (SW) = 1
VMware Horizon View Manager
  • Win 2012 R2 64-bit Edition
  • 4 vCPU, 10 GB RAM
VMware Horizon View Composer
  • Win 2012 R2 64-bit Edition
  • 1 vCPU, 4 GB RAM
VMware vCenter
  • Win 2012 R2 64-bit Edition
  • 4 vCPU, 16 GB RAM
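
The FTT and stripe-width settings above determine how much raw capacity each desktop consumes. A small sketch of that per-VM overhead, assuming FTT=1 simply mirrors each object and ignoring witness and swap components:

    # Per-VM raw capacity consumed under the Virtual SAN storage policy above.
    # Assumes FTT=1 mirrors each object (two full replicas); witness components
    # and VM swap objects are ignored for simplicity.

    ftt = 1                      # Failures to Tolerate from the desktop policy
    stripe_width = 1             # Stripe Width (affects component layout, not total size)
    full_clone_size_gb = 60      # full-clone desktop footprint used in this paper

    replicas = ftt + 1
    raw_consumed_gb = full_clone_size_gb * replicas

    print(f"Replicas per object: {replicas}")
    print(f"Raw capacity per full-clone desktop: {raw_consumed_gb} GB")  # ~120 GB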

 

Installed Desktop Application

MASTER IMAGE APPLICATIONS
Golden Desktop
  • MS Office 2010 Professional 32 bit with no SP
  • Internet Explorer 8.0
  • Mozilla Firefox 7.0
  • Adobe Reader 10.1.4
  • Windows Media Player
  • 7-Zip

Infrastructure Software Configuration

Software Installed

  • VMware vSphere 6.0
  • VMware Horizon View 6.1.1
  • VMware View Planner 3.5
  • SQL Server (Embedded with vCenter)
 

Bill of Materials

The following table summarizes the bill of materials for this solution:

HARDWARE COMPONENT QTY
Servers
  • 2U 4-Node TwinPro2 server w/ 3008, E5-2670 v3 (12C, 2.3 GHz, 30 MB cache, 9.6 GT/s), 512 GB RAM – qty 4
Storage
  • 400 GB Optimus Ascend SSD (1 per server) – qty 4
  • 4 TB Optimus MAX SSD (2 per server) – qty 8
  • SMC3008 12 Gb/s SAS3 HBA, 2 internal mini ports (1 per server) – qty 4
Network
  • 2 x 10 Gb NIC and 2 x 1 Gb NIC in each node – qty 4
  • 10G Ethernet switch SSE-X3348T / SSE-X3348TR – qty 1

 

Conclusion

This solution from SUPERMICRO and SanDisk, based on VMware’s Virtual SAN Ready Node, meets the stringent requirements of today’s applications without added complexity, using a scalable technology that is rapidly deployable, cost-effective, easy to manage, and fully integrable into existing data centers.

This paper illustrates, with relevant benchmark performance case studies, how the challenging Service Level Agreement (SLA) requirements for virtual desktop infrastructure can be met with sub-millisecond latencies by using SUPERMICRO’s Hyper-Converged Infrastructure (HCI, here implemented on the 2U TwinPro2) with SanDisk storage technologies and VMware’s Virtual SAN and Horizon View. In summary, this joint SUPERMICRO/SanDisk/VMware Virtual SAN Ready Node solution delivers lower TCO, with increased scale and operational efficiency over traditional alternatives.

