NETLAB+ VE Customers: Please refer to the NETLAB+ VE documentation page.



NETLAB+ integrates with third-party virtualization products to provide powerful and cost-effective remote PC support. The NETLAB+ documentation library includes several guides with extensive detail on the implementation of virtualization with your NETLAB+ system.

VMware

VMware Inc. provides cutting-edge virtualization technology and resources to academic institutions for little or no charge. Academic licenses for VMware ESXi and vCenter Server may be used for your NETLAB+ infrastructure. The procedure for obtaining licenses for this purpose will vary, depending on your participation in the VMware Academic Program and/or the VMware IT Academy Program. For guidance on navigating the different licensing options that may be available to your organization, please refer to the VMware Product Licensing Through VMware Academic Subscription (VMAS) Chart.

 

Virtualization Components

NETLAB+ Virtualization Components

Virtualization Product Support Status

Product | VMware Version | vCenter Required | NETLAB+ Support | Minimum NETLAB+ Version | NETLAB+ Implementation Guide
--- | --- | --- | --- | --- | ---
VMware ESXi / vCenter | 8.0 | N/A | Coming Soon | 22.0.10 | Documentation and testing underway.
VMware ESXi / vCenter | 7.0 | Yes¹ | Recommended | N/A | NETLAB+ Remote PC Guide Series - Learn More
VMware ESXi / vCenter | 6.7 | N/A | Supported, not documented | N/A | Specifications and configuration guidance are not currently available for VMware vSphere 6.7.
VMware ESXi / vCenter | 6.5 | N/A | No | N/A | Not recommended or supported.
VMware ESXi / vCenter | 6.0 | Yes¹ | Recommended | 2015.R1.final | NETLAB+ Remote PC Guide Series - Learn More
VMware ESXi / vCenter | 5.1 | Yes¹ | Deprecated | 2011.R2 | NETLAB+ Remote PC Guide Series - Learn More
VMware ESXi / vCenter | 5.0 | N/A | No | N/A | Not recommended or supported.
VMware ESXi / vCenter | 4.1 U2 | Yes | Deprecated | 2011.R1V | Remote PC Guide for VMware Implementation Using ESXi versions 4.01 and 4.1 with vCenter
VMware ESXi / vCenter | 4.01 | Yes | Deprecated² | 2011.R1V | Remote PC Guide for VMware Implementation Using ESXi versions 4.01 and 4.1 with vCenter
VMware ESXi Standalone | 4.01 | No | End of Support Dec. 2013² | 2009.R1 | Remote PC Guide for VMware Implementation Using VMware ESXi 3.5/4.01
VMware ESXi Standalone | 3.5 | No | End of Support Dec. 2013² | 2009.R1 | Remote PC Guide for VMware Implementation Using VMware ESXi 3.5/4.01

¹Please use the NDG Optimized VMware vCenter Server Appliance OVA, available from CSSIA; see details below.

²NETLAB+ functionality is limited with these products/versions.

VMware ESXi 5.0 on physical host servers and vCenter 5.0 for NETLAB+ VM management are not recommended or supported due to several known issues. VMware ESXi 5.0 and vCenter 5.0 are supported as virtual machines running in NDG ICM 5 pods; the physical host servers that host ICM 5 pods must run ESXi 5.1 (recommended) or ESXi 4.1 U2.

Only VMware ESXi is supported. VMware ESX is not supported.

VMware ESXi 4.01 is the last version that can be used with NETLAB+ as a standalone server (i.e. without vCenter management). ESXi 5.1 and ESXi 4.1 require a vCenter implementation. After December 31, 2013, only configurations managed by VMware vCenter will be supported by NETLAB+.

 

Server Specifications for Hosting NETLAB+ Pod Virtual Machines

The following table shows the current recommended specifications for ESXi host servers used to host virtual machines in NETLAB+ pods.

Please check the VMware Compatibility Guide to verify that all server hardware components are compatible with the version of VMware ESXi that you wish to use.

Specification Last Updated: February 25, 2019


Components | Dell R630 (Recommended Minimum / Features) | SuperMicro 1028U-TR4+ (Recommended Minimum / Features)
--- | --- | ---
Server Model | Dell R630 | SuperMicro 1028U-TR4+
Chassis Hard Drive Configuration¹ | 10 x 2.5" HDDs | 10 x 2.5" HDDs
Operating System | Specify NO operating system on order. | Specify NO operating system on order.
Hypervisor (installed by you)² | VMware ESXi 6.0 (supported); VMware ESXi 5.1/5.5 (deprecated) | VMware ESXi 6.0 (supported); VMware ESXi 5.1/5.5 (deprecated)
Physical CPUs (Minimum Host Server) | Two (2) x Intel Xeon E5-2630 v4 10C/20T | Two (2) x Intel Xeon E5-2630 v4 10C/20T
Physical CPUs (High Performance Host Server) | Two (2) x Intel Xeon E5-2683 v4 16C/32T | Two (2) x Intel Xeon E5-2683 v4 16C/32T
Memory (Minimum Host Server) | 384GB RDIMM (12x32GB) | 384GB RDIMM (12x32GB)
Memory (High Performance Host Server) | 512GB RDIMM (16x32GB) | 512GB RDIMM (16x32GB)
Hardware Assisted Virtualization Support | Intel-VT | Intel-VT
Accelerated Encryption Instruction Set | AES-NI | AES-NI
BIOS Setting | Performance BIOS Setting | Performance BIOS Setting
RAID | RAID 5 on PERC H730P with 2GB NV Cache | AOC-S3108L-H8iR & 2x CBL-SAST-0593
HDD | 8x 600GB SAS 10K 2.5" 6Gbps | 8x 600GB SAS 10K 2.5" 6Gbps
Power Supply | Dual, 1100W Redundant PS | Dual 750W
Power Cords | 2x NEMA 5-15P to C13 | 2x CBL-0160L 5-15P to C13
Rails | ReadyRails Sliding Rails | MCP-290-00062-0N
Bezel | Bezel 10/24 | No Bezel
1G Network | Broadcom 5720 QP (4 ports) 1GB daughter card | Onboard 4x 1GB AOC-UR-i4G
10G Network³ | Intel X520 DP (SFP+) or Intel X540 DP (10GBASE-T) | AOC-STGN-i2S (SFP+) or AOC-STG-i2T (10GBASE-T)
Internal SD (Opt) | Internal SD Module with 1x 16GB SD Card | N/A

¹Two HDD slots on the chassis will not be used; these can be used for optional SSDs in the future.

²Install VMware ESXi to the Internal SD module or an internal USB port using an 8GB or larger USB flash drive.

³For 10Gbps support, choose either SFP+ or 10GBASE-T, matching the 10Gbps network infrastructure you deploy.

See the specifications for the older, previously recommended server models, Dell R710/R720.
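If you already have candidate hardware racked, the vSphere API exposes enough of the hardware summary to check a host against the table above. The following Python sketch uses the open-source pyVmomi library; the hostname and credentials are placeholders, and this is an illustration rather than part of the NDG setup procedure.

```python
# Sketch: query an ESXi host's CPU/memory configuration with pyVmomi.
# Hostname and credentials below are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; use valid certs in production
si = SmartConnect(host="esxi-host.example.edu", user="root",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        hw = host.summary.hardware
        print(f"{host.name}: {hw.cpuModel}")
        print(f"  sockets={hw.numCpuPkgs}  cores={hw.numCpuCores}  "
              f"threads={hw.numCpuThreads}  memory={hw.memorySize / 2**30:.0f} GiB")
finally:
    Disconnect(si)
```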

 

RAID Arrays and Configuration

If you are storing virtual machines on the ESXi host server's internal Direct Attached Storage, the type of RAID controller and the RAID array configuration will have a very significant impact on performance, particularly as the number of active VMs increases. The amount of cache on the RAID controller is very important. A controller with no cache is likely to perform poorly under load and will significantly decrease the number of active VMs you can run on the server. Keep in mind that many RAID controllers will disable the disks' onboard cache, which is designed for standalone usage.


Recommended RAID Configuration for Dell R630 Servers
  • RAID Controller: Dell PERC H730P Internal Integrated RAID Controller with 2GB NV cache.
  • RAID Configuration: RAID 5 is recommended. It provides excellent read and write performance with data protection if one drive in the array fails.

Recommended RAID for SuperMicro 1028U-TR4+
  • AOC-S3108L-H8iR & 2x CBL-SAST-0593
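As a quick capacity sanity check: RAID 5 consumes one drive's worth of space for parity, so the recommended 8 x 600GB arrays yield roughly 4.2TB of usable space. A short worked example:

```python
# Worked example: usable capacity of the recommended 8 x 600GB RAID 5 array.
# RAID 5 stores parity equivalent to one drive, so usable space is (n - 1) drives.
drives = 8
drive_gb = 600
usable_gb = (drives - 1) * drive_gb
print(f"RAID 5 usable capacity: {usable_gb} GB (~{usable_gb / 1000:.1f} TB)")
# -> RAID 5 usable capacity: 4200 GB (~4.2 TB)
```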
 

Storage Area Networks

NDG performs all testing on servers with Internal Direct Attached Storage (i.e. RAID arrays and RAID controllers directly attached to each ESXi host server). This is the configuration that most academic institutions are likely to find affordable and adopt.

A Storage Area Network (SAN) is a dedicated network that provides access to consolidated, block level data storage, which can be used for disk storage in a VMware vSphere environment.

Currently NDG does not provide benchmarks, guidance or troubleshooting for SAN configurations. Our documentation may show an optional SAN in the environment, however this is not a recommendation or requirement to deploy a SAN.

NDG benchmarks and capacity planning guidance do not account for the additional latencies introduced by SAN.

  • When compared to Direct Attached Storage, a SAN may introduce additional I/O latency between the ESXi server and its disks. Therefore, a SAN may reduce the number of active VMs you can run on an ESXi host.
  • If you deploy a SAN, you should perform your own benchmarks and determine the number of active VMs you can host on your ESXi server (see the sketch after this list). Your mileage may vary.
  • Always configure NETLAB+ Proactive Resource Awareness to ensure that the number of VMs that can be activated will remain within your predetermined performance limits.
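If you do benchmark SAN-backed storage, a purpose-built tool such as fio is the right instrument. The fragment below is only a minimal sketch of the idea, run from inside a test VM whose disk lives on the datastore under test: it times synchronous, fsync'd writes to estimate write latency.

```python
# Minimal sketch: estimate synchronous write latency from inside a test VM.
# Illustrative only; use a real benchmark tool (e.g., fio) for planning decisions.
import os
import time

BLOCK = b"\0" * 4096          # 4 KiB writes
COUNT = 1000
PATH = "latency_test.bin"     # place on the datastore-backed disk under test

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
latencies = []
try:
    for _ in range(COUNT):
        t0 = time.perf_counter()
        os.write(fd, BLOCK)
        os.fsync(fd)          # force the write through to stable storage
        latencies.append(time.perf_counter() - t0)
finally:
    os.close(fd)
    os.unlink(PATH)

latencies.sort()
avg_ms = sum(latencies) / COUNT * 1000
p99_ms = latencies[int(COUNT * 0.99)] * 1000
print(f"avg write+fsync latency: {avg_ms:.2f} ms, p99: {p99_ms:.2f} ms")
```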
 

Quantity of Physical VMware ESXi Host Servers Required

The tables below show the minimum server configurations recommended for various NDG-supported courseware. These configurations are based on the Dell R630 specifications above and vary only by memory and the number of active VMs supported. You do not need separate host servers for each curriculum and may run VMs for Cisco, General IT, and Cybersecurity on the same servers. We recommend no more than 40 active VMs per server with 128GB of memory. Always configure NETLAB+ Proactive Resource Awareness to limit the number of scheduled VMs at any given time and to prevent oversubscription of host resources.

 
Minimum Server Configurations Based On Dell R630 and SuperMicro 1028U-TR4+ Specifications
Server Role/Courses | Processor(s) | Memory | Cores/Threads | Active VMs | Storage | Active Pod to Host Ratio
--- | --- | --- | --- | --- | --- | ---
Cisco Only Setup | 2x Intel Xeon E5-2630 v3 | 64GB | 8/16 | 24 | amounts vary¹ | 8 MAP Pods to 1
VMware vSphere ICM 4.1 | 2x Intel Xeon E5-2630 v3 | 72GB | 8/16 | 40 | 40 GB | 8 VMware vSphere ICM Pods to 1
VMware vSphere ICM 5.0 | 2x Intel Xeon E5-2630 v3 | 128GB | 8/16 | 40 | 20 GB | 8 VMware vSphere ICM Pods to 1
VMware vSphere ICM 5.1 | 2x Intel Xeon E5-2630 v3 | 128GB | 8/16 | 40 | 20 GB | 8 VMware vSphere ICM Pods to 1
VMware View ICM 5.1 | 2x Intel Xeon E5-2630 v3 | 192GB | 8/16 | 40 | 20 GB | 8 VMware View ICM Pods to 1
EMC ISM | 2x Intel Xeon E5-2630 v3 | 128GB | 8/16 | 48 | 8 GB | 16 ISM Pods to 1
CSSIA CompTIA Security+® | 2x Intel Xeon E5-2630 v3 | 128GB | 8/16 | 40 | 28 GB (master), 15 GB (student) | 5 MSEC Pods to 1
General IT / Cybersecurity | 2x Intel Xeon E5-2630 v3 | 128GB | 8/16 | 40 | amounts vary² | amounts vary²

¹Please refer to the topology-specific information for Cisco content.

²Please refer to the topology-specific information for General IT or Cybersecurity.
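To make the sizing arithmetic concrete, the sketch below shows the kind of back-of-the-envelope calculation behind these rows. The per-VM memory and hypervisor overhead figures are illustrative assumptions; substitute the actual requirements of your pods.

```python
# Sketch: rough active-VM ceiling for one host, in the spirit of the table above.
# The per-VM and hypervisor overhead figures are illustrative assumptions.
host_ram_gb = 128           # e.g., the General IT / Cybersecurity row
hypervisor_overhead_gb = 8  # assumed ESXi + management overhead
avg_vm_ram_gb = 3           # assumed average per active VM; varies by pod type

max_active_vms = (host_ram_gb - hypervisor_overhead_gb) // avg_vm_ram_gb
print(f"Rough ceiling: {max_active_vms} active VMs")  # -> 40

# Cap scheduling in NETLAB+ Proactive Resource Awareness at or below this
# number so labs cannot oversubscribe the host.
```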

 

Specifications for the Physical Management Server

VMware vCenter enables you to manage the resources of multiple ESXi hosts and allows you to monitor and manage your physical and virtual infrastructure. Starting with software version 2011.R1V, NETLAB+ integrates with VMware vCenter to assist the administrator with installing, replicating and configuring virtual machine pods.

For performance reasons, a separate physical management server is recommended for vCenter.

Management Server (vCenter)
Server Type Processor(s) Memory Cores/Threads
Dell R630 2x Intel Xeon E5-2630 v4 128GB RDIMM (4x32GB) 10/20
SuperMicro 1028U-TR4+ 2x Intel Xeon E5-2630 v4 128GB RDIMM (4x32GB) 10/20

Please adhere to VMware's requirements and best practices. vCenter requires at least two CPU cores. Unsatisfactory results have been observed with older or single-core hardware that did not meet VMware's minimum specifications.

NDG does not support configurations where a virtualized vCenter server instance is running on a heavily loaded ESXi host and/or an ESXi host that is also used to host virtual machines for NETLAB+ pods (with the exception of HA failover of the management server). These configurations have exhibited poor performance and API timeouts that can adversely affect NETLAB+ operation.

 

Configuration for the Virtual vCenter Server

As of vSphere 5.1, NDG supports only the VMware vCenter Server Appliance, and starting with VMware ESXi 5.1, NDG strongly recommends the NDG Optimized VMware vCenter Server Appliance, a virtual machine that runs on ESXi 5.1. The physical server on which vCenter resides should be a dedicated "management server" to provide ample compute power, now and in the future; it is strongly recommended that you follow our server recommendations above. See the instructions below to request the NDG Optimized vCenter Server v5.1 Appliance OVA from CSSIA.org.


  1. Go to the CSSIA Resources page: https://www.cssia.org/cssiaresources/
  2. Select VM Image Sharing Agreement to open the request form.
  3. Complete and submit your access request by following the instructions on the request form.
  4. CSSIA will email a link, along with a username and password, to access the download server. Access to the download server is provided only to customers who are current with their NETLAB+ support contract and are participants in the appropriate partner programs (i.e., Cisco Networking Academy, VMware IT Academy, EMC Academic Alliance, and/or Red Hat Academy).
  5. Once access to the download server has been established, the virtual machines can be deployed directly to the vCenter Server by clicking File > Deploy OVF Template in the vSphere Client window and copying the link into the location field.
  6. The deployment will start after the username and password are entered.
  7. Each virtual machine is deployed individually.
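Once the appliance is deployed and powered on, a quick scripted login can confirm that the vCenter API is answering. This Python sketch uses pyVmomi; the hostname and credentials are placeholders.

```python
# Sketch: confirm a newly deployed vCenter appliance is answering API calls.
# Hostname and credentials are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.edu",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    about = si.content.about
    print(f"{about.fullName} (API {about.apiVersion})")
finally:
    Disconnect(si)
```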

The following table lists the server memory requirements for the vCenter 6.0 Appliance, based on the number of virtual machines in the inventory.

vCenter 6.0 Appliance Size | Virtual Machines | CPUs | RAM | Disk
--- | --- | --- | --- | ---
Tiny | Up to 100 | 2 | 8GB | 120GB
Small | Up to 1,000 | 4 | 16GB | 150GB
Medium (Recommended) | Up to 4,000 | 8 | 24GB | 300GB
Large | 4,000+ | 16 | 32GB | 450GB

Guest Operating Systems (virtual machines)

NDG has tested Windows and Linux as guest operating systems. Novell NetWare is not currently supported. Other operating systems that are supported by VMware may work but have not been tested by NDG. The guest operating system must support VMware Tools for the mouse to work within NETLAB+.
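Because the in-lab mouse depends on VMware Tools, it can be useful to audit Tools status across the inventory. A pyVmomi sketch, assuming a connected ServiceInstance `si` as in the earlier examples:

```python
# Sketch: report VMware Tools status for every VM in the inventory.
# Assumes `si` is a connected ServiceInstance as in the earlier examples.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    # toolsStatus is e.g. 'toolsOk', 'toolsOld', or 'toolsNotInstalled'
    print(f"{vm.name}: {vm.guest.toolsStatus}")
```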

If you are using the topologies designed to support Cisco Networking Academy® content, please review the additional information on determining the number of VMware servers needed.

 

NDG Equipment Selection Disclaimer

NDG offers no warranties (expressed or implied) or performance guarantees (current or future) for third-party products, including those products NDG recommends. Due to the dynamic nature of the IT industry, our recommended specifications are subject to change at any time.

NDG recommended equipment specifications are based on actual testing performed by NDG. To achieve comparable compatibility and performance, we strongly encourage you to utilize the same equipment, exactly as specified and configure the equipment as directed in our setup documentation. Choosing other hardware with similar specifications may or may not result in the same compatibility and performance. The customer is responsible for compatibility testing and performance validation of any hardware that deviates from NDG recommendations. NDG has no obligation to provide support for any hardware that deviates from our recommendations, or for configurations that deviate from our standard setup documentation.