
Private cloud sample infrastructure
The following is a sample of a real-world infrastructure that can support up to 3,000 VMs on 64 server nodes running Windows Server 2016 Hyper-V.
The number of VMs you can run on an implementation like this depends on several key factors, so do not treat the following configuration as a template to mirror in your deployment; use it as a starting point. My recommendation is to start by understanding your environment and then run a capacity planner such as the Microsoft Assessment and Planning (MAP) Toolkit, which will help you gather the information you need to design your private cloud.
I am assuming a ratio of 50 VMs per cluster node, each with 3 GB of RAM configured to use Dynamic Memory (DM); a configuration sketch follows the hardware list below:
- Servers
    - 64 servers (4 clusters x 16 nodes)
    - Dual processors, 6 cores each: 12 cores in total
    - 192 GB RAM
    - 2 x 146 GB local HDDs (ideally SSDs) in RAID 1
- Storage
    - Switch and host redundancy
    - Fibre Channel, iSCSI, or S2D (converged)
    - Array with enough capacity to support the customer workloads
    - Switch with connectivity to all hosts
- Network
    - Redundant switches with sufficient port density and connectivity to all hosts
    - Support for VLAN tagging and trunking
    - NIC Teaming and VLANs are recommended for better network availability, security, and performance (see the teaming sketch after this list)
- Storage connectivity
    - If using Fibre Channel: 2 x 4 Gbps HBAs
    - If using iSCSI: 2 x dedicated NICs (10 GbE recommended)
    - If using S2D: 2 x dedicated 10 GbE NICs (RDMA-capable adapters recommended)
- Network connectivity
    - If using 1 GbE connectivity: 6 dedicated 1 GbE NICs (live migration, CSV, management, and virtual machine traffic)
    - If using 10 GbE connectivity: 3 dedicated 10 GbE NICs (live migration, CSV, management, and virtual machine traffic)
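To make the Dynamic Memory assumption concrete, here is a minimal PowerShell sketch that applies the 3 GB ceiling to a batch of VMs on one host. The VM names, startup/minimum sizes, and buffer value are hypothetical placeholders, not part of the reference configuration:

```powershell
# Sketch only: cap each VM at 3 GB of Dynamic Memory so roughly 50 VMs
# fit on a 192 GB host with headroom left for the parent partition.
# VM names, startup/minimum sizes, and the buffer value are assumptions.
$vmNames = 1..50 | ForEach-Object { "TenantVM{0:D2}" -f $_ }

foreach ($name in $vmNames) {
    Set-VMMemory -VMName $name `
        -DynamicMemoryEnabled $true `
        -StartupBytes 512MB `
        -MinimumBytes 512MB `
        -MaximumBytes 3GB `
        -Buffer 20
}
```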
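The NIC Teaming and VLAN recommendation can be implemented as a converged network on the 10 GbE hosts. The following sketch assumes three physical adapters named NIC1 to NIC3, a switch-independent team, and example VLAN IDs; adjust the names and IDs to your environment:

```powershell
# Sketch: one switch-independent team across the three 10 GbE adapters,
# a converged virtual switch on top of it, and tagged host vNICs for the
# management, CSV, and live migration traffic classes.
# Adapter names, team/switch names, and VLAN IDs are assumptions.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -AllowManagementOS $false -MinimumBandwidthMode Weight

# Host vNICs for the infrastructure traffic classes, each on its own VLAN
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "CSV"           -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management"    -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV"           -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 30
```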
Another way to build a private cloud infrastructure is to use a hyper-converged solution, in which Storage Spaces Direct, Hyper-V, Failover Clustering, and the other components all run on the same cluster hosts. In this model, storage and compute resources cannot be scaled separately: adding one more host to an existing cluster extends both compute and storage capacity. It also places extra demands on IT staff, who have to carefully plan any management task that touches the storage or compute subsystem in order to avoid downtime. To avoid these drawbacks, and for larger deployments, I'd recommend a converged solution with separate clusters for the SOFS (Scale-Out File Server) and Hyper-V workloads.
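For the hyper-converged variant described above, the high-level build looks like the following sketch; the node names, cluster name, and volume size are placeholders for illustration, not prescriptions:

```powershell
# Sketch of a hyper-converged cluster: the same nodes provide Hyper-V
# compute and Storage Spaces Direct storage. Names and sizes are examples.
$nodes = "HV-NODE01","HV-NODE02","HV-NODE03","HV-NODE04"

# Validate the nodes, then create the cluster without any shared storage
Test-Cluster -Node $nodes -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "HC-CLUSTER01" -Node $nodes -NoStorage

# Pool the nodes' local disks with Storage Spaces Direct
Enable-ClusterStorageSpacesDirect -CimSession "HC-CLUSTER01"

# Create a resilient Cluster Shared Volume for the VM files
New-Volume -FriendlyName "VMStore01" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" -Size 10TB
```

In the converged alternative recommended above, Enable-ClusterStorageSpacesDirect would instead run on the dedicated SOFS cluster, and the Hyper-V cluster would consume its file shares over SMB3, allowing compute and storage to be scaled independently.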