Network Design Rack Server x3850 X5 with fourteen 1 GbE Network Adapters
Welcome! I want to share with you one of my older network designs with 1 GbE network adapters.
In this example we will look at an IBM System x3850 X5: 4x Xeon 10C E7-8860, 8x Memory Expansion Cards, RAM 64x16GB (1TB), 2x QLogic 8Gb FC Dual-port HBA, 3x Intel Ethernet Quad Port Server Adapter I340-T4, no disks – a USB Memory Key for VMware ESXi in the motherboard slot.
Logical Network Design
When you have all the requirements on paper, this part is easy: you collect the portgroups you need and assign 2 NICs to each for redundancy. Don’t forget to read and follow VMware best practices for distributed switches.
- mgmt – vmk0 (active, stand-by)
- vMotion – vmk1 (active, stand-by), vmk2 (stand-by, active)
- Prod – LBT (load based teaming)
- Backup – LBT (load based teaming)
- DMZ – LBT (load based teaming)
- DMZ Backup – LBT (load based teaming)
We have 6 portgroups with 2 NICs each, so I need 12 network connections. I will use the leftover 2 connections for the production network for extra redundancy. Let’s put this in Visio, and in a few moments we have what our vCenter configuration should look like. Don’t forget to add important information about:
- vmkernel name, IP, mask,
- Distributed switch names,
- VLAN IDs,
- IP subnets and masks,
- native or trunk ports,
- redundant connection to switches.
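The uplink arithmetic above can be sketched in a few lines of Python. This is just an illustration of the counting in the logical design; the portgroup names and teaming policies come from the article, and the port totals assume three quad-port I340-T4 cards plus 2 onboard 1 GbE ports.

```python
# Tally the uplinks needed for the logical design described above.
portgroups = {
    "mgmt":       "active/stand-by",
    "vMotion":    "active/stand-by (vmk1, vmk2)",
    "Prod":       "LBT",
    "Backup":     "LBT",
    "DMZ":        "LBT",
    "DMZ Backup": "LBT",
}

UPLINKS_PER_PORTGROUP = 2                             # redundancy requirement
required = len(portgroups) * UPLINKS_PER_PORTGROUP    # 12 connections
available = 3 * 4 + 2   # three quad-port cards + 2 onboard 1 GbE ports = 14
leftover = available - required                       # 2 spare, added to Prod

print(f"required={required}, available={available}, leftover={leftover}")
```

Running this confirms the numbers in the text: 12 required uplinks, 14 available 1 GbE ports, and 2 leftover connections that go to the production network.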
Logical Network Design Diagram
Physical Network Design
Now it’s time to think about how to fit everything inside the server. I have 3x NIC, 2x HBA, and 1x built-in 10 Gb NIC in PCIe7. That makes 6 PCIe cards, so I should fit! To know how to assign PCIe cards to slots, there are 2 important things, both of them described in the IBM Redbook for the x3850 X5. The first thing is the order for adding cards, on page 100.
- For us it is important that PCIe2 should be populated last, and in our case it will remain empty.
- The Dual-port Emulex 10 Gb Ethernet adapter fits only in PCIe7.
The second important thing is the system board architecture, on page 57.
The key here is to put each HBA on a different I/O hub, so that each HBA has a dedicated CPU.
Let’s combine all information together and we have our setup:
- PCIe1 – QLogic 8Gb FC Dual-port HBA,
- PCIe2 – empty,
- PCIe3 – Intel Ethernet Quad Port Server Adapter I340-T4,
- PCIe4 – Intel Ethernet Quad Port Server Adapter I340-T4,
- PCIe5 – QLogic 8Gb FC Dual-port HBA,
- PCIe6 – Intel Ethernet Quad Port Server Adapter I340-T4,
- PCIe7 – Dual-port Emulex 10 Gb Ethernet adapter – not used.
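The slot placement can be expressed as a small sanity check. Note: the slot-to-I/O-hub mapping below is hypothetical, shown only to illustrate the constraint; the real mapping is in the x3850 X5 Redbook system board architecture diagram (page 57).

```python
# Hypothetical slot -> I/O hub mapping, for illustration only.
# Consult the x3850 X5 Redbook (page 57) for the actual topology.
io_hub = {
    "PCIe1": 1, "PCIe2": 1, "PCIe3": 1, "PCIe4": 1,
    "PCIe5": 2, "PCIe6": 2, "PCIe7": 2,
}

# Slot population from the list above.
slots = {
    "PCIe1": "QLogic 8Gb FC Dual-port HBA",
    "PCIe2": None,  # populated last, left empty
    "PCIe3": "Intel I340-T4",
    "PCIe4": "Intel I340-T4",
    "PCIe5": "QLogic 8Gb FC Dual-port HBA",
    "PCIe6": "Intel I340-T4",
    "PCIe7": "Emulex 10 Gb Ethernet (not used)",
}

# The design goal: the two HBAs must sit on different I/O hubs.
hba_hubs = {io_hub[s] for s, card in slots.items() if card and "HBA" in card}
assert len(hba_hubs) == 2, "both HBAs share an I/O hub"
print("HBAs are on separate I/O hubs:", sorted(hba_hubs))
```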
Rear look to server populated with PCIe cards:
Physical Network Design Diagram
There are many possibilities for how to draw a physical network design in Visio. I will show one of them here, and other views in the next article. We have to pay attention to the important things in physical design:
- redundancy of portgroups – each cable has to use a different switch,
- plan for losing 1 quad-port card and distribute connections between all cards,
- plan for losing 2 quad-port cards, so the important production networks stay available,
- physical details of hardware,
- details with exact names of switches or NICs (skipped here).
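The “plan for losing a card” rule from the checklist above can be verified mechanically. The uplink-to-card assignment below is hypothetical (the article skips exact NIC names), but it follows the stated rules: every portgroup’s uplinks are spread across different cards, and Prod gets the 2 leftover connections.

```python
from collections import Counter

# Port capacity per physical card (names are assumed, for illustration).
cards = {"onboard": 2, "I340-A": 4, "I340-B": 4, "I340-C": 4}

# Hypothetical assignment: which card carries each portgroup's uplinks.
uplinks = {
    "mgmt":       ["onboard", "I340-A"],
    "vMotion":    ["I340-A", "I340-B"],
    "Prod":       ["I340-A", "I340-B", "I340-C", "onboard"],  # 2 + 2 leftover
    "Backup":     ["I340-B", "I340-C"],
    "DMZ":        ["I340-C", "I340-A"],
    "DMZ Backup": ["I340-B", "I340-C"],
}

# No card may carry more uplinks than it has ports.
used = Counter(c for cs in uplinks.values() for c in cs)
for card, n in used.items():
    assert n <= cards[card], f"{card} is over capacity"

# Simulate losing any single quad-port card: every portgroup must
# keep at least one working uplink.
for failed in ("I340-A", "I340-B", "I340-C"):
    for pg, cs in uplinks.items():
        survivors = [c for c in cs if c != failed]
        assert survivors, f"{pg} loses all uplinks if {failed} fails"

print("every portgroup survives any single quad-port card failure")
```

Note that this particular assignment only guarantees the single-card failure case for every portgroup; with two failed quad-port cards, only Prod (thanks to its 4 uplinks across all cards) is guaranteed to stay up, which matches the checklist’s weaker requirement for the two-card scenario.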
That’s all. In the next article we will focus on a Network Design for the Rack Server x3850 X6 with 10 GbE Network Adapters.
Roman Macek
I think you’ve missed an important design consideration at the VMware layer, or at least not discussed its merits. The decision to go with a DVS implies you have Enterprise Plus licensing; here you have a lot more decisions to make, such as using NIOC and using all uplinks in a single DVS.
I plan to post more network designs; definitely one of them will combine all NICs in a single vDS, and another network design will use NIOC shares. This design can be implemented with a vSS, as the logic allows that, so those who don’t have ENT+ can still use this design.