No more Tiers.
Data center networking has fundamentally changed over the last few years. Previously there was a clear separation of tiers: a couple of cores, a few distros, some server edges, some aggregation, and a bunch of access layer switches. The current data center network design philosophy is no more tiers: the network should be flat and encompass all of the previous functionality in one layer. Cisco called this a single tier data center model; now they call it the FabricPath-based network. I call it a distributed core architecture. Every switch is an access layer, distro layer, and core layer switch at once.
The move to a single layer in the data center is hard. There are many entrenched CCNAs who won't consider a design unless Cisco endorses it. Encouraging networkers to think outside the cert is challenging.
To move to a single tier you must evaluate newer technologies. In a single tier every port counts, so you want all of them running at line rate. The next key piece of the architecture is the uplinks.
Although Cisco makes nice equipment, there is currently no way to get there from here: the current Cisco inventory has too little backplane speed and throughput. Look for Cisco to buy another company that makes decent 10/40GbE gear.
A properly laid out single tier data center network has oversubscription rates of less than 3:1 at 10GbE, and less where the density is lower. With a multi-tier layout, oversubscription on the access switch is at least 1.2:1 at 1GbE and 9+:1 at 10GbE. You can lay out a multi-tier data center in many different ways, but with tiers you will always have high oversubscription.
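The oversubscription math above is simple enough to sketch. This is a minimal illustration, not a sizing tool; the port counts below are hypothetical examples chosen to land near the ratios quoted in the text, not figures from any particular switch.

```python
# Oversubscription = server-facing bandwidth / uplink bandwidth.
# Port counts here are made-up examples for illustration only.

def oversubscription(server_ports: int, uplink_ports: int,
                     port_gbps: float = 10.0) -> float:
    """Ratio of potential server-facing bandwidth to uplink bandwidth,
    assuming all ports run at the same line rate."""
    return (server_ports * port_gbps) / (uplink_ports * port_gbps)

# Single-tier style: dedicate a third of the ports to line-rate uplinks.
print(oversubscription(36, 12))   # -> 3.0, i.e. 3:1

# Typical multi-tier access switch: 48 server ports, 4 uplinks.
print(oversubscription(48, 4))    # -> 12.0, i.e. 12:1
```

The point of the sketch is the trade visible in the numbers: driving the ratio down toward 3:1 means burning a large fraction of each switch's ports on uplinks, which is exactly the port cost discussed below.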
A benefit of a single tier is latency: the maximum number of switches you traverse between any two servers is two. Low latency is king in the data center network. I like to illustrate this with a VoIP call: on a high-latency network, calls are choppy, you occasionally hear yourself echo, and you have to wait for the other end to catch up. Most network engineers will tell you latency doesn't matter once it's below a few milliseconds, but many applications are sensitive to it, so we buy separate networking for those applications. With a single tier, all of these pools of technology can be integrated.
The drawbacks of moving to a single tier in the data center are the complexity of configuration, the type of equipment you must buy, and the number of ports consumed by full-speed uplinks. The biggest drawback is scalability: each data center is limited to roughly 8 switches because of the number of interconnects required. A single tier only scales to about 4 Tb/s and, depending on the switch, about 800 10GbE server links.
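The interconnect limit can be made concrete with a little arithmetic. Assuming the switches are connected in a full mesh (the text says only that the interconnect count is the limiter, so the full-mesh topology here is an assumption), links grow quadratically with switch count and each switch burns one uplink port per peer:

```python
# Full-mesh interconnect math, assuming every switch links directly
# to every other switch. The full-mesh assumption is illustrative.

def mesh_links(n_switches: int) -> int:
    """Total point-to-point links in a full mesh: n * (n - 1) / 2."""
    return n_switches * (n_switches - 1) // 2

def uplinks_per_switch(n_switches: int) -> int:
    """Ports each switch must dedicate to peers in a full mesh."""
    return n_switches - 1

for n in (4, 8, 16):
    print(f"{n} switches: {mesh_links(n)} links, "
          f"{uplinks_per_switch(n)} uplink ports per switch")
```

At 8 switches the mesh already needs 28 links and 7 uplink ports per switch (more if each uplink is a multi-port bundle to keep oversubscription low), and doubling to 16 switches more than quadruples the links, which is why the design tops out quickly.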