FOA Guide 

Topic: Data Centers

Table of Contents: The FOA Reference Guide To Fiber Optics

Data Centers

The flood of information being generated, stored and transmitted over the Internet has created a massive demand for new data centers, facilities where servers, storage, switches and routers fill racks and cabling runs fill overhead and under floor cable trays. For electrical contractors who also do telecom work, data centers are immense opportunities, involving power, mechanical structures, cabling and security systems.

What Is A Data Center?
Data centers are facilities that store and distribute the data on the Internet. With an estimated 100 billion plus web pages on over 100 million websites, data centers contain a lot of data. With almost two billion users accessing all these websites, including a growing amount of high bandwidth video, it’s easy to understand but hard to comprehend how much data is being uploaded and downloaded every second on the Internet.
Data centers are filled with tall racks of electronics surrounded by cable racks. Data is typically stored on big, fast hard drives. Servers are computers that take requests and move the data using fast switches to access the right hard drives. Routers connect the servers to the Internet. Speed is of the essence. Servers are very fast computers optimized for finding and moving data. Likewise, the hard drives, switches and routers are chosen for speed. Interconnections use the fastest methods possible. Faster speed means lower latency, the time it takes to find and send the data along to the requestor.
While speed is a primary concern for data centers, so is reliability. Data centers must be available 24/7 since all those 2 billion Internet users are spread all around the world. Reliability comes from designing devices with redundancy, backups for storage, uninterruptible power and fighting the #1 enemy of reliability, heat.
Heat is generated by all the electronics, and the faster they run, the more power they consume and the more heat they produce. Getting rid of heat requires lots of air conditioning which can consume as much power as the data center electronics itself. Uninterruptible power requires generators, batteries or even fuel cells and those generate heat from inefficiency also.
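This relationship between computing load and overhead is often summarized by the Power Usage Effectiveness (PUE) metric: total facility power divided by the power used by the IT equipment alone. A minimal sketch, using illustrative wattage figures that are assumptions rather than measurements from any facility:

```python
def pue(it_kw, cooling_kw, power_loss_kw, other_kw=0.0):
    """Power Usage Effectiveness: total facility power / IT equipment power.
    A perfectly efficient facility would score 1.0; real ones score higher."""
    total_kw = it_kw + cooling_kw + power_loss_kw + other_kw
    return total_kw / it_kw

# Illustrative figures: if air conditioning draws as much power as the IT
# load itself, as described above, PUE approaches 2 before UPS losses count.
print(pue(it_kw=1000, cooling_kw=1000, power_loss_kw=100))
```

Lowering either the electronics' power draw or the cooling overhead improves this ratio, which is why both are design targets.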
Data centers consume vast amounts of power. A few years ago, the magnitude of this consumption became apparent in surveys of giant data centers. Estimates are that data centers consume more than 3% of all the power used in the US, which hosts the majority of the world's data centers, an amount greater than the total consumption of almost half the states! Power consumption in a data center can be more than 100 times as much per square foot as in the average commercial property.
Within the data center, the focus is on moving data, reducing power and heat and ensuring reliability. That’s done by choosing components and systems, designing facilities and installing them properly.

Moving Data over Cabling
Every data center begins with fiber optic connections to the Internet. Entrance facilities must be provided for multiple cables connecting to the outside communications networks. Incoming cables will terminate in racks with connections to routers that in turn connect to the servers hosted in the data center. These connections will carry vast quantities of data over singlemode optical fibers at 10-100Gb/s.
Within the data center, the goal is to move data as fast as possible with the lowest latency, and that means using the fastest available data communications links. Gigabit Ethernet is far too slow. 10 gigabit Ethernet and Fibre Channel are commonly used today. Fibre Channel is moving to 16 Gb/s and Ethernet is headed for 40 Gb/s and 100 Gb/s. Big data center operators want 40/100 Gb/s as soon as possible and are pushing standards committees and manufacturers to produce usable products.
At 10 Gb/s, standard data links use multimode or singlemode fiber optics, coax cables or Category 6A unshielded twisted pair links. Of the three, fiber is the most reliable and has the lowest power consumption, especially on long links, saving as much as 70% over UTP copper links. 10GBASE-CX4 on coax can be used effectively on very short links, up to about 10 meters (33 feet). Cat 6A consumes a lot of power to transmit 10 Gb/s on longer links and generates more heat. Many data centers use coax for short links and fiber for longer links, but Cat 6A continues to be developed for lower power and is becoming more widely used.
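The scale of those power savings is easy to estimate across a whole facility. The per-port wattages below are illustrative assumptions (early 10GBASE-T ports drew several watts per end, optical ports far less), not vendor specifications:

```python
def link_power_savings(n_links, copper_w_per_port, fiber_w_per_port):
    """Power saved by using fiber instead of copper for n_links, counting
    both ends of each link. Per-port wattages are assumptions."""
    copper_w = n_links * 2 * copper_w_per_port
    fiber_w = n_links * 2 * fiber_w_per_port
    return copper_w - fiber_w, (copper_w - fiber_w) / copper_w

# 500 links at an assumed 4 W/port for copper vs 1.2 W/port for fiber:
saved_w, fraction = link_power_savings(500, 4.0, 1.2)
print(f"{saved_w:.0f} W saved ({fraction:.0%})")  # 2800 W saved (70%)
```

Every watt saved at the port is saved roughly twice, since the air conditioning no longer has to remove that heat either.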
The 40/100 Gb/s standards call for multiple 10 Gb/s channels run over parallel multimode optical fiber cables or WDM on singlemode fiber. The MM links require 8 fibers for 40 Gb/s (two fibers transmitting in opposite directions for each 10 Gb/s link) and 20 fibers for 100 Gb/s. These links use the MTP multifiber connector in 12 and 24 fiber configurations, the same connector used on prefab systems today. Singlemode generally uses LC connectors. The largest users of data centers have expressed a preference for WDM on SM fiber for reduced cabling bulk and cost.
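The fiber counts above follow directly from the parallel-optics arithmetic: one fiber per 10 Gb/s lane in each direction. A quick sketch of that calculation (WDM variants instead multiplex the lanes onto a single fiber pair):

```python
def parallel_fiber_count(total_gbps, lane_gbps=10):
    """Fibers needed for a parallel-optics link: one fiber per lane in
    each direction (no WDM), so 2 x (total rate / lane rate)."""
    lanes = total_gbps // lane_gbps
    return 2 * lanes

print(parallel_fiber_count(40))   # 8 fibers, as for the 40 Gb/s MM link
print(parallel_fiber_count(100))  # 20 fibers, as for the 100 Gb/s MM link
```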

Here is a table of the options.

Option | Channels | Fibers/Connectors | Distance | Limitations
40 Gb/s, 850 nm multimode | 4 x 10 Gb/s, parallel | 8 fibers, OM3/OM4, MTP/MPO | 100-125 m | Manage fibers in groups of 4
40 Gb/s, 1300 nm singlemode | 4 x 10 Gb/s, CWDM | 2 fibers, OS1/OS2, SC or LC | 10 or 40 km | Premises singlemode
40 Gb/s, 1550 nm singlemode | 1 x 40 Gb/s | 2 fibers, OS1/OS2, SC or LC | 2 km | Premises singlemode
100 Gb/s, 850 nm multimode | 10 x 10 Gb/s, parallel | 20 fibers, OM3/OM4, MTP/MPO | 100-125 m | Manage fibers in groups of 10 or 12?
100 Gb/s, 1300 nm singlemode | 4 x 25 Gb/s, CWDM | 2 fibers, OS1/OS2, SC or LC | 10 or 40 km | Premises singlemode
800 Gb/s, Intel SPT, 1300 nm multimode (more in FOA NL 9/13) | 32 x 25 Gb/s | 64 special MM fibers, custom 64 fiber expanded beam connector (MXC) | 300 m | Proprietary fiber and connector, 64 fiber cables, prefab only

The 12 fiber MTP connector used on 40 and 100 Gb/s links and prefab cable systems.

Even inside the servers, increasing speed and reducing power has led to the development of optical fiber interconnects. Intel is actively developing board level interconnects for their products, touting all the usual benefits of fiber.
Most data centers will contain a mix of fiber, coax and UTP cabling. The connections to the Internet coming in from the outside world will be on singlemode fiber. Multimode fiber, either OM3 or the newer OM4 high bandwidth fiber, is likely to be used for longer connections within the data center. Coax will be used on very short connections, and Cat 6A may be used for short and medium length links. Another option is becoming available: active optical cables. These are fiber optic cables with transceivers at either end that convert electrical ports to optical fiber, bringing the advantages of fiber to conventional copper ports.
Whichever media are used for interconnects, one thing is certain: there will be lots of cables! Cables run from routers to servers, servers to other servers or switches, and switches to storage devices. The volume of cables involved and the number of routes they can follow make it essential to plan pathways and spaces carefully to prevent chaos in the data center. Cables may be run overhead or under floors. All cable trays must be properly sized for the expected cable loads. Under floor trays are generally wire mesh types, while heavy-duty overhead racks are usually needed, especially when they are expected to carry large quantities of heavy copper cabling. Additional room should be provided for Cat 6A, since those cables cannot be neatly bundled to save space because bundling may cause alien crosstalk between adjacent cables.
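Tray sizing can be sanity-checked with a simple cross-section calculation. The sketch below uses assumed cable diameters and a commonly cited guideline of keeping fill below roughly 50%; real limits come from the tray manufacturer's data and applicable codes:

```python
import math

def tray_fill_fraction(cable_diameters_mm, tray_width_mm, tray_depth_mm):
    """Fraction of the tray cross-section occupied by cables, treating
    each cable as a circle of the given outside diameter."""
    cable_area = sum(math.pi * (d / 2) ** 2 for d in cable_diameters_mm)
    return cable_area / (tray_width_mm * tray_depth_mm)

# Assumed diameters: ~7.5 mm for Cat 6A, ~3 mm for duplex fiber zipcord,
# in a 300 mm x 100 mm tray carrying 200 cables of each type.
copper_fill = tray_fill_fraction([7.5] * 200, 300, 100)
fiber_fill = tray_fill_fraction([3.0] * 200, 300, 100)
print(f"copper: {copper_fill:.0%}, fiber: {fiber_fill:.0%}")
```

The same count of fiber cables occupies a small fraction of the space the copper does, which is the space-saving argument made below.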

Prefab cable system with modular interface

Because a Cat 6A or coax cable carries only one link per cable, multifiber optical fiber cables are big space savers. Prefabricated fiber optic cable systems using multifiber MTP style connectors save not only space but also installation time, since they only require installation and plugging together, with no termination or splicing required. "Armored" fiber optic cables are sometimes used in the under floor trays to prevent cable damage when more cables are placed in the trays.

Armored indoor fiber optic cable


Cable testing is important for all cables. Cat 6A is highly stressed at 10 Gb/s, so all terminations must be carefully made. Even fiber links, especially those with multiple connections, must be tested, since loss budgets at 10 Gb/s and above are quite small and intolerant of bad connections.
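A link loss budget makes the point concrete. The sketch below uses typical planning values (TIA-568 allows up to 0.75 dB per mated connection; 3.5 dB/km is a common maximum for multimode fiber at 850 nm), not measurements from any particular link:

```python
def link_loss_budget(length_km, fiber_db_per_km, n_connections,
                     db_per_connection, n_splices=0, db_per_splice=0.3):
    """Worst-case estimated loss for a fiber link: fiber attenuation plus
    connection and splice losses. All values are planning assumptions."""
    return (length_km * fiber_db_per_km
            + n_connections * db_per_connection
            + n_splices * db_per_splice)

# A 100 m OM3 link at 850 nm with two mated connections in the path:
budget = link_loss_budget(0.1, 3.5, 2, 0.75)
print(round(budget, 2))  # about 1.85 dB - little margin for a bad connector
```

Comparing the measured loss against an estimate like this is how a tested link is judged pass or fail.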
Obviously cable plant documentation and labeling are critically important. In a facility that may have thousands of cables, it's vitally important that each cable end and each port be logically marked to allow moving, tracing, testing and troubleshooting cables.
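One way to make labels logical is to encode the location of each cable end. The field layout below (room, rack, panel, port) is a hypothetical convention for illustration, not a prescribed scheme; the point is that any consistent, documented format lets a cable be traced from either end:

```python
def cable_label(room, rack, panel, port):
    """Build a location-based label for one cable end. The field layout
    is a hypothetical convention; any consistent, documented scheme works."""
    return f"{room}-R{rack:02d}-P{panel:02d}-{port:02d}"

# Label both ends of a link so either end identifies where the cable lands:
a_end = cable_label("DC1", 4, 2, 17)
b_end = cable_label("DC1", 12, 1, 3)
print(f"{a_end} / {b_end}")  # DC1-R04-P02-17 / DC1-R12-P01-03
```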

Powering The Data Center
As noted above, data centers consume vast amounts of power and require large uninterruptible power supplies for reliability. All power requires conditioning for delicate electronic modules. Many data centers run dual power systems with dual power backup to ensure maximum reliability. This means dual UPS setups, dual generators and dual power distribution systems.
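The payoff of dual systems can be sketched with basic reliability arithmetic: if either of two independent power paths can carry the load, the system is down only when both fail at once. The availability figures below are illustrative assumptions:

```python
def parallel_availability(unit_availability, n=2):
    """Availability of n redundant units where any one unit suffices:
    1 - (probability that all n are down). Assumes independent failures."""
    return 1 - (1 - unit_availability) ** n

# Illustrative: a single path at 99.9% vs a dual (redundant) system.
single = 0.999
dual = parallel_availability(single, 2)
print(f"single: {single}, dual: {dual:.6f}")
```

The assumption of independent failures is why the duplication extends all the way through UPS, generators and distribution; a shared single point of failure defeats the arithmetic.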


Battery backup and generator at a data center

Power cables are usually under the floor on separate aisles from data cables.
The actual layout of power and data cables has to be coordinated with the layout of equipment racks because cooling of the equipment has become quite sophisticated. Generally data centers are kept much cooler than regular offices to ensure proper cooling for all the equipment. Racks and electronics are designed for directional airflow to keep hot air away from other electronics and feeding cooling air to the racks. When you consider that the air conditioning will use as much power as the servers, careful design of air conditioning is very important. Cable racks must not impede cooling airflow.
Because of the high power consumption, data centers have become a major focus of research on how to reduce power. Electronics manufacturers are developing lower power systems and using lower power fiber optic data links. Even the location of the data centers can affect the cost of power and design of a data center. The owners of big data centers like Google are well aware of the issues. In the Bay Area near their headquarters, Google uses solar photovoltaic systems to generate 30% of their power. Another major data center is located on the Columbia River so it can utilize lower cost (and lower carbon) hydroelectric power.

Data centers are considered critical facilities and of course involve massive investment. Owners invest millions in preventing intrusion by hackers over the Internet and are also concerned with physically securing the facility. A typical data center will include state-of-the-art personnel entry systems, intrusion alarms and numerous surveillance cameras, often covering not only the actual building but the area in which it is located. Some data center owners even ban cell phones and WiFi from their facilities because of worries about data interference.

Coordination is Key
While data centers tap all the resources from the well-rounded electrical contractor, it is important that the contractor be involved with the customer from the initial concept of the project. A successful data center will involve coordination among the customer, vendors, contractors and often even municipalities and utilities from the beginning of the project. Each party should contribute to the design to ensure that all issues like facility design, communications, power and security are covered properly.



(C)2010-2012, The Fiber Optic Association, Inc.