Structure of a Tier III Data Center
Beeline, earlier Bee Line GSM, is a Russian telecommunications brand of OJSC VimpelCom, Russia's third-largest wireless and second-largest telecommunications operator, headquartered in Moscow. Since 2009, OJSC VimpelCom has been a subsidiary of VimpelCom Ltd., which is based in Amsterdam. VimpelCom's main competitors in Russia are Mobile TeleSystems and MegaFon.
There used to be a field where the data center now stands. We dug a huge 100-meter foundation pit; at that stage, the data center was just a concrete pad. Using steel structures, we built a hexagonal building that houses the six data center modules (highlighted in green), the command center that monitors the backbone network across the country, and the office.
Warning! Heavy traffic and geek porn ahead: 91 images of the newly launched first module. Some finishing work is still going on there, but the major construction is over.
Basic Parameters of the Data Center
Territory
- 7 ha;
- 30,000 sq m of building area.
Module Approach
- 6 independent data center modules (only the first one has been built so far).
Fault Tolerance
- Tier III Certification according to the Uptime Institute classification.
Uninterrupted Power Supply
- 2 independent power inputs of 10 MW;
- Dynamic Diesel UPS: 2500 kVA;
- 2N redundancy;
- 30,000 liters of fuel storage: enough for 12 hours of operation of all 6 modules at full load (a quick sanity check follows this list).
Refrigeration Supply
- N+2;
- Water cooling on chiller plants;
- Natural Free Cooling;
- Adiabatic cooling system;
- Uninterrupted power supply of the cooling system equipment;
- Annual average PUE of no more than 1.3.
Firefighting
- Novec Gas, harmless to humans;
- Early Warning Fire Detection System (VESDA).
Security
- 24x7x365 security;
- Access Control, CCTV.
Environmental Friendliness
- Uninterrupted power supply system without chemical batteries;
- Natural Free Cooling up to 90% of the year;
- Eco-friendly interior finishing materials in the office.
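To put the fuel figure in perspective, here is a quick back-of-the-envelope check; the specific fuel consumption of the diesels is an assumed typical value, not a number from our spec:

```python
# Sanity check of the fuel figures above: 30,000 liters for 12 hours of all
# six modules at full load. The specific consumption of a large diesel genset
# (~0.22 l per kWh) is an assumed typical value, not a figure from the article.

TOTAL_LOAD_KW = 10_000   # full load of the site, per the 10 MW inputs above
FUEL_PER_KWH_L = 0.22    # assumed specific fuel consumption of the diesels
FUEL_STORAGE_L = 30_000  # on-site fuel storage, from the list above

burn_rate_l_per_h = TOTAL_LOAD_KW * FUEL_PER_KWH_L
runtime_h = FUEL_STORAGE_L / burn_rate_l_per_h

print(f"Burn rate at full load: {burn_rate_l_per_h:.0f} l/h")
print(f"Estimated runtime:      {runtime_h:.1f} h")  # ~13-14 h, close to the stated 12
```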
In reality, the building is almost as transparent as in the rendering, because a lot of polarized glass is used.
The main building that houses the data center.
For comparison: this is a small containerized data center for seven racks, built in three months. This mobile data center and the neighboring blue container with a diesel generator ran during commissioning and hosted a number of basic services until we brought up the first turbine hall.
The Containerized Data Center
Here’s the plan of the «space cruiser»:
The hexagon in the middle is the main building, which houses the turbine hall as well as the office and service areas. The small containerized data center below it will be removed soon. The two blocks that look like the cruiser's "engines" are independent power modules. Let's start there.
This is how the building with the power modules looks from the outside:
The power module that houses the dynamic diesel UPS
Diesel fuel is stored in the blue container. We use winter diesel in winter and summer diesel in summer. There is no need to pump stale fuel out, since it gets used up during the monthly operational checks.
The 30,000-liter fuel storage.
But let’s get inside the power module. A wonderful device meets us here.
A motor-generator of the dynamic diesel UPS system.
It's actually a whole complex of devices. Here is a video from the distributor:
In short, the power we get from the two independent substations is backed up by the diesel generator system. A kinetic energy accumulator sits in the middle of the circuit, between the sources and the load.
Diagram of the electric power supply system of the complex.
The data center will consume up to 10 MW. For obvious reasons, it is much more practical to draw power from the grid. But in case something happens, we need either a huge bank of UPS batteries to cover the time needed to warm up the diesels, or a smaller battery bank plus a basin of ice water for cooling, or a constantly running power plant of our own. One of the best solutions here is the dynamic diesel UPS: a kinetic accumulator plus a quick-start diesel in standby mode. In practice everything is more complicated, but the general principle of operation is like this.
The control panel of the dynamic diesel UPS.
It should be noted that the power we get from the grid is not of the highest quality. When it degrades or disappears, we draw on the energy of the kinetic accumulator (a seven-ton flywheel spun to high speed). The energy of the flywheel's residual rotation is enough to start the diesel even on a second attempt. Another advantage is that this system handles most power disturbances: roughly speaking, the dynamic diesel UPS in the middle of the circuit can smooth about 80% of voltage peaks and sags without engaging the motor-generator. And since the kinetic accumulator is permanently in the circuit, there is no switchover delay when falling back to backup power.
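To get a feel for how long such a flywheel can carry the load on its own, here is a rough sketch in Python. Only the seven-ton mass comes from the text above; the radius, rotational speed, usable energy share, and the 2,000 kW load are assumed values for illustration:

```python
# Back-of-envelope estimate of how long a flywheel can bridge a power gap.
# The 7-ton mass comes from the article; the radius, speed, load and
# usable-energy fraction are assumptions for illustration only.

import math

MASS_KG = 7000          # flywheel mass (from the article)
RADIUS_M = 0.8          # assumed effective radius
RPM = 3000              # assumed rotational speed
LOAD_KW = 2000          # assumed load: 2,500 kVA at power factor 0.8
USABLE_FRACTION = 0.5   # assumed share of the energy usable before
                        # voltage/frequency can no longer be held

# Moment of inertia of a solid cylinder: I = 1/2 * m * r^2
inertia = 0.5 * MASS_KG * RADIUS_M ** 2
omega = RPM * 2 * math.pi / 60          # angular speed, rad/s
energy_j = 0.5 * inertia * omega ** 2   # stored kinetic energy, joules

ride_through_s = energy_j * USABLE_FRACTION / (LOAD_KW * 1000)
print(f"Stored energy: {energy_j / 1e6:.1f} MJ")
print(f"Ride-through at {LOAD_KW} kW: ~{ride_through_s:.0f} s")
```

Even under these rough assumptions, the flywheel buys tens of seconds, which is plenty of time for a quick-start diesel, even on a second attempt.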
In addition to conventional filtration, the fuel purification system includes an impurity separator that cleans the fuel right before it is fed to the fuel pump.
Separator-filter of the fuel supply system
Cylinder heads
Solenoid operated valve of the fuel supply system
One of the most important tasks is to ensure energy efficiency at low consumption.
The pipe leads to the second floor, which houses the silencing and exhaust gas purification systems, including 4.5-meter, two-ton silencers.
To get a sense of their real size, here are silencers at the dynamic diesel UPS factory in Germany.
It’s a fuel tank.
The bulk of the diesel fuel is stored in the blue building outside. This tank holds the reserve needed for an hour or two of operation (depending on load). The tanks are interconnected, so we can keep delivering fuel indefinitely: the main storage is filled first, then this tank, and finally the diesel generator set itself. The second power module is, of course, identical and has fuel storage of its own.
Exhaust fan of the ventilation system.
Pipelines of the diesel engine cooling system. Coolant circulates through the pipes and passes through radiators installed on the second floor.
To reduce losses, the flywheel rotates in gaseous helium. In this environment, the cost of maintaining rotation is minimal even at the highest speed: it's enough to spin the flywheel up once, and keeping it spinning consumes very little energy. Energy conversion efficiency is over 99%. When necessary, the flywheel returns the stored energy to the system.
Pressure reducer of the helium supply to the kinetic accumulator.
The emergency stop valve of the diesel engine, triggered by the "fire" signal or the red emergency shutdown button. It is used only as a last resort and simply cuts off the engine's air supply.
Fires are suppressed with Novec 1230 gas, which is not dangerous to humans at the applied concentration. The firefighting system is smart: if a fire starts on the diesel, it first gives us a chance to deal with it locally. This increases the availability of the data center.
Starter batteries
Power cables from the generator to the load (the data center)
Fire alarm siren. As soon as it goes off, you have to get out of the building.
The control cabinet of the ventilation system.
A circuit breaker of the distribution board.
Here is some foolproofing. Critical panels open only with a key. To press anything, you first have to remove the plastic shutter by turning it 180 degrees, and only then press the button firmly. These safeguards exist so that no one does anything without fully understanding what is going on.
The smoke exhaust system button is located at the entrance to the room.
This is the unit to which an external test load is connected for operational checks of the dynamic diesel UPS:
The load is connected with a special cable that is remarkably flexible for its diameter.
Here's the load itself: a semi-trailer truck. The trailer contains something like a huge air heater that dissipates the power drawn from the power module into the air.
Not that winter became warmer, but we did what we could.
The semi-trailer truck.
Here are two interesting objects: lightning rods, passive and active. The tall mast on the power module is the passive one, a metal rod that carries the discharge deep into the ground. On the roof of the data center you can see an active unit of similar purpose; it achieves the same effect with a much smaller structure.
The passive and the active lightning rods.
The second power module mirrors the first. The only thing that distinguishes its surroundings is this additional facility: a separate diesel generator for the office. The office's uptime requirements are not as strict, so there is no need to spend the capacity of the dynamic diesel UPS on it.
The diesel generator for the office.
Now, let’s go down to the subway line leading to the Kremlin.
The cable collector
Of course, it's just a cable collector, but if you turn off the lights the right way, you could initiate newcomers into the secrets of the USSR.
We can also do it like this:
Some cables here are as thick as a strong engineer's arm, or a pretty girl's leg.
We had no problems with the cables on this site, but they should definitely be checked at acceptance: cables can arrive with cuts, damaged insulation, and more.
The "glamorous" firestop foam. During installation it expands and tightly seals everything it can reach, preventing fire and smoke from spreading through the openings, which in our case carry the cables.
So, we have power. But the data center still needs cooling and Internet. And beer, of course, but there's a brewery just outside the window. The backhaul is fine: we have two backbone rooms backing each other up, four backbone fiber inputs coming into the data center, and provision for installing 100 Gbit/s transponders.
Let’s take a look at the cooling process.
Here are some main points:
Scheme of the Natural Free Cooling system
Free cooling works like this. Outside air blows into the "face" (the mixing chambers) of the data center. The cold air enters the main duct, gives its cold to the inner circuit, then moves on through the building and exits outside. Through the heat exchanger, the hot air of the turbine hall gives up its heat to the hangar that houses the data center modules. The hot updrafts rise out through openings in the roof, or are forced back through the mixing chamber. Simple, right?
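For a sense of how much air has to move through these ducts, here is a rough estimate; the module heat load and the temperature rise across the racks are assumed figures, not our design values:

```python
# Rough estimate of the airflow needed to carry away the heat of one module.
# The module load and the cold/hot aisle temperature difference are assumed
# figures for illustration; they are not from the article.

AIR_DENSITY = 1.2        # kg/m^3 at ~20 degrees C
AIR_CP = 1005            # J/(kg*K), specific heat of air
MODULE_LOAD_KW = 1600    # assumed IT load of one module
DELTA_T = 12             # assumed temperature rise across the racks, K

# Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)
mass_flow = MODULE_LOAD_KW * 1000 / (AIR_CP * DELTA_T)   # kg/s
volume_flow = mass_flow / AIR_DENSITY                    # m^3/s

print(f"Mass flow:   {mass_flow:.0f} kg/s")
print(f"Volume flow: {volume_flow:.0f} m^3/s (~{volume_flow * 3600:.0f} m^3/h)")
```

Hundreds of thousands of cubic meters per hour, which is why the main duct is the size of a corridor.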
Layout of the main air duct
The first thing we meet is a "spider". It's a chiller; we call it a spider because of its characteristic shape.
The Chiller
Why do we need a chiller in a data center with free cooling? Because once the outside temperature rises above 75 °F (about 24 °C), the turbine halls can no longer be cooled efficiently enough. As a reminder, we chose Yaroslavl because summers here are cooler than in Moscow, which lets us save on cooling.
Nevertheless, the temperature stays above that boundary for up to a month a year. During that time we use the second cooling system, based on chillers that cool the water in a vast network of pipelines. Simply put, they are huge refrigerators that chill water. They need far more power, but there's nothing we can do about it. Chiller redundancy is N+2, which for the first turbine hall means three units.
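Here is a rough illustration of why those chiller weeks are so much more expensive than free cooling; the module heat load, the chiller COP, and the fan power are assumed round numbers, not our design figures:

```python
# Rough comparison of the electrical power needed to remove one module's heat
# with chillers versus plain free cooling. The module load, chiller COP and
# fan power are assumptions for illustration, not figures from the article.

MODULE_HEAT_KW = 1600      # assumed heat load of one module
CHILLER_COP = 3.5          # assumed coefficient of performance of the chillers
FAN_POWER_KW = 60          # assumed fan power for moving the cooling air

chiller_power_kw = MODULE_HEAT_KW / CHILLER_COP + FAN_POWER_KW
freecooling_power_kw = FAN_POWER_KW

print(f"Chiller mode:      ~{chiller_power_kw:.0f} kW of electricity")
print(f"Free-cooling mode: ~{freecooling_power_kw:.0f} kW of electricity")
```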
Do you recognize these characters?
Expansion tanks
A total of 180 tons of water in the circuit.
Main collector
Drain cock
That's another spider, but from the other side (the control panel side).
Balancing valve
If emergency water drainage is ever needed, the water flows into special pits.
Pumps circulate water in the circuit; the second pump is redundant.
Refrigerant lines lead up to the roof. The bend is a trap that acts as a turbulator, entraining oil droplets so they are carried efficiently up the elevated discharge line.
We'll get to the turbine hall soon; it's right behind this wall. But there is an important point first. Ambient air alone is cold enough to cool the turbine halls with the necessary margin for only 8 months a year, yet we run the chillers for just two to four weeks. What about the remaining three and a half months or so? Take a look at this device.
MNFC — a heat exchanger.
This is the heat exchanger chamber. The cold aisle of the turbine hall starts here and the hot one ends here: this is where hot air is made cold again, the key point of the whole cooling system. The heat exchanger can work in ambient-air cooling mode as well as in water cooling mode.
The basic mode is as follows: a large amount of outside air is passed through the heat exchanger. If the outside air is too hot, we close the air intake and start cooling the exchanger chamber with chilled water from the chillers, spending electricity on the cooling process.
The heat is exchanged behind this grating.
But when the air is warmer by just 3-4°C (up to 24°C), which is the case during those "missing" months, we apply a trick from the school physics course: we saturate the air with moisture (run it through a wall of water mist behind this unit), and the adiabatic evaporation gives us the required temperature drop. This technique consumes far less energy than the chillers and is widely used together with free cooling.
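A quick check of how much the mist wall can actually buy us; the wet-bulb temperature is estimated with Stull's (2011) approximation, and the humidity and misting effectiveness are assumed values rather than our measurements:

```python
# How much can misting cool the intake air? The wet-bulb temperature is
# estimated with Stull's (2011) approximation; the 24 C / 45 % RH conditions
# and the 85 % misting effectiveness are assumed values for illustration.

import math

def wet_bulb_stull(t_c: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature (Stull, 2011) for typical outdoor conditions."""
    rh = rh_percent
    return (t_c * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t_c + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

T_OUT = 24.0          # outside dry-bulb temperature, C (upper free-cooling limit)
RH = 45.0             # assumed relative humidity, %
EFFECTIVENESS = 0.85  # assumed saturation effectiveness of the mist wall

t_wb = wet_bulb_stull(T_OUT, RH)
t_supply = T_OUT - EFFECTIVENESS * (T_OUT - t_wb)

print(f"Wet-bulb temperature:    {t_wb:.1f} C")
print(f"Air after the mist wall: {t_supply:.1f} C "
      f"(a drop of {T_OUT - t_supply:.1f} C)")
```

Even at moderate humidity, the drop comfortably exceeds the 3-4°C we need.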
Here, running right through the building, is the main exhaust duct. It is essentially one very wide hot aisle.
Roof lights. When necessary, the flaps open and hot air flows out.
Okay, okay, let's go to the turbine hall. For now, halls two through six look like this; their modules are not assembled yet.
Here's the first module, launched and now being filled. When we visited, some of the servers were already running, but the hall was still half-empty.
Module of the data center.
Since the racks are standard and the hardware is homogeneous (it's our own data center, and we know what will be installed here), we could install all the connection points in advance.
Power sockets for connecting racks.
Another advantage of standard racks is that the dimensions of the hot and cold aisles are known in advance. Here, cold air blows up from the false floor. These panels are covered with polyethylene because the aisle is not in use yet, and we don't want to blow the engineers away. By the way, cold aisles like these are the last reminder of why an admin wears a sweater: it's very cold and windy in there.
The cold air aisle.
This is our teleportation device, the prototype.
Jokes aside, this door closes off the cold aisle so that cooling stays efficient. And this is how a part of the hall with racks looks.
Racks
As for the racks, they hold the following equipment:
- Hi-End and Mid-Range Oracle servers;
- Storage systems built on HDS, HP, IBM, and Brocade hardware;
- A network core based on Avaya equipment;
- HP servers.
Power distribution to the racks: busbars with tap-off boxes.
Our structured cabling uses top-of-rack Panduit hardware.
Top-of-rack cartridges.
The cross-connect cabling is not perfect yet, but it soon will be.
Neatness in assembly is something of an obsession of ours. We know from experience that an unlabeled, messily routed cable can cause a service interruption.
Switching aggregation of network switches
Active network equipment
10% of the hall is designed for high-density racks of up to 20 kW each. The most powerful racks are located in the center of the hall, which has a reserve of cold air due to the peculiarities of the high-speed airflow.
These are storage systems; we haven't connected them yet. Customers' balance data will be stored here.
Tape Library
And these are fan heaters. We used them to test the hall's ability to remove heat: we simply ran them at full blast and watched what happened.
The tap to the in-line busbar.
Novec again. The reserve is enough for two discharges of the firefighting system.
This is how the power cables we saw in the basement enter the turbine hall.
Now let's look at the office of the admins, and not only theirs.
The main entrance to the building of the data center.
Walking around the data center, we see temporary modular units: the dining room and the «office». This is where people worked while the office rooms were not ready. Actually, it's better than working in a shipping container.
Temporary office and dining room.
At the entrance, there's a layout of the data center:
The cooling scheme is illuminated:
On the roof, we can see condensing units of the chillers:
We can also illuminate the power and main inputs:
Before we go inside, I'll show you the helmets in the locker room. One of the builders turned out to be an artist and painted his helmet, and not only his own.
Open space
One of the walls is a school-style blackboard.
Room for administrators.
Coffee point
WC
Let's go outside! We've been everywhere except the roof, and that's where the chillers are. Here they are:
The roof is very big and striped. From a satellite, our data center looks like our logo.
We're getting closer to the chiller condensing units. They're just like those of household air conditioners, only a bit bigger.
These “windows” or roof lights are places where the hot air naturally leaves the building.
Roof lights
Each roof light has its own weather station that sends data to the automation system.
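Just to illustrate how such data might drive the flaps, here is a sketch of possible control logic; the thresholds and reading fields are assumptions, since all we say above is that the weather stations feed the automation system:

```python
# A sketch of how roof-light flaps might be driven by per-light weather data.
# The article only says each roof light has a weather station feeding the
# automation system; the thresholds and WeatherReading fields are illustrative.

from dataclasses import dataclass

@dataclass
class WeatherReading:
    wind_speed_ms: float   # wind speed at the roof light, m/s
    precipitation: bool    # rain or snow detected
    exhaust_temp_c: float  # hot aisle / exhaust duct temperature, C

def flap_should_open(r: WeatherReading) -> bool:
    MAX_WIND_MS = 15.0      # assumed: keep flaps shut in strong wind
    MIN_EXHAUST_C = 30.0    # assumed: only vent when exhaust air is hot enough
    if r.precipitation or r.wind_speed_ms > MAX_WIND_MS:
        return False        # rely on forced extraction instead
    return r.exhaust_temp_c >= MIN_EXHAUST_C

print(flap_should_open(WeatherReading(4.0, False, 36.0)))   # True
print(flap_should_open(WeatherReading(20.0, False, 36.0)))  # False
```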
Here's the final destination of the refrigerant lines we saw before.
There are plenty of pipes here: each condensing unit has two refrigerant circuits, and there are two units per chiller (so each chiller has four circuits in total).
Elements of general ventilation systems of the building.
This is the mast of the lightning protection:
That is what our data center looks like. So far, we have launched the first module of the flagship data center, designed for 236 racks, and we are moving some of our production ("combat") systems there. By the way, the Federal Monitoring Center has already moved in, and the girls from the United Service Center will join soon as well. So we're quite inspired.