
A data centre is unlike any other building type. IT equipment runs continuously, generating concentrated heat loads of 5–25 kW per rack. Server hardware is sensitive: an inlet temperature excursion above 35°C can cause thermal throttling, hardware faults, or unplanned shutdown. There is no tolerance for the kind of temperature drift that might be acceptable in an office or warehouse. At the same time, data centres are significant energy consumers — cooling can account for 30–40% of total facility power — and operators are under increasing pressure to reduce Power Usage Effectiveness (PUE).
The BMS is the control layer that reconciles these demands: maintaining tight thermal envelopes, sequencing CRAC units for efficiency, managing free cooling opportunities, integrating leak detection, and generating the data needed for PUE reporting. Alpha Controls delivers BMS solutions for data centres and server rooms across London and the South East. For a deeper look at why a standard commercial BMS configuration is the wrong tool for a data centre environment — and what a purpose-configured system looks like — see our article on BMS for data centres: why standard building automation isn't enough.
ASHRAE TC 9.9 defines the thermal envelopes that apply to IT equipment, and most enterprise-class hardware is rated to class A1 or A2. Class A1 allows inlet temperatures of 15°C to 32°C; class A2 widens this to 10°C to 35°C, with a maximum rate of change of 5°C per hour. Both parameters require continuous monitoring, not periodic spot checks.
Breaching these envelopes has real consequences. Temperatures above the upper limit cause thermal throttling — processors reduce clock speed to protect themselves, at the cost of compute performance. Sustained excursions cause hardware failure and void manufacturer warranties. Low temperatures are less immediately damaging but increase the risk of condensation if humidity is not also controlled. The BMS must alarm before these thresholds are approached, not after they are breached. EN 50600 — the European standard for data centre facilities and infrastructure — defines environmental monitoring requirements under its availability classification framework, including mandatory temperature and humidity monitoring at equipment inlet level, not just at room level.
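The alarm-before-breach principle can be sketched in a few lines. This is an illustrative example, not a product implementation: the 2°C pre-alarm margin is an assumed design choice, and the rate check simply compares the highest and lowest readings in a rolling one-hour window against the article's stated 5°C/hr A2 limit.

```python
from collections import deque

# ASHRAE A2 allowable inlet envelope; the pre-alarm margin is a
# hypothetical design choice so the BMS alarms before the limit.
A2_LOW, A2_HIGH = 10.0, 35.0
PRE_ALARM_MARGIN = 2.0
MAX_RATE_PER_HOUR = 5.0  # deg C per hour, per the A2 envelope

def check_inlet(temp_c: float, history: deque) -> list[str]:
    """Evaluate one rack-inlet reading; return any alarm strings.

    `history` holds the readings from the last hour (oldest first);
    the caller appends samples at a fixed interval."""
    alarms = []
    if temp_c >= A2_HIGH - PRE_ALARM_MARGIN:
        alarms.append(f"HIGH inlet pre-alarm: {temp_c:.1f} C")
    if temp_c <= A2_LOW + PRE_ALARM_MARGIN:
        alarms.append(f"LOW inlet pre-alarm: {temp_c:.1f} C")
    history.append(temp_c)
    # Crude rate-of-change check over the rolling window.
    if len(history) > 1 and max(history) - min(history) > MAX_RATE_PER_HOUR:
        alarms.append("Rate-of-change above 5 C/hr over the last hour")
    return alarms
```

In a real deployment this logic runs per inlet sensor, with the window length matched to the actual sampling interval.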
Computer Room Air Conditioning (CRAC) units and Computer Room Air Handlers (CRAH) are the primary cooling plant in most data centres. Modern units expose their control interfaces via Modbus RTU or BACnet MS/TP, allowing the BMS to read and write key parameters such as supply and return air temperatures, setpoints, fan speeds, and fault status.
With this integration, the BMS can coordinate multiple CRAC units as a system rather than each unit acting independently. This eliminates the "fighting" that occurs when independently controlled units simultaneously heat and cool different zones of the same space — a common source of both hot spots and energy waste in server rooms without BMS integration.
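One way to picture the coordination is a single shared demand signal. In this sketch — an assumed control scheme, with illustrative setpoint and gain values — the BMS drives every unit from the worst-case return-air reading instead of letting each unit chase its own sensor:

```python
# Coordinated CRAC control sketch: one cooling command for all units,
# derived from the hottest return-air temperature, so no unit fights
# its neighbours. Setpoint and gain are hypothetical illustration values.

def shared_cooling_demand(return_temps_c: list[float],
                          setpoint_c: float = 24.0,
                          gain: float = 25.0) -> float:
    """Return one 0-100 % cooling command for every unit."""
    error = max(return_temps_c) - setpoint_c  # worst-case zone drives all
    return max(0.0, min(100.0, error * gain))
```

With independent control, the unit reading 23°C here might heat while its neighbour at 25°C cools; the shared signal removes that conflict.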
Hot aisle/cold aisle containment is now standard practice in well-designed data halls. Server racks are arranged so that equipment exhausts hot air into contained hot aisles, while CRAC units supply cold air into contained cold aisles. The BMS role in contained environments is to maintain the correct pressure differential across the containment barrier — typically a slight positive pressure in the cold aisle relative to the hot aisle, so that any leakage flows from cold to hot and hot exhaust cannot recirculate to equipment inlets.
The BMS sequences CRAC fan speeds using variable frequency drives to maintain these differentials, adjusting dynamically as IT load (and therefore heat output) changes throughout the day.
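A minimal sketch of that loop, assuming a PI controller holding a differential-pressure setpoint by trimming fan speed — the 5 Pa target and the gains are illustrative, not design values:

```python
# PI loop sketch: hold cold-aisle-to-hot-aisle differential pressure by
# adjusting the CRAC fan VFD command. All tuning values are hypothetical.

class PressurePI:
    def __init__(self, setpoint_pa: float = 5.0,
                 kp: float = 4.0, ki: float = 0.5):
        self.setpoint = setpoint_pa
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def update(self, measured_pa: float, dt_s: float = 1.0) -> float:
        """Return a fan-speed command in %, clamped to 20-100 %
        (the 20 % floor keeps minimum airflow through the units)."""
        error = self.setpoint - measured_pa
        self.integral += error * dt_s
        out = 50.0 + self.kp * error + self.ki * self.integral
        return max(20.0, min(100.0, out))
```

As IT load rises and more air is drawn through the racks, the measured differential falls, the error grows, and the loop ramps the fans up to compensate.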
Power Usage Effectiveness is the primary energy efficiency metric for data centres: PUE = Total Facility Power / IT Equipment Power. A PUE of 1.0 is theoretical perfection (all power goes to IT). Legacy facilities often run at 2.0 or above. A modern well-managed facility should target below 1.4; hyperscale facilities achieve 1.1–1.2.
Achieving and demonstrating target PUE requires a sub-metering strategy integrated into the BMS. For a detailed guide to how energy meters connect to a BMS and what granular consumption data enables, see our article on energy metering and sub-metering.
The sub-metering strategy for PUE monitoring covers, at minimum, the utility intake (total facility power), the UPS output (IT equipment power), and the major mechanical loads such as chillers, CRAC units, and pumps.
The BMS calculates live PUE from these inputs and trends it over time. This data drives operational decisions: when to enable free cooling, how many CRAC units to run, and whether capital investment in more efficient plant is justified.
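The live calculation itself is simple once the sub-meter points exist. A sketch, with hypothetical meter groupings (a real system maps these to specific metered circuits):

```python
# Live PUE from BMS sub-meter readings. The parameter names are
# illustrative groupings, not a fixed metering schema.

def live_pue(it_kw: float, cooling_kw: float, ups_losses_kw: float,
             lighting_and_other_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    total_kw = it_kw + cooling_kw + ups_losses_kw + lighting_and_other_kw
    return total_kw / it_kw
```

For example, a 200 kW IT load with 60 kW of cooling, 12 kW of UPS losses, and 8 kW of lighting and ancillaries gives a PUE of 1.4 — right at the target for a modern facility.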
Water in a data centre is a catastrophic risk. Cooling systems use chilled water, condenser water, and sometimes direct expansion refrigerants — all under pressure in the same space as sensitive IT equipment. A leak under a raised floor may not be visible until significant damage has occurred.
BMS-integrated leak detection addresses this through leak-sensing cable routed along pipework and beneath raised floors, zoned so that an alarm identifies where water is present rather than merely that a leak exists, with automatic escalation to the operations team.
Alpha Controls provides leak detection installation as a standalone service and as part of integrated data centre BMS projects.
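The value of zoning is that an alarm names a location. A toy sketch with hypothetical zone names shows the idea:

```python
# Zoned leak-detection alarming sketch: each sensing-cable zone maps to
# a physical location, so the alarm says where the water is. Zone names
# are illustrative.

LEAK_ZONES = {
    1: "Raised floor, chilled water riser, Row A",
    2: "CRAC-2 condensate tray",
    3: "Perimeter pipework, north wall",
}

def leak_alarms(active_zones: set[int]) -> list[str]:
    """Return one located alarm string per active zone."""
    return [f"LEAK: {LEAK_ZONES[z]}" for z in sorted(active_zones)
            if z in LEAK_ZONES]
```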
Mechanical refrigeration is the largest single energy cost in most data centres. When the outside air temperature is low enough — typically below 12–15°C wet bulb for UK climate — the BMS can switch from mechanical cooling to free cooling (economiser mode), circulating cooled water or air without running compressors.
The BMS manages this transition automatically — monitoring outside air conditions against the changeover threshold, sequencing valves and dampers for the mode change, and reverting to mechanical cooling when conditions rise again.
In the UK climate, free cooling is available for a significant portion of the year. A properly configured free cooling strategy can reduce annual cooling energy consumption by 30–50%.
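A key detail in the changeover logic is hysteresis: enabling and disabling free cooling at slightly different temperatures so the plant does not cycle around the threshold. A sketch, with an assumed 12°C enable point and 14°C disable point:

```python
# Economiser changeover sketch with hysteresis. The 12 C enable and
# 14 C disable wet-bulb thresholds are illustrative values within the
# 12-15 C range typical for the UK climate.

FREE_COOLING_ENABLE_C = 12.0
FREE_COOLING_DISABLE_C = 14.0

def free_cooling_mode(wet_bulb_c: float, currently_free: bool) -> bool:
    """Return True when the plant should run in free-cooling mode."""
    if currently_free:
        # Stay in free cooling until wet-bulb climbs past the upper band.
        return wet_bulb_c < FREE_COOLING_DISABLE_C
    # Only enter free cooling once comfortably below the lower band.
    return wet_bulb_c < FREE_COOLING_ENABLE_C
```

The 2°C dead band means a day hovering around 13°C wet bulb holds whichever mode the plant is already in, rather than toggling compressors on and off.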
Critical data centres are designed with N+1 cooling redundancy — one more CRAC unit than the minimum required to handle the IT load. The BMS must manage this redundancy actively: rotating duty between units to equalise run hours, detecting unit faults, and starting the standby unit automatically when a duty unit fails.
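A simple duty-selection rule illustrates active redundancy management. This is a sketch under assumptions — unit IDs are hypothetical, and real sequencers weigh more than run hours — but it captures rotation and automatic standby promotion:

```python
# N+1 duty management sketch: run the N healthy units with the fewest
# run hours, so duty rotates over time and a faulted unit's standby is
# promoted automatically. Selection purely by run hours is a
# simplification for illustration.

def select_duty_units(run_hours: dict[str, float],
                      faulted: set[str], n_required: int) -> list[str]:
    """Pick the n_required healthy units with the fewest run hours."""
    healthy = [u for u in run_hours if u not in faulted]
    if len(healthy) < n_required:
        raise RuntimeError("Insufficient healthy units to meet N")
    return sorted(healthy, key=lambda u: run_hours[u])[:n_required]
```

If the lowest-hours unit faults, the next call simply excludes it and the former standby joins the duty set — no operator action required.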
Data centre operations teams require continuous visibility of their environment. ISO/IEC 27001 certification — increasingly required for colocation operators and cloud service providers — includes physical environmental controls as a mandatory control domain, with BMS monitoring records forming part of the evidence base. Any BMS that is network-connected or remotely accessible must also be evaluated for cybersecurity risk — for a guide to BMS network threats and how to address them, see our article on BMS cybersecurity. The BMS provides operational visibility through graphical dashboards, historical trend logging, and alarm records that also serve as compliance evidence.
Data centre BMS projects require a contractor with specific experience in critical infrastructure — not just general commercial building controls. Alpha Controls brings expertise in CRAC unit protocol integration, precision cooling control, leak detection systems, and the alarm management strategies that critical facilities demand.
We work on colocation data centres, enterprise server rooms, and edge computing facilities across London and the South East. Our networking services complement BMS work in facilities where structured cabling and BMS infrastructure are being installed together.
Contact Alpha Controls to discuss your data centre BMS project, or explore our BMS services and leak detection pages for more information.
Our team of building automation specialists is ready to help you optimise your building's performance and efficiency.
Get in Touch