In this series of blogs so far, we have discussed the extremes of industrial power levels: from kW motor drives down to µW energy harvesting for IIoT node power. The industrial environment is certainly varied, with hundreds if not thousands of amps flowing in power lines, and voltages up to 690VAC inside buildings. Surges and transients are routine as loads switch and contactors operate, radiated and conducted EMI levels are high, and temperatures and vibration can be intense. In the middle of this maelstrom, CPUs, FPGAs and other data processing circuitry keep control of the whole operation with the expected high reliability, maintaining productivity and minimising downtime.
To achieve this, the local environment of supply rail stability, electrical noise and temperature must be precisely controlled, even though the ICs themselves make the problem worse, drawing tens of amps, sometimes from sub-1V rails, with large load steps.
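To see why tens-of-amps load steps on a sub-1V rail make supply stability so demanding, it helps to work out the power delivery network (PDN) impedance the rail can tolerate. The sketch below is illustrative only; the rail voltage, tolerance and step size are assumed figures, not values from this article.

```python
# Illustrative calculation: the maximum power-delivery-network (PDN)
# impedance a low-voltage rail can tolerate for a given load step.
# All numbers below are assumed for illustration.

def target_pdn_impedance(v_rail, tolerance_pct, delta_i):
    """Max PDN impedance (ohms) so a load step of delta_i amps keeps
    the rail within +/- tolerance_pct of its nominal voltage."""
    allowed_droop = v_rail * tolerance_pct / 100.0  # volts of droop allowed
    return allowed_droop / delta_i                  # ohm's law: Z = dV / dI

# Example: a 0.9 V core rail with +/-3 % tolerance and a 20 A load step
z = target_pdn_impedance(0.9, 3.0, 20.0)
print(f"Target PDN impedance: {z * 1000:.2f} mOhm")  # 1.35 mOhm
```

A budget of just over a milliohm, valid from DC up to the frequencies the load step contains, is why such rails need a local converter plus carefully placed decoupling rather than a long run from a central supply.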
When process control involved little more than switching motors on and off, and digital logic was TTL powered from 5V, it was feasible to have a centralised power architecture (CPA): a single cabinet AC-DC converter supplied the low-voltage power rails for the cards in a control cabinet rack. Load currents were low, voltage drops manageable, and the noise margin of the logic tolerated the pick-up on the long leads. However, as processor speeds and power draw increased, voltage rails dropped to 3.3V and below. The CPA scheme became unworkable and a new distributed power architecture (DPA) was adopted.
Here, a higher voltage, usually 24V, is routed around a cabinet, and board-mounted DC-DC converters step the voltage down to the end-load requirements.
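The case for distributing a higher voltage comes down to cable loss: for the same delivered power, raising the bus voltage lowers the bus current, and the I²R loss in the wiring falls with the square of that current. A quick sketch, using assumed figures for load power and cable resistance:

```python
# Illustrative comparison: cable I^2*R loss when distributing power
# at 3.3 V directly versus at 24 V for later local down-conversion.
# The load power and cable resistance are assumed for illustration.

def cable_loss(power_w, bus_v, cable_r_ohm):
    """I^2*R loss in the distribution cable for a given delivered power."""
    i = power_w / bus_v           # bus current needed for this power
    return i * i * cable_r_ohm    # resistive loss in the cable

P = 100.0   # 100 W of load, an assumed figure
R = 0.05    # 50 mOhm round-trip cable resistance, assumed

print(f"3.3 V bus: {cable_loss(P, 3.3, R):.1f} W lost in cabling")   # ~45.9 W
print(f"24 V bus:  {cable_loss(P, 24.0, R):.2f} W lost in cabling")  # ~0.87 W
```

With these assumed numbers, distributing at 3.3V would waste nearly half the delivered power in the cabling, while a 24V bus loses under one watt, which is why the step-down is done by a DC-DC converter mounted next to the load.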
The converters, typically isolated, keep high ground loop currents local to the load, minimising interference. Although more expensive, the approach had advantages: a 24V supply could feature battery backup, and cards duplicated in a redundant configuration could allow 'hot swapping' if any failed.