There are two ways of measuring electricity usage that may affect your utility delivery charges: consumption and demand. Consumption refers to how much electricity you used over the meter-read period, and it’s measured in kilowatt-hours (kWh). In other words, how much electricity did the power plants have to put into the grid for you?
Demand is different. It is the maximum rate at which you draw power from the grid, measured in kilowatts (kW), and it’s usually calculated as your average draw during the single heaviest-usage 15-minute interval of the meter-read period. In other words, how ready does the grid have to be to account for your maximum need? And how much does your maximum need add to the cost of maintaining the system and its reliability?
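To make the distinction concrete, here is a small sketch of how the two numbers could be computed from the same set of meter readings. The interval values are invented for illustration; real meters record many more intervals per billing period.

```python
# Hypothetical energy used in each 15-minute interval, in kWh.
interval_kwh = [0.3, 0.4, 1.2, 0.5, 0.35, 0.3]

# Consumption: total energy over the meter-read period, in kWh.
consumption_kwh = sum(interval_kwh)

# Demand: the heaviest 15-minute interval, converted to an average
# rate of draw in kW (a 15-minute interval is 0.25 hours).
demand_kw = max(interval_kwh) / 0.25

print(f"Consumption: {consumption_kwh:.2f} kWh")  # total energy delivered
print(f"Demand: {demand_kw:.2f} kW")              # peak rate the grid must support
```

Note how one high interval (1.2 kWh) dominates the demand figure even though it barely moves the consumption total.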
In theory, an energy company could bill a customer two charges: one for consumption and one for demand. But as a residential customer, your demand tends to be consistent and predictable and puts only minimal strain on the grid, so your bill doesn’t include a separate demand charge.
Commercial electricity customers can exceed certain levels of demand, so they get a separate category of service from the utility. The two main categories are secondary service and primary service. Whether they are charged separately for demand depends on how high their demand runs.
Demand charges are determined by how high the customer’s peak demand is compared to their baseline usage. An example is a sports arena: consider how high the demand spikes when the lights come on for a night game. More infrastructure is needed to serve such a high burst of usage, and that infrastructure must be maintained even if the arena does not use enough electricity overall to offset the costs incurred. Short bursts of very high usage mean high demand and higher rates.
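A two-part bill like the one described above could be sketched as follows. The rates and the arena’s figures here are entirely hypothetical, chosen only to show how a high peak can dominate a bill even when total consumption is modest.

```python
# Hypothetical rates for a two-part commercial bill.
ENERGY_RATE = 0.12   # dollars per kWh consumed (assumed for illustration)
DEMAND_RATE = 15.00  # dollars per kW of peak demand (assumed for illustration)

def monthly_bill(consumption_kwh: float, peak_demand_kw: float) -> float:
    """Total charge: energy consumed plus a charge for peak demand."""
    return consumption_kwh * ENERGY_RATE + peak_demand_kw * DEMAND_RATE

# An arena-like profile: modest monthly consumption, very high peak.
energy_part = 10_000 * ENERGY_RATE   # consumption portion
demand_part = 800 * DEMAND_RATE      # demand portion
print(f"Total: ${monthly_bill(10_000, 800):,.2f}")
print(f"  of which demand: ${demand_part:,.2f}")
```

With these made-up numbers, the demand charge is several times the energy charge, which is the arena effect the paragraph describes.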
Maybe it’s a good thing most of us don’t run big banks of bright lights.