Little's Law

Little's Law states that in a stable queueing system, L = λW, where L is the mean number of customers in the system, λ is the mean arrival rate, and W is the mean time each customer spends in the system. Provided the system reaches steady state, the law holds regardless of arrival or service distributions, number of servers, or scheduling discipline.
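As a quick sanity check on the identity itself, the standard closed-form M/M/1 results (textbook formulas, not produced by the simulation below) already satisfy L = λW:

```python
# Closed-form M/M/1 formulas: for arrival rate lam < service rate mu,
#   W = 1 / (mu - lam),   L = lam / (mu - lam)
def mm1_L_and_W(lam, mu):
    assert lam < mu, "M/M/1 is stable only when lam < mu"
    return lam / (mu - lam), 1.0 / (mu - lam)

L, W = mm1_L_and_W(lam=0.8, mu=1.0)
# Little's Law: L == lam * W (here 4 == 0.8 * 5, up to rounding)
```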

The simulation verifies the law across a grid of M/M/c configurations:

  • arrival rates λ = 0.5, 1.0, 1.5, 2.0, 2.5 (Poisson arrivals)
  • server counts c = 2, 3, 4
  • exponential service at rate μ = 1.0 in every scenario

For each configuration, L is measured two ways: by direct time-sampling of the number of customers in the system, and by computing λW from the observed throughput and mean sojourn time.
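Both measurements can be carried out by hand on a toy trace; a minimal sketch with made-up arrival and departure times (the system is empty at both ends of the window, so the identity holds exactly):

```python
# Toy trace: three customers through a FIFO single server.
# Arrivals at t = 0, 1, 3; departures at t = 2, 4, 5.
T = 6.0
events = [(0.0, +1), (1.0, +1), (2.0, -1), (3.0, +1), (4.0, -1), (5.0, -1)]

# Method 1: time-average of N(t), the number in system.
area, n, last_t = 0.0, 0, 0.0
for t, delta in events:
    area += n * (t - last_t)
    n, last_t = n + delta, t
area += n * (T - last_t)
L_direct = area / T

# Method 2: lambda * W from observed throughput and mean sojourn time.
sojourns = [2.0, 3.0, 2.0]           # matched FIFO arrival/departure pairs
lam_obs = len(sojourns) / T          # 0.5 completions per time unit
W = sum(sojourns) / len(sojourns)    # 7/3
L_little = lam_obs * W

print(L_direct, L_little)  # both equal 7/6 ~ 1.1667
```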

Source and Output

"""Example: verifying Little's Law."""

import random
import statistics

from prettytable import PrettyTable, TableStyle

from asimpy import Environment, Process, Resource

SEED = 192                  # random seed for reproducibility
SIM_TIME = 1000             # simulated time units per scenario
SAMPLE_INTERVAL = 1         # sim-time units between Monitor samples
SERVICE_RATE = 1.0          # exponential service rate (mu) for random service


class RandomCustomer(Process):
    """One customer: occupy a server slot for an exponential service time."""

    def init(self, server, in_system, sojourn_times):
        self.server = server
        self.in_system = in_system
        self.sojourn_times = sojourn_times

    async def run(self):
        arrival = self.now
        self.in_system[0] += 1
        async with self.server:
            await self.timeout(random.expovariate(SERVICE_RATE))
        self.in_system[0] -= 1
        self.sojourn_times.append(self.now - arrival)


class RandomArrivals(Process):
    """Poisson arrival source: spawn a new RandomCustomer at the given rate."""

    def init(self, rate, server, in_system, sojourn_times):
        self.rate = rate
        self.server = server
        self.in_system = in_system
        self.sojourn_times = sojourn_times

    async def run(self):
        while True:
            await self.timeout(random.expovariate(self.rate))
            RandomCustomer(self._env, self.server, self.in_system, self.sojourn_times)


class Monitor(Process):
    """Sample the number of customers in the system at a fixed interval."""

    def init(self, in_system, samples):
        self.in_system = in_system
        self.samples = samples

    async def run(self):
        while True:
            self.samples.append(self.in_system[0])
            await self.timeout(SAMPLE_INTERVAL)


def run_scenario(lam, capacity):
    """Run one M/M/c scenario; compare L measured directly with lambda * W."""
    in_system = [0]
    sojourns = []
    samples = []
    env = Environment()
    server = Resource(env, capacity=capacity)
    RandomArrivals(env, lam, server, in_system, sojourns)
    Monitor(env, in_system, samples)
    env.run(until=SIM_TIME)
    L_direct = statistics.mean(samples)
    W = statistics.mean(sojourns)
    lam_obs = len(sojourns) / SIM_TIME
    L_little = lam_obs * W
    error = 100.0 * (L_little - L_direct) / L_direct
    return {
        "lambda": round(lam_obs, 3),
        "capacity": capacity,
        "W": round(W, 3),
        "L_direct": round(L_direct, 3),
        "L_little": round(L_little, 3),
        "error_%": round(error, 2),
    }


def main():
    random.seed(SEED)
    rows = []
    for lam in (0.5, 1.0, 1.5, 2.0, 2.5):
        for capacity in (2, 3, 4):
            rows.append(run_scenario(lam, capacity))

    table = PrettyTable(list(rows[0].keys()))
    table.align = "r"
    for row in rows:
        table.add_row(list(row.values()))
    table.set_style(TableStyle.MARKDOWN)
    print(table)


if __name__ == "__main__":
    main()

| lambda | capacity |       W | L_direct | L_little | error_% |
|-------:|---------:|--------:|---------:|---------:|--------:|
|  0.497 |        2 |    1.18 |     0.59 |    0.586 |    -0.7 |
|  0.499 |        3 |   1.007 |      0.5 |    0.502 |    0.57 |
|  0.494 |        4 |   1.027 |    0.504 |    0.508 |     0.6 |
|  1.017 |        2 |   1.238 |    1.266 |    1.259 |   -0.51 |
|  1.004 |        3 |     1.1 |    1.106 |    1.104 |   -0.16 |
|  0.994 |        4 |    0.97 |    0.972 |    0.964 |   -0.79 |
|  1.515 |        2 |   2.515 |    3.818 |     3.81 |   -0.22 |
|  1.521 |        3 |   1.166 |    1.774 |    1.774 |   -0.03 |
|  1.515 |        4 |   1.007 |    1.543 |    1.526 |   -1.16 |
|  1.981 |        2 |  20.542 |    41.35 |   40.693 |   -1.59 |
|   2.02 |        3 |   1.355 |    2.746 |    2.737 |   -0.34 |
|  2.026 |        4 |   1.128 |    2.292 |    2.285 |   -0.28 |
|  1.884 |        2 | 124.022 |  311.087 |  233.658 |  -24.89 |
|  2.489 |        3 |   1.996 |    4.973 |    4.968 |    -0.1 |
|  2.532 |        4 |   1.135 |    2.839 |    2.873 |    1.19 |

The Error

The large error (-24.89%) for λ = 2.5, capacity = 2 is a stability problem, not a simulation bug: Little's Law only holds in steady state. For an M/M/c queue, steady state requires that the arrival rate λ be less than c · μ, the number of servers times the service rate. With SERVICE_RATE = 1.0 and capacity = 2, the maximum sustainable throughput is 2 × 1.0 = 2.0. At λ = 2.5 the load exceeds service capacity, so the queue grows without bound. By the end of the run, hundreds of customers are still waiting, and because their (very long) sojourns are never recorded, both the observed throughput and W are biased low, which is why λW falls well short of the directly sampled L.
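Under the same assumptions as the script above (μ fixed at 1.0), the unstable scenarios can be flagged before simulating; a minimal sketch:

```python
SERVICE_RATE = 1.0  # mu, as in the script above

def is_stable(lam, capacity, mu=SERVICE_RATE):
    """A queue with c servers is stable only if offered load per server is < 1."""
    return lam / (capacity * mu) < 1.0

flagged = [(lam, c)
           for lam in (0.5, 1.0, 1.5, 2.0, 2.5)
           for c in (2, 3, 4)
           if not is_stable(lam, c)]
print(flagged)  # [(2.0, 2), (2.5, 2)]
```

Note that λ = 2.0 with capacity = 2 sits exactly at ρ = 1, critically loaded rather than strictly unstable, which matches its outsized W (20.542) in the table above.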

Key Points

  1. Monitor samples in_system[0] every SAMPLE_INTERVAL time units to estimate L directly without any queueing formula.

  2. The error_% column shows that L_direct and λW agree to within about 1.6% in every stable scenario, across all arrival rates and server counts; the lone large deviation is the unstable λ = 2.5, capacity = 2 run.

  3. in_system is a one-element list so that RandomArrivals, RandomCustomer, and Monitor all share a single mutable counter: each customer increments it on arrival and decrements it when service completes, and sojourn times are recorded only for customers that finish.

  4. Resource(env, capacity=capacity) models a c-server station. With SERVICE_RATE = 1.0, per-server utilization is λ/(c · μ), ranging from 0.125 (λ = 0.5, c = 4) to 1.25 (λ = 2.5, c = 2).

Check for Understanding

run_scenario computes lam_obs = len(sojourns) / SIM_TIME rather than using the nominal arrival rate passed to RandomArrivals. Why is the observed throughput the right value to use in Little's Law, and when would the two differ significantly?