Opinion: The policy cost of getting domestic violence statistics wrong

In coverage of domestic violence, precision is not a luxury. It is the difference between evidence-based policymaking and narratives built on quicksand. The recent correction by ABC News to a widely shared article on the statistical rarity of lethal family violence offers a reminder of how easily public understanding can slip when journalism falters on basic numerical accuracy. Errors of this kind do not merely distort a single story; they risk shaping an entire policy conversation around a false premise.

The ABC article explored an important and uncomfortable truth emerging from new research: that fatal domestic violence is, statistically, a low-base-rate event. This does not diminish its seriousness. It does underscore the limits of prediction models that some advocates, policymakers and reporters have come to rely on with surprising confidence. When journalists misstate how rare such events are, even inadvertently, every risk assessment built on those figures is thrown out of alignment.

Domestic violence policy is already prone to simplification. We reach instinctively for slogans - believe victims, spot the red flags - that offer emotional clarity but lack statistical precision. The temptation to translate complex research into digestible claims is human, but it becomes dangerous when the underlying data are misstated. When an article claims that a particular share of cases resulted in lethal or near-lethal violence - and that percentage is wrong - every subsequent inference becomes unstable.

Misreporting on issues this serious contributes to a deeper structural problem: a public discourse built on the assumption that prediction and prevention are the same thing. They are not. The Swinburne study makes that clear. Most so-called “high-risk” indicators are common among thousands of cases, while fatal outcomes number in the dozens. The hard reality is that systems designed to forecast a handful of tragedies will inevitably generate large volumes of false positives. When the press misstates the denominator, the false comfort of prediction becomes even more seductive.
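To see why, consider a back-of-the-envelope calculation. The figures below are purely illustrative assumptions, not numbers from the Swinburne study or any real dataset: a minimal Python sketch applying Bayes' rule shows how few flagged cases would be true positives even under a generously accurate screening tool.

```python
# Illustrative only: the base rate, sensitivity and specificity below are
# hypothetical assumptions, not figures from the Swinburne study.

def positive_predictive_value(base_rate: float,
                              sensitivity: float,
                              specificity: float) -> float:
    """Share of flagged cases that are true positives, via Bayes' rule."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Suppose 50 lethal outcomes among 100,000 reported cases (a 0.05% base
# rate), screened by a tool generously assumed to be 90% sensitive and
# 90% specific.
ppv = positive_predictive_value(base_rate=50 / 100_000,
                                sensitivity=0.90,
                                specificity=0.90)
print(f"{ppv:.2%}")  # ~0.45%: over 99% of flagged cases are false alarms
```

On these assumed numbers, fewer than one flagged case in two hundred would end in a fatality, and even a denominator misreported by a factor of ten lifts that figure only to around 4%. That arithmetic is why an error in the reported rate ripples through every downstream claim.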

There is also a subtler consequence. Policymakers, responding to public pressure shaped by media narratives, may reach for extreme tools - preventive detention, blanket bail restrictions, sweeping “high-risk” classifications - justified by the belief that these measures thwart imminent killings. But if the statistical premise is faulty, the policies risk sweeping in thousands who will never commit such acts. Misreporting amplifies that risk. It encourages interventions that may satisfy political urgency while doing little to reduce harm.

None of this means we give up on risk assessment or treat warnings from victims with anything less than seriousness. But we cannot pretend that every warning is a predictor of lethal violence, nor should journalism reinforce that misconception. The role of the press, particularly on issues where fear and politics intermingle, is to distinguish carefully between what is intuitively persuasive and what the evidence can actually bear.

If public debate is to be grounded in reality rather than rhetoric, then accuracy must be treated as the first discipline of reporting, not a box to be ticked in hindsight. When the numbers are wrong, everything built atop them - policy, resourcing, community expectations - stands on an unstable foundation. In matters of life and death, that is a risk no society can afford to take.
