Opinion: Empirical Evidence Shapes Policy Design

Australian Treasury

One morning, an American woman named Anita Kramer woke up and could not move her left arm. Kramer called 911, and during her assessment doctors discovered a narrowing in a major blood vessel in her brain. She had an intracranial stent inserted. Less than a week later, a second stroke left Kramer more disabled than the first had.

Stenting was approved by the US Food and Drug Administration in 2005 on the strength of a promising study that did not use a control group: the stroke rate was better than expected, so the procedure got the green light. Thousands of patients received stents.

Six years later, in 2011, the New England Journal of Medicine published the results of a randomised trial. It found that patients who received a stent were more than twice as likely to have a stroke in the following month as patients in the control group, who were assigned to medical therapy. Five patients in the treatment group died, compared with one in the control group. The results were so dramatic that the study was terminated early.

In medicine, randomised trials have produced a plethora of surprises. For patients with appendicitis, it was once thought that going straight to surgery was the best option. Then four randomised trials compared immediate surgery with an alternative approach: antibiotics first, with surgery reserved for those whose symptoms worsened. In the antibiotic arm, two-thirds of patients never went on to have surgery. The trials found that the rate of life-threatening outcomes and the time spent in hospital were the same for the two groups. Moreover, antibiotics are less expensive and less invasive. No one has to cover up an antibiotic scar.

Another surgery, popular for a time, involved injecting medical-grade cement into osteoporotic fractures of the spine to treat chronic back pain. Alas, a randomised trial showed it to be ineffective.

You don't need to face major surgery or a life-threatening condition to benefit from the insights of randomised trials. In complementary medicine, randomised trials have shown many widely used interventions to be ineffective. A review of 10 trials found that glucosamine and chondroitin, taken for joint health, had no effect on joint pain. Echinacea turned out not to reduce the duration of the common cold. Acupuncture, when tested against sham acupuncture, has been found ineffective in reducing pain. Randomised trials of multivitamins have found no benefit for survival, heart disease, or cancer.

Surprising results can be good for medical researchers. Surprises prompt further investigation. They encourage open-mindedness, reminding researchers not to become too entrenched in prevailing theories or expectations. Dealing with unexpected results also demands creativity and critical thinking, as researchers must work out why their findings deviate from expectations.

The 21st century is one of improving outcomes through evidence‑based medicine rather than 'eminence‑based medicine'. The challenge now is to bring the same approach to public policy.

In Australia, randomised trials have shown that drug courts are a cost‑effective way of reducing recidivism, that intensive caseworker support for long‑term homeless people does not increase short‑term employment rates, and that high‑quality early education programs boost the IQ scores of vulnerable children by up to 7 points.

Some of these findings may surprise you, and that's a good thing.

To expand the number of randomised policy trials, the Australian Government last year created the Australian Centre for Evaluation. The centre was established to help put evaluation evidence at the heart of policy design and decision-making. It seeks to improve the volume, quality, and use of evaluation evidence to support better policies and programs that improve the lives of Australians.

Based in Treasury, the Australian Centre for Evaluation works with agencies across the Commonwealth government to design and implement trials to answer challenging social and economic policy questions.

The centre's first trials have been developed in collaboration with the Department of Employment and Workplace Relations. They seek to understand what works to help people find jobs.

The Australian Centre for Evaluation's ambition goes beyond individual trials, to embedding good evaluation principles and practices across government and fostering an evaluative culture that supports continuous learning about what works, why, and for whom.

Just as ethics review is at the heart of randomised medical trials, so too will randomised policy trials conducted by the Australian Centre for Evaluation be carried out within a rigorous ethical framework. Building trust is vital as we work to expand the quality and quantity of policy evaluation across Australia, and that means evaluations must be conducted ethically, carefully and transparently.

Alongside producing evidence, the Australian Centre for Evaluation is working to improve its use: encouraging reliance on high-quality evaluations over low-quality ones, and on meta-analyses over single studies.

In policy design, as in medicine, it is healthy to be surprised. As we build the evidence base, we should expect more unexpected findings. In this sense, healthy surprises are part of the journey to shaping a better world.
