By Simon Booth
In December, the Victorian Ombudsman released a report outlining findings from investigations into the management of complex workers’ compensation claims.
I have read the report in its entirety, all 250 pages. I laughed, I cried… I mainly cried. My frustration with the report grew with every page, so much so that I had to put it down for a few days and revisit it on a number of occasions.
Having read Victorian Ombudsman Ms Deborah Glass’ 2016 report, I was managing my expectations with respect to the report’s statistical validity; however, I did expect there to be some evidentiary due diligence, which, by her own words, is lacking:
“The Ombudsman may investigate in such a manner as she thinks fit and is not bound by the rules of evidence which apply to legal proceedings.”
Ironically, though, she consistently criticises WorkSafe’s Agents for holding to decisions based on evidence that, in her opinion, would not hold up in court. Ms Glass criticises individual agents, WorkSafe and the scheme itself through a one-sided review, seemingly unaware of the biases influencing its interpretations and conclusions.
In addition, the anonymity granted to the witnesses interviewed for the report allows unsubstantiated, anecdotal evidence to be offered at times, with no scrutiny of such claims or opinions, and no opportunity for those accused of wrongdoing to question or cross-examine their accusers’ statements. I would think this an essential component of establishing and testing the truth.
As is often the criticism of social media, anonymity can lead to embellishment and misinformation, if not outright lies. (Apologies to any Anti-Vaxxers and Flat-Earthers; I am sure your opinions are based on significant peer-reviewed and reproducible research.)
Alas, the anonymity provided to the witnesses and the Ombudsman’s assertion that “no evidence of bias was apparent” provide me with as much confidence in their statements as I place in information gleaned from Donald Trump’s Twitter feed.
How can we ascertain whether any witnesses bear grudges? How can we test the veracity of their allegations? How can we identify whether there are innocent explanations for the behaviours in question, or whether situations and communications have been misinterpreted?
I would expect that an investigation into an area of such importance as the Victorian Workers’ Compensation system would have been held to a higher evidentiary standard and been based on a true random sample of the Victorian claims population.
Unfortunately, we are about to experience another sweeping round of scheme-wide changes based on another report that lacks any form of statistical validity, with a narrow and selective sample population and the untested anecdotal opinions of an anonymous few, whose motives for participating are unknown.
The Victorian Ombudsman references randomly selected claims. In reality, the investigation rests on a sample that is both too small and too compromised by several types of bias to support any assertion with confidence.
Setting aside the issues of bias for a second: we know that the Ombudsman reviewed 102 claims of the 63,085 claims she outlined were currently active in the scheme. As such, her sample represents just 0.16% of the active claims in the scheme.
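For illustration, here is a minimal sketch of the arithmetic in Python, assuming (generously) that the 102 claims had been drawn by ideal simple random sampling, an assumption the report does not establish. The margin-of-error figure uses the standard worst-case normal approximation; the finite-population correction is negligible at this sample fraction.

```python
import math

POPULATION = 63_085   # active claims cited in the report
SAMPLE = 102          # claims the Ombudsman reviewed

# Share of the scheme actually examined
print(f"Sample fraction: {SAMPLE / POPULATION:.2%}")   # ~0.16%

# Worst-case 95% margin of error on any prevalence estimate,
# assuming ideal simple random sampling (p = 0.5, z = 1.96)
margin = 1.96 * math.sqrt(0.25 / SAMPLE)
print(f"95% margin of error: +/-{margin:.1%}")         # roughly +/-9.7 points
```

Even on that charitable assumption, any scheme-wide rate estimated from 102 claims carries an uncertainty of roughly ten percentage points either way; once the selection problems discussed below are factored in, confident generalisation becomes untenable.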
In addition, Ms Glass makes statements such as:
- “The investigation found cases of…”
- “This investigation identified many examples…”
- “This investigation provided examples of…”
- “This included examples of…”
- “Further examples were identified…”
- “Also identified examples of…”
- “Highlighted examples where…”
However, she fails to provide the exact number or percentage of claims to which each of these ‘findings’ relates.
I note that the Ombudsman provides 59 case studies in support of her findings, a figure representing more than half the claims reviewed. However, the discerning reader will notice at the bottom of many of these case studies the words “This case is also discussed on page…”. On several occasions the same case study is used as evidence across multiple areas of the report, reducing the number of distinct case studies to 48.
In line with this, a number of these case studies are raised as issues based on Ms Glass’ interpretation of the Act and its intent, which is not necessarily how the Act should be interpreted. On a number of occasions, those who have worked within the scheme for decades disagreed with the Ombudsman’s interpretation.
Most importantly, of the 48 case studies used to promote her findings, the Ombudsman fails to identify which were selected for review on the basis of complaints to the Ombudsman, and which were among those ‘randomly’ selected.
In contrast, WorkSafe’s own audit of its Agents, as outlined by the Ombudsman, reviewed 880 claims, of which 37 (4%) initially failed and only 4 (0.5%) resulted in the worker being wrongfully disentitled.
The concrete numbers and percentages relating to WorkSafe’s audit do not paint the same dire picture we receive from the Ombudsman’s report: a rogue system operated by corrupted capitalists hell-bent on making a dollar at any expense.
The heart of the issue I have with the Ombudsman’s report relates to the significant biases, not just in the population sampled but also in the investigative methodology.
As data analyst Tomi Mester says in his article, Statistical Bias Types Explained (2017), “…just to make this clear: biased statistics are bad statistics. Everything I will describe here is to help you prevent the same mistakes that some of the less smart ‘researcher’ folks make from time to time.”
The Ombudsman’s investigation is guilty of a number of biases. To assist Ms Glass in avoiding them in future investigations, here are the relevant types of bias, as defined by Tomi Mester; a short simulation after the definitions shows how just one of them can distort results.
Selection Bias1
Selection bias occurs when you are selecting your sample or your data wrong. Usually this means accidentally working with a specific subset of your audience instead of the whole, rendering your sample unrepresentative of the whole population.
Self-Selection Bias1
Self-selection bias is a subcategory of selection bias. If you let the subjects of your analyses select themselves, that means that less proactive people will be excluded. The bigger issue is that self-selection is a specific behaviour, and one that may correlate with other specific behaviours, so this sample does not represent the entire population.
Recall Bias1
Recall bias is another common error of interview/survey situations, when the respondent doesn’t remember things correctly. It’s not about bad or good memory — humans have selective memory by default. After a few years (or even a few days), certain things stay and others fade. It’s normal, but it makes research much more difficult.
Observer Bias1
Observer bias happens when the researcher subconsciously projects his/her expectations onto the research. It can come in many forms, such as (unintentionally) influencing participants (during interviews and surveys) or doing some serious cherry picking (focusing on the statistics that support our hypothesis rather than those that don’t).
Cause-effect Bias1
Our brain is wired to see causation everywhere that correlation shows up.
Cause-effect bias is usually not mentioned as a classic statistical bias, but I wanted to include it on this list as many decision makers (business/marketing managers) are not aware of that. Even those who are aware of it (including me), have to remind themselves from time to time: correlation does not imply causation.
Cognitive Bias1
- Confirmation bias. Confirmation bias happens when a decision maker has serious pre-conceptions and listens only to that part of your presentation that confirms their beliefs, completely missing the rest.
- Belief bias. When someone is so sure about their own gut feelings that they are ignoring the results of a data research project.
1. Tomi Mester, “Statistical Bias Types Explained” (2017).
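To make the self-selection point concrete, consider a small simulation with entirely made-up numbers (a sketch; the rates below are assumptions of mine, not figures from the report or the scheme). Suppose a small fraction of claims are genuinely mishandled, but workers whose claims were mishandled are far more likely to contact the Ombudsman. Any sample seeded by complaints will then wildly overstate the true rate.

```python
import random

random.seed(1)

POPULATION = 63_085            # active claims cited in the report
TRUE_MISHANDLING_RATE = 0.05   # assumption: 5% of claims genuinely mishandled

# Assumed (hypothetical) propensities to complain to the Ombudsman
P_COMPLAIN_IF_MISHANDLED = 0.40
P_COMPLAIN_IF_HANDLED_OK = 0.01

# True state of every claim: True = mishandled
claims = [random.random() < TRUE_MISHANDLING_RATE for _ in range(POPULATION)]

# Claims whose owners self-select into complaining
complaints = [
    mishandled for mishandled in claims
    if random.random() < (P_COMPLAIN_IF_MISHANDLED if mishandled
                          else P_COMPLAIN_IF_HANDLED_OK)
]

print(f"True mishandling rate:        {sum(claims) / len(claims):.1%}")
print(f"Rate among the self-selected: {sum(complaints) / len(complaints):.1%}")
```

With these assumed propensities, roughly two-thirds of the self-selected group had mishandled claims, even though the true scheme-wide rate was five per cent. This is precisely why it matters which of the 48 case studies came from complaints and which were genuinely random.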
Disclaimer: The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of Aegis Risk Management Services or BJS Insurance.
