Background & Scientific Rationale
Reproducibility refers to the degree to which others can replicate published research and confirm its findings using similar methods. Good science should ensure that results are reproducible in order to maintain confidence and trust in scientific research. However, recent evidence suggests that confidence in academic research is waning and that much avoidable waste is created in the research process. Chalmers & Glasziou (2009) estimated that 85% “of investment in [biomedical] research are lost every year because of correctable problems”. As part of the same series, Chan et al. (2014) identified the need to make research more accessible, and thus to encourage comprehensive reporting. The National Institutes of Health (NIH), the largest health research funder in the US, is now addressing the reproducibility issue.
Currently, the majority of economic evaluations are performed alongside clinical trials, using outcomes and resource use collected within the trial as the primary source of economic data. However, an economic evaluation can also be conducted using a decision-analytic model, defined by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Task Force ‘‘as an analytic methodology that accounts for events over time and across populations, that is based on data drawn from primary and/or secondary sources, and whose purpose is to estimate the effects of an intervention on valued health consequences and costs’’. This approach makes it possible to draw on multiple sources of evidence and to evaluate all relevant comparators, thereby reducing the need to perform additional clinical trials to compare interventions not previously compared head to head. Current guidelines advise that, when reporting decision-analytic models, “Technical documentation, written in sufficient detail to enable a reader with necessary expertise to evaluate the model and potentially reproduce it, should be made available openly or under agreements that protect intellectual property, at the discretion of the modelers”, but the degree to which published economic models adhere to this has not, to our knowledge, been tested. We did find one study that examines between-model consistency, asking whether different models would give the same results with identical inputs (even though their structures may differ). In contrast, this proposal seeks to examine how easy it is to take both the structure and content of a published model and reproduce it.
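To make the ISPOR definition concrete, a common form of decision-analytic model is a Markov cohort model: a hypothetical cohort moves between health states over discrete cycles, and state-specific costs and utilities are accumulated with discounting. The sketch below is purely illustrative; the states, transition probabilities, costs and utilities are invented assumptions, not values from any published model.

```python
# Minimal sketch of a decision-analytic (Markov cohort) model.
# All states, probabilities, costs and utilities are illustrative
# assumptions, not drawn from any published evaluation.
import numpy as np

def run_markov(transition, costs, utilities, n_cycles=20, discount=0.035):
    """Trace a cohort through a Markov model; return discounted
    total cost and QALYs per person."""
    n_states = transition.shape[0]
    cohort = np.zeros(n_states)
    cohort[0] = 1.0  # whole cohort starts in the first state
    total_cost, total_qalys = 0.0, 0.0
    for cycle in range(n_cycles):
        cohort = cohort @ transition               # one cycle transition
        dfactor = 1.0 / (1.0 + discount) ** (cycle + 1)
        total_cost += dfactor * (cohort @ costs)
        total_qalys += dfactor * (cohort @ utilities)
    return total_cost, total_qalys

# States: 0 = Well, 1 = Sick, 2 = Dead (absorbing); rows sum to 1
usual_care = np.array([[0.85, 0.10, 0.05],
                       [0.00, 0.80, 0.20],
                       [0.00, 0.00, 1.00]])
# Hypothetical intervention lowers the chance of becoming sick or dying
intervention = np.array([[0.90, 0.06, 0.04],
                         [0.00, 0.80, 0.20],
                         [0.00, 0.00, 1.00]])

state_costs = np.array([100.0, 2000.0, 0.0])          # annual cost per state
state_utils = np.array([0.90, 0.55, 0.0])             # utility per state
rx_costs = state_costs + np.array([500.0, 0.0, 0.0])  # drug cost while well

c0, q0 = run_markov(usual_care, state_costs, state_utils)
c1, q1 = run_markov(intervention, rx_costs, state_utils)
icer = (c1 - c0) / (q1 - q0)  # incremental cost-effectiveness ratio
print(f"ICER: {icer:.0f} per QALY gained")
```

Reproducing such a model from a published report requires recovering exactly these elements (state definitions, transition probabilities, costs, utilities, cycle length, discount rate), which is why the completeness of technical documentation matters.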
A list of search terms will be devised with the intention of identifying any published article that considers the reproduction, re-analysis or re-engineering of a decision-analytic model. Using these terms, a search of standard databases (for example, Medline and EMBASE) will be conducted.
Once the search has been conducted, the resulting abstracts will be initially screened by two independent reviewers to determine their eligibility for the review. For those deemed potentially eligible, the full article will then be screened. Any disputes will be resolved by a third reviewer. Following the identification of eligible papers, data will be extracted using a standard data extraction form, including, but not limited to, the reason for replication, the type of model replicated, any problems encountered, and the results of the replication. This extraction will be completed by two independent reviewers, with any discrepancies resolved by a third reviewer, if necessary.
As the aims of the systematic review are largely qualitative in nature, and as the papers identified are likely to be replicated models considering different interventions and population groups, it is envisaged that a meta-analysis would be unsuitable. Therefore, the results of the systematic review will be presented using a narrative approach, and the methodology will be reported according to the “Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015” guidance.
Following this, an attempt will be made to replicate a previously published decision-analytic model, using only information from the published report and information available in the public domain. In the process, particular challenges will be identified, and the feasibility of model replication, as well as the strengths and weaknesses of the research, will be discussed. Additionally, the implications for intellectual property rights of placing models in the public domain will be discussed.
Expected Output of Research / Impact and added value
A manuscript will be submitted for publication in a peer-reviewed journal, detailing the systematic review and the reproduced model.
Information will be gathered about the process of reproducing decision-analytic models and the feasibility of using replication as a means of validation. This will help to shape future research using decision-analytic models and to inform reporting guidelines.