Abstract
InterVA, an automated and widely available tool for assigning cause of death using verbal autopsies (VAs), does not perform as well as other methods, such as physician-certified verbal autopsy (PCVA) and the Simplified Symptom Pattern (SSP) method, according to a study published by researchers at IHME and the University of Queensland, as part of the Population Health Metrics Research Consortium (PHMRC).
Research objective
Understanding what causes people to die is essential for developing effective health interventions. In remote areas where reliable vital registration data are scarce, VAs, which are conducted by interviewing a family member about the signs and symptoms that preceded a death, are a critical tool for identifying the cause of death. Because different methods have been developed to interpret the information gathered during a VA and assign a cause of death, it is important to understand their strengths and weaknesses.
This research is part of ongoing work by IHME to identify the most accurate and efficient methods of using VA to determine individual- and population-level causes of death. It compares InterVA, an affordable and automated method for assigning causes of death from verbal autopsies, with the more labor-intensive and expensive PCVA approach and with the automated SSP method, using validated gold standard deaths collected as part of the PHMRC gold standard VA validation study.
Research findings
Researchers found that, across all age groups, PCVA performed better than InterVA at both the individual and the population level. At the individual level, measured by chance-corrected concordance, InterVA predicted the true cause of death 24.2% of the time for adults and 24.9% of the time for children. The tool performed less well for neonates, assigning the correct cause of death only 6.3% of the time. In each case, these percentages were lower than those for PCVA.
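The study reports chance-corrected concordance without defining it here. As a rough guide, the metric takes the share of deaths from a cause that were correctly assigned and adjusts it for the agreement expected by random guessing over the cause list. The Python sketch below is a minimal illustration assuming the standard PHMRC definition; the function name and the example numbers are hypothetical, not figures from the study.

def chance_corrected_concordance(true_positives, total_true_cases, num_causes):
    # Raw concordance: fraction of deaths from this cause assigned correctly.
    raw_concordance = true_positives / total_true_cases
    # Adjust for the 1/N agreement expected from random assignment over N causes,
    # so a value of 0 means no better than chance and 1 means perfect assignment.
    return (raw_concordance - 1.0 / num_causes) / (1.0 - 1.0 / num_causes)

# Hypothetical example: 30 of 100 deaths from one cause assigned correctly
# against a 40-cause list gives a chance-corrected concordance of about 0.28.
print(round(chance_corrected_concordance(30, 100, 40), 3))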
Across different causes of death, InterVA performed quite well in identifying transport-related deaths and some accidents, such as poisoning and drowning in children. It also predicted the correct cause of death more than 50% of the time for homicide, liver disease, and tuberculosis in adults and for perinatal asphyxia in neonates. However, the tool was less accurate for other causes of death, particularly causes that are rare for a given age group or that fall into residual categories defined as “other.”
At the population level, InterVA also performed less well than PCVA in each age group. InterVA’s cause-specific mortality fraction accuracy across all causes was 0.546 for adults, 0.504 for children, and 0.404 for neonates.
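Cause-specific mortality fraction (CSMF) accuracy summarizes how closely the estimated distribution of causes in a population matches the true distribution, with 1 indicating a perfect match. The sketch below assumes the standard PHMRC definition, in which the summed absolute errors across causes are scaled by the largest error any method could make; the cause names and fractions in the example are hypothetical.

def csmf_accuracy(true_csmf, predicted_csmf):
    # Total absolute error between predicted and true cause fractions.
    total_error = sum(abs(true_csmf[c] - predicted_csmf.get(c, 0.0)) for c in true_csmf)
    # Worst possible error given the true distribution: 2 * (1 - smallest true CSMF).
    max_error = 2.0 * (1.0 - min(true_csmf.values()))
    # 1 means the predicted cause distribution matches the truth exactly.
    return 1.0 - total_error / max_error

# Hypothetical three-cause example (fractions in each dictionary sum to 1).
true_fractions = {"tuberculosis": 0.2, "road traffic": 0.3, "other": 0.5}
predicted_fractions = {"tuberculosis": 0.1, "road traffic": 0.35, "other": 0.55}
print(csmf_accuracy(true_fractions, predicted_fractions))  # 0.875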
The SSP method also performed better than InterVA, with a chance-corrected concordance of 48% at the individual level and a cause-specific mortality fraction accuracy of 0.73 at the population level. As elements of the SSP method were modified to make it more like InterVA, its performance progressively worsened.
Analytical approach
Previous studies have compared InterVA’s cause of death assignments with causes of death ascertained through hospital review or discharge notices. But in low-resource and rural settings, the quality of such hospital data may be suspect, and the recorded diagnoses may not reflect the true cause of death.
Researchers evaluated InterVA’s performance in predicting cause of death using the PHMRC gold standard VA validation dataset, which includes 12,542 cases for which VAs were collected and the true cause of death was established using strictly defined diagnostic criteria, such as laboratory, pathology, and medical imaging findings. This high-quality dataset was designed to validate the accuracy of different VA tools, such as InterVA.
Policy implications
Cause of death data are critical for formulating good public health policy. In the absence of vital registration data, VAs are commonly used to better understand causes of death. The InterVA tool has proven popular with some researchers seeking to assign cause of death from VA information because it is automated, inexpensive, and simple enough to be used in a variety of settings. However, these advantages are outweighed by its poor performance in accurately assigning cause of death. Fortunately, other automated options for analyzing VA data offer the same advantages with better performance. Until InterVA is substantially revised, users might consider these alternatives instead.