Predicting the outcome of clinical trials by computer?

Clinical trials are very expensive and time-consuming, and frequently yield inconclusive results.

An article in Wired magazine described a computer simulation model that can predict the results of drug trials in humans, without actually giving a single patient a pill.

The model is called Archimedes, and is based at the San Francisco company of the same name. Its creator, David Eddy, spent two decades programming information about anatomy, physiology, disease, risk factors and their response to different drugs. The article explains how Archimedes was able to almost exactly predict the true results of the CARDS trial (examining the effects of statin therapy on cardiovascular outcomes in diabetic patients) ahead of unblinding of that study.

Whilst the underlying algorithms and assumptions of Archimedes are a trade secret, do you think it gives us a glimpse into the future of clinical trials, where studies of drug efficacy will be simulated on hundreds of thousands of virtual patients? There’s also evidence that the adverse effects of drugs (for example hepatotoxicity) can be predicted with reasonable accuracy, by comparing the molecular structure of the drug in question with millions of others that have known side-effect profiles.
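To make the structure-comparison idea concrete, here is a minimal toy sketch. It is not the actual method used by any real screening tool: the fingerprints are plain sets of made-up feature IDs, the drug names and side effects are invented, and the similarity measure is the standard Tanimoto coefficient often used for this kind of comparison.

```python
# Toy sketch of structure-based side-effect screening: compare a candidate
# drug's molecular fingerprint (here, just a set of structural-feature IDs)
# against a small library of drugs with known side-effect profiles.
# All drug names, fingerprints and effects below are hypothetical.

def tanimoto(a, b):
    """Tanimoto similarity between two feature sets (1.0 = identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical library: drug name -> (fingerprint, known side effect)
library = {
    "drug_a": ({1, 4, 7, 9}, "hepatotoxicity"),
    "drug_b": ({2, 3, 8}, "none reported"),
}

def flag_side_effects(candidate_fp, threshold=0.5):
    """List side effects of library drugs structurally similar to the candidate."""
    return [(name, effect, round(tanimoto(candidate_fp, fp), 2))
            for name, (fp, effect) in library.items()
            if tanimoto(candidate_fp, fp) >= threshold]

# A candidate sharing most features with drug_a inherits its warning flag.
print(flag_side_effects({1, 4, 7, 8, 9}))
```

In practice the libraries hold millions of compounds and the fingerprints encode real substructures, but the core idea is the same: high structural similarity to a drug with a known adverse effect raises a flag.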

Personally, I find these developments fascinating, but I suspect the era of the large-scale clinical trial will be with us for a while yet. Whether big pharma can leverage these kinds of simulations to screen for likely efficacious molecules with few adverse effects on human physiology remains to be seen.

What do you think? Let me know in the comments below.

Automatic For The People - Cardiac CT Reads Without Human Interaction?

One of the challenges of using cardiac CT to rule out coronary artery disease is the availability of suitably qualified people to read the studies, especially outside of the working day. Researchers from Beth Israel Deaconess Medical Center, Boston, MA recently described an automated software approach to CT reads, and presented the results at the 2009 RSNA meeting (http://www.theheart.org/article/1030325.do).

They trialled the COR Analyzer (Rcadia Medical Imaging, Auburndale, MA, http://www.rcadia.com/), a cardiac CT analysis system that works on the raw data without human input. The study involved CT datasets from 115 patients at low to intermediate risk of CAD imaged in the ER. In the 100 analyzable studies, compared against two expert CT readers, the algorithm’s negative predictive value was 98%, its sensitivity 83% and its specificity 82%. The ‘black box’ correctly identified only 5 of the 6 patients with significant obstructive coronary disease, a result which to me suggests that further refinements to the algorithm are needed.
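For readers less familiar with how those three headline figures relate, they all fall out of the same 2x2 confusion matrix. The counts below are my own reconstruction from the reported percentages (6 diseased patients, 5 caught, 100 analyzable studies), so treat them as approximate rather than the study's actual tallies.

```python
# Sensitivity, specificity and NPV from a 2x2 confusion matrix.
# tp/fp/tn/fn counts are approximate, back-calculated from the
# percentages reported in the RSNA study.

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # diseased patients correctly flagged
        "specificity": tn / (tn + fp),  # disease-free patients correctly cleared
        "npv": tn / (tn + fn),          # chance a negative read is truly negative
    }

# 6 patients with disease (5 caught, 1 missed); 94 without (approx. 77 cleared).
m = diagnostic_metrics(tp=5, fp=17, tn=77, fn=1)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
```

Note why the NPV can be so high despite a modest 83% sensitivity: with only 6 diseased patients in 100, the negatives are dominated by true negatives, which is exactly why this kind of tool is pitched at rule-out rather than rule-in.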

The same system was also tested in a different population of patients in a study published in November 2009 in European Radiology (http://www.springerlink.com/content/aml5708k1147r547/). This time, an X-ray coronary angiogram was used as the gold standard, and a population with either chest pain or an abnormal nuclear perfusion study was chosen. The COR Analyzer pulls out from the CT dataset curved multiplanar reformats (MPRs) for all four main coronary arteries and highlights those where stenosis is suspected, or where it believes there has been a technical failure of acquisition or processing. Results were similar to the RSNA study: high negative predictive values at the patient level, but of note the system missed two of the three significant stenoses of the left main coronary artery.

Considering these findings, the algorithm in its current form seems best placed to aid in ruling out CAD in patients with a low likelihood of disease. Do you agree?

What do you think of computer aided diagnostics? We’ve seen it applied in other areas of medicine - EKG reads, CT colonography and for automatic detection of pulmonary emboli on CT.

Do you use this kind of technology in your practice?

Would you be happy to ‘hand over’ diagnostics to a black box?