
Critical Appraisal of Research Articles: Diagnosis

Definitions

Sensitivity - The proportion of people with disease who have a positive test.

Specificity - The proportion of people free of a disease who have a negative test.

Sensitivity reflects how good the test is at picking up people with the disease, while specificity reflects how good the test is at identifying people without the disease.

Sensitivity and specificity are calculated vertically in a 2 × 2 table: each is computed within a single disease-status column (the disease-present column for sensitivity, the disease-absent column for specificity).
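As a minimal sketch of the column-wise calculation, using hypothetical cell counts (not taken from any study):

```python
# Hypothetical 2 x 2 table counts for illustration only.
tp = 90   # disease present, test positive (true positives)
fn = 10   # disease present, test negative (false negatives)
fp = 20   # disease absent, test positive (false positives)
tn = 80   # disease absent, test negative (true negatives)

# "Vertically": each measure uses only one disease-status column.
sensitivity = tp / (tp + fn)   # among those WITH the disease
specificity = tn / (tn + fp)   # among those WITHOUT the disease

print(sensitivity)  # 0.9
print(specificity)  # 0.8
```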

These measures are combined into an overall measure of the efficacy of a diagnostic test called the likelihood ratio: the likelihood that a given test result would be expected in a patient with the target disorder, compared to the likelihood that the same result would be expected in a patient without the disorder.

For more information on how to calculate the likelihood ratio, please see the Centre for Evidence Based Medicine or eMedicine's Screening and Diagnostic Tests.
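A brief sketch of the standard formulas, assuming example sensitivity and specificity values (the numbers are illustrative, not from the source):

```python
# Illustrative values; in practice these come from the study's 2 x 2 table.
sensitivity = 0.9
specificity = 0.8

# LR+ : how much more likely a POSITIVE result is in a patient
# with the disorder than in one without it.
lr_positive = sensitivity / (1 - specificity)

# LR- : how much more likely a NEGATIVE result is in a patient
# with the disorder than in one without it.
lr_negative = (1 - sensitivity) / specificity

print(round(lr_positive, 2))   # 4.5
print(round(lr_negative, 3))   # 0.125
```

A likelihood ratio well above 1 means a positive result substantially raises the probability of disease; an LR− close to 0 means a negative result substantially lowers it.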

Questions to Ask

Is the study valid?

  • Was there a clearly defined question?
  • Was the presence or absence of the target disorder confirmed with a validated test ("gold" or reference standard)?
  • Were the reference standard and the diagnostic test interpreted blind and independently of each other?
  • Was the test evaluated on an appropriate spectrum of patients?
  • Was the reference standard applied to all patients?

What were the results? What are the likelihood ratios?

Are the results important?

   What is meant by test accuracy?

a. The test can correctly detect a disease that is present (a true positive result).

b. The test can incorrectly detect disease when it is really absent (a false positive result).

c. The test can incorrectly identify someone as being free of a disease when it is present (a false negative result).

d. The test can correctly identify that someone does not have a disease (a true negative result).
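The four outcomes above can be sketched as a small classification helper; the function name and inputs are hypothetical, chosen only to mirror the a–d list:

```python
def classify_result(disease_present: bool, test_positive: bool) -> str:
    """Label one patient's outcome as in items a-d above (illustrative helper)."""
    if disease_present and test_positive:
        return "true positive"    # (a) disease present, correctly detected
    if not disease_present and test_positive:
        return "false positive"   # (b) disease absent, incorrectly detected
    if disease_present and not test_positive:
        return "false negative"   # (c) disease present, incorrectly missed
    return "true negative"        # (d) disease absent, correctly cleared

print(classify_result(True, True))    # true positive
print(classify_result(False, True))   # false positive
```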