You’re chugging a beer with a bud when he blurts, “Hey, this new study says HIVers who drink have worse CD4 counts and viral control than those who don’t.” Uh-oh. All those stats and studies from the Net and newspapers make your head spin, but maybe you’d better try to understand this one. Ask these questions before dating the data in any study:
- WHAT KIND of study is it? Most are either experimental (giving different groups different treatments) or observational (tracking a group’s behavior over time). Experimental studies try to “control” for everything but the treatment—with each group equal in age, sex, degree of illness and other variables. That way, any difference in results should be explained by the treatment alone. For instance, to see if Bactrim prevents PCP, one group would take Bactrim, the other a placebo, or fake pill. It would be unethical to study some topics experimentally—such as how meds interact with alcohol or illicit drugs—since that would mean requiring people to take those substances. Here docs rely on observational studies—collecting data on lung-related infections, for example, by following a group of women HIVers who smoke. At best, these studies can show a strong association between a behavior and an outcome, but they can’t prove a cause. These are most useful when people are followed for a long time and when dose (packs per day) can be linked to outcome (disease risk).
- WHEN was the research done? HIV treatment changes constantly, so a study following HIVers on meds from 1995 to 1997 won’t tell you much—HAART has changed vastly since then. Look for when the study was performed as well as when results were reported.
- HOW MANY folks participated? A thousand subjects will provide more reliable data than 20. The larger the group, the more likely the results are “statistically significant”—too strong to be chalked up to mere chance.
- HOW DOES IT RELATE to other research? Even if one investigation failed to show that triple therapy was better than monotherapy, many other papers support the three-drug approach. Often you have to read down to the conclusion to learn why the researchers think their results differ from others’—and whether their conclusions are significant anyway. Remember: Each study is just a piece of the ongoing puzzle of HIV research. No one study has the final say.
- DOES IT APPLY to you? This depends on how much you’re like the participants. A study might assert that 50 percent of HIVers have lipodystrophy, but the subjects may have been on HIV treatment far longer than you—or on meds or classes of meds you’ve never taken. Don’t stop at the headlines—the devil’s always in the details.
CASE STUDY
OK, let’s practice. That booze study your friend mentioned was conducted by the University of Miami and reported in the March 2003 Addiction Biology. The researchers surveyed 220 recreational-drug users with HIV. (Did your pal mention that they were all drugging, too?) The participants who were on HAART and drank three to four times a week or more were twice as likely as light or non-drinkers to have CD4 counts below 500, and four times less likely to reach an undetectable viral load. Point by point:
- WHAT KIND? It was an observational study—it found that the risk factor (drinking) is associated with the outcome (poor response to treatment). It doesn’t prove that drinking was the cause. Perhaps the drinking caused sloppy adherence, which in turn caused the outcome.
- WHEN? Recent—conducted from 1999 to 2001, reported in 2003.
- HOW MANY? Fairly small—220 people.
- SIMILAR RESEARCH? Before you dismiss this study because, say, you don’t do drugs, look for more info. You’ll find a number of studies on alcohol and HIV, including one performed in 2001 and reported in 2003 in the journal Alcoholism: Clinical and Experimental Research, which also shows an association between booze and reduced CD4 counts/elevated viral loads in HIVers on HAART (and this time they weren’t recreational drug users).
- DOES IT APPLY? Yes, if you’re an active drug user, drink heavily and are taking HAART. Maybe, if you booze it up and take meds but don’t do party drugs. Not necessarily, if you only drink occasionally or you’re not on HAART.
- THE TAKE-HOME? No study’s a cheat sheet for all your treatment woes, but some do deliver the 411. Review with your doc before making any drastic changes—hand her a printout or Web address for any study you want to discuss. Honing a critical eye (and healthy skepticism) takes time, but good marks will translate into better HIV care.
TECH TALK
Parlez-vous HIV study lingo? Some common research terms:
Cohort—A group being studied. Notable cohorts include the Multicenter AIDS Cohort Study (MACS), with more than 5,000 gay and bisexual male HIVers across America, and the Women’s Interagency HIV Study (WIHS), involving more than 2,500 women.
Bias—A flaw that changes the results. Example: A “selection bias” skews data because of unrepresentative subjects—only seeking participants from the back rooms of sex clubs, say, to study gay men’s safer-sex practices.
Blinded—Participants don’t know if they’re getting the treatment drug or the placebo (or a new combo vs. an older one). Researchers can be blinded, too, not knowing which patients get which treatment. Patients and researchers both in the dark? The study is double-blinded.
For more AIDS-related medical and research terms, visit www.smartlink.net/~martinjh/ch_glos.htm
FOLLOW THE MONEY
No matter who pays for a study, results are results. But drug companies often put the best face on the outcome, highlighting their products’ strengths and burying the weaknesses in fine print.
Case in point: the VaxGen trial. Results showed that the vaccine overall gave no protection against HIV, but the company spun the news to focus on the small group of minority participants who might have derived a real benefit.
Media reports often don’t name the funder, so read critically.