Wednesday, March 12, 2014

Systematic reform in medicine - the case of early elective delivery might offer a model

This interesting article in the New York Times tells the story of the dramatic reduction in early elective deliveries since 2010. This is not something I've followed before, but apparently early elective delivery (meaning an induced delivery or elective cesarean section before 39 weeks gestation that is not medically necessary) accounted for 17% of all births in 2010.

Although early elective delivery was identified as an unsafe practice as far back as 1979, and medical groups and nongovernmental organizations lobbied for years for reform, change in practice was slow. Yet since 2010 there has been a 73% decline in these procedures, and the national average last year was 4.6%.

This article provides a good description of some of the catalyzing factors, including a couple of scientific articles offering examples of how hospitals had reduced the practice, increased pressure to measure the incidence of these procedures, and leadership by two states, South Carolina and Texas, in dramatically reducing their rates. South Carolina, for example, assembled a team of providers from all hospitals in the state. The team meets monthly to share best practices and discuss challenges. Then, after a couple of years of that, the state stopped payments to hospitals (through Medicaid and the main private insurer) for early elective deliveries. Over the following six months, rates dropped by 45%.

This is a nice example of how concerted efforts to produce reform can lead to real changes.

Wednesday, March 5, 2014

e-cigarettes and the importance of knowing your audience

The second article in a series by the New York Times about electronic cigarettes points out a challenge facing researchers in every field: know your audience. The article discusses how youth are rapidly adopting the use of nicotine delivery devices that adults and researchers call "e-cigarettes," but which the kids are calling "e-hookahs," "vape pens," and other names - anything but "e-cigarettes."

While one could see this as just a classic case of mom and dad being out of touch, the impact is real. The Centers for Disease Control and Prevention's national tobacco use survey and statewide surveys like the California Healthy Kids Survey ask youth about their use of "e-cigarettes." From the reporting in this article, it looks like many young people will answer "no" to that question, believing that the product they are using is not an "e-cigarette," when in fact they have used exactly the type of product the survey designers intended to ask about.

This is not just a matter of social desirability creating information bias - the reporting in this article suggests that youth would feel comfortable reporting their use of these products if the questions asked about that use in terms they understood. It is a matter of not knowing your audience.
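As a concrete sketch of what that could look like in practice, the hypothetical item below names the construct and every synonym youth reportedly use, so the question measures the behavior rather than a single label. The wording and term list here are my own illustration, not taken from the CDC or California instruments:

```python
# Hypothetical sketch: wording a survey item around the construct
# ("electronic vapor products") rather than one label ("e-cigarettes").
# The term list and question wording are illustrative only.

DEVICE_TERMS = ["e-cigarette", "e-hookah", "vape pen", "hookah pen"]

def build_item(terms: list[str]) -> str:
    """Render a question stem that names every synonym youth might use."""
    quoted = ", ".join(f'"{t}"' for t in terms)
    return (
        "Have you ever used an electronic vapor product, "
        f"sometimes called {quoted}?"
    )

print(build_item(DEVICE_TERMS))
```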

In my work in West Africa, one of our challenges was to develop a survey that would measure constructs around reintegration in a diverse but generally uneducated population. The first important practice we employed was to begin with discussions with the group we hoped to learn about, finding out from them what they thought was important and taking note of the terminology they used for sensitive constructs. For example, one of the topics we wanted to address was participation in transactional sex or prostitution. In Sierra Leone, some young women suggested we call that "having boyfriends." This wasn't just a way of easing the stigma associated with the practice; it also helped them understand what we meant, because that was how they talked about it among themselves. When we raised the concern that some people might endorse having boyfriends when they were talking about a long-term stable relationship, the girls laughed, because that is not how anyone would have answered - the meaning was clear to them, even if it was not crystal clear to us.

The second practice was to pilot test the survey with qualitative probes after each question, so we could evaluate whether participants understood each question the way the researchers did. An example: after the question "Is your boyfriend or husband supportive of your children?" we probed participants to share "how is he supportive or unsupportive?" This yielded very important information. Some participants said "No" because they did not have a boyfriend or husband, while other participants said "No" and explained that their husband refused to let the children eat at the same table or would beat them. These are two very different kinds of "No" responses.
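For readers who like to see the mechanics, here is a minimal hypothetical sketch of how pairing the closed-ended answer with the probe text lets an analyst separate those two kinds of "No." The records and keyword cues are invented for illustration; in a real study the free text would be coded by hand against a qualitative codebook:

```python
from dataclasses import dataclass

@dataclass
class Response:
    answer: str   # closed-ended answer: "Yes" or "No"
    probe: str    # free text from "How is he supportive or unsupportive?"

# Invented keyword cues purely for illustration.
NO_PARTNER_CUES = ("no boyfriend", "no husband", "not married")

def classify(resp: Response) -> str:
    """Distinguish 'No' meaning 'not applicable' from a substantive 'No'."""
    if resp.answer == "Yes":
        return "supportive"
    text = resp.probe.lower()
    if any(cue in text for cue in NO_PARTNER_CUES):
        return "not_applicable"   # 'No' because there is no partner
    return "unsupportive"         # 'No' describing unsupportive behavior

responses = [
    Response("No", "I have no husband and no boyfriend."),
    Response("No", "He refuses to let the children eat at the same table."),
]
for r in responses:
    print(classify(r))  # -> not_applicable, then unsupportive
```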

After the pilot testing, we asked participants for their feedback on the survey - did it measure what they thought it should measure, were there questions they especially did or did not like, was there anything they didn't understand? With this feedback, we revised the survey, deciding to maintain the qualitative probes in the final version so as to have the option of exploring the quantitative findings qualitatively.

This post is getting rather lengthy, so I'll end it here with a final suggestion. For most research to produce meaningful results, study participants should be engaged at all phases of the research, from study design all the way through data analysis. Otherwise, interpretation of the study findings can be at best a murky endeavor and at worst not credible.

Monday, March 3, 2014

Breast cancer screening

I hope the world will take notice of a study just published in the British Medical Journal reporting 25-year follow-up results from a randomized controlled trial of mammography screening. In the study, almost 90,000 Canadian women were randomized between 1980 and 1985 to receive either annual screening mammograms or usual care in the community. The rates of death due to breast cancer in the two groups were indistinguishable over the study period.

The authors suggest that in a country with accessible and available treatment for breast cancer, screening mammography (as opposed to diagnostic mammography) does not improve the chances of surviving breast cancer. In addition, based on the difference in breast cancer rates between the screening and control arms, the authors believe that 22% of the breast cancers diagnosed and treated in the screening arm were "over-diagnosed," meaning that the diagnosed "cancer" would never have resulted in life-threatening disease.
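To make that arithmetic concrete, here is a small illustrative calculation of how a between-arm difference yields an over-diagnosis percentage. The counts below are invented to produce a 22% figure and are not the trial's actual numbers:

```python
# Illustrative arithmetic only; these counts are made up and are NOT the
# actual figures from the Canadian trial.
cancers_screening_arm = 600  # cancers diagnosed in the mammography arm
cancers_control_arm = 468    # cancers diagnosed in the usual-care arm

# With randomization producing comparable arms, a persistent excess of
# cancers in the screening arm is attributed to over-diagnosis.
excess = cancers_screening_arm - cancers_control_arm
overdiagnosis_rate = excess / cancers_screening_arm
print(f"{overdiagnosis_rate:.0%} of screen-arm cancers over-diagnosed")  # 22%
```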

Along with several other articles in the past few years (like this one, which described a push away from calling DCIS "cancer"), these findings suggest that the way we currently screen for, diagnose, and treat breast cancer is flawed. Though it will be hard to counter the "feel good war on breast cancer" that preaches screening and early detection as life saving, the science is clearly speaking.