While one could see this as just a classic case of mom and dad being out of touch, the impact is real. The Centers for Disease Control and Prevention's national tobacco use survey and statewide surveys like the California Healthy Kids Survey ask youth about their use of e-cigarettes. From the reporting in this article, it looks like many young people will answer "no" to that question, believing that the product they are using is not an "e-cigarette," when in fact they have used exactly the type of product the survey designers intended to ask about.
This is not just a matter of social desirability creating information bias - the reporting in this article suggests that youth would feel comfortable reporting their use of these products if the questions asked about that use in terms they understood. It is a matter of not knowing your audience.
In my work in West Africa, one of our challenges was to develop a survey that would measure constructs around reintegration in a diverse but generally uneducated population. The first important practice we employed was to begin with discussions with the group we were hoping to learn about, finding out from them what they thought was important and taking note of the terminology they used for sensitive constructs. For example, one of the topics we wanted to address was participation in transactional sex or prostitution. In Sierra Leone, some young women suggested we call that "having boyfriends." This wasn't just a way of easing the stigma associated with the practice; it also helped them understand what we meant, because that was how they talked about it among themselves. When we raised the concern that some people might endorse having boyfriends when they were talking about a long-term stable relationship, the girls laughed, because that is not how anyone would have answered - the meaning was clear to them, even though it was not necessarily crystal clear to us.
The second practice we engaged in was to pilot test the survey with qualitative probes included after each question, so we could evaluate whether participants understood each question the same way the researchers did. An example: after a question asking, "Is your boyfriend or husband supportive of your children?" we probed participants to share with us "how is he supportive or unsupportive?" This yielded very important information. For example, some participants said "No" because they did not have a boyfriend or husband, while other participants said "No" and explained that their husband refused to let the children eat at the same table or would beat the children. These are two very different kinds of "No" responses.
After the pilot testing, we asked participants for their feedback on the survey - did it measure what they thought it should measure, were there questions they especially did or did not like, was there anything they didn't understand? With this feedback, we revised the survey, deciding to maintain the qualitative probes in the final version so as to have the option of exploring the quantitative findings qualitatively.
This post is getting rather lengthy, so I'll end it here with a final suggestion. For most research to produce meaningful results, study participants should be engaged at all phases of the research, from study design all the way through data analysis. Otherwise, interpretation of the study findings can be at best a murky endeavor and at worst not credible.