In the Netherlands, as in many other countries, we invest in the promotion of healthful and safe behaviours to contribute to the prevention of chronic disease, accidents, et cetera. One way to promote healthy lifestyles is through so-called national campaigns, which often use mass media to communicate health promotion messages to the population at large.
Such campaigns should be evaluated to study whether they indeed contribute to more healthful lifestyles among the population, i.e. whether the money and other resources allocated to them are well spent.
A valid evaluation of national campaigns is not easy. The strongest research design to evaluate the effects of interventions is the so-called randomized controlled trial (RCT). RCTs are considered the most reliable form of scientific evidence in healthcare because they minimize spurious causality and bias. RCTs are mainly used in clinical studies, but are also employed in other fields such as judicial, educational, and social research. As their name suggests, RCTs involve the random allocation of different interventions (or treatments) to subjects or participants. This ensures that confounding factors are evenly distributed between treatment groups.
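To make the randomization idea concrete, here is a minimal sketch of how participants could be randomly allocated to an intervention and a control arm. The participant IDs and group sizes are purely hypothetical and serve only to illustrate the principle.

```python
import random

def randomize(participant_ids, seed=42):
    """Randomly split participants into an intervention and a control arm.

    With a sufficiently large sample, chance alone balances known and
    unknown confounders (age, motivation, baseline behaviour) across arms.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"intervention": ids[:half], "control": ids[half:]}

# Hypothetical example with 200 participants.
arms = randomize(range(1, 201))
print(len(arms["intervention"]), len(arms["control"]))  # 100 100
```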
However, in the evaluation of national campaigns randomization is not possible and a control group is not available, because the population at large is exposed to the intervention.
Many national campaigns are therefore evaluated by means of a very simple design with one before and one after measurement. This often means that a sample of the target population is surveyed on the behaviour the campaign addresses, once before and once after the campaign is launched. If the after-campaign survey shows better results, this is taken to indicate that the campaign was successful. The weakness of this research design can be illustrated with an example. The first picture next to this blog post shows the results of such before-and-after measurements related to the introduction of a bicycle helmet law in Australia. Before wearing a helmet became compulsory, the number of head injuries was much higher than after. Conclusion: making people wear helmets prevents head injuries!? The second picture shows that this was not the case. Here not one before and one after measurement was used, but a whole series of measurements before and after the law. This is called an interrupted time-series design. The picture shows that the reduction in head injuries was already ongoing before the helmet law was introduced and that this downward trend simply continued after the law was in place; the law did not change the trend in any way. Conclusion: the bicycle helmet law had no effect at all on head injuries.
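To illustrate the same point with numbers rather than pictures, the sketch below simulates a purely hypothetical injury count that was already declining at a constant rate before a "law" that has no effect at all. A naive one-before/one-after comparison still suggests a large benefit, while the full series shows the same downward trend on both sides. This is an illustrative simulation, not the Australian data from the pictures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yearly head-injury counts: a steady downward trend that
# started well before the law, plus noise. The "law" at year 5 has no
# effect at all in this simulation.
years = np.arange(10)
injuries = 1000 - 60 * years + rng.normal(0, 20, size=10)
law_year = 5

# Naive one-before / one-after comparison: looks like a big success.
print("year before law:", round(injuries[law_year - 1]))
print("year after law: ", round(injuries[law_year]))

# The full series tells a different story: the decline per year is
# essentially the same before and after the law.
print("trend before:", np.polyfit(years[:law_year], injuries[:law_year], 1)[0])
print("trend after: ", np.polyfit(years[law_year:], injuries[law_year:], 1)[0])
```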
The interrupted time-series design is generally regarded as best practice in the evaluation of national campaigns and other circumstances where a control group is not possible. The Dutch Health Council recently published an advisory report on the evaluation of national campaigns in which this was confirmed (http://www.gr.nl/samenvatting.php?ID=1454&highlight=landelijke%20campagnes). However, an interrupted time-series design requires careful and timely planning of the evaluation, additional resources for the extra measurements, and expertise in the statistical analysis of time-series data. Time, extra resources and specialized statistical expertise are not always available to the organizations that develop and implement the campaigns, and these organizations feel that an interrupted time-series design may be the best but not a realistic option for them. Furthermore, evaluation of national campaigns may not always need to be concerned with effects in terms of lifestyle behaviour change; sometimes evaluation in terms of reach, adoption, implementation or maintenance of campaign activities (see the RE-AIM framework, http://www.re-aim.org/) may be sufficient, and such evaluations may require different research designs.
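For readers wondering what the statistical analysis of an interrupted time series can look like in practice, below is a minimal sketch of one common approach, segmented regression, using simulated data and the Python statsmodels package. The variable names and the simulated series are assumptions for illustration only; this is not the specific method prescribed in the Health Council advice.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly outcome series with a campaign starting at month 24.
rng = np.random.default_rng(1)
n, start = 48, 24
df = pd.DataFrame({
    "time": np.arange(n),
    "after": (np.arange(n) >= start).astype(int),
})
df["time_since"] = np.where(df["after"] == 1, df["time"] - start, 0)
# Simulated outcome: a pre-existing downward trend and no true campaign effect.
df["outcome"] = 100 - 0.8 * df["time"] + rng.normal(0, 3, n)

# Segmented regression: 'after' estimates an immediate level change at the
# start of the campaign, 'time_since' estimates a change in slope afterwards.
model = smf.ols("outcome ~ time + after + time_since", data=df).fit()
print(model.summary().tables[1])
```

If the campaign had an effect, the coefficients for `after` (level change) and/or `time_since` (slope change) would differ from zero; in this simulation they should not, mirroring the helmet-law example above.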
The Netherlands Organisation for Health Care Research and Development, ZonMW (http://www.zonmw.nl/), has asked me to further explore best and realistic practice in the evaluation of national campaigns in the Netherlands. In the coming months, interviews will be held with representatives of all the organizations in the Netherlands that organize and implement national campaigns, to learn about their evaluation goals, barriers and wishes for improvement.