Nationwide health promotion campaigns are an important part of government-funded health promotion efforts. Valid evaluations of these campaigns are important but difficult, because gold-standard research designs are not applicable and the budget, personnel, and time available for evaluation are often very tight. In the Netherlands, Health Promotion Institutes (HPIs) are responsible for these campaigns. We conducted an exploratory study among the HPIs to gain better insight into the goals, practices, conditions, and perceived barriers regarding evaluation of these campaigns. A paper reporting this exploration was recently published in Health Promotion International.
We conducted personal interviews with representatives of six different HPIs who had direct responsibility for managing the evaluation of their national campaigns. These interviews made clear that the HPIs typically used a pre-test–post-test design with single measurements before and after the campaign, without a control group. In campaign preparations, HPIs used qualitative research to pre-test and pilot-test some campaign materials or activities, but true formative evaluation was rare. Accountability to their sponsors, peers, and the population at large was an important reason for evaluation, but the most important intrinsic motivation to evaluate was to inform future campaigns. In terms of the RE-AIM (Reach, Efficacy, Adoption, Implementation, Maintenance) framework, evaluation was mostly restricted to reach and effects; hardly any evaluation of adoption, implementation, or maintenance was reported. Budget restrictions and time constraints were reported as the main barriers to more extensive formative evaluation and to more elaborate effect evaluations using interrupted time-series designs. In conclusion, our exploration indicates that evaluation of nationwide campaigns is standard procedure, but the research designs applied are weak, owing to a lack of time, budget, and research-methodology expertise. In addition to extra budget and opportunities for longer-term planning, input from external experts on evaluation research designs and data management is needed to improve evaluation.