RESEARCH

 

Improving the use of evidence in health impact assessment

 

 

Jennifer Mindell I,*; Jane Biddulph II; Lorraine Taylor III; Karen Lock IV; Annette Boaz V; Michael Joffe VI; Sarah Curtis VII

I Research Department of Epidemiology and Public Health, University College London, 1-19 Torrington Place, London, WC1E 6BT, England
II Research Department of Infection and Population Health, University College London, London, England
III National Institute for Health and Clinical Excellence, London, England
IV London School of Hygiene and Tropical Medicine, London, England
V Division of Health and Social Care Research, King's College London, London, England
VI Department of Epidemiology and Public Health, Imperial College London, London, England
VII Department of Geography, Durham University, England

 

 


ABSTRACT

OBJECTIVE: Health impact assessment (HIA) has been proposed as one mechanism that can inform decision-making by public policy-makers. However, HIA methodology has been criticized for a lack of rigour in its use of evidence. The aim of this work was to formulate, develop and test a practical guide to reviewing publicly available evidence for use in HIA. The term evidence includes all scientific assessments, whether research studies in peer-reviewed journals or previous HIAs.
METHODS: The formulation and development of the guide involved substantial background research, qualitative research with the target audience, substantial consultations with potential users and other stakeholders, a pilot study to explore content, format and usability, and peer review. Finally, the guide was tested in practice by invited volunteers who used it to appraise existing HIA evidence reviews.
FINDINGS: During development, a wealth of data was generated on how the guide might be applied in practice, on terminology, on ensuring clarity of the text and on additional resources needed. The final guide provides advice on reviewing quantitative and qualitative research in plain language and is suitable for those working in public health who may not have experience in reviewing evidence. During testing, it enabled users to discriminate between satisfactory and unsatisfactory evidence reviews. By late 2009, 1700 printed copies of the guide had been distributed and 2500 copies had been downloaded.
CONCLUSION: Substantive and iterative consultation, though time-consuming, was pivotal to producing a simple, systematic and accessible guide to reviewing publicly available research evidence for use in HIA.





 

 

Introduction

The health of an individual is influenced by a range of factors amenable to public policy on, for example, housing, education and transport. Consequently, multidisciplinary policies outside the jurisdiction of health services or health ministries have the potential to influence health. 1 While policy-makers increasingly focus on obtaining better information, they often receive little support with decision-making. Health impact assessment (HIA) has been proposed as one mechanism that can support decision-making; 2 it focuses primarily on policy outside the medical sector and on intersectoral actions. 3 Recently, the World Health Organization (WHO) Commission on Social Determinants of Health recommended that the impact of all policies on health inequality should be assessed. 1 In practice, HIA also includes an assessment of impact on health inequality. 4

Consideration of the health impact of policies has been encouraged across the world. Major centres for HIA include Australia, 5 Thailand 6 and several countries in Europe. 7–9 The Finnish government made HIA and “health for all policies” central strands of its 2006 presidency of the European Union. 10 In addition, HIA is being introduced in the United States of America. 11 The comparatively high rates of morbidity and mortality experienced in middle- and low-income countries can only partly be addressed by improving health-care provision, so the need for HIA is even greater in these countries than in the developed world.

Various definitions of HIA have been proposed over time. 12 Ratner et al. (1997) defined HIA as “any combination of procedures or methods by which a proposed policy or program may be judged as to the effect(s) it may have on the health of a population”. 13 In 1999, the WHO Regional Office for Europe added “and the distribution of those effects within the population” 4 to include consideration of health inequalities; this concept is central to the Jakarta declaration. 14 Further, HIA has also been described as “the use of the best available evidence to assess the likely effect of a specific policy in a specific situation”, 15,16 leading to comparisons with evidence-based medicine. Current HIA methodology has been criticized for a lack of rigour in collecting and analysing evidence. 17–19 Thus, despite the policy drive to encourage its use, HIA will be discredited if it fails to be both rigorous and well founded. 20

It is generally agreed that three types of knowledge are combined in HIA: that provided by stakeholders based on their experience; local data; and publicly available evidence, including past HIAs (Fig. 1). It is important that better frameworks are developed for integrating different types of research evidence so that they can be used in decision-making 21 and, consequently, so that HIAs can make better use of publicly available evidence, such as scientific assessments, research studies published in peer-reviewed journals and previous HIAs. For simplicity, the term "evidence" is used throughout this paper to cover all publicly available evidence.


Several specific difficulties are encountered when reviewing evidence for use in HIA. 22 These include: tight timescales; a diverse evidence base (created, in part, by the diversity of health impacts and complex causal pathways); the wide range of stakeholders; the need to make recommendations to decision-makers; the need to consider the reversibility of adverse factors that damage health and have an unequal impact on different population subgroups; and the need to review relevant qualitative as well as statistical evidence. 21,22 Another important problem is a lack of skills and training, particularly in searching the literature, undertaking critical appraisals and synthesizing findings. In medium- and low-income countries, where specific expertise is likely to be scarce, there is a particular need for guidance on how to conduct high-quality HIA. Guidance for those commissioning HIAs is also necessary to enable them to evaluate the evidence obtained.

This paper describes the formulation, development and testing of a simple guide to reviewing evidence for use in HIA. The guide was designed to improve the way evidence is used by providing user-friendly and accessible assistance for those responsible for commissioning, conducting or appraising evidence reviews. The intention was to make the guide authoritative and practical. In addition, it was designed to support those carrying out both brief and comprehensive reviews of evidence in HIA, thus reflecting the flexibility required by the application of HIA itself. Finally, we investigated whether the guide enabled users to distinguish evidence reviews of acceptable quality from those of poor quality.

 

Methods

A steering group, which comprised the authors of this report plus the London Health Observatory's HIA facilitator while she was in post, was set up to draft and amend the guide, to organize consultations and to disseminate the guide. Fig. 2 shows the various stages involved in the formulation (phase 1), development (phase 2) and testing (phase 3) of a guide to reviewing evidence for use in HIA. The entire process involved substantial consultation with individuals who carry out evidence reviews for use in HIAs, such as HIA academics, HIA practitioners and public health professionals (Table 1). A modified Delphi technique involving iteration and feedback was used at each stage during the many consultations on, and revisions to, the guide.


Members of an advisory group were asked to comment on the contents of the developing guide throughout the process. The advisory group and the individuals interviewed in the qualitative research study in phase 1, the invitees and volunteers who piloted and conducted a peer-review of the developing guide in phase 2, and individuals who tested the final guide in phase 3 were all independent of each other, and no individual was involved in more than one phase.

Wider consultation, with feedback from numerous additional individuals, was implemented by members of the steering group who presented various preliminary versions of the guide at HIA training courses, at workshops at international HIA and national public health conferences, and via email networks.

Phase 1: formulation

A scoping review of the literature and of existing guidelines on reviewing different types of research (i.e. quantitative and qualitative) was undertaken to identify criteria for the principles underpinning the assessment of research evidence likely to be relevant to HIA. Key themes and criteria were collated and used to structure the content of an outline of the guide that was presented to the advisory group for comment (Fig. 2).

The aim of the qualitative research study was to determine the most practical, relevant and accessible format for the guide. A series of in-depth, face-to-face interviews lasting about 75 minutes each was conducted with 10 representatives of the guide's target audience. Participants were recruited from three cities in England: London, Birmingham and Cambridge. To achieve a cross-section of the different sectors, one individual involved in each of commissioning, conducting and appraising HIA evidence reviews was recruited from each of the three cities and at least one representative from each of local government, a public health authority and the independent (commercial) sector was recruited from each city.
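
To illustrate the recruitment frame described above, the following sketch (hypothetical Python written for this description, not part of the study protocol) simply enumerates the city-by-role cells that the sampling design was intended to cover; the city, role and sector labels are those mentioned in the text.

from itertools import product

# Recruitment frame for the qualitative interviews, as described above
cities = ["London", "Birmingham", "Cambridge"]
roles = ["commissioning", "conducting", "appraising"]
sectors = ["local government", "public health authority", "independent (commercial) sector"]

# One interviewee per role in each city gives a 3 x 3 frame of nine cells;
# ten participants were interviewed in total.
for city, role in product(cities, roles):
    print(f"{city}: one interviewee involved in {role} HIA evidence reviews")

print("Each city also needed at least one representative from each of:", ", ".join(sectors))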

The interviews were semi-structured and included open-ended questions. An outline of the guide was used towards the end of the interviews to elicit the interviewees' attitudes to and views on format and presentation. Full details of the qualitative research study are provided elsewhere. 23

Phase 2: development

The outline of the guide was modified on the basis of feedback from the advisory group, the results of the qualitative research study and feedback following wider dissemination of the outline (Fig. 2), and an initial version of the guide was produced. This was the starting point (stage 1) for phase 2, which also included wider dissemination, feedback and revision (stage 2), a pilot study (stage 3) and peer review (stage 4). Stage 2 was an iterative process in which initial and developing versions of the guide were sent alternately to the advisory group and to a wider range of individuals, as described above. The feedback received was used to amend the guide and produce the next version.

In the pilot study, a copy of the almost-final version of the guide was presented with an evaluation tool, which comprised a set of questions about the content, format and usability or usefulness of the guide. 24 No instructions were provided with the guide, so as to mimic real life, in which HIA practitioners would use it as a stand-alone document for a range of purposes, including commissioning evidence reviews, conducting literature reviews and appraising an existing review for use in a specific HIA. Minor changes were made to the guide following feedback from the pilot study and, subsequently, peer reviewers were sent a modified copy for comment.

Phase 3: testing

To test the guide's use in practice (stage 5), appraisers were sent one of five selected, brief or comprehensive evidence reviews that had been produced to support an HIA and asked to judge the review against the essential steps for a brief evidence review listed in the guide plus, if relevant, additional elements recommended for a “more comprehensive” review. 24 In particular, appraisers were asked whether the review was “of sufficient quality” to be made widely available. Where possible, the author or authors of the evidence review and the appraisers were blinded to each other's identities.

In stage 6, the final version of the guide was produced in a hard-copy form 25 and made publicly available in September 2006 as a free downloadable Adobe Acrobat document on a nationally recognized public health web site in the United Kingdom of Great Britain and Northern Ireland, 26 along with various supporting resources. The guide was also made available on a United Kingdom HIA web site. 27 As a proxy estimate of the guide's use in the field, the number of downloads from the guide's web sites 26,27 was ascertained.

 

Results

It took 2 years to progress from the outline of the guide used in phase 1 in 2004 to the final version in phase 3.

Phase 1: formulation

The criteria specifying the nature of the guide were developed following a scoping review of the literature and of existing guidelines on reviewing different types of evidence, such as systematic reviews, 28 qualitative studies 29 and non-randomized studies. The criteria included: the clarity of purpose and provenance of the review; the rationale for the review process; the processes employed by the review, including search strategies and inclusion and exclusion criteria; the quality of the research and the “strength” of the evidence; acknowledgement of likely biases; a summary of the conclusions drawn; and the limitations of the review. Consultations during this phase led to pivotal decisions being made about the purpose, presentation, format and content of the guide.
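
The criteria listed above lend themselves to being recorded as a simple structured checklist. The short sketch below (a hypothetical Python illustration based only on the criteria named in this paragraph; it is not the published guide or its appraisal form) shows how an appraiser might note which criteria an evidence review addresses.

# Criteria identified in phase 1 (see the list above); hypothetical encoding.
CRITERIA = [
    "clarity of purpose and provenance",
    "rationale for the review process",
    "search strategy and inclusion/exclusion criteria",
    "quality of the research and strength of the evidence",
    "acknowledgement of likely biases",
    "summary of the conclusions drawn",
    "limitations of the review",
]

def appraise(addressed):
    """Split the criteria into those addressed and those missing.

    `addressed` is the set of criteria that the evidence review covers.
    """
    met = [c for c in CRITERIA if c in addressed]
    missing = [c for c in CRITERIA if c not in addressed]
    return met, missing

# Invented example: a brief review that omits its search strategy and limitations.
met, missing = appraise(set(CRITERIA) - {
    "search strategy and inclusion/exclusion criteria",
    "limitations of the review",
})
print(f"Criteria addressed: {len(met)}/{len(CRITERIA)}; missing: {'; '.join(missing)}")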

The qualitative, in-depth, face-to-face interviews revealed a range of divergent views about the practical purpose of a guide to reviewing evidence for use in HIA. Less experienced HIA practitioners wanted a guide that would provide advice on how to review evidence for an HIA. By contrast, more experienced practitioners perceived the value of the guide to be in establishing quality standards for reviewing evidence.

Both groups judged the most practical, and motivating, format to be a good practice approach rather than a checklist of procedures, a step-by-step toolkit or a set of rules and instructions. This approach was seen as relevant and appealing by all practitioners because the same document could both provide guidance on the process of reviewing evidence for HIA and establish procedural standards.

Other findings identified by the qualitative research study were that the guide should:

•be written in plain, simple English;

•avoid academic and subject-specific terms to maximize accessibility and user-friendliness;

•provide a glossary of specialist terms whose use was unavoidable;

•be as simple and short as possible to provide quick and easy access to practical information;

•not be extended by being combined with practical examples or background literature;

•offer guidance separately for “brief” and “more comprehensive” reviews, presented side-by-side, rather than consecutively;

•employ an improved layout to aid comprehension; and

•use different terms to prevent misinterpretation.

Phase 2: development

The guide was developed in accordance with the qualitative research findings. A three-column format was used to present guidance on "brief" and "more comprehensive" evidence reviews side-by-side. The first column detailed "essential steps in a brief evidence review" and the second, "additional elements for a more comprehensive evidence review". A third column listed "tips and resources". For details, see the final version of the guide (available at: http://www.lho.org.uk/viewResource.aspx?id=10846).

During development, stage 1 centred on enhancing information on “tips and resources”, introducing additional steps into the literature search and review processes, and making the guide easier to use and understand. With each stage during development, the comments made on the guide became progressively less substantial. 24

Of the 32 participants directly involved in developing the guide, 26 were based in the United Kingdom, 5 came from elsewhere in Europe and 1 came from outside Europe (Table 1). Many other individuals from outside Europe actively involved in HIA were consulted through wider dissemination of the developing versions of the guide.

The most substantive modifications to the guide that arose in stages 3 and 4 were amendments to the introductory pages, the inclusion of examples of questions that a literature review for HIA might address, and the development of a glossary, as attempts to avoid jargon had not been entirely successful.

Phase 3: testing

The appraisers involved in testing the guide's use in practice (stage 5) found it to be usable and accessible. When appraised using the guide, one of the selected evidence reviews was judged not to meet the required quality standards. Reviews often lacked information on how various steps in the review process had been conducted; for example, details of the criteria used to search for relevant studies or to assess the quality of included studies might be missing, making it difficult to judge the representativeness of the evidence presented or the validity of the conclusions.

Appraisers generally made very positive comments about the guide, even those who said that they were initially sceptical. For example, one HIA practitioner commented: "My initial thoughts were that the guidance was too rigid for rapid HIAs but the more I worked through it and thought about it the more I agreed with it." Another stated that although the criteria "set a high standard for the quality of evidence reviews, they are reasonable and well judged in the context of HIAs. I would and will use them as a benchmark when undertaking general evidence reviews for HIAs ..." Full details of the appraisals are contained in a published report. 24

The final version of the guide produced in stage 6 is a 12-page A4 booklet 26,27 that takes the reader through nine steps (steps A–I; Box 1) in reviewing evidence common to "brief" and "more comprehensive" evidence reviews. Supporting information for the guide has also been made available on the web, including a glossary, 30 details of published sources of quality criteria 31 and a document on assessing causality. 32


By mid-October 2009, the guide had been downloaded almost 2500 times: 2050 from the London Health Observatory web site (http://www.lho.org.uk) 26 and 438 from the HIA Gateway 27 at the Association of Public Health Observatories web site (http://www.apho.org.uk), both in the United Kingdom. Between November 2008 and April 2009, monthly downloads from the London Health Observatory web site averaged 113 (range 56–237). The average for the period from May to October 2009 was higher, at 170.
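
The figures quoted above can be checked with simple arithmetic; the fragment below (illustrative Python, using only the numbers reported in this paragraph) shows how the approximate total of 2500 downloads is obtained.

# Downloads by mid-October 2009, as reported above
lho_downloads = 2050          # London Health Observatory web site
hia_gateway_downloads = 438   # HIA Gateway, Association of Public Health Observatories
print(lho_downloads + hia_gateway_downloads)  # 2488, i.e. almost 2500

# Reported monthly averages for the London Health Observatory site
avg_nov_2008_to_apr_2009 = 113  # range 56-237
avg_may_to_oct_2009 = 170       # higher in the later period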

Requests for feedback elicited responses from a range of professionals from around the world, including a medical consultant in Nigeria, a research student in Israel and a policy officer and coordinator for a Healthy Built Environments programme in Australia. They were generally enthusiastic and found the guide to be useful in their locations despite its origins in the United Kingdom. Although the legislative and environmental contexts of HIA can differ, it was appreciated that the underlying principles of good practice in reviewing published evidence apply across all jurisdictions.

 

Discussion

We undertook a substantive, systematic, practitioner-led consultation with the aim of producing a guide to reviewing evidence for use in HIA that is simple, systematic and accessible to a cross-section of people involved in HIA. A glossary was produced explaining any jargon or words with different lay and technical meanings whose use could not be avoided in reviewing evidence for HIA. The guide stipulates the minimum criteria that must be satisfied by any HIA evidence review, however brief it is or however limited the resources for producing it. It also suggests additional elements that a review should include when circumstances permit to make its conclusions more robust.

Although a comprehensive review might be considered the gold standard for reviewing evidence for use in HIA, the guide also sets minimum quality standards for brief reviews, which can be conducted in days or sometimes weeks. In practice, anyone commissioning an HIA can specify that the guide is used when preparing an evidence review and, for example, require that all steps essential for a brief review should be included plus a specified list of additional elements for a more comprehensive review. When the guide's use was tested in practice, it enabled users to distinguish between satisfactory and unsatisfactory evidence reviews.

Although expensive and time-consuming, the substantive consultations contributed considerably to the development and refinement of the guide. The guide was considered fit for purpose by those involved in the peer review and in testing the guide's use in practice. With each successive consultation, fewer changes were required. The guide may still be 12 pages long, but this is the shortest document that is able to cover all the elements considered essential during the development phase.

Reviewing evidence for an HIA is complicated by differences between contexts in how evidence is collated. 22 The challenge of framing questions for a broad evidence review for HIA was dealt with by including brief examples in the guide. In addition, the guide makes it clear that study design must be taken into consideration when addressing questions relevant for HIA, an approach that differs from the traditional hierarchical approach adopted for evaluating clinical evidence. 33

If the guide is to have a positive impact on the process of HIA, it must be seen to be useful and user-friendly. The substantive consultations with HIA practitioners, HIA academics and public health professionals ensured that a wide range of stakeholders was involved throughout the entire development process and that the guide is suitable for both conducting and appraising evidence reviews for HIA. Moreover, the involvement of individuals in the United Kingdom, in other parts of Europe and in other continents was invaluable for making the guide applicable outside Europe.

Dora 34 notes that HIA “brings transparency to the use of evidence in decision-making”. However, rigour is also needed to sustain the use of HIA. Use of the guide could improve the quality of evidence used in HIA and, therefore, increase the credibility of HIA. During development of the guide, it was noted that existing guidelines and toolkits, despite being theoretically sound and well regarded by experts, were often inappropriate for users' needs, as has been observed in other public health contexts. 35 The development process described in this paper was unusual in focusing specifically on the difficulties HIA practitioners encountered when trying to adopt an evidence-based approach in their daily work. The consultative process used here is one way of introducing scientific knowledge into public health practice.

During development of the guide, we also noted that, within the HIA field, there was: (i) limited capacity for reviewing and synthesizing research findings; (ii) a lack of familiarity with the principles of critical appraisal and with research methods; (iii) more broadly, barriers to knowledge transfer and to the implementation of evidence-based practice; and (iv) a need to increase the capacity for reviewing evidence. While the guide's use was being tested in practice, it was suggested that training based on the guide could be carried out.

The guide described here is unique. An existing Canadian decision-making tool on the use of evidence 36 is intended for use by managers and planners when making decisions on priorities for starting or ending specific interventions. Consequently, it differs from the tools needed to assess the potential impact of other policy proposals. 37

The final published version of the guide 25 has been widely disseminated and made available to networks of individuals actively involved, or likely to become involved, in HIA. Around 1700 printed copies of the final version have been distributed via national and international conferences and HIA training courses, mostly to international participants who do not have English as a first language (H Dreaves, personal communication, 2009), as well as to public health professionals and HIA practitioners. The high total and monthly download rates from web sites indicate that existing users are finding the document useful and are recommending it to others. The guide has also been made available at international conferences, and several hundred copies were distributed in Thailand.

Finally, local adaptations of the guide can incorporate information on country-specific legislation and regulations, links to language-specific web-based resources and “local ownership” (i.e. the professional groups involved in the local process that resulted in the document accept it and are committed to its use), all of which are important for the successful implementation of guidelines. 38,39 For example, an Australian version is being prepared with permission from the authors and funders of the original United Kingdom version. It includes minor changes to the text and alternative suggestions for information resources. Therefore, despite its origins in the United Kingdom, the guide is being used around the world and is bringing the underlying principles of reviewing evidence to the notice of HIA practitioners.

Acknowledgements

We would like to thank: the Department of Health for funding this project; Francesca Taylor for conducting, analysing and reporting the qualitative research; Raj Mehta at the London Health Observatory for creating the web pages; and Anne Sweetmore for designing the guide and its many revisions.

We would also like to thank the following individuals for their support for the project, for participating in the qualitative research, for commenting on draft principles or draft guidelines, for taking part in the pilot study or peer review of the guide, for offering an evidence review for appraisal, or for appraising one of the evidence reviews: Kukuwa Abba, Helen Atkinson, Muna Aziz, Lea den Broeder, Anna Boltong, Caron Bowen, Ceri Breeze, Antonella Cardone, Ben Cave, Rowena Clayton, Andrew Cook, Anthea Cooke, Adam Coutts, Margaret Douglas, Debbie Fox, Cheryl France, Angela Harden, Claire Higgins, Anna Hillier, Bobbie Jacobson, Stephen James, Caroline Keir, Mike Kelly, John Kemm, Carolyn Lester, Anita Linell, Mary Mahoney, Marco Martuzzi, Peter Molyneux, Jayne Parry, Mark Petticrew, Jenny Popay, Andrew Pratt, Francesca Racciopi, Deborah Richardson, Simon Sanderson, Amanda Sowden, Francesca Viliani, Salim Vohra, Rhiannon Walters and Colleen Williams.

Funding: This work was undertaken by authors who were funded by the Department of Health through its Policy Research Programme (grant reference number 030 0072).

Competing interests: None declared.

 

References

1. Closing the gap in a generation: health equity through action on the social determinants of health. Geneva: WHO Commission on Social Determinants of Health; 2008. Available from: www.who.int/social_determinants/final_report/en/ [accessed 30 April 2009].

2. Lock K. Les outils pour améliorer les politiques de santé publiques fondées sur des preuves avérées: les rôles potentiels de l'évaluation d'impact sur la santé, de l'analyse de décision et des techniques de prévision [Tools for improving 'evidence-based' public health policy: the potential roles of health impact assessment, decision analysis and future techniques]. Télescope 2008;14:107–17.

3. Bos R. Health impact assessment and health promotion. Bull World Health Organ 2006;84:914–5. PMID:17143468

4. WHO European Centre for Health Policy. Health impact assessment: main concepts and suggested approach. Gothenburg Consensus Paper, December 1999. Copenhagen: WHO Regional Office for Europe; 1999. Available from: www.apho.org.uk/resource/item.aspx?RID=44163 [accessed 10 June 2010].

5. Mahoney M. Health impact assessment in a policy context. In: Barraclough S, Gardner H, eds. Analysing health policy: a problem-oriented approach. Sydney: Elsevier Australia; 2007.

6. Phoolcharoen W, Sukkumnoed D, Kessomboon P. Development of health impact assessment in Thailand: recent experiences and challenges. Bull World Health Organ 2003;81:465–7. PMID:12894336

7. Ritsatakis A. Health impact assessment: a tool for health policy development. Regions for Health Network in Europe annual conference, Borås, Sweden, 12-13 October 2000.

8. Commission of the European Communities. Report from the commission to the council, the European parliament and the economic and social committee on the integration of health protection in community policies. Brussels: CEC; 1995.

9. Joffe M. Future of European community (EC) activities in the area of public health: European Public Health Alliance. Health Promot Int 1993;8:53–61. doi:10.1093/heapro/8.1.53

10. Health in all policies: prospects and potentials. Brussels: European Observatory on Health Systems and Policies; 2006. Available from: www.euro.who.int/document/E89260.pdf [accessed 30 April 2009].

11. Dannenberg AL, Bhatia R, Cole BL, Heaton SK, Feldman JD, Rutt CD. Use of health impact assessment in the United States: 27 case studies, 1999-2007. Am J Prev Med 2008;34:241–56. doi:10.1016/j.amepre.2007.11.015 PMID:18312813

12. Mindell JS, Boltong A, Forde I. A review of health impact assessment frameworks. Public Health 2008;122:1177–87. doi:10.1016/j.puhe.2008.03.014 PMID:18799174

13. Ratner PA, Green LW, Frankish CJ, Chomik T, Larsen C. Setting the stage for health impact assessment. J Public Health Policy 1997;18:67–79. doi:10.2307/3343358 PMID:9170789

14. Jakarta declaration on health promotion into the 21st Century. Copenhagen: WHO Regional Office for Europe; 1997.

15. Douglas M. World Trade Organisation agreements should be subject to health impact assessment [letter]. BMJ 2000;320:802–3. doi:10.1136/bmj.320.7237.802 PMID:10720379

16. Conway L, Douglas M, Gavin S, Gorman D, Laughlin S. HIA: piloting the process in Scotland. Glasgow: SNAP; 2000.

17. Parry J, Stevens A. Prospective health impact assessment: pitfalls, problems, and possible ways forward. BMJ 2001;323:1177–82. doi:10.1136/bmj.323.7322.1177 PMID:11711414

18. Petticrew M. Systematic reviews from astronomy to zoology: myths and misconceptions. BMJ 2001;322:98–101. doi:10.1136/bmj.322.7278.98 PMID:11154628

19. Thomson H. HIA forecast: cloudy with sunny spells later? Eur J Public Health 2008;18:436–8. PMID:18809593

20. Joffe M, Mindell J. A framework for the evidence base to support health impact assessment. J Epidemiol Community Health 2002;56:132–8. doi:10.1136/jech.56.2.132 PMID:11812813

21. Petticrew M. Systematic reviews in public health: old chestnuts and new challenges. Bull World Health Organ 2009;87:163. doi:10.2471/BLT.09.063719 PMID:19377705

22. Mindell J, Boaz A, Joffe M, Curtis S, Birley M. Enhancing the evidence base for HIA. J Epidemiol Community Health 2004;58:546–51. doi:10.1136/jech.2003.012401 PMID:15194713

23. Taylor F. Qualitative research on format and presentation of guidelines for reviewing the evidence for HIA. London: London Health Observatory; 2004. Available from: www.lho.org.uk/viewResource.aspx?id=8859 [accessed 30 April 2009].

24. Improving access to robust evidence for HIA: final report to the Department of Health. London: London Health Observatory; 2006. Available from: www.lho.org.uk/viewResource.aspx?id=10845 [accessed 30 April 2009].

25. Mindell J, Biddulph JP, Boaz A, Boltong A, Curtis S, Joffe M, et al. A guide to reviewing published evidence for use in health impact assessment. London: London Health Observatory; 2006. ISBN: 0-9542956-5-X.

26. A guide to reviewing published evidence for use in health impact assessment. London: London Health Observatory; 2006. Available from: www.lho.org.uk/viewResource.aspx?id=10846 [accessed 30 April 2009].

27. A guide to reviewing published evidence for HIA. York: Association of Public Health Observatories; 2006. Available from: www.apho.org.uk/resource/item.aspx?RID=44867 [accessed 30 April 2009].

28. Cochrane handbook for systematic reviews of interventions. Version 5.0.1, updated 28 September 2008. Oxford: The Cochrane Collaboration; 2003. Available from: www.cochrane-handbook.org [accessed 8 June 2009].

29. Quality in qualitative evaluation: a framework for assessing research evidence. London: Strategy Unit, Government Chief Social Researcher's Office; 2003. Available from: www.civilservice.gov.uk/Assets/a_quality_framework_tcm6-7314.pdf [accessed 10 June 2010].

30. HIA glossary. London: London Health Observatory; 2006. Available from: www.lho.org.uk/viewResource.aspx?id=10064 [accessed 30 April 2009].

31. Quality criteria for appraisal of studies and articles. London: London Health Observatory; 2006. Available from: www.lho.org.uk/viewResource.aspx?id=10616 [accessed 30 April 2009].

32. Assessing causality. London: London Health Observatory; 2006. Available from: www.lho.org.uk/viewResource.aspx?id=9377 [accessed 30 April 2009].

33. Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health 2003;57:527–9. doi:10.1136/jech.57.7.527 PMID:12821702

34. Dora C. What can health impact assessment add to comparative risk assessment in decision-making? Bull World Health Organ 2003;81:460. PMID:12894333

35. Waters E. Evidence for public health decision-making: towards reliable synthesis. Bull World Health Organ 2009;87:164. doi:10.2471/BLT.09.064022 PMID:19377706

36. Can I use this evidence in my program decision? Assessing applicability and transferability of evidence. Hamilton: National Collaborating Centre for Methods and Tools; 2007. Available from: www.nccmt.ca/pubs/2007_12_AT_tool_v_nov2007_ENG.pdf [accessed 30 April 2009].

37. Welteke R, Fehr R. Health impact assessment: developing a software assisted tool for assessment of evidence. Italian J Public Health 2007;4:165–8.

38. Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay CR, Vale L, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004;8:1–72. PMID:14960256

39. Nzinga J, Mbindyo P, Mbaabu L, Warira A, English M. Documenting the experiences of health workers expected to implement guidelines during an intervention study in Kenyan hospitals. Implement Sci 2009;4:44. doi:10.1186/1748-5908-4-44 PMID:19627591

 

 

(Submitted: 8 June 2009 – Revised version received: 29 October 2009 – Accepted: 18 November 2009 – Published online: 23 December 2009)

 

 

*Correspondence to Jennifer Mindell (e-mail: j.mindell@ucl.ac.uk).
