RESEARCH

 

Evidence summaries tailored to health policy-makers in low- and middle-income countries

 


 


 

 

Sarah E RosenbaumI,*; Claire GlentonII; Charles Shey WiysongeIII; Edgardo AbalosIV; Luciano MigniniIV; Taryn YoungV; Fernando AlthabeVI; Agustín CiapponiVI; Sebastian Garcia MartiVI; Qingyue MengVII; Jian WangVII; Ana Maria De la Hoz BradfordVIII; Suzanne N KiwanukaIX; Elizeus RutebemberwaIX; George W PariyoIX; Signe FlottorpI; Andrew D OxmanI

INorwegian Knowledge Centre for the Health Services, Boks 7004, St Olavs Plass, N/0130 Oslo, Norway
IISintef, Trondheim, Norway
IIISchool of Child and Adolescent Health, University of Cape Town, Cape Town, South Africa
IVCentro Rosarino de Estudios Perinatales, Santa Fe, Argentina
VSouth African Cochrane Centre, Cape Town, South Africa
VIInstitute for Clinical Effectiveness and Health Policy, Buenos Aires, Argentina
VIICentre for Health Management and Policy, Shandong University, Jinan, China
VIIIDepartment of Clinical Epidemiology and Biostatistics, Pontificia Universidad Javeriana, Bogota, Colombia
IXHealth Policy Planning and Management, Makerere University School of Public Health, Kampala, Uganda

 

 


ABSTRACT

OBJECTIVE: To describe how the SUPPORT collaboration developed a short summary format for presenting the results of systematic reviews to policy-makers in low- and middle-income countries (LMICs).
METHODS: We carried out 21 user tests in six countries to explore users' experiences with the summary format. We modified the summaries based on the results and checked our conclusions through 13 follow-up interviews. To solve the problems uncovered by the user testing, we also obtained advisory group feedback and conducted working group workshops.
FINDINGS: Policy-makers liked a graded entry format (i.e. short summary with key messages up front). They particularly valued the section on the relevance of the summaries for LMICs, which compensated for the lack of locally relevant detail in the original review. Some struggled to understand the text and numbers. Three issues made redesigning the summaries particularly challenging: (i) participants had a poor understanding of what a systematic review was; (ii) they expected information not found in the systematic reviews; and (iii) they wanted shorter, clearer summaries. Solutions included adding information to help readers understand the nature of a systematic review, adding more references and making the content clearer and the document quicker to scan.
CONCLUSION: Presenting evidence from systematic reviews to policy-makers in LMICs in the form of short summaries can render the information easier to assimilate and more useful, but summaries must be clear and easy to read or scan quickly. They should also explain the nature of the information provided by systematic reviews and its relevance for policy decisions.






 

 

Introduction

To maximize the use of available resources, health policy-makers need reliable up-to-date evidence about "what works".1-3 In low- and middle-income countries (LMICs) the pressure to extract the most out of funds is particularly great, as the gap between the resources available and those that are needed to address the burden of preventable diseases is much larger than elsewhere.4 Systematic reviews cover not only clinical interventions, but also arrangements for delivering, financing and managing health services. Insofar as they are based on an exhaustive search for and appraisal of the relevant studies available, they are valuable sources of research evidence and reduce the chances of being misled by biased information.5 They make finding and appraising the evidence much easier and faster and they illuminate areas where no evidence exists.3, 6-8

Systematic reviews very often contain findings that are relevant for LMICs. However, most are written largely for scientific audiences and are not well tailored to the information needs of policy-makers.9 Table 1 presents what we know about the type of information policy-makers need.

The Supporting Policy-relevant Reviews and Trials (SUPPORT) project was an international collaboration funded from 2006 to 2010 by the European Commission's 6th Framework Programme and by the Global Health Research Initiative of the Canadian Institutes of Health Research. Its objective was to provide training and support to encourage researchers and policy-makers to undertake and use policy-relevant research. The consortium had 10 partners in nine countries in Africa, South America and Europe. In this article we report on the SUPPORT collaboration's development of summaries of systematic reviews for policy-makers in LMICs. Our objective was to tailor a summary format that was sensitive to the needs of this audience.

 

Methods

Selecting reviews and developing content

We screened references (up to 2009) from the Cochrane Library, MEDLINE and EMBASE to identify systematic reviews relevant for SUPPORT, based on topic and methods documentation. For this study we selected five of these reviews.26-30 We extracted data, assessed review quality using a checklist and assessed evidence quality using GRADE.31, 32 Based on earlier research on the issues of greatest importance to policy-makers, we added content regarding applicability, equity, cost and the future research needed.3, 13 The completed summaries were assessed by researchers and policy-makers in LMICs, by the lead author of each systematic review being summarized and by peers of the authors with expertise in the review topic. An in-depth description of the selection criteria, quality assessments and content development can be found on the SUPPORT web site.33

Developing summary format

As a starting point, we adopted a graded entry format consisting of key messages followed by a short abstract.13, 34 The abstract deviated from a traditional academic format in that we replaced the methods section with a short description of review characteristics. In addition, we replaced the discussion section with one describing different aspects of the information's relevance for LMICs. This section included the applicability of the evidence to LMIC settings, the impact of the approach or intervention on equity, the costs and other considerations involved in scaling up the intervention, and the need for further evaluation.

We used four methods in repeated cycles to further develop this preliminary format. First we carried out user testing with LMIC policy-makers to inform summary development from a user perspective. We then elicited advisory group feedback from multi-disciplinary LMIC researchers (prospective summary authors) to inform summary development from an author's perspective. This was followed by working group workshops with experts in evidence dissemination (CG), information design (SR) and epidemiology (ADO), during which ideas were generated based on an analysis of user testing and advisory group feedback. Finally, we designed new summary versions based on the above.

Testing the summaries

The working group conducted three pilot user tests with participants from Norwegian government agencies involved in LMIC development projects and made further improvements based on these results. The advisory group then tested the summaries with 18 policy-makers in Argentina (6), China (3), Colombia (3), South Africa (3) and Uganda (3). The group used Spanish-language versions in Argentina and Colombia and English-language versions in the remaining countries. The advisory group purposively sampled participants, including health policy-makers and managers at different levels, and recruited them by e-mail and telephone.

The testing method was a think-aloud protocol using a semi-structured interview guide. Individual sessions lasted one hour and included one participant, one interviewer and one note-taker. Introductory questions covered the participants' education, employment and familiarity with research and systematic reviews. Participants chose one of five possible summaries to read at their own pace. The interviewer then guided them through each part of the document, prompting them to think aloud. The interview guide was based on a framework for user experience with six facets: "findability", credibility, usability, usefulness, desirability and value.35 Finally, the interviewer asked participants for suggestions and additional comments.

The interviewers audio-taped and transcribed each session and arranged for the transcriptions and notes to be translated (when necessary) and sent to the working group. They also erased the audio tapes and removed participant identities from the compiled results. Two researchers from the working group performed separate analyses and identified barriers or facilitators to favourable experiences of the summary according to the user experience framework. We then compared and reconciled the analyses and sorted the findings according to summary section (e.g. front page) or general theme (e.g. language).

We used the results of the analysis to make both content and design changes, after which we presented both the user test analysis results and the new summary to the advisory group in a telephone meeting. This group suggested only minor changes.

After this redesign we sent the participants both a brief outline of our findings and the old and new summaries by post or e-mail. We asked them to indicate which version of the summary they preferred and why and to comment on the accuracy of our findings.

 

Results

Since the pilot test results did not deviate from the rest, we pooled all the results. One test participant was employed full time at a medical school; the other 20 were primarily senior staff members involved in national or international health service or policy-related work in health departments, national insurance programmes, hospitals or aid organizations. Seventeen participants said that they used research in their work, though several seemed to define "research" as any information-gathering on a topic; 18 said that they knew what a systematic review was, but six of these participants were unfamiliar with Cochrane reviews.

Of the six facets we explored from the user experience framework, those pertaining to usefulness, usability and credibility yielded the most important findings.

Usefulness

Sixteen participants reported that the evidence summary would be useful to them if they had to make a decision on the subject treated in the summary. The graded-entry format with key messages up front was perceived as particularly useful because of its concision. However, many still felt a mismatch between the type of content offered and their information needs:

"[The summary] explains that there is a high degree of satisfaction with what the nurse practitioners are doing compared to the doctors. But it doesn't say ... whether they are supposed to cover what the medical doctor or practitioner usually covers. And what sort of services? Is it general practice, is it in a hospital ward or where?"

Some respondents expressed unmet expectations, probably stemming from a poor understanding of the nature of a systematic review. Specifically, they expected content lying outside the scope of a review: recommendations, outcome measurements not usually included in a review, detailed information about local applicability or costs, and a broader framing of the research enquiry.

Usability

Five participants felt that the summary was not comprehensive enough. However, six wanted a shorter, clearer presentation:

"Operational managers will be petrified. When I think summary, I think one page ... I would not have time to read a long document even though I would want my work to be evidence-based."

Eight participants found the tables difficult or confusing, and nine said that the concepts presented in them, including those that showed the GRADE assessment and different levels of risk, were not clear.

"This section [summary of the findings] would be very difficult to understand by people not trained in evidence-based medicine. Words like 'sample size' and 'relative risk' would be difficult to interpret..."

Some participants felt that tables running over two pages were cumbersome to read and that the abbreviations caused confusion. Participants also compared the numbers in the text with those in the tables and became confused if they did not correspond precisely. The use of jargon and/or unfamiliar vocabulary posed a barrier (e.g. "scaling up" was not understood to include financial considerations).

Credibility

Early in the interview participants were asked if they would trust the summary. Two responded affirmatively because it was "well written". Twelve answered that they would trust it because they perceived it as coming from credible sources:

"I would trust a report like this. It uses systematic reviews as sources of information and I know that this kind of information is of high quality."

"The references are clear as well as the source. That's the most important thing."

However, not everybody understood that the summary stemmed from a systematic review. Some expressed confusion about authorship (partner logos appeared on the last page). Some also expressed reduced interest in the content when they discovered that the quality of the evidence was low, that no evidence for important outcomes existed or that the studies were old. One participant was confused about how a high-quality review could be compatible with low-quality evidence.

Value

Seventeen participants felt that summaries of the kind presented to them would be valuable to policy-makers holding positions similar to theirs.

Desirability

Fourteen participants said they liked the summary, particularly the front page with key messages and the section on the relevance of the evidence and the intervention for LMICs. Seven reacted positively to the table describing the characteristics of the reviews:

"[I] like this chart; it makes clear what the review was looking for."

Five participants said that they liked the framing of the title as a question (e.g. "Does pay-for-performance improve the quality of health care?").

"Findability"

When asked where they would expect to find these summaries, seven participants answered "in face-to-face meetings". Many mentioned the web sites of the World Health Organization, the Pan American Health Organization, the Cochrane Collaboration, health ministries and universities.

Three challenging findings

Several of our findings pointed to obvious solutions that we adopted. These included simplifying the text and tables; limiting the number of tables and not letting them break across pages; ensuring that the results in the text matched those in the tables; eliminating abbreviations; using consistent language and standard phrases to describe effect sizes and the quality of the evidence;36 replacing unfamiliar terms or adding definitions; and moving partner logos and the summary publication date to the front page.

However, we found three larger issues more challenging: (i) participants' poor conceptual understanding of systematic reviews; (ii) participants' expectations that they would receive information not found in the systematic reviews; and (iii) participants' expressed desire for shorter, clearer summaries.

To address the poor understanding of the nature of systematic reviews and the type of information they can provide, we added "information about the information" or meta-information in the form of boxes placed throughout the summaries.37

To help satisfy participants' expectation of being provided with information not found in the systematic reviews, we replaced the section for references with a section for "additional information". We broadened the scope of this section and included not only research references but also information that was helpful for understanding the problem, that provided details about the interventions or that put the results of the review in a broader context.

The third change we made addressed the need for shorter, clearer summaries. Since each summary was already extremely condensed, making the text even shorter proved difficult. Instead, we facilitated rapid scanning of the document by reformatting the text to make it easier to pick out important parts. We reformatted the findings in the text as bullet-point items highlighted with blue arrows; we converted the part on relevance into a table placed between the findings and the section on the authors' interpretations; we moved the table with the characteristics of the review to the background section, making it possible to restrict the background text to key information; and we used a narrower font to reduce document length.

To help summary authors in their efforts to create short, pertinent texts, we developed explicit instructions about what information to include and exclude (template and guidelines available from authors).

Follow-up interviews

Thirteen participants responded to the follow-up questions. All preferred the new format and said that they found it easier to read, primarily because of the new front-page design and the addition of the meta-information boxes:

"The content is presented in simple, easy-to-understand language, especially the first page ... The reference box on the right, on page one, is perfect as it tells you what to expect."

There was general agreement that our analysis of the problems was precise and that the new summary resolved the main issues. Two participants repeated earlier misgivings about missing content outside the scope of a systematic review. One participant felt that the tables remained confusing because "relative risk" was still not defined.

 

Discussion

Policy-makers participating in user tests indicated that the graded entry format (one page of key messages followed by a short summary) was well suited to their needs. The sections of the summary on key messages and relevance for LMICs proved to be the most interesting to participants, who had difficulty understanding the risks presented in the tables and were often frustrated with text that seemed too long and complicated. Some did not seem to understand what a systematic review was and expected or wanted information not usually found in one. Some were also confused about the source of the summaries. We addressed these issues by altering the template's content and design, especially by adding meta-information and reformatting the text to make it easier to scan. The advisory group and the participants agreed with our analysis and supported our subsequent changes.

Study strengths and weaknesses

Our study derived strength from the participation of a wide range of policy-makers from different countries who represented different levels of decision-making and familiarity with research evidence. It was also strengthened by the presence of a multi-disciplinary advisory group of researchers and summary authors from LMICs. However, the translation of interview transcripts from Spanish and Chinese into English may have affected the correct interpretation of participants' feedback. Additionally, participants' awareness that interviewers were involved in preparing the summaries may have affected their responses. Finally, summary topics were pre-selected and not necessarily matched to participants' interests, and as a result reading motivation and understanding of the material may have been undermined.

Other summaries and evaluations

There are various products designed to convey the results of systematic reviews effectively to policy-makers,38, 39 and several of them target policy-makers in LMICs in particular. For instance, the Evidence Aid project40 provides summaries of Cochrane reviews for emergency settings. Sources of relevant evidence summaries from high-income countries include the Rx for Change database,41 Evidence Boost42 and the Policy Liaison Initiative.43 However, we uncovered few studies reporting on evaluations of summary formats for policy-makers, and those we did identify support our own findings. Lavis et al. found that a graded-entry format and up-front take-home messages rendered health technology assessment reports more useful.44 An evaluation of Evidence Aid summaries showed that the summaries would be more useful if their coverage were not restricted to a single review and that language should be tailored to non-clinical audiences.45 In both studies, content that helped users to contextualize the evidence (e.g. a discussion of applicability) was found to be particularly valuable.

Shorter messages or rapidly scannable texts?

One of our overriding findings was a clear, strong preference for short messages, also found in other studies of policy-maker preferences in research presentation.10, 13, 20 There is, however, a limit to how much information can be condensed before it loses value and credibility. When these limits are reached, editing the text does not suffice and methods such as graded-entry structuring of the text and front-page summaries of key messages must be used. In recent years, research on the use of web sites has taught us much about how people visually scan texts rather than read them. This knowledge can be applied to improve information delivery in policy contexts where readers have limited time. Bulleted lists, shorter paragraphs and judicious use of headings are known to make scanning a text easier.46

Supporting better comprehension

We uncovered several problems linked to poor comprehension of numbers and statistics. Other studies have also shown that even highly educated people struggle to understand numerical risk.47 This can result in frustration and in a failure to fully understand the main messages. However, correct comprehension depends not only on the skills and knowledge of the reader, but also on the way the information is presented.48 By assuming weak background knowledge (e.g. of scientific language or of the nature of systematic reviews) and low "statistical literacy",48 summary authors can add information to help readers better understand the strengths and limitations of the scientific evidence being summarized. Adding meta-information that explains concepts such as the quality of the evidence may help eliminate frustration and trigger reflection.
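As a minimal worked illustration of such meta-information (the numbers here are purely hypothetical and are not drawn from any of the summaries), a relative risk can be restated in natural frequencies:

\[ \text{risk with intervention} = \mathrm{RR} \times \text{risk without intervention} = 0.80 \times \frac{100}{1000} = \frac{80}{1000} \]

In other words, 20 fewer people per 1000 would be expected to experience the outcome. Presenting an absolute difference of this kind alongside the relative risk is one way of addressing the confusion that participants reported with terms such as "relative risk".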

Future research

Systematic reviews attempt to answer narrowly-defined scientific questions, such as whether or not an intervention has an impact on specific outcomes. But policy-makers' questions go far beyond whether an intervention merely works; they also address, among other things, whether it will work in a particular setting, how much it will cost and what consequences it may bring. The answers to these questions will vary from setting to setting and cannot be provided by a single, generic summary. But summaries can support policy-makers by including content that maps out the main issues they may need to consider in their own contexts (e.g. those findings and interpretations that relate to applicability, equity and cost and hence to the relevance of the intervention for LMICs). Despite the lack of local detail in the texts they were given, policy-makers in our study found this general type of information very useful.

The favourable responses we observed suggest that there is value in mapping out these issues even if answers geared towards specific settings cannot be provided. According to earlier studies, research findings may be conceptually useful though not necessarily instrumental for policy-making.11, 21, 49, 50 When evidence quality is too weak to provide conclusive answers or when decision-makers' settings vary greatly from those portrayed in the studies, evidence coupled with this kind of complementary content may still be helpful in understanding the nature of the problem at hand, a possibility that future research should explore. In addition, studies should be conducted to determine how SUPPORT summaries compare with full reviews in terms of their effect on understanding, time spent reading and user satisfaction. If conducted in real-life contexts, such studies could further inform summary development.

 

Conclusion

Systematic reviews are an important resource, but policy-makers are often unfamiliar with them, and the reviews themselves are not easily accessible. Summaries of systematic reviews can help address these problems as long as they are clear and easy to read or scan quickly. They should also help to clarify the nature of the information provided by a systematic review and its applicability to policy decisions. The SUPPORT summary format can make the evidence gathered through systematic reviews more useful for policy-making in health.

Funding: The SUPPORT collaboration was funded by the European Commission's 6th Framework Programme Priority FP6-2004-INCO-DEV-3. SUPPORT summaries are freely available at http://www.support-collaboration.org

Competing interests: None declared.

 

References

1. Task Force on Health Systems Research. Informed choices for attaining the Millennium Development Goals: towards an international cooperative agenda for health-systems research. Lancet 2004;364:997-1003. doi:10.1016/S0140-6736(04)17026-8 PMID:15364193        

2. Travis P, Bennett S, Haines A, Pang T, Bhutta Z, Hyder AA et al. Overcoming health-systems constraints to achieve the Millennium Development Goals. Lancet 2004;364:900-6. doi:10.1016/S0140-6736(04)16987-0 PMID:15351199        

3. Lavis JN, Posada FB, Haines A, Osei E. Use of research to inform public policymaking. Lancet 2004;364:1615-21. doi:10.1016/S0140-6736(04)17317-0 PMID:15519634        

4. The Millennium Development Goals report 2006. New York: United Nations; 2008.

5. Greenhalgh T. Papers that summarise other papers (systematic reviews and meta-analyses). BMJ 1997;315:672-5. PMID:9310574        

6. Dobbins M, Cockerill R, Barnsley J. Factors affecting the utilization of systematic reviews. A study of public health decision makers. Int J Technol Assess Health Care 2001;17:203-14. doi:10.1017/S0266462300105069 PMID:11446132

7. Davies HTO, Nutley SM, Smith PC, editors. What works? Evidence-based policy and practice in public services. Bristol: The Policy Press; 2000.         

8. Lavis JN, Davies HTO, Gruen RL, Walshe K, Farquhar CM. Working within and beyond the Cochrane Collaboration to make systematic reviews more useful to healthcare managers and policy makers. Healthc Policy 2006;1:21-33. PMID:19305650        

9. Sheldon TA. Making evidence synthesis more useful for management and policy-making. J Health Serv Res Policy 2005;10(Suppl 1):1-5. doi:10.1258/1355819054308521 PMID:16053579        

10. Petticrew M, Whitehead M, Macintyre SJ, Graham H, Egan M. Evidence for public health policy on inequalities: 1: the reality according to policymakers. J Epidemiol Community Health 2004;58:811-6. doi:10.1136/jech.2003.015289 PMID:15365104        

11. Innvaer S, Vist GE, Trommald M, Oxman AD. Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy 2002;7:239-44. doi:10.1258/135581902320432778 PMID:12425783        

12. Elliott H, Popay J. How are policy makers using evidence? Models of research utilisation and local NHS policy making. J Epidemiol Community Health 2000;54:461-8. doi:10.1136/jech.54.6.461 PMID:10818123        

13. Lavis J, Davies H, Oxman A, Denis JL, Golden-Biddle K, Ferlie E. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy 2005;10(Suppl 1):35-48. doi:10.1258/1355819054308549 PMID:16053582        

14. Smith R. Closing the digital divide [editorial]. BMJ 2003;326:238. doi:10.1136/bmj.326.7383.238 PMID:12560254

15. Edejer TT-T. Disseminating health information in developing countries: the role of the internet. BMJ 2000;321:797-800. doi:10.1136/bmj.321.7264.797 PMID:11009519        

16. Isaakidis P, Swingler GH, Pienaar E, Volmink J, Ioannidis JP. Relation between burden of disease and randomised evidence in sub-Saharan Africa: survey of research. BMJ 2002;324:702. doi:10.1136/bmj.324.7339.702 PMID:11909786        

17. Reinar LM, Greenhalgh T, editors. Working in partnership with people in the developing world. London: Cochrane Colloquium Abstracts; 2005. Available from: http://www.imbi.uni-freiburg.de/OJS/cca/index.php/cca/article/view/1229 [accessed 8 November 2010].

18. Haines A, Kuruvilla S, Borchert M. Bridging the implementation gap between knowledge and action for health. Bull World Health Organ 2004;82:724-31, discussion 732. PMID:15643791        

19. Lomas J. Using research to inform healthcare managers' and policy makers' questions: from summative to interpretive synthesis. Healthc Policy 2005;1:55-71. PMID:19308103        

20. Dobbins M, Jack S, Thomas H, Kothari A. Public health decision-makers' informational needs and preferences for receiving research evidence. Worldviews Evid Based Nurs 2007;4:156-63. doi:10.1111/j.1741-6787.2007.00089.x PMID:17850496        

21. Lomas J. Improving research dissemination and uptake in the health sector: beyond the sound of one hand clapping. Hamilton: McMaster University Centre for Health Economics and Policy Analysis; 1997.         

22. Jewell CJ, Bero LA. "Developing good taste in evidence": facilitators of and hindrances to evidence-informed health policymaking in state government. Milbank Q 2008;86:177-208. doi:10.1111/j.1468-0009.2008.00519.x PMID:18522611        

23. Dobbins M, Thomas H, O'Brien MA, Duggan M. Use of systematic reviews in the development of new provincial public health policies in Ontario. Int J Technol Assess Health Care 2004;20:399-404. doi:10.1017/S0266462304001278 PMID:15609787        

24. Rosenbaum SE, Glenton C, Oxman AD. Summary-of-findings tables in Cochrane reviews improved understanding and rapid retrieval of key information. J Clin Epidemiol 2010;63:620-6. doi:10.1016/j.jclinepi.2009.12.014 PMID:20434024        

25. Dobbins M, Rosenbaum P, Plews N, Law M, Fysh A. Information transfer: what do decision makers want and need from researchers? Implement Sci 2007;2:20. doi:10.1186/1748-5908-2-20 PMID:17608940        

26. Waters H, Hatt L, Peters D. Working with the private sector for child health. Health Policy Plan 2003;18:127-37. doi:10.1093/heapol/czg017 PMID:12740317        

27. Briggs CJ, Garner P. Strategies for integrating primary health services in middle- and low-income countries at the point of delivery. Cochrane Database Syst Rev 2006;19:CD003318. PMID:16625576        

28. Gosden T, Forland F, Kristiansen IS, Sutton M, Leese B, Giuffrida A et al. Impact of payment method on behaviour of primary care physicians: a systematic review. J Health Serv Res Policy 2001;6:44-55. doi:10.1258/1355819011927198 PMID:11219360        

29. Beney J, Bero LA, Bond C. Expanding the roles of outpatient pharmacists: effects on health services utilisation, costs, and patient outcomes. Cochrane Database Syst Rev 2000;3:CD000336. PMID:10908471        

30. Lewin SA, Dick J, Pond P, Zwarenstein M, Aja G, van Wyk B et al. Lay health workers in primary and community health care. Cochrane Database Syst Rev 2005;1:CD004015. PMID:15674924

31. Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S et al.; GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ 2004;328:1490. doi:10.1136/bmj.328.7454.1490 PMID:15205295        

32. Cochrane IMS [Internet]. GRADEpro (GRADEprofiler): software used to create summary of findings (SoF) tables in Cochrane systematic reviews. Oxford: The Cochrane Collaboration. Available from: http://www.cc-ims.net/revman/gradepro/gradepro [accessed 8 November 2010].

33. SUPPORT Collaboration [Internet]. How SUPPORT summaries are prepared. SUPPORT. Available from: http://www.support-collaboration.org/summaries/methods.htm [accessed 8 November 2010].

34. Reader-friendly writing - 1:3:25. Ottawa: Canadian Health Services Research Foundation; 2001. Available from: http://www.chsrf.ca/knowledge_transfer/pdf/cn-1325_e.pdf [accessed 12 November 2010].

35. Morville P. User experience design. Ann Arbor: Semantic Studios LLC; 2004. Available from: http://www.semanticstudios.com/publications/semantics/000029.php [accessed 8 November 2010].

36. Glenton C, Santesso N, Rosenbaum S, Nilsen ES, Rader T, Ciapponi A et al. Presenting the results of Cochrane Systematic Reviews to a consumer audience: a qualitative study. Med Decis Making 2010;30:566-77. doi:10.1177/0272989X10375853 PMID:20643912        

37. SUPPORT Collaboration [Internet]. Supporting policy relevant reviews and trials. SUPPORT summaries. SUPPORT. Available from: http://www.iecs.org.ar/support/iecs-visor-publicaciones.php [accessed 8 November 2010].

38. Lavis JN. How can we support the use of systematic reviews in policymaking? PLoS Med 2009;6:e1000141. doi:10.1371/journal.pmed.1000141 PMID:19936223        

39. Robeson P, Dobbins M, DeCorby K, Tirilis D. Facilitating access to pre-processed research evidence in public health. BMC Public Health 2010;10:95. doi:10.1186/1471-2458-10-95 PMID:20181270        

40. The Cochrane Collaboration [Internet]. Evidence aid summaries. London: Evidence Aid Project, The Cochrane Collaboration. Available from: www.cochrane.org/evidenceaid/index.htm [accessed 8 November 2010].

41. Rx for change [Internet]. Ottawa: Canadian Agency for Drugs and Technologies in Health. Available from: www.cadth.ca/index.php/en/compus/optimal-ther-resources/interventions/ [accessed 8 November 2010].

42. EvidenceBoost [Internet]. Ottawa: Canadian Health Services Research Foundation. Available from: www.chsrf.ca/mythbusters/eb_e.php [accessed 8 November 2010].

43. Policy Liaison Initiative [Internet]. Linking health policy to the latest evidence. Melbourne: Australasian Cochrane Centre. Available from: www.cochrane.org.au/ebpnetwork/ [accessed 8 November 2010].

44. Lavis JN, Wilson MG, Grimshaw JM, Haynes RB, Ouimet M, Raina P et al. Supporting the use of health technology assessments in policy making about health systems. Int J Technol Assess Health Care 2010;26:405-14. doi:10.1017/S026646231000108X PMID:20923592        

45. Turner T, Green S, Harris C. Supporting evidence-based health care in crises: what information do humanitarian organisations need? Disaster Med Public Health Prep. Forthcoming.

46. Morkes J, Nielsen J. Concise, scannable, and objective: how to write for the web. Useit.com: Jakob Nielsen's website [Internet]; 1997. Available from: http://www.useit.com/papers/webwriting/writing.html [accessed 8 November 2010].

47. Lipkus IM, Samsa G, Rimer BK. General performance on a numeracy scale among highly educated samples. Med Decis Making 2001;21:37-44. doi:10.1177/0272989X0102100105 PMID:11206945        

48. Ancker JS, Kaufman D. Rethinking health numeracy: a multidisciplinary literature review. J Am Med Inform Assoc 2007;14:713-21. doi:10.1197/jamia.M2464 PMID:17712082        

49. Weiss CH. The many meanings of research utilization. Public Adm Rev 1979;39:426-31. doi:10.2307/3109916        

50. Nutley SM, Walter I, Davies HTO. Using evidence: how research can inform public services. Bristol: The Policy Press; 2007.

 

 

(Submitted: 2 January 2010 - Revised version received: 15 September 2010 - Accepted: 17 September 2010 - Published online: 24 November 2010)

 

 

* Correspondence to Sarah E Rosenbaum (e-mail: sarah@rosenbaum.no)
