Applying best practices to make programmes work
An important part of WHO's work is to implement public health programmes in countries in the most effective way possible. The Bulletin interviewed three WHO experts to find out some of the best practices that made their programmes work.
WHO programme on the Evaluation of Diagnostic Tests
Rosanna Peeling, Manager of Diagnostics Research and Development in the Special Programme on Research and Training in Tropical Diseases (TDR) cited the development of a Diagnostics Evaluation Scheme as one of TDR's recent successes. Launched in 1999, the programme facilitates the development, evaluation and implementation of diagnostics for infectious diseases in tropical countries, such as malaria and sleeping sickness as well as tuberculosis and sexually transmitted diseases. The TDR team and its partners in 35 countries are working to ensure that developing countries have access to quality-assured diagnostics at negotiated prices. Their work on rapid syphilis tests led to the initiation of national plans for the elimination of congenital syphilis in several WHO regions.
Q: What practices made your programme work?
A: We focus on technology that is appropriate for primary health-care providers to use in developing countries and we collaborate closely with control programmes in endemic countries in every aspect of our work. These two factors are key to our success. Disease control, especially in resource-constrained settings, is complex. To deliver appropriate diagnostics, we have to be aware of the social, economic and technical context in which they will be used. With a long-term view to the large-scale use of diagnostics in control programmes, we consult our partners to develop, from the outset, a framework for gathering evidence to estimate potential impact and cost-effectiveness. Embedded in the framework are the development and application of international standards for the design and conduct of diagnostic evaluations. The quality of this evidence is vital when advocating for widespread implementation and sustainable adoption of these tests by donors and public health authorities in the countries where they are needed. Although we always think big, our approach is to start small, do it really well, and then gradually expand. We are a small Geneva-based team and most of our work relies on a network of partners, particularly in developing countries. We have been really fortunate because of the dedication, competence and sheer hard work of our partners, and a measure of our success lies in the way they have made these projects their own.
Q: What lessons did you learn for the future when implementing this programme?
A: We start by asking disease control programmes about their diagnostic needs, and use this information to guide our search for appropriate technologies. This user-inspired approach has proven successful and we will continue our research in this way. By maintaining regular contact with the control programmes, we have been able to disseminate the results of our research as they become available, and receive feedback. This has added value to our work since the control programmes can see the research cycle as a dynamic process. However, our programme is only one of many that national control programmes have to deal with. Often they face major difficulties handling the conflicting agendas of multiple donors. In countries already suffering from severe constraints, ill-conceived and uncoordinated integration of vertical programmes into a broader health system may end up hampering what little health care is already available. We have learnt how to work more closely with the WHO regional offices and our partners in countries to allow more horizontal integration of control activities. It is an area we need to strengthen if our work is to make a difference.