Small-study effects and reporting biases

Presentation topic: "Small-study effects and reporting biases". Transcript of the presentation:

1 Small-study effects and reporting biases

2 Steps of a Cochrane systematic review
Formulate the question. Plan the eligibility criteria. Plan the methodology. Search for studies. Apply the eligibility criteria. Collect the data. Assess the risk of bias in the studies. Analyse and present the results. Interpret the results and draw conclusions. Improve and update the review. Once we have begun the analysis of our results and set up our meta-analyses, the next important step is to start to explore our results, and in particular the differences we observe between the results of our included studies. This exploration of differences will inform our understanding of the effects we're observing, and how we should interpret them.

3 Outline
Identify small-study effects. Understand reporting biases. See Chapter 10 of the Cochrane Handbook. cochrane training

4 Recap: random error
In any group of studies estimating an effect, each study is subject to random error. The results will be scattered around the true effect, to a greater or lesser degree. Random error. True effect. Effect estimate. The differences between small studies and larger studies usually come down to their vulnerability to random sampling error. Any time we conduct a study and estimate an effect, the study is affected by random error – there is a gap between our estimate and the true effect of the intervention. The estimates of multiple studies will be scattered, sometimes overestimating and sometimes underestimating the effect. Source: Julian Higgins

5 Random error and small studies
Random error implies that: small studies will be less precise than larger studies; small studies will be more widely scattered around the mean. Small-study effects: when small studies are consistently more positive or more negative than larger studies. One possible type of heterogeneity. There may be many explanations. We can usually assume that small studies will be less precise than large studies in estimating an intervention effect – we expect the results of larger studies to be closer to the true effect, and smaller studies to be more widely scattered. This assumption will hold true for fixed-effect and random-effects meta-analyses, even in the presence of other kinds of heterogeneity such as differences in the intervention or population, except in one case: when the results of the small studies are consistently different from those of the larger studies, either more positive or more negative. Like other kinds of heterogeneity, it may be that differences in the results of the studies are somehow related to the size of the studies – what this means, and what the different explanations might be, we'll explore further.
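The scatter described above can be illustrated with a small simulation. This is a hedged sketch, not part of the original deck: all trial sizes, event rates and the true odds ratio below are made-up numbers chosen only to show that small trials produce more widely dispersed estimates than large trials.

```python
import numpy as np

# Illustrative assumption: a true odds ratio of 0.8, a 40% control-arm risk,
# and two trial sizes (50 vs 2000 per arm). None of these come from the deck.
rng = np.random.default_rng(42)

def simulated_log_or(n_per_arm, p_control=0.4, true_or=0.8, n_trials=500):
    """Simulate log odds ratio estimates for many trials of one size."""
    odds_c = p_control / (1 - p_control)
    p_treat = (odds_c * true_or) / (1 + odds_c * true_or)
    estimates = []
    for _ in range(n_trials):
        a = rng.binomial(n_per_arm, p_treat)    # events, treatment arm
        c = rng.binomial(n_per_arm, p_control)  # events, control arm
        # 0.5 continuity correction avoids division by zero in extreme samples
        log_or = np.log(((a + 0.5) / (n_per_arm - a + 0.5)) /
                        ((c + 0.5) / (n_per_arm - c + 0.5)))
        estimates.append(log_or)
    return np.array(estimates)

small = simulated_log_or(50)
large = simulated_log_or(2000)
print(f"SD of small-trial estimates: {small.std():.3f}")
print(f"SD of large-trial estimates: {large.std():.3f}")
```

Both sets of estimates centre near the true log odds ratio, but the small trials are scattered far more widely – exactly the pattern a funnel plot is designed to display.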

6 Identifying small-study effects
Assess each outcome separately. Available methods: funnel plots, statistical tests, sensitivity analysis. Proceed with caution and get expert statistical support. The first thing we have to do is identify whether or not small-study effects are at work in our review. There are several methods available to test whether the results of your studies are associated with study size: funnel plots, statistical tests and sensitivity analysis. We'll explain the basics of each of these, remembering that you may find small-study effects for some outcomes and not others. These methods are tricky, though, and the best way to proceed is to get advice from a statistician who can assist you in planning your steps and interpreting your findings.

7 Funnel plots
They plot effect size against study size. Study size is usually represented by a measure of variance such as the standard error. Studies will be scattered around an estimated effect. Larger studies sit at the top, smaller ones at the bottom. Small studies scatter more widely. A symmetrical plot will look like an inverted funnel or a triangle. RevMan can generate funnel plots. Only appropriate with ≥ 10 studies of varying sizes. Funnel plots are a tool that takes the results of a meta-analysis and plots the result of each individual study against a measure of the study's size (usually a measure of variance such as the standard error). If the study's size is not associated with the results, the plot should look like an inverted funnel – larger studies will be at the top in the centre of the plot, close to the meta-analysed estimate of effect, and smaller studies will scatter progressively more widely to either side towards the bottom of the plot. RevMan can generate these plots for you, but it should be emphasised that funnel plots will not give meaningful results if you have fewer than 10 studies in your meta-analysis, or if they are all the same size.
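RevMan draws these plots for you, but the underlying geometry is simple enough to sketch. Below is a minimal, hypothetical example – the log odds ratios and standard errors are invented for illustration only – showing how the funnel's centre line (the pooled estimate) and the pseudo 95% limits are computed, and how studies would be flagged if they fell outside the expected triangle.

```python
import numpy as np

# Hypothetical study results (log odds ratios and standard errors);
# purely illustrative, not taken from any real review.
log_or = np.array([-0.10, 0.05, -0.30, 0.25, -0.45, 0.40, -0.20, 0.15, -0.60, 0.55])
se     = np.array([ 0.08, 0.10,  0.20, 0.22,  0.35, 0.33,  0.15, 0.12,  0.45, 0.42])

# Fixed-effect (inverse-variance) pooled estimate: the funnel's centre line.
weights = 1.0 / se**2
pooled = np.sum(weights * log_or) / np.sum(weights)

# Pseudo 95% limits: the triangle pooled +/- 1.96*SE, within which we would
# expect ~95% of studies in the absence of bias and heterogeneity.
lower = pooled - 1.96 * se
upper = pooled + 1.96 * se
outside = (log_or < lower) | (log_or > upper)

print(f"Pooled log OR: {pooled:.3f}")
print(f"Studies outside the pseudo 95% limits: {int(outside.sum())}")
```

In this symmetrical example every study falls inside the triangle; on an asymmetrical plot the small studies (large SE) would pile up on one side of the centre line.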

8 Symmetrical funnel plot
[Funnel plot: effect estimate on a log scale (0.1 to 10) on the horizontal axis; standard error on the vertical axis.] This is what a funnel plot looks like. You can see that the standard error has been used as the measure of size. The scale is reversed so that studies with a low SE (i.e. large studies) are at the top of the plot, and studies with a high SE (i.e. small studies) are at the bottom. So the studies at the point of the funnel are the large studies, and the smaller studies gradually scatter wider and wider towards the bottom. Note that the important vertical line on this plot is not the line of no effect (in this case 1), as it would be on a forest plot: the vertical line we want at the centre of the triangle is the overall effect estimate from the meta-analysis. For ratio measures, just as on a forest plot, a logarithmic scale is used for the measure of treatment effect so that the scale is symmetrical. Source: Matthias Egger & Jonathan Sterne

9 Asymmetrical funnel plot
[Funnel plot as before, with a gap in the scatter labelled "Unpublished studies".] On our hypothetical plot, this is what it might look like if we have small-study effects. You can see that we have large studies at the top, close to the overall effect estimate, but we don't have a nice, even scatter of smaller studies on either side – our smaller studies appear to be consistently estimating lower odds ratios than the larger studies. This is called funnel plot asymmetry, and it indicates that we have some kind of small-study effect at work. This might be because these small studies were not published and couldn't be found for the review. Source: Matthias Egger & Jonathan Sterne

10 Asymmetrical funnel plot
[Funnel plot as before, with the cluster of small studies labelled "Small studies with a positive effect".] Alternatively, it might be that the small studies are consistently finding different results from the large studies, and so there are no studies with results up the other end of the scale. Source: Matthias Egger & Jonathan Sterne

11 Colloids versus crystalloids for fluid resuscitation
This is a more realistic picture of what a funnel plot might look like for your review – if you are lucky enough to have so many included studies. ASK: Is this plot symmetrical? Yes. Real funnel plots will rarely be perfect triangles, but this one appears fairly symmetrical. Outcome: death. Adapted from Perel P, Roberts I. Colloids versus crystalloids for fluid resuscitation in critically ill patients. Cochrane Database of Systematic Reviews 2011, Issue 3.

12 Magnesium for myocardial infarction
ASK: How about this example – is this plot symmetrical? No – it appears that there are more studies on the left side of the effect estimate. It can be difficult to judge from a visual inspection like this how much we should be concerned about the missing studies – is it likely to be reporting bias, or is it one of the other reasons? Adapted from Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database of Systematic Reviews 2007, Issue 2.

13 Causes of funnel plot asymmetry
Chance. Artefacts: some statistics correlate with their standard error, e.g. OR. Clinical variability: different populations in small studies; different implementation in small studies. Methodological variability: higher risk of bias in small studies. Reporting biases. It's important not to jump to conclusions about the causes of funnel plot asymmetry and small-study effects. There are many different reasons why they might occur, and you will need to put some thought into distinguishing between these different effects. Knowing your intervention, and the circumstances in which it was implemented in the different studies, can help identify causes of funnel plot asymmetry. It's also important to remember that your review may suffer from some of these problems even if the funnel plots are symmetrical and the tests negative, so you will always be required to explore and understand your results, and consider each of these issues for yourself. The first reason you might find asymmetry is chance – it may just be random chance that the small studies found different effects, particularly in reviews with few studies, which applies to most Cochrane reviews. Secondly, it may be artefactual – some statistics are naturally correlated with their standard errors, for example odds ratios, and in these cases some funnel plot asymmetry is to be expected. It may be due to clinical diversity, or heterogeneity in your study populations and interventions. For example, you may have different underlying populations in the smaller studies that obtain different benefits from the intervention. Early, small, exploratory studies may have been conducted in high-risk populations, who might receive more benefit from the intervention – in that case you might have a correlation between the effect and the size of the study. This can also apply to the delivery of the intervention – e.g. if larger studies deliver the intervention with less fidelity and monitoring, or less intensity, than smaller studies, we might see different effects. It may be that you are already planning subgroup analyses which can clarify these differences in effects. In some cases, methodological diversity may be at work. Small studies may be consistently overestimating the effect due to bias, e.g. poor allocation concealment, lack of blinding, etc. Concern about this is the reason we assess the risk of bias in our included studies in the first place, and the risk of bias assessment in your review may indicate whether these factors may be impacting on your results. If this is occurring in your review, and you have not done so already, you may wish to consider excluding studies at high risk of bias from your analysis. Finally, asymmetry may be caused by reporting biases, otherwise known as publication bias – we'll come to that later. Source: Egger M et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997; 315: 629

14 Contour-enhanced funnel plots
Some enhancements to funnel plots can be helpful in this regard – such as these contour-enhanced funnel plots. Unfortunately these enhanced plots are not currently available in RevMan. The shaded areas on these plots indicate P values – that is, studies falling outside the white area in the middle have significant results, at the levels P < 0.1, P < 0.05 and P < 0.01 respectively as we move away from the middle. If studies appear to be missing in the middle of the plot – indicating that the results of the missing studies would not be statistically significant – then this is consistent with our understanding of reporting bias: non-significant trials are less likely to be found, although we should still consider the other possible explanations. The plot on the left is an example of this kind of case. However, if the asymmetry suggests that the missing studies would be statistically significant, especially if they would be significant in the direction considered desirable by the authors, then it's less likely that studies have gone unreported or unpublished for reasons of reporting bias. Looking at the plot on the right, the asymmetry suggests missing studies over on the left side, crossing into the area of statistical significance. This is not consistent with our understanding of reporting bias – it would mean that the non-significant studies had been published, and not the significant ones. It's much more likely that there is some other reason for the asymmetry. If there are no statistically significant studies at all, then it's very unlikely that reporting bias is the cause of the asymmetry. Source: Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011;342:d4002 doi:10.1136/bmj.d4002
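The contour logic amounts to classifying each study by the two-sided P value of its own z statistic (effect divided by standard error). As a hedged sketch with invented study values, not taken from the plots in the deck:

```python
import math

# Each (effect, SE) pair here is an illustrative assumption.
studies = [(-0.10, 0.08), (0.05, 0.10), (-0.60, 0.20), (-0.45, 0.35)]

def two_sided_p(effect, se):
    """Two-sided P value for a normal z statistic, via the erfc identity."""
    z = abs(effect / se)
    return math.erfc(z / math.sqrt(2))

for effect, se in studies:
    p = two_sided_p(effect, se)
    if p < 0.01:
        band = "P < 0.01"
    elif p < 0.05:
        band = "P < 0.05"
    elif p < 0.1:
        band = "P < 0.1"
    else:
        band = "not significant (white area)"
    print(f"effect={effect:+.2f}, SE={se:.2f}: {band}")
```

On a contour-enhanced plot, the shaded bands are simply the regions of the (effect, SE) plane where this classification changes, so a gap confined to the white area points towards suppressed non-significant studies.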

15 Asymmetry due to heterogeneity
Here at the top left is an example of a funnel plot with overall asymmetry. On these plots, the dotted triangle lines indicate the area within which we would expect to find 95% of the studies in the absence of bias and heterogeneity. As it turns out, the asymmetry in this plot is due entirely to differences in the effects between subgroups. Separate funnel plots for each of the three subgroups show that none of them is asymmetrical, but there are differences in the effect of the intervention in each subgroup. When we look at all three subgroups overlaid, the plot looks asymmetrical, but in fact what we have is heterogeneity arising from other factors. Source: Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011;342:d4002 doi:10.1136/bmj.d4002

16 Tests for funnel plot asymmetry
Is there an association between study size and intervention effect greater than would be expected by chance? Three tests are recommended: they generally have little power to rule out reporting bias; use them alongside visual inspection of the funnel plot; only appropriate with ≥ 10 studies of varying sizes. Visual inspection of funnel plots is not always easy or reliable. Aside from funnel plots, it's also possible to conduct statistical tests to determine whether there is a greater association between study size and effect than we would expect to occur by random chance. There is a range of different tests available, each with advantages and disadvantages. Three of the available tests are recommended – check the relevant section of the Cochrane Handbook, and you should definitely get statistical advice before deciding to use any of them. If you are assessing small-study effects, you should always include a visual assessment of the funnel plot as well, and, like funnel plots, these statistical tests are usually only useful if you have at least 10 studies. See the relevant section of the Cochrane Handbook.
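One widely used test of this kind is Egger's regression: regress each study's standardized effect (effect/SE) on its precision (1/SE); in a symmetrical funnel the intercept is close to zero, and a large intercept suggests small-study effects. The sketch below uses invented data in which the small studies are deliberately shifted towards stronger negative effects, and it omits the significance test on the intercept that a real analysis would include – treat it as an illustration, not a substitute for statistical advice.

```python
import numpy as np

# Illustrative data: small studies (large SE) shifted towards more
# negative effects, mimicking funnel plot asymmetry.
log_or = np.array([-0.05, 0.02, -0.10, -0.35, -0.50, -0.70, -0.15, -0.45, -0.80, -0.60])
se     = np.array([ 0.06, 0.08,  0.10,  0.25,  0.30,  0.40,  0.12,  0.28,  0.45,  0.35])

snd = log_or / se       # standardized effect for each study
precision = 1.0 / se    # 1/SE: large studies have high precision

# Ordinary least squares line: snd = slope * precision + intercept
slope, intercept = np.polyfit(precision, snd, 1)
print(f"Egger intercept: {intercept:.2f} (near 0 would suggest symmetry)")
```

Here the intercept is strongly negative, reflecting the built-in asymmetry; with a symmetrical funnel the same regression would give an intercept near zero.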

17 Sensitivity analysis
If a small-study effect is detected, how does it affect the results? Consult a statistician before proceeding. If there is heterogeneity (I² > 0), compare the estimates obtained from a fixed-effect model and a random-effects model. Are they different? If so, is there any reason why the intervention would be more effective in the small studies? Selection models and other methods. If you suspect that you have identified a small-study effect, you may wish to know how large its impact might be on your results. We have already come across a useful technique for testing this in our separate presentation on heterogeneity: where small studies are systematically different, comparing the fixed-effect and random-effects meta-analyses will give you a sensitivity analysis of the potential impact of the small studies. If the random-effects model shows a different effect, consider whether it is reasonable to conclude that the intervention was more effective in smaller studies, with reference to possible clinical and methodological diversity between the studies. This is not a perfect test – it is possible for small-study effects to bias the results where there is no heterogeneity, and where fixed-effect and random-effects models give the same result. Selection models (e.g. 'trim and fill', and other more sophisticated models) can be used, but require expertise and careful application. Do not attempt to use these without statistical advice.
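The fixed-versus-random comparison can be made concrete with a short calculation. The sketch below pools an invented set of studies (two large, near-null; four small, strongly negative) with an inverse-variance fixed-effect model and a DerSimonian-Laird random-effects model; because random effects weights the small studies more evenly, its pooled estimate shifts away from the fixed-effect result. All numbers are illustrative assumptions.

```python
import numpy as np

# Illustrative data: large studies near no effect, small studies strongly negative.
log_or = np.array([-0.02, -0.05, -0.55, -0.70, -0.60, -0.80])
se     = np.array([ 0.05,  0.07,  0.30,  0.35,  0.28,  0.40])

# Fixed-effect (inverse-variance) pooled estimate
w_fixed = 1.0 / se**2
fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)

# DerSimonian-Laird estimate of between-study variance tau^2
q = np.sum(w_fixed * (log_or - fixed) ** 2)   # Cochran's Q
df = len(log_or) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's variance
w_random = 1.0 / (se**2 + tau2)
random_eff = np.sum(w_random * log_or) / np.sum(w_random)

print(f"Fixed-effect pooled log OR:   {fixed:.3f}")
print(f"Random-effects pooled log OR: {random_eff:.3f}  (tau^2 = {tau2:.3f})")
```

The gap between the two pooled estimates is the sensitivity analysis the slide describes: it flags that the small studies are pulling the result, though not why.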

18 Sensitivity analysis
Here we have an example where the size of the studies in the review is correlated with their results – the same example we looked at in the separate presentation on heterogeneity. This review looks at intravenous magnesium for acute myocardial infarction, measuring mortality. As you can see, there are a few large studies, shown by the large squares, lined up closely to the line of no effect. There are then a lot of small studies, and they are all over to the left of the plot, showing a stronger reduction in mortality. If the small studies were not systematically different from the larger studies, we would expect the fixed-effect diamond to sit neatly inside the diamond for the random-effects model. In this case, we can see that this doesn't happen. The fixed-effect result is right on the line of no effect, with a CI starting at 0.94 and crossing the line of no effect. In comparison, the random-effects result shifts to the left, with a CI between 0.53 and 0.82 – they don't overlap at all. The random-effects model, by giving more weight to the smaller studies, has highlighted a systematic difference. Remember that this kind of sensitivity analysis can highlight the presence of small-study effects, but it doesn't tell you why this has happened. It is still your job to consider the possible explanations. Adapted from Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database of Systematic Reviews 2007, Issue 2.

19 Outline
Identify small-study effects. Understand reporting biases. See Chapter 10 of the Cochrane Handbook.

20 The dissemination of evidence
Unavailable (unpublished). Available in principle (thesis, conference, small journal). Easily accessible (indexed in Medline). Actively disseminated (news, pharmaceutical companies). Dissemination of research results falls along a continuum. Unavailable: e.g. not published, only available through informal circulation by the author. Available in principle: e.g. published as a thesis, a conference abstract, or in a journal with smaller circulation and impact, perhaps not published in English, not indexed by the major databases. Only about half of the abstracts presented at conferences are ever published in full (Scherer 2007). Easily available: e.g. published in a journal indexed in Medline. Actively disseminated: e.g. trials distributed by the drug rep or some other interested organisation. Only a proportion of studies will ever be published in a way that makes them easy to access and include in your review. Source: Matthias Egger

21 Reporting biases
The dissemination of research findings is influenced by the nature and direction of the results. Statistically significant, 'positive' results are more likely to be published... ...and are therefore more likely to be included in a review. This results in exaggerated effects. Large studies are likely to be published regardless, so small studies are the ones affected. Non-significant results are as important to the review as significant ones. We know that these differences in dissemination aren't random. They are influenced by the results of the studies. Studies with more positive results and significant findings are more likely to be published and widely disseminated, which in turn makes it much more likely that they will be incorporated into your systematic review. Small studies are more likely to be affected by this problem: large studies are more likely to be published in any case, regardless of their findings, and small studies are the most vulnerable. So, if we find an excess of small, positive studies in a review, one of the possible explanations is that we have failed to find published records of the balancing, neutral or negative studies. If we can only find and include the positive, significant findings in our review, the risk is that we are misrepresenting the true effect of the intervention. For our review to be accurate and reliable, we need to make sure we include all the evidence, including the negative and statistically non-significant results.

22 Evidence of reporting bias
[Graph: proportion of studies remaining unpublished against years since study start, with separate lines for significant results, non-significant trends and null results.] There is evidence to demonstrate this effect at work. In this study, Stern & Simes looked at a cohort of clinical studies to see how long it took for the results to be published. The answer varied strongly depending on the results of the study. The red line shows studies with significant results – they were the fastest to be published, and after 10 years fewer than 20% remained unpublished. Studies with non-significant results, but with a discernible trend, were slower to be published over time, with nearly half remaining unpublished after 20 years. The slowest to be published were studies with a null result – that is, where the intervention had no discernible effect. Almost none of these studies were published within 5 years, and after 10 years more than 70% remained unpublished. Source: Stern JM, Simes RJ. Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 1997;315:

23 Positive studies are more likely to be
Submitted for publication... ...and accepted (publication bias) ...quickly (time lag bias) ...as more than one publication (multiple publication bias) ...in English (language bias) ...in indexed journals with high impact factors (location bias) ...with positive outcomes included (selective outcome reporting) ...and cited by others (citation bias). Conceived. Conducted. Submitted. Cited. Published. Reporting biases can occur at many stages of the dissemination process. File drawer problem (Rosenthal 1979): journals are filled with the 5% of studies that show Type I errors, while file drawers are filled with the 95% of studies that show non-significant results. Authors may not spend the time to write up and submit manuscripts on disappointing results. This may be especially true of industry-funded research. Publication bias: once submitted, papers may be less likely to interest journal editors, or may receive less favourable peer review, leading to less chance of publication. Time lag bias: another Cochrane methodology review (Hopewell 2007a) has shown that non-significant results may take 2 to 3 years longer to be published compared to studies with significant results (Royal Prince Alfred Ethics Committees, Stern and Simes 1997; and HIV multicentre studies, Ioannidis 1998). This means that at the point a systematic review is written, the literature may be dominated by positive studies, with years going by before the balancing studies are published. Duplicate/multiple publication bias: it is relatively common for trials to be published multiple times, and difficult to determine when this has occurred (there may be different authors, different population sizes, etc.). Positive studies are more likely to be published more than once, which means they are more likely to be located and included in the review. If multiple publications are included without being recognised, participants will be double-counted and the treatment effect will be even more exaggerated. Language bias: there is some evidence (although not conclusive) that positive results are more likely to be submitted to and published in English-language journals. This highlights the importance of not limiting your search to papers published in English, or to databases that largely index the English-language literature. Location bias: studies with positive findings are more likely to be accepted for publication in high-impact, high-distribution journals and, importantly, in the limited proportion of journals that are indexed by the major databases. Studies published in non-indexed journals are harder to find for your review. Selective outcome reporting: as discussed in relation to our risk of bias assessment for individual studies, within studies that do make it to publication, outcome measures showing positive findings are more likely to be reported. Citation bias: studies with positive findings are more likely to be cited by other papers, which again makes them easier to find if citations are used as part of the search strategy for the review. And, since authors tend to cite papers that agree with their findings, additional citations reinforce the existing bias. And don't forget the importance of selective outcome reporting: the selective publication of positive or significant findings within papers, while leaving out or altering the reporting of those outcomes with negative or non-significant findings. This issue is addressed as part of the 'Risk of bias' assessment of included studies. Source: Julian Higgins

24 Example: alpha blockers
Ten clinical trials were identified, measuring different doses. Trials must have been completed and their results submitted to regulatory agencies for the drug to be approved. Few trials were found. Many of the doses approved by regulators did not have enough evidence supporting their use. For some doses there were no published data. In determining whether reporting bias is impacting on your review, perhaps causing funnel plot asymmetry or perhaps not, you will need to consider the context of your intervention and its susceptibility to different biases, for example through conflicts of interest. Sometimes there is real-world information that can help us work out whether publication bias is likely – it doesn't always depend on statistical tests and small-study effects. This example comes from a Cochrane review of alpha blockers for hypertension. A total of 10 trials were found in the review, measuring several different doses of the drugs, so they could not be pooled together, and there weren't enough trials in a meta-analysis to generate a funnel plot. Nonetheless, these drugs were known to be approved for use by regulators (e.g. the FDA in the US), so we know there had to have been trials completed and submitted for that approval to be successfully given. However, as so few trials were available – not enough to support all the doses that were approved, and in fact none to support some doses – we can conclude that there are missing trials that the drug companies have not made public. This might lead us to conclude that publication bias is likely, although it does not give us a clear idea of how great its impact might be in this particular case. Source: Nancy Santesso and Holger Schünemann. Based on Heran BS, Galm BP, Wright JM. Blood pressure lowering efficacy of alpha blockers for primary hypertension. Cochrane Database of Systematic Reviews 2009, Issue 4.

25 Example: antidepressants
Here is another example, identified by Moreno and colleagues in a BMJ paper looking at a set of trials on antidepressants and comparing the published literature with the set of trials submitted for FDA approval. On the left is a funnel plot based on all the FDA data. [CLICK] Here on the right, we have the results of all the publications that could be found reporting the same studies. You can see that we have many fewer studies (50 studies instead of 73), and it's clear from this plot that the studies that are missing are those that reported non-significant results. Not all cases will be so clear-cut, of course. The role of companies with a strong financial interest in the outcomes of the research is always a powerful conflict of interest that should be considered. Trial registration and standardised reporting of outcomes in a field can be reassuring about reporting bias, as can the inclusion of data from pharmaceutical regulators in the review. Source: Moreno SG, Sutton AJ, et al. Novel methods to deal with publication biases: secondary analysis of antidepressant trials in the FDA trial registry database and related journal publications. BMJ 2009, 339.

26 Impact of publication bias
The effect of reporting biases, while important, may be smaller than the risk of bias related to study design, such as blinding and allocation concealment. This is another Cochrane methodology review, assessing the impact on the results of meta-analyses of including grey literature, and finding between a 4% and 28% increase in odds ratios. Identifying grey literature will not always make a dramatic difference to your results, and may bring its own issues: the studies may be at higher risk of bias (which we assess as we do for all studies), and it may be that even the grey literature we find is more likely to be positive than grey literature overall, e.g. because authors are more likely to respond to requests for unpublished data. It's important to keep this in perspective. Hopewell S, McDonald S, Clarke MJ, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews 2007, Issue 2.

27 What does this mean for my review?
Prevention: a comprehensive search across multiple sources; grey literature, non-English literature, handsearching; clinical trial registries. Diagnosis: consider identifying small-study effects; sensitivity analysis to identify the possible impact; publication bias is not always the only explanation. There is no cure: explore any observed small-study effects; you will also be expected to comment on the likelihood of reporting biases. So, in practice, what should you do in your review? In relation to reporting bias in particular, the best thing we can do to prevent it is to do our best to find all the studies that have been conducted, by running a comprehensive search, attempting to find unpublished and grey literature, contacting authors in the field, etc. Trial registries are an important initiative – as they grow internationally, and more journals require registration before publication, registries have the potential to make an important difference to publication bias (although there are still limitations on the completeness of the data in registered trials, and on the application by journals of requirements for registration). Still, we may not be completely successful in preventing reporting bias. Thinking more broadly about small-study effects, we can use the tools available for diagnosis. Funnel plots and statistical tests can help us identify small-study effects, and sensitivity analyses, such as comparing fixed-effect and random-effects meta-analyses, can help us measure how great the impact of the small studies might be. Even where we do identify small-study effects, we have to remember the range of possible causes of these effects. If we do explore those effects and conclude that reporting bias is the most likely cause, there is no cure. Nonetheless, authors will be expected to comment on both of these issues – small-study effects and the possibility of reporting biases – in their review.

28 What to include in your protocol?
Assessment of reporting biases. Optional use of funnel plots and statistical tests to detect asymmetry. Bringing all this back to your protocol: in the Methods section of the review, under the collective heading 'Data and analysis', there is a specific subheading on 'Assessment of reporting biases'. In this section you should describe how you plan to consider reporting biases in your review, including the option of specific methods such as funnel plots and statistical tests, but remembering that small-study effects have many possible causes.

29 Key points
Look for small-study effects in your review. Be aware of their possible causes. Consider the possible impact of reporting bias on your review. Whenever in doubt, consult a statistical expert.

30 References and acknowledgements
Sterne JAC, Egger M, Moher D (editors). Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions [updated March 2011]. The Cochrane Collaboration. Egger M et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997; 315: 629. Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011;342:d4002 doi:10.1136/bmj.d4002. Acknowledgements: compiled by Miranda Cumpston. Based on materials by Jonathan Sterne, Matthias Egger, Julian Higgins, David Moher, Nancy Santesso, Holger Schünemann, the Cochrane Bias Methods Group, the Australasian Cochrane Centre and the Cochrane Applicability and Recommendations Methods Group. Approved by the Cochrane Methods Board. Translated by Carlos Cuello, Marta Roquè and Jesús López-Alcalde.

