Why does finding small effects in large studies indicate publication bias?


























Several methodological papers (e.g., Egger et al. 1997a, 1997b) discuss publication bias as revealed by meta-analyses, using funnel plots such as the one below.

[Figure: funnel plot of beta blockers in myocardial infarction]



The 1997b paper goes on to say that "if publication bias is present, it is expected that, of published studies, the largest ones will report the smallest effects." But why is that? It seems to me that all this would prove is something we already know: small effects are detectable only with large sample sizes. It says nothing about the studies that remained unpublished.



Also, the cited work claims that visually assessed asymmetry in a funnel plot "indicates that there was selective non-publication of smaller trials with less sizeable benefit." But, again, I don't understand how any features of the studies that were published can possibly tell us anything (allow us to make inferences) about works that were not published!
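(To make the plot type concrete: below is a minimal sketch of how a funnel plot like the one referenced is typically constructed, using simulated trial data rather than the Egger et al. beta-blocker studies; the study count, true effect, and unit error SD are illustrative assumptions.)

```python
# Minimal funnel-plot sketch with simulated (not real) trial data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_effect = 0.2                      # assumed true effect
n = rng.integers(20, 2000, size=100)   # assumed spread of study sizes

se = 1 / np.sqrt(n)                    # SE shrinks as 1/sqrt(n) (unit error SD)
estimates = rng.normal(true_effect, se)

# Funnel plot: effect estimate on x, precision (1/SE) on y, so large,
# precise studies sit at the narrow top of the funnel.
plt.scatter(estimates, 1 / se, s=10)
plt.axvline(true_effect, linestyle="--")
plt.xlabel("estimated effect")
plt.ylabel("precision (1 / SE)")
plt.show()
```

Without selective publication the points form a symmetric funnel around the true effect; the asymmetry Egger et al. describe appears when the low-precision, small-effect points are missing.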



References

Egger, M., Smith, G. D., & Phillips, A. N. (1997). Meta-analysis: principles and procedures. BMJ, 315(7121), 1533-1537.



Egger, M., Smith, G. D., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. BMJ, 315(7109), 629-634.






















  • I do not think you have this the right way round. Perhaps the answer to this Q&A might help stats.stackexchange.com/questions/214017/…
    – mdewey, 14 hours ago








  • For a small study to get published at all it will have to show a large effect no matter what the true effect size is.
    – einar, 14 hours ago
















Tags: meta-analysis, publication-bias






asked 14 hours ago by z8080, edited 5 hours ago by Alexis












2 Answers







First, we need to think about what "publication bias" is and how it will affect what actually makes it into the literature.



A fairly simple model for publication bias is that we collect some data and if $p < 0.05$, we publish. Otherwise, we don't. So how does this affect what we see in the literature? Well, for one, it guarantees that $|\hat\theta| / \mathrm{SE}(\hat\theta) > 1.96$ (assuming a Wald statistic is used). Now, one point being made is that if $n$ is really small, then $\mathrm{SE}(\hat\theta)$ is relatively large and a large $|\hat\theta|$ is required for publication.



Now suppose that in reality, $\theta$ is relatively small. Suppose we run 200 experiments, 100 with really small sample sizes and 100 with really large sample sizes. Of the 100 small-sample experiments, the only ones that will get published under our simple publication-bias model are those with large values of $|\hat\theta|$, just due to random error. However, in our 100 experiments with large sample sizes, much smaller values of $\hat\theta$ will be published. So if the larger experiments systematically show smaller effects than the smaller experiments, this suggests that perhaps $|\theta|$ is actually much smaller than what we typically see from the smaller experiments that make it into publication.
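This selection effect is easy to see in a quick simulation (a minimal sketch of the thought experiment above; the sample sizes, the true $\theta = 0.1$, and the unit error SD are assumptions chosen for illustration):

```python
# Simulate the publish-only-if-p<0.05 model described above.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.1   # small true effect (assumed)

def published_estimates(n, reps=100):
    """Run `reps` experiments of size n; keep those with |Wald stat| > 1.96."""
    x = rng.normal(theta, 1.0, size=(reps, n))
    theta_hat = x.mean(axis=1)
    se = x.std(axis=1, ddof=1) / np.sqrt(n)
    return theta_hat[np.abs(theta_hat / se) > 1.96]

for n in (20, 2000):
    pub = published_estimates(n)
    print(f"n={n:4d}: {len(pub):3d}/100 published, "
          f"mean published estimate = {pub.mean():.2f}")
```

Typically only a handful of the $n = 20$ experiments survive the filter, with published estimates around $0.4$ to $0.5$, while nearly all $n = 2000$ experiments are published with estimates near the true $0.1$: exactly the large-studies-show-smaller-effects pattern.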



Technical note: it's true that either a large $|\hat\theta|$ or a small $\mathrm{SE}(\hat\theta)$ (or both) will lead to $p < 0.05$. However, since effect sizes are typically thought of as relative to the standard deviation of the error term, these two conditions are essentially equivalent.






– Cliff AB (answered 10 hours ago, edited 5 hours ago)













  • "Now, one point being made is that if $n$ is really small, then $\mathrm{SE}(\hat\theta)$ is relatively large and a large $|\hat\theta|$ is required for publication." This is not, technically speaking, necessarily true: $\mathrm{SE}(\hat\theta) = \frac{\mathrm{SD}(\theta)}{\sqrt{n}}$, so if $\mathrm{SD}(\theta)$ is very small, then a small $\mathrm{SE}$ may result even for a small sample size, right? EDIT: Oh wait! Just read your closing sentence. :) +1
    – Alexis, 5 hours ago






















Read this statement a different way:



If there is no publication bias, effect size should be independent of study size.



That is, if you are studying one phenomenon, the effect size is a property of the phenomenon, not the sample/study.



Estimates of the effect size could (and will) vary across studies, but a systematic decrease in effect size with increasing study size suggests bias. The whole point is that this relationship suggests there are additional small studies showing low effect sizes that have not been published; if they were published, and therefore could be included in a meta-analysis, the overall impression would be that the effect size is smaller than what is estimated from the published subset of studies.



The variance of the effect size estimates across studies will depend on sample size, but you should see an equal number of under and over estimates at low sample sizes if there is no bias.
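(A minimal sketch of that last point, with assumed values: absent selection, small-study estimates straddle the true effect evenly, whereas a publish-only-if-significant filter skews them upward.)

```python
# Check the symmetry claim by simulation (illustrative values, not real data).
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.1, 20, 100_000
x = rng.normal(theta, 1.0, size=(reps, n))
theta_hat = x.mean(axis=1)
se = x.std(axis=1, ddof=1) / np.sqrt(n)

over = theta_hat > theta                      # overestimates of the effect
print(f"no selection:   {over.mean():.2f} of small studies overestimate")

significant = np.abs(theta_hat / se) > 1.96   # crude publication filter
print(f"with selection: {over[significant].mean():.2f} overestimate")
```

Roughly half the unselected small studies overestimate the effect, but around nine in ten of the "published" (significant) ones do, which is why the bottom of a biased funnel plot looks lopsided.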






– Bryan Krause (answered 10 hours ago, edited 4 hours ago)









  • But is it really correct to say that "If there is no publication bias, effect size should be independent of study size"? This is true of course when you refer to the true underlying effect, but I think they are referring to the estimated effect. An effect size that is dependent on study size (suggesting bias) amounts to a linear relationship in that scatter plot (high correlation). This is something I have personally not seen in any funnel plots, even though of course many of those funnel plots did imply that a bias existed.
    – z8080, 9 hours ago






  • @z8080 You're right; only if the mean and standard deviation estimates are unbiased will the estimated effect size be completely independent of study size when there is no publication bias. Since the sample standard deviation is biased, there will be some bias in the effect size estimates, but that bias is small compared to the level of bias across studies that Egger et al. are referring to. In my answer I'm treating it as negligible, assuming the sample size is large enough that the SD estimate is nearly unbiased, and therefore considering it to be independent of study size.
    – Bryan Krause, 8 hours ago












  • @z8080 The variance of the effect size estimates will depend on sample size, but you should see an equal number of under- and over-estimates at low sample sizes.
    – Bryan Krause, 8 hours ago










  • "Estimates of effect size could (and will) vary across studies, but if there is a systematic relationship between effect size and study size": that phrasing is a bit unclear about the difference between dependence and bias. The distribution of effect size estimates will be different for different sample sizes, and thus will not be independent of sample size, regardless of whether there's bias. Bias is a systematic direction of the dependence.
    – Acccumulation, 4 hours ago










  • @Acccumulation Does my edit fix the lack of clarity you saw?
    – Bryan Krause, 4 hours ago










