In numerous scientific studies, a general declining trend in measured effects has been noticed, seemingly without reason, and this trend has been termed 'the decline effect'. For example, one study found that animals generally prefer mates with symmetrical features, by a margin of around 30%. As the study was reproduced over time, however, that margin dropped lower and lower, until the effect eventually regressed to insignificance. Many causes for this effect have been hypothesized and examined, but no clear cause has been found. The situation is made even more perplexing by the fact that the effect can be observed across multiple disciplines and is not limited to any one field of study.
The decline effect suggests that a data set may contain certain patterns and trends, but that over time, after a series of observations, the data will regress to the mean and show results closer to reality. After listening to the radio program, I was fairly convinced, but I really didn't know what to think. Why do these data change over time? Is replication really the problem? If all our well-established, multiply confirmed findings start to look increasingly uncertain, where does that leave us? It's as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.

"The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets canceled out... And yet Schooler has noticed that many of the data sets that end up declining seem statistically solid--that is, they contain enough data that any regression to the mean shouldn't be dramatic. "These are the results that pass all the tests," he says. "The odds of them being random are typically quite remote, like one in a million. This means that the decline effect should almost never happen. But it happens all the time!..." (Lehrer)

If these results aren't random, why do the data change over time? Is it, then, the act of observation that changes reality itself? The podcast gave an example that really stood out to me: place your hand on your leg and you feel it, but as you leave it there, it becomes less and less noticeable; somehow a kind of habituation may come into play here. The podcast then goes on to say that, in this sense, we can never know what is absolutely set in stone, and even the notion that the laws of reality are unchangeable may itself change, because it is just a reasonable assumption that we, as humans, make. In this sense, you can never know what is certain and what is not; truth, then, would be based on "the observer's position, habits, biases, information whatever." But some things in life seem to be constant, like Newton's laws. Maybe the decline effect only happens where many variables are at stake, most evidently in science. The real question in this study is: how do we know that the facts that are here today will still be there tomorrow?
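Since regression to the mean is the explanation quoted above, here is a minimal, self-contained sketch of how it can produce a declining effect once only the most striking early results get followed up. Every number, the selection threshold, and the whole setup are my own illustrative assumptions, not data from the podcast or from Lehrer's article.

```python
import random

# Illustrative sketch: an early statistical fluke gets "published" and then
# shrinks on replication, purely through regression to the mean.
random.seed(0)

TRUE_EFFECT = 0.10   # assumed small real effect
NOISE_SD = 0.30      # assumed measurement noise
N_LABS = 10_000      # hypothetical number of independent initial studies

def run_study():
    """One noisy measurement of the true effect."""
    return TRUE_EFFECT + random.gauss(0, NOISE_SD)

# Initial studies: keep only the "exciting" ones with a large observed effect.
initial = [run_study() for _ in range(N_LABS)]
published = [effect for effect in initial if effect > 0.5]

# Replications of the published studies: the selection step is gone,
# so the average observed effect drifts back toward the true value.
replications = [run_study() for _ in published]

print(f"mean published effect:   {sum(published) / len(published):.2f}")
print(f"mean replication effect: {sum(replications) / len(replications):.2f}")
print(f"true effect:             {TRUE_EFFECT:.2f}")
```

Running something like this typically shows the "published" average sitting far above the true effect while the replications fall back toward it, which is the canceling-out of early flukes that the quote describes.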
This paradoxical notion that all scientific knowledge rests on certain assumptions (and might therefore be considered unscientific by its own standards) is not limited to science alone. It also appears in mathematics, which provides a much purer and more ideal environment for reasoning and deduction than the real, messy world does. A perfect example is Kurt Gödel's first incompleteness theorem, which states, roughly, that no matter what consistent assumptions (axioms) you start from, as long as they are strong enough to describe basic arithmetic, there will always be statements that are true but that you cannot prove from those assumptions. This inherent uncertainty seems to extend across all fields of knowledge.
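For readers who want the formal version, here is a standard textbook-style statement of the theorem; the wording and notation are my own paraphrase, not a quotation from any source discussed above.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem*{thm}{Theorem}

\begin{document}
% A standard, paraphrased statement of the first incompleteness theorem.
\begin{thm}[G\"odel's first incompleteness theorem]
Let $T$ be a consistent, effectively axiomatizable formal theory that is
strong enough to express elementary arithmetic. Then there is an arithmetical
sentence $G_T$ (the ``G\"odel sentence'' of $T$) such that
\[
  \mathbb{N} \models G_T
  \qquad\text{and}\qquad
  T \nvdash G_T ,
\]
that is, $G_T$ is true of the natural numbers but cannot be proved from the
axioms of $T$.
\end{thm}
\end{document}
```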