Thursday, October 20, 2011

Faster than the speed of light


At Cern, the world’s largest physics lab, results suggest that subatomic particles have travelled faster than the speed of light. Unless the result is shown to be in error, modern science as we know it will never be the same again. The speed of light is held to be the universe’s ultimate speed limit, and, following Albert Einstein’s theory of special relativity, modern physics depends on the idea that nothing can exceed it. In the course of the experiments, the researchers noticed that the particles arrived at their destination 60 billionths of a second earlier than they would have if they had travelled at the speed of light. Although this is only a minute difference, it is a consistent one, says the lead researcher. “The team measured the travel times of neutrino bunches some 16,000 times, and have reached a level of statistical significance that in scientific circles would count as a formal discovery.”

Article: http://www.bbc.co.uk/news/science-environment-15017484
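
To get a sense of the scale involved, here is a minimal back-of-the-envelope sketch in Python of what a 60-nanosecond early arrival implies. The roughly 730 km distance between Cern and the Gran Sasso detector is an assumption on my part; the excerpt above does not state the baseline.

```python
# A rough back-of-the-envelope check of what "60 billionths of a second early"
# implies. The ~730 km CERN-to-Gran Sasso baseline is an assumption
# (commonly quoted for this experiment, but not stated in the excerpt above).

C = 299_792_458           # speed of light, m/s
BASELINE_M = 730e3        # assumed CERN -> Gran Sasso distance, metres
EARLY_S = 60e-9           # reported early arrival: 60 billionths of a second

light_time = BASELINE_M / C               # time light needs for the trip
fractional_excess = EARLY_S / light_time  # (v - c) / c, to first order

print(f"Light travel time: {light_time * 1e3:.3f} ms")              # ~2.435 ms
print(f"Implied fractional speed excess: {fractional_excess:.2e}")  # ~2.5e-05
```

In other words, if the timing holds up, the neutrinos would only need to beat light by a few parts in 100,000, which is exactly why the measurement hinges on such fine timing and statistics.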

Apart from being a potentially ground-breaking discovery that would render obsolete much research built on the premise that the speed of light is the limit, Cern’s study also raises a question of epistemology: the justification of knowledge. How do we know that what we know is, and will always be, true? Do age, education, culture and experience influence the selection of data and the formation of knowledge claims? How can we decide what ought to be checked further? Judging from this research, it seems that our notion of what constitutes knowledge and legitimate information is flawed, and prone to sudden upheavals like this one, in which data from a single experiment puts decades of studies in jeopardy.


Issues being debated

-How do we know that what we know is, and will always be, true?

-Do age, education, culture and experience influence the selection of data and the formation of knowledge claims?

-How can we decide what ought to be checked further?

In the field of science, everything that we allegedly ‘know’ comes from inductive reasoning – reasoning that evaluates propositions abstracted from observations. There are four steps to inductive reasoning: observation, analysis, inference, and confirmation. The assumption that inductive reasoning rests upon is that a constant, or at least a constant range, exists. This bears directly on what is being debated: unless we continuously experiment, we can never say for certain that the data is accurate enough for a legitimate conclusion. In the article, Cern says that its research has “reached a level of statistical significance that in scientific circles would count as a formal discovery”. Cern chooses its language carefully here; the wording lets readers know that the team acknowledges potential deviation and even regression to the mean. But the question still stands: we have no way of knowing whether the level of significance dictated by the scientific community is significant enough for application. There are always outliers in an experiment, and the article discusses them only briefly and very vaguely.
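
To make the phrase “level of statistical significance” a little more concrete, here is a minimal sketch of how repeating a measurement sharpens an estimate. The spread assumed for a single timing measurement is purely illustrative (the article gives no such figure), and the roughly five-sigma “discovery” convention is a general particle-physics benchmark rather than something the article states.

```python
# A hedged sketch of why repeating a timing measurement ~16,000 times can turn
# a tiny 60 ns offset into a "formal discovery"-level result.
# SIGMA_SINGLE_NS is purely illustrative, not a figure from the article;
# the ~5-sigma discovery convention is a general benchmark, also assumed here.

import math

MEAN_OFFSET_NS = 60.0      # reported early arrival, in nanoseconds
N_MEASUREMENTS = 16_000    # number of neutrino-bunch timings quoted in the article
SIGMA_SINGLE_NS = 1250.0   # assumed spread of one timing measurement (illustrative)

# The standard error of the mean shrinks with the square root of the sample size,
# which is how many imprecise measurements combine into one precise estimate.
standard_error = SIGMA_SINGLE_NS / math.sqrt(N_MEASUREMENTS)

# Significance: how many standard errors the observed offset sits away from zero.
significance = MEAN_OFFSET_NS / standard_error

print(f"Standard error of the mean: {standard_error:.2f} ns")      # ~9.88 ns
print(f"Observed offset is ~{significance:.1f} sigma from zero")   # ~6.1 sigma
```

Note that a calculation like this only captures the statistical side; it says nothing about systematic errors such as a miscalibrated clock or cable, which is precisely the kind of outlier the article glosses over.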

This leads to the question of whether there is a certain bar of significance that must be crossed before data from an experiment becomes applicable. The realistic answer is never, because it is beyond human capacity to collect sets of data over an infinite number of years and condense them into a period of two or three. However, if all discoveries have reached a level of significance approved by the scientific community, we can rest assured that we are as certain of particles being able to exceed the speed of light as we are of global warming. I believe that scientific experiments can never reach a level of certainty amounting to ‘precise’ or ‘accurate’; the ideal situation would be to have these experiments constantly running, with the results constantly recorded and analyzed. That way, we can be the most certain of our conclusions, because we know that the study is still being conducted, and will continue to be conducted, so any minute change will be noticed immediately.

Article: http://www.wired.com/wiredscience/2011/06/dzero-non-result/

This issue is very similar to the reported discovery of a possible new particle at Fermilab’s Tevatron collider, in which the results from the Collider Detector at Fermilab (CDF) could not be confirmed by the lab’s other detector, DZero. The two articles share the same underlying issue regarding the justification of knowledge, in that the flaw of inductive reasoning is exposed to public scrutiny. Fermilab based its results on one detector and published them without confirmation from the other, and the signal was soon put down to experimental error. At bottom, the justification of knowledge is what is in dispute: there can never be enough results to prove a point conclusively. We only know what we find from doing experiments.

5 comments:

  1. I personally think that there is no need for all three questions; the post could instead focus on whether or not our personal background (age, education, culture, experience etc.) can influence our selection of data and formation of knowledge. I don't really see what was being argued on both sides, only further questions posed that expand on this article. Personally, I believe that things such as schemas can ultimately affect someone's perception and can ultimately change the results of experiments. Although, as stated in the article, the researchers at Cern are actually encouraging other people to conduct similar experiments to find faults in theirs, and said that they have repeated their test many times and that this is the only conclusion they can come up with at this point. The only problem raised here is: how is it possible for other organizations to possess similar equipment to conduct the same experiment? It is a known fact that Cern is the biggest physics lab in the world, so would research and experiments done by others be less precise and therefore unable to prove its results wrong?

  2. This is an interesting article. I am extremely curious to find out what impact this will have on the science community if the results prove to be true. Scientific theories often build upon each other, to such an extent that they are assumed to be true. I think a positive result of this experiment is that it will remind the scientific community that, at the end of the day, theories are still theoretical. This also brings up the issue of whether there is anything we can actually believe. The theory that the speed of light is the fastest speed anything can go has lasted for a long time and has been the basis of many scientific ideas. If such a basic idea can be disproven, is there any point in learning these theories? Perhaps everything we know now will be looked upon as folly in the future.

  3. I agree with Charlie where he comments that this result is an important reminder that scientific thinking never yields absolute certainty. The result that the CERN physicists arrived at is obviously very important, not necessarily because it may disprove Einstein's theory of relativity, but because it could provide evidence that the same physical, universal laws do not, in fact, apply across the whole universe. One general assumption often made, for example, is that the size of a hydrogen atom is consistent throughout the universe, or that sub-atomic forces remain constant. While this seems perfectly reasonable and there has never been much reason to doubt it, it is still an assumption. Most people get along fine without worrying about this because to all intents and purposes it makes no difference in the everyday scheme of things.

    I thought your statement about the accuracy of scientific generalisations was very interesting, especially given that we can never prove our generalisations. Even if we could, however, the usefulness of this could be limited, as this is very similar to the issue of the map vs. the territory: as scientific theories become more and more accurate they become more complicated and harder to apply, a scientific derivative of the Paradox of Cartography.

  4. This comment has been removed by the author.

  5. I think you did an interesting job of analyzing the issue of knowledge and the knower. People usually think that science is something concrete, but to some degree a lot of science is based on speculation or theory, so there are never absolute results. This also relates to the area of history. In history, something can never simply "be"; it can only be what it seems to be. That's why the way to see history more precisely is to analyze its origin, purpose, intent and consequences, among many other factors. Because our world is full of ambiguity, all we can do is analyze and categorize the aspects of our reality and try to make sense of them. Also, you mention that there are always outliers in an experiment, and I think that in our objective reality those data points make no difference to the overall framework.
