The Scientific Method has its roots in ancient civilizations, as outlined in this timeline, which you can look up for yourself. Its development into what we use today really began with Descartes, a brilliant 17th-century mathematician, whose work you can read about in this history.
The importance of the Scientific Method is that it provides a consistent framework for the advancement of a scientific hypothesis through testing and, finally, to the status of theory or fact. This process, when coupled with peer review and statistical verification, provides the confidence required to, for example: bring drugs to market; build aircraft, cars, buildings and bridges; and certify the effectiveness and safety of fluoridated water. Critically, it is the foundation upon which public policy is based.
While the steps in the Scientific Method are not always described the same way, they are generally outlined as:
- Observe: Collect evidence and make measurements relating to the phenomenon you intend to study.
- Hypothesize: Invent a hypothesis explaining the phenomenon that you have observed.
- Predict: Use the hypothesis to predict the results of new observations or measurements. Often, advanced mathematical and statistical hypothesis testing techniques are used to design experiments that attempt to effectively test the plausibility of hypotheses.
- Verify: Perform experiments to test those predictions. Many argue that "falsify" is the better term here: experiments should actively attempt to disprove the hypothesis.
- Evaluate: If the experiments contradict your hypothesis, reject it and form another. If the results are compatible with predictions, make more predictions and test it further.
- Publish: Tell other people of your ideas and results, and encourage them to verify the claims themselves, in particular by inviting them to challenge your reasoning and check that your experimental results can be repeated. This process is known as peer review.
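The cycle above can be sketched in code. This is a toy illustration only, with all names and the "hidden law" invented for the example; real science is nowhere near this mechanical, but the observe–hypothesize–predict–verify–evaluate loop has this shape:

```python
def run_scientific_method(true_process, hypothesize, max_rounds=20):
    """Toy loop: observe, hypothesize, predict, verify, evaluate."""
    observations = [true_process(x) for x in range(3)]   # Observe
    for _ in range(max_rounds):
        model = hypothesize(observations)                # Hypothesize
        x_new = len(observations)
        predicted = model(x_new)                         # Predict
        measured = true_process(x_new)                   # Verify
        observations.append(measured)
        if abs(predicted - measured) < 1e-9:             # Evaluate
            return model                                 # survives testing
    return None                                          # every guess failed

# Example: a hidden linear law, and a hypothesis that fits a line
# through the last two observations.
hidden = lambda x: 2 * x + 1

def fit_line(obs):
    x1, x2 = len(obs) - 2, len(obs) - 1
    slope = obs[x2] - obs[x1]
    intercept = obs[x2] - slope * x2
    return lambda x: slope * x + intercept

model = run_scientific_method(hidden, fit_line)
```

A hypothesis that survives one round of testing is not "proven"; in the real process it goes out for publication and peer review, where others try to break it.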
Climate models bring together hundreds of known facts (chemical reactions, for example), observed correlations (CO2 with temperature) and scientific conjectures in an effort to predict the climate into the future. These models have some fundamental issues. Importantly, the relationships between factors are not well understood, and therefore assumptions and parameterisations need to be made using the best available knowledge and data. If any one of these assumptions is wrong then it will have a knock-on effect throughout the whole model, creating spurious results. If many of the assumptions are even a little bit out then it's impossible to accept the output of the model.

It is also important to understand that models are limited by available computing power, which means that the effects of water vapour, clouds etc. are not modelled directly, as doing so requires a level of computing power that is many years away. Instead, they are represented by an averaging effect over a large area: parameterisation. Current estimates are that computing power needs to increase by 10,000 times in order to model some of these processes more effectively.

That brings us to another problem: the resolution of the models is too low. That is, the smallest area of atmosphere that can be analysed in a global model is too large. If clouds are as important as everyone agrees they are in the formation of climate, then a smallest unit of measure of even 20 km is grossly inadequate.
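What parameterisation means in practice can be shown with a deliberately simplified sketch. Here a one-dimensional "field" with a sharp small-scale feature (think of a cloud) is averaged into coarse grid cells; the numbers, field, and block size are all invented for illustration. Anything smaller than the cell simply disappears into the average:

```python
import statistics

def coarse_grain(fine_field, block):
    """Average a 1-D fine-resolution field into coarse blocks.

    This loosely mimics how sub-grid processes are parameterised:
    the model only ever sees one averaged value per grid cell, so
    any structure smaller than the cell is lost.
    """
    return [statistics.mean(fine_field[i:i + block])
            for i in range(0, len(fine_field), block)]

# A fine-resolution toy field with a sharp 'cloud' of value 1.0...
fine = [0.0] * 8 + [1.0] * 4 + [0.0] * 8   # 20 fine points
# ...as seen by a model whose smallest cell is 10 points wide:
coarse = coarse_grain(fine, 10)            # [0.2, 0.2]
```

The coarse model sees two identical, featureless cells of 0.2; the sharp feature is gone, which is the kind of information loss the grid-resolution complaint is about.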
CO2 Science is a website that provides a lot of important information of which the general public would be completely unaware. Its list of climate model inadequacies covers: biology, clouds, ENSO, general issues, radiation, rainfall, sea ice and soil moisture. Choosing just one example, from the article 'Climate Models: Are they improving?', we take the following:
Reference: Hoar, M.R., Palutikof, J.P. and Thorne, M.C. 2004. Model intercomparison for the present day, the mid-Holocene, and the Last Glacial Maximum over western Europe. Journal of Geophysical Research 109: 10.1029/2003JD004161.
What was done: The authors of this intriguing paper conducted "an evaluation of the performance over western Europe of an ensemble of General Circulation Models (GCMs) used to simulate climates at the present-day, the mid-Holocene and the Last Glacial Maximum (LGM)," comparing simulations of surface air temperature and precipitation among the different models and with observed and proxy data sets.
What was learned: In the words of the authors, "for absolute values, there was a higher inter-model correlation for temperature than there was for precipitation for all months and all time slices." In terms of differences between the present and the mid-Holocene, however, they found that the temperature correlations "are no longer robust and show a large variation where any climate signal was swamped by the inter-model variability." Hoar et al. also found that "experiments performed with models from the same institution tend to cluster," and that "statistical comparisons of the models with observed and proxy data sets demonstrate a lack of consistency in model performance between months, transects and time slices." Most amazing of all, perhaps, the three East Anglia researchers found that for the LGM, "a more realistic simulation of the ocean, as given in the sensitivity study of Kitoh et al. (2001), widened [our emphasis] the difference between simulated and proxy derived winter temperatures in western Europe." And in their very next sentence, they say that "this does not necessarily imply that the models are getting worse (although there is undoubtedly a need for further development), but rather emphasizes the need for more accurate and better spatially resolved palaeoproxy data sets for the LGM."
What it means: Taken as a whole, the several findings of Hoar et al. reveal a number of serious model deficiencies. The one just mentioned, however, is astounding: incorporation of a more realistic simulation of the ocean actually leads to temperature simulations that are worse than they are with a less realistic ocean, demonstrating that while a little knowledge can be a dangerous thing, a little more knowledge may sometimes be even more dangerous. Just because climate models tend to become more complex with the passage of time does not insure they are getting better; they may well be stagnating or actually on a retrograde course in terms of their ability to faithfully represent the final climatic outcome of a small perturbation of the atmosphere's composition, especially when that perturbation involves the concentration of a trace gas (CO2) that stimulates all sorts of phenomena in nearly all of earth's plants, many of the physiological processes of which have significant climatic implications. Consequently, the discrepancy between simulated and proxy temperatures discovered by Hoar et al. may well indicate a need for greater model introspection and less questioning of the palaeoproxy data sets with which the model simulations disagree.
You will find article after article detailing issues with the models and pointing out that due to a lack of computing power a lot of parameterisation needs to take place.
Here's the rub with climate models: they don't work. Not even a little bit. If you plug in climate conditions from the 1930s and run the models, they don't get the next 70 years anywhere near correct, failing to predict the cooling from the 1940s to the 1970s. If you add a bunch of adjustments so that they get the cooling period correct, then they fail to predict the warming period that follows. The modellers' answer? Add or tweak parameters until they do. In the world of statistics and validation this is referred to as backfitting, and it is one of the greatest sins that can be committed by any scientist.
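The statistical point here can be demonstrated in a few lines. In this invented toy example, a "model" that is tuned to reproduce its calibration period perfectly (here, by simply memorising it) scores zero in-sample error, yet collapses on the out-of-sample period, while a plain two-parameter trend fit generalises. The data and both "models" are fabricated solely to illustrate why in-sample agreement is not validation:

```python
def backfit(train):
    """'Model' tuned to reproduce its calibration data exactly --
    in-sample error is zero, which proves nothing."""
    table = dict(train)
    return lambda x: table.get(x, 0.0)

def fit_trend(train):
    """Honest two-parameter model: a least-squares line."""
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    slope = (sum((x - mx) * (y - my) for x, y in train) /
             sum((x - mx) ** 2 for x, _ in train))
    return lambda x: my + slope * (x - mx)

def rmse(model, data):
    return (sum((model(x) - y) ** 2 for x, y in data) / len(data)) ** 0.5

# Toy 'climate record' with a simple linear trend.
record = [(x, 0.5 * x + 10) for x in range(10)]
train, test = record[:7], record[7:]   # calibrate on the past, verify on the rest

mem, line = backfit(train), fit_trend(train)
# In-sample the memoriser looks perfect; out-of-sample it collapses.
```

This is exactly why validation insists on testing against data the model was never tuned to: in-sample fit can always be manufactured.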
Therefore, the predictions from climate models are shown to be false at the verification phase of the Scientific Method. Does this mean that they get thrown out? Nope. Climate scientists simply request more money in order to improve the models. In today's politically charged environment this money is forthcoming, which is why there are now billions of dollars being spent on the issue around the world.
To anybody who's had the slightest bit of scientific or mathematical training, what's going on is an abomination. The corruption of the Scientific Method by Climate Fascists will have the long-term negative effect of diminishing the public's trust in all science and, as a consequence, government funding of important research in all fields will be harder to come by.
Given all of the issues with the output of climate models, the work should never even get to the publication and peer review stage. In his testimony before a Congressional Committee, Dr Edward Wegman provided an analysis of the use of statistics in the development of the infamous Hockey Stick and of the peer review process that supported it. Wegman, it should be noted, confirmed in the hearings that he voted for Al Gore in 2000, so he's no Republican stooge. I'll end with an excerpt from his report regarding statements of temperature and peer review (I'll be doing an essay on peer review in climate science at some point):
It is important to note the isolation of the paleoclimate community; even though they rely heavily on statistical methods they do not seem to be interacting with the statistical community. Additionally, we judge that the sharing of research materials, data and results was haphazardly and grudgingly done. In this case we judge that there was too much reliance on peer review, which was not necessarily independent. Moreover, the work has been sufficiently politicized that this community can hardly reassess their public positions without losing credibility. Overall, our committee believes that Dr. Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millennium cannot be supported by his analysis.