Among the issues most commonly discussed are individuality, the rights of the individual, the limits of legitimate government, morality, history, economics, government policy, science, business, education, health care, energy, and man-made global warming evaluations. My posts are aimed at intelligent and rational individuals, whose comments are very welcome.

"No matter how vast your knowledge or how modest, it is your own mind that has to acquire it." Ayn Rand

"Observe that the 'haves' are those who have freedom, and that it is freedom that the 'have-nots' have not." Ayn Rand

"The virtue involved in helping those one loves is not 'selflessness' or 'sacrifice', but integrity." Ayn Rand

22 April 2018

The Worst States in Taxpayer Debt Burden and Taxes Per Capita

According to a report released in September 2017 by Truth in Accounting, a non-profit organization, the states listed below have a per taxpayer debt burden greater than $20,000.  The debt burden or surplus was calculated by taking the state's total reported assets, subtracting capital assets and assets restricted by law, and comparing the result to the money the state owes, including obligations for pension plans and healthcare benefits for retirees.

Listed from the highest debt burden down to the last of those with debt burdens greater than $20,000 per taxpayer as of the 2016 fiscal year:

50. New Jersey, $67,200

49. Illinois, $50,400

48.  Connecticut, $49,500

47.  Kentucky, $39,000

46.  Massachusetts, $32,900

45.  Hawaii, $27,100

44.  Delaware, $26,300

43.  California, $21,600

42.  New York, $20,500

Each of these states, with the exception of Kentucky, has long been a one-party state, dominated by the Democratic Party.  Clearly that party is very little concerned about the debt burden it places on its taxpayers.

There are states with a taxpayer surplus, believe it or not. From the highest to the lowest taxpayer surplus we have:

1.  Alaska, $38,200

2.  North Dakota, $24,000

3.  Wyoming, $20,500

4.  Utah, $4,600

5.  Nebraska, $2,600

6.  South Dakota, $2,300

7.  Tennessee, $2,100

8.  Idaho, $1,800

9.  Iowa, $500

The following states have taxpayer debt burdens below $5,000:  Arizona, Arkansas, Colorado, Florida, Indiana, Montana, Nevada, New Mexico, Oregon, Virginia, and Wisconsin.

According to the Tax Policy Center's data on total state and local tax revenue per capita for 2015, the highest-tax states (including the District of Columbia), those with per capita taxes above $5,000, are:

51.  Washington, DC, $10,605

50.  North Dakota, $9,183

49.  New York, $8,743

48.  Connecticut, $7,422

47.  New Jersey, $6,680

46.  Wyoming, $6,389

45.  Massachusetts, $6,349

44.  Hawaii, $6,111

43.  Minnesota, $5,944

42.  California, $5,864

41.  Maryland, $5,857

40.  Vermont, $5,800

39.  Illinois, $5,751

38.  Rhode Island, $5,421

37.  Maine, $5,105

Most of the states with the greatest debt burdens are also in this list of states with the highest total taxes per capita.  North Dakota and Wyoming are special cases, with low populations and very high tax revenues from oil and gas and, in Wyoming's case, also from coal.

One can escape the high taxes and the high debt burdens by pulling up stakes and moving to a state with lower taxes and lower debt burdens.  From July 2013 to July 2014, the following states had net domestic migration losses of more than 15,000 people:

1-New York (-153,921)
2-Illinois (-94,956)
3-New Jersey (-55,469)
4-California (-32,090)
5-Pennsylvania (-31,448)
6-Michigan (-28,679)
7-Connecticut (-26,216)
8-Virginia (-20,400)
9-Ohio (-18,243)
10-Massachusetts (-16,354)
11-Maryland (-15,295)
Once again, this list of states with the largest net losses in domestic migration includes eight solidly blue states and three states that lean blue.

The governance of state and local governments has consequences.  Bad governance produces bad results.  The examples of bad governance above are commonly paired with other indicators, such as low rankings for freedom in the state.




25 March 2018

Solving the Parallel Plate Thermal Radiation Problem Correctly Proves the Settled Science of Man-Made Global Warming Wrong


This is essentially a re-posting of an article of 2 November 2017 which has been edited with additions several times since the original version appeared.  The essential physics has not been changed, but I have tried to help the reader understand it more readily along with its implications for the fate of the catastrophic man-made global warming hypothesis.

I will present the Consensus, Settled Science solution to the parallel plane black body radiator problem and demonstrate that it is wrong.  I will show that it exaggerates the energy of electromagnetic radiation between the two planes by as much as a factor of two as their temperatures approach one another.  As a result, the calculations of the so-called Consensus, Settled Science dealing with thermal radiation very often result in violations of the Law of Conservation of Energy.

Their calculations of thermal radiation greatly exaggerate the density of the infrared photon radiation in the atmosphere and the extent of the absorption of infrared radiation by infrared-active molecules, commonly called greenhouse gases, such as water vapor and carbon dioxide.  Their theory of the transport of heat energy between the surface of the Earth and the atmosphere and through the atmosphere is very wrong.  It exaggerates the role of thermal radiation greatly and minimizes the role of the water evaporation and condensation cycle and the role of thermal convection.  It cannot be emphasized enough how harmful their mishandling of thermal radiation calculations is to their understanding of the critical issues pertaining to the Earth's climate and to man's role in changing the climate through the use of carbon-based (fossil) fuels. 

In the case in which the two black body parallel plane radiators have the same temperature, the volume between them becomes that of a black body radiator.  The fundamental characteristic of a black body radiator is the constant energy density in the cavity.  I will show that the so-called settled science treatment, which wrongly takes the primary characteristic of a black body radiator to be that the power of emission of radiant energy is given by the Stefan-Boltzmann Law, clearly violates the real principal characteristic of a black body cavity, namely that its constant energy density, e, is given by Stefan’s Law as


e = aT⁴,


where T is the temperature in Kelvin, a is Stefan's Constant of 7.57 × 10⁻¹⁶ J/(m³·K⁴), and e is in Joules per cubic meter.
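
For readers who want to check the numbers, here is a short Python sketch of my own, not part of the original derivation; the 288 K temperature is merely illustrative.  It evaluates Stefan's Law and confirms that the ratio σ/a is just c/4, which is why the emitted power and the energy density are related by P = (σ/a) e in the discussion below.

# A minimal numerical check of Stefan's Law e = a*T^4 and its relation to the
# Stefan-Boltzmann emission law P = sigma*T^4; since sigma = (c/4)*a, P = (sigma/a)*e.
a = 7.57e-16      # Stefan's constant, J/(m^3 K^4)
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
c = 2.998e8       # speed of light, m/s

T = 288.0         # an illustrative temperature, K (my assumption, not a measured value)

e = a * T**4      # energy density of the cavity radiation, J/m^3
P = sigma * T**4  # emission into a T = 0 K sink, W/m^2

print(f"e(288 K) = {e:.3e} J/m^3")      # about 5.2e-6 J/m^3
print(f"P(288 K) = {P:.1f} W/m^2")      # about 390 W/m^2
print(f"sigma/a  = {sigma/a:.3e} m/s")  # about 7.49e7 m/s
print(f"c/4      = {c/4:.3e} m/s")      # agrees with sigma/a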


I have already demonstrated the failure of the settled science treatment of thermal radiation from black body radiators in the form of concentric spherical shells in a paper posted on 23 October 2017, entitled Thermal Radiation Basics and Their Violation by the Settled Science of the Catastrophic Man-Made Global Warming Hypothesis, but I want to post this solution with its simpler geometry so that the reason the consensus treatment is wrong will be even more apparent to thinking readers.

Numerous critics of the consensus science on catastrophic man-made global warming have argued that the Second Law of Thermodynamics claims that energy only flows from the warmer body to the colder body, but the consensus scientists have argued that thermodynamics only applies to the net flow of energy.  I have long argued that the reason that radiant energy only flows from the warmer to the cooler body is because the flow is controlled by an electromagnetic field and an energy gradient in that field.  I will offer that proof in this paper.  The Second Law of Thermodynamics is not invoked as the basis of the proof in this paper, but the minimum energy of a system consistent with the Second Law of Thermodynamics does turn out to be a consistent solution to the problem of thermal radiation, while the Consensus, Settled Science theory of thermal radiation does not minimize the system total energy, does not produce the correct energy density of a black body cavity, and is not consistent with the Conservation of Energy.  I have pointed out its failure to conserve energy in many prior posts. 


In a black body cavity, the electromagnetic radiation is in equilibrium with the walls of the cavity at a temperature T.  The energy density e is the mean value of 


½ E·D + ½ H·B,


where E is the impressed electric field, D is the displacement, which differs from E when the medium is polarized (i.e., has dipoles), H is the impressed magnetic field and B is the magnetic polarization of a medium.  If the cavity is under vacuum, then D = E and B = H in the cavity volume and |E| = |H|, so e equals |E|².  The mean value of the energy density of the electromagnetic field in the cavity depends on the temperature and is created by the oscillating dipoles and higher order electric poles in the cavity walls.  The energy density is independent of the volume of the cavity.  The radiation pressure on the cavity walls is proportional to the energy density.

This physics may be reviewed by the reader in an excellent textbook called Thermal Physics by Philip M. Morse, Professor of Physics at MIT, published in 1965 by W.A. Benjamin, Inc., New York.  Prof. Morse wrote it as a challenging text for seniors and first-year graduate students.  I was fortunate to use it in a Thermodynamics course at Brown University in my Junior year.  Alternatively, see my post The Greenhouse Gas Hypothesis and Thermal Radiation -- A Critical Review.

Inside a black body cavity radiator at a temperature T, the energy density, or the energy per unit volume of the vacuum in the cavity is constant in accordance with Stefan’s Law.  If one opens a small peephole in the wall of the black body cavity, the energy density just inside that peephole is the energy density of the black body cavity and that energy density is proportional to the square of the electric field magnitude there.  The Stefan-Boltzmann Law states that the flow rate of energy out of the peephole when the black body cavity is surrounded by vacuum and an environment at T = 0 K, is given as the power P per unit area of the peephole as


P = σT⁴


Note that P = (σ/a) e and that e in the T = 0 K sink is equal to zero.  The change in energy density, from the value given by Stefan's Law in the vacuum volume immediately inside the peephole to zero in the T = 0 K environment outside, drives the emission of thermal radiation out of the peephole at the rate given by the Stefan-Boltzmann Law.

Why is it that a surface which is not a peephole into a black body cavity might act like a black body radiator?  It has to be that the energy density very, very close to that surface has the characteristic of the energy density in a black body cavity radiator, namely that


e = aT⁴.


Any flow of energy out of the surface due to its temperature T must be caused by this electromagnetic field energy density at the surface generated by the vibrational motion of electric charges in the material of the surface.  Such flow of energy from the surface only occurs to regions with an energy density that is lower.  There is no flow of energy from the inside wall of a black body radiator because the energy density everywhere inside the cavity at equilibrium is equal.  P from the interior walls is everywhere zero.  A non-zero P is the result of a non-zero Δe.

In fact, while it is commonly claimed that photons inside the cavity are being 100% absorbed on the walls and an equal amount of radiant energy is emitted from the absorbing wall, the actual case is that the radiant energy incident upon the walls can be entirely reflected from the walls.  Planck had derived the frequency spectrum of a black body cavity from an assumption of complete reflection from the walls.

Here is the problem of the parallel plane black body radiators diagrammed, where T_C is the cooler temperature and T_H is the warmer temperature:




Let us first consider the case that each plane is alone and surrounded by an environment of space at T=0.  Each plane has a power input that causes the plane to have its given temperature.  Each plane radiates electromagnetic energy at a rate per unit surface area of 

P = σT⁴.


Consequently, if neither plane were in the presence of the other and each plane has a surface area of A on each side and P_CO, P_CI, P_HO, and P_HI are all radiation powers per unit area, we have

P_C = AP_CO + AP_CI = 2AσT_C⁴


And


P_H = AP_HO + AP_HI = 2AσT_H⁴,


since in equilibrium the power input is equal to the power output by radiation.


In the consensus viewpoint, shared by many physicists and by almost every climate scientist, the parallel plane black body radiators above are believed to emit photons from every surface of each plane even in the presence of the other plane with a power per unit area of

P_H = σT_H⁴ and P_C = σT_C⁴,


just as they would if they were not near one another and they only cast off photons into a sink at T = 0 K.  This viewpoint takes the emitted radiation as a primary property of the surfaces rather than an electromagnetic field with a known energy density as the primary property of the surfaces.

Thus, when these planes are in one another's presence this consensus viewpoint says that

P_C = AP_CO + AP_CI − AP_HI = 2AσT_C⁴ − AσT_H⁴


P_H = AP_HO + AP_HI − AP_CI = 2AσT_H⁴ − AσT_C⁴


Note that P_C becomes zero at T_C = 0.8409 T_H, and below that temperature P_C is negative, that is, a cooling power in addition to the radiative cooling.  If T_H = 288 K, then T_C = 242.2 K.  That is the effective radiative temperature of the cooler plane to space if the cooler plane is thought of as the atmosphere, the warmer plane is the surface of the Earth, both the atmosphere and the surface act like black body radiators, and the atmosphere receives only radiant energy from the surface.  This is in agreement with calculations I have presented in the past and is a result which I believe to be correct under these assumptions, even though in some critical respects this consensus viewpoint is wrong.
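
The crossover temperature follows from setting 2σT_C⁴ = σT_H⁴.  Here is a one-line check of that arithmetic, a sketch of my own using the same assumed 288 K surface temperature:

# The consensus-view P_C = 2*A*sigma*T_C^4 - A*sigma*T_H^4 vanishes when
# T_C = (1/2)**0.25 * T_H.
T_H = 288.0                    # assumed surface temperature, K
ratio = 0.5 ** 0.25            # = 0.8409...
T_C = ratio * T_H
print(f"ratio = {ratio:.4f}")  # 0.8409
print(f"T_C   = {T_C:.1f} K")  # about 242.2 K, the value cited above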


If these planes were isolated from one another and each plane faced only that T = 0 K vacuum, then one would have


e_H = aT_H⁴ and e_C = aT_C⁴,


because these are black body radiator surfaces.  P_HI, when the hotter plane is surrounded by a T = 0 K environment, provides the photon flow near the emitting surface which causes the local energy density to be

e_H = aT_H⁴

in this case.

Unlike the case of concentric spherical shells, which I considered in Thermal Radiation Basics and Their Violation by the Settled Science of the Catastrophic Man-Made Global Warming Hypothesis, there is no divergence or convergence of the photons emitted from either surface.  The relationship of the radiative power P to the energy density due to that electromagnetic radiation is always e = (a/σ) P as one traverses the distance between the planes.  

Consequently, the energy density between the planes is


e = (a/σ) P_HI + (a/σ) P_CI = aT_H⁴ + aT_C⁴


anywhere between the two planes, because photons have energy no matter which direction they are traveling and they do not annihilate one another based on their direction of travel.  The total energy between the planes is that of the electromagnetic field, or equivalently the sum of the energy of all the photons in the space between the planes.  The energy density e would then be the total energy of all the photons divided by the volume of vacuum between the planes.


Now, let us imagine that these planes are very close together and the ends are far away and nearly closed.  Let us have T_C → T_H, then


e → 2aT_H⁴,


but this space between the planes is now a black body cavity in the limit that T_C → T_H, and we know by Stefan's Law that


e = aT_H⁴


in this case.  In addition, we have created a black body cavity radiator here and P for the walls inside the cavity is actually zero because the interior is in a state of equilibrium and constant energy density.  P is only P = σT⁴ just outside the peephole facing an environment at T = 0 K.  In the above consensus viewpoint case, each plane surface is emitting real photons, but these cannot annihilate the photons of the opposite plane.  There are no negative energy photons.  These respective photon streams simply add to the total energy density.


The consensus treatment of black body thermal radiation doubles the energy density in a black body cavity, in clear violation of the principal characteristic of a black body cavity upon which their treatment must be based.  Their treatment greatly increases the energy density between the planes whenever T_C is anywhere near T_H, as is the case for the temperatures in the lower troposphere compared to the Earth's surface temperature.  Consequently, the sum of P_HI and P_CI must be much smaller than it is thought to be in the consensus treatment of this problem or in the similar concentric spherical shell problem.
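
Here is a short sketch of my own illustrating this doubling numerically; the particular T_C values are only illustrative:

# Under the consensus treatment the energy density between the planes is
# a*(T_H^4 + T_C^4), which tends to 2*a*T_H^4 as T_C -> T_H, twice the
# Stefan's Law cavity value a*T_H^4 at that temperature.
a = 7.57e-16          # Stefan's constant, J/(m^3 K^4)
T_H = 288.0           # K

for T_C in (242.2, 270.0, 288.0):        # illustrative temperatures of the cooler plane
    e_consensus = a * (T_H**4 + T_C**4)  # both photon streams counted
    e_stefan = a * T_H**4                # cavity energy density required by Stefan's Law when T_C = T_H
    print(f"T_C = {T_C:5.1f} K   e_consensus / e_stefan = {e_consensus / e_stefan:.2f}")
# the ratio approaches 2.00 as T_C approaches T_H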


Let us now examine the correct solution to this parallel plane black body radiators problem.  It is the electromagnetic field between the two planes that governs the flow of electromagnetic energy between the planes.  Or one can say it is the energy density at each plane surface that drives the exchange of energy between the planes due to the energy density gradient between the two planes.  The critical and driving parameter here is


Δe = e_H − e_C = aT_H⁴ − aT_C⁴,


where each black body radiator surface maintains its black body radiator requirement that the energy density at the surface is given by Stefan’s Law.


Electromagnetic energy flows from the high energy surface to the low energy surface, as is the case in energy flows generally.


P_HI = (σ/a) Δe = (σ/a)(aT_H⁴ − aT_C⁴) = σT_H⁴ − σT_C⁴


P_CI = 0,


which is consistent with experimental measurements of the rate of radiant heat flow between two black body radiators.  Note that as T_C approaches T_H, P_HI approaches zero, as should be the case inside a black body cavity in thermal equilibrium.  There is no thermal emission from either of the black body cavity walls then.  Note also that when T_C = 0 K, P_HI is given by the Stefan-Boltzmann Law


P_HI = σT_H⁴.


Let us recalculate P_H and P_C in this correct formulation of the problem:


P_C = AP_CO + AP_CI − AP_HI = AσT_C⁴ + 0 − [AσT_H⁴ − AσT_C⁴] = 2AσT_C⁴ − AσT_H⁴


P_H = AP_HO + AP_HI − AP_CI = AσT_H⁴ + [AσT_H⁴ − AσT_C⁴] − 0 = 2AσT_H⁴ − AσT_C⁴


And we see that the power inputs each plane needs to maintain its temperature, as it cools itself by thermal radiation, are unchanged in this correct, energy density or electromagnetic field centered viewpoint from the consensus viewpoint.  Experimentally, the relationships between the power inputs to the thermal radiation emitting planes at given temperatures are exactly the same.  This fact causes the proponents of the consensus viewpoint to believe they are right, but their treatment nonetheless violates the energy density requirements of electromagnetic fields and of black body radiation itself.
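
To make this concrete, here is a small Python sketch of my own; the 288 K and 270 K temperatures are merely illustrative choices.  It shows that the required power inputs come out identical in the two formulations, even though the photon fluxes between the planes differ:

# Compare the consensus view with the energy-density (field) view argued here.
sigma, A = 5.67e-8, 1.0          # Stefan-Boltzmann constant, W/(m^2 K^4); unit plate area, m^2
T_H, T_C = 288.0, 270.0          # illustrative temperatures, K

# Consensus view: each inner surface emits sigma*T^4 regardless of the other plane.
P_HI_cons, P_CI_cons = sigma * T_H**4, sigma * T_C**4
P_H_cons = A*sigma*T_H**4 + A*P_HI_cons - A*P_CI_cons
P_C_cons = A*sigma*T_C**4 + A*P_CI_cons - A*P_HI_cons

# Field view: only the warmer inner surface emits, at the rate set by the energy density difference.
P_HI_field, P_CI_field = sigma * (T_H**4 - T_C**4), 0.0
P_H_field = A*sigma*T_H**4 + A*P_HI_field - A*P_CI_field
P_C_field = A*sigma*T_C**4 + A*P_CI_field - A*P_HI_field

print(P_H_cons, P_H_field)   # equal: 2*A*sigma*T_H^4 - A*sigma*T_C^4
print(P_C_cons, P_C_field)   # equal: 2*A*sigma*T_C^4 - A*sigma*T_H^4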

Because P_CI = 0, back radiation from a cooler atmosphere to the surface is also zero and not 100% of the top of the atmosphere solar insolation as in the current NASA Earth Energy Budget.  Because P_HI = σT_H⁴ − σT_C⁴, the Earth's surface does not radiate 117% of the top of the atmosphere insolation either.  These radiation flows are hugely exaggerated by NASA and in similar Earth Energy Budgets presented in the UN IPCC reports.  See the NASA Earth Energy Budget below:





This is very important because reducing these two radiant energy flows of infrared photons reduces the effect of infrared-active gases, the so-called greenhouse gases, drastically.  Many fewer photons are actually available to be absorbed or emitted by greenhouse gases than they imagine.  This is a principal error that should cause the global climate computer models to greatly exaggerate the effects of the greenhouse gases, just as they have.

As I have pointed out in the past, one fatal consequence of the exaggeration of thermal radiation from the surface of the Earth is readily calculated from the fact that even if the atmosphere acted as a black body absorber, which it does not, it could only absorb the thermal radiation said to be emitted from the surface if it were at an absurdly low temperature.  Observe that the power of infrared radiation from the surface which is absorbed by the atmosphere, P_SA, is


P_SA = σ(T_S⁴ − T_A⁴)


P_SA = (1.17 − 0.12)(340 W/m²)


The first equation is from the discussion above, with T_S the surface temperature and T_A the temperature of the atmosphere.  The second is according to the NASA Earth Energy Budget above.  If one takes T_S to be 288 K, then the temperature of the atmosphere required to absorb as much infrared energy as NASA claims is absorbed is 155.4 K.  There is no such low temperature in the Earth's atmosphere.  To find so low a temperature, one has to go far out into the solar system, many times the radius of Earth's orbit.  That being the case, any such thermal radiation absorbed by matter at such a low temperature is as much lost in the Earth Energy Budget as is the power, equal to 12% of the solar insolation, which NASA says is emitted from the Earth's surface and passes through the atmospheric window into space.  It should be apparent to the reader that the NASA Earth Energy Budget is nonsense.
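
Readers can verify the 155.4 K figure for themselves.  Here is a brief sketch of my own using the numbers above:

# If the surface radiated 117% of the 340 W/m^2 insolation and 12% escaped
# through the atmospheric window, a black body atmosphere absorbing the
# difference via P_SA = sigma*(T_S^4 - T_A^4) would have to be near 155 K.
sigma = 5.67e-8                   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_S = 288.0                       # assumed surface temperature, K
P_SA = (1.17 - 0.12) * 340.0      # W/m^2 absorbed per the NASA budget

T_A = (T_S**4 - P_SA / sigma) ** 0.25
print(f"T_A = {T_A:.1f} K")       # about 155 K, far colder than any part of the atmosphere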

It is not at all surprising that physics adheres to a minimum total energy in the system and does not generate a superfluous stream of photons from the colder body to the warmer body and does not have any more photons flowing to the colder body from the warmer body than necessary.  By means of the electromagnetic field between these two planes, the photon emission of the planes is coupled and affected by the presence of the other plane.  This is in no way surprising for an electromagnetic field problem.  One needs to remember that photons are creatures of electromagnetic fields.  Opposing streams of photons do not annihilate one another to cancel out energy, they simply add their energies.  Treating them as though one stream has a negative energy and the other a positive energy is just a means to throw the use of the Conservation of Energy out the window.  That is too critical a principle of physics to be tossed out the window.


Extending the Solution to Gray Body Thermal Radiators and Other Real Materials:


Many real materials do not behave like black body radiators of thermal radiation.  Those that do not behave as black body materials radiate less than a black body radiator would.  Why would they radiate less?  This is because they do not create as high an electromagnetic field energy density at their surfaces as does a black body radiator.  From Stefan's Law for a black body radiator, the energy density at the surface is


e = aT⁴


but for a gray body radiator the energy density at each wavelength λ is


e(λ) = εaT⁴.


This means the energy density at any given wavelength is a constant fraction ε, with 0 < ε < 1, of that of a black body radiator.  In the general case, a material may not behave like either a black body or a gray body radiator.  For such a material

e(λ) = ε(λ)aT⁴,


where the fraction of the black body output at wavelength λ is variable.


An isolated material surrounded by vacuum and a T=0 K environment then has a power per unit area output of


P(λ) = εσT⁴ for a gray body, and ε is seen to be the emissivity, and


P(λ) = ε(λ)σT⁴, for a general material, such as carbon dioxide or water vapor, where the absorption and emission become variable fractions of that of a black body as a function of wavelength.


For our two parallel plates above, if both are gray bodies, then between the plates


Δe = e_H − e_C = ε_H aT_H⁴ − ε_C aT_C⁴


P_HI = (σ/a) Δe = ε_H σT_H⁴ − ε_C σT_C⁴


P_CI = 0.


Here we see that the emissivity which determines the electromagnetic field energy density at the surface is also playing the role of the absorptivity at the absorbing colder surface.  So of course, Kirchhoff’s Law of thermal radiation that the emissivity equals the absorptivity of a material in a steady state process applies.  There is really nothing at all to prove if one starts with the primary fact and boundary condition that the energy density is the fundamental driver of the thermal radiation of materials.
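
As a concrete illustration, here is a brief sketch of my own; the temperatures and emissivities are assumed values for illustration only, not measurements:

# Gray-body version of the inter-plane flow: each surface's emissivity scales the
# energy density it creates, and the colder plane contributes no separate back flow.
sigma = 5.67e-8               # Stefan-Boltzmann constant, W/(m^2 K^4)
T_H, T_C = 288.0, 270.0       # illustrative temperatures, K
eps_H, eps_C = 0.95, 0.90     # illustrative emissivities

P_HI = eps_H * sigma * T_H**4 - eps_C * sigma * T_C**4
P_CI = 0.0
print(f"P_HI = {P_HI:.1f} W/m^2")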

28 February 2018

Will a Warmer Earth Really be a Drier Earth?


According to a recent Popular Science video and article, a 2°C increase in temperature will cause the Earth to be a lot drier, but one which will also have increased “extreme precipitation” events, such as the National Climate Assessment says have occurred “in every region of the contiguous states since the 1950s.”  It claims that “droughts and heat waves have also intensified, as is evident in California, which in recent years has seen less rain, drier soil, and the spread of wildfires.”  The United Nations subscribes to this same viewpoint.

It fails to note that for a few hundred years prior to 200 years ago, California was also drier than it has been in the last couple hundred years.  It does note that the world’s surface is 70% ocean.  It fails to note that a 2°C temperature increase would cause more water to evaporate from the oceans, which has to be cycled back to the surface as more rain and snow.  It fails to note that much of the warmer land on the Earth’s surface is covered with rainforest, which is hardly dry.  It fails to note that the driest land areas on the Earth are those in very cold regions near the poles.  Is it not logical that warming the polar regions might make them less dry?  And one must not forget that this is the water planet, with 71% of its surface covered with water, both ocean and fresh water.

The video and article state that:
The last time the Earth was as warm as it is now was over 11,000 years ago. Oceans covers 70% of our planet, and it takes a lot of energy to heat up that much water, not to mention the air and land. So a two-degree increase in the average global temperature means that temperature increases across the board are a lot more than 2°C.
This statement ignores the Medieval Warm Period, the Roman Warming, and the Minoan Warm Period, which were as warm as, or warmer than, the present moment -- a moment brief as yet compared to those much longer periods and one not yet clearly established as climate rather than just weather.  It was also not observed that when California was previously drier than in the last couple hundred years, the Little Ice Age was underway.

And what does that foolish second sentence in the quote above mean?  The average is the across the board temperature.  Yes, if the Earth were to warm, the warming would be greater in some areas than other areas.  There is a tendency for the warming to be greater over land areas and to be greater where the temperature is colder than where it is warmer.  The tropics would not warm up as much as would areas of land at higher latitudes because water evaporation tends to limit the temperature increase.  However, the second sentence in the quote does not say this.  It is nonsense.

There are many natural cycles that cause the climate to change.  As I have shown in numerous articles on this blog, the physics used to claim catastrophic effects due to man-made global warming caused by carbon dioxide emissions is very wrong.  There is as yet no empirical or experimental evidence that further increases in the concentration of carbon dioxide in the atmosphere will cause significant global warming.  It may well actually cause an insignificant cooling for reasons I have discussed a number of times.

What we do know for sure is that more carbon dioxide in the atmosphere will greatly aid plant growth, making it easier for us to feed a growing human population.  Unlike many of the anti-human proponents of the idea that man is destroying the Earth, I think a growing human population is a good thing, at least if we can see to it that most of them are free to use their minds and free to be productive.

Apparently, the increased rain events noted in this foolish article and video are to be the precipitation of anhydrous water, which will create terrible droughts.

Is it perhaps the case that part of the reason the Industrial Revolution got underway when it did is because the Little Ice Age was ending?  Warming on Earth is usually a good thing for mankind, not a bad thing.  But note that those hawking alarmist and catastrophic man-made effects on climate love to start the reference clock at the end of the Little Ice Age.  We are still warming as a result of the end of that cooling period primarily due to the large heat capacity of our oceans.

Thanks to Prof. Howard “Cork” Hayden for bringing this article and video to my attention.

17 February 2018

Overheated claims on temperature records by Dr. Tim Ball and Tom Harris

It’s time for sober second thoughts on climate alarms

Now that the excitement has died down over the news that Earth’s surface temperature made 2017 one of the hottest years on record, it is time for sober second thoughts.
Did the January 18 announcement by the National Oceanic and Atmospheric Administration (NOAA) that 2017 was our planet’s third-hottest year since 1880, and NASA’s claim that it was the second hottest year, actually mean anything?
Although the Los Angeles Times called 2017 “a top-three scorcher for planet Earth,” neither the NOAA nor the NASA records are significant. One would naturally expect the warmest years to come during the most recent years of a warming trend. And thank goodness we have been in a gradual warming trend since the depths of the Little Ice Age in the late 1600s! Back then, the River Thames was covered by a meter of ice, as Jan Griffier’s 1683 painting “The Great Frost” illustrates.
Regardless, recent changes have been too small for even most thermometers to notice. More important, they are often less than the government’s estimates of uncertainty in the measurements. In fact, we lack the data to properly and scientifically compare today’s temperatures with the past.
This is because, until the 1960s, surface temperature data was collected using mercury thermometers located at weather stations situated mostly in the United States, Japan, the United Kingdom and eastern Australia. Most of the rest of the planet had very few temperature sensing stations. And none of the Earth’s oceans, which constitute 70 percent of the planet’s surface area, had more than an occasional station separated from its neighbors by thousands of kilometers or miles.
The data collected at the weather stations in this sparse grid had, at best, an accuracy of +/-0.5 degrees Celsius (0.9 degrees Fahrenheit). In most cases, the real-world accuracy was no better than +/-1 deg C (1.8 deg F). Averaging such poor data in an attempt to determine global conditions cannot yield anything meaningful. Displaying average global temperature to tenths or even hundredths of a degree, as is done in the NOAA and NASA graphs, clearly defies common sense.
Modern weather station surface temperature data is now collected using precision thermocouples. But, starting in the 1970s, less and less ground surface temperature data was used for plots such as those by NOAA and NASA. This was done initially because governments believed satellite monitoring could take over from most of the ground surface data collection.
However, the satellites did not show the warming forecast by computer models, which had become so crucial to climate studies and energy policy-making. So bureaucrats closed most of the colder rural surface temperature sensing stations – the ones furthest from much warmer urban areas – thereby yielding the warming desired for political purposes.
Today, virtually no data exist for approximately 85 percent of the earth’s surface. Indeed, fewer weather stations are in operation now than in 1960.
That means surface temperature computations by NOAA and NASA after about 1980 are meaningless. Combining this with the problems with earlier data renders an unavoidable conclusion: It is not possible to know how Earth’s so-called average surface temperature has varied over the past century and a half.
The data is therefore useless for input to the computer models that form the basis of policy recommendations produced by the United Nations Intergovernmental Panel on Climate Change (IPCC) and used to justify eliminating fossil fuels, and replacing them with renewable energy.
But the lack of adequate surface data is only the start of the problem. The computer models on which the climate scare is based are mathematical constructions that require the input of data above the surface, as well as on it. The models divide the atmosphere into cubes piled on top of each other, ideally with wind, humidity, cloud cover and temperature conditions known for different altitudes. But we currently have even less data above the surface than on it, and there is essentially no historical data at altitude.
Many people think the planet is adequately covered by satellite observations, data that represents global 24/7 coverage and is far more accurate than anything determined at weather stations. But the satellites are unable to collect data from the north and south poles, regions that the IPCC, NOAA and NASA tout as critical to understanding global warming. Besides, space-based temperature data collection did not start until 1979, and 30 years of weather data are required to generate a single data point on a climate graph.
So the satellite record is far too short to allow us to come to useful conclusions about climate change.
In fact, there is insufficient data of any kind – temperature, land and sea ice, glaciers, sea level, extreme weather, ocean pH,  and so on – to be able to determine how today’s climate differs from the past. Lacking such fundamental data, climate forecasts cited by climate activists therefore have no connection with the real world.
British Professor Hubert Lamb is often identified as the founder of modern climatology. In his comprehensive 1972 treatise, Climate: Past, Present and Future, he clearly showed that it is not possible to understand climate change without having vast amounts of accurate weather data over long time frames. Lamb also noted that funding for improving the weather database was dwarfed by money being spent on computer models and theorizing. He warned that this would result in wild and unsubstantiated theories and assertions, while predictions failed to improve. That is precisely what happened.
Each and every prediction made by the computer models cited by the IPCC has turned out to be incorrect. Indeed, the first predictions they made for the IPCC’s 1990 Assessment Report were so wrong that the panel started to call them “projections” and offered low, medium and high “confidence” ranges for future guesstimates, which journalists, politicians and others nevertheless treated as reliable predictions for future weather and climate.
IPCC members seemed to conclude that, if they provided a broad enough range of forecasts, one was bound to be correct. Yet, even that was too optimistic. All three ranges predicted by the IPCC have turned out to be wrong.
US Environmental Protection Agency (EPA) Administrator Scott Pruitt is right to speak about the need for a full blown public debate among scientists about the causes and consequences of climate change. In his February 6 television interview on KSNV, an NBC affiliate in Las Vegas, Mr. Pruitt explained:
“There are very important questions around the climate issue that folks really don’t get to. And that’s one of the reasons why I’ve talked about having an honest, open, transparent debate about what do we know, and what don’t we know, so the American people can be informed and they can make decisions on their own with respect to these issues.”
On January 30, Pruitt told the Senate Environment and Public Works Committee that a “red team-blue team exercise” (an EPA-sponsored debate between climate scientists holding differing views) is under consideration. It is crucially important that such a debate take place.
The public needs to understand that even the most basic assumptions underlying climate concerns are either in doubt or simply wrong. The campaign to force America, Canada, Europe and the rest of the world to switch from abundant and affordable coal and other fossil fuels – to expensive, unreliable, land intensive alternatives – supposedly to control Earth’s always fluctuating climate, will then finally be exposed for what it really is: the greatest, most damaging hoax in history.

Dr. Tim Ball is an environmental consultant and former climatology professor at the University of Winnipeg in Manitoba. Tom Harris is executive director of the Ottawa, Canada-based International Climate Science Coalition.

My Note:
I added the yellow highlighting.  This is a point I have also long made.  I will add another point: much of the historical data has been "corrected" in recent times, and the corrections are very substantial compared to the temperature trends and somehow almost always make the older temperatures colder.  If the older data really does need such large corrections, then the older data is worthless as scientific data and should be treated as such.  There is no point in making the corrections on such a wobbly, uncertain base.