Posted by: Grant | April 7, 2012

The ARGO Floats Debacle.

So what do you do when you spend millions and millions putting 3,500 high-tech buoys in the oceans, read them by satellite, and “publish the data on the internet for everyone to see” to show how the globe is warming – and it shows no warming at all – zero, zilch, nada – in fact, a slight cooling?
 
Simple: you obfuscate the data, bury it as deep as possible so it is almost inaccessible, and hope to hell that your political masters will not cut your funding.
 
This scientist relates the trouble he had digging out the data and goes on to try to interpret it.
WUWT
Where in the World is Argo?
Posted on February 6, 2012
by Willis Eschenbach
“The Argo floats are technical marvels. They float around below the surface of the ocean, about a kilometre down, for nine days. On the tenth day, they rise slowly to the surface, sampling the pressure, temperature, and salinity as they go. When they reach the surface, they radio home like ET, transmit the data from the instrumental profiles, and they drop back down into the eternal darkness for another nine days. The first Argo floats were put into the ocean in the year 2000. In 2007, the goal of 3,000 floats was achieved …”
 
“…So … I set out to take a look at where the Argo floats have sampled and where they haven’t sampled. I didn’t realize what I was getting into.
 
It’s a dang nuisance to get all the Argo files. They are available at the Global Argo Data Repository. The problem is that there have been over 8,000 individual Argo floats … and the rocket scientists at NOAA have decided to make the data available as one file per float, eight thousand individual files … grrr …
So I thought it might be of interest to describe how I went about getting the data. I haven’t gotten all of it, at the moment I’m somewhere between 5,000 and 6,000 files downloaded.
The first step in the process is to get the URL addresses of all of the files, which are shown on the web page at the link given above. To get the text of the URLs, remember that these are all listed in the “source” file that created that web page. Under the “View” menu (on the Mac, at least) you have a choice called “View Source”. This “source file” is a text file that contains the HTML information on how to make up the page, including all of the URLs of all the links on the page.
So … the first file listed on the web page is “IF000550”. I searched the source file for that; it’s at the start of the table. A similar search for “9018420”, the last file listed on the page, found me the end of the table.
I copied all of that information, from the start to the end of the table, out of the “Source” document and pasted it into a text processor. The end of each link is marked by the HTML close code </a>. I did a global search for that code and replaced every occurrence with a carriage return (“^p” in Microsoft Word). That left the text broken into short lines suitable for pasting into Excel.
So I copied all of the resulting text and pasted it into Excel. From there it was easy to sort the lines. I wanted the lines containing addresses that looked like http://www.nodc.noaa.gov/argo/data/coriolis/7900073.tgz – these are the files with the actual float-by-float temperature profiles. I sorted them out; there were about 8,500 of them.
That gave me the list of all of the URLs of the files I was interested in. I saved those as a comma-delimited file, and opened it using the computer language “R”.
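[Ed. – For anyone repeating the view-source, Word and Excel steps above today, the whole dance can be collapsed into a couple of lines. This is a sketch in Python rather than the author's R, and the HTML fragment is an assumed example of the repository page's table layout, not the real page:]

```python
import re

# An assumed fragment of the repository page's HTML source --
# the real NOAA page lists roughly 8,500 such links in a table.
html = '''
<tr><td><a href="http://www.nodc.noaa.gov/argo/data/coriolis/7900073.tgz">7900073</a></td></tr>
<tr><td><a href="http://www.nodc.noaa.gov/argo/data/coriolis/7900074.tgz">7900074</a></td></tr>
'''

# One regular expression replaces the search-and-replace in Word and the
# sort in Excel: capture every href that points at a .tgz profile archive.
urls = re.findall(r'href="(http://www\.nodc\.noaa\.gov/argo/[^"]+\.tgz)"', html)

for url in urls:
    print(url)
```

The same one-liner applied to the saved page source yields the full URL list ready for the download step.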
Using R, I was then able to automate the download process, having the computer download the files one after another. The one thing you need to do is leave gaps between your file requests. If you just request one file after another with no pause, you may be mistaken for a denial-of-service (DoS) attack on their server. So I put in a half-second pause after every five downloads. That adds about 13 minutes to 8,000+ downloads – not bad.
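[Ed. – The pacing described above is easy to reproduce. A minimal sketch in Python (the author used R); the `fetch` callable is a stand-in for whatever actually retrieves a file, e.g. `urllib.request.urlretrieve`:]

```python
import time

def download_floats(urls, fetch, pause_every=5, pause_secs=0.5):
    """Fetch each URL in turn, pausing briefly every few requests
    so the server does not mistake the run for a DoS attack."""
    for i, url in enumerate(urls, start=1):
        fetch(url)                      # e.g. urllib.request.urlretrieve
        if i % pause_every == 0:
            time.sleep(pause_secs)

# Example use with a real fetcher (saving each archive by its filename):
# import urllib.request
# download_floats(url_list,
#                 lambda u: urllib.request.urlretrieve(u, u.split("/")[-1]))
```

With the stated numbers, 8,000 files at one 0.5 s pause per five downloads is 1,600 pauses, or about 13 minutes of added wait.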
So that’s how I’m doing it. Once I get it all downloaded, I’ll put it together in some more reasonable format and stick it back out on the web, so people won’t have to go through that madness for the data …”
WUWT
Argo Notes the Third
Posted on February 29, 2012
by Willis Eschenbach
WUWT
Argo, Latitude, Day, and Reynolds Interpolation
Posted on March 5, 2012
by Willis Eschenbach
The results so far are not spectacular one way or the other; the Earth’s temperature is remarkably stable and self-regulating. But one thing is sure: if there had been a significant rise in ocean temperature in the Argo data, there would have been great fanfare, the data would not have been buried, and it would not have been necessary to bring out the following piece of pseudo-scientific propaganda.
SCRIPPS INSTITUTION OF OCEANOGRAPHY
New Comparison of Ocean Temperatures Reveals Rise over the Last Century
Debunked here…
WUWT
300 soundings from 19th century compared to Argo data
Posted on April 2, 2012
by Anthony Watts
“From the University of California – San Diego’s Scripps Institution of Oceanography – you gotta love the subheading in this PR. I didn’t know robots could travel back in time. Gosh, I learn something new every day. Apparently 300 soundings done by the HMS Challenger between 1872 and 1876 are enough to establish a “new global baseline” for the last century …”