%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 3)
plt.rcParams['font.family'] = 'sans-serif'
By the end of this chapter, we're going to have downloaded all of Canada's weather data for 2012, and saved it to a CSV.
We'll do this by downloading it one month at a time, and then combining all the months together.
Here's the temperature every hour for 2012!
data_url = "https://sciencedata.dk/public/6e3ed434c0fa43df906ce2b6d1ba9fc6/pandas-cookbook/data/weather_2012.csv"
weather_2012_final = pd.read_csv(data_url, index_col='Date/Time')
weather_2012_final['Temp (C)'].plot(figsize=(15, 6))
When playing with the cycling data, I wanted temperature and precipitation data to find out if people like biking when it's raining. So I went to the site for Canadian historical weather data, and figured out how to get it automatically.
Here we're going to get the data for March 2012, and clean it up.
Here's a URL template you can use to get data for Montreal.
url_template = "https://climate.weather.gc.ca/climate_data/bulk_data_e.html?format=csv&stationID=5415&Year={year}&Month={month}&Day=1&time=&timeframe=2&submit=Download+Data"
To get the data for March 2012, we need to format it with month=3, year=2012.
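As a quick aside, this is just Python's str.format filling named placeholders. A toy sketch with a made-up URL (not the real weather endpoint):

```python
# Hypothetical template -- the {year} and {month} placeholders get
# filled in by name, in any order
template = "https://example.com/data?Year={year}&Month={month}"
url = template.format(year=2012, month=3)
print(url)  # https://example.com/data?Year=2012&Month=3
```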
url = url_template.format(month=3, year=2012)
weather_mar2012 = pd.read_csv(url, skiprows=0, index_col='Date/Time', parse_dates=True)
This is super great! We can just use the same read_csv function as before, and just give it a URL as a filename. Awesome.
Some versions of this CSV ship with rows of metadata at the top, but pandas knows CSVs are weird, so there's a skiprows option for exactly that (the file we're downloading here has none, hence skiprows=0). We parse the dates again, and set 'Date/Time' to be the index column. Here's the resulting dataframe.
weather_mar2012
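To see what those read_csv options do without downloading anything, here's a toy sketch on a tiny made-up CSV (the column names just mimic the weather file):

```python
import io
import pandas as pd

# A tiny fake CSV with two junk metadata lines at the top
raw = """junk line 1
junk line 2
Date/Time,Temp (C)
2012-03-01,-5.5
2012-03-02,-2.0
"""
# skiprows=2 skips the metadata, parse_dates=True turns the index
# into real datetimes, index_col picks the index column
df = pd.read_csv(io.StringIO(raw), skiprows=2,
                 index_col='Date/Time', parse_dates=True)
print(df.index.dtype)              # datetime64[ns]
print(df['Temp (C)'].tolist())     # [-5.5, -2.0]
```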
Let's plot it!
weather_mar2012[u"Mean Temp (°C)"].plot(figsize=(15, 5))
Notice how it goes up to 25° C in the middle there? That was a big deal. It was March, and people were wearing shorts outside.
And I was out of town and I missed it. Still sad, humans.
Let's fix up the columns. We're going to just print them out, copy, and fix them up by hand.
weather_mar2012[:5]
You'll notice in the summary above that there are a few columns which are either entirely empty or only have a few values in them. Let's get rid of all of those with dropna.
The argument axis=1 to dropna means "drop columns, not rows", and how='any' means "drop the column if any value is null".
This is much better now -- we only have columns with real data.
weather_mar2012 = weather_mar2012.dropna(axis=1, how='any')
weather_mar2012[:5]
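Here's the same dropna call on a small made-up frame, so you can see exactly which columns survive:

```python
import numpy as np
import pandas as pd

# Fake frame: one full column, one with a single hole, one entirely empty
df = pd.DataFrame({'full': [1, 2, 3],
                   'holey': [1, np.nan, 3],
                   'empty': [np.nan, np.nan, np.nan]})
# how='any': a column is dropped if ANY of its values is null
cleaned = df.dropna(axis=1, how='any')
print(list(cleaned.columns))  # ['full']
```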
The Year/Month/Day columns are redundant, though. Let's get rid of those.
The axis=1 argument means "drop columns", like before. The default for operations like dropna and drop is always to operate on rows.
weather_mar2012 = weather_mar2012.drop(['Year', 'Month', 'Day'], axis=1)
weather_mar2012[:5]
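A minimal sketch of drop with axis=1, on a one-row fake frame shaped like the weather data:

```python
import pandas as pd

# Made-up frame with redundant date-part columns, like the weather CSV
df = pd.DataFrame({'Year': [2012], 'Month': [3], 'Day': [1],
                   'Temp (C)': [4.4]})
df = df.drop(['Year', 'Month', 'Day'], axis=1)  # axis=1: drop columns
print(list(df.columns))  # ['Temp (C)']
```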
Awesome! We now only have the relevant columns, and it's much more manageable.
This one's just for fun -- we've already done this before, using groupby and aggregate! But let's do it anyway.
snow = weather_mar2012[[u'Snow on Grnd (cm)']].copy()
snow.loc[:, 'Mean Temp (°C)'] = weather_mar2012[u'Mean Temp (°C)']
snow.groupby('Mean Temp (°C)').aggregate('median').plot()
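The same groupby-then-median pattern on a tiny invented frame (the numbers are made up):

```python
import pandas as pd

# Fake observations: snow depth (cm) at repeated temperatures
df = pd.DataFrame({'temp': [-5, -5, 0, 0, 5],
                   'snow': [10, 20, 4, 6, 0]})
# One median snow depth per distinct temperature
medians = df.groupby('temp')['snow'].aggregate('median')
print(medians.tolist())  # [15.0, 5.0, 0.0]
```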
Okay, so what if we want the data for the whole year? Ideally the API would just let us download that, but I couldn't figure out a way to do that.
First, let's put our work from above into a function that gets the weather for a given month.
I noticed that there's an irritating bug where when I ask for January, it gives me data for the previous year, so we'll fix that too. [no, really. You can check =)]
def download_weather_month(year, month):
    if month == 1:
        # Work around the API bug: asking for January returns the previous year
        year += 1
    url = url_template.format(year=year, month=month)
    weather_data = pd.read_csv(url, skiprows=0, index_col='Date/Time', parse_dates=True)
    weather_data = weather_data.dropna(axis=1)
    weather_data = weather_data.drop(['Year', 'Day', 'Month'], axis=1)
    return weather_data
We can test that this function does the right thing:
download_weather_month(2012, 1)[:5]
Now we can get all the months at once. This will take a little while to run.
data_by_month = [download_weather_month(2012, i) for i in range(1, 13)]
Once we have this, it's easy to concatenate all the dataframes together into one big dataframe using pd.concat. And now we have the whole year's data!
weather_2012 = pd.concat(data_by_month)
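On two tiny made-up "monthly" frames, pd.concat just stacks the rows and keeps each frame's original index:

```python
import pandas as pd

# Fake January and February frames with matching columns
jan = pd.DataFrame({'Temp (C)': [-10.0, -8.0]},
                   index=['2012-01-01', '2012-01-02'])
feb = pd.DataFrame({'Temp (C)': [-6.0]}, index=['2012-02-01'])
year = pd.concat([jan, feb])  # rows from jan, then rows from feb
print(len(year))  # 3
```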
weather_2012
It's slow and unnecessary to download the data every time, so let's save our dataframe to ScienceData for later use! If we use the unqualified hostname sciencedata, we'll be using the internal network, and ssl will complain about the certificate not matching the host. We tell ssl to ignore this.
We save to a folder tmp. If you don't already have a folder by that name, just create it.
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
import requests
requests.put('https://sciencedata/files/tmp/weather_2012.csv', data=weather_2012.to_csv())
We can also save a copy to the pod we're running on, for faster access, but it'll be gone once we delete the pod.
weather_2012.to_csv('weather_2012.csv')
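One thing worth knowing: to_csv writes the index as the first column, so a round trip needs index_col on the way back in. A small sketch with invented data (using an in-memory buffer instead of a file):

```python
import io
import pandas as pd

df = pd.DataFrame({'Temp (C)': [1.5, 2.5]},
                  index=pd.Index(['2012-01-01', '2012-01-02'],
                                 name='Date/Time'))
csv_text = df.to_csv()  # index goes out as the 'Date/Time' column
back = pd.read_csv(io.StringIO(csv_text), index_col='Date/Time')
print(back.equals(df))  # True
```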
And we're done!