{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"toc_visible": true,
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"
"
]
},
{
"cell_type": "markdown",
"source": [
"# Tabular Data with Pandas"
],
"metadata": {
"id": "NvlIQdJBNcQb"
}
},
{
"cell_type": "markdown",
"source": [
""
],
"metadata": {
"id": "IB_v-hAJI7kU"
}
},
{
"cell_type": "markdown",
"source": [
"## Introduction"
],
"metadata": {
"id": "2X6fenL0JBGd"
}
},
{
"cell_type": "markdown",
"source": [
"[Pandas](http://pandas.pydata.org/) is a an open source library providing high-performance, easy-to-use data structures and data analysis tools. Pandas is particularly suited to the analysis of *tabular* data, i.e. data that can can go into a table. In other words, if you can imagine the data in an Excel spreadsheet, then Pandas is the tool for the job.\n",
"\n",
"A [2017 recent analysis](https://stackoverflow.blog/2017/09/06/incredible-growth-python/) of questions from Stack Overflow showed that python was the fastest growing and most widely used programming language in the world (in developed countries). As of 2021, the growth has now leveled off, but Python remains at the top."
],
"metadata": {
"id": "PYqB5tb5JDWa"
}
},
{
"cell_type": "markdown",
"source": [
""
],
"metadata": {
"id": "tlHlmomWJFxD"
}
},
{
"cell_type": "markdown",
"source": [
"[Link to generate your own version of this figure](https://insights.stackoverflow.com/trends?tags=java%2Cc%2Cc%2B%2B%2Cpython%2Cc%23%2Cvb.net%2Cjavascript%2Cassembly%2Cphp%2Cperl%2Cruby%2Cvb%2Cswift%2Cr%2Cobjective-c)"
],
"metadata": {
"id": "Jjtd8FOZVZ4G"
}
},
{
"cell_type": "markdown",
"source": [
"A [follow-up analysis](https://149351115.v2.pressablecdn.com/wp-content/uploads/2017/09/related_tags_over_time-1-2000x2000.png) showed that this growth is driven by the data science packages such as `numpy`, `matplotlib`, and especially `pandas`."
],
"metadata": {
"id": "sOLgO_LmVeQ1"
}
},
{
"cell_type": "markdown",
"source": [
""
],
"metadata": {
"id": "iGpnaehiWgUw"
}
},
{
"cell_type": "markdown",
"source": [
"The exponential growth of pandas is due to the fact that it *just works*. It saves you time and helps you do science more efficiently and effectively.\n",
"\n",
"**Pandas capabilities** (from the [Pandas website](https://pandas.pydata.org/)):\n",
"\n",
"* A fast and efficient DataFrame object for data manipulation with integrated indexing;\n",
"\n",
"* Tools for reading and writing data between in-memory data structures and different formats: CSV and text files, Microsoft Excel, SQL databases, and the fast HDF5 format;\n",
"\n",
"* Intelligent data alignment and integrated handling of missing data: gain automatic label-based alignment in computations and easily manipulate messy data into an orderly form;\n",
"\n",
"* Flexible reshaping and pivoting of data sets;\n",
"\n",
"* Intelligent label-based slicing, fancy indexing, and subsetting of large data sets;\n",
"\n",
"* Columns can be inserted and deleted from data structures for size mutability;\n",
"\n",
"* Aggregating or transforming data with a powerful group by engine allowing split-apply-combine operations on data sets;\n",
"\n",
"* High performance merging and joining of data sets;\n",
"\n",
"* Hierarchical axis indexing provides an intuitive way of working with high-dimensional data in a lower-dimensional data structure;\n",
"\n",
"* Time series-functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging. Even create domain-specific time offsets and join time series without losing data;\n",
"\n",
"* Highly optimized for performance, with critical code paths written in Cython or C."
],
"metadata": {
"id": "Y_SgZRBjW7v_"
}
},
{
"cell_type": "markdown",
"source": [
"In this notebook, we will go over the basic capabilities of Pandas. It is a very deep library, and you will need to dig into the [documentation](http://pandas.pydata.org/pandas-docs/stable/) for more advanced usage.\n",
"\n",
"`Pandas` was created by [Wes McKinney](http://wesmckinney.com/). Many of the examples here are drawn from Wes McKinney’s book [Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do), which includes \"a GitHub repo of [code samples](https://github.com/wesm/pydata-book)."
],
"metadata": {
"id": "6sMYdu2QXCM_"
}
},
{
"cell_type": "markdown",
"source": [
"## Pandas Data Structures: Series"
],
"metadata": {
"id": "GGW3COT4XIUC"
}
},
{
"cell_type": "code",
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"from matplotlib import pyplot as plt\n",
"%matplotlib inline"
],
"metadata": {
"id": "Mq93TFheW_eZ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"A `Series` represents a one-dimensional array of data. The main difference between a `Series` and `numpy` array is that a `Series` has an index. The index contains the labels that we use to access the data.\n",
"\n",
"There are many ways to create a `Series`. We will just show a few.\n",
"\n",
"(Data are from the NASA [Planetary Fact Sheet](https://nssdc.gsfc.nasa.gov/planetary/factsheet/).)"
],
"metadata": {
"id": "Wtss6opYXMZg"
}
},
{
"cell_type": "code",
"source": [
"names = ['Mercury', 'Venus', 'Earth']\n",
"values = [0.3e24, 4.87e24, 5.97e24]\n",
"masses = pd.Series(values, index=names)\n",
"masses"
],
"metadata": {
"id": "ijvF1SqDWy-d"
},
"execution_count": null,
"outputs": []
},
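{
"cell_type": "markdown",
"source": [
"A `Series` can also be built directly from a dictionary, in which case the keys become the index (a short sketch reusing the same masses):"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# A Series built from a dictionary; the keys become the index\n",
"masses_dict = {'Mercury': 0.3e24, 'Venus': 4.87e24, 'Earth': 5.97e24}\n",
"pd.Series(masses_dict)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},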
{
"cell_type": "code",
"source": [
"# Series have built in plotting methods.\n",
"masses.plot(kind='bar')"
],
"metadata": {
"id": "ASN3VDtuVexr"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Arithmetic operations and most `numpy` function can be applied to Series.\n",
"# An important point is that the Series keep their index during such operations.\n",
"np.log(masses) / masses**2"
],
"metadata": {
"id": "ZOwK63V_Vai0"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# We can access the underlying index object if we need to:\n",
"masses.index"
],
"metadata": {
"id": "X0yWlq9SJGjy"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Indexing"
],
"metadata": {
"id": "84PkoKpJXoL4"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xreaix9UpPl9"
},
"outputs": [],
"source": [
"# We can get values back out using the index via the `.loc` attribute\n",
"masses.loc['Earth']"
]
},
{
"cell_type": "code",
"source": [
"# Or by raw position using `.iloc`\n",
"masses.iloc[2]"
],
"metadata": {
"id": "lyOkxQcSI8KR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# We can pass a list or array to loc to get multiple rows back:\n",
"masses.loc[['Venus', 'Earth']]"
],
"metadata": {
"id": "oHiFdznBX2F1"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# And we can even use slice notation\n",
"masses.loc['Mercury':'Earth']"
],
"metadata": {
"id": "u-H-4oeiX6Xi"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"masses.iloc[:2]"
],
"metadata": {
"id": "RbTz0ySmX-v7"
},
"execution_count": null,
"outputs": []
},
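{
"cell_type": "markdown",
"source": [
"Note that `.loc` slicing is label-based and includes *both* endpoints, while `.iloc` slicing is position-based and excludes the stop, just like plain Python slices. A quick check:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# .loc includes both endpoints; .iloc excludes the stop position\n",
"print(len(masses.loc['Mercury':'Venus']))  # both labels included -> 2\n",
"print(len(masses.iloc[0:2]))               # positions 0 and 1    -> 2"
],
"metadata": {},
"execution_count": null,
"outputs": []
},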
{
"cell_type": "code",
"source": [
"# If we need to, we can always get the raw data back out as well\n",
"masses.values # a numpy array"
],
"metadata": {
"id": "a_zLQvhcX_7C"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"masses.index # a pandas Index object"
],
"metadata": {
"id": "MZAEJIMYYFV6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"\n",
"## Pandas Data Structures: DataFrame\n",
"\n",
"There is a lot more to Series, but they are limit to a single “column”. A more useful Pandas data structure is the DataFrame. A DataFrame is basically a bunch of series that share the same index. It’s a lot like a table in a spreadsheet.\n",
"\n",
"Below we create a DataFrame."
],
"metadata": {
"id": "YLhNymHYYI2i"
}
},
{
"cell_type": "code",
"source": [
"# first we create a dictionary\n",
"data = {'mass': [0.3e24, 4.87e24, 5.97e24], # kg\n",
" 'diameter': [4879e3, 12_104e3, 12_756e3], # m\n",
" 'rotation_period': [1407.6, np.nan, 23.9] # h\n",
" }\n",
"df = pd.DataFrame(data, index=['Mercury', 'Venus', 'Earth'])\n",
"df"
],
"metadata": {
"id": "FGrsJiZPYLX8"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Pandas handles missing data very elegantly, keeping track of it through all calculations."
],
"metadata": {
"id": "LrUGTTqiYOie"
}
},
{
"cell_type": "code",
"source": [
"df.info()"
],
"metadata": {
"id": "X6LjEWTQYPZN"
},
"execution_count": null,
"outputs": []
},
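{
"cell_type": "markdown",
"source": [
"For example, reductions skip missing values by default; we only get `NaN` back if we explicitly opt out with `skipna=False`:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Reductions skip NaN by default...\n",
"print(df.rotation_period.mean())\n",
"# ...unless we opt out with skipna=False\n",
"print(df.rotation_period.mean(skipna=False))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},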
{
"cell_type": "markdown",
"source": [
"A wide range of statistical functions are available on both `Series` and `DataFrames`."
],
"metadata": {
"id": "4PsXlX19YTo3"
}
},
{
"cell_type": "code",
"source": [
"print(df.min())\n",
"print(df.mean())\n",
"print(df.std())\n",
"print(df.describe())"
],
"metadata": {
"id": "0QR1vPqGYUaJ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# We can get a single column as a `Series` using python’s `getitem` syntax on the `DataFrame` object.\n",
"print(df['mass'])\n",
"# …or using attribute syntax.\n",
"print(df.mass)\n"
],
"metadata": {
"id": "qIcXPj1QYsn_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Indexing works very similar to series\n",
"print(df.loc['Earth'])\n",
"print(df.iloc[2])"
],
"metadata": {
"id": "Rogaf0zxY3G3"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# But we can also specify the column we want to access\n",
"df.loc['Earth', 'mass']"
],
"metadata": {
"id": "L9G5eVbRZEFk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"df.iloc[0:2, 0]"
],
"metadata": {
"id": "kkc7E7rtZINz"
},
"execution_count": null,
"outputs": []
},
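{
"cell_type": "markdown",
"source": [
"We can also select several columns at once by passing a list of column names, which returns a smaller `DataFrame`:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Passing a list of names returns a sub-DataFrame with just those columns\n",
"df[['mass', 'diameter']]"
],
"metadata": {},
"execution_count": null,
"outputs": []
},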
{
"cell_type": "markdown",
"source": [
"If we make a calculation using columns from the `DataFrame`, it will keep the same index:"
],
"metadata": {
"id": "eB9fNG2KZLQU"
}
},
{
"cell_type": "code",
"source": [
"volume = 4/3 * np.pi * (df.diameter/2)**3\n",
"df.mass / volume"
],
"metadata": {
"id": "HsxKxTY8ZM9A"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Which we can easily add as another column to the `DataFrame`:\n"
],
"metadata": {
"id": "3kfbEcbEZQ3E"
}
},
{
"cell_type": "code",
"source": [
"df['density'] = df.mass / volume\n",
"df"
],
"metadata": {
"id": "zozn_tQ5ZSXg"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"\n",
"## Merging Data\n",
"\n",
"Pandas supports a wide range of methods for merging different datasets. These are described extensively in the [documentation](https://pandas.pydata.org/pandas-docs/stable/merging.html). Here we just give a few examples."
],
"metadata": {
"id": "_3V-PHIUZWkg"
}
},
{
"cell_type": "code",
"source": [
"temperature = pd.Series([167, 464, 15, -65],\n",
" index=['Mercury', 'Venus', 'Earth', 'Mars'],\n",
" name='temperature')\n",
"temperature"
],
"metadata": {
"id": "nHIVyWZzZZIO"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# returns a new DataFrame\n",
"df.join(temperature)"
],
"metadata": {
"id": "S6ria6RrZNkN"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# returns a new DataFrame\n",
"df.join(temperature, how='right')"
],
"metadata": {
"id": "jbIevLN6ZgNk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# returns a new DataFrame\n",
"everyone = df.reindex(['Mercury', 'Venus', 'Earth', 'Mars'])\n",
"everyone"
],
"metadata": {
"id": "ZGhfcNh8Zj8G"
},
"execution_count": null,
"outputs": []
},
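{
"cell_type": "markdown",
"source": [
"Another common merging tool is `pd.concat`, which stacks pandas objects along an axis; with `axis='columns'` it aligns on the shared index, much like an outer join (a minimal sketch):"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# pd.concat with axis='columns' aligns df and temperature on their index;\n",
"# planets missing from df (Mars) get NaN in df's columns\n",
"pd.concat([df, temperature], axis='columns')"
],
"metadata": {},
"execution_count": null,
"outputs": []
},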
{
"cell_type": "markdown",
"source": [
"We can also index using a boolean series. This is very useful:"
],
"metadata": {
"id": "4DLgGLOJZsLf"
}
},
{
"cell_type": "code",
"source": [
"adults = df[df.mass > 4e24]\n",
"adults"
],
"metadata": {
"id": "pnn_GBPsZrbW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"df['is_big'] = df.mass > 4e24\n",
"df"
],
"metadata": {
"id": "KbZQtoTtZv_S"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Modifying Values\n",
"We often want to modify values in a dataframe based on some rule. To modify values, we need to use `.loc` or `.iloc`"
],
"metadata": {
"id": "Wtj5zuaeZyDx"
}
},
{
"cell_type": "code",
"source": [
"df.loc['Earth', 'mass'] = 5.98+24\n",
"df.loc['Venus', 'diameter'] += 1\n",
"df"
],
"metadata": {
"id": "fFu4dInkZ0ls"
},
"execution_count": null,
"outputs": []
},
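{
"cell_type": "markdown",
"source": [
"`.loc` also accepts a boolean mask, so we can modify many rows at once based on a rule. Here `classification` is a new illustrative column, not part of the original fact sheet:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Assign a value to every row matching a condition in one step;\n",
"# 'classification' is a new, illustrative column (non-matching rows get NaN)\n",
"df.loc[df.mass > 4e24, 'classification'] = 'big'\n",
"df"
],
"metadata": {},
"execution_count": null,
"outputs": []
},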
{
"cell_type": "markdown",
"source": [
"## Plotting\n",
"DataFrames have all kinds of [useful plotting](https://pandas.pydata.org/pandas-docs/stable/visualization.html) built in."
],
"metadata": {
"id": "h-NYMG6hZ7aB"
}
},
{
"cell_type": "code",
"source": [
"df.plot(kind='scatter', x='mass', y='diameter', grid=True)"
],
"metadata": {
"id": "7ICcC_LpZ41J"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"df.plot(kind='bar')"
],
"metadata": {
"id": "f4KjoyxWaBFI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Time `Index`es\n",
"Indexes are very powerful. They are a big part of why `Pandas` is so useful. There are different indexes for different types of data. Time indexes are especially great!"
],
"metadata": {
"id": "78Jz7ESBaDTd"
}
},
{
"cell_type": "code",
"source": [
"two_years = pd.date_range(start='2014-01-01', end='2016-01-01', freq='D')\n",
"timeseries = pd.Series(np.sin(2 *np.pi *two_years.dayofyear / 365),\n",
" index=two_years)\n",
"timeseries.plot()"
],
"metadata": {
"id": "b2H-XhCfaIBb"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We can use python’s slicing notation inside `.loc` to select a date range."
],
"metadata": {
"id": "glHNWdq7aOeu"
}
},
{
"cell_type": "code",
"source": [
"timeseries.loc['2015-01-01':'2015-07-01'].plot()"
],
"metadata": {
"id": "ZbtttsLNaO-5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"timeseries.index.month"
],
"metadata": {
"id": "fp1XdQZGaLKE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"timeseries.index.day"
],
"metadata": {
"id": "evlZm52MaUIE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"\n",
"## Reading Data Files: Weather Station Data\n",
"\n",
"In this example, we will use NOAA weather station data from https://www.ncei.noaa.gov/products/land-based-station.\n",
"\n",
"The details of files we are going to read are described in this README file."
],
"metadata": {
"id": "rernSiRmaaKB"
}
},
{
"cell_type": "code",
"source": [
"import pooch\n",
"POOCH = pooch.create(\n",
" path=pooch.os_cache(\"noaa-data\"),\n",
" base_url=\"doi:10.5281/zenodo.5564850/\",\n",
" registry={\n",
" \"data.txt\": \"md5:5129dcfd19300eb8d4d8d1673fcfbcb4\",\n",
" },\n",
")\n",
"datafile = POOCH.fetch(\"data.txt\")\n",
"datafile"
],
"metadata": {
"id": "sTQ__M_WadR6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"! head '/root/.cache/noaa-data/data.txt' # Replace this value with the download path indicated above"
],
"metadata": {
"id": "RUkIldI1ahke"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We now have a text file on our hard drive called `data.txt`.\n",
"\n",
"To read it into `pandas`, we will use the `read_csv` function. This function is incredibly complex and powerful. You can use it to extract data from almost any text file. However, you need to understand how to use its various options.\n",
"\n",
"With no options, this is what we get."
],
"metadata": {
"id": "gPrOqbcbalHI"
}
},
{
"cell_type": "code",
"source": [
"df = pd.read_csv(datafile)\n",
"df.head()"
],
"metadata": {
"id": "XY11UI7nanCm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Pandas failed to identify the different columns. This is because it was expecting standard [`CSV` (comma-separated values)](https://en.wikipedia.org/wiki/Comma-separated_values) file. In our file, instead, the values are separated by whitespace. And not a single whilespace–the amount of whitespace between values varies. We can tell pandas this using the `sep` keyword."
],
"metadata": {
"id": "qlo3SNBnap5p"
}
},
{
"cell_type": "code",
"source": [
"df = pd.read_csv(datafile, sep='\\s+')\n",
"df.head()"
],
"metadata": {
"id": "isLmxGlAaqdW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Great! It worked.\n",
"\n",
"If we look closely, we will see there are lots of -99 and -9999 values in the file. The README file tells us that these are values used to represent missing data. Let’s tell this to pandas."
],
"metadata": {
"id": "hstDfiC0au3T"
}
},
{
"cell_type": "code",
"source": [
"df = pd.read_csv(datafile, sep='\\s+', na_values=[-9999.0, -99.0])\n",
"df.head()"
],
"metadata": {
"id": "_tad7jbYauI7"
},
"execution_count": null,
"outputs": []
},
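{
"cell_type": "markdown",
"source": [
"We can count how many missing values each column now contains:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Count the missing values in each column\n",
"df.isnull().sum()"
],
"metadata": {},
"execution_count": null,
"outputs": []
},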
{
"cell_type": "markdown",
"source": [
"Wonderful. The missing data is now represented by `NaN`.\n",
"\n",
"What data types did pandas infer?"
],
"metadata": {
"id": "s080W7kMaz9K"
}
},
{
"cell_type": "code",
"source": [
"df.info()"
],
"metadata": {
"id": "v-dwyaD6a1DW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"One problem here is that pandas did not recognize the `LST_DATE` column as a date. Let’s help it."
],
"metadata": {
"id": "khe7dXtFa8RF"
}
},
{
"cell_type": "code",
"source": [
"df = pd.read_csv(datafile, sep='\\s+',\n",
" na_values=[-9999.0, -99.0],\n",
" parse_dates=[1])\n",
"df.info()"
],
"metadata": {
"id": "C5EHSklxa82U"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# It worked! Finally, let’s tell pandas to use the date column as the index.\n",
"df = df.set_index('LST_DATE')\n",
"df.head()"
],
"metadata": {
"id": "mP8dvwfVbBf_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"df.loc['2017-08-07']"
],
"metadata": {
"id": "kSHDgQx1bG-3"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"df.loc['2017-07-01':'2017-07-11']"
],
"metadata": {
"id": "2riezJEQbLC8"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Quick statistics"
],
"metadata": {
"id": "TOEWjPY_bP5v"
}
},
{
"cell_type": "code",
"source": [
"df.describe()"
],
"metadata": {
"id": "lGxc0ZJZbNcQ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"\n",
"## Plotting Values"
],
"metadata": {
"id": "sc8EoRUJbYt1"
}
},
{
"cell_type": "markdown",
"source": [
"Pandas integrates a convenient [`boxplot`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.boxplot.html) function:"
],
"metadata": {
"id": "toNcXrbTzg7P"
}
},
{
"cell_type": "code",
"source": [
"fig, ax = plt.subplots(ncols=2, nrows=2, figsize=(14,14))\n",
"\n",
"df.iloc[:, 4:8].boxplot(ax=ax[0,0])\n",
"df.iloc[:, 10:14].boxplot(ax=ax[0,1])\n",
"df.iloc[:, 14:17].boxplot(ax=ax[1,0])\n",
"df.iloc[:, 18:22].boxplot(ax=ax[1,1])\n",
"\n",
"\n",
"ax[1, 1].set_xticklabels(ax[1, 1].get_xticklabels(), rotation=90);"
],
"metadata": {
"id": "yfTyrVjubQ7f"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Pandas is \"TIME-AWARE\"\n",
"df.T_DAILY_MEAN.plot()"
],
"metadata": {
"id": "eAYTxW5dbc60"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Note:** we could also manually create an axis and plot into it."
],
"metadata": {
"id": "nGEsFYyebmYM"
}
},
{
"cell_type": "code",
"source": [
"fig, ax = plt.subplots()\n",
"df.T_DAILY_MEAN.plot(ax=ax)\n",
"ax.set_title('Pandas Made This!')"
],
"metadata": {
"id": "UjWp-uqTbjYZ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"df[['T_DAILY_MIN', 'T_DAILY_MEAN', 'T_DAILY_MAX']].plot()"
],
"metadata": {
"id": "8-zviMf_bsiP"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Resampling\n",
"Since `pandas` understands time, we can use it to do resampling. The frequency string of each date offset is listed [in the time series documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects)."
],
"metadata": {
"id": "KRRo81Ytbr07"
}
},
{
"cell_type": "code",
"source": [
"# monthly reampler object\n",
"rs_obj = df.resample('MS')\n",
"rs_obj"
],
"metadata": {
"id": "wi45dgdqb1Kt"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"rs_obj.mean()"
],
"metadata": {
"id": "p3ApPM7Bb599"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# We can chain all of that together\n",
"df_mm = df.resample('MS').mean()\n",
"df_mm[['T_DAILY_MIN', 'T_DAILY_MEAN', 'T_DAILY_MAX']].plot()"
],
"metadata": {
"id": "Xqvl7rp5b9RU"
},
"execution_count": null,
"outputs": []
},
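{
"cell_type": "markdown",
"source": [
"Closely related to resampling is `rolling`, which computes statistics over a sliding window of the time index; the 30-day window below is an arbitrary choice for illustration:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# A 30-day centered rolling mean smooths the daily temperature series;\n",
"# the edges are NaN because they lack a full window\n",
"df.T_DAILY_MEAN.rolling(30, center=True).mean().plot()"
],
"metadata": {},
"execution_count": null,
"outputs": []
},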
{
"cell_type": "markdown",
"source": [
"**And this concludes this notebook's tutorial 😀**\n",
"\n",
"If you would like to learn more about `pandas`, check out the [`groupby` tutorial](https://earth-env-data-science.github.io/lectures/pandas/pandas_groupby.html) from the [Earth and Environmental Data Science book](https://earth-env-data-science.github.io/intro.html), the [Github repository](https://github.com/wesm/pydata-book) of the [Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do), or the [`pandas` community tutorials](https://pandas.pydata.org/pandas-docs/stable/getting_started/tutorials.html)."
],
"metadata": {
"id": "JjxORZ_wcJZp"
}
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "FJ5NKXX_rCuy"
},
"execution_count": null,
"outputs": []
}
]
}