Concatenation and merging in pandas
pandas provides two families of operations for combining data:
- Concatenation: pd.concat, pd.append
- Merging: pd.merge, pd.join
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
0. Review: concatenation in NumPy
nd = np.random.randint(0, 150, size=(5, 4))
nd
Output:
array([[ 47, 48, 144, 83],
[ 69, 112, 87, 107],
[ 67, 47, 74, 145],
[115, 141, 125, 85],
[ 65, 12, 87, 118]])
np.concatenate([nd, nd], axis=1)
Output:
array([[ 47, 48, 144, 83, 47, 48, 144, 83],
[ 69, 112, 87, 107, 69, 112, 87, 107],
[ 67, 47, 74, 145, 67, 47, 74, 145],
[115, 141, 125, 85, 115, 141, 125, 85],
[ 65, 12, 87, 118, 65, 12, 87, 118]])
To keep the examples short, first define a helper function that builds a DataFrame whose cells encode their column and row labels:
def make_df(cols, index):
    data = {col: [str(col) + str(ind) for ind in index] for col in cols}
    df = DataFrame(data=data, columns=cols, index=index)
    return df
df1 = make_df(['a', 'b', 'c'], [1, 2, 3])
df1
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
df2 = make_df(['a', 'b', 'c'], [4, 5, 6])
df2
| | a | b | c |
|---|---|---|---|
| 4 | a4 | b4 | c4 |
| 5 | a5 | b5 | c5 |
| 6 | a6 | b6 | c6 |
1. Concatenation with pd.concat()
pd.concat works much like np.concatenate but takes several extra parameters (the signature below is from older pandas versions; join_axes has since been removed):
pd.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
keys=None, levels=None, names=None, verify_integrity=False,
copy=True)
1) Simple concatenation
Like np.concatenate, it stacks along the rows by default (axis=0).
# np.concatenate(axis=1) simply stacks arrays side by side; NumPy has no index or columns, so only the shapes have to match
# in pandas, if the row/column labels of the inputs do not line up, concat produces a larger DataFrame and fills the non-matching cells with NaN
pd.concat([df1, df2], axis=1)
pd.concat([df1, df2], axis=1)
| | a | b | c | a | b | c |
|---|---|---|---|---|---|---|
| 1 | a1 | b1 | c1 | NaN | NaN | NaN |
| 2 | a2 | b2 | c2 | NaN | NaN | NaN |
| 3 | a3 | b3 | c3 | NaN | NaN | NaN |
| 4 | NaN | NaN | NaN | a4 | b4 | c4 |
| 5 | NaN | NaN | NaN | a5 | b5 | c5 |
| 6 | NaN | NaN | NaN | a6 | b6 | c6 |
The direction of the concatenation is controlled with axis.
Note that index labels may repeat after concatenation:
df3 = make_df(['a', 'b', 'c'], [2, 3, 4])
df3
| | a | b | c |
|---|---|---|---|
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
| 4 | a4 | b4 | c4 |
pd.concat([df1, df3])
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
| 4 | a4 | b4 | c4 |
pd.concat([df1, df3], axis=1)
| | a | b | c | a | b | c |
|---|---|---|---|---|---|---|
| 1 | a1 | b1 | c1 | NaN | NaN | NaN |
| 2 | a2 | b2 | c2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 | a3 | b3 | c3 |
| 4 | NaN | NaN | NaN | a4 | b4 | c4 |
Alternatively, pass ignore_index=True to discard the original labels and renumber the rows:
# ignore_index rebuilds a fresh 0..n-1 index for the result
pd.concat([df1, df3], axis=0, ignore_index=True)
# in practice the inputs often come from sharded database tables whose ids overlap, so re-indexing avoids duplicate labels
| | a | b | c |
|---|---|---|---|
| 0 | a1 | b1 | c1 |
| 1 | a2 | b2 | c2 |
| 2 | a3 | b3 | c3 |
| 3 | a2 | b2 | c2 |
| 4 | a3 | b3 | c3 |
| 5 | a4 | b4 | c4 |
Or use keys= to build a hierarchical (MultiIndex) row index:
pd.concat([x, y], keys=['x', 'y'])
df4 = pd.concat([df1, df3], keys=['期中', '期末'])
df4
| | | a | b | c |
|---|---|---|---|---|
| 期中 | 1 | a1 | b1 | c1 |
| | 2 | a2 | b2 | c2 |
| | 3 | a3 | b3 | c3 |
| 期末 | 2 | a2 | b2 | c2 |
| | 3 | a3 | b3 | c3 |
| | 4 | a4 | b4 | c4 |
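Because keys= builds a hierarchical row index, the outer level can be used to pull either original frame back out. A minimal sketch using the df4 defined above:
# the outer index level holds the labels passed to keys=
df4.loc['期中']    # the rows that came from df1
df4.index          # MultiIndex of (key, original index) pairs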
2) Concatenating frames whose labels do not match
"Not matching" means the index along the concatenation dimension differs: for a vertical concat the column labels differ, for a horizontal concat the row labels differ.
There are three ways to handle this:
- Outer join: fill the gaps with NaN (the default)
df1
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
df5 = make_df(['c', 'd', 'e'], [3, 4, 5])
df5
| | c | d | e |
|---|---|---|---|
| 3 | c3 | d3 | e3 |
| 4 | c4 | d4 | e4 |
| 5 | c5 | d5 | e5 |
df6 = pd.concat([df1, df5], axis=0)
df6
| | a | b | c | d | e |
|---|---|---|---|---|---|
| 1 | a1 | b1 | c1 | NaN | NaN |
| 2 | a2 | b2 | c2 | NaN | NaN |
| 3 | a3 | b3 | c3 | NaN | NaN |
| 3 | NaN | NaN | c3 | d3 | e3 |
| 4 | NaN | NaN | c4 | d4 | e4 |
| 5 | NaN | NaN | c5 | d5 | e5 |
- Inner join: keep only the labels present in every input
# compare with joins in MySQL:
# a left (outer) join keeps every row of the left table and fills unmatched columns from the right table with NULL
# an inner join keeps only the rows that match on both sides
# for pd.concat, the join parameter defaults to 'outer'
df6 = pd.concat([df1, df5], axis=1, join='inner')
df6
# same idea as an inner join in MySQL
| | a | b | c | c | d | e |
|---|---|---|---|---|---|---|
| 3 | a3 | b3 | c3 | c3 | d3 | e3 |
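join='inner' works the same way for a vertical concat (axis=0): only the columns shared by all inputs survive. A minimal sketch:
# keeps only column 'c', the one column df1 and df5 have in common
pd.concat([df1, df5], axis=0, join='inner')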
- Restricting the result to the labels of one input with join_axes (older pandas only)
df1
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
df7 = pd.concat([df1, df5], join_axes=[df1.columns])
df7
# join_axes takes a list of Index objects, here the columns of df1
# roughly: select df1.a, df1.b, df1.c from df1 left join df5 using(c)
# using(c) is shorthand for on df1.c = df5.c
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
| 3 | NaN | NaN | c3 |
| 4 | NaN | NaN | c4 |
| 5 | NaN | NaN | c5 |
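Note that join_axes was removed in pandas 0.25; on newer versions the same result can be reproduced by reindexing the columns after the concat. A minimal sketch:
# equivalent to join_axes=[df1.columns] on modern pandas
pd.concat([df1, df5], axis=0).reindex(columns=df1.columns)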
3) Appending with append()
Appending rows at the end is so common that there is a dedicated append() method for it.
append() behaves like pd.concat along axis=0.
df1.append(df2)
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
| 4 | a4 | b4 | c4 |
| 5 | a5 | b5 | c5 |
| 6 | a6 | b6 | c6 |
df1.append(df5)
| | a | b | c | d | e |
|---|---|---|---|---|---|
| 1 | a1 | b1 | c1 | NaN | NaN |
| 2 | a2 | b2 | c2 | NaN | NaN |
| 3 | a3 | b3 | c3 | NaN | NaN |
| 3 | NaN | NaN | c3 | d3 | e3 |
| 4 | NaN | NaN | c4 | d4 | e4 |
| 5 | NaN | NaN | c5 | d5 | e5 |
============================================
Exercise: build a final-exam score table ddd3 containing only 张三, 李四 and 王老五, and use append() to concatenate it with the mid-term score table ddd.
============================================
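Note that DataFrame.append() was deprecated in pandas 1.4 and removed in 2.0, so on recent versions the same result is written with pd.concat(). A hedged sketch for the exercise above — the score values and column name are made up for illustration:
# hypothetical mid-term (ddd) and final (ddd3) score tables for 张三, 李四, 王老五
ddd  = DataFrame({'python': [90, 80, 70]}, index=['张三', '李四', '王老五'])
ddd3 = DataFrame({'python': [88, 75, 66]}, index=['张三', '李四', '王老五'])
# ddd.append(ddd3)                            # older pandas only
pd.concat([ddd, ddd3], keys=['期中', '期末'])  # works on every version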
2. Merging with pd.merge()
The difference from concat is that merge combines two frames based on a shared key column (or row index).
By default pd.merge() uses the column(s) that carry the same name in both frames as the join key.
The rows do not have to be in the same order in the two frames.
1) One-to-one merge
df1
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
df2
| | a | b | c |
|---|---|---|---|
| 4 | a4 | b4 | c4 |
| 5 | a5 | b5 | c5 |
| 6 | a6 | b6 | c6 |
# the default is an inner join; here no keys match, so the result is empty
pd.merge(df1, df2)
| | a | b | c |
|---|---|---|---|
# how:{'left', 'right', 'outer', 'inner'}, default 'inner'
pd.merge(df1,df2, how='right')
| | a | b | c |
|---|---|---|---|
| 0 | a4 | b4 | c4 |
| 1 | a5 | b5 | c5 |
| 2 | a6 | b6 | c6 |
pd.merge(df1, df5)
# how defaults to 'inner': only rows whose key appears in both frames are kept
| | a | b | c | d | e |
|---|---|---|---|---|---|
| 0 | a3 | b3 | c3 | d3 | e3 |
pd.merge(df1, df5,how='right')
| | a | b | c | d | e |
|---|---|---|---|---|---|
| 0 | a3 | b3 | c3 | d3 | e3 |
| 1 | NaN | NaN | c4 | d4 | e4 |
| 2 | NaN | NaN | c5 | d5 | e5 |
2) Many-to-one and one-to-many merges
df8 = make_df(['c', 'd', 'e'],[1, 1, 1, 4])
df8
| | c | d | e |
|---|---|---|---|
| 1 | c1 | d1 | e1 |
| 1 | c1 | d1 | e1 |
| 1 | c1 | d1 | e1 |
| 4 | c4 | d4 | e4 |
pd.merge(df1, df8)
# select * from df1 join df8 on df1.c = df8.c
| | a | b | c | d | e |
|---|---|---|---|---|---|
| 0 | a1 | b1 | c1 | d1 | e1 |
| 1 | a1 | b1 | c1 | d1 | e1 |
| 2 | a1 | b1 | c1 | d1 | e1 |
pd.merge(df1, df8, how='left')
| | a | b | c | d | e |
|---|---|---|---|---|---|
| 0 | a1 | b1 | c1 | d1 | e1 |
| 1 | a1 | b1 | c1 | d1 | e1 |
| 2 | a1 | b1 | c1 | d1 | e1 |
| 3 | a2 | b2 | c2 | NaN | NaN |
| 4 | a3 | b3 | c3 | NaN | NaN |
pd.merge(df1, df8, how='outer')
# in practice an outer merge is often the safest choice: no rows are silently dropped
| | a | b | c | d | e |
|---|---|---|---|---|---|
| 0 | a1 | b1 | c1 | d1 | e1 |
| 1 | a1 | b1 | c1 | d1 | e1 |
| 2 | a1 | b1 | c1 | d1 | e1 |
| 3 | a2 | b2 | c2 | NaN | NaN |
| 4 | a3 | b3 | c3 | NaN | NaN |
| 5 | NaN | NaN | c4 | d4 | e4 |
3) Many-to-many merge
# tweak a couple of cells in df8 first (positional assignment avoids chained indexing)
df8.iloc[0, 1] = 'qwe'   # column 'd' of the first row
df8.iloc[3, 2] = 'asd'   # column 'e' of the row labelled 4
df8
| | c | d | e |
|---|---|---|---|
| 1 | c1 | qwe | e1 |
| 1 | c1 | d1 | e1 |
| 1 | c1 | d1 | e1 |
| 4 | c4 | d4 | asd |
df9 = make_df(list('abc'),[1,1,4,4])
df9
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 1 | a1 | b1 | c1 |
| 4 | a4 | b4 | c4 |
| 4 | a4 | b4 | c4 |
pd.merge(df9, df8, how='outer')
| | a | b | c | d | e |
|---|---|---|---|---|---|
| 0 | a1 | b1 | c1 | qwe | e1 |
| 1 | a1 | b1 | c1 | d1 | e1 |
| 2 | a1 | b1 | c1 | d1 | e1 |
| 3 | a1 | b1 | c1 | qwe | e1 |
| 4 | a1 | b1 | c1 | d1 | e1 |
| 5 | a1 | b1 | c1 | d1 | e1 |
| 6 | a4 | b4 | c4 | d4 | asd |
| 7 | a4 | b4 | c4 | d4 | asd |
4) Normalizing the key
- Use on= to state explicitly which column is the key, for when the two frames share more than one column name.
df1
| | a | b | c |
|---|---|---|---|
| 1 | a1 | b1 | c1 |
| 2 | a2 | b2 | c2 |
| 3 | a3 | b3 | c3 |
df10 = make_df(list('cde'),[1, 2, 3])
# the DataFrame already exists, so rename its columns afterwards
df10.columns = list('wde')
df10
| | w | d | e |
|---|---|---|---|
| 1 | c1 | d1 | e1 |
| 2 | c2 | d2 | e2 |
| 3 | c3 | d3 | e3 |
df11 = make_df(list('bcd'),[1, 2, 3])
df11
| | b | c | d |
|---|---|---|---|
| 1 | b1 | c1 | d1 |
| 2 | b2 | c2 | d2 |
| 3 | b3 | c3 | d3 |
pd.merge(df1, df11, on='b')
# in SQL, when two tables carry a column with the same name but different meaning, you disambiguate with aliases,
# e.g. select a.c as c_a, b.c as c_b ...; pandas does the same automatically with the _x/_y suffixes
| | a | b | c_x | c_y | d |
|---|---|---|---|---|---|
| 0 | a1 | b1 | c1 | c1 | d1 |
| 1 | a2 | b2 | c2 | c2 | d2 |
| 2 | a3 | b3 | c3 | c3 | d3 |
pd.merge(df1, df11, on='c')
(With 'c' as the key, it is the duplicated b column that receives the default _x/_y suffixes.)
- Use left_on and right_on to name the key column on each side, for when the two frames call the same key by different column names.
# on joins columns that have the same meaning (and compatible dtype) in both tables; here the names differ, so each side is named explicitly
pd.merge(df1, df10, left_on='c', right_on='w')
| | a | b | c | w | d | e |
|---|---|---|---|---|---|---|
| 0 | a1 | b1 | c1 | c1 | d1 | e1 |
| 1 | a2 | b2 | c2 | c2 | d2 | e2 |
| 2 | a3 | b3 | c3 | c3 | d3 | e3 |
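With left_on/right_on both key columns survive in the result; the redundant one is usually dropped right after the merge. A minimal sketch:
# 'w' carries the same values as 'c', so drop it once the merge is done
pd.merge(df1, df10, left_on='c', right_on='w').drop(columns='w')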
============================================
Exercise: suppose there are two score tables: ddd contains 张三, 李四 and 王老五, while ddd4 contains 张三 and 赵小六. How would you merge them?
What if 张三's name was mistyped as 张十三 in ddd4?
============================================
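For the second question, the typo has to be repaired before merging, otherwise an inner merge silently drops that student. A hedged sketch — ddd4 and its 姓名 column are assumptions made up for the exercise:
# hypothetical score table containing the mistyped name
ddd4 = DataFrame({'姓名': ['张十三', '赵小六'], '数学': [95, 60]})
ddd4['姓名'] = ddd4['姓名'].replace('张十三', '张三')   # normalize the name first
# pd.merge(ddd, ddd4, on='姓名', how='outer')           # then merge as usual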
5) Inner and outer merges
- Inner merge: keep only the keys present in both frames (the default)
- Outer merge, how='outer': fill the gaps with NaN
- Left / right merge: how='left', how='right'
============================================
Exercise: if only 张三 and 赵小六 have scores for Chinese, maths and English, how would you merge?
Think about the use case and merge ddd with ddd4 in several different ways.
============================================
6) Resolving column-name conflicts
When several column names collide, use on= to pick the key column and suffixes= to label the colliding non-key columns.
suffixes= lets you choose your own suffixes instead of the default _x/_y:
pd.merge(df1, df11, on='c', suffixes=('_up','_down'))
| | a | b_up | c | b_down | d |
|---|---|---|---|---|---|
| 0 | a1 | b1 | c1 | b1 | d1 |
| 1 | a2 | b2 | c2 | b2 | d2 |
| 2 | a3 | b3 | c3 | b3 | d3 |
============================================
Exercise: suppose two different students are both called 李四, and ddd5 / ddd6 are score tables for 张三 and 李四. How would you merge them?
============================================
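For this exercise, merging on the name alone would cross-join the two 李四 rows (a many-to-many merge); carrying a unique student id in both tables and merging on it avoids the blow-up. A hedged sketch — the 学号/姓名 columns and the score values are assumptions:
# hypothetical score tables with a unique student id
ddd5 = DataFrame({'学号': [1, 2, 3], '姓名': ['张三', '李四', '李四'], '期中': [80, 90, 70]})
ddd6 = DataFrame({'学号': [1, 2, 3], '姓名': ['张三', '李四', '李四'], '期末': [85, 88, 75]})
pd.merge(ddd5, ddd6, on=['学号', '姓名'])   # one row per student, no accidental cross join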
3. Case study: population data of the US states
First load the files and look at a sample of each.
# state areas
areas = pd.read_csv('./data/state-areas.csv')
# state abbreviations
abbr = pd.read_csv('./data/state-abbrevs.csv')
# population
pop = pd.read_csv('./data/state-population.csv')
areas.shape
Output:
(52, 2)
areas.head()
| | state | area (sq. mi) |
|---|---|---|
| 0 | Alabama | 52423 |
| 1 | Alaska | 656425 |
| 2 | Arizona | 114006 |
| 3 | Arkansas | 53182 |
| 4 | California | 163707 |
abbr.shape
Output:
(51, 2)
abbr.head()
| | state | abbreviation |
|---|---|---|
| 0 | Alabama | AL |
| 1 | Alaska | AK |
| 2 | Arizona | AZ |
| 3 | Arkansas | AR |
| 4 | California | CA |
pop.shape
Output:
(2544, 4)
pop.head()
| | state/region | ages | year | population |
|---|---|---|---|---|
| 0 | AL | under18 | 2012 | 1117489.0 |
| 1 | AL | total | 2012 | 4817528.0 |
| 2 | AL | under18 | 2010 | 1130966.0 |
| 3 | AL | total | 2010 | 4785570.0 |
| 4 | AL | under18 | 2011 | 1125763.0 |
Merge the pop and abbr DataFrames, joining pop's state/region column against abbr's abbreviation column.
Use an outer merge so that no information is lost.
abbrToPop = pd.merge(abbr, pop, left_on='abbreviation', right_on='state/region', how='outer')
abbrToPop.head()
| | state | abbreviation | state/region | ages | year | population |
|---|---|---|---|---|---|---|
| 0 | Alabama | AL | AL | under18 | 2012 | 1117489.0 |
| 1 | Alabama | AL | AL | total | 2012 | 4817528.0 |
| 2 | Alabama | AL | AL | under18 | 2010 | 1130966.0 |
| 3 | Alabama | AL | AL | total | 2010 | 4785570.0 |
| 4 | Alabama | AL | AL | under18 | 2011 | 1125763.0 |
Drop the now-redundant abbreviation column (axis=1):
# drop()
# most methods return a new object and leave the original untouched; inplace=True makes drop modify abbrToPop itself
abbrToPop.drop(labels='abbreviation', axis=1, inplace=True)
abbrToPop.head()
abbrToPop.shape
Output:
(2544, 5)
Check which columns contain missing data.
.isnull().any() reports True for a column as soon as it has at least one missing value.
# NaN
abbrToPop.isnull().any()
Output:
state            True
state/region    False
ages            False
year            False
population       True
dtype: bool
Inspect the missing data.
A boolean mask built from isnull() shows only the rows where a value is missing.
# how many values are missing in the state column?
abbrToPop['state'].isnull().sum()
Output:
96
Find out which state/region values leave state as NaN; unique() lists the distinct offenders.
# these gaps are filled in below
cond = abbrToPop['state'].isnull()
# state holds the full state name, so we need to work out what to fill in
# unique()
abbrToPop.loc[cond, 'state/region'].unique()
Output:
array(['PR', 'USA'], dtype=object)
# looking up PR tells us its full name: Puerto Rico
# fill in the missing names; .loc assignment avoids the SettingWithCopyWarning that chained indexing triggers
cond_pr = abbrToPop['state/region'] == 'PR'
abbrToPop.loc[cond_pr, 'state'] = 'Puerto Rico'
cond_usa = abbrToPop['state/region'] == 'USA'
abbrToPop.loc[cond_usa, 'state'] = 'United States'
# the population column still contains missing values; after checking the source data we simply drop those rows
# dropna(); inplace=True modifies the table in place
abbrToPop.dropna(inplace=True)
abbrToPop.isnull().sum()
Output:
state           0
state/region    0
ages            0
year            0
population      0
dtype: int64
Filling in the correct state names for those state/region values removed every NaN from the state column, and dropping the rows with unknown population cleared the rest.
Remember this pattern for cleaning up missing data (NaN)!
Next merge in the per-state area data (areas), again with an outer merge.
Think about why an outer merge is the right choice here.
areas.head()
| | state | area (sq. mi) |
|---|---|---|
| 0 | Alabama | 52423 |
| 1 | Alaska | 656425 |
| 2 | Arizona | 114006 |
| 3 | Arkansas | 53182 |
| 4 | California | 163707 |
abbrToPopToAreas = pd.merge(abbrToPop, areas, how='outer')
abbrToPopToAreas.head()
| | state | state/region | ages | year | population | area (sq. mi) |
|---|---|---|---|---|---|---|
| 0 | Alabama | AL | under18 | 2012 | 1117489.0 | 52423.0 |
| 1 | Alabama | AL | total | 2012 | 4817528.0 | 52423.0 |
| 2 | Alabama | AL | under18 | 2010 | 1130966.0 | 52423.0 |
| 3 | Alabama | AL | total | 2010 | 4785570.0 | 52423.0 |
| 4 | Alabama | AL | under18 | 2011 | 1125763.0 | 52423.0 |
Look again for columns with missing data.
The area (sq. mi) column has gaps; to locate them, find out which state has no area value.
abbrToPopToAreas.isnull().any()
Output:
state            False
state/region     False
ages             False
year             False
population       False
area (sq. mi)     True
dtype: bool
cond_area = abbrToPopToAreas['area (sq. mi)'].isnull()
abbrToPopToAreas.loc[cond_area, 'state/region']
Output:
2476 USA
2477 USA
2478 USA
2479 USA
2480 USA
2481 USA
2482 USA
2483 USA
2484 USA
2485 USA
2486 USA
2487 USA
2488 USA
2489 USA
2490 USA
2491 USA
2492 USA
2493 USA
2494 USA
2495 USA
2496 USA
2497 USA
2498 USA
2499 USA
2500 USA
2501 USA
2502 USA
2503 USA
2504 USA
2505 USA
2506 USA
2507 USA
2508 USA
2509 USA
2510 USA
2511 USA
2512 USA
2513 USA
2514 USA
2515 USA
2516 USA
2517 USA
2518 USA
2519 USA
2520 USA
2521 USA
2522 USA
2523 USA
Name: state/region, dtype: object
total_area = areas['area (sq. mi)'].sum()
total_area
Output:
3790399
cond_ab = abbrToPopToAreas['state/region'] == 'USA'
abbrToPopToAreas.loc[cond_ab, 'area (sq. mi)'] = total_area
Check that nothing is missing any more:
abbrToPopToAreas.isnull().sum()
Output:
state 0
state/region 0
ages 0
year 0
population 0
area (sq. mi) 0
dtype: int64
Select the total population figures for 2010 with df.query(query_string):
abbrToPopToAreas
| | state | state/region | ages | year | population | area (sq. mi) |
|---|---|---|---|---|---|---|
| 0 | Alabama | AL | under18 | 2012 | 1117489.0 | 52423.0 |
| 1 | Alabama | AL | total | 2012 | 4817528.0 | 52423.0 |
| 2 | Alabama | AL | under18 | 2010 | 1130966.0 | 52423.0 |
| 3 | Alabama | AL | total | 2010 | 4785570.0 | 52423.0 |
| 4 | Alabama | AL | under18 | 2011 | 1125763.0 | 52423.0 |
| 5 | Alabama | AL | total | 2011 | 4801627.0 | 52423.0 |
| 6 | Alabama | AL | total | 2009 | 4757938.0 | 52423.0 |
| 7 | Alabama | AL | under18 | 2009 | 1134192.0 | 52423.0 |
| 8 | Alabama | AL | under18 | 2013 | 1111481.0 | 52423.0 |
| 9 | Alabama | AL | total | 2013 | 4833722.0 | 52423.0 |
| 10 | Alabama | AL | total | 2007 | 4672840.0 | 52423.0 |
| 11 | Alabama | AL | under18 | 2007 | 1132296.0 | 52423.0 |
| 12 | Alabama | AL | total | 2008 | 4718206.0 | 52423.0 |
| 13 | Alabama | AL | under18 | 2008 | 1134927.0 | 52423.0 |
| 14 | Alabama | AL | total | 2005 | 4569805.0 | 52423.0 |
| 15 | Alabama | AL | under18 | 2005 | 1117229.0 | 52423.0 |
| 16 | Alabama | AL | total | 2006 | 4628981.0 | 52423.0 |
| 17 | Alabama | AL | under18 | 2006 | 1126798.0 | 52423.0 |
| 18 | Alabama | AL | total | 2004 | 4530729.0 | 52423.0 |
| 19 | Alabama | AL | under18 | 2004 | 1113662.0 | 52423.0 |
| 20 | Alabama | AL | total | 2003 | 4503491.0 | 52423.0 |
| 21 | Alabama | AL | under18 | 2003 | 1113083.0 | 52423.0 |
| 22 | Alabama | AL | total | 2001 | 4467634.0 | 52423.0 |
| 23 | Alabama | AL | under18 | 2001 | 1120409.0 | 52423.0 |
| 24 | Alabama | AL | total | 2002 | 4480089.0 | 52423.0 |
| 25 | Alabama | AL | under18 | 2002 | 1116590.0 | 52423.0 |
| 26 | Alabama | AL | under18 | 1999 | 1121287.0 | 52423.0 |
| 27 | Alabama | AL | total | 1999 | 4430141.0 | 52423.0 |
| 28 | Alabama | AL | total | 2000 | 4452173.0 | 52423.0 |
| 29 | Alabama | AL | under18 | 2000 | 1122273.0 | 52423.0 |
| ... | ... | ... | ... | ... | ... | ... |
| 2494 | United States | USA | under18 | 1999 | 71946051.0 | 3790399.0 |
| 2495 | United States | USA | total | 2000 | 282162411.0 | 3790399.0 |
| 2496 | United States | USA | under18 | 2000 | 72376189.0 | 3790399.0 |
| 2497 | United States | USA | total | 1999 | 279040181.0 | 3790399.0 |
| 2498 | United States | USA | total | 2001 | 284968955.0 | 3790399.0 |
| 2499 | United States | USA | under18 | 2001 | 72671175.0 | 3790399.0 |
| 2500 | United States | USA | total | 2002 | 287625193.0 | 3790399.0 |
| 2501 | United States | USA | under18 | 2002 | 72936457.0 | 3790399.0 |
| 2502 | United States | USA | total | 2003 | 290107933.0 | 3790399.0 |
| 2503 | United States | USA | under18 | 2003 | 73100758.0 | 3790399.0 |
| 2504 | United States | USA | total | 2004 | 292805298.0 | 3790399.0 |
| 2505 | United States | USA | under18 | 2004 | 73297735.0 | 3790399.0 |
| 2506 | United States | USA | total | 2005 | 295516599.0 | 3790399.0 |
| 2507 | United States | USA | under18 | 2005 | 73523669.0 | 3790399.0 |
| 2508 | United States | USA | total | 2006 | 298379912.0 | 3790399.0 |
| 2509 | United States | USA | under18 | 2006 | 73757714.0 | 3790399.0 |
| 2510 | United States | USA | total | 2007 | 301231207.0 | 3790399.0 |
| 2511 | United States | USA | under18 | 2007 | 74019405.0 | 3790399.0 |
| 2512 | United States | USA | total | 2008 | 304093966.0 | 3790399.0 |
| 2513 | United States | USA | under18 | 2008 | 74104602.0 | 3790399.0 |
| 2514 | United States | USA | under18 | 2013 | 73585872.0 | 3790399.0 |
| 2515 | United States | USA | total | 2013 | 316128839.0 | 3790399.0 |
| 2516 | United States | USA | total | 2009 | 306771529.0 | 3790399.0 |
| 2517 | United States | USA | under18 | 2009 | 74134167.0 | 3790399.0 |
| 2518 | United States | USA | under18 | 2010 | 74119556.0 | 3790399.0 |
| 2519 | United States | USA | total | 2010 | 309326295.0 | 3790399.0 |
| 2520 | United States | USA | under18 | 2011 | 73902222.0 | 3790399.0 |
| 2521 | United States | USA | total | 2011 | 311582564.0 | 3790399.0 |
| 2522 | United States | USA | under18 | 2012 | 73708179.0 | 3790399.0 |
| 2523 | United States | USA | total | 2012 | 313873685.0 | 3790399.0 |
2524 rows × 6 columns
abbrToPopToAreas_2010 = abbrToPopToAreas.query('year == 2010 & ages == "total"')
abbrToPopToAreas_2010
| | state | state/region | ages | year | population | area (sq. mi) |
|---|---|---|---|---|---|---|
| 3 | Alabama | AL | total | 2010 | 4785570.0 | 52423.0 |
| 91 | Alaska | AK | total | 2010 | 713868.0 | 656425.0 |
| 101 | Arizona | AZ | total | 2010 | 6408790.0 | 114006.0 |
| 189 | Arkansas | AR | total | 2010 | 2922280.0 | 53182.0 |
| 197 | California | CA | total | 2010 | 37333601.0 | 163707.0 |
| 283 | Colorado | CO | total | 2010 | 5048196.0 | 104100.0 |
| 293 | Connecticut | CT | total | 2010 | 3579210.0 | 5544.0 |
| 379 | Delaware | DE | total | 2010 | 899711.0 | 1954.0 |
| 389 | District of Columbia | DC | total | 2010 | 605125.0 | 68.0 |
| 475 | Florida | FL | total | 2010 | 18846054.0 | 65758.0 |
| 485 | Georgia | GA | total | 2010 | 9713248.0 | 59441.0 |
| 570 | Hawaii | HI | total | 2010 | 1363731.0 | 10932.0 |
| 581 | Idaho | ID | total | 2010 | 1570718.0 | 83574.0 |
| 666 | Illinois | IL | total | 2010 | 12839695.0 | 57918.0 |
| 677 | Indiana | IN | total | 2010 | 6489965.0 | 36420.0 |
| 762 | Iowa | IA | total | 2010 | 3050314.0 | 56276.0 |
| 773 | Kansas | KS | total | 2010 | 2858910.0 | 82282.0 |
| 858 | Kentucky | KY | total | 2010 | 4347698.0 | 40411.0 |
| 869 | Louisiana | LA | total | 2010 | 4545392.0 | 51843.0 |
| 954 | Maine | ME | total | 2010 | 1327366.0 | 35387.0 |
| 965 | Montana | MT | total | 2010 | 990527.0 | 147046.0 |
| 1050 | Nebraska | NE | total | 2010 | 1829838.0 | 77358.0 |
| 1061 | Nevada | NV | total | 2010 | 2703230.0 | 110567.0 |
| 1146 | New Hampshire | NH | total | 2010 | 1316614.0 | 9351.0 |
| 1157 | New Jersey | NJ | total | 2010 | 8802707.0 | 8722.0 |
| 1242 | New Mexico | NM | total | 2010 | 2064982.0 | 121593.0 |
| 1253 | New York | NY | total | 2010 | 19398228.0 | 54475.0 |
| 1338 | North Carolina | NC | total | 2010 | 9559533.0 | 53821.0 |
| 1349 | North Dakota | ND | total | 2010 | 674344.0 | 70704.0 |
| 1434 | Ohio | OH | total | 2010 | 11545435.0 | 44828.0 |
| 1445 | Oklahoma | OK | total | 2010 | 3759263.0 | 69903.0 |
| 1530 | Oregon | OR | total | 2010 | 3837208.0 | 98386.0 |
| 1541 | Maryland | MD | total | 2010 | 5787193.0 | 12407.0 |
| 1626 | Massachusetts | MA | total | 2010 | 6563263.0 | 10555.0 |
| 1637 | Michigan | MI | total | 2010 | 9876149.0 | 96810.0 |
| 1722 | Minnesota | MN | total | 2010 | 5310337.0 | 86943.0 |
| 1733 | Mississippi | MS | total | 2010 | 2970047.0 | 48434.0 |
| 1818 | Missouri | MO | total | 2010 | 5996063.0 | 69709.0 |
| 1829 | Pennsylvania | PA | total | 2010 | 12710472.0 | 46058.0 |
| 1914 | Rhode Island | RI | total | 2010 | 1052669.0 | 1545.0 |
| 1925 | South Carolina | SC | total | 2010 | 4636361.0 | 32007.0 |
| 2010 | South Dakota | SD | total | 2010 | 816211.0 | 77121.0 |
| 2021 | Tennessee | TN | total | 2010 | 6356683.0 | 42146.0 |
| 2106 | Texas | TX | total | 2010 | 25245178.0 | 268601.0 |
| 2117 | Utah | UT | total | 2010 | 2774424.0 | 84904.0 |
| 2202 | Vermont | VT | total | 2010 | 625793.0 | 9615.0 |
| 2213 | Virginia | VA | total | 2010 | 8024417.0 | 42769.0 |
| 2298 | Washington | WA | total | 2010 | 6742256.0 | 71303.0 |
| 2309 | West Virginia | WV | total | 2010 | 1854146.0 | 24231.0 |
| 2394 | Wisconsin | WI | total | 2010 | 5689060.0 | 65503.0 |
| 2405 | Wyoming | WY | total | 2010 | 564222.0 | 97818.0 |
| 2470 | Puerto Rico | PR | total | 2010 | 3721208.0 | 3515.0 |
| 2519 | United States | USA | total | 2010 | 309326295.0 | 3790399.0 |
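df.query() is just a more readable spelling of a boolean mask; the same selection can be written explicitly. A minimal equivalent:
# boolean-mask version of the query above
cond_2010 = (abbrToPopToAreas['year'] == 2010) & (abbrToPopToAreas['ages'] == 'total')
abbrToPopToAreas[cond_2010]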
Post-process the result: make the state column the new row index with set_index().
# in practice an id column is often promoted to the index like this
# set_index()
abbrToPopToAreas_2010.set_index('state',inplace=True)
abbrToPopToAreas_2010.head()
| state | state/region | ages | year | population | area (sq. mi) |
|---|---|---|---|---|---|
| Alabama | AL | total | 2010 | 4785570.0 | 52423.0 |
| Alaska | AK | total | 2010 | 713868.0 | 656425.0 |
| Arizona | AZ | total | 2010 | 6408790.0 | 114006.0 |
| Arkansas | AR | total | 2010 | 2922280.0 | 53182.0 |
| California | CA | total | 2010 | 37333601.0 | 163707.0 |
Compute the population density as population / area. Note this is a Series divided by a Series, and the result is itself a Series.
density_2010 = abbrToPopToAreas_2010['population'] / abbrToPopToAreas_2010['area (sq. mi)']
density_2010.head()
Output:
state
Alabama 91.287603
Alaska 1.087509
Arizona 56.214497
Arkansas 54.948667
California 228.051342
dtype: float64
Add the 2010 densities as a new column. Because abbrToPopToAreas_2010 is a slice of the full table, take a copy first to avoid pandas' SettingWithCopyWarning:
abbrToPopToAreas_2010 = abbrToPopToAreas_2010.copy()
abbrToPopToAreas_2010['density_2010'] = density_2010
Sort by value with sort_values() and show the five most densely populated states:
abbrToPopToAreas_2010.sort_values(by='density_2010').tail()
| state | state/region | ages | year | population | area (sq. mi) | density_2010 |
|---|---|---|---|---|---|---|
| Connecticut | CT | total | 2010 | 3579210.0 | 5544.0 | 645.600649 |
| Rhode Island | RI | total | 2010 | 1052669.0 | 1545.0 | 681.339159 |
| New Jersey | NJ | total | 2010 | 8802707.0 | 8722.0 | 1009.253268 |
| Puerto Rico | PR | total | 2010 | 3721208.0 | 3515.0 | 1058.665149 |
| District of Columbia | DC | total | 2010 | 605125.0 | 68.0 | 8898.897059 |
And the five least densely populated states:
abbrToPopToAreas_2010.sort_values(by='density_2010').head()
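pandas also offers nlargest()/nsmallest() as a shortcut for "sort by a column and take the extremes":
# same answers as sort_values(...).tail() / .head()
abbrToPopToAreas_2010.nlargest(5, 'density_2010')
abbrToPopToAreas_2010.nsmallest(5, 'density_2010')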
Key takeaways:
- Use .loc consistently for label-based indexing and assignment
- Use .isnull().any() to find the columns that contain NaN
- Use .unique() to see which keys a column actually contains
- Generally prefer outer or left merges: better to end up with NaN in one column than to throw away the information in the others
Review: how Series/DataFrame arithmetic differs from ndarray arithmetic
- Series and DataFrames do not broadcast by shape; they align on labels, and a label missing on one side becomes NaN (or use add() with fill_value to fill the gap)
- ndarrays broadcast by repeating existing values
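A minimal sketch of the alignment rule:
s1 = Series([1, 2, 3], index=['a', 'b', 'c'])
s2 = Series([10, 20], index=['b', 'c'])
s1 + s2                    # label 'a' has no partner -> NaN
s1.add(s2, fill_value=0)   # treat the missing position as 0 instead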
Plotting the ups and downs of Apple's stock price
import matplotlib.pyplot as plt
import numpy as np
Load the data:
apple = pd.read_csv('./data/AAPL.csv')
apple.head()
# Adj Close: the closing price adjusted after the close
| | Date | Open | High | Low | Close | Adj Close | Volume |
|---|---|---|---|---|---|---|---|
| 0 | 1980-12-12 | 0.513393 | 0.515625 | 0.513393 | 0.513393 | 0.023268 | 117258400.0 |
| 1 | 1980-12-15 | 0.488839 | 0.488839 | 0.486607 | 0.486607 | 0.022054 | 43971200.0 |
| 2 | 1980-12-16 | 0.453125 | 0.453125 | 0.450893 | 0.450893 | 0.020435 | 26432000.0 |
| 3 | 1980-12-17 | 0.462054 | 0.464286 | 0.462054 | 0.462054 | 0.020941 | 21610400.0 |
| 4 | 1980-12-18 | 0.475446 | 0.477679 | 0.475446 | 0.475446 | 0.021548 | 18362400.0 |
apple.tail()
| | Date | Open | High | Low | Close | Adj Close | Volume |
|---|---|---|---|---|---|---|---|
| 9453 | 2018-06-08 | 191.169998 | 192.000000 | 189.770004 | 191.699997 | 191.699997 | 26656800.0 |
| 9454 | 2018-06-11 | 191.350006 | 191.970001 | 190.210007 | 191.229996 | 191.229996 | 18308500.0 |
| 9455 | 2018-06-12 | 191.389999 | 192.610001 | 191.149994 | 192.279999 | 192.279999 | 16911100.0 |
| 9456 | 2018-06-13 | 192.419998 | 192.880005 | 190.440002 | 190.699997 | 190.699997 | 21638400.0 |
| 9457 | 2018-06-14 | 191.550003 | 191.570007 | 190.220001 | 190.800003 | 190.800003 | 21491500.0 |
apple.shape
Output:
(9458, 7)
apple.dtypes
Output:
Date object
Open float64
High float64
Low float64
Close float64
Adj Close float64
Volume float64
dtype: object
Convert the Date column to a real datetime type and make it the index.
pd.to_datetime() plays the role that the datetime type plays in MySQL:
apple['Date'] = pd.to_datetime(apple['Date'])
apple.set_index('Date', inplace=True)
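Alternatively, the parsing and indexing can be done in one step at read time; a minimal sketch that would replace the read_csv/to_datetime/set_index steps above:
# parse the Date column and promote it to the index while reading
apple = pd.read_csv('./data/AAPL.csv', parse_dates=['Date'], index_col='Date')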
apple.tail()
| Date | Open | High | Low | Close | Adj Close | Volume |
|---|---|---|---|---|---|---|
| 2018-06-08 | 191.169998 | 192.000000 | 189.770004 | 191.699997 | 191.699997 | 26656800.0 |
| 2018-06-11 | 191.350006 | 191.970001 | 190.210007 | 191.229996 | 191.229996 | 18308500.0 |
| 2018-06-12 | 191.389999 | 192.610001 | 191.149994 | 192.279999 | 192.279999 | 16911100.0 |
| 2018-06-13 | 192.419998 | 192.880005 | 190.440002 | 190.699997 | 190.699997 | 21638400.0 |
| 2018-06-14 | 191.550003 | 191.570007 | 190.220001 | 190.800003 | 190.800003 | 21491500.0 |
Plot the adjusted close:
adj_plot = apple['Adj Close'].plot()
fig = adj_plot.get_figure()
# set_size_inches() sets the figure size in inches
fig.set_size_inches(12,6)

apple.drop('Volume', axis=1, inplace=True)
app = apple.plot()
# grab the underlying Figure in order to resize it
fig1 = app.get_figure()
fig1.set_size_inches(12, 6)

This article took a close look at pandas' operations for combining data — pd.concat, pd.append, pd.merge and pd.join — and used worked examples to show how to handle the complications that come up when merging real data.