%matplotlib inline
import random
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
sns.set_context("talk")
Anscombe’s quartet
Anscombe’s quartet comprises four datasets and is rather famous. Why? You’ll find out in this exercise.
In [4]:
anascombe = pd.read_csv('data/anscombe.csv')
anascombe.head()
Out [4]:
dataset x y
0 I 10 8.04
1 I 8 6.95
2 I 13 7.58
3 I 9 8.81
4 I 11 8.33
Part 1
For each of the four datasets…
- Compute the mean and variance of both x and y
- Compute the correlation coefficient between x and y
- Compute the linear regression line: y = β0 + β1 x + ϵ (hint: use statsmodels and look at the Statsmodels notebook)
In [5]:
database1 = anascombe[anascombe['dataset']=='I']
database2 = anascombe[anascombe['dataset']=='II']
database3 = anascombe[anascombe['dataset']=='III']
database4 = anascombe[anascombe['dataset']=='IV']
# Mean and variance of x and y for each dataset
# (equivalently: anascombe.loc[anascombe["dataset"]=='I'].x.mean())
print("datasetI:\nmean of x:",database1.x.mean()," mean of y:",database1.y.mean())
print("variance of x:",database1.x.var()," variance of y:",database1.y.var())
print("datasetII:\nmean of x:",database2.x.mean()," mean of y:",database2.y.mean())
print("variance of x:",database2.x.var()," variance of y:",database2.y.var())
print("datasetIII:\nmean of x:",database3.x.mean()," mean of y:",database3.y.mean())
print("variance of x:",database3.x.var()," variance of y:",database3.y.var())
print("datasetIV:\nmean of x:",database4.x.mean()," mean of y:",database4.y.mean())
print("variance of x:",database4.x.var()," variance of y:",database4.y.var())
# Correlation coefficient between x and y (note: computed over all four datasets pooled)
print("\ncorrelation coefficient between x and y:",anascombe.x.corr(anascombe.y))
# Linear regression y ~ x, fitted on a random 70% training split of all datasets pooled
n = len(anascombe)
is_train = np.random.rand(n) < 0.7
train = anascombe[is_train].reset_index(drop=True)
test = anascombe[~is_train].reset_index(drop=True)
lin_model = smf.ols("y ~ x", train).fit()
print(lin_model.summary())
Out [5]:
datasetI:
mean of x: 9.0 mean of y: 7.500909090909093
variance of x: 11.0 variance of y: 4.127269090909091
datasetII:
mean of x: 9.0 mean of y: 7.50090909090909
variance of x: 11.0 variance of y: 4.127629090909091
datasetIII:
mean of x: 9.0 mean of y: 7.5
variance of x: 11.0 variance of y: 4.12262
datasetIV:
mean of x: 9.0 mean of y: 7.500909090909091
variance of x: 11.0 variance of y: 4.123249090909091
correlation coefficient between x and y: 0.81636624276147
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.636
Model: OLS Adj. R-squared: 0.623
Method: Least Squares F-statistic: 52.34
Date: Fri, 08 Jun 2018 Prob (F-statistic): 4.72e-08
Time: 19:25:11 Log-Likelihood: -50.810
No. Observations: 32 AIC: 105.6
Df Residuals: 30 BIC: 108.6
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 3.2531 0.666 4.887 0.000 1.894 4.613
x 0.4843 0.067 7.234 0.000 0.348 0.621
==============================================================================
Omnibus: 1.104 Durbin-Watson: 1.995
Prob(Omnibus): 0.576 Jarque-Bera (JB): 0.593
Skew: 0.332 Prob(JB): 0.743
Kurtosis: 3.054 Cond. No. 30.9
==============================================================================
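Note that the cell above reports a single correlation and a single regression fitted to all four datasets pooled together (on a random 70% training split), while the prompt asks for these statistics per dataset. A minimal per-dataset sketch, assuming the anascombe DataFrame and the smf import from the cells above are still available (output not shown):
In [ ]:
# Correlation between x and y within each dataset
print(anascombe.groupby('dataset').apply(lambda d: d['x'].corr(d['y'])))
# A separate OLS fit y ~ x for each of the four datasets
for name, group in anascombe.groupby('dataset'):
    fit = smf.ols("y ~ x", data=group).fit()
    print("dataset", name, " intercept:", round(fit.params['Intercept'], 3),
          " slope:", round(fit.params['x'], 3), " R^2:", round(fit.rsquared, 3))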
Part 2
Using Seaborn, visualize all four datasets.
hint: use sns.FacetGrid combined with plt.scatter
In [6]:
g = sns.FacetGrid(anascombe, col='dataset')
g = g.map(plt.scatter, "x", "y")
plt.show()
Out [6]: [four scatter panels, one per dataset, plotting y against x]
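As a follow-up, seaborn's lmplot draws the fitted regression line on each panel, which makes the point of the quartet visible: the four fits are nearly identical even though the scatter patterns clearly are not. A possible variant (keyword argument names assume a reasonably recent seaborn):
In [ ]:
# Scatter plus a per-panel regression line for each dataset
sns.lmplot(x="x", y="y", col="dataset", data=anascombe, col_wrap=2)
plt.show()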