Detailed Explanation of Common PySpark DataFrame Operations
1. Creating a DataFrame
1.1 Creating a DataFrame by reading a file
from pyspark.sql import SparkSession  # SparkSession is the unified entry point
# Create the SparkSession object
spark = SparkSession\
.builder\
.appName('readfile')\
.getOrCreate()
# 1. Read CSV, Parquet, and other file formats
logFilePath = 'births_train.csv'
log_df = spark.read.csv(logFilePath,
encoding='utf-8',
header=True,
inferSchema=True,
sep=',')
logFilePath: a variable I defined above that holds the path to the file
encoding: the file encoding, which defaults to utf-8
header: whether to use the first line of the file as the header; True means the first row becomes the column names
inferSchema: whether to automatically infer the column types
sep: the column delimiter
Display the data:
log_df.show()  # shows the first 20 rows by default
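The read API also covers the other formats mentioned in the comment above, such as Parquet. Below is a minimal sketch of reading a Parquet file with the same spark object; the path births_train.parquet is a hypothetical placeholder, and because Parquet files carry their own schema, no header/inferSchema/sep options are needed.
# Read a Parquet file (hypothetical path, for illustration only)
parquet_df = spark.read.parquet('births_train.parquet')
parquet_df.printSchema()  # the schema comes directly from the file
parquet_df.show(5)        # show the first 5 rows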
1.2 Manually creating a DataFrame
Without specifying the DataFrame's schema
Here employees holds the data and schema is the list of column names. This approach is the simplest; the column types are inferred by Spark, and you can inspect the resulting schema by calling printSchema() on the DataFrame.
from pyspark.sql import SparkSession
# Create the SparkSession object
spark = SparkSession.builder.appName("FirstApp").getOrCreate()
employees = [(1, "John", 25), (2, "Ray", 35), (3, "Mike", 24), (4, "Jane", 28), (5, "Kevin", 26),
(6, "Vincent", 35), (7, "James", 38), (8, "Shane", 32), (9, "Larry", 29), (10, "Kimberly", 29),
(11, "Alex", 28), (12, "Garry", 25), (13, "Max", 31)]
employees = spark.createDataFrame(employees, schema=["emp_id", "name", "age"])  # pass in the data and the column names
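As mentioned above, you can check which types Spark inferred by calling printSchema() on the DataFrame. A quick illustration follows; Spark typically infers long for Python ints and string for Python strs, so the output should look roughly like the comments below.
# Inspect the schema Spark inferred from the Python tuples
employees.printSchema()
# root
#  |-- emp_id: long (nullable = true)
#  |-- name: string (nullable = true)
#  |-- age: long (nullable = true)
employees.show(3)  # preview the first 3 rows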
Specifying the DataFrame's schema
from pyspark.sql import SparkSession  # SparkSession is the unified entry point
from pyspark.sql.types import *
# Create the SparkSession object
spark = SparkSession \
.builder \
.appName('readfile') \
.getOrCreate()
employees = [(1, "John", 25), (2, "Ray", 35), (3, "Mike", 24), (4, "Jane", 28), (5, "Kevin", 26),
(6, "Vincent", 35), (7, "James", 38),