Yes - R's commonly used read.*() family (read.csv(), read.table(), etc.) and the readr package (read_csv(), read_tsv(), etc.) really are very similar to the corresponding pandas functions in Python: both turn a text file into a data frame. Below is a 1:1 mapping of the most common scenarios, which makes the resemblance obvious at a glance.
To look up help in R, just prefix the function name with a question mark, for example: ?read.table
| Task | R (base/readr) | Python (pandas) |
|---|---|---|
| Read CSV | read.csv("file.csv") or readr::read_csv("file.csv") | pandas.read_csv("file.csv") |
| Read TSV | read.delim("file.tsv") or readr::read_tsv("file.tsv") | pandas.read_csv("file.tsv", sep='\t') |
| Read arbitrary delimiter | read.table("file.txt", sep = "\|") or readr::read_delim("file.txt", delim = "\|") | pandas.read_csv("file.txt", sep='\|') |
| Read fixed width | read.fwf("file.txt", widths = c(5,10,3)) or readr::read_fwf("file.txt", fwf_widths(c(5,10,3))) | pandas.read_fwf("file.txt", widths=[5,10,3]) |
| Don't auto-convert strings to factors | read.csv(..., stringsAsFactors = FALSE) (must be set manually in R < 4.0) or readr::read_csv() (already the default) | pandas.read_csv(..., dtype=str) or specify dtypes manually |
| Specify column types | readr::read_csv("file.csv", col_types = cols(x = "i", y = "d")) | pandas.read_csv("file.csv", dtype={'x':'int32', 'y':'float64'}) |
| Read only first n rows | readr::read_csv("file.csv", n_max = 1000) | pandas.read_csv("file.csv", nrows=1000) |
| Skip first k rows | read.csv("file.csv", skip = 3) or readr::read_csv("file.csv", skip = 3) | pandas.read_csv("file.csv", skiprows=3) |
| Read a remote URL | pass the URL as the path: read_csv("https://…") | pandas.read_csv("https://…") |
| Return type | data.frame (base) or tibble (readr) | pandas.DataFrame |
One-sentence summary
- With base R's read.csv/read.table you get functionality similar to pandas.read_csv, but the old default turned strings into factors and it is slower.
- With the readr package (read_csv/read_tsv) the behaviour is essentially "pandas for R": no factor conversion by default, faster, explicit column types, and a friendly progress bar.
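The difference between the two camps can be demonstrated without even installing readr, because base read.csv() exposes the factor behaviour through the stringsAsFactors argument. A minimal sketch using inline data via text= (no file needed; the sex column is the same toy example used later in this answer):

```r
# The same call, with factor conversion toggled explicitly.
csv <- "sex\nMale\nFemale"

df_chr <- read.csv(text = csv, stringsAsFactors = FALSE)
class(df_chr$sex)   # "character" - what readr::read_csv() also gives

df_fac <- read.csv(text = csv, stringsAsFactors = TRUE)
class(df_fac$sex)   # "factor" - the old pre-4.0 base default
```

On R >= 4.0 the first form is what you get even without spelling out stringsAsFactors = FALSE.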
In R, a "factor" is a data type dedicated to representing categorical variables:
it maps strings to integer codes, with a levels attribute behind the scenes that records every category that can occur.
"No factor conversion by default" therefore means:
when reading a text file, columns that are plainly strings stay character (string) columns instead of being silently turned into factors.
Why the silent conversion used to happen
- Early versions of R's read.csv()/read.table() converted every string column into a factor by default:

  df <- read.csv("sex.csv")  # the file has a column sex with values Male/Female
  str(df$sex)                # Factor w/ 2 levels "Female","Male"

  The design rationale was that factors are the most convenient form of categorical variables for statistical modelling, but for newcomers it is deeply counter-intuitive:
  - you think you have a plain string, yet merge(), ==, plotting, and writing back to disk can all fail because the levels don't line up;
  - the order of factor levels also decides the baseline group in regression models - a classic trap.
- Consequently, starting with R 4.0.0 (April 2020), base R changed the default:

  options(stringsAsFactors = FALSE)  # this is now the global default

  In other words, newer versions of read.csv() no longer auto-convert to factors either, but old code and old tutorials still carry the stringsAsFactors = FALSE "protective incantation".
What readr does
readr::read_csv() has never converted to factors, from day one.
Character columns arrive as character; if you want a factor, convert it yourself afterwards:

library(readr)
df <- read_csv("sex.csv")
df$sex <- factor(df$sex)  # explicit conversion

This is what "no factor conversion by default" means: no silent type conversion on your behalf; the choice stays with the user.
One-sentence summary
"No factor conversion by default" = strings read in stay strings;
nothing is pre-encoded into factors "for modelling convenience", which avoids all sorts of hidden bugs later.
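The integer-code machinery described above is easy to see in a console session; a quick sketch:

```r
x <- c("Male", "Female", "Female")
f <- factor(x)

levels(f)       # "Female" "Male": levels are sorted alphabetically by default
as.integer(f)   # 2 1 1: the underlying integer codes

# The level order matters (e.g. for the baseline group in regression),
# so set it explicitly when you care:
f2 <- factor(x, levels = c("Male", "Female"))
as.integer(f2)  # 1 2 2
```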
The difference between TSV and CSV fits in one sentence: the delimiter is different.
| Item | CSV | TSV |
|---|---|---|
| Full name | Comma-Separated Values | Tab-Separated Values |
| Delimiter | comma , | tab character (Tab, \t) |
| Official MIME type | text/csv | text/tab-separated-values |
| File extension | .csv | .tsv (also often .txt) |
| Readability | fields containing commas must be quoted, which looks messy | tab characters rarely occur in data, so quoting is rarely needed and files look cleaner |
| Typical uses | general-purpose interchange; default for Excel/BI tools | logs, bioinformatics (GFF/BLAST), machine-learning sample sheets, etc. |
| Reading (R) | read.csv() / readr::read_csv() | read.delim() / readr::read_tsv() |
| Reading (pandas) | pd.read_csv("f.csv") | pd.read_csv("f.tsv", sep='\t') or pd.read_table("f.tsv") |
In one sentence:
CSV separates columns with commas, TSV with tabs; all the other rules (line endings, quote escaping, UTF-8/GBK encodings, etc.) are exactly the same.
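Reading the same records through both formats confirms that only the delimiter differs. A self-contained sketch using inline data via text=:

```r
csv <- "name,score\nalice,90\nbob,85"
tsv <- "name\tscore\nalice\t90\nbob\t85"

df_csv <- read.csv(text = csv)     # comma-delimited
df_tsv <- read.delim(text = tsv)   # tab-delimited

identical(df_csv, df_tsv)          # TRUE: the same data frame either way
```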
read.table() is R's lowest-level text-file-to-data-frame interface. It has an intimidating number of parameters, but for day-to-day work the 10 high-frequency ones below are all you need.
(If you forget them, ?read.table is always one keystroke away.)
| Parameter | Meaning | Typical value/example | Notes |
|---|---|---|---|
| file | file name or URL | "foo.txt", url("https://…") | required; .gz compressed files are recognised automatically |
| sep | column delimiter | "\t" (TSV), "," (CSV), "\|" | default "" means any run of whitespace |
| header | treat the first line as column names | TRUE / FALSE | default FALSE |
| quote | quoting characters | "\"" (double quote), "'" | appear in pairs, wrapping fields that contain the delimiter |
| na.strings | strings to treat as missing | c("NA", "", "999") | default "NA" |
| colClasses | force column types | c("integer","numeric","character") | same length as the number of columns; "NULL" skips a column |
| skip | skip the first n lines | 3 | handy when the file starts with explanatory text or a copyright header |
| nrows | read only the first n rows | 1000 | great for test-reading large files |
| comment.char | comment-line start character | "#" or "" (off) | default "#"; the line is ignored from that character on |
| stringsAsFactors | convert character columns to factors? | FALSE (recommended) | already the global default in R ≥ 4.0 |
3 extra pitfalls
- Whitespace separation:
  the default sep = "" means "one or more spaces/tabs all count as separators",
  so hand-aligned spaces inside the data will shift your columns; in that case always write sep = "\t" or sep = "," explicitly.
- Automatic row-name detection:
  if the file has an extra unnamed column on the far left, read.table reads it in as row names,
  and the column count no longer matches. The fix: read.table("foo.txt", row.names = NULL)  # never take any column as row names
- Factor level order:
  old code wrote stringsAsFactors = FALSE precisely because the level order of auto-converted factors was hard to control;
  R 4.x no longer converts by default, but old tutorials keep the "incantation".
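The row-name auto-detection pitfall is easy to reproduce: when the header line has one field fewer than the data lines, read.table() silently promotes the first column to row names. A sketch with inline data:

```r
txt <- "a b
r1 1 2
r2 3 4"

df <- read.table(text = txt, header = TRUE)
rownames(df)   # "r1" "r2": the unnamed first column became row names
ncol(df)       # 2, not 3
```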
Minimal working template
df <- read.table("foo.txt",
sep = "\t",
header = TRUE,
na.strings = c("NA", "", "999"),
stringsAsFactors = FALSE)
Memorise these few lines and 90% of text files will load without a fight.
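One trick from the parameter table deserves a concrete example: colClasses = "NULL" drops a column at read time, which saves memory on wide files. A sketch with made-up column names, using inline data:

```r
txt <- "id,score,notes
1,90,ok
2,85,meh"

df <- read.csv(text = txt,
               colClasses = c("integer", "numeric", "NULL"))  # "NULL" = skip

names(df)      # "id" "score": the notes column was never read
class(df$id)   # "integer": the forced type, not the guessed one
```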
Help page
read.table {utils} R Documentation
Data Input
Description
Reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file.
Usage
read.table(file, header = FALSE, sep = "", quote = "\"'",
           dec = ".", numerals = c("allow.loss", "warn.loss", "no.loss"),
           row.names, col.names, as.is = !stringsAsFactors, tryLogical = TRUE,
           na.strings = "NA", colClasses = NA, nrows = -1,
           skip = 0, check.names = TRUE, fill = !blank.lines.skip,
           strip.white = FALSE, blank.lines.skip = TRUE,
           comment.char = "#",
           allowEscapes = FALSE, flush = FALSE,
           stringsAsFactors = FALSE,
           fileEncoding = "", encoding = "unknown", text, skipNul = FALSE)

read.csv(file, header = TRUE, sep = ",", quote = "\"",
         dec = ".", fill = TRUE, comment.char = "", ...)

read.csv2(file, header = TRUE, sep = ";", quote = "\"",
          dec = ",", fill = TRUE, comment.char = "", ...)

read.delim(file, header = TRUE, sep = "\t", quote = "\"",
           dec = ".", fill = TRUE, comment.char = "", ...)

read.delim2(file, header = TRUE, sep = "\t", quote = "\"",
            dec = ",", fill = TRUE, comment.char = "", ...)
Arguments
file
the name of the file which the data are to be read from. Each row of the table appears as one line of the file. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). Tilde-expansion is performed where supported. This can be a compressed file (see file).
Alternatively, file can be a readable text-mode connection (which will be opened for reading if necessary, and if so closed (and hence destroyed) at the end of the function call). (If stdin() is used, the prompts for lines may be somewhat confusing. Terminate input with a blank line or an EOF signal, Ctrl-D on Unix and Ctrl-Z on Windows. Any pushback on stdin() will be cleared before return.)
file can also be a complete URL. (For the supported URL schemes, see the ‘URLs’ section of the help for url.)
header
a logical value indicating whether the file contains the names of the variables as its first line. If missing, the value is determined from the file format: header is set to TRUE if and only if the first row contains one fewer field than the number of columns.
sep
the field separator character. Values on each line of the file are separated by this character. If sep = "" (the default for read.table) the separator is ‘white space’, that is one or more spaces, tabs, newlines or carriage returns.
quote
the set of quoting characters. To disable quoting altogether, use quote = "". See scan for the behaviour on quotes embedded in quotes. Quoting is only considered for columns read as character, which is all of them unless colClasses is specified.
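A quick illustration of the quoting behaviour described here (not part of the official help page; id/msg are made-up column names, inline data via text=):

```r
txt <- 'id,msg
1,"hello, world"
2,plain'

df <- read.csv(text = txt)
df$msg[1]   # "hello, world": the comma inside quotes does not split the field
ncol(df)    # 2: quoting kept the field together
```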
dec
the character used in the file for decimal points.
numerals
string indicating how to convert numbers whose conversion to double precision would lose accuracy, see type.convert. Can be abbreviated. (Applies also to complex-number inputs.)
row.names
a vector of row names. This can be a vector giving the actual row names, or a single number giving the column of the table which contains the row names, or character string giving the name of the table column containing the row names.
If there is a header and the first row contains one fewer field than the number of columns, the first column in the input is used for the row names. Otherwise if row.names is missing, the rows are numbered.
Using row.names = NULL forces row numbering. Missing or NULL row.names generate row names that are considered to be ‘automatic’ (and not preserved by as.matrix).
col.names
a vector of optional names for the variables. The default is to use "V" followed by the column number.
as.is
controls conversion of character variables (insofar as they are not converted to logical, numeric or complex) to factors, if not otherwise specified by colClasses. Its value is either a vector of logicals (values are recycled if necessary), or a vector of numeric or character indices which specify which columns should not be converted to factors.
Note: to suppress all conversions including those of numeric columns, set colClasses = "character".
Note that as.is is specified per column (not per variable) and so includes the column of row names (if any) and any columns to be skipped.
tryLogical
a logical determining if columns consisting entirely of "F", "T", "FALSE", and "TRUE" should be converted to logical; passed to type.convert, true by default.
na.strings
a character vector of strings which are to be interpreted as NA values. Blank fields are also considered to be missing values in logical, integer, numeric and complex fields. Note that the test happens after white space is stripped from the input (if enabled), so na.strings values may need their own white space stripped in advance.
colClasses
character. A vector of classes to be assumed for the columns. If unnamed, recycled as necessary. If named, names are matched with unspecified values being taken to be NA.
Possible values are NA (the default, when type.convert is used), "NULL" (when the column is skipped), one of the atomic vector classes (logical, integer, numeric, complex, character, raw), or "factor", "Date" or "POSIXct". Otherwise there needs to be an as method (from package methods) for conversion from "character" to the specified formal class.
Note that colClasses is specified per column (not per variable) and so includes the column of row names (if any).
nrows
integer: the maximum number of rows to read in. Negative and other invalid values are ignored.
skip
integer: the number of lines of the data file to skip before beginning to read data.
check.names
logical. If TRUE then the names of the variables in the data frame are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted (by make.names) so that they are, and also to ensure that there are no duplicates.
fill
logical. If TRUE then in case the rows have unequal length, blank fields are implicitly added. See ‘Details’.
strip.white
logical. Used only when sep has been specified, and allows the stripping of leading and trailing white space from unquoted character fields (numeric fields are always stripped). See scan for further details (including the exact meaning of ‘white space’), remembering that the columns may include the row names.
blank.lines.skip
logical: if TRUE blank lines in the input are ignored.
comment.char
character: a character vector of length one containing a single character or an empty string. Use "" to turn off the interpretation of comments altogether.
allowEscapes
logical. Should C-style escapes such as ‘\n’ be processed or read verbatim (the default)? Note that if not within quotes these could be interpreted as a delimiter (but not as a comment character). For more details see scan.
flush
logical: if TRUE, scan will flush to the end of the line after reading the last of the fields requested. This allows putting comments after the last field.
stringsAsFactors
logical: should character vectors be converted to factors? Note that this is overridden by as.is and colClasses, both of which allow finer control.
fileEncoding
character string: if non-empty declares the encoding used on a file when given as a character string (not on an existing connection) so the character data can be re-encoded. See the ‘Encoding’ section of the help for file, the ‘R Data Import/Export’ manual and ‘Note’.
encoding
encoding to be assumed for input strings. It is used to mark character strings as known to be in Latin-1 or UTF-8 (see Encoding): it is not used to re-encode the input, but allows R to handle encoded strings in their native encoding (if one of those two). See ‘Value’ and ‘Note’.
text
character string: if file is not supplied and this is, then data are read from the value of text via a text connection. Notice that a literal string can be used to include (small) data sets within R code.
skipNul
logical: should NULs be skipped?
…
Further arguments to be passed to read.table.
Details
This function is the principal means of reading tabular data into R.
Unless colClasses is specified, all columns are read as character columns and then converted using type.convert to logical, integer, numeric, complex or (depending on as.is) factor as appropriate. Quotes are (by default) interpreted in all fields, so a column of values like “42” will result in an integer column.
A field or line is ‘blank’ if it contains nothing (except whitespace if no separator is specified) before a comment character or the end of the field or line.
If row.names is not specified and the header line has one less entry than the number of columns, the first column is taken to be the row names. This allows data frames to be read in from the format in which they are printed. If row.names is specified and does not refer to the first column, that column is discarded from such files.
The number of data columns is determined by looking at the first five lines of input (or the whole input if it has less than five lines), or from the length of col.names if it is specified and is longer. This could conceivably be wrong if fill or blank.lines.skip are true, so specify col.names if necessary (as in the ‘Examples’).
read.csv and read.csv2 are identical to read.table except for the defaults. They are intended for reading ‘comma separated value’ files (‘.csv’) or (read.csv2) the variant used in countries that use a comma as decimal point and a semicolon as field separator. Similarly, read.delim and read.delim2 are for reading delimited files, defaulting to the TAB character for the delimiter. Notice that header = TRUE and fill = TRUE in these variants, and that the comment character is disabled.
The rest of the line after a comment character is skipped; quotes are not processed in comments. Complete comment lines are allowed provided blank.lines.skip = TRUE; however, comment lines prior to the header must have the comment character in the first non-blank column.
Quoted fields with embedded newlines are supported except after a comment character. Embedded NULs are unsupported: skipping them (with skipNul = TRUE) may work.
Value
A data frame (data.frame) containing a representation of the data in the file.
Empty input is an error unless col.names is specified, when a 0-row data frame is returned: similarly giving just a header line if header = TRUE results in a 0-row data frame. Note that in either case the columns will be logical unless colClasses was supplied.
Character strings in the result (including factor levels) will have a declared encoding if encoding is "latin1" or "UTF-8".
CSV files
See the help on write.csv for the various conventions for .csv files. The commonest form of CSV file with row names needs to be read with read.csv(…, row.names = 1) to use the names in the first column of the file as row names.
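The row.names = 1 convention just mentioned, sketched on an inline CSV of the kind write.csv() produces (not part of the official help page):

```r
txt <- '"","x","y"
"r1",1,2
"r2",3,4'

df <- read.csv(text = txt, row.names = 1)  # first column -> row names
rownames(df)   # "r1" "r2"
colnames(df)   # "x" "y": the row-name column is consumed, not kept
```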
Memory usage
These functions can use a surprising amount of memory when reading large files. There is extensive discussion in the ‘R Data Import/Export’ manual, supplementing the notes here.
Less memory will be used if colClasses is specified as one of the six atomic vector classes. This can be particularly so when reading a column that takes many distinct numeric values, as storing each distinct value as a character string can take up to 14 times as much memory as storing it as an integer.
Using nrows, even as a mild over-estimate, will help memory usage.
Using comment.char = "" will be appreciably faster than the read.table default.
read.table is not the right tool for reading large matrices, especially those with many columns: it is designed to read data frames which may have columns of very different classes. Use scan instead for matrices.
Note
The columns referred to in as.is and colClasses include the column of row names (if any).
There are two approaches for reading input that is not in the local encoding. If the input is known to be UTF-8 or Latin1, use the encoding argument to declare that. If the input is in some other encoding, then it may be translated on input. The fileEncoding argument achieves this by setting up a connection to do the re-encoding into the current locale. Note that on Windows or other systems not running in a UTF-8 locale, this may not be possible.
References
Chambers, J. M. (1992) Data for models. Chapter 3 of Statistical Models in S eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
See Also
The ‘R Data Import/Export’ manual.
scan, type.convert, read.fwf for reading fixed width formatted input; write.table; data.frame.
count.fields can be useful to determine problems with reading files which result in reports of incorrect record lengths (see the ‘Examples’ below).
https://www.rfc-editor.org/rfc/rfc4180 for the IANA definition of CSV files (which requires comma as separator and CRLF line endings).
Examples
## using count.fields to handle unknown maximum number of fields
## when fill = TRUE
test1 <- c(1:5, "6,7", "8,9,10")
tf <- tempfile()
writeLines(test1, tf)
read.csv(tf, fill = TRUE) # 1 column
ncol <- max(count.fields(tf, sep = ","))
read.csv(tf, fill = TRUE, header = FALSE,
         col.names = paste0("V", seq_len(ncol)))
unlink(tf)

## "Inline" data set, using text=
## Notice that leading and trailing empty lines are auto-trimmed
read.table(header = TRUE, text = "
a b
1 2
3 4
")