MA Chapter 7: Summarizing and Analyzing Data (SRCharlotte)


Adapted from the source code of the original work at https://pan.quark.cn/s/459657bcfd45.

### Classic-ML-Methods-Algo: Introduction

This project was created to organize and summarize classical machine learning (ML) methods and algorithms, and to exchange ideas and learn together with fellow practitioners. Today's deep learning essentially grew out of traditional neural network models: it is largely a continuation of classical machine learning, and in many cases it still needs to be combined with classical methods. The basic workflow of any machine learning method is universal, the evaluation methods are largely universal, and so is much of the underlying mathematics. While organizing classical machine learning methods and algorithms, this text therefore also covers these workflow and mathematical topics for reference.

### Machine Learning

Machine learning is a branch of artificial intelligence (AI) and the most important means of realizing it. Unlike traditional rule-based algorithms, machine learning acquires knowledge from data in order to carry out a specified task [Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning]. This knowledge can be divided into four kinds: summarization, prediction, estimation, and hypothesis testing. Machine learning is mainly concerned with prediction [Varian, Big Data: New Tricks for Econometrics]; the prediction target can be a continuous output variable, a class label, a cluster, or interesting associations between items.

### Categories of Machine Learning

Based on the data setting (whether labels are given; labels may be continuous or discrete) and the task objective, machine learning methods can be divided into four categories: unsupervised — the training data come with no given...
Data preprocessing involves several key steps and techniques. One area relevant to data preprocessing is the Extraction, Transformation, and Loading (ETL) process [^3].

### Data Extraction

Data extraction is the initial step, in which data is retrieved from multiple, heterogeneous, and external sources. This allows data to be collected from various places for use in subsequent analysis [^3].

### Data Cleaning

Data cleaning is an important technique. It focuses on detecting errors in the data and rectifying them where possible, ensuring that the data used for analysis is of high quality and free from obvious inaccuracies [^3].

### Data Transformation

Data transformation converts data from the legacy or host format to the warehouse format. This step is crucial for making the data compatible with the data warehouse and subsequent analysis tools [^3].

### Loading and Refresh

After transformation, the data is loaded. Loading involves sorting, summarizing, consolidating, computing views, checking integrity, and building indices and partitions. The refresh process then propagates updates from the data sources, ensuring that the data in the warehouse stays up to date [^3]. (A toy sketch of transformation and loading-style consolidation appears at the end of this section.)

### Example in a Specific Domain

In the context of researching suicidality on Twitter, data preprocessing also plays a role. When collecting tweets using the public API, as O’Dea et al. did, the data needs to be preprocessed before applying machine-learning models such as logistic regression and SVM on TF-IDF features. This may involve cleaning the text data, removing special characters, and normalizing the text [^2]. (A sketch of this kind of text cleaning also appears at the end of this section.)

### Tools and Techniques in Genome 3D Structure Research

In the study of the 3D structure of the genome, data preprocessing is also essential. For Hi-C data, specific preprocessing steps include dealing with chimeric reads, mapping, representing data as fixed- or enzyme-sized bins, normalization, and detecting A/B compartments and TAD boundaries. Tools such as HiC-Pro, HiCUP, HOMER, and Juicer are used for Hi-C analysis, which includes these preprocessing steps [^4].

```python
# A simple example of data cleaning in Python
import pandas as pd

# Assume we have a DataFrame with some missing values
data = {'col1': [1, 2, None, 4], 'col2': ['a', 'b', 'c', None]}
df = pd.DataFrame(data)

# Drop rows with missing values
cleaned_df = df.dropna()
print(cleaned_df)
```
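As a toy illustration of the transformation and loading steps described above, the sketch below converts a made-up "legacy" table into a warehouse-style layout and then consolidates it per customer. The column names, units, and the use of pandas are illustrative assumptions, not part of the ETL tooling cited above.

```python
# A toy sketch of the transformation and loading ideas above, assuming pandas;
# the "legacy" layout, column names, and units are invented for illustration.
import pandas as pd

# Pretend legacy/host-format records (one row per transaction)
legacy = pd.DataFrame({
    "cust": ["A", "A", "B"],
    "amt_cents": [1250, 300, 999],           # amounts stored in cents
    "ts": ["2024-01-03", "2024-02-10", "2024-01-15"],
})

# Transformation: convert units and types to the warehouse format
warehouse = pd.DataFrame({
    "customer_id": legacy["cust"],
    "amount": legacy["amt_cents"] / 100.0,   # cents -> currency units
    "date": pd.to_datetime(legacy["ts"]),
})

# Loading-style consolidation: summarize per customer and sort
summary = (warehouse.groupby("customer_id")["amount"]
           .sum()
           .sort_values(ascending=False))
print(summary)
```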
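For the Twitter example, the following is a minimal sketch of the kind of text cleaning and TF-IDF modelling described above, assuming scikit-learn is available. The `clean_tweet` helper, the example tweets, and the labels are hypothetical and invented for illustration; this is not O’Dea et al.'s actual pipeline.

```python
# A minimal, hypothetical sketch: clean tweet text, build TF-IDF features,
# and fit a logistic regression classifier on toy data (not the cited study's pipeline).
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def clean_tweet(text):
    """Lowercase, strip URLs and @mentions, remove special characters, normalize spaces."""
    text = text.lower()
    text = re.sub(r"http\S+|@\w+", " ", text)   # drop URLs and @mentions
    text = re.sub(r"[^a-z\s]", " ", text)       # keep letters and spaces only
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

# Hypothetical labelled tweets (1 = concerning, 0 = not concerning)
tweets = ["Feeling great today! http://example.com", "@friend I can't cope anymore..."]
labels = [0, 1]

cleaned = [clean_tweet(t) for t in tweets]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cleaned)           # TF-IDF features

clf = LogisticRegression()
clf.fit(X, labels)                              # fit on the toy data
print(clf.predict(X))
```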