Model Representation

This section introduces the basic concepts of supervised learning, including input variables, output variables, and training examples, and distinguishes regression problems from classification problems. A hypothesis function is defined to predict the output variable.

To establish notation for future use, we'll use $x^{(i)}$ to denote the "input" variables (living area in this example), also called input features, and $y^{(i)}$ to denote the "output" or target variable that we are trying to predict (price). A pair $(x^{(i)}, y^{(i)})$ is called a training example, and the dataset that we'll be using to learn (a list of $m$ training examples $(x^{(i)}, y^{(i)})$, $i = 1, \ldots, m$) is called a training set. Note that the superscript "(i)" in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use $X$ to denote the space of input values, and $Y$ to denote the space of output values. In this example, $X = Y = \mathbb{R}$.
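
As a minimal sketch of this notation in Python (the living areas and prices below are illustrative values, not data from the text), a small training set can be written down directly:

```python
# A toy training set for the housing example.
# Each pair (x, y) is one training example: x is the living area
# (the input feature), y is the price (the target variable).
# NOTE: the numbers are made-up illustrative values.
training_set = [
    (2104, 400),  # (x^(1), y^(1))
    (1600, 330),  # (x^(2), y^(2))
    (2400, 369),  # (x^(3), y^(3))
    (1416, 232),  # (x^(4), y^(4))
]

m = len(training_set)        # m = number of training examples
x_1, y_1 = training_set[0]   # the superscript (i) is an index, not a power
```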

To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h : X → Y so that h(x) is a "good" predictor for the corresponding value of y. For historical reasons, this function h is called a hypothesis. Seen pictorially, the process is therefore like this:

[Figure: the supervised learning process]
When the target variable that we’re trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem.
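
To make the hypothesis concrete, here is a minimal sketch assuming the common linear form h(x) = θ₀ + θ₁x for the housing regression problem; the functional form and the parameter values are assumptions for illustration, not something this section prescribes:

```python
# A hypothesis h : X -> Y for the regression setting.
# Assumed linear form with hypothetical parameters; a real model
# would learn theta_0 and theta_1 from the training set.
theta_0 = 50.0   # intercept (hypothetical value)
theta_1 = 0.15   # slope: price increase per unit of living area (hypothetical)

def h(x: float) -> float:
    """Predict the target y (price) from the input x (living area)."""
    return theta_0 + theta_1 * x

print(h(2104))  # predicted price for a living area of 2104
```

In a classification problem, h would instead map x to one of a small number of discrete labels, such as "house" or "apartment".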

```python
from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired
from bertopic.vectorizers import ClassTfidfTransformer
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from umap import UMAP
from hdbscan import HDBSCAN
import pandas as pd
import plotly.io as pio

data = pd.read_excel("数据.xlsx")

# Step 1 - Embed documents
embedding_model = SentenceTransformer('all-MiniLM-L12-v2')

# Step 2 - Reduce dimensionality (random_state fixed for reproducibility)
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0,
                  metric='cosine', random_state=28)

# Step 3 - Cluster the reduced embeddings
hdbscan_model = HDBSCAN(min_cluster_size=15, metric='euclidean',
                        prediction_data=True)

# Step 4 - Tokenize topics (generate candidate topic words)
vectorizer_model = CountVectorizer(stop_words=None)
# Alternatively, filter out specific stop words:
# vectorizer_model = CountVectorizer(stop_words=["人工智能", "ai", "AI"])

# Step 5 - Create topic representations
ctfidf_model = ClassTfidfTransformer()

# Step 6 - (Optional) Fine-tune topic representations with a
# `bertopic.representation` model
representation_model = KeyBERTInspired()

# Assemble and train the BERTopic topic model
topic_model = BERTopic(
    embedding_model=embedding_model,            # Step 1 - Extract embeddings
    umap_model=umap_model,                      # Step 2 - Reduce dimensionality
    hdbscan_model=hdbscan_model,                # Step 3 - Cluster reduced embeddings
    vectorizer_model=vectorizer_model,          # Step 4 - Tokenize topics
    ctfidf_model=ctfidf_model,                  # Step 5 - Extract topic words
    representation_model=representation_model,  # Step 6 - (Optional) Fine-tune topics
)

# fit_transform embeds the input texts, assigns each document a topic,
# and computes the topic-document probabilities
filtered_text = data["内容"].astype(str).tolist()
topics, probabilities = topic_model.fit_transform(filtered_text)

document_info = topic_model.get_document_info(filtered_text)
print(document_info)

# Number of documents per topic
topic_freq = topic_model.get_topic_freq()
print(topic_freq)

# Word-probability distribution of a single topic
topic = topic_model.get_topic(0)
print(topic)

# Topic-word probability bar chart
pic_bar = topic_model.visualize_barchart()
pio.show(pic_bar)

# Document clusters in the embedding space
embeddings = embedding_model.encode(filtered_text, show_progress_bar=False)
pic_doc = topic_model.visualize_documents(filtered_text, embeddings=embeddings)
pio.show(pic_doc)

# Hierarchical clustering of topics
pic_hie = topic_model.visualize_hierarchy()
pio.show(pic_hie)

# Topic-similarity heatmap
pic_heat = topic_model.visualize_heatmap()
pio.show(pic_heat)

# Term-rank decline per topic
pic_term_rank = topic_model.visualize_term_rank()
pio.show(pic_term_rank)

# Intertopic distance map
pic_topics = topic_model.visualize_topics()
pio.show(pic_topics)

# Dynamic topic model (DTM): topic frequencies over time
summary = data['内容'].astype(str).tolist()
timepoint = pd.Series(data['时间'].tolist())
print(timepoint[:10])
topics_over_time = topic_model.topics_over_time(
    summary, timepoint, datetime_format='mixed',
    nr_bins=20, evolution_tuning=True)
DTM = topic_model.visualize_topics_over_time(topics_over_time, title='DTM')
pio.show(DTM)
```