A Survey on Time-Series Pre-Trained Models


This post reviews recent advances in time-series pre-trained models (TS-PTMs), which aim to reduce deep learning's dependence on large amounts of labeled data. TS-PTMs fall into three main categories — supervised, unsupervised, and self-supervised — and cover time-series classification, forecasting, and anomaly detection tasks. Experimental analysis shows that Transformer-based PTMs have advantages in time-series forecasting and anomaly detection, but still face challenges in time-series classification. Future research directions include large-scale time-series datasets, applying Transformers to time series, and adversarial attacks.

This post, part of the LLM series, is a translation of "A Survey on Time-Series Pre-Trained Models".

Abstract

Time-series mining (TSM) is an important research area that has shown great potential in practical applications. Deep learning models that rely on large amounts of labeled data have been successfully used for TSM. However, constructing large, well-labeled datasets is difficult due to the cost of data annotation. In recent years, pre-trained models have gradually attracted attention in the time-series domain because of their outstanding performance in computer vision and natural language processing. In this survey, we provide a comprehensive review of time-series pre-trained models (TS-PTMs), aiming to guide the understanding, application, and study of TS-PTMs. Specifically, we first briefly introduce the typical deep learning models used in TSM. Then we give an overview of TS-PTMs organized by pre-training technique; the main categories we explore are supervised, unsupervised, and self-supervised TS-PTMs. In addition, extensive experiments are conducted to analyze the advantages and disadvantages of transfer-learning strategies, Transformer-based models, and representative TS-PTMs. Finally, we point out some potential directions for future work on TS-PTMs. The source code is available at https://github.com/qianlima-lab/time-series-ptms.

1 Introduction

As an important research direction in data mining, time-series mining (TSM) has been widely applied in the real world, for example in finance, speech analysis, action recognition, and traffic-flow forecasting. The fundamental problem in TSM is how to represent time-series data; various mining tasks can then be performed on top of a given representation. Traditional time-series representations (e.g., shapelets) are time-consuming to construct because they rely heavily on domain or expert knowledge. Therefore, automatically learning suitable time-series representations is of great interest.
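To make the idea of automatically learned representations and self-supervised pre-training concrete, here is a minimal, self-contained sketch — not taken from the survey — in which a small 1D-CNN encoder is pre-trained on unlabeled series with a SimCLR-style contrastive objective (two augmented views of the same series should map to nearby embeddings), then transferred via a linear probe. All architecture, augmentation, and hyperparameter choices below are illustrative assumptions.

```python
# Illustrative sketch only: self-supervised pre-training of a time-series
# encoder, followed by a linear-probe transfer step. Not from the survey.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSEncoder(nn.Module):
    """1D-CNN encoder mapping a univariate series to a fixed-size embedding."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # global average pooling over time
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, x):                       # x: (batch, 1, length)
        h = self.conv(x).squeeze(-1)            # (batch, 64)
        return F.normalize(self.proj(h), dim=-1)

def augment(x):
    """Cheap augmentation: random amplitude scaling plus jitter."""
    scale = 1.0 + 0.1 * torch.randn(x.size(0), 1, 1)
    return x * scale + 0.05 * torch.randn_like(x)

def nt_xent(z1, z2, tau=0.2):
    """NT-Xent contrastive loss between two batches of normalized embeddings."""
    z = torch.cat([z1, z2], dim=0)              # (2B, d)
    sim = z @ z.t() / tau                       # pairwise cosine similarities
    sim.fill_diagonal_(-1e9)                    # mask self-similarity
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)        # positive pair = other view

# Self-supervised pre-training on unlabeled series (toy data).
encoder = TSEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(16, 1, 128)                     # batch of 16 unlabeled series
for step in range(10):
    z1, z2 = encoder(augment(x)), encoder(augment(x))
    loss = nt_xent(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()

# Transfer step (linear probe): freeze the encoder and fit a classifier
# on a small labeled set, mimicking the pre-train -> fine-tune workflow.
labels = torch.randint(0, 2, (16,))
clf = nn.Linear(64, 2)
clf_opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
with torch.no_grad():
    feats = encoder(x)                          # frozen representations
for step in range(20):
    clf_loss = F.cross_entropy(clf(feats), labels)
    clf_opt.zero_grad(); clf_loss.backward(); clf_opt.step()
print(f"contrastive loss: {loss.item():.3f}, probe loss: {clf_loss.item():.3f}")
```

The point of the sketch is the workflow, not the numbers: the encoder is trained without labels, and the learned representation is then reused for a downstream task with only a small labeled set — the setting the survey's TS-PTM categories address.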


### Prompt-Based Combinatorial Optimization Using Pre-Trained Language Models

Prompt-based combinatorial optimization leverages pre-trained large language models (LLMs) to solve complex optimization problems by encoding the problem as a textual prompt and using the model's generative capabilities to produce solutions. This approach has gained traction because LLMs are versatile at understanding structured inputs and generating structured outputs.

#### Conceptual Framework

Pre-trained language models are inherently designed to predict missing parts of text sequences, which aligns with the goal of combinatorial optimization: finding an optimal solution from a set of possible configurations. By framing the optimization problem as a natural language task, researchers have demonstrated that LLMs can approximate solutions effectively[^2]. The key lies in designing prompts that encode the problem's constraints and objectives in a format the model can act on.

#### Implementation Approach

To implement prompt-based combinatorial optimization, one must carefully design the input prompt to guide the model toward valid solutions. The process is outlined below:

1. **Problem Encoding**: Convert the combinatorial optimization problem into a textual representation. For example, a traveling salesman problem (TSP) might be framed as "Find the shortest route visiting cities A, B, C, and D exactly once."
2. **Prompt Design**: Construct a prompt that includes both the problem description and any necessary constraints. This step is crucial, as poorly designed prompts may lead to suboptimal or invalid solutions.
3. **Model Inference**: Use the pre-trained language model to generate candidate solutions based on the prompt. Fine-tuning the model for specific tasks can further enhance performance[^3].
4. **Post-Processing**: Validate and refine the generated solutions to ensure they meet all problem constraints. This step often involves integrating domain-specific knowledge or heuristics; a validation sketch follows the example code below.

#### Example Code

Below is an example implementation that sends a simple combinatorial optimization problem to a hosted chat model via OpenAI's chat-completions API (the model name is a placeholder; any available chat model works):

```python
from openai import OpenAI

def solve_combinatorial_optimization(prompt: str) -> str:
    """Send the optimization problem to a chat model and return its answer."""
    client = OpenAI(api_key="your-api-key")  # or set the OPENAI_API_KEY env var
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=200,
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

# Define the combinatorial optimization problem as a prompt
problem_prompt = (
    "You are tasked with solving the following optimization problem:\n"
    "Minimize the total cost of assigning workers to tasks.\n"
    "Each worker can only be assigned to one task, and each task must be "
    "assigned to exactly one worker.\n"
    "The cost matrix is as follows:\n"
    "[[90, 75, 75, 80], [35, 85, 55, 65], [125, 95, 90, 105], [45, 110, 95, 115]]\n"
    "Provide the optimal assignment of workers to tasks and the total minimum cost."
)

# Solve the problem
solution = solve_combinatorial_optimization(problem_prompt)
print("Solution:", solution)
```

This code demonstrates how a combinatorial optimization problem can be posed by encoding it as a natural language prompt and letting a pre-trained language model generate candidate solutions.
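Because the model returns free-form text, step 4 (post-processing) must parse and verify the answer. The sketch below is a minimal illustration, not part of the original write-up: it assumes the model emits `worker->task` pairs (a hypothetical output format), and, since this small assignment problem is exactly solvable with the Hungarian algorithm via `scipy.optimize.linear_sum_assignment`, it also reports the LLM's optimality gap.

```python
# Illustrative post-processing sketch: validate a candidate assignment
# produced by the LLM against the cost matrix from the prompt, and compare
# it with the exact optimum from the Hungarian algorithm. The expected
# answer format ("worker->task" pairs) is an assumption of this sketch.
import re
from scipy.optimize import linear_sum_assignment

COST = [[90, 75, 75, 80],
        [35, 85, 55, 65],
        [125, 95, 90, 105],
        [45, 110, 95, 115]]

def parse_assignment(text):
    """Extract 'worker -> task' pairs such as '0->1' from the model output."""
    pairs = [(int(w), int(t)) for w, t in re.findall(r"(\d+)\s*->\s*(\d+)", text)]
    return dict(pairs)

def validate(assignment, cost):
    """Check feasibility (a one-to-one assignment) and return the total cost."""
    n = len(cost)
    if sorted(assignment.keys()) != list(range(n)) or \
       sorted(assignment.values()) != list(range(n)):
        raise ValueError("infeasible: not a one-to-one assignment")
    return sum(cost[w][t] for w, t in assignment.items())

# Exact optimum for comparison (Hungarian algorithm).
rows, cols = linear_sum_assignment(COST)
optimal_cost = sum(COST[r][c] for r, c in zip(rows, cols))

llm_output = "Assignment: 0->1, 1->0, 2->2, 3->3"   # stand-in for model text
llm_cost = validate(parse_assignment(llm_output), COST)
print(f"LLM cost: {llm_cost}, optimal: {optimal_cost}, gap: {llm_cost - optimal_cost}")
```

When an exact solver exists, as here, the LLM route mainly serves as a probe of the model's reasoning; for instances too large to solve exactly, the same validator can still check feasibility even though it can no longer certify optimality.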
#### Research Insights

Research into prompt-based combinatorial optimization highlights the importance of prompt engineering and model fine-tuning. Studies have shown that larger models with extensive pre-training perform better in terms of accuracy and efficiency. Additionally, incorporating domain-specific knowledge into the prompt design can significantly improve results.

#### Limitations

While promising, this approach has limitations:

- **Scalability**: Large-scale optimization problems may exceed the context window of current LLMs.
- **Accuracy**: Solutions generated by LLMs may not always be optimal or feasible without post-processing.
- **Resource Intensity**: Running inference on large models can be computationally expensive.

Despite these challenges, prompt-based combinatorial optimization represents a novel and powerful paradigm for solving complex problems with pre-trained language models.