IO Model

This article discusses IO models.

1. Basic Concepts

First, some basic concepts.

1.1. Blocking call vs. Non-blocking call

What is a blocking call, and what is a non-blocking call?

A blocking call causes the requesting process to be blocked until the call returns.

A non-blocking call does not cause the requesting process to be blocked; it returns immediately even if the operation cannot be completed yet.
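A minimal sketch, assuming standard input as the watched descriptor: `fcntl` switches the descriptor to non-blocking mode, and `read` then returns at once with `EAGAIN`/`EWOULDBLOCK` instead of suspending the caller.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[128];

    /* Make stdin non-blocking: the next read() returns immediately
     * instead of suspending the process until input arrives. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no data yet, the call returned without blocking\n");
    else if (n >= 0)
        printf("read %zd bytes\n", n);
    else
        perror("read");
    return 0;
}
```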

1.2. Synchronous IO vs. Asynchronous IO

1.2.1 First, what do Synchronous and Asynchronous mean?

Synchronous means that two parties change state at the same time and in the same steps.
Asynchronous means that two parties do not change state at the same time or in the same steps.

Synchronous and asynchronous are general concepts; TCP, for example, already uses the idea of synchronization.
TCP three-way handshake:
The three-way handshake makes the client and server change state in step with each other, which is why the handshake segment is called SYN (synchronize).

1.2.2 Next, what are Synchronous IO and Asynchronous IO?

Synchronous IO:
The process and the kernel complete the work at the same time.
Asynchronous IO:
The process and the kernel do not complete the work at the same time.

For example:
The process reads file A, and the kernel reads the file at the same time.
The process reads files in the order A, B, C, and the kernel also reads them in the order A, B, C, that is, in the same steps.

In practice, an IO operation consists of a request and a result. In asynchronous IO the request and the result are separated: there are two functions, one to issue the request and one to collect the result.
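POSIX AIO illustrates this split: `aio_read` only submits the request, while `aio_error`/`aio_return` collect the result later. A minimal sketch, assuming an example file path and linking with `-lrt` on older glibc:

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* example file, assumption */
    if (fd < 0) { perror("open"); return 1; }

    char buf[256];
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    /* Function 1: issue the request; it returns before the IO is done. */
    if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

    /* ... the process is free to do other work here ... */

    /* Function 2: collect the result once the request has completed. */
    while (aio_error(&cb) == EINPROGRESS)
        ;                                        /* busy-wait only for brevity */
    printf("asynchronously read %zd bytes\n", aio_return(&cb));

    close(fd);
    return 0;
}
```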

1.3. Multiplexing vs. multiple process/thread

Multiplexing handles multiple requests in one process/thread.
Multiple processes/threads handle one request per process/thread.

Readiness events can be reported in two ways:
Level Triggered: the event is reported as long as the descriptor remains ready.
Edge Triggered: the event is reported only when the readiness state changes.

Multiplexing is built into select, poll, epoll, kqueue, and IOCP.
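A minimal sketch of multiplexing with `select`: one thread waits on several descriptors at once and services whichever becomes readable (the descriptor array is assumed to be opened by the caller).

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* One thread watches several descriptors and services whichever is ready. */
void serve_with_select(const int *fds, int count)
{
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        int maxfd = -1;
        for (int i = 0; i < count; i++) {
            FD_SET(fds[i], &readable);
            if (fds[i] > maxfd) maxfd = fds[i];
        }

        /* Block until at least one descriptor is ready (IO Ready Notification). */
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0) {
            perror("select");
            return;
        }

        for (int i = 0; i < count; i++) {
            if (FD_ISSET(fds[i], &readable)) {
                char buf[512];
                ssize_t n = read(fds[i], buf, sizeof(buf));
                printf("fd %d: %zd bytes\n", fds[i], n);
            }
        }
    }
}
```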

1.4. Iterative vs. concurrent

Iterative: requests are handled one after another.
Concurrent: requests are handled by multiple processes/threads at the same time.

1.5. Polling notification vs. callback notification

select, poll, epoll and IOCP deliver notifications through polling: the application asks for pending events.
Signals deliver notifications through a callback: the kernel interrupts the application when an event occurs.

1.6. IO Ready Notification vs. IO Complete Notification

IO Ready Notification: the IO is ready, e.g. a descriptor has become readable or writable.
Completed IO Request Notification: a previously submitted IO request has completed.

2. Operating System Support

Some of the basic concepts above correspond to mechanisms provided by the operating system, for example the two notification mechanisms: IO Ready Notification and Completed IO Request Notification. The following sections outline the mechanisms each operating system offers for building IO models.

2.1 Linux IO Ready Notification

2.1.1 IO file descriptor Notification

This is not an independent mechanism; it is part of the select, poll and epoll mechanisms.

2.1.2 Signal Notification (SIGIO)
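A rough sketch of SIGIO-based readiness notification, assuming standard input as the watched descriptor: `F_SETOWN` names the process that should receive the signal and `O_ASYNC` turns the notification on.

```c
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t io_ready = 0;

static void on_sigio(int signo)
{
    (void)signo;
    io_ready = 1;               /* the kernel says: some IO is now ready */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigio;
    sigaction(SIGIO, &sa, NULL);

    /* Ask the kernel to send SIGIO to this process when stdin becomes ready. */
    fcntl(STDIN_FILENO, F_SETOWN, getpid());
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_ASYNC);

    for (;;) {
        pause();                /* sleep until any signal arrives */
        if (io_ready) {
            char buf[128];
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            printf("SIGIO: read %zd bytes\n", n);
            io_ready = 0;
        }
    }
}
```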

2.2 Linux Completed IO Request Notification

2.2.1 Linux Native AIO

A Completed IO Request Notification mechanism is built into Linux native AIO.
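A minimal sketch using the libaio wrappers (`io_queue_init`, `io_prep_pread`, `io_submit`, `io_getevents`); the file path is only an example, the program links with `-laio`, and the kernel performs truly asynchronous reads of regular files mainly when they are opened with `O_DIRECT`:

```c
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);    /* example file, assumption */
    if (fd < 0) { perror("open"); return 1; }

    io_context_t ctx;
    memset(&ctx, 0, sizeof(ctx));
    if (io_queue_init(8, &ctx) < 0) { fprintf(stderr, "io_queue_init failed\n"); return 1; }

    char buf[4096];
    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, sizeof(buf), 0); /* describe the request */

    io_submit(ctx, 1, cbs);                      /* hand the request to the kernel */

    /* Completed IO Request Notification: wait for the completion event. */
    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);
    printf("native AIO read completed: %ld bytes\n", (long)ev.res);

    io_destroy(ctx);
    close(fd);
    return 0;
}
```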

2.3 Windows Completed IO Request Notification

2.3.1. IO Completion Port

A Completed IO Request Notification mechanism is built into the IO Completion Port.
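A minimal Windows C sketch of the mechanism: a file opened with `FILE_FLAG_OVERLAPPED` is associated with a completion port, a read is submitted, and the completion packet is later dequeued with `GetQueuedCompletionStatus` (the path and completion key are only examples; error handling is omitted).

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open a file for overlapped (asynchronous) IO. */
    HANDLE file = CreateFileA("C:\\Windows\\win.ini", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);

    /* Create a completion port and associate the file handle with it. */
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    CreateIoCompletionPort(file, iocp, /* CompletionKey = */ 1, 0);

    /* Submit the request; ReadFile returns while the kernel keeps working. */
    static char buf[4096];
    OVERLAPPED ov = {0};
    ReadFile(file, buf, sizeof(buf), NULL, &ov);

    /* Completed IO Request Notification: dequeue the completion packet. */
    DWORD bytes; ULONG_PTR key; LPOVERLAPPED pov;
    if (GetQueuedCompletionStatus(iocp, &bytes, &key, &pov, INFINITE))
        printf("IOCP: request with key %lu completed, %lu bytes\n",
               (unsigned long)key, (unsigned long)bytes);

    CloseHandle(file);
    CloseHandle(iocp);
    return 0;
}
```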

2.3.2. Windows device kernel object

2.3.3. Windows event

2.3.4. Windows alertable IO

3. Target of IO Model

The purpose of studying IO models is to maximize CPU utilization. To maximize CPU utilization we must remove useless work. What counts as useless work? Blocking and thread context switches are useless work.

Maximize CPU utilization -> no useless work -> no blocking, no context switches

Keep every thread working and busy.

4. IO Model

Overall, I divide IO models into three categories:

  • 1. All-in-one call I/O Model category
  • 2. IO Ready Notification I/O Model category
  • 3. Completed IO Request Notification I/O Model category


Distinguishing blocking and non-blocking within asynchronous IO is not very meaningful.

3.1. Synchronous blocking Model category
3.2. Synchronous non-blocking Model category
3.3. IO Ready Model category
3.4. Asynchronous Model (IO complete Model) category

4. Common Synchronous blocking Model Category (3 models)

Every operating system supports this category of IO models, and in the same form everywhere, because these models are based on the blocking socket API; that is why the category is called common.

There are several implementations in this category; the main difference is whether requests are handled iteratively or concurrently. A minimal sketch of the iterative variant follows the subsections below.

4.1. Synchronous Blocking Iterative Model

4.2. Synchronous Blocking Concurrent Model

4.2.1. Synchronous Blocking Request Separation Model

4.2.2. Synchronous Blocking Read/Write Separation Model
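A minimal sketch of the Synchronous Blocking Iterative Model (4.1): a TCP echo server that serves one client at a time over blocking sockets (the port number is only an example; error handling is omitted).

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                 /* example port, assumption */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 16);

    for (;;) {
        /* Iterative: serve exactly one client at a time.  accept(), read()
         * and write() all block, so other clients must wait their turn.   */
        int cli = accept(srv, NULL, NULL);
        char buf[512];
        ssize_t n;
        while ((n = read(cli, buf, sizeof(buf))) > 0)
            write(cli, buf, (size_t)n);          /* echo back */
        close(cli);
    }
}
```

The concurrent variants (4.2.x) keep the same blocking calls but hand each accepted connection, or each read/write stage, to its own process or thread.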

5. Common Synchronous Non-blocking Model Category (1 model)

Every operating system also supports this category, and in the same form everywhere, because the model is based on the non-blocking socket API; that is why it is called common.
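A minimal sketch of the model, assuming the caller supplies already-connected descriptors that have been switched to non-blocking mode: the thread polls each descriptor itself and simply retries on `EAGAIN`.

```c
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* The thread repeatedly tries each descriptor itself instead of letting
 * the kernel tell it which one is ready; busy polling wastes CPU.       */
void serve_nonblocking(const int *fds, int count)
{
    char buf[512];
    for (;;) {
        for (int i = 0; i < count; i++) {
            ssize_t n = read(fds[i], buf, sizeof(buf));
            if (n > 0)
                write(fds[i], buf, (size_t)n);    /* echo back */
            else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
                perror("read");                   /* real error */
            /* EAGAIN means "not ready yet": skip and try again next round. */
        }
    }
}
```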

8. Linux IO Ready Notification I/O Model Category (4 Models)

8.1. SIGIO
8.1.1. SIGIO – Blocking Model

8.1.2. SIGIO – Non-Blocking Model

8.2. Select model
8.2.1. Select-Blocking Model

8.2.2. Select-Non-blocking Model
8.3. Poll Model
8.4. Epoll Model
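A minimal sketch of the epoll model (8.4), assuming the caller supplies the descriptors to watch: the descriptors are registered once and the thread then waits for readiness events; adding `EPOLLET` to `events` would switch from the default level-triggered mode to edge-triggered mode.

```c
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

void serve_with_epoll(const int *fds, int count)
{
    int ep = epoll_create1(0);
    for (int i = 0; i < count; i++) {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[i] };
        epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev);
    }

    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(ep, events, 64, -1);   /* IO Ready Notification */
        for (int i = 0; i < n; i++) {
            char buf[512];
            ssize_t r = read(events[i].data.fd, buf, sizeof(buf));
            printf("fd %d readable: %zd bytes\n", events[i].data.fd, r);
        }
    }
}
```

The select model (8.2) follows the same pattern, but it rebuilds the descriptor set on every call and scans all descriptors after each wakeup, which is why epoll scales better with many connections.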

9. Linux Completed IO Request Notification I/O Model Category (4 Models)

9.1. Linux native AIO Model (blocking)

9.2. Linux native AIO with epoll Model (blocking)
9.3. Epoll with multi-thread Model
9.4. Linux native AIO, epoll, multi-thread Model

11. Windows IO Ready Notification I/O Model

11.1. Windows WSAAsyncSelect Model
11.2. Windows WSAEventSelect Model

12. Windows Completed IO Request Notification I/O Model

12.1. Completed IO Request Notification only Model
12.1.1. Windows device kernel object Model (overlapped IO) (asynchronous blocking)
12.1.2. Windows event Model (overlapped IO) (asynchronous blocking)
12.1.3. Completion Routine Model (overlapped IO) (Windows alertable IO) (asynchronous blocking)
12.2. Completed IO Request Notification with Multiplexing Model
12.2.1. Windows IO completion port Model (asynchronous non-blocking)
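A minimal Windows C sketch of the event-based overlapped IO model (12.1.2), with an example file path and error handling omitted: the read request is submitted asynchronously, but the thread then blocks on the event object, which is why the model is labelled asynchronous blocking.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("C:\\Windows\\win.ini", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);

    static char buf[4096];
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);  /* manual-reset event */

    /* Submit the request; the call returns while the kernel keeps working. */
    ReadFile(file, buf, sizeof(buf), NULL, &ov);

    /* The thread blocks on the event object, not inside the IO call itself. */
    WaitForSingleObject(ov.hEvent, INFINITE);

    DWORD bytes = 0;
    GetOverlappedResult(file, &ov, &bytes, FALSE);
    printf("overlapped read completed: %lu bytes\n", (unsigned long)bytes);

    CloseHandle(ov.hEvent);
    CloseHandle(file);
    return 0;
}
```

The IO completion port model (12.2.1) removes this per-request wait by queuing completions onto a single port that a small pool of worker threads can drain, as sketched in section 2.3.1.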

13. Solaris Completed IO Request Notification I/O Model

13.1. Event completion Model

14. glibc Completed IO Request Notification

14.1. Signal
Signal notification in POSIX AIO
14.2. Callback
Callback notification in POSIX AIO
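A minimal sketch of the callback variant (14.2), assuming an example file path and linking with `-lrt` on older glibc: setting `sigev_notify` to `SIGEV_THREAD` makes glibc invoke a callback in a helper thread when the request completes, while `SIGEV_SIGNAL` plus a signal number would give the signal variant (14.1).

```c
#include <aio.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static char buf[256];

/* Run by glibc in a helper thread when the request completes. */
static void on_complete(union sigval sv)
{
    struct aiocb *cb = sv.sival_ptr;
    printf("callback: read %zd bytes\n", aio_return(cb));
}

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);    /* example file, assumption */
    static struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);

    cb.aio_sigevent.sigev_notify = SIGEV_THREAD;      /* callback, not signal */
    cb.aio_sigevent.sigev_notify_function = on_complete;
    cb.aio_sigevent.sigev_value.sival_ptr = &cb;

    aio_read(&cb);
    sleep(1);                                    /* keep the process alive briefly */
    close(fd);
    return 0;
}
```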

15. glibc Completed IO Request Notification I/O Model

15.1. POSIX AIO (signal, callback) Model (asynchronous non-blocking)
