Netty Source Code Analysis

I always meant to write one final piece on Netty, but when I actually sat down to write it I found it hard to know where to start; my understanding is probably still not deep enough. So instead I have gathered material from various places, much of it taken from the official site with little commentary of its own. Treat this as a reference.

First, an overall architecture diagram, taken from the Netty website, which depicts Netty's core architecture and overall functionality.

[Figure: Netty overall architecture diagram (from the official site)]

1.  Bootstrap

Bootstrap: ChannelFactory, ChannelPipeline, ChannelPipelineFactory
- A helper class that initializes a channel.
- Provides the common data structures for its concrete subclasses.

ServerBootstrap: bind()
- A helper class for creating server-side channels.
- Accepts incoming connection requests.

ClientBootstrap: connect()
- A helper class for creating client-side channels.
- Initiates connection requests.

ConnectionlessBootstrap: connect(), bind()
- A helper class for creating channels over connectionless transports (UDP).
- Covers both the client and the server side.
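To make the roles concrete, here is a minimal server-side sketch against the Netty 3.x API. The class names EchoServer/EchoHandler and port 8080 are illustrative, not from the original article:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class EchoServer {

    // A trivial handler that writes every received message back to the peer.
    static class EchoHandler extends SimpleChannelUpstreamHandler {
        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
            e.getChannel().write(e.getMessage());
        }
    }

    public static void main(String[] args) {
        // The ChannelFactory owns the boss (accept) and worker (I/O) thread pools.
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));

        // Every accepted channel gets a fresh pipeline from this factory.
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new EchoHandler());
            }
        });

        bootstrap.bind(new InetSocketAddress(8080)); // accept connections on port 8080
    }
}
```

A ClientBootstrap is assembled the same way, except that you call connect() with the remote address instead of bind().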

2.  Buffer

The buffer package replaces `java.nio.ByteBuffer`. Compared with ByteBuffer, it lets you define custom buffer types as needed; its built-in composite buffer types enable zero-copy; it offers a dynamic buffer whose capacity grows on demand, much like StringBuffer; there is no need to call flip() before reading; and the recommended way to create a ChannelBuffer is through the static factory methods of ChannelBuffers.
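For example, a minimal sketch using the ChannelBuffers factory (Netty 3.x):

```java
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.util.CharsetUtil;

public class BufferDemo {
    public static void main(String[] args) {
        // A dynamic buffer grows as data is written, like StringBuffer.
        ChannelBuffer buf = ChannelBuffers.dynamicBuffer();
        buf.writeBytes("hello".getBytes(CharsetUtil.UTF_8));

        // Separate readerIndex/writerIndex: no flip() needed before reading.
        byte[] dst = new byte[buf.readableBytes()];
        buf.readBytes(dst);
        System.out.println(new String(dst, CharsetUtil.UTF_8));

        // wrappedBuffer() composes existing arrays without copying (zero-copy).
        ChannelBuffer composite = ChannelBuffers.wrappedBuffer(
                "foo".getBytes(CharsetUtil.UTF_8),
                "bar".getBytes(CharsetUtil.UTF_8));
        System.out.println(composite.readableBytes()); // 6
    }
}
```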

3.  Channel

org.jboss.netty.channel
  The core channel API: asynchronous, event-driven interfaces for the various transports.
org.jboss.netty.channel.group
  Channel groups, which help the user maintain lists of channels.
org.jboss.netty.channel.local
  A virtual transport that lets two parts of the same JVM communicate with each other (see the sketch after this list).
org.jboss.netty.channel.socket
  TCP and UDP interfaces extending the core channel API.
org.jboss.netty.channel.socket.nio
  NIO-based socket channel implementation.
org.jboss.netty.channel.socket.oio
  Socket channel implementation based on the old blocking I/O.
org.jboss.netty.channel.socket.http
  HTTP-based client and corresponding server-side implementation, for working through firewalls.
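As a sketch of the local (in-VM) transport mentioned above, assuming the Netty 3.x local classes; the address name "demo" and the no-op pipeline are placeholders:

```java
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.channel.local.DefaultLocalClientChannelFactory;
import org.jboss.netty.channel.local.DefaultLocalServerChannelFactory;
import org.jboss.netty.channel.local.LocalAddress;

public class LocalTransportDemo {
    public static void main(String[] args) {
        ChannelPipelineFactory factory = new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                return Channels.pipeline(new SimpleChannelUpstreamHandler());
            }
        };

        // "Server" and "client" live in the same JVM; no socket is involved.
        LocalAddress addr = new LocalAddress("demo"); // arbitrary in-VM name
        ServerBootstrap server = new ServerBootstrap(new DefaultLocalServerChannelFactory());
        server.setPipelineFactory(factory);
        server.bind(addr);

        ClientBootstrap client = new ClientBootstrap(new DefaultLocalClientChannelFactory());
        client.setPipelineFactory(factory);
        client.connect(addr);
    }
}
```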

[Figure: core package structure of the channel module]


4.  Handler

org.jboss.netty.handler
  The handlers themselves.
org.jboss.netty.handler.codec
  Encoders and decoders (codecs).
org.jboss.netty.handler.execution
  Executor-based implementations that hand event processing off to a thread pool.
org.jboss.netty.handler.queue
  Handlers that buffer events into an internal queue.
org.jboss.netty.handler.ssl
  SSL and TLS support built on SSLEngine.
org.jboss.netty.handler.stream
  Asynchronous writing of large data without triggering OutOfMemoryError or consuming excessive memory.
org.jboss.netty.handler.timeout
  Read/write timeout and idle-connection notification driven by a Timer.
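These packages compose naturally in a pipeline. A minimal sketch combining the timeout and codec packages for a line-based text protocol (the handler names and timeout values are illustrative):

```java
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.codec.string.StringDecoder;
import org.jboss.netty.handler.codec.string.StringEncoder;
import org.jboss.netty.handler.timeout.IdleStateHandler;
import org.jboss.netty.util.HashedWheelTimer;
import org.jboss.netty.util.Timer;

public class LineProtocolPipelineFactory implements ChannelPipelineFactory {
    private final Timer timer = new HashedWheelTimer(); // shared timer for timeouts

    public ChannelPipeline getPipeline() {
        ChannelPipeline p = Channels.pipeline();
        // handler.timeout: fire an IdleStateEvent after 60s without reads
        // or 30s without writes (0 disables the all-idle check).
        p.addLast("idle", new IdleStateHandler(timer, 60, 30, 0));
        // handler.codec: split the inbound byte stream on line endings,
        // then convert frames to/from String.
        p.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        p.addLast("decoder", new StringDecoder());
        p.addLast("encoder", new StringEncoder());
        return p;
    }
}
```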

 

5.  Netty's Event Model

5.1.  Netty Pipeline

                                      I/O Request
                                     via Channel or
                                 ChannelHandlerContext
                                           |
  +----------------------------------------+---------------+
  |                  ChannelPipeline       |               |
  |                                       \|/              |
  |  +----------------------+  +-----------+------------+  |
  |  | Upstream Handler  N  |  | Downstream Handler  1  |  |
  |  +----------+-----------+  +-----------+------------+  |
  |            /|\                         |               |
  |             |                         \|/              |
  |  +----------+-----------+  +-----------+------------+  |
  |  | Upstream Handler N-1 |  | Downstream Handler  2  |  |
  |  +----------+-----------+  +-----------+------------+  |
  |            /|\                         .               |
  |             .                          .               |
  |     [ sendUpstream() ]        [ sendDownstream() ]     |
  |     [ + INBOUND data ]        [ + OUTBOUND data  ]     |
  |             .                          .               |
  |             .                         \|/              |
  |  +----------+-----------+  +-----------+------------+  |
  |  | Upstream Handler  2  |  | Downstream Handler M-1 |  |
  |  +----------+-----------+  +-----------+------------+  |
  |            /|\                         |               |
  |             |                         \|/              |
  |  +----------+-----------+  +-----------+------------+  |
  |  | Upstream Handler  1  |  | Downstream Handler  M  |  |
  |  +----------+-----------+  +-----------+------------+  |
  |            /|\                         |               |
  +-------------+--------------------------+---------------+
                |                         \|/
  +-------------+--------------------------+---------------+
  |             |                          |               |
  |     [ Socket.read() ]          [ Socket.write() ]      |
  |                                                        |
  |  Netty Internal I/O Threads (Transport Implementation) |
  +--------------------------------------------------------+

 

How the pipeline dispatches events:

ChannelPipeline p = Channels.pipeline();
p.addLast("1", new UpstreamHandlerA());
p.addLast("2", new UpstreamHandlerB());
p.addLast("3", new DownstreamHandlerA());
p.addLast("4", new DownstreamHandlerB());
p.addLast("5", new UpstreamHandlerX());

Upstream events visit the upstream handlers in insertion order: 1 -> 2 -> 5 (handlers 3 and 4 only handle downstream events and are skipped).
Downstream events visit the downstream handlers in reverse order: 4 -> 3 (the upstream-only handlers are skipped).
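To make the upstream flow concrete, here is a minimal sketch (the handler name is illustrative) of an upstream handler that logs a received message and forwards the event, so the next upstream handler in the pipeline also sees it:

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class LoggingUpstreamHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        System.out.println("received: " + e.getMessage());
        // Forward the event so the next upstream handler receives it as well.
        super.messageReceived(ctx, e); // equivalent to ctx.sendUpstream(e)
    }
}
```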

 

5.2.  ChannelState

| Direction  | State        | Value         | Meaning                                                  |
|------------|--------------|---------------|----------------------------------------------------------|
| Upstream   | OPEN         | true          | The channel is open.                                     |
| Upstream   | OPEN         | false         | The channel is closed.                                   |
| Upstream   | BOUND        | SocketAddress | The channel is bound to a local address.                 |
| Upstream   | BOUND        | null          | The channel is unbound from its local address.           |
| Upstream   | CONNECTED    | SocketAddress | The channel is connected to a remote address.            |
| Upstream   | CONNECTED    | null          | The channel is disconnected from its remote address.     |
| Upstream   | INTEREST_OPS | an integer    | The channel's interestOps has been changed.              |
| Downstream | OPEN         | true          | N/A                                                      |
| Downstream | OPEN         | false         | Close the channel.                                       |
| Downstream | BOUND        | SocketAddress | Bind the channel to the specified local address.         |
| Downstream | BOUND        | null          | Unbind the channel from the current local address.       |
| Downstream | CONNECTED    | SocketAddress | Connect the channel to the specified remote address.     |
| Downstream | CONNECTED    | null          | Disconnect the channel from the current remote address.  |
| Downstream | INTEREST_OPS | an integer    | Change the interestOps of the channel.                   |
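The table maps directly onto ChannelStateEvent. As a minimal sketch, a raw upstream handler can inspect the state/value pairs itself instead of relying on the convenience methods of SimpleChannelUpstreamHandler (the class name StateWatcher is illustrative):

```java
import org.jboss.netty.channel.ChannelEvent;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelState;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.ChannelUpstreamHandler;

public class StateWatcher implements ChannelUpstreamHandler {
    public void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e) throws Exception {
        if (e instanceof ChannelStateEvent) {
            ChannelStateEvent evt = (ChannelStateEvent) e;
            if (evt.getState() == ChannelState.OPEN && Boolean.FALSE.equals(evt.getValue())) {
                System.out.println("channel closed: " + evt.getChannel());
            } else if (evt.getState() == ChannelState.CONNECTED && evt.getValue() != null) {
                System.out.println("connected to " + evt.getValue()); // value is a SocketAddress
            }
        }
        ctx.sendUpstream(e); // always forward, or the pipeline stalls here
    }
}
```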

 

5.3.  Upstream events

| Event name               | Event type and condition                                   | Meaning                                                                        |
|--------------------------|------------------------------------------------------------|--------------------------------------------------------------------------------|
| "messageReceived"        | MessageEvent                                               | a message object (e.g. ChannelBuffer) was received from a remote peer          |
| "exceptionCaught"        | ExceptionEvent                                             | an exception was raised by an I/O thread or a ChannelHandler                   |
| "channelOpen"            | ChannelStateEvent (state = OPEN, value = true)             | a Channel is open, but not bound nor connected                                 |
| "channelClosed"          | ChannelStateEvent (state = OPEN, value = false)            | a Channel was closed and all its related resources were released               |
| "channelBound"           | ChannelStateEvent (state = BOUND, value = SocketAddress)   | a Channel is open and bound to a local address, but not connected              |
| "channelUnbound"         | ChannelStateEvent (state = BOUND, value = null)            | a Channel was unbound from the current local address                           |
| "channelConnected"       | ChannelStateEvent (state = CONNECTED, value = SocketAddress) | a Channel is open, bound to a local address, and connected to a remote address |
| "writeComplete"          | WriteCompletionEvent                                       | something has been written to a remote peer                                    |
| "channelDisconnected"    | ChannelStateEvent (state = CONNECTED, value = null)        | a Channel was disconnected from its remote peer                                |
| "channelInterestChanged" | ChannelStateEvent (state = INTEREST_OPS, no value)         | a Channel's interestOps was changed                                            |
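Each of these events has a corresponding overridable method in SimpleChannelUpstreamHandler. A minimal sketch (the class name LifecycleHandler is illustrative):

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class LifecycleHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        // Fired with state = CONNECTED, value = SocketAddress.
        System.out.println("connected: " + e.getChannel().getRemoteAddress());
    }

    @Override
    public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        // Fired with state = CONNECTED, value = null.
        System.out.println("disconnected: " + e.getChannel());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close(); // emits a downstream "close", then an upstream channelClosed
    }
}
```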

 

5.4.  Downstream events

| Event name   | Event type and condition                                     | Meaning                                                  |
|--------------|--------------------------------------------------------------|----------------------------------------------------------|
| "write"      | MessageEvent                                                 | Send a message to the Channel.                           |
| "bind"       | ChannelStateEvent (state = BOUND, value = SocketAddress)     | Bind the Channel to the specified local address.         |
| "unbind"     | ChannelStateEvent (state = BOUND, value = null)              | Unbind the Channel from the current local address.       |
| "connect"    | ChannelStateEvent (state = CONNECTED, value = SocketAddress) | Connect the Channel to the specified remote address.     |
| "disconnect" | ChannelStateEvent (state = CONNECTED, value = null)          | Disconnect the Channel from the current remote address.  |
| "close"      | ChannelStateEvent (state = OPEN, value = false)              | Close the Channel.                                       |
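Downstream events are normally not constructed by hand; they are emitted when you call the corresponding Channel methods. A minimal sketch, assuming `channel` is an already-connected Channel (the class and method names are illustrative):

```java
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.util.CharsetUtil;

public final class DownstreamDemo {
    static void sendAndClose(Channel channel) {
        // channel.write(...) sends a "write" downstream event through the pipeline.
        ChannelFuture f = channel.write(ChannelBuffers.copiedBuffer("bye", CharsetUtil.UTF_8));
        // When the write completes, channel.close() sends a "close" downstream event
        // (state = OPEN, value = false).
        f.addListener(ChannelFutureListener.CLOSE);
    }
}
```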

 

6.  Netty vs. Mina

- Netty is organized around ChannelPipeline processing, while Mina is organized around filter chains.
- Netty's event-driven model offers better extensibility and ease of use.
- Netty ships protocol support for HTTPS, SSL, Protocol Buffers, RTSP, and text/binary codecs.
- UDP transport is better supported in Netty.
- In the official benchmarks, Netty performs better than Mina.
