Reposted from:
https://blog.youkuaiyun.com/u010412719/article/details/78443379
https://blog.youkuaiyun.com/u010412719/article/details/78448380
In the previous part we arrived at processSelectedKeys:
private void processSelectedKeys() {
    if (selectedKeys != null) {
        processSelectedKeysOptimized();
    } else {
        processSelectedKeysPlain(selector.selectedKeys());
    }
}
In this method, either processSelectedKeysOptimized or processSelectedKeysPlain is called, depending on whether the selectedKeys field is null. The selectedKeys field is set when openSelector() is called, and whether it actually gets a value depends on the JVM/platform: Netty tries to replace the Selector's internal key set with an optimized one, and only if that succeeds is selectedKeys non-null. Apart from that, processSelectedKeysOptimized and processSelectedKeysPlain do not differ much, so for simplicity we will walk through processSelectedKeysOptimized.
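For context, a simplified sketch of that openSelector() logic follows. It is assumed to live inside NioEventLoop (imports omitted) and is not the exact Netty source; the sun.nio.ch.SelectorImpl field names are JDK internals and may vary between JDK versions:

private SelectedSelectionKeySet selectedKeys; // stays null if the optimization is disabled or fails

private Selector openSelector() throws IOException {
    Selector selector = SelectorProvider.provider().openSelector();
    if (SystemPropertyUtil.getBoolean("io.netty.noKeySetOptimization", false)) {
        return selector; // optimization disabled -> processSelectedKeysPlain will be used
    }
    try {
        SelectedSelectionKeySet keySet = new SelectedSelectionKeySet();
        Class<?> selectorImpl = Class.forName("sun.nio.ch.SelectorImpl");
        Field selectedKeysField = selectorImpl.getDeclaredField("selectedKeys");
        Field publicSelectedKeysField = selectorImpl.getDeclaredField("publicSelectedKeys");
        selectedKeysField.setAccessible(true);
        publicSelectedKeysField.setAccessible(true);
        // From now on the Selector fills the array-backed set instead of a HashSet
        selectedKeysField.set(selector, keySet);
        publicSelectedKeysField.set(selector, keySet);
        this.selectedKeys = keySet;
    } catch (Throwable t) {
        // Reflection failed: selectedKeys stays null and the plain path is taken
    }
    return selector;
}

Now, back to processSelectedKeysOptimized: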
private void processSelectedKeysOptimized(SelectionKey[] selectedKeys) {
    for (int i = 0;; i ++) {
        final SelectionKey k = selectedKeys[i];
        // When the key set was flipped, the slot after the last valid key was set to null,
        // so a null entry means all valid keys have been processed and we can break out of the loop.
        if (k == null) {
            break;
        }
        // null out entry in the array to allow to have it GC'ed once the Channel close
        // See https://github.com/netty/netty/issues/2363
        selectedKeys[i] = null;

        final Object a = k.attachment();
        // The key is attached to one of two kinds of objects: an AbstractNioChannel or a NioTask
        if (a instanceof AbstractNioChannel) {
            processSelectedKey(k, (AbstractNioChannel) a);
        } else {
            @SuppressWarnings("unchecked")
            NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
            processSelectedKey(k, task);
        }

        // If we need to select again, reset the current state first
        if (needsToSelectAgain) {
            // null out entries in the array to allow to have it GC'ed once the Channel close
            // See https://github.com/netty/netty/issues/2363
            for (;;) {
                if (selectedKeys[i] == null) {
                    break;
                }
                selectedKeys[i] = null;
                i++;
            }

            selectAgain();
            // Need to flip the optimized selectedKeys to get the right reference to the array
            // and reset the index to -1 which will then set to 0 on the for loop
            // to start over again.
            //
            // See https://github.com/netty/netty/issues/1523
            selectedKeys = this.selectedKeys.flip();
            i = -1;
        }
    }
}
This iterates over selectedKeys to obtain the ready IO events and calls processSelectedKey for each of them.
k.attachment() retrieves the object attached to the SelectionKey. What is this object, and where was it set? Recall how the SocketChannel was registered with the Selector. During the client-side Channel registration the call chain is:
Bootstrap.initAndRegister ->
AbstractBootstrap.initAndRegister ->
MultithreadEventLoopGroup.register ->
SingleThreadEventLoop.register ->
AbstractUnsafe.register ->
AbstractUnsafe.register0 ->
AbstractNioChannel.doRegister
Finally, AbstractNioChannel.doRegister calls SocketChannel.register to register the SocketChannel with the given Selector:
@Override
protected void doRegister() throws Exception {
    // error handling omitted
    selectionKey = javaChannel().register(eventLoop().selector, 0, this);
}
Pay special attention to the third argument of register: it sets the SelectionKey's attachment, exactly like calling selectionKey.attach(object). The third argument passed here is this, which is in fact the NioSocketChannel instance itself. So it is now clear: when the SocketChannel is registered with the Selector, the corresponding NioSocketChannel is attached to the SelectionKey.
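The same mechanism exists in plain JDK NIO, independent of Netty; a minimal throwaway sketch (not Netty code):

Selector selector = Selector.open();
SocketChannel ch = SocketChannel.open();
ch.configureBlocking(false);

Object attachment = new Object();                         // Netty passes the NioSocketChannel itself here
SelectionKey key = ch.register(selector, 0, attachment);  // equivalent to key.attach(attachment)

// later, when the key is selected:
Object a = key.attachment();                               // returns the same object again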
Back in processSelectedKeysOptimized: once the attachment has been retrieved, processSelectedKey is called to handle the IO event:
final Object a = k.attachment();
if (a instanceof AbstractNioChannel) {
    processSelectedKey(k, (AbstractNioChannel) a);
} else {
    @SuppressWarnings("unchecked")
    NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
    processSelectedKey(k, task);
}
The source of processSelectedKey is as follows:
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
    // ... omitted
    try {
        int readyOps = k.readyOps();
        // connect event
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);

            unsafe.finishConnect();
        }
        // write event
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            // Call forceFlush which will also take care of clearing OP_WRITE once there is nothing left to write
            ch.unsafe().forceFlush();
        }
        // read / accept event
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            // This is the heart of the read path: once the event loop has read bytes,
            // the data is passed down the pipeline
            unsafe.read();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}
processSelectedKey handles the following events:
• OP_ACCEPT: a client connection has been accepted.
• OP_READ: the Channel has received new data that the upper layers can read.
• OP_WRITE: the upper layers may now write data to the Channel.
• OP_CONNECT: the TCP connection has been established and the Channel is active.
OP_READ | OP_ACCEPT handling
When the ready IO event is OP_READ or OP_ACCEPT, the code calls unsafe.read():
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
    unsafe.read();
    if (!ch.isOpen()) {
        // Connection already closed - no need to handle write.
        return;
    }
}
Here unsafe is a NioSocketChannelUnsafe instance (note that NioSocketChannelUnsafe extends NioByteUnsafe). On the client side the unsafe is a NioSocketChannelUnsafe, while on the server side it is a NioMessageUnsafe. Let us look at the client side first. unsafe is responsible for the Channel's low-level IO operations, and the call ends up in AbstractNioByteChannel.NioByteUnsafe#read:
@Override
public void read() {
    final ChannelConfig config = config();
    if (!config.isAutoRead() && !isReadPending()) {
        // ChannelConfig.setAutoRead(false) was called in the meantime
        removeReadOp();
        return;
    }

    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final int maxMessagesPerRead = config.getMaxMessagesPerRead();
    RecvByteBufAllocator.Handle allocHandle = this.allocHandle;
    if (allocHandle == null) {
        this.allocHandle = allocHandle = config.getRecvByteBufAllocator().newHandle();
    }

    ByteBuf byteBuf = null;
    int messages = 0;
    boolean close = false;
    try {
        int totalReadAmount = 0;
        boolean readPendingReset = false;
        do {
            // allocate a buffer
            byteBuf = allocHandle.allocate(allocator);
            int writable = byteBuf.writableBytes(); // writable capacity of the buffer
            // read data from the SocketChannel into the buffer
            int localReadAmount = doReadBytes(byteBuf);
            if (localReadAmount <= 0) {
                // nothing was read, release the buffer
                byteBuf.release();
                close = localReadAmount < 0;
                break;
            }
            if (!readPendingReset) {
                readPendingReset = true;
                setReadPending(false);
            }
            // fire a ChannelRead event down the pipeline so handlers can process the ByteBuf
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;

            if (totalReadAmount >= Integer.MAX_VALUE - localReadAmount) {
                // Avoid overflow.
                totalReadAmount = Integer.MAX_VALUE;
                break;
            }

            totalReadAmount += localReadAmount;

            // stop reading
            if (!config.isAutoRead()) {
                break;
            }

            if (localReadAmount < writable) {
                // Read less than what the buffer can hold,
                // which might mean we drained the recv buffer completely.
                break;
            }
        } while (++ messages < maxMessagesPerRead);

        pipeline.fireChannelReadComplete();
        allocHandle.record(totalReadAmount);

        if (close) {
            closeOnRead(pipeline);
            close = false;
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close);
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!config.isAutoRead() && !isReadPending()) {
            removeReadOp();
        }
    }
}
The read() method is fairly long; in summary it does the following:
• allocates a ByteBuf;
• reads data from the SocketChannel into it;
• calls pipeline.fireChannelRead to fire an inbound event.
Let us now look at the more important pieces of code.
final ByteBufAllocator allocator = config.getAllocator();
This line obtains the buffer allocator, which by default is an UnpooledByteBufAllocator instance. It can be changed with the system property io.netty.allocator.type: setting io.netty.allocator.type=pooled switches the allocator to PooledByteBufAllocator.
Why is that? Following the code, everything ends up at the DEFAULT_ALLOCATOR constant in ByteBufUtil, which is initialized as follows:
static final ByteBufAllocator DEFAULT_ALLOCATOR;

static {
    // ... omitted
    String allocType = SystemPropertyUtil.get("io.netty.allocator.type", "unpooled").toLowerCase(Locale.US).trim();
    ByteBufAllocator alloc;
    if ("unpooled".equals(allocType)) {
        alloc = UnpooledByteBufAllocator.DEFAULT;
        logger.debug("-Dio.netty.allocator.type: {}", allocType);
    } else if ("pooled".equals(allocType)) {
        alloc = PooledByteBufAllocator.DEFAULT;
        logger.debug("-Dio.netty.allocator.type: {}", allocType);
    } else {
        alloc = UnpooledByteBufAllocator.DEFAULT;
        logger.debug("-Dio.netty.allocator.type: unpooled (unknown: {})", allocType);
    }
    DEFAULT_ALLOCATOR = alloc;
    // ... omitted
}
The conclusion: by default allocType is unpooled, so the allocator alloc is an UnpooledByteBufAllocator instance.
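In practice there are two common ways to switch to the pooled allocator: the system property read by the static block above, or an explicit per-bootstrap setting via ChannelOption.ALLOCATOR, which bypasses DEFAULT_ALLOCATOR entirely. A brief illustration:

// Option 1: JVM-wide default, picked up once by ByteBufUtil's static initializer
//   java -Dio.netty.allocator.type=pooled ...

// Option 2: explicit configuration on a bootstrap
ServerBootstrap b = new ServerBootstrap();
b.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
 .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);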
Instantiation of allocHandle
allocHandle adaptively adjusts the size of the buffers being allocated, so that we neither allocate too much nor too little:
final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();

@Override
public RecvByteBufAllocator.Handle recvBufAllocHandle() {
    if (recvHandle == null) {
        recvHandle = config().getRecvByteBufAllocator().newHandle();
    }
    return recvHandle;
}
config.getRecvByteBufAllocator() returns an AdaptiveRecvByteBufAllocator instance.
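The adaptive behaviour boils down to this: guess() returns the current size estimate, which is grown aggressively when a read fills the whole buffer and shrunk only after reads have stayed small for a while. The following is a simplified sketch of that idea only; the real AdaptiveRecvByteBufAllocator steps through a precomputed SIZE_TABLE rather than doubling and halving:

// Toy sketch of adaptive receive-buffer sizing (not the actual Netty implementation)
final class AdaptiveGuessSketch {
    private int next = 1024;          // initial estimate
    private boolean decreaseNow;      // require two consecutive small reads before shrinking

    int guess() {
        return next;                  // size used for the next allocate()
    }

    void record(int actualReadBytes) {
        if (actualReadBytes >= next) {
            next = Math.min(next * 2, 64 * 1024);    // buffer was filled: grow quickly
            decreaseNow = false;
        } else if (actualReadBytes < next / 2) {
            if (decreaseNow) {
                next = Math.max(next / 2, 64);       // two small reads in a row: shrink
                decreaseNow = false;
            } else {
                decreaseNow = true;
            }
        }
    }
}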
byteBuf = allocHandle.allocate(allocator);
This requests a block of memory of the estimated size (MaxMessageHandle#allocate):
public ByteBuf allocate(ByteBufAllocator alloc) {
    return alloc.ioBuffer(guess());
}

@Override
public ByteBuf ioBuffer(int initialCapacity) {
    if (PlatformDependent.hasUnsafe()) {
        return directBuffer(initialCapacity);
    }
    return heapBuffer(initialCapacity);
}
The main logic of ioBuffer is: if the platform supports Unsafe, use direct (off-heap) memory, otherwise use heap memory.
Let us look at heapBuffer first:
@Override
public ByteBuf heapBuffer(int initialCapacity, int maxCapacity) {
    if (initialCapacity == 0 && maxCapacity == 0) {
        return emptyBuf;
    }
    validate(initialCapacity, maxCapacity);
    return newHeapBuffer(initialCapacity, maxCapacity);
}
newHeapBuffer has two implementations; which one is used depends on the io.netty.allocator.type system property. If it is set to pooled, the allocator is PooledByteBufAllocator and memory is allocated from an object pool; if it is unset or set to anything else, the allocator is UnpooledByteBufAllocator, which simply returns an UnpooledHeapByteBuf.
directBuffer: just like newHeapBuffer, newDirectBuffer also has two implementations, selected by the same io.netty.allocator.type property: pooled gives PooledByteBufAllocator with pooled allocation, anything else gives UnpooledByteBufAllocator.
The doReadBytes method
doReadBytes reads data from the SocketChannel into the buffer:
@Override
protected int doReadBytes(ByteBuf byteBuf) throws Exception {
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.attemptedBytesRead(byteBuf.writableBytes());
    return byteBuf.writeBytes(javaChannel(), allocHandle.attemptedBytesRead());
}
After that, pipeline.fireChannelRead is called. This is exactly the starting point of an inbound event: the event flows through the handlers in the ChannelPipeline in the direction head -> customContext -> tail. Everything that happens after pipeline.fireChannelRead is the ChannelPipeline's job.
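To make "head -> customContext -> tail" concrete, here is a small hypothetical pipeline: handlers added earlier see the inbound ByteBuf first, and each one decides whether to pass it on with fireChannelRead:

// Hypothetical handlers; the ByteBuf below is the one fired by unsafe.read()
ch.pipeline()
  .addLast("logging", new ChannelInboundHandlerAdapter() {
      @Override
      public void channelRead(ChannelHandlerContext ctx, Object msg) {
          ByteBuf buf = (ByteBuf) msg;
          System.out.println("readable bytes: " + buf.readableBytes());
          ctx.fireChannelRead(msg);    // propagate towards the tail
      }
  })
  .addLast("business", new SimpleChannelInboundHandler<ByteBuf>() {
      @Override
      protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
          // consume the data; SimpleChannelInboundHandler releases msg afterwards
      }
  });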
OP_WRITE handling
The OP_WRITE branch is short and there is not much to analyze: Netty simply calls forceFlush(), which writes out the pending data and clears OP_WRITE once there is nothing left to write.
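For reference, this is the corresponding branch of processSelectedKey shown earlier:

// write event
if ((readyOps & SelectionKey.OP_WRITE) != 0) {
    // Call forceFlush which will also take care of clearing OP_WRITE once there is nothing left to write
    ch.unsafe().forceFlush();
}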
OP_CONNECT handling
The last event is OP_CONNECT, i.e. the TCP connection has been established.
if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
    // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
    // See https://github.com/netty/netty/issues/924
    int ops = k.interestOps();
    ops &= ~SelectionKey.OP_CONNECT;
    k.interestOps(ops);

    unsafe.finishConnect();
}
The OP_CONNECT handling does only two things:
• it removes OP_CONNECT from the interest set, otherwise Selector.select(..) would keep returning immediately;
• it calls unsafe.finishConnect() to notify the upper layers that the connection has been established.
unsafe.finishConnect() eventually calls pipeline().fireChannelActive(), producing an inbound event that notifies the handlers in the pipeline that the TCP channel is now active (that is, ChannelInboundHandler.channelActive will be invoked).
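A typical place for application code to react to this is channelActive; a minimal hypothetical client handler that sends its first message once the connection is up:

// Hypothetical handler: channelActive fires after finishConnect() has propagated fireChannelActive()
public class GreetingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // the TCP connection is established at this point
        ctx.writeAndFlush(Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8));
    }
}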
Now let us look at the server side, NioMessageUnsafe.read():
@Override
public void read() {
    assert eventLoop().inEventLoop();
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.reset(config);

    boolean closed = false;
    Throwable exception = null;
    try {
        try {
            do {
                int localRead = doReadMessages(readBuf);
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }

                allocHandle.incMessagesRead(localRead);
            } while (allocHandle.continueReading());
        } catch (Throwable t) {
            exception = t;
        }

        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            pipeline.fireChannelRead(readBuf.get(i));
        }
        readBuf.clear();
        allocHandle.readComplete();
        pipeline.fireChannelReadComplete();

        if (exception != null) {
            closed = closeOnReadError(exception);
            pipeline.fireExceptionCaught(exception);
        }

        if (closed) {
            inputShutdown = true;
            if (isOpen()) {
                close(voidPromise());
            }
        }
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}
Inside the do-while loop, doReadMessages is called to perform the ServerSocketChannel's accept operation:
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = SocketUtils.accept(javaChannel());

    try {
        if (ch != null) {
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);

        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }

    return 0;
}
ServerSocketChannel.accept() accepts an incoming connection; when it returns, it hands back a SocketChannel for the new connection. In non-blocking mode accept() returns immediately, and if there is no pending connection it returns null, so the returned SocketChannel must be checked for null.
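The same caveat can be seen with plain JDK NIO, outside of Netty (a minimal sketch; the port is chosen only for the example):

// Non-blocking accept may return null when no connection is pending
ServerSocketChannel server = ServerSocketChannel.open();
server.configureBlocking(false);
server.bind(new InetSocketAddress(8080));

SocketChannel client = server.accept();     // returns immediately
if (client != null) {
    client.configureBlocking(false);
    // register the accepted channel with a selector, etc.
}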
Afterwards, each client SocketChannel collected in readBuf is iterated over, and a ChannelRead event is fired on the server channel's pipeline for each of them. Concretely, the pipeline is searched from the head node for the first inbound HandlerContext to handle the event; following the code, this ends up executing ServerBootstrapAcceptor.channelRead. Let us see what that method does.
Its code is as follows:
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;

    child.pipeline().addLast(childHandler);

    setChannelOptions(child, childOptions, logger);

    for (Entry<AttributeKey<?>, Object> e: childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }

    try {
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
child.pipeline().addLast(childHandler) adds childHandler to the NioSocketChannel's pipeline; childHandler is the handler configured through ServerBootstrap.childHandler.
childGroup.register(child) registers the NioSocketChannel with one of the worker EventLoops, in exactly the same way that the NioServerSocketChannel was registered with the boss EventLoop; from then on that worker's Selector listens for read events on the channel.
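For completeness, this is how those pieces look from the user's side: a typical boss/worker setup where the ChannelInitializer passed to childHandler is exactly what ServerBootstrapAcceptor later adds to each accepted channel (the business handler name below is a placeholder):

EventLoopGroup bossGroup = new NioEventLoopGroup(1);    // accepts connections (OP_ACCEPT)
EventLoopGroup workerGroup = new NioEventLoopGroup();   // handles read/write on accepted channels

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     protected void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(new MyBusinessHandler());  // placeholder handler
     }
 });
ChannelFuture f = b.bind(8080).sync();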
Summary: the Netty IO event processing flow
This article walked through how Netty processes the Selector's IO events, focusing on processSelectedKeysOptimized, the handling of the individual event types such as OP_READ and OP_WRITE, and how the ChannelRead event is fired down the Pipeline.