4. Netty Source Code Analysis: How the Server Establishes Connections

From the exploration of the Netty server startup flow in the previous section, we know that once the server has started, a thread loops forever listening for the OP_ACCEPT event on the NioServerSocketChannel.

// NioEventLoop
protected void run() {
    int selectCnt = 0;
    for (;;) {
        try {
            int strategy;
            try {
                strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
                switch (strategy) {
                case SelectStrategy.CONTINUE:
                    continue;
                case SelectStrategy.BUSY_WAIT:
                    // fall-through to SELECT since busy-waiting is not supported with NIO
                case SelectStrategy.SELECT:
                    long curDeadlineNanos = nextScheduledTaskDeadlineNanos();
                    if (curDeadlineNanos == -1L) {
                        curDeadlineNanos = NONE; // nothing on the calendar
                    }
                    nextWakeupNanos.set(curDeadlineNanos);
                    try {
                        if (!hasTasks()) {
                            strategy = select(curDeadlineNanos);
                        }
                    } finally {
                        nextWakeupNanos.lazySet(AWAKE);
                    }
                    // fall through
                default:
                }
            } catch (IOException e) {
                // Likely the JDK epoll bug; rebuild the selector and retry
                rebuildSelector0();
                selectCnt = 0;
                handleLoopException(e);
                continue;
            }
            selectCnt++;
            cancelledKeys = 0;
            needsToSelectAgain = false;
            final int ioRatio = this.ioRatio;
            boolean ranTasks;
            if (ioRatio == 100) {
                try {
                    if (strategy > 0) {
                        processSelectedKeys();
                    }
                } finally {
                    // Ensure we always run tasks.
                    ranTasks = runAllTasks();
                }
            } else if (strategy > 0) {
                final long ioStartTime = System.nanoTime();
                try {
                    processSelectedKeys();
                } finally {
                    // Budget task-processing time proportionally to the time spent on I/O
                    final long ioTime = System.nanoTime() - ioStartTime;
                    ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                }
            } else {
                ranTasks = runAllTasks(0); // This will run the minimum number of tasks
            }

            if (ranTasks || strategy > 0) {
                if (selectCnt > MIN_PREMATURE_SELECTOR_RETURNS && logger.isDebugEnabled()) {
                    // debug logging elided
                }
                selectCnt = 0;
            } else if (unexpectedSelectorWakeup(selectCnt)) { // Unexpected wakeup (unusual case)
                selectCnt = 0;
            }
        } catch (CancelledKeyException e) {
            // Harmless; logging elided
        } catch (Throwable t) {
            handleLoopException(t);
        }
        try {
            if (isShuttingDown()) {
                closeAll();
                if (confirmShutdown()) {
                    return;
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
    }
}
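
A quick numeric illustration of the ioRatio budget above, using hypothetical values (the NioEventLoop default is ioRatio = 50, which grants tasks the same amount of time as I/O):

// With ioRatio = 80, after 4ms of I/O work the loop grants non-I/O tasks
// 4ms * (100 - 80) / 80 = 1ms before runAllTasks stops draining the queue.
long ioTimeNanos = 4_000_000L; // 4ms spent in processSelectedKeys() (example value)
int ioRatio = 80;              // example value; Netty's default is 50
long taskBudgetNanos = ioTimeNanos * (100 - ioRatio) / ioRatio; // 1_000_000ns = 1ms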

As the loop shows, the connection-handling logic lives in:

processSelectedKeys();

Stepping into processSelectedKeys(), its logic is as follows:

private void processSelectedKeys() {
    if (selectedKeys != null) {
        // Netty uses reflection to replace the JDK selector's selected-key Set
        // with an optimized array-backed implementation, so this branch is the
        // common case
        processSelectedKeysOptimized();
    } else {
        processSelectedKeysPlain(selector.selectedKeys());
    }
}
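
In the optimized path, processSelectedKeysOptimized() walks the ready-key array and dispatches each key together with the channel attached to it; lightly abridged from Netty 4.1:

private void processSelectedKeysOptimized() {
    for (int i = 0; i < selectedKeys.size; ++i) {
        final SelectionKey k = selectedKeys.keys[i];
        // Null out the entry so the Channel can be GC'ed once it is closed
        selectedKeys.keys[i] = null;

        final Object a = k.attachment();
        if (a instanceof AbstractNioChannel) {
            // Channels attach themselves to the key when registering
            processSelectedKey(k, (AbstractNioChannel) a);
        } else {
            NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
            processSelectedKey(k, task);
        }
        // needsToSelectAgain handling elided
    }
}

Each ready channel then lands in processSelectedKey: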

private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
    if (!k.isValid()) {
        final EventLoop eventLoop;
        try {
            eventLoop = ch.eventLoop();
        } catch (Throwable ignored) {
            return;
        }
        // Only close ch if it is still registered with this EventLoop
        if (eventLoop == this) {
            unsafe.close(unsafe.voidPromise());
        }
        return;
    }

    try {
        int readyOps = k.readyOps();
        if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
            // Remove OP_CONNECT first, otherwise Selector.select(..) keeps returning immediately
            int ops = k.interestOps();
            ops &= ~SelectionKey.OP_CONNECT;
            k.interestOps(ops);
            unsafe.finishConnect();
        }
        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            ch.unsafe().forceFlush();
        }
        // Also check readyOps == 0 to work around a possible JDK bug that could
        // otherwise lead to a spin loop
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            unsafe.read();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}

NioServerSocketChannel registers for OP_ACCEPT, so the call ultimately lands in unsafe.read(). There are two implementations of read():

NioMessageUnsafe.read (read handling for NioServerSocketChannel)
NioByteUnsafe.read (read handling for NioSocketChannel)

Stepping into NioMessageUnsafe.read:

public void read() {
    assert eventLoop().inEventLoop();
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.reset(config);

    boolean closed = false;
    Throwable exception = null;
    try {
        try {
            do {
                // For the server channel this accepts a connection and adds a
                // new NioSocketChannel to readBuf
                int localRead = doReadMessages(readBuf);
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }

                allocHandle.incMessagesRead(localRead);
            } while (allocHandle.continueReading());
        } catch (Throwable t) {
            exception = t;
        }

        // Propagate each accepted channel down the server channel's pipeline
        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            pipeline.fireChannelRead(readBuf.get(i));
        }
        readBuf.clear();
        allocHandle.readComplete();
        pipeline.fireChannelReadComplete();
        if (exception != null) {
            closed = closeOnReadError(exception);
            pipeline.fireExceptionCaught(exception);
        }
        if (closed) {
            inputShutdown = true;
            if (isOpen()) {
                close(voidPromise());
            }
        }
    } finally {
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}

The loop ultimately calls doReadMessages, which NioServerSocketChannel implements:

protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = SocketUtils.accept(javaChannel());
    try {
        if (ch != null) {
            // Wrap the raw JDK channel in a NioSocketChannel, with the server
            // channel as its parent
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        // Failed to create a channel from the accepted socket; close it (logging elided)
        try {
            ch.close();
        } catch (Throwable t2) {
            // logging elided
        }
    }

    return 0;
}
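
SocketUtils.accept here is essentially a privileged wrapper around the JDK accept call; roughly:

public static SocketChannel accept(final ServerSocketChannel serverSocketChannel) throws IOException {
    try {
        return AccessController.doPrivileged(new PrivilegedExceptionAction<SocketChannel>() {
            @Override
            public SocketChannel run() throws IOException {
                // Delegates to ServerSocketChannelImpl.accept() under the hood
                return serverSocketChannel.accept();
            }
        });
    } catch (PrivilegedActionException e) {
        throw (IOException) e.getCause();
    }
}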

At this point we have hold of the client socket for the new connection: as discussed earlier, the ServerSocketChannelImpl held inside NioServerSocketChannel performs the real network accept, and the resulting JDK SocketChannel is then wrapped in a new NioSocketChannel.
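
That wrapping constructor is also where each new channel gets its own pipeline; a lightly abridged view of the constructor chain (Netty 4.1):

public NioSocketChannel(Channel parent, SocketChannel socket) {
    super(parent, socket); // -> AbstractNioByteChannel, which fixes the interest op to OP_READ
    config = new NioSocketChannelConfig(this, socket.socket());
}

// Further up the chain, AbstractChannel's constructor creates the per-channel
// unsafe and DefaultChannelPipeline:
protected AbstractChannel(Channel parent) {
    this.parent = parent;
    id = newId();
    unsafe = newUnsafe();
    pipeline = newChannelPipeline(); // new DefaultChannelPipeline(this)
}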

Note that all of this runs on a single boss thread, so the processing is serial and there are no concurrency issues. We now have the SocketChannel for the newly connected client; the next step is to hand this channel over to a worker thread. How does that happen? Going back to NioMessageUnsafe.read: once the connection has been accepted, pipeline.fireChannelRead is called (on the NioServerSocketChannel, the "data" that gets read is actually a NioSocketChannel). pipeline.fireChannelRead invokes the channelRead method of each handler in the chain. Note that these handlers are not the childHandler passed to ServerBootstrap; if you want your own handlers to run on the boss side, call ServerBootstrap.handler to install a handler chain for the boss thread, as sketched below.
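
A minimal sketch of the two handler slots (EchoServerHandler is a hypothetical business handler; the rest are standard Netty classes):

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 // Runs on the boss channel's pipeline, ahead of the ServerBootstrapAcceptor
 .handler(new LoggingHandler(LogLevel.INFO))
 // Runs on each accepted NioSocketChannel
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     protected void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(new EchoServerHandler());
     }
 });
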
pipeline.fireChannelRead eventually reaches ServerBootstrapAcceptor.channelRead, the handler that was bound to the NioServerSocketChannel's pipeline during init at server startup. Its logic is as follows:

public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // The message read here is actually a NioSocketChannel
    final Channel child = (Channel) msg;
    child.pipeline().addLast(childHandler);
    setChannelOptions(child, childOptions, logger);
    setAttributes(child, childAttrs);
    try {
        // childGroup is the group passed into the ServerBootstrapAcceptor
        // constructor, i.e. the worker EventLoopGroup of ServerBootstrap;
        // this register call is what binds the client Channel to a worker thread
        childGroup.register(child).addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}

This method does for the NioSocketChannel essentially what init and register did for the NioServerSocketChannel. The difference is that NioServerSocketChannel has a dedicated init method, whereas here the init work (adding childHandler, setting options and attributes) is done inline in channelRead, after which a worker thread is asked to perform the registration. If you follow the worker-side code, the register logic is the same as the NioServerSocketChannel's. Netty's inheritance hierarchy separates these responsibilities cleanly, achieving a very high degree of code and feature reuse.
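
The hand-off itself is tiny: childGroup.register simply picks the next EventLoop via a round-robin chooser and delegates to it (Netty 4.1):

// MultithreadEventLoopGroup
public ChannelFuture register(Channel channel) {
    // next() is backed by an EventExecutorChooser that round-robins
    // across the worker EventLoops
    return next().register(channel);
}
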
Let's first look at the NioSocketChannel class diagram:

[Class diagram: NioSocketChannel]

and compare it with the earlier NioServerSocketChannel diagram:

[Class diagram: NioServerSocketChannel]
The left-hand inheritance chains are strikingly similar; the main structural difference is that one extends AbstractNioByteChannel while the other extends AbstractNioMessageChannel.
The other major difference is the event each one registers interest in:

// NioServerSocketChannel: interested in OP_ACCEPT
public NioServerSocketChannel(ServerSocketChannel channel) {
    super(null, channel, SelectionKey.OP_ACCEPT);
    config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}

// AbstractNioByteChannel (parent of NioSocketChannel): interested in OP_READ
protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) {
    super(parent, ch, SelectionKey.OP_READ);
}

The server channel listens for OP_ACCEPT; the client channel listens for OP_READ.
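
This readInterestOp is applied lazily: it is added to the SelectionKey in AbstractNioChannel.doBeginRead, which runs once the channel becomes active (Netty 4.1):

protected void doBeginRead() throws Exception {
    // Channel.read() or ChannelHandlerContext.read() was called
    final SelectionKey selectionKey = this.selectionKey;
    if (!selectionKey.isValid()) {
        return;
    }

    readPending = true;

    final int interestOps = selectionKey.interestOps();
    if ((interestOps & readInterestOp) == 0) {
        // Add OP_ACCEPT (server channel) or OP_READ (client channel)
        selectionKey.interestOps(interestOps | readInterestOp);
    }
}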

Once registered with a worker thread, the worker's event loop monitors I/O events on the channels registered with it in exactly the same way the boss thread does; the only difference is the interest set, since NioServerSocketChannel and NioSocketChannel register different events. The threads in the childGroup (the worker group) thus run the same select loop as the boss threads, but watch for OP_READ instead of OP_ACCEPT.
At this point, the handling of the client connection is complete.

To summarize the server-side connection-establishment process:

  1. The server-side boss EventLoop thread loops monitoring socket events. When an event arrives, it handles it by calling the underlying ServerSocketChannelImpl to obtain the real client connection, wraps it in a NioSocketChannel, and returns it; instantiating the NioSocketChannel also creates a DefaultChannelPipeline for it.
  2. The returned NioSocketChannel is propagated through the NioServerSocketChannel's pipeline (DefaultChannelPipeline) via fireChannelRead, eventually reaching ServerBootstrapAcceptor.
  3. ServerBootstrapAcceptor's channelRead hands the newly connected channel to the childGroup (the worker group) for registration; the childGroup picks one of its threads to do the actual work.
  4. The actual registration is essentially the same as for the NioServerSocketChannel, and it likewise binds the NioSocketChannel to the thread performing the registration.
  5. After registration, just as with the NioServerSocketChannel, pipeline.fireChannelRegistered and pipeline.fireChannelActive are triggered; the only difference is that the NioSocketChannel registers for the OP_READ network event.
  6. At this point the connection is truly established, and the worker threads begin looping to monitor OP_READ events on the registered NioSocketChannels.