Using multiple processors with Node

This article takes a close look at Node.js's cluster module, showing how to use it to spread work across multiple processor cores and make better use of server resources. Concrete examples demonstrate how to create child processes that serve HTTP requests, how to pass messages between the master process and its children, and how to automatically replace a child process when it dies.

Node is single threaded. This means Node uses only one processor to do its work. However, most servers have several "multi-core" processors, and a single multi-core processor contains many cores, each of which the operating system sees as a processor. A server with two physical CPU sockets might have 24 logical cores, that is, 24 processors exposed to the operating system. To make the best use of Node we should use those too. So if we don't have threads, how do we do that?
Node provides a module called cluster that allows you to delegate work to child processes. This means that Node creates a copy of its current program in another process (on Windows, it is actually another thread). Each child process has some special abilities, such as the ability to share a socket with other children. This allows us to write Node programs that start many other Node programs and then delegate work to them.

It is important to understand that when you use cluster to share work between a number of copies of a Node program, the master process isn't involved in every transaction. The master process manages the child processes, but when the children interact with I/O they do it directly, not through the master. This means if you set up a web server using cluster, requests don't go through your master process, but directly to the children. Hence, dispatching requests does not create a bottleneck in the system.
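As a minimal sketch of that shared-socket pattern (the two-worker count and port 8888 are just illustrative), every worker calls listen() on the same port and the operating system hands each incoming connection directly to one of them. Note that the longer listings below use the older 0.6-era cluster API, where the master reads worker.pid and listens for a 'death' event; newer Node versions expose worker.process.pid and an 'exit' event instead.

/* minimal shared-socket sketch */
var cluster = require('cluster'),
     http = require('http');

if (cluster.isMaster) {
     /* the master only forks; it never touches the HTTP traffic */
     cluster.fork();
     cluster.fork();
} else {
     /* both workers listen on the same port and share the socket */
     http.createServer(function(req, res) {
          res.end('handled by worker PID[' + process.pid + ']\n');
     }).listen(8888);
}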


Here is an example.
In it, the master process forks as many workers as there are processors on the system to serve HTTP requests. Each worker simply tells the client which process (by PID) is handling its socket. Whenever a request comes in, the worker sends a confirmation message containing its own process ID back to the master, and the master prints how many requests each worker has handled so far.
There is another nice feature: if one of the workers dies (the 'death' event on cluster), the master forks a replacement worker and attaches a 'message' listener to the new one.



/* running node on multi-processors */

var cluster = require('cluster'),
     http = require('http'),
     NR_CPUS = require('os').cpus().length,
     numReqs = {},
     total = 0; /* running total of requests */

if (cluster.isMaster) {
     console.log('INFO: there are ' + NR_CPUS + ' cpus here');
     console.log('INFO: the PID of the master process is ' + process.pid);

     /* Master process: fork one worker per CPU */
     for ( var i = 0; i < NR_CPUS; i += 1 ) {
          var worker = cluster.fork();
          numReqs[Number(worker.pid).toString()] = 0;
          console.log('INFO: worker PID[' + worker.pid + '] was spawned');

          worker.on('message', function(msg) {
               if ( msg.cmd && 'notifyRequest' === msg.cmd ) {
                    console.log('\n');
                    numReqs[msg.pid] += 1;
                    total += 1;
                    for ( var pid in numReqs ) {
                         if ( undefined !== numReqs[pid] ) {
                              console.log('worker PID[' + pid + '] gets ' + numReqs[pid] + ' requests now');
                         }
                    }
                    console.log('total requests are ' + total);
               }
          });
     }

     cluster.on('death', function(worker) {
          console.log('[worker ' + worker.pid + ' died]');
          /* remove the request counter of the dead process */
          numReqs[worker.pid] = undefined;

          /* fork another worker and add its request counter */
          var rebirth = cluster.fork();
          numReqs[Number(rebirth.pid).toString()] = 0;
          console.log('[worker ' + rebirth.pid + ' joins us now]');

          /* add the message event listener on the new worker */
          rebirth.on('message', function(msg) {
               if ( msg.cmd && 'notifyRequest' === msg.cmd ) {
                    console.log('\n');
                    numReqs[msg.pid] += 1;
                    total += 1;
                    for ( var pid in numReqs ) {
                         if ( undefined !== numReqs[pid] ) {
                              console.log('worker PID[' + pid + '] gets ' + numReqs[pid] + ' requests now');
                         }
                    }
                    console.log('total requests are ' + total);
               }
          });
     });

} else {
     /* Child processes serve the HTTP requests */
     http.createServer(function(req, res) {
          /* send a message to the master process */
          process.send({
               cmd: 'notifyRequest',
               pid: process.pid.toString()
          });
          res.writeHead(200, {
               'Content-Type': 'text/plain'
          });
          res.end('worker PID[' + process.pid + '] starts serving you now :)\n');
     }).listen(8888);
}

Test it with curl -i http://my.domain.com:8888: you get a 200 response whose body names the worker PID that served the request, and the master's console prints the updated per-worker request counts.
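If you would rather exercise the counters from a script, a small helper like the following (a hypothetical addition, not part of the article's code) fires ten requests at a local instance on port 8888 and prints each worker's reply:

/* fire ten requests so the master's per-worker counters can be watched */
var http = require('http');

for ( var i = 0; i < 10; i += 1 ) {
     http.get({ host: 'localhost', port: 8888, path: '/' }, function(res) {
          res.setEncoding('utf8');
          res.on('data', function(chunk) {
               process.stdout.write(chunk);
          });
     }).on('error', function(err) {
          console.error('request failed: ' + err.message);
     });
}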

Furthermore, I added some features to this program:
1). Workers check their memory usage and report it to the master every second; each worker's last-seen timestamp on the master is refreshed whenever a message arrives. If a worker uses too much memory, the master prints a warning. If a worker stays stuck in a callback for more than 6 seconds (so the master has not heard from it), it is killed and the master spawns a new one to carry on its work.
2). The program is abstracted into separate components:
     - workerSet: stores the workers and exposes public APIs for manipulating their bookkeeping info.
     - msgSys: the message system used for communication between the workers and the master.
3). The master is still notified of how many requests each worker has received from clients.

Here is the updated cluster.js:
/* running node on multi-processors */

var cluster = require('cluster'),
     http = require('http'),
     NR_CPUS = require('os').cpus().length,
     rssWarn = (50 * 1024 * 1024),
     workerSet = {

          /* the actual worker set */
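          /* each entry is keyed by the worker PID (as a string) and holds
           * { process: worker handle, req: request counter, dur: last-seen timestamp } */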
          __workerSet: {
          },

          /* total requests getting from clients */
          total: 0,
         
          /* if we have this worker already */
          isWorkerExist: function(worker) {
               return ( worker && worker.pid
                         && undefined !== this.__workerSet[Number(worker.pid).toString()] ) ? true : false;
          },

          pushWorker: function(worker) {
               if ( ! this.isWorkerExist(worker) ) {
                    this.__workerSet[Number(worker.pid).toString()] = {};
                    this.__workerSet[Number(worker.pid).toString()].process = worker;
               }
               /* for cascading */
               return this;
          },

          popWorker: function(worker) {
               if ( this.isWorkerExist(worker) ) {
                    this.__workerSet[Number(worker.pid).toString()] = undefined;
               }
               /* for cascading */
               return this;
          },

          getPid: function(worker) {
               return ( true === this.isWorkerExist(worker) ) ?
                    this.__workerSet[Number(worker.pid).toString()].process.pid : undefined;
          },

          getMemUsage: function(worker) {
               return ( true === this.isWorkerExist(worker) ) ?
                    this.__workerSet[Number(worker.pid).toString()].process.memoryUsage() : undefined;
          },

          /* numReqs(worker) reads the request counter; numReqs(worker, n)
           * initializes it to 0 when it is still undefined, otherwise adds n. */
          numReqs: function(worker, num) {
               if ( 1 == arguments.length )  {
                    /* getter */
                    return ( true === this.isWorkerExist(worker)
                              && undefined !== this.__workerSet[Number(worker.pid).toString()].req ) ?
                                   this.__workerSet[Number(worker.pid).toString()].req : undefined;
               } else if ( 2 == arguments.length ) {
                    /* setter */
                    if ( true === this.isWorkerExist(arguments[0]) ) {
                         var req = this.__workerSet[Number(arguments[0].pid).toString()].req;
                         this.__workerSet[Number(arguments[0].pid).toString()].req =
                              ( undefined === req ) ? 0 : req + num;
                    }
                    return this;
               }

          },

          /* setter and getter */
          duration: function(worker, dur) {
               if ( 1 == arguments.length )  {
                    /* getter: check the worker exists before touching .dur */
                    return ( true === this.isWorkerExist(worker) ) ?
                         this.__workerSet[Number(worker.pid).toString()].dur : undefined;
               } else if ( 2 == arguments.length ) {
                    /* setter */
                    if ( true === this.isWorkerExist(worker) ) {
                         this.__workerSet[Number(worker.pid).toString()].dur = dur;
                    }
                    return this;
               }
          },

          foreach: function() {
               return this.__workerSet;
          }
     },

   /**
     * The message system
     * Send methods
     *      @memNotify:     notify the master how much memory the worker is using.
     *      @numReqsNotify: notify the master that the worker received another request from a client.
     *
     * Notified method
     *      @notified:      listen for the messages above on a worker: count its requests, warn on
     *                      high memory usage, and refresh its last-seen timestamp so the master can
     *                      tell when a callback has been running for too long.
     */
     msgSys = {
          memNotify: function(worker) {
               worker.send({
                    pid: worker.pid.toString(),
                    cmd: 'memNotify',
                    memory: worker.memoryUsage()
               });
          },
         
          numReqsNotify: function(worker) {
               worker.send({
                    pid: worker.pid.toString(),
                    cmd: 'numReqsNotify'
               });
          },
         
          notified: function(worker) {

               worker.on('message', function(msg) {
                    /* duration of workers would be updated every time master thread get notified. */
                    workerSet.duration(worker, new Date().getTime());

                    if ( msg.cmd && 'numReqsNotify' === msg.cmd ) {
                         console.log('\n');

                         if ( workerSet.isWorkerExist(worker) ) {
                              workerSet.numReqs(worker, 1);
                              workerSet.total += 1;
                         }

                         for ( var each in workerSet.foreach() ) {
                              /* get the real object */
                              var obj = workerSet.foreach()[each];

                              if ( undefined !== obj ) {
                                   console.log('worker PID[' +
                                                  obj.process.pid +
                                                  '] gets ' + obj.req +
                                                  ' requests now');
                              }
                         }
                         console.log('total requests are ' + workerSet.total);

                    } else if ( msg.cmd && 'memNotify' === msg.cmd ) {
                         /* the last-seen timestamp was already refreshed above */
                         if ( msg.memory.rss > rssWarn ) {
                              console.log('worker ' + msg.pid + ' is using too much memory.');
                         }
                    }
               });
          }
     };

function createWorker() {
     var worker = cluster.fork();
     /* worker set initializations. */
     workerSet.pushWorker(worker);
     workerSet.numReqs(worker, 0);
     /* allow boot time */
     workerSet.duration(worker, new Date().getTime() - 1000);
     console.log('INFO: worker PID[' + worker.pid + '] was spawned');

     /* notifications */
     msgSys.notified(worker);
}

if (cluster.isMaster) {
     console.log('INFO: there are ' + NR_CPUS + ' cpus here');
     console.log('INFO: the PID of main thread is ' + process.pid);
     /* Master process, for doing fork stuffs */
     for ( var i = 0; i < NR_CPUS; i += 1 ) {
          createWorker();
     }

     /* kill the worker running out of 6 seconds. */
     setInterval(function() {
          var time = new Date().getTime();
          for ( var each in workerSet.foreach() ) {
               /* get the real object */
               var obj = workerSet.foreach()[each];

               if ( undefined !== obj ) {
                    if ( obj.dur && obj.dur + 6000 < time ) {
                         obj.process.kill();
                    }
               }
          }
     }, 1000);

     /* If worker died */
     cluster.on('death', function(worker) {
          console.log('INFO: worker PID[' + worker.pid + '] died.');
          /* remove the request counter of dead process */
          workerSet.popWorker(worker);
          /* fork another worker and add its request counter */
          createWorker();
     });

} else {
     /* Child processes serving the HTTP protocol */
     http.createServer(function(req, res) {
          res.writeHead(200, {
               'Content-Type': 'text/plain'
          });

          /* notify the master each time a 'request' event fires, so it can
           * keep a per-worker request count. */
          msgSys.numReqsNotify(process);

          /* deliberately wedge roughly 1 in 10 requests in a busy loop */
          var r = Math.floor(Math.random() * 10);
          if ( 4 == r ) {
               res.write('Stopped ' + process.pid + ' from ever finishing\n');
               while (true) {
                    continue;
               }
          }

          res.end('worker PID[' + process.pid + '] starts serving you now:)\n');

     }).listen(8888);

     /* report worker memory usage once a second. */
     setInterval(function() {
          msgSys.memNotify(process);
     }, 1000);
}
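If you keep hitting port 8888, roughly one request in ten wedges its worker in the busy loop: that worker stops answering and stops sending memNotify messages, its last-seen timestamp goes stale, the master's one-second sweep kills it once the timestamp is more than six seconds old, the 'death' handler fires, and createWorker() brings up a replacement whose request counter starts from zero.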