(1) Core logic of the destroy-connection thread:
First, it helps to know what a few key objects represent:
- DruidConnectionHolder[] evictConnections: every connection placed in this array is slated for closing; they are all batch-closed together at the end.
- keepAliveCount: the number of connections that need a keep-alive check.
- evictCount: the number of connections in evictConnections that need to be closed.
Step one: acquire the lock, then iterate over every connection currently in the pool (the array of DruidConnectionHolder entries):
- If the idle time is below both minEvictableIdleTimeMillis and keepAliveBetweenTimeMillis, break out of the loop: this connection needs no reclaiming, and every later entry was returned to the pool even more recently, so none of them do either.
- If idleMillis >= minEvictableIdleTimeMillis, the connection is an eviction candidate: while evicting still leaves at least minIdle connections in the pool (i < checkCount) it is evicted; otherwise it is evicted only once idleMillis exceeds maxEvictableIdleTimeMillis.
- If keepAlive && idleMillis >= keepAliveBetweenTimeMillis, the idle time has passed the keep-alive interval, so the connection is moved into DruidConnectionHolder[] keepAliveConnections for a liveness check rather than being closed outright.
- After the loop, if keepAlive && poolingCount + activeCount < minIdle, the pool has fallen below the configured minIdle, and ⚠️ new connections must be created later to refill it.
- If evictCount > 0, the connections collected in DruidConnectionHolder[] evictConnections are closed and destroyed.
- If keepAliveCount > 0, each connection in DruidConnectionHolder[] keepAliveConnections gets keep-alive handling: it is validated and then either returned to the pool or discarded.
- ⚠️ Finally, if the refill flag above was set, emptySignal() is called to wake the create-connection thread so new connections bring the pool back up to the configured minIdle.
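Setting aside the phyTimeoutMillis check, the per-connection branch conditions above can be condensed into a small decision function. This is a simplified, self-contained sketch, not Druid's code: the class, method, and Action enum are made up for illustration, and the millisecond values used in main are Druid's documented defaults (minEvictableIdleTimeMillis 30 min, maxEvictableIdleTimeMillis 7 h, keepAliveBetweenTimeMillis 2 min).

```java
public class ShrinkDecision {
    enum Action { EVICT, KEEP_ALIVE, KEEP }

    // Simplified model of shrink()'s per-connection decision.
    // aboveMinIdle stands in for "i < checkCount": evicting this
    // connection still leaves at least minIdle connections pooled.
    static Action classify(long idleMillis, boolean aboveMinIdle, boolean keepAlive,
                           long minEvictableIdleTimeMillis,
                           long maxEvictableIdleTimeMillis,
                           long keepAliveBetweenTimeMillis) {
        if (idleMillis < minEvictableIdleTimeMillis && idleMillis < keepAliveBetweenTimeMillis) {
            return Action.KEEP; // in shrink() this is a break: later entries are even fresher
        }
        if (idleMillis >= minEvictableIdleTimeMillis
                && (aboveMinIdle || idleMillis > maxEvictableIdleTimeMillis)) {
            return Action.EVICT;
        }
        if (keepAlive && idleMillis >= keepAliveBetweenTimeMillis) {
            return Action.KEEP_ALIVE;
        }
        return Action.KEEP;
    }

    public static void main(String[] args) {
        long minEvict = 30 * 60 * 1000L;        // 30 minutes
        long maxEvict = 7 * 60 * 60 * 1000L;    // 7 hours
        long keepAliveBetween = 2 * 60 * 1000L; // 2 minutes

        // idle 1 min: below both thresholds -> KEEP
        System.out.println(classify(60_000L, true, true, minEvict, maxEvict, keepAliveBetween));
        // idle 31 min, pool above minIdle -> EVICT
        System.out.println(classify(31 * 60 * 1000L, true, true, minEvict, maxEvict, keepAliveBetween));
        // idle 31 min, pool at minIdle, keepAlive on -> KEEP_ALIVE
        System.out.println(classify(31 * 60 * 1000L, false, true, minEvict, maxEvict, keepAliveBetween));
    }
}
```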
At its core, shrink() maintains the number of connections currently in the pool. Here is the code of the shrink() method:
public void shrink(boolean checkTime, boolean keepAlive) {
    try {
        lock.lockInterruptibly();
    } catch (InterruptedException e) {
        return;
    }

    boolean needFill = false;
    int evictCount = 0;
    int keepAliveCount = 0;
    try {
        if (!inited) {
            return;
        }

        final int checkCount = poolingCount - minIdle;
        final long currentTimeMillis = System.currentTimeMillis();
        for (int i = 0; i < poolingCount; ++i) {
            DruidConnectionHolder connection = connections[i];

            // checkTime decides whether time-based checks apply,
            // driven by the user-configured phyTimeoutMillis and
            // minEvictableIdleTimeMillis parameters
            if (checkTime) {
                if (phyTimeoutMillis > 0) {
                    long phyConnectTimeMillis = currentTimeMillis - connection.connectTimeMillis;
                    if (phyConnectTimeMillis > phyTimeoutMillis) {
                        evictConnections[evictCount++] = connection;
                        continue;
                    }
                }

                long idleMillis = currentTimeMillis - connection.lastActiveTimeMillis;

                if (idleMillis < minEvictableIdleTimeMillis
                        && idleMillis < keepAliveBetweenTimeMillis
                ) {
                    break;
                }

                if (idleMillis >= minEvictableIdleTimeMillis) {
                    if (checkTime && i < checkCount) {
                        evictConnections[evictCount++] = connection;
                        continue;
                    } else if (idleMillis > maxEvictableIdleTimeMillis) {
                        evictConnections[evictCount++] = connection;
                        continue;
                    }
                }

                if (keepAlive && idleMillis >= keepAliveBetweenTimeMillis) {
                    keepAliveConnections[keepAliveCount++] = connection;
                }
            } else {
                // without time checks, simply evict down to minIdle
                if (i < checkCount) {
                    evictConnections[evictCount++] = connection;
                } else {
                    break;
                }
            }
        }

        int removeCount = evictCount + keepAliveCount;
        if (removeCount > 0) {
            System.arraycopy(connections, removeCount, connections, 0, poolingCount - removeCount);
            Arrays.fill(connections, poolingCount - removeCount, poolingCount, null);
            poolingCount -= removeCount;
        }
        keepAliveCheckCount += keepAliveCount;

        // if too few live connections remain in the pool, new ones must
        // be created later to satisfy the user-configured minIdle
        if (keepAlive && poolingCount + activeCount < minIdle) {
            needFill = true;
        }
    } finally {
        lock.unlock();
    }

    // close and destroy the evicted connections
    if (evictCount > 0) {
        for (int i = 0; i < evictCount; ++i) {
            DruidConnectionHolder item = evictConnections[i];
            Connection connection = item.getConnection();
            JdbcUtils.close(connection);
            destroyCountUpdater.incrementAndGet(this);
        }
        Arrays.fill(evictConnections, null);
    }

    // run keep-alive handling on the connections that need it
    if (keepAliveCount > 0) {
        // keep order
        for (int i = keepAliveCount - 1; i >= 0; --i) {
            DruidConnectionHolder holder = keepAliveConnections[i];
            Connection connection = holder.getConnection();
            holder.incrementKeepAliveCheckCount();

            boolean validate = false;
            try {
                this.validateConnection(connection);
                validate = true;
            } catch (Throwable error) {
                if (LOG.isDebugEnabled()) {
                    LOG.debug("keepAliveErr", error);
                }
                // skip
            }

            boolean discard = !validate;
            if (validate) {
                holder.lastKeepTimeMillis = System.currentTimeMillis();
                boolean putOk = put(holder, 0L);
                if (!putOk) {
                    discard = true;
                }
            }

            if (discard) {
                try {
                    connection.close();
                } catch (Exception e) {
                    // skip
                }

                lock.lock();
                try {
                    discardCount++;

                    if (activeCount + poolingCount <= minIdle) {
                        emptySignal();
                    }
                } finally {
                    lock.unlock();
                }
            }
        }
        this.getDataSourceStat().addKeepAliveCheckCount(keepAliveCount);
        Arrays.fill(keepAliveConnections, null);
    }

    // create new connections to satisfy the user-configured minIdle
    if (needFill) {
        lock.lock();
        try {
            int fillCount = minIdle - (activeCount + poolingCount + createTaskCount);
            for (int i = 0; i < fillCount; ++i) {
                emptySignal();
            }
        } finally {
            lock.unlock();
        }
    }
}
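The compaction step in the middle of shrink() (System.arraycopy followed by Arrays.fill) is easiest to see in isolation: the first removeCount slots hold the longest-idle connections, which have already been copied into evictConnections or keepAliveConnections, so the survivors shift left and the tail is nulled out. Here is a minimal sketch with strings standing in for DruidConnectionHolder entries (the class and method names are made up for illustration):

```java
import java.util.Arrays;

public class PoolCompaction {
    // Mirrors shrink()'s compaction: drop the first removeCount slots,
    // shift the survivors to the front, null the now-unused tail,
    // and return the new pooling count.
    static int compact(String[] connections, int poolingCount, int removeCount) {
        if (removeCount > 0) {
            System.arraycopy(connections, removeCount, connections, 0, poolingCount - removeCount);
            Arrays.fill(connections, poolingCount - removeCount, poolingCount, null);
            poolingCount -= removeCount;
        }
        return poolingCount;
    }

    public static void main(String[] args) {
        // 5 pooled connections; c0 and c1 (idle longest) were just
        // moved to evictConnections / keepAliveConnections
        String[] pool = {"c0", "c1", "c2", "c3", "c4", null, null, null};
        int poolingCount = compact(pool, 5, 2);
        System.out.println(poolingCount + " " + Arrays.toString(pool));
        // prints: 3 [c2, c3, c4, null, null, null, null, null]
    }
}
```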
(2) When there are many data sources, so that many DataSources have to be created, how can this be optimized?
- First, note that every DruidDataSource creates three threads of its own: a log-stats thread, a create-connection thread, and a destroy-connection thread. With an unbounded number of DataSources this multiplies into a large number of threads and wastes resources.
- Druid lets you pass in shared scheduled thread pools, so that one pool centrally manages the create-connection and destroy-connection work of all DataSources, cutting the thread-creation overhead (createScheduler and destroyScheduler below are ScheduledExecutorService instances):
druidDataSourceSingle.setCreateScheduler(createScheduler);
druidDataSourceSingle.setDestroyScheduler(destroyScheduler);
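A minimal wiring sketch of this optimization, assuming Druid's DruidDataSource with the setCreateScheduler/setDestroyScheduler setters mentioned above (the helper class and the pool sizes here are illustrative, and the Druid dependency must be on the classpath); this is configuration wiring rather than standalone logic:

```java
import com.alibaba.druid.pool.DruidDataSource;

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class SharedSchedulers {
    public static void configure(List<DruidDataSource> dataSources) {
        // one shared pair of schedulers for ALL data sources, instead of
        // one create thread plus one destroy thread per DataSource
        ScheduledExecutorService createScheduler = Executors.newScheduledThreadPool(2);
        ScheduledExecutorService destroyScheduler = Executors.newScheduledThreadPool(2);

        for (DruidDataSource ds : dataSources) {
            ds.setCreateScheduler(createScheduler);
            ds.setDestroyScheduler(destroyScheduler);
        }
    }
}
```

With this in place, connection creation and eviction for every data source run as tasks on the two shared schedulers, so the thread count stays constant no matter how many DataSources are registered.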