Client-side call flow for the table.put(put) operation
HTable table = new HTable(conf, Bytes.toBytes(tableName));
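For reference, a complete client-side put looks like the following; this is a minimal sketch using the 1.x ConnectionFactory/Table API (the HTable in the snippet above implements the same Table interface), and the table name, column family, qualifier and values are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("test_table"))) {
      Put put = new Put(Bytes.toBytes("row1"));                                    // row key
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v1")); // family, qualifier, value
      table.put(put);                                                              // triggers the flow described below
    }
  }
}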
The call flow is the same as the delete flow described above.
First, a mutator object (BufferedMutatorImpl) is created:
new BufferedMutatorImpl(this, rpcCallerFactory, rpcControllerFactory, params);
and then the following is called:
BufferedMutatorImpl.mutate(Mutation m)
When the BufferedMutatorImpl object is constructed, an AsyncProcess is created underneath it:
ap = new AsyncProcess(connection, conf, pool, rpcCallerFactory, true, rpcFactory);
which performs the asynchronous, batched submission of the operations via
AsyncProcess.submit
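The same buffered, asynchronous write path is exposed to applications through the public BufferedMutator API, which, as described above, is what HTable.put uses internally. A minimal sketch (imports come from org.apache.hadoop.hbase.client as in the previous example, plus BufferedMutator and BufferedMutatorParams; the table name and buffer size are placeholders):

Configuration conf = HBaseConfiguration.create();
BufferedMutatorParams params = new BufferedMutatorParams(TableName.valueOf("test_table"))
    .writeBufferSize(4 * 1024 * 1024);              // flush once roughly 4 MB of mutations are buffered
try (Connection conn = ConnectionFactory.createConnection(conf);
     BufferedMutator mutator = conn.getBufferedMutator(params)) {
  Put put = new Put(Bytes.toBytes("row1"));
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v1"));
  mutator.mutate(put);                              // buffered, handed to AsyncProcess in batches
  mutator.flush();                                  // force out anything still buffered
}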
submit() first determines which region each submitted row belongs to:
RegionLocations locs = connection.locateRegion(
    tableName, r.getRow(), true, true, RegionReplicaUtil.DEFAULT_REPLICA_ID);
This is the same row-lookup process as in the delete flow: the client first obtains the location of the meta table from ZooKeeper, then scans meta on its regionserver to find out which regionserver hosts the region containing the row.
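The same lookup is available to applications through the public RegionLocator API, which is a convenient way to check which regionserver a given row maps to (a sketch; table name and row key are placeholders):

try (Connection conn = ConnectionFactory.createConnection(conf);
     RegionLocator locator = conn.getRegionLocator(TableName.valueOf("test_table"))) {
  HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes("row1"));
  System.out.println(loc.getRegionInfo().getRegionNameAsString() + " -> " + loc.getServerName());
}

Once every row has been located, the actions are grouped by target regionserver (actionsByServer) and handed off: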
submitMultiActions(tableName, retainedActions, nonceGroup, callback, null, needResults,
    locationErrors, locationErrorRows, actionsByServer, pool)
The code below then creates an asynchronous request-future object for the submission:
<CResult> AsyncRequestFuture submitMultiActions(TableName tableName,
    List<Action<Row>> retainedActions, long nonceGroup, Batch.Callback<CResult> callback,
    Object[] results, boolean needResults, List<Exception> locationErrors,
    List<Integer> locationErrorRows, Map<ServerName, MultiAction<Row>> actionsByServer,
    ExecutorService pool) {
  AsyncRequestFutureImpl<CResult> ars = createAsyncRequestFuture(
      tableName, retainedActions, nonceGroup, pool, callback, results, needResults);
  // Add location errors if any
  if (locationErrors != null) {
    for (int i = 0; i < locationErrors.size(); ++i) {
      int originalIndex = locationErrorRows.get(i);
      Row row = retainedActions.get(originalIndex).getAction();
      ars.manageError(originalIndex, row,
          Retry.NO_LOCATION_PROBLEM, locationErrors.get(i), null);
    }
  }
  ars.sendMultiAction(actionsByServer, 1, null, false);
  return ars;
}
Then, based on which regionserver each batch is destined for, multiple threads are started to send the requests:
for (Map.Entry<ServerName, MultiAction<Row>> e : actionsByServer.entrySet()) {
  ServerName server = e.getKey();
  MultiAction<Row> multiAction = e.getValue();
  incTaskCounters(multiAction.getRegions(), server);
  Collection<? extends Runnable> runnables = getNewMultiActionRunnable(server, multiAction,
      numAttempt);
A corresponding runnable is created for each of these batches (a standalone sketch of this dispatch pattern follows the snippet below):
Runnable runnable =
    new SingleServerRequestRunnable(runner.getActions(), numAttempt, server,
        callsInProgress);
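The dispatch pattern here (group the actions by destination server, then submit one task per server to the shared thread pool) can be illustrated with the following self-contained sketch; the names and types are illustrative stand-ins, not the actual HBase classes:

import java.util.*;
import java.util.concurrent.*;

public class PerServerDispatch {
  public static void main(String[] args) throws Exception {
    // Stand-in for actionsByServer: destination server -> actions routed to it.
    Map<String, List<String>> actionsByServer = new HashMap<>();
    actionsByServer.put("rs1:16020", Arrays.asList("put row1", "put row2"));
    actionsByServer.put("rs2:16020", Collections.singletonList("put row9"));

    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> inFlight = new ArrayList<>();
    for (Map.Entry<String, List<String>> e : actionsByServer.entrySet()) {
      final String server = e.getKey();
      final List<String> actions = e.getValue();
      // One runnable per server, in the spirit of SingleServerRequestRunnable.
      inFlight.add(pool.submit(() ->
          System.out.println("sending " + actions.size() + " action(s) to " + server)));
    }
    for (Future<?> f : inFlight) {
      f.get();                      // wait for every per-server send to finish
    }
    pool.shutdown();
  }
}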
These runnables are submitted to the thread pool and run asynchronously. Inside each runnable a MultiServerCallable is created:
new MultiServerCallable<Row>(connection, tableName, server, this.rpcFactory, multi)
Inside the callable, the protobuf objects are built, the request data is assembled, and the request is sent to the regionserver:
for (Map.Entry<byte[], List<Action<R>>> e: this.multiAction.actions.entrySet()) {
  final byte [] regionName = e.getKey();
  final List<Action<R>> actions = e.getValue();
  regionActionBuilder.clear();
  regionActionBuilder.setRegion(RequestConverter.buildRegionSpecifier(
      HBaseProtos.RegionSpecifier.RegionSpecifierType.REGION_NAME, regionName));
  if (this.cellBlock) {
    // Presize. Presume at least a KV per Action. There are likely more.
    if (cells == null) cells = new ArrayList<CellScannable>(countOfActions);
    // Send data in cellblocks. The call to buildNoDataMultiRequest will skip RowMutations.
    // They have already been handled above. Guess at count of cells
    regionActionBuilder = RequestConverter.buildNoDataRegionAction(regionName, actions, cells,
        regionActionBuilder, actionBuilder, mutationBuilder);
  } else {
    regionActionBuilder = RequestConverter.buildRegionAction(regionName, actions,
        regionActionBuilder, actionBuilder, mutationBuilder);
  }
  multiRequestBuilder.addRegionAction(regionActionBuilder.build());
}

// Controller optionally carries cell data over the proxy/service boundary and also
// optionally ferries cell response data back out again.
if (cells != null) controller.setCellScanner(CellUtil.createCellScanner(cells));
controller.setPriority(getTableName());
controller.setCallTimeout(callTimeout);
ClientProtos.MultiResponse responseProto;
ClientProtos.MultiRequest requestProto = multiRequestBuilder.build();
try {
  responseProto = getStub().multi(controller, requestProto);
} catch (ServiceException e) {
  throw ProtobufUtil.getRemoteException(e);
}
if (responseProto == null) return null; // Occurs on cancel
return ResponseConverter.getResults(requestProto, responseProto, controller.cellScanner());
}
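For completeness: application code can also drive this same multi-request path directly through Table.batch, which in the 1.x client groups the supplied actions per regionserver and sends them through the AsyncProcess/MultiServerCallable machinery described above. A minimal sketch, reusing the table handle and imports from the first example plus org.apache.hadoop.hbase.client.Row and Delete (rows, family and qualifier are placeholders):

List<Row> actions = new ArrayList<Row>();
actions.add(new Put(Bytes.toBytes("row1"))
    .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v1")));
actions.add(new Delete(Bytes.toBytes("row2")));
Object[] results = new Object[actions.size()];
table.batch(actions, results);   // one MultiRequest per target regionserver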