Virtual Shard

This post introduces the basic principle of the Virtual Shard technique and its role during system scale-out, and also points out the data-redistribution problem that the technique cannot solve.

For a DB Shard setup, Reshard is a problem that inevitably has to be faced when the system scales out. Resharding involves two sub-problems: 1. changing the hashing rules to fit the new number of nodes; 2. redistributing the existing data to the correct nodes. Virtual Shard is mainly aimed at the first problem.

The basic idea of Virtual Shard is to define a group of virtual nodes whose count is fixed, usually set to the maximum number of nodes the system is expected to ever need. The exact figure is not important as long as it is large enough, e.g. 100 or 1000. All data hashing and routing rules are defined against the virtual nodes. When the system first goes live, several virtual nodes are mapped onto one physical node; when the system later needs to scale out, only the mapping between virtual nodes and physical nodes has to change, while the hashing and routing rules stay untouched. That is the main purpose of Virtual Shard. For example, suppose a system has 100 virtual nodes and starts with two physical nodes, with the mapping: virtual nodes 1-50 map to physical node 1, and virtual nodes 51-100 map to physical node 2. When the system later scales out to four physical nodes, the mapping can simply be changed to: virtual nodes 1-25 to physical node 1, 26-50 to physical node 2, 51-75 to physical node 3, and 76-100 to physical node 4.
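As a concrete illustration, here is a minimal sketch of this routing scheme in Python, assuming 100 virtual nodes and MD5 as the hash function; the node names (db1 ... db4) and the helper functions are illustrative, not anything prescribed above.

```python
import hashlib

# Illustrative sketch: 100 fixed virtual nodes, evenly assigned to physical nodes.
VIRTUAL_NODES = 100

def build_mapping(physical_nodes):
    """Evenly assign the fixed set of virtual nodes to the given physical nodes."""
    mapping = {}
    per_node = VIRTUAL_NODES // len(physical_nodes)
    for v in range(1, VIRTUAL_NODES + 1):
        mapping[v] = physical_nodes[min((v - 1) // per_node, len(physical_nodes) - 1)]
    return mapping

def virtual_node_of(key):
    """Hash a key onto a virtual node; this rule never changes after go-live."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % VIRTUAL_NODES + 1

def route(key, mapping):
    """Route a key: key -> virtual node -> physical node."""
    return mapping[virtual_node_of(key)]

# Go-live with 2 physical nodes; later scale out to 4. Only the mapping changes,
# the key-to-virtual-node hashing stays exactly the same.
mapping_v1 = build_mapping(["db1", "db2"])
mapping_v2 = build_mapping(["db1", "db2", "db3", "db4"])
print(route("user:42", mapping_v1), route("user:42", mapping_v2))
```

Because the hashing rule only ever refers to virtual nodes, scaling out is reduced to swapping `mapping_v1` for `mapping_v2`; no routing code needs to change.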

However, Virtual Shard only solves the first problem; it can do nothing about redistributing the data itself. There is currently no general-purpose technique for redistributing data during scale-out, because that work is highly application-specific.
