Creating a Zero-Filled Array

This post looks at several ways to create a zero-filled array in JavaScript, examines how the way an array is declared affects map, and ends with the best-performing option.


The Problem

At first glance, creating a zero-filled array looks like the simplest thing imaginable:

var arr = [0,0,0,0,0,0];

Sometimes, though, you need a fairly long zero-filled array (bucket sort is one example), and a literal is no longer practical, since nobody is going to type out tens of thousands of zeros by hand. So the other day, when I needed one, I wrote this:

var arr = new Array(10);
// equivalent to:
// var arr = []; arr.length = 10;

arr = arr.map(function(value,index,array){
    return 0;
});

Then I incremented each element of arr:

arr[i]++;

But when I finally ran console.log(arr), the output was:

[NaN,NaN,NaN,NaN,NaN,NaN,NaN,NaN,NaN,NaN]

My first thought was that I had written the map callback wrong, so I tried a quick sanity check:

var arr = [1,2,3,4,5,6,7,8,9,10];
arr = arr.map(function(value,index,array){
    return 0;
});
console.log(arr);

The result was [0,0,0,0,0,0,0,0,0,0];

In other words, an array declared with new cannot have its values initialized through map.
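A minimal demo of the difference, assuming a Node.js or browser console (this example is mine, not from the original post):

// An array of explicit undefineds has real elements, so map visits them:
[undefined, undefined].map(function () { return 0; });  // [0, 0]

// new Array(2) only has a length; map skips the holes and returns new holes:
new Array(2).map(function () { return 0; });            // [ <2 empty items> ] in Node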

The Fix

Searching the web for this problem turned up Stackoverflow - Most efficient way to create a zero filled JavaScript array?, which had some useful answers. In the comments under zertosh's answer, he explains:

When you create an array with the new keyword and a length, what you get is something like { length: 5, __proto__: Array.prototype }, not the shape of an ordinary array such as { 0: 0, 1: 0, 2: 0, 3: 0, 4: 0, length: 5, __proto__: Array.prototype }. In other words, the object has a length but not a single actual element. When you call map and similar methods on it, there are no elements to operate on, so the callback never runs. That also explains the NaN output above: map returned another hole-filled array, reading a hole yields undefined, and undefined incremented with ++ becomes NaN.
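You can observe the holes directly with the in operator; a minimal sketch of mine, not from the answer:

var sparse = new Array(3);   // { length: 3 } and nothing else
console.log(0 in sparse);    // false: index 0 is a hole, not an element

var dense = [0, 0, 0];
console.log(0 in dense);     // true: index 0 is a real own property

// map only visits real elements, so on the sparse array the callback never runs:
sparse.map(function (v) { return v + 1; });  // still three holes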

Going Further

His answer also includes a rather wild one-liner:

Array.apply(null, Array(5)).map(Number.prototype.valueOf,0);
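Why this works: Array(5) is sparse, but apply expands it into an argument list, and each hole becomes a real undefined argument, so Array.apply(null, Array(5)) yields five actual undefined elements that map can visit. In ES6 the same idea is usually written with Array.from or the spread operator; a sketch of my own, not from the original answer:

// Both produce a dense [0, 0, 0, 0, 0]:
var a = Array.from({ length: 5 }, function () { return 0; });
var b = [...Array(5)].map(function () { return 0; });  // spread treats holes as undefined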

That line creates a zero-filled array of length 5. Out of curiosity, I collected a few other ways to do it:

new Array(5 + 1).join('0').split('');  // careful: the elements are the string '0', not the number 0
var ary = new Uint8Array(10);          // typed arrays are zero-initialized by the spec
// the fill method added in ES6
var arr = new Array(10);
arr.fill(0);                           // fill writes real elements, even into holes
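A quick check of the join/split trick shows that string pitfall (a sketch of mine):

var arr = new Array(5 + 1).join('0').split('');
console.log(arr);     // ['0', '0', '0', '0', '0']
arr[0]++;             // works here only because ++ coerces '0' to the number 0
console.log(arr[0]);  // 1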

And of course there is the bluntest approach of all:

var arr = [];
for (var n = 0; n < 100; n++) arr[n] = 0;

Someone benchmarked all of these and found that the blunt arr[n] = 0 loop is actually the fastest; even arr.push(0) loses to it badly. See the zero-filled array creation test on jsperf.com.
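If you want to reproduce the comparison yourself, here is a rough console.time harness; it is my own sketch, not the original jsperf test, and the numbers will vary by engine:

function bench(label, fn) {
  console.time(label);
  for (var i = 0; i < 100; i++) fn(100000);  // repeat to smooth out noise
  console.timeEnd(label);
}

bench('index assignment', function (n) {
  var a = [];
  for (var i = 0; i < n; i++) a[i] = 0;
  return a;
});

bench('push', function (n) {
  var a = [];
  for (var i = 0; i < n; i++) a.push(0);
  return a;
});

bench('fill', function (n) {
  return new Array(n).fill(0);
});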

It seems the "elegant" (read: show-offy) approach is not always the best one.
