LWC 67: 762. Prime Number of Set Bits in Binary Representation

This post covers an algorithm problem: count the integers in a given range whose binary representation contains a prime number of set bits. Solutions are given in both Java and Python.

Link: 762. Prime Number of Set Bits in Binary Representation

Problem:

Given two integers L and R, find the count of numbers in the range [L, R] (inclusive) having a prime number of set bits in their binary representation.

(Recall that the number of set bits an integer has is the number of 1s present when written in binary. For example, 21 written in binary is 10101 which has 3 set bits. Also, 1 is not a prime.)

Example 1:

Input: L = 6, R = 10
Output: 4
Explanation:
6 -> 110 (2 set bits, 2 is prime)
7 -> 111 (3 set bits, 3 is prime)
9 -> 1001 (2 set bits, 2 is prime)
10 -> 1010 (2 set bits, 2 is prime)

Example 2:

Input: L = 10, R = 15
Output: 5
Explanation:
10 -> 1010 (2 set bits, 2 is prime)
11 -> 1011 (3 set bits, 3 is prime)
12 -> 1100 (2 set bits, 2 is prime)
13 -> 1101 (3 set bits, 3 is prime)
14 -> 1110 (3 set bits, 3 is prime)
15 -> 1111 (4 set bits, 4 is not prime)

Note:

  • L, R will be integers L <= R in the range [1, 10^6].
  • R - L will be at most 10000.

Approach:
Check whether the count of 1s in each number's binary representation is prime. An int has at most 32 bits, so the set-bit count never exceeds 32; it suffices to enumerate the set of primes below 32 and then simply brute-force over every number in [L, R], testing its popcount against that set.
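
The prime set can be hard-coded, as both solutions below do, or derived on the fly with a short Sieve of Eratosthenes. A minimal Python sketch (the helper name primes_below is illustrative, not part of either solution):

    def primes_below(n):
        # Sieve of Eratosthenes: returns the set of primes strictly below n
        is_prime = [False, False] + [True] * (n - 2)
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                for m in range(p * p, n, p):
                    is_prime[m] = False
        return {p for p in range(n) if is_prime[p]}

    # primes_below(32) == {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31}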

Java version:

    public int countPrimeSetBits(int L, int R) {
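        // Primality lookup for popcounts 0..31; since R <= 10^6 < 2^20, at most 20 bits are set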
        boolean[] isPrime = new boolean[32];
        int[] primes = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31};
        for (int prime : primes) isPrime[prime] = true;
        int ans = 0;
        for (int i = L; i <= R; ++i) {
            if (isPrime[Integer.bitCount(i)]) ans += 1;
        }
        return ans;
    }

Python version:

    def countPrimeSetBits(self, L, R):
        """
        :type L: int
        :type R: int
        :rtype: int
        """
        prime = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31}
        ans = 0
        for num in range(L, R + 1):
            cnt = bin(num).count('1')  # number of set bits
            if cnt in prime:
                ans += 1
        return ans
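
A quick sanity check against the two examples above, assuming the method is wrapped in a class Solution as on LeetCode:

    s = Solution()
    print(s.countPrimeSetBits(6, 10))   # 4  (6, 7, 9, 10 qualify)
    print(s.countPrimeSetBits(10, 15))  # 5  (everything but 15 qualifies)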