34. Search for a Range

This post presents an algorithm that finds the starting and ending positions of a given target value in a sorted integer array, using binary search to reach the required O(log n) time complexity. If the target does not exist in the array, the algorithm returns [-1, -1].

Given an array of integers sorted in ascending order, find the starting and ending position of a given target value.
Your algorithm's runtime complexity must be in the order of O(log n).
If the target is not found in the array, return [-1, -1].
For example,
Given [5, 7, 7, 8, 8, 10] and target value 8,
return [3, 4].

class Solution {
public:
    vector<int> searchRange(vector<int>& nums, int target) {
        int lo = 0, hi = (int)nums.size() - 1, mid = 0;
        vector<int> res;
        while (lo <= hi) {
            mid = lo + (hi - lo) / 2;
            if (nums[mid] == target) {
                // Found one occurrence; expand outward to the range boundaries.
                int start = mid, end = mid;
                while (start > 0 && nums[start] == nums[start - 1]) {
                    --start;
                }
                // Stop before the last index so nums[end + 1] stays in bounds.
                while (end + 1 < (int)nums.size() && nums[end] == nums[end + 1]) {
                    ++end;
                }
                res.push_back(start);
                res.push_back(end);
                return res;
            }
            if (target < nums[mid]) {
                hi = mid - 1;
            } else {
                lo = mid + 1;
            }
        }
        // Target not present.
        res.push_back(-1);
        res.push_back(-1);
        return res;
    }
};
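
Note that the two expansion loops above walk linearly across the run of equal values, so if every element equals the target the routine degrades to O(n). A sketch that keeps the entire search within the required O(log n) bound runs two separate binary searches, one biased toward the first occurrence and one toward the last; the helper name findBound below is my own choice and not part of the original solution.

class SolutionLogN {
public:
    vector<int> searchRange(vector<int>& nums, int target) {
        int first = findBound(nums, target, true);
        if (first == -1) {
            return {-1, -1};
        }
        int last = findBound(nums, target, false);
        return {first, last};
    }

private:
    // Binary search for the first (findFirst == true) or last occurrence of target.
    int findBound(const vector<int>& nums, int target, bool findFirst) {
        int lo = 0, hi = (int)nums.size() - 1, found = -1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (nums[mid] == target) {
                found = mid;
                if (findFirst) {
                    hi = mid - 1;   // keep searching to the left for an earlier match
                } else {
                    lo = mid + 1;   // keep searching to the right for a later match
                }
            } else if (nums[mid] < target) {
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return found;
    }
};

With the example above, [5, 7, 7, 8, 8, 10] and target 8, this variant also returns [3, 4], but each boundary is located in O(log n) regardless of how many duplicates are present.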
