34. Search for a Range

This post walks through LeetCode problem 34, "Search for a Range": using binary search to efficiently find the first and last positions at which a target value appears in a sorted array. It focuses in particular on arrays with duplicate elements, and shows concretely how to write the search so it never falls into an infinite loop.

LeetCode

  • Problem link: https://leetcode.com/problems/search-for-a-range/#/description
  • Problem & approach: Given a sorted array that may contain duplicates, and a target value target, find the first and last indices at which target appears; if target is absent, return (-1, -1). The approach is binary search: first locate the leftmost occurrence of target, then run a second binary search to locate the rightmost one. Neither search is hard on its own; the interesting part, analyzed below, is how to write the binary search so that it never times out, i.e., never loops forever.
  • In binary search, the timeout comes from the moment begin = end - 1: then mid = begin, and if the branch taken afterwards sets begin = mid, nothing changes and the loop runs forever:

    • For an array with no duplicates there are three cases; because of the +1 and -1, mid can never get stuck (a sketch follows this list):
      1. nums[mid] < target: begin = mid + 1
      2. nums[mid] > target: end = mid - 1
      3. nums[mid] == target: return mid
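    A minimal sketch of that distinct-value search, for reference (the function name binarySearch and the standalone form are illustrative, assuming #include <vector> and using namespace std as in LeetCode's environment):

    int binarySearch(vector<int>& nums, int target) {
        if (nums.empty()) return -1;       //guard against size()-1 underflow
        int begin = 0, end = nums.size()-1;
        while (begin <= end) {
            int mid = (begin + end) / 2;
            if (nums[mid] < target) {
                begin = mid + 1;           //mid ruled out, begin advances
            } else if (nums[mid] > target) {
                end = mid - 1;             //mid ruled out, end retreats
            } else {
                return mid;                //exact match
            }
        }
        return -1;                         //target not present
    }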
    • For an array with duplicates, finding the first occurrence is written as below. This cannot loop forever: when begin = end - 1, whichever branch mid falls into, either begin or end must change:
    int begin = 0, end = nums.size()-1;
    while (begin < end) {
        int mid = (begin + end) / 2;   //rounds down: mid can equal begin but never end
        if (nums[mid] < target) {
            begin = mid + 1;           //begin always advances past mid
        } else {
            end = mid;                 //safe: mid < end here, so end strictly decreases
        }
    }
    • For an array with duplicates, finding the last occurrence requires a change: when begin = end - 1, computing mid the old way gives mid = begin, and the branch begin = mid then makes no progress. Adding 1 inside the mid computation (rounding up instead of down) guarantees the loop terminates:
    int begin2 = begin, end2 = nums.size()-1;
    while (begin2 < end2) {
        int mid = (begin2 + end2 + 1) / 2;   //rounds up: mid can equal end2 but never begin2
        if (nums[mid] == target) {
            begin2 = mid;                    //safe: mid > begin2 here, so begin2 strictly increases
        } else {
            end2 = mid - 1;                  //end2 always retreats past mid
        }
    }
  • Summary: every binary search has to survive the moment begin = end - 1 without spinning forever. The rule: if some branch assigns begin = mid (without +1), compute mid = (begin + end + 1) / 2; symmetrically, if some branch assigns end = mid (without -1), compute mid = (begin + end) / 2. For example, with begin = 3 and end = 4: (3 + 4) / 2 = 3 = begin, so begin = mid would make no progress, whereas (3 + 4 + 1) / 2 = 4, so either begin = mid jumps to 4 or end = mid - 1 drops to 3.
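  As a side note, the C++ standard library already packages this pair of searches: std::equal_range returns the half-open range of positions equal to target. A sketch of that alternative (the STL route, not the hand-rolled method this post analyzes; assumes #include <algorithm>, #include <vector>, and using namespace std):

    vector<int> searchRangeSTL(vector<int>& nums, int target) {
        auto p = equal_range(nums.begin(), nums.end(), target);   //{lower_bound, upper_bound}
        if (p.first == p.second)
            return vector<int>(2, -1);                            //target absent
        return { (int)(p.first - nums.begin()),                   //first occurrence
                 (int)(p.second - nums.begin() - 1) };            //last occurrence
    }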

  • Solution code for problem 34:
class Solution {
public:
    vector<int> searchRange(vector<int>& nums, int target) {
        vector<int> res;
        //guard: on an empty array, nums.size()-1 underflows (size_t wraps around)
        if (nums.empty())
            return vector<int>(2, -1);
        //find the starting position
        int begin = 0, end = nums.size()-1;
        while (begin < end) {
            int mid = (begin + end) / 2;   //rounds down, paired with end = mid
            if (nums[mid] < target) {
                begin = mid + 1;
            } else {
                end = mid; 
            }
        }
        if (nums[begin] != target)   //target does not appear in the array
            return vector<int>(2,-1);
        res.push_back(begin);
        //find the ending position
        int begin2 = begin, end2 = nums.size()-1;
        while (begin2 < end2) {
            int mid = (begin2 + end2 + 1) / 2;   //rounds up, paired with begin2 = mid
            if (nums[mid] == target) {
                begin2 = mid;
            } else {
                end2 = mid - 1;
            }
        }
        res.push_back(begin2);
        return res;
    }
};
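And a minimal driver to exercise the solution (the sample array and targets are illustrative assumptions; add #include <cstdio> and #include <vector> plus using namespace std on top of the class):

int main() {
    vector<int> nums = {5, 7, 7, 8, 8, 10};    //hypothetical sorted input with duplicates
    Solution sol;
    vector<int> r = sol.searchRange(nums, 8);  //8 occupies indices 3..4
    printf("[%d, %d]\n", r[0], r[1]);          //prints [3, 4]
    vector<int> r2 = sol.searchRange(nums, 6); //6 is absent
    printf("[%d, %d]\n", r2[0], r2[1]);        //prints [-1, -1]
    return 0;
}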