Search for a Range

This post describes how to use binary search to find the first and last positions of a given element in a sorted array. It walks through two implementations, one non-recursive and one recursive, and explains how to solve the problem in O(log n) time.


The required complexity is O(log n), so binary search is the only viable approach.

Binary search can be implemented either iteratively or recursively.

My first idea was the non-recursive version: use binary search to locate the target, then walk left and right from that position to find the full range of its occurrences. (Note that this scan degrades to O(n) in the worst case, e.g. when every element equals the target, so it does not strictly meet the O(log n) requirement.)

Remember to break out of the loop once the target is found. The binary search loop condition is low <= high; don't forget the "equals" part.

public int[] searchRange(int[] A, int target) {
        int len = A.length;
        int[] ret = {-1, -1};
        if (len < 1) return ret;
        int low = 0, high = len - 1, middle = 0;
        while (low <= high) {
            middle = low + (high - low) / 2; // avoids int overflow of (low+high)/2
            if (A[middle] == target) {
                // Walk left to just before the first occurrence.
                int left = middle - 1;
                while (left >= 0 && A[left] == target) {
                    left--;
                }
                // Walk right to just past the last occurrence.
                int right = middle + 1;
                while (right < len && A[right] == target) {
                    right++;
                }
                ret[0] = left + 1;
                ret[1] = right - 1;
                break;
            } else if (A[middle] < target) {
                low = middle + 1;
            } else {
                high = middle - 1;
            }
        }
        return ret;
    }
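For completeness, here is a sketch of a variant that stays O(log n) even when the array is full of the target (this is not from the original post, just a common refinement): run two biased binary searches, one that keeps moving left after a hit to pin down the first occurrence, and one that keeps moving right to pin down the last.

```java
import java.util.Arrays;

public class SearchRangeLog {
    public static int[] searchRange(int[] A, int target) {
        int[] ret = {-1, -1};
        int low = 0, high = A.length - 1;
        // First search: record a hit, then keep searching to the left.
        while (low <= high) {
            int middle = low + (high - low) / 2;
            if (A[middle] < target) {
                low = middle + 1;
            } else {
                if (A[middle] == target) ret[0] = middle; // candidate leftmost
                high = middle - 1;
            }
        }
        if (ret[0] == -1) return ret; // target absent
        // Second search: record a hit, then keep searching to the right.
        // All indices from ret[0] onward hold values >= target, so
        // A[middle] <= target implies A[middle] == target here.
        low = ret[0];
        high = A.length - 1;
        while (low <= high) {
            int middle = low + (high - low) / 2;
            if (A[middle] > target) {
                high = middle - 1;
            } else {
                ret[1] = middle; // candidate rightmost
                low = middle + 1;
            }
        }
        return ret;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(searchRange(new int[]{5, 7, 7, 8, 8, 10}, 8))); // [3, 4]
        System.out.println(Arrays.toString(searchRange(new int[]{5, 7, 7, 8, 8, 10}, 6))); // [-1, -1]
    }
}
```

Each loop halves the search space on every iteration, so the whole method is two O(log n) passes.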
There is also a recursive approach; I found this one online, so I'll include it as well. (It still visits every occurrence of the target, so it too degrades to O(n) when the array is full of the target.)

public int[] searchRange(int[] A, int target) {
        int len = A.length;
        // Sentinel values: any found index will tighten these bounds.
        int[] ret = {Integer.MAX_VALUE, Integer.MIN_VALUE};
        recSearchRange(A, target, ret, 0, len - 1);
        if (ret[0] == Integer.MAX_VALUE && ret[1] == Integer.MIN_VALUE) {
            ret[0] = -1;
            ret[1] = -1;
        }
        return ret;
    }
    
    void recSearchRange(int[] A, int target, int[] ret, int low, int high) {
        if (low > high) {
            return;
        }
        int middle = low + (high - low) / 2; // overflow-safe midpoint
        if (A[middle] == target) {
            // Extend the recorded range, then keep searching both halves.
            if (ret[0] > middle) {
                ret[0] = middle;
            }
            if (ret[1] < middle) {
                ret[1] = middle;
            }
            recSearchRange(A, target, ret, low, middle - 1);
            recSearchRange(A, target, ret, middle + 1, high);
        } else if (A[middle] < target) {
            recSearchRange(A, target, ret, middle + 1, high);
        } else {
            recSearchRange(A, target, ret, low, middle - 1);
        }
    }




### Hierarchical Embedding Model for Personalized Product Search

In machine learning, hierarchical embedding models aim to capture the intricate relationships between products and user preferences by organizing items within a structured hierarchy. This approach facilitates more accurate recommendations and search results tailored to individual users' needs.

A hierarchical embedding model typically involves constructing embeddings that represent both product features and their positions within a category tree or other organizational structure[^1]. For personalized product search, this means not only capturing direct attributes of each item but also understanding how they relate across different levels of abstraction, from specific brands up through broader categories like electronics or clothing.

To train such models effectively:

- **Data Preparation**: Collect data on user interactions with various products along with metadata describing those goods (e.g., price range, brand name). Additionally, gather information about any existing hierarchies used in categorizing merchandise.
- **Model Architecture Design**: Choose an appropriate neural network architecture capable of processing multi-level inputs while maintaining computational efficiency during training. Techniques from contrastive learning can be particularly useful here, as they allow systems to learn meaningful representations even when labels are scarce or noisy[^3].
- **Objective Function Formulation**: Define loss functions aimed at optimizing performance metrics relevant to ranking tasks; minimizing negative log-likelihood works well, as it encourages correct predictions over incorrect ones[^4].

Here is a simplified Python snippet demonstrating one aspect of this kind of system: learning embeddings from a hypothetical dataset containing customer reviews alongside associated product IDs:

```python
import torch
from torch import nn

class HierarchicalEmbedder(nn.Module):
    def __init__(self, vocab_size, embed_dim=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)

    def forward(self, x):
        return self.embedding(x)

# Example usage:
vocab_size = 5000  # number of unique words/products
embeddings_model = HierarchicalEmbedder(vocab_size)
input_tensor = torch.LongTensor([i for i in range(10)])  # simulated input indices
output_embeddings = embeddings_model(input_tensor)
print(output_embeddings.shape)  # torch.Size([10, 100])
```

This script initializes a simple PyTorch module that generates fixed-size vectors for given integer keys, which can represent either textual tokens found in review texts or numeric identifiers assigned uniquely to each catalog entry.