1383. Maximum Performance of a Team

This post presents an algorithm for maximizing team performance based on member efficiency and speed, using a min-heap to select at most k members so that the team's overall performance is maximized. The algorithm first sorts the members by efficiency in descending order, then uses a min-heap to maintain up to k speed values, updating the running speed sum and the best performance seen so far at each iteration.

Essentially, we iterate over every efficiency value and treat it as the team's minimum efficiency; among all members whose efficiency is at least that value, we want the largest possible sum of at most k speeds.

To keep track of the k largest speeds, the most convenient tool is a min-heap that holds only k numbers: when a (k+1)-th value arrives, pop the smallest element first, then push the new one.

#include &lt;vector&gt;
#include &lt;queue&gt;
#include &lt;algorithm&gt;
using namespace std;

// Comparator: sort members by efficiency in descending order, so that the
// efficiency of the member being processed is the minimum of the team so far.
bool cmp(const pair<int, int>& a, const pair<int, int>& b){
    return a.second > b.second;
}

class Solution {
public:
    // "At most k" members means the team may contain fewer than k members.
    // Treat each member's efficiency as the team's minimum efficiency and
    // keep the k largest speeds seen so far in a min-heap.
    int maxPerformance(int n, vector<int>& speed, vector<int>& efficiency, int k) {
        // Pair each member as (speed, efficiency).
        vector<pair<int, int>> cnt;
        for (int i = 0; i < n; i++) {
            cnt.push_back({speed[i], efficiency[i]});
        }
        // Sort by efficiency in descending order.
        sort(cnt.begin(), cnt.end(), cmp);

        long long res = 0;
        long long mod = 1e9 + 7;
        long long sum = 0;  // sum of the speeds currently in the heap
        priority_queue<int, vector<int>, greater<int>> myqueue;  // min-heap of speeds

        for (int i = 0; i < n; i++) {
            if (i < k) {
                // First k members: just accumulate.
                myqueue.push(cnt[i].first);
                sum += cnt[i].first;
                res = max(res, sum * cnt[i].second);
            }
            else {
                // Heap is full: replace the smallest speed with the current one,
                // so the team always consists of member i plus the k-1 largest
                // speeds among members 0..i-1.
                sum += cnt[i].first - myqueue.top();
                myqueue.pop();
                myqueue.push(cnt[i].first);
                res = max(res, sum * cnt[i].second);
            }
        }

        return res % mod;
    }
};
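
For a quick sanity check, here is a minimal driver sketch that exercises the solution on a small hand-checked input (the values below are made up for illustration and are not from the problem statement): with speed = {2, 10, 3}, efficiency = {5, 4, 3}, and k = 2, the best team is members 0 and 1, giving a speed sum of 12 and a minimum efficiency of 4, i.e. 48.

#include &lt;iostream&gt;

// Minimal driver (assumed example input, chosen for illustration).
int main() {
    Solution sol;
    vector<int> speed = {2, 10, 3};
    vector<int> efficiency = {5, 4, 3};
    int k = 2;
    // Expected: members 0 and 1 -> (2 + 10) * min(5, 4) = 48.
    cout << sol.maxPerformance(3, speed, efficiency, k) << endl;
    return 0;
}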