UVA_102_Ecological Bin Packing

This post presents a C++ solution to UVA 102 (Ecological Bin Packing). The program reads a 3x3 matrix describing how bottles of three colours (brown, clear, green) are distributed across three bins, then tries every way of assigning one colour to each bin and picks the assignment that minimizes the number of bottles that have to be moved.
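The quantity being minimized can be written down directly from the code below: if bin \( i \) keeps colour \( c_i \) (with \( c_1, c_2, c_3 \) a permutation of B, C, G), then

\[
\text{moves} = \sum_{i=1}^{3}\sum_{j\in\{B,C,G\}} \textit{bin}[i][j] \;-\; \sum_{i=1}^{3} \textit{bin}[i][c_i],
\]

i.e. every bottle that is not already sitting in the bin assigned to its colour must be moved exactly once. Since there are only \( 3! = 6 \) possible assignments, the program simply enumerates them all with next_permutation: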
#include <iostream>
#include <algorithm>
#include <vector>
#include <string>
using std::cin;
using std::cout;
using std::endl;
using std::vector;
using std::string;
using std::next_permutation;
using std::swap;
int main()
{
    // bin[i][j]: number of bottles of colour j currently in bin i.
    vector<vector<int>> bin(3, vector<int>(3));
    // Process test cases until end of input.
    while (cin >> bin[0][0])
    {
        // Read the remaining eight numbers of the 3x3 matrix.
        for (int k = 1; k < 9; k++)
        {
            cin >> bin[k / 3][k % 3];
        }
        // The input order per bin is Brown, Green, Clear; swap columns 1 and 2
        // so that the columns are ordered B, C, G.
        for (int i = 0; i < 3; i++)
        {
            swap(bin[i][1], bin[i][2]);
        }
        // Total number of brown, clear and green bottles.
        int Brown_sum = bin[0][0] + bin[1][0] + bin[2][0],
            Clear_sum = bin[0][1] + bin[1][1] + bin[2][1],
            Green_sum = bin[0][2] + bin[1][2] + bin[2][2];
        // code[i] is the colour index ('0' = B, '1' = C, '2' = G) kept in bin i.
        string code = "012", code_min = "999";
        int count = 1000000000;
        do
        {
            // Bottles moved = total bottles - bottles that already sit in
            // the bin assigned to their colour.
            int sum = Brown_sum - bin[0][code[0] - '0']
                    + Clear_sum - bin[1][code[1] - '0']
                    + Green_sum - bin[2][code[2] - '0'];
            // next_permutation visits codes in increasing lexicographic order,
            // so the strict '>' keeps the alphabetically first code on ties.
            if (count > sum)
            {
                count = sum;
                code_min = code;
            }
        } while (next_permutation(code.begin(), code.end()));
        // Map the colour indices back to their letters.
        for (int i = 0; i < 3; i++)
        {
            switch (code_min[i])
            {
            case '0': cout << 'B'; break;
            case '1': cout << 'C'; break;
            case '2': cout << 'G'; break;
            }
        }
        cout << ' ' << count << endl;
    }
    return 0;
}
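As a quick check, feeding the program the sample input from the UVA 102 problem statement should produce the following output (the second case keeps 45 bottles in place out of 95, hence 50 moves):

```
Input:
1 2 3 4 5 6 7 8 9
5 10 5 20 10 5 10 20 10

Output:
BCG 30
CBG 50
```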

### Temporal Self-Attention in Deep Learning

Temporal self-attention mechanisms allow models to focus on different parts of the input sequence when making predictions, which is particularly useful for tasks involving sequences such as time series analysis or natural language processing. The core idea behind temporal self-attention lies in enabling each position (or token) within a sequence to attend to all positions in the previous layer[^1].

In terms of **principles**, temporal self-attention operates by computing attention scores between pairs of elements across timesteps. These scores determine how much weight one element should place on another during information aggregation. This process can be mathematically represented through scaled dot-product attention:

\[
\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V
\]

where \( Q \), \( K \), and \( V \) represent the query, key, and value matrices respectively, and \( d_k \) denotes the key dimensionality.

For **implementation**, temporal self-attention typically involves constructing these three components (queries \( Q \), keys \( K \), and values \( V \)) from the hidden states produced by recurrent neural networks (RNNs) or convolutional layers applied over time steps. One common approach uses multi-head attention, where multiple parallel attention layers are employed simultaneously, allowing the model to jointly attend to information from different representation subspaces at various positions[^2]:

```python
import torch
import torch.nn.functional as F

def split_heads(x, num_heads):
    # (batch, seq_len, d_model) -> (batch, num_heads, seq_len, d_head)
    batch, seq_len, d_model = x.shape
    d_head = d_model // num_heads
    return x.view(batch, seq_len, num_heads, d_head).transpose(1, 2)

def multi_head_attention(query, key, value, num_heads):
    # Split queries, keys and values into heads
    q = split_heads(query, num_heads)
    k = split_heads(key, num_heads)
    v = split_heads(value, num_heads)

    # Compute scaled dot-product attention weights per head
    d_k = q.size(-1)
    attn_weights = F.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)

    # Multiply the weights with the values
    output = attn_weights @ v

    # Concatenate the heads back together
    batch, _, seq_len, _ = output.shape
    return output.transpose(1, 2).reshape(batch, seq_len, -1)
```

Regarding **use cases**, applications benefiting most from this technique include those requiring ecological validity, like computer-modelled working environments or educational settings. In scenarios characterized by complex interactions among entities evolving over time, capturing long-range dependencies becomes crucial. For instance, predicting user behavior based on historical activities or understanding student performance trends throughout semesters could leverage temporal self-attention effectively.
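To illustrate how the sketch above would be called (the tensor shapes here are arbitrary and chosen only for demonstration), self-attention simply feeds the same hidden states in as queries, keys, and values:

```python
import torch

# 2 sequences of length 16 with model dimension 64, split over 8 heads.
batch, seq_len, d_model, num_heads = 2, 16, 64, 8
hidden = torch.randn(batch, seq_len, d_model)

# In self-attention, Q, K and V all come from the same hidden states.
out = multi_head_attention(hidden, hidden, hidden, num_heads)
print(out.shape)  # torch.Size([2, 16, 64])
```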