【AT2336】Flags


Link: AT2336 Flags
(I won't restate the statement; in brief: each flag $i$ is placed at coordinate $x_i$ or $y_i$, and we maximize the minimum distance between any two flags.)

Solution: 2-SAT + Tarjan SCC + segment tree / sqrt decomposition

(Side note: I always mispronounce "2-SAT", and the big shot zh corrects me every single time 233333)

Model it with 2-SAT: split each flag $i$ into two nodes $i$ and $i'$ (its two candidate coordinates), which turns the task into a point-selection problem. A directed edge $(u, v)$ means "if $u$ is chosen, then $v$ must be chosen." What does this buy us? Each input pair forces us to pick exactly one of $x$ and $y$, so if 2-SAT lets us derive "choosing $u$ forces choosing $u'$", we know $u$ must be rejected and $u'$ taken; propagating implications of this shape is the whole game.
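As groundwork for the same-SCC test used later, here is a minimal, generic 2-SAT feasibility skeleton (my own sketch, not the code from this post; the node numbering $2i$/$2i+1$ and the name `TwoSat` are illustrative assumptions):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>
using namespace std;

// Generic 2-SAT feasibility sketch: variable i owns nodes 2*i ("pick i")
// and 2*i+1 ("reject i"). An edge u -> v encodes "if u holds, v must hold".
// The instance is infeasible exactly when some variable's two nodes share
// a strongly connected component.
struct TwoSat {
    int n, idx = 0, sccs = 0;
    vector<vector<int>> g;
    vector<int> num, low, comp, stk;
    TwoSat(int vars) : n(2 * vars), g(n), num(n, 0), low(n), comp(n, -1) {}
    void addImplication(int u, int v) { g[u].push_back(v); }
    void dfs(int u) {                       // textbook recursive Tarjan SCC
        num[u] = low[u] = ++idx; stk.push_back(u);
        for (int v : g[u])
            if (!num[v]) { dfs(v); low[u] = min(low[u], low[v]); }
            else if (comp[v] < 0) low[u] = min(low[u], num[v]);
        if (low[u] == num[u]) {             // u is the root of an SCC: pop it
            int x;
            do { x = stk.back(); stk.pop_back(); comp[x] = sccs; } while (x != u);
            ++sccs;
        }
    }
    bool feasible() {
        for (int i = 0; i < n; ++i) if (!num[i]) dfs(i);
        for (int i = 0; i < n; i += 2) if (comp[i] == comp[i + 1]) return false;
        return true;
    }
};
```

For a single variable, the two implications `0 -> 1` and `1 -> 0` ("picking forces rejecting and vice versa") collapse both nodes into one SCC, so `feasible()` reports false.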

Now for the edges. First, binary search on the answer; call the tentative answer $m$. Use a segment tree to batch the edge insertions (sqrt decomposition also works): sort all $2n$ input values, and for each point $u$ binary search in the sorted order for the interval $[l, r]$ of positions whose values differ from $u$'s by less than $m$. Since the tentative answer is $m$, if any point in $[l, r]$ is chosen then $u$ cannot be, hence $u'$ must be; so add an edge from every point of $[l, r]$ (other than $u$ itself) to $u'$. (The implementation below adds the mirrored edges $u \to v'$ for each $v \in [l, r]$ instead; since every point does this, the two orientations produce the same implication set.)
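The edge-count saving from the segment tree can be seen in isolation. Below is a toy sketch under simplified assumptions (16 leaves; the names `link` and `reachableLeaves` are mine, not the author's): every tree node is an auxiliary graph vertex wired to its two children, so one edge into a canonical vertex reaches a whole block of leaves.

```cpp
#include <cassert>
#include <vector>
#include <queue>
using namespace std;

const int N = 16;        // number of leaves in this toy instance
vector<vector<int>> g;   // adjacency over auxiliary graph vertices
int leafId[N + 1];       // graph vertex of leaf position p (1-based)
int nodes = 0;

// Internal vertices point to their two children, so an edge into a vertex
// implicitly reaches every leaf underneath it.
int build(int l, int r) {
    int id = nodes++;
    g.push_back({});
    if (l == r) { leafId[l] = id; return id; }
    int m = (l + r) / 2;
    int a = build(l, m), b = build(m + 1, r);
    g[id].push_back(a);
    g[id].push_back(b);
    return id;
}

// Connect src to every leaf position in [a,b] with O(log N) real edges.
void link(int o, int l, int r, int a, int b, int src) {
    if (b < l || r < a) return;                    // disjoint
    if (a <= l && r <= b) { g[src].push_back(o); return; }  // canonical piece
    int m = (l + r) / 2;
    link(g[o][0], l, m, a, b, src);                // children were pushed first
    link(g[o][1], m + 1, r, a, b, src);
}

// Which leaf positions are reachable from vertex v?
vector<bool> reachableLeaves(int v) {
    vector<bool> seen(nodes, false), out(N + 1, false);
    queue<int> q; q.push(v); seen[v] = true;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int w : g[u]) if (!seen[w]) { seen[w] = true; q.push(w); }
    }
    for (int p = 1; p <= N; ++p) out[p] = seen[leafId[p]];
    return out;
}
```

For example, linking a source vertex to the range $[4, 11]$ costs only a handful of canonical-vertex edges instead of 8 direct ones, yet a BFS from the source still reaches exactly leaves 4 through 11. In the full solution each leaf points at the partner node $v'$ rather than at itself, which is what `build` in the code later in the post does.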

To check a candidate $m$, compute all strongly connected components. If some point $u$ lies in the same SCC as $u'$, that says: choosing $u$ forces choosing $u'$ and vice versa, which contradicts the requirement that exactly one of them be chosen, so the check returns false.

Time complexity: $O(N \log^2 N)$ with segment-tree edges, $O(N \sqrt{N} \log N)$ with sqrt-decomposition edges; either passes.
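The outer loop in `main` below is the standard "last true" binary search over a monotone predicate; isolated as a sketch (the helper name `lastTrue` is my own):

```cpp
#include <cassert>
#include <functional>
using namespace std;

// Largest m in [lo, hi] with pred(m) true, or lo-1 if none.
// pred must be monotone: true up to some threshold, false afterwards.
int lastTrue(int lo, int hi, const function<bool(int)>& pred) {
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (pred(mid)) lo = mid + 1;   // mid works: try larger
        else hi = mid - 1;             // mid fails: go smaller
    }
    return lo - 1;                     // == hi; last value that succeeded
}
```

`check(m)` is monotone in $m$ (a spacing achievable at $m$ is achievable at any smaller $m$), which is what licenses this search; the loop exits with `l - 1 == r`, the largest feasible value.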

Code (segment tree version only, I was too lazy for the other one QAQ)
#include<iostream>
#include<cstdio>
#include<vector>
#include<cstring>
#include<stack>
#include<algorithm>

using namespace std;

struct data{
    int x,i;
    data():x(0),i(0){}
    data(int y,int j):x(y),i(j){}
    friend bool operator<(const data &dt1,const data &dt2){return dt1.x<dt2.x;}
};

stack<int> s;
int n,x[80001],idx,tot,id[80001],low[80001],dfn[80001],scc,cid[80001];
vector<data> vec;
vector<int> point[80001];

// Build a segment tree over the 2n sorted values. Every tree node gets a fresh
// graph vertex (id[o]); internal vertices point to their children, and each
// leaf points to the 2-SAT partner of the value stored there (i <-> i+n).
void build(int o,int l,int r){
    id[o]=++tot;
    if(o>1)point[id[o>>1]].push_back(tot);
    if(l==r){int v=vec[l-1].i;point[id[o]].push_back(v<=n?v+n:v-n);return;}
    int m=l+r>>1;
    build(o<<1,l,m),build(o<<1|1,m+1,r);
}

// Add an edge from graph vertex x to every leaf position in [a,b],
// using O(log n) edges into canonical tree vertices.
void update(int o,int l,int r,int a,int b,int x){
    if(a>b)return;
    if(l==a&&r==b){point[x].push_back(id[o]);return;}
    int m=l+r>>1;
    if(m>=b)update(o<<1,l,m,a,b,x);else if(m<a)update(o<<1|1,m+1,r,a,b,x);else update(o<<1,l,m,a,m,x),update(o<<1|1,m+1,r,m+1,b,x);
}

// For the i-th smallest value, return the maximal range of sorted positions
// whose values differ from it by less than m (its forbidden neighbours).
pair<int,int> get(int i,int m){
    pair<int,int> res;int l=1,r=i,mid;
    while(l<=r)if(vec[i-1].x-vec[(mid=l+r>>1)-1].x>=m)l=mid+1;else r=mid-1;
    res.first=r+1,l=i,r=n<<1;
    while(l<=r)if(vec[(mid=l+r>>1)-1].x-vec[i-1].x<m)l=mid+1;else r=mid-1;
    res.second=l-1;
    return res;
}

// Tarjan SCC; dfn[x] is negated once x leaves the stack, so dfn[v]>0 means
// "v is still on the stack". (The pop loop must run at least once, hence
// do-while: the original while loop read x before it was ever assigned.)
void dfs(int u){
    low[u]=dfn[u]=++idx;s.push(u);
    for(int v:point[u])if(!dfn[v]){dfs(v);low[u]=min(low[u],low[v]);}else if(dfn[v]>0)low[u]=min(low[u],dfn[v]);
    if(low[u]==dfn[u]){int x;++scc;do{x=s.top(),s.pop(),dfn[x]=-dfn[x],cid[x]=scc;}while(x!=u);}
}

// check(m): does a choice of one coordinate per flag exist with all pairwise
// distances >= m? Rebuild the implication graph and test each pair (u, u').
bool check(int m){
    memset(low,scc=idx=0,sizeof(low)),memset(dfn,0,sizeof(dfn));for(int i=1;i<=80000;++i)point[i].clear();
    build(1,1,tot=n<<1);
    for(int i=1;i<=n<<1;++i){int r=vec[i-1].i;pair<int,int> pr=get(i,m);update(1,1,n<<1,pr.first,i-1,r),update(1,1,n<<1,i+1,pr.second,r);}
    for(int i=1;i<=n<<1;++i)if(!dfn[i])dfs(i);
    for(int i=1;i<=n;++i)if(cid[i]==cid[i+n])return false;
    return true;
}

int main(){
    scanf("%d",&n);
    for(int i=1;i<=n;++i)scanf("%d%d",x+i,x+n+i),vec.push_back(data(x[i],i)),vec.push_back(data(x[i+n],i+n));
    sort(vec.begin(),vec.end());
    int l=0,r=1000000000,m;
    while(l<=r)if(check(m=l+r>>1))l=m+1;else r=m-1; // find the largest m with check(m) true
    printf("%d\n",l-1);
}