UVa 123 Searching Quickly

This post covers KWIC (Key Word In Context) indexing and sorting, briefly reviewing classic algorithms such as binary search and quicksort, and then shows by example how to build a KWIC index: extracting keywords from a set of titles (excluding a given list of words to ignore) and ordering the titles by those keywords.

Searching Quickly

Background

Searching and sorting are part of the theory and practice of computer science. For example, binary search provides a good example of an easy-to-understand algorithm with sub-linear complexity. Quicksort is an efficient O(n log n) [average case] comparison-based sort.
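
Both ideas are available directly in the C++ standard library; the following is a minimal, illustrative sketch (not part of the judge problem) that sorts a vector with std::sort, an O(n log n) average-case comparison sort, and then binary-searches it with std::lower_bound.

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    int data[] = {5, 2, 9, 1, 7};
    v.assign(data, data + 5);

    std::sort(v.begin(), v.end());           // O(n log n) comparisons on average

    // Binary search for 7 in the sorted vector: O(log n) comparisons.
    std::vector<int>::iterator it = std::lower_bound(v.begin(), v.end(), 7);
    if (it != v.end() && *it == 7)
        std::cout << "found 7 at index " << (it - v.begin()) << std::endl;
    return 0;
}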

KWIC-indexing is an indexing method that permits efficient ``human search'' of, for example, a list of titles.

The Problem

Given a list of titles and a list of ``words to ignore'', you are to write a program that generates a KWIC (Key Word In Context) index of the titles. In a KWIC-index, a title is listed once for each keyword that occurs in the title. The KWIC-index is alphabetized by keyword.

Any word that is not one of the ``words to ignore'' is a potential keyword.

For example, if words to ignore are ``the, of, and, as, a'' and the list of titles is:

Descent of Man
The Ascent of Man
The Old Man and The Sea
A Portrait of The Artist As a Young Man

A KWIC-index of these titles might be given by:

                      a portrait of the ARTIST as a young man 
                                    the ASCENT of man 
                                        DESCENT of man 
                             descent of MAN 
                          the ascent of MAN 
                                the old MAN and the sea 
    a portrait of the artist as a young MAN 
                                    the OLD man and the sea 
                                      a PORTRAIT of the artist as a young man 
                    the old man and the SEA 
          a portrait of the artist as a YOUNG man

The Input

The input is a sequence of lines; the string ``::'' separates the list of words to ignore from the list of titles. Each of the words to ignore appears in lower-case letters on a line by itself and is no more than 10 characters in length. Each title appears on a line by itself and may consist of mixed-case (upper and lower) letters. Words in a title are separated by whitespace. No title contains more than 15 words.

There will be no more than 50 words to ignore, no more than 200 titles, and no more than 10,000 characters in the titles and words to ignore combined. No characters other than 'a'-'z', 'A'-'Z', and whitespace will appear in the input.

The Output

The output should be a KWIC-index of the titles, with each title appearing once for each keyword in the title, and with the KWIC-index alphabetized by keyword. If a word appears more than once in a title, each instance is a potential keyword.

The keyword should appear in all upper-case letters. All other words in a title should be in lower-case letters. Titles in the KWIC-index with the same keyword should appear in the same order as they appeared in the input file. In the case where multiple instances of a word are keywords in the same title, the keywords should be capitalized in left-to-right order.

Case (upper or lower) is irrelevant when determining if a word is to be ignored.

The titles in the KWIC-index need NOT be justified or aligned by keyword; all titles may be listed left-justified.
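
To make the capitalization rule concrete, here is a small illustrative helper (the name kwicLine and its parameters are hypothetical, not part of the problem statement): the whole title is put in lower case, and only the keyword occurrence starting at position pos, of length len, is raised to upper case.

#include <cctype>
#include <string>

std::string kwicLine(std::string title, std::string::size_type pos, std::string::size_type len) {
    for (std::string::size_type i = 0; i < title.size(); i++)
        title[i] = std::tolower((unsigned char)title[i]);       // everything lower-case
    for (std::string::size_type i = pos; i < pos + len && i < title.size(); i++)
        title[i] = std::toupper((unsigned char)title[i]);       // the keyword upper-case
    return title;
}

For example, kwicLine("The Old Man and The Sea", 8, 3) returns "the old MAN and the sea".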

Sample Input

is
the
of
and
as
a
but
::
Descent of Man
The Ascent of Man
The Old Man and The Sea
A Portrait of The Artist As a Young Man
A Man is a Man but Bubblesort IS A DOG

Sample Output

a portrait of the ARTIST as a young man 
the ASCENT of man 
a man is a man but BUBBLESORT is a dog 
DESCENT of man 
a man is a man but bubblesort is a DOG 
descent of MAN 
the ascent of MAN 
the old MAN and the sea 
a portrait of the artist as a young MAN 
a MAN is a man but bubblesort is a dog 
a man is a MAN but bubblesort is a dog 
the OLD man and the sea 
a PORTRAIT of the artist as a young man 
the old man and the SEA 
a portrait of the artist as a YOUNG man

Problem summary:

(1) A list of words to ignore is given; any word not on that list is treated as a keyword.

(2) A list of titles follows. Find every keyword occurrence in the titles and sort the resulting entries alphabetically by keyword.

(3) When the same keyword appears in different titles, the title that occurs earlier in the input comes first.

(4) When the same keyword appears several times in one title, the occurrence further to the left comes first.

Analysis: read the input line by line; for every keyword that appears in a line, store the pair (keyword, formatted line) in a multimap, and finally print the stored lines in order.


#include <iostream>
#include <map>
#include <set>
#include <string>
#include <cctype>
using namespace std;

int main() {
    multimap<string, string> str;            // keyword -> formatted KWIC line
    set<string> ignore;
    string t;
    while (getline(cin, t) && t != "::")     // read the words to ignore
        ignore.insert(t);
    while (getline(cin, t)) {                // read each title
        for (size_t i = 0; i < t.size(); i++)        // convert the whole title to lower case
            t[i] = tolower((unsigned char)t[i]);
        for (size_t i = 0; i < t.size(); i++) {
            if (!isalpha((unsigned char)t[i]))       // skip non-letter characters
                continue;
            string t1;                       // the current word
            size_t count = i;                // scan to the end of the word
            while (count < t.size() && isalpha((unsigned char)t[count])) {
                t1 += t[count];              // collect the letters of the word
                count++;
            }
            if (!ignore.count(t1)) {         // t1 is a keyword (not an ignored word)
                for (size_t j = 0; j < t1.size(); j++)   // upper-case the keyword
                    t1[j] = toupper((unsigned char)t1[j]);
                string t2 = t;               // copy of the lower-cased title
                t2.replace(i, t1.size(), t1);        // capitalize this occurrence only
                str.insert(make_pair(t1, t2));
            }
            i = count;                       // resume scanning after the word
        }
    }
    // Print the index: the multimap keeps its keys sorted, and (since C++11)
    // entries with equal keys stay in insertion order, which is exactly the
    // required KWIC ordering.
    for (multimap<string, string>::iterator it = str.begin(); it != str.end(); it++)
        cout << it->second << endl;
    return 0;
}
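
Because the ordering rules (titles with the same keyword keep input order, and repeated keywords in one title are capitalized left to right) are the trickiest part, here is an alternative sketch of the same approach that collects (keyword, line) pairs in a vector and sorts them with std::stable_sort, so the tie-breaking behaviour is explicit rather than implied by multimap insertion order. It assumes the same input format; it is an illustrative variant, not the code above.

#include <algorithm>
#include <cctype>
#include <iostream>
#include <set>
#include <string>
#include <utility>
#include <vector>
using namespace std;

// Compare entries by keyword only; stable_sort then keeps entries with
// equal keywords in the order they were produced, i.e. input order.
bool byKeyword(const pair<string, string>& a, const pair<string, string>& b) {
    return a.first < b.first;
}

int main() {
    set<string> ignore;
    string line;
    while (getline(cin, line) && line != "::")
        ignore.insert(line);

    vector< pair<string, string> > entries;      // (keyword, KWIC line) in input order
    while (getline(cin, line)) {
        for (size_t i = 0; i < line.size(); i++)
            line[i] = tolower((unsigned char)line[i]);
        for (size_t i = 0; i < line.size(); ) {
            if (!isalpha((unsigned char)line[i])) { i++; continue; }
            size_t j = i;
            while (j < line.size() && isalpha((unsigned char)line[j]))
                j++;
            string word = line.substr(i, j - i);
            if (!ignore.count(word)) {
                string out = line;
                for (size_t k = i; k < j; k++)   // capitalize this occurrence only
                    out[k] = toupper((unsigned char)out[k]);
                entries.push_back(make_pair(word, out));
            }
            i = j;
        }
    }

    stable_sort(entries.begin(), entries.end(), byKeyword);
    for (size_t i = 0; i < entries.size(); i++)
        cout << entries[i].second << endl;
    return 0;
}

Either version can be compiled with a plain g++ command and checked against the sample input above.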

