The final step is generating the TF-IDF vectors.
The method invoked is TFIDFConverter.processTfIdf, which again takes the tf-vectors directory as its input.
It first runs makePartialVectors, a Hadoop job whose Mapper is the default identity mapper and whose Reducer is TFIDFPartialVectorReducer.
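As an orientation, here is a simplified sketch of how such a partial-vector job could be wired together, assuming the Hadoop 2 mapreduce API and recent Mahout package names (org.apache.mahout.vectorizer.tfidf may differ in older releases). It is not Mahout's actual makePartialVectors code: the real job also ships the document-frequency dictionary chunk via the distributed cache and sets featureCount/vectorCount/minDf/maxDf in the configuration, which this sketch omits; the paths are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.mahout.math.VectorWritable;
import org.apache.mahout.vectorizer.tfidf.TFIDFPartialVectorReducer;

public class PartialTfIdfJobSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // NOTE: the real makePartialVectors also configures the df dictionary and sizing
    // parameters; only the basic job wiring is shown here.
    Job job = Job.getInstance(conf, "tfidf-partial-vectors");
    job.setJarByClass(PartialTfIdfJobSketch.class);

    job.setMapperClass(Mapper.class);                      // default (identity) Mapper
    job.setReducerClass(TFIDFPartialVectorReducer.class);  // the reducer shown below

    job.setInputFormatClass(SequenceFileInputFormat.class);
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    job.setOutputKeyClass(Text.class);                     // document id
    job.setOutputValueClass(VectorWritable.class);         // tf-idf vector

    SequenceFileInputFormat.addInputPath(job, new Path("tf-vectors"));              // illustrative path
    SequenceFileOutputFormat.setOutputPath(job, new Path("partial-tfidf-vectors")); // illustrative path
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The reducer's reduce method, from Mahout's source, is: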
  @Override
  protected void reduce(WritableComparable<?> key, Iterable<VectorWritable> values, Context context)
      throws IOException, InterruptedException {
    Iterator<VectorWritable> it = values.iterator();
    if (!it.hasNext()) {
      return;
    }
    // The single value is this document's term-frequency vector.
    Vector value = it.next().get();
    Iterator<Vector.Element> it1 = value.iterateNonZero();
    Vector vector = new RandomAccessSparseVector((int) featureCount, value.getNumNondefaultElements());
    while (it1.hasNext()) {
      Vector.Element e = it1.next();
      // Skip terms that have no entry in the document-frequency dictionary.
      if (!dictionary.containsKey(e.index())) {
        continue;
      }
      long df = dictionary.get(e.index());
      // Drop terms whose document frequency exceeds the configured maximum.
      if (maxDf > -1 && df > maxDf) {
        continue;
      }
      // Clamp the document frequency to the configured minimum.
      if (df < minDf) {
        df = minDf;
      }
      // tf-idf weight: tf from e.get(), df from the dictionary; featureCount is passed
      // as the (ignored) length and vectorCount as the total number of documents.
      vector.setQuick(e.index(), tfidf.calculate((int) e.get(), (int) df, (int) featureCount, (int) vectorCount));
    }
    if (sequentialAccess) {
      vector = new SequentialAccessSparseVector(vector);
    }
    if (namedVector) {
      vector = new NamedVector(vector, key.toString());
    }
    VectorWritable vectorWritable = new VectorWritable(vector);
    context.write(key, vectorWritable);
  }
The key is the document id, and the value is the document's term-frequency vector, indexed by the dictionary's term ids.
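For intuition, here is a minimal, self-contained sketch of what one such input record looks like; the document id, term indexes, and the cardinality (the dictionary size) are all assumed values, not taken from Mahout:

import org.apache.hadoop.io.Text;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class TfRecordSketch {
  public static void main(String[] args) {
    Text docId = new Text("doc-42");                  // hypothetical document id
    Vector tf = new RandomAccessSparseVector(100000); // cardinality = assumed dictionary size
    tf.setQuick(17, 3.0);   // term with dictionary index 17 occurs 3 times
    tf.setQuick(512, 1.0);  // term with dictionary index 512 occurs once
    VectorWritable value = new VectorWritable(tf);
    System.out.println(docId + " -> " + value.get());
  }
}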
The key statement is
vector.setQuick(e.index(), tfidf.calculate((int) e.get(), (int) df, (int) featureCount, (int) vectorCount));
If you are familiar with the TF-IDF algorithm, this statement is easy to follow -- essentially tf * log(n/df). Here the weight is computed with Lucene's DefaultSimilarity:
public class TFIDF implements Weight {

  private Similarity sim = new DefaultSimilarity();

  public TFIDF() { }

  public TFIDF(Similarity sim) {
    this.sim = sim;
  }

  @Override
  public double calculate(int tf, int df, int length, int numDocs) {
    // ignore length
    return sim.tf(tf) * sim.idf(df, numDocs);
  }
}
Finally, PartialVectorMerger.mergePartialVectors merges the partial vectors together.
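To see concretely what weight ends up in the merged vectors, here is a small self-contained numeric sketch, assuming Lucene's classic DefaultSimilarity formulas tf(f) = sqrt(f) and idf(df, n) = 1 + ln(n / (df + 1)); it is an illustration of the computation, not the Lucene or Mahout code itself:

public class TfIdfWeightSketch {

  // Reproduces the classic DefaultSimilarity combination used by Mahout's TFIDF weight.
  static double weight(int tf, int df, int numDocs) {
    double tfPart = Math.sqrt(tf);                            // sublinear term frequency
    double idfPart = 1.0 + Math.log((double) numDocs / (df + 1)); // smoothed inverse document frequency
    return tfPart * idfPart;
  }

  public static void main(String[] args) {
    // A term occurring 3 times in a document and appearing in 10 of 1000 documents:
    // sqrt(3) * (1 + ln(1000 / 11)) ≈ 1.732 * 5.510 ≈ 9.54
    System.out.println(weight(3, 10, 1000));
  }
}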
