Support Vector Machines (SVM) in Ruby

Your Family Guy fan-site is riding a wave of viral referrals - the community has grown tenfold in the last month alone! First you deployed an SVD recommendation system, then you optimized the site content and layout with the help of decision trees, and of course that wasn't enough, so you also added a Bayes classifier to help you filter and rank the content - no wonder the site is doing so well! The community is buzzing with activity, but as with any honeypot attracting this much traffic, the spam bots have arrived on the scene as well. No problem, you think to yourself: SVMs will be perfect for this one.

History of Support Vector Machines

The Support Vector Machine (SVM) is a supervised learning algorithm developed by Vladimir Vapnik and his co-workers at AT&T Bell Labs in the mid-90s. Since their inception, SVMs have consistently been shown to outperform many earlier learning algorithms in both classification and regression applications. In fact, their elegance and rigorous mathematical foundations in optimization and statistical learning theory have propelled SVMs to the very forefront of the machine learning field within the last decade.

At their core, SVMs are a method for creating a predictor function from a set of training data, where the function itself can be a binary, a multi-category, or even a general regression predictor. To accomplish this mathematical feat, SVMs find a separating hyperplane (a line in 2D, a plane in 3D, and so on in higher dimensions) which attempts to split the positive and negative examples with the largest possible margin on both sides of the hyperplane.
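
To make this concrete: each training example x_i comes with a label y_i in {-1, +1}, and finding the maximum-margin hyperplane reduces to a convex optimization problem. Below is a sketch of the textbook hard-margin primal form (LIBSVM actually solves a soft-margin variant, controlled by the penalty parameter C we will set later):

  % Maximizing the margin 2 / ||w|| is equivalent to minimizing ||w||^2 / 2
  \min_{w,\,b} \; \tfrac{1}{2}\,\|w\|^2
  \quad \text{subject to} \quad y_i \left( w^\top x_i + b \right) \ge 1, \qquad i = 1, \dots, n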

Thus, SVMs make an implicit assumption that the larger the margin or distance between the examples and the hyperplane, the better the classifier's performance will be - arguably a leap of faith, but in practice this assumption has proven to perform extremely well. Certainly within the context of text classification (spam or not spam, for example), SVMs have become a weapon of choice for many ML/AI researchers!

Installing and Configuring LIBSVM with Ruby

There is a plethora of available SVM implementations, but we will choose LIBSVM for our purposes. Aside from being one of the most popular libraries, it also happens to have a set of Ruby bindings to make our life much more enjoyable! To get yourself up and running:

# Install LIBSVM
$ sudo apt-get install libsvm2 libsvm-dev libsvm-tools

# Install RubySVM bindings
$ wget http://debian.cilibrar.com/debian/pool/main/libs/libsvm-ruby/libsvm-ruby_2.8.4.orig.tar.gz
$ tar zxvf libsvm*
$ cd libsvm*
$ ./configure
$ make && make install
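
Assuming the build succeeded, a quick one-liner confirms the bindings landed on Ruby's load path (the exact class name printed may vary with the bindings version):

# Verify the Ruby bindings load without a LoadError
$ ruby -e "require 'SVM'; include SVM; puts Parameter.new.class"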

Preparing the Data - Building Document Vectors

To perform text classification with SVMs we first have to convert our documents into a vector space model. In this representation, instead of working with words or sentences, the text is broken down into individual words, a unique id is assigned to each unique word, and each document is then described in terms of those word ids. For example, imagine a five-word global dictionary in which the word 'Ilya' is assigned id 3.

Now, if we mark every word present in a document as '1' and every absent word as '0', a Document A containing all dictionary words except 'Ilya' can be represented as [1, 1, 0, 1, 1] - indices 1, 2, 4, and 5 are marked as '1', and index 3 ('Ilya') is missing from this document. In similar fashion, a Document B missing only the first dictionary word would become [0, 1, 1, 1, 1]. Thankfully, this process is easily automated with a few Ruby one-liners:

# Sample training set ...
# ----------------------------------------------------------
  # Labels for each document in the training set
  #    1 = Spam, 0 = Not-Spam
  labels = [1, 1, 0, 1, 1, 0, 0]

  documents = [
    %w[FREE NATIONAL TREASURE],                            # Spam
    %w[FREE TV for EVERY visitor],                         # Spam
    %w[Peter and Stewie are hilarious],                    # OK
    %w[AS SEEN ON NATIONAL TV],                            # Spam
    %w[FREE drugs],                                        # Spam
    %w[New episode rocks, Peter and Stewie are hilarious], # OK
    %w[Peter is my fav!]                                   # OK
    # ...
  ]

# Test set ...
# ----------------------------------------------------------
  test_labels = [1, 0, 0]

  test_documents = [
    %w[FREE lotterry for the NATIONAL TREASURE !!!], # Spam
    %w[Stewie is hilarious],                         # OK
    %w[Poor Peter ... hilarious],                    # OK
    # ...
  ]

# Build a global dictionary of all possible words
dictionary = (documents+test_documents).flatten.uniq
puts "Global dictionary: \n #{dictionary.inspect}\n\n"

# Build binary feature vectors for each document
#  - If a word is present in document, it is marked as '1', otherwise '0'
#  - Each word has a unique ID as defined by 'dictionary'
feature_vectors = documents.map { |doc| dictionary.map{|x| doc.include?(x) ? 1 : 0} }
test_vectors = test_documents.map { |doc| dictionary.map{|x| doc.include?(x) ? 1 : 0} }

puts "First training vector: #{feature_vectors.first.inspect}\n"
puts "First test vector: #{test_vectors.first.inspect}\n"

For the sake of the example we'll keep the training set nice and short - in production we would want hundreds of examples to train the classifier. Nonetheless, executing our code returns the following (notice that the word 'FREE', which corresponds to index 0 in the dictionary below, is marked as present in both the first training and the first test document, just as expected):

Global dictionary:
["FREE", "NATIONAL", "TREASURE", "TV", "for", "EVERY", "visitor", "Peter", "and", "Stewie", "are", "hilarious", "AS", "SEEN", "ON", "drugs", "New", "episode", "rocks,", "is", "my", "fav!", "lotterry", "the", "!!!", "Poor", "..."]

First training vector: [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
First test vector: [1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0]

Training the Support Vector Machine

With the grunt work behind us, we're finally ready to train our spam classifier. LIBSVM comes with a selection of kernel functions which implicitly map our vectors into a higher-dimensional feature space - for high-dimensional, sparse data such as text, the simple linear kernel is usually an excellent first choice, but for the sake of experiment, let's give each one a try:
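
For reference, the four kernels bundled with LIBSVM compute the following similarity functions, where gamma, coef0, and degree are the knobs exposed on the Parameter object in the code below:

  K(u, v) = u^\top v                                            % linear
  K(u, v) = (\gamma\, u^\top v + \text{coef0})^{\text{degree}}  % polynomial
  K(u, v) = \exp(-\gamma\, \|u - v\|^2)                         % radial basis function
  K(u, v) = \tanh(\gamma\, u^\top v + \text{coef0})             % sigmoid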

require 'rubygems'
require 'SVM'
include SVM

puts "Spam filtering test with LIBSVM"
puts "-------------------------------"

# ... insert the svm-documents.rb code from the previous section here
# (it defines labels, documents, feature_vectors, test_labels, and test_vectors)

# Define the training parameters
pa = Parameter.new
pa.C = 100           # penalty parameter (note: NU_SVC is tuned via 'nu' and ignores C)
pa.svm_type = NU_SVC # nu-SVC classification
pa.degree = 1        # degree for the polynomial kernel
pa.coef0 = 0         # independent term for the polynomial and sigmoid kernels
pa.eps = 0.001       # stopping tolerance

sp = Problem.new

# Add documents to the training set
labels.each_index { |i| sp.addExample(labels[i], feature_vectors[i]) }

# We're not sure which Kernel will perform best, so let's give each a try
kernels = [ LINEAR, POLY, RBF, SIGMOID ]
kernel_names = [ 'Linear', 'Polynomial', 'Radial basis function', 'Sigmoid' ]

kernels.each_index { |j|
  # Iterate over each kernel type
  pa.kernel_type = kernels[j]
  m = Model.new(sp, pa)
  errors = 0

  # Test kernel performance on the training set
  labels.each_index { |i|
    pred, probs = m.predict_probability(feature_vectors[i])
    puts "Prediction: #{pred}, True label: #{labels[i]}, Kernel: #{kernel_names[j]}"
    errors += 1 if labels[i] != pred
  }
  puts "Kernel #{kernel_names[j]} made #{errors} errors on the training set"

  # Test kernel performance on the test set
  errors = 0
  test_labels.each_index { |i|
    pred, probs = m.predict_probability(test_vectors[i])
    puts "\t Prediction: #{pred}, True label: #{test_labels[i]}"
    errors += 1 if test_labels[i] != pred
  }

  puts "Kernel #{kernel_names[j]} made #{errors} errors on the test set \n\n"
}
svm.rb - SVM Classification Code

Running our SVM produces:

Global dictionary:
["FREE", "NATIONAL", "TREASURE", "TV", "for", "EVERY", "visitor", "Peter", "and", "Stewie", "are", "hilarious", "AS", "SEEN", "ON", "drugs", "New", "episode", "rocks,", "is", "my", "fav!", "lotterry", "the", "!!!", "Poor", "..."]

First training vector: [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
First test vector: [1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0]

And there you have it: the polynomial and radial basis function kernels correctly identified the spam messages in our test set. Over time, as we accumulate more and more examples, our classifier's performance should only get better, and spam messages will be a thing of the past!
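
Finally, with a trained model in hand, classifying a fresh message as it arrives is just a matter of projecting it onto the same dictionary and asking the model for a prediction. A minimal sketch, reusing the dictionary and the trained model m from the code above (the incoming message is made up, and any word our dictionary has never seen is simply dropped):

# Classify a new, incoming message with the trained model
incoming = %w[FREE FREE drugs for EVERY visitor]

# Project the message onto the same global dictionary --
# words without an id in our vector space are ignored
vector = dictionary.map { |word| incoming.include?(word) ? 1 : 0 }

pred, probs = m.predict_probability(vector)
puts(pred == 1 ? "Flagged as spam" : "Looks clean")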

Previous iterations: SVD Recommendation System, Decision Tree Learning, and Bayes Classification
