SVM Application List
Support vector machines-based generalized predictive control
- Reference(s):
“Support vector machines-based generalized predictive control,” Serdar Iplikci, INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Vol. 16, pp. 843-862, 2006
- Reference link(s):
http://ietfec.oxfordjournals.org/cgi/content/abstract/E89-A/10/2787
- Data link(s):
Entered by: Serdar Iplikci <iplikci@pau.edu.tr> - Monday, October 23, 2006 at 18:05:17 (GMT)
Comments:
Dynamic Reconstruction of Chaotic Systems from Inter-spike Intervals Using Least Squares Support Vector Machines
- Reference(s):
Physica D, Vol. 216, pp. 282-293, 2006
- Reference link(s):
- Data link(s):
Entered by: Serdar Iplikci <iplikci@pau.edu.tr> - Monday, May 29, 2006 at 12:53:56 (GMT)
Comments:
Application of The Kernel Method to the Inverse Geosounding Problem
- Reference(s):
"Application of the kernel method to the inverse geosounding problem", Hugo Hidalgo, Sonia Sosa and E. Gómez-Treviño, Neural Networks, vol. 16, pp. 349-353, 2003
- Reference link(s):
http://cienciascomp.cicese.mx/recopat/articulos/NeuralNetworks03.pdf
- Data link(s):
Entered by: Hugo Hidalgo <hugo@cicese.mx> - Wednesday, March 22, 2006 at 14:04:25 (MST)
Comments:
Support Vector Machines Based Modeling of Seismic Liquefaction Potential
- Reference(s):
Goh ATC. Seismic Liquefaction Potential Assessed by Neural Networks. Journal of Geotechnical Engineering 1994; 120(9): 1467-1480.
Goh ATC. Neural-Network Modeling of CPT Seismic Liquefaction Data. Journal of Geotechnical Engineering 1996; 122(1): 70-73
- Reference link(s):
Accepted for publication in International Journal for Numerical and Analytical Methods in Geomechanics.
- Data link(s):
Entered by: Mahesh Pal <mpce_pal@yahoo.co.uk> - Wednesday, February 22, 2006 at 06:50:07 (GMT)
Comments:
SVM for Geo- and Environmental Sciences
- Reference(s):
1. N. Gilardi, M. Kanevski, M. Maignan and E. Mayoraz. Environmental and Pollution Spatial Data Classification with Support Vector Machines and Geostatistics. Workshop W07 “Intelligent techniques for Spatio-Temporal Data Analysis in Environmental Applications”. ACAI99, Greece, July, 1999. pp. 43-51. www.idiap.ch
2. M Kanevski, N Gilardi, E Mayoraz, M Maignan. Spatial Data Classification with Support Vector Machines. Geostat 2000 congress. South Africa, April 2000.
3. Kanevski M., Wong P., Canu S. Spatial Data Mapping with Support Vector Regression and Geostatistics. 7th International Conference on Neural Information Processing, Taejon, Korea, Nov. 14-18, 2000, pp. 1307-1311.
4. N GILARDI, Alex GAMMERMAN, Mikhail KANEVSKI, Michel MAIGNAN, Tom MELLUISH, Craig SAUNDERS, Volodia VOVK. Application des méthodes d’apprentissage pour l’étude des risques de pollution dans le Lac Léman. 5e Colloque transfrontalier CLUSE. Risques majeurs: perception, globalisation et management. Université de Genève, 2000.
5. M. Kanevski. Evaluation of SVM Binary Classification with Nonparametric Stochastic Simulations. IDIAP Research Report, IDIAP-RR-01-07, 17 p. 2001. www.idiap.ch
6. M. Kanevski, A. Pozdnukhov, S. Canu, M. Maignan. Advanced Spatial Data Analysis and Modelling with Support Vector Machines. International Journal on Fuzzy Systems 2002. p. 606-615.
7. M. Kanevski , A. Pozdnukhov , S. Canu ,M. Maignan , P.M. Wong , S.A.R. Shibli “Support Vector Machines for Classification and Mapping of Reservoir Data”. In: “Soft Computing for Reservoir Characterization and Modelling”. P. Wong, F. Aminzadeh, M. Nikravesh (Eds.). Physica-Verlag, Heidelberg, N.Y. pp. 531-558, 2002.
8. Kanevski M., Pozdnukhov A., McKenna S., Murray Ch., Maignan M. Statistical Learning Theory for Spatial Data. In proceedings of GeoENV2002 conference. Barcelona, 2002.
9. M. Kanevski et al. Environmental data mining and modelling based on machine learning algorithms and geostatistics. Journal of Environmental Modelling and Software, 2004. vol. 19, pp. 845-855.
10. M. Kanevski, M. Maignan et al. Advanced geostatistical and machine learning models for spatial data analysis of radioactively contaminated territories. Journal of Environmental Sciences and Pollution Research, pp.137-149, 2003.
11. Kanevski M., Maignan M. and Piller G. Advanced analysis and modelling tools for spatial environmental data. Case study: indoor radon data in Switzerland. International conference EnviroInfo, 2004. http://www.enviroinfo2004.org/cdrom/Datas/Kanevski.htm
12. Kanevski M., Maignan M. and Pozdnukhov A. Active Learning of Environmental Data Using Support Vector Machines. Conference of the International Association for Mathematical Geology, Toronto 2005. http://www.iamgconference.com/
13. M. Kanevski, A. Pozdnukhov, M. Tonini, M. Motelica, E. Savelieva, M. Maignan. Statistical Learning Theory for Geospatial Data. Case study: Aral Sea. 14th European colloquium on Theoretical and Quantitative Geography. Portugal, September 2005.
14. Pozdnukhov A., Kanevski M. Monitoring network optimisation using support vector machines. In: Geostatistics for Environmental applications. (Renard Ph., Demougeot-Renard H and Froidevaux, Eds.). Springer, 2005. pp. 39-50.
15. Pozdnukhov A. and Kanevski M. Monitoring Network Optimisation for Spatial Data Classification Using Support Vector Machines. (2006). International Journal of Environment and Pollution. Vol.28. 20 pp.
- Reference link(s):
www.unil.ch/igar
www.idiap.ch
- Data link(s):
Entered by: Mikhail Kanevski <Mikhail.Kanevski@unil.ch> - Sunday, February 12, 2006 at 16:30:07 (GMT)
Comments:
SVM for Protein Fold and Remote Homology Detection
- Reference(s):
Profile-based direct kernels for remote homology detection and fold recognition, by Huzefa Rangwala and George Karypis (Bioinformatics 2005)
- Reference link(s):
http://bioinformatics.oxfordjournals.org/cgi/content/abstract/bti687v1
- Data link(s):
http://bioinfo.cs.umn.edu/supplements/remote-homology/
Entered by: Huzefa Rangwala <rangwala@cs.umn.edu> - Sunday, November 06, 2005 at 06:02:08 (GMT)
Comments:
Content-based image retrieval
- Reference(s):
Dacheng Tao, Xiaoou Tang, Xuelong Li, and Xindong Wu, Asymmetric Bagging and Random Subspace for Support Vector Machines-based Relevance Feedback in Image Retrieval, IEEE Transactions on Pattern Analysis and Machine Intelligence, accepted, to appear.
- Reference link(s):
- Data link(s):
Entered by: Dacheng Tao <Dacheng Tao> - Tuesday, October 11, 2005 at 19:03:18 (GMT)
Comments:
Data Classification using SSVM
- Reference(s):
[1] O. L. Mangasarian. A Finite Newton Method for Classification Problems.
[2] O. L. Mangasarian. A Smooth Support Vector Machine for Classification.
[3] K.P. Soman. XSVMs and Applications
- Reference link(s):
- Data link(s):
Entered by: Aduru . Venkateswarlu <venkatsherma@yahoo.com> - Monday, September 19, 2005 at 04:35:39 (GMT)
Comments:
DTREG - SVM and Decision Tree Predictive Modeling
- Reference(s):
- Reference link(s):
http://www.dtreg.com/svm.htm
- Data link(s):
Entered by: Phil Sherrod <phil.sherrod@sandh.com> - Friday, August 26, 2005 at 20:09:46 (GMT)
Comments: DTREG supports Linear, Polynomial, Sigmoid and Radial Basis kernel functions. It can handle problems with millions of data rows and hundreds of variables.
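For readers unfamiliar with the four kernel families named in this comment, they can be written out directly. A minimal NumPy sketch (the parameter defaults are illustrative, not DTREG's):

```python
import numpy as np

# Illustrative definitions of the four kernel types listed above,
# for feature vectors x and y. Parameter defaults are assumptions.

def linear_kernel(x, y):
    return np.dot(x, y)

def polynomial_kernel(x, y, degree=3, gamma=1.0, coef0=1.0):
    return (gamma * np.dot(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian radial basis function on the squared Euclidean distance
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid_kernel(x, y, gamma=0.01, coef0=0.0):
    return np.tanh(gamma * np.dot(x, y) + coef0)

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
print(linear_kernel(x, y))  # -1.5
```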
Facial expression classification
- Reference(s):
J. Ghent and J. McDonald, "Facial Expression Classification using a One-Against-All Support Vector Machine", proceedings of the Irish Machine Vision and Image Processing Conference, Aug 2005.
J. Ghent and J. McDonald, "Holistic Facial Expression Classification", SPIE Opto-Ireland, pp 5823-18, April 2005.
- Reference link(s):
- Data link(s):
Entered by: John Ghent <jghent@cs.may.ie> - Tuesday, August 09, 2005 at 10:14:08 (GMT)
Comments:
End-depth and discharge prediction in semi-circular and circular shaped channels
- Reference(s):
C. Cortes and V.N. Vapnik, Support vector networks, Machine Learning 20 (1995), pp. 273–297.
S. Dey, Free overfall in circular channels with flat base: a method of open channel flow measurement, Flow Meas. Instrum. 13 (2002), pp. 209-221.
S. Dey, Free overfall in open channels: state-of-the-art review, Flow Meas. Instrum. 13 (2002), pp. 247-264.
Y.B. Dibike, S. Velickov, D.P. Solomatine and M.B. Abbott, Model induction with support vector machines: Introduction and applications, J. Comput. Civil Eng. 15 (2001), pp. 208-216.
D. Luenberger, Linear and Nonlinear Programming, Addison-Wesley (1984).
H. Rouse, Discharge characteristics of the free overfall, Civil Engineering, ASCE 6 (1936) (4), pp. 257-260.
R.V. Raikar, D. Nagesh Kumar and S. Dey, End depth computation in inverted semicircular channels using ANNs, Flow Meas. Instrum. 15 (2004), pp. 285-293.
A.J. Smola, Regression estimation with support vector learning machines, Master’s Thesis, Technische Universität München, Germany, 1996.
M. Sterling and D.W. Knight, The free overfall as a flow measuring device in a circular channel, Water and Maritime Engineering Proceedings of Institution of Civil Engineers London 148 (December) (2001), pp. 235–243.
V.N. Vapnik, Statistical Learning Theory, John Wiley and Sons, New York (1998).
- Reference link(s):
http://www.sciencedirect.com/science/journal/09555986
- Data link(s):
Entered by: mahesh pal <mpce_pal@yahoo.co.uk> - Monday, August 01, 2005 at 10:20:34 (GMT)
Comments:
Identification of alternative exons using SVM
- Reference(s):
Dror G., Sorek R. and Shamir R.
Accurate identification of alternatively spliced exons using Support Vector Machine
Bioinformatics. 2005 Apr 1;21(7):897-901.
Epub 2004 Nov 5.
- Reference link(s):
http://www2.mta.ac.il/~gideon/nns_pub.html
- Data link(s):
Entered by: Gideon Dror <gideon@mta.ac.il> - Monday, June 20, 2005 at 11:55:09 (GMT)
Comments: Two classes: 243 positive and 1753 negative instances, 228 features in total, Gaussian kernel. Baseline systems: neural networks and naive Bayes. SVM outperformed them in terms of area under the ROC curve, but most importantly in its ability to reach a very high true-positive rate (50%) at a very low false-positive rate (0.5%). This performance would enable an effective scan of exon databases in search of novel alternatively spliced exons, in the human or other genomes.
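The headline number in this entry (true-positive rate at a fixed, very low false-positive rate) corresponds to choosing a score threshold from the negatives' score distribution. A small sketch with synthetic scores (the class sizes mirror the entry; the score distributions are invented):

```python
import numpy as np

# Sketch: given classifier scores, find the true-positive rate achievable
# at a chosen false-positive rate. Scores here are synthetic.
rng = np.random.default_rng(0)
pos = rng.normal(2.0, 1.0, 243)    # scores of positive instances
neg = rng.normal(0.0, 1.0, 1753)   # scores of negative instances

def tpr_at_fpr(pos_scores, neg_scores, target_fpr):
    # Threshold set so that at most target_fpr of negatives exceed it.
    thr = np.quantile(neg_scores, 1.0 - target_fpr)
    return np.mean(pos_scores > thr)

print(tpr_at_fpr(pos, neg, 0.005))  # fraction of positives kept at 0.5% FPR
```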
Support Vector Machines For Texture Classification
- Reference(s):
1. Support Vector Machines for Texture Classification.
Kwang In Kim, Keechul Jung, Se Hyun Park, and Hang Joon Kim.
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 11, November 2002.
2. "An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods" by Nello Cristianini & John Shawe-Taylor. (http://www.support-vector.net/)
- Reference link(s):
- Data link(s):
Entered by: sathishkumar <sathishkumar.maddy@gmail.com> - Thursday, June 02, 2005 at 05:02:34 (GMT)
Comments:
SVM application in E-learning
- Reference(s):
- Reference link(s):
- Data link(s):
Entered by: sandeep dixit <sandeepdixit2004@yahoo.com> - Thursday, March 31, 2005 at 15:14:17 (GMT)
Comments:
text classification with SVMs
- Reference(s):
- Reference link(s):
- Data link(s):
Entered by: Duong DInh DUng <dungngtq8@yahoo.com> - Thursday, March 24, 2005 at 06:03:04 (GMT)
Comments:
Isolated Handwritten Jawi Characters Categorization Using Support Vector Machines (SVM).
- Reference(s):
- Reference link(s):
- Data link(s):
Entered by: Suhaimi Abd Latif <suhaimie@iiu.edu.my> - Wednesday, January 19, 2005 at 06:02:27 (GMT)
Comments:
Image Clustering
- Reference(s):
- Reference link(s):
- Data link(s):
Entered by: Ahmed Yousuf Saber <saber_uap@yahoo.com> - Wednesday, January 19, 2005 at 02:16:09 (GMT)
Comments:
NewsRec, a SVM-driven Personal Recommendation System for News Websites
- Reference(s):
Bomhardt, C. (2004): NewsRec, a SVM-driven Personal Recommendation System for News Websites
In: Web Intelligence, IEEE/WIC/ACM International Conference on (WI'04)
Keywords: Personal Recommendation, Support-Vector-Machine, Personalization, Text Classification
- Reference link(s):
http://csdl.computer.org/comp/proceedings/wi/2004/2100/00/2100toc.htm
- Data link(s):
Entered by: Christian Bomhardt <christian.bomhardt@etu.uni-karlsruhe.de> - Monday, October 11, 2004 at 15:26:58 (GMT)
Comments: about 1200 datasets, about 30000 features, linear kernel, SVMs are very fast compared to other methods and can handle the large number of features.
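The setup described here, sparse very-high-dimensional text features with a linear kernel, is the standard reason linear SVMs work so well for text classification. A minimal sketch with scikit-learn (not the original NewsRec tooling; the toy corpus is invented):

```python
# Sketch of linear-kernel SVM text classification: one TF-IDF feature per
# term easily yields tens of thousands of features, which a linear SVM
# handles cheaply on sparse matrices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["stock markets rally on earnings",
        "team wins championship final",
        "shares fall as rates rise",
        "striker scores twice in derby"]
labels = ["finance", "sports", "finance", "sports"]

vec = TfidfVectorizer()           # vocabulary defines the feature space
X = vec.fit_transform(docs)       # sparse document-term matrix
clf = LinearSVC().fit(X, labels)  # linear kernel: no kernel matrix needed
print(clf.predict(vec.transform(["bank cuts interest rates"])))
```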
Equbits Foresight
- Reference(s):
- Reference link(s):
www.equbits.com
- Data link(s):
Entered by: Ravi Mallela <ravi@equbits.com> - Saturday, October 09, 2004 at 15:37:20 (GMT)
Comments:
Speaker/Speech Recognition
- Reference(s):
X. Dong ...
Electronics Letters, vol. 37, pp. 527-529, 2001
C. Ma, Randolph ...
IEEE Int. Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. 381-384, 2001
V. Wan ...
IEEE Workshop on Neural Networks for Signal Processing X, vol. 2, 2000
- Reference link(s):
- Data link(s):
Entered by: MEHDI GHAYOUMI <M_GHAYOUMI@YAHOO.COM> - Tuesday, March 09, 2004 at 06:25:10 (GMT)
Comments:
Analysis and Applications of Support Vector Forecasting Model Based on Chaos Theory
- Reference(s):
[1] J. Lü et al. Chaotic Time Series Analysis and Its Applications (in Chinese). Wuhan: Wuhan University Press, 2001.
[2] Stefania Tronci, Massimiliano Giona, Roberto Baratti, "Reconstruction of chaotic time series by neural models: a case study," Neurocomputing, vol. 55, pp. 581-591, 2003.
[3] T. He, C. Zheng. "Nonlinear prediction of chaotic series" (in Chinese). Chinese Journal of Nature, 19(1): 10-13, 2001.
[4] Li, Z.-O. Wang. "Modeling and multi-step prediction of chaotic time series based on RBF networks" (in Chinese). Systems Engineering and Electronics, 24(6): 81-83, 2002.
[5] T. Jiang. Research on Aero-Engine Surge/Stall Prediction Models and Fault Detection (in Chinese). Ph.D. dissertation, Xi'an: Air Force Engineering University, 2002.
[6] H. Wang, Z. Sheng. "Selection of phase-space reconstruction parameters for chaotic time series" (in Chinese). Journal of Southeast University, 30(5): 113-117, 2000.
[7] L.-Y. Cao, "Practical method for determining minimum embedding dimension of a scalar time series," Physica D, vol. 110, pp. 43-52, 1997.
[8] Eckmann J.P, Kamphorst S.O, "Lyapunov exponents from time series," Phys. Rev. A, vol. 34, no. 6, pp. 4971-4979, Dec. 1986.
[9] Oiwa N.N, Fiedler-Ferrara N, "A fast algorithm for estimating Lyapunov exponents from time series," Physics Letters A, vol. 246, pp. 117-121, Sep. 1998.
[10] Fabio Sattin, "Lyap: A FORTRAN 90 program to compute the Lyapunov exponents of a dynamical system from a time series," Computer Physics Communications, vol. 107, pp. 253-257, 1997.
[11] K.-R. Müller, A.J. Smola, G. Rätsch, "Predicting time series with support vector machines," in Proceedings of ICANN'97, Berlin: Springer LNCS, vol. 1327, pp. 999-1004, 1997.
[12] B.-J. Chen, "Load forecasting using support vector machines: a study on EUNITE Competition 2001," unpublished.
[13] L.-J. Cao, Q.-M. Gu, "Dynamic support vector machines for non-stationary time series forecasting," Intelligent Data Analysis, vol. 6, no. 1, pp. 67-83, 2002.
[14] F.E.H. Tay, L.-J. Cao, "Application of support vector machines in financial time series forecasting," Omega, vol. 29, no. 4, pp. 309-317, Aug. 2001.
[15] K.W. Lau, Q.-H. Wu, "Local prediction of chaotic time series based on Gaussian processes," in Proceedings of the 2002 IEEE International Conference on Control Applications, Glasgow, Scotland, U.K., pp. 1309-1313, Sep. 18-20, 2002.
[16] Sayan Mukherjee, Edgar Osuna, Federico Girosi, "Nonlinear prediction of chaotic time series using support vector machines," in Proc. of IEEE NNSP'97, Amelia Island, FL, Sep. 1997.
- Reference link(s):
In press.
Proceedings of WCICA 2004.
- Data link(s):
Entered by: xunkai <skyhawkf119@163.com> - Monday, February 23, 2004 at 04:51:26 (GMT)
Comments: It seems impossible, but SVMs perform remarkably well!
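The recipe shared by the references above is time-delay embedding of the scalar series followed by a support vector regressor predicting the next value. A minimal sketch on a toy chaotic series (embedding parameters and SVR settings are illustrative; in practice the embedding dimension and delay are chosen e.g. by Cao's method and the Lyapunov analysis cited above):

```python
import numpy as np
from sklearn.svm import SVR

# Time-delay embedding: X[i] = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau}),
# target y[i] = x_{i+(m-1)tau+1}.
def embed(series, m=4, tau=1):
    n = len(series) - (m - 1) * tau - 1
    X = np.array([series[i:i + m * tau:tau] for i in range(n)])
    y = series[(m - 1) * tau + 1:(m - 1) * tau + 1 + n]
    return X, y

# Toy chaotic series: the logistic map x_{t+1} = 4 x_t (1 - x_t).
x = [0.3]
for _ in range(500):
    x.append(4 * x[-1] * (1 - x[-1]))

X, y = embed(np.array(x))
model = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X[:400], y[:400])
err = np.abs(model.predict(X[400:]) - y[400:]).mean()
print(f"mean one-step prediction error: {err:.4f}")
```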
A Comparison Of The Performance Of Artificial Neural Networks And Support Vector Machines For The Prediction Of Traffic Speed and Travel Time
- Reference(s):
V. Kecman, Learning and Soft Computing: Support Vector Machines, Neural Networks, and Fuzzy Logic Models, The MIT Press, Cambridge, Massachusetts, 2001.
S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 1999.
N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
S. R. Gunn, "Support Vector Machines for Classification and Regression," http://www.ecs.soton.ac.uk/~srg/publications/pdf/SVM.pdf
- Reference link(s):
- Data link(s):
Entered by: Lelitha Vanajakshi <lelitha@yahoo.com> - Friday, January 30, 2004 at 17:39:08 (GMT)
Comments: When training data were scarce, the SVM outperformed the ANN; when enough data were available, both performed more or less the same.
svm learning
- Reference(s):
- Reference link(s):
- Data link(s):
Entered by: burak <burakkaragoz2002@yahoo.com> - Monday, December 08, 2003 at 16:06:03 (GMT)
Comments:
Protein Structure Prediction
- Reference(s):
1. Kim, H. and H. Park, "Prediction of protein relative solvent accessibility with support vector machines and long-range interaction 3D local descriptor", Proteins: Structure, Function, and Genetics, to appear.
2. Kim, H. and H. Park, "Protein secondary structure prediction by support vector machines and position-specific scoring matrices", Protein Engineering, to appear.
- Reference link(s):
http://www.cs.umn.edu/~hpark/papers/surface.pdf
http://www.cs.umn.edu/~hpark/papers/protein2.pdf
- Data link(s):
Entered by: Dr. Haesun Park <hpark@cs.umn.edu> - Friday, July 11, 2003 at 18:59:18 (GMT)
Comments:
Support vector classifiers for land cover classification
- Reference(s):
Mahesh Pal recently finished his PhD at the University of Nottingham, UK, and is presently working as a lecturer in the Department of Civil Engineering, NIT Kurukshetra, Haryana, India.
- Reference link(s):
http://www.gisdevelopment.net/technology/rs/pdf/23.pdf
- Data link(s):
Entered by: Mahesh Pal <mpce_pal@yahoo.co.uk> - Wednesday, May 21, 2003 at 07:17:46 (GMT)
Comments:
Intrusion Detection
- Reference(s):
Srinivas Mukkamala joined the Computer Science graduate program of New Mexico Tech in 2000 and is currently a Ph.D. student. He received his B.E. degree in computer science and engineering in 1999 from the University of Madras. His interests are information assurance, information hiding, artificial intelligence, and soft computing techniques for computer security.
Andrew H. Sung is professor and chairman of the Computer Science Department, and the Coordinator of the Information Technology Program, at New Mexico Tech. He received his Ph.D. in computer science from the State University of New York at Stony Brook in 1984. His interests are intelligent systems, soft computing, and information assurance.
- Reference link(s):
www.cs.nmt.edu/~IT
- Data link(s):
http://kdd.ics.uci.edu/
Entered by: Srinivas Mukkamala <srinivas@cs.nmt.edu> - Thursday, January 09, 2003 at 05:02:19 (GMT)
Comments: SVMs are superior to ANNs for intrusion detection in three critical respects: SVMs train and run an order of magnitude faster; SVMs scale much better; and SVMs give higher classification accuracy. For details on the number of classes, kernels used, input features, number of support vectors, and input feature selection and ranking methods, please read our latest versions. If you need our latest versions or any assistance, please send the author an email: srinivas@cs.nmt.edu. Sincerely, Srinivas Mukkamala
The Gaussian Dynamic Time Warping (GDTW) kernel for On-line Handwriting Recognition
In recent years the task of on-line handwriting recognition has gained immense importance in everyday applications, mainly due to the increasing popularity of the personal digital assistant (PDA). Currently a next generation of "smart phones" and tablet-style PCs, which also rely on handwriting input, is further targeting the consumer market. However, in the majority of these devices the handwriting input method is still unsatisfactory. In current PDAs people still use input methods that abstract from the natural writing style, e.g. the widespread Graffiti alphabet.
Thus there is demand for a handwriting recognition system that is accurate and efficient and that can deal with the natural handwriting of a wide range of different writers.
- Reference(s):
Claus Bahlmann, Bernard Haasdonk and Hans Burkhardt. On-line Handwriting Recognition using Support Vector Machines - A kernel approach. In Int. Workshop on Frontiers in Handwriting Recognition (IWFHR) 2002, Niagara-on-the-Lake, August 2002.
Description of the frog on hand recognition system
- Reference link(s):
- Data link(s):
Entered by: Claus Bahlmann <bahlmann@informatik.uni-freiburg.de> - Monday, September 09, 2002 at 11:52:27 (GMT)
Comments:
Usual SVM kernels are designed to deal with data of fixed dimension. On-line handwriting data, however, is not of fixed dimension but comes in variable-length sequential form, so SVMs cannot be applied to handwriting recognition straightforwardly.
We have addressed this issue by developing an appropriate SVM kernel for sequential data, the Gaussian dynamic time warping (GDTW) kernel. The basic idea of the GDTW kernel is that it replaces the squared Euclidean distance in the usual Gaussian kernel with the dynamic time warping distance. Beyond on-line handwriting recognition, the GDTW kernel can be applied straightforwardly to any classification problem where DTW gives a reasonable distance measure, e.g. speech recognition or genome processing.
Experiments have shown a superior recognition rate in comparison to an HMM-based classifier for relatively small training sets (~6000 samples) and comparable rates for larger training sets.
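A minimal sketch of the GDTW idea, assuming squared pointwise costs and a simple symmetric DTW recursion (the published kernel's exact cost function and normalization may differ):

```python
import numpy as np

# DTW distance between two variable-length sequences by dynamic
# programming, then plugged into a Gaussian kernel in place of the
# squared Euclidean distance. Note: unlike the standard Gaussian kernel,
# the resulting kernel matrix is not guaranteed positive semi-definite.
def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gdtw_kernel(a, b, gamma=0.1):
    return np.exp(-gamma * dtw(a, b))

s1 = np.array([0.0, 0.5, 1.0, 0.5, 0.0])       # a "bump"
s2 = np.array([0.0, 0.0, 0.5, 1.0, 0.5, 0.0])  # same shape, time-shifted
print(gdtw_kernel(s1, s2))  # near 1: the warping absorbs the time shift
```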
forecast
- Reference(s):
- Reference link(s):
- Data link(s):
Entered by: shen <shen0204@yahoo.com.tw> - Thursday, September 05, 2002 at 07:24:00 (GMT)
Comments:
Detecting Steganography in digital images
- Reference(s):
Detecting Hidden Messages Using Higher-Order Statistics and Support Vector Machines
S. Lyu and H. Farid
5th International Workshop on Information Hiding, Noordwijkerhout, The Netherlands, 2002
- Reference link(s):
http://www.cs.dartmouth.edu/~farid/publications/ih02.html
- Data link(s):
http://www.cs.dartmouth.edu/~farid/publications/ih02.html
Entered by: Siwei Lyu <lsw@cs.dartmouth.edu> - Thursday, August 22, 2002 at 15:58:54 (GMT)
Comments: 2 classes; 3600 training examples; over 18,000 testing samples; 1100 SVs; RBF kernel; LibSVM.
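The feature side of this approach, higher-order statistics of subband coefficients, can be sketched as follows. The "subband" here is a crude difference filter on a synthetic image, standing in for the paper's multi-scale wavelet decomposition; everything is illustrative:

```python
import numpy as np

# Characterize subband coefficients by mean, variance, skewness and
# kurtosis, producing a feature vector one would feed to an SVM.
def moments(c):
    c = c.ravel().astype(float)
    mu, sd = c.mean(), c.std()
    z = (c - mu) / (sd + 1e-12)          # standardized coefficients
    return np.array([mu, sd**2, (z**3).mean(), (z**4).mean()])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))      # toy "image"
detail = img[:, 1:] - img[:, :-1]         # crude horizontal detail subband
print(moments(detail))                    # 4-dimensional feature vector
```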
Fast Fuzzy Cluster
- Reference(s):
- Reference link(s):
members.aol.com/awareai
- Data link(s):
Entered by: Michael Bickel <awareai@aol.com> - Tuesday, July 23, 2002 at 01:11:11 (GMT)
Comments:
Breast Cancer Prognosis: Chemotherapy Effect on Survival Rate
- Reference(s):
Yuh-Jye Lee, O. L. Mangasarian and W. H. Wolberg: "Survival-Time Classification of Breast Cancer Patients", Data Mining Institute Technical Report 01-03, March 2001.
Yuh-Jye Lee, O. L. Mangasarian and W. H. Wolberg: "Breast Cancer Survival and Chemotherapy: A Support Vector Machine Analysis", DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 55 (2000), pp. 1-10.
Yuh-Jye Lee and O. L. Mangasarian: "SSVM: Smooth Support Vector Machine for Classification", Computational Optimization and Applications, 20(1): pp. 5-22.
- Reference link(s):
- Data link(s):
WPBCC: Wisconsin Prognostic Breast Cancer Chemotherapy Database.
ftp://ftp.cs.wisc.edu/math-prog/cpo-dataset/machine-learn/WPBCC/
Entered by: Yuh-Jye Lee <yjlee@cs.ccu.edu.tw> - Wednesday, October 24, 2001 at 19:38:50 (MDT)
Comments:
Underground Cable Temperature Prediction
- Reference(s):
- Reference link(s):
- Data link(s):
Entered by: Robin Willis <rew198@soton.ac.uk> - Friday, May 04, 2001 at 08:31:41 (PDT)
Comments:
Image classification
- Reference(s):
- Reference link(s):
www.ens-lyon.fr/~ochapell/tnn99.ps.gz
- Data link(s):
Entered by: Olivier Chapelle <chapelle@research.att.com> - Tuesday, April 04, 2000 at 13:50:39 (PDT)
Comments: Number of classes: 6 or 14. Dimension of the input features: 4096. Kernel: RBF with various distances. SVM outperforms KNN. The choice of the distance in the RBF kernel is critical.
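An "RBF with various distances" kernel simply swaps the squared Euclidean distance for another dissimilarity measure. A sketch using the chi-square distance, a common choice for histogram-like image features (the sigma value is an assumed tuning parameter):

```python
import numpy as np

# Generalized RBF kernel: K(x, y) = exp(-d(x, y) / (2 * sigma^2)),
# where d can be any distance suited to the features.
def chi2_distance(x, y, eps=1e-10):
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

def rbf_with_distance(x, y, dist=chi2_distance, sigma=1.0):
    return np.exp(-dist(x, y) / (2.0 * sigma**2))

h1 = np.array([0.2, 0.3, 0.5])    # toy normalized histograms
h2 = np.array([0.25, 0.25, 0.5])
print(rbf_with_distance(h1, h2))  # near 1 for similar histograms
```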
Particle and Quark-Flavour Identification in High Energy Physics
We compared the performance of SVMs (RBF kernel) with NNs (trained with backprop). The amounts of available data are very large; we tested on 3x100k patterns for the quark-flavour problem.
- Reference(s):
Classifying LEP Data with Support Vector Algorithms
P. Vannerem, K.-R. Müller, B. Schölkopf, A. Smola, S. Söldner-Rembold
submitted to Proceedings of AIHENP'99.
- Reference link(s):
http://wwwrunge.physik.uni-freiburg.de/preprints/EHEP9901.ps
ftp://ftp.physics.uch.gr/aihenp99/Vannerem/
- Data link(s):
Entered by: Philippe Vannerem <philippe.vannerem@cern.ch> - Tuesday, October 19, 1999 at 16:17:56 (PDT)
Comments: We saw only small differences in performance between NNs and SVMs.
Object Detection
- Reference(s):
Constantine Papageorgiou, Tomaso Poggio
A Pattern Classification Approach to Dynamical Object Detection
Proceedings of ICCV, 1999, pp. 1223-1228
Anuj Mohan
Object Detection in Images by Components
CBCL Paper #178/AI Memo #1664, Massachusetts Institute of Technology
Constantine Papageorgiou, Theodoros Evgeniou, Tomaso Poggio
A Trainable Pedestrian Detection System
Proceedings of Intelligent Vehicles, 1998, pp. 241-246
Constantine P. Papageorgiou, Michael Oren, Tomaso Poggio
A General Framework for Object Detection
Proceedings of ICCV, 1998, pp. 555-562
- Reference link(s):
http://www.ai.mit.edu/projects/cbcl/publications/ps/dyn-obj-iccv99.ps.gz
ftp://publications.ai.mit.edu/ai-publications/1500-199/AIM-1664.ps
www.ai.mit.edu/projects/cbcl/publications/ps/ped-det-iv98.ps.gz
www.ai.mit.edu/projects/cbcl/publications/ps/gen-obj-det-iccv98.ps.gz
- Data link(s):
Comments (entered by Isabelle Guyon): In Papageorgiou-Oren-Poggio-98, the authors investigate face detection and pedestrian detection. From the point of view of static images, they obtain 75% correct face detection for a rate of 1 false detection in 7500 windows, and 70% correct pedestrian detection for a false detection rate of 1 in 15000 windows. Of particular interest is their method to increase the number of negative examples with a "bootstrap method": they start with a training set consisting of positive examples (faces or pedestrians) and a small number of negative examples that are "meaningless" images. After a first round of training and testing on fresh examples, the negative examples corresponding to false detections are added to the training set. Training/test set enlargement is iterated. The dynamic system that uses motion to refine performance is roughly 20% better. In this first paper, the authors reduce the dimensionality of input space before training with the SVM, down to 20-30 input features. Thousands of examples are used for training. In contrast, in Papageorgiou-Poggio-99, again on the problem of pedestrian detection in motion pictures, the authors train an SVM with 5201 examples directly in a 6630-dimensional input space consisting of wavelet features at successive time steps. They find that their system is simpler and faster than traditional HMM or Kalman filter systems and has lower false-positive rates than static systems.
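The bootstrap procedure described in these comments is what is now usually called hard-negative mining. A toy sketch with synthetic 2-D "window features" (all data, parameters, and the use of scikit-learn are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC

# Hard-negative mining loop: train on positives plus a small negative
# pool, scan fresh negatives, and add the false detections back into
# the training set, iterating as in the bootstrap method above.
rng = np.random.default_rng(0)
pos = rng.normal(2.0, 0.5, (100, 2))        # "object" windows
neg_pool = rng.normal(0.0, 1.5, (5000, 2))  # large pool of background windows

train_neg = neg_pool[:50]                   # start with few negatives
for round_ in range(3):
    X = np.vstack([pos, train_neg])
    y = np.r_[np.ones(len(pos)), np.zeros(len(train_neg))]
    clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
    fresh = neg_pool[rng.choice(len(neg_pool), 500, replace=False)]
    false_pos = fresh[clf.predict(fresh) == 1]     # false detections
    train_neg = np.vstack([train_neg, false_pos])  # bootstrap them in
    print(f"round {round_}: added {len(false_pos)} hard negatives")
```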
Combustion Engine Knock Detection
- Reference(s):
M. Rychetsky, S. Ortmann, M. Glesner: Construction of a Support Vector Machine with Local Experts. Workshop on Support Vector Machines at the International Joint Conference on Artificial Intelligence (IJCAI 99), August 1999, Stockholm, Sweden
M. Rychetsky, S. Ortmann, M. Glesner: Support Vector Approaches for Engine Knock Detection. International Joint Conference on Neural Networks (IJCNN 99), July 1999, Washington, USA
- Reference link(s):
- Data link(s):
Comments: We compared SVM approaches, MLP nets, and AdaBoost on our database (unfortunately not public domain). The SV machines significantly outperformed all other approaches. For this application real-time calculation is an issue, so we are currently examining methods to reduce the computational burden at the recall phase (e.g. reduced-set algorithms or integer-based approaches).
Engineering Support Vector Machine Kernels That Recognize Translation InitiationSites
- Reference(s):
A. Zien, G. Rätsch, S. Mika, B. Schölkopf, C. Lemmen, A. Smola, T. Lengauer and K.-R. Müller
Engineering Support Vector Machine Kernels That Recognize Translation Initiation Sites
German Conference on Bioinformatics 1999
- Reference link(s):
http://www.bioinfo.de/isb/gcb99/talks/zien/
- Data link(s):
The data sets we used were kindly supplied by Pedersen and Nielsen
(Center for Biological Sequence Analysis, Denmark; http://www.cbs.dtu.dk/).
Comments: SVMs beat a neural network.
Detection of Remote Protein Homologies
- Reference(s):
A discriminative framework for detecting remote protein homologies.
Tommi Jaakkola, Mark Diekhans, and David Haussler
- Reference link(s):
http://www.cse.ucsc.edu/research/compbio/discriminative/Jaakola2-1998.ps
- Data link(s):
Dataset (12Mb compressed)
Comments: Jaakkola et al combine SVMs with HMMs and show the superiority of this approach compared to several baseline systems.
Function Approximation and Regression
- Reference(s):
- Support Vector Regression Machines.
Drucker, H.; Burges, C.; Kaufman, L.; Smola, A.; Vapnik, V. 1997.
In: M. Mozer, M. Jordan, and T. Petsche (eds.):
Neural Information Processing Systems, Vol. 9. MIT Press, Cambridge, MA,
1997.
- Support Vector Regression with ANOVA Decomposition Kernels.
Mark O. Stitson, Alex Gammerman, Vladimir Vapnik, Volodya Vovk, Chris Watkins, and Jason Weston
in Advances in Kernel Methods, B. Schölkopf, C.J.C. Burges, and A.J. Smola Eds.
Pages 285-291, MIT Press, 1999. ISBN 0-262-19416-3.
- Reference link(s):
Drucker-97,
- Data link(s):
ftp://ftp.ics.uci.edu/pub/machine-learning-databases/housing
Comments: Drucker-97 finds that SVMs outperform the baseline system (bagging) on the Boston housing problem. It is noted that SVMs can make a real difference when the dimensionality of the input space and the order of the approximation create a feature-space dimensionality that is intractable for other methods. The results of Drucker et al are further improved in Stitson-99 (overall 35% better than the baseline method).
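The regression machines cited here minimize Vapnik's epsilon-insensitive loss, which charges nothing for residuals inside an epsilon-tube around the prediction and penalizes larger residuals linearly. A minimal sketch of that loss (the toy numbers are illustrative, not from the cited papers):

```python
def epsilon_insensitive_loss(y_true, y_pred, eps=0.1):
    """Vapnik's eps-insensitive loss: residuals within the eps-tube cost
    nothing; larger residuals are penalized linearly by |residual| - eps."""
    return sum(max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred))

# The first two points lie inside the tube and contribute zero;
# only the third (residual 0.5) adds 0.5 - 0.1 = 0.4 to the loss.
loss = epsilon_insensitive_loss([1.0, 2.0, 3.0], [1.05, 2.0, 3.5], eps=0.1)
```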
3-D Object Recognition Problems
- Reference(s):
- Visual Learning and Recognition of 3-D Objects from Appearance
H. Murase and S.K. Nayar
Int. J. Comput. Vision, Vol. 14, 1995, pp. 5--24.
- Comparison of view-based object recognition algorithms using realistic 3D models.
V. Blanz, B. Schölkopf, H. Bülthoff, C. Burges, V. Vapnik, and T. Vetter,
In: C. von der Malsburg, W. von Seelen, J. C. Vorbrüggen, and B. Sendhoff (eds.): Artificial Neural Networks - ICANN'96.
Springer Lecture Notes in Computer Science Vol. 1112, Berlin, 251-256.
1996.
(also published in Proc. of ICANN'96, LNCS Vol. 1112, 1996, pp. 251--256.)
- Training Support Vector Machines: an Application to Face Detection,
Edgar Osuna, Robert Freund and Federico Girosi.
Proceedings of CVPR'97, Puerto Rico,
1997.
- Kernel Principal Component Analysis.
B. Schölkopf, A. Smola, K.-R. Müller,
Proceedings ICANN'97, p. 583. Springer Lecture Notes in Computer Science.
1997.
- Support Vector Learning. B. Schölkopf, PhD Thesis. Published by R. Oldenbourg Verlag, Munich, 1997.
- Direct aspect-based 3-D object recognition.
Massimiliano Pontil and A. Verri.
Proc. Int. Conf on Image Analysis and Processing, Firenze,
1997.
- Automatic Target Recognition with Support Vector Machines,
Qun Zhao and Jose Principe,
NIPS-98 Workshop on Large Margin Classifiers,
1998.
- The kernel adatron algorithm: a fast and simple learning procedure for support vector machines.
T.-T. Friess, N. Cristianini, C. Campbell.
15th Intl. Conf. Machine Learning, Morgan Kaufman Publishers.
1998.
- View-based 3D object recognition with Support Vector Machines.
Danny Roobaert & Marc M. Van Hulle
Proc. IEEE Neural Networks for Signal Processing Workshop
1999.
- Improving the Generalisation of Linear Support Vector Machines: an Application to 3D Object Recognition with Cluttered Background
Danny Roobaert
Proc. Workshop on Support Vector Machines at the 16th International Joint Conference on Artificial Intelligence, July 31-August 6, Stockholm, Sweden, pp. 29-33
1999.
- Reference link(s):
Blanz-96, Osuna-97, Schölkopf-thesis-97, Schölkopf-NIPS-97, Pontil-97, Zhao-98, Friess-98, Roobaert-99a, Roobaert-99b
- Data link(s):
Chair data set
Sonar data
Massimiliano Pontil <pontil@ai.mit.edu> - Monday, October 04, 1999 at 18:48:38 (PDT)
Danny Roobaert <roobaert@nada.kth.se> - Friday, October 08, 1999 at 12:01:21 (PDT)
Edited by Isabelle Guyon - Thursday, October 14, 1999.
Comments: SVMs have been used either in the classification stage or in the pre-processing (Kernel Principal Component Analysis).
In Blanz-96, Support Vector Classifiers show excellent performance, leaving other methods behind. Osuna-97 demonstrates that SVCs can be trained on very large data sets (50,000 examples). The classification performance reaches that of one of the best known systems while being 30 times faster at run time.
In Schölkopf-97, the advantage of KPCA is measured more in terms of simplicity, ensured convergence, and ease of understanding of the non-linearities.
Zhao-98 notes that SVCs with Gaussian kernels handle the rejection of unknown "confusers" particularly well. Friess-98 reports performance on the sonar data of Gorman and Sejnowski (1988): their kernel adatron SVM achieves a 95.2% success rate, compared to 90.2% for the best backpropagation neural networks.
Papageorgiou-98 applies SVMs with a wavelet preprocessing to face and people detection, showing improvements with respect to their base system.
Roobaert-99 shows that an SVM system working on raw data, not incorporating any domain knowledge about the task, matches the performance of their baseline system that does incorporate such knowledge.
Massimiliano Pontil points out that, as shown by the comparison with other techniques, SVMs can be effectively trained even if the number of examples is much lower than the dimensionality of the object space. In the paper Pontil-Verri-97, linear SVMs are used for 3-D object recognition. The potential of SVMs is illustrated on a database of 7200 images of 100 different objects. The proposed system does not require feature extraction and performs recognition on images regarded as points of a high-dimensional space, without estimating pose. The excellent recognition rates achieved in all the experiments indicate that SVMs are well suited for aspect-based recognition.
In Roobaert-99, three methods for improving linear Support Vector Machines are presented for pattern recognition with a number of irrelevant dimensions, and a method for 3D object recognition without segmentation is proposed.
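The kernel adatron of Friess-98, mentioned above, is simple enough to sketch in full: gradient-ascent updates on the dual coefficients, clipped at zero. The toy data, learning rate and kernel width below are illustrative assumptions, not values from the paper:

```python
import math

def rbf(a, b, gamma=1.0):
    # Gaussian kernel between two points
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

def kernel_adatron(X, y, eta=0.5, epochs=200, gamma=1.0):
    """Kernel adatron (after Friess, Cristianini & Campbell, 1998):
    repeatedly nudge each alpha toward margin 1, clipping at zero."""
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            z = sum(alpha[j] * y[j] * K[i][j] for j in range(n))
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - y[i] * z))
    return alpha

def predict(X, y, alpha, x, gamma=1.0):
    # sign of the kernel expansion over the training points
    s = sum(a * yi * rbf(xi, x, gamma) for a, yi, xi in zip(alpha, y, X))
    return 1 if s >= 0 else -1

# Toy separable problem: points near the origin vs points far from it.
X = [(0.0, 0.0), (0.2, 0.1), (2.0, 2.0), (2.2, 1.9)]
y = [1, 1, -1, -1]
alpha = kernel_adatron(X, y)
```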
Text Categorization
- Reference(s):
- Text Categorization with Support Vector Machines: Learning with Many RelevantFeatures.
T. Joachims,
European Conference on Machine Learning (ECML),
1998.
- Inductive Learning Algorithms and Representations for Text Categorization,
S. Dumais, J. Platt, D. Heckerman, M. Sahami,
7th International Conference on Information and Knowledge Management,
1998.
- Support Vector Machines for Spam Categorization.
H. Drucker, D. Wu and V. Vapnik. IEEE Trans. on Neural Networks, vol. 10, number 5, pp. 1048-1054, 1999.
- Transductive Inference for Text Classification using Support Vector Machines.
Thorsten Joachims.
International Conference on Machine Learning (ICML),
1999.
- Reference link(s):
Joachims-98 Postscript, Joachims-98 PDF
Dumais et al 98
Drucker et al 98
Joachims-99 Postscript
Joachims-99 PDF
- Data link(s):
Reuters-21578
Comments: Joachims-98 reports that SVMs are well suited to learning in very high dimensional spaces (> 10000 inputs). They achieve substantial improvements over the currently best performing methods, eliminating the need for feature selection. The tests were run on the Ohsumed corpus of William Hersh and on Reuters-21578. Dumais et al report that they use linear SVMs because they are both accurate and fast (to train and to use): 35 times faster to train than the next most accurate classifier that they tested (decision trees). They have applied SVMs to the Reuters-21578 collection, emails and web pages. Drucker et al classify emails as spam and non-spam. They find that boosted trees and SVMs have similar performance in terms of accuracy and speed, with SVMs training significantly faster. Joachims-99 reports that transduction is a very natural setting for many text classification and information retrieval tasks. Transductive SVMs improve performance especially in cases with very small amounts of labelled training data.
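The "> 10000 inputs" above come from bag-of-words representations in which every vocabulary term is one feature, typically weighted by TF-IDF before a linear SVM is trained. A minimal sketch of that encoding (the documents and function name are illustrative, not from the cited papers):

```python
import math

def tfidf(docs):
    """Map each document to a sparse {term: weight} vector with tf-idf
    weights: term frequency times (natural-log) inverse document frequency."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = {}                                   # document frequency per term
    for toks in tokenized:
        for term in set(toks):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for toks in tokenized:
        vec = {}
        for term in set(toks):
            tf = toks.count(term) / len(toks)
            vec[term] = tf * math.log(n / df[term])
        vectors.append(vec)
    return vectors

docs = ["the cat sat", "the dog barked", "the cat purred"]
vecs = tfidf(docs)
# "the" occurs in every document, so its idf (and weight) is zero;
# rarer terms like "cat" get positive weight.
```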
Time Series Prediction and Dynamic Reconstruction of Chaotic Systems
- Reference(s):
Predicting Time Series with Support Vector Machines.
K.-R. Müller, A. Smola, G. Rätsch, B. Schölkopf, J. Kohlmorgen, V. Vapnik.
Proceedings ICANN'97, p. 999.
Springer Lecture Notes in Computer Science, 1997.
- Nonlinear Prediction of Chaotic Time Series using a Support Vector Machine
S. Mukherjee, E. Osuna, and F. Girosi, NNSP'97, 1997.
- Using Support Vector Machines for Time Series Prediction
Müller, K.-R.; Smola, A.; Rätsch, G.; Schölkopf, B.; Kohlmorgen, J.; Vapnik, V.
in Advances in Kernel Methods, B. Schölkopf, C.J.C. Burges, and A.J. Smola Eds.
Pages 242-253, MIT Press, 1999. ISBN 0-262-19416-3.
- Support Vector Machines for Dynamic Reconstruction of a Chaotic System
Davide Mattera and Simon Haykin
in Advances in Kernel Methods, B. Schölkopf, C.J.C. Burges, and A.J. Smola Eds.
Pages 211-241, MIT Press, 1999. ISBN 0-262-19416-3.
- Reference link(s):
Müller et al
Mukherjee et al
- Data link(s):
Synthetic data used: Mackey-Glass, Ikeda map, and Lorenz; and
Santa Fe competition Data Set D
Comments: Müller et al report excellent performance of SVMs. They set a new record on the Santa Fe competition data set D, 37% better than the winning approach during the competition. Mattera et al report that SVMs are effective for such tasks and that their main advantage is the possibility of trading off the required accuracy against the number of support vectors.
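Time series prediction with SVMs, as in the papers above, is usually cast as regression on a delay embedding: each window of d past values is the input and the following value is the target. A sketch of that windowing step (the function name and toy series are illustrative):

```python
def delay_embed(series, d=3):
    """Turn a scalar time series into (window, next value) pairs so any
    regression method (e.g. support vector regression) can train on them."""
    X, y = [], []
    for i in range(len(series) - d):
        X.append(tuple(series[i:i + d]))   # d past values
        y.append(series[i + d])            # value to predict
    return X, y

X, y = delay_embed([1, 2, 3, 4, 5, 6], d=3)
# X = [(1, 2, 3), (2, 3, 4), (3, 4, 5)], y = [4, 5, 6]
```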
Support Vector Machine Classification of Microarray Gene Expression Data
- Reference(s):
Support Vector Machine Classification of Microarray Gene Expression Data
M. Brown, W. Grundy, D. Lin, N. Cristianini, C. Sugnet, M. Ares Jr., D. Haussler
University of California, Santa Cruz,
technical report UCSC-CRL-99-09.
- Reference link(s):
http://www.cse.ucsc.edu/research/compbio/genex/genex.tech.html
- Data link(s):
http://www.cse.ucsc.edu/research/compbio/genex/
Comments: SVMs outperformed all other classifiers when provided with a specifically designed kernel to deal with very imbalanced data.
Handwritten digit recognition problems
- Reference(s):
- A training algorithm for optimal margin classifiers.
B. Boser, I. Guyon, and V. Vapnik.
In Fifth Annual Workshop on Computational Learning Theory, pages 144--152, Pittsburgh, ACM.
1992.
- Writer adaptation for on-line handwritten character recognition.
N. Matic, I. Guyon, J. Denker, and V. Vapnik
In Second International Conference on Pattern Recognition and Document Analysis, pages 187--191, Tsukuba, Japan, IEEE Computer Society Press,
1993.
- Support Vector Networks.
C. Cortes and V. Vapnik,
Machine Learning, 20:273-297,
1995.
- Learning algorithms for classification: A comparison on handwritten digit recognition.
Y. Le Cun, L.D. Jackel, L. Bottou, C. Cortes, J. Denker, H. Drucker, I. Guyon, U.A. Muller, E. Sackinger, P. Simard, and V. Vapnik.
In J.H. Kwon and S. Cho, editors, Neural Networks: The Statistical Mechanics Perspective, pages 261--276. World Scientific
1995.
- Discovering informative patterns and data cleaning.
I. Guyon, N. Matic, and V. Vapnik,
In U.M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, editors, Advances in Knowledge Discovery and Data Mining, pages 181--203. MIT Press.
1996.
- Incorporating Invariances in Support Vector Learning Machines.
B. Schölkopf, C. Burges, and V. Vapnik,
In: C. von der Malsburg, W. von Seelen, J. C. Vorbrüggen, and B. Sendhoff (eds.): Artificial Neural Networks - ICANN'96. Springer Lecture Notes in Computer Science Vol. 1112, Berlin, 47-52
1996.
- Prior Knowledge in Support Vector Kernels
B. Schölkopf, P. Simard, A. Smola, and V. Vapnik,
NIPS'97
1997.
- Pairwise Classification and Support Vector Machines
Ulrich H.-G. Kressel
in Advances in Kernel Methods, B. Schölkopf, C.J.C. Burges, and A.J. Smola Eds.
Pages 255-268, MIT Press, 1999. ISBN 0-262-19416-3.
- The kernel adatron algorithm: a fast and simple learning procedure for support vector machines.
T.-T. Friess, N. Cristianini, C. Campbell.
15th Intl. Conf. Machine Learning, Morgan Kaufman Publishers.
1998.
- Reference link(s):
Boser-92, Matic-93, Guyon-96, Schölkopf-96, Schölkopf-98
- Data link(s):
http://www.clopinet.com/isabelle/Projects/LITTLE1200/
http://www.research.att.com/~yann/ocr/mnist/
Comments: This is one of the first applications of SVCs. It was demonstrated that SVCs could be applied directly to pixel maps and nearly match or outperform other techniques requiring elaborate pre-processing, architecture design (structured neural networks), and/or a metric incorporating prior knowledge of the task (tangent distance) -- see e.g. Lecun-95. Elaborate metrics such as tangent distance can be used in combination with SVCs (Schölkopf-96-97) and yield improved performance. SVCs are also attractive for handwriting recognition tasks because they lend themselves to easy writer adaptation and data cleaning, by making use of the support vectors (Matic-93 and Guyon-96). In Friess-98, the kernel adatron SVM slightly outperforms the original SVM on the USPS character recognition benchmark.
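Applying SVCs "directly to pixel maps" means the kernel is evaluated on raw, flattened pixel vectors; the polynomial and Gaussian kernels used in this early work are easy to state. A minimal sketch (the tiny 2x2 "pixel maps" below are illustrative):

```python
import math

def poly_kernel(x, z, degree=3):
    """Polynomial kernel (1 + x.z)^d on raw pixel vectors: implicitly uses
    all products of up to d pixel values as features."""
    return (1.0 + sum(a * b for a, b in zip(x, z))) ** degree

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel: similarity decays with pixel-space distance."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

# Two tiny 2x2 "pixel maps", flattened to vectors.
x = [0.0, 1.0, 1.0, 0.0]
z = [0.0, 1.0, 0.9, 0.0]
```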
Breast cancer diagnosis and prognosis
- Reference(s):
O. L. Mangasarian, W. Nick Street and W. H. Wolberg: ``Breast cancer diagnosis and prognosis via linear programming", Operations Research, 43(4), July-August 1995, 570-577.
P. S. Bradley, O. L. Mangasarian and W. Nick Street: ``Feature selection via mathematical programming", INFORMS Journal on Computing 10, 1998, 209-217.
P. S. Bradley, O. L. Mangasarian and W. Nick Street: ``Clustering via concave minimization", in ``Advances in Neural Information Processing Systems -9-", (NIPS*96), M. C. Mozer and M. I. Jordan and T. Petsche, editors, MIT Press, Cambridge, MA, 1997, 368-374.
T.-T. Friess, N. Cristianini, C. Campbell.
The kernel adatron algorithm: a fast and simple learning procedure for support vector machines.
15th Intl. Conf. Machine Learning, Morgan Kaufman Publishers.
1998.
- Reference link(s):
ftp://ftp.cs.wisc.edu/math-prog/tech-reports/94-10.ps
ftp://ftp.cs.wisc.edu/math-prog/tech-reports/95-21.ps
ftp://ftp.cs.wisc.edu/math-prog/tech-reports/96-03.ps
http://svm.first.gmd.de/papers/FriCriCam98.ps.gz
- Data link(s):
WDBC: Wisconsin Diagnostic Breast Cancer Database
WPBC: Wisconsin Prognostic Breast Cancer Database
Modified by: Isabelle Guyon <isabelle@clopinet.com> - Monday, September 20, 1999 at 9:30 (PDT)
Comments: Mangasarian et al use an underlying linear programming formulation that can be interpreted as an SVM. Their system (XCYT) is a highly accurate non-invasive breast cancer diagnostic program currently in use at University of Wisconsin Hospital. Friess et al report that the Wisconsin breast cancer dataset has been extensively studied. Their system, which uses Adatron SVMs, has a 99.48% success rate, compared to 94.2% (CART), 95.9% (RBF), 96% (linear discriminant), and 96.6% (backpropagation network), all results reported elsewhere in the literature.
Support Vector Decision Tree Methods for Database Marketing
- Reference(s):
On Support Vector Decision Trees for Database Marketing. K.P. Bennett, D. Wu, and L. Auslender. Report No. 98-100, Rensselaer Polytechnic Institute, Troy, NY, 1998.
- Reference link(s):
http://www.rpi.edu/~bennek/mr98100.ps - Data link(s):
Data is proprietary.
Comments: The support vector decision tree performed better than C4.5. SVDT produced very simple trees using few attributes.
- Read more about SVM applications on the Kernel Machines website.
from: http://www.clopinet.com/isabelle/Projects/SVM/