Deploying Nutch 1.4

Nutch 1.4 was officially released on November 26, 2011. It updates some content and configuration relative to 1.3, but the differences are small; the differences from 1.2 and earlier are much larger. Since Nutch 1.3, indexing is generated with Solr, and querying goes through Solr as well, so the built-in web search service from Nutch 1.2 and earlier is no longer needed.

First, download the latest Nutch 1.4 from the official Apache site.

The download page is:

http://www.apache.org/dyn/closer.cgi/nutch/

Either apache-nutch-1.4-bin.zip or apache-nutch-1.4-bin.tar.gz will do.

After downloading, extract it. This article covers running Nutch on Linux; in the next one I will cover Nutch development in Eclipse.

After extracting, go to the nutch/runtime/local/conf directory. For now we only need to know two files there:

nutch-default.xml is Nutch's configuration file.

regex-urlfilter.txt is where Nutch's crawl filtering rules are edited.

Since this is a first, test crawl, we don't need to tune any other settings; the following is enough:

In nutch-default.xml, find the http.agent.name property and fill in its value:

<!-- HTTP properties -->

<property>
  <name>http.agent.name</name>
  <value>jdodrc</value>
  <description>HTTP 'User-Agent' request header. MUST NOT be empty - 
  please set this to a single word uniquely related to your organization.

  NOTE: You should also check other related properties:

	http.robots.agents
	http.agent.description
	http.agent.url
	http.agent.email
	http.agent.version

  and set their values appropriately.

  </description>
</property>
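Per the NOTE in the description above, the agent name should also be listed first in http.robots.agents, or Nutch logs a warning at fetch time. A minimal sketch for the same file (the value jdodrc just mirrors the example above and is otherwise arbitrary):

```xml
<property>
  <name>http.robots.agents</name>
  <value>jdodrc,*</value>
  <description>Agent strings looked for in robots.txt files,
  comma-separated, in decreasing order of precedence.</description>
</property>
```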

If this property is not set, running Nutch fails with the following error:

Exception in thread "main" java.lang.IllegalArgumentException: Fetcher: No agents listed in 'http.agent.name' property.

With the agent name set, we still need to configure crawl rules. Say we want to crawl www.163.com, but not every link on the page: ads pointing to sohu, for instance, should be skipped, and only 163's own content fetched. Crawl rules are written as regular expressions (regular expressions themselves are not covered here).

So where do the rules go?

They are written in regex-urlfilter.txt:

# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|xls|XLS|gz|GZ|rpm|RPM|tgz|TGZ|mov|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS)$

This rule filters out file extensions Nutch can't parse.

Crawling dynamic pages:

# skip URLs containing certain characters as probable queries, etc.
#-[?*!@=]   (comment this rule out, as shown here, if you need to crawl dynamic pages)
-[~]

Page-link filter rules; the following restricts the crawl to the 163 site:

# accept anything else
#+^http://([a-z0-9]*\.)*(.*\.)*.*/
+^http://([a-z0-9]*\.)*163\.com

For testing purposes, only the filter rule needs to be changed.

With http.agent.name configured in nutch-default.xml and the regex rules configured in regex-urlfilter.txt, the next step on Linux is to make all the .sh files under runtime/local/bin executable.

In the bin directory, run:

chmod +x *.sh

to make all the shell scripts executable.

Now let's set up a test:

Under runtime/local, create a urls directory, and inside it a file named test containing the entry-point URL of the site we want to crawl:

http://www.163.com/

Save it. Our local directory now contains a urls directory holding the seed file.
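The steps above can be sketched as shell commands (run from nutch/runtime/local; the file name test matches the text above):

```shell
# Create the seed directory and the entry-point file inside it.
mkdir -p urls
echo "http://www.163.com/" > urls/test

# Verify the seed file contents.
cat urls/test
```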

Now we can run the test.

Before doing so, we should understand the crawl command's parameters:

Crawl <urlDir> -solr <solrURL> [-dir d] [-threads n] [-depth i] [-topN N]

Arguments in [] are optional.

urlDir is the directory containing the seed (entry-point) file

-solr <solrUrl> is the Solr URL (leave it out if you have no Solr)

-dir is the directory where the crawl data is stored

-threads is the number of fetch threads (more is not always better; use just enough for your needs; default 10)

-depth is the crawl depth (default 5)

-topN is the crawl breadth, the maximum pages fetched per level (default Long.MAX_VALUE)

 

The bin directory also contains a nutch shell script; its crawl argument launches the crawl driver class.

Let's run a test crawl. Our current directory is nutch/runtime/local:

bin/nutch crawl urls -solr http://localhost:8080/solr/ -dir crawl -depth 2 -threads 5 -topN 100

To keep a log for later inspection, append >& (output location) to the end of the command.
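A minimal sketch of that redirection, using a stand-in command in place of the full bin/nutch crawl invocation (in bash, ">& file" sends both stdout and stderr to the file; the portable POSIX form is "> file 2>&1"):

```shell
# Stand-in for the crawl command: one stdout line and one stderr line,
# both captured into crawl.log by the >& redirection.
{ echo "fetching http://www.163.com/"; echo "WARN: example warning" >&2; } >& crawl.log

# Inspect the captured log afterwards.
cat crawl.log
```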

 

Solr must be deployed separately; I will cover that in a dedicated Solr article. For the -solr option here, just supply the Solr URL.

To test or develop on Windows, first install Cygwin; I will cover that in the article on setting up Nutch 1.4 in Eclipse.

Test output:

crawl started in: crawl
rootUrlDir = urls/test.txt
threads = 10
depth = 2
solrUrl=http://localhost:8080/solr/
topN = 100
Injector: starting at 2012-02-07 14:21:20
Injector: crawlDb: crawl/crawldb
Injector: urlDir: urls/test.txt
Injector: Converting injected urls to crawl db entries.
Injector: Merging injected urls into crawl db.
Injector: finished at 2012-02-07 14:21:25, elapsed: 00:00:04
Generator: starting at 2012-02-07 14:21:25
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 100
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: crawl/segments/20120207142128
Generator: finished at 2012-02-07 14:21:30, elapsed: 00:00:05
Fetcher: Your 'http.agent.name' value should be listed first in 'http.robots.agents' property.
Fetcher: starting at 2012-02-07 14:21:30
Fetcher: segment: crawl/segments/20120207142128
Using queue mode : byHost
Fetcher: threads: 10
Fetcher: time-out divisor: 2
QueueFeeder finished: total 1 records + hit by time limit :0
Using queue mode : byHost
Using queue mode : byHost
Using queue mode : byHost
fetching http://www.163.com/
-finishing thread FetcherThread, activeThreads=1
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Using queue mode : byHost
Using queue mode : byHost
-finishing thread FetcherThread, activeThreads=1
-finishing thread FetcherThread, activeThreads=1
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Using queue mode : byHost
Using queue mode : byHost
-finishing thread FetcherThread, activeThreads=1
-finishing thread FetcherThread, activeThreads=1
-finishing thread FetcherThread, activeThreads=1
Using queue mode : byHost
Fetcher: throughput threshold: -1
-finishing thread FetcherThread, activeThreads=1
Fetcher: throughput threshold retries: 5
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=0
Fetcher: finished at 2012-02-07 14:21:36, elapsed: 00:00:05
ParseSegment: starting at 2012-02-07 14:21:36
ParseSegment: segment: crawl/segments/20120207142128
Parsing: http://www.163.com/
ParseSegment: finished at 2012-02-07 14:21:39, elapsed: 00:00:03
CrawlDb update: starting at 2012-02-07 14:21:39
CrawlDb update: db: crawl/crawldb
CrawlDb update: segments: [crawl/segments/20120207142128]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2012-02-07 14:21:42, elapsed: 00:00:03
Generator: starting at 2012-02-07 14:21:42
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 100
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: crawl/segments/20120207142145
Generator: finished at 2012-02-07 14:21:48, elapsed: 00:00:05
Fetcher: Your 'http.agent.name' value should be listed first in 'http.robots.agents' property.
Fetcher: starting at 2012-02-07 14:21:48
Fetcher: segment: crawl/segments/20120207142145
Using queue mode : byHost
Fetcher: threads: 10
Fetcher: time-out divisor: 2
Using queue mode : byHost
QueueFeeder finished: total 97 records + hit by time limit :0
Using queue mode : byHost
fetching http://bbs.163.com/
Using queue mode : byHost
fetching http://bbs.163.com/rank/
Using queue mode : byHost
fetching http://tech.163.com/cnstock/
Using queue mode : byHost
fetching http://tech.163.com/
Using queue mode : byHost
fetching http://tech.163.com/digi/nb/
Using queue mode : byHost
Using queue mode : byHost
fetching http://g.163.com/a?CID=10625&Values=3331479594&Redirect=http:/www.edu-163.com/Item/list.asp?id=1164
fetching http://g.163.com/r?site=netease&affiliate=homepage&cat=homepage&type=textlinkhouse&location=1
Using queue mode : byHost
fetching http://g.163.com/a?CID=10627&Values=896009995&Redirect=http:/www.dv37.com/jiaoyu/xiaoxinxing/
Using queue mode : byHost
fetching http://g.163.com/a?CID=10635&Values=1012801948&Redirect=http:/www.worldwayhk.com/
Fetcher: throughput threshold: -1
Fetcher: throughput threshold retries: 5
fetching http://g.163.com/a?CID=12392&Values=441270714&Redirect=http:/www.qinzhe.com/chinese/index.htm
fetching http://g.163.com/a?CID=10634&Values=2943411042&Redirect=http:/www.kpeng.com.cn/
fetching http://g.163.com/a?CID=12337&Values=3289604641&Redirect=http:/www.offcn.com/zg/2011ms/index.html
fetching http://g.163.com/a?CID=10633&Values=1745739655&Redirect=http:/www.edu-163.com/aidi/aidinj1.htm
fetching http://g.163.com/a?CID=12307&Values=3388898846&Redirect=http:/www.offcn.com/zg/2011ms/index.html
fetching http://g.163.com/a?CID=10629&Values=740233954&Redirect=http:/www.embasjtu.com/
fetching http://g.163.com/a?CID=10632&Values=715626766&Redirect=http:/www.edu-163.com/aidi/aidimg.htm
fetching http://g.163.com/a?CID=12259&Values=3180311081&Redirect=http:/www.gpkdtx.com/
fetching http://g.163.com/a?CID=12271&Values=904657751&Redirect=http:/www.vipabc.com/count.asp?code=QnfF0agFbn
fetching http://g.163.com/a?CID=10628&Values=2735701856&Redirect=http:/www.wsi.com.cn
fetching http://g.163.com/a?CID=10623&Values=1704187161&Redirect=http:/www.wsi.com.cn
fetching http://g.163.com/a?CID=12267&Values=608079303&Redirect=http:/edu.163.com/special/official/
fetching http://g.163.com/a?CID=10631&Values=3773655455&Redirect=http:/www.xinhaowei.cn/zt/sasheng-new/
fetching http://g.163.com/a?CID=10630&Values=4025376053&Redirect=http:/www.bwpx.com/
fetching http://g.163.com/a?CID=12283&Values=1441209353&Redirect=http:/www.zyqm.org/
fetching http://mobile.163.com/
fetching http://mobile.163.com/app/
fetching http://reg.vip.163.com/enterMail.m?enterVip=true-----------
fetching http://product.tech.163.com/mobile/
fetching http://hea.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=68
fetching http://reg.email.163.com/mailregAll/reg0.jsp?from=163&regPage=163
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=67
fetching http://yuehui.163.com/
fetching http://auto.163.com/
fetching http://auto.163.com/buy/
fetching http://gongyi.163.com/
fetching http://reg.163.com/Main.jsp?username=pInfo
fetching http://reg.163.com/Logout.jsp?username=accountName&url=http:/www.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=61
fetching http://money.163.com/fund/
fetching http://money.163.com/stock/
fetching http://money.163.com/hkstock/
fetching http://money.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=57
fetching http://blog.163.com/passportIn.do?entry=163
fetching http://blog.163.com/?fromNavigation
fetching http://pay.163.com/
fetching http://baby.163.com/
fetching http://discovery.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=52
fetching http://p.mail.163.com/mailinfo/shownewmsg_www_0819.htm
fetching http://help.163.com?b01abh1
fetching http://www.163.com/rss/
fetching http://home.163.com/
fetching http://product.auto.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=47
fetching http://ecard.163.com/
fetching http://photo.163.com/?username=pInfo
fetching http://photo.163.com/pp/square/
fetching http://email.163.com/
fetching http://m.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=42
fetching http://edu.163.com/
fetching http://edu.163.com/liuxue/
fetching http://xf.house.163.com/gz/
fetching http://game.163.com/
fetching http://travel.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=37
fetching http://baoxian.163.com/?from=index
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=36
fetching http://zx.caipiao.163.com?from=shouye
fetching http://entry.mail.163.com/coremail/fcg/ntesdoor2?verifycookie=1&lightweight=1
fetching http://biz.163.com/
fetching http://t.163.com/rank?f=163dh
fetching http://t.163.com/chat?f=163dh
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=31
fetching http://t.163.com/?f=wstopmicoblogmsg
fetch of http://zx.caipiao.163.com?from=shouye failed with: org.apache.nutch.protocol.http.api.HttpException: bad status line '<html>': For input string: "<html>"
fetching http://t.163.com/rank/daren?f=163dh
fetching http://t.163.com/?f=wstopmicoblogmsg.enter
fetching http://t.163.com/
fetching http://sports.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=26
fetching http://sports.163.com/nba/
fetching http://sports.163.com/cba/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=24
fetching http://sports.163.com/yc/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=23
fetching http://vipmail.163.com/
fetching http://digi.163.com/
fetching http://lady.163.com/beauty/
fetching http://lady.163.com/
fetching http://lady.163.com/sense/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=18
fetching http://house.163.com/
fetching http://news.163.com/review/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=16
fetching http://news.163.com/photo/
fetching http://news.163.com/
fetching http://v.163.com/doc/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=13
fetching http://v.163.com/zongyi/
fetching http://v.163.com/
fetching http://v.163.com/focus/
fetching http://fushi.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=9
fetching http://yc.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=8
fetching http://mall.163.com/
fetching http://ent.163.com/movie/
fetching http://ent.163.com/
fetching http://ent.163.com/music/
fetching http://ent.163.com/tv/
fetching http://war.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=2
* queue: http://fashion.163.com
  maxThreads    = 10
  inProgress    = 0
  crawlDelay    = 5000
  minCrawlDelay = 0
  nextFetchTime = 1328595704430
  now           = 1328595728444
  0. http://fashion.163.com/
* queue: http://book.163.com
  maxThreads    = 10
  inProgress    = 0
  crawlDelay    = 5000
  minCrawlDelay = 0
  nextFetchTime = 1328595704430
  now           = 1328595728445
  0. http://book.163.com/
fetching http://fashion.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=1
* queue: http://book.163.com
  maxThreads    = 10
  inProgress    = 0
  crawlDelay    = 5000
  minCrawlDelay = 0
  nextFetchTime = 1328595704430
  now           = 1328595729445
  0. http://book.163.com/
fetching http://book.163.com/
-activeThreads=10, spinWaiting=0, fetchQueues.totalSize=0
-finishing thread FetcherThread, activeThreads=9
-finishing thread FetcherThread, activeThreads=8
-activeThreads=8, spinWaiting=0, fetchQueues.totalSize=0
-finishing thread FetcherThread, activeThreads=7
-finishing thread FetcherThread, activeThreads=6
-finishing thread FetcherThread, activeThreads=5
-finishing thread FetcherThread, activeThreads=4
-finishing thread FetcherThread, activeThreads=3
-finishing thread FetcherThread, activeThreads=2
-activeThreads=2, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=2, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=2, spinWaiting=0, fetchQueues.totalSize=0
-finishing thread FetcherThread, activeThreads=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=0
Fetcher: finished at 2012-02-07 14:22:20, elapsed: 00:00:32
ParseSegment: starting at 2012-02-07 14:22:20
ParseSegment: segment: crawl/segments/20120207142145
Parsing: http://auto.163.com/
Parsing: http://auto.163.com/buy/
Parsing: http://baby.163.com/
Parsing: http://baoxian.163.com/?from=index
Parsing: http://bbs.163.com/
Parsing: http://bbs.163.com/rank/
Parsing: http://biz.163.com/
Parsing: http://blog.163.com/?fromNavigation
Parsing: http://book.163.com/
Parsing: http://digi.163.com/
Parsing: http://discovery.163.com/
Parsing: http://edu.163.com/
Parsing: http://edu.163.com/liuxue/
Parsing: http://email.163.com/
Parsing: http://ent.163.com/
Parsing: http://ent.163.com/movie/
Parsing: http://ent.163.com/music/
Parsing: http://ent.163.com/tv/
Parsing: http://fashion.163.com/
Parsing: http://fushi.163.com/
Parsing: http://g.163.com/a?CID=10623&Values=1704187161&Redirect=http:/www.wsi.com.cn
Error parsing: http://g.163.com/a?CID=10623&Values=1704187161&Redirect=http:/www.wsi.com.cn: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=10625&Values=3331479594&Redirect=http:/www.edu-163.com/Item/list.asp?id=1164
Error parsing: http://g.163.com/a?CID=10625&Values=3331479594&Redirect=http:/www.edu-163.com/Item/list.asp?id=1164: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=10627&Values=896009995&Redirect=http:/www.dv37.com/jiaoyu/xiaoxinxing/
Error parsing: http://g.163.com/a?CID=10627&Values=896009995&Redirect=http:/www.dv37.com/jiaoyu/xiaoxinxing/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=10628&Values=2735701856&Redirect=http:/www.wsi.com.cn
Error parsing: http://g.163.com/a?CID=10628&Values=2735701856&Redirect=http:/www.wsi.com.cn: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=10629&Values=740233954&Redirect=http:/www.embasjtu.com/
Error parsing: http://g.163.com/a?CID=10629&Values=740233954&Redirect=http:/www.embasjtu.com/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=10630&Values=4025376053&Redirect=http:/www.bwpx.com/
Error parsing: http://g.163.com/a?CID=10630&Values=4025376053&Redirect=http:/www.bwpx.com/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=10631&Values=3773655455&Redirect=http:/www.xinhaowei.cn/zt/sasheng-new/
Error parsing: http://g.163.com/a?CID=10631&Values=3773655455&Redirect=http:/www.xinhaowei.cn/zt/sasheng-new/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=10632&Values=715626766&Redirect=http:/www.edu-163.com/aidi/aidimg.htm
Parsing: http://g.163.com/a?CID=10633&Values=1745739655&Redirect=http:/www.edu-163.com/aidi/aidinj1.htm
Parsing: http://g.163.com/a?CID=10634&Values=2943411042&Redirect=http:/www.kpeng.com.cn/
Error parsing: http://g.163.com/a?CID=10634&Values=2943411042&Redirect=http:/www.kpeng.com.cn/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=10635&Values=1012801948&Redirect=http:/www.worldwayhk.com/
Error parsing: http://g.163.com/a?CID=10635&Values=1012801948&Redirect=http:/www.worldwayhk.com/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=12259&Values=3180311081&Redirect=http:/www.gpkdtx.com/
Error parsing: http://g.163.com/a?CID=12259&Values=3180311081&Redirect=http:/www.gpkdtx.com/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=12267&Values=608079303&Redirect=http:/edu.163.com/special/official/
Error parsing: http://g.163.com/a?CID=12267&Values=608079303&Redirect=http:/edu.163.com/special/official/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=12271&Values=904657751&Redirect=http:/www.vipabc.com/count.asp?code=QnfF0agFbn
Error parsing: http://g.163.com/a?CID=12271&Values=904657751&Redirect=http:/www.vipabc.com/count.asp?code=QnfF0agFbn: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=12283&Values=1441209353&Redirect=http:/www.zyqm.org/
Error parsing: http://g.163.com/a?CID=12283&Values=1441209353&Redirect=http:/www.zyqm.org/: failed(2,0): Can't retrieve Tika parser for mime-type application/octet-stream
Parsing: http://g.163.com/a?CID=12307&Values=3388898846&Redirect=http:/www.offcn.com/zg/2011ms/index.html
Parsing: http://g.163.com/a?CID=12337&Values=3289604641&Redirect=http:/www.offcn.com/zg/2011ms/index.html
Parsing: http://g.163.com/a?CID=12392&Values=441270714&Redirect=http:/www.qinzhe.com/chinese/index.htm
Parsing: http://g.163.com/r?site=netease&affiliate=homepage&cat=homepage&type=textlinkhouse&location=1
Parsing: http://game.163.com/
Parsing: http://gongyi.163.com/
Parsing: http://hea.163.com/
Parsing: http://home.163.com/
Parsing: http://house.163.com/
Parsing: http://lady.163.com/
Parsing: http://lady.163.com/beauty/
Parsing: http://lady.163.com/sense/
Parsing: http://mall.163.com/
Parsing: http://mobile.163.com/
Parsing: http://mobile.163.com/app/
Parsing: http://money.163.com/
Parsing: http://money.163.com/fund/
Parsing: http://money.163.com/hkstock/
Parsing: http://money.163.com/stock/
Parsing: http://news.163.com/
Parsing: http://news.163.com/photo/
Parsing: http://news.163.com/review/
Parsing: http://p.mail.163.com/mailinfo/shownewmsg_www_0819.htm
Parsing: http://pay.163.com/
Parsing: http://photo.163.com/pp/square/
Parsing: http://product.auto.163.com/
Parsing: http://product.tech.163.com/mobile/
Parsing: http://reg.163.com/Logout.jsp?username=accountName&url=http:/www.163.com/
Parsing: http://reg.163.com/Main.jsp?username=pInfo
Parsing: http://reg.email.163.com/mailregAll/reg0.jsp?from=163&regPage=163
Parsing: http://reg.vip.163.com/enterMail.m?enterVip=true-----------
Parsing: http://sports.163.com/
Parsing: http://sports.163.com/cba/
Parsing: http://sports.163.com/nba/
Parsing: http://sports.163.com/yc/
Parsing: http://t.163.com/chat?f=163dh
Parsing: http://t.163.com/rank/daren?f=163dh
Parsing: http://t.163.com/rank?f=163dh
Parsing: http://tech.163.com/
Parsing: http://tech.163.com/cnstock/
Parsing: http://tech.163.com/digi/nb/
Parsing: http://travel.163.com/
Parsing: http://v.163.com/
Parsing: http://v.163.com/doc/
Parsing: http://v.163.com/focus/
Parsing: http://vipmail.163.com/
Parsing: http://war.163.com/
Parsing: http://www.163.com/rss/
Parsing: http://xf.house.163.com/gz/
Parsing: http://yc.163.com/
Parsing: http://yuehui.163.com/
ParseSegment: finished at 2012-02-07 14:22:26, elapsed: 00:00:06
CrawlDb update: starting at 2012-02-07 14:22:26
CrawlDb update: db: crawl/crawldb
CrawlDb update: segments: [crawl/segments/20120207142145]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2012-02-07 14:22:30, elapsed: 00:00:04
crawl finished: crawl

Reprinted from: https://my.oschina.net/toblackmagic/blog/41511
