Just a Crawler

This post walks through a short Perl script that logs in to Google, fetches Google Trends data for a given keyword, and saves the result to a local file. Note that Google may block this kind of automated access, so the script does not always work.

use strict;
use warnings;
use WWW::Mechanize;
use HTTP::Cookies;

### Go to the login page and log in.
#my $url = 'https://www.google.com/accounts/ServiceLogin?hl=en&service=finance&nui=1&continue=http%3A%2F%2Ffinance.google.com%2Ffinance';
my $url = 'https://accounts.google.com/ServiceLogin';
die "Usage: $0 username password keyword outputfile\n" unless @ARGV == 4;
my ($username, $password, $keyword, $outputfile) = @ARGV;

print "usr: $username\n";
print "psw: $password\n";
print "keyword: $keyword\n";
print "output: $outputfile\n";
print "Searching ......\n";

my $mech = WWW::Mechanize->new();
$mech->cookie_jar(HTTP::Cookies->new());
$mech->get($url);
$mech->form_number(1);
$mech->field(Email => $username);
$mech->field(Passwd => $password);
$mech->click();
die "Login failed: HTTP " . $mech->status . "\n" unless $mech->success;
#Go to the next link, now that we are logged in.
#$url = 'http://www.google.com/trends/viz?q=alan+kay&graph=all_csv&sa=N';
# uri_escape() percent-encodes the keyword (spaces, non-ASCII characters, etc.)
use URI::Escape;
$url = 'http://www.google.com/trends/viz?q='.uri_escape($keyword).'&date=all&geo=cn&graph=all_csv&scale=1&sa=N';
#$url = 'http://finance.google.com/finance/portfolio?action=view&pid=1&pview=pview&output=csv';

$mech->get($url);
my $output_page = $mech->content();

open my $fh, '>', $outputfile or die "Cannot open $outputfile: $!\n";
print $fh $output_page;
close $fh;
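The keyword goes into the Trends export URL as a query parameter, so spaces and non-ASCII characters have to be percent-encoded. A quick way to preview what the encoded URL looks like from the shell (this sketch uses python3 for the encoding, which is an assumption about the local environment; the Perl-side equivalent is URI::Escape's uri_escape):

```shell
# Percent-encode a sample keyword and print the resulting Trends export URL.
# 'alan kay' is the sample keyword from the commented-out URL in the script.
keyword='alan kay'
encoded=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' "$keyword")
echo "http://www.google.com/trends/viz?q=${encoded}&date=all&geo=cn&graph=all_csv&scale=1&sa=N"
```

The space becomes %20, so a multi-word keyword no longer breaks the query string.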


12/4/2011 Update

This script sometimes stops working because Google bans this kind of automated access.

