Scraping every post from bio-star, a forum widely used in bioinformatics
This is the first episode of the web-scraping series. It covers how to analyze the bio-star site and scrape the full post list, the tag list, and so on. The prerequisite is that the reader already knows Perl and then learns Perl's LWP module; you might even print that book out and read it, it is quite useful!
http://seqanswers.com/ is the forum homepage.
http://seqanswers.com/forums/forumdisplay.php?f=18 is the board to scrape, with 570 pages in total:
http://seqanswers.com/forums/forumdisplay.php?f=18&order=desc&page=1
http://seqanswers.com/forums/forumdisplay.php?f=18&order=desc&page=570
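As a minimal sketch of the pagination above, the 570 listing URLs can be generated with the URI module (the same module the scraper code in this post uses); nothing is fetched here:

```perl
use strict;
use warnings;
use URI;

# Build the 570 paginated listing URLs for board f=18.
my @urls;
foreach my $page ( 1 .. 570 ) {
    my $url = URI->new('http://seqanswers.com/forums/forumdisplay.php');
    $url->query_form( f => 18, order => 'desc', page => $page );
    push @urls, $url->as_string;
}
print scalar(@urls), " pages to fetch\n";
print "$urls[0]\n$urls[-1]\n";
```

Each of these URLs can then be handed to LWP::UserAgent for downloading.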
Inside each listing page, <tbody id="threadbits_forum_18"> wraps many <tr> pairs.
The first five <tr> pairs can be skipped; their content is not needed.
With that, the whole thread listing can be captured!
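The rule above can be sketched with HTML::TreeBuilder against a tiny stand-in snippet (in the real script $html comes from an LWP download): locate the tbody by its id, collect its <tr> rows, and drop the first five, which are not thread entries:

```perl
use strict;
use warnings;
use HTML::TreeBuilder;

# Stand-in HTML: five rows to skip, then the real thread rows.
my $html = <<'HTML';
<table><tbody id="threadbits_forum_18">
<tr><td>sticky 1</td></tr><tr><td>sticky 2</td></tr><tr><td>sticky 3</td></tr>
<tr><td>sticky 4</td></tr><tr><td>sticky 5</td></tr>
<tr><td>Thread A</td></tr>
<tr><td>Thread B</td></tr>
</tbody></table>
HTML

my $tree  = HTML::TreeBuilder->new_from_content($html);
my $tbody = $tree->look_down( id => 'threadbits_forum_18' );
my @rows  = $tbody->look_down( _tag => 'tr' );
splice @rows, 0, 5;                        # skip the first five <tr> pairs
my @titles = map { $_->as_text } @rows;    # the remaining rows are real threads
print "$_\n" for @titles;
$tree->delete;                             # free the parse tree
```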
First let us look at how to scrape the board layout of the forum homepage; only after that do we enter each board to scrape its posts.
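The homepage-scraping step produces an index.txt file that the main script below reads. Its exact origin is not shown in the post, but from the way the script splits each line, the format must be three whitespace-separated fields: listing URL, tag name, and total post count. A minimal sketch, using the three sample tags listed at the end of this post as stand-in data:

```perl
use strict;
use warnings;

# Write index.txt in the format the main script expects:
#   <listing URL> <tag name> <post count>
open my $fh, '>', 'index.txt' or die "cannot write index.txt: $!";
print $fh "https://www.biostars.org/t/rna-seq rna 1573\n";
print $fh "https://www.biostars.org/t/R R 1309\n";
print $fh "https://www.biostars.org/t/snp snp 1268\n";
close $fh;

# With 40 posts per listing page, the pages per tag work out to
# int(count/40)+1, e.g. int(1573/40)+1 = 40 pages for rna-seq.
```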
Next is the code that walks into each board and scrapes its posts; it can be copied and pasted as-is!
[perl]
use LWP::Simple;
use HTML::TreeBuilder;
use Encode;
use LWP::UserAgent;
use HTTP::Cookies;
use URI;    # needed for URI->new below
my $tmp_ua = LWP::UserAgent->new;    # the UserAgent object sends the HTTP requests
$tmp_ua->timeout(15);                # connection timeout: 15 seconds
$tmp_ua->protocols_allowed( [ 'http', 'https' ] );    # allow only http and https
$tmp_ua->agent(
"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727;.NET CLR 3.0.04506.30; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)"
) ;
$base='https://www.biostars.org';
open FH_IN,"index.txt" or die "cannot open index.txt: $!";
while (<FH_IN>) {
chomp;
@F=split;    # $F[0] = listing URL, $F[1] = tag name, $F[2] = post count
open FH_OUT,">index-$F[1].txt";
$total_pages=int($F[2]/40)+1;    # 40 posts per listing page
foreach (1..$total_pages){
my $url = URI->new("$F[0]/?");
my($sort,$page) = ("update",$_);
$url->query_form(
'page' => $page,
'sort' => $sort,
);
&get_each_index($url,'FH_OUT');
print $url."\n";
}
close FH_OUT;
}
close FH_IN;
sub get_each_index{
my ($url,$handle)=@_;
$response = $tmp_ua->get($url);
$html=$response->content;
my $tree = HTML::TreeBuilder->new; # empty tree
$tree->parse($html) or print "error : parse html ";
my @list_title=$tree->find_by_attribute('class',"post-title");
foreach (@list_title) {
my $title = $_->as_text();
my $ref = $_->find_by_tag_name('a')->attr('href');
print $handle "$base$ref,$title\n";    # $ref is a relative link, so prepend $base
}
$tree->delete;    # free the parse tree before the next page
}
[/perl]
That is all it takes to scrape the post lists, for example:
https://www.biostars.org/t/rna-seq rna 1573
https://www.biostars.org/t/R R 1309
https://www.biostars.org/t/snp snp 1268
and so on.
The resulting post files, together with all of the code, are shared in my group; welcome to join QQ group 201161227, 生信菜鸟团 (the bioinformatics newbie group)!