Web crawler (code snippets)
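
This snippet is a small Ruby site crawler in three parts: a task module (OrgSocGraph) that declares the CSV output schema and the seed jobs, base job classes that fetch and parse pages with open-uri and Nokogiri, and a main script that feeds the jobs to a JobExecutor and writes the collected data to CSV files.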

## Task class
require 'mysql'

module OrgSocGraph
  # Output schema: file name => list of column descriptors (each responds to #name).
  FIELDS = {
    "orgs.csv" => [
      Class.new(Object) do
        def name
          :description
        end
      end.new
    ]
  }

  START_JOBS = [
    Class.new(BaseJob) do
      # The parent job doesn't scrape anything itself; it only spawns child jobs.
      def url
        "http://www.example.com"
      end

      def get_children(doc)
        # Query MySQL for the URLs of the child jobs.
        dbh = Mysql.real_connect("hostname", "dbuser", "password", "database")
        # Limit to 10 for a test run.
        res = dbh.query('SELECT website_url FROM tbl_organizations WHERE website_url != "" LIMIT 10')
        children = []
        res.each_hash do |r|
          children << Class.new(BaseJobWithURL) do
            def execute(doc, data_store, fields)
              # Crawl the page for its meta description.
              desc = doc.css("meta[name='description']").first
              data_store.add_item("orgs.csv", [
                url,
                desc && desc['content']
              ])
            end
          end.new(r["website_url"])
        end
        dbh.close
        children
      end
    end.new
  ]
end
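
The anonymous classes above keep the whole task in one file but are hard to read; an equivalent named child job (shown only for illustration, it is not part of the original code) would be:

# Illustrative named equivalent of the anonymous child job built in get_children.
class OrgDescriptionJob < BaseJobWithURL
  def execute(doc, data_store, fields)
    desc = doc.css("meta[name='description']").first
    data_store.add_item("orgs.csv", [url, desc && desc['content']])
  end
end

get_children could then simply map each result row to OrgDescriptionJob.new(r["website_url"]).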

## Job class
class BaseJob
  def document
    doc = nil
    begin
      # Fetch and parse the page; url is supplied by the subclass.
      doc = Nokogiri::HTML(open(url))
    rescue => e
      puts "problem opening #{url}: #{e.message}"
    end
    doc
  end

  # Hooks overridden by concrete jobs.
  def execute(doc, data_store, fields)
  end

  def get_children(doc)
    []
  end
end

class BaseJobWithURL < BaseJob
  attr_accessor :url

  def initialize(url)
    @url = url
  end
end
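
main.rb below also depends on a JobExecutor (and its data store) loaded from lib/, which is not included in this snippet. A minimal sketch, assuming only the interface actually used here (add_jobs, run, data_store.add_item and data_store.get_items), might look like this:

# lib/job_executor.rb -- illustrative sketch; the real lib/ files are not part of this snippet.
class DataStore
  def initialize
    @items = Hash.new { |h, k| h[k] = [] }
  end

  # Append one output row for the given file.
  def add_item(file, row)
    @items[file] << row
  end

  def get_items(file)
    @items[file]
  end
end

class JobExecutor
  attr_reader :data_store

  def initialize(fields)
    @fields = fields
    @queue = []
    @data_store = DataStore.new
  end

  def add_jobs(jobs)
    @queue.concat(jobs)
  end

  # Breadth-first crawl: run each job, then enqueue whatever children it returns.
  def run
    until @queue.empty?
      job = @queue.shift
      doc = job.document
      next if doc.nil?
      job.execute(doc, @data_store, @fields)
      add_jobs(job.get_children(doc))
    end
  end
end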

## Main Ruby script (main.rb)
# For compatibility with Ruby 1.8.x, require rubygems explicitly.
require 'rubygems'
require 'open-uri'
# Ruby 1.8.x needs Nokogiri <= 1.5.x.
require 'nokogiri'
require 'csv'
require 'mechanize'
Dir[File.dirname(__FILE__) + '/lib/*.rb'].each { |file| require file }
Dir[File.dirname(__FILE__) + '/tasks/*.rb'].each { |file| require file }

# Each command-line argument names a task module, e.g. OrgSocGraph.
ARGV.each do |mod|
  jobs = eval("#{mod}::START_JOBS")
  fields = eval("#{mod}::FIELDS")

  je = JobExecutor.new(fields)
  je.add_jobs(jobs)
  je.run

  # Write one CSV per output file declared in the module's FIELDS.
  fields.each_pair do |file, columns|
    CSV.open("output/#{file}", "wb") do |csv|
      csv << ['source_url'] + columns.map { |c| c.name.to_s }
      je.data_store.get_items(file).each do |record|
        csv << record.map { |r| HTMLCleaning.clean(r.to_s, :convert_to_plain_text => true) }
      end
    end
  end
end
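
HTMLCleaning is also loaded from lib/ and is not shown above; a minimal stand-in, assuming clean only has to strip markup when :convert_to_plain_text is set, could be:

# lib/html_cleaning.rb -- stand-in sketch; the original module is not included in this snippet.
module HTMLCleaning
  def self.clean(str, options = {})
    return str unless options[:convert_to_plain_text]
    # Strip tags and collapse whitespace via Nokogiri's text extraction.
    Nokogiri::HTML.fragment(str).text.gsub(/\s+/, ' ').strip
  end
end

With lib/*.rb and tasks/*.rb laid out next to main.rb and an output/ directory created beforehand, ruby main.rb OrgSocGraph would run the ten test jobs and write output/orgs.csv with a source_url column and a description column.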
