Scrapy throws a NoneType error when using Privoxy as a proxy for Tor - python

I'm running Ubuntu 14.04 LTS.

I tried Polipo first, but even after adding myself as an allowClient it kept refusing connections from Firefox. So I installed Privoxy instead, and verified it works with Firefox by visiting the Tor check site, which congratulated me: the browser was configured to use Tor. That confirmed I should be able to crawl Tor sites.

However, when I run it through Scrapy, I get an error that nobody else seems to have hit:

2016-07-14 02:43:34 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'myProject.middlewares.RandomUserAgentMiddleware',
 'myProject.middlewares.ProxyMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-07-14 02:43:34 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-07-14 02:43:34 [scrapy] INFO: Enabled item pipelines:
['myProject.pipelines.MysqlPipeline']
2016-07-14 02:43:34 [scrapy] INFO: Spider opened
2016-07-14 02:43:34 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-07-14 02:43:34 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-07-14 02:43:34 [Tor] DEBUG: User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.3 Safari/534.53.10 <GET http://thehiddenwiki.org>
2016-07-14 02:43:34 [scrapy] ERROR: Error downloading <GET http://thehiddenwiki.org>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/usr/local/lib/python2.7/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 45, in mustbe_deferred
    result = f(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/__init__.py", line 65, in download_request
    return handler.download_request(request, spider)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 60, in download_request
    return agent.download_request(request)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 259, in download_request
    agent = self._get_agent(request, timeout)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 239, in _get_agent
    _, _, proxyHost, proxyPort, proxyParams = _parse(proxy)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/webclient.py", line 37, in _parse
    return _parsed_url_args(parsed)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/webclient.py", line 20, in _parsed_url_args
    host = b(parsed.hostname)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/webclient.py", line 17, in <lambda>
    b = lambda s: to_bytes(s, encoding='ascii')
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/python.py", line 117, in to_bytes
    'object, got %s' % type(text).__name__)
TypeError: to_bytes must receive a unicode, str or bytes object, got NoneType

I searched for this to_bytes error, but it only led me back to Scrapy's source code.

I know the code works without a proxy, because it scrapes my localhost site and other websites just fine; that doesn't help with Tor, though, since a proxy is required to reach onion sites.

What exactly is going on here?

Middleware

import random

from scrapy import log
from scrapy.conf import settings


class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        ua = random.choice(settings.get('USER_AGENT_LIST'))
        if ua:
            request.headers.setdefault('User-Agent', ua)
            # this is just to check which user agent is being used for the request
            spider.log(
                u'User-Agent: {} {}'.format(request.headers.get('User-Agent'), request),
                level=log.DEBUG
            )


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = settings.get('HTTP_PROXY')

settings.py

USER_AGENT_LIST = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.36 Safari/535.7',
    'Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:16.0) Gecko/16.0 Firefox/16.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.3 Safari/534.53.10'
]

DOWNLOADER_MIDDLEWARES = {
     'myProject.middlewares.RandomUserAgentMiddleware': 400,
     'myProject.middlewares.ProxyMiddleware': 410,
     #'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None
     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None
    # Disable compression middleware, so the actual HTML pages are cached
}

HTTP_PROXY = 'localhost:8118'

Answer

Internally, Scrapy uses urllib(2)'s _parse_proxy to detect proxy settings. From the urllib docs:

The urlopen() function works transparently with proxies which do not require authentication. In a Unix or Windows environment, set the http_proxy or ftp_proxy environment variables to a URL that identifies the proxy server before starting the Python interpreter.

% http_proxy="http://www.someproxy.com:3128"
% export http_proxy
% python
...

Scrapy expects the same syntax when you use the proxy meta key, i.e. the value must include the scheme, e.g. 'http://localhost:8118'.
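Applied to the settings in the question, the fix is a one-line change (assuming Privoxy is listening on its default port 8118):

```python
# settings.py -- the proxy URL must carry an explicit scheme;
# a bare 'localhost:8118' parses with hostname=None, which is
# what to_bytes() then chokes on in the traceback above.
HTTP_PROXY = 'http://localhost:8118'
```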

This is in the docs, although somewhat buried:

You can also set the meta key proxy per-request, to a value like http://some_proxy_server:port.
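The traceback can be reproduced in isolation with urlparse, which is essentially what Scrapy's _parse helper relies on (shown here with Python 3's urllib.parse; on the Python 2.7 used in the question it is urlparse.urlparse, with the same outcome):

```python
from urllib.parse import urlparse

# Without a scheme, 'localhost:8118' is not recognized as a netloc,
# so .hostname comes back as None -- the value Scrapy then feeds
# into to_bytes(), raising the NoneType error.
print(urlparse('localhost:8118').hostname)

# With the scheme included, host and port parse as expected.
parsed = urlparse('http://localhost:8118')
print(parsed.hostname, parsed.port)  # localhost 8118
```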
