After logging in successfully, the GET request to the site homepage keeps getting redirected, and the redirect points back at the homepage itself; I have already applied every anti-blocking measure I know

Source: 8-1 The back-and-forth between crawlers and anti-crawling, and the strategies involved

vicety

2017-09-13

http://szimg.mukewang.com/59b823220001db0a08190106.jpg

When a login fails, the GET URL the POST request is redirected to is https://accounts.pixiv.net/login?lang=zh&source=pc&view_type=page&ref=wwwtop_accounts_index rather than the site homepage www.pixiv.net, so I can be fairly sure this is not an account/password problem.
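In code, that check could be made explicit in the callback that receives the login POST's response; this is just a sketch that relies only on the URL pattern described above:

    def after_login(self, response):
        # Sketch: a failed login is bounced back to accounts.pixiv.net/login,
        # a successful one lands on (or is redirected to) https://www.pixiv.net/.
        if "accounts.pixiv.net/login" in response.url:
            self.logger.error("login appears to have failed, got %s", response.url)
            return
        yield scrapy.Request("https://www.pixiv.net/", dont_filter=True,
                             headers=self.header, callback=self.parse)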

The counter-measures I have taken against the site's anti-crawling are:

Added headers (at one point or another I have tried including everything from the browser's request headers) and set DOWNLOAD_DELAY = 1.0 in settings.py, but it still does not work.
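For completeness, both of those measures can also be attached to the spider itself through custom_settings instead of the project-wide settings.py; a minimal sketch using the same values:

class PixivSpider(scrapy.Spider):
    name = "pixiv"
    # Per-spider overrides; these take precedence over settings.py for this spider only.
    custom_settings = {
        "DOWNLOAD_DELAY": 1.0,
        "USER_AGENT": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36",
        "DEFAULT_REQUEST_HEADERS": {
            "Accept-Language": "zh-CN,zh;q=0.8",
        },
    }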

Are there any other measures I could try? It would be even better if the instructor could help me debug the program.

The code is as follows.

pixiv.py

import scrapy
import requests
import re
import time
import shutil
from scrapy.http import Request
import json
try:
    import urlparse as parse
except:
    from urllib import parse

try:
    import cookielib
except:
    import http.cookiejar as cookielib
from scrapy.loader import ItemLoader

session = requests.session()


class PixivSpider(scrapy.Spider):
    agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"
    header = {
        "Host": "accounts.pixiv.net",
        "Origin": "https://accounts.pixiv.net",
        "Referer": "https://accounts.pixiv.net/login?lang=zh&source=pc&view_type=page&ref=wwwtop_accounts_index",
        "User-Agent": agent,
        "Connection": "keep-alive",
        # "Accept": "application/json, text/javascript, */*; q=0.01",
        # "Accept-Encoding": "gzip, deflate, br",
        "X-Requested-With": "XMLHttpRequest",
 # "Cookie": "p_ab_id=3; p_ab_id_2=4; bookmark_tag_type=count; bookmark_tag_order=desc; device_token=480d576b6ad09b5a602e6f6fbb4b9593; module_orders_mypage=%5B%7B%22name%22%3A%22recommended_illusts%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22everyone_new_illusts%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22following_new_illusts%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22mypixiv_new_illusts%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22fanbox%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22featured_tags%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22contests%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22sensei_courses%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22spotlight%22%2C%22visible%22%3Atrue%7D%2C%7B%22name%22%3A%22booth_follow_items%22%2C%22visible%22%3Atrue%7D%5D; __utmt=1; PHPSESSID=afd7e67cc28e76bf61a7d72488a24e40; __utma=235335808.2088567130.1501534936.1505164346.1505226280.27; __utmb=235335808.11.10.1505226280; __utmc=235335808; __utmz=235335808.1505226280.27.6.utmcsr=accounts.pixiv.net|utmccn=(referral)|utmcmd=referral|utmcct=/login; __utmv=235335808.|2=login%20ever=yes=1^3=plan=normal=1^5=gender=male=1^6=user_id=17759808=1^9=p_ab_id=3=1^10=p_ab_id_2=4=1^11=lang=zh=1; login_bc=1; _ga=GA1.2.2088567130.1501534936; _gid=GA1.2.321153936.1505067374; _gat=1; _ga=GA1.3.2088567130.1501534936; _gid=GA1.3.321153936.1505067374; _gat_UA-76252338-4=1",
        # "Content-Length": "185",
        # "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        # "Accept-Language": "zh-CN,zh;q=0.8"
    }
    name = "pixiv"
    allowed_domains = ["www.pixiv.net", "accounts.pixiv.net"]
    start_urls = ["https://www.pixiv.net"]

    def parse(self, response):
        # Collect all links on the homepage (no further processing yet).
        all_urls = response.css("a::attr(href)").extract()
        all_urls = [parse.urljoin(response.url, url) for url in all_urls]
        pass

    def start_requests(self):
        # Start from the login page so the post_key token can be extracted first.
        login_url = "https://accounts.pixiv.net/login?lang=zh&source=pc&view_type=page&ref=wwwtop_accounts_index"
        return [scrapy.Request(login_url, headers=self.header, callback=self.login)]

    def login(self, response):
        # Pull the post_key token out of the login page source.
        match_post_key = re.match('.*"pixivAccount.postKey":"(.*?)"', response.text, re.DOTALL)
        try:
            post_key = match_post_key.group(1)
        except AttributeError:
            self.handle_error("unmatched re expression", "post-key not found")
            return
        post_data = {
            "pixiv_id": "335462631ch@gmail.com",
            "password": "test123",
            "captcha": "",
            "g_recaptcha_response": "",
            "post_key": post_key,
            "source": "pc",
            "ref": "wwwtop_accounts_index",
            "return_to": "https://www.pixiv.net/",
        }
        time.sleep(3)
        return scrapy.FormRequest(
            url="https://accounts.pixiv.net/login?lang=zh&source=pc&view_type=page&ref=wwwtop_accounts_index",
            headers=self.header, dont_filter=True, formdata=post_data, callback=self.after_login)
        # yield scrapy.Request("https://www.pixiv.net/", headers=self.header)

    def after_login(self, response):
        yield scrapy.Request("https://www.pixiv.net/", dont_filter=True, headers=self.header, callback=self.parse)

    def handle_error(self, type, message):
        print("ERROR: %s %s" % (type, message))

settings.py

# -*- coding: utf-8 -*-
import os

# Scrapy settings for ArticleV0 project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'ArticleV0'

SPIDER_MODULES = ['ArticleV0.spiders']
NEWSPIDER_MODULE = 'ArticleV0.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'ArticleV0 (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1.0
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = True

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'ArticleV0.middlewares.Articlev0SpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'ArticleV0.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    #'ArticleV0.pipelines.JsonExporterPipeline': 2,
    #'ArticleV0.pipelines.JsonWithEncodingPipeline': 2,
    #'scrapy.pipelines.images.ImagesPipeline': 1,  # default image pipeline
    #'ArticleV0.pipelines.ArticleImagePipeline': 1,
    #'ArticleV0.pipelines.MysqlPipeline': 1,
    'ArticleV0.pipelines.MysqlTwistedPipeline': 1,


}
IMAGES_URLS_FIELD = "front_image_url"
project_dir = os.path.abspath(os.path.dirname(__file__))
IMAGES_STORE = os.path.join(project_dir,'images')
IMAGES_MIN_HEIGHT = 50
IMAGES_MIN_WIDTH = 50

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


MYSQL_HOST = "localhost"
MYSQL_DBNAME = "article_spider"  # database name
MYSQL_USER = "root"
MYSQL_PASSWORD = "*****"

SQL_DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
SQL_DATE_FORMAT = "%Y-%m-%d"

If the code formatting got mangled during copy-paste, it can also be viewed at https://github.com/vicety/scrapy_learning/tree/master/ArticleV0/ArticleV0

Thank you, teacher!


2 Answers

vicety (original poster)

2017-09-14

...Solved it. It seems this site simply falls into an endless 302 loop whenever the homepage is requested...
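In case anyone else hits the same loop: one way to see what that 302 actually carries is to ask Scrapy not to follow it for a single request, via the standard dont_redirect / handle_httpstatus_list request meta keys, and then log the Location and Set-Cookie headers. A debugging sketch added to the spider class, not part of the final crawler:

    def start_requests(self):
        # Debugging sketch: fetch the homepage once without following the redirect.
        yield scrapy.Request(
            "https://www.pixiv.net/",
            meta={"dont_redirect": True, "handle_httpstatus_list": [302]},
            headers={"User-Agent": self.agent},
            dont_filter=True,
            callback=self.inspect_redirect,
        )

    def inspect_redirect(self, response):
        # Location shows where the 302 points; Set-Cookie shows whether the
        # server is trying to (re)issue session cookies on every hit.
        self.logger.info("status=%s", response.status)
        self.logger.info("Location=%s", response.headers.get("Location"))
        self.logger.info("Set-Cookie=%s", response.headers.getlist("Set-Cookie"))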


vicety (original poster)

2017-09-13

The error screenshot is not clear enough, so here is a text version:

2017-09-13 02:09:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

2017-09-13 02:09:30 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024

2017-09-13 02:09:30 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://accounts.pixiv.net/login?lang=zh&source=pc&view_type=page&ref=wwwtop_accounts_index> (referer: https://accounts.pixiv.net/login?lang=zh&source=pc&view_type=page&ref=wwwtop_accounts_index)

2017-09-13 02:09:34 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.pixiv.net/> from <POST https://accounts.pixiv.net/login?lang=zh&source=pc&view_type=page&ref=wwwtop_accounts_index>

2017-09-13 02:09:35 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.pixiv.net/> from <GET https://www.pixiv.net/>

2017-09-13 02:09:36 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.pixiv.net/> from <GET https://www.pixiv.net/>

2017-09-13 02:09:38 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.pixiv.net/> from <GET https://www.pixiv.net/>

2017-09-13 02:09:39 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.pixiv.net/> from <GET https://www.pixiv.net/>

2017-09-13 02:09:40 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.pixiv.net/> from <GET https://www.pixiv.net/>


