The crawler's request is not returning the data correctly.

Source: 18-7 Start retrieving and analyzing the PDF download links

慕雪5297078

2023-10-01

Solved — it turned out I had the wrong API endpoint address.

Using Python 3.11:

import scrapy


class NberSpider(scrapy.Spider):
    name = "nber"
    allowed_domains = ["nber.org"]
    start_urls = [
        # "https://www.nber.org/",
        'https://www.nber.org/search?page=1&perPage=50&q=monetray%20policy'
    ]

    def parse(self, response):
        data = response.json()
        print(data)

After running the code:

2023-10-01 00:22:47 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: NBERSpider)
2023-10-01 00:22:47 [scrapy.utils.log] INFO: Versions: lxml 4.9.2.0, libxml2 2.10.3, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.11.4 | packaged by Anaconda, Inc. | (main, Jul  5 2023, 13:38:37) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 1.1.1u  30 May 2023), cryptography 41.0.2, Platform Windows-10-10.0.19041-SP0
2023-10-01 00:22:47 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'NBERSpider',
 'FEED_EXPORT_ENCODING': 'utf-8',
 'NEWSPIDER_MODULE': 'NBERSpider.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['NBERSpider.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2023-10-01 00:22:47 [asyncio] DEBUG: Using selector: SelectSelector
2023-10-01 00:22:47 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2023-10-01 00:22:47 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2023-10-01 00:22:47 [scrapy.extensions.telnet] INFO: Telnet Password: 606d9780ccbe5a58
2023-10-01 00:22:47 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2023-10-01 00:22:47 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2023-10-01 00:22:47 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2023-10-01 00:22:47 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2023-10-01 00:22:47 [scrapy.core.engine] INFO: Spider opened
2023-10-01 00:22:47 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2023-10-01 00:22:47 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2023-10-01 00:22:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nber.org/robots.txt> (referer: None)
2023-10-01 00:23:08 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.nber.org/search?page=1&perPage=50&q=monetray%20policy> (referer: None)
2023-10-01 00:23:08 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.nber.org/search?page=1&perPage=50&q=monetray%20policy> (referer: None)
Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\Lib\site-packages\twisted\internet\defer.py", line 892, in _runCallbacks
    current.result = callback(  # type: ignore[misc]
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\scrapy\spiders\__init__.py", line 73, in _parse
    return self.parse(response, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\study\py\NBERSpider\NBERSpider\spiders\nber.py", line 13, in parse
    data = response.json()
           ^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\scrapy\http\response\text.py", line 82, in json
    self._cached_decoded_json = json.loads(self.text)
                                ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
2023-10-01 00:23:08 [scrapy.core.engine] INFO: Closing spider (finished)
2023-10-01 00:23:08 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 498,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 28999,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'elapsed_time_seconds': 20.938983,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2023, 9, 30, 16, 23, 8, 644405),
 'httpcompression/response_bytes': 83035,
 'httpcompression/response_count': 1,
 'log_count/DEBUG': 5,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/JSONDecodeError': 1,
 'start_time': datetime.datetime(2023, 9, 30, 16, 22, 47, 705422)}
2023-10-01 00:23:08 [scrapy.core.engine] INFO: Spider closed (finished)

I'm using Anaconda; I also switched to a Python 3.9 environment and got exactly the same error, so the data is never retrieved correctly. Teacher, what is the problem here and how can I fix it?


1 Answer

小布_老师

2023-10-07

json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)


The specific error is a JSON parsing problem: the response that came back could not be parsed because it is not valid JSON.


Check the request data and the request URL; when they are correct, this error does not occur.
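
For example, one minimal way to see what actually came back before calling response.json() (a sketch against the spider above, not part of the course code) is to log the Content-Type header and the start of the body, and to catch the decode error:

import json

import scrapy


class NberSpider(scrapy.Spider):
    name = "nber"
    allowed_domains = ["nber.org"]
    start_urls = [
        'https://www.nber.org/search?page=1&perPage=50&q=monetray%20policy'
    ]

    def parse(self, response):
        # Inspect what the server actually returned before assuming it is JSON.
        content_type = response.headers.get("Content-Type", b"").decode()
        self.logger.info("Content-Type: %s", content_type)
        self.logger.info("Body starts with: %s", response.text[:200])

        try:
            data = response.json()
        except json.JSONDecodeError:
            # In this case the body starts with an HTML document: the URL is
            # the search page itself, not the JSON API it calls in the background.
            self.logger.error("Response is not JSON; check the request URL")
            return
        yield data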

慕雪5297078
Thank you, teacher — solved; I had the API endpoint address wrong.
2023-10-09
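
For anyone hitting the same error: the fix described above is to point the spider at the JSON endpoint that the search page requests in the background, rather than at the HTML page itself. A minimal sketch is below; the endpoint path is a placeholder that you would confirm yourself in the Network panel of the browser's developer tools.

import scrapy


class NberApiSpider(scrapy.Spider):
    name = "nber_api"
    allowed_domains = ["nber.org"]
    start_urls = [
        # Placeholder: replace <search-api-path> with the JSON endpoint URL
        # observed in the browser's DevTools Network panel for the search page.
        'https://www.nber.org/<search-api-path>?page=1&perPage=50&q=monetray%20policy'
    ]

    def parse(self, response):
        # Once the URL really returns JSON, response.json() parses it directly.
        data = response.json()
        # The key holding the result list depends on the actual payload;
        # "results" here is an assumption used only to illustrate the loop.
        for item in data.get("results", []):
            yield item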
