CrawlerRunner cannot write to elasticsearch
Source: 15-3 Implementing elasticsearch search suggestions with django - 1
慕妹5495383
2017-07-16
In main.py, when I start the spiders with

execute(['scrapy', 'crawl', 'mj_spider'])
execute(['scrapy', 'crawl', 'gd_spider'])

they write to elasticsearch normally.
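For reference, the execute-based main.py looks roughly like this (a sketch; the sys.path setup is the usual pattern for running the script from the project root, not copied from my actual file):

import os
import sys

from scrapy.cmdline import execute

# make the scrapy project package importable when main.py is run directly
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

execute(['scrapy', 'crawl', 'mj_spider'])
# execute(['scrapy', 'crawl', 'gd_spider'])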
But when I switch to the approach below:
import scrapy
from scrapy.crawler import CrawlerProcess
from spiders.onem3spider import MJSpider
from spiders.gd_spider import GdSpiderSpider
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

configure_logging()
runner = CrawlerRunner()


@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MJSpider)
    yield runner.crawl(GdSpiderSpider)
    reactor.stop()


crawl()
reactor.run()  # the script will block here until the last crawl call is finished
nothing gets written to elasticsearch any more. What is the reason?
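One possibility I have not verified: CrawlerRunner() created without arguments runs with Scrapy's default settings instead of the project's settings.py, so the ITEM_PIPELINES entry for the elasticsearch pipeline is never enabled. A minimal sketch that passes the project settings in explicitly:

from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings

# get_project_settings() reads settings.py (it needs to find scrapy.cfg,
# so run the script from the project root), which restores ITEM_PIPELINES
# and with it the elasticsearch pipeline
runner = CrawlerRunner(get_project_settings())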
1 Answer
bobby
2017-07-17
Hi, what error message do you get when it fails to write to es? Have you tried debugging it?
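In case it helps with the debugging, one way to surface pipeline errors when running through CrawlerRunner (a sketch of Scrapy's standard logging setup, nothing specific to this project):

import logging
from scrapy.utils.log import configure_logging

# skip Scrapy's default root handler and install our own at DEBUG level,
# so tracebacks raised inside item pipelines (e.g. elasticsearch
# connection errors) show up in the console
configure_logging(install_root_handler=False)
logging.basicConfig(level=logging.DEBUG)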