Elasticsearch reports [Errno 61] Connection refused after configuring suggest

Source: 15-2 Completing ES search suggestions - saving the suggest field - 2

Hi_Mike

2020-05-28

I looked at solutions online and upgraded elasticsearch-dsl from 5.1 to 5.4, but that didn't fix it. I also tried https://stackoverflow.com/questions/25471828/elasticsearch-exceptions-connectionerror, but the problem persists.

The error message is as follows:
Traceback (most recent call last):
  File "/Users/yangjiayuan/Workspace/virtualenv/Spider/lib/python3.7/site-packages/elasticsearch/connection/http_urllib3.py", line 115, in perform_request
    response = self.pool.urlopen(method, url, body, retries=False, headers=self.headers, **kw)
  File "/Users/yangjiayuan/Workspace/virtualenv/Spider/lib/python3.7/site-packages/urllib3/connectionpool.py", line 725, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/Users/yangjiayuan/Workspace/virtualenv/Spider/lib/python3.7/site-packages/urllib3/util/retry.py", line 379, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/Users/yangjiayuan/Workspace/virtualenv/Spider/lib/python3.7/site-packages/urllib3/packages/six.py", line 735, in reraise
    raise value
  File "/Users/yangjiayuan/Workspace/virtualenv/Spider/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen
    chunked=chunked,
  File "/Users/yangjiayuan/Workspace/virtualenv/Spider/lib/python3.7/site-packages/urllib3/connectionpool.py", line 392, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1252, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1298, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1247, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 966, in send
    self.connect()
  File "/Users/yangjiayuan/Workspace/virtualenv/Spider/lib/python3.7/site-packages/urllib3/connection.py", line 187, in connect
    conn = self._new_conn()
  File "/Users/yangjiayuan/Workspace/virtualenv/Spider/lib/python3.7/site-packages/urllib3/connection.py", line 172, in _new_conn
    self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x1064b1f10>: Failed to establish a new connection: [Errno 61] Connection refused

During handling of the above exception, another exception occurred:

Before I configured suggest, crawling and saving worked fine.
The code is as follows:


def gen_suggests(index, info_tuple):
    # Build search-suggestion phrases in the format the suggest field expects.
    # used_words tracks tokens already emitted, so the same token is not
    # suggested again at a lower weight.
    used_words = set()
    suggests = []

    for text, weight in info_tuple:
        if text:
            # Call the ES analyze API to tokenize the string with ik_max_word
            words = es.indices.analyze(index=index,
                                       analyzer="ik_max_word",
                                       params={"filter": ["lowercase"]},
                                       body=text)
            # Keep tokens longer than one character
            analyzed_words = set([r["token"]
                                  for r in words["tokens"] if len(r["token"]) > 1])
            new_words = analyzed_words - used_words
            # Fix: record the tokens just used; without this the set stays
            # empty and the de-duplication never happens
            used_words.update(new_words)
        else:
            new_words = set()

        if new_words:
            suggests.append({"input": list(new_words),
                             "weight": weight})
    return suggests


    # save_to_es is a method of the scrapy Item subclass (class definition
    # omitted in this paste); it should not be nested inside gen_suggests.
    def save_to_es(self):
        muto = MutoType()
        muto.title = self['title']
        muto.url = self['url']
        if "img_url" in self:
            muto.img_url = self['img_url']
        if "img_path" in self:
            muto.img_path = self['img_path']
        muto.content = remove_tags(self['content'])
        muto.tag = self['tag']
        muto.date = self['date']
        muto.click_num = self['click_num']
        muto.meta.id = self['url_object_id']

        muto.suggest = gen_suggests(MutoType._doc_type.index,
                                    ((muto.title, 10), (muto.tag, 7)))
        ipdb.set_trace()  # debugging breakpoint left in the paste
        # Save the document to ES
        muto.save()

        return
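
The de-duplication and weighting logic in gen_suggests can be checked without a live ES cluster by swapping the analyze call for a plain tokenizer. The sketch below (gen_suggests_offline is a hypothetical name, and the lowercase whitespace split is only a stand-in for ik_max_word) mirrors the same structure, including updating used_words so a token only appears at its highest weight:

```python
def gen_suggests_offline(info_tuple, tokenize=lambda t: t.lower().split()):
    # Same dedup/weight logic as gen_suggests, with a pluggable tokenizer
    # instead of the es.indices.analyze call.
    used_words = set()
    suggests = []
    for text, weight in info_tuple:
        if text:
            # Keep tokens longer than one character
            words = set(w for w in tokenize(text) if len(w) > 1)
            # Drop tokens already emitted at a higher weight
            new_words = words - used_words
            used_words.update(new_words)
        else:
            new_words = set()
        if new_words:
            suggests.append({"input": list(new_words), "weight": weight})
    return suggests
```

For example, with (("Python Spider Tutorial", 10), ("python tips", 7)) the token "python" appears only in the weight-10 entry, and the weight-7 entry contains just "tips".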


1 Answer

bobby

2020-06-01

Judging by this error, Elasticsearch is probably not running. Check the status of the ES service.
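
[Errno 61] is macOS's ECONNREFUSED: nothing accepted the TCP connection, which is consistent with ES not running. Before digging into the client stack, a plain TCP probe can confirm this (es_reachable is a hypothetical helper; 127.0.0.1:9200 is ES's default HTTP address, adjust if your config differs):

```python
import socket

def es_reachable(host="127.0.0.1", port=9200, timeout=1.0):
    # Return True if something accepts TCP connections on host:port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, start Elasticsearch first and re-run the spider; `curl http://127.0.0.1:9200` gives the same answer from the shell.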

Hi_Mike
Thank you very much!
2020-08-15

Course: Scrapy打造搜索引擎 (Building a Search Engine with Scrapy), a Python distributed crawler course

Master Scrapy and build a search engine with Django + Elasticsearch
