Logstash errors out when run in the background with nohup, but runs fine in the foreground

Source: 9-13 [Stage Summary] Using ES tooling to upgrade data ingestion - index building

superLiuLiuLiu

2020-05-03

Logstash runs correctly in the foreground with this command: ./logstash -f mysql/jdbc.conf
But running it in the background fails: nohup ./logstash -f mysql/jdbc.conf &
Error log:

Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/local/logstash-7.3.0/logs which is now configured via log4j2.properties
[2020-05-03T09:30:41,733][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-05-03T09:30:41,744][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2020-05-03T09:30:42,979][INFO ][org.reflections.Reflections] Reflections took 31 ms to scan 1 urls, producing 19 keys and 39 values 
[2020-05-03T09:30:43,358][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"shop", id=>"6203e06de1207318a8f20cb7b2c6051323003e66c3781935ace562e425a5896f", document_id=>"%{id}", hosts=>[//116.63.153.205:9200], document_type=>"_doc", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_13e25edd-b415-4cc3-bf35-5ec2fb9db413", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2020-05-03T09:30:43,816][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://116.63.153.205:9200/]}}
[2020-05-03T09:30:43,980][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://116.63.153.205:9200/"}
[2020-05-03T09:30:44,019][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2020-05-03T09:30:44,024][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-05-03T09:30:44,053][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//116.63.153.205:9200"]}
[2020-05-03T09:30:44,134][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2020-05-03T09:30:44,145][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2020-05-03T09:30:44,160][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x4d374607 run>"}
[2020-05-03T09:30:44,269][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-05-03T09:30:44,459][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2020-05-03T09:30:44,532][ERROR][logstash.javapipeline    ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Stdin id=>"e62af3e767f5d364edc2cc9d0491e9e819e397a9c444d5e1313b57dea52c9715", enable_metric=>true, codec=><LogStash::Codecs::Line id=>"line_49773f46-0a2c-457c-b490-8582ea39015d", enable_metric=>true, charset=>"UTF-8", delimiter=>"\n">>
  Error: Bad file descriptor - Bad file descriptor
  Exception: Errno::EBADF
  Stack: com/jrubystdinchannel/StdinChannelLibrary.java:101:in `read'
/usr/local/logstash-7.3.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-stdin-3.2.6/lib/logstash/inputs/stdin.rb:77:in `channel_read'
/usr/local/logstash-7.3.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-stdin-3.2.6/lib/logstash/inputs/stdin.rb:37:in `run'
/usr/local/logstash-7.3.0/logstash-core/lib/logstash/java_pipeline.rb:309:in `inputworker'
/usr/local/logstash-7.3.0/logstash-core/lib/logstash/java_pipeline.rb:302:in `block in start_input'
[2020-05-03T09:30:44,585][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-05-03T09:30:45,032][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
/usr/local/logstash-7.3.0/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated
[2020-05-03T09:30:45,582][ERROR][logstash.javapipeline    ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Stdin id=>"e62af3e767f5d364edc2cc9d0491e9e819e397a9c444d5e1313b57dea52c9715", enable_metric=>true, codec=><LogStash::Codecs::Line id=>"line_49773f46-0a2c-457c-b490-8582ea39015d", enable_metric=>true, charset=>"UTF-8", delimiter=>"\n">>
  Error: Bad file descriptor - Bad file descriptor
  Exception: Errno::EBADF
  Stack: com/jrubystdinchannel/StdinChannelLibrary.java:101:in `read'
/usr/local/logstash-7.3.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-stdin-3.2.6/lib/logstash/inputs/stdin.rb:77:in `channel_read'
/usr/local/logstash-7.3.0/vendor/bundle/jruby/2.5.0/gems/logstash-input-stdin-3.2.6/lib/logstash/inputs/stdin.rb:37:in `run'
/usr/local/logstash-7.3.0/logstash-core/lib/logstash/java_pipeline.rb:309:in `inputworker'
/usr/local/logstash-7.3.0/logstash-core/lib/logstash/java_pipeline.rb:302:in `block in start_input'

1 Answer

superLiuLiuLiu

(Original poster)

2020-05-03

Found the answer online:

When Logstash is started directly with bin/logstash, one of the plugins reads data from stdin (the stack trace above shows it is LogStash::Inputs::Stdin). Once nohup is used, stdin is no longer attached, so the read naturally fails with the Bad file descriptor error.
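One way to confirm this diagnosis (a sketch, assuming the pipeline path from the question) is to grep the pipeline config for a stdin input block:

```shell
# Look for an input stdin { ... } declaration in the pipeline config.
# mysql/jdbc.conf is the path used in the question; adjust as needed.
grep -n "stdin" mysql/jdbc.conf 2>/dev/null || echo "no stdin input found"
```

If the stdin input was left in the config only for interactive testing, simply deleting that block also makes the pipeline safe to background.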
There are two fixes: install Logstash as a service, or add a stdin redirect to the startup command:

nohup bin/logstash -f config/D_HINFO_PDT.conf 0</dev/null &
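The effect of the 0</dev/null redirect can be sketched with a small stand-in experiment (using cat in place of the stdin input plugin; this is an illustration, not Logstash itself): a reader whose fd 0 is closed fails with Bad file descriptor, while one redirected from /dev/null just sees a clean end-of-file.

```shell
# Stand-in demo: `cat` plays the role of the stdin input plugin.
# 1) fd 0 closed -> the read fails with EBADF, like the error in the log
cat <&- 2>/dev/null || echo "closed stdin: read failed (EBADF)"
# 2) fd 0 redirected from /dev/null -> the read returns EOF cleanly
cat </dev/null && echo "redirected stdin: clean EOF"
```

When applying the fix above, substitute your own pipeline path (e.g. mysql/jdbc.conf from the question) for the config/D_HINFO_PDT.conf used in the quoted answer.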
 


Course: ES7+Spark: Building a High-Relevance Search Service and a Personalized Recommendation System

(ElasticSearch for high-relevance search, Spark MLlib for personalized recommendation)