Many readers find the topic of deploying an EFK stack on CentOS 7 hard to piece together, so this article pulls the steps into one place. The content is detailed and the steps are laid out clearly, so it should serve as a useful reference. Let's take a look at how to quickly deploy EFK services on CentOS 7.

EFK is a distributed log-service solution built from several components: Elasticsearch, Filebeat, and Kibana. In a real production environment you may also need Logstash for rule-based log parsing, and Kafka as a buffer that absorbs traffic spikes between the shippers and the rest of the pipeline (a sketch of the Kafka option follows below).
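For reference, in a Kafka-buffered setup Filebeat writes to a Kafka topic and Logstash consumes from that topic, instead of Filebeat shipping straight to Elasticsearch. The following is only a sketch: the broker addresses, the topic name, and the /tmp path used for illustration are assumptions, not values from this article.

# Sketch: Filebeat supports a single output, so in a Kafka-buffered setup the
# output.elasticsearch block in /etc/filebeat/filebeat.yml is replaced by a Kafka
# output like the one below. Broker addresses and topic name are placeholders;
# this only writes an example fragment to a scratch file.
cat > /tmp/filebeat-kafka-output.example << 'EOF'
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "app-logs"
  compression: gzip
  required_acks: 1
EOF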

Architecture

EFK uses a centralized log-management architecture built from the following components:

elasticsearch: an open-source distributed search engine that collects, analyzes, and stores data. Its main features are: distributed operation, near-zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful API, multiple data sources, and automatic search load balancing.

kibana: provides a friendly web UI for log analysis on top of Logstash, Beats, and Elasticsearch, helping you aggregate, analyze, and search important log data.

filebeat: a lightweight log shipper. Filebeat must be configured on each application server to collect logs and forward them to Elasticsearch. (A quick health check for all three components is sketched right after this list.)
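Once the three components are running, a quick way to confirm that each one is reachable is sketched below. The IP address reuses the example address that appears in the scripts later in this article; 9200 and 5601 are the default Elasticsearch and Kibana ports.

# Elasticsearch: cluster health should report green or yellow
curl -s http://172.16.20.143:9200/_cluster/health?pretty
# Kibana: the status endpoint returns HTTP 200 once the UI is ready
curl -s -o /dev/null -w "%{http_code}\n" http://172.16.20.143:5601/status
# Filebeat: check the service on each application server
systemctl status filebeat --no-pager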

One-click Kibana deployment script

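A minimal sketch of the Kibana installer, following the same pattern as the Elasticsearch and Filebeat functions below. It assumes the standard Elastic RPM download URL for Kibana 7.6.2 and that ${KIBANA_IP}, ${KIBANA_PORT}, ${ES_IP}, and ${ES_PORT} are defined elsewhere in the full script; the function name install_kibana7_el7 is only chosen to match the naming convention of the other functions.

function install_kibana7_el7() {
    echo ""
    echo -e "\033[33m**************************************** Installing Kibana 7.6.2 ****************************************\033[0m"
    # Download the package if it is not already present
    if [ ! -f /opt/kibana-7.6.2-x86_64.rpm ]; then
        wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.2-x86_64.rpm -O /opt/kibana-7.6.2-x86_64.rpm
    fi
    rpm -ivh /opt/kibana-7.6.2-x86_64.rpm

    # Back up and rewrite the Kibana configuration
    cp /etc/kibana/kibana.yml /etc/kibana/kibana.yml_bak &> /dev/null
    cat > /etc/kibana/kibana.yml << EOF
server.host: "${KIBANA_IP}"
server.port: ${KIBANA_PORT}
elasticsearch.hosts: ["http://${ES_IP}:${ES_PORT}"]
EOF

    # Enable and start the service
    systemctl daemon-reload && systemctl enable kibana.service
    systemctl restart kibana.service
    echo -e "\033[33m**************************************** Kibana 7.6.2 installation complete ****************************************\033[0m"
}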

One-click Elasticsearch deployment script

function install_es7_el7() {
    echo ""
    echo -e "\033[33m**************************************** Installing ElasticSearch 7.6.2 ****************************************\033[0m"
    # action "******** Initializing the JAVA environment ********" /bin/true
    # install_jdk

    # Download the packages
    if [ -f /opt/elasticsearch-7.6.2-x86_64.rpm ] && [ -f /opt/elasticsearch-analysis-ik-7.6.2.zip ]; then
        echo "***** ElasticSearch 7.6.2 packages already exist, skipping download *****"
    else
        ping -c 4 artifacts.elastic.co > /dev/null 2>&1
        if [ $? -eq 0 ]; then
            wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-x86_64.rpm -O /opt/elasticsearch-7.6.2-x86_64.rpm
            wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.6.2/elasticsearch-analysis-ik-7.6.2.zip -O /opt/elasticsearch-analysis-ik-7.6.2.zip
        else
            echo "please download the ES7 packages manually!"
            exit 1
        fi
    fi

    # Install ES 7.6
    action "******** Installing the ElasticSearch 7.6.2 service ********" /bin/true
    chmod u+x /opt/elasticsearch-7.6.2-x86_64.rpm && rpm -ivh /opt/elasticsearch-7.6.2-x86_64.rpm

    # Create directories and set permissions
    mkdir -p $ES_HOME/data && mkdir -p $ES_HOME/log
    chown -R elasticsearch:elasticsearch $ES_HOME && chmod -R 755 $ES_HOME

    # Modify the ES configuration file
    cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml_bak &> /dev/null
    cat > /etc/elasticsearch/elasticsearch.yml << EOF
# Name of this node
node.name: es_node
# List of master-eligible nodes used to bootstrap the cluster
cluster.initial_master_nodes: ["es_node"]
path.data: ${ES_HOME}/data
path.logs: ${ES_HOME}/log
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# Allow access from other machines
network.host: 0.0.0.0
http.port: ${ES_PORT}
discovery.zen.ping.unicast.hosts: ["${ES_IP}:${ES_PORT}"]
EOF

    # Install the ik-analyzer plugin.
    # By default ES parses text with its built-in standard analyzer, which splits Chinese text
    # into individual characters and defeats the purpose of tokenization, so install ik-analyzer.
    action "******** Installing the ik-analyzer plugin ********" /bin/true
    mkdir -p /usr/share/elasticsearch/plugins/ik
    unzip /opt/elasticsearch-analysis-ik-7.6.2.zip -d /usr/share/elasticsearch/plugins/ik/ &> /dev/null
    chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins/ && chmod -R 755 /usr/share/elasticsearch/plugins/
    sleep 2
    # Add default-analyzer settings to the index template in the filebeat config so that newly
    # created indices pick them up:
    # setup.template.settings:
    #   index.analysis.analyzer.default.type: "ik_max_word"
    #   index.analysis.analyzer.default_search.type: "ik_max_word"
    # setup.template.overwrite: true
    # Run the following commands in a Linux terminal to set the default analyzer on all existing indices:
    # curl -XPOST "172.16.20.143:9200/_all/_close"
    # curl -XPUT -H 'Content-Type: application/json' 'http://172.16.20.143:9200/_all/_settings?preserve_existing=true' -d '{
    #   "index.analysis.analyzer.default.type": "ik_max_word",
    #   "index.analysis.analyzer.default_search.type": "ik_max_word"
    # }'
    # curl -XPOST "172.16.20.143:9200/_all/_open"

    # Start ES and initialize data
    action "******** Starting ES and initializing data ********" /bin/true
    systemctl daemon-reload && systemctl enable elasticsearch.service
    systemctl restart elasticsearch.service
    es_version=`curl -XGET ${ES_IP}:${ES_PORT}`
    echo -e "\033[33m**************************************** ElasticSearch 7.6.2 installation complete ****************************************\033[0m"
    cat > /tmp/es7.log << EOF
ES server IP: ${ES_IP}
ES server port: ${ES_PORT}
ES data directory: ${ES_HOME}/data
ES log directory: ${ES_HOME}/log
ES details: ${es_version}
EOF
    cat /tmp/es7.log
    echo -e "\e[1;31mThe information above disappears in 10 seconds; it is saved in /tmp/es7.log\e[0m"
    echo -e "\033[33m************************************************************************************************************************\033[0m"
    echo ""
    sleep 10
}
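install_es7_el7 relies on the `action` helper and on ES_HOME, ES_IP, and ES_PORT being defined elsewhere in the full script. A minimal sketch of how it might be driven is shown below; the values are assumptions, not part of the original script.

#!/bin/bash
. /etc/init.d/functions        # CentOS helper that provides the `action` status lines
ES_HOME=/data/elasticsearch    # base directory for the data/ and log/ subdirectories
ES_IP=172.16.20.143            # address used to reach Elasticsearch
ES_PORT=9200                   # Elasticsearch HTTP port

install_es7_el7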

One-click Filebeat deployment script

function install_filebeat7_el7() {
    echo ""
    echo -e "\033[33m**************************************** Installing Filebeat 7.6.2 ****************************************\033[0m"

    # Download the package
    if [ -f /opt/filebeat-7.6.2-x86_64.rpm ]; then
        echo "***** Filebeat 7.6.2 package already exists, skipping download *****"
    else
        ping -c 4 artifacts.elastic.co > /dev/null 2>&1
        if [ $? -eq 0 ]; then
            wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.2-x86_64.rpm -O /opt/filebeat-7.6.2-x86_64.rpm
        else
            echo "please download the Filebeat 7.6 package manually!"
            exit 1
        fi
    fi

    # Install filebeat 7.6
    action "******** Installing the filebeat 7.6.2 service ********" /bin/true
    chmod u+x /opt/filebeat-7.6.2-x86_64.rpm && rpm -ivh /opt/filebeat-7.6.2-x86_64.rpm

    # Modify the filebeat configuration file
    cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml_bak
    cat > /etc/filebeat/filebeat.yml << EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /lcp_logs/*.log
filebeat.config.modules:
  path: /etc/filebeat/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  # number_of_shards is the number of primary shards; the default is 5, 3 is also common
  index.number_of_shards: 3
  index.analysis.analyzer.default.type: "ik_max_word"
  index.analysis.analyzer.default_search.type: "ik_max_word"
setup.template.overwrite: true
setup.kibana:
  host: "${KIBANA_IP}:${KIBANA_PORT}"
output.elasticsearch:
  hosts: ["${ES_IP}:${ES_PORT}"]
  ilm.enabled: true
  ilm.rollover_alias: "fsl_uat.prod1"
  ilm.pattern: "{now/d}-000001"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
EOF

    # Start filebeat and initialize data
    action "******** Starting filebeat and initializing data ********" /bin/true
    systemctl daemon-reload && systemctl enable filebeat.service
    systemctl restart filebeat.service
    # nohup ./filebeat -e -c filebeat.yml > /dev/null 2>&1 &
    echo -e "\033[33m**************************************** Filebeat 7.6.2 installation complete ****************************************\033[0m"
    cat > /tmp/filebeat7.log << EOF
filebeat kibana config: ${KIBANA_IP}:${KIBANA_PORT}
filebeat elasticsearch config: ${ES_IP}:${ES_PORT}
EOF
    cat /tmp/filebeat7.log
    echo -e "\e[1;31mThe information above disappears in 10 seconds; it is saved in /tmp/filebeat7.log\e[0m"
    echo -e "\033[33m************************************************************************************************************************\033[0m"
    echo ""
    sleep 10
}
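Similarly, install_filebeat7_el7 expects KIBANA_IP, KIBANA_PORT, ES_IP, and ES_PORT to be set. Below is a sketch of an invocation plus a check that Filebeat has started creating indices in Elasticsearch; all values are assumptions, and the index name depends on the ILM settings in filebeat.yml.

KIBANA_IP=172.16.20.143
KIBANA_PORT=5601
ES_IP=172.16.20.143
ES_PORT=9200

install_filebeat7_el7

# List indices and look for the filebeat / fsl_uat.prod1 index created from the template
curl -s "http://${ES_IP}:${ES_PORT}/_cat/indices?v"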

That is all for quickly deploying EFK services on CentOS 7. Hopefully this gives you a working picture of the setup; for more related content, follow the 亿速云 industry news channel.