Deploying an ELK Log Analysis Platform
How it works is shown in the figure below:
Deployment procedure:
1. Install the JDK environment for Logstash:
# tar zvxf jdk-8u73-linux-x64.tar.gz
# mv jdk-8u73-linux-x64 /usr/local/java
# vim /etc/profile
export JAVA_HOME=/usr/local/java
CLASSPATH=/usr/local/java/lib/dt.jar:/usr/local/java/lib/tools.jar
PATH=/usr/local/java/bin:$PATH
export PATH JAVA_HOME CLASSPATH
# source /etc/profile
# java -version
java version "1.8.0_73"
Java(TM) SE Runtime Environment (build 1.8.0_73-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.73-b02, mixed mode)
If the Java version number is printed, the JDK was installed successfully.
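A quick version check can be scripted as well. This is a minimal sketch: it parses a version string of the format shown above (here a hardcoded sample; on a real host you would capture the first line of `java -version 2>&1` instead) and confirms it is a 1.8 JDK.

```shell
#!/bin/sh
# Sketch: verify the JDK reports a 1.8.x version.
# On a real host: ver_line=$(java -version 2>&1 | head -n 1)
ver_line='java version "1.8.0_73"'

# Extract the quoted version number, e.g. 1.8.0_73
ver=$(echo "$ver_line" | sed 's/.*"\(.*\)".*/\1/')
major=${ver%%.*}                      # "1"
minor=${ver#*.}; minor=${minor%%.*}   # "8"

if [ "$major.$minor" = "1.8" ]; then
    echo "JDK $ver OK"
else
    echo "unexpected JDK version: $ver" >&2
    exit 1
fi
```

The same parsing works for any `java version "X.Y.Z_NN"` string, so it can be dropped into a provisioning script before installing Logstash.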
2. Install Logstash
Download Logstash and install it under /usr/local (choose your own install path):
# wget
# tar zvxf logstash-1.5.2.tar.gz -C /usr/local/
After installation, run:
# /usr/local/logstash-1.5.2/bin/logstash -e 'input { stdin { } } output { stdout { } }'
Logstash startup completed
hello ELK
2016-09-29T09:28:57.992Z web10.gz.com hello ELK
-e: pass the Logstash configuration on the command line; useful for quick tests;
-f: specify a Logstash configuration file; suitable for production;
Create a test file, logstash-simple.conf, in the Logstash install directory with the following content:
# vim logstash-simple.conf
input { stdin { } }
output { stdout { codec => rubydebug } }
# echo "`date` hello ELK"
Thu Sep 29 17:33:23 CST 2016 hello ELK
# /usr/local/logstash-1.5.2/bin/logstash agent -f logstash-simple.conf
Logstash startup completed
Thu Sep 29 17:33:23 CST 2016 hello ELK
{
       "message" => "Thu Sep 29 17:33:23 CST 2016 hello ELK",
      "@version" => "1",
    "@timestamp" => "2016-09-29T09:33:57.711Z",
          "host" => "web10.gz.com"
}
Install supervisor to manage Logstash:
# yum install -y supervisor --enablerepo=epel
# vim /etc/supervisord.conf
Add the following:
[program:elkpro_1]
environment=LS_HEAP_SIZE=5000m
directory=/usr/local/logstash-1.5.2    ; Logstash install directory
command=/usr/local/logstash-1.5.2/bin/logstash -f /usr/local/logstash-1.5.2/logstash-simple.conf -w 10 -l /var/log/logstash/logstash-simple.log    ; the command supervisor runs: -f names the config file Logstash runs, -l names where the Logstash log is stored
Stop/start supervisord:
# service supervisord stop
# service supervisord start
Enable it at boot:
# chkconfig supervisord on
Stop/start Logstash through supervisor:
# supervisorctl start elkpro_1
# supervisorctl stop elkpro_1
3. Install Elasticsearch
Download Elasticsearch and extract it to /usr/local/:
# wget
# tar zvxf elasticsearch-1.6.0.tar.gz -C /usr/local/
Start Elasticsearch:
# /usr/local/elasticsearch-1.6.0/bin/elasticsearch
Run Elasticsearch in the background (nohup writes its output to nohup.out by default):
# nohup /usr/local/elasticsearch-1.6.0/bin/elasticsearch &
# ps aux | grep logstash
root 21154 1.6 5.0 3451732 196856 pts/0 Sl+ 17:33 0:10 /usr/local/java/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xmx500m -Xss2048k -Djffi.boot.library.path=/usr/local/logstash-1.5.2/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Xbootclasspath/a:/usr/local/logstash-1.5.2/vendor/jruby/lib/jruby.jar -classpath :/usr/local/java/lib/dt.jar:/usr/local/java/lib/tools.jar -Djruby.home=/usr/local/logstash-1.5.2/vendor/jruby -Djruby.lib=/usr/local/logstash-1.5.2/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /usr/local/logstash-1.5.2/lib/bootstrap/environment.rb logstash/runner.rb agent -f logstash-simple.conf
Elasticsearch's official service wrapper script: https://codeload.github.com/elastic/elasticsearch-servicewrapper/zip/master
Upload it to the server, then:
# unzip elasticsearch-servicewrapper-master.zip
# mv elasticsearch-servicewrapper-master/service/ /usr/local/elasticsearch/bin/
# cd /usr/local/elasticsearch/bin/service
# ./elasticsearch install    (this automatically creates a service script under init.d)
# /etc/init.d/elasticsearch restart
# curl -XGET 'http://elasticsearch_IP:9200/_count?pretty' -d '    # the IP of the server where Elasticsearch is installed
> {
>     "query": {
>         "match_all": {}
>     }
> }
> '
Response:
{"count":710,"_shards":{"total":6,"successful":6,"failed":0}}
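The count query above can be wrapped in a small helper that validates the request body before sending it. A minimal sketch: `ES_URL` is a placeholder for your cluster address, and the actual curl call is left commented out because it needs a running cluster.

```shell
#!/bin/sh
# Sketch: build the match_all count query body used above and
# sanity-check it before sending. ES_URL is an assumed placeholder.
ES_URL="http://localhost:9200"

body='{
    "query": {
        "match_all": {}
    }
}'

# Cheap structural check: the body must mention match_all and
# have balanced-looking braces before we bother the cluster.
if echo "$body" | grep -q '"match_all"'; then
    echo "query body OK"
else
    echo "malformed query body" >&2
    exit 1
fi

# With a running cluster you would send it like this:
# curl -XGET "$ES_URL/_count?pretty" -d "$body"
```

Keeping the body in a variable like this makes it easy to reuse the same query against `_count` and `_search`.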
In the Logstash install directory, create a test file logstash-es-simple.conf and check whether its output is written to Elasticsearch.
# vim logstash-es-simple.conf
input { stdin { } }
output {
    elasticsearch { host => "localhost" }
    stdout { codec => rubydebug }
}
Run:
# /usr/local/logstash-1.5.2/bin/logstash agent -f logstash-es-simple.conf
... startup output ...
Logstash startup completed
hello ELK
{
       "message" => "hello ELK",
      "@version" => "1",
    "@timestamp" => "2016-09-29T09:52:21.426Z",
          "host" => "web10.gz.com"
}
Use curl to check whether Elasticsearch received the data:
# curl ''
{"took":1,"timed_out":false,"_shards":{"total":6,"successful":6,"failed":0},.....
You can now collect log data with Elasticsearch and Logstash.
4. Install an Elasticsearch plugin
Run the following in your Elasticsearch install directory:
# cd /usr/local/elasticsearch-1.6.0/
# ./bin/plugin -install lmenezes/elasticsearch-kopf
After installation, kopf appears under the plugins directory:
# ls plugins/
kopf
Open http://192.168.1.114:9200/_plugin/kopf in a browser to browse the data stored in Elasticsearch, as shown in the figure:
5. Install Kibana
Download Kibana and extract it to /usr/local/:
# wget
# tar zvxf kibana-4.1.1-linux-x64.tar.gz
Start Kibana:
# /usr/local/kibana-4.1.1-linux-x64/bin/kibana
Open Kibana at http://kibanaServerIP:5601. After logging in, configure an index; the defaults are fine. Kibana's data is pointed at Elasticsearch: use the default index name logstash-*, mark it as time-based, and click "Create".
When you see the following screen, the index has been created.
Click "Discover" to search and browse the data in Elasticsearch.
At this point, the ELK platform deployment is complete.
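The logstash-* pattern matches because Logstash's elasticsearch output writes one index per day by default, named logstash-YYYY.MM.DD, which is also why Kibana can treat the pattern as time-based. A small sketch of how today's index name is derived:

```shell
#!/bin/sh
# Sketch: derive today's default Logstash index name, as Logstash's
# elasticsearch output does (one index per day, UTC date).
index_name="logstash-$(date -u +%Y.%m.%d)"
echo "$index_name"
```

Because every daily index shares the logstash- prefix, a single logstash-* index pattern in Kibana covers the whole history.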