How ELK writes logs' key-value data
Today I would like to share what I know about how ELK writes logs' key-value data. The content is detailed and the logic is laid out step by step; many readers may not yet be familiar with this topic, so I am sharing this article for reference. I hope you get something out of it. Let's dive in.
ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A newer addition is Filebeat, a lightweight log collection agent: it uses few resources and is well suited to collecting logs on each server and shipping them to Logstash.
Logstash is an open-source tool for collecting, parsing, and storing logs. Kibana 4 is a web interface for searching and viewing the logs that Logstash has indexed. Both tools are based on Elasticsearch.
● Logstash: the Logstash server component that processes incoming logs.
● Elasticsearch: stores all the logs.
● Kibana 4: the web interface for searching and visualizing logs, reverse-proxied through nginx.
● Logstash Forwarder: installed on each server that will send logs to logstash; it acts as the log forwarding agent and talks to the Logstash service over the lumberjack network protocol.
Note: logstash-forwarder is being replaced by Beats; watch for follow-up articles, which will move to logstash + elasticsearch + beats.
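The data flow described above (forwarder ships raw lines, Logstash structures them, Elasticsearch stores them, Kibana queries the store) can be sketched in a few lines of Python. Everything here is a hypothetical stand-in to illustrate the roles, not the real lumberjack protocol or any real API:

```python
def forwarder(raw_lines):
    # logstash-forwarder role: tail log files, tag each line with a type,
    # and ship the raw events over the network.
    return [{'type': 'nginx', 'message': line} for line in raw_lines]

def logstash_filter(event):
    # Logstash role: parse the raw message into structured, queryable fields.
    event['fields'] = dict(zip(('clientip', 'status'), event['message'].split()))
    return event

store = []  # Elasticsearch role: store all structured logs

def kibana_search(status):
    # Kibana role: search and visualize what Elasticsearch has indexed.
    return [e for e in store if e['fields']['status'] == status]

for event in forwarder(['203.0.113.5 200', '198.51.100.7 404']):
    store.append(logstash_filter(event))

print(len(kibana_search('404')))  # 1
```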
The ELK architecture is as follows:
Packages used:

elasticsearch-1.7.2.tar.gz
kibana-4.1.2-linux-x64.tar.gz
logstash-1.5.6-1.noarch.rpm
logstash-forwarder-0.4.0-1.x86_64.rpm

Standalone mode

# OS: CentOS release 6.5 (Final)

# Base and JDK
groupadd elk
useradd -g elk elk
passwd elk
yum install vim lsof man wget ntpdate vixie-cron -y
crontab -e
*/1 * * * * /usr/sbin/ntpdate time.windows.com > /dev/null 2>&1
service crond restart

# Disable SELinux and stop iptables:
sed -i "s#SELINUX=enforcing#SELINUX=disabled#" /etc/selinux/config
service iptables stop
reboot

tar -zxvf jdk-8u92-linux-x64.tar.gz -C /usr/local/
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_92
export JRE_HOME=/usr/local/jdk1.8.0_92/jre
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
source /etc/profile

# Elasticsearch
# (For a cluster, install elasticsearch on the other servers too, configuring the same cluster name and different node names.)

RPM install:
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.noarch.rpm
rpm -ivh elasticsearch-1.7.2.noarch.rpm

tar install:
wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.tar.gz
tar zxvf elasticsearch-1.7.2.tar.gz -C /usr/local/
cd /usr/local/elasticsearch-1.7.2/
mkdir -p /data/{db,logs}
vim config/elasticsearch.yml
#cluster.name: elasticsearch
#node.name: "es-node1"
#node.master: true
#node.data: true
path.data: /data/db
path.logs: /data/logs
network.host: 192.168.28.131

# Plugin installation
cd /usr/local/elasticsearch-1.7.2/
bin/plugin -install mobz/elasticsearch-head    # https://github.com/mobz/elasticsearch-head
bin/plugin -install lukas-vlcek/bigdesk
bin/plugin install lmenezes/elasticsearch-kopf
# kopf will complain that the elasticsearch version is too old. The workaround is to download it manually instead of using the plugin install command:
cd /usr/local/elasticsearch-1.7.2/plugins
wget https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
unzip master.zip
mv elasticsearch-kopf-master kopf
# The steps above are fully equivalent to the plugin install command.
cd /usr/local/
chown elk:elk elasticsearch-1.7.2/ -R
chown elk:elk /data/* -R

Install supervisord:
yum install supervisor -y
vim /etc/supervisord.conf    # append an elasticsearch section at the end
[program:elasticsearch]
directory=/usr/local/elasticsearch-1.7.2/
;command=su -c "/usr/local/elasticsearch-1.7.2/bin/elasticsearch" elk
command=/usr/local/elasticsearch-1.7.2/bin/elasticsearch
numprocs=1
autostart=true
startsecs=5
autorestart=true
startretries=3
user=elk
;stdout_logfile_maxbytes=200MB
;stdout_logfile_backups=20
;stdout_logfile=/var/log/pvs_elasticsearch_stdout.log

# Kibana (mind the version pairing)
wget https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
tar zxvf kibana-4.1.2-linux-x64.tar.gz -C /usr/local/
cd /usr/local/kibana-4.1.2-linux-x64
vim config/kibana.yml
port: 5601
host: "192.168.28.131"
elasticsearch_url: "http://192.168.28.131:9200"
./bin/kibana -l /var/log/kibana.log

# Run as a service; starting with Kibana 4.0 it runs as a socket service:
# cd /etc/init.d && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
# cd /etc/default && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default
# Edit the corresponding settings and add execute permission.

Or write the init script yourself (the opening of this script, including its variable definitions and the beginning of start(), did not survive in the original; the recoverable remainder is shown):
cat >> kibana <<'EOF'
...                         "$KIBANA_LOG" 2>&1 &
        sleep 2
        pidofproc node > $PID_FILE
        RETVAL=$?
        [ $RETVAL = 0 ] && success || failure
        echo
        [ $RETVAL = 0 ] && touch $LOCK_FILE
        return $RETVAL
    fi
}

reload() {
    echo "Reload command is not implemented for this service."
    return $RETVAL
}

stop() {
    echo -n "Stopping $DESC: "
    killproc -p $PID_FILE $DAEMON
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p $PID_FILE $DAEMON
        RETVAL=$?
        ;;
    restart)
        stop
        start
        ;;
    reload)
        reload
        ;;
    *)
        # Invalid arguments, print the following message.
        echo "Usage: $0 {start|stop|status|restart}" >&2
        exit 2
        ;;
esac
EOF
chmod +x kibana
mv kibana /etc/init.d/

# Nginx
yum install nginx -y
vim /etc/nginx/conf.d/elk.conf
server {
    server_name elk.sudo.com;
    auth_basic "Restricted Access";
    auth_basic_user_file passwd;
    location / {
        proxy_pass http://192.168.28.131:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

# Add htpasswd credentials:
yum install httpd-tools -y
echo -n 'sudo:' >> /etc/nginx/passwd               # add the user
openssl passwd elk.sudo.com >> /etc/nginx/passwd   # add the password
cat /etc/nginx/passwd                              # verify
chkconfig nginx on && service nginx start

# Logstash -- Setup
rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/logstash.repo
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
yum install logstash -y

# Create the SSL certificate on the logstash server. There are two ways: by IP address or by FQDN (DNS). Pick either one.

# 1. IP address
# Set subjectAltName under the [v3_ca] section; 192.168.28.131 is the logstash server's address.
vi /etc/pki/tls/openssl.cnf
subjectAltName = IP:192.168.28.131
cd /etc/pki/tls
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
# Set -days generously so the certificate does not expire too soon.

# 2. FQDN
# No changes to openssl.cnf are needed.
cd /etc/pki/tls
openssl req -subj '/CN=logstash.sudo.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
# logstash.sudo.com is a domain I use only for testing, so there is no need to add an A record for it.

# Logstash - Config
# Add the GeoIP data source (optional):
# wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
# gzip -d GeoLiteCity.dat.gz && mv GeoLiteCity.dat /etc/logstash/.

Logstash configuration files use a JSON-like syntax and live in the /etc/logstash/conf.d directory. A configuration has three parts: inputs, filters, and outputs.

First, create 01-lumberjack-input.conf to set up the lumberjack input, the protocol logstash-forwarder uses:
vi /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Next, create 11-nginx.conf to filter nginx logs:
vi /etc/logstash/conf.d/11-nginx.conf
filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:size}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
    # geoip {
    #   source => "clientip"
    #   add_tag => [ "geoip" ]
    #   fields => ["country_name", "country_code2", "region_name", "city_name", "real_region_name", "latitude", "longitude"]
    #   remove_field => [ "[geoip][longitude]", "[geoip][latitude]" ]
    # }
  }
}

This filter looks for logs tagged with type "nginx" (as defined by logstash-forwarder) and tries to parse the incoming nginx logs with grok to make them structured and queryable. The type must match what logstash-forwarder sets. Also pay attention to the nginx log format; I use the default log_format here.

# When load balancing behind a reverse proxy, you can change it to the following format:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $upstream_response_time $request_time $body_bytes_sent '
                '"$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$request_body" '
                '$scheme $upstream_addr';

If the log format is different, the grok matching rule must be rewritten. You can debug it with the online tool http://grokdebug.herokuapp.com/. In most cases, "ELK shows no data" errors originate here.

# Grok Debug -- http://grokdebug.herokuapp.com/
If grok fails to match your logs, do not continue until it matches. The grok pattern reference at http://grokdebug.herokuapp.com/patterns# is well worth studying before writing matching rules.

Finally, create a file to define the output:
vi /etc/logstash/conf.d/99-lumberjack-output.conf
output {
  if "_grokparsefailure" in [tags] {
    file {
      path => "/var/log/logstash/grokparsefailure-%{type}-%{+YYYY.MM.dd}.log"
    }
  }
  elasticsearch {
    host => "192.168.28.131"
    protocol => "http"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    workers => 5
    template_overwrite => true
  }
  # stdout { codec => rubydebug }
}

This stores structured logs in elasticsearch and writes logs that do not match grok to a file. Note that filter files added later must be numbered between 01 and 99, because logstash reads its configuration files in order. While debugging, send logs to stdout instead of elasticsearch to make troubleshooting easier; also read the service logs often, since many errors show up there and are easy to pinpoint.

Before starting the logstash service, check the configuration files:
# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/*
Configuration OK
You can also check a single file by name. Keep fixing until you get OK; otherwise the logstash service will not start. Then start the logstash service.

# logstash-forwarder
Copy the public certificate logstash-forwarder.crt created during the logstash setup to every logstash-forwarder server (each server whose logs you want to monitor).
wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
rpm -ivh logstash-forwarder-0.4.0-1.x86_64.rpm
vi /etc/logstash-forwarder.conf
{
  "network": {
    "servers": [ "192.168.28.131:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 30
  },
  "files": [
    {
      "paths": [ "/var/log/nginx/*-access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}
The configuration file is JSON; if the format is wrong, the logstash-forwarder service will not start. After that, start the logstash-forwarder service.

echo -e "192.168.28.131 Test1\n192.168.28.130 Test2\n192.168.28.138 Test3" >> /etc/hosts
# Without these entries elasticsearch fails to start (it cannot resolve Test*).
su - elk
cd /usr/local/elasticsearch-1.7.2
nohup ./bin/elasticsearch &
# (It can also be managed by supervisord and started at boot together with the other services.)

On the elk server:
service logstash restart
service kibana restart
Visit http://elk.sudo.com:9200/ to check whether startup succeeded.

On the client:
service nginx start && service logstash-forwarder start

# Use redis to store logs (as a queue): create the corresponding configuration files.
vi /etc/logstash/conf.d/redis-input.conf
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:size}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
    # test
  }
}
output {
  #### push the received logs onto the redis message queue ####
  redis {
    host => "127.0.0.1"
    port => 6379
    data_type => "list"
    key => "logstash:redis"
  }
}

vi /etc/logstash/conf.d/redis-output.conf
input {
  # read from redis
  redis {
    data_type => "list"
    key => "logstash:redis"
    host => "192.168.28.131"    # redis-server
    port => 6379
    # threads => 5
  }
}
output {
  elasticsearch {
    host => "192.168.28.131"
    protocol => "http"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    workers => 36
    template_overwrite => true
  }
  # stdout { codec => rubydebug }
}

# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/*
Configuration OK
Log in to redis and query: you can see that the logs' corresponding key-value entries have been written.
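To get a feel for what the grok pattern in 11-nginx.conf extracts, here is a rough Python regex equivalent. The named groups mirror the grok field names (clientip, remote_user, timestamp, method, request, status, size, referrer, agent); this is an illustrative simplification of grok's behavior on a default-format nginx access line, not the real grok engine:

```python
import re

# Simplified stand-in for the grok pattern used in 11-nginx.conf.
NGINX_ACCESS = re.compile(
    r'(?P<clientip>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<request>\S+)(?: HTTP/(?P<httpversion>[\d.]+))?" '
    r'(?P<status>\d+) (?P<size>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('203.0.113.5 - - [07/Feb/2016:10:15:32 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"')

m = NGINX_ACCESS.match(line)
fields = m.groupdict()
print(fields['clientip'], fields['status'], fields['request'])
# 203.0.113.5 200 /index.html
```

If the regex (or the real grok rule) does not match, there are no structured fields at all, which is exactly the "_grokparsefailure" case routed to a file in 99-lumberjack-output.conf.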
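The index name `logstash-%{type}-%{+YYYY.MM.dd}` is expanded per event: `%{type}` comes from the event's type field and `%{+YYYY.MM.dd}` is formatted from the event's @timestamp, which is why the `date` filter matters (a wrongly parsed timestamp lands the event in the wrong daily index). A small sketch of that expansion, with a hypothetical `index_name` helper:

```python
from datetime import datetime, timezone

def index_name(event_type, ts):
    # Expand logstash's index => "logstash-%{type}-%{+YYYY.MM.dd}"
    # using the event's type and @timestamp.
    return 'logstash-{}-{}'.format(event_type, ts.strftime('%Y.%m.%d'))

ts = datetime(2016, 2, 7, 2, 30, tzinfo=timezone.utc)
print(index_name('nginx', ts))  # logstash-nginx-2016.02.07
```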
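The redis setup above is a classic producer/consumer queue: the shipper side (the redis output in redis-input.conf) appends serialized events to the tail of the list stored at the key "logstash:redis", and the indexer side (the redis input in redis-output.conf) pops them from the head, so events are delivered in FIFO order. A minimal in-memory sketch of those semantics, using a deque as a hypothetical stand-in for the redis list (a real deployment uses the redis server and commands like RPUSH/LPOP):

```python
import json
from collections import deque

queue = deque()  # stands in for the redis list at key "logstash:redis"

def shipper_output(event):
    # shipper side: append a serialized event to the tail of the list
    queue.append(json.dumps(event))

def indexer_input():
    # indexer side: pop the oldest event from the head of the list
    return json.loads(queue.popleft())

shipper_output({'type': 'nginx', 'message': 'GET / 200'})
shipper_output({'type': 'nginx', 'message': 'GET /a 404'})
first = indexer_input()
print(first['message'])  # FIFO: the first event shipped comes out first
```

Because the queue decouples the two sides, a slow or restarting elasticsearch only makes the list grow instead of dropping events, which is the point of putting redis in the middle.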
That is all of the content of "How ELK writes logs' key-value data". Thank you for reading! I hope you got a lot out of this article. New articles on different topics are published every day, so if you want to learn more, follow the Yisu Cloud (亿速云) industry news channel.