MySQL optimization: an example analysis of Zabbix partition optimization
The biggest bottleneck when running Zabbix is the database. Keep Zabbix's data storage and alerting healthy and you can build a solid monitoring system on top of it. Zabbix stores its data mainly in the history and trends tables; as time goes by these tables grow very large, performance drops sharply, and monitoring itself suffers. Tuning MySQL can therefore improve Zabbix performance dramatically, and the approach taken in this article is to partition the MySQL tables.
Principle
Partition the history, trends, and related tables in the zabbix database by date, one partition per day. With the stored procedures used below, the history* tables keep 90 days of partitions and the trends* tables keep 730 days.
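For reference, the stored procedures below end up issuing range-partitioning statements of roughly the following form against the clock column. This is only an illustrative sketch; the partition names and timestamps are taken from or extrapolated from the sample output later in this article, not from your own data:

-- initial partitioning of a previously unpartitioned table
ALTER TABLE zabbix.history PARTITION BY RANGE (`clock`)
(PARTITION p201709110000 VALUES LESS THAN (1505145600));
-- a new daily partition pre-created ahead of time
ALTER TABLE zabbix.history ADD PARTITION (PARTITION p201709120000 VALUES LESS THAN (1505232000));
-- an old partition dropped once it falls outside the retention window
ALTER TABLE zabbix.history DROP PARTITION p201706130000;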
Detailed steps
Impact: the change can be made online. MySQL reads and writes slow down and Zabbix becomes sluggish while it runs; how long this lasts depends on the amount of data, usually around 2 hours.
Step 1
Log in to the database on the Zabbix server and standardize the MySQL configuration:
cat > /etc/my.cnf <<EOF
[mysqld]
datadir=/data/mysql
socket=/var/lib/mysql/mysql.sock
default-storage-engine=innodb
collation-server=utf8_general_ci
init-connect='SET NAMES utf8'
character-set-server=utf8
symbolic-links=0
max_connections=4096
innodb_buffer_pool_size=12G
max_allowed_packet=32M
join_buffer_size=2M
sort_buffer_size=2M
query_cache_size=64M
query_cache_limit=4M
thread_concurrency=8
table_open_cache=1024
innodb_flush_log_at_trx_commit=0
long_query_time=1
log-slow-queries=/data/mysql/mysql-slow.log

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

#[mysql]
#socket=/data/mysql/mysql.sock

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
EOF
Note: be sure to adjust innodb_buffer_pool_size to roughly 1/3 of physical memory (the example above uses 12G).
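If you want to double-check the value actually in effect after restarting MySQL/MariaDB, a quick way is to query the running variable (the value is reported in bytes):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';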
Step 2
First confirm the Zabbix version: this procedure requires Zabbix 3.2.0 or later and must not be applied to versions below 3.2. The production deployment here runs zabbix-3.2.6.
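One way to verify the schema version from inside MySQL, if the package version is not handy, is the dbversion table that Zabbix maintains; on 3.2.x the mandatory value should be of the form 302xxxx. This check is only a suggestion, not part of the original procedure:

SELECT * FROM zabbix.dbversion;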
a. Import the stored procedures
# cat partition.sql
DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
    /*
       SCHEMANAME = The DB schema in which to make changes
       TABLENAME = The table with partitions to potentially delete
       PARTITIONNAME = The name of the partition to create
    */
    /* Verify that the partition does not already exist */
    DECLARE RETROWS INT;
    SELECT COUNT(1) INTO RETROWS
    FROM information_schema.partitions
    WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

    IF RETROWS = 0 THEN
        /*
           1. Print a message indicating that a partition was created.
           2. Create the SQL to create the partition.
           3. Execute the SQL from #2.
        */
        SELECT CONCAT("partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")") AS msg;
        SET @sql = CONCAT('ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));');
        PREPARE STMT FROM @sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
    END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
    /*
       SCHEMANAME = The DB schema in which to make changes
       TABLENAME = The table with partitions to potentially delete
       DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
    */
    DECLARE done INT DEFAULT FALSE;
    DECLARE drop_part_name VARCHAR(16);

    /*
       Get a list of all the partitions that are older than the date
       in DELETE_BELOW_PARTITION_DATE. All partitions are prefixed with
       a "p", so use SUBSTRING TO get rid of that character.
    */
    DECLARE myCursor CURSOR FOR
        SELECT partition_name
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    /*
       Create the basics for when we need to drop the partition. Also, create
       @drop_partitions to hold a comma-delimited list of all partitions that
       should be deleted.
    */
    SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
    SET @drop_partitions = "";

    /* Start looping through all the partitions that are too old. */
    OPEN myCursor;
    read_loop: LOOP
        FETCH myCursor INTO drop_part_name;
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
    END LOOP;
    IF @drop_partitions != "" THEN
        /*
           1. Build the SQL to drop all the necessary partitions.
           2. Run the SQL to drop the partitions.
           3. Print out the table partitions that were deleted.
        */
        SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
        PREPARE STMT FROM @full_sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;

        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
    ELSE
        /*
           No partitions are being deleted, so print out "N/A" (Not applicable)
           to indicate that no changes were made.
        */
        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
    END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
    DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
    DECLARE PARTITION_NAME VARCHAR(16);
    DECLARE OLD_PARTITION_NAME VARCHAR(16);
    DECLARE LESS_THAN_TIMESTAMP INT;
    DECLARE CUR_TIME INT;

    CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
    SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

    SET @__interval = 1;
    create_loop: LOOP
        IF @__interval > CREATE_NEXT_INTERVALS THEN
            LEAVE create_loop;
        END IF;

        SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
        SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
        IF (PARTITION_NAME != OLD_PARTITION_NAME) THEN
            CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
        END IF;
        SET @__interval = @__interval + 1;
        SET OLD_PARTITION_NAME = PARTITION_NAME;
    END LOOP;

    SET OLDER_THAN_PARTITION_DATE = DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
    CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
    DECLARE PARTITION_NAME VARCHAR(16);
    DECLARE RETROWS INT(11);
    DECLARE FUTURE_TIMESTAMP TIMESTAMP;

    /* Check if any partitions exist for the given SCHEMANAME.TABLENAME. */
    SELECT COUNT(1) INTO RETROWS
    FROM information_schema.partitions
    WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

    /* If partitions do not exist, go ahead and partition the table */
    IF RETROWS = 1 THEN
        /*
         * Take the current date at 00:00:00 and add HOURLYINTERVAL to it. This is the timestamp below which we will store values.
         * We begin partitioning based on the beginning of a day. This is because we don't want to generate a random partition
         * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
         * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
         */
        SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
        SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

        -- Create the partitioning query
        SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
        SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

        -- Run the partitioning query
        PREPARE STMT FROM @__PARTITION_SQL;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
    END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
    CALL partition_maintenance(SCHEMA_NAME, 'history', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_log', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_str', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_text', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 90, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
    CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;
The content above contains the stored procedures that create and maintain the partitions. Copy it into partition.sql and then run:
mysql -uzabbix -pzabbix zabbix < partition.sql
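After the import you can confirm that the procedures were created and, if you want to see the effect on a single table first, call partition_maintenance by hand. Its last three parameters are the number of days of data to keep, the partition interval in hours, and how many future partitions to pre-create, so the call below simply mirrors what partition_maintenance_all does for the history table; it is an optional check, not an extra required step:

SHOW PROCEDURE STATUS WHERE Db = 'zabbix';
CALL partition_maintenance('zabbix', 'history', 90, 24, 14);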
b. Add a crontab entry that runs every day at 01:01, as follows:
crontab -l > crontab.txt
cat >> crontab.txt <<EOF
# zabbix partition_maintenance
01 01 * * * mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" &>/dev/null
EOF
cat crontab.txt | crontab
Note: adjust the password of the MySQL zabbix user to match your environment.
c. Run the maintenance once now (the first run takes quite a while, so use nohup), as follows:
nohup mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" &> /root/partition.log &
Note: watch the output in /root/partition.log.
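While the first run is reorganizing each table you can also watch what it is doing from a second MySQL session, for example:

SHOW FULL PROCESSLIST;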
d. Check the result
Log in to MySQL and inspect the history and related tables, as follows:
MariaDB [zabbix]> show create table history;
| history | CREATE TABLE `history` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` double(16,4) NOT NULL DEFAULT '0.0000',
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
/*!50100 PARTITION BY RANGE (`clock`)
(PARTITION p201708280000 VALUES LESS THAN (1503936000) ENGINE = InnoDB,
 PARTITION p201708290000 VALUES LESS THAN (1504022400) ENGINE = InnoDB,
 PARTITION p201708300000 VALUES LESS THAN (1504108800) ENGINE = InnoDB,
 PARTITION p201708310000 VALUES LESS THAN (1504195200) ENGINE = InnoDB,
 PARTITION p201709010000 VALUES LESS THAN (1504281600) ENGINE = InnoDB,
 PARTITION p201709020000 VALUES LESS THAN (1504368000) ENGINE = InnoDB,
 PARTITION p201709030000 VALUES LESS THAN (1504454400) ENGINE = InnoDB,
 PARTITION p201709040000 VALUES LESS THAN (1504540800) ENGINE = InnoDB,
 PARTITION p201709050000 VALUES LESS THAN (1504627200) ENGINE = InnoDB,
 PARTITION p201709060000 VALUES LESS THAN (1504713600) ENGINE = InnoDB,
 PARTITION p201709070000 VALUES LESS THAN (1504800000) ENGINE = InnoDB,
 PARTITION p201709080000 VALUES LESS THAN (1504886400) ENGINE = InnoDB,
 PARTITION p201709090000 VALUES LESS THAN (1504972800) ENGINE = InnoDB,
 PARTITION p201709100000 VALUES LESS THAN (1505059200) ENGINE = InnoDB,
 PARTITION p201709110000 VALUES LESS THAN (1505145600) ENGINE = InnoDB) */ |
Seeing all these PARTITION clauses means the configuration is correct. Keep an eye on the MySQL slow query log: usually by the day after the change the slow queries have all but disappeared, and the Zabbix dashboard should feel very responsive again.
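Besides show create table, you can also list the partitions (with their approximate row counts) through information_schema, which makes it easy to confirm that new partitions keep being created and old ones dropped. The table list below is just an example; adjust it to the tables you care about:

SELECT table_name, partition_name, table_rows
FROM information_schema.partitions
WHERE table_schema = 'zabbix'
  AND table_name IN ('history', 'history_uint', 'trends', 'trends_uint')
ORDER BY table_name, partition_name;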