This article walks through how to deploy a local TiDB cluster from 0 to 1. The steps are kept simple and easy to follow.

TiDB is an open-source NewSQL database. Here is the official description:

TiDB is an open-source distributed relational database designed and developed by PingCAP. It is a converged distributed database that supports both online transaction processing and online analytical processing (Hybrid Transactional and Analytical Processing, HTAP). Its key features include horizontal scale-out and scale-in, financial-grade high availability, real-time HTAP, a cloud-native distributed architecture, and compatibility with the MySQL 5.7 protocol and ecosystem. The goal is to provide a one-stop solution for OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP. TiDB suits scenarios that require high availability, strong consistency, and large data volumes.

A few key points stand out:

Distributed relational database

Compatible with MySQL 5.7

Supports HTAP (online transaction processing and online analytical processing)

Good fit for the financial sector, with support for high availability, strong consistency, and large-scale data

Basic Concepts

A few important concepts in TiDB:

PD: Placement Driver, the managing node of a TiDB cluster. Besides overall cluster scheduling, it is responsible for global ID generation and for generating the global timestamp TSO (centralized timekeeping). In other words, the cluster's global clock lives on this node.

TiKV: TiDB's storage layer, a distributed transactional key-value database. It provides ACID transactions, uses the Raft protocol to keep multiple replicas consistent, and also stores statistics.

TiFlash: the key component of TiDB's HTAP form. It is a columnar-storage extension of TiKV that provides good isolation while preserving strong consistency.

Monitor: the TiDB monitoring component.

Lab Environment

Because my local resources are limited, we use a quick-deployment approach.

There are two quick ways to deploy TiDB:

Option 1: use TiUP Playground to quickly deploy a local test environment

Use case: quickly deploy a TiDB cluster on a local Mac or a single Linux machine. It lets you experience the basic architecture of a TiDB cluster and the operation of the base components such as TiDB, TiKV, PD, and monitoring.
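
For reference, once TiUP is installed, option 1 boils down to a single command, which the installer itself suggests later in this article ("Have a try: tiup playground"). I do not use it here:

tiup playground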

Option 2: use TiUP cluster to simulate the production deployment steps on a single machine

Use case: experience the smallest complete TiDB topology on a single Linux server while simulating the production deployment steps.

I use the second option here.

According to the official documentation, TiDB has been heavily tested on CentOS 7.3, so deploying on CentOS 7.3 or later is recommended.

Local environment: a VMware virtual machine running CentOS 7.6.
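
If you want to confirm the OS release on the target machine, a quick check (standard on CentOS) is:

cat /etc/redhat-release   # e.g. CentOS Linux release 7.6.1810 (Core)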

Deployment

We follow the official steps to install.

1. Disable the firewall

systemctl stop firewalld
service iptables stop
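
Optionally, you can confirm that firewalld is stopped and keep it from coming back after a reboot (standard systemd commands; adjust if your system does not use firewalld):

systemctl status firewalld    # should now report inactive (dead)
systemctl disable firewalld   # prevent it from starting again at boot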

2. Download and install TiUP. The command and output are as follows:

[root@master ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8697k  100 8697k    0     0  4316k      0  0:00:02  0:00:02 --:--:-- 4318k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

3. Install the TiUP cluster component

First, load the updated shell profile, otherwise the tiup command cannot be found:

source .bash_profile
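
A quick sanity check that tiup is now on the PATH (the exact version string will vary):

tiup --version   # prints the installed TiUP version and build info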

Then run the cluster command, which installs the component on first use:

tiup cluster

The output is as follows:

[root@master ~]# tiup cluster
The component `cluster` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.3.1-linux-amd64.tar.gz 10.05 MiB / 10.05 MiB 100.00% 13.05 MiB p/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  exec        Run shell command on host in the tidb cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config. Will use editor from environment variable `EDITOR`, default use vi
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable starting a TiDB cluster automatically at boot
  help        Help about any command

Flags:
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.

4. Increase the connection limit of the sshd service

This requires root privileges. Modify the following parameter in /etc/ssh/sshd_config:

MaxSessions 20
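
If you prefer a one-liner over opening an editor, something like the following works (a sketch; it assumes the MaxSessions line appears at most once and may be commented out):

sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config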

After the change, restart sshd:

[root@master ~]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service

5. Create the cluster topology template file

Name this file topo.yaml, with the following content:

# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
  log.slow-threshold: 300
 tikv:
  readpool.storage.use-unified-pool: false
  readpool.coprocessor.use-unified-pool: true
 pd:
  replication.enable-placement-rules: true
  replication.location-labels: ["host"]
 tiflash:
  logger.level: "info"

pd_servers:
 - host: 192.168.59.146

tidb_servers:
 - host: 192.168.59.146

tikv_servers:
 - host: 192.168.59.146
   port: 20160
   status_port: 20180
   config:
    server.labels: { host: "logic-host-1" }
 # - host: 192.168.59.146
 #   port: 20161
 #   status_port: 20181
 #   config:
 #    server.labels: { host: "logic-host-2" }
 # - host: 192.168.59.146
 #   port: 20162
 #   status_port: 20182
 #   config:
 #    server.labels: { host: "logic-host-3" }

tiflash_servers:
 - host: 192.168.59.146

A few things to note here:

The host values in the file are the IP address of the server where TiDB is deployed

ssh_port defaults to 22

The official template lists 3 tikv_servers nodes; I kept only 1 here, because with multiple nodes configured locally only one of them would start successfully
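
Before deploying, the check subcommand listed in the tiup cluster help can run preflight checks against this topology file (a sketch; the exact flags may differ between TiUP versions):

tiup cluster check ./topo.yaml --user root -p   # preflight checks against the target host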

6. Deploy the cluster

The deployment command is as follows:

tiup cluster deploy <cluster-name> <tidb-version> ./topo.yaml --user root -p

Here cluster-name is the cluster name and tidb-version is the TiDB version number, which you can look up with the command tiup list tidb. I use v3.1.2 and call the cluster mytidb-cluster, so the command becomes:

tiup cluster deploy mytidb-cluster v3.1.2 ./topo.yaml --user root -p

The log output during deployment looks like this:

[root@master ~]# tiup cluster deploy mytidb-cluster v3.1.2 ./topo.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster deploy mytidb-cluster v3.1.2 ./topo.yaml --user root -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    mytidb-cluster
Cluster version: v3.1.2
Type        Host            Ports                            OS/Arch       Directories
----        ----            -----                            -------       -----------
pd          192.168.59.146  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.59.146  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb        192.168.59.146  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.59.146  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.59.146  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.59.146  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v3.1.2 (linux/amd64) ... Done
  - Download tikv:v3.1.2 (linux/amd64) ... Done
  - Download tidb:v3.1.2 (linux/amd64) ... Done
  - Download tiflash:v3.1.2 (linux/amd64) ... Done
  - Download prometheus:v3.1.2 (linux/amd64) ... Done
  - Download grafana:v3.1.2 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.59.146:22 ... Done
+ Copy files
  - Copy pd -> 192.168.59.146 ... Done
  - Copy tikv -> 192.168.59.146 ... Done
  - Copy tidb -> 192.168.59.146 ... Done
  - Copy tiflash -> 192.168.59.146 ... Done
  - Copy prometheus -> 192.168.59.146 ... Done
  - Copy grafana -> 192.168.59.146 ... Done
  - Copy node_exporter -> 192.168.59.146 ... Done
  - Copy blackbox_exporter -> 192.168.59.146 ... Done
+ Check status
Enabling component pd
        Enabling instance pd 192.168.59.146:2379
        Enable pd 192.168.59.146:2379 success
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component tikv
        Enabling instance tikv 192.168.59.146:20160
        Enable tikv 192.168.59.146:20160 success
Enabling component tidb
        Enabling instance tidb 192.168.59.146:4000
        Enable tidb 192.168.59.146:4000 success
Enabling component tiflash
        Enabling instance tiflash 192.168.59.146:9000
        Enable tiflash 192.168.59.146:9000 success
Enabling component prometheus
        Enabling instance prometheus 192.168.59.146:9090
        Enable prometheus 192.168.59.146:9090 success
Enabling component grafana
        Enabling instance grafana 192.168.59.146:3000
        Enable grafana 192.168.59.146:3000 success
Cluster `mytidb-cluster` deployed successfully, you can start it with command: `tiup cluster start mytidb-cluster`

7. Start the cluster

The command is as follows:

tiup cluster start mytidb-cluster

The log of a successful startup looks like this:

[root@master ~]# tiup cluster start mytidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster start mytidb-cluster
Starting cluster mytidb-cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance pd 192.168.59.146:2379
        Start pd 192.168.59.146:2379 success
Starting component node_exporter
        Starting instance 192.168.59.146
        Start 192.168.59.146 success
Starting component blackbox_exporter
        Starting instance 192.168.59.146
        Start 192.168.59.146 success
Starting component tikv
        Starting instance tikv 192.168.59.146:20160
        Start tikv 192.168.59.146:20160 success
Starting component tidb
        Starting instance tidb 192.168.59.146:4000
        Start tidb 192.168.59.146:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.59.146:9000
        Start tiflash 192.168.59.146:9000 success
Starting component prometheus
        Starting instance prometheus 192.168.59.146:9090
        Start prometheus 192.168.59.146:9090 success
Starting component grafana
        Starting instance grafana 192.168.59.146:3000
        Start grafana 192.168.59.146:3000 success
+ [ Serial ] - UpdateTopology: cluster=mytidb-cluster
Started cluster `mytidb-cluster` successfully
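
Once the cluster is up, the same tool handles routine operations; both subcommands appear in the tiup cluster help shown earlier:

tiup cluster stop mytidb-cluster      # stop all components
tiup cluster restart mytidb-cluster   # stop and then start all components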

8. Access the database

Because TiDB supports MySQL clients, we log in to TiDB with SQLyog, using user name root, an empty password, address 192.168.59.146, and port 4000.

After a successful login, the left panel of SQLyog shows the tables that ship with TiDB.
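
If you prefer the command line, a standard MySQL client connects the same way (a minimal sketch; it assumes a mysql client is installed on the machine you connect from):

# connect with the MySQL command-line client; the root password is empty
mysql -h 192.168.59.146 -P 4000 -u root

Once connected, SHOW DATABASES; lists the same built-in schemas that SQLyog shows.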

9. Access TiDB's Grafana monitoring

The URL is:

http://192.168.59.146:3000/login

The initial user name/password is admin/admin. After logging in you are prompted to change the password.

10. Dashboard

TiDB v3.x has no dashboard; it was added starting with v4.0. The URL would be:

http://192.168.59.146:2379/dashboard

11. View the cluster list

Command: tiup cluster list. The result is as follows:

[root@master /]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster list
Name            User  Version  Path                                                  PrivateKey
----            ----  -------  ----                                                  ----------
mytidb-cluster  tidb  v3.1.2   /root/.tiup/storage/cluster/clusters/mytidb-cluster  /root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa

12. View the cluster topology

The command is as follows:

tiup cluster display mytidb-cluster

After running the command, the output for my local cluster is:

[root@master /]# tiup cluster display mytidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster display mytidb-cluster
Cluster type:    tidb
Cluster name:    mytidb-cluster
Cluster version: v3.1.2
SSH type:        builtin
ID                    Role        Host            Ports                            OS/Arch       Status  Data Dir                    Deploy Dir
--                    ----        ----            -----                            -------       ------  --------                    ----------
192.168.59.146:3000   grafana     192.168.59.146  3000                             linux/x86_64  Up      -                           /tidb-deploy/grafana-3000
192.168.59.146:2379   pd          192.168.59.146  2379/2380                        linux/x86_64  Up|L    /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.59.146:9090   prometheus  192.168.59.146  9090                             linux/x86_64  Up      /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.59.146:4000   tidb        192.168.59.146  4000/10080                       linux/x86_64  Up      -                           /tidb-deploy/tidb-4000
192.168.59.146:9000   tiflash     192.168.59.146  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.59.146:20160  tikv        192.168.59.146  20160/20180                      linux/x86_64  Up      /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
Total nodes: 6

Problems Encountered

With TiDB v4.0.9, the deployment itself succeeds but startup fails. If 3 TiKV nodes are configured in topo.yaml, only one of them starts successfully. The log is as follows:

[root@master ~]# tiup cluster start mytidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster start mytidb-cluster
Starting cluster mytidb-cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance pd 192.168.59.146:2379
        Start pd 192.168.59.146:2379 success
Starting component node_exporter
        Starting instance 192.168.59.146
        Start 192.168.59.146 success
Starting component blackbox_exporter
        Starting instance 192.168.59.146
        Start 192.168.59.146 success
Starting component tikv
        Starting instance tikv 192.168.59.146:20162
        Starting instance tikv 192.168.59.146:20160
        Starting instance tikv 192.168.59.146:20161
        Start tikv 192.168.59.146:20162 success

Error: failed to start tikv: failed to start: tikv 192.168.59.146:20161, please check the instance's log (/tidb-deploy/tikv-20161/log) for more detail.: timed out waiting for port 20161 to be started after 2m0s

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2021-01-05-19-58-46.log.
Error: run `/root/.tiup/components/cluster/v1.3.1/tiup-cluster` (wd:/root/.tiup/data/SLGrLJI) failed: exit status 1

Checking the log file /tidb-deploy/tikv-20161/log/tikv.log shows that the instance failed because a file or directory could not be found:

[2021/01/06 05:48:44.231 -05:00] [FATAL] [lib.rs:482] ["called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: \"No such file or directory\" }"] [backtrace="stack backtrace:\n 0: tikv_util::set_panic_hook::{{closure}}\n at components/tikv_util/src/lib.rs:481\n 1: std::panicking::rust_panic_with_hook\n at src/libstd/panicking.rs:475\n 2: rust_begin_unwind\n at src/libstd/panicking.rs:375\n 3: core::panicking::panic_fmt\n at src/libcore/panicking.rs:84\n 4: core::result::unwrap_failed\n at src/libcore/result.rs:1188\n 5: core::result::Result<T,E>::unwrap\n at /rustc/0de96d37fbcc54978458c18f5067cd9817669bc8/src/libcore/result.rs:956\n cmd::server::TiKVServer::init_fs\n at cmd/src/server.rs:310\n cmd::server::run_tikv\n at cmd/src/server.rs:95\n 6: tikv_server::main\n at cmd/src/bin/tikv-server.rs:166\n 7: std::rt::lang_start::{{closure}}\n at /rustc/0de96d37fbcc54978458c18f5067cd9817669bc8/src/libstd/rt.rs:67\n 8: main\n 9: __libc_start_main\n 10: <unknown>\n"] [location=src/libcore/result.rs:1188] [thread_name=main]
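
To start narrowing this down, one simple diagnostic is to check whether the directories the failing instance expects actually exist (a sketch only; the /tidb-data path is inferred from the naming pattern of the working instance):

ls -ld /tidb-deploy/tikv-20161 /tidb-data/tikv-20161   # do the deploy and data directories exist?
ls -l /tidb-deploy/tikv-20161/log                       # any earlier log files to inspect?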

With only one TiKV node configured, startup still fails, this time at TiFlash. The second half of the startup log is excerpted below:

Starting component pd
        Starting instance pd 192.168.59.146:2379
        Start pd 192.168.59.146:2379 success
Starting component node_exporter
        Starting instance 192.168.59.146
        Start 192.168.59.146 success
Starting component blackbox_exporter
        Starting instance 192.168.59.146
        Start 192.168.59.146 success
Starting component tikv
        Starting instance tikv 192.168.59.146:20160
        Start tikv 192.168.59.146:20160 success
Starting component tidb
        Starting instance tidb 192.168.59.146:4000
        Start tidb 192.168.59.146:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.59.146:9000

Error: failed to start tiflash: failed to start: tiflash 192.168.59.146:9000, please check the instance's log (/tidb-deploy/tiflash-9000/log) for more detail.: timed out waiting for port 9000 to be started after 2m0s

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2021-01-06-20-02-13.log.

The log file in /tidb-deploy/tiflash-9000/log contains the following:

[2021/01/06 20:06:26.207 -05:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=region-collector-worker]
[2021/01/06 20:06:27.130 -05:00] [FATAL] [lib.rs:482] ["called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: \"No such file or directory\" }"] [backtrace="stack backtrace:\n 0: tikv_util::set_panic_hook::{{closure}}\n 1: std::panicking::rust_panic_with_hook\n at src/libstd/panicking.rs:475\n 2: rust_begin_unwind\n at src/libstd/panicking.rs:375\n 3: core::panicking::panic_fmt\n at src/libcore/panicking.rs:84\n 4: core::result::unwrap_failed\n at src/libcore/result.rs:1188\n 5: cmd::server::run_tikv\n 6: run_proxy\n 7: operator()\n at /home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/tics/dbms/src/Server/Server.cpp:415\n 8: execute_native_thread_routine\n at ../../../../../libstdc++-v3/src/c++11/thread.cc:83\n 9: start_thread\n 10: __clone\n"] [location=src/libcore/result.rs:1188] [thread_name=<unnamed>]
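
The same kind of check applies to TiFlash: list its log directory and read the newest entries to see which file it could not find (a diagnostic sketch):

ls -lt /tidb-deploy/tiflash-9000/log             # list TiFlash log files, newest first
tail -n 50 /tidb-deploy/tiflash-9000/log/*.log   # inspect the most recent messages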

I also tried v4.0.1 and hit the same problem: the same "No such file or directory" error.
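
When retrying with a different version, it is cleanest to tear the failed cluster down first. The destroy subcommand is listed in the tiup cluster help above; note that it removes the cluster's data and deploy directories, so use it only on throwaway test clusters:

tiup cluster destroy mytidb-cluster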

That wraps up deploying a local TiDB cluster from 0 to 1. Exact behavior may differ between environments, so verify the steps in your own setup.