Prometheus Ops Tool Promtool (Part 4): TSDB Features
Promtool has a total of six subcommands for the TSDB, used for write performance testing, analysis, listing blocks, dumping samples, importing blocks from OpenMetrics data, and creating blocks for new recording rules. Let's go through them in turn. This is probably the first article written about Promtool's TSDB features; I could not find any related material online.

Write performance testing

Promtool can run a write performance benchmark against the Prometheus TSDB. The command's parameters are as follows:

```
[root@Erdong-Test ~]# ./promtool tsdb bench write --help
usage: promtool tsdb bench write [<flags>] [<file>]

Run a write performance benchmark.

Flags:
  -h, --help             Show context-sensitive help (also try --help-long and --help-man).
      --version          Show application version.
      --enable-feature= ...  Comma separated feature names to enable (only PromQL related). See
                         https://prometheus.io/docs/prometheus/latest/feature_flags/ for the options and more details.
      --out="benchout"   Set the output path.
      --metrics=10000    Number of metrics to read.
      --scrapes=3000     Number of scrapes to simulate.

Args:
  [<file>]  Input file with samples data, default is (../../tsdb/testdata/20kseries.json).
```

First download the test data file 20kseries.json, which lives under tsdb/testdata/ in the source repository:

```
wget https://raw.githubusercontent.com/prometheus/prometheus/main/tsdb/testdata/20kseries.json
```

Once it is downloaded, run the benchmark, specifying the metrics and scrapes parameters:

```
[root@Erdong-Test ~]# ./promtool tsdb bench write --metrics=10000 --scrapes=3000 ./20kseries.json
level=info ts=2022-07-27T13:21:25.546626055Z caller=head.go:493 msg="Replaying on-disk memory mappable chunks if any"
level=info ts=2022-07-27T13:21:25.546734166Z caller=head.go:536 msg="On-disk memory mappable chunks replay completed" duration=11.815µs
level=info ts=2022-07-27T13:21:25.546766084Z caller=head.go:542 msg="Replaying WAL, this may take a while"
level=info ts=2022-07-27T13:21:25.547101874Z caller=head.go:613 msg="WAL segment loaded" segment=0 maxSegment=0
level=info ts=2022-07-27T13:21:25.547131383Z caller=head.go:619 msg="WAL replay completed" checkpoint_replay_duration=38.26µs wal_replay_duration=315.491µs total_replay_duration=409.177µs
level=info ts=2022-07-27T13:21:25.549132675Z caller=db.go:1467 msg="Compactions disabled"
>> start stage=readData
>> completed stage=readData duration=145.973395ms
>> start stage=ingestScrapes
ingestion completed
>> completed stage=ingestScrapes duration=3.628682202s
 > total samples: 30000000
 > samples/sec: 8.267435217071414e+06
>> start stage=stopStorage
>> completed stage=stopStorage duration=1.522008202s
```

The benchmark creates a benchout directory in the current directory holding the files it generated; the --out flag changes the path or name. In short, reading the input data took about 146 ms, ingesting the scrapes took about 3.6 s (roughly 8.27 million samples per second), and stopping the storage took about 1.5 s. We will cover how to interpret these results in a dedicated follow-up.

TSDB analysis

Promtool can analyze a Prometheus data block, reporting churn, label pair cardinality, and compaction efficiency. The command's parameters are:

```
[root@Erdong-Test ~]# ./promtool tsdb analyze --help
usage: promtool tsdb analyze [<flags>] [<db path>] [<block id>]

Analyze churn, label pair cardinality and compaction efficiency.

Flags:
  -h, --help       Show context-sensitive help (also try --help-long and --help-man).
      --version    Show application version.
      --enable-feature= ...  Comma separated feature names to enable (only PromQL related). See
                   https://prometheus.io/docs/prometheus/latest/feature_flags/ for the options and more details.
      --limit=20   How many items to show in each list.
      --extended   Run extended analysis.

Args:
  [<db path>]   Database path (default is data/).
  [<block id>]  Block to analyze (default is the last block).
```

The --limit flag controls how many items are shown in each list. The data directory must be given; the block ID is optional, and if omitted the most recent block is analyzed. Running the command produces output like the following:

```
[root@Erdong-Test ~]# ./promtool tsdb analyze ./prometheus/data/
Block ID: 01G8ZXKAHRGF7BV0FTQYZ18FH5
Duration: 59m55.713s
Series: 606677
Label names: 26
Postings (unique label pairs): 13190
Postings entries (total label pairs): 7819776

Label pairs most involved in churning:
329487 workload_user_cattle_io_workloadselector=deployment-appops-pushgateway-flink
329487 pod_template_hash=6d695f75cf
329487 kubernetes_namespace=appops
62797 kubernetes_pod_name=pushgateway-flink-6d695f75cf-2xhfh
58524 kubernetes_pod_name=pushgateway-flink-6d695f75cf-8fq58
52495 kubernetes_pod_name=pushgateway-flink-6d695f75cf-d76n7
51544 kubernetes_pod_name=pushgateway-flink-6d695f75cf-z4bjb
22074 kubernetes_pod_name=pushgateway-flink-6d695f75cf-9jgdk
21516 kubernetes_pod_name=pushgateway-flink-6d695f75cf-96hpq
20426 kubernetes_pod_name=pushgateway-flink-6d695f75cf-9twr2
20047 kubernetes_pod_name=pushgateway-flink-6d695f75cf-ngzwz
11819 kubernetes_pod_name=pushgateway-flink-6d695f75cf-8hv9z
8242 kubernetes_pod_name=pushgateway-flink-6d695f75cf-pddrc
3783 __name__=flink_taskmanager_Status_JVM_Memory_Heap_Used
3783 __name__=flink_taskmanager_job_task_checkpointAlignmentTime
3783 __name__=flink_taskmanager_Status_JVM_ClassLoader_ClassesLoaded
3783 __name__=flink_taskmanager_Status_JVM_Threads_Count
3783 __name__=flink_taskmanager_Status_Shuffle_Netty_UsedMemorySegments
3783 __name__=flink_taskmanager_Status_Flink_Memory_Managed_Used
3783 __name__=flink_taskmanager_job_task_Shuffle_Netty_Output_Buffers_outputQueueLength

Label names most involved in churning:
329487 kubernetes_namespace
329487 kubernetes_pod_name
329487 workload_user_cattle_io_workloadselector
329487 __name__
329487 pod_template_hash
329487 job
314352 instance
314050 host
314050 tm_id
185403 task_attempt_num
185403 task_name
185403 subtask_index
185403 job_name
185403 task_id
185403 task_attempt_id
185403 job_id
15134 operator_name
15134 operator_id
66 method
55 quantile

Most common label pairs:
606565 kubernetes_namespace=appops
606565 workload_user_cattle_io_workloadselector=deployment-pushgateway-flink
606565 pod_template_hash=6d695f75cf
87063 kubernetes_pod_name=pushgateway-flink-6d695f75cf-ngzwz
87063 kubernetes_pod_name=pushgateway-flink-6d695f75cf-9jgdk
87062 kubernetes_pod_name=pushgateway-flink-6d695f75cf-96hpq
87062 kubernetes_pod_name=pushgateway-flink-6d695f75cf-9twr2
67920 kubernetes_pod_name=pushgateway-flink-6d695f75cf-2xhfh
62700 kubernetes_pod_name=pushgateway-flink-6d695f75cf-8fq58
54174 kubernetes_pod_name=pushgateway-flink-6d695f75cf-d76n7
53304 kubernetes_pod_name=pushgateway-flink-6d695f75cf-z4bjb
11892 kubernetes_pod_name=pushgateway-flink-6d695f75cf-8hv9z
8325 kubernetes_pod_name=pushgateway-flink-6d695f75cf-pddrc
6965 __name__=flink_taskmanager_job_task_Shuffle_Netty_Input_Buffers_inputFloatingBuffersUsage
6965 __name__=flink_taskmanager_job_task_numBuffersOut
6965 __name__=flink_taskmanager_job_task_isBackPressured
6965 __name__=flink_taskmanager_Status_JVM_ClassLoader_ClassesLoaded
6965 __name__=flink_taskmanager_job_task_numBuffersOutPerSecond
6965 __name__=push_time_seconds
6965 __name__=flink_taskmanager_job_task_buffers_inPoolUsage

Label names with highest cumulative label value length:
50890 task_name
42890 tm_id
34890 operator_id
34890 task_id
34890 task_attempt_id
34890 job_id
21890 job_name
18928 job
18890 operator_name
13890 host
5972 __name__
3890 task_attempt_num
3890 subtask_index
3100 instance
340 kubernetes_pod_name
40 revision
38 handler
35 workload_user_cattle_io_workloadselector
19 quantile
16 method

Highest cardinality labels:
1012 instance
1002 job
1000 operator_id
1000 task_id
1000 task_attempt_num
1000 host
1000 task_name
1000 tm_id
1000 job_id
1000 job_name
1000 task_attempt_id
1000 subtask_index
1000 operator_name
137 __name__
10 kubernetes_pod_name
7 handler
7 quantile
4 method
3 code
2 version

Highest cardinality metric names:
6965 my_batch_job_duration_seconds
6965 flink_taskmanager_Status_Flink_Memory_Managed_Used
6965 flink_taskmanager_Status_JVM_CPU_Load
6965 flink_taskmanager_Status_JVM_CPU_Time
6965 flink_taskmanager_Status_JVM_ClassLoader_ClassesLoaded
6965 flink_taskmanager_Status_JVM_ClassLoader_ClassesUnloaded
6965 flink_taskmanager_Status_JVM_GarbageCollector_PS_MarkSweep_Count
6965 flink_taskmanager_Status_JVM_GarbageCollector_PS_MarkSweep_Time
6965 flink_taskmanager_Status_JVM_GarbageCollector_PS_Scavenge_Count
6965 flink_taskmanager_Status_JVM_GarbageCollector_PS_Scavenge_Time
6965 flink_taskmanager_Status_JVM_Memory_Direct_Count
6965 flink_taskmanager_Status_JVM_Memory_Direct_MemoryUsed
6965 flink_taskmanager_Status_JVM_Memory_Direct_TotalCapacity
6965 flink_taskmanager_Status_JVM_Memory_Heap_Committed
6965 flink_taskmanager_Status_JVM_Memory_Heap_Max
6965 flink_taskmanager_Status_JVM_Memory_Heap_Used
6965 flink_taskmanager_Status_JVM_Memory_Mapped_Count
6965 flink_taskmanager_Status_JVM_Memory_Mapped_MemoryUsed
6965 flink_taskmanager_Status_JVM_Memory_Mapped_TotalCapacity
6965 flink_taskmanager_Status_JVM_Memory_Metaspace_Committed
```

Listing TSDB blocks

Promtool can also list all of the current Prometheus data blocks. The command's parameters are:

```
[root@Erdong-Test ~]# ./promtool tsdb list --help
usage: promtool tsdb list [<flags>] [<db path>]

List tsdb blocks.

Flags:
  -h, --help            Show context-sensitive help (also try --help-long and --help-man).
      --version         Show application version.
      --enable-feature= ...  Comma separated feature names to enable (only PromQL related). See
                        https://prometheus.io/docs/prometheus/latest/feature_flags/ for the options and more details.
  -r, --human-readable  Print human readable values.

Args:
  [<db path>]  Database path (default is data/).
```

Let's run it and see the result; the -r flag is recommended:

```
[root@Erdong-Test ~]# ./promtool tsdb list -r ./prometheus-pushgateway/data/
BLOCK ULID                  MIN TIME                       MAX TIME                       DURATION    NUM SAMPLES  NUM CHUNKS  NUM SERIES  SIZE
01G8XB6J42S6FTDMAPYV30CTR3  2022-07-26 12:00:03 +0000 UTC  2022-07-26 13:00:00 +0000 UTC  59m56.018s  78421200     435860      435412      92MiB217KiB450B
01G8XEMDR374M816NZ2NMNS5AQ  2022-07-26 13:00:03 +0000 UTC  2022-07-26 14:00:00 +0000 UTC  59m56.018s  78421200     435860      435412      92MiB217KiB825B
01G8XJ29C2JVV8PEQFR748C1QR  2022-07-26 14:00:03 +0000 UTC  2022-07-26 15:00:00 +0000 UTC  59m56.018s  78421200     435860      435412      94MiB125KiB158B
01G8XNG502K46CF1960R1FYEAA  2022-07-26 15:00:03 +0000 UTC  2022-07-26 16:00:00 +0000 UTC  59m56.018s  78421200     435860      435412      92MiB388KiB448B
01G8XRY0M2CV8J323VWM393JTZ  2022-07-26 16:00:03 +0000 UTC  2022-07-26 17:00:00 +0000 UTC  59m56.018s  78421200     435860      435412      93MiB978KiB543B
01G8XWBW82NGHBPP6QPZMJ6XAM  2022-07-26 17:00:03 +0000 UTC  2022-07-26 18:00:00 +0000 UTC  59m56.018s  78421200     435860      435412      93MiB979KiB590B
01G8XZSQW2WY7SG70Z40CW2CZV  2022-07-26 18:00:03 +0000 UTC  2022-07-26 19:00:00 +0000 UTC  59m56.018s  78421200     435860      435412      94MiB719KiB630B
```

This shows each block's ULID, the start and end times of its data, the time span it covers, its sample, chunk, and series counts, and its size on disk.

Dump

Promtool can also dump data, specifically sample data out of the TSDB:

```
[root@Erdong-Test ~]# ./promtool tsdb dump --help
usage: promtool tsdb dump [<flags>] [<db path>]

Dump samples from a TSDB.

Flags:
  -h, --help     Show context-sensitive help (also try --help-long and --help-man).
      --version  Show application version.
      --enable-feature= ...  Comma separated feature names to enable (only PromQL related). See
                 https://prometheus.io/docs/prometheus/latest/feature_flags/ for the options and more details.
      --min-time=-9223372036854775808  Minimum timestamp to dump.
      --max-time=9223372036854775807   Maximum timestamp to dump.

Args:
  [<db path>]  Database path (default is data/).
```

Always specify the path to the data directory, and set the minimum and maximum timestamps as well; otherwise every sample in the database is dumped. Let's run it:

```
[root@Erdong-Test ~]# ./promtool tsdb dump --min-time=1658927217000 --max-time=1658930818000 ../prometheus-pushgateway/data/
```

The dumped samples appear to go only to standard output, never to a file, so remember to redirect them into a file yourself when dumping.

Importing blocks from OpenMetrics

Promtool can also import sample data from OpenMetrics input and generate TSDB blocks. The use case for this command is migrating data between monitoring systems or time series databases: first convert the data into the OpenMetrics format, then import the OpenMetrics data into the Prometheus TSDB. The command's parameters are:

```
[root@Erdong-Test ~]# ./promtool tsdb create-blocks-from openmetrics --help
usage: promtool tsdb create-blocks-from openmetrics [
```
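As a sketch of what that import looks like end to end: the input must be in the OpenMetrics text format, where each sample line is `metric{labels} value timestamp` (timestamp in seconds) and the file must end with a `# EOF` line. The metric name and labels below are invented for illustration, and the promtool call is left commented so the snippet can be tried without a running TSDB:

```shell
#!/bin/sh
# Write a tiny OpenMetrics file; the trailing `# EOF` line is mandatory,
# otherwise the import is rejected.
cat > samples.om <<'EOM'
# HELP demo_requests_total Total demo requests.
# TYPE demo_requests_total counter
demo_requests_total{job="demo"} 10 1658927217
demo_requests_total{job="demo"} 25 1658927232
# EOF
EOM

# Import the samples as TSDB blocks under ./imported-data
# (uncomment when promtool is available):
# ./promtool tsdb create-blocks-from openmetrics samples.om ./imported-data
```

The generated blocks can then be copied into a Prometheus data directory and will be picked up like any other block.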
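The --min-time and --max-time flags of `promtool tsdb dump` take Unix timestamps in milliseconds, which are awkward to write by hand. A small sketch for computing them with GNU date and redirecting the dump into a file (the data directory path here is just an example):

```shell
#!/bin/sh
# Convert human-readable UTC times to millisecond Unix timestamps (GNU date).
# These two values match the --min-time/--max-time used in the dump example.
min_ms=$(date -u -d '2022-07-27 13:06:57' +%s%3N)
max_ms=$(date -u -d '2022-07-27 14:06:58' +%s%3N)
echo "min=$min_ms max=$max_ms"   # min=1658927217000 max=1658930818000

# `promtool tsdb dump` writes samples to stdout, so redirect it yourself
# (uncomment when running next to a real data directory):
# ./promtool tsdb dump --min-time="$min_ms" --max-time="$max_ms" ./data/ > samples.dump
```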
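Since `promtool tsdb analyze` inspects one block at a time, surveying a whole data directory means looping over the block directories, which are named by ULID. A rough sketch: it creates two empty demo directories (named after blocks from the listing above) so the loop has something to walk, and only echoes the promtool command so it is safe to dry-run; drop the echo against a real data directory:

```shell
#!/bin/sh
data_dir="./prometheus/data"

# Demo layout: a real data directory already contains ULID-named block dirs.
mkdir -p "$data_dir/01G8XB6J42S6FTDMAPYV30CTR3" "$data_dir/01G8XEMDR374M816NZ2NMNS5AQ"

# Current block ULIDs sort under 01*, which conveniently skips the
# wal/ and chunks_head/ directories of a live Prometheus.
for block in "$data_dir"/01*/; do
  id=$(basename "$block")
  echo ./promtool tsdb analyze --limit=10 "$data_dir" "$id"
done
```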
Author: Erdong, an ops engineer and technology enthusiast.