Category: Monitoring

Cleaning Up Old Zabbix Data

Problem

The disk is running out of space, so some historical data has to be deleted; in the longer term the machine should get a terabyte-class drive.

Solution

Backup

Disks have a price, data is priceless. Before deleting anything, take a full backup in case the data is ever needed again.

/usr/bin/pg_dump -U ${zabbix_user} ${zabbix_dbname} | bzip2 -c > zabbix_pgsql_backup_20170309.dump.bz2
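Before relying on this backup, it is worth confirming that the compressed archive is intact. A minimal sketch using a throwaway file (substitute the real dump filename in practice):

```shell
# bzip2 -t tests archive integrity without extracting anything to disk.
# A throwaway file stands in for the real Zabbix dump here.
echo "-- pretend dump contents" > /tmp/zabbix_demo.dump
bzip2 -f /tmp/zabbix_demo.dump
bzip2 -t /tmp/zabbix_demo.dump.bz2 && echo "backup archive OK"
```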

Measuring Table Sizes

Log in to PostgreSQL:

PGPASSWORD=xxxx  /usr/bin/psql -U zabbix

Query the table sizes:

SELECT *, pg_size_pretty(total_bytes) AS total
    , pg_size_pretty(index_bytes) AS INDEX
    , pg_size_pretty(toast_bytes) AS toast
    , pg_size_pretty(table_bytes) AS TABLE
  FROM (
  SELECT *, total_bytes-index_bytes-COALESCE(toast_bytes,0) AS table_bytes FROM (
      SELECT c.oid,nspname AS table_schema, relname AS TABLE_NAME
              , c.reltuples AS row_estimate
              , pg_total_relation_size(c.oid) AS total_bytes
              , pg_indexes_size(c.oid) AS index_bytes
              , pg_total_relation_size(reltoastrelid) AS toast_bytes
          FROM pg_class c
          LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
          WHERE relkind = 'r'
  ) a
) a;

Deletion Method 1

This method works whether or not the tables are partitioned.

Run the following SQL:

DELETE FROM history WHERE itemid NOT IN (SELECT itemid FROM items WHERE status='0');
DELETE FROM history_uint WHERE itemid NOT IN (SELECT itemid FROM items WHERE status='0');
DELETE FROM history_str WHERE itemid NOT IN (SELECT itemid FROM items WHERE status='0');
DELETE FROM history_text WHERE itemid NOT IN (SELECT itemid FROM items WHERE status='0');
DELETE FROM history_log WHERE itemid NOT IN (SELECT itemid FROM items WHERE status='0');
DELETE FROM trends WHERE itemid NOT IN (SELECT itemid FROM items WHERE status='0');
DELETE FROM trends_uint WHERE itemid NOT IN (SELECT itemid FROM items WHERE status='0');
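Since the seven statements differ only in the table name, they can also be generated from a list and piped into psql; a sketch using the table names above:

```shell
# Emit one DELETE per Zabbix history/trends table; pipe the output
# into psql once it looks right.
for t in history history_uint history_str history_text history_log trends trends_uint; do
  echo "DELETE FROM $t WHERE itemid NOT IN (SELECT itemid FROM items WHERE status='0');"
done
```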

After deleting, manually reclaim the space:

-- trends_uint is the table name
VACUUM (VERBOSE, ANALYZE) trends_uint;

The drawback of this method is that the deletes run for a long time and tend to lock the tables, which can make Zabbix unusable.

Check whether any locks are being waited on:

SELECT relation::regclass, * FROM pg_locks WHERE NOT GRANTED;

For more, see: burner1024/zabbix-sql

Deletion Method 2

This method applies when the tables are partitioned: we drop whole partitions, which is about as fast as deleting a file. That is essentially what happens; PostgreSQL stores each partition as a separate table backed by its own files on disk.

-- Show the schema search path
SHOW search_path;

-- The output above shows that partitions is not on the path by default. Add it; otherwise DROP TABLE will fail with "does not exist"
SET search_path TO "$user",partitions,public;

-- Now we can drop. CASCADE also removes the associated trigger condition, which only decides what data gets inserted into this partition; it is 2017 now, so no 2015 data will ever arrive, and it is safe to drop
drop table history_uint_2015_01_11 CASCADE;

This method is ridiculously fast. Finally, to be quick and dirty about it: using the table sizes gathered above, batch-generate the SQL. Here we drop everything from 2015:

grep _2015_ zabbix_table_size.txt | awk -F'|' '{gsub(/ /,"",$3); print "DROP TABLE " $3 " CASCADE;"}'
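To illustrate on a single hypothetical row of the size report (the gsub strips the padding spaces psql puts around each | field):

```shell
# A fabricated row standing in for one line of zabbix_table_size.txt;
# field 3 holds the table name, surrounded by padding spaces.
echo ' 16384 | public | history_uint_2015_01_11 | 12 MB' \
  | awk -F'|' '{gsub(/ /,"",$3); print "DROP TABLE " $3 " CASCADE;"}'
```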

Other references:

As the reference material below shows, both creating partitions and dropping old ones can be fully automated.

Syrupy

Syrupy is a Python script that regularly takes snapshots of the memory and CPU load of one or more running processes, so as to dynamically build up a profile of their usage of system resources.

https://github.com/jeetsukumaran/Syrupy

Repost: How To Install Munin on an Ubuntu VPS

DigitalOcean's documentation is genuinely excellent.

Reposted from: How To Install Munin on an Ubuntu VPS

spine make error: cannot find -lz

./configure completed without errors, but make failed with:

/usr/bin/ld: cannot find -lz

Open config.log in vi and search for lz; the following error appears:

configure:19207: $? = 0
configure:19210: test -s conftest
configure:19213: $? = 0
configure:19226: result: yes
configure:19240: checking for deflate in -lz
configure:19270: gcc -o conftest -g -O2 conftest.c -lz -lpthread -lm >&5
/usr/bin/ld: cannot find -lz
collect2: ld returned 1 exit status
configure:19276: $? = 1
configure: failed program was:
| /* confdefs.h. */
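Rather than paging through config.log in an editor, grep can jump straight to the failing line. A sketch using a fabricated two-line excerpt (the real config.log sits in spine's source directory):

```shell
# Fabricated stand-in for spine's real config.log
printf 'configure:19240: checking for deflate in -lz\n/usr/bin/ld: cannot find -lz\n' > /tmp/config.log
# -n prints the matching line number, handy for jumping there in an editor
grep -n 'cannot find' /tmp/config.log
```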

Googling "checking for deflate in -lz" shows it is related to zlib; install the development package:
yum -y install zlib-devel

After that, make completes normally.

Installing ndoutils: mysql.h and errmsg.h not found

Installing ndoutils-1.4b9 fails with the following errors:

# ./configure --enable-mysql --with-mysql=/usr/local/mysql
# make
cd ./src && make
make[1]: Entering directory `/root/nagiosddd/ndoutils-1.4b9/src'
gcc -fPIC -g -O2 -I/usr/local/mysql/include/mysql -DHAVE_CONFIG_H -c -o io.o io.c
In file included from io.c:11:
../include/config.h:261:25: error: mysql/mysql.h: No such file or directory
../include/config.h:262:26: error: mysql/errmsg.h: No such file or directory
make[1]: *** [io.o] Error 1
make[1]: Leaving directory `/root/nagiosddd/ndoutils-1.4b9/src'
make: *** [all] Error 2

Solution
# vi include/config.h

#include <mysql/mysql.h>
#include <mysql/errmsg.h>
Change to:
#include </usr/local/mysql/include/mysql/mysql.h>
#include </usr/local/mysql/include/mysql/errmsg.h>
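Hard-coding absolute paths in config.h works but is fragile. An untested alternative sketch is to point the preprocessor at the MySQL include root instead, leaving the original #include lines untouched:

```shell
# Alternative (untested sketch): add the include root via CPPFLAGS so that
# <mysql/mysql.h> resolves without editing config.h. Paths assume the
# MySQL layout used above.
./configure --enable-mysql --with-mysql=/usr/local/mysql CPPFLAGS="-I/usr/local/mysql/include"
make
```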
