
Redis 4.0: A Detailed Look at the MEMORY Command

Preface

In the past, the only way to inspect Redis's memory usage was the info memory command, and it exposed only basic information, so getting a global picture was difficult. Starting with 4.0, Redis provides the MEMORY command, and everything becomes much simpler.

The MEMORY Command

MEMORY has five subcommands, which you can list with MEMORY HELP:

127.0.0.1:7008> MEMORY HELP
1) "MEMORY DOCTOR                        - Outputs memory problems report"
2) "MEMORY USAGE <key> [SAMPLES <count>] - Estimate memory usage of key"
3) "MEMORY STATS                         - Show memory usage details"
4) "MEMORY PURGE                         - Ask the allocator to release memory"
5) "MEMORY MALLOC-STATS                  - Show allocator internal stats"

Starting with MEMORY STATS, let's go through what each subcommand does.

1. MEMORY STATS

First, one thing needs to be clear: Redis's memory usage covers not only all of the key-value data, but also the metadata describing those key-value pairs, plus the overhead of various management features such as persistence and replication. MEMORY STATS gives a much better picture of this overall usage.

For this demonstration we started a Redis instance with persistence enabled and one slave attached, then wrote some random data into it (some of it with expiration times) so readers can get a better feel for Redis's memory usage. Now run MEMORY STATS:

127.0.0.1:7008> MEMORY STATS
 1) "peak.allocated"
 2) (integer) 14834728
 3) "total.allocated"
 4) (integer) 14834800
 5) "startup.allocated"
 6) (integer) 786640
 7) "replication.backlog"
 8) (integer) 1048576
 9) "clients.slaves"
10) (integer) 16866
11) "clients.normal"
12) (integer) 49638
13) "aof.buffer"
14) (integer) 0
15) "db.0"
16) 1) "overhead.hashtable.main"
    2) (integer) 40192
    3) "overhead.hashtable.expires"
    4) (integer) 0
17) "overhead.total"
18) (integer) 1941912
19) "keys.count"
20) (integer) 800
21) "keys.bytes-per-key"
22) (integer) 17560
23) "dataset.bytes"
24) (integer) 12892888
25) "dataset.percentage"
26) "91.776351928710938"
27) "peak.percentage"
28) "100.00048828125"
29) "fragmentation"
30) "1.6665706634521484"

There are 15 items in total, with all memory amounts reported in bytes. Let's look at them one by one:

1. peak.allocated

  • The maximum amount of memory Redis has used since startup, i.e., the peak memory usage.

2. total.allocated

  • The total amount of memory currently in use.

3. startup.allocated

  • The memory Redis used during startup and initialization. Many readers wonder why a freshly started Redis already occupies tens of MB of memory before doing anything.
  • That is because Redis consumes memory beyond the key-value data itself, for example shared objects, replication, persistence, and per-db metadata; the items below cover these in detail.

4. replication.backlog

  • The memory used by the replication backlog, 1MB by default. The backlog only comes into play when a slave reconnects after a dropped connection; replication itself does not depend on it.
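
The backlog size is controlled by the repl-backlog-size directive in redis.conf; the 1048576 bytes in the sample output above is exactly the stock 1MB default:

    repl-backlog-size 1mb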

5. clients.slaves

The read and write buffers of every slave in replication, i.e., the memory used by each slave's output-buffer and querybuf (its output and input buffers respectively). A quick recap of how replication works:

  • Within one event loop, Redis appends every change made to the dataset to each slave's output-buffer, and sends the accumulated changes to the slaves once the event loop ends.
  • This means some replication lag between master and slave is unavoidable. If the connection drops, a reconnecting slave would have to perform a full resynchronization to guarantee consistency, which is clearly inefficient. The backlog was designed for exactly this case: the master caches a window of the recent replication stream in the backlog, and if a reconnecting slave's offset still falls within the backlog, the master only has to send the data after that offset, avoiding the cost of a full sync.

6. clients.normal

  • The read and write buffers of all clients other than slaves.
  • Sometimes clients fail to read replies promptly, and their output-buffers pile up and consume too much memory. This can be capped with the client-output-buffer-limit configuration directive (see the sketch below): once a client exceeds the threshold, Redis proactively closes the connection to free the memory. The same applies to slaves.
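
For reference, these are the stock limits in a 4.0-era redis.conf (per client class: hard limit, soft limit, soft seconds; 0 means unlimited):

    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60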

7. aof.buffer

This item is the sum of the buffer used for AOF persistence and the buffer that accumulates during an aofrewrite; if appendonly is disabled, it stays at 0:

  • Redis does not persist every write immediately; within one event loop it buffers all incoming writes, then flushes them to disk once the loop ends.
  • During an aofrewrite, incremental writes are buffered in memory as well; this buffer is only used while an aofrewrite is running. For details on the aofrewrite mechanism, see the earlier article 《redis4.0之利用管道优化aofrewrite》 (Redis 4.0: optimizing aofrewrite with a pipe).

As you would expect, the size of this item grows in proportion to the write traffic.

8. db.0

The memory used by each Redis db's metadata. Only db0 is in use here, so only db0's usage is printed; when other dbs are used they get corresponding entries. A db's metadata consists of the following three parts:

  • a) A Redis db is essentially one hash table, so the first part is the memory used by that hash table itself (Redis uses chained hashing, and the table holds the head pointers of all the bucket lists);

  • b) every key-value pair is tracked by a dictEntry that records their association, so the metadata includes the memory used by all dictEntry structures in the db;

  • c) Redis uses a redisObject to describe each value's data type (string, list, hash, set, zset), so the space occupied by the redisObject structures is counted as metadata as well.

  1. overhead.hashtable.main: the db's metadata, i.e., the sum of the three parts above:

         hashtable + dictEntry + redisObject

  2. overhead.hashtable.expires:

     Redis does not store a key's expiration time together with its value; expirations live in a separate hash table. Since that expires table only maps keys to expiration times, no redisObject is needed to describe a value, so its metadata has one part fewer:

         hashtable + dictEntry
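
Using the sample output above, the main hash table's metadata works out to roughly 50 bytes per key:

    overhead.hashtable.main / keys.count = 40192 / 800 ≈ 50 bytes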

9. overhead.total

The sum of items 3 through 8: startup.allocated + replication.backlog + clients.slaves + clients.normal + aof.buffer + dbx
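
Plugging in the sample output above reproduces the reported value:

    786640 + 1048576 + 16866 + 49638 + 0 + 40192 = 1941912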

10. dataset.bytes

The memory used by the data itself, i.e., total.allocated - overhead.total: the current total usage minus all of the management overhead.
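
Again with the sample output:

    14834800 - 1941912 = 12892888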

11. dataset.percentage

The share of memory occupied by the data. Note that total.allocated is not used directly as the denominator; the memory consumed at startup is excluded first. The formula is:

    100 * dataset.bytes / (total.allocated - startup.allocated)
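
With the sample numbers this reproduces the reported 91.776...%:

    100 * 12892888 / (14834800 - 786640) ≈ 91.776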

12. keys.count

The total number of keys currently stored in Redis.

13. keys.bytes-per-key

The average memory footprint per key. Intuitively this would be dataset.bytes divided by keys.count, but that is not what Redis does: the management overhead is amortized across the keys as well. The formula is:

    (total.allocated - startup.allocated) / keys.count
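
With the sample numbers:

    (14834800 - 786640) / 800 = 17560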

14. peak.percentage

The ratio of the current memory usage to the historical peak, i.e., 100 * total.allocated / peak.allocated.

15. fragmentation

The memory fragmentation ratio, i.e., the resident set size (RSS) the operating system holds for the process divided by the memory Redis has logically allocated.

2. MEMORY USAGE

Every Redis user surely wants full visibility into the memory used by each key-value pair, yet before 4.0 Redis offered no direct way to measure it. Starting with 4.0, the MEMORY command finally provides this feature.

First, the syntax: MEMORY USAGE <key> [SAMPLES <count>]

127.0.0.1:7007> MEMORY USAGE dr:hello_biglist_00000011 samples 0
(integer) 3065
127.0.0.1:7007> MEMORY USAGE dr:hello_biglist_00000011 samples 0
(integer) 3065
127.0.0.1:7007> MEMORY USAGE dr:hello_biglist_00000011 samples 5
(integer) 3065

The parameters are few, and the name says it all: the command estimates the memory usage of the given key. SAMPLES is optional and defaults to 5; passing 0 makes Redis examine every element. Taking hash as an example, here is how it works:

  1. First, similar to overhead.hashtable.main in the previous section, the hash's metadata memory is computed, including the size of the hash table itself and the memory occupied by all of its dictEntry structures.
  2. Unlike overhead.hashtable.main, the key and value in each dictEntry are plain strings, so there is no extra redisObject cost per field. When estimating the actual data memory, Redis does not walk every field; it samples instead: it randomly picks SAMPLES field-value pairs, computes their average memory footprint, and multiplies that by the total number of pairs to get the estimate. Computing the exact usage would require visiting every element, which can block Redis when the key is large, so set SAMPLES sensibly. Other data structures are estimated much like hash, so we won't repeat the details; see the sketch below.
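
To illustrate the sampling idea, here is a minimal sketch in Python (a simplification, not Redis's actual C implementation; the function and its inputs are invented for the example):

    import random

    def estimate_memory(entry_sizes, samples=5):
        # entry_sizes: hypothetical per-field byte costs
        # (key + value + dictEntry overhead for each field)
        n = len(entry_sizes)
        if samples == 0 or samples >= n:
            return sum(entry_sizes)  # full scan: exact, but O(n) and can block
        picked = random.sample(entry_sizes, samples)  # random subset of fields
        avg = sum(picked) / samples                   # average cost per field
        return int(avg * n)                           # extrapolate to the whole key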

3. MEMORY DOCTOR

This subcommand offers the author's advice on your instance's memory usage; depending on the running state you will get different diagnoses:

First, the healthy cases

  • Running fine:

    Hi Sam, I can't find any memory issue in your instance.
    I can only account for what occurs on this base.
  • The instance holds little or no data, so the issue detector cannot run yet:

    Hi Sam, this instance is empty or is using very little memory, 
    my issues detector can't be used in these conditions. Please, 
    leave for your mission on Earth and fill it with some data. The 
    new Sam and I will be back to our programming as soon as I 
    finished rebooting.

The following results call for attention

  • Peak memory usage was more than 1.5 times the current usage, so the fragmentation ratio may look high; keep an eye on it:

    Peak memory: In the past this instance used more than 150% the 
    memory that is currently using. The allocator is normally not 
    able to release memory after a peak, so you can expect to see a 
    big fragmentation ratio, however this is actually harmless and 
    is only due to the memory peak, and if the Redis instance 
    Resident Set Size (RSS) is currently bigger than expected, the 
    memory will be used as soon as you fill the Redis instance with 
    more data. If the memory peak was only occasional and you want 
    to try to reclaim memory, please try the MEMORY PURGE command, 
    otherwise the only other option is to shutdown and restart the 
    instance.
  • The fragmentation ratio is too high, exceeding 1.4; keep an eye on it:

    High fragmentation: This instance has a memory fragmentation 
    greater than 1.4 (this means that the Resident Set Size of the 
    Redis process is much larger than the sum of the logical 
    allocations Redis performed). This problem is usually due either 
    to a large peak memory (check if there is a peak memory entry 
    above in the report) or may result from a workload that causes 
    the allocator to fragment memory a lot. If the problem is a 
    large peak memory, then there is no issue. Otherwise, make sure 
    you are using the Jemalloc allocator and not the default libc 
    malloc.
  • The slave output buffers average more than 10MB each; possible causes include heavy write traffic on the master, insufficient network bandwidth for replication, or slaves that process the stream too slowly:

    Big slave buffers: The slave output buffers in this instance are 
    greater than 10MB for each slave (on average). This likely means 
    that there is some slave instance that is struggling receiving 
    data, either because it is too slow or because of networking 
    issues. As a result, data piles on the master output buffers. 
    Please try to identify what slave is not receiving data 
    correctly and why. You can use the INFO output in order to check 
    the slaves delays and the CLIENT LIST command to check the 
    output buffers of each slave.
  • Normal client output buffers average more than 200KB each, possibly caused by improper pipeline usage or Pub/Sub clients not consuming messages fast enough:

    Big client buffers: The clients output buffers in this instance 
    are greater than 200K per client (on average). This may result 
    from different causes, like Pub/Sub clients subscribed to 
    channels but not receiving data fast enough, so that data piles 
    on the Redis instance output buffer, or clients sending commands 
    with large replies or very large sequences of commands in the 
    same pipeline. Please use the CLIENT LIST command in order to 
    investigate the issue if it causes problems in your instance, or 
    to understand better why certain clients are using a big amount 
    of memory.

For the complete set of possible outputs, see the implementation in the object.c source file. Compared with 4.0, version 5.0 added a few more diagnoses:

  • The allocator's external fragmentation is above 1.1:

    High allocator fragmentation: This instance has an allocator 
    external fragmentation greater than 1.1. This problem is 
    usually due either to a large peak memory (check if there is a 
    peak memory entry above in the report) or may result from a 
    workload that causes the allocator to fragment memory a lot. 
    You can try enabling 'activedefrag' config option.
  • The allocator's RSS overhead ratio is above 1.1:

    High allocator RSS overhead: This instance has an RSS memory 
    overhead is greater than 1.1 (this means that the Resident Set 
    Size of the allocator is much larger than the sum what the 
    allocator actually holds). This problem is usually due to a 
    large peak memory (check if there is a peak memory entry above 
    in the report), you can try the MEMORY PURGE command to 
    reclaim it.
  • The process RSS overhead ratio is above 1.1:

    High process RSS overhead: This instance has non-allocator RSS 
    memory overhead is greater than 1.1 (this means that the 
    Resident Set Size of the Redis process is much larger than the 
    RSS the allocator holds). This problem may be due to Lua 
    scripts or Modules.
  • More than 1000 cached scripts:

    Many scripts: There seem to be many cached scripts in this 
    instance (more than 1000). This may be because scripts are 
    generated and `EVAL`ed, instead of being parameterized (with 
    KEYS and ARGV), `SCRIPT LOAD`ed and `EVALSHA`ed. Unless 
    `SCRIPT FLUSH` is called periodically, the scripts' caches may 
    end up consuming most of your memory.

4. MEMORY MALLOC-STATS

Prints the memory allocator's internal state; only useful when Redis is built with jemalloc.

127.0.0.1:7008> MEMORY MALLOC-STATS
___ Begin jemalloc statistics ___
Version: 4.0.3-0-ge9192eacf8935e29fc62fddc2701f7942b1cc02c
Assertions disabled
Run-time option settings:
  opt.abort: false
  opt.lg_chunk: 21
  opt.dss: "secondary"
  opt.narenas: 1
  opt.lg_dirty_mult: 3 (arenas.lg_dirty_mult: 3)
  opt.stats_print: false
  opt.junk: "false"
  opt.quarantine: 0
  opt.redzone: false
  opt.zero: false
  opt.tcache: true
  opt.lg_tcache_max: 15
CPUs: 1
Arenas: 1
Pointer size: 8
Quantum size: 8
Page size: 4096
Min active:dirty page ratio per arena: 8:1
Maximum thread-cached size class: 32768
Chunk size: 2097152 (2^21)
Allocated: 14885856, active: 16277504, metadata: 1614792, resident: 18382848, mapped: 20971520
Current active ceiling: 16777216

arenas[0]:
assigned threads: 1
dss allocation precedence: secondary
min active:dirty page ratio: 8:1
dirty pages: 3974:132 active:dirty, 0 sweeps, 0 madvises, 0 purged
                            allocated      nmalloc      ndalloc    nrequests
small:                        9864160      1870748      1343297      2255398
large:                        5021696         7938         7728         7939
huge:                               0            0            0            0
total:                       14885856      1878686      1351025      2263337
active:                      16277504
mapped:                      18874368
metadata: mapped: 479232, allocated: 50952
bins:           size ind    allocated      nmalloc      ndalloc    nrequests      curregs      curruns regs pgs  util       nfills     nflushes      newruns       reruns
                   8   0      2461920       308470          730       310415       307740          602  512   1 0.998        45268          197          603           91
                  16   1       176240        11763          748        12136        11015           44  256   1 0.977          305          130           46            2
                  24   2      4960704       719462       512766       955290       206696          414  512   3 0.975         8173         5145          458         2210
                  32   3        25792       516628       515822       643427          806           48  128   1 0.131         6693         5221          628         3953
                  40   4         8320          583          375          913          208            1  512   5 0.406          163          166            1            0
                  48   5         1728       304732       304696       319752           36            2  256   3 0.070         4602         3105         1011          384
                  56   6          896          333          317          471           16            1  512   7 0.031          104          107            1            0
                  64   7          192          217          214          704            3            1   64   1 0.046          116          108            1            0
                  80   8          240          388          385          599            3            1  256   5 0.011          217          189            1            0
                  96   9        38304         1036          637         1103          399            4  128   3 0.779          202          189            7           12
                 112  10            0          454          454          401            0            0  256   7 1              118          107          101            0
                 128  11            0          281          281          674            0            0   32   1 1              214          187          181            2
                 160  12            0          287          287          411            0            0  128   5 1              184          186          180            0
                 192  13            0          246          246          429            0            0   64   3 1              171          176          171            0
                 224  14            0          284          284          601            0            0  128   7 1              183          178           12            0
                 256  15            0          197          197          605            0            0   16   1 1              175          178          173            0
                 320  16         5120          252          236          416           16            1   64   5 0.250          184          189            1            0
                 384  17          384          195          194          427            1            1   32   3 0.031          158          162            1            0
                 448  18            0          246          246          401            0            0   64   7 1              178          183          178            0
                 512  19          512          189          188          605            1            1    8   1 0.125          180          182            2            1
                 640  20            0          214          214          401            0            0   32   5 1              179          183          179            0
                 768  21        76800          372          272          400          100            7   16   3 0.892          179          181           23            5
                 896  22         2688          216          213          303            3            2   32   7 0.046          179          183            2            1
                1024  23         2048          190          188          504            2            1    4   1 0.500          181          182            3            1
                1280  24       128000          412          312          401          100            7   16   5 0.892          199          201           27            8
                1536  25            0          207          207          301            0            0    8   3 1              198          200          199            0
                1792  26            0          208          208          300            0            0   16   7 1              187          190          187            0
                2048  27         4096          172          170          503            2            1    2   1 1              163          165          140            0
                2560  28            0          188          188          300            0            0    8   5 1              179          181          180            0
                3072  29       307200          366          266          300          100           25    4   3 1              179          180           91           21
                3584  30            0          188          188          200            0            0    8   7 1              179          181          180            0
                4096  31            0          169          169          401            0            0    1   1 1              160          161          169            0
                5120  32        10240          193          191          202            2            1    4   5 0.500          182          184            3            1
                6144  33         6144          203          202          201            1            1    2   3 0.500          193          195            6            1
                7168  34            0          208          208          200            0            0    4   7 1              199          201          201            0
                8192  35      1646592          630          429          401          201          201    1   2 1              275          200          630            0
               10240  36            0          143          143          100            0            0    2   5 1              100          102          104            0
               12288  37            0          113          113          100            0            0    1   3 1              100          102          113            0
               14336  38            0          113          113          100            0            0    2   7 1              100          102          104            0
large:          size ind    allocated      nmalloc      ndalloc    nrequests      curruns
               16384  39      3276800          300          100          301          200
               20480  40        81920          104          100          104            4
               24576  41            0          100          100          100            0
               28672  42            0          100          100          100            0
               32768  43        32768          101          100          101            1
               40960  44        40960         6829         6828         6829            1
               49152  45            0          100          100          100            0
               57344  46            0          100          100          100            0
               65536  47            0          100          100          100            0
               81920  48        81920          101          100          101            1
                     ---
              131072  51       131072            1            0            1            1
                     ---
              327680  56       327680            1            0            1            1
                     ---
             1048576  63      1048576            1            0            1            1
                     ---
huge:           size ind    allocated      nmalloc      ndalloc    nrequests   curhchunks
                     ---
--- End jemalloc statistics ---
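
The headline line is the most useful part: allocated is what Redis requested from jemalloc, active counts the pages backing those allocations, resident approximates the RSS jemalloc holds, and mapped is the total mapped address space. Dividing resident by allocated gives a rough allocator-level view of fragmentation; with the numbers above:

    18382848 / 14885856 ≈ 1.23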

5. MEMORY PURGE

Asks the allocator to release memory; again this only works with jemalloc. It reclaims memory on demand (note that this command blocks the main process). The effect is shown below: as you can see, both the fragmented memory and the fragmentation ratio drop:

127.0.0.1:7007> info memory
# Memory
used_memory:13771144
used_memory_human:13.13M
used_memory_rss:27230208
used_memory_rss_human:25.97M
allocator_frag_ratio:1.11
allocator_frag_bytes:1492936
mem_fragmentation_ratio:1.98
mem_fragmentation_bytes:13500080
127.0.0.1:7007> memory purge
OK
127.0.0.1:7007> info memory
# Memory
used_memory:13771144
used_memory_human:13.13M
used_memory_rss:23994368
used_memory_rss_human:22.88M
allocator_frag_ratio:1.11
allocator_frag_bytes:1510856
mem_fragmentation_ratio:1.75
mem_fragmentation_bytes:10264240
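
Comparing the two snapshots: used_memory is unchanged, while used_memory_rss dropped by 27230208 - 23994368 = 3235840 bytes (about 3MB) that the allocator returned to the OS, and mem_fragmentation_ratio fell from 1.98 to 1.75 accordingly.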