==========================
| 2 | Memory Resource Controller |
| 3 | ========================== |
| 4 | |
| 5 | NOTE: |
  This document is hopelessly outdated and is in need of a complete
  rewrite. It still contains useful information, so we are keeping it
  here, but make sure to check the current code if you need a deeper
  understanding.
| 10 | |
| 11 | NOTE: |
| 12 | The Memory Resource Controller has generically been referred to as the |
| 13 | memory controller in this document. Do not confuse memory controller |
| 14 | used here with the memory controller that is used in hardware. |
| 15 | |
| 16 | (For editors) In this document: |
  When we mention a cgroup (cgroupfs directory) with the memory controller
  enabled, we call it a "memory cgroup". In git logs and source code you will
  see that patch titles and function names tend to use "memcg".
  In this document, we avoid that abbreviation.
| 21 | |
| 22 | Benefits and Purpose of the memory controller |
| 23 | ============================================= |
| 24 | |
| 25 | The memory controller isolates the memory behaviour of a group of tasks |
| 26 | from the rest of the system. The article on LWN [12] mentions some probable |
| 27 | uses of the memory controller. The memory controller can be used to |
| 28 | |
| 29 | a. Isolate an application or a group of applications |
| 30 | Memory-hungry applications can be isolated and limited to a smaller |
| 31 | amount of memory. |
| 32 | b. Create a cgroup with a limited amount of memory; this can be used |
| 33 | as a good alternative to booting with mem=XXXX. |
| 34 | c. Virtualization solutions can control the amount of memory they want |
| 35 | to assign to a virtual machine instance. |
| 36 | d. A CD/DVD burner could control the amount of memory used by the |
| 37 | rest of the system to ensure that burning does not fail due to lack |
| 38 | of available memory. |
| 39 | e. There are several other use cases; find one or use the controller just |
| 40 | for fun (to learn and hack on the VM subsystem). |
| 41 | |
Current Status: linux-2.6.34-mmotm (development version of April 2010)
| 43 | |
| 44 | Features: |
| 45 | |
- accounting of anonymous pages, file caches and swap caches, and limiting their usage.
| 47 | - pages are linked to per-memcg LRU exclusively, and there is no global LRU. |
| 48 | - optionally, memory+swap usage can be accounted and limited. |
| 49 | - hierarchical accounting |
| 50 | - soft limit |
- moving (recharging) a task's charges when the task migrates is selectable.
| 52 | - usage threshold notifier |
| 53 | - memory pressure notifier |
| 54 | - oom-killer disable knob and oom-notifier |
| 55 | - Root cgroup has no limit controls. |
| 56 | |
Kernel memory support is a work in progress, and the current version provides
basic functionality. (See Section 2.7.)
| 59 | |
| 60 | Brief summary of control files. |
| 61 | |
| 62 | ==================================== ========================================== |
| 63 | tasks attach a task(thread) and show list of |
| 64 | threads |
| 65 | cgroup.procs show list of processes |
| 66 | cgroup.event_control an interface for event_fd() |
| 67 | memory.usage_in_bytes show current usage for memory |
| 68 | (See 5.5 for details) |
| 69 | memory.memsw.usage_in_bytes show current usage for memory+Swap |
| 70 | (See 5.5 for details) |
| 71 | memory.limit_in_bytes set/show limit of memory usage |
| 72 | memory.memsw.limit_in_bytes set/show limit of memory+Swap usage |
| 73 | memory.failcnt show the number of memory usage hits limits |
| 74 | memory.memsw.failcnt show the number of memory+Swap hits limits |
| 75 | memory.max_usage_in_bytes show max memory usage recorded |
| 76 | memory.memsw.max_usage_in_bytes show max memory+Swap usage recorded |
| 77 | memory.soft_limit_in_bytes set/show soft limit of memory usage |
| 78 | memory.stat show various statistics |
| 79 | memory.use_hierarchy set/show hierarchical account enabled |
| 80 | memory.force_empty trigger forced page reclaim |
| 81 | memory.pressure_level set memory pressure notifications |
| 82 | memory.swappiness set/show swappiness parameter of vmscan |
| 83 | (See sysctl's vm.swappiness) |
| 84 | memory.move_charge_at_immigrate set/show controls of moving charges |
| 85 | memory.oom_control set/show oom controls. |
| 86 | memory.numa_stat show the number of memory usage per numa |
| 87 | node |
| 88 | memory.kmem.limit_in_bytes set/show hard limit for kernel memory |
| 89 | This knob is deprecated and shouldn't be |
| 90 | used. It is planned that this be removed in |
| 91 | the foreseeable future. |
| 92 | memory.kmem.usage_in_bytes show current kernel memory allocation |
| 93 | memory.kmem.failcnt show the number of kernel memory usage |
| 94 | hits limits |
| 95 | memory.kmem.max_usage_in_bytes show max kernel memory usage recorded |
| 96 | |
| 97 | memory.kmem.tcp.limit_in_bytes set/show hard limit for tcp buf memory |
| 98 | memory.kmem.tcp.usage_in_bytes show current tcp buf memory allocation |
| 99 | memory.kmem.tcp.failcnt show the number of tcp buf memory usage |
| 100 | hits limits |
| 101 | memory.kmem.tcp.max_usage_in_bytes show max tcp buf memory usage recorded |
| 102 | ==================================== ========================================== |
| 103 | |
| 104 | 1. History |
| 105 | ========== |
| 106 | |
| 107 | The memory controller has a long history. A request for comments for the memory |
| 108 | controller was posted by Balbir Singh [1]. At the time the RFC was posted |
| 109 | there were several implementations for memory control. The goal of the |
| 110 | RFC was to build consensus and agreement for the minimal features required |
| 111 | for memory control. The first RSS controller was posted by Balbir Singh[2] |
| 112 | in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the |
| 113 | RSS controller. At OLS, at the resource management BoF, everyone suggested |
| 114 | that we handle both page cache and RSS together. Another request was raised |
| 115 | to allow user space handling of OOM. The current memory controller is |
| 116 | at version 6; it combines both mapped (RSS) and unmapped Page |
| 117 | Cache Control [11]. |
| 118 | |
| 119 | 2. Memory Control |
| 120 | ================= |
| 121 | |
| 122 | Memory is a unique resource in the sense that it is present in a limited |
| 123 | amount. If a task requires a lot of CPU processing, the task can spread |
| 124 | its processing over a period of hours, days, months or years, but with |
| 125 | memory, the same physical memory needs to be reused to accomplish the task. |
| 126 | |
| 127 | The memory controller implementation has been divided into phases. These |
| 128 | are: |
| 129 | |
| 130 | 1. Memory controller |
| 131 | 2. mlock(2) controller |
| 132 | 3. Kernel user memory accounting and slab control |
| 133 | 4. user mappings length controller |
| 134 | |
| 135 | The memory controller is the first controller developed. |
| 136 | |
| 137 | 2.1. Design |
| 138 | ----------- |
| 139 | |
| 140 | The core of the design is a counter called the page_counter. The |
| 141 | page_counter tracks the current memory usage and limit of the group of |
| 142 | processes associated with the controller. Each cgroup has a memory controller |
| 143 | specific data structure (mem_cgroup) associated with it. |
| 144 | |
| 145 | 2.2. Accounting |
| 146 | --------------- |
| 147 | |
| 148 | :: |
| 149 | |
| 150 | +--------------------+ |
| 151 | | mem_cgroup | |
| 152 | | (page_counter) | |
| 153 | +--------------------+ |
| 154 | / ^ \ |
| 155 | / | \ |
| 156 | +---------------+ | +---------------+ |
| 157 | | mm_struct | |.... | mm_struct | |
| 158 | | | | | | |
| 159 | +---------------+ | +---------------+ |
| 160 | | |
| 161 | + --------------+ |
| 162 | | |
| 163 | +---------------+ +------+--------+ |
| 164 | | page +----------> page_cgroup| |
| 165 | | | | | |
| 166 | +---------------+ +---------------+ |
| 167 | |
| 168 | (Figure 1: Hierarchy of Accounting) |
| 169 | |
| 170 | |
| 171 | Figure 1 shows the important aspects of the controller |
| 172 | |
| 173 | 1. Accounting happens per cgroup |
| 174 | 2. Each mm_struct knows about which cgroup it belongs to |
| 175 | 3. Each page has a pointer to the page_cgroup, which in turn knows the |
| 176 | cgroup it belongs to |
| 177 | |
| 178 | The accounting is done as follows: mem_cgroup_charge_common() is invoked to |
| 179 | set up the necessary data structures and check if the cgroup that is being |
| 180 | charged is over its limit. If it is, then reclaim is invoked on the cgroup. |
| 181 | More details can be found in the reclaim section of this document. |
If everything goes well, a page metadata structure called page_cgroup is
updated. page_cgroup has its own LRU within the cgroup.
(*) The page_cgroup structure is allocated at boot/memory-hotplug time.
| 185 | |
| 186 | 2.2.1 Accounting details |
| 187 | ------------------------ |
| 188 | |
| 189 | All mapped anon pages (RSS) and cache pages (Page Cache) are accounted. |
| 190 | Some pages which are never reclaimable and will not be on the LRU |
| 191 | are not accounted. We just account pages under usual VM management. |
| 192 | |
RSS pages are accounted at page fault time unless they've already been accounted
for earlier. A file page is accounted as Page Cache when it is inserted into
the inode's radix-tree. While it is mapped into process page tables,
duplicate accounting is carefully avoided.
| 197 | |
| 198 | An RSS page is unaccounted when it's fully unmapped. A PageCache page is |
| 199 | unaccounted when it's removed from radix-tree. Even if RSS pages are fully |
| 200 | unmapped (by kswapd), they may exist as SwapCache in the system until they |
| 201 | are really freed. Such SwapCaches are also accounted. |
| 202 | A swapped-in page is not accounted until it's mapped. |
| 203 | |
Note: The kernel does swapin-readahead and reads multiple swap entries at once.
This means swapped-in pages may belong to tasks other than the one that
caused the page fault, so we avoid accounting at swap-in I/O time.
| 207 | |
| 208 | At page migration, accounting information is kept. |
| 209 | |
Note: we only account pages on the LRU because our purpose is to control the
amount of used pages; pages not on the LRU tend to be out of the VM's control.
| 212 | |
| 213 | 2.3 Shared Page Accounting |
| 214 | -------------------------- |
| 215 | |
| 216 | Shared pages are accounted on the basis of the first touch approach. The |
| 217 | cgroup that first touches a page is accounted for the page. The principle |
| 218 | behind this approach is that a cgroup that aggressively uses a shared |
| 219 | page will eventually get charged for it (once it is uncharged from |
| 220 | the cgroup that brought it in -- this will happen on memory pressure). |
| 221 | |
| 222 | But see section 8.2: when moving a task to another cgroup, its pages may |
| 223 | be recharged to the new cgroup, if move_charge_at_immigrate has been chosen. |
| 224 | |
Exception: when CONFIG_MEMCG_SWAP is not used.
When you run swapoff and thereby force swapped-out pages of shmem (tmpfs)
back into memory, the charges for those pages are accounted against the
caller of swapoff rather than against the users of the shmem.
| 229 | |
| 230 | 2.4 Swap Extension (CONFIG_MEMCG_SWAP) |
| 231 | -------------------------------------- |
| 232 | |
Swap Extension allows charges for swap to be recorded. A swapped-in page is
charged back to the original page allocator if possible.
| 235 | |
When swap is accounted, the following files are added.
| 237 | |
| 238 | - memory.memsw.usage_in_bytes. |
| 239 | - memory.memsw.limit_in_bytes. |
| 240 | |
| 241 | memsw means memory+swap. Usage of memory+swap is limited by |
| 242 | memsw.limit_in_bytes. |
| 243 | |
Example: assume a system with 4G of swap. A task which (by mistake) allocates
6G of memory under a 2G memory limit will use up all of the swap.
In this case, setting memsw.limit_in_bytes=3G prevents that misuse of swap.
By using the memsw limit, you can avoid a system-wide OOM caused by swap
shortage.
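
For the example above, the limits could be set as follows (assuming the
shell's current directory is the cgroup in question)::

    # echo 2G > memory.limit_in_bytes
    # echo 3G > memory.memsw.limit_in_bytes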
| 249 | |
**Why 'memory+swap' rather than swap?**
| 251 | |
The global LRU (kswapd) can swap out arbitrary pages. Swapping a page out
only moves its charge from memory to swap, so there is no change in the usage of
memory+swap. In other words, when we want to limit swap usage without
affecting the global LRU, a memory+swap limit is better than just limiting swap
from an OS point of view.
| 257 | |
| 258 | **What happens when a cgroup hits memory.memsw.limit_in_bytes** |
| 259 | |
When a cgroup hits memory.memsw.limit_in_bytes, it is useless to do swap-out
within this cgroup. Swap-out is therefore not done by the cgroup's own reclaim
routine and file caches are dropped instead. But, as mentioned above, the global
LRU can still swap memory out of the cgroup to keep the system's memory
management state sane; this cannot be forbidden by the cgroup.
| 265 | |
| 266 | 2.5 Reclaim |
| 267 | ----------- |
| 268 | |
| 269 | Each cgroup maintains a per cgroup LRU which has the same structure as |
| 270 | global VM. When a cgroup goes over its limit, we first try |
| 271 | to reclaim memory from the cgroup so as to make space for the new |
| 272 | pages that the cgroup has touched. If the reclaim is unsuccessful, |
| 273 | an OOM routine is invoked to select and kill the bulkiest task in the |
| 274 | cgroup. (See 10. OOM Control below.) |
| 275 | |
| 276 | The reclaim algorithm has not been modified for cgroups, except that |
| 277 | pages that are selected for reclaiming come from the per-cgroup LRU |
| 278 | list. |
| 279 | |
| 280 | NOTE: |
| 281 | Reclaim does not work for the root cgroup, since we cannot set any |
| 282 | limits on the root cgroup. |
| 283 | |
| 284 | Note2: |
| 285 | When panic_on_oom is set to "2", the whole system will panic. |
| 286 | |
When the OOM event notifier is registered, an event will be delivered.
(See the oom_control section.)
| 289 | |
| 290 | 2.6 Locking |
| 291 | ----------- |
| 292 | |
| 293 | lock_page_cgroup()/unlock_page_cgroup() should not be called under |
| 294 | the i_pages lock. |
| 295 | |
The other lock ordering is as follows:
| 297 | |
| 298 | PG_locked. |
| 299 | mm->page_table_lock |
| 300 | pgdat->lru_lock |
| 301 | lock_page_cgroup. |
| 302 | |
| 303 | In many cases, just lock_page_cgroup() is called. |
| 304 | |
The per-zone-per-cgroup LRU (the cgroup's private LRU) is guarded only by
pgdat->lru_lock; it has no lock of its own.
| 307 | |
| 308 | 2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM) |
| 309 | ----------------------------------------------- |
| 310 | |
| 311 | With the Kernel memory extension, the Memory Controller is able to limit |
| 312 | the amount of kernel memory used by the system. Kernel memory is fundamentally |
different from user memory, since it can't be swapped out, which makes it
| 314 | possible to DoS the system by consuming too much of this precious resource. |
| 315 | |
| 316 | Kernel memory accounting is enabled for all memory cgroups by default. But |
| 317 | it can be disabled system-wide by passing cgroup.memory=nokmem to the kernel |
| 318 | at boot time. In this case, kernel memory will not be accounted at all. |
| 319 | |
| 320 | Kernel memory limits are not imposed for the root cgroup. Usage for the root |
| 321 | cgroup may or may not be accounted. The memory used is accumulated into |
| 322 | memory.kmem.usage_in_bytes, or in a separate counter when it makes sense. |
| 323 | (currently only for tcp). |
| 324 | |
| 325 | The main "kmem" counter is fed into the main counter, so kmem charges will |
| 326 | also be visible from the user counter. |
| 327 | |
| 328 | Currently no soft limit is implemented for kernel memory. It is future work |
| 329 | to trigger slab reclaim when those limits are reached. |
| 330 | |
| 331 | 2.7.1 Current Kernel Memory resources accounted |
| 332 | ----------------------------------------------- |
| 333 | |
| 334 | stack pages: |
| 335 | every process consumes some stack pages. By accounting into |
| 336 | kernel memory, we prevent new processes from being created when the kernel |
| 337 | memory usage is too high. |
| 338 | |
| 339 | slab pages: |
pages allocated by the SLAB or SLUB allocator are tracked. A copy
of each kmem_cache is created the first time the cache is touched
from inside the memcg. The creation is done lazily, so some objects can still be
| 343 | skipped while the cache is being created. All objects in a slab page should |
| 344 | belong to the same memcg. This only fails to hold when a task is migrated to a |
| 345 | different memcg during the page allocation by the cache. |
| 346 | |
| 347 | sockets memory pressure: |
some socket protocols have memory pressure
| 349 | thresholds. The Memory Controller allows them to be controlled individually |
| 350 | per cgroup, instead of globally. |
| 351 | |
| 352 | tcp memory pressure: |
| 353 | sockets memory pressure for the tcp protocol. |
| 354 | |
| 355 | 2.7.2 Common use cases |
| 356 | ---------------------- |
| 357 | |
| 358 | Because the "kmem" counter is fed to the main user counter, kernel memory can |
| 359 | never be limited completely independently of user memory. Say "U" is the user |
| 360 | limit, and "K" the kernel limit. There are three possible ways limits can be |
| 361 | set: |
| 362 | |
| 363 | U != 0, K = unlimited: |
| 364 | This is the standard memcg limitation mechanism already present before kmem |
| 365 | accounting. Kernel memory is completely ignored. |
| 366 | |
| 367 | U != 0, K < U: |
| 368 | Kernel memory is a subset of the user memory. This setup is useful in |
deployments where the total amount of memory per-cgroup is overcommitted.
Overcommitting kernel memory limits is definitely not recommended, since the
box can still run out of non-reclaimable memory.
In this case, the admin could set up K so that the sum of all groups is
never greater than the total memory, and freely set U at the cost of
QoS.
| 375 | |
| 376 | WARNING: |
| 377 | In the current implementation, memory reclaim will NOT be |
| 378 | triggered for a cgroup when it hits K while staying below U, which makes |
| 379 | this setup impractical. |
| 380 | |
| 381 | U != 0, K >= U: |
Kernel memory charges are also fed into the user counter, and reclaim is
triggered for the cgroup for both kinds of memory. This setup gives the
admin a unified view of memory, and it is also useful for people who just
want to track kernel memory usage.
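
A minimal sketch of the last setup (K = U), assuming the shell's current
directory is the memory cgroup being configured::

    # echo 500M > memory.limit_in_bytes
    # echo 500M > memory.kmem.limit_in_bytes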
| 386 | |
| 387 | 3. User Interface |
| 388 | ================= |
| 389 | |
| 390 | 3.0. Configuration |
| 391 | ------------------ |
| 392 | |
| 393 | a. Enable CONFIG_CGROUPS |
| 394 | b. Enable CONFIG_MEMCG |
| 395 | c. Enable CONFIG_MEMCG_SWAP (to use swap extension) |
| 396 | d. Enable CONFIG_MEMCG_KMEM (to use kmem extension) |
| 397 | |
| 398 | 3.1. Prepare the cgroups (see cgroups.txt, Why are cgroups needed?) |
| 399 | ------------------------------------------------------------------- |
| 400 | |
| 401 | :: |
| 402 | |
| 403 | # mount -t tmpfs none /sys/fs/cgroup |
| 404 | # mkdir /sys/fs/cgroup/memory |
| 405 | # mount -t cgroup none /sys/fs/cgroup/memory -o memory |
| 406 | |
| 407 | 3.2. Make the new group and move bash into it:: |
| 408 | |
| 409 | # mkdir /sys/fs/cgroup/memory/0 |
| 410 | # echo $$ > /sys/fs/cgroup/memory/0/tasks |
| 411 | |
Now that we're in the 0 cgroup, we can alter the memory limit::
| 413 | |
| 414 | # echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes |
| 415 | |
| 416 | NOTE: |
| 417 | We can use a suffix (k, K, m, M, g or G) to indicate values in kilo, |
| 418 | mega or gigabytes. (Here, Kilo, Mega, Giga are Kibibytes, Mebibytes, |
| 419 | Gibibytes.) |
| 420 | |
| 421 | NOTE: |
We can write "-1" to reset ``*.limit_in_bytes`` (unlimited).
| 423 | |
| 424 | NOTE: |
| 425 | We cannot set limits on the root cgroup any more. |
| 426 | |
| 427 | :: |
| 428 | |
| 429 | # cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes |
| 430 | 4194304 |
| 431 | |
| 432 | We can check the usage:: |
| 433 | |
| 434 | # cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes |
| 435 | 1216512 |
| 436 | |
| 437 | A successful write to this file does not guarantee a successful setting of |
| 438 | this limit to the value written into the file. This can be due to a |
| 439 | number of factors, such as rounding up to page boundaries or the total |
| 440 | availability of memory on the system. The user is required to re-read |
| 441 | this file after a write to guarantee the value committed by the kernel:: |
| 442 | |
| 443 | # echo 1 > memory.limit_in_bytes |
| 444 | # cat memory.limit_in_bytes |
| 445 | 4096 |
| 446 | |
| 447 | The memory.failcnt field gives the number of times that the cgroup limit was |
| 448 | exceeded. |
| 449 | |
The memory.stat file gives accounting information. The number of cache,
RSS and active/inactive pages is shown there.
| 452 | |
| 453 | 4. Testing |
| 454 | ========== |
| 455 | |
| 456 | For testing features and implementation, see memcg_test.txt. |
| 457 | |
Performance testing is also important. Testing on tmpfs shows the memory
controller's pure overhead, which is small.
Example: do a kernel build on tmpfs.
| 461 | |
Page-fault scalability is also important. When measuring parallel
page faults, a multi-process test may be better than a multi-threaded
one, because the latter adds noise from shared objects/state.
| 465 | |
But the above two test extreme situations.
Running your usual workloads under the memory controller is always helpful.
| 468 | |
| 469 | 4.1 Troubleshooting |
| 470 | ------------------- |
| 471 | |
| 472 | Sometimes a user might find that the application under a cgroup is |
| 473 | terminated by the OOM killer. There are several causes for this: |
| 474 | |
| 475 | 1. The cgroup limit is too low (just too low to do anything useful) |
| 476 | 2. The user is using anonymous memory and swap is turned off or too low |
| 477 | |
| 478 | A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of |
| 479 | some of the pages cached in the cgroup (page cache pages). |
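
That is::

    # sync
    # echo 1 > /proc/sys/vm/drop_caches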
| 480 | |
To see what is happening, it is helpful to disable the OOM killer as described
in "10. OOM Control" (below) and observe what happens.
| 483 | |
| 484 | 4.2 Task migration |
| 485 | ------------------ |
| 486 | |
When a task migrates from one cgroup to another, its charge is not
carried forward by default. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.
| 491 | |
| 492 | You can move charges of a task along with task migration. |
| 493 | See 8. "Move charges at task migration" |
| 494 | |
| 495 | 4.3 Removing a cgroup |
| 496 | --------------------- |
| 497 | |
| 498 | A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a |
| 499 | cgroup might have some charge associated with it, even though all |
| 500 | tasks have migrated away from it. (because we charge against pages, not |
| 501 | against tasks.) |
| 502 | |
The statistics are moved to the root (if use_hierarchy==0) or to the parent (if
use_hierarchy==1), and there is no change in the charge except that the child
is uncharged.
| 506 | |
Charges recorded in the swap information are not updated when a cgroup is
removed. The recorded information is discarded, and a cgroup which later uses
that swap (swapcache) will be charged as its new owner.
| 510 | |
| 511 | About use_hierarchy, see Section 6. |
| 512 | |
| 513 | 5. Misc. interfaces |
| 514 | =================== |
| 515 | |
| 516 | 5.1 force_empty |
| 517 | --------------- |
The memory.force_empty interface is provided to make a cgroup's memory usage empty.
When anything is written to this file::
| 520 | |
| 521 | # echo 0 > memory.force_empty |
| 522 | |
the cgroup is reclaimed and as many pages as possible are reclaimed.
| 524 | |
| 525 | The typical use case for this interface is before calling rmdir(). |
Though rmdir() offlines the memcg, the memcg may still stay around due to
charged file caches. Some out-of-use page caches may remain charged until
memory pressure occurs. If you want to avoid that, force_empty is useful.
| 529 | |
| 530 | Also, note that when memory.kmem.limit_in_bytes is set the charges due to |
| 531 | kernel pages will still be seen. This is not considered a failure and the |
| 532 | write will still return success. In this case, it is expected that |
| 533 | memory.kmem.usage_in_bytes == memory.usage_in_bytes. |
| 534 | |
| 535 | About use_hierarchy, see Section 6. |
| 536 | |
| 537 | 5.2 stat file |
| 538 | ------------- |
| 539 | |
The memory.stat file includes the following statistics:
| 541 | |
| 542 | per-memory cgroup local status |
| 543 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| 544 | |
| 545 | =============== =============================================================== |
| 546 | cache # of bytes of page cache memory. |
| 547 | rss # of bytes of anonymous and swap cache memory (includes |
| 548 | transparent hugepages). |
| 549 | rss_huge # of bytes of anonymous transparent hugepages. |
| 550 | mapped_file # of bytes of mapped file (includes tmpfs/shmem) |
| 551 | pgpgin # of charging events to the memory cgroup. The charging |
| 552 | event happens each time a page is accounted as either mapped |
| 553 | anon page(RSS) or cache page(Page Cache) to the cgroup. |
| 554 | pgpgout # of uncharging events to the memory cgroup. The uncharging |
| 555 | event happens each time a page is unaccounted from the cgroup. |
| 556 | swap # of bytes of swap usage |
| 557 | dirty # of bytes that are waiting to get written back to the disk. |
| 558 | writeback # of bytes of file/anon cache that are queued for syncing to |
| 559 | disk. |
| 560 | inactive_anon # of bytes of anonymous and swap cache memory on inactive |
| 561 | LRU list. |
| 562 | active_anon # of bytes of anonymous and swap cache memory on active |
| 563 | LRU list. |
| 564 | inactive_file # of bytes of file-backed memory on inactive LRU list. |
| 565 | active_file # of bytes of file-backed memory on active LRU list. |
| 566 | unevictable # of bytes of memory that cannot be reclaimed (mlocked etc). |
| 567 | =============== =============================================================== |
| 568 | |
| 569 | status considering hierarchy (see memory.use_hierarchy settings) |
| 570 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| 571 | |
| 572 | ========================= =================================================== |
| 573 | hierarchical_memory_limit # of bytes of memory limit with regard to hierarchy |
| 574 | under which the memory cgroup is |
| 575 | hierarchical_memsw_limit # of bytes of memory+swap limit with regard to |
| 576 | hierarchy under which memory cgroup is. |
| 577 | |
| 578 | total_<counter> # hierarchical version of <counter>, which in |
| 579 | addition to the cgroup's own value includes the |
| 580 | sum of all hierarchical children's values of |
| 581 | <counter>, i.e. total_cache |
| 582 | ========================= =================================================== |
| 583 | |
| 584 | The following additional stats are dependent on CONFIG_DEBUG_VM |
| 585 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| 586 | |
| 587 | ========================= ======================================== |
| 588 | recent_rotated_anon VM internal parameter. (see mm/vmscan.c) |
| 589 | recent_rotated_file VM internal parameter. (see mm/vmscan.c) |
| 590 | recent_scanned_anon VM internal parameter. (see mm/vmscan.c) |
| 591 | recent_scanned_file VM internal parameter. (see mm/vmscan.c) |
| 592 | ========================= ======================================== |
| 593 | |
Memo:
  recent_rotated means the recent frequency of LRU rotation.
  recent_scanned means the recent # of scans of the LRU.
  These are shown for easier debugging; please see the code for the exact meanings.
| 598 | |
| 599 | Note: |
| 600 | Only anonymous and swap cache memory is listed as part of 'rss' stat. |
| 601 | This should not be confused with the true 'resident set size' or the |
| 602 | amount of physical memory used by the cgroup. |
| 603 | |
'rss + mapped_file' will give you the resident set size of the cgroup.

(Note: file and shmem pages may be shared with other cgroups. In that case,
mapped_file is accounted only when the memory cgroup is the owner of the page
cache.)
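
For example, the components of the cgroup's resident set size can be read
directly from memory.stat (the numbers below are illustrative)::

    # grep -E '^(rss|mapped_file) ' memory.stat
    rss 1228800
    mapped_file 1048576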
| 609 | |
| 610 | 5.3 swappiness |
| 611 | -------------- |
| 612 | |
| 613 | Overrides /proc/sys/vm/swappiness for the particular group. The tunable |
| 614 | in the root cgroup corresponds to the global swappiness setting. |
| 615 | |
Please note that, unlike global reclaim, limit reclaim enforces that a
swappiness of 0 really prevents any swapping even if swap storage is
available. This might lead to the memcg OOM killer being invoked if there
are no file pages to reclaim.
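
For example, to prevent limit reclaim in a particular group from swapping at
all (keeping the warning above in mind)::

    # echo 0 > /sys/fs/cgroup/memory/0/memory.swappiness
    # cat /sys/fs/cgroup/memory/0/memory.swappiness
    0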
| 620 | |
| 621 | 5.4 failcnt |
| 622 | ----------- |
| 623 | |
| 624 | A memory cgroup provides memory.failcnt and memory.memsw.failcnt files. |
This failcnt (== failure count) shows the number of times that the usage counter
hit its limit. When a memory cgroup hits a limit, failcnt increases and
memory under it is reclaimed.
| 628 | |
| 629 | You can reset failcnt by writing 0 to failcnt file:: |
| 630 | |
| 631 | # echo 0 > .../memory.failcnt |
| 632 | |
| 633 | 5.5 usage_in_bytes |
| 634 | ------------------ |
| 635 | |
For efficiency, as with other kernel components, the memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is
affected by this method and doesn't show the 'exact' value of memory (and swap)
usage; it is a fuzzy value for efficient access. (Of course, when necessary, it
is synchronized.) If you want to know the more exact memory usage, you should
use the RSS+CACHE(+SWAP) value from memory.stat (see 5.2).
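
For example, the fuzzy counter and the precise per-page statistics can be
compared from the shell (the numbers below are illustrative)::

    # cat memory.usage_in_bytes
    8167424
    # awk '$1=="rss" || $1=="cache" || $1=="swap" {sum += $2} END {print sum}' memory.stat
    8028160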
| 642 | |
| 643 | 5.6 numa_stat |
| 644 | ------------- |
| 645 | |
| 646 | This is similar to numa_maps but operates on a per-memcg basis. This is |
| 647 | useful for providing visibility into the numa locality information within |
a memcg, since the pages are allowed to be allocated from any physical
| 649 | node. One of the use cases is evaluating application performance by |
| 650 | combining this information with the application's CPU allocation. |
| 651 | |
| 652 | Each memcg's numa_stat file includes "total", "file", "anon" and "unevictable" |
| 653 | per-node page counts including "hierarchical_<counter>" which sums up all |
| 654 | hierarchical children's values in addition to the memcg's own value. |
| 655 | |
| 656 | The output format of memory.numa_stat is:: |
| 657 | |
| 658 | total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ... |
| 659 | file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ... |
| 660 | anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ... |
unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...
| 662 | hierarchical_<counter>=<counter pages> N0=<node 0 pages> N1=<node 1 pages> ... |
| 663 | |
| 664 | The "total" count is sum of file + anon + unevictable. |
| 665 | |
| 666 | 6. Hierarchy support |
| 667 | ==================== |
| 668 | |
| 669 | The memory controller supports a deep hierarchy and hierarchical accounting. |
| 670 | The hierarchy is created by creating the appropriate cgroups in the |
| 671 | cgroup filesystem. Consider for example, the following cgroup filesystem |
| 672 | hierarchy:: |
| 673 | |
| 674 | root |
| 675 | / | \ |
| 676 | / | \ |
| 677 | a b c |
| 678 | | \ |
| 679 | | \ |
| 680 | d e |
| 681 | |
In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up to the root (i.e. c and root)
that have memory.use_hierarchy enabled. If one of the ancestors goes over its
limit, the reclaim algorithm reclaims from the tasks in that ancestor and in
the ancestor's children.
| 687 | |
| 688 | 6.1 Enabling hierarchical accounting and reclaim |
| 689 | ------------------------------------------------ |
| 690 | |
A memory cgroup disables the hierarchy feature by default. Support
can be enabled by writing 1 to the memory.use_hierarchy file of the root cgroup::
| 693 | |
| 694 | # echo 1 > memory.use_hierarchy |
| 695 | |
| 696 | The feature can be disabled by:: |
| 697 | |
| 698 | # echo 0 > memory.use_hierarchy |
| 699 | |
| 700 | NOTE1: |
| 701 | Enabling/disabling will fail if either the cgroup already has other |
| 702 | cgroups created below it, or if the parent cgroup has use_hierarchy |
| 703 | enabled. |
| 704 | |
| 705 | NOTE2: |
| 706 | When panic_on_oom is set to "2", the whole system will panic in |
| 707 | case of an OOM event in any cgroup. |
| 708 | |
| 709 | 7. Soft limits |
| 710 | ============== |
| 711 | |
| 712 | Soft limits allow for greater sharing of memory. The idea behind soft limits |
| 713 | is to allow control groups to use as much of the memory as needed, provided |
| 714 | |
| 715 | a. There is no memory contention |
| 716 | b. They do not exceed their hard limit |
| 717 | |
| 718 | When the system detects memory contention or low memory, control groups |
| 719 | are pushed back to their soft limits. If the soft limit of each control |
| 720 | group is very high, they are pushed back as much as possible to make |
| 721 | sure that one control group does not starve the others of memory. |
| 722 | |
Please note that soft limits are a best-effort feature; they come with
no guarantees, but the kernel does its best to make sure that when memory is
heavily contended for, memory is allocated based on the soft limit
hints/setup. Currently soft limit based reclaim is set up such that
it gets invoked from balance_pgdat (kswapd).
| 728 | |
| 729 | 7.1 Interface |
| 730 | ------------- |
| 731 | |
| 732 | Soft limits can be setup by using the following commands (in this example we |
| 733 | assume a soft limit of 256 MiB):: |
| 734 | |
| 735 | # echo 256M > memory.soft_limit_in_bytes |
| 736 | |
| 737 | If we want to change this to 1G, we can at any time use:: |
| 738 | |
| 739 | # echo 1G > memory.soft_limit_in_bytes |
| 740 | |
NOTE1:
  Soft limits take effect over a long period of time, since they involve
  reclaiming memory for balancing between memory cgroups.
NOTE2:
  It is recommended that the soft limit always be set below the hard limit,
  otherwise the hard limit will take precedence.
| 747 | |
| 748 | 8. Move charges at task migration |
| 749 | ================================= |
| 750 | |
Users can move the charges associated with a task along with task migration, that
is, uncharge the task's pages from the old cgroup and charge them to the new
cgroup. This feature is not supported in !CONFIG_MMU environments because of
the lack of page tables.
| 755 | |
| 756 | 8.1 Interface |
| 757 | ------------- |
| 758 | |
| 759 | This feature is disabled by default. It can be enabled (and disabled again) by |
| 760 | writing to memory.move_charge_at_immigrate of the destination cgroup. |
| 761 | |
| 762 | If you want to enable it:: |
| 763 | |
| 764 | # echo (some positive value) > memory.move_charge_at_immigrate |
| 765 | |
Note:
  Each bit of move_charge_at_immigrate has its own meaning about what type
  of charges should be moved. See 8.2 for details.
Note:
  Charges are moved only when you move mm->owner, in other words,
  the leader of a thread group.
Note:
  If we cannot find enough space for the task in the destination cgroup, we
  try to make space by reclaiming memory. Task migration may fail if we
  cannot make enough space.
Note:
  Moving charges can take several seconds if there are many charges to move.
| 778 | |
And if you want to disable it again::
| 780 | |
| 781 | # echo 0 > memory.move_charge_at_immigrate |
| 782 | |
| 783 | 8.2 Type of charges which can be moved |
| 784 | -------------------------------------- |
| 785 | |
Each bit in move_charge_at_immigrate has its own meaning about what type of
charges should be moved. In any case, it must be noted that the charge for a
page or a swap entry can be moved only when it is charged to the task's
current (old) memory cgroup.
| 790 | |
| 791 | +---+--------------------------------------------------------------------------+ |
| 792 | |bit| what type of charges would be moved ? | |
| 793 | +===+==========================================================================+ |
| 794 | | 0 | A charge of an anonymous page (or swap of it) used by the target task. | |
| 795 | | | You must enable Swap Extension (see 2.4) to enable move of swap charges. | |
| 796 | +---+--------------------------------------------------------------------------+ |
| 797 | | 1 | A charge of file pages (normal file, tmpfs file (e.g. ipc shared memory) | |
| 798 | | | and swaps of tmpfs file) mmapped by the target task. Unlike the case of | |
| 799 | | | anonymous pages, file pages (and swaps) in the range mmapped by the task | |
| 800 | | | will be moved even if the task hasn't done page fault, i.e. they might | |
| 801 | | | not be the task's "RSS", but other task's "RSS" that maps the same file. | |
| 802 | | | And mapcount of the page is ignored (the page can be moved even if | |
| 803 | | | page_mapcount(page) > 1). You must enable Swap Extension (see 2.4) to | |
| 804 | | | enable move of swap charges. | |
| 805 | +---+--------------------------------------------------------------------------+ |
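
For example, writing 3 (bits 0 and 1 set) to the destination cgroup moves both
anonymous and file charges when a thread-group leader migrates into it; a
minimal sketch, assuming the shell's current directory is the destination
memory cgroup and the shell itself is the task to move::

    # echo 3 > memory.move_charge_at_immigrate
    # echo $$ > tasks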
| 806 | |
| 807 | 8.3 TODO |
| 808 | -------- |
| 809 | |
- All charge-moving operations are done under cgroup_mutex. It is not good
  behavior to hold the mutex for too long, so we may need some trick.
| 812 | |
| 813 | 9. Memory thresholds |
| 814 | ==================== |
| 815 | |
The memory cgroup implements memory thresholds using the cgroups notification
API (see cgroups.txt). It allows multiple memory and memsw thresholds to be
registered and delivers notifications when a threshold is crossed.
| 819 | |
| 820 | To register a threshold, an application must: |
| 821 | |
| 822 | - create an eventfd using eventfd(2); |
| 823 | - open memory.usage_in_bytes or memory.memsw.usage_in_bytes; |
| 824 | - write string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>" to |
| 825 | cgroup.event_control. |
| 826 | |
The application will be notified through the eventfd when memory usage crosses
the threshold in either direction.
| 829 | |
This is applicable to both root and non-root cgroups.
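
As an illustration, the cgroup_event_listener helper used in section 11
performs the three steps above; a sketch, assuming the helper is in PATH, the
shell's current directory is the memory cgroup of interest, and a threshold of
5242880 bytes (5 MiB) is wanted::

    # cgroup_event_listener memory.usage_in_bytes 5242880 &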
| 831 | |
| 832 | 10. OOM Control |
| 833 | =============== |
| 834 | |
The memory.oom_control file is used for OOM notification and other controls.
| 836 | |
The memory cgroup implements an OOM notifier using the cgroup notification
API (see cgroups.txt). It allows multiple OOM notification receivers to be
registered and delivers a notification when an OOM occurs.
| 840 | |
| 841 | To register a notifier, an application must: |
| 842 | |
| 843 | - create an eventfd using eventfd(2) |
| 844 | - open memory.oom_control file |
| 845 | - write string like "<event_fd> <fd of memory.oom_control>" to |
| 846 | cgroup.event_control |
| 847 | |
| 848 | The application will be notified through eventfd when OOM happens. |
| 849 | OOM notification doesn't work for the root cgroup. |
| 850 | |
You can disable the OOM-killer by writing "1" to the memory.oom_control file, as::

    # echo 1 > memory.oom_control
| 854 | |
If the OOM-killer is disabled, tasks under the cgroup will hang/sleep
in the memory cgroup's OOM-waitqueue when they request accountable memory.

To let them run again, you have to relax the memory cgroup's OOM status by

* enlarging the limit or reducing usage.

To reduce usage,

* kill some tasks.
* move some tasks to another group with charge migration.
* remove some files (on tmpfs?).

Then, the stopped tasks will run again.
| 869 | |
Reading the file shows the current OOM status:
| 871 | |
| 872 | - oom_kill_disable 0 or 1 |
| 873 | (if 1, oom-killer is disabled) |
| 874 | - under_oom 0 or 1 |
| 875 | (if 1, the memory cgroup is under OOM, tasks may be stopped.) |
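
For example, reading the file in a cgroup where the OOM-killer has not been
disabled and no OOM is in progress might show (values are illustrative)::

    # cat memory.oom_control
    oom_kill_disable 0
    under_oom 0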
| 876 | |
| 877 | 11. Memory Pressure |
| 878 | =================== |
| 879 | |
| 880 | The pressure level notifications can be used to monitor the memory |
| 881 | allocation cost; based on the pressure, applications can implement |
| 882 | different strategies of managing their memory resources. The pressure |
levels are defined as follows:
| 884 | |
| 885 | The "low" level means that the system is reclaiming memory for new |
| 886 | allocations. Monitoring this reclaiming activity might be useful for |
maintaining cache levels. Upon notification, the program (typically
"Activity Manager") might analyze vmstat and act in advance (e.g. by
prematurely shutting down unimportant services).
| 890 | |
| 891 | The "medium" level means that the system is experiencing medium memory |
pressure; the system might be swapping, paging out active file caches,
| 893 | etc. Upon this event applications may decide to further analyze |
| 894 | vmstat/zoneinfo/memcg or internal memory usage statistics and free any |
| 895 | resources that can be easily reconstructed or re-read from a disk. |
| 896 | |
| 897 | The "critical" level means that the system is actively thrashing, it is |
| 898 | about to out of memory (OOM) or even the in-kernel OOM killer is on its |
| 899 | way to trigger. Applications should do whatever they can to help the |
| 900 | system. It might be too late to consult with vmstat or any other |
| 901 | statistics, so it's advisable to take an immediate action. |
| 902 | |
| 903 | By default, events are propagated upward until the event is handled, i.e. the |
| 904 | events are not pass-through. For example, you have three cgroups: A->B->C. Now |
| 905 | you set up an event listener on cgroups A, B and C, and suppose group C |
| 906 | experiences some pressure. In this situation, only group C will receive the |
| 907 | notification, i.e. groups A and B will not receive it. This is done to avoid |
| 908 | excessive "broadcasting" of messages, which disturbs the system and which is |
especially bad if we are low on memory or thrashing. Group B will receive
a notification only if there are no event listeners for group C.
| 911 | |
| 912 | There are three optional modes that specify different propagation behavior: |
| 913 | |
| 914 | - "default": this is the default behavior specified above. This mode is the |
same as omitting the optional mode parameter, and is preserved for backwards
compatibility.
| 917 | |
| 918 | - "hierarchy": events always propagate up to the root, similar to the default |
| 919 | behavior, except that propagation continues regardless of whether there are |
event listeners at each level. In the above
| 921 | example, groups A, B, and C will receive notification of memory pressure. |
| 922 | |
| 923 | - "local": events are pass-through, i.e. they only receive notifications when |
| 924 | memory pressure is experienced in the memcg for which the notification is |
| 925 | registered. In the above example, group C will receive notification if |
| 926 | registered for "local" notification and the group experiences memory |
pressure. However, group B will never receive a notification, regardless of
whether there is an event listener for group C, if group B is registered for
| 929 | local notification. |
| 930 | |
The level and event notification mode ("hierarchy" or "local", if necessary) are
specified by a comma-delimited string, e.g. "low,hierarchy" specifies
hierarchical, pass-through notification for all ancestor memcgs. Notification
that uses the default, non-pass-through behavior does not specify a mode.
"medium,local" specifies pass-through notification for the medium level.
| 936 | |
| 937 | The file memory.pressure_level is only used to setup an eventfd. To |
| 938 | register a notification, an application must: |
| 939 | |
| 940 | - create an eventfd using eventfd(2); |
| 941 | - open memory.pressure_level; |
| 942 | - write string as "<event_fd> <fd of memory.pressure_level> <level[,mode]>" |
| 943 | to cgroup.event_control. |
| 944 | |
The application will be notified through the eventfd when memory pressure is at
the specified level (or higher). Read/write operations on
memory.pressure_level are not implemented.
| 948 | |
| 949 | Test: |
| 950 | |
Here is a small script example that makes a new cgroup, sets up a
memory limit, sets up a notification in the cgroup, and then makes the
cgroup experience critical memory pressure::
| 954 | |
| 955 | # cd /sys/fs/cgroup/memory/ |
| 956 | # mkdir foo |
| 957 | # cd foo |
| 958 | # cgroup_event_listener memory.pressure_level low,hierarchy & |
| 959 | # echo 8000000 > memory.limit_in_bytes |
| 960 | # echo 8000000 > memory.memsw.limit_in_bytes |
| 961 | # echo $$ > tasks |
| 962 | # dd if=/dev/zero | read x |
| 963 | |
| 964 | (Expect a bunch of notifications, and eventually, the oom-killer will |
| 965 | trigger.) |
| 966 | |
| 967 | 12. TODO |
| 968 | ======== |
| 969 | |
| 970 | 1. Make per-cgroup scanner reclaim not-shared pages first |
| 971 | 2. Teach controller to account for shared-pages |
| 972 | 3. Start reclamation in the background when the limit is |
| 973 | not yet hit but the usage is getting closer |
| 974 | |
| 975 | Summary |
| 976 | ======= |
| 977 | |
| 978 | Overall, the memory controller has been a stable controller and has been |
| 979 | commented and discussed quite extensively in the community. |
| 980 | |
| 981 | References |
| 982 | ========== |
| 983 | |
| 984 | 1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/ |
| 985 | 2. Singh, Balbir. Memory Controller (RSS Control), |
| 986 | http://lwn.net/Articles/222762/ |
| 987 | 3. Emelianov, Pavel. Resource controllers based on process cgroups |
| 988 | http://lkml.org/lkml/2007/3/6/198 |
| 989 | 4. Emelianov, Pavel. RSS controller based on process cgroups (v2) |
| 990 | http://lkml.org/lkml/2007/4/9/78 |
| 991 | 5. Emelianov, Pavel. RSS controller based on process cgroups (v3) |
| 992 | http://lkml.org/lkml/2007/5/30/244 |
| 993 | 6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/ |
| 994 | 7. Vaidyanathan, Srinivasan, Control Groups: Pagecache accounting and control |
| 995 | subsystem (v3), http://lwn.net/Articles/235534/ |
| 996 | 8. Singh, Balbir. RSS controller v2 test results (lmbench), |
| 997 | http://lkml.org/lkml/2007/5/17/232 |
| 998 | 9. Singh, Balbir. RSS controller v2 AIM9 results |
| 999 | http://lkml.org/lkml/2007/5/18/1 |
| 1000 | 10. Singh, Balbir. Memory controller v6 test results, |
| 1001 | http://lkml.org/lkml/2007/8/19/36 |
| 1002 | 11. Singh, Balbir. Memory controller introduction (v6), |
| 1003 | http://lkml.org/lkml/2007/8/17/69 |
| 1004 | 12. Corbet, Jonathan, Controlling memory use in cgroups, |
| 1005 | http://lwn.net/Articles/243795/ |