- The intent of this file is to give a brief summary of hugetlbpage support in
- the Linux kernel. This support is built on top of the multiple page size
- support provided by most modern architectures. For example, the i386
- architecture supports 4K and 4M (2M in PAE mode) page sizes, the ia64
- architecture supports multiple page sizes (4K, 8K, 64K, 256K, 1M, 4M, 16M,
- 256M) and ppc64 supports 4K and 16M. A TLB is a cache of virtual-to-physical
- translations, and is typically a very scarce resource on a processor.
- Operating systems try to make the best use of the limited number of TLB
- entries. This optimization is more critical now that bigger and bigger
- physical memories (several GBs) are more readily available.
- Users can use the huge page support in the Linux kernel either by using the
- mmap system call or via the standard SysV shared memory system calls
- (shmget, shmat).
- First the Linux kernel needs to be built with the CONFIG_HUGETLBFS
- (present under "File systems") and CONFIG_HUGETLB_PAGE (selected
- automatically when CONFIG_HUGETLBFS is selected) configuration
- options.
- A kernel built with huge page support should show the number of configured
- huge pages in the system when running the "cat /proc/meminfo" command.
- /proc/meminfo reports the total number of hugetlb pages configured in the
- kernel, the number of hugetlb pages that are currently free, and the
- configured huge page size - the size is needed for generating the proper
- alignment and size of the arguments to the system calls mentioned above.
- The output of "cat /proc/meminfo" will have lines like:
- .....
- HugePages_Total: vvv
- HugePages_Free: www
- HugePages_Rsvd: xxx
- HugePages_Surp: yyy
- Hugepagesize: zzz kB
- where:
- HugePages_Total is the size of the pool of huge pages.
- HugePages_Free is the number of huge pages in the pool that are not yet
- allocated.
- HugePages_Rsvd is short for "reserved," and is the number of huge pages for
- which a commitment to allocate from the pool has been made,
- but no allocation has yet been made. Reserved huge pages
- guarantee that an application will be able to allocate a
- huge page from the pool of huge pages at fault time.
- HugePages_Surp is short for "surplus," and is the number of huge pages in
- the pool above the value in /proc/sys/vm/nr_hugepages. The
- maximum number of surplus huge pages is controlled by
- /proc/sys/vm/nr_overcommit_hugepages.
- /proc/filesystems should also show a filesystem of type "hugetlbfs" configured
- in the kernel.
- /proc/sys/vm/nr_hugepages indicates the current number of configured hugetlb
- pages in the kernel. The superuser can dynamically request more (or free
- some pre-configured) huge pages.
- The allocation (or deallocation) of hugetlb pages is possible only if there
- are enough physically contiguous free pages in the system (freeing of huge
- pages is possible only if there are enough free hugetlb pages that can be
- transferred back to the regular memory pool).
- Pages that are used as hugetlb pages are reserved inside the kernel and cannot
- be used for other purposes.
- Once a kernel with hugetlb page support is built and running, a user can
- use either the mmap system call or the shared memory system calls to start
- using the huge pages. It is required that the system administrator
- preallocate enough memory for huge page purposes.
- The administrator can preallocate huge pages on the kernel boot command line by
- specifying the "hugepages=N" parameter, where 'N' = the number of huge pages
- requested. This is the most reliable method for preallocating huge pages as
- memory has not yet become fragmented.
- Some platforms support multiple huge page sizes. To preallocate huge pages
- of a specific size, one must precede the huge pages boot command parameters
- with a huge page size selection parameter "hugepagesz=<size>". <size> must
- be specified in bytes with an optional scale suffix [kKmMgG]. The default
- huge page size may be selected with the "default_hugepagesz=<size>" boot
- parameter.
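- For example, to boot with 512 two-megabyte pages and one 1GB page on x86_64
- (hypothetical counts; the supported sizes depend on the CPU, and each
- "hugepages=N" applies to the most recently specified "hugepagesz"), the
- kernel command line could contain:

```shell
default_hugepagesz=2M hugepagesz=2M hugepages=512 hugepagesz=1G hugepages=1
```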
- /proc/sys/vm/nr_hugepages indicates the current number of configured
- [default size] hugetlb pages in the kernel. The superuser can dynamically
- request more (or free some pre-configured) huge pages.
- Use the following command to dynamically allocate/deallocate default sized
- huge pages:
- echo 20 > /proc/sys/vm/nr_hugepages
- This command will try to configure 20 default sized huge pages in the system.
- On a NUMA platform, the kernel will attempt to distribute the huge page pool
- over all on-line nodes. These huge pages, allocated when nr_hugepages
- is increased, are called "persistent huge pages".
- The success or failure of huge page allocation depends on the amount of
- physically contiguous memory that is present in the system at the time of
- the allocation attempt. If the kernel is unable to allocate huge pages from
- some nodes in a NUMA system, it will attempt to make up the difference by
- allocating extra pages on other nodes with sufficient available contiguous
- memory, if any.
- System administrators may want to put this command in one of the local rc
- init files. This will enable the kernel to allocate huge pages early in the
- boot process, when the possibility of getting physically contiguous pages is
- still very high. Administrators can verify the number of huge pages actually
- allocated by checking the sysctl or /proc/meminfo. To check the per-node
- distribution of huge pages in a NUMA system, use:
- cat /sys/devices/system/node/node*/meminfo | fgrep Huge
- /proc/sys/vm/nr_overcommit_hugepages specifies how much larger the pool of
- huge pages can grow if more huge pages than /proc/sys/vm/nr_hugepages are
- requested by applications. Writing any non-zero value into this file
- indicates that the hugetlb subsystem is allowed to try to obtain "surplus"
- huge pages from the buddy allocator when the normal pool is exhausted. As
- these surplus huge pages go out of use, they are freed back to the buddy
- allocator.
- When increasing the huge page pool size via nr_hugepages, any surplus
- pages will first be promoted to persistent huge pages. Then, additional
- huge pages will be allocated, if necessary and if possible, to fulfill
- the new huge page pool size.
- The administrator may shrink the pool of preallocated huge pages for
- the default huge page size by setting the nr_hugepages sysctl to a
- smaller value. The kernel will attempt to balance the freeing of huge pages
- across all on-line nodes. Any free huge pages on the selected nodes will
- be freed back to the buddy allocator.
- Caveat: Shrinking the pool via nr_hugepages such that it becomes less
- than the number of huge pages in use will convert the balance to surplus
- huge pages even if it would exceed the overcommit value. As long as
- this condition holds, however, no more surplus huge pages will be
- allowed on the system until one of the two sysctls is increased
- sufficiently, or the surplus huge pages go out of use and are freed.
- With run-time support for multiple huge page pools, much of the huge page
- userspace interface has been duplicated in sysfs. The information above
- applies to the default huge page size, which will remain controllable via
- the /proc interfaces for backwards compatibility. The root huge page
- control directory in sysfs is:
- /sys/kernel/mm/hugepages
- For each huge page size supported by the running kernel, a subdirectory
- will exist, of the form
- hugepages-${size}kB
- Inside each of these directories, the same set of files will exist:
- nr_hugepages
- nr_overcommit_hugepages
- free_hugepages
- resv_hugepages
- surplus_hugepages
- which function as described above for the default huge page case.
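- For example, to request 2MB huge pages through the sysfs interface rather
- than the default-size /proc interface (requires root; the hugepages-2048kB
- directory exists only on kernels that support a 2MB huge page size):

```shell
echo 20 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
```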
- If the user applications are going to request huge pages using mmap system
- call, then it is required that system administrator mount a file system of
- type hugetlbfs:
- mount -t hugetlbfs \
- -o uid=<value>,gid=<value>,mode=<value>,size=<value>,nr_inodes=<value> \
- none /mnt/huge
- This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
- /mnt/huge. Any files created on /mnt/huge use huge pages. The uid and gid
- options set the owner and group of the root of the filesystem. By default
- the uid and gid of the current process are taken. The mode option sets the
- mode of the root of the filesystem to value & 0777. This value is given in
- octal. By default the value 0755 is picked. The size option sets the
- maximum amount of memory (huge pages) allowed for that filesystem
- (/mnt/huge). The size is rounded down to a multiple of HPAGE_SIZE. The
- option nr_inodes sets the maximum number of inodes that /mnt/huge can use.
- If the size or nr_inodes option is not provided on the command line then no
- limits are set. For the size and nr_inodes options, you can use
- [G|g]/[M|m]/[K|k] to represent giga/mega/kilo. For example, size=2K has the
- same meaning as size=2048.
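- To make such a mount persistent across reboots, an equivalent /etc/fstab
- entry could look like the following (the uid, gid, mode and size values here
- are purely illustrative):

```shell
none /mnt/huge hugetlbfs uid=1000,gid=1000,mode=0775,size=256M 0 0
```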
- While read system calls are supported on files that reside on hugetlb
- file systems, write system calls are not.
- The regular chown, chgrp, and chmod commands (with the right permissions)
- can be used to change the file attributes on hugetlbfs.
- Also, it is important to note that no such mount command is required if the
- applications are going to use only shmat/shmget system calls or mmap with
- MAP_HUGETLB. Users who wish to use hugetlb pages via shared memory segments
- should be members of a supplementary group, and the system administrator
- needs to configure that gid into /proc/sys/vm/hugetlb_shm_group. It is
- possible for the same or different applications to use any combination of
- mmap and shm* calls, though mounting the filesystem is required for using
- mmap calls without MAP_HUGETLB. For an example of how to use mmap with
- MAP_HUGETLB see map_hugetlb.c.
- *******************************************************************
- /*
- * Example of using huge page memory in a user application using Sys V shared
- * memory system calls. In this example the app is requesting 256MB of
- * memory that is backed by huge pages. The application uses the flag
- * SHM_HUGETLB in the shmget system call to inform the kernel that it is
- * requesting huge pages.
- *
- * For the ia64 architecture, the Linux kernel reserves Region number 4 for
- * huge pages. That means the addresses starting with 0x800000... will need
- * to be specified. Specifying a fixed address is not required on ppc64,
- * i386 or x86_64.
- *
- * Note: The default shared memory limit is quite low on many kernels,
- * you may need to increase it via:
- *
- * echo 268435456 > /proc/sys/kernel/shmmax
- *
- * This will increase the maximum size per shared memory segment to 256MB.
- * The other limit that you will hit eventually is shmall which is the
- * total amount of shared memory in pages. To set it to 16GB on a system
- * with a 4kB pagesize do:
- *
- * echo 4194304 > /proc/sys/kernel/shmall
- */
- #include <stdlib.h>
- #include <stdio.h>
- #include <sys/types.h>
- #include <sys/ipc.h>
- #include <sys/shm.h>
- #include <sys/mman.h>
- #ifndef SHM_HUGETLB
- #define SHM_HUGETLB 04000
- #endif
- #define LENGTH (256UL*1024*1024)
- #define dprintf(x) printf(x)
- /* Only ia64 requires this */
- #ifdef __ia64__
- #define ADDR (void *)(0x8000000000000000UL)
- #define SHMAT_FLAGS (SHM_RND)
- #else
- #define ADDR (void *)(0x0UL)
- #define SHMAT_FLAGS (0)
- #endif
- int main(void)
- {
- 	int shmid;
- 	unsigned long i;
- 	char *shmaddr;
- 	if ((shmid = shmget(2, LENGTH,
- 			SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W)) < 0) {
- 		perror("shmget");
- 		exit(1);
- 	}
- 	printf("shmid: 0x%x\n", shmid);
- 	shmaddr = shmat(shmid, ADDR, SHMAT_FLAGS);
- 	if (shmaddr == (char *)-1) {
- 		perror("Shared memory attach failure");
- 		shmctl(shmid, IPC_RMID, NULL);
- 		exit(2);
- 	}
- 	printf("shmaddr: %p\n", shmaddr);
- 	dprintf("Starting the writes:\n");
- 	for (i = 0; i < LENGTH; i++) {
- 		shmaddr[i] = (char)(i);
- 		if (!(i % (1024 * 1024)))
- 			dprintf(".");
- 	}
- 	dprintf("\n");
- 	dprintf("Starting the Check...");
- 	for (i = 0; i < LENGTH; i++)
- 		if (shmaddr[i] != (char)i)
- 			printf("\nIndex %lu mismatched\n", i);
- 	dprintf("Done.\n");
- 	if (shmdt((const void *)shmaddr) != 0) {
- 		perror("Detach failure");
- 		shmctl(shmid, IPC_RMID, NULL);
- 		exit(3);
- 	}
- 	shmctl(shmid, IPC_RMID, NULL);
- 	return 0;
- }
- *******************************************************************
- /*
- * Example of using huge page memory in a user application using the mmap
- * system call. Before running this application, make sure that the
- * administrator has mounted the hugetlbfs filesystem (on some directory
- * like /mnt) using the command mount -t hugetlbfs nodev /mnt. In this
- * example, the app is requesting memory of size 256MB that is backed by
- * huge pages.
- *
- * For ia64 architecture, Linux kernel reserves Region number 4 for huge pages.
- * That means the addresses starting with 0x800000... will need to be
- * specified. Specifying a fixed address is not required on ppc64, i386
- * or x86_64.
- */
- #include <stdlib.h>
- #include <stdio.h>
- #include <unistd.h>
- #include <sys/mman.h>
- #include <fcntl.h>
- #define FILE_NAME "/mnt/hugepagefile"
- #define LENGTH (256UL*1024*1024)
- #define PROTECTION (PROT_READ | PROT_WRITE)
- /* Only ia64 requires this */
- #ifdef __ia64__
- #define ADDR (void *)(0x8000000000000000UL)
- #define FLAGS (MAP_SHARED | MAP_FIXED)
- #else
- #define ADDR (void *)(0x0UL)
- #define FLAGS (MAP_SHARED)
- #endif
- void check_bytes(char *addr)
- {
- 	printf("First hex is %x\n", *((unsigned int *)addr));
- }
- void write_bytes(char *addr)
- {
- 	unsigned long i;
- 	for (i = 0; i < LENGTH; i++)
- 		*(addr + i) = (char)i;
- }
- void read_bytes(char *addr)
- {
- 	unsigned long i;
- 	check_bytes(addr);
- 	for (i = 0; i < LENGTH; i++)
- 		if (*(addr + i) != (char)i) {
- 			printf("Mismatch at %lu\n", i);
- 			break;
- 		}
- }
- int main(void)
- {
- 	void *addr;
- 	int fd;
- 	fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755);
- 	if (fd < 0) {
- 		perror("Open failed");
- 		exit(1);
- 	}
- 	addr = mmap(ADDR, LENGTH, PROTECTION, FLAGS, fd, 0);
- 	if (addr == MAP_FAILED) {
- 		perror("mmap");
- 		unlink(FILE_NAME);
- 		exit(1);
- 	}
- 	printf("Returned address is %p\n", addr);
- 	check_bytes(addr);
- 	write_bytes(addr);
- 	read_bytes(addr);
- 	munmap(addr, LENGTH);
- 	close(fd);
- 	unlink(FILE_NAME);
- 	return 0;
- }