Linux-Kernel Archive: Re: RFC: Memory Tiering Kernel Interfaces

From: Wei Xu
Date: Tue May 03 2022 - 03:03:02 EST
In reply to: Aneesh Kumar K.V: "Re: RFC: Memory Tiering Kernel Interfaces"
Next in thread: Dave Hansen: "Re: RFC: Memory Tiering Kernel Interfaces"

On Sun, May 1, 2022 at 11:25 PM Aneesh Kumar K.V wrote:
>
> Wei Xu writes:
>
> ....
>
> >
> > Tiering Hierarchy Initialization
> > ================================
> >
> > By default, all memory nodes are in the top tier (N_TOPTIER_MEMORY).
> >
> > A device driver can remove its memory nodes from the top tier, e.g.
> > a dax driver can remove PMEM nodes from the top tier.
>
> Should we make the tier in which to place the memory an option that
> device drivers like the dax driver can select? Or should the dax driver
> just mark a specific memory-only NUMA node as a demotion target,
> without explicitly specifying the tier in which it should be placed?
> I would like to go for the latter and choose the tier details based on
> the current memory tiers and the NUMA distance values (and even HMAT at
> some point in the future).

This is what has been proposed here. The driver doesn't determine which
particular tier the node should be placed in. It just removes the node
from the top tier (i.e. making the node a demotion target). The actual
tier of the node is determined based on all the nodes and their NUMA
distance values.

> The challenge with NUMA distance, though, is which distance value we
> will pick. For example, in your example 1:
>
> node   0    1    2    3
>    0  10   20   30   40
>    1  20   10   40   30
>    2  30   40   10   40
>    3  40   30   40   10
>
> When node 3 is registered, how do we decide whether to create a tier 2
> or add it to tier 1?
This proposal assumes a breadth-first search in tier construction, which
is also how the current implementation works. In this example, the
top-tier nodes are [0,1]. We then find a best demotion node for each of
[0,1] and get [0->2, 1->3]. Now we have two tiers: [0,1] and [2,3], and
the search terminates.

But this algorithm doesn't work if there is no node 1 and we still want
nodes 2 & 3 in the same tier. Without additional hardware information
such as HMAT, we will need a way to override the default tier
definition.

> We could say devices that wish to be placed in the same tier will have
> the same distance as the existing tier device, i.e. for the above
> case:
>
> node_distance[2][2] == node_distance[2][3]? Can we expect the firmware
> to have distance values like that?

node_distance[2][2] is local, which should be smaller than
node_distance[2][3]. I expect that this should be the case in normal
firmwares.

> >
> > The kernel builds the memory tiering hierarchy and per-node demotion
> > order tier-by-tier starting from N_TOPTIER_MEMORY. For a node N, the
> > best distance nodes in the next lower tier are assigned to
> > node_demotion[N].preferred and all the nodes in the next lower tier
> > are assigned to node_demotion[N].allowed.
> >
> > node_demotion[N].preferred can be empty if no preferred demotion node
> > is available for node N.
> >
> > If the userspace overrides the tiers via the memory_tiers sysfs
> > interface, the kernel then only rebuilds the per-node demotion order
> > accordingly.
> >
> > The memory tiering hierarchy is rebuilt upon hot-add or hot-remove of
> > a memory node, but is NOT rebuilt upon hot-add or hot-remove of a CPU
> > node.
> >
> >
> > Memory Allocation for Demotion
> > ==============================
> >
> > When allocating a new demotion target page, both a preferred node
> > and the allowed nodemask are provided to the allocation function.
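As an aside, the breadth-first tier construction and per-node demotion
order described above can be modeled in a few lines of Python. This is
an illustrative sketch only, not the kernel implementation; the names
`build_tiers` and `demotion_order` are hypothetical:

```python
def build_tiers(nodes, top_tier, distance):
    """Breadth-first tier construction (illustrative model, not kernel
    code): starting from the top tier, each node's best-distance node
    among the not-yet-placed nodes joins the next tier."""
    tiers = [sorted(top_tier)]
    placed = set(top_tier)
    while len(placed) < len(nodes):
        next_tier = set()
        for n in tiers[-1]:
            remaining = [m for m in nodes if m not in placed]
            if remaining:
                # Best (lowest-distance) demotion node for node n.
                next_tier.add(min(remaining, key=lambda m: distance[n][m]))
        if not next_tier:
            # Nothing reachable top-down; lump the leftovers together.
            next_tier = {m for m in nodes if m not in placed}
        tiers.append(sorted(next_tier))
        placed |= next_tier
    return tiers

def demotion_order(tiers, distance):
    """For each node N: preferred = best-distance nodes in the next
    lower tier; allowed = all nodes in the next lower tier."""
    node_demotion = {}
    for i, tier in enumerate(tiers):
        lower = tiers[i + 1] if i + 1 < len(tiers) else []
        for n in tier:
            if lower:
                best = min(distance[n][m] for m in lower)
                preferred = [m for m in lower if distance[n][m] == best]
            else:
                preferred = []
            node_demotion[n] = (preferred, list(lower))
    return node_demotion

# Example 1's distance matrix: nodes 0 & 1 are DRAM, 2 & 3 are PMEM.
distance = [
    [10, 20, 30, 40],
    [20, 10, 40, 30],
    [30, 40, 10, 40],
    [40, 30, 40, 10],
]
tiers = build_tiers([0, 1, 2, 3], [0, 1], distance)
print(tiers)  # -> [[0, 1], [2, 3]]
print(demotion_order(tiers, distance))
# -> {0: ([2], [2, 3]), 1: ([3], [2, 3]), 2: ([], []), 3: ([], [])}
```

Running it on example 1 reproduces the node_demotion[] table below, and
it also shows the failure mode discussed above: drop node 1 from the
matrix and the search places nodes 2 and 3 in separate tiers.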
> > The default kernel allocation fallback order is used to allocate the
> > page from the specified node and nodemask.
> >
> > The mempolicy of the cpuset, vma and owner task of the source page
> > can be set to refine the demotion nodemask, e.g. to prevent demotion
> > or to select a particular allowed node as the demotion target.
> >
> >
> > Examples
> > ========
> >
> > * Example 1:
> > Node 0 & 1 are DRAM nodes, node 2 & 3 are PMEM nodes.
> >
> > Node 0 has node 2 as the preferred demotion target and can also
> > fall back demotion to node 3.
> >
> > Node 1 has node 3 as the preferred demotion target and can also
> > fall back demotion to node 2.
> >
> > Set mempolicy to prevent cross-socket demotion and memory access,
> > e.g. cpuset.mems=0,2
> >
> > node distances:
> > node   0    1    2    3
> >    0  10   20   30   40
> >    1  20   10   40   30
> >    2  30   40   10   40
> >    3  40   30   40   10
> >
> > /sys/devices/system/node/memory_tiers
> > 0-1
> > 2-3
>
> How can I make node 3 the demotion target for node 2 in this case? Can
> we have one file for each tier? I.e., we start with
> /sys/devices/system/node/memory_tier0. Removing a node with memory
> from the above file/list results in the creation of new tiers.
>
> /sys/devices/system/node/memory_tier0
> 0-1
> /sys/devices/system/node/memory_tier1
> 2-3
>
> echo 2 > /sys/devices/system/node/memory_tier1
>
> /sys/devices/system/node/memory_tier1
> 2
> /sys/devices/system/node/memory_tier2
> 3

The proposal does something similar, except using a single file:
memory_tiers. Another idea is to pass the tier override from a kernel
boot argument, though it is challenging to deal with hot-plugged nodes.

> >
> > N_TOPTIER_MEMORY: 0-1
> >
> > node_demotion[]:
> >   0: [2], [2-3]
> >   1: [3], [2-3]
> >   2: [], []
> >   3: [], []
> >
> > * Example 2:
> > Node 0 & 1 are DRAM nodes.
> > Node 2 is a PMEM node and closer to node 0.
> >
> > Node 0 has node 2 as the preferred and only demotion target.
> >
> > Node 1 has no preferred demotion target, but can still demote
> > to node 2.
> >
> > Set mempolicy to prevent cross-socket demotion and memory access,
> > e.g. cpuset.mems=0,2
> >
> > node distances:
> > node   0    1    2
> >    0  10   20   30
> >    1  20   10   40
> >    2  30   40   10
> >
> > /sys/devices/system/node/memory_tiers
> > 0-1
> > 2
> >
> > N_TOPTIER_MEMORY: 0-1
> >
> > node_demotion[]:
> >   0: [2], [2]
> >   1: [], [2]
> >   2: [], []
> >
> >
> > * Example 3:
> > Node 0 & 1 are DRAM nodes.
> > Node 2 is a PMEM node and has the same distance to node 0 & 1.
> >
> > Node 0 has node 2 as the preferred and only demotion target.
> >
> > Node 1 has node 2 as the preferred and only demotion target.
> >
> > node distances:
> > node   0    1    2
> >    0  10   20   30
> >    1  20   10   30
> >    2  30   30   10
> >
> > /sys/devices/system/node/memory_tiers
> > 0-1
> > 2
> >
> > N_TOPTIER_MEMORY: 0-1
> >
> > node_demotion[]:
> >   0: [2], [2]
> >   1: [2], [2]
> >   2: [], []
> >
> >
> > * Example 4:
> > Node 0 & 1 are DRAM nodes, node 2 is a memory-only DRAM node.
> >
> > All nodes are top-tier.
> >
> > node distances:
> > node   0    1    2
> >    0  10   20   30
> >    1  20   10   30
> >    2  30   30   10
> >
> > /sys/devices/system/node/memory_tiers
> > 0-2
> >
> > N_TOPTIER_MEMORY: 0-2
> >
> > node_demotion[]:
> >   0: [], []
> >   1: [], []
> >   2: [], []
> >
> >
> > * Example 5:
> > Node 0 is a DRAM node with CPU.
> > Node 1 is a HBM node.
> > Node 2 is a PMEM node.
> >
> > With userspace override, node 1 is the top tier and has node 0 as
> > the preferred and only demotion target.
> >
> > Node 0 is in the second tier, tier 1, and has node 2 as the
> > preferred and only demotion target.
> >
> > Node 2 is in the lowest tier, tier 2, and has no demotion targets.
> >
> > node distances:
> > node   0    1    2
> >    0  10   21   30
> >    1  21   10   40
> >    2  30   40   10
> >
> > /sys/devices/system/node/memory_tiers (userspace override)
> > 1
> > 0
> > 2
> >
> > N_TOPTIER_MEMORY: 1
> >
> > node_demotion[]:
> >   0: [2], [2]
> >   1: [0], [0]
> >   2: [], []

--
Wei