Less known Solaris Features: CPU resource Pools

After writing recently about processor sets and their role in computing the number of garbage collection threads, I received some questions about configuring processor sets. In this article I want to explain the usage of resource pools in Solaris with regard to CPU resources. You don't configure the processor sets directly; instead, the pool facility does this for you. I want to explain how to configure, use and monitor them. However, I can only scratch the surface in an article like this.

Prerequisites

When you want to demonstrate a feature of the OS that influences the way processes are dispatched onto the CPUs, it's useful to have something that produces a lot of load. An ordinary household cpuhog will be sufficient, so I use the one from the Resource Manager tutorial.

<pre>
jmoekamp@hivemind:~$ cat cpuhog.pl
#! /usr/bin/perl
while (1) { my $res = ( 3.3333 / 3.14 ) }
</pre>
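
One practical note: the transcripts below start the script as <code>./cpuhog.pl</code>, so don't forget to make it executable first:

<pre>
jmoekamp@hivemind:~$ chmod +x cpuhog.pl
</pre>
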
Configuring the CPU resource pool

There are two important commands to control the behavior of the pool facility: <code>pooladm</code> and <code>poolcfg</code>.
Normally the pool facility isn't activated, so you can't work with it:

<pre>
jmoekamp@hivemind:~# pooladm
pooladm: couldn't open pools state file: Facility is not active
</pre>

Additionally, it's not possible to configure it:

<pre>
jmoekamp@hivemind:~# poolcfg -c "discover"
poolcfg: cannot create the configuration, hivemind: Facility is not active
</pre>

So we have to activate the pool facility first:

<pre>
jmoekamp@hivemind:~# pooladm -e
</pre>
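
As a side note: on systems where the pool facility is managed by SMF, you can cross-check the activation against the service state. I'm assuming the <code>svc:/system/pools:default</code> service name of Solaris 10 and later here; check the documentation of your release.

<pre>
jmoekamp@hivemind:~# svcs pools
</pre>

It should report the service as online once the facility is enabled.
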
Now we are able to start the configuration:

<pre>
jmoekamp@hivemind:~# poolcfg -c "discover"
</pre>

The <code>discover</code> subcommand scans the system, creates a matching configuration, and writes it to the default location, <code>/etc/pooladm.conf</code>.
There are two ways to look up the current configuration. You can use the <code>pooladm</code> command without further arguments to look up the currently active configuration:


<pre>
jmoekamp@hivemind:~# pooladm

system default
        string  system.comment 
        int     system.version 1
        boolean system.bind-default true
        string  system.poold.objectives wt-load

        pool pool_default
                int     pool.sys_id 0
                boolean pool.active true
                boolean pool.default true
                int     pool.importance 1
                string  pool.comment 
                pset    pset_default

        pset pset_default
                int     pset.sys_id -1
                boolean pset.default true
                uint    pset.min 1
                uint    pset.max 65536
                string  pset.units population
                uint    pset.load 193
                uint    pset.size 4
                string  pset.comment 

                cpu
                        int     cpu.sys_id 1
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 3
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 0
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 2
                        string  cpu.comment 
                        string  cpu.status on-line
</pre>


With <code>poolcfg -c "info"</code> you can view the currently persistent configuration with regard to the processor distribution. This configuration is contained in the file <code>/etc/pooladm.conf</code>. In its native form it's an XML file, but <code>poolcfg -c "info"</code> turns it into a somewhat (but not much ;) ) more readable format. When Solaris detects this file at startup, it will automatically start the pool facility.
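
If you are curious about the native format, a quick peek at the first line of the file shows the XML declaration (the rest of the file encodes the configuration you just discovered; this is from my system, yours may differ slightly):

<pre>
jmoekamp@hivemind:~# head -1 /etc/pooladm.conf
<?xml version="1.0"?>
</pre>
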


<pre>
jmoekamp@hivemind:~# poolcfg -c "info"

system default
        string  system.comment 
        int     system.version 1
        boolean system.bind-default true
        string  system.poold.objectives wt-load

        pool pool_default
                int     pool.sys_id 0
                boolean pool.active true
                boolean pool.default true
                int     pool.importance 1
                string  pool.comment 
                pset    pset_default

        pset pset_default
                int     pset.sys_id -1
                boolean pset.default true
                uint    pset.min 1
                uint    pset.max 65536
                string  pset.units population
                uint    pset.load 218
                uint    pset.size 4
                string  pset.comment 

                cpu
                        int     cpu.sys_id 1
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 3
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 0
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 2
                        string  cpu.comment 
                        string  cpu.status on-line
</pre>


At the start both outputs are identical, but the difference will become much clearer soon.
Let's configure the pool facility. First we create a new processor set:

<pre>
jmoekamp@hivemind:~# poolcfg -c 'create pset hog_set (uint pset.min=1 ; uint pset.max=1)'
</pre>

This command creates a processor set that has at least one CPU and at most one CPU. Or to put it more simply: it has exactly one CPU all the time. In the next step we create a pool. You can think of the pool as the entity that <code>poold</code> uses as an administrative unit.

<pre>
jmoekamp@hivemind:~# poolcfg -c 'create pool hog_pool'
</pre>

For my example I've created a pool called <code>hog_pool</code>. Okay ... now we have to configure the connection between the pool and the processor set.

<pre>
jmoekamp@hivemind:~# poolcfg -c 'associate pool hog_pool (pset hog_set)'
</pre>

The <code>associate</code> subcommand configures this connection. Okay, let's have a short look at the state of the pool facility.

<pre>
jmoekamp@hivemind:~# poolstat
                              pset
 id pool                 size used load
  0 pool_default            4 0,00 0,18
</pre>

We see just one pool at the moment. We still have to activate the configuration we just created, by using the <code>pooladm</code> command:

<pre>
jmoekamp@hivemind:~# pooladm -c
</pre>

When you use the <code>poolstat</code> command again, you will see two pools, and the distribution matches the configured state: one CPU for <code>hog_pool</code> and the rest (three in my case) for the default pool.


<pre>
jmoekamp@hivemind:~# poolstat
                              pset
 id pool                 size used load
  3 hog_pool                1 0,00 0,00
  0 pool_default            3 0,00 0,17
</pre>


When you look at the currently running configuration, you will see that one processor has moved to the <code>hog_set</code> processor set.


<pre>
jmoekamp@hivemind:~# pooladm

system default
        string  system.comment 
        int     system.version 1
        boolean system.bind-default true
        string  system.poold.objectives wt-load

        pool hog_pool
                int     pool.sys_id 3
                boolean pool.active true
                boolean pool.default false
                int     pool.importance 1
                string  pool.comment 
                pset    hog_set

        pool pool_default
                int     pool.sys_id 0
                boolean pool.active true
                boolean pool.default true
                int     pool.importance 1
                string  pool.comment 
                pset    pset_default

        pset hog_set
                int     pset.sys_id 1
                boolean pset.default false
                uint    pset.min 1
                uint    pset.max 1
                string  pset.units population
                uint    pset.load 0
                uint    pset.size 1
                string  pset.comment 

                cpu
                        int     cpu.sys_id 0
                        string  cpu.comment 
                        string  cpu.status on-line

        pset pset_default
                int     pset.sys_id -1
                boolean pset.default true
                uint    pset.min 1
                uint    pset.max 65536
                string  pset.units population
                uint    pset.load 177
                uint    pset.size 3
                string  pset.comment 

                cpu
                        int     cpu.sys_id 1
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 3
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 2
                        string  cpu.comment 
                        string  cpu.status on-line
</pre>


But there is a little gotcha in the configuration: the currently persistent configuration looks a little different. Let's look up that configuration.


<pre>
jmoekamp@hivemind:~# poolcfg -c "info"

system default
        string  system.comment 
        int     system.version 1
        boolean system.bind-default true
        string  system.poold.objectives wt-load

        pool pool_default
                int     pool.sys_id 0
                boolean pool.active true
                boolean pool.default true
                int     pool.importance 1
                string  pool.comment 
                pset    pset_default

        pool hog_pool
                boolean pool.active true
                boolean pool.default false
                int     pool.importance 1
                string  pool.comment 
                pset    hog_set

        pset pset_default
                int     pset.sys_id -1
                boolean pset.default true
                uint    pset.min 1
                uint    pset.max 65536
                string  pset.units population
                uint    pset.load 218
                uint    pset.size 4
                string  pset.comment 

                cpu
                        int     cpu.sys_id 1
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 3
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 0
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 2
                        string  cpu.comment 
                        string  cpu.status on-line

        pset hog_set
                int     pset.sys_id -2
                boolean pset.default false
                uint    pset.min 1
                uint    pset.max 1
                string  pset.units population
                uint    pset.load 0
                uint    pset.size 0
                string  pset.comment
</pre>


The <code>hog_set</code> and <code>hog_pool</code> exist, but <code>hog_set</code> has no processor and <code>pset_default</code> still has all the CPUs ...
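
A compact way to see this active-versus-persistent difference at a glance is to diff the two outputs we have already used (nothing new here, just plain shell):

<pre>
jmoekamp@hivemind:~# pooladm > /tmp/pool.active
jmoekamp@hivemind:~# poolcfg -c "info" > /tmp/pool.persistent
jmoekamp@hivemind:~# diff /tmp/pool.active /tmp/pool.persistent
</pre>

The diff output highlights exactly the stanzas that differ between the two views.
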
To make the current distribution persistent across reboots, you have to write the currently active configuration into the persistent configuration:

<pre>
jmoekamp@hivemind:~# pooladm -s
</pre>

When you now recheck the persistent configuration, you will see that it's equal to the configuration you get with the <code>pooladm</code> command. However, you should do this step only when you are sure that the configuration matches your expectations. Let's recheck the persistent configuration.

<pre>
jmoekamp@hivemind:~# poolcfg -c "info"

system default
        string  system.comment 
        int     system.version 1
        boolean system.bind-default true
        string  system.poold.objectives wt-load

        pool hog_pool
                int     pool.sys_id 3
                boolean pool.active true
                boolean pool.default false
                int     pool.importance 1
                string  pool.comment 
                pset    hog_set

        pool pool_default
                int     pool.sys_id 0
                boolean pool.active true
                boolean pool.default true
                int     pool.importance 1
                string  pool.comment 
                pset    pset_default

        pset hog_set
                int     pset.sys_id 1
                boolean pset.default false
                uint    pset.min 1
                uint    pset.max 1
                string  pset.units population
                uint    pset.load 0
                uint    pset.size 1
                string  pset.comment 

                cpu
                        int     cpu.sys_id 0
                        string  cpu.comment 
                        string  cpu.status on-line

        pset pset_default
                int     pset.sys_id -1
                boolean pset.default true
                uint    pset.min 1
                uint    pset.max 65536
                string  pset.units population
                uint    pset.load 166
                uint    pset.size 3
                string  pset.comment 

                cpu
                        int     cpu.sys_id 1
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 3
                        string  cpu.comment 
                        string  cpu.status on-line

                cpu
                        int     cpu.sys_id 2
                        string  cpu.comment 
                        string  cpu.status on-line

</pre>

Using the resource pools

Okay, now we have this nice processor set, but how do we use it? There are several possibilities, but in my example I will explain the usage of projects to use the resource pools. First I create a project:

<pre>
jmoekamp@hivemind:~# projadd -U jmoekamp -K project.pool=hog_pool hog_project
</pre>

This command creates a project with the name <code>hog_project</code>; the user jmoekamp is a member of this project. The most interesting part is <code>-K project.pool=hog_pool</code>. This tells Solaris to use the pool <code>hog_pool</code> for everything executed within this project. If you remember the Resource Manager tutorial, you already know how to start a command under a dedicated project (and you know that you are already running your processes within a project called <code>user.jmoekamp</code>, too). You can do this by using the <code>newtask</code> command. I will start some CPU hogs to demonstrate the impact of the CPU pool.
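
Before starting anything, you can check what <code>projadd</code> has written. A sketch of what <code>projects -l</code> should print for the new project (I'm sketching the output format from memory; the project ID is assigned automatically and will differ on your system):

<pre>
jmoekamp@hivemind:~$ projects -l hog_project
hog_project
        projid : 115
        comment: ""
        users  : jmoekamp
        groups : (none)
        attribs: project.pool=hog_pool
</pre>
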

<pre>
jmoekamp@hivemind:~$ newtask -p hog_project ./cpuhog.pl &
[1] 24633
jmoekamp@hivemind:~$ newtask -p hog_project ./cpuhog.pl &
[2] 24636
jmoekamp@hivemind:~$ newtask -p hog_project ./cpuhog.pl &
[3] 24639
jmoekamp@hivemind:~$ newtask -p hog_project ./cpuhog.pl &
[4] 24642
jmoekamp@hivemind:~$ newtask -p hog_project ./cpuhog.pl &
[5] 24645
</pre>

When you look at the output of <code>poolstat</code> a little bit later, you will see an unloaded default pool as well as a really hard-working <code>hog_pool</code>:

<pre>
jmoekamp@hivemind:~# poolstat
                              pset
 id pool                 size used load
  3 hog_pool                1 0,00 4,84
  0 pool_default            3 0,00 0,07
</pre>
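
By the way, <code>poolstat</code> can also report repeatedly and in more detail. The <code>-r</code> option selects the resource type to report on, and with an interval argument it behaves like the other *stat tools; a sketch (check your release's man page for the exact columns):

<pre>
jmoekamp@hivemind:~# poolstat -r pset 5
</pre>
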

Just to check the behavior from a different perspective:

<pre>
jmoekamp@hivemind:~# prstat -J -C 1
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP       
 24645 jmoekamp 9068K 2200K cpu0    41    0   0:01:50 5,5% cpuhog.pl/1
 24642 jmoekamp 9068K 2200K run     32    0   0:01:44 5,1% cpuhog.pl/1
 24636 jmoekamp 9068K 2200K run     32    0   0:01:44 5,0% cpuhog.pl/1
 24639 jmoekamp 9068K 2200K run     32    0   0:01:37 4,7% cpuhog.pl/1
 24633 jmoekamp 9068K 2200K run     32    0   0:01:35 4,5% cpuhog.pl/1
[...]
PROJID    NPROC  SWAP   RSS MEMORY      TIME  CPU PROJECT                     
   115        5 1540K 8988K   0,2%   0:08:30  25% hog_project                 
[...]
Total: 5 processes, 5 lwps, load averages: 4,99, 4,05, 2,12
</pre>

Although the system has four CPUs, there is just one cpuhog on a CPU at any given time; all the others are runnable, but have to wait until they are dispatched onto cpu0. Okay, we don't need those hogs any longer ...

<pre>
jmoekamp@hivemind:~# pkill "cpuhog.pl"
[1]   Terminated              newtask -p hog_project ./cpuhog.pl
[2]   Terminated              newtask -p hog_project ./cpuhog.pl
[3]   Terminated              newtask -p hog_project ./cpuhog.pl
[4]   Terminated              newtask -p hog_project ./cpuhog.pl
[5]   Terminated              newtask -p hog_project ./cpuhog.pl</pre>

It's possible to move an already running process under the control of a project and thus under the control of the resource pools.
Let's assume you've started a single cpuhog:

<pre>
jmoekamp@hivemind:~$ ./cpuhog.pl &
[6] 24784
</pre>

With <code>prstat 1</code> you will see that <code>./cpuhog.pl</code> moves from CPU to CPU.
We can assign a project to a running process. We need the process ID of the process for this task:

<pre>
jmoekamp@hivemind:~$ newtask -p hog_project -c 24784
</pre>

When you look at the output of <code>prstat 1</code> now, you will recognize that <code>./cpuhog.pl</code> just stays on cpu0, the CPU of the <code>hog_pool</code>, and doesn't move away. So we have proven that it's under the control of the resource pool. Another way to achieve the same result is the <code>poolbind</code> command. With <code>poolbind</code> you can assign tasks, projects or processes to a certain pool:

<pre>
jmoekamp@hivemind:~$ ./cpuhog.pl &
[1] 25681
jmoekamp@hivemind:~$ pfexec bash
jmoekamp@hivemind:~# poolbind -Q 25681
25681   pset    pset_default
jmoekamp@hivemind:~# poolbind -p hog_pool -i pid 25681
jmoekamp@hivemind:~# poolbind -Q 25681
25681   pset    hog_set
</pre>
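
You can cross-check the binding without watching <code>prstat</code>, too. To my knowledge, Solaris <code>ps</code> accepts <code>project</code> and <code>psr</code> (the processor a process last ran on) as <code>-o</code> fields, so after the rebinding the PSR column should settle on the CPU of <code>hog_set</code>:

<pre>
jmoekamp@hivemind:~$ ps -o pid,project,psr -p 25681
</pre>

And <code>poolbind</code> is not limited to single processes. With the <code>projid</code> id type you can rebind a whole project at once; a sketch (again, double-check the id types against your release's man page):

<pre>
jmoekamp@hivemind:~# poolbind -p hog_pool -i projid hog_project
</pre>
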

However, I prefer the first method :)

This was just the surface

In this tutorial I barely touched the resource pool facility of Solaris. It has vastly more functions: dynamic resource pools that are resized based on load measurement and a rule set defining which pools, and thus which applications, are most important; support for several configuration files, so that different pool configurations can be activated via cron (for example, day-shift and night-shift configurations); and mechanisms to force poold to obey locality rules (for example, to prevent poold from choosing processors on separate uniboards of a large server). You can even configure a different scheduling class per pool.
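
To illustrate that last point: to my knowledge the scheduling class of a pool is just another property, <code>pool.scheduler</code>. A sketch that would put our example pool under the Fair Share Scheduler (verify the property name against your release's man pages):

<pre>
jmoekamp@hivemind:~# poolcfg -c 'modify pool hog_pool (string pool.scheduler="FSS")'
jmoekamp@hivemind:~# pooladm -c
</pre>

And when you are done playing with the example, you can remove it again: <code>destroy</code> removes the objects from the persistent configuration, <code>pooladm -c</code> activates the pruned configuration, and <code>pooladm -d</code> disables the facility altogether.

<pre>
jmoekamp@hivemind:~# poolcfg -c 'destroy pool hog_pool'
jmoekamp@hivemind:~# poolcfg -c 'destroy pset hog_set'
jmoekamp@hivemind:~# pooladm -c
jmoekamp@hivemind:~# pooladm -d
jmoekamp@hivemind:~# projdel hog_project
</pre>
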

Conclusion

The pool facility is a really powerful feature to control the usage of CPU resources. However, many people aren't really aware of the possibilities of this toolset. This may be because most people use their servers in a single-task-per-server configuration, or because they aren't aware of the possible performance advantages of forcing processes onto certain CPUs to leverage the topology of a computer system.

Do you want to learn more?

Documentation
docs.sun.com - Resource Pools (Overview)
docs.sun.com - Creating and Administering Resource Pools (Tasks)

man pages
poold - automated resource pools partitioning daemon
pooladm - activate and deactivate the resource pools facility
poolcfg - create and modify resource pool configuration files
poolstat - report active pool statistics
poolbind - bind processes, tasks, or projects or query binding of processes to resource pools