Commit fb20c56e authored by Moe Jette

Update documentation, partition key replaced by RootOnly flag.

parent 03bb7706
@@ -350,13 +350,9 @@ specification will utilize this partition.
 Possible values are "YES" and "NO".
 The default value is "NO".
-<dt>Key
-<dd>Specifies if SLURM provided key is required for a job to
-execute in this partition.
-The key is provided to user root upon request and is invalidated
-after one use or expiration.
-The key may be used to delegate control of partitions to external
-schedulers.
+<dt>RootOnly
+<dd>Specifies if only user ID zero (user <i>root</i>) may
+initiate jobs in this partition.
 Possible values are "YES" and "NO".
 The default value is "NO".
@@ -417,7 +413,7 @@ PartitionName=DEFAULT MaxTime=30 MaxNodes=2
 PartitionName=login Nodes=lx[0001-0002] State=DOWN
 PartitionName=debug Nodes=lx[0003-0030] State=UP Default=YES
 PartitionName=class Nodes=lx[0031-0040] AllowGroups=students
-PartitionName=batch Nodes=lx[0041-9999] MaxTime=UNLIMITED MaxNodes=4096 Key=YES
+PartitionName=batch Nodes=lx[0041-9999] MaxTime=UNLIMITED MaxNodes=4096 RootOnly=YES
 </pre>
 <p>
 APIs and an administrative tool can be used to alter the SLURM
@@ -640,7 +636,7 @@ PartitionName=DEFAULT MaxTime=30 MaxNodes=2
 PartitionName=login Nodes=lx[0001-0002] State=DOWN
 PartitionName=debug Nodes=lx[0003-0030] State=UP Default=YES
 PartitionName=class Nodes=lx[0031-0040] AllowGroups=students
-PartitionName=batch Nodes=lx[0041-9999] MaxTime=UNLIMITED MaxNodes=4096 Key=YES
+PartitionName=batch Nodes=lx[0041-9999] MaxTime=UNLIMITED MaxNodes=4096 RootOnly=YES
 </pre>
 <a name="SampleAdmin"><h2>Sample scontrol Execution</h2></a>
@@ -661,7 +657,7 @@ Remove node lx0030 from service, removing jobs as needed:
 <hr>
 URL = http://www-lc.llnl.gov/dctg-lc/slurm/admin.guide.html
-<p>Last Modified July 30, 2002</p>
+<p>Last Modified August 2, 2002</p>
 <address>Maintained by <a href="mailto:slurm-dev@lists.llnl.gov">
 slurm-dev@lists.llnl.gov</a></address>
 </body>
...
-.TH SCONTROL "1" "July 2002" "scontrol 0.1" "Slurm components"
+.TH SCONTROL "1" "August 2002" "scontrol 0.1" "Slurm components"
 .SH "NAME"
 scontrol \- Used to view and modify Slurm configuration and state.
@@ -94,7 +94,7 @@ Display the version number of scontrol being executed.
 .br
 scontrol: show part class
 .br
-PartitionName=class MaxTime=30 MaxNodes=2 TotalNodes=10 TotalCPUs=160 Key=NO
+PartitionName=class MaxTime=30 MaxNodes=2 TotalNodes=10 TotalCPUs=160 RootOnly=NO
 .br
 Default=NO Shared=NO State=UP Nodes=lx[0031-0040] AllowGroups=students
 .br
...
-.TH "Slurm API" "3" "July 2002" "Morris Jette" "Slurm administrative calls"
+.TH "Slurm API" "3" "August 2002" "Morris Jette" "Slurm administrative calls"
 .SH "NAME"
 .LP
 \fBslurm_job\fR \- Slurm administrative calls
@@ -6,14 +6,6 @@
 .LP
 #include <slurm.h>
 .LP
-void \fBslurm_free_key\fR (
-.br
-slurm_key_t *\fIslurm_key_ptr\fP
-.br
-);
-.LP
-slurm_key_t *\fBslurm_get_key\fR ( );
-.LP
 int \fBslurm_reconfigure\fR ( );
 .LP
 int \fBslurm_shutdown\fR ( );
@@ -44,9 +36,6 @@ int \fBslurm_update_partition\fR (
 .SH "ARGUMENTS"
 .LP
 .TP
-\fIslurm_key_ptr\fP
-Specifies the pointer to a Slurm generated key as returned by \fBslurm_get_key\fR.
-.TP
 \fIupdate_job_msg_ptr\fP
 Specifies the pointer to a job update request specification. See slurm.h for full details on the data structure's contents.
 .TP
@@ -57,10 +46,6 @@ Specifies the pointer to a node update request specification. See slurm.h for full details on the data structure's contents.
 Specifies the pointer to a partition update request specification. See slurm.h for full details on the data structure's contents.
 .SH "DESCRIPTION"
 .LP
-\fBslurm_free_key\fR Release the storage allocated in response to a call of the function \fBslurm_get_key\fR.
-.LP
-\fBslurm_get_key\fR Generate a key authorizing use of some Slurm partitions (depending upon partition configuration). This storage should be released by executing \fBslurm_free_key\fR. This function may only be successfully executed by user root.
-.LP
 \fBslurm_init_part_desc_msg\fR Initialize the contents of a partition descriptor with default values. Execute this function before executing \fBslurm_update_part\fR.
 .LP
 \fBslurm_reconfigure\fR Request that the Slurm controller re-read its configuration file. The new configuration parameters take effect immediately. This function may only be successfully executed by user root.
@@ -102,8 +87,6 @@ int main (int argc, char *argv[])
 partition_desc_msg_t update_part_msg ;
 .br
 resource_allocation_response_msg_t* slurm_alloc_msg_ptr ;
-.br
-slurm_key_t *slurm_key_ptr;
 .LP
 if (slurm_reconfigure ( )) {
 .br
@@ -150,8 +133,6 @@ int main (int argc, char *argv[])
 .br
 }
 .LP
-slurm_key_ptr = slurm_get_key ( );
-.br
 slurm_init_part_desc_msg ( &update_part_msg );
 .br
 job_mesg. name = ("job01\0");
@@ -159,8 +140,6 @@ int main (int argc, char *argv[])
 job_mesg. partition = ("reserved\0");
 .br
 job_mesg. num_nodes = 400;
-.br
-job_mesg. slurm_key_ptr = slurm_key_ptr;
 .br
 if (slurm_allocate_resources(&job_desc_msg,&slurm_alloc_msg_ptr,true)) {
 .br
@@ -175,8 +154,6 @@ int main (int argc, char *argv[])
 slurm_alloc_msg_ptr->node_list, slurm_alloc_msg_ptr->job_id );
 .br
 slurm_free_resource_allocation_response_msg ( slurm_alloc_msg_ptr );
-.br
-slurm_free_key ( slurm_key_ptr );
 .br
 exit (0);
 .br
...
-.TH "Slurm API" "3" "July 2002" "Morris Jette" "Slurm error calls"
+.TH "Slurm API" "3" "August 2002" "Morris Jette" "Slurm error calls"
 .SH "NAME"
 .LP
 \fBslurm_job\fR \- Slurm error calls
@@ -86,9 +86,8 @@ details.
 \fBslurm_complete_job\fR(3), \fBslurm_complete_job_step\fR(3),
 \fBslurm_free_ctl_conf\fR(3),
 \fBslurm_free_resource_allocation_response_msg\fR(3), \fBslurm_free_job_info\fR(3),
-\fBslurm_free_key\fR(3), \fBslurm_free_node_info\fR(3), \fBslurm_free_partition_info\fR(3),
+\fBslurm_free_node_info\fR(3), \fBslurm_free_partition_info\fR(3),
 \fBslurm_free_submit_response_response_msg\fR(3),
-\fBslurm_get_key\fR(3),
 \fBslurm_init_part_desc_msg\fR(3), \fBslurm_init_job_desc_msg\fR(3),
 \fBslurm_job_will_run\fR(3),
 \fBslurm_load_ctl_conf\fR(3), \fBslurm_load_jobs\fR(3), \fBslurm_load_node\fR(3), \fBslurm_load_partitions\fR(3),
...
-.TH "Slurm API" "3" "July 2002" "Morris Jette" "Slurm job management calls"
+.TH "Slurm API" "3" "August 2002" "Morris Jette" "Slurm job management calls"
 .SH "NAME"
 .LP
 \fBslurm_job\fR \- Slurm job management calls
@@ -251,4 +251,6 @@ FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
 details.
 .SH "SEE ALSO"
 .LP
-\fBscancel\fR(1), \fBsrun\fR(1), \fBslurm_free_job_info\fR(3), \fBslurm_free_key\fR(3), \fBslurm_get_errno\fR(3), \fBslurm_load_jobs\fR(3), \fBslurm_get_key\fR(3), \fBslurm_perror\fR(3), \fBslurm_strerror\fR(3)
+\fBscancel\fR(1), \fBsrun\fR(1), \fBslurm_free_job_info\fR(3),
+\fBslurm_get_errno\fR(3), \fBslurm_load_jobs\fR(3),
+\fBslurm_perror\fR(3), \fBslurm_strerror\fR(3)
@@ -409,14 +409,14 @@ to the SLURM controller. \slurmctld\ may use PAM modules to authenticate
 users based upon UNIX passwords, Kerberos, or any other method that
 may be represented in a PAM module.
-Access to some partitions is restricted via a ``partition key''. This may be used,
+Access to some partitions is restricted via a ``RootOnly'' flag.
+If this flag is set, job submit or allocation requests to this
+partition will only be accepted if the effective user ID originating
+the RPC is zero ({\tt root}). This may be used,
 for example, to provide specific external schedulers with exclusive access
 to partitions. Individual users will not be permitted to directly submit
 jobs to such a partition, which would prevent the external scheduler
-from effectively managing it. This key will be generated by \slurmctld\
-and provided to user {\tt root} via API upon request. The external scheduler,
-which must run as user {\tt root} to submit jobs on behalf of other
-users, will submit jobs as the appropriate user using this ``partition key''.
+from effectively managing it.
 \subsection{Example: Executing a Batch Job}
@@ -672,7 +672,7 @@ scheduling component.
 Data to be associated with a partition will include:
 \begin{itemize}
 \item Name
-\item Access controlled by key granted to user root (to support external schedulers)
+\item RootOnly flag to indicate that only user {\tt root} may initiate jobs
 \item List of associated nodes (may use regular expression)
 \item State of partition (UP or DOWN)
 \item Maximum time limit for any job
@@ -714,8 +714,7 @@ task count, the need for contiguous nodes assignment, and (optionally)
 an explicit list of nodes. Nodes will be selected so as to satisfy all
 job requirements. For example a job requesting four CPUs and four nodes
 will actually be allocated eight CPUs and four nodes in the case of all
-nodes having two CPUs each. The submitted job may have an associated
-``partition key'', and by virtue of this can be granted access to specific partitions.
+nodes having two CPUs each.
 The request may also indicate node configuration constraints such as
 minimum real memory or CPUs per node, required features, etc.
@@ -795,7 +794,7 @@ PartitionName=DEFAULT MaxTime=30 MaxNodes=2
 PartitionName=login Nodes=lx[0000-0002] State=DOWN # Don't schedule work here
 PartitionName=debug Nodes=lx[0003-0030] State=UP Default=YES
 PartitionName=class Nodes=lx[0031-0040] AllowGroups=students,teachers
-PartitionName=batch Nodes=lx[0041-9999] MaxTime=UNLIMITED MaxNodes=4096 Key=YES
+PartitionName=batch Nodes=lx[0041-9999] MaxTime=UNLIMITED MaxNodes=4096 RootOnly=YES
 \end{verbatim}
 \subsection{Job Manager}
...