.TH "slurm.conf" "5" "February 2005" "slurm.conf 0.5" "Slurm configuration file"
.SH "NAME"
slurm.conf \- Slurm configuration file
.SH "DESCRIPTION"
\fB/etc/slurm.conf\fP is an ASCII file which describes general SLURM configuration
information, the nodes to be managed, information about how those nodes are
grouped into partitions, and various scheduling parameters associated with
those partitions. The file location can be modified at system build time using
the DEFAULT_SLURM_CONF parameter.
.LP
The contents of the file are case insensitive except for the names of nodes
and partitions. Any text following a "#" in the configuration file is treated
as a comment through the end of that line.
The size of each line in the file is limited to 1024 characters.
Changes to the configuration file take effect upon restart of
SLURM daemons, daemon receipt of the SIGHUP signal, or execution
of the command "scontrol reconfigure" unless otherwise noted.
.LP
The overall configuration parameters available include:
.TP
\fBAuthType\fR
Define the authentication method for communications between SLURM components.
Acceptable values at present include "auth/none", "auth/authd",
and "auth/munge".
The default value is "auth/none", which means the UID included in
communication messages is not verified.
This may be fine for testing purposes, but
\fBdo not use "auth/none" if you desire any security\fR.
"auth/authd" indicates that Brett Chun's authd is to be used (see
"http://www.theether.org/authd/" for more information).
"auth/munge" indicates that Chris Dunlap's munge is to be used
(this is the best supported authentication mechanism for SLURM,
see "http://www.llnl.gov/linux/munge/" for more information).
All SLURM daemons and commands must be terminated prior to changing
the value of \fBAuthType\fR and later restarted (SLURM jobs can be
preserved).
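For example, a site relying on munge for authentication might set
(illustrative):
.br
AuthType=auth/munge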
.TP
\fBBackupAddr\fR
Name by which \fBBackupController\fR should be referred to in
establishing a communications path. This name will
be used as an argument to the gethostbyname() function for
identification. For example, "elx0000" might be used to designate
the ethernet address for node "lx0000".
By default the \fBBackupAddr\fR will be identical in value to
\fBBackupController\fR.
.TP
\fBBackupController\fR
The name of the machine where SLURM control functions are to be
executed in the event that \fBControlMachine\fR fails. This node
may also be used as a compute server if so desired. It will come into service
as a controller only upon the failure of ControlMachine and will revert
to a "standby" mode when the ControlMachine becomes available once again.
This should be a node name without the full domain name (e.g. "lx0002").
While not essential, it is recommended that you specify a backup controller.
.TP
\fBCheckpointType\fR
Define the system-initiated checkpoint method to be used for user jobs.
The slurmctld daemon must be restarted for a change in CheckpointType
to take effect.
.TP
\fBControlAddr\fR
Name by which \fBControlMachine\fR should be referred to in
establishing a communications path. This name will
be used as an argument to the gethostbyname() function for
identification. For example, "elx0000" might be used to designate
the ethernet address for node "lx0000".
By default the \fBControlAddr\fR will be identical in value to
\fBControlMachine\fR.
.TP
\fBControlMachine\fR
The name of the machine where SLURM control functions are executed.
This should be a node name without the full domain name (e.g. "lx0001").
This value must be specified.
.TP
\fBEpilog\fR
Fully qualified pathname of a script to execute as user root on every
node when a user's job completes (e.g. "/usr/local/slurm/epilog"). This may
be used to purge files, disable user login, etc. By default there is no epilog.
.TP
\fBFastSchedule\fR
If set to 1 (the default), then consider the configuration of each node
to be that specified in the configuration file. If set to 0, then base
scheduling decisions upon the actual configuration of each individual node.
If the number of node configuration entries in the configuration file
is significantly lower than the number of nodes, setting FastSchedule to
1 will permit much faster scheduling decisions to be made.
(The scheduler can just check the values in a few configuration records
instead of possibly thousands of node records. If a job can't be initiated
immediately, the scheduler may execute these tests repeatedly.)
Note that on systems with hyper-threading, the processor count
reported by the node will be twice the actual processor count.
Consider which value you want to be used for scheduling purposes.
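For example (node names and resource values are illustrative), a single
configuration record such as the following can describe 1024 nodes, so with
\fBFastSchedule\fR set to 1 the scheduler consults one record rather than
1024 individual node records:
.br
# illustrative node names and sizes
.br
FastSchedule=1
.br
NodeName=lx[0-1023] Procs=2 RealMemory=2048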
.TP
\fBFirstJobId\fR
The job id to be used for the first job submitted to SLURM without a
specific requested value. Job id values generated will be incremented by 1
for each subsequent job. This may be used to provide a meta-scheduler
with a job id space which is disjoint from the interactive jobs.
The default value is 1.
.TP
\fBHeartbeatInterval\fR
The interval, in seconds, at which the SLURM controller tests the
status of other daemons. The default value is 30 seconds.
.TP
\fBInactiveLimit\fR
The interval, in seconds, a job or job step is permitted to be inactive
before it is terminated. A job or job step is considered inactive if
the associated srun command is not responding to slurm daemons. This
could be due to the termination of the srun command or the program
being in a stopped state. A batch job is considered inactive if it
has no active job steps (e.g. periods of pre- and post-processing).
This limit permits defunct jobs to be purged in a timely fashion
without waiting for their time limit to be reached.
This value should reflect the possibility that the srun command may be
stopped by a debugger or that considerable time could be required for batch
job pre- and post-processing. The default value is unlimited (zero).
.TP
\fBJobCompLoc\fR
The interpretation of this value depends upon the logging mechanism
specified by the \fBJobCompType\fR parameter.
.TP
\fBJobCompType\fR
Define the job completion logging mechanism type.
Acceptable values at present include "jobcomp/none", "jobcomp/filetxt",
and "jobcomp/script".
The default value is "jobcomp/none", which means that upon job completion
the record of the job is purged from the system.
The value "jobcomp/filetxt" indicates that a record of the job should be
written to a text file specified by the \fBJobCompLoc\fR parameter.
The value "jobcomp/script" indicates that a script specified by the
\fBJobCompLoc\fR parameter is to be executed with environment variables
indicating the job information.
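For example, to record completed jobs in a text file (the pathname is
illustrative):
.br
JobCompType=jobcomp/filetxt
.br
JobCompLoc=/var/log/slurm.job.log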
.TP
\fBJobCredentialPrivateKey\fR
Fully qualified pathname of a file containing a private key used for
authentication by Slurm daemons.
.TP
\fBJobCredentialPublicCertificate\fR
Fully qualified pathname of a file containing a public key used for
authentication by Slurm daemons.
.TP
\fBKillTree\fR
If set to "1", signals (e.g. Ctrl-C or scancel) are forwarded to all descendant
processes of one that was directly invoked by the user. This is always
required if \fBMpichGmDirectSupport\fR is set to "1". The default behavior
is that signals are forwarded to processes that belong to the process group
of the process that was directly invoked by the user.
NOTE: This option is not currently supported on AIX systems.
.TP
\fBKillWait\fR
The interval, in seconds, given to a job's processes between the
SIGTERM and SIGKILL signals upon reaching its time limit.
If the job fails to terminate gracefully
in the interval specified, it will be forcibly terminated.
The default value is 30 seconds.
.TP
\fBMaxJobCount\fR
The maximum number of jobs SLURM can have in its active database
at one time. Set the values of \fBMaxJobCount\fR and \fBMinJobAge\fR
to ensure the slurmctld daemon does not exhaust its memory or other
resources. Once this limit is reached, requests to submit additional
jobs will fail. The default value is 2000 jobs. This value may not
be reset via "scontrol reconfig". It only takes effect upon restart
of the slurmctld daemon.
.TP
\fBMinJobAge\fR
The minimum age of a completed job before its record is purged from
SLURM's active database. Set the values of \fBMaxJobCount\fR and
\fBMinJobAge\fR to ensure the slurmctld daemon does not exhaust
its memory or other resources. The default value is 300 seconds.
A value of zero prevents any job record purging.
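For example (values are illustrative), to allow up to 10000 jobs in the
active database and purge completed job records after one hour:
.br
MaxJobCount=10000
.br
MinJobAge=3600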
.TP
\fBMpichGmDirectSupport\fR
If set to "1", srun handles executable files linked with the MPICH-GM
library directly, rather than via mpirun (which uses rsh). If set, \fBKillTree\fR
must also be set to "1".
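For example, to launch MPICH-GM executables directly, both options would be
set together (illustrative):
.br
MpichGmDirectSupport=1
.br
KillTree=1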
.TP
\fBPluginDir\fR
Identifies the places in which to look for SLURM plugins.
This is a colon-separated list of directories, like the PATH
environment variable.
The default value is "/usr/local/lib/slurm".
.TP
\fBProctrackType\fR
Identifies the plugin to be used for process tracking.
The slurmd daemon uses this mechanism to identify all processes
which are children of processes it spawns for a user job.
Acceptable values at present include "proctrack/aix" (which
is the default for AIX systems) and "proctrack/sid" (which
is the default for all other systems).
The slurmd daemon must be restarted for a change in ProctrackType
to take effect.
.TP
\fBProlog\fR
Fully qualified pathname of a script to execute as user root on every
node when a user's job begins execution (e.g. "/usr/local/slurm/prolog").
This may be used to purge files, enable user login, etc. By default there
is no prolog.
.TP
\fBReturnToService\fR
If set to 1, then a DOWN node will become available for use
upon registration. The default value is 0, which
means that a node will remain in the DOWN state
until a system administrator explicitly changes its state
(even if the slurmd daemon registers and resumes communications).
.TP
\fBSchedulerAuth\fR
An authentication token, if any, that must be used in a scheduler
communication protocol. The interpretation of this value depends
upon the value of \fBSchedulerType\fR. In the Wiki scheduler plugin,
this value must correspond to the checksum seed with which Maui was
compiled.
.TP
\fBSchedulerPort\fR
The port number on which slurmctld should listen for connection requests.
This value is only used by the Maui Scheduler (see \fBSchedulerType\fR).
.TP
\fBSchedulerRootFilter\fR
Identifies whether or not RootOnly partitions should be filtered from
any external scheduling activities. Currently only used by the built-in
backfill scheduling module "sched/backfill" (see \fBSchedulerType\fR).
.TP
\fBSchedulerType\fR
Identifies the type of scheduler to be used. Acceptable values include
"sched/builtin" for the built-in FIFO scheduler,
"sched/backfill" for a backfill scheduling module to augment
the default FIFO scheduling,
"sched/hold" to hold all newly arriving jobs if a file "/etc/slurm.hold"
exists, otherwise use the built-in FIFO scheduler, and
"sched/wiki" for the Wiki interface to the Maui Scheduler.
The default value is "sched/builtin".
Backfill scheduling will initiate lower-priority jobs if doing
so does not delay the expected initiation time of any higher
priority job.
Note that this backfill scheduler implementation is relatively
simple. It does not support partitions configured to share
resources (run multiple jobs on the same nodes) or support
jobs requesting specific nodes.
When initially setting the value to "sched/wiki", any pending jobs
must have their priority set to zero (held).
When changing the value from "sched/wiki", all pending jobs
should have their priority change from zero to some large number.
The \fBscontrol\fR command can be used to change job priorities.
The \fBslurmctld\fR daemon must be restarted for a change in
scheduler type to become effective.
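For example, to use the Wiki interface to the Maui Scheduler (the
authentication token and port number are illustrative):
.br
SchedulerType=sched/wiki
.br
SchedulerAuth=42 SchedulerPort=7004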
.TP
\fBSelectType\fR
Identifies the type of node selection algorithm to be used.
Acceptable values include
"select/linear" for a one-dimentional array of nodes in which
sequentially ordered nodes are preferable, and
"select/bluegene" for a three-dimentional Blue Gene system.
The default value is "select/bluegene" for Blue Gene systems
and "select/linear" for all other systems.
.TP
\fBSlurmUser\fR
The name of the user that the \fBslurmctld\fR daemon executes as.
For security purposes, a user other than "root" is recommended.
The default value is "root".
.TP
\fBSlurmctldDebug\fR
The level of detail to provide in the \fBslurmctld\fR daemon's logs.
Values from 0 to 7 are legal, with `0' being "quiet" operation and `7'
being insanely verbose.
.TP
\fBSlurmctldLogFile\fR
Fully qualified pathname of a file into which the \fBslurmctld\fR daemon's
logs are written.
The default value is none (performs logging via syslog).
.TP
\fBSlurmctldPidFile\fR
Fully qualified pathname of a file into which the \fBslurmctld\fR daemon
may write its process id. This may be used for automated signal processing.
The default value is "/var/run/slurmctld.pid".
.TP
\fBSlurmctldPort\fR
The port number that the SLURM controller, \fBslurmctld\fR, listens
to for work. The default value is SLURMCTLD_PORT as established at system
build time. NOTE: If the slurmctld and slurmd daemons execute on the same
nodes, the values of \fBSlurmctldPort\fR and \fBSlurmdPort\fR must be
different.
.TP
\fBSlurmctldTimeout\fR
The interval, in seconds, that the backup controller waits for the
primary controller to respond before assuming control.
The default value is 120 seconds.
.TP
\fBSlurmdDebug\fR
The level of detail to provide in the \fBslurmd\fR daemon's logs.
Values from 0 to 7 are legal, with `0' being "quiet" operation and `7' being
insanely verbose.
.TP
\fBSlurmdLogFile\fR
Fully qualified pathname of a file into which the \fBslurmd\fR daemon's
logs are written.
The default value is none (performs logging via syslog).
.TP
\fBSlurmdPidFile\fR
Fully qualified pathname of a file into which the \fBslurmd\fR daemon may write
its process id. This may be used for automated signal processing.
The default value is "/var/run/slurmd.pid".
.TP
\fBSlurmdPort\fR
The port number that the SLURM compute node daemon, \fBslurmd\fR, listens
to for work. The default value is SLURMD_PORT as established at system
build time. NOTE: If the slurmctld and slurmd daemons execute on the same
nodes, the values of \fBSlurmctldPort\fR and \fBSlurmdPort\fR must be
different.
.TP
\fBSlurmdSpoolDir\fR
Fully qualified pathname of a directory into which the \fBslurmd\fR
daemon's state information and batch job script information are written. This
must be a common pathname for all nodes, but should represent a directory which
is local to each node (reference a local file system). The default value
is "/var/spool/slurmd." \fBNOTE\fR: This directory is also used to store
\fBslurmd\fR's
shared memory lockfile, and \fBshould not be changed\fR unless the system
is being cleanly restarted. If the location of \fBSlurmdSpoolDir\fR is
changed and \fBslurmd\fR is restarted, the new daemon will attach to a
different shared memory region and lose track of any running jobs.
.TP
\fBSlurmdTimeout\fR
The interval, in seconds, that the SLURM controller waits for \fBslurmd\fR
to respond before configuring that node's state to DOWN.
The default value is 300 seconds.
A value of zero indicates the node should never be set DOWN if not responding.
.TP
\fBStateSaveLocation\fR
Fully qualified pathname of a directory into which the SLURM controller,
\fBslurmctld\fR, saves its state (e.g. "/usr/local/slurm/checkpoint").
SLURM state will be saved here to recover from system failures.
\fBSlurmUser\fR must be able to create files in this directory.
If you have a \fBBackupController\fR configured, this location should be
readable and writable by both systems.
If any slurm daemons terminate abnormally, their core files will also be written
into this directory.
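For example (the pathname is illustrative), a directory on a file system
shared by the primary and backup controllers might be used:
.br
StateSaveLocation=/usr/local/slurm/slurm.state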
.TP
\fBSwitchType\fR
Identifies the type of switch or interconnect used for application
communications.
Acceptable values include
"switch/none" for switches not requiring special processing for job launch
or termination (Myrinet, Ethernet, and InfiniBand),
"switch/elan" for Quadrics Elan 3 or Elan 4 interconnect.
The default value is "switch/none".
All SLURM daemons, commands and running jobs must be restarted for a
change in \fBSwitchType\fR to take effect.
If running jobs exist at the time \fBslurmctld\fR is restarted with a new
value of \fBSwitchType\fR, records of all jobs in any state may be lost.
.TP
\fBTmpFS\fR
Fully qualified pathname of the file system available to user jobs for
temporary storage. This parameter is used in establishing a node's \fBTmpDisk\fR
space.
The default value is "/tmp".
.TP
\fBWaitTime\fR
Specifies how many seconds the srun command should by default wait after
the first task terminates before terminating all remaining tasks. The
"--wait" option on the srun command line overrides this value.
If set to 0, this feature is disabled.
.LP
The configuration of nodes (or machines) to be managed by Slurm is
also specified in \fB/etc/slurm.conf\fR.
Only the NodeName must be supplied in the configuration file.
All other node configuration information is optional.
It is advisable to establish baseline node configurations,
especially if the cluster is heterogeneous.
Nodes which register to the system with less than the configured resources
(e.g. too little memory) will be placed in the "DOWN" state to
avoid scheduling jobs on them.
Establishing baseline configurations will also speed SLURM's
scheduling process by permitting it to compare job requirements
against these (relatively few) configuration parameters and
possibly avoid having to check job requirements
against every individual node's configuration.
The resources checked at node registration time are: Procs,
RealMemory and TmpDisk.
While baseline values for each of these can be established
in the configuration file, the actual values upon node
registration are recorded and these actual values may be
used for scheduling purposes (depending upon the value of
\fBFastSchedule\fR in the configuration file).
.LP
Default values can be specified with a record in which
"NodeName" is "DEFAULT".
The default entry values will apply only to lines following it in the
configuration file and the default values can be reset multiple times
in the configuration file with multiple entries where "NodeName=DEFAULT".
The "NodeName=" specification must be placed on every line
describing the configuration of nodes.
In fact, it is generally possible and desirable to define the
configurations of all nodes in only a few lines.
This convention permits significant optimization in the scheduling
of larger clusters.
In order to support the concept of jobs requiring consecutive nodes
on some architectures,
node specifications should be placed in this file in consecutive order.
If a specific node name is listed more than once in the configuration
file, only its "State" and "Reason" fields may be reset.
This may be useful to record the state of nodes which are temporarily
in a DOWN or DRAINED state without altering permanent configuration
information as shown in the example.
A job step's tasks are allocated to nodes in the order the nodes appear
in the configuration file. There is presently no capability within
SLURM to arbitrarily order a job step's tasks.
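For example (node names and values are illustrative), the following lines
define one set of defaults for a first group of nodes and then reset the
defaults before describing a second group:
.br
# illustrative node names and sizes
.br
NodeName=DEFAULT Procs=2 RealMemory=2048
.br
NodeName=lx[0-15]
.br
NodeName=DEFAULT Procs=4 RealMemory=4096
.br
NodeName=lx[16-31]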
.LP
A simple node range expression may optionally be used to specify
ranges of nodes to avoid building a configuration file with large
numbers of entries. The node range expression can contain one
pair of square brackets with a sequence of comma separated
numbers and/or ranges of numbers separated by a "-"
(e.g. "linux[0-64,128]", or "lx[15,18,32-33]").
Presently the numeric range must be the last characters in the
node name (e.g. "unit[0-31]rack1" is invalid).
The node configuration specifies the following information:
.TP
\fBNodeName\fR
Name that SLURM uses to refer to a node.
Typically this would be the string that "/bin/hostname -s"
returns; however, it may be an arbitrary string if
\fBNodeHostname\fR is specified.
If the \fBNodeName\fR is "DEFAULT", the values specified
with that record will apply to subsequent node specifications
unless explicitly set to other values in that node record or
replaced with a different set of default values.
For architectures in which the node order is significant,
nodes will be considered consecutive in the order defined.
For example, if the configuration for "NodeName=charlie" immediately
follows the configuration for "NodeName=baker" they will be
considered adjacent in the computer.
.TP
\fBNodeHostname\fR
The string that "/bin/hostname -s" returns.
A node range expression can be used to specify a set of nodes.
If an expression is used, the number of nodes identified by
\fBNodeHostname\fR on a line in the configuration file must
be identical to the number of nodes identified by \fBNodeName\fR.
By default, the \fBNodeHostname\fR will be identical in value to
\fBNodeName\fR.
.TP
\fBNodeAddr\fR
Name by which a node should be referred to in establishing
a communications path.
This name will be used as an
argument to the gethostbyname() function for identification.
If a node range expression is used to designate multiple nodes,
they must exactly match the entries in the \fBNodeName\fR
(e.g. "NodeName=lx[0-7] NodeAddr="elx[0-7]").
\fBNodeAddr\fR may also contain IP addresses.
By default, the \fBNodeAddr\fR will be identical in value to
\fBNodeName\fR.
.TP
\fBFeature\fR
A comma delimited list of arbitrary strings indicative of some
characteristic associated with the node.
There is no value associated with a feature at this time, a node
either has a feature or it does not.
If desired a feature may contain a numeric component indicating,
for example, processor speed.
By default a node has no features.
.TP
\fBRealMemory\fR
Size of real memory on the node in MegaBytes (e.g. "2048").
The default value is 1.
.TP
\fBProcs\fR
Number of processors on the node (e.g. "2").
The default value is 1.
.TP
\fBReason\fR
Identifies the reason for a node being in state "DOWN" or "DRAINED"
or "DRAINING". Use quotes to enclose a reason having more than one
word.
.TP
\fBState\fR
State of the node with respect to the initiation of user jobs.
Acceptable values are "BUSY", "DOWN", "DRAINED", "DRAINING", "IDLE",
and "UNKNOWN". "BUSY" indicates the node has been allocated work
and should not be used in the configuration file.
"DOWN" indicates the node failed and is unavailable to be allocated work.
"DRAINED" indicates the node was configured unavailable to be
allocated work and is presently not performing any work.
"DRAINING" indicates the node is unavailable to be allocated new
work, but is completing the processing of a job.
"IDLE" indicates the node available to be allocated work, but
has none at present
"UNKNOWN" indicates the node's state is undefined, but will be
established when the \fBslurmd\fR daemon on that node registers.
The default value is "UNKNOWN".
.TP
\fBTmpDisk\fR
Total size of temporary disk storage in \fBTmpFS\fR in MegaBytes
(e.g. "16384"). \fBTmpFS\fR (for "Temporary File System")
identifies the location which jobs should use for temporary storage.
Note this does not indicate the amount of free
space available to the user on the node, only the total file
system size. The system administrator should ensure this file
system is purged as needed so that user jobs have access to
most of this space.
The Prolog and/or Epilog programs (specified in the configuration file)
might be used to ensure the file system is kept clean.
The default value is 1.
.TP
\fBWeight\fR
The priority of the node for scheduling purposes.
All things being equal, jobs will be allocated the nodes with
the lowest weight which satisfies their requirements.
For example, a heterogeneous collection of nodes might
be placed into a single partition for greater system
utilization, responsiveness and capability. It would be
preferable to allocate smaller memory nodes rather than larger
memory nodes if either will satisfy a job's requirements.
The units of weight are arbitrary, but larger weights
should be assigned to nodes with more processors, memory,
disk space, higher processor speed, etc.
Weight is an integer value with a default value of 1.
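For example (node names, memory sizes and weights are illustrative), smaller
memory nodes can be given a lower weight so that they are allocated first:
.br
# illustrative node names and weights
.br
NodeName=small[0-15] RealMemory=1024 Weight=2
.br
NodeName=big[0-15] RealMemory=4096 Weight=8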
.LP
The partition configuration permits you to establish different job
limits or access controls for various groups (or partitions) of nodes.
Nodes may be in only one partition. Jobs are allocated resources
within a single partition. The partition configuration
file contains the following information:
.TP
\fBAllowGroups\fR
Comma separated list of group IDs which may execute jobs in the partition.
If at least one group associated with the user attempting to execute the
job is in AllowGroups, he will be permitted to use this partition.
Jobs executed as user root can use any partition without regard to
the value of AllowGroups.
If user root attempts to execute a job as another user (e.g. using
srun's \-\-uid option), this other user must be in one of groups
identified by AllowGroups for the job to successfully execute.
The default value is "ALL".
.TP
\fBDefault\fR
If this keyword is set, jobs submitted without a partition
specification will utilize this partition.
Possible values are "YES" and "NO".
The default value is "NO".
.TP
\fBHidden\fR
Specifies if the partition and its jobs are to be hidden by default.
Hidden partitions will by default not be reported by the SLURM
APIs or commands.
Possible values are "YES" and "NO".
The default value is "NO".
.TP
\fBRootOnly\fR
Specifies if only user ID zero (i.e. user \fIroot\fR) may allocate resources
in this partition. User root may allocate resources for any other user,
but the request must be initiated by user root.
This option can be useful for a partition to be managed by some
external entity (e.g. a higher\-level job manager) and prevents
users from directly using those resources.
Possible values are "YES" and "NO".
The default value is "NO".
.TP
\fBMaxNodes\fR
Maximum count of nodes which may be allocated to any single job.
The default value is "UNLIMITED", which is represented internally as -1.
This limit does not apply to jobs executed by SlurmUser or user root.
.TP
\fBMaxTime\fR
Maximum wall-time limit for any job in minutes. The default
value is "UNLIMITED", which is represented internally as -1.
This limit does not apply to jobs executed by SlurmUser or user root.
.TP
\fBMinNodes\fR
Minimum count of nodes which may be allocated to any single job.
The default value is 1.
This limit does not apply to jobs executed by SlurmUser or user root.
.TP
\fBNodes\fR
Comma separated list of nodes which are associated with this
partition. Node names may be specified using the
node range expression syntax described above. A blank list of nodes
(i.e. "Nodes= ") can be used if one wants a partition to exist,
but have no resources (possibly on a temporary basis).
.TP
\fBPartitionName\fR
Name by which the partition may be referenced (e.g. "Interactive").
This name can be specified by users when submitting jobs.
.TP
\fBShared\fR
Ability of the partition to execute more than one job at a
time on each node. Shared nodes will offer unpredictable performance
for application programs, but can provide higher system utilization
and responsiveness than otherwise possible.
Possible values are "FORCE", "YES", and "NO".
"FORCE" makes all nodes in the partition available for sharing
without any means for users to disable it.
"YES" makes nodes in the partition available for sharing if and
only if the individual jobs permit sharing (see the srun
"--shared" option).
"NO" makes nodes unavailable for sharing under all circumstances.
The default value is "NO".
.TP
\fBState\fR
State of partition or availability for use. Possible values
are "UP" or "DOWN". The default value is "UP".
.SH "EXAMPLE"
#
.br
# Sample /etc/slurm.conf for dev[0-25].llnl.gov
.br
# Author: John Doe
.br
# Date: 11/06/2001
.br
#
.br
ControlMachine=dev0 ControlAddr=edev0
.br
BackupController=dev1 BackupAddr=edev1
.br
#
.br
AuthType=auth/authd
.br
Epilog=/usr/local/slurm/epilog
.br
Prolog=/usr/local/slurm/prolog
.br
FastSchedule=1
.br
FirstJobId=65536
.br
HeartbeatInterval=60
.br
InactiveLimit=120
.br
JobCompType=jobcomp/filetxt
.br
JobCompLoc=/var/log/slurm.job.log
.br
MaxJobCount=10000
.br
MinJobAge=3600
.br
PluginDir=/usr/local/lib:/usr/local/slurm/lib
.br
SchedulerType=sched/wiki
.br
SchedulerAuth=42 SchedulerPort=7004
.br
SlurmctldLogFile=/var/log/slurmctld.log
.br
SlurmdLogFile=/var/log/slurmd.log
.br
SlurmctldDebug=4 SlurmdDebug=3
.br
SlurmctldPort=7002 SlurmdPort=7003
.br
SlurmctldTimeout=300 SlurmdTimeout=300
.br
SlurmdSpoolDir=/usr/local/slurm/slurmd.spool
.br
StateSaveLocation=/usr/local/slurm/slurm.state
.br
SwitchType=switch/elan
.br
WaitTime=30
.br
JobCredentialPrivateKey=/usr/local/slurm/private.key
.br
JobCredentialPublicCertificate=/usr/local/slurm/public.cert
.br
#
.br
# Node Configurations
.br
#
.br
NodeName=DEFAULT Procs=2 RealMemory=2000 TmpDisk=64000
.br
NodeName=DEFAULT State=UNKNOWN
.br
NodeName=dev[0-25] NodeAddr=edev[0-25] Weight=16
.br
# Update records for specific DOWN nodes
.br
NodeName=dev20 State=DOWN Reason="power,ETA=Dec25"
.br
#
.br
# Partition Configurations
.br
#
.br
PartitionName=DEFAULT MaxTime=30 MaxNodes=10
.br
PartitionName=debug Nodes=dev[0-8,18-25] State=UP Default=YES
.br
PartitionName=batch Nodes=dev[9-17] State=UP MinNodes=4
.SH "COPYING"
Copyright (C) 2002 The Regents of the University of California.
Produced at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
UCRL-CODE-2002-040.
.LP
This file is part of SLURM, a resource management program.
For details, see <http://www.llnl.gov/linux/slurm/>.
.LP
SLURM is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 of the License, or (at your option)
any later version.
.LP
SLURM is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
details.
.SH "FILES"
/etc/slurm.conf
.SH "SEE ALSO"
.LP
\fBgethostbyname\fR(3), \fBgroup\fR(5), \fBhostname\fR(1),
\fBscontrol\fR(1), \fBslurmctld\fR(8), \fBslurmd\fR(8),
\fBsyslog\fR(2)