Commit fdf3a0d7 authored by Moe Jette

Major update for v2.1 release

parent bcbaa8cd
RELEASE NOTES FOR SLURM VERSION 2.1
14 October 2009 (through SLURM 2.1.0-pre4)
IMPORTANT NOTE:
...
doc/html/configurator.html that comes with the distribution.
HIGHLIGHTS
* The sched/gang plugin has been removed. The logic is now directly within the
slurmctld daemon so that gang scheduling and/or job preemption can be
performed with a backfill scheduler.
* Preempted jobs can now be canceled, checkpointed or requeued rather than
only suspended.
* Support for QOS (Quality Of Service) has been added to the accounting
database with configurable limits, priority and preemption rules.
* Added -"-signal=<int>@<time>" option to salloc, sbatch and srun commands to
notify programs before reaching the end of their time limit.
* Added squeue option "--start" to report expected start time of pending jobs.
* The pam_slurm Pluggable Authentication Module for SLURM previously
distributed separately has been moved within the main SLURM distribution
and is packaged as a separate RPM.
* Support has been added for OpenSolaris.
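An illustrative usage sketch of two of the new options above (the script
name, job id, signal number and times are placeholders; signal 10 is SIGUSR1
on typical Linux systems):

  # Send signal 10 to the job 120 seconds before its time limit is reached,
  # then report the expected start time while the job is still pending.
  sbatch --signal=10@120 --time=30 my_script.sh
  squeue --start -j <jobid>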
CONFIGURATION FILE CHANGES (see "man slurm.conf" for details)
* Added PreemptType parameter to specify the plugin used to identify
preemptable jobs (partition priority or quality of service) and
PreemptMode to identify how to preempt jobs (requeue, cancel, checkpoint,
or suspend).
* The sched/gang plugin has been removed; use PreemptType=preempt/partition_prio
and PreemptMode=suspend,gang instead.
* ControlMachine changed to accept multiple comma-separated hostnames for
support of some high-availability architectures.
* Added MaxTasksPerNode to control how many tasks the slurmd can launch.
* Removed SrunIOTimeout parameter.
* Added SchedulerParameters option of "max_job_bf=#" to control how far down
the queue of pending jobs SLURM searches in an attempt to backfill schedule
them. The default value is 50 jobs. (See the slurm.conf sketch after this
list.)
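An illustrative slurm.conf fragment combining several of the new parameters
above (hostnames and values are placeholders, not recommendations):

  ControlMachine=ctl1,ctl2
  PreemptType=preempt/partition_prio
  PreemptMode=suspend,gang
  MaxTasksPerNode=128
  SchedulerType=sched/backfill
  SchedulerParameters=max_job_bf=100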
COMMAND CHANGES (see man pages for details)
* Added a --detail option to "scontrol show job" to display the cpu/memory
allocation information on a node-by-node basis (see the command sketch after
this list).
* sacctmgr show problems command added to display problems in the accounting
database (e.g. accounts with no users, users with no UID, etc.)
* Several redundant squeue output and sorting options have been removed:
"%o" (use "%D"), "%b" (use "%S"), "%X", "%Y", and "%Z" (use "%z").
* Standardized on the use of the '-Q' flag for all commands that offer the
--quiet option.
* salloc's --wait=<secs> option has been deprecated in favor of the
--immediate=<secs> option to match the srun command.
* Scalability of sview dramatically improved.
* Added reservation flag of "OVERLAP" to permit a new reservation to use
nodes already in another reservation.
* Added sacct ability to use --format NAME%LENGTH similar to sacctmgr.
* For salloc, sbatch and srun commands, ignore _maximum_ values for
--sockets-per-node, --cores-per-socket and --threads-per-core options.
Remove --mincores, --minsockets, --minthreads options (map them to
minimum values of --sockets-per-node, --cores-per-socket and
--threads-per-core for now).
* Changed scontrol show job output: ReqProcs (number of processors requested)
is replaced by NumProcs (number of processors requested or actually
allocated) and ReqNodes (number of nodes requested) is replaced by NumNodes
(number of nodes requested or actually allocated).
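An illustrative command sketch for several of the options above (job id 1234
and the format fields are placeholders; see each man page for the exact
option spellings):

  scontrol --detail show job 1234
  sacctmgr show problems
  sacct --format=JobID%20,Elapsed%12 -j 1234
  salloc --immediate=60 -N2 /bin/bash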
BLUEGENE SPECIFIC CHANGES
* scontrol show blocks option added.
* scontrol delete block and update block can now remove blocks on dynamic
layout configuration.
* sinfo and sview now display correct CPU counts for partitions.
* Jobs waiting for a block to boot will now be reported in Configuring state.
* Vastly improved the dynamic layout mode algorithm.
* Environment variables such as SLURM_NNODES, SLURM_JOB_NUM_NODES and
SLURM_JOB_CPUS_PER_NODE now reference cnode counts instead of midplane
counts. SLURM_NODELIST still references midplane names.
OTHER CHANGES
* A mechanism has been added for SPANK plugins to set environment variables
for Prolog, Epilog, PrologSlurmctld and EpilogSlurmctld programs using the
functions spank_get_job_env, spank_set_job_env, and spank_unset_job_env. See
"man spank" for more information. (A minimal plugin sketch appears at the
end of these notes.)
* Set a node's power_up/configuring state flag while PrologSlurmctld is
running for a job allocated to that node.
* Added sched/wiki2 (Moab) JOBMODIFY command support for VARIABLELIST option
to set supplemental environment variables for pending batch jobs.
* The RPM previously named "slurm-aix-federation-<version>.rpm" has been
renamed to just "slurm-aix-<version>.rpm" (the federation switch plugin may
not be present).
* Environment variables SLURM_TOPOLOGY_ADDR and SLURM_TOPOLOGY_ADDR_PATTERN
added to describe the network topology for each launched task when
TopologyType=topology/tree is configured.
* Added new job wait reason, ReqNodeNotAvail: Required node is not available
(down or drained).
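A minimal SPANK plugin sketch illustrating the new job environment calls (the
exact prototypes and return codes are assumed here; see <slurm/spank.h> and
"man spank" for the authoritative versions):

  #include <slurm/spank.h>

  SPANK_PLUGIN(env_demo, 1);

  /* Assumed to run in the allocator (local) context; exports a variable so
   * that Prolog, Epilog, PrologSlurmctld and EpilogSlurmctld can see it.
   * The variable name and value are placeholders. */
  int slurm_spank_init(spank_t sp, int ac, char **av)
  {
      if (spank_set_job_env(sp, "DEMO_FLAG", "1", 1) != ESPANK_SUCCESS)
          slurm_error("env_demo: could not set DEMO_FLAG");
      return 0;
  }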