This file describes changes in recent versions of SLURM. It primarily
documents those changes that are of interest to users and admins.
* Changes in SLURM 2.3.0.pre2
=============================
 -- Log a job's requeue or cancellation due to preemption to that job's stderr:
    "*** JOB 65547 CANCELLED AT 2011-01-21T12:59:33 DUE TO PREEMPTION ***".
 -- Added new job termination state of JOB_PREEMPTED, "PR" or "PREEMPTED" to
    indicate job termination was due to preemption.
 -- Optimize advanced reservations resource selection for computer topology.
    The logic has been added to select/linear and select/cons_res, but will
    not be enabled until the other select plugins are modified.
 -- Remove checkpoint/xlch plugin.
 -- Disable deletion of partitions that have unfinished jobs (pending,
    running or suspended states). Patch from Martin Perry, BULL.
 -- In sview, disable the sorting of node records by name at startup for
    clusters over 1000 nodes. Users can enable this by selecting the "Name"
    tab. This change dramatically improves scalability of sview.
 -- Do not attempt to read the batch script for non-batch jobs. This patch
    eliminates some inappropriate error messages. Based upon
    01_interactive-no-script.diff
 -- Report error when trying to change a node's state from scontrol for Cray
    systems. Based upon 01_Cray-scontrol-warning-node-update.diff
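    Example for the partition deletion safeguard above (partition name
    hypothetical): a request such as
        scontrol delete PartitionName=debug
    is now rejected while the partition still has jobs in pending, running or
    suspended states.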
* Changes in SLURM 2.3.0.pre1
=============================
 -- When a slurmctld closes its connection to the database, its registered
    host and port are now removed.
 -- Added flag TrackSlurmctldDown to slurmdbd.conf. If set, idle resources on
    a cluster are marked as down when its slurmctld disconnects or is no
    longer reachable (see the configuration example at the end of this
    section).
 -- Added support for more than one front-end node to run slurmd on
    architectures where the slurmd does not execute on the compute nodes
    (e.g. BlueGene). New configuration parameters FrontendNode and FrontendAddr
    added. See "man slurm.conf" for more information.
 -- With the scontrol show job command, show a batch job's script when the
    --details option is used.
 -- Add ability to create reservations or partitions and submit batch jobs
    using sview. Also add the ability to delete reservations and partitions.
 -- Added new configuration parameter MaxJobId. Once reached, restart job ID
    values at FirstJobId.
 -- When restarting slurmctld with priority/basic, increment all job priorities
    so the highest job priority becomes TOP_PRIORITY.
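    Example configuration sketch for the FrontendNode/FrontendAddr,
    MaxJobId/FirstJobId and TrackSlurmctldDown entries above (all names and
    values are hypothetical; see "man slurm.conf" and "man slurmdbd.conf" for
    the authoritative syntax):
        # slurm.conf
        FrontendNode=fe[1-2]
        FrontendAddr=fe-mgmt[1-2]
        FirstJobId=1000
        MaxJobId=2000000
        # slurmdbd.conf
        TrackSlurmctldDown=yes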
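    Example for the scontrol show job entry above (job ID hypothetical):
        scontrol --details show job 65547
    With --details (or -d), the output for a batch job now includes the batch
    script.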
* Changes in SLURM 2.2.2
========================
 -- Correct logic to set the proper job hold state (admin or user) when
    setting the job's priority using scontrol's "update jobid=..." rather than
    its "hold" or "holdu" commands.
 -- Modify squeue to report unset --mincores, --minthreads or --extra-node-info
    values as "*" rather than 65534. Patch from Rod Schulz, BULL.
 -- Report the StartTime of a job as "Unknown" rather than the year 2106 if its
    expected start time was too far in the future for the backfill scheduler
    to compute.
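    Example for the job hold entry above (job ID hypothetical): holding a job
    with
        scontrol update JobId=1234 Priority=0
    now records the proper hold type (admin or user), just as "scontrol hold"
    and "scontrol holdu" do.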
* Changes in SLURM 2.2.1
========================
 -- Fix setting derived exit code correctly for jobs that happen to have the
    same jobid.
 -- Better checking for time overflow when rolling up in accounting.
 -- Add scancel --reservation option to cancel all jobs associated with a
    reservation (see the example at the end of this section).
 -- Treat reservation with no nodes like one that starts later (let jobs of any
    size get queued and do not block any pending jobs).
 -- Fix bug in gang scheduling logic that would temporarily resume too many
    jobs after a job completed.
 -- Change the srun message about a job step being deferred while the
    SlurmctldProlog runs to be clearer and only print it when the --verbose
    option is used.
 -- Made it so you could remove the hold on jobs with sview by setting the
    priority to infinite.
 -- BLUEGENE - better checking of small blocks in dynamic mode to determine
    whether a full midplane job could run or not.
 -- Decrease the maximum sleep time between srun job step creation retry
    attempts from 60 seconds to 29 seconds. This should eliminate a possible
    synchronization problem with gang scheduling that could result in job
    step creation requests only occurring when a job is suspended.
 -- Fix to prevent changing a held job's state from HELD to DEPENDENCY
    until the job is released. Patch from Rod Schultz, Bull.
 -- Fixed sprio -M to reflect PriorityWeight values from remote cluster.
 -- Fix bug in sview when trying to update arbitrary field on more than one
    job. Formerly would display information about one job, but update next
    selected job.
 -- Made it so a QOS with UsageFactor set to 0 causes jobs running under that
    QOS to add no time to fairshare usage or association/qos limits.
 -- Fixed issue where a changed QOS priority wasn't re-normalized until a
    slurmctld restart.
 -- Fix sprio to use calculated numbers from slurmctld instead of calculating
    its own numbers.
 -- BLUEGENE - fixed race condition with preemption where, with unlucky
    timing, the slurmctld could lock up when preempting jobs to run others.
 -- BLUEGENE - fixed epilog to wait until MMCS job is totally complete before
    finishing.
 -- BLUEGENE - more robust checking for states when freeing blocks.
 -- Added the correct files to the slurm.spec file for proper perl API RPM
    creation.
 -- Added flag "NoReserve" to a QOS to treat all jobs within that QOS equally.
    If larger, higher priority jobs are unable to run, they do not prevent
    smaller jobs from running, even if running the smaller jobs delays the
    start of the larger, higher priority jobs.
 -- BLUEGENE - Check preemptees one by one to preempt lower priority jobs first
    instead of first fit.
 -- In select/cons_res, correct handling of the option
    SelectTypeParameters=CR_ONE_TASK_PER_CORE.
 -- Fix checking of QOS overrides of partition limits; previously, some limits
    would be overlooked when no QOS was used.
 -- Fix bug which would terminate a job step if any of the nodes allocated to
    it were removed from the job's allocation. Now only the tasks on those
    nodes are terminated.
 -- Fixed issue where, when using a storage_accounting plugin directly without
    the slurmDBD, updates weren't always sent correctly to the slurmctld;
    appears to be OS dependent. Reported by Fredrik Tegenfeldt.
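    Example for the scancel --reservation entry above (reservation name
    hypothetical):
        scancel --reservation=maint
    cancels all jobs associated with the "maint" reservation.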
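    Example for the QOS UsageFactor and "NoReserve" entries above (QOS name
    hypothetical; exact sacctmgr syntax may vary slightly by version):
        sacctmgr modify qos name=standby set UsageFactor=0
        sacctmgr modify qos name=standby set Flags=NoReserve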
* Changes in SLURM 2.2.0
========================
 -- Change format of Duration field in "scontrol show reservation" output from
    an integer number of minutes to "[days-]hours:minutes:seconds".
 -- Add support for changing the reservation of pending or running jobs.
 -- On Cray systems only, salloc sends SIGKILL to spawned process group when
    job allocation is revoked. Patch from Gerrit Renker, CSCS.
 -- Fix for sacctmgr to work correctly when modifying user associations where
    all the associations contain a partition.
 -- Minor mods to salloc signal handling logic: forwards more signals and
    releases allocation on real-time signals. Patch from Gerrit Renker, CSCS.
 -- Add salloc logic to preserve tty attributes after abnormal exit. Patch
    from Mark Grondona, LLNL.
 -- BLUEGENE - Fix for issue in dynamic mode when trying to create a block
    overlapping a block with no job running on it but in configuring state.
 -- BLUEGENE - Speedup by skipping blocks that are deallocating for other jobs
    when starting overlapping jobs in dynamic mode.
 -- Fix for sacct --state to work correctly when not specifying a start time.
 -- Fix upgrade process in accounting from 2.1 for clusters named "cluster".
 -- Export more jobacct_common symbols needed for the slurm api on some systems.
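    Sketch of the sort of update enabled by the reservation change above (job
    ID and reservation name hypothetical; see "man scontrol" for the exact
    specification name):
        scontrol update JobId=1234 ReservationName=maint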
* Changes in SLURM 2.2.0.rc4
============================
 -- Correction in logic to spread highly parallel messages out over time to
    minimize lost messages. Affects slurmd epilog complete messages and PMI
    key-pair transmissions. Patch from Gerrit Renker, CSCS.
 -- Fixed issue with systems that have unsent messages to the DBD in 2.1 and
    then upgrade to 2.2; those messages are now processed correctly.
 -- Fixed issue where assoc_mgr cache wasn't always loaded correctly if the
    slurmdbd wasn't running when the slurmctld was started.
 -- Make sure the error code from a pthread create in step launch is checked.
    Improves fault-tolerance of slurmd.
 -- Fix setting up default acct/wckey when upgrading from 2.1 to 2.2.
 -- Fix issue with associations attached to a specific partition with no other
    association, and requesting a different partition.
 -- Added the slurmdb perlapi to the slurm.spec file.
 -- In sched/backfill, correct handling of CompleteWait parameter to avoid
    backfill scheduling while a job is completing. Patch from Gerrit Renker,
    CSCS.
 -- Send a message back to the user when trying to launch a job on a compute
    node lacking that user ID. Patch from Hongjia Cao, NUDT.
 -- BLUEGENE - Fix it so 1 midplane clusters will run small block jobs.
 -- Add Command and WorkDir to the output of "scontrol show job" for job
    allocations created using srun (not just sbatch).
 -- Fixed sacctmgr to not add blank defaultqos entries when doing a cluster
    dump.
 -- Correct processing of memory and disk space specifications in the salloc,
    sbatch, and srun commands to work properly with a suffix of "MB", "GB",
    etc. and not only with a single letter (e.g. "M", "G", etc.).
 -- Prevent nodes with suspended jobs from being powered down by SLURM.
 -- Normalized the way pidfiles are created by the slurm daemons.
 -- Fixed modifying the root association so it does not read in its last value
    when clearing a limit.
 -- Revert some recent signal handling logic from salloc so that SIGHUP sent
    after the job allocation will properly release the allocation and cause
    salloc to exit.
 -- BLUEGENE - Fix for recreating a block in a ready state.
 -- Fix debug flags for incorrect logic when dealing with DEBUG_FLAG_WIKI.
 -- Report reservation's Nodes as a hostlist expression of all nodes rather
    than using "ALL".
 -- Fix reporting of nodes in BlueGene reservation (was reporting CPU count
    rather than cnode count in scontrol output for NodeCnt field).
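    Example for the memory and disk size suffix fix above (sizes and script
    name hypothetical):
        sbatch --mem=4GB --tmp=20GB job.sh
    now behaves the same as the single-letter forms --mem=4G and --tmp=20G.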
* Changes in SLURM 2.2.0.rc3
============================
 -- Modify sacctmgr command to accept plural versions of options (e.g. "Users"
    in addition to "User"). Patch from Don Albert, BULL.
 -- BLUEGENE - make it so the boot counter is reset only on a state change and
    not when a new job comes along.
 -- Modify srun and salloc signal handling so they can be interrupted while
    waiting for an allocation. This was broken in version 2.2.0.rc2.
 -- Fix NULL pointer reference in sview. Patch from Gerrit Renker, CSCS.
 -- Fix file descriptor leak in slurmstepd on spank_task_post_fork() failure.
    Patch from Gerrit Renker, CSCS.
 -- Fix bug in preserving job state information when upgrading from SLURM
    version 2.1. Bug introduced in version 2.2.0-pre10. Patch from Par
    Andersson, NSC.
 -- Fix bug where, when using the slurmdbd, some accounting information could
    be lost if a job wasn't able to start right away.
 -- BLUEGENE - when a prolog failure happens the offending block is put in
    an error state.
 -- Changed the last column heading of the sshare output from "FS Usage" to
    "FairShare" and added more detail to the sshare man page.
 -- Fix bug in enforcement of reservation by account name. Used wrong index
    into an array. Patch from Gerrit Renker, CSCS.
 -- Modify job_submit/lua plugin to treat any non-zero return code from the
    job_submit and job_modify functions as an error and abort the user
    request.
 -- Fix bug which would permit pending job to be started on completing node