This file describes changes in recent versions of SLURM. It primarily
documents those changes that are of interest to users and admins.
* Changes in SLURM 2.1.0-pre3
=============================
 -- Removed sched/gang plugin and moved the logic directly into the slurmctld
    daemon so that job preemption and gang scheduling can be used with the
    sched/backfill plugin. Added configuration parameter:
    PreemptMode=gang|off|suspend|cancel|requeue|checkpoint 
    to enable/disable gang scheduling and job preemption logic (both are 
    disabled by default).
    (NOTE: There are some problems with memory management which could
    prevent a job from starting when memory would be freed by a job being
    requeued or otherwise removed; these issues are being worked on.)
 -- Added PreemptType configuration parameter to identify preemptable jobs.
    Former users of SchedulerType=sched/gang should set
    SchedulerType=sched/backfill, PreemptType=preempt/partition_prio and
    PreemptMode=gang,suspend. See the web pages and the slurm.conf man
    page for other options. PreemptType=preempt/qos uses Quality Of
    Service information in the database.
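    As an illustrative sketch only (site policies vary and any values not
    named above would be site-specific), such a migration might use
    slurm.conf entries like:
       SchedulerType=sched/backfill
       PreemptType=preempt/partition_prio
       PreemptMode=gang,suspend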
 -- In select/linear, optimize job placement across partitions.
 -- If the --partition option is used with the sinfo or squeue command,
    print information even for hidden partitions.
 -- Replaced miscellaneous CPU allocation members in job_info_t with
    select_job_res_t, which is only populated when requested
    (show_flags & SHOW_DETAIL).
 -- Added a --detail option to "scontrol show job" to display the cpu/mem
    allocation info on a node-by-node basis.
 -- Added logic to report the correct request UID for individual job steps
    that are cancelled.
 -- Created a spank_get_item() option (S_JOB_ALLOC_MEM) that conveys the memory
    that the select/cons_res plugin has allocated to a job.
 -- BLUEGENE - blocks in error state are now handled correctly in accounting.
 -- Modify squeue to print job step information about a specific job ID using
    the following syntax: "squeue -j <job_id> -s".
 -- BLUEGENE - scontrol delete block and update block can now remove blocks
    on dynamically laid-out systems.
 -- BLUEGENE - Vastly improve Dynamic layout mode algorithm.
 -- Address some issues for SLURM support of Solaris.
 -- Applied patch from Doug Parisek (Doug.Parisek@bull.com) to speed up the
    start of sview by delaying the creation of tooltips until requested.
 -- Changed GtkTooltips to GtkTooltip for newer versions of GTK.
 -- Applied patch from Rod Schultz (Rod.Schultz@Bull.com) that eliminates
    ambiguity in the documentation over use of the terms "CPU" and "socket".
 -- Modified get_resource_arg_range() to return full min/max values when input
    string is null.  This fixes the srun -B option to function as documented.
 -- If the job, node, partition, reservation or trigger state file is missing 
    or too small, automatically try using the previously saved state (file 
    name with ".old" suffix).
 -- Set a node's power_up/configuring state flag while PrologSlurmctld is
    running for a job allocated to that node.
 -- If PrologSlurmctld has a non-zero exit code, requeue the job or kill it.
 -- Added sacct ability to use --format NAME%LENGTH similar to sacctmgr.
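    For example, an invocation along these lines (the field names are
    ordinary sacct fields, chosen only for illustration) would print the
    JobName column 30 characters wide:
       sacct --format=JobID,JobName%30,State,Elapsed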
 -- Improve hostlist logic for multidimensional systems.
 -- The pam_slurm Pluggable Authentication Module for SLURM previously
    distributed separately has been moved within the main SLURM distribution
    and is packaged as a separate RPM.
 -- Added configuration parameter MaxTasksPerNode.
 -- Remove configuration parameter SrunIOTimeout.
 -- Added functionality for sacctmgr show problems.  Current problems
    include accounts or users with no associations, accounts with no users
    or sub-accounts attached in a cluster, and users with no UID on the
    system.
 -- Added new option WOLimits for sacctmgr list assoc and list cluster.
    This gives a smaller default format without the limit information.
    This may become the new default for list associations and list
    clusters.
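    Illustrative invocations, following the descriptions above (exact
    sub-command spellings are per the sacctmgr man page):
       sacctmgr show problems
       sacctmgr list assoc wolimits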
 -- Users are now required to have an association with their default
    account.  Sacctmgr will now complain when you try to change a user's
    default account to one with which they have no association.
 -- Fix select/linear bug resulting in run_job_cnt underflow message if a 
    suspended job is cancelled.
 -- Add test for fsync() error for state save files. Log and retry as needed.
 -- Log fatal errors from slurmd and slurmctld to syslog.
 -- Added error detection and cleanup for the case in which a compute node is 
    rebooted and restarts its slurmd before its "down" state is noticed.
 -- BLUEGENE systems only - remove vestigial start location from jobinfo.
 -- Add reservation flag of "OVERLAP" to permit a new reservation to use
    nodes already in another reservation.
 -- Fix so "scontrol update jobid=# nice=0" can clear previous nice value.
 -- BLUEGENE - env vars such as SLURM_NNODES, SLURM_JOB_NUM_NODES, and
    SLURM_JOB_CPUS_PER_NODE now reference cnode counts instead of midplane
    counts.  SLURM_NODELIST still references midplane names.
 -- Added qos support to salloc/sbatch/srun/squeue.
 -- Added to scancel the ability to select jobs by account and qos.
 -- Recycled the "-A" argument to indicate "account" for all the commands
    that accept the --account argument (srun -A to allocate is no longer
    supported).
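    For example, combining the three items above (account name, qos name
    and batch script are placeholders):
       sbatch --account=acct1 --qos=standby job.sh
       scancel --account=acct1 --qos=standby
    "-A acct1" may be used in place of "--account=acct1".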
 -- Change sbatch response from "sbatch: Submitted batch job #" written to 
    stderr to "Submitted batch job #" written to stdout.
 -- Made shutdown and cleanup a little safer for the mvapich and mpich1_p4
    plugins.
 -- QOS support added with limits, priority and preemption
    (no documentation yet).
* Changes in SLURM 2.1.0-pre2
=============================
 -- Added support for smap to query by node name for display.
 -- Slurmdbd modified to set user ID and group ID to SlurmUser if started as 
    user root.
 -- Configuration parameter ControlMachine changed to accept multiple
    comma-separated hostnames for support of some high-availability
    architectures.
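    For example, a slurm.conf line of the following form (the hostnames
    are placeholders) lists the hosts on which the controller may run in
    such a high-availability configuration:
       ControlMachine=ctl1,ctl2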
 -- ALTERED API CALL: in slurm_get_job_steps, 0 has been changed to NO_VAL
    for both the job and step IDs to receive all jobs/steps.  Please make
    adjustments to your code.
 -- salloc's --wait=<secs> option is deprecated in favor of the
    --immediate=<secs> option to match the srun command.
 -- Add new slurmctld list for node features with node bitmaps for simplified
    scheduling logic.
 -- Multiple features can be specified when creating a reservation. Use "&" 
    (AND) or "|" (OR) separators between the feature names.
 -- Changed internal node name caching so that front-end mode would work with
    multiple lines of node name definitions. 
 -- Add node state flag for power-up/configuring. Represented by "#" suffix
    on the node state name (e.g. "ALLOCATED#") for command output.
 -- Add CONFIGURING/CF job state flag for node power-up/configuring.
 -- Modify job step cancel logic for scancel and srun (on receipt of
    SIGTERM or three SIGINTs) to immediately send SIGKILL to spawned
    tasks.  Previous logic would send SIGCONT and SIGTERM, wait KillWait
    seconds, then send SIGKILL.
 -- Created a spank_get_item() option (S_JOB_ALLOC_CORES) that conveys the cpus
    that the select/cons_res plugin has allocated to a job.
 -- Improve sview performance (outrageously) on very large machines.
 -- Add support for licenses in resource reservation.
 -- BLUEGENE - Jobs waiting for a block to boot will now be in Configuring
    state. 
 -- bit_fmt no longer returns brackets surrounding any set of data.
* Changes in SLURM 2.1.0-pre1
=============================
 -- Slurmd notifies slurmctld of node boot time to better clean up after node
    reboots.
 -- Slurmd sends node registration information repeatedly until successful
    transmit.
 -- Change job_state in the job structure to dedicate 8 bits to state
    flags.  Added macros to get state information (IS_JOB_RUNNING(job_ptr),
    etc.)
 -- Added macros to get node state information (IS_NODE_DOWN(node_ptr), etc).
 -- Added support for Solaris. Patch from David Hoppner.
 -- Rename "slurm-aix-federation-<version>.rpm" to just 
    "slurm-aix-<version>.rpm" (federation switch plugin may not be present).
 -- Eliminated the redundant squeue output format and sort options of 
    "%o" and "%b". Use "%D" and "%S" formats respectively. Also eliminated 
    "%X" and "%Y" and "%Z" formats. Use "%z" instead.
 -- Added mechanism for SPANK plugins to set environment variables for
    Prolog, Epilog, PrologSlurmctld and EpilogSlurmctld programs using
    the functions spank_get_job_env, spank_set_job_env, and
    spank_unset_job_env. See "man spank" for more information.
 -- Completed the work begun in 2.0.0 to standardize on using '-Q' as the
    --quiet flag for all the commands.
 -- BLUEGENE - sinfo and sview now display correct cpu counts for partitions.
 -- Cleaned up the cons_res plugin.  It now uses a pointer to a part_record
    instead of having to do strcmp() calls to find the correct one.
 -- Pushed most of the plugin-specific info in src/common/node_select.c
    into the respective plugins.
 -- BLUEGENE - closed some corner cases where a block could have been
    removed while a job was waiting for it to become ready because an
    underlying part of the block was put into an error state.
 -- Modify sbcast logic to prevent a user from moving files to nodes they
    have not been allocated (this would be possible in previous versions
    only by hacking the sbcast code).
 -- Add contribs/sjstat script (Perl tool to report job state information).
    Put into new RPM: sjstat.
 -- Add sched/wiki2 (Moab) JOBMODIFY command support for VARIABLELIST option
    to set supplemental environment variables for pending batch jobs.
 -- BLUEGENE - add support for scontrol show blocks.
 -- Added support for job step time limits.
* Changes in SLURM 2.0.5
========================
 -- BLUEGENE - Added support for emulating systems with an X-dimension of 4.
 -- BLUEGENE - When a nodecard goes down on a non-Dynamic system, SLURM
    will now only drain blocks smaller than one midplane; if no such block
    exists, SLURM will drain the entire midplane and not mark any block in
    an error state.  Previously SLURM would drain every block overlapping
    the nodecard, making it possible for a large block to render other
    blocks unusable since they overlap some other part of the block that
    is not actually bad.
 -- BLUEGENE - Handle L3 errors on boot better.
 -- Don't revoke a pending batch launch request from the slurmctld if the
    job is immediately suspended (a normal event with gang scheduling).
 -- BLUEGENE - Fixed issue where a restart of slurmctld would allow error
    block nodes to be considered for building new blocks when testing if a
    job would run.  This was a visual bug only; jobs would never run on the
    new block, but the block would appear in slurm tools.
 -- Better responsiveness when starting new allocations while running with
    the slurmdbd.
 -- Fixed race condition when reconfiguring the slurmctld and using the 
    consumable resources plugin which would cause the controller to core.
 -- Fixed race condition that sometimes caused jobs to stay in completing
    state longer than necessary after being terminated.
 -- Fixed issue where, if a parent account has a qos added and then a child
    account has that qos removed, the users would still get the qos.
 -- BLUEGENE - New blocks in dynamic mode will only be made in the system
    when the block is actually needed for a job, not when testing.
 -- BLUEGENE - Don't remove larger block used for small block until job starts.
 -- Add new squeue output format and sort option of "%L" to print a job's time 
    left (time limit minus time used).
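    For example (the other fields shown are standard squeue format
    options, included only for context):
       squeue -o "%.8i %.9P %.8j %.10L"
    prints the job id, partition, job name and remaining time.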
 -- BLUEGENE - Fixed draining state count for sinfo/sview.
 -- Fix for sview to not core when viewing nodes allocated to a partition
    and all of the jobs finish.
 -- Fix cons_res to not core dump when finishing a job running on a 
    defunct partition.
 -- Don't require a node to have --ntasks-per-node CPUs for use when the 
    --overcommit option is also used.
 -- Increase the maximum number of tasks which can be launched by a job step
    per node from 64 to 128. 
 -- sview - make right-click on a popup window title show a sorted list.