This file describes changes in recent versions of Slurm. It primarily
documents those changes that are of interest to users and administrators.

* Changes in Slurm 17.02.0pre3
==============================
 -- Add srun host & PID to job step data structures.
 -- Avoid creating duplicate pending step records for the same srun command.
 -- Rewrite srun's logic for pending steps for better efficiency (fewer RPCs).
 -- Added new SchedulerParameters options step_retry_count and step_retry_time
    to control scheduling behaviour of job steps waiting for resources.
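    Example: a minimal sketch of the step retry options above as they might
    appear in slurm.conf; the values are illustrative placeholders, not
    recommendations:
      SchedulerParameters=step_retry_count=100,step_retry_time=60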
* Changes in Slurm 17.02.0pre2
==============================
 -- Add new RPC (REQUEST_EVENT_LOG) so that slurmd and slurmstepd can log events
    through the slurmctld daemon.
 -- Remove sbatch --bb option. That option was never supported.
 -- Automatically clean up task/cgroup cpuset and devices cgroups after steps
    are done.
 -- Add federation read/write locks.
 -- Limit job purge run time to 1 second at a time.
 -- The database index for jobs is now 64 bit.  If you happen to be close to
    4 billion jobs in your database you will want to update your slurmctld at
    the same time as your slurmdbd to prevent rollover of this variable, as it
    was 32 bit in previous versions of Slurm.
 -- Optionally lock slurmstepd in memory for performance reasons and to avoid
    possible SIGBUS if the daemon is paged out at the time of a Slurm upgrade
    (changing plugins). Controlled via the new LaunchParameters options
    slurmstepd_memlock and slurmstepd_memlock_all (see the example at the end
    of this section).
 -- Add event trigger on burst buffer errors (see strigger man page,
    --burst_buffer option).
 -- Add job AdminComment field which can only be set by a Slurm administrator.
 -- Add salloc, sbatch and srun option of --delay-boot=<time>, which will
    temporarily delay booting nodes into the desired state for a job in the
    hope that nodes already in the proper state will become available at a
    later time (see the example at the end of this section).
 -- Add job burst_buffer_state and delay_boot fields to scontrol and squeue
    output. Also add ability to modify delay_boot from scontrol.
 -- Fix for node's available tres array getting filled in with configured gres
    model types.
 -- Log if job --bb option contains any unrecognized content.
 -- Display configured and allocated tres for nodes in scontrol show nodes.
 -- Change all memory values (in MB) to uint64_t to accommodate > 2TB per node.
 -- Add MailDomain option to qualify email addresses.
 -- Refactor the persistent connections within the federation code to use
    the same logic that was found in the slurmdbd.  Now both functionalities
    share the same code.
 -- Remove BlueGene/L and BlueGene/P support.
 -- Add "flag" field to launch_tasks_request_msg. Remove the following fields
    (moved into flags): multi_prog, task_flags, user_managed_io, pty,
    buffered_stdio, and labelio.
 -- Add protocol version to slurmd startup communications for slurmstepd to
    permit changes in the protocol.
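    Example: a minimal sketch of the slurmstepd memory locking options noted
    above, as they might appear in slurm.conf (use one or the other):
      LaunchParameters=slurmstepd_memlock
      LaunchParameters=slurmstepd_memlock_all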
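    Example: a minimal sketch of the --delay-boot option noted above; the
    delay value and script name are placeholders only:
      sbatch --delay-boot=10 my_job.sh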
* Changes in Slurm 17.02.0pre1
==============================
 -- burst_buffer/cray - Add support for rounding up the size of a buffer request
    if the DataWarp configuration "equalize_fragments" is used.
 -- Remove AIX support.
 -- Rename "in" to "input" in slurm_step_io_fds data structure defined in
    slurm.h. This is needed to avoid breaking Python with by using one of its
    keywords in a Slurm data structure.
 -- Remove eligible_time from jobcomp/elasticsearch.
 -- Fix issue where a QOS could not be deleted if no clusters had been added.
 -- SlurmDBD - change all timestamps to bigint from int to solve Y2038 problem.
 -- Add salloc/sbatch/srun --spread-job to distribute tasks over as many nodes
    as possible. This also treats the --ntasks-per-node option as a maximum
    value.
 -- Add ConstrainKmemSpace to cgroup.conf, defaulting to yes, to allow
    cgroup Kmem enforcement to be disabled while still using ConstrainRAMSpace.
 -- Add support for sbatch --bbf option.
 -- Add burst buffer support for job arrays. Add new SchedulerParameters option
    of bb_array_stage_cnt=# to indicate how many pending tasks of a job array
    should be made available for burst buffer resource allocation (see the
    example at the end of this section).
 -- Fix small memory leak when a job fails to load from state save.
 -- Fix invalid read when attempting to delete clusters from db with running
    jobs.
 -- Fix small memory leak when deleting clusters from db.
 -- Add SLURM_ARRAY_TASK_COUNT environment variable, set to the total number
    of tasks in a job array (e.g. "--array=2,4,8" will set
    SLURM_ARRAY_TASK_COUNT=3); see the example at the end of this section.
 -- Add new sacctmgr commands: "shutdown" (shutdown the server), "list stats"
    (get server statistics), and "clear stats" (clear server statistics).
 -- Restructure job accounting query to use 'id_job in (1, 2, .. )' format
    instead of logically equivalent 'id_job = 1 || id_job = 2 || ..' .
 -- Added start_delay field to jobcomp/elasticsearch.
 -- In order to support federated jobs, the MaxJobID configuration parameter
    default value has been reduced from 2,147,418,112 to 67,043,328 and its
    maximum value is now 67,108,863. Upon upgrading, any pre-existing jobs that
    have a job ID above the new range will continue to run and new jobs will get
    job IDs in the new range.
 -- Added infrastructure for setting up federations in database and establishing
    connections between federation clusters.
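    Example: a minimal sketch of the bb_array_stage_cnt option noted above,
    with an illustrative count only:
      SchedulerParameters=bb_array_stage_cnt=10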
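    Example: a minimal sketch of SLURM_ARRAY_TASK_COUNT in a batch script as
    noted above; the script itself is a placeholder:
      #!/bin/bash
      #SBATCH --array=2,4,8
      # Three array tasks are requested, so SLURM_ARRAY_TASK_COUNT is 3.
      echo "task ${SLURM_ARRAY_TASK_ID} of ${SLURM_ARRAY_TASK_COUNT}"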
* Changes in Slurm 16.05.6
==========================
 -- Docs - the correct default value for GroupUpdateForce is 0.
 -- mpi/pmix - improve point to point communication performance.
 -- SlurmDB - include pending jobs in search during 'sacctmgr show runawayjobs'.
 -- Add client side out-of-range checks to --nice flag.
 -- Fix support for sbatch "-W" option; previously one needed to use "--wait".
* Changes in Slurm 16.05.5
==========================
 -- Fix accounting for jobs requeued after the previous job was finished.
 -- slurmstepd modified to pre-load all relevant plugins at startup to avoid
    the possibility of modified plugins later resulting in inconsistent API
    or data structures and a failure of slurmstepd.
 -- Export functions from parse_time.c in libslurm.so.
 -- Export unit convert functions from slurm_protocol_api.c in libslurm.so.
 -- Fix scancel to allow multiple steps from a job to be cancelled at once.
 -- Update and expand upgrade guide (in Quick Start Administrator web page).
 -- burst_buffer/cray: Requeue, but do not hold a job which fails the pre_run
    operation.
 -- Ensure reported expected job start time is not in the past for pending jobs.
 -- Add support for PMIx v2.
 -- mpi/pmix: support for passing the TMPDIR path through an info key.
 -- Cray: update slurmconfgen_smw.py script to correctly identify service nodes
    versus compute nodes.
 -- FreeBSD - fix build issue in knl_cray plugin.
 -- Corrections to gres.conf parsing logic.
 -- Make partition State independent of EnforcePartLimits value.
 -- Fix multi-partition srun submission with EnforcePartLimits=NO when the job
    violates the partition limits.
 -- Fix problem updating job state_reason.
 -- pmix - Provide HWLOC topology in the job-data if Slurm was configured
    with hwloc.
 -- Cray - Fix issue restoring jobs when blade count increases due to hardware
    reconfiguration.
 -- burst_buffer/cray - Hold job after 3 failed pre-run operations.
 -- sched/backfill - Check that a user's QOS is allowed to use a partition
    before trying to schedule resources on that partition for the job.
 -- sacctmgr - Fix displaying nodenames when printing out events or
    reservations.
 -- Fix mpiexec wrapper to accept task count with more than one digit.
 -- Add mpiexec man page to the script.
 -- Add salloc_wait_nodes option to the SchedulerParameters parameter in the
    slurm.conf file controlling when the salloc command returns in relation to
    when nodes are ready for use (i.e. booted).
 -- Handle the case where the slurmctld daemon restarts while a compute node
    reboot is in progress. Return the node to service rather than setting it
    DOWN.
 -- Preserve node "RESERVATION" state when one of multiple overlapping
    reservations ends.
 -- Restructure srun command locking for task_exit processing logic for improved
    parallelism.
 -- Modify srun task completion handling to only build the task/node string for
    logging purposes if it is needed. Modified for performance purposes.
 -- Docs - update salloc/sbatch/srun man pages to mention corresponding
    environment variables for --mem/--mem-per-cpu and allowed suffixes.
 -- Silence srun warning when overriding the job ntasks-per-node count
    with a lower task count for the step.
 -- Docs - assorted spelling fixes.
 -- node_features/knl_cray: Fix bug where MCDRAM state could be taken from
    capmc rather than cnselect.
 -- node_features/knl_cray: If a node is rebooted outside of Slurm's direction,
    update its active features with current MCDRAM and NUMA mode information.
 -- Restore ability to manually power down nodes, broken in 15.08.12.
 -- Don't log error for job end_time being zero if node health check is still
    running.
 -- When powering up a node to change its state (e.g. KNL NUMA or MCDRAM mode),
    pass the job ID assigned to the nodes to the ResumeProgram in the
    SLURM_JOB_ID environment variable.
 -- Allow a node's PowerUp state flag to be cleared using update_node RPC.
 -- capmc_suspend/resume - If a request to modify NUMA or MCDRAM state on a set
    of nodes or to reboot a set of nodes fails, requeue the job and abort the
    entire operation rather than trying to operate on individual nodes.
 -- node_features/knl_cray plugin: Increase default CapmcTimeout parameter from
    10 to 60 seconds.
 -- Fix squeue filter by job license when a job has requested more than 1
    license of a certain type.
 -- Fix bug in PMIX_Ring in the pmi2 plugin so that it supports singleton mode.
    Also update the testpmixring.c test program so it can be used to check
    singleton runs.
 -- Automatically clean up task/cgroup cpuset and devices cgroups after steps
    are done.
 -- Testsuite - Fix test1.83 to handle gaps in node names properly.
 -- BlueGene - correctly scale node counts when enforcing MaxNodes limit.
 -- Make sure no attempt is made to schedule a requeued job until all steps are
    cleaned (Node Health Check completes for all steps on a Cray).
 -- KNL: Correct task affinity logic for some NUMA modes.
 -- Add salloc/sbatch/srun --priority option of "TOP" to set job priority to
    the highest possible value. This option is only available to Slurm operators
    and administrators.
 -- Add salloc/sbatch/srun option --use-min-nodes to prefer smaller node counts
    when a range of node counts is specified (e.g. "-N 2-4").
 -- Validate salloc/sbatch --wait-all-nodes argument.
 -- Add "sbatch_wait_nodes" to SchedulerParameters to control default sbatch
    behaviour with respect to waiting for all allocated nodes to be ready for
    use. Job can override the configuration option using the --wait-all-nodes=#
    option.
 -- Prevent partition group access updates from resetting last_part_update when
    no changes have been made. Prevents backfill scheduler from restarting
    mid-cycle unnecessarily.
 -- Cray - add NHC_ABSOLUTELY_NO to never run NHC, even on certain edge cases
    that it would otherwise be run on with NHC_NO.
 -- Ignore GRES/QOS updates that maintain the same value as before.
 -- mpi/pmix - prepare temp directory for application.
 -- Fix display for the nice and priority values in sprio/scontrol/squeue.
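    Example: a minimal sketch of the salloc_wait_nodes and sbatch_wait_nodes
    options noted above. In slurm.conf:
      SchedulerParameters=salloc_wait_nodes,sbatch_wait_nodes
    A job can still override the default on the command line (the script name
    is a placeholder only):
      sbatch --wait-all-nodes=1 my_job.sh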
* Changes in Slurm 16.05.4
==========================