This file describes changes in recent versions of SLURM. It primarily
documents those changes that are of interest to users and admins.
* Changes in SLURM 1.3.5
========================
 -- Fix processing of auth/munge authentication key for messages originating
    in slurmdbd and sent to slurmctld.

* Changes in SLURM 1.3.4
========================
 -- Some updates to man page formatting from Gennaro Oliva, ICAR.
 -- Smarter loading of plugins (doesn't stat every file in the plugin dir)
 -- In sched/backfill avoid trying to schedule jobs on DOWN or DRAINED nodes.
 -- Forward exit_code from step completion to slurmdbd.
 -- Add retry logic to the socket connect() call from clients, which can fail
    when the slurmctld is under heavy load.
 -- Fixed bug so that associations are added correctly.
 -- Added support for associations for user root.
 -- For Moab, sbatch --get-user-env option processed by slurmd daemon
    rather than the sbatch command itself to permit faster response
    for Moab.
 -- IMPORTANT FIX: This only affects use of select/cons_res when allocating
    resources by core or socket, not by CPU (default for SelectTypeParameter).
    A pending job's task distribution was not being saved, so after restarting
    slurmctld, select/cons_res was over-allocating resources based upon an
    invalid task distribution value. Since we can't save the value without
    changing the state save file format, we'll just set it to the default
    value for now and save it in Slurm v1.4. This may result in a slight
    variation in how sockets and cores are allocated to jobs, but at least
    resources will not be over-allocated.
 -- Correct logic in accumulating resources by node weight when more than 
    one job can run per node (select/cons_res or partition shared=yes|force).
 -- slurm.spec file updated to avoid creating empty RPMs. RPMs now *must* be
    built with correct specification of which packages to build or not build.
    See the top of the slurm.spec file for information about how to control
    which packages are built.
 -- Set SLURM_JOB_CPUS_PER_NODE for jobs allocated using the srun command.
    It was already set for salloc and sbatch commands.
 -- Fix accounting to handle suspended jobs that were cancelled.
 -- BLUEGENE - fix to only include BPs given in a name from the bluegene.conf
    file.
 -- For select/cons_res: Fix record-keeping for core allocations when more 
    than one partition uses a node or there is more than one socket per node.
 -- In output for "scontrol show job" change "StartTime" header to "EligibleTime"
    for pending jobs to accurately describe what is reported.
 -- Add more slurmdbd.conf parameters: ArchiveScript, ArchiveAge, JobPurge, and
    StepPurge (not fully implemented yet).
 -- Add slurm.conf parameter EnforcePartLimits to reject jobs which exceed a
    partition's size and/or time limits rather than leaving them queued for a
    later change in the partition's limits. NOTE: Not reported by
    "scontrol show config" to avoid changing RPCs. It will be reported in 
    SLURM version 1.4.
 -- Added the notion of a coordinator to accounting. A coordinator can add
    associations between existing users and the account they coordinate, or
    any of its sub-accounts. They can also add/remove other coordinators
    for those accounts.
 -- Add support for Hostname and NodeHostname in slurm.conf being fully 
    qualified domain names (by Vijay Ramasubramanian, University of Maryland). 
    For more information see "man slurm.conf".
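 -- NOTE: As an illustration of the fully qualified name support described in
    the previous item, a minimal slurm.conf fragment might look like the
    following sketch (hostnames and processor counts are hypothetical):
      NodeName=node01 NodeHostname=node01.cluster.example.org Procs=4
      NodeName=node02 NodeHostname=node02.cluster.example.org Procs=4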
* Changes in SLURM 1.3.3
========================
 -- Add mpi_openmpi plugin to the main SLURM RPM.
 -- Prevent invalid memory reference when using srun's --cpu_bind=cores option
    (slurm-1.3.2-1.cea1.patch from Matthieu Hautreux, CEA).
 -- Task affinity plugin modified to support a particular cpu bind type: cores,
    sockets, threads, or none. Accomplished by setting an environment variable
    SLURM_ENFORCE_CPU_TYPE (slurm-1.3.2-1.cea2.patch from Matthieu Hautreux, 
    CEA).
 -- For BlueGene only, log "Prolog failure" once per job not once per node.
 -- Reopen slurmctld log file after reconfigure or SIGHUP is received.
 -- In TaskPlugin=task/affinity, fix possible infinite loop for slurmd.
 -- Accounting rollup works for the mysql plugin. Rollup is automatic when
    using slurmdbd.
 -- Copied job stat logic out of sacct into sstat; in the future, sacct -stat
    will be deprecated.
 -- Correct sbatch processing of --nice option with negative values.
 -- Add squeue formatted print option %Q to print a job's integer priority
    (example below).
 -- In sched/backfill, fix bug that was changing a pending job's shared value
    to zero (possibly changing a pending job's resource requirements from a 
    processor on some node to the full node).
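 -- NOTE: As an illustration of the new squeue %Q format option noted above,
    a job's integer priority can be printed alongside standard fields (the
    other format codes shown are pre-existing squeue codes):
      squeue -o "%.7i %.9P %.8j %.8u %.2t %.10M %Q"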
* Changes in SLURM 1.3.2
========================
 -- Get --ntasks-per-node option working for sbatch command.
 -- BLUEGENE: Added logic to return the best block in overlapped mode when
    in test_only mode.
 -- BLUEGENE: Updated debug info and man pages for better help with the
    numpsets option and to fail correctly on a bad image request when building
    blocks.
 -- In sched/wiki and sched/wiki2 properly support Slurm license consumption
    (job state reported as "Hold" when required licenses are not available).
 -- In sched/wiki2 JobWillRun command, don't return an error code if the job(s)
    can not be started at that time. Just return an error message (from 
    Doug Wightman, CRI).
 -- Fix bug if sched/wiki or sched/wiki2 are configured and no job comment is 
    set.
 -- scontrol modified to report a partition's "DisableRootJobs" value.
 -- Fix bug in setting host address for PMI communications (mpich2 only).
 -- Fix for memory size accounting on some architectures.
 -- In sbatch and salloc, change --dependency's one-letter option from "-d"
    to "-P" (continue to accept "-d", but change the documentation; example
    below).
 -- Only check that task_epilog and task_prolog are runnable by the job's
    user, not as root.
 -- In sbatch, if specifying an alternate directory (--workdir/-D), then
    input, output and error files are placed in that directory rather than
    the directory from which the command is executed.
 -- NOTE: Fully operational with Moab version 5.2.3+. Change SUBMITCMD in
    moab.cfg to be the location of sbatch rather than srun. Also set 
    HostFormat=2 in SLURM's wiki.conf for improved performance.
 -- NOTE: We needed to change an RPC from version 1.3.1. You must upgrade 
    all nodes in a cluster from v1.3.1 to v1.3.2 at the same time.
 -- Postgres plugin will work for job accounting, but not for association
    management yet.
 -- For srun/sbatch --get-user-env option (Moab use only) look for "env"
    command in both /bin and /usr/sbin (for Suse Linux).
 -- Fix bug in processing job feature requests with node counts (could fail
    to schedule a job if some nodes have no associated features).
 -- Added nodecnt and gid to jobcomp/script.
 -- Ensure that nodes selected in the "srun --will-run" command or the
    equivalent in sched/wiki2 are in the job's partition.
 -- BLUEGENE - changed partition Min|MaxNodes to represent c-node counts
    instead of base partitions.
 -- In sched/gang only, prevent possible invalid memory reference when 
    slurmctld is reconfigured, e.g. "scontrol reconfig".
 -- In select/linear only, prevent invalid memory reference in log message when
    nodes are added to slurm.conf and then "scontrol reconfig" is executed. 
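 -- NOTE: As an illustration of the dependency option change noted above
    (job ID and script name are hypothetical):
      sbatch -P 1234 job.sh              # new one-letter form
      sbatch --dependency=1234 job.sh    # equivalent long form
      sbatch -d 1234 job.sh              # still accepted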

* Changes in SLURM 1.3.1
========================
 -- Correct logic for processing batch job's memory limit enforcement.
 -- Fix bug that was setting a job's requeue value on any update of the
    job using the "scontrol update" command. The invalid value of an
    updated job prevents its recovery when slurmctld restarts.
 -- Add support for cluster-wide consumable resources. See "Licenses"
    parameter in slurm.conf man page and "--licenses" option in salloc,
    sbatch and srun man pages (example below).
 -- Major changes in select/cons_res to support FastSchedule=2 with more
    resources configured than actually exist (useful for testing purposes).
 -- Modify srun --test-only response to include expected initiation time 
    for a job as well as the nodes to be allocated and processor count
    (for use by Moab).
 -- Correct sched/backfill to properly honor job dependencies.
 -- Correct select/cons_res logic to allocate CPUs properly if there is
    more than one thread per core (previously failed to allocate all cores).
 -- Correct select/linear logic in shared job count (was off by 1).
 -- Add support for job preemption based upon partition priority (in
    sched/gang, preempt.patch from Chris Holmes, HP).
 -- Added much better logic for mysql accounting.  
 -- Finished all basic functionality for sacctmgr.
 -- Added load file logic to sacctmgr for setting up a cluster in one step.
 -- NOTE: We needed to change an RPC from version 1.3.0. You must upgrade 
    all nodes in a cluster from v1.3.0 to v1.3.1 at the same time.
 -- NOTE: Work is currently underway to improve placement of jobs for gang
    scheduling and preemption.
 -- NOTE: Work is underway to provide additional tools for reporting 
    accounting information.
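 -- NOTE: As an illustration of the new cluster-wide license support noted
    above (license names and counts are hypothetical):
      # slurm.conf
      Licenses=matlab:10,ansys:2
      # job submission requesting one matlab license
      sbatch --licenses=matlab:1 job.sh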
* Changes in SLURM 1.3.0
========================
 -- In sched/wiki2, add processor count to JOBWILLRUN response.
 -- Add event trigger for node entering DRAINED state.
 -- Build properly without OpenSSL installed (OpenSSL is recommended, but not 
    required).
 -- Added slurmdbd and modified the accounting_storage plugin to talk to it,
    allowing multiple SLURM systems to securely store and gather information
    not only about jobs, but also about the system. See the accounting web
    page for more information.
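 -- NOTE: As a sketch of how accounting storage through slurmdbd is selected
    in slurm.conf (see the accounting web page for the authoritative
    parameter list):
      AccountingStorageType=accounting_storage/slurmdbd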
* Changes in SLURM 1.3.0-pre11
==============================
 -- Restructure the sbcast RPC to take advantage of larger buffers available
    in Slurm v1.3 RPCs.
 -- Fix several memory leaks.
 -- In scontrol, show a job's Requeue value; permit changes to Requeue and
    Comment (example below).
 -- In slurmctld job record, add QOS (quality of service) value for accounting
    purposes with Maui and Moab.
 -- Log to a job's stderr when it is being cancelled explicitly or upon
    reaching its time limit.
 -- Only permit a job's account to be changed while that job is PENDING.
 -- Fix race condition in job suspend/resume (slurmd.sus_res.patch from HP).
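 -- NOTE: As an illustration of the scontrol changes noted above (job ID and
    comment text are hypothetical):
      scontrol update JobId=1234 Requeue=1 Comment="resubmit on node failure"
      scontrol show job 1234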
* Changes in SLURM 1.3.0-pre10
==============================
 -- Add support for node-specific "arch" (architecture) and "os" (operating
    system) fields. These fields are set based upon values reported by the
    slurmd daemon on each compute node using the SLURM_ARCH and SLURM_OS
    environment variables (if set, otherwise using the uname function) and are
    intended to support real-time changes in the operating system. These values
    are reported by "scontrol show node" plus the sched/wiki and sched/wiki2
    plugins for Maui and Moab respectively.
 -- In sched/wiki and sched/wiki2: add HostFormat and HidePartitionJobs to 
    "scontrol show config" SCHEDULER_CONF output.
 -- In sched/wiki2: accept hostname expression as input for GETNODES command.
 -- Add JobRequeue configuration parameter and --requeue option to the sbatch
    command.
 -- Add HealthCheckInterval and HealthCheckProgram configuration parameters
    (example below).
 -- Add SlurmDbdAddr, SlurmDbdAuthInfo and SlurmDbdPort configuration parameters.
 -- Modify select/linear to achieve better load leveling with gang scheduler.
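 -- NOTE: As an illustration of the new configuration parameters noted above
    (program path and values are hypothetical):
      # slurm.conf excerpt
      HealthCheckProgram=/usr/sbin/node_health
      HealthCheckInterval=300
      JobRequeue=1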