This file describes changes in recent versions of SLURM. It primarily
documents those changes that are of interest to users and admins.
* Changes in SLURM 1.4.0-pre2
=============================
 -- Remove srun's --ctrl-comm-ifhn-addr option (for PMI/MPICH2). It is no
    longer needed.

* Changes in SLURM 1.4.0-pre1
=============================
 -- Save/restore a job's task_distribution option on slurmctld restart.
    NOTE: SLURM must be cold-started on conversion from version 1.3.x.
 -- Remove task_mem from job step credential (only job_mem is used now).
 -- Remove --task-mem and --job-mem options from salloc, sbatch and srun
    (use --mem-per-cpu or --mem instead).
 -- Remove DefMemPerTask from slurm.conf (use DefMemPerCPU or DefMemPerNode
    instead).
 -- Modify slurm_step_launch API call. Move launch host from function argument
    to element in the data structure slurm_step_launch_params_t, which is
    used as a function argument.
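    Illustrative sketch of the new calling convention (the context type and
    the field name "launcher_host" below are assumptions for illustration
    only; consult slurm.h in your release for the actual members):

        #include <string.h>
        #include <slurm/slurm.h>

        static int launch_step(slurm_step_ctx_t *ctx, char *host)
        {
                slurm_step_launch_params_t params;
                slurm_step_launch_callbacks_t callbacks;

                slurm_step_launch_params_t_init(&params);
                memset(&callbacks, 0, sizeof(callbacks));
                /* ASSUMED field name: the launch host now travels in
                 * the params structure, not as a function argument. */
                params.launcher_host = host;
                return slurm_step_launch(ctx, &params, &callbacks);
        }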
 -- Add state_reason_string to job state with optional details about why
    a job is pending.
 -- Make "scontrol show node" output match scontrol input for some fields
    ("Cores" changed to "CoresPerSocket", etc.).
 -- Add support for a new node state "FUTURE" in slurm.conf. These node records
    are created in SLURM tables for future use without a reboot of the SLURM
    daemons, but are not reported by any SLURM commands or APIs.
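    For example, a slurm.conf entry such as the following (hypothetical
    node names) creates hidden records for later use:

        NodeName=tux[100-131] Procs=8 State=FUTURE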

* Changes in SLURM 1.3.7
========================
 -- Add jobid/stepid to MESSAGE_TASK_EXIT to address race condition when 
    a job step is cancelled, another is started immediately (before the 
    first one completely terminates) and ports are reused. 
    NOTE: This change requires that SLURM be updated on all nodes of the
    cluster at the same time. There will be no impact upon currently running
    jobs (they will ignore the jobid/stepid at the end of the message).
 -- Added Python module to process hostlists as used by SLURM. See
    contribs/python/hostlist. Supplied by Kent Engstrom, National
    Supercomputer Centre, Sweden.
 -- Report task termination due to signal (restores functionality present
    in earlier SLURM versions).
 -- Remove sbatch's test that a job script be no larger than 64KB; the
    limit is now 4GB.
 -- Disable FastSchedule=0 use with SchedulerType=sched/gang. Node 
    configuration must be specified in slurm.conf for gang scheduling now.
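    A minimal slurm.conf sketch consistent with this rule (hypothetical
    node names and sizes):

        SchedulerType=sched/gang
        FastSchedule=1
        NodeName=tux[000-127] Procs=8 RealMemory=2048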
 -- For sched/wiki and sched/wiki2 (Maui or Moab scheduler) disable the ability
    of a non-root user to change a job's comment field (used by Maui/Moab for
    storing scheduler state information).
 -- For sched/wiki (Maui) add pending job's future start time to the state
    info reported to Maui.
 -- Improve reliability of job requeue logic on node failure.
 -- Add logic to ping non-responsive nodes even if SlurmdTimeout=0. This permits
    the node to be returned to use when it starts responding rather than 
    remaining in a non-usable state.
 -- Honor HealthCheckInterval values that are smaller than SlurmdTimeout.
 -- For non-responding nodes, log them all on a single line with a hostlist
    expression rather than one line per node. The frequency of these log
    messages depends upon the SlurmctldDebug value, ranging from every 300
    seconds at SlurmctldDebug<=3 to every second at SlurmctldDebug>=5.
 -- If a DOWN node is resumed, set its state to IDLE & NOT_RESPONDING and 
    ping the node immediately to clear the NOT_RESPONDING flag.
 -- Log that a job's time limit is reached, but don't send SIGXCPU.
 -- Fixed gid to be set in slurmstepd when run by root.
 -- Changed getpwent to getpwent_r in the slurmctld and slurmd.
 -- Increase timeout on most slurmdbd communications to 60 secs (time for
    substantial database updates).
 -- Treat an srun --begin= value of "now" plus a time unit but no numeric
    component (e.g. "--begin=now+hours") as an error; a numeric component,
    as in "--begin=now+2hours", is required.

* Changes in SLURM 1.3.6
========================
 -- Add new function to get information for a single job rather than always
    getting information for all jobs. Improved performance of some commands. 
    NOTE: This new RPC means that the slurmctld daemons should be updated
    before or at the same time as the compute nodes in order to process it.
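    For API users, the single-job lookup is presumably exposed as
    slurm_load_job(); a minimal sketch, assuming that entry point:

        #include <stdint.h>
        #include <stdio.h>
        #include <slurm/slurm.h>

        static void show_one_job(uint32_t job_id)
        {
                job_info_msg_t *msg = NULL;

                /* Fetch one job's info rather than calling
                 * slurm_load_jobs() to retrieve every job. */
                if (slurm_load_job(&msg, job_id, SHOW_ALL)
                    != SLURM_SUCCESS) {
                        slurm_perror("slurm_load_job");
                        return;
                }
                slurm_print_job_info_msg(stdout, msg, 0);
                slurm_free_job_info_msg(msg);
        }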
 -- In salloc, sbatch, and srun replace --task-mem options with --mem-per-cpu
    (--task-mem will continue to be accepted for now, but is not documented).
    Replace DefMemPerTask and MaxMemPerTask with DefMemPerCPU, DefMemPerNode,
    MaxMemPerCPU and MaxMemPerNode in slurm.conf (old options still accepted
    for now, but mapped to "PerCPU" parameters and not documented). Allocate
    a job's memory at the same time that processors are allocated based
    upon the --mem or --mem-per-cpu option rather than when job steps are
    initiated.
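    Illustrative settings (values are hypothetical):

        # slurm.conf: default and maximum memory per allocated CPU (MB)
        DefMemPerCPU=1024
        MaxMemPerCPU=2048

        # command line: memory per CPU, or per node via --mem
        srun --mem-per-cpu=512 hostname
        srun --mem=4096 hostname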
 -- Altered QOS in accounting to be a list of admin-defined states; an
    account or user can now have multiple QOS's. They must be defined using
    'sacctmgr add qos' and are no longer an enum. If none are defined,
    "Normal" will be the QOS for everything. Right now this is only for use
    with MOAB and does nothing outside of that.
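    For example (hypothetical QOS name):

        sacctmgr add qos expedite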
 -- Added spank_get_item support for field S_STEP_CPUS_PER_TASK.
 -- Make corrections in spank_get_item for field S_JOB_NCPUS, which
    previously reported task count rather than CPU count.
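    A minimal SPANK plugin sketch using the new item (assuming, per
    spank.h, that the value is returned as a uint32_t):

        #include <stdint.h>
        #include <slurm/spank.h>

        SPANK_PLUGIN(cpus_per_task_demo, 1);

        int slurm_spank_task_init(spank_t sp, int ac, char **av)
        {
                uint32_t cpt = 0;

                /* Retrieve the step's --cpus-per-task value. */
                if (spank_get_item(sp, S_STEP_CPUS_PER_TASK, &cpt)
                    == ESPANK_SUCCESS)
                        slurm_info("cpus per task: %u", (unsigned) cpt);
                return 0;
        }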
 -- Convert configuration parameter PrivateData from on/off flag to have
    separate flags for job, partition, and node data. See "man slurm.conf"
    for details.
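    For example, to hide job and node information from other users while
    leaving partition data public:

        PrivateData=jobs,nodes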
 -- Fix bug that failed to load the DisableRootJobs configuration parameter.
 -- Altered sacctmgr to always return a non-zero exit code on error and send 
    error messages to stderr.

* Changes in SLURM 1.3.5
========================
 -- Fix processing of auth/munge authentication key for messages originating
    in slurmdbd and sent to slurmctld.
 -- If srun is allocating resources (not within sbatch or salloc) and MaxWait
    is configured to a non-zero value then wait indefinitely for the resource
    allocation rather than aborting the request after MaxWait time.
 -- For Moab only: add logic to reap defunct "su" processes that are spawned by
    slurmd to load user's environment variables.
 -- Added more support for "dumping" account information to a flat file and
    reading it in again to protect data in case something bad happens to the
    database.
 -- Sacct will now report account names for job steps.
 -- For AIX: Remove MP_POERESTART_ENV environment variable, disabling 
    poerestart command. User must explicitly set MP_POERESTART_ENV before 
    executing poerestart.
 -- Put back notification that a job has been allocated resources when it was
    pending.

* Changes in SLURM 1.3.4
========================
 -- Some updates to man page formatting from Gennaro Oliva, ICAR.
 -- Smarter loading of plugins (doesn't stat every file in the plugin dir).
 -- In sched/backfill avoid trying to schedule jobs on DOWN or DRAINED nodes.
 -- Forward exit_code from step completion to slurmdbd.
 -- Add retry logic to socket connect() call from client which can fail 
    when the slurmctld is under heavy load.
 -- Fixed bug so that associations are added correctly.
 -- Added support for associations for user root.
 -- For Moab, sbatch --get-user-env option processed by slurmd daemon
    rather than the sbatch command itself to permit faster response
    for Moab.
 -- IMPORTANT FIX: This only affects use of select/cons_res when allocating
    resources by core or socket, not by CPU (default for SelectTypeParameter).
    We are not saving a pending job's task distribution, so after restarting
    slurmctld, select/cons_res was over-allocating resources based upon an 
    invalid task distribution value. Since we can't save the value without 
    changing the state save file format, we'll just set it to the default 
    value for now and save it in Slurm v1.4. This may result in a slight 
    variation on how sockets and cores are allocated to jobs, but at least 
    resources will not be over-allocated.
 -- Correct logic in accumulating resources by node weight when more than 
    one job can run per node (select/cons_res or partition shared=yes|force).
 -- slurm.spec file updated to avoid creating empty RPMs. RPM now *must* be
    built with correct specification of which packages to build or not build.
    See the top of the slurm.spec file for information about how to control
    package building specification.
 -- Set SLURM_JOB_CPUS_PER_NODE for jobs allocated using the srun command.
    It was already set for salloc and sbatch commands.
 -- Fix accounting to handle suspended jobs that were cancelled.
 -- BLUEGENE - fix to only include bps given in a name from the bluegene.conf 
    file.
 -- For select/cons_res: Fix record-keeping for core allocations when more 
    than one partition uses a node or there is more than one socket per node.
 -- In output for "scontrol show job" change "StartTime" header to "EligibleTime"
    for pending jobs to accurately describe what is reported.
 -- Add more slurmdbd.conf parameters: ArchiveScript, ArchiveAge, JobPurge,
    and StepPurge (not fully implemented yet).
 -- Add slurm.conf parameter EnforcePartLimits to reject jobs which exceed a
    partition's size and/or time limits rather than leaving them queued for a
    later change in the partition's limits. NOTE: Not reported by
    "scontrol show config" to avoid changing RPCs. It will be reported in 
    SLURM version 1.4.
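    For example, in slurm.conf:

        EnforcePartLimits=YES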
 -- Added idea of coordinator to accounting. A coordinator can add
    associations between existing users and the account or any sub-account
    they are coordinator of. They can also add/remove other coordinators
    on those accounts.
 -- Add support for Hostname and NodeHostname in slurm.conf being fully 
    qualified domain names (by Vijay Ramasubramanian, University of Maryland). 
    For more information see "man slurm.conf".
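    For example (hypothetical host name):

        NodeName=tux001 NodeHostname=tux001.example.com
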
* Changes in SLURM 1.3.3
========================
 -- Add mpi_openmpi plugin to the main SLURM RPM.
 -- Prevent invalid memory reference when using srun's --cpu_bind=cores option
    (slurm-1.3.2-1.cea1.patch from Matthieu Hautreux, CEA).
 -- Task affinity plugin modified to support a particular cpu bind type: cores,
    sockets, threads, or none. Accomplished by setting an environment variable
    SLURM_ENFORCE_CPU_TYPE (slurm-1.3.2-1.cea2.patch from Matthieu Hautreux, 
    CEA).
 -- For BlueGene only, log "Prolog failure" once per job not once per node.
 -- Reopen slurmctld log file after reconfigure or SIGHUP is received.
 -- In TaskPlugin=task/affinity, fix possible infinite loop for slurmd.
 -- Accounting rollup works for mysql plugin.  Automatic rollup when using 
    slurmdbd.
 -- Copied job stat logic out of sacct into sstat; in the future,
    sacct -stat will be deprecated.
 -- Correct sbatch processing of --nice option with negative values.
 -- Add squeue formatted print option %Q to print a job's integer priority.
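    For example, to show each job's id, integer priority, and name:

        squeue -o "%.8i %.10Q %.30j"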
 -- In sched/backfill, fix bug that was changing a pending job's shared value
    to zero (possibly changing a pending job's resource requirements from a 
    processor on some node to the full node).

* Changes in SLURM 1.3.2
========================
 -- Get --ntasks-per-node option working for sbatch command.
 -- BLUEGENE: Added logic to give back the best block in overlapped mode
    when in test_only mode.
 -- BLUEGENE: Updated debug info and man pages for better help with the
    numpsets option, and to fail correctly on a bad image request when
    building blocks.
 -- In sched/wiki and sched/wiki2 properly support Slurm license consumption
    (job state reported as "Hold" when required licenses are not available).