This file describes changes in recent versions of Slurm. It primarily
documents those changes that are of interest to users and admins.
-- Fix issue when QOS is not being enforced but a partition either allows or
denies QOSes.
-- CRAY - Make switch/cray default when running on a Cray natively.
-- CRAY - Make job_container/cncu default when running on a Cray natively.
-- Disable job time limit change if its preemption is in progress.
-- Correct logic to properly enforce job preemption GraceTime.
-- Fix sinfo -R to print each down/drained node once, rather than once per
partition.
-- If a job has a non-responding node, retry job step create rather than
returning with DOWN node error.
-- Support SLURM_CONF path which does not have "slurm.conf" as the file name.
-- Fix issue where batch cpuset wasn't looked at correctly in
jobacct_gather/cgroup.
-- Correct squeue's job node and CPU counts for requeued jobs.
-- Correct SelectTypeParameters=CR_LLN with job selection of specific nodes.
-- Only if ALL of their partitions are hidden will a job be hidden by default.
-- Run EpilogSlurmctld when a job is killed during slurmctld reconfiguration.
-- Close a window in srun where receiving a signal while printing output and
waiting for an allocation could produce a deadlock.
-- Add SelectTypeParameters option of CR_PACK_NODES to pack a job's tasks
tightly on its allocated nodes rather than distributing them evenly across
the allocated nodes.
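For illustration (numbers invented here): with two allocated 8-CPU nodes and
10 tasks, CR_PACK_NODES places tasks 0-7 on the first node and tasks 8-9 on
the second, rather than distributing 5 tasks to each node.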
-- cpus-per-task support: Try to pack all CPUs of each task onto one socket.
Previous logic could spread a task's CPUs across multiple sockets.
-- Add new distribution method fcyclic so when a task is using multiple cpus
it can bind cyclically across sockets.
-- task/affinity - When using --hint=nomultithread only bind to the first
thread in a core.
-- Make cgroup task layout (block | cyclic) method mirror that of
task/affinity.
-- If TaskProlog sets SLURM_PROLOG_CPU_MASK reset affinity for that task
based on the mask given.
-- Keep supporting 'srun -N x --pty bash' for historical reasons.
-- If EnforcePartLimits=Yes and the QOS the job is using can override limits,
allow it.
-- Fix issues if a partition allows or denies accounts or QOSes and either are
not set.
-- If a job requests a partition that doesn't allow the QOS or account the
job is requesting, the job will pend unless EnforcePartLimits=Yes. Before,
the job was always killed at submit time.
-- Fix format output of scontrol command when printing node state.
-- Improve the clean up of cgroup hierarchy when using the
jobacct_gather/cgroup plugin.
-- Added SchedulerParameters value of Ignore_NUMA.
-- Fix issues with code when using automake 1.14.1
-- select/cons_res plugin: Fix memory leak related to job preemption.
-- After reconfig, rebuild the job node counters only for jobs that have
not finished yet; otherwise, if requeued, the job may enter an invalid
COMPLETING state.
-- Do not purge the script and environment files for completed jobs on
slurmctld reconfiguration or restart (they might be later requeued).
-- scontrol now accepts the option job=xxx or jobid=xxx for the requeue,
requeuehold and release operations.
-- task/cgroup - fix to bind batch job in the proper CPUs.
* Changes in Slurm 14.03.3-2
============================
* Changes in Slurm 14.03.3
==========================
-- Correction to default batch output file name. Version 14.03.2 was using
"slurm_<jobid>_4294967294.out" due to an error in the job array logic.
-- In slurm.spec file, replace "Requires cray-MySQL-devel-enterprise" with
"Requires mysql-devel".
-- Fix race condition if PrologFlags=Alloc,NoHold is used.
-- Cray - Make NPC only limit running other NPC jobs on shared blades instead
of limiting non-NPC jobs.
-- Fix for sbatch #PBS -m (mail) option parsing.
-- Fix job dependency bug. Jobs dependent upon multiple other jobs may start
prematurely.
-- Set "Reason" field for all elements of a job array on short-circuited
scheduling for job arrays.
-- Allow -D option of salloc/srun/sbatch to specify relative path.
-- Added SchedulerParameter of batch_sched_delay to permit many batch jobs
to be submitted between each scheduling attempt to reduce overhead of
scheduling logic.
-- Added job reason of "SchedTimeout" if the scheduler was not able to reach
the job to attempt scheduling it.
-- Add job's exit state and exit code to email message.
-- scontrol hold/release accepts job name option (in addition to job ID).
-- Better handle attempts to cancel a step that hasn't started yet.
-- Add --priority option to salloc, sbatch and srun commands.
-- Honor partition priorities over job priorities.
-- Fix sacct -c when using jobcomp/filetxt to read newer variables
-- Fix segfault of sacct -c if spaces are in the variables.
-- Release held job only with "scontrol release <jobid>" and not by resetting
the job's priority. This is needed to support job arrays better.
-- Correct squeue command not to merge jobs with state pending and completing
together.
-- Fix issue where user is requesting --acctg-freq=0 and no memory limits.
-- Fix issue with GrpCPURunMins if a job's timelimit is altered while the job
is running.
-- Temporary fix for handling our typemap for the perl api with newer perl.
-- Fix controller segfault when AllowGroups is set to a bad group.
-- Handle node ranges better when dealing with accounting max node limits.
-- Update configure to set correct version without having to run autogen.sh
* Changes in Slurm 14.03.1
==========================
-- Add support for job std_in, std_out and std_err fields in Perl API.
-- Add "Scheduling Configuration Guide" web page.
-- BGQ - fix check for jobinfo when it is NULL
-- Do not check cleaning on "pending" steps.
-- task/cgroup plugin - Fix for building on older hwloc (v1.0.2).
-- In the PMI implementation by default don't check for duplicate keys.
Set the SLURM_PMI_KVS_DUP_KEYS environment variable if you want the code
to check for
duplicate keys.
-- Permit user root to propagate resource limits higher than the hard limit
slurmd has on that compute node (i.e. raise both current and maximum
limits).
-- Fix issue with license used count when doing an scontrol reconfig.
-- Fix the PMI iterator to not report duplicated keys.
-- Fix issue with sinfo when -o is used without the %P option.
-- Rather than immediately invoking an execution of the scheduling logic on
every event type that can enable the execution of a new job, queue its
execution. This permits faster execution of some operations, such as
modifying large counts of jobs, by executing the scheduling logic less
frequently, but still in a timely fashion.
-- If an environment variable is longer than MAX_ENV_STRLEN, don't
set it in the job env, otherwise the exec() fails.
-- Optimize scontrol hold/release logic for job arrays.
-- Modify srun to report an exit code of zero rather than nine if some tasks
exit with a return code of zero and others are killed with SIGKILL. Only an
exit code of zero did this.
-- Avoid slurmctld crash getting job info if detail_ptr is NULL.
-- Fix sacctmgr add user where both defaultaccount and accounts are specified.
-- Added SchedulerParameters option of max_sched_time to limit how long the
main scheduling loop can execute for.
-- Added SchedulerParameters option of sched_interval to control how frequently
the main scheduling loop will execute.
-- Move start time of main scheduling loop timeout after locks are acquired.
-- Add squeue job format option of "%y" to print a job's nice value.
-- Update scontrol update jobID logic to operate on entire job arrays.
-- Fix PrologFlags=Alloc to run the prolog on each of the nodes in the
allocation instead of just the first.
-- Fix race condition if a step is starting while the slurmd is being
restarted.
-- Make sure a job's prolog has run before starting a step.
-- BGQ - Fix invalid memory read when using DefaultConnType in the
bluegene.conf
-- Make sure we send node state to the DBD on clean start of controller.
-- Fix some sinfo and squeue sorting anomalies due to differences in data
types.
-- Only send message back to slurmctld when PrologFlags=Alloc is used on a
Cray/ALPS system, otherwise use the slurmd to wait on the prolog to gate
the start of the step.
-- Remove need to check PrologFlags=Alloc in slurmd since we can tell if prolog
has run yet or not.
-- Fix squeue to use a correct macro to check job state.
-- BGQ - Fix incorrect logic issues if MaxBlockInError=0 in the bluegene.conf.
-- priority/basic - Insure job priorities continue to decrease when jobs are
submitted with the --nice option.
-- Make the PrologFlag=Alloc work on batch scripts
-- Make PrologFlag=NoHold (automatically sets PrologFlag=Alloc) not hold in
salloc/srun, instead wait in the slurmd when a step hits a node and the
prolog is still running.
-- Added --cpu-freq=highm1 (high minus one) option.
-- Expand StdIn/Out/Err string length output by "scontrol show job" from 128
to 1024 bytes.
-- squeue %F format will now print the job ID for non-array jobs.
-- Use quicksort for all priority based job sorting, which improves performance
significantly with large job counts.
-- If a job has already been released from a held state ignore successive
release requests.
-- Fix srun/salloc/sbatch man pages for the --no-kill option.
-- Add squeue -L/--licenses option to filter jobs by license names.
-- Handle abort job on node on front end systems without core dumping.
-- Fix dependency support for job arrays.
-- When updating jobs verify the update request is not identical to
the current settings.
-- When sorting jobs and priorities are equal sort by job_id.
-- Do not overwrite existing reason for node being down or drained.
-- Requeue batch job if Munge is down and credential can not be created.
-- Make _slurm_init_msg_engine() tolerate bug in bind() returning a busy
ephemeral port.
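One way to tolerate that is a bounded retry around bind(); a generic sketch
of the pattern (not Slurm's actual code):
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Try to bind fd to port, retrying a few times if the port is busy. */
    static int bind_with_retry(int fd, uint16_t port, int retries)
    {
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        for (int i = 0; i <= retries; i++) {
            if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) == 0)
                return 0;
            if (errno != EADDRINUSE)
                break;          /* unrelated failure: give up */
            sleep(1);           /* port reported busy: wait and retry */
        }
        return -1;
    }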
-- Don't block scheduling of entire job array if it could run in multiple
partitions.
-- Introduce a new debug flag Protocol to print protocol requests received
together with the remote IP address and port.
-- CRAY - Set up the network even when only using 1 node.
-- CRAY - Greatly reduce the number of error messages produced from the task
plugin and provide more information in the message.
-- job_submit/lua: Fix invalid memory reference if script returns error message
for user.
-- Add logic to sleep and retry if slurm.conf can't be read.
-- Reset a node's CpuLoad value at least once each SlurmdTimeout seconds.
-- Scheduler enhancements for reservations: When a job needs to run in
reservation, but can not due to busy resources, then do not block all jobs
in that partition from being scheduled, but only the jobs in that
reservation.
-- Export "SLURM*" environment variables from sbatch even if --export=NONE.
-- When recovering node state if the Slurm version is 2.6 or 2.5 set the
protocol version to be SLURM_2_5_PROTOCOL_VERSION which is the minimum
supported version.
-- Update the scancel man page documenting the -s option.
-- Update sacctmgr man page documenting how to modify account's QOS.
-- Fix for sjstat which currently does not print >1TB memory values correctly.
-- Change xmalloc()/xfree() to malloc()/free() in hostlist.c for better
performance.
-- Update squeue.1 man page describing the SPECIAL_EXIT state.
-- Added scontrol option of errnumstr to return error message given a slurm
error number.
-- If srun is invoked with the --multi-prog option, but no task count, then use
the task count provided in the MPMD configuration file.
-- Prevent sview abort on some systems when adding or removing columns to the
display for nodes, jobs, partitions, etc.
-- Add job array hash table for improved performance.
-- Make AccountingStorageEnforce=all not include nojobs or nosteps.
-- Added sacctmgr mod qos set RawUsage=0.
-- Modify hostlist functions to accept more than two numeric ranges (e.g.
"row[1-3]rack[0-8]slot[0-63]")
-- Run job scheduling logic immediately when nodes enter service.
-- Added sbatch '--parsable' option to output only the job id number and the
cluster name separated by a semicolon. Errors will still be displayed.
-- Added failure management "slurmctld/nonstop" plugin.
-- Prevent jobs being killed when a checkpoint plugin is enabled or disabled.
-- Update the documentation about SLURM_PMI_KVS_NO_DUP_KEYS environment
variable.
-- select/cons_res bug fix for range of node counts with --cpus-per-task
option (e.g. "srun -N2-3 -c2 hostname" would allocate 2 CPUs on the first
node and 0 CPUs on the second node).
-- Change reservation flags field from 16 to 32-bits.
-- Add reservation flag value of "FIRST_CORES".
-- Added the idea of Resources to the database. Framework for handling
license servers outside of Slurm.
-- When starting the slurmctld only send past job/node state information to
accounting if running for the first time (should speed up startup
dramatically on systems with lots of nodes or lots of jobs).
-- Make job array expressions more flexible to accept multiple step counts in
the expression (e.g. "--array=1-10:2,50-60:5,123").
-- switch/cray - add state save/restore logic tracking allocated ports.
-- SchedulerParameters - Replace max_job_bf with bf_max_job_start (both will
work for now).
-- Add SchedulerParameters options of preempt_reorder_count and
preempt_strict_order.
-- Make memory types in acct_gather uint64_t to handle systems with more than
4TB of memory on them.
-- BGQ - --export=NONE option for srun to make it so only the SLURM_JOB_ID
and SLURM_STEP_ID env vars are set.
-- Munge plugins - Add sleep between retries if can't connect to socket.
-- Added DebugFlags value of "License".
-- Added --enable-developer which will give you -Werror when compiling.
-- Fix for job request with GRES count of zero.
-- Job array dependency logic: Cache results for major performance improvement.
-- Modify squeue to support filter on job states Special_Exit and Resizing.
-- Defer purging job record until after EpilogSlurmctld completes.
-- Fix handling of RPCs from a 14.03 slurmctld to a 2.6 slurmd.
* Changes in Slurm 14.03.0pre6
==============================
-- Modify slurmstepd to log messages according to the LogTimeFormat
parameter in slurm.conf.
-- Insure that overlapping reservations do not oversubscribe available
licenses.
-- Added core specialization logic to select/cons_res plugin.
-- Added whole_node field to job_resources structure and enable gang scheduling
for jobs with core specialization.
-- When using FastSchedule = 1 the nodes with less than configured resources
are no longer set DOWN, they are set to DRAIN instead.
-- Modified 'sacctmgr show associations' command to show GrpCPURunMins
by default.
-- Replace the hostlist_push() function with a more efficient
hostlist_push_host().
-- Modify the reading of Lustre file system statistics to print more
information when debugging and when I/O errors occur.
-- Add specialized core count field to job credential data.
NOTE: This changes the communications protocol from other pre-releases of
version 14.03. All programs must be cancelled and daemons upgraded from
previous pre-releases of version 14.03. Upgrades from version 2.6 or earlier
can take place without loss of jobs
-- Add version number to node and front-end configuration information visible
using the scontrol tool.
-- Add idea of a RESERVED flag for node state so idle resources are marked
not "idle" when in a reservation.
-- Added core specialization plugin infrastructure.
-- Added new job_submit/throttle plugin to control the rate at which a user
can submit jobs.
-- CRAY - added network performance counters option.
-- Allow scontrol suspend/resume to accept jobid in the format jobid_taskid
to suspend/resume array elements.
-- In the slurmctld job record, split "shared" variable into "share_res" (share
resource) and "whole_node" fields.
-- Fix the format of SLURM_STEP_RESV_PORTS. It was generated incorrectly
when using the hostlist_push_host function and input surrounded by [].
-- Modify the srun --slurmd-debug option to accept debug string tags
(quiet, fatal, error, info, verbose) besides the numerical values.
-- Fix the bug where --cpu_bind=map_cpu is interpreted as mask_cpu.
-- Update the documentation regarding the state of cpu frequencies after
a step using --cpu-freq completes.
-- CRAY - Fix issue when a job is requeued and nhc is still running as it is
being scheduled to run again. This would erase the previous job info
that was still needed to clean up the nodes from the previous job run.
(Bug 526).
-- Set SLURM_JOB_PARTITION environment variable for all job allocations.
-- Set SLURM_JOB_PARTITION environment variable for Prolog program.
-- Added SchedulerParameters option of partition_job_depth to limit scheduling
logic depth by partition.
-- Handle the case in which errno is not reset to 0 after calling
getgrent_r(), which causes the controller to core dump.
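The defensive pattern is to reset errno before the call and rely on the
return value and result pointer rather than on errno afterwards; a generic
sketch (not the slurmctld code):
    #define _GNU_SOURCE
    #include <errno.h>
    #include <grp.h>
    #include <stdio.h>

    static void list_groups(void)
    {
        struct group grp, *result;
        char buf[4096];

        setgrent();
        for (;;) {
            errno = 0;      /* reset before the call, never assume it after */
            if (getgrent_r(&grp, buf, sizeof(buf), &result) || !result)
                break;      /* end of entries or a real error */
            printf("%s (gid %u)\n", grp.gr_name, (unsigned) grp.gr_gid);
        }
        endgrent();
    }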
-- Added squeue format option of "%X" (core specialization count).
-- Added core specialization web page (just a start for now).
-- Added the SLURM_ARRAY_JOB_ID and SLURM_ARRAY_TASK_ID
-- Fix bug in job step allocation failing due to memory limit.
-- Modify the pbsnodes script to reflect its output on a TORQUE system.
-- Add ability to clear a node's DRAIN flag using scontrol or sview by setting
it's state to "UNDRAIN". The node's base state (e.g. "DOWN" or "IDLE") will
not be changed.
-- Modify the output of 'scontrol show partition' by displaying
DefMemPerCPU=UNLIMITED and MaxMemPerCPU=UNLIMITED when these limits are
configured as 0.
-- mpirun-mic - Major re-write of the command wrapper for Xeon Phi use.
-- Add new configuration parameter of AuthInfo to specify port used by
authentication plugin.
-- Fixed conditional RPM compiling.
-- Corrected slurmstepd ident name when logging to syslog.
-- Fixed sh5util loop when there are no node-step files.
-- Add SLURM_CLUSTER_NAME to environment variables passed to PrologSlurmctld,
Prolog, EpilogSlurmctld, and Epilog
-- Add the idea of running a prolog right when an allocation happens
instead of when running on the node for the first time.
-- If a user runs 'scontrol reconfig' and hostnames or the host count change,
the slurmctld throws a fatal error.
-- gres.conf - Add "NodeName" specification so that a single gres.conf file
can be used for a heterogeneous cluster.
-- Add flag to accounting RPC to indicate if job data is packed or not.
-- After all srun tasks have terminated on a node close the stdout/stderr
channel with the slurmstepd on that node.
-- In case of i/o error with slurmstepd log an error message and abort the
job.
-- Add --test-only option to sbatch command to validate the script and options.
The response includes expected start time and resources to be allocated.
==============================
-- Remove the ThreadID documentation from slurm.conf. This functionality has
been obsoleted by the LogTimeFormat.
-- Sched plugins - rename global and plugin functions names for consistency
with other plugin types.
-- BGQ - Added RebootQOSList option to bluegene.conf to allow an implicit
reboot of a block if only jobs in the list are running on it when cnodes
go into a failure state.
-- Correct task count of pending job steps.
-- Improve limit enforcement for jobs, set RLIMIT_RSS, RLIMIT_AS and/or
RLIMIT_DATA to enforce memory limit.
-- Pending job steps will have step_id of INFINITE rather than NO_VAL and
will be reported as "TBD" by scontrol and squeue commands.
-- Add logic so PMI_Abort or PMI2_Abort can propagate an exit code.
-- Added SlurmdPlugstack configuration parameter.
-- Added PriorityFlag DEPTH_OBLIVIOUS to have the depth of an association
not affect its priority.
-- Multi-thread the sinfo command (one thread per partition).
-- Added sgather tool to gather files from a job's compute nodes into a
central location.
-- Added configuration parameter FairShareDampeningFactor to offer a greater
priority range based upon utilization.
-- Change MaxArraySize and job's array_task_id from 16-bit to 32-bit field.
Additional Slurm enhancements will be required to support larger job arrays.
-- Added -S/--core-spec option to salloc, sbatch and srun commands to reserve
specialized cores for system use. Modify scontrol and sview to get/set
the new field. No enforcement exists yet for these new options.
struct job_info / slurm_job_info_t: Added core_spec
struct job_descriptor / job_desc_msg_t: Added core_spec
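A minimal C sketch of setting the new field when submitting a batch job; the
other submit fields shown are illustrative values, not requirements:
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <slurm/slurm.h>
    #include <slurm/slurm_errno.h>

    int main(void)
    {
        job_desc_msg_t desc;
        submit_response_msg_t *resp = NULL;

        slurm_init_job_desc_msg(&desc);
        desc.name      = "core-spec-test";
        desc.min_nodes = 1;
        desc.core_spec = 2;          /* new field: reserve 2 specialized cores */
        desc.script    = "#!/bin/sh\nsleep 30\n";
        desc.user_id   = getuid();
        desc.group_id  = getgid();

        if (slurm_submit_batch_job(&desc, &resp) != SLURM_SUCCESS) {
            slurm_perror("slurm_submit_batch_job");
            return 1;
        }
        printf("Submitted job %u\n", resp->job_id);
        slurm_free_submit_response_response_msg(resp);
        return 0;
    }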
-- Do not set SLURM_NODEID environment variable on front-end systems.
-- Convert bitmap functions to use int32_t instead of int in data structures
and function arguments. This is to reliably enable use of bitmaps containing
up to 4 billion elements. Several data structures containing index values
were also changed from data type int to int32_t:
- Struct job_info / slurm_job_info_t: Changed exc_node_inx, node_inx, and
req_node_inx from type int to type int32_t
- job_step_info_t: Changed node_inx from type int to type int32_t
- Struct partition_info / partition_info_t: Changed node_inx from type int
to type int32_t
- block_job_info_t: Changed cnode_inx from type int to type int32_t
- block_info_t: Changed ionode_inx and mp_inx from type int to type int32_t
- Struct reserve_info / reserve_info_t: Changed node_inx from type int to
type int32_t
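Callers should declare these index arrays as int32_t as well. The sketch
below assumes the conventional layout of node_inx (start/end index pairs
terminated by -1):
    #include <stdint.h>
    #include <stdio.h>
    #include <slurm/slurm.h>

    static void print_node_index_ranges(slurm_job_info_t *job)
    {
        /* Assumed layout: start/end pairs into the node table, -1 terminated */
        for (int i = 0; job->node_inx && (job->node_inx[i] != -1); i += 2) {
            int32_t first = job->node_inx[i];
            int32_t last  = job->node_inx[i + 1];
            printf("job %u uses node table entries %d through %d\n",
                   job->job_id, first, last);
        }
    }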
-- Modify qsub wrapper output to match torque command output, just print the
job ID rather than "Submitted batch job #"
-- Change Slurm error string for ESLURM_MISSING_TIME_LIMIT from
"Missing time limit" to
"Time limit specification required, but not provided"
-- Change salloc job_allocate error message header from
"Failed to allocate resources" to
"Job submit/allocate failed"
-- Modify slurmctld message retry logic to support Cray cold-standby SDB.
-- Added "JobAcctGatherParams" configuration parameter. Value of "NoShare"
disables accounting for shared memory.
-- Added fields to "scontrol show job" output: boards_per_node,
sockets_per_board, ntasks_per_node, ntasks_per_board, ntasks_per_socket,
ntasks_per_core, and nice.
-- Add squeue output format options for job command and working directory
(%o and %Z respectively).
-- Add stdin/out/err to sview job output.
-- Add new job_state of JOB_BOOT_FAIL for job terminations due to failure to
boot its allocated nodes or BlueGene block.
-- CRAY - Add SelectTypeParameters NHC_NO_STEPS and NHC_NO which will disable
the node health check script for steps and allocations respectively.
-- Reservation with CoreCnt: Avoid possible invalid memory reference.
-- Add new error code for attempt to create a reservation with duplicate name.
-- Validate that a hostlist file contains text (i.e. not a binary).
-- switch/generic - propagate switch information from srun down to slurmd and
slurmstepd.
-- CRAY - Do not package Slurm's libpmi or libpmi2 libraries. The Cray version
of those libraries must be used.
-- Added a new option to the scontrol command to view licenses that are
configured, in use and available: 'scontrol show licenses'.
-- MySQL - Made Slurm compatible with MySQL 5.6.
==============================
-- Add task pointer to the task_post_term() function in task plugins. The
terminating task's PID is available in task->pid.
-- Defer sending SIGKILL signal to processes while core dump in progress.
-- Added JobContainerPlugin configuration parameter and plugin infrastructure.
-- Added partition configuration parameters AllowAccounts, AllowQOS,
DenyAccounts and DenyQOS.
-- The rpmbuild option for a cray system with ALPS has changed from
%_with_cray to %_with_cray_alps.
-- The log file timestamp format can now be selected at runtime via the
LogTimeFormat configuration option. See the slurm.conf and slurmdbd.conf
man pages for details.
-- Added switch/generic plugin to convey a job's network topology.
-- BLUEGENE - If block is in 'D' state or has more cnodes in error than
MaxBlockInError set the job wait reason appropriately.
-- API use: Generate an error return rather than fatal error and exit if the
configuration file is absent or invalid. This will permit Slurm APIs to be
more reliably used by other programs.
-- Add support for load-based scheduling, allocate jobs to nodes with the
largest number of available CPUs. Added SelectTypeParameters value of
"CR_LLN" and partition parameter of "LLN=yes|no".
-- Added job_info() and step_info() functions to the gres plugins to extract
plugin specific fields from the job's or step's GRES data structure.
-- Added sbatch --signal option of "B:" to signal the batch shell rather than
only the spawned job steps.
-- Added sinfo and squeue format option of "%all" to print all fields available
for the data type with a vertical bar separating each field.
-- Add mechanism for job_submit plugin to generate error message for srun,
salloc or sbatch to stderr. New argument added to job_submit function in
the plugin.
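A rough sketch of a plugin's job_submit() callback using the new err_msg
argument; the plugin boilerplate symbols, header paths and the xstrdup()
helper follow common slurmctld plugin conventions and are assumptions here:
    #include <slurm/slurm.h>
    #include <slurm/slurm_errno.h>
    #include "src/common/xstring.h"

    const char plugin_name[] = "Job submit example plugin";
    const char plugin_type[] = "job_submit/example";
    const uint32_t plugin_version = 100;    /* assumed version constant */

    extern int job_submit(struct job_descriptor *job_desc, uint32_t submit_uid,
                          char **err_msg)
    {
        if (job_desc->time_limit == NO_VAL) {
            /* The returned string is reported on the client's stderr */
            *err_msg = xstrdup("a time limit is required, use --time");
            return ESLURM_INVALID_TIME_LIMIT;
        }
        return SLURM_SUCCESS;
    }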
-- Add StdIn, StdOut, and StdErr paths to job information dumped with
"scontrol show job".
-- Permit Slurm administrator to submit a batch job as any user.
-- Set a job's RLIMIT_AS limit based upon its memory limit and VSizeFactor
configuration value.
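Not Slurm's actual code, but a sketch of the described computation (memory
limit in MB scaled by VSizeFactor, taken as a percentage, applied as an
address-space cap):
    #include <stdint.h>
    #include <sys/resource.h>

    static int apply_vsize_limit(uint64_t mem_limit_mb, uint16_t vsize_factor_pct)
    {
        struct rlimit rl;

        if (!mem_limit_mb || !vsize_factor_pct)
            return 0;                         /* nothing configured, no cap */

        /* bytes = MB * 1024 * 1024, scaled by the VSizeFactor percentage */
        rl.rlim_cur = rl.rlim_max =
            (rlim_t) (mem_limit_mb * 1024 * 1024 / 100 * vsize_factor_pct);
        return setrlimit(RLIMIT_AS, &rl);
    }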
-- Remove Postgres plugins
-- Make jobacct_gather/cgroup work correctly and also make all jobacct_gather
plugins more maintainable.
-- Proctrack/pgid - Add support for proctrack_p_plugin_get_pids() function.
-- Sched/backfill - Change default max_job_bf parameter from 50 to 100.
-- Added -I|--item-extract option to sh5util to extract data item from series.
* Changes in Slurm 2.6.10
=========================
-- Switch/nrt - On switch resource allocation failure, free partial allocation.
-- Switch/nrt - Properly track usage of CAU and RDMA resources with multiple
tasks per compute node.
-- Fix issue where user is requesting --acctg-freq=0 and no memory limits.
-- BGQ - Temp fix issue where job could be left on job_list after it finished.
-- BGQ - Fix issue where limits were checked on midplane counts instead of
cnode counts.
-- BGQ - Move code to only start job on a block after limits are checked.
-- Handle node ranges better when dealing with accounting max node limits.
-- Fix perlapi to compile correctly with perl 5.18
-- BGQ - Fix issue with uninitialized variable.
-- Correct sinfo --sort fields to match documentation: E -> Reason,
H -> Reason Time (new), R -> Partition Name, u/U -> Reason user (new)
-- If an invalid assoc_ptr comes in don't use the id to verify it.
-- Sched/backfill modified to avoid using nodes in completing state.
-- Correct support for job --profile=none option and related documentation.
-- Properly enforce job --requeue and --norequeue options.
-- If a job --mem-per-cpu limit exceeds the partition or system limit, then
scale the job's memory limit and CPUs per task to satisfy the limit.
* Changes in Slurm 2.6.9
========================
-- Fix sinfo to work correctly with draining/mixed nodes as well as filtering
on Mixed state.
-- Fix sacctmgr update user with no "where" condition.
-- Fix logic bugs for SchedulerParameters option of max_rpc_cnt.
* Changes in Slurm 2.6.8
========================
-- Add support for Torque/PBS job array options and environment variables.
-- Fix issue where jobs still pending after a reservation would remain
in waiting reason ReqNodeNotAvail.
-- Update last_job_update when a job's state_reason was modified.
-- Free job_ptr->state_desc where ever state_reason is set.
-- Fixed sacct.1 and srun.1 manual pages which contained a hyphen where
a minus sign for options was intended.
-- sinfo - Make sure that if a partition name is long and is the default,
the last char doesn't get chopped off.
-- task/affinity - Protect against zero divide when simulating more hardware
than you really have.
-- NRT - Fix issue with 1 node jobs. It turns out the network does need to
be set up for 1 node jobs.
-- Fix recovery of job dependency on task of job array when slurmctld restarts.
-- mysql - Fix invalid memory reference.
-- Lock the /cgroup/freezer subsystem when creating files for tracking processes.
-- Fix preempt/partition_prio to avoid preempting jobs in partitions with
PreemptMode=OFF
-- launch/poe - Implicitly set --network in job step create request as needed.
-- Permit multiple batch job submissions to be made for each run of the
scheduler logic if the job submissions occur at nearly the same time.
-- Fix issue where associations weren't correct if backup takes control and
new associations were added since it was started.
-- Fix race condition in a corner case with backup slurmctld.
-- With the backup slurmctld make sure we reinit beginning values in the
slurmdbd plugin.
-- Fix sinfo to work correctly with draining/mixed nodes.
-- MySQL - Fix it so a lock isn't held unnecessarily.
-- Added new SchedulerParameters option of max_rpc_cnt when too many RPCs
are active.
-- BGQ - Fix deny_pass to work correctly.
-- BGQ - Fix sub block steps using a block when the block has passthrough's
in it.
* Changes in Slurm 2.6.7
========================
-- Properly enforce a job's cpus-per-task option when a job's allocation is
constrained on some nodes by the mem-per-cpu option.
-- Correct the slurm.conf man pages and checkpoint_blcr.html page
describing that jobs must be drained from the cluster before deploying
-- Fix issue where if using munge and munge wasn't running and a slurmd
needed to forward a message, the slurmd would core dump.
-- Update srun.1 man page documenting the PMI2 support.
-- Fix slurmctld core dump when a job gets its QOS updated but there
is not a corresponding association.
-- If a job requires specific nodes and can not run due to those nodes being
busy, the main scheduling loop will block those specific nodes rather than
the entire queue/partition.
-- Fix minor memory leak when updating a job's name.
-- Fix minor memory leak when updating a reservation on a partition using "ALL"
nodes.
-- Fix minor memory leak when adding a reservation with a nodelist and core
count.
-- Update sacct man page description of job states.
-- BGQ - Fix minor memory leak when selecting blocks that can't immediately be
placed.
-- Fixed minor memory leak in backfill scheduler.
-- MYSQL - Fixed memory leak when querying clusters.
-- MYSQL - Fix when updating QOS on an association.
-- NRT - Fix to supply correct error messages to poe/pmd when a launch fails.
-- Add SLURM_STEP_ID to Prolog environment.
-- Add support for SchedulerParameters value of bf_max_job_start that limits
the total number of jobs that can be started in a single iteration of the
backfill scheduler.
-- Don't print negative number when dealing with large memory sizes with
sacct.
-- Fix sinfo output so that host in state allocated and mixed will not be
merged together.
-- GRES: Avoid crash if GRES configuration is inconsistent.
-- Make S_SLURM_RESTART_COUNT item available to SPANK.
-- Munge plugins - Add sleep between retries if can't connect to socket.
-- Fix the database query to return all pending jobs in a given time interval.
-- switch/nrt - Correct logic to get dynamic window count.
-- Remove need to use job->ctx_params in the launch plugin, just to simplify
code.
-- NRT - Fix possible memory leak if using multiple adapters.
-- NRT - Fix issue where there are more than NRT_MAXADAPTERS on a system.
-- NRT - Increase Max number of adapters from 8 -> 9
-- NRT - Initialize missing variables when the PMD is starting a job.
-- NRT - Fix issue where hosts were launched out of numerical order, which
would cause pmds to hang.
-- NRT - Change xmalloc's to malloc just to be safe.
-- NRT - Sanity check to make sure a jobinfo is there before packing.
-- Add missing options to the print of TaskPluginParam.
-- Fix a couple of issues with scontrol reconfig and adding nodes to
slurm.conf. Rebooting daemons after adding nodes to the slurm.conf
is highly recommended.
* Changes in Slurm 2.6.6
========================
-- sched/backfill - Fix bug that could result in failing to reserve resources
for high priority jobs.
-- Correct job RunTime if requeued from suspended state.
-- Reset job priority from zero (held) on manual resume from suspend state.
-- If FastSchedule=0 then do not DOWN a node with low memory or disk size.
-- Update sshare.1 man page making it consistent with sacctmgr.1.
-- Do not reset a job's priority when the slurmctld restarts if previously
set to some specific value.
-- sview - Fix regression where the Node tab wasn't able to add/remove columns.
-- Fix slurmstepd lock when job terminates inside the infiniband
network traffic accounting plugin.
-- Correct the documentation to read filesystem instead of Lustre. Update
the srun help.
-- Fix the acct_gather_filesystem_lustre.c to compute the Lustre accounting
data correctly accumulating differences between sampling intervals.
Fix the data structure mismatch between acct_gather_filesystem_lustre.c
and slurm_jobacct_gather.h which caused the hdf5 plugin to log incorrect
data.
-- Don't allow PMI_TIME to be zero which will cause floating exception.
-- Fix purging of old reservation errors in database.
-- MYSQL - If starting the plugin and the database isn't up attempt to
connect in a loop instead of producing a fatal.
-- BLUEGENE - If IONodesPerMP changes in bluegene.conf recalculate bitmaps
based on ionode count correctly on slurmctld restart.
-- Fix step allocation when some CPUs are not available due to memory limits.
This happens when one step is active and using memory that blocks the
scheduling of another step on a portion of the CPUs needed. The new step
is now delayed rather than aborting with "Requested node configuration is
not available".
-- Make sure node limits get assessed if no node count was given in request.
-- Removed obsolete slurm_terminate_job() API.
-- Update documentation about QOS limits
-- Retry task exit message from slurmstepd to srun on message timeout.
-- Correction to logic reserving all nodes in a specified partition.
-- Added support for selecting AMD GPU by setting GPU_DEVICE_ORDINAL env var.
-- Properly enforce GrpSubmit limit for job arrays.
-- CRAY - fix issue with using CR_ONE_TASK_PER_CORE
-- CRAY - fix memory leak when using accelerators
* Changes in Slurm 2.6.5
========================
-- Correction to hostlist parsing bug introduced in v2.6.4 for hostlists with
more than one numeric range in brackets (e.g. "rack[0-3]_blade[0-63]").
-- Add notification if using proctrack/cgroup and task/cgroup when oom hits.
-- Corrections to advanced reservation logic with overlapping jobs.
-- job_submit/lua - add cpus_per_task field to those available.
-- Add cpu_load to the node information available using the Perl API.
-- Correct a job's GRES allocation data in accounting records for non-Cray
systems.
-- Substantial performance improvement for systems with Shared=YES or FORCE
and large numbers of running jobs (replace bubble sort with quick sort).
-- proctrack/cgroup - Add locking to prevent race condition where one job step
is ending for a user or job at the same time another job step is starting
and the user or job container is deleted from under the starting job step.
-- Fixed sh5util loop when there are no node-step files.
-- Fix race condition on batch job termination that could result in a job exit
code of 0xfffffffe if the slurmd on node zero registers its active jobs at
the same time that slurmstepd is recording the job's exit code.
-- Correct logic returning remaining job dependencies in job information
reported by scontrol and squeue. Eliminates vestigial descriptors with
no job ID values (e.g. "afterany").
-- Improve performance of REQUEST_JOB_INFO_SINGLE RPC by removing unnecessary
locks and use hash function to find the desired job.
-- jobcomp/filetxt - Reopen the file when slurmctld daemon is reconfigured
or gets SIGHUP.
-- Remove notice of CVE with very old/deprecated versions of Slurm in
news.html.
-- Fix if hwloc_get_nbobjs_by_type() returns zero core count (set to 1).
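The defensive pattern, sketched directly against the hwloc API:
    #include <hwloc.h>

    static int usable_core_count(hwloc_topology_t topo)
    {
        int cores = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_CORE);

        if (cores < 1)      /* some topologies report no CORE objects */
            cores = 1;      /* fall back to one core rather than zero */
        return cores;
    }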
-- Added ApbasilTimeout parameter to the cray.conf configuration file.
-- Handle in the API if parts of the node structure are NULL.
-- Fix srun hang when IO fails to start at launch.
-- Fix for GRES bitmap not matching the GRES count resulting in abort
(requires manual resetting of GRES count, changes to gres.conf file,
and slurmd restarts).
-- Modify sview to better support job arrays.
-- Modify squeue to support longer job ID values (for many job array tasks).
-- Fix race condition in authentication credential creation that could corrupt
memory. (NOTE: This race condition has existed since 2003 and would be
exceedingly rare.)
-- Slurmstepd variable initialization - Without this patch, free() is called
on a random memory location (i.e. whatever is on the stack), which can
result in slurmstepd dying and a completed job not being purged in a
timely fashion.
-- Fix slurmstepd race condition when separate threads are reading and
modifying the job's environment, which can result in the slurmstepd failing
with an invalid memory reference.
-- Fix erroneous error messages when running gang scheduling.
-- Fix minor memory leak.
-- scontrol modified to suspend, resume, hold, uhold, or release multiple
jobs in a space separated list.
-- Minor debug error when a connection goes away at the end of a job.
-- Validate return code from calls to slurm_get_peer_addr
-- BGQ - Fix issues with making sure all cnodes are accounted for when multiple
steps cause multiple cnodes in one allocation to go into error at the
same time.
-- scontrol show job - Correct NumNodes value calculated based upon job
specifications.
-- BGQ - Fix issue if user runs multiple sub-block jobs inside a multiple
midplane block that starts on a higher coordinate than it ends (i.e. if a
block has midplanes [0010,0013] 0013 is the start even though it is
listed second in the hostlist).
-- BGQ - Add midplane to the total_cnodes used in the runjob_mux plugin
for better debug.
-- Update AllocNodes paragraph in slurm.conf.5.
* Changes in Slurm 2.6.4
========================
-- Honor ntasks-per-node option with exclusive node allocations.
-- sched/backfill - Prevent invalid memory reference if bf_continue option is
configured and slurm is reconfigured during one of the sleep cycles or if
there are any changes to the partition configuration or if the normal
scheduler runs and starts a job that the backfill scheduler is actively
working on.
-- Update man pages information about acct-freq and JobAcctGatherFrequency
to reflect only the latest supported format.
-- Minor document update to include note about PrivateData=Usage for the
slurm.conf when using the DBD.
-- Expand information reported with DebugFlags=backfill.
-- Initiate jobs pending to run in a reservation as soon as the reservation
becomes active.
-- Purge expired reservations even if they have pending jobs.
-- Corrections to calculation of a pending job's expected start time.
-- Remove some vestigial logic treating job priority of 1 as a special case.
-- Free memory at daemon shutdown to avoid minor memory leaks.
-- Updated documentation to give correct units being displayed.
-- Report AccountingStorageBackupHost with "scontrol show config".
-- init scripts ignore quotes around Pid file name specifications.
-- Fixed typo about command case in quickstart.html.
-- task/cgroup - handle new cpuset files, similar to commit c4223940.
-- Replace the tempname() function call with mkstemp().
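mkstemp() creates and opens the file in one step, avoiding the race between
generating a name and opening it; a small usage sketch:
    #include <stdlib.h>
    #include <unistd.h>

    static int open_temp_file(char *template)   /* e.g. "/tmp/slurm_XXXXXX" */
    {
        int fd = mkstemp(template);             /* template modified in place */

        if (fd < 0)
            return -1;
        /* ... write through fd, then close(fd) and unlink(template) ... */
        return fd;
    }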
-- Fix for --cpu_bind=map_cpu/mask_cpu/map_ldom/mask_ldom plus
--mem_bind=map_mem/mask_mem options, broken in 2.6.2.
-- Restore default behavior of allocating cores to jobs on a cyclic basis
across the sockets unless SelectTypeParameters=CR_CORE_DEFAULT_DIST_BLOCK
or user specifies other distribution options.
-- Enforce JobRequeue configuration parameter on node failure. Previously
always requeued the job.
-- acct_gather_energy/ipmi - Add delay before retry on read error.
-- select/cons_res with GRES and multiple threads per core, fix possible
infinite loop.
-- proctrack/cgroup - Add cgroup create retry logic in case one step is
starting at the same time as another step is ending and the logic to create
and delete cgroups overlaps.
-- Improve setting of job wait "Reason" field.
-- Correct sbatch documentation and job_submit/pbs plugin "%j" is job ID,
not "%J" (which is job_id.step_id).
-- Improvements to sinfo performance, especially for large numbers of
partitions.
-- SlurmdDebug - Permit changes to slurmd debug level with "scontrol reconfig"
-- smap - Avoid invalid memory reference with hidden nodes.
-- Fix sacctmgr modify qos set preempt+/-=.
-- BLUEGENE - fix issue where node count wasn't set up correctly when srun
performs the allocation, regression in 2.6.3.
-- Add support for dependencies of job array elements (e.g.
"sbatch --depend=afterok:123_4 ...") or all elements of a job array (e.g.
"sbatch --depend=afterok:123 ...").
-- Add support for new options in sbatch qsub wrapper:
-W block=true (wait for job completion)
Clear PBS_NODEFILE environment variable
-- Fixed the MaxSubmitJobsPerUser limit in QOS which limited submission of
a job too early.
-- sched/wiki, sched/wiki2 - Fix to work with logic changes introduced in
version 2.6.3 that prevented Maui/Moab from starting jobs.
-- Updated the QOS limits documentation and man page.
* Changes in Slurm 2.6.3
========================
-- Add support for some new #PBS options in sbatch scripts and qsub wrapper:
-l accelerator=true|false (GPU use)
-l mpiprocs=# (processors per node)
-l naccelerators=# (GPU count)
-l select=# (node count)
-l ncpus=# (task count)
-v key=value (environment variable)
-W depend=opts (job dependencies, including "on" and "before" options)
-W umask=# (set job's umask)
-- Added qalter and qrerun commands to torque package.
-- Corrections to qstat logic: job CPU count and partition time format.
-- Add job_submit/pbs plugin to translate PBS job dependency options to the
extent possible (no support for PBS "before" options) and set some PBS
environment variables.
-- Add spank/pbs plugin to set a bunch of PBS environment variables.
-- Backported sh5util from master to 2.6 as there are some important
bugfixes and the new item extraction feature.
-- select/cons_res - Correct MaxCPUsPerNode partition constraint for CR_Socket.
-- scontrol - for setdebugflags command, avoid parsing "-flagname" as an
scontrol command line option.
-- Fix issue with step accounting if a job is requeued.
-- Close file descriptors on exec of prolog, epilog, etc.
-- Fix issue when a user has held a job and then sets the begin time
into the future.
-- Scontrol - Enable changing a job's stdout file.
-- Fix issues where memory or node count of a srun job is altered while the
srun is pending. The step creation would use the old values and possibly
hang srun since the step wouldn't be able to be created in the modified
allocation.
-- Add support for new SchedulerParameters value of "bf_max_job_part", the
maximum depth the backfill scheduler should go in any single partition.
-- acct_gather/infiniband plugin - Correct packets_in/out values.
-- BLUEGENE - Don't ignore a conn-type request from the user.
-- BGQ - Force a request on a Q for a MESH to be a TORUS in a dimension that
can only be a TORUS (1).
-- Change max message length from 100MB to 1GB before generating "Insane
message length" error.
-- sched/backfill - Prevent possible memory corruption due to use of
bf_continue option and long running scheduling cycle (pending jobs could
have been cancelled and purged).
-- CRAY - Fix AcceleratorAllocation depth for Basil 1.3.
-- Created the environment variable SLURM_JOB_NUM_NODES for srun jobs and
updated the srun man page.
-- BLUEGENE/CRAY - Don't set env variables that pertain to a node when Slurm
isn't doing the launching.
-- gres/gpu and gres/mic - Do not treat the existence of an empty gres.conf
file as a fatal error.
-- Fixed parsing of the days-0:min time specification when hours are
specified as 0.
-- Subtract the PMII_COMMANDLEN_SIZE in contribs/pmi2/pmi2_api.c to prevent
certain implementations of snprintf() from segfaulting.
* Changes in Slurm 2.6.2
========================
-- Fix issue with reconfig and GrpCPURunMins
-- Fix of wrong node/job state problem after reconfig
-- Allow users who are coordinators to update their own limits in the accounts
they are coordinators over.
-- BackupController - Make sure we have a connection to the DBD first thing
to avoid it thinking we don't have a cluster name.
-- Correct value of min_nodes returned by loading job information to consider
the job's task count and maximum CPUs per node.
-- If running jobacct_gather/none fix issue on unpacking step completion.
-- Reservation with CoreCnt: Avoid possible invalid memory reference.
-- sjstat - Add man page when generating rpms.
-- Make sure GrpCPURunMins is added when creating a user, account or QOS with
sacctmgr.
-- Fix for invalid memory reference due to multiple free calls caused by
job arrays submitted to multiple partitions.
-- Enforce --ntasks-per-socket=1 job option when allocating by socket.
-- Validate permissions of key directories at slurmctld startup. Report
anything that is world writable.
-- Improve GRES support for CPU topology. Previous logic would pick CPUs then
reject jobs that can not match GRES to the allocated CPUs. New logic first
filters out CPUs that can not use the GRES, next picks CPUs for the job,
and finally picks the GRES that best match those CPUs.
-- Switch/nrt - Prevent invalid memory reference when allocating single adapter
per node of specific adapter type
-- CRAY - Make Slurm work with CLE 5.1.1
-- Fix segfault if submitting to multiple partitions and holding the job.
-- Use MAXPATHLEN instead of the hardcoded value 1024 for maximum file path
lengths.
-- If OverTimeLimit is defined do not declare failed those jobs that ended
in the OverTimeLimit interval.
* Changes in Slurm 2.6.1
========================
-- slurmdbd - Allow a job's derived exit code and comments to be modified
by non-root users.
-- Fix issue with job name being truncated to 24 chars when sending a mail
message.
-- Fix minor issues with spec file, missing files and including files
erroneously on a bluegene system.
-- sacct - fix --name and --partition options when using
accounting_storage/filetxt.
-- squeue - Remove extra whitespace of default printout.
-- BGQ - added head ppcfloor as an include dir when building.
-- BGQ - Better debug messages in runjob_mux plugin.
-- PMI2 Updated the Makefile.am to build a versioned library.
-- CRAY - Fix srun --mem_bind=local option with launch/aprun.
-- PMI2 Corrected buffer size computation in the pmi2_api.c module.
-- GRES accounting data wrong in database: gres_alloc, gres_req, and gres_used
fields were empty if the job was not started immediately.
-- Fix sbatch and srun task count logic when --ntasks-per-node specified,
but no explicit task count.
-- Corrected the hdf5 profile user guide and the acct_gather.conf
documentation.
-- IPMI - Fix Math bug getting new wattage.
-- Corrected the AcctGatherProfileType documentation in slurm.conf
-- Corrected the sh5util program to print the header in the csv file
only once, set the debug messages at debug() level, make the argument
check case insensitive and avoid printing duplicate \n.
-- If energy values cannot be collected, send a message to the controller
to drain the node and log an error in the slurmd log file.
-- Handle complete removal of CPURunMins time at the end of the job instead
of at multifactor poll.
-- sview - Add missing debug_flag options.
-- PGSQL - Notes about Postgres functionality being removed in the next
version of Slurm.
-- MYSQL - fix issue when rolling up usage and events happened when a cluster
was down (slurmctld not running) during that time period.
-- sched/wiki2 - Insure that Moab gets current CPU load information.
-- Prevent infinite loop in parsing configuration if including file containing
one blank line.
-- Fix pack and unpack between 2.6 and 2.5.
-- Fix job state recovery logic in which a job's accounting frequency was
not set. This would result in a value of 65534 seconds being used (the
equivalent of NO_VAL in uint16_t), which could result in the job being
requeued or aborted.
-- Validate a job's accounting frequency at submission time rather than
waiting for its initiation to possibly fail.
-- Fix CPURunMins if a job is requeued from a failed launch.
-- Fix in accounting_storage/filetxt to correct start times which sometimes
could end up before the job started.
-- Fix issue with potentially referencing past the end of an array in
parse_time().
-- CRAY - fix issue with accelerators on a cray when parsing BASIL 1.3 XML.
-- Fix issue with a 2.5 slurmstepd locking up when talking to a 2.6 slurmd.
-- Add argument to priority plugin's priority_p_reconfig function to note
when the association and QOS used_cpu_run_secs field has been reset.
* Changes in Slurm 2.6.0
========================
-- Fix it so bluegene and serial systems don't get warnings over new NODEDATA
enum.
-- When a job is aborted send a message for any tasks that have completed.
-- Correction to memory per CPU calculation on system with threads and
allocating cores or sockets.
-- Requeue batch job if its node reboots (used to abort the job).
-- Enlarge maximum size of srun's hostlist file.
-- IPMI - Fix first poll to get correct consumed_energy for a step.
-- Correction to job state recovery logic that could result in assert failure.
-- Record partial step accounting record if allocated nodes fail abnormally.
-- Accounting - fix issue where PrivateData=jobs or users could potentially
show information to users that had no associations on the system.
-- Make PrivateData in slurmdbd.conf case insensitive.
-- sacct/sstat - Add format option ConsumedEnergyRaw to print full energy
values.
* Changes in Slurm 2.6.0rc2
===========================
-- HDF5 - Fix issue with Ubuntu where HDF5 development headers are
overwritten by the parallel versions, thus making it so we need to handle
both cases.
-- ACCT_GATHER - handle suspending correctly for polling threads.
-- Make SLURM_DISTRIBUTION env var hold both types of distribution if
specified.
-- Remove hardcoded /usr/local from slurm.spec.
-- Modify slurmctld locking to improve performance under heavy load with
very large numbers of batch job submissions or job cancellations.
-- sstat - Fix issue so that if -j isn't given, the last argument is checked
for the job/step id.
-- IPMI - fix adjustment on poll when using EnergyIPMICalcAdjustment.
* Changes in Slurm 2.6.0rc1
===========================
-- Added helper script for launching symmetric and MIC-only MPI tasks within
SLURM (in contribs/mic/mpirun-mic).
-- Change maximum delay for state save from 2 secs to 5 secs. Make timeout
configurable at build time by defining SAVE_MAX_WAIT.
-- Modify slurmctld data structure locking to interleave read and write
locks rather than always favor write locks over read locks.
-- Added sacct format option of "ALL" to print all fields.
-- Deprecate the SchedulerParameters value of "interval"; use "bf_interval"
instead as documented.
-- Add acct_gather_profile/hdf5 to profile jobs with hdf5
-- Added MaxCPUsPerNode partition configuration parameter. This can be
especially useful to schedule systems with GPUs.
-- Permit "scontrol reboot_node" for nodes in MAINT reservation.
-- Added "PriorityFlags" value of "SMALL_RELATIVE_TO_TIME". If set, the job's
size component will be based upon not the job size alone, but the job's
size divided by its time limit.
-- Added sbatch option "--ignore-pbs" to ignore "#PBS" options in the batch
script.
-- Rename slurm_step_ctx_params_t field from "mem_per_cpu" to "pn_min_memory".
Job step now accepts memory specification in either per-cpu or per-node
basis.
-- Add ability to specify host repetition count in the srun hostfile (e.g.
"host1*2" is equivalent to "host1,host1").
* Changes in Slurm 2.6.0pre3
============================
-- Add milliseconds to default log message header (both RFC 5424 and ISO 8601
time formats). Disable milliseconds logging using the configure
parameter "--disable-log-time-msec". Default time format changes to
ISO 8601 (without time zone information). Specify "--enable-rfc5424time"
to restore the time zone information.
-- Add username (%u) to the filename pattern in the batch script.
-- Added options for front end nodes of AllowGroups, AllowUsers, DenyGroups,
and DenyUsers.
-- Fix sched/backfill logic to initiate jobs whose maximum time limit is over
the partition limit but whose minimum time limit permits them to start.
-- gres/gpu - Fix for gres.conf file with multiple files on a single line
using a slurm expression (e.g. "File=/dev/nvidia[0-1]").
-- Replaced ipmi.conf with generic acct_gather.conf file for all acct_gather
plugins. For those doing development to use this follow the model set
forth in the acct_gather_energy_ipmi plugin.
-- Added more options to update a step's information
-- Add DebugFlags=ThreadID which will print the thread id of the calling
thread.
-- CRAY - Allocate whole node (CPUs) in reservation despite what the
user requests. We have found any srun/aprun afterwards will work on a
subset of resources.
-- Do not purge inactive interactive jobs that lack a port to ping (added
for MR+ operation).
-- Advanced reservations with hostname and core counts now support asymmetric
reservations (e.g. a different core count for each node).
-- Added slurmctld/dynalloc plugin for MapReduce+ support.
-- Added "DynAllocPort" configuration parameter.
-- Added partition parameter of SelectTypeParameters to override system-wide
value.
-- Added cr_type to partition_info data structure.
-- Added allocated memory to node information available (within the existing
select_nodeinfo field of the node_info_t data structure). Added Allocated
Memory to node information displayed by sview and scontrol commands.
-- Make sched/backfill the default scheduling plugin rather than sched/builtin
(FIFO).
-- Added support for a job having different priorities in different partitions.
-- Added new SchedulerParameters configuration parameter of "bf_continue"
which permits the backfill scheduler to continue considering jobs for
backfill scheduling after yielding locks even if new jobs have been
submitted. This can result in lower priority jobs from being backfill
scheduled instead of newly arrived higher priority jobs, but will permit
more queued jobs to be considered for backfill scheduling.
-- Added support to purge reservation records from accounting.
-- Cray - Add support for Basil 1.3
* Changes in SLURM 2.6.0pre1
============================
-- Add "state" field to job step information reported by scontrol.
-- Notify srun to retry step creation upon completion of other job steps
rather than polling. This results in much faster throughput for job step
execution with --exclusive option.
-- Added "ResvEpilog" and "ResvProlog" configuration parameters to execute a
program at the beginning and end of each reservation.
-- Added "slurm_load_job_user" function. This is a variation of
"slurm_load_jobs", but accepts a user ID argument, potentially resulting
in substantial performance improvement for "squeue --user=ID"
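A minimal sketch of this per-user variant; the exact argument order of
slurm_load_job_user() shown here is an assumption, so check slurm.h:
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <slurm/slurm.h>
    #include <slurm/slurm_errno.h>

    int main(void)
    {
        job_info_msg_t *jobs = NULL;

        if (slurm_load_job_user(&jobs, getuid(), SHOW_ALL) != SLURM_SUCCESS) {
            slurm_perror("slurm_load_job_user");
            return 1;
        }
        for (uint32_t i = 0; i < jobs->record_count; i++)
            printf("%u %s\n", jobs->job_array[i].job_id,
                   jobs->job_array[i].name);
        slurm_free_job_info_msg(jobs);
        return 0;
    }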
-- Added "slurm_load_node_single" function. This is a variation of
"slurm_load_nodes", but accepts a node name argument, potentially resulting
in substantial performance improvement for "sinfo --nodes=NAME".
-- Added "HealthCheckNodeState" configuration parameter identify node states
on which HealthCheckProgram should be executed.
-- Remove sacct --dump --formatted-dump options which were deprecated in
2.5.
-- Added support for job arrays (phase 1 of effort). See "man sbatch" option
-a/--array for details.
-- Add new AccountingStorageEnforce options of 'nojobs' and 'nosteps' which will
allow the use of accounting features like associations, qos and limits but
not keep track of jobs or steps in accounting.
-- Cray - Add new cray.conf parameter of "AlpsEngine" to specify the
communication protocol to be used for ALPS/BASIL.
-- select/cons_res plugin: Correction to CPU allocation count logic for
cores without hyperthreading.
-- Added new SelectTypeParameter value of "CR_ALLOCATE_FULL_SOCKET".
-- Added PriorityFlags value of "TICKET_BASED" and merged priority/multifactor2
plugin into priority/multifactor plugin.
-- Add "KeepAliveTime" configuration parameter controlling how long sockets
used for srun/slurmstepd communications are kept alive after disconnect.
-- Added SLURM_SUBMIT_HOST to salloc, sbatch and srun job environment.
-- Added SLURM_ARRAY_TASK_ID to environment of job array.
-- Added squeue --array/-r option to optimize output for job arrays.
-- Added "SlurmctldPlugstack" configuration parameter for generic stack of
slurmctld daemon plugins.
-- Removed contribs/arrayrun tool. Use native support for job arrays.
-- Modify default installation locations for RPMs to match "make install":
_prefix /usr/local
_slurm_sysconfdir %{_prefix}/etc/slurm
_mandir %{_prefix}/share/man
_infodir %{_prefix}/share/info
-- Add acct_gather_energy/ipmi which works off freeipmi for energy gathering
* Changes in Slurm 2.5.8
========================
-- Fix for slurmctld segfault on NULL front-end reason field.
-- Avoid gres step allocation errors when a job shrinks in size due to either
down nodes or explicit resizing. Generated slurmctld errors of this type:
"step_test ... gres_bit_alloc is NULL"
-- Fix bug that would leak memory and overwrite the AllowGroups field on
   "scontrol reconfig" when AllowNodes was manually changed using scontrol.
-- Get html/man files to install in correct places with rpms.
-- Remove --program-prefix from spec file since it appears to be added by
default and appeared to break other things.
-- Updated the automake min version in autogen.sh to be correct.
-- Select/cons_res - Correct total CPU count allocated to a job with
--exclusive and --cpus-per-task options
-- switch/nrt - Don't allocate network resources unless job step has 2+ nodes.
-- select/cons_res - Avoid extraneous "oversubscribe" error messages.
-- Reorder get config logic to avoid deadlock.
-- Enforce QOS MaxCPUsMin limit when job submission contains no user-specified
time limit.
-- EpilogSlurmctld pthread is passed required arguments rather than a pointer
to the job record, which under some conditions could be purged and result
in an invalid memory reference.
* Changes in Slurm 2.5.7
========================
-- Fix for linking to the select/cray plugin to not give warning about
undefined variable.
-- Add missing symbols to the xlator.h
-- Avoid placing pending jobs in AdminHold state due to backfill scheduler
interactions with advanced reservation.
-- Accounting - make average by task not cpu.
-- CRAY - Change logging of transient ALPS errors from error() to debug().
-- POE - Correct logic to support poe option "-euidevice sn_all" and
"-euidevice sn_single".
-- Accounting - Fix minor initialization error.
-- POE - Correct logic to support srun network instances count with POE.
-- POE - With the srun --launch-cmd option, report proper task count when
the --cpus-per-task option is used without the --ntasks option.
-- POE - Fix logic binding tasks to CPUs.
-- sview - Fix race condition where new information could have slipped past
   the node tab and we didn't notice.
-- Accounting - Fix an invalid memory read when slurmctld sends data about
-- If a prolog or epilog failure occurs, drain the node rather than setting it
down and killing all of its jobs.
-- Priority/multifactor - Avoid underflow in half-life calculation.
-- POE - pack missing variable to allow fanout (more than 32 nodes)
-- Prevent clearing reason field for pending jobs. This bug was introduced in
v2.5.5 (see "Reject job at submit time ...").
-- BGQ - Fix issue with preemption on sub-block jobs where a job would kill
all preemptable jobs on the midplane instead of just the ones it needed to.
-- switch/nrt - Validate dynamic window allocation size.
-- BGQ - When --geo is requested do not impose the default conn_types.
-- RebootNode logic - Defers (rather than forgets) reboot request with job
running on the node within a reservation.
-- switch/nrt - Correct network_id use logic. Correct support for user sn_all
and sn_single options.
-- sched/backfill - Modify logic to reduce overhead under heavy load.
-- Fix job step allocation with --exclusive and --hostlist option.
-- Select/cons_res - Fix bug resulting in error of "cons_res: sync loop not
progressing, holding job #"
-- checkpoint/blcr - Reset max_nodes from zero to NO_VAL on job restart.
-- launch/poe - Fix for hostlist file support with repeated host names.
-- priority/multifactor2 - Prevent possible divide by zero.
-- srun - Don't check for executable if --test-only flag is used.
-- energy - On a single node only use the last task for gathering energy.
Since we don't currently track energy usage per task (only per step).
Otherwise we get double the energy.
-- Gres accounting - Fix regression in 2.5.5 for keeping track of gres
requested and allocated.
* Changes in Slurm 2.5.5
========================
-- Fix for sacctmgr add qos to handle the 'flags' option.
-- Export SLURM_ environment variables from sbatch, even if "--export"
option does not explicitly list them.
-- If node is in more than one partition, correct counting of allocated CPUs.
-- If step requests more CPUs than possible in specified node count of job
allocation then return ESLURM_TOO_MANY_REQUESTED_CPUS rather than
ESLURM_NODES_BUSY and retrying.
-- CRAY - Fix SLURM_TASKS_PER_NODE to be set correctly.
-- Accounting - more checks for strings with a possible `'` in it.
-- sreport - Fix by adding planned down time to utilization reports.
-- Do not report an error when sstat identifies job steps terminated during
its execution, but log using debug type message.
-- Select/cons_res - Permit node removed from job by going down to be returned
to service and re-used by another job.
-- Select/cons_res - Tighter packing of job allocations on sockets.
-- SlurmDBD - fix to allow user root along with the slurm user to register a
cluster.
-- Select/cons_res - Fix for support of consecutive node option.
-- Select/cray - Modify build to enable direct use of libslurm library.
-- Bug fixes related to job step allocation logic.
-- Cray - Disable enforcement of MaxTasksPerNode, which is not applicable
with launch/aprun.
-- Accounting - When rolling up data from past usage, ignore "idle" time from
   a reservation when it has the "Ignore_Jobs" flag set. Since jobs could run
   on the reservation's nodes outside of the reservation, time could
   otherwise be counted twice.
-- Accounting - Minor fix to avoid reuse of variable erroneously.
-- Reject job at submit time if the node count is invalid. Previously such a
job submitted to a DOWN partition would be queued.
-- Purge vestigial job scripts when the slurmd cold starts or slurmstepd
terminates abnormally.
-- Add sanity check for NULL cluster names trying to register.
-- BGQ - Push action 'D' info to scontrol for admins.
-- Reset a job's reason from PartitionDown when the partition is set up.
-- BGQ - Handle issue where blocks would have a pending job on them and
while it was free cnodes would go into software error and kill the job.
-- BGQ - Fix issue where if for some reason we are freeing a block with
a pending job on it we don't kill the job.
-- BGQ - Fix race condition where a job could have been removed from a block
   without it still existing there. This is extremely rare.
-- BGQ - Fix for when a step completes in Slurm before the runjob_mux notifies
the slurmctld there were software errors on some nodes.
-- BGQ - Fix issue on state recovery if block states are not around and,
   when reading in state from DB2, we find a block that can't be created.
   You can now do a clean start to get rid of the bad block.
-- Modify slurmdbd to retransmit to slurmctld daemon if it is not responding.
-- BLUEGENE - Fix issue where, when doing backfill, preemptable jobs were
   never considered when determining the eligibility of a backfillable job.
-- Cray/BlueGene - Disable srun --pty option unless LaunchType=launch/slurm.
-- CRAY - Fix sanity check for systems with more than 32 cores per node.
-- CRAY - Remove other objects from MySQL query that are available from
the XML.
-- BLUEGENE - Set the geometry of a job when a block is picked and the job
isn't a sub-block job.
-- Cray - avoid check of macro versions of CLE for version 5.0.
-- CRAY - Fix memory issue with reading in the cray.conf file.
-- CRAY - If hostlist is given with srun make sure the node count is the same
as the hosts given.
-- CRAY - If task count specified, but no tasks-per-node, then set the tasks
per node in the BASIL reservation request.
-- CRAY - fix issue with --mem option not giving correct amount of memory
per cpu.
-- CRAY - Fix if srun --mem is given outside an allocation to set the
APRUN_DEFAULT_MEMORY env var for aprun. This scenario will not display
the option when used with --launch-cmd.
-- Change sview to use GMutex instead of GStaticMutex
-- CRAY - set APRUN_DEFAULT_MEMORY instead of CRAY_AUTO_APRUN_OPTIONS
-- sview - fix issue where if a partition was completely in one state the
   cpu count would not be reflected correctly.
-- BGQ - fix for handling half rack system in STATIC or OVERLAP mode to
   implicitly create full system block.
-- CRAY - Dynamically create BASIL XML buffer to resize as needed.
-- Fix checking if QOS limit MaxCPUMinsPJ is set along with DenyOnLimit to
deny the job instead of holding it.
-- Make sure on systems that use a different launcher than launch/slurm not
to attempt to signal tasks on the frontend node.
-- Cray - when a step is requested count other steps running on nodes in the
allocation as taking up the entire node instead of just part of the node
allocated. And always enforce exclusive on a step request.
-- Cray - display correct nodelist, node/cpu count on steps.
* Changes in Slurm 2.5.4
========================
-- Fix bug in PrologSlurmctld use that would block job steps until node
   responds.
-- CRAY - If a partition has MinNodes=0 and a batch job doesn't request
   nodes, set the allocation to 1 instead of 0, which would prevent the
   allocation from happening.
-- Better debug when the database is down and using the --cluster option in
the user commands.
-- When asking for job states with sacct, default to 'now' instead of midnight
of the current day.
-- Fix for handling a test-only job or immediate job that fails while being
built.
-- Comment out all of the logic in the job_submit/defaults plugin. The logic
is only an example and not meant for actual use.
-- Eliminate configuration file 4096 character line limitation.
-- More robust logic for tree message forward
-- BGQ - When cnodes fail in a timeout fashion correctly look up parent
midplane.
-- Correct sinfo "%c" (node's CPU count) output value for Bluegene systems.
-- Backfill - Responsiveness improvements for systems with large numbers of
   jobs (>5000) and using the SchedulerParameters option bf_max_job_user.
-- slurmstepd: ensure that IO redirection openings from/to files correctly
handle interruption
-- BGQ - Able to handle when midplanes go into Hardware::SoftwareFailure
-- GRES - Correct tracking of specific resources used after slurmctld restart.
Counts would previously go negative as jobs terminate and decrement from
a base value of zero.
-- Fix for priority/multifactor2 plugin to not assert when configured with
--enable-debug.
-- Select/cons_res - If the job request specified --ntasks-per-socket and the
   allocation is using cores, then pack the tasks onto the sockets up to the
   specified value.
-- BGQ - If a cnode goes into an 'error' state and the block containing the
cnode does not have a job running on it do not resume the block.
-- BGQ - Handle blocks that don't free themselves in a reasonable time better.
-- BGQ - Fix for signaling steps when allocation ends before step.
-- Fix for backfill scheduling logic with job preemption; starts more jobs.
-- xcgroup - remove bugs with EINTR management in write calls
-- jobacct_gather - fix total values to not always == the max values.
-- Fix for handling node registration messages from older versions without
energy data.
-- BGQ - Allow user to request full dimensional mesh.
-- sdiag command - Correction to jobs started value reported.
-- Prevent slurmctld assert when invalid change to reservation with running
jobs is made.
-- BGQ - If signal is NODE_FAIL allow forward even if job is completing
and timeout in the runjob_mux trying to send in this situation.
-- BGQ - More robust checking for correct node, task, and ntasks-per-node
options in srun, and push that logic to salloc and sbatch.
-- GRES topology bug in core selection logic fixed.
-- Fix init.d script handling for querying status so that it does not
   return 1 on success.
* Changes in SLURM 2.5.3
========================
-- Gres/gpu plugin - If no GPUs requested, set CUDA_VISIBLE_DEVICES=NoDevFiles.
This bug was introduced in 2.5.2 for the case where a GPU count was
configured, but without device files.
-- task/affinity plugin - Fix bug in CPU masks for some processors.
-- Modify sacct command to get format from SACCT_FORMAT environment variable.
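   For example (sketch; any valid sacct format field names can be listed):
      export SACCT_FORMAT="JobID,JobName,Elapsed,State"
      sacct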
-- BGQ - Changed order of library inclusions and fixed incorrect declaration
to compile correctly on newer compilers
-- Fix for not building sview if glib exists on a system but not the gtk libs.
-- BGQ - Fix for handling a job cleanup on a small block if the job has long
since left the system.
-- Fix race condition in job dependency logic which can result in invalid
memory reference.
* Changes in SLURM 2.5.2
========================
-- Fix advanced reservation recovery logic when upgrading from version 2.4.
-- BLUEGENE - fix for QOS/Association node limits.
-- Add missing "safe" flag from print of AccountingStorageEnforce option.
-- Fix logic to optimize GRES topology with respect to allocated CPUs.
-- Add job_submit/all_partitions plugin to set a job's default partition
to ALL available partitions in the cluster.
-- Modify switch/nrt logic to permit build without libnrt.so library.
-- Handle srun task launch failure without duplicate error messages or abort.
-- Fix bug in QoS limits enforcement when slurmctld restarts and user not yet
added to the QOS list.
-- Fix issue where sjstat and sjobexitmod was installed in 2 different RPMs.
-- Fix for job request of multiple partitions in which some partitions lack
nodes with required features.
-- Permit a job to use a QOS its owner does not have access to if an
   administrator manually set the job's QOS (previously the job would be
   rejected).
-- Make more variables available to job_submit/lua plugin: slurm.MEM_PER_CPU,
slurm.NO_VAL, etc.
-- Fix topology/tree logic when nodes defined in slurm.conf get re-ordered.
-- In select/cons_res, correct logic to allocate whole sockets to jobs. Work
by Magnus Jonsson, Umea University.
-- In select/cons_res, correct logic when job removed from only some nodes.
-- Avoid apparent kernel bug in Linux 2.6.32 which appears to be solved in
   at least 3.5.0. This avoids a stack overflow when running jobs on
   more than 120k nodes.
-- BLUEGENE - If we made a block that isn't runnable because of an
   overlapping block, destroy it correctly.
-- Switch/nrt - Dynamically load libnrt.so from within the plugin as needed.
This eliminates the need for libnrt.so on the head node.
-- BLUEGENE - Fix in reservation logic that could cause abort.
* Changes in SLURM 2.5.1
========================
-- Correction to hostlist sorting for hostnames that contain two numeric
components and the first numeric component has various sizes (e.g.
"rack9blade1" should come before "rack10blade1")
-- BGQ - Only poll on initialized blocks instead of calling getBlocks on
each block independently.
-- Fix of task/affinity plugin logic for Power7 processors having hyper-
threading disabled (cpu mask has gaps).
-- Fix of job priority ordering with sched/builtin and priority/multifactor.
-- CRAY - Fix for setting up the aprun for a large job (+2000 nodes).
-- Fix for race condition related to compute node boot resulting in node being
set down with reason of "Node <name> unexpectedly rebooted"
-- RAPL - Fix for handling errors when opening msr files.
-- BGQ - Fix for salloc/sbatch to do the correct allocation when asking for
-N1 -n#.
-- BGQ - in emulation make it so we can pretend to run large jobs (>64k nodes)
-- BLUEGENE - Correct method to update conn_type of a job.
-- BLUEGENE - Fix issue with preemption when needing to preempt multiple jobs
to make one job run.
-- Fixed issue where if an srun dies abnormally inside of an allocation it
   would have also killed the allocation.
-- FRONTEND - fixed issue where if a system's nodes weren't defined in the
   slurm.conf with NodeAddrs, signals going to a step could be handled
   incorrectly.
-- If sched/backfill starts a job with a QOS having NO_RESERVE and no job
   time limit, start it with the partition time limit (or one year if the
   partition has no time limit) rather than NO_VAL (140 year time limit).
-- Alter hostlist logic to allocate large grid dynamically instead of on
stack.
-- Change RPC version checks to support version 2.5 slurmctld with version 2.4
slurmd daemons.
-- Correct core reservation logic for use with select/serial plugin.
-- Exit scontrol command on stdin EOF.
-- Disable job --exclusive option with select/serial plugin.
* Changes in SLURM 2.5.0
========================
-- Add DenyOnLimit flag for QOS to deny jobs at submission time if they
request resources that reach a 'Max' limit.
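   A possible sacctmgr sketch (the QOS name "normal" is hypothetical):
      sacctmgr modify qos normal set flags=DenyOnLimit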
-- Permit SlurmUser or operator to change QOS of non-pending jobs (e.g.
running jobs).
-- BGQ - move initial poll to beginning of realtime interaction, which will
also cause it to run if the realtime server ever goes away.
-- Modify sbcast logic to survive slurmd daemon restart while a file
   transmission is in progress.
-- Add retry logic to munge encode/decode calls. This is needed if the munge
   daemon is under very heavy load (e.g. with 1000 slurmd daemons per compute
   node).
-- Add launch and acct_gather_energy plugins to RPMs.
-- Restore support for srun "--mpi=list" option.
-- CRAY - Introduce step accounting for a Cray.
-- Modify srun to abandon I/O 60 seconds after the last task ends. Otherwise
an aborted slurmstepd can cause the srun process to hang indefinitely.
-- ENERGY - RAPL - alter code to close open files (and only open them once
where needed)
-- If the PrologSlurmctld fails, then requeue the job an indefinite number
of times instead of only one time.
-- Added Prolog and Epilog Guide (web page). Based upon work by Jason Sollom,
Cray Inc. and used by permission.
-- Restore gang scheduling functionality. Preemptor was not being scheduled.
Fix for bugzilla #3.
-- Add "cpu_load" to node information. Populate CPULOAD in node information
reported to Moab cluster manager.
-- Preempt jobs only when insufficient idle resources exist to start job,
regardless of the node weight.
-- Added priority/multifactor2 plugin based upon ticket distribution system.
Work by Janne Blomqvist, Aalto University.
-- Add SLURM_NODELIST to environment variables available to Prolog and Epilog.
-- Permit reservations to allow or deny access by account and/or user.
-- Add ReconfigFlags value of KeepPartState. See "man slurm.conf" for details.
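   For example, in slurm.conf (sketch only):
      ReconfigFlags=KeepPartState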
-- Modify the task/cgroup plugin adding a task_pre_launch_priv function and
move slurmstepd outside of the step's cgroup. Work by Matthieu Hautreux.
-- Intel MIC processor support added using gres/mic plugin. BIG thanks to
Olli-Pekka Lehto, CSC-IT Center for Science Ltd.
-- Accounting - Change empty jobacctinfo structs to not actually be used;
   instead of putting 0's into the database we put NO_VALs and have sacct
   figure out that jobacct_gather wasn't used.
-- Cray - Prevent calling basil_confirm more than once per job using a flag.
-- Fix bug with topology/tree and job with min-max node count. Now try to
get max node count rather than minimizing leaf switches used.
-- Add AccountingStorageEnforce=safe option to provide method to avoid jobs
launching that wouldn't be able to run to completion because of a
GrpCPUMins limit.
-- Add support for RFC 5424 timestamps in logfiles. Disable with configuration
option of "--disable-rfc5424time". By Janne Blomqvist, Aalto University.
-- CRAY - Replace srun.pl with launch/aprun plugin to use srun to wrap the
aprun process instead of a perl script.
-- srun - Rename --runjob-opts to --launcher-opts to be used on systems other
than BGQ.
-- Added new DebugFlags - Energy for AcctGatherEnergy plugins.
-- Start deprecation of the sacct --dump and --fdump options.
-- BGQ - added --verbose=OFF when srun --quiet is used
-- Added acct_gather_energy/rapl plugin to record power consumption by job.
Work by Yiannis Georgiou, Martin Perry, et. al., Bull
* Changes in SLURM 2.5.0.pre3
=============================
-- Add Google search to all web pages.
-- Add sinfo -T option to print reservation information. Work by Bill Brophy,
Bull.
-- Force slurmd exit after 2 minute wait, even if threads are hung.
-- Change node_req field in struct job_resources from 8 to 32 bits so we can
run more than 256 jobs per node.
-- sched/backfill: Improve accuracy of expected job start with respect to
reservations.
-- sinfo partition field size will be set to the length of the longest
   partition name by default.
-- Make it so parse_time() will return a valid 0 if given epoch time and
   instead set errno == ESLURM_INVALID_TIME_VALUE on error.
-- Correct srun --no-alloc logic when node count exceeds node list or task
   count is not a multiple of the node count. Work by Hongjia Cao, NUDT.
-- Completed integration with IBM Parallel Environment including POE and IBM's
NRT switch library.
* Changes in SLURM 2.5.0.pre2
=============================
-- When running with multiple slurmd daemons per node, enable specifying a
range of ports on a single line of the node configuration in slurm.conf.
-- Add reservation flag of Part_Nodes to allocate all nodes in a partition to
a reservation and automatically change the reservation when nodes are
added to or removed from the reservation. Based upon work by
Bill Brophy, Bull.
-- Add support for advanced reservation for specific cores rather than whole
   nodes. Current limitations: homogeneous cluster, nodes idle when
   reservation created, and no more than one reservation per node. Code is
   still under development. Work by Alejandro Lucero Palau, et. al, BSC.
-- Add DebugFlag of Switch to log switch plugin details.
-- Correct job node_cnt value in job completion plugin when job fails due to
down node. Previously was too low by one.
-- Add new srun option --cpu-freq to enable user control over the job's CPU
   frequency and thus its power consumption. NOTE: cpu frequency is not
   currently preserved for jobs being suspended and later resumed. Work by
   Don Albert, Bull.
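   Possible usage sketch (the frequency value is an arbitrary example and is
   assumed to be given in kilohertz; see the srun man page for details):
      srun --cpu-freq=2400000 -n16 ./a.out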
-- Add node configuration information about "boards" and optimize task
placement on minimum number of boards. Work by Rod Schultz, Bull.
* Changes in SLURM 2.5.0.pre1
=============================
-- Add new output to "scontrol show configuration" of LicensesUsed. Output is
"name:used/total"
-- Changed jobacct_gather plugin infrastructure to be cleaner and easier to
maintain.
-- Change license option count separator from "*" to ":" for consistency with
the gres option (e.g. "--licenses=foo:2 --gres=gpu:2"). The "*" will still
be accepted, but is no longer documented.
-- Permit more than 100 jobs to be scheduled per node (new limit is 250
-- Restructure of srun code to allow outside programs to utilize existing
logic.
* Changes in SLURM 2.4.6
========================
-- Correct WillRun authentication logic when issued for non-job owner.
-- BGQ - Fix to check block for action 'D' if it also has nodes in error.
* Changes in SLURM 2.4.5
========================
-- Cray - On job kill request, send SIGCONT, SIGTERM, wait KillWait and send
   SIGKILL. Previously just sent SIGKILL to tasks.
-- BGQ - Fix issue when running srun outside of an allocation and only
specifying the number of tasks and not the number of nodes.
-- BGQ - validate correct ntasks_per_node
-- BGQ - when srun -Q is given make runjob be quiet
-- Modify use of OOM (out of memory protection) for Linux 2.6.36 kernel
or later. NOTE: If you were setting the environment variable
SLURMSTEPD_OOM_ADJ=-17, it should be set to -1000 for Linux 2.6.36 kernel
or later.
-- BGQ - Fix job step timeout to actually happen when run from within an
   allocation.
-- Reset node MAINT state flag when a reservation's nodes or flags change.
-- Accounting - Fix issue where QOS usage was being zeroed out on a
slurmctld restart.
-- BGQ - Add 64 tasks per node as a valid option for srun when used with
overcommit.
-- BLUEGENE - With Dynamic layout mode - Fix issue where if a larger block
   was already in error and isn't deallocating, and underlying hardware goes
   bad, one could get overlapping blocks in error, making the code assert
   when a new job request comes in.
-- BGQ - handle pending actions on a block better when trying to deallocate it.
-- Accounting - Fixed issue where if nodenames have changed on a system and
you query against that with -N and -E you will get all jobs during that
time instead of only the ones running on -N.
-- Accounting - If a job start message fails to the SlurmDBD reset the db_inx
so it gets sent again. This isn't a major problem since the start will
happen when the job ends, but this does make things cleaner.
-- If an salloc is waiting for an allocation to happen and is canceled by the
user mark the state canceled instead of completed.
-- Fix issue in accounting if a user puts a '\' in their job name.
-- Accounting - Fix so that asking for users or accounts that were deleted
   along with associations gets the deleted associations as well.
-- BGQ - Handle shared blocks that need to be removed and have jobs running
on them. This should only happen in extreme conditions.
-- Fix inconsistency for hostlists that have more than 1 range.
-- BGQ - Add mutex around recovery for the Real Time server to avoid hitting
DB2 so hard.
-- BGQ - If an allocation exists on a block that has a 'D' action on it fail
job on future step creation attempts.
* Changes in SLURM 2.4.4
========================
-- BGQ - minor fix to make build work in emulated mode.
-- BGQ - Fix if large block goes into error and the next highest priority jobs
are planning on using the block. Previously it would fail those jobs
erroneously.
-- BGQ - Fix issue when a cnode goes into an error (not SoftwareError) state
   with a job running or trying to run on it.
-- Execute slurm_spank_job_epilog when there is no system Epilog configured.
-- Fix for srun --test-only to work correctly with time limits.
-- BGQ - If a job goes away while still trying to free it up in the
database, and the job is running on a small block make sure we free up
the correct node count.
-- BGQ - Logic added to make sure a job has finished on a block before it is
purged from the system if its front-end node goes down.
-- Modify strigger so that a filter option of "--user=0" is supported.
-- Correct --mem-per-cpu logic for core or socket allocations with multiple
threads per core.
-- Fix for older systems (glibc < 2.4) to use euidaccess() instead of
   eaccess().
-- BLUEGENE - Do not alter a pending job's node count when changing its
   partition.
-- BGQ - Add functionality to make it so we track the actions on a block.
This is needed for when a free request is added to a block but there are
jobs finishing up so we don't start new jobs on the block since they will
fail on start.
-- BGQ - Fixed InactiveLimit to work correctly to avoid scenarios where a
user's pending allocation was started with srun and then for some reason
the slurmctld was brought down and while it was down the srun was removed.
-- Fixed InactiveLimit math to work correctly
-- BGQ - Add logic to make it so blocks can't use a midplane with a nodeboard
in error for passthrough.
-- BGQ - Make it so if a nodeboard goes in error any block using that midplane
for passthrough gets removed on a dynamic system.
-- BGQ - Fix for printing realtime server debug correctly.
-- BGQ - Cleaner handling of cnode failures when reported through the runjob
interface instead of through the normal method.
-- smap - spread node information across multiple lines for larger systems.
-- Cray - Defer salloc until after PrologSlurmctld completes.
-- Correction to slurmdbd communications failure handling logic, incorrect
error codes returned in some cases.
* Changes in SLURM 2.4.3
========================
-- Accounting - Fix so complete 32 bit numbers can be put in for a priority.
-- cgroups - fix so that if the initial directory is non-existent SLURM
   creates it correctly. Previously the errno wasn't being checked correctly.
-- BGQ - fixed srun when only requesting a task count and not a node count
to operate the same way salloc or sbatch did and assign a task per cpu
by default instead of task per node.
-- Fix salloc --gid to work correctly. Reported by Brian Gilmer
-- BGQ - fix smap to set the correct default MloaderImage
-- Close the batch job's environment file when it contains no data to avoid
leaking file descriptors.
-- Fix sbcast's credential to last till the end of a job instead of the
previous 20 minute time limit. The previous behavior would fail for
large files 20 minutes into the transfer.
-- Return ESLURM_NODES_BUSY rather than ESLURM_NODE_NOT_AVAIL error on job
submit when required nodes are up, but completing a job or in exclusive
job allocation.
-- Add HWLOC_FLAGS so linking to libslurm works correctly
-- BGQ - If using backfill and a shared block is running at least one job
and a job comes through backfill and can fit on the block without ending
jobs don't set an end_time for the running jobs since they don't need to
end to start the job.
-- Initialize bind_verbose when using task/cgroup.
-- BGQ - Fix for handling backfill much better when sharing blocks.
-- BGQ - Fix for making small blocks on first pass if not sharing blocks.
-- BLUEGENE - Remove forcing of the default conn_type, instead leaving NAV
   when none is requested. The block allocator sets it up temporarily so
   this isn't needed.
-- BLUEGENE - Fix deadlock issue when dealing with bad hardware if using
static blocks.
-- Fix to mysql plugin during rollup to only query suspended table when jobs
reported some suspended time.
-- Fix compile with glibc 2.16 (Kacper Kowalik)
-- BGQ - fix for deadlock where a block has an error on it and all jobs
   running on it are preemptable by the scheduling job.
-- proctrack/cgroup: Exclude internal threads from "scontrol list pids".
Patch from Matthieu Hautreux, CEA.
-- Memory leak fixed for select/linear when preempting jobs.
-- Fix if updating begin time of a job to update the eligible time in
accounting as well.
-- BGQ - make it so you can signal steps when signaling the job allocation.
-- BGQ - Remove extra overhead if a large block has many cnode failures.
-- Priority/Multifactor - Fix issue with age factor when a job is estimated to
start in the future but is able to run now.
-- CRAY - update to work with ALPS 5.1
-- BGQ - Handle issue of speed and mutexes when polling instead of using the
realtime server.
-- BGQ - Fix minor sorting issue with sview when sorting by midplanes.
-- Accounting - Fix for handling per user max node/cpus limits on a QOS
correctly for current job.
-- Update documentation for -/+= when updating a reservation's
users/accounts/flags
-- Update pam module to work if using aliases on nodes instead of actual
host names.
-- Correction to task layout logic in select/cons_res for job with minimum
and maximum node count.
-- BGQ - Put final poll after realtime comes back into service to avoid
having the realtime server go down over and over again while waiting
for the poll to finish.
-- task/cgroup/memory - ensure that ConstrainSwapSpace=no is correctly
handled. Work by Matthieu Hautreux, CEA.
-- CRAY - Fix for sacct -N option to work correctly
-- CRAY - Update documentation to describe installation from rpm instead of
   the previous piecemeal method.
-- Fix sacct to work with QOS' that have previously been deleted.
-- Added all available limits to the output of sacctmgr list qos
* Changes in SLURM 2.4.2
========================
-- BLUEGENE - Correct potential deadlock issue when hardware goes bad and
there are jobs running on that hardware.
-- If a job is submitted to more than one partition, its partition pointer
   can be set to an invalid value. This can result in the count of CPUs
   allocated on a node being bad, resulting in over- or under-allocation of
   its CPUs. Patch by Carles Fenoy, BSC.
-- Fix bug in task layout with select/cons_res plugin and --ntasks-per-node
option. Patch by Martin Perry, Bull.
-- BLUEGENE - remove race condition where if a block is removed while waiting
for a job to finish on it the number of unused cpus wasn't updated
correctly.
-- BGQ - make sure we have a valid block when creating or finishing a step
allocation.
-- BLUEGENE - If a large block (> 1 midplane) is in error and underlying
hardware is marked bad remove the larger block and create a block over
just the bad hardware making the other hardware available to run on.
-- BLUEGENE - Handle job completion correctly if an admin removes a block
where other blocks on an overlapping midplane are running jobs.
-- BLUEGENE - correctly remove running jobs when freeing a block.
-- BGQ - correct logic to place multiple (< 1 midplane) steps inside a
multi midplane block allocation.
-- BGQ - Make it possible for a multi midplane allocation to run on more
than 1 midplane but not the entire allocation.
-- BGL - Fix for syncing users on block from Tim Wickberg
-- Fix initialization of protocol_version for some messages to make sure it
   is always set when sending or receiving a message.
-- Reset backfilled job counter only when explicitly cleared using scontrol.
Patch from Alejandro Lucero Palau, BSC.
-- BLUEGENE - Fix for handling blocks when a larger block will not free and
while it is attempting to free underlying hardware is marked in error
making small blocks overlapping with the freeing block. This only
applies to dynamic layout mode.
-- Cray and BlueGene - Do not treat lack of usable front-end nodes when the
   slurmctld daemon starts as a fatal error. Also preserve correct front-end
   node for jobs when there is more than one front-end node and the slurmctld
   daemon restarts.
-- Correct parsing of srun/sbatch input/output/error file names so that only
the name "none" is mapped to /dev/null and not any file name starting
with "none" (e.g. "none.o").
-- BGQ - added version string to the load of the runjob_mux plugin to verify
the current plugin has been loaded when using runjob_mux_refresh_config
-- CGROUPS - Use system mount/umount function calls instead of doing fork
exec of mount/umount from Janne Blomqvist.
-- BLUEGENE - correct start time setup when no jobs are blocking the way.
   Patch from Mark Nelson.
-- Fixed sacct --state=S query to return information about suspended jobs
current or in the past.
-- FRONTEND - Made error warning more apparent if a frontend node isn't
configured correctly.
-- BGQ - update documentation about runjob_mux_refresh_config which works
correctly as of IBM driver V1R1M1 efix 008.
* Changes in SLURM 2.4.1
========================
-- Fix bug in job state change from 2.3 -> 2.4; job state can now be
   preserved correctly when transitioning. This also applies for
   2.4.0 -> 2.4.1, no state will be lost. (Thanks to Carles Fenoy)
* Changes in SLURM 2.4.0
========================
-- Cray - Improve support for zero compute node resource allocations. The
   partition used can now be configured with no nodes.
-- BGQ - make it so srun -i<taskid> works correctly.
-- Fix parse_uint32/16 to complain if a non-digit is given.
-- Add SUBMITHOST to job state passed to Moab via sched/wiki2. Patch by Jon
   Bringhurst (LANL).
-- BGQ - Fix issue when running with AllowSubBlockAllocations=Yes without
compiling with --enable-debug
-- Modify scontrol to require "-dd" option to report batch job's script. Patch
from Don Albert, Bull.
-- Modify SchedulerParameters option to match documentation: "bf_res="
   changed to "bf_resolution=". Patch from Rod Schultz, Bull.
-- Fix bug that clears job pending reason field. Patch from Don Lipari, LLNL.
-- In etc/init.d/slurm move check for scontrol after sourcing
/etc/sysconfig/slurm. Patch from Andy Wettstein, University of Chicago.
-- Fix in scheduling logic that can delay jobs with min/max node counts.
-- BGQ - fix issue so that if a step uses the entire allocation and then the
   next step in the allocation only uses part of the allocation, it gets the
   correct cnodes.
-- BGQ - Fix checking for IO on a block with new IBM driver V1R1M1; the
   previous function didn't always work correctly.
-- BGQ - Fix issue when a nodeboard goes down and you want to combine blocks
to make a larger small block and are running with sub-blocks.
-- BLUEGENE - Better logic for making small blocks around bad nodeboard/card.
-- BGQ - When using an old IBM driver cnodes that go into error because of
a job kill timeout aren't always reported to the system. This is now
handled by the runjob_mux plugin.
-- BGQ - Added information on how to setup the runjob_mux to run as SlurmUser.
-- Improve memory consumption on step layouts with high task count.
-- BGQ - quieter debug when the real time server comes back but there are
   still messages we find when we poll but haven't given back to the real
   time server yet.
-- BGQ - fix for if a request comes in smaller than the smallest block and
we must use a small block instead of a shared midplane block.
-- Fix issues on large jobs (>64k tasks) to have the correct counter type when
packing the step layout structure.
-- BGQ - fix issue so that if a user was asking for tasks and ntasks-per-node
   but not a node count, the node count is correctly figured out.
-- Move logic to always use the 1st alphanumeric node as the batch host for
batch jobs.
-- BLUEGENE - fix race condition where if a nodeboard/card goes down at the
same time a block is destroyed and that block just happens to be the
smallest overlapping block over the bad hardware.
-- Fix bug when querying accounting looking for a job node size.
-- BLUEGENE - fix possible race condition if cleaning up a block and the
removal of the job on the block failed.
-- BLUEGENE - fix issue where if a cable was in an error state, make it so
   we can check if a block is still makable if the cable wasn't in error.
-- Put nodes names in alphabetic order in node table.
-- If a preempted job should have a grace time and the preempt mode is not
   cancel, but the job is going to be canceled because it is interactive or
   for some other reason, it now receives the grace time.
-- BGQ - Modified documents to explain new plugin_flags needed in bg.properties
in order for the runjob_mux to run correctly.
-- BGQ - change linking from libslurm.o to libslurmhelper.la to avoid warning.
-- Improve task binding logic by making fuller use of HWLOC library,
especially with respect to Opteron 6000 series processors. Work contributed
by Komoto Masahiro.
-- Add new configuration parameter PriorityFlags, based upon work by
Carles Fenoy (Barcelona Supercomputer Center).
-- Modify the step completion RPC between slurmd and slurmstepd in order to
eliminate a possible deadlock. Based on work by Matthieu Hautreux, CEA.
-- Change the owner of slurmctld and slurmdbd log files to the appropriate
user. Without this change the files will be created by and owned by the
user starting the daemons (likely user root).
-- Reorganize the slurmstepd logic in order to better support NFS and
Kerberos credentials via the AUKS plugin. Work by Matthieu Hautreux, CEA.
-- Fix bug in allocating GRES that are associated with specific CPUs. In some
cases the code allocated first available GRES to job instead of allocating
GRES accessible to the specific CPUs allocated to the job.
-- spank: Add callbacks in slurmd: slurm_spank_slurmd_{init,exit}
and job epilog/prolog: slurm_spank_job_{prolog,epilog}
-- spank: Add spank_option_getopt() function to api
-- Change resolution of switch wait time from minutes to seconds.
-- Added CrpCPUMins to the output of sshare -l for those using hard limit
accounting. Work contributed by Mark Nelson.
-- Added mpi/pmi2 plugin for complete support of pmi2 including acquiring
additional resources for newly launched tasks. Contributed by Hongjia Cao,
NUDT.
-- BGQ - fixed issue where if a user asked for a specific node count and more
tasks than possible without overcommit the request would be allowed on more
nodes than requested.
-- Add support for new SchedulerParameters of bf_max_job_user, maximum number
of jobs to attempt backfilling per user. Work by Bjørn-Helge Mevik,
University of Oslo.
-- BLUEGENE - fixed issue where MaxNodes limit on a partition only limited
larger than midplane jobs.
-- Added cpu_run_min to the output of sshare --long. Work contributed by
Mark Nelson.
-- BGQ - allow regular users to resolve Rack-Midplane to AXYZ coords.
-- Add sinfo output format option of "%R" for partition name without "*"
appended for default partition.
-- Cray - Add support for zero compute node resource allocation to run batch
   script on front-end node with no ALPS reservation. Useful for pre- or
   post-processing.
-- Support for cyclic distribution of cpus in task/cgroup plugin from Martin
Perry, Bull.
-- GrpMEM limit for QOSes and associations added. Patch from Bjørn-Helge
   Mevik, University of Oslo.
-- Various performance improvements for up to 500% higher throughput depending
upon configuration. Work supported by the Oak Ridge National Laboratory
Extreme Scale Systems Center.
-- Added jobacct_gather/cgroup plugin. It is not advised to use this in
production as it isn't currently complete and doesn't provide an equivalent
substitution for jobacct_gather/linux yet. Work by Martin Perry, Bull.
* Changes in SLURM 2.4.0.pre4
=============================
-- Add logic to cache GPU file information (bitmap index mapping to device
file number) in the slurmd daemon and transfer that information to the
slurmstepd whenever a job step is initiated. This is needed to set the
appropriate CUDA_VISIBLE_DEVICES environment variable value when the
devices are not in strict numeric order (e.g. some GPUs are skipped).
Based upon work by Nicolas Bigaouette.
-- BGQ - Remove ability to make a sub-block with a geometry with one or more
   of its dimensions of length 3. There is a limitation in the IBM I/O
   subsystem that is problematic with multiple sub-blocks with a dimension
   of length 3, so we will disallow them from being created. This means if
   you ask the system for an allocation of 12 c-nodes you will be given 16.
   If this is ever fixed in BGQ this patch can be removed.
-- BLUEGENE - Better handling blocks that go into error state or deallocate
while jobs are running on them.
-- BGQ - fix for handling mix of steps running at same time some of which
are full allocation jobs, and others that are smaller.
-- BGQ - fix for core dump after running multiple sub-block jobs on static
blocks.
-- BGQ - fixed sync issue where if a job finishes in SLURM but not in mmcs
   until long after the SLURM job has been flushed from the system, we don't
   have to worry about rebooting the block to sync the system.
-- BGQ - In scontrol/sview node counts are now displayed with
CnodeCount/CnodeErrCount so to point out there are cnodes in an error state
on the block. Draining the block and having it reboot when all jobs are
gone will clear up the cnodes in Software Failure.
-- Change default SchedulerParameters max_switch_wait field value from 60 to
300 seconds.
-- BGQ - catch errors from the kill option of the runjob client.
-- BLUEGENE - make it so the epilog runs until slurmctld tells it the job is
gone. Previously it had a timelimit which has proven to not be the right
thing.
-- FRONTEND - fix issue where if a compute node was in a down state and
an admin updates the node to idle/resume the compute nodes will go
instantly to idle instead of idle* which means no response.
-- Fix regression in 2.4.0.pre3 where number of submitted jobs limit wasn't
being honored for QOS.
-- Cray - Enable logging of BASIL communications with environment variables.
Set XML_LOG to enable logging. Set XML_LOG_LOC to specify path to log file
or "SLURM" to write to SlurmctldLogFile or unset for "slurm_basil_xml.log".
Patch from Steve Tronfinoff, CSCS.
-- FRONTEND - if a front end unexpectedly reboots kill all jobs but don't
mark front end node down.
-- FRONTEND - don't down a front end node if you have an epilog error
-- BLUEGENE - if a job has an epilog error don't down the midplane it was
running on.
-- BGQ - added new DebugFlag (NoRealTime) for only printing debug from
state change while the realtime server is running.
-- Fix multi-cluster mode with sview starting on a non-bluegene cluster going
to a bluegene cluster.
-- BLUEGENE - ability to show Rack Midplane name of midplanes in sview and
scontrol.
* Changes in SLURM 2.4.0.pre3
=============================
-- Let a job be submitted even if it exceeds a QOS limit. Job will be left
in a pending state until the QOS limit or job parameters change. Patch by
Phil Eckert, LLNL.
-- Add sacct support for the option "--name". Work by Yuri D'Elia, Center for
Biomedicine, EURAC Research, Italy.
-- Add an srun shepherd process to cancel a job and/or step if the srun
   process is killed abnormally (e.g. SIGKILL).
-- BGQ - handle deadlock issue when a nodeboard goes into an error state.
-- BGQ - more thorough handling of blocks with multiple jobs running on them.
-- Fix man2html process to compile in the build directory instead of the
source dir.
-- Behavior of srun --multi-prog modified so that any program arguments
specified on the command line will be appended to the program arguments
specified in the program configuration file.
-- Add new command, sdiag, which reports a variety of job scheduling
statistics. Based upon work by Alejandro Lucero Palau, BSC.
-- BLUEGENE - Added DefaultConnType to the bluegene.conf file. This makes it
so you can specify any connection type you would like (TORUS or MESH) as
the default in dynamic mode. Previously it always defaulted to TORUS.
-- Made squeue -n and -w options more consistent with salloc, sbatch, srun,
and scancel. Patch by Don Lipari, LLNL.
-- Have sacctmgr remove user records when no associations exist for that user.
-- Several header file changes for clean build with NetBSD. Patches from
Aleksej Saushev.
-- Fix for possible deadlock in accounting logic: Avoid calling
jobacct_gather_g_getinfo() until there is data to read from the socket.
-- Fix race condition that could generate "job_cnt_comp underflow" errors on
front-end architectures.
-- BGQ - Fix issue where a system with missing cables could cause core dump.
* Changes in SLURM 2.4.0.pre2
=============================
-- CRAY - Add support for GPU memory allocation using SLURM GRES (Generic
RESource) support. Work by Steve Trofinoff, CSCS.
-- Add support for job allocations with multiple job constraint counts. For
example: salloc -C "[rack1*2&rack2*4]" ... will allocate the job 2 nodes
   from rack1 and 4 nodes from rack2. Support for only a single constraint
   name has been added to job step support.
-- BGQ - Remove old method for marking cnodes down.
-- BGQ - Remove BGP images from view in sview.
-- BGQ - print out failed cnodes in scontrol show nodes.
-- BGQ - Add srun option of "--runjob-opts" to pass options to the runjob
command.
-- FRONTEND - handle step launch failure better.
-- BGQ - Added a mutex to protect the now changing ba_system pointers.
-- BGQ - added new functionality for sub-block allocations - no preemption
for this yet though.
-- Add --name option to squeue to filter output by job name. Patch from Yuri
D'Elia.
-- BGQ - Added linking to runjob client library which gives support to
   totalview to use srun instead of runjob.
-- Add numeric range checks to scontrol update options. Patch from Phil
Eckert, LLNL.
-- Add ReconfigFlags configuration option to control actions of "scontrol
reconfig". Patch from Don Albert, Bull.
-- BGQ - handle reboots with multiple jobs running on a block.
-- BGQ - Add message handler thread to forward signals to runjob process.
* Changes in SLURM 2.4.0.pre1
=============================
-- BGQ - use the ba_geo_tables to figure out the blocks instead of the old
   algorithm. This improves timing in the worst cases and simplifies the code
   greatly.
-- BLUEGENE - Change to output tools labels from BP to Midplane
(i.e. BP List -> MidplaneList).
-- BLUEGENE - read MPs and BPs from the bluegene.conf
-- Modify srun's SIGINT handling logic timer (two SIGINTs within one second)
   to be based on a microsecond rather than second timer.
-- Modify advance reservation to accept multiple specific block sizes rather
than a single node count.
-- Permit administrator to change a job's QOS to any value without validating
the job's owner has permission to use that QOS. Based upon patch by Phil
Eckert (LLNL).
-- Add trigger flag for a permanent trigger. The trigger will NOT be purged
after an event occurs, but only when explicitly deleted.
-- Interpret a reservation with Nodes=ALL and a Partition specification as
reserving all nodes within the specified partition rather than all nodes
on the system. Based upon patch by Phil Eckert (LLNL).
-- Add the ability to reboot all compute nodes after they become idle. The
RebootProgram configuration parameter must be set and an authorized user
must execute the command "scontrol reboot_nodes". Patch from Andriy
Grytsenko (Massive Solutions Limited).
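   Sketch of the pieces involved (the program path is hypothetical):
      # slurm.conf
      RebootProgram=/sbin/reboot
      # then, as an authorized user:
      scontrol reboot_nodes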
-- Modify slurmdbd.conf parsing to accept DebugLevel strings (quiet, fatal,
info, etc.) in addition to numeric values. The parsing of slurm.conf was
modified in the same fashion for SlurmctldDebug and SlurmdDebug values.
The output of sview and "scontrol show config" was also modified to report
those values as strings rather than numeric values.
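   For example, string names can now be used in place of numbers (sketch):
      SlurmctldDebug=info
      SlurmdDebug=debug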
-- Changed default value of StateSaveLocation configuration parameter from
-- Prevent associations from being deleted if they have any jobs in running,
   pending or suspended state. Previous code prevented this only for running
   jobs.
-- If a job can not run due to QOS or association limits, then do not cancel
   the job, but leave it pending in a system held state (priority = 1). The
   job will run when its limits or the QOS/association limits change. Based
   upon a patch by Phil Eckert (LLNL).
-- BGQ - Added logic to keep track of cnodes in an error state inside of a
booted block.
-- Added the ability to update a node's NodeAddr and NodeHostName with
scontrol. Also enable setting a node's state to "future" using scontrol.
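   Possible usage sketch (node name and address values are hypothetical):
      scontrol update NodeName=tux001 NodeAddr=10.1.1.5 NodeHostName=tux001
      scontrol update NodeName=tux001 State=FUTURE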
-- Add a node state flag of CLOUD and save/restore NodeAddr and NodeHostName
information for nodes with a flag of CLOUD.
-- Cray: Add support for job reservations with node IDs that are not in
numeric order. Fix for Bugzilla #5.
-- Fix association limit support for jobs queued for multiple partitions.
-- BLUEGENE - fix issue for sub-midplane systems to create a full system
block correctly.
-- BLUEGENE - Added option to the bluegene.conf to indicate you are running
   on a sub-midplane system.
-- Added the UserID of the user issuing the RPC to the job_submit/lua
functions.
-- Fixed issue where if a job ended with ESLURMD_UID_NOT_FOUND or
   ESLURMD_GID_NOT_FOUND, slurm would be a little overzealous in treating a
   missing GID or UID as a fatal error.
-- If job time limit exceeds partition maximum, but job's minimum time limit
does not, set job's time limit to partition maximum at allocation time.
* Changes in SLURM 2.3.6
========================
-- Fix DefMemPerCPU for partition definitions.
-- Fix to create a reservation with licenses and no nodes.
-- Fix issue with assoc_mgr if a bad state file is given and the database
isn't up at the time the slurmctld starts, not running the
priority/multifactor plugin, and then the database is started up later.
-- Gres: If a gres has a count of one and an associated file then when doing
a reconfiguration, the node's bitmap was not cleared resulting in an
underflow upon job termination or removal from scheduling matrix by the
backfill scheduler.
-- Fix race condition in job dependency logic which can result in invalid
memory reference.
* Changes in SLURM 2.3.5
========================
-- Improve support for overlapping advanced reservations. Patch from
Bill Brophy, Bull.
-- Modify Makefiles for support of Debian hardening flags. Patch from
Simon Ruderich.
-- CRAY: Fix support for configuration with SlurmdTimeout=0 (never mark
node that is DOWN in ALPS as DOWN in SLURM).
-- Fixed the setting of SLURM_SUBMIT_DIR for jobs submitted by Moab (BZ#1467).
Patch by Don Lipari, LLNL.
-- Correction to init.d/slurmdbd exit code for status option. Patch by Bill
Brophy, Bull.
-- When the optional max_time is not specified for --switches=count, the site
max (SchedulerParameters=max_switch_wait=seconds) is used for the job.
Based on patch from Rod Schultz.
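   For example (sketch; the values and script name are arbitrary):
      # site-wide cap in slurm.conf
      SchedulerParameters=max_switch_wait=86400
      # job requesting at most 2 leaf switches, using the site max wait
      sbatch --switches=2 job.sh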
-- Fix bug in select/cons_res plugin when used with topology/tree and a node
range count in job allocation request.
-- Fixed moab_2_slurmdb.pl script to correctly work for end records.
-- Add support for new SchedulerParameters of max_depend_depth defining the
maximum number of jobs to test for circular dependencies (i.e. job A waits
for job B to start and job B waits for job A to start). Default value is
10 jobs.
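   For example (sketch; 10 is the default noted above):
      SchedulerParameters=max_depend_depth=10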
-- Fix potential race condition if MinJobAge is very low (i.e. 1) and using
slurmdbd accounting and running large amounts of jobs (>50 sec). Job
information could be corrupted before it had a chance to reach the DBD.
-- Fix state restore of job limit set from admin value for min_cpus.
-- Fix clearing of limit values if an admin removes the limit for max cpus
and time limit where it was previously set by an admin.
-- Fix issue where log message is more than 256 chars and then has a format.
-- Fix sched/wiki2 to support job account name, gres, partition name, wckey,
or working directory that contains "#" (a job record separator). Also fix
for wckey or working directory that contains a double quote '\"'.
-- CRAY - fix for handling memory requests from user for an allocation.
-- Add support for switches parameter to the job_submit/lua plugin. Work by
Par Andersson, NSC.
-- Fix to job preemption logic to preempt multiple jobs at the same time.
-- Fix minor issue where uid and gid were switched in sview for submitting
batch jobs.
-- Fix possible illegal memory reference in slurmctld for job step with
relative option. Work by Matthieu Hautreux (CEA).
-- Reset priority of system held jobs when dependency is satisfied. Work by
Don Lipari, LLNL.
* Changes in SLURM 2.3.4
========================
-- Set DEFAULT flag in partition structure when slurmctld reads the
configuration file. Patch from Rémi Palancher.
-- Fix for possible deadlock in accounting logic: Avoid calling
jobacct_gather_g_getinfo() until there is data to read from the socket.
-- Fix typo in accounting when using reservations. Patch from Alejandro
Lucero Palau.
-- Fix to the multifactor priority plugin to calculate effective usage earlier
to give a correct priority on the first decay cycle after a restart of the
slurmctld. Patch from Martin Perry, Bull.
-- Permit user root to run a job step for any job as any user. Patch from
Didier Gazen, Laboratoire d'Aerologie.
-- BLUEGENE - fix for not allowing jobs if all midplanes are drained and all
blocks are in an error state.
-- Avoid slurmctld abort due to bad pointer when setting an advanced
reservation MAINT flag if it contains no nodes (only licenses).
-- Fix bug when requeued batch job is scheduled to run on a different node
   zero, but attempts job launch on old node zero.
-- Fix bug in step task distribution when nodes are not configured in numeric
order. Patch from Hongjia Cao, NUDT.
-- Fix for srun allocating/running within an existing allocation with the
   --exclude option and a --nnodes count small enough to remove more nodes.
   Patch from Phil Eckert, LLNL.
-- Work around to handle certain combinations of glibc/kernel
(i.e. glibc-2.14/Linux-3.1) to correctly open the pty of the slurmstepd
as the job user. Patch from Mark Grondona, LLNL.
-- Modify linking to include "-ldl" only when needed. Patch from Aleksej
Saushev.
-- Fix smap regression to display nodes that are drained or down correctly.
-- Several bug fixes and performance improvements related to batch scripts
   containing very large numbers of arguments. Patches from Par
   Andersson, NSC.
-- Fixed extremely hard to reproduce threading issue in assoc_mgr.
-- Correct "scontrol show daemons" output if there is more than one
ControlMachine configured.
-- Add node read lock where needed in slurmctld/agent code.
-- Added test for LUA library named "liblua5.1.so.0" in addition to
"liblua5.1.so" as needed by Debian. Patch by Remi Palancher.
-- Added partition default_time field to job_submit LUA plugin. Patch by
Remi Palancher.
-- Fix bug in cray/srun wrapper stdin/out/err file handling.
-- In cray/srun wrapper, only include aprun "-q" option when srun "--quiet"
option is used.
-- BLUEGENE - fix issue where if a small block was in error it could hold up
the queue when trying to place a larger than midplane job.
-- CRAY - ignore all interactive nodes and jobs on interactive nodes.
-- Add new job state reason of "FrontEndDown" which applies only to Cray and
IBM BlueGene systems.
-- Cray - Enable configure option of "--enable-salloc-background" to permit
the srun and salloc commands to be executed in the background. This does
NOT remove the ALPS limitation that only one job reservation can be created
for each Linux session ID.
-- Cray - For srun wrapper when creating a job allocation, set the default job
name to the executable file's name.
-- FRONTEND - If a front end node unexpectedly reboots, kill all jobs, but do
   not mark the front end node down.
-- FRONTEND - Do not down a front end node on an epilog error.
-- Cray - Fix so that if a frontend slurmd is started after the slurmctld has
   already pinged it at startup, the not-responding flag is removed from the
   frontend node.
-- Cray - Fix issue on smap not displaying grid correctly.
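
   (Example for the "--enable-salloc-background" entry above: a minimal
   build-time sketch. Only the configure flag name is taken from the entry;
   the other commands are the usual illustrative build steps.)
      ./configure --enable-salloc-background
      make && make install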
* Changes in SLURM 2.3.3
========================
-- Fix task/cgroup plugin error when used with GRES. Patch by Alexander
Bersenev (Institute of Mathematics and Mechanics, Russia).
-- Permit pending job exceeding a partition limit to run if its QOS flag is
modified to permit the partition limit to be exceeded. Patch from Bill
Brophy, Bull.
-- Fix sacct job search with filtering, which was ignoring the wckey filter.
-- Fixed issue with QOS preemption when adding new QOS.
-- Fixed accounting issue with the comment field when a job finishes before
   it starts.
-- Add slashes in front of derived exit code when modifying a job.
-- Handle numeric suffix of "T" for terabyte units. Patch from John Thiltges,
University of Nebraska-Lincoln.
-- Prevent resetting a held job's priority when updating other job parameters.
Patch from Alejandro Lucero Palau, BSC.
-- Improve logic to import a user's environment. Needed with --get-user-env
option used with Moab. Patch from Mark Grondona, LLNL.
-- Fix bug in sview layout if the node count is less than the configured
   grid_x_width.
-- Modify PAM module to prefer the SLURM library with the same major release
   number as the one it was built with.
-- Permit gres count configuration of zero.
-- Fix race condition where sbcast command can result in deadlock of slurmd
daemon. Patch by Don Albert, Bull.
-- Fix bug in srun --multi-prog configuration file handling to avoid printing
   a duplicate record error when "*" is used at the end of the file for the
   task ID (see the sketch after this release's entries).
-- Let operators see reservation data even if "PrivateData=reservations" flag
is set in slurm.conf. Patch from Don Albert, Bull.
-- Added new sbatch option "--export-file" as needed for the latest version
   of Moab (see the usage sketch after this release's entries). Patch from
   Phil Eckert, LLNL.
-- Fix for sacct printing CPUTime(RAW) where the value is greater than a
   32-bit number.
-- Fix bug in --switch option with topology resulting in bad switch count use.
Patch from Alejandro Lucero Palau (Barcelona Supercomputer Center).
-- Fix PrivateFlags bug when using Priority Multifactor plugin. If using sprio
all jobs would be returned even if the flag was set.
Patch from Bill Brophy, Bull.
-- Fix for possible invalid memory reference in slurmctld in job dependency
logic. Patch from Carles Fenoy (Barcelona Supercomputer Center).
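
   (Example for the srun --multi-prog entry above: a sketch of a multi-prog
   configuration file with "*" used as the catch-all task ID on the last
   line; the file and program names are hypothetical.)
      # multi.conf: <task id or range> <program> [args]
      0   ./coordinator
      *   ./worker
   It would be launched with something like "srun -n4 --multi-prog multi.conf".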
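
   (Example for the sbatch "--export-file" entry above: a usage sketch only;
   the file and script names are hypothetical, and the exact environment file
   format expected by this version is not described here.)
      sbatch --export-file=job_env.txt job_script.sh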
* Changes in SLURM 2.3.2
========================
-- Add configure option of "--without-rpath" which builds the SLURM tools
   without the rpath option; this works if the Munge and BlueGene libraries
   are in the default library search path and makes system updates easier
   (see the sketch after this release's entries).
-- Fixed issue where, if a job ended with ESLURMD_UID_NOT_FOUND and
   ESLURMD_GID_NOT_FOUND errors, Slurm was overly zealous in treating a
   missing GID or UID as a fatal error.
-- Backfill scheduling - Add SchedulerParameters configuration parameter of
   "bf_res" to control the resolution of the backfill scheduler's data about
   when jobs begin and end. Default value is 60 seconds (it used to be
   1 second). See the slurm.conf sketch after this release's entries.
-- Cray - Remove the "family" specification from the GPU reservation request.
-- Updated set_oomadj.c, replacing the deprecated oom_adj reference with
   oom_score_adj.
-- Fix resource allocation bug: generic resources allocation was ignoring the
   job's ntasks_per_node and cpus_per_task parameters. Patch from Carles
   Fenoy, BSC.
-- Avoid orphan job step if slurmctld is down when a job step completes.
-- Fix Lua link order, patch from Pär Andersson, NSC.
-- Set SLURM_CPUS_PER_TASK=1 when user specifies --cpus-per-task=1.
-- Fix for fatal error managing GRES. Patch by Carles Fenoy, BSC.
-- Fixed race condition when using the DBD in accounting: if a job had not
   started when the eligible message was sent, but started before the
   db_index was returned, information such as the start time would be lost.
-- Fix issue in accounting where normalized shares could be updated
incorrectly when getting fairshare from the parent.
-- Fixed filling in of the cluster's default QOS correctly when associations
   are not enforced but QOS support is wanted.
-- Fix in select/cons_res for "fatal: cons_res: sync loop not progressing"
with some configurations and job option combinations.
-- BLUEGENE - Fixed issue with handling HTC modes and rebooting.
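
   (Example for the "--without-rpath" entry above: a minimal sketch, assuming
   the Munge and BlueGene libraries are already in the default library search
   path, e.g. registered via ldconfig.)
      ./configure --without-rpath
      make && make install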
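
   (Example for the backfill "bf_res" entry above: a slurm.conf sketch; the
   300-second value is an arbitrary illustration, not a recommendation.)
      SchedulerType=sched/backfill
      SchedulerParameters=bf_res=300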