This file describes changes in recent versions of Slurm. It primarily
documents those changes that are of interest to users and admins.
-- Fix issue that occurs when QOS are not being enforced but a partition
either allows or denies them.
-- CRAY - Make switch/cray default when running on a Cray natively.
-- CRAY - Make job_container/cncu default when running on a Cray natively.
-- Disable job time limit change if its preemption is in progress.
-- Correct logic to properly enforce job preemption GraceTime.
-- Fix sinfo -R to print each down/drained node once, rather than once per
partition.
-- If a job has a non-responding node, retry job step creation rather than
returning with a DOWN node error.
-- Support SLURM_CONF path which does not have "slurm.conf" as the file name.
-- Fix issue where batch cpuset wasn't looked at correctly in
jobacct_gather/cgroup.
-- Correct squeue's job node and CPU counts for requeued jobs.
-- Correct SelectTypeParameters=CR_LLN with job selection of specific nodes.
-- A job will be hidden by default only if ALL of its partitions are hidden.
-- Run EpilogSlurmctld for a job that is killed during slurmctld
reconfiguration.
-- Close a window in srun where receiving a signal while waiting for an
allocation and printing output could produce a deadlock.
-- Add SelectTypeParameters option of CR_PACK_NODES to pack a job's tasks
tightly on its allocated nodes rather than distributing them evenly across
the allocated nodes.
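   For illustration, a minimal slurm.conf sketch (assuming cores are the
   consumable resource; the new flag is combined with the existing selection
   options):
     SelectType=select/cons_res
     SelectTypeParameters=CR_Core,CR_Pack_Nodes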
-- cpus-per-task support: Try to pack all CPUs of each task onto one socket.
Previous logic could spread a task's CPUs across multiple sockets.
-- Add new distribution method fcyclic so that when a task uses multiple CPUs
it can bind them cyclically across sockets.
-- task/affinity - When using --hint=nomultithread only bind to the first
thread in a core.
-- Make cgroup task layout (block | cyclic) method mirror that of
task/affinity.
-- If TaskProlog sets SLURM_PROLOG_CPU_MASK reset affinity for that task
based on the mask given.
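   An illustrative TaskProlog sketch (the script path and mask value are
   examples only; the mask is assumed to be a hexadecimal CPU mask):
     #!/bin/sh
     # /etc/slurm/task_prolog.sh - bind each task to CPUs 0-3
     echo "export SLURM_PROLOG_CPU_MASK=0xf"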
-- Keep supporting 'srun -N x --pty bash' for historical reasons.
-- If EnforcePartLimits=Yes and the QOS the job is using can override limits,
allow it.
-- Fix issues if a partition allows or denies accounts or QOS and either is
not set.
-- If a job requests a partition which does not allow the job's QOS or
account, make the job pend unless EnforcePartLimits=Yes. Previously the
job was always killed at submit time.
-- Fix format output of scontrol command when printing node state.
-- Improve the clean up of cgroup hierarchy when using the
jobacct_gather/cgroup plugin.
-- Added SchedulerParameters value of Ignore_NUMA.
-- Fix issues with code when using automake 1.14.1
-- select/cons_res plugin: Fix memory leak related to job preemption.
-- After reconfig, rebuild the job node counters only for jobs that have not
finished yet, otherwise a requeued job may enter an invalid COMPLETING
state.
-- Do not purge the script and environment files for completed jobs on
slurmctld reconfiguration or restart (they might be later requeued).
-- scontrol now accepts the option job=xxx or jobid=xxx for the requeue,
requeuehold and release operations.
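   For example, the following forms are now equivalent (job ID is
   illustrative):
     scontrol requeue 1234
     scontrol requeue jobid=1234
     scontrol requeue job=1234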
-- task/cgroup - fix to bind batch jobs to the proper CPUs.
-- Added strigger option of -N, --noheader to not print the header when
displaying a list of triggers.
-- Modify strigger to accept arguments to the program to execute when an
event trigger occurs.
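   Illustrative usage of both strigger changes (script path and arguments are
   examples only):
     strigger --set --node --down --program="/usr/sbin/notify.sh arg1 arg2"
     strigger --get --noheader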
-- Attempt to create duplicate event trigger now generates ESLURM_TRIGGER_DUP
("Duplicate event trigger").
-- Treat special characters like %A, %s, etc. literally in file names when
specified escaped, e.g. sbatch -o /home/zebra\\%s will not expand %s as
the step ID of the running job.
-- CRAYALPS - Add better support for CLE 5.2 when running Slurm over ALPS.
-- Test time when job_state file was written to detect multiple primary
slurmctld daemons (e.g. both backup and primary are functioning as
primary and there is a split brain problem).
-- Fix scontrol to accept update jobid=# numtasks=#
-- If the backup slurmctld assumes primary status, then do NOT purge any
job state files (batch script and environment files) but if any attempt
is made to re-use them consider this a fatal error. It may indicate that
multiple primary slurmctld daemons are active (e.g. both backup and primary
are functioning as primary and there is a split brain problem).
-- Set correct error code when requeuing a completing/pending job
-- When checking dependencies of type afterany, afterok and afternotok, don't
clear the dependency if the job is completing.
-- Cleanup the JOB_COMPLETING flag and eventually requeue the job when the
last epilog completes, either slurmd epilog or slurmctld epilog, whichever
comes last.
-- When attempting to requeue a job distinguish the case in which the job is
JOB_COMPLETING or already pending.
-- When reconfiguring the controller don't restart the slurmctld epilog if it
is already running.
-- Email messages for job array events now print the job ID using the format
"#_# (#)" rather than just the internal job ID.
-- Set the number of free licenses to be 0 if the global license count decreases
and total is less than in use.
* Changes in Slurm 14.03.3-2
============================
* Changes in Slurm 14.03.3
==========================
-- Correction to default batch output file name. Version 14.03.2 was using
"slurm_<jobid>_4294967294.out" due to an error in the job array logic.
-- In slurm.spec file, replace "Requires cray-MySQL-devel-enterprise" with
"Requires mysql-devel".
-- Fix race condition if PrologFlags=Alloc,NoHold is used.
-- Cray - Make NPC only limit running other NPC jobs on shared blades instead
of limiting non-NPC jobs.
-- Fix for sbatch #PBS -m (mail) option parsing.
-- Fix job dependency bug where jobs dependent upon multiple other jobs could
start prematurely.
-- Set "Reason" field for all elements of a job array on short-circuited
scheduling for job arrays.
-- Allow -D option of salloc/srun/sbatch to specify relative path.
-- Added SchedulerParameter of batch_sched_delay to permit many batch jobs
to be submitted between each scheduling attempt to reduce overhead of
scheduling logic.
-- Added job reason of "SchedTimeout" if the scheduler was not able to reach
the job to attempt scheduling it.
-- Add job's exit state and exit code to email message.
-- scontrol hold/release accepts job name option (in addition to job ID).
-- Better handle attempts to cancel a step that has not started yet.
-- Add --priority option to salloc, sbatch and srun commands.
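   Example usage (value is illustrative; setting an explicit priority
   generally requires operator or administrator privileges):
     sbatch --priority=1000 my_job.sh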
-- Honor partition priorities over job priorities.
-- Fix sacct -c when using jobcomp/filetxt to read newer variables
-- Fix segfault of sacct -c if spaces are in the variables.
-- Release held job only with "scontrol release <jobid>" and not by resetting
the job's priority. This is needed to support job arrays better.
-- Correct squeue command not to merge jobs with state pending and completing
together.
-- Fix issue where user is requesting --acctg-freq=0 and no memory limits.
-- Fix issue with GrpCPURunMins if a job's timelimit is altered while the job
is running.
-- Temporary fix for handling our typemap for the perl api with newer perl.
-- Fix segfault in the controller when AllowGroups is set to a bad group.
-- Handle node ranges better when dealing with accounting max node limits.
-- Update configure to set correct version without having to run autogen.sh
* Changes in Slurm 14.03.1
==========================
-- Add support for job std_in, std_out and std_err fields in Perl API.
-- Add "Scheduling Configuration Guide" web page.
-- BGQ - fix check for jobinfo when it is NULL
-- Do not check cleaning on "pending" steps.
-- task/cgroup plugin - Fix for building on older hwloc (v1.0.2).
-- In the PMI implementation, by default don't check for duplicate keys. Set
the SLURM_PMI_KVS_DUP_KEYS environment variable if you want the code to
check for duplicate keys.
-- Permit user root to propagate resource limits higher than the hard limit
slurmd has on that compute node (i.e. raise both current and maximum
limits).
-- Fix issue with license used count when doing an scontrol reconfig.
-- Fix the PMI iterator to not report duplicated keys.
-- Fix issue with sinfo when -o is used without the %P option.
-- Rather than immediately invoking an execution of the scheduling logic on
every event type that can enable the execution of a new job, queue its
execution. This permits faster execution of some operations, such as
modifying large counts of jobs, by executing the scheduling logic less
frequently, but still in a timely fashion.
-- If an environment variable is longer than MAX_ENV_STRLEN, don't set it in
the job environment, otherwise the exec() fails.
-- Optimize scontrol hold/release logic for job arrays.
-- Modify srun to report an exit code of zero rather than nine if some tasks
exit with a return code of zero and others are killed with SIGKILL. Only an
exit code of zero did this.
-- Avoid slurmctld crash getting job info if detail_ptr is NULL.
-- Fix sacctmgr add user where both defaultaccount and accounts are specified.
-- Added SchedulerParameters option of max_sched_time to limit how long the
main scheduling loop can execute.
-- Added SchedulerParameters option of sched_interval to control how frequently
the main scheduling loop will execute.
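   An illustrative SchedulerParameters line combining both options (values
   are examples only, in seconds):
     SchedulerParameters=max_sched_time=4,sched_interval=60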
-- Move start time of main scheduling loop timeout after locks are acquired.
-- Add squeue job format option of "%y" to print a job's nice value.
-- Update scontrol update jobID logic to operate on entire job arrays.
-- Fix PrologFlags=Alloc to run the prolog on each of the nodes in the
allocation instead of just the first.
-- Fix race condition if a step is starting while the slurmd is being
restarted.
-- Make sure a job's prolog has run before starting a step.
-- BGQ - Fix invalid memory read when using DefaultConnType in the
bluegene.conf
-- Make sure we send node state to the DBD on clean start of controller.
-- Fix some sinfo and squeue sorting anomalies due to differences in data
types.
-- Only send message back to slurmctld when PrologFlags=Alloc is used on a
Cray/ALPS system, otherwise use the slurmd to wait on the prolog to gate
the start of the step.
-- Remove need to check PrologFlags=Alloc in slurmd since we can tell whether
the prolog has run yet or not.
-- Fix squeue to use a correct macro to check job state.
-- BGQ - Fix incorrect logic issues if MaxBlockInError=0 in the bluegene.conf.
-- priority/basic - Insure job priorities continue to decrease when jobs are
submitted with the --nice option.