This file describes changes in recent versions of SLURM. It primarily
documents those changes that are of interest to users and admins.
* Changes in SLURM 2.2.0.pre7
=============================
* Changes in SLURM 2.2.0.pre6
=============================
-- sview - added ability to see database configuration.
-- sview - added ability to add/remove visible tabs.
-- sview - change way grid highlighting takes place on selected objects.
-- Added infrastructure to support allocation of generic node resources:
   - Added node configuration parameter of Gres=.
   - Added ability to view/modify a node's gres using scontrol, sinfo and sview.
   - Added salloc, sbatch and srun --gres option.
   - Added ability to view a job or job step's gres using scontrol, squeue and
     sview.
   - Added new configuration parameter GresPlugins to define plugins used to
     manage generic resources.
   - Added framework for gres plugins.
   - Added DebugFlags option of "gres" for detailed debugging of gres actions.
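   As an illustrative sketch only (the "gpu" gres name, node names and counts
   below are hypothetical, not taken from this entry), the new options might be
   combined as follows:
     # slurm.conf
     GresPlugins=gpu
     NodeName=tux[0-15] Gres=gpu:2
     DebugFlags=gres
     # job submission requesting one gres unit of that type per node
     srun --gres=gpu:1 -N2 ./my_app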
-- Slurmd modified to log slow slurmstepd startup and note a possible file
   system problem.
-- sview - A .slurm/sviewrc file is now created when running sview. It stores
   defaults for how sview looks when first launched. You can set these defaults
   with Ctrl-S or Options->Set Default Settings.
-- Add scontrol "wait_job <job_id>" option to wait for nodes to boot as needed.
Useful for batch jobs (in Prolog, PrologSlurmctld or the script) if powering
down idle nodes.
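   For example, a batch script that should not begin work until all of its
   allocated nodes have booted might start with the line below (a sketch;
   SLURM_JOB_ID is the job ID environment variable set for batch jobs):
     scontrol wait_job $SLURM_JOB_ID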
-- Added salloc and sbatch option --wait-for-nodes. If set non-zero, job
initiation will be delayed until all allocated nodes have booted. Salloc
will log the delay with the messages "Waiting for nodes to boot" and "Nodes
are ready for use".
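   A minimal sketch using the option name as given above (node count and
   command are illustrative):
     salloc -N8 --wait-for-nodes=1 ./run_after_all_nodes_boot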
-- The priority/multifactor plugin now takes into consideration a job's size
   in CPUs as well as its size in nodes when computing the job size factor.
   Previously only nodes were considered.
-- When using the SlurmDBD, messages waiting to be sent will be combined
   and sent in one message.
-- Removed srun's --core option. The logic moved to an optional SPANK plugin
   (currently in the contribs directory, with plans to distribute it through
   http://code.google.com/p/slurm-spank-plugins/).
-- Patch adding CR_CORE_DEFAULT_DIST_BLOCK as a select option to lay out jobs
   using a block distribution across cores within each node instead of the
   cyclic distribution that was previously the default.
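   A sketch of the corresponding slurm.conf setting, assuming the cons_res
   select plugin with CR_Core is already in use:
     SelectType=select/cons_res
     SelectTypeParameters=CR_Core,CR_CORE_DEFAULT_DIST_BLOCK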
-- Accounting - When removing associations, if jobs are running, those jobs
   must now be killed before proceeding. Previously the jobs were killed
   automatically, causing user confusion over what is most likely an
   admin's mistake.
-- sview - color column keeps reference color when highlighting.
-- Configuration parameter MaxJobCount changed from 16-bit to 32-bit field.
The default MaxJobCount was changed from 5,000 to 10,000.
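   For example, in slurm.conf (the value shown is simply the old default):
     MaxJobCount=5000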
-- SLURM commands (squeue, sinfo, etc.) can now operate cross-cluster between
   similar Linux systems. Cross-cluster operation between BlueGene and Linux
   systems and the like does not currently work. Jobs can be submitted
   cross-cluster with sbatch. Salloc and srun are not cross-cluster compatible
   and, given that they communicate directly with compute nodes, likely never
   will be.
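   As a sketch (the -M/--clusters option is assumed here; it is not named in
   this entry):
     squeue -M other_cluster
     sbatch -M other_cluster job.sh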
-- salloc modified to forward SIGTERM to the spawned program.
-- In sched/wiki2 (for Moab support) - Add GRES and WCKEY fields to MODIFYJOBS
and GETJOBS commands. Add GRES field to GETNODES command.
-- In struct job_descriptor and struct job_info: rename min_sockets to
sockets_per_node, min_cores to cores_per_socket, and min_threads to
threads_per_core (the values are not minimum, but represent the target
values).
-- Fixed bug in clearing a partition's DisableRootJobs value reported by
Hongjia Cao.
-- Purge (or ignore) terminated jobs in a more timely fashion based upon the
MinJobAge configuration parameter. Small values for MinJobAge should improve
responsiveness for high job throughput.
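   For example, a value smaller than the default can be set in slurm.conf
   (the value shown is illustrative):
     MinJobAge=30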
* Changes in SLURM 2.2.0.pre5