- Feb 15, 2014
  - Morris Jette authored

- Feb 14, 2014
  - Daniele Didomizio authored
    Added the sbatch '--parsable' option to output only the job id number and the cluster name, separated by a semicolon, rather than "Submitted batch job....". Errors will still be displayed.
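
    A minimal usage sketch (the script name and printed values are hypothetical):

      $ sbatch --parsable job.sh
      12345;cluster1
      $ sbatch job.sh
      Submitted batch job 12345
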
  - David Bigagli authored
  - Danny Auble authored
    Fixed a bug where, if a slurmd needed to forward a message, the slurmd would core dump.

- Feb 13, 2014
  - Morris Jette authored
  - David Bigagli authored
    Added documentation describing that jobs must be drained from the cluster before deploying any checkpoint plugin.

- Feb 12, 2014
  - David Bigagli authored
  - Morris Jette authored
    Properly enforce a job's --cpus-per-task option when a job's allocation is constrained on some nodes by the --mem-per-cpu option (bug 590).
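
    A sketch of the affected request shape (all values hypothetical): each task
    must still receive 4 CPUs even on nodes where --mem-per-cpu caps how many
    CPUs the job can use.

      $ sbatch --ntasks=8 --cpus-per-task=4 --mem-per-cpu=2048 job.sh
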

- Feb 11, 2014
  - Morris Jette authored

- Feb 10, 2014
  - David Bigagli authored
  - Morris Jette authored
    Limit scheduling logic depth by partition.
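
    A slurm.conf sketch; the option name partition_job_depth is an assumption
    here, since the log does not show which parameter this commit touches:

      # Assumed SchedulerParameters option: cap how many jobs the main
      # scheduler examines per partition.
      SchedulerParameters=partition_job_depth=50
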
  - Morris Jette authored
  - Morris Jette authored
  - Morris Jette authored

- Feb 09, 2014
  - Moe Jette authored

- Feb 08, 2014
  - Danny Auble authored
  - Danny Auble authored

- Feb 07, 2014
  - Morris Jette authored
    bug 586
  - Morris Jette authored
    Partial response to bug 521.

- Feb 06, 2014
  - Morris Jette authored
    No change in logic; just renamed the recently added environment variable.
  - Morris Jette authored
    Set the environment variable SLURM_PARTITION to the partition in which a job is running. Set for salloc, sbatch, and srun.
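
    A quick check sketch, assuming a partition named "debug" exists (note the
    entry above indicates the variable was subsequently renamed):

      $ srun --partition=debug printenv SLURM_PARTITION
      debug
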
  - Danny Auble authored

- Feb 05, 2014
  - David Bigagli authored
  - Martin Perry authored
  - Danny Auble authored
  - Dominik Bartkiewicz authored
    Set GPU_DEVICE_ORDINAL environment variable.
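
    A quick check sketch, assuming GPUs are configured as a GRES and that the
    variable lists the allocated device ordinals:

      $ srun --gres=gpu:2 printenv GPU_DEVICE_ORDINAL
      0,1
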
  - Danny Auble authored

- Feb 04, 2014
  - Morris Jette authored
    The previous logic would try to pick a specific node count, which caused a problem on heterogeneous systems. This change largely reverts commit a270417b.
  - David Bigagli authored
    beside the numerical values.
  - Danny Auble authored
  - Morris Jette authored
    Added a whole_node field to the job_resources structure. This enables gang scheduling for jobs with core specialization and other jobs allocated whole nodes.

- Feb 03, 2014
  - Danny Auble authored

- Jan 31, 2014
  - David Bigagli authored
  - Danny Auble authored
    For example, salloc -n32 doesn't request a number of nodes; with the previous code, if such a request used 4 nodes but only 1 node remained in the GrpNodes limit, it would run without issue, because the limits were checked before the node count was selected. The check is now done afterwards, so the limits on node count, CPUs, and memory are always enforced.
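
    A sketch of the problematic shape (the GrpNodes value is hypothetical):

      # Association limit: GrpNodes=1. A task-only request such as this could
      # previously be placed on 4 nodes and still be admitted, because the
      # limit was checked before the node count was chosen.
      $ salloc -n32
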
  - Morris Jette authored
    Fix step allocation when some CPUs are not available due to memory limits. This happens when one step is active and using memory that blocks the scheduling of another step on a portion of the CPUs it needs. The new step is now delayed rather than aborting with "Requested node configuration is not available" (bug 577).
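
    A sketch of the scenario inside an existing allocation (values hypothetical):

      # Step 1 holds most of the allocation's memory:
      $ srun --ntasks=1 --mem-per-cpu=4096 ./step1 &
      # Step 2 now waits for memory to free up instead of aborting with
      # "Requested node configuration is not available":
      $ srun --ntasks=1 --mem-per-cpu=4096 ./step2
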

- Jan 29, 2014
  - David Bigagli authored
    Fixed host lists being built incorrectly when using the hostlist_push_host function with input surrounded by [].

- Jan 28, 2014
  - Danny Auble authored
    based on ionode count correctly on slurmctld restart.

- Jan 25, 2014
  - jette authored
  - Morris Jette authored
    Split the slurmctld job record's "shared" field into "share_res" (share resources) and "whole_node" fields. This is needed to better manage the allocation of whole nodes for core specialization without disabling gang scheduling of such jobs.

- Jan 23, 2014
  - David Bigagli authored
    Enabled scontrol to suspend/resume array elements.
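
    A usage sketch, assuming the jobid_index element syntax and an existing
    job array 1234:

      $ scontrol suspend 1234_5
      $ scontrol resume 1234_5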