- May 28, 2014
-
-
Danny Auble authored
-
Morris Jette authored
When a batch job requeue completes, clear its bitmap of completing nodes. If the bitmap were to persist, nodes were added or removed in slurm.conf, and "scontrol reconfigure" were executed, one of the bits in that bitmap could come to point at a DOWN node and the job would be killed. bug 805
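The hazard is a stale bit index: after reconfiguration, the same position in the bitmap may describe a different node. A minimal C sketch of the pattern (the struct and field names are illustrative, not SLURM's actual job_record layout):

    #include <stdint.h>
    #include <string.h>

    struct job_rec {
        uint8_t *completing_bitmap;  /* one bit per configured node */
        size_t   bitmap_bytes;
    };

    /* Once the requeue completes, zero the bitmap so a later
     * "scontrol reconfigure" cannot re-read bits that now map to
     * different (possibly DOWN) nodes. */
    static void requeue_complete(struct job_rec *job)
    {
        if (job->completing_bitmap)
            memset(job->completing_bitmap, 0, job->bitmap_bytes);
    }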
-
- May 27, 2014
-
-
Morris Jette authored
If a batch job is discovered to be missing from its head node, set its exit code to 1 rather than leaving it as zero. Bug 833
-
Morris Jette authored
Was printing an unsigned value as an int.
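A two-line illustration of the class of bug: a mismatched printf conversion reinterprets large unsigned values as negative on typical platforms.

    #include <stdio.h>

    int main(void)
    {
        unsigned int count = 4294967295u;  /* UINT_MAX */
        printf("%d\n", count);   /* wrong specifier: prints -1 */
        printf("%u\n", count);   /* matching specifier: 4294967295 */
        return 0;
    }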
-
Morris Jette authored
-
Danny Auble authored
-
- May 23, 2014
-
-
David Bigagli authored
-
David Bigagli authored
-
Yu Watanabe authored
-
Danny Auble authored
compiler would treat 1 as a 32-bit number and wrap.
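This is the classic shift-width pitfall: a bare literal 1 is a 32-bit int, so the arithmetic wraps before the result is widened to 64 bits. A minimal example (the bit position is illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Buggy form, shown only in this comment, overflows the
         * 32-bit int before the assignment widens it:
         *     uint64_t mask = 1 << 40;
         * Widening the literal first keeps the shift in 64 bits. */
        uint64_t mask = (uint64_t)1 << 40;
        printf("%llu\n", (unsigned long long)mask);  /* 1099511627776 */
        return 0;
    }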
-
Danny Auble authored
more of a "hey, the user is asking for something out of the norm."
-
Danny Auble authored
not able to be separated into multiple patches. If EnforcePartLimits=Yes and the QOS the job is using can override limits, allow it. Fix issues if a partition allows or denies accounts or QOSes and either is not set. If a job requests a partition that does not allow its QOS or account, the job will pend unless EnforcePartLimits=Yes. Before, the job would always be killed at submit.
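A hypothetical C sketch of the submit-time decision described above (the function and parameter names are invented for illustration; this is not the actual slurmctld code):

    #include <stdbool.h>
    #include <stdio.h>

    enum job_action { JOB_ACCEPT, JOB_PEND, JOB_REJECT };

    static enum job_action part_limit_action(bool within_limits,
                                             bool qos_overrides_limits,
                                             bool enforce_part_limits)
    {
        if (within_limits || qos_overrides_limits)
            return JOB_ACCEPT;   /* QOS may override partition limits */
        if (enforce_part_limits)
            return JOB_REJECT;   /* EnforcePartLimits=Yes: reject at submit */
        return JOB_PEND;         /* otherwise let the job pend */
    }

    int main(void)
    {
        /* Outside the limits, no override, EnforcePartLimits unset:
         * the job pends instead of being killed. */
        printf("%d\n", part_limit_action(false, false, false));  /* 1 */
        return 0;
    }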
-
Danny Auble authored
-
- May 22, 2014
-
-
wickberg authored
-
- May 21, 2014
-
-
Morris Jette authored
This reverts commit 859839a7. The ntasks_per_core option was previously treated as the number of CPUs (rather than tasks) to allocate per core, which seems to be what is desired.
-
David Bigagli authored
-
Danny Auble authored
-
Danny Auble authored
wait for.
-
Danny Auble authored
based on the mask given.
-
Danny Auble authored
task/affinity.
-
Danny Auble authored
thread in a core.
-
Danny Auble authored
it can bind cyclically across sockets.
-
- May 20, 2014
-
-
Morris Jette authored
Previous logic assumed cpus_per_task=1, so the ntasks_per_core option could spread the job across more cores than desired.
-
Morris Jette authored
cpus-per-task support: try to pack all CPUs of each task onto one socket. Previous logic could spread a task's CPUs across multiple sockets.
-
Morris Jette authored
Previous logic counted CPUs but assumed each task would use only one CPU.
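The arithmetic behind these three fixes, as a small worked example (all values illustrative): core demand must scale with cpus_per_task, not with the task count alone.

    #include <stdio.h>

    int main(void)
    {
        int ntasks        = 4;
        int cpus_per_task = 2;
        int cpus_per_core = 2;   /* e.g. two hardware threads per core */

        /* Old assumption: each task uses exactly one CPU. */
        int cores_old = (ntasks + cpus_per_core - 1) / cpus_per_core;

        /* Fixed: count every CPU the tasks will actually consume. */
        int total_cpus = ntasks * cpus_per_task;
        int cores_new  = (total_cpus + cpus_per_core - 1) / cpus_per_core;

        printf("old=%d cores, new=%d cores\n", cores_old, cores_new);
        return 0;
    }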
-
Dan Weeks authored
-
Danny Auble authored
This reverts commit b22268d8.
-
Danny Auble authored
-
Morris Jette authored
-
- May 19, 2014
-
-
Morris Jette authored
-
Nathan Yee authored
-
Morris Jette authored
-
Morris Jette authored
Conflicts: src/slurmctld/job_mgr.c
-
Morris Jette authored
Properly enforce the job --requeue and --norequeue options. Previous logic in three places was not doing so (either ignoring the value, ANDing it with the JobRequeue configuration option, or using the JobRequeue configuration option by itself). bug 821
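A minimal sketch of that precedence rule (the tri-state encoding is an assumption for illustration, not the actual slurmctld representation):

    #include <stdbool.h>
    #include <stdio.h>

    /* requeue_opt: -1 = no option given, 0 = --norequeue, 1 = --requeue */
    static bool job_may_requeue(int requeue_opt, bool conf_job_requeue)
    {
        if (requeue_opt >= 0)
            return requeue_opt != 0;  /* explicit job option always wins */
        return conf_job_requeue;      /* otherwise use the JobRequeue default */
    }

    int main(void)
    {
        /* --norequeue must win even when JobRequeue=1 in slurm.conf. */
        printf("%d\n", job_may_requeue(0, true));  /* prints 0 */
        return 0;
    }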
-
Morris Jette authored
-
Morris Jette authored
There should be no change in behavior with the production code, but this will improve the robustness of the code if someone makes changes to the logic.
-
- May 15, 2014
-
-
Morris Jette authored
Add SelectTypeParameters option of CR_PACK_NODES to pack a job's tasks tightly on its allocated nodes rather than distributing them evenly across the allocated nodes. bug 819
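The difference between the two distributions, as a toy computation (node and CPU counts are illustrative): CR_PACK_NODES fills each node before starting the next, while the default balances tasks across the allocation.

    #include <stdio.h>

    int main(void)
    {
        int ntasks = 10, nodes = 4, cpus_per_node = 4;

        printf("packed:");
        for (int n = 0, left = ntasks; n < nodes; n++) {
            int t = left < cpus_per_node ? left : cpus_per_node;
            printf(" %d", t);                        /* 4 4 2 0 */
            left -= t;
        }

        printf("\neven:  ");
        for (int n = 0; n < nodes; n++)              /* 3 3 2 2 */
            printf(" %d", ntasks / nodes + (n < ntasks % nodes));
        printf("\n");
        return 0;
    }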
-
Danny Auble authored
something you also get a signal, which would produce a deadlock. Fix Bug 601.
-
- May 14, 2014
-
-
Morris Jette authored
-
Morris Jette authored
Run EpilogSlurmctld for a job that is killed during slurmctld reconfiguration. bug 806
-