- May 19, 2014
Morris Jette authored
Morris Jette authored
There should be no change in behavior with the production code, but this will improve the robustness of the code if someone makes changes to the logic.
- May 17, 2014
Morris Jette authored
Always set to default values if there is no user input. This is needed to clear any vestigial values left by abnormal Slurm termination.
- May 16, 2014
Morris Jette authored
Fix use of an uninitialized variable when the srun --cpu-freq option is used to set the CPU governor, which resulted in an invalid memory reference. Also some minor cosmetic changes.
Morris Jette authored
Add srun --cpu-freq options to set the CPU governor (OnDemand, Performance, Conservative, PowerSave or UserSpace). task/affinity: support setting cpu_freq without cpuset (using hwloc and sched functions). Fix the calculation used to set --cpu-freq=highm1 (it relied upon the ordering of possible CPU frequencies).
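For context, on Linux the governor named by --cpu-freq is typically applied by writing to cpufreq's sysfs interface. A minimal sketch, assuming the standard sysfs path and sufficient permissions; the function names here are illustrative, not Slurm's:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the standard Linux sysfs path for a CPU's scaling governor. */
static void governor_path(int cpu, char *buf, size_t len)
{
	snprintf(buf, len,
		 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor",
		 cpu);
}

/* Illustrative sketch, not Slurm source: set a CPU's governor by writing
 * its name to sysfs. Typically requires root; fails cleanly where the
 * cpufreq interface is unsupported. */
static int set_cpu_governor(int cpu, const char *governor)
{
	char path[128];
	FILE *fp;

	governor_path(cpu, path, sizeof(path));
	fp = fopen(path, "w");
	if (!fp)
		return -1;	/* no permission or no cpufreq support */
	fprintf(fp, "%s\n", governor);
	return fclose(fp) ? -1 : 0;
}
```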
Morris Jette authored
The suspend/resume is performed on a per-job basis, but the cpu_freq is set on a per-job-step basis. This is a partial reversion of commit 5e40f627.
- May 15, 2014
Morris Jette authored
Morris Jette authored
Add SelectTypeParameters option of CR_PACK_NODES to pack a job's tasks tightly on its allocated nodes rather than distributing them evenly across the allocated nodes. bug 819
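The difference between packed and evenly distributed placement can be sketched as follows. This is an illustrative helper, not Slurm's select plugin code, and it assumes the nodes' CPUs are sufficient for the task count:

```c
#include <assert.h>

/* Lay out ntasks across nnodes given per-node CPU counts. With "pack"
 * set, each node is filled to its CPU limit before moving on
 * (CR_PACK_NODES-style); otherwise tasks are spread round-robin so the
 * nodes stay balanced. Assumes total CPUs >= ntasks. */
static void distribute_tasks(int ntasks, const int *cpus, int nnodes,
			     int pack, int *tasks_per_node)
{
	for (int i = 0; i < nnodes; i++)
		tasks_per_node[i] = 0;

	if (pack) {
		for (int i = 0; i < nnodes && ntasks > 0; i++) {
			int t = ntasks < cpus[i] ? ntasks : cpus[i];
			tasks_per_node[i] = t;
			ntasks -= t;
		}
	} else {
		for (int i = 0; ntasks > 0; i = (i + 1) % nnodes) {
			if (tasks_per_node[i] < cpus[i]) {
				tasks_per_node[i]++;
				ntasks--;
			}
		}
	}
}
```

With 4 tasks on two 4-CPU nodes, packing yields all 4 tasks on the first node, while the default even distribution yields 2 tasks on each.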
Morris Jette authored
Job allocations with core specialization are allocated all available CPUs on the selected nodes rather than only the number required to satisfy the allocation request (e.g. "srun --core-spec=1 -n1 date" will not be allocated 1 CPU, but will get the entire node except for one specialized core).
Danny Auble authored
something you also get a signal which would produce deadlock. Fix Bug 601.
Morris Jette authored
If a job step requests specific cores outside of its allocation with the --core-spec option, then bind the tasks only to those CPUs available to the job.
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
task/cgroup - Correct specialized core task binding when the user supplies an invalid CPU mask or map. Rather than generating an error and ignoring the user's specification, mask the user-supplied map against the available CPUs, or bind to all available CPUs. In either case, log the invalid CPU map or mask. bug 782
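The masking behavior can be sketched with plain bitmasks. This is a hypothetical helper; Slurm's actual code uses its own bitstring types:

```c
#include <assert.h>

/* Constrain a user-supplied CPU mask to the CPUs actually available to
 * the job. If nothing survives the intersection, fall back to all
 * available CPUs; the caller logs the invalid mask in either case. */
static unsigned long constrain_cpu_mask(unsigned long user_mask,
					unsigned long avail_mask)
{
	unsigned long effective = user_mask & avail_mask;
	return effective ? effective : avail_mask;
}
```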
Morris Jette authored
- May 14, 2014
Morris Jette authored
Morris Jette authored
Conflicts: src/slurmctld/job_scheduler.c
Morris Jette authored
Run EpilogSlurmctld when a job is killed during slurmctld reconfiguration. bug 806
Morris Jette authored
Morris Jette authored
A job will be hidden by default only if ALL of its partitions are hidden. bug 812
- May 13, 2014
Morris Jette authored
If a batch job launch request cannot be built (the script file is missing, a credential cannot be created, or the user does not exist on the selected compute node), then cancel the job in a graceful fashion. Previously, the bad RPC would be sent to the compute node and that node would be DRAINED. see bug 807
Morris Jette authored
Morris Jette authored
Correct SelectTypeParameters=CR_LLN with job selection of specific nodes. Previous logic would in most instances allocate resources on all nodes to the job.
Morris Jette authored
Correct squeue's job node and CPU counts for requeued jobs. Previously, when a job was requeued, its CPU count reported was that of the previous execution. When combined with the --ntasks-per-node option, squeue would compute the expected node count. If the --exclusive option is also used, the node count reported by squeue could be off by a large margin (e.g. "sbatch --exclusive --ntasks-per-node=1 -N1 .." on requeue would use the number of CPUs on the allocated node to recompute the expected node count). bug 756
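The estimate described above boils down to a ceiling division, which shows how a stale CPU count inflates the node count. The function name here is hypothetical, not squeue's actual code:

```c
#include <assert.h>

/* squeue-style node count estimate: ceiling division of the job's CPU
 * count by --ntasks-per-node. With a stale CPU count left over from an
 * exclusive allocation (e.g. 16 CPUs recorded for a 1-task job), the
 * estimate balloons from 1 node to 16. */
static int expected_nodes(int cpu_cnt, int ntasks_per_node)
{
	return (cpu_cnt + ntasks_per_node - 1) / ntasks_per_node;
}
```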
David Gloe authored
req.c: In function ‘_launch_complete_rm’:
req.c:5372: error: array subscript is above array bounds
req.c: In function ‘_launch_complete_add’:
req.c:5328: error: array subscript is above array bounds
The offending lines are `if (job_id != active_job_id[j]) {` after the for loops in those functions. If no match is found in the loop, j will be JOB_STATE_CNT and overflow the array by one.
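The overflow pattern described above can be sketched in miniature; the array size and function name here are illustrative, not the actual req.c source:

```c
#include <assert.h>

#define JOB_STATE_CNT 16	/* illustrative size */
static int active_job_id[JOB_STATE_CNT];

/* The reported bug in miniature: if the search loop falls through
 * without a match, the index equals JOB_STATE_CNT, and indexing
 * active_job_id with it reads one element past the end of the array.
 * Returning a sentinel instead keeps every access in bounds; callers
 * must check for -1 before indexing. */
static int find_active_slot(int job_id)
{
	for (int j = 0; j < JOB_STATE_CNT; j++) {
		if (active_job_id[j] == job_id)
			return j;
	}
	return -1;
}
```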
Morris Jette authored
Danny Auble authored
jobacct_gather/cgroup.
Morris Jette authored
Support a SLURM_CONF path which does not have "slurm.conf" as the file name. bug 803
Morris Jette authored
Morris Jette authored
Morris Jette authored
For a nested batch job (within an salloc, run "sbatch --jobid=$SLURM_JOBID ..."), report the completing node rank as 0 rather than -1.
Morris Jette authored
- May 12, 2014
Morris Jette authored
Morris Jette authored
Morris Jette authored
Morris Jette authored
If a job has a non-responding node, retry job step creation rather than returning with a DOWN node error. bug 734
Morris Jette authored
Morris Jette authored