- May 23, 2014

Morris Jette authored
No changes to logic

Morris Jette authored

Morris Jette authored

Morris Jette authored

Morris Jette authored

Morris Jette authored

- May 22, 2014

Morris Jette authored
Fix the limit at which slow suspend/resumes are logged

Morris Jette authored
The array of active job suspend IDs was using -1 to indicate empty slots, but the array was of type uint32_t. Cray may have noted a problem with this.

Morris Jette authored

Morris Jette authored

Morris Jette authored
Increase the number of simultaneous job suspends or resumes that slurmd can handle from 8 to 64. See bug 826.

Morris Jette authored
Increase MAX_THREAD from 130 to 256

Morris Jette authored
If specialized cores are allocated for a job and that job requests CPU binding using a cpu_map, the cpu_map was being interpreted as a cpu_mask, and a cpu_map of zero was being treated as invalid. This is now fixed.

Morris Jette authored

Morris Jette authored

wickberg authored

Morris Jette authored

- May 21, 2014

Morris Jette authored
This reverts commit 859839a7. The ntasks_per_core option was previously treated as the number of CPUs (rather than tasks) to allocate per core, which seems to be the desired behavior.

Morris Jette authored
This needs a lot of work, but checking in the framework for now.

David Bigagli authored

Danny Auble authored

Danny Auble authored

Danny Auble authored
wait for.

Danny Auble authored
based on the mask given.

Danny Auble authored
task/affinity.

Danny Auble authored
thread in a core.

Danny Auble authored
it can bind cyclically across sockets.

Danny Auble authored
0c090f95

Morris Jette authored
Add a PriorityFlags option of CALCULATE_RUNNING. If set, the priority of running jobs will continue to be recalculated periodically. The PriorityFlags value reported by sview and "scontrol show config" will be reported as a string rather than its numeric value.
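Based on the commit description, enabling this behavior would be a one-line slurm.conf setting. This is a sketch assuming the flag name from the message above; combining it with other PriorityFlags values would follow the usual comma-separated convention:

```
# slurm.conf fragment (sketch): keep recalculating the priority of
# jobs even after they have started running
PriorityFlags=CALCULATE_RUNNING
```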

- May 20, 2014

Morris Jette authored

Morris Jette authored

Morris Jette authored
Previous logic assumed cpus_per_task=1, so the ntasks_per_core option could spread the job across more cores than desired.

Morris Jette authored
cpus-per-task support: Try to pack all CPUs of each task onto one socket. Previous logic could spread a task's CPUs across multiple sockets.

Morris Jette authored
Previous logic was counting CPUs, but assuming each task would only use one CPU.

Dan Weeks authored

Danny Auble authored
This reverts commit b22268d8.

Danny Auble authored

Morris Jette authored

- May 19, 2014

Morris Jette authored

Morris Jette authored