- Jul 09, 2015
-
-
Morris Jette authored
The slurmctld logic throttles some RPCs so that only one of them can execute at a time, reducing contention for the job, partition and node locks (only one of the affected RPCs can execute at any time anyway, and this lets other RPC types run). While an RPC is stuck in the throttle function, do not count that thread against the slurmctld thread limit. bug 1794
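A minimal sketch of the throttling idea in C (pthread-based; the names rpc_running, active_threads and the helper functions are illustrative, not the actual slurmctld symbols):

    #include <pthread.h>

    /* Illustrative bookkeeping; the real slurmctld code differs. */
    static pthread_mutex_t throttle_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  throttle_cond  = PTHREAD_COND_INITIALIZER;
    static int rpc_running    = 0;  /* one throttled RPC runs at a time */
    static int active_threads = 0;  /* threads counted against the limit */

    static void throttle_enter(void)
    {
        pthread_mutex_lock(&throttle_mutex);
        active_threads--;           /* a waiting thread no longer counts */
        while (rpc_running)
            pthread_cond_wait(&throttle_cond, &throttle_mutex);
        rpc_running = 1;
        active_threads++;           /* count it again once it can run */
        pthread_mutex_unlock(&throttle_mutex);
    }

    static void throttle_exit(void)
    {
        pthread_mutex_lock(&throttle_mutex);
        rpc_running = 0;
        pthread_cond_signal(&throttle_cond);
        pthread_mutex_unlock(&throttle_mutex);
    }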
-
- Jul 08, 2015
-
-
Morris Jette authored
-
- Jul 07, 2015
-
-
Trey Dockendorf authored
This patch moves the QOS update of an existing job to be before the partition update. This ensures the new QOS value is the one used when validating against things like a partition's AllowQOS and DenyQOS. Currently, if two partitions have AllowQOS values that do not share any QOS, the order of updates prevents a job from being moved from one partition to another using something like the following: scontrol update job=<jobID> partition=<new part> qos=<new qos>
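A minimal sketch of the ordering in C (the struct and the qos_allowed_in_part() helper are hypothetical, for illustration only):

    #include <stdbool.h>

    struct job { int qos; int part; };           /* hypothetical record */
    extern bool qos_allowed_in_part(int qos, int part_id);

    /* Apply the QOS change first so the partition validation below
     * sees the new QOS, not the old one. */
    static int update_job(struct job *job, int new_qos, int new_part)
    {
        if (new_qos)
            job->qos = new_qos;                  /* step 1: QOS */
        if (new_part) {
            if (!qos_allowed_in_part(job->qos, new_part))
                return -1;                       /* AllowQOS/DenyQOS check */
            job->part = new_part;                /* step 2: partition */
        }
        return 0;
    }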
-
David Bigagli authored
-
Morris Jette authored
Correct task layout with the CR_Pack_Node option and more than 1 CPU per task. The previous logic would place one task per CPU and launch too few tasks. bug 1781
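A minimal sketch of the corrected packing in C (variable names are illustrative): divide each node's CPU count by cpus_per_task instead of assuming one task per CPU.

    /* Pack tasks onto nodes front to back, giving each task
     * cpus_per_task CPUs rather than one task per CPU. */
    static void pack_tasks(const int *cpus_on_node, int node_cnt,
                           int total_tasks, int cpus_per_task,
                           int *tasks_on_node)
    {
        int remaining = total_tasks;
        for (int n = 0; n < node_cnt && remaining > 0; n++) {
            int fit = cpus_on_node[n] / cpus_per_task;
            if (fit > remaining)
                fit = remaining;
            tasks_on_node[n] = fit;
            remaining -= fit;
        }
    }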
-
- Jul 06, 2015
-
-
Morris Jette authored
Backfill scheduler now considers the OverTimeLimit and KillWait configuration parameters to estimate when running jobs will exit. Initially the job's end time is estimated based upon its time limit. After the time limit is reached, the end time estimate is based upon the OverTimeLimit and KillWait configuration parameters. bug 1774
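A minimal sketch of the estimate in C (parameter names are illustrative):

    #include <time.h>

    /* Until the time limit passes, estimate end = start + limit;
     * afterward, assume OverTimeLimit plus KillWait more seconds. */
    static time_t est_end_time(time_t start, time_t limit,
                               time_t over_time_limit, time_t kill_wait)
    {
        time_t end = start + limit;
        if (time(NULL) >= end)
            end = start + limit + over_time_limit + kill_wait;
        return end;
    }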
-
Morris Jette authored
Backfill scheduler: The configured backfill_interval value (default 30 seconds) is now interpreted as a maximum run time for the backfill scheduler. Once reached, the scheduler will build a new job queue and start over, even if not all jobs have been tested. bug 1774
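A minimal sketch of the loop in C (next_job() and try_backfill() are hypothetical stand-ins for the scheduler internals):

    #include <time.h>

    extern int  next_job(void);           /* hypothetical queue iterator */
    extern void try_backfill(int job_id); /* hypothetical scheduling step */

    static void backfill_pass(int backfill_interval)
    {
        time_t start = time(NULL);
        int job_id;
        while ((job_id = next_job()) >= 0) {
            if (time(NULL) - start >= backfill_interval)
                break;  /* max run time reached; rebuild queue next pass */
            try_backfill(job_id);
        }
    }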
-
- Jun 30, 2015
-
-
Thomas Cadeau authored
Bug 1745
-
Brian Christiansen authored
This reverts commit 3f91f4b2.
-
- Jun 29, 2015
-
-
Nathan Yee authored
Bug 1745
-
- Jun 25, 2015
-
-
Morris Jette authored
-
- Jun 24, 2015
-
-
David Bigagli authored
-
- Jun 23, 2015
-
-
David Bigagli authored
-
- Jun 22, 2015
-
-
Morris Jette authored
Updates of existing bluegene advanced reservations did not work at all. Some multi-core configurations resulted in an abort due to creating core_bitmaps for the reservation that had only one bit per node rather than one bit per core. These bugs were introduced in commit 5f258072
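A minimal sketch of the sizing point in C (illustrative only): the reservation's core_bitmap needs one bit for every core, so its size is the sum of core counts over the selected nodes, not the node count.

    /* One bit per core, summed across nodes (illustrative). */
    static int core_bitmap_bits(const int *cores_per_node, int node_cnt)
    {
        int bits = 0;
        for (int n = 0; n < node_cnt; n++)
            bits += cores_per_node[n];
        return bits;
    }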
-
David Bigagli authored
-
David Bigagli authored
-
- Jun 19, 2015
-
-
David Bigagli authored
-
- Jun 15, 2015
-
-
Morris Jette authored
Logic was assuming the reservation had a node bitmap, which was being used to check for overlapping jobs. If there is no node bitmap (e.g. a licenses-only reservation), an abort would result.
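A minimal sketch of the guard in C (the struct and the bitmaps_overlap() helper are hypothetical):

    #include <stdbool.h>

    struct resv { void *node_bitmap; };   /* NULL for licenses-only */
    extern bool bitmaps_overlap(void *a, void *b);

    static bool resv_overlaps_job(struct resv *resv, void *job_nodes)
    {
        if (!resv->node_bitmap)    /* no nodes, so nothing can overlap */
            return false;
        return bitmaps_overlap(resv->node_bitmap, job_nodes);
    }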
-
- Jun 12, 2015
-
-
Brian Christiansen authored
Bug 1739
-
Brian Christiansen authored
Bug 1743
-
- Jun 11, 2015
-
-
Brian Christiansen authored
Bug 1733
-
- Jun 10, 2015
-
-
Morris Jette authored
-
- Jun 09, 2015
-
-
David Bigagli authored
-
Morris Jette authored
1. I submit a first job that uses 1 GPU:
   $ srun --gres gpu:1 --pty bash
   $ echo $CUDA_VISIBLE_DEVICES
   0
2. While the first one is still running, a 2-GPU job asking for 1 task per node waits (and I don't really understand why):
   $ srun --ntasks-per-node=1 --gres=gpu:2 --pty bash
   srun: job 2390816 queued and waiting for resources
3. Whereas a 2-GPU job requesting 1 core per socket (so just 1 socket) actually gets GPUs allocated from two different sockets!
   $ srun -n 1 --cores-per-socket=1 --gres=gpu:2 -p testk --pty bash
   $ echo $CUDA_VISIBLE_DEVICES
   1,2
With this change #2 works the same way as #3. bug 1725
-
- Jun 05, 2015
-
-
Danny Auble authored
Only going to do this in the master as it may affect scripts. This reverts commit 454f78e6. Conflicts: NEWS
-
- Jun 04, 2015
-
-
David Bigagli authored
-
David Bigagli authored
-
- Jun 03, 2015
-
-
Morris Jette authored
switch/cray: Refine logic to set the PMI_CRAY_NO_SMP_ENV environment variable. Rather than testing for the task distribution option, test the actual task IDs to see if they are monotonically increasing across all nodes. Based upon an idea from Brian Gilmer (Cray).
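A minimal sketch of the test in C (illustrative; when this returns false the plugin would set PMI_CRAY_NO_SMP_ENV):

    #include <stdbool.h>

    /* True if task IDs strictly increase across the whole node list,
     * taken in node order (illustrative flattened array). */
    static bool tasks_monotonic(const int *task_ids, int task_cnt)
    {
        for (int i = 1; i < task_cnt; i++) {
            if (task_ids[i] <= task_ids[i - 1])
                return false;
        }
        return true;
    }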
-
- Jun 02, 2015
-
-
Danny Auble authored
-
Danny Auble authored
afterward cause a divide by zero error.
-
Danny Auble authored
corruption if thread uses the pointer basing validity off the id. Bug 1710
-
- Jun 01, 2015
-
-
David Bigagli authored
-
- May 30, 2015
-
-
Danny Auble authored
-
- May 29, 2015
-
-
Brian Christiansen authored
Bug 1495
-
Morris Jette authored
Correct the count of CPUs allocated to a job on a system with hyperthreads. The bug was introduced in commit a6d3074d. On a system with hyperthreads:
$ srun -n1 --ntasks-per-core=1 hostname
you would get:
slurmctld: error: job_update_cpu_cnt: cpu_cnt underflow on job_id 67072
-
Morris Jette authored
preempt/job_prio plugin: Implement the concept of warm-up time. Use the QoS GraceTime as the amount of time to wait before preempting: skip preemption until the preemptee's grace time has elapsed.
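A minimal sketch of the check in C (field names are illustrative):

    #include <stdbool.h>
    #include <time.h>

    struct job_rec { time_t start_time; time_t grace_time; };

    /* Preempt only after the job has run for at least the QoS
     * GraceTime, i.e. after its warm-up time has elapsed. */
    static bool preemptable(const struct job_rec *job)
    {
        return (time(NULL) - job->start_time) >= job->grace_time;
    }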
-
Morris Jette authored
-
Danny Auble authored
a job runs past its time limit.
-
- May 28, 2015
-
-
Brian Christiansen authored
Bug 1705
-
- May 27, 2015
-
-
Morris Jette authored
However, --mem=0 now reflects the full amount of memory on the node; --mem-per-cpu=0 hasn't changed. This allows all of the memory to be allocated in a cgroup without being "consumed", leaving it available for other jobs running on the same host. Eric Martin, Washington University School of Medicine
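A minimal sketch of the interpretation in C (illustrative helper; the real slurmctld logic differs in detail):

    /* A request of 0 MB means "all of the node's memory". */
    static unsigned long resolve_mem_mb(unsigned long requested_mb,
                                        unsigned long node_real_mem_mb)
    {
        return requested_mb ? requested_mb : node_real_mem_mb;
    }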
-