- Feb 10, 2015
-
Brian Christiansen authored
UIDs are 0 when associations are loaded.
-
Morris Jette authored
The backfill scheduler builds a queue of eligible job/partition information and then proceeds to determine when and where those jobs will start. The backfill scheduler can be configured to periodically release locks in order to let other operations take place. If the partition(s) associated with one of those jobs change during one of those periods, the job would still be considered for scheduling in the old partition until the backfill scheduler started over with a new job/partition list. This change makes the backfill scheduler validate each job's partition from the list against current information (accounting for any partition changes). See Bug 1436
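A minimal sketch of the revalidation idea, assuming hypothetical helper names (_match_part and _job_part_still_valid are illustrative, not the actual patch; job_ptr->part_ptr, part_ptr_list and list_find_first() follow slurmctld conventions):

    /* Before starting each queued job, confirm the partition captured
     * when the backfill queue was built is still one of the job's
     * current partitions. */
    static int _match_part(void *x, void *key)
    {
        return (x == key);  /* compare partition record pointers */
    }

    static bool _job_part_still_valid(struct job_record *job_ptr,
                                      struct part_record *part_ptr)
    {
        if (job_ptr->part_ptr == part_ptr)
            return true;    /* single-partition job, unchanged */
        if (job_ptr->part_ptr_list &&
            list_find_first(job_ptr->part_ptr_list, _match_part, part_ptr))
            return true;    /* still on the job's partition list */
        return false;       /* partition changed; skip this queue entry */
    }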
-
Morris Jette authored
If the bitmap size is initially NO_VAL (0xfffffffe), then a tiny buffer is allocated and accessing it can run off the end of the buffer. This has not been observed in production, only in the investigation of another problem.
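A hedged sketch of the guard (the wrapper function is hypothetical; NO_VAL and bit_alloc() are the existing Slurm symbols):

    /* Treat NO_VAL (0xfffffffe) as "size not set" rather than passing the
     * raw value to bit_alloc(), where internal size arithmetic can wrap
     * and return an undersized buffer. */
    static bitstr_t *_alloc_bitmap(uint32_t size)
    {
        if ((size == NO_VAL) || (size == 0))
            return NULL;    /* caller must handle an unset size */
        return bit_alloc(size);
    }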
-
Brian Christiansen authored
Fix segfault in the controller when deleting a user association of a user who had previously been removed from the system. Bug 1238
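One plausible way to hit the crash, as a hedged sketch (user and account names are placeholders):

    # Remove the user from the system ...
    sacctmgr delete user name=alice
    # ... then delete one of that user's remaining associations; before
    # this fix the second step could segfault slurmctld.
    sacctmgr delete user name=alice account=proj1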
-
- Feb 09, 2015
-
Morris Jette authored
Fix a slurmctld initialization problem that could cause requeue of the last task in a job array to fail if the requeue was executed before slurmctld had loaded the maximum job array size into a variable in the job_mgr.c module.
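For context, an individual array task is requeued with the jobid_taskid form; the IDs here are illustrative:

    # Requeue the last task of job array 1234; before this fix the request
    # could fail if issued shortly after slurmctld startup.
    scontrol requeue 1234_9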
-
Morris Jette authored
Fix slurmctld job recovery logic which could cause the last task in a job array to be lost on restart.
-
Morris Jette authored
-
Brian Christiansen authored
Only supported with the task/affinity plugin.
-
Pär Lindfors authored
When CgroupMountpoint was not defined in cgroup.conf, the mount point was left undefined. This resulted in cgroups not being released.
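Pinning the mount point explicitly in cgroup.conf avoids relying on the default; the path shown is a common choice, not a requirement:

    # cgroup.conf
    CgroupMountpoint=/sys/fs/cgroup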
-
Nicolas Joly authored
-
- Feb 08, 2015
-
Danny Auble authored
-
- Feb 05, 2015
-
David Bigagli authored
event REQUEUED to slurmdbd.
-
Morris Jette authored
Related to bug 1429
-
Brian Christiansen authored
Improve "Prolog and Epilog Scripts" in slurm.conf(5)
-
Pär Lindfors authored
-
Pär Lindfors authored
The environment variable name SLURM_JOB_CLUSTER_NAME should be SLURM_CLUSTER_NAME. The variable is also available in Prolog and Epilog, so remove the note claiming it is only available in PrologSlurmctld and EpilogSlurmctld.
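A minimal Prolog sketch using the corrected variable name (the log path is illustrative):

    #!/bin/sh
    # SLURM_CLUSTER_NAME is available in Prolog/Epilog as well as in
    # PrologSlurmctld/EpilogSlurmctld.
    echo "job $SLURM_JOB_ID starting on cluster $SLURM_CLUSTER_NAME" \
        >> /var/log/slurm/prolog.log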
-
- Feb 04, 2015
-
Morris Jette authored
-
Morris Jette authored
Previously it was not possible to distinguish between a job needing exclusive nodes and the default job/partition configuration.
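Exclusivity can be requested by the job or imposed by the partition; both standard forms are shown below (values illustrative):

    # Job explicitly requests whole nodes:
    sbatch --exclusive job.sh
    # Partition default in slurm.conf:
    PartitionName=batch Nodes=n[001-100] Shared=EXCLUSIVE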
-
Morris Jette authored
Fix job array logic that can cause slurmctld to abort. Bug 1426
-
Morris Jette authored
Enable use of CUDA v7.0+ with a Slurm configuration of TaskPlugin=task/cgroup and ConstrainDevices=yes (in cgroup.conf). With that configuration, CUDA_VISIBLE_DEVICES numbering will start at 0 rather than at the device number. Bug 1421
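The configuration named in the commit, shown as a hedged example:

    # slurm.conf
    TaskPlugin=task/cgroup

    # cgroup.conf
    ConstrainDevices=yes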
-
- Feb 03, 2015
-
Morris Jette authored
Move the functions that read cgroup.conf from src/slurmd/common to src/common so that the gres/gpu plugin can use them.
-
Brian Christiansen authored
-
Brian Christiansen authored
-
David Bigagli authored
-
David Bigagli authored
debug2 instead of info.
-
David Bigagli authored
SLURM_JOB_PARTITION to be the one in which the job started.
-
Morris Jette authored
-
Brian Christiansen authored
-
Brian Christiansen authored
Bug 1407. Continuation of commit da8409c0.
-
Morris Jette authored
-
Morris Jette authored
If using proctrack/cgroup and gres/gpu, always start CUDA_VISIBLE_DEVICES environment variable numbering at 0. Bug 1421
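For example, with a gres.conf such as the following (illustrative), a job constrained to /dev/nvidia1 alone would now see CUDA_VISIBLE_DEVICES=0 rather than 1:

    # gres.conf
    Name=gpu File=/dev/nvidia0
    Name=gpu File=/dev/nvidia1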
-
Danny Auble authored
reservations.
-
Danny Auble authored
-
Danny Auble authored
This is an add-on to commit 2e5142ef. Servicing Bug 1418.
-
David Bigagli authored
the partition in which the job runs.
-
- Feb 02, 2015
-
David Bigagli authored
-
David Bigagli authored
This reverts commit c18a3c72.
-
David Bigagli authored
-
Danny Auble authored
Front End systems.
-
David Bigagli authored
-