- Apr 30, 2019
-
-
Tim Wickberg authored
Avoids dereferencing slurmctld_primary, which is a weak symbol and is not available when the select plugins are loaded in the user commands. (Weak symbols on macOS cannot also have local definitions, so any reference to them that does not resolve causes the process to crash.)
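A minimal sketch of the guard this implies, assuming a weak_import declaration in C; the helper name and the symbol's type are illustrative, not taken from the tree:

#include <stdbool.h>
#include <stddef.h>

/* Declaration only; on macOS this uses weak_import. Type assumed. */
extern bool slurmctld_primary __attribute__((weak_import));

static bool _in_primary_slurmctld(void)
{
	/* An unresolved weak symbol has address NULL, so test the
	 * address before dereferencing; otherwise the user commands,
	 * which never define the symbol, would crash here. */
	if (&slurmctld_primary == NULL)
		return false;
	return slurmctld_primary;
}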
-
Tim Wickberg authored
-
Tim Wickberg authored
-
Tim Wickberg authored
-
Tim Wickberg authored
-
Tim Wickberg authored
-
Tim Wickberg authored
macOS always has these available without setting -pthread, and ACX_PTHREAD does not work properly on macOS at the moment.
-
Tim Wickberg authored
These should ideally be split off from the task plugin interface since they're tied to specific implementations. And the core_spec code itself does not really belong directly in slurmd.c.
-
Tim Wickberg authored
There is no equivalent to cpuset_t.
-
Tim Wickberg authored
-
Tim Wickberg authored
Remove initializations - these are not permitted on weak_import'd symbols.
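For illustration only (the symbol name is borrowed from the neighboring commits; the actual declarations differ), the distinction is roughly:

#include <stdbool.h>

/* Not permitted: an initializer turns this into a local definition,
 * which a weak_import declaration cannot carry. */
/* extern bool slurmctld_primary __attribute__((weak_import)) = true; */

/* Permitted: declaration only; resolves to NULL if no definition is
 * present at load time. */
extern bool slurmctld_primary __attribute__((weak_import));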
-
Tim Wickberg authored
This was cobbled together from a few different references. All it's doing is forcing a symbol to be created with the slurm_* name, and setting the symbol address to that of the unaliased symbol.
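Roughly, the trick looks like this (a sketch assuming Mach-O symbol naming with a leading underscore; the macro and function names below are made up for the example):

/* Emit assembler directives that create a second global symbol,
 * slurm_<name>, whose address is set to that of the unaliased symbol
 * <name>. Mach-O lacks __attribute__((alias)), so the .globl/.set
 * pair does the aliasing instead. */
#define slurm_strong_alias(name, aliasname)	\
	__asm__(".globl _" #aliasname);		\
	__asm__(".set _" #aliasname ", _" #name)

int foo(int x) { return x + 1; }
slurm_strong_alias(foo, slurm_foo);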
-
Tim Wickberg authored
-
Tim Wickberg authored
-
Doug Jacobsen authored
Bug 3745.
-
- Apr 29, 2019
-
-
Tim Wickberg authored
Using an index value equal to the number of elements puts you one past the end of the array; modify the conditional to >= instead. CID 197759.
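The shape of the fix, in the abstract (array and function names are invented for the example):

#define NUM_ELEMENTS 4
static int table[NUM_ELEMENTS];

static int lookup(unsigned int idx)
{
	/* idx == NUM_ELEMENTS is already one past the last valid
	 * element, so the bound must be rejected with >=, not >. */
	if (idx >= NUM_ELEMENTS)
		return -1;
	return table[idx];
}

int main(void)
{
	/* An index equal to the element count is now rejected. */
	return (lookup(NUM_ELEMENTS) == -1) ? 0 : 1;
}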
-
Tim Wickberg authored
Anyone jumping to the cleanup label does not have agent_info_ptr set. CID 197735.
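A generic sketch of the pattern being fixed (names and types are hypothetical; the real code tears down an agent structure rather than a plain allocation):

#include <stdlib.h>

struct agent_info { int dummy; };	/* stand-in for the real type */

static int _precondition_failed(void) { return 1; }

static int _do_work(void)
{
	int rc = -1;
	/* Initialized to NULL so early jumps to cleanup are safe. */
	struct agent_info *agent_info_ptr = NULL;

	if (_precondition_failed())
		goto cleanup;	/* agent_info_ptr never set on this path */

	agent_info_ptr = calloc(1, sizeof(*agent_info_ptr));
	rc = 0;

cleanup:
	/* Only touch the pointer if it was actually set up. */
	if (agent_info_ptr)
		free(agent_info_ptr);
	return rc;
}

int main(void)
{
	(void) _do_work();
	return 0;
}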
-
Nate Rini authored
Bug 6513
-
Nate Rini authored
Bug 6513
-
Nate Rini authored
Bug 6513
-
Nate Rini authored
Bug 6513
-
Tim Wickberg authored
CID 197758.
-
Tim Wickberg authored
I believe job cannot be NULL here, but the flow through this function is messy, and relies on some external setup that has not been documented with xassert()s. Put this in for now as a stop-gap; this code should be refactored at a later point.
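The stop-gap amounts to something like the following, where plain assert() stands in for Slurm's xassert() and the job type is reduced to a placeholder:

#include <assert.h>
#include <stddef.h>

struct job_record { int job_id; };	/* placeholder type */

static void _handle_job(struct job_record *job)
{
	/* Documents the undocumented assumption: fail loudly in
	 * development builds if job ever turns out to be NULL. */
	assert(job != NULL);
	(void) job->job_id;
}

int main(void)
{
	struct job_record job = { 42 };
	_handle_job(&job);
	return 0;
}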
-
Brian Christiansen authored
-
Brian Christiansen authored
-
Tim Wickberg authored
Bug 6632.
-
Tim Wickberg authored
-
Matt Ezell authored
The slurmsmwd cannot be built as part of a Cray Aries build at this time, so it should not be bundled alongside it. Add this separate build option to make it easier to package independently of the main installation. Bug 6632.
-
Matt Ezell authored
The --disable-native-cray option was used to revert to Cray/ALPS mode, and does not do anything at this point. So reuse the spec option to actually disable ("Native") Cray Aries builds. Bug 6632.
-
Brian Christiansen authored
when one offset passes and the other fails. Bug 6892
-
Nate Rini authored
Bug 6513.
-
Brian Christiansen authored
Bug 6513
-
Brian Christiansen authored
Bug 6513

First offset is good but second is bad -- didn't request task count.

$ cat etc/job_submit.lua
function slurm_job_submit(job_desc, part_list, submit_uid)
	slurm.log_user("submit1\nstuff")
	slurm.log_user("submit2")
	slurm.log_user("submit3")
	-- slurm.log_user("case 0")
	if job_desc.num_tasks == slurm.NO_VAL or job_desc.num_tasks == nil then
		slurm.log_user("Batch submit error: Must specify either number of nodes or number of tasks!")
		-- reject the job
		return slurm.ERROR
	end
	return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
	slurm.log_user("modify1")
	slurm.log_user("modify2")
	slurm.log_user("modify3")
	return slurm.SUCCESS
end

slurm.log_user("initialized")
return slurm.SUCCESS

$ sbatch -Ablah2 -n1 --wrap="hostname" : -J asdfl
sbatch: error: 0: initialized
sbatch: error: 0: submit1
sbatch: error: 0: stuff
sbatch: error: 0: submit2
sbatch: error: 0: submit3
sbatch: error: submit1
sbatch: error: stuff
sbatch: error: submit2
sbatch: error: submit3
sbatch: error: Batch submit error: Must specify either number of nodes or number of tasks!
sbatch: error: Batch job submission failed: Unspecified error

$ sbatch -Ablah2 -n1 --wrap="hostname" : -J asdfl
sbatch: error: 0: initialized
sbatch: error: 0: submit1
sbatch: error: 0: stuff
sbatch: error: 0: submit2
sbatch: error: 0: submit3
sbatch: error: 1: submit1
sbatch: error: 1: stuff
sbatch: error: 1: submit2
sbatch: error: 1: submit3
sbatch: error: 1: Batch submit error: Must specify either number of nodes or number of tasks!
sbatch: error: Batch job submission failed: Unspecified error

srun already handles this
-
Nate Rini authored
Was dumping this:

$ srun -A test7.21-account.1 --qos test7.21-qos.1 -n5 : -n3 : -n1 /bin/true
srun: error: 0: submit1
srun: error: submit2
srun: error: submit3
srun: error: Unable to allocate resources: Invalid account or account/partition combination specified

Will now dump this:

$ srun -A test7.21-account.1 --qos test7.21-qos.1 -n5 : -n3 : -n1 /bin/true
srun: error: 0: initialized
srun: error: 0: submit1
srun: error: 0: submit2
srun: error: 0: submit3
srun: error: Unable to allocate resources: Invalid account or account/partition combination specified

Bug 6513.
-
Nate Rini authored
Bug 6895.
-
Brian Christiansen authored
Bug 6895
-
Brian Christiansen authored
Bug 6895
-
Daniel Letai authored
And require the compute nodes to have an identical installation version to avoid issues with mismatched libraries. Bug 6598.
-
Daniel Letai authored
And require the compute nodes to have an identical installation version to avoid issues with mismatched libraries. Bug 6598.
-