- Jan 16, 2013
Morris Jette authored
While this will validate the job at submit time, it results in redundant looping when scheduling jobs. Working on an alternate patch now.
-
Danny Auble authored
submission.
-
Danny Auble authored
-
- Jan 15, 2013
Matthieu Hautreux authored
QoS limits enforcement on the controller side is based on a list of used_limits per user. When a user is not yet in that list, which is common when the controller has just been restarted and the user has no running jobs, the current logic is to skip some of the "per user" limit checks and let the submission succeed. However, if one of these limits is zero-valued, the check should fail, as it means that no job should be submitted at all: any submission would necessarily exceed the limit. This patch ensures that even when a user is not yet present in the per-user used_limits list, zero-valued limits are correctly enforced.
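The zero-limit case can be illustrated with a minimal sketch (hypothetical names, not the actual slurmctld data structures or functions): even when no per-user usage record exists yet, a limit of zero must still reject the submission, because zero current usage plus one more job already exceeds it.

    /* Hypothetical sketch of the zero-valued limit rule; not the real
     * slurmctld structures or function names. */
    #include <stdbool.h>
    #include <stddef.h>

    struct user_usage {                 /* per-user used_limits entry */
            unsigned int submit_jobs;   /* jobs counted against the limit */
    };

    /* usage may be NULL when the user has no record yet, e.g. right after
     * a controller restart while the user has no running jobs.
     * An "unlimited" sentinel value is ignored here for brevity. */
    static bool submit_allowed(const struct user_usage *usage,
                               unsigned int max_submit_jobs_per_user)
    {
            unsigned int used = usage ? usage->submit_jobs : 0;

            /* Check even when usage == NULL; otherwise a zero-valued
             * limit would be silently skipped. */
            return (used + 1) <= max_submit_jobs_per_user;
    }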
-
- Jan 14, 2013
jette authored
-
- Jan 11, 2013
Morris Jette authored
-
- Jan 10, 2013
jette authored
-
Morris Jette authored
-
- Jan 09, 2013
Danny Auble authored
-
- Jan 08, 2013
Danny Auble authored
-
Morris Jette authored
-
- Jan 03, 2013
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
-
Morris Jette authored
-
- Dec 28, 2012
Morris Jette authored
-
- Dec 22, 2012
Danny Auble authored
stack.
-
- Dec 21, 2012
Morris Jette authored
If sched/backfill starts a job with a QOS having NO_RESERVE and no job time limit, start it with the partition time limit (or one year if the partition has no time limit) rather than NO_VAL (a roughly 140-year time limit).

If a standby job, which in this case has the NO_RESERVE flag set, is submitted without a time limit and is backfilled, it gets an EndTime far into the future:

    JobId=99 Name=cmdll UserId=eckert(1043) GroupId=eckert(1043) Priority=12083 Account=sa QOS=standby
    JobState=RUNNING Reason=None Dependency=(null)
    Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0
    RunTime=00:00:14 TimeLimit=12:00:00 TimeMin=N/A
    SubmitTime=2012-12-20T11:49:36 EligibleTime=2012-12-20T11:49:36
    StartTime=2012-12-20T11:49:44 EndTime=2149-01-26T18:16:00

So I looked at the code in /src/plugins/sched/backfill:

    if (job_ptr->start_time <= now) {
            int rc = _start_job(job_ptr, resv_bitmap);
            if (qos_ptr && (qos_ptr->flags & QOS_FLAG_NO_RESERVE)) {
                    job_ptr->time_limit = orig_time_limit;
                    job_ptr->end_time = job_ptr->start_time +
                                        (orig_time_limit * 60);

Using the debugger I found that if the job does not have a specified time limit, job_ptr->time_limit is equal to NO_VAL when it hits this code.
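A minimal sketch of the fallback described above (simplified; effective_limit, part_time_limit and ONE_YEAR_MINUTES are illustrative names, not the exact sched/backfill code): pick a usable time limit before computing end_time.

    #define NO_VAL            0xfffffffe    /* "no value set" sentinel (see slurm.h) */
    #define INFINITE          0xffffffff    /* unlimited partition limit (see slurm.h) */
    #define ONE_YEAR_MINUTES  (365 * 24 * 60)

    /* Return a time limit in minutes that is safe to use for end_time. */
    static unsigned int effective_limit(unsigned int job_time_limit,
                                        unsigned int part_time_limit)
    {
            if (job_time_limit != NO_VAL)
                    return job_time_limit;          /* user-supplied limit */
            if ((part_time_limit != INFINITE) && (part_time_limit != NO_VAL))
                    return part_time_limit;         /* fall back to partition limit */
            return ONE_YEAR_MINUTES;                /* last resort: one year */
    }

    /* end_time = start_time + effective_limit(...) * 60, instead of
     * start_time + NO_VAL * 60, which lands roughly 140 years out. */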
-
- Dec 20, 2012
Danny Auble authored
slurm.conf with NodeAddr's signals going to a step could be handled incorrectly.
-
Danny Auble authored
would have also killed the allocation.
-
- Dec 19, 2012
Danny Auble authored
to make one job run.
-
Danny Auble authored
-
Danny Auble authored
-
Danny Auble authored
-N1 -n#.
-
- Dec 17, 2012
Danny Auble authored
-
Chris Read authored
-
- Dec 14, 2012
Morris Jette authored
-
Danny Auble authored
-
Chris Reed authored
Without this patch, use of sched/builtin would always result in FIFO scheduling, even if priority/multifactor was configured.
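The intent can be sketched as follows (illustrative only; job_rec and the qsort call are assumptions, not the actual sched/builtin plugin code): order the pending-job queue by the priority computed by priority/multifactor rather than processing it in raw submit (FIFO) order.

    #include <stdlib.h>

    struct job_rec {
            unsigned int job_id;
            unsigned int priority;   /* value computed by priority/multifactor */
    };

    /* Sort callback: higher computed priority first. */
    static int cmp_priority_desc(const void *a, const void *b)
    {
            const struct job_rec *ja = a, *jb = b;

            if (ja->priority > jb->priority)
                    return -1;
            if (ja->priority < jb->priority)
                    return 1;
            return 0;
    }

    /* e.g. qsort(queue, job_cnt, sizeof(struct job_rec), cmp_priority_desc); */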
-
Danny Auble authored
-
- Dec 13, 2012
jette authored
-
Danny Auble authored
each block independently.
-
Danny Auble authored
-
- Dec 12, 2012
Morris Jette authored
-
- Dec 07, 2012
Morris Jette authored
Correction to hostlist sorting for hostnames that contain two numeric components where the first numeric component varies in width (e.g. "rack9blade1" should come before "rack10blade1").
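A hedged sketch of the comparison rule (not the actual hostlist.c implementation): compare digit runs numerically and everything else character by character, so "rack9blade1" sorts before "rack10blade1" regardless of the width of the first number.

    #include <ctype.h>
    #include <stdlib.h>

    /* Natural-order comparison for hostnames with embedded numbers. */
    static int hostname_cmp(const char *a, const char *b)
    {
            while (*a && *b) {
                    if (isdigit((unsigned char)*a) && isdigit((unsigned char)*b)) {
                            /* Compare whole digit runs as numbers. */
                            long na = strtol(a, (char **)&a, 10);
                            long nb = strtol(b, (char **)&b, 10);
                            if (na != nb)
                                    return (na < nb) ? -1 : 1;
                    } else {
                            if (*a != *b)
                                    return (unsigned char)*a - (unsigned char)*b;
                            a++;
                            b++;
                    }
            }
            /* Shorter name sorts first when one is a prefix of the other. */
            return (unsigned char)*a - (unsigned char)*b;
    }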
-
- Dec 06, 2012
Morris Jette authored
-
- Dec 05, 2012
Danny Auble authored
job on future step creation attempts.
-
Danny Auble authored
also cause it to run if the realtime server ever goes away.
-
Morris Jette authored
Especially for newly started jobs, the PrologSlurmctld can change a job's QOS based upon resource allocation.
-