  1. Jul 09, 2015
    • Morris Jette's avatar
      Change slurmctld threads count against limit · ad9c2413
      Morris Jette authored
      The slurmctld logic throttles some RPCs so that only one of them
      can execute at a time in order to reduce contention for the job,
      partition and node locks (only one of the affected RPCs can execute
      at any time anyway and this lets other RPC types run). While an
      RPC is stuck in the throttle function, do not count that thread
      against the slurmctld thread limit.
      bug 1794
      ad9c2413
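      The throttling pattern described above can be modeled as follows. This is an illustrative Python sketch, not the slurmctld implementation; the names (`THREAD_LIMIT`, `active_threads`, `throttle_lock`, `handle_throttled_rpc`) are invented for the example.

```python
import threading

THREAD_LIMIT = 4                      # stand-in for the slurmctld thread limit
active_threads = threading.Semaphore(THREAD_LIMIT)  # slots counted against the limit
throttle_lock = threading.Lock()      # serializes the throttled RPC types

def handle_throttled_rpc(work):
    active_threads.acquire()          # an incoming RPC thread counts against the limit
    active_threads.release()          # ...but not while it is parked in the throttle
    with throttle_lock:               # only one throttled RPC executes at any time
        active_threads.acquire()      # back under the limit while actually working
        try:
            work()
        finally:
            active_threads.release()
```

      The point of the release-before-blocking step is that threads queued behind the throttle no longer exhaust the server's thread budget, so other RPC types can still be serviced.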
  2. Jul 08, 2015
  3. Jul 07, 2015
    • Trey Dockendorf's avatar
      Update job's QOS before partition · f2faa213
      Trey Dockendorf authored
      This patch moves the QOS update of an existing job to be before the
      partition update.  This ensures a new QOS value is the value used when
      doing validations against things like a partition's AllowQOS and DenyQOS.
      
      Currently, if two partitions have AllowQOS values that do not share any
      QOS, the order of updates prevents a job from being moved from one
      partition to another using something like the following:
      
      scontrol update job=<jobID> partition=<new part> qos=<new qos>
      f2faa213
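      The ordering issue above can be sketched in a few lines. This is a hypothetical model, not Slurm code; `update_job` and the dictionary layout are assumptions made for illustration.

```python
# Hypothetical model: the new QOS must be applied before the new
# partition is validated against the partition's AllowQOS list.
def update_job(job, partitions, new_partition=None, new_qos=None):
    if new_qos is not None:
        job["qos"] = new_qos                  # apply the QOS update first...
    if new_partition is not None:
        allowed = partitions[new_partition]["allow_qos"]
        if job["qos"] not in allowed:         # ...so validation sees the new value
            raise ValueError("job QOS not allowed in partition")
        job["partition"] = new_partition
    return job
```

      With the old ordering (partition validated first, against the job's old QOS), moving a job between partitions whose AllowQOS lists are disjoint would always be rejected.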
    • Morris Jette's avatar
      Correct pack node logic · 0e0c64de
      Morris Jette authored
      Correct task layout with CR_Pack_Node option and more than 1 CPU per task.
      Previous logic would place one task per CPU and launch too few tasks.
      bug 1781
      0e0c64de
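      The layout fix above amounts to dividing a node's CPUs by the per-task CPU count instead of placing one task per CPU. A minimal sketch, with invented names (`pack_tasks`, `node_cpus`), assuming a greedy CR_Pack_Node-style fill:

```python
def pack_tasks(node_cpus, cpus_per_task, ntasks):
    """Greedily pack tasks onto nodes in order (CR_Pack_Node-style sketch).

    node_cpus: available CPU count on each node, in packing order.
    Returns the number of tasks assigned to each node.
    """
    layout = []
    remaining = ntasks
    for cpus in node_cpus:
        fit = min(remaining, cpus // cpus_per_task)  # not one task per CPU
        layout.append(fit)
        remaining -= fit
    return layout
```

      With 2 CPUs per task and two 8-CPU nodes, six tasks pack as 4 on the first node and 2 on the second.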
  4. Jul 06, 2015
    • Morris Jette's avatar
      scheduler/backfill enhancements · edfbabe6
      Morris Jette authored
      Backfill scheduler now considers OverTimeLimit and KillWait configuration
      parameters to estimate when running jobs will exit. Initially the job's
      end time is estimated based upon its time limit. After the time limit
      is reached, the end time estimate is based upon the OverTimeLimit and
      KillWait configuration parameters.
      bug 1774
      edfbabe6
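      The end-time estimate described above can be expressed directly. This is an illustrative sketch; the function and parameter names are assumptions, with times treated as plain numbers:

```python
def estimated_end_time(now, start_time, time_limit, over_time_limit, kill_wait):
    """Estimate when a running job will exit, per the description above.

    Before the job reaches its time limit, assume it ends at
    start + limit; after that, assume it survives an extra
    OverTimeLimit + KillWait before being killed.
    """
    soft_end = start_time + time_limit
    if now <= soft_end:
        return soft_end
    return soft_end + over_time_limit + kill_wait
```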
    • Morris Jette's avatar
      Add backfill scheduler timeout · 7e944220
      Morris Jette authored
      Backfill scheduler: The configured backfill_interval value (default 30
          seconds) is now interpreted as a maximum run time for the backfill
          scheduler. Once reached, the scheduler will build a new job queue and
          start over, even if not all jobs have been tested.
      bug 1774
      7e944220
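      A run-time budget of this kind can be sketched as a bounded pass over the job queue. This is a simplified model, not the backfill plugin; `backfill_cycle` and `try_schedule` are names invented for the example:

```python
import time

def backfill_cycle(job_queue, try_schedule, max_run_time=30.0):
    """One backfill pass bounded by max_run_time seconds (the
    backfill_interval budget in this sketch).

    Returns (tested, timed_out); a real scheduler would rebuild the
    job queue and start over after a timeout.
    """
    deadline = time.monotonic() + max_run_time
    tested = 0
    for job in job_queue:
        if time.monotonic() >= deadline:
            return tested, True       # budget spent: stop and start over later
        try_schedule(job)
        tested += 1
    return tested, False
```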
  5. Jun 30, 2015
  6. Jun 29, 2015
  7. Jun 25, 2015
  8. Jun 24, 2015
  9. Jun 23, 2015
  10. Jun 22, 2015
    • Morris Jette's avatar
      Advanced reservation fixes · a6454176
      Morris Jette authored
      Updates of existing bluegene advanced reservations did not work at all.
      Some multi-core configurations resulted in an abort due to creating
        core_bitmaps for the reservation that had only one bit per node rather
        than one bit per core.
      These bugs were introduced in commit 5f258072
      a6454176
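      The sizing error above is easy to picture: the bitmap must have one bit per core summed over all nodes, not one bit per node. A trivial sketch with an invented helper name:

```python
def make_core_bitmap(cores_per_node):
    """Reservation core bitmap sized at one bit per core across all
    nodes; the buggy version effectively allocated len(cores_per_node)
    bits, i.e. one per node. Illustrative only."""
    return [0] * sum(cores_per_node)
```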
    • David Bigagli's avatar
      Update NEWS · c8545598
      David Bigagli authored
      c8545598
    • David Bigagli's avatar
      Update NEWS · 38007f9b
      David Bigagli authored
      38007f9b
  11. Jun 19, 2015
  12. Jun 15, 2015
  13. Jun 12, 2015
  14. Jun 11, 2015
  15. Jun 10, 2015
  16. Jun 09, 2015
    • David Bigagli's avatar
      Search for user in all groups · 93ead71a
      David Bigagli authored
      93ead71a
    • Morris Jette's avatar
      Fix scheduling inconsistency with GRES · e1a00772
      Morris Jette authored
      1. I submit a first job that uses 1 GPU:
      $ srun --gres gpu:1 --pty bash
      $ echo $CUDA_VISIBLE_DEVICES
      0
      
      2. while the first one is still running, a 2-GPU job asking for 1 task per node
      waits (and I don't really understand why):
      $ srun --ntasks-per-node=1 --gres=gpu:2 --pty bash
      srun: job 2390816 queued and waiting for resources
      
      3. whereas a 2-GPU job requesting 1 core per socket (so just 1 socket) actually
      gets GPUs allocated from two different sockets!
      $ srun -n 1  --cores-per-socket=1 --gres=gpu:2 -p testk --pty bash
      $ echo $CUDA_VISIBLE_DEVICES
      1,2
      
      With this change #2 works the same way as #3.
      bug 1725
      e1a00772
  17. Jun 05, 2015
  18. Jun 04, 2015
  19. Jun 03, 2015
    • Morris Jette's avatar
      switch/cray: Refine PMI_CRAY_NO_SMP_ENV set · ef66b2eb
      Morris Jette authored
      switch/cray: Refine logic to set PMI_CRAY_NO_SMP_ENV environment variable.
      Rather than testing for the task distribution option, test the actual
      task IDs to see if they are monotonically increasing across all nodes.
      Based upon idea from Brian Gilmer (Cray).
      ef66b2eb
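      The monotonicity test described above can be sketched as follows; this is an illustrative model of the check, not the switch/cray code, and the function name is invented:

```python
def ids_monotonic_across_nodes(task_ids_by_node):
    """Return True when global task IDs increase monotonically across
    the node list (a block-like distribution), the condition the commit
    above uses to decide the PMI_CRAY_NO_SMP_ENV setting."""
    flat = [tid for node_ids in task_ids_by_node for tid in node_ids]
    return all(a < b for a, b in zip(flat, flat[1:]))
```

      Testing the actual IDs, rather than the distribution option the user requested, covers layouts that are block-like regardless of how they were specified.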
  20. Jun 02, 2015
  21. Jun 01, 2015
  22. May 30, 2015
  23. May 29, 2015
  24. May 28, 2015
  25. May 27, 2015
    • Morris Jette's avatar
      Map job --mem-per-cpu=0 to --mem=0. · 33c77302
      Morris Jette authored
      However, --mem=0 now reflects the full amount of memory on the
      node, while --mem-per-cpu=0 is unchanged.  This allows all the memory
      to be allocated in a cgroup without being "consumed", leaving it
      available for other jobs running on the same host.
      Eric Martin, Washington University School of Medicine
      33c77302
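      The mapping above can be modeled in a couple of lines. This is a loose sketch, not Slurm's option handling; the function name and tuple convention are assumptions:

```python
def normalize_mem_options(mem, mem_per_cpu):
    """Map a --mem-per-cpu=0 request to --mem=0 (per the change above):
    a zero per-CPU request becomes a whole-node memory request.
    Returns (mem, mem_per_cpu) with at most one of the two set."""
    if mem_per_cpu == 0:
        return 0, None        # --mem=0: all memory on the node
    return mem, mem_per_cpu
```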