Aug 27, 2013

      Reservation with CoreCnt: Avoid possible invalid memory reference · e0541f93
      Morris Jette authored
If a reservation create request includes a CoreCnt value and more
nodes are required than are configured, the logic in select/cons_res
could run off the end of the core_cnt array. This patch adds a
check for a zero value in the core_cnt array, which terminates
the user-specified array.
      Back-port from master of commit 211c224b
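
A minimal sketch of the pattern this fix describes, assuming a
hypothetical zero-terminated core_cnt array; the helper total_cores
and the node_limit parameter are illustrative names, not Slurm's
actual identifiers:

    #include <stdint.h>
    #include <stdio.h>

    /* Walk a user-supplied, zero-terminated core count array. Without
     * the core_cnt[i] == 0 sentinel check, a request needing more
     * nodes than the array describes would read past its end. */
    static int total_cores(const uint32_t *core_cnt, int node_limit)
    {
        int total = 0;
        for (int i = 0; i < node_limit; i++) {
            if (core_cnt[i] == 0)  /* zero terminates the user array */
                break;
            total += core_cnt[i];
        }
        return total;
    }

    int main(void)
    {
        uint32_t core_cnt[] = { 4, 4, 8, 0 };  /* trailing zero sentinel */
        /* node_limit (16) exceeds the entries supplied; the sentinel
         * check keeps the loop inside the array. */
        printf("%d cores requested\n", total_cores(core_cnt, 16));
        return 0;
    }
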
Aug 23, 2013

      Correct value of min_nodes returned by loading job info · 98e24b0d
      Morris Jette authored
This corrects a bug introduced in commit
https://github.com/SchedMD/slurm/commit/ac44db862c8d1f460e55ad09017d058942ff6499
That commit eliminated the need for squeue to read the node state
information, for performance reasons (mostly on large parallel systems
where the Prolog ran squeue, generating many simultaneous RPCs and
slowing the job launch process). It also assumed one CPU per node, so
if a pending job specified a node count of 1 and a task count larger
than one, squeue reported the job's node count as equal to its task
count. This patch moves that same calculation of a pending job's
minimum node count into slurmctld, so squeue still does not need to
read the node information but can report the correct node count for
pending jobs with minimal overhead.
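
A rough illustration of the calculation the patch moves into
slurmctld, assuming ceiling division of the task count by a per-node
CPU count; the function min_nodes_for and its parameters are
hypothetical names, not Slurm's:

    #include <stdio.h>

    /* Hypothetical: the minimum node count for a pending job, given
     * its task count and the CPUs available per node. */
    static unsigned min_nodes_for(unsigned num_tasks, unsigned min_nodes,
                                  unsigned cpus_per_node)
    {
        /* ceiling division: nodes needed to hold all tasks */
        unsigned needed = (num_tasks + cpus_per_node - 1) / cpus_per_node;
        return (needed > min_nodes) ? needed : min_nodes;
    }

    int main(void)
    {
        /* 16 tasks on 16-CPU nodes fit on the 1 requested node; the
         * buggy path, assuming 1 CPU per node, reported 16 nodes. */
        printf("min_nodes = %u\n", min_nodes_for(16, 1, 16));
        return 0;
    }
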
Aug 21, 2013

      Fix of wrong node/job state problem after reconfig · d80c8667
      Hongjia Cao authored
If there are completing jobs, a reconfigure sets the wrong job/node
state: all nodes of the completing job are marked allocated, and the
job is not removed even after its completing nodes are released. The
state can only be restored by restarting slurmctld after the
completing nodes are released.
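
A minimal sketch of the state-rebuild distinction the fix enforces,
with hypothetical enum and struct names rather than Slurm's actual
data structures: when node state is reconstructed during a
reconfigure, a node whose job is completing must be rebuilt as
completing, not allocated:

    #include <stdio.h>

    enum job_state  { JOB_RUNNING, JOB_COMPLETING };
    enum node_state { NODE_IDLE, NODE_ALLOCATED, NODE_COMPLETING };

    struct job  { enum job_state state; };
    struct node { enum node_state state; struct job *job; };

    /* Rebuild one node's state from its job, as a reconfigure must. */
    static void rebuild_node_state(struct node *n)
    {
        if (!n->job)
            n->state = NODE_IDLE;
        else if (n->job->state == JOB_COMPLETING)
            n->state = NODE_COMPLETING;  /* buggy path: NODE_ALLOCATED */
        else
            n->state = NODE_ALLOCATED;
    }

    int main(void)
    {
        struct job  j = { JOB_COMPLETING };
        struct node n = { NODE_ALLOCATED, &j };
        rebuild_node_state(&n);
        printf("node state = %d (expect NODE_COMPLETING = %d)\n",
               n.state, NODE_COMPLETING);
        return 0;
    }
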