Commit fc7eee88 authored by Danny Auble

Merge remote-tracking branch 'origin/slurm-2.6'

Conflicts:
	NEWS
	doc/html/team.shtml
parents 1071b145 9f97c2e9
@@ -203,9 +203,6 @@ documents those changes that are of interest to users and admins.
-- Sched/backfill - Change default max_job_bf parameter from 50 to 100.
-- Added -I|--item-extract option to sh5util to extract data item from series.
* Changes in Slurm 2.6.7
========================
* Changes in Slurm 2.6.6
========================
-- sched/backfill - Fix bug that could result in failing to reserve resources
@@ -243,6 +240,7 @@ documents those changes that are of interest to users and admins.
-- Update documentation about QOS limits
-- Retry task exit message from slurmstepd to srun on message timeout.
-- Correction to logic reserving all nodes in a specified partition.
-- Added support for selecting AMD GPU by setting GPU_DEVICE_ORDINAL env var.
* Changes in Slurm 2.6.5
========================
@@ -35,6 +35,7 @@ Lead Slurm developers are:
<li>Leith Bade (Australian National University)</li>
<li>Troy Baer (The University of Tennessee, Knoxville)</li>
<li>Susanne Balle (HP)</li>
<li>Dominik Bartkiewicz (University of Warsaw, Poland)</li>
<li>Ralph Bean (Rochester Institute of Technology)</li>
<li>Alexander Bersenev (Institute of Mathematics and Mechanics, Russia)</li>
<li>David Bigagli (SchedMD)</li>
@@ -187,6 +188,6 @@ Lead Slurm developers are:
<!-- INDIVIDUALS, PLEASE KEEP IN ALPHABETICAL ORDER -->
</ul>
-<p style="text-align:center;">Last modified 18 November 2013</p>
+<p style="text-align:center;">Last modified 5 February 2014</p>
<!--#include virtual="footer.txt"-->
@@ -132,6 +132,16 @@ SwitchName=s3 Nodes=tux[12-15]
SwitchName=s4 Switches=s[0-3]
</pre>
<p>Note that compute nodes on switches that lack a common parent switch can
be used, but no job will span leaf switches without a common parent.
For example, it is legal to remove the line "SwitchName=s4 Switches=s[0-3]"
from the above topology.conf file.
In that case, no job will span more than the four compute nodes connected to any
single leaf switch.
This configuration can be useful if one wants to schedule multiple physical
clusters as a single logical cluster under the control of a single slurmctld
daemon.</p>
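<p>As a sketch of that flat layout, the leaf-only configuration would list only
the four switches (only s3's node range appears in the example above; the ranges
for s[0-2] here are assumptions for illustration):</p>
<pre>
# topology.conf -- sketch: four independent leaf switches, no common parent
SwitchName=s0 Nodes=tux[0-3]
SwitchName=s1 Nodes=tux[4-7]
SwitchName=s2 Nodes=tux[8-11]
SwitchName=s3 Nodes=tux[12-15]
</pre>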
<h2>User Options</h2>
<p>For use with the topology/tree plugin, users can also specify the maximum
@@ -161,6 +171,6 @@ The value will be set to the component types listed in SLURM_TOPOLOGY_ADDR.
Each component will be identified as either "switch" or "node".
A period is used to separate each hardware component type.</p>
-<p style="text-align:center;">Last modified 13 January 2014</p>
+<p style="text-align:center;">Last modified 5 February 2014</p>
<!--#include virtual="footer.txt"-->
@@ -250,6 +250,8 @@ extern void job_set_env(char ***job_env_ptr, void *gres_ptr)
if (dev_list) {
env_array_overwrite(job_env_ptr,"CUDA_VISIBLE_DEVICES",
dev_list);
env_array_overwrite(job_env_ptr,"GPU_DEVICE_ORDINAL",
dev_list);
xfree(dev_list);
}
}
@@ -294,6 +296,8 @@ extern void step_set_env(char ***job_env_ptr, void *gres_ptr)
if (dev_list) {
env_array_overwrite(job_env_ptr,"CUDA_VISIBLE_DEVICES",
dev_list);
env_array_overwrite(job_env_ptr,"GPU_DEVICE_ORDINAL",
dev_list);
xfree(dev_list);
}
}