- Dec 16, 2013

Morris Jette authored

Hughes, Doug authored
This allows multiple job IDs to be specified for the hold, uhold, resume, suspend, release, etc. commands.
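The multi-job form of these commands can be exercised as below. This is a sketch: the job IDs are hypothetical, and the exact list syntax accepted (comma-separated vs. space-separated) depends on the SLURM version in use.

```shell
# Hold several queued jobs in one invocation (job IDs are examples)
scontrol hold 1001,1002,1003

# Release them again with the same list form
scontrol release 1001,1002,1003
```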
- Sep 26, 2013

Morris Jette authored
- Sep 18, 2013

Morris Jette authored

Morris Jette authored
Fix bug introduced earlier today in commit 7912c05b.
- Sep 17, 2013

Morris Jette authored
For the setdebugflags command, avoid parsing "-flagname" as an scontrol command-line option.
- Aug 27, 2013

Morris Jette authored

- Apr 24, 2013

- Feb 27, 2013

Danny Auble authored

- Feb 20, 2013

Morris Jette authored

- Feb 08, 2013
David Bigagli authored
the user commands.

- Jan 28, 2013

Morris Jette authored
- Jan 10, 2013
- Jan 09, 2013

David Bigagli authored
- Jan 03, 2013

Morris Jette authored
The command-line argument would not be processed; scontrol would exit immediately.

Morris Jette authored
- Dec 28, 2012

jette authored

- Dec 06, 2012

Danny Auble authored

- May 29, 2012

Don Albert authored
I have implemented the changes as you suggested: using a "-dd" option to indicate that the display of the script is wanted, and setting both the "SHOW_DETAIL" and a new "SHOW_DETAIL2" flag. Since "scontrol" can be run interactively as well, I added a new "script" option to indicate that display of both the script and the details is wanted if the job is a batch job.

Here are the man page updates for "man scontrol".

For the "-d, --details" option:

    -d, --details
        Causes the show command to provide additional details where available.
        Repeating the option more than once (e.g., "-dd") will cause the show
        job command to also list the batch script, if the job was a batch job.

For the interactive "details" option:

    details
        Causes the show command to provide additional details where available.
        Job information will include CPUs and NUMA memory allocated on each
        node. Note that on computers with hyperthreading enabled and SLURM
        configured to allocate cores, each listed CPU represents one physical
        core. Each hyperthread on that core can be allocated a separate task,
        so a job's CPU count and task count may differ. See the --cpu_bind and
        --mem_bind option descriptions in the srun man pages for more
        information. The details option is currently only supported for the
        show job command. To also list the batch script for batch jobs, in
        addition to the details, use the script option described below instead
        of this option.

And for the new interactive "script" option:

    script
        Causes the show job command to list the batch script for batch jobs in
        addition to the detail information described under the details option
        above.

Attached are the patch file for the changes and a text file with the results of the tests I did to check out the changes. The patches are against SLURM 2.4.0-rc1.

-Don Albert-
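The options described above translate into invocations like the following. The job ID is a placeholder; the interactive session is shown as comments since it runs inside the scontrol prompt.

```shell
# Show extended details for a job (CPU and NUMA memory allocated per node)
scontrol -d show job 1234

# Repeat the option ("-dd") to also print the batch script of a batch job
scontrol -dd show job 1234

# Interactive equivalents, inside an "scontrol" session:
#   scontrol: details
#   scontrol: show job 1234     <- details only
#   scontrol: script
#   scontrol: show job 1234     <- details plus the batch script
```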

- May 25, 2012

Don Albert authored
- May 23, 2012

Morris Jette authored

- Feb 24, 2012

Morris Jette authored
- Nov 30, 2011

Morris Jette authored
Patches from Andriy Grytsenko (Massive Solutions Limited).
- Oct 27, 2011

Morris Jette authored
This patch contains corrections for spelling errors in the code and improvements to some man pages. Patch from Gennaro Oliva.
- Oct 04, 2011

Morris Jette authored
Also fix a message packing problem.

Morris Jette authored
Patch from Andriy Grytsenko (Massive Solutions Limited).
- Sep 27, 2011

Morris Jette authored
Add the ability to reboot all compute nodes after they become idle. The RebootProgram configuration parameter must be set and an authorized user must execute the command "scontrol reboot_nodes". Patch from Andriy Grytsenko (Massive Solutions Limited).
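The feature above involves both a configuration setting and a command. A minimal sketch, assuming a reboot script path of your choosing (the path below is only an example):

```shell
# slurm.conf must name the program slurmd runs to reboot a node, e.g.:
#   RebootProgram=/sbin/reboot

# An authorized user can then request that all compute nodes reboot
# once they become idle:
scontrol reboot_nodes
```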
- Sep 13, 2011

Danny Auble authored
(i.e. BP List -> MidplaneList).
- Aug 09, 2011

Danny Auble authored

- Jul 22, 2011

Danny Auble authored
- May 31, 2011

Moe Jette authored
Note that scontrol can only support a single cluster at one time.
- Apr 10, 2011

Moe Jette authored
On our frontend host we support multiple clusters (Cray and non-Cray) by setting the SLURM_CLUSTERS environment variable accordingly. In order to use scontrol (e.g., for hold/release of a user job) from a frontend host to control jobs on a remote Cray system, we need support for the SLURM_CLUSTERS environment variable in scontrol as well.
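With that support in place, usage from the frontend host looks roughly like this. The cluster name and job ID are placeholders; note that, per the later entry above, scontrol itself only talks to a single cluster at a time.

```shell
# Direct scontrol at a remote cluster via the environment (name is an example)
export SLURM_CLUSTERS=xt5
scontrol hold 4242

# One-shot form, without exporting the variable
SLURM_CLUSTERS=xt5 scontrol release 4242
```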
- Mar 24, 2011

Danny Auble authored

- Mar 23, 2011

Danny Auble authored

- Mar 03, 2011

Moe Jette authored
- Feb 23, 2011

Danny Auble authored

Moe Jette authored

Moe Jette authored
associated with a given NodeHostName when running multiple slurmd daemons per compute node (typically used for testing purposes). Patch from Matthieu Hautreux, CEA.

Danny Auble authored