Commit 9c50e6c4
authored 22 years ago by Moe Jette
Major revision of README with build info.
parent 2d0947bc
1 changed file: README (+37, −26)
TO BUILD:
> cd slurm
> ./autogen.sh
> ./configure [--with-authd] [--with-elan] [--with-totalview]
> make
NOTES:
# --with-authd if you want to use authd for authentication
# --with-elan if you have a Quadrics Elan switch; defaults to IP
# --with-totalview if you want to support the Etnus TotalView debugger
TO TEST:
You will need to construct a valid configuration file for your machine.
To run on a single host, you can probably use the file in
"etc/slurm.conf.localhost" with minimal modifications.
For a cluster, you should build something based upon "etc/slurm.conf.dev".
Be sure to update "SlurmUser", "JobCredentialPrivateKey" and
"JobCredentialPublicCertificate". There are keys in
"src/slurmd/private.key" and "src/slurmd/public.cert".
See "doc/man/man5/slurm.conf.5" for help in building this.
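The single-host case might look like the sketch below. Only "SlurmUser",
"JobCredentialPrivateKey" and "JobCredentialPublicCertificate" come from the
text above; every other key name, host name, and path is a hypothetical
illustration of what such a file could contain - consult
"doc/man/man5/slurm.conf.5" for the parameters your version actually accepts.

```
# Hypothetical minimal slurm.conf sketch for a single-host test.
# All keys except SlurmUser, JobCredentialPrivateKey and
# JobCredentialPublicCertificate are assumptions; see slurm.conf.5.
ControlMachine=localhost                 # assumed: host running slurmctld
SlurmUser=slurm                          # non-root user for slurmctld
JobCredentialPrivateKey=/usr/local/etc/slurm/private.key
JobCredentialPublicCertificate=/usr/local/etc/slurm/public.cert
NodeName=localhost                       # assumed: the single compute node
```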
These daemons and commands are in sub-directories of "slurm/src".
Initiate "src/slurmctld/slurmctld" on the control machine (it can run
without root permissions, as user SlurmUser as specified in slurm.conf).
Initiate "src/slurmd/slurmd" on each compute server (it needs to run
as root for production, but can run as a normal user for testing -
it will report errors on the initgroups, seteuid, and setegid functions
if not run as root).
Run jobs using the "src/srun/srun" command.
Get system status using "src/sinfo/sinfo" and "src/squeue/squeue".
Terminate jobs using "src/scancel/scancel".
Get and set system configuration information using "src/scontrol/scontrol".
Man pages for all of these daemons and commands can be found in "doc/man".
There are DejaGnu scripts to exercise various APIs and tools (they need
more work).
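The test run-through above can be sketched as a shell session. The command
names come from the text; the "-N1" option and the "command -v" guard (so the
sketch degrades gracefully on machines without the daemons on PATH) are
illustrative assumptions, and the daemons assume a valid slurm.conf is
already in place.

```shell
# Hypothetical smoke-test session for the steps above. run_if_present is an
# illustrative guard, not part of the SLURM sources: it runs a command only
# if it is installed, and otherwise just notes that it was skipped.
run_if_present() {
    if command -v "$1" >/dev/null 2>&1; then
        "$@"
    else
        echo "skipped: $1 not installed"
    fi
}

run_if_present slurmctld                  # control daemon, as SlurmUser
run_if_present slurmd                     # compute daemon (root in production)
run_if_present srun -N1 /bin/hostname     # run a one-node job
run_if_present sinfo                      # node and partition status
run_if_present squeue                     # job queue
run_if_present scontrol show config       # configuration information
# scancel <jobid> terminates a job once you have a job id to cancel
```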
NOTES:
You should have autoconf version 2.52 or higher (see "autoconf -V").
There is no authentication of communications between commands and
daemons without the authd daemon in operation. For more
information, see "http://www.theether.org/authd/".
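The autoconf requirement above can be sanity-checked before running
autogen.sh. The version_ge helper and the parsing of the "autoconf -V"
banner below are my own illustration, not part of the SLURM tree.

```shell
# Check that autoconf is new enough (>= 2.52) before running autogen.sh.
# version_ge A B succeeds when dotted version A >= B, using sort -V for
# the ordering; the banner parsing assumes the version is the last word
# of the first line of "autoconf -V" output.
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

required=2.52
have=$(autoconf -V 2>/dev/null | head -n 1 | awk '{print $NF}')
if [ -n "$have" ] && version_ge "$have" "$required"; then
    echo "autoconf $have is new enough"
else
    echo "need autoconf >= $required (found: ${have:-none})" >&2
fi
```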
STATUS (As of 12/19/2002):
Most functionality is in place and working.
Performance is good (under 5 seconds to run 1900 tasks of "/bin/hostname"
over 950 nodes).
Numerous performance enhancements are in the works as well as support
for the TotalView debugger and pluggable authentication modules (PAM).
Send feedback to Morris Jette <jette1@llnl.gov> and Mark Grondona
<grondona1@llnl.gov>.