.SH "SYNOPSIS"
.LP
\fBsh5util\fR [\fIOPTIONS\fR...]

.SH "DESCRIPTION"
.LP
\fBsh5util\fR merges the HDF5 files produced on each node for each step of a
job into one HDF5 file for the job. The resulting file can be viewed and
manipulated by common HDF5 tools such as HDF5View, h5dump, h5edit, or h5ls.
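.LP
For example, the group hierarchy of a merged file can be listed with the
standard HDF5 tool \fBh5ls\fR (the file name here assumes a job id of 42 and
the default merge output name):
.nf
h5ls \-r job_42.h5
.fi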
.LP
\fBsh5util\fR also has two extract modes. The first writes a limited set of
data for specific nodes, steps, and data series in comma\-separated value
(CSV) form to a file that can be imported into other analysis tools such as
spreadsheets.
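.LP
For example, a hypothetical job 42 could have its energy series extracted to
the default CSV file (./extract_42.csv) with:
.nf
sh5util \-j 42 \-E \-\-series=Energy
.fi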
.LP
The second (Item\-Extract) extracts one data item from one time series for
all the samples on all the nodes from a job's HDF5 profile; see the example
after this list. This mode:
.TP
\- Finds the sample with the maximum value of the item.
.TP
\- Writes a CSV file with the minimum, average, maximum, and item totals for
each node for each sample.
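.LP
A minimal item\-extract invocation might look like the following. The
\fB\-I\fR/\fB\-\-item\-extract\fR flag is assumed here (it is not listed
under OPTIONS below); verify it with \fBsh5util \-\-usage\fR on your
installation:
.nf
sh5util \-j 42 \-I \-\-series=Energy \-\-data=Power
.fi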


.SH "OPTIONS"
.LP

.TP
\fB\-E\fR, \fB\-\-extract\fR

Extract data series from a merged job file.

.RS
.TP 10
Extract mode options

.TP
\fB\-i\fR, \fB\-\-input\fR=\fIpath\fR
Merged file to extract data from (default: ./job_$jobid.h5).

.TP
\fB\-N\fR, \fB\-\-node\fR=\fInodename\fR
Node name to extract (default is all)

.TP
\fB\-l\fR, \fB\-\-level\fR=\fI[Node:Totals|Node:TimeSeries]\fR
Level to which the series is attached (default: Node:Totals).

.TP
\fB\-s\fR, \fB\-\-series\fR=\fI[Energy | Lustre | Network | Tasks | Task_#]\fR
\fBTasks\fR selects all tasks; \fBTask_#\fR selects a single task, where #
is a task id (default: all series).

.TP
\fB\-d\fR, \fB\-\-data\fR=\fIdata\fR
Name of the data item in the series (see \fBData Items per Series\fR below).

.RE

.TP
\fB\-j\fR, \fB\-\-jobs\fR=\fI<job(.step)>\fR
Merge this job/step, or a comma\-separated list of job steps; see the
example below. This option is required. If no step is specified, all steps
found will be processed.
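For example, to merge only step 1 of a hypothetical job 42 into a custom
output file:
.nf
sh5util \-j 42.1 \-o step1.h5
.fi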

.TP
\fB\-h\fR, \fB\-\-help\fR
Print this description of use.

.TP
\fB\-o\fR, \fB\-\-output\fR=\fIpath\fR
.nf
Path to a file into which to write.
Default for merge is ./job_$jobid.h5
Default for extract is ./extract_$jobid.csv
.fi

.TP
\fB\-p\fR, \fB\-\-profiledir\fR=\fIdir\fR
Directory where the node\-step files are located. The default is set in
acct_gather.conf.

.TP
\fB\-S\fR, \fB\-\-savefiles\fR
Instead of removing node-step files after merging them into the job file,
keep them around.

.TP
\fB\-\-user\fR=\fIuser\fR
User who profiled the job.
(Handy for the root user; defaults to the user running this command.)
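For example, root could merge the node\-step files of a job profiled by a
hypothetical user "alice":
.nf
sh5util \-j 42 \-\-user=alice
.fi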

.TP
\fB\-\-usage\fR
Display brief usage message.

.SH "Data Items per Series"

.TP
\fBEnergy\fR
.nf
Power
CPU_Frequency
.fi

.TP
\fBLustre\fR
.nf
Reads
Megabytes_Read
Writes
Megabytes_Write
.fi

.TP
\fBNetwork\fR
.nf
Packets_In
Megabytes_In
Packets_Out
Megabytes_Out
.fi

.TP
\fBTask\fR
.nf
CPU_Frequency
CPU_Time
CPU_Utilization
RSS
VM_Size
Pages
Read_Megabytes
Write_Megabytes
.fi
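.LP
Following the pattern of the examples below, a single task item such as
\fBRSS\fR could be extracted with (the job and task ids are hypothetical):
.nf
sh5util \-j 42 \-\-series=Task_0 \-\-data=RSS
.fi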

.SH "PERFORMANCE"
.PP
Executing \fBsh5util\fR sends a remote procedure call to \fBslurmctld\fR. If
enough calls from \fBsh5util\fR or other Slurm client commands that send remote
procedure calls to the \fBslurmctld\fR daemon come in at once, it can result in
a degradation of performance of the \fBslurmctld\fR daemon, possibly resulting
in a denial of service.
.PP
Do not run \fBsh5util\fR or other Slurm client commands that send remote
procedure calls to \fBslurmctld\fR from loops in shell scripts or other
programs. Ensure that programs limit calls to \fBsh5util\fR to the minimum
necessary for the information you are trying to gather.
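.PP
For example, rather than invoking \fBsh5util\fR once per step in a shell
loop, pass all the steps of interest in a single call (the job and step ids
are hypothetical):
.nf
sh5util \-j 42.0,42.1,42.2
.fi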

.SH "Examples"

.TP
Merge node-step files (as part of an sbatch script)
.LP
sbatch \-n1 \-d$SLURM_JOB_ID \-\-wrap="sh5util \-\-savefiles \-j $SLURM_JOB_ID"

.TP
Extract all task data from a node
.LP
sh5util \-j 42 \-N snowflake01 \-\-level=Node:TimeSeries \-\-series=Tasks

.TP
Extract all energy data
.LP
sh5util \-j 42 \-\-series=Energy \-\-data=power

.SH "COPYING"
Copyright (C) 2013 Bull.
.br
Copyright (C) 2013 SchedMD LLC.
Slurm is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 of the License, or (at your option)
any later version.
