Dashboard Deployment Tutorial
This tutorial covers three common deployment scenarios for the Torc web dashboard (torc-dash). Each scenario addresses different environments and use cases.
Prefer the terminal? If you work primarily in SSH sessions or terminal environments, consider using the Terminal UI (TUI) instead. The TUI provides the same workflow and job management capabilities without requiring a web browser or SSH tunnels.
Overview of Deployment Scenarios
| Scenario | Environment | Use Case |
|---|---|---|
| 1. Standalone | Local computer | Single-computer workflows, development, testing |
| 2. All-in-One Login Node | HPC login node | Small HPC workflows (< 100 jobs) |
| 3. Shared Server | HPC login node + dedicated server | Large-scale multi-user HPC workflows |
Prerequisites
Before starting, ensure you have:
- Built Torc binaries (see Installation): `cargo build --release --workspace`
- Added binaries to PATH: `export PATH="$PATH:/path/to/torc/target/release"`
- Initialized the database (if not using standalone mode): `sqlx database setup`
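To confirm the binaries are discoverable before continuing, a quick shell check (binary names as used throughout this tutorial):

```bash
# Print the resolved path of each binary; a missing name means PATH is not set correctly
command -v torc torc-server torc-dash
```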
Scenario 1: Local Development (Standalone Mode)
Best for: Single-computer workflows on your laptop or workstation. Also ideal for development, testing, and learning Torc.
This is the simplest setup: everything runs on one machine with a single command. Use this when you want to run workflows entirely on your local computer without HPC resources.
Architecture
flowchart TB
subgraph computer["Your Computer"]
browser["Browser"]
dash["torc-dash<br/>(web UI)"]
server["torc-server<br/>(managed)"]
cli["torc CLI"]
db[("SQLite DB")]
browser --> dash
dash -->|"HTTP API"| server
dash -->|"executes"| cli
cli -->|"HTTP API"| server
server --> db
end
Setup
Step 1: Start the dashboard in standalone mode
torc-dash --standalone
This single command:
- Automatically starts `torc-server` on a free port
- Starts the dashboard on http://127.0.0.1:8090
- Configures the dashboard to connect to the managed server
Step 2: Open your browser
Navigate to http://localhost:8090
Step 3: Create and run a workflow
- Click Create Workflow
- Upload a workflow specification file (YAML, JSON, or KDL; see the sketch after these steps)
- Click Create
- Click Initialize on the new workflow
- Click Run Locally to execute
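If you do not yet have a specification file, the sketch below writes a hypothetical minimal YAML spec. The field names are illustrative only and not taken from the Torc documentation; consult the workflow specification reference for the real schema.

```bash
# Hypothetical minimal workflow spec; field names are illustrative, not the actual Torc schema
cat > hello_workflow.yaml <<'EOF'
name: hello_workflow
jobs:
  - name: hello
    command: echo "hello from torc"
EOF
```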
Configuration Options
# Custom dashboard port
torc-dash --standalone --port 8080
# Specify database location
torc-dash --standalone --database /path/to/my.db
# Faster job completion detection
torc-dash --standalone --completion-check-interval-secs 2
# Specify binary paths (if not in PATH)
torc-dash --standalone \
--torc-bin /path/to/torc \
--torc-server-bin /path/to/torc-server
Stopping
Press Ctrl+C in the terminal. This stops both the dashboard and the managed server.
Scenario 2: All-in-One Login Node
Best for: Small HPC workflows (fewer than 100 jobs) where you want the complete Torc stack running on the login node, with jobs submitted to Slurm.
This is the simplest HPC setup: everything runs on the login node. It’s ideal for individual users running small HPC workflows without dedicated server infrastructure.
Important: Login nodes are shared resources. The torc-dash and torc-server applications consume minimal resources when workflows are small (fewer than roughly 100 jobs). Running larger workflows through them, especially with shorter completion-check intervals, may impact other users.
Architecture
flowchart TB
subgraph local["Your Local Machine"]
browser["Browser"]
end
subgraph login["Login Node"]
dash["torc-dash<br/>(port 8090)"]
server["torc-server<br/>(port 8080)"]
cli["torc CLI"]
db[("SQLite DB")]
slurm["sbatch/squeue"]
dash -->|"HTTP API"| server
dash -->|"executes"| cli
cli -->|"HTTP API"| server
server --> db
cli --> slurm
end
subgraph compute["Compute Nodes (Slurm)"]
runner1["torc-slurm-job-runner<br/>(job 1)"]
runner2["torc-slurm-job-runner<br/>(job 2)"]
runnerN["torc-slurm-job-runner<br/>(job N)"]
runner1 -->|"HTTP API"| server
runner2 -->|"HTTP API"| server
runnerN -->|"HTTP API"| server
end
browser -->|"SSH tunnel"| dash
slurm --> compute
Setup
Step 1: Start torc-server on the login node
# Start server
torc-server run \
--port 8080 \
--database $SCRATCH/torc.db \
--completion-check-interval-secs 60
Or as a background process:
nohup torc-server run \
--port 8080 \
--database $SCRATCH/torc.db \
> $SCRATCH/torc-server.log 2>&1 &
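Either way, you can confirm the server is up by querying the workflows endpoint (the same check used in Troubleshooting below):

```bash
# A JSON response (even an empty list) means the server is reachable
curl http://localhost:8080/torc-service/v1/workflows
```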
Step 2: Start torc-dash on the same login node
# Set API URL to local server
export TORC_API_URL="http://localhost:8080/torc-service/v1"
# Start dashboard
torc-dash --port 8090
Or in the background:
nohup torc-dash --port 8090 > $SCRATCH/torc-dash.log 2>&1 &
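Before tunneling in, you can confirm the dashboard is listening on the login node:

```bash
# Should show torc-dash bound to 127.0.0.1:8090
lsof -i :8090
```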
Step 3: Access via SSH tunnel
From your local machine:
ssh -L 8090:localhost:8090 user@login-node
Important: Use `localhost` in the tunnel command, not the login node’s hostname. This works because torc-dash binds to 127.0.0.1 by default.
Open http://localhost:8090 in your browser.
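If you prefer the tunnel to run without holding a terminal open, standard OpenSSH flags can background it:

```bash
# -f: go to background after authentication; -N: forward ports only, run no remote command
ssh -fN -L 8090:localhost:8090 user@login-node
```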
Submitting to Slurm
Via Dashboard:
- Create a workflow with Slurm scheduler configuration (see the sketch below)
- Click Initialize
- Click Submit (not “Run Locally”)
Via CLI:
export TORC_API_URL="http://localhost:8080/torc-service/v1"
# Create workflow with Slurm actions
torc workflows create my_slurm_workflow.yaml
# Submit to Slurm
torc submit <workflow_id>
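In both paths, the workflow specification must carry Slurm scheduler settings. A hypothetical sketch follows; the scheduler field names are illustrative and the real schema is documented in Working with Slurm.

```bash
# Hypothetical spec only; scheduler field names are illustrative, see Working with Slurm
cat > my_slurm_workflow.yaml <<'EOF'
name: my_slurm_workflow
jobs:
  - name: job1
    command: ./run_simulation.sh
schedulers:
  - type: slurm
    account: my_account      # illustrative
    walltime: "01:00:00"     # illustrative
EOF
```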
Monitoring Slurm Jobs
The dashboard shows job status updates as Slurm jobs progress:
- Go to Details tab
- Select Jobs
- Enable Auto-refresh
- Watch status change from `pending` → `running` → `completed`
You can also monitor via:
- Events tab for state transitions
- Debugging tab for job logs after completion
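You can also watch the Slurm queue directly from a terminal on the login node:

```bash
# Refresh the queue listing for your user every 10 seconds
watch -n 10 squeue -u $USER
```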
Scenario 3: Shared Server on HPC
Best for: Large-scale multi-user HPC environments where a central torc-server runs persistently on a dedicated server, and multiple users access it via torc-dash from login nodes.
This is the most scalable setup, suitable for production deployments with many concurrent users and large workflows.
Architecture
flowchart TB
subgraph local["Your Local Machine"]
browser["Browser"]
end
subgraph login["Login Node"]
dash["torc-dash<br/>(port 8090)"]
cli["torc CLI"]
dash -->|"executes"| cli
end
subgraph shared["Shared Server"]
server["torc-server<br/>(port 8080)"]
db[("SQLite DB")]
server --> db
end
browser -->|"SSH tunnel"| dash
dash -->|"HTTP API"| server
cli -->|"HTTP API"| server
Setup
Step 1: Start torc-server on the shared server
On the shared server (e.g., a dedicated service node):
# Start server with production settings
torc-server run \
--port 8080 \
--database /shared/storage/torc.db \
--completion-check-interval-secs 60
For production, consider running as a systemd service:
torc-server service install --user \
--port 8080 \
--database /shared/storage/torc.db
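After installing, verify the service started. The unit name below is an assumption; check the install output for the actual name.

```bash
# systemd user-session status and live logs; unit name is an assumption
systemctl --user status torc-server
journalctl --user -u torc-server -f
```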
Step 2: Start torc-dash on a login node
SSH to the login node and start the dashboard:
# Connect to the shared server
export TORC_API_URL="http://shared-server:8080/torc-service/v1"
# Start dashboard (accessible only from login node by default)
torc-dash --port 8090
Step 3: Access the dashboard via SSH tunnel
From your local machine, create an SSH tunnel:
ssh -L 8090:localhost:8090 user@login-node
Important: Use `localhost` in the tunnel command, not the login node’s hostname. The tunnel forwards your local port to `localhost:8090` as seen from the login node, which matches where torc-dash binds (127.0.0.1:8090).
Then open http://localhost:8090 in your local browser.
Using the CLI
Users can also interact with the shared server via CLI:
# Set the API URL
export TORC_API_URL="http://shared-server:8080/torc-service/v1"
# Create and run workflows
torc workflows create my_workflow.yaml
torc workflows run <workflow_id>
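To avoid re-exporting the URL in every session, persist it in your shell startup file:

```bash
# Append to ~/.bashrc (adjust for your shell)
echo 'export TORC_API_URL="http://shared-server:8080/torc-service/v1"' >> ~/.bashrc
```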
Authentication
For multi-user environments, enable authentication:
# Create htpasswd file with users
torc-htpasswd create /path/to/htpasswd
torc-htpasswd add /path/to/htpasswd alice
torc-htpasswd add /path/to/htpasswd bob
# Start server with authentication
torc-server run \
--port 8080 \
--auth-file /path/to/htpasswd \
--require-auth
See Authentication for details.
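The htpasswd file format suggests HTTP Basic authentication; assuming that is what the server enforces, you can verify a user’s credentials with curl (it prompts for the password):

```bash
# -u with no password triggers an interactive prompt; assumes HTTP Basic auth
curl -u alice http://shared-server:8080/torc-service/v1/workflows
```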
Comparison Summary
| Feature | Standalone | All-in-One Login Node | Shared Server |
|---|---|---|---|
| Setup complexity | Low | Medium | Medium-High |
| Multi-user support | No | Single user | Yes |
| Slurm integration | No | Yes | Yes |
| Database location | Local | Login node | Shared storage |
| Persistence | Session only | Depends on setup | Persistent |
| Best for | Single-computer workflows | Small HPC workflows (< 100 jobs) | Large-scale production |
Troubleshooting
Cannot connect to server
# Check if server is running
curl http://localhost:8080/torc-service/v1/workflows
# Check server logs
tail -f torc-server.log
SSH tunnel not working
# Verify tunnel is established
lsof -i :8090
# Check for port conflicts
netstat -tuln | grep 8090
Slurm jobs not starting
# Check Slurm queue
squeue -u $USER
# Check Slurm job logs
cat output/slurm_output_*.e
Dashboard shows “Disconnected”
- Verify API URL in Configuration tab
- Check network connectivity to server
- Ensure server is running and accessible
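A quick way to separate a dashboard problem from a server problem is to query the API directly from the machine running torc-dash:

```bash
# If this fails, the issue is the server or network; if it succeeds, check the dashboard's API URL
curl "$TORC_API_URL/workflows"
```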
Next Steps
- Web Dashboard Guide - Complete feature reference
- Working with Slurm - Detailed Slurm configuration
- Server Deployment - Production server setup
- Authentication - Securing your deployment