# Web Dashboard (torc-dash)
The torc-dash application is a web gateway that provides a browser-based UI for managing Torc workflows. It bridges a web frontend with the torc ecosystem by proxying API requests and executing CLI commands.
## Architecture

```mermaid
flowchart LR
    Browser["Browser<br/>(Web UI)"] <--> Dashboard["torc-dash<br/>(Gateway)"]
    Dashboard <--> Server["torc-server<br/>(API)"]
    Dashboard --> CLI["torc CLI<br/>(subprocess)"]
```
The dashboard acts as a gateway layer that:

- **Serves embedded static assets** - HTML, CSS, and JavaScript bundled into the binary
- **Proxies API requests** - Forwards `/torc-service/*` requests to a remote torc-server
- **Executes CLI commands** - Runs the `torc` CLI as a subprocess for complex operations
- **Manages server lifecycle** - Optionally spawns and manages a torc-server instance
## Core Components

### Embedded Static Assets

Uses the `rust_embed` crate to bundle all files from the `static/` directory directly into the binary at compile time:
```rust
#[derive(Embed)]
#[folder = "static/"]
struct Assets;
```
This enables single-binary deployment with no external file dependencies.
### Application State
Shared state across all request handlers:
```rust
struct AppState {
    api_url: String,                      // Remote torc-server URL
    client: reqwest::Client,              // HTTP client for proxying
    torc_bin: String,                     // Path to torc CLI binary
    torc_server_bin: String,              // Path to torc-server binary
    managed_server: Mutex<ManagedServer>, // Optional embedded server state
}
```
### Standalone Mode

When launched with `--standalone`, torc-dash automatically spawns a torc-server subprocess:
- Starts torc-server with a configurable port (0 for auto-detection)
- Reads `TORC_SERVER_PORT=<port>` from stdout to discover the actual port
- Configures the API URL to point to the managed server
- Tracks the process ID for lifecycle management
This enables single-command deployment for local development or simple production setups.
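The port-discovery step above can be sketched as a small parser over the child process's stdout. `parse_server_port` is a hypothetical helper name; the real implementation reads the subprocess pipe line by line:

```rust
// Sketch: extract the port from a `TORC_SERVER_PORT=<port>` line printed
// by the managed torc-server on startup. Returns None for unrelated lines.
fn parse_server_port(line: &str) -> Option<u16> {
    line.trim()
        .strip_prefix("TORC_SERVER_PORT=")
        .and_then(|port| port.parse().ok())
}
```

Scanning stdout for a sentinel line like this is a common way to let a child process bind port 0 and report the port the OS actually assigned.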
## Request Routing

### Static File Routes
| Route | Handler | Purpose |
|---|---|---|
| `/` | `index_handler` | Serves `index.html` |
| `/static/*` | `static_handler` | Serves embedded assets with MIME types |
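The MIME-type handling in the static handler can be illustrated with a minimal extension lookup. `mime_for` is a hypothetical helper; a real implementation would more likely use a crate such as `mime_guess`:

```rust
// Sketch: map a requested path's file extension to a MIME type,
// falling back to application/octet-stream for unknown extensions.
fn mime_for(path: &str) -> &'static str {
    match path.rsplit('.').next() {
        Some("html") => "text/html",
        Some("css") => "text/css",
        Some("js") => "text/javascript",
        Some("svg") => "image/svg+xml",
        Some("png") => "image/png",
        _ => "application/octet-stream",
    }
}
```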
### API Proxy

All `/torc-service/*` requests are transparently proxied to the remote torc-server:

```text
Browser: GET /torc-service/v1/workflows
        ↓
torc-dash: forwards to http://localhost:8080/torc-service/v1/workflows
        ↓
torc-server: responds with workflow list
        ↓
torc-dash: returns response to browser
```
The proxy preserves HTTP methods (GET, POST, PUT, PATCH, DELETE), headers, and request bodies.
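The path rewrite at the heart of the proxy can be sketched as string concatenation. `proxy_target` is a hypothetical helper; the real proxy additionally forwards the method, headers, and body via `reqwest`:

```rust
// Sketch: build the upstream URL by joining the configured api_url with the
// incoming path. Trimming the trailing slash avoids a doubled `//`.
fn proxy_target(api_url: &str, path_and_query: &str) -> String {
    format!("{}{}", api_url.trim_end_matches('/'), path_and_query)
}
```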
### CLI Command Endpoints
These endpoints execute the torc CLI as subprocesses, enabling operations that require local file access or complex orchestration:
| Endpoint | CLI Command | Purpose |
|---|---|---|
| `POST /api/cli/create` | `torc workflows create` | Create workflow from spec file |
| `POST /api/cli/run` | `torc workflows run` | Run workflow locally |
| `POST /api/cli/submit` | `torc workflows submit` | Submit to scheduler |
| `POST /api/cli/initialize` | `torc workflows initialize` | Initialize job dependencies |
| `POST /api/cli/delete` | `torc workflows delete` | Delete workflow |
| `POST /api/cli/reinitialize` | `torc workflows reinitialize` | Reinitialize workflow |
| `POST /api/cli/reset-status` | `torc workflows reset-status` | Reset job statuses |
| `GET /api/cli/run-stream` | `torc workflows run` | SSE streaming execution |
### Server Management Endpoints

| Endpoint | Purpose |
|---|---|
| `POST /api/server/start` | Start a managed torc-server |
| `POST /api/server/stop` | Stop the managed server |
| `GET /api/server/status` | Check server running status |
### Utility Endpoints

| Endpoint | Purpose |
|---|---|
| `POST /api/cli/read-file` | Read local file contents |
| `POST /api/cli/plot-resources` | Generate resource plots from DB |
| `POST /api/cli/list-resource-dbs` | Find resource database files |
## Key Features

### Streaming Workflow Execution

The `/api/cli/run-stream` endpoint uses Server-Sent Events (SSE) to provide real-time feedback:
```text
event: start
data: Running workflow abc123

event: stdout
data: Job job_1 started

event: status
data: Jobs: 3 running, 7 completed (total: 10)

event: stdout
data: Job job_1 completed

event: end
data: success

event: exit_code
data: 0
```
The stream includes:
- stdout/stderr from the torc CLI process
- Periodic status updates fetched from the API every 3 seconds
- Exit code when the process completes
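The SSE wire format behind this stream is plain text: each event is an `event:` line plus a `data:` line, terminated by a blank line. A minimal sketch (`sse_frame` is a hypothetical helper; in practice an SSE framework such as axum's response types handles this framing):

```rust
// Sketch: serialize one Server-Sent Events frame. A blank line terminates
// the frame so the browser's EventSource dispatches it.
fn sse_frame(event: &str, data: &str) -> String {
    format!("event: {event}\ndata: {data}\n\n")
}
```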
### CLI Execution Pattern
All CLI commands follow a consistent execution pattern:
```rust
async fn run_torc_command(torc_bin: &str, args: &[&str], api_url: &str) -> CliResponse {
    let output = Command::new(torc_bin)
        .args(args)
        .env("TORC_API_URL", api_url) // Pass server URL to CLI
        .output()
        .await;
    // ... map stdout, stderr, and the exit status into a CliResponse
}
```
Returns structured JSON:

```json
{
  "success": true,
  "stdout": "Workflow created: abc123",
  "stderr": "",
  "exit_code": 0
}
```
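The JSON above maps naturally onto a small response struct. This is a sketch with assumed field types; the real type presumably derives serde's `Serialize` to produce the JSON shown:

```rust
// Sketch: the structured response assembled from a finished subprocess.
struct CliResponse {
    success: bool,
    stdout: String,
    stderr: String,
    exit_code: i32,
}

// Hypothetical constructor: success is derived from the exit code.
fn to_response(stdout: String, stderr: String, exit_code: i32) -> CliResponse {
    CliResponse { success: exit_code == 0, stdout, stderr, exit_code }
}
```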
### Configuration Merging

Configuration is merged from multiple sources (highest to lowest priority):

1. **CLI arguments** - Command-line flags
2. **Environment variables** - `TORC_API_URL`, `TORC_BIN`, etc.
3. **Configuration file** - `TorcConfig` from `~/.torc.toml` or similar
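The precedence rule amounts to a first-`Some`-wins merge. `merge_setting` is a hypothetical helper showing only the ordering, not the real config types:

```rust
// Sketch: merge one setting from three sources; the highest-priority
// source that provided a value wins (CLI flag > env var > config file).
fn merge_setting(
    cli: Option<String>,
    env: Option<String>,
    file: Option<String>,
) -> Option<String> {
    cli.or(env).or(file)
}
```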
## Design Rationale

### Why Proxy Instead of Direct API Access?

- **CORS avoidance** - The browser's same-origin policy does not apply to server-side requests
- **Authentication layer** - Can add authentication/authorization without modifying torc-server
- **Request transformation** - Can modify requests and responses as needed
- **Logging and monitoring** - Centralized request logging
### Why CLI Delegation?

Complex operations such as workflow creation are delegated to the existing torc CLI rather than reimplemented in the dashboard:

- **Code reuse** - Leverages the tested CLI implementation
- **Local file access** - The CLI can read workflow specs from the filesystem
- **Consistent behavior** - Same behavior as command-line usage
- **Maintenance** - A single implementation to maintain
### Why Standalone Mode?

- **Single-binary deployment** - One command starts everything needed
- **Development convenience** - Quick local testing without a separate server
- **Port auto-detection** - Port 0 support avoids conflicts with ports already in use