§ 01 Platform

Spot GPU orchestration,
honed for async AI workloads.

Training is expensive. slipa cuts the bill 30–70% by routing checkpoint-friendly jobs across five spot providers and handling every eviction for you.
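"Checkpoint-friendly" just means a job can resume from its latest saved state after an eviction. A minimal sketch of that pattern, with a made-up JSON checkpoint file and step granularity standing in for real training state:

```python
# Sketch of a checkpoint-friendly loop: resume from the last saved step
# if a checkpoint exists, so an eviction only loses work since the last save.
# The checkpoint path, format, and 100-step cadence are illustrative.
import json
import os


def train(total_steps: int, save_path: str = "checkpoint.json") -> int:
    step = 0
    if os.path.exists(save_path):
        # A previous (possibly evicted) run left a checkpoint; pick up from it.
        with open(save_path) as f:
            step = json.load(f)["step"]
    while step < total_steps:
        step += 1  # one optimizer step would go here
        if step % 100 == 0:
            # Persist progress periodically.
            with open(save_path, "w") as f:
                json.dump({"step": step}, f)
    return step
```

Any job shaped like this survives migration: the router boots a fresh GPU, re-launches the job, and the loop finds its own checkpoint.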

§ 02 Savings

Same run. A third of the bill.

Illustrative runs on an 8B open model, priced against AWS on-demand H100.

Example run                               On-demand   slipa   Save
Fine-tune · 8B QLoRA, 2M samples          ~$150       ~$50    −67%
Batch inference · 8B, 5M prompts          ~$420       ~$140   −67%
RL rollouts · 8B, 500k × 8 completions    ~$330       ~$110   −67%
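The Save column is just the ratio of the two illustrative prices, which can be checked directly:

```python
# Savings percentage implied by the illustrative prices above
# (on-demand price, slipa price), in dollars.
runs = {
    "Fine-tune": (150, 50),
    "Batch inference": (420, 140),
    "RL rollouts": (330, 110),
}
for name, (on_demand, slipa) in runs.items():
    saving = round(100 * (1 - slipa / on_demand))
    print(f"{name}: -{saving}%")  # each illustrative row lands at -67%
```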

§ 03 Workloads

One router. Many workload shapes.

finetune
Fine-tune an open model on your data. LoRA / QLoRA / full fine-tune. Axolotl under the hood.
Live
hello-gpu
Prove the GPU works before a real run. Boots a GPU, runs a CUDA sanity check, marks the job complete.
Live
batch-inference
Run a model over a large prompt set, offline. Streams the dataset through vLLM; predictions land in R2.
Live
rl-rollouts
Generate RLHF training data at scale. N completions per prompt; optional reward-model scoring.
Live
agent
Run long autonomous agents without babysitting the GPU. Checkpoint-friendly; resumes across evictions.
Soon
Watches five providers · picks the cheapest live GPU · migrates on eviction
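The routing rule above reduces to a simple selection: among providers with a live quote, take the cheapest. A sketch, with made-up provider names and prices:

```python
# Illustrative routing step: given live spot quotes (in $/GPU-hr) from
# several providers, pick the cheapest one that currently has capacity.
# None means the provider has no GPU available right now.
from typing import Optional


def pick_provider(quotes: dict[str, Optional[float]]) -> Optional[str]:
    """Return the cheapest provider with a live quote, or None if none have capacity."""
    live = {name: price for name, price in quotes.items() if price is not None}
    return min(live, key=live.get) if live else None


quotes = {"provider-a": 1.89, "provider-b": None, "provider-c": 1.49}
print(pick_provider(quotes))  # prints "provider-c"
```

On eviction the same selection runs again over fresh quotes, which is what "migrates on eviction" amounts to.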

§ 04 Access

Request an invite.

Private beta. Leave an address and we'll reach out when capacity opens for your workload shape.

No commitment. No newsletter.