SaladCloud

Distributed GPU cloud for low-cost compute, deployment, and AI transcription
Rating: 5 (58 votes)
Website: salad.com

SaladCloud is a distributed GPU cloud designed to make GPU compute more affordable and accessible. Instead of relying only on traditional data-center hardware, SaladCloud taps into a large network of consumer-grade GPUs to run containerized workloads at lower cost—especially useful for GPU-heavy tasks like AI inference, batch processing, and media pipelines. The platform combines compute, deployment tooling, and APIs so teams can ship workloads quickly without managing their own fleet.

Developers can deploy applications as containers using the Salad Container Engine, and can also integrate through APIs, SDKs, and GitHub-based workflows. For Kubernetes users, SaladCloud supports Virtual Kubelets to extend existing clusters with Salad’s distributed capacity. This makes it easier to burst workloads, run parallel jobs, or scale AI services while controlling spend. A pricing calculator helps estimate costs based on GPU model, vCPU, memory, and priority tier.
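The arithmetic behind such a cost estimate can be sketched in a few lines. The helper below is hypothetical (not Salad's actual calculator or API); the batch-tier hourly rates are taken from the Plans & Pricing section of this listing:

```python
# Hypothetical cost estimator -- illustrates the arithmetic behind a
# pricing calculator; not Salad's actual calculator logic or API.

# Batch-tier hourly rates from the Plans & Pricing section.
BATCH_RATES_PER_HR = {
    "RTX 5090": 0.294,
    "RTX 4090": 0.204,
    "RTX 3090": 0.124,
}

def estimate_cost(gpu: str, replicas: int, hours: float) -> float:
    """Total batch-tier cost for a fleet of identical container replicas."""
    return round(BATCH_RATES_PER_HR[gpu] * replicas * hours, 2)

# 10 RTX 4090 replicas running a 24-hour batch job:
print(estimate_cost("RTX 4090", replicas=10, hours=24))  # 48.96
```

The real calculator also factors in vCPU, memory, and priority tier; this sketch covers only the GPU hourly rate.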

SaladCloud also offers an AI Transcription API positioned as a low-cost alternative for speech-to-text use cases, aiming to keep accuracy high while reducing per-minute transcription costs. The broader roadmap includes storage and data services such as distributed file storage, object storage, and managed databases (noted as coming soon), rounding out a stack for building and operating production workloads.
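To see why cheap consumer GPUs can cut per-minute transcription cost, a back-of-the-envelope conversion helps. The real-time factor below is an assumed figure for a Whisper-class model, not a published Salad benchmark, and the rate is the RTX 3060 batch price from this listing:

```python
def cost_per_audio_minute(gpu_rate_per_hr: float, rtf: float) -> float:
    """Effective GPU cost per minute of audio transcribed.

    rtf (real-time factor): minutes of audio processed per minute of
    GPU time -- e.g. rtf=20 means 20 audio-minutes per GPU-minute.
    """
    return (gpu_rate_per_hr / 60) / rtf

# RTX 3060 at $0.084/hr, assuming a 20x real-time speech-to-text model:
print(f"${cost_per_audio_minute(0.084, 20):.5f} per audio-minute")
```

At these assumptions the compute cost lands well below a cent per audio-minute, which is the basic economics the transcription API leans on.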

Review Summary

Features

  • Distributed GPU cloud compute
  • Container Engine for deploying containerized apps
  • AI Transcription (speech-to-text) API
  • Gateway service
  • Kubernetes integration via Virtual Kubelets
  • Deployment options via API/SDK/GitHub workflows
  • Cost estimation via pricing calculator
  • Roadmap: distributed file storage, object storage, managed databases (coming soon)

How It’s Used

  • Image generation
  • Voice AI and real-time audio pipelines
  • Computer vision inference and processing
  • Data collection using large pools of residential IPs
  • Large-scale batch processing jobs
  • Molecular dynamics simulation on many low-cost GPUs
  • AI transcription for speech-to-text workloads, including needs such as speaker diarization and accented speech

Plans & Pricing

  • RTX 5090: $0.294/hr (Batch), 32GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 5080: $0.219/hr (Batch), 16GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 4090: $0.204/hr (Batch), 24GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 3090 Ti: $0.154/hr (Batch), 24GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 3090: $0.124/hr (Batch), 24GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 4080: $0.154/hr (Batch), 16GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 4070 Ti: $0.124/hr (Batch), 12GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 3080 Ti: $0.124/hr (Batch), 12GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 3060: $0.084/hr (Batch), 12GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 3080: $0.114/hr (Batch), 10GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 3070 Ti: $0.094/hr (Batch), 8GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 3070: $0.094/hr (Batch), 8GB VRAM, 8GB Memory, 4 vCPUs
  • RTX 3060 Ti: $0.064/hr (Batch), 8GB VRAM, 8GB Memory, 4 vCPUs
  • General Purpose Instance: $0.005/hr, 1GB Memory, 1 vCPU
  • CPU-Optimized Instance: $0.006/hr, 2GB Memory, 1 vCPU
  • Memory-Optimized Instance: $0.012/hr, 8GB Memory, 1 vCPU

For the latest pricing, visit https://salad.com/pricing
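For the non-GPU instances, a quick hourly-to-monthly conversion (using the common ~730 hours/month approximation) shows how low the cost floor is; rates are the instance prices listed above:

```python
HOURS_PER_MONTH = 730  # common cloud approximation: 24 * 365 / 12

# Hourly rates from the instance pricing above.
instances = {
    "General Purpose": 0.005,
    "CPU-Optimized": 0.006,
    "Memory-Optimized": 0.012,
}

for name, rate in instances.items():
    print(f"{name}: ~${rate * HOURS_PER_MONTH:.2f}/month")
```

So a General Purpose instance runs roughly $3.65/month and a Memory-Optimized one roughly $8.76/month at these listed rates.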
