TensorDock

Welcome to TensorDock

Welcome to your new cloud.

We are TensorDock.

We're democratizing the cloud by aggregating a global network of hosts. They run our software to monetize their servers, and we take a cut.
In turn, you get access to a wider range of compute than any single cloud provider offers. And because hosts compete with each other on pricing, you get the industry's lowest prices when deploying cloud GPUs through us.

Unique Culture. Unique People. Unique Ideas.

We hire the self-taught, the unestablished, the fresh-out-of-school grads.
We don't even conduct technical coding interviews (everyone can code well enough with Stack Overflow on the job). Instead, we show candidates an excerpt of our codebase and ask what they would do better. Depending on the passion they show for our mission of democratizing high-performance computing, we may hire them to implement whatever they pitched.
It's this culture of fresh perspectives that lets us boost efficiency without compromising quality. It strikes at the core of who we are: we innovate, and because we innovate, we stay ahead of the pack.

We follow vertical paths. We even built our own hypervisor.

Controlling costs is in our DNA. TensorDock started as a way to rent out our own infrastructure, not as a compute marketplace. We've played around with open-air chassis, modified crypto mining rig frames, natural airflow cooling (no fans = less power!), and even immersion cooling.
The point is, for the five years we've been in the cloud infrastructure space, we've always examined which corners are "cuttable" to slash costs for you. When we can't optimize something directly, we look upstream and fix it there.
In fact, after years of deploying Proxmox and OpenStack clusters, we built our own Libvirt-based hypervisor. It's 90% more reliable than either of those stacks, and it gives us the flexibility we need for features like storage-only billing and easy conversions between GPU and CPU-only virtual machines.
So... what does this mean for you, our customer?

Quick, reliable GPU deployments. 8 seconds, not 8 minutes.

By building our entire stack in-house and from the ground up, we've optimized every step of our product. Virtual machines that take 8 minutes to deploy elsewhere launch in as little as 8 seconds on us. And with pre-deployment stock checking, you won't ever waste 20 minutes attempting to deploy a workload in an out-of-stock location.
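
To make the check-stock-then-deploy flow concrete, here is a minimal Python sketch. It assumes a generic REST API: the base URL, endpoint paths, and field names are illustrative placeholders, not our documented API, so check the API reference for the real request format.

```python
import requests

API_BASE = "https://example.tensordock.invalid/api"  # placeholder; use the real base URL from the API docs


def deploy_gpu_vm(api_key: str, gpu_model: str, location: str) -> dict:
    """Check stock first, then deploy, mirroring the pre-deployment
    stock check described above. Endpoints and fields are illustrative."""
    headers = {"Authorization": f"Bearer {api_key}"}

    # 1. Query availability for the requested GPU model and location.
    stock = requests.get(
        f"{API_BASE}/stock",
        params={"gpu": gpu_model, "location": location},
        headers=headers,
        timeout=10,
    ).json()

    if stock.get("available", 0) < 1:
        raise RuntimeError(f"No {gpu_model} stock in {location}; try another location.")

    # 2. Stock confirmed: the deploy call itself returns in seconds.
    return requests.post(
        f"{API_BASE}/deploy",
        json={"gpu": gpu_model, "gpu_count": 1, "location": location},
        headers=headers,
        timeout=30,
    ).json()
```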

Integrate one platform, deploy anywhere

We epitomize multi-cloud. We partner with cloud providers around the world so that you can build one API integration with us, and that's it. Whether you're looking to deploy a workload in San Jose, United States, or San Jose, Costa Rica, we've got you covered.
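
As a hedged illustration of "integrate once, deploy anywhere," reusing the sketch from the previous section: the only thing that changes between regions is the location string. The location slugs below are invented for the example.

```python
# One integration, many locations: the request shape never changes.
# "san-jose-us" / "san-jose-cr" are invented slugs for the two San Joses above.
for region in ("san-jose-us", "san-jose-cr"):
    vm = deploy_gpu_vm(api_key="YOUR_API_KEY", gpu_model="rtx4090", location=region)
    print(region, vm)
```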

Right size. Right now.

With more than two dozen GPU models available at over two dozen locations worldwide, you can truly right-size your workloads. And because we aggregate stock from many suppliers, you'll never run out of GPU capacity when deploying workloads through us.

Get Started

If you will be running an ML workload, see our machine learning guide.
If you will be cloud gaming, see our cloud gaming via Parsec guide.

Support

We're always available at [email protected]. If you have a favorite team member, you can reach out to them directly, as all of our team members help with support!