Welcome to TensorDock
Welcome to your new cloud.
We are TensorDock.
We're democratizing the cloud by aggregating a global network of hosts. They run our software to monetize their servers, and we take a cut.
In turn, you get access to a wider range of compute than any single cloud provider offers. And because hosts compete with each other on pricing, you pay the industry's lowest prices when deploying cloud GPUs through us.
Unique Culture. Unique People. Unique Ideas.
We hire the self-taught, the unestablished, the fresh-out-of-school new grads.
We innovate, and because we move quickly, we stay ahead of the pack.
We follow vertical paths, doing things like building our own hypervisor.
Optimizing performance is in our DNA. TensorDock started as a way to rent out our own infrastructure, not as a compute marketplace. We've played around with open-air chassis, modified crypto mining rig frames, natural airflow cooling, and even immersion cooling. We realized we're best at building billing and orchestration software, but that performance-first ethos remains.
For the past five years, we've always examined which corners are "cuttable" to slash costs for you while maintaining quality. When we can't get something right, we look upstream and optimize that.
In fact, after years of deploying Proxmox and OpenStack clusters, we built our own Libvirt-based hypervisor. It's 90% more reliable than either of those stacks, and it offers the flexibility we need for features like storage-only billing and easy conversion of GPU virtual machines into CPU-only ones.
Quick, reliable GPU deployments. 20 seconds, not 20 minutes.
By building our entire stack in-house from the ground up, we've optimized every layer of our product. Instead of waiting 20 minutes for a virtual machine to deploy, you'll have one in as little as 20 seconds on our platform.
Integrate one platform, deploy anywhere
We epitomize multi-cloud. We partner with cloud providers around the world so that you can build one API integration with us and move on. From San Jose, United States, to San Jose, Costa Rica, we've got you covered.
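As a sketch of what a single integration can look like, here is a minimal Python helper that assembles an authenticated VM-deployment request. The base URL, endpoint path, and payload field names below are illustrative assumptions, not TensorDock's actual API schema; consult the API documentation for the real parameters.

```python
import json
from urllib.request import Request

# Hypothetical base URL; the real endpoint may differ.
API_BASE = "https://marketplace.tensordock.com/api"


def build_deploy_request(api_key, gpu_model, gpu_count, location):
    """Build a VM-deployment request; all field names are illustrative only."""
    payload = {
        "gpu_model": gpu_model,   # e.g. an RTX 4090 or other listed model
        "gpu_count": gpu_count,
        "location": location,     # marketplace location identifier
        "vcpus": 4,
        "ram_gb": 16,
    }
    return Request(
        f"{API_BASE}/deploy",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Because every location exposes the same request shape, switching from one region to another is just a change to the `location` argument; the rest of your integration stays untouched.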
Right size. Right now.
With over two dozen GPU models available at over two dozen locations worldwide, you can truly right-size your workloads. And because we aggregate stock from various suppliers, you'll never run out of GPUs when deploying workloads through us.
Get Started
If you will be running an ML workload, see our machine learning guide.
If you will be cloud gaming, see our cloud gaming via Parsec guide.
Support
We're always available at support@tensordock.com. If you have a favorite team member, you can reach out to them directly, as all of our team members help with support!