What is Compute with Hivenet?

Run GPU-powered workloads on Hivenet’s distributed cloud. Start quickly, then dig deeper in our Compute docs.

Written by Thanasis Karavasilis
Updated over 2 weeks ago

Compute with Hivenet lets you run on-demand GPU and CPU machines for AI inference, training, and general workloads. You create an instance, choose the resources you need, and connect over the network just like any other server.

Heads up: Compute uses a separate account from the main Hivenet app. If you already use Store, you’ll still need to sign up for Compute.

What you can do

  • Spin up on-demand GPU instances for AI workloads

  • Host inference servers (for example, vLLM; see the request sketch after this list)

  • Use templates to start with preinstalled runtimes and frameworks

  • Connect via SSH or web console to manage your instance

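If you plan to host an inference server, vLLM exposes an OpenAI-compatible HTTP API on the instance. The sketch below shows one way to query it from your own machine, assuming you have already started the server and can reach its port (commonly 8000); the address and model name are placeholders, not values specific to Hivenet.

  import requests

  # Placeholders: replace with your instance's address and the model you started vLLM with.
  BASE_URL = "http://<instance-ip>:8000/v1"
  MODEL = "mistralai/Mistral-7B-Instruct-v0.2"

  resp = requests.post(
      f"{BASE_URL}/chat/completions",
      json={
          "model": MODEL,
          "messages": [{"role": "user", "content": "Say hello from Hivenet."}],
          "max_tokens": 64,
      },
      timeout=60,
  )
  resp.raise_for_status()
  print(resp.json()["choices"][0]["message"]["content"])

A quick request like this confirms the server is up before you point real traffic at it.
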
What you’ll need

  • A Compute account

  • A payment method added in Billing

  • An SSH key or your preferred connection method

Quick start

  1. Sign in to Compute with Hivenet.

  2. Select Create instance.

  3. Pick a template or start from a clean image.

  4. Choose resources, then create the instance.

  5. Connect and run your workload (a connection sketch follows this list).

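Connecting in step 5 typically means an SSH session to the instance's public address. The sketch below uses paramiko from Python; a plain ssh command from your terminal works just as well. The host, username, and key path are placeholders, and the default username depends on the image or template you picked.

  import os
  import paramiko

  # Placeholders: replace with your instance's address, the image's default user,
  # and the private key that matches the public key you added to the instance.
  HOST = "<instance-ip>"
  USER = "ubuntu"
  KEY_PATH = os.path.expanduser("~/.ssh/id_ed25519")

  client = paramiko.SSHClient()
  client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a first test
  client.connect(HOST, username=USER, key_filename=KEY_PATH)

  # nvidia-smi is a quick check that the GPU is visible before starting real work.
  _, stdout, _ = client.exec_command("nvidia-smi")
  print(stdout.read().decode())
  client.close()
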
Tip: Start small. You can create a second, larger instance later if your job needs more resources.

Warning: Billing accrues while an instance is running. Terminate the instance when you’re done to stop charges.

Pricing and billing basics

  • You’re billed for active runtime per instance.

  • You must add a payment method before you can create paid instances.

  • Terminating an instance stops future runtime charges.

Data and security

  • Compute runs workloads on Hivenet’s distributed infrastructure.

  • You control what runs on your instance and who can access it.

  • For details on network access, data handling, and compliance, see the security section in our docs.

Learn more in the docs

The full step-by-step guides live in our docs. You’ll find:

  • Quickstart and setup

  • Creating and connecting to an instance

  • Inference servers (vLLM)

  • Templates and images

  • Billing and invoices

  • Security and access
