# Templates endpoints

Browse and retrieve deployment templates.
## GET /templates

List all available templates.
### Query parameters

- `category` (string, optional): filter by category
- `requiresGpu` (boolean, optional): filter by GPU requirement
- `search` (string, optional): search name and description
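As a minimal sketch, a client can assemble the query string for this endpoint as follows. The base URL `https://api.example.com` is a placeholder (the document does not specify a host), and `list_templates_url` is a hypothetical helper, not part of the API:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.example.com"  # placeholder; substitute your API host


def list_templates_url(category=None, requires_gpu=None, search=None):
    """Build a GET /templates URL, including only the filters that were set."""
    params = {}
    if category is not None:
        params["category"] = category
    if requires_gpu is not None:
        # assume booleans serialize as lowercase "true"/"false" in the query string
        params["requiresGpu"] = "true" if requires_gpu else "false"
    if search is not None:
        params["search"] = search
    query = urlencode(params)
    return f"{BASE_URL}/templates" + (f"?{query}" if query else "")


print(list_templates_url(category="llm", requires_gpu=True))
# → https://api.example.com/templates?category=llm&requiresGpu=true
```

Omitted parameters are left out of the URL entirely rather than sent as empty values, which keeps the request unambiguous for optional filters.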
### Response

```json
{
  "templates": [
    {
      "id": "ollama",
      "slug": "ollama",
      "name": "Ollama",
      "description": "Run LLMs locally with Ollama",
      "category": "llm",
      "requiresGpu": true,
      "minGpuVram": 8000
    },
    {
      "id": "vllm",
      "slug": "vllm",
      "name": "vLLM",
      "description": "High-performance inference server",
      "category": "llm",
      "requiresGpu": true,
      "minGpuVram": 16000
    }
  ]
}
```

## GET /templates/:idOrSlug

Get template details by ID or slug.
### Path parameters

- `idOrSlug` (string): template ID or slug
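Because the endpoint accepts either an ID or a slug in the same path position, the lookup it performs can be sketched client-side against an already-fetched template list. `find_template` is a hypothetical helper for illustration, not part of the API:

```python
def find_template(templates, id_or_slug):
    """Resolve a template by either its id or its slug, mirroring the
    /templates/:idOrSlug lookup. Returns None when nothing matches."""
    for template in templates:
        if id_or_slug in (template["id"], template["slug"]):
            return template
    return None


# Trimmed-down entries from the list response above.
catalog = [
    {"id": "ollama", "slug": "ollama", "name": "Ollama"},
    {"id": "vllm", "slug": "vllm", "name": "vLLM"},
]

print(find_template(catalog, "vllm")["name"])  # → vLLM
```

In this example `id` and `slug` happen to be identical, as they are in the sample responses; the two-field check still matters for templates where they diverge.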
### Response

```json
{
  "template": {
    "id": "ollama",
    "slug": "ollama",
    "name": "Ollama",
    "description": "Run LLMs locally with Ollama. Supports llama2, mistral, codellama, and many more models.",
    "category": "llm",
    "dockerImage": "ollama/ollama:latest",
    "defaultEnv": {
      "OLLAMA_MODELS": "llama2"
    },
    "ports": [
      {
        "internal": 11434,
        "protocol": "http",
        "description": "Ollama API"
      }
    ],
    "requiresGpu": true,
    "minGpuVram": 8000,
    "minCpuCores": 4,
    "minRamMb": 8192,
    "minDiskGb": 50,
    "estimatedPullTime": 120
  }
}
```

## Template categories
- `llm`: large language models
- `image`: image generation & processing
- `notebook`: Jupyter & interactive environments
- `utility`: general-purpose containers
- `custom`: user-defined images
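The detail response's `min*` fields exist so a client can verify a host before pulling a template's image. A sketch of that check, under stated assumptions: the host-side keys (`gpu`, `gpuVramMb`, `cpuCores`, `ramMb`, `diskGb`) are hypothetical names, and `minGpuVram` is assumed to be in MB (the document gives no unit; the `minRamMb` naming suggests megabytes):

```python
def meets_requirements(template, host):
    """Compare a host's resources against a template's declared minimums.
    A template field that is absent is treated as "no requirement"."""
    # Hard gate: a GPU-requiring template needs any GPU at all.
    if template.get("requiresGpu") and not host.get("gpu"):
        return False
    # (template minimum field, assumed host field) pairs to compare numerically.
    checks = [
        ("minGpuVram", "gpuVramMb"),   # assumption: both measured in MB
        ("minCpuCores", "cpuCores"),
        ("minRamMb", "ramMb"),
        ("minDiskGb", "diskGb"),
    ]
    for template_key, host_key in checks:
        needed = template.get(template_key)
        if needed is not None and host.get(host_key, 0) < needed:
            return False
    return True


# Minimums taken from the Ollama detail response above.
ollama = {"requiresGpu": True, "minGpuVram": 8000, "minCpuCores": 4,
          "minRamMb": 8192, "minDiskGb": 50}
host = {"gpu": True, "gpuVramMb": 24000, "cpuCores": 16,
        "ramMb": 65536, "diskGb": 500}

print(meets_requirements(ollama, host))  # → True
```

Treating missing template fields as "no requirement" keeps the check usable for `utility` and `custom` templates that may declare no minimums at all.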