Cortex Engines

This command allows you to manage various engines available within Cortex.

Usage:

cortex engines [options] [subcommand]

Options:

+------------+-------------------------------------------+----------+---------------+---------+
| Option     | Description                               | Required | Default value | Example |
+------------+-------------------------------------------+----------+---------------+---------+
| -h, --help | Display help information for the command. | No       | -             | -h      |
+------------+-------------------------------------------+----------+---------------+---------+
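
For example, to display the help information for this command:

cortex engines -h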

cortex engines list

This command lists all engines available in Cortex.

Usage:

cortex engines list

For example, it returns the following:

+---+-------------+-------------------+---------+----------------------------+--------------+
| # | Name        | Supported Formats | Version | Variant                    | Status       |
+---+-------------+-------------------+---------+----------------------------+--------------+
| 1 | onnxruntime | ONNX              |         |                            | Incompatible |
+---+-------------+-------------------+---------+----------------------------+--------------+
| 2 | llama-cpp   | GGUF              | 0.1.34  | linux-amd64-avx2-cuda-12-0 | Ready        |
+---+-------------+-------------------+---------+----------------------------+--------------+

cortex engines get

This command returns the details of the engine specified by engine_name.

Usage:

cortex engines get <engine_name>
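
For example, to retrieve the details of the llama-cpp engine:

cortex engines get llama-cpp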

This returns the following:

+-----------+-------------------+---------+-----------+--------+
| Name      | Supported Formats | Version | Variant   | Status |
+-----------+-------------------+---------+-----------+--------+
| llama-cpp | GGUF              | 0.1.37  | mac-arm64 | Ready  |
+-----------+-------------------+---------+-----------+--------+

Options:

+-------------+----------------------------------------------------+----------+---------------+-----------+
| Option      | Description                                        | Required | Default value | Example   |
+-------------+----------------------------------------------------+----------+---------------+-----------+
| engine_name | The name of the engine that you want to retrieve.  | Yes      | -             | llama-cpp |
+-------------+----------------------------------------------------+----------+---------------+-----------+
| -h, --help  | Display help information for the command.          | No       | -             | -h        |
+-------------+----------------------------------------------------+----------+---------------+-----------+

cortex engines install

This command downloads the required dependencies and installs the engine within Cortex. Currently, Cortex supports three engines:

  • llama-cpp
  • onnxruntime
  • tensorrt-llm

Usage:

cortex engines install [options] <engine_name>
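
For example:

cortex engines install llama-cpp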

Options:

+-------------+----------------------------------------------+----------+---------------+--------------------------------------+
| Option      | Description                                  | Required | Default value | Example                              |
+-------------+----------------------------------------------+----------+---------------+--------------------------------------+
| engine_name | The name of the engine you want to install. | Yes      | -             | llama-cpp, onnxruntime, tensorrt-llm |
+-------------+----------------------------------------------+----------+---------------+--------------------------------------+
| -h, --help  | Display help information for the command.   | No       | -             | -h                                   |
+-------------+----------------------------------------------+----------+---------------+--------------------------------------+

cortex engines uninstall

This command uninstalls an engine from Cortex.

Usage:

cortex engines uninstall [options] <engine_name>

For example:

cortex engines uninstall llama-cpp

Options:

+-------------+------------------------------------------------+----------+---------------+---------+
| Option      | Description                                    | Required | Default value | Example |
+-------------+------------------------------------------------+----------+---------------+---------+
| engine_name | The name of the engine you want to uninstall. | Yes      | -             | -       |
+-------------+------------------------------------------------+----------+---------------+---------+
| -h, --help  | Display help information for the command.     | No       | -             | -h      |
+-------------+------------------------------------------------+----------+---------------+---------+