# LiteLLM Models Not Appearing in Mito

In most cases, this happens because the frontend falls back to default models when it cannot successfully retrieve the configured models from the backend.

This guide helps you identify where the issue occurs and how to resolve it.

### Step 1: Check the API Response in the Browser

Open your JupyterLab environment and run this in the browser console:

```javascript
await fetch('/mito-ai/available-models', {
  credentials: 'same-origin'
}).then(r => r.json())
```

> If you’re unsure of the exact path, open DevTools → Network tab and find the `available-models` request.

What to look for:

* ✅ LiteLLM models returned → backend is working, issue may be frontend
* ❌ Default models returned → backend config issue
* ❌ Error / non-200 response → request failing → frontend fallback triggered
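If you'd rather check from Python than the browser console, a small sketch like the following can build an authenticated request against the same endpoint. The base URL and token here are placeholders for your own deployment, not values from Mito's docs:

```python
import json
import urllib.request

def build_models_request(base_url, token):
    """Build an authenticated request for the available-models endpoint.

    base_url and token are placeholders for your Jupyter server's URL
    and API token; adjust both for your deployment.
    """
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/mito-ai/available-models",
        headers={"Authorization": f"token {token}"},
    )

# Usage against a running server (URL and token are examples):
# with urllib.request.urlopen(build_models_request("http://localhost:8888", "<token>")) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```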

### Step 2: Verify Enterprise Mode

LiteLLM is only used if **enterprise mode is enabled**.

Run this inside your Jupyter environment:

```bash
python -c "from mito_ai.utils.version_utils import is_enterprise; print(is_enterprise())"
```

Expected:

* ✅ True → OK
* ❌ False → LiteLLM will NOT be used

### Step 3: Check LiteLLM Environment Variables (in Running Server)

Run:

```bash
python -c "from mito_ai import constants; print(constants.LITELLM_BASE_URL, constants.LITELLM_MODELS)"
```

Important notes:

* These values must be:
  * Set in the **running Jupyter server environment**
  * Not just in a shell or config file
* If you changed them recently, you must **restart the server**
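To see exactly what the running process inherits (as opposed to your login shell), a notebook cell like this can dump the relevant variables. The variable names follow the constants referenced above:

```python
import os

def read_litellm_env():
    """Return the LiteLLM variables as seen by *this* process."""
    return {
        var: os.environ.get(var)
        for var in ("LITELLM_BASE_URL", "LITELLM_MODELS", "LITELLM_API_KEY")
    }

# Any value printed as None is not set in the running server's environment.
for name, value in read_litellm_env().items():
    print(f"{name} = {value!r}")
```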

### Step 4: Validate the `LITELLM_MODELS` Format

Ensure the variable is correctly formatted as a comma-separated list:

Example:

```
litellm/openai/gpt-4o,litellm/anthropic/claude-3-5-sonnet
```

Common issues:

* Empty value
* Incorrect formatting
* Extra spaces or invalid model names

If parsing fails, the system silently falls back to default models.
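As a rough sanity check, you can mimic strict comma-separated parsing yourself. This is an illustrative sketch, not Mito's actual parser; the `litellm/` prefix check simply follows the example format above:

```python
def check_litellm_models(raw):
    """Split a LITELLM_MODELS-style string and flag common formatting issues."""
    problems = []
    if not raw or not raw.strip():
        problems.append("value is empty")
    models = []
    for entry in (raw.split(",") if raw else []):
        model = entry.strip()
        if not model:
            problems.append("empty entry (stray comma?)")
            continue
        if entry != model:
            problems.append(f"extra whitespace around: {model!r}")
        if not model.startswith("litellm/"):
            problems.append(f"missing 'litellm/' prefix: {model!r}")
        models.append(model)
    return models, problems
```

Running it on the example value above should report no problems; an empty value or an entry with spaces or a missing prefix gets flagged.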

### Step 5: Check Server Startup Logs

When the server starts, it logs whether LiteLLM is configured.

Look for log lines like:

* `"Enterprise mode enabled"`
* `"LiteLLM configured: endpoint=..., models=..."`

If these are missing:

* LiteLLM is not being detected at startup
* Likely an environment or configuration issue
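If your server logs go to a file, a quick scan for those markers might look like this. The log path is an assumption; substitute wherever your deployment writes logs:

```python
def find_litellm_log_lines(log_path):
    """Return log lines mentioning the LiteLLM/enterprise startup markers."""
    markers = ("Enterprise mode enabled", "LiteLLM configured")
    with open(log_path) as f:
        return [line.rstrip() for line in f if any(m in line for m in markers)]

# Usage (the path is hypothetical):
# for line in find_litellm_log_lines("/var/log/jupyter/server.log"):
#     print(line)
```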

### Step 6: JupyterHub-Specific Considerations

If you're running JupyterHub, the issue is often environment-related.

Common causes:

**1. Environment Variables Not Applied to User Servers**

* Variables may be set on the Hub, but not passed to user notebook servers

**2. Server Not Restarted**

* Environment variables are read at startup
* Restart is required after any changes

**3. Per-User Differences**

* Enterprise mode may depend on user-specific configuration
* One user may be enabled, another not

#### Configuring Environment Variables via `jupyterhub_config.py`

If you are using a `jupyterhub_config.py` file, you may need to explicitly inject the LiteLLM configuration into each user’s environment using a pre-spawn hook.

Below is an example configuration:

```python
import os

def pre_spawn_hook(spawner):
    # Inject LiteLLM settings into each user's single-user server.
    spawner.environment["LITELLM_BASE_URL"] = "https://my-litellm-server.com"
    spawner.environment["LITELLM_MODELS"] = "litellm/openai/gpt-oss-120b"
    # Read the key from the Hub's environment rather than hardcoding it.
    spawner.environment["LITELLM_API_KEY"] = os.environ["LITELLM_API_KEY"]

c.Spawner.pre_spawn_hook = pre_spawn_hook
```

This ensures that each user’s notebook server has access to the required LiteLLM environment variables at startup.
