
Setting Up Ollama

Configure local AI models for StashCanvas

What is Ollama?

Ollama is a tool that lets you run large language models (LLMs) locally on your computer. This means you can use AI features in StashCanvas without sending data to external servers, giving you more privacy and control.

Installation

macOS

brew install ollama

Linux

curl -fsSL https://ollama.com/install.sh | sh

Windows

Download the installer from the official website.
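Whichever platform you use, you can sanity-check the install from a terminal once it finishes (this assumes the `ollama` binary ended up on your PATH):

```shell
# Confirm the CLI is installed and print its version
ollama --version

# If the desktop app or background service isn't running,
# you can start the server manually in the foreground
ollama serve
```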

Downloading Models

After installing Ollama, you need to download a model. We recommend starting with Qwen 3.5:

ollama pull qwen3.5:latest

Other popular options include:

  • ollama pull llama3.1
  • ollama pull mistral
  • ollama pull codellama
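After pulling, you can check what's on disk and clean up models you no longer need (the model name below is just an example):

```shell
# List locally downloaded models with their sizes
ollama list

# Remove a model you no longer need
ollama rm mistral
```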

Required Settings

For StashCanvas to work with Ollama, you need to configure two settings in the Ollama app:

  1. Context Size (128k): In the Ollama settings, set the context size to 128k (or 64k on machines with limited memory). A larger context window lets the AI handle longer conversations and more supporting material.
  2. Expose to Network: Enable "Expose to Network" or "Allow connections from other devices" in Ollama settings. This allows StashCanvas running on different ports or devices to connect to your Ollama instance.
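If you run Ollama from a terminal rather than the desktop app, recent releases expose roughly the same settings as environment variables. A sketch, assuming your version supports `OLLAMA_CONTEXT_LENGTH` and `OLLAMA_HOST` (check `ollama serve --help` to confirm):

```shell
# OLLAMA_CONTEXT_LENGTH sets the default context window (128k = 131072 tokens)
# OLLAMA_HOST=0.0.0.0 listens on all interfaces, i.e. "expose to network"
OLLAMA_CONTEXT_LENGTH=131072 OLLAMA_HOST=0.0.0.0 ollama serve
```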

Configuring CORS

To allow StashCanvas to communicate with Ollama, you need to configure the OLLAMA_ORIGINS environment variable.

Windows

  1. Open Start Menu and search for "Environment Variables"
  2. Click "Environment Variables..."
  3. Under User variables, click New
  4. Set Variable name: OLLAMA_ORIGINS
  5. Set Variable value: * (or the specific URL like https://stashcanvas.com)
  6. Click OK to save
  7. Important: Right-click the Ollama icon in your system tray and select "Quit Ollama"
  8. Restart Ollama from the Start Menu

macOS

If running as a desktop app:

launchctl setenv OLLAMA_ORIGINS "*"

Quit the Ollama app from the menu bar and restart it.

If running via terminal:

OLLAMA_ORIGINS="*" ollama serve

Linux (systemd)

sudo systemctl edit ollama.service

Add under [Service]:

Environment="OLLAMA_ORIGINS=*"

Then reload and restart the service:

sudo systemctl daemon-reload && sudo systemctl restart ollama
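On any platform, you can confirm the CORS setting took effect by sending a request with an Origin header and checking for an Access-Control-Allow-Origin header in the response. A quick check, assuming Ollama is running on its default port:

```shell
# If OLLAMA_ORIGINS allows this origin, the response headers
# should include Access-Control-Allow-Origin
curl -sI -H "Origin: https://stashcanvas.com" \
  http://localhost:11434/api/tags | grep -i access-control
```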

Using with StashCanvas

Once Ollama is installed and running, you can use it in StashCanvas by adding an AI Chat node to your canvas. The AI will automatically detect your local Ollama instance and use it for generating responses.
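Under the hood, clients like StashCanvas talk to Ollama's standard HTTP API, so you can exercise the same endpoint yourself. A one-off completion request (the model name assumes you pulled qwen3.5 earlier; swap in any model from `ollama list`):

```shell
# Request a single non-streaming completion from the local server
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3.5:latest",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```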

Troubleshooting

Ollama not starting

Try running ollama serve in a terminal; any startup errors will be printed there.

Model not found

Make sure you've downloaded the model with ollama pull modelname.

Connection refused

Check that Ollama is running on port 11434. You can verify with curl http://localhost:11434, which should respond with "Ollama is running".
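The connection check above can be scripted. A small diagnostic, assuming the default port:

```shell
# Report whether the Ollama server answers on the default port
if curl -s --max-time 2 http://localhost:11434 >/dev/null; then
  echo "Ollama is reachable on 11434"
else
  echo "No response on 11434 - is 'ollama serve' running?"
fi
```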