Run AI models entirely on your Mac for fully offline, air-gapped operation. Zero network requests. Your code never leaves your machine.

Ollama

1. Install Ollama
   Download and install Ollama from ollama.ai.

2. Pull a model
   Open Terminal and pull a coding model:

   ollama pull codellama

   or

   ollama pull deepseek-coder
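
   To confirm a pull succeeded, list every model Ollama has stored locally:

   ollama list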

3. Verify the local server
   Ollama runs a local server at localhost:11434. It starts automatically after installation.
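
   You can check the server from Terminal; Ollama exposes a small HTTP API on that port:

   curl http://localhost:11434/api/tags

   This returns a JSON list of your pulled models. If the request fails, launch the Ollama app (or run ollama serve) and try again.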

4. Configure Parsaa
   Open Parsaa Settings and, under Local Models, set the Ollama endpoint (http://localhost:11434 from the previous step).

5. Select the model
   Open the model selector in Parsaa and choose your local model. All inference runs on-device.
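
   To see the same on-device inference outside Parsaa, you can send a one-off prompt straight to Ollama's generate endpoint (the model name assumes you pulled codellama in step 2):

   curl http://localhost:11434/api/generate -d '{
     "model": "codellama",
     "prompt": "Write a function that reverses a string.",
     "stream": false
   }'

   The response is produced entirely on your Mac; no external request is made.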

LM Studio

1. Install LM Studio
   Download and install LM Studio from lmstudio.ai.

2. Download a model
   Browse the LM Studio model catalog and download a coding-focused model.

3. Start the local server
   In LM Studio, open the Local Server tab and start the server.
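
   LM Studio serves an OpenAI-compatible API, by default on port 1234. To confirm the server is up (adjust the port if you changed the default):

   curl http://localhost:1234/v1/models

   This lists the models the server can currently serve.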

4. Configure Parsaa
   Open Parsaa Settings and configure the LM Studio endpoint under Local Models (by default http://localhost:1234).

5. Select the model
   Open the model selector in Parsaa and choose the LM Studio model.
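
   As with Ollama, you can exercise the endpoint directly to confirm it is local. The model ID below is a placeholder; substitute one reported by curl http://localhost:1234/v1/models:

   curl http://localhost:1234/v1/chat/completions \
     -H "Content-Type: application/json" \
     -d '{
       "model": "YOUR-MODEL-ID",
       "messages": [{"role": "user", "content": "Explain an optional type in one sentence."}]
     }'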

Hardware Recommendations

Apple Silicon (M1 or later) with 16 GB+ RAM is recommended for local models. Larger models (13B+ parameters) benefit from 32 GB+ RAM. Apple’s unified memory architecture makes M-series chips particularly well-suited for local inference.

Model Size         Minimum RAM   Recommended RAM
7B parameters      8 GB          16 GB
13B parameters     16 GB         32 GB
34B+ parameters    32 GB         64 GB
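
These figures follow from a back-of-envelope rule: weight memory is roughly parameter count times bytes per parameter, plus headroom for the context window, macOS, and your other apps. A rough sketch, assuming 4-bit quantization (about 0.5 bytes per parameter, a common default for local models rather than a figure from this guide):

# GB of weights ≈ billions of parameters × bytes per parameter
echo '7 * 0.5' | bc -l     # 7B model:  ≈ 3.5 GB of weights alone
echo '13 * 0.5' | bc -l    # 13B model: ≈ 6.5 GB
echo '34 * 0.5' | bc -l    # 34B model: ≈ 17 GB

The recommended figures in the table leave several times that footprint free for the rest of the system.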

Privacy

With local models, your code never touches any network. Every request is processed entirely on your Mac. This is ideal for:
  • Proprietary codebases where source code cannot leave the organization
  • Regulated industries with strict data residency requirements
  • Air-gapped environments with no internet access (a quick offline check follows this list)
  • Any scenario where you need complete control over where your data goes
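
One way to verify the zero-network claim yourself is to take the machine offline and confirm the local server still answers. A minimal check on macOS, assuming the Ollama setup above (en0 is a common Wi-Fi interface name but may differ on your Mac):

# Take Wi-Fi down, then confirm the local model still responds
networksetup -setairportpower en0 off
curl http://localhost:11434/api/tags   # still succeeds: inference needs no network
networksetup -setairportpower en0 on
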
Local models trade some capability for privacy. Cloud-hosted models like Claude Opus 4.5 and GPT-5.2 generally produce higher-quality results for complex tasks. Choose based on your privacy requirements and the complexity of your work.