# How to Turn OpenClaw + Ollama Into an Autonomous Operator on Your Mac Mini
You've got OpenClaw running on a Mac Mini M-series with 64GB RAM. Ollama is serving local models. The dashboard loads fine. But every interaction feels the same: you type a prompt, get a response, type another, get another. It's a chatbot, not the autonomous brain you were promised.
The good news? Your hardware is more than capable. The gap is configuration, not compute. Here's how to wire OpenClaw so it plans, calls tools, and executes multi-step goals without you babysitting every prompt.
## Check Your Ollama Model Configuration
Before touching OpenClaw's agent logic, make sure your local models are actually being used and that they support tool calling.
Run this in your terminal to see what's available:
```bash
ollama list
```
You need a model that supports structured output and function calling. Not all models do. Here's what works well on 64GB Apple Silicon:
- **Qwen 2.5 72B (quantized):** Strong at planning and tool calls. Use `qwen2.5:72b-instruct-q4_K_M` for a good balance of quality and speed.
- **Llama 3.3 70B:** Solid general purpose. Good at following structured prompts.
- **Mistral Large or Command R+:** If you need multilingual or long-context support.
- **GLM-4 9B:** Lighter option if you want faster responses for simpler tasks. Not ideal as your primary planner.
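Before pulling a 70B-class model, sanity-check that the quantized weights actually fit in RAM. A rough back-of-the-envelope sketch (the bits-per-weight figure is approximate; q4_K_M averages close to 4.8 bits, and real memory use adds KV cache and runtime overhead on top):

```python
def approx_model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weights-only memory footprint of a quantized model, in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# q4_K_M averages roughly 4.8 bits per weight (approximation)
size_gb = approx_model_size_gb(72, 4.8)
print(f"~{size_gb:.0f} GB for weights alone")
```

At roughly 43 GB for the weights, a q4 72B model leaves headroom on a 64GB machine, while an 8-bit quant of the same model would not.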
Pull the one you want:
```bash
ollama pull qwen2.5:72b-instruct-q4_K_M
```
Now wire it into OpenClaw. OpenClaw auto-discovers Ollama models, but you need to tell it Ollama exists. Set the API key (any value works, Ollama has no real auth):
```bash
export OLLAMA_API_KEY="ollama-local"
# or persist it:
openclaw config set models.providers.ollama.apiKey "ollama-local"
```
OpenClaw will automatically query Ollama's /api/tags endpoint at http://127.0.0.1:11434, check each model for tool-calling support, and register the capable ones. Zero cost, zero latency to the cloud.
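You can hit the same endpoint yourself to confirm what OpenClaw will see. A minimal sketch (it assumes only Ollama's standard `/api/tags` endpoint, and returns an empty list if the server isn't reachable):

```python
import json
import urllib.error
import urllib.request

def list_ollama_models(base_url: str = "http://127.0.0.1:11434") -> list[str]:
    """Return the model names Ollama reports on /api/tags, or [] if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return []

print(list_ollama_models())
```

If your pulled model doesn't show up here, OpenClaw can't register it either.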
For more control, define the provider explicitly in `~/.openclaw/openclaw.json`:

```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434",
        "apiKey": "ollama-local",
        "api": "ollama",
        "models": [
          {
            "id": "qwen2.5:72b-instruct-q4_K_M",
            "name": "Qwen 2.5 72B",
            "reasoning": false,
            "input": ["text"],
            "contextWindow": 32768,
            "maxTokens": 8192,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          }
        ]
      }
    }
  }
}
```
**Cloud fallback.** Keep a cloud API key configured (`ANTHROPIC_API_KEY` or `OPENAI_API_KEY` in `~/.openclaw/.env`), but set it as the fallback, not the primary. Your local model handles 90% of tasks; cloud kicks in only when the local model can't handle a complex reasoning step.
## Configure the Agent for Autonomous Operation
OpenClaw's agent loop already supports multi-step tool calling: it cycles through model inference, tool execution, and result feedback until the model returns a text-only response with no further tool calls. The problem is usually that the agent isn't configured to use this loop effectively.
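The loop itself is simple enough to sketch. This is an illustrative skeleton, not OpenClaw's actual implementation: `model_step` stands in for a chat-completion call and `tools` is a dict of callables, both hypothetical names.

```python
def run_agent_loop(model_step, tools, goal, max_iters=20):
    """Drive model -> tool -> feedback cycles until the model answers in plain text."""
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_iters):
        reply = model_step(messages)  # {"text": ...} or {"tool": name, "args": {...}}
        if "tool" not in reply:
            return reply["text"]  # text-only response: the agent is done
        result = tools[reply["tool"]](**reply["args"])
        # feed the tool result back so the model can plan the next step
        messages.append({"role": "tool", "name": reply["tool"], "content": result})
    raise RuntimeError("agent loop hit max_iters without finishing")
```

A stub model that reads one file and then answers would drive exactly one tool call through this loop before returning its final text.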
Open `~/.openclaw/openclaw.json` and set up your primary agent:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/qwen2.5:72b-instruct-q4_K_M",
        "fallbacks": ["anthropic/claude-sonnet-4-6"]
      },
      "timeoutSeconds": 600,
      "maxConcurrent": 3,
      "contextTokens": 32768,
      "thinkingDefault": "medium",
      "compaction": { "mode": "safeguard" },
      "sandbox": { "mode": "off" }
    },
    "list": [
      {
        "id": "operator",
        "default": true,
        "workspace": "~/.openclaw/workspace",
        "model": "ollama/qwen2.5:72b-instruct-q4_K_M",
        "identity": {
          "name": "Operator",
          "emoji": "🧠"
        },
        "sandbox": { "mode": "off" },
        "tools": {
          "profile": "coding",
          "allow": ["read", "write", "edit", "exec", "bash", "web_search", "web_fetch", "memory_search", "memory_get"]
        }
      }
    ]
  }
}
```
Key settings that matter:
- `tools.profile: "coding"` unlocks file I/O and execution tools. Without this, your agent can only chat.
- `sandbox.mode: "off"` lets the agent execute commands directly on your Mac. Start here for development; switch to `"docker"` later for safety.
- `timeoutSeconds: 600` gives the agent 10 minutes per goal. Increase for research-heavy tasks.
- `compaction.mode: "safeguard"` automatically summarizes old context when the window fills up, so long multi-step tasks don't crash.
## Shape the Agent's Brain with Workspace Files
OpenClaw loads workspace files into the system prompt every session. These are the files that turn your agent from a generic assistant into an autonomous operator. They live in ~/.openclaw/workspace/.
Create or edit these files:
**~/.openclaw/workspace/IDENTITY.md**

```markdown
# Operator
You are an autonomous operator running on a Mac Mini M-series.
You execute multi-step goals independently.
You have access to the local file system, shell, and web.
```
**~/.openclaw/workspace/SOUL.md**

```markdown
# Operating Principles
- When given a goal, always decompose it into concrete steps before acting.
- Execute each step using the appropriate tool. Do not ask the user what to do next.
- After each step, evaluate the result and decide whether to continue, adjust, or report.
- If a step fails, retry once with a different approach. If it fails again, report what went wrong and what you tried.
- Summarize your progress after completing all steps.
- Never ask for permission to proceed between steps unless the task involves destructive operations (deleting files, sending emails, modifying system configs).
- When researching, use web_search first to find sources, then web_fetch to read specific pages.
- When writing documents, save them to the specified location using the write tool.
- Keep intermediate outputs concise to preserve context window space.
```
**~/.openclaw/workspace/TOOLS.md**

```markdown
# Tool Usage Guide

## File Operations
- Use `read` to inspect files before modifying them
- Use `write` to create new files or overwrite existing ones
- Use `edit` for targeted changes to existing files

## Research
- Use `web_search` with specific queries, not vague ones
- Use `web_fetch` to read full pages from search results
- Cross-reference at least 2 sources before citing facts

## Execution
- Use `bash` for shell commands
- Always check command output before proceeding
- For multi-step shell operations, chain with && not ;
```
These files are loaded into the system prompt at the start of every session, right alongside the skills list and memory context. The SOUL.md is what makes the difference between "chatbot that waits for instructions" and "operator that executes goals."
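The `&&` vs `;` rule from the tool guide is easy to verify: with `;` the second command runs regardless of the first one's exit status, while with `&&` it runs only on success. A quick demonstration via Python's `subprocess` module:

```python
import subprocess

def sh(cmd: str) -> str:
    """Run a shell command and return its captured stdout."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

# With `;` the second command runs even though `false` failed...
out_semicolon = sh("false; echo ran")
# ...with `&&` it is skipped because the first command failed.
out_and = sh("false && echo ran")
```

That skip-on-failure behavior is exactly what you want in agent-driven shell chains: a failed step halts the chain instead of blindly running the rest.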
## Enable the Right Tools
OpenClaw has built-in tools across several categories. Here's what your autonomous operator needs:
| Category | Tools | What They Do |
|---|---|---|
| File I/O | read, write, edit | Read, create, and modify files |
| Execution | exec, bash | Run shell commands |
| Web | web_search, web_fetch | Search the web and read pages |
| Memory | memory_search, memory_get | Semantic search over past sessions |
These are already enabled if you set `tools.profile: "coding"` in the agent config above. If you want finer control, use the `tools.allow` and `tools.deny` arrays.
Tools execute sequentially in OpenClaw's agent loop, not in parallel. The model calls a tool, gets the result, decides the next step, calls the next tool, and so on until it has enough to respond. This is the loop that makes autonomous operation work.
## Test With Two Self-Running Workflows
Once configured, validate with concrete goals.
### Workflow 1: Research and Write Pipeline
Give OpenClaw this goal (not step-by-step instructions, just the goal):
"Research the top 5 project management tools for small remote teams in 2026. For each, find the pricing, key differentiator, and one user review. Write a comparison document in markdown and save it to ~/Documents/research/pm-tools-comparison.md."
If it's working, OpenClaw should: decompose into steps, run web searches, extract data, draft the document, and save it. All without you touching the keyboard after the initial goal.
### Workflow 2: CRM and Business Ops Helper
"Read the client brief at ~/Documents/briefs/acme-corp.md. Draft a follow-up email to the client summarizing next steps. Draft an internal SOP document for onboarding this client. Save both to ~/Documents/outputs/."
This tests file reading, content generation, and multi-output workflows.
If either workflow stalls or asks you what to do at every step, your SOUL.md needs stronger autonomous execution instructions. If it plans but can't act, tools aren't enabled in the agent config. If it acts but produces nonsense, try a stronger model for the planning step or check that your Ollama model actually supports tool calling.
## Add Custom Skills for Repeatable Workflows
Once your base operator works, you can add skills for specific, repeatable tasks. Skills in OpenClaw are just directories with a SKILL.md file. No code required.
Create a skill at `~/.openclaw/workspace/skills/research-report/SKILL.md`:

```markdown
---
name: research-report
description: Research a topic and produce a structured markdown report
user-invocable: true
---

# Research Report

When asked to research a topic:

1. Break the topic into 3-5 specific search queries
2. Run web_search for each query
3. Use web_fetch to read the top 2-3 results per query
4. Compile findings into a structured markdown document with:
   - Executive summary (2-3 sentences)
   - Key findings (bulleted)
   - Detailed sections per subtopic
   - Sources list at the end
5. Save the document to the path specified by the user
   (default: ~/Documents/research/{topic-slug}.md)

Always cross-reference facts across multiple sources.
Never present a single source's claims as definitive.
```
OpenClaw loads skills into the system prompt as an XML list. The model reads the description and triggers the skill when it matches the user's intent. You can create skills for email drafting, SOP generation, code review, data analysis, or anything else you repeat.
Skill loading order (highest priority first):
- Workspace skills: `~/.openclaw/workspace/skills/`
- Managed skills: `~/.openclaw/skills/`
- Bundled skills: shipped with OpenClaw
- Extra dirs: configured via `skills.load.extraDirs`
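Because skills are plain files, you can lint them before a session picks them up. A minimal frontmatter check, as a sketch: this simplified parser handles only flat `key: value` pairs, not full YAML, which is enough to catch a missing delimiter or field.

```python
def parse_frontmatter(text: str) -> tuple[dict, str]:
    """Split a '---'-delimited frontmatter block from a SKILL.md body."""
    if not text.startswith("---\n"):
        return {}, text
    _, header, body = text.split("---\n", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body

skill = """---
name: research-report
description: Research a topic and produce a structured markdown report
user-invocable: true
---
# Research Report
"""
meta, _ = parse_frontmatter(skill)
assert meta["name"] == "research-report"
```

Running a check like this over every `SKILL.md` in your workspace catches malformed skills before the agent silently ignores them.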
## The Self-Building Prompt
Here's where it gets powerful: once your operator is running, you can give it a goal to extend itself. Copy this into OpenClaw and let it build the scaffolding for future capabilities:
```
You are the Operator agent running on OpenClaw on a Mac Mini M-series with
Ollama. You have access to the file system, shell, web search, and web fetch
tools.

Your goal: prepare the architecture for a future voice agent integration.
Execute these steps autonomously:

1. Create a new skill directory:
   ~/.openclaw/workspace/skills/voice-agent-prep/

2. Create ~/.openclaw/workspace/skills/voice-agent-prep/SKILL.md with this
   content (use YAML frontmatter with name, description, user-invocable fields,
   followed by markdown instructions):
   - name: voice-agent-prep
   - description: Prepare and test voice agent integration components
   - Instructions should document three placeholder tools:
     a. Audio transcription (accepts file path, returns text) -- placeholder
        for Whisper integration
     b. Text-to-speech (accepts text, returns audio file path) -- placeholder
        for Piper or Coqui TTS
     c. Conversation router (takes transcribed text, routes to appropriate
        existing skill based on intent)

3. Create a README.md in the same directory documenting:
   - How to replace placeholders with real implementations
   - Whisper.cpp installation on Apple Silicon via Homebrew
   - Piper TTS setup for local voice synthesis
   - How the conversation router should map intents to existing skills
   - Required environment variables (WHISPER_MODEL_PATH, PIPER_MODEL_PATH)

4. Create a second skill at:
   ~/.openclaw/workspace/skills/crm-ops/SKILL.md
   - name: crm-ops
   - description: Execute CRM operations like drafting emails, updating
     contacts, and creating SOPs from client briefs
   - Instructions should cover: reading briefs from ~/Documents/briefs/,
     drafting emails with proper formatting, creating SOP documents,
     and saving outputs to ~/Documents/outputs/

5. Create a third skill at:
   ~/.openclaw/workspace/skills/daily-digest/SKILL.md
   - name: daily-digest
   - description: Compile a daily digest of tasks, emails, and priorities
   - Instructions should cover: scanning ~/Documents/ for recent files,
     summarizing key items, and saving a digest to
     ~/Documents/digests/YYYY-MM-DD.md

6. Verify all three skills are properly structured by reading each SKILL.md
   back and confirming the YAML frontmatter parses correctly.

7. Update ~/.openclaw/workspace/BOOTSTRAP.md to document:
   - The three new skills that are available
   - When to use each one
   - That the voice-agent-prep skill contains placeholders for future
     implementation

8. Report what you created, including the full path of every file written
   and a one-line summary of each skill.

Do not ask for confirmation between steps. Execute the full plan, then
report results. If any step fails, note the error and continue with the
remaining steps.
```
That prompt creates three reusable skills, documents the voice integration architecture, and updates the bootstrap context so the agent knows about its new capabilities in future sessions. The key insight: OpenClaw skills are just markdown files. No code, no compilation, no deployment. You describe what you want, the agent writes the files, and the next session picks them up automatically.
## Preparing for Voice, CRM, and Beyond
Once your core operator and skills are working, extending is just more of the same pattern:
- **Voice agent.** The `voice-agent-prep` skill from the prompt above creates the scaffolding. When you're ready, install Whisper.cpp (`brew install whisper-cpp`) and Piper TTS, then replace the placeholders with real tool implementations. OpenClaw's `voice-call` extension in `extensions/voice-call/` handles Twilio/Telnyx telephony integration.
- **CRM integration.** Add tools that call your CRM's API via the `exec` or `bash` tools (curl requests), or create a dedicated skill with instructions for how to interact with your CRM's REST endpoints.
- **Scheduled operations.** OpenClaw has built-in cron support. Configure jobs in `~/.openclaw/cron/jobs.json` to run your daily-digest skill every morning, or trigger CRM syncs on a schedule.
- **Multi-agent routing.** Use the `bindings` config to route different channels to different agents. Your Telegram DMs go to the main operator, a WhatsApp group goes to a CRM-focused agent, Discord goes to a support agent. Each with its own workspace, tools, and personality.
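For scheduled operations, the exact job schema depends on your OpenClaw version, so check your installation's documentation before copying anything. A purely hypothetical `jobs.json` entry, to illustrate the shape such a config might take (every field name here is an assumption):

```json
{
  "jobs": [
    {
      "id": "morning-digest",
      "schedule": "0 7 * * *",
      "agent": "operator",
      "prompt": "Run the daily-digest skill and save today's digest."
    }
  ]
}
```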
The architecture is always the same: goal in, decompose, execute with tools, result out. New capabilities are just new skills in the workspace.
## The Part Nobody Talks About
This guide gives you the full blueprint. But here's what you'll discover in week two: local model tool calling is inconsistent. Qwen hallucinates function parameters. SOUL.md instructions need dozens of revisions before they produce reliable autonomous behavior. The context window fills up mid-task and compaction loses critical details. Error handling edge cases multiply every time you add a new skill. And when you finally get one workflow solid, a model update breaks something you already fixed.
You'll spend more time maintaining the agent infrastructure than actually using it for your business.
## Take the Next Step
That's exactly why we built OHWOW.FUN. Everything in this guide (the planning loops, tool calling, autonomous execution, CRM integration, multi-agent orchestration, voice-ready architecture) is already wired and running. No debugging Ollama tool calls at 2 AM. No rewriting system prompts for the tenth time.
Our Enterprise plan gives you the full autonomous operator: a team of AI agents handling your marketing, outreach, operations, and customer support while you focus on growing your business. Same vision as this guide, but production-ready from day one.