v0.1.0-beta.1 · Releasing soon

Think deeper.
Reason further.
Verify everything.

Some questions need breadth. Others need proof. MIRA brings both — NAE agents that reason across your entire problem space, and RLM as the executor that won't stop until the numbers hold. Open-source. Any LLM. Nothing leaves your machine.

MIT License · Zero Telemetry · On-Device · Own Your Data · Open Source

Adapt to every session — instantly

Conversational · Skill Persona · Workflow

Every session is yours to shape — chat freely, focus with a Skill, or automate with a Workflow.


Who it's for

Built for anyone who needs
to be right.

Whether you're running data analysis, reviewing documents, or synthesising research — MIRA adapts to your domain so you don't have to adapt to it.

Data Analysts & Scientists

Ask questions about your data in plain language. MIRA computes the answer from your actual data — every number earned, every result auditable.

Researchers & Academics

Upload papers, datasets, and reports. Run multi-step research pipelines that synthesise evidence and cite real sources, not statistically plausible ones.

Legal & Compliance

Cross-reference contracts, flag conflicting clauses, and surface jurisdiction risks. RLM runs the analysis in code so every finding is traceable, not inferred.

Financial & Business Analysts

Model, forecast, and cross-reference data with an engine that verifies its own calculations before reporting them — not an engine that guesses.

Security Professionals

Threat modelling, secure code review, and CVE analysis with an engine that doesn't skip steps or paper over uncertainty.

Anyone Handling Sensitive Data

Everything runs on your machine. Documents, credentials, and conversations never leave — by design, not by policy.

Capabilities

Ask anything.
MIRA figures out the rest.

Analyse a CSV, review a contract, dig into financial reports, simulate scenarios, or synthesise research papers — MIRA decomposes the problem, reasons through each part with the right engine, and shows you exactly what it did and why.

Engines

Two Reasoning Engines

The Native Agent Engine spawns parallel sub-agents for open-ended research, managing long context and episodic memory automatically. The RLM Engine investigates using real computation, observes the actual output, and refines until the answer is verified — not inferred.

NAE parallel sub-agents · RLM code execution loop · Switch in Settings (⌘,)
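The RLM loop described above — run real code, observe the real output, refine until the answer is verified — can be sketched in a few lines. Everything here is illustrative: the function names and the candidate list are hypothetical, not MIRA's actual API.

```python
# Illustrative execute-observe-refine loop in the RLM style.
# `candidates` stands in for code the engine would generate each iteration.

def run_until_verified(candidates, verify, max_iters=30):
    """Execute each candidate, observe its actual result, and stop
    only when the verifier accepts it."""
    for i, code in enumerate(candidates[:max_iters], start=1):
        try:
            result = eval(code, {})  # observe the real output, not a guess
        except Exception:
            continue  # execution failed: refine with the next candidate
        if verify(result):
            return {"iteration": i, "result": result, "verified": True}
    return {"verified": False}

# Toy run: the first candidate doesn't hold up, the second does.
outcome = run_until_verified(
    ["2 + 2 == 5", "sum(range(101))"],
    verify=lambda r: r == 5050,
)
```

The point of the pattern is that nothing is reported until `verify` has seen a computed value, which is what "the numbers hold" means in practice.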
Context

Document Intelligence

Upload PDFs, Word docs, CSVs, Markdown, code files — up to 50 MB each. Every file is parsed locally, chunked intelligently, and injected into the active reasoning context. Toggle documents in or out per session without restarting.

PDF · DOCX · CSV · MD · TXT · Up to 50 MB per file · Per-session toggling
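"Chunked intelligently" can mean many strategies; a minimal sketch is a fixed-size splitter with overlap, so that content near a boundary appears in two adjacent chunks. This is an assumption for illustration, not MIRA's actual chunker.

```python
# Minimal overlapping-window chunker: each chunk shares `overlap`
# characters with its neighbour so context survives chunk boundaries.

def chunk_text(text, size=200, overlap=50):
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

# 500 characters of varied text -> three overlapping chunks.
doc = "".join(chr(65 + i % 26) for i in range(500))
chunks = chunk_text(doc)
```

A real pipeline would split on sentence or section boundaries instead of raw character offsets, but the overlap idea is the same.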
Personas

Skills — Instant Domain Expert

Activate a Skill and MIRA adopts a specialist mindset for that field. Five built-in Skills cover Life Sciences, Software Architecture, Financial Analysis, Security Review, and Research Synthesis — each tuning the model's temperature, iteration depth, and tool access to match domain norms.

5 built-in Skills · Custom skills supported · Per-skill model overrides
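A custom Skill, as described above, bundles per-domain tuning into one unit. The field names below are hypothetical, inferred from the capabilities listed, not MIRA's actual schema.

```python
# Hypothetical shape of a custom Skill definition.
security_review = {
    "name": "Security Review",
    "temperature": 0.2,        # low temperature for precise, audit-style output
    "max_iterations": 30,      # deeper verification loops for this domain
    "tools": ["code_exec", "web_search"],  # tool access scoped to the Skill
    "model_override": None,    # optionally pin a specific model per Skill
}
```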
Automation

Deterministic Workflows

Build repeatable multi-step pipelines where each step receives the previous step's output via template variables. Add conditional routing rules to branch, retry, or skip based on actual results. Three production pipelines ship out of the box — Research Report, Code Review, Document Deep Dive.

3 built-in pipelines · Conditional routing rules · Custom step templates
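The core mechanics — each step's template receives the previous step's output, and routing rules react to actual results — fit in a short sketch. The `{prev}` placeholder, the `check` rule, and the toy executor are all illustrative assumptions, not MIRA's workflow schema.

```python
# Sketch of a deterministic pipeline: each step's prompt is built from
# the previous step's output via a template variable.

def run_pipeline(steps, run_step):
    prev = ""
    for step in steps:
        prompt = step["template"].format(prev=prev)
        out = run_step(prompt)
        # Conditional routing: retry the step once if its check fails.
        if "check" in step and not step["check"](out):
            out = run_step(prompt)
        prev = out
    return prev

# Toy executor that just upper-cases its prompt, so the flow is visible.
result = run_pipeline(
    steps=[
        {"template": "summarise: {prev}"},
        {"template": "review: {prev}",
         "check": lambda o: o.startswith("REVIEW")},
    ],
    run_step=lambda prompt: prompt.upper(),
)
```

Because every step is a pure template over the previous output, the same inputs always produce the same pipeline — which is what makes the workflows deterministic and repeatable.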
Extensible

MCP Tool Integration

Connect any Model Context Protocol server — web search, SQL databases, REST APIs, or your own internal tools. Both reasoning engines can invoke MCP tools mid-chain and fold the live results straight back into their reasoning loop.

Web search · Databases & REST APIs · Custom MCP servers
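Mid-chain tool use boils down to: look a tool up by name, call it, fold the live result back into the reasoning context. The in-process registry and tool names below are a stand-in for illustration, not the actual MCP wire protocol or SDK.

```python
# Illustrative tool registry; a real MCP client would discover these
# tools from a connected server rather than define them locally.

TOOLS = {
    "web_search": lambda query: [f"result for {query!r}"],
    "sql_query": lambda sql: [{"rows": 0, "sql": sql}],
}

def invoke_tool(name, **kwargs):
    """Dispatch a tool call by name, as an engine would mid-chain."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)

hits = invoke_tool("web_search", query="CVE-2024-3094")
```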
Transparency

Live REPL Console

Every line of code the RLM Engine writes and executes streams to the REPL Console in real time — including stdout, stderr, and engine status messages. Watch reasoning happen step by step. Nothing runs in a black box, and session state persists across iterations.

Real-time execution log · stdout/stderr/stdin streams · Persistent session state
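The raw material of a live console is just a child process's streams. A minimal sketch of capturing them, assuming nothing about MIRA's internals:

```python
import subprocess
import sys

# Run a snippet in a fresh interpreter and capture both streams --
# the same stdout/stderr a live REPL console relays to the UI.
proc = subprocess.run(
    [sys.executable, "-c", "print('step 1 done')"],
    capture_output=True,
    text=True,
)
line = proc.stdout.strip()
```

A real console would read the streams incrementally (e.g. with `subprocess.Popen` and line-buffered pipes) so output appears as it happens rather than after the process exits.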
Quality

Built-in Eval Framework

Automatically grade every response using LLM judges, rule-based checks, embedding similarity scores, and custom metrics. Run evals on demand after any session to track answer quality over time and catch regressions before they become habits.

LLM judge scoring · Rule & similarity checks · Custom metric definitions
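Two of the grading styles named above are easy to sketch: a rule-based check and a similarity score. The token-overlap score here is a deliberately crude stand-in for real embedding similarity, purely for illustration.

```python
# Rule-based check: the answer must contain every required term.
def rule_check(answer, must_contain):
    return all(term in answer for term in must_contain)

# Jaccard overlap of word sets -- a toy stand-in for embedding similarity.
def overlap_score(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

grade = {
    "rules_pass": rule_check("Revenue grew 12% in Q3", ["12%", "Q3"]),
    "similarity": overlap_score("Revenue grew 12% in Q3",
                                "Q3 revenue grew 12%"),
}
```

Running checks like these after every session is what turns "the answer looked fine" into a tracked metric you can watch for regressions.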
Privacy

Zero Telemetry. Fully On-Device.

No analytics pipeline, no crash reports, no cloud sync — ever. API keys are stored in your OS keychain (macOS Keychain, Windows Credential Manager, libsecret). The renderer runs with contextIsolation and has no direct filesystem access. Your data is private by design.

MIT open source · OS keychain for secrets · contextIsolation renderer

How it works

From question to
verified answer.

Install once, connect any LLM, and follow four steps to get answers you can trust and trace.

01

Install — one click, then done

Download the macOS, Windows, or Linux installer from GitHub Releases. On first launch MIRA automatically provisions a bundled Python 3.11 virtual environment. No terminal. No pip. No PATH configuration. Nothing to maintain.

~60 seconds · No admin rights · No terminal
02

Connect the AI you trust

Open Settings (⌘,) and paste an Anthropic, OpenAI, or AWS Bedrock API key — or point MIRA at a local Ollama instance for fully air-gapped inference. Credentials are written directly to your OS keychain, never stored in plain text or sent anywhere.

Anthropic · OpenAI · AWS Bedrock · Ollama
03

Load your context

Drop in any PDF, DOCX, CSV, Markdown, or code file (up to 50 MB each). Attach a Skill to give the AI a specialised reasoning persona — analyst, reviewer, security auditor. Wire up Workflows for repeatable multi-step pipelines. Connect external data via MCP tools.

PDF · CSV · DOCX · Skills · Workflows · MCP
04

Ask. Reason. Verify.

Choose NAE to decompose your question into parallel sub-tasks with automatic context compaction and episodic memory. Choose RLM to investigate with real computation, observe the actual output, and iterate until the answer is proven — not guessed.

NAE multi-agent · RLM code execution · REPL Console

Stay in the loop.

Be the first to know when MIRA ships new reasoning features, engine improvements, and releases. No spam — ever.
