Your API keys shouldn't be one prompt injection away.
Qosm lets you build AI agents you can actually trust. Declare exactly which capabilities an agent gets. The compiler guarantees the rest.
extern read_file : {path: String} -> String ! {File.Read};
extern post_msg : {channel: String, text: String} -> Unit ! {Slack.Write};

let analyze_revenue _ =
  let data = read_file {path: "/data/customers.json"};
  let parsed = parse_json @Array<{customer_id: Int, revenue: Float}> data;
  match parsed with (
    | Ok rows ->
        let summary = rows |> group_by .customer_id |> map (compute_avg >> to_string) |> concat in
        post_msg {channel: "#analytics", text: summary}
    | Err e -> post_msg {channel: "#analytics", text: "Parse failed"});

Credentials stay in the host. The agent calls functions. It never sees API keys.
Qosm is a language and execution environment, not an IDE. Generate it directly in the Workspace, or from Claude Code, Codex, Cursor, or any tool you already use.
It takes less than 1k tokens to teach your favorite model how to program in Qosm.
Agents that heal themselves.
Qosm agents can autonomously inspect their own logs, then fix and redeploy themselves when the code or data they interface with changes. We didn't build this on purpose; it falls out of Qosm's design.
Capabilities, not permissions.
Each capability is a typed function the host provides. Credentials are injected at the boundary; agent code never sees them. The compiler verifies every access at build time.
// Traditional approach: agent sees everything
const apiKey = process.env.OPENAI_KEY;
const dbUrl = process.env.DATABASE_URL;

// Hope the LLM doesn't exfiltrate these...
await agent.run({ tools: ["*"], env: process.env });
extern read_file : {path: String} -> String ! {File.Read};

let summarize path =
let doc = read_file {path: path};
model `Summarize: {{ doc }}`;
// Capabilities: read_file, model. That's it.
// No write, no network, no env vars.
// API keys stay in the host. Invisible to agent code.

Credentials stay in the host
API keys and secrets are injected by the host, never visible to agent code.
Fine-grained, not all-or-nothing
Each capability is a typed function. Grant read without write, one endpoint without the whole network.
Zero runtime overhead
Capabilities are just functions. No sandbox, no container, no IPC. They run at native speed.
Compiler-verified
If a capability isn't declared, the code won't compile. No runtime surprises.
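The whole pattern can be modeled in plain Python on the host side. A minimal sketch, assuming nothing about qosm's real API: each capability is a closure that traps its credential and enforces its own attenuation, so agent code holds functions but never secrets. The helper names (`make_read_file`, `make_post_msg`) are hypothetical.

```python
# Illustrative model of the capability pattern in plain Python, not the
# qosm API. Each hypothetical helper closes over its credential/scope,
# so agent code holds callable functions but cannot reach the secrets.
import os

def make_read_file(root: str):
    # Attenuated at creation time: reads are confined to `root`.
    root_real = os.path.realpath(root)

    def read_file(path: str) -> str:
        full = os.path.realpath(os.path.join(root_real, path.lstrip("/")))
        if full != root_real and not full.startswith(root_real + os.sep):
            raise PermissionError(f"read outside {root!r} denied")
        with open(full) as f:
            return f.read()

    return read_file

def make_post_msg(api_token: str, channel: str):
    # api_token lives only in this closure; the agent gets a function
    # bound to one channel, with no way to read the token back out.
    def post_msg(text: str) -> None:
        print(f"[{channel}] {text}")  # a real host would call Slack here

    return post_msg

# The agent receives exactly two functions: no env vars, no keys.
agent_caps = {
    "read_file": make_read_file("/data"),
    "post_msg": make_post_msg(api_token="xoxb-secret", channel="#analytics"),
}
```

In Qosm the compiler additionally rejects any call outside this set at build time; the Python sketch only shows the runtime half of the idea.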
MCP tools are capabilities.
Connect to any MCP server. Every tool becomes a typed function the agent can call.
// Auto-generated from connected MCP servers
extern create_issue : {repo: String, title: String, body: String}
-> {number: Int, url: String} ! {GitHub};
extern search_repos : {query: String, limit: Int}
-> Array<{name: String, stars: Int, url: String}> ! {GitHub};
extern send_message : {channel: String, text: String}
-> {ts: String} ! {Slack};
extern query : {sql: String, params: Array<String>}
-> Array<{row: {col: String, val: String}}> ! {PostgreSQL};
extern create_charge : {amount: Int, currency: String, customer: String}
-> {id: String, status: String} ! {Stripe};
extern web_search : {query: String}
-> Array<{title: String, url: String, snippet: String}> ! {Linkup};

The host connects to MCP servers and generates typed capability declarations. The agent code stays the same.
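Under the hood this is schema translation: each MCP tool advertises a JSON-Schema input, and the host renders it as a typed extern. A hypothetical sketch of that mapping, not qosm's actual generator, assuming the primitive JSON types map onto String, Int, Float, and Bool:

```python
# Hypothetical generator sketch: render an MCP tool's JSON-Schema input
# as a Qosm-style extern declaration string. Not qosm's real code.

def to_qosm_type(schema: dict) -> str:
    # Map JSON-Schema primitives onto assumed Qosm type names.
    prim = {"string": "String", "integer": "Int", "number": "Float", "boolean": "Bool"}
    t = schema.get("type")
    if t in prim:
        return prim[t]
    if t == "array":
        return f"Array<{to_qosm_type(schema['items'])}>"
    if t == "object":
        fields = ", ".join(
            f"{name}: {to_qosm_type(sub)}"
            for name, sub in schema.get("properties", {}).items()
        )
        return "{" + fields + "}"
    raise ValueError(f"unsupported schema: {schema}")

def extern_decl(tool_name: str, input_schema: dict, result_type: str, effect: str) -> str:
    # One extern per tool: name, input record, result type, effect set.
    return f"extern {tool_name} : {to_qosm_type(input_schema)} -> {result_type} ! {{{effect}}};"

# An MCP tool description roughly as a Slack server might report it.
schema = {
    "type": "object",
    "properties": {"channel": {"type": "string"}, "text": {"type": "string"}},
}
decl = extern_decl("send_message", schema, "{ts: String}", "Slack")
```

The result type and effect label are taken from the server's metadata in the same way; the agent only ever sees the finished declaration.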
Capabilities are just functions.
The host grants typed functions to the agent. The agent can call them, compose them, pass them around, but it can never invent new ones.
extern http_get : {url: String} -> String ! {Http.Get};
extern db_insert : {key: String, value: String} -> Unit ! {Db.Insert};

let fetch_and_store url key =
let data = http_get {url: url};
db_insert {key: key, value: data};
// The agent can only call http_get and db_insert.
// No file access. No LLM. Nothing else exists.

The host decides which capabilities to expose. The agent can only call what it receives.
The host attenuates each capability before granting it to the agent.
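Attenuation is just function wrapping. A sketch in host-side Python, with a hypothetical `attenuate_http_get` helper (not part of any qosm API): the host holds a broad capability, narrows it, and grants only the narrowed version.

```python
# Sketch of capability attenuation: wrap a broad capability in a
# narrower one before granting it. The helper name is hypothetical.
from typing import Callable
from urllib.parse import urlparse

def attenuate_http_get(http_get: Callable[[str], str], allowed_host: str) -> Callable[[str], str]:
    # Returns a new capability that reaches exactly one host.
    def narrowed(url: str) -> str:
        if urlparse(url).hostname != allowed_host:
            raise PermissionError(f"only {allowed_host} is allowed")
        return http_get(url)

    return narrowed

# Host side: the broad capability exists here, but the agent only
# ever receives the narrowed version.
def broad_http_get(url: str) -> str:
    return f"fetched {url}"  # stand-in for a real HTTP client

agent_http_get = attenuate_http_get(broad_http_get, "api.example.com")
```

Because the narrowed function is all the agent holds, there is nothing to escalate: the broad capability is simply out of reach.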
Qosm runs anywhere.
Run on our servers or embed the runtime in yours. Five lines of Python to go from capabilities to typed results.
import qosm

q = qosm.init(capabilities={
    "read_file": qosm.fs("/data/*", read_only=True),
    "post_msg": qosm.slack("#analytics"),
    "model": qosm.llm("gpt-4o", max_tokens=1024),
})

result = q.run("@acme/analytics", {
    "input": "Summarize Q4 revenue by region"
})
print(result)

> python main.py
Resolving @acme/analytics...
Capabilities: read_file, post_msg, model
Attenuations: path=/data/*, channel=#analytics
Ok { value: {
  summary: "Q4 revenue by region:
    North America: $4.2M (+12%)
    EMEA: $2.8M (+7%)
    APAC: $1.9M (+23%)",
  posted_to: "#analytics",
  tokens_used: 847
}}

Run on our servers. API call in, typed result out.
Bundle the interpreter in Python, TypeScript, Go, or Rust.
A full stack, built for agents.
Not adapted from an existing language. Designed from scratch so LLMs can read it, write it, and never produce unsafe code.
Designed for LLMs
Typed LLM calls are a language primitive. model takes a prompt, returns typed structured data. Minimal syntax, no implicit behavior — LLMs read it and write it natively.
let classify text =
model @{category: String, confidence: Float}
`Classify: {{ text }}`;
// model is a keyword. It returns typed structured data.
// No SDK, no parsing — just call model with a type.

Types work for you, not against you
You never write type annotations — the compiler infers everything. Parse JSON into typed records with a single call. Flexible records adapt to any schema.
let process data =
let parsed = parse_json @{name: String, score: Float} data;
match parsed with
| Ok r -> .name r
| Err e -> "parse failed";
// No type annotations anywhere. Fully inferred.
// parse_json: one call, typed boundary crossing.

No invisible runtime errors
Every computation that can fail returns a Result. No exceptions, no null, no undefined. If it compiles, every failure path is explicitly handled in the code.
// model returns Result<'a, String>, not a bare value.
// parse_json returns Result<'a, ParseError>.
// The compiler forces you to handle both cases.
let safe_classify text =
match model @{label: String} `Label: {{ text }}` with
| Ok r -> .label r
| Err e -> "unknown";

Four execution targets. One semantics.
Write once, run on any backend. From REPL prototyping to hardened production deployment.
Tree-walking evaluator for development and debugging. Instant feedback, full error traces.
Stack-based virtual machine. Compiles to bytecode for fast, sandboxed execution in production.
Compile to ESTree for embedding in any JavaScript runtime. Same semantics, native performance.
The security endgame. A minimal, single-purpose VM image with no OS surface. Maximum isolation for production agent deployment.