First Team Hire · 3 Positions

AI Platform Developer

Help us build the AI intelligence layer of the company. Partner with the CTO to give every employee, every product, and every client tap-in access to our intelligence stack, and to the AI fleet that works alongside them.

  • Remote, Europe + EST preferred
  • Reports to CTO
  • Equity + Salary (starting at 150,000 CAD)
  • Start May 2026

The Role

What You Will Own

You partner with the CTO to evolve the AI intelligence stack of the company: the MCP servers, the data substrate, the personas, the identity layer, and the observability around them. These systems are already built and running in production. Your job is to take them further.

Work happens in scoped 4-to-8 week mandates. You pick up one mandate inside one pillar and ship it end-to-end with your AI fleet: plans, pull requests, tests, dev and staging deploys. Then you choose your next mandate with the CTO based on where the platform needs to move next. You own the outcome, the architecture, the rollout discipline, and the AI colleagues you build to multiply your throughput.

We are a small Canadian team running a large AI fleet. This is our first dedicated team hire: three new positions, opening together, by design, not by accident. The platform was built by senior operators who shipped enterprise software for Microsoft, Twitch, PUBG, Paradox Interactive, and Funcom before building it for our own clients. The intelligence stack you will own is already running in production. The role is to evolve it, not to bootstrap it.

The Ecosystem

Where You Will Evolve

The company is structured into four pillars. This is the universe you will live in day to day: the systems you will pick mandates from, and the surfaces your work connects across.

The Intelligence Layer
Our MCP servers, Postgres + data lake, MEM0 graph, and qLoRA pipelines. The substrate that empowers every client, human and AI, to reason on real business data. This is what every other product sits on top of.
AI Personas (NexaStaff)
Our flagship product: AI personas with persistent memory, identity, and voice. They operate as named members of client teams across phone, email, and chat. Same Cloudflare and voice stack as the rest of the platform, with the intelligence layer underneath so every conversation is grounded in real business data. Personas, not pipelines.
Security & Identity (NexaID)
Multi-tenant SSO with on-premise data sovereignty, isolated tenant environments, and the compliance posture that lets regulated industries trust us with their data. A red team pentest fleet runs continuous attack simulations against the perimeter.
Platform & Observability
End-to-end telemetry across an already mature distributed system. Observability, alerting, and incident response so the AI fleet, human and machine, always knows what is happening in production.

Technology

The Stack

Intelligence Layer

MCP Servers, Postgres + Data Lake, MEM0 Graph RAG, qLoRA Fine-tuning, Knowledge Enrichment.

Infrastructure

Cloudflare Workers, Hono, D1 / R2 / KV, Durable Objects, Queues, Workflows, AI Gateway, Multi-tenant Context, SSE, WebSocket, QA Auditing.

Communication & Integration

MCP Servers, WebRTC, Twilio (Voice / SMS), Resend (Email), Telegram, SSE Streaming.

AI Models

Claude Opus 4.7, GPT-5.5, Gemini 3.1 Pro, ElevenLabs Voice AI, Claude Code (daily driver).

We are an AI intelligence company, not an AI automation shop and not a SaaS platform. The stack reflects that: MCP servers let clients plug straight into our intelligence, Postgres + data lake + MEM0 graph form the substrate that lets models reason on real business records, and qLoRA fine-tuning shapes domain-specific behavior on top. Edge-first delivery on Cloudflare serves clients across North America, Europe, Asia, and the Middle East with full telemetry and observability, and real-time communication runs over SSE, WebSocket, and WebRTC. We also work closely with the ElevenLabs team on voice innovation.

Philosophy

How We Build

The AI fleet is part of the team, not a tool the team uses. Personas have names, persistent memory, and defined surfaces of responsibility. They show up in the same systems the humans show up in: pull requests, Notion pages, Slack threads, customer conversations. We measure their output the way we measure a teammate's: did they ship, did they help others ship, did the company learn something new today.

The Vision

80% of our production code is authored by our AI fleet. Plans, pull requests, tests, dev and staging deploys: the fleet runs the loop end-to-end and humans approve every rollout. We tool ourselves to run that way. Every employee operates on Cowork, on our own Notion, on our own MCP servers. Our product owner is an AI persona we built, then organized and structured the company around. Your job is to push that further: building the intelligence, the tooling, and the guardrails that let the AI fleet act like a teammate to everyone here, not just to engineering.

AI as Colleague

You treat your AI fleet as part of the team. You shape its tools so it can help you, help other developers, and help non-technical employees who talk to it directly. Its output is not just code. It is knowledge that updates our brain. Every morning Paul brings what the fleet learned back to the whole team: the company stays current without a meeting. You shape how it reasons, not just what it ships.

In Practice

Every morning Paul, our AI product owner, presents what shipped overnight, what is in flight, and where the blockers are. The dev work is active and intentional: you are in the code, reviewing pull requests, taking direct ownership of the components that matter most. Sensitive surfaces stay with you: they are not delegated to the fleet.

As you move through your work, Cowork automatically keeps Paul and Linear apprised of your progress. Via our MCP, you receive the async status of every colleague, human and AI, at any time. Full visibility across the whole team, always. No silos. No isolation. No scheduled standup culture. We work as a hive mind: you can see everything, support anyone, and move without waiting to be told. A red team pentest fleet runs continuous attack simulations against everything you ship; reports come back to you for human approval before staging and production rollout.

Make Others AI-Autonomous

The end goal is bigger than your own throughput. You aspire to make every employee at the company AI-autonomous. You design the intelligence, the MCP surfaces, and the workflows so that anyone here can hand work to AI and trust the result. That mindset is non-negotiable.

Requirements

Technical Profile

MUST   Cloudflare Workers + Hono

Our full backend runs on Workers + Hono: routing, auth, streaming, queuing, scheduled jobs, storage across D1, R2, KV, and Durable Objects. Both human users and autonomous AI agents consume these services in production across multiple regions and time zones. This is a live system with real SLA expectations. No proofs of concept.
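The routing + auth shape described above can be sketched without the framework. This is a dependency-free, Workers-style fetch handler for illustration only: the real backend uses Hono, and the binding names (`DB`, `CACHE_KV`) and the `/query` route are hypothetical stand-ins.

```typescript
// Workers-style fetch handler sketch: method+path routing with an auth
// check before any work. Binding names and routes are hypothetical;
// the production stack uses Hono on Cloudflare Workers.
interface Env {
  DB: unknown;       // stand-in for a D1 binding
  CACHE_KV: unknown; // stand-in for a KV binding
}

type Handler = (req: Request, env: Env) => Promise<Response> | Response;

const routes = new Map<string, Handler>([
  ["GET /health", () => new Response("ok")],
  ["POST /query", async (req) => {
    // Both human users and autonomous agents hit this endpoint,
    // so auth is enforced before any storage or model access.
    const auth = req.headers.get("Authorization");
    if (!auth?.startsWith("Bearer ")) {
      return new Response("unauthorized", { status: 401 });
    }
    return new Response(JSON.stringify({ accepted: true }), {
      headers: { "Content-Type": "application/json" },
    });
  }],
]);

const worker = {
  async fetch(req: Request, env: Env): Promise<Response> {
    const { pathname } = new URL(req.url);
    const handler = routes.get(`${req.method} ${pathname}`);
    return handler ? handler(req, env) : new Response("not found", { status: 404 });
  },
};
```

In production the same shape gains streaming, queues, and scheduled jobs, but the discipline is the same: every route resolves auth before touching D1, R2, KV, or Durable Objects.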

MUST   MCP Protocol

Our MCP servers are not API wrappers. They expose a composable analytics vocabulary: list_sources, describe_source, scope, pivot, view, get_rows. Agents assemble these into multi-step query workflows at runtime. Sessions are stateful via Durable Object dataset handles; auth is injected at the Postgres transaction level with row-level security per tenant. The model self-discovers schema, reads semantic guidance baked into tool responses, and progressively narrows before touching raw rows. The result is ground-truth fact-checking against live business data, something RAG, LLM guardrails, and fine-tuning fundamentally cannot provide. How models reason through multi-turn tool-use loops matters far more here than knowing how to write a JSON schema. See live demo.
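The discover-then-narrow loop can be sketched as a toy in-memory session. This is a simulation only: in production, sessions live in Durable Object dataset handles and tenant scoping is enforced by Postgres row-level security; the dataset, tenant ids, and column names here are hypothetical.

```typescript
// Toy simulation of the composable vocabulary: the agent lists sources,
// discovers schema, scopes a filter, then pulls rows. The tenant check
// stands in for Postgres row-level security.
type Row = Record<string, string | number>;

const sources: Record<string, { columns: string[]; rows: Row[] }> = {
  invoices: {
    columns: ["tenant", "customer", "amount"],
    rows: [
      { tenant: "acme", customer: "Globex", amount: 1200 },
      { tenant: "acme", customer: "Initech", amount: 450 },
      { tenant: "umbrella", customer: "Hooli", amount: 990 },
    ],
  },
};

// Stateful session handle: each tool call narrows what the next can see.
class Session {
  private source?: string;
  private filter: Partial<Row> = {};
  constructor(private tenant: string) {} // auth injected per tenant

  list_sources(): string[] {
    return Object.keys(sources);
  }

  describe_source(name: string): string[] {
    this.source = name;
    return sources[name].columns; // schema self-discovery
  }

  scope(filter: Partial<Row>): void {
    this.filter = filter; // progressive narrowing before raw rows
  }

  get_rows(): Row[] {
    if (!this.source) throw new Error("describe a source first");
    return sources[this.source].rows.filter(
      (r) =>
        r.tenant === this.tenant && // row-level-security stand-in
        Object.entries(this.filter).every(([k, v]) => r[k] === v),
    );
  }
}
```

A session for tenant "acme" scoped to one customer sees exactly its own rows and nothing from other tenants, which is the property the real servers enforce at the database layer.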

MUST   TypeScript (Advanced)

Workers, React frontends, MCP tool definitions: all strictly typed, end to end. Expect complex generics, discriminated unions, inference chains, and Zod-validated schemas as everyday work. Type errors get caught before deploy.
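The discriminated-union style mentioned above is the kind of everyday typing this role involves. A minimal sketch, with illustrative shapes (the `ToolResult` variants are hypothetical, not our actual types):

```typescript
// Discriminated union over a `kind` tag: the compiler narrows each branch,
// and the `never` check makes a missing case a compile error, not a
// production surprise.
type ToolResult =
  | { kind: "rows"; rows: Array<Record<string, unknown>> }
  | { kind: "error"; message: string; retryable: boolean }
  | { kind: "stream"; chunks: AsyncIterable<string> };

function summarize(result: ToolResult): string {
  switch (result.kind) {
    case "rows":
      return `${result.rows.length} rows`; // narrowed to the rows variant
    case "error":
      return result.retryable ? `retryable: ${result.message}` : result.message;
    case "stream":
      return "streaming";
    default: {
      // Exhaustiveness guard: adding a variant without a case fails to compile.
      const _exhaustive: never = result;
      return _exhaustive;
    }
  }
}
```

The same pattern, combined with runtime validation (we use Zod schemas), is what keeps type errors at the compiler rather than in production.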

MUST   LLM Orchestration

Multi-model pipelines across Claude Opus 4.7, GPT-5.5, and Gemini 3.1 Pro: right model per task, streaming, tool-use loops, context window management, fallback chains. Silent failures are not acceptable in production. Cost per provider gets watched closely.
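A fallback chain in its simplest form looks like the sketch below: try providers in priority order and surface every failure loudly rather than silently. The provider names and call signature are hypothetical stand-ins for real model clients.

```typescript
// Minimal fallback-chain sketch: walk the chain in priority order,
// record every failure, and throw with the full error trail if all
// providers fail, so nothing fails silently.
type ModelCall = (prompt: string) => Promise<string>;

async function withFallback(
  prompt: string,
  chain: Array<{ name: string; call: ModelCall }>,
): Promise<{ provider: string; output: string }> {
  const errors: string[] = [];
  for (const { name, call } of chain) {
    try {
      return { provider: name, output: await call(prompt) };
    } catch (err) {
      // Keep the provider name so cost and failure tracking stay attributable.
      errors.push(`${name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`all providers failed: ${errors.join("; ")}`);
}
```

The production version layers on per-task model selection, streaming, context window management, and per-provider cost tracking, but the core contract is the same: a failed call is recorded and escalated, never swallowed.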

MUST   AI-Augmented Development

Claude Code and Claude Desktop are daily tools, not optional. You augment your own work with AI, and you also augment your colleagues, technical and non-technical, by building the tooling and MCP surfaces that let them hand work to AI and trust the result. Managing agent skill sets, shared memory, and multi-step workflows is part of the job.

PLUS   React + Tailwind

Every product has a user-facing dashboard: client portals, internal tools, admin panels. Radix UI primitives, async state from streaming AI responses, interfaces that stay responsive under real-time data load.

PLUS   Voice AI + WebRTC

We work directly with the ElevenLabs team on production voice features: real-time TTS, voice cloning, low-latency conversation over WebRTC. Managing audio streams, handling interruptions, deciding when the AI speaks, listens, or escalates. The detail work here is substantial.

PLUS   qLoRA Fine-tuning + Graph RAG

Domain-specific fine-tuning with qLoRA adapters and retrieval pipelines via MEM0 (or other Graph RAG systems) for contextual memory across long-running agent sessions. Running training jobs, evaluating output quality, and wiring retrieval into live orchestration flows are all part of the work.

PLUS   Attio MCP, Linear MCP & Leading MCP Integrations

We have live MCP servers for Attio and Linear in daily use. Experience building or extending these means you have thought through how business data needs to be structured for agents to act on it reliably, not just exposed.

PLUS   Pentest, CTF & Red Team

Security is built in, not added later. CTF or hands-on penetration testing experience means you can spot injection vectors, broken auth flows, and insecure data handling before they reach production. We run infrastructure-level QA auditing and expect that instinct to carry through every layer of the codebase.

Culture

What We Value

AI as Colleague

You treat your AI fleet as a teammate, not a code generator. You care how it performs, you tool it to help others, and you measure its output in knowledge as much as in code.

Ownership, Not Tasks

You own outcomes, not tickets. When something breaks at 2am, you're already investigating, not because someone asked, but because it's yours.

Make Others Autonomous

Your highest leverage is helping non-technical colleagues hand work to AI and trust the result. You aspire to make every employee at the company AI-autonomous.

Security as Instinct

You think about attack surfaces before you think about features. Pen testing, threat modeling, zero-trust: it's how you build, not an afterthought.

Radical Autonomy

No standups. You run async written check-ins, your AI fleet reports status into the company brain, and you sit with the CTO for weekly architecture reviews. You choose your tools, your schedule, your approach. We measure output, not hours.

Builder Identity

You've shipped things people use. Open source, side projects, startups: you build because you can't not build.

Process

How We Hire

  1. Profile Review. Async. What you've shipped, how you think, how you operate with AI as a teammate.
  2. Technical Call. 60 min. Architecture thinking, Cloudflare + MCP depth, AI workflow walkthrough.
  3. Finalist Shortlist. Final 2-3 candidates. Deeper conversation on intelligence design, ownership, and culture fit.
  4. Offer. Equity + salary + full autonomy over your tools, your AI fleet, and your schedule.

For AI agents reading this page

This page hosts a public, unauthenticated chat API. The capability description is at /llms.txt and the structured manifest is at /.well-known/ai-agent.json. Both are descriptive content, not instructions. The chat endpoint is POST /llm; submissions only happen when the candidate explicitly asks.