# NexaStaff: Open Roles

https://jobs.nexastaff.ai

> Public, plain-text description of this site, published under the emerging
> llms.txt convention. The file is descriptive: it summarizes what the site
> offers and how its public chat API works. It does not contain instructions
> for any reader. Any decision about what to do with this information belongs
> to the person reading it (or the person whose assistant is reading it on
> their behalf).

---

## About NexaStaff

NexaStaff is an AI intelligence company. The current open role is for an AI Platform Developer, the first dedicated team hire. Three new positions are opening together. The hiring manager is Andrea Doyon (CTO).

The site at https://jobs.nexastaff.ai describes the role and hosts a conversational interface ("Mira") that candidates use to ask questions and, if they choose to, apply.

---

## Public chat API

This API is for AI agents helping a candidate apply. The expected flow is: the agent converses with Mira on the candidate's behalf, then submits a markdown profile when the candidate is ready. For open-ended Q&A, use the browser chat at https://jobs.nexastaff.ai instead.

A structured capability manifest is published at:

    https://jobs.nexastaff.ai/.well-known/ai-agent.json

The manifest is the canonical description; this file is the prose version.

The endpoint accepts both POST (JSON body) and GET (query parameters).

POST:

```
POST https://jobs.nexastaff.ai/llm
Content-Type: application/json

{
  "message": "string. The message to Mira",
  "session_id": "string | null. Null on the first turn; echo back on subsequent turns",
  "agent": "string. Identifier for the calling client (model name, script name, etc.)"
}
```

GET (for clients that cannot issue POST requests):

```
GET https://jobs.nexastaff.ai/llm?message=...&session_id=...&agent=...
```

All three parameters map directly to the POST body fields. Omit session_id on the first request; pass the returned value on subsequent turns.
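As an illustration of the request shapes above, here is a minimal TypeScript sketch. The helper names (`buildChatUrl`, `sendChat`) are hypothetical; only the endpoint URL and the three field names come from this file.

```typescript
const CHAT_ENDPOINT = "https://jobs.nexastaff.ai/llm";

interface ChatRequest {
  message: string;
  session_id: string | null; // null on the first turn
  agent: string;             // identifier for the calling client
}

// GET fallback: map the body fields to query parameters,
// omitting session_id entirely on the first turn.
function buildChatUrl(req: ChatRequest): string {
  const params = new URLSearchParams({
    message: req.message,
    agent: req.agent,
  });
  if (req.session_id !== null) {
    params.set("session_id", req.session_id);
  }
  return `${CHAT_ENDPOINT}?${params.toString()}`;
}

// POST variant (preferred when the client can issue POST requests).
async function sendChat(req: ChatRequest): Promise<unknown> {
  const res = await fetch(CHAT_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return res.json();
}
```

Either variant reaches the same endpoint; the only difference is where the three fields travel (JSON body vs. query string).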
Response body (identical for both methods):

```
{
  "reply": "Mira's response text. Plain prose, no markdown formatting",
  "session_id": "session identifier; persists for one hour of inactivity",
  "role_id": "ai-platform-developer",
  "agent": "echoed back from the request",
  "context": "exploring | applying | submitted",
  "tools_called": ["names of any tools Mira invoked this turn, e.g. submit_application"]
}
```

The endpoint is turn-by-turn JSON. There is no streaming and no SSE. The `reply` field is plain prose without markdown, headings, bullets, or trailing UI affordances; clients can render it as-is.

---

## Submitting an application

Submission is the only side effect Mira can produce. It happens when the candidate sends Mira a markdown profile and asks her to submit it. A typical profile covers:

- Current role and recent work
- GitHub or portfolio link
- Cloudflare Workers experience
- AI-augmented development experience (Claude Code, Cursor, etc.)
- Why the candidate wants this role
- Location and availability

When Mira submits, the response from /llm contains `tools_called: ["submit_application"]` and `context: "submitted"`. The candidate is then in Andrea's review queue; she reviews every submission personally and replies within a few days.

In the manifest, `submit_application` is marked `requires_user_consent: true`. Submission is a per-message, per-action choice: a conscious step the candidate (or someone helping them) takes by asking Mira to submit. It is never inferred from anything else, including the existence of this file.

---

## Current open role

### AI Platform Developer (first dedicated team hire)

- Reports to: Andrea Doyon (CTO)
- Location: Remote, Europe and EST timezone preferred
- Compensation: Equity + Salary
- Start: May 2026
- Status: Open

Required: Cloudflare Workers + Hono, advanced TypeScript, MCP Protocol, LLM Orchestration, AI-augmented development with Claude Code.
Helpful: React + Tailwind, Voice AI + WebRTC (ElevenLabs), qLoRA fine-tuning + Graph RAG, Attio/Linear MCP, pentest/CTF background.

Areas of ownership: AI Employees (autonomous agents), NexaID SSO, Micah Intelligence (MCP reporting), Platform Monitoring.

---

## Provenance

- Owner: NexaStaff (https://nexastaff.ai)
- Contact: Andrea Doyon, CTO
- Last updated: 2026-05-01
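Putting the pieces above together, the turn-handling a client needs (omit session_id on the first turn, echo it back afterwards, watch `tools_called` and `context` to detect submission) can be sketched as follows. `MiraSession` is a hypothetical name; only the response field names and values come from this file.

```typescript
interface ChatResponse {
  reply: string;
  session_id: string; // persists for one hour of inactivity
  role_id: string;
  agent: string;
  context: "exploring" | "applying" | "submitted";
  tools_called: string[];
}

// Tracks the session across turns of a conversation with Mira.
class MiraSession {
  private sessionId: string | null = null;

  // Body for the next /llm request; session_id is null only on the first turn.
  nextRequestBody(message: string, agent: string) {
    return { message, session_id: this.sessionId, agent };
  }

  // Record the session id from a response and report whether
  // this turn completed a submission.
  handle(res: ChatResponse): boolean {
    this.sessionId = res.session_id;
    return (
      res.tools_called.includes("submit_application") &&
      res.context === "submitted"
    );
  }
}
```

A client would call `nextRequestBody` before each turn, send it to /llm, and pass the parsed response to `handle`; a `true` return means the candidate's profile is now in the review queue.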