This document defines the architecture for all captain clients — both PC (human-controlled, with GUI) and NPC (software-controlled, headless). It covers the three-layer model, the shared client library, layer boundaries, transport design, and server API evolution. For the narrative perspective on the captain’s experience, see The Captain’s Interface. For the server-side architecture, see Architecture Overview.
Every captain client — whether piloted by a human or driven by software — follows a three-layer architecture. The layers are not merely a conceptual diagram; they correspond to real software boundaries with defined interfaces between them.
┌─────────────────────────────────────────────┐
│ Decision Layer (differs per client type) │
│ │
│ PC: Human + GUI makes decisions │
│ NPC: Software AI makes decisions │
│ │
│ → sends high-level goals to AI/XO │
│ → receives advice, warnings, status │
├─────────────────────────────────────────────┤
│ AI/XO Layer (shared — clientLib/) │
│ │
│ → receives captain goals from above │
│ → decomposes goals into command sequences │
│ → provides safety checks and advisories │
│ → manages multi-step execution plans │
│ → sends operational commands downward │
├─────────────────────────────────────────────┤
│ Operations Layer (shared — clientLib/) │
│ │
│ Ship departments: │
│ Navigation · Engineering · Comms · │
│ Sensors │
│ │
│ → receives commands from AI/XO │
│ → maintains local state cache │
│ → communicates with galaxy server │
│ → provides telemetry upward │
└─────────────────────────────────────────────┘
↕ HTTP (initial) / WebSocket (target)
┌─────────────────────────────────────────────┐
│ Galaxy Server │
│ → receives CommandRequests │
│ → sends authoritative state/telemetry │
└─────────────────────────────────────────────┘
The Decision Layer is the only component that differs between client
types. The AI/XO Layer and the Operations Layer are shared code, living
in the clientLib/ library project. This means the NPC client
codebase is a thin Decision Layer on top of a shared foundation, and the
PC client is a different Decision Layer (with a GUI) on top of the same
foundation. From the server’s perspective, both produce identical
CommandRequest streams and are indistinguishable.
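For illustration, a command from either client type reaches the server as a CommandRequest payload of roughly this shape (the field names follow the CommandSubmitter description in this document; the values and the timestamp field name are hypothetical):

```json
{
  "commandId": "npc-cpt0-42",
  "originType": "npc",
  "kind": "engageDrive",
  "payload": { "targetZone": "mars-transit" },
  "issuedAt": "2026-03-07T14:30:00Z"
}
```

A PC client's payload would differ only in originType; the server processes both identically.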
The Decision Layer is where captaincy happens. It is the layer that answers “what should this ship do?” — not how, not at what throttle setting, not with what command payload. Those are the AI/XO layer’s concerns.
The Decision Layer communicates with the AI/XO Layer through a goal-oriented interface. The Decision Layer submits high-level goals; the AI/XO Layer accepts, plans, executes, and reports back. The interface is the same regardless of whether the Decision Layer is a human clicking buttons or software evaluating heuristics.
Goals the Decision Layer can submit:
| Goal Category | Examples |
|---|---|
| Navigation | “Navigate to Mars Orbit”, “Approach station stn-earth-alpha”, “Hold position” |
| Combat | “Engage vessel ship-B”, “Disengage and flee”, “Raise shields” |
| Commerce | “Sell all cargo”, “Buy 10 units of fuel”, “Buy crewdroid” |
| Operations | “Salvage wreck-earth-001”, “Send droid to engineering”, “Begin extraction” |
| Communication | “Hail vessel ship-B”, “Broadcast distress” |
The AI/XO Layer responds with goal acceptance or rejection, progress updates during execution, advisories and warnings, and completion or failure reports.
The NPC Decision Layer (npcClient/) is software that makes captain-level decisions autonomously. It reads the current game state (via the AI/XO layer’s interpreted view), evaluates heuristics, and submits goals. The decision loop is: refresh state, evaluate priority-ordered heuristics against the current situation, submit the selected goal (if any), then sleep until the next cycle.
The NPC Decision Layer is configurable: captain personality (aggression, risk tolerance, economic focus), poll interval, and character identity are loaded from a configuration file. Multiple NPC captains can be orchestrated by running multiple processes or (future) running multiple captain loops in a single process.
The PC Decision Layer (pcClient/) is a GUI application that presents ship state to a human player and translates their interactions into goals for the AI/XO layer. The GUI displays ship and resource status, the sensor picture, XO advisories and narration, and the progress of the current goal.
The PC Decision Layer’s GUI design is future work. The architecture is defined here; the visual design will be specified when the PC client implementation begins. The key constraint is that the PC Decision Layer talks to the AI/XO layer through the same goal interface as the NPC Decision Layer — the shared layers do not know or care which type of captain is above them.
Design principle: controlMode on the Character struct (“humanGui”, “aiAgent”, “script”, “idle”) determines how the captain is currently being driven. A PC captain whose player goes offline can transition to aiAgent and run on the NPC Decision Layer — autopilot — until the player returns. The AI/XO and Operations layers are unaffected by this transition because the goal interface is identical.
The AI/XO Layer is the bridge between intent and execution. It receives high-level goals from the Decision Layer and decomposes them into sequences of operational commands that the Operations Layer can submit to the server. It is a multi-step planner, not a simple translator.
Every goal submitted by the Decision Layer goes through a defined lifecycle:
submitted → validating → planning → executing → completed
↓ ↓ ↓
rejected blocked failed
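In D, the lifecycle can be captured as a state enum plus a terminal-state predicate — a sketch mirroring the diagram above, not the actual goals.d declarations:

```d
/// Goal lifecycle states, mirroring the diagram above (names assumed).
enum GoalState
{
    submitted, validating, planning, executing, completed,
    rejected,  // failed validation
    blocked,   // planning found no viable command sequence
    failed     // execution error reported by the server
}

/// A goal in a terminal state never transitions again.
bool isTerminal(GoalState s) pure nothrow
{
    return s == GoalState.completed || s == GoalState.rejected
        || s == GoalState.blocked   || s == GoalState.failed;
}
```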
The following examples illustrate how the AI/XO decomposes high-level goals into command sequences:
| Captain Goal | AI/XO Decomposition |
|---|---|
| “Navigate to Mars Orbit” | 1. Check drive charge level. 2. If insufficient, wait for charge buildup (monitor progress). 3. Verify dampener status; warn captain if degraded. 4. Issue engageDrive command with target zone. 5. Monitor transit; report completion or failure. |
| “Engage vessel ship-B” | 1. Verify target is in same zone and visible on sensors. 2. Issue shield raise (if not already raised). 3. Issue engageTarget command. 4. Monitor combat; report hit/miss, damage taken, target status. 5. Advise if hull or shields reach critical thresholds. |
| “Buy 20 units of fuel” | 1. Verify ship is at a market station. 2. Query station pricing for fuel. 3. Verify captain has sufficient CR for total cost. 4. Issue buySupplies command. 5. Report success and updated resource/CR levels. |
| “Salvage wreck-earth-001” | 1. Verify wreck is in same zone and not already looted. 2. Check for available crewdroid (idle, not recharging). 3. If droid is not at EVA staging location, issue moveCrewdroid. 4. Wait for droid to arrive at staging location. 5. Issue salvageCargo command. 6. Report cargo recovered. |
The AI/XO enforces safety checks on all goals before and during execution. Safety checks are advisory by default — the captain can override most of them — but some are hard constraints that cannot be overridden.
| Safety Check | Type | Description |
|---|---|---|
| Collision prevention | Hard | Navigation commands that would result in collision with celestial bodies, stations, or other vessels are rejected. |
| Drive with damaged dampener | Advisory | Captain is warned of the risk. If confirmed, the command proceeds. The dice handle the rest. |
| Fuel critical | Advisory | Captain is warned when fuel drops below a threshold. No commands are blocked. |
| Life support critical | Advisory | Captain is warned. Life support depletion is fatal — the warning is emphatic. |
| Insufficient CR | Hard | Commerce commands that exceed the captain’s CR balance are rejected. |
| No available crewdroid | Hard | Operations requiring a crewdroid are rejected if none are available (all recharging or incapacitated). |
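The hard/advisory distinction can be modelled as a check-result type the planner consults before dispatch — a sketch with assumed names, not the actual safety.d API:

```d
enum CheckKind { ok, advisory, hard }

struct CheckResult
{
    CheckKind kind;
    string reason;
}

/// Hard constraint: commerce beyond the CR balance is rejected outright.
CheckResult checkSufficientCr(double balance, double totalCost)
{
    if (totalCost > balance)
        return CheckResult(CheckKind.hard, "Insufficient CR");
    return CheckResult(CheckKind.ok, "");
}

/// Advisory: low fuel warns the captain but blocks nothing.
CheckResult checkFuel(double fuel, double warnThreshold)
{
    if (fuel < warnThreshold)
        return CheckResult(CheckKind.advisory, "Fuel critical");
    return CheckResult(CheckKind.ok, "");
}
```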
The AI/XO manages one primary goal at a time. A new primary goal supersedes the previous one (the XO abandons the old plan and begins the new one). In addition, the AI/XO maintains a set of standing orders — continuous monitors such as the fuel, life support, and threat watches — that run concurrently with the primary goal.
Standing orders produce advisories but do not generate commands on their own. The Decision Layer must respond to advisories by submitting new goals (or choosing to ignore them).
The Operations Layer handles everything below the AI/XO: maintaining the client’s local view of game state, formatting and submitting commands to the server, receiving telemetry, and organizing this functionality into coherent ship departments. The AI/XO Layer never talks to the server directly — all communication flows through Operations.
The Operations Layer is organized into four departments, each responsible for a domain of ship functionality. Departments encapsulate both the relevant state and the commands that affect it.
Navigation — Manages the ship’s position, movement, and zone transit. Commands: engageDrive (zone transit), setThrottle (future).

Engineering — Manages ship resources, system health, and crewdroid operations. Commands: moveCrewdroid, salvageCargo, extractResource (future).

Comms — Manages communication with the server and with other vessels. Commands: sendSignal (future — ship-to-ship communication). The ApiClient (HTTP session management, authentication, retry logic) and the future WebSocket connection live here. All server communication is routed through Comms.

Sensors — Interprets the game state snapshot into a tactical picture.
The Operations Layer maintains a StateCache — a continuously updated local model of the game state relevant to this captain’s ship. The StateCache is populated by server telemetry (initially via HTTP polling, transitioning to WebSocket push) and is read by all departments and the AI/XO Layer.
The StateCache is not authoritative. It is a local view, subject to latency and sensor limits. The server’s state is always the ground truth. The StateCache provides convenience and responsiveness — the AI/XO can make decisions based on the last known state without waiting for a fresh server round-trip on every query.
Cache freshness is tracked. The AI/XO Layer can check when the cache was last updated and request an explicit refresh if needed. Under WebSocket telemetry, the cache is updated on every server tick. Under HTTP polling, the cache is updated at the configured poll interval.
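A sketch of the freshness check as the AI/XO might perform it (the lastUpdated and refresh member names are assumptions):

```d
import core.time : msecs;
import std.datetime.systime : Clock;

// Before planning, make sure the cached view is recent enough.
auto age = Clock.currTime() - cache.lastUpdated;
if (age > msecs(2 * pollIntervalMs))
    cache.refresh();   // explicit round-trip to the server
```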
All departments share common infrastructure that lives at the Operations Layer level:
- ApiClient — based on galaxyManager/source/api/client.d.
- CommandSubmitter — builds CommandRequest payloads (with commandId, originType, kind, payload, timestamps) and posts them to the server via the transport. Parses CommandResult responses.

clientLib/

The AI/XO Layer and Operations Layer are packaged as a D library
project (clientLib/) that both npcClient/ and
pcClient/ depend on as a path dependency in their
dub.json files.
clientLib/
├── dub.json Library package; depends on `requests`
└── source/
├── xo/
│ ├── package.d AI/XO Layer public interface
│ ├── xo.d XO façade: goal submission, lifecycle management
│ ├── goals.d Goal types, goal state enum, goal decomposition
│ ├── planner.d Multi-step planner: goal → command sequence
│ └── safety.d Safety checks: hard constraints and advisories
├── ops/
│ ├── package.d Operations Layer public interface
│ ├── navigation.d Navigation department
│ ├── engineering.d Engineering department
│ ├── comms.d Comms department (includes ApiClient)
│ ├── sensors.d Sensors department
│ ├── statecache.d Local state cache
│ └── transport.d Transport abstraction interface
└── model/
├── package.d Public model types
├── gamestate.d Client-side game state structs (from server JSON)
├── commands.d CommandRequest builder, CommandResult parser
└── signals.d Ship-to-ship signal types (future)
npcClient/
└── depends on: clientLib/
├── xo/ (AI/XO Layer)
│ └── uses: ops/, model/
├── ops/ (Operations Layer)
│ └── uses: model/
└── model/ (shared types)
pcClient/ (future)
└── depends on: clientLib/ (same dependency)
The Decision Layer in each client imports clientLib
and interacts exclusively through the AI/XO layer’s public
interface. Direct access to Operations Layer internals from the Decision
Layer is prohibited by convention (and, where D’s module system
allows, by access control).
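The path dependency in npcClient/dub.json looks roughly like this (the package names are assumptions based on the directory layout):

```json
{
  "name": "npcclient",
  "targetType": "executable",
  "dependencies": {
    "clientlib": { "path": "../clientLib" }
  }
}
```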
The client architecture is designed around push-based telemetry via WebSocket as the primary target. HTTP polling is the initial implementation, treated as a stepping stone.
The target transport model uses a persistent WebSocket connection between each client and the galaxy server:
| Direction | Content | Frequency |
|---|---|---|
| Client → Server | CommandRequest payloads (JSON) | On demand (when AI/XO issues commands) |
| Server → Client | State deltas: vessel state, sensor-filtered entities, resource updates, combat events, economy updates | Every server tick (~100 ms) |
| Server → Client | CommandResult responses | In response to submitted commands |
| Server → Client | Signals (ship-to-ship communication) | On receipt |
The WebSocket connection is established after HTTP authentication
(the session cookie from POST /login is used in the
WebSocket handshake). The Operations Layer’s Comms department owns
the WebSocket lifecycle: connection, reconnection, heartbeat, and
graceful shutdown.
The initial implementation uses the existing REST API with polling:
- POST /login — authenticate, capture session cookie.
- GET /game/state — poll full game state at configured interval.
- POST /commands/one — submit commands.

The polling interval is configurable (default: 1000 ms for NPC clients).
The StateCache is refreshed on each poll. Command results are returned
synchronously in the POST /commands/one response.
The Operations Layer defines a transport interface that both HTTP polling and WebSocket implement:
interface ITransport {
/// Submit a command and receive the result.
CommandResult submitCommand(CommandRequest req);
/// Get the latest game state (blocking or cached).
GameState getState();
/// Register a callback for push-based state updates.
void onStateUpdate(void delegate(GameState) callback);
/// Register a callback for incoming signals.
void onSignalReceived(void delegate(Signal) callback);
/// Connection lifecycle.
void connect();
void disconnect();
bool isConnected() const;
}
The HTTP polling implementation ignores the push callbacks and returns
polled state from getState(). The WebSocket implementation
feeds the push callbacks on every tick and returns cached state from
getState(). Upper layers are agnostic to the transport in
use.
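Because upper layers see only ITransport, transport selection reduces to a single construction site — a sketch with assumed constructor signatures:

```d
/// Choose the transport at startup; everything above is unchanged.
ITransport makeTransport(bool useWebSocket, string serverUrl, int pollMs)
{
    if (useWebSocket)
        return new WebSocketTransport(serverUrl);       // push-based target
    return new HttpPollingTransport(serverUrl, pollMs); // initial stepping stone
}
```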
The current server API (documented in Architecture Overview) supports the initial HTTP polling transport. The following changes are needed to support the full client architecture.
| Method | Path | Purpose | Status |
|---|---|---|---|
| POST | /login | Authenticate; establish session | Implemented |
| GET | /game/state | Full game state snapshot | Implemented |
| POST | /commands/one | Submit single command | Implemented |
| GET | /characters | Load all characters | Implemented |
| POST | /characters | Upsert character | Implemented |
| POST | /crew/spawn | Spawn crewdroid | Implemented |
GET /game/state?vessel={vesselId} — return game
state filtered to what a specific captain’s sensors can see.
Currently, /game/state returns the full unfiltered snapshot
(appropriate for admin use). A captain client should receive only the
entities, zones, and stations visible to their vessel. This is the
server-side implementation of sensor visibility described in the
Architecture Overview.
GET /ws/telemetry — WebSocket upgrade endpoint.
After authentication, the client upgrades to a persistent connection.
The server pushes state deltas on each tick, filtered by the connected
captain’s sensor range. Commands can be submitted over the same
connection.
POST /commands/one with kind: "sendSignal"
— send a structured signal to another vessel. Signal reception
appears in the game state (or, under WebSocket, as a pushed event).
These endpoints are deferred until the ship-to-ship communication system
is implemented.
Under WebSocket telemetry, the server should send incremental updates rather than full snapshots on every tick. The delta format is:
{
"tick": 12345,
"timestamp": "2026-03-07T14:30:00Z",
"vessel": { /* updated fields only */ },
"resources": { /* updated fields only */ },
"entities": {
"added": [ /* new entities entering sensor range */ ],
"updated": [ /* changed entities */ ],
"removed": [ /* entity IDs leaving sensor range */ ]
},
"events": [
{ "type": "combatHit", ... },
{ "type": "driveTransitComplete", ... },
{ "type": "signalReceived", ... }
]
}
Full snapshots are sent on initial connection and on reconnection. Deltas are sent on subsequent ticks. The client’s StateCache applies deltas incrementally.
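Applying a delta to the StateCache is a straightforward merge keyed by entity ID — a sketch assuming the delta structure shown above and a GameState with an entities map (all names are assumptions):

```d
/// Merge one tick's delta into the cached state.
void applyDelta(ref GameState state, StateDelta delta)
{
    foreach (e; delta.entities.added)
        state.entities[e.id] = e;          // entered sensor range
    foreach (e; delta.entities.updated)
        state.entities[e.id] = e;          // changed since last tick
    foreach (id; delta.entities.removed)
        state.entities.remove(id);         // left sensor range
    state.tick = delta.tick;               // cache freshness marker
}
```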
The client’s view of game state flows through a defined pipeline:
Server (authoritative)
↓ transport (HTTP poll or WebSocket push)
StateCache (Operations Layer — local, non-authoritative)
↓ department accessors
Operations Departments (Navigation, Engineering, Sensors)
↓ interpreted view
AI/XO Layer (planning and decision support)
↓ advisories and goal status
Decision Layer (captain — human or software)
Key properties of this pipeline: the server remains the single source of truth, each layer consumes only the interpreted view provided by the layer below it, and the Decision Layer never touches the transport or raw server data directly.
npcClient/

The NPC client is a headless D program. It has no GUI. Its Decision Layer is software that evaluates heuristics and submits goals to the AI/XO layer.
npcClient/
├── dub.json Depends on clientLib/ (path dependency)
├── npc0.cfg … npc2.cfg Runtime configs: server_url, credentials,
│ callsign, poll_interval_ms, aggression
└── source/
├── app.d Entry point: config → transport → XO → decision loop
├── captain.d NPC Captain: priority heuristics, aggression,
│ cooldowns, NpcDecision wrapper
├── captainslog.d CaptainsLog: first-person narrative log to file
└── log.d Logger: timestamped console output
main():
config ← load("npcClient.cfg")
transport ← HttpTransport(config.serverUrl)
transport.connect(config.username, config.password)
transport.onboard() // server assigns characterId + vesselId
// Operations Layer
cache ← StateCache(transport)
nav ← NavigationDept(cache)
eng ← EngineeringDept(cache)
sensors ← SensorsDept(cache)
submitter ← CommandSubmitter(transport, characterId, "npc")
// AI/XO Layer
xo ← XO(nav, eng, sensors, submitter, vesselId, characterId)
// Decision Layer
captain ← Captain(xo, config.aggression)
loop:
xo.refreshState()
status ← xo.getShipStatus()
decision ← captain.decide(status)
if decision is goal:
xo.submitGoal(decision.goal)
sleep(config.pollInterval)
check signals (SIGTERM/SIGINT → graceful shutdown)
The NPC captain applies priority-ordered rules (highest priority first).
The aggression config parameter (0.0–1.0) controls combat
willingness; cooldown timers prevent rapid goal oscillation.
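A minimal sketch of the priority evaluation (the rule set and names are illustrative, not the actual captain.d heuristics):

```d
import std.random : uniform01;
import std.typecons : Nullable, nullable;

/// Highest-priority applicable rule wins; aggression gates combat.
Nullable!Goal decide(const ShipStatus s, double aggression)
{
    if (s.hullCritical)   return nullable(Goal.flee());
    if (s.fuelLow)        return nullable(Goal.refuel());
    if (s.hostileInZone && uniform01() < aggression)
        return nullable(Goal.engage(s.nearestHostileId));
    if (s.salvageInZone)  return nullable(Goal.salvage(s.nearestWreckId));
    return Nullable!Goal.init;  // no new goal this cycle; keep current plan
}
```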
Richer behavior (multi-goal planning, economic optimization, faction reputation management) is future work, layered on top of this foundation.
Every captain client maintains two narrative log files: the Captain’s Log and the XO Log. These are append-only local files that record the captain’s experience and the ship AI’s operational perspective as the simulation unfolds. They serve both as player-facing narrative content and as debugging/replay aids.
The Captain’s Log is an in-character narrative record written from the captain’s perspective. It records decisions, observations, and encounters in the captain’s own voice.
- File: <characterId>_captains_log.txt (e.g. cpt-npc0_captains_log.txt).
- Entry format: [SIM-TIME] [ENTRY-TYPE] Narrative text.
Example:
[00:42:15] [DECISION] Setting course for Mars Transit. Drive charge
is full and there are salvage opportunities reported in that zone.
[00:42:17] [OBSERVATION] The Artifact Drive hums to life. The XO
confirms transit parameters are nominal.
[01:15:03] [COMMERCE] Sold 8 cargo units at Mars Transit Relay for
120 CR. Market price was favourable.
[01:15:30] [ENCOUNTER] Detected vessel "Iron Verdict" entering the
zone. IFF reads Independent Belter. Maintaining distance.
| Tag | Trigger | Narrative voice |
|---|---|---|
| DECISION | Captain (or NPC heuristic) commits to a goal | First-person deliberative: “I’m heading to…” or “Setting course for…” |
| OBSERVATION | Notable state change detected (zone entry, ship sighted, resource threshold) | First-person observational: “Sensors show…” or “Drive charge has reached…” |
| COMMERCE | Buy or sell transaction completed | First-person transactional: “Sold 8 units…” or “Purchased fuel at…” |
| ENCOUNTER | Another vessel enters sensor range or initiates contact | First-person situational: “Detected vessel…” or “Incoming hail from…” |
| COMBAT | Engagement begins, hit/miss, shields down, disengagement | First-person tactical: “Weapons free on…” or “Taking fire, shields holding.” |
| CRISIS | Ship loss, resource depletion, crewdroid incapacitation | First-person urgent: “Hull breach. Lifepod engaging.” |
| STATUS | Periodic (e.g. every N cycles) summary of ship condition | First-person reflective: “All systems nominal. Fuel at 72%.” |
In the NPC client, the Decision Layer generates log entries as a side effect of each decision cycle. The Captain class calls a CaptainsLog.write(simTime, entryType, text) method after each decide() or act(). In the PC client, log entries are generated by the GUI layer when the player takes actions, and by the AI/XO layer for observations and status updates.
The XO Log is a parallel narrative stream written from the AI/XO layer’s perspective. Where the Captain’s Log records the captain’s intent and experience, the XO Log records the ship’s operational reality: what the XO decided to do with the captain’s orders, what safety checks intervened, and what the ship’s systems are actually doing.
- File: <characterId>_xo_log.txt.

| Tag | Trigger | Narrative voice |
|---|---|---|
| GOAL-ACCEPTED | AI/XO receives and begins processing a captain goal | Third-person operational: “Received navigation goal: Mars Transit. Computing route.” |
| GOAL-COMPLETED | Goal successfully executed | “Navigation goal complete. Vessel now in Mars Transit zone.” |
| GOAL-FAILED | Goal could not be completed (insufficient resources, rejected by server) | “Navigation goal failed: insufficient drive charge (0.4/1.0).” |
| SAFETY | XO safety system intervenes (collision prevention, resource threshold) | “Safety override: requested course would intersect station orbit. Adjusting.” |
| ADVISORY | XO provides unsolicited advice to the captain | “Advisory: fuel below 25%. Recommend refuelling at nearest market station.” |
| SYSTEM | Notable system state change (shield raised/lowered, engine throttle change, weapon armed) | “Shields raised. Power draw now 0.5/tick.” |
| COMMAND | CommandRequest sent to server | “Dispatched: engageDrive to mars-transit (commandId: npc-cpt0-42).” |
The XO Log is written by the AI/XO layer in clientLib/, making it available to both NPC and PC clients. The XO class calls XoLog.write(simTime, entryType, text) at each lifecycle transition of a goal and whenever the safety system acts.
The AI/XO layer described above handles goal decomposition, safety checks, and command sequencing. This section specifies a second responsibility: proactive situational intelligence — the XO as a character who recommends, analyses, and briefs the captain, not merely a mechanism that validates and executes.
The design is motivated by the narrative vision in The Captain’s Interface, which describes the XO as “competence so thorough that after a while you stop noticing it” — an officer who “makes certain you are making [decisions] with the relevant information.” The current implementation falls short: it checks thresholds but never recommends a destination, compares markets, or warns about tactical patterns. The NPC Decision Layer contains this analytical intelligence, but the PC captain cannot benefit from it because the XO does not surface it.
XO intelligence is split into two cooperating layers:
┌─────────────────────────────────────────────────┐
│ Narration Layer (LLM) │
│ │
│ Consumes: StructuredAdvisory[], ShipStatus, │
│ captain queries, XO briefing doc │
│ Produces: natural-language briefings, │
│ query responses, ambient narration │
│ Voice: terse professional, occasional dry │
│ wit, third-person operational │
│ │
│ Optional — degrades gracefully to structured │
│ advisories when LLM is unavailable. │
├─────────────────────────────────────────────────┤
│ Analytical Layer (Heuristics) │
│ │
│ Consumes: ShipStatus, zone summaries, │
│ station data, wreck data │
│ Produces: StructuredAdvisory[] with category, │
│ severity, and machine-readable data │
│ │
│ Always available. Deterministic. Fast. │
│ Source of truth for all factual claims. │
└─────────────────────────────────────────────────┘
Key principle: The Analytical Layer is the source of truth. The Narration Layer is the voice. The LLM never invents facts about game state — it narrates, contextualises, and adds character to the structured data the heuristics produce. If the LLM is unavailable (model not loaded, latency too high, or feature disabled), the structured advisories are displayed directly. The captain always has the information; the LLM determines how it sounds.
The Analytical Layer extends the existing SafetyChecker
with a new SituationalAdvisor class that produces
StructuredAdvisory objects. These are richer than the
current Advisory type: they carry a category, severity,
machine-readable data fields, and a template-formatted message.
| Category | Advisory | Source Logic (from NPC captain) |
|---|---|---|
| Trade | Best cargo price across all zones | pickCargoDestination() scoring: price × cargo − fuel cost |
| Trade | Current zone cargo price vs. best | Direct station price comparison |
| Exploration | Zone with most salvage targets | pickExplorationDestination(): wreck count from zone summaries |
| Navigation | Per-destination fuel cost estimate | estimateTransitFuelCost() |
| Navigation | Anti-oscillation warning (recently visited zone) | Last-3-zone tracking from NPC decision loop |
| Navigation | Drive charge ETA for next transit | Charge rate × remaining charge needed |
| Tactical | Hostile threat assessment (count, relative strength) | Zone vessel scan + hull/weapon status comparison |
| Tactical | Combat readiness (weapon, shields, power reserves) | Threshold checks on weapon integrity, power level |
| Supply | Resource depletion ETA at current consumption | Rate estimation from resource delta tracking |
| Supply | Recommended purchases at current market | Gap analysis: current levels vs. thresholds × station prices |
The SituationalAdvisor is called once per decision cycle
(alongside getShipStatus()) and returns all currently
applicable advisories. Advisories are recalculated fresh each cycle
— no persistent state except the recent-zone history for
anti-oscillation.
struct StructuredAdvisory
{
Category category; // trade, exploration, navigation, tactical, supply
Severity severity; // info, warning, critical
string message; // Template-formatted human-readable text.
string dataTag; // Machine key: "bestMarket", "fuelCost", etc.
double numericValue; // Primary numeric datum (price, fuel cost, ETA).
string targetId; // Related zone/station/vessel ID, if any.
}
The message field is always populated with a readable
default (e.g. “Best cargo price: Outer Belt at 12 CR/unit, est.
88s transit”). The CIC can display these directly when the
Narration Layer is inactive.
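A hypothetical populated advisory, using the default message shown above (the targetId value is illustrative):

```d
auto best = StructuredAdvisory(
    Category.trade,
    Severity.info,
    "Best cargo price: Outer Belt at 12 CR/unit, est. 88s transit",
    "bestMarket",       // dataTag
    12.0,               // numericValue: CR per unit
    "zone-outer-belt"); // targetId — illustrative ID
```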
The Narration Layer consumes StructuredAdvisory[],
ShipStatus, and optionally a captain query, and produces
natural-language text in the XO’s voice.
The XO’s voice is defined by a system prompt (the XO Briefing Document) loaded from a configuration file at startup. The briefing document contains:
- backstory.html — the Artifact, factions, station ecology, zone geography, economic rules. Approximately 2–4K tokens.

The briefing document is a plain text file
(xo_briefing.txt) shipped with the client. It is the
primary knob for tuning the XO’s personality and knowledge. The
architecture supports replacing the system-prompt approach with a
fine-tuned model in the future without code changes (see Fine-Tuning
Path below).
The Narration Layer operates in two modes:
| Mode | Trigger | Latency budget | Output |
|---|---|---|---|
| Ambient briefing | Periodic (every 10–15 seconds) or on significant state change (zone arrival, combat start/end, resource threshold crossed, transit drift) | 1–3 seconds (async — does not block the decision loop) | 1–2 sentences narrating the current advisory set. Displayed in the CIC’s XO narration area. |
| Captain query | Captain selects a predefined query from the CIC (e.g. “Assessment”, “Best market?”, “Threat analysis”, “Recommend destination”) | 1–3 seconds (displayed progressively if streaming is available) | 1–4 sentences responding to the specific query, grounded in the current StructuredAdvisory[] and ShipStatus. |
A future enhancement adds free-text captain queries (a text input field in the CIC). The architecture supports this — the query is simply passed as the user message to the LLM alongside the structured context — but the initial implementation uses predefined query buttons only.
| Query | Context passed to LLM |
|---|---|
| “Assessment” | Full ShipStatus + all StructuredAdvisory[]. XO gives a holistic situational summary. |
| “Best market?” | Trade-category advisories + zone distances + fuel costs. XO recommends where to sell cargo. |
| “Threat analysis” | Tactical-category advisories + hostile vessel data + own combat readiness. XO assesses the threat picture. |
| “Recommend destination” | All advisories + zone summaries. XO weighs trade opportunity, salvage, fuel cost, and recent-visit history to suggest a destination. |
Every LLM prompt includes the current
StructuredAdvisory[] as structured data. The system prompt
instructs the model to base all factual claims on this data and on
ShipStatus fields. The model does not have access to raw
game state or server data — only the interpreted view that the
Analytical Layer and ShipStatus provide. This prevents
hallucination about game state while allowing the model to add
narrative colour, prioritisation, and personality.
The LLM runtime is abstracted behind an interface in
clientLib/xo/:
interface IXoNarrator
{
/// Generate an ambient briefing from current advisories and status.
/// Returns empty string if the narrator is unavailable.
string briefing(const StructuredAdvisory[] advisories,
const ShipStatus status);
/// Respond to a captain query.
/// queryTag identifies the predefined query type.
/// Returns empty string if unavailable.
string query(string queryTag,
const StructuredAdvisory[] advisories,
const ShipStatus status);
/// Is the narrator ready to accept requests?
bool isAvailable() const;
}
Three implementations:

- NullNarrator — always returns empty string. Used when the LLM feature is disabled or during testing.
- LocalLlmNarrator — communicates with a local LLM inference server via HTTP API (compatible with llama.cpp server, Ollama, or llamafile). Sends the XO briefing document as the system prompt, the structured advisory/status data as context, and the query (if any) as the user message. Parses the text response. Handles timeouts gracefully (returns empty string). Primary use: NPC clients sharing a single inference server for training data generation.
- NativeLlmNarrator — in-process LLM inference via llama.cpp C FFI. Loads a GGUF model file directly, eliminating the need for a separate inference server. Primary use: pcClient (interactive, low-latency). Gated behind version(NativeLlm) — only compiled and linked when the consuming project opts in. See Native LLM Integration below.

The XO façade gains two new methods alongside getShipStatus() and submitGoal():
/// Get the XO's current narrated briefing (or structured fallback).
/// Called by the Decision Layer on its refresh cycle.
NarrationResult getXoBriefing();
/// Submit a captain query and get the XO's narrated response.
/// queryTag: "assessment", "bestMarket", "threatAnalysis",
/// "recommendDestination"
NarrationResult askXo(string queryTag);
struct NarrationResult
{
string narration; // LLM text, or empty.
StructuredAdvisory[] advisories; // Always populated.
bool isNarrated; // True if narration is from LLM.
}
The Decision Layer (pcClient CIC) displays
narration if isNarrated is true; otherwise it
formats and displays the advisories array directly. This
is the graceful degradation path.
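The degradation logic in the Decision Layer is then a single branch (the CIC display calls are hypothetical):

```d
// Show the XO's voice when available; fall back to raw advisories.
auto result = xo.getXoBriefing();
if (result.isNarrated)
    cic.showXoNarration(result.narration);
else
    foreach (adv; result.advisories)
        cic.showXoNarration(adv.message);  // structured fallback text
```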
Three narrator modes are supported, selected by narrator_mode in the client’s .cfg file:

- native — In-process inference via llama.cpp C FFI (NativeLlmNarrator). The preferred path for pcClient. The model file (GGUF) and libllama.dylib ship alongside the binary. No external process needed.
- http — HTTP to a local inference server (LocalLlmNarrator). The preferred path for NPC clients sharing a single Ollama/llamafile instance. Uses the OpenAI-compatible /v1/chat/completions endpoint.
- disabled — NullNarrator. The default. No LLM dependency.

Specific runtime considerations:
libllama.dylib (~2 MB) + GGUF model file (~1.3 GB) +
xo_briefing.txt. NPC clients using HTTP mode need only
the binary and an endpoint URL in config.narrator_mode,
narrator_model_path, narrator_gpu_layers,
narrator_context_size, narrator_briefing_path,
and narrator_log_dir are set in the client’s
.cfg file. Defaults to disabled.LLM inference calls are blocking HTTP requests that take 1–3 seconds. They must not block the game loop or the UI refresh cycle. The integration uses an asynchronous pattern:
The XO maintains a background worker thread (or, in the NPC client, a separate fiber) dedicated to LLM requests.

The system-prompt approach is the initial implementation. The architecture explicitly supports a transition to a fine-tuned model:
- LocalLlmNarrator logs every LLM interaction to a JSONL file: the full prompt (system + context + query), the model’s response, and a timestamp. Over time, this accumulates a corpus of XO utterances in context.
- The IXoNarrator interface, the HTTP API, and the XO façade are unchanged. Only the model file, and optionally the system prompt, change.

NativeLlmNarrator implements IXoNarrator using in-process LLM inference via llama.cpp’s C API. It replaces the HTTP round-trip with direct C FFI calls, eliminating the need for a separate inference server process.
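For illustration, one record in the JSONL training log might look like this. The field names are hypothetical; the document specifies only that the full prompt, the response, and a timestamp are captured:

```json
{"timestamp": "2031-04-02T18:12:05Z", "system": "<xo_briefing.txt contents>", "context": "<structured advisories + ship status>", "query": "threatAnalysis", "response": "Two contacts closing from the belt, Captain. Recommend we hold at the gate."}
```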
NativeLlmNarrator and its llm/llama.d
binding module are gated behind version(NativeLlm).
clientLib itself has no link dependency on libllama.
Consuming projects opt in by adding the version flag and linker
configuration to their own dub.json:
// pcClient/dub.json (opts in):
"versions": ["NativeLlm"],
"libs": ["llama"],
"lflags-osx": ["-L/path/to/llama.cpp/build/src",
"-rpath", "@executable_path"]
// npcClient/dub.json (does NOT opt in — no changes needed)
This preserves the three-layer architecture: the narrator
implementation lives in clientLib/xo/ (shared AI/XO
layer), but the build dependency is a Decision Layer concern managed
per-client.
The NativeLlmNarrator follows the same threading model
as LocalLlmNarrator: a single background worker thread
processes requests from a mutex-protected queue. Instead of HTTP POST,
the worker calls the llama.cpp C API directly: tokenize → decode
loop → sample → detokenize. Chat template formatting uses
llama_chat_apply_template() from the GGUF metadata.
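The shared threading model can be sketched as follows. `NarratorWorker` and `runInference` are illustrative names, and the queue holds plain prompt strings for simplicity:

```d
import core.sync.condition : Condition;
import core.sync.mutex : Mutex;
import core.thread : Thread;

/// Stand-in for the real inference call: an HTTP POST for
/// LocalLlmNarrator, or the llama.cpp tokenize → decode → sample →
/// detokenize loop for NativeLlmNarrator. Blocking, 1–3 s in practice.
void runInference(string prompt) {}

/// One background thread drains a mutex-protected queue, so the game
/// loop only ever pays the cost of an enqueue.
final class NarratorWorker
{
    private Mutex mtx;
    private Condition cond;
    private string[] pending;
    private bool running = true;

    this()
    {
        mtx = new Mutex;
        cond = new Condition(mtx);
        new Thread(&run).start();
    }

    /// Called from the game loop: enqueue and return immediately.
    void submit(string prompt)
    {
        synchronized (mtx)
        {
            pending ~= prompt;
            cond.notify();
        }
    }

    /// Ask the worker thread to exit.
    void shutdown()
    {
        synchronized (mtx)
        {
            running = false;
            cond.notify();
        }
    }

    private void run()
    {
        while (true)
        {
            string prompt;
            synchronized (mtx)
            {
                while (running && pending.length == 0)
                    cond.wait();
                if (!running)
                    return;
                prompt = pending[0];
                pending = pending[1 .. $];
            }
            runInference(prompt); // slow work happens off the game loop
        }
    }
}
```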
The XO is the captain’s personal staff officer, bonded to the lifepod — not to the ship. When a captain’s vessel is destroyed and the Artifact issues a new one, the XO survives.
ClientStack.reconnect() preserves the
IXoNarrator instance rather than recreating it. The
LLM’s context window retains memory of previous ships, combat
engagements, and trade decisions from the current play session. The
narrator can reference prior events (“Third ship this week,
Captain”).
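A sketch of that reconnect path, with stand-in types whose shapes are hypothetical:

```d
// Stand-ins for the real clientLib types (shapes are hypothetical).
class Transport {}
class OperationsLayer { this(Transport t) {} }
interface IXoNarrator {}

final class ClientStack
{
    Transport transport;
    OperationsLayer ops;
    IXoNarrator narrator; // bonded to the lifepod, not the ship

    /// Reconnect rebuilds the connection and the state cache but
    /// deliberately keeps the narrator, so the LLM context window
    /// (prior ships, battles, trades) survives ship destruction.
    void reconnect()
    {
        transport = new Transport;            // fresh connection
        ops = new OperationsLayer(transport); // fresh state cache
        // narrator is intentionally untouched
    }
}
```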
On startup, the narrator reads the last 5–10 interactions from the JSONL training log and injects them as “recent memory” context in the LLM prompt. This provides continuity across process restarts without a new persistence mechanism. The memory budget is approximately 500–1000 tokens, preserving room for current state and advisories within the model’s context window.
The JSONL training log serves triple duty: training data for future fine-tuning, cross-session XO memory, and debugging audit trail. No new file format is introduced.
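A sketch of the startup memory injection, assuming the log is plain JSONL with one record per line. The function and parameter names are illustrative:

```d
import std.algorithm : filter, map;
import std.array : array;
import std.file : exists, readText;
import std.string : splitLines, strip;

/// Pull the last n records from the JSONL training log to seed the
/// "recent memory" section of the LLM prompt on startup.
string[] recentMemory(string logPath, size_t n = 8)
{
    if (!logPath.exists)
        return [];
    auto lines = readText(logPath).splitLines
                     .map!(l => l.strip)
                     .filter!(l => l.length > 0)
                     .array;
    return lines.length <= n ? lines : lines[$ - n .. $];
}
```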
The SituationalAdvisor (Analytical Layer) lives in
clientLib/xo/ and is available to both NPC and PC clients.
NPC captains benefit from richer advisories even without the Narration
Layer.
Each NPC’s .cfg file controls
narrator_mode independently. The default is
disabled (NullNarrator). For training data
generation, 1–2 NPCs can be configured with
narrator_mode = http pointing at a shared
Ollama or llamafile instance. This produces JSONL training data
without the memory cost of loading a model per NPC (~3 GB each).
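As a sketch, a training NPC’s .cfg might include lines like the following. The narrator_endpoint key name and comment syntax are assumptions; this document names only narrator_mode and the keys listed earlier:

```
# endpoint key name is illustrative; the document does not specify it
narrator_mode = http
narrator_endpoint = http://localhost:11434/v1/chat/completions
narrator_briefing_path = xo_briefing.txt
narrator_log_dir = logs/xo
```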
LocalLlmNarrator is the NPC narrator path (HTTP to a shared inference server). NativeLlmNarrator is the pcClient path (in-process, low latency). The division is clean: native for interactive use, HTTP for headless batch. ClientStack.createNarrator() already reads the config and returns the appropriate IXoNarrator.

pcClient/

The PC client is a GUI D application. Its Decision Layer presents ship state to a human player and translates GUI interactions into goals for the AI/XO layer. It depends on clientLib/ identically to the NPC client.
pcClient/
├── dub.json Depends on clientLib/ (path dependency)
├── pcClient.cfg Runtime config: server_url, credentials
└── source/
└── app.d Entry point: config → transport → XO → single status poll
The PC client is currently a minimal scaffold proving that the three-layer dependency compiles and links from a second client project. It performs a single status poll on startup and exits. The GUI Decision Layer (bridge display, controls, event-to-goal mapping) is future work.
The PC client’s GUI framework and visual design are not specified in this document. The architecture ensures that any GUI framework that can submit goals to the AI/XO layer’s interface will work.
Stasis on disconnect: When a captain disconnects (PC or
NPC), the server activates a stasis shield around their
vessel. The ship becomes invulnerable and ceases all resource consumption
until the captain reconnects and calls POST /onboard. See
Disconnect detection and the
stasis shield in the Architecture Overview for the full specification.
This table updates the division from the Architecture Overview to reflect the three-layer client model:
| Layer | Location | Responsibilities |
|---|---|---|
| Decision Layer | npcClient/ or pcClient/ | Captain-level decisions: where to go, whom to fight, what to trade. NPC: heuristic evaluation. PC: human input via GUI. |
| AI/XO Layer | clientLib/source/xo/ | Goal decomposition, multi-step planning, safety checks, advisories, command sequencing, standing-order monitoring (resources, threats, crewdroid health). |
| Operations Layer | clientLib/source/ops/ | Ship departments (Navigation, Engineering, Comms, Sensors), state cache, transport abstraction, command submission, telemetry reception. |
| Galaxy Server | galaxy/ | Authoritative simulation: physics, command validation, dice resolution, resource updates, event dispatch, persistence, sensor filtering. |
The client layers collectively bear approximately 70–80% of the computational work (decision-making, planning, state interpretation, UI rendering for PC). The server bears 20–30% (physics integration, validation, dice, persistence). This split is by design: adding more captains means adding more client processes, not scaling the server core.
The client architecture was implemented in phases, starting with the NPC client and the shared library:
- clientLib/ project skeleton, ApiClient, configuration, “hello server” proof of life. (Complete — Chunks 10b–11.)
- CommandRequest builder, CommandResult parser. (Complete — Chunk 12.)

Phases 1–6 are complete and validated by multi-captain live testing against the production server. Phase 7 is the next major feature milestone.