SPELWork Documentation Generator
Database-First Documentation: All docs auto-generated from forge.db and forge_manifest.yaml
Quick Start
./generate.sh
This will extract:
- Database schema → output/schema.md
- Ontology hierarchy → output/hierarchy.md
- Kafka event schemas → output/kafka_events.md
- Ruse generation pipeline → output/ruse_pipeline.md
- Combined PDF → output/pdf/spelwork_docs.pdf
Philosophy
"Database IS the documentation source"
Generated from forge.db + forge_manifest.yaml - always current, never stale.
See full README for extraction scripts, templates, and usage.
SPELWork System Reference Guide
Technical Specification
The SPELWork system is a federated architecture that integrates AI-driven automation with community governance. At its core, the system comprises three primary roles: Actors, Familiars, and Stewards, all operating within a shared environment known as a Forge. Actors, typically users, initiate requests or goals, referred to as “Wishes.” These wishes are assigned to Familiars, intelligent agents that act as personal assistants or “magical familiars,” executing the tasks.
Stewards, often human moderators, assume a supervisory role, ensuring compliance, trust, and safety throughout the system. Communication between these components is governed by standard protocols, enabling agents (Familiars) to connect to external data, invoke tools, and collaborate across different runtime environments.
This interconnectedness ensures that no agent operates in isolation. They can seamlessly interface with external APIs, databases, or even other community agents as required, leveraging open integration standards for portability and interoperability. Every action and output flows through a transparent logging and verification layer, known as the trust ledger. This ledger provides auditable records of every step, aligning with community guidelines and ensuring accountability.
Ritual Lifecycle: Wish → Ruse → Ritual → Rubric:
SPELWork defines a clear four-phase lifecycle for tasks, often referred to in “magical” terms for ease of understanding. Each phase produces a structured manifest documenting its details:
- Wish: This is the initial request or goal state expressed by the Actor. It’s a manifest capturing what the user wants to achieve. A Wish typically includes a natural language description of the goal, any parameters or constraints, and the Actor’s comfort range (sensitivity settings for content or methods). When a Wish is created, it enters the system as a pending task for a Familiar to handle. (See the example Wish manifest below.)
- Ruse: Once a Familiar accepts a Wish, it devises a Ruse, which is essentially the plan or strategy to fulfill the Wish. The Ruse manifest details how the Familiar intends to achieve the goal – breaking the Wish into actionable steps or clever approaches. The term “Ruse” reflects that the plan may involve non-obvious or creative steps (sometimes even workarounds) to get the job done. For example, if the Wish is to obtain some information, the Ruse might outline which tools or data sources to query and in what sequence. The Ruse phase can involve the Familiar doing research or reasoning internally to map out a solution approach. If any step in the plan might breach the Actor’s comfort settings or system policies, the Ruse will mark those steps for approval (triggering a consent checkpoint in the next phase).
- Ritual: In this phase, the Familiar executes the plan step-by-step, performing the actions described in the Ruse. Each action (called an “incantation” informally) could be a tool call, an API request, a database query, or other operations. The Forge comes into play here – it is the controlled execution environment where the Ritual takes place. The Forge is constructed with all required resources (data access, tool integrations, sandbox policies) needed for the Ritual. As the Ritual unfolds, every significant action and result is logged to the trust ledger for transparency. If a step marked as requiring consent is reached, the Ritual pauses and emits a consent gate request – essentially asking the Actor or a Steward for approval before proceeding. This human-in-the-loop checkpoint ensures that risky or out-of-scope actions don’t occur without oversight (for instance, if the Familiar needs to use a tool that accesses private data or incurs cost, the Actor must explicitly consent). Once all steps are executed, the Ritual produces an outcome – the raw results or artifact fulfilling the Wish (e.g. gathered information, a generated report, etc.).
- Rubric: The final phase is evaluation and compliance. The Rubric manifest contains the results of validating the outcome against rules and quality standards. It’s essentially a checklist or scorecard that the Familiar (and Stewards, if needed) use to ensure the outcome is safe, accurate, and within the scope of the original Wish and community guidelines. The Rubric might include automated checks (e.g. content moderation scans, veracity checks) and notes on whether the Wish was satisfied. If all checks pass, the result is packaged for the Actor. If there are issues – for example, the content might violate a policy or the result is incomplete – the Rubric can trigger an escalation. Escalation might mean invoking a higher-level rule set (a stricter Rubric), involving a Steward to review the outcome, or even negotiating a revised Wish if the original request cannot be fulfilled within the rules. Only once the Rubric is satisfied (or an authorized override is given) will the final output be released to the Actor – this is known as block release logic, meaning the system blocks the release of results until all rubric conditions are met to protect the Actor and community.
Throughout this lifecycle, the Familiar is an autonomous (and optional) agent orchestrating the process on behalf of the Actor. The Familiar can reason about the Actor’s goals and figure out how to achieve them, not just by following a script but by dynamically adapting to the situation.
Via events, it can call external tools and APIs as needed (for example, invoking a web search or a calculation service) and even coordinate with other Familiars or agents if the task requires multiple areas of expertise. Crucially, the Familiar operates under the watch of governance mechanisms: it respects the Actor’s comfort range, logs its actions for audit, and knows to halt for consent when it approaches a boundary. In essence, the Familiar is “intelligent” and proactive, but bounded by the ethical and procedural framework of SPELWork.
Forge Construction & Tool Integration:
The Forge is the sandboxed micro-environment where a Ritual is executed. When a Wish is accepted, a Forge is instantiated (virtually) for that specific task. Think of it as a safe workshop containing the tools, data, and permissions needed for the job. The Forge’s manifest defines what resources are available: for example, which Tools the Familiar can use, what data sources it can access, memory or context limits, and any external APIs it’s allowed to call. Tools are integrated via a standardized interface – SPELWork supports wrapping external APIs or services as callable tools in a declarative way (for instance, using an OpenAPI specification or plugin descriptor to integrate a new tool). This open integration model ensures that developers can plug in new capabilities without modifying the core system, and it keeps the system portable and interoperable across different deployments.
In practice, to add a tool (say a mapping service or a translation API), a developer would provide a manifest entry for that tool (including its endpoint and any credentials or usage limits). The Familiar can then invoke the tool during a Ritual by referencing it from the Forge’s context. Each tool can have associated metadata, such as whether its use requires special permission – e.g. a tool that can send an email might be flagged to always require an Actor’s confirmation. The Microsoft Agent Framework, for example, allows tools that require human approval to be marked and will automatically pause the agent for approval; SPELWork implements a similar pattern via consent gates. By constructing the Forge with only the needed tools and clear rules (a principle of least privilege), SPELWork limits the scope of what any given Ritual can do, thereby reducing risk. After the Ritual completes, the Forge is deconstructed (torn down), ensuring that no residual state carries over to future tasks except what is recorded in logs or explicitly saved as outcomes.
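The least-privilege Forge construction described above might look like the following sketch. The registry contents, endpoints, and function names are hypothetical, not the actual SPELWork API; the point is that only tools named in the Wish's `forge.allowed_tools` ever become reachable inside the Forge.

```python
# Hypothetical tool registry: each entry carries an endpoint and a flag for
# whether invoking it must pass through a consent gate.
TOOL_REGISTRY = {
    "MapsAPI":   {"endpoint": "https://maps.example/api",  "requires_consent": False},
    "ReviewDB":  {"endpoint": "internal://reviewdb",       "requires_consent": False},
    "EmailSend": {"endpoint": "https://mail.example/send", "requires_consent": True},
}

def build_forge(allowed_tools: list[str]) -> dict:
    """Instantiate a Forge containing only the tools this Wish may use."""
    unknown = [t for t in allowed_tools if t not in TOOL_REGISTRY]
    if unknown:
        raise ValueError(f"tools not in registry: {unknown}")
    # Copy only the allowed entries; everything else is simply absent.
    return {name: TOOL_REGISTRY[name] for name in allowed_tools}

forge = build_forge(["MapsAPI", "ReviewDB"])
# EmailSend is not in this Forge at all, so the Familiar cannot reach it.
```

Tearing the Forge down then amounts to discarding this object, so no tool handles survive the Ritual.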
Familiar Behavior: The Familiar in SPELWork is designed to be proactive but accountable. When it formulates a Ruse, it uses internal reasoning (which can involve large language model prompts, symbolic logic, or other AI planning methods) to decide on steps. It’s aware of the Actor’s Comfort Range setting, which might restrict certain content or approaches (for instance, a strict comfort range might forbid using unverified data sources or creating potentially disturbing content). If the best plan requires stepping outside those bounds, the Familiar will include a step that triggers a consent gate or involve a Steward. During the Ritual execution, the Familiar monitors the results of each step – if something unexpected occurs (like a tool fails or returns ambiguous data), the Familiar can dynamically adjust the plan (this might result in an updated Ruse on the fly). The ability to adapt dynamically is critical; the Familiar is not a simple script but an agent that can handle contingencies – much like how advanced agent frameworks allow agents to reason and adjust course. For example, if one data source doesn’t have information, the Familiar might try an alternative source or ask the Actor a clarifying question via the trustlog. Throughout this process, the Familiar emits events to the trustlog: e.g., “queried Tool X with parameter Y at time Z,” “received response size N,” “awaiting approval for action Q,” etc. This trustlog is essentially an append-only journal that Stewards (and even Actors, in a readable form) can review to see exactly what the Familiar did. By the end of a task, the Familiar assembles the final output along with a proposed Rubric evaluation. In straightforward cases, the Rubric check is automated and the Familiar can immediately finalize the result.
In borderline cases, the Familiar might flag the result for Steward review – e.g., “the answer was found but involves medical advice, which exceeds my autonomy; a Steward review is required.” The Familiar is programmed to defer to Stewards and system rules whenever uncertainty arises. In summary, the Familiar’s behavior is goal-driven and tool-assisted, but bounded by consent requirements, comfort ranges, and oversight. It functions much like a well-governed autonomous agent described in literature – capable of complex reasoning and tool use, but pausing for human approval when governance rules demand.
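One way to make an append-only trustlog tamper-evident is to hash-chain its entries, so an auditor can detect any rewritten history. This is an illustrative design sketch, not SPELWork's actual implementation; the class name and entry fields are assumptions.

```python
import json
import time
from hashlib import sha256

class TrustLog:
    """Minimal append-only trustlog: each entry records the hash of its
    predecessor, so altering any past entry breaks the chain on audit."""

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, event: str, **fields) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"ts": time.time(), "event": event, "prev": prev_hash, **fields}
        # Hash the entry contents (before the hash field itself is added).
        body["hash"] = sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

log = TrustLog()
log.append("tool_call", tool="MapsAPI", param="Seattle, WA")
log.append("consent_request", step=3, status="awaiting approval")
# entries[1]["prev"] equals entries[0]["hash"], forming the audit chain.
```

An auditor (or Steward tooling) verifies the chain by recomputing each hash in order; a human-friendly view for Actors can be rendered from the same entries.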
Schema Definitions: Every phase and component in SPELWork is represented by a structured manifest (often serialized as YAML or JSON). These manifests allow developers to validate and trace the state of a Wish as it moves through the system. For instance, a Wish manifest contains fields like the actor’s ID, description, timestamps, etc., whereas a Rubric manifest contains fields for outcome evaluation results. The formal schemas for all manifest types are provided in the Schema API Reference section of this guide. Below is an example Wish manifest (YAML format) to illustrate how a request is represented for implementation purposes:
Example of a Wish manifest
id: "wish-20251008-00123"
actor: "user_alex_456"                     # Reference to the Actor who made the wish
description: "Find the top 5 coffee shops nearby with a quiet atmosphere."
comfort_range: "Safe"                      # The actor’s comfort setting (e.g., Safe/Moderate/Open)
created_at: "2025-10-08T14:30:00Z"
status: "pending"                          # Current status (pending, in_progress, completed, escalated)
parameters:
  location: "Seattle, WA"
  radius: "5km"
forge:
  allowed_tools: ["MapsAPI", "ReviewDB"]   # Tools the Familiar can use in the Ritual
  memory_context: "short-term"             # Context scope for the agent’s memory
In this example, the Wish manifest includes an id and actor to identify the request, a textual description of the goal, a comfort_range set to “Safe” (meaning the user only wants content from fully trusted sources and no risky actions), and some optional parameters that further guide the task. It also pre-defines a Forge configuration, listing that the Familiar is allowed to use a Maps API and a Review database in the process, and that it has only short-term memory (no long-term data retention) for privacy. As the Wish progresses, other manifests (Ruse, Ritual, Rubric) will similarly capture their respective data (e.g., the Ruse manifest would list the planned steps and tools, the Ritual manifest would log the actual steps taken and results, and the Rubric manifest would document the checks performed and their outcomes). All these manifests together form a complete record of the “spell” cast by the Familiar, from intention to outcome. This structured approach makes the system predictable for developers and auditable for the community.
Community Onboarding Guide
Welcome to SPELWork! This section is a friendly guide for new users (Actors), community stewards, and contributors to understand how to participate in the SPELWork ecosystem. We’ll explain key concepts in simple terms, walk through a typical user journey (with examples), and outline the roles and ethical responsibilities that keep our community safe and productive.
Key Concepts for New Users
- Comfort Ranges: When you join, you’ll be asked to set your Comfort Range. This is essentially your personal setting for how cautious or open you want the system to be with content and methods on your behalf. Think of it like movie ratings or safety levels – for example, Safe means “I only want trusted, verified information and no potentially sensitive actions,” whereas Open means “I’m comfortable with more experimental or unfiltered content.” The system (and your Familiar) will use this setting as a guideline. If your request (Wish) might result in something beyond your comfort zone, the system will pause and ask for your consent before proceeding. This ensures you’re never taken by surprise. You can adjust your comfort range anytime as you get more familiar with the community.
- Trustlog: Every interaction in SPELWork is transparently logged in something we call the Trustlog (or trust ledger). What this means for you is that there’s a record of what the system is doing for you. If your Familiar searches for information, calls a tool, or modifies your request, it notes it down in the trustlog. As a user, you can view a human-friendly version of your trustlog to see the “story” of how your Wish was fulfilled – almost like an order tracking or a detailed receipt of magical work done. This builds trust because nothing is hidden; if the Familiar accessed a certain data source or if a Steward intervened for moderation, you’ll know. The trustlog is also used by Stewards to audit and by the community to spot-check the system’s fairness and compliance. In short, transparency is baked in, so you can trust that there’s oversight every step of the way.
- Consent Gates: Sometimes, your Familiar might need to do something that requires your permission. For example, if you ask it to plan an event, it might want to send invites via your email – but it won’t until you say it’s okay. These moments are called Consent Gates – points at which the system will explicitly ask “Do you allow me to proceed with X?” before doing something potentially sensitive. You’ll typically see a prompt in the app or interface to approve or deny the action. This also applies if the Familiar encounters a request that touches on sensitive content or personal data. The design is very much “human-in-the-loop” for safety: tools or actions requiring human approval will generate a request that must be approved in the UI or by a Steward. If you deny, the Familiar will adjust its plan or stop. If you approve, it will continue. You are in control. Consent gates ensure nothing crosses your boundaries without you explicitly letting it.
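A consent gate can be sketched as a simple pre-execution check: the step runs only after an approval callback says yes. The function and callback names here are illustrative only, not SPELWork's interface.

```python
from typing import Callable

def run_step(step: dict, ask_actor: Callable[[str], bool]) -> str:
    """Execute one plan step, pausing for approval if the step is flagged.

    ask_actor stands in for the UI prompt (or a Steward's decision)."""
    if step.get("requires_consent"):
        if not ask_actor(f"Do you allow me to proceed with {step['action']}?"):
            return f"denied: {step['action']} (Familiar will replan or stop)"
    return f"executed: {step['action']}"

# Stand-in responses for the Actor's approve/deny choice:
approve = lambda prompt: True
deny = lambda prompt: False

run_step({"action": "send invites via email", "requires_consent": True}, deny)
# -> "denied: send invites via email (Familiar will replan or stop)"
```

A denial never raises an error here; it just returns control to the Familiar, matching the "adjust its plan or stop" behavior described above.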
Roles and Responsibilities
SPELWork is a community with several roles working in harmony. Here’s who’s who, and what each does:
- Actor (You!) – An Actor is any user who interacts with the system to get something done. If you’re asking a question, seeking help, or tasking a Familiar with a job, you’re in the Actor role. Actors should formulate clear Wishes (requests) and provide honest feedback. As an Actor, your responsibilities include respecting the community guidelines when making requests (e.g. not asking your Familiar to do something malicious or disallowed), responding to consent prompts in a timely manner, and using the information or results you get in good faith. Essentially, Actors drive the demand – you bring the goals and creativity, and the system helps you achieve them.
- Familiar – A Familiar is your personal agent in the system. It’s an AI assistant assigned to help with your Wishes. You can think of it as a clever digital familiar spirit that works on tasks for you. Familiars communicate with you in natural language, and you can chat or refine your requests as needed. Under the hood, each Familiar can reason and use tools to fulfill tasks (for example, it can fetch data, summarize content, or perform analysis), but always within the limits of the rules. Familiars also have personality settings you might customize – some may be very formal, others more playful – but all share the trait of being loyal to their Actor and aligned with the community’s ethics. If something goes wrong or confusing, you can always ask your Familiar what it’s doing – it will explain and even show excerpts from the trustlog to keep you informed. Remember, the Familiar is powerful, but not infallible – treat it as a helper, and feel free to correct or guide it. It will learn your preferences over time.
- Steward – Stewards are the guardians and facilitators of the SPELWork community. Some Stewards are experienced community members, and some parts of stewardship are automated services (like the “Paladin” moderation agent described later). A Steward’s job is to ensure that everything stays within the agreed rules and that users have a positive experience. They monitor the trustlogs and system metrics for any signs of trouble (for instance, a Familiar struggling with a task or a user’s request that hits a policy limit). Stewards can step in to pause or adjust a Ritual if it’s going off-track. They can also help Actors refine their Wishes if needed (maybe suggesting a clarification if the request was too broad). If an output is blocked because of a Rubric rule (say it looked like disallowed content), a Steward reviews it and can decide to approve it (if it’s actually okay) or help sanitize it. Stewards also handle petitions – formal requests from users for things like rule changes or appeals on a decision (more on this in the Governance section). Ethically, Stewards are expected to be fair, transparent, and respect user autonomy. They follow a Stewardship Ethics code (covered below) that emphasizes trust, impartiality, and minimal intrusion. In short, Stewards keep the magic safe and the community thriving.
In many cases, the same individual can wear multiple hats. You might start as an Actor, and over time become a Steward helping others. We encourage a culture where everyone looks out for each other and for the system’s well-being.
Onboarding Workflow: From Wish to Outcome
Let’s walk through a sample scenario to illustrate how everything comes together when you use SPELWork as a new Actor:
Illustration: An Actor and Contributor in the community. In this example, Alex is a new user (Actor) looking for information, and Mia is a community contributor whose knowledge might help. Alex “seeks trustworthy reviews,” while Mia “shares helpful reviews” in the community. SPELWork bridges them: Alex will use a Familiar to find what Mia has shared, creating a moment of community connection.
1. Expressing a Wish: Alex logs in and is greeted by his Familiar. He formulates a Wish: “Find me the top-rated coffee shops in town that have a quiet atmosphere for working.” He sets his comfort range to Safe because he only wants reputable information. He submits this Wish.
2. Wish Acceptance and Ruse Planning: Alex’s Familiar receives the Wish and analyzes it. The Familiar knows from Alex’s profile and comfort settings that it should use verified sources (like community-reviewed data or trusted APIs). It creates a plan (Ruse) to fulfill the Wish – for instance, first check the ReviewDB (a community database of reviews where Mia and others contribute), then cross-reference with a Maps API for location data. The plan notes that if any review content looks potentially inappropriate or if no data is found, the Familiar might have to either broaden the search or ask Alex for permission to search the web at large (which could be outside Alex’s comfort since web data is unvetted). All this planning happens behind the scenes in seconds, and Alex is notified that his Familiar is “on it.”
3. Ritual Execution: Now the Familiar, within a Forge environment set up for this task, executes the plan. It queries the ReviewDB for “quiet coffee shops” and finds several entries – some of those entries were contributed by community members like Mia, who wrote detailed reviews of local cafes. The Familiar might also use a sentiment analysis tool to filter for “quiet” or “good for work” from the review text. Then it calls the MapsAPI to check the ratings and distance. During this Ritual, Alex’s Familiar logs each step to the trustlog (Alex can peek at an activity log that might say “Searching community reviews… Found 12 candidates… Filtering by quiet ambiance…”). Suppose one of the steps is to click a link to an external site for more info – since Alex’s settings are Safe, the Familiar triggers a consent gate: “I found an external source with info on one cafe. Do you want me to open it?” Alex sees this prompt and decides to allow it, trusting his Familiar’s judgment. The Familiar proceeds, fetches that info, and continues. Eventually, it compiles a short list of 5 coffee shops with high ratings and notes about their quiet atmosphere, gleaned from Mia’s and others’ reviews. It might also note one shop that had mixed reviews about noise, which it places lower on the list.
Onboarding Diagram: The journey from discovery to contribution. In the above diagram, we see the flow of community interaction: Discovery (Alex searches for info), Engagement (Alex’s Familiar finds Mia’s reviews and engages with that content), and Contribution (Mia’s act of sharing reviews enriches the community). The point where Alex finds value in Mia’s input is labeled the “Moment of Community Connection,” which SPELWork strives to create repeatedly. Alex gets his answer, and Mia’s contribution is recognized (perhaps via an upvote or thank-you in the system), reinforcing a positive feedback loop.
4. Rubric Check and Delivery: Before presenting the results to Alex, the Familiar performs the Rubric evaluation. It checks that the list of coffee shops and summary comply with guidelines – e.g., no inappropriate language was included from the reviews, all facts are backed by the data, and the content is within Alex’s comfort scope. The Familiar sees that everything looks good (no policy flags, the info seems accurate and sourced from trusted reviews). It attaches the sources (so Alex can see snippets of Mia’s reviews for each cafe) and a note that “Data comes from community reviews and Maps API.” The Rubric is satisfied, so no Steward intervention is needed. The Familiar delivers the final output to Alex: a neatly formatted list of five coffee shops, each with a brief description of the atmosphere and a link to the reviews. Alex is happy with the results and heads out to the cafe of his choice. Meanwhile, the trustlog entry for this Wish is closed out as completed, and Mia (the contributor) gets a small credibility boost in the system for having her review used (this could reflect in Mia’s reputation or trust score).
5. Follow-up and Feedback: After visiting one of the recommended coffee shops, Alex decides to add his own review through the community interface – thus he becomes a contributor as well, completing the circle of engagement to contribution. His Familiar assists by providing the template to submit a review. The next time someone like Alex searches, his contribution might help them – this is how the community grows. Stewards in the background monitor such interactions to ensure everything remains civil and useful. If Alex had a problem (say one of the recommendations was really off), he could give feedback or even file a petition for review, and a Steward would look into why that happened (maybe the data was outdated or a malicious entry slipped in, which would then be corrected).
Throughout this journey, new users like Alex are guided by the system in a friendly way – the interface might show tooltips explaining consent gates, how to read the trustlog, etc. Stewards might proactively reach out if they see a confused user or a declined action, to offer help. The community thrives on this virtuous cycle: Actors get quick, trusted help for their Wishes, Contributors (like Mia) earn trust and recognition for sharing knowledge, and Stewards ensure the environment stays safe and fair. All the while, the Familiars handle the heavy lifting of searching, computing, and cross-referencing, under human-friendly controls.
Stewardship Ethics and Best Practices
If you take on the role of a Steward (or are simply interested in how we keep things ethical), here are the guiding principles we follow in SPELWork’s community governance:
- Transparency and Honesty: Stewards must act openly. Any moderation actions or interventions should be logged and, where appropriate, explained to the affected users. Trust is maintained by not having “secret” rules – users should be able to understand why a decision was made (often via the trustlog or a direct communication). For example, if a Steward edits or blocks an output, they should leave a note like “Removed personal data to protect privacy.”
- User Autonomy and Consent: Stewards respect the autonomy of Actors. This means we avoid stepping in unless it’s necessary for safety or rule compliance. If a user’s preference is clear, we honor it. Consent gates are respected – if an Actor says “no” to a certain action, a Steward won’t override that without very strong reason (like a legal mandate). The ethos is that the user’s comfort comes first.
- Safety and Inclusion: We have zero tolerance for abuse, harassment, or dangerous content. Stewards ensure that Familiars are not being misused to generate harmful outputs and that users are not exposed to content outside their comfort range. We also strive for inclusion – making sure the system treats all users fairly and doesn’t reflect bias. Stewards should be mindful of biases in AI behavior or data and correct them when identified.
- Impartiality and Fairness: When resolving disputes or handling petitions, Stewards act like impartial judges. They weigh evidence and community guidelines above personal feelings. For instance, if two users have a disagreement or if a Familiar’s decision is contested, the Steward refers to the rubric and rules to make a fair call. They should also recuse themselves if they have a conflict of interest and let another Steward handle it.
- Empowerment and Education: Rather than just enforce rules, Stewards aim to educate. If a new user makes a mistake (say, phrasing a Wish in a problematic way), a Steward’s approach is to gently coach them on how to do it better next time, not just punish. We provide resources and help so that over time, users need Stewards less and can self-govern more.
- Accountability: Stewards are themselves accountable to the community. There are meta-reviews (Stewardship reviews) where fellow Stewards and community members can evaluate if a Steward’s actions were appropriate. Everything a Steward does is logged (in the ledger and trustlog) so there’s a trail. Abuse of Steward power is grounds for removal of that Steward role. We even have mechanisms like petitions where users can appeal a Steward’s decision to a council or higher authority if needed.
By adhering to these ethics, Stewards help maintain a culture of trust, respect, and continuous learning in SPELWork. We want the system to feel like a collaborative magic workshop where everyone – users, AI familiars, and stewards – is working together to accomplish things safely and enjoyably.
Schema API Reference
This section provides a detailed, structured reference for all SPELWork manifest types and their object fields. Developers and advanced users can use these schemas to validate manifests or to programmatically generate them. The schemas are presented in a YAML-like notation for readability (they can be converted to JSON Schema or OpenAPI specifications as needed). Each manifest type corresponds to one stage or component of the SPELWork lifecycle or governance structure.
Wish Manifest Schema
WishManifest:
  type: object
  required: ["id", "actor", "description", "comfort_range", "created_at", "status"]
  properties:
    id:
      type: string
      description: "Unique identifier for the Wish."
    actor:
      type: string
      description: "Identifier of the Actor (user) who created the Wish."
    description:
      type: string
      description: "Textual description of the Actor's desired goal or request."
    comfort_range:
      type: string
      description: "Safety/comfort setting applied to this Wish (e.g., Safe, Moderate, Open)."
    parameters:
      type: object
      description: "Optional key-value parameters providing additional context for the Wish."
    created_at:
      type: string
      format: date-time
      description: "Timestamp when the Wish was created."
    status:
      type: string
      description: "Current lifecycle status of the Wish (e.g., pending, in_progress, completed, escalated)."
    forge:
      type: object
      description: "Forge configuration specifying the tools and environment for this Wish."
      properties:
        allowed_tools:
          type: array
          items: { type: string }
          description: "List of tool identifiers the Familiar is permitted to use for this Wish."
        memory_context:
          type: string
          description: "Memory scope for the Familiar (e.g., 'none', 'short-term', 'session', 'long-term')."
        # ... (additional forge settings like resource limits or sandbox flags can be included)
Explanation: The Wish manifest captures the user’s request. Notably, it includes the comfort_range field to encode the user’s content sensitivity, and a nested forge configuration that enumerates what tools or resources are allowed when fulfilling this Wish. This manifest is created by the system when an Actor submits a new request.
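A minimal validator for this schema can be written with only the Python standard library; a real deployment would more likely feed the schema to a JSON Schema validator, so treat the helper below as an illustrative sketch.

```python
# Required fields taken from the WishManifest schema above.
REQUIRED = ["id", "actor", "description", "comfort_range", "created_at", "status"]

def validate_wish(manifest: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = [f"missing required field: {f}" for f in REQUIRED if f not in manifest]
    for field in REQUIRED:
        if field in manifest and not isinstance(manifest[field], str):
            errors.append(f"{field} must be a string")
    # Spot-check the nested forge configuration.
    tools = manifest.get("forge", {}).get("allowed_tools", [])
    if not all(isinstance(t, str) for t in tools):
        errors.append("forge.allowed_tools must be a list of strings")
    return errors

wish = {
    "id": "wish-20251008-00123",
    "actor": "user_alex_456",
    "description": "Find the top 5 coffee shops nearby with a quiet atmosphere.",
    "comfort_range": "Safe",
    "created_at": "2025-10-08T14:30:00Z",
    "status": "pending",
    "forge": {"allowed_tools": ["MapsAPI", "ReviewDB"]},
}
assert validate_wish(wish) == []          # the example manifest passes
assert validate_wish({"id": "w-1"})       # missing fields -> non-empty error list
```

Returning an error list (rather than raising on the first problem) lets the system report every defect in a submitted Wish at once.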
Ruse Manifest Schema
RuseManifest:
  type: object
  required: ["id", "wish_id", "created_at", "steps"]
  properties:
    id:
      type: string
      description: "Unique identifier for this Ruse (plan)."
    wish_id:
      type: string
      description: "Reference ID of the Wish that this Ruse is addressing."
    familiar_id:
      type: string
      description: "Identifier of the Familiar agent planning this Ruse."
    created_at:
      type: string
      format: date-time
      description: "Timestamp when the Ruse was formulated."
    steps:
      type: array
      description: "Ordered list of steps or actions planned to fulfill the Wish."
      items:
        type: object
        properties:
          action:
            type: string
            description: "Description of the action (e.g., 'query Reviews database')."
          tool:
            type: string
            description: "If applicable, the tool or resource to be used for this step."
          requires_consent:
            type: boolean
            description: "Whether this step is gated by a consent requirement."
    rationale:
      type: string
      description: "Optional free-text explanation by the Familiar why this plan is chosen."
Explanation: The Ruse manifest lays out the strategy. The steps array is a crucial part of this schema – each step can include a description of the action, which tool it uses (if any), and a flag if that step needs user consent. The presence of requires_consent: true on any step will cause the system to halt and request approval during the Ritual. The Ruse manifest is generated by the Familiar and can be inspected for transparency or debugging. For example, a step might look like: { action: "Call MapsAPI for location data", tool: "MapsAPI", requires_consent: false }. If a step had requires_consent: true, the Familiar would not proceed with that step without approval (as indicated in the plan). The rationale field is an optional human-readable note that the Familiar might produce (especially in verbose/debug mode) to explain its planning logic (useful for developers or stewards reviewing the plan).
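A short sketch of how the consent flags in a plan might be scanned before execution (the `consent_gated_steps` helper is hypothetical, not an actual SPELWork function):

```python
def consent_gated_steps(steps: list) -> list:
    """Return the indices of plan steps that must pause for approval."""
    return [i for i, step in enumerate(steps) if step.get("requires_consent")]

# Sample steps array, mirroring the example step from the Ruse manifest text.
plan = [
    {"action": "Call MapsAPI for location data", "tool": "MapsAPI",
     "requires_consent": False},
    {"action": "Post summary to user's feed", "tool": "FeedAPI",
     "requires_consent": True},
]
gated = consent_gated_steps(plan)  # the Ritual would halt before these indices
```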
Ritual Manifest Schema
```yaml
RitualManifest:
  type: object
  required: ["id", "ruse_id", "started_at", "ended_at", "status"]
  properties:
    id:
      type: string
      description: "Unique identifier for this Ritual (execution instance)."
    ruse_id:
      type: string
      description: "Reference to the Ruse plan ID that this Ritual is executing."
    started_at:
      type: string
      format: date-time
      description: "Timestamp when the Ritual began."
    ended_at:
      type: string
      format: date-time
      description: "Timestamp when the Ritual completed or stopped."
    status:
      type: string
      description: "Outcome status of the Ritual (e.g., success, halted, failed, partial)."
    logs:
      type: array
      description: "Chronological log of actions and events during execution."
      items:
        type: object
        properties:
          timestamp:
            type: string
            format: date-time
          event:
            type: string
            description: "Description of the event or action taken."
          result_summary:
            type: string
            description: "Short summary of the result of the action (if applicable)."
      # (The log could include more structured data, like tool outputs or error codes, as needed)
    output:
      type: object
      description: "Raw output or result produced by the Ritual (could be text, data, etc.)."
    consent_requests:
      type: array
      description: "Records of any consent gates triggered during the Ritual."
      items:
        type: object
        properties:
          step_ref:
            type: integer
            description: "Index or identifier of the plan step that required consent."
          requested_at:
            type: string
          responded_at:
            type: string
          decision:
            type: string
            description: "What decision was made (approved, denied, timeout)."
          actor_or_steward:
            type: string
            description: "Who provided the consent decision (could be Actor ID or Steward ID)."
```
Explanation: The Ritual manifest is essentially the execution trace. It references the ruse_id (plan it followed) and contains a detailed logs array. Each log entry might correspond to a step from the Ruse (for example, an event like “Queried ReviewDB for ‘quiet coffee shop’: 12 results found”). The status tells whether the ritual completed successfully or was halted. The output field holds the raw outcome prior to the Rubric check (this could be complex data or a draft answer). We also explicitly track consent_requests – each entry logs when the Familiar had to stop to ask for approval and what the response was. This way, one can audit that all required consent was actually obtained and how long it took. The Ritual manifest is invaluable for debugging and audit purposes, as it shows exactly what happened during the task. It is typically generated automatically as the Familiar runs, and finalized at the end of execution.
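An audit check over the `consent_requests` array could look like the following sketch (the `unresolved_consents` helper and the sample manifest are illustrative assumptions, not SPELWork APIs):

```python
def unresolved_consents(ritual: dict) -> list:
    """Return step_refs of consent gates that were not explicitly approved."""
    return [
        req["step_ref"]
        for req in ritual.get("consent_requests", [])
        if req.get("decision") != "approved"
    ]

ritual = {
    "id": "ritual-42",
    "ruse_id": "ruse-7",
    "status": "halted",
    "consent_requests": [
        {"step_ref": 2, "decision": "approved", "actor_or_steward": "user_alex"},
        {"step_ref": 5, "decision": "timeout", "actor_or_steward": ""},
    ],
}
flagged = unresolved_consents(ritual)  # step 5's gate timed out; an auditor would flag it
```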
Rubric Manifest Schema
```yaml
RubricManifest:
  type: object
  required: ["id", "ritual_id", "evaluated_at", "outcome"]
  properties:
    id:
      type: string
      description: "Unique identifier for this Rubric evaluation."
    ritual_id:
      type: string
      description: "Reference to the Ritual ID that produced the output being evaluated."
    evaluated_at:
      type: string
      format: date-time
      description: "Timestamp when the evaluation was completed."
    outcome:
      type: string
      description: "Final decision or state after evaluation (e.g., approved, blocked, needs_review)."
    score:
      type: number
      description: "Optional numeric score or rating indicating quality/success (if applicable)."
    issues:
      type: array
      description: "List of any issues or rule violations found."
      items:
        type: object
        properties:
          code:
            type: string
            description: "Short code for the type of issue (e.g., 'POLICY_VIOLATION', 'DATA_MISSING')."
          message:
            type: string
            description: "Human-readable description of the issue."
          severity:
            type: string
            description: "Severity level of the issue (e.g., warning, error)."
    lineage:
      type: array
      description: "Provenance information for content in the output."
      items:
        type: object
        properties:
          source_type:
            type: string
            description: "Type of source (e.g., 'UserContribution', 'ExternalAPI')."
          source_id:
            type: string
            description: "Identifier of the source (e.g., contributor ID or API name)."
          portion:
            type: string
            description: "Which part of the output traces to this source."
    final_output:
      type: string
      description: "The final user-facing output (after any moderation or fixes)."
    steward_notes:
      type: string
      description: "Optional notes from a Steward if manual review was involved."
```
Explanation: The Rubric manifest captures the result of the evaluation phase. The outcome field is key – if “approved”, the result was fine; if “blocked” or “needs_review”, it indicates the Familiar could not finalize the result autonomously. The issues array lists any problems detected (each with a code and message). For example, the Familiar might flag something like a policy violation if the output contained disallowed content. The lineage field is especially important for provenance: it provides an audit trail of where each piece of the output came from. If the answer included a snippet from Mia’s review (a user contribution), one entry in lineage might be { source_type: "UserContribution", source_id: "user_mia_123", portion: "Review quote about Cafe X" }. If another part of the answer came from an external API, that would be another entry (ensuring attribution and traceability). This anchoring of provenance helps in compliance and is part of how SPELWork implements trust – every fact or content bit can be traced to its origin. The final_output is what will be shown to the Actor, which might be identical to the raw output if no changes were needed, or a cleaned-up version if issues were resolved. If a Steward had to intervene (for instance, to approve something or to edit out sensitive info), they can leave a steward_notes comment explaining what was done. The Rubric manifest closes the loop on the task, and along with the other manifests, it’s stored in the ledger for future reference or audits.
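A sketch of how the `lineage` array supports attribution, building an index from source to contributed portions (the `attribution_index` helper is hypothetical; the sample entries reuse the Mia example above):

```python
from collections import defaultdict

def attribution_index(rubric: dict) -> dict:
    """Map each lineage source_id to the output portions it contributed."""
    index = defaultdict(list)
    for entry in rubric.get("lineage", []):
        index[entry["source_id"]].append(entry["portion"])
    return dict(index)

rubric = {
    "outcome": "approved",
    "lineage": [
        {"source_type": "UserContribution", "source_id": "user_mia_123",
         "portion": "Review quote about Cafe X"},
        {"source_type": "ExternalAPI", "source_id": "MapsAPI",
         "portion": "Opening hours"},
    ],
}
credits = attribution_index(rubric)  # who should be credited, and for what
```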
Forge Configuration Schema
```yaml
ForgeManifest:
  type: object
  required: ["id", "wish_id", "tools"]
  properties:
    id:
      type: string
      description: "Unique identifier for this Forge instance or configuration."
    wish_id:
      type: string
      description: "Reference to the Wish ID this Forge is associated with."
    tools:
      type: array
      description: "List of tool definitions available in this Forge."
      items:
        type: object
        properties:
          name:
            type: string
            description: "Name of the tool (must match a known integration)."
          version:
            type: string
            description: "Version or reference of the tool/API."
          permissions:
            type: string
            description: "Permission level or scope granted (e.g., read-only, read-write)."
          requires_approval:
            type: boolean
            description: "Whether usage of this tool inherently requires Actor/Steward approval."
    environment:
      type: string
      description: "Execution environment identifier (e.g., a container or sandbox profile)."
    memory:
      type: object
      description: "Memory and state management configurations."
      properties:
        type:
          type: string
          description: "Type of memory (ephemeral, persistent, etc.)."
        limit:
          type: string
          description: "Limits on memory or context size for the Familiar in this Forge."
    timeout:
      type: number
      description: "Maximum allowed execution time for the Ritual in this Forge (in seconds)."
```
Explanation: The Forge manifest defines the sandbox for execution. It lists all permitted tools with details like name and what permissions the Familiar has with that tool. For example, a tool entry might be { name: "OpenAI_API", version: "2.1", permissions: "read-only", requires_approval: false } meaning the Familiar can call that API (perhaps to get info) but not perform destructive actions, and it doesn’t need special approval each time. If requires_approval were true, the system would trigger a consent gate whenever the Familiar tries to use that tool (this could be set for tools that post on behalf of the user, spend money, etc.). The environment can indicate which sandbox or container is used – for instance, a restricted Python environment vs. a full OS container, depending on the task needs. The memory settings specify what kind of memory the Familiar has access to in this Forge; “ephemeral” might mean it cannot store data beyond the life of the Ritual, whereas “persistent” might allow saving state (subject to policy). The timeout ensures that tasks don’t run forever – if exceeded, the Ritual will be halted. Developers can adjust Forge settings for different categories of tasks; for example, a data analysis Wish might have a different Forge profile than a simple Q&A Wish. By tuning Forge configurations, the SPELWork system maintains security and efficiency for each task. This manifest is usually generated automatically based on the Wish context and system defaults, but it’s important for developers to know its structure when integrating new tools or adjusting execution policies.
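The gating logic described above can be sketched as a single dispatch function (the `tool_gate` helper and the `PaymentsAPI` entry are illustrative assumptions; only `OpenAI_API` appears in the text):

```python
def tool_gate(forge: dict, tool_name: str) -> str:
    """Return 'allow', 'consent_gate', or 'deny' for a requested tool call."""
    for tool in forge.get("tools", []):
        if tool["name"] == tool_name:
            return "consent_gate" if tool.get("requires_approval") else "allow"
    return "deny"  # anything not listed in the Forge is never callable

forge = {
    "id": "forge-9",
    "wish_id": "wish-001",
    "tools": [
        {"name": "OpenAI_API", "version": "2.1", "permissions": "read-only",
         "requires_approval": False},
        {"name": "PaymentsAPI", "version": "1.0", "permissions": "read-write",
         "requires_approval": True},
    ],
    "timeout": 120,
}
```

Note that the default is deny: a tool absent from the manifest cannot be invoked at all, which is the allowlist behavior the Forge is meant to enforce.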
(Note: Additional manifest types such as TrustLog entries or Petition records exist in the system, but those are primarily used in governance and auditing rather than in the core Wish lifecycle. They can be documented elsewhere or extended as needed. The above covers the primary types related to the operation of tasks in SPELWork.)
Legal/Governance Protocol
The SPELWork network operates not just on technology, but on a foundation of legal agreements and governance policies that ensure collaboration remains safe and fair – even across different communities or “realms.” This section describes the rules and compliance logic around federation (how multiple communities interact), contract negotiation between agents or communities, how rubrics escalate when there are conflicts, and how lineage of information is tracked. It also outlines enforcement patterns and audit structures that uphold these rules, such as the Paladin moderation agents, block release mechanisms, provenance anchoring in our ledger, and the processes of petitions and audits. In essence, this is the checks-and-balances layer of SPELWork.
Federation and Contract Negotiation
SPELWork is designed to be federated, meaning there isn’t one single monolithic server controlling everything – instead, there can be multiple nodes or communities (often called Covens or Guilds in our whimsical terminology) that interoperate. Each community might be run by different stewards or organizations, and might have its own local rules or focus. Federation allows an Actor in one community to benefit from resources or knowledge in another, but this happens only through governed channels. When communities connect, they establish a Federation Contract. This is essentially a negotiated agreement that sets the terms of how they will share data, requests, or agent services. For example, two communities might agree to allow their Familiars to collaborate on cross-community Wishes if the user consents, sharing only non-sensitive data, and abiding by the stricter of the two communities’ rubrics. These contracts cover things like data privacy (ensuring that, say, Community A’s user info doesn’t get logged in Community B’s systems improperly), usage limits, and conflict resolution (what happens if a rule in community A contradicts one in B).
When a cross-community interaction is initiated, the system performs a contract negotiation handshake: the involved Familiars or Steward services automatically reference the federation contract to see what’s allowed. Technically, this may involve protocol-driven messaging (similar to agent-to-agent protocols in other frameworks) where an agent from community A sends a structured request to community B’s agent, including metadata about the originating community and applicable rubric. Community B’s side will validate this against the contract rules and either accept, reject, or request modifications (for instance, “I’ll provide you this info but you must anonymize user data”). This negotiation is usually instantaneous and behind the scenes, thanks to pre-established contracts and standards. However, if something new is being attempted outside existing agreements, a formal negotiation might be triggered, possibly requiring Steward involvement or even an addendum to the contract. Stewards from each community can communicate (often through a secure channel or meeting) to update terms. All federation contracts are documented and often made transparent to members (so everyone knows the partnership terms).
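The receiving side of such a handshake might be sketched as follows. This is a hypothetical illustration: the field names (`partners`, `allowed_categories`, `require_anonymized`) are assumptions, not a published SPELWork contract format.

```python
def evaluate_federated_request(contract: dict, request: dict) -> tuple:
    """Accept, reject, or request modification of a cross-community request."""
    if request["origin"] not in contract["partners"]:
        return ("reject", "no contract with originating community")
    if request["data_category"] not in contract["allowed_categories"]:
        return ("reject", "data category not covered by the contract")
    if contract.get("require_anonymized") and not request.get("anonymized"):
        # Mirror the "I'll provide this, but you must anonymize" response.
        return ("modify", "resubmit with user data anonymized")
    return ("accept", None)

contract = {
    "partners": ["community_a"],
    "allowed_categories": ["reviews", "locations"],
    "require_anonymized": True,
}
```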
A key principle is the Principle of Least Privilege in federation: communities only share the minimum necessary for the task at hand, and an Actor’s data or requests are only shared externally if it’s needed and allowed. Actors can also opt-out of cross-community sharing if they desire (e.g., a user might set a preference “don’t send my queries outside this community”), which the system will honor by not federating those Wishes. Legally, these contracts also cover liability and compliance – e.g., if community B’s data is used to answer community A’s user, who is responsible if something goes wrong. Clear logging (with lineage info passed along) ensures that there is evidence of who contributed what (useful for attribution and any legal dispute). In summary, federation in SPELWork extends the power of the network by allowing cooperation, but it’s always done under agreed rules to maintain trust between communities.
Rubrics and Escalation
Each community or node in SPELWork can have its own set of Rubrics – essentially its local policies and quality standards. For example, a scholarly community might have a very strict Rubric about citation and accuracy, whereas a creative storytelling community might have a more lenient Rubric on fictional content but strict rules on hate speech. When a Wish is processed entirely within one community, the local Rubric is the authority for approving the output. However, situations arise where Rubric escalation is needed. This means moving up to a higher authority or a broader rule-set when local rules are insufficient or conflicting.
One scenario is if an Actor’s request cannot be fulfilled without violating a local rule – rather than just failing, the system might attempt an escalation. For instance, suppose a user in community A asks a question that accidentally runs against a local guideline (maybe it’s a medical question and community A disallows medical advice). The Familiar might escalate by checking if there’s a legal way to fulfill this – perhaps by involving a partner community B that has certified medical experts and a proper rubric for that. The escalation would involve notifying a Steward: “This Wish falls under medical advice which is not allowed here; recommend escalation to MedGuild (community B) under their rubric.” If approved (and with user consent), the Wish is forwarded to community B’s infrastructure, and community B’s Rubric will govern the answer. This is a form of vertical federation specifically triggered by rubric issues.
Rubric escalation can also happen internally: if an automated Rubric check fails (e.g., the content looks like it might be disallowed), the system escalates to a human Steward to review the case. The Steward then uses a higher rubric – which could simply mean their own judgment guided by community principles – to decide. They might override the block if they determine it’s a false alarm or apply an edit and then approve. This process is logged, and often multiple levels exist: e.g., junior stewards might escalate to a council of senior stewards for very tough or gray-area cases (similar to an appeals court).
In cross-community interactions, if two Rubrics conflict (say community A allows something community B forbids), the federation contract usually specifies that the more restrictive rule prevails for safety. Alternatively, the task might be split – parts of the Wish that can be answered under shared rules are handled, and the rest is declined with explanation. Escalation might mean that the issue is brought up in inter-community governance meetings to possibly harmonize rules in the future. For example, if it’s found that one community’s overly strict rubric is frequently blocking useful outcomes, communities might negotiate a common ground or mark certain requests as non-federable.
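The “more restrictive rule prevails” policy can be sketched as a merge over per-topic allow flags. This is a simplified assumption: real rubrics are richer than booleans, and a topic one community has never ruled on is treated here as disallowed, which is the conservative reading.

```python
def merge_rubrics(rubric_a: dict, rubric_b: dict) -> dict:
    """Combine two communities' allow-flags, keeping the stricter setting.

    True means the topic is allowed; the merged rubric only allows what
    BOTH communities allow. Topics missing from either side default to False.
    """
    topics = set(rubric_a) | set(rubric_b)
    return {t: rubric_a.get(t, False) and rubric_b.get(t, False) for t in topics}
```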
From a user’s perspective, rubric escalation is mostly behind the scenes. You might only notice it if your request is taking longer or if you get a message: “Your request is being reviewed for policy compliance” or “We are consulting an external expert community to get you an answer.” These are signs that escalation is happening. Usually, it results in either a safely crafted answer or a polite refusal with reasons. The lineage metadata in the Rubric manifest (described earlier) also plays a role – if content from elsewhere is used due to escalation, it’s clearly attributed, and if the escalation decision was complex, you might see a steward note in the Rubric explaining the resolution.
In essence, Rubric escalation ensures that when rules get in the way of helping users, there’s a pathway to seek exceptions or alternate solutions, but in a controlled and accountable manner. It prevents both reckless rule-breaking and rigid obstruction, aiming for a balance that upholds safety without unnecessarily hindering knowledge sharing.
Lineage and Provenance
Information is the lifeblood of SPELWork, and Lineage (provenance tracking) is how we ensure that information retains its context and credit as it flows through the system. Every piece of content or data that a Familiar uses or produces is tagged with provenance metadata. This is implemented via the lineage field in manifests like the Rubric manifest, and also recorded in the immutable ledger. But beyond just recording it, lineage is used in governance for attribution, accountability, and compliance.
From a governance standpoint, provenance anchoring means that whenever data is imported or an output is created, an anchor (like a cryptographic hash or a ledger entry) is generated linking the output back to its source. For example, if a Familiar quotes a sentence from a user-contributed article, that quote is anchored to the original article’s ID and the contributor’s ID. This anchor might be stored on a blockchain or secure ledger to prevent tampering. If someone later questions the output (say, “This seems plagiarized” or “Who originally said this?”), the system can prove exactly where it came from, when, and that it hasn’t been altered since extraction. This also helps in giving credit – the original contributor might automatically get notified or credited that “Your content was used in fulfilling X’s Wish.”
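A minimal sketch of the hashing step, assuming SHA-256 anchors as described (the helper names are illustrative; where and how anchors are persisted is system-specific):

```python
import hashlib

def make_anchor(portion: str, source_type: str, source_id: str) -> dict:
    """Create a tamper-evident anchor linking an output portion to its source."""
    digest = hashlib.sha256(portion.encode("utf-8")).hexdigest()
    return {"source_type": source_type, "source_id": source_id,
            "portion_sha256": digest}

def verify_anchor(portion: str, anchor: dict) -> bool:
    """Re-hash the portion and confirm it matches the recorded anchor."""
    return hashlib.sha256(portion.encode("utf-8")).hexdigest() == anchor["portion_sha256"]

anchor = make_anchor("Review quote about Cafe X",
                     "UserContribution", "user_mia_123")
```

Because the anchor stores only a digest, it proves the quoted text is unaltered without re-publishing the source content itself.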
Lineage is critical when multiple communities are involved. Federation contracts often include clauses about provenance – e.g., “If Community B’s data is used in Community A, it must be attributed and linked back to Community B.” The SPELWork ledger uses global identifiers to make this possible. So a piece of data from community B carries a globally unique ID that anyone (with proper permission) can look up to see its origin. Provenance anchoring thus supports compliance with licenses and data usage policies. If community B provided data under certain terms, the lineage info carries those terms, and the Familiar in community A will know, for instance, not to further distribute that data beyond the approved scope (or to purge it after use if required).
Another aspect is lineage-based moderation. Because we can trace content, if a certain source is found to be problematic (say a user contributed false info or toxic content), it’s possible to retract or correct anything that source touched. Stewards can query the ledger: “Where has content from user_X been used?” and then decide if any outputs need review or retraction. This traceability acts as both a deterrent for malicious contributions and a safety net to fix issues.
Legally, lineage provides the evidentiary basis for the system’s outputs. In an enterprise or legal context, if someone says “How did the system come up with this result?”, SPELWork can produce the chain of provenance – which might be crucial for compliance with regulations (like data protection laws or academic citation requirements). It’s similar to maintaining an audit trail for decisions: we maintain an audit trail for information.
To summarize, lineage and provenance in SPELWork ensure that content is never rootless. Every answer has a family tree. This fosters a culture of attribution (people get credit for their contributions), quality (origins can be verified), and accountability (if something goes wrong, we can pinpoint where). It transforms what could be a murky magic box into a transparent ledger of knowledge transformation.
Enforcement and Moderation Patterns
Given the sophisticated abilities of Familiars, it’s crucial that strong enforcement mechanisms are in place to prevent misuse and to intervene when policies are at risk of being breached. SPELWork employs both automated and human-driven moderation in complementary ways.
Paladin Moderation Agents: Paladins in SPELWork are special agent processes (akin to “white knights”) dedicated to monitoring and enforcing community guidelines in real-time. A Paladin agent functions similarly to a compliance agent in other AI frameworks that ensures policy enforcement. For example, as a Familiar is executing a Ritual, a Paladin might be running in parallel, scanning the intermediate outputs and tool calls. If the Familiar tries to do something disallowed – say it’s about to output personal identifiable information or use a forbidden tool – the Paladin will intercept. Technically, Paladin agents have hooks into the trustlog and the message bus of the system. They can issue an immediate halt to a Ritual if a serious violation is detected, triggering an alert to Stewards. Paladins can also auto-correct minor issues: for instance, if an output just needs a simple redaction (like removing a phone number), a Paladin might do that edit on the fly and log it. These agents are essentially the first line of defense, handling straightforward rule enforcement so that human Stewards don’t have to micromanage every action.
Block Release Logic: As mentioned earlier, one of the core enforcement mechanisms is that no result goes straight to an Actor without passing checks. This block release logic is a pattern where the system by default holds the final output until it’s confirmed safe. Think of it like a final checkpoint – the Rubric evaluation must give an “all clear.” If it doesn’t, the output is literally blocked from being delivered. The Actor might see a message like “Your result is undergoing review” instead of the result, in such cases. Only when the issue is resolved (either automatically or via a Steward) will the content be unblocked. This logic is crucial in preventing, for example, an AI hallucination containing a dangerous suggestion from ever reaching a user. It might feel slightly slower, but it dramatically reduces the chances of harm. Over time, as trust builds and the system improves, some workflows might allow faster releases for low-risk tasks, but the ability to block is always there as a safety net.
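The hold-by-default checkpoint can be sketched as a small decision function keyed on the Rubric manifest's `outcome` field (the `release_decision` helper is illustrative, not an actual SPELWork API):

```python
def release_decision(rubric: dict) -> str:
    """Hold the final output unless the Rubric gave an explicit all-clear."""
    outcome = rubric.get("outcome")
    if outcome == "approved":
        return "release"            # Rubric said "all clear"
    if outcome == "needs_review":
        return "hold_for_steward"   # user sees "undergoing review"
    return "block"                  # blocked, missing, or unknown outcomes fail closed
```

The important property is the final branch: an absent or unrecognized outcome blocks delivery rather than letting the output through, which is what makes the checkpoint a safety net.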
Dynamic Rate Limiting and Gates: Enforcement isn’t only about content. The system also monitors usage patterns. If a particular user or Familiar starts making an unusually high number of requests or actions (which could indicate a bug or misuse), SPELWork can enforce rate limits. Similarly, if an external tool starts failing or returning suspect data, the system can cut it off (circuit-breaker style). These measures ensure stability and prevent any one part of the system from overwhelming others.
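One common way to implement such a rate limit is a token bucket; this is a generic sketch, not SPELWork's actual limiter:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allow a burst, then refuse until tokens refill."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity          # start with a full burst allowance
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A per-Familiar or per-Actor bucket would refuse the burst of requests described above while leaving normal usage untouched.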
Steward Intervention: Automated measures have their limits, so Stewards play a key role in moderation. The moment a Paladin or Rubric flag comes up, it can be routed to a Steward’s dashboard. Stewards have the ability to inject themselves into a running Ritual (pausing or aborting it), or to modify a Rubric outcome. They also handle edge cases – e.g., deciding if something is in a gray area. Stewards can apply temporary blocks on certain content or even on a user if needed (like a cooldown if someone keeps trying to break rules). However, all such actions are logged and usually are accompanied by a path for the user to appeal (so enforcement is firm but fair).
Community Moderation: In addition to official Stewards, SPELWork encourages community members to flag issues. For example, if Alex received an output that he found inappropriate or incorrect, he could flag it. Those flags feed into the moderation system, perhaps spawning a petition or alert. This way, moderation is crowdsourced to an extent – many eyes make bugs shallow, to borrow an open-source concept. The Paladin agents also learn from these flags; if the community flags something that slipped through, that pattern can be used to update the Paladin’s rules.
In all enforcement patterns, a strong emphasis is on minimal disruptiveness – intervene only as much as needed. We want the magical experience to continue smoothly for users who are not doing anything wrong. So, when an enforcement triggers, it’s often accompanied by a clear message. For instance: “Content removed by policy – personal data detected” so the user knows what happened. Over time, repeated minor violations might lead a Familiar to automatically educate the Actor (“Your last question asked for something we can’t provide, here’s why…”), aligning with the educative approach of Stewardship.
Audit Structures: Ledger, Trustlog, and Petitions
Transparency and recourse are pillars of SPELWork’s governance. Audit structures ensure that every action can be examined and that users have ways to voice concerns or challenge decisions.
The Ledger: At the heart of the audit system is a tamper-evident ledger – essentially a secure database (or blockchain-like system) that records significant events and decisions. Every Wish, Ruse, Ritual, and Rubric manifest is logged here, as are important events like consent decisions, tool usage, and moderation actions. The ledger is append-only; entries cannot be removed or altered without leaving a trace (which upholds integrity). In practice, this means if a dispute arises – for example, “Why was my request denied?” or “Did the system mishandle something?” – authorized auditors (which could be senior stewards or an external inspector for compliance audits) can review the sequence of events in the ledger. The ledger is structured in such a way that entries are linked (for instance, a Rubric entry links to the corresponding Ritual and Wish), so one can reconstruct the entire chain easily. Portions of the ledger are also exposed to users in the form of the trustlog we discussed – the trustlog is basically a user-friendly view filtered from the raw ledger, focusing on what an Actor would care about for their own request.
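The tamper-evidence property can be illustrated with a hash-chained append-only log. This is a toy sketch of the principle, not SPELWork's actual ledger implementation:

```python
import hashlib
import json

class Ledger:
    """Append-only, hash-chained log: altering any past entry breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": entry_hash})

    def verify(self) -> bool:
        """Recompute the chain; any edited record or broken link returns False."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

Linking each entry's hash into its successor is what makes the ledger append-only in practice: an auditor re-running `verify()` detects any after-the-fact edit.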
Trustlog (Revisited): As an audit tool, the trustlog serves both Actors and Stewards. For Actors, it’s transparency into their own session. For Stewards, aggregated trustlogs can be analyzed to spot anomalies or systemic issues. For instance, if a particular tool is frequently causing consent gates or a particular user’s requests often end in escalation, those patterns show up in the logs. SPELWork likely has analytic dashboards running on the trustlog data to highlight trends (like “10% of requests this week were auto-blocked by rubric – up from 2% last week – investigate why”). This data-driven oversight helps in continuously refining policies and system behavior.
Petitions: A unique aspect of SPELWork governance is the petition system. Petitions are a formal way for community members to raise issues or propose changes. Here are some common scenarios for petitions:

- An Actor files a petition to appeal a decision – e.g., “My request was wrongly blocked as inappropriate, please review.”
- A Contributor files a petition about attribution – e.g., “I contributed this info but wasn’t credited” or “I want my content removed from the system.”
- A Steward might even file a petition on behalf of the community – e.g., “We should tighten the policy on X” or “Allow a new category of tool integration.”
- A general user could petition to challenge a rule – “I think the comfort range definitions need updating, here’s why…”
Petitions go into a queue where Stewards and possibly elected community representatives review them. Each petition is tracked (with its own manifest possibly, including who filed it, what it’s about, related Wish IDs if any, etc.). During review, the relevant parts of the ledger and trustlogs are consulted. For example, if Alice petitions that her output was unfairly blocked, the Stewards pull up the ledger entries for that Wish and see what happened – maybe the Paladin flagged a false positive. If they agree it was an error, they can not only release the output to Alice but also adjust the system (update the Paladin’s filter, etc.). The petition outcome is then recorded and communicated.
From a governance perspective, petitions are essential for community-driven evolution. All significant petition outcomes might be summarized periodically and published, so everyone sees what changes or decisions have been made. It’s a way to keep Stewards accountable and the rules dynamic to community needs. Repeated types of petitions might signal a need for a policy update, which Stewards can then propose formally (sometimes even putting to a community vote if the platform is designed for that).
Compliance Audits: In more formal settings (like enterprise or cross-jurisdiction operations), SPELWork may be subject to external audits. Because of the ledger and structured schemas, it’s relatively straightforward to export data for an auditor to review, say, whether all user consent requests were properly handled over a period, or whether any data went to unauthorized places. Compliance auditors can verify that for each piece of personal data used, there was consent logged, etc. This is how SPELWork meets legal requirements like GDPR for data handling – by design, it has an audit trail for consent and usage.
In conclusion, the governance layer of SPELWork ensures that the system is not a black box. It’s governed by clear agreements (federation contracts), adaptable yet strict rules (rubrics that can escalate), and a vigilant enforcement system (Paladins and stewards) – all underpinned by transparency (ledger and trustlog) and community feedback loops (petitions). By marrying technical rigor with community values, SPELWork creates a total framework where innovation in agentic AI can flourish within safe bounds. It’s a system where magic (automation) and trust (verification and ethics) co-exist, providing a reference model for others to follow in building human-centric AI ecosystems.