MindScript, Google ADK Integration, and the Living Multi-Agent Garden
Series Continuation: Build Multi-Agent Systems with ADK
PeacebinfLow | SAGEWORKS AI | Maun, Botswana | 2026
Abstract
Volumes I and II of the EcoSynapse series established the foundational architecture of a botanical intelligence system: a living data platform where ten garden plant species run as autonomous agents governed by sourced physiological equations, communicate through a cryptographic event protocol, store behavioral memory in Snowflake, and surface their state to users through Google Gemini. The agents existed. The data flowed. The protocol held.
What those two volumes did not resolve is the question of how agents become aware of each other as peers rather than merely as recipients of protocol packets. Volume I gave plants identities. Volume II gave plants mathematics. Volume III gives plants intelligence that is distributed, composable, and deployable.
This volume introduces three interconnected developments. The first is MindScript, the custom agent programming language that emerges from the intersection of BioSyntax (defined in Volume II) and the MindsEye repository ecosystem. MindScript is not a scripting language added on top of the system; it is the natural evolution of BioSyntax once agents need to describe not just their own physiological transitions but their relationships with other agents, their awareness of the broader ecosystem, and their capacity to learn from interaction.

The second development is the full integration of Google's Agent Development Kit (ADK) and the Agent-to-Agent (A2A) protocol into the EcoSynapse layer stack. Google's ADK provides the production-grade orchestration infrastructure that the system needs to move from architectural prototype to deployed, running multi-agent system. The Google application layer — Gemini, Google Sheets, Gmail, Google Docs, Google AI Studio, and Google Cloud Run — is mapped onto the agent roles that each application will fulfill, creating a mirrored ecosystem where familiar tools become autonomous participants.

The third development is the Lab system, redesigned from the ground up as a full Cloud Run deployment target where individual agents run as separate microservices, a frontend web interface manages their composition and communication, and users can tokenize, share, and link agents into compound systems that behave as single unified intelligences while preserving each component's specialized role.
The MindsEye repositories are embedded throughout. They are not peripheral references; they are the awareness layer of the entire architecture.
Series Context
Before proceeding, it is worth stating clearly what this series has built and where it stands.
Volume I defined EcoSynapse as an open-source ecosystem intelligence platform. It established the ACP protocol (the Agent Communication Protocol), the immutable event ledger, the Snowflake behavioral schema with its tables for agents, events, permissions, and anomalies, and the Gemini interpretation layer. It introduced the concept of plant agents, Auth0 for agent identity management, Backboard for API orchestration, and Solana for data node tokenization. It described the system in architectural terms with the understanding that the implementation would follow.
Volume II filled the biological and mathematical interior of that architecture. It selected ten specific garden plants with sourced datasets from USDA, GBIF, FAO, Kew Gardens, Wageningen, and CGIAR. It derived the physiological equations governing each plant's behavior — the Penman-Monteith transpiration model, the Farquhar-von Caemmerer-Berry photosynthesis model, the Ball-Woodrow-Berry stomatal conductance model, Michaelis-Menten kinetics for nutrient uptake — and introduced BioSyntax as the domain-specific expression language for describing plant state transitions in botanical vocabulary. It specified the Labs system as a bounded simulation environment and described the EcoSynapse Language Model as a domain-specific transformer trained on the system's own communication stream.
Volume III is the operational layer. It takes the architecture of Volume I and the biology of Volume II and builds the agent system that makes both of them run in production.
Part One: MindsEye as the Awareness Layer
1.1 What MindsEye Is
MindsEye is a repository ecosystem developed under the PEACEBINFLOW GitHub organization that provides the cognitive scaffolding for the EcoSynapse agent network. If the ACP protocol of Volume I is the nervous system through which agents transmit signals, MindsEye is the structure that gives agents something to be aware of.
The repositories in the MindsEye ecosystem are organized across several functional domains:
- A core layer containing minds-eye-core, which defines the fundamental awareness primitives that all agents inherit.
- A data processing layer containing mindseye-sql-core, mindseye-sql-bridges, and mindseye-kaggle-binary-ledger, which govern how agents acquire and process structured knowledge.
- An orchestration layer containing mindseye-workspace-automation, mindseye-google-workflows, and mindseye-gemini-orchestrator, which define how agents coordinate through Google's infrastructure.
- An interface layer containing minds-eye-dashboard, minds-eye-search-engine, and mindseye-chrome-agent-shell, which define how agents present themselves to human operators.
- A runtime layer containing mindseye-android-lawt-runtime and mindseye-cloud-fabric, which define the deployment and execution environments where agents live.
The full repository list, embedded here for reference:
- https://github.com/PEACEBINFLOW/mindseye-workspace-automation
- https://github.com/PEACEBINFLOW/mindseye-google-ledger
- https://github.com/PEACEBINFLOW/mindseye-gemini-orchestrator
- https://github.com/PEACEBINFLOW/mindseye-google-devlog
- https://github.com/PEACEBINFLOW/mindseye-google-analytics
- https://github.com/PEACEBINFLOW/mindseye-google-workflows
- https://github.com/PEACEBINFLOW/minds-eye-law-n-network
- https://github.com/PEACEBINFLOW/minds-eye-core
- https://github.com/PEACEBINFLOW/minds-eye-search-engine
- https://github.com/PEACEBINFLOW/minds-eye-gworkspace-connectors
- https://github.com/PEACEBINFLOW/minds-eye-dashboard
- https://github.com/PEACEBINFLOW/minds-eye-automations
- https://github.com/PEACEBINFLOW/minds-eye-playground
- https://github.com/PEACEBINFLOW/mindseye-binary-engine
- https://github.com/PEACEBINFLOW/mindseye-chrome-agent-shell
- https://github.com/PEACEBINFLOW/mindseye-android-lawt-runtime
- https://github.com/PEACEBINFLOW/mindseye-moving-library
- https://github.com/PEACEBINFLOW/mindseye-data-splitter
- https://github.com/PEACEBINFLOW/mindseye-kaggle-binary-ledger
- https://github.com/PEACEBINFLOW/mindseye-sql-core
- https://github.com/PEACEBINFLOW/mindseye-sql-bridges
- https://github.com/PEACEBINFLOW/mindseye-cloud-fabric
1.2 How MindsEye Maps onto EcoSynapse
Each MindsEye repository provides a specific capability that the EcoSynapse agent system depends on. The mapping is direct and deliberate.
minds-eye-core provides the base AwarenessAgent class, which is the parent class for every plant agent in EcoSynapse. It defines the primitives that distinguish an aware agent from a mere protocol participant: the capacity to hold a belief state, update that belief state from incoming events, query the beliefs of neighboring agents, and decide on actions based on the current belief state rather than only on reactive state machine transitions. An agent without awareness reacts. An agent with awareness anticipates.
mindseye-sql-core and mindseye-sql-bridges define the interface between the agent runtime and the Snowflake memory layer. In Volume II, the Snowflake schema was described as the system's behavioral memory. These repositories contain the query abstraction layer that agents use to retrieve their own history, compare their current state against their historical baseline, and identify anomalies in their own behavior before the external anomaly detection queries catch them. An agent that can query its own past is an agent that can learn.
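To make the idea concrete, here is a minimal sketch of the self-baseline check such a bridge enables. The function, its z-score test, and its threshold are illustrative assumptions, not code from the repositories:

```python
from statistics import mean, stdev

def deviates_from_baseline(history, current, z_threshold=2.5):
    """Return True when `current` sits more than `z_threshold` standard
    deviations away from the agent's own historical readings.

    `history` is a list of past values an agent would retrieve through
    the SQL bridge; the z-score test here is illustrative only.
    """
    if len(history) < 2:
        return False  # not enough history to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold
```

An agent running a check like this against its own Snowflake history can flag a drift before the external anomaly queries catch it.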
mindseye-gemini-orchestrator defines the communication protocol between plant agents and the Gemini intelligence layer. Agents do not call Gemini directly; they submit structured observations to the orchestrator, which decides when and how to relay those observations to Gemini and when to return Gemini's interpretations back to the agent as updated context. This prevents agents from being overwhelmed by responses from a language model that may operate on a different time scale than their physiological simulation cycle.
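The decoupling the orchestrator provides can be sketched as a simple batching buffer. The class name, batch policy, and methods below are assumptions for illustration, not the repository's actual interface:

```python
from collections import deque

class ObservationBuffer:
    """Accumulates agent observations and releases them in batches, so
    the language-model round trip never blocks the simulation loop.
    Batch size and draining policy are illustrative."""

    def __init__(self, batch_size=4):
        self.batch_size = batch_size
        self._pending = deque()

    def submit(self, observation):
        """Called by plant agents; returns immediately."""
        self._pending.append(observation)

    def drain_batch(self):
        """Called on the orchestrator's own schedule: return up to
        `batch_size` observations for relay to Gemini, oldest first."""
        batch = []
        while self._pending and len(batch) < self.batch_size:
            batch.append(self._pending.popleft())
        return batch
```

The point of the design is visible in the two methods: agents only ever touch `submit`, while the slower Gemini-facing loop owns `drain_batch`.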
mindseye-workspace-automation and mindseye-google-workflows define the automation sequences that drive the Google application agents described in Part Two of this document. These repositories are where the workflows that connect Gmail, Google Sheets, Google Docs, and Google AI Studio are defined, not as manual integrations but as autonomous agent behaviors.
minds-eye-dashboard and minds-eye-search-engine define the user interface layer through which human operators observe the running agent ecosystem. The dashboard is the window into the Lab. The search engine allows operators to query the behavioral history of any agent in the system using natural language.
mindseye-cloud-fabric defines the Cloud Run deployment configuration for the entire agent ecosystem. It is the production bridge between the locally runnable agent definitions and the deployed microservices that run on Google Cloud.
1.3 MindScript — The Agent Programming Language
BioSyntax, defined in Volume II, was designed for a single purpose: expressing plant physiological transitions in botanical vocabulary. It works within a single agent's scope. When a tomato agent needs to describe what it does when its stomatal conductance drops below threshold, BioSyntax provides exactly the right expressive surface. But BioSyntax has no vocabulary for what an agent does when it needs to reason about another agent's state, compose its behavior with another agent's behavior, or decide which of several agents in a Lab should take priority in a given moment.
MindScript extends BioSyntax into the inter-agent domain. It introduces the vocabulary of awareness, coordination, and composition that the multi-agent system requires. MindScript is still grounded in biological process — every statement still reads as a description of ecological behavior — but its scope is the ecosystem rather than the individual organism.
The design of MindScript follows four principles. First, every MindScript statement must be interpretable as a biological or ecological process description. An agent receiving an observation from another agent is modeled as one organism sensing the chemical signal of another. An agent yielding priority to another is modeled as a subordinate organism redirecting resources toward a dominant one. Second, MindScript compiles to Google ADK agent definitions. The compilation target is not Python directly but ADK's agent specification format, which allows MindScript-defined agents to be deployed to Cloud Run without any additional transformation step. Third, MindScript agents are composable. Two or more MindScript agents can be bound together into a compound agent that presents a single interface while internally distributing work according to the composition rules defined in the binding statement. Fourth, MindScript is the output of the EcoSynapse Language Model defined in Volume II. As the system accumulates more behavioral data, the EcoSynapse LLM becomes capable of generating valid MindScript expressions from natural language descriptions of desired agent behavior.
The MindScript specification:
// MindScript — core syntax specification
// Extends BioSyntax for inter-agent coordination

// --- AGENT DECLARATION (ADK-compatible) ---
AGENT tomato_stress_monitor
  EXTENDS solanum_lycopersicum_base
  ROLE stress_detection
  PRIORITY high
  ZONE zone_b_bangalore
  AWARENESS radius: 1.2m, depth: soil_horizon_1
END AGENT

// --- OBSERVATION DECLARATIONS ---
// An agent observes the state of neighboring agents
// Maps to ADK tool call: query_neighbor_state()
OBSERVE basil_agent_04.voc_emission_rate
  IF value > 0.8 nmol_per_g_per_hr
    RECORD observation AS signal.neighbor.voc.elevated
    UPDATE self.belief_state.neighbor_stress TO true
  END IF
END OBSERVE

// --- COMPOSITION DECLARATIONS ---
// Binds multiple agents into a compound agent
// Maps to ADK multi-agent orchestration pattern
COMPOSE lab_bangalore_cluster
  MEMBERS [tomato_agent_01, basil_agent_04, marigold_agent_07]
  PRIORITY_ORDER [tomato_agent_01 WHEN stress_index > 0.6,
                  marigold_agent_07 WHEN pest_signal DETECTED,
                  basil_agent_04 DEFAULT]
  INTERFACE single
  HANDOFF graceful
END COMPOSE

// --- INTER-AGENT MESSAGE DECLARATIONS ---
SEND TO basil_agent_04
  MESSAGE signal.water.share_request
  PAYLOAD { requested_ml: 40, urgency: self.stress_index }
  AWAIT response TIMEOUT 2_simulation_ticks
  ON_TIMEOUT reduce_water_demand BY 0.15
END SEND

// --- GOOGLE APPLICATION AGENT BINDING ---
// Maps agent outputs to Google application agents
BIND self.stress_events TO google_sheets_agent.stress_log_sheet
  ON_EVENT stress.water.onset
    WRITE [timestamp, agent_id, stress_index, zone] TO row
  END ON_EVENT
END BIND

BIND self.critical_events TO gmail_agent.alert_channel
  ON_EVENT stress_index > 0.85
    SEND email SUBJECT "Critical: {self.agent_id} approaching senescence"
    BODY narrative_summary(self.last_24h_events)
  END ON_EVENT
END BIND
MindScript's compilation to ADK agent definitions means that every AGENT declaration in MindScript produces a deployable Cloud Run service. Every COMPOSE declaration produces an ADK orchestrator agent. Every BIND declaration produces an ADK tool integration. This is the bridge between the biological description layer and the production deployment layer.
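As a toy illustration of this compilation step — not the real compiler, and with the output field names assumed — an AGENT block can be parsed into an ADK-style specification dictionary:

```python
def parse_agent_block(source):
    """Parse a minimal MindScript AGENT ... END AGENT block into a
    flat dict. Only single-line keyword clauses are handled; this is
    a toy sketch of the MindScript -> ADK compilation idea."""
    spec = {}
    for raw in source.strip().splitlines():
        line = raw.strip()
        if line.startswith("AGENT "):
            spec["name"] = line.split(None, 1)[1]
        elif line == "END AGENT":
            break
        elif line and not line.startswith("//"):
            # e.g. "ROLE stress_detection" -> spec["role"] = "stress_detection"
            keyword, _, value = line.partition(" ")
            spec[keyword.lower()] = value.strip()
    return spec
```

A real compiler would additionally emit the Cloud Run service configuration described above; the dictionary here only shows the declaration-to-specification direction.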
Part Two: The Google Application Agent Layer
2.1 The Mirroring Principle
The central architectural decision in Volume III is that the Google application suite — Gmail, Google Sheets, Google Docs, Google AI Studio, and Gemini — does not serve as a passive interface for EcoSynapse. Each application is assigned a specific agent role, and each agent role mirrors the functional purpose of the application it is built around. Gmail is a communication channel agent because email is a communication channel. Google Sheets is a data storage agent because spreadsheets are data storage. The mirroring is not metaphorical; it is structural. The agent's behavior in the system is defined by the capabilities of the application it inhabits.
This design decision has a practical rationale. Google Workspace applications are infrastructure that billions of people already have access to, already understand, and already use for the purposes that each application was designed for. Assigning agent roles to these applications means that the cognitive cost of understanding the agent system is near zero for most users. When a user receives a Gmail message from the tomato stress monitor, they do not need to understand that it is an ADK agent running on Cloud Run. They need only understand that something in their garden requires attention.
2.2 Gmail — The Alert and Communication Agent
The Gmail agent is built on the mindseye-google-ledger repository and the Gmail MCP connector. Its role is to serve as the inter-agent and human-facing communication channel for events that require asynchronous notification.
In the EcoSynapse context, the Gmail agent monitors the Snowflake anomaly table for events that cross configurable severity thresholds. When a plant agent in a running Lab emits an event that populates the anomalies table with a severity classification of high or critical, the Gmail agent composes and sends a notification. The notification is not a raw data dump; it is a Gemini-narrated summary of the event, its likely cause based on the behavioral history query, and the recommended response.
The Gmail agent also serves as the inter-agent message delivery system for asynchronous agent-to-agent communication that does not require real-time response. When a researcher in one location submits a Lab template to the EcoSynapse Knowledge Commons, the tokenization confirmation, the provenance record, and the initial simulation summary are all delivered through the Gmail agent to the researcher's registered address. When another researcher forks that Lab, the original contributor receives a notification through the Gmail agent informing them that their work has been built upon.
ADK configuration for the Gmail agent:
from google.adk.agents import Agent
from google.adk.tools import gmail_tool

class GmailAlertAgent(Agent):
    name = "ecosynapse_gmail_alert"
    description = """
    Monitors EcoSynapse event streams for threshold-crossing anomalies
    and delivers Gemini-narrated notifications to registered recipients.
    Handles both human notification and inter-system async messaging.
    """
    tools = [
        gmail_tool.send_message,
        gmail_tool.read_messages,
        gmail_tool.create_draft,
    ]
    model = "gemini-2.0-flash"
    instruction = """
    You are the communication agent for the EcoSynapse botanical
    intelligence system. When you receive an anomaly event from the
    event stream, you compose a clear, scientifically accurate
    notification that describes what happened, why it happened based
    on the behavioral history, and what the operator can do about it.
    You write as a knowledgeable botanist would write to a fellow
    researcher. You do not use alarm language. You state facts and
    provide context. You always include the Snowflake query that
    surfaced the event so the recipient can investigate further.
    """
2.3 Google Sheets — The Data Storage and Ingestion Agent
The Google Sheets agent is built on the mindseye-google-analytics repository and the Google Sheets MCP connector. Its role is twofold: it serves as a human-readable data store for simulation outputs, and it serves as a data ingestion point through which contributors can submit plant observation data to the system without needing direct Snowflake access.
For simulation output storage, the Sheets agent subscribes to the Backboard event stream for any Lab that has Sheets integration enabled. At each simulation tick, the agent writes a row to the designated sheet: timestamp, agent ID, species name, zone, current state, stress index, water availability, biomass, and the top three physiological readings from the state vector. This creates a continuously updated, human-readable record of every agent's behavior that is accessible to anyone with view permissions on the sheet.
For data ingestion, the Sheets agent exposes a standardized template that contributors can use to submit plant observations. The template columns correspond exactly to the PlantRecord canonical format defined in Volume II. When a contributor fills in the template and marks a row as ready for submission, the Sheets agent reads the row, validates it against the schema, writes it to the Snowflake raw_observations table with the appropriate source attribution, and returns a confirmation in the adjacent status column. This is a zero-code contribution pathway for researchers who have plant data but no programming background.
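A minimal sketch of that validation step, assuming an illustrative subset of PlantRecord columns (the actual column set is defined in Volume II):

```python
REQUIRED_FIELDS = {
    # Illustrative subset of the PlantRecord canonical format;
    # the real schema lives in Volume II and Snowflake.
    "species": str,
    "zone": str,
    "observed_at": str,
    "soil_moisture_pct": float,
}

def validate_row(row):
    """Validate one sheet row (a dict of cell values) against the
    required fields. Returns (ok, errors) so the Sheets agent can
    write a confirmation or a rejection to the status column."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in row or row[field] in ("", None):
            errors.append(f"missing: {field}")
        else:
            try:
                expected(row[field])  # sheet cells arrive as strings
            except (TypeError, ValueError):
                errors.append(f"bad type: {field}")
    return (not errors, errors)
```

Only rows that validate cleanly would be written to the Snowflake raw_observations table; everything else gets its error list echoed back into the status column.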
The Sheets agent also serves as the calculation surface for the equation agents described in section 2.7. When an equation agent needs to expose its calculation to a human operator for inspection or adjustment, it writes the inputs, formula, and output to a designated sheet, where the operator can observe the computation and override the formula parameters if required.
2.4 Google Docs — The Narrative and Documentation Agent
The Google Docs agent is built on the mindseye-google-devlog repository. Its role is to maintain the living documentation of the EcoSynapse system as it runs. This is not static documentation written once and left to age; it is a continuously updated narrative record generated by the Gemini interpretation layer and deposited into Google Docs as the system produces it.
When a Lab is created, the Docs agent creates a new document for that Lab and begins writing its history. The initial entry is the Lab configuration: species composition, zone selection, environmental parameters, and the Solana token address. As the Lab runs, the Docs agent adds entries at configurable intervals: narrative summaries of the current ecosystem state, descriptions of significant events, records of agent state transitions, and interpretations of inter-agent communication patterns. The result is a readable story of the garden's life, written in botanical language and accessible to anyone with a link to the document.
The Docs agent is also responsible for generating the submission documentation for the EcoSynapse Knowledge Commons. When a contributor tokenizes a Lab or a dataset on Solana, the Docs agent automatically generates the documentation record that accompanies the token: a structured description of what the contribution contains, how it was produced, what its data sources are, and how other contributors can build on it.
2.5 Google AI Studio — The Interface Generation Agent
The Google AI Studio agent operates differently from the other Google application agents. Where the Gmail, Sheets, and Docs agents are data-flow participants, the AI Studio agent is an interface builder. Its role is to generate the visual interfaces through which users interact with running Labs and observe agent behavior.
When a user creates a new Lab, the AI Studio agent receives the Lab configuration and generates a custom interface for that Lab. The interface includes an SVG visualization of the simulation grid (described in Volume II), a real-time status panel showing each agent's current state and physiological readings, an event log showing the most recent protocol packets from all agents, a simulation controls panel for adjusting environmental parameters mid-run, and a Gemini chat panel for natural language interaction with the Lab.
The interface is not a static template. The AI Studio agent generates the HTML and JavaScript for the interface dynamically based on the Lab's specific composition. A Lab with ten tomato agents in a grid layout generates a different interface from a Lab with three agents in a companion-planting triangle. The interface reflects the actual spatial and compositional structure of the simulation.
SVG visualization generation follows the pattern described in Volume II: each agent is rendered as a positioned plant graphic whose visual state reflects its current physiological and behavioral state. Green agents are in optimal state. Yellow agents are stressed. Red agents are approaching senescence. Root zone radii are rendered as overlapping circles beneath each plant graphic, with overlap regions highlighted to show where allelopathic interactions are occurring. Signal arcs between agents show active inter-agent communications.
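The rendering rule reduces to a state-to-color mapping. The sketch below follows the green/yellow/red convention described above; the markup, radii, and state names are illustrative:

```python
def agent_svg(agents):
    """Render each agent as a colored circle whose fill encodes its
    behavioral state. `agents` is a list of dicts with grid
    coordinates and a state label; sizes are illustrative."""
    colors = {"optimal": "green", "stressed": "yellow",
              "senescing": "red"}
    shapes = []
    for a in agents:
        fill = colors.get(a["state"], "gray")  # unknown states render gray
        shapes.append(
            f'<circle cx="{a["x"]}" cy="{a["y"]}" r="12" fill="{fill}"/>'
        )
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            'width="200" height="200">' + "".join(shapes) + "</svg>")
```

The production interface layers root-zone circles and signal arcs on top of this base pass, but the state-driven fill is the core of the visual vocabulary.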
The AI Studio agent uses the Imagen integration to generate photorealistic representations of specific plant states on request. When an operator asks Gemini "what does my tomato agent look like right now," the AI Studio agent calls Imagen with a structured prompt built from the agent's current physiological state — stress index, growth stage, water deficit, leaf area index — and returns a generated image alongside the numerical state data.
2.6 Gemini — The Top Layer Interpretation and Coordination Agent
Gemini's role in Volume III expands from interpretation to coordination. In Volume II, Gemini was the interface between human queries and structured Snowflake data. In Volume III, Gemini is the ADK orchestrator agent that coordinates the activity of all the Google application agents described in this section.
This is the architectural insight that unifies the Google layer of EcoSynapse. Gemini does not merely answer questions; it manages the communication between the Gmail agent, the Sheets agent, the Docs agent, the AI Studio agent, and the plant simulation agents that are producing events. When a plant agent emits a critical stress event, it is Gemini that decides whether the event warrants a Gmail notification, a Sheets log entry, a Docs narrative update, or all three. When a user asks a natural language question about their Lab, it is Gemini that determines which combination of Snowflake queries, agent state reads, and application agent outputs will produce the most complete answer.
This coordination role is implemented through the ADK orchestrator pattern. Gemini runs as a root ADK agent, and the Gmail, Sheets, Docs, and AI Studio agents are its sub-agents. The plant simulation agents communicate upward to Gemini through the Backboard event stream. Gemini routes outbound communications downward to the application agents based on the event type, the operator's configured notification preferences, and its own assessment of the event's significance.
ADK orchestrator configuration:
from google.adk.agents import Agent

class EcoSynapseOrchestrator(Agent):
    name = "ecosynapse_orchestrator"
    description = """
    Root orchestrator for the EcoSynapse multi-agent system.
    Coordinates plant simulation agents, Google application agents,
    and human operator interactions across the full ecosystem.
    """
    model = "gemini-2.0-flash"
    sub_agents = [
        GmailAlertAgent(),
        GoogleSheetsAgent(),
        GoogleDocsAgent(),
        AIStudioInterfaceAgent(),
        SnowflakeQueryAgent(),
        LabSimulationAgent(),
    ]
    instruction = """
    You are the central intelligence of the EcoSynapse botanical
    simulation system. You receive events from running plant agent Labs,
    manage the flow of information between simulation outputs and Google
    application agents, and respond to human operator queries.

    When routing events:
    - Critical stress events (stress_index > 0.85) route to Gmail and Docs
    - Routine simulation data routes to Sheets
    - Significant behavioral patterns route to Docs narrative
    - User queries route to SnowflakeQueryAgent then back to you for narration

    When composing responses, you always ground your language in the
    actual physiological state of the agents involved. You describe what
    is happening biologically before you describe what it means operationally.
    """
2.7 The Equation Agents
The equation agents are a category of agent with no direct Google application counterpart. They are computational specialists. Each equation agent is responsible for a specific mathematical operation from the physiological model defined in Volume II, and each runs as an independent Cloud Run microservice.
The design rationale for equation agents as separate services is isolation of computational complexity. The Penman-Monteith transpiration calculation involves multiple atmospheric variables, species-specific constants, and zone-specific calibration factors. Running this calculation inside the main plant agent process creates a computational dependency that could slow the agent's response to incoming signals. By offloading the calculation to a dedicated equation agent running as a separate Cloud Run service, the plant agent can submit a calculation request and continue processing incoming signals while the result is being computed.
The equation agents are:
The TranspirationAgent handles all Penman-Monteith calculations for all ten plant species. It receives a structured request containing the atmospheric variables, the species constants from the Snowflake state vector, and the zone calibration factors, performs the calculation, and returns the transpiration rate to the requesting plant agent through the Backboard API.
The PhotosynthesisAgent handles all FvCB net photosynthesis calculations. It is the most computationally expensive of the equation agents because the FvCB model requires iterative solution of a nonlinear system to find the minimum of the three limiting rates. Isolating this calculation as a separate service ensures that photosynthesis computations for one agent do not delay the state updates of other agents.
The StomatalAgent handles Ball-Woodrow-Berry stomatal conductance calculations and the related water use efficiency computations. It is tightly coupled to both the TranspirationAgent and the PhotosynthesisAgent, and receives pre-computed results from both as inputs.
The NutrientAgent handles Michaelis-Menten kinetics for nitrogen and phosphorus uptake across all ten species. It queries the Snowflake conditions table for current soil nutrient concentrations and returns uptake rates calibrated to the current soil state.
The BioMassAgent handles the biomass accumulation calculation across all three response functions — temperature, water, and light — and maintains the running biomass state for each agent over the simulation duration.
Each equation agent is defined in MindScript with a ROLE of computation, which produces a Cloud Run service deployment configuration rather than a stateful agent deployment configuration. Equation agents are stateless between requests, which makes them horizontally scalable: if a Lab contains forty plant agents all requiring photosynthesis calculations in the same tick, the PhotosynthesisAgent can be scaled to handle parallel requests without modifying any other component of the system.
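Because an equation agent is stateless, its service body reduces to a pure function over the request payload. The handler below shows that shape; the linear formula is a deliberate placeholder, not the Penman-Monteith implementation, and the field names are assumptions:

```python
def handle_transpiration_request(payload):
    """Stateless handler sketch: everything needed for the calculation
    arrives in the request, and nothing is retained afterward, which
    is what makes horizontal scaling trivial. The formula here is a
    simplified placeholder, NOT Penman-Monteith."""
    required = ("net_radiation", "vpd", "species_coefficient")
    missing = [k for k in required if k not in payload]
    if missing:
        return {"error": f"missing fields: {missing}"}
    rate = (payload["net_radiation"] * 0.408
            + payload["vpd"] * payload["species_coefficient"])
    return {"transpiration_mm_per_day": round(rate, 3)}
```

Wrapping a function of this shape in an HTTP server yields exactly the Cloud Run contract the computation tier depends on: no state between requests, so any replica can answer any request.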
Part Three: ADK Architecture and Cloud Run Deployment
3.1 ADK Multi-Agent Architecture Overview
Google's Agent Development Kit (ADK) provides three patterns for multi-agent coordination: sequential orchestration, where agents execute in a defined order; parallel orchestration, where agents execute simultaneously and their outputs are merged; and agent delegation, where a root agent dynamically assigns subtasks to specialized agents based on the nature of each incoming request. EcoSynapse uses all three patterns in different parts of the system.
Sequential orchestration governs the agent initialization sequence. When a Lab is created, the initialization steps described in Volume II follow a strict order: state vector retrieval from Snowflake, Auth0 credential issuance, BioSyntax model instantiation, event bus registration. These cannot be parallelized because each step depends on the output of the previous one. The ADK SequentialOrchestrator manages this sequence.
Parallel orchestration governs the simulation tick engine. At each tick, all plant agents in a Lab compute their physiological state independently and simultaneously. None of these computations depend on each other at the computation stage, though they do depend on each other at the signaling stage that follows. The ADK ParallelOrchestrator manages the tick's computation phase.
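The independence property of the computation phase can be illustrated with standard-library parallelism. The ADK ParallelOrchestrator is the production mechanism; the function below is only a sketch, with assumed agent and callback shapes:

```python
from concurrent.futures import ThreadPoolExecutor

def run_tick(agents, compute_state):
    """Compute every agent's next physiological state in parallel.
    `compute_state` is any per-agent function; because no agent's
    computation reads another agent's output during this phase, the
    work parallelizes freely. The signaling phase that follows a
    tick is sequential and is not modeled here."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(compute_state, agents)
        return dict(zip((a["id"] for a in agents), results))
```

The dict keyed by agent ID is what the signaling phase would then consume.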
Agent delegation governs the orchestrator's response to incoming user queries and incoming events. When a user submits a natural language query, the orchestrator does not know in advance whether the answer requires a Snowflake query, an equation computation, a Gemini narration, a Sheets lookup, or a combination of all of these. ADK's delegation mechanism allows the orchestrator to assign the query to whichever combination of sub-agents is appropriate for that specific request.
3.2 The Agent-to-Agent (A2A) Protocol in EcoSynapse
The Agent-to-Agent protocol is the communication standard that allows ADK agents running as separate Cloud Run services to communicate with each other without shared state or shared infrastructure. Each agent exposes an A2A endpoint, and agents discover each other through the A2A registry maintained by the orchestrator.
In EcoSynapse, the A2A protocol maps directly onto the ACP protocol defined in Volume I. Every ACP packet that a plant agent emits is also a valid A2A message. The ACP packet fields — agent_id, action_type, authorization_scope, payload, signature, parent_packet_id — satisfy the A2A message requirements for sender identity, message type, authorization context, content, integrity verification, and causal chain. The decision to design the ACP protocol in Volume I with these fields was made precisely to ensure A2A compatibility. The two protocols are not in tension; they are the same protocol at different levels of abstraction.
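The field correspondence can be written down directly. The A2A-side names below are illustrative labels for the six requirements listed above, not the protocol's literal field names:

```python
ACP_TO_A2A = {
    # ACP field            -> A2A requirement (labels are illustrative)
    "agent_id":            "sender",          # sender identity
    "action_type":         "message_type",    # message type
    "authorization_scope": "auth_context",    # authorization context
    "payload":             "content",         # content
    "signature":           "integrity",      # integrity verification
    "parent_packet_id":    "causal_parent",   # causal chain
}

def acp_to_a2a(packet):
    """Relabel an ACP packet as an A2A-shaped message. Because the
    mapping is one-to-one, no information is added or lost; fields
    outside the ACP schema are dropped."""
    return {ACP_TO_A2A[k]: v for k, v in packet.items()
            if k in ACP_TO_A2A}
```

The translation being a pure relabeling is the concrete sense in which the two protocols are "the same protocol at different levels of abstraction."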
The A2A registry in EcoSynapse is maintained by the Backboard orchestration layer. When a plant agent is initialized, Backboard registers its A2A endpoint in the registry. When the plant agent needs to send a signal to a neighboring agent, it queries the registry for the neighbor's endpoint, constructs an ACP/A2A packet, and delivers it directly to the neighbor's endpoint through the A2A protocol. The Snowflake event log receives a copy of every packet, maintaining the immutable audit trail defined in Volume I, but the actual agent-to-agent communication is direct rather than mediated through a central bus.
This distinction matters at scale. A Lab with fifty plant agents that all need to exchange stress signals at the same tick cannot route every signal through a single central bus without creating a bottleneck. A2A direct delivery with Snowflake logging as a side effect allows the signal exchange to happen at the speed of Cloud Run network communication while preserving the complete behavioral record.
3.3 Cloud Run Service Architecture
The Cloud Run deployment of EcoSynapse follows a microservices architecture where each agent type runs as an independent service. The services are organized into three tiers.
The first tier is the simulation tier. This tier contains the plant simulation services, one per active Lab. A Lab with ten plant agents running three species across two zones is a single simulation service that manages all ten agents internally. The simulation service handles the tick engine, the physiological computations (delegating to equation services in the second tier), the state machine transitions, and the ACP packet generation. Simulation services are stateful; they maintain the current state of all agents in their Lab across ticks.
The second tier is the computation tier. This tier contains the equation services: TranspirationService, PhotosynthesisService, StomatalService, NutrientService, and BioMassService. These services are stateless and horizontally scalable. They receive calculation requests from simulation services, perform the calculation using the parameters in the request, and return the result. They do not maintain any state between requests.
The third tier is the application tier. This tier contains the Google application agent services: GmailAlertService, GoogleSheetsService, GoogleDocsService, and AIStudioService. It also contains the OrchestratorService, which runs the EcoSynapseOrchestrator ADK agent, and the QueryService, which handles Snowflake queries on behalf of the orchestrator. These services are semi-stateful: they maintain session state for ongoing interactions but do not maintain simulation state.
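The statelessness contract of the computation tier can be sketched as a pure request handler: everything the handler needs arrives in the request, so any instance can serve any request. Field names are illustrative, and the limiting-rate arithmetic is a placeholder, not the FvCB model itself.

```python
def handle_calculation(request: dict) -> dict:
    """Stateless handler: result depends only on the request body."""
    params = request["parameters"]
    # Placeholder echoing the limiting-rate structure of photosynthesis
    # models: the realized rate is the minimum of the candidate limits
    rate = min(params["carboxylation_limited"], params["light_limited"])
    return {"agent_id": request["agent_id"], "result": rate}
```

Because no state survives between calls, the same request always yields the same response, which is what allows Cloud Run to scale these services from zero to fifty instances freely.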
Cloud Run deployment manifest for the simulation tier:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ecosynapse-simulation-{lab_id}
  namespace: ecosynapse
  labels:
    tier: simulation
    lab_id: {lab_id}
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/maxScale: "5"
        run.googleapis.com/cpu-throttling: "false"
    spec:
      containerConcurrency: 1
      timeoutSeconds: 3600
      containers:
        - image: gcr.io/ecosynapse/simulation-service:latest
          env:
            - name: LAB_ID
              value: {lab_id}
            - name: SNOWFLAKE_ACCOUNT
              valueFrom:
                secretKeyRef:
                  name: snowflake-credentials
                  key: account
            - name: AUTH0_DOMAIN
              valueFrom:
                secretKeyRef:
                  name: auth0-credentials
                  key: domain
            - name: TRANSPIRATION_SERVICE_URL
              value: https://ecosynapse-transpiration-{region}.run.app
            - name: PHOTOSYNTHESIS_SERVICE_URL
              value: https://ecosynapse-photosynthesis-{region}.run.app
            - name: A2A_REGISTRY_URL
              value: https://ecosynapse-orchestrator-{region}.run.app/a2a/registry
          resources:
            limits:
              cpu: "2"
              memory: "2Gi"
Cloud Run deployment manifest for an equation service:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ecosynapse-photosynthesis
  namespace: ecosynapse
  labels:
    tier: computation
    equation: fvCB
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "50"
    spec:
      containerConcurrency: 80
      containers:
        - image: gcr.io/ecosynapse/photosynthesis-service:latest
          resources:
            limits:
              cpu: "1"
              memory: "512Mi"
The photosynthesis service scales to zero when no Labs are running and up to fifty instances when many Labs are running simultaneously. The simulation service maintains a minimum of one instance to preserve the in-memory agent state.
3.4 The Simulation Service Implementation
The simulation service is the operational core of a running Lab. Its implementation brings together the BioSyntax model definitions from Volume II, the MindScript coordination layer from Part One of this volume, and the ADK agent infrastructure from the preceding sections.
from google.adk.agents import Agent
from ecosynapse.protocol import ACPHandler
from ecosynapse.biosyntax import BioSyntaxRuntime
from ecosynapse.mindscript import MindScriptRuntime
from ecosynapse.snowflake import SnowflakeClient
from ecosynapse.a2a import A2ARegistry
# Module paths for the environment driver and PlantAgent follow the
# project layout established in Volume II
from ecosynapse.environment import EnvironmentModel
from ecosynapse.agents import PlantAgent
import asyncio


class LabSimulationService(Agent):
    """
    Root agent for a running Lab. Manages all plant agents
    within the Lab, coordinates the tick engine, and routes
    all events to Snowflake and the A2A network.
    """
    name: str
    lab_id: str
    agents: dict  # agent_id -> PlantAgent instance
    adjacency_graph: dict
    tick_interval: int
    environment: dict

    def __init__(self, lab_config: dict):
        self.lab_config = lab_config  # retained: run() reads the duration from it
        self.lab_id = lab_config['lab_id']
        self.tick_interval = lab_config['tick_interval_seconds']
        self.environment = lab_config['environment']
        self.env_model = EnvironmentModel(self.environment)
        self.snowflake = SnowflakeClient()
        self.acp_handler = ACPHandler(self.snowflake)
        self.a2a_registry = A2ARegistry()
        self.biosyntax_rt = BioSyntaxRuntime()
        self.mindscript_rt = MindScriptRuntime()
        self.agents = self._initialize_agents(lab_config['plants'])
        self.adjacency_graph = self._build_adjacency_graph(
            lab_config['grid']
        )

    def _initialize_agents(self, plant_specs):
        agents = {}
        for spec in plant_specs:
            for i in range(spec['count']):
                agent_id = f"{spec['species']}-{self.lab_id}-{i:04d}"
                state_vector = self.snowflake.query("""
                    SELECT * FROM ecosynapse.agent_init_profile
                    WHERE species = :species AND ecosystem_zone = :zone
                """, species=spec['species'], zone=spec['zone'])
                agent = PlantAgent(
                    agent_id=agent_id,
                    species=spec['species'],
                    zone=spec['zone'],
                    state_vector=state_vector,
                    biosyntax_rt=self.biosyntax_rt,
                    mindscript_rt=self.mindscript_rt,
                    acp_handler=self.acp_handler,
                )
                self.a2a_registry.register(agent_id, agent.a2a_endpoint)
                agents[agent_id] = agent
        return agents

    async def run_tick(self, t: int):
        # Advance environment
        env = self.env_model.advance(self.environment, t)

        # Parallel physiological computation; the TaskGroup awaits
        # every agent's task before the signaling phase begins
        async with asyncio.TaskGroup() as tg:
            for agent in self.agents.values():
                tg.create_task(agent.compute_physiology(env))

        # Process inter-agent signals via A2A
        for agent_id, agent in self.agents.items():
            neighbors = self.adjacency_graph.get(agent_id, [])
            for signal in agent.flush_outbound_signals():
                for neighbor_id in neighbors:
                    neighbor_endpoint = self.a2a_registry.lookup(neighbor_id)
                    await self.a2a_registry.deliver(
                        signal, neighbor_endpoint
                    )

        # Evaluate MindScript composition rules
        for composition in self.mindscript_rt.compositions:
            await composition.evaluate(self.agents)

        # Flush all events to Snowflake
        all_events = []
        for agent in self.agents.values():
            all_events.extend(agent.flush_event_queue())
        await self.snowflake.batch_insert('ecosynapse.events', all_events)

    async def run(self):
        t = 0
        while t < self.lab_config['simulation_duration_ticks']:
            await self.run_tick(t)
            await asyncio.sleep(self.tick_interval)
            t += 1
Part Four: The Lab System — Full Specification
4.1 Lab Architecture in Volume III
The Lab system was introduced in Volume II as a bounded simulation environment with configurable plant composition and environmental parameters. Volume III specifies its full implementation as a Cloud Run-deployed multi-agent system with a web frontend, agent composition tools, and live training capabilities.
A Lab in Volume III has five components that Volume II did not fully specify. The first is the agent registry: every agent in the Lab is registered in the A2A registry with its current endpoint, its species classification, its zone, and its current state summary. The second is the composition engine: the MindScript COMPOSE declarations that define how multiple agents behave as a single compound agent, which agent holds priority under which conditions, and how handoffs between agents are managed. The third is the live training loop: an ongoing process by which agent behavioral data generated during the simulation is fed back into the EcoSynapse LLM fine-tuning pipeline, gradually improving the model's ability to predict agent behavior and generate MindScript from natural language. The fourth is the tokenization layer: the Solana token representing the Lab, its composition, its configuration, and all agent outputs produced during its run. The fifth is the frontend interface: the Cloud Run-deployed web application through which operators observe, query, and interact with the running Lab.
4.2 Lab Frontend Architecture
The Lab frontend is a single-page web application deployed to Cloud Run. It communicates with the OrchestratorService through a WebSocket connection for real-time event streaming and through REST endpoints for queries and control operations.
The frontend has six panels. The simulation grid panel displays the SVG visualization of the current Lab state, updated at each simulation tick. The agent roster panel displays a list of all agents in the Lab with their current state, physiological readings, and behavioral history. The event stream panel displays the most recent ACP packets from all agents, scrolling in real time as the simulation runs. The composition panel displays the active MindScript composition declarations and allows operators to modify priority rules mid-simulation. The query panel provides the Gemini natural language interface for asking questions about the Lab. The controls panel allows operators to adjust environmental parameters, pause and resume the simulation, add or remove agents, and export the full behavioral history to Snowflake or Google Sheets.
Frontend HTML structure:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>EcoSynapse Lab — {lab_name}</title>
<link rel="stylesheet" href="/static/ecosynapse.css">
</head>
<body class="lab-interface">
<header class="lab-header">
<div class="lab-identity">
<span class="lab-name">{lab_name}</span>
<span class="lab-zone">{primary_zone}</span>
<span class="lab-token" title="Solana token address">{solana_mint_short}</span>
</div>
<div class="lab-status">
<span class="sim-tick">Tick: <strong id="current-tick">0</strong></span>
<span class="sim-day">Day: <strong id="sim-day">0</strong></span>
<div class="status-indicator" id="status-dot"></div>
</div>
</header>
<main class="lab-grid">
<section class="panel panel-simulation" id="simulation-panel">
<h2>Simulation Grid</h2>
<div class="svg-container" id="agent-grid">
<!-- SVG rendered by AI Studio agent, updated per tick -->
</div>
<div class="grid-legend">
<span class="legend-optimal">Optimal</span>
<span class="legend-stressed">Stressed</span>
<span class="legend-adapting">Adapting</span>
<span class="legend-senescent">Senescent</span>
</div>
</section>
<section class="panel panel-agents" id="agent-roster">
<h2>Agent Roster</h2>
<div class="agent-list" id="agent-list-container">
<!-- Populated dynamically from agent registry -->
</div>
</section>
<section class="panel panel-events" id="event-stream">
<h2>Event Stream</h2>
<div class="event-log" id="event-log-container">
<!-- Real-time ACP packet stream via WebSocket -->
</div>
</section>
<section class="panel panel-composition" id="composition-panel">
<h2>Agent Composition</h2>
<div class="composition-editor" id="mindscript-editor">
<!-- MindScript composition editor -->
</div>
<div class="composition-status" id="active-compositions">
<!-- Active COMPOSE declarations and current priority holder -->
</div>
</section>
<section class="panel panel-query" id="query-panel">
<h2>Ask Gemini</h2>
<div class="query-history" id="query-history"></div>
<div class="query-input-row">
<input type="text" id="query-input"
placeholder="What is happening in this garden right now?"
autocomplete="off">
<button id="query-submit">Ask</button>
</div>
</section>
<section class="panel panel-controls" id="controls-panel">
<h2>Simulation Controls</h2>
<div class="env-controls">
<label>Water Availability
<input type="range" id="water-control" min="0" max="1" step="0.01">
<span id="water-value">0.68</span>
</label>
<label>Temperature (°C)
<input type="range" id="temp-control" min="10" max="45" step="0.5">
<span id="temp-value">28.5</span>
</label>
<label>Light Hours
<input type="range" id="light-control" min="4" max="16" step="0.1">
<span id="light-value">11.4</span>
</label>
</div>
<div class="sim-buttons">
<button id="btn-pause">Pause</button>
<button id="btn-resume">Resume</button>
<button id="btn-export">Export to Sheets</button>
<button id="btn-tokenize">Tokenize Lab</button>
</div>
</section>
</main>
<script type="module" src="/static/lab-client.js"></script>
</body>
</html>
Frontend CSS (ecosynapse.css — core variables and layout):
:root {
--forest: #1A3C34;
--canopy: #3A7D6A;
--leaf: #5BA896;
--mist: #E8F4F1;
--bark: #1C1C1C;
--soil: #4A3728;
--gold: #B5860D;
--optimal: #4CAF50;
--stressed: #FF9800;
--adapting: #2196F3;
--senescent: #F44336;
--bg: #0F1E17;
--surface: #162820;
--border: rgba(91, 168, 150, 0.2);
--text: #E8F4F1;
--text-dim: #7EB5A6;
--font-ui: 'Inter', 'Helvetica Neue', sans-serif;
--font-data: 'JetBrains Mono', 'Courier New', monospace;
}
* { box-sizing: border-box; margin: 0; padding: 0; }
body.lab-interface {
background: var(--bg);
color: var(--text);
font-family: var(--font-ui);
min-height: 100vh;
display: flex;
flex-direction: column;
}
.lab-header {
background: var(--forest);
border-bottom: 1px solid var(--border);
padding: 12px 24px;
display: flex;
justify-content: space-between;
align-items: center;
}
.lab-name { font-size: 18px; font-weight: 600; color: var(--mist); }
.lab-zone { font-size: 13px; color: var(--leaf); margin-left: 12px; }
.lab-token {
font-family: var(--font-data);
font-size: 11px;
color: var(--text-dim);
margin-left: 12px;
background: rgba(91,168,150,0.1);
padding: 2px 8px;
border-radius: 4px;
}
.lab-grid {
display: grid;
grid-template-columns: 1fr 280px;
grid-template-rows: repeat(4, auto);
gap: 12px;
padding: 16px;
flex: 1;
}
.panel {
background: var(--surface);
border: 1px solid var(--border);
border-radius: 8px;
padding: 16px;
overflow: hidden;
}
.panel h2 {
font-size: 13px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.08em;
color: var(--leaf);
margin-bottom: 12px;
padding-bottom: 8px;
border-bottom: 1px solid var(--border);
}
.panel-simulation { grid-column: 1; grid-row: 1 / 3; }
.panel-agents { grid-column: 2; grid-row: 1; }
.panel-events { grid-column: 2; grid-row: 2; }
.panel-composition { grid-column: 1; grid-row: 3; }
.panel-query { grid-column: 2; grid-row: 3; }
.panel-controls { grid-column: 1 / 3; grid-row: 4; }
.svg-container {
width: 100%;
aspect-ratio: 3/2;
background: rgba(26, 60, 52, 0.4);
border-radius: 6px;
display: flex;
align-items: center;
justify-content: center;
}
.agent-list {
display: flex;
flex-direction: column;
gap: 6px;
max-height: 320px;
overflow-y: auto;
}
.agent-card {
background: rgba(91,168,150,0.06);
border: 1px solid var(--border);
border-radius: 6px;
padding: 8px 12px;
display: flex;
justify-content: space-between;
align-items: center;
}
.agent-card .agent-id {
font-family: var(--font-data);
font-size: 11px;
color: var(--text-dim);
}
.agent-card .agent-state {
font-size: 11px;
font-weight: 600;
padding: 2px 8px;
border-radius: 3px;
}
.state-optimal { background: rgba(76,175,80,0.2); color: #4CAF50; }
.state-stressed { background: rgba(255,152,0,0.2); color: #FF9800; }
.state-adapting { background: rgba(33,150,243,0.2); color: #2196F3; }
.state-senescent { background: rgba(244,67,54,0.2); color: #F44336; }
.state-signaling { background: rgba(181,134,13,0.2); color: #B5860D; }
.event-log {
font-family: var(--font-data);
font-size: 11px;
line-height: 1.5;
max-height: 280px;
overflow-y: auto;
color: var(--text-dim);
}
.event-entry { padding: 3px 0; border-bottom: 1px solid rgba(255,255,255,0.04); }
.event-entry .event-type { color: var(--leaf); }
.event-entry .event-agent { color: var(--gold); }
.query-history {
min-height: 160px;
max-height: 240px;
overflow-y: auto;
margin-bottom: 12px;
}
.query-input-row {
display: flex;
gap: 8px;
}
.query-input-row input {
flex: 1;
background: rgba(91,168,150,0.08);
border: 1px solid var(--border);
border-radius: 6px;
color: var(--text);
font-family: var(--font-ui);
font-size: 13px;
padding: 8px 12px;
outline: none;
}
.query-input-row input:focus {
border-color: var(--canopy);
background: rgba(91,168,150,0.12);
}
.query-input-row button {
background: var(--canopy);
border: none;
border-radius: 6px;
color: white;
cursor: pointer;
font-size: 13px;
font-weight: 600;
padding: 8px 16px;
}
.env-controls {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 16px;
margin-bottom: 16px;
}
.env-controls label {
display: flex;
flex-direction: column;
gap: 6px;
font-size: 12px;
color: var(--text-dim);
}
.env-controls input[type=range] {
accent-color: var(--canopy);
width: 100%;
}
.sim-buttons { display: flex; gap: 8px; }
.sim-buttons button {
background: transparent;
border: 1px solid var(--border);
border-radius: 6px;
color: var(--text);
cursor: pointer;
font-size: 12px;
padding: 6px 14px;
transition: all 0.15s;
}
.sim-buttons button:hover {
background: rgba(91,168,150,0.12);
border-color: var(--canopy);
}
.status-indicator {
width: 8px;
height: 8px;
border-radius: 50%;
background: var(--optimal);
animation: pulse 2s infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.4; }
}
.legend-optimal { color: var(--optimal); }
.legend-stressed { color: var(--stressed); }
.legend-adapting { color: var(--adapting); }
.legend-senescent { color: var(--senescent); }
.grid-legend {
display: flex;
gap: 16px;
margin-top: 8px;
font-size: 11px;
}
::-webkit-scrollbar { width: 4px; }
::-webkit-scrollbar-track { background: transparent; }
::-webkit-scrollbar-thumb { background: var(--canopy); border-radius: 2px; }
Frontend WebSocket client (lab-client.js):
// lab-client.js — WebSocket connection and real-time UI updates
const LAB_ID = document.querySelector('meta[name="lab-id"]')?.content;
const REGION = document.querySelector('meta[name="region"]')?.content; // injected server-side alongside lab-id
const WS_URL = `wss://ecosynapse-orchestrator-${REGION}.run.app/ws/labs/${LAB_ID}`;
class LabClient {
constructor() {
this.ws = null;
this.tick = 0;
this.agents = new Map();
this.connect();
this.setupControls();
}
connect() {
this.ws = new WebSocket(WS_URL);
this.ws.onopen = () => {
document.getElementById('status-dot').style.background = 'var(--optimal)';
this.ws.send(JSON.stringify({ type: 'subscribe', lab_id: LAB_ID }));
};
this.ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
switch (msg.type) {
case 'tick_update': this.handleTickUpdate(msg); break;
case 'agent_event': this.handleAgentEvent(msg); break;
case 'svg_update': this.handleSVGUpdate(msg); break;
case 'gemini_response': this.handleGeminiResponse(msg); break;
case 'anomaly': this.handleAnomaly(msg); break;
}
};
this.ws.onclose = () => {
document.getElementById('status-dot').style.background = 'var(--senescent)';
setTimeout(() => this.connect(), 3000);
};
}
handleTickUpdate(msg) {
this.tick = msg.tick;
document.getElementById('current-tick').textContent = msg.tick;
document.getElementById('sim-day').textContent = Math.floor(msg.tick / 288);
// Update agent roster
msg.agent_states.forEach(state => {
this.agents.set(state.agent_id, state);
this.updateAgentCard(state);
});
}
handleAgentEvent(msg) {
const log = document.getElementById('event-log-container');
const entry = document.createElement('div');
entry.className = 'event-entry';
entry.innerHTML = `
<span class="event-time">${new Date(msg.timestamp).toISOString().slice(11,19)}</span>
<span class="event-agent"> ${msg.agent_id.slice(0,20)}</span>
<span class="event-type"> ${msg.action_type}</span>
`;
log.prepend(entry);
if (log.children.length > 200) log.lastChild.remove();
}
handleSVGUpdate(msg) {
document.getElementById('agent-grid').innerHTML = msg.svg;
}
updateAgentCard(state) {
const roster = document.getElementById('agent-list-container');
let card = document.getElementById(`agent-${state.agent_id}`);
if (!card) {
card = document.createElement('div');
card.className = 'agent-card';
card.id = `agent-${state.agent_id}`;
roster.appendChild(card);
}
card.innerHTML = `
<div>
<div class="agent-id">${state.agent_id.slice(0,22)}</div>
<div style="font-size:11px;color:var(--text-dim);margin-top:2px">
SI: ${state.stress_index.toFixed(2)}
| BM: ${state.biomass.toFixed(1)}g
</div>
</div>
<span class="agent-state state-${state.current_state}">
${state.current_state}
</span>
`;
}
setupControls() {
['water', 'temp', 'light'].forEach(param => {
const control = document.getElementById(`${param}-control`);
const display = document.getElementById(`${param}-value`);
control.addEventListener('input', (e) => {
display.textContent = e.target.value;
});
control.addEventListener('change', (e) => {
this.ws.send(JSON.stringify({
type: 'env_update',
lab_id: LAB_ID,
parameter: param,
value: parseFloat(e.target.value)
}));
});
});
document.getElementById('query-submit').addEventListener('click', () => {
const input = document.getElementById('query-input');
const query = input.value.trim();
if (!query) return;
this.submitQuery(query);
input.value = '';
});
document.getElementById('query-input').addEventListener('keydown', (e) => {
if (e.key === 'Enter') document.getElementById('query-submit').click();
});
document.getElementById('btn-pause').addEventListener('click', () => {
this.ws.send(JSON.stringify({ type: 'sim_control', action: 'pause', lab_id: LAB_ID }));
});
document.getElementById('btn-resume').addEventListener('click', () => {
this.ws.send(JSON.stringify({ type: 'sim_control', action: 'resume', lab_id: LAB_ID }));
});
document.getElementById('btn-export').addEventListener('click', () => {
fetch(`/api/labs/${LAB_ID}/export/sheets`, { method: 'POST' })
.then(r => r.json())
.then(d => alert(`Exported to ${d.sheet_url}`));
});
document.getElementById('btn-tokenize').addEventListener('click', () => {
fetch(`/api/labs/${LAB_ID}/tokenize`, { method: 'POST' })
.then(r => r.json())
.then(d => {
const tokenSpan = document.querySelector('.lab-token');
tokenSpan.textContent = d.mint.slice(0,12) + '...';
tokenSpan.title = d.mint;
});
});
}
submitQuery(query) {
const history = document.getElementById('query-history');
const userMsg = document.createElement('div');
userMsg.style.cssText = 'margin-bottom:8px;font-size:13px;';
userMsg.innerHTML = `<span style="color:var(--gold)">You:</span> ${query}`;
history.appendChild(userMsg);
this.ws.send(JSON.stringify({
type: 'gemini_query',
lab_id: LAB_ID,
query
}));
}
handleGeminiResponse(msg) {
const history = document.getElementById('query-history');
const botMsg = document.createElement('div');
botMsg.style.cssText = 'margin-bottom:12px;font-size:13px;line-height:1.6;';
botMsg.innerHTML = `
<span style="color:var(--leaf)">EcoSynapse:</span>
<span style="color:var(--text-dim)">${msg.response}</span>
`;
history.appendChild(botMsg);
history.scrollTop = history.scrollHeight;
}
handleAnomaly(msg) {
const card = document.getElementById(`agent-${msg.agent_id}`);
if (card) card.style.boxShadow = '0 0 8px rgba(244,67,54,0.6)';
console.warn('Anomaly detected:', msg);
}
}
window.addEventListener('DOMContentLoaded', () => new LabClient());
4.3 Live Agent Training
The live training loop is the mechanism through which the EcoSynapse Language Model defined in Volume II improves during system operation. It is not batch training that happens offline; it is a continuous process that updates the model's fine-tuning dataset in real time as agents produce new behavioral data.
The training loop operates on a rolling window. Every simulation tick produces events that are written to Snowflake. Every hour, a background process queries Snowflake for the events of the past hour, formats them as training examples for the EcoSynapse LLM, and appends them to the fine-tuning queue. Every twenty-four hours, if the fine-tuning queue contains more than a configurable threshold of new examples, the EcoSynapse LLM is fine-tuned on the accumulated examples using the Ollama platform's incremental fine-tuning API.
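The queue mechanics described above can be sketched as follows; the class and function names are illustrative, not the production pipeline.

```python
class FineTuningQueue:
    """Accumulates hourly windows of events; drained on the 24-hour cycle."""
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.examples = []

    def append_window(self, events):
        """Called hourly with the past hour's events from Snowflake."""
        self.examples.extend(format_example(e) for e in events)

    def ready_for_finetune(self) -> bool:
        """Checked every twenty-four hours against the configured threshold."""
        return len(self.examples) > self.threshold

    def drain(self):
        batch, self.examples = self.examples, []
        return batch

def format_example(event: dict) -> dict:
    # Illustrative prompt/completion shape for a behavioral training example
    return {"prompt": event["context"], "completion": event["behavior"]}
```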
This means that the EcoSynapse LLM becomes progressively more accurate as Labs run. An LLM fine-tuned on one hundred hours of tomato agent behavior in Bangalore conditions will generate better MindScript predictions for tomato agents in Bangalore than an LLM fine-tuned only on the initial BioSyntax compilation log. The model learns from the system's own operation.
The training loop also serves as a quality filter. Before a training example is added to the fine-tuning queue, it is validated against a set of biological consistency checks. A training example in which a plant's transpiration rate increases while its stomatal conductance decreases is biologically inconsistent and is flagged for review rather than added to the queue. This prevents the model from learning from simulation artifacts.
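The transpiration/conductance check can be sketched as a single predicate that flags the artifact named above (field names are illustrative):

```python
def is_biologically_consistent(prev: dict, curr: dict) -> bool:
    """Reject examples where transpiration rises while conductance falls."""
    d_transpiration = curr["transpiration"] - prev["transpiration"]
    d_conductance = curr["stomatal_conductance"] - prev["stomatal_conductance"]
    return not (d_transpiration > 0 and d_conductance < 0)
```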
4.4 Agent Composition in Labs
The composition system allows users to combine multiple agents into a compound agent that presents a unified interface while internally distributing work. This is the feature that makes EcoSynapse more than a simulation tool; it is what makes it a platform for building custom ecological intelligence systems.
A composed agent has one interface. When a user interacts with a composed agent through the Gemini query panel or through the Gmail alert channel, they address a single entity. The composition engine decides internally which member agent handles the request based on the MindScript COMPOSE declaration.
Priority assignment determines which agent's output takes precedence when multiple agents have responses to the same query or event. Priority can be static (always defer to agent A on this type of query) or dynamic (defer to the agent with the highest stress index, or the agent that has been active longest without a state transition). Dynamic priority is computed by the composition engine at each evaluation step and can change during a simulation run.
The handoff mechanism defines what happens when priority shifts from one agent to another. Graceful handoff means the outgoing priority agent completes its current processing cycle before yielding. Immediate handoff means the incoming priority agent takes over at the next tick. Emergency handoff means the priority shifts immediately and the outgoing agent's current processing is suspended.
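Dynamic priority and the three handoff modes can be sketched as follows. The enum and function names are illustrative; the real composition engine evaluates compiled COMPOSE declarations.

```python
from enum import Enum

class Handoff(Enum):
    GRACEFUL = "graceful"    # outgoing agent finishes its processing cycle first
    IMMEDIATE = "immediate"  # incoming agent takes over at the next tick
    EMERGENCY = "emergency"  # takeover now; outgoing agent's work is suspended

def dynamic_priority(agents: dict) -> str:
    """One dynamic rule: defer to the member with the highest stress index."""
    return max(agents, key=lambda aid: agents[aid]["stress_index"])

# Hypothetical members of one composed agent
agents = {
    "tomato-0001": {"stress_index": 0.41},
    "basil-0002": {"stress_index": 0.78},
    "mint-0003": {"stress_index": 0.12},
}
```

Recomputing `dynamic_priority` at each evaluation step is what lets the priority holder change mid-run as stress indices shift.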
Part Five: Architecture Gaps Identified and Resolved
This section addresses the components that were missing or underspecified in the original voice description and that are necessary for the system to be complete.
5.1 Agent Discovery and Registry
The original description assumes that agents can find each other, but does not specify how. In the full architecture, agent discovery is handled through the A2A registry maintained by the OrchestratorService. Every agent registers at initialization with its agent_id, species classification, zone, current state, A2A endpoint URL, and the list of action types it can receive. The registry is queryable by any agent through the Backboard API: an agent that needs to find all agents of a specific species in a specific zone within a specific radius submits a registry query and receives a list of matching agent endpoints.
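A minimal in-memory sketch of such a registry query follows; the deployed registry lives behind the Backboard API, and the radius filter assumes each record carries grid coordinates.

```python
import math

class A2ARegistrySketch:
    def __init__(self):
        self.records = {}

    def register(self, agent_id, species, zone, endpoint, pos):
        """Called at agent initialization with the fields listed above."""
        self.records[agent_id] = {
            "species": species, "zone": zone,
            "endpoint": endpoint, "pos": pos,
        }

    def query(self, species, zone, origin, radius):
        """Endpoints of matching agents within `radius` of `origin`."""
        return [
            r["endpoint"] for r in self.records.values()
            if r["species"] == species and r["zone"] == zone
            and math.dist(origin, r["pos"]) <= radius
        ]
```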
5.2 Agent Lifecycle Management
Agents do not run indefinitely. They have lifecycle states: initializing, active, stressed, adapting, senescent, and terminated. The transition to terminated happens when a plant agent's stress_index remains above the senescence threshold for longer than the species' tolerance duration. When an agent terminates, it flushes its complete event queue to Snowflake, submits a final lifecycle event, deregisters from the A2A registry, and signals the Lab simulation service that it has terminated. The Lab simulation service may respond by spawning a replacement agent (modeling plant regrowth) or by adjusting the Lab's composition (modeling permanent plant loss).
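The termination condition can be sketched as a predicate over the agent's recent stress history (the tick bookkeeping here is illustrative):

```python
def should_terminate(history, threshold, tolerance_ticks) -> bool:
    """history: most-recent-last stress_index readings, one per tick.

    Terminate when stress has stayed above the senescence threshold
    for longer than the species' tolerance duration.
    """
    run = 0  # length of the current above-threshold streak
    for stress_index in history:
        run = run + 1 if stress_index > threshold else 0
    return run > tolerance_ticks
```

Note that a single recovery tick resets the streak, so transient stress spikes do not terminate the agent.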
5.3 Data Sovereignty and Export
Users who create Labs own the behavioral data produced by those Labs. This is established by the Solana token described in Volume II. But ownership without access is meaningless. The full architecture specifies three export pathways. The Sheets export pathway, triggered through the frontend controls panel, writes the complete event history of all agents in the Lab to a Google Sheet in the operator's Drive. The Snowflake export pathway provides a direct query interface for technical users who want to run their own analyses against the behavioral data. The Solana export pathway allows users to publish specific event sequences from their Lab as tokenized training data contributions to the EcoSynapse Knowledge Commons, making their Lab's behavioral history available to the EcoSynapse LLM fine-tuning pipeline while retaining attribution.
5.4 Security Boundaries
The ACP protocol's cryptographic signing provides integrity guarantees for individual packets, but the full multi-agent deployment requires additional security boundaries. Each Cloud Run service runs under a dedicated Google Cloud service account with the minimum permissions required for its function. The simulation service has Snowflake write access and A2A registry access but not Gmail or Sheets access. The GmailAlertService has Gmail send access but not Snowflake write access and not simulation control access. The OrchestratorService has read access to all services but write access only through explicitly defined inter-agent message channels. Auth0 role claims are validated at every Backboard API endpoint before any request is processed.
5.5 The MindsEye Playground
The minds-eye-playground repository serves a specific role in the overall architecture that has not been described in detail: it is the sandboxed development environment where new BioSyntax and MindScript behaviors are tested before being integrated into production Labs. When a contributor writes a new MindScript COMPOSE declaration or a new BioSyntax WHEN block, they test it in the playground first. The playground runs a lightweight simulation with a single agent in a minimal environment, applies the new behavior definition, and reports whether the agent's state transitions are biologically consistent. Only behaviors that pass playground validation are eligible for deployment to production Labs.
Part Six: Submission — Build Multi-Agent Systems with ADK
What I Built
EcoSynapse Volume III is a distributed multi-agent system built on Google's Agent Development Kit where ten botanically modeled plant species run as autonomous agents in user-created garden simulation Labs. Each plant agent operates as an independent Cloud Run microservice, governed by real physiological equations sourced from peer-reviewed botanical literature, communicating with neighboring agents through Google's Agent-to-Agent (A2A) protocol, and coordinated by a Gemini-based ADK orchestrator that manages the full Google application layer.
The system breaks what would otherwise be a single monolithic simulation prompt into a coordinated team of specialized agents. The EcoSynapseOrchestrator delegates to six sub-agents: a Snowflake query agent for behavioral data retrieval, a simulation management agent for Lab control, a Gmail alert agent for threshold notifications, a Google Sheets agent for data storage and contributor ingestion, a Google Docs agent for narrative history generation, and an AI Studio interface agent for dynamic frontend generation. Five stateless equation microservices — transpiration, photosynthesis, stomatal conductance, nutrient uptake, and biomass accumulation — handle the computationally intensive physiological calculations in parallel, scaling independently of the main simulation services.
The Agents and Their Roles
The EcoSynapseOrchestrator is the root ADK agent. It runs on Cloud Run with Gemini 2.0 Flash as its model. It receives the event stream from all running Labs, routes events to the appropriate application agents based on event type and severity, and handles all natural language queries from operators. It does not perform any computation itself; it delegates everything to its sub-agents.
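The routing half of that delegation can be sketched as a lookup from event type and severity to a sub-agent. The sub-agent names follow the roles described in this section; the event-type strings and the "critical always alerts" rule are assumptions made for the sketch, not the orchestrator's actual policy.

```python
# Illustrative routing table for the orchestrator. Event-type strings
# are assumptions; sub-agent names follow the roles described above.
ROUTES = {
    "threshold_breach": "GmailAlertAgent",
    "tick_complete": "GoogleSheetsAgent",
    "narrative_update": "GoogleDocsAgent",
    "frontend_request": "AIStudioInterfaceAgent",
    "behavior_query": "SnowflakeQueryAgent",
}

def route_event(event: dict) -> str:
    """Pick the sub-agent that should handle an incoming Lab event.

    Critical events go straight to the alerting agent regardless of type;
    anything unrecognized falls back to the Lab's simulation agent.
    """
    if event.get("severity") == "critical":
        return "GmailAlertAgent"
    return ROUTES.get(event["type"], "LabSimulationAgent")
```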
The LabSimulationAgent manages a running Lab. It runs the tick engine, coordinates the parallel physiological computations, processes inter-agent signal exchanges through the A2A protocol, and writes all behavioral events to Snowflake. One LabSimulationAgent instance runs per active Lab.
The five equation agents are stateless computation services. They receive structured calculation requests from LabSimulationAgents, apply the appropriate physiological equation with the species-specific constants from Snowflake, and return the result. They scale to zero when no Labs are running and to fifty instances under load.
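As a concrete example of what one such stateless computation looks like, here is the FAO-56 reference evapotranspiration form of the Penman-Monteith equation (Allen et al., 1998, reference [8]), written as a pure function. The function name and argument shape are illustrative; the production agent loads species-specific constants from Snowflake rather than taking them inline.

```python
# Sketch of a stateless transpiration computation in the style of the
# FAO-56 reference equation (Allen et al., 1998). Names are illustrative.

def penman_monteith_et0(rn, g, t, u2, es, ea, delta, gamma):
    """FAO-56 reference evapotranspiration, mm day^-1.

    rn, g  : net radiation / soil heat flux (MJ m^-2 day^-1)
    t      : mean air temperature (deg C)
    u2     : wind speed at 2 m height (m s^-1)
    es, ea : saturation / actual vapour pressure (kPa)
    delta  : slope of the vapour pressure curve (kPa per deg C)
    gamma  : psychrometric constant (kPa per deg C)
    """
    numerator = (0.408 * delta * (rn - g)
                 + gamma * (900.0 / (t + 273.0)) * u2 * (es - ea))
    denominator = delta + gamma * (1.0 + 0.34 * u2)
    return numerator / denominator
```

Because the function is pure, an equation agent wrapping it holds no state between requests, which is what makes scale-to-zero and wide horizontal fan-out safe.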
The GmailAlertAgent composes and delivers threshold notifications. It writes as a botanist to a fellow researcher: factual, specific, with full citation of the Snowflake query that surfaced the event.
The GoogleSheetsAgent maintains real-time simulation logs and processes contributor data submissions. It is the zero-code contribution pathway for researchers with plant data but no programming background.
The GoogleDocsAgent writes the living history of each Lab. It generates a continuously updated narrative record of the garden's behavior in botanical language.
The AIStudioInterfaceAgent generates the custom web frontend for each Lab dynamically from the Lab's species composition and configuration.
Key Technical Decisions
The decision to treat the ACP protocol defined in Volume I as a valid A2A message format made the entire architecture coherent. The packet fields designed in Volume I for agent accountability happened to satisfy A2A's requirements for sender identity, message type, authorization context, content, and causal chain. No adaptation was required.
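That field correspondence can be made concrete with a small sketch. The ACP field roles follow the list above (sender identity, message type, authorization context, content, causal chain); the A2A key names shown in the output dict are assumptions for illustration, not the official wire format.

```python
# Illustrative mapping from a Volume I ACP packet to an A2A-style
# message dict. The A2A key names here are assumptions, not the spec.
from dataclasses import dataclass

@dataclass
class ACPPacket:
    sender_id: str     # agent identity
    message_type: str  # e.g. "signal.water_stress" (hypothetical)
    auth_context: str  # authorization scope for the action
    content: dict      # payload
    causal_chain: list # ids of the events that caused this one

def to_a2a_message(packet: ACPPacket) -> dict:
    """Re-key an ACP packet as an A2A message; values pass through untouched."""
    return {
        "from": packet.sender_id,
        "type": packet.message_type,
        "auth": packet.auth_context,
        "body": packet.content,
        "trace": packet.causal_chain,
    }
```

The point of the sketch is that the translation is a pure re-keying: no field is synthesized, dropped, or transformed, which is what "no adaptation was required" means in practice.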
The decision to make equation agents stateless and horizontally scalable was driven by the observation that physiological calculations are the computational bottleneck in large Labs. By isolating each equation as a separate Cloud Run service with concurrency set to eighty, the system can compute photosynthesis for four hundred plant agents simultaneously without any single service becoming a bottleneck.
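In Knative manifest terms (the format Cloud Run accepts, per reference [17]), the scaling posture described above reduces to three settings. The service name and image path below are placeholders; the three numeric values are the ones stated in this section.

```yaml
# Sketch of a Knative service manifest for one equation agent;
# service name and image path are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: photosynthesis-agent
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # scale to zero when no Labs run
        autoscaling.knative.dev/maxScale: "50"  # instance cap under load
    spec:
      containerConcurrency: 80                  # concurrent requests per instance
      containers:
        - image: gcr.io/PROJECT/photosynthesis-agent
```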
The decision to build MindScript as the compilation layer between biological descriptions and ADK agent definitions means that contributors who understand botany but not Python can write agent behaviors in MindScript and have them deployed to Cloud Run without touching the underlying implementation. The EcoSynapse LLM, fine-tuned on the system's own behavioral data, aims to make this pathway still more accessible by generating MindScript from plain-English descriptions.
What Was Challenging
The most challenging aspect of the architecture is the tension between simulation fidelity and real-time responsiveness. Physiological equations for plants are not computationally cheap. The Penman-Monteith transpiration calculation involves nine parameters and three nested computations. The FvCB photosynthesis model requires iterative numerical solution. Running these for fifty agents at every simulation tick creates a computational load that requires careful parallelization and equation agent scaling to keep the tick interval below the five-minute target. The solution — isolating equations as separate scalable Cloud Run services and running tick computations as asyncio task groups — addresses the load problem but introduces network latency between the simulation service and the equation services. Managing that latency without degrading simulation fidelity is an ongoing optimization problem.
References
[1] Google AI. (2025). Agent Development Kit (ADK) Documentation. google.github.io/adk-docs. Primary reference for ADK agent architecture, orchestration patterns, and Cloud Run deployment.
[2] Google AI. (2025). Agent-to-Agent (A2A) Protocol Specification. google.github.io/A2A. Reference for the A2A message format and agent discovery registry design.
[3] Google Cloud. (2025). Cloud Run Documentation: Service Configuration and Autoscaling. cloud.google.com/run/docs. Reference for Knative service manifest format and scaling configuration.
[4] Google Workspace. (2025). Gmail API and MCP Connector Documentation. developers.google.com/gmail. Reference for Gmail agent tool integration.
[5] Google Workspace. (2025). Google Sheets API Documentation. developers.google.com/sheets. Reference for Sheets agent data ingestion and export pipeline.
[6] Google AI Studio. (2025). Imagen API Documentation. ai.google.dev/docs/imagen. Reference for AI Studio agent image generation integration.
[7] MindsEye Repository Ecosystem. (2025). PEACEBINFLOW GitHub Organization. github.com/PEACEBINFLOW. Primary source for awareness primitives, SQL bridges, Gemini orchestration, and workspace automation referenced throughout this volume.
[8] Allen, R. G., Pereira, L. S., Raes, D., and Smith, M. (1998). Crop Evapotranspiration: Guidelines for Computing Crop Water Requirements. FAO Irrigation and Drainage Paper 56. Reference for Penman-Monteith transpiration equation used in the TranspirationAgent.
[9] Farquhar, G. D., von Caemmerer, S., and Berry, J. A. (1980). A Biochemical Model of Photosynthetic CO₂ Assimilation in Leaves of C₃ Species. Planta, 149, 78–90. Reference for FvCB model used in the PhotosynthesisAgent.
[10] Ball, J. T., Woodrow, I. E., and Berry, J. A. (1987). A Model Predicting Stomatal Conductance. Progress in Photosynthesis Research. Springer. Reference for stomatal conductance equation used in the StomatalAgent.
[11] Snowflake Inc. (2025). Snowflake Documentation: Streaming and Real-Time Data Pipelines. docs.snowflake.com. Reference for the Snowflake integration used by the LabSimulationAgent for batch event writes.
[12] Auth0 by Okta. (2025). Machine-to-Machine Applications. auth0.com/docs. Reference for agent identity and Cloud Run service account integration.
[13] Solana Foundation. (2025). Solana Program Library: Token Metadata. spl.solana.com. Reference for Lab tokenization and Knowledge Commons provenance design.
[14] Vaswani, A., et al. (2017). Attention Is All You Need. NeurIPS 30. arxiv.org/abs/1706.03762. Reference for the transformer architecture underlying the EcoSynapse LLM.
[15] Lamport, L. (1978). Time, Clocks, and the Ordering of Events in a Distributed System. Communications of the ACM, 21(7). Reference for monotonic timestamp enforcement across A2A message delivery.
[16] Python Software Foundation. (2025). asyncio Documentation: Task Groups and Concurrent Execution. docs.python.org/3/library/asyncio. Reference for the parallel tick execution pattern in LabSimulationService.
[17] Knative Authors. (2025). Knative Serving API Specification. knative.dev/docs/serving. Reference for the Cloud Run service manifest format and container lifecycle management.
[18] GBIF. (2025). GBIF Occurrence Search API. api.gbif.org/v1. Continued data source for plant occurrence records used in zone calibration across all ten species.
[19] USDA PLANTS Database. (2025). plants.usda.gov. Continued data source for species characteristics and physiological parameters referenced in all three volumes.
EcoSynapse Volume III closes the architectural loop opened by Volume I and filled by Volume II. The agents are defined. The mathematics are derived. The system runs. What remains is use.
github.com/PeacebinfLow/ecosynapse — SAGEWORKS AI — Maun, Botswana — 2026