## Examples
Compare ordinary agent messaging with bio-inspired signaling. These examples are deliberately concrete: they show how the same problem looks when communication is treated as generic traffic versus as a typed signaling system with scopes, feedback, and thresholds.
### Example: Local research swarm
A planner should coordinate only the nearby agents that matter, not flood the entire swarm with undifferentiated text.
**Standard approach: generic orchestration**

```python
# Generic message passing
planner_message = {
    "goal": "research the regulation",
    "context": docs,
}
for agent in all_agents:
    agent.run(planner_message)
# routing is broad, semantic meaning is implicit
# no typed receptor match
# limited causal trace of why a route worked
```

**Bio-inspired approach: bca2p signal-first design**

```python
from bca2p.core import SignalEnvelope, SignalMode

signal = SignalEnvelope(
    signal_id="research-1",
    mode=SignalMode.PARACRINE,
    sender="planner",
    recipient_scope="policy-neighborhood",
    receptor="evidence.request",
    payload={"topic": "new regulation", "depth": "high"},
)
# only matching receptors in the target scope activate
# downstream feedback can reward useful routes
```
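On the receiving side, an agent declares the receptor it answers to. The sketch below is illustrative: the `summarize_evidence` handler and its body are assumptions, but the `AgentNode(handler=..., receptor=...)` constructor mirrors the model-agnostic example later in this section.

```python
from bca2p.core import AgentNode

# Hypothetical handler: any node in the "policy-neighborhood" scope whose
# receptor matches "evidence.request" activates on the signal above.
def summarize_evidence(signal):
    topic = signal.payload["topic"]
    return f"collected findings on {topic}"

evidence_node = AgentNode(handler=summarize_evidence, receptor="evidence.request")
```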
### Example: Escalation by quorum
Escalation should happen because evidence crosses a threshold, not because one agent wrote a dramatic sentence.
**Standard approach: prompt-heavy escalation logic**

```python
if support_agent.confidence < 0.55:
    escalate_to_human()
# single-agent confidence often overfits
# disagreement between specialists is lost
```

**Bio-inspired approach: thresholded coordination**

```python
from bca2p.core import QuorumRule

quorum = QuorumRule(
    rule_id="support-escalation",
    target_scope="case-resolution",
    threshold=0.67,
    min_participants=3,
    action="escalate",
)
# escalate when enough participating agents disagree
# or when low-confidence signals accumulate together
```
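To make the threshold arithmetic concrete, here is a minimal plain-Python sketch of the decision the rule encodes. The `votes` mapping and `should_escalate` helper are illustrative, not part of the bca2p API.

```python
# Each participating agent reports whether it disagrees with the
# current resolution (True) or accepts it (False).
votes = {"triage": True, "billing": True, "legal": True, "qa": False}

def should_escalate(votes, threshold=0.67, min_participants=3):
    if len(votes) < min_participants:
        return False  # too few participants for a meaningful quorum
    disagreement = sum(votes.values()) / len(votes)
    return disagreement >= threshold

print(should_escalate(votes))  # True: 3 of 4 agents disagree (0.75 >= 0.67)
```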
### Example: Self-healing communication
When a route becomes noisy, most systems keep retrying the same path until latency and cost climb.
**Standard approach: retry loop**

```python
for attempt in range(3):
    result = worker.run(task)
    if result.ok:
        break
# retries are local decisions
# the wider system never learns the channel is unhealthy
```

**Bio-inspired approach: homeostatic regulation**

```python
from bca2p.core import HomeostasisPolicy, CausalFeedback

policy = HomeostasisPolicy(
    max_inflight_signals=256,
    noisy_sender_threshold=20,
    cooldown_seconds=2.0,
)
feedback = CausalFeedback(
    target_signal_id="route-44",
    feedback_type="damping",
    outcome="high-noise route suppressed",
)
# the system attenuates unstable routes
# later runs can avoid them
```
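As a rough mental model of what such a policy does, the toy sketch below counts failures per route and suppresses a route for a cooldown once it crosses the noise threshold. The `RouteHealth` class is illustrative, not the library's implementation.

```python
import time

# Illustrative only: a toy damping mechanism, not bca2p internals.
class RouteHealth:
    def __init__(self, noisy_threshold=20, cooldown_seconds=2.0):
        self.noisy_threshold = noisy_threshold
        self.cooldown_seconds = cooldown_seconds
        self.failures = {}     # route_id -> consecutive failure count
        self.muted_until = {}  # route_id -> monotonic time when suppression lifts

    def record_failure(self, route_id):
        self.failures[route_id] = self.failures.get(route_id, 0) + 1
        if self.failures[route_id] >= self.noisy_threshold:
            # Attenuate the route instead of retrying it blindly
            self.muted_until[route_id] = time.monotonic() + self.cooldown_seconds
            self.failures[route_id] = 0

    def is_healthy(self, route_id):
        return time.monotonic() >= self.muted_until.get(route_id, 0.0)
```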
### Example: Model-agnostic LLM integration
Hardcoding specific LLM providers inside agent logic makes it difficult to switch models, evaluate different providers, or route specific tasks to specialized models.
**Standard approach: hardcoded provider logic**

```python
# Tied to a specific API client
def review_agent(state):
    import openai

    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=state["messages"],
    )
    return {"decision": response.choices[0].message.content}
# Swapping to Anthropic or Gemini requires
# completely rewriting the agent's internal logic
```

**Bio-inspired approach: provider-agnostic node injection**

```python
from bca2p.core import AgentNode

# The agent focuses on signaling, not model specifics
def create_reviewer_node(llm):
    def reviewer(signal):
        # The node invokes whichever model is injected
        response = llm.invoke(signal.payload["messages"])
        return response.content

    return AgentNode(handler=reviewer, receptor="review.requested")

# Seamlessly initialize different providers
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI

# The exact same agent logic can run on any provider
openai_node = create_reviewer_node(ChatOpenAI(model="gpt-4o"))
claude_node = create_reviewer_node(ChatAnthropic(model="claude-3-5-sonnet-latest"))
gemini_node = create_reviewer_node(ChatGoogleGenerativeAI(model="gemini-2.0-flash"))
```
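To exercise any of these nodes, a caller emits a signal whose receptor matches `review.requested`. The envelope below reuses the `SignalEnvelope` fields from the first example; the specific values are hypothetical.

```python
from bca2p.core import SignalEnvelope, SignalMode

review_signal = SignalEnvelope(
    signal_id="review-7",
    mode=SignalMode.PARACRINE,
    sender="planner",
    recipient_scope="code-review",
    receptor="review.requested",
    payload={"messages": [{"role": "user", "content": "Review this diff."}]},
)
# A bca2p runtime would deliver this envelope to whichever reviewer node is
# registered in scope (openai_node, claude_node, or gemini_node), without the
# caller knowing which model backs it.
```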