Where orchestration ends and communication semantics begin.
Existing frameworks are useful and often complementary. The gap is that most of them do not natively model agent communication as typed, multi-scale signaling with explicit causal feedback and homeostatic control.
| Framework / SDK | Primary orientation | Typed signaling | Scoped local/global modes | Complexes and quorum | Causal feedback | Native replay focus | Bio-inspired semantics |
|---|---|---|---|---|---|---|---|
| LangChain | Agent APIs and tool orchestration | Partial | No | No | Partial | Partial | No |
| LangGraph | Stateful graph orchestration | Partial | Partial | Partial | Partial | Yes | No |
| CrewAI | Role-based collaboration | Partial | No | No | No | Partial | No |
| AutoGen | Conversational agent exchange | Partial | No | No | Partial | Partial | No |
| bca2p | Bio-inspired communication protocol and runtime | Yes | Yes | Yes | Yes | Yes | Yes |
This comparison reflects each framework's native emphasis and first-class abstractions. It is not a claim that extensions are impossible elsewhere.
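To make the "typed signaling" and "scoped local/global modes" columns concrete, here is a minimal illustrative model in plain Python. This is a sketch of the idea, not the `bca2p` API: the `Signal`, `Agent`, and `route` names are invented for illustration, and the `AUTOCRINE`/`ENDOCRINE` modes are assumed bio-inspired counterparts to the paracrine mode used in the examples below.

```python
from dataclasses import dataclass
from enum import Enum, auto

class SignalMode(Enum):
    AUTOCRINE = auto()   # self-directed (assumed mode, for illustration)
    PARACRINE = auto()   # local: only a nearby scope is addressed
    ENDOCRINE = auto()   # global broadcast (assumed mode, for illustration)

@dataclass
class Signal:
    mode: SignalMode
    receptor: str        # typed receptor the signal binds to
    scope: str           # target neighborhood, used by local modes
    payload: dict

@dataclass
class Agent:
    name: str
    scope: str
    receptors: set       # receptor types this agent expresses

def route(signal: Signal, agents: list) -> list:
    """Deliver only to agents whose receptor type and scope match."""
    matched = []
    for agent in agents:
        if signal.receptor not in agent.receptors:
            continue  # no typed receptor match: the signal is ignored
        if signal.mode is SignalMode.PARACRINE and agent.scope != signal.scope:
            continue  # local mode: only the addressed neighborhood activates
        matched.append(agent.name)
    return matched

agents = [
    Agent("summarizer", "policy-neighborhood", {"evidence.request"}),
    Agent("pricing-bot", "billing", {"invoice.review"}),
]
sig = Signal(SignalMode.PARACRINE, "evidence.request", "policy-neighborhood", {})
print(route(sig, agents))  # → ['summarizer']
```

The point of the sketch is that delivery is decided by receptor type and scope, not by who happens to be in a broadcast list; the real protocol adds causal feedback on top of this matching step.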
What existing stacks do well
- They orchestrate agents, tools, and workflows effectively.
- They give developers practical execution primitives quickly.
- They make it easier to connect models and tools into a usable system.
Where coordination still breaks
- Message semantics are usually implicit or prompt-defined.
- Routing decisions are hard to audit and to credit correctly.
- Local, global, and contact-dependent communication are not first-class modes.
- System-wide stabilization and quorum logic are often ad hoc.
A framework can orchestrate steps. `bca2p` changes what those steps mean.
Local research swarm
A planner should coordinate only the nearby agents that matter, not flood the entire swarm with undifferentiated text.
```python
# Generic message passing
planner_message = {
    "goal": "research the regulation",
    "context": docs,
}
for agent in all_agents:
    agent.run(planner_message)
# routing is broad, semantic meaning is implicit
# no typed receptor match
# limited causal trace of why a route worked
```

```python
from bca2p.core import ReceptorSpec, SignalEnvelope, SignalMode

signal = SignalEnvelope(
    signal_id="research-1",
    mode=SignalMode.PARACRINE,
    sender="planner",
    recipient_scope="policy-neighborhood",
    receptor="evidence.request",
    payload={"topic": "new regulation", "depth": "high"},
)
# only matching receptors in the target scope activate
# downstream feedback can reward useful routes
```

Escalation by quorum
Escalation should happen because evidence crosses a threshold, not because one agent wrote a dramatic sentence.
```python
if support_agent.confidence < 0.55:
    escalate_to_human()
# single-agent confidence often overfits
# disagreement between specialists is lost
```

```python
from bca2p.core import QuorumRule

quorum = QuorumRule(
    rule_id="support-escalation",
    target_scope="case-resolution",
    threshold=0.67,
    min_participants=3,
    action="escalate",
)
# escalate when enough participating agents disagree
# or when low-confidence signals accumulate together
```
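As a rough intuition for when such a rule fires, the following plain-Python sketch evaluates a set of escalation votes against the same threshold and minimum-participant values used above. The `quorum_reached` helper is hypothetical, not part of `bca2p`; it only illustrates the threshold logic.

```python
def quorum_reached(votes, threshold=0.67, min_participants=3):
    """Hypothetical quorum check: escalate when enough agents agree.

    votes: list of booleans, True = this agent favors escalation.
    """
    if len(votes) < min_participants:
        return False  # too few participants: a lone agent cannot escalate
    support = sum(votes) / len(votes)
    return support >= threshold

# three of four specialists favor escalation: 0.75 >= 0.67
print(quorum_reached([True, True, True, False]))  # → True
# one dramatic agent alone is below min_participants
print(quorum_reached([True]))  # → False
```

The design point is that escalation is a property of accumulated signals across a scope, not of any single agent's self-reported confidence.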