Framework comparison

Where orchestration ends and communication semantics begin.

Existing frameworks are useful and often complementary. The gap is that most of them do not natively model agent communication as typed, multi-scale signaling with explicit causal feedback and homeostatic control.

| Framework / SDK | Primary orientation | Typed signaling | Scoped local/global modes | Complexes and quorum | Causal feedback | Native replay focus | Bio-inspired semantics |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LangChain | Agent APIs and tool orchestration | Partial | No | No | Partial | Partial | No |
| LangGraph | Stateful graph orchestration | Partial | Partial | Partial | Partial | Yes | No |
| CrewAI | Role-based collaboration | Partial | No | No | No | Partial | No |
| AutoGen | Conversational agent exchange | Partial | No | No | Partial | Partial | No |
| bca2p | Bio-inspired communication protocol and runtime | Yes | Yes | Yes | Yes | Yes | Yes |

This comparison reflects each framework's native emphasis and first-class abstractions. It is not a claim that extensions are impossible elsewhere.

What existing stacks do well

- They orchestrate agents, tools, and workflows effectively.
- They give developers practical execution primitives quickly.
- They make it easier to connect models and tools into a usable system.

Where coordination still breaks

- Message semantics are usually implicit or prompt-defined.
- Routing decisions are hard to audit and credit correctly.
- Local, global, and contact-dependent communication are not first-class modes.
- System-wide stabilization and quorum logic are often ad hoc.

Side-by-side examples

A framework can orchestrate steps. `bca2p` changes what those steps mean.

Local research swarm

A planner should coordinate only the nearby agents that matter, not flood the entire swarm with undifferentiated text.

Standard orchestration

```python
# Generic message passing: every agent receives the same untyped dict
planner_message = {
    "goal": "research the regulation",
    "context": docs,
}

for agent in all_agents:
    agent.run(planner_message)

# routing is broad, semantic meaning is implicit
# no typed receptor match
# limited causal trace of why a route worked
```
bca2p signal-first design

```python
from bca2p.core import ReceptorSpec, SignalEnvelope, SignalMode

signal = SignalEnvelope(
    signal_id="research-1",
    mode=SignalMode.PARACRINE,
    sender="planner",
    recipient_scope="policy-neighborhood",
    receptor="evidence.request",
    payload={"topic": "new regulation", "depth": "high"},
)

# only matching receptors in the target scope activate
# downstream feedback can reward useful routes
```
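As a rough mental model of scope-and-receptor delivery, the matching step can be sketched in plain Python. This is an illustrative sketch only: `Agent` and `deliver` are invented names for this example, not part of the `bca2p` API.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    scope: str            # neighborhood the agent belongs to
    receptors: frozenset  # signal types this agent can bind

def deliver(receptor: str, recipient_scope: str, agents: list) -> list:
    """Route a signal only to agents whose scope AND receptor both match."""
    return [
        a for a in agents
        if a.scope == recipient_scope and receptor in a.receptors
    ]

agents = [
    Agent("researcher-1", "policy-neighborhood", frozenset({"evidence.request"})),
    Agent("researcher-2", "policy-neighborhood", frozenset({"summary.request"})),
    Agent("coder-1", "build-neighborhood", frozenset({"evidence.request"})),
]

# Only researcher-1 is in the target scope and expresses a matching receptor.
matched = deliver("evidence.request", "policy-neighborhood", agents)
```

The key design point is that both checks are structural: activation is decided by declared receptor types and scope membership, not by each agent parsing free text.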

Escalation by quorum

Escalation should happen because evidence crosses a threshold, not because one agent wrote a dramatic sentence.

Prompt-heavy escalation logic

```python
if support_agent.confidence < 0.55:
    escalate_to_human()

# single-agent confidence often overfits
# disagreement between specialists is lost
```
Thresholded coordination

```python
from bca2p.core import QuorumRule

quorum = QuorumRule(
    rule_id="support-escalation",
    target_scope="case-resolution",
    threshold=0.67,
    min_participants=3,
    action="escalate",
)

# escalate when enough participating agents disagree
# or when low-confidence signals accumulate together
```
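The semantics behind a rule like this can be approximated in a few lines: the fraction of participating agents favoring the action must meet the threshold, and quorums smaller than `min_participants` never fire. This is an illustrative sketch; `QuorumDecision` and `should_act` are invented names, not `bca2p` APIs.

```python
from dataclasses import dataclass

@dataclass
class QuorumDecision:
    threshold: float       # fraction of participants that must agree
    min_participants: int  # quorums smaller than this are ignored

    def should_act(self, votes: list) -> bool:
        """votes: booleans, True = this agent favors the action."""
        if len(votes) < self.min_participants:
            return False  # too few participants to be meaningful
        return sum(votes) / len(votes) >= self.threshold

rule = QuorumDecision(threshold=0.67, min_participants=3)

# Two of three agents in favor (0.667) falls just below the 0.67 threshold,
# so no single dramatic assessment can trigger escalation on its own.
```

Compared with the single-agent confidence check above, the decision depends on how many independent agents agree, which is what makes it robust to one miscalibrated participant.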