Overview

Chat triggers turn a user’s natural-language message into a structured action your product can run. The detection pipeline is designed to keep chatbot latency low and LLM cost bounded, while leaving the final go/no-go decision to the user.

End-to-end flow

┌──────────────────────────────────────────────────────────────────────┐
│ User                                                                 │
│   │                                                                  │
│   │  types message                                                   │
│   ▼                                                                  │
│ Widget iframe ── POST /public/widget/chat ──► Core API               │
│                                                  │                   │
│                                                  ▼                   │
│                                       1. Embed message               │
│                                       2. Vector search (Atlas)       │
│                                          → top-K trigger candidates  │
│                                       3. LLM extracts fields         │
│                                          → typed payload + score     │
│                                       4. Bot reply + CTA proposed    │
│                                                                      │
│ Widget renders bot message + CTA button                              │
│   │                                                                  │
│   │  user clicks CTA                                                 │
│   ▼                                                                  │
│ POST /public/widget/triggers/confirm                                 │
│                                                  │                   │
│                                                  ▼                   │
│                                       5. Fan-out:                    │
│                                          a. Enqueue SQS webhook job  │
│                                          b. publishWsEvent(...)      │
│                                                                      │
│ SDK receives "hacktionbase:trigger" ─► onTrigger handler runs        │
│ Tenant backend receives signed webhook ─► business logic runs        │
└──────────────────────────────────────────────────────────────────────┘

Detection — two-stage classifier

A naive approach would send every chatbot turn to an LLM with the full trigger catalog. That’s expensive and slow. Hacktionbase instead splits detection into two stages:
  1. Vector pre-filter — every active trigger’s intent description is embedded and stored in MongoDB Atlas. The incoming message is embedded once and matched against the tenant’s trigger embeddings via vector search. Only the top candidates move on. This step is cheap and bounded by the 50-active-triggers-per-tenant cap.
  2. LLM extraction — the shortlisted triggers and the user message are sent to the LLM with a strict JSON schema generated from each trigger’s declared fields. The model returns the extracted payload along with a confidence score. The strict schema means the response is always either a valid payload or null per field — never free-form text we’d have to parse. Both stages are sketched after this list.
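
For concreteness, here is a condensed TypeScript sketch of both stages. The index name, the 50-trigger cap, and the top-K flow come from this page; the trigger_embeddings collection, the field names, the model choice, and the OpenAI-style structured-output call are assumptions for illustration only.

  import { Db, Document } from "mongodb";
  import OpenAI from "openai";

  const openai = new OpenAI();

  // Stage 1: vector pre-filter. Match the message embedding against the
  // tenant's trigger embeddings and keep only the top candidates.
  async function shortlistTriggers(db: Db, queryVector: number[], topK = 5): Promise<Document[]> {
    return db
      .collection("trigger_embeddings") // collection name is an assumption
      .aggregate([
        {
          $vectorSearch: {
            index: "trigger_embeddings_vector_index",
            path: "embedding", // field name is an assumption
            queryVector,
            numCandidates: 50, // bounded by the 50-active-triggers-per-tenant cap
            limit: topK,
          },
        },
        { $project: { triggerId: 1, fieldsSchema: 1, score: { $meta: "vectorSearchScore" } } },
      ])
      .toArray();
  }

  // Stage 2: LLM extraction with a strict JSON schema built from the
  // trigger's declared fields, so the reply is always a typed payload.
  async function extractPayload(
    message: string,
    trigger: { triggerId: string; fieldsSchema: Record<string, unknown> },
  ) {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // model choice is an assumption
      messages: [
        { role: "system", content: "Extract the trigger's fields from the user message." },
        { role: "user", content: message },
      ],
      response_format: {
        type: "json_schema",
        json_schema: { name: trigger.triggerId, schema: trigger.fieldsSchema, strict: true },
      },
    });
    return JSON.parse(completion.choices[0].message.content ?? "null");
  }
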
If detection fails (timeout, LLM error, no candidate), the chat reply is sent as usual — trigger detection never blocks the chatbot. Budget: 4 s soft timeout per turn.
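
The non-blocking guarantee can be expressed as a simple race. The 4 s figure is the budget above; the wrapper itself is illustrative:

  // Race detection against the 4 s soft timeout. On timeout or error the
  // caller ships the chat reply without a CTA; detection never blocks chat.
  async function detectWithBudget<T>(detection: Promise<T>, budgetMs = 4_000): Promise<T | null> {
    const timeout = new Promise<null>((resolve) => setTimeout(() => resolve(null), budgetMs));
    try {
      return await Promise.race([detection, timeout]);
    } catch {
      return null; // LLM error, embedding failure, no candidate, etc.
    }
  }
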

Proposal — confirmation gate

Detection produces a proposal, not an execution. The proposal is persisted with a short TTL and attached to the bot’s reply as a CTA button. Nothing runs server-side or client-side until the user clicks the CTA. Three terminal states:
State        When
confirmed    User clicked the CTA. Fan-out runs.
dismissed    User dismissed the CTA explicitly.
expired      Proposal TTL elapsed without action.
A race-safe findOneAndUpdate({ state: 'proposed' }) flips the proposal to confirmed, so a double-click can’t fire twice.
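
A minimal sketch of that flip, assuming proposals live in their own collection (collection and field names are illustrative):

  import { Collection, ObjectId } from "mongodb";

  // Race-safe confirm: the { state: 'proposed' } filter matches at most once,
  // so a double-click (or a click after expiry) finds nothing and does nothing.
  async function confirmProposal(proposals: Collection, proposalId: ObjectId): Promise<boolean> {
    const doc = await proposals.findOneAndUpdate(
      { _id: proposalId, state: "proposed" },
      { $set: { state: "confirmed", confirmedAt: new Date() } },
    );
    return doc !== null; // null: lost the race, dismissed, or expired
  }
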

Fan-out — webhook + WS

On confirmation, Hacktionbase fans out over two channels in parallel:

Webhook (server → tenant backend)

  • Enqueued on the hacktionbase-trigger-webhook SQS queue.
  • Worker delivers with 5 s timeout, 3 retries, exponential backoff.
  • Failures land in a DLQ surfaced in the dashboard.
  • Payload signed with HMAC so your backend can verify authenticity (see the sketch after this list).
  • Includes executionEventId — use it as an idempotency key.
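
On the receiving end, verification might look like the sketch below. The HMAC signing and the executionEventId idempotency key are stated above; the SHA-256 algorithm, hex encoding, and header handling are assumptions:

  import { createHmac, timingSafeEqual } from "node:crypto";

  // Verify the HMAC over the raw (unparsed) request body.
  function verifyWebhook(rawBody: string, signatureHex: string, secret: string): boolean {
    const expected = createHmac("sha256", secret).update(rawBody).digest();
    const received = Buffer.from(signatureHex, "hex");
    return expected.length === received.length && timingSafeEqual(expected, received);
  }

  // Deduplicate retries with executionEventId; use a persistent store in production.
  const seen = new Set<string>();
  function shouldProcess(executionEventId: string): boolean {
    if (seen.has(executionEventId)) return false;
    seen.add(executionEventId);
    return true;
  }
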

WebSocket event (server → originating session)

  • Published via the existing publishWsEvent helper.
  • Channel scoped to the specific session that produced the message: the user-conversation channel for identified users, or the anonymous-session channel for visitors.
  • Forwarded by the widget iframe to the host page as hacktionbase:trigger, then dispatched to the matching Hacktionbase.onTrigger handler (example after this list).
  • Never broadcast cross-user. The WS layer only routes to the originating session.
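
On the host page, a handler might look like the following. The exact Hacktionbase.onTrigger signature is assumed from the event described above, and book_demo is a hypothetical trigger name:

  // Minimal host-page handler. The SDK global's shape is assumed for
  // illustration; "book_demo" is a hypothetical trigger name.
  declare const Hacktionbase: {
    onTrigger(name: string, handler: (payload: Record<string, unknown>) => void): void;
  };

  Hacktionbase.onTrigger("book_demo", (payload) => {
    // Runs only in the originating session, and only after the CTA click.
    console.log("confirmed trigger payload", payload);
  });
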

Why confirmation is mandatory (v1)

The early instinct was “match → execute immediately.” Two reasons we don’t:
  1. Safety: a misread intent could create resources, redirect mid-flow, or trigger destructive side effects with no recourse.
  2. Trust during rollout: with the CTA gate, tenants can ship triggers without fearing edge cases. Once a trigger has accumulated enough confirmed runs we can revisit the auto-execute mode.
Future versions may allow trustedAutoExecute: true per trigger, gated by historical confirmation rate, but that’s deliberately out of scope for v1.

Operational notes

  • Atlas vector index trigger_embeddings_vector_index must be provisioned per tenant DB. If the index is missing, a brute-force fallback (scanning at most 200 docs) is used instead, still bounded by the 50-trigger cap; a sketch follows these notes.
  • Plan gating is enforced at the chat-controller level by requireTriggersPlan. Tenants on the free plan never enter the detection pipeline.
  • Observability — see the trigger detail page in the dashboard for run history, webhook delivery status, and confirmation rate.
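
A sketch of the brute-force fallback from the first note: plain cosine similarity computed in application code over at most 200 trigger docs (function and field names are illustrative):

  // Fallback when the Atlas vector index is missing: score every trigger
  // embedding in application code. The 200-doc cap keeps this cheap.
  function cosine(a: number[], b: number[]): number {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      normA += a[i] * a[i];
      normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
  }

  function bruteForceShortlist(
    query: number[],
    docs: { triggerId: string; embedding: number[] }[], // at most 200 docs
    topK = 5,
  ) {
    return docs
      .map((d) => ({ triggerId: d.triggerId, score: cosine(query, d.embedding) }))
      .sort((x, y) => y.score - x.score)
      .slice(0, topK);
  }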