
Feature Gap Deep Dive (Curated Pass)

This page is the second-pass, manually curated analysis on top of the full inventory. It focuses on product capabilities that materially affect win-rate, retention, and enterprise deal velocity for Visyble.

All recommendations here are derived from the full processed corpus (175/175 pages); see competitive-intelligence/coverage-audit-and-traceability for verification.

How to read this page

  • What competitors do: concise pattern summary with direct references.
  • Visyble current state: what exists based on current internal docs.
  • Gap: what is missing or under-exposed.
  • Build spec: concrete delivery guidance (data model, API, UI, KPIs).

1) Prompt Diagnostics: Fanouts, Intent, and Query Paths

What competitors do

Visyble current state

  • A Prompt Volumes section exists in the docs and product taxonomy.
  • No first-class docs flow for fanout tree generation, the intent tagging model, or query transformation lineage.

Gap

  • Insight depth trails the competitor narrative: Visyble can show prompt activity, but not why or how the model decomposed a given query.

Build spec

  • Data model
    • prompt_runs -> fanout_queries[] (query text, rank, source engine, timestamp, locale).
    • intent_labels[] with confidence and optional multi-label tags.
  • API
    • GET /prompt-diagnostics/{prompt_id}
    • GET /prompt-diagnostics/{prompt_id}/fanouts
    • GET /prompt-diagnostics/{prompt_id}/intent
  • UI
    • Fanout graph panel + expandable query table.
    • Intent chips with confidence and trend-over-time.
    • Source overlap panel by fanout branch.
  • KPIs
    • Fanout breadth, intent drift rate, fanout-to-citation conversion rate.
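A minimal sketch of how the fanout KPIs could fall out of the data model above. Field names such as `cited_urls` are assumptions added for illustration, not an existing schema:

```python
from dataclasses import dataclass, field

@dataclass
class FanoutQuery:
    text: str
    rank: int
    source_engine: str
    timestamp: str
    locale: str
    cited_urls: list = field(default_factory=list)  # hypothetical: citations attributed to this branch

@dataclass
class PromptRun:
    prompt_id: str
    fanout_queries: list

def fanout_breadth(run: PromptRun) -> int:
    # KPI: how widely the model decomposed the prompt.
    return len(run.fanout_queries)

def fanout_to_citation_rate(run: PromptRun) -> float:
    # KPI: share of fanout branches that produced at least one citation.
    if not run.fanout_queries:
        return 0.0
    cited = sum(1 for q in run.fanout_queries if q.cited_urls)
    return cited / len(run.fanout_queries)
```

Intent drift rate would follow the same shape, comparing `intent_labels[]` across runs over time.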

2) Citation Intelligence: Categories, Gaps, and Benchmarks

What competitors do

Visyble current state

  • A Sources section exists and includes a URL analytics endpoint in the generated docs.
  • No clear docs taxonomy for source classes (news/docs/forums/reviews/ugc), and no published benchmark ranges.

Gap

  • Missing “decision layer” for users: which source category matters most for each objective and model.

Build spec

  • Data model
    • source_category, source_trust_score, source_type, model_overlap.
    • Gap objects: mention_gap, source_gap, citation_gap.
  • API
    • GET /sources/categories
    • GET /sources/gaps
    • GET /benchmarks/citation-rate
  • UI
    • Category treemap + benchmark gauge.
    • Gap explorer with suggested actions.
  • KPIs
    • Citation share by category, benchmark percentile, gap closure velocity.
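To make the "decision layer" concrete, here is a sketch of citation share by category and a simple gap object against a benchmark. The dict shapes and the idea of a benchmark midpoint are assumptions for illustration:

```python
from collections import Counter

def citation_share_by_category(citations: list) -> dict:
    """citations: list of dicts each carrying a 'source_category' key."""
    counts = Counter(c["source_category"] for c in citations)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def citation_gap(share: dict, benchmark: dict) -> dict:
    """Per-category gap vs. a benchmark share; positive = under-indexed."""
    return {cat: benchmark[cat] - share.get(cat, 0.0) for cat in benchmark}
```

The gap explorer would rank categories by this value and attach suggested actions per category.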

3) Asset Hierarchies and Segmentation

What competitors do

Visyble current state

  • Brands, Knowledge Bases, and Personas APIs exist.
  • No documented hierarchy rollups from brand -> product -> feature -> URL/campaign.

Gap

  • Multi-product customers cannot easily connect visibility shifts to sub-assets with clean rollups.

Build spec

  • Data model
    • entity_nodes (brand/product/feature/page), entity_edges, rollup_rules.
  • API
    • POST /entities
    • POST /entities/edges
    • GET /entities/{id}/rollup
  • UI
    • Hierarchy builder and rollup filters on all analytics screens.
  • KPIs
    • Sub-asset visibility trend, rollup accuracy, segment-level contribution to brand score.
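The hierarchy rollup reduces to a tree walk over `entity_nodes` and `entity_edges`. This sketch assumes acyclic parent-to-child edges and simplifies `rollup_rules` to summation; weighted or averaged rules would slot into the same traversal:

```python
from collections import defaultdict

def build_children(entity_edges: list) -> dict:
    # entity_edges: list of (parent_id, child_id) pairs.
    children = defaultdict(list)
    for parent, child in entity_edges:
        children[parent].append(child)
    return children

def rollup(node: str, children: dict, metrics: dict) -> float:
    # Sum a per-node metric down the subtree rooted at `node`.
    total = metrics.get(node, 0.0)
    for child in children.get(node, []):
        total += rollup(child, children, metrics)
    return total
```

`GET /entities/{id}/rollup` would return the result of this traversal for any node, which is what lets analytics screens filter at brand, product, or page granularity.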

4) Content Optimization Workbench + Action Loop

What competitors do

Visyble current state

  • Actions, Suggestions, and Agents exist.
  • The generated docs show a Content section but no discovered API flow.

Gap

  • Missing closed loop from insight -> content recommendation -> execution -> measured lift.

Build spec

  • Data model
    • content_assets, content_score, recommendation_cards, execution_jobs.
  • API
    • POST /content/analyze
    • GET /content/{id}/score
    • POST /content/{id}/apply-suggestion
  • UI
    • Content workbench with scoring breakdown and “run action” controls.
  • KPIs
    • Score lift after actions, citation lift by content cohort, time-to-impact.
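The "measured lift" leg of the loop could be as simple as attaching before/after scores to each recommendation card. The `RecommendationCard` fields here are assumptions sketching one possible shape:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecommendationCard:
    card_id: str
    content_id: str
    score_before: float
    score_after: Optional[float] = None  # filled in once the execution job re-scores the asset

def score_lift(card: RecommendationCard) -> Optional[float]:
    """Relative content_score lift after an applied suggestion; None until measured."""
    if card.score_after is None:
        return None
    if card.score_before == 0:
        return None  # lift is undefined from a zero baseline
    return (card.score_after - card.score_before) / card.score_before
```

Aggregating `score_lift` by content cohort gives the "citation lift by content cohort" KPI; time-to-impact is the delta between apply and first measured lift.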

5) Agent Analytics Integrations (CDN/CMS/Storefront)

What competitors do

Visyble current state

  • Agent Analytics exists with Cloudflare and related status endpoints.
  • The integration story exists, but the docs UX does not present it as a broad connector strategy.

Gap

  • Buyers who compare integration matrices first perceive the platform as less complete than it is.

Build spec

  • Data model
    • integration_instances, connector_capabilities, sync_status, audit_logs.
  • API
    • GET /integrations/catalog (already present) -> expand metadata for ecosystem coverage page.
    • POST /integrations/{connector}/onboard
  • UI
    • Integration matrix page (supported, beta, roadmap).
    • Setup wizard per connector.
  • KPIs
    • Connector activation rate, time-to-first-signal, data freshness SLA.
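Two of these KPIs are straightforward to derive from `integration_instances` and `sync_status`. This is a sketch under assumed field names and an assumed ISO-like timestamp format:

```python
from datetime import datetime

ISO_FMT = "%Y-%m-%dT%H:%M:%S"  # assumed timestamp format

def time_to_first_signal(onboarded_at: str, first_event_at: str) -> float:
    # Seconds between connector onboarding and the first ingested event.
    delta = datetime.strptime(first_event_at, ISO_FMT) - datetime.strptime(onboarded_at, ISO_FMT)
    return delta.total_seconds()

def activation_rate(instances: list) -> float:
    # Share of integration_instances whose sync_status reached 'active'.
    if not instances:
        return 0.0
    return sum(1 for i in instances if i["sync_status"] == "active") / len(instances)
```

Data freshness SLA would compare each instance's last sync timestamp against a per-connector threshold in the same way.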

6) Commerce Intelligence Package

What competitors do

Visyble current state

  • No explicit commerce vertical package surfaced in current docs IA.

Gap

  • Missing a high-intent GTM package for ecommerce and retail buyers.

Build spec

  • Data model
    • shopping_prompts, merchant_mentions, product_recommendation_slots, price_context.
  • API
    • GET /commerce/overview
    • GET /commerce/prompts
    • GET /commerce/brands/{id}/share
  • UI
    • Commerce dashboard with recommendation share and seasonal event views.
  • KPIs
    • Shopping visibility share, product recommendation rate, seasonal performance index.
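As a sketch of the product recommendation rate KPI over the proposed `shopping_prompts` and `product_recommendation_slots` objects (the dict shapes are assumptions):

```python
def product_recommendation_rate(shopping_prompts: list, brand_id: str) -> float:
    """Share of shopping prompts whose recommendation slots include the brand."""
    if not shopping_prompts:
        return 0.0
    hits = sum(
        1
        for p in shopping_prompts
        if any(slot["brand_id"] == brand_id for slot in p["product_recommendation_slots"])
    )
    return hits / len(shopping_prompts)
```

Shopping visibility share is the same computation scoped to a category or seasonal event window, which is what the seasonal event views would surface.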

7) Trust, Compliance, and Enterprise Readiness

What competitors do

Visyble current state

  • Billing/settings/system controls exist; trust posture is not prominently structured in docs navigation.

Gap

  • Enterprise buyers may miss critical trust signals during evaluation.

Build spec

  • Docs IA
    • Add Trust Center section under System.
  • Content
    • Security controls matrix, data retention policy, compliance roadmap.
  • Product hooks
    • SSO/SCIM, audit exports, role templates documented alongside controls.

8) Benchmark Program and Category Authority

What competitors do

Visyble current state

  • Internal engineering docs are strong, but there is limited outward-facing benchmark narrative.

Gap

  • Category authority is lower than that of competitors who repeatedly publish data-backed studies.

Build spec

  • Quarterly benchmark release process.
  • Public methodology page + reproducibility notes.
  • In-product benchmark comparison cards connected to customer account metrics.

Sequencing roadmap

  1. Sprint 1-2: Prompt Diagnostics + Citation Intelligence v2 foundations.
  2. Sprint 3-4: Content Workbench MVP + Asset Hierarchy schema.
  3. Sprint 5-6: Integration Matrix + Commerce package alpha.
  4. Sprint 7-8: Trust Center + Benchmark publication workflow.
