Bridging the Trust Gap in Threat Intelligence
In cybersecurity, an insight is only as valuable as its source, and trust is essential. The initial AI pilot in Echosec proved that LLMs could summarize data, but it suffered from a "Trust Gap": it lacked direct source citations.
Because analysts could not verify its findings, they remained skeptical, and the feature never became an integral part of their workflows. To bring this capability to our flagship platform, Ignite, we had to move beyond simple summarization and solve for verifiable intelligence.


Role
Principal Product Designer
Industry
Cybersecurity
My Goal
Establish Trust
Integrate a citation framework that deep-links every AI claim to its raw data source.
Scale for Enterprise
Move from a 100-result limit to a 5,000-result engine to support broad investigations.
Native Integration
Redesign the UX from a "bolt-on" widget to a side-by-side workspace that preserves the analyst's workflow.
Understanding cybersecurity analysts
Job Statement
“I'm essentially the digital detective and the first responder rolled into one.” Developing and automating detection rules for real-time threat flagging.
Struggles
Every search and tweak can generate thousands of results, each demanding individual review. This manual, high-volume process makes finding the right formula painfully slow and resource-intensive.
Desired Outcomes
Quickly identify credible threats, threat actors, and techniques to share with the leadership team.
Context
Understaffed cybersecurity analysts are drowning in data across multiple tools and topics, critically impeding rapid threat response in a time-sensitive environment.

Discovery and Validation
To ensure the transition from Echosec to Ignite was grounded in user needs, I spearheaded a three-phased discovery roadmap. This allowed us to validate assumptions early and iterate based on real-world analyst workflows.
Discovery Process
1. Contextual Inquiries
Conducted deep-dive customer interviews to identify current friction points in the Ignite search experience and Echosec AI tools.
2. Rapid Prototype Testing
Led iterative usability sessions using low-fidelity wireframes to test the placement and "trigger" logic of the AI summary.
3. Beta Testing Sessions
Partnered with key enterprise customers for a high-fidelity pilot, capturing "in-the-wild" usage data and edge-case feedback.
Critical Findings
The discovery phase surfaced two "deal-breakers":
Capping the data pool created a manual "pre-search" burden for customers, limiting the AI’s perceived intelligence.
Users demanded a more integrated experience where AI insights feel native to the search interface, rather than a bolt-on feature.
Design Impact
These findings directly shifted our technical scope, moving the requirement from a 100-result "widget" to a 5,000-result integrated framework, and shaped my Design Requirements.
Low-fidelity prototype testing: gathered feedback on placement, trigger logic, and source-citation interaction.

Designed the Gen-AI pilot for Echosec and spearheaded the subsequent research that defined the AI integration strategy for our flagship platform, Ignite.
Design Requirements
Massive Data Scalability (The 5k Threshold)
Goal - Eliminate the "100-result bottleneck" by re-engineering the UI to support summarization of up to 5,000 results simultaneously.
Design - This required designing "processing" states that maintain user confidence during high-volume data crunching without freezing the interface.
Contextual Persistence
Goal - Ensure the summary is a companion to the data, not a replacement.
Design - I designed a collapsible, persistent side panel that lets analysts maintain full visibility of the raw search results while the summary is active. This preserved the "Source of Truth" while providing the "Speed of AI."
User-Controlled Synchronization
Goal - The summary must evolve with updated search criteria.
Design - I implemented an explicit "Update" trigger that prompts the AI to refresh the summary when search criteria or filters change, giving customers full control over when a new summary is generated (see the sketch after this requirements list).
Phased Intelligence
Goal - Build for the future while delivering value today.
Design - I architected the UI to be future-proof. While Phase 1 focused on high-quality static summarization, the interface was built to seamlessly "unlock" multi-turn conversations (chat) in Phase 2 without requiring a total redesign.
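To make the synchronization model concrete, here is a minimal TypeScript sketch of the summary lifecycle and the stale-summary check behind the explicit "Update" trigger. All names (SearchCriteria, SummaryState, generateSummary) are illustrative assumptions for this write-up, not Ignite's actual implementation.

```typescript
// Minimal sketch of the summary lifecycle and the "stale summary" check behind
// the explicit Update trigger. All names are illustrative assumptions, not
// Ignite's actual implementation.

// UI states for the panel: "processing" keeps the interface responsive while
// up to 5,000 results are being summarized; "stale" surfaces the Update button.
type SummaryStatus = "idle" | "processing" | "ready" | "stale";

interface SearchCriteria {
  query: string;
  filters: Record<string, string[]>;
  resultLimit: number; // up to the 5,000-result threshold
}

interface SummaryState {
  status: SummaryStatus;
  text: string;
  generatedFrom: SearchCriteria; // snapshot of the criteria at generation time
}

// The summary is stale when the live criteria no longer match the snapshot it
// was generated from; we flag it rather than auto-refreshing.
function isSummaryStale(current: SearchCriteria, summary: SummaryState | null): boolean {
  if (!summary || summary.status === "processing") return false;
  return JSON.stringify(current) !== JSON.stringify(summary.generatedFrom);
}

// Regeneration only happens on an explicit user action, never as a side effect
// of editing the search.
async function onUpdateClicked(
  current: SearchCriteria,
  generateSummary: (c: SearchCriteria) => Promise<string>
): Promise<SummaryState> {
  const text = await generateSummary(current);
  return { status: "ready", text, generatedFrom: current };
}
```

The deliberate choice here is that editing the search never regenerates the summary as a side effect; the analyst decides when a refresh is worth running.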
Final Designs

Persistent Workspace Control: A collapsible panel allows analysts to minimize the AI summary at any time, reclaiming screen real estate for deep-data triage while keeping insights just one click away.
V2 Dark mode: Demonstrating multi-turn conversation
V2 Light mode: Demonstrating multi-turn conversation
Real-time detection of search criteria changes prompts a summary refresh, ensuring AI insights stay synced with raw data while preventing unwanted distractions.

With the summary minimized, real-time detection of search criteria changes still surfaces a discreet refresh button

Verifiable Intelligence: Every AI claim includes direct source citations that deep-link to the raw data. This eliminates skepticism, allowing analysts to instantly verify insights and maintain a high-trust workflow.
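As an illustration of the citation model, here is a minimal TypeScript sketch of how each summary claim could carry deep links back to its raw source records. The record shape and the buildSourceLink helper are assumptions for this case study, not Ignite's real schema.

```typescript
// Minimal sketch of claim-level citations that deep-link to raw source records.
// Shapes and the buildSourceLink helper are illustrative assumptions.

interface SourceRecord {
  id: string;
  platform: string;   // originating data source
  capturedAt: string; // ISO timestamp of collection
  excerpt: string;    // the raw text the model drew from
}

interface SummaryClaim {
  text: string;          // one sentence of the AI summary
  citationIds: string[]; // IDs of the SourceRecords that support it
}

// Resolve a claim's citations to the raw records so the UI can render
// clickable deep links next to the sentence.
function resolveCitations(
  claim: SummaryClaim,
  records: Map<string, SourceRecord>
): SourceRecord[] {
  return claim.citationIds
    .map((id) => records.get(id))
    .filter((r): r is SourceRecord => r !== undefined);
}

// Hypothetical deep link into the search results view, scrolled to the record.
function buildSourceLink(record: SourceRecord): string {
  return `/search/results#record-${encodeURIComponent(record.id)}`;
}
```

Rendering each claim alongside its resolved records is what lets an analyst jump from a sentence in the summary straight to the raw result that supports it.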
Results
"This is not like the other guys out there that are just putting AI features out there to say that they have them. This actually works." ~ Cybersecurity Analyst, Beta program
By prioritizing time-to-insight over novelty, the feature delivered immediate, measurable value across three key pillars:
110% WoW Usage Growth
Surpassed adoption expectations with 2.5k+ sessions in the first 30 days and sustained double-digit weekly growth thereafter.
30% Faster Workflows
Analysts reported a significant reduction in manual data triage, accelerating search-to-action efficiency.
Zero-Defect Launch
A rigorous DPE (Design, Product, Eng) collaboration and Beta rollout resulted in zero major customer-reported bugs.
110%
Increased Usage WoW
30%
Efficiency Gain
0
Customer Reported Bugs
Looking Back
Leading the end-to-end design of AI Summarization for Flashpoint’s flagship platform was a defining milestone. Moving from deep customer discovery to high-fidelity prototyping and a successful launch taught me that "AI-driven" isn't just a feature—it’s a paradigm shift in how we streamline the processing and analysis phases of the threat intelligence lifecycle. While the positive reviews and usage numbers validate the solution, the journey provided invaluable growth:
Radical Transparency at Scale
While my Loom updates effectively bridged the gap between discovery and engineering, I learned the value of creating "vision-level" updates to keep the broader organization aligned and excited.
AI-First Prototyping
Moving forward, I would integrate AI into the prototyping stage even earlier. Testing LLM-driven interactions in the first week provides insights that static mocks simply cannot capture.
The Branding of Innovation
I realized that AI tooling requires its own strategic brand identity. Developing the branding in tandem with the feature works, but treating it as a distinct project ensures a more cohesive and scalable visual language.


