hila: Building Trust in Conversational AI for Finance
VIANAI
An experimental GenAI platform for conversational data analysis that became our flagship product. Built trust through transparent sourcing and honest uncertainty.
PROJECT ROLE: Lead Designer (Founding Team)
TIMELINE: 4 months (0→1 product)
OUTCOME: 4,000 users, $1M ARR in 4 months, scaled team from 5 to 50 people
The Challenge
In early 2023, while the world was using ChatGPT to write essays, we saw a high-stakes opportunity: conversational data analysis for finance. Investors and analysts spend hours combing through SEC filings. We hypothesized that an AI “research assistant” could automate this.
However, we faced a fundamental roadblock: In finance, a single hallucination can cost millions.
The Goal: Build a conversational AI that was accurate and transparent enough for professional financial analysts to trust with their livelihoods.
The Evolution: From POC to Platform
Version 1: Proving the Concept
We started narrow. Users entered a stock ticker and year to “Ask hila a question.” By scoping the data strictly to earnings transcripts, we drastically reduced hallucination risk.
The Breakthrough: Trust comes from transparency, not confidence.
Our research (20 one-on-one interviews, 4 user surveys) revealed that users didn’t just want an answer; they wanted to verify the work. When hila calculated a metric like EBITDA, users needed to see the math. This insight shifted our entire design philosophy.
Version 2 & 3: From Public to Proprietary
We evolved from public transcripts to allowing users to upload their own sensitive documents. This required a shift in Context Management—users needed to know exactly which “brain” they were talking to at any given time.
Key Design Decisions: Building Trust
1. Calculation Transparency
The Problem: Financial metrics are the result of complex logic and massive datasets. Providing a raw summary—like “Human Resources had the highest CAGR”—without visible proof feels like a “black box,” making the AI feel untrustworthy for high-stakes analysis.
The Solution: We designed interactive data tables that act as the connective tissue between the AI’s natural language summary and the raw enterprise data. Instead of just providing a final conclusion, hila generates a structured breakdown of every division, year-over-year amounts, and the resulting CAGR.
Key Interaction Features:
Steps Accordion: Shows the AI’s reasoning path (e.g., “3 steps completed”) before presenting results
Multi-Modal Views: Toggle between data tables (precise auditing) and charts (trend analysis)
Contextual Summaries: Narrative text references specific data points (e.g., “Division Number: DV00003”) for instant cross-verification
Insight: By “showing the work” through both step-by-step logic and structured tables, we transformed user trust from blind faith in an LLM into an active, verifiable auditing process.
Figure 1. By surfacing the raw data points used in the calculation, we moved from ‘Black Box’ AI to a verifiable tool.
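To make the structure concrete, here is a minimal sketch (in TypeScript) of an answer payload that carries the summary, the step list, and the structured table together, so the Steps Accordion, the table/chart toggle, and the contextual cross-references all render from the same verifiable data. The field names are my own assumptions for illustration, not hila’s production schema.

```typescript
// Illustrative shape of a hila-style answer payload; names are assumptions
// for this sketch, not the production schema.
interface CalculationStep {
  label: string;        // e.g. "Extracting division-level revenue"
  status: "completed" | "running" | "failed";
}

interface AnswerPayload {
  summary: string;                    // natural-language conclusion
  steps: CalculationStep[];           // drives the Steps Accordion
  table: {                            // drives the precise-auditing table view
    columns: string[];                // e.g. ["Division", "FY2021", "FY2022", "CAGR"]
    rows: (string | number)[][];
  };
  chartSeries?: { name: string; points: number[] }[]; // optional trend-analysis view
  citations: { text: string; rowIndex: number }[];    // links summary phrases to table rows
}

// The UI shows the summary first, then lets the analyst expand the steps and
// toggle between table and chart without losing the underlying rows.
function hasVerifiableData(answer: AnswerPayload): boolean {
  return answer.steps.every(s => s.status === "completed") && answer.table.rows.length > 0;
}
```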
2. Direct Source Attribution & Verification Layers
To support “calculated trust,” we implemented multiple verification levels:
The Reasoning Tab: Clicking “Reasoning” reveals the specific arithmetic and logic hila used (e.g., CAGR formula, revenue components).
Side-by-Side Verification: Clicking data points opens a side drawer showing the raw source document (PDF or database) with relevant sections highlighted in real-time.
The Debug Link: For technical users, clicking “Debug” reveals the underlying SQL code, allowing deep verification of query logic, table selection, and filters.
The Insight: Professional trust requires different levels of evidence. Reasoning satisfies the analyst; Debug satisfies the technical stakeholder. Both are essential for enterprise adoption.
Figure 2. Calculation Transparency and Direct Source Attribution
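A rough sketch of how these three layers can coexist on a single answer, assuming illustrative type names rather than hila’s actual API:

```typescript
// Sketch of the three verification layers described above; shapes and names
// are illustrative, not hila's real interface.
type VerificationView =
  | { kind: "reasoning"; formula: string; inputs: Record<string, number> } // e.g. the CAGR formula and its operands
  | { kind: "source"; documentUrl: string; highlightedSpans: [number, number][] } // side-by-side PDF/database view
  | { kind: "debug"; sql: string };                                        // raw query for technical users

// A single answer can expose all three levels; the UI decides which to
// surface depending on who is verifying (analyst vs. engineer).
function availableLevels(views: VerificationView[]): string[] {
  return views.map(v => v.kind);
}
```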
3. Honest Uncertainty: “No Dead Ends”
When the system’s confidence fell below 80%, it explicitly admitted uncertainty rather than hallucinating. Instead of generic errors, we provided “Disambiguation Prompts”—suggesting better phrasing or explaining why data was missing.
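A minimal sketch of this gate, assuming a numeric confidence score and the 80% threshold described above; the response shape and names are illustrative, not hila’s implementation.

```typescript
// "No dead ends" gate: below the confidence threshold we never guess.
interface ModelResult {
  answer: string;
  confidence: number;            // assumed 0.0–1.0 score from the retrieval/LLM pipeline
  missingData?: string;          // populated when the source simply lacks the data
}

type HilaResponse =
  | { type: "answer"; text: string }
  | { type: "disambiguation"; message: string; suggestions: string[] };

function respond(result: ModelResult, suggestedRewrites: string[]): HilaResponse {
  if (result.confidence >= 0.8 && !result.missingData) {
    return { type: "answer", text: result.answer };
  }
  // Instead of a generic error, explain what is missing and offer better phrasings.
  return {
    type: "disambiguation",
    message: result.missingData
      ? `I couldn't find this in the selected documents: ${result.missingData}`
      : "I'm not confident I understood the question. Did you mean one of these?",
    suggestions: suggestedRewrites,
  };
}
```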
[Deep Dive] The “AI Thinking” Interaction
One of the most critical interaction challenges was managing wait time during complex data processing. Unlike a search engine, hila takes 5-10 seconds to “think.”
The Problem: Static loading spinners caused anxiety. Users wondered if the system had crashed.
The Solution: We designed a “Streaming Thought” UI. As hila processed data, the UI displayed the steps it was taking (e.g., “Searching 2023 10-K…” → “Extracting Revenue numbers…” → “Calculating growth…”).
The Result: This transparency reduced perceived wait time and built a mental model for the user of how the AI was “working” for them.
Figure 3. Streaming Thought: AI reasoning steps
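A simplified sketch of how such a step feed could be streamed to the UI, assuming the backend can emit step events as it works; names and timings are illustrative stand-ins.

```typescript
// "Streaming Thought" sketch: each processing step is surfaced as soon as it
// starts, then marked complete, instead of hiding everything behind a spinner.
interface ThinkingStep {
  label: string;   // e.g. "Searching 2023 10-K…"
  done: boolean;
}

async function* streamThinking(steps: string[]): AsyncGenerator<ThinkingStep> {
  for (const label of steps) {
    yield { label, done: false };                // show the step the moment it starts
    await new Promise(r => setTimeout(r, 1500)); // stand-in for real processing time
    yield { label, done: true };                 // mark it complete before moving on
  }
}

// Usage: the chat UI appends each step to the "thinking" panel as it arrives,
// so a 5–10 second wait reads as visible progress rather than a frozen screen.
async function renderThinking() {
  for await (const step of streamThinking([
    "Searching 2023 10-K…",
    "Extracting Revenue numbers…",
    "Calculating growth…",
  ])) {
    console.log(step.done ? `✓ ${step.label}` : `… ${step.label}`);
  }
}

renderThinking();
```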
Beyond Financials: The Probabilistic Shift
My work on hila (and previously on Kinect) has centered on the same fundamental shift in computing: The move from deterministic to probabilistic interfaces.
Deterministic (Traditional): You click a button; a specific action happens.
Probabilistic (AI/Gesture): The system “guesses” your intent based on noisy input or ambiguous prompts.
My Strategy for the AI Era:
Smooth the Jitter: Don’t show the user the raw machine uncertainty; provide a filtered, confident UI (see the sketch after this list).
Affordances Matter: Empty chat boxes cause “Blank Canvas Anxiety.” Always provide a “hand to hold” through suggestions and visible constraints.
Reliability > Magic: A feature that is “cool” but fails 20% of the time is a liability. I design for the 99% use case.
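As a rough illustration of “Smooth the Jitter,” an exponential moving average is one way to turn a noisy per-frame or per-token signal into something the UI can present calmly. The filter and its constant are my own example, not hila’s or Kinect’s actual implementation.

```typescript
// Exponential moving average over a noisy signal (e.g. per-token confidence
// or gesture position). The alpha value is an assumption for illustration,
// not a tuned production constant.
function smooth(values: number[], alpha = 0.3): number[] {
  const out: number[] = [];
  let prev = values[0] ?? 0;
  for (const v of values) {
    prev = alpha * v + (1 - alpha) * prev; // blend the new reading with history
    out.push(prev);
  }
  return out;
}

// Example: raw confidence bouncing between 0.6 and 0.9 renders as a steadier
// trend the user can actually read.
console.log(smooth([0.9, 0.6, 0.85, 0.62, 0.88]));
```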
Impact & Validation
The results were immediate and validated our “accuracy over impressiveness” approach:
- 0 to 4,000 users in 4 months
- $1M ARR in the same timeframe
- Scale: The design team grew from 2 to 4 senior designers as hila became the company’s flagship product
User Testimonials
“Connect it to your data sources (no need to build a data lakehouse) and it’ll just start answering your questions. After a few thumbs up/down and real-time reinforcement learning it’ll get it right. Almost magic.”
– Boris Evelson, VP at Forrester Research
“hila vastly improves my research process. I can rapidly search 10-Ks and earnings calls to find if anything related to my theses are hidden inside… More importantly, I can do it quickly without wasting time skimming irrelevant topics or pinpointing key words.”
– Mike Ostroff, Investment Analyst at Maverick Capital
“Being able to monitor and improve LLM performance is critical to unlocking the true power of gen AI. Vianai’s hila Enterprise provides clients a platform to safely and reliably deploy any large language model (LLM), optimized and fine-tuned to speak to their systems of record.”
– Ravi Kumar S, CEO of Cognizant
“Vianai is helping customers innovate by bringing its hila agents to Google Cloud. Leveraging the power of Gemini models, these solutions allow businesses to easily deploy sophisticated analytics without technical expertise, unlocking value from their data faster and more effectively.”
– Kevin Ichhpurani, Corporate VP at Google Cloud
Reflection
hila taught me that the best AI products feel effortless not because the AI is sophisticated, but because the design earns user trust.
We succeeded by making the most interpretable AI on the market. Every design decision—from source highlights to honest uncertainty to streaming thought processes—served the goal of making a powerful, unpredictable technology feel reliable and controllable.