ML platform

VIANAI

A modular, end-to-end ML platform that helps all stakeholders collaboratively identify signal within their data, enabling the roles involved in building ML models, using predictions, and explaining model behavior to achieve results faster and with a better experience.

PROJECT ROLE: UX Designer, Interaction Designer, Visual Designer
DELIVERABLES: User Flows, User Interviews, Information Architecture, Wireframes, Prototypes, Visual Design Comps, Specs, and Assets

Fig. 1: The End-to-End ML Platform: A unified interface for data scientists and business users.

The Challenge

Machine learning workflows often involve disjointed tools and complex underlying microservices that make collaboration difficult. The core challenge was to abstract these technical complexities into a cohesive user experience serving the different roles involved in building models, generating predictions, and explaining model behavior.

Fig. 2: Mapping the complexity: Early user flows identifying the disjointed steps in the original ML workflow.

The Solution

Design a “Power to the People” user experience that unifies the ML workflow.

  • Unified Interface: Created a cohesive UX that abstracts the capabilities of the underlying microservices, making powerful ML tools accessible.
  • Role-Based Enablement: The platform was designed to support specific needs of various user roles, allowing them to achieve results faster and with a significantly improved user experience.
  • Collaborative Focus: The architecture emphasizes collaboration, helping teams work together to find signal in their data effectively.

Fig. 3: Transforming data into decision-making: The Model Selection dashboard abstracts complex performance logs into an interactive scatter plot, allowing data scientists to instantly filter and compare models based on metrics like Precision and F1 Score.

Fig. 4: Integrated Development Environment: A split-view configuration screen that allows technical leads to manage permissions and edit TensorFlow code without leaving the platform ecosystem.

Fig. 5: Full Lifecycle Management: A dual view demonstrating the platform’s range. The Model Deployment screen (left) uses radar charts to visualize optimization trade-offs such as latency vs. accuracy, while the Settings hub (right) provides a centralized command center for managing every stage of the ML pipeline.

Key Takeaways

The project successfully delivered a platform that democratizes access to machine learning capabilities, allowing users to focus on insights and model behavior rather than grappling with the complexities of the underlying infrastructure.