Since introducing the AI Answering service, we saw a surge of new users eager to leverage it. AI Answering was designed to revolutionize how businesses manage customer interactions, offering an AI-powered response system that could handle inquiries with efficiency and accuracy. However, while the service was met with enthusiasm, it quickly became apparent that its potential was undermined by a lack of clarity. Users couldn't grasp the return on investment, the data was perplexing, and valuable insights, such as trends in call traffic at specific times, were inaccessible. Fundamentally, the service's usability fell short.
While the AI Answering service was a cutting-edge solution, its user interface wasn't keeping pace. Despite its promise and potential, the service was held back by an overview that fell short of delivering clarity. Our users needed a clear view of their data and the value derived from the service, but the interface in its existing state couldn't meet those demands.
The goal for this project was to design a dashboard that not only showcased the value of our AI Answering service but also made it understandable for our users. Previously, we offered a call log page (as shown in the picture above) detailing metrics like Received Calls, Average Call Duration, and Resolution Rate. However, these raw numbers didn't paint a clear enough picture for users, leaving them questioning the return on investment (ROI).
Our goals for this dashboard design were to:
1. Clear the fog around ROI, turning complex data into an easy-to-understand format.
2. Highlight trends for users, facilitating informed business decisions.
3. Make the dashboard more intuitive and user-friendly.
I led the design of the AI Answering dashboard and worked hand-in-hand with a Product Manager, an Engineering Lead, and a UX researcher. I had the rewarding task of shaping the design strategy and bringing it to life.
Research was pivotal to our approach. We had a wealth of data related to AI Answering, and our initial challenge was to distill it into comprehensible, valuable information. With the help of our data analyst, we analyzed the data in depth and identified 17 key metrics that could potentially illustrate the value of AI Answering.
Embracing a user-centric perspective, we sought deep insight into our users' priorities. We conducted a card sorting test in Maze using the metrics we had identified, enabling us to view them through the lens of the users. This allowed us to distill our initial list down to the nine most critical metrics according to user feedback, and it revealed how users logically grouped these metrics for the dashboard. These findings were instrumental in guiding the subsequent design process, ensuring we remained aligned with our users' needs and perspectives.
Before I began the design process, I led a discussion on the function of dashboards and their fit within the broader ecosystem of our software. I had conversations with the product team and internal stakeholders to ensure a consistent interface across Popmenu. This consistency would allow our users to understand and navigate the platform more effortlessly. By establishing a mental model of what a Popmenu dashboard looks like, we aimed to ease user transitions between different sections of our platform. Meanwhile, another product designer on our team focused on designing and testing various dashboard layouts. This collaborative effort not only informed a company-wide dashboard framework but also established a universal layout applicable across all areas of our software. With a shared understanding and insights from our research in hand, I could rapidly iterate through design explorations.
We released an early dashboard design to a select group of users for testing, focusing on a comprehensive assessment of its functionality. Besides seeking their opinions on overall impressions and ease of use, we aimed to determine three main aspects: 1) whether the dashboard effectively communicated the ROI of AI Answering, 2) if it helped users spot trends and make informed business decisions, and 3) if the dashboard increased data comprehension for users. The insights gathered from this user testing phase were invaluable, providing us with a clear direction for our subsequent design iterations.
The project presented its fair share of challenges. One of the most pressing was effectively communicating the value of a metric to our users. Our testing showed that even when users requested specific metrics, they didn't always understand those metrics' real-world impact and value. This realization forced us to think differently about how we approached the dashboard design: instead of only displaying data, we needed to present it in a narrative format, telling a story with the data rather than just showing numbers.
This was a considerable challenge to address, and while our first version of the dashboard didn't fully embrace the storytelling aspect, it laid the groundwork for its future incorporation. The plan was to take the feedback and learnings from version one and fold them into the next iteration, creating a more narrative-driven version two.
Feedback gathered from our users indicated that the design met the original challenges: it cleared the fog around ROI by simplifying complex data, highlighted critical trends that enabled users to make informed business decisions, and made the dashboard more intuitive and user-friendly.
The dashboard's design had a decidedly positive impact on our users. We transitioned from a minimal display of just three data points, which struggled to convey their worth, to an expansive, intuitive dashboard that vividly illustrated the service's value.
Following the dashboard release, our users were able to better understand the potential of the AI Answering service. This became particularly evident for those experiencing high call volumes during peak hours. The service seamlessly handled 10-20 calls per hour, a difficult task that would typically require a dedicated staff member. By visualizing this data, the dashboard allowed these users to recognize the immense value of AI Answering.
One notable success story was a client who, after adopting AI Answering, saw the service answer a whopping 7,200 phone calls. None of these required human intervention, saving considerable staffing resources. Most impressively, the client registered $90,000 in online ordering revenue as a direct result of implementing our service. This ROI had been invisible to the client before; the introduction of the dashboard made the advantages distinctly evident.
One of the most valuable takeaways from this project was the importance of cross-functional collaboration. Regular communication across functions (a mobile product designer, data analysts, engineers, product marketing, and sales) led to a richer understanding of the project from multiple perspectives.
If I were to start the project again, I would incorporate the storytelling aspect from the beginning. While version one was data-driven, future iterations should better convey the narrative behind the data, creating a more meaningful connection with users.