“Dashbots” — the inevitable fusion of dashboards and chatbots
The failure modes of these two common form factors teach us a lesson about the dangers of thoughtless design.
It’s April 1st, which for many parts of the world is April Fool’s Day. While the entertainment value of tricking people into believing the wrong thing steadily declines with age, I still enjoy taking every opportunity to fool around in an industry that often takes itself far too seriously.
And no part of tech takes itself more seriously than B2B software. Forget the design principles of a bygone era; today, every Productivity Tool only needs to follow these two simple rules:
- All serious products must have a dashboard
- All innovative products must have a chatbot
As everyone wants to be seen as both extremely serious and completely innovative, every product now has both a dashboard and a chatbot, despite the fact that neither dashboards nor chatbots are particularly effective at solving the problems they’re created to solve.
This proliferation of dashboards and chatbots creates a nightmare for people who are just trying to get their work done, and therefore for the managers responsible for smooth operations — who are, ultimately, the buyers of your products.
All they want is a single source of truth, and we’re about to give it to them in the worst possible way.
Let me take you on a journey; a journey of how “dashbots” come to be thought of as a good idea, and the rakes teams step on while trying to implement them.
Ideation is a failure mode
If chatbots and dashboards are so bad, why are there so many of them? To answer that question, we have to ask ourselves: where do ideas come from?
Ideation, like every other process, follows the path of least resistance. In a post-ZIRP industry that has optimized itself for velocity at any cost, user research processes like “asking if people want this” are simply seen as slowing us down. The only ideas that make it into the semantic environment come wholly out of the product manager’s head. Gone are the days of discovery; the “build-measure-learn” loop starts with “build,” and we are not getting any new data until that first idea has been shipped.
So the PM sits down and thinks of something we can ship. The inspiration for what the product ought to be able to do comes out of other products that the PM is accustomed to seeing. And because we are in an era of measureship, what the PM sees most frequently is dashboards.
Now, “a lot of products have dashboards” is not a bad starting point. A diligent researcher might analyze the workflows running through that product to try and figure out why. Unfortunately, we are following the path of least resistance, so what gets noticed most is just the form factor — and as a result we are building a dashboard to solve the problem “our product doesn’t have a dashboard.”
Notice what we didn’t do: the work to understand what information is relevant or actionable to a given user, or whether a “single pane of glass” is the right vehicle to deliver that information.
Chatbots function on a similar principle. Most of them don’t actually do anything in practice, but there is an endless source of wishful thinking that someday, they might. The goal is to implement GenAI first, and to serve the customer’s needs a distant second.
One of the most popular trains of thought goes something like this:
- Our website is an infinite lasagne of content sadness
- Instead of fixing the information architecture we will add a chatbot that users can query for the content
After six months of development, the outcome is that instead of one bad interface, you now have two bad interfaces.
Despicable design
It’s only a matter of time until the two systems collide, and a PM will start conceptualizing how to implement a dashboard as a chatbot. Or perhaps this has already happened. And as your favorite thought leader’s favorite thought leader, it’s up to me to issue the design principles for this up-and-coming genre of “dashbots” — a facetious solution to a facetious problem.
From one point of view, an LLM is the worst possible medium for a dashboard. Firstly, its outputs are non-deterministic; a given query is going to return different results every time. Secondly, it’s going to get those outputs wrong two-thirds of the time. Thirdly, it’s perfectly willing to change its answer when challenged.
But from another point of view, all of these are features.
Giving users relevant data to take action is only a secondary use of dashboards. The first use, by a country mile, is storytelling; as managers report to their own leadership, they are required to increasingly massage the chaos of the front lines into a coherent narrative of inevitable success. That leadership, far too important to delve into the data themselves, is increasingly reliant on “executive summaries” etched in 8pt font onto slide decks marked “confidential.” Roll enough dice and you’re bound to get something that fits their preferred narrative.
However! We are engaged in despicable design, and yet we’ve inadvertently stumbled upon a good idea.
One of the biggest problems with dashboards is their total disregard for the attention economy; their designers slather whatever they can think of onto the screen, with the result being a screenful of garbage information. A user who can compose their own query could potentially sidestep all of the data points we’ve decided are important (based on what our stakeholders would like us to show off), and thereby obtain some benefit from this tool.
Fortunately, the pull-based modality of chatbots plays into our hands. To get that information, our hapless users first have to articulate what they want, without any guidance on what is available, or any support for making sure that today’s prompt is phrased the same way as yesterday’s. Rather than making our dashboard glanceable, we have succeeded in making it merely queryable.
The purpose of a system is what it does
If a product team were to build dashbots, they would be the worst of both worlds: not only demanding excessive cognitive load, but also providing unreliable and manipulable data. These are not bugs; they are the inherent properties of the form factors — dashboards and chatbots — assigned to the task. If you, as the designer, choose these form factors, then I can only assume that your intention is to produce the inevitable results.
However, if your goal is not to mislead your user after extracting the greatest possible effort in exchange for that information, I hope that you take this article as a warning. By understanding the dimensions in which these form factors can fail, you will have understood the dimensions in which they can succeed — if you leverage their strengths rather than insist upon their weaknesses.