I built the wrong dashboard for two weeks
When I worked at Automattic, on parts of WordPress.com and Jetpack, we used to say that counting things is hard. With time I realized the real problem is one rung up: counting the right things is harder still. Most teams solve the first problem, define the metric carefully, and never notice the second. The metric they defined is not the one that mattered.
I walked into a clean version of this on my own product. I built an outreach tool last month. The first thing I did was sit and watch it work. Emails sent today, emails queued, emails waiting for the morning batch. The numbers moved when I clicked things. It felt productive.
Two weeks in, I was still sending email and I had no idea who had read any of it.
How to tell when a metric is the wrong half of the loop
The reason this is so easy to get wrong is structural. Anything I do inside my own software produces a clean record on the way out. I click Send, my code notes the click, the counter goes up. The action and the metric are in the same loop, on the same machine, written by the same people.
What happens to the email after that is on someone else’s screen. It might land in a folder. It might be skimmed. It might sit unread in a tab that stays open all afternoon. Each step adds latency, ambiguity, and another team’s instrumentation choices. By the time any of it makes it back to me, it lives in a different table, behind a different filter, on a different page. The path is longer and the data is less clean.
Most teams do not bother to bring it back at all. They are not lying. They are measuring what is easy.
The shortcut for spotting this is to ask, of any number on a dashboard, who created the event that made the number move. If the answer is “I did”, or “my team did”, or “my system did”, the metric is on the inside of the loop. If the answer is “the person we are trying to reach”, the metric is on the outside. Most dashboards are 90% inside-the-loop because that is where the data is cheap.
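The test is simple enough to sketch in code. This is a toy illustration, not Loud Camel's actual data model: the event records, actor names, and `loop_side` function here are all hypothetical, just to make the question "who created this event?" mechanical.

```python
# Hypothetical event records; the "actor" field says who made the number move.
INSIDE_ACTORS = {"me", "my_team", "my_system"}

def loop_side(event: dict) -> str:
    """'inside' if we created the event ourselves, 'outside' if the person we reached did."""
    return "inside" if event["actor"] in INSIDE_ACTORS else "outside"

events = [
    {"metric": "emails_sent_today", "actor": "my_system"},
    {"metric": "emails_queued", "actor": "my_system"},
    {"metric": "link_clicked", "actor": "recipient"},
]

for e in events:
    print(f"{e['metric']}: {loop_side(e)}")
# emails_sent_today: inside
# emails_queued: inside
# link_clicked: outside
```

Run this over a real dashboard's metrics and the ratio falls out immediately; on mine, only the click events would have come back "outside".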
What I changed this week
This week I pulled some of the response signal up to where I was already looking. I did not invent a metric; I just stopped hiding the ones I had. The contact card now tells me when I last drafted an email to someone and when I last marked them as contacted, both in plain language above the action buttons. The admin view for outbound links shows whether a link has been clicked, is still waiting, or has expired, with relative timestamps, instead of leaving me to grep logs.
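The status logic behind that panel is small enough to sketch. Assuming a hypothetical link record with a `clicked_at` timestamp (None until clicked) and an `expires_at` deadline — field names are my illustration, not the actual schema — the three states fall out of two comparisons:

```python
from datetime import datetime, timedelta, timezone

def relative(delta: timedelta) -> str:
    """Render a timedelta as a rough relative duration, e.g. '3h' or '2d'."""
    minutes = int(delta.total_seconds() // 60)
    if minutes < 60:
        return f"{minutes}m"
    hours = minutes // 60
    if hours < 48:
        return f"{hours}h"
    return f"{hours // 24}d"

def link_status(clicked_at, expires_at, now):
    """Collapse a link record into one human-readable status line."""
    if clicked_at is not None:  # a click wins, even if the link later expired
        return f"clicked {relative(now - clicked_at)} ago"
    if now >= expires_at:
        return f"expired {relative(now - expires_at)} ago"
    return f"waiting, expires in {relative(expires_at - now)}"

now = datetime(2025, 1, 10, 12, 0, tzinfo=timezone.utc)
print(link_status(now - timedelta(hours=3), now + timedelta(days=2), now))  # clicked 3h ago
print(link_status(None, now - timedelta(days=1), now))                      # expired 24h ago
print(link_status(None, now + timedelta(hours=5), now))                     # waiting, expires in 5h
```

The state machine is trivial, which is the point: the work was never computing the status, it was putting "clicked 3h ago" on the screen where the next decision happens.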
The interesting result is what stopped happening, not what started. I stopped sending followups to people I had already reached. I stopped sending followups to people I had just contacted. None of this came from new resolve. The right number sitting in the right place did most of the work.
Loud Camel news
The product I have been describing is Loud Camel, a tool that helps researchers get cited and recognized. The contact-card recency lines and the magic-link status panel both shipped this week, alongside a first cut of an admin-driven prospect outreach flow. Each is small. Together they move the screen closer to the one I should have built first.
Takeaway
Open the dashboard you check every morning. If most of what is on it records things you did, the screen is telling you about your week, not about the world. The fix is rarely a new metric. It is usually a number that already exists somewhere, moved one screen over to where the next decision happens.