Building Our Own Tool to Solve the Performance-Review Problem
March 14, 2018

A little over a year into my tenure at Zeus Jones, I received a request to participate in a peer’s 360 review.

This was…surprising. I’d never spoken to anyone about a review process. Not in my onboarding, not in casual conversation, not at my one-year anniversary.

A bit of poking around revealed that each department had its own philosophy on reviews and that, like anything around here, individuals had different thoughts about those differing philosophies. Most people, though, were longing for something a little more structured — as long as it still felt true to our culture.

Thus began a year-long process of searching out employee review software, keeping salespeople at bay, and conducting half-hearted product trials — all to realize what we kind of knew in the beginning: we should just build our own.

Before we wrote a line of code, though, we had to learn a few things about ourselves as an organization:

  1. The idea of a feedback middleman, someone who collects info from others and then filters it back to an individual, was off-putting in a culture where we have so much autonomy that we half-joke about not having bosses.
  2. While we all strive to be better in our individual disciplines, the real growth comes from the work we do in cross-functional teams. So having a Strategy Boss, for instance, give me a yearly review when he had probably never worked with me didn't make sense.
  3. And finally, the nature of our work is so in flux that a yearly review cycle also didn’t make sense. More frequent, on-demand feedback felt more useful.

With these things in mind, we decided that peer reviews would be the bedrock of growth at Zeus Jones. We would build a tool that allowed people to request feedback from each other whenever they wanted (but hopefully at least yearly). We'd provide some questions to choose from, but allow for personalization. And most important: reviews wouldn't have to include feedback from even a nominal boss, which would truly set the expectation that we are responsible for and to each other, more than to any Boss Figure.
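For the curious, here's a rough sketch of what a request model like that could look like. To be clear, these are illustrative names only (FeedbackRequest, createRequest, and so on), not Tandem's actual code:

```typescript
// Illustrative sketch only: none of these names come from Tandem itself.

interface Question {
  id: string;
  text: string;
  custom: boolean; // true if the requester wrote this question themselves
}

interface FeedbackRequest {
  id: string;
  requesterId: string;
  reviewerIds: string[]; // any peers; no boss required
  questions: Question[];
  createdAt: Date;
  completedBy: Set<string>; // reviewers who have already responded
}

// Build a request from a bank of template questions plus personalized ones.
function createRequest(
  requesterId: string,
  reviewerIds: string[],
  templateQuestions: Question[],
  customQuestionTexts: string[]
): FeedbackRequest {
  const customQuestions: Question[] = customQuestionTexts.map((text, i) => ({
    id: `custom-${i}`,
    text,
    custom: true,
  }));
  return {
    id: `req-${Date.now()}`, // a real tool would use a proper UUID
    requesterId,
    reviewerIds,
    questions: [...templateQuestions, ...customQuestions],
    createdAt: new Date(),
    completedBy: new Set(),
  };
}
```

Notice there's nothing in this shape that privileges a manager: reviewerIds can be any peers at all, which is the whole point.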

As with any good dev project, our first version was lightweight enough that we could launch quickly. We spun it up and tossed it out, and soon learned that we had not created an MVP: it was so stripped down, for instance, that we'd missed out on necessary triggers to make sure peer reviews were getting done. We also learned that we were missing something we couldn't code.
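For the record, here's the kind of trigger we were missing: a simple check for reviewers who haven't responded after a while. Again, this is an illustrative sketch (reusing the hypothetical FeedbackRequest shape above), not our production code, and the seven-day threshold is just an assumption:

```typescript
// A hypothetical reminder trigger: the kind our first version lacked.
// Reuses the FeedbackRequest shape from the sketch above.

const REMINDER_AFTER_DAYS = 7; // assumed threshold, purely illustrative

function findOverdueReviewers(
  request: FeedbackRequest,
  now: Date = new Date()
): string[] {
  const ageInDays =
    (now.getTime() - request.createdAt.getTime()) / (1000 * 60 * 60 * 24);
  if (ageInDays < REMINDER_AFTER_DAYS) return [];
  // Nudge every reviewer who hasn't responded yet.
  return request.reviewerIds.filter((id) => !request.completedBy.has(id));
}
```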

Turns out, giving your peers honest feedback is super hard. Telling them how awesome they are? Easy. Telling them they let you down, or failed in a task, or made things harder for the team? The worst.

To address this we set aside an entire day for coaching and soul-searching, which I’m going to completely short-change in this post. It didn’t solve everything, but it opened the door to a conversation we’re still having.

With a better understanding of peer review psychology, we launched the 2.0 version of our tool and called it Tandem. By giving it its own brand and adding just a handful of new features, we gave it more gravity — made it more real. Participation surged. Feedback — lots of it positive — started pouring in. 💯

We’re all still learning how to be constructive in our feedback and not to take the easy, always-positive way out. We’re still learning how to write questions that get us the answers that are actually helpful, and not just nice to hear. We’re still learning how to carve out time in our schedules to craft thoughtful responses. It’s nowhere near perfect yet, but it’s definitely an MVP.

Just this taste of success, though, has inspired us to make big plans for this little tool. I'm going to put this here in the hopes it'll come true: Tandem 3.0 will be available outside of Zeus Jones in the coming year. Watch this space. 👀