Experimenting with Kanban

We have consistently experimented with product development since we started Healthify and I wanted to share where we've ended up. Much of what we do is focused on regularly recurring feedback loops. We want to objectively evaluate what we're doing on a daily, weekly, and monthly basis so we can be sure what we're doing is helping to improve our product development process. We've structured all these goals under a system called Kanban for the last few weeks and we've enjoyed it so far.



Per Atlassian's advice, our overarching goal is to lower our cycle time and focus on the singular task at hand. Velocity is important, but we've never found optimizing for velocity to be a naturally achievable goal. It's like trying to get to the X-Games by just "skating better" - it's not achievable on a daily basis and it's affected by too many external factors.


That's not to say we don't think about the quality of our code. We don't like to break things, so we check a few things:

  • maintain 90%+ code coverage in both RSpec and Cucumber while building our test suite on Travis
  • squash bugs ASAP
  • automate linting with JSHint and Rubocop
  • review our CodeClimate GPA regularly
  • pair code review every pull request

Nothing is merged unless it passes all of these measures and it gives us a certain level of comfort in the quality of what we're producing.
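The merge gate above is really just a conjunction of boolean checks. Here's a minimal sketch of it as a Ruby predicate - the field names (`coverage`, `lint_clean`, `reviewed`) are hypothetical stand-ins for what actually comes out of Travis, Rubocop/JSHint, and the pull request review:

```ruby
# Hypothetical sketch of our merge checklist as a single predicate.
# In practice each field is populated by a real tool: coverage from the
# test suite on Travis, lint_clean from JSHint/Rubocop, reviewed from
# the pull request itself.
MergeCheck = Struct.new(:coverage, :lint_clean, :reviewed, keyword_init: true) do
  def mergeable?
    coverage >= 90 && lint_clean && reviewed
  end
end

pr = MergeCheck.new(coverage: 92.4, lint_clean: true, reviewed: true)
pr.mergeable?  # true - every measure passes, so this can merge
```

If any one check fails, the whole thing fails - which is exactly the comfort level we want before code ships.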


On a daily basis these goals affect our decisions. We optimize for getting code merged. We can track our own progress via Pivotal Tracker and to-do lists created in GitHub on pull requests. These create very discrete, minimal goals, and we can always ask ourselves - "Is this deployed or not?" If the answer is "no" then we know what our priority is.


  • Code Coverage and Linting
  • Code Review
  • Is it deployed?


On a weekly basis we can start to look at higher-level metrics like cycle time during a weekly iteration meeting. "Iteration meeting" is sort of a misnomer, though, because we're not really defining an iteration - we're really just reordering our priorities for that week. We might set a goal of finishing a certain amount of stuff within that week, but it's not really the be-all, end-all.

For our cycle time, we are averaging around 5.1 days per story right now across the team. This isn't just about the developer who wrote the code, though. Making small, clearly-defined stories that our development team can build and deploy quickly will drive this number down as much as a strong developer will. It's a very good high-level measure of our entire product development team. Additionally, we look at our bug count each week. We believe this is an easy way to quickly assess the quality of code being written.


  • Cycle Time
  • Bug Count
  • Did we clear the backlog we set at the last iteration meeting?
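Cycle time itself is a simple calculation: for each story, count the days from when work started to when it deployed, then average across stories. A sketch with made-up dates (not our actual tracker data):

```ruby
require "date"

# Hypothetical story records: when work started and when it deployed.
# These dates are illustrative, not a real tracker export.
stories = [
  { started: Date.new(2014, 6, 2), deployed: Date.new(2014, 6, 7) },  # 5 days
  { started: Date.new(2014, 6, 3), deployed: Date.new(2014, 6, 9) },  # 6 days
  { started: Date.new(2014, 6, 5), deployed: Date.new(2014, 6, 9) },  # 4 days
]

# Cycle time per story = days from start to deploy.
cycle_times = stories.map { |s| (s[:deployed] - s[:started]).to_i }

average = cycle_times.sum.to_f / cycle_times.size
puts format("Average cycle time: %.1f days", average)
```

Because the clock runs from story start to deploy, anything that shrinks stories or unblocks deploys moves the number - which is why it measures the whole team, not just whoever wrote the code.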


Lastly, we have product planning meetings every 1-2 months. These are just like our iteration meetings but we bring in the non-technical part of the team and look very high level at what's happening with the company. Ultimately, we're always trying to deliver value to customers so everything builds up to whether they're happy or not. Additionally, we look at our velocity and volatility now that we have 4-8 weeks of new data to evaluate ourselves on. If we're consistently and quickly building things that make our users happy, then we've done our job.


  • Velocity
  • Volatility
  • Did we build the things our clients asked for/needed?*
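Velocity and volatility fall out of the same per-iteration data. Velocity is the average points delivered per iteration; volatility (as Pivotal Tracker reports it) is the standard deviation of those per-iteration totals expressed as a percentage of velocity. A sketch with hypothetical numbers:

```ruby
# Points completed in each of the last several iterations
# (made-up numbers for illustration).
points_per_iteration = [12, 8, 10, 14, 6, 10]

# Velocity: mean points delivered per iteration.
velocity = points_per_iteration.sum.to_f / points_per_iteration.size

# Volatility: standard deviation of the per-iteration totals,
# expressed as a percentage of velocity. Low volatility means
# we're delivering consistently; high volatility means our
# output swings a lot from week to week.
variance = points_per_iteration.sum { |p| (p - velocity)**2 } / points_per_iteration.size
volatility = Math.sqrt(variance) / velocity * 100
```

A velocity of 10 with ~26% volatility says we ship a fairly steady amount every iteration - consistency is the signal here, not the raw number.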


We constantly question ourselves. We try to avoid subjective lines of questioning like, "Did we do better?" We focus on what we can measure, and on boolean questions with clear answers. While we may have high-level goals like building more things for clients, those are hard to measure. We focus on low-level, discrete goals that we can accomplish today, and those drive us towards our overarching goals.

*We haven't gotten into it here, but there are many ways to evaluate customer satisfaction - that's another blog post.