Thursday, July 5, 2012

User-story presence flags ease split-test metrics for lean startups: a how-to

This morning I read 'Measure', a chapter of the book The Lean Startup. It discusses cohort analysis, split-testing, and the three A's of metrics: Actionable, Accessible and Auditable.

Then I got an idea about split-testing the stream of user stories in website development.

For split-testing newly deployed stories, it's easy to include in the logs a growing bitstring of indicators, one per user story, marking each story's presence (with/without, or after/before deployment). The position of each bit implicitly gives the story's ordinal number (perhaps its order in PivotalTracker). All the flags are kept in one central place in the source code, the same place usually used for configuration.
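As a minimal sketch (in Python, with hypothetical names), the central configuration might be nothing more than a list of booleans whose index is the story's implicit ordinal number:

    # Hypothetical central configuration: one flag per deployed story.
    # The list index is the story's implicit ordinal number (e.g. its
    # order in PivotalTracker); the value says whether the story's
    # code path is active in this deployment.
    STORY_FLAGS = [
        True,   # story 0: active
        True,   # story 1: active
        False,  # story 2: deployed, but switched off for this variant
        True,   # story 3: active
    ]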

Packed together in story-number order and encoded with standard Base64, the flags appear in each log line as a short string. (Each Base64 character carries six bits, so they take up only a single character for every six stories.)
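One way to do the packing (a sketch, assuming the STORY_FLAGS list above) is to set one bit per story, convert to bytes, and Base64-encode the result:

    import base64

    def pack_flags(flags):
        # Set bit i when story i is present.
        bits = 0
        for i, present in enumerate(flags):
            if present:
                bits |= 1 << i
        # Round up to whole bytes, then Base64-encode.
        raw = bits.to_bytes((len(flags) + 7) // 8, 'little')
        return base64.b64encode(raw).decode('ascii')

    # e.g. pack_flags([True, True, False, True]) == 'Cw=='

The trailing '=' padding could even be stripped to save a little more space on each log line.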

With aggregated logging as it's commonly done, it might be difficult to remember which log records came from which active set of stories. At the first level, this method eases split-testing the impact of each newly deployed story.

Going deeper, the flags in the logs categorize the comparison data cleanly and safely, especially if we ever want something more complex, such as reassessing an old story in the current context. Disabling an earlier story requires some special programming, but the log data will indicate clearly which stories were active for each record.
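Decoding works in reverse; this sketch inverts the pack_flags function above to recover the per-story booleans from a log line's flag string:

    import base64

    def unpack_flags(encoded, n_stories):
        # Invert pack_flags: Base64 string back to per-story booleans.
        bits = int.from_bytes(base64.b64decode(encoded), 'little')
        # A story deployed after the line was logged simply reads False.
        return [bool(bits >> i & 1) for i in range(n_stories)]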

For split-testing, we can filter the log data by these story-presence strings. We can compare various configurations of user stories against whatever metric we desire: new-user activation, usage rates, and so on.
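For instance (a sketch building on unpack_flags above, and assuming each parsed log record is a dict carrying its flag string under a hypothetical 'flags' key plus an 'activated' boolean), we can partition the records by one story's presence and compare activation rates:

    def split_test(records, story):
        # Partition records into with/without groups for one story.
        groups = {True: [], False: []}
        for rec in records:
            present = unpack_flags(rec['flags'], story + 1)[story]
            groups[present].append(rec)
        # Compare a metric, here new-user activation rate.
        for present, recs in groups.items():
            rate = (sum(r['activated'] for r in recs) / len(recs)
                    if recs else 0.0)
            print('story %d %s: %.1f%% activation'
                  % (story, 'with' if present else 'without', 100 * rate))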

Perhaps we might want to remove an old feature and split-test that removal before we expend the effort to develop an incompatible new feature. Good idea? And arbitrary configurations of features can be split-tested the same way.
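Comparing arbitrary configurations is then just grouping records by their whole flag string (again a sketch using the hypothetical 'flags' key):

    from collections import defaultdict

    def group_by_configuration(records):
        # Records with identical flag strings ran identical feature sets,
        # so any two groups form a clean split-test comparison.
        groups = defaultdict(list)
        for rec in records:
            groups[rec['flags']].append(rec)
        return groups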

Copyright (c) 2012 Mark D. Blackwell.
