Had a very nice and rambling conversation with Ron Jeffries at the Brighton Agile Round Table yesterday, where we talked all around the two straw-man models of “Strict Waterfall Project Management” and “Absolute Agile Project Management”, in light of the complexological model I’ve been sketching.
Mostly there were interesting reactions and clarifications to be made on the “analysis” end of the model, so I’m going to focus there for the moment. Recall that the point of this exercise is to craft a complex systems-y model of “project management” that’s flexible enough to include the “Waterfall” storyline and the “Agile” storyline, keeping track of costs, time and features and… what’s the other one? Oh yeah, quality. [joke!]
So as you’ll recall, I started (as any heir to Stu Kauffman would) with a Random Boolean Network to represent the “ground truth” of what people want. Originally, I’d sketched this as the dynamics of an RBN, with the “product” being built trying to predict and match the dynamics of the market’s whims. Seems that’s not a very comfortable analogy to draw: where I was seeing the time-series of interconnected dynamics of many features “as” a correlated feature, the language we use to talk about software projects very strongly emphasizes features as fixed structure, and user experience as the dynamic thing. So the sense of Product trying to match the twinkling lights of Market is tricky.
On the other hand, I have to set my heels at the idea of features-as-atomic-traits, since after all I’m ultimately aiming to model development and team learning as features are built and revised, and patterns are learned and re-used. So I’m not willing to drop the RBN’s complexity all the way back to a more “traditional” Kauffmanian static Nk model of a fitness landscape. Not that there isn’t a lot of nice tunable ground to be covered in fitness landscapes and stuff, but because software is used, it doesn’t just have attributes.
So here’s what we’ll try next:
Suppose that the ground truth of what the Market wants is determined by a Random Boolean Network. But not in the sense I described before, where the desired features are only evident via dynamics, and the goal is to build a product that mimics those dynamics. So no more “twinkling lights”.
Instead, consider the RBN to be the (secret) map of Market expectations. The Market will pay (or cost, in some cases) a certain amount for every Product feature which works in the same way as the ground truth expectation. But in this new version, the Market is not actively generating some kind of light-twinkling trace of data for the Team now, it’s just willing (for a price) to answer questions about its preferred outcomes.
Think of every possible transition of N input bits to N output bits as a use case over the N input features. “When the browser is open to the home page, AND the user field is empty, AND I click ‘log in’, AND I do not have a cookie set, THEN this other stuff should happen….”
A perfect Product will produce all the same output strings as the explicit Market function does, for all inputs. Any given Product earns money in proportion to how well the released features match the Market’s target desires.
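As a minimal sketch of this setup in Python, here’s what the secret Market map and the matching score might look like. Everything here is invented for illustration: the sizes, the seed, and the function names are my assumptions, not part of the model proper.

```python
import random

N, K = 8, 2   # number of features, and inputs per feature (illustrative sizes)
rng = random.Random(42)

# The secret Market map, RBN-style: each output bit depends on K randomly
# chosen input bits, via its own random boolean function (a 2**K lookup table).
inputs_for = [rng.sample(range(N), K) for _ in range(N)]
tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def market(bits):
    """Ground truth: the output string the Market wants for a given input."""
    out = []
    for i in range(N):
        index = 0
        for j in inputs_for[i]:
            index = (index << 1) | bits[j]
        out.append(tables[i][index])
    return tuple(out)

def matching_features(product, bits):
    """How many of the Product's output bits agree with the Market's desire."""
    desired, produced = market(bits), product(bits)
    return sum(d == p for d, p in zip(desired, produced))
```

A perfect Product is, by definition, any function that agrees with `market` on every input; revenue would then be some increasing function of `matching_features`, summed over uses.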
But the Team only has access to aggregate income information, unless they pay for marketing.
There’s a lot of leeway here. Suppose a Team playing the low-hanging-fruit game comes along, and they release a Product that is just a constant output, say 1111100000, the ultimate in “you can have any color car, as long as it’s black.” They might just make some money, since there’s a chance the underlying Market desire is biased that way.
Although I don’t have any interest to do it in this model, I have to note there’s an opening here for exploring “market dynamics” in such a world. A slightly smarter Team (in the same Market setting) might come along and release something that is just as dynamically dumb as the 1111100000 folks, but better matches the average desired output for each feature. They’ll make more. Some innovator may well come along and release a product that pays attention to some inputs, maybe first-order interactions. They’ll make a killing.
But today I want to focus on the process within a Team, not the dynamics of a full market populated by Teams with different strategies and capacities.
Now I’m reminded that the models of project management (at least the ones we’re used to) include some big block of process labeled “analysis” or “marketing” or “project management”. And since Agile project management implies holistic adaptation, the Team should take on market research itself whenever that research generates business value, so we’d better include that block in our model.
Call it Requirements Gathering (or maybe Refinement, if you want to imply the inclusion of the traditional “Maintenance” functionality, as we should in an Agile Team). Let’s give them some tools.
How about interviews? Say each interview costs a certain amount of money and time, and the Team’s analysts can (with 100% confidence) elicit one input-output combination from the Market. “If you’re 1011100011, what will you do next?” To me, this has the metaphoric capacity to handle people not knowing until they’re asked, and also revelation of only indirect information.
An interview like that should be expensive, I think. How about A/B testing, or more generally polling? “If you’re 1001101111, which of the following do you prefer as the outcome, 1011100011 or 1100000011?”
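A sketch of these two tools, assuming some hypothetical prices and a toy stand-in for the secret Market map (the rotation function, the class name, and the costs are all my inventions, just to make the shape concrete):

```python
# Hypothetical costs and a toy Market, purely for illustration.
N = 8
INTERVIEW_COST = 50
POLL_COST = 10

def market(bits):
    """Toy stand-in for the secret RBN map: desired output rotates the input."""
    return bits[1:] + bits[:1]

class Team:
    def __init__(self, cash=1000):
        self.cash = cash
        self.knowledge = {}  # input -> elicited output transition

    def interview(self, bits):
        """Pay to elicit, with 100% confidence, one full input-output combination."""
        self.cash -= INTERVIEW_COST
        self.knowledge[bits] = market(bits)
        return self.knowledge[bits]

    def ab_poll(self, bits, option_a, option_b):
        """Cheaper A/B test: learn only which candidate output the Market prefers."""
        self.cash -= POLL_COST
        desired = market(bits)
        def agreement(option):
            return sum(d == o for d, o in zip(desired, option))
        return option_a if agreement(option_a) >= agreement(option_b) else option_b
```

The price gap between the two is the interesting knob: an interview buys a whole row of the truth table, while a poll buys only one bit of comparative information.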
Then of course there’s sales data, for situations in which there is in fact a Product already released: As I mentioned, even a very dumb product that fills some roles slightly better than the ones it messes up has the capacity to provide historical information (and revenue to pay for improvements!).
Now this brings up one more change from the previous iteration of this model: Revenue.
In the first version, I proposed a model in which each feature had an intrinsic positive value, and a released Product collected that value during every time period where it matched the intrinsic Market dynamics. We don’t have Market dynamics now, we have a Market that’s a black box.
The baseline in that old version was “you make nothing”. I’ve rethought that, too; let’s throw a monkey wrench into our Team’s world, and allow features to carry negative as well as positive values. In other words, whenever a feature with a negative value is matched, the Team loses money.
So where are we now?
Let’s say the Product is some kind of function that takes generalized Boolean inputs and produces (coincidentally) generalized Boolean outputs. The space of inputs is all N-bit binary strings, generalized to include # or “don’t care” inputs where an input is ignored, and the output space is all N-bit binary strings generalized to include ? or “unset” outputs, when an output bit is not explicitly set to either 0 or 1.
Take a second here. We can therefore say a Team “starts” (having done no work thinking about or developing any Product at all) with a “default” Product that ignores all inputs and produces no outputs: ###…### -> ???…???. They aren’t earning or losing any money because their Product has no visible features to capture (positive or negative) Market share.
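One (assumed) way to represent such generalized Products in code is as an ordered list of rules mapping input patterns (with #) to output patterns (with ?); the rule-list representation is my choice for illustration, not something the model dictates:

```python
# A generalized Product as an ordered list of (input-pattern, output-pattern)
# rules; '#' in an input pattern means "don't care", '?' in an output pattern
# means "unset".  The first rule whose input pattern matches the input fires.

def pattern_matches(pattern, bits):
    return all(p == '#' or p == b for p, b in zip(pattern, bits))

def run_product(rules, bits):
    for pattern, output in rules:
        if pattern_matches(pattern, bits):
            return output
    return ('?',) * len(bits)  # no rule fires: no outputs are set

# The "default" Product a Team starts with: ignore everything, set nothing,
# i.e. ###...### -> ???...???
N = 4  # illustrative size
default_product = [(('#',) * N, ('?',) * N)]
```

Under this representation the default Product can never match (or mismatch) any Market desire, so it earns and loses nothing, just as described above.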
Suppose after some development time a Team releases a (rather stupid) Product that implements ###…### -> 111…111; in other words, it always returns 1 for every feature. How much money do they earn?
We determine this either by doing the math or by sampling. We know the ground truth Market function, a large and rather complicated truth table. If we want to do the math, we can convert it into a cumbersome probability function, and just assume that the diversity of people in the market is absolutely uniform: that is, that people will “use it” by applying every possible input. Alternately, we can sample this space (using the same assumption of uniformity) by taking 1000 or so random bit-strings as inputs, and counting up how much revenue the Product generates (or loses) by matching those features.
Suppose in our example there are 21 features, and that the value of the first is -$10, the second -$9, and so on in steps of $1, with the 21st providing a value of +$10. The Product that returns 1 for every input will lose $10 for every 1 in the first feature’s output table, and earn $10 for every 1 in the 21st feature’s output table. So it’s just a matter of counting to determine how profitable it turns out to be.
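The sampling version of that count can be sketched like so. The hash-seeded stand-in for the Market truth table is my invention (it just gives every input a fixed pseudo-random desired output), and the 1000-sample size comes from the estimate above:

```python
import random

rng = random.Random(1)
N = 21
# Intrinsic per-feature values: -$10 for the first feature through +$10 for the 21st.
values = list(range(-10, 11))

def market(bits):
    """Toy deterministic stand-in for the secret Market truth table:
    every input string gets its own fixed desired output string."""
    r = random.Random(hash(bits))
    return tuple(r.randint(0, 1) for _ in range(N))

def revenue(product, bits):
    """Value earned on one use: each set output bit that matches the
    Market's desire earns (or loses) that feature's intrinsic value."""
    desired = market(bits)
    out = product(bits)
    return sum(values[i] for i in range(N)
               if out[i] != '?' and out[i] == desired[i])

# Score the "always 1" Product by sampling 1000 uniform random inputs.
all_ones = lambda bits: (1,) * N
samples = [tuple(rng.randint(0, 1) for _ in range(N)) for _ in range(1000)]
avg = sum(revenue(all_ones, s) for s in samples) / len(samples)
```

In this particular toy the intrinsic values sum to zero and the stand-in Market’s desired bits are fair coin flips, so the always-1 Product nets roughly $0 per use; any bias in the real ground-truth table would shift that.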
Just to make it clear, suppose some other specialized Team releases a Product that does a perfect job at only one feature whose intrinsic value is $4. It will earn the Team $4 for every possible input combination, regardless of the other features; no money is lost or gained for features that are not present, or do not match desired outputs.
More later; time for lunch!