
RIT – An Opportunity Ranking Framework


I was recently speaking with a founder I’m advising and we ended up in a discussion about sorting and ranking company opportunities.

I mentioned to him that, in some cases, I have used what I came to know as the “RIT” framework. It’s a framework that assigns an individual numeric score to the resources, impact and time (RIT) attributes of any potential initiative, and those scores roll up into a total score. That total score can then be used to order the full list by relative business value, provided everything on the list was scored using the same scale.

Bryan Eisenberg was the first person to share this framework with me during an A/B testing discussion over breakfast circa 2012. Anecdotally, I heard that the RIT framework evolved out of Dell in the 2000s, but I don’t know if that’s true.

I do find RIT to be overkill for short lists or lists of items which could be accomplished by a person or team in a day or two. However, as lists grow longer, more complex and more persistent (e.g. company initiatives, feature requests, backlogs, bug queues), the RIT scale can help quickly focus the conversation and bubble the right priorities to the top. It’s especially useful when the lists are full of initiatives which are larger, mutually exclusive, budget-intensive and cross-functional.

Here is how RIT works:

Resources, Impact and Time basics

Each letter (R, I and T) gets a 1, 2, 3, 4 or 5. 1 is the lowest value score and 5 is the highest value score. It’s an inexact, rough number score. As you work your way through your list, hopefully with your team or other stakeholders, you will probably create some loose rules about how you think about assigning the numbers, especially 2, 3 and 4. You might have to go back and rescore some of the early items after you have done a handful and established a bit more context for your decision making within the data set. The RIT scoring decisions should be relative to the list being scored and can change across lists, departments or functions. I will give some examples later.

Also, more than one item can have the same score(s) assigned. It’s not a forced ranking system in which once a 5 is used, it can’t be used again. After R, I and T get assigned their numbers for each initiative, you then multiply the three numbers to get that item’s total score, which represents its value relative to the other scored initiatives. The best possible score is a 125 (R5 x I5 x T5) and the worst is a 1 (R1 x I1 x T1). Most actual scores land somewhere in the middle.
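
To make the arithmetic concrete, here’s a minimal sketch in Python of how an item’s score rolls up and how a list gets ordered. The `Initiative` structure, field names and sample items are just my illustrative assumptions, not part of the framework itself:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    r: int  # Resources: 1 = lowest value score, 5 = highest
    i: int  # Impact: 1 = lowest value score, 5 = highest
    t: int  # Time: 1 = lowest value score, 5 = highest

    def rit_score(self) -> int:
        """Multiply the three 1-5 scores; the best possible is 125, the worst is 1."""
        if not all(1 <= v <= 5 for v in (self.r, self.i, self.t)):
            raise ValueError("R, I and T must each be scored from 1 to 5")
        return self.r * self.i * self.t

# Order a hypothetical backlog by relative business value, highest score first
backlog = [
    Initiative("New loyalty program", r=2, i=4, t=4),   # scores 32
    Initiative("Free-shipping banner", r=5, i=4, t=5),  # scores 100
]
for item in sorted(backlog, key=Initiative.rit_score, reverse=True):
    print(f"{item.rit_score():>3}  {item.name}")
```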

That’s RIT at a high level but there is obviously additional nuance. Read on for further definition.

R – Resources

This is the amount of resources, typically money and time, it will take to accomplish the initiative at hand. As an example, I used to say an example R5 is one person working on their own, taking a day or two with limited budgetary needs. Easy lift. Conversely, some example R1s could be a cross-functional team working for 6 months, a project only the CEO can do herself in a month, or a $250,000 system implementation. These are long, heavy or expensive lifts. Another example, an R2 that baselines off the previous examples, could be a cross-functional team working on something for 3 months or the company spending $100,000 on a new tool. Less costly than an R1, but still relatively heavy lifts. See the difference in my scale between an R1 and an R2? Remember, resources in this context are both time and money. My point (which I’ve made a few times) is that the score is relative to the other types of things that are on the list you are scoring. The lightest lifts taking the shortest time to complete should get a 4 or 5, while the biggest investments taking the longest time with the most people involved should get the lowest numbers. Again, 2s, 3s and 4s are more nuanced judgment calls compared to the other R decisions. You will get better as you go.

I – Impact

This is the impact on the business, usually measured via financial gain or risk mitigation.  How far will this initiative move the needle in relation to the other items we’re stacking it against? How much risk does this mitigate compared to the others? 1 is the smallest impact and 5 is the largest impact, in relative terms. Also, depending on what function the initiatives reside within, the impact can certainly be measured for other non-financial KPIs, like employee engagement or brand reputation. I’d still argue those outputs can ultimately be rolled up to support the top or bottom line, however.

As mentioned above, remember, you are calibrating your list as you go. Different teams may calibrate their lists differently, but the items on any single list need to be scored using the same rough scale. Some hypothetical examples from an executive-level list might be an I1 for $1M in revenue impact and an I5 for $50M in revenue. If this were the A/B testing backlog, you might divide those values by 10 when the product team scores their list. Alternatively, if the list happened to be internal counsel’s, an I1 could be insulating against $100,000 of single-state tax compliance exposure while an I5 could be ensuring that customer data privacy statutes are adhered to, avoiding $1M+ in fines.
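
If it helps, calibration rules like these can be written down explicitly. Here’s a rough sketch of what the executive-level example might look like in Python. Only the $1M (I1) and $50M (I5) anchors come from the examples above; the intermediate cutoffs are made-up assumptions purely for illustration:

```python
# Hypothetical dollar-impact cutoffs for an executive-level list.
# The $50M (I5) anchor follows the example above, and anything around
# $1M or less falls through to an I1; the 2/3/4 thresholds are invented.
EXEC_IMPACT_CUTOFFS = [
    (50_000_000, 5),
    (25_000_000, 4),
    (10_000_000, 3),
    (5_000_000, 2),
]

def impact_score(revenue_impact: float, cutoffs=EXEC_IMPACT_CUTOFFS) -> int:
    """Map an estimated dollar impact onto this list's 1-5 Impact scale."""
    for threshold, score in cutoffs:
        if revenue_impact >= threshold:
            return score
    return 1

# A product team's A/B testing backlog might reuse the same shape with
# every cutoff divided by 10, per the example above.
PRODUCT_IMPACT_CUTOFFS = [(threshold / 10, score) for threshold, score in EXEC_IMPACT_CUTOFFS]
```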

T – Time

This is the tricky one, but only because it’s named poorly. “Time” in this context is NOT time to complete. Time to complete is covered under Resources (R) above. Time (T) is the time for the business to realize the impact of the work or, said differently, the “payback period”. A T1, the longest payback, could be more than 3 years for something like a new product line, innovation or revenue stream development. Conversely, a T5 might be instantly valuable, potentially impacting customers, staff, costs or revenue streams immediately. A good example of a T1 might be an R&D initiative, while a T5 could be a shopping cart conversion optimization for an eCom business.

(Side note – For simplicity, I should call this framework RIP and use Payback Period (P) instead of Time (T). I don’t know why it wasn’t named that way from the beginning. It would be less confusing. I could have just changed the damn name of the post and framework before you read all this, so I guess I’m guilty as well for propagating the confusion. I didn’t want to seem like I ripped off someone else’s work because, you know, I’m sure a wide swath of engineers from Dell in the ’00s subscribe via RSS. #sarcasm)

Completion and Examples

Now that each initiative has a number score for each RIT letter, the numbers are multiplied to create the final score. The final score should nicely order your list while also demonstrating greater disparity between the weaker and more valuable opportunities. If the RIT numbers were simply added, there would be less score dispersion and the final list would be harder to scan at a glance (multiplying 5, 4 and 5 gives 100 while multiplying 2, 4 and 3 gives 24; adding those same numbers gives only 14 and 9). It also makes sense that items on the same list could very well be exponentially more valuable than one another.

Here are a handful of scored examples with supporting KPIs from a fictional product feature list that most of us in B2C can relate to:

  • 100 – R5I4T5 – Implement a new banner on the site to highlight “Free 2 day shipping on all orders” to boost conversion
  • 60 – R3I4T5 – Implement One-Click Buy functionality utilizing Apple Pay, PayPal or Venmo, where applicable, to boost conversion
  • 50 – R2I5T5 – Implement a new shopping cart process, working with creative, merchandising and digital product (conversion, avg ticket)
  • 32 – R2I4T4 – Implement a new loyalty program, working with finance and marketing departments (churn, frequency, LTV)
  • 24 – R2I4T3 – Implement a new project management system across design, project management and digital product departments (speed to market, efficiency)
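
As a quick check on the ordering, here’s a small Python snippet that rebuilds the list above (with shortened item names of my own) and also shows how much the scores would compress if the three numbers were added instead of multiplied:

```python
# The five example items above as (name, R, I, T) tuples
examples = [
    ("Free 2 day shipping banner", 5, 4, 5),
    ("One-Click Buy", 3, 4, 5),
    ("New shopping cart process", 2, 5, 5),
    ("New loyalty program", 2, 4, 4),
    ("New project management system", 2, 4, 3),
]

# Sort by the multiplied RIT score, highest value first, and print both totals
for name, r, i, t in sorted(examples, key=lambda x: x[1] * x[2] * x[3], reverse=True):
    print(f"{r * i * t:>3} (added: {r + i + t:>2})  R{r}I{i}T{t}  {name}")

# Multiplying spreads the list from 24 up to 100, while adding would squeeze
# the same five items into a 9-to-14 band that is much harder to scan.
```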

Conclusion

Just imagine long, sad lists of 50+ features, bugs, initiatives or projects living in spreadsheets and/or PM tools with limited context around how they are arranged and the value they could provide. How useful are those backlogs to the teams or to their managers? How easy are they to revisit? How easily can they be gut-checked for value or degree of difficulty at a glance? Can decisions about what got prioritized be supported objectively and simply in a postmortem?

Now think of those same sad, sometimes messy lists ordered by business value using a contextually useful scoring framework which has been collaboratively debated and applied by the team that is responsible for the list and its potential completion. 

That’s a much happier list and a much happier team, because the highest value items get prioritized more easily. That should yield happier customers and, in turn, “happier” company performance overall.

 
