Month: May 2019

  • RIT – An Opportunity Ranking Framework


    I was recently speaking with a founder I’m advising and we ended up in a discussion about sorting and ranking company opportunities.

    I mentioned to him that, in some cases, I have used what I came to know as the “RIT” framework. It’s a framework that assigns an individual numeric score to the resources, impact and time (RIT) attributes of any potential initiative, and those scores roll up into a total score. That total score can then be used to order the full list by relative business value, provided everything on the list was scored using the same scale.

    Bryan Eisenberg was the first person to share this framework with me during an A/B testing discussion over breakfast circa 2012. Anecdotally, I heard that the RIT framework evolved out of Dell in the 2000s, but I don’t know if that’s true.

    I do find RIT to be overkill for short lists or lists of items which could be accomplished by a person or team in a day or two. However, as lists grow longer, more complex and persistent (e.g. company initiatives, feature requests, backlogs, bug queues), the RIT scale can help quickly focus the conversation and bubble the right priorities to the top. It’s especially useful when the lists are full of initiatives which are larger, mutually exclusive, budget-intensive and cross-functional.

    Here is how RIT works:

    Resources, Impact and Time basics

    Each letter (R, I and T) gets a 1, 2, 3, 4 or 5. 1 is the lowest value score and 5 is the highest value score. It’s an inexact, rough number score. As you work your way through your list, hopefully with your team or other stakeholders, you will probably create some loose rules about how you think about assigning the numbers, especially 2, 3 and 4. You might have to go back and rescore some of the early items after you have done a handful and established a bit more context for your decision making within the data set. The RIT scoring decisions should be relative to the list being scored and can change across lists, departments or functions. I will give some examples later.

    Also, more than one item can have the same score(s) assigned. It’s not a forced ranking system in which, once a 5 is used, it can’t be used again. After R, I and T get assigned their number for each initiative, you then multiply the three numbers to get that item’s total score and relative value against the other previously scored initiatives. The best possible score is a 125 (R5 x I5 x T5) and the worst is a 1 (R1 x I1 x T1). Most actual scores land somewhere in the middle.
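
    For the code-inclined, here is a minimal sketch of that math in Python. The function name and the range check are mine, purely for illustration; they aren’t part of any official RIT tooling.

        def rit_score(resources: int, impact: int, time: int) -> int:
            """Multiply the three 1-5 RIT scores into a single 1-125 total."""
            for value in (resources, impact, time):
                if value not in range(1, 6):
                    raise ValueError("Each RIT score must be an integer from 1 to 5")
            return resources * impact * time

        print(rit_score(5, 5, 5))  # best possible score: 125
        print(rit_score(1, 1, 1))  # worst possible score: 1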

    That’s RIT at a high level but there is obviously additional nuance. Read on for further definition.

    R – Resources

    This is the amount of resources, typically money and time, it will take to accomplish the initiative at hand. As an example, I used to say an R5 is one person working on their own, taking a day or two with limited budgetary needs. Easy lift. Conversely, some example R1s could be a cross-functional team working for 6 months, a project only the CEO can do herself in a month, or a $250,000 system implementation. These are long, heavy or expensive lifts. An example R2, baselined off the previous examples, could be a cross-functional team working on something for 3 months or the company spending $100,000 on a new tool. Less costly than an R1, but still relatively heavy lifts. See the difference in my scale between an R1 and an R2? Remember, resources in this context are both time and money. My point (which I’ve made a few times) is that the score is relative to the other types of things that are on the list you are scoring. The lightest lifts taking the shortest time to complete should get a 4 or 5, while the biggest investments taking the longest time with the most people involved should get the lowest numbers. Again, 2s, 3s and 4s are more nuanced judgment calls than the other R decisions. You will get better as you go.

    I – Impact

    This is the impact on the business, usually measured via financial gain or risk mitigation. How far will this initiative move the needle in relation to the other items we’re stacking it against? How much risk does this mitigate compared to the others? 1 is the smallest impact and 5 is the largest impact, in relative terms. Also, depending on what function the initiatives reside within, the impact can certainly be measured against other, non-financial KPIs, like employee engagement or brand reputation. I’d still argue those outputs can ultimately be rolled up to support the top or bottom line, however.

    As mentioned above, remember, you are calibrating your list as you go. Different teams may calibrate their lists differently, but the items on any single list need to be scored using the same rough scale. Some hypothetical examples from an executive-level list might be an I1 for $1M in revenue impact, with an I5 being $50M in revenue. If this were the A/B testing backlog, you might divide those values by 10 when the product team scores their list. Alternatively, if the list happened to be internal counsel’s, an I1 could be insulating against $100,000 of single-state tax compliance exposure and an I5 could be ensuring that customer data privacy statutes are adhered to, avoiding $1M+ in fines.

    T – Time

    This is the tricky one, but only because it’s named poorly. “Time” in this context is NOT time to complete. Time to complete is covered under Resources (R) above. Time (T) is the time it takes for the business to realize the impact of the work or, said differently, the “payback period.” A T1, the longest payback, could be more than 3 years out for something like a new product line, innovation or revenue stream development. Conversely, a T5 might be instantly valuable, potentially impacting customers, staff, costs or revenue streams immediately. A good example of a T1 might be an R&D initiative, while a T5 could be a shopping cart conversion optimization for an eCom business.

    (Side note – For simplicity, I should call this framework RIP and use Payback Period (P) instead of Time (T). I don’t know why it wasn’t named that way from the beginning. It would be less confusing. I could have just changed the damn name of the post and framework before you read all this, so I guess I’m guilty as well for propagating the confusion. I didn’t want to seem like I ripped off someone else’s work because, you know, I’m sure a wide swath of engineers from Dell in the ’00s subscribe via RSS. #sarcasm)

    Completion and Examples:

    Now that each initiative has a number score for each RIT letter, the numbers are multiplied to create the final score. The final score should nicely order your list while also demonstrating greater disparity between the weaker and more valuable opportunities. If the RIT numbers were simply added, there would be less score dispersion and the final list would be harder to scan at a glance; an R5I4T5 multiplies to 100 but only adds to 14, while an R2I4T3 multiplies to 24 but adds to 9. It also makes sense that things on the same list could very well be exponentially more valuable than each other.

    Here are a handful of scored examples with supporting KPIs from a fictional product feature list that most of us in B2C can relate to:

    • 100 – R5I4T5 – Implement a new banner on the site to highlight “Free 2 day shipping on all orders” to boost conversion
    • 60 – R3I4T5 – Implement One-Click Buy functionality utilizing Apple Pay, PayPal or Venmo, where applicable, to boost conversion
    • 50 – R2I5T5 – Implement a new shopping cart process, working with creative, merchandising and digital product (conversion, avg ticket)
    • 32 – R2I4T4 – Implement a new loyalty program, working with finance and marketing departments (churn, frequency, LTV)
    • 24 – R2I4T3 – Implement new project management system across design, project management and digital product departments (speed to market, efficiency)
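
    If that backlog lived in a script or a spreadsheet export instead of a blog post, ranking it is trivial. Here is a rough Python sketch using the fictional features above; the shortened names and the data structure are mine and purely illustrative.

        # Score and rank the fictional backlog above: (name, R, I, T).
        backlog = [
            ("Free 2 day shipping banner",      5, 4, 5),
            ("One-Click Buy (Apple Pay, etc.)", 3, 4, 5),
            ("New shopping cart process",       2, 5, 5),
            ("New loyalty program",             2, 4, 4),
            ("New project management system",   2, 4, 3),
        ]

        # Multiply R x I x T for each item, then sort with the highest score first.
        ranked = sorted(
            ((name, r * i * t) for name, r, i, t in backlog),
            key=lambda item: item[1],
            reverse=True,
        )

        for name, score in ranked:
            print(f"{score:>3}  {name}")
        # 100  Free 2 day shipping banner
        #  60  One-Click Buy (Apple Pay, etc.)
        #  50  New shopping cart process
        #  32  New loyalty program
        #  24  New project management system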

    Conclusion

    Just imagine long, sad lists of 50+ features, bugs, initiatives or projects living in spreadsheets and/or PM tools with limited context around how they are arranged and the value they could provide. How useful are those backlogs to the teams or to their managers? How easy are they to revisit? How easy are they to gut check for value or degree of difficulty at a glance? Can decisions about what got prioritized be supported objectively and simply in a postmortem?

    Now think of those same sad, sometimes messy lists ordered by business value using a contextually useful scoring framework which has been collaboratively debated and applied by the team that is responsible for the list and its potential completion. 

    That’s a much happier list and a much happier team by way of the highest value items getting prioritized more easily. That should yield happier customers and, in turn, “happier” company performance overall.

     

  • My Fireside Chat with Wawa’s CEO at PhillyMag’s ThinkFest

    Last October I participated in Philadelphia Magazine’s 7th Annual ThinkFest.

    The footage was previously lost, but has now luckily been found, so this post gets to come out of the drafts folder (which currently contains 74 others).

    Philly Mag approached me to do a “Fireside Chat” with the CEO of Wawa, Chris Gheysens. I’ve done lots of panels and “talks,” but never a fireside, and Chris and I didn’t know each other previously.

    I also quickly called shenanigans on Philly Mag to see if they purposefully put us together hoping for some convenience-store slug-fest on stage. goPuff and Wawa, our respective current affiliations, are direct competitors, albeit at different stages of company life. They believably denied my allegation, so I warmly accepted their invite.

    Here is the 30-minute discussion in which we talk Wawa, building companies in Philly, and leadership, all with a side order of Breakfast Sizzli. 

    Chris is savvy and knowledgeable with a high EQ. No surprise from the “Lead Goose”.

    In the green room he said, “Go easy on me, OK?” To which I replied, “I’m not here for a ‘gotcha.’ If anything, I’m empathetic to the challenge of sitting in the top seat.” Being a great CEO is as exhausting as it is fulfilling, regardless of your company’s size.

    I’m actually used to being in the other chair, riffing off the panel moderator and answering questions about my background directly. It was interesting, and a lot more prep, to be the one hoping to both contribute and guide the conversation. Sitting on a panel usually requires the prep of a 15-min conf call, tops. For this, I researched Chris and Wawa, talked to their PR person and even briefly chatted with Chris during the week prior. We cut our chat short to save it for the stage.

    I also did some googling on nailing a fireside chat vs a panel. Coupled with my experience, here were my takeaways on the format:

    • The internet says to let the audience know the “arc” of the conversation before you dive in to help set things up. I did that, along with telling Chris the first question I was going to ask after we intro’d ourselves. It was an easy setup and ice breaker for me, as well as a way to put him at ease.
    • A fireside should be more conversational, where both parties are sharing (60/40) vs one party interviewing the other (90/10). Interviews are fine, but a real conversation is much better.
    • I didn’t write out a lot of questions. I listed a few topics and spent the time putting them in a logical order. My prep fit on the top third of a printed page and consisted of about 10 words like: Snapshot, Philly’s Role, Current Customer? Future Customer? Innovation, etc.
    • The internet says quality follow-up questions are everything. Drilling down into interesting subtopics and then actually discussing them while also listening and guiding the conversation is the real skill. If you can do all that while being present – boom.
    • I believe the fireside format is better than an interview or panel. It’s a more natural and conversational way for both parties to share while creating a more spontaneous “window” for the audience. It’s also lighter prep all around.

    Thanks for tapping me, PhillyMag, and thanks for putting on such an interesting event. Also, thanks for being a good sport, Chris Gheysens. #cowtailsFTW

    Also thanks to my bud David Lipson at PhillyMag for texting me 48 hours prior to the event, “Hey man, you’re going first to kick off the day. Be sure to bring the energy! No pressure!” Yeah. Thanks.

    I woke up at 4 am, day of, and reviewed and revised my notes for the eighth time. I have a tendency to over-prepare. What’s new…

    ps – I got busy this Spring and my posting frequency took a beating. I need to work on that. So many drafts in the queue…
