Balancing data-driven insights with experience – find out more with Ben Nicholson at ShareDo

We’re doing a fair amount of work on our Finance V3 Roadmap, particularly on how ShareDo manages fee estimates or quotes. Here’s what I’ve learned:

I spent the first half of my career in professional services, so I’ve had a fair amount of experience pricing both large, complex projects and smaller volume engagements under a variety of fee arrangements: fixed-price, hourly, and the dreaded “capped” T&M.

And like anyone who’s priced services, I’ve got it wrong.

– Price too low and we erode margin, which at best leads to difficult conversations with our clients about increasing our fees

– Price too high and we risk either losing the business or being seen as over-profiting

Either way, having a more intelligent estimating process is crucial.

Today this is how I approach pricing services:

My starting point is to look at the effort required from two perspectives:

1) A bottom-up view

This view is objective and asks, “What are the tasks that need to be done?”

We know we need to do A, B, and C, and it takes X hours or days to do each step.

2) A top-down view

This view is much more subjective. I’m putting my finger in the air and going, “Ehhh, I think it’s about 10 hours work.”

Then I apply some contingency and other factors, such as client complexity.
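To make that concrete, here’s a rough sketch of the arithmetic in Python. The task names, hours, rates, and uplift factors are all invented for illustration; any real estimating process would be richer than this:

```python
# A minimal sketch of blending a bottom-up and a top-down estimate,
# then applying contingency and a client-complexity uplift.
# All figures below are made up for illustration.

# Bottom-up: sum the effort for each known task (hours).
tasks = {"initial review": 4.0, "drafting": 10.0, "client meetings": 3.0}
bottom_up_hours = sum(tasks.values())

# Top-down: a single gut-feel figure for the whole engagement.
top_down_hours = 10.0

# Blend the two views, then apply contingency and complexity factors.
blended_hours = (bottom_up_hours + top_down_hours) / 2
contingency = 1.15          # 15% buffer for the unknowns
client_complexity = 1.10    # demanding client, extra reporting, etc.

estimated_hours = blended_hours * contingency * client_complexity
hourly_rate = 250.0
estimated_fee = estimated_hours * hourly_rate

print(f"Estimated effort: {estimated_hours:.1f} hours, fee: {estimated_fee:,.0f}")
```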

What I don’t do today is search across the huge sample of data from past matters:

– I don’t look at similar examples to help inform my or my team members’ estimates

– and I don’t look at statistical analysis to understand the correlations between matter information and the estimate itself

So how can we better predict fees?

For me the mission is to support fee estimating with robust data, analytics, and correlation analysis from past performance.

So what does that look like?

1) Supporting discoverability

I want to be able to go into a tool and see estimates for similar work, along with their associated actuals.

If I estimated 50k and a similar project cost 60k, I want to see that.

I want to discover that as an expert, and be able to judge whether the case is similar enough for that difference to inform my own estimate.
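As a rough illustration of what I mean, a sketch like the one below would do: filter past matters that look like the new one and surface their estimates against their actuals. The matter fields and sample records here are invented for the example, not real data:

```python
# A hypothetical sketch of the "discoverability" idea: find past matters
# matching the new one and show estimate vs actual for each.

past_matters = [
    {"work_type": "commercial lease", "complexity": "medium", "estimate": 50_000, "actual": 60_000},
    {"work_type": "commercial lease", "complexity": "medium", "estimate": 45_000, "actual": 47_000},
    {"work_type": "employment dispute", "complexity": "high", "estimate": 80_000, "actual": 95_000},
]

def similar_matters(work_type, complexity, matters):
    """Return past matters in the same work type and complexity band."""
    return [m for m in matters
            if m["work_type"] == work_type and m["complexity"] == complexity]

for m in similar_matters("commercial lease", "medium", past_matters):
    variance = m["actual"] - m["estimate"]
    print(f"estimated {m['estimate']:,}, actual {m['actual']:,} (variance {variance:+,})")
```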

2) Informing the estimator

Then, I want to understand where my estimate sits on a scale from pessimist to optimist.

For example, I’m a huge optimist. So when I estimate how long a piece of work will take or how much it’s going to cost, my gut feeling is generally around 20% under.

If I had better statistical analysis, I’d have a tool that told me, “Ben, you’re super optimistic and you’re always under by 20%.” So straight away, I can adjust my thoughts.
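Here’s a back-of-the-envelope sketch of that idea: measure an estimator’s historic bias from past estimates versus actuals, then suggest an adjusted figure for the next job. The numbers are invented for illustration:

```python
# A rough sketch of calibrating an individual estimator's optimism.
# Each pair is (estimated hours, actual hours); figures are made up.
my_history = [
    (40.0, 50.0),
    (10.0, 12.5),
    (20.0, 23.0),
]

# Average ratio of actual to estimate: >1 means I habitually under-estimate.
bias = sum(actual / estimate for estimate, actual in my_history) / len(my_history)

new_estimate = 10.0
adjusted = new_estimate * bias

if bias > 1:
    print(f"You tend to under-estimate by {bias - 1:.0%}.")
else:
    print(f"You tend to over-estimate by {1 - bias:.0%}.")
print(f"Raw estimate: {new_estimate:.1f}h, bias-adjusted: {adjusted:.1f}h")
```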

3) Leveraging statistical analysis to understand the machine’s guess

This is a plain number that the machine spits out for us to consider.
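Purely to illustrate the kind of analysis involved (and not how any particular product does it), here’s a sketch that fits actual hours against a couple of matter attributes using a simple least-squares regression. The attributes and data points are made up; a real model would draw on far richer matter information:

```python
# A hypothetical sketch of "the machine's guess": a least-squares fit of
# actual hours against a couple of matter attributes from past work.
import numpy as np

# Each row: [number of parties, number of documents]; target: actual hours.
X = np.array([
    [2,  50],
    [3, 120],
    [5, 300],
    [2,  80],
], dtype=float)
y = np.array([12.0, 25.0, 60.0, 16.0])

# Add an intercept column and fit coefficients by least squares.
X1 = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)

# The machine's guess for a new matter with 4 parties and 200 documents.
new_matter = np.array([1.0, 4.0, 200.0])
print(f"Machine's guess: {new_matter @ coeffs:.1f} hours")
```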

What do you think — are you a top-down or bottom-up estimator? And how would a tool best support you?

If you’ve got an idea to share, leave a comment or shoot me a DM!

ShareDo’s cloud-based case management platform transforms top law firms by unlocking time and accelerating operational [...]