XpdWiki
When I first recorded my velocity I did it like this. On Monday (T1) I estimated some tasks:

Task A - estimate 2 days
Task B - estimate 1 day
Task C - estimate 2 days

The next Monday (T2) I saw that I had only completed Task A and Task B, so I worked out my velocity as: 3 days of estimated tasks completed in 5 days, which means velocity = 3/5 = 0.6! My first ever Velocity record. I then did my next set of estimates for that week:

Task C (from previous week) - estimate 1 day
Task D - estimate 2 days
Task E - estimate 1/2 day
Task F - estimate 1/2 day

The following Monday (T3) I saw I had completed Tasks C, E and F but D was outstanding. That is 1 + 1/2 + 1/2 = 2 days of estimated tasks completed in 5 days = 2/5 = 0.4, and so on...

I also noted things about the period from T1 to T2, like "task C included some nasty new technology I'd never used before so my estimate was off", or "task C was also quite a big chunk of work that I could have broken down into smaller bits", or "Fred was really stuck with his GUI sorting model one day so I had to help him a lot", or "I had a load of bugs come in that week so I didn't actually spend that long on the actual tasks I'd been set!"

A really bad thing to have happen is that you complete NO tasks during a week/iteration. That means you have 0 days of estimated tasks completed in 5, which means you have no velocity to report, which hurts. You start to fold things into next week and things get too complicated. Therefore the golden rule is: try to estimate something you can complete by the end of the week!! Basically, break your tasks down into nice and small chunks!

Then you can start to use YesterdaysWeather to make your estimates more reliable. For instance, after the period from T1 to T2 I had a velocity of 0.6, so when I made my next week's estimates I should have known that I can only do 3 days of estimates in 5 calendar days. And that's how it was.
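The bookkeeping above is just "sum of estimates for completed tasks, divided by elapsed days". A minimal sketch in Python (the function name is mine, not from this page), using the two weeks described above:

```python
# Velocity as recorded above: sum the estimates of the tasks that were
# actually COMPLETED in the period, divide by the elapsed working days.
def velocity(completed_estimates, elapsed_days):
    """completed_estimates: estimated days for tasks actually finished."""
    return sum(completed_estimates) / elapsed_days

# Week T1 -> T2: Tasks A (2 days) and B (1 day) completed, C not.
week1 = velocity([2, 1], 5)          # 3/5 = 0.6

# Week T2 -> T3: Tasks C (1), E (1/2) and F (1/2) completed, D not.
week2 = velocity([1, 0.5, 0.5], 5)   # 2/5 = 0.4
```

Note that unfinished tasks contribute nothing: only completed estimates count, which is what makes a zero-task week so painful to record.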
Once you have a velocity for week 1 you can say: probable ACTUAL time to complete work = week 2's estimates / velocity. So looking at week 2:

Task C (from previous week) - estimate 1 day
Task D - estimate 2 days
Task E - estimate 1/2 day
Task F - estimate 1/2 day

I can only do 3 days of estimated work in 5 days, and week two's estimates total 4 days. So I can choose to complete C, E and F and know it's unlikely I'll complete D, or I can choose D, E and F and know that it's likely I'll complete them all. I chose to do C, E and F, then D (and, not surprisingly, I managed to complete C, E and F but not D). My prediction was not (in this made-up example) exactly right, but I hit the date and predicted the future as best I could. My velocity did fall to 0.4 and I only just made it, though. Other weeks I might miss out and my predictions will be further off. If you have a bad week, though, your velocity will fall and your next week's predictions will give you time to catch up.

I can see these velocities bouncing around a bit. Mine is still bouncing around quite a bit, between 0.35 and 0.8 (over three months). -NeilThorne

Why do the division? We track the sum of the estimates of the tasks completed in an iteration. Since we don't track tasks that aren't complete, there's some noise in our figures, but also some "give", and we are constantly reminded that it's not an exact science: no bad thing. So, our iterations are two weeks long, we might have two or three pairs working on a project, and we estimate tasks at 1, 2, or 3. That ends up giving us project velocities between 20 and 30, typically. It's not clear to me how scaling all the numbers by a constant 1/5 would add value. I might be missing something, though. Do you find it easier to think in terms of 0.35 to 0.8 than in terms of 1.75 (call it 2; how do you measure 0.75 of an estimated day?) and 4? WhoWroteThis?? DoesItMatter??
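Neil's YesterdaysWeather arithmetic above — use last period's velocity to see how much estimated work fits in the next period, and how long a set of estimates will really take — can be sketched like this (function names are mine, and the numbers are the made-up ones from the example):

```python
# YesterdaysWeather: last period's velocity drives next period's plan.
def predicted_actual_days(estimated_days, last_velocity):
    # Probable ACTUAL time = estimates / velocity.
    return estimated_days / last_velocity

def capacity(period_days, last_velocity):
    # Estimated days of work you can expect to complete in the period.
    return period_days * last_velocity

# After T1 -> T2 the velocity was 0.6, so in a 5-day week:
print(capacity(5, 0.6))                        # 3.0 estimated days fit
print(predicted_actual_days(1 + 0.5 + 0.5, 0.6))  # C+E+F: about 3.3 real days
```

Choosing C, E and F (3 estimated days) against a capacity of 3 is exactly the "I know it's unlikely I'll complete D" reasoning above.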
As far as I understand it, velocities should nearly always be between 0 and 1 (because you nearly always underestimate how long something is going to take). I just divide the estimated time of completed tasks by the actual time spent that week/iteration/whatever time frame. Since I'm just recording my own velocity I do it in tasks at the moment, but if I were doing it for team iterations, all I'd do is add up the estimated times of everyone's completed tasks over the total time spent in the iteration.

I tried giving tasks 1, 2, 3 weightings like you suggest, but that is far too subjective in my experience, and I found all I wanted was a high-level picture of how accurate my estimates were if I made them believing I would just code them and nothing else. I have to remind myself not to factor in anything when I make the estimate. I guess different people have different ways of doing it.

I don't need to measure 0.75 of an estimated day. All I need to measure is what got completed that week (since I never usually run out of tasks). Then I just say: I completed X, Y and Z; I estimated 2.5 days and I spent a whole week on it. That gives me 2.5/5. Or if there were 3 pairs I'd say: pair 1 estimated 3 days, pair 2 estimated 2 days, pair 3 estimated 4 days, total = 9 days; all pairs actually spent 15 days, therefore velocity is 9/15. I think this is simple. I just measure what happened. I don't introduce any subjective weightings. I do estimate things like 1/2 a day all the time, but no lower. -NeilThorne

If this works for you, then absolutely carry on. Meanwhile, here are some thoughts that might be of interest. Where did you get the idea that project velocity should be in [0,1]? This division by actual time sounds more like the old load factor than project velocity. I've not come across a description of load factor that suggests it should be normalised to the unit interval; is there one somewhere?
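The three-pair example above generalises directly: sum every pair's completed estimates, divide by the total days all pairs actually spent. A minimal sketch (again, the function name is my own):

```python
# Team velocity: estimates of each pair's completed tasks, over the
# total actual days spent by all pairs in the iteration.
def team_velocity(completed_estimates_per_pair, days_per_pair):
    total_estimated = sum(completed_estimates_per_pair)
    total_actual = len(completed_estimates_per_pair) * days_per_pair
    return total_estimated / total_actual

# Three pairs completed 3, 2 and 4 estimated days over a 5-day week:
print(team_velocity([3, 2, 4], 5))   # 9/15 = 0.6
```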
The units of load factor would seem to be real days per engineering day, which is always going to be greater than 1 — not so much because we underestimate (though we do), as because things always get in the way. It looks like you are calculating something like the reciprocal of this, and using it as a measure of how "good" your estimates are. The important realisation here is that it fundamentally doesn't matter how good your estimates are in absolute terms (hint: they are very bad). What matters is how fast the team is going, and how consistent that measure is. The surprising thing is just how consistent a team's project velocity can turn out to be (hint: your estimates are about the same badness all the time).

I'm puzzled as to why you feel that estimating 2, 3 or 4 "days" is less subjective than estimating "1", "2" or "3". Here's what I think we're doing: when we estimate that a task is easy, <mumble>, or hard ("1", "2" or "3"), we are estimating the size of the problem. If we were to attempt to estimate in terms of days, we'd be attempting to estimate the size of the solution. We don't know particularly well, ahead of time, how much actual work it's going to take to do something, but we seem to have a pretty good idea of how complicated something sounds relative to the other things we've done. This distinction is exactly the shifting sand that classical estimating methods are built upon. The idea is to use something like function points to measure the size of the problem, then use something like COCOMO to predict how long it will take to build the solution. Somewhere in the middle you have to be able to figure out how many KLOC this many FPs will take. Heap big magic, though some people seem to be able to do it well. Using project velocity we elide the whole mess into a bunch of normalisation "constants", which we then don't calculate (the COCOMO guys do try to calculate them).
So, when we add up the estimates on the unfinished cards, we are finding out how much complexity we think remains to be put into the system. Our project velocity tells us how much complexity we think we put into the system over the last fortnight. From this we can easily calculate how long we think it'll take us to put the rest in. As long as we are confident enough that the "we think" term is consistent, the quality of our thinking is largely immaterial.

NeilThorne replies - It stands to reason that if your estimates are optimistic (like most developers out there) then your record of estimated time/elapsed time is going to be between 0 and 1; however, if you were really conservative then of course it could go over 1 (way over, in fact). It's interesting, because I looked around on the c2.com wiki and the descriptions of velocity there are even more diverse. Actually, most of the descriptions talk about LoadFactor; I think I'm measuring the inverse of this. The most important comment I read was that however you measure your velocity, the important thing was:

<quote> We track individual tasks by asking the developer: "How long did you originally estimate? How long did you actually work on it?" Thus the developer gets to hear herself say "I thought this would take me one day. It took me one and a half." This provides feedback to the developer on how accurate her estimates are. </quote>

You're right, estimating time is just as subjective as estimating complexity. I think we are both frequently testing our subjective assumptions and getting the feedback we need.
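The unfinished-cards calculation described above (remaining complexity divided by complexity delivered per fortnight) is a one-liner; here is a sketch with made-up numbers, since the page doesn't give concrete card values:

```python
# Remaining time: sum the estimates on the unfinished cards, divide by
# the velocity per iteration to get iterations left.
def iterations_remaining(unfinished_estimates, velocity_per_iteration):
    return sum(unfinished_estimates) / velocity_per_iteration

# e.g. unfinished cards estimated at 20, 25 and 15 points, with a
# project velocity of 25 points per fortnight:
print(iterations_remaining([20, 25, 15], 25))   # 2.4 fortnights left
```

The units cancel out, which is why it doesn't matter whether the estimates are "days", "points", or 1/2/3 weightings — only that they are consistent with the ones used to measure the velocity.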