We encountered two serious problems and learned from our mistakes. We were biased toward the waterfall model, the only model we knew for managing and running a software project.
Running a waterfall model inside a sprint.
When the sprint starts rolling, even on the first day, a developer picks up a user story and starts writing the actual code. At the same time, the tester creates the test scripts based on the test plan detailed in sprint planning.
On day one, the daily standup is normally missed, or lasts less than 10 minutes, just enough for the working group to pick up the activities they agreed on in Sprint Planning. For a 30-day sprint, we would effectively have 29 daily stand-ups. In my observation, about 20% of those 29 days are greatly effective, 70% are just tracking and "in progress" status, and the remaining 10% are very ineffective.
The 20% splits across the sprint. The first part comes at the start, when the coder begins the first piece of work having made up his mind to stick with the promised schedule, and he manages it 90% of the time. The second gear-up happens just two or three days before the sprint ends, when fast-tracking, or simply working hard to deliver, is on everyone's mind. Over time, as the team matures, they understand that this is what really happens in self-organized teams.
The 70% is the actual threat: this is where we fall into a waterfall model by mistake, unknowingly, due to delays, perhaps even from a single resource. Once a delay occurs, or the team's velocity swings back and forth, the shift affects the next step, and that step itself falls, unknowingly, into waterfall.
The developer delays the process by two days and starts work on Task B two days late; the tester then has no work for those two days, and the same ripples through the other parties. Over a period of time, we could see the developers creating function after function and the testers testing feature after feature inside the sprint. We were running a waterfall without even knowing it: work ran one step after another, each depending on the previous one, one person waiting for work while another extended or delayed what was promised.
It is actually complicated to run work in parallel. A test sprint run to find the team's velocity, repeated over a period of executed content, gives you the truth: what can really be delivered as a team, not as an individual. The PDCA cycle pulls up the dirt and exposes the real threat.
A highly self-organized team delivers 100% of its commitment. The threat is not when the team produces less than 100%; the threat is when it produces more than 100%. The Scrum Master then changes the team's velocity, and it grows at a decent rate. Most of the time, in my practice, I have seen teams take the best-case velocity into the sprint; in the 70% of "in progress" meet-ups at the daily scrum, they take on more user stories, performing the full calculation of sizing, fixing the duration, and drawing the schedule. Then, mid-flow, the newly picked user stories are left without a single piece of work done, because test bugs have to be reworked or completed in full, which pulls the velocity of the person, or the team, to the positive side on paper. If, for the next sprint, the Scrum Master decides to go with that best-case figure for the resource, it becomes even more cumbersome to determine the real value. It is fine as long as the velocity and the deliverables can be managed, but every time the sprint runs, it has to be managed well. The Scrum Master's ability to validate the real velocity to commit to comes only from running a few sprints and gaining experience.
Estimating features is a continuous problem, as features are mostly different from one another. When we spend time to build a feature and label it "Done", we also assess whether it can be automated. The time taken to automate the feature is normally about 5% of the estimated user story size. This sometimes backfires: we automate a feature, and when we try to reuse it, the same 5% goes into integration even if the feature is ready to plug and play. The only saving is for the testers, who do not have to recreate the scripts to run.
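As a rough worked example of that 5% rule (the 40-hour story size here is a made-up figure, not from any real backlog):

```python
# Hypothetical story estimated at 40 hours, with the 5% automation rule above.
story_hours = 40
automation_cost = 0.05 * story_hours    # one-time cost to automate the feature
integration_cost = 0.05 * story_hours   # the same 5% spent again when plugging it in elsewhere

print(f"automation: {automation_cost} h, later integration: {integration_cost} h")
# Both come to 2 hours each; the break-even depends on how often
# the testers would otherwise recreate the scripts by hand.
```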
We have our user stories complete, with acceptance criteria and a proper "done" statement. Next in line: estimating the size, deriving a duration (using the velocity), and drawing a schedule. We do these one after the other and continue until we reach the end of the velocity. Here we are running a clear waterfall model that can't be stopped. Over a period of time, the team realizes with a shock, in both cases, that they are not working in an iterative mode. They have unknowingly been trapped in the waterfall model, and the way out is to terminate the sprint and start with the Sprint Review and Retrospective to identify what we could change.
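The size-to-duration-to-schedule chain above can be sketched roughly as follows. This is only an illustration of the calculation, not any tool we used; the story names, point sizes, velocity, and start date are all made-up figures:

```python
from datetime import date, timedelta

# Hypothetical backlog: user stories with estimated sizes in story points.
stories = {"Story A": 8, "Story B": 5, "Story C": 13, "Story D": 5}

velocity = 20        # points the team historically completes per sprint (assumed)
sprint_days = 30     # sprint length used in the text
points_per_day = velocity / sprint_days

start = date(2024, 1, 1)   # arbitrary sprint start date
schedule = []
cursor = start
committed = 0
for name, size in stories.items():
    if committed + size > velocity:
        break              # stop when we "reach the end of the velocity"
    days = round(size / points_per_day)   # duration derived from velocity
    schedule.append((name, cursor, cursor + timedelta(days=days)))
    cursor += timedelta(days=days)
    committed += size

for name, begin, end in schedule:
    print(f"{name}: {begin} -> {end}")
```

The point of the sketch is the sequencing: size first, then duration, then schedule, each step feeding the next, which is exactly the one-after-the-other chain the text describes.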
Terminating a sprint is not as easy as it sounds, since we would be in the 70% "in progress" state. There are deliverables half done and team members waiting for their share of the work. It becomes a cumbersome battle within the team to deliver. The elasticity of the team helps bring over a few portions of digestible delivery. If the sprint is terminated and has to restart, the work doesn't restart from scratch but from the part that was half done, though the estimate (size) counts in full.
If you are starting to follow Scrum, I would highly suggest running a full two rounds of demo sprints with the executing team to understand how it works. And even before adopting Scrum, make sure you have a proper need to use it. Threats like these are lessons learned from my own practice.