Setting up a measurement platform for improving software

Data, or evidence, is the prime focus of a judiciary when convicting or acquitting a person. The data helps to analyse the past, and that analysis supports the judgement. An error-free judgement also serves as a precautionary warning for any potential similar instances; in other words, the data helps to predict the judgement for future similar cases. This scenario maps easily to the software industry, where data or measurements help us do post-mortem or predictive analysis. Both analyses have their own relevance. Let us see how measurements help us to improve software through these analyses. If it were hardware, improvement in terms of measurements would be easier to understand, since there is something physically observable.

We use data mainly for two purposes. First, we need to know where we stand right now, that is, what the current performance is. For this we conduct milestone analysis, intermediate data analysis, and closure analysis at the end of a project. At the organizational level, process performance baselines are derived at a defined periodicity; these are indicative of organizational performance as a whole. Second, data helps to predict the future. For example, we can say whether, if coding proceeds at speed 'x', we will be able to complete the project as per schedule. Process performance models greatly support such predictive analysis, and defect prevention activities are also an output of predictive analysis.
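As a toy illustration of the schedule prediction mentioned above (the function and parameter names here are my own, for illustration only, not from any standard process performance model):

```python
def will_finish_on_schedule(remaining_size_kloc, coding_speed_kloc_per_day, days_left):
    """Naive schedule check: at the current coding speed 'x',
    can the remaining work be finished within the days left?"""
    days_needed = remaining_size_kloc / coding_speed_kloc_per_day
    return days_needed <= days_left

# 10 KLOC remaining at 0.5 KLOC/day needs 20 days, so 30 days is enough.
print(will_finish_on_schedule(10, 0.5, 30))   # True
print(will_finish_on_schedule(10, 0.2, 30))   # False: 50 days needed
```

A real process performance model would also account for variation in the speed, but even this simple projection shows how collected data turns into a forward-looking answer.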

In order to set up a measurement platform, a set of actions needs to be performed, as below.

1              Define the base parameters to be collected. Normally these are size, effort, schedule and defects. Define the units for each measurement:

a.     Size can be in Function points, Lines of code (LOC) or Use case points

b.    Effort can be in person-months, person-days or person-hours

c.     Schedule can be in calendar days

d.    Defects can be a count of defects

2             Define the derived parameters based on the profile of the organization. These could be:

a.     Productivity            : Size/Effort

b.    Defect density         : Defects/KLOC

3             Categorise the nature of work done in the organization based on the type of projects. It could be fresh development, enhancement, porting, etc. Within projects, the data should be collected at the best possible granularity.

4             Form a metrics database to collect the data from closed projects.

5             Do periodic baseline analysis to know the current performance of the organization.
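The derived parameters (step 2) and a simple baseline (step 5) can be sketched as follows. The project records and field names below are entirely hypothetical, standing in for what a real metrics database (step 4) would hold:

```python
from statistics import mean

# Hypothetical closed-project records pulled from the metrics database.
projects = [
    {"size_kloc": 12.0, "effort_person_days": 300, "defects": 18},
    {"size_kloc": 8.5,  "effort_person_days": 250, "defects": 20},
    {"size_kloc": 20.0, "effort_person_days": 520, "defects": 25},
]

def productivity(p):
    # Productivity = Size / Effort (here, KLOC per person-day).
    return p["size_kloc"] / p["effort_person_days"]

def defect_density(p):
    # Defect density = Defects / KLOC.
    return p["defects"] / p["size_kloc"]

# A minimal baseline: the mean of each derived parameter across closed projects.
baseline = {
    "productivity": mean(productivity(p) for p in projects),
    "defect_density": mean(defect_density(p) for p in projects),
}
print(baseline)
```

An actual baseline would typically be segmented by project type (fresh development, enhancement, porting) and would report spread as well as central tendency, but the computation follows the same shape.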

Now we have set up the measurement platform. But how good is the collected data? If no one has verified the three "C" aspects of the data, then the entire system is a failure. Read more on how good the collected data is.

The activity does not stop here, either. If we need to achieve improvements, then live data analysis should be done inside the projects.


The author is a Quality Assurance professional by experience. Part quantitative data analyst, part consultant for quality and information security practices, part software tester, she is a writer by passion.
