Believe in Good Measurements

Without measurement there is no control, and without control there is no improvement.

Measurement data describes the quantitative nature of our activities. With data, we are essentially making judgements about hypotheses. Say, for example, in a software project the effort consumed for task ‘A’ is 50 Person Days (PDs) while the estimate for the same was only 35 PDs (assume there was no schedule slippage). This calls for an analysis. Suppose the reasons identified were related to the high complexity of the source code, which led to more bugs in the system. The complexity analysis of the source code, the bug analysis, etc. also revealed the same. So in this scenario, believing in the data, we infer that complexity analysis should have been done live, inside the project, rather than at the end. This would have helped us refactor the code, or plan a focused review or focused testing, early on!

Now let us rewind to the actual process followed in the project…

Case 1   : The tool used for measuring complexity was not calibrated; it had an error and showed an incorrect value!

Case 2   : Junk effort values were logged by the team members, and no one verified them!

Case 3   : Bug classification was not proper. Similar defects were mapped to different defect types by different users, so the analysis based on that classification showed incorrect results!

The above three cases actually question the problem statement itself: did we really have an effort overrun? Depending on the case, there could be different scenarios in a real project. How can this risk be reduced? How can we really rely on the data, so that our conclusions carry fewer hypothetical errors? The following could be a partial solution.

1.       Analyse and reduce subjective dependency during data collection as far as possible. For example,

a. A typical turnkey project derives its effort estimate (and the corresponding cost) from the size estimate. If the size measure varies a lot from person to person for the same application, then the measurement system itself is under question. In such cases a Measurement System Analysis (Attribute Agreement Analysis – refer AAA) should be done on the practitioners to evaluate their percentage of concordance. If the MSA results are not acceptable, appropriate actions such as training should be taken (a small sketch of such a concordance check follows this list).

2.       Automate the data logging and the related analysis as far as possible. Wherever there is manual intervention while collecting the data, the sanity of the data collection system becomes questionable. For example,

a. If effort logging is done manually, there is every chance of junk or zero effort entries on a case to case basis. So it is better to have an automated system for capturing the effort log, linked to the window switching being done (a rough sketch of automated effort capture follows this list).

3.       There should be proper guidelines for data collection and analysis. For example,

a. When similar defects are logged multiple times against an artefact, or a lot of minor issues are reported, it ultimately affects the total defect density of the project. So proper attention should be paid while counting defects, in order to avoid misleading conclusions (see the de-duplication sketch after this list).

4.       Identify and maximise the usage of metrics which are directly linked to the artefacts. For example,

a. Cyclomatic complexity is a measure of the number of independent paths in a program. If a proper tool is available to measure it, the data is a purely objective measurement. Another major benefit of such metrics is that they have a direct impact on the artefact, so it is much easier to showcase the impact in front of practitioners or senior management (a simplified complexity counter is sketched after this list).

5.       The data should be complete, correct and consistent. From an organization perspective, there has to be a dedicated team to check these three “C” aspects before entering the data into the organizational database (a small completeness check is sketched after this list).

a. Complete: It should have all the required contextual information. For example, if we need to compare two productivity figures, the first and foremost thing is to ensure that both carry the same units.

b. Correct: There should not be any question on the integrity of the data.

c. Consistent: If the same conditions are repeated, the data should be reproducible.

6.       Determine the precision required for data collection, based on the correctness, consistency and applicability of the data.

7.       Do statistical monitoring of the live data to observe any out-of-control point variation (a bare-bones control-limit check is sketched after this list).

8.       Compare the collected (derived) data with organizational or industry benchmarks to analyse whether the variation seen is only random variation.

9.       Ensure that the data and the related inferences are produced on time. As time passes, errors accumulate and lead to wrong conclusions.
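
Below are a few illustrative sketches (in Python) for some of the points above. For point 1, this is a minimal sketch of how the percentage of concordance from an attribute agreement analysis could be computed. The appraiser names, size categories and reference classification are purely illustrative; a full MSA would also look at repeatability and kappa statistics, and the acceptance threshold would come from the organization's own criteria.

from itertools import combinations

# ratings[appraiser] = categories assigned to the same items, in the same order
ratings = {
    "appraiser_A": ["S", "M", "M", "L", "S", "M"],
    "appraiser_B": ["S", "M", "L", "L", "S", "M"],
    "appraiser_C": ["S", "S", "M", "L", "S", "M"],
}
reference = ["S", "M", "M", "L", "S", "M"]   # expert / standard classification

def percent_agreement(a, b):
    # Percentage of items on which two classification lists agree
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

# Concordance between every pair of appraisers
for (name1, r1), (name2, r2) in combinations(ratings.items(), 2):
    print(f"{name1} vs {name2}: {percent_agreement(r1, r2):.1f}% agreement")

# Each appraiser against the reference (expert) classification
for name, r in ratings.items():
    print(f"{name} vs standard: {percent_agreement(r, reference):.1f}% agreement")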
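
For point 2, the sketch below shows the idea of automated effort capture: a small wrapper records start and end timestamps for each task, so the effort figure is derived rather than typed in by hand. Hooking it to actual window switching would need an OS-specific library; the task name and log file here are hypothetical.

import csv
import time
from contextlib import contextmanager
from datetime import datetime

LOG_FILE = "effort_log.csv"   # hypothetical effort log

@contextmanager
def track_effort(task_name):
    # Record the elapsed time for a task automatically, instead of manual entry
    start = time.time()
    try:
        yield
    finally:
        elapsed_hours = (time.time() - start) / 3600.0
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.now().isoformat(timespec="seconds"),
                 task_name, round(elapsed_hours, 3)]
            )

# Usage: the effort entry can no longer be a junk or zero value typed in later
with track_effort("code review - module X"):
    pass  # ...the actual work goes here...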
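
For point 3, the sketch below collapses duplicate defect reports (same artefact, same normalised summary) before computing defect density, so the same issue logged twice does not inflate the figure. The field names and the KLOC size are made up for illustration.

defects = [
    {"artefact": "login.c",  "summary": "Null pointer on empty password"},
    {"artefact": "login.c",  "summary": "null pointer on empty password "},  # duplicate
    {"artefact": "report.c", "summary": "Wrong date format in header"},
]
size_kloc = 4.2   # measured size of the artefacts under analysis

def normalise(defect):
    # Case- and whitespace-insensitive key used to spot duplicates
    return (defect["artefact"], " ".join(defect["summary"].lower().split()))

unique_defects = {normalise(d) for d in defects}
defect_density = len(unique_defects) / size_kloc
print(f"{len(unique_defects)} unique defects, density = {defect_density:.2f} defects/KLOC")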
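
For point 4, a simplified cyclomatic complexity counter is sketched below using Python's ast module (decision points plus one), to show how the metric can be taken directly and objectively from the artefact. A dedicated tool would be more thorough; the sample function is invented.

import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    # Simplified McCabe count: number of decision points in the code, plus one
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

sample = """
def pay_grade(salary, years):
    if salary > 100000 and years > 5:
        return "A"
    for year in range(years):
        if year % 2 == 0:
            salary += 10
    return "B"
"""
print("Cyclomatic complexity:", cyclomatic_complexity(sample))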
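
For point 5 (“Complete”), the check below refuses to compare two productivity records unless both carry a unit and the units match. The record structure and the unit strings are hypothetical.

def comparable(rec_a, rec_b):
    # Contextual information (the unit) must be present and identical before comparison
    for rec in (rec_a, rec_b):
        if not rec.get("unit"):
            raise ValueError(f"Incomplete record, unit missing: {rec}")
    if rec_a["unit"] != rec_b["unit"]:
        raise ValueError(f"Units differ: {rec_a['unit']} vs {rec_b['unit']}")
    return True

project_x = {"project": "X", "productivity": 12.5, "unit": "FP/person-month"}
project_y = {"project": "Y", "productivity": 0.45, "unit": "FP/person-day"}

try:
    comparable(project_x, project_y)
except ValueError as err:
    print("Cannot compare yet:", err)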
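
For point 7, the bare-bones check below establishes control limits (mean ± 3 sigma) from a stable baseline period and then flags any live point that falls outside them. The effort-variance numbers are invented, and a real control chart would also apply run rules rather than only the 3-sigma test.

from statistics import mean, stdev

# Control limits established from a stable, in-control baseline period
baseline = [4.0, 5.2, 3.8, 4.5, 4.9, 4.6, 4.2, 5.0, 4.4, 4.8]   # e.g. weekly effort variance, %
centre = mean(baseline)
sigma = stdev(baseline)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

# Live points are checked against those limits as they arrive
live_points = [4.7, 5.1, 11.7, 4.3]
for week, value in enumerate(live_points, start=1):
    status = "OUT OF CONTROL - investigate" if value > ucl or value < lcl else "ok"
    print(f"Point {week}: {value:5.1f}  (limits {lcl:.1f} to {ucl:.1f})  {status}")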

About

The author is a Quality Assurance professional by experience. Part Quantitative data analyst, part consultant for quality and information security practices, part software tester, she is a writer by passion and blogs at https://wordsandnotion.wordpress.com and http://qualitynotion.com/.
