Data, or evidence, is the prime focus of a judiciary when convicting or acquitting a person. The data helps to analyse the past, and that analysis informs the judgement. An error-free judgement is also a precautionary warning for any potential similar instance; in other words, data helps to predict the judgement for future similar instances. The scenario maps easily to the software industry, where data or measurements help us to do post-mortem or predictive analysis. Both analyses have their own relevance. Let us analyse how measurements help us to improve software in terms of these analyses. Had it been hardware, the improvements in terms of measurements would have been easier to understand, as there is something physically observable.

We use data for mainly two purposes. First of all, we need to know where we stand right now and what the current performance is. For this we conduct milestone analysis and intermediate data analysis during a project, and closure analysis at the end of a project. At the organizational level, process performance baselines are derived at a defined periodicity, and these serve as the reference against which project performance is compared.

Without measurement there is no control, and without control there is no improvement.

Measurement data tells us the quantitative nature of activities. With data we are actually making a hypothesis-based judgement. For example, say that in a software organization the effort consumed for a task 'A' was 50 Person Days (PDs), while the estimate for the same was only 35 PDs (assume there was no schedule slippage). This calls for an analysis. Suppose the reasons identified were related to the high complexity of the source code, which led to more bugs in the system. The complexity analysis of the source code, the bug analysis etc. also revealed the same. So in this scenario, believing in the data, we infer that complexity analysis should have been done live inside the project, rather than at the end. This would have helped the team to refactor the code, or to plan a focused review or focused testing early on!
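The size of such a deviation is usually expressed as effort variance, the overrun as a percentage of the estimate. A minimal Python sketch using the figures from the example above (the function name is my own):

```python
def effort_variance(actual_pd: float, estimated_pd: float) -> float:
    """Effort variance as a percentage of the estimated effort."""
    return (actual_pd - estimated_pd) / estimated_pd * 100

# Task 'A' from the example: estimate 35 PDs, actual 50 PDs
variance = effort_variance(actual_pd=50, estimated_pd=35)
print(f"Effort variance: {variance:.1f}%")  # prints "Effort variance: 42.9%"
```

An overrun of over 40% against the estimate is well beyond what most organizations tolerate as a threshold, which is what triggers the causal analysis described above.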

Now let us rewind and look at the actual process that was followed in the project…

Case 1: The tool used for measuring complexity was not calibrated, and the error caused it to show an incorrect value!

Humans vary by nature. Because of this, the judgement made by one person won't be the same as that made by another person under equal conditions. Take the case of a judiciary system: what happens if laws are not applied consistently across people? Ultimately it leads to the failure of the system. How can consistency be ensured?

To ensure this we need to monitor the system, analyse the variation and take appropriate actions. How can the variation happen? We need to understand the reasons for it. Variation can happen due to the subjectivity of judgement. Here the attribute which needs to be analysed for variation is 'judgement'. Statistical tools like Minitab can be used to quantify and analyse this variation.
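As a rough sketch of what such an analysis looks like outside a tool like Minitab, the snippet below builds an individuals (XmR) control chart over a series of measurements: it estimates sigma from the average moving range (the standard d2 = 1.128 constant for subgroups of two) and flags points outside the 3-sigma limits. The sample data is invented purely for illustration:

```python
import statistics

# Hypothetical review-effort judgements (hours), one per reviewer
samples = [4.2, 3.9, 4.5, 4.1, 7.8, 4.0, 4.3, 3.8, 4.4, 4.2]

# Individuals (XmR) chart: estimate sigma from the average moving range
moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
sigma_hat = statistics.mean(moving_ranges) / 1.128  # d2 constant for n=2
centre = statistics.mean(samples)
ucl = centre + 3 * sigma_hat  # upper control limit
lcl = centre - 3 * sigma_hat  # lower control limit

outliers = [x for x in samples if not lcl <= x <= ucl]
print(f"centre={centre:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
print("out-of-control points:", outliers)  # flags the 7.8 judgement
```

A point outside the limits signals a special cause: here, one reviewer's judgement differs from the others far more than common-cause variation would explain, and that is the case to investigate.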

“Nine women can’t deliver a baby in one month”

–          Frederick P. Brooks, in his book "The Mythical Man-Month: Essays on Software Engineering".

Project management is the discipline of organizing and managing resources (e.g. people) to achieve the project requirements, in such a way that the project is completed within the defined scope, quality, time and cost constraints. Time management is a critically important skill for any successful project manager. Ten people can pick cotton ten times as fast as one person, because the work can be partitioned. But nine women can't have a baby any faster than one woman can, because the work cannot be partitioned.

Brooks argues that adding manpower to a late software project only makes it later, because the new people need time to ramp up and every addition increases the communication overhead.

Every organization should have a Business Objective (BO). It can be in terms of profitability, time to market etc., and is basically chosen on the basis of the work being handled by the organization. The problem here is to map the BO from the enterprise or organizational level down to each work unit inside the organization. Based on the work units, a second layer of objectives might need to be defined (Process Performance Objectives, or PPOs, in CMMI terms). The parameter for a PPO should be chosen from the critical set of parameters which need to be monitored. For example, to improve profitability, productivity can be a critical parameter; profitability then becomes the parameter for the BO and productivity the parameter for the PPO.

Based on the PPOs defined at the organizational level and on customer requirements, the project team needs to come up with project-specific goals. For a CMMI high-maturity compliant organization, a probability analysis should provide enough evidence for achieving the targets. As the PPOs are defined as a range at the organizational level, the probability analysis done at the organizational level won't match the one at the project level, because the project goal is usually more stringent. Hence blindly following the organizational PPOs and the corresponding probability analysis might lead to irrelevant conclusions.

Consider a baseline derived for productivity in an organization as below:

Lower Control Limit (LCL) = 24 units, Central Value = 27 units, Upper Control Limit (UCL) = 30 units

There can be four goal statements as given below.

  1. Goal = Baseline
  2. Improve average
  3. Reduce sigma
  4. Improve average and Reduce sigma
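To make the probability analysis mentioned above concrete, a common simplifying assumption is that productivity is normally distributed, with the central value as the mean and one third of the distance to a control limit as sigma. Under that assumption the baseline above gives mean = 27 and sigma = (30 − 27) / 3 = 1. The sketch below, using only Python's standard library, compares the chance of meeting the baseline goal against a hypothetical, more stringent project goal of 29 units (both the project goal and the normality assumption are mine, for illustration):

```python
from statistics import NormalDist

# Baseline from above: LCL=24, CL=27, UCL=30  ->  sigma = (30 - 27) / 3 = 1
baseline = NormalDist(mu=27, sigma=1)

org_goal = 27      # goal = baseline central value (goal statement 1)
project_goal = 29  # hypothetical, more stringent project-specific goal

# Probability of meeting or exceeding each goal, assuming normality
p_org = 1 - baseline.cdf(org_goal)
p_project = 1 - baseline.cdf(project_goal)

print(f"P(productivity >= {org_goal}) = {p_org:.2f}")       # 0.50
print(f"P(productivity >= {project_goal}) = {p_project:.3f}")  # ~0.023
```

The drop from a 50% chance to roughly 2% illustrates the point in the text: the organizational probability analysis says nothing useful about a stricter project goal, so the analysis must be redone at the project level.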