Process composition and process performance models (PPMs) are terms from the CMMI High Maturity area. A process needs to be composed quantitatively, considering various alternatives, to achieve the project's performance objectives. There needs to be a linkage between the composed process and the PPM, or rather there need to be alternative methods for the critical subprocesses in the PPM, which in turn means there need to be a number of PPMs. For example, if customer feedback score is the objective, and the number of acceptance-testing bugs, code review defect density, and schedule variance in requirements analysis are the subprocesses in the PPM, then to compose the process for code review or requirements analysis there need to be various alternatives: code review could be done by peer review or by expert review, and similarly for the other subprocesses. Different PPMs would then be built with the different alternatives in an organization, unless the data from all the alternatives taken together forms a stable process. So is there any chance of having a single PPM, given that the data pertaining to each alternative would be different?
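
As a rough illustration of the pooling question, here is a minimal sketch (with entirely hypothetical defect-density numbers) of how one might test whether data from two code-review alternatives can be treated as one stable process or calls for separate PPMs; the two-sample test shown is just one possible check.

```python
# Minimal sketch (hypothetical data): decide whether defect-density data from two
# code-review alternatives can be pooled into one PPM or needs separate PPMs.
import numpy as np
from scipy import stats

peer_review   = np.array([0.42, 0.38, 0.45, 0.40, 0.37, 0.44, 0.41])  # defects/KLOC
expert_review = np.array([0.28, 0.31, 0.25, 0.30, 0.27, 0.29, 0.26])  # defects/KLOC

# Two-sample t-test: if the alternatives differ significantly, pooling them
# would not yield a single stable process, so build one PPM per alternative.
t_stat, p_value = stats.ttest_ind(peer_review, expert_review, equal_var=False)
print(f"p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Alternatives behave differently -> separate PPMs per alternative.")
else:
    print("No significant difference -> pooled data may support a single PPM.")
```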


PPMs are valid only within the range of the data used for building them; we cannot extrapolate beyond that data for prediction. Yet we use these PPMs to check the probability (confidence) of achieving the targets, which means we may be simulating subprocess parameters that lie outside the valid range of the PPM. In that case, how can we expect the PPM to give a convincing result?
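
As a sketch of the kind of guard this implies (the parameter names and ranges below are purely hypothetical), one could flag any simulated subprocess value that falls outside the data range the PPM was built on:

```python
# Minimal sketch (hypothetical ranges): guard against simulating subprocess
# values outside the data range the PPM was built on, i.e. extrapolation.
valid_range = {"coding_speed": (25, 45), "expertise_index": (1.0, 4.0)}  # from PPM build data

def outside_ppm_range(simulated_inputs, valid_range):
    """Return the parameters whose simulated value lies outside the PPM's valid range."""
    return [name for name, value in simulated_inputs.items()
            if not (valid_range[name][0] <= value <= valid_range[name][1])]

flagged = outside_ppm_range({"coding_speed": 52, "expertise_index": 2.5}, valid_range)
if flagged:
    print("PPM prediction is an extrapolation for:", flagged)
```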

Let me illustrate this with an example.

I have built a productivity PPM in which the Xs are coding speed, expertise index, and requirements stability index, and Y is productivity. The data comes from the organization's current performance baselines (a stable process). In the PPM, the adjusted R-squared, the VIFs, and the individual correlations are all within the required limits. When simulation was performed to check, the probability of achieving above the mean of the productivity PPB was observed to be greater than 50%.
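
A minimal sketch of such a PPM, using hypothetical baseline data and made-up coefficients rather than any real organizational baseline, might look like the following: it fits the regression, reports adjusted R-squared and VIFs, and then runs a Monte Carlo simulation to estimate the probability of exceeding the PPB mean.

```python
# Minimal sketch (hypothetical baseline data): fit the productivity PPM described
# above and estimate, via Monte Carlo simulation, the probability of achieving
# productivity above the mean of the productivity PPB.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 60  # hypothetical number of completed projects in the baseline
df = pd.DataFrame({
    "coding_speed": rng.normal(35, 5, n),          # LOC/Hr
    "expertise_index": rng.normal(3.0, 0.5, n),
    "req_stability_index": rng.normal(0.85, 0.05, n),
})
df["productivity"] = (0.6 * df.coding_speed + 3 * df.expertise_index
                      + 10 * df.req_stability_index + rng.normal(0, 2, n))  # LOC/PD

X = sm.add_constant(df[["coding_speed", "expertise_index", "req_stability_index"]])
ppm = sm.OLS(df["productivity"], X).fit()
print("Adjusted R-sq:", round(ppm.rsquared_adj, 3))
print("VIFs:", [round(variance_inflation_factor(X.values, i), 2) for i in range(1, X.shape[1])])

# Monte Carlo: sample the Xs from their current baselines, add residual noise,
# and estimate P(predicted productivity > mean of productivity PPB).
sims = pd.DataFrame({
    "coding_speed": rng.normal(35, 5, 10_000),
    "expertise_index": rng.normal(3.0, 0.5, 10_000),
    "req_stability_index": rng.normal(0.85, 0.05, 10_000),
})
pred = ppm.predict(sm.add_constant(sims)) + rng.normal(0, np.sqrt(ppm.mse_resid), 10_000)
print("P(productivity > PPB mean):", round((pred > df.productivity.mean()).mean(), 2))
```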

The organizational PPO is to improve productivity from 30 LOC/PD to 40 LOC/PD through improvements identified in the coding process, on the basis that an increase in coding speed will lead to an increase in productivity. Suppose the current PPB of coding speed is 25 to 45 LOC/Hr and the expected range after improvement is 35 to 55 LOC/Hr (proven through piloting/validation). Now project A wants to use this productivity PPM. Here the subprocess performance data is actually outside the stable limits, i.e. outside the valid range of the PPM. In such a scenario the project team cannot use the previous PPM at all. So what is the use of the organizational PPM in that case?
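
Using the numbers from this example, a quick check of the kind a project team might run would flag the extrapolation (the variable names below are illustrative only):

```python
# Minimal sketch (values from the example above): flag that project A's expected
# coding-speed range extends beyond the range the productivity PPM was built on.
ppm_valid_range = (25, 45)   # LOC/Hr, current coding-speed PPB behind the PPM
project_a_range = (35, 55)   # LOC/Hr, expected range after the piloted improvement

if project_a_range[1] > ppm_valid_range[1] or project_a_range[0] < ppm_valid_range[0]:
    print("Project A's expected coding speed", project_a_range,
          "extends beyond the PPM's valid range", ppm_valid_range,
          "- predictions there would be extrapolations, so the PPM needs recalibration.")
```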

To summarize, if the PPO is targeted higher than the current PPB, and to achieve it the organization comes up with improvement initiatives that shift the subprocess limits to an improved range (outside the valid range of the model), then the previously defined PPM definitely cannot be used. In such a scenario the organization should also come up with a recalibrated PPM.

For the productivity PPM, code review effectiveness (CRE) is a subprocess parameter. CRE is itself a dependent parameter, so a sub-PPM is built to predict CRE. While using the productivity PPM, the process should therefore be composed with the CRE PPM first, and then the expected range of CRE, based on the simulated values, should be fed into the productivity PPM.
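
A minimal sketch of this composition, with hypothetical coefficients standing in for the actual CRE sub-PPM and productivity PPM, could chain the two simulations like this:

```python
# Minimal sketch (hypothetical coefficients): compose the CRE sub-PPM with the
# productivity PPM by simulating CRE first and feeding the result downstream.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Sub-PPM: CRE predicted from its own subprocess parameters (hypothetical model).
review_speed = rng.normal(150, 20, n)          # LOC reviewed per hour
reviewer_experience = rng.normal(3.0, 0.6, n)  # years
cre = 0.2 + 0.001 * review_speed + 0.08 * reviewer_experience + rng.normal(0, 0.05, n)

# Productivity PPM: simulated CRE enters as one of the predictors (hypothetical model).
coding_speed = rng.normal(35, 5, n)            # LOC/Hr
productivity = 5 + 0.6 * coding_speed + 12 * cre + rng.normal(0, 2, n)  # LOC/PD

print("Simulated CRE range (5th-95th pct):", np.percentile(cre, [5, 95]).round(2))
print("P(productivity > 30 LOC/PD):", round((productivity > 30).mean(), 2))
```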


A PM says to his senior manager, “There is only a 50-50 chance of delivering the project on time.” A reply like “you take the risk and go ahead as it is” would be rare, especially for a project where on-time delivery is a critical requirement. So, is this probability okay to proceed?

Similarly, improvement initiatives triggered by the organization would normally lead to achieving the specification limits (range) of the PPO with a probability of about 99.7% (or roughly >90%, considering the data variability in prediction models), whereas the probability of achieving above (or below) the central value of the PPO would be no more than about 50%. If the projects target the favourable side (above the mean, where productivity is the PPO), the organizational objectives can ultimately be attained; otherwise the reverse will happen. So in the normal case the project team would be targeting to achieve above the central value. This means that the probability of success is only 50%, even with the organizational improvement initiatives.
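
A short worked example makes the arithmetic explicit; it assumes an approximately normal productivity PPB with an illustrative mean of 30 LOC/PD and sigma of 3 LOC/PD, neither of which comes from a real baseline:

```python
# Worked example (assuming a roughly normal productivity PPB, mean 30 LOC/PD,
# sigma 3 LOC/PD): probability of landing inside the 3-sigma range vs.
# probability of landing above the central value.
from scipy.stats import norm

mean, sigma = 30.0, 3.0
p_within_spec = norm.cdf(mean + 3 * sigma, mean, sigma) - norm.cdf(mean - 3 * sigma, mean, sigma)
p_above_mean = norm.sf(mean, mean, sigma)

print(f"P(within mean +/- 3 sigma) = {p_within_spec:.4f}")   # ~0.9973
print(f"P(above the central value) = {p_above_mean:.2f}")    # 0.50
```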

Back to the question we started with: is this probability okay to proceed?
Certainly not, in the eyes of senior management. If a project targets the mean as its PPO, then the probability of success is only 50%. So either there need to be additional improvement initiatives inside the project, beyond those triggered by the organization, or the 50-50 risk of not meeting the target has to be accepted by the organization's senior management.
If additional improvement initiatives are triggered to raise the probability of success, the current mean will certainly change, and in such a scenario the project team needs to derive its internal estimates from the new mean (see more on setting internal estimates).
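
Continuing the same illustrative numbers (sigma of 3 LOC/PD), a shift of the mean from 30 to 34 LOC/PD through additional project-level initiatives would change the picture roughly as follows:

```python
# Continuation of the sketch above (same assumed sigma): if project-level
# improvements shift the mean from 30 to 34 LOC/PD, the probability of
# exceeding the original 30 LOC/PD central value is no longer 50-50.
from scipy.stats import norm

sigma = 3.0
old_mean, new_mean, target = 30.0, 34.0, 30.0  # hypothetical shift from extra initiatives

print(f"Before: P(> {target}) = {norm.sf(target, old_mean, sigma):.2f}")  # 0.50
print(f"After : P(> {target}) = {norm.sf(target, new_mean, sigma):.2f}")  # ~0.91
```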

Now there could be another case where some projects target below the mean for genuine reasons. Then their probability of success would be higher, but if this is repeated across many projects, the organizational objective ultimately won't be attained. So there needs to be a mechanism to ensure that if any project targets below the mean of the PPO, another project targets above it. Normally the Engineering Process Group (EPG) needs to take care of this.
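
One possible shape of such a mechanism, with hypothetical project targets and effort weights, is a simple size-weighted check the EPG could run across the portfolio:

```python
# Minimal sketch (hypothetical project targets and sizes): a size-weighted check
# the EPG could run so that projects targeting below the PPO mean are balanced
# by projects targeting above it.
org_ppo_mean = 40.0  # LOC/PD
projects = {"A": (38.0, 120), "B": (44.0, 80), "C": (41.0, 100)}  # (target LOC/PD, effort in PD)

weighted_target = (sum(t * w for t, w in projects.values())
                   / sum(w for _, w in projects.values()))
print(f"Portfolio-weighted target = {weighted_target:.1f} LOC/PD")
if weighted_target < org_ppo_mean:
    print("Portfolio falls short of the organizational PPO; rebalance project targets.")
```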