A risk is a potential issue waiting to happen that can lead to unintended effects. If there is zero probability of the issue occurring, it is not a risk at all. Similarly, if the issue would have no adverse effect when it occurs, it is not a risk. And if the issue has already happened, it is no longer a risk but a problem.

Risk Identification and Analysis

A project can carry many different risks. These need to be identified first and then analysed based on their probability of occurrence and their effect (impact). Based on this analysis, the risks need to be prioritised for action. In practice we prioritise risks on three factors: probability of occurrence, impact, and detectability of the risk.

Risks that are highly probable, have a large impact and are hard to detect need to be treated first. To prioritise risks we need a metric associated with each of them; we call it the risk value. It is the product of probability, impact and detectability.
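
As an illustration, the Python sketch below ranks a few hypothetical risks by this risk value. The 1-to-5 scoring scale and the example risks are assumptions made for illustration; they are not prescribed here.

# A minimal sketch of risk-value prioritisation, assuming probability, impact
# and detectability are each scored on a 1-5 scale, where 5 means "more
# probable", "higher impact" and "harder to detect" (the scale is an assumption).

def risk_value(probability, impact, detectability):
    # Risk value is the product of the three factors.
    return probability * impact * detectability

# Hypothetical risk register entries: (name, probability, impact, detectability)
risks = [
    ("Key resource attrition",      4, 5, 2),
    ("Requirement volatility",      3, 4, 3),
    ("Third-party library defects", 2, 5, 5),
]

# Treat risks with the highest risk value first.
for name, p, i, d in sorted(risks, key=lambda r: risk_value(*r[1:]), reverse=True):
    print(f"{name:30s} risk value = {risk_value(p, i, d)}")

Sorting in descending order of risk value surfaces the high-impact, hard-to-detect risks that need attention first.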

Data, or evidence, is the prime focus of a judiciary when convicting or acquitting a person. The data helps to analyse the past, and on that basis the judgement is made. An error-free judgement also acts as a precautionary warning against similar instances in the future; in other words, data helps to predict the judgement for future similar cases. This scenario maps easily to the software industry, where data or measurements help us to perform post-mortem or predictive analysis. Both kinds of analysis have their own relevance. Let us examine how measurements help us to improve software in terms of these analyses. If it were hardware, improvement in terms of measurements would be easy to understand, since there is something physically observable.

We use data mainly for two purposes. First of all, we need to know where we stand right now and what the current performance is. For this we conduct milestone analysis, intermediate data analysis and closure analysis at the end of a project. And at the organizational level, based on a defined periodicity, process performance baselines are derived, which are

Without measurement there is no control, and without control there is no improvement.

Measurement data describes the quantitative nature of activities. With data we are, in effect, making a judgement based on a hypothesis. For example, suppose that in a software project the effort consumed for a task ‘A’ is 50 Person Days (PDs) while the estimate was only 35 PDs (assume there was no schedule slippage). This calls for an analysis. Suppose the reasons identified were related to the high complexity of the source code, which led to more bugs in the system, and that the complexity analysis of the source code, the bug analysis and so on revealed the same. In this scenario, trusting the data, we infer that complexity analysis should have been performed live inside the project rather than at the end. That would have helped the team to refactor the code, or to plan a focused review or focused testing, early on!
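
As a small illustration of the quantitative check involved, the sketch below computes the effort variance for task ‘A’. The variance formula used (deviation relative to the estimate, as a percentage) is a commonly used definition and an assumption here rather than something stated above.

# A minimal sketch: effort variance for task 'A', assuming the common
# definition variance = (actual - estimated) / estimated * 100.
estimated_pd = 35   # estimated effort in Person Days
actual_pd = 50      # actual effort in Person Days

effort_variance = (actual_pd - estimated_pd) / estimated_pd * 100
print(f"Effort variance: {effort_variance:.1f}%")   # about 42.9%, large enough to trigger analysis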

Now let us rewind and look at the actual process that was followed in the project…

Case 1: The tool used for measuring complexity was not calibrated; it had an error and was showing an incorrect value!

There is a set of process areas that call for the institutionalization of certain organizational process activities, beyond the normal project activities. They can be summarised as follows:

  • RSKM : Organizational Risk Database
  • OPD  : Organizational Set of Standard Processes (OSSP), six process assets
  • OPF  : Minor Improvement Initiatives

A process consultant’s activities inside a project start with the project kick-off meeting and finally end with the closure meeting. The sequence of activities carried out during the project life cycle is explained below.
1. Help the project manager with project planning:
• Defining the workflow and milestones for project activities.
• Identifying the risks during project start up and execution.
• Defining process and product goals using the process performance objectives and models defined in the organization.
• Identifying the critical parameter for statistical process monitoring.
2. Review the project plan and its annexures, such as the configuration management (CM) plan, audit plan, risk management plan, quantitative project management plan, estimation and schedule.

There are many methods to improve quality. Quality improvements can be people-, process-, product- or technology-based. The following list covers some of the main quality improvement methods.

Statistical Process Control (SPC) 

A process can be controlled statistically if a measurable parameter associated with the process is monitored on a continuous basis. The monitoring and controlling are much easier to grasp when charts are used to plot the data. One of the seven Quality Control tools, the control chart, is mainly used for this purpose.
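
As a rough sketch of the idea, the Python example below derives a centre line and 3-sigma control limits for a series of hypothetical defect-density measurements, in the style of an individuals chart with sigma estimated from the average moving range, and flags any point outside the limits. The data and the choice of metric are assumptions for illustration.

# A minimal individuals (I) chart sketch: sigma is estimated from the average
# moving range (MR-bar / 1.128), and the control limits are the centre line
# +/- 3 sigma. The defect-density data below is hypothetical.

defect_density = [1.2, 0.9, 1.1, 1.4, 1.0, 4.0, 1.3, 1.1, 0.8, 1.2]  # defects/KLOC per build

centre = sum(defect_density) / len(defect_density)
moving_ranges = [abs(b - a) for a, b in zip(defect_density, defect_density[1:])]
sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128  # d2 constant for subgroup size 2
ucl = centre + 3 * sigma
lcl = max(centre - 3 * sigma, 0.0)  # defect density cannot be negative

print(f"CL={centre:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for build, x in enumerate(defect_density, start=1):
    if not lcl <= x <= ucl:
        print(f"Build {build}: {x} falls outside the control limits; look for a special cause")

Points inside the limits are treated as common-cause variation; points outside them signal a special cause that needs investigation.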

The history of quality management, from mere ‘inspection’ to Total Quality Management and its modern ‘branded’ interpretations such as ‘Six Sigma’, leading to the development of essential processes, is described in sequential order below, based on a research work.

In the competitive era of the software industry, the demand is to ensure that products of the highest quality are delivered in the shortest possible schedule. Historians have traced the concept of quality as far back as 3000 B.C. in Babylonia. Among the references to quality from the code of Hammurabi, ruler of Babylonia, is the following excerpt: “The mason who builds a house which falls down and kills the inmate shall be put to death.” This law reflects a concern for quality in antiquity.

Before the industrial revolution, the dominant production method was the craftsmanship model. The master craftsman set standards, reviewed the work of others and ordered rework and revision as necessary. One of the limitations of the craft approach was that relatively few goods could be produced.

In the late 13th century,

Humans vary by nature. Because of this, the judgement made by one person will not be the same as another person’s under identical conditions. Take the case of a judiciary system: what happens if laws are not applied consistently across people? Ultimately it leads to the failure of the system. How can consistency be ensured?

To understand this we need to monitor the system, analyse the variation and take appropriate actions. How does the variation happen? We need to understand the reasons for it. Variation can arise from the subjectivity of judgement; here the attribute that needs to be analysed for variation is ‘judgement’. Statistical tools like Minitab can be used to analyse this variation.
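
To make the idea concrete, the sketch below compares the scores that three hypothetical reviewers give to the same five work products and summarises the spread. The data and the 1-to-10 scale are assumptions; Minitab or a similar tool would be used in practice, and plain descriptive statistics are shown here only for illustration.

# A minimal sketch of analysing variation in judgement, assuming three
# reviewers score the same five work products on a 1-10 scale (hypothetical data).
from statistics import mean, stdev

scores = {
    "Reviewer A": [7, 8, 6, 9, 7],
    "Reviewer B": [5, 6, 5, 7, 6],
    "Reviewer C": [9, 9, 8, 10, 9],
}

for reviewer, s in scores.items():
    print(f"{reviewer}: mean={mean(s):.1f}  stdev={stdev(s):.2f}")

# A wide spread between reviewers on the same work product points to
# subjectivity of judgement rather than to differences in the work itself.
per_item_spread = [max(vals) - min(vals) for vals in zip(*scores.values())]
print("Per-item spread across reviewers:", per_item_spread)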

“Nine women can’t deliver a baby in one month”

– from Frederick P. Brooks’ book “The Mythical Man-Month: Essays on Software Engineering”.

The discipline of project management is about organizing and managing resources (for example, people) to achieve the project requirements in such a way that the project is completed within the defined scope, quality, time and cost constraints. Time management is a critically important skill for any successful project manager. Ten people can pick cotton ten times as fast as one person because the work can be partitioned; but nine women cannot have a baby any faster than one woman can, because the work cannot be partitioned.
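
A toy model can make the contrast concrete. The sketch below compares perfectly partitionable work, which scales as total effort divided by the number of people, with work that requires intercommunication, where Brooks notes that the number of communication paths grows as n(n-1)/2. The per-pair overhead constant and the 12 person-month task size are invented for illustration.

# A simplified model of schedule versus team size, assuming:
#   (a) perfectly partitionable work takes total_effort / n months, and
#   (b) work needing intercommunication adds overhead proportional to the
#       n*(n-1)/2 communication paths (the 0.1 month/pair constant is assumed).

def partitionable_months(total_person_months, n_people):
    return total_person_months / n_people

def communicating_months(total_person_months, n_people, overhead_per_pair=0.1):
    paths = n_people * (n_people - 1) / 2
    return total_person_months / n_people + overhead_per_pair * paths

for n in (1, 3, 6, 12):
    print(f"{n:2d} people: partitionable = {partitionable_months(12, n):5.2f} months, "
          f"with communication = {communicating_months(12, n):5.2f} months")

Beyond a certain team size the communicating task actually gets slower, which is the essence of the quote above.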

Brooks argues that