Some natural synergies exist between the generic practices and their related process areas, as explained in Evidences supporting Implementation of CMMI GPs.

Here, the recursive relationships between generic practices and their closely related process areas are explained.

CMMI GPs-PA

For more information on the required evidence for each generic practice, please refer to Evidences supporting Implementation of CMMI GPs.

Many process areas address institutionalization by supporting the implementation of the generic practices. An example is the Project Planning process area and GP 2.2: to implement this generic practice, we need to implement the Project Planning process area, in whole or in part. The table below shows such related process areas (which support the GPs) as well as the required artefacts (which could serve as evidence for the implementation of the GPs).

In addition to this normal GP-PA relationship, there are some recursive relationships between generic practices and their closely related process areas. This is explained in How does the Generic Practice recursively apply to its related Process Area(s)?

CMMI originated at the SEI (Software Engineering Institute), sponsored by the US Department of Defense. Later, SEI transferred CMMI-related products and activities to the CMMI Institute, a 100%-controlled subsidiary of Carnegie Innovations, Carnegie Mellon University’s technology commercialization enterprise.

The picture below illustrates the evolution of CMMI.

Evolution of CMMI 1

  • When the era of computerized information systems started in the 1960s, there was a significant demand for software development. Even though the software industry was growing rapidly, many software development processes were amateurish and project failures were common.

Subprocesses are components of a larger defined process. For example, a typical development process may be defined in terms of subprocesses such as requirements development, design, build, review and test. The subprocesses themselves may be further decomposed into other subprocesses and process elements. Measurable parameters are defined for these subprocesses to analyse their performance. The subprocesses are further studied to identify the critical ones that influence the process performance objectives (PPOs). Measurable objectives are set for the critical subprocess measures as well. PPOs are derived from Business Objectives (BOs).

In the above paragraph, a linkage is established starting from the subprocesses up to the BOs. In fact, in an organization

Baselines are derived statistically using performance data collected over a period of time. They are indicators of the current performance of an organization. Hence, proper attention must be paid while deriving baselines, as an error can even cause loss of business. Some critical but common mistakes observed in the baselining process are explained below, along with the steps needed to avoid them.

Pitfall #1: Inappropriate parameters for baselining.

The organization must plan and define measures that are tangible indicators of process performance. Baselining does not simply mean gathering and baselining the entire set of data available in the organization. Based on the business objectives, the critical processes of the organization whose performance needs to be analyzed are selected. Then process parameters for monitoring them are defined, data is collected and, finally, baselining is done. There is no harm in collecting and baselining every parameter defined in the organization, but why waste time collecting data that won’t be used?

Pitfall #2: Non-chronological data.

For baselining with control charts, it is essential that the data be chronological. Hence, the timestamp of each data point must be noted during data collection itself.

Pitfall #3: Lack of enough data points.

In the software industry, we often hear complaints from the baselining team about a deficiency of data points. And when the question is put to the project team, they say things like “we just don’t have time” or “it is too difficult”. To derive baselines there needs to be a minimum number of data points, say 10 or so; only then can all four rules of stability be applied to the data. But people often try to build baselines with 8 or fewer data points, which will not indicate the correct performance level of the process under investigation. In such cases where the number of data points is insufficient, baselining needs to be postponed, or the organization can plan to collect more samples by increasing the frequency of data collection.
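
As a minimal sketch of Pitfalls #2 and #3, the snippet below sorts the data chronologically and refuses to compute individuals (X) chart limits when too few points are available. The effort-variance numbers, column names and the 10-point threshold are illustrative assumptions, not prescriptions.

```python
import numpy as np
import pandas as pd

# Hypothetical effort-variance observations with timestamps (illustrative values only).
data = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2013-01-10", "2013-02-05", "2013-03-02", "2013-03-28", "2013-04-20",
        "2013-05-15", "2013-06-11", "2013-07-09", "2013-08-01", "2013-09-03",
        "2013-10-07", "2013-11-12",
    ]),
    "effort_variance_pct": [4.2, 6.1, 5.0, 7.3, 3.8, 5.6, 6.8, 4.9, 5.3, 6.0, 4.4, 5.8],
})

MIN_POINTS = 10  # assumed organizational policy on the minimum sample size for a baseline

def xmr_limits(df, value_col, ts_col="timestamp", min_points=MIN_POINTS):
    """Individuals (X) chart centre line and control limits from chronological data."""
    if len(df) < min_points:
        raise ValueError(
            f"Only {len(df)} points; need at least {min_points}. Postpone baselining."
        )
    ordered = df.sort_values(ts_col)        # Pitfall #2: keep the data chronological
    x = ordered[value_col].to_numpy()
    mr_bar = np.abs(np.diff(x)).mean()      # average moving range of consecutive points
    centre = x.mean()
    return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar  # 2.66 = 3/d2 for n = 2

print(xmr_limits(data, "effort_variance_pct"))
```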

Pitfall #4: Being inconsistent.

While collecting as well as baselining data, one must use consistent methods and processes. What is measured in the post-baseline data needs to be the same as what was measured during the baseline data collection.

Pitfall #5: Taking non-homogeneous data

Data taken for baselining needs to be homogeneous; otherwise the baselining output will not give a correct indication of process performance. The data can be categorized based on qualitative parameters like type of project, complexity of the work, nature of development, programming language, etc., instead of clubbing it all together and thereby losing homogeneity.
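
A small sketch of the idea, assuming a hypothetical measurement repository with a 'project_type' column: each homogeneous category is baselined separately instead of clubbing everything together.

```python
import pandas as pd

# Illustrative repository extract; the column names and values are assumptions.
records = pd.DataFrame({
    "project_type": ["development", "development", "maintenance",
                     "maintenance", "development", "maintenance"],
    "productivity": [12.5, 11.8, 7.2, 6.9, 13.1, 7.5],
})

# Baseline each homogeneous category separately rather than mixing dissimilar projects.
for project_type, group in records.groupby("project_type"):
    print(project_type, "mean:", round(group["productivity"].mean(), 2), "n:", len(group))
```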

Pitfall #6: Absence of data verification.

It is a common mistake to take data blindly from the organizational database and start the baselining process. Data must be verified to ensure its completeness, correctness and consistency before any statistical processing.
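
The checks below are an illustrative sketch only; the column names and ranges are assumptions, but they show the kind of completeness, correctness and consistency verification meant here.

```python
import pandas as pd

def verify_before_baselining(df):
    """Return a list of data-quality issues (illustrative checks and thresholds)."""
    issues = []
    if df["review_effectiveness_pct"].isna().any():
        issues.append("completeness: missing values found")
    if ((df["review_effectiveness_pct"] < 0) | (df["review_effectiveness_pct"] > 100)).any():
        issues.append("correctness: effectiveness outside the 0-100 % range")
    if df.duplicated(subset=["project_id", "period"]).any():
        issues.append("consistency: duplicate project/period records")
    return issues

sample = pd.DataFrame({
    "project_id": ["P1", "P1", "P2"],
    "period": ["2013-Q1", "2013-Q1", "2013-Q1"],
    "review_effectiveness_pct": [55.0, None, 120.0],
})
print(verify_before_baselining(sample))   # flags all three kinds of issues
```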

Pitfall #7: Non-representative sample.

Processes that permit self-selection by respondents are not random samples and often are not representative of the target population. To have a random, representative sample, it has to be ensured that the selection is truly random and representative.
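
A minimal sketch of drawing a simple random sample from the whole population of projects (the project IDs and sample size are hypothetical), rather than depending on whichever projects volunteer their data.

```python
import random

# Hypothetical population of project IDs.
all_projects = [f"PRJ-{i:03d}" for i in range(1, 41)]

random.seed(7)                              # fixed seed only to make the illustration reproducible
sample = random.sample(all_projects, k=10)  # simple random sample, not self-selection
print(sample)
```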

Pitfall #8: Basing the baseline value on assumptions, not real data.

People tend to believe that the collected data follows a normal distribution; sometimes they don’t even check normality statistically. In other cases, even after the data is statistically found to be non-normal, people try to make it normal by removing some data points. It is reasonable to remove one or two points out of 15 to 20 if there are assignable reasons; beyond that, it is not good practice to simply remove data points in order to make the distribution normal. It is essential to check the actual distribution of the data before going ahead with baselining, since control charts work on a normal data set only. One can check the distribution visually using histograms and confirm it statistically using other tools (there are plenty of Excel add-ins to check the distribution).
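
For example, normality can be confirmed with a statistical test such as Shapiro-Wilk; the sketch below uses SciPy on an illustrative sample, and the 0.05 cut-off is only an assumed organizational choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
values = rng.normal(loc=50, scale=5, size=20)   # illustrative sample only

w_stat, p_value = stats.shapiro(values)         # Shapiro-Wilk test of normality
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")

alpha = 0.05                                    # the significance level is an organizational choice
if p_value < alpha:
    print("Evidence of non-normality: do not baseline with a normal-theory control chart as-is.")
else:
    print("No evidence against normality at the chosen significance level.")
```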

Pitfall #9: Ignoring past data when there is no process change.

Suppose an organization does yearly baselining. At the start of 2013, baselines were derived using data points from the previous year, 2012. The objective was set to ‘maintain the current process performance’ with no higher targets, so no improvement initiatives were triggered to raise the performance level. The next year, data points from 2013 were collected for baselining and it was confirmed statistically, say with a 2-sample t-test, that both sets of data (the 2012 and 2013 points) were equivalent. Now, which data set should the organization take for the 2014 baselining? It is a common mistake to ignore the 2012 data and do the baselining with the 2013 data points alone. Since both sets of data points are similar and statistically equal, both must be combined in chronological order while baselining.
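
A small sketch of this decision, using SciPy’s 2-sample t-test on illustrative yearly samples: if the test finds no evidence of a shift, both years are combined in chronological order before baselining.

```python
from scipy import stats

# Hypothetical yearly samples of the same measure, each already in chronological
# order within its year (values are illustrative).
data_2012 = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.2, 5.9, 5.4, 5.0]
data_2013 = [5.3, 5.0, 5.8, 5.6, 4.7, 5.9, 5.1, 5.5, 5.2, 5.4]

t_stat, p_value = stats.ttest_ind(data_2012, data_2013)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

if p_value >= 0.05:                        # no statistical evidence of a shift between the years
    baseline_data = data_2012 + data_2013  # combine both years, keeping chronological order
else:
    baseline_data = data_2013              # process shifted: baseline on the recent data only
```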

Pitfall #10: Blindly taking the significance level as 0.05

The null hypothesis is rejected if the p-value is less than the significance level. In industry, the significance level is usually taken as 0.05, but this value is actually an arbitrary choice. The higher the significance level, the higher the risk of rejecting a null hypothesis that is actually true. (Refer to the blog post on hypothesis tests for more details on the p-value.) It is up to the organization to decide that significance level.

Pitfall #11: Removing out-of-turn points when there are no assignable causes

Out-of-turn points cannot be removed if there are no assignable reasons behind them. If there is no reason for an out-of-turn point, it implies that the data is not stable and one cannot go ahead with baselining.
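
A tiny sketch of how out-of-turn points can be flagged for investigation rather than silently removed (the data and limits are illustrative).

```python
def out_of_control_points(values, lcl, ucl):
    """Flag points outside the control limits; removal still needs an assignable cause."""
    return [(i, v) for i, v in enumerate(values) if v < lcl or v > ucl]

# Illustrative data and limits.
flagged = out_of_control_points([5.1, 9.7, 5.3, 4.9, 5.6], lcl=3.0, ucl=7.0)
for index, value in flagged:
    print(f"Point {index} = {value}: investigate for an assignable cause; "
          "if none is found, the process is not stable enough to baseline.")
```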

Pitfall #12: Placing unfeasible values as control limits

Sometimes the control limits derived statistically during the baselining process may be unworkable. For example, a baseline of review effectiveness data (in %) cannot have an upper control limit (UCL) of 120%, even though it is statistically correct. Similarly, a coding speed baseline cannot have a lower control limit (LCL) of -15 lines of code/hr. All such values are unusable, so an organization needs a policy to handle such situations. For example, it can use the 25th and 75th percentiles of the stable data as control limits in such a scenario, or it can decide to change the LCL/UCL to the minimum/maximum permissible value of that parameter, i.e. change the LCL of coding speed to zero instead of a negative value and the UCL of review effectiveness to 100% in the above examples.
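
One possible way to encode such a policy is sketched below; the function name, values and the percentile option are assumptions for illustration, not a standard.

```python
import numpy as np

def feasible_limits(values, lcl, ucl, minimum=None, maximum=None, use_percentiles=False):
    """Adjust statistically derived limits that fall outside the physically possible range."""
    if use_percentiles:
        # Alternative policy from the text: 25th/75th percentiles of the stable data.
        return float(np.percentile(values, 25)), float(np.percentile(values, 75))
    if minimum is not None:
        lcl = max(lcl, minimum)     # e.g. coding speed cannot be negative
    if maximum is not None:
        ucl = min(ucl, maximum)     # e.g. review effectiveness cannot exceed 100 %
    return lcl, ucl

print(feasible_limits([62, 70, 75, 81, 88], lcl=48.0, ucl=120.0, minimum=0, maximum=100))
print(feasible_limits([150, 180, 200, 210, 230], lcl=-15.0, ucl=380.0, minimum=0))
```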

Pitfall #13: Stating the baseline without contextual information

Stating the context involves a consistent understanding of the result of the measurement process. Contextual information refers to the additional data related to the environment in which a process is executed. As part of the contextual information, the timestamp, context, measurement units, etc. are collected.
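
As a small illustration, a measurement record can carry its contextual information alongside the value itself; the field names below are purely illustrative.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MeasurementRecord:
    """One baseline data point stated together with its contextual information."""
    value: float
    unit: str            # measurement unit, e.g. "defects/KLOC"
    timestamp: datetime  # when the measurement was taken
    context: str         # e.g. project type, life-cycle phase, tool used

point = MeasurementRecord(0.8, "defects/KLOC", datetime(2014, 3, 5),
                          "maintenance project, code review")
print(point)
```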

Pitfall #14: Inappropriate communication mode.

Nowadays, computer software supports a wide range of graphs, and people try to use them all at once, finally making the real message hidden or complex. One must select the right graph to communicate the processed data. Run charts, pie charts, control charts and bar charts are all good means of communication, but the best fit must be chosen.

Pitfall #15: Not beginning with the end in mind.

One must determine in advance how the processed data is going to be used. This helps in making good choices about what data to collect (never waste time collecting data that won’t be used) and what tool to use. One must also plan to measure everything needed to know how the effect of the change will be calculated. It is usually too late to go back and correct things if something is left out.

The review process is intended to ensure the completeness and correctness of work products. Merely performing reviews does not meet the full purpose. Reviews must be followed with:

Proper correction

The defects need to be fixed and the completion status of reported defects needs to be logged properly. After rework the work product needs to be labelled suitably.

Verification by reviewers after correction

The reviewer needs to verify the work product against the reported defects.

 

Root cause analysis of defects

Major or repetitive defects need to be further analysed to determine the root causes.

 

Taking corrective actions on root causes of defects

Following the root cause analysis, action plans to prevent recurrence of the defects need to be put in place.

Review Data Analysis

Review parameters like review speed, review effectiveness, etc. need to be analysed and compared against the goals set for those parameters. Further actions need to be deployed based on the analysis. For more information, please refer to the post on Review Data Analysis.

Making Frequently Committed Defect List (FCDL)

An FCDL can be made by taking review feedback from the reviewer, and it needs to be continuously updated after each round of review. The key is to learn from faults and consciously avoid repeating them.

Conducting Trainings

Trainings can be triggered to avoid repetition of defects or to improve the depth of review.

Communicating the status to stakeholders

Finally, it is important to ensure that the status of review, rework, corrective actions etc. is communicated to all stakeholders.

 

Reviews can be analysed based on a number of metrics. A few review metrics are given below, and a short calculation sketch follows the list.

  • Review Coverage: Reviewed Size / Total Size
    • For example, if 50 pages out of 100 pages of an artefact are reviewed, then the review coverage is 50%. In case a 100% review is not possible, the Project Manager or concerned person can identify the critical areas in the work product which need to be mandatorily reviewed.
  • Review Speed: Reviewed Size / Total time taken for review
    • For example, if 1000 Lines Of Code (LOC) are reviewed in 5 hours, then the review speed is 200 LOC/hr.
  • Review Effectiveness: Defects caught in review / Total number of defects caught in review and testing
    • For example, if 100 defects were caught in a project including user acceptance testing and 50 of those defects were caught during reviews, then the Review Effectiveness is 50%.
  • Review Defect Density: Defects caught in review / Reviewed Size
    • For example, if 100 defects were caught during code review (reviewed LOC = 1000), then the Review Defect Density is 0.1 defects/LOC or 100 defects/KLOC.
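
The sketch below simply reproduces the four calculations above as small Python functions, using the same worked numbers from the list.

```python
def review_coverage(reviewed_size, total_size):
    return 100.0 * reviewed_size / total_size           # %

def review_speed(reviewed_size, review_hours):
    return reviewed_size / review_hours                 # e.g. LOC/hr or pages/hr

def review_effectiveness(review_defects, total_defects):
    return 100.0 * review_defects / total_defects       # %

def review_defect_density(review_defects, reviewed_size):
    return review_defects / reviewed_size               # e.g. defects/LOC

# Reproducing the worked examples from the list above.
print(review_coverage(50, 100))           # 50.0 %
print(review_speed(1000, 5))              # 200.0 LOC/hr
print(review_effectiveness(50, 100))      # 50.0 %
print(review_defect_density(100, 1000))   # 0.1 defects/LOC (i.e. 100 defects/KLOC)
```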

Analysis is not limited to defining and evaluating a metric; recommendations must be put forth from the observed data. Sometimes,

Audits are mechanisms for ensuring the integrity of the product as well as the process. During audits, a number of deviations may be revealed. It is extremely important to ensure that the deviations are documented and reported properly. Deviations can be specific or generic in nature; if a deviation is generic, the auditor needs to report multiple instances of the issue. A well-documented deviation should be self-explanatory and should address the following questions.

  • What is the issue?
  • What is the importance of the issue?
  • What is the impact of the issue?
  • When was the issue observed?
  • Where was the issue observed?

A non-conformance could be stated simply as

‘Planned review of a work product is not done.’

This statement is not self-explanatory; instead, it triggers other questions, such as whether any alternative methods were adopted in the absence of the review, or what the impact could be. So a non-conformance needs to be reported completely and correctly.

The above deviation could be made much more self-explanatory if written as below:

“In Project ‘A’, implementation is done by average-skilled resources. Even though an independent review was planned, it did not happen. Review practices are required as per organizational policies to produce high-quality work products. The project team did not take any additional measures or alternative mechanisms to overcome the issue. In this scenario, the absence of review will lead to more testing bugs, thereby causing schedule slippage or a poor quality product.”

1. Interviewing the Project team

A quality audit can be done easily by interviewing the project manager as well as some of the project team members. Before starting the interview, try to understand the project scope, known risks, problems, etc. You can even have some quantitative analysis done on effort variance or schedule slippage (assuming sufficient access permission is provided to pull the data for analysis). During the interview, the auditor can ask about the project, its current status, etc., and can even ask some pre-planned general questions to evaluate the knowledge of the interviewee (generic questions could be based on the management system in the organization). Then observe how the interviewee responds: the response itself will open up another question, and it might give you hints leading to a different set of questions. So an interview-based audit is somewhat easy compared to a remote audit. But the auditor has to lead the show; otherwise, the auditor might be misled by an interviewee trying to hide non-conformances.

2. Check Tailoring

The project team will define their own process by suitably tailoring the Organizational Set of Standard Processes (OSSP). These tailored processes should be submitted to the process owners of the organization, such as the Engineering Process Group; only with their approval can the tailored process be executed within the project. The auditor has to check the necessity of this tailoring, the approval details, etc. The auditor also has to ensure that the tailored process is not a risk to the organizational business needs.

3. Project compliance audit

Normally, project activities are executed as per a plan. The plan could be a management plan, test plan, integration plan, configuration management plan, QA plan or an integrated master plan. If there is a template defined for these plans, it will adhere to the organizational practices; thereby a plan template will detail all the processes which are supposed to be executed within the project as demanded by the organization, complying with specific standards/models. So during a project audit, it is very important to ensure that the project plan used is in line with the template defined in the organization. It has to be ensured that sections of the template are not removed when it is taken up for the project. Each section in the plan might correspond to a specific practice to be adhered to, so there are chances of sections being removed if the project team does not want to practice it.

4. Plan based audit

After ensuring that the plans are compliant with the organizational template, go through the plan section by section. The plan will direct you to each artefact in the Configuration Management (CM) tool. Take the respective artefact, or Configurable Item (CI), and do a configuration audit on it. A CM audit cannot be done on all the CIs, so do it on a random sample. While checking a CI for process compliance, it may lead to another audit. For example, if a requirement document is taken, first check the contents for completeness and correctness. Check whether any legal or regulatory requirements are mentioned; if so, trace them in the lower-level documents like the design. If they cannot be traced, it could be a non-compliance. Then check other CM aspects of the requirement document, like the document history. If the reviewer column in the document history is unfilled, check whether the review was actually done or not, assuming the review was not tailored out. Likewise, the audit goes on. Then go back to the project plan and continue with the next section.

5. CM audit

Functional and physical configuration audits need to be done on work products. A functional configuration audit is a kind of work product audit; it is done to ensure the functional performance of the work products. As part of a physical configuration audit, check for the correct versions and ensure a properly filled-in document history/amendment record, impact analysis documents for changes, the change tracking sheet, the traceability document, etc.

6. Quantitative Data Audit

The auditor can randomly verify the collected data. If there are specific measures to be collected as instructed by the organization or the customer, ensure their availability. The auditor can check the integrity of the collected data. In addition, check whether corrective actions planned in the milestone analysis are implemented in the project or not.

7. Work product audit

In addition to process audits, work products are also audited to check compliance; the CMMI process area PPQA talks about the same. The auditor has to do some sample validation of final work products. If it is a product, testing might be a mechanism for the work product audit. It need not be regular testing as done by testers; instead, the auditor can take a representative sample of Test Cases which are already certified as ‘passed’ by testers and execute them to ensure compliance.

8. CAPA based audit

As part of internal quality audits, corrective actions are planned for non-conformances. Over the course of time, those actions/plans are usually ignored. The auditor has to ensure compliance with those corrective or preventive actions.

9. Audit of customer driven points

Within the project, there may be a lot of customer-reported issues, customer feedback, complaints, etc. Timely analysis and proper actions need to be taken on all those points. The audit must check and report deviations if the issues are not addressed.

10. Check List based audit

Finally, take the audit checklist and ensure coverage. A purely checklist-based audit is not a recommended practice, as checklists may make your audit mechanical. But checklists can definitely be used to ensure coverage in the final stage of your audit.

A gap analysis is normally done to identify the gaps, or missing elements, in the existing system. The current system is compared to a master standard while doing the gap analysis. A gap analysis reveals gaps in written procedures as well as gaps in practice.

If the organization is already compliant with some older version of the standard or with other standards, some processes and procedures may already exist in the organization. So the first step is to