Horizontal traceability shows the relationships among related items, such as between requirements themselves; it traces dependent items within a development phase. Vertical traceability identifies the source of requirements, typically tracing from requirements to design, to source code, and to test cases.

Horizontal traceability is an aspect identifying non-hierarchical similarities, mutual properties, interactions, etc. among requirements and work products. For example, assume we want to implement a login function in four different types of browsers, with four sub-teams doing the work. The functional requirement remains the same for all of them, so if the requirement changes, the change needs to be reflected across all four browsers. These kinds of dependent requirements are easily traceable if horizontal traceability is maintained.


Requirements are the basis for design. Requirements management processes manage all requirements received or generated by the project, including both technical and nontechnical requirements as well as requirements levied on the project by the organization.

In CMMI, the Requirements Management (REQM) Process Area provides traceability of requirements from customer requirements to product requirements to product component requirements. The Requirements Development (RD) Process Area describes customer requirements and product and product component requirements.

This post is a consolidation of different types of requirements as described in CMMI, starting from Customer requirements to Derived requirements.


Customer requirements

Customer needs and constraints are analyzed and elaborated to obtain prioritized customer requirements. In Agile environments, customer needs and ideas are iteratively elicited, elaborated, analyzed, and validated.

Contractual requirements

Customer requirements are further refined and made suitable to be included in the contractual documents or supplier agreements. Contractual requirements include both technical and nontechnical requirements necessary for the acquisition of a product or service.

Product and product component requirements

The customer functional and quality attribute requirements are usually expressed in the customer’s terms and can be nontechnical descriptions. A translation of requirements from the customer’s language to the developer’s language gives rise to product requirements. In other words, product requirements are the expression of the customer requirements in technical terms that can be used for design decisions. An example of this translation is found in the first House of Quality in Quality Function Deployment, which maps customer desires into technical parameters. For instance, “solid sounding door” may be mapped to size, weight, fit, dampening, and resonant frequencies.

The product architecture provides the basis for allocating product requirements to product components. The developer uses product requirements to guide the design and building of the product or service. Product component requirements are a complete specification of a product or service component, including fit, form, function, performance, and any other requirement.

In short, product and product component requirements are the refined customer requirements. This refinement converts implicit customer requirements into explicit derived requirements, such as interface requirements.

Derived Requirements

Derived requirements are requirements that are not explicitly stated in the customer requirements but are inferred from contextual requirements (e.g., applicable standards, laws, policies, common practices, management decisions) or from requirements needed to specify a product or service component. Derived requirements can also arise during analysis and design of components of the product or service. They also address the needs of other lifecycle phases (e.g., production, operations, and disposal) to the extent compatible with business objectives.

Interface requirements

As internal components are developed, additional interfaces are defined and interface requirements are established. Interfaces between functions (or between objects or other logical entities) are identified. Interface requirements between products or product components identified in the product architecture are defined. They are controlled as part of product and product component integration and are an integral part of the architecture definition. Before integration, each product component should be confirmed to be compliant with its interface requirements. Interfaces can drive the development of alternative solutions described in the Technical Solution process area.

This post would not be complete without touching upon the kinds of requirements below.

  • Allocated requirement

Requirements are said to be allocated when higher level requirements are levied on a lower level architectural element or design component.

More generally, requirements can be allocated to other logical or physical components including people, consumables, delivery increments, or the architecture as a whole, depending on what best enables the product or service to achieve the requirements.

  • Service requirements

Service requirements are the complete set of requirements that affect service delivery and service system development. Service requirements include both technical and nontechnical requirements.

  • Technical requirements

Technical requirements are the properties (i.e., attributes) of products or services to be acquired or developed.

  • Nontechnical requirements

Nontechnical requirements are the requirements affecting product and service acquisition or development that are not properties of the product or service.

Examples include the number of products or services to be delivered, data rights for delivered COTS and non-developmental items, delivery dates, and milestones with exit criteria. Other nontechnical requirements include additional conditions and commitments identified in agreements, needed capabilities, conditions derived from business objectives, and work constraints associated with training, site provisions, deployment schedules, etc.

Baselines are created for the core “value-generating” processes of the business in an organization. From the observed measurement data, the organization periodically derives various process performance baselines (PPBs). In the software industry there can be PPBs for coding speed, defect density, productivity, testing speed, review effectiveness, etc. Measurable improvement targets (process performance objectives) are then set for the selected processes. For example, an objective could be to increase coding speed by 10% (some improvement initiatives are clearly needed to achieve targets that are above current performance). There can even be an objective to maintain performance at the current level instead of improving upon it. The process performance objectives are based on

    • the organization’s business objectives
    • the past performance of projects
    • customer requirements

In all these cases the organization needs a reference value, a baseline, to know where it stands right now; otherwise the goals will be subjective. Baselines show the current performance measures of an organization. Now assume a case where an organization has just started to collect measurement data, so initially there is no baseline. In this scenario, how will the organization set objectives without reference baselines?

There are several options, as below.

  1. Industry benchmarks — The organization can look into industry benchmarks. If the organization performs work of a similar nature, as per the contextual information of the benchmarks, it can use those values as references for setting targets.
  2. Expert discussions and brainstorming — There may be employees in the organization skilled enough to come up with reference values for the critical processes for setting the targets.
  3. Collecting information from similar organizations — The organization can refer to the baselines of other similar organizations via employee contacts and use them as reference values.
  4. Process Performance Models (PPMs) — If prediction models suitable to the requirements of the organization are defined, the dependent parameters (y values) can be predicted, assuming the organization has some knowledge of the values of the independent parameters (x values); a minimal sketch follows this list.
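
As a rough illustration of the PPM option, here is a minimal sketch in Python. The model form, the coefficients, and the parameter names are hypothetical examples, not values from any real baseline.

```python
# Minimal sketch of using a Process Performance Model (PPM) to predict a
# dependent parameter (y) from an independent parameter (x).
# The linear form and its coefficients below are hypothetical.

def predict_defect_density(review_effort_pct: float) -> float:
    """Predict delivered defect density (defects/KLOC) from review effort (%).

    Hypothetical fitted model: y = 4.2 - 0.15 * x
    """
    return 4.2 - 0.15 * review_effort_pct

# With no historical baseline, a target can still be anchored on the
# model's prediction for a planned level of the independent parameter.
planned_review_effort = 12.0  # percent of total effort (assumed value)
print(predict_defect_density(planned_review_effort))  # -> 2.4
```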

When a reference value is obtained, the organization can build targets upon it. Once the organization has collected enough data over a period of time, PPBs can be built, and those PPBs can be used to set targets for the coming year. In this way the process continues, and baselines are revised periodically, say on a yearly basis.

Add your points if you have other methods to set objectives when there is no baseline.

In this article, seven statistical tests are explained that are essential for statistical analysis inside a CMMI High Maturity (HM) compliant project or organization.

1       Stability test

Definition:

Data stability for the selected parameter is tested using Minitab before making performance baselines.

Steps:

  1. Go to Stat -> Control Charts -> I-MR (in Variables, enter the column containing the parameter).
  2. From the Estimate section, choose ‘Average Moving Range’ as the method of estimating sigma and ‘2’ as the moving range length.
  3. From the Tests section, choose the specific tests to perform.
  4. From the Options section, enter the sigma limit positions as 1 2 3.

Results:

After eliminating all the out-of-control points (addressing their assignable causes), the process attains stability and is ready for baselining.
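
Outside Minitab, the same I-MR limits can be reproduced directly. Below is a minimal sketch in Python, assuming individual measurements and sigma estimated from the average moving range of length 2 (for which the d2 constant is 1.128); the sample data is illustrative only.

```python
import numpy as np

def imr_limits(data):
    """Individuals (I) chart limits, with sigma estimated from the
    average moving range of length 2, as in an I-MR chart."""
    x = np.asarray(data, dtype=float)
    mr = np.abs(np.diff(x))    # moving ranges of length 2
    sigma = mr.mean() / 1.128  # d2 = 1.128 for subgroup size 2
    center = x.mean()
    return center - 3 * sigma, center, center + 3 * sigma

# Illustrative data: weekly coding speed (LOC/hour)
lcl, cl, ucl = imr_limits([11.2, 10.8, 12.1, 11.5, 10.9, 11.8, 12.4])
print(f"LCL={lcl:.2f}, CL={cl:.2f}, UCL={ucl:.2f}")
```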

2       Capability test

Definition:

Once the selected parameter is baselined, its capability to meet the specification limits is tested.

Steps:

  1. Go to Stat -> Quality Tools -> Capability Sixpack (Normal). Choose Single Column (in Variables, enter the column containing the parameter), enter ‘1’ as the subgroup size, and enter the lower and upper specs.
  2. From the Estimate section, choose ‘Average Moving Range’ as the method of estimating sigma and ‘2’ as the moving range length.
  3. From the Options section, enter ‘6’ as the sigma tolerance, choose ‘within subgroup analysis’ and ‘percents’, and opt to display the graph.

Results:

If the control limits are within the specification limits, or the Cp and Cpk values are greater than or equal to one, the data is found to be capable.
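
The underlying Cp and Cpk calculation can also be sketched directly; a minimal example, assuming within-subgroup sigma estimated from the average moving range as above, with illustrative data and specification limits:

```python
import numpy as np

def cp_cpk(data, lsl, usl):
    """Cp and Cpk with within-subgroup sigma estimated from the
    average moving range (d2 = 1.128 for moving ranges of length 2)."""
    x = np.asarray(data, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    mean = x.mean()
    cp = (usl - lsl) / (6 * sigma)                   # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # actual capability
    return cp, cpk

cp, cpk = cp_cpk([11.2, 10.8, 12.1, 11.5, 10.9, 11.8, 12.4], lsl=9.0, usl=14.0)
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}")  # capable if both are >= 1
```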

3       Correlation test

Definition:

A correlation test is conducted between each independent parameter and the dependent parameter (if both are of continuous data type) in the Process Performance Model.

Steps:

  1. Go to Stat -> Basic Statistics -> Correlation (opt to display p-values).

Results:

For each correlation test, the p-value has to be less than 0.05 (or the p-value decided within the organization based on risk analysis).
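
The same check can be done outside Minitab; a minimal sketch using scipy’s Pearson correlation, with illustrative data:

```python
from scipy.stats import pearsonr

# Illustrative data: review effort (%) vs. delivered defect density
review_effort  = [8, 10, 12, 9, 15, 11, 14, 7]
defect_density = [3.1, 2.6, 2.2, 2.9, 1.7, 2.4, 1.9, 3.4]

r, p_value = pearsonr(review_effort, defect_density)
print(f"r={r:.3f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Correlation is statistically significant at alpha = 0.05")
```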

4       Regression test

Definition:

A regression test is conducted including all the independent parameters and the dependent parameter in the Process Performance Model.

Steps:

  1. Go to Stat -> Regression -> Regression (in Response and Predictors, enter the columns containing the dependent and independent parameters respectively).
  2. From the Storage section, include Residuals as well.

Results:

  • The p-value has to be less than 0.05 for each factor as well as for the regression equation obtained (or the p-value decided within the organization based on risk analysis).
  • R-Sq (adj) has to be greater than 70% (or the value decided within the organization based on risk analysis) to ensure correlation between the independent parameters and the dependent parameter. Otherwise, the parameter cannot be taken.
  • The Variance Inflation Factor (VIF) has to be less than 10. If VIF is greater than 10, a correlation test (Stat -> Basic Statistics -> Correlation) is conducted among the different parameters influencing the Process Performance Model. Where correlation is high (greater than 0.5 or less than -0.5), the factors have a dependency; if the degree of correlation is quite high, one of the factors is dropped or reworked into new terms.
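
The regression checks above (per-factor p-values, adjusted R-squared, VIF) can be reproduced with statsmodels; a minimal sketch with illustrative data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Illustrative data: two independent parameters, one dependent parameter
df = pd.DataFrame({
    "review_effort":  [8, 10, 12, 9, 15, 11, 14, 7],
    "test_coverage":  [70, 75, 82, 72, 90, 78, 88, 65],
    "defect_density": [3.1, 2.6, 2.2, 2.9, 1.7, 2.4, 1.9, 3.4],
})

X = sm.add_constant(df[["review_effort", "test_coverage"]])
model = sm.OLS(df["defect_density"], X).fit()

print(model.summary())                        # p-values per factor and overall
print("Adjusted R-squared:", model.rsquared_adj)

# VIF per predictor; values above 10 indicate multicollinearity
for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))
```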

5       Normality test

Definition:

Normality of the data is tested using the Anderson-Darling test.

Steps:

  1. Go to Stat > Basic Statistics > Normality Test, and choose Anderson-Darling as the test.
  2. In Variables, enter the columns containing the measurement data.

Results:

For the data to be normally distributed, the null hypothesis cannot be rejected. For this, the p-value has to be greater than 0.05 (or the p-value decided within the organization based on risk analysis) and the A-squared value has to be less than 0.757.
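
A minimal sketch of the Anderson-Darling test with scipy; note that scipy reports the A-squared statistic against critical values per significance level rather than a p-value (data is illustrative):

```python
from scipy.stats import anderson

data = [11.2, 10.8, 12.1, 11.5, 10.9, 11.8, 12.4, 11.1, 11.6, 12.0]

result = anderson(data, dist="norm")
print("A-squared statistic:", result.statistic)

# Critical values are paired with significance levels (15%, 10%, 5%, 2.5%, 1%)
for cv, sl in zip(result.critical_values, result.significance_level):
    verdict = "cannot reject normality" if result.statistic < cv else "reject normality"
    print(f"at {sl}% significance: critical value={cv}, {verdict}")
```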

6       Test for Two Variances

Definition:

The Test for Two Variances is conducted to analyse whether the variances of two sets of data are significantly different.

The null hypothesis (the two samples have equal variance) is tested against the alternative hypothesis (the two samples have unequal variance).

Steps:

  1. Go to Stat > Basic Statistics > 2 Variances.
  2. Opt ‘Samples in Different Columns’. In Variables, enter the columns containing the measurement data.

Results: If the test’s p-value is less than the chosen significance level (normally 0.05), the null hypothesis is rejected.
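
Minitab’s 2 Variances test reports an F-test (for normal data) alongside Levene’s test; a minimal sketch of the latter with scipy, using illustrative data:

```python
from scipy.stats import levene

before = [11.2, 10.8, 12.1, 11.5, 10.9, 11.8, 12.4]
after  = [11.9, 12.3, 12.0, 12.6, 11.8, 12.2, 12.5]

stat, p_value = levene(before, after)
print(f"statistic={stat:.3f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the variances differ significantly")
else:
    print("Cannot reject the null hypothesis of equal variances")
```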

7       Two sample T- Test

Definition:

A two-sample t-test is used to check whether the means of two groups of data, typically from two periods, are significantly different.

The null hypothesis is checked against one of the alternative hypotheses, depending on the situation.

Steps:

  1. Go to Stat > Basic Statistics > 2-Sample t.
  2. Opt ‘Samples in Different Columns’. In Variables, enter the columns containing the measurement data (the first should be the initial data and the second the current data).
  3. Check or uncheck the box for “Assume equal variances” depending on the F test results (the Two Variances test results).
  4. In Options, choose the required alternative: ‘not equal’, ‘less than’ or ‘greater than’.
  5. Set the test difference to 0 and the confidence level to 95.

Results:

If the test’s p-value is less than the chosen significance level (normally 0.05), null hypothesis has to be rejected.
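
A minimal sketch of the two-sample t-test with scipy (the alternative keyword needs scipy 1.6 or later; data is illustrative):

```python
from scipy.stats import ttest_ind

initial = [11.2, 10.8, 12.1, 11.5, 10.9, 11.8, 12.4]
current = [11.9, 12.3, 12.0, 12.6, 11.8, 12.2, 12.5]

# equal_var follows the two-variances (F test) result;
# alternative can be "two-sided", "less", or "greater"
stat, p_value = ttest_ind(initial, current, equal_var=True,
                          alternative="two-sided")
print(f"t={stat:.3f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the means differ significantly")
```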

The CMMI appraisal method is known as SCAMPI (Standard CMMI Appraisal Method for Process Improvement). The result of an appraisal may include a rating, as demanded by the appraisal sponsor.

  • In a continuous representation
    • Rating is a “capability level profile” (e.g. Requirements Development Process Area is at Capability level 3).
  • In a staged representation
    • Rating is a “maturity level rating” (e.g. maturity level 2).

Maturity level rating is an easier way for organizations to compare themselves with other organizations.

But with a capability level rating, how is comparison possible? If each organization selects the same process areas, capability level profiles can be used for comparison. But there are still some limits to this approach.

Is there a way to convert the generated capability level rating to a maturity level rating?

Yes, there is. It is known as equivalent staging. Equivalent staging enables an organization using the continuous representation to convert a capability level profile to the associated maturity level rating.

How is this translation possible?

Before looking at the translation, let’s see how a capability level profile is maintained. It could be a graph of process areas and their associated capability levels (achieved as well as targeted), as shown below. (The labels ‘1, 2, 3’ on the Y axis represent capability level 1, capability level 2, and capability level 3 respectively.)

Combined Target and Achievement Profile

In the graph, all the Process Areas (PAs) are at Capability Level 1 (CL 1) except the Configuration Management (CM) PA.

There are two types of capability level profiles, as listed below

  • An achievement profile represents the currently achieved capability levels in selected process areas.
  • A target profile represents the capability levels that an organization wishes to achieve.

Maintaining capability level profiles is advisable when using the continuous representation as it aids an organization to plan and track its progress for each selected process area.

Now back to the topic: how is equivalent staging done?

The most effective way to represent equivalent staging is to provide a sequence of target profiles, each of which is equivalent to a maturity level rating of the staged representation. The result is a target staging that is equivalent to the maturity levels of the staged representation. The figure below shows a summary of the target profiles that must be achieved when using the continuous representation to be equivalent to maturity levels 2 through 5. Each colored area in the capability level columns represents a target profile that is equivalent to a maturity level.

Target Profiles and Equivalent Staging

The following rules summarize equivalent staging:

To achieve maturity level 2, all process areas assigned to maturity level 2 must achieve capability level 2 or 3.

To achieve maturity level 3, all process areas assigned to maturity levels 2 and 3 must achieve capability level 3.

To achieve maturity level 4, all process areas assigned to maturity levels 2, 3, and 4 must achieve capability level 3.

To achieve maturity level 5, all process areas must achieve capability level 3.
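
These rules are mechanical enough to express in code. Below is a minimal sketch; the process-area-to-maturity-level assignments are a small illustrative subset, not the full CMMI list.

```python
# Illustrative subset of process areas and the maturity level each is assigned to
PA_ASSIGNED_ML = {"REQM": 2, "PP": 2, "PMC": 2, "CM": 2,
                  "RD": 3, "TS": 3, "VER": 3, "VAL": 3,
                  "OPP": 4, "QPM": 4,
                  "CAR": 5, "OPM": 5}

def equivalent_maturity_level(achieved):
    """Translate a capability level profile (PA -> achieved capability level)
    into the equivalent maturity level, per the rules above."""
    level = 1
    for target in (2, 3, 4, 5):
        required = 2 if target == 2 else 3  # ML2 accepts CL2 or CL3; others need CL3
        pas = [pa for pa, ml in PA_ASSIGNED_ML.items() if ml <= target]
        if all(achieved.get(pa, 0) >= required for pa in pas):
            level = target
    return level

# A profile with CL3 in all ML2 and ML3 process areas is equivalent to ML3
profile = {pa: 3 for pa, ml in PA_ASSIGNED_ML.items() if ml <= 3}
print(equivalent_maturity_level(profile))  # -> 3
```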

In short, Equivalent staging allows the unidirectional translation of assessment results from the continuous to the staged representation. Such staging permits benchmarking of progress among organizations.

To know more on equivalent staging in level 4 and 5, please read How can you achieve CMMI High Maturity in a continuous Representation..?

In CMMI, have you noticed the evolutionary path for Process Areas? In fact, CMMI maturity levels are defined as evolutionary stages of process improvement.

An evolutionary process is a process whose stages consist of expanding increments of the defined process.

Watts Humphrey’s Capability Maturity Model (CMM) was published in 1988. According to him organizations mature their processes in stages based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software development practices within an organization, rather than measuring the maturity of each separate development process independently.

CMMs focus on improving processes in an organization. CMMI describes an evolutionary improvement path for the processes from ad hoc, immature processes to disciplined and mature processes with improved quality and effectiveness.

In some Process Areas the evolutionary path is very evident. It is detailed below.

Evolutionary paths are expected to be one of the new features in CMMI Next Generation.

Measurement and Analysis starts at Maturity Level 2 and becomes more quantitative and statistical at the High Maturity levels. Even at Maturity Level 3, the Verification and Validation process areas demand analysis of peer review data (SP 2.3 in VER) and validation results (SP 2.2 in VAL).

Requirements Management at Maturity Level 2 talks about ensuring alignment between project work products and requirements through reviews (SP 1.5). The Verification process area at Maturity Level 3 supports this process.

Risk Management begins at Maturity Level 2 (in Project Planning SP 2.2 and Project Monitoring and Control SP 1.3) and becomes robust at Maturity Level 3.

The Project Planning process area talks about planning for the knowledge and skills needed to perform the project. This is aided by the Organizational Training process area at Maturity Level 3, by which people can perform their roles effectively and efficiently.

Project Monitoring and Control talks about issue analysis at Maturity Level 2, which becomes much more elaborate at ML5 in the Causal Analysis and Resolution process area.

The Process Area Organizational Process Focus can be treated as a younger sibling of the PA Organizational Performance Management.

Integrated Project Management at Maturity Level 3 talks about establishing an integrated and defined process tailored from the organization’s set of standard processes. Quantitative Project Management at Maturity Level 4 talks about composing the defined process quantitatively to help the project achieve its quality and process performance objectives.

The picture below illustrates the evolutionary path of the mentioned Process Areas.

Evolutionary PAs

“Walking on water and developing software from a specification are easy if both are frozen.” –  Edward Berard

Baseline means ‘a line which is the base’. The word baseline may refer to surveying, typography, budgeting, pharmacology, configuration management, calculations, etc.

In the IT industry, a baseline mainly implies

  • A Configuration Baseline – A configuration of software, hardware or a process that is established and documented as a point of reference
  • A Process Performance Baseline (PPB) – A datum used as the basis for calculation or comparison

Configuration baseline

As and when a work product is created and ready for review, it should be labeled as ‘ready for review v0.9’ (the version number may change based on the project’s strategy), and at the same time the work product should be available in the configuration tool. Once the review, rework, and verification are completed, the work product is ready for approval. There should be an identified approving authority for each work product. After approval, the work product is baselined (the baseline string can be ‘baselined v1.0’).

Whenever a change is triggered in the work product, an impact analysis is performed first. Impact analysis helps to understand the change in cost, schedule, competency requirements, affected work products, etc. The Change Control Board (as defined in the project plan) should approve the change in order to update the work product. After the update, the work product again undergoes the review, approval, and baselining process (if the change is minor, the review process can be skipped, as defined in the project plan). The baseline string can be ‘baselined v1.1’ or ‘baselined v2.0’, depending on whether the change is minor or major. All change requests should be recorded and tracked for future reference.
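
As a minimal sketch of the version-labelling convention described above (the strings and the minor/major rule are this post’s project-specific examples, not a universal standard):

```python
def next_baseline_version(current, change):
    """Bump a baseline version string such as '1.0' after an approved change:
    a major change increments the major number, a minor change the minor number."""
    major, minor = (int(part) for part in current.split("."))
    if change == "major":
        return f"{major + 1}.0"
    return f"{major}.{minor + 1}"

print(next_baseline_version("1.0", "minor"))  # -> 1.1 ("baselined v1.1")
print(next_baseline_version("1.0", "major"))  # -> 2.0 ("baselined v2.0")
```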

If there is no configuration management, many people end up working on a work product that keeps altering. This creates the copy-over problem, which causes versions of files to be lost, and the number of uncontrolled copies makes it difficult to determine where the latest version really exists.

A configuration baseline is the configuration of a service, product or infrastructure that has been formally reviewed and approved, that thereafter can be changed only through formal change mechanisms.

Process Performance Baseline

Project data is recorded in the organizational metrics repository, and from that data performance baselines are created: in most cases the center line and the upper and lower control limits from a control chart of the process execution. These baselines are used as a benchmark for comparing actual process performance against expected process performance and are revisited at a defined frequency.

These baselines are used

  • In project estimation process
  • While setting Objectives for the project
  • As inputs to the process performance models

A baseline helps to monitor current project performance and also to improve the accuracy of future estimates.

In short

A configuration baseline shows the current state of a configuration item, while a process performance baseline shows the current performance of a process.

 

CMMI Version 1.3 was released on November 1, 2010. All three constellations of CMMI (SVC, ACQ and DEV) were released in the same period, considering the similarities across them.

How good would it be if all three constellations were combined into a single model? It would definitely ease the process of CMMI compliance as well as appraisal.

Yes, that realization is not far ahead. The CMMI Institute has started to redefine and rewrite CMMI: ‘CMMI NextGen’ is the project currently underway at the institute. NextGen will combine ACQ, DEV, SVC, P-CMM and DMM into a single model. It won’t be a simple upgrade like v1.1 to v1.2, or v1.2 to v1.3; it is a “re-molding” of the entire model.

A set of working groups has been formed under this project to define the future of CMMI. They work on:

  • Architecture
  • Current needs
  • Implementation
  • Performance
  • Plain language
  • Simplify model and appraisal method
  • System
  • Trainings

Each working group consists of “authors” and “reviewers.” Read more on the working groups at http://partners.clearmodel.com/volunteer-cmmi-next-generation-working-group/. The CMMI Institute is collecting requirements and recommendations. Since the change is a major one, we will have to wait a couple of years to have the next generation in place.

The Next Generation Project Information Portal is now available on the Partner Resource Centre. All Partner BPOCs (Business Points of Contact) and Certified Individuals will have access to information about the project through this portal.

 

Causal Analysis and Resolution (CAR) helps to identify and rectify the root cause of a defect or problem. It can be applied to any process area. The analysis can be qualitative, limited to a fishbone diagram or so, or it can be more advanced, involving quantitative data. The steps below show how to quantify the performance of CAR activities.

  1. Define a measurable parameter based on the root cause of the identified issue.
  2. Determine a solution for fixing the root cause.
  3. Measure the performance of the parameter before and after implementing the change.
  4. Perform a hypothesis test, such as a two-sample t-test or a paired t-test, on the measured parameter (a minimal sketch follows this list).
  5. Compare the predicted performance and the actual performance of the data.
  6. If there is no statistically significant difference between the predicted and actual performance, it can be assumed that the changes are effective.
  7. If there is a statistically significant difference between the predicted and actual performance, it can be assumed that the changes are not effective and another CAR needs to be done.
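
A minimal sketch of step 4, assuming the same parameter is measured on matched items before and after the fix (so a paired t-test applies; independent groups would use the two-sample t-test shown earlier). Data is illustrative.

```python
from scipy.stats import ttest_rel

# Illustrative data: defects found per review, for the same teams
# before and after implementing the corrective action
before = [6, 8, 7, 9, 5, 7, 8]
after  = [4, 5, 5, 6, 4, 5, 6]

stat, p_value = ttest_rel(before, after)
print(f"t={stat:.3f}, p={p_value:.4f}")
if p_value < 0.05:
    print("The parameter's performance changed significantly after the fix")
```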

Traceability is the ability to trace the origin of an item; in simple terms, the way a policeman identifies a thief by tracing fingerprints. Traceability plays a key role in Requirements Management. It helps to conduct impact assessments when a change in requirements happens. Unique identifiers are given to each requirement. These requirements are elaborated from the user needs provided by the customer. Thus traceability starts with user needs and then extends to the subsequent work products: design, source code, and test cases.
