Coupa Success Portal

Program Design Considerations

Below are various program design considerations in Risk Assess.

Program design considerations

Program types

Risk Assess offers two different categories of programs:

  • Relationship-based > Associated with a single supplier through the relationship.  Available types include: Compliance programs and Performance programs. For more information see the Risk Assess online help topic, “How To Create a Relationship-based Program.”
  • Enterprise > Associated with multiple objects (supplier, supplier location, relationship, relationship location, engagement and engagement candidate supplier) across the Client enterprise.  Available types include: Compliance programs, Performance programs, Risk programs, Information Management programs (SIM) and Robotic (Robo) programs. For more information, see the Risk Assess online help topic, “How To Create an Enterprise Program.”

Under these categories, the application incorporates five different types of programs:

  • Risk Assessment
  • Compliance Attestation
  • Performance Appraisal
  • Information Management
  • Robotic (Robo)

While there are a number of similarities, each program type has some unique characteristics, as well as some usage conventions.

Risk assessments

Risk assessment programs are typically administered to evaluate different dimensions of risk associated with doing business with a supplier, either for the supplier in general, or in the context of doing business with the supplier under a particular relationship. These dimensions cross a broad spectrum that includes financial risk, business continuity risk, information privacy and security risk, viability, strategic importance to the organization, and many other categories. Since the determination of risk is evaluated on a continuum, risk programs generally return a numeric value as a result, although this value is often translated into discrete bands using a rating scale, such as “high-medium-low” or “red-yellow-green.” The individual items evaluated in a risk assessment are called “risk items.” Risk assessments can be designed to be evaluated by internal participants, or the supplier may play a role in the evaluation.

Compliance attestations

A compliance program is a set of requirements to which a supplier/relationship must conform, either voluntarily or by mandate as a condition of the relationship. These compliance requirements are used to determine a supplier’s conformance to any number of factors, from laws and regulations to policies and guidelines. Compliance may be sought for insurance certifications, evidence of employment eligibility, adherence to sound financial principles, or acceptance of procurement policies and codes of conduct. In general, compliance attestations are administered to the supplier, and the results either reviewed internally or automatically approved (in the case where the response is 100% compliant). Compliance programs typically return a binary result, with a final “score” of either “PASS” or “FAIL.”

Performance appraisals

A performance program is a set of KPIs or performance measures which, when completed, are used to evaluate the degree to which a supplier is delivering goods and/or services in accordance with an agreed upon statement of work (SOW), service level agreement (SLA), or service level expectations (SLE) within the context of a relationship. Performance scorecards can be used to measure the performance of a supplier against most any type of contract, from annual customer satisfaction surveys to formal quarterly business reviews for outsourced service providers. Performance appraisals often include multiple categories of performance, each with a variety of KPIs. Both categories and KPIs can be individually weighted, and any number of evaluators can participate in the assessment, with each evaluator’s score assigned a different weight. The final score is mathematically “rolled up” to arrive at an overall score. Performance appraisals generally result in a numeric expression from 0% to 100%, although this value can be translated into ranges as well. The performance appraisal can be used to conduct formal periodic business reviews with the supplier.

Since performance programs are often used to determine performance-based compensation, the program can be used to calculate an “at risk fee” based on individual organization scores, as well as the overall score.
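As a simplified illustration of this weighted roll-up (the category names, weights, and ratings below are hypothetical, and this is a sketch rather than the application's actual calculation):

```python
# Sketch of a two-level weighted roll-up: KPIs roll up into category
# scores, and weighted category scores roll up into an overall score.
# All names, weights, and ratings are hypothetical examples.

def weighted_average(items):
    """items: list of (score, weight) pairs; weights need not sum to 1."""
    total_weight = sum(w for _, w in items)
    return sum(s * w for s, w in items) / total_weight

# Each category holds weighted KPI ratings expressed as 0-100%.
categories = {
    "Quality":  {"weight": 0.6, "kpis": [(90, 0.5), (80, 0.5)]},
    "Delivery": {"weight": 0.4, "kpis": [(70, 1.0)]},
}

category_scores = {
    name: weighted_average(c["kpis"]) for name, c in categories.items()
}
overall = weighted_average(
    [(category_scores[name], c["weight"]) for name, c in categories.items()]
)
print(f"Overall score: {overall:.1f}%")  # 0.6 * 85 + 0.4 * 70 = 79.0%
```

Evaluator-level weighting follows the same pattern: each evaluator's overall score becomes another (score, weight) pair in one final weighted average.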

Information management programs

The information management module is designed to facilitate the automated collection and maintenance of supplier and relationship information. There are Information Management-specific programs that allow customers to pull in standard and UDFs from the supplier and relationship records to create a program. An example of an Information Management Program is one that presents a supplier with their current corporate address, phone number and name and address of the account executive. The program recipient can review, update and finally approve and submit their response, which will update the appropriate fields of the supplier record. Since the Information Management Program type is intended only to update data, there is no quantitative result to it; only a result of ‘Open’ or ‘Closed.’ Information Management Programs are often used externally to allow suppliers to maintain their own data and internally to validate data regarding relationships with business unit owners. Information Management programs can be automatically sent to the supplier on a regular basis to request the review and update of profile information. The supplier’s responses will update the supplier and relationship records accordingly.

For more information, see the Risk Assess online help topic, “How To Create an Information Management Program.”

Robotic (Robo) programs

Robo programs run in the background to fully automate the gathering of new/updated information. These programs do not get scored, approved or collaborated; they simply execute the workflow-enabled components contained within them. An example of a Robo Program would be one that employs data about a supplier to compute that supplier’s classification: If the supplier is a strategic supplier with spending > $6,000,000 and it’s been identified as a high-risk supplier, then a classification is assigned of “Level 1.” Robo programs can be used to create risk profiles. Assignment defaults can be configured to send notifications to supplier users. For more information, see the Risk Assess online help topic, “How To Create a Robo Program.”
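The classification example above can be sketched as follows; the field names and the fallback classification are hypothetical, while the spend threshold and "Level 1" label follow the example in the text:

```python
# Sketch of a Robo-style classification rule; it runs with no human input.
# Field names and the fallback value are hypothetical.

def classify(supplier):
    """Assign "Level 1" to strategic, high-risk suppliers with spend > $6M."""
    if (supplier["strategic"]
            and supplier["annual_spend"] > 6_000_000
            and supplier["high_risk"]):
        return "Level 1"
    return "Unclassified"  # hypothetical default for non-matching suppliers

supplier = {"strategic": True, "annual_spend": 7_500_000, "high_risk": True}
print(classify(supplier))  # prints "Level 1"
```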

A word about programs and evaluations/assessments

There is a clear distinction in Risk Assess between a program and an evaluation, assessment, or scorecard (the last three terms are used interchangeably and mean the same thing).

The program in the application is only the definition of all the program parameters, including workflow options, evaluation items such as risk items or KPIs, and weights and evaluators. Think of the program as a rubber stamp that can stamp out an evaluation or assessment.

An evaluation or assessment is a copy of a program that is launched for a particular object, whether that is a supplier, relationship, or location. An evaluation or assessment is the set of questions or evaluation items for a particular program, launched for a particular period, and pertaining to a particular supplier or relationship. For example, an evaluation may be the annual supplier risk assessment launched for a single supplier on January 1st of every year.

Once an evaluation is launched, the underlying program may be changed without affecting either evaluations that are in process, or completed evaluations (evaluation history). In this way, evaluations always reflect the state of the program at the time they were launched.

Program scope or component

In addition to the five basic types of programs, a program can be further designated as applying to one of two different “scopes”; that is, the context in which the program is deployed:

  • Relationship-based programs
  • Enterprise programs

The following graphic illustrates the different types of programs. It includes examples of programs that may appear in a particular category.

[Image: RA - Program types.png]

Collaboration Type

The Collaboration Type specifies the workflow options for the program. Selecting the appropriate collaboration type establishes the behavior of the evaluation routing, notification, and timing. There are four collaboration types from which to choose when creating a program:

  • Party 1 Only

    • Only the customer has access to complete and approve the evaluation on a supplier.

    • Default for Risk programs, and the workflow for all Robo programs.

    • Discussions can still take place between the customer and the supplier relationship manager.

  • Collaborative: Party 2 First

    • This is the most common workflow option. Both parties (Party 1 = Client, Party 2 = Supplier) have access to complete the evaluation. Party 2 must complete the evaluation and approval before the customer is able to start the evaluation/approval.

    • Default for Compliance and Performance programs.

  • Collaborative: Simultaneous

    • The supplier and customer have access to the evaluation at the same time, and their evaluations can be completed in parallel. This option is used when program managers want the internal team to complete an evaluation of the supplier without being influenced by the supplier's response, or to reduce the amount of time required for the workflow.

  • External Scoring Only

    • This is the opposite of Party 1 Only. This option is used primarily to collect data from a supplier. In the case of an information management program, the collected data automatically updates the supplier/relationship record once the program is completed.

    • Default for Information Management programs.

Evaluation Items

Evaluation items are the elements of every program for which the evaluator will provide a response. Although very similar in nature, different names are used to refer to evaluation items depending on the program type.

  • Performance - Key Performance Indicator (KPI)

  • Compliance - Requirement

  • Risk - Risk Item

  • Information Management and Robo - Data Item

Evaluation items share certain common characteristics:

  • Inclusion in a category of one or more related evaluation items (e.g., financial performance KPIs may be grouped under a category “Financial”).

    • A specialized category called “Critical Gate” changes the behavior of the evaluation: if any item in this category is rated below its minimum acceptable value, the entire evaluation fails or is rejected. Effectively, if a Critical Gate evaluation item fails, the whole evaluation fails.

  • Unique identification number – allows unambiguous reference and ordering of evaluation items.

  • Unique description – provides sufficient explanation to the evaluator to understand what data are being requested.

  • Frequency - specifies the periodicity of the evaluation (annual, semi-annual, quarterly, monthly, one-time).

  • Score type - specifies the scale on which the evaluation item will be rated (10 pt Scale, 5 pt Scale, Info Only, Letter (A-E), Numeric, Percent, R/Y/G, Yes/No).

  • Rating scale parameters (depend on the score type, and may allow specification of minimum, target, and maximum values associated with an evaluation item).

  • Weight - provides a means to emphasize one evaluation item in comparison to another.

Designing Evaluation Items

There are some key differences in evaluation items for different types of programs:

  • Compliance requirements are generally yes/no questions, and phrased such that a positive or “yes” response is a preferred status for that requirement.

    • “Has the supplier provided evidence of employment eligibility for all employees and sub-contractors who may work on-site in the delivery of services under this contract?”

  • Performance KPIs are typically expressed as a factor for which a numeric rating can be provided.

    • “Average on-time completion rate for all projects under management during current quarter.”

Regardless of these differences, there are some fundamental best-practice guidelines for all evaluation items that should be considered:

  • Evaluation items should be specific.

    • Good: “Percent of projects delivered within budgeted amount ± 5%”

    • Could be Better: “Level of financial performance”

  • Evaluation items should be measurable.

    • Good: “Number and cumulative duration of system outages within acceptable limits established by SLA”

    • Could be Better: “System performed adequately during the period”

  • Data should be available to the evaluator with which to answer the evaluation item.

  • The total number of evaluation items should be reasonable for the circumstances of the evaluation.

    • Two evaluation items for a ten-year IT outsourcing contract for a Fortune 100 company’s data centers worldwide are probably too few or too vague.

    • Three-hundred unique KPIs for a $50,000 contract to provide paper recycling services are probably too many or too detailed.

At-Risk Fees

At-risk fees allow the user to calculate performance-based compensation for a supplier or relationship based on the final score of an assessment from a performance program. At-risk fees are fundamentally different from penalties and bonuses. These fees affect compensation in the period being evaluated. Penalties and bonuses are often implemented as service credits for periods following the period being evaluated.

At-risk fees are only available for performance programs.

Two levels of at-risk fees can be defined:

  • At-risk fees that apply to the overall outcome of the entire assessment.

  • At-risk fees specific to the performance within a single organization unit.

At-risk fees are additive; i.e., total performance-based compensation is the sum of the global at-risk fee based on the overall score of the evaluation, plus individual organization unit at-risk fees based on the scores for each organization unit.

If the user must create a penalty/bonus scheme, they can achieve this through a combination of at-risk fee definitions and an administrative configuration called “At Risk Payment Range.” This configuration feature is normally used to define discrete compensation amounts for “bands” of performance.

For example: 

  • 0% to 50% performance = 0% of at-risk fee compensation

  • 51% to 80% performance = 50% of at-risk fee compensation

  • > 80% performance = 100% of at-risk fee compensation

To implement a penalty scheme, simply specify a negative percentage for that particular band of performance. The resulting at-risk fee will then be a negative amount, indicating a penalty. For example:

  • 0% to 50% performance = -100% of at-risk fee compensation

  • 51% to 80% performance = -50% of at-risk fee compensation

  • > 80% performance = 0% of at-risk fee compensation
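The banding examples above can be sketched as a lookup over (lower bound, upper bound, payout) tuples; the function and variable names are hypothetical, and the band boundaries follow the integer percentages in the examples:

```python
# Sketch of "At Risk Payment Range" banding. A score falls into a band,
# which maps to a percentage of the at-risk fee; negative percentages
# implement a penalty scheme. Band boundaries follow the text's examples.

def at_risk_payout(score_pct, bands):
    """bands: list of (low, high, payout_pct) tuples with inclusive bounds."""
    for low, high, payout in bands:
        if low <= score_pct <= high:
            return payout
    raise ValueError("score outside all defined bands")

bonus_bands   = [(0, 50, 0), (51, 80, 50), (81, 100, 100)]
penalty_bands = [(0, 50, -100), (51, 80, -50), (81, 100, 0)]

print(at_risk_payout(75, bonus_bands))    # 50  -> earns 50% of the fee
print(at_risk_payout(75, penalty_bands))  # -50 -> a 50% penalty
```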

 

KPI weights / requirements measurements

Performance Programs

Risk Assess allows the program architect to specify weights that will be applied to individual evaluation items as part of the calculation of performance. For example, if there are two KPIs, but one is far more important than the other, the architect can assign a weight of 75% to that KPI and 25% to the other. When calculating the final score, the application multiplies each rating by its weight to arrive at the score.

Compliance Programs

Compliance programs are by design a “pass-fail” calculation. Therefore, each compliance requirement must achieve a minimum score designated on this page, or the entire evaluation receives a score of “fail.”
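That pass/fail behavior can be sketched as follows (the requirement names and minimum scores are hypothetical):

```python
# Sketch of compliance scoring: every requirement must meet its minimum
# score, or the entire evaluation fails. Names and minimums are hypothetical.

def compliance_result(responses, minimums):
    passed = all(responses[req] >= min_score
                 for req, min_score in minimums.items())
    return "PASS" if passed else "FAIL"

minimums = {"insurance_cert": 1, "code_of_conduct": 1}
print(compliance_result({"insurance_cert": 1, "code_of_conduct": 1}, minimums))  # PASS
print(compliance_result({"insurance_cert": 1, "code_of_conduct": 0}, minimums))  # FAIL
```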

Program Template

Once a program has been created and activated, the customer can make that program into a template that can be reused later to create other programs for different suppliers and/or relationships, or as a starting point for a similar program. When the user views a program, they will see a “Make Program A Template” link; clicking this link adds the program as a template to the user’s template library.

Conditionality

Conditionality provides the ability to make questions appear conditionally based on the answer to a specific question. Conditionality is available for all program types, and its logic must be defined based on yes/no questions.

Unchecking the Show Conditions Panel in Step 1 of the Program Wizard does not disable the defined conditions. In order to disable the conditions, they must be deleted from Step 3 in the Program Wizard.

Conditional questions that contribute to the overall score of the program must be defined so that the “passing” response is “Yes.”
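The yes/no conditionality described above can be sketched as follows (the question IDs and texts are hypothetical):

```python
# Sketch of conditional questions: a follow-up question appears only when
# its trigger question is answered "Yes". Questions are hypothetical.

questions = [
    {"id": "Q1", "text": "Do you subcontract any services?"},
    {"id": "Q2", "text": "List your subcontractors.",
     "show_if": ("Q1", "Yes")},  # shown only when Q1 is answered "Yes"
]

def visible_questions(answers):
    shown = []
    for q in questions:
        condition = q.get("show_if")
        if condition is None or answers.get(condition[0]) == condition[1]:
            shown.append(q["id"])
    return shown

print(visible_questions({"Q1": "No"}))   # ['Q1']
print(visible_questions({"Q1": "Yes"}))  # ['Q1', 'Q2']
```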

Program Configuration Options

Following are the Program Configuration options available in the application:

  • Enable Launch Request checkbox - makes completed active enterprise programs available to the Supplier Program Launch widget and program workbench enrollment launch; if indicated, then the program is available to the Supplier Program Launch widget on the Dashboard.

  • Hide Calculated Line Values checkbox - controls the display of the min, target, max, and rating in the evaluation; if indicated, then the score values are hidden from the evaluator.

  • Information Profile Program checkbox - indicates whether or not the results of this program will be used to populate an information profile section associated with the enrolled object; if indicated, then the program results will populate the information profile. 

  • Auto Approve Program - indicates whether the compliance program will be automatically approved on a Pass result (i.e., the approval step is skipped during the evaluation workflow) or will require review and approval by the designated approver. Available for External Compliance programs only; defaults to “Yes.” Select “Yes” if the program should not require the approval workflow when the program result is a Pass.

  • Show Conditions Panel checkbox - allows the program architect to define criteria to control when certain questions will be asked; if indicated, then the conditions panel is displayed in Step 3 of the Program Wizard.

  • Hide Min/Max/Target Columns checkbox - controls the display of the min, target, max, and rating column labels in the evaluation; if indicated, then the column labels are hidden from the evaluator.

  • Allow Import/Export? checkbox - enables import/export of the evaluation; if indicated, then the evaluations associated with this program can be exported to a spreadsheet, completed in the spreadsheet and the responses can then be imported back into the application and the evaluation submitted for approval.

  • Enable Evaluation Summary? - allows the program architect to define a summary view of the evaluation in HTML or PDF based on the XSLT that the customer has created and uploaded into their instance. Even with this option enabled, if the associated XSLT file has not been established for Evaluation Result Summary, the “Evaluation Summary” link will not be displayed on the top left of the evaluation. Evaluation Result Summary configuration files must be uploaded using the Administrative feature for Configuration File Management.

  • Attempt to Run Robo – allows an enterprise program to run in robo-mode. If selected, then the system tries to complete without human intervention when the program is launched.

Scores must be available (from UDFs or information profile fields) to complete any calculations that are defined within the program in order for the program to complete successfully.

  • Cascade Program Security to Enrollments - allows the program architect to grant view rights to the program enrollment and associated program history to any user who has permission to view the program.

  • Cascade Final Rating Range to Org and Category – allows the program architect to configure the program to replace the organization and category/section score presented to the approver with a rating that is translated using a final rating range.

When this feature is enabled, overrides can be performed only at the KPI level.

Program Collaboration Configuration Options

Collaboration options include the following (all day-count options are expressed in calendar days and do not take holidays or weekends into account):

  • # of Days For Supplier Scoring - number of elapsed (calendar) days the system allows for suppliers to complete their evaluations, once the program scoring period begins (defaults to 10).

  • # of Days For PM Approval - number of (calendar) days the program manager or the relationship manager has to complete their review/approval of the assessment (defaults to 0).

  • # of Days to Collaborate - number of (calendar) days the top client and supplier managers have to review/approve and negotiate the final assessment results (defaults to 10).

  • Launch Evals When Supplier Late - indicates whether or not the evaluation should be launched to the customer evaluator on time, even if the supplier evaluator has not completed their assessment (defaults to checked - when the evaluation is launched early, the client receives the assessment without the supplier's scores).

  • Require All Org Mgrs To Approve checkbox - controls whether the relationship manager can approve the assessment before all of the organization managers have submitted their approvals; if unchecked, then the relationship manager can approve the assessment as received to date when one or more org manager approvals are late.

  • Prevent In Line Discussions checkbox - controls whether a scorecard-specific forum can be created as part of the evaluation and/or review of the assessment; if indicated, then the forums cannot be created.

  • Enable Approval Delegation checkbox - controls whether the administrator or relationship manager can delegate approval of an assessment either before or after the assessment is launched (defaults to checked; if indicated, then the approval can be delegated).

  • Evaluation Delegation Level - controls whether the administrator or relationship manager can delegate scoring of an evaluation either before or after the evaluation is launched, and allows the program architect to configure the program to disable delegation for the program or to allow delegation of the entire evaluation or at the Line/Section level.

  • Late Day Notification Frequency - defines how often a reminder is sent that an evaluation/approval is delinquent (reminders after the due date); Options: “None,” “Every Day,” “Every Other Day,” “Every Third Day,” “Each Week,” defaults to "Every Other Day."

  • Reminder Days Prior (required) – allows entry of the number of days the reminders should be sent before an evaluation/approval start date; this setting is required when Reminder Frequency is not equal to None (defaults to 1).

  • Reminder Frequency - defines how often a reminder is sent that an evaluation/approval is due. Reminder Frequency is computed by dividing the number of days for the evaluator to complete the evaluation by the number of reminders configured; for example: If ‘Reminder Days Prior’ is set to 1, then the evaluator receives a single reminder on the day before the evaluation due date. If Reminder Days Prior is set to 2, then the evaluator receives 2 reminders: the first halfway through the time to complete, and the second on the day before the evaluation due date. Options for selection include: “None,” “Every Day,” “Every Other Day,” “Every Third Day,” “Each Week.”

  • Escalate When Late? - controls whether to escalate the approval to the designated approver on a program. This setting will also delegate the survey to the designated approver, if indicated.

  • Run Inactive Objects? checkbox - controls whether the program can be run on enrolled objects with status = inactive (defaults to checked; if indicated, then programs will be run on inactive objects. For example, when indicated for a relationship that has expired, the workflow will disregard the relationship expiry date).

  • Reject Internal checkbox - enables the internal approver to reject the evaluation back to the internal evaluator for clarification or additional information; if indicated, then the internal approver has the capability to reject the evaluation back to the internal and/or external evaluator.

  • # of Days For Client Scoring - number of elapsed (calendar) days the system allows for the customer to complete their evaluations, once the program scoring period begins (defaults to 10); not required for External Scoring Only programs.

  • # of Days For OM Approval - number of (calendar) days the organization manager has to complete their review/approval of the assessment (defaults to 5).

  • Re-calculate Due Dates checkbox - controls whether to recalculate the due dates in the workflow based on the actual completion date of previous steps in the workflow (defaults to checked); when this option is not enabled, due dates are calculated statically at the time of launch.

  • Auto Fill Internal Cards checkbox - controls pre-population of the customer assessment with supplier answers. When answers are prepopulated, the customer evaluator can overwrite the rating if they want to change it; if indicated, then the customer assessment is auto-populated with the supplier's ratings.

  • Skip Collaboration Step checkbox - controls whether customer and supplier managers are required to conduct a final review and approval; if indicated, then the collaboration step is skipped.

  • Prevent In Line Action Plans checkbox - controls whether the approver can create action plans as part of their review; if indicated, then approvers cannot create action plans during their review.

  • Show Supplier Scores checkbox - controls whether the customer evaluator sees the scores assigned by the supplier evaluator(s); if indicated, then the customer sees the supplier's scores.

  • # of Late Notifications - defines how many reminders will be sent when an evaluation/approval is delinquent; required when Late Notification Frequency is not equal to "None" (defaults to 3).

  • Reminder Days Prior - defines how many reminders will be sent for an evaluation/approval that is approaching its start date; required when Reminder Frequency is not equal to "None" (defaults to 1).

  • Days After Due To Escalate - defines the number of days past the due date after which the approval will be escalated; required when Escalate When Late? is indicated (defaults to 5).

  • Approval Method - indicates the type of approval required. Options: “Normal” (Organization Manager and Relationship Manager Approval), “No External Approval,” “No Internal Approval,” “No Approval” (defaults to "Normal").

  • Enable Copy Previous Item Results checkbox - indicates whether the values entered for the KPIs from a previously completed evaluation will be presented to the evaluator when they receive the evaluation for a subsequent period; if indicated, then previous answers are carried forward into subsequent evaluations.

  • Reject External checkbox - enables the external approver to reject the evaluation back to the external evaluator for clarification or additional information; if indicated, the external approver has the capability to reject the evaluation back to the external evaluator.

  • Comment Visibility - enables users to prevent program recipients from including attachments when none are required.

  • Approval Display Mode - enables users to control the display format of the approval form (defaults to “Expanded”). “Collapsed” mode - scoring information and override controls are hidden from view; “Expanded” mode - all sections are displayed with their content visible.
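The Reminder Frequency examples in the list above can be read as the following schedule: the last reminder lands the day before the due date, and any earlier reminders are spaced evenly through the scoring window. This is one interpretation of those examples, not the application's documented algorithm:

```python
# Sketch of reminder scheduling: returns day offsets (from the start of
# the scoring window) on which reminders are sent. With a 10-day window,
# 1 reminder -> day 9 (the day before the due date); 2 reminders -> days
# 5 and 9, matching the examples in the text.

def reminder_schedule(days_to_complete, num_reminders):
    if num_reminders <= 0:
        return []
    days = [days_to_complete * k // num_reminders
            for k in range(1, num_reminders)]
    days.append(days_to_complete - 1)  # final reminder the day before due
    return days

print(reminder_schedule(10, 1))  # [9]
print(reminder_schedule(10, 2))  # [5, 9]
```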

Prerequisites and other helpful hints

Following is additional information pertaining to requirements and other auxiliary details for programs:

  • All required fields must be completed in order to save or activate a program.

  • The “Use Corp Calendar” checkbox option on the specific Program details page allows the user to set the program calendar to something other than the standard calendar. The corporate calendar must first be defined on the Administration tab, in the section of the screen called “Company Information Management.” The corporate calendar feature allows the user to define a calendar that is consistent with their company’s fiscal calendar.

  • A supplier user can add the supplier self-launch programs widget to their dashboard. If the customer has granted their suppliers the ability to launch programs to themselves, then the supplier will see those programs in the widget, and can launch them at will. Customers grant access in Step 1 of the Program Wizard by selecting the “Enable Launch Request” checkbox option. Access is granted on a program-by-program basis. Any program that has this option selected will be visible in the supplier widget.

  • Program workflow respects the expiry date unless the “Run Inactive Objects” option is checked on page 1 of the program wizard under Performance Type options.

 

 
