<span class="text_page_counter">Trang 1</span><div class="page_container" data-page="1">
24 MAY 2018

Special Management

AIR FORCE MILITARY RISK ASSESSMENT FRAMEWORK

COMPLIANCE WITH THIS PUBLICATION IS MANDATORY

ACCESSIBILITY: Publications and forms are available on the e-Publishing website at www.e-Publishing.af.mil for downloading or ordering.

RELEASABILITY: There are no releasability restrictions on this publication.

(Colonel Jonathan T. Hamill)  Pages: 22
This publication implements Air Force Policy Directive (AFPD) 90-16, Studies, Analyses and Assessments. It provides guidance and procedures for using the Air Force Risk Assessment Framework (RAF) in developing, conducting, and presenting assessments of risk to future planned Air Force (AF) operations, requirements, and objectives. This type of military risk does not overlap risk types in other disciplines, such as Safety or Acquisition, nor does it change assessment methods and/or processes currently in use, such as Air Force Critical Asset Risk Management (CARM). It applies to individuals at all levels who plan and assess future contingency AF operations, including the Air Reserve Component (ARC), composed of Air Force Reserve Command (AFRC) and Air National Guard (ANG), except where noted otherwise. This publication may be supplemented at any level, but all supplements should be routed to the Office of Primary Responsibility (OPR) listed above for coordination prior to certification and approval. Refer recommended changes and questions about this publication to the OPR listed above using the AF Form 847, Recommendation for Change of Publication; route AF Forms 847 from the field through the appropriate chain of command. The authorities to waive wing/unit level requirements in this publication are identified with a Tier ("T-0, T-1, T-2, T-3") number following the compliance statement. See Air Force Instruction (AFI) 33-360, Publications and Forms Management, Table 1.1, for a description of the authorities associated with the Tier numbers. Submit requests for waivers through the chain of command to the appropriate Tier waiver approval authority, or alternately, to the Publication OPR for non-tiered compliance items. Ensure that all records created as a result of processes prescribed in this publication are maintained in accordance with (IAW) Air Force Manual (AFMAN) 33-363, Management of Records, and disposed of IAW the Air Force Records Disposition Schedule (RDS) in the Air Force Records Information Management System (AFRIMS). The use of the name or mark of any specific manufacturer, commercial product, commodity, or service in this publication does not imply endorsement by the Air Force.
Chapter 1— GUIDANCE ON IMPLEMENTING THE AIR FORCE RISK ASSESSMENT FRAMEWORK FOR STRATEGIC PLANNING
1.1. Overview.
1.2. Guidance.
Figure 1.1. Military Risk Matrix (CJCSM 3105.01 Joint Risk Analysis, 14 Oct 2016).
Figure 1.2. Linking the AF Criteria to the type of risk.
Figure 1.3. Example Risk to Mission Tree.
Figure 1.4. Example Risk to Force Tree.

Chapter 2— COMMON FORMAT RISK STATEMENT (CFRS)
2.1. The Common Format Risk Statement (Figure 2.1) will ...
Figure 2.1. Common Format Risk Statement.

Chapter 3— METRICS
3.1. Listed below are the CJCS-based definitions of the risk categories used in this AFMAN when assessing risk: ...
3.2. Defining risk metrics.
3.3. Quantitative metrics.
Figure 3.1. Definitions for Metric Level Thresholds.
Figure 3.2. Calculating Thresholds for a Quantitative Metric.
3.4. Qualitative Metrics.

Chapter 4— MILITARY RISK CONSTRUCT
4.1. Service components will ...
Figure 4.1. Military Risk Structure.
4.2. Risk to Mission.
4.3. Risk to Force.
4.4. These sections have provided a general overview of ...

Attachment 1— GLOSSARY OF REFERENCES AND SUPPORTING INFORMATION
</div><span class="text_page_counter">Trang 4</span><div class="page_container" data-page="4"><b>Chapter 1 </b>
<b>GUIDANCE ON IMPLEMENTING THE AIR FORCE RISK ASSESSMENT FRAMEWORK FOR STRATEGIC PLANNING </b>
1.1. Overview. This guidance is divided into four sections. Section One provides a general overview of assumptions and detailed guidance for the Air Force Military Risk Assessment Framework (RAF). Section Two defines the Common Format Risk Statement (CFRS). Section Three defines risk metrics and assessment. Section Four defines the Military Risk Construct and the relationship between components of Risk to Mission and Risk to Force.
1.1.1. Background. The risk assessment process and assumptions are described below:
1.1.1.1. Senior leaders need a consistent and standardized approach for assessing, displaying, and discussing risk in the context of future Air Force Strategic Planning across the entire Air Force Enterprise.
1.1.1.1.1. Senior leaders should establish or approve a common set of assumptions that will guide development of the assessment.
1.1.2. Senior leaders are best equipped for decision-making when presented with risk assessments that employ a common set of terms and methods by which to assess risk.
1.1.3. The word “risk” should always be used with an adjective qualifier as to the type of risk (e.g., risk to mission, risk to force, or Military Risk, which encompasses both).
1.1.4. Additionally, any numerical answer from a turn-the-crank process by itself does not constitute the final assessment, but may be an important input into the final assessment.
1.2. Guidance. This section provides detailed guidance for those who may be unfamiliar with this type of structure. The method requires assessors to provide senior leaders with a strategy-to-task understanding of how their capabilities and activities link to strategic planning objectives. Assessors determine the level of detail required to tell their risk story and should balance that with the difficulty of conducting the assessment. In general, the RAF method requires an activity to be assessed in terms of the vital Resources, Schedules, and Performance metrics (or indicators). In the case of this AFMAN, vital is defined as only those critical and most important activities required to achieve the strategic objectives, not everything required for all mission sets. The RAF is linked to the Chairman of the Joint Chiefs of Staff (CJCS) Risk Matrix (Figure 1.1) as prescribed by CJCS Manual (CJCSM) 3105.01, Joint Risk Analysis.
</div><span class="text_page_counter">Trang 5</span><div class="page_container" data-page="5"><b>Figure 1.1. Military Risk Matrix (CJCSM 3105.01 Joint Risk Analysis, 14 Oct 2016). </b>
1.2.1. The RAF is not intended to change assessment methods and/or processes currently in use. Rather, it provides a structured way to translate results from different assessment methods, which could include Safety and Acquisition risks among others, into a common RAF format, so that senior leaders are presented with consistent and comparable information. Although this manual addresses military risk to strategic planning (Combatant Commander's [CCDR] strategic scenarios, objectives, and outcomes), the RAF stands alone and may be used for assessing risk to any strategy-to-task future planning requirement.
1.2.2. Professional military judgment remains an important part of the assessment process. The RAF provides parameters for using professional military judgment in a way that maximizes consistency throughout the assessment process. The RAF also requires that professional military judgment be documented and substantiated for traceability, making future assessments of the same activity consistent over time.
1.2.3. Measures selected or developed should attempt to draw upon widely recognized data sources (authoritative data sources, when possible). Those measures should be used to identify and define metrics (i.e., indicators), including defensible success/failure points (i.e., criteria), which are boundaries for the risk assessment. Risk categories exist only between defined success/failure points.
</div><span class="text_page_counter">Trang 6</span><div class="page_container" data-page="6">1.2.4. Within each Military Risk type, the RAF strategy to task methodology helps categorize risk drivers and guide the selection of risk metrics (see <b>Figure1.2</b> and Section 4).
<b>Figure1.2</b> indicates that for each type of Military Risk strategic objectives are determined by senior leadership, from which activities to accomplish the objectives can be determined. Next, metrics are needed to measure achievement of those activities and objectives, which are used to determine the overall risk assessment for each specific Military Risk being assessed. Military Risk assessments will be organized using a tree structure (see <b>Figures1.3</b>
and <b>1.4</b>) to provide insight into how military capability gaps drive risk into the overall strategy (T-3). For each type of Military Risk, the base of the tree represents the functional aggregation of objectives for any risk type. The examples in <b>Figures1.3</b> and <b>1.4</b> are a disaggregation of a Service Core Function (SCF). The SCF is the base of the tree that then branches into its core capabilities and objectives to achieve future scenarios and its activities and metrics needed to achieve the desired objectives. Other core capability branches of the tree could connect to other core capability trees that are vital to the accomplishment of future scenario planning, thus building and interdependent network. Assessors should strive to keep the tree as sparse as possible while still delivering a thorough understanding of the vital strategic-level drivers needed to assess risk for future plans or scenarios. Each branch of the tree will end with at least one predictor metric to enable assessment for that branch (T-3).
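A minimal data-structure sketch of such a tree follows; the node names are hypothetical, and the structure simply mirrors the SCF-to-predictor-metric disaggregation pattern of Figures 1.3 and 1.4.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RiskNode:
    """One node in a Military Risk tree: an SCF, core capability,
    objective, activity, or leaf-level predictor metric."""
    name: str
    level: str                 # "SCF", "capability", "objective", "activity", or "metric"
    risk: Optional[str] = None  # low, moderate, significant, or high
    children: list["RiskNode"] = field(default_factory=list)

    def add(self, child: "RiskNode") -> "RiskNode":
        """Attach a subordinate node and return it for chaining."""
        self.children.append(child)
        return child

# Build a sparse, hypothetical tree: SCF -> capability -> objective -> activity -> metric.
scf = RiskNode("Example Service Core Function", "SCF")
cap = scf.add(RiskNode("Core Capability A", "capability"))
obj = cap.add(RiskNode("Objective A.1", "objective"))
act = obj.add(RiskNode("Activity A.1.1", "activity"))
act.add(RiskNode("Predictor Metric A.1.1.a", "metric", risk="moderate"))
```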
Figure 1.2. Linking the AF Criteria to the type of risk.

Figure 1.3. Example Risk to Mission Tree.

Figure 1.4. Example Risk to Force Tree.
</div><span class="text_page_counter">Trang 8</span><div class="page_container" data-page="8"><b>Chapter 2 </b>
<b>COMMON FORMAT RISK STATEMENT (CFRS) </b>
2.1. The Common Format Risk Statement (Figure 2.1) will be used by all Air Force organizations when discussing and/or presenting results of "military risk assessments" defined in this AFMAN to Air Force leaders (T-1). This statement establishes a standardized method of reporting a risk assessment specific to an activity. The items within parentheses in Figure 2.1 are all required elements of a complete assessment presented in the proper context, to include the mitigation, though they need not appear in that exact sequence.
Figure 2.1. Common Format Risk Statement.

For (Activity), (Organization) on (Date) assesses (Type of Risk) with (Analytic Rigor) for (Scenario) assuming (Condition), in (Timeframe), with (Force Structure), and (Mitigation) is (Assessment).

[In the figure, these elements are grouped under three headings: Activity, Assessment Context, and Assessment Setting.]
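As a sketch only, the CFRS could be assembled programmatically so that any missing required element is caught before the statement is briefed; every element value below is hypothetical.

```python
# Fill the Figure 2.1 template; str.format raises KeyError if any
# required element is missing. All values below are hypothetical.

CFRS_TEMPLATE = (
    "For {activity}, {organization} on {date} assesses {type_of_risk} "
    "with {analytic_rigor} for {scenario} assuming {condition}, "
    "in {timeframe}, with {force_structure}, and {mitigation} is {assessment}."
)

def build_cfrs(**elements: str) -> str:
    """Return a completed Common Format Risk Statement."""
    return CFRS_TEMPLATE.format(**elements)

print(build_cfrs(
    activity="Activity X",
    organization="Organization Y",
    date="24 May 2018",
    type_of_risk="Risk to Mission",
    analytic_rigor="Level 2 analytic rigor",
    scenario="Scenario Z",
    condition="degraded communications",
    timeframe="FY25",
    force_structure="the programmed force",
    mitigation="identified DOTMLPF-P mitigation",
    assessment="moderate",
))
```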
2.2. Activity or Objective. This part of the risk statement identifies the activity or objective that will be assessed and reported to senior decision makers.
2.3. Organization and Date. This part of the risk statement identifies the organization or person accomplishing the particular risk assessment and the date of that assessment.
2.4. Type of Risk. In general, users of a given framework will pre-define the types of risk to be assessed to describe the areas of responsibility at risk (T-3). See Section 4 for a more detailed description of the types of military risk to be assessed by Air Force analysts.
</div><span class="text_page_counter">Trang 9</span><div class="page_container" data-page="9"><b>2.4.1. Risk Category. The analyst’s assessment of risk provides an estimate of the </b>
likelihood that the strategy or plan to be applied within a context (timeframe, scenario, and force structure) will be executed with an unacceptable outcome; however, the risk assessment is conditional upon context likelihood. Risk Categories are defined in Section 3 – Metrics.
2.5. Level of Analytic Rigor. A qualitative indicator of the level of RAF implementation shows the assessor's confidence in the analytics that support the assessment results. Analytic rigor gives leadership a quick understanding of the assessment's confidence level as described by its defensibility, measurability, repeatability, and traceability. Additional attributes such as linkability, implementability, scalability, and military judgment add confidence in the risk assessment. Assessors will use the criteria presented below to provide a self-scored understanding of the process used to develop each risk assessment presented to leadership (T-1). Analysts will determine the rigor level at each transition in the strategy-to-task RAF and tree structures, i.e., at the metric level (most accurate), activity level, objective level, and capability or function level (most aggregated). Generally, each aggregated level decreases the overall rigor and confidence (T-1).
2.5.1. Level 1. The assessment incorporates military judgment through mechanisms to apply senior military leader judgment and/or subjective input based on recognized experience or expertise. Implementable through some documentation, structure, or metrics – limited defensibility. At a minimum, the assessor reports risk in context using the elements of the common format risk statement with a four-color scale.
2.5.2. Level 2. At this level assessors have improved the structure and defensibility through the use of an objective-activity-metric tree structure, development of metrics, and success/fail criteria. Metrics are in the maturing stage, but portions may depend solely on subject matter expert (SME) judgment (i.e., traceability and measurability are improved, but defensibility is based on the logical structure used and can still rely on SME judgments).
2.5.3. Level 3. Analytic processes replace subjectivity with data and objective analytic techniques. The assessment is fully defensible and traceable through measurable assessment criteria (either qualitative or quantitative) that are concise, mutually exclusive, and collectively exhaustive. Discussion will focus on the contributions of the analytics to determining metrics and success and failure points, rather than on who made the determining assessment. The assessment has a fully developed structure with traceable links between vital objectives, activities, and metrics. It is fully documented with an analytic process for determining success/failure end points (five components of rigor exist: defensible, measurable, repeatable, traceable, and documented) (T-3).
2.5.4. Level 4. The assessment has the linkable mechanisms available to allow it to incorporate results from subordinate-level assessments or to integrate into higher-level assessments (scalable to make multiple level assessments within and/or outside AF organizations). The assessment provides information about the mitigating assumptions that have been included in the assessment and how changes to those assumptions impact the assessed risk level (all five components of rigor exist, plus mitigation and integration – leading to enterprise-level assessments).
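A simplified self-scoring sketch follows. It compresses the four levels into three cumulative yes/no attributes, which is an assumption made for illustration; the actual level determination rests on the full criteria in paragraphs 2.5.1 through 2.5.4.

```python
def score_rigor(has_tree_structure: bool, data_driven: bool, linkable: bool) -> int:
    """Self-score analytic rigor, assuming each level presumes those below it:
    Level 2 adds the objective-activity-metric tree, Level 3 adds data-driven
    analytics, and Level 4 adds linkable/scalable mechanisms."""
    if has_tree_structure and data_driven and linkable:
        return 4
    if has_tree_structure and data_driven:
        return 3
    if has_tree_structure:
        return 2
    return 1  # military judgment with limited documentation

print(score_rigor(has_tree_structure=True, data_driven=True, linkable=False))  # -> 3
```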
</div><span class="text_page_counter">Trang 10</span><div class="page_container" data-page="10"><b>2.6. Scenario. This element provides the senior leader with an understanding of the </b>
hypothetical setting used by the analyst to make an assessment. The scenario provides assumptions regarding elements (e.g., opposing forces, environment) that impede or improve the execution of a plan. It includes amplifying information such as Operation Plans (OPLANs) for considering near-year assessments or Defense Planning Scenarios (DPS) for future-year assessments. The senior leader should set or approve the scenarios to be used for an assessment.
2.7. Conditions. Any special conditions that may have a bearing on the risk assessment.
2.8. Timeframe(s). The guidance for each given assessment should provide the timeframe(s) of that assessment, since these will drive both friendly and hostile force assumptions. The senior leader should set or approve the timeframe(s) to be used for an assessment.
2.9. Force Structure. This element provides the force structure assumption behind an assessment. Examples include programmed force or programmed force extended. The senior leader should set or approve the force structure assumptions to be used for an assessment.
2.10. Mitigation/Assumptions. Assessors must consider options outside the standard planning context for the scenario being assessed if they identify a level of risk greater than low risk (T-2). This element of the framework requires going beyond listing the underlying assumptions and additional constraints that bound the initial assessment. Assessors must explore mitigation options across the Doctrine, Organization, Training, Materiel, Leadership and Education, Personnel, Facilities, and Policy (DOTMLPF-P) spectrum (T-2). Once a non-standard DOTMLPF-P option is identified that reduces the level of risk for the scenario, the assessors should use the mitigated level of risk as their assessed level in the formal risk statement.
2.11. Assessment. This element sets forth the risk level assessed with available mitigation measures in place. The defined risk categories are low, moderate, significant, and high. The risk region is bounded from below and above by success and failure end points. Analysts will assess each node (objective, activity, and metric levels) in the tree structure at one of the four defined risk categories (T-1).
2.11.1. The use of professional military judgment is most effective when combining risks. Assessors encounter the challenge of combining risk in two different situations. One instance is when various assessments of the same activity exist. For example, an acquisition program may have risk assessments for performance, cost, and schedule that need to be combined into an overall program risk assessment. The other situation is when the assessor represents or evaluates an aggregate node in the RAF tree structure that combines multiple activities that have individual risk assessments.
2.11.1.1. When combining multiple assessments of a single activity, the assessor should consider the two related aspects of combined consequences and positive correlation. First, the assessor asks whether the combined consequences for an activity are sufficient to justify a risk assessment worse than any of the individual estimates. For example, in combined consequence, he or she may rate the cost and schedule for an activity as low risk when the expected outcome of each is 15 percent more than the plan; however, the assessor may consider the combination of both risk factors to result in a significantly higher likelihood of failure in the activity. In this example, if the assessor values a schedule slip of 15 percent at more than 5 percent of the activity cost, the total consequence value of both the cost and schedule slip would be 20 percent and thus would constitute moderate risk in our assessment framework. This is an example of SME adjudication when aggregating risk assessments. Another way would be to use a mathematical approach such as a weighted average, which again can be adjudicated by professional military judgment. The second case, positive correlation, arises because different risks for the same activity are very likely positively correlated; when one thing goes wrong, other things are more likely to go wrong. If an activity is behind schedule, its costs are very likely also over budget. With positive correlation, assessing the risk as the worst of the activity's aspect risks may actually be optimistic. For example, an acquisition system encountering significant performance risk is very likely to also experience increased cost and schedule risk, as more resources and time are dedicated to improving performance. A program with only 80 percent performance risk should be rated as less risky than a program with 80 percent risk in each of cost, schedule, and performance. The risk assessment should account for combined consequences, and the associated probability needs to be inclusive.
2.11.1.2. When an assessor combines multiple activities with individual risk ratings into an aggregate node (e.g., an Objective, Activity, or Metric), assessment of the risk for that node will be based on best military judgment for rolling up those contributing risk factors. One option is to use an SME-informed weighted approach that emphasizes professional military judgment to account for sub-node contributions to a node's risk rating. Another method is to rate the node the same as the highest (worst) risk assessed for the supporting metrics and contributing sub-nodes; justification for use of this second option should be well documented for each instance in which it is used. A third option is to use a mathematical approach for aggregating risk. Professional military judgment will dictate which roll-up approach is best for a given node, and inappropriate use of the "worst case" approach should be avoided.
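For illustration, the sketch below contrasts the worst-case roll-up with a simple weighted roll-up; the weights and the rounding rule stand in for the professional military judgment the paragraph requires and are assumptions, not prescribed methods.

```python
RISK_ORDER = ["low", "moderate", "significant", "high"]

def worst_case(child_risks: list[str]) -> str:
    """Rate the node the same as its worst subordinate rating."""
    return max(child_risks, key=RISK_ORDER.index)

def weighted_rollup(child_risks: list[str], weights: list[float]) -> str:
    """Weighted average of category indices, rounded to the nearest category.
    The weights would come from SME judgment of sub-node contributions."""
    score = sum(RISK_ORDER.index(r) * w for r, w in zip(child_risks, weights))
    return RISK_ORDER[round(score / sum(weights))]

children = ["low", "moderate", "high"]
print(worst_case(children))                        # -> high
print(weighted_rollup(children, [0.5, 0.3, 0.2]))  # -> moderate
```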
2.11.2. In order to assess a node on the risk tree that has no subordinate nodes, the analyst should develop metrics for the critical factors related to that node's contribution to scenario success. These critical factors would ideally be measurable, or at least expressible, such that risk thresholds could be developed. Any risk thresholds of metrics identifying a node's success and failure should be accepted by the requestor and assessor(s) in order to maintain consistency and understanding. The lowest boundary of risk for a metric (i.e., the Objective value) should always be set at a level that assures the supported node is successful with regard to that metric; in other words, no additional improvement in that metric will appreciably increase the supported node's chance of success. At the other extreme, the highest boundary of risk for a metric is always at a level that assures failure of the supported activity as a result of the critical factor associated with that metric; in other words, no additional degradation in that metric will appreciably worsen the supported node's chance of failure. Between these end points, boundaries mark risk assessment transitions from low to moderate to significant to high that act as flags for a deteriorating situation, where senior leader attention or action may be required. These in turn inform a justifiable maximum acceptable risk tolerance (i.e., the Threshold value) for a given metric. Consistent application of these levels will aid decision-makers in establishing and identifying trends. Senior decision-makers can quickly understand or challenge any risk assessment by asking about failure and success thresholds.
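The banded threshold logic described here can be sketched as a simple lookup; the metric, end points, and transition break points below are hypothetical and would in practice be accepted by the requestor and assessor(s) as described above.

```python
def metric_risk(value: float, success: float, failure: float,
                breakpoints: tuple[float, float, float]) -> str:
    """Map a metric value onto the four risk categories. `success` and
    `failure` bound the risk region (higher is worse in this sketch);
    `breakpoints` mark the low->moderate, moderate->significant, and
    significant->high transitions between the two end points."""
    if value <= success:
        return "low"       # no further improvement appreciably helps
    if value >= failure:
        return "high"      # no further degradation appreciably hurts
    low_to_mod, mod_to_sig, sig_to_high = breakpoints
    if value < low_to_mod:
        return "low"
    if value < mod_to_sig:
        return "moderate"
    if value < sig_to_high:
        return "significant"
    return "high"

# Hypothetical metric: days required to reconstitute a capability.
print(metric_risk(45, success=30, failure=90, breakpoints=(40, 60, 75)))  # -> moderate
```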