From: brian@ucsd.Edu (Brian Kantor)
Newsgroups: comp.doc
Subject: Computer Security - 1983 Orange Book [part 3 of 5]
Date: 23 Jun 90 15:12:14 GMT
Distribution: usa
Organization: The Avant-Garde of the Now, Ltd.

4.1.1.4 Mandatory Access Control

The TCB shall enforce a mandatory access control policy over all resources (i.e., subjects, storage objects, and I/O devices) that are directly or indirectly accessible by subjects external to the TCB. These subjects and objects shall be assigned sensitivity labels that are a combination of hierarchical classification levels and non-hierarchical categories, and the labels shall be used as the basis for mandatory access control decisions. The TCB shall be able to support two or more such security levels. (See the Mandatory Access Control guidelines.)

The following requirements shall hold for all accesses between all subjects external to the TCB and all objects directly or indirectly accessible by these subjects: A subject can read an object only if the hierarchical classification in the subject's security level is greater than or equal to the hierarchical classification in the object's security level and the non-hierarchical categories in the subject's security level include all the non-hierarchical categories in the object's security level. A subject can write an object only if the hierarchical classification in the subject's security level is less than or equal to the hierarchical classification in the object's security level and all the non-hierarchical categories in the subject's security level are included in the non-hierarchical categories in the object's security level.

4.1.2 ACCOUNTABILITY

4.1.2.1 Identification and Authentication

The TCB shall require users to identify themselves to it before beginning to perform any other actions that the TCB is expected to mediate.
Furthermore, the TCB shall maintain authentication data that includes information for verifying the identity of individual users (e.g., passwords) as well as information for determining the clearance and authorizations of individual users. This data shall be used by the TCB to authenticate the user's identity and to determine the security level and authorizations of subjects that may be created to act on behalf of the individual user. The TCB shall protect authentication data so that it cannot be accessed by any unauthorized user. The TCB shall be able to enforce individual accountability by providing the capability to uniquely identify each individual ADP system user. The TCB shall also provide the capability of associating this identity with all auditable actions taken by that individual.

4.1.2.1.1 Trusted Path

The TCB shall support a trusted communication path between itself and users for use when a positive TCB-to-user connection is required (e.g., login, change subject security level). Communications via this trusted path shall be activated exclusively by a user or the TCB and shall be logically isolated and unmistakably distinguishable from other paths.

4.1.2.2 Audit

The TCB shall be able to create, maintain, and protect from modification or unauthorized access or destruction an audit trail of accesses to the objects it protects. The audit data shall be protected by the TCB so that read access to it is limited to those who are authorized for audit data. The TCB shall be able to record the following types of events: use of identification and authentication mechanisms, introduction of objects into a user's address space (e.g., file open, program initiation), deletion of objects, and actions taken by computer operators and system administrators and/or system security officers. The TCB shall also be able to audit any override of human-readable output markings.
For each recorded event, the audit record shall identify: date and time of the event, user, type of event, and success or failure of the event. For identification/authentication events the origin of request (e.g., terminal ID) shall be included in the audit record. For events that introduce an object into a user's address space and for object deletion events the audit record shall include the name of the object and the object's security level. The ADP system administrator shall be able to selectively audit the actions of any one or more users based on individual identity and/or object security level. The TCB shall be able to audit the identified events that may be used in the exploitation of covert storage channels. The TCB shall contain a mechanism that is able to monitor the occurrence or accumulation of security auditable events that may indicate an imminent violation of security policy. This mechanism shall be able to immediately notify the security administrator when thresholds are exceeded.

4.1.3 ASSURANCE

4.1.3.1 Operational Assurance

4.1.3.1.1 System Architecture

The TCB shall maintain a domain for its own execution that protects it from external interference or tampering (e.g., by modification of its code or data structures). The TCB shall maintain process isolation through the provision of distinct address spaces under its control. The TCB shall be internally structured into well-defined largely independent modules. It shall make effective use of available hardware to separate those elements that are protection-critical from those that are not. The TCB modules shall be designed such that the principle of least privilege is enforced. Features in hardware, such as segmentation, shall be used to support logically distinct storage objects with separate attributes (namely: readable, writeable). The user interface to the TCB shall be completely defined and all elements of the TCB identified.
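The audit requirements in 4.1.2.2 above reduce to a fixed set of fields required of every record, a few fields required only for certain event types, and a selective-audit filter keyed on user identity and/or object security level. A minimal sketch of one way to shape this in Python (all names are hypothetical illustrations, not taken from the standard):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AuditRecord:
    # Fields required for every recorded event
    timestamp: datetime        # date and time of the event
    user: str                  # individual on whose behalf the action was taken
    event_type: str            # e.g. "login", "file_open", "object_delete"
    success: bool              # success or failure of the event
    # Fields required only for certain event types
    origin: Optional[str] = None        # e.g. terminal ID, for I&A events
    object_name: Optional[str] = None   # for object introduction/deletion events
    object_level: Optional[str] = None  # the object's security level

def matches(rec: AuditRecord, user: Optional[str] = None,
            object_level: Optional[str] = None) -> bool:
    """Selective audit: keep a record if it matches the requested individual
    identity and/or object security level (None means "don't filter")."""
    if user is not None and rec.user != user:
        return False
    if object_level is not None and rec.object_level != object_level:
        return False
    return True
```

An administrator's selective-audit query is then just a filter of the trail with `matches`, while threshold monitoring would count matching records per time window.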
The TCB shall be designed and structured to use a complete, conceptually simple protection mechanism with precisely defined semantics. This mechanism shall play a central role in enforcing the internal structuring of the TCB and the system. The TCB shall incorporate significant use of layering, abstraction and data hiding. Significant system engineering shall be directed toward minimizing the complexity of the TCB and excluding from the TCB modules that are not protection-critical.

4.1.3.1.2 System Integrity

Hardware and/or software features shall be provided that can be used to periodically validate the correct operation of the on-site hardware and firmware elements of the TCB.

4.1.3.1.3 Covert Channel Analysis

The system developer shall conduct a thorough search for COVERT CHANNELS and make a determination (either by actual measurement or by engineering estimation) of the maximum bandwidth of each identified channel. (See the Covert Channels Guideline section.) FORMAL METHODS SHALL BE USED IN THE ANALYSIS.

4.1.3.1.4 Trusted Facility Management

The TCB shall support separate operator and administrator functions. The functions performed in the role of a security administrator shall be identified. The ADP system administrative personnel shall only be able to perform security administrator functions after taking a distinct auditable action to assume the security administrator role on the ADP system. Non-security functions that can be performed in the security administration role shall be limited strictly to those essential to performing the security role effectively.

4.1.3.1.5 Trusted Recovery

Procedures and/or mechanisms shall be provided to assure that, after an ADP system failure or other discontinuity, recovery without a protection compromise is obtained.

4.1.3.2 Life-Cycle Assurance

4.1.3.2.1 Security Testing

The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation.
A team of individuals who thoroughly understand the specific implementation of the TCB shall subject its design documentation, source code, and object code to thorough analysis and testing. Their objectives shall be: to uncover all design and implementation flaws that would permit a subject external to the TCB to read, change, or delete data normally denied under the mandatory or discretionary security policy enforced by the TCB; as well as to assure that no subject (without authorization to do so) is able to cause the TCB to enter a state such that it is unable to respond to communications initiated by other users. The TCB shall be found resistant to penetration. All discovered flaws shall be corrected and the TCB retested to demonstrate that they have been eliminated and that new flaws have not been introduced. Testing shall demonstrate that the TCB implementation is consistent with the FORMAL top-level specification. (See the Security Testing Guidelines.) No design flaws and no more than a few correctable implementation flaws may be found during testing and there shall be reasonable confidence that few remain. MANUAL OR OTHER MAPPING OF THE FTLS TO THE SOURCE CODE MAY FORM A BASIS FOR PENETRATION TESTING.

4.1.3.2.2 Design Specification and Verification

A formal model of the security policy supported by the TCB shall be maintained that is proven consistent with its axioms. A descriptive top-level specification (DTLS) of the TCB shall be maintained that completely and accurately describes the TCB in terms of exceptions, error messages, and effects. A FORMAL TOP-LEVEL SPECIFICATION (FTLS) OF THE TCB SHALL BE MAINTAINED THAT ACCURATELY DESCRIBES THE TCB IN TERMS OF EXCEPTIONS, ERROR MESSAGES, AND EFFECTS. THE DTLS AND FTLS SHALL INCLUDE THOSE COMPONENTS OF THE TCB THAT ARE IMPLEMENTED AS HARDWARE AND/OR FIRMWARE IF THEIR PROPERTIES ARE VISIBLE AT THE TCB INTERFACE. The FTLS shall be shown to be an accurate description of the TCB interface.
A convincing argument shall be given that the DTLS is consistent with the model AND A COMBINATION OF FORMAL AND INFORMAL TECHNIQUES SHALL BE USED TO SHOW THAT THE FTLS IS CONSISTENT WITH THE MODEL. THIS VERIFICATION EVIDENCE SHALL BE CONSISTENT WITH THAT PROVIDED WITHIN THE STATE-OF-THE-ART OF THE PARTICULAR COMPUTER SECURITY CENTER-ENDORSED FORMAL SPECIFICATION AND VERIFICATION SYSTEM USED. MANUAL OR OTHER MAPPING OF THE FTLS TO THE TCB SOURCE CODE SHALL BE PERFORMED TO PROVIDE EVIDENCE OF CORRECT IMPLEMENTATION.

4.1.3.2.3 Configuration Management

During THE ENTIRE LIFE-CYCLE, I.E., DURING THE DESIGN, DEVELOPMENT, and maintenance of the TCB, a configuration management system shall be in place FOR ALL SECURITY-RELEVANT HARDWARE, FIRMWARE, AND SOFTWARE that maintains control of changes to THE FORMAL MODEL, the descriptive AND FORMAL top-level SPECIFICATIONS, other design data, implementation documentation, source code, the running version of the object code, and test fixtures and documentation. The configuration management system shall assure a consistent mapping among all documentation and code associated with the current version of the TCB. Tools shall be provided for generation of a new version of the TCB from source code. Also available shall be tools, MAINTAINED UNDER STRICT CONFIGURATION CONTROL, for comparing a newly generated version with the previous TCB version in order to ascertain that only the intended changes have been made in the code that will actually be used as the new version of the TCB. A COMBINATION OF TECHNICAL, PHYSICAL, AND PROCEDURAL SAFEGUARDS SHALL BE USED TO PROTECT FROM UNAUTHORIZED MODIFICATION OR DESTRUCTION THE MASTER COPY OR COPIES OF ALL MATERIAL USED TO GENERATE THE TCB.
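The comparison tool required by 4.1.3.2.3 above must show that a newly generated TCB differs from the previous version only in the intended changes. A minimal sketch of the idea in Python, assuming each TCB build is a directory tree of generated files (paths and function names are hypothetical):

```python
import hashlib
from pathlib import Path

def digest_tree(root: str) -> dict:
    """Map each file's path (relative to root) to a SHA-256 digest of its contents."""
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(base.rglob("*")) if p.is_file()
    }

def compare_versions(old_root: str, new_root: str) -> dict:
    """Report which files were added, removed, or changed between two TCB builds,
    so a reviewer can confirm that only the intended changes are present."""
    old, new = digest_tree(old_root), digest_tree(new_root)
    return {
        "added":   sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(f for f in set(old) & set(new) if old[f] != new[f]),
    }
```

The report itself would of course be reviewed under the same strict configuration control the standard requires of the comparison tools.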
4.1.3.2.4 Trusted Distribution

A TRUSTED ADP SYSTEM CONTROL AND DISTRIBUTION FACILITY SHALL BE PROVIDED FOR MAINTAINING THE INTEGRITY OF THE MAPPING BETWEEN THE MASTER DATA DESCRIBING THE CURRENT VERSION OF THE TCB AND THE ON-SITE MASTER COPY OF THE CODE FOR THE CURRENT VERSION. PROCEDURES (E.G., SITE SECURITY ACCEPTANCE TESTING) SHALL EXIST FOR ASSURING THAT THE TCB SOFTWARE, FIRMWARE, AND HARDWARE UPDATES DISTRIBUTED TO A CUSTOMER ARE EXACTLY AS SPECIFIED BY THE MASTER COPIES.

4.1.4 DOCUMENTATION

4.1.4.1 Security Features User's Guide

A single summary, chapter, or manual in user documentation shall describe the protection mechanisms provided by the TCB, guidelines on their use, and how they interact with one another.

4.1.4.2 Trusted Facility Manual

A manual addressed to the ADP system administrator shall present cautions about functions and privileges that should be controlled when running a secure facility. The procedures for examining and maintaining the audit files as well as the detailed audit record structure for each type of audit event shall be given. The manual shall describe the operator and administrator functions related to security, to include changing the security characteristics of a user. It shall provide guidelines on the consistent and effective use of the protection features of the system, how they interact, how to securely generate a new TCB, and facility procedures, warnings, and privileges that need to be controlled in order to operate the facility in a secure manner. The TCB modules that contain the reference validation mechanism shall be identified. The procedures for secure generation of a new TCB from source after modification of any modules in the TCB shall be described. It shall include the procedures to ensure that the system is initially started in a secure manner. Procedures shall also be included to resume secure system operation after any lapse in system operation.
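The site security acceptance test contemplated by 4.1.3.2.4 above amounts to confirming that what arrived on site is bit-for-bit what the master copies specify. A minimal sketch, assuming the distribution facility supplies a manifest of expected digests for each update (the manifest format and names are hypothetical, not part of the standard):

```python
import hashlib
from pathlib import Path

def verify_delivery(manifest: dict, delivery_root: str) -> list:
    """Compare each delivered file against the digest recorded for its master
    copy; return the names of files that are missing or do not match."""
    failures = []
    for name, expected in sorted(manifest.items()):
        path = Path(delivery_root) / name
        if not path.is_file():
            failures.append(name)          # update incomplete
            continue
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        if actual != expected:
            failures.append(name)          # update does not match master copy
    return failures
```

An empty result means the update is exactly as specified by the master copies; in practice the manifest itself would also have to reach the site by a channel whose integrity is assured.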
4.1.4.3 Test Documentation

The system developer shall provide to the evaluators a document that describes the test plan and results of the security mechanisms' functional testing. It shall include results of testing the effectiveness of the methods used to reduce covert channel bandwidths. THE RESULTS OF THE MAPPING BETWEEN THE FORMAL TOP-LEVEL SPECIFICATION AND THE TCB SOURCE CODE SHALL BE GIVEN.

4.1.4.4 Design Documentation

Documentation shall be available that provides a description of the manufacturer's philosophy of protection and an explanation of how this philosophy is translated into the TCB. The interfaces between the TCB modules shall be described. A formal description of the security policy model enforced by the TCB shall be available and proven that it is sufficient to enforce the security policy. The specific TCB protection mechanisms shall be identified and an explanation given to show that they satisfy the model. The descriptive top-level specification (DTLS) shall be shown to be an accurate description of the TCB interface. Documentation shall describe how the TCB implements the reference monitor concept and give an explanation why it is tamperproof, cannot be bypassed, and is correctly implemented. The TCB implementation (i.e., in hardware, firmware, and software) shall be informally shown to be consistent with the FORMAL TOP-LEVEL SPECIFICATION (FTLS). The elements of the FTLS shall be shown, using informal techniques, to correspond to the elements of the TCB. Documentation shall describe how the TCB is structured to facilitate testing and to enforce least privilege. This documentation shall also present the results of the covert channel analysis and the tradeoffs involved in restricting the channels. All auditable events that may be used in the exploitation of known covert storage channels shall be identified. The bandwidths of known covert storage channels, the use of which is not detectable by the auditing mechanisms, shall be provided.
(See the Covert Channel Guideline section.) HARDWARE, FIRMWARE, AND SOFTWARE MECHANISMS NOT DEALT WITH IN THE FTLS BUT STRICTLY INTERNAL TO THE TCB (E.G., MAPPING REGISTERS, DIRECT MEMORY ACCESS I/O) SHALL BE CLEARLY DESCRIBED.

4.2 BEYOND CLASS (A1)

Most of the security enhancements envisioned for systems that will provide features and assurance in addition to that already provided by class (A1) systems are beyond current technology. The discussion below is intended to guide future work and is derived from research and development activities already underway in both the public and private sectors. As more and better analysis techniques are developed, the requirements for these systems will become more explicit. In the future, use of formal verification will be extended to the source level and covert timing channels will be more fully addressed. At this level the design environment will become important and testing will be aided by analysis of the formal top-level specification. Consideration will be given to the correctness of the tools used in TCB development (e.g., compilers, assemblers, loaders) and to the correct functioning of the hardware/firmware on which the TCB will run. Areas to be addressed by systems beyond class (A1) include:

* System Architecture

A demonstration (formal or otherwise) must be given showing that requirements of self-protection and completeness for reference monitors have been implemented in the TCB.

* Security Testing

Although beyond the current state-of-the-art, it is envisioned that some test-case generation will be done automatically from the formal top-level specification or formal lower-level specifications.

* Formal Specification and Verification

The TCB must be verified down to the source code level, using formal verification methods where feasible. Formal verification of the source code of the security-relevant portions of an operating system has proven to be a difficult task.
Two important considerations are the choice of a high-level language whose semantics can be fully and formally expressed, and a careful mapping, through successive stages, of the abstract formal design to a formalization of the implementation in low-level specifications. Experience has shown that only when the lowest level specifications closely correspond to the actual code can code proofs be successfully accomplished.

* Trusted Design Environment

The TCB must be designed in a trusted facility with only trusted (cleared) personnel.

PART II:

5.0 CONTROL OBJECTIVES FOR TRUSTED COMPUTER SYSTEMS

The criteria are divided within each class into groups of requirements. These groupings were developed to assure that three basic control objectives for computer security are satisfied and not overlooked. These control objectives deal with:

* Security Policy
* Accountability
* Assurance

This section provides a discussion of these general control objectives and their implication in terms of designing trusted systems.

5.1 A Need for Consensus

A major goal of the DoD Computer Security Center is to encourage the Computer Industry to develop trusted computer systems and products, making them widely available in the commercial market place. Achievement of this goal will require recognition and articulation by both the public and private sectors of a need and demand for such products. As described in the introduction to this document, efforts to define the problems and develop solutions associated with processing nationally sensitive information, as well as other sensitive data such as financial, medical, and personnel information used by the National Security Establishment, have been underway for a number of years. The criteria, as described in Part I, represent the culmination of these efforts and describe basic requirements for building trusted computer systems. To date, however, these systems have been viewed by many as only satisfying National Security needs.
As long as this perception continues the consensus needed to motivate manufacture of trusted systems will be lacking. The purpose of this section is to describe, in some detail, the fundamental control objectives that lay the foundations for requirements delineated in the criteria. The goal is to explain the foundations so that those outside the National Security Establishment can assess their universality and, by extension, the universal applicability of the criteria requirements to processing all types of sensitive applications whether they be for National Security or the private sector.

5.2 Definition and Usefulness

The term "control objective" refers to a statement of intent with respect to control over some aspect of an organization's resources, or processes, or both. In terms of a computer system, control objectives provide a framework for developing a strategy for fulfilling a set of security requirements for any given system. Developed in response to generic vulnerabilities, such as the need to manage and handle sensitive data in order to prevent compromise, or the need to provide accountability in order to detect fraud, control objectives have been identified as a useful method of expressing security goals.[3] Examples of control objectives include the three basic design requirements for implementing the reference monitor concept discussed in Section 6. They are:

* The reference validation mechanism must be tamperproof.
* The reference validation mechanism must always be invoked.
* The reference validation mechanism must be small enough to be subjected to analysis and tests, the completeness of which can be assured.[1]

5.3 Criteria Control Objectives

The three basic control objectives of the criteria are concerned with security policy, accountability, and assurance. The remainder of this section provides a discussion of these basic requirements.
5.3.1 Security Policy

In the most general sense, computer security is concerned with controlling the way in which a computer can be used, i.e., controlling how information processed by it can be accessed and manipulated. However, at closer examination, computer security can refer to a number of areas. Symptomatic of this, FIPS PUB 39, Glossary For Computer Systems Security, does not have a unique definition for computer security.[16] Instead there are eleven separate definitions for security which include: ADP systems security, administrative security, data security, etc. A common thread running through these definitions is the word "protection." Further declarations of protection requirements can be found in DoD Directive 5200.28 which describes an acceptable level of protection for classified data to be one that will "assure that systems which process, store, or use classified data and produce classified information will, with reasonable dependability, prevent:

a. Deliberate or inadvertent access to classified material by unauthorized persons, and

b. Unauthorized manipulation of the computer and its associated peripheral devices."[8]

In summary, protection requirements must be defined in terms of the perceived threats, risks, and goals of an organization. This is often stated in terms of a security policy. It has been pointed out in the literature that it is external laws, rules, regulations, etc. that establish what access to information is to be permitted, independent of the use of a computer. In particular, a given system can only be said to be secure with respect to its enforcement of some specific policy.[30] Thus, the control objective for security policy is:

SECURITY POLICY CONTROL OBJECTIVE

A STATEMENT OF INTENT WITH REGARD TO CONTROL OVER ACCESS TO AND DISSEMINATION OF INFORMATION, TO BE KNOWN AS THE SECURITY POLICY, MUST BE PRECISELY DEFINED AND IMPLEMENTED FOR EACH SYSTEM THAT IS USED TO PROCESS SENSITIVE INFORMATION.
THE SECURITY POLICY MUST ACCURATELY REFLECT THE LAWS, REGULATIONS, AND GENERAL POLICIES FROM WHICH IT IS DERIVED.

5.3.1.1 Mandatory Security Policy

Where a security policy is developed that is to be applied to control of classified or other specifically designated sensitive information, the policy must include detailed rules on how to handle that information throughout its life-cycle. These rules are a function of the various sensitivity designations that the information can assume and the various forms of access supported by the system. Mandatory security refers to the enforcement of a set of access control rules that constrains a subject's access to information on the basis of a comparison of that individual's clearance/authorization to the information, the classification/sensitivity designation of the information, and the form of access being mediated. Mandatory policies either require or can be satisfied by systems that can enforce a partial ordering of designations, namely, the designations must form what is mathematically known as a "lattice."[5]

A clear implication of the above is that the system must assure that the designations associated with sensitive data cannot be arbitrarily changed, since this could permit individuals who lack the appropriate authorization to access sensitive information. Also implied is the requirement that the system control the flow of information so that data cannot be stored with lower sensitivity designations unless its "downgrading" has been authorized. The control objective is:

MANDATORY SECURITY CONTROL OBJECTIVE

SECURITY POLICIES DEFINED FOR SYSTEMS THAT ARE USED TO PROCESS CLASSIFIED OR OTHER SPECIFICALLY CATEGORIZED SENSITIVE INFORMATION MUST INCLUDE PROVISIONS FOR THE ENFORCEMENT OF MANDATORY ACCESS CONTROL RULES.
THAT IS, THEY MUST INCLUDE A SET OF RULES FOR CONTROLLING ACCESS BASED DIRECTLY ON A COMPARISON OF THE INDIVIDUAL'S CLEARANCE OR AUTHORIZATION FOR THE INFORMATION AND THE CLASSIFICATION OR SENSITIVITY DESIGNATION OF THE INFORMATION BEING SOUGHT, AND INDIRECTLY ON CONSIDERATIONS OF PHYSICAL AND OTHER ENVIRONMENTAL FACTORS OF CONTROL. THE MANDATORY ACCESS CONTROL RULES MUST ACCURATELY REFLECT THE LAWS, REGULATIONS, AND GENERAL POLICIES FROM WHICH THEY ARE DERIVED.

5.3.1.2 Discretionary Security Policy

Discretionary security is the principal type of access control available in computer systems today. The basis of this kind of security is that an individual user, or program operating on his behalf, is allowed to specify explicitly the types of access other users may have to information under his control. Discretionary security differs from mandatory security in that it implements an access control policy on the basis of an individual's need-to-know as opposed to mandatory controls which are driven by the classification or sensitivity designation of the information.

Discretionary controls are not a replacement for mandatory controls. In an environment in which information is classified (as in the DoD) discretionary security provides for a finer granularity of control within the overall constraints of the mandatory policy. Access to classified information requires effective implementation of both types of controls as precondition to granting that access. In general, no person may have access to classified information unless: (a) that person has been determined to be trustworthy, i.e., granted a personnel security clearance -- MANDATORY, and (b) access is necessary for the performance of official duties, i.e., determined to have a need-to-know -- DISCRETIONARY. In other words, discretionary controls give individuals discretion to decide on which of the permissible accesses will actually be allowed to which users, consistent with overriding mandatory policy restrictions.
The control objective is:

DISCRETIONARY SECURITY CONTROL OBJECTIVE

SECURITY POLICIES DEFINED FOR SYSTEMS THAT ARE USED TO PROCESS CLASSIFIED OR OTHER SENSITIVE INFORMATION MUST INCLUDE PROVISIONS FOR THE ENFORCEMENT OF DISCRETIONARY ACCESS CONTROL RULES. THAT IS, THEY MUST INCLUDE A CONSISTENT SET OF RULES FOR CONTROLLING AND LIMITING ACCESS BASED ON IDENTIFIED INDIVIDUALS WHO HAVE BEEN DETERMINED TO HAVE A NEED-TO-KNOW FOR THE INFORMATION.

5.3.1.3 Marking

To implement a set of mechanisms that will put into effect a mandatory security policy, it is necessary that the system mark information with appropriate classification or sensitivity labels and maintain these markings as the information moves through the system. Once information is unalterably and accurately marked, comparisons required by the mandatory access control rules can be accurately and consistently made. An additional benefit of having the system maintain the classification or sensitivity label internally is the ability to automatically generate properly "labeled" output. The labels, if accurately and integrally maintained by the system, remain accurate when output from the system. The control objective is:

MARKING CONTROL OBJECTIVE

SYSTEMS THAT ARE DESIGNED TO ENFORCE A MANDATORY SECURITY POLICY MUST STORE AND PRESERVE THE INTEGRITY OF CLASSIFICATION OR OTHER SENSITIVITY LABELS FOR ALL INFORMATION. LABELS EXPORTED FROM THE SYSTEM MUST BE ACCURATE REPRESENTATIONS OF THE CORRESPONDING INTERNAL SENSITIVITY LABELS BEING EXPORTED.

5.3.2 Accountability

The second basic control objective addresses one of the fundamental principles of security, i.e., individual accountability. Individual accountability is the key to securing and controlling any system that processes information on behalf of individuals or groups of individuals. A number of requirements must be met in order to satisfy this objective. The first requirement is for individual user identification.
Second, there is a need for authentication of the identification. Identification is functionally dependent on authentication. Without authentication, user identification has no credibility. Without a credible identity, neither mandatory nor discretionary security policies can be properly invoked because there is no assurance that proper authorizations can be made.

The third requirement is for dependable audit capabilities. That is, a trusted computer system must provide authorized personnel with the ability to audit any action that can potentially cause access to, generation of, or effect the release of classified or sensitive information. The audit data will be selectively acquired based on the auditing needs of a particular installation and/or application. However, there must be sufficient granularity in the audit data to support tracing the auditable events to a specific individual who has taken the actions or on whose behalf the actions were taken. The control objective is:

ACCOUNTABILITY CONTROL OBJECTIVE

SYSTEMS THAT ARE USED TO PROCESS OR HANDLE CLASSIFIED OR OTHER SENSITIVE INFORMATION MUST ASSURE INDIVIDUAL ACCOUNTABILITY WHENEVER EITHER A MANDATORY OR DISCRETIONARY SECURITY POLICY IS INVOKED. FURTHERMORE, TO ASSURE ACCOUNTABILITY THE CAPABILITY MUST EXIST FOR AN AUTHORIZED AND COMPETENT AGENT TO ACCESS AND EVALUATE ACCOUNTABILITY INFORMATION BY A SECURE MEANS, WITHIN A REASONABLE AMOUNT OF TIME, AND WITHOUT UNDUE DIFFICULTY.

5.3.3 Assurance

The third basic control objective is concerned with guaranteeing or providing confidence that the security policy has been implemented correctly and that the protection-relevant elements of the system do, indeed, accurately mediate and enforce the intent of that policy. By extension, assurance must include a guarantee that the trusted portion of the system works only as intended. To accomplish these objectives, two types of assurance are needed. They are life-cycle assurance and operational assurance.
Life-cycle assurance refers to steps taken by an organization to ensure that the system is designed, developed, and maintained using formalized and rigorous controls and standards.[17] Computer systems that process and store sensitive or classified information depend on the hardware and software to protect that information. It follows that the hardware and software themselves must be protected against unauthorized changes that could cause protection mechanisms to malfunction or be bypassed completely. For this reason trusted computer systems must be carefully evaluated and tested during the design and development phases and reevaluated whenever changes are made that could affect the integrity of the protection mechanisms. Only in this way can confidence be provided that the hardware and software interpretation of the security policy is maintained accurately and without distortion.

While life-cycle assurance is concerned with procedures for managing system design, development, and maintenance, operational assurance focuses on features and system architecture used to ensure that the security policy is uncircumventably enforced during system operation. That is, the security policy must be integrated into the hardware and software protection features of the system. Examples of steps taken to provide this kind of confidence include: methods for testing the operational hardware and software for correct operation, isolation of protection-critical code, and the use of hardware and software to provide distinct domains. The control objective is:

ASSURANCE CONTROL OBJECTIVE

SYSTEMS THAT ARE USED TO PROCESS OR HANDLE CLASSIFIED OR OTHER SENSITIVE INFORMATION MUST BE DESIGNED TO GUARANTEE CORRECT AND ACCURATE INTERPRETATION OF THE SECURITY POLICY AND MUST NOT DISTORT THE INTENT OF THAT POLICY. ASSURANCE MUST BE PROVIDED THAT CORRECT IMPLEMENTATION AND OPERATION OF THE POLICY EXISTS THROUGHOUT THE SYSTEM'S LIFE-CYCLE.
6.0 RATIONALE BEHIND THE EVALUATION CLASSES

6.1 The Reference Monitor Concept

In October of 1972, the Computer Security Technology Planning Study, conducted by James P. Anderson & Co., produced a report for the Electronic Systems Division (ESD) of the United States Air Force.[1] In that report, the concept of "a reference monitor which enforces the authorized access relationships between subjects and objects of a system" was introduced. The reference monitor concept was found to be an essential element of any system that would provide multilevel secure computing facilities and controls.

The Anderson report went on to define the reference validation mechanism as "an implementation of the reference monitor concept . . . that validates each reference to data or programs by any user (program) against a list of authorized types of reference for that user." It then listed the three design requirements that must be met by a reference validation mechanism:

a. The reference validation mechanism must be tamper proof.

b. The reference validation mechanism must always be invoked.

c. The reference validation mechanism must be small enough to be subject to analysis and tests, the completeness of which can be assured.[1]

Extensive peer review and continuing research and development activities have sustained the validity of the Anderson Committee's findings. Early examples of the reference validation mechanism were known as security kernels. The Anderson report described the security kernel as "that combination of hardware and software which implements the reference monitor concept."[1] In this vein, it will be noted that the security kernel must support the three reference monitor requirements listed above.

6.2 A Formal Security Policy Model

Following the publication of the Anderson report, considerable research was initiated into formal models of security policy requirements and of the mechanisms that would implement and enforce those policy models as a security kernel.
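The validation step described above can be illustrated with a small sketch. This is an assumption-laden toy, not an excerpt from any evaluated system: the table ACCESS_MATRIX and the function reference_monitor are names invented for the example, and the "list of authorized types of reference" is reduced to a Python dictionary.

```python
# Toy sketch of a reference validation mechanism: every reference by a
# subject to an object is checked against a list of authorized types of
# reference for that subject. Names here are illustrative assumptions.

# Authorized types of reference, per (subject, object) pair.
ACCESS_MATRIX = {
    ("alice", "payroll.dat"): {"read"},
    ("bob",   "payroll.dat"): {"read", "write"},
}

def reference_monitor(subject: str, obj: str, mode: str) -> bool:
    """Mediates an access request. In a real system this mechanism must
    also be tamper proof (requirement a), invoked on every reference
    (requirement b), and small enough to analyze completely (requirement c);
    this sketch only illustrates the validation itself."""
    return mode in ACCESS_MATRIX.get((subject, obj), set())

print(reference_monitor("alice", "payroll.dat", "write"))  # False
print(reference_monitor("bob", "payroll.dat", "write"))    # True
```

Note that a subject absent from the authorization list is denied by default; the mechanism grants only what is explicitly authorized.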
Prominent among these efforts was the ESD-sponsored development of the Bell and LaPadula model, an abstract formal treatment of DoD security policy.[2] Using mathematics and set theory, the model precisely defines the notion of secure state, fundamental modes of access, and the rules for granting subjects specific modes of access to objects. Finally, a theorem is proven to demonstrate that the rules are security-preserving operations, so that the application of any sequence of the rules to a system that is in a secure state will result in the system entering a new state that is also secure. This theorem is known as the Basic Security Theorem.

The Bell and LaPadula model defines a relationship between clearances of subjects and classifications of system objects, now referenced as the "dominance relation." From this definition, accesses permitted between subjects and objects are explicitly defined for the fundamental modes of access, including read-only access, read/write access, and write-only access. The model defines the Simple Security Condition to control granting a subject read access to a specific object, and the *-Property (read "Star Property") to control granting a subject write access to a specific object. Both the Simple Security Condition and the *-Property include mandatory security provisions based on the dominance relation between the clearance of the subject and the classification of the object. The Discretionary Security Property is also defined, and requires that a specific subject be authorized for the particular mode of access required for the state transition. In its treatment of subjects (processes acting on behalf of a user), the model distinguishes between trusted subjects (i.e., not constrained within the model by the *-Property) and untrusted subjects (those that are constrained by the *-Property).
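The dominance relation and the two mandatory checks can be sketched in a few lines. This is a minimal illustrative encoding, not the formal model: the level names, the RANK table, and the function names are assumptions made for the example, and the write check shown is the mandatory portion of the *-Property as applied to untrusted subjects.

```python
# Illustrative sketch of the Bell-LaPadula dominance relation and the
# mandatory portions of the Simple Security Condition and the *-Property.
from dataclasses import dataclass

# Hierarchical classifications, ordered lowest to highest (assumed levels).
RANK = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class SecurityLevel:
    classification: str    # hierarchical component
    categories: frozenset  # non-hierarchical component

def dominates(a: SecurityLevel, b: SecurityLevel) -> bool:
    """a dominates b: a's classification is at least b's, and a's
    categories include all of b's categories."""
    return (RANK[a.classification] >= RANK[b.classification]
            and a.categories >= b.categories)

def may_read(subject: SecurityLevel, obj: SecurityLevel) -> bool:
    # Simple Security Condition (mandatory part): subject dominates object.
    return dominates(subject, obj)

def may_write(subject: SecurityLevel, obj: SecurityLevel) -> bool:
    # *-Property (mandatory part, untrusted subjects): object dominates
    # subject, so information cannot be written "down" to a lower level.
    return dominates(obj, subject)

analyst = SecurityLevel("SECRET", frozenset({"NATO"}))
report  = SecurityLevel("CONFIDENTIAL", frozenset({"NATO"}))

print(may_read(analyst, report))   # True: SECRET/{NATO} dominates the report
print(may_write(analyst, report))  # False: writing down is prohibited
```

A trusted subject, in the model's terms, would simply be exempted from the may_write check; the discretionary property would add a further per-subject authorization test on top of both checks.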
From the Bell and LaPadula model there evolved a model of the method of proof required to formally demonstrate that all arbitrary sequences of state transitions are security-preserving. It was also shown that the *-Property is sufficient to prevent the compromise of information by Trojan Horse attacks.

6.3 The Trusted Computing Base

In order to encourage the widespread commercial availability of trusted computer systems, these evaluation criteria have been designed to address those systems in which a security kernel is specifically implemented as well as those in which a security kernel has not been implemented. The latter case includes those systems in which objective (c) is not fully supported because of the size or complexity of the reference validation mechanism. For convenience, these evaluation criteria use the term Trusted Computing Base to refer to the reference validation mechanism, be it a security kernel, front-end security filter, or the entire trusted computer system.

The heart of a trusted computer system is the Trusted Computing Base (TCB), which contains all of the elements of the system responsible for supporting the security policy and supporting the isolation of objects (code and data) on which the protection is based. The bounds of the TCB equate to the "security perimeter" referenced in some computer security literature. In the interest of understandable and maintainable protection, a TCB should be as simple as possible, consistent with the functions it has to perform. Thus, the TCB includes hardware, firmware, and software critical to protection and must be designed and implemented such that system elements excluded from it need not be trusted to maintain protection. Identification of the interface and elements of the TCB, along with their correct functionality, therefore forms the basis for evaluation. For general-purpose systems, the TCB will include key elements of the operating system and may include all of the operating system.
For embedded systems, the security policy may deal with objects in a way that is meaningful at the application level rather than at the operating system level. Thus, the protection policy may be enforced in the application software rather than in the underlying operating system. The TCB will necessarily include all those portions of the operating system and application software essential to the support of the policy. Note that, as the amount of code in the TCB increases, it becomes harder to be confident that the TCB enforces the reference monitor requirements under all circumstances.

6.4 Assurance

The third reference monitor design objective is currently interpreted as meaning that the TCB "must be of sufficiently simple organization and complexity to be subjected to analysis and tests, the completeness of which can be assured." Clearly, as the perceived degree of risk increases (e.g., the range of sensitivity of the system's protected data, along with the range of clearances held by the system's user population) for a particular system's operational application and environment, so also must the assurances be increased to substantiate the degree of trust that will be placed in the system. The hierarchy of requirements presented for the evaluation classes in the trusted computer system evaluation criteria reflects the need for these assurances.

As discussed in Section 5.3, the evaluation criteria uniformly require a statement of the security policy that is enforced by each trusted computer system. In addition, it is required that a convincing argument be presented that explains why the TCB satisfies the first two design requirements for a reference monitor. It is not expected that this argument will be entirely formal. This argument is required for each candidate system in order to satisfy the assurance control objective.
Systems to which security enforcement mechanisms have been added, rather than built in as fundamental design objectives, are not readily amenable to extensive analysis, since they lack the requisite conceptual simplicity of a security kernel. This is because their TCB extends to cover much of the entire system. Hence, their degree of trustworthiness can best be ascertained only by obtaining test results. Since no test procedure for something as complex as a computer system can be truly exhaustive, there is always the possibility that a subsequent penetration attempt could succeed. It is for this reason that such systems must fall into the lower evaluation classes.

On the other hand, those systems that are designed and engineered to support the TCB concepts are more amenable to analysis and structured testing. Formal methods can be used to analyze the correctness of their reference validation mechanisms in enforcing the system's security policy. Other methods, including less-formal arguments, can be used to substantiate claims for the completeness of their access mediation and their degree of tamper-resistance. More confidence can be placed in the results of this analysis and in the thoroughness of the structured testing than can be placed in the results for less methodically structured systems. For these reasons, it appears reasonable to conclude that such systems could be used in higher-risk environments. Successful implementations of such systems would be placed in the higher evaluation classes.

6.5 The Classes

It is highly desirable that there be only a small number of overall evaluation classes. Three major divisions have been identified in the evaluation criteria, with a fourth division reserved for those systems that have been evaluated and found to offer unacceptable security protection. Within each major evaluation division, it was found that "intermediate" classes of trusted system design and development could meaningfully be defined.
These intermediate classes have been designated in the criteria because they identify systems that:

* are viewed to offer significantly better protection and assurance than would systems that satisfy the basic requirements for their evaluation class; and

* could, there is reason to believe, eventually be evolved to satisfy the requirements for the next higher evaluation class.

Except within division A, it is not anticipated that additional "intermediate" evaluation classes satisfying the two characteristics described above will be identified. Distinctions in terms of system architecture, security policy enforcement, and evidence of credibility between evaluation classes have been defined such that the "jump" between evaluation classes would require a considerable investment of effort on the part of implementors. Correspondingly, there are expected to be significant differentials of risk to which systems from the higher evaluation classes will be exposed.

7.0 THE RELATIONSHIP BETWEEN POLICY AND THE CRITERIA

Section 1 presents fundamental computer security requirements, and Section 5 presents the control objectives for Trusted Computer Systems. These are general requirements, useful and necessary for the development of all secure systems. However, when designing systems that will be used to process classified or other sensitive information, the functional requirements for meeting the Control Objectives become more specific. There is a large body of policy laid down in the form of Regulations, Directives, Presidential Executive Orders, and OMB Circulars that forms the basis of the procedures for the handling and processing of Federal information in general and classified information specifically. This section presents pertinent excerpts from these policy statements and discusses their relationship to the Control Objectives.
7.1 Established Federal Policies

A significant number of computer security policies and associated requirements have been promulgated by Federal government elements. The interested reader is referred to reference [32], which analyzes the need for trusted systems in the civilian agencies of the Federal government, as well as in state and local governments and in the private sector. This reference also details a number of relevant Federal statutes, policies, and requirements not treated further below.

Security guidance for Federal automated information systems is provided by the Office of Management and Budget. Two specifically applicable Circulars have been issued. OMB Circular No. A-71, Transmittal Memorandum No. 1, "Security of Federal Automated Information Systems,"[26] directs each executive agency to establish and maintain a computer security program. It makes the head of each executive branch department and agency responsible "for assuring an adequate level of security for all agency data whether processed in-house or commercially. This includes responsibility for the establishment of physical, administrative and technical safeguards required to adequately protect personal, proprietary or other sensitive data not subject to national security regulations, as well as national security data."[26, para. 4 p. 2]

OMB Circular No. A-123, "Internal Control Systems,"[27] issued to help eliminate fraud, waste, and abuse in government programs, requires: (a) agency heads to issue internal control directives and assign responsibility, (b) managers to review programs for vulnerability, and (c) managers to perform periodic reviews to evaluate strengths and update controls.
Soon after promulgation of OMB Circular A-123, the relationship of its internal control requirements to building secure computer systems was recognized.[4] While not stipulating computer controls specifically, the definition of Internal Controls in A-123 makes it clear that computer systems are to be included:

"Internal Controls - The plan of organization and all of the methods and measures adopted within an agency to safeguard its resources, assure the accuracy and reliability of its information, assure adherence to applicable laws, regulations and policies, and promote operational economy and efficiency."[27, sec. 4.C]

The matter of classified national security information processed by ADP systems was one of the first areas given serious and extensive concern in computer security. The computer security policy documents promulgated as a result contain generally more specific and structured requirements than most, keyed in turn to an authoritative basis that itself provides a rather clearly articulated and structured information security policy. This basis, Executive Order 12356, "National Security Information," sets forth requirements for the classification, declassification, and safeguarding of "national security information" per se.[14]

7.2 DoD Policies

Within the Department of Defense, these broad requirements are implemented and further specified primarily through two vehicles: 1) DoD Regulation 5200.1-R [7], which applies to all components of the DoD as such, and 2) DoD 5220.22-M, "Industrial Security Manual for Safeguarding Classified Information" [11], which applies to contractors included within the Defense Industrial Security Program.
Note that the latter transcends DoD as such, since it applies not only to any contractors handling classified information for any DoD component, but also to the contractors of eighteen other Federal organizations for whom the Secretary of Defense is authorized to act in rendering industrial security services.*

____________________________________________________________
* i.e., NASA, Commerce Department, GSA, State Department, Small Business Administration, National Science Foundation, Treasury Department, Transportation Department, Interior Department, Agriculture Department, Health and Human Services Department, Labor Department, Environmental Protection Agency, Justice Department, U.S. Arms Control and Disarmament Agency, Federal Emergency Management Agency, Federal Reserve System, and U.S. General Accounting Office.
____________________________________________________________

For ADP systems, these information security requirements are further amplified and specified in: 1) DoD Directive 5200.28 [8] and DoD Manual 5200.28-M [9], for DoD components; and 2) Section XIII of DoD 5220.22-M [11], for contractors.

DoD Directive 5200.28, "Security Requirements for Automatic Data Processing (ADP) Systems," stipulates: "Classified material contained in an ADP system shall be safeguarded by the continuous employment of protective features in the system's hardware and software design and configuration . . . ."[8, sec. IV]

Furthermore, it is required that ADP systems that "process, store, or use classified data and produce classified information will, with reasonable dependability, prevent:

a. Deliberate or inadvertent access to classified material by unauthorized persons, and

b. Unauthorized manipulation of the computer and its associated peripheral devices."[8, sec. I B.3]

Requirements equivalent to these appear within DoD 5200.28-M [9] and in DoD 5220.22-M [11].
From the requirements imposed by these regulations, directives, and circulars, the three components of the Security Policy Control Objective (i.e., Mandatory and Discretionary Security and Marking), as well as the Accountability and Assurance Control Objectives, can be functionally defined for DoD applications. The following discussion provides further specificity in Policy for these Control Objectives.