Computer security is a branch of information security applied to both theoretical and actual computer systems. Within computer science, it addresses the enforcement of 'secure' behavior in the operation of computers. The definition of 'secure' varies by application, and is typically defined, implicitly or explicitly, by a security policy that addresses the confidentiality, integrity and availability of electronic information that is processed by or stored on computer systems.
The traditional approach is to create a trusted security kernel that exploits special-purpose hardware mechanisms in the microprocessor to constrain the operating system and the application programs to conform to the security policy. These systems can isolate processes and data to specific domains and restrict the access and privileges of users. This approach avoids the need to trust most of the operating system and applications.
In addition to restricting actions to a secure subset, a secure system should still permit authorized users to carry out legitimate and useful tasks. It might be possible to secure a computer against misuse using extreme measures:
"The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts."
Eugene H. Spafford, director of the Purdue Center for Education and Research in Information Assurance and Security. [1]
It is important to distinguish the techniques used to increase a system's security from the question of that system's security status. In particular, systems that contain fundamental flaws[1] in their security designs cannot be made secure without compromising their usability. Most computer systems cannot be made secure even after the application of extensive "computer security" measures, and where they can, functionality and ease of use often decrease.
Computer security can also be seen as a subfield of security engineering, which looks at broader security issues in addition to computer security.
Secure operating systems
One use of the term computer security refers to technology to implement a secure operating system. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is almost inactive today, perhaps because it is complex or not widely understood. Such ultra-strong secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a Computer security policy is the Bell-LaPadula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the memory management unit, to a special correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure the absolute impossibility of penetration by hostile elements. This capability is enabled because the configuration not only imposes a security policy, but in theory completely protects itself from corruption. Ordinary operating systems, on the other hand, lack the features that assure this maximal level of security. The design methodology to produce such secure systems is precise, deterministic and logical.
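The Bell-LaPadula model mentioned above governs information flow with two rules: a subject may not read data above its clearance ("no read up") and may not write data below it ("no write down"). The following is a minimal sketch of those two rules, assuming a simple linear ordering of hypothetical classification labels; real implementations use a lattice of (level, category-set) pairs.

```python
# A minimal sketch of the Bell-LaPadula "no read up, no write down" rules.
# The labels and their linear ordering are illustrative assumptions.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple-security property: read only at or below one's own level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property: write only at or above one's own level (no write down)."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# A "secret" subject may read "confidential" data but not "top_secret",
# and may write to "top_secret" but not down to "confidential".
print(can_read("secret", "confidential"))   # True
print(can_read("secret", "top_secret"))     # False
print(can_write("secret", "top_secret"))    # True
print(can_write("secret", "confidential"))  # False
```

The point of hardware-enforced kernels is that checks like these are applied on every access by a mechanism the application cannot bypass, rather than by cooperating application code.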
Systems designed with such methodology represent the state of the art of computer security and the capability to produce them is not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information and military secrets. These are very powerful security tools and very few secure operating systems have been certified at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN.) The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies security strength of products in terms of two components, security capability (as Protection Profile) and assurance levels (as EAL levels.) None of these ultra-high assurance secure general purpose operating systems have been produced for decades or certified under the Common Criteria.
Security by design
The technologies of computer security are based on logic. There is no universal standard notion of what secure behavior is; "security" is a concept unique to each situation. Security is extraneous to the function of a computer application rather than ancillary to it, so security necessarily imposes restrictions on the application's behavior.
There are several approaches to security in computing; sometimes a combination of approaches is valid:
1. Trust all the software to abide by a security policy, but the software is not trustworthy (this is computer insecurity).
2. Trust all the software to abide by a security policy, and the software is validated as trustworthy (by tedious branch and path analysis, for example).
3. Trust no software, but enforce a security policy with mechanisms that are not trustworthy (again, this is computer insecurity).
4. Trust no software, but enforce a security policy with trustworthy mechanisms.
Many systems unintentionally result in the first possibility. Approaches one and three lead to failure. Since approach two is expensive and non-deterministic, its use is very limited. Because approach four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture, with thin layers of two and thick layers of four.
There are myriad strategies and techniques used to design security systems. There are few, if any, effective strategies to enhance security after design.
One technique enforces the principle of least privilege to a great extent: an entity is granted only the privileges needed for its function. That way, even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
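Least privilege can be sketched with explicit capability sets, where each component holds only the operations it needs and every other operation is denied. The `Capability` class and operation names below are illustrative assumptions, not a real API.

```python
# A sketch of least privilege: each component is granted an explicit,
# minimal set of operations; everything outside that set is denied.

class Capability:
    def __init__(self, allowed: frozenset):
        self._allowed = allowed

    def check(self, op: str) -> None:
        if op not in self._allowed:
            raise PermissionError(f"operation {op!r} not granted")

# The log writer may append to the log, and nothing else.
log_writer = Capability(frozenset({"log:append"}))
log_writer.check("log:append")          # permitted: within its granted set
try:
    log_writer.check("config:write")    # denied: outside its granted set
except PermissionError as exc:
    print(exc)
```

Compromising the log writer then yields only the ability to append log entries, not to rewrite configuration or touch other subsystems.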
Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to making modules secure.
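As a small illustration of the best-effort unit-testing approach, the sketch below tests a security-relevant function whose accepted set is explicitly characterized. The validator and its rules are hypothetical examples, chosen only to show the pattern of testing both the accepted set and hostile inputs.

```python
import unittest

# A sketch of unit testing a small, security-relevant function: a username
# validator with an explicitly characterized accepted set (hypothetical rules).

def is_valid_username(name: str) -> bool:
    """Accept only 3-16 characters, lowercase letters or digits, starting with a letter."""
    if not (3 <= len(name) <= 16):
        return False
    if not name[0].isalpha():
        return False
    return all(c.islower() or c.isdigit() for c in name)

class TestUsernameValidator(unittest.TestCase):
    def test_accepts_well_formed_names(self):
        self.assertTrue(is_valid_username("alice9"))

    def test_rejects_malformed_and_hostile_input(self):
        for bad in ("", "ab", "9lives", "a" * 17, "bob; rm -rf /"):
            self.assertFalse(is_valid_username(bad))

if __name__ == "__main__":
    unittest.main()
```

Because the accepted set is small and precisely stated, the tests can probe its boundaries directly, which is exactly the situation where unit testing approximates the assurance of a formal proof.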
The design should use "defense in depth", where more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle: cascading several weak mechanisms does not provide the safety of a single stronger mechanism.
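Defense in depth can be sketched as independent checks at different layers, where defeating one layer gives no leverage over the next. The allowlist, token set, and request handler below are illustrative assumptions.

```python
# A sketch of defense in depth: two independent checks must both pass,
# and breaching one does not help bypass the other. Names are illustrative.

ALLOWED_HOSTS = {"10.0.0.5"}       # layer 1: network-level allowlist
VALID_TOKENS = {"s3cr3t-token"}    # layer 2: application-level credential

def handle_request(source_ip: str, token: str) -> str:
    if source_ip not in ALLOWED_HOSTS:      # independent network filter
        return "denied: host not allowed"
    if token not in VALID_TOKENS:           # independent authentication check
        return "denied: bad credential"
    return "ok"

print(handle_request("10.0.0.9", "s3cr3t-token"))  # blocked at layer 1
print(handle_request("10.0.0.5", "wrong"))         # blocked at layer 2
print(handle_request("10.0.0.5", "s3cr3t-token"))  # passes both layers
```

Note that the two checks rely on different secrets and different mechanisms; if they merely repeated the same weak test, the cascading principle above would apply and depth would add nothing.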
Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
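A minimal sketch of the "fail secure" principle: if the authorization check itself fails, for example because a policy database is unreachable, the request is denied rather than allowed. The policy functions here are hypothetical.

```python
# A sketch of "fail secure": any error during policy evaluation defaults to
# denial, never to access. The policy functions are hypothetical examples.

def is_authorized(user: str, resource: str, policy) -> bool:
    try:
        return bool(policy(user, resource))
    except Exception:
        # Fail secure: a broken check denies access instead of granting it.
        return False

def broken_policy(user, resource):
    raise RuntimeError("policy database unreachable")

print(is_authorized("alice", "/etc/shadow", broken_policy))        # False
print(is_authorized("alice", "/home/alice", lambda u, r: True))    # True
```

The insecure alternative, returning True on error so users are not inconvenienced, is exactly the "fail insecure" behavior the principle warns against.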
In addition, security should not be an all-or-nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found, the "window of vulnerability" is kept as short as possible.
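The append-only property can be strengthened by making the trail tamper-evident: one common construction chains each entry to a hash of the previous one, so altering or deleting an earlier record breaks every subsequent link. The sketch below illustrates the hash-chain idea; the `AuditLog` class is an assumption for illustration, not a hardened implementation.

```python
import hashlib

# A sketch of a tamper-evident audit trail: each entry stores a hash chained
# to the previous entry, so rewriting history breaks the chain.

class AuditLog:
    def __init__(self):
        self.entries = []              # list of (message, chained_hash) pairs
        self._last_hash = b"\x00" * 32

    def append(self, message: str) -> None:
        h = hashlib.sha256(self._last_hash + message.encode()).digest()
        self.entries.append((message, h))
        self._last_hash = h

    def verify(self) -> bool:
        prev = b"\x00" * 32
        for message, h in self.entries:
            if hashlib.sha256(prev + message.encode()).digest() != h:
                return False           # chain broken: tampering detected
            prev = h
        return True

log = AuditLog()
log.append("login: alice")
log.append("read: /etc/passwd")
print(log.verify())                                        # True: intact
log.entries[0] = ("login: mallory", log.entries[0][1])     # rewrite history
print(log.verify())                                        # False: detected
```

Combined with remote, append-only storage, this lets investigators trust the early portion of a trail even after a machine is compromised.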
Early history of security by design
The early Multics operating system was notable for its early emphasis on computer security by design, and Multics was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics' security was broken, not once, but repeatedly. The strategy was known as 'penetrate and test' and has become widely known as a non-terminating process that fails to produce computer security. This led to further work on computer security that prefigured modern security engineering techniques producing closed form processes that terminate.