Talk:Computer security

OK, I've taken another tack. I've moved all the computer insecurity material into its own article, computer insecurity, and I now propose that the security (computers) material should be merged into the remaining Computer security article. The Anome

True security can only come from within. (tongue in cheek)

Some random observations:

I don't know how capabilities apply to networking resources, so I don't know what effect they'd have on "external attacks". My guess is that once systems are internally secure, external security reduces to the allocation of resources to completely untrusted entities. For example, flooding attacks would be much more expensive if an application only serviced one request per IP at a time.
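A rough sketch of that last idea in Python (the threaded-server shape and all names are my own assumptions, not anything from the discussion): the server tracks which IPs currently have a request in flight and refuses a second one, so a single host can't flood it on its own.

    import socket
    import threading

    # Hypothetical sketch: allow at most one in-flight request per client IP,
    # so flooding from a single address buys the attacker nothing extra.
    in_flight = set()
    lock = threading.Lock()

    def handle(conn, addr):
        ip = addr[0]
        with lock:
            if ip in in_flight:
                conn.close()           # this IP already has a request being served
                return
            in_flight.add(ip)
        try:
            conn.recv(4096)            # read the single permitted request
            conn.sendall(b"OK\n")      # do the (trivial) work and reply
        finally:
            with lock:
                in_flight.discard(ip)
            conn.close()

    def serve(port=8080):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(64)
        while True:
            conn, addr = srv.accept()
            threading.Thread(target=handle, args=(conn, addr), daemon=True).start()

An attacker then needs as many distinct addresses as the concurrency they want to consume, which is exactly the "much more expensive" point above.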

Diverting a website in the routing tables would still be possible as an attack, but that's a fundamental problem in the protocol. I don't know what the solution to that is, but I've never heard of anyone looking at the problem (though I'm sure someone has). -- Ark

The NSA has released a security-enhanced version of the Linux operating system which contains support for mandatory access controls based on the principle of least privilege. See Security-Enhanced Linux for more details.

I find it difficult to understand how you can apply the principle of least privilege in a system based on ACLs. I tried to move this to computer insecurity but I couldn't find a good place for it.
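To make the contrast concrete, here is a toy object-capability sketch in Python (entirely my own illustration; it is not how SELinux, ACLs, or Unix actually work): a capability is an unforgeable reference that carries exactly the rights it was created with, so a task can be handed the least privilege it needs rather than relying on whatever its ambient identity would pass in an ACL check.

    # Toy illustration only: a capability as an object whose methods are the
    # only way to touch the underlying resource, carrying exactly the rights
    # it was granted at creation time.
    class FileCapability:
        def __init__(self, path, can_read=False, can_write=False):
            self._path = path
            self._can_read = can_read
            self._can_write = can_write

        def read(self):
            if not self._can_read:
                raise PermissionError("this capability does not grant read")
            with open(self._path, "rb") as f:
                return f.read()

        def write(self, data):
            if not self._can_write:
                raise PermissionError("this capability does not grant write")
            with open(self._path, "wb") as f:
                f.write(data)

        def read_only(self):
            # Attenuation: derive a strictly weaker capability to pass on.
            return FileCapability(self._path, can_read=self._can_read)

    def backup_task(cap):
        # The task holds only what it was handed: it can read this one file,
        # and has no ambient authority to write it or to open anything else.
        return cap.read()

Least privilege then falls out of how capabilities are handed around, instead of being something an administrator has to approximate with ever finer-grained ACL entries.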

I'm sympathetic to adding Windows but it's inaccurate to call it a fundamentally flawed security design; it has no security design! I heard some of these things were changed in NT but if I consider Unix fundamentally flawed .... -- Ark

I don't understand why people seem to think that a security-"enhanced" Linux automatically qualifies as a secure computing platform. Linux has had capabilities in the kernel for a long, long time and that's never made it secure. So now it seems like someone's aggressively applied ACLs (not even capabilities!) to the (static!) set of daemons and system utilities. Wow. Big fucking deal. The system can't be crashed anymore, but your data is just as vulnerable to malicious access, corruption and destruction as ever. Now there's security for you. Give me a fucking break; I don't give a shit if Linux crashes, I'm a user, not a corporation!

Note that I've only briefly looked into the matter, but that's all I needed to do, since it's impossible to provide capabilities to users in a Unix architecture without radically redesigning the entire file system and its entire relationship with the kernel (you have to make the system orthogonally persistent to have widespread capabilities -- a conclusion I came to on my own and which the EROS developers confirmed). -- Ark


Whilst I do not have the in-depth knowledge to argue the relative merits of capabilities vs. ACLs (or, for that matter, the Unix security model), I do know that such debates are a vanishingly small part of computer security. This article is far too narrowly focussed, and when rewritten should give this technical issue the relative priority it deserves. --Robert Merkel

Which is the majority of the article. Of course, that depends on your defining "the relative priority it deserves" as "what most matters to security" instead of "what's the hottest topic among researchers". What other subjects would you rather the article talk about?

Cryptography? That's certainly a hot topic of research. The only problem is that there's a grand total of three places where a typical computer would ever use it: authentication of users, data storage, and Internet tunneling. For most people, only one of the three activities is relevant.

Or perhaps proofs of correctness? Another hot topic of research. And completely useless to us peons!

The only other issue I can think of is spoofing. And spoofing is only an issue in relation to either cryptography or capabilities, otherwise it's a non-issue to a systems designer.

Capabilities are the best and only way to improve the security of all computers. They're the most fundamental security measure you can have and they're also the only thing that will be accepted by most users (cryptography won't).

Perhaps what you're looking for when you refer to "computer security" is the computer insecurity industry. -- Ark

Nope, I've come back to this, and the current article is still hopelessly misleading and presents *one* aspect of computer security as the One True Way. Capabilities have *nothing* to do with physical security, network eavesdropping (which is getting easier and easier with the profusion of wireless networks), and so on, and have nothing to do with the processes sysadmins actually go through to make current systems which don't use them as secure as they can. --Robert Merkel

Apparently you don't quite understand why there are two separate articles if you think that "making current systems which don't use them as secure as they can be" has anything to do with this one. It doesn't; it belongs in computer insecurity.

There are two fundamental concepts in the design of secure computing platforms. One is cryptography. The other, whether you like it or not, is capabilities. Crypto is fundamental to the storage of information in an insecure medium (or over an insecure channel, which amounts to the same thing). Capabilities are fundamental to access security.

Most situations can be decomposed into those two concepts. For example, a wireless network is a resource, access to which should require a capability. OTOH, the airwaves (or copper wires) are an insecure medium so any capabilities transmitted over these channels must be encrypted. The same division applies to authentication, which is really the problem of transmitting a capability over an insecure open terminal. The solution is to use encrypted capabilities. Passwords and tokens are just particular (not very good) forms of capabilities.
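As a concrete (and entirely hypothetical) sketch of what an "encrypted capability" crossing an insecure channel might look like, here is a Python toy that issues a capability for one resource as a token made tamper-evident with an HMAC; a real design would also encrypt the token, give it an expiry, and bind it to the holder.

    import base64
    import hashlib
    import hmac
    import json

    # Hypothetical sketch: a capability for a network resource, expressed as a
    # token whose HMAC makes it unforgeable without the issuer's key, so it can
    # be sent over airwaves or copper without anyone minting their own.
    ISSUER_KEY = b"server-side secret, never transmitted"   # assumed known only to the issuer

    def issue_capability(resource, rights):
        body = json.dumps({"resource": resource, "rights": sorted(rights)}).encode()
        tag = hmac.new(ISSUER_KEY, body, hashlib.sha256).digest()
        return (base64.urlsafe_b64encode(body).decode() + "." +
                base64.urlsafe_b64encode(tag).decode())

    def check_capability(token, resource, right):
        body_b64, tag_b64 = token.split(".")
        body = base64.urlsafe_b64decode(body_b64)
        tag = base64.urlsafe_b64decode(tag_b64)
        expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False                      # forged or corrupted token
        claim = json.loads(body)
        return claim["resource"] == resource and right in claim["rights"]

    # The issuer grants exactly one right on one resource; a password, by
    # contrast, is a capability that grants everything the account can do.
    token = issue_capability("wlan0", ["associate"])
    assert check_capability(token, "wlan0", "associate")
    assert not check_capability(token, "wlan0", "configure")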

Almost always, problems arise only when you can't afford to use these systems. Spoofing is when a process fakes the identity of the OS or some other trusted process. This is only a problem because humans can't do cryptography, so processes can't authenticate themselves to human users. So instead of cryptographically secure authentication, the OS is forced to mediate the representation of processes so that their identity is clear to humans. This is a data representation problem, and given that it's limited by human psychology, it's probably intractable.

The insecurity of the internet protocols (the routing tables and such) can be understood as the non-application of cryptography and capabilities. Whether this is an accident of history, or is unavoidable (e.g., too costly), is a separate question.

Issues like what to do when a web of trust gets too large only become relevant when you have a web of trust to begin with. And you can only build one using caps and crypto.

Then there are the denial of service, starvation and deadlock problems. Denial of service is just a special case of starvation, and so is deadlock. But starvation isn't a security issue at all; it's a politico-economic issue. If someone wishes to starve every other process by buying access to a crucial resource forever, that shows a serious defect in economic policy, not necessarily in the security mechanisms.

The only independent issue I can think of is communication over covert channels. For example, when two processes communicate with each other using CPU utilization or page fault frequency. If there is a theory behind identifying and blocking covert channels, I don't know it. Every resource available to users provides a covert channel for communication. Blocking them is an esoteric art form and largely irrelevant since there are much bigger fish to fry.
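Purely to illustrate the idea (a made-up, noisy toy in Python, not a technique from any real system): the two threads in the sketch below share only the CPU, yet can pass bits by modulating contention for it.

    import statistics
    import threading
    import time

    # Toy covert channel over CPU contention: the sender encodes each bit by
    # either busy-looping (1) or sleeping (0) for one time slot; the receiver
    # times a fixed workload in each slot and reads a slow slot as a 1. Every
    # constant here is a guess; any shared, measurable resource would do.
    SLOT = 0.2  # seconds per transmitted bit

    def send(bits):
        for bit in bits:
            deadline = time.monotonic() + SLOT
            if bit:
                while time.monotonic() < deadline:
                    pass              # contend for the CPU during this slot
            else:
                time.sleep(SLOT)      # leave the CPU idle during this slot

    def receive(nbits):
        timings = []
        for _ in range(nbits):
            start = time.monotonic()
            x = 0
            for i in range(300_000):  # fixed workload; slower under contention
                x += i * i
            elapsed = time.monotonic() - start
            timings.append(elapsed)
            time.sleep(max(0.0, SLOT - elapsed))   # wait out the rest of the slot
        cutoff = statistics.median(timings)
        return [1 if t > cutoff else 0 for t in timings]

    message = [1, 0, 1, 1, 0, 0, 1, 0]
    threading.Thread(target=send, args=(message,), daemon=True).start()
    print(receive(len(message)))      # noisy, but tends to track the sent bits

Blocking this particular channel just pushes the sender to the next shared resource, which is why eliminating them is an esoteric art form.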

I'm not against presenting issues other than caps and crypto, but these issues are exotic. They don't seem to admit of any theory, and so are difficult to incorporate systematically into a design. -- Ark


