
Re: khillo post# 86759

Thursday, July 07, 2005 6:18:39 PM
This is good... Thanks khillo. Sorry I snipped, but copyright and all.
-R

Trusted Computing
In the past few years, increasing volumes of malicious
software, or malware (such as Trojan horses, computer
viruses, worms, and combinations thereof), and corresponding
attacks have emerged. The situation is bad
and likely to get worse.
<snip>
In this article, we argue that trusted computing
has its merits but that this technology is unlikely to
be a complete remedy for PC security problems.
Trusted computing basics
The computer industry has accommodated the idea of
trusted computing in various ways. In 1999, Intel, Microsoft,
IBM, Hewlett-Packard, and several other companies
formed the Trusted Computing Group, or TCG (formerly
known as the Trusted Computing Platform Alliance
[TCPA]; www.trustedcomputinggroup.org), to work on
creating a new computing platform for the next century
that provides improved trust in the PC platform. The
TCG published the
Trusted Platform Module (TPM) specification, currently
in version 1.2, and a corresponding protection
profile (PP) for the Common Criteria (CC), which
represents efforts to develop formal criteria for evaluating
its security.5 Based on these specifications, Microsoft
announced in 2002 that it would be incorporating
its TPM implementation, preliminarily named
Palladium, into future versions of its Windows OS.
More recently, Microsoft has started to promote Palladium
under the newly coined title Next-Generation Secure
Computing Base (NGSCB).
<snip>
Yet, it’s
not clear (and probably too early to tell) to what extent
Microsoft and other manufacturers will try to control systems’
hardware and software. In either case, implementing
trusted computing requires a secure and reliable
bootstrap architecture, as other literature has proposed.7
The boot-time process
According to TCG specifications, trusted computing requires
a TPM that acts as a monitoring and reporting component.
(The TPM is sometimes also referred to as the
"Fritz chip," in honor of Senator Ernest "Fritz" Hollings
[D-S.C.], who's trying to make trusted computing
mandatory in all consumer electronics.8) In a first phase of
deployment, the TPM will be a special chip embedded in
a smartcard or dongle that’s soldered to the motherboard.
In a second phase, the TPM will be integrated in the main
processor, offering additional security (because data
shouldn’t be transferred on buses between the TPM and
the CPU). On booting up, the TPM takes charge, checking
that the boot ROM is as expected, then loading and
executing it, and, finally, verifying the system’s state. It then
checks the first part of the operating system, loads and executes
it, and again verifies the system’s state. This procedure
repeats for all relevant software modules that are loaded and
made available to the system at boot time, thereby
steadily expanding the trust boundary (that is, known and
verified hardware and software).
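The measure-then-load chain described above can be sketched in a few lines. This is a simplified model, not actual TPM firmware: a real PCR lives inside the chip and the stages are real firmware and OS images, while here both are illustrative stand-ins.

```python
import hashlib

def extend_pcr(pcr: bytes, component: bytes) -> bytes:
    """TPM-style extend: the new PCR value is the hash of the
    old value concatenated with the component's measurement."""
    measurement = hashlib.sha1(component).digest()
    return hashlib.sha1(pcr + measurement).digest()

# Hypothetical boot chain; each stage is measured before control
# is handed to it (the actual execution is omitted in this sketch).
boot_chain = [b"boot ROM image", b"OS loader", b"OS kernel", b"driver set"]

pcr = bytes(20)  # PCR registers start out as all zeros
for stage in boot_chain:
    pcr = extend_pcr(pcr, stage)

print(pcr.hex())
```

Because each extend hashes in the previous value, the final PCR depends on both the contents and the order of every stage; a single modified or reordered component yields a different final value, which is what lets a verifier detect tampering.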
In order to expand the trust boundary, the TCG-enabled
system maintains a list of approved hardware and
software components. For each of them, the system
checks whether it’s on the approved list, whether it’s digitally
signed (where applicable), and that its serial number
hasn’t been revoked.
<snip>

There is another reason hardware and software manufacturers
invest time and money developing trusted computing:
digital rights management (DRM). On a trusted
computing platform, the applications and files that users
download, browse, or work with can’t be tampered with.
Consequently, such platforms will make it considerably
harder to run software, download DVDs, or listen to MP3
music files without having properly licensed them. This
fact has given rise to an ongoing controversy on the effects
of DRM on the free-market economy and civil rights.
Clearly, a software-controlled or software-closed system is
a prerequisite to implementing DRM—in the past, all
other approaches failed miserably, including such examples
as copy protection schemes for software, DVDs, and
e-books. If we accept that intellectual property is a good
that deserves legal protection, trusted computing might
provide a technical solution. In this article, however, we
focus on the security issues that trusted computing might
tackle; we’ll leave aside the DRM discussion, which is examined
in more detail elsewhere.8,9
Can trusted computing solve security problems?
<snip>
In light of these facts, you might wonder whether
trusted computing can contribute to solving the PC’s
security problems. By design, trusted computing
• can control and selectively execute software on a computer
system—that is, it provides the means to authenticate
and authorize software and verify its authenticity
and integrity before it’s executed; but
• can’t guarantee that software executed on a computer
system is free of programming errors (vulnerabilities) or
malicious pieces of software (Trojan horses) that could
be exploited.
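The distinction drawn in the list above can be made concrete with a small sketch (the digest and the "program" are invented for illustration): an integrity check only confirms that a binary is byte-for-byte the one that was approved, so a bug inside that approved binary passes the check unchanged.

```python
import hashlib

# A hypothetical "approved" program that contains a classic bug:
# it copies input into a fixed-size buffer with no length check.
buggy_source = b"""
void copy(char *dst, const char *src) { strcpy(dst, src); } /* no bounds check */
"""

# The digest recorded when the program was placed on the approved list.
approved_digest = hashlib.sha256(buggy_source).hexdigest()

# Load-time integrity verification: the program passes, because it
# is exactly the program that was approved -- bug and all.
assert hashlib.sha256(buggy_source).hexdigest() == approved_digest
print("integrity check passed; the buffer overflow ships anyway")
```

This is the crux of the article's argument: authentication and integrity verification operate on identity, not on correctness.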
In fact, there are no convincing reasons why we would
expect to see substantially fewer programming errors on
TCG-enabled computing systems. Thus, if executing
nonauthenticated and nonauthorized software poses the
main risk to security, there’s a good chance that trusted
computing could resolve a great deal of it. If, however, a
PC’s security is mainly endangered by programming errors
and corresponding exploits, notably in the operating
system or ubiquitous application software, trusted computing
is much less likely to be effective at fighting it.
Real-world examples

Let’s look at some recent security-related incidents on
the Internet.
<snip>
In all of these examples (except for the
Sobig.F worm), vulnerabilities were exploited in software
that’s likely to exist and run in any trusted computing
environment. Finding efficient ways of propagation
that would elude detection by the operating system is left
to the programmer’s imagination. Trusted computing
can protect against manual execution of malware, such as
by opening a binary email attachment, or against malicious
code, which must register with the operating system.
It is absolutely powerless, however, if the malware
exploits vulnerabilities, flaws, and bugs in legitimate
software for its own purposes.
18 IEEE SECURITY & PRIVACY ■ MARCH/APRIL 2005
<snip>
We all remember Matt Blaze’s attack against
the Clipper chip (where, in a nutshell, the authentication
code field was too short to protect against an exhaustive
key search),14 or Dan Brumleve’s Brown Orifice demonstration
tool (which exploited a Java security hole in
Netscape’s browser that turned a PC into a server on the
Internet; www.cert.org/advisories/CA-2000-15.html).
Both examples showed that even security technologies
can be designed and implemented with flaws and bugs.
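Blaze's observation about the Clipper chip generalizes: a 16-bit authentication field falls to exhaustive search after on the order of 2**16 attempts on average. The following toy sketch illustrates the scale of that search; the checksum function and messages are invented stand-ins, not Clipper's actual cipher or LEAF format.

```python
import hashlib
import itertools

def checksum16(data: bytes) -> int:
    """A stand-in 16-bit authentication field: the first two
    bytes of a SHA-256 digest, interpreted as an integer."""
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

target = checksum16(b"legitimate message")

# Forgery by exhaustive search: try candidates until the 16-bit
# field matches; roughly 2**16 tries are expected on average.
for i in itertools.count():
    candidate = b"forged-" + str(i).encode()
    if checksum16(candidate) == target:
        break
print(f"match after {i + 1} tries")
```

A 16-bit field offers only 65,536 possibilities, a workload of well under a second on commodity hardware, which is why such short authentication fields provide no real protection against a determined attacker.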
Trusted computing could indeed solve some of the
PC’s security problems, but we’re still far from a radical
remedy—and the additional security will be bought
dearly with a dramatic loss in PCs’ flexibility and versatility.
This will make life harder not only for the user, but for
small software manufacturers and open-source software
developers as well.
Trusted computing in general, and software-closed or
software-controlled systems in particular, should target
specific market segments with stringent security and
reliability requirements and needs. Clearly, many vulnerabilities
will still exist and things will go wrong, but a
computer system that implements trusted computing is
certainly more secure—or can at least be secured more
easily—than one that doesn’t. The level of security, however,
strongly depends on the details of design and implementation
(which are not clear yet for almost all trusted
computing manufacturers). This is particularly true if
considering that large portions of the software running
on these systems will be written in the C programming
language, which is certainly not well designed from a security
viewpoint. Furthermore, experiences with multilevel
security (MLS) and MLS-based computer systems,
with regard to trusted computing’s practicability and
security, haven’t been very promising.
The intrinsic battle between functionality and security
is one of the fundamental issues computer security
professionals must deal with, and this situation is expected
to linger for quite some time. It will be interesting
to see to what extent and for what markets hardware and
software manufacturers will implement trusted computing
and whether they will be successful. We can at least
hope that TCG-enabled computer systems will make it
more difficult for hardware and software manufacturers
to avoid product liability, improving consumers' chances of
winning court cases for losses caused by defective products.
This might make manufacturers try that much
harder to provide quality hardware and software, resulting
in a previously unintended side effect of trusted computing
initiatives.
References
1. S. Staniford, V. Paxson, and N. Weaver, “How to Own
the Internet in Your Spare Time,” Proc. 11th Usenix Security
Symp. (Security 02), Usenix Assoc., 2002, pp.
149–167; www.icir.org/vern/papers/cdc-usenix-sec02/.
2. D. Moore et al., “Inside the Slammer Worm,” IEEE Security
& Privacy, vol. 1, no. 4, 2003, pp. 33–39.
3. F. Cohen, “Computer Viruses—Theory and Experiments,”
Computers & Security, vol. 6, no. 1, 1987, pp. 22–35.
4. R. Shirey, Internet Security Glossary, RFC 2828, May
2000, www.faqs.org/rfcs/rfc2828.html.
5. Trusted Computing Platform Alliance, Trusted Platform
Module Protection Profile, tech. report, version 1.9.7, July
2002; www.commoncriteriaportal.org/public/files/ppfiles/PP_TCPATPMPP_V1.9.7.pdf.
6. P. England et al., “A Trusted Open Platform,” Computer,
vol. 36, no. 7, 2003, pp. 55–62.
7. W.A. Arbaugh, D.J. Farber, and J.M. Smith, “A Secure
and Reliable Bootstrap Architecture,” Proc. IEEE
Symp. Security and Privacy, IEEE CS Press, 1997, pp.
65–71.
8. R. Anderson, Trusted Computing Frequently Asked Questions—
TC/TCG/LaGrande/NGSCB/Longhorn/Palladium/TCPA,
v. 1.1, Aug. 2003; www.cl.cam.ac.uk/~rja14/tcpa-faq.html.
9. R. Anderson, “Cryptography and Competition Policy—
Issues with ‘Trusted Computing,’” Proc. 22nd ACM Ann.
Symp. Principles of Distributed Computing, ACM Press, 2003,
pp. 3–10; www.ftp.cl.cam.ac.uk/ftp/users/rja14/tcpa.pdf.
10. R. Oppliger, Security Technologies for the World Wide Web,
2nd ed., Artech House, 2003.
11. H.H. Thompson, “Why Security Testing Is Hard,” IEEE
Security & Privacy, vol. 1, no. 4, 2003, pp. 83–86.
12. K. Thompson, “Reflections on Trusting Trust,” Comm.
ACM, vol. 27, no. 8, 1984, pp. 761–763.
13. J. Nazario, Defense and Detection Strategies against Internet
Worms, Artech House, 2003.
14. M. Blaze, “Protocol Failure in the Escrowed Encryption
Standard,” Proc. 2nd ACM Conf. Computer and Comm.
Security, ACM Press, 1994, pp. 59–67.
Rolf Oppliger is a scientific employee at Swiss Federal Strategy
Unit for Information Technology (FSUIT). He also leads eSECURITY
Technologies, teaches at the University of Zurich, and is
the editor of Artech House’s computer security book series.
Oppliger has an MSc and a PhD in computer science from the
University of Berne, and received the venia legendi from the University
of Zurich. He’s a member of the IEEE Computer Society,
the ACM, the International Association for Cryptologic Research,
and the International Federation for Information Processing.
Contact him at rolf.oppliger@isb.admin.ch.
Ruedi Rytz is a scientific employee at FSUIT. His research interests
are computer and network security, information assurance,
and critical information infrastructure protection. Rytz has an
MSc and a PhD in physical chemistry from the University of
Berne. Contact him at ruedi.rytz@isb.admin.ch.
