FDE
Gordon Hughes will introduce a new drive feature that can help control access to creative content stored on hard drives for distribution or archiving. In-drive encryption has recently been announced, allowing instant secure erasure of data by commanding a drive to securely erase its encryption key.
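The mechanics behind that one-command erase are simple to illustrate. Below is a minimal Python sketch of the crypto-erase idea (illustrative only: a real self-encrypting drive does the equivalent in hardware with AES, and Fernet stands in here as a generic symmetric cipher):

```python
# Crypto-erase, sketched in software: all data is written encrypted under a
# media key held inside the drive; destroying that single key makes every
# sector unreadable at once, with no need to overwrite the platters.
from cryptography.fernet import Fernet

media_key = Fernet.generate_key()   # normally generated and kept in-drive
cipher = Fernet(media_key)

sector = cipher.encrypt(b"creative content written to disk")

# "Secure erase" = destroy the key, not the data.
del media_key, cipher
# `sector` still exists, but is now indistinguishable from random bytes.
```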
http://www.media-tech.net/fileadmin/templates/resources/sc06/mtc06_keynote_day1_hughes.pdf
Gordon Hughes received his BS in Physics and his PhD in Electrical Engineering from Caltech. After a research career in disk drive technology at Xerox, he formed an R&D group at Seagate Technology to commercialize new technology, receiving an IEEE Fellow award for his contributions to making thin-film cobalt today's standard disk media.
WinHec 2006: BitLocker/TPM PowerPoints
BitLocker Drive Encryption Hardware Enhanced Data Protection
http://download.microsoft.com/download/5/b/9/5b97017b-e28a-4bae-ba48-174cf47d23cd/CPA064_WH06.ppt#1
Enterprise and Server use of BitLocker Drive Encryption
http://download.microsoft.com/download/5/b/9/5b97017b-e28a-4bae-ba48-174cf47d23cd/CPA027_WH06.ppt#1
Computer Security - The Next 50 Years
By Alan Cox, Fellow, Red Hat
http://www.itconversations.com/shows/detail869.html
Security and validation are critical issues in computing, and the next fifty years will be harder than the last. A number of proven programming techniques and design approaches are already helping to harden our modern systems, but each must be carefully balanced against usability to be effective. In this talk, Alan Cox, a Fellow at Red Hat, explores the future of what may be the biggest threat facing software engineers: the unverified user.
OpenTC – an open approach to trusted virtualization
http://www.indicare.org/tiki-print_article.php?articleId=183
By Dirk Kuhlmann, Hewlett Packard Laboratories, Bristol, UK, on 01/03/06
Abstract: Due to the increasing complexity of IT systems, the mutual attestation of platform characteristics will become a necessity for proprietary as well as Open Source based systems. Trusted Computing platforms offer building blocks to achieve this goal. Their combination with non-proprietary virtualization technology can help to avoid the much-feared negative side effects of Trusted Computing. It will permit running locked-down execution environments in parallel with unconstrained ones, making it possible to support tight security requirements while maintaining user choice. An open approach to Trusted Computing is a prerequisite for future community-based efforts to describe and attest expected properties of software components in a trustworthy manner.
Introduction
The advent of "Trusted Computing" (TC) technology as specified by the Trusted Computing Group (cf. sources) has so far not met with much enthusiasm from the Free/Open Source Software (FOSS) and Linux communities. Despite this, FOSS-based systems have become the preferred vehicle for much of the academic and industrial research on Trusted Computing. In parallel, a lively public discussion between proponents and critics of TC has dealt with the question of whether the technology and concepts put forward by the TCG are compatible, complementary, or potentially detrimental to the prospects of open software development models and products.
Common misconceptions of TC technology are that it implies or favours closed and proprietary systems, reduces the option of using arbitrary software, or remotely controls users' computers. It has long been argued, though, that these and similar undesirable effects are by no means unavoidable, not least because the underlying technology is passive and neutral with regard to specific policies. The actual features displayed by TC-equipped platforms will almost exclusively be determined by the design of the operating systems and software running on top of them. With appropriate design, implementation and validation of trusted software components, and by using contractual models of negotiating policies, negative effects can be circumvented while improving the system's trust and security properties. This is the intellectual starting point of the EU-supported, collaborative OpenTC research and development project (project no. 027635; cf. sources) that started in November 2005.
Combining FOSS and TC technology
OpenTC aims to demonstrate that a combination of TC technology and FOSS has several inherent advantages that are hard to match with any proprietary approach. Enhanced security at the technical level tends to come at the expense of constraining user options, and the discursive nature of FOSS development could help to find the right balance here. Because trusted software components are protected from analysis during runtime, it is highly desirable that their design be documented and that the source code be available to allow for inspection and validation beforehand. Finally, any attempt to introduce TC technology is likely to fail without the buy-in of its intended users, and openness could prove to be the most important factor for user acceptance.
OpenTC sets out to support cooperative security models that can be based on platform properties without having to assume the identifiability, personal accountability and reputation of platform owners or users. For reasons of privacy and efficiency, these models could be preferable to those assuming adversarial behaviour from the outset. A policy model based on platform properties, however, requires reliable audit facilities and trustworthy reporting of platform states to both local users and remote peers. The security architecture put forward by the TCG supplies these functions, including a stepwise verification of platform components with an integral, hardware-assisted auditing facility at its root. In OpenTC, this will be used as a basic building block.
Trusted virtualization and protected execution environments
The goal of the OpenTC architecture is to provide execution environments for whole instances of guest operating systems that communicate with the outside world through reference monitors guarding their information flow properties. The monitors kick into action as soon as an OS instance is started. Typically, the policy a monitor enforces should be immutable during the lifetime of the instance: it can neither be relaxed through actions initiated by the hosted OS nor overridden by system management facilities. In the simplest case, this architecture will make it possible to run two independent OS instances with different grades of security lock-down on an end user system. Such a model, with an unconstrained "green" environment for web browsing, software download and installation, and a tightly guarded "red" side for tax records, banking communications, and the like, has recently been discussed by Carl Landwehr (2005). More complex configurations are possible and frequently needed in server scenarios.
OpenTC borrows from research on trusted operating systems that goes back as far as 30 years. The underlying principles – isolation and information flow control – have been implemented by several security-hardened versions of Linux, and it has been demonstrated that such systems can be integrated with Trusted Computing technology (see e.g. Maruyama et al. 2003). However, the size and complexity of these implementations is a serious challenge for any attempt to seriously evaluate their actual security properties. Small developer communities and the difficulty of understanding and managing configurations and policies continue to be roadblocks for deployment of trusted platforms and systems on a wider scale.
Compared to full-blown operating systems, the tasks of virtualization layers tend to be simpler. This should allow OpenTC to reduce the size of the Trusted Computing Base. The architecture separates management and driver environments from the core system and hosted OS instances. They can either be hosted under stripped-down Linux instances, or they can run as generic tasks of the virtualization engines. The policy enforced by the monitors is separated from decision and enforcement mechanisms. It is human readable and can therefore be subjected to prior negotiations and explicit agreement.
OpenTC chose (para-)virtualization as the basis for its trusted system architecture, which makes it possible to run standard OS distributions and applications side by side with others that are locked down for specific purposes. This preempts a major concern raised with regard to Trusted Computing, namely that TC excludes components not vetted by third parties. The OpenTC architecture limits constraints to components marked as security-critical, while unconstrained components can run in parallel.
OpenTC builds on two virtualization engines: Xen and L4. Both are available under FOSS licenses and backed by active developer and user communities. Currently, it is necessary to compile special versions of Linux that co-operate with the underlying virtualization layer. However, the development teams will improve their architectures to support unmodified, out-of-the-box distributions as well. This will be simplified by the hardware support for virtualization offered by AMD's and Intel's new CPU generations. Prototype results have shown that this hardware support could also make it possible to host unmodified operating systems other than Linux (see e.g. Shankland 2005).
From trusted to trustworthy computing
TCG hardware provides basic mechanisms to record and report the startup and runtime state of a platform in an extremely compressed, non-forgeable manner. It makes it possible to create a digitally signed list of values that correspond to elements of the platform's Trusted Computing Base. In theory, end users could personally validate each of these components, but this is not a practical option. End users may have to rely on other parties to evaluate and attest that a particular set of values corresponds to a system configuration with a desired behaviour. In this case, their reason to trust will ultimately stem from the social trust they put in statements from specific brands, certified public bodies, or peer groups.
A much-discussed dilemma arises if trusted components become mandatory prerequisites for consuming certain services. Even if such components appear suspicious to the end user, they might still be required by a provider. This problem is particularly pronounced if the components come as binaries only and do not allow for analysis. The recent history of DRM technology has shown that trojans can easily be inserted under the guise of legitimate policy enforcement modules. Clearly, a mechanism that enforces DRM on a specific piece of content acquired by a customer must not assume an implicit permission to sift through the customer's hard disk and report back on other content.
This highlights an important requirement for components that deserve the label "trusted": at least in principle, it should be possible to investigate their actual trustworthiness. A clearly stated description of function and expected behaviour should be an integral part of their distribution, and it should be possible to establish that they do not display behaviour other than that stated in their description – at compile time, runtime, or both. A socially acceptable approach to Trusted Computing will require transparency and open processes. In this respect, a FOSS based approach looks promising, as it might turn openness into a crucial competitive advantage.
The TCG specification is silent on the procedures or credentials required before a software component can be called "trusted". OpenTC works on the assumption that defined methodologies, tools, and processes to describe the goals and expected behaviour of software components are needed. This way, it becomes possible to check whether their implementation reflects (and is constrained to) their description. Independent replication of tests may be required to arrive at a commonly accepted view of a component's trustworthiness, which in turn requires accessibility of the code, design, test plans and environments for the components under scrutiny.
Trust, risk, and freedom
Most of us have little choice but to trust IT systems where more and more things can go wrong, while our insight into what is actually happening on our machines gets smaller by the day. Users face a situation of having to bear full legal responsibility for actions initiated on or by their machines while lacking the knowledge, tools and support to keep these systems in a state fit for purpose. Due to the growing complexity of our technology, we will increasingly have to rely on technical mechanisms that help us estimate the risk before entering IT-based transactions. Enhanced protection, security and isolation features based on TCG technology will become standard elements of proprietary operating systems and software in due time.
This evolution is largely independent of whether FOSS communities endorse or reject the technology. OpenTC assumes that mutual attestation of platforms' "fitness for purpose" will become necessary for proprietary and FOSS-based systems alike. The absence of comparable protection mechanisms for non-proprietary operating systems and software would immediately create problems for important segments of professional Linux users. In fact, many commercial, public and governmental entities have chosen non-proprietary software for reasons of transparency and security. These organizations tend to be subject to stringent compliance regulations requiring state-of-the-art protection mechanisms. If FOSS-based solutions don't support these mechanisms, the organizations could eventually be forced to replace their non-proprietary components with proprietary ones: a highly undesirable state of affairs that OpenTC might help to avoid.
From this perspective, the current discussion about the next version of the GNU public license raises serious concerns. Some of the suggested changes could affect the possibility of combining Trusted Computing technology with Free Software licensed under GPLv3 (this refers to the GPLv3 draft of 2006-02-07 16:50; cf. sources). Section 3 of this draft concerns Digital Restrictions Management, a term that has been used by Richard Stallman in discussions about Trusted Computing. For example, the current draft excludes "modes of distribution that deny users that run covered works the full exercise of the legal rights granted by this License". It is an open question whether this might apply to elements of a security architecture such as OpenTC.
A Trusted Computing architecture does not constrain the freedom of copying, modifying and sharing works distributed under the GPL. However, it can constrain the option of running modified code as a trusted component, since previously evaluated security properties might have been affected by the modifications. Unless a re-evaluation is performed, the properties of modified versions cannot be derived from the attestation of the original code; security assurances about the original code become invalid.
This is by no means specific to the Trusted Computing approach; it also applies to commercial Linux server distributions with protection profiles evaluated according to the Common Criteria. The source code for the distribution is available, but changing any of the evaluated components results in losing the certificate. Whether or not software is safe, secure, or trustworthy is independent of the question of how it is licensed and distributed. The option to choose between proprietary and FOSS solutions is an important one and should be kept open. This is one of the reasons why several important industrial FOSS providers and contributors participate in OpenTC. The project aims at a practical demonstration that Trusted Computing technology and FOSS can complement each other. This is possible in the context of the current GPLv2. Whether it will be so under a new GPLv3 remains to be seen.
Vpro and McAfee...
http://www.digitimes.com/NewsShow/MailHome.asp?datePublish=2006/4/25&pages=PR&seq=212
"....The aim of Intel VT, on the other hand, is to strengthen PC security by allowing a separate independent hardware-based environments inside a single PC so IT managers can create a dedicated, tamper-resistant service environment – or partition – where particular tasks or activities can run independently, invisible to and isolated from PC users.
...For example, anti-virus software company Symantec announced plans to work with Intel to build security solutions creating an isolated environment outside of the main PC operating system for the purpose of managing security threats. This virtualized environment where the software is installed should be resistant to tampering by any malware that happens to find its way onto the PC, and thus can deliver stronger control and protection in the data infrastructure..."
Legislating Identity
Posted by Eric Norlin
April 18, 2006
"Drivers" are a funny thing. They're those often-ambiguous factors cited by analysts and reporters as they attempt to explain why a technology is catching on. Of course, there are usually deeper underlying factors "driving" a technology adoption, than the official technology "drivers." But, even as we explore those underlying factors, its still helpful to know who's behind the wheel of our technology car with their foot on the gas.
In the world of identity, legislative and industry regulations have become some key drivers (and boy, are there a lot of them). A quick look at this dizzying array might betray the importance of identity in today's world:
The Real ID Act is a de facto national ID card act that was slipped onto the end of a military spending bill. It started as an initiative led by state motor vehicle administrators and quickly grew into a federal mandate for all state driver's licenses. The mandate includes requirements for "biometrics" and machine-readability (e.g., RFID chips). There is (as the link above illustrates) some state-level protest, but mostly because the law mandates that states spend money and doesn't write the check.
Sarbanes-Oxley, Section 404 is the law that grew out of the accounting scandals of the late-'90s bubble. The law (which applies to public companies), and specifically Section 404, mandates that companies control access to sensitive information and be able to conduct an audit of that access. All of that means one thing: identity management systems.
The Gramm-Leach-Bliley Act is the law that seeks "modernization" and privacy protections for the financial services industry. "GLB," as it's commonly known, has been around since 1999 and is seen as a general driver of identity management's privacy benefits.
HSPD-12 and FIPS 201 are the directives ("Homeland Security Presidential Directive") that mandate the security standards for access cards and initiatives across government agencies. The Department of Defense's "Common Access Card" (CAC) project is often cited as one of the largest and most successful of these deployments.
California SB 1386 is the state law that mandates notification of customers in the event of a data breach or leak. It is widely seen as the prototype for a national law, though none has been enacted yet. That said, the California law seems to have enough pull to force many companies to comply.
The FFIEC guidance on authentication in Internet banking sets the guidelines that all financial institutions must adhere to (the FFIEC is the Federal Financial Institutions Examination Council, the interagency body whose members include the FDIC, insurer of your bank accounts up to $100,000). This is the big one for 2006, as it's pushing online banks and brokers to deal with the sticky wicket of consumer strong authentication. The result is the rapid adoption of "risk-based" or "layered" authentication.
That's just the beginning. Did I mention Basel II, HIPAA, or the EU's mandates for privacy? The funny thing is this: all of these mandates, regulations, legislative initiatives and guidance are seeking to "secure" something, or to make a process more secure (for auditing purposes). And in so doing, all of them have to demand identity mechanisms. It's almost as if identity is the precursor to all IT security (he says with tongue firmly in cheek).
http://blogs.zdnet.com/digitalID/?p=14
TC: Towards a global dependability and security framework
Everybody wanting to understand Trusted Computing and where it is headed should read this.
Towards a global dependability and security framework
http://miklos.vazsonyi.com/public/public/temp/nyomtatni.pdf
Version: 18-August-2005 approved by the EC on 02-September-2005
Regulatory compliance
BY TOM ROWLAND
Jose Lopez is a senior analyst in network security at technology consultants Frost & Sullivan.
http://business.timesonline.co.uk/article/0,,26849-2038521,00.html
Companies have to innovate and compete but they cannot afford to cut corners or abuse their market position. For most it inevitably means appointing someone whose job it is to make sure that the regulatory sky never falls in.
Compliance has become a job with an expanding career path ahead of it. But it can be a lonely posting. Managers can feel that the constant emphasis on complying with an increasingly lengthy set of rules risks encouraging more cautious decision-making.
What are the limits of compliance inside a business?
Compliance covers everything the business does, ranging from a field engineer turning up to keep an appointment, to call centre staff, through to a senior manager who works at headquarters.
Senior executives often complain that they struggle to maintain a balance between sticking to all of the rules and keeping the vitality in their operations. How should they begin turning compliance into a science?
First of all they have to formulate a security policy. That means deciding what is important and what needs to be protected in your company, and formulating the policy accordingly.
What should the policy cover?
Everything from data capture to e-mail security and the authentication of users. Work out what needs to be protected and then you can choose the technologies best suited to each purpose.
For example, if you are trying to improve network privacy then perhaps you should make sure that the communication between users is regulated by a virtual private network (VPN).
In Britain a lot of the rules companies need to comply with stem from the Data Protection Act and the Privacy and Electronic Communications Regulations. How do you make sure you are inside the net?
The legislation in Europe is more obscure than it is in the US, or at least less indicative of what good companies need to do to tighten their security. In the US they have legislation that is specific for vertical markets.
For instance, in the health care industry HIPAA governs much of the activity of an enterprise and sets out what it should and should not be doing. So if a firm is operating in the US then it needs to be aware of the whole of the regulatory environment that might affect it.
Are the UK and Europe becoming more like the US environment?
In the UK we have the Data Protection Act but it is true that things are about to change on this side of the Atlantic.
In 2006/7 Basel II comes into force, a European counterpart to Sarbanes-Oxley, the US regulation that makes firms responsible for all of the data they hold on individuals and the use to which it is put, with criminal sanctions for breaches. In theory, in the US company executives can be gaoled if confidential data leaks and they fail to inform the people whose data gets out.
It is easy to exaggerate how draconian all of this is, and in practice executives are not led away in handcuffs every time a telesales clerk breaks the rules, but the impact has still been dramatic. The potential penalties have served to focus attention. We can expect a similar galvanising effect in Europe.
How does an increasing regulatory burden change the way firms have to handle information?
It is hard for many to know the extent of their liability. Often it is best to have an outsider come and audit. Sometimes it is difficult for an individual inside a business to see all of the ramifications of a new piece of legislation.
Do both deal with making provision for things that you hope will never happen?
There are some similarities. If legislation says you have to store data for 10 years, you need to know where it is and to make sure that it does not go astray. The key difference is that in disaster planning there is no legislative framework that the organisation is under an obligation to comply with.
If the US leads in the rigour of its legislative framework governing commercial activities how would you rank the Europeans?
Britain does quite well, but this is a league where right now the Germans are out in front.
Security Module – TrustZone® Software
A framework for interoperable security / Edition 2006
http://www.trusted-logic.com/Flyers/TL_Security_Module.pdf
Crypto man
by Andy Coote
http://www.scmagazine.com/features/index.cfm?fuseaction=FeatureDetails&newsUID=a4b8fe9a-34b9-4a2...
Whitfield Diffie made his name in encryption back in the 1970s, paving the way for modern e-commerce. Andy Coote learns more about his early work and hears his predictions for web services and grid computing
Not many people get their names enshrined in computing parlance. The Turing test, Moore's law and Metcalfe's law come to mind, but the list is small. So Whitfield Diffie, CSO at Sun Microsystems and co-creator of the Diffie-Hellman key exchange method, which arguably underpins worldwide e-commerce, is one of a very select bunch.
Although he first made his name back in the 1970s, he still appears regularly on the international speaking circuit. At a recent event, he was focusing on issues of trust in a networked economy and it was interesting to learn that he still sees cryptography playing a major role in the development of 21st century computing.
The 1970s were a seminal time in the development of cryptography, when the age of secure computing seemed to have arrived, laying the foundations for the explosion in the use of the internet a decade or so later.
The Data Encryption Standard (DES) was proposed in 1975 and soon adopted as the standard method of exchanging messages using "symmetrical" keys, but it demanded that two parties use the same key. The problem of sharing keys securely remained a major obstacle.
With Martin Hellman, then a professor at Stanford University in California, and Ralph Merkle, a doctoral student, Diffie went on to solve the problem with the now widely used Diffie-Hellman key exchange method.
In 1976, Diffie published a paper explaining how authentication and encryption could be achieved using a publicly available key for encryption and a private key, known only to the recipient, for decryption. The theory depended on mathematical "one-way functions" – operations easy to compute but hard to reverse, such as exponentiation modulo a large prime.
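To make the one-way function concrete, here is a toy sketch of the Diffie-Hellman exchange in Python (parameter sizes are deliberately tiny for readability; real deployments use standardized public primes of 2048 bits or more):

```python
# Toy Diffie-Hellman exchange. The modulus p and generator g are public;
# each side keeps only its own exponent secret. Recovering a from g^a mod p
# is the discrete logarithm problem - the "one-way function" of the text.
import secrets

p = 4294967291   # a small public prime (far too small for real use)
g = 5            # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice sends A to Bob in the clear
B = pow(g, b, p)   # Bob sends B to Alice in the clear

# Each side combines its own secret with the other's public value;
# both arrive at the same key without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)
```

Note how the modulus and generator stay public and shared; this is the robustness Diffie returns to later when contrasting his system with RSA moduli.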
The search for a way to turn Diffie's theory to practical use ended in 1977, when Rivest, Shamir and Adleman came up with the first commercial public key cryptosystem, still known as RSA.
In recent years, some people have questioned the originality of their work, so I began by asking Diffie about this. The claim is that James Ellis, Clifford Cocks and Malcolm Williamson – three scientists at GCHQ (a top-secret communications center operated by the British government) – had discovered the concepts of key exchange and public key encryption before 1975, but had been constrained from publication by the U.K. Official Secrets Act.
Diffie believed that to be true when he first met James Ellis in 1982, but he has since changed his mind. "I believe it less now because I've talked to him for hours and hours since then," he says. "I don't understand his paper; it wouldn't convince me of anything. He had the conception of a public key system in the same form that I had, but I never found any solid evidence that he or his colleagues understood the significance."
Despite claims from some, including the National Security Agency (NSA), that the implications were understood, "the papers I've seen declassified support the opposite point of view. When Ellis wrote his history in '87, he tried to suggest that this grew out of thinking about key management problems, but there's no sign of that in the original paper from 1970," says Diffie.
He is rather surprised that in less than 30 years, public key cryptography has "become a mainstay of information security," but concedes that "there's much less cryptography visibly in use than I would like to see."
He feels that PKI could have been much more successful and useful if it had been better supported. He points to the example of the U.S. Department of Defense (DoD) which has issued a million Java cards with integrated PKI to its employees to support his view that "the problem with PKI is a capital development problem. You have to put in a lot of up-front investment."
It is a matter of having ready-made applications, he argues. For example, when the first cell phones came into use, they were successful because you could call all the people who already had phones in their homes.
Something similar "might have been done with cryptography. Suppose that you had had lots of keys distributed in the smartcards, and suppose the minute you got an electronic certificate there had been something that you could do with it that was valuable, that might have capitalized the whole thing. DoD put a lot of up-front investment in for some of its own applications, but nobody was in a position to put a lot of investment in for the world at large."
Diffie thinks that either AT&T or NSA could have done that for the U.S., but the chance passed. "I don't see that either could do it now."
The RSA approach has lasted longer than he expected. "My expectation was not that it would be broken, but that it would be replaced much earlier." He felt that once RSA had become established, "then people with a lot more mathematical sophistication than I had would move into this business and we would have a whole new round of these things. It took longer, and then we had one new thing that really has gotten a lot of attention, which is elliptic curve cryptography, and it only enhances the Diffie-Hellman [approach] rather than changing it."
Diffie was a big fan of the RSA system for the first ten years because it solved the problem he had envisioned. Now he can see that there are problems with the RSA type structure.
"If I send you my modulus and tell you to send me a secret message, there's no test you can do on the modulus that will tell whether it's built of good primes."
He contrasts this with a Diffie-Hellman system, noting: "It's much more open and the keying material is much cheaper. You can standardize on the modulus and on the generator and then those things are public, they can be generated in a publicly known fashion and all that you manufacture in the protocols is random numbers with very simple excluded cases. That makes it much more feasible to have protocols that are robust and that satisfy distrusting parties."
Although necessary, standards have had the effect of limiting the scope for the sort of creativity exhibited in the '70s. Change is now slower and more focused. "If you look at cryptography as a practical matter today, it is standards-dominated and the significance of that is that it's a hard sell for a new system."
He finds it very hard to persuade systems developers that, even if their systems are very good, they are not going to squeeze out something that is now a major international standard for as long as the standard stays satisfactory.
People who think they are going to get somewhere with something that is thought more secure than Advanced Encryption Standard (AES) have an uphill battle. AES, if it is as secure as it appears, is secure enough for any application. So that direction seems closed.
"So you get something that's just as secure and, say, much faster. Well, that might address a very real problem because networks are getting faster – faster than processors are getting faster – but you still have a big uphill battle. You've got to persuade people that it's true. You have some arguing to do to show why you think this is more secure despite using less computation."
As expected, cryptography has a big part to play in Diffie's two major challenges for early 21st century computing. Sun has joined the Trusted Computing Group (TCG) and Diffie can see a key element of the TCG platform, remote attestation, growing rapidly.
Remote attestation uses cryptography to manage and assure the configuration of network systems. It could "prevent users running viruses," he says, but could also be used to "prevent other [legitimate] programs being run."
Even so, he sees advantages in being able to confirm the integrity of a whole network using attestations from the component devices. A whole syntax in XML would be needed to define "what it means for two systems to be identical and what class of programs they should run in their identical ways."
This could lay the foundations for a new approach to computing power. To smooth peaks in demand and make use of idle resources, companies "will be able to go out and hire computing power on the spot," he says.
This "on-demand" world of grid computing and web services, sometimes called adaptive computing, "needs security and the ability to reduce the making of contracts from days and years to minutes and seconds." This, too, will increase the need for public key authentication and encryption as well as for "letters of introduction" in the form of digital certificates.
When I ask if there are any issues burning in the background, as the Public Key issue had during the early '70s, he is modest. "I won't do anything like that again. I essentially took two years off to work on that project. I think about a variety of things that might produce a new breakthrough, but I also do a day job that takes up a good deal of what little thinking power I have."
With his track record, Diffie is likely to offer the IT security industry yet another breakthrough. His inclusion in the Global Council of CSOs, an elite think tank of "influential cybersecurity leaders" and his key position as CSO within Sun Microsystems show that many others share that view.
Finally, a security protocol you can trust!
http://www-128.ibm.com/developerworks/wireless/library/wi-roam35/
TNC spells big changes for enterprise wireless networks
By Larry Loeb (larryloeb@prodigy.net), Principal, pbc enterprises
15 Jun 2005
Trusted Network Connect (TNC) is probably the most trustworthy concept ever to emerge from the Trusted Computing Group. This month's Roaming charges unpacks the TNC specification and explains how it proposes to bring real security to your wireless enterprise networks.
A lesson in (de)liberation
http://www-128.ibm.com/developerworks/library/wi-roam39.html
Does the TCG really want your cell phone? Probably not...
By Larry Loeb (larryloeb@prodigy.net), Principal, pbc enterprises
18 Oct 2005
The Electronic Frontier Foundation is pounding the drums about the Trusted Computing Group's latest attempt to destroy that last bastion of rights and privacy -- the cell phone network. But Larry has three words for them: Sarbanes-Oxley Act.
Offering digital content
Dean Marks - Warner Bros. Entertainment
A bit more about metering requirements...
http://europa.eu.int/information_society/eeurope/2005/all_about/digital_rights_man/doc/drm_workshop_...
Summary of the consultation of the High Level Group final report on Digital Rights Management by the European Commission
Metered content is highly desired!!
http://europa.eu.int/information_society/eeurope/2005/all_about/digital_rights_man/doc/drm_workshop_...
eEurope: Digital Rights Management
http://europa.eu.int/information_society/eeurope/2005/all_about/digital_rights_man/events/index_en.h...
Finding the Common Ground of Identity...
Exploring How OASIS XRI/XDI, TCG, Identity Commons, and Shibboleth Might Work Together
Internet2 Fall Meeting
MACE Dinner - September 29, 2004
Drummond Reed, Cordance, Co-Chair, OASIS XRI & XDI TCs
Geoffrey Strongin, AMD, Co-Chair, OASIS XDI TC
Fen Labalme, Identity Commons
http://alpha-geek.com/Internet2-Fall-MACE-Dinner-v3.ppt
Identity Management...
Definition
http://h71028.www7.hp.com/enterprise/downloads/HP%20Security%20Handbook%20Identity%20Management.pdf
Identity management is the set of processes, tools, and social contracts surrounding the creation, maintenance, and use of digital identities for people, systems, and services. It enables secure access to a set of systems and applications. Its components include data repositories, security, life cycle management, consumables, and management policies. Identity management has strong links to security, trust, and privacy management. It also delivers components of risk management. Traditionally, identity management has been a core component of system security environments. It is used for maintaining account information and controlling access to a system or limited set of applications. Control is usually the primary focus of identity management. For example, an administrator issues accounts to restrict and monitor access to resources. More recently, however, identity management has also become a key enabler of electronic business.
The above is taken from HP Security Handbook, dated May 2005
http://h71028.www7.hp.com/enterprise/downloads/HP%20Security%20Handbook.pdf
http://h71028.www7.hp.com/eNewsletter/cache/251282-0-0-39-121.html
cm: Federation-to-Federation Trust
(Interfederation Interoperability Presentation)
http://www.eapartnership.org/docs/Jun2005/June_2005_Interfederation_Interoperability_Presentation.pp...
OLS: Linux and trusted computing
The term "trusted computing" tends to elicit a suspicious response in the free software community. It has come to be associated with digital restrictions management schemes, locked-down systems, and similar, untrustworthy mechanisms. At the 2005 Ottawa Linux Symposium, Emily Ratliff and Tom Lendacky discussed the state of trusted computing support for Linux and tried to show how this technology can be a good thing to have. Trusted computing does not have to be evil.
At the lowest level, trusted computing is implemented by a small chip called the "trusted platform module" or TPM. The Linux kernel has had driver support for TPM chips since 2.6.12; a couple of chips are supported now, with drivers for others in the works. Many systems - laptops in particular - are currently equipped with TPM chips, so this is a technology which Linux users can play with today.
A TPM provides a number of features to the host system. It includes a protected memory area, and a restricted set of commands which can operate on that area. "Platform configuration registers" (PCRs) are a special sort of hashed accumulator which can be used to track the current hardware and software configuration of the system. The TPM also includes a cryptographic processor with a number of basic functions: a random number generator, SHA hash calculator, etc. And there is some non-volatile RAM for holding keys and such.
A TPM-equipped system requires support in the BIOS. Before the system boots, the BIOS will "measure" the current hardware state, storing the result in a PCR. The boot loader will also be checksummed, with the result going into another PCR. The boot loader is then run; its job is to stash a checksum of the kernel into yet another register before actually booting that kernel. Once the kernel is up, the "trusted software stack" takes charge of talking to the TPM, providing access to its services and keeping an eye on the state of the system. Systems which provide a TPM typically also include the needed BIOS support; this support could also be added by projects like FreeBIOS and OpenBIOS. There are versions of the Grub bootloader which can handle the next step; LILO patches also exist. Once the kernel is booted, the TPM driver takes over, with the user-space being handled by the TrouSerS TSS stack.
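The PCR mechanics behind this measured boot chain are easy to sketch. Here is a minimal Python illustration, assuming TPM 1.2 behaviour (20-byte SHA-1 registers); for simplicity it folds the whole chain into a single register, whereas a real platform spreads the stages across several PCRs:

```python
# Sketch of the "measure" step: a PCR cannot be written directly, only
# extended, so the final value commits to every component and their order.
# TPM 1.2 semantics assumed: PCR_new = SHA1(PCR_old || SHA1(component)).
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    measurement = hashlib.sha1(component).digest()
    return hashlib.sha1(pcr + measurement).digest()

pcr = bytes(20)  # PCRs reset to zero at power-on

# The chain from the article: BIOS, then boot loader, then kernel,
# each measured before it is handed control.
for stage in (b"<bios image>", b"<boot loader image>", b"<kernel image>"):
    pcr = extend(pcr, stage)

print(pcr.hex())  # changes if any stage - or their order - is altered
```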
TrouSerS makes a number of TPM capabilities available to the system. If the TPM has RSA capabilities, TrouSerS can perform RSA key pair generation, along with encryption and decryption. There is support for remote attestation functionality (more about that momentarily). The TSS can be used to "seal" data; such data will be encrypted in such a way that it can only be decrypted if certain PCRs contain the same values. This capability can also be used to bind data to a specific system; move an encrypted file to another host, and that host's TPM will simply lack the keys it needs to decrypt that file. Needless to say, if you make use of these features, you need to give some real thought to recovery plans; there are various sorts of key escrow schemes and such which can be used to get your data back should your motherboard (with its TPM chip) go up in flames.
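Sealing can be sketched the same way. A hypothetical simplification follows: here the sealing key is derived in software from the PCR values, whereas a real TPM never exposes that key; it compares the live PCRs inside the chip and refuses to unseal on a mismatch:

```python
# Conceptual sketch of TSS "seal"/"unseal" (not the real TPM command flow).
import base64
import hashlib
from cryptography.fernet import Fernet

def key_for_state(pcrs: list) -> bytes:
    # Bind a symmetric key to a specific set of PCR values.
    digest = hashlib.sha256(b"".join(pcrs)).digest()
    return base64.urlsafe_b64encode(digest)

seal_time_pcrs = [b"\x11" * 20, b"\x22" * 20]   # platform state at seal time
blob = Fernet(key_for_state(seal_time_pcrs)).encrypt(b"filesystem key")

# Unsealing succeeds only if the platform measures to the same state; on
# another host (or after tampering) the derived key simply won't match.
current_pcrs = [b"\x11" * 20, b"\x22" * 20]
secret = Fernet(key_for_state(current_pcrs)).decrypt(blob)
assert secret == b"filesystem key"
```

This also shows why the article's warning about recovery plans matters: lose the TPM and you lose the only party able to reproduce the unsealing key.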
The TrouSerS package also provides a set of tools for TPM configuration tasks. However, a number of BIOS implementations will lock down the TPM before invoking the boot loader, so TPM configuration is often best done by working directly with the BIOS. There is also a PKCS#11 library; PKCS#11 is a standard API for working with cryptographic hardware.
At the next level is the integrity measurement architecture (IMA) code. IMA was covered on the LWN Kernel Page last May; look there for the details. In short: IMA uses a PCR to accumulate checksums of every application and library run on the system since boot; this checksum, when signed by the TPM, can be provided to another system to prove that the measured system is running a specific list of software, that the programs have not been modified, and that nothing which is not on the list has been run. If the chain of trust (starting with the BIOS) holds together, a remote system can have a high degree of confidence that the list is accurate and complete.
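The IMA idea in miniature (assumptions: SHA-1 measurements and a single aggregate PCR, as in the article; the log structure and file paths below are invented for illustration):

```python
# Sketch of IMA: keep a readable list of (path, hash) for everything run
# since boot, and fold each entry into a PCR so the list cannot be edited
# after the fact. A verifier replays the log and checks it against the
# TPM-signed PCR value.
import hashlib

measurement_log = []   # the readable measurement list
pcr = bytes(20)

def measure(path: str, content: bytes) -> None:
    global pcr
    digest = hashlib.sha1(content).digest()
    measurement_log.append((path, digest))
    pcr = hashlib.sha1(pcr + digest).digest()   # accumulate into the PCR

measure("/bin/sh", b"<shell binary>")
measure("/usr/lib/libc.so", b"<libc binary>")

# Remote verification: recompute the aggregate from the log entries and
# compare with the signed PCR; any edited, removed or reordered entry fails.
replay = bytes(20)
for _, digest in measurement_log:
    replay = hashlib.sha1(replay + digest).digest()
assert replay == pcr
```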
Since last May, the IMA code has been significantly reworked (it took a fair amount of criticism on the kernel list). Among other things, it no longer hooks in as a Linux security module. The next step, however, will be a security module; it is called the "extended verification module." It includes a fair amount of security enforcement policy. This module can, for example, check that the extended attributes on files have not been changed by any third party. SELinux makes heavy use of extended attributes; with this mechanism in place, an SELinux system can remain secure even if somebody moves the disk to a different system and makes changes to the SELinux labels. Once back on the original system, those changes will be detected.
So why would a Linux user care about all of this? Some of the things that can be done with the TPM include:
Key protection. A user can store GPG keys (or others) in the TPM and not have to worry about those keys being extracted and disclosed by a compromised application.
System integrity checking. The measurement capabilities can be used to ensure that the binaries on the system have not been tampered with; it is a sort of Tripwire with hardware support.
In the corporate environment, the remote attestation features provided by IMA can be used to keep compromised systems from affecting the company network. Simply require systems to provide their "measurement" before giving them access to the network, and any system which has, say, been infected with malware at a conference will be detected and locked out.
Similarly, a conference attendee using an "email garden" terminal to access a mail server could, in the future, require that terminal to verify itself to the server before any sort of access is allowed.
Attestation could be used in electronic voting machines to verify that they are running the proper (hopefully open source) software.
And so on. The point is that there are legitimate uses for a hardware-based mechanism which can, with a reasonable level of confidence, verify that a system's software has not been compromised.
On the other hand, this same technology has a number of other potential uses. It could be used by company IT cops to ensure that employees are not running "unapproved" software, be it games, unlicensed copies of proprietary software, or Linux. Remote attestation is a boon for companies like TiVo, which can use it to ensure that the remote system is running current software and has not been cracked. Providers of web services could be sure that you really are running Internet Explorer. It does not take much imagination to come up with several unpleasant scenarios involving trusted computing and locked-down systems.
What it comes down to is that "trusted computing," like computing itself, is a tool which can be used in many ways. One does not have to look very far to find people using Linux in ways that one, personally, might not approve of. The TPM hackers feel that, given that the technology is available, let's use it. Properly used, this hardware can help to ensure that we remain in charge of our systems, and that much, certainly, is a good thing.
http://lwn.net/Articles/144681/
Biometric Systems - Threats and Countermeasures - The State-of-the-Art
Dr. Colin Soutar, Chief Technology Officer, Bioscrypt, Inc., and Mr. Dale Setlak, Chief Technology Officer and VP of Research, AuthenTec, Inc.
From this NIST conference:
http://www.csrc.nist.gov/pki/BioandEAuth/program.html
References TPMs as an important element in biometric authentication
http://www.csrc.nist.gov/pki/BioandEAuth/Presentations/Wednesday,%20March%2030/Soutar_Setlak_Threats...
Fine, awk. I wouldn't post this publicly on your board, but you have me blocked from PM's. I have taken it up with Matt.
Regards,
greg
Rosie asked on the WAVX board, not here.
Go to the jailhouse to post your stuff. Not here. One more attempt by you to make noise here and you'll be banned for good. I will not put up with your agenda over here. It's up to you.
Rosie asked an honest question. No one else would (or could) give her an answer. My post violates absolutely no TOU's and is entirely on topic.
End of story ... except ... what rule did my post violate that warranted its deletion?
greg; Don't use this board to get around your promise not to post on the WAVX board. End of story!
re: This is something Wave has been working on for over a decade.
To no avail, I'm afraid. Subsumption.
If I find the time I'll dig up an HP/Wave white paper from 1998 discussing Trusted Computing, or 'Trust at the Edge'... This is something Wave has been working on for over a decade.
Greg S
"How can anyone claim one firm "owns the space" after reviewing this white paper, information that has been available for over three years?"
Free will.
regards
For RosielovesWAVX:
White paper from April 2002 reflecting on IBM/Microsoft plans for WS-Security which is starting to roll out now.
Caution: NOT FOR THE TECHNICALLY CHALLENGED.
How can anyone claim one firm "owns the space" after reviewing this white paper, information that has been available for over three years?
Executive Summary
The IT industry has been talking about Web services for almost two years. The benefits of having a loosely-coupled, language-neutral, platform-independent way of linking applications within organizations, across enterprises, and across the Internet are becoming more evident as Web services are used in pilot programs and in wide-scale production. Moving forward, our customers, industry analysts, and the press identify a key area that needs to be addressed as Web services become more mainstream: security. This document proposes a technical strategy and roadmap whereby the industry can produce and implement a standards-based architecture that is comprehensive yet flexible enough to meet the Web services security needs of real businesses.
A key benefit of the emerging Web services architecture is the ability to deliver integrated, interoperable solutions. Ensuring the integrity, confidentiality and security of Web services through the application of a comprehensive security model is critical, both for organizations and their customers.
Responding to concerns expressed both from our customers and the industry, IBM and Microsoft have collaborated on this proposed Web services security plan and roadmap for developing a set of Web Service Security specifications that address how to provide protection for messages exchanged in a Web service environment.
For the first time, we have created a security model that brings together formerly incompatible security technologies such as public key infrastructure, Kerberos, and others. In short, this is not an idealized framework but a practical one that can allow us to build secure Web services in the heterogeneous IT world in which our customers live today.
In this document we present a broad set of specifications that cover security technologies including authentication, authorization, privacy, trust, integrity, confidentiality, secure communications channels, federation, delegation and auditing across a wide spectrum of application and business topologies. These specifications provide a framework that is extensible, flexible, and maximizes existing investments in security infrastructure. These specifications subsume and expand upon the ideas expressed in similar specifications previously proposed by IBM and Microsoft (namely the SOAP-Security, WS-Security and WS-License specifications).
By leveraging the natural extensibility that is at the core of the Web services model, the specifications build upon foundational technologies such as SOAP, WSDL, XML Digital Signatures, XML Encryption and SSL/TLS. This allows Web service providers and requesters to develop solutions that meet the individual security requirements of their applications.
IBM and Microsoft intend to work with customers, partners and standards bodies to evolve and improve upon this security model in a phased approach. We are seeding this effort with the WS-Security specification. WS-Security defines the core facilities for protecting the integrity and confidentiality of a message, as well as mechanisms for associating security-related claims with the message. While WS-Security is the cornerstone of this effort, it is only the beginning and we will cooperate with the industry to produce additional specifications that will deal with policy, trust and privacy issues.
To make the issues and solutions discussed in this document as concrete as possible, we discuss several scenarios that reflect current and anticipated applications of Web services. These include firewall processing, privacy, use of browser and mobile clients, access control, delegation, and auditing.
We anticipate concerns about what can be done to ensure interoperability and consistent implementation of the various proposed specifications. To address this, IBM and Microsoft will work closely with standards organizations, the developer community, and with industry organizations such as WS-I.org to develop interoperability profiles and tests that will provide guidance to tool vendors.
This document outlines a comprehensive, modular solution that, when implemented, will allow customers to build interoperable and secure Web services that leverage and expand upon existing investments in security infrastructure while allowing them to take full advantage of the integration and interoperability benefits Web service technologies have to offer.
Go to link for full article:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwssecur/html/securitywhitepaper.a....
Enterprise Grid Security Requirements
Date: July 8, 2005
http://www.gridalliance.org/en/documents/TWGDocs/ega-grid_security_requirements-v1Approved.pdf
Considering these developer boards have only been available for a scant month or two, you can imagine how much work/time has gone into the preparation for this. No surprises here. Just lots of heads buried in the sand.
You can now add Apple (MacTel) developer boards to the TPM matrix?
YES!!! GUX has TPM and wave software. But be careful...The article state
TPM on the Mac sighting? Very near the bottom - I'm sure I've seen that model number before with a TPM...
Multibooting Intel based Macs - A Step-by-step How to Guide
Disclaimer: We have read the NDA from Apple and do not see that this violates it. If we are wrong, however, someone please let us know and we'll happily remove the following. It is NOT our intention to violate this NDA or to make anyone upset. We are only trying to help others in the community by letting them benefit from the work we have done.
By Ross Carlson and Joel Wampler
In this guide we'll take you through installing multiple operating systems on the Intel based Developer Macintosh machine. This guide was put together by Ross Carlson and Joel Wampler to hopefully get you through building a machine that can run every major operating system currently available. This guide takes about 2 hours total. Let's get started...
First there are a few things that you'll need:
Decide what OSes you'll install
Mac OS X Intel disk (the one that came with the Intel Mac)
Windows XP SP2 CD (if you want XP - we tested with an already-SP2'ed disc)
Windows XP CD Key (obviously, just being safe...)
CentOS 4 CDs (or your favorite distro - we got kernel panics every time we tried Fedora Core 4, and CentOS worked great)
CD Ejection Device (otherwise known as a paper clip - just in case...)
Notes:
You're going to need a Linux install so you can use its boot loader for your OS selection menu.
We had major issues with Fedora Core 4. At first we thought it was an issue with HyperThreading support, and we did a "linux ht=off" at boot. This worked once but never again?!? Joel was also too lazy to make some Slackware CDs with SATA support so we just went with CentOS since we had it handy.
Keep the CD Ejection Device handy - Apple thought it was a good idea to remove the button from the DVD drive so the only way to eject a disc if you need to is with the OS or the CD Ejection Device. So if you can't boot into an OS and you want to remove the CD you'll need that...
Quick Guide: - Return to Top
If you're like us and hate reading through pages of crap to get things done, here is the quick version of what you'll need to do. We'll explain this step-by-step below.
Boot from the Mac OS X Install DVD
Use the Disk Utility within the Installer to delete ALL partitions
Use the drop down and select 3 partitions (if you're doing OSX/Windows/Linux) - YOU REALLY ONLY NEED A MAX OF 3!
Change the size of the partitions as you desire (make sure to leave room for all your OSes)
Set the first and third partition to "free space" - DO NOT FORMAT THEM!
Set the second partition to Mac OS Extended (Journaled) - name it "OS X" (or whatever you want)
Write the partition table
Exit the Disk Utility
Install OS X on the partition you created above (if you have more than 1 disk you did something wrong!)
Once OS X is installed and working put in the Windows XP CD and reboot
At boot make sure to hit a key so the machine boots from the XP CD
Create an NTFS partition on the first empty partition - you'll see the other two - ignore them. The partition you'll create will be called "E:", don't worry...
Exit the XP installer (AFTER you've created the partition - DO NOT proceed with setup).
Restart XP Setup (remember to press enter on reboot)
Now the first partition will be called C: - install to that one - Format NTFS (we recommend quick)
Finish installing XP
Once XP is installed put in the CentOS 4 disk 1 and reboot (we'll do drivers later...)
When the CentOS CD loads press enter to go into setup
Choose manual partitioning - create your partitions (we just did one big / partition and a 1536 MB swap partition)
At the Grub config screen add a choice pointing at "/dev/sda2" called "Mac OS X" - rename the one called "Other" to "Windows XP" (or what you want) - complete the CentOS install
Once the CentOS install is complete boot into CentOS
Using fdisk, mark "/dev/sda2" as the bootable partition (make sure to clear the flag on "/dev/sda1")
Edit /boot/grub/grub.conf to comment out hiddenmenu and timeout (so you can choose)
Reboot
When CentOS boots hit enter to get the Grub menu. Select the OS you want.
Enjoy - think happy thoughts for us
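To make the quick guide concrete, this is the disk layout all of those steps aim for (the sizes are just the ones we used - scale to taste):
Partition 1: 60 GB - left as Free Space in Disk Utility -> formatted NTFS for Windows XP
Partition 2: 40 GB - Mac OS Extended (Journaled) -> Mac OS X
Partition 3: the rest - left as Free Space in Disk Utility -> split by Disk Druid into a 20 GB / and a 1536 MB swap for CentOS, leaving ~30 GB spare for a 4th or 5th OS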
Full Guide:
Ok, so you've read the quick guide; now let's take you through it step-by-step and fully explain everything.
Install OS X (about 20 minutes):
Before rebooting from OS X, put the OS X Install DVD in the drive. Reboot. Make sure to watch the machine at reboot so you can hit enter to boot off the install DVD - otherwise it will just boot into OS X. Most of this is just like a standard OS X installation. The key issue here is making sure that you partition the disk properly. Basically OS X MUST be on the second partition (from our testing Windows XP MUST be first - correct us if we're wrong). If you're going to triple boot (or more) you'll need at least 3 partitions. If you plan on running OS X, Windows XP, and more than one Linux distribution you still only need 3 partitions here (you can chop up the third one later with Linux). As soon as the installer begins you'll need to load up the "Disk Utility" by using the "Utilities" menu and choosing "Disk Utility". This will let you select the disk and repartition it as you want. On the left you'll see your drive; ours is a "152.7 GB Maxtor". Highlight it and click "Partition" over on the right. You'll now want to change the "Volume Scheme" to "3 Partitions". All three partitions should show as "Untitled 1, Untitled 2, Untitled 3". Select the first and third partitions (separately) and change the "Format" to "Free Space". Now select the middle partition and make sure it's "Mac OS Extended (Journaled)". Now set the sizes of the 3 partitions as you want (we did 60 GB for Windows - the first partition, 40 GB for Mac - the second partition, and the rest for Linux - the third partition). Partitioning the disk correctly is the most important step, so make sure you get this right! Once you've got your partitions sized the way you want, click "Partition" in the bottom right corner. You'll get a warning that all data will be destroyed; just click "Partition". Once the partitioning is complete, close the "Disk Utility" and return to the installer.
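For the command-line inclined: the same layout can in principle be scripted from Terminal with diskutil. We only used the GUI, so treat this as an untested sketch - the exact argument syntax is our assumption (check the diskutil man page for partitionDisk on your build), and it will wipe the disk just like the GUI does:
diskutil partitionDisk disk0 3 MBRFormat \
    "Free Space" "" 60G \
    "Journaled HFS+" "OS X" 40G \
    "Free Space" "" R
Each triplet is format/name/size for one partition; "R" means "the rest of the disk" on the diskutil versions we've seen, and we guessed MBRFormat because fdisk happily edits this disk later - verify both locally.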
Now that you're back in the OS X install you should see only one drive to install on, at the size you set above. If you see more than one, or it's not the size you expected, relaunch the Disk Utility and verify everything. DO NOT proceed if you're not sure - you'll probably be wasting your time and have to start over later. Once OS X is all installed you can proceed to install Windows XP.
Install Windows XP (about 45 minutes):
Before rebooting from OS X, put your Windows XP SP2 CD in (we tested with an already-SP2'ed CD since we knew that included SATA drivers - if you use the base XP CD and it works, just let us know). Once you've got your CD in, reboot OS X.
As the machine boots, be sure to watch for the "Press any key..." prompt to boot from the XP CD. The XP install will begin as normal. The key thing in the XP install is selecting the right partition to use. You MUST put it on the first partition on the disk. In our case this is the 60GB one right at the beginning of the disk. You will see a drive labeled "C:" - this is really the OS X partition - we CAN NOT use that one. What you'll need to do is select the first partition and press "C" (for create) - take the default size for the partition. You'll now be back at the partition table, but your first partition will be labeled E: - this is bad, we can't use that. Now you'll need to press F3 to exit the installer. Don't worry, we'll come back here in a minute and the first partition will become C:
Once you've restarted the installer and gotten back to the partition choice, make sure that the first partition is labeled C: - if it is, you're good to go; if not, check your work. Select the C: partition by pressing enter - you'll get a message about another active partition; just ignore it, we'll fix that later when we install Linux. Hit enter to proceed. Format the partition (we always use quick) and continue with setup.
When XP boots into the GUI portion of setup, at the end you'll be asked about joining a domain. Don't try it - the network driver won't be loaded at that point, so you won't be able to. See below for notes on getting the drivers installed for all the hardware - this is just an install guide for the basic OSes...
Once you've got Windows XP installed, pop out the CD (you'll need to right-click on the drive in Windows Explorer and choose eject) and put CentOS 4 disc 1 in. While we're sure your favorite distro *might* work (we actually used Red Hat Enterprise Linux 4.0 first), we picked CentOS since we had it around (and again, Joel was lazy and didn't make us a Slack CD with SATA support). We'll no doubt run other distros soon... Reboot.
Install CentOS 4.0 (about 30 minutes):
Now that you're booting from the CentOS 4 disk you can just press enter and go. Proceed with a standard CentOS install, but make sure you manually partition the drive (using Disk Druid). When you get there, create 2 new partitions - 1 swap partition (we made this 1.5 GB - 1536 MB) and 1 partition for the OS (mounted at "/" - we made ours 20 GB; again, size these based on what you want to do - we chose 20 GB here, leaving 30 GB for later in case we want a 4th or 5th OS).
After you've created your partitions you can continue with the installer. The next important thing is the Grub boot loader configuration. You can do this later, but it's definitely easiest to do it here. When you get to this page click on "Add" - the device is "/dev/sda2" and we named it "Mac OS X". We also renamed "Other" to "Windows XP" so it would display nicer. We chose to make "Mac OS X" the default OS; you can choose the one you want. After these changes you can proceed and install CentOS the way you want (selecting your packages, etc).
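For reference, the "Add" you just did boils down to a chainloader entry in grub.conf like this (a sketch of what anaconda typically ends up writing - note that Grub counts partitions from 0, so /dev/sda2 is (hd0,1)):
title Mac OS X
        rootnoverify (hd0,1)
        chainloader +1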
If you selected something other than CentOS as your default OS, make sure to hit enter right at boot time so you can select the right OS. You'll need to boot into CentOS first so you can set the bootable partition to the OS X partition so that OS X will boot correctly. If you try to boot OS X now you'll get Windows XP instead (why, we're not sure). We're also going to set it up so that you get the menu each time, to make things easier. So boot into CentOS now and finish the CentOS welcome stuff.
Once you're booted in CentOS you'll need to edit a few things. First we'll need to change the bootable partition using fdisk (yes, there are other ways; this one is just easy to explain). Once you get booted, go to a shell and type:
fdisk /dev/sda (launch fdisk on the whole disk)
a then 1 (the "a" command toggles the bootable flag - this turns it OFF for partition 1, which XP's installer had made active)
a then 2 (toggle the bootable flag ON for partition 2 - the OS X one)
p (optional but smart - print the table and check that only /dev/sda2 has a * in the Boot column)
w (write the changes and exit)
Next we need to tell Grub to always show the menu and never time out (this is optional). Edit /boot/grub/grub.conf and comment out the timeout and hiddenmenu lines by adding a # in front of each, so they read:
#timeout=5
#hiddenmenu
Save and quit. Reboot and you should have the Grub boot menu to select your OS.
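For reference, after all of the above the boot-choice part of our /boot/grub/grub.conf looked roughly like this - a sketch only, since your kernel version and any splashimage line will differ. "default=1" is what makes Mac OS X (the second entry, counting from 0) the default:
default=1
#timeout=5
#hiddenmenu
title CentOS-4 (2.6.9-5.EL)
        root (hd0,2)
        kernel /boot/vmlinuz-2.6.9-5.EL ro root=LABEL=/
        initrd /boot/initrd-2.6.9-5.EL.img
title Mac OS X
        rootnoverify (hd0,1)
        chainloader +1
title Windows XP
        rootnoverify (hd0,0)
        chainloader +1
(The kernel paths start with /boot/ because we made one big / partition rather than a separate /boot.)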
Selecting Your OS:
Now that you've got everything installed you can reboot and choose your OS. The Grub boot menu should come up and let you select each of your OSes. We've set up a forum on our site (see the Forums link) and we'll do our best to give you a hand and answer any questions for you.
Enjoy!!!
Authors:
Ross Carlson <ross@jasbone.com>
Joel Wampler <jwampler@iwamp.com>
Driver Installation:
Ok, now that you've got your shiny new OSes installed you'll need some drivers. We got everything running great in XP but did have some problems with audio in CentOS (we wanted to finish this guide so we shelved that for now). We were able to determine that the motherboard is basically an Intel D915GUX board (or at least it's VERY close), and since Intel is Linux-friendly you can grab pretty much all the drivers from them. Here are some quick links for you:
General:
Motherboard (we think - at least very close): Intel D915GUX
Windows XP:
Networking: Intel PRO/1000 MT Server Adapter
Video: Intel 82915G/82910GL Express Chipset Family
Audio: Intel High Definition Audio Controller (Realtek Codec)
Other Device: There is also a Trusted Computing (TPM) chip on the board - Windows Update will install the driver for that...
Linux:
Networking: Intel PRO/1000 MT Server Adapter
Audio: Intel High Definition Audio Controller (Realtek Codec)
As we said, we didn't finish the audio driver for CentOS - we just haven't had time yet. We'll post a new story if/when we get that all worked out, and as we get other OSes installed on this bad boy. We hope this has been helpful - enjoy!!!
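One hedged guess on that CentOS audio problem, for anyone who wants to dig: the sound hardware is an Intel High Definition Audio controller, and as best we can tell ALSA's snd-hda-intel driver for HDA only appeared in kernels newer than the 2.6.9 that CentOS 4 ships, which would explain the silence. Untested by us, but from a shell this is where we'd start:
lspci | grep -i audio (confirm the controller shows up as Intel HDA)
modprobe snd-hda-intel (only works if your kernel/ALSA actually includes the driver)
If modprobe can't find the module, you'd likely need a newer kernel or ALSA build - again, our assumption, not something we've verified.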
http://www.jasbone.com/blog/archives/2005/07/multibooting_in.html
Listen to Kim around the 21:00 mark...
http://channel9.msdn.com/ShowPost.aspx?PostID=85004
2b(edit)--You write: "The WinTel world STILL has not embraced the TPM, TCG, and Wave, for if it had, wouldn't we see a substantial increase in Wave's share price?"
Come on. Of course Wintel HAS embraced TPM and TCG... Vista/Longhorn is architected around the TPM and the Trust Server being developed by TCG. It remains speculation, of course, but I firmly believe WAVE's share price will surge rather violently in the coming months.
You write: "To think that Apple will go through wave for trusted computing is not a foregone conclusion, at least with me."
Is it a "foregone conclusion" that Apple will go through the TCG? IMO it is a foregone conclusion otherwise Apple has no future. You think Apple wants to be outside the TCG Trusted Grid? No way! And yes of course WAVE is not a "foregone conclusion" that WAVE's Embassy Platform is foundational to the future of TCG. But the issue is not one of "foregone conclusion(s)" but rather the question of likelihood based on the available evidence. I believe the accumulative evidence points heavily in WAVE's favor.
You write: "I would like to offer the idea that a simple DRM scheme coupled with the TPM may be used that doesn't require the use of a wave software solution."
I totally concur with your thinking. There is no requirement that devices with TPMs need to be hooked into a TCG or Wave's "software solution". But keep in mind that those maverick TPMs will not be in compliance with the World Wide Trusted Computing Standards established by TCG. Check out the companies that belong to TCG. Where is the competition to that Standards Group? There isn't competition because the goal is a World Wide TCG Grid. Not multiple grids, but ONE GRID!
You write: "Though I must confess I would love to see many a wavoid eat crow with an apple embracement of TPMs and wave software."
Your out-of-the-box thinking over the years has ALWAYS been appreciated by me! You thundered about Apple and Trusted Computing, and few understood what the heck you were talking about. Isn't that typically the fate of visionary thinkers like yourself? In the end, you'll be vindicated. No doubt about it.
the best
barge
MSFT's Kim Cameron and the Laws of Identity for the Internet...
http://channel9.msdn.com/ShowPost.aspx?PostID=85004
barge - my friend, I know how much you love and adore the 'promise' of Wave. I'm not insensitive to that love, nor am I trying to alter your emotions in this regard. My point is to offer a different perspective regarding the implementation of trusted computing.
For anyone who cares about the movement of digital content via the MacTel platform (clearly some don't), the development of the TPM in that context warrants watching. To think that Apple will go through Wave for trusted computing is not a foregone conclusion, at least with me.
I'm not trying to be argumentative with you. On the contrary: I would like to offer the idea that a simple DRM scheme coupled with the TPM may be used that doesn't require a Wave software solution.
I'm not sure about Apple having "eaten humble pie in the past few weeks by acknowledging that if they do not design future Apple hardware and software around the TPM platform they will soon be BURNT TOAST." I rather find this statement interesting in light of how long Apple has been stealthily working on this. The transition is going to occur at lightning speed. The WinTel world STILL has not embraced the TPM, TCG, and Wave, for if it had, wouldn't we see a substantial increase in Wave's share price?
I think Intel has regained what it lost with InterTrust. They are now able to capitalize upon their motherboard real estate. That Apple wants to be part of that is a good thing for Apple (in my opinion) and for end-users who want digital content. That it is a good thing for Wave remains to be seen, in my opinion. Though I must confess I would love to see many a wavoid eat crow with an Apple embracement of TPMs and Wave software.
2b--Wrong! You write: "I don't believe Wave has any interest in moving content via Apple."
Your point is irrelevant. APPLE has every interest in moving content via the Embassy TPM Platform. Apple needs to come to the Trusted Computing Group/WAVE. Why would TCG/WAVE need to approach Apple?
Apple has eaten humble pie in the past few weeks by acknowledging that if they do not design future Apple hardware and software around the TPM platform they will soon be BURNT TOAST!
ALL ROADS LEAD TO THE TRUSTED COMPUTING GROUP and of course little ole WAVE!!!
Wave? I'm more interested in protecting and moving content. I don't believe Wave has any interest in moving content via Apple.
And many an 'important' wavoid believes that Apple is insignificant. It will be interesting to see what the landscape looks like next June/July if OS X is locked down to Intel motherboards via TPM and movies and music are flying through that platform - very interesting indeed...
2b---I believe you are correct! If WAVE ever launches firing on all cylinders... I kid you not, it will be FRIGHTENING!!! I intend to hide under my bed until I get an "ALL CLEAR" message from Awk!!
And isn't MOT bringing out a video/music phone that syncs with iTunes? I'm sure they are...
Awk--Your Motorola find is actually quite frightening! You just added another million pounds of rocket fuel to the already dangerously bulging supersized fuel tanks attached to the WAVE rocket.
An interesting TCG page that clarifies TriCipher.
https://www.trustedcomputinggroup.org/kshowcase/view/select_item?categories=9696f950dcd44cf48039dc1ef1f8e13cc91ee086&step%3Aint=2
and some other categories where many of the same products are listed.
Categories:
Applications
Hardware
PC
Services
Software
A thread of thought on Apple board about TPM...
http://www.investorshub.com/boards/read_msg.asp?message_id=7081075
Anticipation is killing me!! This is great stuff.
I do have a stupid question, though. Is it possible the Embassy chip has been re-engineered smaller?
Thanks awk. Looks like progress is definitely being made. A spec by the end of the year is aggressive with so many players involved.
regards
More on Motorola...TrustZone!!!!
http://www.cdg.org/news/events/CDMASeminar/050513_Tech_Forum/8%20LChen_Motorola.pdf
Freescale (Motorola) Solutions: Driving Seamless Mobile Entertainment - Freescale Semiconductor, 06/2005
http://www.freescale.com/files/abstract/overview/MOBILE_ENTERTAINMENT.html
Freescale’s applications processors have security “baked in” to the chip itself, protecting a device’s modem and providing secure communications at the hardware level. The platforms incorporate our platform-independent security architecture, a combination of features that provide a high level of confidence for carriers, content providers and consumers. For carriers, the architecture provides protection against malicious service attacks and service theft, configuration protection, and cloning. For content providers, it blocks illegal access to licensed content, protecting against unauthorized use and distribution. For consumers, private data is inaccessible, helping protect against identity theft.
Innovations for Grid Security from Trusted Computing
Dated June 7, 2005
http://www.hpl.hp.com/personal/Wenbo_Mao/research/tcgridsec.pdf
P2P Access Control Architecture Using Trusted Computing Technology
http://www.list.gmu.edu/confrnc/sacmat/2005-tc.pdf