DIDW 2005: Kim Cameron's 7 laws of identity
http://scottsrawnotes.blogspot.com/2005/05/didw-2005-kim-camerons-7-laws-of.html
Microsoft's New Law Of Identity
"...Microsoft *now* views identity and what they
are planning -- nearly everyone was surprised by some part of
what was presented. Who knows what will actually emerge from
this, but Kim's "Laws of Identity" are now clearly part of the
background of most identity conversations these days..."
-Phil Becker-
Audio:
http://www.digitalidworld.com/misc/sts.mp3
Presentation:
http://www.digitalidworld.com/misc/sts.pdf
Grid Computing Can Allow Security Threats
By Ryan Naraine
March 30, 2005
http://www.eweek.com/article2/0%2C1759%2C1780849%2C00.asp
Security experts on Wednesday recommended that IT administrators clearly identify and understand the security risks associated with large-scale grid computing deployments.
During Ziff Davis Media's Enterprise Solutions Virtual Tradeshow, the pros and cons of grid computing and safe data storage took center stage, with panelists stressing the importance of using best practices to protect the confidentiality of information passed over corporate grid systems.
Lenny Mansell, senior security consultant at Triad Information Security Services LLC, warned that greater sharing of information and resources across traditional trust boundaries will result in increased risks that must be addressed as a matter of urgency.
Mansell recommends that businesses deploying grid systems identify critical assets and the threats to those assets.
More importantly, IT administrators must assess the impact that a security threat could have on the business and implement mitigation controls and policies.
To ensure the confidentiality, integrity and availability of crucial data on the grid systems, Mansell said administrators must implement proper classification to handle confidential data. "Access to confidential data needs to be restricted to those with a need to know and you have to set up audit trails," Mansell said.
"Many of the concepts that apply to a well-managed information security practice apply to grid computing," he said, warning that the reliance on authentication and authorization of users and groups makes it complicated for an administrator.
"Policies and processes must be created to address this expanded reliance on these extended models," Mansell said, calling for boundaries of administrative control to be clearly defined.
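Mansell's points about need-to-know access, audit trails, and clearly defined administrative boundaries can be sketched in a few lines. This is an illustrative toy (the resource names, groups, and ACL shape are all hypothetical, not from the article): a grid node grants access only when one of the requester's groups is on the resource's access list, and records every decision for audit.

```python
# Toy sketch of need-to-know access control with an audit trail,
# as Mansell recommends. All names here are hypothetical.
from datetime import datetime, timezone

# Access-control list: which groups may touch which grid resource.
ACL = {
    "payroll-data": {"finance-group"},                    # confidential: tightly restricted
    "render-cluster": {"finance-group", "eng-group"},     # shared compute resource
}

audit_log = []

def authorize(user, groups, resource):
    """Grant access only if one of the user's groups is on the ACL,
    and record every decision (granted or denied) for later audit."""
    allowed = bool(groups & ACL.get(resource, set()))
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "resource": resource,
        "granted": allowed,
    })
    return allowed

print(authorize("alice", {"eng-group"}, "payroll-data"))    # False: no need to know
print(authorize("bob", {"finance-group"}, "payroll-data"))  # True, and logged
```

The point of the sketch is the combination: the denial itself is not enough; the audit trail entry is what lets an administrator later reconstruct who tried to cross a trust boundary.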
Mark Teter, chief technical officer of Advanced Systems Group LLC, said the highly automated manner in which resources are allocated on a grid can be used by a malicious attacker to steal sensitive corporate data.
"It is crucial to safeguard the grid and the data being distributed. Your whole storage infrastructure can be compromised," Teter said. He recommended that businesses use encryption technologies to mitigate the threat.
Grid computing is the concept of using computers in the way that utilities use power grids to tap the unused capacity of a vast array of linked systems. Users can then share computing power, databases and services online.
Several high-profile companies have invested heavily in grid computing, including IBM, Sun Microsystems Inc., Microsoft Corp. and Oracle Corp.
TCG, Secure Execution
There are two main elements that enter the Trusted Computing equation:
TCG - Trusted Platform Module (TPM)
The Trusted Computing Group (TCG) concerns itself with platform integrity (security) only, at least at this time.
The TCG developed a standard that allows platform manufacturers (PCs, mobiles, set-top boxes, etc.) to leverage the Trusted Platform Module (TPM) as a repository for platform integrity data and credentials, and for the secure creation of PKI keys.
The Trusted Platform Module allows service providers (servers) to identify a client with a Trusted Platform Module and allows clients with a Trusted Platform Module to identify service providers (servers) using the Public Key Infrastructure (PKI).
A TPM-supporting infrastructure is also necessary, with components such as a Key Transfer Manager and an Attestation Credential Manager.
The Trusted Computing Group specifications concern themselves only with platform attestation. User attestation to the secure platform is a building block that leverages the TPM, i.e. it would not make sense to attest the user to an insecure platform.
A large number of secure services can be built leveraging the Trusted Platform Module. However, computational processes (applications) still run in the open, unprotected environment of the OS. As an example, DRM for movies and/or music cannot be made sufficiently secure because the hardware path for sound and picture is not encrypted and remains in the open, i.e. it can be tapped. Hollywood will want assurances that this cannot happen (see Secure Execution Environment below). It actually goes a bit deeper but, for the sake of simplicity, I will leave it at that.
The TCG (TPM) approach is a fundamental building block (Platform integrity) necessary to enable Trusted Computing. It's but one basic element on the road to securing a broad web service infrastructure.
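The core of how a TPM records platform integrity can be sketched briefly. A Platform Configuration Register (PCR) is never written directly; it is "extended": the new value is the hash of the old value concatenated with the new measurement. The resulting digest therefore commits to the entire boot sequence, in order. This is an illustrative stand-in, not TPM code (TPM 1.x uses SHA-1 internally; hashlib is used here purely for demonstration):

```python
# Illustrative sketch of the TPM PCR "extend" operation:
#   new_PCR = SHA1(old_PCR || hash(measurement))
# A PCR cannot be set to an arbitrary value, only extended, so the final
# digest commits to every component measured and to their order.
import hashlib

def extend(pcr, measurement):
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = b"\x00" * 20                       # PCRs start at zero on platform reset
for component in [b"BIOS", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

# A verifier replaying the same measurements gets the same digest; any
# changed, missing, or reordered component yields a different PCR value.
print(pcr.hex())
```

This one-way chaining is what lets a remote party (via attestation) judge platform integrity without trusting the open OS to report honestly about itself.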
Secure Execution Environment (NGSCB - LaGrande/SEM, ARM's "TrustZone")
The Secure Execution Environment is the end goal.
The Secure Execution Environment is a separate hardware environment in the processor designed, in the broadest sense, to execute trusted operations. These trusted operations are executed by a secure isolation OS running in the separated and secure hardware partition of the processor. The open platform OS has no way to see what is going on in the secure isolation OS.
The Secure Execution Environment leverages the Trusted Computing Group's (TCG) Trusted Platform Module (TPM 1.2) as a vault for credentials, the generation of credentials (keys) and proof of platform integrity.
Ultimately every peripheral device (hard drive, monitor, sound card/speakers, printer) will contain a TPM-like device for cryptographic acceleration and for attesting device integrity within the system.
Several secure applications can run in parallel in the secure partition under the isolation OS. These individual secure applications do not know about each other (domain separation).
Example: a VISA application running in the secure environment cannot see and does not know about an American Express application running at the same time (web services model).
To come back to the Hollywood DRM example: movies are delivered in encrypted form. A DRM applet executed in the secure execution environment validates the client's rights to view the content. The movie data is then sent, still encrypted, to the monitor and sound card/speakers, where it is decrypted and rendered.
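The DRM flow just described can be sketched as a toy program. To be clear about assumptions: this is not Wave's, Microsoft's, or anyone's actual protocol; the rights database and client names are invented, and XOR against a hash-derived keystream stands in for real encryption purely so the example is self-contained.

```python
# Toy sketch of the DRM flow: content always travels encrypted; the secure
# environment checks the client's rights and only then releases the content
# key toward the (hypothetically TPM-equipped) output device.
# XOR with a SHA-256-derived keystream is a stand-in for real encryption.
import hashlib

def keystream(key, n):
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, data):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR is its own inverse

RIGHTS = {"client-42": True, "client-99": False}   # hypothetical license records

def release_key(client, content_key):
    """Runs inside the secure environment: release the content key only
    to clients whose viewing rights check out; the open OS never sees it."""
    return content_key if RIGHTS.get(client) else None

movie = b"opening scene..."
key = hashlib.sha256(b"content-key").digest()
wire = encrypt(key, movie)                  # encrypted in transit and in the open OS

granted = release_key("client-42", key)
print(decrypt(granted, wire))               # b'opening scene...'  (device-side decryption)
```

The structural point matches the text above: the plaintext only ever exists past the rights check, which is exactly why the unencrypted sound/picture path of today's hardware is the weak link.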
For those that are interested in Wave IP I would suggest reading the following:
Method and system for authenticating and utilizing secure resources in a computer system
http://164.195.100.11/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netah...
Abstract
A system and method for executing secure transactions on a computer system is disclosed. The computer system includes a memory.
In one aspect, the method and system include providing a basic input output system (BIOS) on the computer system, providing a secure peripheral coupled with the computer system, and providing a master security co-processor coupled with the computer system.
The BIOS includes first unit for indicating a first trust relationship with the BIOS. The secure peripheral includes second unit for indicating a second trust relationship with the secure peripheral. The master security co-processor is for processing sensitive data on the computer system and includes third unit for indicating a third trust relationship with the master security co-processor.
The method and system further includes utilizing the BIOS to verify at least one of the first trust relationship, the second trust relationship, or the third trust relationship using the first unit for indicating the first trust relationship, the second unit for indicating the second trust relationship, or the third unit for indicating the third trust relationship.
In another aspect, the method and system are for executing an application utilizing sensitive data on a computer system. The computer system includes a master security co-processor and a secure peripheral. In this aspect, the method and system include establishing a secure channel for communication between the master security co-processor and the secure peripheral for executing a portion of the application and executing the portion of the application by the master security co-processor utilizing the secure channel.
Method and system for conditional installation and execution of services in a secure computing environment
http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FP...
Abstract
A system and method are provided for installing and executing an applet in a secure processor. The system and method can receive the applet in non-secure data storage. The applet includes a meta-data portion and an executable portion. The meta-data portion includes a security meta-data portion, a resource meta-data portion, and a meta-data signature portion.
The system and method determines whether the applet is capable of being executed by the secure processor based at least in part on the security meta-data portion and the resource meta-data portion of the applet, and if the applet can be executed by the secure processor, the applet is installed on the secure processor.
And in regard to the often discussed subject of Direct Anonymous Attestation (DAA) I suggest this:
Method and system for user and group authentication with pseudo-anonymity over a public network
http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FP...
Abstract
A method of authorizing anonymous access to content by an individual user or a member of an authorized group of users is provided. The method includes receiving a request for access from a user having a persona identifier. Next, a challenge message is generated that includes, at least in part, the persona identifier and verification data, such as pseudo-random data. The challenge message is provided to a persona server, which operates as an authentication agent that generates an authentication object extractable only by an individual user or group member. Upon receiving an authentication object from the persona server, the content provider forwards it to the user, and the user retrieves decryption data from the persona server. If the persona user is authentic, the authentication object packaging is stripped by secure hardware at the user computer using the data from the persona server, and the verification data is extracted. Upon receiving and confirming the verification data from the user, the content provider grants the user access to the selected content.
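The challenge/response shape of that abstract can be sketched very roughly. Heavy caveats apply: this is a simplification I am inventing for illustration, not the patent's mechanism; an HMAC over a nonce stands in for the authentication-object packaging and the secure hardware, and the persona key is shared directly rather than mediated by a persona server.

```python
# Rough sketch of a pseudonymous challenge/response (hypothetical
# simplification of the abstract above). The verifier learns only that the
# holder of "persona-7"'s key responded, not who that person is.
import hmac, hashlib, secrets

# In the real scheme this key would live in secure hardware and at the
# persona server; here both sides simply share it for illustration.
persona_keys = {"persona-7": secrets.token_bytes(32)}

def make_challenge(persona_id):
    """Content-provider side: bind fresh verification data (a nonce)
    to the pseudonymous persona identifier."""
    return {"persona": persona_id, "nonce": secrets.token_bytes(16)}

def respond(challenge):
    """User side: prove knowledge of the persona key without revealing
    any identity beyond the persona id."""
    key = persona_keys[challenge["persona"]]
    return hmac.new(key, challenge["nonce"], hashlib.sha256).digest()

def verify(challenge, response):
    key = persona_keys[challenge["persona"]]
    expected = hmac.new(key, challenge["nonce"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

ch = make_challenge("persona-7")
print(verify(ch, respond(ch)))     # True
```

The fresh nonce per challenge is what prevents replay: an eavesdropped response is useless against the next challenge.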
Jobs: Symantec Warns More Hackers Eyeing Apple's Macintosh OS
http://www.forbes.com/facesinthenews/2005/03/22/0322autofacescan09.html?partner=yahoo&referrer=
EAP Trust Framework
http://www.eapartnership.org/
Draft January 2005
http://www.eapartnership.org/docs/Trust_Framework_0105.pdf
AMD Launches Turion Chip to Rival Intel's Centrino
This was posted once I believe and ignored. Given AMD said that they'd be there when they were needed, I suggest this deserves more attention.
http://story.news.yahoo.com/news?tmpl=story&cid=581&e=3&u=/nm/20050310/tc_nm/tech_amd_dc
Is this the start of AMD being there when needed....
AMD aims for a 12-15% marketshare for the notebook-CPU
Category: SOFTPEDIA NEWS :: Mobile Computing :: Laptops
With AMD Turion 64
The company recently launched the AMD Turion 64, a 64-bit mobile processor that will go up against Intel's Centrino chipset. AMD is relying heavily on Turion 64 to improve its poor standing in the notebook sector. The company controls less than 10 per cent of the global notebook arena and has been losing ground fast in the European market.
AMD's share of this market fell to 10.7 per cent in May 2004, down from 16.7 per cent in January 2004, according to analyst Context.
The Sunnyvale company is already promoting its processor, which has been integrated into systems from HP, Acer, Asus and others that should be available as of April 18th.
The processors these manufacturers chose are the Turion 64 2800+ and 3000+, with power consumption averaging 25W or 35W, which will cost 13% less than the Pentium M Dothan.
Turion is a low-power 64-bit Athlon processor based on the E revision of the core, which means it will be produced in the 90-nanometre process and will have SSE3 instructions. The processor should be available in the following versions:
- AMD Turion 64 2800+ - 1.6 GHz - 1M L2 - Core 90nm Rev 'E'
- AMD Turion 64 3000+ - 1.8 GHz - 1M L2 - Core 90nm Rev 'E'
- AMD Turion 64 3200+ - 2.0 GHz - 1M L2 - Core 90nm Rev 'E'
HP might raise the proportion of AMD-based notebooks to 50% of its total sales in 2005, from about 20-30% in 2004, citing speculation in the market. Acer expects its shipments of AMD-compatible notebooks to increase 20-30% in 2005, from about 1.0-1.2 million units it shipped in 2004.
Asustek plans to launch five or six AMD-based notebooks this year, compared to just one or two models launched last year.
-----------------------------------------------------
I thought I'd seen that the Turion supported 1.1 and 1.2 but I can't find a specific reference yet.
On Card readers...for the Gov
Wave to exhibit here...
Department of Defense (DoD) Identity Protection and Management Conference
http://www.iaevents.com/PKE05/index.cfm
The focus of the Physical Access Track is the Homeland Security Presidential Directive #12 and its impact on Department of Defense Access Control Programs. The objective is to keep the end-user community informed on key issues that will affect how access control systems are designed, acquired and implemented as well as the underlying programs governing their use.
PS-1 Brief introduction into areas to be covered by the track.
PS-2 From the DOD perspective, Mr. Haberkern will address the basics of the HSPD #12 policy and its implications for DOD programs. This discussion will include the policy-related requirements of FIPS 201 and the Personal Identity Verification Levels 1 and 2. He will discuss the changes required to current DOD policies and directives, and the necessity for change. His discussion will provide the Department's timeline for effecting change and those things we may expect in the near and mid term.
As a result of the HSPD #12, the Department of Commerce was tasked to prepare policy standards for a Federal level Personal Identity Verification Credential.
PS-3 Mr. Baldridge has played a key role in the development of the “Use Cases” used to develop the technical standards. He will discuss the content of Special Publication 800-73 and the Technical Implementation Guidance for Smart Card Enabled Physical Access Control Systems v2.2. He will also discuss the characteristics and the methods for central issuance of the Card Holder Unique Identification (CHUID) information. This is the method to be adopted by Defense Manpower Data Center (DMDC) for future Common Access Cards (CACs).
PS-4 Mr. Woodward will discuss the implications of HSPD #12 on the implementation of biometrics within the DOD. This discussion will include the events that are driving the widened use of biometrics in both Information assurance and physical access. The content and implications of the Special Publication 800-76 will be addressed and its impact on DOD’s plans for implementing biometrics across the department. The discussion will address the role of the Biometrics Management Office (BMO) in meeting DOD’s goals and objectives, as well as broad timelines for implementation.
Interfaces for Personal Identity Verification
http://www.csrc.nist.gov/piv-project/fips201-support-docs/SP800-73-2ndDraft.pdf
Executive Summary
The Homeland Security Presidential Directive HSPD-12 called for new standards to be adopted governing the interoperable use of identity credentials to allow physical and logical access to Federal government locations and systems.
The Personal Identity Verification (PIV) for Federal Employees and Contractors, Federal Information Processing Standard 201 (FIPS 201) was developed to establish standards for identity credentials.
This document, Special Publication 800-73 (SP 800-73), specifies interface requirements for retrieving and using data from the PIV Card and is a companion document to FIPS 201.
January 31, 2005 -- Draft Special Publication 800-77, Guide to IPsec VPNs
http://csrc.nist.gov/publications/drafts/Draft-SP800-77.pdf
NIST is pleased to announce new draft special publication 800-77, Guide to IPsec VPNs. IPsec is a framework of open standards for ensuring private communications over IP networks. The most common use is with virtual private networks (VPN). IPsec provides several types of data protection, including maintaining confidentiality and integrity, authenticating the origin of data, preventing packet replay and traffic analysis, and providing access protection.
This document describes the three primary models for VPN architectures: gateway-to-gateway, host-to-gateway, and host-to-host. These models can be used, respectively, to connect two secured networks, such as a branch office and headquarters, over the Internet; to protect communications for hosts on unsecured networks, such as traveling employees; or to secure direct communications between two computers that require extra protection.
The guide describes the components of IPsec. It also presents a phased approach to IPsec planning and implementation that can help in achieving successful IPsec deployments. The five phases of the approach are as follows: identify needs, design the solution, implement and test a prototype, deploy the solution, and manage the solution. Special considerations affecting configuration and deployment are analyzed, and three test cases are presented to illustrate the process of planning and implementing IPsec VPNs.
Prototype Contactless Reader Specification
http://www.tsa.gov/interweb/assetlibrary/PrototypeContactlessReaderSpecification.pdf
For biometrically enabled readers, the biometric device shall be embedded in the same chassis as the reader, or may be in a separate unit. If a separate biometric device is used, the wiring between the reader and biometric unit must not be exposed. The government is seeking methods to support two critical security functions:
1. Ability for the credential to confirm that it is about to communicate with a valid reader that is authorized to receive sensitive information (e.g., biometric identifiers)
2. Ability for the reader to verify that it is communicating with a valid government issued credential (TWIC).
(Note: The government requests input in methods and methodologies to achieve these security objectives).
At this time, it is anticipated that security keys will be required to read the credential media and to verify the contents read from the credential media.
These keys must be user-customizable using a manufacturer-supplied configuration utility. The reader shall support ISO 7816-compliant authentication methods to provide mutual authentication between credential and reader. The information stored on the credential by the issuer is digitally signed. The reader should be able to verify this signature. Authentication and signature methods shall use algorithms that are compliant with appropriate government standards as specified by FIPS 186-2, FIPS 46-2 and FIPS 197.
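The verify-before-trust step that the spec requires of the reader can be sketched. Important caveat: FIPS 186-2 calls for real digital signatures (DSA/RSA/ECDSA), which need a crypto library; to keep this self-contained, an HMAC shared between issuer and reader stands in for the signature. That is not a digital signature (it is symmetric), but the flow it illustrates, reject any credential whose signed contents do not check out, is the same.

```python
# Sketch of the reader-side signature check (HMAC as a stand-in for the
# FIPS 186-2 digital signature; key names are hypothetical).
import hmac, hashlib

# Hypothetical issuer key, provisioned into the reader's protected memory.
ISSUER_KEY = b"issuer-provisioned-key"

def issue_credential(holder_id):
    """Issuer side: store the holder data together with a tag over it."""
    data = holder_id.encode()
    tag = hmac.new(ISSUER_KEY, data, hashlib.sha256).digest()
    return {"data": data, "sig": tag}

def reader_verify(cred):
    """Reader side: recompute the tag over the credential contents and
    compare in constant time; any tampering breaks the match."""
    expected = hmac.new(ISSUER_KEY, cred["data"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, cred["sig"])

card = issue_credential("TWIC-0001")
print(reader_verify(card))          # True

card["data"] = b"TWIC-9999"         # tampered contents
print(reader_verify(card))          # False
```

With a real asymmetric signature the reader would hold only the issuer's public key, which is precisely why the spec's requirement to protect reader key storage (FIPS 140-2 level 3 / EAL 4+) matters even more in the symmetric case sketched here.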
The reader shall utilize flash memory to allow for future enhancements to be added in the field. The reader shall be designed such that memory used for storage of security keys is protected from all forms of attack on the memory device itself that would result in disclosure of the security keys for the reader and associated media. The manufacturer shall design for compliance to FIPS 140-2 level 3 requirements or a NIAP certification for Common Criteria at an EAL 4+ to demonstrate this level of protection.
The manufacturer shall provide a configuration utility, key management utility and any other configuration software required free of charge in CD format or by download from the manufacturer’s web site.
Smart Card Enabled Physical Access Control Systems
http://www.tsa.gov/interweb/assetlibrary/TIG_SCEPACS_v2.2.pdf
1.2 A scenario for cross-agency interoperability
Through the concepts presented herein along with the work of the various specifying entities, the future of physical access control at federal agencies will look like this:
1. Bob is issued a FASC card from his employer, Agency A. At the point of issuance, he is enrolled into the physical access control system at his main office location. His card enables him to gain entry to his place of work.
2. Months later, Bob is sent to work on a project at another of Agency A’s facilities located in another state. When Bob reports for duty to the new location, the security manager for that location enrolls Bob into the PACS for that facility. Bob can now use his ID card to gain access to the new facility in addition to his original office.
3. In addition, Bob’s work finds him on a project team that meets at another agency’s, Agency B’s, facilities. The security manager at Agency B enrolls Bob in the PACS and the same credential issued by Agency A now electronically identifies Bob at the control points at Agency B’s facility.
Tumbleweed and Wave Systems Corp. Offer an Integrated Solution for Secure Access from UNCLASSIFIED to CLASSIFIED (SECRET) Networks
http://www.tumbleweed.com/pdfs/va-wavesys1.pdf
As information sharing among organizations increases, so does the risk of unauthorized access to classified information.
Unclassified network users must be able to access classified (secret) networks securely so upward information flow can occur. Organizations often require the use of a valid digital certificate stored on a smart card such as the Department of Defense Common Access Card to prevent unauthorized access to classified data. However, unless a user can be positively validated prior to any sort of access to the computing platform, including the smart card reader itself, a potential for breaching vital data compartmentalization exists.
Utilizing an integrated solution from Wave Systems and Tumbleweed, users who have been issued a standard X509v3 digital certificate stored on a smart card can now be validated in real time using the EMBASSY® "trusted" smart card reader and the Valicert Validation Authority, as shown in Figure 1. Ensuring that users with expired or revoked credentials cannot access the computing platform, including the smart card reader itself, allows secure integration of classified and unclassified networks, a key requirement in enabling critical data sharing and communications.
TCPA/TCG and NGSCB: Benefits and Risks for Users
http://pericson.com/writings/tcpa-tcg_ngscb/tcpa-tcg_and_ngscb.pdf
Abstract
Trusted computing has been proposed as a way to enhance computer security and privacy significantly by including them in the design of computing platforms instead of adding them on top of an inherently insecure foundation; however, the project has attracted much criticism. This dissertation looks at trusted computing from the user perspective. Possible beneficial uses of the technology are brought up, and some of the raised criticism is discussed. The criticism is analyzed in an attempt to find out if the criticism is correct on all points, or if some of it is the result of misinformation or misunderstanding. The conclusion is that not all the arguments against trusted computing are correct, and that the possible implications for users are taken into account in the development process. The dissertation ends on a positive note, concluding that trusted computing is possible without the worst fears of the critics coming true.
QUALCOMM Trusted Services
http://www.cdmatech.com/solutions/pdf/securemsm.pdf
Wave TCG Enabled Toolkit
http://www.wave.com/products/03-000172_TK.pdf
Identity Management: "Identity = Data + Policies"
http://www.hpl.hp.com/techreports/2004/HPL-2004-14.pdf
Digital identities are fundamental to enabling digital interactions and transactions on the web. The current digital identity model, based on the "identity = data" paradigm, starts showing its limitations when addressing people's expectations about their identities (in terms of preferences, privacy, trust, etc.) and providing them with degrees of assurance that expectations will be met. An alternative model is introduced, based on the "identity = data + policies" paradigm, along with an underlying policy management framework.
Details are given on how this model can address the above issues and how the framework can be implemented. Related technologies and work done by HP Labs Bristol are presented and discussed.
4.4 Trust
The trust model of the proposed model is centered on the concept of having one or more trusted third parties, TAAs, mediating identity disclosures. Users can get degrees of assurance that their policies will be satisfied by relying and trusting a TAA (a known party) instead of having to trust the data receivers.
This trust model can be extended by having multiple TAAs, acting in a collaborative way when dealing with policy checking and enforcement. There is no fundamental reason why users should be prevented from running their own TAAs. The usage of multiple TAAs mitigates the risk of having to trust and relying on only one party.
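The "identity = data + policies" idea with a TAA mediating disclosures can be made concrete with a small sketch. All of it is hypothetical illustration (the attribute names, policy fields, and receiver descriptions are mine, not HP Labs'): each identity attribute carries a sticky policy, and the TAA releases the value only to receivers that satisfy it.

```python
# Toy sketch of "identity = data + policies": attributes travel with
# sticky policies, and a TAA (trusted third party) checks the policy
# before any disclosure. Field names are hypothetical.
identity = {
    "email": {"value": "user@example.com",
              "policy": {"purpose": {"billing"}, "min_trust": 2}},
    "age":   {"value": 34,
              "policy": {"purpose": {"billing", "marketing"}, "min_trust": 1}},
}

def taa_disclose(attr, receiver):
    """TAA side: compare the receiver's declared purpose and trust level
    against the attribute's sticky policy; release the value only on a match."""
    entry = identity[attr]
    pol = entry["policy"]
    if receiver["purpose"] in pol["purpose"] and receiver["trust"] >= pol["min_trust"]:
        return entry["value"]
    return None                      # policy not satisfied: nothing disclosed

shop = {"purpose": "marketing", "trust": 1}
print(taa_disclose("age", shop))     # 34
print(taa_disclose("email", shop))   # None: wrong purpose and insufficient trust
```

Because every decision passes through `taa_disclose`, the TAA is also the natural place to log disclosures, which is exactly the monitoring role described in the next section.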
4.5 Monitoring
The TAA plays a key role also in providing raw information that can be processed and used to monitor disclosures of identity information. Monitoring activities can be performed by the TAA to spot anomalous situations and trends and prevent misuses: this can be achieved by analysing and correlating evidence collected during disclosures.
Tools can also be provided to users by the TAA to monitor the disclosures of their personal data. These tools provide simple reports based on information collected by TAAs during disclosures.
5. Discussion
The "identity = data + policies" model is based on the concept of associating policies with identity information to express them explicitly and deal with their enforcement. The policy management framework described in this paper shows how an implementation of this model can be achieved by leveraging technologies and mechanisms already available today (even if they are at different maturity stages).
However, there are aspects and issues that need to be fully investigated in order to draw conclusions about the feasibility of the proposed model. This is part of our ongoing research. A few important aspects are discussed in the remaining part of this section. From a technology perspective, we illustrated the usage of three kinds of technologies: cryptographic mechanisms, TCG/TPM (trusted platforms) and Tagged OS. Their relevance and applicability is summarized in Figure 5
The involved technologies are currently at different maturity stages: whilst TCG/TPM chips are already available on the market, Tagged OS and the solutions underpinning TAAs are still at a research stage. Additional mechanisms at the trusted platform level, such as Microsoft NGSCB [11], are currently under research and development. Prototypes and trials need to be done to fully understand the implications - in terms of integration, usability, flexibility, scalability, etc. - of using such technologies.
Further research needs to be done on policies and how to describe and integrate policy aspects (constraints, obligations, conditions, etc.) in a smooth and simple way, at different levels of abstractions. In particular, the integration of sticky policies with stored data is complex and hard to achieve in a way that performance and flexibility are not compromised. Work is in progress in this space.
Awk--"Blogging Hysteria will follow." Becker sure got that one right! lol!
From Phil Becker's newsletter 01/06/2005
Predictions for 2005...
4. Someone will wake up in 2005 and realize that the U.S. legislated a de facto national ID with the intelligence bill.
The U.S. intelligence bill was passed in a flurry of political heat, allowing little real consideration of its contents. In 2005 it will become clear that it effectively legislated a national ID through its driver's license standardization requirements. Blogging Hysteria will follow.
helpfulbacteria:
I have been thinking about E2100 readers.
I am fairly convinced that Wave will not update their very universal E2100 chip...and inventory must be depleted rather soon...
So, how are we going to supply EMBASSY smart card readers and keyboards to our customers? I would think that some manufacturer will have to come on board and start building those units. Would ARM's 1176 "TrustZone" core be the ideal platform for a slimmed-down E2100-like chip?
In reply to: helpfulbacteria
Date:1/4/2005 10:23:22 AM
Post http://www.investorshub.com/boards/read_msg.asp?message_id=5003935
(Not so OT-ish): Awk...
Again, I would gather that you've already seen this document:
http://www.tecsec.com/CKM/Technologies%20Juxtaposed_Final.pdf
But, this is a little fun: it lays out the COMPLEMENTARITY between PKI and CKM (kinda aimed at Government, no?). But, then it goes on to talk about THEIR idea of a trusted platform.
Now, just add an EMBASSY reader or preferably a Dell TCG-ready PC... and this stew gets a lot tastier, I think.
Let's see we've got Wave listed on the TUMBLEWEED OCSP Responder page. (Wonder why a leader like TUMBLEWEED feels the need to make it clear that they've got EMBASSY reader integration?) And we've got TecSec working with Wave over at West Point. And we've got TecSec and Wave presenting SCADA-related and first-responders stuff.
Best Regards,
c m
helpfulbacteria: Yes, I saw this ...
...but I think there is really great info in this recent PPT presentation.
For one, it appears that indeed the TPM is regarded as an extremely valuable contribution to securing the grid infrastructure...
An absolute MUST READ re: GRID/SCADA
Trusted Computing for the GRID
Managing the grid endpoints
How can we trust the computers are secure?
(integrity of systems, verification of code, configuration management, ...)
http://grid.ncsa.uiuc.edu/ggf12-sec-wkshp/panel4/kuhlman.ppt
Slide #19
Unclever: Somebody has to keep some interesting stuff going... /e
Thanks for the posts awk.
Trusted Computing & Digital Rights
School of Mathematics and Systems Engineering
Växjö University, Sweden
September 2004
Great paragraph about LaGrande
http://www.msi.vxu.se/forskn/exarb/2004/04086.pdf
Wave TC flyer for enterprises and Government
Trusted Computing for Enterprise and Government
http://i.i.com.com/cnwk.1d/html/itp/CNET_WP_1_TC_Overview_final.pdf
TPM and Key Recovery, from Wave
http://i.i.com.com/cnwk.1d/html/itp/CNET_WP3_TC_Man_Recov_final.pdf
Thanks awk. I particularly enjoyed the service example that begins on page 23 (section 9.1). It brings home the complexity of what seems a simple transaction. Nothing quite like an example to get through a sea of acronyms.
regards
Security Architecture for Open Grid Services
http://www.globus.org/ogsa/Security/draft-ggf-ogsa-sec-arch-01.pdf
2b: I agree with your assessment! /e
awk, Dell will not...
...announce until after the holidays (imho) because they want to push out current inventory.
Next step in biometrics
By ROSEANNE GERIN
New security projects need common standards for exchanging data
Initial reports on the government's Registered Traveler program are promising.
"The feedback from the airlines and the traveling public was that it was a program that everyone was happy with and didn't want to see stop," said Larry Zmuda of Unisys Corp., regarding the pilot project that is using biometric identifiers to screen travelers and speed them through security checkpoints at five U.S. airports. Unisys is one of the prime contractors on the project.
But the project also faces a major hurdle if Transportation Security Administration officials decide to make it a national program and expand it to other airports. Currently, participants can use biometric kiosks only at the airports at which they registered. The next step for TSA is to adopt standards to make interoperable systems to allow rapid screening of Registered Traveler participants at all participating airports.
"Interoperability will be a big item, so a traveler entering one location can be transparent. ... Elsewhere around the world, similar programs are being explored," said Tom Grissen, chief executive officer of Daon Inc. of Herndon, Va., which supplies biometric identity management software for the program.
Registered Traveler is just one of a handful of the government's big biometrics projects that must adopt standards for interoperability. Government officials and contractors also are working to establish standards for the U.S. Visitor and Immigrant Status Indicator Technology (U.S. Visit) and Transportation Worker Identification Credential programs, two high-profile projects designed to improve security at the nation's borders and other sites.
Standards allow different biometric systems and devices to share information by establishing common formats, such as fingerprints, for representing and exchanging data. The National Institute of Standards and Technology for many years has been involved in setting biometric standards. The American National Standards Institute and the International Committee for Information Technology Standards also help create standards.
The Department of Homeland Security already has begun setting standards for facial recognition biometrics. In October, the department adopted its first biometric facial recognition standard that is consistent with international standards for applications that use biometrics, such as travel documents. The International Committee for Information Technology Standards, a standards development organization accredited by the American National Standards Institute, created the standard, which DHS will use as technical criteria in designing cameras and software for facial recognition.
"The adaptation of facial recognition standards is a first step in standardizing all types of biometrics, which is essential for the success of Homeland Security programs," said Undersecretary for Border and Transportation Security Asa Hutchinson in a department press release.
Growing government dependence on biometric solutions is fueling significant industry growth. Government spending on biometric technologies is expected to grow from $432 million in 2004 to nearly $1.8 billion in 2008, according to the International Biometric Group, a biometrics industry consulting firm in New York.
The government sector accounts for more than one-third of all biometric spending, which will grow from $1.2 billion this year to more than $4.6 billion in 2008, according to IBG, which measures only biometric hardware and software sales and not revenue from related professional and integration services.
But widespread adoption of biometric solutions depends on the creation and acceptance of biometric standards, to ensure that interoperable systems can identify users at all participating locations. A key challenge will be designing systems that protect privacy and alleviate fears of government abuse. Ultimately, it will have to be done on a worldwide basis to allow the free movement of goods and people.
"We must develop a set of international standards for capturing, analyzing, storing, reading and protecting biometric data to ensure maximum interoperability between systems and maximum privacy for individuals," said DHS Secretary Tom Ridge in remarks at the Asia-Pacific Homeland Security Summit Nov. 15. "The sooner the world community can embrace an international standard for biometrics, the quicker we'll be able to secure our borders."
SECURING AIRPORT TRAVELERS
The Registered Traveler program, which launched in the summer, records and stores passengers' biographical data along with a biometric fingerprint, iris scans or both. Unisys and EDS Corp. are the prime contractors leading the efforts at Minneapolis-St. Paul International Airport, George Bush/ Houston Intercontinental Airport, Los Angeles International Airport, Boston Logan International Airport and Ronald Reagan Washington National Airport.
TSA extended the program through January 2005 to continue to study the program's feasibility and to collect more data before determining whether to introduce the program at other national airports and which biometric identifiers to use, said Darrin Kayser, a TSA spokesman.
TSA and companies involved in Registered Traveler have just started discussing standards and a common architecture for the program to let travelers use the program at all participating airports, Kayser and Grissen said.
Zmuda of Unisys said making Registered Traveler interoperable shouldn't be difficult.
"The thing that needs to happen is to establish one set of standards and rules, which TSA is looking to do, to make the playing field and all the players look alike," he said. "It's the next logical step to further the effort."
SECURING U.S. BORDERS
DHS' U.S. Visit program is another government biometrics project trying to make headway with standards. The program to track foreign visitors traveling on a visa requires most visitors to have two fingers scanned by an inkless device, and a digital photograph taken by immigration officials upon entry to the United States. The scans are then checked against law enforcement databases and other watchlists.
On Jan. 5, U.S. Visit entry procedures started operating at 115 airports and 14 seaports, and DHS began pilot testing biometric exit procedures at one airport and one seaport. In mid-November, DHS started using biometric fingerprint scans and digital photographs at six land entry points in three states. The border security system is scheduled to be implemented by the end of 2004 at the nation's 50 busiest land entry sites.
DHS is not hampered by the lack of national biometric standards, said Kimberly Weissman, a department spokesperson. Of the 13 million visitors that have passed through U.S. Visit, the program has helped law enforcement officials identify and capture more than 330 criminals or individuals with immigration violations, she said.
Weissman also noted that the 9/11 Commission report cited U.S. Visit as the foundation upon which all border screening programs should be consolidated to allow for a fully integrated screening system.
Many foreign countries must also create machine-readable biometric passports that are acceptable for the U.S. Visit program, adding another twist to the interoperability issue. Congress is requiring citizens of 27 countries whose citizens can enter the United States without a visa to obtain passports with a biometric identifier, such as a digital fingerprint, by Oct. 26, 2005.
Most of these countries are in the European Union, but E.U. members will not be ready to issue biometric passports until the end of 2005, E.U. Justice and Home Affairs Commissioner Antonio Vitorino said at a joint news conference with U.S. Attorney General John Ashcroft Oct. 1 at The Hague.
But DHS is addressing the issue by adopting biometrics standards set by the International Organization for Standardization, which will ensure interoperability for data exchange when required and make it lawful to exchange biometrics data, Weissman said.
DHS also is active in developing the Enhanced Information Travel Security initiative that will enable various national and international systems to swap real-time data without the need for centralized storage, she said.
"However, we are not designing our databases for direct exchange of biometric data with other nations," Weissman said. "We are very aware and cognizant of the privacy rights associated with the biometric data and associated information."
SECURING TRANSPORTATION
The program to create a Transportation Worker Identification Credential, also known as the TWIC card, is grappling with standardization issues.
BearingPoint Inc. and its team of subcontractors are developing a prototype common access credential for transportation workers who need physical or logical access to secure areas. The McLean, Va., contractor won the $12 million contract in August.
TSA kicked off its pilot project in November and is testing it in Los Angeles, Philadelphia and Florida. The test phase will last seven months and eventually include up to 200,000 workers from the transportation sector in 34 additional locations in six states.
The program is being implemented in partnership with Florida, which passed legislation to adopt TWIC cards for its state transportation workers. The state's formal partnership with TSA defined requirements for background checks and state-of-the-art identification credentials for truck drivers, dockworkers and others who require unescorted access to secure areas within transportation facilities.
TWIC eliminates the need for workers to have numerous cards and pass through redundant background checks to enter secure areas at multiple facilities.
TWIC is following ANSI standards that are interoperable across vendors, said Conor White, chief technology officer at Daon.
It's still unclear whether other states and parties, such as airports or seaports, will align their worker credential systems with TWIC and adopt the same biometric standards.
The states and other ports likely would have to replace some of their legacy ID-verification systems that provide access control or tracking to adopt TWIC standards. So far, only Florida has taken the lead in this area.
"[They] must provide a secure credential but allow those legacy systems to be replaced," said Mark Heilman, executive vice president of business development at Anteon International Corp., a subcontractor on the TWIC contract. The company is completing site surveys, installation and training for all the systems to be deployed.
TSA said TWIC will be interoperable with any state systems because the program uses multiple technologies.
"The credential at each facility is identical and includes multiple data storage, so that it can be used with various legacy systems," Kayser said.
He added that both Georgia and the New York/New Jersey Port Authority have expressed interest in making their systems compatible and interoperable with TWIC. TSA and DHS also have monthly homeland security liaison calls with state representatives to share information, he said.
"Many states and ports are looking to stay current on what's happening with the TWIC program," said Daon's Grissen. "The intention from a technology perspective is to be very flexible, so different ports could adopt the technology and not have any interoperability issues."
Staff Writer Roseanne Gerin can be reached at rgerin@postnewsweekech.com.
http://www.washingtontechnology.com/news/19_18/cover-stories/25088-1.html
Standardization efforts...
http://www.t10.org/ftp/t10/document.04/04-140r1.pdf
This document presents a strategy for defining an industry standard set of interface commands for a trusted device, which is a component of an overall trusted system. A trusted device provides a horizontal security product embedded in devices whose behavior may be authorized via interaction with a trusted host system. This proposal creates two commands:
TRUSTED COMPUTING OUT and TRUSTED COMPUTING IN. These commands provide for variable-length data transfers. We request two 12-byte CDBs to provide commonality between SCSI and ATAPI implementations.
The proposed SCSI commands provide a 4-byte data transfer length field that expresses the data length as a number of bytes to be transferred. The CDB parameters and data payload shall be defined by the Trusted Computing Group (TCG) in its Storage Systems Working Group. The subsequent actions resulting from these commands will also be defined by TCG.
The intent is to standardize this data content so it is identical across both ATA and SCSI. This proposal refers to the data payload format as “restricted” to indicate that the format shall conform to the TCG’s definition.
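As a rough illustration of the command format the proposal describes, the sketch below packs a 12-byte CDB with a 4-byte, big-endian transfer length. The opcode, protocol field and exact byte layout are invented placeholders: the draft deliberately leaves the payload and parameter definitions to the TCG Storage Systems Working Group.

```python
import struct

# Hypothetical layout for the proposed TRUSTED COMPUTING OUT command.
# The opcode (0xB5) and field positions are placeholders, not from the draft.
def build_trusted_computing_cdb(opcode, protocol_id, transfer_length):
    """Pack a 12-byte CDB with a 4-byte transfer length expressed in bytes."""
    if not 0 <= transfer_length <= 0xFFFFFFFF:
        raise ValueError("transfer length must fit in 4 bytes")
    # opcode (1) + protocol (1) + reserved (4) + length (4, big-endian)
    # + reserved (1) + control (1) = 12 bytes
    return struct.pack(">BB4xIBB", opcode, protocol_id, transfer_length, 0, 0)

cdb = build_trusted_computing_cdb(0xB5, 0x01, 512)
```

The same 12-byte packet could then be carried unchanged over either SCSI or ATAPI, which is the commonality the proposal is after.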
Renesas [Hitachi - awk] Technology Unveils Its Trusted Reader Platform for FINREAD Card Readers at Cartes Show in Paris November 2-4, 2004
http://biz.yahoo.com/bw/041026/265341_1.html
Tuesday October 26, 8:01 am ET
Renesas' Advanced, Open-Standard Security Architecture Will Help Facilitate the Development of Future Financial Transaction Terminals
PARIS--(BUSINESS WIRE)--Oct. 26, 2004 - Renesas Technology Corp., the world's number-one supplier of microcontrollers and a leader in security IC products, today announced the availability of a trusted reader platform for FINREAD card readers. This architecture is based on the Renesas ePOS (electronic point-of-sale) reference platform, a secure, scalable hardware and software reference design board that allows financial transaction terminals to be equipped with the most advanced security.
The new Renesas FINREAD trusted reader platform will be presented publicly for the first time in the Renesas booth (#3B1) at the Cartes show in Paris, France, November 2 to 4, 2004.
The platform is based on an H8S/2215 microcontroller and an AE Series security IC. The platform is packaged in a compact form factor and is supported by middleware meeting the high level of security defined by the FINREAD guidelines for authenticating the device and the applets. This hardware/software combination comprises the best possible solution for developing highly secure PC-connected, USB-powered products.
The new FINREAD trusted reader platform is based on the Renesas' open-standard ePOS reference platform* which supports Visa Smart POS, the GlobalPlatform and EMV (Europay/MasterCard/Visa) standards, and the STIP, FINREAD and JEFF software and middleware.
The Renesas ePOS platform provides a development environment that facilitates the creation and customization of applications that work across a diverse range of payment and identification products, including classic payment (EFTPOS*), terminals, PC-connected payment terminals, PIN pads and embedded readers.
According to William Vanobberghen, FINREAD Coordinator, "FINREAD is pleased that the new secure card reader platform from Renesas Technology will share the spotlight at the Cartes show. This platform helps promote the forthcoming ISO/IEC JTC1/SC17 standard for a secure and interoperable IC card transaction device. The solid, secure and easy-to-implement solution will accelerate the adoption of FINREAD technology onto the global market."
Jim Lee, senior vice president, Product Technology and Standards, Visa International, says, "Visa has been leading initiatives to standardize and streamline the payment industry infrastructure's support of the EMV specifications for smart card technology. The Renesas ePOS platform being shown at Cartes in Paris is well-timed and brings a new level of uniformity in terminal system implementation."
Sami Nassar, director and general manager of the advanced solutions group at Renesas Technology America, says, "Our market-ready, open-standard architecture meets and exceeds the security and performance mandates of next-generation payment terminals. The ePOS platform security engine has already received EAL4+, Visa level3 and MasterCard CAST certifications."
About FINREAD
FINREAD was created in 2001 to address the specifics of e-commerce and PC-connected devices. A consortium of European partners in the payment industry has developed a set of specifications for a card reader to secure transactions on the Internet. The FINREAD specifications have been drawn up and made available for free to the manufacturers through the CEN (European Standardization Committee). www.FINREAD.com
About Visa Smart POS
Visa Smart POS is a cost-effective software solution that dramatically reduces EMV application development time. Visa Smart POS is an important and innovative component of the Visa Smart initiative, Visa's comprehensive chip migration program. The Visa Smart POS EMV Level-2 compliant software module is available royalty-free to acquirers, merchants, and vendors. The software module helps reduce time-to-market for EMV chip acceptance devices by up to 12 months.
About Renesas Technology Corp.
Renesas Technology Corp. designs and manufactures highly integrated semiconductor system solutions for mobile, automotive and PC/AV markets. Established on April 1, 2003 as a joint venture between Hitachi, Ltd. (NYSE:HIT - News; TOKYO:6501 - News) and Mitsubishi Electric Corporation (TOKYO:6503 - News) and headquartered in Tokyo, Japan, Renesas Technology is one of the largest semiconductor companies in the world and the world's leading microcontroller supplier globally. Besides microcontrollers, Renesas Technology offers system-on-chip devices, Smart Card ICs, mixed-signal products, flash memories, SRAMs and more.
http://www.renesas.com
Global Platform, STIP API, and CardSoft VM
The enabler of cross platform interoperability....
http://www.globalplatform.org/documents/presentations/copenhagen/11_Jean-Paul_Billon.pdf
...and the CardSoft JEFF VM virtual machine
http://www.ars2000.com/eAppliance-DS.pdf
Secure Transactional Computing
and Embedded Integrity
An IBM presentation...
http://www.catalyst-conference.com/dss/downloads/pdf/dssd3p602.pdf
PRIME: Trust and Trust Management
The following is an excerpt from a PRIME paper dated October 13, 2004
http://www.prime-project.eu.org/public/prime_products/deliverables/pub_del_D14.2.a_ec_wp14.2_V5_fina...
7.9 Trust and trust management
This section provides an introduction to the concept of trust, trust metrics and hardware mechanisms for increasing trustworthiness. Some of the aspects of trust covered in this section are based on the trust in platforms based on the integrity of the platform, others on trust in the company. The concept of trust as discussed in this section addresses the general notion of trust.
Making use of this more general notion, trust can be sketched as confidence that a party will behave as expected in performing the action it is supposed to perform.
The ideas concerning trust presented below concern a subset of the trust negotiation – in particular input to the trust negotiation phase – and are targeted towards increasing the trust one party has in another by providing properties of the party, properties of the party’s platform or the integrity of the party’s platform and providing more general information supporting the trust negotiation process. The information on a platform provided by another party is in the form of a credential and seamlessly fits into the trust negotiation process as a means for providing information on another party and its platform.
Section 7.9.1 provides some background to trust management and the determination of and aspects of trustworthiness of parties. Section 7.9.2 gives an introduction to trust measurement and trust metrics. Section 7.9.3 introduces a hardware mechanism supporting trust management, the Trusted Platform Module (TPM).
Trust as discussed in this section is in part only assessable by the human user. The subset that is assessable by automated means is assessed in the negotiation phase when confidence in the identity and properties of the other party is established.
7.9.1 Background to trust management
Although trust features strongly under user-side identity management, since trust is (in part at least) a multiparty experience, it is inevitable that any solution to the trust problem will involve both user-side and services-side technology.
It should also be recognized that an entity can be a user or a services provider (i.e., a sender or receiver of PII), or could perform both roles simultaneously. To simplify understanding, the situations are described as asymmetric, i.e. an entity is either one or the other but not both as, for example, in a peer-to-peer scenario. Symmetry is reintroduced in the discussion of the Trust Management component in section 9.8, because the goal is to reduce the complexity of the architecture by reusing components, and because in practice a user could indeed be a receiver of PII.
The concept of trust
Users want to be able to release personal information in the confident belief that it will only be used in the way the user intended. Providing this assurance is the key to demonstrating trustworthiness. For most situations, the trust that users place in an organization is a mixture of technological trust (system trust) and social trust (human trust). In many situations it is possible to manage technical trust by minimising risks using threat/vulnerability models. Social trust – the trust we place in another human – on the other hand, is very much more difficult to understand, measure and control. Except for a handful of niche applications, technology and humans interact to affect outcome.
On the whole, trust is limited to a belief that (say) an organization will fulfill a request. There is usually limited evidence to support this belief other than possibly a contract that is only enforceable in specific circumstances.
One way to understand trust better is to consider the nature of the participants. On the one hand there is the deceitful recipient who, if sufficiently motivated, will be able to circumvent controls (not always technical ones). This is a difficult category to deal with unless we can separate system and human trust.
Another category is the recipient who sets a high standard of business conduct and wishes to demonstrate this in order to provide differentiation from other less scrupulous recipients. This is an interesting category for two reasons:
1) the division between system and social trust is of less concern to the user;
2) this type of recipient probably represents the attitude of most major organizations.
The latter are organizations that have valued brand and reputation, and are keen to ‘show’ users that they can be trusted even if they cannot present indisputable facts that support their claim.
Of course, even the best-intended organizations make unintentional mistakes. These organizations would most likely welcome solutions to help them keep in check and reaffirm their own trust in their systems. For further reading on some facets of trust we refer to [CC03] and [KSG04].
Services-side trustworthiness
Assuming the situation where an organization is basically trustworthy but wishes to provide further evidence to this effect, a user can measure its trustworthiness in the following ways:
• trustworthiness of the services-side system; and
• trustworthiness of the organization.
Trustworthiness of the services-side system: Knowing that an organization has adopted state-of-the-art trust technologies can be an initial sign to the user that the organization intends to be true to their word. Today, state-of-the-art trust technologies mean a TPM (Trusted Platform Module) that provides:
• A reliable third party endorsed stable identity
• Originator non-repudiation achieved through TPM-controlled signatures
These requirements can be achieved by equipping a server with a TPM, endorsed by a trusted third party, and building the functionality to allow
1) remote interrogation of the TPM by the user, and
2) automatic signing of acknowledgements and other information intended to convince the user that their wishes are being fulfilled.
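The two server-side functions above can be sketched as a toy flow. HMAC over a shared secret stands in here for a real TPM-controlled signature, and all names (the key, the PCR values, the message) are invented for illustration; a real deployment would use the TPM's asymmetric keys and a third-party endorsement of the public part.

```python
import hashlib
import hmac

# Stand-in for a TPM-resident key; in reality it never leaves the TPM.
TPM_KEY = b"tpm-resident-key"

def quote_platform(nonce, pcr_values):
    """Answer a user's remote interrogation: bind the platform
    measurements (PCR values) to the user's fresh nonce and 'sign' it."""
    digest = hashlib.sha1(nonce + b"".join(pcr_values)).digest()
    return digest, hmac.new(TPM_KEY, digest, hashlib.sha1).hexdigest()

def sign_ack(message):
    """Automatically sign an acknowledgement that a user's wish
    (e.g. a data-handling obligation) has been fulfilled."""
    return hmac.new(TPM_KEY, message, hashlib.sha1).hexdigest()

# User side: verify that the quote really covers the nonce just sent.
nonce = b"fresh-user-nonce"
pcrs = [b"\x00" * 20, b"\x11" * 20]
digest, sig = quote_platform(nonce, pcrs)
assert hmac.new(TPM_KEY, digest, hashlib.sha1).hexdigest() == sig
```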
In practice, the systems that support services offered by an organization will be much more complex than a simple peer-to-peer arrangement. Whilst these systems may be built on TPM and future trusted platform technologies, techniques for forming an aggregated measure of trust across multiple heterogeneous systems that process personal information still need to be researched.
Trustworthiness of the organization
Trust in an organization is built up over time, based in part on past interactions. Evidence that an organization is willing to commit to an intended action, possibly in the knowledge that not doing so will incur penalties, is a useful sign of good intentions.
Typically, a user would either review or present the terms under which the interaction will take place (i.e. a policy or contract). Once accepted, these terms are binding to some degree.
As required, the user reviews the interaction and compares outcome against the contract, particularly where the terms specify several points in the process where an assessment can be made (c.f. project milestones). This leads us to a process with clearly definable steps:
• Policy/contract comparison between user and organization
• Fulfillment (by an organization)
• Checking (by a user)
• Opinion forming (by a user – essentially retention of evidence to aid trust evaluation during future interactions.)
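The four steps above can be sketched as a small user-side loop. The contract terms, the organization name and the reported outcome here are all hypothetical, chosen only to make the comparison-and-retention idea concrete:

```python
# Agreed terms (policy/contract comparison between user and organization).
contract = {"retain_days": 30, "share_with_third_parties": False}

def check(contract, reported_outcome):
    """Checking (by the user): compare the organization's reported
    outcome against every agreed term of the contract."""
    return all(reported_outcome.get(term) == value
               for term, value in contract.items())

# Opinion forming: retain the verdicts as evidence for future
# trust evaluation of this organization.
history = []
outcome = {"retain_days": 30, "share_with_third_parties": False}  # fulfilment report
history.append(("acme.example", check(contract, outcome)))
```

Assessment points in the contract (c.f. project milestones) would simply mean running `check` more than once per interaction and appending each verdict.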
The proposed approach differs from existing approaches (e.g. P3P) by providing feedback to the user and indeed involves a user/user’s system in the process of ‘active’ comparison and management.
The process can be presented diagrammatically as shown in Figure 9.
Figure 9
Note the requirement on the user side of functionality to compare policies, and check and record status. In addition, user-side functionality that provides policy generation, proactive checking (that is, not relying simply on the organization notifying the user of the status of an interaction), and presentation to the user of an aggregated and meaningful trust assessment is required.
User-side trustworthiness
Whilst a user is concerned about the trustworthiness of the services provider, the user must also be able to trust their own system to hold their personal information securely. Assuming that the user is the only person with legitimate access to the system, trust is based solely on the technical merits of the system. Again, taking the TPM as the state-of-the-art technical security solution, the functionality to be supported by the TPM should include:
• Granting user authorised access to personal information, i.e. identification and authentication of the user.
• Secure storage of personal information and/or the cryptographic key(s) used to control access to personal information.
• Generation of random ‘seeds’.
Additionally, the TPM permits the generation/presentation of pseudonymous identities that may support or supplement credential management schemes like Dresden Identity Management(DRIM) [Dre] and Idemix [CL01].
Many users are likely to find the task of managing trust too difficult because it requires specialist skill and knowledge. Ways of providing help and support to the user (mechanisms, best-practice advice, etc.) will need to be deployed to help users check and preserve their platform’s trustworthiness and avoid making decisions that could compromise their platform.
These are ambitious goals, involving long-term research, but we can start by leveraging the functionalities provided by TPMs and trusted platforms.
Looking further into the future, and the evolution of ambient services and devices, managing trust on the user side goes beyond the relatively straightforward ‘gatekeeper’ role that we see here to that of an ‘agent’. Imagine the situation where a user has a need for a particular service, and instructs their personal system to ‘look’ for the most appropriate services on offer.
Part of this process could involve the automatic release of personal information about the user. How can a user be confident that their personal system is acting in the best way to preserve their privacy?
By concentrating on the specific situation described, i.e. where the organization is essentially trustworthy but needs to be able to demonstrate this publicly, we can provide users with the means to differentiate likely trustworthy from untrustworthy parties to which the user intends to release personal information.
7.9.2 Trust measurement and trust metrics
Background on trust measurement and trust metrics
Being able to say that another party can be trusted to handle personal information with today’s technology is probably unrealistic. Unless we can
1) completely isolate the processing from the operator and
2) rely on the technology and its implementation, we have to place some level of faith in the other party.
Requirement 1) is unrealistic since in practice virtually every application is likely to involve some form of human intervention, including access to the information after the ‘trusted’ processing is complete. Requirement 2) is currently difficult to demonstrate.
Since in practice users are unable to prove ‘before the event’ that a recipient is trustworthy and will uphold their wishes, the next best approach (as in real life) is to establish an alternative means of enforcement. A contract provides a user with an indication that a recipient intends to carry out their wishes and is a means to identify deviation from agreed actions. Of course, the contract is only useful if it is enforceable.
A deceitful recipient will most likely always be able to circumvent controls. However, the concept of a contract is useful for a recipient who has every intention of behaving properly, and wishes to demonstrate so in order to differentiate themselves from other less scrupulous recipients.
To some extent this lessens the enforcement challenge, making it an obligation of the recipient. For the most part these large corporate organizations have a strong brand (which itself can be a basis for trust) and generally intend to behave honourably and fairly. Often the latter is enforced through third-party legislation and codes of conduct. These are the organizations that are willing to demonstrate the openness of their procedures and be held accountable for misconduct.
Technical vs. social trust metrics: There appear to be four main options for a user to manage identity and maintain privacy:
True anonymity: Simply don’t reveal a user’s identity. This is the only truly anonymous option. There are pros and cons: The user cannot be identified but the basis for trust is unclear. Enquiring of / replying to the user is difficult. (Pre-processing requirement)
Pseudonymous ID: Present a user’s identity as a pseudonym. The user is still anonymous to the recipient but it’s likely that an intermediary is providing the anonymizing service that the user must trust. Issues of trust remain as in Option 1 as the receiver must be able to trust the user’s intermediary, and if not, may not trust the user. (Pre-processing requirement)
Credentials: Present a user’s identity as a credential. This option preserves identity and establishes trust through a mutually trusted intermediary (the credential issuer). Again, both the user and the receiver must be willing to trust the intermediary. (Pre-processing requirement)
‘Faith’ in the receiver: Present a user’s true identity but with usage conditions (data handling policies) attached. This option requires the highest level of trust in the recipient, and the user pays the highest price if the trust is misplaced. (Post-processing requirement)
The first three of these options describe actions that a user can take without placing any special requirement on the recipient. Given that users consider they own their identity and want to manage it themselves, these are the obvious options to use.
However, in practice these options may be far less useful to the user. Certain organizations are ‘geared up’ to collect as much information about a customer as possible and are likely to be unwilling to give up this opportunity without good reason. They will simply demand true identities or refuse to provide the service. In time this situation should change as organizations recognise the demand for anonymous/privacy preserving services, but now it is a hard fact that customers often have little choice but to provide whatever information is requested.
Option 4 is a compromise that leaves the user at risk, but it is nevertheless an improvement on simply handing over an identity with no understanding of how it might be managed.
PRIME addresses this by allowing the user to specify how PII is to be handled by a service provider, enforcing the agreed data handling policy, and providing evidence of enforcement back to the user.
This is an end-to-end user and enterprise scenario that brings together user-side and services-side activities within PRIME. The downside of this approach is that it still requires the user to trust the organization.
Understanding trust
As already discussed, trust is a combination of social trust and technical trust, and both aspects influence a user’s overall trust assessment. Another way to look at trust is in terms of three components: technical (as before), history and reputation. (Some may consider history and reputation to be the same, but there is a subtle difference.) History and reputation together form the social assessment, and both are based on past interactions with the intended recipient. A history assessment is based on past interactions that the user has had; a reputation assessment is based on interactions that others have had. Reputation introduces a further complexity: the user also has to judge the trustworthiness (or reliability) of the third party’s assessment. The user must also be aware that the quality of a reputation indicator may vary depending on the provider, and be ready to compensate.
Reputation is clearly strongly influenced by social understanding, but history (as perceived by the user) is measurable as long as the user can articulate the conditions under which past performance has a bearing on future performance. It is this ability of the user to collect and assess evidence that is directly related to past events that provides a means to form an opinion about trustworthiness in the absence of other more definitive trust indicators.
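The three-component view above can be made concrete with a toy scoring function. The equal weighting, the reliability-discounting rule for third-party reputation, and the neutral prior are all illustrative assumptions, not part of any PRIME specification.

```python
def trust_score(technical: float, history: float,
                reputations: list[tuple[float, float]]) -> float:
    """Combine the three trust components discussed above into one score.

    technical   -- assessment of technical trust (e.g. TPM state), in [0, 1]
    history     -- assessment from the user's own past interactions, in [0, 1]
    reputations -- (rating, reliability) pairs from third parties; each rating
                   is discounted by how far the user trusts its source

    Assumptions: equal weighting of the three components, reliability-weighted
    averaging of reputation, and a neutral 0.5 prior when no reputation
    evidence exists.
    """
    if reputations:
        reputation = sum(r * w for r, w in reputations) / sum(w for _, w in reputations)
    else:
        reputation = 0.5  # no reputation evidence: fall back to a neutral prior
    return (technical + history + reputation) / 3.0
```

A user with strong first-hand history might reasonably weight that component more heavily than hearsay reputation; the point is only that the components are separable and can be combined explicitly.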
Concluding example for trust measurement and metrics
Referring again to Figure 9, the idea is that a user agrees to release personal information but with conditions attached.
The conditions form a contract that is agreed between the two parties. Part of the contract will include trust requirements, e.g. presence of a TPM, policy match, past experience.
The recipient provides the user with signed acknowledgements/confirmations, e.g. contract accepted, contract (or an element of the contract) fulfilled. The user automatically uses these ‘notifications’ to check compliance with the policy.
An extension to this theme is where the user can proactively interrogate the recipient.
The role of a user’s platform is to:
1. Associate policy with personal information.
2. Present a simplified indication of the trustworthiness of a recipient (based on TPM state and past/outstanding interactions). In essence, the recipient presents its ‘trust model’, which is then matched against the trust model the user has in mind for the particular personal information.
3. Build up a record of a recipient’s compliance (for future interactions).
4. Query a recipient.
The role of a recipient system is to:
1. Negotiate and acknowledge acceptance of policy (endorsed by TPM and/or TTP).
2. Enforce policy (through, say, an Obligation Manager).
3. Provide TPM trust status information.
4. Accept queries from a user.
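The contract/acknowledgement flow above can be sketched as follows. HMAC stands in for the TPM- or TTP-endorsed signatures the text describes, and the key handling and field names are assumptions made up for the illustration, not PRIME-specified.

```python
import hashlib
import hmac
import json

RECIPIENT_KEY = b"demo-recipient-key"  # in a real system, a TPM-protected signing key

def acknowledge(contract: dict) -> dict:
    """Recipient side: accept a contract and return a signed acknowledgement."""
    payload = json.dumps(contract, sort_keys=True).encode()
    tag = hmac.new(RECIPIENT_KEY, payload, hashlib.sha256).hexdigest()
    return {"contract": contract, "status": "accepted", "tag": tag}

def verify_ack(ack: dict) -> bool:
    """User side: check the acknowledgement before recording it as evidence."""
    payload = json.dumps(ack["contract"], sort_keys=True).encode()
    expected = hmac.new(RECIPIENT_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ack["tag"])

# The user attaches policy and trust requirements to the released data...
contract = {"data": "email-address", "policy": "delete-after-30-days",
            "trust_requirements": ["TPM present", "policy match"]}
# ...the recipient acknowledges, and the user's platform records the evidence.
ack = acknowledge(contract)
compliance_record = [ack] if verify_ack(ack) else []
```

The `compliance_record` plays the role of item 3 in the user-platform list: signed notifications accumulate into a history that feeds future trust assessments.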
PRIME - A European Initiative
http://www.prime-project.eu.org/
Objectives
Develop solutions to empower individuals to control their private sphere and manage their identities;
Trigger pervasive deployment of privacy-enhancing identity management solutions.
Results
Application requirements, including legal, socio-economic and usability issues.
Public integration framework.
Public architecture & specifications.
Application-driven prototypes.
Download PRIME leaflet (Pdf, 1MB)
http://www.prime-project.eu.org/public/prime_products/deliverables/pub_del_D14.2.a_ec_wp14.2_V5_fina...
This document provides the initial version of the architecture for PRIME. The document will support the construction of the first version of the integrated prototype. The architecture has evolved through many meetings, phone discussions and email exchanges of the members of the ‘architecture group’ for PRIME. The next version of the architecture will reflect the first integrated prototype.
The Alleged Threat of Trusted Computing to the German Insurance Industry
The German assertions:
http://www.german-secure.de/index.php?option=articles&task=viewarticle&artid=73&Itemid=3
TCG reply:
https://www.trustedcomputinggroup.org/downloads/whitepapers/GDV_Clarification_from_TCG_v8_English.pdf
Thanks SKS for confirming this...
From unclevername's Q3/04 transcript:
http://www.unclever.com/wavx/WAVX3Q04.htm
"...Let me wrap up by saying that Trusted Computing is here to stay. We’ve had a very strong deployment year this year, a number of other brands have entered the market. We clearly see sustained growth into next year of the volume of units shipping in the marketplace. We’d love to see it double from what it’s been this year. It probably could even be stronger than that. Clearly the market is driving towards ’06 with Microsoft’s requirements for Trusted Platforms, that this has to drive towards the majority of all machines in the marketplace. So we’re on the stepping-stone to get there. And ultimately that’s 100 to 120 million units a year shipping. So next year should achieve half of that, or along the lines to that..."
Trusted Computing: From Theory to Practice in the Real World
http://www.t13.org/docs2004/e04142r0-Trusted_Computing_Theory_to_Practice.pdf
24601: ...by awk /e
Wave bluelighted by you, awk, or by National? *
National Semiconductor Enhances Personal Computers With Its SafeKeeper Trusted I/O Device
IBM First PC Manufacturer to Equip Its Desktop PCs With New Security Technology From National Semiconductor
9/16/2004 9:00:00 AM
SANTA CLARA, Calif., Sep 16, 2004 /PRNewswire-FirstCall via COMTEX/ -- National Semiconductor (NSM) today introduced two SafeKeeper(TM) Trusted Input/Output (I/O) devices, new hardware products designed to embed security into desktop and notebook computer motherboards. These devices allow PC manufacturers to protect their customers' computer systems from hackers and viruses.
IBM is the first manufacturer to equip selected models of its desktop computers with National Semiconductor's SafeKeeper(TM) Trusted I/O devices. "IBM has led the industry in developing secure, manageable systems since pioneering embedded PC security in 1999," said Clain Anderson, program director of wireless and security solutions, IBM Personal Computing Division. "Security, encryption and password management are key components of IBM ThinkVantage Technologies, which simplify the PC user experience and reduce management costs for organizations of all sizes. Using National Semiconductor's Trusted I/O chip for our newly launched desktop models helps make IBM ThinkCentre models featuring the IBM Embedded Security Subsystem the most secure industry-standard desktop PCs you can buy."
Unlike other security hardware, National's Trusted I/O devices integrate a Trusted Platform Module (TPM), Super I/O and embedded firmware to implement industry-standard Trusted Computing Group security functions. TPMs are microcontrollers that securely store passwords, digital certificates and encryption keys for PCs and other systems. These devices, which comply with Trusted Computing Group (TCG) specifications, protect computer software, such as BIOS, operating systems and applications, from unauthorized or malicious attacks. IBM has used TPMs since 1999.
Why Offer Computer Security in Hardware?
In an era of increased national security concerns and weekly reports of malicious attacks on PC systems, companies and consumers rely primarily on software programs to protect corporate and personal information. Unfortunately, these software-based security solutions are still vulnerable to attacks. In contrast, National's Trusted I/O devices integrate the TPM into the existing PC architecture (Super I/O), storing the computer's identity in silicon and making it virtually impossible for outsiders to locate key information.
Hardware solutions provide a stronger foundation for a secure computing infrastructure than stand-alone software systems. This infrastructure provides protected storage of cryptographic or sensitive data, authenticates a host computing device by verifying its identity to other computing devices, and supplies metrics that provide a reliable and trusted network environment.
Key Technology Features and Benefits
National's SafeKeeper family includes two parts, the PC8374T Desktop and PC8392T Notebook Trusted I/O devices, which are based on National's embedded 16-bit CompactRISC(R) core technology. Both reside on the low-pin-count (LPC) bus, an ideal place for integration because it sits at the intersection of input devices to the PC.
Since these new Trusted I/O devices are pin- and software-compatible with National's current Super I/O products, system engineers easily can create a dual-system design that can accept either part. This gives manufacturers flexibility to design "TPM-ready" systems without designing in an additional empty socket.
Industry Standards and Partnerships
National developed its Trusted I/O devices to meet the Trusted Computing Group's TPM 1.1b specification. TCG developed these specifications with industry-leading system, silicon and software providers to create standard interfaces and interoperability between hardware and software layers. These industry standard interfaces allow National to partner with security software developers such as IBM and Wave Systems Corp. (Nasdaq: WAVX) to offer customers multiple software solutions that work in conjunction with National's integrated hardware.
Pricing and Availability
National's Desktop PC8374T Trusted I/O device is available now in a PQFP-128 package and is priced at $5 each in 1,000-unit volumes. The Notebook PC8392T Trusted I/O device will be available in the fourth quarter of 2004 and will be priced at $7 each in 1,000-unit volumes. All packages are available lead-free. More information about the products is available at
http://www.national.com/appinfo/advancedio/
To view a high-resolution downloadable photo of the Trusted I/O devices, visit National's photo gallery at http://www.national.com/company/pressroom/gallery/aio.html .
Understanding Desktop and Notebook Systems
National Semiconductor offers a diverse product portfolio to surround current and next-generation CPUs, chipsets and memory. National provides all the analog technology a PC needs for excellent audio, display, video and power management performance. In addition, its mixed-signal technology delivers security, system management and networking capabilities. Using this system-level approach, National is partnering with customers to re-define system partitions that create unique features for their products.
About National Semiconductor
National Semiconductor, the industry's premier analog company, creates high performance analog devices and subsystems. National's leading-edge products include power management circuits, display drivers, audio and operational amplifiers, communication interface products and data conversion solutions. National's key markets include wireless handsets, displays, PCs and laptops. The company's analog products are also optimized for numerous applications in a variety of electronics markets, including medical, automotive, industrial, and test and measurement. Headquartered in Santa Clara, California, National reported sales of $1.98 billion for fiscal 2004, which ended May 30, 2004. Additional company and product information is available at www.national.com.
NOTE: National Semiconductor is a registered trademark and SafeKeeper is a trademark of National Semiconductor Corporation. All other brands or product names are trademarks or registered trademarks of their respective holders.
SOURCE National Semiconductor Corporation
media, Gayle Bullock of National Semiconductor, +1-408-721-2033, or
gayle.bullock@nsc.com; or Reader Information: Design Support Group, 800-272-9959
http://www.national.com
Slow Attestations
Seth Schoen recently proposed an interesting variant on the ever-controversial Remote Attestation feature of Trusted Computing.
Seth proposes using a novel (to me anyway) cryptographic construct called a "hard-to-verify signature". This works like a regular, public-key based cryptographic signature, issued by a private key and verifiable by anyone who possesses the corresponding public key. The difference is that the verification operation is slow, ideally with the time necessary to do a verification being under the control of the signer.
Seth's idea for how to implement this is straightforward; he proposes to include a random salt value in the signature calculation, where the random value is not supplied to the verifier. The only way to verify the signature is to try one random salt value after another. The concept is similar to key stretching, which also takes a relatively fast calculation and tries to make it slow.
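The salt-search idea can be sketched in a few lines. A real scheme would compute a public-key signature over (message, salt) so anyone with the public key could run the search; a keyed hash is used here purely to show how withholding the salt forces the verifier into a linear search whose cost the signer controls. All names and parameters are illustrative.

```python
import hashlib
import secrets

SALT_SPACE = 10_000       # signer-chosen: larger space -> slower verification
KEY = b"demo-signing-key"  # stand-in for a real private/public key pair

def sign(message: bytes) -> str:
    """Sign with a random salt that is NOT revealed to the verifier."""
    salt = secrets.randbelow(SALT_SPACE)
    return hashlib.sha256(KEY + message + salt.to_bytes(4, "big")).hexdigest()

def verify(message: bytes, sig: str) -> bool:
    """The verifier must try every possible salt -- this is the 'slow' part."""
    return any(
        hashlib.sha256(KEY + message + s.to_bytes(4, "big")).hexdigest() == sig
        for s in range(SALT_SPACE)
    )
```

Signing stays cheap (one hash) while verification costs up to `SALT_SPACE` hashes, which is the asymmetry Seth wants: painless for a verifier handling a few attestations, burdensome for one handling millions.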
However it is the application which Seth proposes which is most interesting and relevant here. He suggests that this kind of hard-to-verify signature could be used for remote attestation. By using this kind of signature, verifiers who have to verify many attestations, such as corporations who are requiring their customers to attest to them, would be at a disadvantage. Other applications, particularly including some of the more acceptable ones I cited in my earlier article on Trusted Computing, involve each verifier only handling a few attestations, and would not be hampered much by hard-to-verify signatures.
Here's my take. First, I am all for new technologies, and if people find good uses for these kinds of "slow attestations", so much the better. But it doesn't look to me like this idea will accomplish what Seth wants. He fails to address the fact that the exchanges he is trying to prevent are fully voluntary for all parties involved. When someone attests to Sony in order to get permission to download a piece of music, in some hypothetical TC based DRM system of the future, he is attesting voluntarily. If he didn't want to make the attestation, he always has the option of not requesting the download.
In this situation, no one, neither the seller nor the buyer, has an incentive to use a slow attestation. That would only gum up the works and make the buyer wait longer before the download is approved. Why would he want that? It doesn't make sense.
Seth, whether he realizes it or not, is acting in a role analogous to a drug warrior, trying to forbid consensual transactions among others because of the harmful side effects he sees for society. Calling for hard-to-verify signatures to be used for attestation is like calling for people to use a strain of marijuana that doesn't get you high. Such proposals ignore the motivations of the parties involved.
Ultimately the best solution is to let people use technology in the ways which meet their needs. And if that means that sellers of information goods choose to make their use contingent on restrictions enforced by Trusted Computing and other technologies, so be it. No one is being coerced or compelled to engage in any such transactions. Everyone is free to create their own content and let it go under any conditions they want, as in Lessig's Creative Commons license.
The only way Seth's proposal for slow attestations could actually be effective in the uses he envisions is if it were made mandatory. Only if people were forbidden to use easy attestations and required to use slow ones for commercial transactions could he successfully burden those transactions to the point that the technology might not be used. (Seth's earlier proposal along these lines, Owner Override, suffers from the same problem.)
I still can't understand why I seem to stand alone on this issue. Every day I read complaints about the INDUCE act and the broadcast flag and other proposals that attempt to limit what technologies people can use. Everyone on the net seems to understand intuitively how counterproductive such efforts will be. But when it comes to a DRM technology which might have a slightly greater degree of effectiveness than what we have seen in the past, people have no problem with suggestions that would require legislating technological restrictions.
All I ask for is consistency. If you favor a big-brother government intervention regime, where technologies must pass a social-benefit litmus test, fine. Make your case and I'll listen. But if you oppose efforts to legislate technology and support the freedom to innovate and experiment, then join me and extend that support to Trusted Computing and related technologies.
http://invisiblog.com/1c801df4aee49232/
NIST: National Institute of Standards and Technology
Common Criteria Evaluation and Validation Scheme Validation Report
http://niap.nist.gov/cc-scheme/pp/PP_VID3009-VR.pdf
Trusted Computing Group (TCG) Personal Computer (PC) Specific Trusted Building Block (TBB)
Protection Profile And TCG PC Specific TBB With Maintenance Protection Profile
Report Number: CCEVS-VR-04-0070
Dated: 18 August 2004
Version: 1.0
http://niap.nist.gov/cc-scheme/pp/PP_VID3009-CI.pdf
Interesting Juniper Networks PR…
http://www.juniper.net/company/presscenter/pr/2004/pr-040830.html
Juniper Networks Delivers Endpoint Defense Initiative to Enhance Trust and Compliance on Leading SSL VPN Solution
Open APIs Enable Integrated Provisioning and Management of Leading Endpoint Security Technologies
SUNNYVALE, Calif., Aug. 30, 2004 - Juniper Networks, Inc. (Nasdaq: JNPR) today announced the Endpoint Defense Initiative to enable broader and deeper integration of the industry-leading NetScreen Secure Access SSL VPN appliances with best-of-breed endpoint security products. Under this Endpoint Defense Initiative, the Juniper Networks SSL VPN appliances leverage endpoint security agents to enable integrated provisioning, auditing, policy definition, and central management of both Juniper Networks and third-party endpoint agents through the SSL VPN appliance. Using Juniper Networks' SSL VPN enhanced native functionality and additional open application programming interfaces (APIs), Endpoint Defense Initiative solutions establish the trustworthiness of client hosts at VPN endpoints, the critical portion of the network that needs additional protection against malicious software and policy non-compliance.
The Endpoint Defense Initiative tightly integrates Juniper Networks' SSL VPN with leading endpoint security Juniper Global Alliances Program partners, including InfoExpress, Inc., McAfee, Inc., Sygate Technologies, Symantec Corp., Trend Micro, Inc., and WholeSecurity, Inc. "While all SSL companies should partner with endpoint security companies, there is value to increasing embedded and default security measures," said John Girard, vice president, Gartner, Inc. "Today's leading remote access solutions must offer robust endpoint security technologies with simplified management."
Available with both the NetScreen Secure Access and Remote Access families of SSL VPN products, the Endpoint Defense Initiative encompasses enhancements to Juniper Networks' native host check agent and policy-based enforcement, as well as extensions to Juniper Networks' host check APIs, to enable seamless partner integration. Native Juniper Networks SSL VPN host check functionality combined with personal firewall, anti-virus solutions, emerging malware detection agents, and virtual environments empowers customers to deploy endpoint security solutions that fit their business needs.
"Having worked with Juniper Networks since 2002 to provide protection and compliance with our Sygate Security Agent, we see a great opportunity integrating the latest endpoint security functionality of the NetScreen Secure Access SSL VPN with Sygate's new On-Demand 'virtual agent' solution," said Bill Scull, senior vice president of marketing, Sygate. "The combined approach of Sygate's Continuous Compliance™ with Juniper's layered security delivers an unmatched secure access offering that meets the real-world challenges of information and network protection."
The Endpoint Defense Initiative solutions address the full range of risks that endpoints can pose to the enterprise network by supporting both unmanaged and managed PCs accessing resources from trusted or untrusted networks. The solutions allow enterprises to conduct deep and broad security assessments before provisioning a VPN or extranet connection, defending endpoints through:
· native compliance checks and policy enforcement;
· policies that leverage existing endpoint security products; and
· policies that augment endpoint security with downloadable third-party agents.
David Flynn, vice president of products, security products group at Juniper Networks, said, "We're committed to providing our customers with best-of-breed combinations between endpoint security products and our market leading SSL VPNs. The Juniper Networks Endpoint Defense Initiative represents the next phase in the evolution of our host check API, going beyond client-side checks to enable deeper partner integration and enable enterprises to assess, quarantine and remediate non-compliant clients. The Endpoint Defense Initiative solutions allow enterprises to provision access based on user identity and the trustworthiness of the client host. This composite of trust is a critical defense against the latest forms of malware."
Customers can now deploy the Juniper Networks NetScreen Secure Access SSL VPNs and endpoint security agents from a single appliance and manage both components from the Juniper Networks NetScreen Secure Access Central Manager and Administrator Console. Compliance with the Endpoint Defense Initiative APIs ensures simple, secure delivery of endpoint security solutions to both managed and unmanaged PCs.
Availability
The Juniper Networks SSL VPN supports Endpoint Defense Initiative solutions today. For more information on where to purchase, please visit http://www.juniper.net/products/howtobuy.html.
About the Juniper Global Alliances Program
Juniper Networks transforms the business of networking by creating competitive advantage for our customers with the most sophisticated networking and security solutions in the industry. The J-Partner Alliances offers five alliance categories, each one focused on a particular aspect in delivering next-generation networking and security solutions, providing organizations with integrated and interoperable best-in-breed solutions. The five categories are: Infrastructure, Security, System Integrator, OSS and Network Management, and Content and Applications. For more information on joining the J-Partner Alliances, please visit http://www.juniper.net/partners/.
About Juniper Networks, Inc.
Juniper Networks transforms the business of networking by creating competitive advantage for our customers with superior networking and security solutions. Juniper Networks is dedicated to customers who derive strategic value from their networks, including global network operators, enterprises, government agencies and research and educational institutions. Juniper Networks' portfolio of networking and security solutions supports the complex scale, security and performance requirements of the world's most demanding mission critical networks. Additional information can be found at www.juniper.net.
greg--Oh, come on!!! Please EXPLAIN how questioning a person's objectivity and/or neutrality constitutes an "attack"?
Pls. stick to debating the discussion points. When you resort to attacks, you weaken your position.
greg---lol! Oh, no doubt it does "parallel the admittedly anecdotal comments I have received....."
Am I questioning your neutrality and objectivity on this topic? You betcha!
Deride the article as you see fit. It does parallel the admittedly anecdotal comments I have received from IT managers and network admins. I know.
Greg--The self-administered "knowledge" controls on that anonymous questionnaire are hilarious! Those controls wouldn't even pass muster on a High School Project!
1. "Network architects who know a lot about or consider themselves experts on TCG."
2. "Network architects who know little or nothing about TCG."
Long article - it continues for 3 pages:
http://www.developerpipeline.com/howto/showArticle.jhtml?articleId=22103886&pgno=1
greg: I don't think I made my point well enough. Why would half of those who know nothing of the TCG opt for TPMs? Why would 3% insist on them?
I think there must be some garbling of data in those charts.
The charts themselves don't indicate that the people polled received a pitch about TPMs. Can you post a link to the article or an extract from the article that provides that context?