From another site:
Msg 2477 of 2480 at 10/21/2015, 5:45pm by Bibiranch
AWK
It is with deep sorrow that I have to tell you, my dear friend Andy aka AWK passed away this morning.
He believed in WAVE's future success until his last minute; we just talked about it yesterday!
May you rest in peace my Friend.
Windows XP falls below 25% market share while Windows 8.1 loses share for the first time
http://thenextweb.com/microsoft/2014/08/01/windows-xp-falls-25-market-share-windows-8-1-loses-share-first-time/
Despite support for Windows XP finally ending three months ago, the ancient OS has only now fallen below the 25 percent market share mark. To add to the bad news for Microsoft, after only nine full months of availability, its latest operating system version, Windows 8.1, has lost share for the first time.
The latest market share data from Net Applications shows that Windows 8 and Windows 8.1 lost a combined 0.06 percentage points (from 12.54 percent to 12.48 percent) between June and July. More specifically, Windows 8 slipped 0.01 percentage points (from 5.93 percent to 5.92 percent), while Windows 8.1 dipped 0.05 percentage points (from 6.61 percent to 6.56 percent).
There is no surprise that Windows 8, which saw its biggest gain in August at 2.01 percentage points and its biggest loss in November at 0.87 percentage points, continues to slip. The fact that Windows 8.1 has managed to lose share, however, may raise eyebrows. While it’s possible the dip is just a blip, it doesn’t bode well for Microsoft, especially given that the upgrade path from Windows 8 to Windows 8.1 is merely a free download away.
Meanwhile in July, Windows 7 managed to grab an additional 0.67 percentage points (from 50.55 percent to 51.22 percent). Windows 8 and Windows 8.1 usually do better combined than Windows 7 does, but some months the opposite happens, and the last three months haven’t been good ones for the new releases. Microsoft will likely one day struggle to woo users off Windows 7, just like it is currently trying to do with the headache that is Windows XP.
Speaking of earlier versions, Windows Vista managed to gain 0.10 percentage points (from 2.95 percent to 3.05 percent). Windows XP dropped 0.49 percentage points (from 25.31 percent to 24.82 percent). While it’s great to finally see it fall under the 25 percent mark, the drop is nowhere near as large as it should be.
In 2013, Windows lost share every month except for March, July, and November. So far in 2014, Windows has only slipped in January and April; it gained another 0.15 percentage points (from 91.53 percent to 91.68 percent) in July. OS X fell 0.09 percentage points (to 6.64 percent), while Linux slipped 0.06 percentage points (to 1.68 percent).
Net Applications uses data captured from 160 million unique visitors each month by monitoring some 40,000 websites for its clients. StatCounter is another popular service for watching market share moves; the company looks at 15 billion page views. To us, it makes more sense to keep track of users than of page views, but if you prefer the latter, the corresponding data is available here (mobile and desktop operating systems are combined).
brant_point: Thanks...
... and I like this quiet board... can put on record and file subjects that I think are interesting for the future... without having to argue with anybody... so much subjective information and opinions posted on the other boards...
Trusted Computing is here to stay... initial developments concluded... TC will take (is taking) off no matter who leads and spearheads the efforts from now on...
ViSCa - Virtualization of Smart Cards
https://itea3.org/project/result/download/6724/12015-ViSCa-ViSCa_profile_march-15.pdf
"... Theoretically, any device which can provide the three key properties of Smart Cards (non-exportability, isolated cryptography, and anti-hammering) can be commissioned as a VSC, though the virtual smart card platform is currently limited to the use of the Trusted Platform Module (TPM) chip contained in most modern PCs. Smart Cards as a Service (SCaaS), as proposed by ViSCa, will be carried out in the cloud, giving the user the flexibility to access the Virtual Machine that substitutes the traditional smart cards from different host devices and allows multiple user sessions ..."
Thanks for posting, awk.
I really like that it doesn't mention "Trusted Computing" even once. It's about solutions and value, not technology.
New ERAS Brochure 2014
http://www.ciosummits.com/DATA_SHEET_ERAS_7_2014.pdf
Looks like you have this place to yourself!!
Nice to see you posting.
Missed you at ASM.
Securing the Client for Cloud Computing SecureView ™
http://www.cloudcomputingevent.net/wp-content/uploads/2014/05/Ortman-TTC_Cloud_sp14.pdf
The endpoint is one of the weakest security links of the Cloud
TPM and Virtual SmartCard (VSC) coming to a smartphone near you soon!
Enterprise finally embraces TPM-based security
http://www.computerweekly.com/news/2240225813/Analyis-Enterprise-finally-embraces-TPM-based-security
Wednesday 30 July 2014
By Warwick Ashford
Enterprises are finally embracing security systems based on trusted platform module (TPM) chips built into computing devices, but why has it taken so long?
Since 2006, many computing devices have included TPM chips, but enterprises have been slow to embrace the technology in their information security strategies. However, in 2012 the Trusted Computing Group (TCG), which published the TPM specification, claimed the technology had reached tipping point.
Steven Sprague, a founding member of the TCG, told Computer Weekly that the claim was backed up by the number of PCs with TPM chips crossing the 600 million mark. He predicted further expansion of TPM use in Windows 8 would also drive the first mainstream adoption of TPM and a much broader spectrum of use. This prediction has proven to be correct, according to Bill Solms, who succeeded Sprague as chief executive of Wave Systems in October 2013.
“The TPM’s time has come,” Solms told Computer Weekly, driven by the fact that individuals and companies are now far more aware of the need to defend against cyber threats and that mature TPM-based technologies are available to help address that need. “There is a much greater awareness and understanding at a much broader level of cyber threats among business professionals and the general public than there was just two years ago,” he said.
In that time, Solms said cyber threats have gone from being an IT security issue to a business issue with high-profile data breaches in recent months contributing to an “acute awareness” in many organisations. “This has put cyber security on the agenda of the board of directors who want to know what their information security teams are doing to ensure they are not the victim of the next breach,” he said.
Solms admits this has boosted interest in TPM-based systems, but said companies are much more interested in what they can do in terms of securing the enterprise, rather than underlying technology. This in turn has prompted a change in the go-to-market strategy at Wave Systems. Rather than trying to educate customers about TPMs, the company is focusing on solving specific security problems.
“Based on my experience at Microsoft and Oracle, it is vital to ensure you understand the customers’ needs and present the combination of products and services that solves that problem. Adoption of TPM systems is being driven by use cases such as TPM-based virtual smart cards that can protect companies from attackers using stolen credentials from accessing their systems,” he said.
Because TPM-based systems combine user credentials with the device ID, user credentials will not work if they are being used on an unknown computing device. “Stolen credentials are useless to attackers because they do not have access to the device or devices that have been associated with the credentials,” said Solms.
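The device-binding idea Solms describes can be sketched in a few lines. This is an illustrative model only, not Wave's actual protocol: a password-derived credential is bound to a device-held key (standing in for a TPM-protected key that never leaves the hardware), so a stolen password fails on any other device. All names and parameters here are invented.

```python
import hashlib
import hmac
import os

def enroll(server_db, user, password, device_key):
    # The device key models a TPM-resident key; the server stores only a
    # binding of the user's credential to that specific device.
    salt = os.urandom(16)
    cred = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    binding = hmac.new(device_key, cred, "sha256").digest()
    server_db[user] = (salt, binding)

def authenticate(server_db, user, password, device_key):
    salt, binding = server_db[user]
    cred = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    attempt = hmac.new(device_key, cred, "sha256").digest()
    return hmac.compare_digest(attempt, binding)

db = {}
tpm_key = os.urandom(32)  # stands in for the TPM-protected key
enroll(db, "alice", "s3cret", tpm_key)
print(authenticate(db, "alice", "s3cret", tpm_key))         # True
print(authenticate(db, "alice", "s3cret", os.urandom(32)))  # stolen password, wrong device: False
```

The point of the sketch is the last line: the correct password alone is not enough, because the server's check also depends on the key held in the device's hardware.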
“Virtual smart cards are there to provide strong authentication which means it is extremely difficult for attackers to impersonate legitimate users even if attackers are inside corporate systems,” he said. Wave Systems, which considers itself on the cutting edge of TPM management systems, invested early on in its TPM-based virtual smart card system that works with Windows 7, 8 and 8.1.
“By using the TPM chip in computing devices, virtual smartcards offer the same additional security as physical tokens but at a 50% to 75% lower cost because they cannot be stolen or lost,” said Solms. Much of the savings come from the fact that most large companies expect to replace about a third of their physical smartcards or tokens every year.
“Virtual smartcards also work with applications and access controls that have been set up to work with physical smartcards, therefore no re-engineering is required,” he said. The benefit of using TPM-based smartcards is that, once authenticated to the TPM in the computing device, users can access all applications and systems using biometric systems like fingerprint scanners.
What if employees use more than one device or cannot use their main device? Although the authentication relies on the TPM of a specific device, the Wave Systems virtual smartcard allows users to associate more than one device with their identity. In office situations where a single device may be used by several employees, the system also allows multiple identities to be associated with a single device. “And if a user’s device fails or is lost or stolen, setting up access from a new device can be done quickly and easily by system administrators,” said Solms.
TPM-based technology is well established, he said, and gaining traction due to a greater desire to protect IT endpoints, even though companies are not necessarily aware of the underlying technology. But at the same time, Solms said awareness of TPM-based systems is growing because of the greater TPM support provided by Microsoft and other suppliers. “For all these reasons the moment has come for TPM-based technologies, and I believe they are now perfectly positioned to make a big jump in adoption,” he said.
Given the growing demand for security at all levels including the end point, the widening deployment of TPM-enabled devices, increased support in Windows 8 and 8.1, and the fact that TPM is a security requirement for new kit for the US government and Department of Defense, it could be argued that TPM is coming of age and will now develop into maturity after being a nascent technology for so long.
Wave Systems Introduces Virtual Smart Card 2.0
July 28, 2014
By Clayton Hamshar
http://www.mobilecommerceinsider.com/topics/mobilecommerceinsider/articles/384909-wave-systems-introduces-virtual-smart-card-20.htm
In an era of hackers becoming more common and skillful, breakthroughs in security technology are essential. Wave Systems has brought the world the latest of these advancements, a virtual smart card that replaces traditional smart card technology.
The use of a Wave Virtual Smart Card 2.0 brings all the security benefits of a smart card without the need for a physical object, which can be lost or stolen. From a centralized management server, Wave provisions virtual smart cards based on a Trusted Platform Module (TPM) security chip to a user’s device, which stores a unique device identity that is associated with all of that user’s devices. When a user enters their credentials, they must be associated with the TPM identity in order to gain access. In other words, a hacker would have to know the user’s credentials and use their device with the right TPM identity in order to be successful.
With this solution, a company’s IT department can remotely create and delete virtual smart cards, provide helpdesk-assisted recovery, configure PIN and card policies, view the status of virtual smart cards and enrolled certificates and generate reports for compliance. They are supported on laptops, desktops and tablets with TPM 1.2 or TPM 2.0.
The new development greatly enhances security as there is nothing physical that a hacker can steal, except the device itself, which, although possible, is significantly less likely, and the hacker would still need the user's credentials. In addition, a device is going to be missed much faster, and is much less likely to be lost. The solution all but eliminates hacker attacks from remote locations and makes the job of a physical intruder much more difficult.
Virtual smart card technology is also much cheaper. Many companies report that as many as 30% of physical smart cards must be replaced each year, and Wave says the entire solution will cost as little as 50% of a physical-card system, in addition to saving those replacement costs.
The Wave Virtual Smart Card 2.0 is available on Windows 7, Windows 8 and Windows 8.1 after a launch on July 22, 2014.
TPM delivers a hardware root of trust for IT security
Mon, 2012-05-21 05:13 PM
By: Brian Berger
http://www.gsnmagazine.com/article/26405/tpm_delivers_hardware_root_trust_it_security?page=0,1&c=cyber_security
Today, with increasing electronic communications and transactions, trust in the hardware used for these purposes has never been more important. To establish and ensure trust, the U.S. and other governments around the world have taken advantage of the trusted hardware and process for establishing trust that leading technology companies have developed through a not-for-profit organization, called the Trusted Computing Group (TCG).
TCG’s hardware-based root of trust process relies on open standards -- not proprietary processes. It starts with a Trusted Platform Module or TPM. Typically a secure cryptographic integrated circuit inside an enterprise-grade computer or server, the TPM is an integral part of these units and has been installed in over half a billion end products.
The hardware-based root of trust is a significant improvement over software-only protection schemes, since software is vulnerable to the same attacks from the malware that it attempts to thwart. In contrast, the more robust hardware-based TPM approach can manage user authentication, network access, data protection and more. The root of trust has a minimum set of functions to establish the trustworthiness of the host platform. Attestation or vouching for the accuracy of information, as well as authentication, or proof of identity, are among the tasks enabled by the root of trust established by the TPM.
With the TPM, users can set passwords and store digital credentials, including passwords in a hardware-based vault. The TPM can manage keys and can be used in conjunction with self-encrypting drives to restrict access to sensitive data.
The TPM has progressed from its first level over 10 years ago to the TPM 1.2 version today. The TPM and its associated specifications were designed to provide a high level of security to Commercial Off-The-Shelf (COTS) and other products used by government agencies.
As part of its High Assurance Platform (HAP) Program, the National Security Agency (NSA) uses the TPM in a virtualized approach to run multiple secure environments. Today, almost all computers acquired by the Department of Defense (DoD) are required to include a TPM.
This advanced level of trust and security has prompted the National Security Agency (NSA) to sponsor two Trusted Computing Conferences and Expositions. The most recent conference was held September 20-22, 2011 in Orlando, FL. In addition to demonstrating current successes from the use of the TPM in national security programs, presenters discussed the necessity to take these efforts even further.
Taking advantage of a hardware root of trust
In its as-delivered condition, the TPM in computers, servers and other products is in a ready state to be activated. For government, business entities or individuals to obtain the improved security that the TPM offers, it simply requires security policy processes to be followed. This process is usually described in the operation manuals for the equipment, and is easy for trained information technology personnel to implement.
Once activated, the TPM provides increased security through linkage to other TCG specifications that have been developed for networks, such as the Trusted Network Connect (TNC) and self-encrypting drives (SEDs).
TNC provides trusted network access for fixed and remote mobile devices used enterprise-wide, so authorized users can safely interact with network systems. The National Institute of Standards and Technology (NIST) and the Trusted Computing Group (TCG) worked together to integrate Security Content Automation Protocol (SCAP) developed by NIST and TNC standards developed by TCG. The combination provides a powerful automated compliance and network access and enforcement tool set. The use of SCAP’s ability to manage the security integration of devices, including desktop PCs, servers, laptops and more, with TNC’s complementary set of network capabilities provides users a level of security that was very difficult, expensive or impossible to deliver previously.
TPM used with the newest self-encrypting drives (SEDs) can take the encryption security to a higher level. While a TPM is not required for users to benefit from the automatic encryption that an SED provides, the TPM can prevent unauthorized access to the network or computer systems. Microsoft has step-by-step instructions for enabling and using the BitLocker disk encryption leveraging the TPM included in Windows Vista and Windows 7.
Extending TPM security
With TPM-based security readily available to protect computers and servers, the transition is already well underway to use mobile devices including smart phones to access the restricted information in government and other networks. TCG’s Mobile Trusted Module (MTM) is a secure element and specification developed for use in mobile and embedded devices. The market requirements for these wireless devices dictate a reduced feature set from the traditional PC TPM developed for a wired computing environment, but can work cooperatively with TPMs in other devices for complete system security. The effort to develop the complete functionality required for mobile trust continues with the ongoing development of MTM 2.0. With these specifications, network service providers, third-party service providers and end-users all benefit from establishing trustworthy behavior.
TCG’s recent formation of an Embedded Work Group will provide additional tools to embedded system developers. With these specifications, devices that are increasingly connected to the Internet can benefit from the same approach to security that TCG has provided to computers, servers, drives and networks. With this protection, embedded devices can avoid becoming the weak link entry point for network malware.
Moving forward
The supply chain realizes the importance of hardware-based security and continues to embrace TPM technology with implementations in hardware and improved software and services. Upcoming Windows releases from Microsoft are anticipated to require a TPM, and self-encrypting drives that take advantage of the TPM continue to proliferate; Microsoft is just one example that can be cited.
Acceptance of improved TPM-based trustworthiness by U.S. Government agencies has been demonstrated as well. With NIST and TCG’s initial collaboration viewed as quite successful, expanded use of SCAP and TNC technologies can be expected.
While there are few guarantees in life, one thing is certain: a non-activated TPM cannot deliver the added security it was designed for. So the admonition to government as well as business organisations is: let's use what we own, turn on the TPM and use it.
NFC and MTM/TPM
https://online.tugraz.at/tug_online/voe_main2.getvolltext?pCurrPk=50783
Abstract
Near Field Communication (NFC) has become widely available on smart phones. It helps users to intuitively establish communication between local devices.
Accessing devices such as public terminals raises several security concerns in terms of confidentiality and trust. To overcome this issue, NFC can be used to leverage the trusted-computing protocol of remote attestation.
In this paper, we propose an NFC-enabled Trusted Platform Module (TPM) architecture that allows users to verify the security status of public terminals. For this, we introduce an autonomic and low-cost NFC-compatible interface to the TPM to create a direct trusted channel. Users can access the TPM with NFC-enabled devices. The architecture is based on elliptic-curve cryptography and provides efficient signing and verifying of the security-status report.
As a proof of concept, we implemented an NFC-enabled TPM platform and show that a trust decision can be realized with commodity smart phones. The NFC-enabled TPM can effectively help to overcome confidentiality and trust concerns when accessing public terminals.
3 Steps To Ensuring Laptops Are Trusted, Healthy And Managed
By Joseph Souren APRIL 26, 2012
Wave Systems EMEA
http://www.businesscomputingworld.co.uk/3-steps-to-ensuring-laptops-are-trusted-healthy-and-managed/
Enterprises that want to secure IT networks should start by protecting the device, ensure data is encrypted and monitor device health from activation to decommissioning. These three steps are essential.
Step 1: Authentication and Trust
The first step to securing the network periphery is to authenticate and trust any device used to connect with enterprise IT systems. When companies focus on the device as the bedrock for network security, they start with a firm foundation. If the device is truly protected, rogue connections to the network are almost impossible. If the core is secure, the device is trusted.
For this, we need to establish an integrity check showing that the underlying components have not changed. This requires methods that work outside of the operating software. In general, security methods require a root of trust – a known hardware starting point.
To facilitate this, a secure process is used to conduct platform measurements. For example, the BIOS firmware and the bootblock are measured, and at the end of the boot sequence a number is calculated that must be exactly the same as the number that was calculated when the platform was first activated.
When the numbers do not match, something has changed on the platform and it should be investigated for changes. The Trusted Platform Module (TPM), which is often part of the PC hardware, can be used to deliver this root of trust, while the BIOS and TPM can deliver the method to calculate the platform integrity number.
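The measure-and-compare sequence described above resembles the TPM's PCR "extend" operation: each boot component is hashed into a running measurement, and the final value matches the reference only if every measured component is unchanged. A toy model (the component contents are invented; a real TPM holds the PCR values in hardware) illustrates the idea:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR_new = H(PCR_old || H(component)), the chained-hash pattern
    # used by TPM platform configuration registers
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def measure_boot(components):
    pcr = b"\x00" * 20  # PCRs start at zero on power-up
    for c in components:
        pcr = extend(pcr, c)
    return pcr

golden = measure_boot([b"BIOS v1.07", b"bootblock", b"bootloader"])
current = measure_boot([b"BIOS v1.07", b"bootblock", b"bootloader"])
tampered = measure_boot([b"BIOS v1.07-evil", b"bootblock", b"bootloader"])

print(current == golden)    # True: platform unchanged
print(tampered == golden)   # False: something changed; investigate
```

Because each step's output feeds the next step's input, an attacker cannot alter an early component and then "fix up" the final number: the chain would have to be recomputed inside the hardware that holds it.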
For mobile devices, a Mobile Trusted Module (MTM) acts in the same way as the TPM, and by utilising functions such as Trustzone in ARM processors, trust in the platform can be established.
Step 2: Secure encryption
The second step is to ensure that all data on the device is encrypted in a virtually unbreakable way. This can be achieved through the deployment of Self Encrypting Drives (SEDs), and can be allied to the TPM to provide the most robust data protection currently available.
Cost is not an issue as SEDs are now virtually matching the price of other hard drives and the TPM is already installed on more than half a billion enterprise laptops, notebooks and PCs around the world.
Corporate IT management understand the need to control and monitor networks whose periphery continues to expand. This expansion is driven by the adoption of mobile devices, and because each device is different, the logical answer is to deploy an accepted standard for device and user authentication, data encryption and device health that is embedded in the device, but has no effect on device performance.
Step 3: Device health
If the mobile device is secure and difficult to break at its base, the management of the device becomes less complex, and monitoring is both highly effective and relatively simple. Control returns to the network manager and security can be established as a policy.
The adoption of standards also reduces costs. If each mobile device has the same security process, organisations will make substantial savings at every stage of the engagement process, from secure initialisation to decommissioning.
Choice of management software to enable the evolution of the secure IT network is a matter for the enterprise decision-makers. However, a decision must be made this year because the network that supports their business is now mobile, and as a result the risk is greater than ever. But if the device is secure and trusted, the opportunities for the enterprise to cut costs and effectively manage a robust and highly secure network are substantial.
Joseph Souren leads Wave’s operations in EMEA, where he is responsible for developing the company’s sales, marketing and channel strategy in that region. With nearly 20 years of experience, Joseph has a strong track record for managing sales, marketing, channel and geo operations. He has held management positions at high-growth, NASDAQ 100 companies, including SanDisk, McAfee, and CA Technologies. Most recently, he served as VP of CA Technologies’ Internet Security Business Unit. Joseph has worked with the European Network and Information Security Agency, GovCert, the Platform for Information Security and the International Systems Security Association.
Taking Comply to Connect on the Road - Virtually
http://www.trustedcomputinggroup.org/community/2012/04/taking_comply_to_connect_on_the_road__virtually
by Lisa Lorenzin, Juniper Networks
April 2012
If you lost your corporate laptop tomorrow, how much could its next owner learn about your company? If you work for NASA, the answer is "a lot" - one stray laptop contained command codes for the International Space Station! With the proliferation of mobile devices such as laptops, tablets, and smartphones, the barriers protecting sensitive and critical information have become more porous than ever before. But data loss isn't limited to mobile devices, and it isn't always as obvious as watching a taxi drive away... The RSA data breach resulted from a trusted endpoint that was compromised by malware, with the user none the wiser.
Controlling access to sensitive resources is an essential part of information security. Traditionally, access controls have focused on user identity and roles. However, many recent attacks focus on compromising an authorized user's endpoint, then using that endpoint with the user's credentials and privileges to launch further attacks such as extracting confidential data or infecting other endpoints. One of the best ways to protect against such attacks is to ensure that the user's endpoint is equipped with required security controls such as self-encrypting drives, and up to date on applicable patches and security updates, by verifying the security of endpoints both when they connect to the network and continuously thereafter. This technique is known as Comply to Connect, since endpoints must comply with enterprise policy before they are allowed to connect to protected networks and resources.
The increasing proliferation of consumer devices on business networks under the "bring your own device" (BYOD) technology approach, coupled with the growing trend among information workers to stay connected at home, on the road, and just about anywhere, have created new challenges for network administrators responsible for ensuring device health, network security, and protection of corporate assets. Comply to Connect is a standards-based solution that addresses the problems many organizations face: how to ensure network security while allowing a wide range of individual devices to connect.
Earlier this year, at the RSA Security Conference in San Francisco, I had the opportunity to demonstrate a Comply to Connect system at the Trusted Computing Group's pre-conference workshop focused on the paradox of security. I've been participating in this annual TCG workshop for several years - demonstrating various types of security solutions based on open standards from the Trusted Network Connect (TNC) work group - and this year's workshop was by far the easiest for me. Rather than assembling my usual Rube Goldberg contraption of power strips, appliances, switches, cables, and monitors, I brought a single laptop!
The demo was simple; I showed an endpoint connecting to an environment protected by an access control system that assessed my endpoint and ensured that it was in compliance with required policies before allowing me access to resources. If I caused the endpoint to become out of compliance with the required policies, the system responded based on the severity of the problem: automatically fixing the problem for me, in some cases, and in other cases restricting my access until I took action to bring the endpoint back into compliance. Not rocket science! TNC-enabled technologies have been offering this functionality for years.
But two things were different this year. One is that instead of building my Comply to Connect system onsite, as I've done for other demos in previous years, I was connecting via SSL VPN to a lab environment in Bellevue, then running RDP across an encrypted tunnel to control the demo endpoint. You could say that I moved my demo to the cloud! (It's certainly easier than shipping boxes of gear across the country and hoping that the shipper doesn't decide to route them to Timbuktu instead. Although it does help to be able to reach the online demo environment.) And, more importantly, it reflects a very real use case for companies who want to enable their mobile, always-on, 24/7 workforce today.
And the other is that more people were interested in my demo than ever before. People have been saying for years that NAC is dead (Hi, Mike! Smile ) - but in reality, it's experiencing a quiet renaissance as one of the security controls enabling BYOD. The difference is that now we're focusing on the business problem to solve, rather than a particular technology to solve it - which is probably what we should have been doing all along. Users don't care whether the enabling technology is RDP or GoToMyPC, NAC or VPN - they simply want access to anything, from anywhere, at any time. Companies are embracing the cost reductions and benefits to productivity - and TCG is leading the way in enabling the technology administrators need to support this new model while minimizing the risks involved.
Investment opportunities in trusted computing:
http://wire.kapitall.com/investment-idea/cyberthreat-congress-and-power-companies/
Thwarting APTs:
http://www.gsnmagazine.com/article/26176/age_stuxnet_us_cyber_security_officials_rethink_so?page=0,0&c=cyber_security
In the age of Stuxnet, U.S. cyber security officials rethink software as their defense
Mon, 2012-04-23 02:59 PM
By: Steven Sprague
When the Stuxnet virus caused centrifuges to malfunction at Iran’s Bushehr nuclear reprocessing facility last year, it put cyber security officials around the world on notice that a new, more dangerous strain of Advanced Persistent Threat (APT) had appeared.
Post-analysis indicated the Stuxnet virus had altered the basic-input-output system (BIOS) firmware of the facility’s computer control systems. In essence, it targeted the computers’ pre-boot environments, which made it invisible to all software layers that subsequently came online.
The implications were clear: A virus that can alter the BIOS of a computer could grant control over its operating system (OS) and any software layer above it, including security and encryption applications. It could conceivably permit hackers to silently monitor a user's keystrokes, invade networked machines or assign remote control over online systems.
This emerging class of APTs prompted the U.S. National Institute of Standards and Technology (NIST) to publish guidelines last year for preventing unauthorized changes to BIOS firmware. The agency is now on the verge of issuing subsequent standards for measuring the health of an endpoint BIOS in real-time. Both NIST publications tacitly recognize that software solutions are an antiquated defense against attacks that are already active in the pre-boot phase. One alternative they suggest is to shift the line of defense to a computer’s physical hardware, which offers a deeper and incorruptible foundation for preserving the identity and health of a device.
A very persistent threat
Attacks on BIOS firmware are not a particularly new threat. They’re commonly known as rootkit viruses. When they first appeared during the mid-1990s, they simply disabled a targeted computer. The only fix was to wipe the drive clean and reinstall the OS. But as Stuxnet illustrates, rootkits have evolved into something far more persistent and insidious. In their emergent form, they can remain intact in the BIOS, even after a hard drive has been reformatted. Further, they can lie dormant for months before being activated remotely or by a certain cue. And, as mentioned, they can exercise invisible control over the entire software stack of a machine, as well as that of networked computers.
Skeptics argue that the threat of such sophisticated attacks is negligible since the diversity of firmware platforms in circulation requires rootkits to be highly tailored to the BIOS of a targeted computer. Yet, again as Stuxnet illustrated, a highly targeted attack can have a very broad impact. Industrial control systems similar to those used at Bushehr are commonly used in gas pipelines, power plants and other key infrastructure, which helps explain why such a “limited” threat became a priority for NIST.
Some of the skepticism surrounding the threat of rootkits may also be fatalism in disguise. In an industry dominated by software security solutions, vulnerabilities in the pre-boot environment can seem like an unpleasant yet inevitable fact of life. They are not, as NIST helped illustrate when it issued Special Publication (SP) 800-147 last year. The document established the first guidelines for ensuring that changes or updates to system BIOS come only from an authorized source.
But SP 800-147 was only the start. NIST recognized that protecting BIOS firmware requires more than passive defenses. Security further demands the ability to monitor those defenses against evolving and persistent threats.
As a result, NIST will soon issue SP 800-155, which outlines methods for actively measuring the health of BIOS firmware in real time, and reporting any unauthorized changes to a remote authority. The question is: What reporting source can be trusted when the pre-boot environment itself -- and any software layer operating above it -- is suspect? The most readily available solution is the hardware layer operating below system BIOS. More specifically, it is a piece of hardware called the Trusted Platform Module.
Designed a decade ago to thwart APT attacks, the TPM is a cryptographic chip attached to the motherboard of virtually every corporate-class laptop deployed. Today, activated TPMs are capable of storing and reporting measurements from the pre-boot environment. Plus, because their security functionality is embedded within physical hardware, TPMs cannot be compromised or altered by rootkits or other malicious code.
Measuring BIOS integrity
SP 800-155 establishes guidelines defining how to measure, store and report the integrity of a computer’s BIOS to a remote authority in real time. NIST’s publication is well-detailed and deserves to be read separately. But, in its simplest form, it establishes three key requirements:
1. Provide the hardware support necessary to implement credible Roots of Trust for BIOS integrity measurements;
2. Enable endpoint computers to measure multiple stages of the boot up process prior to execution;
3. Securely transmit measurements of BIOS integrity from the endpoints to IT management.
Again, TPMs can play a central role in fulfilling these requirements. First, as physical hardware, they provide an unalterable baseline -- the so-called Root of Trust -- for comparison with expected BIOS measurements. TPMs securely store these measurements and, at any point during or after the boot process, can send encrypted reports of BIOS health to a remote central authority, such as an IT manager in a corporate office.
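The "extend" operation at the heart of this measurement scheme can be sketched in a few lines. This is a simplified model of a TPM 1.2 Platform Configuration Register (PCR), not vendor firmware, and the boot-stage names are made up for illustration: each stage's digest is chained into the register, so the final value depends on every measured component and on their order.

```python
import hashlib

PCR_SIZE = 20  # a TPM 1.2 PCR holds a 160-bit SHA-1 digest

def extend(pcr: bytes, component: bytes) -> bytes:
    """Extend a PCR: new_pcr = SHA-1(old_pcr || SHA-1(component))."""
    measurement = hashlib.sha1(component).digest()
    return hashlib.sha1(pcr + measurement).digest()

# Measure each boot stage in order, starting from a zeroed register.
pcr = bytes(PCR_SIZE)
for stage in [b"BIOS boot block", b"BIOS main", b"option ROMs"]:
    pcr = extend(pcr, stage)

# A remote verifier comparing this value against a known-good one
# detects any modification to, or reordering of, the measured firmware.
print(pcr.hex())
```

Because extend is one-way, malicious firmware that runs later in the boot cannot "un-measure" itself; that is what makes the hardware register a credible Root of Trust for reporting.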
Thanks to NIST SP 800-155, the industry now has the knowledge and the tools to do what it could not before, and block APTs based on rootkit attacks on BIOS firmware. Even better, BIOS monitoring platforms built on NIST’s specifications are already commercially available from vendors like Wave Systems.
Yet, further work remains. NIST’s publications do not provide guidance for attacks targeting computer components, such as the video BIOS on a standard PC video card. Nor do they address attacks on the master boot record, which can cost a user their hard drive. But securing and measuring the integrity of the BIOS environment was an essential first step to making meaningful measurements further up the pre-boot stack. A house is only as strong as its foundation. Thus, NIST was wise to build on a foundation of strong BIOS integrity, and to leverage hardware-based tools, such as TPMs, as the cornerstone.
Steven Sprague is CEO of security firm Wave Systems. He can be reached at:
ssprague@wave.com
Walled gardens look rosy for Facebook, Apple – and would-be censors
In part three of our series, how the rise of app stores and social networks is making the way we use the net cleaner, easier and far more controllable
http://www.guardian.co.uk/technology/2012/apr/17/walled-gardens-facebook-apple-censors
Charles Arthur, guardian.co.uk, Tuesday 17 April 2012 12.59 BST
It was in May 2008 that Jonathan Zittrain first sounded the warning. While the argument was raging, as it is now, about censorship of the internet by governments seeking to control what their populations read – in countries such as China, India and Pakistan – the professor of cyberlaw at Oxford and Harvard universities had another concern: what if it were actually the gadgets we used that were in effect censoring the world that we could connect to, and the things we could do?
Zittrain fretted that smartphones, which were just beginning to take off, might actually limit what users could do online compared with devices such as personal computers. Besides the obvious difference – a smartphone is light and can be slotted in a pocket; a personal computer is power-hungry and bulky – there's another subtle but essential difference. Personal computers are "generative": they can be programmed to do more than they were set up to. Smartphones, on the other hand, generally can't be programmed directly by the user. For the most part, they're appliances, as limited in what they can do as a coffee maker.
In his book, The Future of the Internet and How to Stop It, Zittrain noted: "We care little about the devices we're using to access the net … we don't think of that as significant to its future the way we think of [direct censorship]."
But does the rise of appliance-like smartphones – and more generally of "walled gardens" such as Facebook, Myspace and Google+ – presage an age where we simply cut ourselves off from uncomfortable truths online because our devices, or the sites we use, won't show them to us, like a North Korean radio made so it cannot be tuned to unauthorised sources?
The question is urgent. Facebook has passed 845 million users, and smartphones are outselling PCs so quickly that in 2010 the research company Gartner forecast that as soon as next year mobile phones will overtake PCs as the most common way to access the web, used by 1.82 billion people, compared with 1.78bn net-connected PCs.
But answering it is complicated, says Dr Richard Clayton of Cambridge University's computer laboratory, who has extensively researched the censorship and oversight systems used by many countries and companies, including the UK and British Telecom's "CleanFeed" system, used to filter child pornography.
"Facebook can cause people to disappear from history, vaporising their pages and everything they wrote on your wall, as if they were never there," he points out. For Facebook, everything – every user, every wall entry, every photo – is just an entry in a giant database, which can be removed at any time by someone with access to that database. (It could be you, or an administrator.) The complication comes in trying to suggest that doing that or not doing that is "wrong".
"Everybody applauds the idea that there shouldn't be an open space where paedophiles swap material," says Clayton. "Or where al-Qaida can swap material and recruit. And then it gets harder – you have Facebook groups where you have Muslims who want to march through Luton to protest about our activities in central Asia. Facebook has a rather fun arrangement so that they can set up groups like that, but they aren't visible in the UK [where they would count as hate speech]."
So Facebook roots out what it considers against good taste, which (as Clayton points out) generally means content that would not be allowed under the US first amendment, since it is an American company. A guidebook for its moderation staff recently became public, revealing that images of breastfeeding would be banned if nipples were exposed, but deep flesh wounds and crushed heads would be OK.
While such rules seem peculiar in Europe, almost to the extent of being the reverse of what is expected, Google has also demonstrated the same American prudishness on its Google+ social network, which insists on people using their real names.
As San Francisco-based journalist-turned-venture capitalist MG Siegler discovered, the site banned him from using a photo with a rude gesture – an extended middle finger – for his profile; when Siegler reposted it, Google removed it again. The key to the problem: Google wanted to show Google+ profile pictures in search results, and if those included pictures that some might find offensive, Google could lose business.
Censorship? Heavy-handed US-biased restriction? Or reasonable move to keep the web clean? Tom Anderson, the co-founder of Myspace, who was automatically everyone's friend when they first joined, wrote an open letter (on Google+) to Siegler, in which he said: "Every social network has the policy you're decrying, and why shouldn't they? It's a public sphere." He compared it to wearing a racist T-shirt in a shopping mall: "Security would probably ask you to leave." He added that it had been very difficult at Myspace to keep up with "offensive" photos; without that control, a social network "turns into a cesspool that no one wants to visit … sorta like Myspace was".
But social networks played a big role in the Arab spring of 2011, with Facebook and Twitter both cited as key to getting the message out from oppressed groups. More recently, Syria has become the source of many important videos showing the suffering of citizens attacked by their own government. Those can be seen on YouTube – though not, of course, by citizens within Syria itself.
The fears about "walled gardens" sometimes reflect concerns that are as much about business models as principles.
Facebook does not let Google or any other site index the vast majority of its content; a tiny file called robots.txt on its homepage stops search engines from grabbing details of photos, feeds or other data. Only the most limited information can cross that wall – and that worries Google, which relies on being able to index everything (don't forget its mission statement: "organise the world's information and make it universally available") and then to sell adverts against it.
John Battelle, who runs online advertising network Federated Media, says Facebook poses an existential threat to Google. "The old internet is shrinking and being replaced by walled gardens over which Google's crawlers can't climb," he noted earlier this year, as Facebook prepared its flotation. "Sure, Google can crawl Facebook's 'public pages', but those represent a tiny fraction of the pages on Facebook, and are not informed by the crucial signals of identity and relationship which give those pages meaning."
In the same way, Apple's iTunes store is available on the web, and Google can index it, "but all the value creation in the mobile iPhone and iPad app world is behind the walls of Fortress Apple. Google can't see that information, can't crawl it, and can't make it universally available."
In that sense, as Facebook gets bigger, and sells advertising to its users, it poses an increasing threat to Google – because to many, the space outside Facebook will look more and more like an untamed space where scams, malware and piracy thrive. "Google's business model depends on the web remaining open, and … that model is imperilled," Battelle adds. "The open web is full of spam, shady operators and blatant falsehoods. Outside of a relatively small percentage of high-quality sites, most of the web is chock full of pop-up ads and other interruptive come-ons.
"It's nearly impossible to find a signal in that noise, and the web is in danger of being overrun by all that crap. In the curated gardens of places like Apple and Facebook, the weeds are kept to a minimum, and the user experience is just … better."
Even video sites such as YouTube and Vimeo can be thought of as a form of walled garden: videos are removed at the request of copyright owners and law enforcement. Often, they're dismissed as just being repositories for "cute cats" videos (with user-generated films such as "Charlie bit my finger" still near the top of the all-time list). But as Ethan Zuckerman, director of MIT's Centre for Civic Media, pointed out in a Vancouver Human Rights lecture, Cute Cats and the Arab Spring, sites such as YouTube, Facebook, or Twitter are the best place for dissidents to post grievances and findings.
Those sites don't offer the best protection for dissidents, Zuckerman argues – for people can often be identified through their posting or web identity – but their power is that governments, even repressive ones, block them at their peril. If YouTube suddenly becomes invisible, people begin to wonder why and begin to ask questions – which in time, given the connectedness of our modern civilisation, will mean that they find out.
An earlier version, from 2008, pointed to how the overhead views of Google Maps had shown precisely who owned property in Bahrain – which often turned out to be the royal family. But what about the mechanisms that are increasingly being used to foment or report revolution – the smartphones with internet connectivity, or the computers being used to upload photos or video taken with cameraphones?
Zittrain has expressed fears about how the devices we use to connect to the net have moved away from being fully capable personal computers – where in theory you can write programs that can use any capability of the computer – towards appliances such as the iPad or iPhone, with tightly limited functionality and access to the underlying operating system software, where only "allowed" programs can be installed from a vendor-maintained store. He calls such a process "tethering".
"From the start my worries about appliances permanently tethered to their makers have been that the tethering won't be limited to smartphones," Zittrain says. "Rather, the closed smartphone architecture is the canary in the coalmine for all of consumer computing. That's why I've said the PC is dead, even as the PC's form factor may remain. In the past year we've seen the introduction of the App Store on the Mac PC – not just iPhone and iPad."
Even Microsoft, which ushered in the era of the personal computer running software that in theory could be used to write any program, is heading in the same direction. Versions of Windows 8, to be released in the autumn, will also use the Metro Store for apps, which Microsoft will control.
Adding new programs will be hard; in effect, websites will become the new programs. Zittrain notes that although you can still side-load software – that is, transfer it from another source, such as the internet or a memory stick connected to the machine – that is a reversal from the paradigm that ruled for years.
"What a transformation: the principal way of acquiring software for the past 30 years is now through a side door rather than a front one," he says. "I'm both awed and worried about what's happened since 2008."
Zittrain concedes that people like convenience and security – and they're entitled to. But he says there's a qualitative difference between now and then. "No one tried to get Bill Gates to alter Windows so that undesirable apps and associated content – undesirable to someone other than the user – couldn't be accessed. Today is different: if Facebook or Apple allow objectionable apps on their platforms, or Google in the Android Marketplace, or Microsoft in the Metro Store, regulators can say: take it down."
That's a subtle shift, but important. Media commentator Jeff Jarvis says Apple's iPad is "sweet and pretty but shallow and vapid ... I see danger in moving from the web to apps," he said. "The iPad is retrograde. It tries to turn us back into an audience again."
The same broad criticism is applied to smartphones, where not just Apple's product, but almost all platforms prevent any sort of easy access to the underlying code; there's no "command line interface" for a smartphone, no black screen and blinking cursor as you can find on a Windows or Apple computer, if you look hard enough.
Part of that is for the protection of the wider telephone network, says Clayton. "Because phones are talking to the wider telecommunications system, which isn't secure, the wireless side of phones tends to be locked down very tight."
But on the app side, the extent of user lockdown varies between platforms. Apple's iPhone is tightly controlled: you can't distribute an app on to iPhones except by putting it through Apple's App Store – and the company has previously removed apps in China at the government's behest, such as in 2009, when apps about the Dalai Lama were removed.
In that, Apple was like Google, which at the time maintained an operation inside China, and self-censored its content, offering a link to Chinese searchers to explain why the content was censored – but not to ways to find the results they wanted. Apple offered no such indication that the store was censored.
With smartphones now outselling PCs every quarter, and China forecast to become the world's largest smartphone market this year, ahead of the US, the question of whether smartphones are a "reductive", limited platform, or "generative" like a PC looks like an increasingly important one.
Battelle says the shift to mobile is unstoppable. "The PC-based HTML web is hopelessly behind mobile in any number of ways," he wrote on his blog. "It has no eyes (camera), no ears (audio input), no sense of place (GPS/location data). Why would anyone want to invest in a web that's deaf, dumb, blind, and stuck in one place?"
Yet that wealth of data on mobiles isn't necessarily leading to a more web-like experience. Clayton says that "we are seeing more locked-down platforms than before" on smartphones – pointing particularly to Apple, but not excepting others.
The biggest, and best-selling, exception is Android, the smartphone software that Google offers free to handset makers. "With Android, there's a wider choice of where to download apps from" – many companies, including Amazon, offer their own Android "app stores" – "and Google doesn't hold your hand as much." The search giant can still "kill" apps if it judges them to be malware.
So far, there do not seem to have been any occasions when the Chinese government has demanded that an Android app is wiped from phones – though its Great Firewall can prevent people inside China accessing the official Android Market from which apps can be downloaded, rendering the problem moot. Indeed, Android Market has been blocked a number of times inside China, and many users there prefer the unofficial ones that have sprung up; though those, of course, will come under the eye of the government.
In total, Android is outselling all other smartphone platforms – though probably not because eager would-be programmers and tinkerers are taking it up, but because carriers can offer them cheaply. "[Apple's iPhone and iPad software] iOS and [Google's] Android now represent fascinating hybrids," says Zittrain. "Third parties can write apps – and how they do! – but the manufacturer, to varying degrees, can control whether those apps can reach their audiences."
Even so, there's no easy answer. For example, the success of Research In Motion's BlackBerry phones in many Middle Eastern countries has come about because they allow teenagers to communicate directly with the opposite sex, without having to meet face-to-face – because that could fall foul of strict religious laws. Similarly, Facebook offers a way for teenagers to "speak" in ways that might be banned in the physical world. To some teenagers – and activists – the fact that the BlackBerry's PIN-based Messenger system can't be tied to a phone, yet lets people stay in touch, is the perfect reason for using the platform. Though it can be decrypted – if the government goes to great lengths – in general it will be private, which suits its younger users perfectly.
Zittrain's real worry is that "the personal computer is dead".
His conclusion is a call to arms: "We need some angry nerds" – people capable of breaking out of the walled gardens.
Indeed, the US government has found some: it has backed projects such as "the internet in a suitcase", which could set up a telecommunications network inside a country separate from the existing infrastructure.
Zittrain acknowledges such projects, but for the wider world, he says, "convenience is great. I wouldn't call for a return to the green blinking cursor of [Microsoft's pre-Windows] MS-DOS or the [text-based] Apple II. But we should build architectures that permit innovation and experimentation if consumers wish to go 'off-roading'."
The Guardian: US and China engage in cyber war games
http://www.guardian.co.uk/technology/2012/apr/16/us-china-cyber-war-games
Exclusive: US and Chinese officials take part in war games in bid to prevent military escalation from cyber attacks
Nick Hopkins
guardian.co.uk, Monday 16 April 2012 13.00 BST
The US and China have been discreetly engaging in "war games" amid rising anger in Washington over the scale and audacity of Beijing-co-ordinated cyber attacks on western governments and big business, the Guardian has learned.
State department and Pentagon officials, along with their Chinese counterparts, were involved in two war games last year that were designed to help prevent a sudden military escalation between the sides if either felt they were being targeted. Another session is planned for May.
Though the exercises have given the US a chance to vent its frustration at what appears to be state-sponsored espionage and theft on an industrial scale, China has been belligerent.
"China has come to the conclusion that the power relationship has changed, and it has changed in a way that favours them," said Jim Lewis, a senior fellow and director at the Centre for Strategic and International Studies (CSIS) thinktank in Washington.
"The PLA [People's Liberation Army] is very hostile. They see the US as a target. They feel they have justification for their actions. They think the US is in decline."
The war games have been organised through the CSIS and a Beijing thinktank, the China Institute of Contemporary International Relations. This has allowed government officials, and those from the US intelligence agencies, to have contact in a less formal environment.
Known as "Track 1.5" diplomacy, it is the closest governments can get in conflict management without full-blown talks.
"We co-ordinate the war games with the state department and department of defence," said Lewis, who brokered the meetings, which took place in Beijing last June, and in Washington in December.
"The officials start out as observers and become participants … it is very much the same on the Chinese side. Because it is organised between two thinktanks they can speak more freely."
During the first exercise, both sides had to describe what they would do if they were attacked by a sophisticated computer virus, such as Stuxnet, which disabled centrifuges in Iran's nuclear programme. In the second, they had to describe their reaction if the attack was known to have been launched from the other side.
"The two war games have been quite amazing," said Lewis. "The first one went well, the second one not so well.
"The Chinese are very astute. They send knowledgeable people. We want to find ways to change their behaviour … [but] they can justify what they are doing. Their attitude is, they have experienced imperialism and they had a century of humiliation."
Lewis said the Chinese have a "sense that they have been treated unfairly".
"The Chinese have a deep distrust of the US. They are concerned about US military capabilities. They tend to think we have a grand strategy to preserve US hegemony and they see a direct challenge.
"The [Chinese officials] who favour co-operation are not as strong as the people who favour conflict."
The need for the meetings has been underlined in recent months as the US and the UK have tried to increase pressure on China, which they regard as chiefly responsible for the theft of billions of dollars of plans and intellectual property from defence manufacturers, government departments, and private companies at the heart of America's national infrastructure.
Analysts say this amounts to "preparation of the battlefield", and both the UK and the US have warned Beijing to expect retaliation if it continues.
In recent months, the US has made clear it is turning its military focus away from Europe towards the Pacific to protect American interests in the region.
"Of the countries actively involved in cyber espionage, China is the only one likely to be a military competitor to the US," Lewis said.
"US and Chinese forces are in close proximity and there are hostile incidents … The odds of miscalculation are high, so we are trying to get a clear understanding of each side's position."
Lewis believes the US is preparing to become more aggressive towards China, saying President Barack Obama has already tasked internal working groups in the White House to consider tougher sanctions.
Without naming China, a senior executive in the FBI told the Guardian the threats posed from cyber attacks were alarming.
"We know that the capabilities of foreign states are substantial and we know the type of information that they are targeting," said Shawn Henry, executive assistant director of the FBI's cyber unit.
"We have seen adversaries that have been in networks for many months or even years in some cases, undetected. They have essentially had free rein over those networks … They have complete ability to disrupt that network entirely."
Frank Cilluffo, who was George Bush's special assistant on homeland security, said the time had come to confront China.
"We need to talk about offensive capabilities to deter bad actors. You cannot expect companies to defend against foreign intelligence services. There are certain things we should do if someone is doing the cyber equivalent of intelligence preparation of the battlefield of our energy infrastructure.
"To me that's off grounds. That demands a response. What other incentive could there be to map our infrastructure in the event of a crisis?
"We have a stronger hand in conventional military and diplomatic means. We need to show them our cards. All instruments on the table. I think we do have to start talking active defence."
He said the US had to be proactive or, in time, people would start losing confidence in the integrity of the internet and computer systems.
"If I don't invest because I am afraid, if I don't use the web because I am afraid, if you lose trust and confidence in those systems, the bad guys have won. Checkmate."
The state department refused to speak about the war games, or say which officials took part.
A spokesman said: "The United States is committed to engaging countries to build a global environment in which all states recognise and adhere to norms of acceptable behaviour in cyberspace. We are engaging broadly with the Chinese government on cyber issues so that we can find common ground on these issues which have increasing importance in our bilateral relationship."
The Pentagon declined to comment or say which of its officials took part in the war games.
China has consistently denied being responsible for cyber attacks on the US and other western countries. It says it is also the victim of this kind of espionage.
The Chinese defence minister, Liang Guanglie, has said Beijing "stands firmly against all kinds of cyber crimes".
"It is hard to attribute the real source of attacks and we need to work together to make sure that this security problem won't be a problem," he said.
"Actually in China we also suffered quite a wide range [of], and frequent, cyber attacks. The Chinese government attaches importance also on cyber security and stands firmly against all kinds of cyber crimes. It is important for everyone to obey or follow laws and regulations in terms of cyber security."
The People's Daily, the Chinese newspaper that most reflects the views of China's ruling Communist party, said last year that linking China to internet hacking attacks was irresponsible.
"As the number of hacking attacks on prominent international businesses and organisations has grown this year, some western media have repeatedly depicted China as the villain behind the scenes."
The Guardian: Militarisation of cyberspace: how the global power struggle moved online
http://www.guardian.co.uk/technology/2012/apr/16/militarisation-of-cyberspace-power-struggle?intcmp=239
Rise of cyber-attacks on critical infrastructure on both sides of the Atlantic calls for creation of cyberweapons and new rules for use
Nick Hopkins, guardian.co.uk,
Monday 16 April 2012 15.00 BST
Jonathan Millican is a first-year university student from Harrogate in North Yorkshire. He says he doesn't think of himself as a "stereotypical geek", but having been crowned champion in Britain's Cyber Security Challenge, the 19-year-old is bound to take some stick from his undergraduate friends at Cambridge.
The competition is not well known, but it is well contested. About 4,000 people applied to take part this year, hundreds were seen by judges, and 30 were selected for the final in Bristol on 10 March.
After a day of fighting off hackers and identifying viruses in a series of simulations, Millican triumphed, giving him legitimate claim to be the brightest young computer whiz in the UK.
And though he may not recognise it yet, Millican has become a small player in a global game. There is a dotted line that links him to an ideological battle over the future of the internet, and the ways states will use it to prosecute conflicts in the 21st century.
The remaining cold war superpower, the United States, is slowly squaring up to the emerging behemoth, China, in a sphere in which Beijing has a distinct advantage: cyberspace.
Experts estimate China has as many "cyber jedis" as the US has engineers, and some of them, with backing from the state, have been systematically hacking into and stealing from governments and companies in the west, taking defence secrets, compromising computer systems, and scanning energy and water plants for potential vulnerabilities.
The scale of what has been going on is only now being recognised, and with a discernible sense of panic, the US and the UK are trying to make up lost ground.
One important way of shoring up the west's defences involves recruiting a rival army of computer specialists to defend the systems being attacked.
This is why the UK began the Cyber Security Challenge in 2011, and why Millican and other participants have been discreetly courted by GCHQ, the government's electronic eavesdropping centre, which is on the frontline of this new power struggle.
The explosion in internet use, and the almost complete reliance on computer systems to run and record our daily lives, has opened up endless opportunities for thieves, spies and vandals to exploit the platform.
Though it is still evolving, the push-back has started. The Guardian has spoken to senior officials in the US and UK government, as well as specialists and independent thinktanks in London, Washington and San Francisco, who agree that the west is galvanising itself to adopt a far more aggressive approach to a problem for which there is no precedent. The stakes have suddenly become very high.
Over the past 18 months, there has been a concerted effort to highlight the relentless nature of day-to-day attacks on businesses and government departments. The Obama administration estimates that 60% of small firms that are hacked go broke, and billions of dollars worth of intellectual property have been stolen from industry, including military blueprints from leading defence contractors.
And in the political shadows in Westminster and Washington, they have moved to put cyberspace more formally into the military sphere, so that those responsible for the attacks understand that retaliation is now part of the game.
New military battleground
Though much maligned, Britain's 2010 strategic defence and security review may prove to have been a historic punctuation mark in this process.
The review made the threats from cyberspace a "tier one" priority, because Downing Street considered them a genuine threat to national security.
The US is moving in this direction, too. On 17 January, the head of the US military, General Martin Dempsey, set out a significant change in position. In a 70-page document that was largely ignored and almost completely impenetrable, he said the US intended to treat cyberspace as a military battleground.
"Disrupting the enemy will require the full inclusion of space and cyberspace operations into the traditional air-land-sea battle space … [They have] critical importance for the projection of military force. Arguably, this emergence is the most important and fundamental change … over the past several decades."
The military has long had basic cyber capabilities, such as equipment for jamming signals, but the more sophisticated weapons are seldom spoken of, and rarely used, in part because there has been no formal code of conduct.
This has prevented the US from routinely deploying its most destructive cyberweapons, including during the Libya campaign last year, when the Pentagon gave President Obama the option of disabling Muammar Gaddafi's military computer network with a targeted cyber-attack. The White House decided against it, but the Dempsey doctrine will give the president, and General Keith Alexander, the head of US Cyber Command, more confidence next time.
Officials in the US and the UK privately concede they have been developing a range of new "offensive" cyberweapons – and a rulebook for their use.
"If we know that someone is about to launch a cyber-attack on us, then we will pre-empt it," said one Whitehall official. "We have that capability and we will use it, even if the bad actors are based abroad."
The state department now regards cybersecurity "as a foreign policy priority", and Obama administration officials insist "the laws of conflict apply to cyberspace".
"If there is significant information of a cyber-event, we reserve the right to use tools in our toolbox," said one. "When does a cyber-attack achieve critical level? When one can attribute an attack that deliberately causes loss of life."
Paul Rosenzweig, who spent four years as deputy assistant secretary in the department of homeland security until 2010, is sceptical that a cyber-only war will happen soon. But he added: "We may have cyberwar as part of another war. I would hope and pray and assume that they [China] are as worried about that as we are."
Frank Cilluffo, President George Bush's special assistant for homeland security at the time of the 9/11 attacks, said: "In cyber, we are where the counter-terrorist community was on September 12, 2001.
"I have come to the conclusion that we can no longer firewall our way out of the problem. We need to talk about offensive capabilities to deter bad actors. I don't think that you are going to see warfare without a cyber dimension in the future … that is a given. I think warfare as we think of it today will take on these dimensions."
With a buildup of cyberweaponry on both sides, Russia and China have called for negotiations to start on new treaties to govern what is permissible in the domain.
The Russians, in particular, have favoured arms control-style agreements, and last September Moscow and Beijing formally proposed to the UN a new international code that would standardise behaviour on the internet.
That has been flatly rejected by the UK and the US. They argue arms control treaties won't work because it will be almost impossible to verify the weapons each state has – computer viruses are more easily hidden than nuclear missiles.
And the new international code, the Foreign Office argues, is simply an attempt by Russia and China to stifle free speech on the internet in their own countries.
"It is too late for new formal treaties," said one senior source in the Ministry of Defence. "If we go down that road it will be years before anything emerges. This is China and Russia trying to kick the issue into the long grass."
But the alternative is almost as far-fetched, and perhaps more nebulous. The foreign secretary, William Hague, has been calling for countries to agree on "rules of the road" in cyberspace, with respect for international law, rights to privacy, and protection of intellectual property at their core.
This puts huge emphasis on goodwill between countries and the harmonisation of existing laws to make it easier for investigators to cross international boundaries. It is as unpalatable to China and Russia as their ideas are to the west.
"It's not at a point where I would call it cyberwar yet, but it's close," said Larry Clinton, president of the Internet Security Alliance, which represents a group of multinational companies, including many in the defence and aviation sectors.
"I think we are certainly seeing an arms race with respect to cyber. We did well to get through the nuclear age. We did well with chemical weapons. If we can do as well with cyber, that would be great, but we don't really have a theory; I am not sure what the theory is. We don't have a model set up for how we are going to deal with this."
Private fears
Developing cyberweapons, and a methodology for using them, is only one part of this complex new puzzle.
Though government departments are continually under attack, it is private industry that suffers most from hackers. The frightening scale of the theft of intellectual property, and the potential knock-on effect for fragile economies, underpinned the UK's decision to say it must now be regarded as a genuine threat to national security.
This, in turn, is forcing governments to expand the boundaries of what might trigger a military response to include theft, albeit on a massive scale.
Rosenzweig estimates that 85-90% of the US's digital infrastructure is in private hands. "I am pretty sure it's the same in Europe."
Though it is hard to make calculations, one survey last year commissioned by the Cabinet Office estimated the UK economy lost £27bn to cybertheft in 2010.
In America, they gave up trying to calculate precise values nine years ago, when the number of known "cyber-intrusions" reached 100,000 in a year; one respected Washington thinktank put the cost of cybertheft in the US last year at roughly $100bn (£63bn).
America's biggest companies have spent a similar amount beefing up their cybersecurity in the past five years, but analysts say this hasn't been enough to prevent "significant military losses" involving stealth, nuclear weapon and submarine technology, though none of the companies involved will admit it.
Without giving away details, Shawn Henry, executive assistant director at the FBI, confirmed that military networks and defence contractors had been hit hard by hackers. "A tremendous amount of information has been stolen from those networks by a variety of state actors."
But there is another dimension of cyber-espionage which is, in some ways, more disturbing.
"We know that Russia and China have done the reconnaissance necessary to plan to attack US critical infrastructure," said Jim Lewis, from the Centre for Strategic and International Studies, a Washington thinktank.
Lewis was commissioned by Bush in 2008 to write a cyber strategy for the government, which is still regarded as a benchmark.
"You might think we should put protection of critical infrastructure at a slightly higher level. It is completely vulnerable. It is totally unprotected.
"This isn't made up. I have been doing this for a long time. We know that people have done the reconnaissance, we know that control systems can issue commands to destroy critical infrastructure. We know all this and we have done nothing to defend ourselves … We have been trying for about seven years to deter people and it doesn't work."
Henry admitted his agency was now dealing with thousands of attacks every month. The agency has people in 63 countries specifically to deal with online threats. "We recognise that there are vulnerabilities in infrastructure," he said. "There are thousands of breaches every month across industry and retail infrastructure. We know that the capabilities of foreign states are substantial and we know the type of information that they are targeting."
He added: "We have seen adversaries that have been in networks for many months, or even years in some cases, undetected. They have essentially had free rein over those networks … looking at information that is transiting that network, with the ability not only to review that data, but potentially to change that data. They have complete ability to disrupt that network entirely."
Henry said attacks were becoming much more sophisticated. "Every step that the defence makes, the offence changes its tactics."
Rosenzweig believes this mapping of critical infrastructure – such as energy or water plants – is seen within government as "preparation of the battlefield". It is, he says, China's way of saying: "Don't send the 7th fleet to save Taiwan, or we will take out the electricity supply in Los Angeles".
The US is using the Idaho National Laboratory to run simulations testing the robustness of America's most important computer networks, but these take time.
With so much at stake, the Obama administration is pushing for proper domestic regulation and standards in cybersecurity, but that is being resisted by private companies, even though it may force them to close the gaps that are being exploited.
Three competing bills are currently vying for votes in Congress, including one from the former presidential candidate John McCain, who wants to fend off government oversight, and the prospect of companies being fined – or sued – if their cyber defences don't come up to scratch.
The role of China
Though the arguments are running along party lines, there is no argument about the fundamental problem, and where it is sourced from.
"Anyone who is significant on either side of the aisle is running around with their hair on fire," said Rosenzweig. "The influential voices on both sides are saying it's a problem. It's a real problem and it's a real problem right now. General Keith Alexander [head of US Cyber Command] says he is seeing it, and he's not the sort of guy to make things up."
There is no doubt about the main culprit, says Rosenzweig. "China denies it – but this is one of the bald-faced lies that people get away with because we don't want to face the consequences. China has more computer programmers than the west has engineers.
"Not everyone is a cyber jedi. But if you have 1 million computer programmers, you will find 1,000 jedis. We have a lot of IT professionals but they aren't the same thing; we don't understand the culture."
Dmitri Alperovitch, one of the world's foremost independent cybersecurity analysts, said: "The Chinese clearly have no restraints when it comes to espionage.
"In the US, economic espionage by either private sector or government is prohibited by policy and the Chinese are certainly not constrained by such measures. When it comes to volumes and sheer scale, no one even comes close to them."
The audaciousness of some of the attacks has been astounding. Earlier this month, Nasa's inspector general, Paul Martin, revealed the space agency's Jet Propulsion Laboratory headquarters in Pasadena, California, had been compromised by an attack that appeared to come from China.
The JPL manages 23 spacecraft, including missions to Jupiter, Saturn and Mars, and controls the International Space Station.
In remarkable testimony to Congress, Martin said hackers had "gained full system access" to JPL, allowing them to modify, copy, or delete sensitive files, create new ones, and upload hacking tools to compromise other Nasa systems. In short, they were running the network.
This was only one of 47 cyber-attacks on Nasa last year, 13 of which successfully compromised the agency's firewalls.
Martin said some of the intrusions "may have been sponsored by foreign intelligence services seeking to further their countries' objectives".
There is debate on how effective, and for how long, a cyber-attack from China could knock out an energy supply or communications hub. Larry Clinton said it would not be easy, but it would be foolish to think it was not possible.
"Older technologies tend to be safer than newer technologies. Copper wire is more secure than fibre. And the problem is the interconnections. We don't have nearly the degree of air-gapping that we once did.
"You can get into a weapons system and you won't even know that system is compromised until you set it off and then it comes back and hits you in the face … the sort of attacks that were considered sophisticated six years ago are considered elementary now."
If the threat is that great, and the belief that China is behind it so widely held, why hasn't the US been more robust in condemning Beijing? It's a question the state department refuses to answer. It will not even say if it has used normal diplomatic means – summoning an ambassador or expelling someone from the embassy.
Melissa Hathaway, who was director of the Joint Interagency Cyber Task Force under Bush and was on the National Security Council in the first year of the Obama administration, thinks the reticence is understandable.
"We need to think about our roles and the economic future of the world. What would you like the future of the economy to look like? Quite honestly, right now we are all dependent on China. All of us.
"They have bought a lot of European debt, they have bought a lot of US debt. They are helping to promote world stability right now."
The US has been pursuing another route to the Chinese, reaching out to Beijing using thinktanks as proxies, and engaging them in "cyberwar" games.
It is the only chance the Pentagon and the state department get to sit across the table from their Chinese counterparts, to express their own fears, and to hear those of China.
One hope is that the talks will lead to an equivalent of a "nuclear hotline" from Washington to Beijing, so leaders can talk before a situation gets out of control.
While the US may be pleased it is finally getting its message across, Lewis isn't convinced the Chinese are listening. And he doesn't think they will stop their activity in cyberspace either.
He has been dealing with the Chinese military for years, and says the People's Liberation Army is hostile.
"They see the US as a target. They feel they have justification for their actions. There is a sense that China has been treated unfairly, and so they have a right to catch up. Britain and France may have burned the summer palace, but the US has become the symbol of imperialism. And they think the US is in decline."
TSCP Selected to Compete for NSTIC Grant
The Transglobal Secure Collaboration Participation, Inc. (TSCP) announced today that it has been selected by the National Institute of Standards and Technology (NIST) as one of 28 organizations that will compete for grants to develop and implement pilot programs for The National Strategy for Trusted Identities in Cyberspace (NSTIC), an initiative signed by President Obama in April 2011.
NSTIC identifies a set of guiding principles for accelerating the use of trusted digital identity credentials. The strategy aims to deploy a system that helps secure transactions on the Internet, improve the public's awareness and control of personal information, and stimulate growth of online commerce.
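The "trusted digital identity credential" idea at the heart of NSTIC can be illustrated with a toy example: an identity provider signs an assertion about a subject, and a relying party accepts it only if the signature verifies. This is only a hedged sketch using a shared HMAC key; real NSTIC pilots rely on PKI certificates and federation protocols, and all names here are invented for illustration.

```python
import hmac, hashlib, json

# Toy credential scheme: an identity provider signs an assertion and a
# relying party verifies it. Illustrative only -- real NSTIC pilots use
# PKI/X.509 and federation standards, not a single shared HMAC key.

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical issuer key

def issue_credential(subject: str, attributes: dict) -> dict:
    """Identity provider: sign an assertion about a subject."""
    assertion = json.dumps({"sub": subject, "attrs": attributes}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, assertion.encode(), hashlib.sha256).hexdigest()
    return {"assertion": assertion, "sig": tag}

def verify_credential(cred: dict) -> bool:
    """Relying party: accept the assertion only if the signature checks out."""
    expected = hmac.new(ISSUER_KEY, cred["assertion"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential("alice", {"age_over_18": True})
assert verify_credential(cred)          # untampered credential is accepted
cred["assertion"] = cred["assertion"].replace("alice", "mallory")
assert not verify_credential(cred)      # any tampering invalidates it
```

The point the strategy makes is exactly this separation of roles: the site you transact with never needs to hold your personal data, only a verifiable assertion from an issuer it trusts.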
"TSCP is honored to be part of this industry team that is focused on an initiative to protect and secure our national infrastructure," said Keith Ward, President and CEO of TSCP. "The NSTIC Pilot Program is a way for TSCP member companies, along with organizations and states that have joined the TSCP Team, to demonstrate key NSTIC concepts and identify barriers to adoption across technical, political, social, and economic domains. The TSCP approach brings together government, private and public sector participants to collaborate in achieving NSTIC's goals. A key challenge that crosses all domains is being able to increase security while preserving privacy.
"TSCP realized early on that there is no single government, agency or company that can address security in cyberspace on its own," said Ward. As a result, the TSCP team has engaged several non-member teaming partners who will add depth to the team's overall expertise. Those partners include Open Identity Exchange (OIX), Trusted Computing Group (TCG), National Association of State Chief Information Officers (NASCIO) and the All Hazards Consortium. He further stated, "TSCP's approach is a 'trusted architecture framework' - through various real-life case scenarios."
About TSCP: TSCP, Inc. is a nonprofit 501(C)(6) technical association comprised of defense and technology companies in the U.S. and Europe, as well as U.S. government departments and agencies and European government ministries. TSCP is a Government-industry partnership specifically focused on mitigating the risks related to compliance, complexity, cost and IT that are inherent in large-scale, collaborative programs that span national jurisdictions.
TSCP members include: BAE Systems, The Boeing Company, EADS, Lockheed Martin, Northrop Grumman, Raytheon, U.S. Dept. of Defense, General Services Administration, U.S. Secret Service, UK Ministry of Defence, the Netherlands Ministry of Defence, the French government, Microsoft, CA Technologies, ActivIdentity, Boldon James, Deep-Secure, FuGen Solutions, Gemalto, Intercede, NextLabs, NLR, PwC, Quest Software, Titus and Wave Systems.
For more information, visit TSCP on the web at http://www.tscp.org.
All product and company names herein may be trademarks of their registered owners.
NSA: Deploying a TPM Measurement Solution
Piloting right now?
http://ncsi.choicenoc.net/nsatc11/presentations/wednesday/real_world/white.pdf
Integrity Measurement
The Way Ahead, Knowing if your Systems Have Been Altered
Peter A. Loscocco
NSA Trusted Systems Research
(formerly known as National Information Assurance Research Lab)
Briefing to ITSEF, March 20, 2012
http://www.security-innovation.org/pdfs/Pete%20Loscocco%20Presentation.pdf
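The slides above are about TPM-based integrity measurement, whose core primitive is the PCR "extend" operation: each measured component's hash is folded into a register as PCR = H(PCR || H(component)), so the final value depends on every measurement and their order. A minimal sketch of that hash chain (SHA-256 assumed; a real TPM exposes this through its command interface, not Python):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: fold a new measurement's hash into the register."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Simulate measuring a boot chain into one PCR (starts at all zeros).
pcr = bytes(32)
for component in [b"bootloader", b"kernel", b"initrd"]:
    pcr = extend(pcr, component)

# Any altered component -- or a reordering -- yields a different final
# value, which is how a verifier knows the system has been altered.
tampered = bytes(32)
for component in [b"bootloader", b"EVIL kernel", b"initrd"]:
    tampered = extend(tampered, component)
assert pcr != tampered
```

This is the mechanism behind "knowing if your systems have been altered": the chained register can only be extended, never rewound, so a verifier comparing it against expected values detects any change in the measured boot path.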
Espi,
the following line will be deciphered
[}beidzWedR RUC gczS IXE75TB{]
[}beiJyXfwv q0 XJCm3v8 l0 F1
B- Enh2 V1 T2
K1 frg22D8 i2 9-
a1 tYFGVjp2f1 Q-{]
This is a very interesting and lucrative project
[}beiJyWvxR ohHB124Db7PSkvj6482 xYp0 aVM9v69D{]
Why choose Trusted Foundations ?
http://www.tl-mobility.com/IMG/pdf/TLM_Trusted-Foundations.pdf
* Works on all mobile and connected devices and supports any security use case.
* Proven solution, deployed by top-5 smartphone and tablet vendors and included on millions of shipped units.
* Production versions for Android™, Symbian™ OS, Windows Mobile™, and Linux, and can be ported easily to other environments.
* High performance combination of hardware and software security, supporting ARM® TrustZone® and ARM-based application processors and other processor architectures.
* Communicates with a Trusted Service Manager (TSM) infrastructure for easy and secure service management and Over-The-Air (OTA) deployment of trusted applications.
* Complies with GlobalPlatform, GSMA and Trusted Computing Group industry standards.
* Includes a uniform, platform-independent, rich Software Development Kit (SDK) that can be used by any security service provider.
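The architecture these bullets describe - a hardware-isolated secure environment that the normal OS can only invoke, never inspect - can be sketched abstractly: normal-world code sends a request through a monitor, which dispatches to a registered trusted application. The class and method names here are invented for illustration; real TrustZone uses an SMC instruction and GlobalPlatform-defined APIs, not this toy dispatcher.

```python
# Toy model of TEE dispatch: the normal world reaches trusted
# applications only through a monitor, never directly. Illustrative
# only; real TrustZone enforces the boundary in hardware.

class SecureMonitor:
    def __init__(self):
        self._trusted_apps = {}          # registry lives in the "secure world"

    def register(self, name, handler):
        self._trusted_apps[name] = handler

    def invoke(self, name, payload):
        """The only gateway from the normal world into secure services."""
        if name not in self._trusted_apps:
            raise PermissionError("no such trusted application")
        return self._trusted_apps[name](payload)

monitor = SecureMonitor()
monitor.register("sign", lambda data: "signed:" + data)   # trusted app

# Normal-world caller: can use the service, but cannot touch the registry.
assert monitor.invoke("sign", "payment-123") == "signed:payment-123"
try:
    monitor.invoke("dump_keys", "")
except PermissionError:
    pass  # unknown requests are refused at the boundary
```

The design point the marketing bullets are selling is that the SDK gives service providers this invoke-style interface uniformly, while the isolation itself comes from the hardware underneath.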
Post from dabears3
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=73714975
REPORT: Chinese Capabilities for Computer Network Operations and Cyber Espionage
March 8, 2012
http://www.uscc.gov/
"The U.S.-China Economic and Security Review Commission was created by Congress to report on the national security implications of the bilateral trade and economic relationship between the United States and the People’s Republic of China. For more information, visit www.uscc.gov."
Today the U.S.-China Economic and Security Review Commission released a report entitled: “Occupying the Information High Ground: Chinese Capabilities for Computer Network Operations and Cyber Espionage.” The report details how China is advancing its capabilities in computer network attack, defense, and exploitation and examines issues related to cybersecurity, China, and potential risks to U.S. national security and economic interests.
"The United States suffers from continual cyber operations sanctioned or tolerated by the Chinese government" said Commission Chairman Dennis Shea. "Our nation's national and economic security are threatened, and as the Chinese government funds research to improve its advanced cyber capabilities these threats will continue to grow. This report is timely as the United States Congress is currently considering cybersecurity legislation, and the Commission hopes that this work will be useful to the Congress as it deliberates on how to best protect our networks."
"The report highlights China's extensive development of cyber tools to advance the leadership's objectives” said Commissioner Michael Wessel. “It's getting harder and harder for China's leaders to claim ignorance and innocence as to the massive electronic reconnaissance and cyber intrusions activities directed by Chinese interests at the U.S. government and our private sector. The report identifies specific doctrinal intent as well as financial support for government-sponsored cyber espionage capabilities. There's clear and present danger that is increasing every day."
This report was prepared for the U.S.-China Economic and Security Review Commission by Northrop Grumman Corp, and is a follow-up to a 2009 report prepared for the Commission by Northrop Grumman on the “Capability of the People’s Republic of China to Conduct Cyber Warfare and Computer Network Exploitation.”
Report Conclusions
Among other things, the report concludes that:
* Chinese capabilities in computer network operations have advanced sufficiently to pose genuine risk to U.S. military operations in the event of a conflict;
* Chinese commercial firms, with foreign partners supplying critical technology and often sharing the cost of the R&D, are enabling the PLA to receive access to cutting edge research and technology; and
* The Chinese military’s close relationship with large Chinese telecommunications firms creates an avenue for state sponsored or state directed penetrations of supply chains for electronics supporting U.S. military, government, and civilian industry – with the potential to cause the catastrophic failure of systems and networks supporting critical infrastructure for national security or public safety.
Chinese Capabilities for Computer Network Operations and Cyber Espionage
http://www.uscc.gov/RFP/2012/USCC%20Report_Chinese_CapabilitiesforComputer_NetworkOperationsandCyberEspionage.pdf
Prepared for the U.S.-China Economic and Security Review Commission
by Northrop Grumman Corp.
March 7, 2012
The report finds:
(Here some snippets):
Computer network operations combined with sophisticated electronic warfare systems are increasingly an option for Chinese commanders as tools improve and more skilled personnel become available to the PLA. To counter sophisticated and multilayered U.S. C4ISR networks, China’s defense industries have devoted resources over the past fifteen years to developing space-based and network-based information warfare capabilities to target U.S. systems in detail.
* Calling space “the ultimate high ground,” the PLA has developed credible capabilities for direct-ascent kinetic strikes against orbiting satellites, ground-based laser strikes, and apparent ground-based laser optical countermeasures to imagery satellites.
* Additionally, joint PLA and civilian research into CNE and CNA tools and techniques may provide a more advanced means to penetrate unclassified networks supporting U.S. satellite ground stations.
* Computer network attack research and development has focused on stealthier means of deploying tools via more sophisticated rootkits, possibly delivering Basic Input/Output System (BIOS)-level exploitation and attacks on targeted computer systems.
[skip]
As the Chinese D-day draws closer, more direct offensive measures may be employed, possibly using tools that were pre-deployed via earlier CNE penetrations. CNE tools with BIOS destruct payloads emplaced on PACOM and TRANSCOM computers with an activation that is timed to correspond to other movements or phases of a larger Chinese campaign plan could create catastrophic hardware failures in key networks. CNE efforts against PACOM networks to understand the network topology and command relationships would provide the details as to where to place these tools to achieve the desired impact.
* BIOS destruct tools pre-placed via network reconnaissance and exploitation efforts performed earlier in this two-week CNO campaign might be activated to destroy the circuit boards of key systems.
* Chinese writings on information confrontation and network attack underscore the effectiveness of BIOS attacks as a means of destroying hardware components, such as the motherboard containing the microprocessors necessary for the systems’ operation.
* Tools designed to destroy the primary hard drive controller, overwrite CMOS RAM, and erase flash memory (the BIOS) would render the hardware itself completely inoperable, requiring a full replacement of motherboard components, not just an operating system reimaging, to restore the system to full functionality.
* Attacking multiple servers at a specific command, unit, or base would require IT personnel to obtain the necessary parts and physically replace the destroyed components. Performing this replacement during peacetime is a prolonged and expensive effort, but during a crisis the resulting outage could cause significant delays depending on the nature of the military unit or government agency targeted.
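The defensive counterpart to the BIOS attacks described in the report is firmware integrity measurement: hash the firmware image and compare it against a known-good value recorded at provisioning time, so a modified image is detected before it is trusted. A hedged sketch of that comparison (the stand-in image bytes and the idea of a simple "golden hash" record are assumptions; in practice the comparison is anchored in hardware such as a TPM, not in mutable software):

```python
import hashlib

# Sketch of firmware integrity checking: compare a firmware image's hash
# against a known-good ("golden") value captured when the system was in a
# trusted state. Illustrative only -- real deployments root this check in
# hardware rather than in software that the attacker could also modify.

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_firmware(image: bytes, golden_hash: str) -> bool:
    """Return True only if the image matches the recorded good hash."""
    return sha256_of(image) == golden_hash

pristine = b"\x55\xaa" + b"FIRMWARE-v1" * 100    # stand-in firmware image
golden = sha256_of(pristine)                     # recorded when known-good

assert check_firmware(pristine, golden)               # unmodified image passes
assert not check_firmware(pristine + b"\x00", golden) # any change is caught
```

A single flipped byte changes the digest completely, which is why the report's "BIOS destruct" scenario depends on the payload being emplaced and activated before anyone measures the firmware.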
From Weby:
Friday, March 09, 2012 11:10:32 PM
Re: awk post# 223872
Andy
Today is a perfect example of the Buffett Rule. Now this is not investment advice. I don't do investment advice. Stocks, especially tech stocks, can go to zero. They can also go into the many hundreds.
Some technologies are adopted slowly. Others only after long waits and tipping points. Each of us has the responsibility to read the tea leaves as they see them.
Some here are very honorable, others less so. People also have to be able to judge people --- and take the results that come with those judgments. I have a view of management that others do not share. I believe they owe me very little besides honesty and hard work which I expect of anybody I or they hire. It is cant that their first responsibility is to their investors. What does that mean? It might mean the folks who paid the company money before it went public. It might mean people who bought shares directly from the company. It might mean those of us who bought shares from market makers or other shareholders and never gave the company anything.
It might also mean that the company owes the investors nothing but hard work and best effort to put the money at greatest risk to get greatest return. However, it's up to us to hire the BoD, and the BoD to hire management. I have yet to see any of the bashers submit a timely resolution to actually place anybody on the board to do something differently.
Today, the stock went down on a million shares. I don't know why. I really don't care much why. I bought a significant amount of shares between 50 and 70 cents years ago and watched it go to 28 cents. Stocks change price for all kinds of reasons. There is reality. There is manipulation. There is perception. There are hard to see fundamentals. There is some blood in the street ---- or is it Jermart's ketchup?
I don't know!!!!! I suspect ketchup, but I don't know. If knowing was easy, everybody would make money in the market. Some here have made a lot on both sides of the trades. Today I suspect whoever was holding the price above $1.80 all day yesterday (UBS) sold high in the morning and bought it back lower in the afternoon. If it's true at all, it's only part of the story, but it could be the whole story too. One big trader (or three) making a larger profit than usual --- and freeing up money to throw into a rising NASDAQ. My point is that we don't know the meaning of any day's trading, or for that matter, any week's trading.
Awk, we both know that there are companies out there that will win and lose in the changed ecosystem. You change the stack --- which is now a done deal --- and you change the ecosystem. The ecosystem is the ecosystem as defined by Wintel, ARM, AMD, MSFT, HP, Dell, Samsung.
I may be wrong, but I haven't been this comfortable with Wave in a long time. My attitude is I am in a win win situation. The company is undervalued IMO on its patents, staff, and probable DoD contracts alone.
WEM is written into CRITICAL Federal standards. Seems nobody reads them until they are final except you and me. The company gives no signs of what's happening that would be anything like real information, but I do believe the hints that after years of working and working to create trusted computing -- there is sunshine just ahead for the investors. Right now there IS sunshine ahead, hard as some may want to shield their eyes from the dawn of another couple of possibly bad days.
Everybody always focuses on Sprague and BoD. I'll stick with Lark and Thibideau, and all the other brilliant investors whose pictures are on the Security Matters page. I'll stick with Willett whoever he works for next, and I'll stick with Admiral Inman and Lord Renwick, who I believe have made a great difference. Wave is a quiet security company that cannot shout to the world what the government and its customers do not want shouted. Things have happened 4X slower than I thought they would, BUT THEY HAVE AND WILL CONTINUE TO HAPPEN.
This is why I don't have to sell anything to sleep easy tonight. Of course, unlike Unixguy, I could be wrong.
Intel vying for Apple foundry business
Mark LaPedus
5/2/2011 1:36 PM EDT
http://www.eetimes.com/electronics-news/4215650/Intel-vying-for-Apple-foundry-business-?cid=NL_EETimesDaily
SAN JOSE, Calif. - At present, Samsung Electronics Co. Ltd. is making the Apple-designed A4 and A5 processors on a foundry basis for Apple Inc.
That could soon change. As reported, Apple and Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC) have entered into a foundry relationship for the A5 and follow-on chips, sources said.
Now, according to an analyst, another player is pursuing Apple's foundry business: Intel Corp. Intel is already supplying x86-based processors for Apple's PC line. Intel is also dabbling in the foundry business and has recently struck a deal with Achronix Semiconductor Corp.
''Based on a number of inputs, we believe Intel is also vying for Apple's foundry business,'' said Gus Richard, an analyst with Piper Jaffray & Co., in a new report.
''It makes strategic sense for both companies. The combination of Apple's growing demand and market share in smart phones and tablets gives Intel a position in these markets and drives the logic volume Intel needs to stay ahead in manufacturing,'' Richard said.
''Intel's manufacturing lead gives Apple an additional competitive advantage in these markets and distances it from Asian competitors that are knocking off its products,'' he said. ''Furthermore, it would also serve to weaken Samsung who is a significant competitive threat to both companies.''
Samsung will remain Apple's main foundry, at least for now. ''While it will take a few years for Apple to shift foundry suppliers, we believe Apple is shifting away from Samsung,'' he said. ''We believe TSMC will start getting revenue from Apple in Q4 of this year. We believe the recent patent lawsuit between the two companies is further evidence to support our belief that Apple is moving its silicon needs elsewhere.''
Samsung and TSMC each have the fab capacity to support Apple. The question is clear: Does Intel have the capacity?
''Samsung has just completed a 30K-40K wafer start per month logic plant in Austin Texas to support its foundry business of which Apple is its largest customer,'' he said. ''Based on the die size of Apple's A5 processor, Apple needs roughly 23K wafers a month for the A5. We believe that Apple moving its foundry business away from Samsung is what has recently driven Samsung to reduce equipment orders, as it will likely repurpose this capacity for memory.''
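The analyst's "roughly 23K wafers a month" figure can be sanity-checked with a standard dies-per-wafer estimate. Every numeric input below (A5 die area, yield, monthly unit volume) is an assumption chosen for illustration, not a figure from the article:

```python
import math

# Rough sanity check of the ~23K wafers/month claim. All inputs are
# assumptions: ~122 mm^2 A5 die, 300 mm wafers, 85% yield, 10M units/month.
DIE_AREA_MM2 = 122.0
WAFER_DIAMETER_MM = 300.0
YIELD = 0.85
UNITS_PER_MONTH = 10_000_000

def gross_dies_per_wafer(die_area, wafer_d):
    """Classic estimate: wafer area / die area, minus an edge-loss
    term proportional to the wafer circumference."""
    r = wafer_d / 2
    return int(math.pi * r * r / die_area
               - math.pi * wafer_d / math.sqrt(2 * die_area))

gross = gross_dies_per_wafer(DIE_AREA_MM2, WAFER_DIAMETER_MM)
good = int(gross * YIELD)                 # yielded dies per wafer
wafers_needed = UNITS_PER_MONTH / good    # wafers per month
print(gross, good, round(wafers_needed))
```

With these assumed inputs the estimate lands in the low twenty-thousands of wafers per month, the same ballpark as the figure quoted in the report.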
How hypervisors help integrate portable devices into the office environment
See also post I am rsponding to:
The expansion of ARM's TrustZone to include an "ARM Virtualization Extension" means that corporations will be able to use mobile devices within their infrastructure, centrally managed of course. The enterprise would deploy a "Hi-End" virtual space on the mobile device, not unlike an HAP platform, allowing the remote person to securely access corporate networks. Only devices with the managed virtualized space would be able to do so. Since this "Hi-End" virtualized space is hardware-separated from the regular "Low-End" browser owned by the mobile device user, there is no possibility of "contamination" (malware/viruses) of the "Hi-End" by/through the "Low-End".
This is in addition to the consumer space leveraging ARM's TrustZone security extension for secure transactions (Banking, wallet etc.)
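As a toy illustration of that separation (all class and method names below are invented, and Python objects obviously cannot enforce real hardware isolation), the "Low-End" world can only reach "Hi-End" functionality through a narrow monitor-call interface, and secrets held by the secure world are never exported:

```python
# Toy model of TrustZone-style world separation. Names are invented;
# this only illustrates the interface shape, not real hardware isolation.
class SecureWorld:
    """Stands in for the managed 'Hi-End' space."""
    def __init__(self):
        self._corporate_key = "s3cret"          # never leaves this world
        self._services = {"vpn_connect": self._vpn_connect}

    def _vpn_connect(self, user):
        # uses the key internally; only an opaque session handle escapes
        return f"session-for-{user}"

    def monitor_call(self, service, *args):
        """The single, narrow entry point from the normal world."""
        if service not in self._services:
            raise PermissionError("no such secure service")
        return self._services[service](*args)

class NormalWorld:
    """Stands in for the user-owned 'Low-End' environment."""
    def __init__(self, secure):
        self._secure = secure

    def open_corporate_session(self, user):
        return self._secure.monitor_call("vpn_connect", user)

nw = NormalWorld(SecureWorld())
session = nw.open_corporate_session("alice")
```

The design point the sketch makes: malware in the normal world can at most invoke the whitelisted services; it has no way to name, let alone read, the secure world's internal state.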
How hypervisors help integrate portable devices into the office environment
26 April 2011
http://www.newelectronics.co.uk/electronics-technology/how-hypervisors-help-integrate-portable-devices-into-the-office-environment/33374/
Increasingly, people are using smartphones and tablets for more than business calls: video conferencing, email, document editing, storage and oral presentations are just a few popular applications. This movement towards portable devices is a direct result of the trend towards enterprise mobility, with distributed workforces and portable workspaces.
Paravirtualization has proven prohibitively expensive and slow to market. The great news is that many current-generation smartphones and tablets support ARM's TrustZone technology, which provides a form of high-speed virtualisation. In addition, ARM's Virtualization Extensions, a complete hypervisor mode for mobile ARM applications processors, are due out in applications processors in 2012 and will further improve the platform for hypervisors.
In other words, the future of 'Bring Your Own' is bright.
Author
David Kleidermacher
David Kleidermacher is Green Hills Software's chief technology officer.
SandForce 2nd Generation SSD Processors Deliver Break-Through Client Computing User Experiences
SandForce Driven™ 6Gb/s SATA SSDs at CeBit Are First to Enable 500 MB/s Read and Write Speeds to Client Storage Media
MILPITAS, CA. – February 24, 2011 – SandForce® Inc., the pioneer of SSD (Solid State Drive) Processors that enable standard NAND flash deployment in enterprise, client, and industrial computing applications, today announced the much anticipated availability of its second-generation SF-2200 and SF-2100 SSD Processors optimized for SSDs deployed in client computing applications. The SF-2200 processors feature a 6 Gigabit-per-second (Gb/s) SATA host interface, an unprecedented sustained sequential read/write performance of up to 500 Megabytes per second (MB/s), and award-winning DuraClass™ Technology. The SF-2100 processors feature a 3 Gb/s SATA host interface, read/write performance up to 250 MB/s, and the same powerful DuraClass Technology. In addition to leading-edge performance, the SF-2200 and SF-2100 Client products support state-of-the-art, high-speed ONFi2 and Toggle flash interfaces in single-level & multi-level cell (SLC & MLC) NAND flash families from all major suppliers.
"With high profile products now incorporating SSDs as standard storage media and most other system vendors offering them as options, the market for client SSD applications is poised for growth as SSD prices decline,” said Joseph Unsworth, Research Director, NAND Flash & SSD at Gartner. “SSD controllers that can deliver superior performance and reliability without the dependence on DRAM will have a compelling value proposition across a wide range of client applications.”
The SF-2200 and SF-2100 Client SSD Processors address the needs of cost-sensitive client storage markets with many inherent enterprise-class features. These devices feature the highly sought-after SandForce DuraClass Technology including RAISE™ and patented and patent-pending DuraWrite™ features to deliver the ultimate in performance, endurance, reliability, and power management.
Additionally, SF-2200 and SF-2100 Client SSD Processors feature:
• Support for advanced 30nm- and 20nm-class NAND flash from all leading flash vendors with Asynch/ONFi1/ONFi2/Toggle interfaces with data transfer rates up to 166 Mega Transfers per second
• Trusted Computing Group (TCG) OPAL security with 256-bit AES encryption and automatic, line-rate double encryption with a drive-level password
• Advanced ECC engine correcting up to 55 bits per 512-byte sector to assure high data integrity and support for future generations of flash memory
• Power and performance optimization and tuning features to maximize mobile battery life
• Single-chip “DRAM-less” solution enabling highly compact and flexible designs
“As with our first-generation product, the new SF-2200 and SF-2100 Client SSD Processors break new ground in terms of reliability, performance, and affordability by optimizing access to the most advanced NAND flash technologies,” said Michael Raam, President and CEO for SandForce. “Manufacturers building client SSDs can now introduce even higher performance products that further optimize the computing user experience and enhance overall productivity which will continue to accelerate mainstream laptop and PC market adoption of SSDs.”
Live SF-2200 and SF-2100 Product Demonstrations at CeBit
SSD manufacturers and system OEMs will demonstrate their SandForce Driven™ SF-2200 and SF-2100 products at the CeBit show in Hanover, Germany, on March 1-5, 2011. Be sure to look for the various online product reviews that will follow those demonstrations.
About SandForce
SandForce is transforming data storage by pioneering the use of standard flash memory in enterprise, client, and industrial computing applications with its innovative SSD (Solid State Drive) Processors. By delivering unprecedented reliability, performance, and energy efficiency, SSDs based on patent-pending SandForce DuraClass technology unleash the full potential for mass-market adoption of SSDs using NAND flash memory. Founded in 2006, SandForce is funded by leading venture capital investors and first tier storage companies. For more information, visit SandForce at www.sandforce.com and follow SandForce on Facebook, LinkedIn, Twitter, and YouTube.
Thanks for posting this awk.
TrustedLogic: Trusted Execution Environment
http://www.trusted-logic.com/Presentations/Trusted_Execution_Environment_CColas_2008Sept18.pdf
Para-Virtualized TPM Sharing
By Dr. Jork Löser, Microsoft
http://os.inf.tu-dresden.de/EZAG/abstracts/abstract_20080314.xml
The talk introduces a technique that allows a hypervisor to safely share a TPM among its guest operating systems. Our design allows guests full use of the TPM in legacy-compliant or functionally equivalent form. The design also allows guests to use the authenticated-operation facilities of the TPM (attestation, sealed storage) to authenticate themselves and their hosting environment. Finally, our design and implementation makes use of the hardware TPM wherever possible, which means that guests can enjoy the hardware key protection offered by a physical TPM. In addition to superior protection for cryptographic keys our technique is also much simpler than a full soft-TPM implementation. The talk shows that a current TCG TPM 1.2 compliant TPM can be multiplexed easily and safely between multiple guest operating systems. However, the peculiar characteristics of the TPM mean that certain features (in particular those that involve PCRs) cannot be exposed unmodified, but instead need to be exposed in a functionally equivalent para-virtualized form. We provide an analysis of our reasoning on the right balance between the accuracy of virtualization, and the complexity of the resulting implementation.
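A minimal sketch of the multiplexing idea (this is an illustration, not Microsoft's actual design): each guest gets its own software PCR bank, and the hypervisor applies the TPM 1.2 extend rule, new_pcr = SHA1(old_pcr || measurement), per guest, keeping the physical PCRs for itself:

```python
import hashlib

# Sketch of para-virtualized PCR sharing: one virtual PCR bank per guest,
# multiplexed by the hypervisor. Illustrative only, not the talk's design.
PCR_COUNT = 24

class VirtualTPM:
    """Per-guest PCR bank using the TPM 1.2 extend rule:
    new_pcr = SHA1(old_pcr || measurement), with 20-byte SHA-1 PCRs."""
    def __init__(self):
        self.pcrs = [b"\x00" * 20 for _ in range(PCR_COUNT)]

    def extend(self, index, measurement):
        self.pcrs[index] = hashlib.sha1(self.pcrs[index] + measurement).digest()
        return self.pcrs[index]

class TPMMultiplexer:
    """Hypervisor side: routes each guest's extend to its own bank."""
    def __init__(self):
        self.guests = {}

    def extend(self, guest_id, index, measurement):
        vtpm = self.guests.setdefault(guest_id, VirtualTPM())
        return vtpm.extend(index, measurement)

mux = TPMMultiplexer()
a = mux.extend("guest-a", 0, b"bootloader-digest")
b = mux.extend("guest-b", 0, b"bootloader-digest")
# identical measurements give identical per-guest PCR values, but each
# guest's bank evolves independently from this point on
```

This is exactly the "functionally equivalent para-virtualized form" problem the abstract mentions: PCRs cannot simply be passed through to the hardware TPM, because every guest would then see every other guest's extends.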
VH: Follow-up on ARM-Intel - Authenticated Memory
A while ago Vacationhouse posted a PR discussing Intel Authenticated Memory for ARM "TrustZone" processors.
I just found this exceptional paper discussing the relevance of authenticated memory in the secure execution environment:
The role of secure memory in a trusted execution environment
Many of the attacks on mobile phones are traced to an attacker modifying data/code in the non-volatile memory. Flash memory-based security safeguards against such attacks, which is something other mobile security approaches cannot do.
Here's why...
TPM architecture and/or Intel's "Danbury" technology…
It appears that "Danbury" adds a whole new dimension to the "interoperability" question. It appears that "Danbury" is a totally separate architectural platform from the TPM architecture that needs its own management tools. And it appears that Wave's EMBASSY tools are the only ones that can handle both architectural platforms.
I am not yet quite clear on how this will really function, but it is clear to me now that a vPro 5.0 with "Danbury" really consists of two distinct platforms to be managed: the "TPM system" and "Danbury".
Wave-Intel press release: Here Steven Sprague talks about two distinct platforms within the same system.
Steven Sprague says:
"As trusted computing solutions evolve, cross-platform interoperability could represent an important opportunity," said Steven Sprague, president and CEO of Wave Systems. "We believe that the addition of hardware security that provides data-at-rest, strong authentication and management capabilities, built into the hardware, is an important step forward in supporting the growing need for security in the PC. We are keenly aware of the requirements for applications to interoperate among multiple secure platforms and are providing proof of concepts today to show how our applications can be adapted to a new generation of platforms from Intel. We are proud to be the first company demonstrating our flexible, interoperable, secure applications on the industry’s leading trusted platforms."
Assumption: In a way, "Danbury" likely functions similarly to Seagate's "DriveTrust" technology, in the sense that "Danbury" also incorporates some EMBASSY functionality. Also, most likely, the "Danbury" encryption keys are stored within the Intel chipset and never leave the chipset.
Question: Where does this leave Infineon and, moreover, where does it leave the rest of the PC OEMs?
Steven Sprague goes on to say:
We are keenly aware of the requirements for applications to interoperate among multiple secure platforms and are providing proof of concepts today to show how our applications can be adapted to a new generation of platforms from Intel. We are proud to be the first company demonstrating our flexible, interoperable, secure applications on the industry’s leading trusted platforms."
Also check out the highlighted part of a "blog exchange" that I had with Intel's Todd Christ. He says:
http://communities.intel.com/openport/blogs/proexpert/2007/12/14/5-reasons-to-look-forward-to-danbury-technology
Feb 11, 2008 11:36 AM Reply Todd Christ in response to: Andreas Kuhn
Hi Andreas - Danbury won't have interaction with a TPM, but rather utilize an integrated mechanism to control security access.
Danbury will become part of the AMT 5.0 stack and much like other AMT releases - AMT 5.0 will be backward compatible with previous versions of AMT - but the older versions will not be scaleable to the newer platforms.
From the Wave-Intel press release:
http://www.wave.com/news/press_archive/07/070918_IDF
Wave to Demonstrate Capabilities for Data Protection and Trusted Platform Module Support for Next-Generation Intel vPro Technology at Intel Developer Forum
Wave highlights new Intel hardware technologies while enhancing Intel® Active Management Technology with Wave’s key management capabilities
Lee, MA and San Francisco, CA (Intel Developer Forum, Booth #415-20) –September 18, 2007 – Wave Systems Corp. (NASDAQ: WAVX; www.wave.com ), a leader in delivering trusted computing applications and services with advanced products, infrastructure and solutions across multiple trusted platforms, today announced it will demonstrate the capabilities of its EMBASSY® technology on a development Intel® vPro™ processor technology platform.
This 2008 platform incorporates a new, integrated chipset and Trusted Platform Module (TPM), along with a new data encryption technology codenamed "Danbury Technology." Wave will show how EMBASSY technology can be adapted for data-at-rest, strong authentication and key management. Wave offers the only interoperable solution based upon the Trusted Computing Group’s specifications for trusted platforms that include TPM secure storage solutions and secure infrastructures as defined by the TCG.
"Protecting stored data is critical for businesses today, and Intel vPro Danbury technology will make encrypting hard drive data more secure and manageable," said Tom Quillin, director of Intel's Digital Office Ecosystem Enabling. "Intel is pleased that Wave is rapidly embracing this secure platform initiative."
"As trusted computing solutions evolve, cross-platform interoperability could represent an important opportunity," said Steven Sprague, president and CEO of Wave Systems. "We believe that the addition of hardware security that provides data-at-rest, strong authentication and management capabilities, built into the hardware, is an important step forward in supporting the growing need for security in the PC. We are keenly aware of the requirements for applications to interoperate among multiple secure platforms and are providing proof of concepts today to show how our applications can be adapted to a new generation of platforms from Intel. We are proud to be the first company demonstrating our flexible, interoperable, secure applications on the industry’s leading trusted platforms."
Wave’s demonstrations will be located in the Intel vPro Zone Pavilion, Wave Booth #415-20 at the Moscone Center North. Customers may make appointments by contacting Brian Berger, Wave’s EVP Marketing & Sales, at bberger@wavesys.com
What is EEE?
Over the past couple of days I have dug a bit into the EMBASSY Endpoint Enforcerer (EEE) and was in contact with Wave to obtain a better understanding of the technology and the associated business model. Below is the result of this undertaking:
The EEE is a bit different from all the other Wave tools. EEE is not really a "hard" product like, for example, the EMBASSY TrustDrive Manager (ETDM).
EEE, today, is an SDK (Software Developer Kit) for building a TNC client. It would reside and execute on the client machine. Whether it gets pushed to the client from the NAC/TNC server or not is probably up to the VPN/TNC/NAC vendor who would choose to implement it in their client software.
EEE provides the libraries and components needed to use the TPM and TSS to perform integrity measurements of the client utilizing the capabilities of the TPM to do hashing, signing, storing, etc.
The measurements could be of the transitive trust chain… bios, drivers, trust client, VPN client, OS, applications, etc. or the measurements could be of any executable or file, or hardware configuration of the devices. These integrity measurements performed by EEE would be provided in a TNC format which can then be sent to the TNC/NAC server which provides a policy enforcement point for determining whether the supplied integrity measurements are correct and adequate for allowing access to the network, a resource, or other protected items controlled by the server.
MS Server 2008 will support the TNC-defined protocols and processes, including those client measurements performed by EEE.
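The measure-report-decide flow described above might be sketched as follows. All names are invented for illustration, and JSON stands in for the real TNC message formats (IF-M / IF-TNCCS), which a real client would use instead:

```python
import hashlib
import json

# Hypothetical sketch of an EEE-style integrity-measurement flow:
# hash components on the client, ship a report, compare against policy
# at the NAC/TNC server. Names and formats are invented for illustration.
def measure(components):
    """Return {name: SHA-1 hex digest} for each measured component."""
    return {name: hashlib.sha1(data).hexdigest()
            for name, data in components.items()}

def build_report(measurements):
    """Serialize the measurements (a real TNC client would emit IF-M)."""
    return json.dumps(measurements, sort_keys=True)

def server_decision(report, policy):
    """Policy enforcement point: grant access only if every measured
    component matches its known-good value."""
    measured = json.loads(report)
    return all(policy.get(name) == digest
               for name, digest in measured.items())

components = {"bios": b"bios-image", "vpn-client": b"vpn-binary"}
policy = measure(components)                       # known-good values
ok = server_decision(build_report(measure(components)), policy)
tampered = dict(components, **{"vpn-client": b"patched-binary"})
bad = server_decision(build_report(measure(tampered)), policy)
```

In the real architecture the hashing, signing and storage would be anchored in the TPM rather than done in plain software, which is precisely what makes the report trustworthy to the server.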
EEE is not a product component which is included with ETS today. It is being ‘sold’ or offered to NAC vendors and others for integration into their own clients.
Wave has demonstrated some interesting applications of EEE for its own products. At N+I in May, Wave demonstrated using EEE to measure the pre-boot OS used in the Seagate FDE drive to make sure that no one had tampered with the code.
Wave also showed measuring and reporting the fact that an FDE drive was in the client machine and that the security settings were set "on". These kinds of measurements could be used for assuring compliance in the event that a drive or laptop is stolen, or for high value/sensitive applications, the measurements could be used to make sure that the client machine could be trusted before sending files or allowing transactions to a server, for instance.
So "endpoint integrity" with EEE is not necessarily a driver to force PC OEMs to enter into a bundling agreement with Wave for the client side. The PC client only needs an activated TPM for EEE to function.
EEE, as a trusted service, is aimed more at getting NAC vendors to be able to turn on and use the TPM, so that Wave can sell the TPM related infrastructure and tools.
ETS system structure and bundling packages
OEM Bundling Packages
1. ESC Basic Edition (TPM-OEM / SEAGATE ETDM) - No server support!
2. ETS 3.x (Dell Edition) and 6.x (Wave Edition) – With server support!
What is the system structure of ESC?
The CORE of the client software is ESC (EMBASSY Security Center).
ETS (EMBASSY Trust Suite) is a SUITE of additional applications that ALWAYS contains the ESC (EMBASSY Security Center).
The ESC (EMBASSY Security Center) IS THE ENGINE (application) to which ETDM (EMBASSY Trust Drive Manager) and EEE (EMBASSY Endpoint Enforcerer) are plugged in.
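The engine-plus-plug-ins structure described above might be sketched like this. The API is invented for illustration; Wave's actual interfaces are not public in this post:

```python
# Hypothetical sketch of the ESC-as-engine structure: ESC dispatches to
# registered plug-in components such as ETDM or EEE. API names invented.
class ESCEngine:
    """Stands in for the EMBASSY Security Center core."""
    def __init__(self):
        self.plugins = {}

    def register(self, name, plugin):
        self.plugins[name] = plugin

    def dispatch(self, name, action):
        """Route an action to the named plug-in."""
        if name not in self.plugins:
            raise KeyError(f"no plug-in registered as {name!r}")
        return self.plugins[name].handle(action)

class ETDMPlugin:
    """Stands in for the EMBASSY Trust Drive Manager component."""
    def handle(self, action):
        return f"ETDM handled {action}"

esc = ESCEngine()
esc.register("ETDM", ETDMPlugin())
result = esc.dispatch("ETDM", "unlock-drive")
```

The point of the shape is the one the post makes: ESC is always present as the engine, and products like ETDM or EEE are components plugged into it rather than standalone applications.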
"…Where existing tools largely fall short however is in their ability to monitor the whole enterprise, integrate with other tools and to keep track of and detect VMs to limit their spread. Detection tools are required to scan VMs and detect any vulnerabilities or malicious code. Again with reference to some of the newer Hyperjacking type attacks control of inter-virtual data needs to be monitored, with suspicious traffic reported and/or escalated. Communications between virtual components therefore need to be safeguarded with built-in encryption, digital signatures and hardware based root certificates provided by technologies such as the Trusted Computing initiative TPM (Trusted Platform Module) offering built in security, tamper detection and exploit prevention…"
Virtualization brings new security challenges
By David Frith, senior consultant, Siemens Enterprise Communications Limited
http://www.continuitycentral.com/feature0533.htm
Why virtualization matters
Although virtualization is not a new concept, its present implementations are changing the face of corporate IT through the reduction of the number of physical servers, the consolidation of rack space and the cutting of energy costs.
Virtualization allows the Virtual Machines (or VMs) running the applications to be divorced from their physical environment. A VM provides an isolated ‘sandbox’ for running applications, with Hypervisor processes managing multiple VM’s on each physical machine. This separation of functionality from physical location allows superior management and a pooling of resources with the ability to meet workload on demand. Virtualization technology is not just applicable to server applications within a data centre it applies across the enterprise be it within storage, security, the network or at the desktop.
The characteristics of virtualization
The use of virtualization technologies, however, causes the complexity of computing environments to mushroom, and as we all know, additional complexity breeds insecurity. Such obfuscation is an issue for both management and monitoring. With recent virtualization technologies evolving from mainframe origins to the standard server and desktop market, their widespread application is still relatively new. Full security analysis of many of the vendor offerings reveals large areas of unexplored code in which potential flaws could lurk; this is a 'known unknown', since the lack of live deployments until recently has resulted in little testing.
One of the great benefits of virtualization, as mentioned, is the pooling of resources with the ability to re-deploy VMs 'on the fly'. It is easy to create 'Gold' master VM images and replicate these as needed to increase computing resources. VMs can be deployed instantly and shuffled around the infrastructure in a similar way to transferring files; however, managing change and introducing security into this mix becomes incredibly complex.
New attacks
Attacks on virtualised systems have so far been few and far between, mainly due to only recent adoption; however, the number of installed systems is set to double by 2012 and proof-of-concept attacks are already in existence. Attacks on virtual systems can come from an extension of older forms of attack such as Denial of Service (DoS), buffer overflows, spyware, rootkits and/or Trojans, all prone to lurk beneath guest operating systems.
Additionally, new specific attacks include those from worms, guest hopping, Hypervisor malware and Hyperjacking, all involving the Hypervisor itself being exploited and used to subvert each VM it controls. As the volume of virtualised software increases, more exploits will be written and they in turn will become increasingly insidious (potentially compromising several VM systems at once).
Existing security
In the recent rush to deploy virtualization technologies, cost and mobility have been the top priorities, and many other implications (such as security, integration, management etc.) have still to be worked out. Existing security technologies typically revolve around static and IP-based controls (be they firewalls, IDSs, VLANs etc.); however, with the erosion of technology tied to a particular location, the tracking of IP or other static identifiers is no longer sufficient; indeed, most network and admission control technologies are not virtualization aware. Additionally, IT audit and compliance processes are now far more complex undertakings: what happens with offline or dormant VMs? Obviously these still need to be patched and reviewed on a timely basis, but how, if you can't keep track of VMs and the applications within them? It is clear that even with standard best practices such as enhanced change management, separation of duties and administration controls, conventional security measures fall far short.
The security requirements
With potential attacks first compromising one VM and then spreading to others, each needs to be protected with secure policies configured and adapted as needed. Here existing vendor tools can be used in the partitioning, isolating and segmenting of each VM with resource management controls to allocate, schedule, monitor and cap resources as required. Such tools can ensure that the VMs that require like levels of security are grouped together and that controls are in place to stop any unauthorised replication.
Where existing tools largely fall short however is in their ability to monitor the whole enterprise, integrate with other tools and to keep track of and detect VMs to limit their spread. Detection tools are required to scan VMs and detect any vulnerabilities or malicious code. Again with reference to some of the newer Hyperjacking type attacks control of inter-virtual data needs to be monitored, with suspicious traffic reported and/or escalated. Communications between virtual components therefore need to be safeguarded with built-in encryption, digital signatures and hardware based root certificates provided by technologies such as the Trusted Computing initiative TPM (Trusted Platform Module) offering built in security, tamper detection and exploit prevention.
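As a small illustration of authenticating inter-VM traffic, the sketch below signs each message with an HMAC. The key handling is deliberately simplified: on the hardware described above, the key would be sealed to a TPM rather than held in plain memory, and a production design would likely use certificates and asymmetric signatures:

```python
import hashlib
import hmac

# Illustrative sketch only: authenticate inter-VM messages with an HMAC.
# The shared key is a placeholder for a TPM-sealed secret.
def sign_message(key, payload):
    """Return (payload, tag) where tag = HMAC-SHA256(key, payload)."""
    return payload, hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(key, payload, tag):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"tpm-sealed-secret"                     # placeholder key material
payload, tag = sign_message(key, b"vm1->vm2: migrate request")
ok = verify_message(key, payload, tag)          # genuine message
forged = verify_message(key, b"forged payload", tag)  # tampered message
```

Any VM (or hyperjacked component) that alters a message in transit without the key produces a tag mismatch, which is the kind of suspicious traffic the article says should be reported or escalated.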
Management tools are required to provision VMs as necessary together with their associated security settings; such tools also need to map interdependencies and data flows, ensuring that, with all the complexity, administrators do not lose an understanding of their environment.
With VMs being deployed and re-deployed, patching tools are also required. The need to introduce timely patches is ever more critical to reduce attack surfaces and ensure best-practice compliance. However because of the resulting downtimes or infrastructure complications many applications are difficult to patch in a timely way, therefore new technologies such as inline patch proxying and application correction (modifying data in midstream) have been developed to help mitigate such issues.
In essence the old adage of combined layers of complementary countermeasures applies, protecting the physical devices, the Hypervisors and the Virtual Machines (VMs). It is just that these defences need to be provided dynamically with security policies and settings following and surrounding each newly mobile VM.
Conclusion
The complexity and dynamic nature of virtualised environments means that new threats and vulnerabilities have appeared and will increasingly manifest themselves. Because traditional security practices only go so far, new architectural models, design practices and security tools are required. The existing tools, however, are generally immature and not yet certified; while such vendors and their tools need to evolve, the market also needs to educate itself, raising awareness of potential issues, new vulnerabilities, evolving threats and, where necessary, pressuring the vendors to enhance their security offerings.
Siemens Enterprise Communications Limited is exhibiting at Infosecurity Europe 2008. Now in its 13th year, the show continues to provide an education programme, new products and services and over 300 exhibitors. Held on the 22nd – 24th April 2008 in the Grand Hall, Olympia, this is a must attend event for all professionals involved in information security. http://www.infosec.co.uk
Single secure CPU safeguards payment terminals
http://www.electronicstalk.com/news/trg/trg101.html
Security solution for embedded systems in payment terminal market targets applications on wireless devices
12 November 2007
Trusted Logic and Trango Virtual Processors have teamed up to provide a complete security solution for embedded systems. The companies will focus on the payment terminal market, with particular applications on wireless devices. Security solutions based on Trango's secure virtualisation solution, coupled with Trusted Logic's Security Module, will allow OEMs and device manufacturers to create highly competitive, scalable products of an outstanding level of security, which are also easy to certify.
In the payment terminal market, the growing adoption of rich operating systems such as Linux and Windows CE, coupled with the tough certification requirements of both the payment card industry PIN entry device (PCI-PED) specification and EMV specifications, has driven system manufacturers to use a dedicated security processor to ensure that critical security functions and hardware such as user interfaces (eg keypads, LCD screens) cannot be corrupted or compromised.
This method guarantees safe payment transactions, but at a high cost in both hardware and development resources.
Representing a breakthrough from the traditional dual-chip approach, the Trango Hypervisor enables integration, on a single secure CPU, of a rich OS running in parallel with certified applications, maintaining the same level of security and the same certification process as a dual-chip platform.
This architecture-independent joint solution relies on the Trango Hypervisor to provide multiple secure execution environments, each of which contains its own virtual processing unit capable of running an OS, applications and drivers.
Running on top of its own virtual processor, Trusted Logic Security Module offers a dedicated trusted execution environment interfacing with hardware security features such as cryptographic hardware and secure user interface, and where certified value-added applications, such as payment applications, can run safely, whether they be native or interpreted.
Ensuring the protection of both user interface and of the complete payment process, the joint solution enables the cost-effective implementation of the payment function not only on payment terminals, but also on mobile phones, so that end users can safely interact with their device.
'Given the payment industry's rigorous security and certification requirements, our partnership with Trusted Logic is of tremendous value for customers who can benefit from their proven security expertise and experience', says Pierre Coulombeau, Chief Operating Officer at Trango Virtual Processors.
'Trango Hypervisor enhances TL Security Module by enabling a number of isolated execution environments, thus offering the combined benefits of security and multicore chips on a lower cost, single core chip', adds Dominique Bolignano, CEO of Trusted Logic.
TrustedLogic and Trango hypervisor
http://www.windowsfordevices.com/news/NS7769741041.html
Intel "Danbury" and WAVX
See here for details...
TrustZone: Detailed Q&A with Tiago Alves
Here... a pdf file of this very interesting interview.
I had intended this board to serve as a "technical base" concerning trusted computing. Unfortunately, not many took up this offer and participated. So I use it as a reference to earlier happenings...
Awk, this is my first visit to this board. I was led here by xpoint's recent posts. I am delighted to find another board of common interest, but alas, it looks like this one hasn't caught on, at least for the past 9 months. What's up here?
Regards,
AA
A Tree of Trust rooted in Extended Trusted Computing
Abstract—Trusted Computing and its associated technologies are rapidly gaining momentum in the computing world. While this initiative is based on detailed security analyses, it is sometimes unclear what is meant by trust in the context of use of the technology. We study the structure of the concept of trust in the largest sense in the Extended Trusted Computing paradigm, which combines Trusted Computing and Virtualisation technologies. We propose the Tree of Trust (ToT) concept and notation in order to represent the Extended Trusted Computing platform's trust structure. A ToT is a tree whose nodes represent the various platform components, from the hardware TPM up to the running applications, annotated with trust and security statements. The ToT can be used to better understand the trust that one should put into the platform, or even to reorganise the platform according to certain constraints.
http://www.isg.rhul.ac.uk/~uqai221/ACSF2007.pdf
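The paper's Tree of Trust can be illustrated with a small sketch: a tree whose nodes are platform components (from the hardware TPM at the root up through the hypervisor to running applications), each annotated with trust and security statements. This is a minimal illustration of the concept, not the paper's own notation; all component names and annotations below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ToTNode:
    """One node in a Tree of Trust: a platform component plus its annotations."""
    component: str
    statements: list = field(default_factory=list)  # trust/security statements
    children: list = field(default_factory=list)

    def add_child(self, child):
        self.children.append(child)
        return child

    def walk(self, depth=0):
        """Yield (depth, node) in pre-order, starting from the root (the TPM)."""
        yield depth, self
        for c in self.children:
            yield from c.walk(depth + 1)

# Build an illustrative platform: TPM -> hypervisor -> two guest VMs.
root = ToTNode("Hardware TPM", ["tamper-resistant", "certified"])
vmm = root.add_child(ToTNode("Hypervisor", ["isolates guests"]))
vmm.add_child(ToTNode("Trusted VM", ["measured at launch"]))
vmm.add_child(ToTNode("Legacy VM", ["untrusted"]))

# Print the tree, indented by depth.
for depth, node in root.walk():
    print("  " * depth + f"{node.component}: {', '.join(node.statements)}")
```

Walking the tree top-down mirrors how trust in an upper component is derived from the components beneath it, which is the reorganisation the paper's ToT notation is meant to support.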
Establishing Mobile Security
By Janne Uusilehto, head of product security at Nokia.
December 24, 2006
http://www.drdobbs.com/dept/security/196701910?pgno=1
Once implemented correctly, the newly developed Mobile Trusted Module (MTM) specification can protect against theft and malicious attacks as end users send, receive, store and handle sensitive data.
An industry-wide open standard for mobile phone security promises to enable mobile phone information security assurance. Developed by the Trusted Computing Group's (TCG) Mobile Phone Work Group (MPWG), the Mobile Trusted Module (MTM) specification's goal is to establish trust in a platform's ability to protect its information and functional assets, and to validate that protection capability.
Members of the Mobile Phone Work Group contributed a considerable amount of effort to develop the specification, based on a very clear vision of future mobile communications. They view the specification as an enabler to the growth of third-party service providers and the means to significantly influence the marketplace. The ultimate benefits to consumers are improved protection from theft and malicious attacks as they send, receive, store and handle sensitive data. Mobile equipment suppliers and network providers now have a critical tool to build trust.
Aside from a little cosmetic fine tuning, the Mobile Trusted Module (MTM) specification is essentially 99 percent final, based on the 0.9 version published in Sept. 2006. As a result, every company in the mobile technology community should start considering how to proceed to take advantage of the MTM specification and to implement improved security in their next-generation products.
Establishing trust
While the Mobile Trusted Module is very new, it has its basis in the well-established efforts of the TCG. TCG members develop and promote open, vendor-neutral, industry-standard specifications for trusted computing building blocks and software interfaces across multiple platforms, peripherals and devices. Member companies include handset makers, service providers, silicon suppliers, and others.
Targeting more secure computing environments without compromising functional integrity, privacy, or individual rights, TCG's primary goal is to help users protect information assets such as data, passwords and keys from external software attacks and physical theft. To achieve this, TCG's Trusted Platform Module (TPM) specification, versions 1.1b and 1.2, provides the foundation of trust for the efforts of the other TCG work groups. The TPM specification has been widely implemented in integrated circuits (ICs) that have been installed in some 50 million personal computers, and is now shipped in virtually every enterprise PC. With the Mobile Trusted Module (MTM) specification, the Mobile Phone Work Group has extended the TCG specifications to support mobile phones.
For entire article click on this link:
http://www.drdobbs.com/dept/security/196701910?pgno=1
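The "foundation of trust" the article attributes to the TPM rests on the TCG measure-then-extend pattern: each boot stage hashes (measures) the next stage before running it and folds that measurement into a Platform Configuration Register, where TPM 1.2 defines extend as new PCR = SHA-1(old PCR || measurement). The sketch below illustrates just that hash chain; the stage names are illustrative, and a real TPM performs the extend inside the chip, not in software.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: new PCR value = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

# PCRs start at all zeros at power-on; each boot stage is measured
# (hashed) and extended into the register in order.
pcr = bytes(20)
for stage in [b"boot ROM", b"bootloader", b"OS kernel"]:
    measurement = hashlib.sha1(stage).digest()  # hash of the stage's code
    pcr = pcr_extend(pcr, measurement)

print(pcr.hex())
```

Because extend is order-dependent and one-way, the final PCR value commits to the entire boot sequence: changing any stage, or the order of stages, yields a different value, which is what lets a verifier validate the platform's protection capability.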
Followers: 25
Posts (Today): 0
Posts (Total): 447
Created: 02/03/04
Type: Premium
Moderator: awk
Assistants: Bull_Dolphin