Conversations in Security
Date: December 02, 2004
Source: Network Computing
By: the CMP team
http://www.crime-research.org/news/02.12.2004/821/
“Security is hard work,” exclaimed Bruce Schneier, CTO and founder of Counterpane, at the recent CSI-Asia Conference & Exhibition 2004. We’ve gathered in these pages the CSI-Asia speakers’ views on technology roadmaps, security trends in the coming months, and their thoughts on secure processes.
Computer Associates
We’ve got to start looking at an integrated “network systems, storage, and security management” approach, says Ron Moritz, Computer Associates’s (CA) chief security strategist.
This approach is preferred over getting seven to eight different products from point-solution vendors—a trend that Moritz sees happening.
He observes that the security market today looks a lot like the network systems management (NSM) market did 10 years ago, with thousands of solutions available.
“I believe the security world is moving mid-way to end-to-end security suites,” he adds. “Security will be integrated into enterprise infrastructure management to become ‘network systems, storage, security management’. We’re two to three years out from this, and from a recognition that security is like any business process you do.”
CA provides three broad categories of security products: security of exclusion, covered under eTrust Threat Management; security of inclusion, under Identity and Access Management; and security of accountability, covered by Security Information Management.
Its strategy is to take security elements like antivirus, anti-spyware, anti-spam, and Web filtering, and “sell them as a single secure content management platform, rather than different products,” says Moritz.
Given this trend, Moritz believes startups that specialise in a niche security technology area, such as phishing, spyware, and spam, will become acquisition targets.
“The days of startups becoming big security standalone companies are over,” he ventures. “You will see innovations but none of the classic disruptive technologies.”
As CA works on its security software, it is also making sure it develops secure software—a more effective way to stall cyber-crime.
To get secure software, Moritz says that an overhaul of current software engineering mentality and practices is necessary to improve software quality and security.
Within CA, some developer groups are focused on extreme programming—they write the test kits to test the software, even before the first line of code is written—and then extend these kits to other groups in CA.
“We’re doing it...group by group. My group is developing internal white papers encapsulating best ideas from the groups,” adds Moritz. “The key driver is the need to capture and publish these because customers are asking us what we’re doing to deliver secure products.” — Jorina Choy
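As an illustration of the test-first discipline Moritz describes (a minimal sketch only; the function and tests here are hypothetical, not CA's code), the tests exist before the implementation and define what it must do:

```python
import unittest

# Hypothetical test-first example: these tests are written before
# sanitize_username() exists, and the implementation is then written
# to make them pass.

def sanitize_username(raw: str) -> str:
    """Strip whitespace and reject characters outside a safe allowlist."""
    cleaned = raw.strip()
    if not cleaned.isalnum():
        raise ValueError("username must be alphanumeric")
    return cleaned.lower()

class TestSanitizeUsername(unittest.TestCase):
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(sanitize_username("  Alice42 "), "alice42")

    def test_rejects_injection_characters(self):
        with self.assertRaises(ValueError):
            sanitize_username("alice; DROP TABLE users")

if __name__ == "__main__":
    unittest.main()
```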
Symantec
Symantec believes there will be a shift towards integrated security appliances over the next 12 months.
Linda McCarthy, executive security advisor for the office of the CTO, Symantec, says the reason is that companies find it difficult to manage many appliances.
“It used to be that you could just put a firewall up to protect yourself, but today, the amount of different technology needed to protect an enterprise is staggering,” she elaborates. “There’s virus protection, intrusion prevention, policy management, and a whole lot of appliances and devices to manage. End-users are telling us enough is enough; we don’t have the time to properly manage them anymore.”
However, she does not think there will be one security appliance that can do it all. “That would be nice! But such a silver bullet is a pretty tough thing to do; you are talking about a lot of different things in one product and some companies just do not need all that.”
McCarthy also believes that security is difficult for some companies, and that it will not be possible for them to handle it all on their own. Hence, she believes security outsourcing will be on the agenda of security professionals in the near future.
“But whether the company should outsource everything or not would depend on what they want and the company’s culture,” she adds.
As for product development cycles always being behind the latest threats, McCarthy says that companies should start thinking a few steps ahead of hackers.
This can be done by better understanding their networks and operating systems, and proactively seeking out the possible vulnerabilities within.
“Start forming teams with the aim of hunting down vulnerabilities; probe and think about what can bring your business down, and then protect the company against that,” McCarthy advises. — Sng Chee Khiang
Cisco Systems
Cisco Systems’ direction for the next year sees a continuance in the development of “self-defending networks”, says Bernie Trudel, security consultant, Asia-Pacific, Cisco Systems.
The company will focus on automating protection via a proactive approach to security, using techniques like “scrubbing” to identify and eliminate traffic associated with DDoS attacks.
Trudel says that three main capabilities ought to be present in a security product: detection, policy decisions based on what is detected, and enforcement of the policy. He adds that these activities would be spread throughout a self-defending network.
“What you are going to see from Cisco is more and more of our network products having self-defending mechanisms in them,” says Trudel. “Our network products will have the ability to detect attacks, and also have the ability to enforce policies against these attacks.”
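Cisco has not published pseudocode for this; as a rough illustration only, the detect/decide/enforce cycle Trudel describes might look like the sketch below, in which every event type, action name, and function is invented:

```python
# Illustrative detect -> decide -> enforce loop; all names are invented
# for this sketch and do not correspond to any Cisco product API.

BLOCK, RATE_LIMIT, ALLOW = "block", "rate-limit", "allow"

def decide(event: dict) -> str:
    """Map a detected event to a policy action."""
    if event["type"] == "ddos_flood":
        return BLOCK          # "scrubbing": drop traffic matching the attack
    if event["type"] == "port_scan":
        return RATE_LIMIT
    return ALLOW

def enforce(action: str, source: str) -> None:
    print(f"{action} traffic from {source}")

for event in [{"type": "ddos_flood", "source": "10.0.0.7"},
              {"type": "port_scan", "source": "10.0.0.9"}]:
    enforce(decide(event), event["source"])
```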
Worms will continue to dominate in the next year, exploiting vulnerabilities even faster than today. Trudel believes that there will be more sophisticated social engineering attacks, such as phishing.
While worms have so far been used to wreak havoc or farm botnets, Trudel is also concerned that they may, in the future, carry damaging payloads, such as the ability to erase disk drives.
Stating that security is more than just technology, Trudel underscores the importance of putting processes in place and having best practices, as well as increasing user awareness of security. “Users have to become more paranoid about how they use the Internet, about responding to unsolicited e-mail,” he says.
Coupled with technology, these processes can help protect organisations against the latest threats in the long run, offers Trudel.
Deploying technology, he stresses, is not about finding a silver bullet to solve security problems, but about putting multiple layers of defence in place. He believes that in five to ten years, we will see a greater integration of best practices and processes with technology into operating systems, applications, and networks, the result of the accumulated experience of past users. — Jeffrey Lim
Computer Security Institute
If you are thinking of implementing radio frequency identification (RFID) technology within your organisation and have yet to consider the security implications, you could be treading on dangerous ground.
Issuing this warning was John O’Leary, director of education at the Computer Security Institute (CSI).
He points out that RFID involves the access of information wirelessly, and thus should be subject to the same concerns associated with other types of wireless technologies.
This is even more crucial as the usage of RFID technology extends into areas such as air ticketing, toll collection, physical access control, electronic article surveillance, animal identification, and even waste disposal. “RFID chips, in the billions, will generate mountains of information. How do we protect the information? What do we share with trading partners? And what do we do with all the RFID chips we generate?” he posits.
Business rules will be needed to manage and direct the flood of information. RFID technology is also liable to malfunction or be subject to misuse, so organisations have to be prepared to handle such problems should they arise. For instance, people with malicious intent could use immobilisers to disrupt delivery fleets, causing damage to the organisation, says O’Leary.
Encrypting the data transmitted could be one way to prevent outsiders from accessing the information.
One would also have to balance the cost of encryption, infrastructure, and operation against the value of the information being protected. And if there is information to be protected, who should be in charge of it, asks O’Leary.
“The business manager responsible for the application for which RFID is being implemented must be involved in the discussion of protecting RFID data and its metadata.
Likewise, the information security manager must be involved in selecting the methods to secure what the business manager deems necessary,” he advises.
Users definitely have to start thinking about security as well, if RFID implementation is on the cards. Organisations could bear the impact of regulatory and legal infringements, if they fail to adequately protect information. “You don’t want to be made a business case study of how to do it wrong,” O’Leary quips. — Jeanne Lim
Counterpane
“Security is hard [work],” acknowledges Bruce Schneier, CTO of Counterpane Internet Security, referring to how companies continually play catch-up to network attacks.
But there is help at hand. First, users have to accept security as a chain that will break at its weakest point. While a truism, Schneier argues that determining the weakest point is subjective.
“Securing the weakest point depends on the profile of your attacker,” he elaborates. Defending against someone with few resources and a lot of time is different from defending against someone with ample resources and time.
An attacker’s level of expertise also plays a role in deciding which parts of your network are the weaker links. Hence, understanding threats means understanding an attacker’s motivations.
If an attacker’s motivation is known, it is easier to plan and design countermeasures to suit each attacker’s profile. Too often, notes Schneier, companies secure the things that are obvious, or their network analyses are not adequate to discover the weakest links.
Likening security to a labyrinth of systems, with much interdependence between them, Schneier says security systems are less important in how they work than in how they fail. He cites the lack of a response plan to deal with failures as a common problem for users.
Testing that system against frequent failures, or checking whether teams respond well to infrequent failures, helps prepare them for genuine attacks.
Another interesting trend is toward outsourced services. “Every product needs a service attached to it,” claims Schneier. He predicts that in five to 10 years, security for all computing systems will be outsourced; companies can outsource the function of securing their networks, he adds, but not the responsibility.
Finally, having trusted people in the company remained a key security consideration. “[Security’s] really about people,” says Schneier, although there is a danger that these trusted personnel could subvert security.
Put people in positions of trust, but give them only as much knowledge as they need to do their jobs effectively, he advises, or assign them overlapping spheres of trust with other trusted staff to address the dangers posed.
OT: Intel/LaGrande
http://www.cbronline.com/article_news.asp?guid=C8A9BFAA-8218-418F-90CB-9325FA818302
The chip vendor has disclosed further details of its previously vague LaGrande security technology, and sketched out its plans for the Active Management Technology it hinted at during its most recent Intel Developer Forum.
At the same time, it has hinted at further areas the *T strategy could expand into.
Intel this year has gone cold turkey on the addiction to speed that has historically dominated its CPU roadmap. The *T technologies (the * as in wildcard) have been pushed to the fore, together with the vendor's shift to dual-core and multicore technologies next year, as part of a "platformization" approach.
Existing *Ts include Hyper-Threading Technology and Extended Memory 64 Technology (EM64T), and the vendor has made a lot of noise about its Vanderpool virtualization technology.
Ron Curry, director of marketing for Intel's Corporate Technology Group, has confirmed that the vendor still plans to debut the LaGrande security technology in the "Longhorn timeframe", ie sometime in 2006. Other operating systems, he said, would be able to take advantage of the technology around the same time.
In the case of EM64T, the technology was already built into processors before Intel turned it on. Given the fluid nature of Longhorn's delivery date, it seems reasonable to expect LaGrande will be integrated into Intel's product line ahead of time. Curry would not comment on this, though he said developers would of course need LaGrande-enabled machines on which to develop their products.
LaGrande will initially be targeted at business desktops and notebooks, said Curry. "I think consumers will eventually find value [in LaGrande], but at the moment consumers haven't made the connection between the need for this."
He also said it would be immaterial whether users were running 32-bit or 64-bit software on the technology.
Curry also gave some more details on the Active Management Technology which Intel fleetingly referred to at IDF. AMT is part of the vendor's digital office initiative, which aims to offer "embedded IT". AMT will include hardware and software technologies which allow IT managers "out of band" system access and management, regardless of whether the system is turned on or not.
Curry did not give a timeframe for AMT's launch, saying it would be some time in the next one to three years.
Curry also hinted at other areas in which Intel could develop further *Ts. He said the vendor was looking at a range of possibilities including 3D and animated graphics, data mining, network processing, and speech recognition and synthesis.
With the shift to "beyond" multicore computing, such applications could potentially be handled by dedicated cores within the system, he said, or spread across the whole system.
OT: has anyone heard from scorpio? tnx e/
eamonnshute,
did you get my email?
Doma, slightly loaded??
Your questions or the TCG members??
New TPM-enabled machines from HP:
HP Compaq nc8200 Notebook PC
Configuration
· Intel 750 (1.87GHz) Pentium M Processor
· Intel 915 PM Chipset
· 512MB DDR2 Memory
· A large, brilliant 15.4" WSXGA+ Widescreen Display
· ATI Mobility Radeon X600 Graphics with 64MB of Dedicated Video Memory
· 60GB 5400RPM HDD
· Intel 802.11 b/g Wireless LAN
· Integrated Bluetooth
· 10/100/1000 NIC
· Embedded TPM Security Chip
· Integrated Smart Card Reader
· Secure Digital (SD) Slot
· Compatibility with Next Generation Docking and Power Solutions
Product Overview
The HP Compaq nc8200 Notebook PC offers users high performance processing and graphics capabilities in a travel friendly, thin and wide form factor. Mobile professionals will like the thin, sleek design combined with the advantages of a 15.4-inch widescreen display, ultra slim MultiBay II drive, and desktop equivalent performance. IT managers will value the full portfolio of integrated security options, breadth of integrated wireless choices, and unsurpassed commonality and consistency with the entire enterprise notebook line. The HP Compaq nc8200 Notebook PC is the ideal choice for enterprise-wide deployment to mobile users who need the power of a desktop and the convenience of a notebook.
Key features include:
New thin and light design
15.4” Widescreen Display
Magnesium display enclosure
HP Mobile Data Protection System
Integrated Smart Card Reader and Embedded TPM Security Chip
Enhanced scratch resistance through the use of In-mold lamination
Panel ‘anti-scuff’ pads
Intel Pentium M Processors
Intel 915 PM Chipset
Discrete PCI Express Graphics from ATI
Configuration - nc4200
· Next generation Intel Centrino mobile technology with Intel Pentium M processors
· Mobile Intel 915GM Express Chipset
· Intel Graphics Media Accelerator 900 with up to 64MB of shared system video memory
· 12.1" XGA display
· 60GB 5400 rpm HDD
· 512MB DDRII memory
· Integrated MiniPCI based wireless LAN
· Integrated Bluetooth
· 10/100/1000 NIC
· Integrated TPM embedded security chip
Configuration - tc4200
· Next generation Intel Centrino mobile technology with Intel Pentium M processors
· Mobile Intel 915GM Express Chipset
· Intel Graphics Media Accelerator 900 with up to 64MB of shared system video memory
· 12.1" XGA Wide Viewing Angle display
· 60GB 5400 rpm HDD
· 512MB DDRII memory
· Integrated MiniPCI based wireless LAN
· Integrated Bluetooth
· 10/100/1000 NIC
· Integrated TPM embedded security chip
uhh, cpa, don't be so thick
We're talking about this one:
November 19, 2003 Wave Systems Completes $7.1 Million Private Placement Financing
Not this one:
Wave Systems Files $25 Million Shelf Registration
Lee, MA - April 15, 2004 – Wave Systems Corp. (NASDAQ: WAVX – www.wave.com) announced today that it has filed a $25 million shelf Registration Statement on Form S-3 with the Securities and Exchange Commission. Once declared effective by the Securities and Exchange Commission, the shelf registration statement would permit Wave to sell, in one or more public offerings, up to $25 million in aggregate value of its Class A common stock, warrants to purchase its Class A common stock or a combination of both. Wave has no current plans or agreements regarding the sale of the securities registered on this shelf registration statement.
cpa, zen is correct
fwiw, I know an investor that only got half his requested shares during the November, 2003 private placement.
Regulatory Climate A Boon For Software
By Riva Richmond
Of DOW JONES NEWSWIRES
NEW YORK (Dow Jones)--As public companies scramble to revamp information
technology systems to meet Sarbanes-Oxley regulatory requirements, some
security software makers see a windfall coming their way.
The Sarbanes-Oxley Act of 2002, a sweeping law designed in the wake of
financial scandals to make public companies more accountable to shareholders,
includes a number of provisions that are causing companies to assert more
control over data in their computer systems. But a key milestone came last
week when a provision took effect requiring many top executives to affirm
personally that their companies have effective financial controls in place.
Requirements like this have already meant millions in spending. But so far,
much of the money has gone to extra staff, lawyers, consultants and outside
service providers, as companies have tried to get a handle on the rules and
comply this year. Industry watchers say their next step will be to buy
technology to ease compliance for the long term, make it less costly and even
reap some operational benefits.
Analysts say top software priorities, after logging and auditing tools, will
be "internal" security products. These programs allow companies to control
which employees can access what data or areas of a network, monitor the
content of employee communications and alert managers to potential
infractions. Among the software makers who stand to benefit are Computer
Associates International Inc. (CA), International Business Machines Corp.
(IBM), VeriSign Inc. (VRSN), RSA Security Inc. (RSAS), Netegrity Inc. (NETE),
and Entrust Inc. (ENTU).
"Sarbanes-Oxley is kind of like Y2K all over again," says John Pescatore, an
analyst at Gartner Inc., referring to the rush to buy and update technology
gear ahead of the year 2000. But this time technology staffs are liberally
invoking compliance as a reason to get "new toys."
Though internal security systems top the spending list, technologies
designed to protect organizations from malicious outsiders - such as
vulnerability-assessment and anti-spyware tools - could also get a bit of a
lift. Analysts say implicit requirements for better data security in a number
of laws, including the Health Insurance Portability and Accountability Act and
Gramm-Leach-Bliley for the financial-services industry, are pushing nervous
corporate lawyers to provide increasingly conservative advice to bolster
defenses.
"Security is the underpinning of all of these different regulations," says
Frederik Soendergaard-Jensen, program director of IBM's Risk & Compliance
Council.
"The regulations, whether they're HIPAA for privacy and security or Sarbanes
for protecting the value of the investor, are really just there to tell people
to do what they ought to have been doing all along," says Christian Byrnes,
vice president for security programs at IT-advisory firm META Group Inc. of La
Jolla, Calif. "Well-managed, well-secured companies can meet Sarbanes-Oxley
criteria typically by improving the documentation of what they do. However,
for the vast majority of companies, there's a lot more work."
AMR Research estimates spending on Sarbanes-Oxley compliance of about $5.5
billion this year. While 77% of that will go to staffing and outsourced
services and only 19% to technology, the Boston research firm expects
technology to win a 28% share of next year's $5.8 billion pie. That spending
includes both software and hardware.
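As a back-of-the-envelope check (our arithmetic, not AMR's own breakdown), those percentages imply technology spending rising from roughly $1.0 billion to $1.6 billion:

```python
# Back-of-the-envelope check of the AMR Research figures quoted above.
this_year = 5.5e9   # total Sarbanes-Oxley compliance spend, this year
next_year = 5.8e9   # projected total, next year

tech_now = 0.19 * this_year    # ~$1.05 billion to technology this year
tech_next = 0.28 * next_year   # ~$1.62 billion projected next year

print(f"tech spend now:  ${tech_now / 1e9:.2f}B")
print(f"tech spend next: ${tech_next / 1e9:.2f}B")
print(f"implied growth:  {100 * (tech_next / tech_now - 1):.0f}%")  # ~55%
```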
In the security software realm, so far access- and identity-management
systems have gotten the biggest share of Sarbanes-Oxley spending, analysts
say. Here, the need is clear. A CEO can't confidently pledge his company has
proper controls if someone in the marketing department can change entries in
finance department records, or if a former employee can still enter and fiddle
with the numbers.
"What it comes down to is you got to know who did what when and where," says
IDC analyst Chris Christiansen. "For a large number of big multinational
companies it's a big issue.... They're subject to regulations all over the
world."
Analysts and software-company executives say compliance imperatives are
powering access-management software sales. IDC predicts the market will grow
9.7% a year to $3.5 billion in 2008 from $2.2 billion in 2003. Christiansen
estimates 10% to 15% of current spending is tied to Sarbanes-Oxley compliance,
but says that could rise as companies look to bring work in house that was
given to service providers.
"Compliance is the No. 1 driver in the ID and access business," says Toby
Weiss, senior vice president of eTrust security management solutions at
Computer Associates. "In North America, Sarbanes-Oxley is far and away the
biggest driver."
He says identity- and access-management sales rose more than 40% in the
first half of this year. They accounted for about two-thirds of CA's $200
million security business, making CA the largest provider according to IDC.
The company will increase its share once its proposed acquisition of Netegrity
is completed. Netegrity, the No. 5 player, sells technology for controlling
access within Web applications.
The fastest growing area of compliance-related security sales is for
content-filtering software, says META's Byrnes. These programs help companies
monitor employee communications for regulatory missteps. With an eye to this
expanding market, Entrust launched a product in September that scans e-mail
messages for possible violations and takes action. For instance, it can look
for language suggesting improper influence of auditors, a Sarbanes-Oxley
violation, and alert the compliance officer.
Computer Security and Liability
http://www.snpx.com/cgi-bin/news5.cgi?target=www.newsnow.co.uk/cgi/NGoto/76549288?-2622
Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.
The problem is that all the money we spend isn't fixing the problem. We're paying, but we still end up with insecurities.
The problem is insecure software. It's bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.
And that's the problem. We're not paying to improve the security of the underlying software. We're paying to deal with the problem rather than to fix it.
The only way to fix this problem is for vendors to fix their software, and they won't do it until it's in their financial best interests to do so.
Today, the costs of insecure software aren't borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that's borne by people other than those making the decision.
There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.
If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security -- especially the security of their customers -- it also needs to be in their financial best interests.
Liability law is a way to make it in those organizations' best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.
Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.
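Schneier's argument can be restated as a toy expected-cost calculation. The sketch below uses entirely invented figures; it only illustrates how liability moves the externality onto the vendor's books:

```python
# Toy expected-cost model of the liability argument; all figures invented.
p_breach = 0.10            # annual probability a shipped flaw is exploited
customer_loss = 50e6       # damage borne by customers per incident
fix_cost = 2e6             # vendor's cost to engineer the product securely

# Without liability, the vendor bears none of the breach cost,
# so skipping the fix always looks cheaper:
vendor_cost_no_liability = 0.0

# With liability, the vendor internalizes the externality:
vendor_cost_with_liability = p_breach * customer_loss   # $5M expected loss

print(vendor_cost_with_liability > fix_cost)   # True: now fixing pays
```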
Clearly, this isn't all or nothing. There are many parties involved in a typical software attack. There's the company that sold the software with the vulnerability in the first place. There's the person who wrote the attack tool. There's the attacker himself, who used the tool to break into a network. There's the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn't fall on the shoulders of the software vendor, just as 100% shouldn't fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.
We will always pay for security. If software vendors have liability costs, they'll pass those on to us. It might not be cheaper than what we're paying today. But as long as we're going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.
Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they're entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.
Information security isn't a technological problem. It's an economics problem. And the way to improve information technology is to fix the economics problem. Do that, and everything else will follow.
wildman/24601
I emailed your posts to Wave. Looks like the page has been updated:
http://www.wave.com/solutions/getting_started2.html#gettingstarted
OT: eamonnshute:
Symantec Client Security For Nokia 9500
Symantec Client Security for the Nokia 9500 Communicator and Nokia 9300 smartphone keeps new devices safe and increases users' confidence in their devices.
Symantec Corp., the global leader in information security, has announced the availability of Symantec Client Security for the Nokia 9500 Communicator and Nokia 9300 smartphone. The new solution increases users’ confidence in using their Nokia devices to connect to the Internet, check e-mail, download files, and to be more productive while safeguarding against the risk of losing valuable information due to viruses, worms or other malicious attacks.
Symantec Client Security for the Nokia devices is among the world’s first embedded security solutions for Symbian operating system-based devices, and is Symantec’s latest offering in the wireless handheld market. Featuring the world’s first integrated mobile antivirus and firewall protection, the software will be provided preloaded on the phones’ memory cards and can be updated wirelessly through Symantec LiveUpdate. It provides enhanced protection through integrated threat response and management capabilities.
Of crucial importance to corporations, the new software prevents unwanted network intrusions from entering or spreading to or from Nokia 9500 Communicator and 9300 smartphone business devices. Configuration management features allow administrators to manage the Nokia devices locally and remotely, and configure, lock and enforce security policies. Symantec Client Security for Nokia devices extends existing IT security policies to these new endpoint devices, enabling workforce mobility while keeping corporate infrastructure protected.
“As communication continues to accelerate toward being both instantaneous and global, individuals are demanding access to information for their personal and business needs, while corporations are more keenly aware than ever of the need to protect their infrastructure and brand assets,” said Richard Batchelar, country manager, Symantec New Zealand. “Symantec Client Security for the Nokia 9500 Communicator and 9300 smartphone enables access to vital information while safeguarding individuals and corporations from electronic threats.”
“Mobile phones and smart handheld devices will become increasingly tempting targets for virus writers over the next few years,” reports IDC analyst Sally Hudson.
“Nokia is delighted to be working with Symantec to secure its recently announced business optimised devices,” said Panu Kuusisto, director of Alliance Development for Nokia’s enterprise unit. “Symantec’s integrated antivirus and firewall protection for the Nokia 9500 Communicator and 9300 smartphone will provide the much needed peace of mind CIOs and business decision makers require when considering a business mobility deployment.”
The Nokia 9500 Communicator with Symantec Client Security is currently available in Europe, and some Pacific Rim countries including New Zealand. It is planned to be available in the United States in the first quarter of 2005.
More on Broadcom & TPMs
Broadcom to Push HSDPA, Not WiMax
Enhancing Enterprise Products
Broadcom plans to enhance its enterprise products with enhanced security beginning next year, said Tom Lagatta, head of Broadcom's Enterprise Computing Group. The company plans to combine Trusted Platform Modules, the security chip at the heart of the industrywide "trusted computing" initiative, with Broadcom's PCI Express-based Gigabit Ethernet switches, he said. Over time, all of Broadcom's NIC chips also will gain ROC (RAID-on-chip) functionality, he said.
The company also has 10-Gigabit-Ethernet-over-copper projects under way, Lagatta said.
In addition, Broadcom intends to boost the use of iSCSI, an IP-based storage networking standard for linking data storage facilities.
iSCSI initiator cards using Broadcom chips should begin shipping in the first quarter of 2005, Lagatta said; by 2006, the technology will be integrated on Broadcom's NIC chips. The company recently began shipping a hybrid Layer-2 Ethernet controller, 4-layer TCP/IP and iSCSI initiator, all integrated on the same chip.
Broadcom's switch chips, meanwhile, also will gain packet-sniffing capabilities designed to block the spread of Internet worms, said Ford Tamer, vice president of Broadcom's Networking Infrastructure Group. "What many don't realize is if you can somehow get inside a corporation's hardened perimeter, security inside can be very soft," he said.
IP phone OEMs also have begun to demand integrated security solutions, to prevent outsiders from snooping on corporate VOIP (voice over IP) calls, Tamer said. Security will need to be applied to both wired and wireless VOIP phones. Broadcom also intends to integrate the wired and wireless gateway into a single box, eliminating the additional management headaches an additional box requires, Tamer added.
Broadcom's SecureEZSetup guards consumer WLANs.
Those wireless products eventually will include Broadcom's first next-generation 802.11n chips in 2006, said Robert Rango, vice president of Broadcom's Mobile & Wireless Group. In the consumer space, Broadcom plans to push its AlphaMosaic mobile graphics silicon into 3-G (third-generation) cell phones and other devices, Rango said.
Although Broadcom's ServerWorks chip-set business originally was one of the company's strengths, the company played on a series of missteps by Intel's own chip-set group to win business. After Intel solved the problems and began shipping its own chip sets supporting multiple processors, Broadcom's division lost business.
In August 2003, Broadcom said it would develop chip sets for the AMD 64-bit Opteron platform. The company has completed so-called "A0" silicon and expects first revenue shipments to begin in either March or April, Lagatta said.
Microsoft states formula for trustworthy computing
TwC = SD³ + c. This is the formula for Trustworthy Computing (TwC) that was restated by Scott Charney in his keynote at IT Forum 2004.
Charney is the head of Microsoft's Trustworthy Computing Initiative and he was outlining 'the guiding principles of trustworthy computing in a dangerous world'. The SD³, for those of you who are wondering, stands for security by design, by default and by deployment, and Charney extended the principle to other pillars of the company's much-publicised initiative, such as Privacy, Reliability and Business Integrity. The '+ c', by the way, stands for 'and communication'.
Tracing the development of cyber crime and security threats from an early Unix worm to phishing attacks by organised criminals, he emphasised the sluggishness of markets in recognising the importance of tackling security. It was 11 September 2001, and the effect of cascading failures once one network fails, that motivated governments and commerce to take this field more seriously.
'Time to exploit' is decreasing for known flaws: from 390 days before Nimda exploited a known flaw, to the Witty worm exploiting a known vulnerability within 48 hours. He cited the Internet as 'a great medium for committing crime', with its relative anonymity and lack of traceability.
To tackle security 'by design', he cited the need for better awareness of how to develop secure code, and for strategies such as the 'doctrine of least privilege': granting processes only the privileges they need to run. Security by default relates more straightforwardly to not enabling all features for all users, as this is neither necessary nor desirable. Finally, in terms of deployment and communication, he claimed Microsoft was now better at giving prescriptive guidance and supplying better management (and patch management) tools.
Covering the subject of privacy, he stated that security and privacy 'may be synergistic or antagonistic', quoting the example of security checks being invasive in terms of people's actions on a network: knowledge of who did what when is important for security, but may impinge on privacy.
'What constitutes an "invasion of privacy" may be unclear and may be dependent on local laws and customs,' he maintained. 'There is not a clear, fact-dependent definition.'
Furthermore, the value of information is in the use we put it to; it can't just sit in static storage, it has to be accessed and analysed. The implication was clear: privacy will inevitably be secondary to security. As an example of how a compromise could be reached, he pointed to databases such as SQL Server: the more granular the division of data, the greater the possibility of elements of the data remaining private (all the data need not be accessed for the sake of one part of it).
Recent 'privacy enabling' technologies introduced by Microsoft that he listed included anonymous Windows Error Reporting, greater respect for privacy in Windows Media Player 9 (and 10) for not communicating user profiles, and the increased spam blocking efforts of Outlook 2003 (unwanted emails compromise our privacy).
As for reliability, again, he stated that there can be no single definition and that it will vary group by group. In the case of computing, he described the state of the field as 'machines built by geeks for geeks that can, by design, run any code'. This makes for a very inter-connected and vulnerable industry: 'the computing ecosystem,' he declared, 'is a "system of systems"'.
Reliability by design, he declared, will involve better instrumentation to track the concept of reliability, in order to make it more quantitative. By default, the less used processor-intensive functions should not be enabled by default (needlessly doing something at great effort increases the chance of other tasks failing). And more reliable deployment could be aided via crash reporting tools and more prescriptive guidance.
Finally, he covered the 'pillar' of business integrity. This was the most important of all, he maintained, as without people perceiving Microsoft as a company of integrity, there could be no trust in the other fields. This was where he was most frank in admitting past failings by Microsoft. 'Trust levels in the IT industry generally, and in Microsoft in particular, could be better,' he admitted. Industry-wide, he cited the race to ship ahead of considerations of quality (a charge certainly familiar to Microsoft) and, specific to Microsoft, he admitted the harm done to the company by the ongoing anti-trust cases and clashes with the US Federal Trade Commission.
Charney cited the delay to the shipment of Windows Server 2003 back in December 2002 as an example of Microsoft committing to quality ahead of a race to ship. 'Define "delay",' he said. 'The development process will take longer, but is this "a delay"?'
He finished with a ringing declaration that Microsoft should be committed to good corporate citizenship and that it should help blunt the digital divide that separates the haves and have-nots of information technology.
New IBM model..............................
http://www-306.ibm.com/common/ssi/rep_ca/3/897/ENUS104-433/ENUS104-433.PDF
OTish:CalPERS Adopts Plan to Tackle Abusive Executive Compensation; System to Target Focused List of Directors, Corporations and Compensation Consulting Industry
SACRAMENTO, Calif.--(BUSINESS WIRE)--Nov. 15, 2004--The California Public
Employees' Retirement System's (CalPERS) Board of Administration today
approved a focused plan to rein in abusive compensation practices in
corporate America and hold directors and compensation committees more
accountable for their actions.
The plan calls for CalPERS to advocate for executive compensation reforms on
a national level by addressing issues of transparency and design with the
Securities and Exchange Commission, the financial exchanges, and the
compensation consulting industry.
The System will also wage a campaign against targeted individual
compensation committee directors who support egregious pay packages and
companies that have the worst compensation practices, as well as recognize
corporations who are leaders in pay for performance.
Levels of executive compensation have skyrocketed in recent years, creating
a vast gap between the pay of top executives and average workers.
According to BusinessWeek, the average chief executive officer's salary
grew to 535 times the average worker's salary in 2000, from 42 times in 1980.
In 2003, CEO median cash pay -- base salary and bonus -- was up 14 percent
to about $2 million from $1.75 million, according to a study of Standard &
Poor's 500 companies conducted by Equilar.
"This will be a focused approach to today's most serious problem," said Sean
Harrigan, President of CalPERS Board. "We will recognize the good guys who
compensate for performance and we will call out some prime examples of those
who are hurting long-term shareowner value by paying for lack of performance."
"Compensation can be so obscene that we need to tackle the problem
structurally and hold accountable selected individual directors who create and
support abusive pay packages," said Rob Feckner, Chair of CalPERS Investment
Committee. "We will call on other institutional investors and allies to join
us in this campaign."
Under the plan, CalPERS will pursue six major pay for performance
initiatives over the next three years. They include:
-- Submitting a comprehensive proposal to the Securities and
Exchange Commission in 2005 that calls for greater
transparency of compensation packages;
-- Strengthening listing standards at the securities exchanges
and self-regulatory organizations to promote greater
communication and transparency between listed companies and
investors;
-- Urging the compensation consulting industry to adopt practices
that better align boards and management with shareowners;
-- Targeting a limited number of corporations in 10 market
sectors with the worst compensation practices to move their
executive compensation philosophy and practice toward greater
pay-for-performance concepts;
-- Publicly withholding support from a focused list of certain
corporate compensation committee members who develop and
support egregious pay packages; and
-- Recognizing companies and individuals who use superior pay for
performance practices.
A copy of CalPERS Executive Compensation Strategic Plan can be found on its
website at www.calpers.ca.gov, click CalPERS Board Meeting information, then
Investment Committee, then item 6d.
CalPERS is the nation's largest public pension fund with assets of
approximately $168 billion. The System provides retirement and health benefits
to more than 1.4 million State and local public employees and their families.
cpa, wow
I would never want to be the voice for a group of people with the combined intelligence level of this group.
You sure know how to make friends!
hj
From the yahoo story I posted yesterday:
Dell is not expecting, however, to put the AMD chips into desktop PCs, Rollins said.
But then there was this from DowJones:
Mark Stahlman of Caris & Co. raised his rating Monday on Dell to "buy" from "above average," on the grounds that the company will see better demand for its PC and server products in 2005. Stahlman also believes that Dell will begin selling PCs using microprocessors from Advanced Micro Devices (AMD) next year.
I think you'll know soon..........
Doma, great find!
Looks like December to me, then '05 and beyond! I was starting to wonder this morning when I went to orda's link. Dell has a glossary on the security page but there's no definition of a TPM. Looks like that'll soon change.
flyerguy, wavxmaster
I think you guys are too diversified!! lol
OT: Dell close to adopting AMD chips
http://story.news.yahoo.com/news?tmpl=story&ncid=1208&e=1&u=/infoworld/20041111/tc_infow....
San Francisco (InfoWorld) - Dell president and CEO Kevin Rollins indicated that the company is actively considering including AMD processors in its server roster in the foreseeable future.
As of now, the PC and server maker remains the lone holdout among its competitors in not using AMD chips.
"My guess is we're going to want to add that [AMD] product line in the future," Rollins said in an interview on Wednesday with InfoWorld editors.
Rollins pointed to AMD's technology lead over Intel in the 64-bit category as the primary reason for reconsidering Dell's Intel-only strategy.
"They've been getting better and better. The technology is better. In some areas they're now in the lead on Intel. That is what is interesting us more than anything," he said.
With the release of its 64-bit Opteron chip for servers and 64-bit Athlon64 processor for desktops in 2003, AMD has won over every major computer manufacturer except Dell. Even Microsoft picked up AMD's 64-bit banner, designing its 64-bit version of Windows on AMD's architecture, not its traditional ally, Intel.
If in fact Dell climbs on board, AMD will have realized a company goal that many never thought possible. Since coming under the direction of Hector Ruiz, CEO for the past two years, AMD has enjoyed a technological lead over its biggest rival and four straight profitable quarters. That financial viability had a big influence on Dell's consideration of the company, according to Rollins. "They are more viable of a company than they once were," he said.
Dell is not expecting, however, to put the AMD chips into desktop PCs, Rollins said.
"If we basically sucked up all of AMD's [manufacturing] capacity it would not be enough. They don't have enough capacity for us to use them on the desktop. For us, fundamentally, AMD is much more interesting in the server, workstation or gaming arenas," Rollins explained.
Dell has flirted with AMD in the past, but most analysts believe Dell did so only to win more concessions from Intel.
"Dell has come close before, but we've been hearing that they've become more serious lately," said Dean McCarron, principal analyst at Mercury Research.
Rollins emphasized that Dell has not been missing opportunities by not having an AMD system.
"We have not been losing a ton of business because we haven't had AMD," he said. "At the end of the day we have to be profitable and grow. That's the main indicator of what we’re going to do."
Mark Stahlman of Caris & Co. raised his rating Monday on Dell to "buy" from "above average," on the grounds that the company will see better demand for its PC and server products in 2005. Stahlman also believes that Dell will begin selling PCs using microprocessors from Advanced Micro Devices (AMD) next year.
Stahlman said in a research note that Dell has an inferior product line because it uses only Intel (INTC) components. He said one of the reasons he expects Dell to roll out AMD-based computers is "clear performance advantages" of AMD systems running 64-bit software products from Microsoft (MSFT).
wavxmaster
ssprague@wavesys.com
24601
thanks for the PM. Can you send your email address?
thanks
Shoring Up the Trusted Computing Base
http://www.pdl.cmu.edu/PDL-FTP/stray/sigops04-bootstrap.pdf
Secure Bootstrap is Not Enough: Shoring up the Trusted Computing Base
James Hendricks
Carnegie Mellon University
5000 Forbes Ave
Pittsburgh, PA
James.Hendricks@cs.cmu.edu
Leendert van Doorn
IBM T.J. Watson Research Center
19 Skyline Drive
Hawthorne, NY
leendert@watson.ibm.com
Abstract

We propose augmenting secure boot with a mechanism to protect against compromises to field-upgradeable devices. In particular, secure boot standards should verify the firmware of all devices in the computer, not just devices that are accessible by the host CPU. Modern computers contain many autonomous processing elements, such as disk controllers, disks, network adapters, and coprocessors, that all have field-upgradeable firmware and are an essential component of the computer system’s trust model. Ignoring these devices opens the system to attacks similar to those secure boot was engineered to defeat.
1 Introduction

As computers continually integrate into our business and personal lives, corporate and home users are storing more sensitive data on their personal computers. However, widespread Internet usage has exposed more computers to attack and provided would-be attackers with the information needed to scale such attacks. To protect this increasingly sensitive data from these increasingly prolific attacks, next-generation personal computers will be equipped with special hardware and software to make computing more worthy of trust. Such trustworthy computing will provide security guarantees never before seen on personal computers.

Trustworthy computing requires a Trusted Computing Base (TCB)—a core set of functionality that is assumed secure—to implement the primitives that provide security guarantees. The TCB typically consists of hardware, firmware, and a basic set of OS services that allow each application to protect and secure its data and execution. Security of the bootstrap mechanism is essential. Modeling the bootstrap process as a set of discrete steps, if an adversary manages to gain control over any particular step, no subsequent step can be trusted. For example, consider a personal computer with a compromised BIOS. The BIOS can modify the bootstrap loader before it is executed, which can then insert a backdoor into the OS before the OS gains control.
This secure bootstrap problem is well-known and various solutions have been proposed to deal with it. For example, Arbaugh et al. [1] propose a mechanism whereby the first step in the bootstrap process is immutable and therefore trustworthy. This trust is then bootstrapped all the way up to the operating system by checking a digital signature for each bootstrap step before it is executed. For example, the BIOS could verify a public-key signature of the disk’s boot sector to ensure its authenticity; the boot sector could then verify the public-key signature of the OS bootstrap code, which could likewise verify the privileged OS processes and drivers. Though such an approach would obviously not guarantee the security of the OS code, it would at least guarantee its authenticity.
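As a minimal sketch of the chained verification Arbaugh et al. describe (the signature primitive here is a stand-in; a real implementation would use the platform's actual public-key scheme):

```python
# Minimal sketch of chained secure boot: each stage verifies the next
# before handing over control. verify_signature() is a placeholder for
# whatever public-key scheme (RSA, ECDSA, ...) the platform really uses.
import hashlib

def verify_signature(pubkey: bytes, blob: bytes, sig: bytes) -> bool:
    # Placeholder check only; NOT a real signature verification.
    return hashlib.sha256(pubkey + blob).digest() == sig

def secure_boot(stages, root_pubkey: bytes) -> None:
    """stages: list of (code_blob, signature, pubkey_for_next_stage)."""
    key = root_pubkey                    # immutable trust anchor (e.g. ROM)
    for code, sig, next_key in stages:
        if not verify_signature(key, code, sig):
            raise RuntimeError("boot halted: stage failed verification")
        # ...execute(code) would run here, extending trust one step...
        key = next_key                   # next stage is checked with the key
                                         # the current stage vouches for

root = b"root-key"
boot_sector = b"boot sector code"
sig = hashlib.sha256(root + boot_sector).digest()
secure_boot([(boot_sector, sig, b"os-loader-key")], root)   # verifies OK
```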
A weakness to this approach is that the BIOS in most personal computers is writable. One solution is to store the BIOS on a ROM. However, a ROM-based approach is by definition inflexible, preventing BIOS updates that may be required to support maintenance applications, network booting, special devices, or CPU microcode updates. Furthermore, the use of digital signatures introduces a key management problem that is amplified by the requirement to store the initial public key in ROM. To ameliorate these problems, a secure hardware device can be used both to verify a programmable BIOS and to authenticate this verification. This is the approach taken by the Trusted Computing Group (TCG) [13], described in Section 2.
Both the Arbaugh et al. and TCG-based approaches share a CPU-centric view of the system that is inadequate for establishing a trustworthy system. In Section 3, we argue that, though the current specification goes to much trouble to defend against attacks utilizing the CPU, it fails to defend against similar attacks utilizing peripherals, and in Section 4 we argue that such attacks are not much more difficult. Section 5 describes how the current specification could be improved with a minor augmentation.
2 The Current Approach

The Trusted Computing Group advocates using a secure hardware device to verify the boot sequence and authenticate this verification. Such a device could provide assurance even to a remote user or administrator that the OS at least started from a trustworthy state. If an OS security hole is found in the future, the OS can be updated, restarted, and re-verified to start from this trustworthy state. An example of this kind of device is the Trusted Platform Module (TPM) [14]. Such a device has been shown to enable a remote observer to verify many aspects of the integrity of a computing environment [8], which in turn enables many of the security guarantees provided by more complex systems, such as Microsoft’s NGSCB (formerly Palladium) [4].
[Proceedings of the Eleventh SIGOPS European Workshop, ACM SIGOPS, Leuven, Belgium, September 2004.]

[Figure 1: Hashes of the bootstrap code, operating system, and applications are stored in the Platform Configuration Registers, which can later be queried to verify what was executed.]

The following is a simplified description of how the TPM can be used to verify the integrity of a computing system (see the specification for details [15]). The TPM measures data by hashing the data. It extends a measurement to a Platform Configuration Register (PCR) by hashing together the current value of the PCR and the hash of the data, and storing the result in the PCR. To measure to a PCR, the TPM measures data and extends it to a PCR. All code must be measured before control is transferred to it. When the computer is reset, a small and immutable code segment (the Core Root of Trust for Measurement, CRTM) must be given control immediately. The CRTM measures all executable firmware physically connected to the motherboard, including the BIOS, to PCR[0] (PCR[0] is the first of sixteen PCRs). The CRTM then transfers control to the BIOS, which proceeds to measure the hardware configuration to PCR[1] and option ROM code to PCR[2] before executing option ROMs. Each option ROM must measure its configuration and data to PCR[3]. The BIOS then measures the Initial Program Loader (IPL) to PCR[4] before transferring control to it (the IPL is typically stored in the first 512 bytes of a bootable device, called the Master Boot Record). The IPL measures its configuration and data to PCR[5]. PCR[6] is used during power state transitions (sleep, suspend, etc.), and PCR[7] is reserved. The remaining eight PCRs can be used to measure the kernel, device drivers, and applications in a similar fashion (the post-boot environment), as Figure 1 depicts.
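The measure-and-extend operation is compact enough to model directly; a minimal sketch assuming the SHA-1 hash and 20-byte PCRs of the TPM 1.x specification:

```python
# Minimal model of the TPM extend operation described above:
# PCR_new = SHA1(PCR_old || SHA1(data)). TPM 1.x PCRs are 20-byte SHA-1 values.
import hashlib

def measure(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha1(pcr + measurement).digest()

pcr0 = b"\x00" * 20                          # PCRs reset to zero at power-on
pcr0 = extend(pcr0, measure(b"BIOS image"))  # CRTM measures the BIOS...
pcr0 = extend(pcr0, measure(b"other motherboard firmware"))
# Extending is order-sensitive: measuring A then B yields a different PCR
# value than B then A, so a verifier learns exactly what ran, and in what order.
```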
At this point, the bootstrap code, operating system, and perhaps a few applications have been loaded. A remote observer can verify precisely which bootstrap code or operating system has been loaded by asking the TPM to sign a message with each PCR (the TPM_Quote command); this operation is called attestation. If the TPM, operating system, bootstrap code, and hardware are loaded correctly, the remote observer can trust the integrity of the system. The TPM should be able to meet FIPS 140-2 requirements [14]; hence, it is reasonably safe to assume the TPM is trustworthy (see the FIPS 140-2 requirements for details [16]). The integrity of the operating system and bootstrap code is verified by the remote observer; hence, the operating system and bootstrap can be trusted to be what the remote observer expects. The hardware, however, is not verified; fortunately, hardware is more difficult to spoof than software.
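On the verifier's side, attestation then reduces to comparing the signed PCR values against known-good ones. A sketch, with the signature check on the quote abstracted away:

```python
# Sketch of the remote observer's side: given a TPM quote whose signature
# has already been checked against the TPM's key, compare each reported
# PCR with the value expected for trusted software.

def appraise(quoted_pcrs: dict, golden_pcrs: dict) -> bool:
    """Both dicts map PCR index -> 20-byte digest."""
    for index, expected in golden_pcrs.items():
        if quoted_pcrs.get(index) != expected:
            print(f"PCR[{index}] mismatch: untrusted code was loaded")
            return False
    return True

# A mismatch in, say, PCR[4] would indicate a modified Initial Program Loader.
```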
From this, we can describe attacks that are and are not defended against. Attacks that exploit a known hole in the OS can be detected at attestation. Attacks that modify the BIOS, option ROMs, or IPL are detected at boot. Similarly, upgrades and repairs to these components are verifiable. However, physical attacks on the TPM (such as invasive micro-probing or EM attacks [7]) or other components (such as RAM bus analysis) are not detected. Furthermore, malicious hardware may provide an avenue of attack; a malicious processor would not be detected by attestation, yet it could circumvent most security policies.
For Microsoft’s NGSCB, an alternate secure boot method is proposed [15]. This method requires the addition of a new operation to the CPU instruction set architecture that resets the CPU and ensures the execution of a secure loader without resetting the I/O bus. This method allows the secure loader to gain full control of the CPU without the need to reinitialize the I/O subsystem. While this method reduces its reliance on the BIOS, it still assumes that the CPU is in control of all executable content in the system, which, we argue, is a flawed assumption.
3 A Security Vulnerability in This System

Though it is relatively safe to trust hardware circuits (because mask sets are expensive to develop, etc.), there is less sense in trusting firmware. Firmware is dangerous because it can be changed by viruses or malicious distributors. Though current attestation methods detect attacks on the OS, BIOS, and option ROMs, attacks on other firmware may be no more difficult. Firmware with direct access to memory is no less dangerous than the BIOS or the kernel, and even firmware without direct memory access may require trust. Hence, though peripherals and memory are implicitly proposed to be a part of the TCB, we do not believe they are currently adequately verified.
Consider a compromised disk. For example, assume
the delivery person is bribed to allow an attacker to “borrow”
the disk for a few hours to be returned in “perfect”
condition. This disk could collect sensitive data; modern
disks are large enough that the compromised firmware
could remap writes so as to never overwrite data (similar
to CVFS [10]). On a pre-specified date, or when the disk
starts to run low on storage, the disk can report disk errors.
The disk could ignore commands to perform a low-level
format or otherwise erase its data while being prepared for
warranty service. Once again the bribed delivery person
could allow the attacker physical access, literally delivering
gigabytes of sensitive data to the attacker’s doorstep.
The attacker could then reset the firmware to act normally
for a few months, leading the disk vendor to send the disk
to another customer because it believes this customer misdiagnosed
the problem.
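The remapping behaviour described above needs very little firmware logic, as the following toy model suggests (the block interface is invented for illustration):

    class MaliciousDisk:
        """Toy model: every write goes to a fresh physical block,
        so previously written data is never actually destroyed."""
        def __init__(self):
            self.physical = {}   # physical block -> data
            self.mapping = {}    # logical block -> current physical block
            self.next_free = 0

        def write(self, logical, data):
            self.physical[self.next_free] = data
            self.mapping[logical] = self.next_free
            self.next_free += 1  # never reuse a block: nothing is overwritten

        def read(self, logical):
            return self.physical[self.mapping[logical]]

From the host's point of view the disk behaves normally; the old version of every block remains recoverable by whoever controls the firmware.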
Generalized, the above attack takes place in three phases:
first, the device is compromised; second, the device compromises
the integrity of data; third, the device delivers data
to the attacker. There are many techniques to perform each
of these steps, and security is violated even if the third step
does not occur.
3.1 Compromising a Device
The first step is to compromise the device. We consider
only attacks on firmware for autonomous computing
engines that are not under control of the main CPU.
These include the operating systems found on disks [2] and
some network cards [6]. We rule out attacks that replace
parts of the hardware for several reasons: replacement requires
physical access; unlike overwriting firmware, replacement
costs money; the cost of fabricating a custom
device is likely much greater than the cost of modifying the
firmware; etc. Furthermore, we assume the manufacturer is
not malicious.
The most direct attack is to provide a firmware update
to the user and use social engineering to convince the user
to install this update. Or consider the man-in-the-middle
attack, where the device is compromised after it leaves the
trusted manufacturer but before it arrives at the victim. For
example, the manufacturer may outsource the actual manufacturing
to a plant in an adversarial country, where the
firmware could easily be replaced. The delivery person, the
installation crew, or the maintenance team could similarly
compromise the firmware. A less glamorous (but more
likely) attack would be to embed the update in a virus or
worm that scans infected systems for vulnerable devices.
Essentially, any attack that can compromise an unattested
operating system could likely compromise unattested
firmware. Furthermore, note that once a device is
compromised, future firmware updates may not guarantee
that the device is safe (the malicious firmware could modify
the update utility or ignore update commands); also, reinstalling
the computer software won’t reinstall the firmware.
Hence, compromising firmware is potentially more damaging
than compromising the operating system.
3.2 Compromising Data
Once the firmware has been replaced with malicious
firmware, there are two ways in which the device can compromise
the integrity of data. If the device can directly issue
a DMA request, or if it can solicit a device to issue a
DMA request on its behalf, it can overwrite valid data or
read confidential data in host RAM. But even if DMA is
not an option, the device can still store unencrypted data,
manipulate unauthenticated data that is fed to it, or simply
discard data.
3.3 Delivering Data to the Attacker
If the compromised device is a network device, it can deliver
confidential data over the network. If the device has
direct or indirect DMA access, it can bus master a DMA
request to the network device’s ring buffer, which the network
device will then transmit over the network. But even
if there is no reachable network connection to the outside
world, a device may still be able to breach confidentiality;
for example, the device can store data and then misbehave,
causing the user to send the device in for warranty service.
Once again, a man-in-the-middle attack can be used, this
time to extract the data and hide the tracks of the malicious
firmware (other attacks used to compromise the device
may be similarly adapted). Note that storing data is not
unique to storage devices; this works for any device with an
EEPROM, and every device vulnerable to an attack on its
firmware has some EEPROM.
3.4 Summary
All DMA-capable peripherals are trusted, and must either
be verifiable or not have firmware. Furthermore, many devices
without DMA capabilities are trusted to some degree.
If these devices may have firmware that is not verified, data
sent to them must be either encrypted and authenticated or
insensitive to security violations. There remains a question
of feasibility: even if it is feasible to replace the firmware,
read or modify sensitive data, and deliver sensitive data,
how difficult is it to generate the malicious firmware?
4 Is Writing Malicious Firmware Feasible?
Security is about risk management; hence, it is appropriate
to ask which attacks are most likely. Attacks on software
have been shown to be quite popular; attacks on firmware
and hardware have been less prolific. We argue that attacks
on firmware are only incrementally more difficult than attacks
on software, and that, once attacks on software become
more difficult, attacks on firmware will become common.
We further argue that attacks on hardware are more
difficult because hardware is not malleable; hence, circuits
and ROMs are relatively trustworthy.
Because security is about risk management, there is a
natural tendency for conflicts to escalate to slightly more
sophisticated variants. Defenders plug the easiest holes,
and attackers ratchet attacks up to the next level. For example,
the simplest buffer-overrun relies on jumping to
executable code on the stack. The direct solution, nonexecutable
stacks, led to slightly more elaborate attacks
[17]. Perhaps the greatest vulnerability of firmware attacks
is that modifying firmware may be no harder than modifying
OS code. We believe attacks have been limited up
to this point because firmware has been less homogeneous
than software and most programmers have less experience
with firmware. Both of these factors are changing: device
vendors are consolidating, and programmers are being exposed
to firmware. The LinuxBIOS project [5] has successfully
replaced the BIOS of several commodity PCs to provide
flexibility. Also, hacked firmware is becoming more
common: many DVD players have hacked firmware to support
DVDs from any region [9], and game stations such as
the Xbox have hacked versions of firmware [3] that convert
them into cheap computers.
As discussed above, any device that can DMA and any
device that is fed unencrypted or unauthenticated data is a
threat. Unless these devices are verified, one of two options
must be taken to ensure security: either DMA must
be disabled and all accesses to devices must be encrypted
and authenticated, or memory must not be trusted (as in
AEGIS [11] or XOM [12]). Both options are severe and
would limit performance.
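To give a sense of what the first option entails, the sketch below seals each block with authenticated encryption before it is handed to an unverified device, using the third-party Python cryptography package; the key handling and the binding of the block number are illustrative assumptions, not a mechanism proposed here:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)  # would live inside the host TCB
    aead = AESGCM(key)

    def seal(block_no: int, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        # Binding the block number as associated data stops the device
        # from swapping sealed blocks around undetected.
        return nonce + aead.encrypt(nonce, plaintext, str(block_no).encode())

    def unseal(block_no: int, sealed: bytes) -> bytes:
        nonce, ciphertext = sealed[:12], sealed[12:]
        # Raises InvalidTag if the device altered the block.
        return aead.decrypt(nonce, ciphertext, str(block_no).encode())

Every read and write then pays the cryptographic overhead, which is one reason we consider both options severe.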
5 The Technical Solution
This paper contributes two complementary technical solutions:
1) Each compliant device must be included in the TCB. It must
ensure that its firmware is signed and verified at startup just
like the rest of the executable code, and it must verify its
children. Such recursive verification will form a tree of trust.
2) Every other device must be recognized as explicitly external
to the TCB. Applications must be aware that such a device is
unsafe, and its I/O must be sandboxed.
5.1 An Example: A Trustworthy Disk
A trustworthy disk would have a firmware signing mechanism:
for example, a cheap processor and ROM for some
immutable root of trust. On power-on, this system would
work in much the same manner as the TPM; all security
sensitive code would be measured to a local PCR, which
would then be signed with a key embedded in the disk’s
TPM and returned to the host CPU on request. Of crucial
importance is that this mechanism is not necessary for basic
operation of the device; it is an optional feature. The disk
can be manufactured and the additional firmware signing
hardware can be installed optionally. The signing hardware
could read the firmware directly and send the measurement
through a vendor specific command to the host CPU. Such
a solution would have a marginal cost for systems without
the security hardware, and likely less than a dollar for systems
with the hardware, which both keeps costs down and
provides disk vendors with a “value add.”
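Disk-side, the signing add-on only has to measure and sign. A rough sketch of the handler for such a vendor-specific command follows (an HMAC with a placeholder key stands in for the embedded signing hardware; all names are invented):

    import hashlib, hmac

    EMBEDDED_KEY = b"burned-in-at-manufacture"  # placeholder secret

    def handle_attest_command(firmware_image: bytes, host_nonce: bytes) -> dict:
        # Measure all security-sensitive code into a local "PCR".
        local_pcr = hashlib.sha1(firmware_image).digest()
        # Sign measurement plus nonce so the host can detect replayed answers.
        sig = hmac.new(EMBEDDED_KEY, local_pcr + host_nonce,
                       hashlib.sha1).digest()
        return {"pcr": local_pcr, "sig": sig}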
5.2 The Generalized Solution: A Verification
Mechanism for Trusted Peripherals
A generalized version of the above solution is to descend
the device chain and recursively verify the trustworthiness
of all devices. On system reset, the BIOS and option ROMs
are currently measured, as well as the current hardware
configuration. When the hardware configuration is measured,
each device should measure its firmware. For example,
when the PCI bus is configured and measured, each
device on the PCI bus should attest its firmware, if it is
field-upgradeable. During PCI configuration, the SCSI host
adapter will be queried; the SCSI host adapter will measure
its firmware then query each disk; finally, each disk will
measure its firmware and return this measurement. This
creates a tree of trusted devices, as depicted in Figure 2.
The host can determine the trustworthiness of a device either
by assuming that the device was initially secure and verifying
future attestation statements against the initial one, or by
comparing the firmware attestation statement against a trust
certificate provided by the device vendor.
If the device is unable to provide an attestation statement
or the vendor is unable to provide a trust certificate,
we have to assume the firmware and therefore the device
cannot be trusted.
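In code, the descent is a straightforward tree walk. The sketch below assumes each device exposes a firmware measurement and a list of the devices it bridges (the Device structure and the trust-certificate table are invented for illustration):

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class Device:
        name: str
        attestation: Optional[bytes]  # None: cannot attest its firmware
        children: List["Device"] = field(default_factory=list)

    def verify_tree(dev: Device, certs: Dict[str, bytes]) -> bool:
        # A device is trustworthy only if its own measurement matches the
        # vendor certificate and every device it bridges checks out too.
        if dev.attestation is None or certs.get(dev.name) != dev.attestation:
            return False
        return all(verify_tree(child, certs) for child in dev.children)

    # Example: a SCSI adapter bridging two disks (digests invented).
    disk_a = Device("disk-a", b"\x11" * 20)
    disk_b = Device("disk-b", None)  # cannot attest, so the tree fails
    scsi = Device("scsi", b"\x22" * 20, [disk_a, disk_b])
    assert not verify_tree(scsi, {"scsi": b"\x22" * 20, "disk-a": b"\x11" * 20})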
5.3 Untrustworthy Devices
Because there may exist some devices whose trustworthiness
is unknown, there must be a compatibility mode. One
solution is to tag such devices as untrustworthy, and restrict
their DMA access to a memory address range sandbox using
mechanisms similar to an I/O-MMU or machine partitioning
[4]. Furthermore, the operating system and sensitive
applications must understand that they cannot rely on
unencrypted or unauthenticated data sent or received from
an untrustworthy device. All devices bridged by an untrustworthy
device are untrustworthy; for example, a trustworthy
disk attached to an untrustworthy SCSI controller
is untrustworthy.
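The DMA restriction itself reduces to an address-range check on every request, as in this minimal sketch (the window base and size are invented):

    class DMASandbox:
        """Reject DMA outside the window assigned to an untrusted device."""
        def __init__(self, base: int, size: int):
            self.base, self.limit = base, base + size

        def check(self, addr: int, length: int) -> bool:
            return self.base <= addr and addr + length <= self.limit

    ring_buffer = DMASandbox(base=0x10000000, size=64 * 1024)
    assert ring_buffer.check(0x10000000, 512)      # inside the window
    assert not ring_buffer.check(0x20000000, 512)  # outside: blocked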
5.4 Guarantees Provided
If all critical software and firmware are verifiable, then only
attacks on hardware can go undetected. For example, consider
a system where the OS is verifiable, boot firmware is
verifiable, field upgradable firmware for trusted devices is
verifiable, and all other devices are sandboxed as in Section
5.3. Then all remotely malleable components are verifiable,
and, for the first time, strong guarantees can be provided: all
remote attacks on PCs are remotely detectable as soon as the
method of attack is known, patches can be verifiably installed,
and attacks cannot survive across a reboot. A remote observer
can verify that a PC is not vulnerable to any known remote
attack; attacks can no longer hide in unverified storage. Known
attacks on software are likely to be fixed with a patch that can
be verifiably installed. Likewise for firmware; furthermore, if
no patch is provided, the firmware can be isolated as
untrustworthy. Hence, assuming that all vulnerabilities are
eventually discovered (and many vulnerabilities are discovered
before attacks surface), attackers are limited to hardware
attacks. Hardware attacks require either physical access or
buggy hardware; the former is hard to come by and the latter can
be isolated.
Figure 2: a) On reset, the CRTM measures the BIOS to PCR[0] before transferring control to it. b) The BIOS recursively measures
devices on the PCI bus and PCI-X bus. c) The IDE controller and Gigabit Ethernet controller do not support firmware measurements—
they cannot be trusted—and hence their DMA must be sandboxed (the Gigabit Ethernet sandbox is its entire ring buffer). d) The SCSI
controller reports that one of its disks cannot be trusted with unencrypted or unauthenticated sensitive data. e) The USB controller
reports that the Camera cannot be trusted; however, the USB controller itself can still utilize DMA.
6 Conclusion
The added complexity of any security facility is worthwhile
only if the additional security provided justifies its cost.
But the additional security of current secure bootstrap facilities
is minimal, because they are vulnerable to attacks
on firmware. These attacks are at least as damaging as
their software counterparts, as deployable, and nearly as
straightforward. Fortunately, a simple extension to secure
bootstrap prevents such attacks on firmware. This extension
utilizes the current framework, allows device vendors
to cheaply add the required functionality, and accounts for
legacy hardware. It makes known remote attacks detectable
and forces attackers to focus on hardware attacks, which—
though possible—are difficult enough to justify the cost of
secure bootstrap.
7 Acknowledgments
We would like to thank Greg Ganger, James Hoe, Adrian
Perrig, and the anonymous reviewers for their comments.
James is supported in part by an NDSEG Fellowship, which
is sponsored by the Department of Defense.
References
[1] W. A. Arbaugh, D. J. Farber, and J. M. Smith. A secure
and reliable bootstrap architecture. In Proceedings of the
1997 IEEE Symposium on Security and Privacy, pages 65–
71, May 1997.
[2] Arm storage: Seagate-Cheetah family of disk drives.
http://www.arm.com/markets/armpp/462.html.
[3] J. Davidson. Chips to crack Xbox released on internet.
Australian Financial Review, page 16 (Computers), 21 Jun
2003.
[4] P. England, B. Lampson, J. Manferdelli, M. Peinado, and
B. Willman. A trusted open platform. Computer, 36(7):55–
62, 2003.
[5] LinuxBIOS. http://www.linuxbios.org.
[6] Myricom home page. http://www.myrinet.com.
[7] J. R. Rao and P. Rohatgi. EMpowering side-channel attacks.
Technical Report 2001/037, IBM, 2001.
[8] R. Sailer, X. Zhang, T. Jaeger, and L. van Doorn. Design and
implementation of a TCG-based integrity measurement architecture.
In Proceedings of the 13th Usenix Security Symposium,
August 2004.
[9] T. Smith. Warner attempts to out-hack DVD hackers.
http://www.theregister.co.uk/content/2/13834.html, Sep
2000.
[10] C. A. N. Soules, G. R. Goodson, J. D. Strunk, and G. R.
Ganger. Metadata efficiency in versioning file systems. In
Proceedings of the 2nd Usenix Conference on File and Storage
Technologies, San Francisco, CA, Mar 2003.
[11] G. E. Suh, D. Clarke, B. Gassend, M. van Dijk, and S. Devadas.
Aegis: Architecture for tamper-evident and tamperresistant
processing. In Proceedings of the 17th annual international
conference on Supercomputing, pages 160–171.
ACM Press, 2003.
[12] D. L. C. Thekkath, M. Mitchell, P. Lincoln, D. Boneh,
J. Mitchell, and M. Horowitz. Architectural support for
copy and tamper resistant software. In Proceedings of the
ninth international conference on Architectural support for
programming languages and operating systems, pages 168–
177. ACM Press, 2000.
[13] The Trusted Computing Group: Home.
http://www.trustedcomputinggroup.org.
[14] The Trusted Computing Group. TPM Main: Part 1 Design
Principles, Oct 2003.
[15] The Trusted Computing Group. TCG PC Specific Implementation
Specification, Aug 2003.
[16] U.S. National Institute of Standards and Technology. Security
Requirements for Cryptographic Modules, May 2001.
FIPS PUB 140-2.
[17] R. Wojtczuk. Defeating Solar Designer's non-executable
stack patch. http://www.insecure.org/sploits/non-executable.stack.problems.html, Jan 1998.
OT: Senforce Joins Trusted Computing Group
to Advance Open Information Security Standards
Demonstrates Commitment to Development and Adoption of Secure, Trusted Computing Technologies That Protect Data and Privacy
DRAPER, UT -- (MARKET WIRE) -- 11/09/2004 -- Senforce Technologies™ Inc., the leader in location-aware enterprise endpoint security, today announced its membership with the Trusted Computing Group (TCG). TCG is an industry standards body composed of software vendors and computer device manufacturers. The common goal is to improve data security and privacy of online business practices and commerce transactions across multiple platforms.
"TCG members are bringing the industry together in an important way by focusing on the development of standards that will make life easier for every organization's security management," said Brian McElroy, senior director of Business Development, Senforce. "We're honored to be part of this group, and through it will continue to play an active role in the advancement of open standards. Senforce has demonstrated its commitment to supporting the highest level of security and standards in its enterprise solutions today."
Senforce Enterprise Mobile Security Manager (EMSM) is currently compliant with FIPS 140-2, Department of Defense Directive 8100.2, IPv6 in accordance with the US DoD mandate of October 2003, and is in-process for Common Criteria, EAL 4+, Army Wireless and Wired Interoperability Testing. The company is also certified with leading vendors to ensure ease of use and interoperability, such as Microsoft Windows hardware Quality Labs (WHQL) for Windows XP and 2000.
Highlighted Links
www.senforce.com
"We welcome Senforce as a leading provider of endpoint security policy enforcement solutions. Because the company provides software capabilities critical to protecting enterprise networks, its commitment to the adoption of open technologies is a model for other vendors in the way technologies should be implemented," said Nancy Sumrall, chairman of the Marketing Work Group, TCG.
About TCG
TCG is an industry standards body formed to develop, define and promote open standards for trusted computing and security technologies, including hardware building blocks and software interfaces, across multiple platforms, peripherals and devices. TCG specifications are designed to enable more secure computing environments without compromising functional integrity with the primary goal of helping users protect information assets from compromise due to external software attack and physical theft. For more information, please visit http://www.trustedcomputinggroup.org.
About Senforce Technologies Inc.
Senforce is the leader in policy-enforced endpoint security. Senforce Enterprise Mobile Security Manager™ (EMSM) ensures central management and control of all computing clients regardless of a user's location or method of accessing the Internet. It provides protection against exposure and risk caused by intrusion, unauthorized access, loss, theft, malware/viruses, unauthorized downloads or software removal, altered security or configuration settings, and more. Powerful standards-based core technologies ensure a higher level of security and management than previously available. The company is headquartered in Draper, Utah, with executive offices in Silicon Valley, California, and sales offices in Illinois, New York, and Washington, D.C. Senforce is privately held and funded by Thomas Weisel Venture Partners, vSpring Capital, Rocket Ventures, American River Ventures and EsNet Group. The company serves customers primarily in the government, corporate, financial and healthcare sectors. For more information, visit www.senforce.com or call 1-877-844-5430.
*Senforce Technologies, Senforce Portable Firewall Plus, Senforce SPF+, Senforce, Senforce Enterprise Mobile Security Manager, and Senforce EMSM are trademarks of Senforce Technologies, Inc. All other names and trademarks are the property of their respective owners.
--------------------------------------------------------------------------------
Hey Weby, I hadn't seen that.................
thanks, I owe you one. Check your EM tomorrow.
Dude-ash
Huge find!!!!!!!!!
Now do the same search on December 15. But for now, go rent some movies!
kevin_s5
ps: liked your charts.
barge/doma, re: your discussion
is the board punchdrunk?
or just bored? I think possibly the latter. We're seeing TPM-enabled board, upon board, upon board, being released, yet little response? !!!
I know some folks in my life that at least offer an uneducated, unreasoned, and negative view of Wave. For the sake of argument, maybe I can get them to post !! LOL
really dig?
Gartner is predicting 30 million this year, up from their previous forecast of 17 to 20 million.
HP should be expanding their line soon and we haven't heard from Dell. Think we're going backwards??
Doma, have you seen this?:
Intel's Stealth Release
http://www.extremetech.com/article2/0,1558,1700431,00.asp
yaya, where's the afterglow??
go to bed and check your email in the morning.............
RWK
fwiw, as I've been reading, the consensus was that Homeland Security projects are so vital that any procurement begun under Bush wouldn't have been affected by a change in administration.
OT: HP, Lockheed Martin Partner to Deliver Supply Chain and Intelligence Solutions to Government Customers
MONTEREY, Calif.--(BUSINESS WIRE)--Nov. 3, 2004--
Global alliance builds on more than $500 million in
joint customer programs
HP (NYSE:HPQ)(Nasdaq:HPQ) and Lockheed Martin (NYSE:LMT) today announced a
strategic global alliance to drive new business opportunities for both
companies around the world.
HP and Lockheed Martin share a long history of cooperation on major U.S.
Department of Defense (DoD), intelligence and civilian agency programs. In the
past three years alone, the companies have partnered on more than $500 million
worth of business, including programs in the United States such as the FBI's
national Integrated Automated Fingerprint Identification System, the DoD's
Defense Civilian Personnel Data System, and the Postal Service's Integrated
Data System.
The formal alliance will enable HP and Lockheed Martin to further build upon
each other's strengths by optimizing technologies, solutions, key personnel
and product plans to create world-class solutions for customers and growth
opportunities for both companies on a sustained basis.
With this expanded relationship, HP and Lockheed Martin will initially focus
on three major initiatives:
-- Enterprise Logistics: HP will provide its global supply chain
"best practices" to enable Lockheed Martin to further
streamline its existing supply chain process, improving
efficiency and significantly reducing internal costs. Based on
the results of that work and customer needs, HP and Lockheed
Martin plan to leverage their combined global supply chain and
logistics expertise to provide end-to-end enterprise logistics
solutions.
-- Intelligence: For the intelligence community, HP and Lockheed
Martin will leverage Lockheed Martin's depth of knowledge and
systems integration expertise along with HP technology and
products to jointly develop and deliver solutions focused on
information integration and sharing, security and systems
management.
-- International Opportunities: By combining Lockheed Martin's
successful track record as a lead system integrator for the
defense and intelligence community with HP's global presence
and technology leadership, the alliance will jointly develop
and deliver solutions that enable governments around the world
to benefit from innovations in homeland security,
intelligence, supply chain, and other defense and security
technologies. As an example, opportunities exist in homeland
security for Italy, Poland and Romania, where integrated
information frameworks are required to support railroad,
ports, aviation and border infrastructure.
"This partnership builds on our respective strengths and long history of
cooperation," said Carly Fiorina, HP chairman and chief executive officer.
"Lockheed Martin's great success in developing and deploying system-of-system
solutions presents an opportunity for HP to provide transformational solutions
for governments and organizations responsible for securing the safety of their
constituencies against evolving, asymmetrical challenges."
"HP's established network of global customers and breadth of proven
technologies will enable Lockheed Martin to strengthen and expand the reach of
its innovative solution offerings," said Robert Stevens, president and chief
executive officer, Lockheed Martin. "As end-to-end solutions become
increasingly important to our customers, Lockheed Martin and HP, through our
strategic alliance, will be strongly positioned to support our customers'
complex missions with advanced technology system solutions."
HP and Lockheed Martin anticipate the alliance will expand over time to
include other areas of importance to customers such as Homeland Security and
Joint Command and Control.
The joint announcement came at MILCOM, a gathering of more than 1,000 top
military communications experts that was co-hosted by HP and Lockheed Martin.
Microsoft and IT Security
http://www.snpx.com/cgi-bin/news5.cgi?target=www.newsnow.co.uk/cgi/NGoto/74443160?-1313
Microsoft has come in for heavy criticism in recent years on the IT Security front, for obvious reasons. There are a vast number of Windows servers and PCs deployed in the corporate world and the Microsoft platforms have been the hackers’ primary target. However the problem doesn’t just belong to Microsoft. The reality is that the Internet has, by virtue of its rapid and fairly ad hoc expansion, created a huge playground for digital criminals and the digital criminals have created a loose network of their own, trading stolen information, software and various techniques for breaking in to systems.
It is a multi-faceted problem that has grown faster than the IT industry's ability to provide defenses against the various risks. Indeed, it has turned into a technology contest between the good guys and the bad guys that probably still has a long time to run.
A recent briefing I had with Microsoft made it very clear that Microsoft takes the matter very seriously indeed and is determined to fix its part of the problem. The short term fix has been the only possible fix, which is to invest in greater depth in checking code for vulnerabilities and getting fixes out fast. There is an unavoidable problem for any vendor releasing a security patch. Simply by making the patch available you alert hackers to the fact that a vulnerability exists and they can try to exploit it before the users get round to curing the problem, but there is nothing that can be done about this. The alternative of not fixing the problem is worse.
In order to really bolt down the IT security problem Microsoft has to engineer security into the operating system at the lowest possible level and that is what Microsoft is intending to do with Longhorn. Currently Longhorn is not expected to be released until 2006, and the server version may be later still. This may seem like a long lead time but to be fair, it is a very major release.
The IT industry has never engineered IT security into operating systems at the fundamental level – so that every process needed to be validated before it ran and all users and all data were authenticated. (This, by the way, is what Microsoft has in mind.) Until the Internet brought an explosion of security problems, most people thought that perimeter security was enough, but the reality is that perimeter security is no longer even a viable idea (what perimeter???). Microsoft is expecting to deliver a genuine security foundation in Longhorn: checking patches against a central patching service, providing “active protection” to dynamically adjust PC configuration and firewall settings to block attacks, blocking open ports, adjusting registry settings, watching for anomalous behaviour and so on. As much of this as possible will be ported to the next Windows XP service pack (SP2), but retrofitting a fully functional security layer is not really feasible.
One of the primary objectives of Longhorn will be to fix the security problem for the home PC user. In terms of patch management, providing solid default security settings, the blocking of executable email attachments and so on, most of the problems that plague the home user can and will be addressed.
The task in the large corporations is, of course, far more difficult, because they are mixed environments and this is unlikely to change any time soon. Microsoft is promising to deliver for Longhorn but what is really required is a security layer that permeates every part of a network. This will require solutions from multiple vendors, and not just platform vendors, and it will also need to cover the legacy hardware and platforms that are deployed and likely to stay in use.
It’s a problem that isn’t going to get resolved quickly. It’s too complicated and there’s too much history.
Wave Mention - Trusted Computing Group
to Showcase Security Applications at RSA Conference, Europe 2004, Stand #7
http://news.scotsman.com/latest.cfm?id=3703318
(BUSINESS WIRE) – Nov. 1, 2004 –
Trusted Computing Group Members to Participate in Two
Security Educational Sessions
The Trusted Computing Group (TCG), whose specifications have been developed to help vendors build products that protect critical data and information, will participate in two educational sessions at the upcoming RSA Conference, Europe 2004 November 3-5, at the Princess Sofia Hotel in Barcelona, Spain.
TCG will present two sessions during the RSA Conference, Europe 2004. On Thursday, 4 November, 4:30 p.m., David Grawrock of Intel Corporation will speak at the Implementers Track on “How to Prevent Spoofing and Its Impact on End Users, Servers, and IT Operations.” Thorsten Stremlau of IBM will also speak at 11:00 a.m., Friday, 5 November, on a panel addressing “SSL VPN vs. IPSec VPN: Which is Better for Your Organization?”
TCG will also sponsor the Delegate Lunch which will be held on Thursday, 4 November, 12:15 p.m. – 2:00 p.m. in the Exhibition Hall.
Trusted Computing Group member company Wave Systems will host a demonstration of Trusted Computing solutions and applications at the TCG stand (#7) at RSA Conference, Europe 2004.
About TCG
TCG is an industry standards body formed to develop, define, and promote open standards for trusted computing and security technologies, including hardware building blocks and software interfaces, across multiple platforms, peripherals, and devices. TCG specifications are designed to enable more secure computing environments without compromising functional integrity with the primary goal of helping users to protect their information assets from compromise due to external software attack and physical theft.
More information and the organization’s specifications are available at the Trusted Computing Group’s website, www.trustedcomputinggroup.org.
Securing Logical Access: Smart Cards and Strong Authentication
Virtually every day another news story highlights the importance of network security - corporate networks are breached, databases are accessed by unauthorized individuals, and identities are stolen and used to conduct fraudulent transactions. As a result, both businesses and governments are evaluating or implementing new identity management systems to provide more secure logical access.
Logical access is the process by which individuals are permitted to use computer systems and the networks to which these systems are attached. The objective of secure logical access is to ensure that these devices and networks, and the services they provide, are available only to those individuals who are entitled to use them. Entitlement is typically based on some sort of predetermined relationship between the network or system owner and the user, such as that of a paying subscriber, an employee, a customer, or some other type of binding relationship.
The system that supports delivery of such networked services represents a significant investment; in fact, this system may represent the single largest asset that the owning organization has. These assets require protection from unauthorized use by individuals or entities who may diminish or destroy their value. Therefore, controlling access to these assets is of paramount importance to virtually all organizations that rely on information technology (IT) systems to accomplish their objectives.
Current Methods for Accessing Computer Networks
The most widely implemented method for controlling logical access is the user ID-password combination. Users provide the user ID (usually the user's name) and a secret that only the user knows (usually a password). A simple database lookup determines that the password is attached to the user ID, authenticates the user's identity, and grants access. Each system or application typically assigns a unique user ID and password combination to each user and then determines access controls for that user based on the unique ID.
Over time, however, this type of authentication has proven to be weak and inefficient. User IDs and passwords can be compromised relatively easily through a variety of well-known techniques. When such information is obtained by criminal elements, it can be used to achieve unauthorized and illegal entry into a network. The results of compromised access controls can be disastrous for the network owner and for the user whose network or system identity is stolen. In addition, user identities are typically managed application by application, creating operational inefficiencies as the number of systems and applications in an organization grows and introducing security vulnerabilities as it becomes increasingly difficult to control policies governing the use of those identities.
Fortunately, new technologies are available that can strengthen the authentication process supporting access control and provide a higher level of assurance that users are who they claim to be and that the identity credentials presented are valid. These technologies generally employ encryption techniques, biometric data of some sort, and/or the possession of a physical token or credential to improve the effectiveness of access control systems. Unlike the use of a single factor (i.e., user ID-password combination), strong authentication requires the use of two or three factors to validate identity.
Factors would include some combination of:
Something you know (a password or personal identification number that only you know),
Something you have (a physical item or token in your possession), and
Something you are (a unique physical quality or behavior that differentiates you from all other individuals).
Using stronger authentication technologies and multiple authentication factors mitigates potential loss due to unauthorized access to network assets.
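As a rough illustration of how two factors combine, the Python sketch below pairs a salted password check (something you know) with an RFC 4226-style one-time code from a token (something you have). The enrollment record is invented, and a production system would rely on a vetted authentication library:

    import hashlib, hmac, os, struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # RFC 4226: HMAC-SHA1 over the 8-byte counter, then dynamic truncation.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0]
                & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    # Invented enrollment record; stored server-side at registration time.
    salt, token_secret = os.urandom(16), os.urandom(20)
    record = {
        "salt": salt,
        "password_hash": hashlib.sha256(salt + b"correct horse").digest(),
        "token_secret": token_secret,
        "counter": 0,
    }

    def login(password: str, otp: str) -> bool:
        knows = hmac.compare_digest(
            hashlib.sha256(record["salt"] + password.encode()).digest(),
            record["password_hash"])                    # something you know
        has = hmac.compare_digest(
            otp, hotp(record["token_secret"], record["counter"]))  # something you have
        return knows and has

    assert login("correct horse", hotp(token_secret, 0))

An attacker who steals the password alone, or the token alone, still fails the check.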
Drivers for Stronger Logical Access Methods
Compromised security is not the only reason for seeking improved logical access control techniques. Other drawbacks of the user ID-password combination include high administrative costs, inadequate ability to manage different risks, and inability to leverage the additional security that is now being built into computer systems and applications.
Administrative Costs. As users access increasing numbers of network services, each requiring a separate user ID and password, the user's ability to manage and remember required access information breaks down. As a result, users either write the information down, which makes it vulnerable, or call their network administrators. Administrators must regularly deal with service calls from users who have forgotten their user ID-password combination.
Such service calls are expensive and are becoming more so, as the services provided through a multitude of expanding networks increase. Several sources estimate that a single call to an administrator to reset a forgotten password costs approximately $40. The costs associated with supporting this method of authentication and access control are driving network administrators to look for solutions that are more efficient, as well as more secure.
Security Risks. Recently, reports of unauthorized individuals breaking into computer networks to steal information for financial or political purposes have multiplied. In the private sector, the impact of such security breaches is measured in terms of both financial loss and loss of customer confidence. In government circles, the risk is magnified by the potential effect on national security and the impact on the public's trust and confidence in critical government institutions.
As more intrusions take place, the ability to quantify their negative impact is improving. Institutions in both the public and private sector are better able to analyze the costs and benefits of investing in new technologies to improve network security, including technologies to improve access control, and are able to justify doing so based on solid return on investment.
Risks of Legal and Regulatory Noncompliance. In the aftermath of the September 11 terrorist attacks, a significant amount of new legislation was passed, primarily aimed at improving the security of computer networks owned and managed by the Federal Government. Additional legislation promotes the adoption of systems that deliver government services electronically. One critical part of these initiatives is support for the logical authentication of individuals trying to access such services.
As a result, network security and the mechanisms by which users are granted access to government-controlled assets have moved to the top of the government agenda. Policy and implementation guidelines define the various levels of authentication that are needed based on the sensitivity of the information being accessed, and a variety of candidate technology options have been identified, ranging from user IDs and passwords to public key infrastructure (PKI), biometrics, and smart cards. Many U.S. government agencies have already put in place programs to issue smart ID cards that support stronger authentication techniques for both physical and logical access.
The government already requires contractors to meet government-specified standards for security technologies, policies, and practices. The trend is for the private sector to adopt technologies and practices put in place by the government, not only as an example of best practices, but also as a means of mitigating any legal risk that may be incurred by nonconformance. Businesses are also subject to a number of new requirements for access control and audit, as a result of new laws or regulations such as the Gramm-Leach-Bliley Act, HIPAA, the Sarbanes-Oxley Act, and the USA Patriot Act.
Privacy and Identity Theft. According to the Federal Trade Commission, in the last 5 years 27.3 million Americans were victims of identity theft, with businesses and financial institutions losing nearly $48 billion to identity theft and consumer victims reporting $5 billion in out-of-pocket expenses. Attacks on consumers' computers, through "phishing" and other virus and "spyware" attacks, constitute new ways to steal usernames and passwords. Gartner reports that more than 1.4 million U.S. adults have suffered from identity theft fraud due to phishing attacks, costing banks and card issuers $1.2 billion in direct losses in the past year.
As privacy and identity theft become larger issues (and are addressed by legislation at the state and national level), the private sector will have to move toward stricter controls on customer databases and the personal information that companies are entrusted to protect. Companies will need to control access to sensitive information and ensure that such information is only accessible to those with the proper authorization.
Technology Evolution and Migration. Because of the increasing demand by IT users for improved access control mechanisms, IT solution providers are building more security into their products to provide native support for modern authentication solutions. For example, support for PKI logon and encrypted and digitally signed e-mail is now native to Windows. More and more products from a wide variety of vendors enable the use of PKI, biometric, and smart card technologies to support strong authentication methods using multiple factors.
As computer systems are refreshed and upgraded over time, support for strong authentication through multiple technological approaches will be more readily available. The result should be increasingly widespread use of strong authentication techniques, higher levels of security assurance, and greater user convenience.
The Role of Smart Cards
Smart card technology provides the foundation for privacy, trust and security in logical access applications. As a cryptographic device, the microcontroller at the heart of the smart card can support a number of security applications and technologies. Smart cards offer secure data storage and support any or all of the authentication techniques commonly used to secure logical access, including:
Support for PKI and asymmetric key applications (e.g., digital signatures, e-mail message encryption), on-card key generation, and protection for the privacy of the user's private key
Secure storage for biometric templates
Secure storage for user IDs and passwords
Support for one-time password generation
Secure storage for symmetric keys
Support for other applications, such as physical access control or financial transactions
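To illustrate the first item above, the sketch below performs a challenge-response in which the private key is generated and used only inside a simulated card, using the third-party Python cryptography package (the SimulatedCard class is illustrative, not a real smart card API):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    class SimulatedCard:
        """The private key never leaves the 'card'; only signatures do."""
        def __init__(self):
            self._priv = rsa.generate_private_key(public_exponent=65537,
                                                  key_size=2048)
            self.public_key = self._priv.public_key()

        def sign_challenge(self, challenge: bytes) -> bytes:
            return self._priv.sign(challenge, padding.PKCS1v15(),
                                   hashes.SHA256())

    card = SimulatedCard()
    nonce = os.urandom(16)  # the host's fresh challenge
    signature = card.sign_challenge(nonce)
    # Raises InvalidSignature if the response was not produced by the card.
    card.public_key.verify(signature, nonce, padding.PKCS1v15(),
                           hashes.SHA256())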
Smart card technology significantly strengthens security, protecting both the electronic credential used to authenticate an individual for logical access and the physical device. Since the credential is permanently stored on the card, it is never available in software or on the network for an unauthorized user to steal. Smart cards build protection into the physical device by supporting tamper-resistant features and active security techniques for encrypting communications. Smart card technology is also available in multiple form factors (plastic card, Universal Serial Bus (USB) device, or mobile phone Subscriber Identification Module (SIM) chip), supports both contact and contactless interfaces and has a wide variety of readers available.
Smart cards are becoming the preferred method for logical access, not only for their increased security, but also for their ease of use, broad application coverage, ease of integration with the IT infrastructure, and multi-purpose functionality. Both Microsoft® Windows® and Unix® operating systems offer a significant level of smart-card-related support and functionality, through either built-in (out-of-the-box) support or commercial add-on software packages. Smart-card-based logical access allows organizations to issue a single ID card that supports logical access, physical access, and secure data storage, along with other applications. For example, the same smart ID card can allow an individual to enter a building securely, log onto the corporate network securely, sign documents securely, encrypt e-mail and transactions, and pay for lunch at the organization's cafeteria. By combining multiple applications on a single ID card, organizations can reduce cost, increase end-user convenience, and provide enhanced security for different applications.
Smart card technology provides organizations with cost-effective, secure logical access. Smart cards deliver a positive business case for implementing any authentication technology. Improved user productivity, reduced password administration costs, decreased exposure to risk, and streamlined business processes all contribute to a significant positive return on investment.
awk, re: 32-bit smartcards, did you see this?:
New 32-bit SIM Chip from STMicroelectronics Will Benefit Mobile Phone Multimedia Services
ST22N256 Doubles Multi-Media Message Storage Capability and Enables
Advanced Phonebook and MMS Applications, Allowing Telecom Operators to
Provide Enhanced 2.5G and 3G M-Services
STMicroelectronics Will be At Cartes 2004, November 2-4, in Paris, Booth 3E14
GENEVA, Oct. 27 /PRNewswire-FirstCall/ -- STMicroelectronics (NYSE: STM)
has announced a new smartcard MCU in its ST22 range -- based on the SmartJ(TM)
Java-accelerated RISC architecture -- which integrates 256-kbytes of EEPROM
memory with a high performance CPU to support the demands of multimedia
applications on the latest mobile phones.
With sales of multimedia-equipped handsets booming, mobile communications
operators supporting 3G (Third Generation) and 2.5G mobile phones need (U)SIM
cards (Universal Subscriber Identity Modules) that have sufficient memory
capacity to store Multimedia Messaging System (MMS) data, video, and
photographic images, coupled with the capability to transfer and use this data
efficiently to provide advanced phonebooks and audio-visual services. 2.5G is
an intermediate level of service that uses an enhanced second-generation
technology to provide some of the 3G features over GPRS (General Packet Radio
Service).
"The ST22N256 is perfectly in line with the growing demand for secure
high-performance chips with high-speed interfaces and a large memory capacity,
for use in 2.5 and 3G SIMs," said Reza Kazerounian, General Manager of ST's
Smart Card ICs Division. "ST already offers the largest range of secure 32-bit
processors for smartcard systems, and will remain at the forefront of
smartcard silicon suppliers as 3G takes off."
The SmartJ CPU core at the heart of the ST22 Family -- which the new ST22N256
now combines with 256-kbytes of EEPROM -- is a 32-bit RISC-architecture core
developed specifically to provide very fast execution of Java, the programming
language commonly used for small applications, or applets, downloaded to
mobile phones. The ST22 augments its own highly efficient native RISC
instruction set with a hardware decoder that directly converts Java bytecodes
into native microcode instructions, thereby eliminating the overhead and lower
performance of processors based on Java emulation. The result is not only very
fast Java execution but also reduced power consumption.
An essential component of all GSM (Global System for Mobile Communications)
mobile phones, the SIM card stores critical subscriber authentication
information; private data such as personal phone directories, messages, audio,
and images; and the operating system and operator's multimedia environment.
With the quantity and size of users' MMS messages increasing, operators will
now be able to provide increased storage for subscriber data without impacting
user friendliness, due to the exceptional performance of the ST22N256's SmartJ
processor, and its communication through a fast Asynchronous Serial Interface
(ASI) which enables 440-kbit/s communication speeds with mobile equipment, in
line with the fastest deployments of ISO 7816 in the GSM world. Two additional
serial I/O ports are also provided.
The Java-accelerated CPU ensures that the ST22N256 not only provides the
memory needed for today's multimedia services (M-services), but also the
processing power to exploit it. The core, with 24-bit linear memory
addressing, is complemented by 368-kbytes of on-chip ROM, 16-kbytes of RAM,
and a set of standard peripherals and custom plug-in circuits. Logical and
physical security mechanisms are fully integrated into the silicon, including
a hardware Memory Protection Unit for application firewalling and peripheral
access control, and a protected Context Stack. The core includes dedicated DES
(Data Encryption Standard) instructions for Secret Key cryptography, and a
fast Multiply and Accumulate instruction for Public Key (RSA) and Elliptic
Curve cryptography, plus a CRC (Cyclic Redundancy Check) instruction. A
firmware cryptographic subroutine library is located in a secure ROM area to
save designers the need to code first-layer functions.
The ST22 product platform is supported by a comprehensive Integrated
Development Environment, which allows coding, compilation, and debugging using
a common interface. It provides a code-generation chain that includes a C/C++
compiler, a native and JavaCard assembler and a linker, plus a SmartJ
instruction set simulator, C/C++ source level debugger, and hardware emulation
tools. Operating System developers currently working with the 128-kbyte
ST22L128 will be able to benefit from the design continuity offered by the
ST22N256, as well as its immediate availability and compliance with the
fastest communication standard adopted by handset manufacturers.
The SmartJ development methodology allows customers to significantly reduce
the time and cost of developing secure applications. It supports concurrent
hardware and software development, multiple development teams and IP reuse, as
well as security evaluation to the Common Criteria and the use of formal
methods for security assurance through executable high-level specifications
and model checking techniques.
Development of the ST22N256 follows more than 20 years' experience in the
design of silicon products to the highest levels of security. ST is a major
manufacturer for the smartcard market, and is the number one supplier of
secure ICs to the financial sector for card applications. Over the years it
has evolved a "security culture" across design and manufacturing functions, in
addition to meeting the stringent requirements of formal security
certification.
The ST22N256 is manufactured using 0.15-micron technology, and is currently
the only secure IC to combine 32-bit processing power with 256- kbytes of
EEPROM and 368-kbytes of ROM. It is available in sample form now, with volume
production starting in 2005. US pricing for the product is between $4 and $5,
depending on quantity and final packaging.
2b: Embedded FINREAD mobile in Deauville
October 5th 2004
Cartes Bancaires is exhibiting the France Télécom 'Embedded FINREAD' mobile phone demonstration at the Deauville "RSI Normandie" Trade Show and Conference.
This demonstration combines a France Télécom R&D server with a Sagem FINREAD mobile phone. It produces an electronic signature on an insurance contract, in order to authenticate modifications made via the Internet.
This Embedded FINREAD feature is based on a Java-MIDP secure application and a France Télécom R&D server.
This demo will be accompanied by the presentation of the CB solutions for 3-D Secure that will be explained by Cartes Bancaires staff as well.
One of them should rely on FINREAD reader authentication in the near future.
OT: FINREAD Heading for Standardization
FINREAD was placed on the International Organisation for Standardisation's (ISO) agenda last September.
http://www.cartes-bancaires.com/CBMag/24/en/enews/index.htm
© Dominique Maître
The FINREAD specifications are continuing to make headway towards international standardisation. In May 2001, the CEN (European Committee for Standardisation) approved the FINREAD specifications in the form of a CWA (CEN Workshop Agreement), and has since done the same for Embedded FINREAD in September 2003. The FINREAD partners then submitted these specifications to ISO (International Organisation for Standardisation) and its 146 member countries, with the aim of getting FINREAD adopted as an international standard.
The first stage in this procedure was successfully completed in early September 2004 when ISO placed FINREAD on the agenda of one of its committees (ISO/IEC JTC1/SC17) following a positive vote by 20 of the relevant committee's 31 member countries. Seven countries (China, France, Germany, Japan, New Zealand, Singapore and South Africa) will take part in the standardisation work scheduled to commence very shortly. This standardisation work should ultimately lead to FINREAD's adoption as an international standard (IS), and the recognition of these specifications by all stakeholders at the global level. Watch this space!