Was the previous score +30? e/
TCG Extends Standards for Endpoint Security
Technology Executive Alert, by Linda Musthaler and Brian Musthaler, Network World, 05/12/2008
http://www.networkworld.com/newsletters/techexec/2008/051208techexec1.html
At times it seems the endpoint device is the scourge of the network – at least from a security administrator’s viewpoint. Endpoints come in a variety of devices, from a variety of vendors, and by their nature, many of them can come and go almost as they please. In response, network and security administrators attempt to restore order by managing the process by which an endpoint is admitted onto the corporate network.
But with network threats becoming more frequent and sophisticated, regulations to assure data privacy correspondingly increasing, and continual pressure to reduce costs, there is a real need for integrated security beyond endpoint admission control.
The Trusted Computing Group (TCG) provides a blueprint for this integrated security called Trusted Network Connect (TNC). TNC is an open architecture and set of standards for network access control (Compare NAC products). These standards facilitate the creation and enforcement of security requirements for endpoint devices that connect to corporate networks by collecting endpoint configuration data; comparing this data against policies set by the network owner; and providing an appropriate level of network access based on the detected level of policy compliance (along with instructions on how to fix compliance failures). The standards ensure multi-vendor interoperability across a wide variety of endpoints, network technologies, and policies.
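The admission flow described above can be sketched in a few lines. This is purely an illustration of the collect/compare/grant logic, not the TNC protocol itself; the policy fields, function names, and access levels are hypothetical.

```python
# Illustrative sketch of the TNC admission flow: collect endpoint posture,
# compare it against the network owner's policy, and map the result to an
# access level plus remediation instructions. All names are invented for
# illustration and are not part of the TNC specifications.

POLICY = {
    "antivirus_on": True,      # endpoint must run antivirus
    "patch_level_min": 42,     # hypothetical minimum patch level
}

def check_compliance(posture: dict) -> tuple:
    """Return (access_level, remediation_steps) for one endpoint."""
    failures = []
    if posture.get("antivirus_on") != POLICY["antivirus_on"]:
        failures.append("enable antivirus")
    if posture.get("patch_level", 0) < POLICY["patch_level_min"]:
        failures.append("install OS patches")
    if not failures:
        return "full_access", []
    if len(failures) == 1:
        # limited access plus instructions on how to fix the failure
        return "quarantine", failures
    return "deny", failures

print(check_compliance({"antivirus_on": True, "patch_level": 40}))
# ('quarantine', ['install OS patches'])
```

In a real deployment the posture data would come from agents on the endpoint and the decision would be enforced by the switch or gateway; the point here is only the shape of the policy comparison.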
More than likely, the endpoint and NAC solutions you have implemented today to control access to your network are based on the TNC standards. Numerous hardware and software vendors support the standards in their networking products.
On April 28, TCG published extensions to the current TNC standard that address the following problems:
* There are more unmanaged network endpoints than managed endpoints, including, for example, factory automation components, inventory control devices, and RFID-enabled assets.
* There’s a need to manage the entire life-cycle of a network endpoint, not just the admission process.
* It’s hard to manage increasingly complex security solutions.
The TNC extension is called IF-MAP (Interface for Metadata Access Point). With IF-MAP, various devices can be integrated into the network infrastructure, enabling real-time monitoring of the security status of an endpoint by a network operator. By doing so, the management of security risk can move from point security to a holistic approach, as well as from passive protection to active protection.
The heart of IF-MAP is the MAP database. This database contains records for each of the endpoints on your network, including information on the user, the health of the device, the device’s port or MAC address, how the user came onto the network, and other pertinent real-time information. Various vendors like Infoblox and Juniper have already signaled their intentions to provide MAP database servers.
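A toy model makes the MAP database's role concrete: multiple security components publish metadata about the same endpoint, and any component can later query the merged record. This is only an illustration; the real IF-MAP protocol is XML/SOAP-based, and the operation and field names below are simplified assumptions.

```python
# Toy model of a MAP database: security components publish metadata about
# an endpoint (keyed here by MAC address), and other components query it.
# Real IF-MAP uses XML/SOAP publish/search/subscribe operations; this
# sketch only models the merge-and-query behavior described in the article.

class MapDatabase:
    def __init__(self):
        self._records = {}  # MAC address -> metadata dict

    def publish(self, mac: str, **metadata):
        """Merge new metadata into the record for one endpoint."""
        self._records.setdefault(mac, {}).update(metadata)

    def search(self, mac: str) -> dict:
        """Return a copy of everything known about the endpoint."""
        return dict(self._records.get(mac, {}))

mapdb = MapDatabase()
# An 802.1X authenticator reports user and port; a health checker reports posture.
mapdb.publish("00:1a:2b:3c:4d:5e", user="alice", port="gi0/12")
mapdb.publish("00:1a:2b:3c:4d:5e", health="compliant")
print(mapdb.search("00:1a:2b:3c:4d:5e"))
```

The value is in the merge: the firewall, the admission server, and the IDS each contribute a slice, and any of them can read the whole picture.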
Which vendor provides the database isn’t as important as the fact that the database provides a standard way for network security components to securely share information about users, devices, and security incidents on the network. By sharing information, security components can act in a more intelligent manner. For example, peer-to-peer file sharing may be normal and permitted for one group of users but not for another.
The value of IF-MAP is that the network can be made more secure yet less restrictive by tuning protection for each user or group of users. Companies can reduce operating costs by minimizing false alarms, enabling automated response, and making policies and reports more useful by using usernames and roles instead of IP addresses.
If you think that this is vaporware, think again. At Interop 2008, the TCG and a number of supporting vendors demonstrated TNC/MAP in action. The first products supporting IF-MAP should be commercially available in 6 to 12 months.
The IF-MAP specifications don’t require the logging of data in the MAP database. However, we can see this type of feature as a value-add provided by the MAP database vendors. Logs of all the endpoints’ activities can be invaluable for forensic analysis and for maintaining compliance with regulations like SOX and PCI.
D&O....
Here's to hoping he has good news to report, rosy predictions to make, and begins to get both the attention and RESPECT of the market!!
Why would you choose to believe him now?
FM
I have a feeling you're going to be forced to cover at much higher prices......
Fixing the Internet
http://weblog.infoworld.com/securityadviser/archives/2008/05/fixing_the_inte.html
Long-time readers know that I often rant about how insecure the Internet is, and how few solutions will do anything to change that equation during the next 5 to 10 years. I've also recommended a handful of solutions over the years, and accepted the resulting criticism that goes along with proposing big ideas.
Privately, and not so privately in this column and other public forums, I've been proposing specific solutions to make the Internet significantly more secure during the next five years. If you know me personally, you would also know that other than my family, I think of nothing else but how to secure the Internet. I've been thinking about it since the early 1990s, every waking hour of every day. I think about it during my early morning workouts, in the shower, while stopped at stoplights, and while getting my haircut. It's no exaggeration, although it's more than a little embarrassing to admit that I spent my honeymoon thinking and writing about a possible solution. Thankfully, my lovely wife understands my quest. I truly think that, work-wise, I was put on this planet to make the Internet a more secure place to compute. Mentally, it defines who I think I am. If I fail to assist in this endeavor, in some measurable way, then I haven't met a major life goal.
Recently, two of my biggest ideas have independently ended up in other groups' proposals and standards (neither group appears aware of my ideas). One was Microsoft's End-to-End Trust, announced a few weeks ago at the last RSA conference; the other was the Trusted Computing Group's recently announced IF-MAP standard. Although I've proposed very similar ideas, in this column and other online forums, only participating readers are aware of the early existence of my ideas (as compared to the newer initiatives).
It reinforced the notion that I'm not alone in my thinking, of course, and that many other individuals have the exact same ideas. What human good might happen if we shared and debated our ideas? In that spirit I've decided to release a formal whitepaper entitled Fixing the Internet: A Security Solution. It encompasses all my main ideas, including how to practically build on the ideas of End-to-End Trust and IF-MAP (which are both laudable solutions).
Here is a brief re-cap of the document: Any solution proposed to secure the Internet must be:
• Vendor Independent (Non-Proprietary)
• Using an Open and Transparent Process
• Voluntary Opt-In
• Performance Neutral
• With as Little Service and End-User Interruption as Possible
• Driven by User and Vendor Self-Interests
As difficult and complex as this seems at first, it can be accomplished.
It will require two major Internet infrastructure changes. First, it will require a global, Internet security “dream team” to meet and solve the problems. There are many existing teams with brilliant members, but they are either not global in nature or do not focus on preventive, holistic Internet security defenses. This idea is perhaps the hardest to pull off, as neither individuals nor businesses want to commit to a many-month process for the common good if it does not have immediate, tangible benefits to their own competitive self-interests. Put another way, even I don’t have months to two years of my life to give up to the cause without someone footing the bill.
Second, it will require a new global Internet security infrastructure service to handle the dream team's global initiatives. This idea is similar to an imagined cross between the global DNS infrastructure, a Web services Universal Description, Discovery and Integration (UDDI) service, and the Trusted Computing Group's new IF-MAP standard, applied globally. The new global Internet security infrastructure service should be DNS-like in that there would be fault-tolerant, distributed "root" servers dedicated to directing querying clients to the appropriate security service server(s). It would be UDDI-like in that each participating global sub-root server would serve up IP addresses for the corresponding needed security services (and advertise and publish such services). It would be IF-MAP-like in that the sub-root servers would allow participating members to report and respond in a global, holistic, multi-service manner.
If you are not familiar with IF-MAP, in a nutshell, the new Trusted Computing Group’s (www.trustedcomputingroup.org) IF-MAP standard allows participating devices to report security events and receive notifications from other security devices to be able to respond in a coordinated fashion.
For example, if a firewall notes an unauthorized outbound stream that it recognizes as a bot spam stream, the firewall can contact the IF-MAP service, which can then contact a policy server that contacts another service that shunts the offending device off the network. The Internet security service would be similar to IF-MAP in that it would allow the coordination (i.e. reporting, advertising, direction, and response) of multiple disparate services, but be global in scale. Currently, the IF-MAP standard focuses on coordination within a single control domain. The Internet security service would be available for global coordination and direction, and should be integrated with private IF-MAP devices. The global Internet security service would have to be resilient, fault-tolerant, and cryptographically sound.
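The firewall-to-policy-server-to-enforcement chain in that example is essentially publish/subscribe. Here is a minimal sketch of that coordination pattern; the component names, event fields, and the "shunt" action are all hypothetical stand-ins, not IF-MAP messages.

```python
# Minimal publish/subscribe sketch of the coordinated response described
# above: a firewall reports a suspected bot spam stream to an IF-MAP-like
# service, a subscribed policy server evaluates it, and an enforcement
# point shunts the offending device. All names and fields are invented.

class EventBus:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, event: dict):
        for handler in self._subscribers:
            handler(event)

quarantined = []

def enforcement_point(mac: str):
    # Stand-in for moving the port to a dead VLAN or applying a deny ACL.
    quarantined.append(mac)

def policy_server(event: dict):
    # Policy: any endpoint flagged as a bot spam source gets shunted.
    if event.get("type") == "bot-spam":
        enforcement_point(event["mac"])

bus = EventBus()
bus.subscribe(policy_server)
# The firewall's report of an unauthorized outbound spam stream:
bus.publish({"type": "bot-spam", "mac": "00:aa:bb:cc:dd:ee"})
print(quarantined)  # ['00:aa:bb:cc:dd:ee']
```

Scaling this from one control domain to the global, multi-domain coordination the column proposes is exactly the hard part; the sketch only shows the local pattern.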
Finally, the whitepaper suggests one possible solution under the previously laid-out structure: the major underlying Internet security issue preventing a significant reduction in malicious behavior is the pervasiveness of default anonymity on the Internet. Because we can't identify malicious hackers with a high degree of confidence, we cannot hold them accountable. Internet crime is high-yield and low-risk. If the Internet's model of default anonymity were replaced with default identity and integrity, the amount of maliciousness would significantly decrease.
I propose that every participating Internet component, hardware and software, be modified to provide increased identity and integrity assurance. Participating devices and users would provide improved levels of trust and be treated appropriately. All participating network traffic would be cryptographically tagged with a “trust level”, which could be evaluated and acted upon accordingly. Each participating security domain would be responsible for assuring the trust and labeling of its egress traffic and responsible for acting upon tagged ingress traffic (and be held accountable for its attestations).
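One way such a cryptographic trust tag might look, sketched with an HMAC: the egress domain signs a trust label over the traffic, and the ingress domain verifies the tag before acting on the label. The key handling, label format, and byte layout here are assumptions for illustration, not part of the whitepaper.

```python
# Sketch of "cryptographically tagged trust levels" using an HMAC.
# Assumption: the two domains share (or derive via PKI) a signing key.
# Layout: 1-byte trust label || 32-byte HMAC-SHA256 || payload.

import hashlib
import hmac

DOMAIN_KEY = b"shared-or-PKI-derived-key"  # hypothetical key material

def tag_traffic(payload: bytes, trust_level: int) -> bytes:
    """Egress side: attach a signed trust label to outgoing traffic."""
    label = trust_level.to_bytes(1, "big")
    mac = hmac.new(DOMAIN_KEY, label + payload, hashlib.sha256).digest()
    return label + mac + payload

def verify_traffic(tagged: bytes):
    """Ingress side: return (trust_level, payload), or None if the tag fails."""
    label, mac, payload = tagged[:1], tagged[1:33], tagged[33:]
    expected = hmac.new(DOMAIN_KEY, label + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return None  # drop or down-rank unverifiable traffic
    return int.from_bytes(label, "big"), payload

tagged = tag_traffic(b"hello", trust_level=3)
print(verify_traffic(tagged))  # (3, b'hello')
```

An ingress domain could then rate-limit or drop traffic below a trust threshold, while holding the egress domain accountable for any labels it signed.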
That’s it. The whitepaper covers each of these issues, along with the motivations and explanations behind the ideas. I hope you’ll take the time to read it.
Not everyone will agree with what I have said, but I hope both sides, supporters and critics, will write back and participate. It’s easier to tear down a barn than it is to build one, but I know that there are several ways to make a barn, many styles of barns, and mine isn’t the only way. Even if I don’t have all the solutions, I want to provoke a dialog that starts a discussion about the real solutions to the Internet’s security problems. If you disagree with my ideas, I only ask that you propose your idea for fixing the Internet along with your criticism.
player, tnx for wading through the reqs on this e/
NPD: Online Subs Exceed $1 bln
http://www.next-gen.biz/index.php?option=com_content&task=view&id=10377&Itemid=2
By Kris Graft
The NPD Group has released new data showing that online subscriptions in the US gaming market exceed $1 billion annually. In particular, NPD's new Online Subscription Tracker (announced in February) paints a clearer picture of the PC market, which has seen much of its revenue move online.
"Now that NPD can estimate the value of the subscription market, it's clear that there is a sizable chunk of revenue being generated by PC gaming beyond what is reflected in retail sales," said NPD analyst Anita Frazier.
PC retail sales declined slightly from just over $1 billion in the US in 2006 to $910.7 million in 2007. This did not include digital distribution and online subscription sales.
NPD said there are 11 million gaming subscribers monthly.
The billion-dollar figure includes three categories: MMO, casual and console revenues. The firm calculated the figure by averaging the monthly online revenue figures for Q4 2007 ($94.3 million) and Q1 2008 ($80.1 million) and multiplying the result by 12 months.
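That calculation checks out against the headline number:

```python
# Reproducing NPD's stated estimate: average the two monthly revenue
# figures and annualize over 12 months (all figures in millions of USD).
q4_2007_monthly = 94.3
q1_2008_monthly = 80.1
annual = (q4_2007_monthly + q1_2008_monthly) / 2 * 12
print(round(annual, 1))  # 1046.4 -> just over $1 billion per year
```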
The research firm also ranked the top five MMO and gaming websites based on units in Q1 2008:
MMO/PC Game Subscribers
1.) World of Warcraft
2.) RuneScape
3.) Lord of the Rings Online
4.) Final Fantasy XI
5.) City of Heroes
Gaming Website Subscribers
1) Pogo.com
2) Realarcade.com
3) Bigfishgames.com
4) Gametap.com
5) Disney.com
Frazier described a demographically diverse PC gaming audience. "By contrasting the demographics of MMO players against those of gaming website players, the broad appeal of PC gaming is clearly evident," she said.
"While the majority of gaming website players are females over the age of 35, MMOG players are largely males under the age of 35. The variety of content available to play games on the PC clearly can draw a diverse audience."
NPD said that it may periodically release more online-related figures to the public, such as Xbox Live data. Complete details on NPD's findings are in the full report, Video Game and PC Game Subscriptions Report.
Intel Capital’s Arvind Sodhani explains Clearwire-Sprint deal
http://www.thestandard.com/news/2008/05/07/q-intel-capital-s-arvind-sodhani-explains-clearwire-sprint-deal
VB: Can you discuss the logic of participating in this deal?
AS: WiMax is crucial for our strategy. We want to facilitate the deployment by providing capital. That’s what Intel Capital does. WiMax is here. It’s happening. It’s two to three years ahead of any competing technology. It’s getting deployed. It’s cheaper and faster. It’s getting embedded into our Montevina platform and will be in laptops in the coming quarters.
The Future Of Gaming May Already Be Sitting On Your Desk
http://www.napsnet.com/articles/58222.html
Gaming Market Still Growing
In terms of revenue, the PC was also the largest gaming platform in 2007, earning $8.2 billion in both online and retail sales worldwide. Despite this already incredible figure, analysts predict the market will grow by more than 80 percent in the next five years, with significant increases in revenue from digital distribution and hardware sales.
I'm sure it would be!!!
and I do communicate with dozens of posters on this board ... the ones I know and trust.
FM
alea,
In fact, I have discussed Wave's cash flow situation in addition to many other aspects of the company's business. I discuss it daily..... just not in a public forum.
FM
I hate to say it alea,
but not really. I read your posts about as often as I visited your board. I much prefer posters that post info versus opinion. I can't recall seeing one link to anything in any of your posts. That's neither good nor bad, it's just who you are.
FM
Hi Alea,
I do agree with much of what you post, but I find it laborious to wade through your pedantic and verbose writing-style.
1. Do you think Wave has a cash flow issue and if you do, why haven't you commented on it? Obviously, they do. I haven't commented on it because you have... many, many times.
2. Do you recommend that investors should buy Wave's stock here? No, only speculators.
3. Do you agree with Ramsey that Steven being economical with the truth is a price worth paying for the survival of the company? No, I don't.
great question bibi !! e/
Sandusky Port Workers Begin Enrollment for Federal Port Security Credential
SANDUSKY, OH -- (MARKET WIRE) -- 05/07/08 --
Today, port workers, longshore workers, truckers and others at the Port of Sandusky will begin to enroll in the Department of Homeland Security's Transportation Worker Identification Credential (TWIC) program. The program's goal is to ensure that any individual who has unescorted access to secure areas of port facilities and vessels has received a thorough background check and is not a security threat.
Nationwide, more than 1.2 million workers with unescorted access to secure areas will apply for TWIC by the end of enrollment in April 2009.
"The start of enrollment is one more step in our effort to prevent persons who are a threat from gaining access to secure areas of port facilities," said Maurine Fanguy, TWIC Program Director for the Transportation Security Administration (TSA). "We appreciate the support of our partners at the Port of Sandusky for helping to make one of the world's most advanced interoperable biometric systems a reality."
Sandusky is the 103rd port to begin enrollment since the program began in October 2007. Ultimately, established fixed enrollment centers will be in place at 147 ports along with mobile enrollment centers at dozens of other locations as needed. "TWIC is an important initiative to strengthen security and access control for key port facilities and vessels, and is part of an overall government initiative to bolster transportation security," said Coast Guard Captain Patrick Brennan, the Commander of Sector Detroit. "It is vital to our multifaceted risk-based approach to maritime domain awareness, ensuring the protection of facilities, our ports and the vessels that sail our waters."
Workers at the Port of Sandusky are able to pre-enroll for TWIC online at www.tsa.gov/twic or the Coast Guard's Homeport site, http://homeport.uscg.mil. Pre-enrolling speeds up the process by allowing workers to provide biographic information and schedule a time to complete the application process in person. This eliminates waiting at enrollment centers and reduces the time it takes to enroll each individual.
Gates says big changes in store for Internet in next decade
http://www.breitbart.com/article.php?id=D90G6KFO1&show_article=1
Microsoft Chairman Bill Gates said there will be a vast shift in Internet technology over the next decade as he met Tuesday with South Korean President Lee Myung-bak.
"We're approaching the second decade of (the) digital age," the software mogul and philanthropist told Lee at the start of their meeting at the presidential Blue House, according to a media pool report.
"The Internet has been operating now for 10 years," Gates said. "The second 10 years will be very different."
Microsoft Corp., the South Korean government and South Korean companies are investing $313 million in information technology for vehicles, games and education, according to a Blue House statement.
Microsoft and automakers Hyundai Motor Inc. and Kia Motors Corp. announced earlier Tuesday a deal to use Microsoft's in-car software, which allows people to control music and telephones with voice commands.
The company has a one-year exclusivity deal on the software with Ford Motor Co. in the U.S., but that expires in November. Fiat also has been selling cars with the software.
"We're doing some very interesting work on automobile software," Gates said after having dinner with Lee. "That's a really wide open area where some very exciting things will come out of."
Lee, a conservative former construction CEO, swept into office in February with a vow to boost economic growth through deregulation and increasing foreign investment.
In the Blue House statement, Gates was quoted as saying that new deals would boost South Korea's economic growth by as much as $6.9 billion over the next five years.
Gates, at a later event sponsored by South Korean television network SBS, talked about the future of software and human interaction in the next decade.
"We can expect that the variety and quality of software will accelerate in the years ahead," the Microsoft co-founder said.
Gates added that "natural interaction" between hardware and software was finally becoming possible, citing as an example speech commands to computers.
"The whole environment will be very, very different," he said.
Microsoft also said Tuesday that it will invest $280 million to build a research and development center in China's capital Beijing, and will double the number of its full-time research staff in China to 3,000 in three to five years.
Microsoft is winning the NAC war, expert says
Why Microsoft is doing it right, ACLs are better than VLANs and the dirty dark corner of NAC (management).
http://www.networkworld.com/chat/archive/2008/050608-nac-chat-joel-snyder.html
Security guru Joel Snyder from Opus One recently starred as the guest of a live Network World chat where he discussed the state of network access control. Snyder says that Microsoft is emerging as one of the clear winners of NAC, but that Microsoft's technology is a foundation from which to build, not an end-all. He also says that those who are anti-NAC simply don't understand the technology. He answered a slew of technical questions from attendees including why ACLs are better than VLANs, the dirty dark corner of NAC (management) and the how and why of 802.1X. What follows is a full transcript.
Moderator-Keith: Please welcome security guru Joel Snyder, a senior partner with consulting firm Opus One from Tucson, Ariz., and member of the Network World Lab Alliance. Today's chat will focus on the facts and fictions about NAC, answering questions about what NAC products can and cannot do, including integration with wireless, technology shortcomings, plug-ins and more.
Joel_Snyder: Keith, it's great to be here!
Moderator-Julie: While waiting for Joel to type up answers to the first questions rolling in, here's a pre-submitted question: You just got back from Interop Labs with a lot of NAC testing. What are the most interesting things you learned?
Joel_Snyder: Thanks for asking! I'll put in a pitch for the Interop Labs NAC resource Web site (http://www.opus1.com/nac/). That has a bunch of our white papers (about 13 of them), all of our device configurations, classes on NAC, and basically about 90 MB of stuff that we've gathered and learned about NAC. The really interesting thing we noticed is that things are finally beginning to converge. We ran a nice little graphic (click on the "Click to see" diagram) in NWW last week talking about the family trees, and the key is that people seem to be willing to let Microsoft take a leading role in NAC. So we really focused on that: what comes built-in with XP SP3 and Vista? And then how do you extend things if you don't like what's built-in? We definitely had other policy decision points besides MS NPS---Cisco, Avenda Systems, Juniper, and Radiator, plus FreeRADIUS sort-of. Even on the client side, there are interesting things. For example, you can add more system health agents/verifiers, or you can go for other supplicants, or you can do non-Windows or pre-XPSP3 operating systems, or you can worry about other devices, like cameras and VoIP phones and printers. What we ended up with was about a dozen demonstrations, all showing what you need for a complete NAC solution. And it really focused on "let's start with Microsoft and work out from there." Much more satisfying than trying to have three silos like we've done in the past that don't work together. [Editor's note: Also check out Network World's NAC Buyer's Guide which compares dozens of NAC products.]
Brian: I've been asked to investigate .1x for port-based authentication. I have reservations recommending this for production use because of the mixed clients on our 1,000-node LAN (Macs running 10.4 and 10.5, PCs with Windows 95 to Vista). I think support would turn into a nightmare, plus I don't know of anyone using .1x. What are your thoughts?
Joel_Snyder: I hear you. 802.1X is outstanding technology, but you do have to have client support. Macs 10.4/10.5 are no problem - it's all built-in. For Windows, though, you're going to be restricted to Win 2000 SP3 and later. Of course, the Juniper guys are going to say you should go with Odyssey, which has a unified experience and supports earlier Windows versions and is great stuff and I can vote for that as well. Support nightmare? Hard to say. I'm of the belief that once you work through the initial problems, you end up having lower support calls. It's going to depend on what your environment is. If you're talking an education market, that's one thing. If you're talking an enterprise, I think it's manageable.
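On the client side Snyder mentions, a wired 802.1X supplicant configuration is fairly small. As a rough sketch, here is what a wpa_supplicant setup for wired PEAP/MSCHAPv2 might look like; the identity, password, and interface are hypothetical, and exact options vary by wpa_supplicant version and authentication-server setup.

```ini
# /etc/wpa_supplicant.conf -- illustrative wired 802.1X setup
# (run roughly as: wpa_supplicant -D wired -i eth0 -c /etc/wpa_supplicant.conf)
ctrl_interface=/var/run/wpa_supplicant
ap_scan=0                          # wired port: no AP scanning
network={
    key_mgmt=IEEE8021X
    eap=PEAP
    identity="user@example.com"    # hypothetical credentials
    password="secret"
    phase2="auth=MSCHAPV2"
}
```

Windows 2000 SP3 and later, and Mac OS X 10.4/10.5, ship comparable built-in supplicants, which is why the client-support question usually comes down to the pre-2000 stragglers.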
fyatim: We have seen some consolidation in the NAC space. Can you provide an update on the NAC market and where it's heading?
Joel_Snyder: Towards Microsoft, for sure. The key is that the desktop is EVERYTHING and Microsoft is making the right noises about standards and openness and making things work in the big picture. So we have already seen Microsoft and the Trusted Computing Group (TCG) get together, and I think it's only a matter of time before we also see the other vendors like Cisco at least have a good accommodation of the Microsoft Network Access Protection (NAP) framework.
RalphSam2: I work for a large company. We have about 30K employees in 500 sites across North America. Management wants to see centralized NAC. All product evaluations are going badly. What is good for large site (more than 1,000 people) is not good for small sites (less than 10). What should we do?
Joel_Snyder: Well, boy, that's a softball. Of course, you should hire Opus One to help. But really, I think that you need to step back and figure out what it is that you care about MOST in your NAC deployment. Are you doing this for access control? For endpoint security? You have to narrow down what it is you want and then you can put together a solution that will work based on your requirements. I agree that there is no single universal answer, but I think that if designed correctly, you can do it. What we saw at Interop was the ability to move from VLANs (which definitely won't work at small sites) up to Access Control Lists (ACLs), which work and scale beautifully. If you haven't gone down that path, I'd suggest thinking in those terms. A lot of little guys are fixated on VLANs, which just don't scale.
shelly: Can you say more about why you think ACLs are better/more scalable than VLANs for network access control? It seems to me that ACLs can get very large if your network isn't easily summarizable. How do you choose between them?
Joel_Snyder: Good question and thanks! The deal with VLANs that I don't like is that we have already burned them in most networks. We're using them for other things, and making changes to the VLAN infrastructure is hard unless you have a green-field network, which no one does. However, with ACLs, you can push onto the EXISTING VLAN structure and not have to screw with it. This also solves the hand-waving problem of getting people to jump around VLANs as they go into and out of quarantine, which (as a Mac user) I really feel for. Very true that the ACLs can get ugly, but I am thinking that you aren't going for total control at the port level, but broad swaths of control. If you want LOTS of ACLs, then you need to go with specialized hardware: Consentry, Nevis, and I think that HP is talking that talk as well. I'm really bullish on ACLs now that Interop's Labs helped prove that they work. We're talking about anterior cruciate ligaments here, right?
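The "broad swaths of control" Snyder describes are typically expressed as a per-user or per-role ACL delivered at authentication time and applied on the existing VLAN. As a rough illustration in Cisco-style syntax (names, addresses, and exact commands are hypothetical and vary by platform), a quarantine policy might look like:

```
! Illustrative quarantine ACL applied to a port after authentication --
! the user stays on the existing VLAN; only reachability changes.
ip access-list extended QUARANTINE-ACL
 permit udp any any eq domain                ! allow DNS lookups
 permit tcp any host 10.0.0.25 eq 443        ! remediation server only
 deny   ip any any                           ! everything else blocked
```

This is the design choice Snyder is making: change what the port can reach rather than which VLAN it sits on, so the VLAN topology (and the client's IP address) never has to move.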
Tom2342: Since the NAP client from Microsoft alone doesn't offer anywhere near the amount of endpoint data that some other vendors' NAC clients offer, why would you want to bother with it at all?
Joel_Snyder: Dude. The NAP client is just a base. You don't just do everything that Microsoft says, right? They provide a great base and you build on top of that to meet your needs. If you're a small site, you stick with them. but if you have Symantec, then you layer their SEP11 on top of that using the NAP SHA/SHV. If you have McAfee, same deal. Sophos, same deal. We tested Avenda and Blue Ridge as well in the labs, all sitting on top of NAP. The reason you START with Microsoft is that they know more about their own O/S than anyone else, so that is going to maximize the ability to interoperate. And then you take your preferred end-point security partner and put it on top using the SHA/SHV model. It is totally clean and totally extensible.
Moderator-Julie: Pre-submitted question: TCG/TNC just announced IF-MAP. What's that all about and what do you think of it? [Editor's note: TCG's NAC scheme is called Trusted Network Connect (TNC).]
Joel_Snyder: IF-MAP is very cool. We were lucky because TCG gave us advance access under NDA and we were able to get a white paper out on it at the same instant that it was announced. Talk about a scoop! Anyway, IF-MAP is all about having a structured way to store, correlate, and retrieve identity, access control, and security posture information about users and devices on a network. The cool thing about IF-MAP is that it's not just for NAC, although that's a first step. It's a way to finally bring together a whole world of policy and status information that just has been totally proprietary or even un-doable in the past.
RandyJ: I am looking to implement NAC next year on our campus. We are a wireless campus with some wired. I have talked to a lot of different vendors. What are the top two companies you would recommend, and why?
Joel_Snyder: Well, it depends, which one is buying you lunch? Honestly, though, I can't answer that very easily without knowing exactly what you're trying to accomplish. The obvious answer is Bradford, because they understand and do education better than anyone else (in my testing, anyway). They are built around education issues, so that's going to be well suited. From there, it's hard to say. I'd look to see what other partners you have good relationships with and see if they can meet your needs. In other words, if you're an Enterasys shop, go talk to them. Foundry, etc.
Leo: Can you comment on the relationship between Microsoft and Cisco on NAC now and project it in the future? Truly cooperative and division of labor? Or collision ahead?
Joel_Snyder: Hard to say. There are a lot of personalities involved. I'd say that right now we've got two titans who are hard-pressed to cooperate trying to figure out a modus vivendi. Even if there is a lot of joy together, it is inevitable that Microsoft and Cisco will have different interests in the long run. I don't see a big collision, because Microsoft's primary interest is in the desktop and Cisco has no intention of competing there. Things like NPS might go by the wayside as Cisco readies new versions of their NAC management solution and completely re-architects ACS and the CCA stuff. What I personally see is that Cisco owns 74% of the switch market and Microsoft owns 95% (or more) of the desktop market and that's not going to change too much in the long run. So I would look to Cisco for leadership in the areas that they are strong: switching, wiring closets, etc., and Microsoft for leadership in the areas that they are absolutely top in: desktop. Having either cross into the other's territory seems like danger.
WillBean11: The title of the chat is 'fact and fiction,' so what are some of the 'fictions' surrounding NAC that we should be aware of?
Joel_Snyder: Oh, good question. What are the top myths about NAC? How about that it's all about end-point security? We have some luminaries on our own staff who seem confused about that. NAC is about ACCESS CONTROL and NETWORKs, and USER FOCUS. That's the biggest confusion. Another one: that a NAC product solves your needs. I haven't seen a network larger than 100 devices where a single vendor solution answered all problems. Let me see if I can think up more as we go along...
Moderator-Julie: I think you are referring to Richard Stiennon in his Stiennon on Security blog. He called it: "Don't even bother investing in Network Admission Control" where he did a big NAC attack. Got any response?
Ricky: What are your suggestions for handling non-Windows machines, or "non-OS" devices altogether - e.g. IP phones, cameras, medical devices, etc.?
Joel_Snyder: MAC auth bypass is the strongest approach for non-OS devices. You use that with a nice strong access control (i.e., phones can only act like phones because there's an ACL and a firewall keeping them in) and maybe back it up with a device profiler like a Great Bay box. For non-Windows, harder question. Big issue here is that you have to say what you think is important about your NAC deployment. Are you all about end-point security? If so, what does that mean for a Mac? Or if it's all about access control, then focus on 802.1X for those puppies. I wouldn't do MAC auth bypass for OSes that can do 802.1X or which have browsers. I'd do 802.1X or dump them in a captive portal if you want to abuse them a lot.
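The decision logic Joel outlines (try 802.1X first, fall back to MAC auth bypass with a constraining VLAN and ACL, quarantine unknowns) can be sketched like this; the device inventory, VLAN, and ACL names are all hypothetical:

```python
# Hypothetical device inventory: MAC -> device class (e.g., fed by a profiler)
KNOWN_DEVICES = {
    "00:1b:63:aa:bb:cc": "ip-phone",
    "00:0e:7f:11:22:33": "printer",
}

# Each device class gets only the access it needs (illustrative names)
CLASS_POLICY = {
    "ip-phone": {"vlan": "voice", "acl": "phones-only"},
    "printer":  {"vlan": "printers", "acl": "print-protocols-only"},
}

def admit(mac, supports_dot1x, credentials_ok=False):
    """Return the network placement for a connecting endpoint."""
    if supports_dot1x and credentials_ok:
        return {"vlan": "corp", "acl": "user-policy"}
    device_class = KNOWN_DEVICES.get(mac)      # MAC auth bypass lookup
    if device_class:
        return CLASS_POLICY[device_class]
    # Unknown device with no 802.1X: quarantine it
    return {"vlan": "guest", "acl": "captive-portal"}

print(admit("00:1b:63:aa:bb:cc", supports_dot1x=False))
```

The key design point is that the MAB path never grants full access: a known phone lands on the voice VLAN behind a phones-only ACL, and everything else lands in a quarantine/guest bucket.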
Moderator-Keith: Abuse the "end users," not the devices, right?
Joel_Snyder: Right. Make 'em hit a captive portal every time they sit down. That's abuse, in my book.
Mash: How do you make sure that MAC-based auth for IP phones and such doesn't become your hackers' favorite security hole in NAC networks? Just spoof the MAC with the MAC of a Cisco or Nortel IP phone, and you're in, right?
Joel_Snyder: Well, you better not be "in." Right? You have to be controlled when you're in, so if you are saying you're a phone, then you better be ACLed or VLANed or whatever so that you can only ACT like a phone. If you just dump the phone on the network with full access privileges then you're not getting the point of NAC, the ACCESS CONTROL part. Sorry for the caps. Anyway, yes, then the hacker can still wander around your VOIP network with whatever ports and protocols you've allowed her, but you deal with that by using IF-MAP (in the future) or something like a Great Bay box (today) and let behavior-based information modify your policy decision.
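The "it can only ACT like a phone" point can be illustrated with a toy profile check: even if an attacker spoofs a phone's MAC, traffic outside the phone's expected ports trips the policy. The port profile below is invented for illustration:

```python
# Ports a SIP phone legitimately uses (illustrative profile:
# SIP signaling plus a typical RTP media range)
PHONE_PROFILE = {5060, 5061} | set(range(16384, 32768))

def check_flows(observed_ports, profile=PHONE_PROFILE):
    """Return the observed ports that fall outside the device's
    expected profile; a non-empty result means the 'phone' is
    doing something phones don't do."""
    return sorted(set(observed_ports) - profile)

# A real phone stays inside the profile:
assert check_flows([5060, 16400]) == []
# A spoofer probing SSH and SMB from the "phone" does not:
print(check_flows([5060, 22, 445]))
```

This is essentially what a device profiler (or, in the future, IF-MAP-fed policy) contributes: behavior-based evidence that modifies the original admission decision.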
RonM20747: We're looking at implementing NAC within the next year. We've got users that come in via a VPN and use RDP to get to their desktops. What is the recommended way to handle that with NAC?
Joel_Snyder: It depends on the firewall. If you are firewalling down so that they ONLY have RDP then there's not a lot of issue involved with end-point health. Sure, someone could be screen scraping on the RDP on the client and you might not know about it, but that's a pretty unusual situation. I would say that if you have already gone Citrix-y or RDP and the desktop is really what matters, then just do a normal wired NAC on the "end" desktop (the one that they're RDPing to) and don't worry so much about the end user on the VPN. You're already doing authentication and firewall on them. What harm can they "really" do that's worth the effort of NAC? Alternatively: go with SSL VPN and use one of the end-point host checkers that are in all the SSL VPN products. That gives you user-focused access controls and a health check, which is the essence of NAC.
Mash: TNC seems to have been going in the right direction for quite a while. And they are way ahead of the IETF taskforce. Microsoft also seems to be very open to "the TNC way." But Cisco keeps saying their mantra, that "we work with NAC, where it belongs, in IETF", which seems to be just an excuse for selling proprietary for now. What's happening in the IETF taskforce? When can we expect to see some actual standards from them? Will they be adopted by the industry? And will it turn out to be TNC with an IETF standard name?
Joel_Snyder: I have to say that I'm very depressed about the whole IETF thing. I had a long argument about this with Jim Martin, an old IETF stalwart. The problem is that the IETF is focusing on what has already been done by TNC and is not breaking new ground in areas that TNC has not covered. So what we're ending up with is a re-thinking of the same protocols by essentially the same guys which seems to me to be a huge waste of time. I don't know about Cisco's thinking here, but we're ending up with the same darn protocols in IETF and TNC. But what we need is the IETF guys to do a big-picture thing, like they did with IPsec in RFC 2401. That would be cool. But I am not holding out hope. I'm just depressed about it. Sigh.
shelly: Are there any good network management tools that you recommend for monitoring and troubleshooting 802.1X-based NAC deployments?
Joel_Snyder: Talk about a dark and dusty corner of NAC. You have found one of the ugliest ones. Russ Rice was hitting me on that at NAC Day [at Interop], talking about troubleshooting as the last frontier. Short answer: no good answer. Long answer: look at tools like Splunk to get all your logs together and really searchable. I don't think that SIMs are the answer here because they're not about debugging, so you want some huge log aggregator that can do structured searches.
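The "huge log aggregator with structured searches" idea reduces to parsing authentication logs from several boxes into records and grouping by failure reason. The log format and field names below are invented; real RADIUS logs vary by vendor:

```python
from collections import Counter

# Invented log lines in a common key=value style, from two RADIUS servers
logs = [
    "radius1 result=reject mac=00:aa:bb:cc:dd:01 reason=bad-credentials",
    "radius1 result=reject mac=00:aa:bb:cc:dd:02 reason=eap-timeout",
    "radius2 result=reject mac=00:aa:bb:cc:dd:01 reason=bad-credentials",
    "radius2 result=accept mac=00:aa:bb:cc:dd:03 reason=-",
]

def parse(line):
    """Turn one key=value log line into a dict record."""
    host, *pairs = line.split()
    rec = dict(p.split("=", 1) for p in pairs)
    rec["host"] = host
    return rec

records = [parse(l) for l in logs]
failures = [r for r in records if r["result"] == "reject"]
by_reason = Counter(r["reason"] for r in failures)
print(by_reason.most_common())  # which failure mode dominates?
```

Once the logs are structured records instead of text, "which switch port keeps rejecting this MAC and why" becomes a one-line query, which is the debugging property a SIM's alert pipeline doesn't give you.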
Moderator-Julie: Pre-submitted question: Hi. We've looked at several NAC products. One in particular, Identity Driven Manager by HP, for the most part suits our needs. One problem, however, that we have come across is that enabling RADIUS authentication on our edge switches prevents clients from checking in with PXE servers. We use PXE on all our machines for on-the-fly rebuilds. While I realize this is a flaw within RADIUS authentication on a switch, I really want to know what other products could help us address this problem? Thanks, Mark.
Joel_Snyder: PXE is one of the dark corners of NAC, so I understand where you're coming from. The situation with PXE is that you have a device which wants to get on the network, but it has nothing but a MAC address. Also, it has a pretty tight DHCP timer and if you don't get it on the network fast, then you've got a problem and it'll fail over. There are two approaches you can take here. First is that you can use Guest VLAN and second is that you can use MAC Authentication Bypass.
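The race Joel alludes to is between the switch's 802.1X timeout sequence and the PXE ROM's short DHCP window: the port has to fall back to MAC auth bypass or the guest VLAN before PXE gives up. A back-of-envelope check, with all timer values invented (check your switch's and PXE firmware's actual defaults):

```python
def port_ready_time(dot1x_timeout_s, dot1x_retries, fallback_s):
    """Seconds before the switch gives up on 802.1X on a silent port
    and falls back to MAC auth bypass or the guest VLAN."""
    return dot1x_timeout_s * dot1x_retries + fallback_s

# Hypothetical defaults: 30s EAP timeout x 3 tries, then 5s for MAB
t = port_ready_time(30, 3, 5)
PXE_DHCP_WINDOW = 60  # invented: seconds before the PXE ROM fails over

print(t, "seconds until fallback;", "PXE fails" if t > PXE_DHCP_WINDOW else "PXE ok")
# Tuning the 802.1X timers down is usually part of making PXE and NAC coexist:
print(port_ready_time(5, 2, 5))
```

With default-ish timers the fallback arrives well after the PXE ROM has given up, which is why both of Joel's approaches are really about getting the port into a usable state fast.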
WillBean11: This may be unrelated to NAC, but I was wondering what Joel's thoughts were on IPv6 and the latest "we're doomed" chants going around. Is the sky falling on IPv4 or not?
Joel_Snyder: The sky is falling, indeed. I'm on some mailing lists out of ARIN where there's a lot of argument on that, but no one is disputing that IPv4 is running out of space. The question is where and what we're going to do about it when it happens. I predict rioting in the streets and widespread violence. That, plus a lot of IP address theft. Obviously, we're going to be able to put this off a long time, but honestly we are running out of addressable Internet space with all the devices going up nowadays, and NAT is not going to solve it forever. So you can either figure out how you're going to solve it a couple of years ahead of everyone else, or you can wait until your ISP suddenly says "No, no more addresses available," and then panic. I'm in favor of the panic approach, but there are those who want to plan ahead and be reasonable and well thought out.
WillBean11: So you don't buy the argument that we have plenty of addresses and it's just the ISPs that are hoarding them? (that's one theory I've heard)
Joel_Snyder: I don't buy it. I was helping a client with a crisis last weekend that could have been solved by another /29 of space (Nothing in the big picture) and the ISP just said, "No deal. You can't have it." They got a new ISP with more space, but they're running out too. Yes, there is some slop and wastage and unused addresses (look at Interop's 45/8!) but there is an end. 32-bits is not enough for a world with 6+ billion people and 12+ billion cell phones and Wiis and whatever.
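The arithmetic behind this is easy to check: a /29 holds only eight addresses, and the entire 32-bit IPv4 space does not even provide one address per person:

```python
total = 2 ** 32             # all possible IPv4 addresses
slash29 = 2 ** (32 - 29)    # addresses in a /29 block
people = 6_000_000_000      # rough world population, circa 2008

print(slash29)              # 8 addresses, 6 usable after network/broadcast
print(total)                # about 4.3 billion in all
print(total / people)       # less than one address per person
```

And that's before subtracting reserved ranges and allocation slop, and before counting the phones, consoles, and embedded devices Joel mentions, each wanting an address of its own.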
Moderator-Julie: Pre-submitted question: Hello Joel, it's Eric from La Reunion. Have you tested the HP NAC/IDM solution? Best regards, Eric
Joel_Snyder: I have not had an opportunity to test the IDM stuff. I have had a couple of briefings on it, and certainly the PowerPoint is impressive. The whole idea of Identity Driven Manager (IDM) is that you can add policies to your NAP NPS server. That, to me, is totally kick-ass. NPS is a fine little framework, but as soon as you try and do anything exciting with it in terms of access controls, then you run into a brick wall. NPS is more focused on the remediation/end-point compliance lifecycle.
SC: We are looking at SSL-VPN for remote access 1st and down the road NAC for the inside network. Should we go single vendor or wait to see what IF-MAP brings in post-admission control?
Joel_Snyder: I'd solve the pain point first with a good solution (SSL VPN) and then re-surface in 6 months to see what's happening with IF-MAP. Too early to put eggs in IF-MAP basket this month. I have hopes, but hopes are not products.
NAC%20Dog: How soon do you think the trusted platform modules embedded in laptops and desktops will become a significant part of NAC deployments?
Joel_Snyder: Could be a long time. If Microsoft includes it in the Windows Security Center SHA, then we're in better shape. But we have a LOT of TPM chips that aren't being used in a LOT of desktops, laptops, and servers. I think that a lot of folks just don't get TPM, which is OK, but there also seem to be a lot of product managers in vendors who ALSO don't get it. Try putting pressure. Make 'em figure it out.
Mash: Reply to the MAC auth answer: Fine answer, thanks! I didn't see it in the big picture with IF-MAP, so that's fine!
Joel_Snyder: Even with phones, I think if you've got a solid firewall between your VoIP network and the rest of the world, you're probably going to be safe even without.
Moderator-Julie: Pre-submitted question: Do you have to disable NAC on a switch port with a non-802.1X device attached, e.g. a printer? If the answer is yes, what will happen if someone removes the printer and then plugs in a PC?
Joel_Snyder: Absolutely not. You have to have a way to handle corner cases, like printers. This calls for a bunch of switch configuration that is specifically designed to make this work. What most people have done is use MAC authentication bypass (available in almost all modern switches) which does a MAC auth of the device if it doesn't talk 802.1X. By the way, a lot of printers nowadays, phones, and even video cameras will talk 802.1X. But if it doesn't, then you do a MAC auth and put the guy on the printer VLAN or apply the printer ACL.
Moderator-Julie: Pre-submitted question: We have a full Cisco switched/routed/firewalled/VoIP network and are warming to Cisco NAC as an infrastructure-based NAC deployment: a) Will NAC work from behind a Cisco phone/unmanaged switch? b) If "a)" is possible, what happens if some devices on an unmanaged switch are 802.1X and some are not? c) How does NAC work with wireless (i.e. devices like phones/PCs moving from one WAP to another)?
Joel_Snyder: Whoa, dude. What is this, get-it-all-in-one-question week? Let me give you the fast answers, and you can write back in if you need more detail. (a) yes, but you may have restrictions on what ACL and VLAN you can do. See David Newman's 10Gig Switch test for a specific discussion of the restrictions. (b) It depends on what you want to do with them. If you want to drop them on a guest VLAN, no problem, although now you're crossing the streams and that sounds like a bad idea. (Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light.) (c) 802.1X is 802.1X. That's the beauty of it all. Go between wired, 802.11, 802.16, whatever. You will have a re-auth in some wireless gear, which is perhaps bad. This is a good argument for an integrated wireless management system (in your case, probably the Airespace stuff, but Aruba and Aerohive would do the same).
Sam: Will NAC work from behind an unmanaged switch?
Joel_Snyder: Define "work." Obviously you don't have total control the way you do with a switch, but you can do lots of NAC things. I think in the last answer I mentioned David Newman's article on switches which talks about some of the limitations. I'll also point you to http://www.opus1.com/nac/teamwhitepapers/2008-09SwitchFeatures.pdf which is a white paper on switch features that are relevant to this question.
shelly: Do you have a favorite EAP type that you recommend for people trying to deploy 802.1X?
Joel_Snyder: You want me to say "EAP FAST" don't you? Honestly, though, it doesn't much matter. You want an EAP method that will let you send your posture information through. Frankly, the choice of EAP method is the last thing you do: you start with your client and Policy Decision Point and figure out what method they support in common. I suspect that EAP methods are fast becoming a non-knowledge-area with 802.1X nowadays and people will just use whatever works.
Mash: In Microsoft NAP, there's a "test mode," where you can get reports on devices that would have been denied access to the network if policies were enforced. Wouldn't that be a good way to start, and that way see what you have to find workarounds/solutions for before deploying full-scale? (Whether or not you end up with Microsoft NAP or some other vendor's NAC?)
Joel_Snyder: Totally. Who could disagree with that? Only an idiot would turn on all NAC features at the same moment. You start with non-enforcement and maybe even non-authentication. Slide things in slowly and pragmatically and reasonably. ALL NAC vendors should have a test mode, just like all IPSes should. Anyone that doesn't is defective in my book. And your strategy is totally the way to go. How many times can I put totally in an answer? Totally, Dude (or Dudette).
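The audit-first rollout both of them endorse amounts to a policy engine with an enforcement flag: in test mode you log the would-be denials instead of issuing them. The posture checks and device records below are invented for illustration:

```python
def evaluate(device, enforce=False):
    """Check endpoint posture; in test mode, report what would be denied."""
    failures = []
    if not device.get("av_current"):
        failures.append("antivirus out of date")
    if not device.get("patched"):
        failures.append("missing OS patches")

    compliant = not failures
    if enforce:
        return "allow" if compliant else "deny"
    # Test mode: always allow, but log the would-be denial
    if not compliant:
        print(f"WOULD DENY {device['mac']}: {', '.join(failures)}")
    return "allow"

fleet = [
    {"mac": "00:aa:01", "av_current": True,  "patched": True},
    {"mac": "00:aa:02", "av_current": False, "patched": True},
]
results = [evaluate(d, enforce=False) for d in fleet]  # survey before enforcing
```

Running the survey first tells you how many machines (and which corner cases, like PXE clients and printers) would get locked out before you flip `enforce=True`.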
Moderator-Julie: Pre-submitted question: How secure is Microsoft's new networking "mesh" technology?
Joel_Snyder: I am not entirely sure I know what you're thinking of here, so I'm going to assume you're talking about wireless mesh, which is pretty interesting stuff from an access point of view - but a total nightmare from a security point of view. My general thinking is that if you use any kind of mesh wireless where 'foreign' access points may participate, you should be doing end-to-end encryption of any traffic that you care about, either with a VPN client or application layer encryption. But I'm not sure what this has to do with NAC.
Joel_Snyder: As long as we have a lull, I'll put in one more pitch for the Interop data at http://www.opus1.com/nac/. This is the best NAC resource we could put together for implementers and technical people, and there's no marketing fluff attached. The white papers are my favorite part … although not as funny as this chat. And no cheese references.
Moderator-Keith: That cheese reference wasn't very gouda...
Joel_Snyder: At least it was brie-f.
Moderator-Julie: Final pre-submitted question: If we were to use an end-point NAC system, e.g. Sophos, how easy would it be for a malware coder to write software to manipulate the NAC posture information sent in the pre- and post-admission stages so as to fool the NAC system into thinking that the PC is virus free when it isn't?
Joel_Snyder: Well, first of all, let's disabuse the notion that having a virus checker turned on and current says anything about not having a virus. You can have both (up-to-date A/V and an infection) and that's normal behavior. Of course, the goal is reducing your RISK of having a virus by making sure that the A/V is up to date. But don't think that just because you can tell whether the user has A/V turned on and up-to-date that it says anything about whether or not they're infected.
Mash: Microsoft is trying to sell their ConfigMgr with the argument, that auto-remediation is tightly integrated with Microsoft NAP. Are they just "hyping"?
Joel_Snyder: I think that lots of vendors are able to do remediation as well as Microsoft can. I wouldn't let the Microsoft guys say that they are the only ones who can do remediation. Lots of end-point security vendors can do that, and maybe even better than Microsoft. I'm actually not much of a desktop guy so I don't know how much better or worse, but I bet "at least as good as."
Moderator-Keith: With all of our questions answered, let me take this time to thank Joel again for joining us, and for the awesome questions asked and answered. Joel, u are da man.
NEC showcases tough ultraportable for business
....its data security features are much stronger with a fingerprint biometric sensor and TPM security chip combo.
http://crave.cnet.com/8301-1_105-9937025-1.html
NAC 2.0
After Interop...
http://weblog.infoworld.com/smbit/archives/2008/05/after_interop.html
So I'm back in the office and thinking about what I saw and heard at Interop. What do I think you should be thinking about?
3. NAC 2.0
I know why vendor companies are big on proprietary technologies, but good standards (note the word "good" in that last phrase -- it's critical) can make life so much easier for IT professionals trying to build a working system. NAC 2.0, from the Trusted Computing Group, promises to make life dramatically better for IT professionals who want to create a security system, as opposed to those who are stuck just trying to make an unrelated collection of components work in the same room. Several vendors have already begun building NAC 2.0 into their products, and I strongly suspect that the open source projects will be getting up to speed with this in the next couple of quarters. You may be a small company IT professional, but you can have some pull with vendors -- so use that pull to start asking when NAC 2.0 is going to show up in their software. You'll really like the results when it appears.
Awk, also from that article:
Chip manufacturers are at work here as well. Consider the Trusted Platform Module. Think of a TPM chip as a hardware-based lockbox where users can store credentials and certificates, manage keys, and encrypt e-mail and files. The VDI hypervisor can make use of this security mechanism, making calls to hardware instead of storing important information in software.
Net vendors demo improved security protocol
http://www.eetimes.com/news/latest/showArticle.jhtml;jsessionid=CIBOZWWY3QGMYQSNDLPSKHSCJUNN2JVN?articleID=207501479&printable=true&printable=true
Rick Merritt
(05/05/2008 2:36 PM EDT)
URL: http://www.eetimes.com/showArticle.jhtml?articleID=207501479
SAN JOSE, Calif. — A handful of vendors have demonstrated a technique to help companies more easily secure a rising number of Internet Protocol devices accessing their private business networks.
ArcSight, Aruba, Infoblox, Lumeta Networks and Juniper have demonstrated a new protocol to link to a common security database. The protocol, called IF-MAP, is at the core of the Network Access Control 2.0 standard just published by the Trusted Computing Group, a broad ad hoc industry organization devoted to security.
The new protocol defines a standard interface to a common shared database of who is on a network and what each node is doing. It aims to ease the job of providing integrated security for corporate nets in the face of a rising number of automated clients including RFID systems. It is an upgrade of the initial NAC standard first adopted in 2005.
"NAC 1.0 is key in controlling who gets on the network, but the problem is there are many new kinds of nodes like inventory control devices and robots, and they all have an IP address and so users need to control them," said Steve Hanna, a distinguished engineer at Juniper Networks who co-chairs the Trusted Network Connect committee that developed the protocol.
IF-MAP, which stands for Interface to Metadata Access Point, was officially published April 28. Companies first demonstrated the technology at Interop in Las Vegas last week.
Vendors are free to implement the security database as they see fit as long as they support the common access protocol. The new interface does not require any changes in hardware.
"That's how we were able to get so many companies to create a demo using it in a short amount of time," said Hanna.
The Trusted Computing Group is working with the Internet Engineering Task Force to harmonize their currently separate standards for secure network access.
Wrapping up Interop
http://www.news.com/8301-10784_3-9936049-7.html
Posted by Jon Oltsik
My networking guru colleague Bob Laliberte and I wrapped up our week in Vegas at Interop, grabbing the last flight to Manchester, N.H., on Thursday evening. A few final thoughts:
1. First of all, a mea culpa to the hospitable folks running the Interop show. In a previous blog, I said that attendance was down this year. This may be true in relation to the boom-day Interops at the Las Vegas Convention Center, but 2008 attendance was actually up from 2007. Additionally, there were 170 new exhibiting companies this year, a 25 percent increase. Pretty impressive results in a recession where major companies like AT&T have imposed bans on employee travel.
2. Vendors I spoke with were crowing about end user traffic and lead generation. Large users need networking equipment, security systems, and help.
3. I am impressed with a new Trusted Computing Group standard called IF-MAP. In simple terms, IF-MAP defines a set of protocols that enable security, networking, and other IT systems to share information about traffic patterns, system status, and overall behavior. By sharing this information, networks should be able to detect and react to security incidents or traffic spikes. Good effort; let's hope that leading networking and security vendors join the party.
4. It was very telling to see HP with a large booth in prime Interop real estate near the show floor entrance. The HP ProCurve networking division has always been the company's best kept secret. Looks like the cat is out of the bag now--HP could be a candidate to challenge Cisco's enterprise dominance in the next few years.
5. I walked by e-mail security vendor Barracuda Networks' booth at RSA and Interop. Each time, Barracuda had a large truck covered with Barracuda ads parked right in the center of its booth. Two things trouble me about this. First of all, since vendors pay for booth space by the square foot, why pay exorbitant fees for a parking space? Wouldn't posters with the same ads be more efficient? Second, if I owned a company named Barracuda Networks and wanted to use a vehicle to represent my firm, I would use a Plymouth Barracuda (circa 1971 or so) instead of a large van. Maybe it's just me.
The network continues to evolve rapidly, so this is no time to wallow in the economic doldrums. Users continue to buy, vendors continue to sell, and Interop continues to grow.
Jon Oltsik is a senior analyst at the Enterprise Strategy Group.
Hardware Trends: Faster. Smaller. Cooler.
http://rcpmag.com/features/article.aspx?editorialsid=2461
Every type of hardware -- from chips to PCs to wireless LANs -- is evolving to support new enterprise demands for mobilization, virtualization and conservation.
Small, mobile, virtual and green: Those are the key words to keep in mind when making hardware decisions for the future.
Every new iteration of chip, desktop, server, storage system and wireless router is not only getting smaller, but is evolving to better support your customers as they move to the latest mobile, virtualization and green technologies. Here are some of the top trends to keep an eye on as you look to support your customers' future initiatives:
3. Storage Is Getting Smarter
.......And as more enterprises look to mobilize their work forces, encryption becomes another key driver in storage. Look for storage systems that do full disk encryption, as well as encryption on the server and in transit. The kicker is that with virtualization, CDP and encryption becoming more readily available, cost-effective and reliable, your customers can now expect to store data more efficiently even as they store more.
Interop Feature Articles
http://www.tmcnet.com/tmcnet/interop/articles/26737-wave-systems-demos-hardware-based-network-endpoint-validation.htm
May 01, 2008
Wave Systems Demos Hardware-based Network Endpoint Validation at Interop
By Raju Shanbhag, TMCnet Contributing Editor
In an effort to limit data access by using proof-of-encryption for Seagate FDE hard drives, and to extend Juniper Networks’ Unified Access Control (UAC) authentication, Wave Systems is displaying its Network Access Control (NAC) solution this week at the Interop show in Las Vegas.
This solution verifies the presence and state of full disk encryption hard drives as a prerequisite for network access. The result is assurance that only computers that comply fully with the company’s security policies will be allowed on the network.
The joint solution offers robust platform authentication and strong protection against software attacks that create “lying endpoints.” Wave's EMBASSY Endpoint Enforce ensures that the endpoint integrity data collected is not compromised or otherwise altered as it measures, validates and reports on the integrity of the Juniper UAC client components.
By making use of the Trusted Platform Module (TPM) security chip to measure, sign and store endpoint integrity metrics, Wave’s software extends the security of commercially available NAC solutions.
“We believe that the ability to verify the health and data protection capabilities of PCs before granting access to networks and confidential data is a critical step for protecting enterprises,” said Brian Berger, executive vice president of marketing and sales at Wave Systems. “We look forward to providing attendees at Interop 2008 with multiple demonstrations of how Wave’s hardware-based security solutions can make this process a reality for their organizations.”
In this demonstration, Wave will also show how standards-based, client PC security hardware can ensure the integrity of the machines on a corporate network, enhancing the capabilities of traditional NAC solutions.
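The measure, sign, and store pattern the article attributes to the TPM resembles the chip's PCR "extend" operation: each measurement is hashed into a running register, so the final value commits to the entire sequence, which is what lets a verifier catch a "lying endpoint." A toy version (SHA-1, as in TPM 1.2; the measured components are invented):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measurement))."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = bytes(20)  # PCRs start at all zeros
for component in [b"bios", b"bootloader", b"nac-agent"]:
    pcr = extend(pcr, component)

# Any change to, or reordering of, the measured components yields a
# different final value, so a verifier comparing it against a known-good
# value can detect a tampered endpoint even if the endpoint's own
# software is lying about its health.
print(pcr.hex())
```

A real deployment would additionally have the TPM sign the PCR quote with a key that never leaves the chip, which is the hardware root of trust the Wave/Juniper demo builds on.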
In Search of Trust
Microsoft's end-to-end trust initiative is long on vision, but short on developer details.
http://reddevnews.com/features/print.aspx?editorialsid=2470
by John K. Waters
May 2008
Senior Editor Kathleen Richards contributed to this report.
When Microsoft's Chief Research and Strategy Officer Craig Mundie addressed the annual 2008 RSA Security Conference in San Francisco last month, he took Microsoft back over some well-trod ground, in a very big way.
Mundie used the high-profile confab to unveil Microsoft's "end-to-end trust vision," which takes aim at the growing array of threats and vulnerabilities that plague every connected device, service and piece of software. Mundie argued that these threats target everything from the operating system running the machine to the user punching the keys, and that the industry must enable what he called a "trusted stack." This stack would span the gamut of hardware, OS, application and service layers.
The proposal has its roots in an earlier Microsoft initiative, the Trustworthy Computing effort launched in January 2002. That push is widely credited with improving Microsoft's abysmal software security record and helping make products like SQL Server and Windows Server appropriate for high-stakes enterprise deployments.
"Having gotten -- I'll call it the core stuff -- in place, we now look at the next requirements being sort of a trusted stack of software," Mundie said in his keynote.
Critics contend that the end-to-end trust effort is too ambitious for Microsoft to execute, and is both heavy on platitudes and short on deliverables. Many question how Microsoft will equip developers to achieve its goals.
"It sounds nice on paper, but I don't think even Microsoft can pull that off," argues Gary McGraw, CTO of Cigital Inc., a Dulles, Va.-based provider of software quality and security solutions, and author of numerous books including "Software Security: Building Security In" (Addison-Wesley Professional, 2006).
"If we really want to have trust from end-to-end, that means the end will have to be owned by the people who want the trust," McGraw says of Mundie's pitch. "So who's the trust for? The user? Microsoft? Intel? We're pretty quickly going to be getting into issues of individual computational liberty versus some amount of security goodness."
The call for building an ecosystem based on a trusted infrastructure comes as some at RSA warn of increasing threats to the security and reliability of government and commercial computer systems.
"The potential consequences of a cyber attack are very real and every bit as concerning as the potential of a physical attack on the order of what we saw on Sept. 11," said Homeland Security Secretary Michael Chertoff, who also addressed the RSA conference. "Managing the risk of a cyber attack is not quite the same as managing the risk to our airline system or our transit systems or our borders," he continued.
Developer Questions
McGraw praises Microsoft's effort to address evolving threats that target the digital economy, but he says any effort Microsoft hopes to lead will have to directly target and engage developers.
"By and large, developers want to do the right thing," he says. "If you tell them what the right thing is, they'll more than likely do it. Microsoft has done a pretty decent job of telling them, and I think that's more important than focusing attention on this end-to-end stuff."
McGraw points out that the application layer has emerged as the area most vulnerable to attack. He says that in the Web-facing world, attackers often gain access by leveraging the functionality of an application, rather than defeating some security mechanism.
Analyst Neil Macehiter of U.K.-based Macehiter Ward-Dutton cites this dynamic application environment in applauding Microsoft's vision.
"Developers should be focusing on the business logic, not the security implementation," Macehiter says. "They should be in a position where they can declare what they need and a set of identity and security services delivers it for them -- [for example,] 'I want this request to be authenticated using a digital certificate' -- rather than implementing low-level security code," Macehiter says.
For .NET-aligned shops, the evolution and ubiquity of Windows Communication Foundation (WCF) and Windows CardSpace may prove to be pivotal in making that business logic more secure.
Microsoft's latest effort toward that end centers around plans to integrate the U-Prove technology of Credentica Inc., a company it acquired in March, into WCF and CardSpace. Ted Ritter, analyst at the Nemertes Research Group Inc., says that successful integration of U-Prove into WCF could bolster identity management within the framework.
U-Prove is an encryption and authentication system designed to allow users to conduct secure digital transactions while revealing as little about themselves as possible -- what Credentica calls "minimal disclosure."
Tooling Trials
If Microsoft expects developers to produce "trusted" applications, it needs to provide them with the tools to do it.
"The phrasing 'trusted applications' leads you to believe that there will be continued investments in the Visual Studio product line to help developers to write more secure code," says Gartner Inc. analyst Neil MacDonald. "There better be."
Steve Lipner, senior director of security engineering strategy in Microsoft's Trustworthy Computing Group, says Redmond is investing in security underpinnings for the .NET Framework. "Microsoft has invested in making .NET a platform that offers great support to help developers efficiently create trustworthy applications," Lipner says in an e-mail. "As additional industry standards and technologies come together as needed to realize the end-to-end trust vision, Microsoft will continue to ensure that .NET application developers can effectively support [that vision]."
Lipner points developers to Microsoft's Security Development Lifecycle and the published Privacy Guidelines for Developing Software Products and Services.
On the tools front, Microsoft plans to integrate a new set of testing tools into the next release of Visual Studio Team System (VSTS) -- code-named "Rosario" -- according to Stephanie Saad, a group manager for VSTS at Microsoft. The company has yet to commit to the types of testing technologies it plans to add to the VSTS toolbox, but if end-to-end trust is the company's goal, MacDonald says it must go beyond operational, stress and performance testing to include security testing capabilities "right up there as a peer."
"Not only are the tools for performing security testing different," he says, "but the mindset you must have to operate those tools is different, too. When you perform security tests, you're trying to get the application to do things it wasn't designed to do, and to break in unexpected ways. There's a real difference in the tools and the approach."
Developers need more than just tooling -- they need visibility into the dev stack, says Howard A. Schmidt. The president of R&H Security Consulting LLC, Schmidt served as Microsoft's first chief security officer and was the founder of the Trustworthy Computing initiative in 2001. He says the company's Feb. 21 interoperability pledge, which has produced tens of thousands of pages of published documentation on Microsoft APIs, protocols and interfaces, is a major step forward in the effort.
"When you start looking at one of the complaints that people had over the years, [it was] the inability to write security-related APIs because they didn't know what it was going to do with the other [components]," Schmidt says. "I think the classic example we see is when we're rolling out new update patches from whatever vendor it may be. The concern is always: Is it going to break some security that you've got already built into something?"
React and Recruit
Reaction to Mundie's presentation among RSA conference goers was mixed, but enthusiasm was in short supply. One attendee, an independent software developer who asked not to be identified, summed up a prevailing sentiment: "Wasn't that the keynote from 1985? There was nothing new here."
But industry analyst Rob Enderle says that that attitude, which he also observed, misses the point. "This was a call for some help," Enderle suggests. "Microsoft was largely trying to point out the problem and argue that there needed to be a solution, while giving the solution some boundaries. Because asking for help is atypical of Microsoft, I don't think a lot of folks got that."
Macehiter agrees: "The company is setting out with this ambitious strategy to join up the historically fragmented approach to security and identity management," he says. "This is not for some technology purist reason, but because many of the challenges organizations face today from an IT and business perspective -- everything from service-orientation to inter-enterprise collaboration and compliance -- stretch and break current approaches to security ... There are few vendors out there with the breadth of capability to actually address this sort of challenge. Who else has articulated such a vision?"
That's a question Microsoft itself left open, says MacDonald, when it failed to bring anyone else onstage for the Mundie keynote.
"They would have been much better served if they had had other people up on stage with them," MacDonald suggests, "even potential enemies like Google and Amazon. They needed other organizations standing up and saying, 'Yes, end-to-end trust is needed and we're going to get past our competitive issues and work together in the better interest of consumers and the Internet as a whole.' When you have only Microsoft people stand up and talk about it, the message loses some of its credibility. This can't be a Microsoft-only vision."
In the end, end-to-end trust might not be the right term for what Microsoft is really talking about here, observes MacDonald.
"I give them credit for calling much-needed attention to the problem of trust on the Internet," he says, "but Microsoft is saying that the way you do this is with trusted platforms. Focusing on the platform makes this message too Microsoft-centric. I think it also ignores that what we want to achieve at the end of the day is trusted transactions, interactions and relationships. That's the big picture, but they seem to be focusing on the parts."
Maybe what Microsoft needs to do, says McGraw, is focus more closely on developers. "Developers like to be able to write and run all over the place," he says, "so this end-to-end trust thing would present them with constraints they'd have to learn to deal with. And I'd expect Microsoft to provide the support and tools they needed to do that. Whether or not developers should be constrained is a topic worthy of debate."
The Road to Trust
When Microsoft Chief Research and Strategy Officer Craig Mundie presented the end-to-end trust vision in his RSA Security Conference keynote, he made a point to link it directly to Microsoft's Trustworthy Computing initiative, which was publicly launched in January 2002. It's clear that this effort will be at a much greater scale than the 2002 program, which is widely regarded as a success.
"Today, I think we're in a transitional situation -- at least at Microsoft -- where we're focused on moving beyond what we did in our first generation of trust," Mundie said. "You can't just look at any one piece. You can't say, 'OK, the operating system is pretty hardened; the applications may or may not be.' We really need to stitch these things together in some complete way."
Microsoft's white paper, "Creating a More Trusted Internet," written by Scott Charney, vice president of Microsoft's Trustworthy Computing Group, lists three key elements to the end-to-end trust model:
A "trusted stack" in which each stratum can be authenticated and declared trustworthy -- from the hardware all the way up to the application layer.
The technology components required for managing identity claims, authentication, authorization policy, access controls and auditing. Microsoft calls this combo "I+4A."
An alignment of technological, social, economic and political forces that enable what Mundie calls "real progress."
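The "I+4A" combination above can be pictured as a pipeline: an identity claim is authenticated, checked against authorization policy, gated by access control, and every decision is audited. The following minimal sketch illustrates that flow under assumed, hypothetical names (the registry, policy table, and string-token credentials are not part of Microsoft's design):

```python
# Illustrative I+4A flow: identity claim -> authentication -> authorization
# policy -> access control -> audit. All names here are hypothetical.

AUDIT_LOG = []

TRUSTED_CLAIMS = {"alice": "cert:alice"}          # identity -> expected credential
POLICY = {"alice": {"payroll-db": {"read"}}}      # identity -> resource -> actions

def authenticate(identity, credential):
    """Authentication: does the presented credential back the identity claim?"""
    return TRUSTED_CLAIMS.get(identity) == credential

def authorize(identity, resource, action):
    """Authorization policy plus access control: is this action permitted?"""
    return action in POLICY.get(identity, {}).get(resource, set())

def request_access(identity, credential, resource, action):
    ok = authenticate(identity, credential) and authorize(identity, resource, action)
    AUDIT_LOG.append((identity, resource, action, "granted" if ok else "denied"))
    return ok

print(request_access("alice", "cert:alice", "payroll-db", "read"))   # True
print(request_access("alice", "cert:alice", "payroll-db", "write"))  # False
```

The point of the model is that every request passes through all five controls, and the audit trail records denials as well as grants.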
"Part of the problem," Charney writes, "is that the security solutions employed to date are primarily defensive technical measures that, while effective in mitigating particular avenues of attack, do not address an adversary who is adaptive and creative and will rapidly shift tactics. Thus, for example, hardening of the operating system caused attackers to move 'up the stack' and attack applications, as well as refine social engineering techniques that technology today is ill-equipped to help prevent."
Trustworthy Beginnings
Microsoft's Trustworthy Computing initiative is credited with producing quantifiable improvements in software quality. The Security Development Lifecycle and other best practices have served to drive down the frequency and scope of exploits against Microsoft software.
Now Microsoft must find a way to extend that rigor beyond the Redmond stack. During the keynote, Mundie said that the trusted stack must be able to know which apps and services are "certified or attested relative to the practices that have been brought to bear on their construction, just like we do today for the operating system."
But Mundie insisted that though Microsoft may be the initial driver behind the end-to-end trust model, this is anything but a solo act.
"We can't do this by ourselves," he said. "Even if we did it just for our products, that would be fine, but it wouldn't work in the world that you work in every single day, and we need to get ahead of the power curve in thinking about how we bring these things together, what protocols and formats are going to be required to ensure interoperability, and what regulatory environment we want to wrap around that and how we deal with that on an international basis. So, I guess the call to action today is: Get good at operating what you have, and help us think about going to the future."
Defining the memory's role in a secured environment
http://www.eetasia.com/ART_8800520412_499486_NT_27498da3.HTM
Today's mobile phones are used for a myriad of new applications that involve storing sensitive data and providing such secure services as mobile payments. With phones storing more critical information than ever before, it is increasingly important to keep them safe from rogue software that can steal or abuse credit card numbers or encryption keys associated with valuable digital content.
Mobile phones require a trusted execution environment (EE) to guarantee that sensitive data is stored and processed without abuse. A trusted EE is a computing environment where execution takes place as expected. The Trusted Computing Group (TCG) uses the notion of behavioral reputation when it refers to "trusted computing" in its documents. Trusted behavior is an essential element of security since it allows one to reason about the behavior of an EE with confidence, which in turn allows one to analyze the security aspects of the environment. Having a complete understanding of how to create and maintain a trusted EE will help make mobile phone applications like mobile payment more secure. Once customers, banks and businesses can fully trust that these applications are protected, adoption will increase.
In the book Security for Mobility, Chris J. Mitchell refers to the following as the main security services related to mobile computing: authentication, data integrity, data confidentiality and non-repudiation. This paper shows how secure memory plays a critical role in offering these services as part of a trusted EE, including a rich access control mechanism that supports multiple stakeholders.
Trusted environments
An EE is a collection of hardware and software components that defines a computing configuration. An EE can be a simple CPU with memory, or it could be a Java virtual machine running on top of an OS managing a processor and several peripherals. As noted earlier, a trusted EE is one in which execution takes place as expected.
It is clear that behavioral reputation is required to provide secure services. The approach taken by TCG and others to assess behavioral reputation is to define a secure boot process that verifies that a phone boots in a "trusted state." This trusted state is attained by checking the integrity of the code (OS and others) to be executed on the phone.
However, secure boot alone is not enough to provide a trusted EE, as the system may be attacked by rogue software after a secure boot. There are security holes in any large OS that rogue software can exploit. A runtime integrity check is recommended to confirm the integrity of the code. These checks can take place periodically or before critical events in the system. However, runtime integrity checks can only detect attacks after they have taken place. This can reduce the damage, but it does not provide a trusted EE in the presence of rogue software.
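A runtime integrity check of the kind described above amounts to hashing a code region and comparing the digest against a reference value recorded at secure boot. A minimal sketch, purely illustrative (a real checker must run below the OS so that it cannot itself be patched by the rogue software it is looking for):

```python
# Sketch of a runtime integrity check: hash the code region at boot,
# then re-verify it periodically or before critical operations.
import hashlib

def measure(code: bytes) -> str:
    """Compute a digest ("measurement") of a code region."""
    return hashlib.sha256(code).hexdigest()

firmware = b"\x90\x90\xC3"          # stand-in for a code region in memory
reference = measure(firmware)       # reference value recorded at secure boot

def integrity_ok(code: bytes) -> bool:
    """Re-measure and compare against the boot-time reference."""
    return measure(code) == reference

print(integrity_ok(firmware))              # True: code is unmodified
print(integrity_ok(firmware + b"\xEB"))    # False: tampering detected
```

As the article notes, this only detects tampering after the fact; by the time the second check fails, the modified code may already have run.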
Flash memory-based security
Many of the attacks on PCs and mobile phones can be traced to the attacker modifying data/code in the non-volatile memory. Flash memory-based security safeguards the memory against such attacks, preventing unauthorized modification to the flash. Mobile phones that rely on enhanced security in the baseband processor alone cannot prevent modification to the flash; they can only detect modifications as part of an integrity check. This detection may come too late in certain situations.
The TCG created the notion of a trusted platform module (TPM), which, when integrated into a PC, provides improved hardware-based security in numerous applications. A TPM is a microcontroller that stores keys, passwords and digital certificates and is typically affixed to the motherboard of a PC. The Mobile Phone Working Group of the TCG extended this notion to the EE of a mobile device in its mobile trusted module (MTM) standard. Unlike a TPM or MTM, flash memory-based security does not just detect a failure in integrity, but ensures that integrity is preserved under a reasonable threat model. This feature, called integrity-protected memory, is very important to avert an attack on the phone's non-volatile memory. An MTM without flash memory-based security can only detect a change to data/code, but cannot prevent it. The damage may already be done by the time the MTM detects the change.
Figure 1: A flash memory-based security embedded in a mobile phone.
Another important consequence of integrity-protected memory provided by flash memory-based security is data availability. Other approaches to trusted EE focus on data confidentiality. For example, they make sure that a user's credit card number is not readable by rogue software. However, they do not prevent a virus from deleting credit card numbers, resulting in thousands of customers not being able to use their phones to make mobile payments. Flash memory-based security provides both confidentiality and availability.
Flash memory-based security (Figure 1), is a multichip package that includes non-volatile memory (flash memory) as well as a secure processor that provides hardware access control to the non-volatile memory. The secure processor also acts as a trusted EE for providing secure services in a mobile phone. The secure processor is ideal as a trusted EE since it is close to the non-volatile memory where all the assets like integrity-protected code, data and keys are stored. Since it is also an isolated environment that only executes software provided as part of flash memory-based security, it is not subject to attacks like buffer overflow.
Figure 2 illustrates a block diagram of a secure processor. The CPU is an ARM7TDMI processor running at about 60MHz. The crypto engine supports both symmetric (AES, DES, 3DES) and asymmetric (PKI based on RSA) cryptography. All accesses (including the bypass) to the flash devices are monitored by the secure processor, which acts as a gatekeeper between the host processor (baseband or application processor) and the flash. The secure services provided by the secure processor include cryptographic and secure flash memory services.
Case in point: secure processor
The secure processor provides a trusted EE for applications running on the mobile phone. The software running on the secure processor is tightly controlled by the handset OEM and the network operator, and it is isolated from the host. Only programs that are verified and trusted are installed on the secure processor. The size of the software running on the secure processor is much smaller than a typical OS running on a mobile phone, so it is easier to verify that the software is trusted. The secure processor provides the four secure services (authentication, data integrity, data confidentiality and non-repudiation) required in the context of mobile phone security. Figure 3 illustrates the software architecture of flash memory-based security.
The API implemented on the host platform provides secure memory services, as well as cryptographic services. The API converts the function calls into messages that are sent to the secure processor using the memory interface.
These messages are designed with well-defined syntax and semantics to eliminate malicious message attacks on the secure processor. Within a message, each data field of variable length starts with a special marker followed by the length of the field, which is specified before the data. This is not like C strings whose length is known only after you scan the string and find a null character. There is a message parser that analyzes the message and checks for valid syntax. The message is not processed unless the syntax check is successful, which provides a guard against buffer overflow-type attacks. The message is then routed to the right agent based on a special field in the message. The agent allocates buffers of adequate size as specified in the message and verified by the message parser. There is a limited set of messages that are processed by a limited set of agents, which are carefully analyzed for security holes. The message cannot result in arbitrary native code being executed in the secure processor. There are no function pointers in the messages.
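The self-describing field layout described above can be sketched in a few lines. The marker value and 2-byte length encoding below are illustrative assumptions, not the actual wire format; the point is that the parser learns each field's length from its header and rejects anything malformed before allocating or copying data:

```python
# Sketch of a length-prefixed message parser: each variable-length field is
# a marker byte, a 2-byte big-endian length, then exactly that many data
# bytes. Malformed input is rejected before any buffer is filled.
import struct

FIELD_MARKER = 0xA5   # illustrative marker value

def parse_fields(msg: bytes):
    fields, i = [], 0
    while i < len(msg):
        if msg[i] != FIELD_MARKER or i + 3 > len(msg):
            raise ValueError("bad field marker or truncated header")
        (length,) = struct.unpack_from(">H", msg, i + 1)
        if i + 3 + length > len(msg):
            raise ValueError("declared length exceeds message size")
        fields.append(msg[i + 3 : i + 3 + length])   # buffer sized from header
        i += 3 + length
    return fields

msg = bytes([FIELD_MARKER]) + struct.pack(">H", 5) + b"hello"
print(parse_fields(msg))   # [b'hello']
```

Because the length precedes the data, the receiver never scans for a terminator the way C string handling does, which is what closes off the classic buffer-overflow path.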
Figure 2: Shown is a flash memory-based security block diagram.
Secure flash memory services—The secure processor provides secure memory services, including the storage of keys, certificates, code and data. The access rights to these objects are specified according to the security needs of the applications using them.
Secure partitioning—The non-volatile memory can be divided into separate memory partitions, each with separate access control. There is separate hardware-enforced access control for read and program/erase, and another for changing the access rights to the partitions. Access can be controlled by a password or through PKI authentication for increased security. Different stakeholders create these partitions during different lifecycle stages. For example, the network operator can create a "code partition" that contains the OS and other certified software installed by the operator. This partition will have read access without any authentication so that the code can be executed freely, while program/erase will require PKI authentication from the network operator. This prevents any rogue software running on the host platform from modifying the code partition, maintaining the integrity of the OS and other related software at all times, not just during secure boot.
Secure partitioning with a rich access control provides data integrity and data confidentiality. The partition can be protected against unauthorized read using password protection or PKI authentication. This provides the necessary data confidentiality. The partition can be protected against unauthorized write in a similar way, resulting in data integrity.
The access control also has an additional feature that defines the availability of the individual partitions. For example, the main code partition can be locked for read prior to a successful "simlock check." This will enforce the policy that the phone cannot be used without a successful simlock check.
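The per-partition read and program/erase controls described above can be modeled as a small access matrix. This is a hedged sketch only: the class and string-token credentials are hypothetical stand-ins for what the drive enforces in hardware with password or PKI authentication:

```python
# Illustrative model of per-partition access control: each partition carries
# independent read and write (program/erase) requirements, which may be open
# (None), password-based, or PKI-authenticated. Credential tokens are
# hypothetical; real enforcement happens in the secure processor hardware.

class Partition:
    def __init__(self, name, read_auth=None, write_auth=None):
        self.name = name
        self.read_auth = read_auth      # None means open access
        self.write_auth = write_auth

    def can_read(self, credential=None):
        return self.read_auth is None or credential == self.read_auth

    def can_write(self, credential=None):
        return self.write_auth is None or credential == self.write_auth

# Operator's code partition: freely readable, PKI-gated program/erase.
code = Partition("code", read_auth=None, write_auth="pki:operator")

print(code.can_read())                    # True:  anyone can execute the OS
print(code.can_write("pki:rogue-app"))    # False: rogue software is rejected
print(code.can_write("pki:operator"))     # True:  operator update succeeds
```

Adding a locked-until-simlock-check partition would just be another instance with a read requirement that is cleared once the check passes.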
Figure 3: An illustration of flash memory-based security software architecture.
Storage objects—Flash memory-based security is used for storing data, code, keys, certificates and counters. Mobile devices normally store keys in ROM, which is less flexible and limited in size compared to flash. Storing keys encrypted in non-volatile memory provides confidentiality, but it does not prevent keys from being wiped out by rogue software. Flash memory-based security allows one to store a virtually unlimited number of keys, and more can be added at any time using an over-the-air (OTA) update. Confidentiality, integrity and authenticity are provided by storing the objects in their appropriate partitions.
On-the-fly encryption—The secure processor provides an on-the-fly encryption feature. This allows the host to send plaintext to the memory, which gets encrypted as it is being written to the flash. The encryption algorithm used is AES-CTR.
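The counter-mode structure behind that write path can be sketched as follows. Note the hedge: the article names AES-CTR, but this sketch substitutes a SHA-256-based keystream so it runs with only the standard library; what it illustrates is the CTR shape itself, where a key plus a block counter yields a keystream block that is XORed with the data, and the same operation decrypts:

```python
# Sketch of an on-the-fly CTR-mode write path: plaintext from the host is
# XORed with a counter-derived keystream as it is written to flash.
# SHA-256 stands in here for the AES block cipher named in the article.
import hashlib

def keystream_block(key: bytes, counter: int) -> bytes:
    """Derive a 32-byte keystream block from the key and a block counter."""
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()

def ctr_xor(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt (CTR mode is symmetric) by XOR with the keystream."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = keystream_block(key, offset // 32)
        chunk = data[offset : offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key = b"device-unique-key"
stored = ctr_xor(key, b"host plaintext written to flash")   # encrypt on write
print(ctr_xor(key, stored))   # decrypt on read recovers the plaintext
```

Because the counter determines each block's keystream independently, the drive can encrypt blocks in write order without buffering the whole transfer.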
Cryptographic services—The cryptographic services are a subset of the PKCS#11 API. The API is independent of a host platform and supports symmetric key and public key cryptography. The API converts function calls to messages to the secure processor, where they are serviced using the crypto flash core. The integrity and confidentiality of the keys are well protected since they never leave the secure processor. The cryptographic services provided by the secure processor allow one to create a secure communication channel between the secure processor and an external server. The security of this communication channel is not dependent on the host platform. This allows secure implementation of applications such as FOTA and mobile commerce. The secure processor provides a high level of device authentication since the root key never leaves the secure processor.
Conclusion
Flash memory-based security provides a trusted EE, as well as a secure non-volatile memory with a rich access control mechanism that supports multiple stakeholders. Secure non-volatile memory with PKI authentication means that the integrity of the code and data is protected, resulting in highly secure data integrity and confidentiality. Authentication and non-repudiation are a result of the secure processor being an isolated trusted EE with an embedded crypto flash core. In addition, secure non-volatile memory makes the data available at all times. This property of data availability is not possible with encryption alone. Using cryptography, other solutions such as MTM can detect whether the data has been tampered with, but cannot protect against tampering. If not prevented, a virus can destroy credit card numbers on thousands of phones, making it impossible for customers to make mobile payments.
Integrity-protected memory provided by flash memory-based security makes key provisioning more flexible. There is more room for storing cryptographic keys and digital certificates. Further, the keys can be updated over the air.
Many of the attacks on PCs and mobile phones can be traced to the attacker modifying data/code in the non-volatile memory. Flash memory-based security safeguards the memory against such attacks, which is something other mobile security approaches cannot do. Approaches such as MTM or baseband security rely on secure boot and runtime integrity check to detect any changes to code or data. Modified code may have already abused sensitive data by the time the runtime integrity check detects it. It is important to prevent tampering of data/code to build a trusted EE.
Using flash memory-based security to create and maintain a trusted EE will help make advanced mobile phone applications like mobile payment secure and reliable, increasing their adoption by institutions and consumers.
Microsoft Vista security impresses those hot for NAC
By Tim Greene, Network World (US)
29 Apr, 2008
Microsoft's network access control client, built into Vista and now Windows XP, has a lot of IT executives excited, according to an informal poll of about 250 attendees of an Interop Las Vegas NAC seminar who are actively considering deploying the access technology.
About a third of them say they would use the NAC support in the Microsoft client software rather than pay extra and deal with deploying and maintaining a more feature-rich client. Microsoft calls its NAC technology Network Access Protection (NAP).
Slightly fewer said they would pay extra and take on the additional work needed to deploy a better client. About a fifth of the group didn't respond to the call for a show of hands from the session's instructor, Joel Snyder, a partner at the Opus One consultancy and a member of the Network World Lab Alliance. (Compare NAC products.)
Many vendors make gear compatible with Microsoft NAP, including Cisco and vendors that follow the standards set by the Trusted Computing Group (TCG).
But NAP didn't escape unscathed by a panel during the Interop NAC session. Participants noted that in order to support non-Microsoft machines, customers have to deal with third-party vendors that make software that can report the status of Linux, Unix and Macintosh machines to NAP servers.
Sophos, which makes such a NAP client that also interoperates with Sophos' own desktop security software, says it's more convenient to get all the data about the endpoint in one place rather than have separate clients. "You look in one place and get all the information -- from the firewall, NAC, [desktop security software]," says Chester Wisniewski, product specialist for global sales engineering at Sophos.
"Our APIs are available to any partner," says Manlio Vecchiet, a group product manager in the Windows server division of Microsoft.
One of the knottiest problems with NAC technology remains how to get data about devices that can't run NAC clients such as phones and printers, panelists say. The best way to deal with it is checking the behavior of devices continuously after they are admitted to the network to flag and block them when they stop acting like printers and phones. "If these devices do things they shouldn't, you need to know," says Brendan O'Connell, a senior product manager at Cisco who also was on the panel.
To that end the TCG announced at Interop that it has a new standard that lets other security devices share network security data with NAC platforms. The data is posted centrally and can be tapped by any of the devices. That way firewalls, intrusion detection/prevention systems and the like can contribute to ongoing monitoring of devices' behavior.
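The sharing model the standard describes is essentially a central publish-and-query store for endpoint observations. A loose sketch, with all names hypothetical rather than taken from the TCG wire protocol:

```python
# Illustrative publish/query store for shared network security data:
# devices (IDS, firewall, etc.) publish observations about an endpoint,
# and any enforcement point can query the accumulated view.
from collections import defaultdict

metadata = defaultdict(list)   # endpoint id -> list of (publisher, observation)

def publish(endpoint, publisher, observation):
    """A security device posts an observation about an endpoint."""
    metadata[endpoint].append((publisher, observation))

def is_suspect(endpoint):
    """A NAC platform checks whether any device flagged the endpoint."""
    return any(obs == "anomalous" for _, obs in metadata[endpoint])

publish("printer-17", "ids", "anomalous")     # IDS sees the printer port-scanning
publish("printer-17", "firewall", "normal")
print(is_suspect("printer-17"))               # True: NAC can now block it
```

This is how a device that cannot run a NAC client, such as a printer, can still be policed: anything on the network that watches it can contribute evidence.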
Vendors acknowledged in response to questions from attendees that setting up NAC is a slow, methodical process and may in its initial phases require significant work. That is especially true of networks lacking updated infrastructure to support the form of NAC chosen, says Cisco's O'Connell. "When you put NAC on your network, you probably are going to have a fair amount of spending on your hands," he says. "If you've ignored your wiring closet in the last 10 years, you're going to have some work to do."
The upside is that the investment will be worth it because the network will have a needed overhaul.
Other vendors noted that phasing in NAC in monitoring mode first to find out just how many devices would be rejected is the best way to deploy. Once the majority of endpoints are remediated to pass NAC inspection, enforcement can be turned on without disrupting business, they say.
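The monitor-then-enforce rollout those vendors describe can be sketched as a single compliance check run in two modes: in monitor mode failures are only logged for remediation, while in enforce mode they block admission. The policy fields below are illustrative assumptions:

```python
# Sketch of phased NAC deployment: identical compliance checks in both
# phases, but monitor mode logs failures instead of blocking endpoints.

REQUIRED = {"antivirus": True, "patched": True}   # illustrative policy

def compliant(endpoint: dict) -> bool:
    return all(endpoint.get(k) == v for k, v in REQUIRED.items())

def admit(endpoint: dict, mode: str) -> bool:
    if compliant(endpoint):
        return True
    if mode == "monitor":
        print(f"would reject: {endpoint['name']}")   # remediate before enforcing
        return True
    return False   # enforce mode: non-compliant endpoints are blocked

laptop = {"name": "sales-laptop", "antivirus": True, "patched": False}
print(admit(laptop, "monitor"))   # True, but logged for remediation
print(admit(laptop, "enforce"))   # False once enforcement is switched on
```

Running in monitor mode first yields a count of would-be rejections, so enforcement can be switched on only after that number is driven near zero.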
Interest in NAC seemed high, with the workshop selling out to about 250 attendees who came a day early and paid to attend the class.
Fingerprints: forward march!
04/24/08
By Kathleen Hickey
Pentagon adopts biometric recruit enrollment
http://www.gcn.com/cgi-bin/udt/im.display.printable?client.id=gcn_daily&story.id=46175
Today new recruits are enlisting with electronic fingerprints rather than signing a piece of paper, as part of the military's drive to eliminate paper signatures.
The first recruits used the technology last week at the Baltimore Military Entrance Processing Station. The recruits read the electronic contracts on a computer screen, then touched their index fingers to an electronic pad, uploading their prints and linking them to their contracts.
After swearing in the recruits, Air Force Maj. Michael Thomas, deputy station commander, used his own index fingerprint to biometrically sign their contracts. The new service members received printouts of their enlistment contracts, which included a facial photo and the fingerprint. No other paper was required.
Today only the Baltimore recruitment center is beta testing biometrically signed contracts. Once beta testing is completed, the military plans to expand the program to all 65 enrollment centers, said Lt. Col. Jonathan Withington, press officer, Office of the Assistant Secretary of Defense for Public Affairs.
Signing up enlistees electronically is a big step in the U.S. Military Entrance Processing Command’s transition to paperless enlistment recordkeeping, said Ted Daniels, chief of the command’s accessions division.
Biometrics will offer the agency many advantages, from improving security to reducing redundancy and costs, Daniels said. Last year the military recruited 266,000 new warriors. By switching to biometric technology, the agency estimates it will save 70 million sheets of paper a year, Daniels said.
The biometric fingerprints will not only be used as a digital signature but will also become part of the service members’ permanent personnel records and will be used for identity verification. Additionally the technology will be used to track an applicant’s progress throughout the qualification process, including aptitude testing, medical screening, background checks and basic training.
“What we want to do is make sure whoever is next to you in the foxhole is exactly who they are supposed to be,” Daniels said.
Daniels expects biometrics to accelerate and simplify many personnel procedures, including getting a Common Access Card and enrolling in the military’s health insurance program.
The military has used biometric fingerprints to do background checks on service members for several years, running prints through the Federal Bureau of Investigation’s fingerprint database, said Gaylan Johnson, spokesperson for the enrollment agency.
Stonewood pushes laptop encryption into the enterprise
Encrypted hard drives tailored for enterprise use
Daniel Robinson, IT Week 29 Apr 2008
http://www.itweek.co.uk/articles/print/2215460
Stonewood Electronics has launched a range of encrypted hard drives aimed at general business users. The drives are designed to protect against data loss in the event of a system being stolen or a laptop misplaced.
The firm's Eclypt range, available immediately, consists of Eclypt Corporate, a direct replacement for a PC or laptop hard drive; and Eclypt Freedom, a portable external drive with a USB connection. Both secure the entire disk content using 256-bit AES encryption and store the key within the drive electronics, so it cannot be uncovered through an attack on the computer's operating system. Both also ship in a tamper-proof enclosure.
Stonewood already supplies government departments with encrypted drives, but the Eclypt range is aimed at a broader market, according to marketing director Grant Gutteridge.
"In light of recent data losses, there is a growing need in both public and private sector to protect information," he said. The firm's existing FlagStone products are single-user only while Eclypt is much more flexible and designed to support scenarios such as a pool of laptops being available for employees travelling on business, Gutteridge added.
(From a different article: "The fundamental difference between a Seagate unit and ours is that on ours the key is held on the PCB," said Stonewood's Grant Gutteridge, referring to the rival storage giant's full-disk encryption product, the Momentus FDE.2. "You would not be able to mount an attack on our drives because the key is not on the drive itself." http://www.techworld.com/security/news/index.cfm?newsID=12116&pagtype=samechan)
But while FlagStone is fully accredited by the UK government's CESG information assurance agency, Eclypt is only in the early stages of the accreditation process, Gutteridge said.
Eclypt Corporate simply replaces a standard hard drive in new systems or can be retro-fitted to existing kit. It is available in 60GB and 120GB capacities with ATA and Serial ATA interfaces, while Eclypt Freedom is only available as a 120GB unit.
Fujitsu last month unveiled its own encrypted drives, the first available in capacities up to 320GB, it claimed. The MHZ2-CJ series also stores the key internally and uses 256-bit AES encryption.
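The architecture both vendors describe — an encryption key that is created inside the drive and never handed to the host — can be sketched in a few lines. The toy Python model below uses a SHA-256 counter keystream as a stand-in for the drives' AES-256 hardware; the class, its methods, and the key-provisioning step are invented for illustration and are not any vendor's API:

```python
import hashlib

class EncryptedDrive:
    """Toy model of a self-encrypting drive: the 256-bit key lives
    inside the drive object and is never returned to the host; only
    ciphertext would ever reach the platters."""
    SECTOR = 512

    def __init__(self):
        # stand-in for a key generated and stored on the drive's PCB
        self._key = hashlib.sha256(b"factory-provisioned secret").digest()
        self._platter = {}  # sector number -> ciphertext

    def _keystream(self, sector_num):
        # SHA-256 counter keystream standing in for AES-256 hardware
        out = b""
        counter = 0
        while len(out) < self.SECTOR:
            out += hashlib.sha256(self._key
                                  + sector_num.to_bytes(8, "big")
                                  + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:self.SECTOR]

    def write(self, sector_num, data):
        data = data.ljust(self.SECTOR, b"\0")
        ks = self._keystream(sector_num)
        self._platter[sector_num] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, sector_num):
        ks = self._keystream(sector_num)
        return bytes(a ^ b for a, b in zip(self._platter[sector_num], ks))
```

An attack on the host operating system yields at most read/write access to an unlocked drive; the key itself is never serialized out, which is the property Gutteridge is describing. A real drive would also bind unlocking to user authentication, which is omitted here.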
Security gets into the mix
04/28/08
By William Jackson
http://www.gcn.com/cgi-bin/udt/im.display.printable?client.id=gcn&story.id=46166
Natalie Givans, a vice president at Booz Allen Hamilton’s information and mission assurance and resilience group, has gained experience during her career in analyzing and designing security for a variety of government and commercial information and communication systems.
From 2000 to 2005, she was on the board of the International Systems Security Engineering Association, which developed the Systems Security Engineering Capability Maturity Model. Givans has said information security is a matter not only of technology but also of leadership, economics, policy and culture.
GCN: You have worked in the information security field for more than 20 years. What changes have you seen?
GIVANS: I started at Booz Allen 24 years ago. Back then, we were working on crypto devices, things like the STU-3, the Secure Telephone Unit, at all levels of government — primarily point solutions. That was the extent of the security industry. I found it difficult to talk with commercial organizations — energy utilities, financial services companies — about their responsibilities to protect resources in what was becoming an electronic world. They didn’t understand anything beyond scrambling the bits. We were ahead of our time in terms of our concerns. With the fact that everything is connected to everything now, the threats are coming from within the network as well as from outside. It’s on everybody’s mind now.
GCN: You have said that information security involves more than technology. But is the technology available today adequate for the job?
GIVANS: Information security involves protection of information in the classic sense, such as encrypting it. It also involves information and network integrity and their availability as well as the accountability of the processes and humans involved.
We have a lot of technology, but a lot of it is still point solutions focused on just one of those problems, not at their integration in an enterprise or at a national security level. We have a lot of crypto devices, firewalls, identity and access management, including biometrics, smart cards and audit software to see what is going on in the network. My real concern is the integration of that technology.
The [Defense Department] called this defense in depth years ago — it’s not a new idea.
GCN: How do agencies get the funding they need for proper security?
GIVANS: Agencies need to be able to tie information and infrastructure security to the mission they are trying to accomplish. Be able to explain what the risks are to the organization and tie information security requirements to that.
Too often this focus is separate: there is a group of people who worry about information security but who are not linked to the rest of the organization. Agencies need to link these elements, to show not only compliance but how the money spent on security will be an enabler of their mission.
GCN: How do you measure security? What metrics do you use?
GIVANS: I worked years ago on what has become an [International Organization for Standardization] standard, the Systems Security Engineering Capability Maturity Model. We had a large metrics working group on that. In the software world, it was fairly easy to demonstrate that higher maturity levels yielded better software.
In the security world, we had a lot of debates about that.
It wasn’t really clear that the more process you had, the better the security would be. In fact, there were times we could prove that really wasn’t the case.
The metrics working group had to take this on. We determined that security measurements typically focused on areas that were easy to measure and on what was obvious. Organizations easily can measure the number of people trained or the number of devices installed or the number of intrusions that are detected.
The problem is [that] the metrics that people collect do not necessarily point to better security; they point to better process.
I think it is important for organizations to identify the goals for their missions, and the threats they are seeing to those goals.
Then they tie their improvements to those. For example, we asked organizations to identify the specific configuration management weaknesses that were exploited within their organization and to train their personnel on how and why they needed to close those vulnerabilities. Then give them a deadline, give them resources and then audit and make them accountable.
That string of events would lead to real knowledge of security results.
GCN: How do you translate a security policy into a culture that supports security?
GIVANS: It starts at the top. If you look at the nation, it starts with the president. What we find in any organization is that which is measured is improved, and what leadership talks about are the things people pay attention to. So first, the most senior leaders must publicly embrace and advocate security. They also have to ensure there is adequate funding to implement these policies and that people are adequately trained. And there has to be accountability, a way to tie the stakeholders’ incentives to the desired level of security maturity.
GCN: Is security training being adequately addressed in most agencies?
GIVANS: Probably not, but it varies greatly from organization to organization. There are examples where agencies are putting a lot of effort into it. When we have a lot of budget constraints, training of any kind tends to take a back seat. There is a need to work out what kind of training is needed for each kind of employee. In some cases, you can get by with an awareness campaign; in other cases, security professionals must have certification.
Certification can be a driver for education. One example of that is the Defense directive requiring certification of the information assurance workforce in a certain time frame. Recently, this was picked up as a requirement that contractors also must achieve certification. Obviously, organizations still need to provide the funding for that to happen.
GCN: Are there any government success stories that stand out for you?
GIVANS: I would say [the National Institute of Standards and Technology] is a great example.
Under [the Federal Information Security Management Act], they have focused on working across the community to establish the right kinds of guidance, standards and tools that help normalize the requirements. They have everything from performance measurement guides to information security handbooks to recommended security controls. I think that is a great enabler. Another is the Information Assurance Technical Analysis Center sponsored by [the Defense Technical Information Center and the Director of Defense Research and Engineering office]. Their emphasis is on capturing I-A best practices and standards.
GCN: What is the greatest security challenge facing the government in the coming years?
GIVANS: The big area is the defense of our infrastructure.
There should be a lot of concern about the risk to our financial systems, our control systems and our networks from both inside and outside the enterprise. We see more incidents, such as the loss of information, but more scary is the lack of availability of the infrastructure when you really need it. That to me is the big threat. We need to focus on better tools for the prediction, prevention and reconstitution of our infrastructure. We need to focus on resilience, not just protection, because we are going to get attacked, the systems will go down, and what will be important is how fast you can respond and recover.
Experts struggle with cybersecurity agenda
04/28/08
By William Jackson,
http://www.gcn.com/cgi-bin/udt/im.display.printable?client.id=gcn_daily&story.id=46189
Whoever becomes our next president will inherit a cyber infrastructure under almost constant attack and at greater risk than eight years ago, and a handful of experts and legislators have come together to ensure that cybersecurity has a high priority in his or her administration.
The Commission on Cyber Security for the 44th Presidency, set up in November by the Center for Strategic and International Studies, held the second of five planned public meetings Monday to hear recommendations on issues of information security, identity theft and government leadership.
Cybersecurity is not a technical issue, panelists said, but a matter of culture, education and self-interest. Government cannot regulate information technology security, and industry cannot do the job by itself. Forging the public/private partnership needed to provide adequate security will require leadership in both government and industry. Cooperation between the two spheres may not be easy to come by, said John Koskinen, who spearheaded the government response to the Year 2000 Transition.
“The private sector is always nervous about what the government is up to,” Koskinen said. Business deals with security in terms of business cases and managing acceptable risk, while government tends to deal in regulatory absolutism. And information sharing is always a challenge. The advice of corporate general counsels is generally “Don’t tell anybody anything.”
But the Y2K transition showed that effective cooperation is possible if government acts as a catalyst to establish priorities and bring different sides together, he said.
The nonpartisan think tank established the commission “to develop recommendations for a comprehensive strategy to improve cybersecurity in federal systems and in critical infrastructure.” Its goal is to have a package of recommendations ready for the next president by November. Cybersecurity will be vying with numerous other domestic and international, economic, security and political issues for the presidential transition team’s attention. Establishing it as a high priority will require putting it on the legislative and policy agenda from the beginning of the administration, organizers say.
Co-chairmen of the group are the former director of the U.S. National Security Agency, ret. Adm. Bobby Inman; Scott Charney, vice president of trustworthy computing at Microsoft; Rep. Jim Langevin (D-R.I.), chairman of the Homeland Security Subcommittee on Emerging Threats, Cyber Security and Science and Technology; and ranking Republican Rep. Michael McCaul of Texas. Members of the commission include Amit Yoran, formerly top cybersecurity official at the Homeland Security Department; Orson Swindle, formerly of the Federal Trade Commission; and Marty Stansell-Gamm, former head of the Department of Justice’s computer crimes division; in addition to a number of industry executives.
There was not complete agreement among panelists on cybersecurity priorities. They agreed that a single national data breach notification law is needed to replace the current patchwork of 40-plus state laws. Although Lisa Sotto, a partner at the law firm Hunton and Williams, called for federal preemption of state laws, David Mortman, chief information security officer-in-residence at Echelon One, wanted federal law to set a baseline for breach notification without precluding stiffer state requirements.
Julie Ferguson, vice president of emerging technology at Debix, called for a zero-tolerance policy for identity theft enforced by required verification of online transactions with consumers. Jay Foley, founder of the Identity Theft Resource Center, called for creation of a national death registry and for the Social Security Administration to create a database tying Social Security numbers to dates of birth to help prevent misuse of the numbers, even as efforts are made to stop their use as a unique personal identifier.
Pamela Fusco, executive vice president of security solutions at Fishnet Security, said she wanted to establish an International Data Classification Standard that could help identify and assess value and risk to data. This would improve business practices and help put teeth in government regulation, she said.
“Information is not being identified as essential,” Fusco said. “We’re protecting machines, we’re protecting access,” but standard ways to classify and prioritize the information that underlies them have yet to be developed.
'NAC 2.0' Takes Shape Under Networking Giants
http://www.internetnews.com/infra/article.php/3743346/NAC+20+Takes+Shape+Under+Networking+Giants.htm
April 28, 2008
By Sean Michael Kerner
A slew of the big names in networking are aiming to push the hot technology of network access control (NAC) beyond its proprietary beginnings, incorporating a broader base of vendor frameworks and implementations.
The effort marks a joint initiative between Cisco and the Trusted Computing Group (TCG) -- a five-year-old consortium of vendors working on open standards for hardware-based security that includes HP, IBM, Intel and Microsoft.
Together, the networking colossus and the TCG are rallying behind a new specification called Interface for Metadata Access Point (IF-MAP), designed around aligning their respective access control frameworks. If all goes well, the effort to converge Cisco NAC and Trusted Network Connect (TNC) will result in a standard sanctioned by the Internet Engineering Task Force (IETF).
The news that NAC may be set to become a pervasive technology, interoperable across vendors, gives further signs that NAC may prove to be the cornerstone of end-to-end access control security within an enterprise network.
"We have Cisco, Microsoft and TNC all aligned around protocols," said Stuart Bailey, founder of networking vendor InfoBlox and the editor of the IF-MAP specification. "That's pretty exciting stuff in terms of making a substantial step forward toward network access control interoperability."
The specification is being posted today by the TNC and the group will be demonstrating implementations at the Interop trade show in Las Vegas.
The lynchpin of IF-MAP's interoperability across Cisco, Microsoft and TNC systems is the TNCCS-SOH protocol, which Microsoft donated to the TNC last year. TNCCS-SOH is a statement-of-health protocol that validates the health level of an endpoint to provide what's known as pre-admission control.
TNCCS-SOH is part of Microsoft's Network Access Protection (NAP) technology integrated with Windows Server 2008. TNC members like Juniper and HP ProCurve are still building out actual implementations of the protocol, but Bailey told InternetNews.com that the foundation is in place.
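A statement-of-health check of the kind described above boils down to comparing an endpoint's reported posture against the network owner's policy and returning an access decision plus remediation instructions. A minimal sketch, in which the policy fields (`min_patch_level` and so on) are invented for illustration and are not the actual TNCCS-SOH schema:

```python
# Hypothetical network-owner policy: field names are illustrative only.
POLICY = {"min_patch_level": 42,
          "antivirus_running": True,
          "firewall_enabled": True}

def evaluate_soh(soh):
    """Compare an endpoint's statement of health against POLICY.
    Returns an access decision and a list of remediation instructions."""
    failures = []
    if soh.get("patch_level", 0) < POLICY["min_patch_level"]:
        failures.append("install patches up to level %d"
                        % POLICY["min_patch_level"])
    for flag in ("antivirus_running", "firewall_enabled"):
        if POLICY[flag] and not soh.get(flag, False):
            failures.append("enable " + flag.replace("_", " "))
    if failures:
        # pre-admission control: quarantine plus instructions to fix
        return "quarantine", failures
    return "admit", []
```

An endpoint that meets every requirement is admitted; one that falls short is quarantined and told exactly what to remediate, which is the "statement of health" exchange in miniature.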
While Bailey noted that the IETF standardization effort is extremely important, the TNC is also moving forward on a related effort: to expand the definition of what NAC can do.
For one thing, IF-MAP goes beyond pre-admission access control -- validating an endpoint before it is granted access to network assets -- to include post-connection event correlation for access control policy.
"While NAC focuses on pre-admission requirements now because of the proliferation of unmanaged endpoints and compliance issues, there is a need to understand and manage the entire lifecycle," Bailey said.
"It's not good [enough] to know that we can admit an endpoint to the network -- we need to watch that endpoint through the entire lifecycle and be able to react and adjust to the endpoint as it does what it needs to do," he said.
That's where the new IF-MAP protocol comes into play -- its designers had the goal of using it to provide a unified response to network endpoint events. IF-MAP uses XML-based metadata from network security devices to help correlate actions, thereby helping a network make a decision about access policy for a given endpoint.
"MAP is like a MySpace or Facebook for enterprise infrastructure security pieces that each component publishes and subscribes to," Bailey said. "This is a community of security infrastructure devices where each device can allow its circle to know what it sees on the network, and share information."
For example, if one IF-MAP-compliant security device on a network detects a VoIP phone doing something that it shouldn't, that information can be shared with other network elements to take action. The protocol itself is secured with strong certificate-based authentication and uses Web services, specifically XML over HTTPS, to communicate.
Bailey said that since IF-MAP is based on Web services, existing network security devices could potentially integrate the protocol into their devices with only a software upgrade.
"There is a pent-up demand for network security and the perceived complexity of NAC has made NAC deployment difficult for some," Bailey said. "What IF-MAP may be is a game changer for enterprise network security. It's a simple system that allows existing systems to integrate and it lowers operating cost and reduces vendor cost for integration."
Dell, HP launch AMD business desktops
http://www.news.com/8301-10784_3-9929860-7.html
NAC group expands its scope
http://www.networkworld.com/news/2008/042808-interop-tcg.html
NAC group expands its scope
By Brad Reed , Network World , 04/28/2008
Trusted Computing Group has expanded its interest in NAC to include the way devices behave once they have been admitted to networks, and the group is demonstrating this capability at Interop Las Vegas.
Formerly focused just on the security of endpoints before they are admitted to networks, TCG today is announcing a new protocol to help coordinate network security devices so they can respond to threatening behavior of devices that have already connected to networks. (Compare NAC products.)
The protocol acts as a common language that security devices can use to upload data to a meta-data access point (MAP). The MAP then publishes that data to other devices on the network that have subscribed to it. The same protocol, called IF-MAP, is used to access posted data from the MAP.
Based on this composite store of security information, each device can make better-informed decisions about restricting devices deemed to be violating security policies.
For example, the MAP may publish that the configuration of a router has been changed by a device at a certain IP address. Security management software may use this data to determine that that act puts the IP address in violation of security policy. Via the MAP, the security management software can publish that the IP address should be blocked, and a firewall monitoring the MAP could block the IP address based on that notification.
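The publish/subscribe chain in that example can be sketched in a few lines of Python. The `MapServer` class, the device callbacks, and the record fields below are all hypothetical — they illustrate the pattern, not the actual IF-MAP wire protocol, which is XML over HTTPS:

```python
class MapServer:
    """Toy metadata access point: devices publish records, and every
    subscriber is notified, building one shared pool of security data."""
    def __init__(self):
        self.metadata = []     # composite store of published records
        self.subscribers = []  # callbacks registered by network devices

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, record):
        self.metadata.append(record)
        for cb in self.subscribers:
            cb(record)

blocked = set()

def firewall(record):
    # a firewall monitoring the MAP acts on published block decisions
    if record.get("action") == "block":
        blocked.add(record["ip"])

def security_manager(map_server):
    def on_event(record):
        # policy: an unauthorized router config change puts the source
        # IP in violation, so publish a block decision back to the MAP
        if record.get("event") == "router-config-change":
            map_server.publish({"action": "block", "ip": record["ip"]})
    return on_event

m = MapServer()
m.subscribe(firewall)
m.subscribe(security_manager(m))
# a device reports a router config change from 10.0.0.99 ...
m.publish({"event": "router-config-change", "ip": "10.0.0.99"})
# ... and the firewall, via the MAP, ends up blocking that address
```

Each element makes its own decision from the shared store: the security manager turns the raw event into a policy verdict, and the firewall enforces it, without the two ever talking directly.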
The protocol could have other applications, says Stu Bailey, CTO of Infoblox, who helped work on IF-MAP. For instance, components of an automated supply-chain system could use the protocol to publish data to a MAP so other elements could make decisions based on more data, he says.
Using IF-MAP in a NAC framework provides a standard in which gear from multiple vendors can be used to perform post-admission NAC, something that is already possible on a proprietary basis via individual vendors. With the standard protocol and a MAP device, security gear that is already installed in networks could contribute to post-admission NAC.
Infoblox supplies the MAP for the IF-MAP demonstration at the TCG booth at Interop, but gear from Juniper and ArcSight could be used as well, says Steve Hanna, co-chairman of TCG’s Trusted Network Connect program for NAC. He says open source groups are also working on a MAP platform.
TCG may later push for IETF approval of IF-MAP, but that has not been decided, Hanna says. The IETF is considering other TCG NAC standards as possible IETF RFCs.
All contents copyright 1995-2008 Network World, Inc. http://www.networkworld.com
Fingerprints: forward march!
http://www.gcn.com/online/vol1_no1/46175-1.html
Pentagon adopts biometric recruit enrollment
By Kathleen Hickey, Special to GCN
Today new recruits are enlisting with electronic fingerprints rather than signing a piece of paper, as part of the military's drive to eliminate paper signatures.
The first recruits used the technology last week at the Baltimore Military Entrance Processing Station. The recruits read the electronic contracts on a computer screen, then touched their index fingers to an electronic pad, uploading their prints and linking them to their contracts.
After swearing in the recruits, Air Force Maj. Michael Thomas, deputy station commander, used his own index fingerprint to biometrically sign their contracts. The new service members received printouts of their enlistment contracts, which included a facial photo and the fingerprint. No other paper was required.
Today only the Baltimore recruitment center is beta testing biometrically signed contracts. Once beta testing is completed, the military plans to expand the program to all 65 enrollment centers, said Lt. Col. Jonathan Withington, press officer, Office of the Assistant Secretary of Defense for Public Affairs.
Signing up enlistees electronically is a big step in the U.S. Military Entrance Processing Command’s transition to paperless enlistment recordkeeping, said Ted Daniels, chief of the command’s accessions division.
Biometrics will offer the agency many advantages, from improving security to reducing redundancy and costs, Daniels said. Last year the military recruited 266,000 new warriors. By switching to biometric technology, the agency estimates it will save 70 million sheets of paper a year, Daniels said.
The biometric fingerprints will not only be used as a digital signature but will also become part of the service members’ permanent personnel records and will be used for identity verification. Additionally the technology will be used to track an applicant’s progress throughout the qualification process, including aptitude testing, medical screening, background checks and basic training.
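One way such a contract-to-fingerprint binding could be implemented — purely an illustration, not the command's actual system — is to hash the contract text and the fingerprint template together under a key held by the records system, so the pairing can later be verified:

```python
import hashlib
import hmac

def sign_contract(contract, fingerprint_template, records_key):
    """Bind a contract to a fingerprint template under a records-system
    key (all inputs are bytes; names are hypothetical)."""
    payload = (hashlib.sha256(contract).digest()
               + hashlib.sha256(fingerprint_template).digest())
    return hmac.new(records_key, payload, hashlib.sha256).hexdigest()

def verify_contract(contract, fingerprint_template, records_key, signature):
    """Re-derive the binding and compare in constant time."""
    expected = sign_contract(contract, fingerprint_template, records_key)
    return hmac.compare_digest(expected, signature)
```

Any later change to either the contract text or the template invalidates the stored signature, which is what lets the record serve both as a digital signature and as an identity check.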
“What we want to do is make sure whoever is next to you in the foxhole is exactly who they are supposed to be,” Daniels said.
Daniels expects biometrics to accelerate and simplify many personnel procedures, including getting a Common Access Card and enrolling in the military’s health insurance program.
The military has used biometric fingerprints to do background checks on service members for several years, running prints through the Federal Bureau of Investigation’s fingerprint database, said Gaylan Johnson, spokesperson for the enrollment agency.
Storage Security Standards Heat Up
http://www.enterprisestorageforum.com/continuity/features/article.php/3742881
April 24, 2008
By Drew Robb
Standalone storage security vendors may be having a rough go of it, but storage security standards are faring much better.
With the likes of NeoScale, Kasten Chase and Decru disappearing from the scene as independent vendors, larger storage vendors are incorporating greater security into their products, and standards groups are taking on the issue too (see Progress Catches Up With Storage Security Vendors).
Storage security standards seem to be blossoming forth in ever-increasing numbers. Here are some of the various standards (and far from a complete list at that): the LTO Consortium implemented the embedded Advanced Encryption Standard (AES) 256-bit encryption that is now native in LTO-4 tape drives; the T10 group is working on SCSI storage interfaces; the Institute of Electrical and Electronics Engineers (IEEE) 1619.3 committee is developing encryption key management; the Trusted Computing Group (TCG) has its trusted security initiative; and the Storage Networking Industry Association's (SNIA) Storage Security Industry Forum (SSIF) is working on a longer-term overall security framework.
"AES-256 is an established and widely used standard while the others are works in progress," said Jon Oltsik, information security analyst at Enterprise Strategy Group.
The LTO-4 Consortium announced the acceptance of AES 256 bit encryption a year ago, and tape drives incorporating this standard have been shipping since the middle of last year. Quantum, IBM and HP have each implemented this encryption on Fibre Channel, SAS and most SCSI versions of their LTO-4 drives. The result is tangible — encryption is built into the tape drive as opposed to having to implement an appliance or install software to protect tape data.
The Keys to Encryption
The rest of the security standards, as Oltsik pointed out, are not yet finalized. Some may take quite a while to materialize while others are edging closer. The IEEE 1619.3 Key Management Standards group appears to be in the latter category. The 1619.3 committee is working to create a common method for encryption key managers to talk to devices such as tape. The goal is to free users up from having to deploy proprietary key management solutions, since these can create challenges to track and manage. 1619.3 aims to allow users to be able to choose a key management solution that will work across multiple platforms and vendors.
"This committee is working on key management standards, which I believe will become a very important issue as more and more encryption is deployed," said Oltsik.
The committee is composed of engineers from end user companies and vendors. Quantum, for example, is heavily involved in many storage security standards.
"1619.3 creates a standards-based key management API that can be implemented by the various key management vendors and storage providers who offer encryption solutions as part of their storage tape, disk and switch products," said Robert Callaghan, Quantum's senior product manager for Security and Enabling Solutions. "The goal is to provide the customer with choices and interoperability."
He gives the example of a customer using a Quantum i2000 library with LTO-4 drives and encryption. Instead of being locked into a Quantum key manager, the customer would be free to choose one from another vendor if it better suited their needs and budget.
Another problem this potentially solves is having to manage multiple encryption key managers and key sets, as well as managing the backup of those keys, and protecting access and delivery of those keys. Utilizing one key manager for all encryption keys saves time and money and removes the administrative headaches associated with having to manage multiple interfaces and key managers.
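The interoperability goal Callaghan describes amounts to a common interface that any key manager could implement and any encrypting device could call. A hypothetical sketch — class and method names are invented for illustration, since the real 1619.3 API was still being drafted:

```python
import os
import uuid

class KeyManager:
    """Sketch of a vendor-neutral key manager: any encrypting device
    (tape library, disk array, switch) asks for keys by key ID."""
    def __init__(self):
        self._keys = {}  # key ID -> owning device and 256-bit key

    def create_key(self, device_id):
        # generate a 256-bit key and return an opaque handle to it
        key_id = str(uuid.uuid4())
        self._keys[key_id] = {"device": device_id, "key": os.urandom(32)}
        return key_id

    def get_key(self, key_id, device_id):
        # release key material only to the device it was created for
        entry = self._keys[key_id]
        if entry["device"] != device_id:
            raise PermissionError("device not authorized for this key")
        return entry["key"]
```

Because devices see only the interface, a customer could swap one vendor's key manager for another's without touching the tape library or disk array — which is the choice and interoperability the standard is after.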
So is this standard going to be a reality any time soon? Matt Ball, chair of the IEEE1619.3 committee, is optimistic. Standards from this committee have already been pushed through, such as IEEE 1619 and IEEE 1619.1. And they have obtained broad vendor support.
IEEE 1619 addresses encryption of data on block-oriented storage devices, i.e., disk drives.
"The only negative feedback I've heard about IEEE 1619 is by a large hard disk manufacturer that does not believe the encryption mode is suitable for hard disks," said Ball. "It appears that they are alone in this belief — several hard disk encryption utilities already support the XTS encryption mode: TrueCrypt, FreeOTFE, and dm-crypt."
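The XTS mode those utilities support encrypts each 16-byte block under a per-sector tweak that is multiplied by x in GF(2^128) from block to block, so identical plaintext blocks encrypt differently by position and by sector. The sketch below shows that mode structure with a toy Feistel cipher standing in for AES; real XTS-AES, per IEEE 1619, uses AES with two keys:

```python
import hashlib

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def _toy_encrypt(key, block):
    # 4-round Feistel with a SHA-256 round function: an invertible toy
    # 128-bit block cipher so the XTS mode structure can be exercised
    L, R = block[:8], block[8:]
    for i in range(4):
        f = hashlib.sha256(key + bytes([i]) + R).digest()[:8]
        L, R = R, _xor(L, f)
    return L + R

def _toy_decrypt(key, block):
    L, R = block[:8], block[8:]
    for i in reversed(range(4)):
        f = hashlib.sha256(key + bytes([i]) + L).digest()[:8]
        L, R = _xor(R, f), L
    return L + R

def _mul_alpha(t):
    # multiply the 128-bit tweak by x in GF(2^128), little-endian,
    # modulo x^128 + x^7 + x^2 + x + 1 (as in IEEE 1619)
    n = int.from_bytes(t, "little")
    carry = n >> 127
    n = (n << 1) & ((1 << 128) - 1)
    if carry:
        n ^= 0x87
    return n.to_bytes(16, "little")

def xts_encrypt_sector(k1, k2, sector_num, plaintext):
    assert len(plaintext) % 16 == 0
    t = _toy_encrypt(k2, sector_num.to_bytes(16, "little"))
    out = b""
    for j in range(0, len(plaintext), 16):
        out += _xor(_toy_encrypt(k1, _xor(plaintext[j:j + 16], t)), t)
        t = _mul_alpha(t)
    return out

def xts_decrypt_sector(k1, k2, sector_num, ciphertext):
    t = _toy_encrypt(k2, sector_num.to_bytes(16, "little"))
    out = b""
    for j in range(0, len(ciphertext), 16):
        out += _xor(_toy_decrypt(k1, _xor(ciphertext[j:j + 16], t)), t)
        t = _mul_alpha(t)
    return out
```

The second key encrypts the sector number into the initial tweak; the first key does the per-block work. Changing only the sector number changes every ciphertext block, which is what makes the mode suitable for random-access disk storage.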
IEEE 1619.1 deals with encryption of large tape drives. Major tape drive vendors such as IBM, HP, Sun and Quantum all offer encrypting tape drives that support IEEE 1619.1.
"The approval of IEEE 1619 and 1619.1 is a major milestone in storage security because storage vendors now have a proven recipe that they can follow to provide strong data protection," said Ball. "I expect that we'll start to see customers demand adherence to standards instead of the all-too-common practice of 'rolling-your-own' cryptography."
He makes the point that if a vendor is unwilling to divulge the specifics of the encryption algorithm, it's probably not secure. This is what the crypto community calls security through obscurity, and it almost always fails.
With 1619 and 1619.1 signed, sealed and delivered, Ball understandably has confidence that 1619.3 will soon follow suit. The committee began its key management efforts about a year ago and is making good progress. He said the group has strong support from all the major storage companies, such as Cisco, Sun, HP, IBM, Seagate, NetApp, RSA Security (EMC), nCipher and others.
"By this summer, we should expect a framework that companies could start preliminary implementations against," said Ball. "We also plan to have an open source reference implementation to speed adoption across the industry. The project should be finished by the middle of next year."
Meanwhile, the IEEE Security in Storage Working Group (SISWG) is also working on another standard under the 1619 banner. 1619.2 is aimed at wide-block encryption, and Jim Hughes of Sun is the chair for that committee.
The group is currently standardizing two wide-block encryption modes, EME and XCB (EME is used in PGP's full disk encryption utility). The effort should be mostly finished by the summer, with the long balloting and publishing process to follow.
Race Against Time
These IEEE committees, however, will have to ensure that no last-minute disputes foil their intended timelines. Oltsik believes that timing is everything in the standards arena. If a standard is needed and isn't immediately forthcoming, that vacuum tends to be filled by vendor schemes that generally are proprietary in nature.
"If standards are created and approved soon, 1619.3 should gain wide support," said Oltsik. "If it languishes, vendors may take things into their own hands and figure out how to integrate with others."
Windows Vista Trusted Platform Module Services Step by Step Guide
dated 2008 and may have been posted....
http://technet.microsoft.com/en-us/windowsvista/aa905092.aspx
Nvidia and TPMs!
NVIDIA is ready to deliver on last week's promise
http://www.dailytech.com/NVIDIA+Prepares+Singlechip+Mobile+Platform+for+Intel+Processors/article11581.htm
When we think of notebook computers which use Intel Pentium Dual Core, Core 2 Solo, or Core 2 Duo processors, the systems are usually paired with an Intel Northbridge and Southbridge. Intel hopes to carry on this tradition with its Centrino 2 platform which is slated to launch during the summer.
NVIDIA, however, has plans of its own when it comes to mobile platforms for Intel-based notebooks. NVIDIA CEO Jen-Hsun Huang recently called Intel out on its integrated desktop/mobile graphics performance. Huang even went so far as to say that NVIDIA would "open a can of whoop ass" when it comes to its upcoming products.
It appears that NVIDIA's first few cans of "whoop ass" are just now leaving the factory and they are labeled MCP79. MCP79 is a new single-chip integrated chipset for "small form factor" notebooks according to NVIDIA. Each variant of MCP79 incorporates a DX10-capable GeForce graphics controller supporting Shader Model 4.0, NVIDIA's VP3 video processor, and support for Hybrid Power/Hybrid SLI/Hybrid Performance.
In addition, MCP79 features a single-channel TMDS interface for HDMI 1.2; support for a 1066 MHz FSB, 800 MHz DDR2 and 1333 MHz DDR3 memory; 3 Gbps SATA/eSATA; NVIDIA DriveCache (similar to Intel Turbo Memory); an NVIDIA GbE controller; High Definition Audio; TPM 1.2; and up to 20 PCIe 2.0 lanes.
NVIDIA has plans for six members of the MCP79 family. On the low-end, the MCP79ML lacks such features as RAID 0/1, PCIe x16, DisplayCache, and DriveCache. The MCP79GLM will feature a Quadro-based graphics controller instead of GeForce and the MCP79-SLI will support NVIDIA SLI as its name implies. Other members of the family include the MCP79MH, MCP79MX, and MCP79MVL.
NVIDIA's MCP79 family will be going toe-to-toe with Intel's Centrino 2 platform (Montevina) in June. Montevina Northbridges will include the integrated GM45/GM47 and the discrete PM45. The integrated Northbridges will incorporate Intel's new X4500 HD graphics processor (DX10, HDMI, DisplayPort), which NVIDIA doesn't think too highly of at this point in time. Like NVIDIA's MCP79, the Centrino 2 chipset will support DDR2/DDR3 memory and FSBs up to 1066MHz.
Since the Centrino platform won't be a single-chip solution like NVIDIA's offering, the GM45/GM47 and PM45 will be paired with either the ICH9M or ICH9-M Enhanced Southbridge.
It's shaping up to be an interesting summer with NVIDIA and Intel battling it out in the notebook platform sector. If flash manufacturers could expedite their downward trend for solid-state drives, the outlook would be even better for consumers.