vPro push
http://www.scratchitandsee.com/
I had to hit the refresh button a couple times to get the video.
Pentagon approves creation of cyber command
http://www.alertnet.org/thenews/newsdesk/N2388687.htm
23 Jun 2009 22:31:21 GMT
WASHINGTON, June 23 (Reuters) - The Pentagon will create a Cyber Command to oversee the U.S. military's efforts to protect its computer networks and operate in cyberspace, under an order signed by Defense Secretary Robert Gates on Tuesday.
The new headquarters, likely to be based at Fort Meade, Maryland, outside Washington, D.C., will be responsible for defending U.S. military systems but not other U.S. government or private networks, Pentagon spokesman Bryan Whitman said.
Asked if the command would be capable of offensive operations as well as protecting the Department of Defense, Whitman declined to answer directly.
"This command is going to focus on the protection and operation of DoD's networks," he said. "This command is going to do what is necessary to be able to do that."
U.S. officials have voiced growing concern in recent years about being vulnerable to attacks on the country's civilian or military networks as technology takes on an ever-increasing role, including in military operations.
President Barack Obama said last month he would name a White House-level czar to coordinate government efforts to fight cybercrime.
The United States has said many attempts to penetrate its networks appear to come from China but it has stopped short of accusing Chinese authorities of being responsible.
Whitman said the new command will consolidate existing Pentagon efforts to protect its networks and operate in cyberspace.
Those efforts currently come under the auspices of U.S. Strategic Command in Nebraska, which will also oversee the new headquarters.
The U.S. Department of Defense runs some 15,000 electronic networks and some 7 million computers and other information technology devices, Whitman said.
"Our defense networks are constantly probed. There are millions of scans every day," he said.
"The power to disrupt and destroy, once the sole province of nations, now also rests with small groups and individuals, from terrorist groups to organized crime to industrial spies to hacker activists, to teenage hackers," he said.
"We also know that foreign governments are trying to develop offensive cyber capabilities," he added, saying more than 100 foreign intelligence services were trying to hack into U.S. networks.
The new command should begin initial operations by this October and be fully up and running a year later.
The head of the Cyber Command would also be the director of the U.S. National Security Agency, which conducts electronic surveillance and communications interception and is also based at Fort Meade. (Editing by Eric Walsh)
Trusted Computing Group to Demonstrate Pervasive Security at Gartner Group Information Security Summit
Gartner Group Information Security Summit
Booth 16
--(BUSINESS WIRE)--Trusted Computing Group:
WHO:
Trusted Computing Group and member companies Great Bay Software, Lumeta Corporation and Wave Systems are exhibiting at the Gartner Group Information Security Summit.
WHAT:
The companies will show “Pervasive Security” solutions for the employee cubicle, conference room and data center based on TCG’s Trusted Network Connect architecture and the Trusted Platform Module. These examples will demonstrate how network security and TCG’s Trusted Network Connect specifications, including the Metadata Access Protocol, are used to connect network and physical security and protect the network, data and systems.
WHEN:
The Gartner Group Information Security Summit will be held Monday, June 29 – Wednesday, July 1.
WHERE:
The Summit will be located at the Gaylord National Resort, Washington, D.C. TCG will host demonstrations in Booth 16 in the exhibit area.
WHY:
Every day brings new reports of lost or stolen data, breached networks and new viruses, spam attacks and malware. Enterprise users can learn how to protect their data, systems and networks using interoperable, available products based on widely accepted industry standards from Trusted Computing Group.
MORE INFO:
Gartner Group: www.gartner.com/us/itsecurity
Trusted Computing Group (TCG): www.trustedcomputinggroup.org
TCG on Facebook: http://www.facebook.com/pages/Trusted-Computing-Group/99778480008?ref=s
Follow TCG on Twitter: https://twitter.com/TrustedComputin
dig,
It was a mandatory conversion from preferred to common, which results in no cash to Wave. It changes the nature of the equity held by the PP participant. Wave will receive money when the warrants are exercised.
PC Desktop Market Will Continue in the Long Term, Intel Says
The company plans to combine Clarkdale with its vPro technology for security and manageability for the corporate market.
http://www.pcworld.com/article/166507/pc_desktop_market_will_continue_in_the_long_term_intel_says.html
FHM,
Despite all your pontificating, you are generally incorrect on most of your points.
Investors do NOT have to create a separate account in which to hold fully paid for securities; they keep their account number and change only the account "type". 1234-5678 type 1 is an example of a cash account, 1234-5678-2 is an example of a margin account, 1234-5678-4 is a "when-issued" account and 1234-5678-6 is a "short" account. All have the same number but the types are differentiated by the appendage at the end.
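To make the suffix scheme concrete, here is a toy Python lookup. The mapping simply restates the example codes above; the helper function and account number are made up for illustration, and real firms' conventions vary.

    # Hypothetical illustration of the account-type suffix scheme above.
    ACCOUNT_TYPES = {
        "1": "cash",
        "2": "margin",
        "4": "when-issued",
        "6": "short",
    }

    def account_type(full_number: str) -> str:
        """Return the account type for a number like '1234-5678-2'."""
        _, _, suffix = full_number.rpartition("-")
        return ACCOUNT_TYPES.get(suffix, "unknown")

    print(account_type("1234-5678-2"))  # margin
    print(account_type("1234-5678-6"))  # short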
Accounts are opened as cash. The cash account agreement is what's initially signed. It takes a specific signed margin account agreement to create the margin account. You cannot have a margin account without first creating the cash account. And, yes, you can call your broker and request via phone that securities held in the margin account be journalled to the cash account. The only restriction would be if the movement of those shares, and its subsequent reduction in "funds available," triggers a margin call.
Last, when you deal with larger retail firms, clients aren't compensated for the lending of shares. They don't even realize their securities have been lent. If there's any compensation, it's paid to the lending broker-dealer by the borrowing broker-dealer. Securities lending firms, asset managers with large positions, and sometimes large pension funds will also lend stock in their portfolios to generate interest income. In 35 years on Wall Street, I've never heard of this practice with small investors at the retail level.
btw, I see your Home Solutions of America hit a new low today.
Performance Testing And Integration At Interop
http://www.informationweek.com/blog/main/archives/2009/05/_performance_te.html;jsessionid=QLIHCBTOF1DL2QSNDLRSKH0CJUNN2JVN
If networking is cool at Interop, then testing, the red-headed stepchild of networking, is going to make itself known. Factors like data center consolidation and virtualization are raising the demands made on the network: more resilience, lower latency and higher-speed capacity.
That means bigger, faster switches, denser port counts, and reducing switch hops. That also means more complexity and fragility that needs to be qualified before going into service.
After what seemed like a testing drought, established vendors like Spirent and Ixia are launching new products and testing programs to qualify network designs and determine performance issues early on.
The increasing complexity of today’s networks, from changing topologies to the protocols running over them, is driving pre-qualification testing. Spirent is partnering with vendors and VARs to provide testing and consulting services, for individual components as well as end to end. The goal is to ensure that the deployed network works as designed and that any bottlenecks are discovered and addressed early.
Ixia hasn’t been idle, either. It is working with the Test Lab Automation Alliance, a consortium of vendors including Ixia, MuDynamics, and Gale Technologies that is developing specifications to standardize the integration and orchestration of test tools and infrastructure into a manageable whole. Whether through a service such as Spirent’s or integration standards from the Test Lab Automation Alliance, performance testing should get easier for working IT administrators.
On the integration side, almost a year after the Trusted Computing Group Trusted Network Connect working group announced IF-MAP, a specification for publishing and subscribing to device status updates, the group was demonstrating at its booth several vendors sharing and using data published to an IF-MAP server.
IF-MAP is a TCG specification for receiving and publishing device status that is updated as a device changes over time. Besides NAC, IF-MAP has other uses. For example, Lumeta maps networks and can discover previously unknown leaks out of the network via modems or VPNs. Discovery of such a leak may be cause to raise an alarm or quarantine the host.
Lumeta can publish that status via IF-MAP and, as shown in the demo, a Juniper UAC appliance (Juniper's NAC offering) can take action. Hirsch Electronics, a maker of physical access control systems, demonstrated integration by showing that a user who hadn't swiped their badge on entering a building was denied access to the network, on the assumption that the user never entered the building. (Whether that was true, or whether such access decisions are even desirable, is an altogether different question. The integration is interesting.)
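To make the publish/subscribe model concrete, here is a rough Python sketch of a client pushing device status to a MAP server. This is an illustration under stated assumptions: the endpoint URL and credentials are placeholders, and the XML element names are simplified rather than schema-exact IF-MAP (a real client would first open a session via the TCG-defined SOAP bindings).

    # Illustrative sketch only: publishing an "event" about an IP address
    # to a hypothetical IF-MAP server over HTTPS.
    import requests

    IFMAP_URL = "https://map-server.example.com/ifmap"  # placeholder endpoint

    publish_xml = """<?xml version="1.0" encoding="UTF-8"?>
    <envelope>
      <publish session-id="SESSION-FROM-newSession">
        <update>
          <ip-address value="192.0.2.17" type="IPv4"/>
          <metadata>
            <event name="network-leak" discovered-by="network-scanner"/>
          </metadata>
        </update>
      </publish>
    </envelope>"""

    resp = requests.post(
        IFMAP_URL,
        data=publish_xml,
        headers={"Content-Type": "application/soap+xml"},
        auth=("map-client", "secret"),  # basic auth; client certs are also common
        verify=True,
    )
    resp.raise_for_status()
    print("published, HTTP", resp.status_code)

A subscriber such as a NAC appliance would then poll the server for metadata attached to the identifiers it cares about and act on what it finds.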
Finally, Boeing, Byres Security (whose Tofino appliances protect SCADA networks) and Trapeze Networks showed how location discovered via wireless networking can be used to grant or deny access to the network, and how legacy devices can be protected and reported on via the Byres Security Tofino Industrial Security Solution.
Tomorrow is a day mostly focused on switching, the other big topic at Interop.
Cybersecurity Groups Launch 'Chain of Trust' Initiative to Combat Malware
DJ Press Release Wire
11:07 AM Eastern Daylight Time May 19, 2009
WASHINGTON, May 19 /PRNewswire-USNewswire/ -- Three of the world's leading cybersecurity groups today launched a new initiative to combat malicious software (malware) by establishing a "Chain of Trust" among all organizations and individuals that play a role in securing the Internet.
Developed by the Anti-Spyware Coalition (ASC), National Cyber Security Alliance (NCSA) and StopBadware.org, the Chain of Trust Initiative will link together security vendors, researchers, government agencies, Internet companies, network providers, advocacy and education groups in a systemic effort to stem the rising tide of malware.
Applying many of the same approaches used to bring nuisance adware under control, the Chain of Trust Initiative aims to establish a united front against a threat that continues to grow exponentially. Kaspersky Labs recently reported that malware distributed through social networking sites is successful 10 times more often than scams distributed via email.
"Strong security in any one organization or sector is not enough to combat an agile, fast evolving threat like malware, which exploits security breakdowns between entities," said Ari Schwartz, ASC Coordinator and Vice President of the Center for Democracy & Technology (CDT). "We all need to work together to build a system that can withstand and repel the next generation of exploits."
The first order of business in the Chain of Trust Initiative is to map the complex, interdependent network of organizations and individuals that make up the chain. Only by identifying all the vulnerable links and understanding how they connect to one another can malware fighters get a handle on the problem and begin to develop consensus solutions.
"Online safety and security is a shared responsibility that requires the involvement of governments, corporations, non-profit institutions and citizens," said Michael Kaiser, Executive Director of the NCSA. "The Chain of Trust Initiative will focus furthering the development of tools that provide better protections. However, we must also continue to ensure that all of us implement universal behaviors online that protect us against a multitude of threats."
ASC, NCSA and StopBadware.org will lead the mapping effort and jointly develop ideas and initiatives to form stronger bonds between links on the chain. Leaders of the initiative have already begun reaching out to key players and identifying critical areas for collaboration. In the next six months, the Chain of Trust Initiative will produce a paper tracking the results of the mapping project and propose initial recommendations to strengthen the chain.
"Organization and collaboration are our best tools against an enemy that doesn't play by any rules," said StopBadware.org manager Maxim Weinstein. "Just by nature of how the Internet works, malware distributors have a technological advantage, but we can respond by strengthening our shared networks and by better understanding our shared responsibilities."
About the Anti-Spyware Coalition: The Anti-Spyware Coalition (ASC) is a group dedicated to building a consensus about definitions and best practices in the debate surrounding spyware and other potentially unwanted technologies. Composed of anti-spyware software companies, academics, and consumer groups, the ASC seeks to bring together a diverse array of perspectives on the problem of controlling spyware and other potentially unwanted technologies.
About National Cyber Security Alliance: The National Cyber Security Alliance is a non-profit organization. Through collaboration with the government, corporate, non-profit and academic sectors, the mission of the NCSA is to empower a digital citizenry to use the Internet securely and safely, protecting themselves and the cyber infrastructure. NCSA works to create a culture of cyber security and safety through education and awareness activities. Visit www.staysafeonline.org for more information.
About StopBadware.org: StopBadware.org is a partnership among the academic community, consumer groups, technology industry leaders, and volunteers committed to protecting Internet users from threats to their privacy and security caused by bad software. StopBadware.org is led by Harvard University's Berkman Center for Internet & Society. The initiative is supported by Google, PayPal, Mozilla, AOL, and Trend Micro. For more information, please visit http://www.stopbadware.org.
SOURCE Anti-Spyware Coalition; StopBadware.org; National Cyber Security Alliance
/CONTACT: Brock Meeks, +1-202-637-9800 x114, or David McGuire, +1-202-423-7432, both for StopBadware.org, National Cyber Security Alliance, Anti-Spyware Coalition
/Web site: http://stopbadware.org/
/Web site: http://www.staysafeonline.org/
Joel Snyder discusses NAC Control Day at Interop Las Vegas
(Who is Joel Snyder?)
http://searchnetworking.techtarget.com/news/article/0,289142,sid7_gci1356708,00.html#
Monday is NAC Day at Interop. This day-long mini-conference at Interop Las Vegas dives deep into all things network access control. The event is an opportunity for networking professionals to get a solid technical understanding of this maturing network security technology. SearchNetworking.com spoke with NAC Day chairman Joel Snyder and got a preview of what attendees can expect to get out of the event. Snyder is a senior partner at the Tucson, Ariz.-based consulting firm Opus One.
For those who have never before been to NAC Day at Interop Las Vegas, could you describe generally what it is like? What do attendees typically learn about network security from the event?
NAC Day is all about NAC technology and the issues we have in deploying NAC in modern networks. I don't like "high level" talks; I'm a real technologist, so we spend most of our time digging deep into the technology. The day is divided into a number of large sections, but the key drivers for the content are:
1. What is NAC, really, deep down, at the technology and networking level?
2. What is it like to deploy NAC, and how do I have successful deployments?
I don't want to reproduce what people can get off the Internet and out of other people's white papers, so I like to dive into the various issues that I have seen both in deploying NAC and in talking to people who have chosen not to deploy NAC for one reason or another.
Also, after lunch, we have a panel of vendor tech people who will help give their experiences from the field.
Describe how the agenda for this year's NAC Day differs from last year's event.
The big changes are in the area of standards-based technology. Generally, the major vendors have started to converge toward a common idea of how to do NAC and why to do NAC, and this has colored all of the material. Someone who has been to NAC Day before probably will not get a whole day's worth of new material out of this, but someone who has never been should find it great.
What are some of the major misunderstandings about network access control?
People haven't come to a common definition of NAC, which means that there are many different ways to apply this technology. What people don't understand well is how to deal with the dark corners of NAC deployments -- how to handle things like VoIP phones and old switches and printers and so on. The goal of NAC Day is to try and get people to a common definition, and when we have a good base to work from, we can dive into all of these hard topics -- like where to put NAC -- and come up with a nice solution set.
How are enterprises using network access control these days, and how has it changed over the years?
Well, this is a very new technology, so it's hard to really come up with an answer to this question that doesn't sound silly. But, in general, early NAC adopters were responding to some particular pain point, such as guest access or a need to support a particular audit requirement.
Now that these first requirements have been met, we are seeing a different set of enterprises look to NAC based on a broader set of requirements. They are not diving into a single solution to their problem but doing NAC as part of a broader assessment of security in their networks.
What's your overall assessment of the NAC market today? Are vendors going in wildly different directions with this network security technology, or are they all generally running in the same direction?
Well, a combination of both. Vendors are consolidating, as is always the case in a new market with a lot of froth. And while there are a lot of vendors that are going down the TCG-like [nonprofit standards body Trusted Computing Group] NAC path, there are still some that have very different ideas on how to do NAC. This can range from small vendors like Napera, which is taking a hardware approach, to folks like Forescout, which is going with a very detection-oriented way to handle NAC. The nice thing about this diversity of deployment is that if you don't like a TCG-style NAC approach, there is probably an alternative view of NAC that will help you out.
What are some of the most common mistakes you see in a network access control implementation, and how can they be avoided?
The biggest mistake I see is people going for NAC because it's a buzzword and not because they have a defined requirement. Because this is a technology and not a simple point solution product, there are lots of reasons why you might want to put NAC into your network. But you have to explicitly give that reason, not just, "Oh, well, we saw a presentation and thought it would be a good idea." If you can't state your requirement for NAC in a few simple declarative sentences, then this technology is really not for you.
Could Solid State Spell the End for Hard Drives?
May 14, 2009
By Paul Rubens
Mechanical hard drives with spinning disks are doomed to extinction, thanks to solid state flash drives that are becoming cheaper and offering greater capacity by the month. At least that's how some in the data storage industry see it.
Outwardly there's a convincing logic to this argument, especially when you consider what's happened in other markets where devices with moving parts faced competition from solid state electronics. Televisions, telephony and radio equipment, clocks, automobile ignition ... the list is endless, and in every case it's ended up with the same result: solid state electronic devices have won because they are cheaper to make, more reliable, and offer similar or (usually) superior performance.
So when it comes to storage planning, it's sensible to at least consider when flash-based solid state drives (SSDs) might take over from conventional hard disk drives (HDDs). Right now, SSDs are significantly more expensive per gigabyte than HDDs, and while they offer very fast read speeds, they suffer from slower write speeds, and from the limited number of times flash cells can be written to before they wear out.
But flash memory prices are falling rapidly, perhaps by 50 percent to 60 percent a year, and SSD technology is also improving, so write speeds are likely to increase and memory wear-out is likely to become less of a problem. For example, companies such as California-based SandForce promise technology innovations that will ensure flash cells effectively last 80 times longer than is common now, with write speeds far closer to levels achievable for reads.
As prices drop and the capacity and performance of SSDs improves, it's likely that first a few, and then an increasing number of HDDs of different types will be replaced by their solid state siblings. But the complete extinction of HDDs is unlikely for many years, if ever, for reasons we'll get to in a moment.
Fibre Channel Could Go First
So what types of HDDs are likely to be replaced first? David Vellante, a former IDC analyst and founder of the Wikibon project, believes that the first to go will be high-performance Fibre Channel (FC) drives, which are usually bought for their high performance and low access times. He argues that since flash memory prices are falling much faster than HDD prices, the price differential between SSDs and FC HDDs, currently about 15 times per gigabyte, will drop to a multiple of just three in less than three years, and possibly considerably less. At this price, SSDs with their faster read speeds will make the competing FC HDDs obsolete, he believes.
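As a back-of-envelope check on that projection, the compounding works out as sketched below. The article gives only the flash decline rate and the 15x premium; the HDD decline rate here is an assumption.

    # If flash $/GB falls ~55%/yr (the article's 50-60%) and HDD $/GB falls
    # ~25%/yr (an assumed figure, not from the article), the premium shrinks:
    ratio = 15.0
    for year in (1, 2, 3):
        ratio *= (1 - 0.55) / (1 - 0.25)
        print(f"year {year}: SSD premium ~{ratio:.1f}x")
    # Year 3 lands near 3x, consistent with Vellante's three-year figure.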
Mark Peters, an analyst at Enterprise Strategy Group, agrees with at least some of this assessment. "I'd say that Fibre Channel drives are first in the firing line," said Peters. "In general, SSDs will be more attractive than FC drives if they are not too much more expensive."
But Peters believes that SSDs will have an impact in the FC drive market much sooner than Vellante anticipates. "I think SSD sales will take off next year," he said. That's because some IT departments will be willing to pay a significant premium over FC disk prices for SSDs that offer higher performance. This should not be too much of a surprise: users have always paid more for disk storage than for tape, and for FC drives rather than lower-performance drives. For applications that require the highest possible I/O performance, why shouldn't they pay more for SSDs?
But Peters warns against looking at storage media such as FC drives and SSDs solely on a price per gigabyte basis. "If you have a 500GB FC disk and you are only using 200GB, then what is the price per gigabyte? Your effective price per gigabyte is more than twice as high," he said.
In any case, price per gigabyte is often not the relevant metric to be looking at when considering switching to SSDs. "Companies should also be looking at price per I/O, or price per millisecond of access time, or cost per unit of power a drive consumes, depending on their circumstances," Peters said. And that means that you end up with something like hard drives for capacity, and SSDs for I/O.
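A toy comparison shows why the metric matters. All four numbers below are illustrative assumptions, not figures from the article:

    # Hypothetical list prices and 4 KB random-read rates.
    drives = {
        "15K FC HDD": (600.0, 300, 180),      # (price $, capacity GB, IOPS)
        "SSD":        (1500.0, 146, 20000),
    }
    for name, (price, gb, iops) in drives.items():
        print(f"{name}: ${price / gb:.2f}/GB, ${price / iops:.4f}/IOPS")
    # The HDD wins on $/GB; the SSD wins on $/IOPS by orders of magnitude.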
Of course, it won't all be plain sailing for SSDs. There are two sides to I/O: reading and writing. While SSDs have a clear advantage when it comes to read speeds, what about the write side of things? And let's not forget about the limited life of SSD memory cells.
"I think these problems are overblown," said Peters. "We'll overcome poor write speeds with techniques like caching using DRAM, and will be able to get around wear problems with techniques like wear leveling and over provisioning." In any case, conventional hard drives often fail in a matter of months, and those that don't are typically replaced every two or three years anyway.
SSD's Obstacles: Tiering, Inertia
There are other problems to consider if you are adopting a "hard drives for capacity, SSDs for I/O" strategy. How, for example, do you make sure the right data is on the right medium? To get the full performance benefits of SSDs and the cheap storage benefits of spinning disks, you may end up needing a whole new software layer to help move data around in a tiering approach, Peters warns. But there are companies like Compellent (NYSE: CML) and Sun (NASDAQ: JAVA) working on the problem.
Another thing that may slow the advance of SSDs is the fact that there is an enormous installed base of FC drives around the world. The "stickiness" of any given technology shouldn't be underestimated when there is lots of it about. Look at tape storage — it is still a multi-billion dollar business, and holding its own too. IT departments are rightly cautious when it comes to making changes and abandoning investments, so SSDs may be adopted far slower than the economic case dictates.
One thing is pretty much certain, however: delays won't be caused by vendors dragging their feet.
"All the major vendors are looking at SSD technology — it's not just a couple of them that are interested in pushing it," said Peters.
Indeed, business from the likes of EMC (NYSE: EMC), Sun, IBM (NYSE: IBM) and HP (NYSE: HPQ) has made STEC (NASDAQ: STEC) the early winner in the SSD sweepstakes. Its shares were up 31 percent in a single day earlier this week after the company's growth rates exceeded Wall Street estimates once again, and in the worst recession since the Eisenhower Administration at that.
Going forward, STEC will face greater competition from newer entrants like SandForce, Intel (NASDAQ: INTC), where the technology has attracted the attention of co-founder Gordon Moore, and Fusion-IO — the startup that managed to lure Apple (NASDAQ: AAPL) co-founder Steve Wozniak out of retirement.
With all that excitement, perhaps analysts' claims are not that far-fetched. While conventional hard disk drives may not be obsolete in the foreseeable future, it appears certain that many of them will have been replaced by SSDs by the time 2012 rolls around.
SSD Moving into the Mainstream as PCs Go 100% Solid State
http://www.wwpi.com/top-stories/7077-ssd-moving-into-the-mainstream-as-pcs-go-100-solid-state
Wednesday, 13 May 2009 09:52 Brian Beard, Samsung
Most solid state drives sold in the last few years have been hard disk drive (HDD) replacements. While the majority of these 1.8”, 2.5”, and 3.5” storage drives were primarily aimed at the notebook market, they have also begun penetrating the server, PC, and many rugged or industrial markets. This year marks a critical threshold for the SSD growth curve, not only in traditional form factors, but in new drive shapes and sizes, as the SSD begins its move into the mainstream.
Due to major overcapacity in the commodity memory markets in recent years, the NAND flash market has had to weather some difficult times of late, with price declines steeper than historical yearly market data would have led us to believe. But the consumer market has reaped the advantages, enjoying higher-quality SSDs at improved price levels as SSDs move closer to mainstream adoption with higher performance ratings than many expected they would ever have. The cost differential between SSDs and HDDs has dwindled much more than anticipated. In notebooks, for example, an SSD of a given capacity now costs dramatically less relative to a comparable high-performance hard drive than it did just two years ago.
Once the other advantages of SSDs over HDDs are factored in, like greater durability, more reliability, and higher performance, we can see how notebook SSDs are encroaching on the ever elusive ‘tipping point’, ready to embrace mainstream acceptance.
Netbooks
Another catalyst for this has been the tremendously fast growth in popularity of the netbook. Clearly, netbooks have helped mainstream adoption of SSDs.
Exploding into the marketplace, the ubiquitous netbook has been embracing SSDs in large part due to their cost advantage over low-capacity 1.8-inch HDDs. At $35 to $50, ten gigabytes of NAND flash (plus other SSD components) easily meets the PC OEM's desire for a sturdier and more cost-effective storage solution than a typical HDD.
Similarly, SSDs have also made significant strides in the enterprise server storage arenas where total cost of ownership (TCO) matters a great deal. Consider that SSDs offer an almost instantaneous ROI when you factor in their significant performance advantages, typically 10X the fastest HDD, at 1/10th the power. The TCO for SSDs is most compelling for high performance, read-intensive applications like web serving and video on demand.
SSDs 2009: “Inflection in the Solid State Ecosystem”
At the core of SSD technology is NAND flash memory. While the current adoption of SSD is in large part due to the rapid decline in NAND prices, NAND technology continues to improve as suppliers push to smaller geometries and higher densities, keeping manufacturing costs on a downward slope.
MLC or multi-level cell flash, the memory component typically used in notebook SSDs, is continuously being improved, allowing for larger SSD capacities each year as more gigabytes of capacity can be packed into each chip, and successively smaller chips packed into each drive.
For any given market segment, NAND flash technology is very similar regardless of which major manufacturer offers it. The real performance differentiator in one SSD brand versus another is the controller and the firmware. In fact, controller technology has progressed in leaps and bounds in the last few years, with most major SSDs moving from 4-channel to 8-channel controllers late last year, while controllers with more channels are right around the corner. Having more channels allows SSDs to perform faster and faster, moving them rapidly toward the SATA 3.0 gigabit-per-second interface's actual performance limit of approximately 250 megabytes per second.
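The 250 MB/s figure follows from the link arithmetic; the overhead factor below is an assumption, and exact usable throughput varies by controller:

    # SATA 3.0 Gb/s uses 8b/10b line coding: 10 bits on the wire per byte.
    raw_mb_s = 3.0 * 1000 / 10          # 300 MB/s before protocol overhead
    usable_mb_s = raw_mb_s * 0.85       # assume ~15% framing/protocol overhead
    print(f"raw {raw_mb_s:.0f} MB/s, usable ~{usable_mb_s:.0f} MB/s")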
Controller Technology
Advances in controller technology have improved all SSD performance factors. This has allowed mainstream notebook and industrial PC applications, as well as read-intensive server applications, to move to MLC NAND, while write-intensive server applications mostly use single-level cell (SLC) NAND, a more durable component with a longer potential lifespan. For example, Samsung's latest controller for its MLC drives reads sequentially at 220 megabytes per second (MB/s) and writes sequentially at 200 MB/s. Samsung's latest enterprise (SLC) SSD offers similar outstanding sequential performance, and its advanced controller allows 25K random read and 6K random write IOPS (input/output operations per second).
Firmware is today’s other critical SSD differentiator. SSDs with the most robust firmware are tested rigorously by Samsung and a few other manufacturers, in conjunction with major PC and server OEMs, for extensive debugging during validation. Over the long haul, the SSD suppliers who can best overcome performance hurdles in the controller and the firmware will capture the greatest market share.
Newest Trends
This year, SSDs are well into the process of moving from a niche market position to a storage solution that's rapidly growing in popularity. As notebooks and netbooks adopt SSDs, they are actually moving to a 100 percent solid state composition: "cloud computing" is becoming increasingly popular, and more and more notebooks/netbooks forgo the space-consuming and nearly obsolete optical drive. The trend is definitely toward "slim-light" portable PCs offering SSDs that not only help with space constraints and weight concerns, but whose lower power consumption and virtually heat-free operation extend battery life and reduce cooling requirements.
In 2009, a substantial number of notebooks will be made with SSDs as their only storage option (hard disks will not be a choice). This trend, which we refer to as SSD-Only or "SSD-O," began last year. The Lenovo X300 (now X301) was the first mainstream SSD-O business notebook, launching in early 2008, followed by a suite of netbooks from many OEMs. Dell, for one, joined the mainstream SSD-O fray with the Latitude E4200, followed by a full-size consumer SSD-O model called Adamo.
The SSD-O design choice enables the OEM to literally design a notebook around the SSD so that everything – CPU, battery, cooling system and other components – is optimized to provide a greater total overall system value with an SSD. As OEMs take advantage of form-factor-agnostic NAND flash memory, more and more creative SSD solutions will arise.
For now, in addition to performance gains, capacities are growing rapidly, with the "sweet spot" for business SSDs moving from 64GB to 128GB and consumer SSDs moving from 128GB to 256GB. More benefits are being added too. SSDs are now becoming self-encrypting devices, offering full hardware encryption to ensure data is secure at all times.
Enterprise SSD: ‘Non-traditional IT’ leads the charge
SSD penetration into the traditional enterprise IT market is moving slowly. This is due to the risk-averse nature of server/PC OEMs, and because HDD manufacturers generally want to retain their installed base of underperforming, energy-hungry HDDs to maximize existing revenue streams.
Change is coming, however, with non-traditional IT-focused companies (predominantly in Web 2.0 applications) leading the SSD charge. The new applications most ripe for SSD conversion, like web serving and video on demand, are 'performance-optimized' applications, an ideal fit for SSDs. New products are being announced almost monthly, the latest being new storage devices from the likes of EMC, Sun (Oracle), Fusion-io, and Dell's EqualLogic division.
The SSD Future
Looking beyond 2009, SSDs and NAND flash memory will proliferate into many new market areas, including applications for railroad cars, airplane seatback entertainment, mobile internet devices, video cameras, and many more segments. Most of these applications will use standard SSD form factors, though many will not; new interfaces and form factors such as PCI Express (PCIe), mini PCIe, eSATA, USB 3.0, and DIMMs are accommodating custom designs.
Looking out five years, solid state storage will be a common household term. While most consumers will never know the history of SSDs, many of us will look back and remember 2009 as a pivotal year, the year in which SSDs went mainstream.
Brian Beard is the manager, SSD Marketing, at Samsung Semiconductor, Inc.
SSD Market To Reach 43 Million Units By 2013
In the near future, flash-based solid-state drives (SSDs) are expected to challenge hard disk drives (HDDs) in the mobile computer market, according to a new report by Research and Markets. The report forecasts global shipments of SSDs to reach 43 million units in 2013. It also predicts that the most cost-effective SSD solutions, currently around 16GB, will approach 256GB by 2013.
According to the report, which examines the rise of the SSD market and the challenges that remain ahead, the decreasing flash memory prices and the shift to mobile computing are pushing SSDs from the data center to the PC. It notes however, that this transition will not happen overnight due to a number of issues, including drive density limitations, cost and performance.
“In recent years, the adoption of SSDs in PCs and netbooks has been limited by speed, capacity and cost constraints,” said Tareq Husseini, SanDisk’s Middle East and Africa Sales Director. “But when one examines the declining cost trends for flash, the consumer’s need for storage, and the premium that users place on the benefits provided by SSDs, it is easy to see that there will be a clear demand for SSDs in the near future.” SanDisk aims to grab half of the SSD market share for the mobile PC sector in 2010.
According to Husseini, SSDs, which are made from NAND flash memory chips, hold several advantages over common HDDs, including being speedier and lighter and using far less power. SSDs also tend to be more rugged than a standard hard drive because the NAND flash memory they use lacks the moving parts found in a hard drive.
How to Buy a Business Laptop
Full story here: http://www.pcmag.com/article2/0,2817,2346880,00.asp
excerpt:
Buyers of small-business laptops have a near-overwhelming range of options in models and features to consider, but we can help you make the best choice for your business.
In this buying guide, we will walk you through essential business features, the parts you'll need, and, more important, how to differentiate a business laptop from a consumer model.
Safeguard Your Company's Data
With the dramatic increase in identity theft and data breaches over the past year, a laptop strong in security features can make a difference. Enterprise laptops like the Lenovo T400 and the Toshiba Tecra R10-S4401 have a Trusted Platform Module (TPM) installed as a way to protect encrypted keys and digital certificates in a hardware chip that resides on the motherboard. Smart-card readers, which authenticate users who insert a pocket-size card with the dimensions of a credit card into a slot, are employed in systems like the HP 6930p. They are available as a contact-less solution as well: The Dell E6400, for instance, can read a smart card that is waved across the palm rest, thanks to a wireless technology known as RFID (radio frequency identification) induction. Built-in fingerprint readers are a standard authentication feature in business these days, and PC makers are making use of the webcam, which can be used in tandem with facial recognition software.
Over 55 Exhibitors Make Big Announcements at Interop Las Vegas 2009
http://www.earthtimes.org/articles/show/over-55-exhibitors-make-big-announcements-at-interop-las-vegas-2009,819634.shtml
Trusted Computing Group will announce three specifications to extend the Trusted Network Connect architecture for network security. The group also is updating existing specifications, including IF-MAP that allows security applications and devices to talk to one another. Multi-vendor demonstrations of the new and existing specifications will be in Booth 869.
Wave Q1 '09 Revenues Rose 137% to $4.0 Million on Increased Software License and Services Activity
DJ Press Release Wire
4:02 PM Eastern Daylight Time May 11, 2009
=========== ===========
Weighted average number of common shares
outstanding during the period 61,868,589 50,898,515
WAVE SYSTEMS CORP. AND SUBSIDIARIES
Consolidated Supplemental Schedule
(Unaudited)
Three Months Ended
March 31,
2009 2008
----------- ------------
Total net revenues $ 4,034,181 $ 1,699,079
Increase (decrease) in deferred revenue (31,549) 66,907
----------- ------------
Total billings (Non-GAAP) $ 4,002,632 $ 1,765,987
=========== ============
Non-GAAP Financial Measures:
As supplemental information, we provide a non-GAAP performance measure that we refer to as total billings. This measure is provided in addition to, but not as a substitute for, GAAP total net revenues. Total billings means the sum of total net revenues determined in accordance with GAAP, plus the increase or minus the decrease in deferred revenue. We consider total billings an important measure of our financial performance, as we believe it best represents the continued increase in demand for our software license upgrades. Total billings is not a measure of financial performance under GAAP and, as calculated by us, may not be consistent with computations of total billings by other companies.
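Using the figures in the supplemental schedule above, the reconciliation works out as follows (a quick check; the numbers are the company's own):

    # Total billings (non-GAAP) = GAAP net revenues + change in deferred revenue.
    net_revenues = 4_034_181      # Q1 2009 GAAP total net revenues
    deferred_change = -31_549     # deferred revenue decreased this quarter
    print(net_revenues + deferred_change)  # 4002632 -> the $4,002,632 above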
WAVE SYSTEMS CORP. AND SUBSIDIARIES
Consolidated Balance Sheets
(Unaudited)
March 31, December 31,
2009 2008
------------ ------------
Assets
Current assets:
Cash and cash equivalents $ 275,316 $ 951,563
Accounts receivable, net of allowance for
doubtful accounts of $16,364 at
March 31, 2009 and December 31, 2008 1,452,734 1,701,829
Prepaid expenses 231,647 227,967
------------ ------------
Total current assets 1,959,697 2,881,359
Property and equipment, net 361,519 408,440
Other assets 129,640 139,975
------------ ------------
Total Assets 2,450,856 3,429,774
============ ============
Liabilities and Stockholders' Equity (Deficit)
Current liabilities:
Accounts payable and accrued expenses 7,492,496 7,655,834
Current portion of capital lease payable 62,006 63,537
Deferred revenue 1,452,495 1,484,044
------------ ------------
Total current liabilities 9,006,997 9,203,415
Long-term portion of capital lease payable 230,338 245,362
------------ ------------
Total liabilities 9,237,335 9,448,777
------------ ------------
Stockholders' Equity (Deficit):
8% Series I Convertible Preferred stock, $.01
par value. 220 shares issued and outstanding
(liquidation preference of $968,000) in 2009
and 2008 2 2
Series J Convertible Preferred stock, $.01 par
value. 91 shares issued and outstanding
(liquidation preference of $364,000) in 2009
and 2008 1 1
8% Series K Convertible Preferred stock, $.01
par value. 456 shares issued and -0-
outstanding (liquidation preference of $-0-)
in 2009 and 456 shares issued and outstanding
(liquidation preference of $1,276,800) in
2008 - 5
Common stock, $.01 par value. Authorized
150,000,000 shares as Class A; 64,222,968
shares issued and outstanding in 2009 and
58,877,968 in 2008 642,230 588,780
Common stock, $.01 par value. Authorized
13,000,000 shares as Class B; 38,232 shares
issued and outstanding in 2009 and 2008 382 382
Capital in excess of par value 338,784,003 338,081,691
Accumulated deficit (346,213,097) (344,689,864)
------------ ------------
Total Stockholders' Equity (Deficit) (6,786,479) (6,019,003)
------------ ------------
Total Liabilities and Stockholders' Equity
(Deficit) $ 2,450,856 $ 3,429,774
============ ============
Conference call: Today, May 11, 2009 at 4:30 P.M. EDT
Webcast / Replay URL: www.wave.com/news/webcasts
Dial-in numbers: 212-231-2900 or 415-226-5360
Contact: Gerard T. Feeney CFO Wave Systems Corp. 413-243-1600 info@wavesys.c
Wave Q1 '09 Revenues Rose 137% to $4.0 Million on Increased Software License and Services Activity
DJ Press Release Wire
4:02 PM Eastern Daylight Time May 11, 2009
LEE, MA -- (MARKETWIRE) -- 05/11/09 --
Wave Systems Corp. (NASDAQ: WAVX) (www.wave.com) today reported results for the first quarter (Q1) ended March 31, 2009 and reviewed recent corporate progress and developments. Wave's Q1 2009 net revenues rose 137% to $4.0 million, compared with Q1 2008 net revenues of $1.7 million and Q4 2008 net revenues of $3.3 million, principally reflecting increased bundled software royalties as well as higher services revenue related to a contract with the U.S. government.
Reflecting the Company's ongoing efforts to reduce its overhead expenses and quarterly cash burn, Q1 2009 selling, general and administrative expense declined 21% to $3.4 million as compared to $4.3 million in Q1 2008, and declined 11% versus the Q4 2008 level. Wave was able to reduce its R&D expenses by 44% to $1.8 million in Q1 2009 as compared to $3.3 million in Q1 2008, also achieving a decrease of 14% versus Q4 2008. For Q1 2009, total billings grew 127% to $4.0 million, compared to Q1 2008 total billings of $1.8 million.
Reflecting the benefit of higher revenues and lower overhead expenses, Wave's Q1 2009 net loss was reduced to $1.5 million, or $0.02 per basic and diluted share, compared with a Q1 2008 net loss of $6.0 million, or $0.12 per basic and diluted share. Wave's Q4 2008 net loss attributable to common stockholders was $4.0 million, or $0.07 per basic and diluted share. Per-share figures are based on a weighted average number of basic shares outstanding in the first quarters of 2009 and 2008 of 61.9 million and 50.9 million, respectively, and on 58.7 million basic shares outstanding at December 31, 2008.
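The per-share figure checks out against the reported numbers:

    # Q1 2009 net loss divided by weighted average basic shares outstanding.
    print(round(1_523_233 / 61_868_589, 2))  # 0.02 -> the $0.02 loss per share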
As of March 31, 2009, Wave had total current assets of $2 million, which did not reflect the proceeds of a $1 million equity financing completed in early April 2009. Wave's deferred revenue was slightly lower at $1.5 million at March 31, 2009, as compared to year-end 2008.
Steven Sprague, president and CEO of Wave Systems commented, "In spite of economic challenges facing companies in the technology sector, our first quarter results reflect significant progress in our business and a continued reduction of our net loss and cash burn. We grew our base of bundled software shipments during Q1 2009 and, as of the end of April, surpassed a cumulative total of 50 million units shipped. Beyond these important milestones, there were significant developments in the market for self-encrypting drive technology that we believe have been beneficial to Wave.
"In January, the Trusted Computing Group announced its Opal storage specification, which provided hard drive vendors with a standard design 'blueprint.' Its passage helped demonstrate that hardware encryption is more than a niche offering and makes it easier for organizations to implement large-scale adoption of SED drives across all their platforms.
"Q1 also saw Wave continue to collaborate with the leading self-encrypting drive vendors -- Fujitsu, Toshiba, Seagate and Samsung. Samsung recently made headlines with the first solid state self-encrypting drives that will soon ship through Dell. We believe that solid state drives, or SSDs, will be the next generation of hard drives, and Wave applauds Samsung's decision to make encryption standard on all its SSDs.
"We were honored to have Samsung and all of our drive partners join us at our booth at this year's RSA Conference in San Francisco, showcasing interoperability and how Wave's software manages all the major self-encrypting drives on the market -- or in development -- today. RSA was also a key venue as we were able to highlight our other partnerships with Dell and HID, a developer of contactless smart cards, in interactive presentations before the show."
"Based on our Q1 progress and the growing market for hardware-based PC security, we are encouraged about our opportunities in the months ahead," Sprague said.
Summary of Recent Progress/Developments:
-- Samsung Electronics Self-Encrypting Solid State Drives Feature Wave Management Software: In April, Wave and Samsung announced their collaboration in offering mobile professionals significant hard drive security and speed with Samsung's self-encrypting 256-, 128- and 64-gigabyte solid state drives (SSDs) bundled with Wave's management software. SSDs offer advantages over traditional platter hard drives, boasting two- to five-times faster overall performance and longer battery life in notebook PCs. SSDs are also far less prone to reliability issues and excessive heat generation.
-- Dell Introduced Broad Suite of Mobile Data Security Solutions for its Latitude Laptop Line: In April, Dell announced that it would be one of the first in the industry to offer encrypted solid state drives from Samsung. Dell also showcased its innovative approach to a solution-based security framework that includes: Dell ControlPoint Security Manager for simplifying access to mobile security features; Dell ControlVault, a security engine to protect end-users' security credentials; and contactless smart card authentication with HID(R) Global iCLASS(R) technology. Wave collaborated on the development of Dell ControlPoint Security Manager for the deployment, management and enablement of pre-boot security, ControlVault, biometrics, smart cards, Trusted Platform Modules and self-encrypting drives. Wave's EMBASSY(R) software provides customers with multi-factor authentication, single sign-on and password management.
-- The Adoption of Electronic Mortgages is the Central Theme at the MBA National Technology in Mortgage Banking Conference and Expo: In March, representatives from Wave's eSignSystems division demonstrated the advanced functionality of their eSign Transaction Management Suite, which enables lenders to securely deliver, track, sign, purchase, refinance and modify loan documents electronically.
About Wave Systems Corp.
Wave provides software to help solve critical enterprise PC security challenges such as strong authentication, data protection, network access control and the management of these enterprise functions. Wave is a pioneer in hardware-based PC security and a founding member of the Trusted Computing Group (TCG), a consortium of nearly 140 PC industry leaders that forged open standards for hardware security. Wave's EMBASSY(R) line of client- and server-side software leverages and manages the security functions of the TCG's industry standard hardware security chip, the Trusted Platform Module (TPM). TPMs are included on an estimated 300 million PCs and are standard equipment on many enterprise-class PCs shipping today. Using TPMs and Wave software, enterprises can substantially and cost-effectively strengthen their current security solutions. For more information about Wave and its solutions, visit http://www.wave.com.
Safe Harbor for Forward-Looking Statements
This press release may contain forward-looking information within the meaning of the Private Securities Litigation Reform Act of 1995 and Section 21E of the Securities Exchange Act of 1934, as amended (the Exchange Act), including all statements that are not statements of historical fact regarding the intent, belief or current expectations of the company, its directors or its officers with respect to, among other things: (i) the company's financing plans; (ii) trends affecting the company's financial condition or results of operations; (iii) the company's growth strategy and operating strategy; and (iv) the declaration and payment of dividends. The words "may," "would," "will," "expect," "estimate," "anticipate," "believe," "intend" and similar expressions and variations thereof are intended to identify forward-looking statements. Investors are cautioned that any such forward-looking statements are not guarantees of future performance and involve risks and uncertainties, many of which are beyond the company's ability to control, and that actual results may differ materially from those projected in the forward-looking statements as a result of various factors. Wave assumes no duty to and does not undertake to update forward-looking statements. All brands are the property of their respective owners.
WAVE SYSTEMS CORP. AND SUBSIDIARIES
Consolidated Statements of Operations
(Unaudited)
Three Months Ended
March 31,
2009 2008
----------- -----------
Net revenues:
Licensing 3,730,896 1,675,505
Services 303,285 23,574
----------- -----------
Total net revenues $ 4,034,181 $ 1,699,079
----------- -----------
Operating expenses:
Cost of sales - licensing 165,672 159,161
Cost of sales - services 182,388 18,314
Selling, general, and administrative 3,378,522 4,297,090
Research and development 1,825,124 3,253,479
----------- -----------
Total operating expenses 5,551,706 7,728,044
----------- -----------
Operating loss (1,517,525) (6,028,965)
Net interest income (expense) (5,708) 18,917
----------- -----------
Net loss (1,523,233) (6,010,048)
Loss per common share - basic and diluted ($ 0.02) ($ 0.12)
Conference Call Phone numbers/website:
(212) 231-2900
(415) 226-5360
http://www.wave.com/news/webcasts
Dell to launch Android netbooks
So says PC giant's software porting partners
http://www.reghardware.co.uk/2009/05/07/dell_android_netbook/
By James Sherwood 7th May 2009 10:18 GMT
Dell is preparing to launch a Google Android-based netbook, one of the firm’s loose-lipped business partners has revealed.
Software developer Bsquare issued a press release this week which stated that it’s “porting Adobe’s Flash Lite 3.17 technology onto Dell Netbooks running Google’s Android platform”.
Whoa! That’s quite an announcement to make when Dell hasn’t even confirmed that it’s planning to launch Android-based computers in any shape or form – despite several recent rumours.
Could Dell's Mini Inspiron 9 become an Android PC?
Bsquare’s kept mum on any specific details, but made reference to industry analyst predictions that “Android will gain traction on smart devices, such as the ultra-portable Dell Mini Inspiron 9”. This will obviously lead many to assume that the small, cheap computer could be one of the machines Dell’s considering pre-loading with Android.
Dell and Bsquare have since said the announcement was a mistake. Andrew Bowins, a Dell spokesman, told the Wall Street Journal that the announcement was “made in error” but stopped short of confirming or denying whether the company is working on an Android netbook.
HHS Guidance Specifies Technologies to Secure PHI
http://www.mondaq.com/article.asp?articleid=79170
The American Recovery and Reinvestment Act passed at the end of February contains a number of changes to HIPAA privacy and security rules. Among the most important changes are new notification obligations in cases of breaches of protected health information (PHI).
Limiting the amount of "unsecured" PHI is another way to reduce the likelihood of a reportable breach. HHS guidance published April 17 specifies technologies that secure PHI by rendering it unusable, unreadable or indecipherable to unauthorized individuals. If health plans apply the technologies and methodologies specified in the guidance to secure information, they will not be obligated to provide ARRA notifications in the event the information is breached.
Under the guidance, PHI is rendered unusable, unreadable or indecipherable to unauthorized individuals only if one or more of the following applies:
Encryption. Electronic PHI has been encrypted by "the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key" and such confidential process or key has not been breached. Encryption processes that meet this standard for data at rest (data in databases, file systems and other structured storage methods) are those consistent with NIST Special Publication 800-111, Guide to Storage Encryption Technologies for End User Devices. Encryption processes for data in motion (data that is moving through a network, including wireless transmission) must comply with Federal Information Processing Standards 140-2.
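As a rough illustration of the "confidential key" idea, here is a minimal Python sketch of encrypting a record at rest with AES-256-GCM via the `cryptography` package. This shows the mechanics only; whether a deployment actually satisfies NIST SP 800-111 and the HHS guidance depends on key management, storage and process, none of which this toy covers.

    # Minimal data-at-rest encryption sketch (illustrative, not a compliance
    # recipe). The key must live outside the data store it protects.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    record = b"hypothetical PHI record"
    nonce = os.urandom(12)                 # must be unique per encryption
    ciphertext = aesgcm.encrypt(nonce, record, None)

    # Without the key, the stored bytes carry no recoverable meaning.
    assert aesgcm.decrypt(nonce, ciphertext, None) == record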
Support and Q&A for Solid-State Drives
http://blogs.msdn.com/e7/archive/2009/05/05/support-and-q-a-for-solid-state-drives-and.aspx
There’s a lot of excitement around the potential for the widespread adoption of solid-state drives (SSD) for primary storage, particularly on laptops and also among many folks in the server world. As with any new technology, as it is introduced we often need to revisit the assumptions baked into the overall system (OS, device support, applications) as a result of the performance characteristics of the technologies in use. This post looks at the way we have tuned Windows 7 to the current generation of SSDs. This is a rapidly moving area and we expect that there will continue to be ways we will tune Windows and we also expect the technology to continue to evolve, perhaps introducing new tradeoffs or challenging other underlying assumptions. Michael Fortin authored this post with help from many folks across the storage and fundamentals teams. --Steven
Many of today’s Solid State Drives (SSDs) offer the promise of improved performance, more consistent responsiveness, increased battery life, superior ruggedness, quicker startup times, and noise and vibration reductions. With prices dropping precipitously, most analysts expect more and more PCs to be sold with SSDs in place of traditional rotating hard disk drives (HDDs).
In Windows 7, we’ve focused a number of our engineering efforts with SSD operating characteristics in mind. As a result, Windows 7’s default behavior is to operate efficiently on SSDs without requiring any customer intervention. Before delving into how Windows 7’s behavior is automatically tuned to work efficiently on SSDs, a brief overview of SSD operating characteristics is warranted.
Random Reads: A very good story for SSDs
SSDs tend to be very fast for random reads. Most SSDs thoroughly trounce traditional HDDs because the mechanical work required to position a rotating disk head isn’t required. As a result, the better SSDs can perform 4 KB random reads almost 100 times faster than the typical HDD (about 1/10th of a millisecond per read vs. roughly 10 milliseconds).
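A quick back-of-envelope check of that claim, assuming the round-number latencies quoted above:

```python
# Back-of-envelope check of the ~100x random-read claim above.
ssd_latency_s = 0.0001   # ~1/10 ms per 4 KB random read
hdd_latency_s = 0.010    # ~10 ms per 4 KB random read

ssd_iops = 1 / ssd_latency_s   # ~10,000 reads/sec
hdd_iops = 1 / hdd_latency_s   # ~100 reads/sec
print(f"SSD ~{ssd_iops:,.0f} IOPS, HDD ~{hdd_iops:,.0f} IOPS, "
      f"ratio ~{ssd_iops / hdd_iops:.0f}x")
```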
Sequential Reads and Writes: Also Good
Sequential read and write operations range between quite good to superb. Because flash chips can be configured in parallel and data spread across the chips, today’s better SSDs can read sequentially at rates greater than 200 MB/s, which is close to double the rate many 7200 RPM drives can deliver. For sequential writes, we see some devices greatly exceeding the rates of typical HDDs, and most SSDs doing fairly well in comparison. In today’s market, there are still considerable differences in sequential write rates between SSDs. Some greatly outperform the typical HDD, others lag by a bit, and a few are poor in comparison.
Random Writes & Flushes: Your mileage will vary greatly
The differences in sequential write rates are interesting to note, but for most users they won’t make for as notable a difference in overall performance as random writes.
What’s a long time for a random write? Well, an average HDD can typically move 4 KB random writes to its spinning media in 7 to 15 milliseconds, which has proven to be largely unacceptable. As a result, most HDDs come with 4, 8 or more megabytes of internal memory and attempt to cache small random writes rather than wait the full 7 to 15 milliseconds. When they do cache a write, they return success to the OS even though the bytes haven’t been moved to the spinning media. We typically see these cached writes completing in a few hundred microseconds (so 10X, 20X or faster than actually writing to spinning media). In looking at millions of disk writes from thousands of telemetry traces, we observe 92% of 4 KB or smaller IOs taking less than 1 millisecond, 80% taking less than 600 microseconds, and an impressive 48% taking less than 200 microseconds. Caching works!
On occasion, we’ll see HDDs struggle with bursts of random writes and flushes. Drives that cache too much for too long, and then get caught with too great a backlog of work to complete when a flush comes along, have proven to be problematic. These flushes and surrounding IOs can have considerably lengthened response times. We’ve seen some devices take a half second to a full second to complete individual IOs and take tens of seconds to return to a more consistently responsive state. For the user, this can be awful to endure as responsiveness drops to painful levels. Think of it: the response time for a single I/O can range from 200 microseconds up to a whopping 1,000,000 microseconds (1 second).
When presented with realistic workloads, we see the worst of the SSDs producing very long IO times as well, as much as one half to one full second to complete individual random write and flush requests. This is abysmal for many workloads and can make the entire system feel choppy, unresponsive and sluggish.
Random Writes & Flushes: Why is this so hard?
For many, the notion that a purely electronic SSD can have more trouble with random writes than a traditional HDD seems hard to comprehend at first. After all, SSDs don’t need to seek and position a disk head above a track on a rotating disk, so why would random writes present such a daunting challenge?
The answer to this takes quite a bit of explaining; Anand’s article admirably covers many of the details. We highly encourage motivated folks to take the time to read it, as well as this fine USENIX paper. In an attempt to avoid covering too much of the same material, we’ll just make a handful of points.
Most SSDs are composed of flash cells (either SLC or MLC). It is possible to build SSDs out of DRAM. These can be extremely fast, but also very costly and power hungry. Since these are relatively rare, we’ll focus our discussion on the much more popular NAND flash based SSDs. Future SSDs may take advantage of nonvolatile memory technologies other than flash.
A flash cell is really a trap, a trap for electrons, and electrons don’t like to be trapped. Consider this: if placing 100 electrons in a flash cell constitutes a bit value of 0, and fewer means the value is 1, then the controller logic may have to consider 80 to 120 as the acceptable range for a bit value of 0. A range is necessary because some electrons may escape the trap, others may fall into the trap when nearby cells are being filled, and so on. As a result, some very sophisticated error correction logic is needed to ensure data integrity.
Flash chips tend to be organized in complex arrangements of blocks, dies, planes and packages, whose size, arrangement, parallelism, wear, interconnects and transfer speed characteristics can and do vary greatly.
Flash cells need to be erased before they can be written. You simply can’t trust that a flash cell has no residual electrons in it before use, so cells need to be erased before filling with electrons. Erasing is done on a large scale. You don’t erase a cell; rather you erase a large block of cells (like 128 KB worth). Erase times are typically long -- a millisecond or more.
Flash wears out. At some point, a flash cell simply stops working as a trap for electrons. If frequently updated data (e.g., a file system log file) was always stored in the same cells, those cells would wear out more quickly than cells containing read-mostly data. Wear leveling logic is employed by flash controller firmware to spread out writes across a device’s full set of cells. If done properly, most devices will last years under normal desktop/laptop workloads.
It takes some pretty clever device physicists and some solid engineering to trap electrons at high speed, to do so without errors, and to keep the devices from wearing out unevenly. Not all SSD manufacturers are as far along as others in figuring out how to do this well.
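As a rough illustration of the last two points, here is a toy wear-leveling model. It is a deliberately naive sketch, not any vendor's firmware: it just shows how steering writes to the least-worn, pre-erased block spreads erase cycles across the device.

```python
# Toy model of erase-before-write and simple wear leveling.
# A deliberately simplified sketch -- real FTL firmware is far more complex.
class FlashBlock:
    def __init__(self):
        self.erased = True
        self.erase_count = 0   # wear metric

    def erase(self):
        self.erased = True
        self.erase_count += 1  # each erase consumes part of the block's life

class ToyFTL:
    def __init__(self, n_blocks):
        self.blocks = [FlashBlock() for _ in range(n_blocks)]

    def write(self):
        # Wear leveling: pick the least-worn block; prefer pre-erased ones.
        target = min(self.blocks, key=lambda b: (not b.erased, b.erase_count))
        if not target.erased:
            target.erase()     # the slow (millisecond-scale) path
        target.erased = False  # block now holds data

ftl = ToyFTL(4)
for _ in range(100):
    ftl.write()
print([b.erase_count for b in ftl.blocks])  # wear spread across blocks
```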
Performance Degradation Over Time, Wear, and Trim
As mentioned above, flash blocks and cells need to be erased before new bytes can be written to them. As a result, newly purchased devices (with all flash blocks pre-erased) can perform notably better at purchase time than after considerable use. While we’ve observed this performance degradation ourselves, we do not consider this to be a show stopper. In fact, except via benchmarking measurements, we don’t expect users to notice the drop during normal use.
Of course, device manufacturers and Microsoft want to maintain superior performance characteristics as best we can. One can easily imagine the better SSD manufacturers attempting to overcome the aging issues by pre-erasing blocks so the performance penalty is largely unrealized during normal use, or by maintaining a large enough spare area to store short bursts of writes. SSDs designed for the enterprise may have as much as 50% of their space reserved in order to provide lengthy periods of high sustained write performance.
In addition to the above, Microsoft and SSD manufacturers are adopting the Trim operation. In Windows 7, if an SSD reports it supports the Trim attribute of the ATA protocol’s Data Set Management command, the NTFS file system will request the ATA driver to issue the new operation to the device when files are deleted and it is safe to erase the SSD pages backing the files. With this information, an SSD can plan to erase the relevant blocks opportunistically (and lazily) in the hope that subsequent writes will not require a blocking erase operation since erased pages are available for reuse.
As an added benefit, the Trim operation can help SSDs reduce wear by eliminating the need for many merge operations to occur. As an example, consider a single 128 KB SSD block that contained a 128 KB file. If the file is deleted and a Trim operation is requested, then the SSD can avoid having to mix bytes from the SSD block with any other bytes that are subsequently written to that block. This reduces wear.
Windows 7 requests the Trim operation for more than just file delete operations. The Trim operation is fully integrated with partition- and volume-level commands like Format and Delete, with file system commands relating to truncate and compression, and with the System Restore (aka Volume Snapshot) feature.
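A toy model of the idea follows, with invented structure (real SSD firmware and the ATA Data Set Management command are far more involved): trimmed pages are queued for background erase, so later writes find pre-erased space instead of paying a blocking erase.

```python
# Toy illustration of why Trim helps: trimmed pages can be erased lazily,
# so later writes find pre-erased space instead of paying a blocking erase.
# A sketch of the concept only, not the ATA command itself.
class ToySSD:
    def __init__(self, n_pages):
        self.valid = set()          # pages the file system still cares about
        self.dirty = set()          # stale pages, not yet erased
        self.free = set(range(n_pages))

    def write(self, page):
        if page in self.dirty:      # without Trim info: blocking erase here
            self.dirty.discard(page)
        self.free.discard(page)
        self.valid.add(page)

    def trim(self, pages):
        # File deleted: mark pages stale so background GC can erase them.
        for p in pages:
            if p in self.valid:
                self.valid.discard(p)
                self.dirty.add(p)

    def background_erase(self):
        # Idle-time work: reclaim trimmed pages before the host needs them.
        self.free |= self.dirty
        self.dirty.clear()

ssd = ToySSD(8)
ssd.write(0); ssd.write(1)
ssd.trim([0, 1])        # delete notification from the file system
ssd.background_erase()  # lazy erase; later writes to 0/1 are fast again
```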
Windows 7 Optimizations and Default Behavior Summary
As noted above, all of today’s SSDs have considerable work to do when presented with disk writes and disk flushes. Windows 7 tends to perform well on today’s SSDs, in part, because we made many engineering changes to reduce the frequency of writes and flushes. This benefits traditional HDDs as well, but is particularly helpful on today’s SSDs.
Windows 7 will disable disk defragmentation on SSD system drives. Because SSDs perform extremely well on random read operations, defragmenting files isn’t helpful enough to warrant the added disk writing defragmentation produces. The FAQ section below has some additional details.
By default, Windows 7 will disable Superfetch, ReadyBoost, and boot and application launch prefetching on SSDs with good random read, random write and flush performance. These technologies were all designed to improve performance on traditional HDDs, where random read performance could easily be a major bottleneck. See the FAQ section for more details.
Since SSDs tend to perform at their best when the operating system’s partitions are created with the SSD’s alignment needs in mind, all of the partition-creating tools in Windows 7 place newly created partitions with the appropriate alignment.
Frequently Asked Questions
Before addressing some frequently asked questions, we’d like to remind everyone that the future of SSDs in mobile and desktop PCs (as well as enterprise servers) looks very bright to us. SSDs can deliver on the promise of improved performance, more consistent responsiveness, increased battery life, superior ruggedness, quicker startup times, and noise and vibration reductions. With prices steadily dropping and quality on the rise, we expect more and more PCs to be sold with SSDs in place of traditional rotating HDDs. With that in mind, we focused an appropriate amount of our engineering efforts towards ensuring Windows 7 users have great experiences on SSDs.
Will Windows 7 support Trim?
Yes. See the above section for details.
Will disk defragmentation be disabled by default on SSDs?
Yes. The automatic scheduling of defragmentation will exclude partitions on devices that declare themselves as SSDs. Additionally, if the system disk has random read performance characteristics above the threshold of 8 MB/sec, then it too will be excluded. The threshold was determined by internal analysis.
The random read threshold test was added to the final product to address the fact that few SSDs on the market today properly identify themselves as SSDs. 8 MB/sec is a relatively conservative rate. While none of our tested HDDs could approach 8 MB/sec, all of our tested SSDs exceeded that threshold. SSD performance ranged between 11 MB/sec and 130 MB/sec. Of the 182 HDDs tested, only 6 configurations managed to exceed 2 MB/sec on our random read test. The other 176 ranged between 0.8 MB/sec and 1.6 MB/sec.
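For the curious, a rough sketch of this kind of 4 KB random-read test is below. It is not Microsoft's actual WinSAT assessment; the file path and sample count are made up, and a real measurement would also need to bypass the OS cache.

```python
# Rough sketch of a 4 KB random-read rate test like the one described
# above; not Microsoft's actual assessment code.
import os, random, time

def random_read_mb_per_sec(path, samples=2048):
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    try:
        start = time.perf_counter()
        for _ in range(samples):
            os.lseek(fd, random.randrange(0, size - 4096), os.SEEK_SET)
            os.read(fd, 4096)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return (samples * 4096) / elapsed / (1024 * 1024)

rate = random_read_mb_per_sec("testfile.bin")  # a large pre-created file
print("SSD-like" if rate > 8.0 else "HDD-like", f"({rate:.1f} MB/s)")
```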
Will Superfetch be disabled on SSDs?
Yes, for most systems with SSDs.
If the system disk is an SSD, and the SSD performs adequately on random reads and doesn’t have glaring performance issues with random writes or flushes, then Superfetch, boot prefetching, application launch prefetching, ReadyBoost and ReadyDrive will all be disabled.
Initially, we had configured all of these features to be off on all SSDs, but we encountered sizable performance regressions on some systems. In root-causing those regressions, we found that some first-generation SSDs had severe enough random write and flush problems that they ultimately led to disk reads being blocked for long periods of time. With Superfetch and other prefetching re-enabled, performance on key scenarios was markedly improved.
Is NTFS Compression of Files and Directories recommended on SSDs?
Compressing files helps save space, but the effort of compressing and decompressing requires extra CPU cycles and therefore power on mobile systems. That said, for infrequently modified directories and files, compression is a fine way to conserve valuable SSD space and can be a good tradeoff if space is truly at a premium.
We do not, however, recommend compressing files or directories that will be written to with great frequency. Your Documents directory and files are likely to be fine, but temporary internet directories or mail folder directories aren’t such a good idea because they receive large numbers of file writes in bursts.
Does the Windows Search Indexer operate differently on SSDs?
No.
Is BitLocker’s encryption process optimized to work on SSDs?
Yes, on NTFS. When BitLocker is first configured on a partition, the entire partition is read, encrypted and written back out. As this is done, the NTFS file system will issue Trim commands to help the SSD optimize its behavior.
We do encourage users concerned about their data privacy and protection to enable BitLocker on their drives, including SSDs.
Does Media Center do anything special when configured on SSDs?
No. While SSDs do have advantages over traditional HDDs, SSDs are more costly per GB than their HDD counterparts. For most users, an HDD optimized for media recording is a better choice, as media recording and playback workloads are largely sequential in nature.
Does Write Caching make sense on SSDs and does Windows 7 do anything special if an SSD supports write caching?
Some SSD manufacturers include RAM in their devices for more than just their control logic; they are mimicking the behavior of traditional disks by caching writes, and possibly reads. For devices that do cache writes in volatile memory, Windows 7 expects flush commands and write-ordering to be preserved to at least the same degree as traditional rotating disks. Additionally, Windows 7 expects user settings that disable write caching to be honored by write-caching SSDs just as they are on traditional disks.
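That flush expectation is what lets an application force its data out of a volatile cache. Here is a minimal sketch of a durable write in Python (the file name is arbitrary):

```python
# Minimal sketch of a durable write: flush() pushes data to the OS,
# fsync() asks the OS to issue a flush so a write-caching drive (SSD or
# HDD) must move it to stable media before returning.
import os

def durable_write(path, data):
    with open(path, "wb") as f:
        f.write(data)
        f.flush()              # user-space buffer -> OS
        os.fsync(f.fileno())   # OS -> device, including its volatile cache

durable_write("journal.log", b"committed\n")
```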
Do RAID configurations make sense with SSDs?
Yes. The reliability and performance benefits one can obtain via HDD RAID configurations can be had with SSD RAID configurations.
Should the pagefile be placed on SSDs?
Yes. Most pagefile operations are small random reads or larger sequential writes, both of which are types of operations that SSDs handle well.
In looking at telemetry data from thousands of traces and focusing on pagefile reads and writes, we find that:
Pagefile.sys reads outnumber pagefile.sys writes by about 40 to 1;
Pagefile.sys read sizes are typically quite small, with 67% less than or equal to 4 KB and 88% less than 16 KB; and
Pagefile.sys writes are relatively large, with 62% greater than or equal to 128 KB and 45% being exactly 1 MB in size.
In fact, given typical pagefile reference patterns and the favorable performance characteristics SSDs have on those patterns, there are few files better than the pagefile to place on an SSD.
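As an illustration of how such numbers might be derived, here is a small sketch that summarizes an I/O trace. The (op, size) trace format is invented here; the real telemetry pipeline is Microsoft-internal.

```python
# Sketch of deriving pagefile statistics like those quoted above from an
# I/O trace. The trace format (op, size_bytes) is invented for this example.
def summarize(trace):
    reads  = [s for op, s in trace if op == "read"]
    writes = [s for op, s in trace if op == "write"]
    pct = lambda xs, cond: 100 * sum(cond(x) for x in xs) / len(xs)
    print(f"read:write ratio ~{len(reads) / max(len(writes), 1):.0f}:1")
    print(f"reads  <= 4 KB:   {pct(reads,  lambda s: s <= 4096):.0f}%")
    print(f"writes >= 128 KB: {pct(writes, lambda s: s >= 131072):.0f}%")

summarize([("read", 4096)] * 40 + [("write", 1048576)])  # tiny fake trace
```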
Are there any concerns regarding the Hibernate file and SSDs?
No, hiberfile.sys is written to and read from sequentially and in large chunks, and thus can be placed on either HDDs or SSDs.
What Windows Experience Index changes were made to address SSD performance characteristics?
In Windows 7, there are new random read, random write and flush assessments. Better SSDs can score above 6.5 all the way to 7.9. To be included in that range, an SSD has to have outstanding random read rates and be resilient to flush and random write workloads.
In the Beta timeframe of Windows 7, there was a capping of scores at 1.9, 2.9 or the like if a disk (SSD or HDD) didn’t perform adequately when confronted with our random write and flush assessments. Feedback on this was pretty consistent, with most feeling the level of capping to be excessive. As a result, we now simply restrict SSDs with performance issues from joining the newly added 6.0+ and 7.0+ ranges. SSDs that are not solid performers across all assessments effectively get scored in a manner similar to what they would have been in Windows Vista, gaining no Win7 boost for great random read performance.
Published Tuesday, May 05, 2009 12:00 AM by e7blog
Shorting isn't investing; You're an opportunist. e/
Features Under Consideration For The Next Generation Of TPM
http://www.trustedcomputinggroup.org/files/resource_files/0CD79678-1D09-3519-ADDAFD2ED5450D0A/Features%20Under%20Consideration%20for%20TPM%20next%20%28FINAL%29.pdf
Intel encourages companies to buy new computers
http://www.ecommerce-journal.com/news/15142_intel_encourages_to_buy_new_computers
May 5, 2009
Intel Corp. reported last week that companies with 30,000 PCs that upgrade to new Core 2 Duo or Quad computers would recoup their money in 17 months; those that also equip themselves with vPro-enabled motherboards would do so in an even shorter period, 10 months.
However, these figures apply only to a limited set of firms and do not encompass other costs of PC upgrades.
According to a survey of 106 North American and European companies, commissioned by Intel and conducted by Wipro Consulting, 32% of them have slowed their PC refresh rates in the last six months, 60% haven't changed their PC upgrade policy, and 8% have even accelerated it. The North American companies participating in the survey had a minimum of 5,000 PCs; the European ones, 2,500.
Intel states that the main reason for the delayed PC upgrades is companies' decreasing revenues. This once again highlights a point made by independent analyst Jack Gold: only 10% to 15% of enterprise PCs deployed today have vPro, which is understandable considering that vPro was launched in April 2006 and the first desktop PCs featuring it began shipping several months later.
vPro enables IT managers to remotely configure and set policies for PCs. Analysts suggest that vPro technology needs to be accompanied by strong back-end management tools and run in large environments in order to ensure increased security and lower costs.
In addition to the regular CC, SKS will be giving us mid-Q2 guidance.
Flash forward
Over the next few months we will see storage sales continue to be driven by price and size. For example, average stick capacities have recently doubled in size from 4GB to 8GB and that trend is expected to continue.
If resellers and distributors want to take advantage of this growing market, they should concentrate on new products based on flash – such as solid-state drives (SSDs) and eSata drives – and less on optical media, which is diminishing as a storage medium.
SSDs are predominantly used by professionals and the usual gadget-mad, early-adopter consumers.
The biggest barrier to mass adoption is the consumer belief that it is difficult to open a laptop and swap the traditional hard disk for an SSD.
Demand will be created by resellers that like to bring innovation to their customers.
The rest of the article is here:
http://www.channelweb.co.uk/crn/comment/2241509/flash-forward
Toshiba inks deal to buy Fujitsu's HDD operations
http://www.tradingmarkets.com/.site/news/Stock%20News/2301264/
TOKYO, Apr 30, 2009 (Kyodo News International)
Toshiba Corp. and Fujitsu Ltd. signed a 30 billion yen deal calling for Fujitsu to fully hand over its hard disk drive business to Toshiba via a two-stage stock deal to be completed by the end of 2010, the two companies said Thursday.
Under the deal, Fujitsu's HDD-related business and functions will be transferred to Toshiba Storage Device Corp., to be set up shortly in Tokyo's Shibaura district. The new firm, with a workforce of 980 engineers and other personnel, will take charge of HDD design as well as research and development, the two companies said.
Fujitsu's HDD-manufacturing units, Fujitsu Computer Products Corp. of the Philippines and Fujitsu (Thailand) Co., will be renamed Toshiba Storage Device (Philippines) Inc. and Toshiba Storage Device (Thailand) Co., they said.
After the transfer, Toshiba will handle the marketing of all HDD products, including Fujitsu's. Most of Fujitsu's marketing offices outside of Japan will be transferred to Toshiba's overseas divisions.
Toshiba will then acquire an 80.1 percent stake in Toshiba Storage Device from Fujitsu for 24 billion yen by July 1, with Fujitsu handing over the remaining stake to Toshiba for 6 billion yen by Dec. 31, 2010.
The combined acquisition cost of 30 billion yen does not include 6 billion yen in net debt that Toshiba Storage Device will take over from Fujitsu's HDD business, the companies said.
The deal will entail the transfer of 800 employees in Japan and 7,000 employees overseas in the Fujitsu group's HDD divisions to the Toshiba group.
Toshiba will enhance its position as a vendor of small-form HDDs by diversifying into the area of enterprise HDDs for servers in which Fujitsu is a strong force. The deal will enable Toshiba to expand its solid state drive business by developing SSD products for servers and enterprise storage systems, they added.
tommy:
My thoughts exactly:
http://www.expresscomputeronline.com/20090504/edit01.shtml
Sometime back I had written about how Full Disk Encryption (FDE) on hard drives was here and would supersede software-based encryption for the greater part. Until now, SSDs haven’t supported this feature. That’s all changed now with Samsung releasing a slew of SSDs that support FDE. These will be making their appearance on Dell’s Latitude line of laptops.
titlewave, exactly what are you trying to say? I can't figure out if your posts are off topic or off kilter......
New Intel technology combats laptop losses
http://blog.oregonlive.com/hillsboroargus/2009/04/new_intel_technology_combats_l.html
Posted by Susan Gordanier, Hillsboro Argus April 29, 2009 11:30AM
Whether through theft or unintentional misplacing, the loss of a notebook computer can create major problems for a business.
The costs come not so much from replacing the equipment, as from loss of control of the information stored on it, whether proprietary documents or private details about customers.
New technology from Intel Corporation offers one solution. The features, called vPro, are not programs that run after the computer is turned on. Instead vPro features are built into the circuitry of the computer's hardware.
Depending on how the technology is set to react after detecting a security breach, it can completely lock the computer, rendering it completely useless, or delete the keys necessary to access encrypted data stored on it.
Directions on which action to take can also arrive from another computer -- directed from a console in the owning company's information technology department -- even if the stolen computer has not been turned on.
The Intel group that developed vPro is based at the Cornell Oaks campus on Greenbriar Parkway in Beaverton, according to Bill MacKenzie, Intel communications manager. Several original equipment manufacturers have already announced their support for the new technology and offer it in computers aimed at the corporate market, he said. These include Lenovo, Fujitsu, Asus and Acer.
Intel recently released the results of a Ponemon Institute study that showed how much is at stake in having a quick effective response policy in place for stolen notebooks. The study found that the actual cost to the company is much less if the loss of the laptop is discovered within 24 hours. The average cost after one day is $15,933, but if the company doesn't learn the computer is missing until over a week passes, the costs could rise as high as $115,849, on average, to control the damage caused by compromised data.
This highlights another advantage of vPro technology: The lost machine itself can be set to detect its own theft, disabling itself if it has not "called home" to its company's technology center within a set period of time.
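Conceptually, this is a dead-man's-switch timer. vPro implements it below the operating system, in hardware and firmware; the sketch below only models the decision logic, with an assumed one-week policy.

```python
# The "call home or disable" idea, sketched in software. vPro implements
# this below the OS; this is only a conceptual model with an assumed policy.
import time

CHECK_IN_DEADLINE = 7 * 24 * 3600   # e.g., one week, set by IT policy

def should_disable(last_check_in, now=None):
    now = time.time() if now is None else now
    return (now - last_check_in) > CHECK_IN_DEADLINE

# A laptop that last reached its company's server 9 days ago:
print(should_disable(time.time() - 9 * 24 * 3600))  # True -> lock machine
```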
The study also found that a lost laptop belonging to a company's CEO isn't nearly as expensive as one belonging to a manager or director. That's because lower-level executives are more likely to have detailed files on their computers.
The entire "Cost of a Lost Laptop" study is available online at communities.intel.com/docs/DOC-3076.
If You Build It, They Will Come...
http://www.trustedcomputinggroup.org/community/2009/04/if_you_build_it_they_will_come
by Matt Webster, Lumeta Corporation
Mike Fratto correctly highlights the need for vendors to listen to their customers and provide standards-based access control technologies (CSI 2008: You Want Standards, You Have To Demand Them <http://www.informationweek.com/blog/main/archives/2008/11/csi_2008_you_wa.html>). Well, Mr. Fratto, the vendors have listened to you and our customers, and more standards-based products are now available.
There are numerous vendors with TNC standards-based products today, with many more planning product releases in the near future (weeks and months, not years). These companies include the traditional access control providers as well as vendors in adjacent security markets who can offer valuable contributions to the realm of access control.
See, here's the really cool thing: through TNC, both the traditional and non-traditional access control vendors are "enabled". Products such as vulnerability management, visibility/mapping, network discovery, control systems, and others are easily enabled to take part in the access decision-making process. Both pre- and post-access decisions can be made based on any number of policy and/or compliance checks.
This multifaceted approach will allow customers to realize the power of access control without a multitude of changes or excessive up-front costs. As these standards are adopted and productized by more security vendors, customers will have the ability to "connect" the various security components and network equipment that they already have in place, easing the deployment and cost of access control. The standards-based "connections" between enforcement and decision points, ingesting information from sensors and compliance/policy monitors, create an environment in which customers can feel confident they are not being saddled with a single-vendor access control solution.
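To illustrate the decision-making idea (not the TNC wire protocols themselves), here is a toy policy decision point that combines posture checks from several independent sources. The check names and the quarantine policy are invented for the example.

```python
# Toy policy decision point in the spirit of the TNC architecture:
# several independent checks (posture, vulnerability scan, discovery)
# feed one access decision. Check names and policy are invented here;
# real TNC deployments use the standardized IF-* protocols, not this.
def decide_access(endpoint):
    checks = [
        endpoint.get("av_current", False),        # posture collector
        endpoint.get("patched", False),           # vulnerability mgmt feed
        not endpoint.get("rogue_device", True),   # network discovery feed
    ]
    if all(checks):
        return "grant"
    return "quarantine"   # remediation VLAN rather than a hard deny

print(decide_access({"av_current": True, "patched": True,
                     "rogue_device": False}))     # -> grant
```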
Awk, have you seen this?
Windows Slideshow:
Windows 7 RC Features Many Small but Useful Enhancements
The release candidate of Windows 7 (Build 7100) features many small enhancements over the beta released last January. The new operating system's security features and the Aero Glass interface have undergone much tweaking and tuning. The biggest late-arriving feature to the OS—the virtualization-based Windows XP Mode—remains missing; that code will be available separately via download at a later time.
From slide #15:
BitLocker Without TPM
BitLocker, Windows 7's built-in full-disk encryption tool, wants to write the encryption keys to the computer's TPM chip. If no TPM is present, users need to change some group policy settings so they can write the key to a USB stick.
http://www.eweek.com/c/a/Windows/Windows-7-RC-Features-Many-Small-but-Useful-Enhancements-888614/
Intel: Spend money (on new PCs) to save money
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9132282&intsrc=news_ts_head
Companies can reap the advantages of vPro technology, Intel argues
Eric Lai
April 28, 2009 (Computerworld) Hurt by enterprises putting off PC purchases, Intel Corp. today presented research purporting to show that large companies that buy new PCs equipped with its vPro security and management technology can recoup their investment in less than a year.
A company with 30,000 PCs that upgrades to new Core 2 Duo or Quad computers would make its money back in 17 months -- and in just 10 months if those PCs are also equipped with vPro-enabled motherboards, according to Intel.
One analyst, however, said such ROI figures apply only to a limited set of companies and do not encompass other costs of PC upgrades, such as buying or upgrading new software licenses, something many companies have also been delaying.
"This might make sense for IT outsourcing firms or very large companies," said Jim McGregor, an analyst at In-Stat. "But for many of us, this is the worst economic downturn of our lives. Unless it fits in your company budget, it doesn't make sense."
In an Intel-commissioned Wipro Consulting survey of 106 North American and European companies, 32% of the respondents said that they have slowed their PC refresh rates in the last six months. The majority, 60%, said they haven't changed their PC upgrade policy, while 8% said they have accelerated upgrades. The North American companies surveyed had a minimum of 5,000 PCs, and those based in Europe had a minimum of 2,500.
"Corporate IT is overdue for an update from Windows XP," said Rob Crook, general manager and vice president for Intel's business client group. "Yes, it's tough economic times, but those who can, are [upgrading]."
Though it's still profitable, Intel has blamed its declining revenues in recent months partly on delayed PC upgrades.
Slower PC upgrades mean that many enterprises haven't tried out vPro, which Intel launched in April 2006. The first desktop PCs featuring the technology began shipping several months later.
vPro enables a number of services, such as Active Management Technology (AMT), which allows IT managers to remotely configure and set policies for PCs, and Intel Trusted Execution Technology (TXT), which is used to enforce security policies.
According to independent analyst Jack Gold, only 10% to 15% of enterprise PCs deployed today have vPro. Gold released research last week showing that upgrading laptops makes financial sense, primarily because it's costly to maintain breakage-prone, out-of-warranty hardware.
Desktop PCs aren't as vulnerable to damage as laptops, Gold said, noting that upgrading to new vPro-enabled PCs on the desktop could be financially smart but the payback is less straightforward than Intel might make it sound.
TXT is supposed to be more secure because it is in the hardware, not software, layer. That didn't stop security experts from breaking TXT and explaining how it was done earlier this year.
McGregor still praises vPro's features. But he says vPro technology needs to be paired with strong back-end management tools and run in large environments for it to translate into increased security and lower costs.
dig, I've just learned Wave is here:
http://www.nsa.gov/applications/ia/events/conferences/tracks.cfm?ConferenceID=59&menutype=openreg
Implementation, Enablement and Usage
(Includes Coalition, Federal Agency, and DoD Contractors)
This track will host in-depth discussions of program updates regarding various interoperability challenges among the DoD, federal government, and international communities in addition to demonstrations on the various ways identity protection and management technologies are being developed for future use.
Hey dig,
Nope, nothing new. My contact is firm in his assertion that TPMs will be used across all govt agencies. But, in thinking about it later, he's in DHS so I wonder if the Dept of Agriculture is even on his radar screen? He believes we'll see some initial implementation in ICE (Immigration and Customs Enforcement), where both TPMs and biometrics will be used on in-the-field ruggedized notebooks. Like you, I have no links, only recollections of many recent conversations.
True cost of a lost business laptop? $50,000
http://arstechnica.com/gadgets/news/2009/04/report-average-stolen-laptop-cost-is-50k-intel-buy-vpro.ars
AA, re: dig_space
I have a life-long friend in DHS IT. I sent the links to dig's post for his review. Suffice it to say I'm not the least bit worried about TPM adoption across all government agencies.
Buying Strategies For Storage
Security
http://www.processor.com/editorial/article.asp?article=articles/P3113/26p13/26p13.asp&guid=
Because data is the lifeblood of a company, making sure that it's secure is imperative. Manufacturers provide many security controls, but some gaps can be created in a networked system or a strategy that uses many pieces of hardware.
One growing trend in security is FDE (full-disk encryption), which secures all data on a disk or disk volume and usually boasts faster virus scans than other types of encryption, according to David James, vice president of advanced product engineering at Fujitsu (www.fujitsu.com).
When making storage purchasing decisions, it’s often a good idea to look at security strategies such as FDE simultaneously to minimize interoperability concerns. A data center manager can talk to storage vendors about how to integrate existing security or planned upgrades so there aren’t security issues during initial implementation.
Public-private security cooperation at RSA
http://news.cnet.com/8301-1009_3-10225182-83.html
by Jon Oltsik
In past years, I looked at the RSA security conference as a high-tech flea market staffed by the world's best security carnival barkers. Yes, important security topics were discussed, but the real focus of the show was selling products and doing deals.
This year's event has its share of tacky presentations and booth babes, but I'm hearing a lot of chatter about a far more important topic: the state of information security and its impact on us all. Finally, the combination of unending data breaches, sophisticated malware, and the very real cybersecurity threat has everyone paying attention. There is a broad recognition that we security professionals aren't hawking hardware or writing code, we actually have a responsibility to educate, help, and safeguard users.
This theme is evident throughout the event. Microsoft's Scott Charney, a former U.S. Department of Justice attorney, talked about Microsoft's vision for end-to-end trust, describing why this is necessary and how it can be done in simple terms. While security crowds are often skeptical about Microsoft, Charney stated clearly, "It is our responsibility to make technology trustworthy."
Charney was followed later in the day by National Security Agency Director Lt. Gen. Keith Alexander, who talked about NSA capabilities and its role in securing cyberspace. Wednesday's speakers include Melissa Hathaway, acting senior director for cyberspace and the individual tasked with researching the state of domestic cybersecurity and reporting her results to President Obama. Finally, the day concludes with one of my favorite authors, James Bamford, who has written several books such as "Body of Secrets" and "The Shadow Factory" that are must-reads for anyone interested in cybersecurity, privacy, and the NSA.
I applaud this group of speakers and their messages, but I truly believe that private-public security cooperation needs to go to another level. Here are a few suggestions where this would help:
Security standards. The National Institute of Standards and Technology and the NSA should champion standards across the public sector while cooperating with the security industry on education and promotional programs. I'd like to see this cooperation on standards like the Key Management Interoperability Protocol (KMIP) and the Extensible Access Control Markup Language (XACML). I'd also like to see a standard for data "tagging" so that security requirements travel with the data for distributed security policy enforcement (a toy sketch of the idea follows this list).
Information assurance. The defense and intelligence community is pretty good at data discovery, classification, and security. The private sector on the other hand is struggling. I'd like to see government agencies work more closely with the security industry to define standards, create best practices models, and enhance education.
Secure software development. This is the Achilles' heel of the technology industry, and secure development programs remain underfunded and behind the scenes. The federal government should flex its purchasing muscles by auditing vendor development processes, demanding that vendors adhere to the Common Weakness Enumeration/SANS Institute list of "Top 25 Most Dangerous Programming Errors," and creating some type of "good housekeeping seal of approval" certification for software vendors. This will stimulate new security training, products, and services and force the private sector into similar requirements.
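Here is the promised toy sketch of the data-tagging idea: security requirements travel with the record and are checked wherever it lands. The tag vocabulary and enforcement rule are invented for illustration.

```python
# Sketch of the data-"tagging" idea: security requirements travel with the
# data and are enforced at the point of use. Tags and policy are invented.
from dataclasses import dataclass

@dataclass
class TaggedRecord:
    payload: str
    tags: frozenset          # e.g., {"pii", "encrypt-at-rest"}

def enforce(record, destination_capabilities):
    # Every requirement named in the tags must be met by the destination.
    return record.tags <= destination_capabilities

rec = TaggedRecord("ssn=123-45-6789", frozenset({"pii", "encrypt-at-rest"}))
print(enforce(rec, {"pii", "encrypt-at-rest", "audit"}))  # True -> allow
print(enforce(rec, {"audit"}))                            # False -> block
```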
Talk is cheap and cybersecurity gets worse each day. I hope that the government and security industry can build upon this common understanding to make real and immediate progress.
Microsoft Details Windows 7 Security
Features in upcoming OS reflect Microsoft's strategy of end-to-end trust
By Kelly Jackson Higgins, DarkReading
April 22, 2009
URL:http://www.darkreading.com/story/showArticle.jhtml?articleID=217000228
SAN FRANCISCO -- RSA CONFERENCE 2009 -- Microsoft this week provided a peek into the security innards of its upcoming operating system, Windows 7.
Amid the backdrop of the software giant's end-to-end trust theme for security -- highlighted by Scott Charney, corporate vice president of Microsoft's Trustworthy Computing Group, in his keynote address yesterday -- Microsoft also outlined new security features that come embedded in the next-generation OS, including application and mobile security controls.
End-to-end trust is especially relevant given today's threats, says Steve Lipner, director of security engineering strategy for Microsoft's Trustworthy Computing Group. "We're seeing attacks based on rogue AV software today, [for example], and end-to-end trust is a way to assure users what they are downloading was not a malicious artifact. Similarly, targeted attacks sending people spoofed email with a malicious attachment is another great example [of how it could help]," he says.
Paul Cooke, Microsoft's director of Windows enterprise client products, says the key security features in Windows 7 are part of the end-to-end trust model. They include Direct Access, AppLocker, USB thumb-drive support in BitLocker, and updated security features in Internet Explorer 8, such as an anti-clickjacking function. Windows 7 is currently in beta.
Windows 7's Direct Access will let users log into the corporate network and automatically get secure access via IPSec. "IT then can also touch and update their machine," Cooke says. "It works well with NAP [Network Access Protection]," he adds, which validates the client's security posture.
AppLocker lets IT control and secure applications on the client's machine. "It can control executables, scripts, installed software, and DLLs," Cooke says. It lets an organization set up which applications a user can run and then automatically updates them, he says.
BitLocker to Go in Windows 7 will let users encrypt USB thumb drives and SD cards to protect data stored on those devices in the event they are lost or stolen. "Too many USB drives are in the news about being lost," Cooke says.
Microsoft's Charney said in his keynote yesterday that Windows 7 provides some of the key elements of a "trusted stack," where all components on the machine can be authenticated and proved trustworthy.
Security model is failing, claim RSA speakers
http://www.itpro.co.uk/610605/security-model-is-failing-claim-rsa-speakers
Technology leaders at the RSA Conference in San Francisco have said that security needs to change - but exactly how was up for some debate.
The current security model isn't working, according to technology leaders speaking at the RSA Conference in San Francisco.
While keynote speakers agreed on that point, what’s to be done about it remained a topic of much debate.
Symantec’s new chief executive Enrique Salem, who took the company’s reins on 4 April, called for security managers to “operationalise” their efforts by creating “a bridge between day-to-day operations and security departments” to create shared plans and goals.
“We know that the most effective programs are those that bring together security, storage, and systems management to automate the repetitive tasks that consume most of your time,” he noted.
“When you bring together these areas, it’s possible to be more proactive and policy-driven.”
Microsoft’s corporate vice president for trustworthy computing Scott Charney insisted that it’s time for hardware vendors to recognise their role in maintaining security.
Trust is crucial to continued growth on the internet, he said, and “we have to root trust in hardware, because it's less malleable than software.” That requires collaboration and cooperation across the board – software and hardware vendors, consumers, enterprises, and society as a whole.
“We need to have alignment,” he said. “We need alignment between social forces, economic forces, political forces, and IT. Too often the information technology community has a solution, but they can't figure out how to monetise it or it's not acceptable for some other reason."
"Too often the politicians may have an objective, a worthy one like protecting children online, but the technology is not supportive and it has too many unintended consequences," he added. "Too often good ideas fail because the alignment isn't there.”
What Charney referred to as “alignment,” others called standardisation.
Art Coviello, executive vice president of EMC and president of the company's security division RSA, warned that cyber criminals are evolving faster than the industry trying to stop them. With new malware and cyber-attacks arising daily, the security industry needs to develop an ecosystem to support the strengths of every player.
“Security cannot be solved by products from a single vendor,” he said. “It must be solved by the vendor community, what I call inventive collaboration. It’s about taking expertise of one technology organisation and interweaving it with another.”
To start things off, Coviello said he would make more of RSA’s tools and research publicly available. “We need to be far faster and flexible than cybercriminals,” he said. “We need a common development process to support risk management.”
Part of the problem, Symantec’s Salem noted, is that virtualisation and cloud computing are separating information from the infrastructure and hardware that once protected it. Where security once required blocking threats to hardware, modern security requires tracking where information goes, who is manipulating it and how to move bits quickly and securely between various systems.