OT: Microsoft Establishes Customer Council on Interoperability
Wednesday June 14, 12:01 am ET
Industry and Government Leaders to Provide Input on Making Technologies Work Better Together
REDMOND, Wash., June 14 /PRNewswire-FirstCall/ -- Microsoft Corp. (Nasdaq: MSFT - News) today announced that it has formed the Interoperability Customer Executive Council to identify areas for interoperability improvements across its products and the overall software industry. Customers are working in increasingly heterogeneous IT environments and asking for a greater level of interoperability from their IT vendors. Microsoft is committed to building bridges across the industry to deliver products to its customers that are interoperable by design.
"As part of our commitment to Trustworthy Computing, we design our products with interoperability in mind so our customers can connect to other platforms, applications and data easily," said Bob Muglia, senior vice president of the Server and Tools Business at Microsoft. "The Interoperability Customer Executive Council will help us prioritize areas where we can achieve greater interoperability through product design, collaboration agreements with other companies, standards, and effective licensing of our intellectual property."
The council, hosted by Muglia, will meet twice a year in Redmond, Wash. The council will have direct contact with Microsoft executives and product teams so it can focus on interoperability issues that are of greatest importance to customers, including connectivity, application integration and data exchange. Council members will include chief information officers (CIOs), chief technology officers (CTOs) and architects from leading corporations and governments. Representatives from Societe Generale, LexisNexis, Kohl's Department Stores, Denmark's Ministry of Finance, Spain's Generalitat de Catalunya and Centro Nacional de Inteligencia (CNI), and the states of Wisconsin and Delaware have joined as founding members.
Customers Identify Interoperability as a Key IT Priority
The adoption of disparate systems over time is a reality, but customers in the private and public sectors still want to take advantage of the leading IT road maps going forward. Increasingly, businesses and governments are looking at interoperability in IT deployments to drive down costs and increase their access to information. Microsoft continues to work proactively with others in the industry, including competitors, to deliver innovative, interoperable technologies that meet the requirements of customers and the demands of the market.
"Within the different architectures of Societe Generale IT, we are convinced that the best way to design flexible and adaptable IT solutions to answer the needs of our different business lines is to use technology designed with a commitment to interoperability between products, hardware, software and applications," said Olivier de Bernardi, group chief technology officer at Societe Generale. "With this in mind, we are quite interested to participate in this new program launched by Microsoft."
"Going forward, LexisNexis and our parent company, Reed Elsevier Group plc, will depend heavily on the ease, consistency and trust of true secure interoperability of operating system and infrastructure foundation layers," said Allan McLaughlin, senior vice president and chief technology officer at LexisNexis. "Our customers demand the best of our products, which involves working across various vendor environments to deliver superior solutions. We encourage all our vendors to take the necessary steps, as Microsoft is intending to do with this Council, to significantly improve the interoperability of the operating environment foundation."
"It is important that technologies have interoperability designed into their architecture if they are to satisfy our business need for faster integration of systems," said Jeff Marshall, chief information officer at Kohl's Department Stores. "I appreciate Microsoft's commitment to a dialogue around interoperability through this council, and it will definitely further the good work we have already started."
"With the overall responsibility for the largest Microsoft Business Solutions installation globally, consisting of more than 600 instances of Microsoft Dynamics(TM) NAV, it is important to me to be able to timely understand and influence Microsoft's direction on interoperability," said Henrik Jeberg, chief information officer, AGM at the Danish Ministry of Finance. "We are pleased to be a part of this global council and look forward to contributing to higher overall interoperability in the industry."
"Microsoft's commitment to interoperability represents a key issue to accelerate the provision of real e-government services by public administrations, anywhere and anytime," said Ignacio Alamillo, CATCert's research director of Spain's Generalitat de Catalunya. "Microsoft's role as a key player in interoperability will help remove the main technical barriers to global e-government administrative services, reducing cost and time to market."
"We welcome Microsoft's initiative on interoperability to address both technical and policy requirements and the invitation to participate in the council," said Luis Jimenez, subdirector adjunto del Centro Criptologico Nacional of Spain's CNI. "The requirement to achieve interoperability between public administration agencies operating in an e-government context is of ever-increasing importance."
Microsoft Invests in Interoperability
Microsoft is making long-term investments in interoperability. In February 2005, Microsoft Chairman and Chief Software Architect Bill Gates introduced "interoperable by design," a concept based on Microsoft's industry leadership in expanding the use of Extensible Markup Language (XML) and delivering technology that empowers customers by working with the applications and solutions they already have in place. Over the past 12 months, Microsoft has broadened its investments in interoperability, collaborating with partner and competitor software and hardware companies alike wherever improving interoperability for shared customers benefits all parties. Recent examples include the following:
-- Interoperable software designed in Microsoft® Virtual Server 2005 R2 to support Linux guest operating systems, and the royalty-free licensing of the Virtual Hard Drive (VHD) format to more than 45 vendors such as Akimbi, Brocade, Diskeeper, Fujitsu-Siemens, Network Appliance, Platespin, Softricity, Virtual Iron and XenSource.
-- Technical collaboration agreements with SAP AG, Hyperion Solutions Corp. and SugarCRM Inc.; technical work underway in the Microsoft Open Source Software Lab; and dialogue about interoperability issues for Windows®, Linux, UNIX and open-source software on its community Web site, Port 25.
-- Intellectual property licensing deals with companies including NEC Corp., Toshiba Corp., Sony Ericsson Mobile Communications, Autodesk Inc. and Nokia.
-- Ongoing participation in, and support of, industry standards for improved data exchange and application integration in technologies such as Web services (Web Services Interoperability (WS-I) participation), financial and business transactions (electronic data interchange (EDI) interoperability and radio frequency identification (RFID) integration in Windows Vista(TM) and the 2007 Microsoft Office system), speech-enabled applications and Web sites (Speech Application Language Tags (SALT) and VoiceXML in Microsoft Speech Server 2007), and Web content (XHTML 1.0 in the 2007 Microsoft Office system).
"Interoperability helps customers trust that they are making the most out of their IT investments, and our work on interoperability is consistent with the approach we are taking on security and privacy," Muglia said. "We are committed to interoperability for the long term, so watch this space."
Additional information about Microsoft's interoperability commitment may be found at http://www.microsoft.com/interop.
Founded in 1975, Microsoft is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.
NOTE: Microsoft, Microsoft Dynamics, Windows and Windows Vista are either registered trademarks or trademarks of Microsoft Corp. in the United States and/or other countries.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
Representative Support for the Interoperability Customer Executive Council
"I have been working on interoperability issues for over 30 years as a law enforcement officer, and solving this problem is a priority of mine in Congress. Interoperability equally impacts governments, citizens and the private sector, and finding solutions demands that governments, vendors and customers work side by side. The private sector is often more innovative and adaptive than government, providing solutions to challenges that government cannot. Industry-led initiatives such as Microsoft's are promising and are an important step in improving software and hardware interoperability and ultimately in making interoperable communication a reality."
- U.S. Rep. Dave Reichert
R-Wash.
"Interoperability and reusability place demands on governments, citizens and the private sector; finding solutions requires that vendors and their customers must work side by side. That makes industry-led initiatives such as Microsoft's promising when it comes to meeting customers' needs and improving the effectiveness of software and hardware."
- Alan Bellinger
Member
U.K.'s National Computing Centre (NCC) and e-GIF Accreditation Authority
"Interoperability is one of the key issues facing the private sector, the government and the public, and finding solutions demands that vendors and customers work side by side. Intellect, the trade association for the U.K. hi- tech industry, warmly welcomes initiatives such as Microsoft's Interoperability Executive Council, which seeks to meet customers' needs by improving software and hardware interoperability."
- John Higgins
Director General
Intellect
"Interoperable hardware and software ultimately gives customers, businesses and governments the confidence to choose IT products that best meet their respective needs. Industry-driven efforts such as Microsoft's customer council are encouraging steps toward greater interoperability among IT components."
- Ina Gudele
Minister, Special Assignments for Electronic Government Affairs
Republic of Latvia
"Interoperability is an important feature of our purchasing decisions as we seek to employ the IT technology that is best suited to our needs. Industry initiatives such as Microsoft's are most welcome contributions to our efforts and promise to result in both increased productivity and cost savings."
- Jirij Bertok
ICT Director, Ministry of Defense
Republic of Slovenia
Sorry if already posted: Intel reveals R&D plans
Energy efficiency and performance are key
Linda Leung
Intel recently showed off a range of advanced-stage research projects to the press and potential partners at its fourth annual Research at Intel Day held at its facility in Santa Clara. The projects spanned mobile technology, enterprise computing, large-scale computing platforms and "people-centred" computing and are being developed under Intel's current mantra of "driving energy efficiency and performance."
In the keynote session that opened the day, Justin Rattner, Intel CTO, said: "It takes a good four years to develop a new generation of microprocessor, and another three or four years preceding that for research and getting the ideas. What you will see today is work that has been going on for the last few years." Intel did not announce which of the projects would make their way into the market.
Rattner said one of Intel's goals is to achieve a tenfold improvement in the energy efficiency and performance of its processors over the next three to four years. In communications, he said the major theme for Intel researchers is WiMAX and ultrawideband, adding that the two technologies will be "fully deployed in the platform over the coming year." In enterprise computing, Rattner said Intel is "going after the maintenance portion of the pie," with research focused on virtualization, data center performance and security.
Among the enterprise computing research demonstrated Wednesday were the following:
The adaptive firewall
Intel's traffic-adaptive filtering technology has been in development for two years. It sits on any node on the network and learns about traffic patterns so it can introduce shortcuts for frequently travelled paths. In their demonstration, Intel researchers showed a video streaming application going from a server to a client via a router with a firewall. The researchers launched a denial-of-service attack against the router, but the video traffic was unaffected: the filtering technology had placed shortcuts on the frequently travelled paths between the server and the client, and between the attacker and the firewall, which reduced the number of memory accesses in the classification process and increased the throughput of the firewall. The researchers said they plan to release the technology as open source by year-end.
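Intel did not publish implementation details, but the core idea -- cache a verdict for hot flows so repeat packets skip the full rule walk -- can be sketched in a few lines of Python. Everything below (the rule format, the threshold, the addresses) is an illustrative assumption, not Intel's design:

```python
# Toy sketch of traffic-adaptive filtering: frequently seen flows earn a
# cached "shortcut" verdict so repeat packets skip the full rule-list walk.
RULES = [
    ("10.0.0.5", "192.168.1.20", "allow"),  # video server -> client
    ("*", "*", "deny"),                     # default deny
]

def full_classify(src, dst):
    """Slow path: walk the whole rule list (many memory accesses)."""
    for r_src, r_dst, verdict in RULES:
        if r_src in ("*", src) and r_dst in ("*", dst):
            return verdict
    return "deny"

shortcuts = {}      # (src, dst) -> cached verdict: the learned fast path
hits = {}           # (src, dst) -> packets seen so far on the slow path
HOT_THRESHOLD = 3   # promote a flow to the fast path after this many packets

def filter_packet(src, dst):
    key = (src, dst)
    if key in shortcuts:                # fast path: one dictionary lookup
        return shortcuts[key]
    verdict = full_classify(src, dst)   # slow path: full classification
    hits[key] = hits.get(key, 0) + 1
    if hits[key] >= HOT_THRESHOLD:      # frequently travelled path: add shortcut
        shortcuts[key] = verdict
    return verdict

# The streaming flow earns a shortcut; a flood of packets from ever-new
# source addresses keeps hitting the slow path but cannot evict it.
for _ in range(5):
    filter_packet("10.0.0.5", "192.168.1.20")
assert ("10.0.0.5", "192.168.1.20") in shortcuts
```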
Trusted platforms with virtualization
This research puts Trusted Platform Modules (TPMs) into the virtualized computing environment. TPMs, based on a specification developed by the Trusted Computing Group, are microcontrollers used to store and authenticate passwords, digital certificates and encryption keys. In its research, Intel puts a software-based virtual TPM (VTPM) in front of each virtual machine client to attest to its status to the authentication server, which allows or denies the virtual machine's access to whatever server it wants to connect to based on the status the VTPM reports. The technology has been in development for almost two years.
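A rough Python sketch of that attestation flow follows. The VM names, known-good values and hash-as-measurement shortcut are all invented for illustration; a real TPM extends platform configuration registers and signs its quotes rather than returning a bare hash:

```python
# Illustrative attestation gate: each VM's virtual TPM reports a
# measurement; the authentication server compares it against a known-good
# value before granting access. Not Intel's code.
import hashlib

KNOWN_GOOD = {"vm-web": hashlib.sha1(b"trusted-boot-image-v1").hexdigest()}

class VirtualTPM:
    """Stands in front of one VM and attests to its software state."""
    def __init__(self, vm_name, boot_image):
        self.vm_name = vm_name
        self.measurement = hashlib.sha1(boot_image).hexdigest()

    def quote(self):
        return {"vm": self.vm_name, "measurement": self.measurement}

def authorize(quote):
    """Authentication server: allow or deny based on attested status."""
    return KNOWN_GOOD.get(quote["vm"]) == quote["measurement"]

good_vm = VirtualTPM("vm-web", b"trusted-boot-image-v1")
tampered = VirtualTPM("vm-web", b"trusted-boot-image-v1-plus-rootkit")
print(authorize(good_vm.quote()))   # True  -> access granted
print(authorize(tampered.quote()))  # False -> access denied
```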
Dynamic thermal management of the data centre
Developed in conjunction with Arizona State University, this research enables job scheduler software to take into account the temperature of servers or server blades before deciding which data centre component should do the job. The result should be an online thermal control framework that monitors and manages data centre thermal performance from a holistic viewpoint. The researchers say the challenge for the project is to make the system reactive so that it knows when servers are starting to fail because of heat issues. They say it could be another two years before this project could be presented to Intel as a potential product.
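As a toy illustration of thermal-aware placement -- not the ASU/Intel framework itself, and with invented server names, temperatures and threshold -- a scheduler might simply refuse servers near their thermal limit and prefer the coolest eligible one:

```python
# Hypothetical thermal-aware job placement: pick the coolest eligible
# server; refuse any server already near its thermal limit.
servers = {"blade-1": 58.0, "blade-2": 41.5, "blade-3": 72.8}  # inlet temp, C
THERMAL_LIMIT = 70.0   # above this, a server is considered at risk

def schedule(temps):
    eligible = {name: t for name, t in temps.items() if t < THERMAL_LIMIT}
    if not eligible:
        return None                        # whole room too hot: shed load
    return min(eligible, key=eligible.get)  # coolest server gets the job

print(schedule(servers))  # blade-2
```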
Corroboration catches stealthy worms
Slow worms are hard to catch, say Intel researchers, particularly if they try to hide in background traffic. In distributed detection inference, a node raises an alarm if something odd is happening and reports this to another node. That node may be reporting a false positive, but if a few other nodes do the same thing -- raise alarms and report onward -- the message eventually reaches a trusted node, which decides what action to take. Intel researchers describe this as nodes "gossiping" for the good of the network and said that most of the time the network is able to tell that a worm is attacking while the level of infection is still low. Researchers say the challenge is to put this distributed detection inference technology into the node's hardware rather than software, where it would be at risk of being taken over by the worm.
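The corroboration logic can be sketched simply. In the toy Python below, the threshold, node names and traffic-signature label are assumptions for illustration only; the point is that one alarm is merely watched, while several independent reports trigger action:

```python
# A single alarm may be a false positive, so the trusted node acts only
# once several distinct nodes have "gossiped" the same observation.
from collections import defaultdict

CORROBORATION_THRESHOLD = 3   # distinct reporters needed before acting

class TrustedNode:
    def __init__(self):
        self.reports = defaultdict(set)  # traffic signature -> reporting nodes

    def gossip(self, node, signature):
        self.reports[signature].add(node)
        if len(self.reports[signature]) >= CORROBORATION_THRESHOLD:
            return "quarantine"   # corroborated: likely a slow worm
        return "watch"            # could still be one node's false positive

hub = TrustedNode()
print(hub.gossip("node-a", "odd-port-445-scan"))  # watch
print(hub.gossip("node-b", "odd-port-445-scan"))  # watch
print(hub.gossip("node-c", "odd-port-445-scan"))  # quarantine
```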
All-wireless mesh
Intel researchers say the all-wireless mesh moves all network capabilities to the edge, enabling all nodes on the network to communicate with and through each other using wireless technologies and mesh networking. Intel said the technology could promote the formation of ad-hoc groups that could share information with each other wirelessly. The research uses distributed virtualization based on PlanetLab, an experimental network built by academic and commercial researchers across the world that rides over the public Internet.
Juniper Networks and Microsoft Sign Agreement to Enhance IPTV Infrastructure Security
Monday June 5, 8:00 am ET
Juniper Joins Microsoft TV IPTV Ecosystem
CHICAGO--(BUSINESS WIRE)--June 5, 2006--Juniper Networks, Inc. (NASDAQ:JNPR - News) and Microsoft Corp. today announced a global agreement to deliver high-performance security solutions that enhance protection for Internet Protocol (IP) networks, services and applications. The companies are collaborating to provide end-to-end security with superior levels of quality and reliability to address the current and emerging needs of their service provider customers. With this agreement, Juniper can offer IPTV network security solutions to customers of Microsoft® TV IPTV Edition.
The Juniper Firewall and Firewall IDP (Intrusion Detection and Prevention) product platforms complement the Microsoft TV IPTV Edition content security mechanisms to help protect the infrastructure from malicious traffic and attacks such as worms, trojans, spyware and application layer threats. In addition, Juniper will offer various security consulting services that assist operators to assess service infrastructure vulnerabilities and design network security solutions.
The Juniper security products offer cost-effective scale and performance, enabling operators to protect large numbers of video serving platforms. The security products include:
Juniper Networks NetScreen-5200 and NetScreen-5400 Integrated Firewall/IPSec Virtual Private Network (VPN) appliances that are purpose-built, dynamic security appliances with industry-leading flexibility and performance capabilities to protect service provider networks and network data centers;
Juniper Networks Integrated Security Gateway (ISG) 1000 and 2000 with Intrusion Detection and Prevention (IDP) appliances that provide strong access control, secure communications and network and application-level security. These products lower the total cost of ownership for deploying best-in-class firewall, VPN and intrusion prevention services.
Juniper Networks NetScreen-Security Manager that provides easy-to-use centralized management to control all aspects of the Juniper Networks Firewall/IPSec VPN appliances including device configuration, network settings and security policy.
"As the Microsoft TV IPTV Edition platform continues to gain momentum with leading service providers worldwide, Microsoft is committed to building upon its IPTV solution with leading technologies that will contribute to the success and adoption of IPTV services," said Christine Heckart, general manager of marketing for Microsoft TV. "Juniper delivers a unique security solution for the IP infrastructure that will add a layer of network security and enable operators to more reliably utilize IP networks to deliver a range of next-generation television services and experiences."
"As providers continue to bundle IPTV into their multiservice offerings, the highest levels of quality, performance and reliability are required to both help secure the network and assure the subscribers' experience," said Rob Sturgeon, executive vice president, Security Products Group, Juniper Networks. "This alliance with Microsoft enables us to address these requirements and be a part of their ecosystem that delivers an end-to-end IPTV solution to service provider customers worldwide."
About the Microsoft TV Platform
The Microsoft TV platform is a family of software solutions that help network operators create and deliver new digital TV services that delight consumers. Designed to help cable providers and telecommunications companies derive more value from their digital video and network infrastructure investments, the Microsoft TV family supports a full range of services including interactive program guides, digital video recording, high-definition TV, on-demand programming and Internet Protocol TV (IPTV) services. The Microsoft TV platform works across a full range of set-top boxes and TV devices. More information about Microsoft TV can be found at http://www.microsoft.com/tv.
About Microsoft
Founded in 1975, Microsoft (Nasdaq:MSFT - News) is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.
About Juniper Networks, Inc.
Juniper Networks is the leader in enabling secure and assured communications over a single IP network. The company's purpose-built, high performance IP platforms enable customers to support many different services and applications at scale. Service providers, enterprises, governments and research and education institutions worldwide rely on Juniper Networks to deliver products for building networks that are tailored to the specific needs of their users, services and applications. Juniper Networks' portfolio of proven networking and security solutions supports the complex scale, security and performance requirements of the world's most demanding networks. Additional information can be found at www.juniper.net.
Juniper is demonstrating its portfolio of secure and assured networking solutions for the delivery of advanced IPTV and multiplay services in its Booth #42031 at GLOBALCOMM 2006. Juniper will also be hosting a press and analyst luncheon event on Tuesday, June 6th, to discuss enhancements to its IPTV strategy and partnerships. The luncheon will take place at the Hyatt McCormick Place in Room CC 10 AB and will be available via webcast for those not attending GLOBALCOMM at: http://www.juniper.net/events/globalcomm/.
Microsoft is a registered trademark of Microsoft Corp. in the United States and/or other countries. Juniper Networks, Netscreen and the Juniper Networks logo are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.
AMD Responds to Intel vPro
Filed under: General News
At a technology briefing at its Sunnyvale, California headquarters, AMD unveiled new platforms that could ultimately compete with Intel's vPro technology. The announced platforms center on pairing coprocessors with Opteron processors, tightly coupling security, virtualization and management for servers and desktops, and adding a virtual desktop model. And while Intel has seen significant success with its platform approach among system builders, AMD points out that Intel's strategy differs significantly from AMD's: Intel's platforms are closed, using its own components, while AMD aims to give customers a choice of components to use inside the platform.
In a demonstration of AMD's open and collaborative approach to innovation, Senior Vice President, Commercial Segment, Marty Seyer detailed three complementary strategic initiatives designed to accelerate industry-wide innovation on the AMD64 platform.
"Torrenza" represents the industry's first open, customer-centric x86 innovation platform, capitalizing on the Direct Connect Architecture and HyperTransport advantages of the AMD64 architecture to enable other processor and hardware providers to innovate with a common ecosystem. The technology will allow system makers to couple Opteron processors with coprocessors for high performance computing, advanced security, or other specialized applications.
"Trinity" is AMD's strategy to uniquely link, through an open approach, security, virtualization, and manageability technologies. Trinity is intended to enable greater flexibility and reduce costs associated with managing, securing and scaling commercial client and server platforms. Seyer said AMD would be putting an open management partition that is extensible for partners in part of the core of DDR2 memory in its next-generation processors. This technology is strongly rooted with AMD's virtualization technology code-named Pacifica. AMD is therefore designing Trinity to allow its partners to harness virtualization by giving them the keys needed to integrate into their own existing tools.
"Raiden" will build on Trinity to reinvent the commercial client experience. Raiden is somewhat akin to a virtual desktop approach as it shifts the focus from physical client computing to a delivery of client services from a server or a blade system on the corporate network. Raiden will support traditional PC clients, as well as new form factors inspired by such industry dynamics as software-as-a-service delivery models. IT departments have a problem with image management and need to reduce costs. AMD hopes to centralize control and management of the applications and data and thus lower management costs versus the traditional PC model.
While these technologies and ideas may take time to permeate mainstream corporate IT, AMD is already working with customers and is readying many of the building blocks needed for future client devices that make use of these concepts.
AuthenTec Providing Biometric Fingerprint Security for New Fujitsu High-End Executive Notebook PC
AuthenTec today announced that Fujitsu Computer Systems Corporation is embedding fingerprint sensors from AuthenTec into its new LifeBook Q2010 high-end executive notebook PC, expanding its use of biometric security across its line of mobile computers. One of the first PC manufacturers to adopt biometrics, Fujitsu now offers AuthenTec's advanced fingerprint security and convenience on more notebook and tablet PC models than any other manufacturer. In its latest announcement, Fujitsu integrated the EntrePad 2501A swipe sensor from AuthenTec -- the world leader in fingerprint sensor security, innovation and sales -- with a Trusted Platform Module (TPM) 1.2 to provide advanced security for the recently announced LifeBook® Q2010 notebook.
Company Web site: http://www.authentec.com/
Manufacturer's press release:
Orda, agreed. Just pointing out the opportunity.
Pickle
The Net comes up short in building confidence
Security-software execs say it will take years to curb consumer fears
By John Shinal, MarketWatch
Last Update: 11:26 AM ET Jun 2, 2006
CARLSBAD, Calif. (MarketWatch) -- It may take several more years before consumers are making financial transactions over the Internet with minimal fear of fraud, the chief executives of two security-software firms said Thursday.
Improved technology and consumer education will both be needed before users trust the worldwide data network for sensitive business the same way they do an automated teller machine, according to John Thompson of Symantec Corp. (SYMC) and Art Coviello of RSA Security Inc. (RSAS).
RSA develops software that verifies the identity of networked computer users. Symantec sells the Norton line of anti-virus security programs.
"It's going to take a bit of time to get there" because "the Internet was developed in an insecure environment," Coviello told an audience of executives gathered near San Diego for the D4 -- All Things Digital conference.
Threats of identity theft, fraud and other scams resemble the fears that had to be tackled when banks introduced automated teller machines, Coviello said. Of course, the industry overcame those concerns, and ATMs went on to become a ubiquitous and trusted method of consumer banking.
"It took 10 to 15 years to build that trust," Coviello said. "I'm hoping it doesn't take that long" for the Internet.
More than half of Americans say that, because of fear of fraud, they spend less time online than they would otherwise, Thompson said, citing a report by the Cyber Security Industry Alliance.
"Now, it's about identity theft," Thompson said, adding that consumers need to become much savvier about how they use the Internet.
An estimated 4,000 versions of so-called phishing emails are sent out to millions of Internet users every month in hopes of ensnaring consumers in scams or stealing their identities to gain access to personal information and assets. Read more on mass-marketing scams.
"There's an enormous education campaign that needs to take place," Thompson said.
Coviello said the industry will need to develop security solutions that are "idiot proof" so that everyone who surfs the Net will feel comfortable doing business online.
Coviello and Thompson differed somewhat on the relative importance of innovation versus user awareness, but agreed that companies like theirs still have work to do.
Still, the trend is moving in the right direction, Thompson said. He said the number of major computer viruses that cause widespread disruption to users of Microsoft Corp.'s Windows operating system has dwindled significantly since 2004.
"Microsoft and security software firms are doing a better job at this," Thompson said
OT: Microsoft Offers All-in-One PC Care for Consumers With Windows Live OneCare
Wednesday May 31, 12:01 am ET
Like a "pit crew" for consumer PCs, OneCare service is now available in the U.S. at Best Buy and over a dozen other leading national retailers.
REDMOND, Wash., May 31 /PRNewswire-FirstCall/ -- Microsoft Corp. (Nasdaq: MSFT - News) today announced U.S. availability of Windows Live(TM) OneCare(TM), the company's all-in-one, automatic and self-updating PC care service designed to help consumers more easily protect and maintain their PCs. It is available at Best Buy(TM) and more than a dozen other major retail chains nationwide, as well as via direct download from http://onecare.live.com. Windows Live OneCare is available for an annual subscription rate of $49.95 MSRP* for up to three personal computers, and many of the retailers plan to offer rebates and other types of in-store promotions for Windows Live OneCare in the coming weeks and months.
"Windows Live OneCare delivers what millions of consumers have been asking for: one source for top-to-bottom maintenance, support and performance optimization tools plus increased protection that takes the worry out of PC care," said Bill Gates, chairman and chief software architect at Microsoft. "This 'just take care of it' experience enables customers to focus on what they really care about, which is to be able to sit down at their PCs and enjoy their digital lifestyles."
To help celebrate the launch of Windows Live OneCare, Microsoft retail partner Best Buy will display the OneCare product logo prominently on the Best Buy-sponsored No. 66 stock car that Jeff Green will drive in the Neighborhood Excellence 400 event this Sunday, June 4, at Dover International Speedway in Dover, Del. In addition, Microsoft and Best Buy representatives will be on site at the race track celebrating the launch of Windows Live OneCare and the retail alliance.
"My pit crew helps keep me safe and my No. 66 car performing at its best, so I can focus on racing to the checkered flag. Windows Live OneCare offers that same value -- it's like a pit crew for your PC," said Green, driver for the Best Buy-sponsored Haas CNC Racing team. "I'm looking forward to racing the OneCare-branded car this weekend and helping Microsoft launch this great service."
Windows Live OneCare is designed to alleviate the confusion and frustration that many consumers experience when trying to protect their computers from viruses, spyware and other threats. It also goes beyond security to simplify other essential PC care practices, such as backing up important data and regularly running performance maintenance tasks.
"Best Buy and Microsoft are both strongly dedicated to helping customers get the best possible performance from their PCs," said Sean Skelley, senior vice president of services at Best Buy. "We're pleased to add Windows Live OneCare to our offerings in-store and to help spread the word about this exceptional all-in-one PC care service. As an added convenience for our customers, Geek Squad® Agents are available 24x7 to install OneCare in-home or at precincts in our stores."
Since the initial Windows Live OneCare beta was released in November 2005, Microsoft has continually added new features in response to consumers' feedback. OneCare now provides the following features:
-- Protection Plus includes anti-virus and firewall protection features and automatic updates, as well as anti-spyware functionality to help protect the PC and the customer.
-- Performance Plus delivers regular PC tuneups to help maintain computer performance and reliability.
-- Backup and Restore delivers easy-to-use backup and restore functionality for the customer's important photos, music files and more.
-- Help and Support provides effective help when needed through a variety of modes -- e-mail, phone and chat -- with all service support coming from PC care experts at Microsoft for no additional charge.
In addition to expanding Windows Live OneCare capabilities over time, Microsoft plans to begin rolling out the service in international markets within the next 12 months. More information about Windows Live OneCare can be found at http://onecare.live.com.
About MSN and Windows Live
MSN® attracts more than 465 million unique users worldwide per month. With localized versions available globally in 42 markets and 21 languages, MSN is a world leader in delivering compelling programmed content experiences to consumers and online advertising opportunities to businesses worldwide. Windows Live, a new set of personal Internet services and software, is designed to bring together in one place all the relationships, information and interests people care about most, with enhanced safety and security features across their PC, devices and the Web. MSN and Windows Live will be offered alongside each other as complementary services. Some Windows Live services entered an early beta phase on Nov. 1, 2005; these and future beta updates can be found at http://ideas.live.com. Windows Live is available at http://www.live.com. MSN is located on the Web at http://www.msn.com. MSN worldwide sites are located at http://www.msn.com/worldwide.ashx.
OT: IBM, HP switch to multicore servers
"Single-core processor - you are the weakest link"
Patrick Thibodeau, Framingham
The single-core processor is apparently all but history, as major server vendors Hewlett-Packard and IBM brought out new systems last week based on Intel's dual-core chips.
With the moves by IBM, HP and others to use the new chips, Intel is projecting that by the end of this year, 85% of all processor shipments from the company will be dual core, a spokesman said.
By late June HP will largely have dual-core capability across its entire set of two- and four-way servers, "from the least expensive all the way to the top end", said John Gromala, director of server product marketing. HP's remaining single-socket systems will be updated later this year with dual-core capabilities, he said.
Intel Xeon dual-core chips will also be the dominant processor in the new ProLiant and BladeSystem servers HP will ship next month.
HP said the Xeon-based systems will triple the performance of the new systems — not only because of the processors but also as a result of a redesign of the subsystems, including memory, storage and management controllers, to support the new technology.
Meanwhile, IBM announced plans to start shipping three new System x servers running Intel's dual-core chips in June.
Intel's dual-core processors include the low-end Xeon dual-core 5000 series of chips, codenamed Dempsey, and high-end systems that will be based on the Xeon 5160 processor, or Woodcrest, which has about 3.1 times the performance of the single-core Xeon processor.
Jonathan Eunice, an analyst at IT research firm Illuminata in New Hampshire, US, said dual-core systems deliver a performance boost because they can handle multithreaded applications, which can be bandwidth intensive.
OT: New Line Of Integrated Security Appliances
Wednesday, 31 May 2006, 10:54 am
Press Release: Symantec
Symantec Introduces New Line Of Cost-effective Integrated Security Appliances
Symantec Gateway Security 1600 Series Appliances Deliver
Affordable, Robust Security
Symantec Corp. (NASDAQ: SYMC) today announced the Symantec Gateway Security 1600 Series, easy-to-manage, multi-function integrated security appliances designed to provide customers comprehensive protection against emerging threats. Symantec Gateway Security 1600 Series is devised to save customers valuable time and money by offering easy-to-use functionality that is ideal for smaller sites. Powered by Symantec’s award-winning technologies, Symantec Gateway Security 1600 Series delivers an integrated unified threat management (UTM) offering that can be centrally managed to protect medium-sized businesses and the remote and branch offices of large organisations.
Symantec Gateway Security 1600 Series appliances incorporate Symantec Gateway Security v3.0 software to provide eight essential security functions for maximum effectiveness, while reducing the complexity of security management. Symantec Gateway Security 1600 Series tightly integrates anti-spam, antivirus, anti-spyware, clientless SSL and IPSec VPN, full-inspection firewall, intrusion prevention, intrusion detection and both dynamic document review and URL list-based content filtering technologies to provide proactive, zero-day protection against today’s complex internet threats.
Symantec Gateway Security 1600 Series reduces acquisition and installation costs by integrating market-leading security functions from one vendor into a single appliance. Each system provides robust management functionality that includes configuration, event logging and alerting and detailed textual or graphical reporting. Optional capabilities are available that allow customers to gain global visibility and control through centralised policy management and event correlation for Symantec Gateway Security appliances in their networks. Customers are provided the ability to efficiently analyse and act on malicious activity.
Symantec Gateway Security 1600 Series provides comprehensive content filtering capabilities through its support for URL-based list filters combined with Symantec’s patented Dynamic Document Review (DDR) technology. DDR prevents employees from accessing objectionable content by allowing administrators to define context-sensitive word relationships.
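Symantec's patented DDR algorithm is not public here; the Python sketch below only illustrates the general "word relationships" idea with invented word pairs and weights. Pairs of co-occurring words score a page up or down, so a word that is harmless in one context can contribute to blocking a page in another:

```python
# Toy context-sensitive filter (not Symantec's DDR): administrator-defined
# word *pairs* adjust a page's score; single words alone decide nothing.
BLOCK_SCORE = 2.0
RELATIONSHIPS = {
    frozenset({"casino", "jackpot"}): 1.5,
    frozenset({"casino", "deposit"}): 1.0,
    frozenset({"casino", "regulation"}): -2.0,  # news/policy context: exonerating
}

def score(text):
    words = set(text.lower().split())
    return sum(w for pair, w in RELATIONSHIPS.items() if pair <= words)

def allowed(text):
    return score(text) < BLOCK_SCORE

print(allowed("state regulation of casino licensing"))  # True  (score -2.0)
print(allowed("casino jackpot deposit bonus"))           # False (score 2.5)
```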
An ongoing problem IT administrators face today is ensuring the security and integrity of desktop and remote PCs. Symantec Gateway Security 1600 Series offers client compliance functionality that allows customers to enforce policies over systems connecting remotely via IPSec or Clientless SSL VPN. Prior to allowing access to a trusted network, remote users’ systems are scanned to determine if the appropriate security protections (antivirus definitions, security engines, intrusion detection signatures, firewall) are up-to-date and working properly.
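A minimal sketch of such a pre-admission compliance scan follows; the field names, checks and the seven-day freshness rule are assumptions for illustration, not Symantec's actual policy:

```python
# Hypothetical endpoint posture check run before VPN access is granted.
from datetime import date

REQUIRED = {"firewall_on": True, "ids_signatures_current": True}
MAX_DEFINITION_AGE_DAYS = 7   # antivirus definitions must be this fresh

def compliant(posture):
    if any(posture.get(k) != v for k, v in REQUIRED.items()):
        return False
    age = (date.today() - posture["av_definitions_date"]).days
    return age <= MAX_DEFINITION_AGE_DAYS

laptop = {
    "firewall_on": True,
    "ids_signatures_current": True,
    "av_definitions_date": date.today(),
}
print("grant VPN access" if compliant(laptop) else "quarantine")
```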
“Smaller sites often do not have the resources and financial means to deploy the integrated security technologies necessary to protect themselves from today’s complex internet threats,” said Rogan Mallon, principal systems engineer, Symantec Corp. “Symantec Gateway Security 1600 Series provides small to medium enterprises and branch offices affordable, scalable, multi-function security in one centrally managed appliance.”
Scalable Management
Customers of Symantec Gateway Security 1600 Series are offered a wide range of centralised management and reporting capabilities through the Symantec Gateway Security Advanced Manager 3.0 appliance. Symantec Gateway Security Advanced Manager 3.0 provides secure, centralised, web-based management of hundreds or thousands of Symantec Gateway Security appliances. Designed to increase organisational productivity and drive efficient use of resources through consolidated management capabilities, Symantec Gateway Security Advanced Manager 3.0 provides customers insightful security event analyses, faster time-to-response for security incidents and a lower total cost of ownership for security gateway deployments.
Symantec Security Response
Symantec Gateway Security customers benefit from Symantec Security Response - receiving the latest security updates and information available. Part of the Symantec Global Services and Support organisation, Symantec Security Response experts work to protect enterprise businesses and consumers from the latest security threats. Symantec Security Response, a team of dedicated intrusion experts, security engineers, virus hunters and global technical support teams, provides customers with comprehensive, global, 24x7 internet security expertise to guard against today’s complex internet threats.
Licensing and Availability
In order to accommodate varying performance needs, Symantec Gateway Security 1600 Series appliances will be offered in two different models (1620 and 1660). For easier deployment and scalable licence management of hundreds or thousands of remote sites, the base product licence includes all security functions for unlimited users. In addition, customised updates of security content, hot fixes and patches are automated via LiveUpdate technology.
Symantec Gateway Security 1600 Series is available through Symantec’s worldwide network of value-added resellers, distributors and systems integrators. Organisations seeking a reseller or distributor should contact Symantec at .
About Symantec
Symantec is the world leader in providing solutions to help individuals and enterprises assure the security, availability and integrity of their information. Headquartered in Cupertino, Calif., Symantec has operations in more than 40 countries. More information is available at www.symantec.com.
Intel Relentlessly Pursues Cutting Edge
Sunday May 28, 8:50 pm ET
By Dan Goodin, AP Technology Writer
Intel's Major Overhaul Is Part of Its Quest to Fit More Transistors on Same-Sized Chip
http://biz.yahoo.com/ap/060528/intel_factories.html?.v=5
CHANDLER, Ariz. (AP) -- The glass-encased room inside Intel Corp.'s microchip factory here, with its shiny, metallic surfaces and frigid air, is a world away from the blistering sun and brown earth outside.
An army of robots suspended from the vast ceiling glides from one refrigerator-sized machine to the next. Its cargo: thousands of 12-inch silicon platters that form the raw material for Intel's most sophisticated computer microprocessor to date.
Inside this chip fabrication plant on the outskirts of Phoenix, engineers clad in what look like space suits are six months into a dramatic overhaul that could determine Intel's future as it faces its stiffest competition in more than a decade.
Intel closed the factory, officially known as Fab 12, for 18 months and spent $2 billion to retool it with more than 800 machines that follow a new manufacturing recipe, one cooked up more than four years ago and already in place at a plant in Oregon. By year's end, the process will be up and running in a total of four fabs.
"Nobody ramps a technology at the rate we do," says Intel Vice President Tom Franz. "I'd be willing to stand up and say that in front of anybody, including our competitors."
The overhaul is part of Intel's and the rest of the semiconductor industry's relentless quest to shrink the size of its circuitry so more transistors fit onto the same size chips. For decades, the industry has doubled the number of transistors on a chip every two years or so, a pace that has become known as Moore's Law, after Intel co-founder Gordon Moore predicted it in a 1965 article.
Because it allows a new generation of smaller, faster products at roughly the same cost as earlier ones, Moore's Law has provided a growth engine that separates the electronics industries from virtually every other business.
But no other company spends as much money as Intel adhering to the law's rigorous demands, and as a result the payoff from more efficient factories is higher. Intel, which has spent $25.3 billion on new equipment over the past five years and is the world's largest chip maker, also gets important competitive advantages from its uncontested role as manufacturing champion.
"If you're the person that's setting the pace and setting the course, everybody else is chasing you and it's a lot easier to stay in the lead," says analyst Rob Enderle of the Enderle Group.
Thanks to Moore's Law, Intel's Core Duo microprocessor, being manufactured in Chandler, is small enough to fit on the nail of an adult pinky finger. If it were made using the process considered state-of-the-art in the early 1990s, its 151 million transistors would take up as much space as a compact disc jewel case.
Under the recipe being rolled out in Chandler, a chip's average circuitry measures 65 nanometers, small enough that 100 transistors would fit into a single human blood cell.
Within the next few months, the majority of Intel's processors will be made using the new process. That puts the Santa Clara, Calif.-based company about 18 months ahead of its chief competitor, Advanced Micro Devices Inc., and up to five years ahead of other chip makers, says VLSI Research analyst Dan Hutcheson.
Although some of the gear arrived just weeks ago and is nothing like the tools used in the past, the equipment is already intimately familiar to the thousands of engineers who work at Fab 12. That's because about 400 "seed" employees have already spent more than a year working in what amounts to a carbon copy of the plant in Oregon.
Now, under a process Intel executives call "Copy Exactly," the seeds are back in Chandler, where their job is to duplicate even the subtlest manufacturing variables found in Portland, from the color of a worker's gloves to the type of fluorescent lights used.
One of those seeds is Erica Anderson, a five-year Intel employee who's responsible for the performance and upkeep of two machines that wash silicon platters -- also known as wafers -- in a chemical bath to remove impurities. In January 2004, she left Chandler for a 16-month stint at a development facility in Oregon, so she'd know her part of the new process cold by the time Fab 12 reopened in October.
"There's peace of mind in knowing that your equipment is set up exactly and that the process worked up in Portland," says Anderson, 27. "You feel pretty confident that your process is going to work down here."
All the hard work is paying off. The "yield," or percentage of chips on a 12-inch wafer that function properly, rose more quickly during Fab 12's transition than the rollout of any new process in Intel's 38-year history, Franz said.
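To make the yield metric concrete -- the numbers below are invented, since Intel does not disclose per-fab yields -- the arithmetic is simply good dies over total dies:

```python
# Illustrative yield calculation with made-up numbers.
dies_per_wafer = 500   # candidate chips on one 12-inch wafer
good_dies = 460        # chips that pass functional test
yield_pct = 100 * good_dies / dies_per_wafer
print(f"{yield_pct:.1f}% yield")   # 92.0% yield
```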
There's little margin for error. Over the past few years, Intel's edge in manufacturing has been blunted. A series of new chip designs has allowed AMD's market share to rise more than 3 percentage points, to 18.2 percent, versus the 80.2 percent held by Intel, according to Mercury Research.
The most notable new feature was the ability for the smaller competitor's chips to handle larger blocks of memory needed by many corporations and scientific customers while remaining compatible with software designed for earlier systems.
While no one disputes the important advantage Intel gets from outspending its competitors on factory gear, AMD's gains are an important reminder that manufacturing prowess alone is no guarantee of success.
"It doesn't matter if you have a lot of manufacturing capacity if nobody wants to buy what you're selling," says Dan Niles, an investment manager with Neuberger Berman Technology Management.
Intel is countering AMD with a host of new processors based on a new chip design due in the second half of the year. One of them, for desktop PCs, will deliver 40 percent better performance while reducing power consumption by the same margin.
The company is also hard at work on an even more compact 45-nanometer recipe slated to be rolled out in 2007 that will ensure Intel maintains its manufacturing lead.
Intel Chairman Craig Barrett, whose "Copy Exactly" technique is credited by industry watchers as a key reason for the company's unmatched manufacturing muscle, says the challenge to get things right grows with each new transition.
"It's a little bit like the baseball player with a batting machine dialing up the speed of the pitches," he says. "Each generation we dial up the speed by 10 mph, but in spite of that, we're able to hit the ball more often."
Unix, did you say the same thing about dial-up?
Pickle
Healthcare exec talks security
George Rathbun, director of IT architecture at Pfizer, discusses a shared authentication approach.
Network World, 05/25/06
George Rathbun, director of IT architecture at Pfizer, is also the CTO for SAFE-BioPharma, the pharmaceutical industry group coordinating secure sharing of information with physicians and others. SAFE members, including Johnson & Johnson, Abbott Labs, Bristol-Myers Squibb, Procter & Gamble, Merck and GlaxoSmithKline, have embarked on a shared authentication approach based on public-key infrastructure cross-certification. Rathbun recently chatted with Network World Senior Editor Ellen Messmer to discuss how this security program works and what its implications are for users.
How many members does SAFE have, and what has the organization accomplished since its founding?
SAFE, which stands for Signatures and Authentication for Everyone, was established about one and a half years ago to meet the challenge of global online identification of individuals in the pharmaceutical industry. We now have 30 [corporate and government] members. We initially looked at sharing a single directory, a database of personal information, to have a single authentication source. But instead, we went with an approach to public-key infrastructure (PKI) and digital certificates based on a bridge.
What is that exactly?
A bridge is a certificate authority dedicated to issuing certificates for bridging multiple certificate technologies. Today, there's a SAFE bridge certificate authority that issues cross-certificates to anyone that's part of it. We call it the "trust bridge." It's maintained by a vendor, CyberTrust.
So how does this digital-certificate cross-certification work for SAFE members?
Well, for example, all of the workforce at Johnson & Johnson is already PKI-enabled internally with their own digital certificates. J&J [last month] elected to have their corporation certified with the trust bridge. To do that, J&J went to a cross-certification ceremony where agents from J&J made sure the certificate authorities are aligned and there are no discrepancies between policies. It's quite a bit of work. But it creates a trusted network of [certificate authorities] for authentication. Vendors, such as CoreStreet, are also involved in supporting the bridge.
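The mechanics Rathbun describes reduce to a small trust rule: each member CA cross-certifies with the bridge, so any two members can validate each other's certificates through it without ever cross-certifying directly. A toy Python sketch with hypothetical CA names (real cross-certification involves X.509 certificates and policy mapping, not a lookup table):

```python
# Why a bridge CA scales: N members need N cross-certifications with the
# bridge, not N*(N-1) pairwise agreements.
BRIDGE = "SAFE-Bridge"
CROSS_CERTIFIED = {           # mutual: member CA <-> bridge
    ("JNJ-CA", BRIDGE), (BRIDGE, "JNJ-CA"),
    ("Pfizer-CA", BRIDGE), (BRIDGE, "Pfizer-CA"),
}

def trusts(verifier_ca, issuer_ca):
    """True if a chain verifier -> bridge -> issuer exists (or same CA)."""
    if verifier_ca == issuer_ca:
        return True
    return ((verifier_ca, BRIDGE) in CROSS_CERTIFIED
            and (BRIDGE, issuer_ca) in CROSS_CERTIFIED)

# A Pfizer system validates a J&J-issued certificate through the bridge.
print(trusts("Pfizer-CA", "JNJ-CA"))   # True
```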
So how does all this technical effort come to serve business goals?
Doctors in hospitals are often participating in clinical trials. Intellectual property, such as laboratory notebooks and human studies, has to be signed by them or others. Today, documents receive wet signatures on paper, which is then scanned. The goal is to do this electronically with digitally signed, time-stamped documents. The SAFE authentication model means the doctor doesn't have to get a digital certificate from each company but just one issued under SAFE.
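As a shape-of-the-flow sketch only: SAFE actually uses X.509 certificates held on hardware tokens, whereas the Python below uses a raw Ed25519 key from the third-party cryptography package, and the record text is invented. It only shows what sign, time-stamp and verify look like:

```python
# Signing a time-stamped record in place of a wet signature (illustrative).
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

doctor_key = Ed25519PrivateKey.generate()   # would live on the USB token

record = b"Lab notebook entry: trial 42, cohort B, no adverse events."
stamped = record + b"|signed-at:" + datetime.now(timezone.utc).isoformat().encode()
signature = doctor_key.sign(stamped)        # replaces the wet signature

# Any party holding the doctor's public key can verify later.
try:
    doctor_key.public_key().verify(signature, stamped)
    print("signature and timestamp verified")
except InvalidSignature:
    print("document was altered after signing")
```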
So if one key goal at SAFE is to get doctors using SAFE cross-certified digital certificates, how is that proceeding?
The current strategy is to have members invite doctors into this and pay for their certificates. It also requires a hardware device to hold the certificate, a USB token or smart card. We believe that the Trusted Computing Group's Trusted Platform Module might also lend itself to this hardware model.
Why does SAFE insist on hardware-based certificates rather than software-based?
It was done from the point of view of the legal framework and policies that govern use of credentials. In the legal analysis, it was an issue of nonrepudiation and property protection, so that in a court of law the digitally signed document would still be accepted. With the soft certificates, the question is, would it hold up in court? Someone could ghost my machine or steal my password. But the Food & Drug Administration has said they'd consider soft certificates for submissions.
What's the biggest challenge in getting SAFE in use today, if it's not mandatory?
The challenge is the cost, which can range from $30 to $150. And we can't make the assumption the doctor alone reviews documents. Today, it's a preference among SAFE members to use the SAFE token in clinical trials, but we recognize there are still going to be wet-signed documents.
Network Access Control: Yet To Take Off
By Rajendra Chaudhary
Mumbai, May 25, 2006
Amidst all the hype around the wonders of Network Access Control (NAC) and security vendors touting NAC-compliant devices, one can't help but seriously rethink one's existing network security strategy.
With new threats emerging by the day and enterprise networks becoming increasingly ubiquitous, no one can say for certain that a device attempting to connect to the network is not a potential security threat to the company's network infrastructure.
NAC is essentially a framework that enforces security policy compliance on all devices seeking access to network computing resources. The NAC system proactively scans each endpoint device -- PCs, PDAs, smartphones, servers and so on -- and, upon detecting a possible threat, sends the user to a quarantined area for remediation. Only when the device is in compliance with all the security norms is it allowed to connect to the network.
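In code terms, that admission decision is a short policy function. The Python sketch below is generic and illustrative -- the checks and states are assumptions, not any vendor's product:

```python
# Generic NAC admission flow: deny unknowns, quarantine the non-compliant,
# admit the rest.
def admit(device):
    if not device.get("credentials_valid"):
        return "DENY"                 # unknown device: no network access
    failing = [c for c in ("antivirus_current", "patches_current", "firewall_on")
               if not device.get(c)]
    if failing:
        # Quarantine VLAN: device can reach remediation servers only.
        return f"QUARANTINE (fix: {', '.join(failing)})"
    return "ALLOW"                    # compliant: full network access

print(admit({"credentials_valid": True, "antivirus_current": False,
             "patches_current": True, "firewall_on": True}))
# QUARANTINE (fix: antivirus_current)
```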
Several industry reports and experts suggest that the market for NAC tools is going to go through the roof in the near future. According to Infonetics Research's recent report 'Enforcing Network Access Control', worldwide manufacturer revenue for NAC enforcement is expected to grow 1,101%, from $323 million in 2005 to $3.9 billion in 2008. The market for SSL VPNs for NAC enforcement is also slated to grow an impressive 798% in the same period.
However, despite all the buzz and apparent benefits, customer adoption of NAC hasn't taken off just yet. Customers worldwide are sticking to a wait-and-watch game plan for the time being, as there are unresolved issues with NAC deployments.
Elaborating on some of those challenges, Ken Low, senior manager of security, APAC enterprise marketing at 3Com, said, "The biggest hurdle that customers face in deploying NAC is the fact that it can be quite a tedious exercise. Traditional NAC deployments call for installing an agent on every single endpoint device connecting to the network, which is not only time consuming but also very expensive and fairly impractical."
"It involves numerous moving parts. As a result of these complexities a lot of customers who even purchased NAC haven't gone ahead with deployments," added Low.
The slower customer adoption rates could also be attributed to the current lot of NAC solutions, which don't offer sufficient interoperability. Different vendors developing their own NAC architectures and products has led to a stalemate and a marketplace filled with NAC solutions that do not gel well with the multi-vendor network infrastructure that customers tend to have.
This is where entities like the Trusted Computing Group (TCG), an independent non-profit consortium working on standard implementations for NAC, can really help resolve some critical issues. Its Trusted Network Connect (TNC) architecture is supported by 60 of its vendor members, including the likes of Juniper, Symantec, Meetinghouse, Nevis and Nortel.
Besides TCG, there are two major players slugging it out for a piece of the NAC pie: Cisco and Microsoft. Both companies are developing their own NAC-like solutions, with Cisco calling its version Network Admission Control and Microsoft calling its version Network Access Protection (NAP).
New Networking Technology Built on Mistrust
by Mel Beckman, Dr. I Doctor
May 25, 2006 -
A trusting nature has long been considered a likeable quality — but not when it comes to your network. A new networking concept — which captured all the buzz at the Networld+Interop show in Las Vegas this month — is grounded solidly in the idea of mistrust.
Considered the biggest change in network security since the invention of the firewall, Network Access Control (NAC) addresses security at the point at which the user attaches to the enterprise LAN, where viruses and interlopers previously have found easy ingress. But NAC is a fuzzily defined security paradigm and can be complicated to implement.
The heart of NAC is a new idea: the concept of not trusting users' connections to your network until the network first ascertains their identity and that they haven't been compromised by viruses, back doors, Trojan horses, or spyware.
The traditional way of handling inside users is to simply assume that if they've got enough physical access to your building to connect to your network, they must be safe to let on. But that ain't necessarily so. Mobile users could be connecting from a notebook computer or PDA infected with malware. Many corporate LANs have unguarded Ethernet ports that visitors — or the night cleaning crew — can easily tap into. With WiFi, people can latch onto your network from across the street.
NAC first uses a variety of authentication schemes to confirm that anyone connecting to the inside network is who he says he is. It then applies a policy to that person that determines what tests he or she must pass to be considered safe and what resources the user is allowed to access. That safety check is a tricky thing, and the policies that get applied to admitted users can be as simple as carte blanche access to all network services or as fine-grained as a unique policy for each person.
Every NAC implementation has three major components: authentication, endpoint security assessment, and network environment control. But those are pretty broad strokes, and for the most part, no current standards exist for how they're implemented. Vendors have thus far had to enact their own ideas of how these components should work.
One problem NAC faces: if a user isn't admitted to the network, what mechanism excludes her? Different vendors solve this in different ways. Some NAC implementations require advanced switches that support the 802.1X standard, a port-based authentication mechanism for enabling access on an Ethernet port. 802.1X has been around for a few years but recently became popular when it was repurposed to control WiFi access.
Other implementations try to control who gets an IP address and then block unauthorized IP addresses from using the LAN. However, this safeguard can be circumvented by clever hackers or by malware that bypasses DHCP servers.
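A toy sketch makes that weakness concrete. Assuming (hypothetically) that the enforcement point controls nothing but address leases, a host that assigns itself a static IP never consults the gatekeeper at all:

    APPROVED_MACS = {"00:11:22:33:44:55"}

    def dhcp_offer(mac):
        """A DHCP-level gatekeeper only governs hosts that ask it for a lease."""
        if mac in APPROVED_MACS:
            return "10.0.0.50"   # hand out a production address
        return None              # refuse a lease to unknown hosts

    # The bypass: a host that statically configures, say, 10.0.0.99 never
    # calls dhcp_offer() and is invisible to this control -- which is why
    # stronger schemes push enforcement into the switch port via 802.1X.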
There are as many flavors of NAC as there are vendors. Some major NAC methodologies, which might become de facto standards, are Cisco's Network Admission Control, Juniper's Infranet, Microsoft's Network Access Protection, and the Trusted Computing Group's (TCG) Trusted Network Connect.
Of those four, only the Trusted Computing Group is a nonprofit industry-standards organization. TCG is responsible for a number of security architecture standards, but its NAC standard, the Trusted Network Connect (TNC) specification, is still only sketchily defined. The draft TNC has been a starting point for many proprietary NAC offerings, but vendors have had to extend it; for example, TNC doesn't specify how software-based host firewalls should be handled when connecting users to the LAN.
In the proprietary arena, Microsoft's Network Access Protection (NAP) scheme is straightforward, but it only controls access for Windows workstations. Because NAP has hooks into the OS, it can directly measure whether the operating system is compromised. Microsoft implements this technology by installing a new Windows component on user systems called the System Health Agent. NAP employs DHCP as the network gatekeeper, issuing IP addresses only to systems that jump through all the hoops. NAP also interfaces with 802.1X, though, so you can achieve physical port-level security if your hardware supports it.
A more generic NAC solution is Cisco's Network Admission Control, which confusingly has the same acronym as Network Access Control. Cisco provides a free client called a Trust Agent that runs on end-user computers to check for malware. But Cisco also scans connecting hosts externally to check for known vulnerabilities - an improvement over relying on the host to guarantee its own safety. Cisco sells switches, which all conveniently support 802.1X, so naturally Cisco exploits that protocol for excluding unauthorized users. But Cisco also adds encryption, which goes way beyond the TCG minimum requirements, to prevent LAN eavesdropping.
Juniper's Infranet is deployed out of the firewall — not surprisingly, since Juniper is a firewall company. A host-installed client called Infranet Agent checks out a host's integrity. But if a host doesn't have the client, Juniper has no mechanism for keeping the user out since it doesn't support 802.1X.
At Networld+Interop, dozens of vendors offered other competing NAC products. As NAC standards mature, you can expect to see vendor offerings become more consistent. It seems, however, that NAC is here to stay.
OT: Fujitsu and Cisco announce high-performance routers for Japanese carriers
May 25, 2006, Tokyo and San Jose, CA -- Fujitsu and Cisco Systems today announced the release of the co-branded "Fujitsu and Cisco XR12400" series (four models) of high-performance routers specifically designed for the needs of the Japanese market. They are the latest products to be delivered through the strategic alliance between the two companies announced in December 2004.
Like the "Fujitsu and Cisco CRS-1" Carrier Routing System, which was introduced in May 2005, the new series runs on the next-generation Cisco IOS XR operating system.
Offering what the companies claim is highly secure virtualization, continuous system operation and multi-service scalability, the Fujitsu and Cisco XR12400 series provides intelligent routing systems that scale from 2.5- to 10-Gbit/sec capacity per slot to enable construction of next-generation IP/multiprotocol label switching (MPLS) networks.
The four models comprising the new series cover a range of switching capacities, from 80 Gbits/sec to 320 Gbits/sec, and each slot has a capacity of up to 10 Gbits/sec. Future models in the Fujitsu and Cisco XR12000 series will have per-slot capacities in the tens of Gbits/sec.
In addition, the companies announced that the Fujitsu and Cisco CRS-1 will be made available with multi-shelf capability, enabling carriers to scale networks in response to anticipated increases in traffic volumes. The addition of multi-shelf capability enables switching capacity of 2.5 Tbits/sec. In the future, the Fujitsu and Cisco CRS-1 will be able to accommodate 2 to 72 line-card shelves and 1 to 8 fabric shelves, with a total switching capacity of up to 92 Tbits/sec, say the companies.
Both the new Fujitsu and Cisco XR12400 (all four models) and the Fujitsu and Cisco CRS-1 come loaded with the latest version of the Cisco IOS XR operating system, which supports generalized multiprotocol label switching (GMPLS) functionality and secure domain router (SDR) functionality for high scalability.
Fujitsu and Cisco worked together on developing this version of the Cisco IOS XR with a special focus on meeting the demanding reliability standards of Japanese telecommunications carriers. The newly released version also includes traffic-analysis functions essential for delivering service-level agreement (SLA) guarantees and other features that respond to the requirements of Japanese telecommunications carriers.
iPod Chipmaker At Heart Of Vista's 'PDA Mode'
"SideShow" Allows Users To Sync PDAs, Cell Phones While Notebook Sleeps
05.24.06
WinHEC 2006
By Loyd Case
If you've ever wandered through a trade show or an airport, you've seen people struggling with open laptops, balancing them on luggage or armrests. What they're typically doing is trying to retrieve email, map directions or appointment information needed on the fly.
One solution to this is to sync the data with your cell phone, PDA or even music player. But this can be clumsy and, if you're like me, you may forget to sync your PDA or phone with the PC. What if you could get to your data on your laptop without opening the lid or powering up the system?
Windows Vista enables this idea with SideShow. SideShow defines a small display, which can be embedded in a laptop, typically on the lid of the unit. SideShow can even be embedded in a cell phone or other mobile device, and communicate with the laptop via technologies such as Bluetooth.
PortalPlayer has been working closely with Microsoft to define and extend the concept of SideShow. Using the company's Preface technology, OEMs can embed a SideShow display in a mobile PC that can be used as an embedded, fully media-capable PDA. Other companies are working on SideShow products as well, including Winbond and Freescale, though Winbond is focusing on keyboard micro-displays driven by the keyboard controller.
SideShow Removable
PortalPlayer's solution is more broadly based, and the company even envisions detachable SideShow devices that can be removed from the PC as needed.
The Preface product includes the controller, which is an ARM-based microprocessor with integrated video display controller and memory management unit. The Preface controller supports up to 128MB of DRAM and up to 32GB of flash memory. A Preface display integrated into a laptop communicates via USB, but can also control PCs through its support of SMBus.
Given PortalPlayer's heritage in developing software and hardware for music players, the company envisions full support for playing media files stored on the PC, as well as running Vista sidebar gadgets. Audio output is through the laptop output jacks, or the player can use its own jack.
SideShow Embedded
An embedded display draws power from the laptop battery, but since a typical SideShow unit draws less than 300mW, it shouldn't adversely impact battery life. If the user constantly hits the PC for data, powering it up over the SMBus and pulling data from the hard drive, the draw can be more substantial. The typical scenario, however, gives the embedded display its own dedicated flash memory, where data such as music files or Outlook contact info is cached, minimizing the need to power up the host.
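The trade-off described here is a classic cache-first read path. A hypothetical sketch, assuming the gadget keeps a local flash copy and wakes the host only on a miss:

    flash_cache = {"next_meeting": "10:00 Standup, Room 4"}

    def wake_host_and_fetch(key):
        """Expensive path: power the host up over SMBus and read the disk."""
        print("(waking host to fetch %r -- costs battery)" % key)
        return "fetched-from-host"

    def sideshow_read(key):
        # Serve from the gadget's own flash whenever possible; every cache
        # hit is a disk spin-up and host wake-up avoided.
        if key in flash_cache:
            return flash_cache[key]
        value = wake_host_and_fetch(key)
        flash_cache[key] = value
        return value

    print(sideshow_read("next_meeting"))   # served from flash, host stays asleep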
Note that SideShow is very much a Windows Vista technology. PortalPlayer showed an XP-capable display, but it's the implementation of Sidebar gadgets that will really drive adoption of SideShow.
OT: Tool helps programs befriend Vista
By Joris Evers, and Ina Fried, CNET News.com
Published on ZDNet News: May 24, 2006, 5:20 PM PT
SEATTLE--Microsoft is helping other software companies make sure their programs won't stumble on a new security feature in Windows Vista.
The software maker this week plans to release a new tool for developers that checks if computer programs will work with User Account Control, Chris Corio, program manager for UAC, said Wednesday. The Vista feature runs a PC with fewer user privileges for security reasons.
"Test your applications and understand how they work on Vista," Corio in a session at Microsoft's Windows Hardware Engineering Conference here. "Understand the difference UAC makes; it can be traumatic for you if you've never designed for the standard user."
Reducing user privileges is a major change for Windows. At an early point in the development of Vista, Microsoft found that more than 50 percent of its own applications wouldn't run with it, Corio said.
The new "Standard User Analyzer" tool should help make sure people get applications that work when Vista ships, he said.
Running Vista with fewer privileges should improve the security of Windows. Malicious code that makes its way onto a Vista PC won't be able to do as much damage as on a PC running in administrator mode, which is a typical setting for Windows XP.
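Developers adapting to this model first need to know when they are not elevated. A minimal sketch using the long-standing IsUserAnAdmin shell call via ctypes (a real, if deprecated, Windows API; error handling kept deliberately thin):

    import ctypes

    def running_as_admin():
        """True if the current process has administrative rights (Windows only)."""
        try:
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except AttributeError:
            return False   # non-Windows platform: ctypes has no windll

    if not running_as_admin():
        print("Standard user: write to per-user paths, not Program Files or HKLM.")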
With Windows computers around the world under repeated attack, Microsoft has made security one of its top priorities for Vista. As a result, the update will be less vulnerable than any prior Microsoft operating system, Mike Nash, the corporate vice president of Microsoft's Security Technology Unit, said in a session with reporters here.
Microsoft has looked at some 1,400 different threat models and hired penetration testers to try to break into systems running the next version of its flagship operating system, Nash said. Still, attacks will remain a fact of life, he said. "Windows will continue to be an area of interest among everyone," he said.
Some of the security woes can be solved by educating people about the importance of security messages delivered by Windows. But Microsoft says it knows that the biggest factor is how many of these messages people encounter. The goal is to reduce the number of alerts the operating system displays over time.
Changes are already visible in the latest Vista test release. In the December preview, nearly every action in the configuration panel required people to attain full privileges, indicated with a shield icon below the feature. In the Vista beta released this week, only a few actions need elevated privileges, Corio said.
UAC will be front and center in Vista. Another lower-level security feature is only gradually making its way into the operating system.
One requirement will appear first in the 64-bit edition of Vista. That version will require signed kernel mode drivers, which run hardware such as the hard disk drive and network interface card.
"This is how rootkits get into the OS," Nash said. "I think this will go a long way toward making it harder for people to write malware," or malicious software.
Customers will be able to switch on the requirement for signed drivers on 32-bit versions of Vista, Microsoft representatives said at WinHEC.
Historically, many hardware products have shipped with unsigned device drivers whose origin cannot be verified.
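Driver signing follows the standard digital-signature pattern: the OS refuses to load a kernel module unless the image verifies against a trusted key. The sketch below shows the bare idea using the third-party cryptography package; Vista's actual mechanism is Authenticode over kernel modules, not this simplified check.

    # pip install cryptography -- illustrative only, not Authenticode.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    driver_image = b"...driver bytes..."
    signature = key.sign(driver_image, padding.PKCS1v15(), hashes.SHA256())

    def load_driver(image, sig):
        """Refuse to load any kernel module whose signature does not verify."""
        try:
            key.public_key().verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    assert load_driver(driver_image, signature)                   # intact: loads
    assert not load_driver(driver_image + b"rootkit", signature)  # tampered: refused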
Other security features in Vista include protection against spyware and an improved firewall. It will also include a new version of Internet Explorer that will run in "protected mode" to prevent silent installs of malicious code, Microsoft has said.
Microsoft's Standard User Analyzer should be available by week's end on Microsoft's Download Center Web site, Corio said. An earlier tool, called LUA Buglight, while also potentially useful for Vista developers, was really meant mostly for developers on Windows XP, he said.
CNET News.com's Joris Evers reported from Seattle, and Ina Fried reported from San Francisco.
Sounds like Phoenix is feeling threatened. They see the government moving rapidly toward TPMs.
Pickle
barge, doesn't Wave have patent protection on this technology?
Pickle
OT: Intel's Core Microarchitecture Redefines Computing
Tuesday May 23, 11:00 am ET
Intel Sets New Records in Performance and Energy Efficiency
SANTA CLARA, Calif.--(BUSINESS WIRE)--May 23, 2006--Intel today disclosed record-breaking(1) results on 20 key dual-processor (DP) server and workstation benchmarks. The first processor due to launch based on the new Intel Core(TM) microarchitecture -- the Dual-Core Intel Xeon processor 5100 series, previously codenamed "Woodcrest" -- delivers up to 125 percent performance(2) improvement over previous-generation dual-core Intel Xeon processors and up to 60 percent performance(3) improvement over competing x86-based architectures(4), while also delivering performance-per-watt leadership.(5)
"The performance and system-level power consumption we're seeing from our platforms built around the new Core microarchitecture has exceeded even our expectations," said Kirk Skaugen, vice president and general manager of Intel's Server Platforms Group. "At the same time, customers demand more than just energy-efficient performance. We've developed a superior platform that delivers the latest server technologies including faster and more reliable memory, Intel Virtualization Technology, Intel Active Server Manager and Intel I/O Acceleration Technology."
Fully-buffered dual in-line memory (FB-DIMM) technology allows for better memory capacity, throughput and overall reliability. This is critical for creating balanced platforms using multiple cores and the latest technologies, such as virtualization, to meet the expanding demand for compute headroom.
Shipping in Intel Xeon MP processors since last year, Intel Virtualization Technology provides silicon-level software support that improves dependability and interoperability and enables faster industry innovation. Intel Active Server Manager integrates hardware, software and firmware to manage today's complex datacenters and enterprise environments. Intel I/O Acceleration Technology improves application response time, server I/O performance and reliability.
Intel's new server and workstation platforms, codenamed "Bensley" and "Glidewell" respectively, are architected for today's dual-core processors. They will also support dual- and quad-core processors built using Intel's 65-nanometer (nm) and future process technologies.
The first processors for Bensley and Glidewell are in the Dual-Core Intel Xeon processor 5000 series, previously codenamed "Dempsey." Shipping since March at a new low price point, they bring innovation, higher performance and lower power consumption to the value server and workstation segment.
Complementing the 5000 series, Intel will ship the next processor for Bensley and Glidewell in June -- the Dual-Core Intel Xeon processor 5100 series. Based on the Intel Core Microarchitecture, the majority of these processors will only consume a maximum of 65 watts.
Using the SPECint(TM)_rate_base2000(a) benchmark, which measures integer throughput, a Dell PowerEdge 2950(a) server based on the Dual-Core Intel Xeon processor 5100 series scored 123.0, setting a new world record(6).
Using the SPECjbb(TM)2005(a) benchmark, the Fujitsu-Siemens PRIMERGY RX200 S3(a) server based on the Dual-Core Intel® Xeon® processor 5100 series broke the previous record with a score of 96,404 business operations per second(7).
Using the TPC(TM)-C(a) benchmark, which measures database performance, an HP ProLiant ML370 G5(a) server based on the Dual-Core Intel Xeon processor 5100 series smashed another world record by scoring 169,360 tpmC at $2.93/tpmC(8).
IBM(a) is also in the record books with the IBM System x3650(a) server based on the Dual-Core Intel Xeon processor 5100 series, which scored 9,182 simultaneous connections in the SPECWeb(TM)2005(a) benchmark, which measures web server performance(9).
Other leading benchmarks on Dual-Core Intel Xeon processor 5100 series-based servers include Lotus Domino(a) (R6iNotes(TM)), Microsoft Exchange Server 2003(a) (MMB3(TM)), SPECfp(TM)_rate_base2000(a) and SAP-SD(TM) 2-Tier(a).
These benchmarks, along with additional records set by the Dual-Core Intel Xeon processor 5000 and Dual-Core Intel Xeon processor 5100, can be accessed by visiting www.intelstartyourengines.com.
barge, I thought that was strange too, particularly given it is WinHEC week. I just remain reserved, because a Microsoft PR touting Wave's technology inside Vista is a little overwhelming at this point. The TPM circle is taking shape, with Wave possibly at the hub. It blows my mind!!! So I sit and celebrate the Mavericks and watch for more PRs.
Pickle
CS, great find! Definitely looks like a potential PR quote.
Pickle
Good stuff tonight, helpful. Liked the podcast. The gas station analogy was excellent. SKS should use it sometime.
Pickle
WinHEC 2006: Washington State Convention and Trade Center, Seattle, WA
May 23-25, 2006
WinHEC 2006 will explore technical innovations for the Windows hardware platform. The event will feature keynote addresses by Microsoft executives Bill Gates, Will Poole, and Bob Muglia.
The TCG will exhibit at WinHEC 2006 at booth #410. Please visit the booth for demonstrations by TCG member companies Infineon and Winbond.
Now that ought to be interesting.
PC8374S Desktop TrustedI/O with SafeKeeper™ TPM 1.2, Desktop Management
The Winbond PC8374S Advanced I/O product is a member of the PC837x SuperI/O family.
All PC837x devices are highly integrated and are pin- and software-compatible, thus providing drop-in interchangeability and enabling a variety of assembly options using a single motherboard and BIOS.
The PC8374S integrates SuperI/O, System BIOS Storage and Trusted Platform Module (TPM) 1.2 functionality in one device. It includes legacy SuperI/O functions, TPM 1.2, desktop management module, system glue functions, health monitoring and control, commonly used functions such as GPIO, and ACPI-compliant Power Management support.
The Trusted Platform Module provides a solution for PC security based on the TCG 1.2 standard. The complete security solution includes hardware, software, and firmware. The Trusted Platform Module includes a CompactRISC embedded RISC core for hidden execution of security code, secured information storage, a performance accelerator that supports cryptographic algorithms, and a True Random Number Generator (RNG).
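On hosts where the kernel exposes the TPM's hardware RNG as a character device (on Linux this is typically /dev/hwrng once the TPM driver registers it; that path is an assumption about the host's configuration), pulling entropy is a plain file read:

    def tpm_random_bytes(n):
        """Read n bytes from the hardware RNG; assumes the TPM driver has
        registered itself as /dev/hwrng, which is configuration-dependent."""
        with open("/dev/hwrng", "rb") as rng:
            return rng.read(n)

    # print(tpm_random_bytes(16).hex())  # uncomment on a suitably configured host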
The PC8374S contains a desktop management module, which provides the means for enhanced system serviceability, including non-volatile storage and reporting, and chassis control. The desktop management module provides both local and remote communication (via a LAN controller).
The PC8374S integrates miscellaneous analog and digital system glue functions to reduce the number of discrete components required.
System BIOS integration reduces the number of motherboard components and total system cost by eliminating the separate BIOS flash part.
The PC8374S extended wake-up support complements the ACPI controller in the chipset. The System Wake-Up Control (SWC) module, powered by VSB3, supports a flexible wake-up mechanism.
The PC8374S supports both I/O and memory mapping of module registers and enables building legacy-free systems.
http://www.winbond-usa.com/mambo/content/view/306/529/
Winbond - PC8374S SafeKeeper™ Desktop TrustedI/O
Key Features
* TCG 1.2 based Trusted Platform Module (TPM)
o Supports legacy interface
o Hardware and software protection schemes
* Integrated 6 Mbit BIOS Flash with SMBus-based programming
* System health, including SensorPath™ interface, fan monitor and automatic control
* Legacy modules: Parallel Port, Floppy Disk Controller (FDC), two Serial Ports, Serial InfraRed Port and a Keyboard and Mouse Controller (KBC)
* Glue functions to complement the South Bridge functionality including flash and FDD write-protect controls
* VSB3-powered Power Management with 19 wake-up sources
* Controls three LED indicators
* 16 GPIO pins with a variety of wake-up options
* I/O-mapped and memory-mapped registers
* 128-pin PQFP package
Intel OS X kernel no longer open
5/17/2006 12:47:46 PM, by Eric Bangeman
When Intel builds of Mac OS X were first seeded to developers, copies inevitably found their way out into the wild. Once that happened, enthusiasts immediately began trying to install Mac OS X on commodity x86 hardware. Apple—while aware of its inevitability—did not welcome the hacking, making moves to tie the Intel version of Mac OS X firmly to Apple hardware. As a result, the Intel version of Darwin is no longer open.
Darwin is the open-source, Mach-based kernel environment used by Mac OS X. When OS X development got underway, most of the code was maintained in a CVS repository, so the public code stayed nearly up to date with what Apple was using. The rest of the code was maintained in "snapshot" form, meaning that what is there represents the state of the codebase at a particular point in time (usually a major or minor release).
As Mac OS X has gone through subsequent versions, Apple changed the Darwin development process. Eventually, some Darwin drivers were released without the source code and Apple shifted its focus away from Darwin development to Mac OS X development. Now that Mac OS X is up, running, and widely available on Intel Macs, Apple has stopped releasing the source code for the x86 kernel and drivers.
Apple's reasoning is that providing the source for the Mac OS X kernel on x86 will make it easier for people to run Tiger on non-Apple hardware. Getting OS X up and running on a "generic" AMD or Intel PC can be done now, and instructions are available, albeit difficult to find due to the diligence of Apple Legal. Unfortunately, closing off the source further calls into question Apple's credentials as a good player in the open source scene.
Apple is in a unique position as a vertically integrated manufacturer in the computer industry. Providing the whole widget lets the company tie its hardware and software together into a tightly knit package. However, if one of the blocks in the vertically integrated tower is removed, the entire structure weakens. If Apple were to open up the iPod to play protected WMA files, it would run the risk of lower traffic and revenues at the iTunes Music Store. Similarly, if the company allowed Mac OS X to run on generic x86 PCs, its hardware sales would suffer as some users purchased cheaper Intel or AMD systems on which to run Mac OS X.
The fight against x86 hackers is going to be a long and protracted one. Apple is apparently using the Trusted Platform Module as one means of tying the Intel version of Mac OS X to Apple hardware. Many people are alarmed over the use of TPM because of its potential to be used to lock down other content, although Apple has yet to move in that direction. Apple has also moved against the OSx86 Project, forcing them to shut down their forums for a few days while instructions and links related to getting Mac OS X up and running on commodity x86 hardware were removed.
Inevitably, there will be copies of Mac OS X running on non-Apple hardware. The challenge for Apple is going to be making it as inconvenient for the average user as possible. Closing off the x86 Darwin source is just another way for Apple to raise the inconvenience factor.
wavxmaster, Unilever would definitely qualify as the kind of Fortune 1000 company SKS cited Wave as talking to about upgrades during the CC.
Pickle
http://www.hp.com/sbso/solutions/pc_expertise/professional_innovations/ProtectTools_Security_Manager...
Updated for the 1.2 TPM. No Wave mention, but it makes you wonder how long HP can keep advertising the 1.2 TPM and keep Infineon in the discussion.
Pickle
Alexander W. Koehler - Director - ICT - Fujitsu Reseller
Alexander W. Koehler studied mathematics and computer science at Karlsruhe University, Germany. After working as a software development engineer, he joined Hewlett-Packard in 1981, where he was International Product Manager for the HP 110, HP's first laptop computer. He has worked for IT companies as well as a consultant, and has published a book on IT and several articles in IT magazines. Since 1997 he has focused on IT security, which led him in 2000 to Utimaco Safeware, where he served as Head of Product Management and Business Intelligence. He has been Utimaco's Alliance Manager for the Trusted Computing Group, has held various TCG technology-related presentations and workshops worldwide, and defined the event format "Business Community Day". To better meet international clients' requirements, in 2004 he reorganized the IT consulting business into a new legal entity, ICT Economic Impact Ltd. As of 2006, ICT provides ICT-specific services and system integration for international and domestic clients. In the area of trusted computing, Alexander W. Koehler works closely with the Trusted Computing Group.
From ICT Economic website - Fujitsu reseller
Front Page
Dear CIO, IT-Manager and Enterprise User,
2006 is the most important year for IT security, in two respects: first, a second generation of security products is rolling out right now; second, "security is a process, not a product" is starting to gain momentum.
IT security is not the topic you put on the first page of your corporate brochure; it is simply a necessity, like having company cars. IT security is there to make a corporate network work predictably, in other words to make it a trusted one. It does not matter whether a safeguard is implemented with access control only or with encryption, as long as it does the job: security is a process, not a product!
By taking this approach, every enterprise can decrease its security costs tremendously and install rock-solid security at the same time. How? Well, not with products and product designs that have been on the market since last century!
The Trusted Computing Group has laid the foundations for an open security framework covering a broad range of client, server and infrastructure security aspects. The common denominator is that software-only solutions are no longer good investments. This second generation of products consists of highly specialised, standardized components with built-in superiority by design. ICT Economic Impact delivers these products, does system integration and provides all related cost-saving services.
We would not recommend becoming an early adopter, but an early beneficiary, now. We keep the promise: taking the cost and complexity out of data & network security.
Get in touch with ICT Economic Impact today and drive your IT security costs down! Watch this page; news is coming soon!
Best regards … Alexander W. Koehler, CEO 2006/04/24
Great find, CS!!
OT: European trio join forces for NFC market
Peter Clarke
EE Times
(05/04/2006 8:36 AM EDT)
LONDON — Mobile handset software developer Esmertec AG said it has joined forces with Inside Contactless SA, a fabless provider of smartcard chips, and Trusted Logic SA, a provider of security middleware, to develop near field communications subsystems for mobile phone makers and their chipset manufacturers.
Contactless technology, originally deployed in smartcards, is gaining ground in mobile devices, payment cards and for access control, identity and security applications, mass transit and electronic tagging solutions, Esmertec (Zurich, Switzerland) claimed.
The trio’s product offering will be a combination of NFC-enabled chipset technology from Inside (Aix-en-Provence, France) with Esmertec's Java software and secure middleware from Trusted Logic (Versailles, France) that the three companies will promote to their combined customer base, Esmertec claimed.
Esmertec did not say whether the complete subsystem would be offered as licensable intellectual property, or whether customers would license the software and purchase a chip or chipset. Neither did Esmertec say whether the trio would work with other hardware or software companies.
“Contactless chips in mobile handsets for payment and identification are a natural next-step, since handsets are personal items that users carry everywhere with them,” said Alain Blancquart, chairman and CEO of Esmertec, in a statement. “Combining our companies' hardware, software and middleware expertise will open up new opportunities in a wide range of applications that can benefit from such contactless solutions.”
“The market for contactless solutions in mobile phones and PDAs is poised for exponential growth, and the partnership we've announced today will help accelerate the development and delivery of numerous applications that will intuitively enable users to access content and services by simply touching smart objects,” says Remy de Tonnac, CEO of Inside, in the same statement.
JUNIPER NETWORKS SUPPORTS TRUSTED NETWORK CONNECT (TNC) OPEN STANDARDS FOR ACCESS CONTROL
Looks like Dutch
Helps develop the TNC specifications and deliver interoperable security technologies
Schiphol-Rijk, May 9, 2006 - The Unified Access Control (UAC) solution from Juniper Networks (Nasdaq: JNPR) supports the Trusted Network Connect (TNC) open standards. These non-proprietary specifications support the deployment and enforcement of security requirements for endpoints connecting to a network. They allow companies to use existing hardware and software in combination with Juniper Networks access control technologies, resulting in cost-effective, improved network and endpoint security.
TNC is part of the Trusted Computing Group (TCG), an industry standards organization founded to develop, define and promote open standards for computing and security technologies. The TNC specifications help network administrators enforce security policy across diverse network environments containing heterogeneous equipment and software from multiple vendors. Using these specifications and supporting security technologies, such as the Juniper Networks Unified Access Control solution, customers can check endpoint configurations and enforce security policy, all before a connection to the network is established. Building on the TNC specifications, customers can choose the best technology from a range of trusted tools to protect their networks against threats such as viruses, worms and denial-of-service attacks.
"Juniper Networks is actief betrokken bij Trusted Network Connect en helpt bij de opzet van uitwisselbare mechanismen. Deze mechanismen voorkomen dat onbetrouwbare apparaten op bedrijfsnetwerken worden aangesloten", zegt Paul Sangster, vicevoorzitter van Trusted Network Connect en ingenieur bij Symantec Corporation. "De ondersteuning en de bijdragen van Juniper Networks zijn cruciaal geweest bij de ontwikkeling van de TNC-specificaties. Deze maken uitwisselbaarheid van toegangscontrole met sterke zichtbaarheid, beleidscontrole en beveiliging door het complete netwerk mogelijk."
Interoperability of security technologies is an essential part of the TNC specifications and crucial to giving the IT community freedom of choice in access control technology. A recent independent trial conducted by the University of New Hampshire InterOperability Lab (UNH-IOL) successfully demonstrated that Juniper Networks' technologies interoperate with those of other vendors. Juniper Networks took part in a two-day event at which hardware and software built to the TNC specifications were tested in a simulated network environment. The trials showed that the UAC solution interoperates seamlessly with other hardware and software components that use the same specifications.
"TNC-specificaties krijgen steeds meer draagvlak. Hiermee komt de doelstelling: het uitwisselbaar maken van de producten van zoveel mogelijk leveranciers, dichterbij", zegt Hitesh Sheth, Vice-President beveiligingsproducten van Juniper Networks. "Juniper Networks ondersteunt open standaarden en voorziet klanten van de beste technologie om toegangscontrole-uitdagingen op te lossen. Met de Unified Access Control-oplossing krijgen klanten de beste beveiliging die flexibel is toe te passen in bestaande infrastructuur."
The Juniper Networks UAC solution, which includes the Infranet Controller appliances, is based on the Enterprise Infranet framework. The combination of identity awareness and endpoint intelligence gives enterprises real-time visibility into, and policy control over, the entire network. Companies can thereby control network access, comply with regulations and guarantee network services. The UAC solution offers endpoint- and identity-based controls and supports both client-host and network-based enforcement of dynamically provisioned firewall and IPsec policies.
The Infranet Controller appliance makes role-based policy decisions and feeds the Infranet Agent, a software agent that assesses the endpoint's compliance state before and during the session and enforces policy on the client host. The policy decisions apply to all Juniper Networks firewall/VPN appliances, which communicate with the Infranet Controller and perform intensive security functions without degrading throughput. The firewall/VPN appliances serve as enforcement points for decisions based on the identities assigned to users and on the endpoint assessment.
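In outline, the controller's job is a role lookup joined with the agent's compliance verdict, with the result pushed to the enforcement points. A hypothetical sketch; the names are illustrative, not Juniper's API:

    ROLES = {"alice": "engineering", "bob": "contractor"}
    ROLE_POLICY = {"engineering": {"source-repo", "wiki"},
                   "contractor": {"wiki"}}

    def controller_decision(user, compliant):
        """Return the resources the firewall/VPN tier should open for this session."""
        if not compliant:
            return set()   # the agent reported a failed posture check
        return ROLE_POLICY.get(ROLES.get(user, ""), set())

    print(controller_decision("alice", compliant=True))    # {'source-repo', 'wiki'}
    print(controller_decision("bob", compliant=False))     # set()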
###
About the Trusted Computing Group
The Trusted Computing Group (TCG) is a non-profit organization formed to develop, define and promote open standards for computing and security technologies, including hardware components and software interfaces, across multiple platforms, peripherals and devices. TCG specifications enable more secure computing environments without compromising functional integrity, privacy or individual rights. The primary goal is to protect information (data, passwords, keys and so on) against software attack and physical theft. For more information and specifications: www.trustedcomputinggroup.org.
About Juniper Networks
Juniper Networks enables secure and assured communications over a single IP network. The company's purpose-built, high-performance IP platforms allow customers to support many different services and applications at scale. Service providers, enterprises, governments, and research and education institutions worldwide rely on Juniper Networks to build networks tuned to the needs of their users, services and applications. The company's portfolio of proven network and security products supports demanding networks with complex requirements for scale, security and performance. For more information: www.juniper.net.
Hu Yoshida, of HDS fame, spoke about a conference in his latest blog, and one of the topics was virtualization.
Hu says that users at a CIO panel commented that "the next step in virtualization is virtualized environments. Where you can swap out a compliance environment for instance and then bring it back later when it is needed".
Exactly. When we speak about virtualization in specific tactical terms, such as storage virtualization for migration or server virtualization for consolidation, we lose the higher potential of the concept, which is really to do two things: first, it should abstract the user from the infrastructure on their way to and from the data they care about; second, behind that abstraction should be a living, breathing, morphable blob that can alter itself to best fulfill the requirements from the top of the stack (the user) or the bottom (the data).
I like the fact that people are talking about the V word more openly, and with less visible disdain, even if it is in terms that are still too simplistic, such as "improved utilization". Eventually people will come to grips with the fact that a fully integrated "Enterprise IT Virtualization" strategy will be the IT equivalent of the industrial revolution.
Where virtualization should live
Mainstreaming virtualization in the network
May 08, 2006 (Computerworld) -- Q: Where should virtualization really live? It is very confusing. -- B.R., Toronto
A: Yes. Virtualization is one of the most widely interpreted themes I’ve ever known. Here’s my attempt to simplify the context of virtualization, which of course will add even more complexity to your thoughts.
1. Virtualization isn’t new. It’s old. It’s as old as commercial computing.
2. It lives at the application layer, the network layer, the server layer and the storage layer. It always has, and it always will.
3. It has lived at these layers independently, typically as a very tactical band-aid to a very specific set of problems.
4. Regardless of where or what it is, it always follows the same motivational path: first, reduce the capital cost of gizmo acquisition; second, reduce the operating cost of running the gizmo; and third, get a better return on the people assets who spend way too much time doing manual gizmo labor.
5. Understanding and implementing an integrated “Enterprise IT Virtualization” strategy is smart, and inevitable.
What is an application other than a presentation layer abstracting the entire infrastructure we love to talk about from the user? Client-server applications are virtual by design. Web services extend that concept to the edge of sanity.
The network may be the best example of virtualization gone mainstream, via the motivational path set forth above. Not long ago a network was a programmable point-to-point connection schema where your network manager had a zillion IP addresses (or DECnet, Token Ring, etc.) all managed on their VisiCalc or Lotus spreadsheets (sound familiar storage guys?). It was a manual process. Adding or changing things took careful upfront planning and a truckload of luck.
Network gear was expensive, and the talent to run it was very, very specialized. Standards (de facto, initially) made things cheaper, but until we automated and virtualized the connections, networks were limited to the rich and famous. We have no idea today where things really live; they just show up on our virtual network.
DNS isn't manual anymore, so I don't need a rocket scientist to figure it out; implementing and running our IP network now costs diddly from a capital perspective, and it is simple to run, which kills the cost of operations and people. The network is a good example to emulate: it was just as manual, just as labor-intensive, just as expensive, and just as much a "black art" as storage is today -- but it isn't anymore. Now lots of virtual services that we don't even know about execute on the network, fixing things dynamically, offering quality of service, and so on.
Server virtualization is getting a ton of buzz, but of course it isn’t new either. Mainframes have been creating virtual machines forever. So have most of the big Unix boxes. VMware is on fire in the windows world. Why? Because we spend too much on servers that aren’t utilized anywhere near enough (capital). We have difficulty managing all of our servers that we buy (operating). Our people are ready to get their hair styling licenses instead of trying to deal with the explosive growth of self-multiplying servers (people).
And last, but certainly least, is storage. Storage virtualization is the granddaddy of them all. You first bought or used Veritas Volume Manager because, when you tried to lower your capital acquisition cost by taking advantage of the lower cost per megabyte of the new Seagate 4GB disk drive, you plugged it in and your operating system said "hey, nice 2GB disk" -- so the drive actually cost you more money. Volume Manager made that 4GB disk look like two 2GB disks to the operating system. Then you decided to lower your operating costs by buying RAID arrays, because they virtualized all the individual disks you were sick of dealing with into larger logical entities: the logical unit number (LUN). Today we are at the third stage: running around trying to virtualize the arrays themselves to get people efficiencies. How 1972.
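The Volume Manager anecdote is easy to restate in code: a thin mapping layer between the logical volumes the OS sees and the physical capacity underneath. An entirely illustrative toy:

    class VolumeManager:
        """Map logical volumes onto slices of one physical disk."""
        def __init__(self, physical_gb):
            self.free = physical_gb
            self.volumes = {}

        def create(self, name, size_gb):
            if size_gb > self.free:
                raise ValueError("not enough physical capacity")
            self.volumes[name] = size_gb
            self.free -= size_gb

    # A 4GB drive an old OS cannot address becomes two 2GB "disks" it can.
    vm = VolumeManager(physical_gb=4)
    vm.create("disk0", 2)
    vm.create("disk1", 2)
    print(vm.volumes)   # {'disk0': 2, 'disk1': 2}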
I’m not pooh-poohing virtualization – quite the contrary. I don’t think you can live without it. I think for it to truly be world-changing, however, a few things need to happen. First, we need to recognize that we are designing solutions that are addressing tactical issues, not strategic.
Since we still have tons of tactical issues, that’s OK, but ultimately we need all the layers of virtualization to intersect. Ultimately we want the entire infrastructure virtualized seamlessly – so the promise of grid, or liquid computing, can be realized. There has to be some level of knowledge about the layers, between the layers.
Imagine a storage layer that is integrated into the server layer above it and responds to whatever it is asked without the server layer having to have any idea where things reside and that the data residence is transient. Imagine a giant access point to the storage layer that in essence is an aggregated cache pool for all the data beneath it that dynamically alters what data sits at the top of the performance pyramid depending on the just-in-time needs of the server and application layer.
Below the virtual cache sits storage networks connected to all sorts of storage devices, whose attribute differences are cataloged and data is placed automatically on the appropriate devices at certain times based on those requirements.
The concept of information life-cycle management could actually be implemented, constantly fine-tuning based on the requirements it has. Block virtualization devices are intended to solve people and operating expense issues, as are file virtualization devices. They solve real problems by portraying multiple things as a common one, providing a single access point and enabling other real tangible (but still tactical) benefits such as dynamic migration. What they don’t do is make things faster. What if those virtualization layers also were huge central caches? The possibilities are interesting to say the least. Eventually those layers will provide a truly seamless cloud that connects the user to their data and performs all the magic in the middle perfectly, and without anyone needing to know. That would be cool.
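That "giant aggregated cache" amounts to a placement policy that promotes hot data and demotes cold data across tiers. A bare sketch of such a policy, with invented names and thresholds:

    TIERS = ["ram-cache", "fast-array", "capacity-array"]

    def place(block_id, accesses_per_hour):
        """Pick a tier from access frequency; a real system would also weigh
        cost, SLAs and life-cycle policy, as the column suggests."""
        if accesses_per_hour > 100:
            return TIERS[0]
        if accesses_per_hour > 1:
            return TIERS[1]
        return TIERS[2]

    print(place("lun7/block42", 250.0))   # ram-cache
    print(place("archive/2004", 0.01))    # capacity-array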
Send me your questions -- about anything, really, to sinceuasked@computerworld.com.
Steve Duplessie founded Enterprise Strategy Group Inc. in 1999 and has become one of the most recognized voices in the IT world. He is a regularly featured speaker at shows such as Storage Networking World and others, where he takes on what's good, bad -- and more importantly -- what's next. For more information about Duplessie or ESG, go to www.enterprisestrategygroup.com.
zen 88, I agree that SKS was not giving guidance so much as stealth guidance. He knows Wave is locked and loaded. I appreciated his realization that execution is still crucial and that the voice of the customer must ultimately speak through his software. As long as he remains focused on customers' needs and on grandly servicing those customers, successive design wins will litter the landscape down the road. The respect Wave has earned from the likes of DOD, Dell, Intel, Winbond, Broadcom, Seagate, Juniper, NTT, etc. speaks volumes as to where this company is heading.
Pickle
SKS mentioned NTT as a systems integrator (a la EDS), which is HUGE in terms of reselling Wave's software in Japan/Asia. The upgrade rate in Japan will be far greater than 3-4%. In fact, I believe the industries SKS said Wave is targeting, such as financial, health and government, will all have upgrade rates in excess of 50%. SKS threw out VERY conservative estimates because, as he said, he has been burned before. And do not forget that NTT DoCoMo is part of the NTT family, which is huge on the wireless service side. Nokia embedded with TrustZone and NTT would make a nice combination.
Pickle
tampa123, good find. EDS is going to push non-IBM/Lenovo hardware, given IBM is such a huge competitor on the services side. And we all know the relationship with Wave is long-standing.
Pickle
NTT DATA traces its roots back to 1967, to the establishment of the Data Communications Bureau within Nippon Telegraph and Telephone Public Corporation (present-day NTT). The Bureau sustained consistent growth based on system development, building, operation, and maintenance across a broad front, ranging from nationwide systems that formed the cornerstones of society to a multiplicity of corporate network systems. In 1988 it began a new chapter in its story when it became NTT DATA Corporation.
Today, NTT DATA has grown into a group comprising 100 subsidiaries and affiliates. It possesses comprehensive strengths and has the best track record of any system integrator in Japan. It harnesses these many strengths to engage in business that is in keeping with its two priority management policies: "Enhancing the Competitiveness of the System Integration Business" and "Creating New Businesses."
In an information network society, it is not enough simply to increase business efficiency and make daily life more convenient. NTT DATA is committed to using IT to shape the affluent society of the future, a place where information can be accessed at any time and in any place.