Link might be easier for everyone. Good stuff.
http://www.isp-planet.com/technology/2007/nac_4.html
Bolting the Back Door with NAC (cont'd)
Part 4: Deploying the Juniper Networks UAC 2.0
— continued
by Lisa Phifer
VP Core Competence, Inc.
[June 25, 2007]
Building a foundation
Next, we carved a lab network into discrete subnets and VLANs to implement our policy. The resulting topology is summarized below. Note the location of each TNC component (PDP, L3 PEP, L2 PEP, NAR) and the path taken by access requests and subsequent authorized traffic.
We used third-party L2 PEPs to separate VLANs from each other—for example, insulating staff from guest and non-compliant endpoints. Guest packets were tagged based on Ethernet port or Wi-Fi SSID, but UAC let us tag staff packets dynamically, based on 802.1X authentication results. In other words, while we could quarantine managed laptops at layer two, we had to redirect agentless hosts using IP/port filters.
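To make the two tagging modes concrete, here is a minimal Python sketch. The VLAN numbers and port labels are invented; the RADIUS attributes named in the comment are the standard 802.1X tunnel attributes (RFC 3580), which is how a PDP returns a dynamic VLAN assignment to a switch or AP.

def vlan_for(port_kind, dot1x_role=None):
    """Pick a VLAN tag: static for guest ports/SSIDs, dynamic for 802.1X."""
    if port_kind in ("guest-port", "guest-ssid"):
        return 20                     # statically tagged guest VLAN
    if dot1x_role == "staff":
        # Dynamically assigned: the RADIUS Access-Accept carries
        # Tunnel-Type=VLAN, Tunnel-Medium-Type=IEEE-802, and
        # Tunnel-Private-Group-ID="10" back to the switch or AP.
        return 10
    return 99                         # quarantine VLAN for everyone else

print(vlan_for("staff-port", dot1x_role="staff"))   # -> 10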
Inside each VLAN, host-to-host traffic could be blocked on the switch, the AP, or the endpoint itself. This degree of isolation is important during remediation to stop quarantined endpoints from attacking each other. Authenticated UAC Agents can be partly protected by Juniper's IC-configured Host Enforcer (below, right), but that personal firewall cannot help agentless or quarantined users. Instead, we configured our Colubris AP to block unwanted intra-SSID communication (below, left).
Colubris AP settings
Juniper's IC-configured Host Enforcer
We relied on Juniper's L3 PEP (the SSG) to redirect unauthenticated users to the IC4000's login portal and enforce all higher-layer resource policies. Both our HP 2626 and Colubris MSC-3300 offered local captive portals. But we felt that redirecting all unauthenticated traffic to the IC was a more consistent, secure way to support guest/customer access and staff UAC Agent installation through one common portal.
L2 PEPs can only enforce go/no-go decisions and VLAN assignments. For more granular L3 control, we could have hard-coded ACLs into upstream devices—for example, blocking non-web packets from the guest subnet at the next router. Instead, we used UAC to dynamically add TCP/IP filters to our firewall. As the IC maps each endpoint to roles and resources, it can automatically provision stateful packet inspection policies on the SSG. This feature is one of UAC's biggest strengths, but we found that using it effectively required some education.
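A rough sketch of that role-to-firewall provisioning step follows. All names are invented; this models the idea of pushing per-endpoint, per-role L3 filters, not Juniper's actual API or policy syntax.

ROLE_POLICIES = {
    # role -> (destination, action) filters scoped to the endpoint's source IP
    "CompliantPC": [("private-intranet", "permit"), ("internet", "permit")],
    "Remediation": [("remediation-server", "permit"), ("any", "deny")],
}

def provision(firewall_rules, endpoint_ip, roles):
    """As the PDP maps an endpoint to roles, push matching L3 filters."""
    for role in roles:
        for dest, action in ROLE_POLICIES.get(role, []):
            firewall_rules.append({"src": endpoint_ip, "dst": dest,
                                   "action": action})

rules = []
provision(rules, "192.0.2.10", ["Remediation"])
print(rules)   # quarantined host may reach only the remediation server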
Deciding where to place the IC in a test network was simple, but careful consideration is warranted in a production network to avoid leaks and bottlenecks. When deployed at L2, the IC must be trunked to every controlled VLAN. Instead, we chose to deploy the IC at L3, using the SSG to route traffic from all VLANs to the IC.
After all of this planning, actual IC installation was remarkably simple. Beyond the usual IP addressing and license activation, we just had to satisfy a few basic prerequisites:
To prove its own identity, the IC needs a server certificate. Don't be tempted to take a self-signed shortcut—that will only cause repeated user warnings. Instead, install an IC certificate that chains to a trusted root CA and is bound to a resolvable name.
For L2 (802.1X) control, all switches and APs must be configured to send RADIUS access requests to the IC's embedded SBR server. The IC must in turn be configured to recognize those RADIUS clients, based on IP addresses and shared secrets (sketched in code after this list).
For L3 (portal/firewall) control, UAC Infranet Enforcers like our SSG5 must be connected to a trusted IC and configured to let non-802.1X endpoints get IP addresses, resolve the IC's hostname, and send HTTP to the IC's portal.
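A minimal sketch of that RADIUS-client bookkeeping, with invented addresses and secrets. A real RADIUS server also validates each packet's authenticator with the shared secret; that check is only hinted at here.

radius_clients = {
    # NAS source IP -> shared secret the server expects from that switch/AP
    "10.0.0.2": {"name": "hp-2626",     "secret": "example-secret-1"},
    "10.0.0.3": {"name": "colubris-ap", "secret": "example-secret-2"},
}

def accept_access_request(src_ip, secret_validates):
    client = radius_clients.get(src_ip)
    if client is None:
        return False            # unknown RADIUS client: discard silently
    return secret_validates     # authenticator check done elsewhere

print(accept_access_request("10.0.0.2", True))    # -> True
print(accept_access_request("10.9.9.9", True))    # -> False (unknown NAS)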
Bolting the Back Door with NAC (Cont'd)
Part 4: Deploying the Juniper Networks UAC 2.0
— continued
by Lisa Phifer
VP Core Competence, Inc.
[June 25, 2007]
Charting a course
Given the use cases outlined in part 2 and the TNC components described on the previous page, we were ready to move from abstract to concrete. Next, we had to decide how users would be authenticated, what measurements would be validated, how those results would map into roles, and how those roles would limit network resource access. The target policies we decided to implement are summarized below:
(Each group lists its user store, its Host Checker rules, and how each check outcome maps to roles, then to resources.)

Group: Administrators
User List: Active Directory
Host Checks: Supported OS + Firewall + Anti-Virus
Pass: CompliantPC, NetAdmin -> Mgmt VLANs + Staff VLAN + Private Intranet + Public Internet
Fail: Remediation -> Remediation Server

Group: Other Domain Members
User List: Active Directory
Host Checks: Supported OS + Firewall + (Anti-Virus or Bypass List)
Pass: CompliantPC -> Staff VLAN + Private Intranet + Public Internet
Fail: Remediation -> Remediation Server

Group: Guests
User List: Anonymous Web Portal
Host Checks: Supported OS + Firewall
Pass: GuestPC -> Public Internet (web, ftp, mail, vpn)
Unknown: GuestPDA -> Public Internet (web only)

Group: Customers
User List: Local List
Host Checks: Supported OS + Firewall + Anti-Virus
Pass: Per-customer roles -> Public Internet + Customer's own Private Subnet
Fail: Remediation -> Remediation Server
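The same policy expressed as data, which is roughly how we think about it when configuring a PDP. The dictionary layout is ours, not any product's configuration format; only the first group is spelled out.

POLICY = {
    "Administrators": {
        "user_list": "Active Directory",
        "host_checks": ["Supported OS", "Firewall", "Anti-Virus"],
        "pass": {"roles": ["CompliantPC", "NetAdmin"],
                 "resources": ["Mgmt VLANs", "Staff VLAN",
                               "Private Intranet", "Public Internet"]},
        "fail": {"roles": ["Remediation"],
                 "resources": ["Remediation Server"]},
    },
    # "Other Domain Members", "Guests", and "Customers" follow the
    # same shape, per the table above.
}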
The IC4000's RADIUS server can consult many third-party directories, including LDAP, RSA, NIS, and Netegrity. We wanted to tap our Windows AD to authenticate our own staff, using group membership to determine role. We also planned to create a customer user list on the IC4000, mapping each to a separate role. Finally, we intended to use the IC4000's web portal to admit anonymous guests.
UAC can use TNC APIs to communicate with third-party Integrity Measurement Collectors and Verifiers. For simplicity, we decided to use only rules that Juniper's Host Checker could verify without third-party software. The IC offers a long list of predefined rules for Windows endpoints—we combined a few of those with a custom registry rule and a MAC address bypass rule. The latter gets known/trusted devices past rules that cannot otherwise be enforced (here, a laptop with unrecognized beta AV). Juniper does not currently offer predefined rules for Linux or Mac, but custom rules can check for known processes, ports, or files on those endpoints.
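Here is that pass/fail/bypass logic in miniature. The Endpoint fields and decision strings are invented for illustration; Host Checker's actual rule engine is far richer.

from dataclasses import dataclass

@dataclass
class Endpoint:
    mac: str
    can_run_host_checker: bool
    os_supported: bool = False
    firewall_on: bool = False
    antivirus_current: bool = False

def evaluate(ep: Endpoint, bypass_macs: set) -> str:
    if ep.mac in bypass_macs:          # MAC bypass: known/trusted device
        return "pass"
    if not ep.can_run_host_checker:    # e.g. WinMobile: rules can't run
        return "unknown"
    checks = (ep.os_supported, ep.firewall_on, ep.antivirus_current)
    return "pass" if all(checks) else "fail"

beta_av_laptop = Endpoint("00:11:22:33:44:55", True)
print(evaluate(beta_av_laptop, {"00:11:22:33:44:55"}))   # -> "pass"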
Note that rules cannot even be evaluated on OSs that cannot run Host Checker. This is why we ended up giving non-compliant guests limited internet access instead of blocking them altogether—we needed to admit our un-checkable WinMobile devices. The right way to handle atypical devices like PDAs, VoIP phones, and scanners depends on your own business needs and threat tolerance. We included this simple example in our test to show why it is important to identify and resolve such exceptions during policy design.
Bolting the Back Door w/NAC - Part 4 Juniper
Part 4: Deploying the Juniper Networks UAC 2.0
We had little trouble using Juniper's Unified Access Control (UAC) to quarantine non-compliant laptops and restrict customer/guest access in a diverse multi-vendor LAN, but found that third-party client interoperability is a work-in-progress.
by Lisa Phifer
VP Core Competence, Inc.
[June 25, 2007]
In this series, we have examined the business needs driving NAC (part 1 and part 2) and compared today's NAC architectures (part 3). Here in part 4, we show NAC in action by taking one TNC standards-based solution for a test drive: Juniper Networks' Unified Access Control 2.0.
We implemented our planned scenarios with only minor adjustments, proving that UAC can be successfully deployed in a heterogeneous network without major upgrades. But, as our test progressed, we learned that Juniper is still working to expand client-side options. Customers who want to combine Juniper's IC 4000 with third-party Linux, Mac, or Vista TNC Clients must wait just a bit longer.
Infranet Controller 4000 ($10,000)
SSG5-v92-WLAN ($1,050)
Juniper Networks, Inc.
Sunnyvale, CA
Assembling the pieces
Juniper's UAC solution is based on the multi-vendor TNC architecture described in part 3. Thus, our first task was to identify the TNC components already present in our lab network and decide how to combine them with Juniper's UAC products.
We chose Juniper's Infranet Controller (IC) as our TNC Policy Decision Point (PDP). The IC4000 is designed for medium enterprises and remote/branch offices with thousands of users. Its big brother, the IC6000, delivers additional capacity and high availability. We tested the IC4000 ($10,000) with a 100-user license ($5,000). That price tag includes an integrated Steel-Belted Radius server and UAC Agent software.
To exercise multi-vendor interoperability, we used a mixture of 802.1X-capable switches (HP, D-Link) and APs (Colubris, Cisco) as TNC Policy Enforcement Points (PEPs). After discussion with Juniper, we mixed in their SSG5 firewall for more granular policy enforcement. Using a Juniper firewall, VPN, or IDP to enforce layer three policies was not strictly required to meet our goals, but let us tap more of UAC's potential.
Our TNC Network Access Requestors (NARs) fell into three categories:
Staff with Windows XP/2000 laptops and installed UAC Agents
Guests with agentless devices (including Vista, Linux, Mac, and WinMobile)
Customers using any device, with or without an agent
Agentless endpoints that were redirected to the IC's login portal could optionally execute Juniper's Host Checker, an endpoint integrity scanner invoked via ActiveX or Java. We required our managed Windows endpoints to auto-install Juniper's UAC Agent, a persistent program that bundles an 802.1X Supplicant and TNC Client with proprietary extras: a personal firewall, IPsec VPN client, and Windows single-sign-on support.
We had hoped to authenticate managed non-Windows endpoints using third-party 802.1X Supplicants/TNC Clients, but found this is not currently possible. In UAC 2.0, the IC4000 expects the 802.1X Supplicant to send EAP-JUAC, Juniper's pre-standard take on EAP-TNC. We therefore had to admit our own Linux and PDA endpoints as agentless devices. According to Juniper, this limitation will be lifted when UAC 2.1 (3Q07) adds support for additional inner EAP types that third-party TNC Clients can speak.
Bolting the Back Door with NAC (cont'd) TNC
Part 3: page four
by Lisa Phifer
VP Core Competence, Inc.
[June 22, 2007]
TCG Trusted Network Connect
The Trusted Computing Group (TCG) develops open standards for hardware-enabled trusted computing and security technologies. TNC is a TCG-defined open architecture that enables non-proprietary, interoperable endpoint security auditing and policy enforcement in multi-vendor networks.
The TNC architecture is reminiscent of both CNAC and NAP. At left is a host requesting access; at right are servers that make policy decisions. In between lie the APs, switches, firewalls, and VPN gateways that enforce policy. So what makes TNC different?
Unlike CNAC and NAP, every component of the TNC architecture could be sourced from a different vendor. For example, any mixture of TNC Policy Enforcement Points from Nortel, HP, Extreme, Enterasys, Aruba, Trapeze, Colubris, et al could request access through any TNC-compliant Network Access Authority, from Steel-Belted RADIUS to FreeRADIUS. An open source Linux TNC Client could interact with a Juniper IC 4000 TNC Server appliance. And so on.
To achieve multi-vendor interoperability, TNC functionality is carved into discrete Network Access, Integrity Evaluation, and Integrity Measurement layers. Those layers are implemented by components communicating through open standard interfaces (a toy sketch follows this list):
IF-PEP: RADIUS bindings that enable communication between heterogeneous Policy Enforcement Points (PEPs) and Policy Decision Points (PDPs).
IF-T: Tunneled EAP bindings relay identity and integrity data between multi-vendor Network Access Requestors (NARs) and Network Access Authorities (NAAs), using access methods like 802.1X, IKE, TLS, or PPP.
IF-TNCCS: Protocols that carry integrity measurement handshakes between TNC-compliant Clients and Servers. Two protocols are now specified: the original XML-based IF-TNCCS and the new TLV-based IF-TNCCS-SOH recently adopted from NAP. TNC PDPs can implement one or both protocols.
IF-IMC: An API that all endpoint security programs—called Integrity Measurement Collectors (IMCs)—use to submit integrity data to the TNC Client.
IF-IMV: An API that all security policy servers—called Integrity Measurement Verifiers (IMVs)—use to receive integrity data from the TNC Server. Both APIs define language bindings for Java, UNIX/Linux, and Windows platforms.
IF-M: Vendor-specific IMC-IMV protocols. Today's IMCs and IMVs are matched pairs that exchange proprietary messages. For example, the Wave Embassy Endpoint Enforcer Client talks to the Wave Embassy Endpoint Enforcer Server, and the PatchLink Update Agent talks to the PatchLink Update Server.
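To make the layering concrete, here is a toy rendering in Python: an IMC submits a measurement, the TNC Client/Server plumbing relays it, and the matching IMV returns a recommendation. Class and method names are ours; the real IF-IMC/IF-IMV bindings are C/Java APIs with far more machinery.

class AntiVirusIMC:
    component = "antivirus"
    def measure(self):                        # submitted over IF-IMC
        return {"installed": True, "signature_age_days": 2}

class AntiVirusIMV:
    component = "antivirus"
    def verify(self, measurement):            # received over IF-IMV
        ok = (measurement["installed"]
              and measurement["signature_age_days"] <= 7)
        return "allow" if ok else "isolate"

def integrity_handshake(imcs, imvs):
    """One IF-TNCCS round: relay each IMC's message to its paired IMV."""
    verdicts = [imv.verify(imc.measure())
                for imc in imcs
                for imv in imvs if imv.component == imc.component]
    return "allow" if all(v == "allow" for v in verdicts) else "isolate"

print(integrity_handshake([AntiVirusIMC()], [AntiVirusIMV()]))  # -> allow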
Another core feature that differentiates TNC is optional use of a TCG Trusted Platform Module (TPM). TPM is a hardware security component found on many new laptops. It delivers platform trust services like identity and encryption key storage that make it harder for rootkit-infested endpoints to "lie" about their identity or integrity.
Heterogeneity on paper is one thing, but real-world deployments need interoperable products. Colubris, Enterasys, Fujitsu, HP, Juniper Networks, libTNC, PatchLink, Q1 Labs, Symantec, Trapeze Networks, and Wave Systems participated in an interoperability "plugfest" in March 2007. Those companies were joined by Extreme Networks, Fujitsu, FHH, Microsoft, and Nortel at a May 2007 Interop demo. According to TCG, about 75 member companies are developing TNC-compatible products or technology, including open source TNC Clients and Servers like libTNC, FHH@TNC, and OpenSEA.
TCG expects TNC/NAP interoperability to accelerate adoption by simplifying deployment and protecting investment. For example, in 1H08, not only will Microsoft Vista and Server 2008 speak IF-TNCCS-SOH, but so will TNC-based Juniper Networks Unified Access Control (UAC) products. Vendors at the top of the stack can then use either TNC or NAP APIs, knowing that customers will be able to combine NAP clients with TNC servers and vice versa when both speak the same protocol.
Bolting the Back Door with NAC (cont'd) NAP
Comparing the alternatives — page three
by Lisa Phifer
VP Core Competence, Inc.
[June 22, 2007]
Microsoft Network Access Protection
Microsoft's NAP is another proprietary architecture, designed to promote endpoint health and policy compliance in networks composed largely of Windows clients and servers—specifically, Windows Vista and Windows Server 2008 ("Longhorn").
At left is a host attempting network access. Today, NAP Agents are only available as an embedded feature of Microsoft Windows Vista or XP SP3 (beta). NAP Agents consult with Vista's native Microsoft System Health Agent (SHA) and/or third-party security programs that implement Microsoft's API. SHAs supply Statements of Health (SoHs) to the NAP Agent, which passes a consolidated SoH to a NAP Enforcement Client (EC).
Each NAP EC initiates access via 802.1X, VPN, DHCP, or IPsec through a NAP Enforcement Server. For example, 802.1X supplicant ECs exchange PEAP with any 802.1X Ethernet switch or AP. VPN client ECs connect to a VPN gateway. DHCP client ECs request an IP address from a DHCP server. Or a NAP Agent can request a Health Certificate from a Microsoft Server 2008 Health Registration Authority (HRA), using that certificate as a substitute SoH to speed up later connect requests.
Microsoft does not make network equipment, so NAP's ability to restrict access varies, depending on the Enforcement Server. For example, ECs connected to 802.1X switches can be controlled through standard RADIUS response attributes like VLAN tags. But ECs that send SoHs to a Microsoft DHCP server can be limited only by their assigned IP address—a very weak form of access control.
All NAP Enforcement Servers forward RADIUS Access Requests to a Microsoft Network Policy Server (NPS). NPS (the Windows Server 2008 replacement for IAS) consults Active Directory and checks health before returning a response. NPS uses an Administration Server to coordinate with one or more System Health Validators (SHVs). Each SHV decides whether the SoH sent by the matching SHA complies with health status and policy requirements, then returns a Statement of Health Response (SoHR) that indicates compliance or supplies remediation instructions.
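A hedged sketch of that round-trip: the SHA produces an SoH, NPS hands it to the matching SHV, and the SoHR drives the decision. The data shapes are invented; real SoH/SoHR messages are binary structures.

class FirewallSHA:
    name = "firewall"
    def statement_of_health(self):
        return {"enabled": False}

class FirewallSHV:
    def validate(self, soh):                  # returns an SoHR
        if soh["enabled"]:
            return {"compliant": True}
        return {"compliant": False, "remediate": "enable host firewall"}

def nps_decision(shas, shvs):
    """NAP Agent consolidates per-SHA SoHs; NPS fans them out to SHVs."""
    sohrs = {sha.name: shvs[sha.name].validate(sha.statement_of_health())
             for sha in shas}
    ok = all(r["compliant"] for r in sohrs.values())
    return ("grant", sohrs) if ok else ("restrict", sohrs)

print(nps_decision([FirewallSHA()], {"firewall": FirewallSHV()}))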
NAP can use the native Windows SHV found in Windows Server 2008 and/or third-party security policy programs that implement Microsoft's API. A list of 100+ network equipment and security software vendors that intend to support NAP can be found on Microsoft's website. But NPS is still in beta, so any NAP interfaces being tested now cannot be finalized until Windows Server 2008 is released next year.
Can't we all just get along?
NAP is more network-neutral than CNAC, but it still requires gear that can use standard 802.1X and RADIUS to relay Microsoft SoH/SoHR messages and enforce the outcome. Moreover, NAP is primarily a Windows Server 2008/Vista/XP SP3 feature; it cannot currently be used to control access by any other endpoints.
Last September, Microsoft and Cisco announced an integration plan (.pdf) to let Microsoft's NAP Agent use selected portions of Cisco's Trust Agent—specifically, Cisco's proprietary EAP-FAST and EAPoUDP—when connecting to Cisco NADs. Those NADs must still send Access Requests to Cisco ACS, but ACS can use HCAP to treat Microsoft NPS like a Policy Validation Server. This does not eliminate any Cisco or Microsoft dependencies, but it does explain how to use both concurrently.
This May, Microsoft and TCG announced that the NAP Statement of Health protocol had been added to the TNC architecture as a standard client-server interface (.pdf). This lets Vista/XP NAP Agents be used in TNC deployments, avoiding TNC client installation on those hosts. It also makes it possible to bolt NAP SHA/SHV programs onto the TNC network access layer and use the same server to support both TNC and NAP.
TNC/NAP interoperability could simplify deployment, but it doesn't change the fact that NAP itself depends on Windows Server 2008. The bottom line: If you're not a Windows shop planning to upgrade to Redmond's latest and greatest, keep reading...
Bolting the Back Door with NAC Part 3: Comparing the alternatives
Network Access Control (NAC) promises to improve security, but competing approaches have muddied the waters. In this tutorial, we navigate our way through NAC architectures from Cisco, Microsoft, Trusted Computing Group and the IETF.
by Lisa Phifer
VP Core Competence, Inc.
[June 22, 2007]
In part 1 and part 2, we examined the business needs driving companies to rethink network access control (NAC). Instead of trusting every device on the LAN, NAC combines user identity, endpoint security state, and policies to dynamically decide who should be allowed to use which network resources, under what pre-conditions. Only healthy, compliant endpoints are permitted to reach authorized resources, while everyone else is blocked or quarantined.
This concept may sound promising, but the devil is in the details. Today, NAC is an emerging market filled with divergent implementations and no universally agreed standard. Here in part 3, we will explore four NAC network architectures:
Cisco Network Admission Control (CNAC)
Microsoft Network Access Protection (NAP)
TCG Trusted Network Connect (TNC)
IETF Network Endpoint Assessment (NEA)
We will also take a brief look at overlay NAC appliances, a near-term alternative for those who like the idea of NAC but don't want to invest in network upgrades.
NAC architectures
Cisco NAC (the original) and Microsoft NAP (Redmond's response) are proprietary architectures, aimed at largely homogeneous networks. Once these sparked market interest, interoperability concerns prompted standards development. The Trusted Computing Group was first out of the gate, publishing TNC specifications in May 2005. A year later, the IETF formed a working group to define NEA. Today, CNAC and TNC-based products are both commercially available. Essential NAP components are still in beta, while NEA is not far enough along to implement.
Although integration points have been proposed, these four architectures all overlap with each other to some degree. Each adds new NAC protocols and/or APIs to network endpoints, servers, and the access devices that connect them. On top of that NAC-enabled network will sit new client/server posture assessment programs.
As described in parts 1 and 2 of this series, NAC can be applied to many scenarios, from controlled guest access to compliance auditing. Before we dive into architectures, let's lay out one common example—keeping infected employee laptops off our LAN:
Without NAC, every employee laptop plugged into a switch or AP has full access to our physical or virtual LAN. Application logins may restrict server/file access, but a hacked laptop can still try to harm everyone else on the LAN.
By adding NAC, we could require every employee to log in before his or her laptop is admitted to the LAN. During admission, an installed client, ActiveX control, or Java applet could scan for missing, inactive, or outdated AV programs. Combining user identity with scan results lets us make more informed decisions, using the switch or AP to isolate compromised laptops.
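The decision in that example reduces to a few lines; the group names, VLAN labels, and the AV flag are placeholders for whatever your directory and scanner actually report.

def admit(user_group: str, av_ok: bool) -> str:
    """Combine identity with scan results to pick a network placement."""
    if user_group == "employee":
        return "staff-vlan" if av_ok else "quarantine-vlan"
    return "guest-vlan"        # everyone else stays off the private LAN

print(admit("employee", av_ok=False))   # -> "quarantine-vlan"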
In theory, CNAC, NAP, TNC, or NEA could be used to implement this scenario. In practice, the kinds of clients, servers, and network devices that can be supported depend on architecture and product. To understand why, let's drill down...
Trusted Storage now ready for your hard drive
By Jeremy Reimer | Published: June 21, 2007 - 11:08PM CT
After many months of deliberation, the Trusted Computing Group has finally announced that it has finalized the draft specifications for incorporating built-in encryption and security services directly into hard drives and other storage devices. Trusted Storage is part of a new generation of security protocols that are built directly into hardware, alongside devices such as the Trusted Platform Module (TPM). While the Trusted Computing Group says that the specs for Trusted Storage may change slightly from the draft version, they are final enough for both hardware and software developers to start building devices and applications that support the specification right now. The official specs are referred to as "Version 1.0, Revision 0.9—draft" in accordance with traditional storage-related standards.
The new spec allows the creation of "trusted storage units" on hard drives and other media, where only approved applications are allowed to read and write data. These units are stored on hidden partitions that are not viewable by standard drive partitioning software. Data stored on the trusted partitions can only be accessed when the drive receives a signal from the CPU that it is authorized to access the data on the hard drive. The drive then responds with a signal that confirms that it is in fact the same hard drive that the computer believes it is accessing. The drives do not require that the computer in question have a TPM module on the motherboard, but if one is present it extends the "trust boundary" of the platform, providing additional security against tampering.
The new guidelines include built-in encryption and decryption, handled by hardware on the hard drive itself. Security functions in the specification include public-key encryption, digital signatures, hashing functions, and random number generation. Of course, these sorts of technologies are not new, and software-based encryption schemes have been around for a while now: some of the more interesting ones even have the concept of hidden partitions that can't easily be discovered by casual inspection. Still, the idea of creating hardware-based solutions such as Trusted Storage is to make such technologies more mainstream and acceptable for business users, who are often concerned about the leaking of confidential data. Data removed from a Trusted Storage unit by traditional means cannot be read on other computers.
The Trusted Storage specification was developed by 60 of the Trusted Computing Group's 175 member companies. Devices ranging from hard drives to optical storage that support Trusted Storage are expected to appear on the market in the coming months. IBM and Lenovo, two of the biggest promoters of the Trusted Computing Group, are expected to be among the first to release devices that support Trusted Storage. No other companies have yet announced support for the standard, but other members of the TCG such as Hitachi, Seagate, SanDisk, and Western Digital are likely to incorporate support into their products as well.
OT: Bolting the Back Door with NAC
Part 2: Examining your needs
Before you deploy NAC, identify one specific task you want it to accomplish. Start with a local deployment; it can grow larger, later, if you want to do more.
by Lisa Phifer
VP Core Competence, Inc.
[June 21, 2007]
Before you worry about product selection or solution design, start by answering some basic questions about your network environment and access control needs.
Why: What are your objectives for deploying NAC? Do you want to control network access at a more granular level? Do you want to log access for regulatory compliance? Do you want to automate local or remote endpoint audits? Do you want to automate pre-admission virus detection and remediation? Do you want to enable limited network access by unmanaged endpoints? Don't try to tackle everything at once. It helps to establish long-term goals, but choose a top-priority objective to shape initial deployment.
Who: Which user groups and devices do you hope to address? If your goal is to control or log employee access, which users are top priority and what devices, operating systems, and network access/authentication methods do they use? If you want to permit visitor access, what can you assume about those endpoints' capabilities, administrative rights, and ability to run NAC client software? If your goal is to enable limited access by customers, contractors, or others with whom you have an on-going relationship, who are they and how much control do you have over their devices? These answers will have a huge impact on how you deploy NAC and what you can accomplish.
When: For each user group/device type, decide what conditional checks (if any) should be performed before authentication, after authentication, or periodically after network admission. Create a prioritized checklist, based on your security policy and NAC goals. Consider what should happen if each check passes, fails, or cannot be evaluated. Bear in mind that this assessment will ultimately be limited by your chosen NAC platform, endpoint capabilities, your control over them, and the login delay your users can tolerate.
Where: Map each defined identity and assessment result onto a resource access policy that dictates where traffic can and cannot be sent. This task can be simplified by mapping reusable roles to resource authorizations. For example, you might map all endpoints that fail virus checks onto a quarantine role. During implementation, that quarantine role might be mapped onto an isolated VLAN, subnet, and/or remediation URL. At the end of the day, your NAC solution must be able to enforce the access policies that you require.
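That indirection through roles might look like this; the role names, VLAN IDs, and URL are invented to mirror the quarantine example above.

ROLE_OF_RESULT = {"all_pass": "staff", "av_fail": "quarantine"}
ENFORCEMENT = {
    "staff":      {"vlan": 10, "remediation_url": None},
    "quarantine": {"vlan": 99,
                   "remediation_url": "https://remediate.example.com"},
}

def authorize(assessment_result: str) -> dict:
    """Map assessment result -> reusable role -> concrete enforcement."""
    return ENFORCEMENT[ROLE_OF_RESULT[assessment_result]]

print(authorize("av_fail"))   # -> isolated VLAN plus a remediation pointer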
Sample use cases
In part 4, we'll share our lab experience with one NAC product. But before we unpacked a single box, we considered what we wanted to accomplish. Ultimately, we hoped to illustrate NAC deployment considerations and admin/user impacts. But to do that, we had to create a scenario: a network without NAC and business reasons to add NAC.
NAC adopters run the gamut from SMBs to enterprises; hospitals and schools have taken an early lead. But why should ISPs care about NAC? For starters, most face the same internal threats as any company, including staff with access to sensitive systems they don't need and shouldn't have. ISPs also process payments and store subscriber account records that may contain information subject to regulation. But ISPs may also have unique interests. For example:
Providers that host colocated or managed servers may need to permit customer access to systems inside their data center. Clearly, that access must be tightly controlled and secure—NAC can help ISPs accomplish this.
Providers who run wireless hotspots need to permit what amounts to guest access, but may want to stop threats from spreading to other guests and network elements. NAC has long-term potential here as well.
Providers that deliver managed network or security services may wish to someday incorporate NAC as part of those offerings. In fact, as NAC finds its way into business network infrastructure, managed LAN customers will expect this.
For our test drive, we focused on a very common case: controlling staff access to network resources, based on identity and compliance. We also included very restricted customer access to colo servers and guest internet access.
For staff, we authenticated local LAN and WLAN endpoints against our Windows Active Directory, using group membership to let administrators reach subnets that were off-limits to other users. We required all Windows endpoints to run firewall and anti-virus programs, with exceptions for known non-Windows devices.
For customers, we authenticated local endpoints against an authorized user list, assuming nothing about OS and varying checks by customer. Endpoints that passed received access to individual customer servers. (In real life, those endpoints could be remote as well.)
For guests, we provided anonymous public web access. We had planned to deny non-compliant guests, but this proved impractical (see Part 4), so we settled for segregating guests and giving better access to devices that could demonstrate compliance.
We isolated non-compliant staff and customer endpoints on a quarantine VLAN and supplied remediation advice. But, to keep our pilot simple, we did not attempt advanced NAC services like auto-remediation or custom/third-party plug-in security assessment.
Conclusion
Now that we have examined NAC capabilities, benefits, and use cases, it's time to move from requirements to solution design. In part 3, we will compare several NAC approaches and choose one for our test drive.
—End
Related articles:
[June 20, 2007] Part 1: Introduction
[June 21, 2007] Part 2: Examining your needs
[June 22, 2007] Part 3: Comparing the alternatives
[June 25, 2007] Part 4: Deploying the Juniper Networks UAC 2.0
Maynard,
Seagate was well in front of this I would imagine.
Pickle
Hard disks spin up new security spec
Rick Merritt
EE Times
(06/19/2007 9:46 AM EDT)
SAN JOSE, Calif. — The ad hoc Trusted Computing Group releases for industry review Tuesday (June 19) a specification for securing storage devices. The spec is expected to become the underpinning of secure disk drives that will become widespread over the next three years.
The draft standard defines a way storage devices can create and protect keys that prevent unauthorized users from accessing data on the device. It enables so-called full-drive encryption, protecting data on any lost storage device as well as a fast-erase capability for users who want to re-purpose a storage device. Users can also leverage the spec to add additional cryptographic protections to any application.
Seagate is already shipping hard disks with so-called full drive encryption and Hitachi Global Storage Technologies has announced a similar product, both mainly targeted at business notebooks.
"We'll have to change a few bits in the interface to meet the spec but [the revised products] will be functionally the same," said Michael Willett, a director of research at Seagate and co-chair of the TCG group's storage committee that drafted the spec.
Willett said he expects most drive makers will begin to roll compliant products within six months, once version 0.9 of the spec, released today, is officially ratified as version 1.0.
"This spec applies to all storage devices," Willett said. "All the hard drive makers have taken part but so have makers of tape, optical and flash drives," he added.
Hard drive makers see disk security as a new layer of value they can roll into their devices quickly. The effort, which began as a research project three years ago, is eventually expected to become a standard feature on all drives.
"I expect within about three years all drives will have this capability. That's the road map we are working to internally," said one drive maker who asked to remain anonymous.
Unlike many security specs from the TCG, the storage standard does not require use of a standalone trusted platform module, a chip that generates and securely stores cryptographic keys. Such TPMs are now routinely used on business desktops, notebooks and some servers.
The TCG estimates as many as 100 million computers will ship with a TPM chip this year. A TCG spec for cellphone security actually requires two TPMs, one for protecting carrier data and another for protecting user data.
Instead of a TPM, the storage spec relies on an existing storage controller to generate and manage keys that are securely saved on extra space traditionally available on the storage device. Disk drive makers, for example, typically have access to a secure area of a couple hundred megabytes for storing systems management programs on a typical disk drive.
Currently, drive makers are using custom ASICs that implement 128- or 256-bit AES security. However, within three years that function is expected to be integrated into the hard disk controller.
Although AES has been adopted for initial products, the spec can use any form of encryption. The security is first expected to be used for notebook drives, followed by drives for servers and eventually for all systems.
The 230-page spec mainly defines an approach for secure access to a drive by generating secure commands. At the heart of the method is a basic register structure defined as a table. Through a secure access method, users generate commands that act upon locations in the table.
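As a toy model of that access method, and only that: security state lives in a table, and trusted send/receive commands act on its locations once a secure session is established. The location names and session check are invented; the real spec defines the tables and command set in detail.

sp_table = {"range0_locked": 1, "range0_key_enabled": 1}

def trusted_send(session_open: bool, location: str, value: int):
    """Write a table location, but only inside an authenticated session."""
    if not session_open:
        raise PermissionError("secure session required")
    sp_table[location] = value

trusted_send(True, "range0_locked", 0)   # e.g. unlock an encrypted range
print(sp_table)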
As part of the spec, TCG worked with the ISO T10 and T13 committees, which oversee the SCSI and ATA command languages, to define new commands for a secure send and receive function. Those commands act as containers that carry the TCG protocols, Willett explained.
The TCG security protocols can tie in to systems software features such as the MS-CAPI security applications programming interface used by Windows.
A separate TCG subgroup is now developing a spec for how to handle password and key management functions on servers that might contain a large number of keys. That spec should be complete in about six months, said Willett.
Great read, Bolting the Back Door with NAC
Part 1: Introduction
Firewalls may guard their front door, but many networks remain vulnerable to threats originating inside the perimeter. Network Access Control (NAC) can batten down those hatches by stopping malware-infested laptops and restricting LAN resource use.
by Lisa Phifer
VP Core Competence, Inc.
[June 20, 2007]
The buzz surrounding Network Access Control (NAC) has reached a fever pitch. According to Infonetics Research, NAC appliance sales reached $83 million during 2006 and will double again this year. Last month at Interop, over a dozen vendors participated in a standards-based NAC interoperability demo, including heavyweights Microsoft, Juniper, Nortel, HP, Extreme, Enterasys, Aruba, and Trapeze. To date, Cisco has certified nearly 40 vendor products that fit within its proprietary NAC framework, with scores more under development.
Why this flurry of NAC activity? What the heck is NAC anyway? And why should you care? In this four-part series, we examine the business needs driving NAC, compare today's major flavors of NAC, and show NAC in action by taking one popular implementation for a test drive: Juniper Networks' Unified Access Control.
Turning network security inside out
Over the years, perimeter defenses have gradually improved. Today, almost everyone understands that private business networks must be protected from perils posed by the public internet. However, many network owners still turn a blind eye to threats emanating from internal systems connected to their own wired and wireless LANs.
Historically, all systems inside the network perimeter have been viewed as trustworthy, and their users have enjoyed a great deal of freedom to reach private servers and data. Compared to measures commonly applied at the internet edge, internal LAN access controls are frequently weak or absent.
Many organizations still rely on physical security measures like entrance badge checks and wall port disablement to deter unauthorized LAN access. Every system that manages to connect to a physical or virtual LAN becomes a trusted endpoint that can send packets to every other network endpoint, without regard to system integrity or user identity. While logins are often required to actually use sensitive services or fileshares, those measures do nothing to insulate the network itself from attack or misuse.
In truth, the assumption that LAN endpoints are trustworthy was always shaky. Insider attacks by disgruntled employees have long been a significant but under-appreciated risk. For example, the 2006 CSI/FBI Computer Crime and Security Survey (1.5 MB .pdf file) found that 2 in 5 companies attributed over 20 percent of their cybercrime losses to insider attacks. But over the past few years, evolving business conditions and network technologies have rewritten the ground rules and imposed costly penalties.
Workforces have become increasingly mobile, carrying corporate laptops (and more!) from work to home to hotspot. When those endpoints connect to external LANs, they are directly exposed to a myriad of network-borne threats. Laptop anti-virus and personal firewalls help, but easily become outdated or disabled. When a compromised endpoint returns to work and connects to the internal LAN, it becomes a source of infection or intrusion. Trojan downloaders, keyloggers, and other spyware have become especially troublesome, resisting removal while causing identity theft or financial loss.
Most offices are now visited daily by guests, contractors, auditors, and other users who require some degree of public or private network access. If accommodations are not made, visiting endpoints are likely to find their way onto your LAN anyway—for example, by borrowing a cubicle Ethernet jack or an employee's WLAN access password. When connected in this fashion, visitors become like any other trusted endpoint, gaining access to confidential documents, financial records, personnel files, management systems, and other sensitive resources.
Malware recovery is costly, but pales in comparison to the fear instilled by government and industry regulation compliance. For example, companies that process credit/debit card transactions must comply with the Payment Card Industry (PCI) data security standard by protecting and controlling access to cardholder data. Public US companies must now comply with the Sarbanes-Oxley Act (SOX), a law created to deter accounting errors and fraud. Hundreds of regulations exist worldwide that require organizations to not only secure affected networks, systems, and/or data, but to prove they have done so through logs and audits. Breach or audit failure due to non-compliance can result in direct costs, legal fees, hefty fines, even imprisonment.
The role of network access control
These changes have caused many organizations to reconsider internal network security policies, implementations, and practices—in many cases, following C-level mandates to reduce associated business risk. While no silver bullet, NAC can help to address these concerns by overhauling the way we control access to internal network resources.
NAC is an evolving strategy with many possible implementations. At an abstract level, NAC avoids granting unfettered LAN access to known/trusted endpoints. Instead, NAC bases network access decisions on individual user identity, the security state of that user's endpoint, and policies which define who should be allowed to use which resources, under what pre-conditions.
Identity-based controls let us differentiate between employees, contractors, and guests and treat them accordingly. Assessing each endpoint's health and policy compliance lets us spot compromised laptops before they can communicate with the rest of the network. Mapping those endpoints onto defined authorizations lets us dynamically permit or deny access on a "need to know" basis. For example, we could give guests internet-only access while admitting only healthy accounting department users to the finance LAN.
Furthermore, instead of the static pass/fail approach associated with conventional ACLs, NAC can reshape permissions on the fly. An infected endpoint might be re-directed to a remediation server for cleansing, while an endpoint missing critical patches or programs might be sent to a download server. Remedied endpoints could then be automatically re-authenticated and receive trusted resource access, while healthy endpoints that fail periodic re-assessments could be sent right back to "quarantine."
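That on-the-fly flow is essentially a small state machine; the states and events here are ours, chosen to match the paragraph above rather than any product's behavior.

TRANSITIONS = {
    ("quarantine", "remediated"):     "reassess",
    ("reassess",   "healthy"):        "trusted",
    ("reassess",   "still_infected"): "quarantine",
    ("trusted",    "failed_recheck"): "quarantine",   # periodic re-check
}

def next_state(state: str, event: str) -> str:
    """Advance an endpoint through the remediation loop."""
    return TRANSITIONS.get((state, event), state)

print(next_state("quarantine", "remediated"))   # -> "reassess"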
This utopian vision of NAC involves a large number of moving parts, all working together seamlessly to enforce and audit defined security policies. In reality, today's early-adopter NAC deployments are far less ambitious. Juniper estimates that 57 percent of companies want to deploy NAC incrementally, starting with a pilot that addresses a specific near-term need in a confined network segment. For example, many companies pursue NAC to enforce policy compliance for selected managed (employee) endpoints. Others deploy NAC to facilitate unmanaged (guest, contractor, phone) access. In fact, the first step towards NAC deployment is deciding what you hope to accomplish.
On to Part 2.
Related articles:
[June 20, 2007] Part 1: Introduction
[June 21, 2007] Part 2: Examining your needs
[June 22, 2007] Part 3: Comparing the alternatives
[June 25, 2007] Part 4: Deploying the Juniper Networks UAC 2.0
OT: AuthenTec Seeks Investors for Sensors
WASHINGTON — AuthenTec Inc. is raising money to boost its biometric technology, which literally can bring transactions to the fingertips of consumers.
The company is planning an initial public offering that will raise up to about $62 million for the company and another $21 million for investors who also are selling shares.
In Japan and South Korea, AuthenTec's fingerprint sensor has been incorporated into cell phones that allow users to pay for a diverse array of products, including meals, items at discount stores and vending machine drinks.
The Melbourne, Fla., company is aiming to bring that technology to cell phones in the United States, according to Brent Dietz, AuthenTec's director of communications. AuthenTec's sensors already appear in such domestic products as door locks and laptop computers.
AuthenTec has filed to sell up to 7.5 million shares at $9 to $11 each, and it has picked an auspicious time to do so, given changes in the way consumer purchases are made and the rising concerns about technology security.
Recent breaches in information security have included stolen laptops at schools, hospitals and even the federal government. A recent audit at the Internal Revenue Service revealed that nearly 500 laptops were lost or stolen between January 2003 and June 2006, and many of those computers probably contained personal taxpayer information. An audit of the Federal Bureau of Investigation showed that about 160 laptops were lost or stolen from that agency between February 2002 and September 2005.
Those thefts may be bad news for taxpayers and FBI employees, but they could be good news for AuthenTec. The company already has seen demand for its products shoot upward. In 2006, AuthenTec shipped 6.9 million sensors, more than double the number it shipped in 2005, bringing in revenue of $33.2 million, according to the company's prospectus filed with the Securities and Exchange Commission.
According to Dietz, 150 different models of personal computers and peripheral devices use the company's fingerprint sensors. Hewlett-Packard Co. and Fujitsu Ltd. accounted for almost 60 percent of AuthenTec's 2006 revenue, according to the company's prospectus.
Despite AuthenTec's rapidly growing sales, its bottom line could give investors cause for concern, according to Scott Sweet, managing director of research service IPO Boutique.
For 2005, AuthenTec lost $11.1 million but last year the loss narrowed to $9.8 million.
AuthenTec faces competition that includes a host of private companies and three public companies: Atmel Corp., Taipei-based Lite-On Technology Group and Tokyo-based Mitsumi Electric Co.
So far, big computer makers such as Round Rock, Texas-based Dell Inc. haven't created products that directly compete with AuthenTec's sensors, but if they did, it "could be catastrophic for AuthenTec," said Sweet.
Another challenge could come from the company's ongoing legal battle with Atmel, which alleges that AuthenTec's subsurface technology infringes on Atmel's patents. AuthenTec denies the charge.
The suit probably won't be a deal-breaker for AuthenTec's IPO because plenty of companies go public with patent suits pending, Sweet said.
AuthenTec said its technology resolves some of the common problems of fingerprint security because its readings are based on the live layer of skin, under the dead outer layer, that determines the pattern of a fingerprint. In that way, AuthenTec's sensors can read a fingerprint under "virtually any condition," the company says, and are less susceptible to problems such as dirt on the skin's surface. Someone trying to trick the sensor also couldn't use fake fingers or even a severed hand, Dietz said.
"That's the beauty of our fingerprint sensors _ it actually reads the live layer of skin," he said. "That's where we believe the true fingerprint resides. It's a subsurface technology. Most fingerprint readers or scanner technologies are surface-based."
Between the outer and inner layers is an electrically conductive fluid layer. An AuthenTec sensor sends a radio frequency signal into the fingertip, creating an electrical field between the sensor and the fingertip. The sensor then reads the variations in the strength of the field, which are determined by the fingerprint pattern.
In cell phones in Japan and Korea, users' fingerprints are used as verification for purchases. At "contactless terminals," in discount stores and restaurants, a user swipes a finger across a cell phone to unlock the phone's mobile payment feature, then taps the phone to the terminal to make a purchase, Dietz explained. He said six million phones in Japan and Korea have integrated AuthenTec's sensors.
Thanks, CM. Another confirmation of the upcoming paradigm shift.
Pickle
Sorry if posted. Been on vacation.
New ASUS notebooks with biometric fingerprint sensors
Posted on 18 June 2007.
AuthenTec announced that its AES1610 fingerprint sensors are standard in more than a dozen new 2007 Windows Vista-model ASUS notebooks. Bundled with the Trusted Platform Module, the exclusive ASUS Security Protect Manager (ASPM) allows ASUS notebooks to run applications securely and to make transactions and communications more trustworthy. ASPM improves system security and productivity by consolidating user passwords and network accounts within a single data unit called User Identity. Users can choose to set up multi-factor authentication requirements for different security levels while benefiting from the Single Sign On (SSO) one-time login convenience without compromising security integrity.
AuthenTec’s fingerprint authentication is a key component of end-to-end security for ASPM by delivering highly accurate fingerprint reading – a scan that reads from the live layer of the skin and is thus less affected by common skin surface conditions. AuthenTec’s fingerprint sensors are integrated into more than 150 PC products today.
Thanks Awk and all the others for the great SHM updates! EOM
Awk,
You are always three steps ahead of this investment, but was wondering what your impressions of the SHM in terms of validation or new pathways?
Sincerely
Pickle
Go-Kite, I would be very appreciative of your SHM notes when you get a chance. I know you had to fly back to the Windy City. I want to know, in your estimation, if Steven had his "A" game.
Pickle
Challenges in the Age of Encryption
IT management will face challenges unless it lays the groundwork for the growing ubiquity of encryption
6/5/2007
by Richard Moulds
Cryptography, once seen as a specialist area of information security, is coming of age. Growing regulatory pressures are forcing enterprises to protect the integrity, privacy, and security of critical information. As a result, cryptography is emerging as the foundation for enterprise data security and compliance.
It was true decades ago and it is still true today – encryption is the most reliable way to secure data. As recently as the late 1990s, encryption was primarily used to secure data in government facilities and financial institutions. The rapid rise of the Internet and e-commerce expanded the use of encryption to include an ever-growing variety of financial transactions and communications.
Today, as encryption use continues to grow, it is now making its way into a host of devices we use every day, such as laptop computers and wireless access points, and even devices we don’t think of as being part of an IT infrastructure such as vending machines, parking meters, gaming machines and electronic voting terminals. IT security is entering the age of ubiquitous encryption, an age that will present serious management challenges unless organizations begin laying the groundwork today.
The Challenges
There’s no doubt that encryption is a powerful tool, but getting it wrong—either from a technology or management perspective—can at best result in a false sense of security and (even worse) leave your data scrambled forever, the equivalent of a corporate document shredder.
With data protection stakes high, enterprises need to look seriously at the management of encryption and decryption keys, the secret codes that lock and unlock the data. As encryption takes off, providing lifetime management of private keys and digital certificates across hundreds of applications and thousands of servers, end users and networked devices can quickly overburden the cumbersome manual processes that have been used until now.
Managing All the Keys
From the corporate data center to an employee’s laptop all the way out to the vending machine at a local ballpark – organizations will need to deploy systems to manage their encryption centrally. Local staff, if they exist at all, will not necessarily be sufficiently experienced or trustworthy to perform the operations locally. Manual processes coordinated and dispatched from the corporate IT group simply won’t scale across a large number of devices. An encryption management system needs to be automated to a high degree and, it almost goes without saying, absolutely secure.
Key archival, recovery, and mobility are all crucial parts of the equation. For instance, if a laptop breaks down or a back-up tape is stolen, the issue is not just one of security but recovery and business continuity as well. Now that data on the hard drive or tape is encrypted (and therefore useless without the keys that can unlock it), information recovery takes on a whole new dimension, particularly in an emergency. When the recovery process is performed in a different location, by a different team, governed by different policies, and on protected data that is years or even decades old, what used to be a data management problem has now become a serious key-management problem.
Managing cryptography becomes more difficult as the use of cryptography proliferates, leading to increased scale and diversity, driving up costs and risk. As a result, products are emerging to manage and automate the distribution of keys across disparate applications running on large numbers of geographically dispersed computing devices.
Until recently, the management of cryptography has been an ad hoc, manual process that includes renewing certificates, rolling over keys, generating new keys, or importing existing keys to machines as they come on line and removing keys as machines are retired or fail in service. In addition to the physical management of these keys, there is also the enforcement of security policies and the necessity to provide a full audit trail that reveals who did what, when, how, and why.
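A minimal sketch of those lifecycle chores with an audit trail, using only the Python standard library. The storage model (a dict) and the operation names are assumptions, not any vendor's key-management API.

import secrets
import time

keys, audit_log = {}, []

def _audit(action: str, key_id: str, actor: str):
    audit_log.append({"when": time.time(), "who": actor,
                      "what": action, "key": key_id})

def generate_key(key_id: str, actor: str):
    keys[key_id] = secrets.token_bytes(32)   # e.g. a 256-bit AES key
    _audit("generate", key_id, actor)

def roll_over(key_id: str, actor: str):
    generate_key(key_id, actor)              # fresh material, same key ID
    _audit("rollover", key_id, actor)

def retire(key_id: str, actor: str):
    keys.pop(key_id, None)                   # machine retired or failed
    _audit("retire", key_id, actor)

generate_key("tape-backup-2007", "ops-team")
roll_over("tape-backup-2007", "ops-team")
print(len(audit_log))   # -> 3 (the rollover logs its generate step too)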
Summary
Encryption is the optimal means of securing data. With the recent technology advances in computer operating systems such as Microsoft Windows Vista, hardware devices such as Trusted Platform Modules (TPMs), and key management solutions, this once-onerous technology is no longer seen as a management nightmare. In fact, with the proper key management system, enterprise-wide encryption can become a competitive advantage for organizations operating in a world without boundaries.
Richard Moulds is nCipher’s vice president of marketing and product management. He leads the company’s product strategy, including that of keyAuthority™, nCipher’s key management solution. Richard holds a bachelor's degree in electrical engineering from Birmingham University and an MBA from Warwick University, UK.
cslewis, from your article. Good stuff. This has to be what Wave has been working on in regard to EEE.
By developing a single, unified management platform -- code-named "Stirling" -- for security from the server to the endpoint and the edge, Microsoft has made what may be its most aggressive security play yet, security experts say of the software giant's announcement at its TechEd 2007 conference yesterday.
Stirling initially will work only with Microsoft's Forefront products, but it eventually will expand to include the integration and interoperability with third-party security vendors' products, says Paul Bryan, a director of security and access product management for Microsoft.
OT: barge, this one is for you
Microsoft's SideShow: Computing Via Remote Control?
06.01.07
By Mark Hachman
Microsoft's SideShow is one of the more interesting features in Vista, because it is essentially a secondary display that can be used in any number of ways. Here, Jonathan Levy, president of Winbond Israel and general manager of its advanced PC division – which has worked on several SideShow prototypes – sits down with Mark Hachman for a brief interview on what SideShow has in store.
At WinHEC, a few early prototypes of SideShow products were displayed. I know Winbond is a chip manufacturer, so how is it that you're getting involved in SideShow products?
Levy: Winbond is a company that is mostly known for its semiconductor products. As a direction and a strategy, our group within Winbond is looking at complete system solutions, looking to provide value to customers through other things: software, firmware, drivers, and reference designs. It's true that we do not plan to develop any end-user products. But we do intend, and this is part of our involvement with SideShow, to work very closely with companies that develop end products directly, or that develop products as ODMs for OEMs and will take them to market.
We do other things for SideShow, like hardware development and system development around the silicon.
With SideShow, are you concentrating on PCs only, or other devices besides laptops?
The original involvement with SideShow started with PCs and mobiles, and the original SideShow idea started from an alternative display for a notebook. But it quickly became apparent that there are other peripherals around the PC, like remote controls, as well as some gadgets that would use the technology, with some additional hardware or software modifications to the system, for different end-user applications where the core technology is the same.
We are looking for a solution to address the wide range of SideShow products, where our only constraint is that it leverages the same technology and very similar roadmaps. We don't intend to directly support every last company that wants to develop some application. We will not be able to support that directly. Our primary focus is to enable companies that actually will develop and provide support for other companies. We worked closely with Microsoft itself, and Intel on their own concept design – I forget the exact name.
Microsoft has a tradition of introducing hardware products that have been interesting, but have had a mixed track record: the SPOT watch, the UMPC, Tablet PC, and now this. How would you characterize SideShow? Will it be a success?
Some of these concepts are not new; they were being talked about ten years ago. Information appliances that are connected to the Internet have been around for a long, long time. And they are always ahead of the reality.
But I can tell you what I see coming out. This is an operating system that supports this remote synchronization by default… You don't need a special proprietary interface to set up an infrastructure that will allow many companies to develop these connections, because they are already there in the operating system. These gadgets can interact with the operating system on your main screen, and if you have one of these… Game developers are looking to develop these little gadgets, and there's temperature information and stock information and whatnot, and if you choose, you can redirect that information to things like a refrigerator magnet. I told my mother about such a thing. When you open it, people say, "Wow—I want one of those!"
It's difficult to say when technology becomes too complicated and can only be used by geeks. But take somebody who is technophobic and show him that there is a reminder from my calendar, or a note, and that I don't have to do anything to use this calendar—then that is very interesting.
I'd like to know if SideShow is going to be a me-too technology. Let me give you an example: virtually every motherboard maker in Taiwan is now making graphics cards. It was an easy market to get into. You mentioned SideShow remote controls; are these going to be the next me-too product, or is this a "seed" technology that will allow people to create all these different types of weird and wonderful gadgets?
I think it's both. There are certain computer applications that I personally believe will be useful and will probably become viable products—SideShow remote controls connected to a media center, for example. Microsoft has made it much more user-friendly, so that my wife at home can use it.
We have TiVo here in Israel, where you can program and subscribe to high-definition programming and watch it whenever you like… and if Microsoft would work with the programmers, then while I am watching a movie I could look at the front of the PC and see what program is coming on next. You could then use the SideShow remote control to record it without interrupting your viewing. It's something that I would find very, very useful.
I don't think people are going to watch TV on their remote controls; I don't think that will happen. But there are certain applications—something that could transfer all your media to the main screen, and do it without interrupting your viewing—where SideShow would come in very handy.
So when do you think that these types of products will roll out then?
We believe that remote controls with SideShow will probably see production in the third quarter of this year—that's my expectation.
Will we see some of these at Computex, then?
Products? Maybe. I know that WinHEC Taipei is the week after Computex. I know for sure that you will see the same kinds of things that you saw at WinHEC in the U.S. But you might find some products, like the one we worked on with Intel… I know for sure, from our own understanding as a silicon provider, that there is an expectation that there will be products shipping this year.
Panasonic Introduces Reliable Semi-Rugged Desktop Replacement Toughbook Notebook with Embedded Next Generation Wireless
Intel Santa Rosa-based Toughbook 52 is engineered to withstand the risks of mobility; entire line of reliable Panasonic® Toughbook® notebook computers now 3G wireless-ready
SECAUCUS, N.J.--(BUSINESS WIRE)--Panasonic Computer Solutions Company, manufacturer of durable, reliable Panasonic Toughbook mobile computers, today introduced the newest member of its full product line, the semi-rugged, wireless-ready desktop replacement notebook, the Panasonic® Toughbook® 52. Built on the new Santa Rosa chipset from Intel, the Toughbook 52 offers all the processing power of a desktop replacement notebook in a form factor engineered from the ground up for mobility.
Building on the success of its predecessor, the Toughbook 51, and a substantial amount of customer input into the design process, this new widescreen semi-rugged Toughbook comes equipped with additional security features, a new carrying handle and optional embedded access to next-generation wireless data networks from major wireless carriers. In addition, the MIL-SPEC-certified notebook incorporates the durable design features—such as magnesium alloy cases, flexible internal connectors and shock-mounted hard drives and LCDs—that contribute to Panasonic’s reputation for producing the most reliable notebooks available.
“In an increasingly wireless world, organizations large and small need to know that the tools they count on to stay connected and productive are up to the task of mobility. The semi-rugged desktop replacement Toughbook 52 is both a cost-effective and reliable mobile computing solution,” said Rance Poehler, president, Panasonic Computer Solutions Company. “Our customers play a significant role in the development of Toughbook products. In this case, requests from a broad range of users, from the Department of Defense to some of the world’s largest insurance companies, were heard loud and clear by Panasonic design engineers. The result is a semi-rugged notebook that has the power to run the most complex applications, yet is designed for double-duty as a mobile PC in support of all types of field operations.”
“Durability and reliability can only be proven in terms of hardware failure rates,” added Poehler. “Panasonic is the only computer manufacturer to routinely share failure rate information because we know that Toughbook notebooks are many times more reliable than the industry average. You can only achieve this kind of reliability by heavily investing in design, engineering, manufacturing and testing. And we back it all up with one of the industry’s best warranties and US-based service and support.”
Reliable Mobility and Wireless Connectivity
The Toughbook 52 is constructed of magnesium alloy and includes a shock-mounted screen and easily removable hard drive for extra physical data security. With battery life of approximately 4-6 hours, depending on use environment, the semi-rugged Toughbook is certified to the MIL-STD-810F standard, tested to withstand drops of up to 2.5 feet on all six sides. The hard drive itself is tested to withstand a drop of 3 feet. The Toughbook 52 also includes a spill-resistant keyboard.
Panasonic was a pioneer in the integration of next generation wireless solutions and is the only manufacturer to offer a complete portfolio of computing devices, from rugged to semi-rugged and business-rugged computers, engineered for optimized 3G wireless performance. All Toughbook notebooks incorporate a wireless-ready design that allows customers to initially purchase, or later upgrade to, embedded access to next-generation data networks, including UMTS/HSDPA-based solutions from Cingular Wireless and the EV-DO Rev. A networks of Sprint or Verizon. The Toughbook 52 will initially ship with optional embedded access to the EV-DO Rev. A mobile broadband network of Verizon Wireless. Integrated WLAN and Bluetooth also ensure that users stay connected, wherever their work takes them.
Enhanced Security Features
To safeguard valuable data and enable customers to comply with increasing data security regulations, the new Toughbook 52 is equipped with the Trusted Platform Module (TPM v1.2) security chip, the Computrace software agent in BIOS and an optional fingerprint scanner and SmartCard reader.
Superior Warranty, Support and Professional Services
Every Panasonic Toughbook is sold with a standard three-year limited warranty which includes around-the-clock U.S.-based phone support for the entire life of the product. Panasonic provides customers an average call center hold time of less than one minute. In addition, in the unlikely event of a hardware-related failure, Panasonic covers the cost of overnight shipping to and from its national service center, where the average turnaround time for repairs is less than two days. Panasonic also offers a full range of professional services, including image management, asset management and online service analysis, to support enterprise customers before, during and after deployment.
Pricing and Availability
The wireless-ready semi-rugged Toughbook 52 will be available in July 2007 in two configurations: standard (estimated street price of $1,699) or optimized for improved video and Vista performance, including a faster CPU, 512MB of dedicated VRAM and a larger 120GB hard drive (estimated street price of $2,499).
All Toughbook notebooks are customizable and available through authorized Panasonic Toughbook resellers nationwide, “buy now” resellers accessible online via www.panasonic.com/business/toughbook/purchase.asp and at MicroCenters nationwide or the RCS Experience store in midtown Manhattan (Madison Avenue at 56th Street). Please visit www.panasonic.com/toughbook for more information.
Panasonic® Toughbook® CF-52 Semi-rugged, Wireless-Ready Desktop Replacement Notebook Computer: Select Features and Specifications
CF-52 (Optimized Configuration), Estimated Street Price: $2,499.00
CPU: Intel Core 2 Duo 2.0GHz (T7300)
Chipset: PM965
Memory: DDR2 1GB (max. 2GB + 2GB)
LCD: 15.4" WUXGA
Video: AMD X2300
VRAM: 512MB
HDD: 120GB (SATA)
MP: Multi-drive
I/O: Serial, IEEE1394a, VGA, Gigabit LAN, Modem, USB x4
Card: PC Card x1, ExpressCard x1, SD
Security: TPM 1.2 / Smart Card (optional) / fingerprint (optional)
Wireless: Wireless LAN 802.11a/b/g; Bluetooth Ver. 2.0 + EDR (standard); WWAN-ready (optional EV-DO or HSDPA)
AC Adapter: 120W
Battery: 85W, approximately 4.5 hours depending on usage and environment
Dimensions: 11.3" x 14.0" x 2.0"
Weight: 7.4 lbs (with handle)
CF-52 (Standard Configuration), Estimated Street Price: $1,649.00
CPU: Intel Core 2 Duo 1.8GHz (T7100)
Chipset: GM965
Memory: DDR2 1GB (max. 2GB + 2GB)
LCD: 15.4" WXGA
Video: Intel embedded
VRAM: UMA, up to 128MB
HDD: 80GB (SATA)
MP: Multi-drive
I/O: Serial, IEEE1394a, VGA, Gigabit LAN, Modem, USB x4
Card: PC Card x1, ExpressCard x1, SD
Security: TPM 1.2 / Smart Card (optional) / fingerprint (optional)
Wireless: Wireless LAN 802.11a/b/g; Bluetooth Ver. 2.0 + EDR (optional); WWAN-ready (optional EV-DO or HSDPA)
AC Adapter: 120W
Battery: 85W, approximately 7.5 hours depending on usage and environment
Dimensions: 11.3" x 14.0" x 2.0"
Weight: 7.3 lbs (with handle)
Wave's ability to sell the technology through to its partners is indisputable given their relationship base. I am betting this will be solidified with Hitachi and an OEM signing this summer.
The ability of their partners, most notably Dell and Gateway, to articulate the benefits and upsell the technology to the end user is still very much in question. How will Seagate do in this regard?
How long is the learning curve and when will it all click? Wave is canoeing upstream and appears to be the only one paddling at this point. We need other oarsmen to get this off the ground.
Pickle
Thanks, Sheldon.
FYI - Your posts are very much appreciated.
Pickle
WidePoint Appointed to the FiXs(TM) Board of Directors
Thursday May 31, 10:30 am ET
Subsidiary ORC to Issue DoD ECA Certificates on FiXs-Certified Cards
FAIRFAX, VA--(MARKET WIRE)--May 31, 2007 -- WidePoint Corporation (AMEX:WYY - News), a leading provider of information technology assurance and identity management services, today announced its appointment to the Board of Directors of The Federation for Identity and Cross-Credentialing Systems(TM) (FiXs), a coalition of government contractors, commercial companies and not-for-profit organizations that are establishing a global secure identity cross-credentialing network.
FiXs provides a trusted identity management infrastructure compliant with Homeland Security Presidential Directive No. 12 (HSPD-12) and Federal Information Processing Standard (FIPS) 201. The FiXs network enables the secure exchange of approved credentials between member organizations and government partners and is modeled after the Automated Teller Machine (ATM) network, where an individual can use one of any number of bank cards at the ATM of almost any financial institution. FiXs uses available identity credential technology in conjunction with biometric identification to verify the identity of personnel seeking to enter military installations, government-controlled areas, and FiXs commercial facilities.
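To make the ATM analogy concrete, here is a hedged Python sketch of how a federated credential check might be routed back to the issuing member and bound to a live biometric. Every name, field and endpoint below is a hypothetical placeholder; this illustrates the model described above, not the actual FiXs protocol.

# Directory of federation members and their (placeholder) verification endpoints.
ISSUER_DIRECTORY = {
    "ORC": "https://issuer.example/orc/verify",
    "DoD": "https://issuer.example/dod/verify",
}

def issuer_confirms(endpoint: str, credential_id: str) -> bool:
    # Stand-in for a signed status check against the issuing organization.
    return not credential_id.startswith("REVOKED")

def verify_visitor(card: dict, live_biometric: str) -> bool:
    """Accept a credential issued by any federation member, the way an
    ATM accepts almost any bank's card."""
    endpoint = ISSUER_DIRECTORY.get(card["issuer"])
    if endpoint is None:
        return False  # issuer is not a trusted federation member
    # The biometric match binds the card to the person presenting it.
    return (issuer_confirms(endpoint, card["credential_id"])
            and card["biometric_template"] == live_biometric)

card = {"issuer": "ORC", "credential_id": "A123",
        "biometric_template": "minutiae-hash-42"}
print(verify_visitor(card, "minutiae-hash-42"))  # True

The point of the federation is the directory: each site only needs to trust the network's operating rules, not negotiate bilateral credential formats with every other member.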
WidePoint CTO Daniel Turissini stated, "FiXs allows DoD and its contractors to use a common trust model based on existing security systems that will spread deployment and implementation costs among all participants while providing long-term support for multiple levels of credentials for employees in both the federal and private sectors. WidePoint intends to leverage FiXs' secure and trusted network by offering the government-compliant PKI certificates of subsidiary ORC on FiXs-certified cards. We intend to capitalize on the successes of FiXs and the ORC External Certificate Authority (ECA) with the U. S. Department of Defense (DoD) in establishing a mutually trusted, interoperable community of DoD contractors, vendors and trading partners. Each member will be able to authoritatively authenticate identities by offering ECA medium hardware assurance certificates issued on FIPS-140-2 Level 3-compliant FiXs cards."
Dr. Michael J. Mestrovich, FiXs President, stated, "We are delighted to welcome WidePoint as a FiXs member. The FiXs-certified credentials are the optimum tokens to protect and enhance the value of an ORC ECA certificate for physical access authentication and ultimately, as the FiXs Trust Model evolves, to address logical access."
About FiXs
Founded in 2004 and headquartered in Fairfax, Virginia, The Federation for Identity and Cross-Credentialing Systems (FiXs) is a coalition of government contractors, commercial companies and not-for-profit organizations whose mission is to establish and maintain a worldwide, interoperable identity and cross-credentialing network built on security, privacy, trust, standard operating rules, policies and technical standards. The FiXs network verifies and authenticates the identity of industry personnel seeking to enter military installations, government-controlled areas and commercial sites tied to the network, providing a trusted mechanism for federated identity infrastructure within and between public and private sector organizations.
The coalition currently has 23 member companies, including systems integrators, financial institutions and other organizations that want to promote improved workforce protection and systems security for critical infrastructure markets. The U.S. Department of Defense and General Services Administration are participating government organizations.
The FiXs Network is an authorized link to the U.S. Department of Defense Cross-Credentialing Identification System (DCCIS) and a joint recipient of the 2005 FCW Events Government Solution Center Pioneer Award for "Successful Public/Private Sector Partnership," representing a premier example of a program managed collaboratively by government and non-government partners that tangibly improved government operations. For more information, visit www.fixs.or
Sorry if posted. Broadcom to Bring ARM Cortex-A8 to Powerful Videocore Mobile Multimedia Product Line
Posted : Wed, 23 May 2007 09:23:59 GMT
Author : ARM Ltd
Category : PressRelease
CAMBRIDGE, England, May 23 /PRNewswire/ --
- ARM Cortex-A8 Processor to be Combined with Broadcom's Leading VideoCore(R) Architecture to Deliver the Most Advanced Mobile Multimedia Devices
ARM (LSE: ARM; Nasdaq: ARMHY) today announced that Broadcom Corporation has licensed the ARM(R) Cortex(TM)-A8 high-performance processor, along with supporting ARM IP products, for integration into a range of mobile and consumer products. The Cortex-A8 processor has been chosen by Broadcom to deliver the performance that will be required in the next generation of mobile computing products. Advanced mobile and consumer applications require intensive, efficient computation power for networking, media, and accessing the latest Web 2.0 services. This high performance is required while maintaining low power consumption and meeting the cost points which consumers demand.
Broadcom will combine the Cortex-A8 with its industry-leading VideoCore multimedia technology. VideoCore is a flexible multimedia acceleration technology used for high-performance and very low-power 2D/3D graphics, imaging, video and audio processing. The flexible nature of VideoCore enables rapid porting and support for ever-changing Internet codecs and multimedia technologies.
"The needs of today's mobile and consumer devices are expanding rapidly and will be required to provide an astonishing user experience that spans Web 2.0, gaming, picture taking and long-play audio/video," said Mark Casey, Vice President and General Manager of Broadcom's Mobile Multimedia Business Unit. "The powerful combination of Broadcom's industry-leading VideoCore multimedia performance and the Cortex-A8's industry-leading power/performance will deliver outstanding advantages in our next-generation devices."
"We see a strong tendency among Partners such as Broadcom to develop high-performance ARM technology-based platforms that can be used in leadership mobile platforms," said Graham Budd, executive vice president and general manager, Processors, ARM. "The Cortex-A8 processor provides the cutting-edge performance to enable dynamic, secure, consumer-centric features, unlocking the potential of devices including high-end smartphones. The ARM ecosystem includes broad support of open operating systems and third-party applications, enabling an advanced mobile internet experience."
The Cortex-A8 processor is capable of providing over 2000 DMIPS of processing power, is equipped with ARM TrustZone(R) security technology, and also incorporates Jazelle(R)-RCT technology for efficient execution of Java and other execution environments such as Microsoft .NET. The processor is also fault-tolerant, a key requirement in the networking arena.
In addition to its robust processing power, the Cortex-A8 processor is fully supported by the ARM RealView(R) Development Suite, enabling OEMs to deliver rich content to Linux-based devices that fully exploit the high-performance Cortex-A8 processor with a significantly reduced memory footprint. The ability to seamlessly build Linux application libraries with the compilation tools in the RealView Development Suite enables developers to maximize the performance of key elements of their code base, ultimately providing end customers with highly responsive Linux devices that have an exceptionally long battery life.
About ARM
ARM designs the technology that lies at the heart of advanced digital products, from wireless, networking and consumer entertainment solutions to imaging, automotive, security and storage devices. ARM's comprehensive product offering includes 16/32-bit RISC microprocessors, data engines, graphics processors, digital libraries, embedded memories, peripherals, software and development tools, as well as analog functions and high-speed connectivity products. Combined with the company's broad Partner community, they provide a total system solution that offers a fast, reliable path to market for leading electronics companies. More information on ARM is available at http://www.arm.com.
Dutton is up if you go to their site.
Dell, Intel and Microsoft Join Forces to Increase Adoption of NAND-Based Flash Memory in PC Platforms
Wednesday May 30, 12:00 pm ET
Newly formed group to provide standard interface for nonvolatile memory subsystems.
REDMOND, Wash., May 30 /PRNewswire-FirstCall/ -- Broad adoption of NAND flash memory technology in the PC platform received a boost with the formation of the Non-Volatile Memory Host Controller Interface (NVMHCI) Working Group. The NVMHCI Working Group is chaired by Intel Corporation with core contributors including Dell Inc. and Microsoft Corp.
NVMHCI will provide a standard software programming interface for nonvolatile memory subsystems. The interface would be used by operating system drivers to access NAND flash memory storage in applications such as hard drive caching and solid-state drives.
"Several NAND solutions are coming on the scene to take advantage of the ReadyBoost(TM) and ReadyDrive(TM) features of the Windows Vista® operating system," said Bob Rinne, general manager of Windows Hardware Ecosystem at Microsoft. "Standardizing on a common controller interface will enable more integrated operating system support of these solutions moving forward."
Industry momentum for standardization in NAND storage solutions is building, especially as NAND moves into the PC platform. NVMHCI complements standardization work being done in the Open NAND Flash Interface (ONFI) Working Group.
"We've got a performance-enhancing NAND-based product in the market with our new Centrino mobile technology platform called Intel Turbo memory, and this newly formed working group will help make that and a number of other NAND-based solutions more prolific, faster," said Rick Coulson, senior fellow and director of I/O Architecture at Intel. "ONFI formed last year to standardize the interface between the Flash controller and the NAND itself, and standardizing the register level interface between the Flash controller and the operating system driver is the logical next step."
"Nonvolatile memory solutions enable better system performance and lower power consumption as well as facilitate additional benefits such as smaller form factors, quieter systems and improved robustness," said Liam Quinn, director of communications for technology strategy and architecture at Dell. "Dell looks forward to working with industry partners and extending the benefits NVMHCI will bring to our customers."
The group is actively expanding its membership to include other industry-leading companies and expects to deliver the specification in the second half of 2007. Requests for the NVMHCI Contributor Adopter agreement can be directed to nvmhci@intel.com.
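As a rough illustration of what a standard register-level interface buys—one generic OS driver working against any conforming controller—here is a small Python simulation. The register names, offsets and command codes are invented for the example; they are not drawn from the NVMHCI specification, which the group had not yet delivered.

class FlashController:
    """Simulated NAND controller exposing a fixed, invented register map."""
    REG_COMMAND, REG_LBA, REG_STATUS = 0x00, 0x04, 0x08
    CMD_READ = 0x1

    def __init__(self):
        self.regs = {self.REG_COMMAND: 0, self.REG_LBA: 0, self.REG_STATUS: 0}
        self.blocks = {7: b"cached page"}  # pretend flash contents
        self.last_data = b""

    def write_reg(self, offset: int, value: int) -> None:
        self.regs[offset] = value
        if offset == self.REG_COMMAND and value == self.CMD_READ:
            self.last_data = self.blocks.get(self.regs[self.REG_LBA], b"")
            self.regs[self.REG_STATUS] = 1  # ready

class GenericNVDriver:
    """One OS driver works against any controller honoring the register map."""
    def read_block(self, ctrl: FlashController, lba: int) -> bytes:
        ctrl.write_reg(ctrl.REG_LBA, lba)
        ctrl.write_reg(ctrl.REG_COMMAND, ctrl.CMD_READ)
        return ctrl.last_data

print(GenericNVDriver().read_block(FlashController(), 7))  # b'cached page'

Because the driver only touches the agreed register map, any vendor's controller that honors the same map works without a vendor-specific driver—the same benefit NVMHCI was chartered to deliver for hard drive caching and solid-state drives.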
About Dell
Dell Inc. listens to customers and delivers innovative technology and services they trust and value. Uniquely enabled by its direct business model, Dell is a leading global systems and services company and No. 34 on the Fortune 500. For more information, visit www.dell.com.
About Intel
Intel, the world's largest chip maker, is also a leading manufacturer of computer, networking and communications products. For more information, visit www.intel.com.
About Microsoft
Founded in 1975, Microsoft (Nasdaq: MSFT - News) is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.
NOTE: Microsoft, ReadyBoost, ReadyDrive and Windows Vista are trademarks of the Microsoft group of companies.
The names of actual companies and products mentioned herein may be the trademarks of their respective owners.
Thanks, Ispro. Thought it might be something new.
Government wary of 'trusted computing'
State Services Commission fears agencies could lose control over government documents
By Stephen Bell, Wellington | Monday, 21 May, 2007
Democratic rights and obligations could be imperilled when Trusted Computing technologies and digital rights management arrive on new systems, says the manager of the State Service Commission’s e-government strategy and policy team, Hugh McPhail.
Government can make sure its own documents and computers do not have such protection imposed on them, he says. A set of principles and policies has been established that bars such encumbering protections from both content and computers, unless there is good reason for them.
Government agencies are avoiding any of the new TC and DRM software by, for example, not turning on Microsoft’s Information Rights Management (IRM) systems.
The fear is that by allowing protections that are enforced by an outside party (such as a computer or software vendor) to act on documents that are in government agencies’ possession, agencies could lose a measure of control over these documents. They could find, for example, that they can’t copy, store, retrieve or print documents as they wish — or as government policy and practice requires.
Some DRM protection schemes are also known to “phone home” — that is, they send a digital signal over the internet to the vendor of the DRM software which may indicate details of the document and the way it is being used. This could compromise the privacy of the agency concerned, the individual and the organisation to which the document refers.
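As a sketch of why that worries the SSC, the following hypothetical Python fragment shows a rights-managed viewer that must report document usage to a vendor's license server before granting access. The endpoint and fields are invented; real DRM schemes differ, but the privacy-relevant pattern is the same: the vendor, not the agency, sees the usage and makes the decision.

import json
import urllib.request

LICENSE_SERVER = "https://drm-vendor.example/usage"  # placeholder endpoint

def open_protected_document(doc_id: str, action: str) -> bool:
    """Ask the vendor for permission -- and, in doing so, disclose which
    document is being used and how."""
    report = json.dumps({"doc": doc_id, "action": action}).encode()
    req = urllib.request.Request(
        LICENSE_SERVER, data=report,
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200  # the remote party decides
    except OSError:
        return False  # no reachable license server, no document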
The SSC is now working with other government agencies on a set of standards and guidelines to help agencies develop practical processes “appropriate to their business drivers and statutory responsibilities” against the framework of their principles and policies.
But it is not only government-sourced documents that are affected. Statutory returns and other documents that businesses and individuals file with government agencies may well be prepared on outside computers equipped with TC or DRM software. This may also create problems, said McPhail, speaking at last week’s Govis (Government Information Services) conference.
One of the government principles says: “Any information that is relied on for execution of public business must be free from encumbrance by externally imposed digital restrictions, except with the informed consent of government.”
This means either that specific exemptions must be drafted in respect of certain documents — and content from those documents only be included in internal documents under strict controls — or that such external documents must be rejected. This could be a handicap to the routine filing of documents with government agencies, said McPhail.
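That acceptance rule can be read as a simple filter, sketched below in Python under obvious simplifying assumptions (the field names are hypothetical): a filing is accepted if it is unencumbered, accepted under strict controls if a specific exemption has been drafted, and rejected otherwise.

EXEMPTIONS = {"statutory-return-114"}  # exemptions drafted case by case

def accept_filing(doc: dict) -> str:
    if not doc.get("externally_restricted", False):
        return "accept"
    if doc["id"] in EXEMPTIONS:
        return "accept under strict controls"  # content use is limited
    return "reject"  # submitter must re-file in unencumbered form

print(accept_filing({"id": "x1", "externally_restricted": True}))  # reject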
Parliament is taking its first steps into the world of electronic document handling in its Select Committee hearings on such matters (Computerworld, February 5). Refusal to accept documents that may be DRM-encumbered could force submitters to file their documents in printed form for re-keying on unencumbered parliamentary computers — exactly what e-filing is trying to avoid.
However, such protections could be used positively, too — to enhance security around government agencies and in public and government communications, for example.
HP TPM-based hard drive security?
http://www.hp.com/sbso/solutions/pc_expertise/professional_innovations/data_protection.html?jumpid=r....
http://h20000.www2.hp.com/bc/docs/support/UCR/SupportManual/TPM_prt001a1098_appc/TPM_prt001a1098_app....
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c001633...
Ramsey, this was all about financing the ramp-up: new OEM, Seagate, customer support, etc. Wave could not afford to solicit and capture additional business without having the resources to support it. I am hopeful in this respect and grateful for the possibility of buying more at a lower price. Most of all, I am hopeful this is the last PP.
Pickle
Dell to Sell Computers at Wal-Mart
Thursday May 24, 4:55 pm ET
By Peter Svensson, AP Technology Writer
Dell Breaks From Original Direct-Sale Plan and Will Put Computers at Wal-Mart
NEW YORK (AP) -- In a departure from the direct-to-consumer business model it was founded on, Dell Inc. plans to sell computers at Wal-Mart Stores Inc., the world's largest retailer.
A Dell spokesman said Thursday the computer maker will begin selling two models from its Dimension desktop computer line in about 3,000 Wal-Marts beginning June 10.
Dell spokesman Dwayne Cox said the Wal-Mart deal "represents our first step" into global retail.
"Customers want more and new ways to buy our products, and we plan on meeting their needs on a global level," Cox said. "Offering Dell Dimensions in Wal-Mart is a great example of this approach."
Cox said Dell will announce additional moves into retail in the coming quarters, but he declined to give specifics.
The two Dimension E521 models will be sold at Wal-Marts in the United States, Canada and Puerto Rico. Dell said it could not reveal specific prices yet. On its Web site, the cheapest Dimension E521 costs $359.
Since its founding in the 1980s, Dell has relied on selling PCs and other products directly to consumers and business customers over the phone and Internet. It viewed direct sales as an important cost advantage over competitors who sold computers through retailers.
The strategy worked, helping Dell become the world's leading PC maker. But last year, the Round Rock-based company lost its lead to a revitalized Hewlett-Packard Co., which now sells systems online, by phone and in stores.
Dell's disappointing financial results led to the ouster of Chief Executive Kevin Rollins in January. He was replaced by founder Michael Dell, who in the 1980s laid down the company's core model of building computers as customers ordered them, holding only a few days' worth of components in inventory.
Morningstar analyst Rick Hanna said Michael Dell was "the right person" to adjust the company's strategy.
"I think we're seeing the logic of him stepping back in the CEO role, because if anybody could empower the company to move in a new direction, it's the guy who founded it," Hanna said.
"We've been telling Dell for years that they need to explore a retail strategy," said analyst J.P. Gownder at Forrester Research. "They need to learn about how retail exposes your product to a wide variety of consumers."
There is a small risk, Gownder said, that Dell's brand, not just its low-end Dimension desktops, becomes associated too strongly with Wal-Mart and its price-conscious image.
"Dell has spent the last few years trying not to be the low-price player anymore in the market," Gownder said. "They try to be the value provider who customizes the computer the way you want it."
For Wal-Mart, the move is part of a recent effort to offer more brand names in its electronics departments and, more broadly, a strategy to rekindle sluggish overall sales growth at its stores.
The retailer has spent months remodeling the home electronics area in hundreds of stores to create more attractive displays of flat-panel televisions and other products. It announced May 14 that it is adding more high-definition TVs by Samsung, home theater systems by Sony and Philips and digital cameras including Kodak and Nikon.
Dell's machines will add to Wal-Mart's current lineup, which includes computers from HP, Gateway Inc. subsidiary eMachines and Taiwan's Acer Inc.
Separately, Dell was set on Thursday afternoon to start selling consumer PCs running Linux, a free operating system that competes with Microsoft Corp.'s Windows.
Dell had previously said it was planning to offer Linux PCs, without giving details. It will sell one laptop and two desktop models for $599 and up, depending on hardware options. The prices are slightly lower than similar models with Windows Vista.
Linux is distributed for free and is maintained by volunteers and companies who make money by offering technical support. Though it has a strong following among computer professionals and is widely deployed in servers, it has had little traction in replacing Windows on consumer computers. Dell started selling servers with Linux in 1999.
The version of Linux chosen by Dell for its consumer offering, Ubuntu 7.04, is designed with the consumer in mind. It is maintained and distributed by Canonical Ltd., a British company that has shipped millions of free CDs loaded with the operating system.
Dell said the decision to sell consumer computers with Linux pre-installed was based on the urging of about 30,000 members of its customer brainstorming Web site, Ideastorm.
Shares of Dell dropped 37 cents to $25.89 Thursday.
Awk, love the enthusiasm! Launch is getting close. The FDE technology provides the catalyst Wave so desperately needed. Watershed moment for the TCG. I am not on margin, but I am buying more over the next few weeks.
Pickle
Well, it looks like another OEM is coming aboard (hoping HP) and Seagate drives are going to fly off the shelves based on SKS's comments. The calls questioning SKS's ability were unwarranted and, if anything, rather short-sighted considering the place Wave now finds itself in. Could someone have positioned us better, driven adoption of a new security infrastructure, etc.? I seriously doubt it. Looking forward to every quarter from here on out. This market is "blossoming".
Pickle
OT: WidePoint Reports First Quarter '07 Results
Thursday May 10, 1:00 pm ET
Conference Call Scheduled for 4:30 p.m. EDT Today
FAIRFAX, VA--(MARKET WIRE)--May 10, 2007 -- WidePoint Corporation (AMEX:WYY - News), a leading provider of information technology assurance and identity management services, today announced financial results for the first quarter ended March 31, 2007.
WidePoint reported revenue of $2.9 million for the first quarter of 2007, an increase of 6% compared to revenue of $2.7 million for the first quarter of 2006. The Public Key Infrastructure ("PKI") credentialing and managed services segment experienced 40% growth, from $223,000 for the first quarter of 2006 to $310,000 for the first quarter of 2007. The company issued 2,843 credentials during the first quarter of 2007, as compared to 1,733 credentials during the first quarter of 2006. WidePoint's Consulting Services segment saw a slight increase in revenues, from $2,461,000 in the first quarter of 2006 to $2,542,000 in the first quarter of 2007. Net loss for the first quarter of 2007 was $376,100, or a loss of $0.01 per share, compared with a net loss of $258,200, or $0.01 per share, in the first quarter of 2006. The company ended the first quarter of 2007 with working capital of approximately $3.5 million, cash and cash equivalents of approximately $2.7 million, no senior debt and total stockholders' equity of approximately $7.5 million.
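For anyone checking the arithmetic, the segment growth rates quoted above follow directly from the stated figures. (The headline 6% is presumably computed from unrounded revenue; it cannot be reproduced exactly from the rounded $2.9 million and $2.7 million.)

def growth(new: int, old: int) -> float:
    return (new - old) / old * 100

print(f"PKI segment: {growth(310_000, 223_000):.0f}%")      # ~39%, quoted as 40%
print(f"Consulting:  {growth(2_542_000, 2_461_000):.1f}%")  # ~3.3%, the "slight increase"
print(f"Credentials: {growth(2843, 1733):.0f}%")            # ~64% more issued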
Other first quarter 2007 highlights include:
-- WidePoint was awarded a multi-year contract to provide an enterprise-wide Smart ID solution to a federal contractor with over 10,000 initial users and the opportunity to expand to over 100,000 users. The solution complies with Homeland Security Presidential Directive 12 ("HSPD-12"), Federal Information Processing Standards Publication Series 201 ("FIPS 201") and Personal Identification Verification ("PIV II") requirements and regulations.
-- WidePoint was awarded and is deploying 3,000 initial first responder credentials under several state initiatives with two major integrators funded by Department of Homeland Security ("DHS") grants.
-- WidePoint commenced several teaming agreements in three new vertical markets to leverage the Federal PKI in the health care, insurance and hospitality industries.
-- WidePoint launched work on a Department of Defense ("DoD") physical access initiative to include ORC External Certificate Authority ("ECA") certificates on smartcards to support non-DoD entities transacting business with U.S. government entities.
-- WidePoint experienced approximately 80% renewal rates of ECA credentials during the quarter ending March 31, 2007.
Steve Komar, CEO of WidePoint, stated, "Our efforts to strategically position and build a base of business within the Identity Management marketplace are beginning to take hold as we start to expand our presence and expertise with a range of new customers, partners, and integrators."
Dan Turissini, CTO of WidePoint commented, "Our most recent award to provide an enterprise-wide Smart ID solution represents the first government contractor to fully leverage federal compliant identity management to satisfy both internal and business-to-government transaction protection. Our solution implements a full spectrum turnkey solution from card request to card termination enabling full compliance with recent Federal Acquisition Regulations that address identity management, strong authentication and HSPD-12."
Turissini added, "Working with the first responder community, establishing pilots within the health care, insurance and hospitality industries, and addressing international government-to-government credentialing needs continues to extend our reach as this new marketplace matures. We continue to be excited about the long-term opportunities as the need and adoption of strong authentication grows."
WidePoint will hold a conference call with CEO Steve Komar and senior members of the management team today at 4:30 p.m. Eastern Time. The call will cover the company's first quarter results. Komar will open the call and a question-and-answer session will follow.
To participate, dial (888) 802-8574 any time after 4:20 p.m. Eastern Time. International callers should dial (973) 628-6885. While in conference, if callers should experience any difficulty or require operator assistance, they can press the (*) followed by the (0) button. This will call an operator to the line.
TCG Storage Conference May 15, 2007
http://www.creativestorage.org/