Softex OmniPass Application Supports Atmel's Trusted Platform Module
Monday July 14, 8:08 am ET
This unparalleled combination of software and hardware enhances the user's experience by providing convenient security that cannot be achieved with software solutions alone.
COLORADO SPRINGS, Colo.--(BUSINESS WIRE)--July 14, 2003-- Atmel® Corporation (Nasdaq:ATML - News) and Softex® Incorporated announced today that Softex has ported their suite of security and user access applications to achieve a higher level of security enabled by Atmel's Trusted Platform Module (TPM). The combination of Softex's OmniPass software application with the Atmel TPM dramatically improves the overall security level within a computing environment, ranging from PCs to mobile phones. Existing software applications can easily utilize the TPM for improved security by adhering to common industry cryptographic Application Program Interfaces (APIs). Atmel's TPM serves as a secure key management device, providing real hardware security at an affordable price point.
Atmel's TPM, the AT97SC3201, is the industry's first production TPM complying with the Trusted Computing Group (TCG) specifications. As a secure key manager, the TPM is able to protect the user's OmniPass passwords. The AT97SC3201 is a member of Atmel's family of embedded security products and part of Atmel's broader security portfolio. For the past 20 years, Atmel has applied its extensive knowledge base and IP in the silicon security market place to create innovative custom and industry standard products, such as secure microcontrollers, CryptoMemory® and FingerChip(TM).
"One of TCG's goals is to improve the level of security and enhance the end user's experience across a wide variety of platforms," said Nancy Sumrall, TCG Marketing Work Group Chairperson. "Softex's support of the TCG market will help promote wider validation and acceptance within the overall marketplace by providing innovative applications end users and administrators desire, while helping to build out the overall TCG infrastructure."
Softex Incorporated, a worldwide leader of system software for PCs, offers the most comprehensive solutions for PCs running the Windows® operating systems. OmniPass is designed to reduce the cost and complexity of managing the multiple passwords that exist in today's computing environments while securing the data stored on the machine. OmniPass allows files and folders to be encrypted for enhanced data security.
With OmniPass, a user creates a single master password that is used to "replace" all other passwords. The first time a password-secured website or application is used, the user simply enters the original password and then tells OmniPass to "remember that password." From that point forward, anytime that website or application is used again, OmniPass will prompt the user to enter the user's master password and access to that site or application is automatically granted. With OmniPass, the user only has to remember a single master password that reduces the number of lost passwords and the costs associated with managing passwords.
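The single-master-password pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Softex's implementation: site passwords are encrypted under a key derived from one master password, and all class and function names here are invented for the example.

```python
# Hypothetical sketch of a master-password vault in the OmniPass style.
# A real product would use an authenticated cipher and, with a TPM,
# seal the derived key in hardware; this toy version only shows the flow.
import base64
import hashlib
import hmac
import os


def derive_key(master_password: str, salt: bytes) -> bytes:
    # PBKDF2 stretches the single master password into an encryption key
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)


class PasswordVault:
    def __init__(self, master_password: str):
        self.salt = os.urandom(16)
        self.key = derive_key(master_password, self.salt)
        self.entries = {}  # site -> (nonce, encrypted password)

    def _xor_stream(self, data: bytes, nonce: bytes) -> bytes:
        # Toy stream cipher built from HMAC-SHA256 keystream blocks
        stream = b""
        counter = 0
        while len(stream) < len(data):
            stream += hmac.new(self.key, nonce + counter.to_bytes(4, "big"),
                               hashlib.sha256).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    def remember(self, site: str, password: str) -> None:
        # "Remember that password": store it encrypted under the master key
        nonce = os.urandom(12)
        self.entries[site] = (nonce, self._xor_stream(password.encode(), nonce))

    def recall(self, site: str) -> str:
        # Entering the master password later releases the original one
        nonce, blob = self.entries[site]
        return self._xor_stream(blob, nonce).decode()


vault = PasswordVault("one-master-password")
vault.remember("example.com", "s3cret")
print(vault.recall("example.com"))  # s3cret
```

The user-visible property is the same as described in the press release: only the master password must be remembered, and every stored secret is unreadable without it.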
"The Atmel choice was a solid decision. Their flagship TPM product line helped us to add a level of hardware security to our suite of applications and align our product offerings to the TCG standards. The porting of the application went smoothly with both engineering teams focused on the outcome," commented Gregg Philipson, VP-Sales for Softex Incorporated. "The TCG standards are at the forefront of enabling affordable hardware security in a variety of computing platforms. By working with Atmel and its embedded security products team, Softex will be able to make its highly regarded products even more enticing to OEMs and end users. By leveraging TPM hardware security, we can increase the value our innovative security and access applications have to the end user."
"Atmel is pleased to be working with innovative companies such as Softex in strengthening the TCG infrastructure," said Kevin Schutz, product line manager for security products at Atmel. "Softex recognizes the benefit of leveraging the TPM for improving platform security while still providing innovative applications the marketplace is requesting."
Atmel is the leading TPM supplier in the world and is actively working to provide synergistic support of its wide security product portfolio to the market. More details on the Softex applications capable of supporting Atmel's range of security products, such as CryptoMemory will be announced later this year. Atmel's TPM and Softex's applications are available on a variety of Intel® reference platforms today.
The OmniPass application and the AT97SC3201 TPM are available now. TPM development kits are available to qualified customers. Atmel's TPM pricing is less than $5.00 in volume. Contact Atmel and Softex for more information.
http://biz.yahoo.com/bw/030714/145278_1.html
last patent application posting from me for the day...
I think 4 is enough.
United States Patent Application
20030079038
Kind Code
A1
Robbin, Jeffrey L. ; et al.
April 24, 2003
Intelligent interaction between media player and host computer
Abstract
Improved techniques for interaction between a host computer (e.g., personal computer) and a media player are disclosed. According to one aspect, interaction between a host computer and a media player, such as automatic synchronization of media contents stored on a media player with media contents stored on a host computer, can be restricted. According to another aspect, management of media items residing on a media player can be performed at and by a host computer for the media player. According to still another aspect, media content can be played by a media player in accordance with quality settings established for the media content at the host computer.
Inventors:
Robbin, Jeffrey L.; (Los Altos, CA) ; Heller, David; (San Jose, CA)
Correspondence Name and Address:
BEYER WEAVER & THOMAS LLP
P.O. BOX 778
BERKELEY
CA
94704-0778
US
Assignee Name and Address:
Apple Computer, Inc.
Serial No.:
277418
Series Code:
10
Filed:
October 21, 2002
U.S. Current Class:
709/232; 709/221
U.S. Class at Publication:
709/232; 709/221
Intern'l Class:
G06F 015/177
one more patent application of interest...
United States Patent Application
20030098884
Kind Code
A1
Christensen, Steven W.
May 29, 2003
Method and apparatus for displaying and accessing control and status information in a computer system
Abstract
An interactive computer-controlled display system having a processor, a data display screen, a cursor control device for interactively positioning a cursor on the data display screen, and a window generator that generates and displays a window on a data display screen. The window region provides status and control information in one or more data display areas. The individual data display areas may be controlled through the use of controls and indicators on the control strip itself using cursor control keys.
Inventors:
Christensen, Steven W.; (Milpitas, CA)
Correspondence Name and Address:
Lawrence E. Lycke
BLAKELY, SOKOLOFF, TAYLOR & ZAFMAN LLP
Seventh Floor
12400 Wilshire Boulevard
Los Angeles
CA
90025-1026
US
Assignee Name and Address:
Apple Computer, Inc.
Serial No.:
179775
Series Code:
10
Filed:
June 24, 2002
U.S. Current Class:
345/781
U.S. Class at Publication:
345/781
Intern'l Class:
G09G 005/00
United States Patent Application 20030126445
Kind Code
A1
Wehrenberg, Paul J.
July 3, 2003
Method and apparatus for copy protection
Abstract
Copy protection techniques that utilize a watermark and a permission key are disclosed. The copy protection techniques can provide single-copy copy protection in addition to different levels of copy protection. The permission key and the watermark can also permit the invention to yield variable levels of copy protection. In one embodiment, content including a watermark is transmitted to a recipient. The recipient is allowed to read the content but not record the content unless the recipient possesses a permission key.
Inventors:
Wehrenberg, Paul J.; (Palo Alto, CA)
Correspondence Name and Address:
BEYER WEAVER & THOMAS LLP
P.O. BOX 778
BERKELEY
CA
94704-0778
US
Assignee Name and Address:
Apple Computer, Inc.
Serial No.:
327359
Series Code:
10
Filed:
December 20, 2002
U.S. Current Class:
713/176; 380/203
U.S. Class at Publication:
713/176; 380/203
Intern'l Class:
H04L 009/00; H04N 007/167
United States Patent Application
20030107606
Kind Code
A1
Capps, Stephen C. ; et al.
June 12, 2003
Multiple personas for mobile devices
Abstract
A computer system is disclosed which may adopt one of many personas, depending upon the role that its owner is currently playing. The computer system includes a central repository of extensible personas available to all applications running on the computer system. Each such persona has associated therewith a suite of parameters, or specific values for parameters, which are appropriate for conducting computer implemented transactions under a particular persona. The computer system further includes a graphical user interface which allows the user to switch from persona to persona by selecting a particular persona from a list of available personas displayed on a display screen of the computer system. By selecting such persona, the user causes the computer system to globally change the entire suite of parameter values so that subsequent transactions conducted with the computer system employ the parameter values of the current persona.
Inventors:
Capps, Stephen C.; (San Carlos, CA) ; Ansanelli, Joseph G.; (Palo Alto, CA) ; Fang, Ton-Yun; (Sunnyvale, CA)
Correspondence Name and Address:
BEYER WEAVER & THOMAS LLP
P.O. BOX 778
BERKELEY
CA
94704-0778
US
Assignee Name and Address:
Apple Computer, Inc.
Serial No.:
305678
Series Code:
10
Filed:
November 26, 2002
U.S. Current Class:
345/810
U.S. Class at Publication:
345/810
Intern'l Class:
G09G 005/00
Apple attempts to patent fast user switching
By Tony Smith
Posted: 11/07/2003 at 16:41 GMT
Apple has filed for a patent that suggests the company is working on a new mobile device capable of supporting multiple users. Either that or it's cunningly trying to outflank Microsoft's lead on fast multi-user switching by retrospectively patenting the technique as its own.
Almost as interesting as the patent's content is the name of its lead inventor: Steve Capps, erstwhile Mac OS Finder co-designer and more recently Microsoft's Windows UI architect. Capps is also remembered as the designer of the UI used in Apple's Newton PDA. That gives you an idea of the application's heritage.
The application, number 20030107606, is entitled 'Multiple personas for mobile devices'. It describes how a computer system's settings can be immediately changed to reflect a new "persona" when the user chooses from a list of available personae using a graphical user interface displayed on the computer's screen.
Aimed at mobile devices it may be, but to us the patent's abstract recalls Mac OS X 10.3's fast user switching system, demo'd in public for the first time by CEO Steve Jobs at Apple's Worldwide Developers Conference last month. When that facility is enabled in the new version of the multi-user OS, codenamed 'Panther', users can instantly activate their own accounts by selecting from a menu of users in the top right-hand corner of the menu bar.
Switching this way doesn't log the current user out of the system or kill his or her apps; it simply changes the system's settings and application states to those of the user who'd just switched in. The change is engaged with a rather cute rotating cube graphical metaphor.
At WWDC, Jobs admitted that Microsoft had beaten Apple to market by offering such a feature in Windows XP, but he claimed Apple's implementation was the better of the two.
That would imply, surely, that Microsoft has a solid prior art claim?
No. The current application, filed last November and updated this past June, turns out to be a continuation of patent number 6,512,525, filed in August 1995, long before Windows XP arrived, and finally granted in January 2003 with the same title. That patent is also assigned to Apple.
The downside - if Apple's intent is to outflank Microsoft; we're only guessing here - is that the patent refers to multiple personas of a single user, not multiple users. While Panther's approach to fast user switching might perform its magic by treating multiple users' preferences as different personas of a single, virtual user, it's questionable whether the original patent covers such a use.
Incidentally, it does cover uses such as the Mac OS' Location Manager, which switches network-related settings according to the user's location. The patent extends that idea to cover other, more personal settings and data that might depend on the user's location or identity, i.e. the computer's owner as public individual and as company employee.
Whatever, Apple's continuation application applies the concepts relevant to fast user switching retrospectively to the original patent. The continuation application is reworded to imply the kind of functionality delivered by Panther's fast user switching. There are references to "said persona being one of multiple personas available on the computer system and associated with one or more users of the computer system" a concept seemingly missing from the original patent. The later application also refers to linking personas to passwords - another pointer to fast user switching.
So the new application clearly associates fast user switching with multiple personas, essentially allowing Apple to claim it has owned that technology since 1995.
And, of course, this is perfectly permissible under US patent law, an intellectual property attorney of our acquaintance told us. Continuations can be and are filed to retrospectively add claims to already granted patents. Sometimes that's because the inventor is appealing against claims that were originally disallowed; more often it's done to slip one past a competitor - an approach we've heard described as "Machiavellian".
Will Apple use its new-found intellectual property rights? Maybe not, but as its use of QuickTime patents to win a $150 million investment from Microsoft demonstrated some years back, it may now have the opportunity to do so if it ever hears the words 'cancelled' and 'Microsoft Office' in the same sentence. ®
http://www.theregister.co.uk/content/39/31702.html
MP3 creator speaks out
Several albums can be compressed into MP3 files and burned on a CD
The term MP3 is well-known to millions of the world's teenagers but its mere mention sends shivers down the spines of record industry executives.
The format responsible for a musical revolution allows you to compress sound into a file which is a fraction of the size of the original.
But a name which will be unfamiliar to many is that of Karlheinz Brandenburg - the German researcher who was one of the inventors of MP3.
He first began working on a way of making small sound files some 20 years ago as part of his doctoral thesis.
"We had dreams from the start," he told BBC World's ClickOnline.
But he never expected his work to achieve the popularity or notoriety it has.
"In 1988 somebody asked me what will become of this, and I said it could just end up in libraries like so many other PhD theses," he recalls.
"But it could become something that millions of people will use, that was the dream."
Discerning ears
Dr Brandenburg finished his thesis in 1989. But that was just the start of the story.
He went on to join the Fraunhofer Institute, one of Germany's most prestigious research facilities, and contributed greatly towards making MP3 what it is today.
Now, it has established itself as the de facto format for sharing music over the internet, even though rival formats have since been developed.
The big challenge in the early days was making sure that none of the sound quality was lost by squeezing a song into a smaller file size.
The researchers aimed for an MP3 file that would sound just like the original to discerning ears.
"There was a lot of testing," says Dr Brandenburg. "I remember sitting at the computer with very good headphones and always listening to a few items.
"I must have listened to some a thousand times."
One of the problems he faced was coping with the many different types of music. Each style, from pop to classical, reacted differently when it was compressed and it was hard to predict how much the sound quality would suffer.
New opportunities
The emergence of MP3 turned the music world on its head. Here was a format that allowed high quality music to be transferred over the internet and straight into people's home computers.
With the advent of file-sharing services like the now defunct Napster, millions of people were downloading music, in many cases without paying for the tracks.
But Dr Brandenburg does not believe that by creating the MP3 format, he is contributing to the demise of the record industry.
"People should have easier access to music," he says. "They should be able to listen to it wherever they are and still pay for it.
"My sympathy is always with the artists and even with the record labels. They should get paid for the work they do.
"I don't like the Napster idea that all music should be free to everybody."
Instead Dr Brandenburg argues that the record labels need to look at ways of using the technology, rather than fighting it.
"There are so many new opportunities for the music industry if they catch on and use the technology."
http://news.bbc.co.uk/1/hi/technology/3059775.stm
Napster Creator May Be Set for Comeback
By Joseph Menn and Jon Healey
Times Staff Writers
July 7, 2003
Napster the brand is going legit under new owner Roxio Inc.
Now Napster the person is trying to do the same with an Internet start-up that could, once again, have a far-reaching effect on the music business.
Napster creator Shawn Fanning is looking for backers of technology he's developing that would let file-sharing networks distribute music without violating copyrights, people familiar with the project said.
Fanning's technology would recognize copyrighted songs on a network and let the copyright owners set a price for downloading them.
That's quite a departure from the original Napster service, which let users copy songs from one another's computers free. Bearing the then-18-year-old Fanning's online nickname, Napster launched in 1999 but quickly drew a copyright infringement lawsuit from major record companies and music publishers, which forced it to shut down in mid-2001.
Regardless of whether the gambit works, it demonstrates that 22-year-old Fanning has moved beyond the service that made him a household name. Yet Fanning, who lives in the Bay Area and declined to be interviewed, is still trying to shape the future of the music industry — this time by working with his onetime competitors.
Fanning's new program relies on audio fingerprinting that identifies every song being offered by users on a file-sharing network. As the user submits the song, it would be checked against a database at Fanning's firm to see whether it is copyrighted. If it is, the song couldn't be distributed without payment.
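The fingerprint-then-lookup flow described above can be sketched as follows. This is purely illustrative: a real system uses acoustic fingerprints robust to re-encoding, while here a plain content hash stands in, and the rights database, function names, and sample data are all invented for the example.

```python
# Illustrative sketch of checking a shared song against a rights database.
import hashlib

RIGHTS_DB = {}  # fingerprint -> (owner, price in cents); hypothetical store


def fingerprint(audio_bytes: bytes) -> str:
    # Stand-in for a real acoustic fingerprint
    return hashlib.sha256(audio_bytes).hexdigest()


def register_copyright(audio_bytes: bytes, owner: str, price_cents: int) -> None:
    RIGHTS_DB[fingerprint(audio_bytes)] = (owner, price_cents)


def check_upload(audio_bytes: bytes):
    """Return (allowed_free, owner, price) for a song a user offers to share."""
    match = RIGHTS_DB.get(fingerprint(audio_bytes))
    if match is None:
        return True, None, 0      # not claimed: distribute freely
    owner, price = match
    return False, owner, price    # copyrighted: payment required first


register_copyright(b"fake-song-bytes", "Example Records", 99)
print(check_upload(b"fake-song-bytes"))   # (False, 'Example Records', 99)
print(check_upload(b"unknown-track"))     # (True, None, 0)
```

The key design point from the article is that the check happens at the moment of sharing, so copyright owners set the price and unclaimed tracks pass through untouched.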
Napster Inc. was trying to develop something similar when it ran out of money and filed for bankruptcy protection last year. Roxio bought Napster's brand name, Web address and technology at a bankruptcy auction.
Santa Clara, Calif.-based Roxio plans to offer a new version of Napster by March, but it is unlikely to have any of the file-sharing flavor of the original. Instead, it would be built around the label-backed Pressplay subscription service that Roxio bought from Vivendi Universal's Universal Music Group and Sony Corp.'s Sony Music Entertainment.
Fanning has been acting as a consultant for Roxio while also pursuing his new file-sharing venture independently. Record-company executives say Fanning has been making the rounds of the major labels in recent weeks, demonstrating his technology and urging them to invest in and endorse his system.
If they do, he has told the labels, he would ask Kazaa and other leading peer-to-peer networks to sign on as well.
"It's fantastic, but it only works if Kazaa goes along with it," said one label executive who asked not to be named. He said his label was impressed with Fanning's demonstration and is reviewing the proposal.
Spokesmen for the companies distributing Kazaa, Morpheus and Grokster, three leading file-sharing programs, said they hadn't been contacted by Fanning and weren't familiar with his efforts.
StreamCast Networks Inc., the company that distributes Morpheus, is "willing to look at any business opportunity," Chief Executive Michael Weiss said. "If he's looking for us to distribute his software through the Internet, I'd be willing to see what he had in mind."
Grokster President Wayne Rosso said that given the record companies' history of internal disagreements, Fanning would have "a better chance of getting Yugoslavia back together" than convincing all the labels to back the new venture. Another problem, Rosso said, is that Fanning's plan perpetuates a pay-per-track approach that "really isn't the answer in the long run" because customers want to pay a flat fee for unlimited downloading.
Nevertheless, the label executive said Kazaa might agree to the plan because "it gives them a way to make money." As things stand, Kazaa is an advertiser-supported service offered as a free download by Sharman Networks, and consumers use the software to share music, movies and other digital goods, many of them pirated.
There are a number of strategic hurdles to the plan, including the concern that if Kazaa opts in to the system, its rivals might not. Without a uniform approach among the file-sharing networks, users might simply migrate to networks that don't block unauthorized copying.
Another issue is that it would be up to the labels to claim ownership of each track, and they may claim greater rights than they are entitled to or rights that are subject to dispute. Many songs have multiple rights holders, depending on who wrote the composition and who performed it, and the labels and the artists signed to them have frequent ownership disagreements.
For example, many of the songs on file-sharing networks are recordings of live performances, whose digital distribution rights and royalties might have to be negotiated between labels and artists.
That prospect is daunting to some record company executives, even though the live recordings could prove to be a new source of revenue for labels and artists alike.
http://www.latimes.com/templates/misc/printstory.jsp?slug=la-fi-fanning7jul07&section=/printstor...
"Starting Monday, Professor Lawrence Lessig (whom we all remember from Eldred v. Ashcroft) is going on vacation, and his weblog will be guest-hosted by Democratic presidential candidate Governor Howard Dean. Could this be a sign that a serious contender for President (tied for first for the nomination in the latest polls) has his head screwed on right about copyright law?"
http://cyberlaw.stanford.edu/lessig/blog
http://cyberlaw.stanford.edu/lessig/blog/archives/2003_07.shtml#001348
http://www.deanforamerica.com/
Connect your Apple IIgs to the Internet!
GS/TCP allows you to access the internet via TCP/IP using either SLIP (IP over serial lines) or MacIP (IP over LocalTalk). A PPP interface is currently being completed.
GS/TCP supports a standard BSD socket interface, making it very easy to write your own network applications or port existing ones from widely available sources. Both ftp and inetd programs available for GS/TCP were compiled with ORCA/C directly from the BSDi sources with very few code changes!
GS/TCP supports multiple simultaneous interfaces (just like a real Unix machine) and has support for IP Multicast (beat that, MacTCP!). Standard Unix utilities like ifconfig and netstat are used to configure and monitor interface parameters on the fly.
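The BSD socket calls GS/TCP exposes (socket/bind/listen/accept/connect/send/recv) are the same model modern languages still offer, which is what makes porting so straightforward. As a rough illustration, here is the same pattern as a minimal loopback echo pair in Python:

```python
# Minimal BSD-socket echo pair: the same socket/bind/listen/accept/
# connect/send/recv calls a GS/TCP program would use.
import socket
import threading


def echo_server(sock: socket.socket) -> None:
    conn, _ = sock.accept()          # wait for one client
    with conn:
        conn.sendall(conn.recv(1024))  # echo one message back


server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"hello from userland")
print(client.recv(1024).decode())    # hello from userland
client.close()
server.close()
```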
System Requirements
GS/TCP requires GNO 2.0.6 or higher to run. This means most current GNO users will have to upgrade before they can use GS/TCP. Several critical scheduling bugs have been fixed in GNO since 2.0.5, and BSD socket support has been added which GS/TCP uses to communicate with applications.
For new GNO users, purchasing information can be found here.
For existing GNO users, please contact Procyon directly for 2.0.6 upgrade information.
I personally recommend that your GS have at least 4MB of RAM, but GS/TCP will probably work just fine in 2MB or less (as long as you can get GNO running, you should be OK).
http://www.geeks.org/~taubert/gstcp/
A Conversation with Jim Gray
From Storage
Vol. 1, No. 4 - June 2003
Sit down, turn off your cellphone, and prepare to be fascinated. Clear your schedule, because once you've started reading this interview, you won't be able to put it down until you've finished it.
Who would ever, in this time of the greatest interconnectivity in human history, go back to shipping bytes around via snail mail as a preferred means of data transfer? (Really, just what type of throughput does the USPS offer?) Jim Gray would do it, that's who. And we're not just talking about Zip disks, no sir; we're talking about shipping entire hard drives, or even complete computer systems, packed full of disks.
Gray, head of Microsoft's Bay Area Research Center, sits down with Queue and tells us what type of a voracious appetite for data could require such extreme measures. A recent winner of the ACM Turing Award, Gray is a giant in the world of database and transaction-processing computer systems. Before Microsoft, he worked at a few companies you might know: Digital, Tandem, IBM, and AT&T. He's also a member of the Queue Editorial Advisory Board.
Shooting questions at Gray on such topics as open-source databases and smart disks is David Patterson, who holds the Pardee Chair of Computer Science at the University of California at Berkeley. Patterson headed up the design and implementation of RISC I, which laid the foundations for Sun's SPARC architecture. Along with Randy Katz, Patterson also helped pioneer redundant arrays of independent disks—yes, RAID.
DAVE PATTERSON What is the state of storage today?
JIM GRAY We have an embarrassment of riches in that we're able to store more than we can access. Capacities continue to double each year, while access times are improving at 10 percent per year. So, we have a vastly larger storage pool, with a relatively narrow pipeline into it.
We're not really geared for this. Having lots of RAM helps. We can cache a lot in main memory and reduce secondary storage access. But the fundamental problem is that we are building a larger reservoir with more or less the same diameter pipe coming out of the reservoir. We have a much harder time accessing things inside the reservoir.
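The widening gap Gray describes follows directly from compounding the two growth rates he quotes. A quick sketch over a ten-year horizon (the horizon is our choice, not his):

```python
# Compound the two growth rates quoted above over a decade.
years = 10
capacity_growth = 2.0 ** years   # capacity doubles every year (100%/yr)
access_growth = 1.10 ** years    # access times improve ~10%/yr

print(f"capacity: {capacity_growth:.0f}x in {years} years")  # 1024x
print(f"access:   {access_growth:.1f}x in {years} years")    # 2.6x
```

So over a decade the reservoir grows roughly a thousandfold while the pipe out of it widens by less than a factor of three.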
DP How big were storage systems when you got started?
JG Twenty-megabyte disks were considered giant. I believe that the first time I asked anybody, about 1970, disk storage rented for a dollar per megabyte a month. IBM leased rather than sold storage at the time. Each disk was the size of a washing machine and cost around $20,000.
Much of our energy in those days went into optimizing access. It's difficult for people today to appreciate that, especially when they hold one of these $100 disks in their hand that has 10,000 times more capacity and is 100 times cheaper than the disks of 30 years ago.
DP How did we end up with this wretched excess of capacity versus access?
JG First, people in the laboratory have been improving density. From about 1960 to 1990, the magnetic material density improved at something like 35 percent per year—a little slower than Moore's Law. In fact, there was a lot of discussion that RAM megabyte per dollar would surpass disks because RAM was following Moore's Law and disks were evolving much more slowly.
But starting about 1989, disk densities began to double each year. Rather than going slower than Moore's Law, they grew faster. Moore's Law is something like 60 percent a year, and disk densities improved 100 percent per year.
Today disk-capacity growth continues at this blistering rate, maybe a little slower. But disk access, which is to say, "Move the disk arm to the right cylinder and rotate the disk to the right block," has improved about tenfold. The rotation speed has gone up from 3,000 to 15,000 RPM, and the access times have gone from 50 milliseconds down to 5 milliseconds. That's a factor of 10. Bandwidth has improved about 40-fold, from 1 megabyte per second to 40 megabytes per second. Access times are improving about 7 to 10 percent per year. Meanwhile, densities have been improving at 100 percent per year.
DP I hadn't thought about it the way you explained it. It isn't that the access times have been improving too slowly; it's that the capacity has been improving too quickly. Access is on the old schedule, but the density is astronomical. What problems does that present going forward? How do we design storage systems differently?
JG The first thing to keep in mind is it's not over yet. At the FAST [File and Storage Technologies] conference about a year-and-a-half ago, Mark Kryder of Seagate Research was very apologetic. He said the end is near; we only have a factor of 100 left in density—then the Seagate guys are out of ideas. So this 200-gig disk that you're holding will soon be 20 terabytes, and then the disk guys are out of ideas. The database guys are already out of ideas!
What do you do with a 200-gig disk drive? You treat a lot of it as tape. You use it for snapshots, write-anywhere file systems, log-structured file systems, or you just zone frequent stuff in one area and try to waste the other 190 GB in useful ways. Of course, we could put the Library of Congress holdings on it or 10,000 movies, or waste it in some other way. I am sure that people will find creative ways to use all this capacity, but right now we do not have a clear application in mind. Not many of us know what to do with 1,000 20-terabyte drives—yet, that is what we have to design for in the next five to ten years.
DP You might look back at the old research papers before there was a disk and how you had to manage tape systems to look for ideas for the future.
JG Certainly we have to convert from random disk access to sequential access patterns. Disks will give you 200 accesses per second, so if you read a few kilobytes in each access, you're in the megabyte-per-second realm, and it will take a year to read a 20-terabyte disk.
If you go to sequential access of larger chunks of the disk, you will get 500 times more bandwidth—you can read or write the disk in a day. So programmers have to start thinking of the disk as a sequential device rather than a random access device.
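The gap Gray describes can be checked with a quick back-of-envelope script. The 200 accesses/second and 40 MB/s figures come from the interview; the 8 KB random-read size is an illustrative assumption (with the smaller "few kilobytes" reads he mentions, the random scan stretches toward a full year):

```python
# Back-of-envelope check of the random-vs-sequential gap on a 20 TB disk.
# 200 accesses/sec and 40 MB/s are the interview's figures; the 8 KB
# random-read size is an illustrative assumption.
TB = 10**12

random_iops = 200                 # seeks per second
access_size = 8 * 1024            # assumed bytes per random read
seq_rate = 40 * 10**6             # bytes per second, streaming

random_bw = random_iops * access_size            # random-read bandwidth
t_random_days = 20 * TB / random_bw / 86400      # full scan, random access
t_seq_days = 20 * TB / seq_rate / 86400          # full scan, sequential

print(f"random bandwidth: {random_bw / 1e6:.1f} MB/s")
print(f"random scan of 20 TB: {t_random_days:.0f} days")
print(f"sequential scan of 20 TB: {t_seq_days:.1f} days")
```

Even with generous 8 KB reads, the random scan takes months while the sequential scan finishes in under a week.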
DP So disks are not random access any more?
JG That's one of the things that more or less everybody is gravitating toward. The idea of a log-structured file system is much more attractive. There are many other architectural changes that we'll have to consider in disks with huge capacity and limited bandwidth.
On the one hand, these disks offer many opportunities. You can have a file where all the old versions are saved. The unused part of the disk can be used as tape or as archive. That's already happening with people making snapshots of the disk every night and offering a version of the file system as of yesterday or as of a certain point in time. They can do that by exploiting the disk's huge capacity.
DP Another use of tape has been archival and transport. I know you had some experience trying to use disks as a tape replacement. How has that turned out?
JG We are at a stage now where disk media and tape media have approximate price parity, which is to say it's about $1 a gigabyte per disk and per tape cartridge. So, you can think about writing to disk and then pulling the disk out and treating it as a tape cartridge.
The disk has properties that the tape doesn't have. Disk has higher bandwidth and is more convenient to access. You can just plug in the disk. You don't need a tape drive and you don't need a bunch of software that knows how to read tapes. You're actually mounting a file system. You've got no extra software, no extra concepts. You don't have to find the part of the tape that has your file, and you do not need those funny tape management systems.
I've been working with a bunch of astronomers lately and we need to send around huge databases. I started writing my databases to disk and mailing the disks. At first, I was extremely cautious because everybody said I couldn't do that—that the disks are too fragile. I started out by putting the disks in foam. After mailing about 20 of them, I tried just putting them in bubble wrap in a FedEx envelope. Well, so far so good. I have not had any disk failures of mailed disks.
The biggest problem I have mailing disks is customs. If you mail a disk to Europe or Asia, you have to pay customs, which about doubles the shipping cost and introduces delays.
DP Wouldn't that also be true with tape?
JG It's the same for tape and DVDs, but not for Internet file transfers. No customs duties on FTP—at least not yet.
The nuisance factor of moving the disks around can be a problem. It requires some intelligence to be able to plug a random IDE disk into a computer and get it to work, but the folks I am working with find disks more convenient than tape.
DP Will serial attached disks make it easier?
JG That should be a huge help. And Firewire is a help. Another option is to send whole computers. I've been sending NTFS disks (the Windows file system format), and not every Linux system can read NTFS. So lately I'm sending complete computers. We're now into the 2-terabyte realm, so we can't actually send a single disk; we need to send a bunch of disks. It's convenient to send them packaged inside a metal box that just happens to have a processor in it. I know this sounds crazy—but you get an NFS or CIFS server and most people can just plug the thing into the wall and into the network and then copy the data.
DP That makes me want to get mail from you.
JG Processors are not that expensive, and I count on people sending the computer back to me or on to the next person who wants the data.
DP What's the difference in cost between sending a disk and sending a computer?
JG If I were to send you only one disk, the cost would be double—something like $400 to send you a computer versus $200 to send you a disk. But I am sending bricks holding more than a terabyte of data—and the disks are more than 50 percent of the system cost. Presumably, these bricks circulate and don't get consumed by one use.
DP Do they get mailed back to you?
JG Yes, but, frankly, it takes a while to format the disks, to fill them up, and to send around copies of data. It is easier than tape, however, both for me and for the people who get the data.
DP It's just like sending your friends a really great movie or something.
JG It's a very convenient way of distributing data.
DP Are you sending them a whole PC?
JG Yes, an Athlon with a Gigabit Ethernet interface, a gigabyte of RAM, and seven 300-GB disks—all for about $3,000.
DP It's your capital cost to implement the Jim Gray version of "Netflicks."
JG Right. We built more than 20 of these boxes we call TeraScale SneakerNet boxes. Three of them are in circulation. We have a dozen doing TerraServer work; we have about eight in our lab for video archives, backups, and so on. It's real convenient to have 40 TB of storage to work with if you are a database guy. Remember the old days and the original eight-inch floppy disks? These are just much bigger.
DP "Sneaker net" was when you used your sneakers to transport data?
JG In the old days, sneaker net was the notion that you would pull out floppy disks, run across the room in your sneakers, and plug the floppy into another machine. This is just TeraScale SneakerNet. You write your terabytes onto this thing and ship it out to your pals. Some of our pals are extremely well connected—they are part of Internet 2, Virtual Business Networks (VBNs), and the Next Generation Internet (NGI). Even so, it takes them a long time to copy a gigabyte. Copy a terabyte? It takes them a very, very long time across the networks they have.
DP When they get a whole computer, don't they still have to copy?
JG Yes, but it runs around their fast LAN at gigabit speeds as opposed to the slower Internet. The Internet plans to be running at gigabit speeds, but if you experiment with your desktop now, I think you'll find that it runs at a megabyte a second or less.
DP Megabyte a second? We get almost 10 megabytes sustained here.
JG That translates to 40 gigabytes per hour and a terabyte per day. I tend to write a terabyte in about 8 to 10 hours locally. I can send it via UPS anywhere in the U.S. That turns out to be about seven megabytes per second.
DP How do you get to the 7-megabytes-per-second figure?
JG UPS takes 24 hours, and 9 hours at each end to do the copy.
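Using the hours Gray quotes, the effective bandwidth of a mailed terabyte works out as follows (a sketch, nothing more):

```python
# Effective bandwidth of mailing 1 TB: ~9 hours to copy on at the source,
# ~24 hours in UPS transit, ~9 hours to copy off at the destination.
TB = 10**12
total_seconds = (9 + 24 + 9) * 3600
effective_mb_per_s = TB / total_seconds / 1e6
print(f"about {effective_mb_per_s:.1f} MB/s end to end")
```

That lands in the neighborhood of seven megabytes per second, as Gray says.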
DP Wouldn't it be a lot less hassle to use the Internet?
JG It's cheaper to send the machine. The phone bill, at the rate Microsoft pays, is about $1 per gigabyte sent and about $1 per gigabyte received—about $2,000 per terabyte. It's the same hassle for me whether I send it via the Internet or an overnight package with a computer. I have to copy the files to a server in any case. The extra step is putting the SneakerNet in a cardboard box and slapping a UPS label on it. I have gotten fairly good at that.
Tape media is about $3,000 a terabyte. This media, in packaged SneakerNet form, is about $1,500 a terabyte.
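The cost comparison in the last two exchanges reduces to a few lines of arithmetic, using only the prices quoted above:

```python
# Per-terabyte cost of moving data, using the prices quoted in the interview.
network_per_tb = (1 + 1) * 1000   # $1/GB to send plus $1/GB to receive
tape_media_per_tb = 3000          # tape media
sneakernet_per_tb = 1500          # packaged SneakerNet media
print(network_per_tb, tape_media_per_tb, sneakernet_per_tb)
```

The packaged SneakerNet comes out cheapest per terabyte, even before counting the tape drives and tape-management software that tape also requires.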
DP What about the software compatibility issues of sending a whole computer?
JG By sending a whole computer, the people just need CIFS or NFS. They can see the file system and pull files if they want. It completely nullifies a bunch of compatibility issues. Think of how easy it is to bring a wireless laptop into your network. This is the same story—except the system is wired.
Of course, this is the ultimate virus.
In the old days, when people brought floppy disks around, that was the standard way of communicating viruses among computers. Now, here I am mailing a complete computer into your data center. My virus can do wonderful things. This computer is now inside your firewall, on your network.
There are some challenges about how to secure this. The simple strategy is to say, "Look, if you were FTP'ing from this computer, it would be outside your firewall. So I'll mail it to you, and then you plant it outside your firewall, but LAN-connected, so that you're not paying the phone bill."
DP Run it through a firewall?
JG That's one strategy. The other strategy is to say, "I trust Dave. Dave sent me this computer."
DP What are storage costs today?
JG We have been talking about packaged storage so far. Packaged storage can be bought for $2,000 to $50,000 per terabyte, depending on where you shop. When it is "RAIDed," the price approximately doubles. If you go for Fibre Channel rather than direct-attached, then the fabric can add 30 percent to that. The simple answer is $3,000 per terabyte to $100,000 per terabyte, or $1,000 per year to $30,000 per year annualized. But the real cost of storage is management. Folks on Wall Street tell me that they spend $300,000 per terabyte per year administering their storage. They have more than one data administrator per terabyte. Other shops report one admin per 10 TB, and Google and the Internet Archive seem to be operating at one per 100 TB. The cost of backup/restore, archive, reorganize, growth, and capacity management seems to dwarf the cost of the iron. This stands as a real challenge to the software folks. If it is business as usual, then a petabyte store needs 1,000 storage admins. Our chore is to figure out how to waste storage space to save administration. That includes things like RAID, disk snapshots, disk-to-disk backup, and much simpler administration tools.
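Gray's "1,000 storage admins per petabyte" remark follows directly from the admin ratios he cites; a tiny script makes the scaling explicit:

```python
# Admins needed for a petabyte at the administration ratios Gray cites.
PB_IN_TB = 1000
ratios_tb_per_admin = {"Wall Street": 1, "typical shop": 10,
                       "Google / Internet Archive": 100}
admins = {who: PB_IN_TB // tb for who, tb in ratios_tb_per_admin.items()}
for who, n in admins.items():
    print(f"{who}: {n} admins per petabyte")
```

At Wall Street ratios a petabyte store really does imply a thousand administrators, which is why management cost, not hardware cost, is the challenge.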
DP What is the tension between direct-attached disk, storage area network (SAN), and network-attached storage (NAS)?
JG Direct-attached disk is simple and cheap, but many argue that storage should be consolidated in storage servers so that it is easier to manage and can be shared—consolidation and virtualization. This creates a dance-hall design where CPUs are on one side of the picture and data is on the other side. The VAXcluster worked that way, the IBM Sysplex works that way, and now many large Unix data centers are moving in that direction. When you look inside the storage servers, they are really servers with an operating system and a network fabric. This all seems really weird to me. These high-end storage servers are exporting a very low-level get-block put-block protocol. It creates a lot of traffic. File-oriented servers such as NetApp have a higher level of abstraction and so should have much better performance and virtualization. But until now the relatively poor performance of TCP/IP has saddled NetApp with lower performance than a Fibre Channel SAN solution. Gigabit Ethernet, TCP offload engines (TOEs), and the NetApp direct-access storage protocol taken together give performance comparable to any Fibre Channel solution. These same technologies enable high-performance iSCSI. Once we have Gigabit Ethernet and TOE, I think migration to a NAS solution will be much preferable to the get-block put-block interface of iSCSI.
DP Let's talk about the higher-level storage layers. How did you get started in databases? You were at IBM when the late Ted Codd formulated the relational database model. What was that like?
JG Computers were precious in those days. They were million-dollar items. Ease of use was not the goal. The mind-set was that labor was cheap and computers were expensive. People wrote in assembly language, and they were very, very concerned about performance.
Then along came Ted Codd, saying, "You know, it would be a lot easier to program in set theory." He observed that in the old days people didn't write Fortran, they wrote data-flows for unit-record equipment that processed sets of cards. Each device did a simple task. One would sort the set of cards or it would copy a set of cards, and so on. You could set up a plug board that would copy the cards or would throw away all the cards that didn't have a certain property or would duplicate all the cards that had a certain property. Programming consisted of configuring the machines and their plug boards and then running decks of cards through.
To some extent you can think of Codd's relational algebra as an algebra of punched cards. Every card is a record. Every machine is an operator. The operators are closed under composition: decks-of-cards in and decks-of-cards out. It was a chore to program those systems, but people programmed at a fairly high level of abstraction.
The cool thing was that once the operators were configured, it was easy to do just about anything. The architecture has both partition and pipeline parallelism, and it is very simple to conceptualize and debug since everything is in "cards."
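The closure property Gray describes ("decks-of-cards in and decks-of-cards out") can be sketched in a few lines. This is a toy illustration, not Codd's formal algebra; the operator names and records are invented:

```python
# Toy sketch of the "algebra of punched cards": every operator takes a
# deck of records and returns a deck, so operators compose freely.

def select(deck, pred):            # keep cards matching a property
    return [card for card in deck if pred(card)]

def project(deck, fields):         # copy only some columns
    return [{f: card[f] for f in fields} for card in deck]

def sort(deck, key):               # sort the deck on one column
    return sorted(deck, key=lambda card: card[key])

deck = [
    {"acct": 2, "name": "Ada",  "balance": 500},
    {"acct": 1, "name": "Codd", "balance": 900},
    {"acct": 3, "name": "Gray", "balance": 100},
]

# Closure under composition: the output of one operator is valid
# input to the next, just like decks of cards between machines.
result = sort(project(select(deck, lambda c: c["balance"] >= 400),
                      ["acct", "name"]),
              "acct")
print(result)
```

Each function plays the role of one unit-record machine; the composed pipeline is the plug-board program.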
It was a form of data-flow programming. Ted had experience in all these areas. He said, "We should figure out a way of doing things like that in this more modern world where, in fact, everything is on disk and accessible and the files are much, much bigger."
The reaction at the time was that it's going to be inefficient, and it's going to use more instructions and more disk I/Os than IBM's Information Management System (IMS), the first commercial hierarchical, structured database management system. In addition, the industry was moving from batch-transaction processing that dealt with sets of records to online transaction processing that dealt with individual records ("give me that bank account").
Ted's ideas were extremely controversial. The core issues were about how computers would be used in the future and what was important to optimize. This was not a PC world. This was a mainframe world of business data processing.
DP The controversy was whether anybody would pay for such inefficient use of the computer?
JG Right, and whether we were headed toward a world of online transaction processing where nobody actually wanted to go through all the records. Everything was going to be online, everything was going to be incremental, and there wouldn't be this need for batch reporting.
DP You wouldn't need to go back and look at the old data to have a summary of the current data instantly available?
JG The online advocates assumed that we would keep running totals, and they minimized the fact that the manager might want to balance the books. In fact, despite the advance of online transaction-processing systems, the batch-processing systems continue to be there and continue to evolve, and they have to deal with larger and larger data sets.
Paradoxically, the relational model ended up being really good for reporting in the decision-support part of batch processing.
Nonetheless, the relational guys took the challenge and said, "If we're going to be successful, we're going to have to perform as well as the IMS on the bread-and-butter transactions." A great deal of energy went into this, and I think it is fair to say that the relational implementations did okay. Today, all the best TPC-C [Transaction Processing Performance Council Online Transaction Processing Benchmark] results you see are with relational systems. The IMSes of the world are not reporting TPC-C results because, frankly, their price performance isn't very good.
It was this evolution. The database guys had to get their price-performance story together first. At a certain point, most people who bought the relational stuff bought it for the usability, not the price performance. They were getting new applications, and they wanted to get their applications up quickly.
You see this today. Two groups start; one group uses an easy-to-use system, and another uses a not-so-easy-to-use system. The first group gets done first, and the competition is over. The winners move forward and the other guys go home.
That situation is now happening in the Web services space. People who have better tools win.
DP What do you think is happening with databases in terms of open source? What is the Linux of databases?
JG I think it's exciting. Very small teams built the early database systems. A small team at Oracle built the original Oracle, and there were small teams at Informix, Ingres, Sybase, and IBM.
Twenty-five people can do a pretty full-blown system, and ship it, and support it, and get manuals written, and test it. The Postgres and MySQL teams are on that scale and likely represent the leading open-source DBMSes out there. Maybe the teams are getting larger at this point. A few years ago these DBMSes lacked transactions, optimization, replication, and lots of other cool features, but they are adding those features now.
The lack of a common code base is one of the things that has held back the database community and has been a huge advantage for the operating systems community. The academic world has had a common operating system that everybody can talk about and experiment with.
It has the downside of creating a mob culture. But the positive side is everybody has a common language and a common set of problems they are working on. Having MySQL as a common research vehicle is going to accelerate progress in database research. People can do experiments and compare one optimizer against another optimizer. Right now, it is very difficult for one research group to benefit from the work of others.
The flip side is this: What does this mean for the database industry as a whole? What does this mean for Oracle and Microsoft and DB2 and whoever else wants to make a database system?
So far, MySQL is very primitive and very simple. It will add features, and the real question is, can it evolve to be competitive with Oracle, Microsoft, and DB2?
Those companies spend a huge amount of energy on quality control, support, documentation, and a bunch of things that are thinner in the open-source community. But it could be that MySQL.com will step forward and displace the incumbent database vendors.
The challenge is similar to the challenge we see in the OS space. My buddies are being killed by supporting all the Linux variants. It is hard to build a product on top of Linux because every other user compiles his own kernel and there are many different species. The main hope for Oracle, DB2, and SQL Server is that the open-source community will continue to fragment. Human nature being what it is, I think Oracle is safe.
DP Is MySQL.com trying to be the Red Hat of MySQL?
JG It could be that they will step forward and provide all of those things that IBM, Microsoft, and Oracle provided, and do it for a much lower price. I think the incumbent vendors will have to be innovative to make their products more attractive.
One thing that works in the incumbents' favor is fear, uncertainty, and doubt (FUD). If you base your company on a database, you are risking a lot. You want to buy the best one. People are usually pretty cautious about where they want to put their data. They want to know that it's going to have a disaster recovery plan, replication, good code quality, and in particular, lots and lots and lots of testing.
The thing that slows Oracle, IBM, and Microsoft down is the testing, and making sure they don't break anything—supporting the legacy. I don't know if the MySQL community has the same focus on that.
At some point, somebody will say, "I'm running my company on MySQL." Indeed, I wish I could hear Scott McNealy [CEO of Sun Microsystems] tell that to Larry Ellison [CEO of Oracle].
DP The whole corporation?
JG Right. Larry Ellison announced that Oracle is now running entirely on Linux. But he didn't say, "Incidentally we're going to run all of Oracle on MySQL on Linux." If you just connected the dots, that would be the next sentence in the paragraph. But he didn't say that, so I believe that Larry actually thinks Oracle will have a lot more value than MySQL has. I do not understand why he thinks the Linux problems are fixable and the MySQL problems are not.
DP To change the subject, I think one of the reasons you received the Turing Award was for your contributions in transactions, or for making the databases work better. Is that a fair characterization?
JG It's hard to know. There is this really elegant theory about transactions, having to do with concurrency and recovery. A lot of people had implemented these ideas implicitly. I wrote a mathematical theory around it and explained it and did a fairly crisp implementation of the ideas. The Turing Award committee likes crisp results. The embarrassing thing is that I did it with a whole bunch of other people. But I wrote the papers, and my name was often first on the list of authors. So, I got the credit.
But to return to your question, the fundamental premise of transactions is that we needed exception handling in distributed computations. Transactions are the computer equivalent of contract law. If anything goes wrong, we'll just blow away the whole computation. Absent some better model, that's a real simple model that everybody can understand. There's actually a mathematical theory lying underneath it.
In addition, there is a great deal of logic about keeping a log of changes so that you can undo things, and keeping a list of updates, called locks, so that others do not see your uncommitted changes. The theory says: "If that's what you want to do, this is what you have to do to get it."
The algorithms are simple enough so most implementers can understand them, and they are complicated enough so most people who can't understand them will want somebody else to figure it out for them. It has this nice property of being both elegant and relevant.
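The undo-log idea Gray describes can be sketched in miniature. This toy omits locking, durability, and concurrency entirely; the class and keys are invented for illustration:

```python
# Toy sketch of an undo log: record each old value before overwriting it,
# so an aborted transaction can "blow away the whole computation."
class TinyTxn:
    def __init__(self, store):
        self.store = store
        self.undo_log = []         # (key, old_value) pairs, newest last

    def write(self, key, value):
        self.undo_log.append((key, self.store.get(key)))
        self.store[key] = value

    def commit(self):
        self.undo_log.clear()      # changes become permanent

    def abort(self):
        # undo in reverse order, restoring every old value
        for key, old in reversed(self.undo_log):
            if old is None:
                self.store.pop(key, None)
            else:
                self.store[key] = old
        self.undo_log.clear()

store = {"balance": 100}
txn = TinyTxn(store)
txn.write("balance", 50)
txn.abort()
print(store)
```

After the abort, the store is back to its original state, which is exactly the contract-law behavior Gray describes.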
DP You worked at Tandem Computers for a decade. I think what struck me is that since reliability is so important, people assumed that Tandem would rule the universe. But it seems that transactions made databases run well enough on standard hardware that customers could safely get their work done on stuff that wasn't as reliable as Tandem. Do you have a take on that?
JG Yes, others stole Tandem's thunder. Tandem had fault tolerance, transaction processing, and scale-out. Each of these is a really good thing, and Tandem implemented each one very well.
Tandem didn't have many patents, and this was an era where there wasn't a tradition of software patents. Therefore, Tandem didn't have much protection of its ideas. The two things Tandem did really well, and that were really novel, were process pairs and scale-out—being able to combine as many computers as you want and spreading the operating system out among the nodes. The company went much further than anybody else had gone in that space, and even to this day, I think Teradata is the only other good example of this. It had such a transparent scale-out.
DP What do you see as the big storage opportunity in the next decade?
JG One thing is a no-brainer. Disks will replace tapes, and disks will have infinite capacity. Period. This will dramatically change the way we architect our file systems. There are many more questions opened by this than resolved. Will we start using an empty part of the disk for our tape storage, our archive storage, or versions? Just exactly how does that work? And how do I get things back?
I don't think there is much controversy about that, especially if you set the time limit far enough out: I would say three years; others would say 10 years.
Something that I'm convinced of is that the processors are going to migrate to where the transducers are. Thus, every display will be intelligent; every NIC will be intelligent; and, of course, every disk will be intelligent. I got the "smart disk" religion from you, Dave. You argued that each disk will become intelligent. Today each disk has a 200-megahertz processor and a few megabytes of RAM storage. That's enough to boot most operating systems. Soon they will have an IP interface and will be running Web servers and databases and file systems. Gradually, all the processors will migrate to the transducers: displays, network interfaces, cameras, disks, and other devices. This will happen over the next decade. It is a radically different architecture.
What I mean by that is it's going to have a gigahertz or better processor in it. And it will have a lot of RAM. And they will be able to run almost any piece of software that you can think of today. It could run Oracle or Exchange or any other app you can think of.
In that world, all the stuff about interfaces of SCSI and IDE and so on disappears. It's IP. The interface is probably Simple Object Access Protocol (SOAP) or some derivative of SOAP; you send requests to it and get back responses in a pretty high-level protocol. The IP stack does security and naming and discovery. So each of these "disks" will be an IP version 6 (IPv6) node—or IPv9.
I know you're a great fan of intelligent disks, Dave. I remember it clearly. You held up a disk and said, "See these integrated circuits? Imagine what's going to happen to them in 10 years."
It was obvious what was going to happen in 10 years: Those circuits were going to turn into a supercomputer.
The people at Seagate will tell you they worry about pennies. They don't put anything into the disk drive that isn't required. The whole notion of putting a processor and an operating system and a network interface into a disk drive, unless it's absolutely required, is just considered crazy.
On the other hand, if you step back, it is just a matter of time. They will do it. So there's a disconnect. When I talk to Dave Anderson [director of strategic planning at Seagate], he always says, "Well, maybe in five years, but not right now." He has been saying that for five years, and so far he has been right. But I think it is truer now than it was five years ago. It is just a matter of time.
DP I can't help but think this is one of those things that is resolved by capitalism. Technologically, it seems clear. Somebody just needs to figure out a market benefit and show the disk industry how to make money doing it.
JG You're absolutely right. The two things that are going to be real shifts in storage are tertiary storage going online so there is no distinction; and intelligent storage, so that we raise the level above SCSI. It will probably start with file servers and evolve to app servers. How long will that be? Five years, maybe ten.
One final thing that is even more speculative is what my co-workers at Microsoft are doing. They are replacing the file system with an object store, and using schematized storage to organize information. Gordon Bell calls that project MyLifeBits. It is speculative—a shot at implementing Vannevar Bush's memex [http://www.theatlantic.com/unbound/flashbks/computer/bushf.htm]. If they pull it off, it will be a revolution in the way we use storage.
In Memoriam
Since this interview, Edgar "Ted" Codd, inventor of the relational data model, died of heart failure. He was much loved by his colleagues both for his warmth and for his many contributions to computer science.
http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=43
for example:
Today -
The increasing federal interest in Coca-Cola's practices comes while U.S. corporations are under pressure from regulators and investors to be more forthcoming about their operations and their finances.
http://money.cnn.com/2003/07/11/news/companies/coke_probe/index.htm?cnn=yes
"told?"
Integrity is when it is verified.
Trust, but verify.
stingray...
I, too, believe in the handshake and look in the eye. I believe it more than most.
I'm not comparing STEH to the failures. I'm suggesting to be wary and cautious because trust can be a fleeting thing.
I'm not doubting your perspectives. They are your perspectives and truths for you.
I want to know what the goal(s) of the company are. That was partially answered at the SHM in Phoenix. What was not answered was how to get there. Until the float and/or outstanding share issues are resolved we will be an open target for bashers.
BTW, you have a CEO who seems to know the markets. That should tell you quite a bit.
Tex...
I'm not sure why you feel a need to make such a stink.
You are wrong. I do have a reader for my mac.
Do you see an "Apple" here? I sure do.
http://www.linuxnet.com/
correction...
I could have opened the picture in a new window, which would have given me the properties. Should have done that.
Bull,
I'm running a Mac and my right click doesn't give me properties. I have to do a view source and I didn't have the time to sift through. Usually the mac is better for web-work, but on the properties it is a bit weaker than the pc (for the moment anyway).
Thanks.
jarvis...
Let me guess, trust him because....
...you like him?
In case someone failed to bring you up to speed, Enron happened while you were away. It didn't just end there either. Tyco got involved. GlobalCrossing too. Actually the list is quite long.
There is a new law too: Sarbanes-Oxley. This requires the CEO to put his/her signature on all filings.
Seriously, why should I trust? The technology is good or BMG wouldn't be buying it. How do I know the CEO hasn't been getting paid in millions of authorized shares and selling them? Do you think that's a fair question in reply to your response?
Oh. You're just referencing from another site and not actually inserting.
Thanks.
Does anybody else have a problem with the...
Apple email client breaking links? I didn't have that problem with Entourage, but this is one of a couple of things that bug me about the native app.
BTW, how do you guys get the pictures/animations into a post? eom
Apple related...
[bolds are mine]
All-In-One Security And Convenience
Michael Bobelian, 07.11.03, 10:00 AM ET
NEW YORK - Are you tired of remembering all of your computer and online passwords? There's one to log on to your computer, another one for your e-mail at work, one more for your AOL, Yahoo! or MSN account and potentially another for your ISP at home. If you bank or trade shares online, then you have to remember those passwords as well.
Fellowes, a privately held Itasca, Ill.-based company with 1,700 employees and $600 million in annual sales, will soon release a mouse that can remember all of them and use just one "password": your fingerprint. The company, which began life in 1917 as a maker of bankers' products, now manufactures computer and electronic accessories like carrying cases, cables, media storage and surge protectors as well as add-ons for PDAs and cellular phones.
Its Secure Touch Optical Mouse allows users to replace their passwords with a fingerprint reader. When you arrive at the logon screen, just place your fingerprint on the sensor built on to the mouse and you're in. The same goes for anything on the Web that requires a password.
The mouse can recognize multiple users so you can share it with family members or co-workers. And it actually works.
We registered a user on the device and then put it to work. When someone other than the registered user tried to log on, the Fellowes software rejected them. When we disconnected the Secure Touch mouse and tried to sign on, it also balked. And the software refused to be fooled when we connected a different mouse to our computer.
The Secure Touch mouse will retail at about $99 and should be available online from the company's Web site or through its online partners Amazon.com (nasdaq: AMZN - news - people ) and Best Buy (nyse: BBY - news - people ) in mid-July. Office Depot (nyse: ODP - news - people ) and Staples (nasdaq: SPLS - news - people ) will also stock it in their stores.
One small catch: The mouse now works only on Windows-based PCs, and Fellowes has no plans at this time to release a version for Apple (nasdaq: AAPL - news - people ).
http://www.forbes.com/2003/07/11/cx_mb_0711tentech.html?partner=yahoo&referrer=
Naw, one can still learn from it.
correction...
A couple of times I used "their" instead of "there." Please forgive my typing.
stingray, let me be clear first...
...before you become angered at my insistence, let me explain.
First, before I take a large position and start talking about this company and its technology, I need to have some fundamental questions answered. There are a few SunnComm investors who know me and know that this is me doing my DD.
Since I haven't gotten a response from the company, I have reason to press harder. I see the Prez answered one of my questions (sort of, anyway), so it's a start.
I know it is a pink sheet company and not doing filings. That is just fine for some, but not me. My money may mean nothing to the investors or to the company (since both parties may have it coming out of their ears); however, for me to invest requires some basic questions answered. If they can't, or won't, answer them, then I will decide what level of investment to make. You can be sure it will be substantially less than with a straight answer, even if the answer is not good. I value honesty above all else.
Now for your question.
I think any float over 100 million is "too many." On top of that, you have the added problem of a huge number of authorized shares.
I ask you, how do you KNOW that the Prez hasn't been selling millions and millions of shares? Authorizing shares is tricky business.
Yes, they just expanded the team. This is good. Perhaps they are using this authorization for compensation. If so, that is the price of doing business.
But I want to KNOW that this is what is taking place. I don't KNOW this is not taking place.
Geez, we just came out of Enron, ACLN, GlobalCrossing, Martha Stew., etc. etc. and one expects me to blindly trust someone I've never looked in the eye and shaken his/her hand?
I just want to know what the plan is for addressing the float. I'm not even worried about its size so long as there is a plan for addressing it. Let me be very clear about this:
They will not be able to go onto one of the big boards with a float above 150-200mil. What is the plan to get the float under control?
Another contract might boost the share price. Yes, this is true. However, staying power comes from decreased SUPPLY and increased DEMAND. When your supply meets the demand, you do not rise. You do not retain. To get on the boards they need to not only get the price up, but control the supply to ensure there is demand. The key is the demand.
We can run around here (or anywhere) claiming that all shares will be gobbled up. However, that is not the precedent. I've looked at a number of pinkies that tried to move to the big boards, and I've not yet found a company with a large float make it. If you are aware of one, by all means enlighten me. Give me a life preserver to hang on to before taking a plunge in a non-reporting company.
I hope this explains my persistence in asking.
From DRM Watch...
July 1, 2003: A Brazilian man won a lawsuit that he filed against EMI Recorded Music, Sony, and a Brazilian production company after he purchased a copy-protected CD that he could not play in his car audio system. The judge in the case awarded damages of 1000 Brazilian Real (US $344) and ordered that the CD be replaced with a non-copy-protected one. EMI is appealing. Sony provided the copy protection technology.
Although the major record companies tend to believe that most consumers will passively accept copy-protected CDs, this is an example of the kind of backlash that they can expect when they begin to distribute them in quantity in the U.S. market - as BMG has stated its intent to do later this year with SunnComm technology. In addition to incompatibility problems that will inevitably result from the sheer impossibility of testing the copy-protected CDs on every possible playback device, imposing copy protection on a format that has done without it for a decade will stir up the simmering legal brouhaha over consumers' rights to private copying. This is an issue that will not stay in the background for long.
chwdrhed,
Wiser words have not been spoken (regarding stocks that is).
On the issue of reverse split...
This requires more exploration.
The first thing I'm interested in is this: if there is manipulation of the stock, then perhaps the price we are seeing now (above a dime) is artificial?
I could be wrong about this, and a few know that I really want to be wrong about this, but the exchanges will not let this float on their pasture. Can someone ask the Prez (I tried) if they are planning to buy back a large number of shares, if and when the next contract happens? If I see a "yes" for the answer I might deepen my position.
stingray,
So what is management's plan for dealing with the float?
SERLET BERTRAND, SEC filing...
http://www.secinfo.com/d141Nx.2Dd4.htm
Anybody ever used Zinio? eom
I'm not a basher.
A reverse split is not necessarily a bad thing.
Let me ask you some questions:
1. Do you want to get off the pinkies?
2. How do you propose to do it (#1 above) with this float?
3. Do you think management will be buying stock in order to accomplish it (#2)?
Why do I think a reverse split is not necessarily a bad thing?
A reverse split was authorized a few months ago on this firm:
http://finance.yahoo.com/q?s=prsf&d=c
It doesn't look like it will be needed.
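For anyone unclear on the mechanics: a reverse split divides the share count and multiplies the per-share price by the same ratio, so the split itself leaves market cap unchanged (any price move afterward is the market's doing). A minimal Python sketch, using a purely hypothetical 1-for-20 ratio and made-up numbers, not SunnComm's actual float:

```python
# Reverse-split arithmetic: shares shrink, price rises proportionally,
# and market cap is unchanged by the split itself.

def reverse_split(shares, price, ratio):
    """Return (new_shares, new_price) after a 1-for-`ratio` reverse split."""
    return shares / ratio, price * ratio

# Hypothetical figures for illustration only.
shares, price = 200_000_000, 0.10          # 200M shares at $0.10
new_shares, new_price = reverse_split(shares, price, 20)

print(new_shares)   # → 10000000.0  (10M shares)
print(new_price)    # → 2.0         ($2.00 per share)
```

The point for the listing question: a reverse split can get the price over the $1.00 bid requirement overnight, but it does nothing by itself to create demand for the remaining shares.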
BTW, I hold quite a few shares. Of course it is not a large percentage because I would need a few million to put a dent in the float.
Ruellit,
Thanks for the welcome.
I'm going to try and get one of my business partners to post here. He is a great researcher and better at it than I am.
Did anyone else see this release today...
[bolds are mine]
[This company has a two fold business model. They are patenting SNPs and providing the discovery platforms.]
Sequenom Unveils Development Pact With Bristol-Myers
Thursday July 10, 11:51 am ET
SAN DIEGO -- Sequenom Inc. (Nasdaq: SQNM) agreed to collaborate with Bristol-Myers Squibb Co. (NYSE: BMY) to develop diagnostic and therapeutic products.
Bristol-Myers, based in New York, will fund the collaboration, in which Sequenom will use its MassArray technology and collection of DNA samples to analyze disease-related genes and genetic variations.
Sequenom said Thursday that it will receive milestone payments and royalties on any products the collaboration develops. Financial terms of the deal aren't being disclosed, a Sequenom spokesman said.
The company plans to update its guidance on July 29. In January, Sequenom said it was targeting 2003 revenue of $46 million, up from 2002 revenue of $30.9 million.
A Bristol-Myers spokesman confirmed his company is using Sequenom's technology to screen a large set of its early-stage drug targets for their role in treating several diseases.
The pharmaceutical giant is working to recover from accounting problems, flat revenue of $18.1 billion in 2002, and the expiration of U.S. patents on two of its biggest products, the cancer drug Taxol and the diabetes drug Glucophage. In June, Bristol-Myers received Food and Drug Administration approval of its AIDS drug, Reyataz.
-Nora Devine, Dow Jones Newswires; 201-938-5400
http://biz.yahoo.com/djus/030710/1151000925_2.html
byron, re: reverse split...
So, this is the gut check for leadership.
Was the BMG win a lucky, one-shot wonder? If so, no reverse split. If not, and you can reload and fire for effect, then a reverse split is very much in order.
There are companies that can hold it. Some can even make it on the announcement. Look at PRSF. They announced a reverse split, got a huge announcement with MSFT, and are now trading very solidly, and up nicely.
This is all about leadership.
bleuduece, re: listing...
Don't get me wrong, listing on the big exchanges would be nice. However, the float is KILLING US! Until that is resolved, SunnComm ain't going anywhere.
When they make the listing application they must demonstrate a market for the shares. There is no market for a float this size. Without some resolution of that, it would not receive the nod for listing, because it would quickly be delisted.
If the float were a third of its current size, it would still be tough to hold $1.00. Of course, cutting the float to a third would bring in new investors who are now just going to wait and see how management handles this newfound fame.
The fame will be very short-lived. How they respond is the essence of leadership.
byron, the problem is...
Management has backed this into a corner. There are so many outstanding shares, with so many more that can still be issued, that this stock will never get off the ground, let alone off the pinkies.
Either a reverse or a buy-back. A buy-back would be them putting their money where their mouths are. I suspect you won't see that. I would like to be wrong, but so much can be hidden behind the veil.
They have the technology. Now they've got to prove it is viable. I recommend they start a buy-back immediately.
byron, re: my take...
Clearly it would hose management IF they have any intentions of staying on for the long haul. I suspect they are interested in bringing this to a conclusion for themselves and moving on. Good for them, probably not good for us.
re: listing...
That won't be happening for a long time.
For a Nasdaq listing they need $1/share for an extended period of time, plus assets. The float kills it.
Either a reverse split or buy back.
I bet this management has already eyed the door. They will not do a buy back. I would be surprised if they don't opt for a reverse.