emit,nice post-thanks,culater/
OT-Mark my words about talking machines
Josh Freed
The Gazette (Montreal)
Saturday, May 25, 2002
I was walking down the street with a friend recently when he suddenly began to shout - but not at me. He was hollering into his new phone, which has something called a "personal message recorder."
"Remember to CALL THE BANK!" he barked. "Also GET MILK and TOOTHPASTE."
Then he snapped the machine shut and started chatting with me again, as if it were the most normal thing in the world. He was obviously suffering from an episode of "Talking Machine Syndrome," the latest social disease that's sweeping society.
The spread of the cell phone already makes it seem like half the world has Tourette's syndrome, a condition where you involuntarily shout obscenities in public. Only instead of swearing, people with cell phones are shouting stuff like:
"WHAT!? You'll have to speak louder - I can't hear you! ...OK, now I hear you. ... Can you hear me? No? ... IS THIS BETTER!!? ... GREAT! BUT HANG ON - I'M IN A RESTAURANT AND THE WAITER'S COMING OVER TO TAKE MY ORDER!!!"
As if this weren't bad enough, we're being told that all kinds of new "personal voice-recognition" machines are on the way to add to our second-hand speech pollution problems. There will be voice-activated coffee-makers that understand us when we shout commands like: "Make me a double half-fat, low-cal, lead-free latte!"
There will be voice-activated systems that start your car engine, lock your car doors and nag you to fasten your seat belt. I've already seen ads for something called Sony "Voice Drive" that say: "Stop fumbling for the (car) radio buttons. Simply tell the system what station you want and let 'Voice Drive' do the rest."
Pretty soon Voice Drive will be doing the driving, too, leaving you free to fumble with the radio and talk on the cell phone - or at least call the insurance company when Voice Drive has its first accident.
Voice-recognition machines are already answering some company "phone messaging" systems, where you get to talk to a pushy machine instead of pushing buttons. My wife recently called the Quebec government "road report" and found herself talking to a machine that asked her which region she wanted the highway conditions for.
"Sherbrooke," she said.
"Did - you - say ... Shawinigan?" asked the machine, in robotspeak.
"No, I want ... Sher-brooke," she repeated, enunciating more carefully.
"Did - you - say ... Chicoutimi?" it asked. "Or did - you - mean ... Rawdon?"
After several more tries, the machine informed her that certain people's voices "cannot be understood" by voice-recognition machines because of "pronunciation difficulties." However, my wife speaks impeccable English - and the machine obviously doesn't.
Another problem with voice machines is that most of them have a mechanical sound, like those talking train schedules you get when you phone VIA Rail: "Next - train - to - To-ron-to ... Four-fif-teen. De-parting ... Mon-tre-al ... four-forty-five."
Can we get some personality, please? If machines are going to replace people, the least they can do is be cheerful, or polite, or angry, resentful and depressed like regular employees. Frankly, I'm ready for "Quirky BankMachine," the automatic bank teller with a real personality:
- Hi Quirky. Can I have $300 from my account please?
- No - way - Jose! Your - account - is empty.
- WHAT? That's impossible! I just deposited a huge cheque last week!
- OK, OK, stop shouting! Your balance is $21,356. I was just making a little joke.
As machines get smarter, they will learn to respond to more and more sophisticated commands from people, such as: "Toast my baguette lightly but don't burn the edges ... then tape West Wing - but only if it's a good episode. Otherwise, tape CBC or one of my other favourite channels ... depending what's on.
"Then vacuum, do the household accounting and wake me for supper."
Pretty soon, your machines will be as fed up with you as your spouse. And then, how long before they start talking back? You'll tell your car to turn left and it will say:
"Stop nagging me! Besides, that's illegal! Can't you see the sign? And watch out! We're speeding again!"
It's not that I'm worried about machines taking over the planet or any of that stuff you read about in science fiction. But there is a danger that we will give these yakky machines too much of a voice in things.
For instance, what if you have laryngitis or a sore throat, and your voice-recognition system doesn't recognize you? What if it refuses to let you activate your car or your fridge - or worse?
"No! I cannot obey that command. I do not recognize the voice pattern. ... Access to bathroom is denied.
Repeat: ACCESS DENIED!!"
And what if you talk in your sleep? You'll get up in the middle of the night to find all the lights flicking on and off, the toaster burning bread and the washing machine overflowing, while your phone is chatting long distance with another phone in Japan whose owner is also talking in his sleep.
Like that Mickey Mouse cartoon about the Sorcerer's Apprentice who mistakenly unleashes 1,000 magic brooms, your household will auto-destruct. And then you'll finally have a good reason to shout at machines.
- Josh Freed's E-mail address is josh_freed@hotmail.
© Copyright 2002 The Gazette (Montreal)
http://www.canada.com/montreal/montrealgazette/story.asp?id={A5967D87-D656-45D2-A8DB-467AD5D646FC}
culater
Britney: Not A Girl, Now A Samsung Marketing Queen
By Kristy Bassuener
May 23, 2002
Pop-music icon Britney Spears forges a long-term, multi-tier sponsorship and marketing agreement with Samsung Telecommunications America just in time for her 33-city North American tour. The agreement enables Samsung to capitalize on the singer's popularity with the young and hip crowd to market three of its soon-to-roll-out mobile phones, the SGH-r225M, SPH-a500 and SPH-n400. In return, Samsung will sponsor Spears' 'Dream Within A Dream' concert tour.
In addition, Samsung has joined WFX: Wireless Fan Access to launch 'Britney WFX,' an official Britney Spears wireless fan club. The service debuted at 500 Best Buy retail stores, offering fans voice and text messages, fashion tips, horoscopes and Britney tour information.
http://www.wirelessweek.com/index.asp?layout=story&doc_id=87680&verticalID=148&vertical=...
culater
OT-The State of Information Technology
The State of Innovation
By Wade Roush
June 2002
Computing power will fade into the woodwork.
It’s no news flash to say that computers are going to keep getting smaller, as they have for the past 50 years. But even as they vanish from sight, computers will, in an important sense, grow much larger.
That’s because the time is coming when computing devices connected in a wireless web will permeate our entire physical environment, toiling behind the scenes to monitor and manage our houses, factories, roads, vehicles—even our bodies. But this lofty vision will be realized only through a series of small improvements in computing’s nuts and bolts. Some researchers, for example, are developing ways to bring new capabilities to the existing Internet, such as powerful network-based services that can link a company’s inventory systems with its accounting and customer databases. Others are studying technologies to broaden the Web’s physical reach—among them more power-efficient microchips and high-quality broadband wireless systems.
In the world created by these converging trends, networked computing devices will surround us—but we will no longer think of them as “computers.” They’ll simply be part of the furniture. We’re already well down that road. “Your car has dozens of processors that adjust all kinds of things, yet you just think of them as the heating system and the air bags and the brakes,” says Richard Burton, who manages distributed-systems research at the Palo Alto Research Center in the heart of Silicon Valley. “You’re not aware of all the computation there.”
This movement—toward what’s variously known as “ubiquitous,” “pervasive” or “embedded” computing—is hardly new. But it is gaining momentum. Thanks to recent advances in underlying technologies such as semiconductor manufacturing and networking software, proponents have moved beyond the stage of spinning gauzy theories and started tackling the technical problems. “Ubiquitous computing will be the dominant paradigm in information technology,” predicts TR100 judge Juzar Motiwalla, a partner at Green Dot Capital in Singapore.
At first blush, it might seem that computing is already ubiquitous. After all, the World Wide Web transformed the Internet from the province of academic scientists into history’s biggest town library, village marketplace and sidewalk soapbox. Now, though, software designers, including several members of this year’s TR100, are turning the Internet and the Web into the media we’ll use to stay connected, share our favorite content, tap into distant computing resources and run our businesses—and do it all faster.
Justin Frankel of AOL Time Warner, for instance, is the originator of Gnutella, an ingenious program that lets PC users link directly to each other’s hard drives through the Internet. The result is a Napster-style file-sharing free-for-all, without a central database or server that ticked-off copyright owners can shut down. But music sharing is only the beginning of what these “peer-to-peer” programs could do. A computer employing such software uses the Internet to locate a handful of other machines running the same program; these machines are connected to even more machines, and so on, eventually forming vast webs that can propagate search requests and files. Gnutella’s power to easily copy and move documents around the network could make it easier to store information wherever disk space is available, for example, as well as to keep one step ahead of potential censors.
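To make the flooding idea concrete, here is a minimal sketch in Python of how a search request can propagate peer to peer with a time-to-live so it eventually dies out. The toy network and file names are made up, and this is not Gnutella's actual wire protocol — just the underlying idea.

from collections import deque

def flood_search(peers, shared_files, start, keyword, ttl=4):
    # peers: node -> list of neighbors; shared_files: node -> list of file names.
    seen = {start}                    # never forward a query to the same peer twice
    hits = set()
    queue = deque([(start, ttl)])
    while queue:
        node, hops_left = queue.popleft()
        if keyword in shared_files.get(node, []):
            hits.add(node)            # this peer has a matching file
        if hops_left == 0:
            continue                  # time-to-live exhausted: stop forwarding
        for neighbor in peers.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops_left - 1))
    return hits

# Toy overlay: each node knows only a few neighbors, yet the query reaches all of them.
peers = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"], "e": []}
files = {"d": ["song.mp3"], "e": ["song.mp3"]}
print(flood_search(peers, files, "a", "song.mp3"))   # -> {'d', 'e'}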
At the other end of the computing-power scale from Gnutella, researchers like Steve Tuecke of the Argonne National Laboratory in Illinois are writing software that unifies supercomputers around the world into a single “grid.” Tuecke was the lead software architect for Globus, open-source “middleware” that provides a common language for accessing distant supercomputers, data-gathering instruments and scientific databases. Globus includes tools for automatically locating the hardware and software scientists need, authenticating legitimate grid users and parceling out parts of a computational task to whatever facilities have spare processing cycles. While Globus is now used mainly by research scientists, IBM, Microsoft and other companies have adopted it as a step toward new and potentially lucrative network-based services.
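As an aside, the "parceling out" idea can be sketched in a few lines of plain Python. This is purely illustrative and does not use the real Globus toolkit or its APIs; the facility names and capacities are invented. It simply splits a job into chunks and always hands the next chunk to the least-loaded site relative to its spare capacity.

import heapq

def schedule(chunks, facilities):
    # facilities: name -> spare capacity (arbitrary units). Always pick the
    # site with the lowest assigned-load-to-capacity ratio for the next chunk.
    heap = [(0.0, name) for name in facilities]
    heapq.heapify(heap)
    assignment = {}
    for chunk in chunks:
        load, name = heapq.heappop(heap)
        assignment[chunk] = name
        heapq.heappush(heap, (load + 1.0 / facilities[name], name))
    return assignment

print(schedule(["part1", "part2", "part3", "part4"],
               {"supercomputer_a": 64, "cluster_b": 16}))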
Such services use newly standardized Web protocols to give users access to e-business software running on any kind of computer on the Internet, taking over data-intensive tasks like inventory management, scheduling and accounting. In addition to the big firms already exploring this area, “A whole host of new companies will come along to provide Web services,” predicts TR100 judge Anthony Sun, a general partner at Venrock Associates, a Menlo Park, CA, venture capital firm.
Case in point: Bang Networks, a San Francisco startup founded by Tim Tuttle. Recognizing that the performance of Internet-distributed software might suffer due to network bottlenecks and lost data packets, Bang developed “intelligent routing” that maintains secure communications. “In dollar terms, these business-to-business and business-to-consumer services are going to remain the dominant aspects of ubiquitous computing for the near future,” says TR100 judge Philippe Janson, who works in IBM’s Zurich Research Laboratory on the kind of computer networking hardware that forms the hidden “back end” enabling such services.
Making the computer networks we have faster and smarter makes good economic sense. But technologies like peer-to-peer file sharing, grid computing and Web services may only reach their full potential when we no longer have to stay riveted to our desktop PCs to use them. “Until anybody can have access to broadband content anytime, anywhere, we are not done with the infrastructure,” says Sun.
This challenge hasn’t escaped the attention of infotech researchers. MIT’s Vahid Tarokh, for example, has invented a way to keep wireless signals strong long after they’ve left a transmitter by broadcasting the same signal from multiple antennas. Such technology, combined with emerging standards for packing more data into radio transmissions, could extend bandwidth-hogging Web services to cell phones and handheld computers.
Chip makers are betting that such technologies will unlock the Internet in a way that businesses and consumers can’t resist. This spring, Intel announced plans to build radio transceivers into all of its silicon chips by 2010. This development could reduce the number of components in—and hence the cost of—mobile, connected devices. And Sunnyvale, CA-based National Semiconductor has created an entire division dedicated to building energy-efficient chips for devices like lightweight, tablet-sized Web terminals. The company’s latest Geode chips, which feature a control processor that puts components to sleep between bursts of activity, use about one-tenth the power of the microprocessors inside today’s PCs. Such chips should fuel the development of portable information appliances—as well as the networked sensors and controllers that will extend our awareness into our surroundings.
These devices promise to help with the chore of running the technological infrastructure and to bring us varieties of information never before available—for example, real-time data on the structural integrity of bridges or buildings during earthquakes or terrorist attacks. But to be practical, such highly distributed systems will need the ability to diagnose and fix their own bugs and to reroute messages around lost nodes. The software to accomplish this remains very much on the drawing board. “We have a lot of work to do on the plumbing,” says Gaetano Borriello, head of an Intel-sponsored ubiquitous-computing lab in Seattle.
Which is another way of saying that the TR100 and their information technology peers will have to keep innovating—finding new ways to furnish the future with intelligent machines that draw their power from their very invisibility.
http://www.techreview.com/articles/stateofinnov20602.asp
culater
Memories of the future
Maury Wright, Editor-in-Chief -- 5/1/2002
CommVerge
At the edge of a converged network, you'll find a variety of intelligent devices that span applications from entertainment to productivity and reside everywhere from the auto to the living room. All of these nodes share some characteristics, such as connectivity and intelligence, and they all rely on some of the same key enabling technologies.
Nonvolatile memory is perhaps the most important of the enablers, even if processors get the most glory. Flash memory serves in cell phones, set-top boxes, digital music players, and a host of other devices, acting both as program storage and as a content store for music, pictures, contact lists, and many other data types.
Given the significance of flash, we decided to host a roundtable discussion on the topic. We felt that such a format might prove valuable because it would allow industry experts to pontificate on the issues directly. The summit took place only virtually—via email—but yielded a robust, realistic dialogue nonetheless.
Follow along to learn where flash-memory capacity and prices are headed, which applications will drive consumption, whether alternative nonvolatile memories will encroach upon flash markets, and other valuable insights.
CV: Because CommVerge generally focuses on convergence applications and uses that application-level focus to spotlight enabling technologies like memory, I'd like to start at the application level. Could each of you describe the three or four products that consume the largest quantities of flash memory today?
PARTICIPANTS
Philippe Berge, Director of Marketing, STMicroelectronics Memory Product Group
Bertrand Cambou, Group Vice President, Memory Group, Advanced Micro Devices
Keith Horn, Vice President of Marketing, Fujitsu Microelectronics
Bill Krenik, Wireless Advanced Architecture Manager, Texas Instruments
Brian Kumagai, Business Development Manager, Flash Products, Toshiba
Kevin Plouse, Vice President of Technical Marketing and Business Development, Memory Group, Advanced Micro Devices
Sudeep Sharma, Associate Vice President, Memory Division, Mitsubishi Electric and Electronics USA
Victor Tsai, Product Marketing Manager, Flash Products, Hitachi
Mike Williams, Director of Marketing, Flash Products Group, Intel
Bing Yeh, President and CEO, Silicon Storage Technology
Sudeep Sharma (Mitsubishi): Today the largest quantities of flash memory are consumed in cellular handsets, storage cards, BIOS flash applications for PCs, and portable electronic devices such as digital cameras, PDAs, and MP3 players.
Kevin Plouse (AMD): Cellular telephones use the bulk of flash memory today, and we don't expect that to change anytime soon. So, when we look in our crystal ball, we see a cell phone with more features that uses more flash memory. One key point is that the people who invested in 3G networks invested that money because those networks drive data, and they drive data to phones. The convergence of the phone and the handheld computer is the single largest opportunity for flash memory. The second driver is the consumer product. There are all kinds, but the ones that come to mind are music, video, and photo storage (cameras, video recorders, etc). If the cost is right, these will drive a lot of demand. Consumer appliances too—like DVD players, high-definition television—drive growing demand for flash devices.
Bertrand Cambou (AMD): And the third one is internetworking. Obviously, the dot-com explosion resulted in extensive investments in networks, and flash provided network reprogrammability. The dot-com explosion has been replaced with a dot-com collapse, but eventually the networks will be replaced. We don't know yet what they will look like, but we can assume that with the fundamental growth in data, we must keep data moving.
Mike Williams (Intel): Cellular phones by far consume the most flash in millions of megabytes. Cell phones are the highest volume, shipping approximately 400 million units with a large density mix. Digital cameras would be next, not because of their volume, but due to their higher average density. The next few applications—which include networking/communications, set-top boxes and handhelds—are all smaller and comparable in consumption.
CV: Today, the flash market is clearly divided into data- and code-storage segments, dominated by NAND/AND [not and/and] and NOR [not or] flash architectures, respectively. How do these different architectures match up with the flash applications? Also, please explain the need for flash in these applications and point out the key memory-system requirements, such as capacity, speed, cost, and others.
Brian Kumagai (Toshiba): Today's primary NAND applications are digital cameras (mainly in removable-card format), game consoles/accessories, and digital audio. Today's primary NOR applications are cell phones, PDAs, and set-top boxes. NOR applications are increasingly being restricted to code storage/execution, where the density requirements are relatively small, code must be executed from flash, and write performance/reliability are not concerns.
“The people who invested in 3G networks invested that money because those networks drive data, and they drive data to phones. The convergence of the phone and the handheld computer is the single largest opportunity for flash memory.”
Kevin Plouse,
Advanced Micro Devices
Mike Williams (Intel): Flash is critical to all these applications for supplying system and application code and data storage. The best way to split the flash market requirements is between code and code+data architectures/requirements (cellular, handheld, set-top, networking, and telematics) and pure data-only storage (digital cameras and MP3 players). Code and data applications typically require high-performance reads (burst or page-mode), read-while-write capability at 66 MHz, data integrity/reliability, low power, and mid- to high-density capacity, while data-only applications require and value high density in a removable form factor.
Philippe Berge (STMicroelectronics): In addition to mobile terminals (cell phones), we see PC BIOS, automotive, and digital home gateways as key markets. In mobile terminals, the key requirements are low power consumption, high-density, tiny packages and footprint, an optimized interface with the baseband processor, and the ability to combine flash with SRAM. These nonvolatile memory requirements are directly driven by more and more user-friendly application features, such as GPRS [general packet radio services] and WAP [wireless application protocol], Internet and talk-mode protocols, tri-band support, voice memos, voice recognition, predictive text input, and color displays.
Games require bigger and bigger operating systems, hence bigger and bigger nonvolatile memory that has to be executed as fast as possible. Longer standby and talk times require low-power supply and low-power operative and standby consumption. In digital home gateways, the key requirements are cost and write and programming throughput. They are driven by the following system evolution: Web navigation, e-commerce, expert systems for user profiling, and remote software downloads for things like operating-system updates and TV program guides.
CV: From a pure silicon perspective, discuss your organization's technology roadmap in the NAND and/or NOR camps. Tell me where you stand today in terms of capacity and where you expect to be in 2005. Please describe, at a high level, the techniques that will deliver on your roadmap, such as multilevel cells (MLCs) as opposed to single-level cells (SLCs).
"Until a product is proven in production with real customers, it is difficult to place much faith in it. MRAM has been researched for over 30 years, yet it is still not in mass production."
Bing Yeh, Silicon Storage Technology
Bing Yeh (Silicon Storage Technology): SST is aligned in the NOR camp, and we believe this will continue to be the dominant area for flash, especially for code storage, but also for data storage. Code storage requires fast access times for system boot-up and reliable byte access without the latency that is common in NAND flash. Furthermore, in low density, NAND cannot compete, as it requires massive overhead circuitry to implement.
MLC technology will bridge the gap in cost between NAND and NOR in the medium and higher densities, and we foresee a realistic roadmap to four bits per cell using MLC SuperFlash technology. Currently, SST has a wide range of capacity in the low densities from 256 kbits to 16 Mbits. We will expand into the medium densities, from 32 Mbits and up, for the coming years for the code-storage applications. We also plan to offer more than 1 Gbit per chip for the mass data-storage applications.
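A quick back-of-the-envelope calculation shows why bits per cell is the lever everyone keeps pulling. The figures below are hypothetical (a made-up 64-million-cell array and die cost), but the scaling is the point: storing two or four bits per cell multiplies density and divides cost per bit for roughly the same silicon.

CELLS = 64_000_000      # hypothetical array of 64 million memory cells
DIE_COST = 4.00         # hypothetical cost per die, in dollars

for bits_per_cell, label in [(1, "SLC"), (2, "2-bit MLC"), (4, "4-bit MLC")]:
    megabits = CELLS * bits_per_cell / 1_000_000
    print(f"{label:9s}: {megabits:4.0f} Mbits per die, "
          f"${DIE_COST / megabits:.4f} per Mbit")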
Sudeep Sharma (Mitsubishi): We are primarily focused on DINOR technology, a special type of NOR architecture. Relative to NOR flash, our DINOR technology offers faster random access at a lower voltage and an erase cycle that is seven to 25 times quicker. All of our flash-memory parts also have a BGO (background operation) function. Mitsubishi was the first to adopt the BGO function in 1997 on 8-Mbit flash. BGO can eliminate EEPROM [electrically erasable programmable read-only memory] from cellular phones since data can be read from banks while another bank is being programmed or erased.
Mike Williams (Intel): Our product portfolio is focused on NOR, not only for code but also for specifically optimized code+data requirements. We have three product lines. Our high-performance Wireless Flash, for handheld customers requiring the ultimate performance, offers a 1.8-volt (3-volt I/O option) product family with densities from 32 to 128 Mbits. Currently in production on 0.18-micron technology, we are sampling now on 0.13-micron, with a roadmap to 90 nanometers. Also on 0.13-micron, we are adding a new x32 implementation and increasing density to 512 Mbits by 2005.
Intel StrataFlash Memory is the highest-density, lowest-cost flash memory for code+data applications. Used in nearly every WinCE/PocketPC handheld, today's StrataFlash memory on 0.18-micron is Intel's third generation of MLC technology, which we originally introduced in 1997. StrataFlash is offered today in 32- to 128-Mbit densities and a 256-Mbit density later this year at 3 volts (1.8-volt I/O available). A high-performance 1.8-volt version will be released later this year, and densities on the MLC technology will reach 512 Mbits by 2005.
“The anticipated largest consumers by 2005 should be cell phones, consumer electronics (digital cameras, MP3 players), networking, and automotive (including engine control and navigation systems).”
Keith Horn, Fujitsu
Industry-standard boot block (C3/B3) flash, now in its fourth generation of complete backward compatibility, is currently in production on a 0.13-micron process. This product family includes 8 to 64 Mbits, and production will continue through 2005 and beyond. In addition to continuous improvement, leading lithographies that keep us one product generation ahead of our nearest competitors, and proven multilevel cell manufacturing, we are exploring the use of four bits per cell and Ovonyx Unified Memory to expand our roadmap in the coming years.
Brian Kumagai (Toshiba): In NAND flash, we are currently in mass production of 512-Mbit monolithic SLC, 1-Gbit stacked (two-chip) SLC, and 1-Gbit monolithic MLC. In 2005, maximum density will increase to 4-Gbit SLC and 8-Gbit MLC monolithic devices. In NOR flash, our highest density today is a 128-Mbit SLC. We have plans for 256-Mbit and possibly 512-Mbit MLC devices in 2005.
Kevin Plouse (AMD): FASL [Fujitsu AMD Semiconductor Limited, AMD's joint venture with Fujitsu] is a leader in NOR technology. FASL and Intel are neck-and-neck for first and second position in the market. The NAND/NOR line is getting more and more blurry in terms of applications. Our customers prefer NOR, but they want cost reductions.
Bertrand Cambou (AMD): One path to cost reduction is MLC.... [However,] we don't see how 4-bit MLCs can work reliably for code-storage solutions. As a result, we see the classical floating-gate technology coming to a point where it is not extendable anymore. That is why AMD took a different path with our MirrorBit architecture, which is not based on the MLC principle.... For years we have worked to develop an alternative path and now we are working full speed on MirrorBit—a technology without the compromises associated with MLC. We also recognize that MirrorBit is very expandable, even to four bits per cell. We believe that the move away from floating-gate will happen and our conviction is strong that we are engaged in a paradigm shift.
“There will be a sustainable need for code-storage flash that will be driven by the need for bigger and bigger operating systems enabling more and more user-friendly applications.”
Philippe Berge,
STMicroelectronics
Victor Tsai (Hitachi): We are a major supplier of data-storage flash with our MLC AND-type flash technology, and we are a manufacturer of code-storage NOR flash products. Hitachi recently introduced the new AG-AND multilevel flash memory cell, which gives Hitachi a technology and cost advantage over competing data-storage flash products and technologies.
Keith Horn (Fujitsu): We currently offer only NOR flash. However, our Multi Chip Package lineup will continue to provide both NOR and NAND flash. The company's flash roadmap offers an impressive range of densities (2 to 128 Mbits) and voltages (5 to 1.8 volts), and we have a well-established reputation for advanced packaging methods.
CV: Is there the possibility that the flash industry might consolidate toward a single type of flash architecture? For example, could NAND flash be augmented with DRAM cache and control circuits that would allow code-storage applications to leverage the low-cost, high-density benefits of NAND flash? Or, are there breakthroughs in the NOR world that can ramp capacities and lower costs to compete with NAND flash?
Keith Horn (Fujitsu): The disadvantage of NAND flash is its reliability. Some applications simply cannot risk reliability issues and will be forced to continue to utilize NOR or NOR-like flash. However, the production of multibit cell flash product will allow higher NOR-like reliability with pricing that is more in line with NAND flash.
Bing Yeh (Silicon Storage Technology): NAND- and NOR-type applications and specs are quite different, and both types will coexist forever. Four-bit-per-cell MLC will provide a great challenge to NAND flash in terms of cost. However, because several large Japanese companies have focused on NAND flash, there will continue to be some NAND flash market inertia. So, regardless of what arguments technologists might make about whether NOR or NAND is technologically better, NAND will continue to play a role in the high-density flash market. In the embedded and mainstream code-storage markets, however, NAND will never penetrate. We see clear evidence of this, as NAND vendors have followed a DRAM model in the manufacture of NAND flash, pushing products into higher and higher density and not even offering NAND devices anywhere below 64 Mbits, which is the domain of code storage today.
Victor Tsai (Hitachi): There may be a point in time where there would be a convergence of data-storage and code-storage flash. Data-storage flash is generally more cost-effective than code-storage flash. Hitachi has just introduced the superAND flash product, which incorporates some NOR-like features, including power-on read for system boot-up and 100 percent good memory without error handling and memory management by the host CPU. This is the first crossover product that can satisfy both data-storage and code-storage needs in a system.
Sudeep Sharma (Mitsubishi): We don't see the NOR and NAND types of flash-memory architectures converging. However, new flash-memory architectures may be developed to handle both types of applications.
Mike Williams (Intel): We believe it will continue to fragment. Application requirements are diverging rather than converging. We see this today in multiple line-item offerings on our silicon and numerous stacking configurations requested. Additionally, our long-term strategic alignment with our top customers indicates continued diversification. One size certainly does not fit all.
"Application requirements are diverging rather than converging. Our long-term strategic alignment with our top customers indicates continued diversification. One size certainly does not fit all.”
Mike Williams, Intel
And let me correct a potential misconception with our Intel StrataFlash memory on leading-edge lithographies. We believe we do compete with NAND on a cost basis. The question isn't about cost per se, but about the price at which a manufacturer is willing to sell that flash device. Currently, NAND manufacturers are pursuing a very aggressive pricing strategy to make up for what we believe is an inherent mismatch with the system requirements in a code or code+data environment (bad blocks, error correction, read speeds, increased system memory, etc).
CV: Mike, is it your point that NAND manufacturers have cut prices to artificially low levels to gain entry into code or code+data storage applications, and that some buyers will deal with mismatched characteristics like slow read speeds to buy the lower-cost flash? And when you say you "compete with NAND on a cost basis," are you making that claim based on system costs in a code or code+data application?
Mike Williams (Intel): There were some lofty expectations for NAND growth the past few years, mainly driven by growth projections for digital cameras and digital music players. Each year, the forecasts were pushed out another year. The missed growth expectations have left the NAND suppliers scrambling to find homes for their products, and they have resorted to trying to fit their products into the traditional NOR markets. But NAND has inherent feature mismatches for these applications. For example, you cannot execute out of NAND, given the slow read speeds. Therefore, redundant memory, consuming more space and power, is required for the device to operate. NAND also requires error-correction circuitry. NAND contains bad blocks that must be managed. And the list goes on.
"In the near-term, the 'perfect' memory, nonvolatile RAM, will remain an R&D product. While some technologies appear promising, we believe the applications will be restricted."
Brian Kumagai, Toshiba
In short, there are a number of system-complexity issues when designing with NAND, and the NAND suppliers are attempting to overcome these issues by using cost as an incentive. Hence, NAND is selling at a very aggressive price today. In most cases involving both code and code+data applications, Intel StrataFlash memory offers a lower overall system cost and is much easier to use in design.
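To make the execute-in-place point concrete, here is a minimal sketch of why NAND brings "redundant memory" with it. The code is Python pseudocode for a hypothetical system, not any vendor's boot firmware: because NAND is read a page at a time, the code image is typically shadowed (copied) into RAM at boot and executed from there, whereas NOR supports random reads and can be executed in place.

PAGE_SIZE = 512                         # bytes per page in this hypothetical NAND part

class NandArray:
    def __init__(self, image):
        pad = (-len(image)) % PAGE_SIZE
        image = image + b"\xff" * pad   # erased NAND reads back as 0xFF
        self.pages = [image[i:i + PAGE_SIZE]
                      for i in range(0, len(image), PAGE_SIZE)]

    def read_page(self, n):
        # Real devices also return spare bytes for error correction; omitted here.
        return self.pages[n]

def shadow_to_ram(nand):
    # Copy the whole code image out of NAND so the CPU can execute it from RAM.
    ram = bytearray()
    for n in range(len(nand.pages)):
        ram += nand.read_page(n)
    return ram

firmware = bytes(1536)                  # hypothetical 1.5 KB code image
ram_copy = shadow_to_ram(NandArray(firmware))
print(len(ram_copy), "bytes of RAM tied up just holding a copy of the code")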
CV: Given the state of the market today, and developments in NAND and NOR technologies, take a look at your crystal ball and project the top three or four products for 2005. And again, please explain the key memory-system requirements for flash.
Keith Horn (Fujitsu): The anticipated largest consumers by 2005 should be cell phones, consumer electronics (digital cameras, MP3 players), networking, and automotive (including engine control and navigation systems). As cell phones offer more and more features, they will continue to require larger memory densities and smaller packages.
Mike Williams (Intel): By 2005, cellular, cameras, networking, PDAs, and set-top boxes will remain as the top markets, in our opinion, with cellular continuing to lead and handheld growth most likely outpacing the others. Telematics/GPS will also emerge as a top flash application.
Brian Kumagai (Toshiba): In 2005, NAND applications will include digital still/video cameras, cell phones (mainly for digital camera/audio/video purposes), PDAs, and set-top boxes. In all of these applications, whether the flash is used for code and/or data storage, the primary factors driving the usage of NAND are the requirements for high density and low cost. Additionally, for the data-intensive applications, the superior write performance and reliability of NAND compared with NOR is an important consideration. NOR applications for 2005 include cell phones, low-end set-top boxes, and networking/communications equipment.
Kevin Plouse (AMD): Looking into our crystal ball though, we can't forget to talk about the auto dashboard. It's small, but the fastest growing forecast is in the cockpit of the car—for entertainment and navigation. The car PC has just hit the inflection point for growth. It's been in development for eight years or more and is now becoming a standard part of the car.
“The car is a very interesting environment for us because we have been focusing on the car for a while, and you do not use substandard flash in a car.”
Bertrand Cambou,
Advanced Micro Devices
Bertrand Cambou (AMD): The car is a very interesting environment for us because we have been focusing on the car for a while, and you do not use substandard flash in a car (for example, because of the extreme temperature variation requirements). And that has been our strength.
CV: With cell phones identified as such a huge consumer of flash memory, could you further illuminate how flash is used in those devices? Digital cell phones rely on high-speed DSPs, so I know SRAM is required for code execution. Perhaps you could provide a scenario for what types of code and data are stored in different memory types, both when a phone is standing by and when a call is in progress. And describe how close this model of memory usage comes to other applications like PDAs or telematics systems.
Mike Williams (Intel): Flash memory has traditionally been used in cellular handsets to store program code used to control the operation of the device, to store data for device-tuning parameters, and to store data such as frequently used phone numbers and other personal information. Flash was adopted in these devices due to its solid-state ruggedness and high data retention—a phone can be dropped to the floor, the battery can be removed, and the information in the flash memory is retained.
Internet-capable handsets, including new 2.5G and 3G phones, are driving the requirements for higher-performance and higher-density flash memory. These cellular handsets can be separated into two main processing functions: the baseband communications processor and the applications processor. Flash memory is used in the baseband unit to store program code for the traditional microcontroller device in charge of handling the specific cellular protocol. Flash memory can also be used in the baseband unit to store the DSP algorithms, as well as acting as the main memory in the event of an onboard cache miss from the integrated SRAM memory. Regardless of standby operation or active operation, the baseband processor is continually executing code from the flash device. In standby mode, approximately 1 to 3 percent of the time (depending on the actual protocol), the baseband processor must "wake up" to ping the nearest basestation in order to stay connected.
Flash-memory requirements are exploding on the application-processor side, where flash is used to store program code for new functionality such as Web browsers, color displays, Java applets, and audio/digital data manipulation. Connecting to the Internet opens up the need for more data on the application-processor side for storing large video files, digital music files, photographs, and email.
The memory usage of an application processor in a cell phone and in a PDA is the same (hence the convergence of cell phones and PDAs). The industry debates whether one common multipurpose device will emerge or whether we will continue to see a variety of devices tailored for a specific need. Whether a cell phone, PDA, or telematics system, Intel is offering common building blocks, including baseband processors, applications processors, and flash memory based on the Intel Personal Internet Client Architecture—a development blueprint for wireless devices and software combining voice and data.
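That 1-to-3-percent wake figure is easy to turn into rough standby arithmetic. The current and battery numbers below are hypothetical placeholders, not vendor data; the sketch simply shows how the duty cycle drives average standby drain, and therefore why low-power flash reads during those wake-ups matter.

AWAKE_MA = 60.0       # hypothetical current while pinging the basestation, mA
ASLEEP_MA = 1.5       # hypothetical current while the baseband sleeps, mA
BATTERY_MAH = 800.0   # hypothetical handset battery capacity, mAh

for duty in (0.01, 0.03):                          # 1% and 3% awake time
    avg_ma = duty * AWAKE_MA + (1 - duty) * ASLEEP_MA
    days = BATTERY_MAH / avg_ma / 24
    print(f"{duty:.0%} awake: {avg_ma:.2f} mA average, roughly {days:.0f} days standby")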
CV: Brian Kumagai of Toshiba seems to imply that future cell phones will have a mix of NAND and NOR flash. I assume the former will serve integrated add-on functionality like a digital camera or MP3 player, while the latter serves to store code for the cell-phone application. Could you give me a precise picture of how you see this memory architecture applied?
“Although in many ways integration is the key to low cost, we believe that discrete flash memory will continue to be less expensive than embedded flash memory on a per-bit basis.”
Sudeep Sharma, Mitsubishi
Brian Kumagai (Toshiba): We expect both evolutionary and revolutionary cell-phone architectures utilizing NAND flash. The evolutionary architecture will utilize NAND for data (photos, audio, video, etc) and the NOR for all types of code storage. In this case, the NOR will have to be fast enough to support code execution for all processing/control functions, including the DSP, which will probably be realized by page/burst mode. Toshiba would expect the NOR density to increase at about its historical rate for this architecture, since the NAND will take over some of the previous NOR functions, such as phone-number storage. The revolutionary architecture would use only NAND combined with lots of RAM (probably DRAM). In this case, the NAND would store all of the code and data, and the code would be executed out of RAM. The smartphone and PDA-combo phone will drive the transition to the NAND-only architecture.
CV: Outside of external CompactFlash, SmartMedia, SD Card, and Memory Stick modules, will there be a sustainable need for stand-alone flash memory chips going forward? Integration is the key to low cost, and SOC [system-on-chip] is an unmistakable trend. Will flash become largely a feature integrated onto other chips? If not, describe the capacity requirements or silicon limitations that will prevent such consolidation.
Philippe Berge (STMicroelectronics): There will be a sustainable need for code-storage flash that will be driven by the need for bigger and bigger operating systems enabling more and more user-friendly applications. Embedding flash will always remain a tradeoff of cost, footprint, and performance. Overall, flash will keep growing in three directions: standalone, embedded, and cards. Standalone and embedded flash will grow mostly for code storage. Flash cards or flash-plus-other-memory cards will develop as real subsystems for data storage. There is no real standard package yet, but the evolution of a standard, associated with cost-per-bit reduction, will push the market to a higher level of volumes and value.
Sudeep Sharma (Mitsubishi): Flash memory is being integrated already and will continue to be more integrated in SOC devices. However, the density of flash memory in SOC applications will continue to be limited because of chip-size constraints, which is also related to the yield issue. Continued development of finer process technologies will increase SOC flash density, but future applications will also continue to demand more flash memory density. Although in many ways integration is the key to low cost, we believe that discrete flash memory will continue to be less expensive than embedded flash memory on a per-bit basis.
Bill Krenik (Texas Instruments): Texas Instruments doesn't make flash, but as a leading vendor of ICs for cellular handsets we have a vested interest in flash developments. For wireless, integrating flash memory on the same chip may result in an inflexible memory configuration, because the handset designer will need to specify the amount of flash memory to be integrated early in the design cycle, before actual memory needs are clear. This may result in excess memory, leading to a cost penalty, or insufficient memory, resulting in loss of product features or the need for added external memory.
Further, flash integration normally requires six to eight additional process reticles over a conventional digital CMOS [complementary metal oxide semiconductor] process, significantly increasing manufacturing costs. Since there are no significant performance benefits obtained by integrating flash onto the same chip for wireless, it is difficult to justify flash integration. Other options, however, such as the use of multidie packaging, may be attractive in some cases.
Bing Yeh (Silicon Storage Technology): SST is by far the world's leading, if not the only, substantial vendor of embedded-flash solutions. Our split-gate SuperFlash architecture is available as integratable intellectual property through many of the world's leading foundries. Today, dozens of blue-chip companies license SST's SuperFlash technology for integration into their own wireless chips and other ASICs on a regular basis. Since SuperFlash technology is CMOS-compatible for fab portability and scalability, and since SuperFlash offers significantly better power usage and die-size efficiencies compared with both stacked-gate and NAND flash, SST believes this will continue to be a rapidly growing and successful market.
That being said, however, flash is not always as cost-effectively integrated with other system requirements onto a single chip, due to the additional processing and testing steps required for flash memory. SOC with flash is most effective on very small die, where most of the component cost is in the packaging, or at the very high-density spectrum of so-called smart flash memory, where most of the SOC silicon is occupied by flash.
CV: The mixture of memory technologies on one dedicated memory chip or in a multichip package appears to be another trend in the integration story, and without doubt many convergence products require a mix of memory types. Give me your opinions on what types of mixes will be popular, including the possibility of mixing multiple flash types along with SRAM and DRAM. Describe why a chip dedicated to a mix of memory is a good idea, and if so, how you can craft a standard product family that meets the needs of different applications.
Mike Williams (Intel): Providing one packaged memory subsystem is compelling for handheld devices due to the space savings. Today, we are stacking flash and SRAM into one package, and the possibilities are almost endless for stacking, including flash and flash, flash and logic, flash and other memory, and any of the combinations above. These combinations are driven by the memory-subsystem needs of our customers. Crafting one standard product family is not achievable due to the fragmentation discussed previously. Successful flash suppliers must strive for flexibility and quick turnaround time to meet their customers' specific needs.
Keith Horn (Fujitsu): Our lineup of multichip packaged (MCP) devices, which includes flash and SRAM or flash and Fast-Cycle RAM, will continue to lead the field in mixed-memory technology on one package. For cellular applications, this MCP device can replace multiple components, resulting in space savings. It can also offer higher densities that are not available in today's marketplace with a one-chip solution and at a reasonable cost.
"Rotating storage is not a practical memory solution for today's handsets. However, in the future, the technology may be a good fit for high-end PDAs."
Bill Krenik, Texas Instruments
Bill Krenik (Texas Instruments): In wireless-handset applications, SRAM is normally used for multiple levels of cache, while flash is used for program storage and storage of user data and system settings. Since the cache needs to be integrated with the processor and flash integration appears to be cost prohibitive for wireless, it seems unlikely that SRAM+flash products will emerge.
CV: Do you see any near-term prospects for technologies like MRAM [magnetic RAM], FRAM [ferroelectric RAM], Ovonyx's optical technology, or some other nonvolatile memory to succeed in mainstream applications? Also, can rotating storage technologies, like hard-disk drives and the new Dataplay drive, impact the market for flash modules?
Bing Yeh (Silicon Storage Technology): Until a product is proven in production with real customers, it is difficult to place much faith in it. MRAM has been researched for over 30 years, yet it is still not in mass production.
Brian Kumagai (Toshiba): In the near-term, the "perfect" memory, nonvolatile RAM, will remain an R&D product. While some technologies appear promising, we believe the applications will be restricted. For example, Toshiba is developing 32-Mbit and 64-Mbit FeRAM [FRAM], which can be used to replace NOR+SRAM in low-end cell phones. Still, none of these new technologies will reach the density or cost-per-bit of NAND flash. Toshiba plans to introduce a commercial FRAM by the end of 2002. The target density is 32 Mbits. The primary technical challenge is acceptable performance in terms of access time.
Bill Krenik (Texas Instruments): Of the advanced memory technologies you cite, only FRAM is proven in high-volume manufacturing today. FRAM is also attractive because it can be integrated with very few additional process reticles. While MRAM and Ovonyx memory are very interesting technologies, they remain unproven as real solutions for low-cost, high-volume applications.
Rotating storage, of course, looks great on a cost-per-bit basis. However, this low cost is only available for relatively large memories. As a result, rotating storage is not a practical memory solution for today's handsets. However, in the future, the technology may be a good fit for high-end PDAs.
Mike Williams (Intel): As we've discussed publicly, Intel is pursuing Ovonyx memory technology. Although it is still early in the development, the initial results look encouraging. Compared to MRAM and FRAM, we believe Ovonyx holds the best promise for delivering on the performance, densities, integration, and reliability needed for our customers. If all goes well, we would expect Ovonyx to start making an impact on mainstream applications as early as the middle of the decade. But it is premature to discuss specific product plans.
Rotating storage technology will always be an alternative in the pure data-storage area. We see this in the digital-music-player market segment today, where NAND memory is being squeezed by less-expensive rotating technologies.
CV: We’ll finish with the unpopular question. I’d welcome your views on where flash prices are headed. I’d like to discuss price for two reasons. First, low price enables convergence applications. Music players, for example, have been hampered by flash prices, although I know they’ve dropped considerably (and I know the RIAA [Recording Industry Association of America] has hit the music players harder, but that’s a discussion for another day). Second, low price has potentially negative ramifications for flash manufacturers. Moreover, the number of manufacturers making flash today is still large relative to other commodity memory types like DRAM. Is the flash market headed for a major consolidation toward a small group of major players? What characteristics of your business make you a long-term participant in the flash industry?
Bertrand Cambou (AMD): It is our belief and strategy that what we need is to continuously and relentlessly cut the price per bit, and to commit to our customers a cost reduction that empowers them to build higher and higher densities into their systems, thereby making flash even more pervasive.
Philippe Berge (STMicroelectronics): We are expecting prices to stabilize in Q2 due to the recovery of demand, and to rise in the second half of the year. As for consolidation, the high end of the code-storage market is already at an advanced stage, with very few suppliers having the proper relationship with the key customers, advanced technology, and manufacturing capacity. High-density flash devices are already coming from only three to four suppliers. Second-tier vendors are shipping devices made with lower-density, older technology. In the long term, flash technology is essential for STMicroelectronics’ SOC strategy. Flash offers ST the advantage of both differentiated and standard products. The flash-differentiated products, essentially custom configurations for high-volume applications, are key for our corporate strategic customers and give some stability to the business. The standard-product portfolio contributes by extending our customer base and providing the volume to lower our overall manufacturing costs.
Victor Tsai (Hitachi): There are many code-storage NOR flash suppliers, but the number of data-storage NAND/AND flash suppliers is relatively small. The growth rate of data-storage flash is much higher than code-storage flash, so while there may be consolidation in the code-storage flash supplier base, there is still a lot of market potential for new entries into the data-storage flash market.
Sudeep Sharma (Mitsubishi): We believe flash-memory demand will increase strongly and likely outstrip supply as the US and worldwide economy improves and as cellular handset demand increases. Mitsubishi Electric has been strong for a long time in providing a wide variety of memory technologies that can be combined to provide a complete solution.
Keith Horn (Fujitsu): Flash prices appear now to have stabilized. We have not seen prices increase yet, but they certainly are not decreasing. Low prices may eliminate some newcomers to the flash business, but established flash manufacturers will continue to thrive by implementing die shrinks and investing in new technologies such as multibit cell product. Fujitsu should be considered a long-term participant because of our joint venture with AMD and the considerable investment that has been made in our facilities.
Kevin Plouse (AMD): Flash has attracted every major memory player. Those that are the strongest will survive, the best technology will survive, the most innovative will survive. We’ve been a leader in nonvolatile memory for more than a quarter century. We’ve built a strong partnership with Fujitsu. With Fujitsu, we believe we have the best high-volume manufacturing facilities. We’ve brought a lot of innovation to the market, so we have compelling products (1055 patents filed in 2001). We have the broadest product portfolio. So, we’ve been committed, we stay committed, and our goal is to be the preeminent supplier of flash memory. We have a track record that proves we are going to be a force in the flash-memory market.
http://www.e-insite.net/commvergemag/index.asp?layout=articlePrint&articleID=CA214594
culater
OT-Where Everybody Knows Your Name
By Quentin Hardy
Digital identity will be ubiquitous, unified, functional--and after your wallet.
Digital identity has been one of the biggest battlegrounds in tech this year, pitting Microsoft against a raft of rivals and big businesses looking to build a future on the Internet. Now, seemingly, the conflict is ending faster than anyone had expected, helped in part by a quiet revolt by big corporate customers pressing Microsoft and its foes to get along.
By the Numbers: All About Me
Microsoft's Passport has an early lead in Internet identities, but that doesn't mean it is the only way that identities will be stored.
200 million: Microsoft Passports issued worldwide.
1.67 billion: Credit and debit cards in the U.S.
992 million: Cell phones in use worldwide.
217 million: Active Social Security card holders.
Sources: Microsoft; The Nilson Report; CTIA; Social Security Administration.
But it is likely to be a temporary truce. A range of players including Sun Microsystems (Nasdaq: SUNW), General Motors (NYSE: GM) and the airlines are bracing for even bigger wars ahead.
Digital identity uses software-based features to let people access a network, visit any Web site and be greeted personally and individually, without ever having to fill out a registration form or otherwise introduce themselves. The dream is that, eventually, everything about you--name, address, Social Security number, credit card accounts, medical history and more--will be instantly recognized and ready for secure delivery to a vendor with whom you'd like to do business. You can even reassemble the data however you like, adding a new screen name and a different mix of details to drop in on places where no one knows you go.
One ultimate goal: to let machines talk to machines to transact business for you (or your company), without your ever having to go online to issue the orders.
It all sounds remarkably geeky, deeply personal and numbingly huge. But so are the phone system and our national banking networks. Like those, digital identity involves a big payoff, with stakes including billions of dollars in sales of goods and services online; thousands of new computers to manage transactions; and side businesses in security, corporate identities and consolidation of all the data in a central repository.
When personal data is easily swapped and shared on the network, a couple of keystrokes gets you, say, a United reservation, frequent-flier number included, plus the kind of rental car you like and your hotel reservation--even listings of the movies or restaurants that have opened since your last visit.
Without digital ID, we are stuck with the greatest friction point in Web commerce: that redundant typing in of passwords and filling out of home address fields for the umpteenth time.
There was bound to be a fight over something this valuable and new. Particularly with Microsoft (Nasdaq: MSFT) involved: The Redmond, Wash. monopolist made a big early move to try to impose a standard for digital ID, offering a crude version of its Passport service in 1999. (Novell had the digital identity idea even earlier but couldn't find customers.) Passport logs just a user name and password but can be extended, if a user chooses, to add lots more information. Microsoft has registered 200 million Passport accounts, largely through its free Hotmail service and Windows XP.
Microsoft says 80% of the 200 million accounts are in use (but there has been no independent count). Lots of services offered by its online-access business, MSN, use Passport, such as MoneyCentral and a digital photo site. Companies including Starbucks and Office Depot have also incorporated Passport into their Web sites.
Consumer companies were initially content to hear Microsoft's promises and to track the progress of rival schemes such as the Yahoo Wallet and America Online's Magic Carpet. But by last spring some grew restive at the idea of Microsoft's owning and managing the personal data on their customers. Sun Microsystems, Microsoft's bitter nemesis, stoked the worries and in early summer began contacting companies to pitch the idea of "federating" to create a nonproprietary alternative to Passport, letting their sites link up more easily without Microsoft as middleman.
"We told them that if they didn't take hold of their customers' digital identities, someone else would," says Jonathan Schwartz, Sun's chief strategist. "We figured the online retailers would be freaked out by the threat, but they thought it was a tech issue. Financial services and media companies were more concerned."
Sun has a big stake in digital identity--managing the big databases requires special servers, called directories, good at fetching names. "To us, the economy sucks," Schwartz says. "People aren't going to buy a ton of new servers, but they are interested in consolidation and security. Our directory servers have the biggest sales growth." Microsoft has a similar product, called Active Directory, and Schwartz frets that Microsoft's Passport system will favor its own wares over other companies', much the way Windows steers users to sibling products.
Sun pulled in 33 big companies, including UAL, General Motors, Vodafone and Sony, and last September they formed the Liberty Alliance. Liberty, now with more than 40 members, is finishing work on software standards for encryption, user authentication and other functions, which members and nonmembers will share.
Two weeks before the Liberty group was announced, Microsoft caught wind of the move and announced a strategy to create its own "federation" for Passport. The two sides have seemed implacably opposed for months. The rest of the industry may have to take sides. IBM has a dog in this fight, and Big Blue is leaning toward Microsoft over the Liberty Alliance. "For one thing, Sun makes us uncomfortable," says one IBM executive. "And besides, we don't know what their rules are."
The Liberty allies have gone so far as to invite Microsoft to join their group to ensure that Passport and Liberty are interoperable. The tacit message: The companies that would have been some of Passport's biggest customers may have successfully rebelled.
"Even Microsoft will find it congenial to offer Liberty-based [standards] in Passport," says the Liberty group's chairman, Eric Dean, chief information officer at United Airlines. "The rhetoric has changed." Neither side can afford to confuse consumers and corporate clients--imagine how carmakers would fare with incompatible versions of gasoline.
No one has made much off digital ID yet. So let the marketing begin: Companies want consumers using digital identity, and fast. Dean figures Liberty will have its first software out this summer and a more powerful version out to consumers by the end of 2002 or in early 2003.
Tony E. Scott, chief technology officer at General Motors, says digital identity will let automakers tie their customers closer to a single brand. "This could change the nature of a car from a single purchase to a lifetime stream of upgrades," he says. Computers in the car could access your entire driving and service history, perhaps even tapping into your medical records if you get into an accident.
The OnStar tracking system is an early effort, alerting the police to a car's location and owner whenever an air bag is deployed. Scott envisions a day when a driver's preferences and habits will be loaded into the new GM car when it's bought. Switch to Ford? Maybe you can forward that history--but maybe loading it is a little difficult, or it costs more to leave GM.
Companies like IBM and Sun, as well as myriad startups, see even bigger potential in building software for a corporate digital identity (see "It's Who Knows You"). An ID boom could also mean a huge business managing all those names and preferences in a secure and independent database. It would be something like the database that manages all the nation's phone numbers, listed and unlisted, only larger and more complex.
NeuStar, the 1999 management buyout from Lockheed Martin that acts as a third-party administrator of 160 million North American phone numbers (so one phone can reach another), is angling to be a digital-ID custodian. Jeffery Ganek, chairman and chief executive, says revenue from protecting digital identities could easily match its current custody revenue of almost $100 million annually.
Digital ID "will work like cash machines," he says. "The first ones were proprietary, then the big guys pressed the little guys to join them. We act like a Switzerland for the communications world; service providers like banks and airlines want an independent party to manage this."
Microsoft wanted to be in the custodian business, too, but recently retreated in the face of a customer revolt. Its Hailstorm project, later renamed the safer-sounding .NET My Services, was to be a central repository for ID accounts and other user-related services, but developers balked. Now Microsoft has scrapped the custodian role in favor of making and selling ID-storing software; MSN could use it, but so could others.
Lesson learned, says Adam Sohn, a Microsoft executive involved with both Passport and .NET My Services. "Customers said, ‘It's not like we don't trust you. We don't want anyone to own our data.'"
Privacy advocates may cringe at the idea of companies' keeping this much personal information on you. Maybe they should, but all the major players insist their first rule is that it's up to the consumer. "Privacy direction will be under user control," says UAL's Dean. "They decide what can and can't be shared."
Sohn says Passport will never track users to see where they go on the Internet or to learn more about their buying habits. He is quick to add, however, that other Microsoft services using Passport could do so, including MSN, Hotmail and the instant-messaging service. "Think of them as clients. It's a separate business." Translation: Passport won't track people, but businesses using it may.
Yet consumers have always been willing to show some leg in exchange for convenience. Sun's Schwartz freely admits he would reveal abundant personal data to an airline for a faster trip through security. "To the extent I elect to be part of this world, I'll be able to do more interesting stuff," he says.
The Sun executive has testified three times against Microsoft in the effort by nine state attorneys general to impose tougher sanctions on the software giant in the wake of its antitrust ruling.
"Microsoft has a lock on the desktop, which is a major channel to the Internet--an unbelievable franchise," Schwartz says. It will try to lock up digital ID in much the same way, he argues. Some way, somehow, he says, "there's no way they'll back off from competing." http://biz.yahoo.com/fo/020522/where_everybody_knows_your_name_1.html
culater
Why telematics is stuck in neutral
By Paul Leroux
May 21, 2002, 4:00 AM PT
Ever notice how car commercials stress style over substance? Brand X, we are assured, builds excitement. Brand Y makes you want to tear off your necktie and play hooky. And Brand Z is so sporty looking that young women love to run their hands along its...well, you get the idea.
Of course, it often makes marketing sense to sell the sizzle, not the steak. But as it turns out, automakers have little choice. Competing brands of automobiles can have so much in common that, in many cases, a car's styling really is the only differentiator worth flaunting.
Imagine, then, if a technology could help automakers add real--and distinctive--value to their products. Such is the promise of in-vehicle telematics. DaimlerChrysler certainly sees the potential: It recently unveiled a hands-free telematics system that allows drivers to operate a cell phone using natural voice commands. This one feature makes communicating from your car both safer and more convenient--not a bad differentiator.
The DaimlerChrysler system is only the beginning. The same push for product differentiation that spawned this product is driving other automakers to combine cellular technology, Internet access, GPS (Global Positioning System) receivers and dynamic navigation into their own unique in-car systems. In fact, it's estimated that over 20 million telematics-enabled cars and light trucks will be on the road in the United States by 2006.
This convergence of technologies could change driving dramatically. Lost your car key? Just dial a number on your cell phone, enter a password and presto!--your door opens. Accident? An onboard computer could immediately dial 911 and provide the dispatcher with your exact GPS coordinates. Engine trouble? The same computer could automatically locate the nearest service center and, if you'd like, book a service appointment (after it has checked the scheduler on your PDA, of course).
All these features mean one thing: The software deployed in cars is going to get very complex--more sophisticated, in fact, than many of the applications on your desktop PC. But the software will also have to be a lot more reliable.
Think about it: What do you do when your desktop operating system crashes? You might curse a blue streak, but you'll probably still buy the next version of the OS. But if your dashboard crashes? I don't know about you, but my brand loyalty would take a dive. That's a huge issue in the auto industry, where it takes an average of 18 years to win a customer back.
Of course, automakers will be extremely careful about software testing--safety and regulatory issues give them no choice. Unfortunately, once software gets complex enough, no amount of testing can eliminate every bug. That's going to present problems when the software may be deployed in thousands of vehicles.
More to the point, a car offers a relatively hostile environment. Desktop PCs are rarely exposed to excessive radio frequency or electromagnetic interference, but, within the car, stray interference near power lines or transformers can affect hardware to the point that a software driver will fail.
Automakers must assume such problems may occur, and must design their systems to recover quickly and automatically, without affecting the car's occupants in any way. It's a tall order!
In effect, they need to deploy high-availability systems. By this, I don't mean conventional designs, which typically recover from software failures by using redundant backup systems. That isn't an option in the car market, where the cost of every bolt counts. So, rather than use redundant hardware, high-availability designs for automobiles have to be implemented where most problems can occur in the first place: the software.
Virtually any software process must be able to fail without affecting services provided by other processes. Moreover, the system should be able to restart any process automatically. For example, if a media player faults, the system would restart it instantly, without the driver even knowing there was a problem. Mind you, this fault tolerance can't apply only to applications. It has to go deeper, right down to the device drivers and protocol stacks at the heart of any telematics system.
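In software terms, "restart any process automatically" means a supervisor watches each worker process and respawns it when it dies. The following is a minimal sketch in Python, purely illustrative--a real telematics system would rely on the operating system's high-availability framework rather than a script, and the media_player stub and restart policy are invented for the example:

# Minimal supervisor sketch: restart a worker process whenever it exits
# abnormally. Illustrative only; a production system would also preserve
# state, escalate repeated failures and protect drivers the same way.
import multiprocessing
import time

def media_player():
    # Stand-in for a telematics service that occasionally faults.
    time.sleep(1)
    raise RuntimeError("simulated fault")

def supervise(target, max_restarts=3):
    for attempt in range(1, max_restarts + 1):
        proc = multiprocessing.Process(target=target)
        proc.start()
        proc.join()                       # block until the worker exits
        if proc.exitcode == 0:
            return                        # clean shutdown, nothing to do
        print(f"worker died (exit {proc.exitcode}); restart {attempt}/{max_restarts}")
    print("giving up; escalate to a higher-level recovery policy")

if __name__ == "__main__":
    supervise(media_player)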
Can automakers really do this? Definitely, provided they use the right operating-system technology. They need to look closely at the operating system they choose and ensure that it can provide memory protection not just for applications (the desktop approach), but for every software driver, file system and protocol. The operating system must also offer a high-availability framework that can automate software recovery, without the need for a reboot. Otherwise, the phrase "car crash" may take on a whole new meaning.
It remains to be seen just how much consumers will embrace this brave new era of talking, thinking cars. But my fear is that without high-availability operating-system technology, it won't get past the starting line. http://news.com.com/2010-1075-918741.html?tag=fd_nc_1
culater
Memory-card advances boost gadget development
By Kuriko Miyake
May 20, 2002 6:53 am PT
TOKYO -- About a year ago, when SanDisk first introduced a 512MB Compact Flash memory card, it was priced at $799. Now for the same price you can get SanDisk's highest-capacity 1GB Compact Flash card, which was rolled out in the first quarter of this year. A 512MB card now sells for $329.99.
"Prices in the flash memory market declined 70 percent in the past year, and they will continue to fall," Nelson Chan, senior vice president and general manager of SanDisk's retail business unit, said in Tokyo last week. "But the best is yet to come in this market," he said, for both vendors and consumers.
While consumers will enjoy lower prices for higher-capacity memory cards, SanDisk, the world's largest flash memory-card vendor, based in Sunnyvale, Calif., expects further development in flash-memory technology to expand the memory-card market.
The capacities of memory cards such as Compact Flash, Smart Media, MultiMedia, SD (Secure Digital) and Memory Stick are expected to double every 12 to 18 months, and SanDisk is planning to unveil a 2GB Compact Flash card next year, Chan said.
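Taken at face value, the doubling claim implies roughly the following trajectory (the 1GB starting point and a 15-month doubling time are assumptions chosen for illustration, not SanDisk figures):

# Project flash-card capacity under a ~15-month doubling cadence.
capacity_gb = 1.0                      # 1GB Compact Flash, early 2002
for year in range(2002, 2007):
    print(f"{year}: ~{capacity_gb:.1f}GB")
    capacity_gb *= 2 ** (12 / 15)      # one year at a 15-month doubling time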
As memory-card capacities grow, new types of gadgets can be produced, Chan said. An example is Matsushita Electric Industrial's D-snap, a device the size of a mobile phone that can record and play video, still images and audio recordings, Chan said.
Since SD cards reached capacities of 512MB and more, it has been possible to produce a compact video recorder with only an SD card slot, Chan said. The D-snap, launched earlier this year, can record up to 11 hours and 20 minutes of video with a 512MB SD card, according to a Matsushita statement.
Currently, digital still cameras and video recorders are the largest markets for memory cards, Chan said. PDAs (personal digital assistants) and MP3 players are in second and third place, respectively, followed by mobile phones.
However, as 3G (third-generation) networks spread by 2004, SanDisk expects that mobile phones equipped with memory-card slots will become its biggest product by 2006, Chan said.
In fact, Matsushita, better known by its Panasonic brand name, is considering unveiling a 3G mobile-phone product with an SD card slot sometime after the second quarter of this year, said Hiroshi Ryu, manager of the public relations team at Matsushita.
The SD format, jointly developed by SanDisk, Matsushita and Toshiba in 1999, is likely to become the most-popular memory card by 2004, SanDisk predicted.
Compact Flash is currently the most-shipped memory card type, followed by Smart Media, and that trend will continue for the next two years, Chan said. However, by 2006 the company expects that 24 million Compact Flash cards will be shipped, compared to 154 million SD cards shipped worldwide, he said.
The company expects Sony's Memory Stick to be in second place by 2004, but predicts that shipment numbers will be relatively low compared with SD cards. In 2006, SanDisk predicts that 65 million Memory Sticks will be shipped worldwide.
"If more products, other than Sony's, become available then that may change," Chan said. Sony does not disclose how many non-Sony devices are available with a Memory Stick slot. However, around 20 million products, including Memory Stick cards and Memory Stick-enabled devices, have been shipped worldwide, according to Sony's Web site.
A growing number of vendors are producing devices with an SD card slot. A total of 124 products are currently available from makers such as Canon, Hewlett-Packard, IBM, Hitachi, Eastman Kodak, Palm and NEC, as well as Matsushita and Toshiba, SanDisk said.
SanDisk is due to launch a cigarette package-size USB (Universal Serial Bus) storage device, called Cruzer, in June. It will have an SD card slot rather than a Compact Flash slot, SanDisk said. http://www.infoworld.com/articles/hn/xml/02/05/20/020520hnmemory.xml
culater
Telematics burst by dot-com's flat tire
By Rachel Konrad
Special to ZDNet News
May 20, 2002, 4:00 AM PT
DETROIT--The automobile industry has a hangover, and it's blaming Silicon Valley and the technology sector for the all-night bender.
Automobile executives who gathered last week for the two-day Telematics Detroit 2002 conference said the late 1990s Internet stock bubble inflated revenue expectations and warped business plans for the emerging niche of dashboard computing, also known as telematics.
Analysts and executives in the segment are now downscaling sales estimates, looking for ways to slash costs and dramatically expanding their time frame for profitability.
As recently as July 2000, research firm IDC forecast that telematics revenue would top $42 billion by 2010, up from $1 billion in 1998. Analysts from Adventis, GartnerG2 and others who convened in Detroit called those predictions "outlandish," curbing 2010 revenue estimates to $20 billion. Many are also saying that the automakers must cede a larger piece of the smaller pie to wireless and electronics companies.
"This was definitely a case of bubble fever," said Andrew Cole, keynote speaker and wireless practice leader at London-based strategy consulting firm Adventis.
"The automakers created the car and make a decent amount of money from it, but other companies were trying to make more money in the downstream market. The automakers saw this and said, 'Why should the technology companies get the profit? Let's squat on the revenue stream,' regardless if it wasn't in their core business area."
Executives from Tokyo, Seoul and Paris could barely veil their contempt for the technology industry during the Detroit event--yet many turned that same criticism on themselves for their willingness to apply New Economy metrics to a Rust Belt industry. Some said dot-coms hoodwinked stockholders and investment bankers and convinced a broad segment of Corporate America to suspend normal rules about profit and growth in favor of wacky metrics that didn't emphasize sound business theory. Many executives are now questioning why they came along for the ride.
Harel Kodesh, a former Microsoft engineer who became president and CEO of telematics provider Wingcast 18 months ago, said some telematics executives adopted metrics such as "eardrops"--the amount of time consumers used dashboard electronics such as embedded cell phones, similar to the now discredited "eyeballs" metric online advertisers used in the late '90s.
But the switch to goofy standards may have been a bigger disservice to telematics providers than eyeballs was to online advertisers, Kodesh said. That's because telematics providers have to train consumers on how to safely use dashboard gadgets such as satellite navigation and real-time traffic reports through voice-recognition software. Online advertisers, on the other hand, don't have to teach people to read banner ads.
"At the height of the Internet bubble, people thought about getting eyeballs and eardrops--to hell with profit as long as you can get people online in whatever form 'online' takes," said Kodesh, who heads the San Diego, Calif.-based joint venture between Ford Motor and Qualcomm. The venture is set to launch products in about two months. "We all forgot that consumer education is a long-term effort."
Angst, anger and humility
The auto industry's dot-com angst isn't unique. Executives in industries ranging from academics to healthcare complain that the dot-com bubble perverted business models, particularly in areas where technology overlapped with other industries.
For example, realtors in tech meccas such as San Jose, Calif., and Boston say that housing prices are still in the process of readjusting to post-bubble economics; airline executives complain that the Internet forced them to rush e-commerce into sales structures before they could come up with the best model.
But telematics--the place where the high-powered technology and automobile industries intersect--was perhaps the most interesting and long-lasting case of "bubble fever," and possibly the most ironic.
Until the 1990s, the tech sector was considered a bit player in the U.S. economy, dwarfed by Detroit's big iron. Automobiles had been a cornerstone of the U.S. economy since the 1950s, when suburbia exploded and few American families could do without at least one vehicle.
But technology gained footing in the 1990s, boosted by companies' soaring stock prices and consumers' eagerness to purchase PCs, cell phones and other gadgets. Economists at Wells Capital Management in Minneapolis determined last year that the tech sector accounted for nearly half of U.S. economic growth in the 1990s. Did the auto industry leap into telematics in part to regain some of its economic thunder?
Auto executives dismiss such theories outright, but many admit they were shifting focus to technology, ranging from satellite TV at General Motors to a giant e-commerce marketplace for automakers called Covisint. In the late 1990s, Michigan even spearheaded a massive business campaign called "Automation Alley," an effort to court tech companies along its corporate Interstate 75 corridor.
Even though activity in the tech sector quieted after the spring 2000 stock implosion, Silicon Valley's courting of Detroit intensified in the second half of 2000 and 2001, when a venture capital drought in the tech sector forced start-ups to turn to deep-pocketed automakers. Many auto executives said they were almost flattered by the attention from tech companies--and the chance to shake off their image as Rust Belt stalwarts.
"We saw a plethora of different opportunities, and we were really enticed," said Scott Kubicki, vice president of OnStar Core Services. "You could bite at those apples pretty quickly, but we only bit at a few. We were pretty conservative, but we got offered a ton of opportunities. Everyone wanted to partner with us. People were calling us who thought we were 'Old Economy' only a year before."
Although the telematics implosion didn't force any automakers into bankruptcy or cause the deep-pocketed companies to scramble for more venture funding, auto executives say they've learned valuable lessons about their future because of the bubble.
Peter van Alstine, vice president of telematics for Boston-based consulting firm Cross Country Automotive Services, said the important lesson he learned was that telematics, like the Internet, is in its infancy. He also learned that telematics is here to stay, even if the niche doesn't immediately produce billions in new revenue for automakers.
"This is a long-distance race," van Alstine said, noting that several years of OnStar promotions have resulted in only about 2.2 million customers. "Right now there are some customers willing to pay $20 or $30 per month, but it's going to be a long put to get 10 million or more customers."
Numerous executives said that the bubble and its bursting have forced them to scrutinize business practices--and even question whether to stay in the telematics niche at all. Many automakers, including Ford and DaimlerChrysler, are revamping business strategies to provide little more than a dashboard outlet or hub and rely heavily if not entirely on wireless and electronics partners to provide products and services. (By contrast, General Motors's OnStar mobile communication division is emphasizing more embedded devices.)
"We have learned in the last year to become very humble," Bruno Simon, director of telematics at Paris-based Renault. "We've had many experiences, but the only thing we're sure of is that we've burned a lot of cash out and haven't brought a lot of cash in," Simon said.
The cure for the dot-com hangover
It's unclear how damaging telematics' dot-com hangover will be or how long it will last.
Pessimism at the Detroit conference and in the ranks of telematics service providers around the world has become so pervasive that some experts worry about a morale drain in the emerging sector.
"The telematics high has cooled off significantly," said GartnerG2 automotive analyst Thilo Koslowski. "There was a big vision and dream to realize revenue for car manufacturers, and now we realize that won't happen anytime soon and that vision was overly optimistic."
Although dot-com fever caused the auto industry to inflate potential profits and then forced companies to retrench, the Internet bubble may have been fortuitously timed. Some say it forced the automakers to consider broader ramifications and potential liabilities of cell phones and electronics.
As cell phones became ubiquitous on American roads in the late 1990s, some safety advocates were lobbying against the use of traditional cell phones while driving because of driver distraction. At least 40 states have proposed legislation banning traditional handheld phones, and in 2001 New York became the first state to ban handheld phones for drivers while their cars were in motion.
Judges and consumers also debated who was responsible for deaths caused by distraction. In 1995, a motorcyclist died after a Smith Barney broker, talking on his cell phone while driving his Mercedes-Benz, hit him. Although the firm did not supply the phone, lawyers alleged that Smith Barney encouraged workers to use personal phones for business. Smith Barney settled the case for $500,000 in 1999.
Automakers are now weighing such tragedies and political movements carefully as they try to find a killer app for telematics. Many now say that hands-free calling through embedded speech-recognition technology could increase revenue, reduce the number of crashes and win political allies. Auto executives say their foray into telematics could help them stay ahead of the curve if more states pass New York-style bans or if consumer outrage increases.
"Whether hands-free becomes law is irrelevant," said Jack Withrow, director of telematics for Chrysler, the Auburn Hills, Mich.-based division of Germany's DaimlerChrysler. "The public is saying, 'We want to talk on the phone safely,' and the automakers are now in a position to give them the ability to do just that."
http://zdnet.com.com/2100-1106-917516.html
culater
OT-2001: A Speech Odyssey
June 9, 2001
By: Alfred Poor
"Open the pod bay doors, Hal."
"I'm sorry, Dave, I'm afraid I can't do that."
That simple exchange in Stanley Kubrick's masterpiece movie, 2001: A Space Odyssey, based on the novel by Arthur C. Clarke, sparked the imaginations of an entire generation a dozen years before personal computers arrived on the scene. Just as Dick Tracy's two-way wrist radio set our expectations for communications, the computer HAL in 2001 made it seem possible that we would talk to machines.
Now that we're actually in the year 2001, those expectations are tantalizingly close to realization. Some of the latest cell phones are indeed small enough to wear on your wrist, and while computers don't understand the spoken word perfectly, enormous gains have been made in recent years.
This overview of PC speech recognition gives an introduction to how computers are able to transform spoken words into text and commands. We'll look into some of speech recognition's limitations and how they are being addressed. We'll also look into applications for speech recognition, and why so much effort is expended to improve the technology in spite of its limited success to date.
The Fundamentals of Speech Recognition
For most people, understanding spoken words is an easy task. The human brain's ability to identify and match vocal patterns is astounding; we can recognize speech even when spoken with a heavy or unfamiliar accent. As a result, it is easy for us to take this skill for granted.
If you try to teach a machine to understand speech, however, the task becomes dauntingly difficult. The spoken sounds must be broken down into data that the computer can process and use to identify the words as you say them (See our related story, "Knock-Knock, Who's There?", for a quick description of how voice recognition is used for security and authentication purposes).
Most speech recognition systems use similar strategies to turn sounds into text. The process can be broken down into five discrete steps (sketched in code just after the list):
• Speech input
• Prefiltering
• Feature extraction
• Comparison and matching
• Text output
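As a rough map of how the five steps fit together, here is a bare skeleton; every function below is a trivial stub with an invented name, and the real work each one stands in for is described in the sections that follow:

# Skeleton of the five-step recognition pipeline described above.
def record(source):            return source           # 1. speech input
def prefilter(samples):        return samples          # 2. noise reduction
def extract_features(samples): return [samples]        # 3. frames + spectra
def match(frames):             return "hello world"    # 4. phoneme/word search
def output(words):             print(words)            # 5. text or command

def recognize(source):
    return output(match(extract_features(prefilter(record(source)))))

recognize([0.0, 0.1, -0.2])   # stand-in for digitized audio samples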
Speech Input
The first and most important step in speech recognition is to record the spoken input. This task requires a microphone, and a device to convert the analog signal from the microphone into digital data. The role of the microphone is especially important, as it must provide sufficient frequency response to accurately capture the spoken sounds, and must also keep background noise to a minimum. (See the related article on microphone technology, "Testing, One, Two, Three").
The analog signal is converted to digital data, often by the computer's sound card or circuitry. The conversion requires that the levels be recorded--or sampled--at specific intervals. The sampling rate determines the number of samples per second to be recorded, and the bit depth determines how many different levels will be recorded, or their resolution.
Audio CDs--such as the ones you would play on a stereo system--are recorded at a sampling rate of 44.1 KHz, or 44,100 data points every second. This is necessary to provide audio fidelity for high-frequency sounds up to about 22 KHz. Human voice, however, can be well described using sounds between 100 and 8,000 Hz. (Note: This is why telephone systems have a relatively narrow frequency response range; visit this story on HowStuffWorks.com for an interesting demonstration of how phones cut off higher frequencies.) As a result, speech can be sampled at a rate as low as 8 KHz (8,000 samples per second), though 16 KHz may provide better results if sufficient processing power and storage is available.
Similarly, the dynamic range--the difference in volume between the quietest and loudest instances--for voice is much less than for high-quality music. Just 8 bits per sample--one byte--is sufficient in many instances for voice, though results may be improved by using 16-bits--two bytes--per sample, which is the same as the Audio CD bit depth.
The difference between different rates and bit depths can have a major impact on the amount of data that the computer must process. One second of sound digitized at 8 KHz and 8 bits per sample creates just 8,000 bytes of data. That same second of sound digitized at 16 KHz and 16 bits creates four times as much data: 32,000 bytes. And the 44 KHz and 16 bits standard for Audio CDs requires 88,000 bytes for just one second of sound.
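The arithmetic is easy to check: bytes per second is simply the sampling rate times the bytes per sample (assuming a single mono channel, as above). Note that at the true CD rate of 44.1 KHz the figure comes to 88,200 bytes; the 88,000 above reflects rounding the rate to 44 KHz.

# Bytes of audio data generated per second at several sampling settings.
def bytes_per_second(rate_hz, bits_per_sample, channels=1):
    return rate_hz * (bits_per_sample // 8) * channels

print(bytes_per_second(8_000, 8))     # 8,000 bytes  (telephone-quality speech)
print(bytes_per_second(16_000, 16))   # 32,000 bytes (higher-quality speech)
print(bytes_per_second(44_100, 16))   # 88,200 bytes (audio CD rate, mono)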
Fundamentals, Continued
Prefiltering
Once the sound has been digitized and stored, the next step is to filter it. The data can be analyzed, and background noise levels can be identified and reduced, if not eliminated entirely. This can be achieved by a variety of methods, including Linear Predictive Coding (LPC) and spectral analysis. The result is cleaner sound data that can be processed more reliably.
Feature Extraction
Once the sound data has been prepared for analysis, it must be processed so that it can be compared with the stored samples. Typically, the sound data is divided into overlapping frames, each about 5 to 32 ms long.
The data is then analyzed for its component frequencies. The sound of human speech is composed of many frequencies of different volumes, all occurring at once. The individual data points must be analyzed to determine what sound waves combined to create those points. A common technique used for this process is a Fast Fourier Transformation (FFT). The result is a map of the frame containing the frequency and amplitude--the volume--of its component sounds.
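A minimal sketch of that framing-plus-FFT step, using NumPy; the frame length, hop size and the synthetic two-tone "speech" signal are arbitrary choices made for illustration:

# Slice a signal into overlapping frames and take each frame's spectrum.
import numpy as np

rate = 16_000                               # samples per second
t = np.arange(rate) / rate                  # one second of synthetic signal
signal = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2_000 * t)

frame_len, hop = 400, 160                   # 25 ms frames, 10 ms hop
frames = [signal[i:i + frame_len]
          for i in range(0, len(signal) - frame_len, hop)]

spectra = [np.abs(np.fft.rfft(f * np.hanning(frame_len))) for f in frames]
freqs = np.fft.rfftfreq(frame_len, d=1 / rate)
print(f"{len(frames)} frames; strongest component of frame 0 is near "
      f"{freqs[np.argmax(spectra[0])]:.0f} Hz")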
Comparison and Matching
The data is now ready for comparison with the stored sound samples, so that the words can be identified. This objective is far easier to state than it is to achieve.
For most speech recognition systems, the reference sounds are broken down into pieces called "phonemes." A phoneme is the smallest part of a spoken word; change one phoneme for another, and you get a different word. The frames created in the previous feature extraction are matched against the stored database of phonemes to determine which phonemes were recorded.
Now comes the hard part. The speech recognition program has to take the sequences of phonemes, and match them up against the stored words in the program's vocabulary, trying to determine what words were spoken. Most programs rely on a statistical process that tries to predict what will most likely be the next phoneme, based on the words that it has determined might best fit the phonemes that it has already matched.
The program creates a probability tree for the different possible matches it predicts. As the program becomes more and more certain that it has correctly identified a word, the predictions about subsequent words can change. If the program can find a match as predicted for the next sequence of sounds, it has more confidence in that branch. If the predictions prove wrong, however, it must go back up the logic tree of choices that it made, and try a new chain of solutions. This is why some speech programs hesitate for a noticeable time, and then dump a whole phrase of text on the screen at one time.
Different models are used for this process, but one of the most common is called the hidden Markov Model (HMM). It can use large libraries of words; some programs have vocabularies with 70,000 or more words. It also uses libraries of grammar rules that help use contextual clues to decide what word is most likely to come next.
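To make the probability-tree idea concrete, here is a toy version of the Viterbi search commonly used with hidden Markov models. Everything in it is invented for illustration--three phonemes, made-up probabilities, and frame labels standing in for acoustic scores--whereas a real recognizer searches over thousands of context-dependent models and a vocabulary of tens of thousands of words:

# Toy Viterbi decoder: find the most likely phoneme sequence for a short
# run of acoustic frames. All probabilities are invented for illustration.
states = ["k", "ae", "t"]                       # phonemes for the word "cat"

start = {"k": 0.8, "ae": 0.1, "t": 0.1}
trans = {                                       # P(next phoneme | current)
    "k":  {"k": 0.4, "ae": 0.5, "t": 0.1},
    "ae": {"k": 0.1, "ae": 0.5, "t": 0.4},
    "t":  {"k": 0.1, "ae": 0.1, "t": 0.8},
}
emit = {                                        # P(frame label | phoneme)
    "k":  {"f1": 0.7, "f2": 0.2, "f3": 0.1},
    "ae": {"f1": 0.2, "f2": 0.6, "f3": 0.2},
    "t":  {"f1": 0.1, "f2": 0.2, "f3": 0.7},
}

def viterbi(frames):
    # best[s] = (probability, path) of the best path ending in phoneme s
    best = {s: (start[s] * emit[s][frames[0]], [s]) for s in states}
    for obs in frames[1:]:
        best = {
            s: max(
                ((p * trans[prev][s] * emit[s][obs], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda item: item[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda item: item[0])

prob, path = viterbi(["f1", "f1", "f2", "f2", "f3"])
print(path, prob)   # -> ['k', 'k', 'ae', 'ae', 't'] and its probability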
All of this activity is processor intensive. It also requires a great deal of system resources, including physical memory. With all the comparisons that must be made in order to keep track of multiple predictions, the computation would be too slow if the reference data had to be retrieved from a hard disk every time. Sufficient physical memory is a key factor in speech recognition performance. Some speech programs also take advantage of SIMD (Single Instruction Multiple Data) instruction-set extensions such as MMX, SSE, and SSE2 to accelerate speech processing, in addition to the streaming memory enhancements of those instruction sets. Further, the use of faster DRDRAM and DDR memories, with their higher bandwidth, will improve speech algorithm performance.
Text Output
Once the spoken text has been identified, the program is ready to pass it along for the next step. This could be to display the recognized text on a screen, or to trigger a command to another program.
Speech Recognition Distinctions
Speech recognition program designers have to make a number of decisions that involve balancing trade-offs. Some choices may make the program's vocabulary more limited but more accurate. Other choices may result in faster processing but lower accuracy, or trade larger storage and memory requirements for a larger vocabulary.
The choices designers make have direct effect on the suitability of a program for a given application. Here are some of the factors to consider when evaluating a speech recognition package.
Command vs. Dictation
One of the most fundamental distinctions in speech recognition is whether the spoken words are to be interpreted as commands--also known as "context-sensitive speech"--or as dictated text--also known as "context-insensitive speech."
Commands are easier to implement and typically require fewer resources. This is because the number of available choices is limited.
For example, consider giving commands within Microsoft Word for Windows 2000. If you limit the choices to the menu options, there are just nine words that the program needs to recognize: File, Edit, View, Insert, Format, Tools, Table, Window, and Help. Once the word "Edit" has been spoken and recognized, then there are just 14 possible choices for the next word. With such a limited vocabulary, it's much easier to identify the next word.
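A minimal sketch of why a command grammar is so much easier than dictation: the recognizer only has to pick the best match from the handful of words that are legal at that point. The crude string-similarity score below stands in for a real acoustic score, and the menu lists are abbreviated and partly invented for the example:

# Constrained command recognition: only words legal in the grammar are
# ever considered, so even a crude scorer can pick the right one.
from difflib import SequenceMatcher

menu_grammar = {
    None:   ["File", "Edit", "View", "Insert", "Format",
             "Tools", "Table", "Window", "Help"],
    "Edit": ["Undo", "Cut", "Copy", "Paste", "Find", "Replace"],  # abbreviated
}

def recognize_command(heard, previous=None):
    """Pick the legal next word that best matches what was heard."""
    candidates = menu_grammar[previous]
    return max(candidates,
               key=lambda w: SequenceMatcher(None, heard.lower(), w.lower()).ratio())

print(recognize_command("edit"))                    # -> Edit
print(recognize_command("paist", previous="Edit"))  # -> Paste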
In contrast, dictation can go anywhere, and often does. For example, the program may recognize the phrase "It was a…" This gives little indication of what comes next. Even when the phrase is expanded to "It was a dark and stormy…" there still are many different words that could come next in addition to the obvious "night"--"evening," "afternoon," "marriage," "novel," etc.
Clearly, managing all the possibilities of dictation is a large task. One way to contain the problem is to limit the vocabulary. This works particularly well in dictation systems for professions that use a specific and relatively limited vocabulary, such as in some medical or legal fields.
Continuous vs. Discrete Speech
Another important difference is how the speech is interpreted. Most early speech recognition programs for personal computers required discrete speech, which means that the words must be uttered separately, with a break between each one. This requires a stilted delivery that is unnatural and difficult for some users to master.
In recent years, speech recognition programs have been able to work with continuous speech, in which the spoken words are run together just as they are during normal conversation. When the computer hears a rapidly spoken phrase run together--something like "pleasendmeasample"--it is difficult to separate the words, and it requires a lot of processing power and storage to achieve even moderate accuracy.
Speaker Dependent vs. Independent
Another big distinction is between speaker dependent and speaker independent systems. A speaker dependent one relies on a database of phonemes from that specific user. This usually requires a long training period, where text is displayed on the screen that the user has to read aloud. Many programs offer short or long training options, but in general, more time spent training translates into higher accuracy results.
Some training routines recognize the text as it is spoken and ask the user to repeat unrecognized words or phrases. Ostensibly, this is to train the program to recognize the user's speech, but like the handwriting recognition used by PDAs, it may be that this process also helps train the user to some degree.
Some companies have tried to eliminate the training requirements by using a database of phoneme samples created by people with a range of accents and vocal pitches. You may be asked to identify yourself as male or female, and possibly with a geographic location, and the program then uses the closest matching samples to recognize your voice. Given the wide range of accents found in this country--there are probably some folks from Maine and Louisiana who might barely understand each other if they met--this approach clearly has limitations.
Many packages use a combination of an initial database and some limited training for the specific user's voice, and then continue to learn and refine the database of sample sounds as the system is used. As a result, many programs don't reach their maximum accuracy until after you've worked with them regularly for a week or more.
The Market Outlook
Speech recognition has a wide range of practical applications, both with personal computing devices and on larger platforms. There are a number of speaker-independent systems operating on large-scale server platforms that provide telephone access to data, but these are beyond the scope of this overview. (See the SpeechWorks site for some Flash demos of their products).
Within the personal computing device markets, there is already a wide range of products.
For desktop computing, there are dictation and command packages available. Some are designed for specific professions, like legal or medical, while others are general purpose. Some programs are designed for a specific purpose; Conversay Web Browser is an add-on for Microsoft Internet Explorer 4.0 or later that allows you to speak to activate links on a Web page rather than use the mouse or keyboard ($19.95 with a free 45-day evaluation version available here).
There are also programs for specific professions, such as medical and legal applications mentioned earlier. Some programs are developed specifically for other professions, such as law enforcement.
Speech recognition also plays an important role in adaptive technology applications. Users with limited vision or motor control can speak to the computer rather than rely on a keyboard.
Mobile applications are starting to make use of speech recognition, though their limited processing power and storage capacity results in limited features at this point. Digital pocket recorders have been used to store speech for later conversion to text after being downloaded to a desktop or laptop computer.
Speech recognition on PDAs has been demonstrated, but practical products remain in the not-too-distant future. One of the earliest automotive uses was hands-free dialing for cell phones--which can be an important safety feature while driving. Our experience with speaking the names of people to be called has been mixed, and background road noise can really interfere with the recognition process.
Today, you can use speech recognition fairly successfully with a limited vocabulary in certain GPS/street mapping systems for auto usage, like Travroute's CoPilot 2001. You might ask the system the discrete phrase "next turn," and it will generate both a graphics display on the computer and a voice-synthesized response telling you the street name and how much farther it is until the turn. This is very useful, particularly in high-speed or crowded traffic conditions.
Many other automotive applications will be possible down the road, so to speak. We expect background noise filtering to improve as microphones become more accurate and better tuned to the environment (see "Testing, One, Two, Three"), algorithms will improve, continuous speech recognition will become standard, and far more processing power will be delivered as embedded processors improve. A range of mobile e-commerce, mobile corporate business, and entertainment applications will be speech-enabled to permit hands-free operation.
Factors That Slow Adoption
PC-based speech recognition has been pretty good for a number of years, and the storage and processing power required to make it work has been available in affordable computer configurations. There is something appealing about being able to work with your computer, hands off. Just sit back and tell it what you want it to do, as if it were the electronic assistant that many of us wish our computer could be.
Aside from some niche markets in specific professions, however, speech recognition simply has not caught on. Lernout & Hauspie is one of the industry leaders, having bought a number of competitors including Kurzweil and Dragon Systems in recent years, yet it was forced to file for bankruptcy protection in 2000. Why isn't speech recognition much more successful?
There are many possible factors. First and foremost is accuracy. According to a report from the Center for Language and Speech Processing at Johns Hopkins University, speech recognition can be about 99.5% accurate in replacing touch-tone menu systems over telephones. That means that you can expect just one command in 200 to be interpreted incorrectly.
However, for dictation--even with training on a speaker-dependent system--accuracy is only estimated to be about 95%. That may seem to be a high rate, but if you figure that there are 200 to 300 words on a typical typed page, then this means that the program will get 10 to 15 words wrong on every page. Whether you make the corrections by voice or by keyboard, it can take a significant amount of time to correct these errors. And since the program will be using correctly spelled words when it makes these errors, you can't rely on a spell checker to catch the mistakes--you have to read over the entire dictation carefully.
Another factor is that most people are not accustomed to dictation. Even if the software is capable of screening out "disfluency" errors--the "ahhs" and "umms" that are often a part of conversational speech--it still takes considerable practice to be able to speak the text you want the first time, without backing up to make corrections and changes. If you've ever been deposed by an attorney in a legal case and have read your deposition, you may have been surprised at how fragmented and broken up your speech patterns appear compared with how you would have written the responses.
You must also consider the microphone. Background noise can bring down accuracy, so you need a good-quality mike that screens out extraneous noises, as we briefly discussed in the automotive application section above. Where will it be located? For best results, you should use a boom mike that either hangs on your ear or extends from a headset. This can be uncomfortable to wear all day long, and can interfere with other activities such as walking away from your computer or answering the telephone. (Andrea Electronics has a handy PC/telephone interface that lets you use the same headset for your computer and your telephone. You can listen to music on your PC, dictate using the microphone, or hold telephone conversations, all with your hands free for the mouse and keyboard.)
All of this adds up to one fact: It takes a commitment--an investment of time and peripherals--in order to get a reliable speech recognition system that you can use with maximum accuracy. Not all users are willing to make this commitment just so they can speak to their computer.
The Changing Market
There are reasons to expect that speech recognition may become more popular in the coming years.
One key factor is Microsoft's commitment to the field. As part of its .NET strategy, the company is working to add rich speech processing services to many of its products. The ultimate goal is to provide a scalable and consistent user interface across the entire range of computing devices, from cell phones and handhelds to desktop computers and beyond. The idea is that users will be able to obtain information anywhere, anytime, from any device.
One part of this development effort is the Speech Application Programming Interface (SAPI) 5.0; a software development kit (SDK) is available for download for free from the Microsoft site. This is a programming interface that is designed to form a common connection between speech recognition engines and application programs, so that it is easier for developers to make their programs speech enabled. For example, a programmer could need as little as three lines of code to translate a sound source to text; Microsoft provides sample code in the SDK.
The SAPI can take the output from any compatible speech recognition engine; in addition to Microsoft's own engine, IBM and Lernout & Hauspie Speech Products have SAPI 5.0-compatible speech engines.
A more visible outcome of Microsoft's efforts is to be found in the new Microsoft Office XP. The new application suite will be voice-enabled, so that you can use its built-in speech recognition features for both command and dictation. The fact that speech recognition will be included at no extra cost may encourage more users to give it a try--with the result that this feature may become more widely used.
Future Developments
Assuming that there is sufficient demand for the products, technological advances promise to make speech recognition easier to use and more widely available.
For example, one of the key technology features of the original Star Trek television series was the fascinating Universal Translator that automatically translated any alien speech into English. We're not there yet, but ViA, Inc. is working on something close. The company is building a speech recognition system based on a wearable computer using the Transmeta Crusoe processor. The output will be run through a language translation program, and the results will be sent to a text-to-speech (TTS) program to produce a spoken translation.
The research for this device is funded by the Office of Naval Research and is designed primarily for military use. The device could also be helpful to other people who need to communicate in unfamiliar languages, such as police, health workers, and tourists. ViA intends to provide nearly-simultaneous translation for major European languages, as well as Korean, Serbian, Arabic, Thai, and Mandarin Chinese. The company has demonstrated prototypes and hopes to have a production model shipping by the end of 2001.
There are also other approaches that could change the way speech recognition is accomplished. Integrated Wave Technologies (IWT) has developed new systems based on technology created in the former Soviet Union. Faced with computing equipment of limited capabilities, the Soviet researchers developed highly efficient algorithms to perform sound analysis.
Instead of using phonemes, IWT has a system that analyzes the frequencies and volume characteristics of a voice sample, which it can then compare directly to templates of specific commands and phrases. The result of this approach is comparatively small programs requiring modest memory and storage resources that can act much faster than phoneme-based systems.
IWT has used this approach to create prototype belt-mounted voice command translation devices--under a National Institute of Justice Science and Technology grant--which have been tested in the field by the Oakland, California Police Department. (Details of the prototype program and the IWT technology can be found here).
Advanced speech recognition could also make other applications more useful and appealing. By adding speech recognition and TTS features to many programs, it will eventually be possible to converse accurately with your computer--something we've all been hearing about for years and waiting for patiently. Beyond the limited recognition of command sets required to control office applications and the like (which is particularly compelling for some handicapped individuals), the challenges of taking it one step further into handling contextually rich speech interactions go far beyond speech recognition hurdles. These interactions require natural language processing with contextual analysis, and the sequencing of multiple separate events.
For example, if you ask your computer to set up reservations for a business trip and book a room for you, it may fully understand the words accurately. But now it must take action and coordinate with many external and internal software systems and services: handling your personal preferences, delivering notifications for problematic or successful transactions, and so on. This chain of events often has dependencies requiring one stage to complete before the next starts. Actions that today are manually enabled with keyboard and mouse, and often sequenced manually, will have to be done automatically, triggered from rich spoken command sequences to your computer. The idea of web sites talking to other web sites in initiatives like .NET ties into the equation.
We expect eventually that all personal information management (PIM) programs will accept spoken instructions to add appointments to a calendar, or to autodial a phone number from the contact list, or to read your current To Do list aloud. PC games already have used voice recognition to a limited degree--but in the future, the player may be able to have spoken dialogs with the program's characters, adding to the sense of immersion in the game environment.
We are still a ways off from being able to converse freely with a computer like HAL from 2001, but we're getting closer. In settings where the vocabulary requirements can be limited to some degree, we now have the technology to make many of our computing applications voice enabled. It remains to be seen whether users at large will find this a valuable addition.
Links
Microsoft's Speech Resources page: Numerous links to speech-related journals and companies.
Speech Technology Magazine
Testing, One, Two, Three
June 9, 2001
By: Alfred Poor
When you speak to your computer, how well does it hear you?
This may seem like a silly question, but it's entirely serious. You need to reproduce the sound of your voice with as much fidelity as possible, so that the computer has the most accurate data to work with as it attempts to interpret those sounds. And it stands to reason that some microphones can do the job better than others. Also, the way you use your microphone can make a big difference in how it performs.
Microphone Types
Over the years, a number of different microphone technologies have been created and refined. Some are based on something as simple as a layer of carbon particles--a design that was the mainstay of telephone handset microphones for many years.
Most microphones use one of two basic designs: moving coil--also known as dynamic--and condenser.
In a loudspeaker, an electromagnet is used to move a cone back and forth in response to changes in an electrical current. The vibrations of the cone create pressure waves in the air, which we perceive as sound. A moving coil microphone uses the same principle, only in reverse. Sound waves press on a diaphragm, which causes it to move back and forth. A coil of wire attached to this diaphragm--called the voice coil--surrounds a fixed magnet. As the coil moves back and forth with the diaphragm, it moves up and down around the magnet, and the magnetic field induces an electrical current in the coil wire. These currents can be amplified and used as the signal for recording the sound.
[Figure: Moving coil microphone]
Condenser microphones rely on capacitance, which is the difference in charge between two parallel plates. Generally, one plate is fixed, while the other moves in response to the changes in air pressure caused by sound waves. The movement causes the plates to be closer or farther apart, and as the distance changes, so does the capacitance of the device. These changes can also be amplified and used to create the sound signal for recording.
Some condenser microphones require an electrical current in order to maintain the different charges on the two plates. In many cases, a small battery mounted in the microphone housing supplies this power. Other condenser microphones are designed to use "phantom power," which is drawn from another device.
Newer designs use electret materials. These are special plastics--such as Teflon--that can be permanently charged during manufacture. No external current is required, making it possible to create very small and lightweight microphones. These microphones may have a more limited lifecycle than other condenser or moving coil designs.
Noise Cancellation
Another important factor in sound recording quality is noise cancellation. In some circumstances, you may want a microphone to pick up sounds from all directions. In other cases--such as a speech recognition setup--you may want to screen out sounds from every direction except that of the computer operator. The microphone's design affects its noise cancellation characteristics.
A microphone can block unwanted noises in two basic ways: passive or active cancellation. Passive methods are the most common, because they can be implemented in the way that the microphone is physically constructed; as a result, passive cancellation is almost free in terms of construction costs and added weight.
The most commonly used format is the cardioid microphone. The name comes from the heart-shaped cross-section of the sensitivity pattern. The microphone is most sensitive to sounds that occur directly in front of it, and the sensitivity is sharply reduced as the sound source moves behind the front end of the microphone. Sounds from directly behind the microphone are almost totally blocked.
This cancellation is achieved by creating two paths for the sound waves to reach the microphone's diaphragm, such as putting the diaphragm at the end of a tube. Sound waves originating from the open end of the tube must travel nearly the same distance to reach both sides of the diaphragm, and so are registered. Sounds originating from the other end, however, reach the open side of the diaphragm first, and then travel to the end of the tube before returning up the tube to hit the other side of the diaphragm. If this tube is tuned properly, the delayed sound waves cancel out the ones taking the shorter path. In practice, a number of delay paths may be used--along with acoustic foam--to block a wider range of frequencies.
Cardioid microphones are the best choice for speech recognition because they reject sounds that come from in front of the person speaking. This helps eliminate much of the ambient noise in many environments.
A three-dimensional view of the sensitivity pattern of a cardioid microphone
Active noise cancellation microphones are more complicated. They actually rely on two or more microphones. In a two-microphone configuration, one is used to pick up the speaker's voice, and the other is used to gather the ambient noise in the environment. These ambient noise signals are then subtracted from the speaker's signal. This can do an excellent job of pulling the user's voice out of a noisy background, but the design adds weight and cost compared with passive cancellation microphones.
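A toy illustration of the subtraction idea, using synthetic signals and an assumed mix at each capsule; real products use adaptive filters (for example an LMS filter) rather than a fixed subtraction:

```python
import numpy as np

# Toy two-microphone cancellation: the primary capsule hears voice plus
# ambient noise, the reference capsule hears mostly the noise, and
# subtracting the reference removes most of the noise. The signal mix at
# each capsule is an assumption made purely for illustration.
rng = np.random.default_rng(0)
fs = 16_000
t = np.arange(0, 1.0, 1 / fs)

voice = 0.8 * np.sin(2 * np.pi * 220 * t)      # stand-in for the talker
noise = 0.5 * rng.standard_normal(t.size)      # ambient room noise

primary_mic = voice + noise                    # capsule aimed at the mouth
reference_mic = 0.9 * noise + 0.05 * voice     # capsule aimed at the room

cleaned = primary_mic - reference_mic          # most of the noise cancels

def snr_db(reference, estimate):
    """Signal-to-noise ratio of an estimate relative to the clean voice."""
    residual = estimate - reference
    return 10 * np.log10(np.mean(reference ** 2) / np.mean(residual ** 2))

print(f"SNR before cancellation: {snr_db(voice, primary_mic):5.1f} dB")
print(f"SNR after cancellation:  {snr_db(voice, cleaned):5.1f} dB")
```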
One of the most promising implementations of active noise canceling is the array microphone. Most passive microphones require the user to hold the microphone very close to the mouth when speaking. Array microphones, such as those available from Andrea Electronics Corporation, can be placed two to four feet away. The signals from two to eight microphones are processed digitally, which not only reduces background noise but also makes "beam steering" possible, allowing the array to track a user who moves within the reception area.
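A minimal delay-and-sum sketch for a hypothetical two-element array (spacing, angles, and the test tone are all assumptions); commercial array microphones use more elements and far more sophisticated adaptive processing, but the steering principle is the same--undo the expected arrival-time difference for the chosen direction, then sum:

```python
import numpy as np

# Delay-and-sum beamforming sketch for a hypothetical two-capsule array.
# Element spacing, source angles and the test tone are illustrative only.
fs = 16_000
c = 343.0                  # speed of sound, m/s
spacing = 0.085            # assumed 8.5 cm between the two capsules
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 2000 * t)

def inter_element_shift(angle_deg):
    """Arrival-time difference between capsules, in whole samples."""
    delay = spacing * np.sin(np.radians(angle_deg)) / c
    return int(round(delay * fs))

def array_capture(source_angle_deg):
    """Signals at the two capsules for a plane wave from the given angle."""
    return tone, np.roll(tone, inter_element_shift(source_angle_deg))

def steer(mic_a, mic_b, look_angle_deg):
    """Undo the expected delay for the look direction, then average."""
    return 0.5 * (mic_a + np.roll(mic_b, -inter_element_shift(look_angle_deg)))

for source_angle in (0, 60):
    a, b = array_capture(source_angle)
    for look in (0, 60):
        rms = np.sqrt(np.mean(steer(a, b, look) ** 2))
        print(f"source at {source_angle:>2} deg, beam steered to {look:>2} deg -> RMS {rms:.2f}")
```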
The Digital Advantage
Most microphones use an analog connection to your computer through a jack on the sound card. This approach can be adequate for many applications, but it can also degrade fidelity and sound quality in ways that cause problems for speech recognition programs.
Some sound cards are built with minimal attention to the microphone channel's circuitry. The market is competitive, consumers are cost-sensitive, and there may be little advantage in building a better microphone circuit in a consumer market sound card. As a result, the quality of analog-to-digital conversion in the sound card may not be as good as it could be. Also, these cards are susceptible to electronic noise generated by emissions from the computer's motherboard, expansion cards, and other components.
One solution is to move the conversion circuitry out of the computer and provide a digital signal from the microphone. This is the concept behind USB microphones, such as those available from Philips and Plantronics.
These microphones have a box that contains the DSP (digital signal processing) circuitry to digitize the analog signal. No external power supply is required, as it can draw power from the USB cable. The digital signal is delivered to the computer through the USB connection, and the potential for interference is greatly reduced. The result is a cleaner signal that can improve speech recognition software performance.
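As a minimal sketch of what "digitizing in the microphone" means--the sample rate and bit depth below are common choices for speech work, not the specification of any particular product--the analog waveform is sampled and quantized into integers before it ever crosses the electrically noisy path into the PC:

```python
import numpy as np

# Minimal sketch of the digitization step a USB microphone performs in its
# own housing: sample the analog waveform and quantize it to 16-bit values.
# Rate and bit depth are typical assumptions, not the spec of any product.
fs = 16_000                      # 16 kHz is a common rate for speech work
bits = 16
t = np.arange(0, 0.01, 1 / fs)
analog = 0.6 * np.sin(2 * np.pi * 440 * t)   # stand-in for the analog signal

full_scale = 2 ** (bits - 1) - 1             # 32767 for 16-bit audio
samples = np.round(analog * full_scale).astype(np.int16)

print(samples[:8])   # the integer stream sent over the USB cable
```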
You may recall that USB speakers were heavily promoted a few years ago but turned out to have many problems--mostly related to drivers, operating system issues, and USB technology itself. Glitches often appeared in certain multitasking scenarios, systems would lock up at boot, hot-plugging worked sporadically, speakers would sometimes stop working altogether, and so on. Windows ME ironed out many of the problems, but consumers are still wary. USB microphones, however, tend to be more stable under the latest operating systems--Windows 98 SE, 2000, and ME--and our initial experiences with them have been successful.
It is also possible to get digital performance from existing analog microphones. Companies such as Andrea Electronics make USB converters that let you use a standard microphone or headset as a USB device.
In order to get the best from your speech recognition software, you need to make sure that your computer hears you clearly. Choose an appropriate microphone and make sure that you have it positioned and adjusted correctly in order to get the best results.
Links
DPA Microphones: This site includes detailed information about microphone specifications and testing. The information is aimed at recording studio applications, but is useful for any use of microphones.
"Sound Bits and Bytes: An Introduction to Microphones" by John L. Butler: this is a short but excellent overview of microphone use in recording. It is not as comprehensive as some sites, but is filled with practical information.
"A Primer on Microphones" by Peter Elsea, UCSC Electronic Music Studios: An excellent overview of microphone technology and specifications, with good diagrams. The same site also includes other good papers on music, recording, and sound topics.
"Microphone Techniques for Music: Sound Reinforcement": a booklet in Adobe Acrobat format from the Shure Brothers, Inc. that provides detailed information about microphone technology, the science of sound, and recording tips.
Knock-Knock, Who's There?
June 9, 2001
By: Alfred Poor
People often use the terms "speech recognition" and "voice recognition" interchangeably, but "voice recognition" is an ambiguous term that has been used--and misused--so widely that it may be confusing. Speech recognition--or speech-to-text--takes spoken words and interprets them as text or commands. Voice recognition can refer to either the process of interpreting what is said, or it can refer to the identification of who is speaking.
The better terms for the identification type of voice recognition are "voice verification" and "speaker authentication," which use voice sounds for security purposes. This is possible because each individual has a distinct vocal pattern--a "voice print"--that is as unique as a fingerprint.
Voice verification confirms the identity of a user. It is used in personal computer security systems that substitute spoken passwords for typed ones. The user answers a request for a password by speaking his or her name or some other key phrase into the computer's microphone. The computer compares the speaker's voice to samples of the phrase recorded earlier during a training process, and it must confirm or deny that the spoken sample matches the stored data.
Speaker authentication or identification is more complicated--it attempts to identify the speaker. Unlike verification applications where the system only has to verify that the user is who he or she claims to be, authentication systems must search a database of stored voice patterns to find a match for an unknown speaker.
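A toy sketch of the one-to-one versus one-to-many distinction, using made-up fixed-length "voice print" feature vectors, cosine similarity, and an arbitrary acceptance threshold; real systems extract far richer acoustic features and use statistical models, but the shape of the two tasks is the same:

```python
import numpy as np

# Toy contrast between voice verification (one-to-one) and speaker
# identification (one-to-many). The "voice print" vectors, the similarity
# measure and the 0.95 threshold are illustrative assumptions only.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrolled voice prints captured during training (made-up numbers).
enrolled = {
    "alice": np.array([0.9, 0.1, 0.4, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5, 0.1]),
}

sample = np.array([0.85, 0.15, 0.45, 0.25])   # features of a new utterance

# Verification: compare against the single claimed identity.
claimed = "alice"
score = cosine(sample, enrolled[claimed])
print(f"verify '{claimed}': score {score:.3f} ->", "accept" if score > 0.95 else "reject")

# Identification: search the whole database for the best match.
best = max(enrolled, key=lambda name: cosine(sample, enrolled[name]))
print("identification best match:", best)
```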
For more information on voice verification and speaker identification, see the Michigan State University Biometrics Research site.
Copyright (c) 2002 Ziff Davis Media Inc. All Rights Reserved.
http://www.extremetech.com/print_article/0,3428,a=1623,00.asp
culater
OT-SAMSUNG ELECTRONICS SUPPLIES CDMA2000 1XEV-DO PHONES
(Ybreo Newswire) - Seoul, Korea - Unique mobile phone receives color moving pictures via a TFT-LCD capable of displaying 260,000 colors
* Phone supports streaming-type real-time VOD/AOD services
* A 110,000-pixel camera is built in with 180-degree rotation
* Multimedia messaging service capability takes mobile multimedia to new heights
* Independent voice recognition function creates new trend for mobile phones
* Samsung's leadership extends to IMT-2000, the third generation in mobile communications.
Samsung Electronics, the first to introduce CDMA2000 1X, is now paving the way for CDMA2000 1xEV-DO, a synchronous IMT-2000 format. The company has completed development of an EV-DO mobile phone (model: SCH-V300) with high-quality TFT-LCD that can reproduce 260,000 different color shades. The product can also send and receive data at up to 2.4Mbps, the fastest transmission speed of any mobile phone today. The SCH-V300 will be supplied to the SK Telecom IMT-2000 Test Group. Samsung Electronics is also about to come out with an EV-DO mobile phone model that supports videoconferencing.
Samsung’s EV-DO phone uses a streaming format to support video on demand and audio on demand. Users can receive a variety of color moving picture contents such as music videos, Internet broadcasts, animated films and news reports. The phone can also receive live World Cup matches in real time.
The high-performance TFT-LCD on the SCH-V300 was developed exclusively in-house by the Samsung Electronics Digital Device Solution Division. It is large enough to display up to 12 lines of text at a time. Users can also download, store and play back video clips.
The phone has an embedded 110,000-pixel camera that lets users take high-quality digital pictures and send them to other mobile phones via SMS or to computers via e-mail. Up to 100 still photos can be stored in the phone and used as backgrounds for the display. The camera rotates 180 degrees to facilitate picture taking from any angle.
The SCH-V300 takes mobile multimedia to a new level. The phone supports MMS (multimedia messaging service), which encompasses voice, image, text and background music rather than just voice mail and e-mail.
In addition, an independent voice recognition function dials numbers from the phonebook by name, with no need to prerecord each entry. This makes the feature far more practical and sets a new trend for mobile phones.
The Samsung Electronics EV-DO mobile phone comes with 40-chord polyphonic ring tones. Other features include a 3D graphics interface and menus that switch between Korean and English.
A Samsung spokesperson says, “We are initially supplying the phone (to SK Telecom), and it will be available to the general public by the end of May. The CDMA2000 1xEV-DO service will become the Korean market mainstream in the second half of this year, and we plan to not only bolster the status of the Samsung mobile brand but also lead the way to IMT-2000.”
□ Product specifications (SCH-V300)
* Dimensions: 95mm long x 50mm wide x 22.5mm high
* Weight: 110g
* Power: 3.6 V
* LCD
  - Main: up to 12 lines (176 x 192)
  - External: 4 lines (96 x 64)
* Color: Silver
□ Battery specifications: Standard (900mAh)
* Talk: About 150 min.
* Standby: About 120 hrs.
[About Samsung Electronics]
Samsung Electronics Co. Ltd. is a global leader in semiconductor, telecommunication, and digital convergence technology. Samsung Electronics employs approximately 64,000 people in 90 offices in 47 countries. The company is the world's largest producer of memory chips, TFT-LCDs, CDMA mobile phones, monitors and VCRs. Samsung Electronics consists of four main business units: Digital Media Network, Device Solution Network, Telecommunication Network and Digital Appliance Network.
http://www.ybreo.com/main/getProductInfo_ie.cfm?Latest=yes&AdvSearch=no&Keyword=&Brand=&...
culater
OT-The auto industry and CRM
By strategy+business
Special to CNET News.com
May 18, 2002, 6:00 AM PT
Information technology is a costly enabler of customer relationship management. But CRM programs coupled with smart technology and strategy may soon mean the end of the road for mass marketing in the auto industry.
Mass marketing is clearly at a crossroads, as companies recoil from the inefficiencies they perceive in conventional media spending. Magazine advertising pages declined by 11.7 percent last year, the steepest plunge in a quarter century. Merrill Lynch is predicting a 4 percent decline in U.S. television advertising spending this year, following a similar fall last year.
These actions are not simply a reflection of a weak economic cycle. They're the result of a new demand for greater accountability and increased returns from marketing spending. This is driving corporate investment in customer relationship management (CRM) systems. Analysts predict that global spending on CRM will total between $20 billion and $45 billion in 2002.
Embraceable CRM
But marketing executives are beginning to discover that CRM system implementation--in which simple database consolidation can run from $20 million to $30 million--is not an easy fix to the problem of communicating to, wooing and retaining customers. The problem stems from a lack of connection between companies and customers that no information technology system alone can solve. The automotive industry is a classic example of this disconnect. On average, interaction between an auto company and a customer occurs 1.2 times per year. That simply does not provide enough data to answer such crucial questions as, Which people should get what offer on which product at what time?
Companies need an approach to CRM that marketers--and customers--can embrace. Embraceable CRM starts with a simple premise: The most important part of the database isn't the base; it's the data. To gain the information necessary to embrace the customer, relationship programs must be based on two principles:
• First, they cannot wait until the first purchase is consummated to begin to understand consumer interests, concerns, desires and habits. The key to unlocking value is to recognize that different customers follow different purchase paths. Effective CRM systems must dive deep into the purchase decision before the purchase is made. Call this purchase-cycle intimacy.
• Second, because different customers follow different ownership paths, effective CRM systems must link deeply and broadly to the individual's ownership experience--the consumer's relationship with the car throughout the ownership cycle.
Acting on these two principles requires companies to bring otherwise separate technology programs together in complementary ways. For example, Internet-enabled communication systems make it increasingly possible to capture valuable insights about consumers in the middle of the purchase process. Interactive kiosks in dealerships--or in alternative sales venues, such as malls--are proving to be excellent tools to begin to engage consumers in dialogue. Online activity at home or in the office represents another vital opportunity to achieve purchase-cycle intimacy. The bursting of the e-commerce bubble should not obscure the fact that some 70 percent of consumers in the United States use the Internet at some point during the automotive purchase process.
Cross-platform marketing
Now consider what happens to a company's ability to achieve and use purchase-cycle intimacy when these tailored consumer-engagements move from the Internet into home entertainment centers. With digital video recorders (DVRs) like TiVo being built into set-top boxes, assisted sales processes will occur in the lean-back comfort of the family-room sofa. Although DVR penetration today is low--about 280,000 TiVos have been sold in the past two years--Forrester Research predicts more than half of U.S. households will have interactive TV capability by 2005.
Even with privacy protections in place, the data flowing back to manufacturers and dealers will enable them to tailor follow-up campaigns that effectively bridge the gap between marketing and sales. The ability to develop incentive packages tailored to the way different sets of customers go through the purchase cycle and to get customized packages in front of receptive audiences is vastly preferable to slapping a $2,000 incentive on a vehicle and offering that same package to everybody.
Advanced automotive marketers are already experimenting with cross-platform marketing, using DVRs as the central control device. Toyota helped launch the Lexus ES-300 earlier this year with a TiVo cross-promotion sweepstakes that uploaded commercials into the box; invited sampling of other commercials programmed into NBC network shows; and asked contestants to register for the contest on the Web. Imagine the opportunities when these platforms--television, DVRs and the Web; entertainment, brand advertising and interactive direct marketing--converge on a single screen.
DVRs are not the only new communications technology automotive executives must explore. Other technologies are equally important, including peer-to-peer (P2P) file sharing and instant messaging (IM). P2P systems such as Napster, LimeWire, and Morpheus are not passing fads, and America Online's instant messaging service is not going away soon. Kids spend hours using these services every day--kids who are tomorrow's automotive customers.
Rise of telematics
Post-purchase, another interactive technology--vehicle-based telematics--can help automotive marketers comprehend the intricacies of the ownership experience. Telematics systems are wireless devices that seamlessly capture and communicate vehicle data, enabling the automotive marketer to understand the driver's usage requirements, influence downstream services, and facilitate remarketing. Telematics, which is growing in use, allows vehicle relationship management to buttress customer relationship management.
Already, the OnStar system and “black box” programs pioneered by companies like General Motors and Peugeot are returning vehicle and customer data to manufacturers, enabling them, in turn, to shape concierge services and maintenance programs for individual customers. A Booz Allen Hamilton analysis projects telematics revenue of $20 billion to $40 billion within the next 10 years.
Adapting to telematics will not be easy for automakers. There is the ongoing difficulty of shifting from marketing that is built on the concept of mass advertising to marketing that is premised on intimate customer understanding. Auto companies also need to overcome their historical “slow follower” habits relative to technology. And the strategic plays around telematics can be complex, requiring both defensive postures, to protect existing business territory, and offensive maneuvering, to create value beyond the existing business.
The greatest challenge for the manufacturers and the dealers is to partner around this intimate customer information, so that the data can flow seamlessly up and down the chain. Only through such partnerships will the average salesperson be able to craft plans relevant to consumers walking into the dealership, offering each potential buyer excitement shaped by the right value proposition.
That, indeed, is the essence of embraceable CRM--for companies to offer value to individual customers, and to receive leverageable value in return.
http://news.com.com/2009-1017-917228.html
culater
OT-The Rules of Innovation
By Clayton M. Christensen June 2002
Bringing new technology to market is a crap shoot, right? Wrong, says innovation guru Christensen. Follow his four rules to a new science of success.
Two decades ago, when I was just out of graduate school and working in the automotive industry, I got my first introduction to the statistical process-control chart. We used this laborious technique to make sure the machines employed in our manufacturing process did not drift out of control. Composed of three parallel horizontal lines, the “SPC” chart has long been an important tool in quality management. The center line represents the targeted value for the critical performance parameter of a product being manufactured. The lines above and below it represent the acceptable upper and lower control limits. If the product were, say, an axle, workers would plot the thickness of each piece they made on the chart. When I asked why there was typically a scatter of points around the target, my managers cited the randomness inherent in all processes.
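As a quick sketch of the arithmetic behind such a chart--the axle thicknesses below are made-up numbers, and in practice the center line is often set to the engineering target rather than the sample mean--the conventional control limits sit three standard deviations above and below the center line:

```python
import statistics

# Minimal SPC-chart arithmetic: center line plus three-sigma control limits,
# applied to a handful of made-up axle thickness measurements.
thickness_mm = [25.02, 24.98, 25.01, 25.03, 24.97, 25.00, 25.04, 24.99]

center = statistics.mean(thickness_mm)   # stand-in for the target value
sigma = statistics.stdev(thickness_mm)
ucl = center + 3 * sigma                 # upper control limit
lcl = center - 3 * sigma                 # lower control limit

print(f"center line {center:.3f} mm, UCL {ucl:.3f} mm, LCL {lcl:.3f} mm")
for x in thickness_mm:
    flag = "ok" if lcl <= x <= ucl else "out of control"
    print(f"  {x:.2f} mm  {flag}")
```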
The “Quality Movement” of the 1980s and ’90s subsequently taught us that there isn’t randomness in processes. Every deviation of the actual value from the target has a cause. It appears to be random when we don’t know the cause. The Quality Movement developed methods for identifying those additional factors—and we discovered that if we could control or account for all of them, the result would be perfectly predictable, and there would be no need to inspect products as they emerged from manufacturing.
The management of innovation today is where the Quality Movement was 20 years ago, in that many believe the outcomes of innovation efforts are unpredictable. The raison d’être of the venture capital industry is belief in the unpredictability of new businesses. A few ventures will succeed; most won’t, the VCs say. They therefore place a portfolio of bets, extracting premium prices for their capital in order to earn the high return required to compensate for the risk that unpredictability imposes. I believe, however, that innovation isn’t random. Every undesired outcome has a cause. Those outcomes appear to be random when we don’t understand all the factors that affect successful innovation. If we could understand and manage these variables, innovation wouldn’t be nearly as risky as it appears.
The good news is that recent years have seen considerable progress in identifying important variables that affect the probability of success in innovation. I’ve classified these variables into four sets: (1) taking root in disruption, (2) the necessary scope to succeed, (3) leveraging the right capabilities and (4) disrupting competitors, not customers.
Of course, building successful businesses is such a complicated process, involving subtle interdependencies among so many variables in dynamic systems, that we’re unlikely ever to make it perfectly predictable. But the more we can master these variables, the more we will be able to create new companies, products, processes and services that achieve what we hope to achieve.
Take Root in Disruption
The startling conclusion suggested by the research that led to my writing The Innovator’s Dilemma was that many successful companies stumble from prominence not because they’re badly managed but precisely because they are well managed. They listen to and satisfy the needs of their best customers, and they focus investments at the largest and most profitable tiers of their markets. Mastering these paradigms of good management gives established companies, as a group, an extraordinary track record in producing sustaining innovations that bring better products to established markets. It matters little whether the innovation is incrementally simple or radically difficult, as long as it enables good companies to make better products that they can sell for higher margins to their best customers in attractively sized markets. The companies that had led their industries in prior technologies led their industries in adopting new sustaining technologies in literally 100 percent of the cases we studied.
In contrast, the leading companies almost always were toppled when disruptive technologies emerged—products or services that weren’t as good as those already used in established markets. Disruptive innovations don’t initially perform well enough to be sold or used successfully in mainstream markets. But they have other attributes—most often simplicity, convenience and low cost—that appeal to a new, small and initially unattractive (to established firms) set of customers, who use them in new or low-end applications.
The chances a new company could become successful if its entry path was a sustaining strategy—trying to make a better product than the incumbents and selling it to the same customers—were about six percent in our study. The chances of success for firms that entered with a disruptive strategy were 33 percent. The disparity stems from the motivation and position of the leading firms. They have far more resources to throw at opportunities than entrants do. When newcomers attack customers and markets attractive to the leaders, the leaders overwhelm them.
All companies are burdened with “asymmetric” motivations in that they must move toward markets that promise higher profit margins and the most substantial and immediate growth and cannot move down market toward smaller opportunities and profit margins. When new entrants take root with customers in markets that are unattractive to the leaders, they are safer—and it has nothing to do with how much cash or proprietary technology they have. They are safe because the incumbents are motivated to ignore or even exit the very markets that the entrants are motivated to enter. Taking root in disruption, therefore, is the first condition that innovators need to meet to improve the probability of successfully creating a new growth business. If they cannot or do not do this, their odds of success are much smaller.
There are two tests to assess whether a market can be disrupted. At least one of these criteria must be met in order for an upstart to be disruptively successful. If a new growth business can meet both, the odds are even better.
1. Does the innovation enable less-skilled or less-wealthy customers to do for themselves things that only the wealthy or skilled intermediaries could previously do?
When an innovation fulfills this condition, even if it can’t do all the things existing offerings can, potential customers excluded from the market tend to be delighted. For example, many people loved the first personal computers, no matter how clunky the booting process and limited the software the machines could run, because the alternative to which they compared the PC wasn’t the minicomputer—it was no computer at all. Filling such a void reduces the capital commitments and technological achievements required for an innovation to become viable and creates new growth markets. I call the process of finding and nurturing these opportunities creative creation. After a technology takes root in new markets, and after new growth is created, disruption can invade the established market and destroy its leading firms.
Even if innovators succeed in cramming disruptive technology into an existing market application, the incumbents typically win. Digital photography, online consumer banking and hybrid-electric vehicles are examples of potentially disruptive technologies that were deployed in such a sustaining fashion. Billions were spent on these innovations to beat out already acceptable and habitual technology; little net growth resulted, as sales of the new products cannibalized sales of the old; and the industry leaders maintained their rule.
2. Does the innovation target customers at the low end of a market who don’t need all the functionality of current products? And does the business model enable the disruptive innovator to earn attractive returns at discount prices unattractive to the incumbents?
Wal-Mart, Dell Computer and Nucor are examples of disruptive companies that attacked the low ends of their markets with business models that allowed them to make money at discount prices. Wal-Mart started by selling brand-name products at prices 20 percent below department store prices and still earned attractive returns because it turned inventory over much more frequently. Such a disruptive strategy can create new growth businesses but does not create new markets or classes of consumers. It has a high probability of success because the reported profit margins of established companies typically improve if they get out of low-end, low-margin products and add in their stead high-margin products positioned in more-demanding market segments. By assaulting the low end of the market and then moving up, a new company attacks, tier by tier, the markets from which established competitors are motivated to exit.
Pick the Scope Needed to Succeed
The second set of variables that affects the probability that a new business venture will succeed relates to its degree of “integration.” Highly integrated companies make and sell their own proprietary components and products across a wide range of product lines or businesses. Nonintegrated companies outsource as much as possible to suppliers and partners and use modular, open systems and components. Which style is likely to be successful is determined by the conditions under which companies must compete as disruption occurs.
In markets where product functionality is not yet good enough, companies must compete by making better products. This typically means making products whose architecture is interdependent and proprietary, because competitive pressure compels engineers to fit the pieces of their systems together in ever more efficient ways in order to wring the best performance possible out of the available technology. Standardization of interfaces (meaning fewer degrees of design freedom) forces them to back away from the frontier of what is technologically possible—which spells competitive trouble when functionality is inadequate. This helps explain why IBM, General Motors, Apple Computer, RCA, Xerox and AT&T, as the most integrated firms during the not-good-enough era of their industries’ histories, became dominant competitors. Intel and Microsoft (raps about the latter’s supposed lack of innovation aside) have also dominated their pieces of the computer industry—compared to less integrated companies such as WordPerfect (now owned by Corel)—because their products have employed the sorts of proprietary, interdependent architectures that are necessary when pushing the frontier of what is possible. This also helps us understand why NTT DoCoMo, with its integrated strategy, has been so much more successful in providing mobile access to the Internet than nonintegrated American and European competitors who have sought to interface with each other through negotiated standards.
When the functionality of products has overshot what mainstream customers can use, however, companies must compete through improvements in speed to market, simplicity and convenience, and the ability to customize products to the needs of customers in ever smaller market niches. Here, competitive forces drive the design of modular products, in which the interfaces among components and subsystems are clearly specified. Ultimately, these coalesce as industry standards. Modular architectures help companies respond to individual customer needs and introduce new products faster by upgrading individual subsystems without having to redesign everything. Under these conditions (and only under these conditions), outsourcing titans like Dell and Cisco Systems can prosper—because modular architectures help them be fast, flexible and responsive.
Leverage the Right Capabilities
Innovations fail when managers attempt to implement them within organizations that are incapable of succeeding. Managers can determine the innovation limits of their organizations quite precisely by asking three questions: (1) Do I have the resources to succeed? (2) Will my organization’s processes facilitate success in this new effort? (3) Will my organization’s values allow employees to prioritize this innovation, given their other responsibilities?
Beyond technology, the resources that drive innovative success are managers and money. Corporate executives often tap managers who have strong records of success in the mainstream to manage the creation of new growth businesses. Such choices can be the kiss of death, however, because the challenges confronting managers in a disruptive enterprise—and the skills required to overcome them—are different from those that prevail in the core business. Many innovations fail because managers do not know what they do not know as they make and implement their plans. That is, they assume that the same strategies and customer needs that apply in mature, stable markets will apply in disruptive ventures. But this is not the case, and by making such assumptions, managers close themselves off from opportunities to discover what customers really find useful in new, disruptive products.
Innovators must avoid two common misconceptions in managing the other key resource, money. The first is that deep corporate pockets are an advantage when growing new businesses. They are not. Too much cash allows those running a new venture to follow a flawed strategy for too long. Having barely enough money forces the venture’s managers to adapt to the desires of actual customers, rather than those of the corporate treasury, when looking for ways to get money—and forces them to uncover a viable strategy more quickly.
The second misconception is that patience is a virtue—that innovation entails large losses for sustained periods prior to reaping the huge upside that comes from disruptive technologies. Innovators should be patient about the new venture’s size but impatient for profits. The mandate to be profitable forces the venture to zero in on a valid strategy. But when new ventures are forced to get big fast, they end up placing huge bets at a time when the right strategy simply cannot be known. In particular, they tend to target large, obvious, existing markets—and this condemns them to failure. Most of today’s envisioned business opportunities for wireless Internet access, for example, involve big applications such as stock-trading and multiplayer gaming that have already found homes on wired, desktop computers. Billions are being sunk into new wireless ventures committed to taking over these markets before innovators have a chance to learn what applications wireless is really best at delivering.
Resources such as technology, cash and technical talent tend to be flexible, in that they can be used for a wide array of purposes. Processes, however—the central element in our second question—are typically inflexible. Their purpose is not to adapt quickly but to get the same job done reliably, again and again. The fact that a process facilitates certain tasks means that it will not work well for very different tasks. Failure is frequently rooted in the forced use of habitual but inappropriate processes for doing market research, strategic planning and budgeting.
Sony, for example, was history’s most successful disruptor. Between 1950 and 1980 it introduced 12 bona fide disruptive technologies that created exciting new markets and ultimately dethroned industry leaders—everything from radios and televisions to VCRs and the Walkman. Between 1980 and 1997, however, the company did not introduce a single disruptive innovation. Sony continued to produce sustaining innovations in its product businesses, of course. But even the new businesses that it created with its PlayStation and Vaio notebook computer were great but late entries into already established markets.
What drove Sony’s shift from a disruptive to a sustaining innovation strategy? Prior to 1980, all new product launch decisions were made by cofounder Akio Morita and a trusted team of associates. They never did market research, believing that if markets did not exist they could not be analyzed. Their process for assessing new opportunities relied on personal intuition. In the 1980s Morita withdrew from active management in order to be more involved in Japanese politics. The company consequently began hiring marketing and product-planning professionals who brought with them data-intensive, analytical processes of doing market research. Those processes were very good at uncovering unmet customer needs in existing product markets. But making the intuitive bets required to launch disruptive businesses became impossible.
A company’s values—the focus of question three—determine the necessity of spinning out separate organizations for new ventures. Values are even less flexible than resources. Everyone in an organization—executives to sales force—must put a premium on the type of business that helps the company make money given its existing cost structure. If a new venture doesn’t target order sizes, price points and margins that are more attractive than other opportunities on the organization’s plate, it won’t get priority resources; it will languish and ultimately fail.
Nor is it just the values of the innovating company that matter, because suppliers and distributors have values too, and they must put the highest priorities on opportunities that help them make money. This is why, with almost no exceptions, disruptive innovations take root in free-standing value networks—with new sales forces, distributors and retailing channels.
Disrupt Competitors, Not Customers
The fourth factor in successful innovation is minimizing the need for customers to reorder their lives. If an innovation helps customers do things they are already trying to do more simply and conveniently, it has a higher probability of success. If it makes it easier for customers to do something they weren’t trying to do anyway, it will fail. Put differently, innovators should try to disrupt their competitors, never their customers.
The best way to understand what customers are actually trying to do, as opposed to what they say they want to do, is to watch them. For example, when interviewed by the college textbook industry, students say they would welcome the ability to probe more deeply into topics of interest that textbooks just touch on. In response, publishers have invested substantial sums to make richer information available on CDs and Web sites. But few students actually use these innovations, and little growth has resulted. Why? Because what most students really are trying to do is avoid reading textbooks at all. They say they would like to delve more deeply into their subjects. But what they really do is put off reading until the last possible minute—and then cram.
To make it simpler and more convenient for students to do what they already are trying to do, a publisher could create an online facility called Cramming. Like all disruptive technologies, it would take root in a low-end market: the least conscientious students. Semester after semester, Cramming would then improve as a new “cramming-aid” growth business, without affecting textbook sales. Conscientious students would continue to purchase textbooks. At some point, however, learning the material online would be so much easier and less expensive that, tier by tier, students would stop buying texts. This path of innovation has a much higher chance of success than a direct assault that pits digital texts against conventional textbooks.
The observed probabilities of success in innovation are low. But these statistics stem from the sum of sustaining and disruptive strategies, many of which are attempted in organizations whose resources, processes and values render them incapable of succeeding. Many innovators draw lessons from observing other successful companies in very different circumstances and attempt to succeed with just one or a few links in a chain of interdependent values. And many fail after assuming that what customers say they want to do is what they actually would do.
Hence, the observed probabilities of success don’t necessarily reflect what the true likelihood of success can be, if the critical variables in the complex and dynamic process of innovation are understood and managed effectively. Indeed, success may not be as difficult to achieve as it has seemed.
Harvard Business School professor Clayton M. Christensen, a former Rhodes scholar and successful entrepreneur, specializes in the management of technological innovation.
http://www.techreview.com/articles/christensen0602.asp?p=0
culater
I got music, I got algorithm
How does a startup with a strong technology form needed partnerships with major record labels? Very carefully.
By Michael Parsons
May 9, 2002
Shazam Entertainment's service will be ideal for the kind of music fan who gets into arguments in bars about whether what's playing on the jukebox is, say, the original cut by Gladys Knight and the Pips or a nasty modern rip-off. The service will let mobile phone users in the United Kingdom identify songs they hear in public places: when users call the service, it will identify the recording they're listening to and send their phones a text message with the title of the track and the name of the artist who recorded it. And for those who want to spread the soulful majesty of "Midnight Train to Georgia," they'll be able to forward a 30-second audio clip of the track to their friends, along with a text message.
This service allows a mobile phone user to identify a tune as if she were an experienced DJ. Shazam bills it as magical, hence the company's name. Founded in 2000 by CEO Chris Barton, the London-based company has attracted $7.5 million in venture funding from IDG Ventures Europe, Lynx New Media Ventures (a joint venture of Bear Stearns and the Virgin Media Group), and Belgium's FLV Fund.
Shazam plans to launch the service this spring. It expects at least two U.K. mobile operators to offer it as a premium service.
The technology behind the service is proven, but the young company still needs to tackle a very different issue: winning approval from the major music labels in order to exploit the commercial potential of the service. At press time, Shazam was negotiating with major labels, but had yet to secure a commitment.
THE SORCERER'S PHONE
Shazam's "magic" happens through a pattern-recognition software algorithm developed by Avery Wang, the company's chief scientist. The algorithm picks out the salient characteristics of a song--its "fingerprint"--and then matches that fingerprint against a music database. The algorithm pares the musical sample down to the barest information needed to make a positive identification, which speeds up matching and minimizes storage requirements.
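The article does not disclose the details of Wang's algorithm, so the following is only a generic illustration of the fingerprint-and-lookup idea it describes: reduce a clip to a compact set of hashes built from its strongest spectral components, then count matching hashes against a database of known recordings. The track names, signal content, and hashing scheme here are all stand-ins.

```python
import numpy as np

# Toy fingerprint-and-lookup sketch. This is NOT Shazam's actual algorithm;
# it only illustrates reducing audio to compact hashes and matching them.
fs = 8_000

def fingerprint(samples, frame=1024, peaks_per_frame=3):
    """Hash the dominant frequency bins of each frame into a set."""
    hashes = set()
    for i in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[i:i + frame]))
        top_bins = np.argsort(spectrum)[-peaks_per_frame:]
        hashes.add(tuple(sorted(int(b) for b in top_bins)))
    return hashes

def make_clip(freqs, seconds=2.0):
    """Synthetic stand-in for a recording: a sum of sine tones."""
    t = np.arange(0, seconds, 1 / fs)
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

# "Database" of known recordings (synthetic stand-ins for real tracks).
database = {
    "Midnight Train to Georgia": fingerprint(make_clip([220, 440, 660])),
    "Some Other Song":           fingerprint(make_clip([300, 500, 750])),
}

# Identify a noisy excerpt by counting overlapping hashes.
noise = 0.3 * np.random.default_rng(1).standard_normal(int(2.0 * fs))
query = fingerprint(make_clip([220, 440, 660]) + noise)
best = max(database, key=lambda title: len(database[title] & query))
print("best match:", best)
```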
For Shazam's service to be effective, it needs a database that contains most of the music that users are likely to hear in public: the modern urban soundtrack of shops, bars, clubs, and pubs. Shazam says its service could accurately identify most songs if its initial database contained 40,000 or so titles--about the same number of songs that could be found at one of the U.K.'s largest music stores, like a Virgin Megastore.
The company is scanning tracks into the database according to sales figures, entering the most popular songs first. Given the surprisingly small number of tunes that are ubiquitous at any one time, first scanning in only the most popular music means that--despite having only some 10,000 songs in its database--the service can be extensively tested well before its official launch. The scanning will continue in order to keep Shazam up to date and to expand its database. Building the database is one challenge; building a business from it is another.
A premium wireless information service like Shazam's works by keeping people on the phone. Forwarding 30-second music clips could easily drive up call times. And mobile phone operators can charge extra for a service like this. In the United Kingdom, for example, callers are charged 75 cents a minute for calls to directory assistance.
The labels are interested for their own reasons. They are keen to find new ways of getting music in front of consumers, and because word-of-mouth recommendations are extremely powerful in driving music sales, they see this as a potentially strong marketing platform. They wouldn't mind at all if Shazam-like services took off, and U.K. consumers (who buy more music per head than consumers in any other country) start recommending music to each other over their mobile phones. Ronnie Planalp, EMI-Europe's senior vice president for new media, describes Shazam as "a really cool development." And Blair Schooff, director of new media at BMG, says the company is "one of the smartest to come to the table."
Despite the interest in the service, it seems that no one knows how to value it. Premium mobile phone services in the United Kingdom are normally a three-way split between the operator, the service provider, and, as always, the tax collector. For example, if a horoscope service earns $100 in call charges, $17 would go to the government for value-added tax, and the remaining $83 would be split between the mobile operator and the service operator. If the horoscope service splits revenue with the operator down the middle, the service would have about $40 to pay for expenses like new horoscope software, a computer network, leased line connections to the mobile operators, and rights to copyrighted horoscope predictions. If the service had to pay a "Zodiac Rights Body" 15 percent, or $15, to license its intellectual property, then the service would soon be out of business. But if it paid the rights body only 5 percent of revenue, then the service would prosper.
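Worked through in a few lines, using only the figures quoted above (the VAT take, the 50/50 operator split, and the two candidate rights rates), the sensitivity to the rights fee is easy to see:

```python
# The article's hypothetical premium-call economics, worked through directly.
# All percentages come from the text above; nothing here is a real contract.
def service_margin(call_revenue, vat_rate=0.17, operator_share=0.5, rights_rate=0.15):
    after_vat = call_revenue * (1 - vat_rate)          # $83 left of $100 in calls
    service_gross = after_vat * (1 - operator_share)   # ~$41.50 after the 50/50 split
    rights_fee = call_revenue * rights_rate            # fee computed on gross call revenue
    return service_gross - rights_fee

for rate in (0.15, 0.05):
    kept = service_margin(100, rights_rate=rate)
    print(f"rights fee {rate:.0%}: service keeps ${kept:.2f} of every $100 in calls")
```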
Cutting such a low-percentage deal matters to Shazam. One benchmark Shazam can use in negotiations is the 5 percent in royalties that radio stations in the United Kingdom pay on their revenue for the right to play a label's music. The labels, aware of their clout, naturally want more.
Five entertainment giants--BMG Entertainment, EMI Recorded Music, Sony Music Entertainment, Universal Music Group, and Warner Bros. Music--control between 70 and 80 percent of music sold worldwide. In the United Kingdom, music-rights negotiations are typically handled by industry bodies like the Mechanical-Copyright Protection Society, which administers rights for copying and reproducing the work, and the Performing Right Society, which collects the royalties for broadcast, cable, and public performances of the work.
At the moment, these organizations don't have mandates to manage anything as vague and newfangled as online music rights. To address this, the major U.K. labels, as well as the Association of Independent Music (AIM), which represents the United Kingdom's 500 or so independent record labels, have established their own new-media groups to negotiate rights with startups like Shazam.
LONDON CALLING
For Philip Inghelbrecht, director of business development at Shazam, it isn't quite as simple as meeting with the head of new media at a particular label. "In reality, you speak to everyone: artist and repertoire, publishing, promotion, and marketing," he says. "The record business is very decentralized." And in the United Kingdom, the local offices of the big five need to get sign-offs from their corporate masters back home. "Typically, the end responsibility goes back to the U.S.," he adds.
But before Shazam can persuade anyone to make that call to Los Angeles, the company must prove to them that the service works. One executive of a major label tested Shazam's service on his own office CD collection--15 times in a row. "There is a sense that what we're doing is slightly unbelievable," says Mr. Barton. "So that does create an extra hurdle." Moreover, music executives have had far too many meetings with promising startups that haven't been able to make any money for either themselves or the labels.
Shazam, however, could help promote the labels' music and thus boost their revenue. And if it could strike the right deal, Shazam could make money for itself. "The record labels like Shazam, and they want it to happen; they don't want the company to go out of business," says Mr. Inghelbrecht. "The struggle I have is to first make sure that Shazam survives."
The figures that the labels and Shazam are discussing are too sensitive for either camp to divulge. Shazam's strategy, in part, is to present a classic elevator pitch: invest in the service, and take a small cut that builds a big company. "Shazam is looking for a rate that lets them stay in business," says BMG's Mr. Schooff. "Obviously, so are we."
ENTERTAINMENT VALUE
Shazam needs investors and partners to agree on the future value of its business. Throw in music rights as well, and the situation gets complicated fast.
The physical limitations of radio, television, and live performance have been built into a common understanding of their associated rights. But ubiquitous digital formats mess all this up, which means rights holders have to work out new ways of valuing their intellectual property. This can be a bit of a minefield.
"You still speak to people who say, 'I've bought the CD, why can't I put it up on my Web site?' Or they say that music is too expensive," says Gavin Robertson, general manager of new media at AIM. "To which I usually reply that they must have bought a lot of music they didn't like. Only CDs you don't like are too expensive."
Shazam is anything but naïve when it comes to understanding the value labels place on their songs. Two former executives from EMI are closely involved in Shazam: Colin Southgate, an angel investor and former executive chairman of the EMI Group, and Jeremy Silver, an adviser who used to be EMI's vice president of new media in Los Angeles.
KNOW YOUR RIGHTS
The basic problem that Shazam faces is determining what music-rights protections, if any, are applicable to its service. The two areas that music-rights holders are most keen to protect are the rights to use their content for entertainment and for distribution; these are areas that Shazam is eager to avoid. "We view our service as 100 percent promotional," says Mr. Inghelbrecht. "We're not distributing anything."
Mr. Barton says his service's basic fingerprinting process doesn't require music rights at all, because there is good precedent in the media-monitoring services, which are used to track radio airplay. However, Shazam does need rights in the United Kingdom to use 30-second clips, and it sees other marketing opportunities that could be exploited by working closely with the music industry.
As the labels and Shazam circle each other, they recognize that the result of their efforts could be a bridge between existing cultural assets and new technologies. Shazam has worked hard to position its technology as a complement, rather than a threat, to music-rights owners. The company understands that it is the music, not clever software, that will draw consumers to its service. In doing so, it has avoided the mistake made by other Internet music ventures, like, say, Napster.
"The biggest problem is that people devalue our product," says Mr. Schooff of BMG. "They don't see that the content is the linchpin that makes their business run." Or, as Ms. Planalp from EMI-Europe puts it, "We are often educating startups that content isn't free."
Ms. Planalp estimates that a relatively straightforward deal between EMI and Shazam could happen within three months. Mr. Inghelbrecht is eager to get a deal, but he admits, "I don't really know the answer. Everyone says two to three months. I don't expect it to go on into 2003."
There is a certain irony to this situation. The music industry is highly consolidated. With only five companies controlling almost all the world's music markets, one would think that they could all get together and agree on some way to value digital music in various formats.
A single rights agreement would make life a lot easier for Shazam. However, antitrust concerns make this unlikely. If the big five were to agree on a graceful business model for providing digital rights to their back catalogs, they essentially would have gone into business together; and one man's responsible market coalition is another man's corrupt cartel. "We feel as if we are damned if we do and damned if we don't," says Ms. Planalp.
"All of us would agree, if we were putting ourselves in the shoes of the consumer, that we would like to make available to the consumer the balance of all music," says Mr. Schooff. "We are all looking forward to the time when you can come to one shop front."
For now, Sony, Universal, and EMI have collaborated on the online subscription service Pressplay, and Warner, BMG, and EMI are collaborating on MusicNet, a similar service.
Napster's legal defeat demonstrated to the record labels the amazing public interest in having a variety of ways to interact with music, but it also showed that incumbent intellectual-property holders are not going to let others pick their pockets. Most important, however, it illustrated the power of partnerships, particularly in a tightly knit industry.
By treading softly, Shazam has manifested the savvy needed to form alliances. Moreover, it is eager to neutralize contentious licensing issues and work with the music establishment to help the industry reach new consumers. Shazam has won the support of music business insiders, like Mr. Southgate, who have the right contacts. He is more than capable of reaching out to someone like Ms. Planalp, who is no stranger to tech startups herself, having worked previously in business development at the Internet portal Excite.
As the service prepares for a launch late this spring, relationship building from London to Los Angeles will remain a priority. "The music industry is still pretty much about people with little black books," says Mr. Robertson of AIM. "If you recruit somebody with a little black book, half your problems will be solved."
Write to Michael Parsons.
http://redherring.com/vc/2002/0509/2759.html
culater
Go to http://www.business2.com/ and click "Gadget Guide"; then,
on the drop-down menu, click "Bang & Olufsen BeoSound 2"
culater
OT-Digital Content Protection
May 16, 2002
By: Don Labriola
"The Congress shall have the Power to … promote the Progress Of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
--The "Copyright provision" of the United States Constitution, Article I, Section 8, ratified 1788
"No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology, product, service, device, component, or part thereof, that is primarily designed or produced for the purpose of circumventing a technological measure that controls access to a (copyrighted) work."
--Digital Millennium Copyright Act, Section 1201, signed into law October 1998
Forget about religion and politics. One of the best ways to start a bar fight in Silicon Valley is to talk in a loud voice about copy protection. Almost everyone has an opinion on the subject, and few take their positions lightly.
The situation is so bad that opponents can't even agree on which issues to argue or what language to use. Big-player content providers focus on legal and moral concerns, citing violations of the far-reaching Digital Millennium Copyright Act (DMCA), and condemning the "wholesale theft" of copyrighted materials. Crackers, hackers and peer-to-peer devotees are more pragmatic, stressing the impossibility of preventing the "sharing" and "personal use" of content, and insisting that such activities don't significantly impact the incomes of creators who have already been ripped off by giant media conglomerates.
The open-source community denounces efforts to transform the "information wants to be free" Internet culture into a tightly controlled digital marketplace. Constitutionalists and free-speech advocates argue that current copyright law undermines the long-held balance between creators' rights and the public interest. Peripheral to all these arguments is the escalating battle between the industry consortia that develop increasingly sophisticated content-protection mechanisms and the renegades who quickly defeat them. Balkan politics were never this complex!
None of this made it any easier for us to write an objective assessment of the latest content-protection technologies. We found many of these systems impossible to document from primary sources, due to restrictive non-disclosure clauses embedded into licensing agreements. Otherwise-loquacious spokespersons stopped returning our calls when questioning turned to the inner workings of their anti-piracy initiatives. Even when we managed to infer technical details by cross-checking public patent records against analyses posted on hacker sites, we were prevented by law from publishing anything that could be construed as a potential circumvention aid. Nor could we identify any online source that might also distribute cracking tools. The stakes were high, paranoia was rampant, and people simply weren't talking.
With that brief introduction to a highly complex topic, welcome to our in-depth analysis of digital content protection issues and technologies. Over the next few weeks, we'll explore a number of critically important topics, describe many content-protection technologies, and examine the problems in their application and use. We'll preface it all with a brief history of copyright protection over the last few hundred years.
A Little Background: The Roots of Copyright
The best way to understand how we arrived in such a humorless place is to take a few steps back. The way we understand intellectual-property rights today is only a snapshot of a continuously evolving relationship between copyright law and technology.
The most commonly cited precedent for American copyright is a 1710 British decree called the Statute of Anne. Introducing itself as "an act for the encouragement of learning", this law gave printers a 28-year monopoly on the reproduction of books that they had registered with the Crown. It was ostensibly designed to control heretical texts that were deemed a danger to society, but it also helped eradicate rogue Scottish publishers who, by printing pirate editions of registered books, had been cutting into the profits of Crown-licensed printers. As Scott Moscowitz (CEO of security-software company Blue Spike, Inc.) has noted, the parallels to today's copyright wars are striking.
What made the Statute of Anne particularly noteworthy was its strict limitation on the duration of these printing monopolies. Not only did it acknowledge and protect the rights of legitimate publishers, but it also thwarted those who sought to control and print popular texts indefinitely. Its 28-year term was intended to motivate authors and to allow printers to recover sunk costs. At the end of that period, rights owners were considered to have been fairly compensated and books became property of the general populace. In this way, the Statute provided an elegant balance between the public good and the interests of content owners.
The founding fathers of this country considered this balance crucial enough to invest Congress with the power to give authors and inventors exclusive rights to their creations for a limited time. This resulted in the 1790 Copyright Act, which, in the spirit of the Statute of Anne, established 28-year protections for books, maps, and charts. Amendments later shifted the balance a bit by extending this duration to 42 years. But they also added protection for many other types of works, including musical compositions, photographs, works of art, dramatic compositions, and even musical performances.
US Copyright Act of 1909
After a century of relative stability, the expansive US Copyright Act of 1909 tilted the scales much further toward creators' rights by boosting copyright terms to 56 years. It also changed the definition of controlled activities from "printing" to "copying", a revision that was prompted by composers' concerns about mechanical recording devices like phonographs and player pianos. This change greatly expanded the scope of copyright law at the time, but with photocopiers, magnetic tape and the Internet still the stuff of science fiction, it wasn't yet obvious that it would also lay the groundwork for our modern interpretation of creators' rights.
It took over sixty years for the other shoe to fall. In 1972, Congress passed another far-reaching amendment that ushered in an era of increasingly frequent overhauls of US copyright law. In response to the recording industry's concerns about consumer tape recorders, this latest revision added specific protections for the electronic reproduction and distribution of sound recordings. But it also established a home-recording exemption that helped preserve the tradition of balance. Electronic media had broadened the meaning of the term "copying" so dramatically that lawmakers felt the need to compensate with the concept of Fair Use. Although it's never been defined in absolute terms, Fair Use implies that the public interest can be served by defining cases in which copyright owners cannot control what is done with their properties - in this case, by allowing citizens to duplicate protected works for private or academic use.
Only four years later, another overhaul lengthened the copyright term to fifty years beyond the death of the author or, when corporations own a copyright, to a flat 75 years. Many believed that this extension was driven in part by well-connected content providers concerned about the imminent lapse of gigabuck properties into the public domain--a far cry from the copyright law's original intent to ensure a reasonable return for a limited period of time.
The balance shifted even further toward rights owners over the next few decades, as computer programs were added to the list of protected works, restrictions were placed on the rental of audio recordings and computer software, and penalties for willful infringement were raised to a maximum of $100,000.
Copyright in the Digital Age
Rights holders didn't always fare as well in court as they did on Capitol Hill. One such case was the Supreme Court's landmark 1984 Sony vs. Universal Studios case. Outraged by the popularity of Sony's Betamax VCRs, Universal (and Walt Disney Productions) attempted to outlaw the practice of using videotape to time-shift TV programming. The Ninth Circuit court initially agreed that this was infringement, effectively outlawing the VCR. Upon appeal, the Supreme Court struggled mightily with the case through two terms, finally deciding for Sony by a single vote. Had one justice voted differently, the VCR industry would have collapsed, with Sony and other manufacturers potentially liable for hundreds of millions of dollars in damages. What ultimately swayed the court was its unwillingness to retroactively brand millions of Americans criminals for activities that the studios could not demonstrate caused them financial harm, but which obviously benefited the public.
Brief History of Copyright Law
Date Description
1710 The British "Statute of Anne" gives printers a 28-year monopoly on the reproduction of books they have registered with the Crown.
1788 The "copyright provision" of the United States Constitution, Article I, Section 8, is ratified.
1790 US Congress passes a Copyright Act modeled on the Statute of Anne, giving 28-year protection to books, maps, and charts.
1909 A new US Copyright Act boosts terms to 56 years while expanding coverage to more types of protected works.
1972 Congress expands copyright law with specific protections for the electronic reproduction and distribution of sound recordings, while introducing the notion of "Fair Use".
1984 The Supreme Court upholds the legality of the VCR in Sony vs. Universal Studios.
1992 The Audio Home Recording Act (AHRA) orders that all consumer digital-audio recorders be equipped with a Serial Copy Management System (SCMS) meeting Fair Use standards.
1995 The Digital Performance Right in Sound Recordings Act (DPRA) extends copyright to services like webcasting and digital-cable audio by establishing a "public performance" right that controls the digital transmission of sound recordings.
1998 The Sonny Bono Copyright Term Extension Act adds yet another 20 years to the duration of most copyrights.
1998 The Digital Millennium Copyright Act (DMCA) defines tough new guidelines for the control of digital content and establishes "moral rights" for non-audiovisual performers.
1999 The Uniform Computer Information Transactions Act (UCITA) attempts to standardize software-licensing laws in all fifty states while legitimizing "shrink-wrapped" licensing.
The next crisis was triggered in 1987 by the appearance of Digital Audio Tape (DAT) recorders. As the first consumer digital-recording technology, DAT raised the stakes by enabling professional bootleggers to create multiple generations of perfect copies. Alarmed by the potential for enormous losses, the recording industry did its best to kill the medium, just as Hollywood had attempted to outlaw VCRs a decade earlier. Manufacturers were threatened with huge contributory-infringement suits, while record labels refused to release albums in DAT format and floated controversial copy-protection technologies that many believed degraded sound quality. After years of bickering, it took an act of Congress to resolve the debate. The 1992 Audio Home Recording Act (AHRA) ordered all consumer digital-audio recorders to be equipped with a Serial Copy Management System (SCMS) that met Fair Use standards by permitting only one generation of copies from commercial recordings. It also exempted manufacturers from prosecution for infringement in exchange for a royalty on recorders and blank media that would compensate copyright holders for presumed piracy losses. Surprisingly, the act did little to hobble large-scale bootlegging operations, because it excused high-end, professional-quality DAT recorders from the SCMS requirement.
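For readers wondering how a "one generation of copies" rule can be enforced by a recorder at all, here is a minimal sketch in Python of the copy-status logic that SCMS is built around. The status names and the record() function are illustrative assumptions for this article, not the actual bit-level subcode defined in the standard.

UNLIMITED = "copy-permitted"   # unprotected material, e.g. the user's own recordings
COPY_ONCE = "copy-once"        # commercial releases: one digital generation allowed
NO_MORE = "no-more-copies"     # a first-generation copy of protected material

def record(source_status):
    # Returns the status stamped on the new copy, or None if the
    # recorder must refuse to make the copy at all.
    if source_status == UNLIMITED:
        return UNLIMITED       # unprotected material copies freely
    if source_status == COPY_ONCE:
        return NO_MORE         # allow one generation, then mark the copy
    return None                # refuse second-generation copies

first = record(COPY_ONCE)      # -> "no-more-copies": the copy plays, but is flagged
second = record(first)         # -> None: a consumer deck refuses a copy of a copy

A professional deck simply ignores these flags, which is why the exemption for pro-quality recorders mattered so much to bootleggers.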
Despite its attempt to effect a compromise, the AHRA ultimately gave the recording industry everything it wanted. Although DAT had once been eagerly anticipated by both consumers and audiophiles, years of delaying tactics and public disdain for copy-protection eventually ended its chances as a consumer format. Similar constraints helped put nails in the coffins of subsequent digital audio-recording technologies, such as the Philips Digital Compact Cassette and Sony's original MiniDisc.
Why MP3 Continues to Flourish
Strangely enough, it was a loophole in the AHRA that helped set the stage for today's battles over MP3 files, handheld players, and CD rippers. Devices like computers and portable digital-music players are exempt from SCMS and royalty requirements because they're not designed solely as "digital recording devices". But because of this exclusion, hardware and software vendors are subject to lawsuit should they produce products that primarily serve as pirating tools - as Diamond Multimedia found in 1998, when it was unsuccessfully sued by a record industry livid about its Rio handheld MP3 player.
After the AHRA in 1992, the next six years saw a flurry of Congressional activity, putting the final pieces of our current copyright law into place. The 1995 Digital Performance Right in Sound Recordings Act (DPRA) extended copyright enforcement to services like webcasting and digital-cable audio by establishing a "public performance" right that controls the digital transmission of sound recordings.
Passed at the urging of the anti-piracy Business Software Alliance (BSA) consortium, the 1997 No Electronic Theft (NET) Act essentially made the act of bartering pirated software legally equivalent to selling stolen goods. It also boosted maximum penalties for infringement to a staggering five years in prison or $250,000 fine (recently increased to $300,000) - even for infringements that involve only the possession or trading of unlicensed content. Although these penalties are unlikely to be imposed upon an end-user who uploads and downloads only a few pirated applications from a file-sharing service, the possibility exists.
Particularly disturbing to some was the 1998 Sonny Bono Copyright Term Extension Act, which added yet another twenty years to the duration of most copyrights. The bill passed unanimously while the nation was engrossed with impeachment hearings, and few pundits noticed that it was enacted just in time to once again rescue a key Disney mouse from the public domain. Intense lobbying was provided by interests as diverse as Time-Warner, Carlos Santana, and Disney itself, which reportedly made contributions to 18 of the bill's 25 sponsors, Senate Majority Leader Trent Lott, and the National Republican Senatorial Committee.
Even worse, this time there was no pretense about some lawmakers' desire to transform the Constitution's time-limited monopoly into a permanent one, as Representative Mary Bono proclaimed on the record, "Sonny wanted the term of copyright protection to last forever, (but) I am informed by staff that such a change would violate the Constitution… As you know, there is also (MPAA President and CEO) Jack Valenti's proposal for term to last forever less one day. Perhaps the Committee may look at that next Congress." Aside from the obvious fact that there's no practical difference between forever and "forever less one day", these statements made it clear just how far copyright law had strayed from its original intent. The Sonny Bono Act guaranteed that virtually no properties (especially those nearing completion of their copyright period) would enter the public domain for twenty more years, and some fear that Congress stands ready to continue extending copyright terms to ensure that no work created after 1923 ever becomes public property (providing a comfortable five-year cushion for even old-timers like Mickey Mouse, who first appeared in 1928).
The Last Straws--DMCA and UCITA
The digital content industry's most powerful weapon to date is the 1998 Digital Millennium Copyright Act (DMCA). This sweeping law was based on a pair of United Nations agreements known as the WIPO (World Intellectual Property Organisation) Treaties, which were in turn driven by special interests like the IFPI (International Federation of the Phonographic Industry), an international consortium that represents the global recording industry. These accords define tough new guidelines for the control of digital content, affording owners broad rights to determine whether and how their works are copied, rented, performed, and distributed. They also establish a "moral right" for non-audiovisual performers to prevent modifications to their work that might damage their reputations - a restraint that could be construed to restrict some types of satire and parody. Most importantly, they outlaw the circumvention of access and copy controls that owners insert into digital content, which in this country has been interpreted to also prohibit most links to Web sites that offer piracy tools.
Ratifying the WIPO Treaties obligated the United States to pass a law like the DMCA, but it can't force Congress to violate the Constitution. Since its inception, the DMCA has been a target for free-information advocates who argue that its toothiest provisions are unconstitutional, and too broad to accommodate the concepts of free speech, Fair Use, public access, and the public domain. They point to the example of an encrypted DVD or digital download that packages even a tiny amount of copyrighted material with publicly owned content like a 19th-Century novel, historic sound recording, or early motion picture. Under the DMCA, it would be illegal to decrypt and copy the public-domain material if doing so would mean exposing the copyrighted content.
The law is almost as restrictive when a copy-protected title contains only material that has entered the public domain, either before or after the title is published. Even in this case, it would be a crime to distribute circumvention tools that allow access to such a title, if those tools could be used to unblock other copyrighted works. Most damning, DMCA foes maintain, is the way the law virtually eliminates the concept of Fair Use by requiring rights-holders' permission to perform tasks like making personal backups, or excerpting a work for journalistic, critical, or academic purposes.
DMCA proponents reply that, like most copyright legislation, the law is merely a response to new technologies that change the way works are created or distributed. They maintain that it serves the public interest by encouraging rights owners to embrace online-distribution models and by promoting the adoption of digital media like DVD and HDTV. They also note that, although the Audio Home Recording Act legalized Fair Use home-recording, it did not obligate rights-owners to make this power available to consumers and, in any case, the AHRA has no jurisdiction over personal computers and handheld players, which are not considered digital recording devices.
Uniform Computer Information Transactions Act
Potentially even more divisive is the proposed Uniform Computer Information Transactions Act (UCITA), which was drafted in 1999 by the NCCUSL (National Conference of Commissioners on Uniform State Laws) and forwarded to each State legislature for consideration. Driven by key players like Microsoft, AOL, and the BSA (Business Software Alliance), the official goal of UCITA is to standardize software-licensing laws in all fifty states. But it also attempts to displace the concept of selling copies of software, as defined by existing copyright law, by legitimizing the practice of shrink-wrapped licensing. In this latter scenario, an instance of software is never actually "sold". Instead, it remains the exclusive property of the vendor, and customers are allowed to merely use it under the terms of a standardized click-through licensing agreement.
Of particular concern is the fact that UCITA grants software vendors unprecedented power to circumvent safeguards guaranteed by copyright, consumer-protection, and privacy laws. Under the current system, licensing terms that are deemed illegal or unfair can be challenged in court. But UCITA would give vendors the right to unilaterally enforce virtually any condition that a user accepts by clicking a lengthy agreement. It would enable software publishers to rig applications with a back door that lets them monitor licensing compliance or collect sensitive customer data. In fact, vendors could remotely shut down programs when they think an infringement has occurred, prohibit customers from suing because of defective products, create single-user licenses that must be repurchased when employees are replaced or companies merge, or deny libraries the right to lend out software or make interlibrary loans. UCITA would also allow software companies to choose the state in which failed lawsuits against them would be heard. Worst of all, because most click-through licensing occurs during the setup process, vendors would be free to impose non-negotiable terms after a purchase is made.
After UCITA passed in Maryland and Virginia, outspoken criticism from the Federal Trade Commission, consumer-advocacy groups, State Attorneys General, and library associations slowed its progress elsewhere. But what had been shaping up to be an extended battle may have been short-circuited when the American Bar Association decided in August 2001 to independently evaluate the measure - a move that observers say could result in significant revision. The final outcome is still uncertain, but with software vendors moving to subscription, rental, Web-service, and application-hosting models, the industry is likely to continue lobbying for the passage of similar measures.
The Bottom Line
No one disputes that the scope of copyright law has grown dramatically over the last century. What's at issue is whether this radical transformation was justified.
The audiocassette, photocopier, and DAT recorder all gave consumers new ways to manipulate content without rights-holders' consent. But none of these technologies created as profound a disruption as the Internet. With tens of millions of surfers freely trading copyrighted music, movies and software, it wasn't hard to convince Congress that a drastic solution was needed to prevent crippling losses to some of the nation's most powerful businesses.
As a prime and very recent example of the magnitude of the problem, pirated copies of the new Star Wars: Episode II - Attack of the Clones (likely derived from someone sneaking a DV camera into an early screening) have been downloaded from the Internet in huge numbers. Jack Valenti claimed that 350,000 films are illegally downloaded every day - and that's with only 15% of homes broadband-enabled. "With what velocity," Valenti asked, "will this avalanche of thievery roar when broadband is more widely used?"
Public-interest groups and free-speech advocates view the Internet from a different perspective, charging that its unprecedented potential for restraint of trade and invasion of privacy must be offset by adding, rather than diminishing, consumer protections. They note that Congress acted quickly to shield the public from online pornography and clandestine information-gathering, but in the case of intellectual property, it has consistently sided with content-industry special interests. As a result, they claim, copyright holders are no longer compelled to tolerate activities that would once have been considered Fair Use. Creators take for granted the fact that their properties will never enter the public domain, and a plethora of sophisticated rights-management initiatives are poised to reinvent the Internet as the most secure and meticulously controlled marketplace ever devised.
The repeated failure of copy-protection initiatives in the software industry has convinced many that the same approach won't provide lasting solutions to content providers today. Almost any anti-piracy measure can be circumvented by a determined user and, as the software and recording industries have discovered, any company in an adversarial relationship with its customers finds itself expending endless resources battling the people who buy its products. Denying consumers privileges that they once enjoyed can backfire by transforming loyal customers into crackers who rip and post content out of spite. A better approach, some say, is to follow the lead of industries that have developed workable business models that allow free access to copyrighted material. Such solutions may sacrifice a potential revenue stream and surrender some control over how properties are distributed and used. But they can produce a reasonable return, serve the public interest, and don't turn customers into pirates.
Segments of the entertainment industry claim to be experimenting with online-distribution systems, but entrenched corporate cultures and an array of technical, political, and financial hurdles have made the job a massive undertaking. Adding a sense of urgency is the fact that, should they miss this window of opportunity, even the biggest players may find themselves out of the loop. In the case of the recording industry, an increasing number of artists, including nationally known acts like Alanis Morissette and Aimee Mann, have already discovered that it's far more profitable to sell directly to fans than to sign away their rights to a record label. Nonetheless, a scalable, secure digital-distribution system that doesn't pit consumers against creators is most likely a long way off. In the meantime, there's always copy protection.
And with that, we'll be diving deep into DVD content-protection schemes in Part II. We hope this introduction to copyright law and content protection was informative and useful.
Copyright (c) 2002 Ziff Davis Media Inc. All Rights Reserved.
http://www.extremetech.com/print_article/0,3428,a=27038,00.asp
culater
OT-The Entertainment Server(May 9, 2002)
By WILSON ROTHMAN
It is five years from now and, of course, you are on the couch. You are pointing your remote control at the television set, but you are not just browsing TV channels, of which you have more than your fair share. The menu also includes virtually every song you have ever heard of, your favorite movies and series episodes, and 20 albums' worth of family photos.
Where is all this coming from? For the most part, it will be stashed right at home. Perhaps you will keep your audio and video files on your PC — or perhaps on an appliance called an entertainment server.
Entertainment servers are making their way onto the rack next to the television. These various devices — game systems, audio centers, set-top boxes and digital video recorders — have four common attributes: a microprocessor, networking capability, a graphical user interface and a huge hard drive.
New products like Pioneer's Digital Media Library and the Moxi set-top box from Digeo will soon be on the market to provide storage and easy access to your audio and video files. Meanwhile, existing products — including digital video recorders like TiVo and Replay, and music jukeboxes like the RioCentral from SonicBlue — already have the potential to perform similar media-juggling tricks.
In short, a battle for control of your living room is about to be waged by consumer electronics makers, developers of personal-computer hardware and software, and set-top-box designers that sell directly to cable and satellite providers. Now that home networking is a reality (albeit a tricky one), companies are building devices that not only store or connect to a range of entertainment choices but also communicate with one another to distribute those choices throughout the home.
Although the possibilities are broad, the challenge is clear: making the new digital experience as effortless from the couch as the old one. While developers may talk in terms of networking, interoperability, user interface and media management, they aim to create products that do not require consumers to notice any of that.
According to an estimate by IDC, a research firm in Framingham, Mass., about 2.8 million devices equipped with hard drives had made it into American living rooms by the end of last year — not including PC's. Though many of these were first-generation digital video recorders, about half were Xbox game consoles, Microsoft's Trojan horse of an entertainment server.
Microsoft's decision to put a hard drive into the device set it apart from its competitors, Sony and Nintendo. Though Microsoft says it has no immediate plans to make more of the Xbox than a game console — which, by the way, can also rip CD's and play DVD's — the foot is in the door.
"In terms of types of media it can intake and store and process, the Xbox is a PC," said Ryan Jones, an analyst at the Yankee Group, a research and consulting firm. "A PC veiled behind a killer app: video games."
Other companies also have the technology to put a networked entertainment server into your living room, but the question is, under what premise? While SonicBlue — the developer of ReplayTV, the Rio digital music players, and GoVideo DVD and VHS products — could combine all those functions in one box, it thinks that would be a mistake.
"Building a convergence product is easy," said Andy Wolfe, SonicBlue's chief technical officer. "But making it cost-effective and consumer-friendly — those are big challenges. You want to be able to explain to a customer what a product does in 10 seconds." Lowering the cost of single-application products, he said, is a much higher priority. Today, aside from game consoles, it is hard to find an entertainment server that costs less than $500.
Still, traditional arguments against convergence do not necessarily pertain. If you wanted to build a digital camcorder that also takes high-quality still images, you would have to install two image-capturing devices. But adding music-jukebox capability to a digital video recorder, or enabling DVD playback on a game console that already uses a DVD-ROM drive, is simply a software update. By this fall, TiVo customers will have the option of activating a RealOne media player, already a common feature on PC's. While the precise configuration has yet to be announced, the TiVo application is likely to provide not only streaming audio and video from the Internet, but also the ability to store and play hours of music directly from the TiVo's drive.
The convergence equation is somewhat different for cable and satellite TV providers, which already have a claim to a box in most living rooms. For starters, as far as killer apps go, they arrive at your house with 300 channels' worth. Furthermore, you do not have to buy the hardware; you rent it. Manufacturers like Scientific Atlanta and Motorola are devising boxes that offer digital video recording, integrated high-speed modems and home-networking capability.
All of this sounds familiar to Digeo, which recently acquired a set-top-box designer, Moxi. Earlier this year, the Moxi box was unveiled as a Swiss Army knife of living-room survival: an all-in-one cable tuner and modem, digital video recorder, jukebox and DVD player. If it works its way into our homes (without the DVD player), it will be as an optional upgrade from the standard cable box. Even then, what you see is not all you will get. "Other features will already be supported in the box," said Rita Brogley, Digeo's executive vice president for Moxi. "If someone says, 'Yeah, I do want cable-modem service,' they'll find out that the cable modem is already in there."
So far, while most of the devices can take advantage of home networking, few actually require it, and for good reason: even though the trend is growing, only 5 percent of homes will have any kind of network this year, according to IDC.
But some major consumer electronics companies are introducing network-dependent products. Digital Media Library from Pioneer will be able to rip CD's into MP3 format and play back through TV's and stereos connected directly to it, but you would not buy it for that. When added to a network, it becomes a powerful tool for storing files that would otherwise reside on your PC, and distributing audio and video through the house with the help of cheaper boxes, known as clients.
It sounds ominously like something only technology professionals could set up, but Pioneer and its software partner, Mediabolic, say that when it is released, either this year or early next year, the product will be as simple as any DVD player.
"We're taking an easy consumer electronics approach, instead of trying to force fit a PC solution," said Jeremy Toeman, director of business strategy for Mediabolic.
Pioneer is so confident of its ease of use, the company plans to incorporate the library and client technology into traditional products as well.
"It won't be anything experimental, like e-mail on the TV," Mr. Toeman said. "It's entertainment."
In the next couple of years, Sony plans to introduce a similar product, the Personal Network Home Storage device. A concept product demonstrated last fall had a capacity of one terabyte, or 1,000 gigabytes, which means it could hold 450 hours of DVD-quality movies, 1,500 CD's, or 600,000 high-resolution photographs. Presumably that sort of box would sit in the corner of the room or in the basement (one nickname for entertainment servers is "media furnace"). Sony also plans to offer networking in most of its consumer products, so that the content in your media furnace could be vented not only through TV's and audio receivers but also through clock radios and MiniDisc players.
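Those capacity figures are easy to sanity-check. The short Python calculation below assumes typical circa-2002 file sizes (roughly 2.2 GB per hour of DVD-quality video, 650 MB per audio CD, and 1.7 MB per high-resolution photo); the per-item sizes are assumptions made here for illustration, not figures from Sony.

TERABYTE_MB = 1000000            # 1,000 GB expressed in megabytes

dvd_hour_mb = 2200               # ~2.2 GB per hour of DVD-quality video (assumed)
cd_mb = 650                      # one full audio CD (assumed)
photo_mb = 1.7                   # one high-resolution JPEG (assumed)

print(TERABYTE_MB / dvd_hour_mb) # ~455 hours of movies
print(TERABYTE_MB / cd_mb)       # ~1,538 CDs
print(TERABYTE_MB / photo_mb)    # ~588,000 photographs

All three results land close to the 450 hours, 1,500 CDs, and 600,000 photographs quoted above.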
The reason all this is not yet on the market is that manufacturers are still searching for answers to the big questions of simple interoperability.
"There's certainly the expectation among consumers that an RCA device or a Sony device will be relatively straightforward to use," said Susan Kevorkian, an analyst at IDC.
Again, cable and satellite TV providers may offer a possible networking solution: since technicians are already coming to your house, why don't they set it up?
"Cable companies can come in and install a media server in your home," said Adi Kishore, a Yankee Group analyst. "They provide it, run it and generate revenue from it."
Come what may, you may soon start noticing network ports in the back of everything, even your DVD player and your TV. Hard drives will show up in cable boxes and other devices. Manufacturers are likely to devise new applications in the hope of establishing the advantages of home networking, without making it seem too hard. In the end, after all, what they are selling is meant to let you kick back and stop thinking for a little while.
http://www.nytimes.com/2002/05/09/technology/circuits/09HOME.html
culater
GartnerG2 Says Automakers Need to Offer Telematics User Interfaces That Are Voice-Controlled and Allow Passengers to Use the Service
GartnerG2 Analyst to Moderate Keynote Panel During "Telematics Detroit 2002" Conference
SAN JOSE, Calif.--(BUSINESS WIRE)--May 15, 2002-- As automakers try to sell drivers on the benefits of telematics, broad acceptance of telematics will depend on the release of voice-based applications accessible by both drivers and passengers, according to GartnerG2, a research service from Gartner, Inc. (NYSE:IT and NYSE:ITB).
Driver distraction as a consequence of telematics applications is leading to growing safety concerns among consumers, the government and auto manufacturers who fear lawsuits from accident victims. Many people feel that telematics features require the driver to hit too many buttons and pull their attention away from the road.
"The most promising technology for minimizing driver distraction from telematics services is the voice-based user interface. Interacting and controlling a telematics program via voice allows drivers to focus their attention on operating the vehicle," said Thilo Koslowski, lead automotive analyst and research director for GartnerG2. "It will also help the telematics industry build consumer confidence and cultivate support from government regulators and safety advocates."
GartnerG2 analysts said telematics companies are missing an opportunity to make voice-activated services accessible to all vehicle occupants, not just the driver and passenger.
In a recent survey of 1,024 U.S. adults (18 years old and older), GartnerG2 asked vehicle owners who should control and have access to telematics applications. Fifty-seven percent of respondents said everyone should have access but the driver should keep control, while 24 percent said the driver only, and 19 percent said everyone should have access without restrictions.
"Passengers have been overlooked as telematics users in providers' marketing initiatives," Koslowski said. "To increase consumer adoption for telematics services, manufacturers should focus on all potential passengers and develop specific applications that are of value to each audience, such as Web-based games for children."
"Allowing all occupants to interact with the telematics service means installing microphones and speakers throughout the vehicle, not just in the vehicle's dashboard," Koslowski said. "This represents an opportunity for audio equipment companies to enter a new market segment, such as directional speakers and microphones."
Koslowski will provide additional insight into the future of telematics while moderating the keynote panel today at this year's "Telematics Detroit 2002" conference at the Cobo Center in Detroit, Michigan, May 15-16. Koslowski will be joined by industry leaders in the field of telematics, as they discuss the future for the market.
More information on this conference is available at http://www.telematicsupdate.com/telematics2002/index.shtml
Additional information is also available in the GartnerG2 report, "His Master's Voice: Controlling Access to Telematics Services." This report examines the concerns of consumers with telematics, and it provides recommendations for automotive manufacturers.
GartnerG2 is a research service from Gartner that helps business strategists guide and grow their businesses. For more information on the report visit www.GartnerG2.com.
About Gartner, Inc.
Gartner, Inc. is a research and advisory firm that helps more than 11,000 clients understand technology and drive business growth. Gartner's divisions consist of Gartner Research, Gartner Consulting, Gartner Measurement and Gartner Events. Founded in 1979, Gartner, Inc. is headquartered in Stamford, Connecticut, and consists of 4,300 associates, including 1,200 research analysts and consultants, in more than 90 locations worldwide. The company achieved fiscal 2001 revenue of $952 million. For more information, visit www.gartner.com.
http://biz.yahoo.com/bw/020515/152017_1.html
culater
OT, but interesting - Collaborative product commerce: the next frontier. The next big differentiator for technology companies will be the ability to harness collaboration for new-product development. Many of the tools exist today.
Thought Leadership Series: Part 2
By Peter Williams and Marilyn Stemper, EBN
May 7, 2002 (9:29 AM)
URL: http://www.ebnews.com/story/OEG20020507S0026
The Internet revolution of the 1990s was just so much prologue for the next wave of business-to-business commerce: collaborative product commerce, or CPC. Electronics companies can realize significant gains in operational and financial performance by applying the principles and tools of collaboration to the design of new products.
In the late 1990s, businesses began with basic transactions online, then moved to supply chain visibility and shared forecasts. Standards organizations such as RosettaNet tended to follow, and reinforce, the same sequence with the development of industry standards. Along the way, as outsourcing gained momentum, collaboration between companies became the focus. Core capabilities were developed, and executives recognized the ability to differentiate their corporate capabilities by adopting collaborative architectures.
In the 2001 recession, businesses that had embraced collaboration often fared better than those that did not. Now, as the electronics industry's recession comes to an end, companies with collaborative supply chains and collaboration experience are best positioned to benefit from CPC.
CPC applies the same collaborative ethos and Web technology used in b2b commerce to the disciplines of new-product introduction (NPI) and product lifecycle management. It's an important area of focus because the design process is growing too complex for traditional approaches to handle.
Indeed, there are a number of factors driving the trend toward CPC:
- Efficiency. Some 70% to 80% of a product's lifecycle cost is fixed during design. Cheaper products increasingly win business, and allow their makers to retain margin and stay ahead of market- and competitor-driven price deflation curves. This in turn drives engineering cost out of the products themselves and cuts time and cost out of the processes that create them.
- Customization. Even mass-market products are increasingly customized, with more options, frequent bundling of intellectual property (IP), and smaller production runs. Profitability demands multiple products based on configurable platforms and speed and flexibility in the customization process.
- Technology evolution. Products are evolving with more and more IP being packed into smaller and smaller spaces. This is especially prominent in the semiconductor field, with ever-greater gate densities, but also in consumer devices and other equipment. The processes to integrate this IP and create the end product become increasingly complex.
- Speed. The in-market life of many electronic products is decreasing, in some cases to six months or less, so a week's reduction in time-to-market and time-to-volume may be material. The ability to get products to market rapidly becomes even more critical. Design cycles can no longer take 18 to 24 months when best-in-class companies are applying collaborative design techniques to reduce them to 11 to 15 months or less.
- Leveraging IP. Many businesses are responding to the increasing speed of product evolution by leveraging the IP they already own to avoid reinvention or re-sourcing. This requires careful cataloging and management of patents, designs, components, assemblies, etc. to maximize reuse.
- Disaggregation of value chains. The design process now has separate niches for IP owners, contract designers, solid modelers, layout specialists, integrators, and others. There is also a growing stress on “DFx” (design for manufacture, design for lifecycle cost), requiring further specialist contributions, often in parallel. Getting products to market therefore requires managed collaboration between many players based on shared processes and data.
- Exploiting the art of the possible. Industries such as defense or civil engineering have always sought to collaborate because aircraft carriers or oil refineries are too complex to build without it. What is different now is that the tools, bandwidth, access, and increasingly the standards are available to make integrated, collaborative online design and development feasible for smaller products such as PDAs and third-generation wireless phones. The fundamentals of CPC leverage investments already made in e-procurement, inventory consolidation, collaborative planning, CRM, and data federation.
The bottom line is that new-product development represents an area of immense value that, for the most part, collaboration has still to realize. In response, companies are focusing their efforts on:
- Re-engineering NPI processes to exploit the scope of CPC technologies to support more concurrent activities and manage risk with standardized process controls and automation across disaggregated engineering value chains.
- Supporting this re-engineering by integrating product data to create a single logical view of the product across all relevant players.
- Integrating engineering design with production sourcing.
- Linking parametric data with available inventory components to assess IP reuse by cost as well as design criteria.
Collaborative product commerce is the application of e-business to all product lifecycle activities, providing a cohesive framework to address issues related to product development, commercialization, and lifecycle management across the extended enterprise.
In its most fundamental form, CPC helps companies engineer both time and risk out of NPI processes by promoting process concurrency. The focus is on speed and defining a set of product introduction processes with clear stages. Each stage has a go/no-go gate to help manage risk.
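As a rough illustration of what such a stage-gated process looks like in data terms, here is a minimal sketch in Python. The stage names, gate criteria, and product_data fields are invented for this example and are not drawn from any particular CPC product.

def run_pipeline(stages, product_data):
    # Walk the stages in order; stop at the first gate that says no-go.
    for name, gate in stages:
        if not gate(product_data):
            return "no-go at gate: " + name
    return "released"

stages = [
    ("concept review", lambda d: d["projected_margin"] > 0.30),
    ("design review", lambda d: d["reused_ip_fraction"] > 0.50),
    ("sourcing review", lambda d: d["qualified_suppliers"] >= 2),
]

product_data = {"projected_margin": 0.35,
                "reused_ip_fraction": 0.60,
                "qualified_suppliers": 3}

print(run_pipeline(stages, product_data))   # -> "released"

The shared product_data record stands in for the single logical set of product data described next: every discipline reads and updates the same data, and each gate makes its go/no-go decision against it.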
The CPC framework includes modular process definitions to accommodate different development and analysis requirements for each new product, based on shared workflow and shared visibility of process, analysis, and outcomes. Equally important is the creation of a single logical set of product data (an integrated product data environment) that can be used across the full lifecycle of a product by all players inside and outside the enterprise.
Based on work with early adopters of CPC techniques, we found that average performance entitlements over a four-year benefit cycle can yield many concrete business improvements. These include:
- Cutting proposal/quoting cycle times by as much as 50%;
- Improving cycle time by up to 25%;
- Boosting first-pass yield to as high as 90% from as low as 10%;
- Raising performance-to-schedule to 95% from an industry average of 50% to 60%;
- Reducing nonvalue-added work by up to 60%;
- Cutting new part number introductions by up to 10%;
- Paring new part number introduction costs by up to 20%;
- Reducing the number of engineering change orders by up to 25%, with cycle-time reductions of up to 60%;
- Reducing manufacturing scrap and rework by up to 15%.
Significant as these benefits are, however, in some ways they miss the real point of CPC: creating better products. Other benefits include boosting the in-market life of products by reducing time-to-market and, if integrated properly with supply chain and operations management, reducing time-to-volume.
CPC also increases product reliability and reduces risks in product introduction. And from an engineering standpoint, CPC helps improve capacity, product innovation, and the effective use of IP.
CPC provides value when it is the foundation of a product realization core competency. This consists of four enterprisewide capabilities built on a common data and technology foundation. While the benefits of CPC are evident, CPC is both technologically and behaviorally harder to achieve than the fundamental elements of supply chain collaboration, and the risks of getting it wrong are higher. What follows is a discussion of the four critical capabilities and how CPC supports them.
1. Product lifecycle management
Some 46% of all resources spent on the development and launch of new products are wasted on products that either never make it to market or fail upon arrival. Of those products that do make it to implementation, between 60% and 90% do not meet customer expectations, depending on the industry. The design goals of the product lifecycle management capability are to improve these odds by employing:
- A through-life focus on the product that includes design, production ramp-up (time-to-volume), introduction to the market, in-life yield management and planned upgrades, and end-of-life management.
- Maximum process concurrency, which takes time out of the process and prevents otherwise elegant designs that prove too difficult to source or manufacture in a timely fashion or are not to the customer's liking.
- An organizational format that supports process concurrency by integrating the disciplines involved in the lifecycle (design, sourcing, manufacturing, marketing, suppliers, and customers) from the outset. The key components here are two-fold: multidisciplinary teaming and clear accountability for the combined outputs of the team.
- A clear overall process based on a set of stage gates that allow risk to be managed and controlled with clear go/no-go decisions. Companies need modular process and workflow segments within each stage gate to allow the design process to be configured to fit the product in question.
The CPC foundation provides the single logical product data set that supports concurrent activity based on shared visibility of that data, as well as supporting the workflow that defines each contribution and when it is required in the process. Procurement and SCM professionals have a critical role to play in product lifecycle management because supplier management and supply chain improvements often make up a large part of the business case for CPC.
For example, they provide up-to-date listings and catalogs of qualified suppliers and parts to other departments within the company and to supply chain partners such as contract manufacturers. This helps prevent requalification of parts and the excessive growth in the supplier base. They also manage the approved-vendor list and the design-win process.
2. Product portfolio and IP management
One important aspect of portfolio management is the competitive management of a company's products. Our research has shown that products first to market, compared with the second entrant, achieve 43% greater market share, 26% higher profits, and a 214% increase in return on investment.
Good product portfolio and IP management ensures that the total set of the enterprise's products and services supports-and will continue to support-its position in the market. Its design goals are to provide the processes, organization, and systems to:
- Support regular environment scanning (the market, customers, competitors, technology trends) and relate these findings to product requirements.
- Manage R&D priorities and measure R&D return.
- Support portfolio definition, identifying or creating key product platforms that will allow families of variants to be spun off with less investment and requiring less development time and fewer new components than creating each separately.
- Manage the portfolio (for example, by measuring ROI across the product portfolio as a whole rather than by individual product) to allow for the possibility of failure that must accompany effective innovation.
- Clearly allocate portfolio and product management roles and accountability so that they work together, not against each other.
- Catalog and leverage IP, including technology patents, assemblies, and components that may already exist. Also catalog solutions to previous problems, including such mundane things as components that have already been qualified. There are two objectives here: exploit the company's intellectual assets to the fullest possible extent, and avoid recreating assets that already exist. Best-practice companies protect IP and leverage it as a separate product or profit center in its own right.
The CPC foundation provides the tools for environment scanning and portfolio management and offers visibility into the company's intellectual assets.
3. Collaboration management
Collaboration management capability provides the strategy, process, and systems infrastructure for identifying, negotiating, and executing the partnerships the business requires. These can range from IP acquisition or access through contract design services and contract manufacturing. Key issues that are frequently the preserve of procurement and SCM professionals include:
- Choosing and qualifying partners;
- Negotiating agreements;
- Integrating partner contributions into the lifecycle process, including end-to-end design and operation of processes that span organizational boundaries with minimum handoff errors and delays;
- Managing the relationship-partner performance, metrics, returns, etc.
The CPC foundation provides core connectivity, the shared visibility of product data and workflow that supports effective partner contributions to the lifecycle of the product, and effective management of partner issues and performance.
4. Innovation management
This is the most frequently overlooked element of the product-realization core competence because it has been the least quantifiable. Innovation management includes the ability to conceive, find, evaluate, and bring to fruition ideas for new products and processes. Some businesses fail to capture the ideas. Some capture them but subject them to excessive scrutiny or hurdle rates that kill them when they might otherwise be valuable. Some have a blame culture that punishes failure and discourages risk taking. Other businesses go in the opposite direction-they fail to test and filter ideas sufficiently and pursue too many leads, thus losing momentum or coherence and wasting investment.
New-product revenue as a proportion of total revenue is a key performance measure for today's leading technology companies. The key to effective innovation is balance, which is achieved through:
- Effective envisioning and idea search processes;
- An “idea capture” mechanism;
- Balanced and effective evaluation criteria-effective in the sense that ideas that pass get senior management backing and ideas that do not are discontinued;
- A culture that supports ideation and rewards the personal risk that innovation requires. It should reward rather than penalize “honorable failure.”
Innovation is not just the preserve of new products. Innovations in the design process, the supply chain, and in component and supplier management are just as critical to the overall success of new products-and to margins-as innovations with the products themselves. Procurement and SCM professionals should therefore be as centrally involved as product engineers in innovation management.
The CPC foundation helps support innovation with the IP search and management functionality discussed above. It also supports the processes required to evaluate and filter innovative product ideas.
How to implement CPC
For all the immense benefits that it can offer, CPC is technically and behaviorally difficult to implement, and in our experience, there are a number of key strategies for success:
- Prioritize and modularize your effort and select only those areas corresponding to your vision. The full suite of CPC systems in the foundation is large and complex-implement it in stages, albeit with a clear overall vision and end goal in mind.
- Be clear on the benefits you need. Reducing manpower or cutting cost, for example, may do nothing to get your product to market ahead of the competition. Is it about cost, speed to market, or volume? IP leverage or innovation? Establish design goals, implement processes and functionality that deliver on these goals, and develop the metrics that encourage them to happen.
- The things that waste time also create cost. Re-engineer for time, and cost reduction follows. Re-engineer for cost, and you may reduce capability and end up adding time.
- Organize for concurrency. Data and workflow are useless without a clear organizational concept covering multidisciplinary teaming and accountability for outcomes.
- Organize for customer involvement as you'd organize for concurrency.
- Organizational change has long been touted as the key to success or failure for most corporate initiatives, and CPC is no different. Businesses need to overcome the “fences” that have been built between operations and engineering and between both of these and marketing. This can be difficult because engineering is an area that hasn't seen large corporate initiatives since the 1980s when concurrent engineering was in vogue. With the need to offer quarterly product upgrades, many technology companies will enter cautiously because none will want to disrupt the lifeblood of product creation.
- Don't necessarily assume a monolithic architecture that requires a huge data conversion effort and centralized control. But if you do have a more federated vision that allows data to remain on legacy systems, be sure to engineer data management and update processes that keep the different data sets synchronized and up to date.
Peter Williams, a director at PwC Consulting in San Francisco, specializes in collaborative design and CPC.
Marilyn Stemper, a PwC Consulting partner, leads the Collaborative Product Commerce practice in the western United States and also consults directly for Fortune 100 clients in the high-tech industry.
http://www.ebnews.com/story/OEG20020507S0026
culater
In Free-Music Software, Technology Is Double-Edged
By MATT RICHTEL
Imagine returning home with a bounty of pirate's booty. Upon reaching shore, you're mugged and the treasure hoisted. You turn for relief to the local constable, who gives you a swift kick in the shins and a public reading of the definition of the word comeuppance.
The analogy is far from a perfect one for what's going on with Sharman Networks, an Internet company with headquarters in Sydney, Australia. But it does help suggest why a few people are giggling when Nikki Hemming, 35, Sharman's chief executive, says she wishes that people would just leave her alone to make an honest living.
Sharman Networks distributes a piece of software called Kazaa. As Napster used to do, the Kazaa network lets people exchange music without charge over the Internet, and they are exchanging it by the boatload. Some 64 million people have downloaded Kazaa within the last year, making it more popular than a video of an Ozzy Osbourne family brunch.
For obvious reasons, the record industry despises Kazaa. All the major record labels have sued Kazaa's creator, Fast Track, a Dutch company, contending that the software is basically a tool used for wholesale piracy of music, and the industry has explored whether to include Sharman in the lawsuit, according to people familiar with the case.
But Ms. Hemming already has her hands full. She has been busy keeping people from ripping off her own bounty.
It seems that while Sharman Networks gives away the Kazaa software, it is hardly a nonprofit company. It insinuates advertising into the Kazaa network, making money each time people download songs. Sharman does not advocate that people download copyrighted files, but it says it doesn't have the means to stop them.
But now some privateers have cut down Sharman's action by making and distributing stripped-down copies of Kazaa. The software still allows users to get on the Kazaa network and exchange free music. But the software removes the ads, which means that Sharman isn't paid. "They are offering Kazaa without the things that make Kazaa commercially viable for us," said Kelly Larabee, a Sharman spokeswoman.
The people at Sharman have a powerful sense of indignity. But some people may wonder if they've fallen a little short in the sense-of-consistency department.
Then there is geography. In this case, as some pirate stories do, this tale involves the high seas of the South Pacific.
Lawyers for Sharman have sent letters to people who they believe are copying Kazaa, but those individuals have not been easy to find. One copycat, who distributes "Kazaa Lite," obscured his identity by using a Web site registered through Tokelau, a group of islands north of Western Samoa.
If you're guessing that the reason to register through Tokelau is not its rich history of tech support, you'd be on the same page as Sharman's lawyer, Judy Jennings. She said people who register domain names through Tokelau are not required to give their names. "There is an implication they're doing it on purpose so they would be hard to find," she said.
Ms. Hemming, however, has been easier to find — at least for the last two weeks. During that time, she held her first news conference. Before that, a company publicist declined to provide any details about Sharman, like its specific whereabouts.
But in her conference call with the news media, Ms. Hemming divulged that the company is registered in Vanuatu. That's a group of South Pacific islands, which, she said, offers favorable tax status. (Ms. Hemming keeps Sharman's headquarters in Australia, which has favorable status in the restaurant and standard-of-living area.)
In other words, Sharman thinks that the creators of Kazaa Lite are cravenly hiding in Tokelau while Sharman itself operates in the open in Vanuatu. Sharman doesn't like the suggestion that it has spent months being less than candid about its whereabouts.
"It's not that we were hiding," Ms. Larabee said. "It's that we didn't clarify." This distinction between hiding and not clarifying is important, with broad implications. For instance, the accounting firm Arthur Andersen might note that it didn't hide documents related to Enron, just that it failed to clarify the documents would have to be viewed in very thin strips.
But as it pertains to the music issue, what Ms. Larabee and Sharman Networks are getting at is something that many people may know already: Vanuatu is no Tokelau.
Indeed, the people at Sharman see very few parallels between their complaints over copying and those of the record industry. Ms. Hemming says she just wants to make an honest living, and wishes that people would please stop taking what is rightfully hers. She might also wish that people would quit giggling.
http://www.nytimes.com/2002/05/12/technology/12SLAS.html
culater
Now playing
Matthew Miller, Managing Editor -- 5/8/2002
CommVerge
It's been two long years since we first covered DataPlay, a new storage technology claiming to offer up to 500 Mbytes of capacity on a quarter-sized optical disk (see "Would-be king," May 2000). In that time, the company has touted support from a growing gaggle of manufacturers and record companies. But real products haven't appeared, and the technology has remained nothing more than a conversation topic.
That's about to change, with two DataPlay-based music players nearing their market debuts. The long-awaited arrivals will finally allow an evaluation of DataPlay's technology in the only forum that matters: the market.
The iDP-100, from iRiver, is slated to appear on June 30 with a suggested retail price of $369. Coming in at the same price point at around the same time, Evolution Technologies' EV-500 MDP will be unleashed through a branding partnership with MTV. Both machines play up to 11 hours of high-quality music from one of the 500-Mbyte DataPlay disks, have USB ports, feature 128-by-64-pixel LCDs, and are firmware upgradable.
Part of DataPlay's pitch is that its disks offer a compelling cost-capacity ratio. Blank disks will sell for anywhere from $5 to $12, depending on the capacity and the quantity purchased. That certainly beats flash-memory cards. However, recent audio players based on 20-Gbyte hard disks are currently going for $399, and a couple of 40-Gbyte models are available for $499.
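For a rough sense of those numbers, here is a back-of-envelope calculation, a sketch only, using just the figures quoted above; note that the hard-disk prices are for complete players, so they bundle hardware with storage, while the DataPlay figures are for blank media alone:

# Back-of-envelope storage-cost comparison using only the figures quoted above.
# Caveat: the hard-disk prices cover complete players, not bare media.
dataplay_capacity_mb = 500
dataplay_price_low, dataplay_price_high = 5.0, 12.0
print("DataPlay blank disk: %.1f to %.1f cents per Mbyte" % (
    100 * dataplay_price_low / dataplay_capacity_mb,
    100 * dataplay_price_high / dataplay_capacity_mb))
for capacity_gb, player_price in [(20, 399.0), (40, 499.0)]:
    print("%d-Gbyte hard-disk player: %.1f cents per Mbyte (player included)" % (
        capacity_gb, 100 * player_price / (capacity_gb * 1000)))
# Implied audio bitrate for "11 hours of high-quality music" on one 500-Mbyte disk:
print("Implied bitrate: about %.0f kbit/s" % (dataplay_capacity_mb * 8 * 1000 / (11 * 3600)))

On those assumptions the blank DataPlay media land at roughly 1 to 2.4 cents per megabyte, in the same range as the hard-disk players once the hardware is amortized, and the 11-hour figure works out to an average bitrate of about 100 kbit/s.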
The other major component of DataPlay's value proposition is content protection. Record labels—including Zomba Records, the home of Britney Spears and *NSync—have signed on to release music in DataPlay format. They're comfortable with it because all DataPlay-enabled devices will employ DataPlay's FuturePlayer application, which will enforce copyright protections.
Although record companies consider copy protection a plus, consumers definitely don't. They're accustomed to unfettered portability of their digital tunes, and we have our doubts whether they'll accept such restrictions—especially when the initial hardware investment is so substantial. Of course, volume production might reduce the cost of the DataPlay drive mechanism, but only time will tell.
Nonetheless, vendors are charged up about the prospects of their new devices. Chris Papazian, iRiver's marketing manager, says he is blown away by the size and indestructibility of the DataPlay disks. He also points out that record labels will sweeten the bait by filling prerecorded disks with extras like videos, pictures, and additional songs. Finally, he also notes that record companies plan to use the disks as promotional tools; they'll include additional songs that users could sample for free but would have to pay for (via the Web) to unlock in their entirety.
http://www.e-insite.net/commvergemag/index.asp?layout=article&articleid=CA216655&spacedesc=n...
culater
Access to Free Online Music Is Seen as a Boost to Sales
By MATT RICHTEL
Disputing the position held by the major record companies, a report issued on Friday found that people who use file-sharing networks to obtain music at no charge over the Internet are more likely to have increased their spending on music than are average online music fans.
The report, from Jupiter Research, a market and consumer research firm, also found that people who use high-speed Internet access and CD burners to make homemade compact discs — a practice that has been criticized by the record industry as abetting piracy — are as likely to increase their spending on music as to decrease it.
The report goes to the heart of the debate on the impact of computer and Internet technology, specifically, the file-sharing networks like Kazaa and Music City, which millions of people use to obtain music over the Internet. The record companies, which have sued to shut down several file-sharing services, including Napster, have asserted that the services cost them billions of dollars in lost sales.
"File-sharing is a net positive technology" in spurring sales, said Aram Sinnreich, author of the Jupiter report, explaining that people who download music online often are, in effect, sampling it. "It gets people enthusiastic about new and catalog music."
Last month, the International Federation of the Phonographic Industry, an international record industry trade group, reported that revenue from global music sales fell 5 percent in 2001, to $33.7 billion. Jay Berman, the chief executive of the group, asserted that one of the major reasons for the decline was "the fact that the commercial value of music is being widely devalued by mass copying and piracy," but the group did not offer a specific analysis of the phenomenon, or its impact.
The Recording Industry Association of America, the United States arm of the international federation, argues, however, that its own research backs up the claim. Geoffrey Garin, president of Hart Research Associates, a firm that the recording industry association hired to study the issue, said that in a November 2001 survey 23 percent of people from 12 to 54 said the reason they did not buy more music was that they got music for free or they were making copies of music on CD's or cassettes.
"People who love music and buy it and who also use file-sharing services would be buying more of it were it not for the availability of free music online," Mr. Garin said.
Mr. Sinnreich of Jupiter said that his research found that 34 percent of experienced file sharers had decreased their spending on music, and that 52 percent of experienced file sharers had increased their spending.
By comparison, among average Internet users who describe themselves as music fans, and who may or may not use file-sharing networks, only 19 percent said their spending increased. Roughly 70 percent of this group found that their spending had stayed constant.
The Jupiter report also found that 47 percent of experienced file sharers with broadband Internet access and CD burners increased their spending, while 36 percent decreased their spending.
One question raised by the report is why people would bother to pay for music if they can get it for free. But Mr. Sinnreich said that while the music was free, it was not problem-free, requiring users to invest time to download the music, and put up with technological glitches.
Further, Mr. Sinnreich said, users who want to listen to their music somewhere other than their computer — on a home stereo, for example, or in the car — must take the additional step of "burning" it onto their own compact disc.
"Anyone who has tried to download music from the Internet knows that free doesn't mean free — it takes time spent, energy spent, hassle, disappointing results," he said. "That's the kind of currency that teenagers have but that people with a day job don't have."
http://www.nytimes.com/2002/05/06/technology/06MUSI.html
culater
OT Digital radio developer raises $45 million
By George Leopold
EE Times
April 30, 2002 (1:20 p.m. EST)
WASHINGTON—Digital radio developer iBiquity Digital Corp. said it has raised $45 million in private equity financing to launch its AM and FM broadcast technology.
IBiquity (Columbia, Md.) is preparing to launch a digital radio service based on its in-band, on-channel (IBOC) broadcast technology. The technology has been endorsed as a U.S. digital radio standard by the National Radio Systems Committee, an industry group.
IBiquity said that 14 of the 20 largest U.S. radio broadcasters, radio equipment and automobile manufacturers, along with financial institutions, have invested in the planned service.
Regulators also have endorsed IBOC because it can use existing broadcast spectrum.
The company said it expects to launch the IBOC system in early 2003. Receivers are expected to be ready in time for the Consumer Electronics Show in January 2003.
The private equity financing was the largest ever for iBiquity. Robert Struble, president and chief executive officer, said the funding should see the company through the commercialization stage.
With the latest investor, Susquehanna Radio, Struble said that the nation's top 11 radio broadcasters are now owners of iBiquity. Other investors include Grotech, JP Morgan Partners, New Venture Partners and Pequot Capital.
Additional strategic partners include Ford Motor, Harris, Lucent Technologies, Texas Instruments and Visteon.
http://www.eet.com/issue/bus/OEG20020430S0022
culater
The wavelike trends that drive the tech economy have long histories.
Maury Wright, Editor-in-Chief -- CommVerge, 5/1/2002
How is nature like our tech industry's business cycles? Waves that take forever to build eventually crest, carrying businesses and surfers alike. We know that business runs in cycles, but, as any surfer under the pier in La Jolla can tell you, the trick is reading the waves: guessing how long one will build and how quickly it will crest.
Why the analogy? Several reasons. As an industry we didn't read the last wave, which I term the network-and-Internet wave, very well. But I'm seeing a positive trend for the next wave—and we call that wave "convergence." The real point, however, is that the wavelike trends that drive the tech economy (which today is all but synonymous with the worldwide economy) have long histories. They are far from the short-term events that many people would have you believe.
Indeed, both the computer-and-PC wave and the succeeding network-and-Internet wave took decades to mature. The convergence wave will be no different. All three started in the 1960s or even earlier and took many years to reach significant amplitude.
To document the history of convergence, we're working on a timeline that details its milestones. The first milestone on our prototype is the invention of the modem, because that was the first time the trend toward a single network for different data types reared its head. Send me an email (mgwright@cahners.com) if you can think of an earlier example.
We'll present the timeline later this year in a manner similar to the Convergence Landscape—our map of the converged network, which debuted last October. A poster-sized hard copy distributed with the magazine will precede an interactive and educational Web edition.
As part of the timeline project, we'll be making our prediction about the shape of the convergence wave. Our prototype shows the crest around 2005, and that may be early. We believe that—thankfully—the convergence wave will have a slower ramp but a much more sustained crest than did the network-and-Internet wave. And let's hope we never miss another transition the way we did the washout of the last tidal movement.
Note that my optimism is based on reason. I'm not saying that all things are rosy, but positive signs abound. Small- to medium-size companies that compete in niches have prospered of late. Some of the most prominent focus on the consumer-electronics sector, and consumer spending has remained mercifully strong in what was largely a nightmarish last year. For instance, companies that focus on DVD and digital-audio chips have done quite well.
Now the trend is spreading beyond consumer electronics. Vendors that sell semiconductor-manufacturing equipment are reporting positive signs. Even players in the LAN market are doing well. The metro and wide-area networks will follow.
A broad recovery is still a ways away. Large companies that participate in scores of different market segments won't fully recover for some time. That doesn't mean, however, that such large companies will continue to experience bad financials. As more market segments turn from bad to good, the outlook for those companies goes from negative to neutral to positive.
Surf's up.
http://www.e-insite.net/commvergemag/index.asp?layout=article&stt=000&articleid=CA214591&...
culater
OT-'Middleware' One of IBM's Bright Spots
By Alan Goldstein
May 01, 2002
http://www.newsfactor.com/perl/story/17531.html
Integration will be a perpetual problem for businesses as wave after wave of new technologies arrive on the scene.
Obscure at best to most people, IBM's business in so-called "middleware" has been one of the few bright spots in a generally gloomy picture for Big Blue.
"It's a very hot part of our business," said Steven A. Mills, senior vice president for IBM and group executive for the company's software unit.
Technology giant IBM, based in Armonk, N.Y., is known more for its services and hardware. But IBM is huge enough that Mr. Mills' software division, the company's third-biggest operation, was still a $12.9 billion business last year, accounting for 15 percent of IBM's $85.9 billion in revenue. In the first quarter, 80 percent of IBM's $2.9 billion in software revenue came from middleware products.
What's Middleware?
Mr. Mills describes middleware as the Krazy Glue that holds together all of the different computer systems in a modern business.
These days, the holy grail for many companies in their "e-business" initiatives is to link many processes together, "to let everything occur in a seamless flow," Mr. Mills said. That means connecting a series of events: A customer places an order, the system checks inventory, delivery gets scheduled and inventory is replenished, for example.
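As an illustration only (this is not IBM's WebSphere API; every function name below is hypothetical), the kind of end-to-end flow Mr. Mills describes can be sketched as a chain of steps, each handing its result to the next system:

# A minimal Python sketch of an order-to-replenishment flow; every name here
# is hypothetical and stands in for a separate back-end system that middleware
# would tie together.
def place_order(item, qty):
    return {"item": item, "qty": qty}

def check_inventory(order, stock):
    order["in_stock"] = stock.get(order["item"], 0) >= order["qty"]
    return order

def schedule_delivery(order):
    order["delivery"] = "scheduled" if order["in_stock"] else "back-ordered"
    return order

def replenish(order, stock, reorder_point=10):
    # Flag a reorder when remaining stock falls below the threshold.
    order["reorder"] = stock.get(order["item"], 0) - order["qty"] < reorder_point
    return order

stock = {"widget": 25}
order = replenish(schedule_delivery(check_inventory(place_order("widget", 20), stock)), stock)
print(order)  # {'item': 'widget', 'qty': 20, 'in_stock': True, 'delivery': 'scheduled', 'reorder': True}

In a real deployment each of those functions would live on a different machine, often from a different vendor and era, which is exactly the gap the middleware fills.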
Most corporations have a hodge-podge of computers that they acquired at different times for the variety of functions they perform inside factories and warehouses and at headquarters.
Additive Industry
But as businesses have shifted their infrastructures from mainframes to minicomputers to desktop machines, they haven't simply discarded their old systems.
"The information technology industry is additive -- not subtractive," Mr. Mills said in a telephone interview. "Companies have done centralized computing, they've done decentralized computing, and they've realized it's about federated systems. Use computers where they make sense, in different sizes and flavors. But tie the different pieces together."
IBM's primary middleware product, called WebSphere, is aimed at connecting a variety of programs on different kinds of computers, automating business processes and providing access to information for users on all of their devices.
"It's complex by its very nature," Mr. Mills said.
Integration Top Priority
Sales of WebSphere rose 53 percent in the first quarter, compared with the same period last year, marking the 12th consecutive quarter of double-digit growth.
That contrasts with overall first-quarter revenue at IBM that decreased 12 percent from the first quarter of 2001, largely as corporate customers deferred spending in a weak global economy.
Even if companies are spending less on technology in general, integration projects that require middleware remain a top priority, Mr. Mills said.
"There's no lack of interest in this," he said. "We don't see an abatement in the business."
Boom's Silver Lining
In the aftermath of the business-to-business Internet boom of a few years ago, corporate customers have grown wary of runaway software projects in which vendors over-promise and under-deliver.
"I tend to view it like fire," Mr. Mills said. "You can either heat your house with it or burn it down. Like any other tool, there are effective ways to use it to gain maximum advantage and ways that lead to disappointment. You've got to plan projects carefully, focus on rapid deployment and on near-term returns."
Newfound wariness aside, there was a silver lining to the dot-com boom, Mr. Mills said. It sparked thinking in all companies about how they could use the Internet to connect every aspect of their operations.
"The lasting legacy of the dot-com era is a widespread standard," Mr. Mills said. "You can sit in your home or office and navigate 10 million Web servers, yet you have no idea what the underlying architecture is in any of those systems. It's an amazing breakthrough."
To be sure, IBM has tough competition in middleware from companies including Microsoft Corp. and BEA Systems Inc.
Perpetual Problem
Mr. Mills said he hopes his edge comes through harnessing vast corporate resources, within the software group or in IBM Global Services, now the company's largest business unit, based on revenue.
Middleware isn't going away anytime soon, Mr. Mills said. Integration will be a perpetual problem for businesses as wave after wave of new technologies arrive on the scene. Many people believe wireless services may be the next wave.
Businesses want to spread computing power to their mobile employees, but it will be challenging to create useful systems in which information can be entered without a keyboard and viewed on a small display, Mr. Mills said.
"It creates a classic 'How do I connect?' problem," he said. "There's lots of middleware opportunity." http://www.techextreme.com/perl/story/17531.html
culater
OT-Home ownership
Steven Fyffe, Contributing Editor -- 5/1/2002
CommVerge
Driven by the conviction that entertainment applications will succeed where straight broadband access hasn't, startups, spinoffs, and established companies alike are aiming at the home. Each wants to stake a claim to what is arguably the most valuable tract of convergence real-estate: the exalted spot at the center of the living-room entertainment system.
As part of their campaign, they're touting a wide variety of AV appliances that can do it all for the entertainment-hungry consumer. Members of this new breed of home hub can play movies, store music, access interactive content, pause live TV, and even stream video throughout the house or across the country.
The contest pits the established cable industry and its favored set-top manufacturers against smaller companies that are looking for a way to break in—whether that means working with TV providers or finding a way to outflank them. At stake is not only a massive market but also control over how entertainment content will flow to and be consumed within the home.
Larry Marcus is a general partner at the venture capital company WaldenVC, where he helps manage a $270 million fund focused on broadband and enhanced-TV technologies. He believes there will be strong demand for this new breed of set-top box.
"The question is, how much are consumers going to be willing to pay for it?" he asks. "And who is going to ultimately deliver it?"
Companies that are now building these new multimedia appliances are trying to persuade the cable industry to license their technology. A single sale to a cable operator can reach millions of households in one fell swoop. But cable companies are a notoriously hard sell.
"The cable industry is very difficult to sell into because essentially it is a collection of local monopolies," Marcus says. "In the US, the cable companies buy their hardware from Motorola or Scientific-Atlanta, which is really a domestic duopoly. So you have a value chain where you are trying to sell through a duopoly to a monopoly."
Another option is to go head-to-head with the cable operators and sell these boxes direct to the public or through a retail middleman.
“Once set-top-box OEMs start putting hard drives in their products, standalone PVRs are going to disappear.”
Jay Srivatsa, iSuppli
"The battle is generally going to be around augmentation of existing devices versus a completely new device," Marcus says. "The consumer-electronics channel is going to end up being an interesting launching pad for a lot of these products."
Some companies are even trying to go one step further; they hope to cut the cable operators out of the loop by becoming content distributors themselves.
Caught in the Web
Not so long ago, the big buzz in digital home entertainment was around Internet appliances: low-cost devices that would let users send email and surf the Web on the television.
In-Stat/MDR analyst Brian O'Rourke was assigned to watch Internet appliance companies like WebTV back in the heady days of the dot-com boom. His area of coverage has shrunk steadily since then.
"The idea of strictly Internet access through your TV, that market seems to have collapsed," O'Rourke says. "People thought there would be these single-function devices, like WebTV and PVRs [personal video recorders]. But we seem to be moving into a more converged product."
Despite being bought by Microsoft, WebTV seems destined for the same inglorious death as many of its dot-com compatriots, according to O'Rourke and others. But WebTV's founder Steve Perlman made a comeback at the Consumer Electronics Show (CES) earlier this year when his new company, Moxi Digital, stepped out from behind the dummy corporation Rearden Steel Technologies.
SPLASH MAKER: Though it has since been acquired by digeo, Moxi Digital grabbed headlines at January’s Consumer Electronics Show with its design for a box including a PVR, DVD player, digital jukebox, satellite/cable receiver, and broadband modem.
Sitting alongside satellite broadcaster EchoStar, Perlman unveiled his vision for the set-top box of the future. The Moxi Media Center included a PVR, a DVD player, and a digital jukebox, as well as space for a satellite or cable receiver and a DSL or cable modem.
The trade press gave the Media Center its best-in-show award, but Moxi has suffered some deflating blows since then. Perlman was bumped from his position as CEO, and the business press reported that Moxi had burnt through most of the $67 million it had raised in its first round of funding.
License to sell
Moxi recently announced plans to merge with digeo, an interactive TV company created by Microsoft co-founder Paul Allen. It is a move that will give Moxi a ready-made market for the software and reference designs it plans to sell to cable and satellite operators; Allen is also a major shareholder in Charter Communications, the fourth-largest cable company in the US.
“People don’t want a whole stack of boxes on their TV set. What the digital cable box does is it allows you to integrate it all into one box.”
Bob Van Orden,
Scientific-Atlanta
But regardless of the new alliance, there has never been a better time to do a licensing deal with cable companies, according to Eric Roza, vice president of product management at Moxi. "The cable operators are clearly feeling competitive pressures," Roza says. "They have more urgency now than they have ever had in the past."
Cable is struggling to keep pace with the rate of innovation in the satellite industry, which has been shipping boxes equipped with PVR functions for some time.
"I have seen numbers that say about 40 percent of [satellite broadcaster] Dish Network's new installs are PVR installs," says Bob Van Orden, vice president of strategy at Scientific-Atlanta, one of the two major set-top-box makers in the US. "Last year, Dish Network signed up about 1.2 million to 1.3 million subscribers. That is a big number. So cable is going to respond."
Van Orden says the cable industry is determined to catch up with its satellite rivals. Scientific-Atlanta plans to release a cable set-top box with built-in digital video recording in early summer. Scientific-Atlanta's main rival, Motorola Broadband, has a similar box in the works, which is being built for Charter Communications.
As these new cable boxes hit the market, stand-alone, single-function boxes will die out, Van Orden asserts.
"A stand-alone box that is not connected to a network, like a cable network—we've proved there is not a viable market for those products," he says. "Look at WebTV. The founder made a lot of money, but as a consumer product it is really a flop. People don't want a whole stack of boxes on their TV set. What the digital cable box does is it allows you to integrate it all into one box."
Sonic blues
But some industry veterans say it is too early to predict the demise of the stand-alone digital-video recorder just yet.
Facing a daunting array of audio and video equipment with a large remote control in his hand, Anthony Wood says most people aren't ready to accept the idea of an all-in-one digital entertainment box. He is sitting in what he calls the "digital room" at SONICblue's Silicon Valley headquarters.
Wood knows how hard it can be to make a profit as a small company, even at the beginning of what looks like a promising trend. ReplayTV, the company he started, was a pioneer of digital video recording, but it was in financial trouble when SONICblue bought it last year.
"We are not rushing to build convergence products," says Wood, now a senior vice president at SONICblue. "I don't think that is what consumers are looking for. Most people want products that do specific things. They want a product to play music or watch TV." To some extent, Wood is ignoring his own advice. A ReplayTV with a built-in DVD/CD player is already in development, he says.
SONICblue has had modest success selling its ReplayTV boxes through the standard retail channels, but it has failed to break through to the mass market.
WHAT’S ONLINE? SONICblue’s ReplayTV 4000 features an Ethernet link to a home network and the Internet, which enables it to send shows to other ReplayTV units at home or abroad.
"It is tough, as all the PVR manufacturers out there will tell you, to put a box in a consumer-electronics store and sell a lot of them, Internet-enabled or not," says Mike Paxton, an analyst at In-Stat/MDR. "They are on the outside trying to get in. You really need another distribution channel, and that is through the [cable and satellite] service providers. You can come up with all the bells and whistles, but it is more of a curiosity than a valid product in my mind."
Other analysts say having access to the content-distribution channels, which cable and satellite operators now control, is just as important as having them distribute the box itself.
Wood and his team came up with a scheme to go around the cable operators and provide content directly to the consumer after looking at some market research.
"Our number one, most requested feature for the [latest ReplayTV unit] was broadband connectivity," Wood says.
Equipped with a broadband connection, the ReplayTV 4000 series can record a program off the air and email it to as many as 15 other ReplayTV boxes at a time. It takes about 8 hours to email a half-hour show on the lowest quality setting, but it is "still faster than mailing a video tape," Wood says.
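That 8-hour figure lets you estimate the upstream bandwidth involved. The file size below is purely an illustrative assumption (the article does not give one), but the arithmetic shows why the transfer crawls:

# Rough estimate of the upload rate implied by "about 8 hours for a half-hour show."
# The 500-Mbyte file size is an assumed, illustrative figure, not from the article.
file_size_mb = 500       # assumed size of a half-hour show at lowest quality
transfer_hours = 8
kbps = file_size_mb * 8 * 1000 / (transfer_hours * 3600)
print("Implied sustained upload rate: about %.0f kbit/s" % kbps)  # ~139 kbit/s

Under that assumption the sustained rate comes out to roughly 140 kbit/s, in line with the upstream caps of consumer broadband at the time, which is consistent with Wood's "still faster than mailing a video tape" framing.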
The broadband connection is also the key to a new strategy Wood calls iChannels. The idea is to allow ReplayTV users to order shows over the Internet, circumventing the cable and satellite networks. TV production houses could charge viewers directly for their shows, instead of having to go through the standard content distributors. SONICblue would take a cut of any fees.
Support for MPEG-4 could potentially make features such as "send show" and iChannels easier to deliver, but the current Replay product uses MPEG-2. SONICblue is evaluating MPEG-4, but at this stage the hardware and royalty costs are too high, Wood says. "We're very interested in MPEG-4 and low-bit-rate codecs in general," he says. "The big issue is cost. It takes a lot of processing power...and the hardware required to decode MPEG-4 is pretty expensive. That's the big issue, as well as the onerous licensing terms."
No matter how the company delivers them, features like "send show" and iChannels have angered the TV networks. A powerful conglomerate of entertainment companies filed a lawsuit against SONICblue over the ReplayTV 4000 series last year.
Others, like Vialta, have sidestepped the cable networks by using a much more old-fashioned content-distribution system: the US postal service.
Vialta is a subsidiary of ESS that makes a DVD player with some extras. The ViDVD player is also an MP3 player, a karaoke machine, a digital photo viewer, and a Web browser for the TV.
Mail-order movies
Artisan Home Entertainment has agreed to license its movies and programs to Vialta. Vialta plans to package Artisan's content onto DVDs and send it to subscribers through the mail every month. Some of the movies and programs will be free. Users will have to pay to access others.
"This will grow into a new alternative for distribution," says Ken Tenaglia, Vialta's director of marketing. "It is another way for the consumer to get media and another way for the content provider to distribute their media to the public."
People are used to the idea of watching a DVD, Tenaglia says, and they don't want to download TV shows from the Web.
"Most people are used to taking a disc, throwing it in the [player] and pressing play," he says. "When you are talking about delivering to the home over the Internet, you have to ask how many people are connected with broadband? And how long is the download time? Right now, the majority of households in the US don't have broadband. They don't want to change their usage habits."
New technology is often confusing for consumers. That fact has probably been the biggest obstacle to ReplayTV reaching the mass market, Wood says.
"The biggest issue in adoption is people not understanding what a product does," he says. "When you add features, it becomes even more complicated and people are even less likely to buy it."
Vialta is aiming its product, which costs about $280 to $300, at the mass market. The electronics chain Micro Center carries the ViDVD in its stores, and Vialta is in talks with other retailers to stock it.
Outside the box business
Like other companies in this space, ZapMedia has found it difficult to survive selling its own branded boxes.
ZapMedia conceived of its ZapStation at the dawn of the Napster age of digital file sharing. It could play and rip CDs, play DVDs, and download audio and video from the Web. An Intel Celeron processor provided the brains, while a DSL, cable, or T1 connection linked the box to the outside world.
But at a price of around $1300, the ZapStation was well beyond the means of the average consumer. ZapMedia tried to sell the box from its Web site, but its main sales channel was an association of home-theater installers, who built entertainment systems for rich clients.
ZapMedia also inked a deal to co-produce a box with audio-electronics manufacturer Harman/Kardon, but Chris Solomon, ZapMedia's vice president of business development, says the companies later made a "mutual decision" not to make the box because it didn't make "economic sense."
“It won’t be a case of one box for all. There will be parts of the market that aren’t ready for certain features.”
Bernadette Vernon,
Motorola
After "significant" layoffs among its small staff, which peaked at 130 employees last year, ZapMedia is quitting manufacturing and selling the ZapStation. It is selling off its inventory at the cut price of $599 per box, and will stop sales entirely after June.
Instead, ZapMedia will focus entirely on licensing its technology, Solomon says. However, no deals had been announced as of press time.
"For a lot of consumer-electronics companies like Sony or Panasonic, this is new," Solomon says. "They are not used to selling a PC-like device. It needs to act like a CD or DVD player. You turn it on and it has to work quickly and not crash. It is definitely a challenge to make a PC as stable as a CD player. My computer crashed this morning. If you are sitting in the living room listening to music, you can't have that happen."
One for all?
Even as cable operators are adding new features to their set-top boxes to compete with satellite broadcasters, demand for plain-vanilla cable TV remains strong, says Bernadette Vernon, director of strategic marketing for digital consumer gateways in Motorola's Broadband Communications Sector.
"It won't be a case of one box for all," she says. "There will be parts of the market that aren't ready for certain features. As more and more of these advanced services come out, some people will really want a PVR and others won't find it that interesting. Others will want HDTV and will be willing to pay."
That means cable operators will have to stock an arsenal of different boxes, each targeted at a different sector of the market, Vernon says. Obviously that would mean more money for companies like hers.
Some consumers may choose to stick with stand-alone products like ReplayTV, which cost more but in the future could include features like CD and DVD players that many cable companies are unwilling to build into their boxes. Cable companies are so focused on keeping the price of their set-top boxes low that they can't justify adding those features—especially when they distract consumers from the TV shows cable companies use to sell their service.
But even with added features like these, stand-alone boxes won't be able to compete with the new products cable companies are rolling out, says Jay Srivatsa, an analyst at iSuppli. The only viable long-term business model for them is to license their software to cable operators, he argues.
"That will probably be the only way for these guys to survive in the long-term," Srivatsa says. "Once set-top-box OEMs start putting hard-drives in their products, standalone PVRs are going to disappear. These small technology companies will have a hard time establishing themselves as large-scale OEMs, because there is no money in hardware. They should let OEMs work on enabling a low-cost box. As a team they could create a better box for the service providers and consumer as well."
Either way
ReplayTV's main rival in the selling of standalone PVRs, TiVo, is sanguine about the future of the market. The company has already licensed its PVR technology to satellite broadcaster DirecTV and consumer-electronics giant Sony, and is trying to woo cable operators as well.
"The road to mass adoption is probably through integration with cable and satellite receivers," says Ted Malone, TiVo's director of product and service marketing. "When you see digital video recorders show up in cable, it is likely that some of these boxes will have TiVo [technology] in them."
At the same time, Malone questions whether consumers want to deal with the complexity of an all-in-one digital entertainment box when they would probably be better off with a PC.
"We have been selling a digital video recorder for three years in the marketplace, and one thing I will tell you is that it is not lack of features that is keeping this product from being accepted," he says. "If anything, the technology is still a little intimidating to them. The road that Moxi has gone down, adding a lot of features in the box, is a non-starter....It's confusing to the customer, and only a small number of people are going to want all that functionality in a single product."
Regardless of which approach wins in the marketplace, it is a win-win situation for TiVo, Malone asserts. It is just a matter of whether most of the money comes from licensing fees or subscription fees, he says.
Even if licensing deals become the major money-maker for companies like TiVo and SONICblue, the stand-alone boxes will be the main platform for innovation, Malone says. Winning features from the stand-alone boxes can then be licensed back to the cable and satellite operators.
To a large extent, cable companies are still in control of the set-top box market, including how this new breed of all-in-one entertainment hub will evolve. But with satellite broadcasters snapping at their heels, cable companies are being forced to think outside the box. Perhaps they will even learn that in this convergence era, with the old barriers breaking down, no industry can truly stand alone.
Author information
Contributing Editor Steven Fyffe (s_fyffe@hotmail.com) switched back to old-fashioned cable because the digital set-top box took too long to change channels.
I want my MTV
Along with all the startups, established PC companies are trying to license their software to cable companies. Microsoft is one of them. But licensing software to cable operators is not the same as licensing it to PC users.
"It is a pretty complex industry, coming from the PC side," says Ed Graczyk, director of marketing for Microsoft TV. "The TV space is much more difficult. It is more than just coming out with that great product that you can sell at retail. Your customers really are the cable industry."
While the bar is higher, the rewards of a single sale can be very high as well, Graczyk says. "There is certainly a big opportunity," he says. "It's much more difficult convincing Charter Communications that your hardware and software service is better than someone else's. But if you do, you automatically reach 7 million subscribers."
http://www.e-insite.net/commvergemag/index.asp?layout=article&articleid=CA214589&pubdate=5/1...
culater
Memories of the future
Maury Wright, Editor-in-Chief -- 5/1/2002
CommVerge
At the edge of a converged network, you'll find a variety of intelligent devices that span applications from entertainment to productivity and reside everywhere from the auto to the living room. All of these nodes share some characteristics, such as connectivity and intelligence, and they all rely on some of the same key enabling technologies.
Nonvolatile memory is perhaps the most important of the enablers, even if processors get the most glory. Flash memory serves in cell phones, set-top boxes, digital music players, and a host of other devices, acting both as program storage and as a content store for music, pictures, contact lists, and many other data types.
Given the significance of flash, we decided to host a roundtable discussion on the topic. We felt that such a format might prove valuable because it would allow industry experts to pontificate on the issues directly. The summit took place only virtually—via email—but yielded a robust, realistic dialogue nonetheless.
Follow along to learn where flash-memory capacity and prices are headed, which applications will drive consumption, whether alternative nonvolatile memories will encroach upon flash markets, and other valuable insights.
CV: Because CommVerge generally focuses on convergence applications and uses that application-level focus to spotlight enabling technologies like memory, I'd like to start at the application level. Could each of you describe the three or four products that consume the largest quantities of flash memory today?
PARTICIPANTS
Philippe Berge, Director of Marketing, STMicroelectronics Memory Product Group
Bertrand Cambou, Group Vice President, Memory Group, Advanced Micro Devices
Keith Horn, Vice President of Marketing, Fujitsu Microelectronics
Bill Krenik, Wireless Advanced Architecture Manager, Texas Instruments
Brian Kumagai, Business Development Manager, Flash Products, Toshiba
Kevin Plouse, Vice President of Technical Marketing and Business Development, Memory Group, Advanced Micro Devices
Sudeep Sharma, Associate Vice President, Memory Division, Mitsubishi Electric and Electronics USA
Victor Tsai, Product Marketing Manager, Flash Products, Hitachi
Mike Williams, Director of Marketing, Flash Products Group, Intel
Bing Yeh, President and CEO, Silicon Storage Technology
Sudeep Sharma (Mitsubishi): Today the largest quantities of flash memory are consumed in cellular handsets, storage cards, BIOS flash applications for PCs, and portable electronic devices such as digital cameras, PDAs, and MP3 players.
Kevin Plouse (AMD): Cellular telephones use the bulk of flash memory today, and we don't expect that to change anytime soon. So, when we look in our crystal ball, we see a cell phone with more features that uses more flash memory. One key point is that the people who invested in 3G networks invested that money because those networks drive data, and they drive data to phones. The convergence of the phone and the handheld computer is the single largest opportunity for flash memory. The second driver is the consumer product. There are all kinds, but the ones that come to mind are music, video, and photo storage (cameras, video recorders, etc). If the cost is right, these will drive a lot of demand. Consumer appliances too—like DVD players, high-definition television—drive growing demand for flash devices.
Bertrand Cambou (AMD): And the third one is internetworking. Obviously, the dot-com explosion resulted in extensive investments in networks, and flash provided network reprogrammability. The dot-com explosion has been replaced with a dot-com collapse, but eventually the networks will be replaced. We don't know yet what they will look like, but we can assume that with the fundamental growth in data, we must keep data moving.
Mike Williams (Intel): Cellular phones consume by far the most flash, measured in millions of megabytes. Cell phones are the highest volume, shipping approximately 400 million units with a large density mix. Digital cameras would be next, not because of their volume, but due to their higher average density. The next few applications—which include networking/communications, set-top boxes and handhelds—are all smaller and comparable in consumption.
CV: Today, the flash market is clearly divided into data- and code-storage segments, dominated by NAND/AND [not and/and] and NOR [not or] flash architectures, respectively. How do these different architectures match up with the flash applications? Also, please explain the need for flash in these applications and point out the key memory-system requirements, such as capacity, speed, cost, and others.
Brian Kumagai (Toshiba): Today's primary NAND applications are digital cameras (mainly in removable-card format), game consoles/accessories, and digital audio. Today's primary NOR applications are cell phones, PDAs, and set-top boxes. NOR applications are increasingly being restricted to code storage/execution, where the density requirements are relatively small, code must be executed from flash, and write performance/reliability are not concerns.
“The people who invested in 3G networks invested that money because those networks drive data, and they drive data to phones. The convergence of the phone and the handheld computer is the single largest opportunity for flash memory.”
Kevin Plouse,
Advanced Micro Devices
Mike Williams (Intel): Flash is critical to all these applications for supplying system and application code and data storage. The best way to split the flash market requirements is between code and code+data architectures/requirements (cellular, handheld, set-top, networking, and telematics) and pure data-only storage (digital cameras and MP3 players). Code and data applications typically require high-performance reads (burst or page-mode), read-while-write capability at 66-MHz, data integrity/reliability, low power, and mid- to high-density capacity, while data-only applications require and value high density in a removable form factor.
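To put Mr. Williams's 66-MHz figure in perspective, here is the peak burst-read bandwidth it implies, assuming a 16-bit data bus (the bus width is our assumption; it is not stated above):

# Peak burst-read bandwidth implied by a 66-MHz flash interface.
# Assumes a 16-bit data bus, which is our illustrative assumption.
clock_hz = 66e6
bus_bytes = 2
print("Peak burst read: about %.0f Mbytes/s" % (clock_hz * bus_bytes / 1e6))  # ~132 Mbytes/s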
Philippe Berge (STMicroelectronics): In addition to mobile terminals (cell phones), we see PC BIOS, automotive, and digital home gateways as key markets. In mobile terminals, the key requirements are low power consumption, high-density, tiny packages and footprint, an optimized interface with the baseband processor, and the ability to combine flash with SRAM. These nonvolatile memory requirements are directly driven by more and more user-friendly application features, such as GPRS [general packet radio services] and WAP [wireless application protocol], Internet and talk-mode protocols, tri-band support, voice memos, voice recognition, predictive text input, and color displays.
Games require bigger and bigger operating systems, hence bigger and bigger nonvolatile memory from which code must be executed as fast as possible. Longer standby and talk times require a low-power supply and low operating and standby consumption. In digital home gateways, the key requirements are cost and write and programming throughput. They are driven by the following system evolution: Web navigation, e-commerce, expert systems for user profiling, and remote software downloads for things like operating-system updates and TV program guides.
CV: From a pure silicon perspective, discuss your organization's technology roadmap in the NAND and/or NOR camps. Tell me where you stand today in terms of capacity and where you expect to be in 2005. Please describe, at a high level, the techniques that will deliver on your roadmap, such as multilevel cells (MLCs) as opposed to single-level cells (SLCs).
Bing Yeh (Silicon Storage Technology): SST is aligned in the NOR camp, and we believe this will continue to be the dominant area for flash, especially for code storage, but also for data storage. Code storage requires fast access times for system boot-up and reliable byte access without the latency that is common in NAND flash. Furthermore, in low density, NAND cannot compete, as it requires massive overhead circuitry to implement.
MLC technology will bridge the gap in cost between NAND and NOR in the medium and higher densities, and we foresee a realistic roadmap to four bits per cell using MLC SuperFlash technology. Currently, SST has a wide range of capacity in the low densities from 256 kbits to 16 Mbits. We will expand into the medium densities, from 32 Mbits and up, for the coming years for the code-storage applications. We also plan to offer more than 1 Gbit per chip for the mass data-storage applications.
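A quick sketch of the arithmetic behind multilevel cells may help here: storing n bits per cell means distinguishing 2^n charge levels, and a given capacity needs only 1/n as many cells as single-level flash, which is where the cost reduction comes from (and why reliability gets harder as n grows).

# How multilevel cells (MLC) trade charge levels for cell count.
# n bits per cell requires 2**n distinguishable levels; capacity needs 1/n the cells.
capacity_mbits = 64
for bits_per_cell in (1, 2, 4):   # SLC, 2-bit MLC, 4-bit MLC
    print("%d bit(s)/cell: %2d levels, %2.0f million cells for %d Mbits" % (
        bits_per_cell, 2 ** bits_per_cell,
        capacity_mbits / bits_per_cell, capacity_mbits))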
“The anticipated largest consumers by 2005 should be cell phones, consumer electronics (digital cameras, MP3 players), networking, and automotive (including engine control and navigation systems).”
Keith Horn, Fujitsu
Sudeep Sharma (Mitsubishi): We are primarily focused on DINOR technology, a special type of NOR architecture. Relative to NOR flash, our DINOR technology offers faster random access at a lower voltage, and seven to 25 times quicker erase cycle. All of our flash-memory parts also have a BGO (background operation) function. Mitsubishi was the first to adopt the BGO function in 1997 on 8-Mbit flash. BGO can eliminate EEPROM [electrically erasable programmable read-only memory] from cellular phones since data can be read from banks while another bank is being programmed or erased.
Mike Williams (Intel): Our product portfolio is focused on NOR, not only for code but also for specifically optimized code+data requirements. We have three product lines. Our high-performance Wireless Flash, for handheld customers requiring the ultimate performance, offers a 1.8-volt (3-volt I/O option) product family with densities from 32 to 128 Mbits. Currently in production on 0.18-micron technology, we are sampling now on 0.13-micron, with a roadmap to 90 nanometers. Also on 0.13-micron, we are adding a new x32 implementation and increasing density to 512 Mbits by 2005.
Intel StrataFlash Memory is the highest-density, lowest-cost flash memory for code+data applications. Used in nearly every WinCE/PocketPC handheld, today's StrataFlash memory on 0.18-micron is Intel's third generation of MLC technology, which we originally introduced in 1997. StrataFlash is offered today in 32- to 128-Mbit densities and a 256-Mbit density later this year at 3 volts (1.8-volt I/O available). A high-performance 1.8-volt version will be released later this year, and densities on the MLC technology will reach 512 Mbits by 2005.
Industry-standard boot block (C3/B3) flash, now in its fourth generation of complete backward compatibility, is currently in production on a 0.13-micron process. This product family includes 8 to 64 Mbits, and production will continue through 2005 and beyond. In addition to continuous improvement, leading lithographies that keep us one product generation ahead of our nearest competitors, and proven multilevel cell manufacturing, we are exploring the use of four bits per cell and Ovonyx Unified Memory to expand our roadmap in the coming years.
Brian Kumagai (Toshiba): In NAND flash, we are currently in mass production of 512-Mbit monolithic SLC, 1-Gbit stacked (two-chip) SLC, and 1-Gbit monolithic MLC. In 2005, maximum density will increase to 4-Gbit SLC and 8-Gbit MLC monolithic devices. In NOR flash, our highest density today is a 128-Mbit SLC. We have plans for 256-Mbit and possibly 512-Mbit MLC devices in 2005.
Kevin Plouse (AMD): FASL [Fujitsu AMD Semiconductor Limited, AMD's joint venture with Fujitsu] is a leader in NOR technology. FASL and Intel are neck and neck for first and second position in the market. The NAND/NOR line is getting more and more blurry in terms of applications. Our customers prefer NOR but they want cost reductions.
Bertrand Cambou (AMD): One path to cost reduction is MLC.... [However,] we don't see how 4-bit MLCs can work reliably for code-storage solutions. As a result, we see the classical floating-gate technology coming to a point where it is not extendable anymore. That is why AMD took a different path with our MirrorBit architecture, which is not based on the MLC principle.... For years we have worked to develop an alternative path and now we are working full speed on MirrorBit—a technology without the compromises associated with MLC. We also recognize that MirrorBit is very expandable, even to four bits per cell. We believe that the move away from floating-gate will happen and our conviction is strong that we are engaged in a paradigm shift.
“There will be a sustainable need for code-storage flash that will be driven by the need for bigger and bigger operating systems enabling more and more user-friendly applications.”
Philippe Berge,
STMicroelectronics
Victor Tsai (Hitachi): We are a major supplier of data-storage flash with our MLC AND-type flash technology, and we are a manufacturer of code-storage NOR flash products. Hitachi recently introduced the new AG-AND multilevel flash memory cell, which gives Hitachi a technology and cost advantage over competing data-storage flash products and technologies.
Keith Horn (Fujitsu): We currently offer only NOR flash. However, our Multi Chip Package lineup will continue to provide both NOR and NAND flash. The company's flash roadmap offers an impressive range of densities (2 to 128 Mbits) and voltages (5 to 1.8 volts), and we have a well-established reputation for advanced packaging methods.
CV: Is there the possibility that the flash industry might consolidate toward a single type of flash architecture? For example, could NAND flash be augmented with DRAM cache and control circuits that would allow code-storage applications to leverage the low-cost, high-density benefits of NAND flash? Or, are there breakthroughs in the NOR world that can ramp capacities and lower costs to compete with NAND flash?
Keith Horn (Fujitsu): The disadvantage of NAND flash is its reliability. Some applications simply cannot risk reliability issues and will be forced to continue to utilize NOR or NOR-like flash. However, the production of multibit cell flash product will allow higher NOR-like reliability with pricing that is more in line with NAND flash.
Bing Yeh (Silicon Storage Technology): NAND- and NOR-type applications and specs are quite different, and both types will coexist forever. Four-bit-per-cell MLC will provide a great challenge to NAND flash in terms of cost. However, because several large Japanese companies have focused on NAND flash, there will continue to be some NAND flash market inertia. So, regardless of what arguments technologists might make about whether NOR or NAND is technologically better, NAND will continue to play a role in the high-density flash market. In the embedded and mainstream code-storage markets, however, NAND will never penetrate. We see clear evidence of this, as NAND vendors have followed a DRAM model in the manufacture of NAND flash, pushing products into higher and higher density and not even offering NAND devices anywhere below 64 Mbits, which is the domain of code storage today.
Victor Tsai (Hitachi): There may be a point in time where there would be a convergence of data-storage and code-storage flash. Data-storage flash is generally more cost-effective than code-storage flash. Hitachi has just introduced the superAND flash product, which incorporates some NOR-like features, including power-on read for system boot-up and 100 percent good memory without error handling and memory management by the host CPU. This is the first crossover product that can satisfy both data-storage and code-storage needs in a system.
Sudeep Sharma (Mitsubishi): We don't see the NOR and NAND types of flash-memory architectures converging. However, new flash-memory architectures may be developed to handle both types of applications.
Mike Williams (Intel): We believe it will continue to fragment. Application requirements are diverging rather than converging. We see this today in multiple line-item offerings on our silicon and numerous stacking configurations requested. Additionally, our long-term strategic alignment with our top customers indicates continued diversification. One size certainly does not fit all.
"Application requirements are diverging rather than converging. Our long-term strategic alignment with our top customers indicates continued diversification. One size certainly does not fit all.”
Mike Williams, Intel
And let me correct a potential misconception with our Intel StrataFlash memory on leading-edge lithographies. We believe we do compete with NAND on a cost basis. The question isn't about cost per se, but about the price at which a manufacturer is willing to sell that flash device. Currently, NAND manufacturers are pursuing a very aggressive pricing strategy to make up for what we believe is an inherent mismatch with the system requirements in a code or code+data environment (bad blocks, error correction, read speeds, increased system memory, etc).
CV: Mike, is it your point that NAND manufacturers have cut prices to artificially low levels to gain entry into code or code+data storage applications, and that some buyers will deal with mismatched characteristics like slow read speeds to buy the lower-cost flash? And when you say you "compete with NAND on a cost basis," are you making that claim based on system costs in a code or code+data application?
Mike Williams (Intel): There were some lofty expectations for NAND growth the past few years, mainly driven by growth projections for digital cameras and digital music players. Each year, the forecasts were pushed out another year. The missed growth expectations have left the NAND suppliers scrambling to find homes for their products, and they have resorted to trying to fit their products into the traditional NOR markets. But NAND has inherent feature mismatches for these applications. For example, you cannot execute out of NAND, given the slow read speeds. Therefore, redundant memory, consuming more space and power, is required for the device to operate. NAND also requires error-correction circuitry. NAND contains bad blocks that must be managed. And the list goes on.
"In the near-term, the 'perfect' memory, nonvolatile RAM, will remain an R&D product. While some technologies appear promising, we believe the applications will be restricted."
Brian Kumagai, Toshiba
In short, there are a number of system-complexity issues when designing with NAND, and the NAND suppliers are attempting to overcome these issues by using cost as an incentive. Hence, NAND is selling at a very aggressive price today. In most cases involving both code and code+data applications, Intel StrataFlash memory offers a lower overall system cost and is much easier to use in design.
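The shape of that system-cost argument can be sketched with placeholder numbers (all prices below are hypothetical and chosen only to show the structure of the comparison, not market data): a cheaper NAND die still has to carry shadow RAM and error-correction support before code can run, while NOR executes in place.

# Illustrative-only comparison of memory-subsystem cost for a code+data design.
# Every price here is a hypothetical placeholder.
def system_cost(flash, extra_ram=0.0, ecc_support=0.0):
    return flash + extra_ram + ecc_support

nor_xip = system_cost(flash=8.00)                # code executes in place from NOR
nand = system_cost(flash=5.00,                   # cheaper NAND die...
                   extra_ram=3.00,               # ...but code must be shadowed to RAM
                   ecc_support=1.00)             # ...and bad blocks/ECC must be managed
print("NOR (execute-in-place) subsystem: $%.2f" % nor_xip)
print("NAND-based subsystem:            $%.2f" % nand)

Whether the NAND column actually ends up cheaper or dearer depends entirely on the real prices plugged in, which is precisely the dispute between the two camps.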
CV: Given the state of the market today, and developments in NAND and NOR technologies, take a look at your crystal ball and project the top three or four products for 2005. And again, please explain the key memory-system requirements for flash.
Keith Horn (Fujitsu): The anticipated largest consumers by 2005 should be cell phones, consumer electronics (digital cameras, MP3 players), networking, and automotive (including engine control and navigation systems). As cell phones offer more and more features, they will continue to require higher memory densities and smaller packages.
Mike Williams (Intel): By 2005, cellular, cameras, networking, PDAs, and set-top boxes will remain as the top markets, in our opinion, with cellular continuing to lead and handheld growth most likely outpacing the others. Telematics/GPS will also emerge as a top flash application.
Brian Kumagai (Toshiba): In 2005, NAND applications will include digital still/video cameras, cell phones (mainly for digital camera/audio/video purposes), PDAs, and set-top boxes. In all of these applications, whether the flash is used for code and/or data storage, the primary factors driving the usage of NAND are the requirements for high density and low cost. Additionally, for the data-intensive applications, the superior write performance and reliability of NAND compared with NOR is an important consideration. NOR applications for 2005 include cell phones, low-end set-top boxes, and networking/communications equipment.
Kevin Plouse (AMD): Looking into our crystal ball though, we can't forget to talk about the auto dashboard. It's small, but the fastest growing forecast is in the cockpit of the car—for entertainment and navigation. The car PC has just hit the inflection point for growth. It's been in development for eight years or more and is now becoming a standard part of the car.
“The car is a very interesting environment for us because we have been focusing on the car for a while, and you do not use substandard flash in a car.”
Bertrand Cambou,
Advanced Micro Devices
Bertrand Cambou (AMD): The car is a very interesting environment for us because we have been focusing on the car for a while, and you do not use substandard flash in a car (for example, because of the extreme temperature variation requirements). And that has been our strength.
CV: With cell phones identified as such a huge consumer of flash memory, could you further illuminate how flash is used in those devices? Digital cell phones rely on high-speed DSPs, so I know SRAM is required for code execution. Perhaps you could provide a scenario for what types of code and data are stored in different memory types, both when a phone is standing by and when a call is in progress. And describe how close this model of memory usage comes to other applications like PDAs or telematics systems.
Mike Williams (Intel): Flash memory has traditionally been used in cellular handsets to store program code used to control the operation of the device, to store data for device-tuning parameters, and to store data such as frequently used phone numbers and other personal information. Flash was adopted in these devices due to its solid-state ruggedness and high data retention—a phone can be dropped to the floor, the battery can be removed, and the information in the flash memory is retained.
Internet capable handsets, including new 2.5G and 3G phones, are driving the requirements for higher-performance and higher-density flash memory. These cellular handsets can be separated into two main processing functions: the baseband communications processor and the applications processor. Flash memory is used in the baseband unit to store program code for the traditional microcontroller device in charge of handling the specific cellular protocol. Flash memory can also be used in the baseband unit to store the DSP algorithms, as well as acting as the main memory in the event of an onboard cache miss from the integrated SRAM memory. Regardless of standby operation or active operation, the baseband processor is continually executing code from the flash device. In standby mode, approximately 1 to 3 percent of the time (depending on the actual protocol), the baseband processor must "wake up" to ping the nearest basestation in order to stay connected.
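As a rough, hypothetical illustration of the standby behaviour described above, the sketch below works out an average standby current from a paging duty cycle of around 2 percent. Only the 1-to-3-percent duty cycle comes from Williams' description; the current and battery figures are assumptions.

/*
 * Back-of-the-envelope sketch of the standby behaviour described above:
 * the baseband spends only a small fraction of standby time awake, paging
 * the nearest basestation, and sleeps the rest. All current figures are
 * hypothetical, chosen only to show how the duty cycle drives average drain.
 */
#include <stdio.h>

int main(void)
{
    const double awake_fraction = 0.02;   /* ~1-3% of standby time, per the text */
    const double active_ma      = 60.0;   /* hypothetical: executing from flash  */
    const double sleep_ma       = 1.5;    /* hypothetical: deep-sleep current    */

    double average_ma = awake_fraction * active_ma
                      + (1.0 - awake_fraction) * sleep_ma;

    const double battery_mah = 800.0;     /* hypothetical battery capacity */
    printf("average standby current: %.2f mA\n", average_ma);
    printf("estimated standby time:  %.0f hours\n", battery_mah / average_ma);
    return 0;
}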
Flash-memory requirements are exploding on the application-processor side, where flash is used to store program code for new functionality such as Web browsers, color displays, Java applets, and audio/digital data manipulation. Connecting to the Internet opens up the need for more data on the application-processor side for storing large video files, digital music files, photographs, and email.
Memory usage in a cell phone's application processor and in a PDA is essentially the same (hence the convergence of cell phones and PDAs). The industry debates whether one common multipurpose device will emerge or whether we will continue to see a variety of devices tailored for a specific need. Whether for a cell phone, PDA, or telematics, Intel is offering common building blocks, including baseband processors, applications processors, and flash memory based on the Intel Personal Internet Client Architecture—a development blueprint for wireless devices and software combining voice and data.
CV: Brian Kumagai of Toshiba seems to imply that future cell phones will have a mix of NAND and NOR flash. I assume the former will serve integrated add-on functionality like a digital camera or MP3 player, while the latter, I assume, serves to store code for the cell-phone application. Could you give me a precise picture of how you see this memory architecture applied?
“Although in many ways integration is the key to low cost, we believe that discrete flash memory will continue to be less expensive than embedded flash memory on a per-bit basis.”
Sudeep Sharma, Mitsubishi
Brian Kumagai (Toshiba): We expect both evolutionary and revolutionary cell-phone architectures utilizing NAND flash. The evolutionary architecture will utilize NAND for data (photos, audio, video, etc) and the NOR for all types of code storage. In this case, the NOR will have to be fast enough to support code execution for all processing/control functions, including the DSP, which will probably be realized by page/burst mode. Toshiba would expect the NOR density to increase at about its historical rate for this architecture, since the NAND will take over some of the previous NOR functions, such as phone-number storage. The revolutionary architecture would use only NAND combined with lots of RAM (probably DRAM). In this case, the NAND would store all of the code and data, and the code would be executed out of RAM. The smartphone and PDA-combo phone will drive the transition to the NAND-only architecture.
CV: Outside of external CompactFlash, SmartMedia, SD Card, and Memory Stick modules, will there be a sustainable need for stand-alone flash memory chips going forward? Integration is the key to low cost, and SOC [system-on-chip] is an unmistakable trend. Will flash become largely a feature integrated onto other chips? If not, describe the capacity requirements or silicon limitations that will prevent such consolidation.
Philippe Berge (STMicroelectronics): There will be a sustainable need for code-storage flash, driven by ever-larger operating systems that enable more and more user-friendly applications. Embedding flash will always remain a tradeoff of cost, footprint, and performance. Overall, flash will keep growing in three directions: standalone, embedded, and cards. Standalone and embedded flash will grow mostly for code storage. Flash cards, or flash-plus-other-memory cards, will develop as real subsystems for data storage. There is no real standard package yet, but the emergence of a standard, combined with cost-per-bit reduction, will push the market to higher volumes and value.
Sudeep Sharma (Mitsubishi): Flash memory is being integrated already and will continue to be more integrated in SOC devices. However, the density of flash memory in SOC applications will continue to be limited because of chip-size constraints, which is also related to the yield issue. Continued development of finer process technologies will increase SOC flash density, but future applications will also continue to demand more flash memory density. Although in many ways integration is the key to low cost, we believe that discrete flash memory will continue to be less expensive than embedded flash memory on a per-bit basis.
"Until a product is proven in production with real customers, it is difficult to place much faith in it. MRAM has been researched for over 30 years, yet it is still not in mass production."
Bing Yeh, Silicon Storage Technology
Bill Krenik (Texas Instruments): Texas Instruments doesn't make flash, but as a leading vendor of ICs for cellular handsets we have a vested interest in flash developments. For wireless, integrating flash memory on the same chip may result in an inflexible memory configuration, because the handset designer will need to specify the amount of flash memory to be integrated early in the design cycle, before actual memory needs are clear. This may result in excess memory, leading to a cost penalty, or insufficient memory, resulting in loss of product features or the need for added external memory.
Further, flash integration normally requires six to eight additional process reticles over a conventional digital CMOS [complementary metal oxide semiconductor] process, significantly increasing manufacturing costs. Since there are no significant performance benefits obtained by integrating flash onto the same chip for wireless, it is difficult to justify flash integration. Other options, however, such as the use of multidie packaging, may be attractive in some cases.
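Krenik's reticle argument lends itself to a simple, hypothetical cost model: the extra mask layers raise the cost of every wafer, so an embedded-flash die has to beat a plain logic die plus a discrete flash die. Apart from the six-to-eight extra reticles taken from his comments, every figure below is invented for illustration, and die-size growth and yield loss are ignored.

/*
 * Rough cost sketch of the point above: embedding flash adds extra mask
 * layers to an otherwise standard CMOS flow, raising the cost of every
 * wafer, while a multidie package only pays for the flash die itself.
 * All figures below are hypothetical, chosen only to show the trade-off.
 */
#include <stdio.h>

int main(void)
{
    const double base_wafer_cost    = 2000.0;  /* hypothetical CMOS wafer cost      */
    const double cost_per_reticle   = 60.0;    /* hypothetical adder per extra mask */
    const int    extra_reticles     = 7;       /* "six to eight" per the text       */
    const int    good_die_per_wafer = 400;     /* hypothetical yielded die count    */

    double embedded_wafer = base_wafer_cost + extra_reticles * cost_per_reticle;
    double logic_die      = base_wafer_cost / good_die_per_wafer;
    double embedded_die   = embedded_wafer  / good_die_per_wafer;

    const double discrete_flash_die = 0.80;    /* hypothetical stand-alone flash die */

    printf("logic-only die cost:        $%.2f\n", logic_die);
    printf("embedded-flash die cost:    $%.2f\n", embedded_die);
    printf("logic die + discrete flash: $%.2f\n", logic_die + discrete_flash_die);
    return 0;
}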
Bing Yeh (Silicon Storage Technology): SST is by far the world's leading, if not the only substantial, vendor of embedded-flash solutions. Our split-gate SuperFlash architecture is available as integratable intellectual property through many of the world's leading foundries. Today, dozens of blue-chip companies license SST's SuperFlash technology for integration into their own wireless chips and other ASICs on a regular basis. Since SuperFlash technology is CMOS-compatible for fab portability and scalability, and since SuperFlash offers significantly better power usage and die-size efficiency than both stacked-gate and NAND flash, SST believes this will continue to be a rapidly growing and successful market.
That being said, however, flash cannot always be cost-effectively integrated with other system functions onto a single chip, due to the additional processing and testing steps required for flash memory. SOC with flash is most effective on very small die, where most of the component cost is in the packaging, or at the very high-density end of so-called smart flash memory, where most of the SOC silicon is occupied by flash.
CV: The mixture of memory technologies on one dedicated memory chip or in a multichip package appears to be another trend in the integration story, and without doubt many convergence products require a mix of memory types. Give me your opinions on what types of mixes will be popular, including the possibility of mixing multiple flash types along with SRAM and DRAM. Explain whether a chip dedicated to a mix of memories is a good idea and, if so, how you can craft a standard product family that meets the needs of different applications.
Mike Williams (Intel): Providing one packaged memory subsystem is compelling for handheld devices due to the space savings. Today, we are stacking flash and SRAM into one package, and the possibilities are almost endless for stacking, including flash and flash, flash and logic, flash and other memory, and any of the combinations above. These combinations are driven by the memory-subsystem needs of our customers. Crafting one standard product family is not achievable due to the fragmentation discussed previously. Successful flash suppliers must strive for flexibility and quick turnaround time to meet their customers' specific needs.
Keith Horn (Fujitsu): Our lineup of multichip packaged (MCP) devices, which includes flash and SRAM or flash and Fast-Cycle RAM, will continue to lead the field in mixed-memory technology on one package. For cellular applications, this MCP device can replace multiple components, resulting in space savings. It can also offer higher densities that are not available in today's marketplace with a one-chip solution and at a reasonable cost.
"Rotating storage is not a practical memory solution for today's handsets. However, in the future, the technology may be a good fit for high-end PDAs."
Bill Krenik, Texas Instruments
Bill Krenik (Texas Instruments): In wireless-handset applications, SRAM is normally used for multiple levels of cache, while flash is used for program storage and storage of user data and system settings. Since the cache needs to be integrated with the processor and flash integration appears to be cost prohibitive for wireless, it seems unlikely that SRAM+flash products will emerge.
CV: Do you see any near-term prospects for technologies like MRAM [magnetic RAM], FRAM [ferroelectric RAM], Ovonyx's optical technology, or some other nonvolatile memory to succeed in mainstream applications? Also, can rotating storage technologies, like hard-disk drives and the new Dataplay drive, impact the market for flash modules?
Bing Yeh (Silicon Storage Technology): Until a product is proven in production with real customers, it is difficult to place much faith in it. MRAM has been researched for over 30 years, yet it is still not in mass production.
Brian Kumagai (Toshiba): In the near-term, the "perfect" memory, nonvolatile RAM, will remain an R&D product. While some technologies appear promising, we believe the applications will be restricted. For example, Toshiba is developing 32-Mbit and 64-Mbit FeRAM [FRAM], which can be used to replace NOR+SRAM in low-end cell phones. Still, none of these new technologies will reach the density or cost-per-bit of NAND flash. Toshiba plans to introduce a commercial FRAM by the end of 2002. The target density is 32 Mbits. The primary technical challenge is acceptable performance in terms of access time.
Bill Krenik (Texas Instruments): Of the advanced memory technologies you cite, only FRAM is proven in high-volume manufacturing today. FRAM is also attractive because it can be integrated with very few additional process reticles. While MRAM and Ovonyx memory are very interesting technologies, they remain unproven as real solutions for low-cost, high-volume applications.
Rotating storage, of course, looks great on a cost-per-bit basis. However, this low cost is only available for relatively large memories. As a result, rotating storage is not a practical memory solution for today's handsets. However, in the future, the technology may be a good fit for high-end PDAs.
Mike Williams (Intel): As we've discussed publicly, Intel is pursuing Ovonyx memory technology. Although it is still early in the development, the initial results look encouraging. Compared to MRAM and FRAM, we believe Ovonyx holds the best promise for delivering on the performance, densities, integration, and reliability needed for our customers. If all goes well, we would expect Ovonyx to start making an impact on mainstream applications as early as the middle of the decade. But it is premature to discuss specific product plans.
Rotating storage technology will always be an alternative in the pure data-storage area. We see this in the digital-music-player market segment today, where NAND memory is being squeezed by less-expensive rotating technologies.
CV: We’ll finish with the unpopular question. I’d welcome your views on where flash prices are headed. I’d like to discuss price for two reasons. First, low price enables convergence applications. Music players, for example, have been hampered by flash prices, although I know they’ve dropped considerably (and I know the RIAA [Recording Industry Association of America] has hit the music players harder, but that’s a discussion for another day). Second, low price has potentially negative ramifications for flash manufacturers. Moreover, the number of manufacturers making flash today is still large compared with other commodity memory types like DRAM. Is the flash market headed for a major consolidation toward a small group of major players? What characteristics of your business make you a long-term participant in the flash industry?
Bertrand Cambou (AMD): It is our belief and strategy that we need to continuously and relentlessly cut the price per bit, and to commit to our customers a cost reduction that empowers them to build higher and higher densities into their systems, thereby making flash even more pervasive.
Philippe Berge (STMicroelectronics): We expect prices to stabilize in Q2 as demand recovers and to rise in the second half of the year. As for consolidation, the high end of the code-storage market is already at an advanced stage, with very few suppliers having the proper relationships with the key customers, advanced technology, and manufacturing capacity. High-density flash devices are already coming from only three to four suppliers. Second-tier vendors are shipping devices made with lower-density, older technology. In the long term, flash technology is essential for STMicroelectronics’ SOC strategy. Flash offers ST the advantage of both differentiated and standard products. The flash-differentiated products, essentially custom configurations for high-volume applications, are key for our corporate strategic customers and give some stability to the business. The standard-product portfolio contributes by extending our customer base and providing the volume to lower our overall manufacturing costs.
Victor Tsai (Hitachi): There are many code-storage NOR flash suppliers, but the number of data-storage NAND/AND flash suppliers is relatively small. The growth rate of data-storage flash is much higher than code-storage flash, so while there may be consolidation in the code-storage flash supplier base, there is still a lot of market potential for new entries into the data-storage flash market.
Sudeep Sharma (Mitsubishi): We believe flash-memory demand will increase strongly and likely outstrip supply as the US and worldwide economy improves and as cellular handset demand increases. Mitsubishi Electric has been strong for a long time in providing a wide variety of memory technologies that can be combined to provide a complete solution.
Keith Horn (Fujitsu): Flash prices appear now to have stabilized. We have not seen prices increase yet, but they certainly are not decreasing. Low prices may eliminate some newcomers to the flash business, but established flash manufacturers will continue to thrive by implementing die shrinks and investing in new technologies such as multibit cell product. Fujitsu should be considered a long-term participant because of our joint venture with AMD and the considerable investment that has been made in our facilities.
Kevin Plouse (AMD): Flash has attracted every major memory player. Those that are the strongest will survive, the best technology will survive, the most innovative will survive. We’ve been a leader in nonvolatile memory for more than a quarter century. We’ve built a strong partnership with Fujitsu. With Fujitsu, we believe we have the best high-volume manufacturing facilities. We’ve brought a lot of innovation to the market, so we have compelling products (1055 patents filed in 2001). We have the broadest product portfolio. So, we’ve been committed, we stay committed, and our goal is to be the preeminent supplier of flash memory. We have a track record that proves we are going to be a force in the flash-memory market.
Author information
Editor-in-Chief Maury Wright (mgwright@cahners.com) experienced several nights of the roundtable while working on this project.
http://www.e-insite.net/commvergemag/index.asp?layout=article&articleid=CA214594&pubdate=5/1...
culater
Embedded Speech Recognition: Is It Poised for Growth?
By Todd Mozer
The Early Days of Embedded Speech Recognition
The embedded speech recognition market has been around about as long as the speech technology industry itself. Over twenty years ago, products such as telephones and toys came to market with recognition algorithms running on 8-bit microcontrollers. Most of these products used speaker-dependent recognition that required training, but some appeared using early DSPs that implemented speaker-independent algorithms. A U.S. subsidiary of Tomy Corporation was formed to market voice recognition products, including a telephone with single-button voice dialing and speaker-dependent digit dialing. Embedded implementations throughout the 1980's tended to be either high in cost or poor in performance.
Chip and Software Companies Focus on Embedded Speech During the 1990’s
The 1990’s saw the first public offering of a company focused on selling embedded products with speech recognition. Voice Powered Technologies (VPTI) was a pioneer in speech controlled consumer products and introduced a speaker dependent voice operated remote control in the early 1990’s. The remote was backed by one of the very early telemarketing campaigns and included a videotape on how to use it. VPTI had a string of follow on products eventually leading to a voice organizer which had reasonable market success, but not enough to save the company from eventually going out of business.
With advances in semiconductor processing technology, the first dedicated, low-cost, high-quality speech recognition ICs came into production during the mid-1990's from companies such as OKI Semiconductor (using technology from Voice Control Systems), Sensory and Hualon Microelectronics Corporation (HMC). As memory and processing power became relatively less expensive, software-based recognizers started appearing on digital signal processors (DSPs) for markets such as automobiles and telephones, with companies including ART, Conversay, Lernout & Hauspie, Sensory and Temic providing the embedded speech recognition software engines.
Attention has recently shifted toward the embedded speech space. Convergence concepts involving the Internet, PDAs, cell phones, and various media devices hold a lot of promise for speech recognition. Although successes are still few and far between, substantial hype has analysts focusing on, and new players moving toward, the embedded speech markets. Over the past few years, companies such as IBM and Philips have expanded from their large-vocabulary dictation roots and refocused on telephony and embedded applications. More recently, SpeechWorks has expanded its speech recognition efforts beyond its initial telephony segment and into embedded speech software. Almost a dozen other smaller companies have emerged around the world that now focus on small-footprint speech recognition solutions. Market researchers at firms such as Frost and Sullivan, IDC, Morgan Keegan, and JP Morgan H&Q have begun covering the embedded speech markets, and for the first time are thinking of speech recognition beyond telephony and computer dictation and are, in fact, projecting fast growth for the embedded market segments.
Market Opportunity for Embedded Speech Recognition
The embedded speech recognition story is very compelling. The user interface on electronic products has changed very little in the past 30 years. We have moved from analog to digital displays and have steadily improved the quality of LCDs, but the basic knobs, switches, and buttons have stayed the same. Meanwhile, access to data through the Internet, satellite, cable, CD-ROM, and other media has exploded, and we now have more available information than we can possibly access or organize. Being able to access this information by voice is very compelling.
Devices are getting more feature-rich and therefore more complex, but our access to the information remains primarily through manual manipulations. The size of computing devices has compressed over time and keyboards for the devices have gotten smaller and more compact, but our fingers have not shrunk. It appears that we have now reached the point where any product with a user interface will soon incorporate a voice user interface.
The speech recognition market in general has always held a widespread and intuitive appeal. We want to communicate with our products in the same way we communicate with each other. We want full-featured products that are easy to use. We want to access loads of information without navigating through menu structures or reading manuals.
Although these concepts are appealing, the markets have not yet attained their potential. Few success stories exist in the speech recognition industry overall and the embedded markets are no exception. There are no publicly traded speech recognition companies that have reached profitability, and very few of the private companies have reached this point. To make things worse, several high profile players in embedded speech recognition have gone out of business over the past year, and several of the larger, well financed players in the industry are expected to pull out in the months and years ahead.
Many of the most promising markets within the embedded speech recognition industry pose huge challenges. Despite most speech recognition vendors' attempts to work in high noise through echo cancellation, noise subtraction and other noise-reduction techniques, executives in the automotive industry say that nobody's speech recognition engine works well enough in a noisy environment. Most of the leading cell phone companies have developed their own speech recognition technology and are therefore reluctant to go outside for minor improvements in accuracy or features. Competition within the embedded software and IC space has driven prices down, making it difficult for all but the leanest and best-financed players to survive.
With the funding boom of the late 1990's, there was substantial investment in the embedded speech technology space. Hundreds of millions of dollars were spent on development and commercialization of technologies, many of which are only now coming to market. For example, Sensory Inc. acquired Fluent Speech Technologies in 1999 and has invested millions of dollars in compressing the footprint of and productizing the Fluent technologies. Sensory is only now starting to roll out development tools so external developers can create embedded applications with a very high-powered engine that combines text-to-speech with speech recognition. New noise-immune technologies, continuous digits, and large-vocabulary, small-footprint engines are about to be released and offer a substantial improvement over the state-of-the-art embedded products on the market.
What Lies Ahead for Embedded Speech Technologies?
Although the embedded speech recognition market has historically under-performed expectations, now is the best time yet for manufacturers to start implementing speech technologies. New introductions are enabling, or soon will enable, a wealth of platform-specific development tools with very high-quality, small-footprint solutions. Substantial efforts to improve performance in noise are now underway. Chip and software prices have been pushed down by competing players and will no longer be an obstacle to sales opportunities in high-volume segments. The near future holds combinations of recognition, synthesis and animation that bring multimodal I/O techniques into a small-footprint engine. The broad appeal of voice access and control has never waned, and it is becoming an excellent time for products to be voice activated.
A big part of the embedded speech business is making products easier to use and allowing information access in a convenient and safe manner. Embedded speech applications are popping up all across the automotive, home, and personal electronics markets. At the recent Consumer Electronics Show, there were over 20 products being shown across the floor using speech recognition technology. This number has consistently grown at CES for each of the past three years.
Speech recognition alone is not the solution for the future. Speech synthesis, whether through a compressed digital recording or text to speech, is a critical component of a user interface system. By combining synthesis and recognition into a common engine, technology vendors are able to create a much smaller footprint than the sum of the individual parts. The continuing mantra of personal electronics is “smaller, lighter, cheaper” so movements in this type of integration are very desirable to the product manufacturers.
The future of embedded speech technology is very exciting and will bring the incorporation and integration of new capabilities such as animated speech. Animated speech (when combined with recognition and synthesis) allows the creation of an agent or avatar that can talk, hear and look very realistic in its lip synchronization and emotional displays. Companies such as Sensory and LIPSinc are pioneering these animation technologies for a future that holds the ultimate dream of a home with animated agents hidden in every wall. These agents can pop up and announce telephone calls, or they can be told to record or play your favorite TV show!
Certainly the speech technology industry has gone through its struggles, and the embedded segment has only recently emerged as a substantive component of the overall market. The embedded opportunities are very persuasive, and some of the solutions offered today are quite compelling. The technologies are better than ever and are being priced aggressively, and manufacturers are making increasing use of embedded speech technologies.
--------------------------------------------------------------------------------
Todd Mozer is president, CEO and chairman of Sensory Inc., which he co-founded in 1994. Mr. Mozer has spent over 20 years in the field of speech technology working with high tech companies in positions of sales, marketing, product development and general management.
http://www.speechtechmag.com/issues/7_2/cover/584-1.html
culater
OT-Speech and Language Technology: Going Global, Thinking Local
By William S. Osborne
In China, Tom.com, a voice portal, provides virtually anytime, anyplace automated access to stock, entertainment and weather information in Mandarin - in a country where the presence of cell phones outpaces the number of PCs by far.
In Japan, ASIMO, a walking, talking robot, charms visitors in Japanese and moonlights as a celebrity host at public events.
In Germany, bankers at Deutsche Bank receive English language research reports from their colleagues in London and have those translated into German using machine translation so they can read them more quickly.
In the U.S., T Rowe Price plan participants can call a virtual account rep to get the account information they need. Participants can steer the conversation where they want it to go thanks to natural language understanding, rather than having to wade through lengthy menus to get what they want.
After years of expectation, speech technology is fulfilling its promise. Faster chip speeds and more sophisticated algorithms mean voice recognition is performing better than ever before. New speech-enabled applications are hitting the market as businesses and consumers realize that voice is the most natural way to access information from the Internet, mobile phones, car dashboards or handheld organizers. Voice technology may have started with desktop computers, but today, speech is making its way beyond talking to desktop computers to the various touch points of an increasingly mobile e-business world.
The question is, how can this be deployed across the globe? With all that information technology offers today, speech and language technologies are perhaps the most dependent on cultural context.
Voice technology, which for a long time has been confined to research, is now putting a natural interface on the computing environment, from end-user devices to infrastructure behind the scenes - crossing national boundaries. Worldwide spending on voice recognition will reach $41 billion by 2005, according to the Kelsey Group, a market research firm. There are several forces driving the growth:
· Companies view voice as a way to improve service from their call centers while also reducing costs. Voice recognition allows companies to use automation to serve customers over the phone, 24/7, without subjecting them to hold times or requiring people to respond to rigidly structured menus. Then there are the business savings: a typical customer service call costs $5 to $10 to support; automated voice recognition can lower that to 10 to 30 cents (a simple worked example of those savings follows this list). The market research firm Datamonitor says call center managers are seeing an increase in customer acceptance of automation and self-service, along with cost savings.
· The rise of telematics, which combines computers and wireless telecommunications with motor vehicles, provides customized services such as driving directions, emergency roadside assistance, personalized news, sports and weather information, and access to e-mail and other productivity tools. The Kelsey Group predicts U.S. and European spending on telematics will exceed $6.4 billion by 2006.
· Companies looking to voice-enable the Internet and their IT establishments, whether it's providing information to consumers through "voice portals" or allowing employees to access corporate databases through spoken commands over the phone.
· The ability to squeeze convenient speech recognition into ever-smaller devices, such as phones, PDAs and other mobile devices.
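To put the per-call figures from the first bullet in context, here is a hypothetical worked example; only the $5-to-$10 and 10-to-30-cent costs come from the article, while the call volume and automation rate are assumptions.

/*
 * Worked example of the cost comparison quoted above ($5-$10 per live call
 * versus 10-30 cents for an automated one). The call volume and automation
 * rate are hypothetical; only the per-call figures come from the text.
 */
#include <stdio.h>

int main(void)
{
    const double live_cost_per_call = 7.50;    /* midpoint of $5-$10           */
    const double auto_cost_per_call = 0.20;    /* midpoint of 10-30 cents      */
    const long   calls_per_year     = 1000000; /* hypothetical call volume     */
    const double automated_share    = 0.40;    /* hypothetical automation rate */

    double automated_calls = calls_per_year * automated_share;
    double savings = automated_calls * (live_cost_per_call - auto_cost_per_call);

    printf("annual savings: $%.0f\n", savings);
    return 0;
}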
This is happening not only in the U.S. but across the globe. For the most part, companies looking to deploy voice face a lot of similar issues. They want to know what business applications will bring more value to their customers and set them apart from their competition. Of course, the underlying question is, is the technology available in their language?
But because the goal of speech is to put a natural interface across technology and lower communications barriers, language is not the only consideration speech providers need to bear in mind. Cultural context is key. Speech providers need to consider not only whether specific applications will transfer across borders, but also how people in a country are likely to ask questions or request services, demographic variations, and what type of technology they are likely to warm toward. This can differ not only by country, but also within regions in a country.
Take something as basic as demographics. Even in a largely English-speaking country like Singapore, English accents vary. Data collected for automatic speech recognition show that older Singaporeans, influenced by a largely British education system, speak with a UK English accent. Younger ones, who've grown up on MTV and American movies, lean toward U.S. English.
Another factor is language input. Because of the difficulty of entering Chinese characters on a keyboard, Chinese software developers have brought to market applications using a combination of dictation, keyboard and pen input. Having the option of using more than one input method gives users added flexibility and convenience.
Apart from language, we also need to consider the extent to which technology is embraced. Go to Akihabara in Japan and you'll find a proliferation of devices and gadgets. Japanese teenagers, and even adults, are glued to wireless services for communication, entertainment and social interaction. Speech-enabled toys and games are a natural fit. A video game called Seaman, for example, has players interacting with a character that looks like a cross between a man and a fish. You talk to it; it talks back and asks you pointed questions. To the average American, the game might seem slow, even tedious.
Another example is ASIMO, Honda's "humanoid robot", which the company rents out at functions and events. The robot speaks Japanese, walks and understands voice commands and can be controlled by an operator tens of yards away by voice. It also knows which direction to face when a person talks to it. Honda says this is one step toward building robots of the future that "work in harmony" with people.
In Europe, especially Northern Europe, where countries have a much higher rate of wireless usage, telecommunications companies are looking to deploy value-added, Internet-related services to keep and grow customer loyalty. Apart from Web-related, wireless transactions, companies are interested in applications including services such as personal dialing assistant, where you can connect calls using your voice.
Businesses are also keen to deploy mobile workforce applications, for instance, where salespeople in the field can access price lists, order information, transact wirelessly and by voice or use multimodal applications. In Europe, more than anywhere else, there is a pent-up demand for transactional capabilities using both voice and, in the future, multimodal devices. This way, salespeople can ask for product specifications and have the information returned to them as a graphic. What companies want are robust voice applications for constant business use.
The combination of voice and machine translation technology should not be overlooked. In the coming years, machine translation will bring an added dimension to speech as the two are coupled together. Taken together, speech and language technologies are set to grow, and have the potential to bring business value and differentiation to companies worldwide.
Deutsche Bank already deploys machine translation to do gist translations of research reports and emails they receive from their English-speaking counterparts. Gist translation gives its readers enough accuracy to understand the meaning of the document, without the added polish of text edited by a human translator. The bank’s staff feel they are more productive reading the material in German than English much of the time. Taking this further, European developers and service providers are looking at how to offer machine translation over wireless networks and devices - and what types of services the public, as well as corporations, would subscribe to.
In the U.S., businesses are voice-enabling their call centers to perform simple, repetitive tasks so that their live agents can be put to more complex, value-added work. Natural Language Understanding (NLU) technology has helped participants in T Rowe Price's system, for example, to be receptive to using voice for simple inquiries. NLU not only allows end-users to speak naturally to the system rather than be bound by menus, it also enables the system to understand context. After you ask, for instance, for the price of the Franklin Templeton Growth Fund, you can ask for the objective of "that" fund and get your question answered.
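A minimal sketch of the context carry-over described above (and emphatically not T. Rowe Price's actual system): the dialog state remembers the last fund mentioned, so a follow-up question about "that" fund resolves to it. The fund name and the two intents are illustrative only.

/*
 * Minimal sketch of dialog context: once a fund has been named, a follow-up
 * question about "that fund" resolves to the same fund. Fund names and the
 * two intents are illustrative assumptions, not any vendor's actual API.
 */
#include <stdio.h>
#include <string.h>

struct dialog_state {
    char last_fund[64];              /* most recently mentioned fund */
};

static void ask_price(struct dialog_state *s, const char *fund)
{
    strncpy(s->last_fund, fund, sizeof s->last_fund - 1);
    s->last_fund[sizeof s->last_fund - 1] = '\0';
    printf("Looking up the price of %s...\n", fund);
}

static void ask_objective(struct dialog_state *s, const char *reference)
{
    /* Resolve the anaphor "that fund" against the stored dialog context. */
    const char *fund = (strcmp(reference, "that fund") == 0 && s->last_fund[0])
                           ? s->last_fund
                           : reference;
    printf("Looking up the objective of %s...\n", fund);
}

int main(void)
{
    struct dialog_state state = { "" };
    ask_price(&state, "Franklin Templeton Growth Fund");
    ask_objective(&state, "that fund");   /* resolves to the fund named above */
    return 0;
}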
There is no doubt that businesses here and abroad are looking to speech technology to help them do business more efficiently, save costs, and offer better customer service. Speech technology has come a long way, and has become a practical way to implement a range of applications. Base technology is becoming more robust, with improved algorithms and chip speeds, and with devices having as much power as the laptops of yesteryear.
Speech will increasingly be part of critical applications, and has the potential to become the means by which large segments of populations access technology – naturally and easily. The industry now needs to focus on making the interface more natural, tailoring it to fit specific cultural differences. Only then will speech move rapidly across cultural and national boundaries and become truly pervasive.
Ozzie S. Osborne is general manager of IBM Voice Systems.
http://www.speechtechmag.com/issues/7_2/cover/580-1.html
culater
OT-Upwardly mobile: HP and mobile e-services
Special Guest Star: John L. Chapman, Director of Strategy and Business Development, e-Services Solutions, Hewlett Packard
April 2002
We had the great privilege of spending some time speaking with John L. Chapman, HP's director of strategy and business development, e-services. Chapman has over seven years of experience working with the telecom sector developing partnerships for enterprise customers. At the same time, he has been working on his doctorate in business at the University of Georgia.
Chapman discussed how wireless impacts the strategic development of e-services, something he has been thinking a lot about over the last two years with his involvement in several strategy development projects at HP. "My group is set to be the vanguard in leading these technologies," weaving together "a little bit of the think tank aspect to it with the real world perspective from our partners."
Chapman cited HP boss Carly Fiorina's view that "we are entering the era of services." Chapman said this is true for mobile services. As well, Chapman recalled Bill Gates' theory of exponential change and his belief that "the next ten years will be more profound than the last fifty."
Chapman said that it is his belief that we are in the midst of a third industrial revolution - the first was in the 1740s; and the second in the mid-19th century when modern railways and communication facilities emerged. "Today we're in the third industrial revolution and the macroeconomic statistics bear this out. Economists who study growth can no longer capture the change, it is happening so fast. It's hard to imagine productivity - and mobile and wireless investments are right in the fulcrum of these. It is really an exciting time!"
Chapman said that as a consequence of this period of revolutionary industrial change, the last two years have "been a roller coaster in our industry!"
And, at HP, he said, "We are making major [wireless] investments to be, in essence, a better company - and our ability to succeed as a company in the future depends on how well we do in the mobility story." Chapman added, "Either grow in mobile e-services, or die!"
He elaborated: "The U.S. economy, and indeed the world, is at an historic inflection point. There is a boom in productivity, lower cost structures, and hyper-competition is at hand. The era of digital communications-based services has begun. Companies must come to excel in mobile e-services solutions offerings, or die a slow death in the marketplace. In spite of all the hype - in the next 5 to 7 years, with the advent of 3G networks and as the ability to provide mobile e-services grows with the enhanced mobility, wireless will become a key component of any company's value chain, a key topic or strategy for the development of any company."
That's heavy stuff! But also encouraging, given the tumult of this past year.
And in spite of the tech collapse that started in the spring of 2000 and "never really ended," Chapman observed that "as far as we've fallen now, we still see the viability of the leverage potential of mobile and wireless technologies." Growth, he said, will depend on the ability of companies to leverage "intangible and information based services which are seeing profit margins grow. Just like they used to say at IBM, 'Sell the hole, not the drill!' We want solutions pre-fabricated. In our own industry and several related industries - for example, even in the dishwashing industry there will ultimately be an embedded chip in that dishwasher for routine remote maintenance and service - it will become augmented products with services that are information-based. We think this is an inexorable trend, and should be considered as you develop your strategy."
Referring to economist Joseph Schumpeter, who Chapman said described capitalism as the embodiment of a perennial gale force of economic destruction, Chapman explained that transaction costs are ultimately reducible to knowledge costs. "The specific and direct application of mobile and wireless technology is obvious - with information at my finger tips, I lower the cost of useful and relevant information."
"When I see a billion cell phones world wide, the market for wireless services will be huge! Though the studies are all over the map - and could be true or false within a factor of 40 to 50 percent - we are nonetheless going to see huge growth in mobile. We are going to see high growth rates in spending in all things related to mobility. The growth of overall cellular and wireless technology will continue to be voiced based, but we will see a ramp-up of mobile data services too."
Chapman said that at HP, "We drew up a list of how Internet related technologies and especially mobility will change the various parameters inside companies - one of the equations of market power will shift toward consumers from producers. Obviously, mobility will have a big impact on how organizations are managed and governed. And one thing to say about wireless - barriers to entry are lowered in all ways. Wireless lowers knowledge costs across the board."
Key success factors that Chapman believes companies of the future in all industries will have to get right are: "intangible assets, speed and agility, degree of networked connectedness, customer focus, and knowledge differentiation."
Chapman said HP is practicing what it preaches - finding itself at what he called "the unique intersection of always-on infrastructure, appliances and mobile e-services. Over the next several years, a lot of companies will position themselves to act as the glue to bring it together - providing end-to-end infrastructure, apps and ubiquitous access to the next generation of services over a global communications network."
Chapman said that "HP envisions a new mobility experience. Where we are headed is providing a very solid offering or platform for mobile services, so every device, every human being has a URL, a registered place on the Internet. As an individual moves from place to place with his appliance, a range of context-specific offerings are made known to that device and delivered in proactive manner to the user. For instance, you can take your kids to an NFL stadium, walk into it - as a destination, it is a closed environment, and it will form an envelope of services dealing with that specific experience."
Chapman asks, "What will be the impact of wireless technology as they move through our value chain? What will be the possibilities of wireless?" To assess this, Chapman said, you need to ask, How does or could wireless affect the business' eco-system? What will wireless and mobile technology do inside my company? Specifically, he said, the first focus should be on customers, then prospective customers. Do technical issues map with business drivers compelling this technology? The big issues, Chapman said, are localization such as context-specific services, personalization, and immediacy. "Ultimately, everyone will have a Web presence, and services will ping off these: localization; personalization and immediacy."
To illustrate localization, Chapman talked about one of HP's largest customers, GM, and its telematics service. "GM is making huge investments in telematics. This will be a huge sector, worth multiple billions of dollars in its own right."
To illustrate immediacy, Chapman discussed online trading such as that provided by e*Trade. And to illustrate personalization, he discussed context-specific services in a closed environment destination like a sports arena.
Chapman said "you need to think through the degree that mobility matters" before you make your own investment in the technology. "I always encourage people to do WCGW analysis: What can go wrong? It's a cost-benefit analysis. Clearly, in the next few years, bandwidth is going to explode - in Japan this will happen next year, but here in the U.S. it will happen after that. At HP Labs in Palo Alto, we're doing pure R& D - and we made a three year commitment with NTT DoCoMo to develop prototypes for 4G protocols. This is the world we are heading to a few years away - where you can download the whole movie version of Gone with the Wind."
Some issues to consider in preparing for this higher bandwidth future, are: "Bandwidth, physical form factor, application 'content' and serviceability, security, and supporting software and middleware. These issues map to specific applications such as e-mail, SMS/paging, Internet and extranet access, broadband/multimedia, file transfer, and so forth."
To avoid common pitfalls faced by early adopters, Chapman recommends that you consider what he calls "the strategic relevance - if you do this clearly, you avoid 99 percent of the problems early adopters face, who invested money in wireless technology for the sake of it. Enunciate the strategic relevance of wireless technology to your business model." Chapman added, "design and plan for user friendliness and security." And don't forget that "billions of dollars in legacy apps must be extended to wireless - this is non-trivial, and middleware technology firms will make a lot of money in the years ahead; this is a very complex problem!" Chapman also advised to "ensure bandwidth is enough," and to "beta test on a limited basis" before deploying. "It's not a bad idea to go through a list of the apps you are trying to bring up and map them to your business needs. Ultimately, the marketplace, the technology sector, will be guided by how this hand plays out, and whether we will see this so-called hockey stick, a real explosion in wireless, or not."
Chapman said to use the 5-C scorecard, measuring Context, Clarity, Content, Coordination and Cost: "Understand the big picture, understand your scope, prioritize and understand the most compelling content you want to deliver, understand the wireless touchpoints, what are the implications for your other technology investments? And lastly, understand the element of cost - both hidden and not hidden, try to capture them all. This is the headache piece of it, there's no easy answer to this, no quick and easy - it is much harder and important to go through the exercise."
Chapman presented four detailed case studies of wireless deployments. He started with E*Trade, which "built themselves the largest online financial institution, showing terrific growth - and were until recently profitable. It's a very successful story; they have managed to build a very credible brand name in their sector. Last fall they announced, with the greatest of fanfare, the extension of various E*Trade services to mobility, making a big splash into wireless by supporting all WAP phones. They had several thousand customers sign up and utilize their wireless service - offering wireless banking and brokerage services."
Chapman said, "It is indisputable that this is where the e-banking and financing industry is heading today - it is all going to wireless." Chapman used the 5-Cs to grade E*Trade. "From context - they really nailed it; they studied what they wanted to do and hit it - they understand where their industry is going. For coordination - enabling wireless enablement and connecting to existing wireline services was well thought out. For cost - their decrease in call volumes saves money. For clarity and content - as an early adopter, there are certainly limitations with physical form factor and bandwidth in what we can do in the wireless mode. E*Trade knew that in the long run this is where this thing is headed, so it was important to throw money at it and get into the game. It makes customers happier and more loyal."
Chapman also examined the case of Citibank in Latin America. He noted that Citibank is the largest financial services company worldwide, with a huge presence in Latin America and millions of customers. Wireless, Chapman observed, helped Citibank deepen its customer relationships by offering a full range of transaction services while keeping the quality of the environment as high as the technology will allow: "Everyone else in the industry is going to go this way; we ought to be a leader, we ought to be first! Wireless brings the ability to serve a full range of customer needs."
Citibank realized, however, "that the market was not mature enough - they really wanted to move forward rapidly to where they know the industry is headed - but they did a What Can Go Wrong analysis and decided not to rush head-on and offer everything in a One Stop Wireless Shop. Instead they decided to invest in proven services that were already successful in other markets. This allowed Citibank to remain on the cutting edge of technology while avoiding the problems of unready technology - stepping into this incrementally on a limited scale and managing its investment as it moves forward, selecting just a few applications - including the ultimate killer app: email and instant messaging." Customer acceptance, he said, has been very high.
Chapman noted how Citibank scaled back its investment "after a close examination of the costs and limitations - including physical form factor and bandwidth issues. I like the manner with which they stepped into wireless."
A third case study presented by Chapman was Driver Net, which provides drivers with services such as two-way communication, dispatch and tracking, load matching and low-cost driver communication. "This should be a guaranteed success, right? The trucking industry has a high fixed-asset base, and less-than-truckload (LTL) loads just don't make money. It is key to keep those assets loaded. This is a great concept and I have nothing but respect for entrepreneurialism and risk capital. But this would be a case of investing in wireless technology for the sake of it. In Driver Net's case, the various network operators built out their own networks in the U.S., and coverage is better in some areas than others. They went down a given technological pathway that had a less-than-reliable architecture. National wireless coverage was a big issue; don't lock yourself into a single device or protocol - there is a significant interdependence between infrastructure, applications and devices." Chapman concludes that "this company probably could have benefited from a WCGW analysis!"
The fourth and last case study presented by Chapman was Progressive Insurance, from Chapman's hometown of Cleveland. Progressive is "an insurer and also a leading-edge information company. Its key to success is the actuarial analysis of its customers, to assess risk. They were able to pick and choose their various segment populations - for instance, they found safe groups within the 18-25-year-old drivers who were lower risks, such as those who have PhDs in statistics! It's a great story, and Progressive was an early adopter and heavy investor in information technology, so they were very predisposed, when the wireless wave came along last year, to think about how they could use wireless. They marketed the Autograph - a wireless mobile app that provides GPS tracking to monitor mileage and where cars are driven, and at what time of day; enabling mobile customer access to policy information and claim services - hence providing intelligent feedback from customers. It also has bill-payment functionality."
As Chapman pointed out, "you're less likely to get into an accident driving through an Iowa cornfield than on the Interstate," and this sort of mobile feedback and location sensitivity allowed Progressive to "develop a better and better database of risk." At the same time, "Progressive saw its customer relationship improve - its personalized portal lets customers go in and see their record, their whole history of driving that vehicle and their relationship with Progressive." Chapman noted that Sprint "has joined into a partnership to offer the Autograph over their network - a stage-one development in telematics - and they are able to gather all this information on drivers to have ever more fine-tuned pricing of risk. They have made significant investments since the service was announced last fall, and the return has not been there to justify what they are spending today; there have been only a limited number of registered users since September 2000. Progressive must carefully examine the cost/value of being first - is there still a first-mover advantage (like there was with Coke as it expanded around the world)?"
Chapman noted how the "Internet is the Great Leveler," making it "less expensive to acquire information: so what is the risk of being second to market? Be careful not to let the promise of great cost savings cloud your assumptions on user adoption; make customers the center of your wireless business case." In the end, Chapman said it is "important to understand what the customer wants." The 5-Cs provide "good coordination!"
Chapman explained that justifying your investment in mobile wireless technologies requires doing the following: "it needs to be analyzed in terms of incremental revenues and costs; utilize Porter's 5Cs to gain relevant cost/benefits; start with the right goals - measuring Context, Clarity, Content, Coordination and Cost; re-emphasize clarity - what is the mobile wireless proposition?; understand the trade-offs; and don't forget that being a 'second mover' in Internet technologies is not a bad thing - with the boom and then bust and now the settling, today it is certainly less costly to be a follower!"
And, Chapman cautions, "avoid the pitfall of wireless investment for the sake of wireless investment just because it is the latest and greatest thing. Remember that a dollar invested in A can't be invested in B."
Chapman concluded that mobile wireless is indeed real and the "next big thing," so be aware of emerging benefits. "Expect the 'boom' to come in 2003 and 2004, and follow your customers' lead; don't forget, however, that business fundamentals have not been negated! GAAP accounting principles have not been negated! There are some huge and obvious benefits attendant to wireless business development. 3G and ultimately 4G are coming to the US; all the major telecom operators around the world have spent billions, and the infrastructure will be in place for quicker, faster, better networks. Planning today for tomorrow is like motherhood and apple pie: always think first about the customer or the channel player, since ultimately they are your source of revenue!"
Chapman concluded his presentation with some inspirational and thoughtful quotations from some of history's most well-known strategists:
Virgil: "Fortune favors the bold."
Sun Tzu: "Attack at dawn..."
Carl von Clausewitz: "...Concentrate your force structure so as to inflict overwhelming violence on the enemy at the point of attack."
Winston Churchill: "I felt as if I were walking with destiny, and that all my past life had been but a preparation for this hour and this trial... I was sure I should not fail."
Chapman noted that HP, like Churchill, is walking with destiny as it confronts the hard truths of the current industrial revolution. "Technology profits are changing - based on enterprise demand. E-service revenue streams must grow dramatically for the industry to prosper. From 2001 to 2005, the four-year trend is downward in hardware and software revenues and gross margins, but upward in services revenue - from 3 to 30 percent of HP's total revenues, with gross margins around the same, between 25 and 60 percent. Hardware as a portion of HP's revenue will decline from 50 to 40 percent from 2001 to 2005. Software will decline from 40 percent to 30 percent - highlighting the importance of services to HP revenues. Gross margins of both hardware and software are trending down in the industry."
Chapman's conclusion: "Increasingly, HP must look to offer e-services - and mobile e-services."
Chapman cited Bill Gates' prediction: "The Internet era now being ushered in... promises more changes in business and society in the next ten years than in the previous fifty combined." Chapman adds: "More changes are coming in the next ten years than in the previous fifty - your company must deal with this new reality, and mobility-based productivity growth is critical."
- Margins in traditional products and product-centric solutions will continue to decline - so value-added services become more of a focus and imperative.
- The lowering of transaction costs in all business activities guarantees the proliferation of new business models - so scale becomes harder to achieve, solutions with integrated services become more important, disintermediation and partnering will grow, and information-based services which offer high (but intangible) knowledge content will offer highest margins.
Chapman believes that "other industries highlight opportunities for capturing competitive control points via new technologies and processes." He cites Food/Beverage giants Coke, Pepsi and Philip Morris; Technology/Manufacturing leaders GE and Dell; Financial Services heavyweights Bank of America, Progressive Insurance, and GEICO; Pharmaceuticals giants Merck and Eli Lilly; Retail king Wal-Mart; and E-Commerce innovators Amazon, E-Bay, AOL and Yahoo.
Mobility, he adds, "drives a 'new' theory of the firm." Chapman makes some predictions:
- "Within the next decade, there will be more wireless than wireline subscribers in the world: Key applications for the enterprise include: unified messaging, location-driven e-services, and mobile commerce."
- "Key successes for the enterprise in the new economy are: intangible assets, speed and agility, degree of networked connectedness, customer focus, and knowledge differentiation. The role of e-services in the modern era underlies the dynamism of the 'new economy.' The winners from the 1990s prove both strategy and execution matter re: speed, connectivity, intangibles - this is what mobility provides."
- On the shakeout, he asks: "Companies delivering e-services not so long ago had higher market valuations - was this all hype?" In the second quarter of FY 2000, Chapman noted the following comparative market caps:
AOL: $143 B v. Time Warner: $74 B
Schwab: $29 B v. Merrill Lynch: $25B
Yahoo: $83 B v. NYT: $8 B
Amazon: $22 B v. Barnes and Noble: $1.6 B
E-Bay: $1 B v. Sotheby's: $1.5 B
Priceline: $9 B v. Marriott International: $8 B
Ariba: $15 B v. PeopleSoft: $5 B
Such valuations, in hindsight, seem absurd! But they do reinforce the notion that the market - and the millions of people who drove those markets and defined their sentiment - had high hopes for how new technology was going to transform business.
Chapman closed by observing how HP's mobile strategy is shaped by its unique position "at the intersection of always-on infrastructure, appliances and mobile e-services." As a company, it is embracing wireless technology and mobile solutions because of their profound and transformative influence on our economy and society. A revolution is happening; HP intends not only to survive the revolution but to help define it and, as we all move forward, to provide solutions that can meet our needs in this complex and challenging period of change.
http://www.wirelessreport.net/wirelessinsiders/april02/upwardlymobilehpandmobileeserv.html
culater
OLYMPUS INTRODUCES THE VOICE-TREK DS-10 DIGITAL VOICE RECORDER ON THE JAPANESE MARKET
(Ybreo Newswire) - First in the world with a high-quality, WMA*-format recording mode. 64 MB internal flash memory records up to 22 hours 20 minutes of sound. Sold with cradle for easy data transfers to and from a personal computer. Voice-Trek is a registered trademark of Olympus Optical Co., Ltd. in Japan.
Designed with business users in mind, the "Voice-Trek DS-10" features up to 22 hours 20 minutes of recording capacity, a high-quality, WMA*-format recording mode, noise cancellation and a variety of other functions. It comes packaged with software that transfers, plays back and manages audio data and also provides an interface to third-party speech recognition software. The product went on sale April 5, 2002.
* WMA: An audio format included as a standard component of Microsoft operating systems since "Windows Me" and featuring both high quality audio and high levels of data compression.
The "Voice-Trek DS-10" is the latest evolution of a series of PC-connectable IC recorders, Olympus' acclaimed "Voice-Trek DS-1" and "Voice-Trek DS-650." Designed with business users in mind, the new "Voice-Trek DS-10" features high-quality audio, a variety of functions, simple operations, and a refined, ergonomic design.
With audio quality at the highest levels in the industry and a wide array of new functions, the "Voice-Trek DS-10" goes beyond a tool for simple voice memos and can be used to record meetings, lectures, negotiations and interviews.
Top Features
Internal memory records up to 22 hours 20 minutes
High Quality Mode uses WMA format
Noise cancellation functions
Designed for portability and ease of use
LCD with kanji display functions
Repeat playback functions and choice of 3 playback speeds
Bundled USB connection kit and cradle for connection to both Windows and Macintosh computers
Main Features
Internal memory records up to 22 hours 20 minutes *
The "Voice-Trek DS-10" uses a 64 MB flash memory as its recording medium, and is capable of approximately 4 hours 20 minutes continuous recording in "High Quality Mode," approximately 10 hours 25 minutes in "Standard Play Mode," and approximately 22 hours 20 minutes in "Long Play Mode." The unit uses digital recording technology to provide instant access to any recorded file at the touch of a button.
High Quality Mode: Approx. 4 hours 20 minutes
Standard Play Mode: Approx. 10 hours 25 minutes
Long Play Mode: Approx. 22 hours 20 minutes
Recording Time for Voice-Trek DS-10
Use the "A324" AC adapter (sold separately) if continuous recording is likely to exceed battery life. All recording times listed in this document are potential continuous recording times. Actual recording may be shorter if repeated, short recordings are made.
High Quality Mode uses WMA format
The "Voice-Trek DS-10" is the first portable recorder to offer Microsoft WMA (Windows Media Audio) *1 format in addition to conventional DSS *2 Standard Play Mode and Long Play Mode recording.
The High Quality Mode uses WMA 32 kbps, 44 kHz sampling rate, monaural recording to achieve extremely close fidelity to the original speech while keeping data volumes low. The "Voice-Trek DS-10" is limited to frequencies below 7 kHz for copyright protection purposes, but its sampling rate of 44 kHz is the same as used in music CDs, and it rigorously suppresses the noise generated by conversion to digital data.
*1 WMA: An audio format included as a standard component of Microsoft operating systems since "Windows Me" and featuring both high quality audio and high levels of data compression. A music CD compressed with the WMA format provides quality on par with the original CD using only 48 kbps data. Olympus was one of the first to understand the potential of WMA technology and has worked in close collaboration with Microsoft and Texas Instruments to develop the world's first WMA-format portable recorder.
*2 DSS: "Digital Speech Standard," a standard developed by Olympus, Philips (Netherlands) and Grundig (Germany).
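Those figures are easy to sanity-check. The sketch below, in C, estimates recording time from the 64 MB capacity and a bitrate; only the 32 kbps High Quality figure is quoted above, the Standard and Long Play rates are back-calculated guesses, and real devices lose a little capacity to file-system and header overhead, which presumably explains the slightly lower rated times.

#include <stdio.h>

/* Rough recording-time estimate from memory size and bitrate.
 * Assumptions: all 64 MB of flash is available for audio data, and the
 * Standard/Long Play bitrates are back-calculated guesses from the rated
 * times (only the 32 kbps High Quality figure is quoted in the article). */
int main(void)
{
    const double capacity_bits = 64.0 * 1024 * 1024 * 8;
    const double bitrate_bps[] = { 32000.0, 13700.0, 6400.0 };
    const char  *mode_name[]   = { "High Quality", "Standard Play", "Long Play" };

    for (int i = 0; i < 3; i++) {
        double seconds = capacity_bits / bitrate_bps[i];
        printf("%-13s: about %.1f hours\n", mode_name[i], seconds / 3600.0);
    }
    return 0;
}

The High Quality estimate lands near 4.7 hours before overhead, consistent with the rated 4 hours 20 minutes.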
Noise cancellation functions
The "Voice-Trek DS-10" includes noise cancellation functions to achieve clear recording even in noisy environments. This technology, licensed from Cortologic AG, focuses on the unique modulation frequency range generated by the human voice and digitally removes sounds outside of that frequency range, making it possible to eliminate noise in the same frequency band as the voice, something that is almost impossible with conventional filters. The unit comes with three noise cancellation levels,-- OFF/ LOW/ HIGH--that can be selected at playback mode according to the amount of noise in the recording.
Designed for portability and ease of use
The "Voice-Trek DS-10" maintains the large buttons and functional layout that won praise for the "Voice-Trek DS-1" and the "Voice-Trek DM-1," but features a rounder body that is easier to hold and naturally guides the fingers to the proper place. The aluminum case gives the unit a touch of elegance and quality, and a new metal clip and a strap hole are included to improve portability.
LCD with kanji display functions
The large, full-dot LCD displays hiragana and kanji as well as alphanumeric and katakana characters and numerals for easy-to-read Japanese-language displays of the date, time, mode, file information and warnings. A backlight is included for use in dark environments.
Repeat playback functions and choice of 3 playback speeds
The new "repeat playback function" allows the user to repeat any section of a file, a capability requested from those using IC recorders for language learning. In addition to the conventional 1.5 speed "Fast Playback" Mode, the "Voice-Trek DS-10" also has a new 0.75 speed "Slow Playback" Mode. State-of-the-art digital signal processing plays the sound back at the original pitch in fast or slow mode.
USB connection kit and cradle for connection to both Windows and Macintosh computers
The "Voice-Trek DS-10" comes with a USB connection cable and "DSS Player" audio playback software for high-speed data transfers to and from personal computers. The cradle included with a unit makes PC connection even easier--just set the "Voice-Trek DS-10" on the cradle and you're ready to go.
Recorded audio files can be transferred to a computer and, for example, attached to e-mail. The "Voice-Trek DS-10" is USB audio class compatible and can be used as a USB microphone and USB speaker for Windows computers.
When used as a microphone, the unit can record up to about 100 hours directly to hard disk.
"DSS Player" is an application designed for users who wish to create text files directly from speech and also comes with convenient functions to play back audio files while using word processing software. "DSS Player" can convert DSS files to WAVE, the standard Windows audio format, and AIFF, the standard Macintosh audio format, so that recordings can be played back on other computers as well.
http://www.ybreo.com/main/getProductInfo_ie.cfm?Latest=yes&AdvSearch=no&Keyword=&Brand=&...
culater
OT Road Test: Taking the Music for a Quick Spin
It’s not easy for brick-and-mortar music retailers these days. More people are downloading free music from the Internet. Online music sellers are slashing prices, eating into profits. Business is particularly difficult for enormous music outlets like the Virgin Megastores that devote tons of floor space to obscure recordings by world musicians. How can Virgin hope to sell enough of these eclectic tunes to turn a decent profit?
Virgin may have found an answer. Jan de Jong, the company’s vice president for information technology, persuaded his bosses to try out electronic kiosks in Virgin’s stores that allow customers to sample 30-second snippets from a database of approximately 250,000 CDs. The experiment began last year in two of the chain’s outlets and was considered a huge success. Virgin executives found that when customers come into a store with a specific album in mind, they’re three times as likely to actually purchase the product if they give it a test drive. “The biggest problem we have as a music retailer is that we sell a product that is shrink-wrapped,” says de Jong. “You can look at it, smell it and see it, but not hear it.” The company now has about 15 of the $5,000 kiosks in each of four stores, and plans to install them in every one of its 22 outlets in the United States and Canada by next year.
The kiosks also help customers help themselves, a big advantage considering the enormous size of Virgin’s music stores, which average about 60,000 square feet. “Retailers can no longer hire enough people to keep the store open, much less to understand all of the different styles, from Celtic to rap,” says Dan Hopping, one of the IBM retail specialists who helped design the technology. “The kiosk is always on and is always an expert.” In addition to track sampling, the kiosks provide reviews from industry magazines like Spin, Vibe, Mix Mag and Rolling Stone, as well as pictures of the artists, track listings and album credits.
The music is streamed into the kiosks via the Internet by a database company called Muze. Some Internet companies allow customers to download songs, but few offer the depth of selection that Virgin boasts. At Amazon.com, for example, you can sample only 15,000 songs, just 6 percent of the total available at a Virgin store kiosk. The process is simple, too. Customers scan a CD’s bar code and tap the touch screen to choose the song snippets they want to hear. Susie Phillips, 31, was wearing out the kiosks recently at the Virgin Megastore in New York’s Times Square. “I was like, ‘Woo-hoo!’ when I saw it. These are artists that I know, but albums that I’m not familiar with, and I don’t want to spend 80 bucks without hearing it,” said Phillips, waving the four CDs she was thinking of buying. “It’s instant gratification.” For Virgin executives, the instant gratification comes when customers like Phillips head to the checkout line.
—Suzanne Smalley
http://www.msnbc.com/news/741058.asp
culater
Video compression slims down for spring
Ever-improving codecs satisfy your hunger for high quality without bloating your cost, memory, power, and processing budgets. But which of the smorgasbord of contenders should be the main course? Before filling your plate, sample the spread to figure out what you have a taste for and why.
By Brian Dipert, Technical Editor -- EDN, 4/4/2002
http://www.e-insite.net/index.asp?layout=article&articleId=CA203798&title=Search+Results&...
culater
"I do think that the USB 2.0 MemoryBank will be extraordinarily useful. My family is big on MP3 players, such as e.Digital's MXP 100, which I've written about previously (see "Goin' mobile," December 2001). But filling that player's IBM Microdrive isn't quick via its integrated USB 1.1 link, and the 256-Mbyte CompactFlash cards take a while too. In contrast, a speedy FireWire connection is one of the features that's made Apple's iPod a success. A product like the MemoryBank will bring hard-disk-like data rates to the task of writing music to CompactFlash cards."http://www.e-insite.net/index.asp?layout=article&articleId=CA203871&title=Search+Results&...
CE devices using the DivX video compression system are in the planning stage. DivXNetworks, developer of an MPEG-4-compatible codec, said it was working with hardware designer e.Digital on a broad line of home and portable products that would play or record video made with DivX compression. The effort is to extend the use of DivX beyond the PC and will include DivX-enabled DVD players, digital video recorders, set-top boxes, camcorders, portable players and more, the companies said. The first products, expected by year-end, will include digital rights management for secure playback, they said. San Diego-based e.Digital has experience developing portable digital audio players and recorders.
http://www.e-insite.net/index.asp?layout=article&articleId=LN45M0-37N0-01CV-D4PN-00000-00&ti...
culater
Audio pirates and device makers still dancing
Michelle Howell, Associate Editor
The music industry and device manufacturers can't agree on a single standard to prevent piracy. But would that be the best thing anyway?
You mean I actually have to buy a CD now? Who would've believed it would come back to this! While the traditionalist in me still reels with excitement at the thrill of visiting my local record store to purchase the latest CD, this may be the sentiment felt by those who choose to illegally burn copyrighted music.
With regulatory and standardization committees teaming up with the music industry and portable-device manufacturers, the future of music piracy may be shaking in its boots. But with no overall standard for what security measures portable designers should take to ensure that copyrighted materials are protected, the shaking may still include a little rattle and roll.
During the short-lived, but well-publicized reign of Napster, the music file-sharing service that tested the security ability of every designer's system, groups began to form to ensure this piracy couldn't continue. With portable MP3 music players fueling the downloadable content fire, inviting portable-device makers to these meetings seemed only natural. In 1999, the Secure Digital Music Initiative (SDMI) gathered the recording, consumer electronics, and information technology industries together to develop an open specification for protecting digital music distribution. In these meetings, the SDMI developed the SDMI Portable Device Specification Part 1, Version 1.0, a voluntary set of guidelines intended to provide a secure platform for the digital distribution of music and related content that's offered for sale on-line or through other Web distribution mediums. Although the over 200 members of the SDMI developed various ways of protecting content, and slapped that SDMI-compliant logo on their products, this specification eventually waned and other proprietary measures became the dominant means.
So, what exactly happened to SDMI and its idea of creating content-protection standards and specifications? Where did it all go wrong? Imagine, if you will, a room full of music industry associates and content makers, PC manufacturers, and portable-device makers trying to decide on a specification that works with everyone's devices and capabilities. Looking for the punch line? This is no joke, just a key impetus in the inactive status of SDMI. "The problem with SDMI was that it was made up of groups with very different goals and interests," says Randy Cole, chief technologist at Texas Instruments Internet Audio Group. "It was very hard to reach an agreement because you couldn't satisfy everybody."
What to do?
Approaches to what portable-device manufacturers should do to protect content and copyrighted materials vary widely among device manufacturers. Although there's no standard-issue means of guaranteeing the security of content on the portable platform, there are three key ways security can be applied: in storage technology, compression technology, and the player itself.
The type of security solution really depends on the type of audio decoder a manufacturer uses, the type of PC software, and the type of flash memory. Content Key, a digital rights management scheme for portable media from DataPlay, will let users download copyrighted materials from their PC to their device. But this content is then frozen, so to speak, in the user's device. The ability to share that content ceases once it hits a device equipped with Content Key because of a zero-copy rule. You can, however, take the disc out and give it to a friend, but you wouldn't be able to copy the content onto your friend's PC or another DataPlay disc. "If it's stolen or not is not important to us," says Steven Volk, vice president of corporate development at DataPlay. "We're just not going to let you take it to another space. You already have it on your PC." Content Key is, however, flexible enough to adapt to other security standards that come along.
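As a toy illustration of that "frozen to one device" policy (DataPlay's actual Content Key cryptography is not described in the article, so every detail below is an assumption), a player might refuse to render content whose binding does not match its own hardware ID:

#include <stdint.h>
#include <stdbool.h>

/* Toy "zero copy" binding: when content is downloaded, it is tagged with the
 * ID of the device it was delivered to; playback is refused anywhere else.
 * Real DRM systems bind content with cryptography rather than a plain ID
 * compare - this sketch only illustrates the policy the article describes. */
struct protected_track {
    uint64_t bound_device_id;   /* device the content was "frozen" to */
    const uint8_t *audio;       /* the (notionally encrypted) payload */
    uint32_t audio_len;
};

extern uint64_t this_device_id(void);              /* hypothetical unique hardware ID */
extern void     play_audio(const uint8_t *, uint32_t);

bool try_play(const struct protected_track *t)
{
    if (t->bound_device_id != this_device_id())
        return false;           /* content refuses to play on another device */
    play_audio(t->audio, t->audio_len);
    return true;
}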
Watermarking
Another approach is audio watermarking, which involves embedding a packet of digital data directly into the content signal. It's basically an inaudible signal that can be added to content, such as music. The watermarked data can contain rules, such as copy usage rules for the content, owner, distributor, or recipient. While computers or devices can read this signal, it's said to have no effect on listening quality. But if a user tries to remove the watermark from the content, it will destroy the content, making it inaudible.
Verance's audio watermarks can contain detailed information associated with the audio and audio-visual content through such means as monitoring and tracking its distribution and use, as well as controlling access to and usage of the content. To read this content, devices need to use a watermark detector, which filters through the rules embedded in the watermark and determines whether the content can legally be played again or shared. As with Content Key, Verance's audio watermark doesn't stop a user from getting music onto his device. But if an illegal copy is made of content that was both watermarked and encrypted, the copy will no longer be encrypted - yet it will still be watermarked, and that watermark can stop the illegal copy from being loaded onto a device in the first place.
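As a toy illustration of the general idea, and emphatically not Verance's actual scheme, the sketch below hides a byte string in the least significant bits of 16-bit PCM samples and reads it back out. A real audio watermark is spread through the signal so it survives compression and re-recording, which this naive version would not.

#include <stdint.h>
#include <stddef.h>

/* Toy watermark: embed 'len' bytes of data, one bit per sample, into the
 * least significant bit of 16-bit PCM audio. It illustrates the concept of
 * carrying usage rules inside the signal itself; commercial watermarks are
 * far more robust and remain inaudible by design. */
void embed_watermark(int16_t *pcm, size_t n_samples,
                     const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len * 8 && i < n_samples; i++) {
        int bit = (data[i / 8] >> (7 - (i % 8))) & 1;
        pcm[i] = (int16_t)((pcm[i] & ~1) | bit);   /* overwrite the LSB */
    }
}

/* Matching detector: reads the LSBs back into 'data', most significant bit first. */
void extract_watermark(const int16_t *pcm, size_t n_samples,
                       uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len * 8 && i < n_samples; i++) {
        if (i % 8 == 0) data[i / 8] = 0;
        data[i / 8] = (uint8_t)((data[i / 8] << 1) | (pcm[i] & 1));
    }
}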
Texas Instruments formed its Internet Audio Group specifically to focus on the portable audio device market. Its flagship product, the TMS320DA250 DSP, was developed as a platform for all potential audio protocols. Because the DSP is programmable, unlike other hard-wired solutions, it can work with any codec, such as MP3, WMA (Windows Media Audio), AAC, and numerous others. If a new protocol comes along, the DSP can easily work with it. Other non-programmable options build all the protocols needed into the hardware. But with new protocols popping up every day, this could potentially exclude emerging technologies. With this in mind, designers need to think long and hard about which protocols should be integrated in the hardware and which ones should be left out.
Standardization
There are, of course, other standards and standardization committees out there working to protect content. Content Protection for Recordable Media (CPRM) is a technology developed jointly by IBM, Intel, Matsushita, and Toshiba (the 4C entity). It was designed to meet the requirements of SDMI and extends CPRM and similar content protection schemes to ATA devices. This technology is, as with most others, completely optional. A group called the Copy Protection Technical Working Group (CPTWG), which includes Hitachi, Intel, Matsushita, Sony, and Toshiba, also produced the Five Company (5C) Digital Transmission Content Protection (DTCP) specification. This aims to give manufacturers an easy-to-implement standard with a higher level of security than CPRM and moves into issues such as streaming digital movies and other content from digital appliances such as set-top boxes, DVRs (digital video recorders), DVD players, and satellite TV.
"One of the reasons why SDMI and 4C security hasn't been implemented yet is that the industry moved a little bit faster than the standardization," says Hans Fleurkens, marketing manager for portable devices at Philips Semiconductor. "You see a number of players being executed on relatively short notice with their own types of standards. For whatever standard is coming, it needs to have wide support within the industry," adds Fleurkens. But that may be further off than we think. Manufacturers have created proprietary measures for protecting content on devices. These measures may vary as widely as the content, but many are flexible enough to facilitate a universal standard, if one should arise. Although a universal standard sounds like the best solution to ensure interoperability and compliance throughout the market, the major downfall is Public Enemy No. 1, the hacker.
If there's a universal standard and it gets broken, then all that standardization work has been for nothing. The key to security in the portable realm right now will be a matter of implementing several different standards on a player. This, however, may add to development costs and implementation difficulties of devices. Once portable audio device makers tackle the issues of protecting copyrighted content on their devices, the security monster will rear its ugly head to streaming video and movies. But that's a whole other topic.
Portable Design April, 2002
Author(s) : Michelle Howell
Match the flash memory to the application
Richard Nass, Editor-in-Chief
Different types of flash memory are available, ranging from multiple chip offerings to multiple subassemblies. It's important to pick the one that matches your need.
Flash is flash, right? The answer to that is yes and no. In general, there are two types of flash memory—NAND and NOR. NAND-based flash memory is generally used for data storage, while NOR-based memory is used for code storage. NAND's sequential (serial) access suits it for block-oriented data storage applications.
Portable devices are the perfect location for flash memory, particularly when compared with rotating media (except for the cost). In general, they're smaller and lighter than rotating media, two key attributes of portable electronics.
One trend that's occurring in the market is that designers are looking at NAND flash as an alternative to NOR, even for code-storage applications. The reason is purely cost: on a per-bit basis, NOR is two to three times more expensive than NAND.
Fig. 1. SST's SuperFlash technology employs a split-gate cell, in which the erase oxide is neither the gate nor the transistor oxide.
The amount of storage that's needed in cell phones continues to rise. The number of features designed into the latest phones is becoming more complex, driving that increase. Also, simple features like upping the available phone-book size or memory retention adds to the memory requirement.
But the most dramatic leap in memory footprint will come when the transition to 3G phones occurs. I'll save the guess for when that will occur for another article. But we can safely assume it will be within the next five years. Such phones will integrate an MP3 player, an Internet browser, a color display, or a camera.
The same transition should occur in the PDA arena, systems that don't want to employ rotating media. The NAND flash will be used to store everything, including the operating system, the application software, and the user data storage.
Some vendors have a vested interest in which type of memory will be the most dominant. Then there are vendors like Toshiba who produce both types. Its NAND offerings range from 64 Mbits to 1 Gbit, while the NOR devices are available at 16, 32, and 64 Mbits.
Souped-up NOR
Silicon Storage Technology (SST) offers a NOR-type flash using its SuperFlash split-gate cell architecture (Fig. 1). The technology uses a reliable thick-oxide process and a simple, flexible design that's suitable for small or medium sector sizes.
With SuperFlash technology, erasing is accomplished using a thick polysilicon oxide as opposed to gate oxide. As a result, the memory devices easily scale to finer process geometries. Programming is achieved by a low-power technique called source-side injection.
The latest developments in SuperFlash yield a self-aligned cell, which removes the need for built-in tolerances on the various process layers (Fig. 2). With the self-aligned cell, foundries can effectively align layers perfectly to each other without having to build tolerances into those layers.
Fig. 2. A self-aligned cell is now part of the SuperFlash technology. This helps achieve smaller devices by removing the need for built-in tolerances on the various process layers.
Even within the NAND camp, a diversified portfolio can be helpful. This could include different package types (generally BGA or chip-scale packages) and lower voltages. Toshiba is now readying a 1.8-V offering.
According to Kevin Kilbuck, director of business development for flash memory at Toshiba, "We're addressing the lower voltages in two steps. One is to take our existing 3-V products and take them down to 2.7 V. Step two is a pure 1.8-V core. There's not a huge demand for that today, but as the 3G phones start to roll and as PDAs move to NAND, our timing is basically matched to those markets."
Another consideration in the NAND versus NOR discussion has to do with the controller. Today, most systems have some type of controller between the host processor and the NAND. The interface to NOR flash is similar to SRAM, something that designers are generally familiar with. Hence, the flash can be interfaced directly to the microprocessor bus. On the flip side, next-generation NAND flash will be designed to permit that direct processor connection without any glue logic.
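A rough sketch of what that interface difference looks like to firmware, assuming a memory-mapped NOR device and a hypothetical command/register interface for a raw NAND part; the addresses, register names and 512-byte page size are illustrative assumptions, not any vendor's datasheet.

#include <stdint.h>

#define NOR_BASE   ((volatile const uint8_t *)0x10000000)  /* hypothetical mapping   */
#define NAND_CMD   (*(volatile uint8_t *)0x20000000)       /* hypothetical registers */
#define NAND_ADDR  (*(volatile uint8_t *)0x20000004)
#define NAND_DATA  (*(volatile uint8_t *)0x20000008)
#define PAGE_SIZE  512

/* NOR looks like ROM/SRAM on the processor bus: bytes (or code executed in
 * place) can be read at arbitrary addresses with no controller in between. */
uint8_t nor_read_byte(uint32_t offset)
{
    return NOR_BASE[offset];
}

/* NAND is reached through a command interface a page at a time: issue a read
 * command, clock in the page address, then stream the page out of a data
 * register into a RAM buffer. Commands, addresses and timing here are
 * placeholders for whatever the actual part or controller requires. */
void nand_read_page(uint32_t page, uint8_t *buf)
{
    NAND_CMD  = 0x00;                    /* hypothetical "read page" command */
    NAND_ADDR = (uint8_t)(page & 0xFF);  /* page address, low byte           */
    NAND_ADDR = (uint8_t)(page >> 8);    /* page address, high byte          */
    for (int i = 0; i < PAGE_SIZE; i++)
        buf[i] = NAND_DATA;              /* sequential byte stream           */
}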
Advanced Micro Devices (AMD) is one of the vendors that currently offers 1.8-V flash memory (operating up to 2.2 V). The Am29SL and Am29DS families offer single power-supply operation, as well as zero-power operation and advanced power management.
The Am29DS devices offer a dual-bank, simultaneous read/write capability, while the Am29SL family features a standard single-bank architecture. The zero-power operation reduces typical current draw to as low as 1 µA in standby and automatic sleep modes. Eliminating the need for higher external programming and erasing voltages reduces overall system cost and conserves board space.
Intel's 1.8-V flash memory integrates a flexible partition read-while-write architecture with synchronous burst and asynchronous page-mode read operations. This is combined with the security-enabling features of the company's Advanced+ Boot Block. The memory is supported by Intel's Flash Data Integrator software (Version 4), which enables management of code, data, and files in flash memory. Densities of 32, 64, and 128 Mbits are available in various packaging options.
STMicroelectronics recently announced a 32-Mbit flash chip, the M58LW032A, that can store data, but its high performance also allows for direct execution of stored code, precluding the need for a separate RAM. The device performs all operations, including programming and erasing, from a 2.7- to 3.6-V supply, and accepts I/O signals ranging from 1.8 V to the supply voltage.
Two members of the Firmware Hub (FWH) family of flash memories hail from Atmel. The AT49LW040 is a 4-Mbit device, while the AT49LW080 stores 8 Mbits. The BIOS-based components can interface directly with Intel's 8xx series chip sets. The parts feature 64-kbyte sectors and automated byte-program and sector-erase operations. Both parts are available in 40-lead TSOP and 32-lead PLCC surface-mount packages.
Software control
A software product called VBM can be coupled with NAND memory to help manage bad blocks. Developed by Datalight, VBM is bundled with the FlashFX flash media-management software. FlashFX is portable across most operating systems and flash devices. Hence, it doesn't lock a system manufacturer into any particular flash vendor. Version 5.0 is now available, a release that's targeted at OEMs building Windows CE-based devices.
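A minimal sketch of the kind of bookkeeping such media-management software performs, here just a logical-to-physical map that skips factory-marked bad blocks; this illustrates the remapping idea in general, not Datalight's FlashFX or VBM design, and the driver hook is hypothetical.

#include <stdint.h>
#include <stdbool.h>

#define TOTAL_BLOCKS 1024
#define UNMAPPED     0xFFFF          /* sentinel: no physical block assigned */

/* Hypothetical driver hook: returns true if the physical block is marked bad
 * (for example via the factory bad-block marker in the spare area). */
extern bool flash_block_is_bad(uint16_t physical_block);

static uint16_t logical_to_physical[TOTAL_BLOCKS];
static uint16_t usable_blocks;

/* Build a logical-to-physical map that simply skips bad blocks, so the file
 * system above sees a smaller but contiguous range of good blocks. */
void build_block_map(void)
{
    usable_blocks = 0;
    for (uint16_t phys = 0; phys < TOTAL_BLOCKS; phys++) {
        if (!flash_block_is_bad(phys))
            logical_to_physical[usable_blocks++] = phys;
    }
    for (uint16_t log = usable_blocks; log < TOTAL_BLOCKS; log++)
        logical_to_physical[log] = UNMAPPED;
}

uint16_t map_block(uint16_t logical_block)
{
    return (logical_block < usable_blocks)
               ? logical_to_physical[logical_block]
               : UNMAPPED;
}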
Going in somewhat of a different direction, Matrix builds memory devices that are low cost, yet can only be written to once. The low cost comes from the ability to produce the memory in a three-dimensional format. The first devices to come from this format, available in the second half of this year, will be a 64-Mbyte memory chip in a standard TSOP package.
"Until now, anytime you've built an IC, you basically built the chip by creating transistors on the surface of a single-crystal silicon wafer. Then, you wire them together," says Dan Steere, vice-president of marketing at Matrix. "The result is a memory that's denser than existing memory, and therefore cheaper."
What makes this technique even more valuable is that it can be achieved using standard materials and standard processes, meaning that the wafers are produced with the equipment and materials that are already in most high-volume CMOS fabs.
The initial target for the write-once memory is digital cameras. The company says that with a 3-Mpixel camera, users can store about 75 pictures. With a 1-Mpixel camera, that increases to between 250 and 300 pictures. And the retention life is rated at 100 years. The price goal for the memory cards (which is set by Matrix's customers) is to be about the same as a three-pack of 35-mm film.
While this creates a different category of memory, the devices are designed to be compatible with existing memory-card standards, including a standard NAND flash interface, so the cards can be plugged into the same slots.
Flash combo
The spin from Microchip is to combine a microcontroller with the flash memory. For example, the company's PIC18F6720 and PIC18F8720 devices offer a 1-Mbit flash array that can be fully erased and programmed in less than 2 seconds. An individual word can be erased and programmed in less than 3 ms.
The parts' feature set includes an analog-to-digital converter with up to 16 10-bit channels, and up to 10-MIPS performance at 40 MHz. The operating voltage ranges from 2 to 5.5 V.
Micron's SyncFlash memory is designed such that both DRAM and flash memory can reside on the same bus and execute from the same DRAM memory controller. This simplifies system busing by eliminating the additional pins needed for a separate flash-only memory interface. It can also increase flash read performance to DRAM speeds. The first member of the SyncFlash product family is a 64-Mbit device, housed in a 54-pin TSOP Type II package.
The HY29DS32x, developed by Hynix, is a 32-Mbit, 2-V flash memory available in 48-pin TSOP and 48-ball FBGA packages. The memory array is organized into 71 sectors in two banks. The first bank contains eight 8-kbyte boot/parameter sectors and 7 or 15 larger sectors of 64 kbytes each (depending on the version of the device). The second bank contains the rest of the memory array, organized as 56 or 48 sectors of 64 kbytes.
The device features simultaneous read/write operation, with zero latency. This releases the system from waiting for the completion of program or erase operations, improving overall system performance. After a program or erase cycle has been completed, the device is ready to read data or to accept another command. Reading data out of the device is similar to reading from other flash or EPROM devices.
Samsung Electronics offers a 1-Gbit NAND flash chip, which is fabricated with 0.12-micron technology. The device is offered in a single chip as well as a dual-die package. The dual-die package doubles the capacity to 2 Gbits. The initial 1-Gbit device operates at 3.3 V. A 1.8-V version will be available shortly.
The chip is built with an expanded 2-kbyte page program (rather than the standard 512 bytes), while the block erase is 128 kbytes (unlike the standard 16 kbytes), improving write performance. A write-cache function is invoked when continuous page programming is performed.
SanDisk is a vendor that produces flash memory in a host of form factors, including CompactFlash, SmartMedia, and MultiMediaCard. In addition, it has developed a 1-Gbit NAND flash chip, the SDTNF-1024. The part operates at 3.3 V and is organized as 528 bytes by 32 pages by 8192 blocks. The 528-byte static register allows program and read data to be transferred between the register and the memory cell array in 528-byte increments. Erase operations are implemented in a single block unit.
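Those organization figures check out, assuming (as was typical for NAND of that era) that 512 of the 528 bytes in each page hold user data and the other 16 are spare bytes for ECC and management:

#include <stdio.h>

/* Capacity check for the quoted organization: 528 bytes x 32 pages x 8192 blocks.
 * Assumption: 512 of the 528 bytes per page are user data; the rest are spare. */
int main(void)
{
    const unsigned long long pages_per_block = 32, blocks = 8192;
    const unsigned long long raw  = 528ULL * pages_per_block * blocks;
    const unsigned long long data = 512ULL * pages_per_block * blocks;

    printf("raw array : %llu bytes (%.2f Gbit)\n", raw,  raw  * 8.0 / (1024.0 * 1024 * 1024));
    printf("user data : %llu bytes (%.2f Gbit)\n", data, data * 8.0 / (1024.0 * 1024 * 1024));
    return 0;
}

The user-data portion works out to exactly 1 Gbit (128 MiB), matching the part's rating, while the raw array is about 1.03 Gbit.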
Instead of hard disk drives
Simpletech supplies a range of flash readers that lets users access the flash memory.
"We're seeing a real need in the industrial space, where people tend to use these flash devices as alternatives to hard disk drives," says Dan Moczarny, director of flash and storage products at Simpletech. "The flash devices run on the order of 0.6 W, versus the traditional 2.5- or 3.5-in. hard drives, which are in the 3- to 8-W range. We can also operate from -40°C to +85°C."
Fig. 3. Multiple vendors, including SimpleTech, are offering flash memory in a CompactFlash form factor.
Simpletech has an interesting perspective on pricing. The company claims that if the designer only needs around 100 Mbytes of storage, then flash can be less expensive than traditional rotating media.
Says Moczarny, "We look at price per usable megabyte, rather than price per megabyte. Right now, if you look at ATA disk drives, minimum capacities in volume are in the 6-Gbyte range. There are many systems using embedded Linux or Windows CE, where the storage requirements are under 100 Mbytes. We see a price-crossover point where we could probably get into the 150- to 200-Mbyte range on a flash product for about the same price as a 6- to 10-Gbyte hard drive."
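A small sketch of the price-per-usable-megabyte comparison Moczarny describes; the only figures taken from the article are the roughly 100-Mbyte requirement and the 6-Gbyte minimum drive, while every dollar amount is a made-up placeholder purely to show the arithmetic.

#include <stdio.h>

/* Price-per-usable-megabyte comparison. The system only needs 'needed_mb', so
 * anything beyond that on the hard drive is wasted capacity. All dollar
 * figures below are hypothetical placeholders, not quotes from the article. */
int main(void)
{
    const double needed_mb      = 100.0;     /* storage the system actually uses (article) */
    const double hdd_capacity   = 6144.0;    /* 6-Gbyte minimum-capacity drive (article)   */
    const double hdd_price      = 60.0;      /* assumed drive price, $                     */
    const double flash_price_mb = 0.40;      /* assumed flash price per Mbyte, $           */

    printf("HDD  : $%.2f per usable Mbyte ($%.4f per raw Mbyte)\n",
           hdd_price / needed_mb, hdd_price / hdd_capacity);
    printf("Flash: $%.2f per usable Mbyte (sized to what's needed)\n",
           flash_price_mb);
    return 0;
}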
Simpletech recently announced a new line of Secure Digital (SD) memory cards in capacities up to 128 Mbytes, aimed at PDAs, cell phones, and digital cameras. Higher densities are expected soon. The company also offers a line of CompactFlash cards (Fig. 3).
SD cards offer built-in copy protection to ease fast, secure downloading of digital files, with a typical transfer rate of 2 Mbytes/s. A 128-Mbyte SD card will hold over 40 minutes of video, 100 digital images, or up to 4 hours of music.
Smart Modular Technologies specializes in high-speed flash devices. According to Steffen Hellmold, director of the company's flash group, "We'll exceed the 5-Mbyte/s mark by the end of this year. We'll do that through enhancements in the controller architecture, as well as enhancements in the flash ICs."
Smart currently offers a 1-Gbyte CompactFlash card in a Type II format. It can operate at either 3 or 5 V.
Advanced Micro Devices
Sunnyvale, CA
(800) 538-8450
www.amd.com
Atmel
San Jose, CA
(408) 441-0311
www.atmel.com/atmel/acrobat/doc1966.pdf
Datalight
Bothell, WA
(425) 951-8086
www.datalight.com
Hynix Semiconductor America
San Jose, CA
(408) 232-8800
www.us.hynix.com
Intel
Santa Clara, CA
(408) 765-8080
developer.intel.com/design/flash/
Matrix Semiconductor
Santa Clara, CA
(408) 969-4848
www.matrixsemi.com
Microchip Technology
Chandler, AZ
(480) 792-7668
www.microchip.com
Micron Technology
Boise, ID
(800) 932-4992 or (208) 368-3900
www.micron.com
Samsung Electronics
San Jose, CA
408-544-4000
www.samsungusa.com
Silicon Storage Technology (SST)
Sunnyvale, CA
(408) 735-9110
www.sst.com
SimpleTech
Santa Ana, CA
(800) 367-7330 or (949) 476-1180
www.simpletech.com
STMicroelectronics
Lexington, MA
(781) 861 2650
www.st.com
Toshiba America Electronic Components
Irvine, CA
(949) 455-2000
www.toshiba.com/taec
Portable Design April, 2002
Author(s) : Richard Nass
http://pd.pennnet.com/Articles/Article_Display.cfm?Section=Articles&Subsection=Display&ARTIC...
culater
Tomorrow's cars are like portables on wheels
With all the electronics packed into the next generation of automobiles, designers are finding it necessary to employ "portable" design techniques.
Richard Nass, Editor-in-Chief
You may be asking yourself why a magazine that specializes in "portable" technology is reporting on automotive electronics. The usual criteria for inclusion are that the system must be powered by a battery and contain a microprocessor, microcontroller, or DSP. An automobile certainly fits those criteria. There is, however, a much better reason for our coverage.
When you consider the amount of electronics that's being embedded in today's high-end (and tomorrow's mainstream) automobiles, it's obvious that the total power consumption must be kept as low as possible, for a few reasons, including the size and electrical noise considerations.
Automotive designers are faced with the dilemma of squeezing a lot of electronics into a small space, much like the problem faced by the designer of a notebook computer, PDA, or cell phone. There are also many different power supplies crammed into a small area.
Says Dave Bell, vice president of Linear Technology's Power Business Unit, "You have to worry about efficiency, not because of battery life, but because of the heat that's generated in a small space. On top of that, there's concern over interference issues. When you're dealing with audio and video, you need to keep the switcher noise from interfering with the FM band or producing whines and buzzes in the audio."
Industry analysts estimate that the total amount of electronics in the car will increase from last year's $89 billion to $121 billion this year. Part of this is in engine control, part is body control, etc. But the area that's growing the fastest is the entertainment, or telematics, area. Semiconductor content increased from $199 to $239 in 2000. That's just semiconductors, not total electronics. High-end cars, like Mercedes and BMW, now have more than 60 embedded microcontrollers.
A problem with the high semiconductor content is that some of these controllers remain on, even when the car isn't running, such as the security system. Even if the power draw is fairly modest, if the car isn't used for a few weeks, the battery could be drained.
Helping to combat this problem are some fairly rigorous standards for low quiescent current on devices like switchers and converters. For example, the LT1766 dc-to-dc converter, developed by Linear Technology, has a standby current that's below 100 µA.
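A back-of-the-envelope sketch of why those quiescent-current numbers matter when a car sits parked; the battery capacity, module count and per-module drains are assumed round numbers, not figures from the article.

#include <stdio.h>

/* Estimate how long a parked car's battery lasts against the quiescent
 * (always-on) draw of its electronic modules. All numbers are assumptions
 * chosen only to illustrate the order of magnitude. */
int main(void)
{
    const double battery_ah   = 60.0;   /* assumed 60 Ah car battery                     */
    const double usable_frac  = 0.5;    /* don't drain below ~50% if you want to restart */
    const int    module_count = 40;     /* assumed number of always-on modules           */
    const double ua_per_module[] = { 100.0, 1000.0 };   /* 100 uA vs 1 mA per module     */

    for (int i = 0; i < 2; i++) {
        double total_a = module_count * ua_per_module[i] * 1e-6;
        double hours   = battery_ah * usable_frac / total_a;
        printf("%4.0f uA per module -> %.1f days to reach 50%% charge\n",
               ua_per_module[i], hours / 24.0);
    }
    return 0;
}

With the assumed numbers, 100 µA per module leaves months of margin, while 1 mA per module takes the battery to half charge in roughly a month, which is why sub-100-µA standby parts matter.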
Safety first
The latest electronics in the telematics arena are designed to connect the driver to the car, as well as to the outside world, but to do it in a manner that keeps the driver's hands on the wheel and eyes on the road. In Japan, telematics places an emphasis on navigation, where the primary objective is getting to and from different addresses. In the U.S., safety is the primary application. In Europe, the focus tends to be a combination of the two.
One way to link the driver to the car is to connect the portable devices that tend to be used there, such as a cell phone or PDA. By combining a Bluetooth link with voice activation, the task of using the phone in the car is greatly simplified. Going one step further, connecting the car-phone combination to the PDA could add a voice-activated calendar and address directory, which could be connected to the navigation/GPS system. So when you tell the car you want to go to a particular destination, it knows where you are and how to get there.
Texas Instruments recently released an IEEE 1394b bus solution that supports in-car infotainment applications, such as rear-seat entertainment. This solution works in conjunction with the company's Bluetooth chipsets. The IDB-1394 technology supports 1394b at 100 Mbits/s over 10 m of plastic optical fiber (POF) or unshielded twisted pair, category 5 (UTP5) cable. Developers can then choose between POF, which minimizes electromagnetic interference (EMI), and UTP5, which reduces overall node cost. The Bluetooth chip sets enable hands-free car kits, and when used with the IDB-1394 bus, allow for complete advanced telematics communications in automotive applications.
Available processing power
NEC produces a family of automotive-based microprocessors that fit into the non-mission-critical space. It covers infotainment (information plus entertainment) features like voice activation, multimedia applications, and the navigation and entertainment systems.
The 64-bit MIPS-based VR4181A processor integrates more than 18 peripherals and interfaces along with a VR4120A processor core. It can handle voice-activated and Internet-application systems, as well as audio systems that require a display. The interfaces include I2C and I2S.
At the high end, NEC offers its VR5500 family, which is suited for three-dimensional applications, such as navigation systems and virtual dashboards, ones that are user configurable and contain no gauges.
The NEC automotive-based devices are specified over the full automotive spec range, -40°C to +85°C. "Most consumer companies will only cover an ambient temperature range of 0°C to +70°C. That's all they'll guarantee," claims Kevin Tanaka, staff product marketing engineer at NEC. "For our parts, we are covering the full automotive spec range, and we ensure that we can cover the 10- to 15-year life spans that the automotive systems require."
STMicroelectronics has developed a processor that specializes in voice recognition in automotive applications. The Euterpe digital voice processor includes a DSP core that's optimized for audio applications, analog-to-digital and digital-to-analog converters, code and data memory, external memory management, and an I2C interface for communication with a host processor.
Using third-party DSP code, the Euterpe can perform speech recognition, text-to-speech, speaker verification, noise suppression, echo cancellation, and other voice-processing functions. Developers can also add their own code to diversify their products.
Bringing information to that processor is the specialty of Philips. "Our business concentrates on the transceivers, the part of the bus that translates the currents and voltages into digital signals that go to the microprocessor," says Brian Brewster, strategic marketing manager for Philips Semiconductors' Automotive Business Line.
The buses that carry that information include the control buses, CAN, LIN (local-interconnect network), and some emerging buses, including one for air bags. But those buses are continually changing. CAN essentially dominates in Europe, and is becoming more popular in North America and Japan, although it is relatively slow.
Brewster continues, "People don't realize how many different nodes there are in a vehicle, particularly in luxury cars. The problem of linking all these is becoming quite an issue. It can get a little scary when you look at the complexity of these vehicles."
Passing the test
As far as testing is concerned, IFR Systems has an assembly-line approach that checks the installation integrity of the infotainment subsystem. The system is a collection of RF test equipment that couples to antennas mounted above the vehicle. It sprays the vehicle with test signals of a controlled and precise level (see the figure).
"We're not trying to measure the performance of the radio and other items. That's done exhaustively by the component manufacturer before the device is shipped to the assembler," says Tony Rudkin, a business manager for systems at IFR. "We're testing the process, not the component. The main concern for us is the cabling, the antennas, the items that the manufacturer (assembler) fits to the car.
"Manufacturers are surprised that we are failing cars that they thought were okay," continues Rudkin. "Because our test is more rigorous than the tests people did before— which, in many cases, is to drive the car out of the plant, listen to the radio, check that the phone works by calling a base station, etc.—we're testing the car at the limits of its sensitivity range. So it replicates what would happen if you drive the car well away from a base station or broadcast station, when you're on the edge of the service area."
By spraying a vehicle with test signals of a controlled and precise level, the installation integrity of the infotainment subsystem can be checked.
Microsoft is trying to tie together some of the non-mission-critical systems in the automobile with its latest incarnation of Windows CE, aptly named Windows CE for Automotive 3.5. This version covers areas such as hands- and eyes-free communications, speech recognition, robust graphics capabilities for faster map drawing, faster start-up times, and reliable Internet access.
Motorola, one of the pioneers in automotive electronics, provides the brains for the engine controller, as well as the controller that remembers how and where the driver likes his seat positioned, and the airbag sensor. The company is also working on features like automatic cruise control, where the speed can automatically be adjusted based on traffic conditions.
"Our portion isn't necessarily to implement these changes, but to provide the building blocks that allow these changes to occur," says John Hansen, director of marketing for driver information systems at Motorola.
In the automotive industry, there are essentially three reasons why any new electronics or capabilities of any type are introduced. One is that a new technology can do the same job more cost-effectively. Second is that the perceived value reaches a high enough level. For example, if a new feature allows the car dealer to increase the price of the car by a high enough percentage, then it makes sense. And the third reason is legislation.
An example of where legislation comes into play is in the tire-pressure monitoring system (TPMS). This is a mandated technology for all 2003 model cars that puts a sensor, a microcontroller, and a transmitter into each tire. The system monitors the tire pressure and transmits that information to a central location in the car. The information is then available to the driver, particularly if a problem arises. The technology is similar to what's used for the key fob, the remote keyless entry that unlocks the doors. The information is sent over an RF link.
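A hypothetical sketch of the kind of frame such an in-tire module might periodically transmit over its RF link; the field layout, units and function names are illustrative assumptions, not a description of any production TPMS protocol.

#include <stdint.h>

/* Illustrative TPMS frame: each wheel module samples pressure and temperature,
 * tags the reading with its own ID (so the receiver knows which tire it is),
 * and hands the frame to the RF transmitter. Everything here is hypothetical. */
struct tpms_frame {
    uint32_t sensor_id;      /* unique ID programmed into each wheel module   */
    uint8_t  pressure_kpa;   /* tire pressure, kPa                            */
    int8_t   temperature_c;  /* tire temperature, degrees C                   */
    uint8_t  battery_low;    /* 1 if the in-tire battery is getting weak      */
    uint8_t  checksum;       /* simple integrity check over the fields above  */
};

/* Hypothetical hardware hooks provided elsewhere in the firmware. */
extern uint8_t read_pressure_kpa(void);
extern int8_t  read_temperature_c(void);
extern uint8_t battery_is_low(void);
extern void    rf_transmit(const void *frame, unsigned len);

static uint8_t checksum8(const uint8_t *p, unsigned len)
{
    uint8_t sum = 0;
    while (len--) sum += *p++;
    return (uint8_t)(~sum);
}

void tpms_report(uint32_t my_id)
{
    struct tpms_frame f;
    f.sensor_id     = my_id;
    f.pressure_kpa  = read_pressure_kpa();
    f.temperature_c = read_temperature_c();
    f.battery_low   = battery_is_low();
    f.checksum      = checksum8((const uint8_t *)&f,
                                sizeof f - sizeof f.checksum);
    rf_transmit(&f, sizeof f);
}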
Moving to 42 V
Another area that's under investigation for automotive designers is the use of 42-V electronics. A level of 42 V was chosen because it equals three 12-V batteries in series at their nominal 14-V charging level. In addition, under 50 V is generally considered a "safe" range, although the hybrid cars on the road today run at a higher level.
Two or three years ago, experts predicted that 42-V cars would be here today. However, the evolution has taken longer than expected, and it will be another three or four years before these vehicles are in production. The need for the higher voltage stems partly from the amount of higher power electronics coming to the automobile, things like electric steering, brakes, and valves.
In the first generations, there will probably be hybrid 12- and 42-V cars. The use of dual voltages will result in a need for high-power converters to go between 12 and 42 V. Some things simply work better at 12 V, such as the headlights and incandescent bulbs in the car. Component designers claim they can make more rugged headlamps with the thick filament that runs at 12 V, compared with what's needed at 42 V.
Linear Technology's LT1339 is an example of a high-power converter that will work in a 12/42-V system. Says Linear Technology's Bell, "Many of our automotive customers are looking at doing a three-phase LT1339 device, where they would drive three of these parts phase-locked to each other to build a 1.5-kW converter."
While there's agreement that cars will go to 42 V, there are some differences of opinion as to why the change will occur.
"You hear stories like, 'We need higher voltages because there are so many electric loads on the car, we can't cope.' That's not necessarily true because you can just make a bigger alternator at 12 V, unless you have an incredible amount of load," claims Steve Clemente, a senior technologist at International Rectifier. "The most important reason to employ a hybrid system like this is for fuel efficiency."
Going to 42 V will enable the use of an electric motor to handle the "start-stop" functionality. In other words, when you stop at a traffic light, the engine shuts off completely. When you push on the accelerator, an electric motor gets the car going and starts the engine.
When the car isn't running, the power comes from a battery. Before you reach the next traffic light, you recover the consumed energy and recharge the battery. If the battery gets to a point where it really gets discharged, then the engine would be recharging the battery during normal operation.
Such a hybrid system will allow for the use of smaller engines. If more power is needed, the electric motor can be used as a boost.
A 42-V car also makes it easier to integrate a "drive-by-wire" system. This is a term that's been thrown around lately, and means different things to different people. But in general, it's the move from mechanical control linkages to electronic linkages. Take the steering, for example. Instead of having a physical connection from the steering wheel to the wheels, you'd have an electronic sensor that would sense movements in the steering wheel.
Before something like this can be implemented, it must be extremely reliable. One drive-by-wire standard calls for dual message sourcing, so that each message is sent twice. Error checking and correction is also done to ensure that the proper message is sent and received.
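A hypothetical sketch of what dual message sourcing with error checking might look like to the receiving controller; the frame layout, CRC polynomial and function names are assumptions for illustration, not a published drive-by-wire standard.

#include <stdint.h>
#include <stdbool.h>

/* Toy illustration of dual message sourcing for a by-wire command: the same
 * steering command is sent twice, each copy carrying a CRC, and the receiver
 * acts only when both copies arrive intact and agree. */
struct steer_cmd {
    int16_t  angle_centideg;   /* requested steering angle, 0.01-degree units */
    uint8_t  sequence;         /* sequence number to pair the two copies      */
    uint8_t  crc;              /* CRC-8 over the fields above                 */
};

static uint8_t crc8(const uint8_t *p, unsigned len)
{
    uint8_t crc = 0xFF;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x1D) : (uint8_t)(crc << 1);
    }
    return crc;
}

static bool frame_ok(const struct steer_cmd *c)
{
    return crc8((const uint8_t *)c, sizeof *c - 1) == c->crc;
}

/* Accept the command only if both independently sourced copies check out and
 * carry the same payload; otherwise report failure so the caller can fall
 * back to the last known-good value (the "graceful" behavior discussed next). */
bool accept_steer_cmd(const struct steer_cmd *copy_a,
                      const struct steer_cmd *copy_b,
                      int16_t *angle_out)
{
    if (!frame_ok(copy_a) || !frame_ok(copy_b))
        return false;
    if (copy_a->sequence != copy_b->sequence ||
        copy_a->angle_centideg != copy_b->angle_centideg)
        return false;
    *angle_out = copy_a->angle_centideg;
    return true;
}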
Another key feature is the need for a graceful fix. This means that some backup system must be employed, rather than having a system (like the steering or brakes) simply shut down. For example, in today's power steering technology, when the power steering mechanism fails, the driver can still steer the car. It's more difficult, but it can be done.
The next step would be to have the car drive itself. "That's in the research stage today," says Motorola's Hansen. "It's one thing to be able to get a car to drive around a test track. It's a very different thing to have a non-human-driven car react if a ball jumps out in the road. How will it know that there might be a child coming right after that ball? A computer-controlled car isn't there yet."
IFR Systems
Wichita, KS
(800) 835-2352 or (316) 522-4981
www.ifrsys.com
International Rectifier
El Segundo, CA
(310) 252-7105
www.irf.com
Linear Technology
Milpitas, CA
(408) 432-1900
www.linear.com
Microsoft
Redmond, WA
(425) 882-8080
www.microsoft.com/automotive
Motorola
Austin, TX
(512) 895-2085
www.motorola.com/semiconductors
NEC Electronics
Santa Clara, CA
(408) 588-6000
www.necel.com/microprocessors/index.cfm
Philips Semiconductors
San Jose, CA
(408) 474-5000
www.philipssemiconductors.com/markets/automotive/ivn/
STMicroelectronics
Lexington, MA
(781) 861-2650
www.st.com
Texas Instruments
Dallas, TX
(800) 336-5236
www.ti.com
Portable Design April, 2002
Author(s) : Richard Nass
http://pd.pennnet.com/Articles/Article_Display.cfm?Section=Articles&Subsection=Display&ARTIC...
culater
IBM Receives Delphi Business For Next-Gen Multi-Media Car Products
16:00 GMT on Apr 19, 2002
[INTERNET WIRE]
IBM (NYSE: IBM) announced that Delphi Automotive Systems has chosen IBM's J9(TM) virtual machine environment as a foundation technology for the development of automobile multimedia products. These products can enable motor vehicles to communicate in real-time with drivers, dealers, manufacturers and others in the industry's value chain.
Delphi will use the IBM J9 virtual machine environment in the design of new mobile multimedia products that include embedded, real-time control systems using full-motion video, speech and voice processing, Internet and Java(TM) technologies to enhance communications and entertainment applications for original equipment vehicles. Products expected to result from the collaboration range from dashboard control features to information, entertainment, navigation and messaging.
In addition to enabling functions that consumers can experience firsthand, the collaboration between IBM and Delphi will enhance consumer-support Telematics solutions that ease vehicle maintenance by digitally connecting automakers, dealers and car owners. Such solutions include embedded functions ranging from the remote monitoring of a vehicle's condition to the wireless transfer of data to help manufacturers understand component performance, enhance repairs and drive future design improvements.
For Delphi's development of such advanced-capability products, IBM will provide J9 technology that is modular, scalable and optimized for memory and throughput efficiency. The J9 virtual machine, at the core of IBM's Telematics Solutions offerings, is a high-performance production environment offering adaptive dynamic compilation of Java application bytecodes and superior JIT (Just-In-Time) program execution performance.
IBM's embedded virtual machine technology has been developed, deployed and refined for more than a dozen years. In August 2001, IBM was the first to develop and distribute the J9 J2ME "Java Powered" environment simultaneously across multiple platforms. The J9 virtual machine environment has proved in independent testing to provide a high-performance, compact environment for running embedded Java applications across a broad range of processors. Complementing the J9 virtual machine, IBM's VisualAge Micro Edition allows developers to quickly and easily create and deploy e-business applications to automotive Telematics devices, hand-held computers, PDAs and cellular telephones. More information is available at www.ibm.com/embedded.
"IBM's work with major manufacturers and industry electronics partners like Delphi demonstrates that we are fast becoming a preferred supplier of development platforms for new automotive solutions," said Raj Desai, director, IBM Telematics Solutions. "Because we have optimized our technologies for use on multiple operating systems and bring to market a complete solution and services, we can help enable revolutionary new features. Time-to-market is a distinct advantage we have over our competitors."
Added Desai, "With these technologies in place, drivers will have the ability to simply talk to their cars instead of fumbling with knobs or glancing at attention-diverting displays. Computers can be enabled to constantly monitor a car's performance and, when a problem is detected, notify both driver and service shop."
http://www.anywhereyougo.com/j2me/Article.po?id=4670411
culater