
Nadendla

09/14/12 5:07 PM

#15999 RE: gibnit #15993

Cloud computing is complex and sophisticated. Hybrid computing is even more so, and it's the future. It takes time just to put the infrastructure in place; after that, the revenues flow in a big way. What looked like old news has come to an end now that the infrastructure is in place! Just to give you an idea of how tough it is to put HYBRID COMPUTING infrastructure in place, check this link:

http://www.informationweek.com/cloud-computing/infrastructure/4-keys-to-hybrid-cloud-planning/232900731

4 Keys To Hybrid Cloud Planning
When blending private and public cloud infrastructure, IT leaders must consider everything from IT skill sets to management tools. Prioritize these four topics.

By Beth Stackpole InformationWeek
April 23, 2012 12:30 PM
Amidst all the hype surrounding cloud computing, the hybrid cloud approach -- the blending of both private and public cloud environments -- is gaining traction. But the reality of building a bridge that effectively leverages the strengths of both architectures is turning out to be a greater challenge than many anticipated.
With private cloud implementations set to accelerate this year, hybrid clouds, too, are destined to grow in popularity. That means organizations are going to have to ramp up efforts to evaluate application and data location scenarios based on factors such as cost, core business enablement, and business alignment, Unisys said in making its 2012 cloud computing predictions.

Initially, most companies' vision for a hybrid cloud involves offloading some applications to the public cloud, where there is a compelling need to take advantage of scalability benefits, while at the same time maintaining applications that demand a higher level of security in private cloud infrastructures. A longer-term and more sophisticated view of the hybrid cloud blurs the boundaries between public and private environments, creating an infrastructure that allows applications to shuffle seamlessly between them based on need and economics.
Experts say the former hybrid cloud scenario is fairly straightforward and not necessarily new, while the latter instance is where companies are still struggling. "People think it's easy to set up a hybrid cloud, but when you start to mix vendors and technologies, it amps up the level of complexity and the amount of attention required for planning out the solution," noted Dave LeClair, director of product management and marketing for Stratus Technologies, a maker of high availability server and software solutions.
Indeed, hybrid cloud models that span an entire environment are a very different animal from a hybrid cloud solution for a single capability, where the application operates on a private platform and so-called "cloud bursts" to a public cloud when transient capacity is required, noted Jonathan Shaw, PhD, principal at consulting company Pace Harmon. Another more complex hybrid cloud interpretation is to segregate requirements within a single capability -- storage-as-a-service, for example -- so that different storage tiers might be delivered privately vs. publicly as part of an overall storage strategy. "This requires virtual machine portability, session management, etc., which is a more complex technical problem," Shaw said.
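
To make the "cloud burst" idea concrete, here is a minimal sketch, in Python, of the kind of control loop that adds public-cloud capacity only while private capacity is saturated. The thresholds, the helper functions, and the instance IDs are all made up for illustration; this is not any vendor's implementation, just the general shape of the technique Shaw describes.

```python
import random
import time

# Illustrative thresholds -- not from the article or any vendor.
BURST_THRESHOLD = 0.85    # add public capacity above 85% private utilization
RELEASE_THRESHOLD = 0.50  # release it once utilization falls below 50%

def private_cloud_utilization() -> float:
    """Stand-in for a real monitoring call; returns a simulated utilization figure."""
    return random.uniform(0.3, 1.0)

def provision_public_instance() -> str:
    """Stand-in for a real provisioning API call."""
    instance_id = f"burst-{random.randint(1000, 9999)}"
    print(f"bursting: started public instance {instance_id}")
    return instance_id

def release_public_instance(instance_id: str) -> None:
    """Stand-in for a real termination API call."""
    print(f"load dropped: released public instance {instance_id}")

def burst_control_loop(cycles: int = 10, poll_seconds: float = 1.0) -> None:
    """Move transient load to the public cloud only while private capacity is exhausted."""
    burst_instances: list[str] = []
    for _ in range(cycles):
        load = private_cloud_utilization()
        if load > BURST_THRESHOLD:
            burst_instances.append(provision_public_instance())
        elif load < RELEASE_THRESHOLD and burst_instances:
            release_public_instance(burst_instances.pop())
        time.sleep(poll_seconds)

if __name__ == "__main__":
    burst_control_loop()
```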
With an eye towards the more complex hybrid cloud as the end goal, experts say companies need to consider the following factors as part of their deployment roadmap.
1. Understand your IT architecture and application needs.
Not only do companies need to determine what applications and capabilities are suitable for the public cloud vs. a private delivery model (based on factors like demand variability, high availability, response times, and security/privacy requirements), they also need to examine how their applications and workloads are designed to determine if they can be effectively deployed in a hybrid situation.
Typically, running some applications on a public cloud and some on a private cloud is a better scenario than spanning a single application across both. "It's much better to have discrete instantiations of applications on one or the other as opposed to a single application spanning both," said Michael Crandell, CEO of RightScale, which provides cloud management capabilities. So, for example, you might do R&D work on a private cloud and then launch the finished product on a public cloud, or vice versa, he explained.
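
As a rough illustration of that kind of triage, the placement criteria the article names (demand variability, security/privacy, response times) could be folded into a simple decision rule like the sketch below. The scoring logic and the application names are invented for the example and are not a real methodology.

```python
from dataclasses import dataclass

@dataclass
class AppProfile:
    """Illustrative attributes only -- the rules below are not a real methodology."""
    name: str
    demand_variability: str   # "low", "medium", "high"
    sensitive_data: bool      # security/privacy constraints
    latency_critical: bool    # strict response-time requirements

def recommend_placement(app: AppProfile) -> str:
    """Suggest public vs. private placement for a whole application (not split across both)."""
    if app.sensitive_data or app.latency_critical:
        return "private cloud"
    if app.demand_variability == "high":
        return "public cloud"   # elasticity is the main benefit of going public
    return "private cloud"      # default: keep steady, predictable workloads in-house

portfolio = [
    AppProfile("marketing-site", demand_variability="high", sensitive_data=False, latency_critical=False),
    AppProfile("payroll", demand_variability="low", sensitive_data=True, latency_critical=False),
]

for app in portfolio:
    print(f"{app.name}: {recommend_placement(app)}")
```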
2. Be realistic about the integration challenges that lie ahead.
Crandell said there are at least 10 different public cloud infrastructures, each with its own set of APIs, not to mention the growing list of private cloud infrastructure offerings like OpenStack or Eucalyptus. The thinking is that you can go back and forth and deploy workloads across platforms, but because there is currently no universal standard for workloads in the cloud, you need a portability layer to create the interoperability. "When you start talking about splitting between the public and private cloud environments because you want some level of elasticity, the complexity ramps up dramatically," said Stratus' LeClair. "You need to go into this with both eyes open or you'll find yourself getting into awkward situations where you've moved something that shouldn't have been moved."
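
A portability layer of the sort Crandell describes is essentially one common interface with an adapter per provider API. The sketch below is a hand-rolled, hypothetical version of that idea in Python (libraries such as Apache Libcloud fill this role in practice); the adapter classes here just print what a real implementation would do and are not any vendor's SDK.

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Minimal portability layer: one interface, one adapter per provider API."""

    @abstractmethod
    def launch(self, image: str, size: str) -> str:
        """Start a workload and return a provider-specific identifier."""

    @abstractmethod
    def terminate(self, instance_id: str) -> None:
        """Stop a previously launched workload."""

class PublicCloudAdapter(CloudAdapter):
    def launch(self, image: str, size: str) -> str:
        # A real adapter would call the public provider's own API or SDK here.
        print(f"[public] launching {image} ({size})")
        return "pub-001"

    def terminate(self, instance_id: str) -> None:
        print(f"[public] terminating {instance_id}")

class PrivateCloudAdapter(CloudAdapter):
    def launch(self, image: str, size: str) -> str:
        # A real adapter would call the private cloud's API (OpenStack, Eucalyptus, ...).
        print(f"[private] launching {image} ({size})")
        return "priv-001"

    def terminate(self, instance_id: str) -> None:
        print(f"[private] terminating {instance_id}")

def deploy(adapter: CloudAdapter, image: str, size: str = "small") -> str:
    """Application code targets the portability layer, never a specific provider API."""
    return adapter.launch(image, size)

if __name__ == "__main__":
    deploy(PublicCloudAdapter(), "web-frontend")
    deploy(PrivateCloudAdapter(), "billing-db", size="large")
```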
3. Factor management tools into the equation.
One of the most critical pieces of a hybrid cloud scenario is a management platform used to monitor and manage the environment with an eye towards resource provisioning, performance, and scalability. The issue here is having a single interface and management layer that can work both sides of the infrastructure. IT shops typically have their own on-premises management consoles for monitoring internal networks while public clouds employ their own set of tools, and a company implementing the hybrid cloud needs visibility into both.
"Unless you want to duplicate work, you have to find a management interface that puts all the resources in a single pane of glass so you don't have to switch between different products to manage this," said Crandell's Rightscale, which offers a product that provides automation, autoscaling, and monitoring capabilities that span public and private clouds.
4. Ramp up organizational skill sets.
Most IT organizations have highly specialized experts who know virtualization, or applications, or servers and backup. A hybrid cloud cuts across all those skill sets and you need to ramp up your team accordingly. "Very few people have the skills that cut across all of these capabilities," said LeClair. "When you're talking about your IT team, there's retraining that has to go on to move beyond how we've run things for the past 25 years."




About HYBRID CLOUDS and how they differ from and are superior to normal cloud computing!...

http://www.techrepublic.com/blog/tr-out-loud/hybrid-computing-not-cloud-computing-is-the-future-of-technology/1711

Hybrid computing, not cloud computing, is the future of technology
By Donovan Colbert
March 9, 2010, 6:12 AM PST
Takeaway: TechRepublic member dcolbert believes that cloud computing will complement — rather than replace — traditional local computing. Do you agree? Is hybrid computing the future of technology?

One of the few remaining print technology magazines with a viable market recently did a series on turning failure into success. Among these articles was a piece on Larry Ellison’s vision of thin-client network computing.

In 1995, there was a sudden uptick in interest among executive management around the world in secure thin-client computing devices. The concept was that a single piece of "big iron" in the background would house all data and applications, and that small, lightweight, and inexpensive thin-client network devices would sit on user desktops and access the server-side data on the back end.

Oracle’s Larry Ellison was one of the driving forces evangelizing this paradigm shift in how we approached the end-user computing experience. Supposed benefits included lower TCO due to reduced administration, plus less expensive equipment, longer life cycles, and increased security.

In reality, these machines turned out to be stripped-down PCs with very limited processors, no local internal storage, and often no optical, magnetic, or — at the time still viable — floppy disk drives. Otherwise, they hooked up to the same industry-standard 17" CRTs, PC 101-layout keyboards, and two- or three-button mice.

The improved security was always a dubious claim. The lack of any kind of disk input/output (we were still some years away from inexpensive flash media thumb drives) was supposed to prevent malicious employees from transferring data onto removable media in corporate espionage schemes. This limited the portability of data for other, legitimate reasons as well — but if the model was a centralized server with thin clients hooked into it, why would you want to have your data portable outside of your network anyhow, right?

At the time, I was working for MCI VANSIS/SGUS (Value Added Networking System Integration Services/State Government University Systems). A high-level executive asked me to perform an analysis on thin-client computing. I’m not sure that what I came up with was exactly what he had in mind, but I do know that MCI never instituted a large-scale thin-client computing initiative in that group.

Within my report, I stated that ultimately, these systems were closed architecture machines that had a very limited life span. I didn’t know that there was a name for Moore’s Law at that point in my career, but I had worked with PCs and related technology long enough to realize that things change quite a bit every couple of years.

I claimed that users want to be able to store their data locally, to copy it, take it to other machines, and even work on it from home. I felt that the lack of a local hard drive was a serious negative, and although I had not begun my experience as an early adopter and core expert on high-availability solutions, I also realized intuitively that the centralized server model introduced a single point of failure.

With the traditional model of PC computing, even today, if you have a copy of your data on removable media and your PC or your connection to the network or back-end server goes down, you can find another machine and keep going. With the thin-client model that was proposed in 1995, if any of these components failed, you weren’t doing any work until the issue was resolved.

I presented all of these opinions to this executive, but I never heard back from him. I often wonder if he watched network computing devices arrive and then fail for most of the reasons I outlined in my presentation.

Some of the early adopters struggled with these proprietary network devices, which required special keyboards, mice, monitors, and non-standard power supplies — devices that locked them into small vendors who charged far more for these components than the same commodity PC equivalent.

Other shops ended up replacing these devices with full PCs a couple years later. After all, PCs had quadruple the processing power, huge (for the time) hard drives, and the ability to inexpensively write optical media – at the same price they had paid for the dead-end, dated, non-upgradable network computing devices.

Thin-client network computing devices (which, 20 years earlier, were called “dumb-terminal/mainframe” computing devices) quietly died a second death, for the same reasons they had been replaced by the IBM-compatible PC in business applications during the PC revolution of the ‘80s. I thought to myself, “Well, I nailed it, and I’ll certainly never find myself concerned with that model of computing again.”

But then something happened. The Internet became the single most important driver of personal computing. Everyone ended up on their PCs, hooked to the Internet, with broadband connections, on machines so incredibly powerful that companies like Sun gave up on what had always been considered “powerhouse, industrial, RISC computing platforms.”

Coincidentally, people started suggesting that “cloud computing” was the new wave of how people would use their PCs. Over the last few years, that quiet buzz has turned into a crescendo of incessant chatter about how the future of computing is “in the cloud.” Again, Larry Ellison is one of the most vocal proponents of this model, but it still has many of the same inherent risks as before.

The difference this time is that low-cost “disposable” machines are not the key selling point – instead, it’s the convenience of centralized computing. And while apps and data are stored in a centralized machine, it isn’t quite “thin-client” computing we’re talking about. That’s an important distinction that people seem to miss when they claim that the arrival of the cloud is the model that Larry Ellison and other network computing device advocates proposed in the mid-90s.

Thin-client computing utilizes an application and data on a central server. All of the heavy lifting occurs on a machine across the network. The network computing device — the thin-client — only handles screen refreshes and input/output. Citrix and Windows Terminal Server are two common examples of this paradigm of computing.

In contrast, when you load a web app, it loads and executes in a native engine on your PC. A slower PC will perform worse than a faster machine. The same basic principles of local computing apply with cloud computing, but you’re still dependent on that remote back-end server. If it’s down or unreachable, you can’t get to your applications or your data.

Many cloud solutions, such as Google Docs, promise to deliver “local, offline” access to both apps and data. Again, this is not the same thing as thin-client computing. In fact, it is just a web delivery of the same model of local computing that has been popular since the PC revolution of the ‘80s. A copy of the app and a copy of the data sit on your local machine. Instead of using the web to execute the app and load the data, you access it locally — but through your browser (an added layer of complexity) instead of your desktop.
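
The "local copy with a cloud copy" pattern the author describes can be sketched very simply: try the back-end first, refresh a local cache when it answers, and fall back to the cache when it is unreachable. The URL and file name below are hypothetical; this is only an illustration of the offline-access idea, not how Google Docs actually implements it.

```python
import json
import os
import urllib.request

LOCAL_COPY = "document.json"                       # local copy of the data
REMOTE_URL = "https://example.invalid/document"    # hypothetical cloud endpoint

def load_document() -> dict:
    """Prefer the cloud copy when reachable; fall back to the local copy when offline."""
    try:
        with urllib.request.urlopen(REMOTE_URL, timeout=5) as response:
            document = json.load(response)
        with open(LOCAL_COPY, "w") as cache:       # refresh the offline copy
            json.dump(document, cache)
        return document
    except OSError:
        if os.path.exists(LOCAL_COPY):
            with open(LOCAL_COPY) as cache:        # back end unreachable: work locally
                return json.load(cache)
        raise RuntimeError("No cloud connection and no local copy available")

if __name__ == "__main__":
    print(load_document())
```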

You might find it ironic, but I’m writing this with Google Docs, which means that I actually use the cloud-based model that I seem to be writing against in this piece. But do not misunderstand me. I’m not saying that the cloud model will fail. Cloud computing will complement — rather than replace — traditional local computing. In my honest opinion, a hybrid approach to computing is the future of technology.




http://www.infoworld.com/d/cloud-computing/why-the-hybrid-cloud-model-the-best-approach-477

Why the hybrid cloud model is the best approach
Although some cloud providers look at the hybrid model as blasphemy, there are strong reasons for them to adopt it

By David Linthicum | InfoWorld
When the industry first began discussing the hybrid cloud computing model back in 2008, cloud computing purists pushed back hard. After all, they already thought private clouds were silly and a new, wannabe-hip name for the data center. To them, the idea of hybrid clouds that used private clouds or traditional computing platforms was just as ridiculous.

Over time, it became clear that hybrid cloud computing approaches have valid roles within enterprises as IT tries to mix and match public clouds and local IT assets to get the best bang for the buck. Now it's the cloud computing providers who are pushing back on hybrid cloud computing, as they instead try to promote a pure public cloud computing model.


However, these providers are hurting the adoption of cloud computing. Although public cloud computing has valid applications, the path to public cloud computing is not all that clear to rank-and-file enterprises. For many, it's downright scary.

Leveraging a hybrid model accomplishes several goals:

- It provides a clear use case for public cloud computing. Specific aspects of the existing IT infrastructure (say, storage and compute) run in public cloud environments, and the remainder of the IT infrastructure stays on premises. Take the case of business intelligence in the cloud -- although some people promote the migration of gigabytes of operational data to the cloud, many others find the hybrid approach of keeping the data local and the analytical processing in the cloud to be much more practical (a rough sketch of that pattern follows this list).
- Using a hybrid model is a valuable approach to architecture, considering you can mix and match local infrastructure, which is typically a sunk cost but difficult to scale, with infrastructure that's scalable and provisioned on demand. You place the applications and data on the best platforms, then span the processing between them.
- The use of hybrid computing acknowledges and validates the fact that not all IT resources should exist in public clouds today -- and some may never exist in public clouds. Considering compliance issues, performance requirements, and security restrictions, the need for local resources is a fact of life. This experience with the hybrid model helps us all get better at understanding what compute cycles and data have to be kept local and what can be processed remotely.
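
Here is a rough sketch of the business-intelligence pattern mentioned in the first item above, under the assumption that only derived aggregates leave the premises while the raw operational data stays local. The database schema and the cloud analytics endpoint are entirely hypothetical.

```python
import json
import sqlite3
import urllib.request

ANALYTICS_URL = "https://analytics.example.invalid/jobs"   # hypothetical cloud service

def local_daily_totals(db_path: str = "operational.db") -> list[dict]:
    """Summarize raw operational data on premises; detailed records never leave the site."""
    conn = sqlite3.connect(db_path)
    # Assumes a hypothetical "orders" table with order_ts and amount columns.
    rows = conn.execute(
        "SELECT date(order_ts) AS day, SUM(amount) AS revenue FROM orders GROUP BY day"
    ).fetchall()
    conn.close()
    return [{"day": day, "revenue": revenue} for day, revenue in rows]

def submit_for_cloud_analysis(aggregates: list[dict]) -> None:
    """Ship only the derived aggregates to the (hypothetical) cloud analytics service."""
    payload = json.dumps(aggregates).encode()
    request = urllib.request.Request(
        ANALYTICS_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        print("analysis job accepted:", response.status)

if __name__ == "__main__":
    submit_for_cloud_analysis(local_daily_totals())
```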
Of course, there are cloud providers that already have their eye on leveraging a hybrid model. These new kids on the block even provide management and operating-system layers specifically built for hybrid clouds. However, the majority of public cloud providers are religious about pushing everything outside the firewall (after all, that's where they are). They need to be careful that their zealotry doesn't turn off potential cloud converts.




Without an aircraft we cannot fly. What I mean by that is: with the infrastructure now in place, which took such a long time to build (from 2011 until 2012), the company can expect solid revenues rolling in from now on!!