
Re: None

Thursday, 10/02/2014 11:45:00 PM


Post# of 48146
Two very inspiring articles from today struck me as perfect indicators of Sphere 3D's market potential for Glassware 2.0.
To understand why, go back to HTFBS' October newsletter and reread what Peter Bookman revealed about Glassware 2.0 and containerization for Windows.

Article 1: Global Desktop Operating System Market Share



Microsoft on Tuesday unveiled its next operating system for all devices — computers, tablets, phones, and more — called Windows 10.
But before Windows 10 launches in early 2015, we thought it’d be interesting to look at the current landscape of operating systems to see what Microsoft’s latest effort is up against.
As it turns out, the biggest competition for Windows is ...well, Windows. Based on real-time data from NetMarketShare charted for us by Business Insider Intelligence, Windows 7 is still the world's most dominant desktop operating system, and across all versions of Windows combined, Microsoft's operating system comprises a whopping 92% of the global desktop market share.
Beyond desktops, when you look at all operating systems across desktop and mobile devices, Windows 7 is still the most-used OS with 38-39% market share.
Windows as a whole makes up about 62% of the global market share of all operating systems.

Keep that Bookman material from HTFBS' October newsletter in mind when reading the second article; it is very revealing of Glassware's potential.

Article 2: Why Docker and containerization is a boon to web software startups

By: James Thomason, CTO, Dell Cloud Marketplace. I love science, technology, new ideas, and new ventures. The first company I started was at the age of about 13, building custom antenna arrays for amateur radio operators, sold through flyers I distributed at the local amateur radio club under the tongue-in-cheek moniker "Amateur Antenna Concepts". The business was a disaster, if not the name. I lost orders, lost money, lost inventory, and generally made a terrible mess of things, eventually losing the most important asset a business has – its customers. I was learning the painful, iterative process of developing new ideas, and taking them to market. Inexplicably, I wanted more.

In the 10 or so startups I have helped build since 1995, one of the biggest challenges we always faced was the "scale problem". Scale is a problem of success, a great problem you get to solve when you succeed at creating some initial value for your customers. If the measure of a software company's ability to innovate is the velocity of software creation, then the measure of a web software company's ability to innovate includes getting that software through testing and integration, and successfully deployed into production. The scale problem can impact each of these functional areas differently and sometimes in surprising (and interdependent) ways. Often, the resolution of one scale problem simply reveals another previously unknown scale problem, leading to a seemingly unending list of issues and remediation activities.

Over the last 10 years, creating value in web software applications, and scaling them up, has become a lot easier. The growth of agile software development practices has greatly accelerated software development, testing, and delivery. At the same time, the advent of cloud services like Amazon, Google, and Azure has reduced the time to acquire and deploy web infrastructure to near zero. Continuous integration and deployment has taken hold as an operations and architecture style, automating test and integration processes and reducing the overall time between development and customer benefit. Today, software startups can build world-class web applications, scale them to millions of users, and ensure their availability with less money, less time, and fewer employees than ever before.

These advances in scale have not been without tradeoffs, however, and a new set of problems has emerged around development and operations complexity. While the horizontal scaling pattern increases capacity along with the number of application server instances, the difficulty becomes managing the sheer number of application server deployments, along with their configuration and dependencies on other infrastructure and services. This also has severe implications for continuous integration and deployment processes, where functionality often has to be “stubbed out” owing to differences in the production and development environments. While further infrastructure abstraction in the form of platform-as-a-service can improve developer workflows and further reduce operations complexity, these issues are not simply negated by PaaS. As it turns out, many of the startups who initially adopted PaaS have eventually moved on to more traditional infrastructure models as they have scaled, owing to better economics, and increased flexibility to manage problems and make changes. These problems and complexities are at the root of why so many people are excited about containerization, and more specifically, Docker (the de-facto containerization standard for Linux).

One of the major benefits of Docker and containerization is that they enable an architectural style known as immutable infrastructure. In an immutable infrastructure, components of the infrastructure are never modified once deployed, but rather they are replaced by new components generated from a pristine state. For example, in the “old world” of 2008, an upgrade to the Java version across an infrastructure of 1000 servers would require:

logging in to each server
downloading the Java release
installing Java
restarting the application servers.
To deal with the scale challenge, these steps would be automated through scripting (e.g., an ssh for-loop), configuration management (Puppet, Chef), and other external systems. The problem, of course, is that this type of process works until it doesn't, at which point teams have to discover the differences between systems and environments and recover on the fly.
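That scripted "old world" upgrade might look something like this minimal sketch (host names, the download URL, and paths are hypothetical, not from the article; the loop prints each host's commands instead of executing them over ssh):

```shell
#!/bin/sh
# Dry-run sketch of a 2008-style fleet upgrade: the same install steps
# repeated over ssh for every server. Hosts and paths are made up.
upgrade_host() {
  # Prints the remote command; replace `echo` with `ssh "$1"` to run it.
  echo "ssh $1 -- 'curl -O https://example.com/jdk.tar.gz && tar xzf jdk.tar.gz -C /opt && systemctl restart app-server'"
}

for host in app01 app02 app03; do   # ...imagine 1,000 of these
  upgrade_host "$host"
done
```

When any one host has drifted (a different OS image, a half-applied earlier upgrade), the loop fails partway through, which is exactly the "works until it doesn't" failure mode described above.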

In the new world of immutable application containers, applications are built in place with their dependencies and configuration. Wherever those containers are deployed, they run with the same dependencies and configuration. Containers are free of infrastructure dependencies such as hostnames and IP addresses; those are injected at runtime when the container is deployed on a target host. This delineation provides a very clean separation of concerns between the application and the host infrastructure, and ultimately reduces the number of configuration endpoints by orders of magnitude (to just one).
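As a minimal sketch of that separation of concerns (the image, jar path, and variable names are illustrative assumptions, not from the article), the image bakes in the application and its dependencies, while host-specific values arrive only at deployment:

```dockerfile
# Everything the app needs is built into the immutable image.
FROM java:7
COPY app.jar /opt/app/app.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Deploying with something like `docker run -d -e DB_HOST=10.0.3.7 -p 8080:8080 myapp:1.0` injects the infrastructure-specific values at runtime, so the identical image can run unmodified in development, test, and production.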

The benefits for software development are vast. Already, Docker (and others) are working on various new forms of service discovery, in order to solve the infrastructure dependency injection problem, and consequently the "awareness" of dependencies between application components on different servers and infrastructure. This means that in the future, development teams can focus on creating and maintaining microservices, another important scaling pattern for the software development process. Microservices are discrete pieces of functionality packaged as network-available services, often through REST interfaces, which are completely independent of other services. By leveraging the immutable architecture style, integration testing becomes a matter of deploying the necessary microservices from their pristine state in the repository and executing the test harnesses against the complete system.

For operations, the immutable style provides a means to deploy major releases fractionally, testing them with small portions of the user base before rolling them out across the entire infrastructure. Containers themselves can be automatically introspected to derive dependencies on other containers, networks, storage, and other systems. Rolling an upgrade back merely requires deploying the previous version of the container(s) in question and terminating the newer version.
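Under this model a rollback is just a redeploy of an older image. A dry-run sketch, with a hypothetical registry, service name, and tags (it prints the docker commands instead of running them):

```shell
#!/bin/sh
# Roll back by replacing the container, never by mutating it in place.
# DOCKER defaults to a dry run that prints commands; set DOCKER=docker
# to execute for real. Registry and tags below are illustrative.
DOCKER="${DOCKER:-echo docker}"

rollback() {
  # $1 = container/service name, $2 = known-good image tag
  $DOCKER rm -f "$1"
  $DOCKER run -d --name "$1" "registry.example.com/$1:$2"
}

rollback orders-api 1.4.2
```

Because the previous image still exists in the registry in its pristine state, the rollback is as repeatable as the original deployment.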

In the last few months there has been a relentless outpouring of new orchestration systems for Docker containers, including Kubernetes (Google), Mesos (Mesosphere), as well as PaaS capabilities from Flynn.io, Deis, and OpenShift (Red Hat). As these new tools continue to emerge and mature, all of these developments will ultimately translate to a lot less software startup equity being spent on overhead and a lot more being spent on creating value through working software that scales.

That is a huge win for start-ups, their founders, their employees, their investors – and their customers.

Here at Dell Cloud Marketplace, we’re creating a new generation of tools to help IT and developer teams compare, consume, and control cloud services. Emerging technologies like Docker and containerization are a big part of what we’re building and we’re excited to showcase some of our progress in the forthcoming public beta of Dell Cloud Marketplace.

Now remember: Sphere 3D and Glassware are part of Dell's Workstation Virtualization Center of Excellence in Texas, as announced in early March, so Dell has had quite some time to test Glassware 2.0 and see how the market is starting to accept it.