5 Years of Building A Cloud: Interview With LivePerson OpenStack Cloud Leader
Koby Holzer has 17 years of experience working with large infrastructure environments, the last 4.5 of them at LivePerson as director of cloud engineering, focusing specifically on OpenStack. His past experience includes working with prominent Israeli telecom companies in the area of technological infrastructure. I have personally known… (read more…)

I believe that this is the year when the enterprise will find its way to the cloud.

The mega Internet sites and applications are the enterprises of the new era, and they will become the role models for the traditional enterprise. IT needs remain the same with regard to scale, security, SLAs, and so on. However, the traditional enterprise CIO has already set the goal for next year: 100% efficiency.

The traditional CIO understands that in order to achieve that goal, IT will need to adopt the cloud, make sure that IT resources are utilized properly, and ensure that the teams move fast.

(read more…)

Every day I talk, write and comment about the “Cloud”. Every time I mention the cloud I try to make sure that I add the name of the relevant cloud operator: “Rackspace Cloud”, “MS Cloud” (Azure) or “HP Cloud”. Somehow none of these cloud titles sound right to me – it seems the only title that really works for me is the “Amazon Cloud”. In this post, I will elaborate on the competition in the IaaS market and explain further why I think this is so.

(read more…)

Last week I attended one of the most popular cloud technology conferences in the world – CloudConnect. The CloudConnect conference started about four years ago. Attending the event gave me a clear sense of the market's maturity and its pace of evolution. Check out the following sections for the main points on what I heard and learned:

Cloud Performance

Underlying infrastructure performance, round-trip time, bandwidth, caching and rendering count among the major factors in an online service's performance. In an interesting presentation by @joeweinman (known for his famous “Cloudonomics” theory), it was claimed that latency carries the greatest weight among these factors. I encourage you to check out his new research – As Time Goes By: The Law of Cloud Response Time presents some good formulas, methods and considerations regarding online services' performance and latency (including simple facts, for example, that people tend to prefer selecting from fewer options on a web page – so you can put less content on a page and achieve better browsing performance).
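To make the latency discussion concrete, here is a back-of-the-envelope sketch in Python. It is my own illustration rather than Weinman's actual formulas, and the round-trip time, bandwidth and rendering figures are made-up assumptions; it simply splits page response time into network trips, transfer time and rendering, and shows how a lighter page wins.

```python
# Back-of-the-envelope response-time model (illustrative only, not Weinman's formulas).
# Total perceived latency is roughly: network round trips + transfer time + rendering.

def page_response_time(rtt_s, round_trips, payload_bytes, bandwidth_bytes_per_s, render_s):
    """Rough total response time (seconds) for a single page load."""
    network = rtt_s * round_trips                      # connection setup plus request/response trips
    transfer = payload_bytes / bandwidth_bytes_per_s   # time to move the content itself
    return network + transfer + render_s

# A lighter page (fewer options, less content) needs fewer bytes and less rendering time.
heavy = page_response_time(0.08, 6, 2_000_000, 5_000_000, 0.40)
light = page_response_time(0.08, 4,   500_000, 5_000_000, 0.15)
print(f"heavy page ~{heavy:.2f}s, light page ~{light:.2f}s")
```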

(read more…)

Last week I was invited to HP Tech Day at HP's campus in Houston to hear and learn more about the giant's cloud offering. I thank HP and Ivy very much for the invitation and for a great event where I was able to learn more and see these clouds up close. I had the privilege of meeting savvy, professional people. It is always great to see people who are enthusiastic about their jobs and proud of their company. Let me share HP's cloud from my point of view.

(read more…)

It is always good to start with Wikipedia's definition, as it helps to initiate a structured discussion. Here is the Wikipedia definition of lock-in:

“In economics, vendor lock-in, also known as proprietary lock-in or customer lock-in, makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. Lock-in costs which create barriers to market entry may result in antitrust action against a monopoly.” Read more on Wikipedia

Does the cloud present a major lock-in? Does the move create substantial switching costs?

“Yes!” is the common answer I hear to those questions. In this article I will challenge that answer, basing my findings on real cloud adoption cases.

Generally, in terms of cloud lock-in we face the same issues as in the traditional world, where a move includes re-implementation of the IT service. It involves issues such as data portability, user guidance and training, integration, and so on.

“I think we’ve officially lost the war on defining the core attributes of cloud computing so that businesses and IT can make proper use of it. It’s now in the hands of marketing organizations and PR firms who, I’m sure, will take the concept on a rather wild ride over the next few years.”

The statement above comes from David Linthicum's article “It's official: 'Cloud computing' is now meaningless”. Since I fully agree with Linthicum on that matter, I will be precise and try to make a clear assessment of the cloud lock-in issue by treating each of the three cloud layers (IaaS, PaaS and SaaS) separately.

In this part, I will address the lowest layer: IaaS lock-in.

It is a fact that IT organizations take advantage of IaaS platforms by moving part or even all of their physical resources to public clouds. Furthermore, ISVs move at least their test and development environments, and are making serious plans to move (or have already moved) part of their production environments to public clouds.

Read more about shifting legacy systems to the cloud by Ben Kepes

When discussing with public IaaS consumers, the conversation always comes to the point where I ask, “Do you feel locked in to your cloud vendor?” Most, if not all, of the companies' leaders claim that the public cloud's values (on-demand, elasticity, agility, etc.) outweigh the lock-in impact, so they are willing to compromise. As a cloud enthusiast it is great for me to see industry leaders' positive approach towards moving their businesses to the cloud (again, this is rather general – each of them refers to a different layer). I do not think that the lock-in is so serious.

For some time this claim sounded pretty reasonable to me, though on second thought I find that the discussion should start from a comparison with the traditional data center's “locks”. Based on this comparison I can already state that one of the major public cloud advantages is its weak lock-in, simply because you don't buy hardware. Furthermore, companies that use the public cloud merely as a hosting extension of their internal data center don't acquire new (long-term or temporary) assets that they can't get rid of without a major loss. With regard to lock-in, the public cloud is great!

Another important point relates specifically to Amazon AWS products that support SaaS scalability and operations. A smart SaaS architect will plan the cloud integration layer so that the application logic and workflow are strongly tied to the underlying IaaS capabilities, such as on-demand auto-provisioning of resources.

Read more about the relationship between web developers and the cloud

For example, the web application can use the cloud integration layer to get on-demand EC2 resources at the specific point where a complex calculation occurs. At a superficial glance, the fact that the cloud API is used as part of the application's run-time logic seems to hold enormous lock-in risk. I disagree, and let me explain why.
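As a minimal sketch of such an integration-layer call, the snippet below uses the boto3 AWS SDK; the AMI ID, instance type and region are placeholder assumptions, and the actual dispatch of the calculation is left as a stub.

```python
# Hypothetical "cloud integration layer" call: when a heavy calculation is about
# to run, ask the IaaS API for a temporary worker on demand and release it after.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # region is a placeholder

def run_heavy_calculation_with_burst_capacity():
    # Provision one temporary worker only for the duration of the job.
    instances = ec2.create_instances(
        ImageId="ami-12345678",   # placeholder worker image
        InstanceType="c5.large",  # placeholder instance type
        MinCount=1,
        MaxCount=1,
    )
    worker = instances[0]
    worker.wait_until_running()
    try:
        pass  # dispatch the complex calculation to the worker here
    finally:
        worker.terminate()        # pay only for the time actually used
```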

As the market leader, Amazon AWS will be (and already is) followed by other IaaS vendors, which will solve the same scalability and operational issues along the same lines as AWS. Basically this means an evolution of IaaS platform standards. A smart cloud integration layer will make it possible to “plug & play” a different IaaS platform, or even orchestrate several in parallel. To strengthen my point, consider the several cloud start-ups (solving IaaS issues such as governance, usage and security) that built their products for Amazon AWS consumers and are seriously targeting support for other IaaS vendors' platforms such as the Rackspace cloud and vCloud. With regard to lock-in, the public cloud is great!
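One way to picture that “plug & play” idea is a sketch using Apache Libcloud, which exposes a single compute interface over many IaaS providers; the credentials and regions below are placeholders, not working values.

```python
# One interface over several IaaS vendors via Apache Libcloud (credentials are placeholders).
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def get_compute_driver(provider_name):
    """Return the same compute driver interface regardless of the IaaS vendor."""
    if provider_name == "aws":
        cls = get_driver(Provider.EC2)
        return cls("ACCESS_KEY", "SECRET_KEY", region="us-east-1")
    if provider_name == "rackspace":
        cls = get_driver(Provider.RACKSPACE)
        return cls("USERNAME", "API_KEY", region="dfw")
    raise ValueError(f"unknown provider: {provider_name}")

# The application code below never changes when the provider does.
driver = get_compute_driver("aws")
for node in driver.list_nodes():
    print(node.name, node.state)
```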

The IaaS vendors in the market recognize lock-in as a common drawback of moving to the cloud. Vendors such as Rackspace back OpenStack, a cloud software platform on which cloud vendors can build IaaS solutions. As Rackspace puts it on their blog:

OpenStack™ is a massively scalable cloud operating system, powering the world’s leading clouds. Backed by more than 50 participating organizations, OpenStack is quickly becoming the industry standard for public and private clouds. Read More

It should be noted that switching applications and data between clouds is still complex and in some cases not feasible. Still, believing in the public cloud's future comes with an understanding of its weak lock-in, and will lead to visionary, long-term strategic plans.

What about private IaaS?

Following my ongoing research on the best cloud option (public, private or hybrid), I have found that outsourcing the IT environment to a private or hybrid cloud involves major lock-in. Implementing a private or hybrid cloud involves a lot of customization, and hence a lack of standards. Private and hybrid clouds have their benefits, but weak lock-in is not one of them. A contract with the vendor of at least 3 to 5 years (a data center's typical depreciation period) on a non-standard environment leads to extreme, long-term lock-in by the standards of the “on-demand world”.

In order to decrease lock-in, the IaaS consumer must establish the organization's need for a private cloud by planning strategically for the long term. Besides the ordinary due diligence to verify the vendor's strength, the contract must include termination points and creative ideas that can weaken the lock-in – for example, renewing the initial contract only after re-assessing the service standards, costs and terms against the cloud market, including the public one. The private cloud vendor must demonstrate ongoing efficiency improvements and corresponding cost reductions.

In his article “Keep the 'Cloud' User in Charge”, Mark Bohannon, VP at Red Hat, warns:

“… by vendors to lock in their customers to particular cloud architecture and non-portable solutions, and heavy reliance on proprietary APIs. Lock-in drives costs higher and undermines the savings that can be achieved through technical efficiency. If not carefully managed, we risk taking steps backwards, even going toward replicating the 1980s, where users were heavily tied technologically and financially into one IT framework and were stuck there.”

Some of the private cloud offerings today have characteristics similar to the traditional data center; to me it seems that the former come with even stronger lock-in impacts. In the event of an IT transition, companies that decide to go that way should expect considerable switching costs and a long-term recovery of their IT operations, and hence of their business.

The second part will discuss cloud lock-in characteristics with regard to the SaaS and PaaS layers.

The three layers of cloud computing (IaaS, PaaS and SaaS) occupy the headlines, with significant capabilities under continuous improvement for hosting services in the cloud. This growing market is slowly changing so that the offered services become generic. The current evolving struggle is the deployment and management of SaaS applications in the cloud; Gartner calls this portion of the cloud market SEAP (Software Enabled Application Platforms). We will dare to say that developers are from Mars and cloud providers are from Venus – let us explain in detail why.

A SaaS application developer builds the application architecture, including the database system, the business logic and the user interface. The software developer (or the SaaS vendor, for that matter) invests in building these three main infrastructure cornerstones in order to bring the business idea to life and launch a new online service.

(read more…)

The first part of Weinman's lecture discussed the basic “go to the cloud” case and demonstrated the cloud environment loads of different corporations' web applications. In this part we will present the six scenarios Weinman laid out, each with a brief analysis and proof of its costs and benefits.

First, let's start with several assumptions and definitions (a small worked example follows the list below):

Five basic assumptions of the pay-per-use capacity model:

  1. Pay per use – capacity is paid for when used and not paid for when not used.
  2. No time dependence – the cost of such capacity is fixed; it does not depend on when it is requested or used.
  3. Fixed unit cost – the unit cost for on-demand or dedicated capacity does not depend on the quantity of resources requested (you don’t get a discount for renting 100 rooms at the same time).
  4. No other costs – there are no additional costs relevant to the analysis.
  5. No delay – all demand is served without any delay.
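As a small worked illustration of these assumptions (the hourly demand figures and unit cost are made up for the example), compare dedicated capacity sized for peak demand with pay-per-use capacity billed only for what is actually consumed:

```python
# Illustrative comparison under the five assumptions above (numbers are made up).
hourly_demand = [10, 10, 12, 40, 80, 35, 12, 10]  # resource units needed in each hour
unit_cost = 1.0                                    # same fixed unit cost in both models

# Dedicated capacity must be sized for the peak and is paid for in every hour.
dedicated_cost = max(hourly_demand) * unit_cost * len(hourly_demand)

# Pay-per-use capacity is paid for only when used, with no delay and no other costs.
on_demand_cost = sum(hourly_demand) * unit_cost

print(f"dedicated (peak-sized): {dedicated_cost:.0f} cost units")
print(f"pay-per-use:            {on_demand_cost:.0f} cost units")
```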

(read more…)

Joe Weinman is well known in the cloud computing community as the founder of Cloudonomics. Presenting complex simulation tools, Weinman characterizes the sometimes counterintuitive business, financial, and user experience benefits of cloud computing, including its on-demand, pay-per-use and other business aspects. Last month I had the pleasure of participating in Weinman's webinar. Weinman discussed several interesting points which I would like to share with you.

Weinman started by contradicting what seem to be the fundamental assumptions regarding the cloud and its benefits. There was nothing radical about what I heard, but it made me think about and challenge all the things I took for granted –

(read more…)

“How fast will federal agencies make the transition? InformationWeek Government and InformationWeek Analytics surveyed 137 federal IT pros in February to gauge their plans. Our 2011 Federal Government Cloud Computing Survey shows a big jump in the use of cloud services, with 29% of respondents saying their agencies are using cloud services, up 10 points from last year. Another 29% plan to begin using the cloud within 12 months, which means adoption should surpass the 50% mark in the year ahead.”

I read so much about the cloud trend, and it seems to be moving very fast. You are welcome to read more about U.S. government cloud adoption on InformationWeek.