Cloud Made of Atomic Units

This week I had morning coffee with Shai Fultheim, the CEO of ScaleMP, and it turned into an interesting discussion about cloud and high performance computing. I want to share the great story of this company, which delivers smart virtualization solutions that enable server consolidation. Its offering maintains high-performance software applications while simplifying the management of multiple computing resources in the cloud.

“The evolution of what we know today as the `Cloud` started with talk about `Clusters`; then it became `The Grid`,” says Fultheim. He also mentions “on-demand application,” a term from the 1980s that referred to asking the software vendor to develop a fully tailor-made application; today, “on-demand” means the exact opposite. Interesting, isn’t it?! ScaleMP delivers the capability to consolidate up to 128 x86 server systems into one big virtual machine with 4 to 1,024 processors (8,192 cores) and up to 64 TB of shared memory. The company supports large organizations in industries such as higher education and research and financial services.
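To make those headline numbers concrete, here is a back-of-the-envelope sketch in Python of how 128 commodity nodes add up to a single large virtual machine. The per-node figures (8 sockets, 8 cores per socket, 512 GB of RAM) are my own illustrative assumptions, not ScaleMP’s published specifications:

```python
# Back-of-the-envelope check of the aggregation numbers above.
# Per-node figures are assumptions chosen to match the headline totals.
nodes = 128
sockets_per_node = 8      # assumed
cores_per_socket = 8      # assumed
ram_gb_per_node = 512     # assumed

total_sockets = nodes * sockets_per_node          # processors
total_cores = total_sockets * cores_per_socket    # cores
total_ram_tb = nodes * ram_gb_per_node / 1024     # shared memory, TB

print(f"{total_sockets:,} processors, {total_cores:,} cores, "
      f"{total_ram_tb:.0f} TB shared memory")
# 1,024 processors, 8,192 cores, 64 TB shared memory
```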

Fultheim noted that the computing resources for running the application itself are not the only need in high performance computing (HPC). An additional pain point for this market is large-scale databases (size) as well as data-intensive computing (IOPS). No doubt big data is one of the main issues under discussion these days: clouds have evolved into a massive data resource and a data collaboration repository, which has made systems integration a complex cloud issue.

Any discussion of large databases should raise Hadoop/MapReduce as a good option for the one-dimensional database problem (where data-model complexity is low), and it really works where the data is too big for a database (i.e., you’ve reached the technical limits of conventional database software). With very large multi-dimensional datasets, the cost of regenerating indexes is so high that you can’t easily index changing data; and with many machines trying to write to the database, you can’t get locks on it.
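For readers unfamiliar with the model: MapReduce sidesteps the indexing and locking problems by scanning the data in parallel rather than maintaining a shared, mutable index. Below is a minimal, single-process Python sketch of the programming model only; real Hadoop jobs are typically written in Java and run distributed across many machines:

```python
from collections import defaultdict

def map_phase(record):
    # Emit (key, value) pairs; here, one (word, 1) pair per word.
    for word in record.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # Combine all values observed for a key; here, sum the counts.
    return key, sum(values)

def map_reduce(records):
    grouped = defaultdict(list)
    for record in records:                 # map + shuffle
        for key, value in map_phase(record):
            grouped[key].append(value)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

print(map_reduce(["big data is big", "data beats locks"]))
# {'big': 2, 'data': 2, 'is': 1, 'beats': 1, 'locks': 1}
```

Hadoop’s value is not this logic itself, which is trivial, but running it fault-tolerantly across thousands of machines without any node ever needing a lock on shared state.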

What is Hadoop? Check the I Am OnDemand Terminology page.

Watch a video: The next frontier of the cloud database, a panel moderated by Geva Perry (the Think Out Cloud blogger)

We discussed the well-known private vs. public cloud argument, and as I anticipated, it was clear that the HPC market applies only to the private cloud. Fultheim mentioned the legal issue of having a third party, the public cloud vendor, in the “data-holding game”: “It will be easier for the feds to get your data from the public cloud, and IT organizations will probably not even know it.” He believes the future is hybrid. I tend to agree, though I think the public cloud will hold the majority of the applications and the data.

The next subject raised by Fultheim: chipmakers. Today it is in the chipmakers’ interest to develop and supply the micro-server market to support the clouds, as the costs are obviously lower and the margins these companies gain are higher.

“Intel’s Boyd Davis, general manager of the Data Center Group, said on a call today that the chipmaker would address this market for highly dense and low-power architectures for cloud-based web service companies, a market Intel estimates might become 10 percent of the total server market in the next four to five years.”

“We aren’t trying to cram a desktop processor into a server anymore.”

Check this brief GigaOM article discussing this evolving trend.

“Today, HPC is not on people’s minds when they think about their computing limitations,” said Fultheim.

Fultheim mentioned a serious lack of HPC awareness, specifically among small and mid-size enterprises, for workloads such as simulations in the manufacturing and life sciences industries.

Last month, cloud giant IBM announced its new high performance computing (HPC) cloud offerings “to help clients tackle advanced scientific and technical computing workloads like analytics, simulations for product development, climate research and life sciences.” During the global Supercomputing Conference in Hamburg, Germany, IDC discussed the HPC compute server market, which is projected to grow at a 7.5% CAGR, from $8.6 billion in 2009 to $12.4 billion in 2014. Cloud computing could increase the adoption of HPC among small- and medium-size businesses, but some hurdles need to be overcome.
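As a quick sanity check on IDC’s figures, 7.5% compounded over the five years from 2009 does land roughly at the projected 2014 number:

```python
# Does a 7.5% CAGR take $8.6B (2009) to ~$12.4B (2014)?
start, cagr, years = 8.6, 0.075, 5   # five compounding years, 2009 -> 2014
projected = start * (1 + cagr) ** years
print(f"${projected:.1f}B")          # ~$12.3B, in line with IDC's ~$12.4B
```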

Read: IDC Shares HPC Market Figures, Trends, Predictions at ISC by HPCWire Magazine

Penetration of companies like ScaleMP into the public cloud market will solve the awareness issue and lead to a great change in the way we consume cloud resources: not just by picking from the fixed (“hardcoded”) AWS catalogue, but by aggregating atomic computing resources into exactly the computing power the system needs at any point in time. It will take time to get there but, if this happens, the IaaS market will move one big step further toward delivering a real, living cloud entity.
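To illustrate what consuming atomic units instead of a fixed catalogue might look like, here is a hypothetical Python sketch of a provisioner that aggregates small units until a requested capacity is met. The unit size and the greedy strategy are my own assumptions for illustration, not any vendor’s actual API:

```python
# Hypothetical "atomic unit" provisioning: aggregate small units until the
# requested capacity is covered, rather than picking a fixed instance type.
ATOMIC_UNIT = {"cores": 2, "ram_gb": 8}  # assumed smallest schedulable unit

def units_needed(req_cores, req_ram_gb):
    """Return how many atomic units cover the requested capacity."""
    by_cores = -(-req_cores // ATOMIC_UNIT["cores"])   # ceiling division
    by_ram = -(-req_ram_gb // ATOMIC_UNIT["ram_gb"])
    return max(by_cores, by_ram)                       # satisfy both dimensions

# e.g., a job needing 50 cores and 384 GB of RAM:
n = units_needed(50, 384)
print(n, "units ->", n * ATOMIC_UNIT["cores"], "cores,",
      n * ATOMIC_UNIT["ram_gb"], "GB RAM")
# 48 units -> 96 cores, 384 GB RAM
```

The point of the sketch is the contrast: capacity is assembled to fit the workload, rather than the workload being squeezed into the nearest catalogue size.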
