Wednesday, November 5, 2014

Google Cloud goes corporate with peering, carrier interconnects, VPN


 Google Cloud struts its corporate stuff, adding connections from carrier hotels and direct peering as well as virtual private networking access for risk-averse companies seeking hybrid cloud.
Google SVP Urs Holzle
credit: Barb Darrow/Gigaom
Google has apparently been listening to what business customers want from a public cloud.
On Tuesday the company announced that it will introduce Virtual Private Networking support in the first quarter of 2015. VPNs will let customers tie securely into Google Cloud via the internet, and they are a key foundation of hybrid clouds that let customers keep some key applications and data under their control in-house while also relying on the public cloud for other workloads.
Also new from Google: Direct peering promises a fast connection to Google’s network from 70 locations worldwide. And new carrier interconnects add Google cloud access from Equinix, IX Reach, Level 3, Tata Communications, Telx, Verizon and Zayo facilities around the world.

Read more
 
Summary: As it expands its hybrid/public cloud effort, VMware adds new Australia coverage, bringing the number of company-run vCloud Air regions to nine worldwide.
VMware is taking vCloud Air cloud to Australia in a move that brings the total data center coverage for the company’s hybrid cloud to nine — with five data centers in the U.S. and one each in the U.K., Germany, Japan and now Australia.
The company will own and operate its Australian infrastructure but host it out of Telstra data centers.
Earlier this year, VMware renamed its cloud offering from vCloud Hybrid Services to vCloud Air.

Read more

Google has been laying the foundation of its cloud platform for years

Summary: Google announced a handful of new features on Tuesday, but the company is only turning into products what it has been building for years.
Google data center
credit: Google
A stream of Google cloud executives took the stage in San Francisco on Tuesday to announce a slew of new cloud computing features, including advanced networking and resource-management capabilities. Speaking on a panel at the AppDynamics user conference in Las Vegas, Allan Naim, global product lead for the Google Cloud Platform, told the audience how the company has been laying the groundwork for these capabilities for years.
He explained how application containers can be a huge source of operational efficiency if you know how to manage them. Everything at Google runs on containers and the company spins up 2 billion of them per week, he said. In the name of squeezing every ounce of capacity out of each server, a single box might contain hundreds of containers split among multiple workloads (Gmail and MapReduce jobs, for example) and it’s management software like Google’s Omega that helps ensure they’re all getting the resources they need.
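The bin-packing idea Naim describes can be sketched in a few lines. The following is a hypothetical, greatly simplified illustration of first-fit container placement under per-server CPU and memory capacity; the workload names and numbers are invented for the example, and a real cluster scheduler such as Google's Omega is vastly more sophisticated than this.

```python
def place(containers, servers):
    """Assign each container to the first server with enough free capacity.

    containers: list of (name, cpu, mem) resource requests
    servers: list of dicts tracking remaining 'cpu' and 'mem' capacity
    Returns a dict mapping container name -> server index.
    """
    placement = {}
    for name, cpu, mem in containers:
        for i, srv in enumerate(servers):
            if srv["cpu"] >= cpu and srv["mem"] >= mem:
                # Reserve the resources on this server and move on.
                srv["cpu"] -= cpu
                srv["mem"] -= mem
                placement[name] = i
                break
        else:
            raise RuntimeError(f"no capacity left for {name}")
    return placement

# Hypothetical mixed workload: a serving job and two batch jobs
# sharing two 8-CPU / 16 GB servers.
containers = [("gmail-fe", 2, 4), ("mapreduce-1", 4, 8), ("mapreduce-2", 4, 8)]
servers = [{"cpu": 8, "mem": 16}, {"cpu": 8, "mem": 16}]
print(place(containers, servers))
# → {'gmail-fe': 0, 'mapreduce-1': 0, 'mapreduce-2': 1}
```

The point of the sketch is the one Naim makes: by co-locating latency-sensitive and batch workloads on the same box up to its capacity, the operator squeezes out utilization that dedicated servers would waste.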

Read more 

The target audience for Google Container Engine is wondering about security and interoperability


Summary: Google wants to show the enterprise world that it can trust it when it comes to container technology. But if you want to give its new container engine a test run, you won’t be able to use it on other public clouds.
Brian Johnson onstage at Google Cloud Platform.
credit: Jonathan Vanian/Gigaom
Google’s announcement of its new Google Container Engine, a managed service version of its open-sourced Kubernetes container-management system, shows that the search giant believes it can lure new customers to its cloud through its container expertise. But it’s clear from talking to various attendees at Google’s Cloud Platform event that while many people are interested in an enterprise version of Kubernetes, questions of container security, ease of use on different cloud platforms and stability need to be ironed out before folks are ready to sign up.

Read more

Monday, November 3, 2014

Why EMC thinks it’s ready to power the internet of things


Summary: EMC President Jeremy Burton came on the Structure Show podcast this week to talk about the company’s current plan to deliver hybrid cloud-storage systems and its future plan to provide the infrastructure underpinning the next generation of big data and internet-of-things applications.
Jeremy Burton. Source: EMC
Despite major shakeups among its large IT-vendor peers over the past few years, storage giant EMC maintains that it’s more than capable of sticking around for the long haul and housing data for even the most-innovative types of applications. This week, Jeremy Burton, president of products at EMC, came on the Structure Show podcast to explain the company’s plans for the years to come.

Read more

Akanda exits stealth and promises better routing in virtualized networks

  
Summary: The San Francisco-based startup took in a seed round of $1.5 million. Its open-source software can virtualize the part of the network that handles intelligent routing with IP addresses.
Akanda, a San Francisco-based startup that aims to improve routing in a virtualized network, has come out of stealth and taken a seed-funding round of $1.5 million. Web-hosting provider DreamHost helped co-found the company and supplied all of the seed-round funding.
The two-man shop, less than a month old, consists of CEO Henrik Rosendahl, a virtualization veteran who recently helped sell CloudVolumes to VMware, and CTO Mark McClain, former project team lead of the OpenStack Neutron networking project and a former DreamHost senior developer.

Read more

Microsoft makes its cloud data move

    
Summary: Microsoft’s cloud data stack was short and slow-growing. But this summer, something changed, raising its stature considerably.
It’s taken Microsoft quite a while to get traction in the cloud, and even longer for it to get its cloud data story right. For the longest time, things weren’t looking good. I say that as someone who has worked with – and at various times championed – Microsoft technology for most of my career. As much as I’ve wanted Microsoft to do well in the cloud data arena, I thought it was doomed to an eternity of near misses.

Fast forward

But things have been steadily improving since the summer, especially in the last few weeks. The glass that was half empty in the spring is now nearly full, with a complete HDInsight Big Data service based on Hadoop 2.0; an able machine learning service called Azure Machine Learning; a document store NoSQL database called DocumentDB; a publish-subscribe service for capturing streaming data called Event Hubs; a service for processing and analyzing that data called Azure Stream Analytics; a data transformation workflow service called Data Factory; and an eponymous Search service with ElasticSearch at its core.

Read more