When we created Google App Engine, we wanted to allow developers to build the way Google does. Unencumbered by old models, bad software, or limited infrastructure, we were free to bring the innovations born inside Google out to you. This remains our aim.

We know how important flexibility is to you in the languages you write in, the deployment model you use, the tools you build with, and the infrastructure on which your software runs. Today we’re announcing a unique collaboration between AppScale and Google Cloud Platform. We are making a direct investment, in the form of contributing engineers, to drive compatibility and interoperability between AppScale and App Engine.

Together, you’ll have even more infrastructure flexibility for your apps.
With AppScale you can run your App Engine app on any physical or cloud infrastructure, wherever you want. You also have the flexibility of configuring AppScale yourself, or working with AppScale Systems to manage the infrastructure for you.

Imagine you have a subset of customers who would be better served by your infrastructure because they have custom integration requirements. You can continue to serve your worldwide customers with the power of App Engine, and route requests from specific customers down to an installation of AppScale just for them.

Imagine you have an existing data center or colocation capacity, but want to get started building apps designed with the cloud in mind. Simply install AppScale today and start building apps that are cloud-ready.

Today, AppScale exposes a subset of the 1.8 App Engine API, while the current version of App Engine is 1.9. We’re working with AppScale engineers and the broader community to add compatibility with App Engine 1.9, including Versions and Modules. We’re eager to take community feedback on feature prioritization, and we’re specifically interested in integrating Managed VMs into AppScale to further increase interoperability.

There are lots of details to share and we’re eager to keep you posted on our progress. We will continue to share updates on this blog, the AppScale project wiki, the AppScale Github page, and AppScale Systems’ blog. If you want to give it a try today, try AppScale’s Fast Start for running AppScale on top of Google Compute Engine.

We hope you’re as excited as we are about our work together; we can’t wait to see what new, amazing things you build with it. As always, I’m personally interested in your feedback, so please don’t hesitate to reach out with any questions, ideas, or great stories. Thanks!

-Miles Ward, Global Head of Solutions, Google Cloud Platform


Creating testing, staging, and production environments for your application is still too hard.  Today, you might run scripts to configure each environment or set up a separate server to run an open source configuration tool. Customizing and configuring the tools and testing the provisioning process takes time and effort. Additionally, if something goes wrong, you need to debug your server deployment and your tools.

Google Cloud Deployment Manager, now in beta, lets you build a description of what you want to deploy and takes care of the rest. The syntax is declarative, meaning you declare the desired outcome of your deployment, rather than the steps the system needs to take. For example, if you want to provision an auto-scaled pool of VMs, you would declaratively define the VM instance type you need, assign the VMs to a group, and configure the autoscaler and load balancer. Instead of creating and configuring each of these items through a series of command line interface calls or writing code to call the APIs, you can define these resources in a template and deploy them all through one command to Deployment Manager.
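To make that concrete, here is a hedged sketch of what such a declarative configuration might look like. The resource names, zone, image, and property values below are illustrative, not a complete working deployment:

```yaml
# Illustrative Deployment Manager config: an instance template, a managed
# instance group built from it, and an autoscaler attached to the group.
# All names, zones, and property values here are examples only.
resources:
- name: frontend-template
  type: compute.v1.instanceTemplate
  properties:
    properties:            # the instance properties the template stamps out
      machineType: n1-standard-1
      disks:
      - boot: true
        autoDelete: true
        initializeParams:
          sourceImage: projects/debian-cloud/global/images/family/debian-8
      networkInterfaces:
      - network: global/networks/default
- name: frontend-group
  type: compute.v1.instanceGroupManager
  properties:
    zone: us-central1-f
    baseInstanceName: frontend
    instanceTemplate: $(ref.frontend-template.selfLink)
    targetSize: 2
- name: frontend-autoscaler
  type: compute.v1.autoscaler
  properties:
    zone: us-central1-f
    target: $(ref.frontend-group.selfLink)
    autoscalingPolicy:
      minNumReplicas: 2
      maxNumReplicas: 10
      cpuUtilization:
        utilizationTarget: 0.6
```

The `$(ref.…)` syntax lets one resource refer to another, so Deployment Manager can work out the correct creation order for you.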

Key features of Deployment Manager:
  • Define your infrastructure deployment in a template and deploy via command line or RESTful API
  • Templates support jinja or python, so you can take advantage of programming constructs, such as loops, conditionals, and parameterized inputs for deployments requiring logic
  • UI support for viewing and deleting deployments in Google Developers Console
  • Tight integration with Google Cloud Platform resources from compute to storage to networking, which provides faster provisioning and visualization of the deployments
A Sample Deployment
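Because templates can be written in Python, a deployment like the one above can be parameterized. Below is a hedged sketch of what a Deployment Manager Python template might look like; the property names (`numInstances`, `zone`) and resource layout are illustrative, not taken verbatim from the product documentation:

```python
# Sketch of a Deployment Manager Python template. Deployment Manager calls
# GenerateConfig(context) and expects back a dict describing resources; the
# property names used here (numInstances, zone) are illustrative.

def GenerateConfig(context):
    """Generate one identical VM resource per requested instance."""
    resources = []
    for i in range(context.properties['numInstances']):
        zone = context.properties['zone']
        resources.append({
            'name': 'vm-%d' % i,
            'type': 'compute.v1.instance',
            'properties': {
                'zone': zone,
                'machineType': 'zones/%s/machineTypes/n1-standard-1' % zone,
            },
        })
    return {'resources': resources}


# Minimal stand-in for the context object Deployment Manager would pass in.
class FakeContext(object):
    properties = {'numInstances': 3, 'zone': 'us-central1-f'}

print(len(GenerateConfig(FakeContext())['resources']))  # 3
```

This is where the programming constructs mentioned above pay off: the loop and the `numInstances` parameter replace what would otherwise be copy-pasted resource definitions.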

We’re often asked how Deployment Manager differs from existing open source configuration management systems like Puppet, Chef, SaltStack, or Ansible. Each of these is a powerful framework for configuration management, but none is natively integrated into Cloud Platform. To truly unlock the power of intent-driven management, we need a declarative system that allows you to express what you want to run, so that our internal systems can do the hard work of running it for you. Also, unlike other configuration management systems, Deployment Manager offers UI support directly in the Developers Console, allowing you to view the architecture of your deployments.

Because Deployment Manager is natively supported by Cloud Platform, you don’t need to deploy or manage any additional configuration management software, and there’s no additional cost for running it. Take the complexity out of deploying your application on Cloud Platform and test drive Deployment Manager today. We also welcome your feedback.

-Posted by Chris Crall, Technical Program Manager


The Cloud Innovation World Cup is part of the world’s leading series of innovation competitions and aims to foster groundbreaking solutions and applications for cloud computing. This year’s Cloud Innovation World Cup is looking for the most innovative cloud computing solutions, and we’re proud to be a co-sponsor. Contestants can submit their solutions in the following categories:
  • Mobility
  • Industry 4.0
  • Smart Living
  • Urban Infrastructure
  • ICT Business Services
Finalists will be announced at the Cloud World Forum in London on June 24, 2015. The award ceremony, with finalist presentations, will take place on July 8, 2015 at Google’s New York office.

Submission Deadline: 6th May 2015
Participants of the Cloud Innovation World Cup have the chance to develop their solutions using $100,000 worth of Google Cloud Platform credits.  When registering, simply select the option indicating that you want to build your solution using Google Cloud Platform.
Award Details
Deadline: 6th May 2015, 11:59pm
Submit your solution and win:
  • Placement in the “Hall of Fame”
  • Opportunity to present your innovative solution at the award ceremony
  • Speaking opportunities at international conferences
  • Dedicated marketing activities to promote the finalists and the category winners
  • Access to the worldwide network of Innovation World Cup Series, an opportunity to connect with important market players at a very early stage of product development
  • Business acceleration

As a reminder, participants of the Cloud Innovation World Cup can develop their solutions using $100,000 worth of Google Cloud Platform credits, provided free by co-sponsor Google. Learn more about the Google Cloud Platform credits here.


For further information please visit:

Should you have any questions, please contact:

Today’s guest blogger is Fredrik Averpil, Technical Director at Industriromantik. Fredrik develops the custom computer graphics pipeline at Industriromantik, a digital production company specializing in computer generated still and moving imagery.

As a small design and visualization studio, we focus on creating beautiful 3D imagery – be it high-resolution product images or TV commercials. To successfully do this, we need to ensure we have access to enough rendering power, and at times, we find ourselves in a situation where our in-house render farm's capacity isn’t cutting it. That’s where Google Compute Engine comes in.

By taking our 3D graphics pipeline, applications, and project files to Compute Engine, we expand and contract available rendering capacity on-demand, in bursts. This enables us to increase project throughput, deliver on client requests, and handle render peak times with ease while remaining cost efficient – with the added bonus of getting us home in time for supper.

Figure 1. We created and rendered these high resolution interiors using our custom computer graphics production pipeline.

The setup
We use the very robust Pixar Tractor as our local render job manager, as it’s designed for scaling and can handle a large number of tasks simultaneously. Our local servers – which serve applications, custom tools, and project files – are mirrored to Compute Engine ahead of rendering time. This makes cloud rendering just as responsive as a local render. Because our Compute Engine instances run the Tractor client, they seamlessly pop up in the Tractor management dashboard in our local office. To me, pouring 1,600 cores’ worth of instances into your local 800-core render farm is a reminder of how powerful the technology is.

Figure 2. Google Compute Engine  instances access the local office network through a VPN tunnel.

The basic setup of the file server is an instance equipped with enough RAM to allow for good file caching performance. We use an n1-highmem-4 instance as a file server to serve 50 n1-standard-32 rendering instances. We then attach additional persistent disk storage (in increments of 1.5TB, for high IOPS) to the file server instance to hold projects and applications. Because we use ZFS for this pool of persistent disks, the file server's storage can be increased on demand, even while rendering is in progress. For increased ZFS caching performance, local SSD disks can be attached to the file server instance (a feature currently in beta). It all depends on what you need for your specific project: setup will vary based on how many instances you plan to use and what kind of performance you’re looking for.
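Growing the pool mid-render can be sketched roughly as follows. The disk, instance, and pool names are hypothetical, and `DRY="echo"` makes the script print each command rather than execute it (clear it to run for real):

```shell
# Hypothetical names: disk "projx-disk-2", file server "fileserver-1",
# ZFS pool "projects". DRY="echo" prints the commands instead of running them.
DRY="echo"

# Create a 1.5TB persistent disk and attach it to the file server instance.
$DRY gcloud compute disks create projx-disk-2 --size 1500GB
$DRY gcloud compute instances attach-disk fileserver-1 --disk projx-disk-2

# On the file server itself: add the new device to the pool. ZFS grows the
# available storage immediately, without interrupting renders in progress.
$DRY sudo zpool add projects /dev/sdc
```

The key property is the last step: `zpool add` extends the pool live, which is what lets storage scale while jobs keep rendering.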

Operations on the file server and file transfers can be performed over SSH from a Google Compute Engine-authenticated session, and ultimately be automated through Tractor:

# Create a folder on the GCE file server over SSH port 22 (replace
# user@FILESERVER_IP with your file server's public IP address)
ssh -p 22 -t -t -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine user@FILESERVER_IP "sudo mkdir -p /projects/projx/"
# Upload project files from the local /projects/projx/ to the same path on the file server
rsync -avuht -r -L --progress -e "ssh -p 22 -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine" /projects/projx/ user@FILESERVER_IP:/projects/projx/

If you store your project data in a bucket, you could also retrieve it from there:

# Copy files from a bucket onto the file server over SSH port 22 (again,
# user@FILESERVER_IP is a placeholder for your file server's address)
ssh -p 22 -t -t -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine user@FILESERVER_IP "gsutil -m rsync -r gs://your-bucket/projects/projx/ /projects/projx/"

Software executing on Compute Engine (managed by Tractor) accesses software licenses served from our local office via the Internet. In addition, instances running the Tractor client need to be able to contact the local Tractor server. All of this can be achieved using the VPN beta, as shown in figure 2 above.

Since the number of software licenses cannot be scaled on demand the way the number of instances can, we take advantage of the fastest machines available: 32-core instances, which deliver a 97-98% speed boost over 16-core instances (awesome scaling!) when rendering with V-Ray for Maya, our renderer of choice.
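To make the license arithmetic concrete, here's a toy calculation. The 1.97 factor is the ~97% speed boost quoted above; the license count and frame rate are made up:

```python
# With licenses fixed, throughput per license is the metric that matters.
# 1.97 ~= the 97% speed boost quoted for 32-core vs. 16-core instances;
# the license count and per-instance frame rate are illustrative.
licenses = 10
frames_per_hour_16core = 4.0                 # per licensed instance (made up)
frames_per_hour_32core = frames_per_hour_16core * 1.97

farm_16 = licenses * frames_per_hour_16core  # fleet throughput on 16-core
farm_32 = licenses * frames_per_hour_32core  # fleet throughput on 32-core
print(farm_32 / farm_16)                     # ~1.97x the frames, same licenses
```

The same ten licenses render nearly twice the frames per hour simply by sitting on bigger machines.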

When a rendering task completes, the files can be copied back home easily, again managed by Tractor, directly after a frame render completes:

# Copy files from Google Compute Engine file server "fileserver-1" onto local machine
gcloud compute copy-files username@fileserver-1:/projects/projx/render/*.exr /local_dest_dir

Figure 3. Tractor dashboard, showing queued jobs and the task tree of a standard render job.

Avoiding manual labour and micromanagement of Compute Engine rendering is highly recommended. This is also where Tractor excels: the automation of complex processes. Daisy-chaining tasks in Tractor, such as spinning up the file server, allocating storage, and transferring files makes large and parallel jobs a breeze to manage.

Figure 4. Tractor task tree.

Figure 4 illustrates the daisy-chaining of tasks. When a project upload to the Compute Engine file server is initiated, a disk is attached to the file server and added to the ZFS pool, and the project files are uploaded along with the specific software versions required. No files can be uploaded before the disk storage has been attached, so in this case some processes wait for others to complete before starting.

With Compute Engine and its per-minute billing, I’ve stopped worrying and started loving the auto-scaling of instances. By having a script periodically check in with Tractor (using its query Python API) for pending tasks, we can spin up instances (via the Google Cloud SDK) to crunch a render and quickly wind them down when they’re no longer needed. Now that’s micromanagement done right.
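The core of that polling loop can be sketched as follows. This is a hedged illustration: Tractor's query API isn't reproduced here (pending work is assumed to arrive as a core count from elsewhere), instance creation simply shells out to the gcloud CLI, and the naming scheme and farm ceiling are invented:

```python
import math
import subprocess

CORES_PER_INSTANCE = 32   # n1-standard-32 render nodes
MAX_INSTANCES = 50        # illustrative farm ceiling

def instances_needed(pending_cores, running_instances):
    """How many extra instances the pending queue justifies starting."""
    wanted = math.ceil(pending_cores / CORES_PER_INSTANCE)
    return max(0, min(wanted, MAX_INSTANCES) - running_instances)

def start_instances(count):
    """Spin up render nodes via the Cloud SDK (hypothetical naming scheme)."""
    for i in range(count):
        subprocess.check_call([
            'gcloud', 'compute', 'instances', 'create', 'render-%03d' % i,
            '--machine-type', 'n1-standard-32'])

# In the real script, pending_cores would come from Tractor's query API.
print(instances_needed(pending_cores=1600, running_instances=10))  # 40
```

Winding instances down is the mirror image: when `instances_needed` stays at zero and nodes go idle, delete them, and per-minute billing stops the meter almost immediately.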

Figure 5. High resolution exterior 3D rendering for Etaget, Stockholm.

For anyone who wants to utilize Compute Engine rendering but needs a turnkey, managed solution, I’d recommend checking out the beta of Zync Render, which is built on the excellent Google Cloud Platform infrastructure. Zync Render has its own front-end UI that manages the file transfer and provides the software licenses required for rendering, so you don’t have to implement a Compute Engine-specific integration. This makes that part of the rendering a whole lot easier. I’m keeping my fingers crossed that Zync Render will ultimately offer its software license server to Google Compute Engine users, so that we can seamlessly scale licenses along with any number of instances.

I believe that every modern digital production company dealing with 3D rendering today, regardless of size, needs to leverage affordable cloud rendering in some shape or form in order to stay competitive. I also believe that the key to success is a focus on automation. The Google Cloud SDK provides excellent tools to do exactly this, pairing the powerful Google Compute Engine with an advanced, highly customizable render job manager such as Pixar’s Tractor. For smaller companies or individuals who don’t wish to orchestrate these advanced queuing systems themselves, Zync Render takes advantage of the Compute Engine infrastructure.

For additional computer graphics pipeline articles, tips, and tricks, check out Fredrik’s blog; for more information about Industriromantik, visit the company’s website.


Earlier this year, we announced the beta of the Google Cloud Logging service, which included the capability to:

  • Stream logs in real-time to Google BigQuery, so you can analyze log data and get immediate insights.
  • Export logs to Google Cloud Storage (including Nearline), so you can archive log data for longer periods to meet backup and compliance requirements.

Today we’re expanding Cloud Logging capabilities with the beta of the Cloud Logging Connector, which allows you to stream logs to Google Cloud Pub/Sub.  With this capability you can stream log data to your own endpoints and further expand how you make big data useful.  For example, you can now transform and enrich the data in Cloud Dataflow before sending it to BigQuery for analysis.  Furthermore, this provides easy real-time access to all your log data, so you can export it to your private cloud or any third-party application.

Cloud Pub/Sub
Google Cloud Pub/Sub delivers real-time and reliable messaging in one global, managed service that helps you create simpler, more reliable, and more flexible applications. By providing many-to-many, asynchronous messaging that decouples senders and receivers, it allows for secure and highly available communication between independently written applications.  With Cloud Pub/Sub, you can push your log events to another Webhook, or pull them as they happen.  For more information, check out our Google Cloud Pub/Sub documentation.
High-Level Pub/Sub Schema
Configuring Export to Cloud Pub/Sub
Configuring export of logs to Cloud Pub/Sub is easy and can be done from the Logs Viewer user interface.  To get to the export configuration UI, start in the Developers Console, go to Logs under Monitoring, and then click Exports on the top menu.  Currently this supports export configuration for Google App Engine and Google Compute Engine logs.

One Click Export Configuration in the Developers Console

Transforming Log Data in Dataflow
Google Cloud Dataflow allows you to build, deploy, and run data processing pipelines at a global scale.  It enables reliable execution for large-scale data processing scenarios such as ETL and analytics, and allows pipelines to execute in either streaming or batch mode. You choose.  

You can use the Cloud Pub/Sub export mechanism to stream your log data to Cloud Dataflow and dynamically generate fields, combine different log tables for correlation, and parse and enrich the data for custom needs.  Here are a few examples of what you can achieve with log data in Cloud Dataflow:

  • Sometimes it is useful to see the data only for the key applications for top customers.  In Cloud Dataflow, you can group logs by Customer ID or Application ID, filter out specific logs, and then apply some aggregation of system level or application level metrics.
  • On the flip side, sometimes you want to enrich the log data to make it easier to analyze, for example by appending marketing campaign information to customer interaction logs, or other user profile info. Cloud Dataflow lets you do this on the fly.
  • In addition to preparing the data for further analysis, Cloud Dataflow also lets you perform analysis in real time. So you can look for anomalies, detect security intrusions, generate alerts, keep a real-time dashboard updated, etc.
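The first two bullets above can be sketched in plain Python. This is a toy illustration of the filter-enrich-aggregate pattern, not Cloud Dataflow code; the field names (`customer_id`, `latency_ms`) and the campaign table are invented:

```python
from collections import defaultdict

# Hypothetical log records; field names are illustrative.
logs = [
    {'customer_id': 'acme', 'app_id': 'web', 'latency_ms': 120},
    {'customer_id': 'acme', 'app_id': 'web', 'latency_ms': 80},
    {'customer_id': 'beta-corp', 'app_id': 'api', 'latency_ms': 300},
]
campaigns = {'acme': 'spring-promo'}          # enrichment side data

totals = defaultdict(int)
counts = defaultdict(int)
for record in logs:
    if record['customer_id'] not in campaigns:
        continue                              # filter: key customers only
    record['campaign'] = campaigns[record['customer_id']]  # enrich on the fly
    totals[record['customer_id']] += record['latency_ms']
    counts[record['customer_id']] += 1

avg_latency = {c: totals[c] / counts[c] for c in totals}
print(avg_latency)  # {'acme': 100.0}
```

In a real pipeline, each of these steps would be a Dataflow transform operating on the Pub/Sub stream, with the aggregation windowed over time rather than run over a fixed list.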

Cloud Dataflow can stream the processed data to BigQuery, so you can analyze your enriched data.  For more details, please see the Google Cloud Dataflow documentation.

Getting Started
If you’re a current Google Cloud Platform user, the capability to stream logs to Cloud Pub/Sub is available to you at no additional charge.  Applicable charges for using Cloud Pub/Sub and Cloud Dataflow will apply. For more information, visit the Cloud Logging documentation page and share your feedback.

-Posted by Deepak Tiwari, Product Manager