With the recent addition of support for both Google Cloud Storage (GCS) and Google Compute Engine (GCE), Zencoder is breaking new ground in becoming "cloud-agnostic," giving customers control over where and how their content is processed, at scale.
Google Cloud Storage joins Amazon S3 and Rackspace Cloud Files as a supported cloud storage provider in Zencoder. Just as you would with the s3:// or cf:// protocols, you can construct input and output URLs with the gcs:// protocol to access content stored in Google Cloud Storage. With a bit of initial setup, you'll be able to ingest content from your GCS buckets and push transcoded renditions (including adaptive HTTP Live Streaming) to GCS.
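For instance, here's a minimal sketch of a job request that reads from and writes to GCS. The bucket and file names are placeholders you'd swap for your own, and it assumes your GCS credentials are already stored with Zencoder (covered below):

{
  "input": "gcs://my-input-bucket/source.mov",
  // Hypothetical input bucket; replace with your own
  "output": [
    {
      "url": "gcs://my-output-bucket/renditions/output.mp4"
      // Hypothetical output bucket and path; replace with your own
    }
  ]
}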
We're also pleased to announce that Zencoder VOD transcoding jobs can now be run on Google Compute Engine, which was recently made generally available. By using both Google Compute Engine and Google Cloud Storage, you can take advantage of Google's massive network to power your video transcoding and delivery workflow.
Below, we've put together a quick guide on configuring Zencoder to take advantage of both GCS and GCE. If you have any questions, feel free to get in touch with our support team at help@zencoder.com.
Google Cloud Storage Setup
Generate Interoperable GCS Credentials
Google Cloud Storage is interoperable with the Amazon S3 API, but it requires some initial setup. To get started with GCS on Zencoder, you'll need to enable Interoperable Access from the Google Cloud Console and generate a key. See the "To generate a developer key" section of Google's Cloud Storage documentation for instructions on enabling interoperable access and generating access keys.
Securely Store Credentials with Zencoder
Once you have a set of Interoperable GCS credentials for your account, you can securely store them with Zencoder on the Zencoder Credentials page.
Under Add Credentials, select Google Cloud Storage. Then, populate the Google Cloud Storage Access Key ID and Secret Access Key fields with your newly generated credentials. Be sure to check "Default credentials for transfers with gcs"; this ensures that any input or output URL specifying the gcs:// protocol will use these credentials.
Transcoding with Google Compute Engine
Armed with a set of GCS credentials and some stored content to transcode, you're ready to start running jobs on GCE. By setting the region parameter, you can control, on a job-by-job basis, which cloud region your transcoding jobs run in. The new region names for GCE are "us-central-gce" and "eu-west-gce".
The example below shows a working job, which pulls an input video from GCS, creates renditions for HTTP Live Streaming, and sends the output streams to GCS. You can test this out by changing my-bucket to an existing GCS bucket that you own and pasting this into the Request Builder. Note: all existing Zencoder parameters and storage providers work with the new GCE regions, including Rackspace Cloud Files and Amazon S3.
Example HLS Job Using GCE and GCS
{
  "test": true,
  "input": "gcs://zencoder-testing/test.mov",
  "region": "us-central-gce",
  "output": [
    {
      "audio_bitrate": 64,
      "audio_sample_rate": 22050,
      "base_url": "gcs://my-bucket/",
      // ^^ Change me!
      "filename": "file-64k.m3u8",
      "segment_seconds": 2,
      "format": "aac",
      "headers": {
        "x-goog-acl": "public-read"
      },
      "type": "segmented"
    },
    {
      "audio_bitrate": 56,
      "audio_sample_rate": 22050,
      "base_url": "gcs://my-bucket/",
      // ^^ Change me!
      "decoder_bitrate_cap": 360,
      "decoder_buffer_size": 840,
      "filename": "file-240k.m3u8",
      "segment_seconds": 2,
      "max_frame_rate": 15,
      "type": "segmented",
      "video_bitrate": 184,
      "headers": {
        "x-goog-acl": "public-read"
      },
      "width": 400,
      "format": "ts"
    },
    {
      "audio_bitrate": 56,
      "audio_sample_rate": 22050,
      "base_url": "gcs://my-bucket/",
      // ^^ Change me!
      "decoder_bitrate_cap": 578,
      "decoder_buffer_size": 1344,
      "filename": "file-440k.m3u8",
      "segment_seconds": 2,
      "type": "segmented",
      "video_bitrate": 384,
      "headers": {
        "x-goog-acl": "public-read"
      },
      "width": 400,
      "format": "ts"
    },
    {
      "audio_bitrate": 56,
      "audio_sample_rate": 22050,
      "base_url": "gcs://my-bucket/",
      // ^^ Change me!
      "decoder_bitrate_cap": 960,
      "decoder_buffer_size": 2240,
      "filename": "file-640k.m3u8",
      "segment_seconds": 2,
      "type": "segmented",
      "video_bitrate": 584,
      "headers": {
        "x-goog-acl": "public-read"
      },
      "width": 480,
      "format": "ts"
    },
    {
      "audio_bitrate": 56,
      "audio_sample_rate": 22050,
      "base_url": "gcs://my-bucket/",
      // ^^ Change me!
      "decoder_bitrate_cap": 1500,
      "decoder_buffer_size": 4000,
      "filename": "file-1040k.m3u8",
      "segment_seconds": 2,
      "type": "segmented",
      "video_bitrate": 1000,
      "headers": {
        "x-goog-acl": "public-read"
      },
      "width": 640,
      "format": "ts"
    },
    {
      "audio_bitrate": 56,
      "audio_sample_rate": 22050,
      "base_url": "gcs://my-bucket/",
      // ^^ Change me!
      "decoder_bitrate_cap": 2310,
      "decoder_buffer_size": 5390,
      "filename": "file-1540k.m3u8",
      "segment_seconds": 2,
      "type": "segmented",
      "video_bitrate": 1484,
      "headers": {
        "x-goog-acl": "public-read"
      },
      "width": 960,
      "format": "ts"
    },
    {
      "audio_bitrate": 56,
      "audio_sample_rate": 22050,
      "base_url": "gcs://my-bucket/",
      // ^^ Change me!
      "decoder_bitrate_cap": 3060,
      "decoder_buffer_size": 7140,
      "filename": "file-2040k.m3u8",
      "segment_seconds": 2,
      "type": "segmented",
      "video_bitrate": 1984,
      "headers": {
        "x-goog-acl": "public-read"
      },
      "width": 1024,
      "format": "ts"
    },
    {
      "base_url": "gcs://my-bucket/",
      // ^^ Change me!
      "filename": "playlist.m3u8",
      "streams": [
        {
          "bandwidth": 2040,
          "path": "file-2040k.m3u8"
        },
        {
          "bandwidth": 1540,
          "path": "file-1540k.m3u8"
        },
        {
          "bandwidth": 1040,
          "path": "file-1040k.m3u8"
        },
        {
          "bandwidth": 640,
          "path": "file-640k.m3u8"
        },
        {
          "bandwidth": 440,
          "path": "file-440k.m3u8"
        },
        {
          "bandwidth": 240,
          "path": "file-240k.m3u8"
        },
        {
          "bandwidth": 64,
          "path": "file-64k.m3u8"
        }
      ],
      "headers": {
        "x-goog-acl": "public-read"
      },
      "type": "playlist"
    }
  ]
}
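Once you've swapped in your own bucket name, you can paste this JSON into the Request Builder as mentioned above, or POST it directly to the Zencoder job API at https://app.zencoder.com/api/v2/jobs with your API key in the Zencoder-Api-Key header. Also note the "test": true flag at the top of the request: it keeps the job in integration mode, so remove it (or set it to false) when you're ready to run full-length jobs.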