Managing Long Running Operations (LRO) with Google Cloud Client Libraries

2021-12-15

When an API method normally takes a long time to complete, it can be designed to return a google.api_core.operation.Operation to the client. This is essentially a cursor the client can use to check on the status of that operation. When the operation completes, the LRO also returns the service-specific result. What I mean by that is, if the LRO involves creating a Google Cloud Filestore backup, it initially returns a generic Operation that clients can use to check status; once the operation completes, the final LRO response includes a service-specific object, in this case a protobuf of type.googleapis.com/google.cloud.filestore.v1.Backup.

This section covers what an LRO looks like with raw REST API calls, how to manage LROs as a client, and how to use the Operations admin API to list, view, or cancel LROs from any external program.

For more information, see the aip.dev article here: Long Running Operations (AIP-151). You can also see a raw implementation with gRPC here.


This example uses the Cloud Filestore backup API, which returns an LRO. As a basic example, suppose you have the following Filestore instance defined:

$ gcloud filestore instances list
INSTANCE_NAME  ZONE           TIER       CAPACITY_GB  FILE_SHARE_NAME  IP_ADDRESS  STATE  CREATE_TIME
myfilestore    us-central1-a  BASIC_HDD  1024         data             10.99.64.2  READY  2021-12-22T14:37:31

If you use gcloud to create a backup:

$ gcloud filestore backups create backup1 --file-share data --instance myfilestore --region=us-central1 --instance-zone=us-central1-a

All you would see it do is wait and, in a final step, report that the backup succeeded or completed.

Instead, simply add the --log-http flag to the command above and see the raw request-response pairs that make up an Operation.

The initial POST request to the Filestore API calls the create backup method:

uri: https://file.googleapis.com/v1/projects/mineral-minutia-820/locations/us-central1/backups?alt=json&backupId=backup1
method: POST

{"sourceFileShare": "data", "sourceInstance": "projects/mineral-minutia-820/locations/us-central1-a/instances/myfilestore"}

The response back from the server is actually an Operation object.

Note that this includes the canonical 'name' for the operation and the status "done": false.

LRO clients will use the 'name' to check on the status of the operation (which is indicated by the done field).

{
  "name": "projects/mineral-minutia-820/locations/us-central1/operations/operation-1640184440454-5d3bd32f07df5-dbb76c23-9821a4e0",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.common.OperationMetadata",
    "createTime": "2021-12-22T14:47:20.643507917Z",
    "target": "projects/mineral-minutia-820/locations/us-central1/backups/backup1",
    "verb": "create",
    "cancelRequested": false,
    "apiVersion": "v1"
  },
  "done": false
}

The client within gcloud will automatically retry, so every now and then it will call the operations endpoint for Filestore and submit the name from above:

uri: https://file.googleapis.com/v1/projects/mineral-minutia-820/locations/us-central1/operations/operation-1640184440454-5d3bd32f07df5-dbb76c23-9821a4e0?alt=json
method: GET

and the response may again show the same status and operation handle:

{
  "name": "projects/mineral-minutia-820/locations/us-central1/operations/operation-1640184440454-5d3bd32f07df5-dbb76c23-9821a4e0",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.common.OperationMetadata",
    "createTime": "2021-12-22T14:47:20.643507917Z",
    "target": "projects/mineral-minutia-820/locations/us-central1/backups/backup1",
    "verb": "create",
    "cancelRequested": false,
    "apiVersion": "v1"
  },
  "done": false
}

This loop continues until the operation succeeds or fails. The final outcome may look like this (note "done": true):

{
  "name": "projects/mineral-minutia-820/locations/us-central1/operations/operation-1640184440454-5d3bd32f07df5-dbb76c23-9821a4e0",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.common.OperationMetadata",
    "createTime": "2021-12-22T14:47:20.643507917Z",
    "endTime": "2021-12-22T14:47:44.597603581Z",
    "target": "projects/mineral-minutia-820/locations/us-central1/backups/backup1",
    "verb": "create",
    "cancelRequested": false,
    "apiVersion": "v1"
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.filestore.v1.Backup",
    "name": "projects/mineral-minutia-820/locations/us-central1/backups/backup1",
    "state": "READY",
    "createTime": "2021-12-22T14:47:20.640659647Z",
    "capacityGb": "1024",
    "storageBytes": "4254144",
    "sourceInstance": "projects/mineral-minutia-820/locations/us-central1-a/instances/myfilestore",
    "sourceFileShare": "data",
    "sourceInstanceTier": "BASIC_HDD",
    "downloadBytes": "4319391"
  }
}
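The polling loop gcloud runs here can be sketched in a few lines of generic Python. This is only an illustration against the sample payloads above; get_operation is a hypothetical stand-in for whatever transport actually fetches the operation JSON (e.g., an authenticated GET to the operations endpoint):

```python
import time

def poll_operation(get_operation, name, interval=1.0):
    """Poll until the operation reports done, then return its payload.

    get_operation(name) is a hypothetical callable that fetches the
    operation resource as a dict, e.g., an authenticated HTTP GET to
    https://file.googleapis.com/v1/{name}.
    """
    while True:
        op = get_operation(name)
        if op.get("done"):
            return op
        time.sleep(interval)

# Simulate the server: first poll returns done=false, second done=true.
responses = iter([
    {"name": "operations/op-1", "done": False},
    {"name": "operations/op-1", "done": True,
     "response": {"@type": "type.googleapis.com/google.cloud.filestore.v1.Backup",
                  "state": "READY"}},
])

final = poll_operation(lambda name: next(responses), "operations/op-1", interval=0)
print(final["response"]["state"])
```

The SDK clients shown later do exactly this loop for you (with retry and backoff policies) behind Operation.result() or op.Wait().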

The metadata describes the operation itself, while the response is the final "object" this LRO is supposed to return. In this case with Filestore, it's a protobuf of "type.googleapis.com/google.cloud.filestore.v1.Backup".

SDK clients can unmarshal the JSON response into the appropriate type.
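As a sketch of that unmarshal step using only the standard library: once done is true, the response field carries the service-specific object, and its @type tells you which message to decode it into. (The payload below is abbreviated from the walkthrough above; real clients decode into the generated proto type rather than picking fields by hand.)

```python
import json

# The final operation payload from the walkthrough above (abbreviated).
payload = json.loads("""
{
  "name": "projects/p/locations/us-central1/operations/operation-123",
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.filestore.v1.Backup",
    "name": "projects/p/locations/us-central1/backups/backup1",
    "state": "READY",
    "capacityGb": "1024"
  }
}
""")

assert payload["done"], "operation still running; keep polling"

resp = payload["response"]
# Dispatch on the embedded @type to pick the right message to decode into.
if resp["@type"] == "type.googleapis.com/google.cloud.filestore.v1.Backup":
    backup_name = resp["name"]
    state = resp["state"]
    print(backup_name, state)
```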


The following snippets perform the backup and render the LRO in a usable way.

  • client.py

The following is the client that issues the Filestore backup and waits on the LRO to complete:

#!/usr/bin/python

import google.auth
from google.cloud.filestore_v1.services.cloud_filestore_manager import CloudFilestoreManagerClient

import time

import google.api_core.operation
import google.auth.transport.requests

credentials, projectId = google.auth.default()   
client = CloudFilestoreManagerClient(credentials=credentials)

project_number='1071284184436'
region = 'us-central1'
zone = 'us-central1-a'
instance_id='myfilestore'
backup_id = "mybackup-" + time.strftime("%Y%m%d-%H%M%S")

parent = 'projects/{project_number}/locations/{location}'.format(project_number=project_number, location=region)
backup = {
   "source_instance": 'projects/{project_number}/locations/{zone}/instances/{instance_id}'.format(project_number=project_number, zone=zone, instance_id=instance_id),
   "source_file_share": 'data'
}
op = client.create_backup(parent=parent, backup=backup, backup_id=backup_id)
# the operation resource name, e.g., projects/.../locations/us-central1/operations/operation-...
print(op.operation.name)

# you can just wait for it (synchronously)
# op_result = op.result(retry=polling.DEFAULT_RETRY)
# print(op_result)

# or in callback
def my_callback(future):
    # wait for it here
    # you can optionally check status too
    # https://googleapis.dev/python/google-api-core/latest/futures.html#google.api_core.future.async_future.AsyncFuture
    result = future.result()
    print('done LRO')
    print(result)

op.add_done_callback(my_callback)
input("wait here...")
  • admin.py

This is the admin API for the LRO.

You can use it to 'read' the status of the LRO from any other program.


import google.auth
from google.cloud.filestore_v1.services.cloud_filestore_manager import CloudFilestoreManagerClient

import time
from google.api_core import operation
from google.cloud.filestore_v1.types import Backup

from google.longrunning.operations_pb2 import Operation, OperationsStub
from google.api_core import operations_v1 
import google.api_core.operation
import google.auth.transport.requests

from google.api_core.future import polling

credentials, projectId = google.auth.default()   
client = CloudFilestoreManagerClient(credentials=credentials)

project_number='1071284184436'
region = 'us-central1'
parent = 'projects/{project_number}/locations/{location}'.format(project_number=project_number, location=region)

api = client.transport.operations_client
op_list_client = api.list_operations(name=parent, filter_=None)

for op in op_list_client:
   #print(op.name)
   # get an operation by name
   op_get_client = api.get_operation(name=op.name)
   print(op_get_client.name)



# finally, as a demo on reconstructing an LRO from scratch 

#  TODO: can't get either of these to work
# request = google.auth.transport.requests.Request()
# channel = google.auth.transport.grpc.secure_authorized_channel(
#             credentials, request, 'filestore.googleapis.com')
# stub = OperationsStub(channel)
# gop = operation.from_grpc(operation=op._operation, operations_stub=stub,  result_type=google.cloud.filestore_v1.types.Backup, retry=polling.DEFAULT_RETRY)
# gop_result = gop.result()
# print(gop_result)

# hop = operation.from_http_json(operation=op._operation,api_request=request, result_type=google.cloud.filestore_v1.types.Backup)
# hop_result = hop.result()
# print(hop_result)
  • client.go

  • using the Google Cloud client library (preferred)

package main

import (
	"context"
	"fmt"
	"log"
	"strconv"
	"time"

	filestore "cloud.google.com/go/filestore/apiv1"
	filestorepb "google.golang.org/genproto/googleapis/cloud/filestore/v1"
)

const (
	projectId = "mineral-minutia-820"
	location  = "us-central1"
	zone      = "us-central1-a"
)

func main() {

	ctx := context.Background()
	parent := fmt.Sprintf("projects/%s/locations/%s", projectId, location)
	backupId := "backup-" + strconv.FormatInt(time.Now().UTC().UnixNano(), 10)
	sourceInstanceName := "myfilestore"
	sourceFileShareName := "data"
	instanceParent := fmt.Sprintf("projects/%s/locations/%s", projectId, zone)

	c, err := filestore.NewCloudFilestoreManagerClient(ctx)
	if err != nil {
		panic(err)
	}
	defer c.Close()

	// can do this in go routine and channel callback
	req := &filestorepb.CreateBackupRequest{
		Parent:   parent,
		BackupId: backupId,
		Backup: &filestorepb.Backup{
			SourceInstance:  fmt.Sprintf("%s/instances/%s", instanceParent, sourceInstanceName),
			SourceFileShare: sourceFileShareName,
			Description:     "my new backup",
		},
	}
	op, err := c.CreateBackup(ctx, req)
	if err != nil {
		panic(err)
	}

	resp, err := op.Wait(ctx)
	if err != nil {
		panic(err)
	}
	log.Printf("Backup %s", resp.Name)
}
  • The following uses the Google API (discovery-based) client.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
	"time"

	file "google.golang.org/api/file/v1"
)

const (
	projectId = "mineral-minutia-820"
	location  = "us-central1"
	zone      = "us-central1-a"
)

func main() {

	ctx := context.Background()
	fs, err := file.NewService(ctx)
	if err != nil {
		log.Fatalf("%v", err)
	}

	pbs := file.NewProjectsLocationsBackupsService(fs)

	// list
	parent := fmt.Sprintf("projects/%s/locations/%s", projectId, location)
	lstBackups := pbs.List(parent)

	lstResp, err := lstBackups.Do()
	if err != nil {
		log.Fatalf("%v", err)
	}
	for _, b := range lstResp.Backups {
		log.Printf("Backup: %s\n", b.Name)
	}

	// create
	backupId := "backup-" + strconv.FormatInt(time.Now().UTC().UnixNano(), 10)
	sourceInstanceName := "myfilestore"
	sourceFileShareName := "data"
	instanceParent := fmt.Sprintf("projects/%s/locations/%s", projectId, zone)

	createBackup := pbs.Create(parent, &file.Backup{
		Description:     "my new backup",
		SourceInstance:  fmt.Sprintf("%s/instances/%s", instanceParent, sourceInstanceName),
		SourceFileShare: sourceFileShareName,
	})

	op, err := createBackup.BackupId(backupId).Do()
	if err != nil {
		log.Fatalf("%v", err)
	}

	for {
		if op.Done {
			break
		}
		opGetCall := fs.Projects.Locations.Operations.Get(op.Name)
		op, err = opGetCall.Do()
		if err != nil {
			log.Fatalf("%v", err)
		}
		log.Printf("creating backup %s", op.Name)
		time.Sleep(1 * time.Second)
	}

	log.Printf("backup done %v", op.Done)

	jsonBytes, err := op.Response.MarshalJSON()
	if err != nil {
		log.Fatalf("%v", err)
	}
	b := &file.Backup{}
	err = json.Unmarshal(jsonBytes, b)
	if err != nil {
		fmt.Println("error:", err)
	}
	log.Printf("Backup %s", b.Name)
}
  • admin.go

This is the admin API for the LRO.

You can use it to 'read' the status of the LRO from any other program.

package main

import (
	"context"
	"fmt"

	filestore "cloud.google.com/go/filestore/apiv1"
	"google.golang.org/api/iterator"
	longrunningpb "google.golang.org/genproto/googleapis/longrunning"
)

const (
	projectNumber = "1071284184436"
	region        = "us-central1"
	zone          = "us-central1-a"
)

func main() {

	ctx := context.Background()
	parent := fmt.Sprintf("projects/%s/locations/%s", projectNumber, region)

	c, err := filestore.NewCloudFilestoreManagerClient(ctx)
	if err != nil {
		panic(err)
	}
	defer c.Close()

	it := c.LROClient.ListOperations(ctx, &longrunningpb.ListOperationsRequest{
		Name: parent,
	})
	for {
		resp, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("Operation Name: %s\n", resp.Name)

	}
	fmt.Println("Done")
}
  • Client.java
package com.test;

import com.google.cloud.filestore.v1.Backup;
import com.google.cloud.filestore.v1.CloudFilestoreManagerClient;
import com.google.cloud.filestore.v1.LocationName;

public class TestApp {
   public static void main(String[] args) {
      TestApp tc = new TestApp();
   }

   public TestApp() {
      try {

         CloudFilestoreManagerClient cloudFilestoreManagerClient = CloudFilestoreManagerClient.create();

         String projectNumber = "1071284184436";
         String region = "us-central1";
         String zone = "us-central1-a";
         String instanceId = "myfilestore";
         String fileShare = "data";
         String backupId = "a23213243"; // should be random
         LocationName sourceInstance = LocationName.of(projectNumber, zone);


         LocationName parent = LocationName.of(projectNumber, region);
         Backup backup = Backup.newBuilder().setSourceInstance(sourceInstance + "/instances/" + instanceId).setSourceFileShare(fileShare).build();
         Backup response = cloudFilestoreManagerClient.createBackupAsync(parent, backup, backupId).get();
         System.out.println(response.toString());

      } catch (Exception ex) {
         System.out.println("Error: " + ex);
      }
   }

}
  • Admin.java

This is the admin API for the LRO.

You can use it to 'read' the status of the LRO from any other program.

package com.test;

import com.google.cloud.filestore.v1.CloudFilestoreManagerClient;
import com.google.cloud.filestore.v1.LocationName;
import com.google.longrunning.ListOperationsRequest;
import com.google.longrunning.Operation;
import com.google.longrunning.OperationsClient;

public class TestApp {
   public static void main(String[] args) {
      TestApp tc = new TestApp();
   }

   public TestApp() {
      try {

         CloudFilestoreManagerClient cloudFilestoreManagerClient = CloudFilestoreManagerClient.create();

         String projectNumber = "1071284184436";
         String region = "us-central1";

         OperationsClient operationsClient = cloudFilestoreManagerClient.getOperationsClient();

         LocationName name = LocationName.of(projectNumber, region);

         ListOperationsRequest request = ListOperationsRequest.newBuilder()
           .setName(name.toString())
           .setFilter("")
           .build();
         for (Operation element : operationsClient.listOperations(request).iterateAll()) {
           System.out.println("Operation Name "+ element.getName());
         }

 

      } catch (Exception ex) {
         System.out.println("Error: " + ex);
      }
   }

}

TODO.

  • client.js
const { CloudFilestoreManagerClient } = require('@google-cloud/filestore');

const project_number = '1071284184436';
const region = 'us-central1';
const zone = 'us-central1-a';
const instance_id = 'myfilestore';
const backup_id = "emay2112341";
const fileshare = 'data';


async function main() {
	const client = new CloudFilestoreManagerClient();

	const [operation] = await client.createBackup({
		parent: `projects/${project_number}/locations/${region}`,
		backup: {
			sourceInstance: `projects/${project_number}/locations/${zone}/instances/${instance_id}`,
			sourceFileShare: fileshare
		},
		backupId: backup_id
	});

	const [response] = await operation.promise();
	console.log(response);
}

main().catch(console.error);
  • admin.js

This is the admin API for the LRO.

You can use it to 'read' the status of the LRO from any other program.

TODO: this should work, but for some reason the client is always unauthenticated…

const { CloudFilestoreManagerClient } = require('@google-cloud/filestore');

const project_number = '1071284184436';
const region = 'us-central1';

async function main() {
	const client = new CloudFilestoreManagerClient();
	const opcClient = client.operationsClient;

	const [lroresponse] = await opcClient.listOperations({
		name: `projects/${project_number}/locations/${region}`,
		filter: ''
	});
	console.log(lroresponse);
}

main().catch(console.error);

TODO


Also see: Using Google's Client Library Generation system
