How do I use `fdbbackup` with Google Cloud Storage?

It seems that the backup system is hard-coded to use S3 buckets right now. To be more specific, HTTP requests are sent with an Authorization header of the form Authorization: AWS access:secret.

Google Cloud Storage is technically compatible with S3-style requests, but it appears to require an HMAC signature that is not currently being sent.

Anyway, has anybody else had any success with this? Any recommendations? And is this a feature that would be worth making a future pull-request for?


This is not correct; can you explain what is making it seem this way?

The Authorization header value sent is the access key plus an hmac_sha1 signature calculated from several other parts of the request as required by Amazon’s S3 auth scheme.

I have tested our client against Amazon S3 and Minio and both work.

I do not know for certain that anyone has used it with Google Cloud Storage, so you may be the first. If there is an incompatibility I’m sure it can be remedied easily. Can you paste the error messages you are seeing (sanitized of course)? Also, if you add --knob_http_verbose_level=3 to the command line of fdbbackup commands or the backup agent you will see a lot of HTTP/HTTPS detail printed to standard output including the full responses. GCS might be providing response content that gives more error detail.
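For example, a hypothetical invocation with the knob added (the blobstore URL, credentials, and bucket name below are placeholders, not a real configuration):

```shell
# Increase HTTP verbosity so full request/response detail, including
# response bodies, is printed to standard output.
fdbbackup start \
    -d 'blobstore://access:secret@storage.googleapis.com/backups' \
    --knob_http_verbose_level=3
```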

Ah! --knob_http_verbose_level is so helpful! Thanks.

Here is the output of fdbbackup:

[c408a9b68635a049d0769fb6ea83aa9c] HTTP starting HEAD /backups ContentLen:0
Request Header: Accept: application/xml
Request Header: Authorization: AWS access:secret
Request Header: Content-Length: 0
Request Header: Date: 20190620T231316Z
Request Header: Host: storage.googleapis.com
[c408a9b68635a049d0769fb6ea83aa9c] HTTP code=400 early=0, time=0.007038s HEAD /backups contentLen=0 [204 out, response content len 179]
[c408a9b68635a049d0769fb6ea83aa9c] HTTP RESPONSE:  HEAD /backups
Response Code: 400
Response ContentLen: 179
Reponse Header: Cache-Control: private, max-age=0
Reponse Header: Content-Length: 179
Reponse Header: Content-Type: application/xml; charset=UTF-8
Reponse Header: Date: Thu, 20 Jun 2019 23:13:16 GMT
Reponse Header: Expires: Thu, 20 Jun 2019 23:13:16 GMT
Reponse Header: Server: UploadServer
Reponse Header: X-GUploader-UploadID: xxxxxxxxxxxxxxxxxxxxxx
-- RESPONSE CONTENT--

When I try an identical GET request with httpie:

GET /backups/test_route HTTP/1.1
Accept: application/xml
Accept-Encoding: gzip, deflate
Authorization: AWS access/secret
Connection: keep-alive
Content-Length: 0
Date: 20190620T232744Z
Host: storage.googleapis.com
User-Agent: HTTPie/0.9.8



HTTP/1.1 400 Bad Request
Cache-Control: private, max-age=0
Content-Length: 179
Content-Type: application/xml; charset=UTF-8
Date: Thu, 20 Jun 2019 23:43:03 GMT
Expires: Thu, 20 Jun 2019 23:43:03 GMT
Server: UploadServer
X-GUploader-UploadID: xxxxxxxxxx

<?xml version='1.0' encoding='UTF-8'?><Error><Code>MalformedSecurityHeader</Code><Message>Your request has a malformed header.</Message><ParameterName>Date</ParameterName></Error>

So then I changed the date format until it stopped complaining:

GET /backups/test_route HTTP/1.1
Accept: application/xml
Accept-Encoding: gzip, deflate
Authorization: AWS access/secret
Connection: keep-alive
Content-Length: 0
Date: Thu, 20 Jun 2019 23:35:59 +0000
Host: storage.googleapis.com
User-Agent: HTTPie/0.9.8



HTTP/1.1 400 Bad Request
Cache-Control: private, max-age=0
Content-Length: 179
Content-Type: application/xml; charset=UTF-8
Date: Thu, 20 Jun 2019 23:43:03 GMT
Expires: Thu, 20 Jun 2019 23:43:03 GMT
Server: UploadServer
X-GUploader-UploadID: xxxxxxxxxx

<?xml version='1.0' encoding='UTF-8'?><Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.</Message><StringToSign>GET


Thu, 20 Jun 2019 23:35:59 +0000
/test_route</StringToSign></Error>

But it looks like the Date header is included in the string that gets signed, so changing its format on the client side invalidates the signature.
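For reference, here is a minimal sketch of the S3 v2-style signing involved, assuming the string-to-sign layout that GCS echoed back in the SignatureDoesNotMatch error above (the key and resource values are placeholders). It shows why the two sides must agree on the exact date string: signing the same request with two different date formats produces two different signatures.

```python
import base64
import hashlib
import hmac

def sign_request(secret_key: str, method: str, content_md5: str,
                 content_type: str, date: str, resource: str) -> str:
    # String-to-sign layout matching the <StringToSign> element in the
    # error response: method, content-md5, content-type, date, resource.
    string_to_sign = "\n".join([method, content_md5, content_type,
                                date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Same request, two date formats, two different signatures.
sig_iso = sign_request("secret", "GET", "", "",
                       "20190620T231316Z", "/test_route")
sig_http = sign_request("secret", "GET", "", "",
                        "Thu, 20 Jun 2019 23:13:16 +0000", "/test_route")
```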

That’s unfortunate.

Can you build FDB locally from source but change "%Y%m%dT%H%M%SZ" to "%a, %d %b %Y %H:%M:%S GMT" at this line?

That will produce the date format that you found to work and use it in the signature. If that works everywhere then I’ll make a PR for it. If not, we’ll have to find something that does.
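To illustrate the difference between the two format strings, here is a quick sketch using Python's strftime with the timestamp taken from the log output above:

```python
from datetime import datetime, timezone

ts = datetime(2019, 6, 20, 23, 13, 16, tzinfo=timezone.utc)

# Format currently used by the client (Amazon's compact ISO 8601 style):
iso_date = ts.strftime("%Y%m%dT%H%M%SZ")

# Proposed replacement (classic HTTP Date style that GCS accepted):
http_date = ts.strftime("%a, %d %b %Y %H:%M:%S GMT")
```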


Did you find any solution? I'm facing the same issue.

I haven’t tested this, but if no one gets a patch in to fix this date formatting issue soon, you could try using MinIO in pass-through mode to GCS.

https://docs.min.io/docs/minio-gateway-for-gcs.html
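Per the linked documentation, the gateway is started roughly like this (the project ID and credentials path are placeholders; fdbbackup would then point its blobstore URL at the MinIO endpoint instead of GCS directly):

```shell
# Service-account credentials for the GCS project.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json

# Start MinIO as an S3-compatible gateway in front of GCS.
minio gateway gcs yourprojectid
```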

To be clear, fdbbackup’s S3 client is using exactly the date format required by Amazon’s v4 signature scheme.

https://docs.aws.amazon.com/general/latest/gr/sigv4-date-handling.html

You can include the date as part of your request in several ways. You can use a date header, an x-amz-date header or include x-amz-date as a query parameter.

The time stamp must be in UTC and in the following ISO 8601 format: YYYYMMDD’T’HHMMSS’Z’. For example, 20150830T123600Z is a valid time stamp.

It seems strange that other services would use an S3-like interface but change the date format used in the signature, but maybe that is the case.

If someone can confirm that the patch I posted above works against GCS then we can add a blobstore:// URL parameter for using the alternate date format.