It seems that the backup system is hard-coded to use S3 buckets right now. To be more specific, HTTP requests are sent with an Authorization header like Authorization: AWS access:secret.
This is not correct; can you explain what is making it seem this way?
The Authorization header value sent is the access key plus an HMAC-SHA1 signature calculated from several other parts of the request, as required by Amazon’s S3 auth scheme.
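For reference, here is a minimal sketch of how that kind of signature is produced under the classic S3 scheme; the credentials, bucket, and object names below are placeholders, and a real request would also fold any x-amz-* headers into the string to sign:

    # Sketch of the classic "Authorization: AWS <access>:<signature>" scheme.
    # ACCESS/SECRET/mybucket/myobject are placeholders, not real values.
    ACCESS="AKIDEXAMPLE"
    SECRET="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    DATE=$(date -u +"%a, %d %b %Y %H:%M:%S GMT")
    # StringToSign = VERB \n Content-MD5 \n Content-Type \n Date \n Resource
    STRING_TO_SIGN=$(printf "GET\n\n\n%s\n/mybucket/myobject" "$DATE")
    # Signature = base64(HMAC-SHA1(secret, StringToSign))
    SIGNATURE=$(printf "%s" "$STRING_TO_SIGN" | \
        openssl dgst -sha1 -hmac "$SECRET" -binary | openssl base64)
    echo "Authorization: AWS ${ACCESS}:${SIGNATURE}"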
I have tested our client against Amazon S3 and Minio and both work.
I do not know for certain that anyone has used it with Google Cloud Storage, so you may be the first. If there is an incompatibility, I’m sure it can be remedied easily. Can you paste the error messages you are seeing (sanitized, of course)? Also, if you add --knob_http_verbose_level=3 to the command line of fdbbackup commands or the backup agent, you will see a lot of HTTP/HTTPS detail printed to standard output, including the full responses. GCS might be providing response content that gives more error detail.
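For example (the blobstore:// URL here is a placeholder; its exact shape depends on your setup):

    # Re-run any fdbbackup command with verbose HTTP tracing enabled:
    fdbbackup describe -d "blobstore://<key>:<secret>@<host>/<backup_name>?bucket=<bucket>" \
        --knob_http_verbose_level=3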
Can you build FDB locally from source but change "%Y%m%dT%H%M%SZ" to "%a, %d %b %Y %H:%M:%S GMT" at this line?
That will produce the date format that you found to work and use it in the signature. If that works everywhere then I’ll make a PR for it. If not, we’ll have to find something that does.
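For a quick side-by-side of the two formats (using the system date command):

    date -u +"%Y%m%dT%H%M%SZ"             # current format, e.g. 20150830T123600Z
    date -u +"%a, %d %b %Y %H:%M:%S GMT"  # proposed format, e.g. Sun, 30 Aug 2015 12:36:00 GMT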
To be clear, fdbbackup’s S3 client is using exactly the date format required by Amazon’s v4 signature scheme.
You can include the date as part of your request in several ways: use a Date header, an x-amz-date header, or include x-amz-date as a query parameter.
The time stamp must be in UTC and in the following ISO 8601 format: YYYYMMDD’T’HHMMSS’Z’. For example, 20150830T123600Z is a valid time stamp.
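As a sketch, the three ways of supplying that timestamp look roughly like this (the signing itself is omitted, and the bucket and object names are placeholders):

    TS=$(date -u +"%Y%m%dT%H%M%SZ")
    # 1. Date header
    curl -H "Date: ${TS}" "https://mybucket.s3.amazonaws.com/myobject"
    # 2. x-amz-date header
    curl -H "x-amz-date: ${TS}" "https://mybucket.s3.amazonaws.com/myobject"
    # 3. x-amz-date as a query parameter (presigned-URL style)
    curl "https://mybucket.s3.amazonaws.com/myobject?X-Amz-Date=${TS}"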
It seems strange that other services would use an S3-like interface but change the date format used in the signature, but maybe that is the case.
If someone can confirm that the patch I posted above works against GCS then we can add a blobstore:// URL parameter for using the alternate date format.
From a FoundationDB pod in the same Kubernetes cluster:
You’d have to create Google credentials with the proper permissions and add them in a Kubernetes secret (I used a secret called fdb-backups with a field called key.json, and another secret called minio-fdb-gateway-secret holding the password used to access Minio). That password is needed for the fdbbackup command.
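Roughly, creating those two secrets looks like this (the literal key name password in the second secret is from my setup; adjust it to whatever your gateway deployment expects):

    # Service-account key for GCS, stored under the field key.json:
    kubectl create secret generic fdb-backups --from-file=key.json=./key.json
    # Password used by fdbbackup to talk to the minio gateway:
    kubectl create secret generic minio-fdb-gateway-secret \
        --from-literal=password='<minio-password>'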
It would be great if you could do the bit of testing mentioned in Steve’s post above, so that FDB could just natively support GCS instead of using minio to work around it.
There is probably (obviously?) a difference in how GCS and S3 handle the date header. I imagine since GCS was going for compatibility, this should be supported. Though given the upcoming deprecation, I’m not sure if they will want to make the effort to change it. I’ll file a bug on Monday to see if the change can be made.
Sorry about the broken links… new users have a limit of 2 per post.
EDIT: Used admin powers to linkify your links. Sorry about forum restrictions.
With V4 authentication, fdbbackup still does not work with Google Cloud Storage (GCS). The problem comes from a mismatch between the signature encoded by fdbbackup and the one computed on the GCS side. This is described in a GitHub issue along with a possible fix.