
RemoteResourceS3


RemoteResourceS3 is a variant of RemoteResource. RemoteResource is the foundation for implementing continuous deployment with razeedeploy: it retrieves and applies the configuration for all resources. RemoteResourceS3 extends that functionality by supporting authentication to object-storage services that implement the S3 API.

Install

kubectl apply -f "https://github.com/razee-io/RemoteResourceS3/releases/latest/download/resource.yaml"

Resource Definition

Sample

apiVersion: "deploy.razee.io/v1alpha2"
kind: RemoteResourceS3
metadata:
  name: <remote_resource_s3_name>
  namespace: <namespace>
spec:
  auth:
    # hmac:
    #   accessKeyId: <key id>
    #   secretAccessKey: <access key>
    iam:
      responseType: <provider response type>
      grantType: <provider grant type>
      url: <iam auth provider>
      apiKeyRef:
        valueFrom:
          secretKeyRef:
            name: <name of secret resource>
            key: <key of api_key within secret>
  requests:
    - options:
        url: https://<source_repo_url>/<file_name1>
        headers:
          <header_key1>: <header_value1>
          <header_key2>: <header_value2>
    - optional: true
      options:
        url: http://<source_repo_url>/<file_name2>
    - options:
        url: http://<source_repo_url>/<bucket_path>/

Spec

Path: .spec

Description: spec is required and must include the requests section. You may also include auth to make connecting to S3 easier.

Schema:

spec:
  type: object
  required: [requests]
  properties:
    auth:
      type: object
      ...
    requests:
      type: array
      ...
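
For example, a minimal spec needs only the requests array; auth is optional. The endpoint and file name below are placeholders:

spec:
  requests:
    - options:
        url: https://s3.example.com/my-bucket/resource.yaml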

Auth: HMAC

Path: .spec.auth.hmac

Description: Allows you to connect to S3 buckets using an HMAC access key ID/secret access key pair.

Schema:

hmac:
  type: object
  allOf:
    - oneOf:
        - required: [accessKeyId]
        - required: [accessKeyIdRef]
    - oneOf:
        - required: [secretAccessKey]
        - required: [secretAccessKeyRef]
  properties:
    accessKeyId:
      type: string
    accessKeyIdRef:
      type: object
      required: [valueFrom]
      properties:
        valueFrom:
          type: object
          required: [secretKeyRef]
          properties:
            secretKeyRef:
              type: object
              required: [name, key]
              properties:
                name:
                  type: string
                namespace:
                  type: string
                key:
                  type: string
    secretAccessKey:
      type: string
    secretAccessKeyRef:
      type: object
      required: [valueFrom]
      properties:
        valueFrom:
          type: object
          required: [secretKeyRef]
          properties:
            secretKeyRef:
              type: object
              required: [name, key]
              properties:
                name:
                  type: string
                namespace:
                  type: string
                key:
                  type: string
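
As a sketch, an HMAC configuration that reads both values from an existing Secret could look like the following; the Secret name and keys are hypothetical:

hmac:
  accessKeyIdRef:
    valueFrom:
      secretKeyRef:
        name: my-s3-secret        # hypothetical Secret holding the HMAC credentials
        key: access_key_id
  secretAccessKeyRef:
    valueFrom:
      secretKeyRef:
        name: my-s3-secret
        key: secret_access_key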

Auth: IAM

Path: .spec.auth.iam

Description: Allows you to connect to S3 buckets using an IAM provider and API key.

Schema:

iam:
  type: object
  allOf:
    - required: [responseType, grantType, url]
    - oneOf:
        - required: [apiKey]
        - required: [apiKeyRef]
  properties:
    responseType:
      type: string
    grantType:
      type: string
    url:
      type: string
      format: uri
    apiKey:
      type: string
    apiKeyRef:
      type: object
      required: [valueFrom]
      properties:
        valueFrom:
          type: object
          required: [secretKeyRef]
          properties:
            secretKeyRef:
              type: object
              required: [name, key]
              properties:
                name:
                  type: string
                namespace:
                  type: string
                key:
                  type: string
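
As a sketch, an IAM configuration with the API key stored in a Secret might look like the following. The responseType, grantType, and url values shown are those commonly used with IBM Cloud IAM and are an assumption here, as are the Secret name and key:

iam:
  responseType: cloud_iam
  grantType: "urn:ibm:params:oauth:grant-type:apikey"
  url: https://iam.cloud.ibm.com/identity/token
  apiKeyRef:
    valueFrom:
      secretKeyRef:
        name: my-iam-secret       # hypothetical Secret holding the API key
        key: api_key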

Request Options

Path: .spec.requests[].options

Description: All options defined in an options object will be passed as-is to the HTTP request. This means you can specify things like headers for authentication in this section.

Schema:

options:
  type: object
  oneOf:
    - required: [url]
    - required: [uri]
  properties:
    url:
      type: string
      format: uri
    uri:
      type: string
      format: uri
    headers:
      type: object
      x-kubernetes-preserve-unknown-fields: true
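
For instance, a request that passes an authorization header along with the download could be written as follows; the URL and token are placeholders:

requests:
  - options:
      url: https://s3.example.com/my-bucket/app.yaml
      headers:
        Authorization: "Bearer <token>"   # forwarded as-is with the HTTP request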

Optional Request

Path: .spec.requests[].optional

Description: If downloading or applying a child resource fails, RemoteResourceS3 will stop execution and report the error to .status. You can allow execution to continue by marking the request as optional.

Schema:

optional:
  type: boolean

Default: false
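
For example, marking the second request below as optional lets processing continue even if that download fails; the URLs are placeholders:

requests:
  - options:
      url: https://s3.example.com/my-bucket/required.yaml
  - optional: true
    options:
      url: https://s3.example.com/my-bucket/may-not-exist.yaml   # failure here will not stop the remaining requests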

Download Directory Contents

  • If url/uri ends with /, we will assume this is an S3 directory and will attempt to download all resources in the directory.
  • Every resource within the directory will be downloaded using the .spec.requests[].options provided with the directory url (see the sketch after this list).
  • Path must follow one of:
    • http://s3.endpoint.com/bucket/path/to/your/resources/
    • http://bucket.s3.endpoint.com/path/to/your/resources/
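
A sketch of a directory request, using the path-style form with a placeholder endpoint and bucket:

requests:
  - options:
      url: https://s3.example.com/my-bucket/path/to/your/resources/   # trailing slash: download every file in the directory
      headers:
        <header_key1>: <header_value1>   # these options are reused for each file in the directory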

Managed Resource Labels

Reconcile

Child resource: .metadata.labels[deploy.razee.io/Reconcile]

  • DEFAULT: true
    • A razeedeploy resource (parent) will clean up the resources it applies (children) when either a child is no longer in the parent's resource definition or the parent is deleted.
  • false
    • This behavior can be overridden when a child's resource definition has the label deploy.razee.io/Reconcile=false (see the example after this list).
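
For example, a child resource that the parent should never clean up could carry the label like this (a minimal sketch; the ConfigMap is hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    deploy.razee.io/Reconcile: "false"   # parent will not delete this child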

Resource Update Mode

Child resource: .metadata.labels[deploy.razee.io/mode]

Razeedeploy resources default to merge patching children. This behavior can be overridden when a child's resource definition has the label deploy.razee.io/mode=<mode> (see the example after the mode list).

Mode options:

  • DEFAULT: MergePatch
    • A simple merge that merges objects and replaces arrays. Items previously defined, then removed from the definition, will be removed from the live resource.
    • "As defined in RFC7386, a Merge Patch is essentially a partial representation of the resource. The submitted JSON is "merged" with the current resource to create a new one, then the new one is saved. For more details on how to use Merge Patch, see the RFC." Reference
  • StrategicMergePatch
    • A more complicated merge: the Kubernetes apiserver has defined keys that let it intelligently merge the arrays it knows about.
    • "Strategic Merge Patch is a custom implementation of Merge Patch. For a detailed explanation of how it works and why it needed to be introduced, see StrategicMergePatch." Reference
    • Kubectl Apply Semantics
  • EnsureExists
    • Will ensure the resource is created and is replaced if deleted. Will not enforce a definition.
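
A minimal sketch of a child resource that opts into EnsureExists mode (the ConfigMap itself is hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    deploy.razee.io/mode: EnsureExists   # created if missing, but changes are not enforced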

Debug Individual Resource

.spec.resources.metadata.labels[deploy.razee.io/debug]

Treats the live resource as EnsureExist. If any razeedeploy component is enforcing the resource, and the label deploy.razee.io/debug: true exists on the live resource, the component will treat the resource as EnsureExist and not override any changes. This is useful when you need to debug a live resource and don't want razeedeploy overriding your changes. Note: this only works when you add the label to live resources. If you want the EnsureExist behavior in the resource definition itself, see Resource Update Mode.

  • ie: kubectl label rrs3 <your-rrs3> deploy.razee.io/debug=true

Lock Cluster Updates

Prevents the controller from updating resources on the cluster. If this is the first time creating the razeedeploy-config ConfigMap, you must delete the running controller pods so the deployment can mount the ConfigMap as a volume. If the razeedeploy-config ConfigMap already exists, just add the pair lock-cluster: true.

  1. export CONTROLLER_NAME=remoteresources3-controller && export CONTROLLER_NAMESPACE=razee
  2. kubectl create cm razeedeploy-config -n $CONTROLLER_NAMESPACE --from-literal=lock-cluster=true
  3. kubectl delete pods -n $CONTROLLER_NAMESPACE $(kubectl get pods -n $CONTROLLER_NAMESPACE | grep $CONTROLLER_NAME | awk '{print $1}' | paste -s -d ',' -)
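
Assuming the defaults above, the ConfigMap created in step 2 would look roughly like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: razeedeploy-config
  namespace: razee
data:
  lock-cluster: "true"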