Zenko

Zenko Installation

Prepare

Before installing Zenko, you must have a working, appropriately sized server or cluster running with a live Kubernetes instance. The instructions here will guide you to a baseline configuration that supports a Zenko installation in either a high-availability cluster or a single-node Zenko instance.

Topology

Deploying Zenko in a single-server context reduces hardware cost and complexity, and offers Zenko features in a small footprint, but single-server deployment sacrifices high availability: a single server is a single point of failure.

Deploying Zenko on a single server requires deploying Kubernetes on that server, and ensuring that the endpoints and containers Kubernetes administers are reachable from it.

Once Zenko is installed, Kubernetes manages system operations on all servers. You will manage and reconfigure Zenko using Helm commands.

Setting Up a Cluster

While running Zenko on a single machine is desirable for certain use cases, a clustered operating environment is required for high-availability deployments. If you can set up a Kubernetes cluster on your own, review the General Cluster Requirements and skip to Install Zenko. Otherwise, download the latest stable version of MetalK8s and follow its instructions to establish a working Kubernetes instance on your cluster.

Most of the complexity of installing Zenko on a cluster involves deploying the cluster itself. Scality supports MetalK8s, an open source Kubernetes engine optimized for the Zenko use case. The following section describes general cluster requirements that have been tested on MetalK8s. Because MetalK8s is designed to operate without support from public cloud resources, the following sizing requirements are assumed sufficient for hosted cloud Kubernetes clusters, where such resources are available on demand.

General Cluster Requirements

Setting up a cluster requires at least three machines (these can be VMs) running CentOS 7.4 or higher. The recommended minimum for a high-availability Zenko production service is the Standard Architecture, a five-node cluster with three masters/etcds. The Compact Architecture, a three-node configuration, is also supported. The cluster must have an odd number of nodes to provide a quorum. You must have SSH access to these machines and they must have SSH access to each other.

Important

Three-server clusters can continue to operate without service disruption or data loss if one node fails. Five-server clusters can operate without disruption or loss if two nodes fail.

Each machine acting as a Kubernetes node must also have at least one disk available to provision storage volumes.

Once you have set up a cluster, you cannot change the size of the machines on it.

Service and Component Architecture

Zenko consists of the following stateful and stateless services.

Stateful Services

For the following stateful services, each node has a copy of the data. Though their terminology varies, each service employs the same strategy for maintaining availability. A primary service acts on data and transfers it to replica instances on the other nodes. If the service running as the primary fails (either due to internal error or node failure), the remaining replica services elect a primary to continue. If this occurs on a three-node cluster, no data is lost unless two nodes fail. On a five-node cluster, no data is lost unless three nodes fail.

If a replica node fails, the primary continues operation without interruption or an election.

The following stateful services conform to this failover strategy:

  • MongoDB
  • Redis
  • Kafka
  • ZooKeeper

Stateless Services

The following stateless services are based on a transactional model. If a service fails, Kubernetes automatically reschedules the process on an available node.

Lifecycle Services

  • Lifecycle Bucket Processor
  • Lifecycle Conductor
  • Lifecycle Object Processor
  • Garbage Collection (GC) Consumer

Replication Services

  • Replication Data Processor
  • Replication Populator
  • Replication Status Processor

APIs

  • CloudServer API
  • Backbeat API

Monitoring Services

  • Prometheus
  • Grafana

Out-of-Band Services

  • Ingestion Consumer
  • Ingestion Producer
  • Cosmos Operator
  • Cosmos Scheduler

Orbit Management Layer

  • CloudServer Manager

Sizing

The following sizes for Zenko instances have been tested on live systems using MetalK8s, which adds some overhead. If you are running a different Kubernetes engine, fewer resources may be required, but such configurations remain to be tested.

Reserve the following resources for each node.

  • Cores per server

    • 24 vCPU minimum
    • 32 vCPU medium load
    • 58 vCPU heavy load
  • RAM per server

    • 32 GB minimum
    • 64 GB medium load
    • 96 GB heavy load
  • Additional resources

    • 120 GB SSD (boot drive)
    • 80 GB SSD
  • Storage

    • 1 TB persistent volume per node minimum

      Note

      This requirement is for storage, not for the system device. Storage requirements depend on the sizing of different components and anticipated use. You may have to attach a separate storage volume to each cloud server instance.

All servers must run CentOS 7.4 or later, and must be SSH-accessible.

Custom Sizing

The default persistent volume sizing requirements are sufficient for a conventional deployment. Your requirements may vary based on total data managed and total number of objects managed.

Important

Persistent volume storage must match or exceed the maximum anticipated demand. Once set, the cluster cannot be resized without redefining new volumes.

To size your deployment, review the default values in Zenko/kubernetes/zenko/values.yaml. This file reserves space for each component in the build. This is the baseline build, which Helm will install unless instructed otherwise.

Next, review the values discussed in Zenko/kubernetes/zenko/storage.yaml. The storage.yaml file contains sizing instructions and settings that, when specified in a Helm installation, override the default values expressed in the values.yaml file. To override default values using storage.yaml, use the following addendum to the helm install invocation at Zenko deployment.

$ helm install [other options] -f storage.yaml

The required persistent volume space can be calculated from total data managed, total objects managed, and other factors. See storage.yaml for details.
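
For example, assuming the “my-zenko” release name used elsewhere in this guide, a deployment that overrides both general options and storage sizing might read (an illustrative invocation; adjust file names and paths to your environment):

$ helm install --name my-zenko -f options.yaml -f storage.yaml zenko

When several -f flags are passed, Helm gives later files precedence over earlier ones.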

Configure Logical Volumes

Set up a cluster of nodes conforming to the specifications described in Sizing. If you are using MetalK8s, do this by downloading the latest stable MetalK8s source code from the MetalK8s releases page: https://github.com/scality/metalk8s/releases. Follow the Quickstart guide (in docs/usage/quickstart.rst) to install MetalK8s on your cluster, instantiating the desired number of nodes.

Note

It is a best practice to install Zenko on a fresh cluster.

Volume Sizing

When building your cluster, take sizing into account. If you are deploying non-default sizing, make sure your volume sizing is sufficient. For MetalK8s, you must size the volumes in the inventory during setup in metalk8s/inventory/group_vars/kube-node.yml.

Minimum Volume Size for Cluster Deployments

For a default sizing, paste the following into kube-node.yml:

metalk8s_lvm_default_vg: False
metalk8s_lvm_vgs: ['vg_metalk8s']
metalk8s_lvm_drives_vg_metalk8s: ['/dev/vdb']
metalk8s_lvm_lvs_vg_metalk8s:
  lv01:
    size: 125G
  lv02:
    size: 125G
  lv03:
    size: 125G
  lv04:
    size: 62G
  lv05:
    size: 62G

Minimum Volume Size for Single-Server Deployments

For a default sizing, paste the following into kube-node.yml:

metalk8s_lvm_default_vg: False
metalk8s_lvm_vgs: ['vg_metalk8s']
metalk8s_lvm_drives_vg_metalk8s: ['/dev/vdb']
metalk8s_lvm_lvs_vg_metalk8s:
  lv01:
    size: 120G
  lv02:
    size: 120G
  lv03:
    size: 120G
  lv04:
    size: 60G
  lv05:
    size: 60G
  lv06:
    size: 10G

Custom Sizing

For custom sizing for a cluster or a “cluster of one” (single-server) deployment, increase these base numbers.

For non-MetalK8s deployments, follow your vendor or community’s instructions for configuring persistent volumes at 500 GB/node.

Proxies

If you are behind a proxy, add the following lines to your local machine’s /etc/environment file:

http_proxy=http://user:pass@<my-ip>:<my-port>
https_proxy=http://user:pass@<my-ip>:<my-port>
no_proxy=localhost,127.0.0.1,10.*

Preconfiguring Zenko

Zenko provides valid settings for a stable, featureful deployment by default. For most users, the best practice is to use the default settings to deploy Zenko, then to modify settings files held in Helm charts and use Helm to pass these values to the deployment.

For some uses, however (for example, in a high-security environment that requires unnecessary interfaces to be turned off), configuring the charts before deploying Zenko may be necessary. To preconfigure Zenko, follow the instructions in Configuring Zenko, then install using your custom settings.

Install

Once your cluster or server is up and running, and the logical volumes are configured, you must install Helm, then tell Helm to install Zenko.

Install Helm

  1. If you are using MetalK8s, use the MetalK8s virtual shell. Change to the directory from which you will deploy Zenko:

    (metal-k8s) $ cd /path/to/installation
    

    If you are not installing from MetalK8s, follow the instructions in Zenko/docs/gke.md to install Helm on your cluster.

  2. If Helm is not already installed on the machine from which you will be conducting the deployment, install it as described at: https://github.com/helm/helm#Install

  3. Initialize Helm:

    (metal-k8s) $ helm init
    Creating /home/centos/.helm
    Creating /home/centos/.helm/repository
    Creating /home/centos/.helm/repository/cache
    Creating /home/centos/.helm/repository/local
    Creating /home/centos/.helm/plugins
    Creating /home/centos/.helm/starters
    Creating /home/centos/.helm/cache/archive
    Creating /home/centos/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Adding local repo with URL: http://127.0.0.1:8879/charts
    $HELM_HOME has been configured at /home/centos/.helm.
    Warning: Tiller is already installed in the cluster.
    (Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
    Happy Helming!
    (metal-k8s) $
    

    Helm can now install applications on the Kubernetes cluster.
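
    To confirm that Helm can reach both the cluster and Tiller, you can run the version command (a quick, optional check that reports the client and server versions):

    (metal-k8s) $ helm version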

  4. Go to https://github.com/Scality/Zenko/releases and download the latest stable version of Zenko.

  5. Unzip or gunzip the file you just downloaded and change to the top-level (Zenko) directory.
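
    For example, for a .tar.gz download (a sketch; <version> is a placeholder for the release tag you downloaded):

    # extract the release archive and enter the source tree
    $ tar -xzf Zenko-<version>.tar.gz
    $ cd Zenko-<version>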

Install Zenko

Helm installs Zenko using packages of Kubernetes resource definitions known as charts. These charts, which Helm follows for each Zenko component, can be found under zenko/kubernetes/zenko/charts. For each component there is a Chart.yaml file and a values.yaml file. Helm reads the Chart.yaml file to establish such basic installation attributes as name and version number, and reads the values file for instructions on how to deploy and configure the component. Though manually editing the default settings in values.yaml is possible, it is much better to write configuration changes and options to zenko/kubernetes/zenko/options.yaml, which Helm can use to override the default settings presented in the charts.
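
As an illustration, the chart hierarchy resembles the following (abbreviated; the exact set of component charts depends on your Zenko version):

zenko/kubernetes/zenko/
  Chart.yaml          # name and version of the umbrella chart
  values.yaml         # default deployment settings
  charts/
    cloudserver/
      Chart.yaml
      values.yaml
    grafana/
      Chart.yaml
      values.yaml
    ...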

Follow these steps to install Zenko with ingress.

Note

The following example is for a configuration using the NGINX ingress controller. If you are using a different ingress controller, substitute parameters as appropriate.

Configure with options.yaml
  1. Create an options.yaml file in Zenko/kubernetes/ to store deployment parameters. Enter the following parameters:

    ingress:
      enabled: "true"
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: 0
      hosts:
        - zenko.local
    
    cloudserver:
      endpoint: "zenko.local"
    

    You can edit these parameters using each component’s values.yaml file as your guide. Save this file.

  2. To configure the ingress controller for HTTPS, go to “Configure HTTPS Ingress for Zenko” for additional terms to add to this chart.

  3. If your Zenko instance is behind a proxy, add the following lines to the options.yaml file:

    cloudserver:
      proxy:
        http: ""
        https: ""
        caCert: false
        no_proxy: ""
    

    Enter your proxy’s full server address, including the protocol and port.

    For example:

    cloudserver:
      proxy:
        http: "http://proxy.example.com:80"
        https: ""
        caCert: false
        no_proxy: "localhost,127.0.0.1,10.*"
    

    If the HTTP proxy endpoint is set and the HTTPS one is not, the HTTP proxy will be used for HTTPS traffic as well.

    Note

    To avoid unexpected behavior, specify only one of the “http” or “https” proxy options.

Install with Helm
  1. Perform the following Helm installation from the kubernetes directory:

    $ helm install --name my-zenko -f options.yaml zenko
    

    If the command is successful, the output from Helm is extensive.

    Note

    The installation name must consist solely of alphanumeric characters and hyphens. The name must start with an alphabetic character, and can end with alphabetic or numeric characters. Punctuation marks, including periods, are not permitted.

  2. To see Kubernetes’s progress creating pods for Zenko, the command:

    $ kubectl get pods -n default -o wide
    

    returns a snapshot of pod creation. For a few minutes after the Helm install, some pods will show CrashLoopBackOff issues. This is expected behavior, because there is no launch order between pods. After a few minutes, all pods should reach the Running state.
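
    To follow pod status continuously until the deployment settles, you can add the --watch flag (press Ctrl-C to exit):

    $ kubectl get pods -n default -o wide --watch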

Register with Orbit
  1. To register your Zenko instance for Orbit access, get your CloudServer’s name:

    $ kubectl get -n default pods | grep cloudserver-manager
    
    my-zenko-cloudserver-manager-76f657695-j25wq      1/1   Running   0       3m
    
  2. Grab your CloudServer’s logs with the command:

    $ kubectl logs my-zenko-cloudserver-manager-<id> | grep 'Instance ID'
    

    Using the present sample values, this command returns:

    $ kubectl logs my-zenko-cloudserver-manager-76f657695-j25wq | grep 'Instance ID'
    
    {"name":"S3","time":1532632170292,"req_id":"effb63b7e94aa902711d",\
    "level":"info","message":"this deployment's Instance ID is \
    7586e994-01f3-4b41-b223-beb4bcf6fff6","hostname":"my-zenko-cloudserver-\
    76f657695-j25wq","pid":19}
    

    Copy the instance ID.

  3. Open https://admin.zenko.io/user in a web browser. You may be prompted to authenticate through Google.

  4. Click the Register My Instance button.

  5. Paste the instance ID into the Instance ID dialog. Name the instance what you will.

Your instance is registered.

Configure

Once Zenko is installed and stable, installing optional configurations is a relatively painless process. Follow the configuration instructions in this section to extend Zenko’s capabilities.

Because ingress control is not standardized across all Kubernetes implementations, it is deactivated by default. You must configure it to enable Web access to Zenko.

Zenko supports out-of-band (OOB) updates from NFS file systems as well as dynamic OOB updates from Scality RINGs with the S3 Connector. With added configurations, described in the sections that follow, your Zenko instance can access and manage these namespaces.

Configuring Zenko

Zenko is readily configurable using Helm to pass values set in Helm charts. Helm charts are stored in Zenko/kubernetes/zenko/ and its subdirectories. Helm charts are YAML files with configurable values. In a Zenko deployment, reconfiguration, or upgrade, Helm reads charts in the following order:

  1. Base settings for Zenko microservices (for example, Grafana settings, written to Zenko/kubernetes/zenko/charts/grafana/values.yaml).
  2. Base settings for Zenko. These settings override the base microservice settings, and are found in Zenko/kubernetes/zenko/values.yaml.
  3. Custom settings, which you can write to an options.yaml file. Settings written to this file override settings read from the preceding values.yaml file.

Zenko’s charts are populated by default to provide a stable, feature-rich deployment. It is easiest and safest to deploy Zenko using these default settings in a test environment and to adjust settings there for a working deployment. If your use case requires configuring Zenko before deployment, these instructions will remain valid and portable to the production system.

Modify options.yaml

The options.yaml file is not present by default, but you added a simple one at Zenko/kubernetes/zenko/options.yaml when deploying Zenko. options.yaml is the best place to make all changes to your configuration (with one exception, nodeCount). While it is possible to reconfigure any aspect of Zenko or its attendant microservices from those services’ base settings or in the Zenko settings, it is better to make changes to options.yaml. Because options.yaml is not part of the base installation, it is not overwritten on a Zenko version upgrade. Likewise, tracking changes scattered across several values.yaml file locations can become difficult and cumbersome. For these reasons, it is a best practice to confine all modifications to options.yaml.

Examples:

Zenko provides outward-facing NFS service using Cosmos, which is enabled by default. To deactivate Cosmos:

  1. Open kubernetes/zenko/cosmos/values.yaml with read-only access and review the cosmos block.

  2. Copy the block title declaration and the subsequent line:

    cosmos:
      enabled: true
    
  3. Open (or create) Zenko/kubernetes/zenko/options.yaml and paste the block you copied there.

  4. Change the value of enabled to false.

Cosmos mirrors data based on a cron-like schedule. To modify this cron interval (for an enabled Cosmos instance), descend into the YAML structure as follows:

  1. Review the cosmos block in kubernetes/zenko/cosmos/values.yaml.

  2. Copy the relevant hierarchy to options.yaml:

    cosmos:
      scheduler:
        schedule: "* */12 * * *"
    
  3. Edit the schedule to suit your requirements.

    Tip

    If you are comfortable with JSON or SOAP objects, you will find YAML to be logically similar. If you have problems with YAML, check the indentation first.

Modify values.yaml

The one setting that cannot be modified in the options.yaml file is nodeCount. To change the node count:

  1. Open Zenko/kubernetes/zenko/values.yaml
  2. Change the nodeCount value only.

Push Modifications to Zenko

Once you have entered all changes to options.yaml or changed the values.yaml nodeCount parameter, issue the following command from Zenko/kubernetes/zenko to push your changes to the deployed Zenko instance:

$ helm upgrade {{zenko-server-name}} . -f options.yaml
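
Using the release name from the installation example in this guide, the command reads:

$ helm upgrade my-zenko . -f options.yaml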

Configure HTTPS Ingress for Zenko

If your Kubernetes cluster uses NGINX for ingress control, use the following guidelines to configure HTTPS support. From the Zenko/kubernetes/zenko directory:

  1. Generate the certificate.

    $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=zenko.local"
    
  2. Store the certificate in a Kubernetes secret.

    $ kubectl create secret tls zenko-tls --key /tmp/tls.key --cert /tmp/tls.crt
    
  3. Set Zenko chart values in options.yaml to resemble:

    ingress:
      enabled: true
      hosts:
        - zenko.local
      max_body_size: 100m
      annotations:
      tls:
        - secretName: zenko-tls
          hosts:
            - zenko.local
    
  4. Install or upgrade Zenko from this directory (Zenko/kubernetes/zenko). Helm will pick up the settings in options.yaml.

    $ helm upgrade --install -f options.yaml zenko .
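
    After the upgrade completes, you can check that the ingress answers over HTTPS (an optional sanity check; -k skips verification of the self-signed certificate generated above):

    $ curl -k -I https://zenko.local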
    
Ports

Zenko operates in the context of a Kubernetes cluster. Ingress and egress from the cluster are configured in the base setup of the cluster, using the conventional web protocols for secure and insecure transactions: HTTPS and HTTP over ports 443 and 80, respectively.

Zenko can use either or both protocols to allow ingress/egress. If ingress is enabled, port 80 is used, unless SSL is configured. If SSL is configured, then port 443 is required.

Port Protocol
80 HTTP
443 HTTPS

For non-Scality Kubernetes platforms, discuss the configuration with your vendor or community.

Setting Up Out-of-Band Updates for S3 Connector

Zenko operates by establishing and managing a namespace of data. For select object stores, Zenko does not have to create a namespace; rather, it can read an existing namespace and use it as its own. This process of metadata ingestion makes it easier to manage existing data stores, as Zenko can “go back in time” and pick up the history of a namespace and assume management and replication tasks retroactively.

Prerequisites

The following are required to configure out-of-band updates for S3 Connector:

  • An S3 bucket must be established on an S3 Connector (version 7.4.4 or later). See “Using the S3 Browser” in S3 Connector Operation.
  • The S3 Connector bucket must have permissions that enable Zenko access.
  • The S3 Connector bucket must have versioning enabled.
  • The Zenko user must have access to S3 Connector user credentials (an access key and a secret key), and the credentialed user must have access to the target S3 bucket.

Establishing a Zenko user in S3 Connector with all and only necessary permissions to access the bucket is considered a best practice. See “Create a Zenko User for Out-of-Band Updates” in S3 Connector Operation for how to assign correct policies to a Zenko user for S3 access.
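
For example, bucket versioning (a prerequisite listed above) can be enabled with the aws-cli, assuming the Zenko user’s access key and secret key are configured in your AWS profile (an illustrative command; bucket name and endpoint are placeholders):

$ aws s3api put-bucket-versioning --bucket <target-bucket> \
    --versioning-configuration Status=Enabled \
    --endpoint-url https://<s3-connector-endpoint>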

There are no special Zenko dependencies for Zenko 1.1 or later.

Set Up Out-of-Band Updates for S3 Connector
  1. Go to Storage Locations and click the “Add New” button.

    _images/add_new_cloud_location.png
    1. Enter the name of the new storage location (this can be the same as or different from the bucket name) and select the Scality RING with S3 Connector location type.

    2. Enter the access key and secret key pair you copied when creating the S3 Connector bucket.

    3. Enter the target bucket name. The Target Bucket Name field requires conformity to Amazon S3 naming conventions (Lowercase alphanumerics only. Hyphens allowed. No punctuation or diacritical marks.)

    4. In the Endpoint field, enter the endpoint’s locator (URL or IP address), followed by the port number appropriate for the protocol. For HTTP (non-secure) use port 80, or omit the port assignment. For HTTPS, use port 443. For diagnostic, familiarization, and test purposes, port 8000 is acceptable. This directly addresses the endpoint, but bypasses load balancers.

    5. Check the Write objects without prefix box. The warning text, “Storing multiple buckets in a location with this option enabled can lead to data loss” appears. This is expected. Click Save.

      _images/add_new_location_dialog.png

    The new cloud location appears in the Cloud Locations window. The Mirroring indicator is grayed out.

    _images/new_cloud_location.png
  2. Open the Multicloud Browser and click Create Bucket.

    1. Enter the bucket name.

    2. Select the appropriate Location Constraint. You will see two instances of the name of the storage location you created above. Pick the instance that is followed by “(Mirror mode)”.

      _images/create_bucket_mirror_mode.png
    3. Click Create.

    4. The Multicloud Browser view returns.

      _images/mirroring_enabled_indicator.png

      Note the icon at far right indicating metadata ingestion has been activated. In the Cloud Locations window, the Mirroring button is activated and no longer grayed out.

In a few minutes, objects stored in the S3 Connector become visible and manageable from Zenko. Files uploaded to Zenko propagate to S3 Connector as well.

Set Up Out-of-Band Updates from an NFS Mount

Zenko 1.1 allows ingestion and out-of-band (OOB) updates from existing NFS mount points. This feature does not copy the files themselves; rather, the files’ attributes are copied to Zenko for data management. Using this information, Zenko can act on NFS mounts as it would on any other type of bucket, enabling metadata search, cloud replication, and lifecycle transition or expiration. Writes from Zenko users to buckets at NFS locations are not permitted.

Minimum Requirements

Setting up Zenko for out-of-band updates from NFS mount points requires:

  • Zenko 1.1.0

  • A Linux-compatible NFS mount

  • Kubernetes nodes with the NFS packages (nfs-utils for CentOS, nfs-common for Debian) installed (see the installation example below).

    Note

    MetalK8s 1.1.0 installs all required packages by default.
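
If your Kubernetes nodes do not already have the NFS client packages, you can install them manually. For example, on CentOS:

$ sudo yum install -y nfs-utils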

To add an NFS location in Orbit, you must have an NFS server endpoint and know the export path. You may also apply specific NFS mount options based on your environment’s requirements. For example, for read-only access to the NFS mount, specify the ro NFS option.

  1. Create the location in Orbit. Your export path can include specific folders. For example, if your root export is /data but you only need Zenko to work with the accounting/2019 subfolder, specify /data/accounting/2019 as the export path. In this way you can assign different folders to their own buckets in Zenko.

    _images/add_nfs_location.png
  2. Create your bucket in the mirror-mode version of the location just created. As of Zenko 1.1.0, only the “Mirror Mode” option is supported, and the standard location option does not allow writes to the location.

    _images/create_nfs_bucket.png

    With the bucket created, Zenko deploys and configures new pods in Kubernetes to access and ingest file metadata. Naming is based on the location name and you can see these pods by running kubectl get pods. Pods typically deploy within a few minutes of bucket creation, along with the initial ingestion.

    _images/cosmos_initial_ingest.png
Advanced Usage

Create Buckets from the Command Line

You can create mirror-mode buckets from the command line using the aws-cli client. For example, the following command creates a mirror-mode bucket for an NFS location named “my-nfs”.

$ aws s3 mb s3://nfs-bucket-name --region 'my-nfs:ingest' --endpoint-url https://zenko.local

Cron Job Defaults

Zenko’s NFS ingestion cron job is triggered every 12 hours (12 pm and 12 am) by default, but this is configurable. The cron specification supports both the traditional five-field format (for example, 0 */12 * * *) and the non-standard macro format (@hourly). Upgrading Zenko with the following YAML added to your custom values sets the default cron schedule for all NFS locations created thereafter.

cosmos:
  scheduler:
    # Run hourly
    schedule: "@hourly"

Note

This does not change the cron schedule on existing NFS locations.

Modify Cron on Existing NFS Locations

Cron schedules can be customized for individual NFS locations. The quickest way to do this is to edit the resource directly:

$ kubectl edit cosmos <my-nfs-location-name>

spec:
...
  rclone:
     # Run every day at 8am
     schedule: '0 8 * * *'

List Installed NFS Locations

Because each location is treated as a unique resource, you can list all installed locations with the command:

$ kubectl get cosmos

Managed Resources

Due to the Kubernetes operator-managed nature of the NFS locations, resources like cron jobs or deployments related to each location are “enforced state.” This means that if a cron job for a location is deleted, it is automatically recreated, which can be useful for testing and debugging. It also means, however, that you cannot directly edit a managed cronjob or deployment resource, because your changes are immediately reverted to match the state defined in the “cosmos” resource. Desired changes must be made by editing the Cosmos resources themselves using kubectl.

$ kubectl edit cosmos <my-nfs-location-name>

Upgrade

Once a Zenko instance is up and running, you can upgrade it with a simple Helm command.

Before Upgrading

Most installations are set up with custom values. Compare any live values to those to be applied in an upgrade to ensure consistency and prevent undesired changes.

To see the custom values of a running Zenko instance:

$ helm get values {{zenko-release-name}} > pre_upgrade_values.yaml

Important

Values added at upgrade override and completely replace the values used to configure the running instance. Before invoking helm upgrade, carefully review all changes between the values used in the running instance against the values you intend to push in the upgrade.
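
A simple way to surface differences, assuming the values you intend to push are in options.yaml, is to diff the two files:

$ diff pre_upgrade_values.yaml options.yaml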

In a production environment, it is a best practice to run an upgrade simulation. For example:

$ helm upgrade zenko ./zenko --dry-run --debug

Note

A production environment may necessitate additional validation before upgrading. Upgrades run with the --dry-run flag simulate and, if possible, validate a compatible upgrade. If the --debug flag is set, Helm outputs all templated values and deployment configurations to be installed. These are basic validations, but their upgrade implications must be considered by both the Zenko and Kubernetes administrators.

Upgrading

To upgrade Zenko:

  1. Back up your existing Zenko directory.

    $ cp -r Zenko Zenko-backup
    
  2. Download the latest stable version (.zip or .tar.gz) from https://github.com/scality/Zenko/releases

  3. Unpack the .zip or .tar.gz file and navigate to Zenko/kubernetes/.

  4. Copy Zenko/kubernetes/zenko/options.yaml from your existing Zenko source directory to the same directory in the new Zenko source.

  5. If you have modified the node count from the default value of 3, go to Zenko/kubernetes/zenko/values.yaml in the new Zenko source and edit the nodeCount value to match the existing nodeCount value.

  6. From the kubernetes/ directory of the new Zenko source, enter this Helm command, inserting your Zenko server’s name:

    $ helm upgrade {{zenko-server-name}} ./zenko
    

    If you are using custom values, reuse options.yaml on upgrades.

    $ helm upgrade {{zenko-server-name}} ./zenko -f options.yaml
    
  7. On success, Helm reports:

    Release "{{zenko-server-name}}" has been upgraded. Happy Helming!
    

    After a few seconds, Helm displays a voluminous status report on the server’s current operating condition.

    Tip

    Expect the cluster to take a few minutes to stabilize. You may see CrashLoopBackoff errors. This is normal, expected behavior.

Upgrading from 1.0.x to 1.1.x

Zenko 1.0.x versions use MongoDB version 3.4, which has been upgraded to 3.6 in Zenko 1.1.x. Although upgrades using the commands above will work, some new features, specifically Cosmos (NFS integration), may not function with MongoDB 3.4.

To upgrade from 1.0.x to 1.1:

  1. Run the upgrade command, inserting your Zenko server’s release name and disabling the specific feature that depends on MongoDB 3.6.

    $ helm upgrade {{zenko-release-name}} ./zenko --set cosmos.enabled=false
    

    If you are using custom values, be sure to reuse your options.yaml file on any upgrade.

    $ helm upgrade {{zenko-release-name}} ./zenko --set cosmos.enabled=false -f options.yaml
    
  2. After the upgrade has stabilized all pod rollouts and prior functionality is verified, run the following command to finalize the MongoDB compatibility upgrade and enable the Cosmos feature set:

    $ helm upgrade {{zenko-release-name}} ./zenko --set maintenance.upgradeMongo=true -f options.yaml
    
  3. A pod is deployed. When the upgrade is successful, it shows a “Completed” status.

    {{zenko-release-name}}-zenko-mongo-upgrade          0/1     Completed          0          4h
    

    Note

    Upgrade failures typically show up as an “Error” or “Crash” state.

  4. If, after a few minutes of stabilization, kubectl get pods shows a CrashLoopBackoff error involving the mongodb-upgrade pod (rather than a Completed status), manually upgrade MongoDB on the primary MongoDB instance. To do this:

    1. Find the primary MongoDB instance with a kubectl logs query:

      kubectl logs -lcomponent=mongodb-upgrade
      

      On a three-node Zenko cluster, this returns a response resembling:

      kubectl logs --selector component=mongodb-upgrade
      Checking zenko-mongodb-replicaset-0.zenko-mongodb-replicaset.default.svc.cluster.local:27017 SECONDARY
      Checking zenko-mongodb-replicaset-1.zenko-mongodb-replicaset.default.svc.cluster.local:27017 PRIMARY
      Checking zenko-mongodb-replicaset-2.zenko-mongodb-replicaset.default.svc.cluster.local:27017 SECONDARY
      Unable to upgrade capabilities, manual upgrade may be neccessary
      
    2. Enter the following command, replicating the primary instance’s pod name:

      kubectl exec -it {{primary-zenko-pod-name}} -- mongo --eval='db.adminCommand({ setFeatureCompatibilityVersion: "3.6" });'
      

      In the present example, this command reads:

      kubectl exec -it zenko-mongodb-replicaset-1 -- mongo --eval='db.adminCommand({ setFeatureCompatibilityVersion: "3.6" });'
      
  5. Validate that the upgrade succeeded by checking the logs. Any errors encountered during the upgrade procedure are listed here as well.

    $ kubectl logs --selector component=mongodb-upgrade
      Finished successfully! Compatibility set to version 3.6
    
  6. Once the upgrade is successful, these Zenko upgrade flags are no longer needed for further 1.1.x upgrades. You can now run the typical upgrade command to ensure the desired 1.1 state:

    $ helm upgrade {{zenko-release-name}} ./zenko -f options.yaml
    

Uninstall

To uninstall/delete the “my-zenko” deployment:

$ helm delete my-zenko

The command removes all Kubernetes components associated with the chart and deletes the deployed instance.

Zenko Operation

Zenko is Scality’s multi-cloud storage controller. It provides a single point of integration using the Amazon S3 cloud storage API, and enables data backup, transfer, and replication across private and public clouds.

Using Zenko, you can store to a Scality RING storage device and automatically back up to one or several public clouds. Alternatively, you can use a public cloud such as Amazon S3 as primary storage and replicate data stored there (specific files, file types, or entire buckets) to other supported clouds, such as Google Cloud Platform (GCP) or Microsoft Azure.

About Zenko

The Zenko open-source project, described at https://www.zenko.io/ and hosted on GitHub at https://github.com/scality/Zenko, provides the core logic of the Zenko product. This core is a stack of microservices, written primarily in Node.js with some Python, that provides a RESTful API handling the complexities of translating S3 API calls into actions on various cloud storage platforms.

Based on the latest stable branch of open-source Zenko, the Zenko Enterprise product offers full Scality support to build and maintain the topology that works best for your organization, use of the Orbit graphical user interface beyond 1 TB of data under management, and other value-added features Scality elects to offer.

Zenko can be accessed and managed through the Orbit GUI, or using direct API calls. Because Orbit acts as a seamless management interface to Zenko, people may confuse the interface (Orbit) with the underlying logic (Zenko). You can access Zenko from Orbit, or from the command line. Where it makes sense, Scality provides APIs to help customize, automate, and improve interactions with Zenko.

Conceptual Framework

Zenko is designed to help administrators of large storage systems control, manage, replicate, and visualize data in a multi-cloud context.

Fundamental building blocks of Zenko’s replication framework are the bucket, the location, and the cloud service. End users (people or services) store data as files in buckets. Administrators are free to configure and name buckets as they please—to the admin, buckets are essentially tenants aggregated in a cloud storage deployment. From an end user’s perspective, a bucket simply appears as a networked storage location, for example, “accounting,” “bioinformatics-lab,” “daily-video-backup,” or any other organizational unit that makes sense.

Zenko can fetch buckets from one location—a Scality RING private cloud, for example—and replicate them to designated cloud storage locations on other clouds. A company may, for example, store its data by departmental buckets to local cloud, then replicate that cloud storage location to several other public or private clouds for storage redundancy, rate-shopping, or other purposes. Zenko can also provide useful management features for large quantities of unstructured data without replicating data at all: a company with very large in-house data stores may not replicate using Zenko, but may still want a robust metadata search capability to locate files or file versions rapidly.

Zenko manages the complexities and discontinuities between the native Amazon S3 cloud storage protocol and other popular cloud protocols, such as Google Cloud and Microsoft Azure storage, and manages a unified namespace to retrieve data seamlessly from all managed cloud locations.

To do this, Zenko maintains its own namespace, which it uses to manage and track objects in other namespaces. For example, a cloud might spread a bucket’s contents over many servers. Zenko trusts this cloud to work as designed and provide the bucket, or individual objects in the bucket, on demand. Zenko knows what the source cloud calls the data, and if it should replicate it to a target cloud, it again relies on the target cloud to maintain the data fed to it: Zenko only deals with cloud storage systems with the APIs they provide. In effect, Zenko operates by maintaining a namespace of namespaces.

To perform this task efficiently, Zenko relies heavily on the metadata generated by object data storage systems. Rather than managing every byte under its control, Zenko closely watches or extrapolates cloud or file system metadata and makes decisions and offers features based on metadata status changes. This enables Zenko to replicate data, offer bucket lifecycle management (expiration and transition), and provide other tools to manage unstructured data easily, quickly, and in a truly multi-cloud environment.

Supported Clouds and Services

Zenko replicates and manages data from one or many sources to one or several destinations. Sources can be clouds or services, and targets can be of several types of cloud backend.

Supported Sources

Zenko supports multiple-cloud storage operations natively using the Amazon Simple Storage Service (S3) protocol. It can handle data from many S3-compatible sources, as well as from servers using supported protocols.

The following Amazon S3-based clouds have been tested as Zenko-compatible sources:

  • Amazon Simple Storage Service
  • Scality RING with S3 Connector
  • Wasabi Hot Cloud Storage
  • Ceph RADOS Gateway

In addition to its compatibility with S3 cloud frontends, Zenko can ingest data and metadata from servers using the following protocols:

  • Scality RING with sproxyd

    The sproxyd connector provides a flexible REST-based API for interacting with Scality RING.

  • Scality RING Scale-Out File System (SOFS)

    SOFS is the RING object store’s native file system. SOFS is a POSIX-compatible parallel file system that provides file storage services on the RING. SOFS is a virtual file system, based on an internal distributed database deployed on the RING.

  • Network File System (NFS)

    Zenko can ingest data and metadata from NFS sources, using a simple Orbit configuration process or from the command line. Because NFS does not maintain an object-store namespace, Zenko extrapolates one using out-of-band updates.

Other sources are under development.

Supported Targets

Zenko can replicate stored data at the site level to the following supported private and public clouds:

  • Amazon S3-based public clouds:
    • Amazon Simple Storage Service
    • DigitalOcean Spaces
    • Wasabi Hot Cloud Storage
  • Amazon-S3-based private clouds:
    • Scality RING S3 Connector
    • Red Hat Ceph RADOS Gateway
  • Other public clouds:
    • Google Cloud Storage
    • Microsoft Azure Storage

Zenko cannot write to non-object-store endpoints such as NFS at this time. Other target clouds and services are under development.

Architecture

This section describes Zenko’s system architecture (the physical and virtual environments on which Zenko operates) and software architecture (the concatenation of microservices that Zenko comprises).

System Architecture

Basics

Zenko provides a layer that mediates between a user or configured storage frontend and one or several storage backends.

_images/Zenko_hi-level.svg

Zenko may use a transient source, which enables it to write once to a master (local) storage cloud, then replicate the stored data to other clouds without incurring egress fees from the primary storage cloud.

Zenko uses agile application frameworks such as Kubernetes for orchestration and Prometheus for monitoring. Zenko is deployed using Kubernetes either on-premises or remotely, or using a cloud Kubernetes framework (such as GKE, AKS, EKS, or Kops). Scality supports MetalK8s as the reference Kubernetes implementation for Zenko installations.

Zenko Services Stack

The following diagram summarizes the Zenko cloud architecture:

_images/Zenko_arch_NoNFS.svg

The Zenko instance depicted above presents an idealized representation of Zenko’s structure. Several complexities are elided for clarity.

Transient source replication is optional and configurable. Transient source storage requires an on-premises RING deployment (with sproxyd).

The central square in this diagram represents the suite of interdependent services required to implement a working Zenko instance. Deployed, this suite of services is highly available, containerized, and under the control of Kubernetes. Kubernetes dynamically creates and destroys services in response to demand.

The following table offers brief descriptions of the Zenko components in this architecture:

Component Description
CloudServer CloudServer is an open-source Node.js implementation of a server handling the Amazon S3 protocol. It presents the core logic for translating user inputs and data into stored objects on several cloud storage systems. With this component, users can create locations corresponding to different clouds.
Backbeat Backbeat manages the queues involved in Zenko cloud event tracing (such as admin_API, etc.) and job queuing for current actions (such as CRR, lifecycle management, synchronous encryption, etc).
Orbit The Orbit UI offers users controls for CloudServer, workflow management, user management, and Metadata (MD) instance configuration using such parameters as location, access key, workflow configuration (CRR, for example), basic search, etc. The UI runs in the cloud and is hosted by Scality.

CLI CloudServer accepts commands from command-line interfaces.
MongoDB An open-source metadata database, MongoDB works with one or multiple instances in scale-out mode. It also explodes JSON values, allowing powerful searches, with optional indexing to speed them up.
Local RING/sproxyd For local cloud storage (including transient source), S3 data can be put to an sproxyd RING.

These services and their likely use cases are described in the sections that follow.

Zenko Cluster Topology

To operate with high availability, Zenko must run on a cluster of at least three physical or virtual servers running CentOS 7.4 or later, and either have access to a hosted Kubernetes service (EKS, GKE, or AKS) or run an instance of MetalK8s on each server.

When run in a cluster configuration, Zenko is highly available. Load balancing, failover, and service management are handled dynamically in real time by Kubernetes. This dramatically improves several aspects of service management, creating a fast, robust, self-healing, flexible, scalable system. From the user’s perspective, Zenko is functionally a single instance that obscures the services and servers behind it.

_images/Zenko_cluster_NoNFS.svg

A basic test configuration—a cluster of three servers—is depicted above. Five servers is the recommended minimum service deployment for high availability. In actual practice, each server can dynamically deploy up to ten CloudServer instances, making for a default maximum of 50 CloudServer instances, plus one master. Kubernetes sets the current upper boundary, defined by the number of pods (including service pods) that can be run, at 100 pods. The Zenko instance that manages all these CloudServers spans all deployed and functioning servers, managing a common namespace of data and associated metadata, with Kubernetes managing individual servers, spinning services up and down in response to emergent conditions.

Software Architecture

Zenko consists of several microservices. Among these are:

Orbit

Scality’s Orbit management portal provides Zenko’s graphical user interface.

Designed for ease of use, this Scality-managed and -hosted cloud portal offers easy management and monitoring of Zenko instances and metrics, as well as a simple, point-and-click interface to configure and monitor multi-cloud workflows.

Orbit offers the following features:

  • Login and authentication
  • Cloud “location” and credential management for:
    • Zenko local filesystem
    • Scality RING with SOFS
    • Scality RING with S3 Connector
    • AWS S3
    • Ceph RADOS Gateway
    • Google Cloud Storage (GCS)
    • NFS
    • Microsoft Azure Blob
    • Wasabi Cloud
    • DigitalOcean Object Storage
    • Local transient/cache storage (RING)
  • Lifecycle management (health status, key metrics/KPIs) for Zenko instances
  • Workflow configuration and monitoring (for CRR and Lifecycle expiration)
  • S3 data browser
  • Metadata search
  • Help and resources

You can test-drive Orbit by following the “Try Zenko” link at https://www.zenko.io/ and the instructions for Setting Up an Orbit Sandbox Instance.

For a full walk-through of these features, see Using Orbit.

CloudServer

Zenko CloudServer is an open-source Node.js object storage server handling the Amazon S3 protocols.

By providing a free-standing implementation of the S3 API, CloudServer offers developers the freedom to build S3 apps and run them either on-premises, in the AWS public cloud, or both—with no code changes. CloudServer is deployed in a Docker container.

Overview
_images/CloudServer.svg
Component Description
S3 routes The main S3 service that receives S3-protocol commands.
Backbeat routes A special Backbeat-only S3 service that uses Backbeat routes to replicate data to other clouds and update the replication status of the local object, while being authenticated as the internal Backbeat service.
Management agent CloudServer establishes an HTTPS connection to Orbit (API Push Server) using polling or websockets. The management agent stores the configuration, an in-memory-only overlay of the configuration, in the Metadata service. The same mechanism retrieves statistics from the Backbeat API and will later control the Replication service and other service components.
Prometheus client (Not depicted) Monitoring information is maintained in a Prometheus endpoint. Prometheus polls this endpoint for monitoring.
Metadata backend A multi-backend interface that can communicate with MongoDB.
Data backend A multi-backend interface that can communicate with different clouds (S3, Azure, GCP) while preserving the namespace.

Note

CloudServer also supports the bucketd and sproxyd protocols for S3 Connector.

Use Cases

As currently implemented with Zenko, CloudServer supports the following use cases.

  • Direct cloud storage

    Users can store data on the managed cloud locations using the S3 protocol, if a cloud location (AWS, Azure, GCP, etc.) and endpoints are configured (using Orbit or a configuration file).

  • Managing a preferred location for PUTs and GETs

    When defining an endpoint, you can define and bind a preferred read location to it. This is a requirement for transient source support.

  • Objects’ cloud location readable

    CloudServer can read objects’ location property.

  • Direct RING storage (sproxydclient)

    CloudServer uses a library called sproxydclient to access the RING through the sproxy daemon (sproxyd).

  • Direct SOFS storage (cdmiclient)

    CloudServer uses a library called cdmiclient to access the SOFS Dewpoint daemon through the CDMI protocol. Both the file system and the S3 environment have their own metadata. The CDMI protocol allows a user to attach custom metadata to an entity (directory/file). This feature is used to save S3 metadata: an entry named “s3metadata” is added to a metadata entity. Its value is the S3 metadata (JSON object). When an object is created from an S3 client, the cloud server produces all the S3 metadata. When a file is created using the file system interface (either using CDMI protocol or a traditional file system client on Dewpoint daemon fuse mountpoint), S3 metadata is reconstituted from POSIX information.

  • Healthcheck

    Currently, a liveness probe calls /_/healthcheck/deep. Services that expose readiness can also get a readiness probe.

  • Metrics Collection

    These metrics are valid on all Node.js-based services:

Node.js Process General Metrics
Metric Description
nodejs_version_info Node.js version info
nodejs_heap_space_size_available_bytes Process heap space size available from node.js in bytes
nodejs_heap_size_total_bytes Process heap size from node.js in bytes
nodejs_heap_size_used_bytes Process heap size used from node.js in bytes
nodejs_external_memory_bytes Node.js external memory size in bytes
nodejs_heap_space_size_total_bytes Process heap space size total from node.js in bytes
process_cpu_user_seconds_total Total user CPU time spent in seconds
process_cpu_system_seconds_total Total system CPU time spent in seconds
process_cpu_seconds_total Total user and system CPU time spent in seconds
process_start_time_seconds Start time of the process since unix epoch in seconds
process_resident_memory_bytes Resident memory size in bytes
nodejs_eventloop_lag_seconds Lag of event loop in seconds
nodejs_active_handles_total Number of active handles
nodejs_active_requests_total Number of active requests
nodejs_heap_space_size_used_bytes Process heap space size used from node.js in bytes

Cloud Server General Metrics
Metric Description
cloud_server_number_of_buckets Total number of buckets
cloud_server_number_of_objects Total number of objects
cloud_server_data_disk_available Available data disk storage in bytes
cloud_server_data_disk_free Free data disk storage in bytes
cloud_server_data_disk_total Total data disk storage in bytes

Labeled Metrics
Metric Description
cloud_server_http_requests_total Total number of HTTP requests
cloud_server_http_request_duration_microseconds Duration of HTTP requests in microseconds
cloud_server_http_request_size_bytes The HTTP request sizes in bytes
cloud_server_http_response_size_bytes The HTTP response sizes in bytes
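
As an illustration, any of these metrics can be queried through the Prometheus HTTP API once you know your Prometheus service address (the address below is a placeholder; the -g flag stops curl from interpreting the brackets in the PromQL expression):

$ curl -g 'http://<prometheus-host>:9090/api/v1/query?query=rate(cloud_server_http_requests_total[5m])'
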
Backbeat

A native-Node.js engine built around a messaging system, Backbeat is the core engine for asynchronous replication, optimized for queuing metadata updates and dispatching work to long-running background tasks.

Backbeat is:

  • Containerized: Backbeat lives in its own set of containers.
  • Distributed: Three to five instances per site are required for high availability. If one instance dies, the queue replica from another instance can be used to process the records.
  • Extensible: Backbeat allows extensions to the core engine to realize such features as replication, health check, and lifecycle management.
  • Backend-agnostic: All interactions go through CloudServer, which exposes separate “fast-path” routes, used only by Backbeat, on different ports controlled by an accessKey and secretKey.
  • Backgroundable: Backbeat includes a background process that runs on a crontab schedule. For example, the process can wake up at a set time and get a list of buckets, get attributes for a bucket, check lifecycle policy, check object metadata for tags (if tags are used in a policy), and then apply the lifecycle action to matched objects.

Zenko uses Backbeat extensions to enable cross-region replication, health checks, and object lifecycle management. Further extensions are under development.

Backbeat is an open-source project. You can learn more about it at its home repository: https://github.com/scality/backbeat.

Kubernetes

Kubernetes is the essential manager for Zenko cloud instances, spawning and destroying containerized services, load balancing, and managing failover for a robust, highly available service.

Zenko operates with Kubernetes engines provided by all major cloud storage providers, including Amazon Elastic Container Service for Kubernetes (EKS), Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). Additionally, Scality provides MetalK8s, an open-source Kubernetes engine you can deploy on a cluster to provide Kubernetes service to maintain independence from cloud storage providers.

MetalK8s

MetalK8s provides a Kubernetes engine that can be hosted on a local or virtual machine. Zenko uses Kubernetes to automate the deployment of CloudServer instances whenever server operations cross pre-configured thresholds. Kubernetes reduces the complexity of container service and management previously addressed with Docker Swarm. MetalK8s, an open-source Scality project, reduces the complexity of deploying Kubernetes outside of a public cloud. You can use any Kubernetes deployment to run Zenko, but MetalK8s offers total platform independence.

MetalK8s builds on the Kubespray project to install a base Kubernetes cluster, including all dependencies (like etcd), using the Ansible provisioning tool. This installation includes operational tools, such as Prometheus, Grafana, ElasticSearch, and Kibana, and deploys by default with the popular NGINX ingress controller (ingress-nginx). All these are managed as Helm packages.

Unlike hosted Kubernetes solutions, where network-attached storage is available and managed by the provider, MetalK8s is designed with the assumption that it will be deployed in environments where no such systems are available. Consequently, MetalK8s focuses on managing node-local storage and exposing these volumes to containers managed in the cluster.

MetalK8s is hosted at https://github.com/scality/metalk8s. Instructions for deploying MetalK8s on your choice of hardware (real or virtual) are provided in the project’s Quickstart. Documentation is available at: https://metal-k8s.readthedocs.io/. Installation instructions specific to deploying MetalK8s for Zenko are included in Zenko Installation.

Cosmos

Cosmos provides translation from CloudServer to the NFS file system protocol. It provides a REST API to CloudServer and translates CloudServer requests into native file system requests. The anticipated use case for Cosmos is to enable CloudServer to detect changes made to a non-object-storage repository with NFS access and to back up these changes to a cloud storage location on a regular schedule. Cosmos serves these changes for CloudServer ingestion over the S3 protocol.

_images/Cosmos.svg

Because these venerable protocols do not maintain object-storage namespaces, Cosmos cannot actively manage resources in the NFS space. Instead, changes to the NFS space are regularly captured on a schedule set up in cron. At preconfigured intervals, Cosmos detects changes on the NFS side and updates the inferred namespace. This namespace can be used for such advanced features as metadata search, lifecycle management, and cross-region replication.

ZooKeeper

ZooKeeper is an Apache service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. Backbeat uses ZooKeeper as a configuration engine, storing information generated and required by Kafka in a Kafka-friendly Java-based context. Backbeat also uses ZooKeeper to store and maintain some state information (oplogs).

Kafka

Kafka is a distributed streaming platform written in Java for the Apache ecosystem.

Backbeat uses Kafka as a journaling system to track jobs to be performed and as a persistent queue for failed operations to retry. Jobs come to Kafka from Node.js streams (from producers), and Kafka queues them for Backbeat to follow.

The lifespan of jobs queued in Kafka’s journal is configurable (the current default lifespan of a job in Backbeat’s Kafka queue is seven days), and after a job’s age reaches the configured retention time, Kafka purges it from the queue. This solves two problems: job instructions are stored in a stable, non-volatile system (not, for instance, in a volatile database such as Redis), yet they do not accumulate forever (once the retention time elapses, the job information is not preserved).
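
As a purely illustrative sketch (Zenko manages these settings through its own configuration, so this is not a command to run against a live deployment), the seven-day default corresponds to a topic-level retention.ms of 604800000 milliseconds, which plain Kafka would express as:

# illustrative only: the topic name and ZooKeeper host are placeholders
$ kafka-configs.sh --zookeeper <zookeeper-host>:2181 --alter \
    --entity-type topics --entity-name <topic-name> \
    --add-config retention.ms=604800000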

MongoDB

MongoDB is a semi-structured, open-source NoSQL database system built on JavaScript Object Notation (JSON). Zenko uses it for several tasks; primary among these is maintaining a freestanding, independent namespace and location mapping of all files held in object storage. Additionally, Zenko uses MongoDB to retain and order metadata associated with the objects stored in these namespaces. As a semi-structured metadata database, MongoDB also offers SQL-like metadata search capabilities that can search large volumes of metadata quickly.

As deployed in Zenko, MongoDB also ingests status, log information, and metrics from Prometheus.

MongoDB is essential to the following Zenko features:

  • Data replication
  • Metadata search
  • Health check
  • Metrics collection
  • Queue population

Prometheus

Prometheus is a systems and service monitoring system produced as an open-source project of the Cloud Native Computing Foundation. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if a condition is observed to be true.

Prometheus offers Kubernetes-native support for discovering services and metrics. Its primary use in Zenko is to gather monitoring data from Zenko services, nodes, and clusters. This information is then sent to Grafana for dashboard-style display.
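
For a quick illustration of direct access (a sketch; the host and port are placeholders, and how Prometheus is exposed depends on your deployment), you can probe Prometheus’s HTTP query API with curl:

$ curl 'http://<prometheus-host>:9090/api/v1/query?query=up'
# returns JSON listing each scrape target and whether it is up (1) or down (0)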

Redis-Sentinel

Zenko uses Redis-Sentinel to provide a highly available in-memory database for processes in:

  • CRR (pause and retry feature)
  • CloudServer
  • Backbeat API

In general, Redis-Sentinel is useful for short-term database needs, such as a quick or temporary job, or as a shared database between two or more services. Information stored in Redis can be written, accessed, rewritten, and destroyed faster and more easily than with a full-featured database.

S3 Data Service

The S3 Data service is a local implementation of Scality’s object-storage namespace. It enables local storage that maintains naming consistency with cloud namespaces. The S3 Data Service can operate as a default data store where a local cloud is not available and latency makes a remote cloud infeasible. This service can also be configured as the local buffer for cloud storage operations: write once to local object storage for replication to one or several public or private clouds.

The S3 Data service is available for testing purposes only. It can become unreliable under heavy loads. For production systems, it must be backed by a replicated local data store.

Services

Zenko offers the following services as Backbeat extensions:

Replication Service

Replication

Replication is the automatic copying of information from one location to another. Zenko extends beyond the AWS specification’s paradigm by enabling replication to occur not only from one site to another, but also from one site to several.

The Backbeat replication extension manages many of the complexities of large-scale replication, including defining tasks, handling large queues, failing gracefully and retrying, and managing deferred tasks. Zenko enables site-level replication, mirroring a primary cloud object storage site (public or private) and copying its contents to another public or private cloud.

Transient Source Replication

Public cloud data storage services can charge substantial egress fees for moving data out of their cloud. This can make it costly to use a public cloud as a primary storage site from which other cloud instances are replicated. Zenko offers a transient source as a configurable option. If configured, the transient source accepts data to be replicated in a transient storage location on the host machine, replicates the data to the target clouds, and once all replications show success, deletes the data from the transient source location. If transient source replication is not configured, the cross-region replication feature copies the data to the primary cloud (this could be a public cloud or a private cloud, such as the RING), and replicates the data to other clouds from there.

Garbage Collection

The Garbage Collector is a Node.js service that cleans up the transient source buffer after a replication is complete. When the Backbeat replication status processor receives notification from all targets that a PUT job is complete (HTTP 200, indicating success), it enters a task in the Kafka queue to delete the transient source buffer. The Garbage Collector service is invoked and deletes files meeting the job description.

Lifecycle Management Service

Lifecycle management enables you to set policies in Zenko that control the lifecycle of objects in a bucket conforming to AWS S3 lifecycle rules. Lifecycle management gives users the ability to specify a time threshold beyond which certain files are to be moved (lifecycle transition) or expunged (lifecycle expiration) to free up storage space or reduce storage costs. You can control lifecycle policies using the Orbit user interface, or with API calls.

Using this Backbeat extension, Zenko follows the S3 API to provide three calls to manage lifecycle properties per bucket:

  • PUT Bucket Lifecycle
  • GET Bucket Lifecycle
  • DELETE Bucket Lifecycle

These calls manage bucket attributes related to lifecycle behavior, which are stored as bucket metadata.
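
For example, you can exercise these calls with the AWS CLI pointed at a Zenko endpoint (a sketch; the bucket name, endpoint URL, and rule contents are placeholders):

$ aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
    --endpoint-url http://<zenko.endpoint.url> \
    --lifecycle-configuration '{"Rules": [{"ID": "expire-logs",
      "Filter": {"Prefix": "logs/"}, "Status": "Enabled",
      "Expiration": {"Days": 90}}]}'

$ aws s3api get-bucket-lifecycle-configuration --bucket my-bucket \
    --endpoint-url http://<zenko.endpoint.url>

$ aws s3api delete-bucket-lifecycle --bucket my-bucket \
    --endpoint-url http://<zenko.endpoint.url>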

Lifecycle Policies

Cloud users can apply lifecycle expiration or transition rules (specified in Amazon’s AWS S3 API) to buckets managed through Zenko. These rules are triggered after a defined time has passed since the object’s last modification.

Zenko supports expiration or transition of versioned or non-versioned objects when a defined number of days has passed since those objects’ creation. This enables automatic deletion of older versions of versioned objects to reclaim storage space. Zenko also supports triggering expiration or transition of the latest version on a specific date; this is currently available from the S3 API, but not from Orbit. From the command line or from Orbit, you can expire the current version of an object with a separate rule. For versioned buckets, lifecycle adds a delete marker automatically when a rule expiring a current version triggers, just as when a user deletes an object without specifying a version ID.

Bucket lifecycle characteristics inhere in each bucket. Zenko’s lifecycle management feature enforces, but does not set, these characteristics. When lifecycle management is enabled, the host cloud enforces buckets’ lifecycle rules.

To configure bucket lifecycle, follow the AWS S3 Lifecycle Configuration Element syntax described in https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html.

Note

Lifecycle management rules conform to the S3 lifecycle management syntax for expiration and transition policies described at https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html. Files that exceed a preconfigured temporal threshold (for example, 90 days) can be transitioned (moved) or expired (deleted) from the bucket on which they are stored.

Zenko does not support lifecycle transitions to Amazon’s STANDARD_IA or GLACIER storage classes, only to other storage locations.

Out-of-Band Updates

Zenko depends on object storage metadata for its operation. When a new Zenko location is configured, Zenko encounters a new object storage system. If Zenko does not understand the location’s namespace, Zenko creates a new Zenko namespace and uses it to perform all of its data management tasks (metadata search, replication, lifecycle transition and expiration). If Zenko recognizes the existing namespace, it can ingest the existing namespace metadata, in a process called an out-of-band update.

Out-of-band updates are available for Scality’s RING object storage system via S3 Connector, or for RINGs using the Scale-Out File System (SOFS) via the NFS protocol. When each location is established, versioning must be enabled. Out-of-band updates for S3C offer metrics; for NFS services, this feature remains under development.

More services are under development.

Using Orbit

Orbit is the browser-based graphical user interface for the Zenko platform. While Zenko offers its full functionality through a rich API, almost all of its features are also available through Orbit, providing ease of use and excellent opportunities for visualizing managed data.

Orbit Security

Orbit sessions use a direct connection proxied through https://admin.zenko.io. Connections are encrypted end-to-end between the user’s browser and the admin.zenko.io proxy, and from there to the Zenko instance. No user content in transmission is stored, cached, read, or readable by Scality systems.

Metrics (and configuration) are stored in Scality systems using non-identifiable numbers or encrypted blobs, except for the following essential namespace information:

  • Bucket names
  • Remote credential names (secret keys are encrypted such that only the Zenko instance itself can decrypt them)
  • Zenko-local storage account names, location names and endpoints

The following features employ real-time communication:

  • CRR failed object listing and retry operations
  • Multicloud browser
  • Metadata search

The most secure operating mode is when the instance is linked and locked to a user account. Forgetting the instance, except for momentary transfer to another account, is not a best practice. Forgetting the instance does not stop the secure channel, which is needed for normal operation.

Setting Up Orbit

Orbit is a web-based graphical user interface that simplifies Zenko operations. You can set up Orbit in a sandbox demo mode or as a registered instance. Once Orbit is configured, you can unlink it from a Zenko instance as well.

Setting Up an Orbit Sandbox Instance

A good way to learn how to use Orbit is through the sandbox feature available at zenko.io under Try Zenko.

Prerequisites

  • A web-connected browser
  • One or more cloud storage targets (AWS, RING, GCP, Azure, etc.)

The Sandbox is a great place to learn how Orbit will help you manage your data. To set up a Sandbox instance:

  1. Open zenko.io, and click Try Zenko.

    image0

  2. Click Register with Google. You must authenticate using a Google ID.

    _images/google_login.png
  3. After you have registered, the Welcome dialog displays:

    image1

    Click Install now.

  4. The REGISTER AN INSTANCE screen displays:

    image2

    Choose the Sandbox option (Next: Let’s do this!)

  5. The CREATE YOUR ZENKO SANDBOX screen displays:

    image3

    Enter a name for your sandbox and click Create Sandbox.

  6. After less than a minute, the Settings window displays:

    image4

    Your sandbox is created. Depending on server load, there may be a delay of a few minutes to complete the Orbit setup.

  7. Once setup is complete, you’re taken automatically to the STORAGE ACCOUNTS screen for account creation.

    _images/newuser_add_storage_location_prompt.png
  8. Add a storage account name and click the Generate button. This creates a new user/account, and generates an access/secret key pair.

  9. Click Show to reveal your secret key. Copy this to a secure location, either by highlighting the exposed text or clicking the Copy button to transfer the secret key to your clipboard.

    _images/secret_key_my_account.png

    Important

    You do not get a second chance. Copy this now.

  10. The sandbox is for demonstration purposes, and is limited in total data managed (1 TB) and duration (48 hours). Scality may change these limits without notice. You can check how much time remains for your sandbox in the Settings window’s Sandbox Time Left indicator.

    _images/sandbox_settings.png

    The sandbox runs against a Zenko instance hosted by Scality. Though this demonstration instance is limited both in its lifespan and in the amount of data it can handle, you can use it to watch Zenko in action.

Setting Up a Full Orbit Installation

Orbit can be run as a user interface to Zenko no matter where or how Zenko is hosted. You can deploy Zenko in any of the following topologies:

  • As a test instance running on a local machine using Minikube
  • On a cloud host using MetalK8s
  • On a cloud host using the host’s native Kubernetes environment (EKS, GKE, AKS).

To run a “full” Zenko installation, you must register your Zenko instance to Orbit.

  1. Go to Zenko.io.

  2. Click Register with Google (Google ID required)

  3. Authenticate:

    image0

  4. Click Install Now.

    _images/Orbit_Welcome_screen.png
  5. Review and affirm the Privacy Policy:

    image1

  6. Click Register My Instance.

    image2

  7. Enter your Instance ID and your instance’s name, then click Submit Now!

    image3

Tip

To find your Instance ID, use the kubectl commands from Zenko Installation.

Unlinking Orbit from Zenko

During product familiarization and solution development, it may be necessary to unlink a Zenko instance from Orbit. To do this:

  1. Click the Settings button in the sidebar.

    _images/sidebar_settings_button.png
  2. Click the Unlink this Zenko Instance button.

    _images/unlink_zenko.png
  3. Click Forget to confirm.

    _images/Forget_Zenko_Instance.png

Orbit’s association with the Zenko instance is broken. To connect Orbit to another Zenko instance or reconnect it to the forgotten instance, follow the steps in Setting Up a Full Orbit Installation.

The Dashboard

On login, the dashboard displays:

image0

The dashboard provides useful information about ongoing Zenko operations, replication status, and cloud server health.

The Sidebar

Visible from all Orbit windows, the sidebar contains navigational buttons for Orbit’s main windows, which are described in the sections that follow.

In addition to the options on the left sidebar, you can click the View More Statistics button to see more statistics about buckets, objects, and resources.

Statistics

The Statistics window provides visualizations of bucket and object counts, capacity, CPU, and memory use, as well as total data under management.

image0

Information is dynamic and refreshed every 30 seconds by default, over a moving window. For Backlog, Completions, and Failures, this window is 24 hours, after which data expires. For all other statistics, data expires after 15 minutes. The refresh frequency and retention of these statistics are not currently configurable; however, Prometheus retains all statistics for 15 days by default, and this duration is configurable.

Escape this screen by clicking the back arrow or a sidebar button.

Settings

The Settings window provides information on the Zenko instance (its name, ID, edition, version and last-modified date), and personal information, such as whether you want to receive news updates, as well as a link to Scality Support for data retrieval or account deletion.

_images/Settings.png

The Settings window also provides a button, Unlink this Zenko Instance, which offers to forget the Zenko instance. This option does not delete the Zenko instance; rather, it breaks the association between Orbit and the instance. If you click this and confirm (Forget), you will not lose data or otherwise perturb the running Zenko instance, but you will have to follow the steps in Setting Up a Full Orbit Installation to connect Orbit to another Zenko instance or reconnect it to the forgotten instance.

User Management Tasks

Using the Orbit interface, you can manage users by adding them, changing their credentials (secret keys), or revoking their credentials altogether.

Add a New User
  1. Click Storage Accounts in the sidebar to raise the Storage Accounts window.

    _images/sidebar_storage_accounts_button.png
  2. Enter a new user name in the Account Name field and click Generate.

    image0

  3. Click Show to see the secret key associated with this user:

    image2

    Copy this key and store it.

    Warning

    You will not get a second chance to copy this key! If you lose the key, the user name and any information associated with it are lost as well.

    A Copy button is included in the user interface for your convenience.

As the Zenko user, you can create multiple users in the Zenko-managed namespace, each with a unique access key and secret key. You can also re-generate access/secret key pairs for any such user.

Copy Owner ID

The owner’s Canonical ID is a 64-byte hash string that uniquely identifies the owner. It can be useful for certain searches, and for cross-account and cross-user access. Zenko provides an easy way to copy this string.

  1. Click the Browser button to open the Multicloud Browser.

    _images/sidebar_browser_button2.png
  2. In the Multicloud Browser window, double-click a bucket to open it.

    _images/bucket_select.png
  3. The Copy Owner ID button appears above the file and directory listing.

    _images/Orbit_bucket_canonical_ID.png
  4. Click the Copy Owner ID button. The canonical user ID is transferred to the clipboard. The Zenko UI does not offer notification of this transfer.

Changing User Credentials

To change a user’s credentials (public/private key pair) from Orbit:

  1. Click the Storage Accounts button in the sidebar.

    _images/sidebar_storage_accounts_button.png
  2. Look for the user’s name in the STORAGE ACCOUNTS pane.

    _images/credentialed_user.png
  3. Click Replace.

  4. Orbit warns you that this could cause problems for the user. Click Regenerate.

    _images/Orbit_User_regen_key.png
  5. Show the new key by clicking the Show button or copy it directly to your clipboard using the Copy button on the user’s line.

    _images/Orbit_user_secret_key.png

    Warning

    You will not get a second chance to copy this key! If you lose the key, the user name and any information associated with it are lost as well.

    The user’s public access key remains unchanged.

Removing a User
  1. Click Storage Accounts in the sidebar.

    _images/sidebar_storage_accounts_button.png
  2. Find the user to remove:

    _images/Orbit_User_Remove_Name.png
  3. Click Delete. Orbit requests confirmation.

    _images/orbit_user_revoke_warning.png
  4. The user’s access key is revoked and the user name is deleted from the list of storage accounts/tenants.

Location Management

Users save data to files, which are stored as objects in buckets. Buckets are stored in locations, which correspond (quite roughly) to a service and a region, such as AWS us-east-1 or GCP europe-west1. Before you can establish a location in Orbit, you must have an account with at least one cloud storage service. This can be a public cloud, such as Amazon S3 or Google Cloud Platform, or Scality’s private cloud, the RING with S3 Connector. RING provides resilient, reliable object storage, and S3 Connector provides an Amazon S3-compatible command set and namespace.

Public Clouds

If you don’t have a cloud storage account already configured, start with your chosen provider’s sign-up and getting-started pages; their locations are subject to change without notice.

See Zenko Installation for advice on sizing, cluster configuration, and other preparations.

Scality RING with S3 Connector

S3 Connector provides an Amazon S3-compliant frontend for the Scality RING private cloud storage solution.

Except as noted, you can integrate to S3 Connector exactly as you would integrate to any of the other S3-based cloud services, such as AWS, DigitalOcean Spaces, Wasabi Hot Cloud, or Ceph RADOS Gateway. See the S3 Connector and RING documentation at https://documentation.scality.com/ for more details on deploying and configuring S3 Connector with the RING.

Scality RING with sproxyd

sproxyd presents a RING-native REST API providing direct object-store access to the RING. If you are integrating to a RING that does not have an S3 Connector installed, this is probably the API you use to access the RING.

Zenko Local

The Zenko Local filesystem is a convenient, easily configured test location that enables product familiarization and simple operations. It is internal to Zenko, and serves as the fallback for buckets that do not name a location, which rely on the default “us-east-1” location. Because it is internal to Zenko, the “Add New Storage Location” prompt does not offer configurations for credentials or bucket naming at setup time. These are handled elsewhere in the Orbit user interface.

Warning

While convenient for testing purposes, the Zenko Local filesystem is not recommended for use in a production setting. The Zenko Local filesystem introduces a single point of failure and is thus unsuitable for a highly-reliable, highly-available storage solution.

NFS Mount

Zenko can access information and file system metadata over the NFSv3 and NFSv4 protocols. To configure Zenko to access NFS using out-of-band updates, review the NFS host’s /etc/exports file to find the relevant export path, hostname, and NFS options. Use nfsstat on the NFS host to discover the relevant NFS version and protocol.
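
For example, on the NFS host (a sketch; the export path and client hostname are placeholders):

$ cat /etc/exports
/export/zenko  client.example.com(rw,async)

$ nfsstat -s    # per-version call counts indicate whether v3 or v4 is in use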

Important

Do not configure CRR for NFS mounts unless there is predictable downtime for replication. For NFS CRR, Zenko scans the NFS file system, then detects and replicates changes, assuming that the NFS mount does not change after the scan but before replication completes. Changes written in that interval may not be replicated.

The following sections describe the location management tasks you can perform.

Adding a Storage Location

To add a storage location:

  1. Click the Storage Locations item in the sidebar to open the Cloud Locations window:

    _images/Orbit_Storage_Locations.png
  2. Click Add New.

  3. The Add New Storage Location dialog displays:

    _images/Orbit_Add_New_Storage_Location.png
    1. Enter a location name in the Location Name field using lowercase letters, numbers, and dashes.

      Note

      Capital letters, spaces, punctuation, and diacritical marks result in an error message.

    2. Select a location type from the Location Type pull-down menu. You can choose:

      • Amazon S3
      • DigitalOcean Spaces
      • Wasabi
      • Google Cloud Storage
      • Microsoft Azure Blob Storage
      • NFS Mount
      • Scality RING with S3 Connector
      • Scality RING with sproxyd Connector
      • Ceph RADOS Gateway
      • A Zenko local filesystem
  4. Each storage location type has its own requirements. No security is required for a local file system, but all public clouds require authentication information.

    Note

    Adding a location requires credentials (an access key and a secret key). Though nothing prevents you from using account-level credentials when Zenko requests credentials for a location, it is a best practice to enter credentials specifically generated for this access. In other words, before you add a location, first create a user in that location (an AWS account or an S3 Connector, for example) for the purpose of Zenko access. Give that Zenko “user” all and only the permissions needed to perform the desired tasks.

    Tip

    When configuring an S3 Connector, assign the following policy to the special zenko-access user to ensure access to the Metadata service and the ability to perform operations on the bucket:

    {
      "Version":"2012-10-17",
      "Statement":[
        {
          "Action":"metadata:*",
          "Effect":"Allow",
          "Resource":"*"
        },
        {
          "Action":"s3:*",
          "Effect":"Allow",
          "Resource":"*"
        }
      ]
    }
    
Cloud Storage Locations

All the cloud storage services Zenko supports require the same basic information: an access key, a secret key, and a target bucket name. [1] The Orbit interface also presents the following requirements for each cloud storage system.

Service                        Endpoint   Bucket Match   Server-Side Encryption   Target Helper for MPU
Amazon S3                      -          -              Yes                      -
Ceph RADOS Gateway             Yes        Yes            -                        -
DigitalOcean Spaces [2]        Yes        -              -                        -
Google Cloud Storage           -          -              -                        Yes
Microsoft Azure Blob Storage   Yes        -              -                        -
RING/S3C                       Yes        Yes            -                        -
Wasabi                         -          -              -                        -

These configuration options are described below.

Endpoint

Some service providers assign fixed endpoints to customers. Others require users to name endpoints. Services for which Zenko requests endpoint names may have additional naming requirements. For these requirements, review your cloud storage service provider’s documentation.

Note

The Add Storage Location screen for Wasabi presents an endpoint field, but it is not yet editable.

For Ceph RADOS Gateway endpoints, you can nominate a secure port, such as 443 or 4443. If you do not, the default is port 80. Whichever port you assign, make sure it is accessible to Zenko (firewall open, etc.).

Bucket Match

Zenko provides a “Bucket Match” option for Ceph RADOS Gateway and Scality S3 Connector. If this option is left unchecked, Zenko prepends a bucket identifier to every object in the target backend’s namespace. This enables a “bucket of buckets” architecture in which the target backend sees and manages only one large bucket and Zenko manages the namespace of the “sub-buckets.” Checking the Bucket Match box deactivates this feature: bucket names are no longer prepended, and the bucket structure in the host cloud is copied identically to the target cloud.

Important

If the Bucket Match option is set, buckets in the target location cannot be used as a CRR destination. Zenko requires the bucket identifier in order to manage the namespace for replication.

Server-Side Encryption

Encryption-based transfer protocols ensure your credentials and transmitted information are secure while in transit. The S3 API also offers encryption and key management services to protect information stored on cloud drives. From Orbit, clicking Server Side Encryption when setting up a location creates a location with encryption enabled for all objects stored there. Encryption is set at the bucket level, not at the object level. Object encryption is delegated to the cloud storage system.

Server-side encryption is based on the x-amz-server-side-encryption header. Inquire with your cloud vendor to determine whether server-side encryption using x-amz-server-side-encryption is supported on their platform. A table is provided in this document, but vendors’ offerings are subject to change without notice.

If you have already created a bucket with server-side encryption enabled (SSE-S3 protocol), clicking Server Side Encryption forces Zenko to include "x-amz-server-side-encryption": "AES256" in API calls to the cloud host (AWS or a vendor that supports the call). If valid credentials are provided, the cloud service provides the objects thus requested.
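
For instance, with such a location configured, a PUT through the standard S3 API carries the encryption header (a sketch; the bucket, key, and endpoint are placeholders):

$ aws s3api put-object --bucket my-bucket --key report.pdf \
    --body ./report.pdf --server-side-encryption AES256 \
    --endpoint-url http://<zenko.endpoint.url>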

Target Helper Bucket for Multi-Part Uploads

The Google Cloud Storage solution imposes limitations on uploads that require specific workarounds. Among these is a 5 GB hard limit on uploads per command, which requires objects over this limit to be broken up, uploaded in parallel chunks, and on a successful upload reassembled in the cloud. Zenko manages this complexity, in part, by using a “helper” bucket.

Note

Google Cloud Storage also imposes a 1024-part cap on objects stored to its locations (for all other backends, Zenko caps the number of parts at 10,000). For data stored directly to GCP as the primary cloud, Zenko propagates this limitation forward to any other cloud storage services to which Google data is replicated.

Other Services: Zenko Local, RING/sproxyd, and NFS
Zenko Local Filesystem

Zenko Local Filesystem has similar authentication requirements to AWS S3, but because it is a Zenko-native filesystem, it shares authentication and related credentialing tasks, which are addressed elsewhere in the Orbit UI.

For more information, see Zenko Local.

RING with sproxyd Connector

The RING maintains stability and redundancy in its object data stores by way of a bootstrap list. To access a RING directly using sproxyd, you must enter at least one bootstrap server; more are better. This is simply a list of addresses for the bootstrap servers in the RING. The order of entry is not important: none enjoys a preferred position. Entries can use DNS or IP address format, and may specify a port number; if a port is not explicitly assigned, Zenko assigns port 8081 by default.
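
A hypothetical bootstrap list might look like this (the addresses and hostnames are placeholders; the second entry receives the default port 8081):

10.0.0.1:8081
10.0.0.2
ring-node-3.example.com:8081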

NFS

Zenko supports out-of-band updates from NFSv3 and NFSv4 file systems. Zenko replicates data from NFS servers to cloud storage services using scheduled cron jobs.

Note

For NFS mounts, Zenko cannot perform data PUT transactions. In other words, data can be written directly to NFS for Zenko to replicate to other backends, but cannot be written to Zenko to replicate to NFS.

Configuring NFS requires you to specify the transfer protocol (TCP or UDP), NFS version (v3 or v4), the server location (IP address or URI), export path (the path to the NFS mount point on the server), and the desired NFS options (rw and async are the default entries).

Transient Sources

Both RING with sproxyd and Zenko Local file systems can be configured as transient sources. The transient source can be deployed as a “buffer” for replication to cloud locations. This configuration enables replication from a local service to multiple “parallel” cloud locations without incurring egress fees. Once data has been replicated, it is deleted from the transient source.

Configuring a location as a transient source requires checking the Delete objects after successful replication box under the Advanced Options submenu.

See Adding a Transient Source Storage Location for details.

[1] Microsoft’s setup procedure is functionally identical to that of AWS S3. However, the Microsoft terms “Azure Account Name” and “Azure Access Key” correspond, respectively, to the AWS terms “Access Key” and “Secret Key.” Do not confuse Amazon’s “access key” (a public object) with Microsoft’s “access key” (a secret object).
[2] DigitalOcean uses different nomenclature (“Space Name” instead of “bucket name,” for example) but its constructs are functionally identical to Amazon S3’s.

Adding a Transient Source Storage Location

Adding a transient source storage location is quite similar to adding any other storage location, but for a few particulars.

A transient source location is a temporary buffer to which data is stored and from which data is replicated. Scality RING with sproxyd is the only production-ready environment that supports the transient source replication feature (the Zenko Local environment also supports this feature, but is suitable for testing purposes only). Data written to the transient source location can be replicated to any cloud service Zenko supports.

To deploy a transient source storage location:

  1. Click the Storage Locations button in the sidebar.

    _images/sidebar_storage_locations_button.png
  2. The Cloud Locations window displays. Click Add New.

    _images/cloud_locations_modal.png
  3. The Add New Storage Location modal appears. Enter the Location Name and from the Location Type drop-down list, select Scality RING with Sproxyd Connector.

    _images/Add_New_Storage_Location_RING_sproxyd.png
  4. Enter the Location Details (Bootstrap List, Proxy Path, and Replication Factor for Small Objects). Click Advanced Options to display the advanced options.

    _images/Add_New_Storage_Location_RING_advanced_options.png
  5. To create a transient source, check the Delete objects after successful replication option. You can also set the “Limit total size in this location to” parameter to a reasonable size that conforms to the anticipated size of files, peak demand, and estimated throughput of the slowest cloud to which you intend to replicate data.

  6. Click Save. The transient source location is established.

  7. Go to Set Up Replication, setting the transient source as the source bucket.

Do not update metadata on objects in a transient source bucket: such changes will fail, because the S3 protocol does not support changing the metadata of a stored object in place.

Location Status

The Location Status button on the Orbit sidebar opens the Location Status window.

_images/location_status.png

This provides a high-level view of the cloud locations under Zenko’s management. From the Location Status window, you can see which locations are configured, each location’s type and operating status, and the amount of data stored at that location.

No management tasks are accessible from the Location Status window. It is informative only.

Bucket Management Tasks

In Orbit, location operations pertain to data sources, and bucket operations pertain to replication targets.

Orbit makes available the following bucket management tasks.

Create a Bucket

To create a bucket:

  1. Click Browser in the sidebar to open the Multicloud Browser:

    image0

    Click the Create Bucket button.

  2. The Create Bucket dialog displays:

    _images/Orbit_bucket_create_dialog.png

    Enter the bucket name and location constraint, and click the Create button.

  3. The bucket appears in the Buckets list:

    image2

  4. For buckets associated with AWS S3 or Scality RING endpoints, click View Info. Bucket information displays:

    _images/Orbit_View_Bucket_Info.png

    Toggle Versioning ON.

    _images/Orbit_Versioning_ON.png

    Important

    You must turn versioning on for cloud-hosted buckets before assigning them a location. Assigning a bucket to a location with versioning off will result in errors.

View Bucket Info

Prerequisite: To view bucket info, you must already have created at least one bucket.

Zenko offers the ability to view bucket details, and to turn Versioning on or off.

To access this feature:

  1. Click the Browser item in the sidebar.

    _images/sidebar_browser_button.png
  2. The Multicloud Browser Buckets list displays:

    image0

    Select a bucket from the list of bucket names, then click View Info.

  3. Orbit displays bucket info:

    _images/Orbit_bucket_view_info.png

From this panel, you can:

  • Review permissions and the bucket’s cross-region replication status
  • Copy the bucket’s Amazon Resource Name (Copy Bucket ARN)
  • Toggle the Versioning feature

For more information on versioning, review the Amazon S3 documentation at: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html. Zenko implements S3 logic for versioning.
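
Versioning can also be toggled outside Orbit with the standard S3 API call, pointed at a Zenko endpoint (a sketch; the bucket name and endpoint are placeholders):

$ aws s3api put-bucket-versioning --bucket my-bucket \
    --versioning-configuration Status=Enabled \
    --endpoint-url http://<zenko.endpoint.url>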

Object Lifecycle Management

Orbit enables you to set policies that manage lifecycle events in selected buckets.

From the Bucket Lifecycle selector on the Zenko sidebar menu, you can select a bucket, then create lifecycle transition or lifecycle expiration policies for objects it contains. These transition (move) or expire (delete) objects that match your criteria based on a timespan you set. In other words, Zenko checks whether a set number of days has passed since an object or type of object was last modified, and either moves such objects to a different storage site (transitions them) or deletes (expires) them. You can create rules to transition objects themselves, or if versioning is enabled, to transition object versions.

This feature can help you keep data fresh and save money. You can, for example, store job information for ongoing customer interactions on a local RING. After a customer stops making demands for immediate (low-latency) support, information pertaining to their purchase order transitions from the company RING to a public cloud. Latency may increase, but cloud data storage costs may be reduced. After a few months, if the data on the public cloud is not accessed, it can either be transitioned to another cloud or deleted (expired).

Orbit supports most of the Amazon S3 lifecycle management command set; however, the following transition rules can only be defined using API calls.

  • Filtering by object tag
  • Using the Date field (GMT ISO 8601 format)
  • Setting the Days field to zero
  • Setting a custom ID

See “Bucket Lifecycle Operations” in the Zenko Reference.
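
A sketch of such an API-only rule, combining a tag filter, a fixed date, and a custom ID (the bucket, endpoint, tag, and ID values are placeholders):

$ aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
    --endpoint-url http://<zenko.endpoint.url> \
    --lifecycle-configuration '{"Rules": [{"ID": "archive-project",
      "Filter": {"Tag": {"Key": "project", "Value": "archive"}},
      "Status": "Enabled",
      "Expiration": {"Date": "2020-01-01T00:00:00Z"}}]}'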

Object Lifecycle Management: Transition

Object lifecycle transition policies enable you to change the location of an object or object type based on its age.

Establishing a Lifecycle Transition Policy

Prerequisite: You must have established a bucket to transition data from, and a location to send transitioned data to.

To establish a lifecycle transition rule:

  1. Click the Bucket Lifecycle tab in the sidebar.

    _images/Orbit_lifecycle_select.png
  2. The Bucket Lifecycle screen displays.

    image1

  3. Choose a bucket and pick Add New Rule > Transition

    _images/Orbit_lifecycle_add_new_rule.png
  4. The Add New Transition Rule dialog displays:

    _images/Orbit_lifecycle_add_transition_rule.png

    You may specify a prefix to identify objects to which the rule applies. Enter a time span after the object’s current version was last modified, and specify a location to which matching objects will be moved. You can also add a comment about the transition rule.

    Click Save.

  5. The new rule is displayed:

    _images/Orbit_lifecycle_transition_rule_success.png

    Zenko will enforce these rules on this bucket. If replication is configured, any change of state to objects in this bucket can be replicated to buckets on other clouds.

Object Lifecycle Management: Expiration

Object lifecycle expiration policies enable you to delete an object or object type based on its age.

Establishing an Object Expiration Policy

Prerequisite: You must have established at least one bucket.

  1. From anywhere in Orbit, click the Bucket Lifecycle tab in the left navbar.

    image0

  2. The Bucket Lifecycle screen displays.

    image1

  3. Choose a bucket and pick Add New Rule > Expiration

    _images/Orbit_lifecycle_add_new_rule.png
  4. The Add New Expiration Rule dialog displays:

    _images/Orbit_lifecycle_add_expiration_rule.png

    You may enter a prefix (a distinct directory or subdirectory) to which the rule applies. Enter an expiration time span and a deletion time span. These rules attach to the bucket and enforce expiration and deletion. You may also add a comment about this expiration rule.

    Click Save.

  5. The new rule is displayed:

    image4

    Zenko will enforce these rules on this bucket.

Versioning logic precludes simply deleting an object: the current version is expired (a delete marker is added), but earlier versions remain. See the warning at Deleting Objects.

Delete a Bucket

Prerequisite: The bucket must be empty.

Note

If the bucket has versioning enabled, and if it contains any versioned objects, you will have to run scripts to empty the bucket of all objects. See Deleting Versioned Objects.

To delete a bucket:

  1. Click Browser in the sidebar to open the Multicloud Browser:

    _images/Orbit_bucket_create_multicloud_browser.png
  2. Pick the bucket to delete from the Buckets list:

    _images/multicloud_browser_select_bucket.png
  3. Click the Delete button.

    _images/delete_button.png
  4. Orbit requests confirmation:

    _images/bucket_delete_verify.png
  5. If you are sure, click Delete.

    _images/bucket_delete_verify_selected.png
  6. The Multicloud Browser refreshes, and the bucket is deleted.

File Operations

Prerequisites: You must have at least one account, containing at least one bucket.

For each file stored in a Zenko bucket, you can view info, manipulate tags and metadata, download the file to your local machine, or delete the file from the bucket.

To access these operations:

  1. Click Browser in the sidebar to open the Multicloud Browser.

    _images/sidebar_browser_button1.png
  2. Double-click the bucket you want to access.

    _images/multicloud_browser_1_bucket.png
    • If the bucket is empty, Zenko asks you to Drag and Drop Objects:

      image0

      Clicking the Upload Objects button takes you to your local machine’s file system to pick files to upload. Clicking Skip takes you to the empty bucket.

    • Otherwise, the Multicloud Browser displays the bucket’s contents:

      image1

For each uploaded file, you can Download, View Info, or Delete.

Uploading Files to Buckets

Prerequisites: Before uploading data to a bucket, you must have a storage account associated with a user name, and you must have created at least one bucket.

  1. Click Browser in the sidebar to open the Multicloud Browser:

    _images/sidebar_browser_button1.png
  2. Double-click the bucket to which you will upload data.

    image0

    • If the bucket is empty, the Drag and Drop Objects dialog displays:

      image1

    • Otherwise, Orbit displays the bucket’s contents:

      _images/Orbit_file_operations.png
    • Click Upload to raise the Drag and Drop Objects dialog.

      _images/upload_button_hover.png
  3. In the Drag and Drop Objects dialog, you can either upload files by dragging and dropping from the local desktop (Windows Explorer, OS X, Linux desktop, for example) or by clicking the Upload Objects button and selecting files for upload using your local operating system’s file manager.

    Note

    Browsers may limit the ability to upload directories. Uploading a directory may require that you recursively zip the directory and upload it as a single file, or access Zenko through a cloud storage browser such as Cyberduck.

    Note

    Object key name lengths are limited to 915 single-byte characters (109 fewer than the 1024 one-byte characters permitted in the AWS specification).

View File Info

To view information about a file:

  1. Click the Browser button to open the Multicloud Browser.

    _images/sidebar_browser_button1.png
  2. Double-click the bucket containing the file.

    _images/Orbit_multicloud_browser_with_values1.png
  3. Select the file, and click the View Info button. File information displays in a pop-up window:

    _images/Orbit_file_operations_popup.png
  4. Click the pencil icon in the Metadata field to add or edit metadata options.

    _images/Orbit_add-edit_metadata.png

    Available options are cache-control, content-disposition, content-encoding, content-language, content-type, expires, website-redirect-location, and x-amz-meta. Most of these are HTTP header field definitions, documented at https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html and https://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html. The x-amz-meta tag acts as a wrapper indicating that the subsequent information is specific to the Amazon S3 protocol. When you pick it, an extra field displays to permit entry of this “nested” key information.

    _images/Orbit_x-amz-meta.png

    This namespace must conform to Amazon’s naming rules: numbers, hyphens, and upper- and lowercase letters only.

  5. Click the pencil icon in the Tags field to add custom tags.

    _images/Orbit_add_tags.png

    These are S3-supported tags (see https://docs.aws.amazon.com/AmazonS3/latest/dev/object-tagging.html). Because other backends may not support the S3 tagging structure, operations that use these tags must be performed using Zenko.

Download a File

To download a file from a selected bucket:

  1. Click the Browser button in the sidebar to open the Multicloud Browser.

    _images/sidebar_browser_button1.png
  2. Double-click a bucket to open it.

    _images/Orbit_multicloud_browser_with_values1.png
  3. Select the file to download.

    _images/Orbit_download_file.png
  4. Click Download.

    _images/download_button_selected.png
  5. The file download is handled by the browser.

Deleting Objects

To delete objects from a selected bucket:

  1. Click the Browser button in the sidebar to open the Multicloud Browser:

    _images/sidebar_browser_button1.png
  2. Double-click a bucket to open it:

    _images/Orbit_multicloud_browser_with_values1.png
  3. Click the check box next to each object to be deleted. The number of objects to be deleted is indicated in the top bar of the file list.

    _images/Orbit_file_delete.png
  4. Click the Delete button.

    _images/Orbit_file_delete_button.png
  5. Orbit requests confirmation of the deletion.

    _images/Orbit_file_delete_confirm.png
  6. The object is deleted from the bucket.

    Important

    If versioning is enabled (the recommended configuration for AWS nodes) deleting from the Orbit UI deletes the most recent version of the object only. This results in a condition where the bucket appears empty, but continues to contain previous versions of the deleted object. This prevents the bucket from being deleted, because it is not empty. To completely delete an object and its version history requires entering the CLI commands described below.

Deleting Versioned Objects

Deleting versioned objects is difficult because cloud servers are biased towards preserving data. While this is useful, it can become problematic when large numbers of objects are under management (during stress testing, for example).

To completely delete a versioned object, you must issue S3 API commands from the command line.

If you have not already done so, follow the instructions at Zenko from the Command Line to configure one of your nodes to accept AWS S3 CLI commands.
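
To inspect and delete individual versions manually, you can use standard S3 API calls (a sketch; the bucket, key, version ID, and endpoint are placeholders):

$ aws s3api list-object-versions --bucket my-bucket \
    --endpoint-url http://<zenko.endpoint.url>

$ aws s3api delete-object --bucket my-bucket --key my-object \
    --version-id <version-id> \
    --endpoint-url http://<zenko.endpoint.url>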

Zenko provides command-line scripts that enable removing versioned objects completely (both removing the object data and its object ID from the namespace).

The cleanupBuckets.js script is available in the s3utils pod.

Run it as follows:

  1. Enable the s3utils pod with:

    $ kubectl run s3utils --image=zenko/s3utils:0.5 -it bash
    

    Tip

    The s3utils pod is disabled by default. You can also enable it by adding the following to the options file and upgrading your Zenko deployment:

    maintenance:
      debug:
        enabled: true
        # An access/secret key to access Zenko that will be used to configure the s3utils pod
        accessKey: <access-key>
        secretKey: <secret-key>
    
  2. Use grep to find the s3utils pod:

    $ kubectl get pods | grep s3utils
    myzenko-zenko-debug-s3utils-7f77f9b5b9-627gz   1/1  Running   0   31m
    
  3. exec into the s3utils pod:

    $ kubectl exec -it myzenko-zenko-debug-s3utils-7f77f9b5b9-627gz bash
    
  4. Run the cleanup script with:

    $ node cleanupBuckets.js <bucket1> <bucket2> ...
    

On versioned buckets, this script deletes current and archived versions and delete markers, and aborts any ongoing multipart uploads.

Buckets are cleaned up (emptied of all objects and versions), but not deleted. With all object versions in a bucket thus deleted, you can delete the bucket from the command line with:

$ aws s3api delete-bucket --bucket <bucket-name> --endpoint-url http://<zenko.endpoint.url>

or delete the bucket using the Orbit Multicloud Browser.

Advanced Workflows

You can use Orbit to manage the advanced workflows described in the following sections.

Set Up Replication

Prerequisites: To set up bucket-level CRR using the Orbit UI, you must have

  • One pre-configured source bucket
  • At least one pre-configured destination bucket

To set up a replication configuration:

  1. Click Replication in the sidebar:

    _images/sidebar_replication_button.png
  2. Orbit raises the Replication window:

    image0

    If no locations are configured, Orbit displays this message:

    _images/replication_no_target_message.png

    Click the link text to create a suitable replication target.

  3. Click New. The Set up bucket replication dialog displays.

    _images/Orbit_set_up_bucket_replication.png

    Name the new replication configuration, and enter source and destination bucket information. The replication configuration name is free-form, and not constrained by Amazon’s naming schema. Click Save.

  4. The named configuration and specified destination(s) display on successful implementation.

    image2

With one or more replication instances configured, the Replication window lets you add a new replication configuration, or edit, suspend, or delete an existing one.

Replication is not retroactive. If you have files stored in a bucket and then configure that bucket for replication, only files written after the replication configuration is saved are replicated.

AWS-to-AWS Replication
  1. Create a bucket on AWS (https://s3.console.aws.amazon.com/s3/buckets/) with versioning enabled

    _images/aws_versioning_enabled.png
  2. From Orbit, open Storage Location, click Add New and enter information (location name, type, and type-specific options) for the AWS bucket you just created.

    _images/Orbit_Add_Storage_location_AWS.png
  3. From the Multicloud Browser, create another bucket and set the new AWS location as its location constraint.

    If using the AWS CLI, set the endpoint to the Zenko deployment and the location constraint to the AWS location, as shown in the sketch after this list.

  4. The bucket created through Zenko appears in the drop-down menu on the Set up bucket replication dialog box.

    _images/Orbit_set_up_bucket_replication_pulldown.png
  5. With the AWS target now visible, enter the flow as described in Set Up Replication.
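
A sketch of step 3 from the AWS CLI (the bucket name, endpoint, and location name are placeholders):

$ aws s3api create-bucket --bucket my-replicated-bucket \
    --endpoint-url http://<zenko.endpoint.url> \
    --create-bucket-configuration LocationConstraint=<aws-location-name>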

CRR Pause and Resume

This feature enables pausing and resuming cross-region replication (CRR) operations by storage location. Users can also resume CRR operations for a given storage location by specifying a number of hours from the present. This is particularly useful when the user knows a destination location will be down and wants to schedule a time to resume CRR.

CRR pause and resume is accessible via the Backbeat API, and its calls are documented in the Zenko Reference. The feature is accessible from the command line or from Orbit.
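
As an illustration only (the route paths and host below are assumptions based on Backbeat API naming, not confirmed routes; consult the Zenko Reference for the authoritative endpoints), pausing a location and scheduling a resume might look like:

$ curl -X POST http://<backbeat-api-host>/_/backbeat/api/crr/pause/<location-name>

# Hypothetical: resume CRR for the location six hours from now
$ curl -X POST -d '{"hours": 6}' \
    http://<backbeat-api-host>/_/backbeat/api/crr/resume/<location-name>/schedule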

Searching Metadata from Orbit

Every file stored in an object storage system is associated with a set of metadata. Understanding the metadata tags associated with these files provides a powerful method for extremely fast search and retrieval of files.

Orbit provides a graphical tool for performing metadata searches, the syntax for which is hinted under the search bar itself, but also described explicitly in Searching Metadata with Zenko.
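
For a flavor of the syntax (a hypothetical query; see Searching Metadata with Zenko for the authoritative grammar), a search on a user-defined metadata key might look like:

userMd.`x-amz-meta-color`="blue"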

To search the metadata of files stored in clouds managed by Zenko,

  1. Click Search in the sidebar to raise the Multicloud Search window.

    image0

  2. Pick a bucket to search.

    image1

  3. Enter metadata search terms in the modified NoSQL format described in Searching Metadata with Zenko. Click the magnifying glass icon.

    _images/metadata_search_results.png

    Orbit returns the search results.

    Clicking the arrow icon next to the search result takes you to the item’s location (directory) in the bucket.

Cloud Management Services

Zenko operates in the context of a Kubernetes cluster. In combination with MetalK8s, this provides numerous opportunities to interact with third-party systems for monitoring and managing the cluster and the data Zenko manages there.

Accessing the default MetalK8s dashboard services, such as the Kubernetes dashboard and Grafana described in the following sections, helps you understand and control activities in your cluster.

Accessing Cloud Dashboards

To access cloud management dashboards,

  1. With Zenko installed on a Kubernetes cluster, open a command line on your local workstation and enter kubectl proxy from your local Kubernetes repo. This opens a kubectl proxy session that enables you to access dashboards. Leave this process running.

  2. Make sure the path to the cluster’s configuration file is exported to the environment. For a MetalK8s installation, open another command-line interface and export the path to the configuration file with the following command:

    $ export KUBECONFIG=`pwd`/inventory/<cluster-name>/artifacts/admin.conf
    
  3. Open a browser, and enter the Kubernetes dashboard at: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default

  4. Log into your Kubernetes installation.

    image0

    Tip

    If you used MetalK8s for your Kubernetes installation and want to use Basic authentication, look for your user name and password in the inventory file. The default location is:

    /[...]/metalk8s/inventory/<cluster name>/credentials/kube_user.creds
    
  5. The Kubernetes dashboard displays:

    image1

For MetalK8s deployments, if the Kubernetes dashboard is visible, you can also access the other Kubernetes-dependent services Zenko offers as part of its default stack.

Troubleshooting Cloud Dashboards

To operate, Zenko ties together several systems, many of which are not part of the core Zenko product. Because of this, describing every possible configuration is impossible; however, these are the main points of failure when accessing cloud dashboards.

The dashboards to control, monitor, and adjust your Kubernetes cluster and Zenko instance are available when the following conditions are met:

  • The cluster is operating.
  • The appropriate ports on the cluster are open.
  • Your local environment has access to the admin.conf file, which is also stored on the cluster.
  • Cluster ingress is configured correctly and identified accurately on the local machine.
  • There is an open kubectl proxy session.
  • You have the correct credentials.

Cluster Is Up

To be sure the cluster is operating, check the management interface and confirm that all nodes of your cluster are up and running and that each node has an attached and running storage volume. You may be able to diagnose problems by sending kubectl commands to the cluster. For a complete listing of active pods and their current status, enter:

$ kubectl get pods

To find backbeat-api status, enter:

$ kubectl get pods -l app=backbeat-api

The filtered result is:

zenko-backbeat-api-787f756fb7-8hwh4               1/1       Running   6        2d

If you are concerned about a particular node’s health, enter:

$ kubectl get nodes

Ports Are Open

Make sure each node is configured with ports 6443, 80, and 443 open and accessible to incoming traffic from your local machine.

KUBECONFIG Environment Variable is Correctly Set

If you are in the make shell virtual environment from which you created a MetalK8s deployment, the appropriate environment variables are already set. If you can’t enter this virtual environment or are using another Kubernetes engine, find the admin.conf file, which was generated with your Kubernetes setup. Your local machine must know where to find this file.

To see whether this is set properly in your environment, enter

$ env | grep KUBE

If the KUBECONFIG environment variable is set, the response shows a path; for example:

KUBECONFIG=/home/username/metalk8s/admin.conf

If KUBECONFIG is not set (that is, the env command shows no result for KUBECONFIG), you must export the path. Once you find the path to admin.conf, export it with:

$ export KUBECONFIG=/path/to/admin.conf

The admin.conf in the local client device must match the admin.conf file in the Kubernetes cluster. For MetalK8s, this is defined in the inventory at […]/metalk8s/inventory/<cluster-name>/artifacts/admin.conf and is copied to /etc/kubernetes/admin.conf on deployment.

Windows users may experience trouble if the admin.conf file is installed in a user’s personal directory. Windows may inject a space in the user name, which breaks the path. If you use a Windows machine, make sure admin.conf resides in a path that contains no spaces.

Proxy Is On

To access the cloud dashboards, you must open and maintain a kubectl proxy session. Open a terminal, run the following command, and leave it running.

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Correct Credentials

You must have correct credentials to access the Kubernetes dashboard. For MetalK8s deployments, look for Kubernetes credentials in […]/metalk8s/inventory/<cluster-name>/credentials/kube_user.creds. Copy and paste this file’s contents as the password when you log in to the MetalK8s Kubernetes desktop. If you have recently reinstalled a cluster, make sure your browser is not presenting old credentials. Other Kubernetes engines may employ different authentication strategies. For any such problems, request help from the Kubernetes vendor or community you have chosen.

Kubernetes Dashboard

The Kubernetes dashboard is the master dashboard for visualizing your cluster’s performance. You can use a cloud-hosted version of Kubernetes, or host it yourself using MetalK8s.

image0

Your Kubernetes user experience will vary depending on which Kubernetes you use. At a minimum, you will see everything available to you via kubectl commands.

Kubernetes is an open-source container management project, originally developed by Google and now hosted by the Cloud Native Computing Foundation; you can find it at https://kubernetes.io/.

If kubectl is properly configured, you will find the Kubernetes dashboard at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/ . If this doesn’t work, see Troubleshooting Cloud Dashboards.

To access the Kubernetes dashboard, you must have a kubectl proxy established, and you must export the path to KUBECONFIG as your local environment variable. Scality offers detailed instructions for setting up Kubernetes proxying in the MetalK8s Quickstart. These instructions may prove useful to non-MetalK8s installations as well.

Grafana

Grafana provides a “dashboard of dashboards” that offers deep insight into ongoing operations. Installing the Zenko stack gives you access to an “application-level” Grafana, which provides information about CloudServer operations. Installing Zenko with MetalK8s also provides access to “platform-level” Grafana, which provides information about cluster operations.

Both require some setup, detailed in the installation guide, which ships with the Zenko repository.

Application-Level Grafana

Deployed with all Zenko instances, application-level Grafana provides insight into CloudServer operation.

Configuration

Operating application-level Grafana requires modifications to options.yaml, and may require further configuration.

To enable application-level Grafana:

  1. Open the options.yaml file for editing. Though it may be located anywhere in your Zenko directory, if you followed the installation instructions closely, you will find it at Zenko/kubernetes/zenko/options.yaml.

  2. Add the following block to options.yaml:

    grafana:
      ingress:
        enabled: true
        hosts:
          - grafana.local
        tls:
          - secretName: zenko-tls
            hosts:
              - grafana.local
      adminUser: admin
      adminPassword: strongpassword
    

    Important

Storing passwords in a plain-text configuration file is not a security best practice. For higher-security deployments, either allow Grafana to set its own passwords or manage password secrets with a credentials management service, such as an LDAP secrets manager. This requires adding an LDAP configuration block to options.yaml, which exceeds the scope of this documentation. If you allow Grafana to set its own credentials using the default configuration, the Grafana credentials are overwritten on every Helm upgrade command.

  3. Upgrade Zenko. From Zenko/kubernetes/zenko, enter:

    $ helm upgrade my-zenko -f options.yaml .
    
  4. If you did not configure the adminUser and adminPassword parameters in the previous step, Helm wipes the old Grafana instance and creates a new one. Grafana generates new admin credentials automatically. The default login is “admin.” To obtain the admin password from such a “default” installation, run the following command from the command line, substituting the name of your Zenko instance for <my-zenko>:

    $ echo $(kubectl get secret <my-zenko>-grafana -o "jsonpath={.data['admin-password']}" | base64 --decode)
    
    Gcym51zKQG8PSDD2B7ch9h8cXFIu8xalmIQfdXkd
    
Operation

Once your system and server are properly configured, access application-level Grafana at http://grafana.local/login.

  1. At the Grafana login page, enter the admin identity and the admin password as described in the previous section.

  2. The Grafana Home Dashboard displays. From the Home Dashboard, you can add users, install apps and plugins, or click through to the Cloudserver dashboard.

Troubleshooting

You don’t see the Grafana login page

Possible reasons:

  • Connectivity

    Make sure your browser can access and resolve the IP address (i.e., that you’re in the same network/VPN as your cluster), that kubectl proxy is running between your local machine and the server, and that the server’s Ingress controller is configured and running.

  • Grafana server is not set in /etc/hosts

    In a managed namespace, the names zenko.local and grafana.local should resolve successfully without changes to /etc/hosts on the local machine. In test environments, however, this resolution may not be provided. Check that your local machine has the correct server IP address set in /etc/hosts (c:\windows\System32\drivers\etc\hosts on Windows machines). If not, add the following line to /etc/hosts, substituting the IP address of the cluster on which you are operating Zenko and Grafana:

    10.1.1.1     zenko.local grafana.local
    
  • Ingress is not set.

    Run:

    $ helm upgrade <my-zenko> /path/to/zenko \
      --set grafana.ingress.enabled=true \
      --set grafana.ingress.hosts[0]=grafana.local
    

    This command may rewrite your Grafana credentials. See the note above.

  • Grafana is not running on the server. Verify with:

    $ kubectl get pods | grep grafana
    

Your admin password is rejected

  1. If you’re sure you have entered the admin password correctly (as produced by the echo command above), run:

    $ kubectl get pods | grep grafana
    
    my-zenko-grafana-5dbf57f648-wbnkg               3/3       Running   0          7m
    
  2. Copy the first part of the result and restart Grafana on the server with:

    $ kubectl delete pod my-zenko-grafana-5dbf57f648-wbnkg
    

    Your particular running instance will, of course, have a different working name and hashes.

  3. Give Kubernetes a minute or so to bring the Grafana pod back up.

  4. When kubectl get pods shows the new Grafana instance running and stable, retry the login.

Platform-Level Grafana

Deployed with MetalK8s, Grafana provides the following views of Zenko and Kubernetes services:

  • Deployment
  • ElasticSearch
  • etcd
  • Kubernetes Capacity Planning
  • Kubernetes Cluster Health
  • Kubernetes Cluster Status
  • Kubernetes Cluster Control Plane Status
  • Kubernetes Resource Requests
  • Node Exporter Full
  • Nodes
  • Pods
  • Prometheus 2.0 Stats
  • StatefulSet

Access platform-level Grafana using this URL: http://localhost:8001/api/v1/namespaces/kube-ops/services/kube-prometheus-grafana:http/proxy/?orgId=1

Cerebro

Cerebro enables users to monitor and manage Elasticsearch clusters from a convenient, intuitive dashboard.


The Cerebro dashboard offers a visualization of log information showing how file systems are sharded, indexed, and distributed in MetalK8s, Scality’s open source Kubernetes engine.

If you have deployed Zenko using a MetalK8s cluster, use the following URL to access the Cerebro overview of the MetalK8s cluster:

http://localhost:8001/api/v1/namespaces/kube-ops/services/cerebro:http/proxy/#/overview?host=Metal K8s

With a kubectl proxy running, use the following URL to access the Cerebro dashboard:

http://localhost:8001/api/v1/namespaces/kube-ops/services/cerebro:http/proxy/#/connect

Cerebro ships with MetalK8s. If you elect to run other Kubernetes implementations, you will probably have access to other tools that do the same or similar work, but you may want to install Cerebro. It’s hosted at: https://github.com/lmenezes/cerebro.

Kibana

Kibana is an open-source log management tool that operates with Elasticsearch and Logstash. Elasticsearch indexes and stores log information, Logstash gathers and processes these logs, and Kibana provides a web visualization service that lets you search and visualize them.


When kubectl proxy is active, you can access Kibana at:

http://localhost:8001/api/v1/namespaces/kube-ops/services/http:kibana:/proxy/app/kibana#/home?_g=()

Prometheus

Prometheus offers visualization and insight into Kubernetes operations. For Zenko, it aggregates metrics exposed by Kubernetes pods that have been configured to yield Prometheus-readable data.

Prometheus ships with MetalK8s. Access to Prometheus is similar to that for other dashboard services. Open http://localhost:8001/api/v1/namespaces/kube-ops/services/kube-prometheus:http/proxy/graph in your browser. If you are configured to see the other dashboards, the Prometheus dashboard displays.

If you use a different Kubernetes implementation than MetalK8s, you will have to install your own Prometheus instance to use this feature.

While these open source projects are independent of Zenko, and their operation exceeds the scope of this document, the preceding sections describe how to access them from a cluster deployed via MetalK8s.

Zenko from the Command Line

Zenko supports command-line interactions for a limited set of Amazon S3 API calls and to access its own Backbeat server.

Command-line interaction provides programmatic access to the following features:

CRR Metrics and Healthcheck

Zenko provides replication status tracking. The replication system exposes metrics through a REST API to monitor pending, processing, and completed replication objects. It can return the number of failures that occurred during replication, the current throughput (in replication operations per second), and total bytes completing per second.

If source and destination buckets are set up to allow replication, when a new object is added to the source bucket, the request for replicating that object begins processing.

Metrics are gathered when entries are published to Kafka. When a Kafka entry has completed processing and an object has replicated to its destination bucket, further metrics are gathered to record its completion.

Backbeat offers routes for these metrics services. It also offers a healthcheck service that enables replication component monitoring.
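
For orientation, the CRR metrics route used in the examples under Setting Up Backbeat API Access takes the following form; requests to it must be signed as described in that section:

GET /_/backbeat/api/metrics/crr/all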

API documentation (routes, requests, and responses) for these services is provided in the Zenko Reference.

CRR Retry

The CRR Retry feature lets users monitor and retry failed CRR operations. Users can retrieve a list of failed operations and order Zenko to retry specific CRR operations.

From Orbit

In Orbit, CRR Retry appears as a notice in the Location Status indicating that a file or a number of files failed to replicate. The number of failed operations is listed as a metric under the Replication Statistics for that location. By clicking the failed objects metric, Orbit provides a listing of the failed objects, each with a “Retry” button. By clicking this button, the user triggers a retry of the failed replication operation. The entire listing can be retried by clicking the “Retry All” button.

From the Command Line

The CRR Retry feature comprises three API calls for listing failed operations and requesting retries.

These requests, sent to the Backbeat endpoints, return members stored in bb:crr:failed:* Redis sorted sets. A retry command removes the member and changes the object’s metadata “FAILED” status to “PENDING”, which queues the object to be retried by the replication processor.
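
If you have direct access to the Redis instance backing Backbeat, you can inspect these sorted sets yourself. The following is a sketch, assuming redis-cli connectivity to that instance; substitute your Redis host:

$ redis-cli -h {{redis-host}} --scan --pattern 'bb:crr:failed:*'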

Object Lifecycle Management

Cloud users can apply lifecycle rules (specified in Amazon’s AWS S3 API) to buckets managed through Zenko. These rules are triggered after a defined number of days has passed since each object’s creation or last modification, enabling automated deletion or movement of older objects. Zenko supports both expiration and transition of objects.

Installation

Lifecycle management is part of Backbeat configuration and is installed with Backbeat. It is enabled by default, and can be disabled in the Zenko deployment configuration files.

Operation

Lifecycle management conforms partially to the S3 lifecycle management syntax described at https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html. Zenko lifecycle management supports expiration and transition actions: objects that exceed a preconfigured age threshold (for example, 90 days) are either expired (deleted from the bucket in which they are stored) or transitioned (moved to another storage location).

Bucket lifecycle characteristics inhere in the bucket: Zenko’s lifecycle management feature does not set lifecycle characteristics, but does enforce them. When lifecycle management is enabled, the host cloud enforces buckets’ lifecycle rules. If CRR operation is enabled, Zenko replicates the lifecycle event to all backup clouds.

To configure bucket lifecycle, follow the AWS S3 Lifecycle Configuration Element syntax described in https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html.

Note

To implement the S3 API effectively in a cross-cloud context, Zenko interprets S3’s <StorageClass> field differently from how AWS defines it. Where Amazon uses StorageClass to indicate various proprietary Amazon storage locations that can be described by their quality of service, Zenko uses this parameter to identify cloud service locations by a user-defined name. So, instead of using <StorageClass>GLACIER</StorageClass> for inexpensive, high-latency storage, the Zenko user defines a cloud location that satisfies their storage and pricing requirements and uses that location as the target cloud storage location. Zenko reads and writes to this location based on the StorageClass tag definition.

Zenko API Calls

The Zenko API provides three calls to manage lifecycle properties per bucket:

  • PUT Bucket Lifecycle

  • GET Bucket Lifecycle

  • DELETE Bucket Lifecycle

    Tip

    See the AWS S3 API Reference for protocol-level formatting details.

These calls manage bucket attributes related to lifecycle behavior, which are stored as part of bucket metadata.

Managing Lifecycle Rules from the S3 API

See https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-configuration-examples.html for more examples and explanations on lifecycle rules.

  1. Create a JSON file defining the bucket lifecycle rules (see https://docs.aws.amazon.com/AmazonS3/latest/dev/set-lifecycle-cli.html for examples).

    Note

    The “Rules” section is an array that can hold multiple rules.

  2. Use the aws s3api to set the JSON lifecycle rule on a bucket, zenko-bucket.

    $ aws s3api put-bucket-lifecycle-configuration --bucket zenko-bucket --lifecycle-configuration file://lifecycle_config.json
    

    You can confirm that the rule has been set with:

    $ aws s3api get-bucket-lifecycle-configuration --bucket zenko-bucket
    

Once the lifecycle rules on the bucket are set, the rules apply to all objects in the specified bucket.
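
For reference, a minimal lifecycle_config.json for step 1 above might look like the following sketch. The rule IDs, prefixes, and day counts are illustrative, and the StorageClass value must name a cloud location you have defined in Zenko (see the note on <StorageClass> above):

{
  "Rules": [
    {
      "ID": "expire-old-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 90 }
    },
    {
      "ID": "transition-old-archives",
      "Status": "Enabled",
      "Filter": { "Prefix": "archive/" },
      "Transitions": [ { "Days": 30, "StorageClass": "aws-storage-location" } ]
    }
  ]
}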

Querying Lifecycle Events

You can access the storage location of transitioned object data by viewing the object’s metadata, for example by making a HEAD request.

Querying the CloudServer requires an active kubectl session with the Zenko controller and S3 API functionality configured as described in Setting Up S3 API Access. Once this is configured, use the head-object command as described in https://docs.aws.amazon.com/cli/latest/reference/s3api/head-object.html.

For example:

$ aws s3api head-object --bucket <bucket-name> --key <key-name> --endpoint <endpoint-url>

returns:

{
   "AcceptRanges": "bytes",
   "ContentType": "application/octet-stream",
   "LastModified": "Tue, 16 Apr 2019 22:12:33 GMT",
   "ContentLength": 1,
   "ETag": "\"e358efa489f58062f10dd7316b65649e\"",
   "StorageClass": "aws-storage-location",
   "Metadata": {}
}

The returned information describes the <key-name> object in the <bucket-name> bucket. The StorageClass information indicates the object has transitioned to a storage location named “aws-storage-location”, as defined by the Zenko user.

After an expiration event, the object is deleted and its metadata can no longer be queried; a HEAD request for the object returns a not-found error.

Monitoring NFS/SOFS Ingestion

Zenko 1.1 implements an NFS feature that ingests the NFS file hierarchy that the RING’s Scale-Out File System (SOFS) projects. For this release, the only available metrics for NFS/SOFS ingestion operations reside in log files. SOFS metadata ingestion runs on a cron schedule, and users can query the Cosmos pods through kubectl to discover how and when scheduled NFS activities have been executed.

To find the relevant Cosmos pods, enter:

$ kubectl get pods | grep "rclone"

nfs-options-cosmos-rclone-1555329600-7fr27                    0/1     Completed   0          7h
nfs-options-cosmos-rclone-1555333140-bb5vf                    1/1     Running     0          1h

Use this information to retrieve the logs with a command formatted as follows:

$ kubectl logs -f {{location}}-cosmos-rclone-{{hash}}

Hence:

$ kubectl logs -f my-location-name-cosmos-rclone-84988dc9d4-9dwjl

yields:

* myobjects273725
* myobjects273726
* myobjects273727

2019/04/15 17:48:30 INFO  : S3 bucket nfs-rsize: Waiting for checks to finish
2019/04/15 17:48:31 INFO  : S3 bucket nfs-rsize: Waiting for transfers to finish
2019/04/15 17:48:31 INFO  : Waiting for deletions to finish
2019/04/15 17:48:31 INFO  :
Transferred:                0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 0
Checks:            500000 / 500000, 100%
Transferred:            0 / 0, -
Elapsed time:  2h48m18.8s
The response fields are:

Transferred: The byte count of metadata transferred, with data rate in B/s and a completion time estimate. Transferred counts are almost always 0/0.
Errors: Aggregated error count for the requested job.
Checks: Ingested information/total information to be ingested, and a percentage expressing this ratio.
Elapsed Time: Time spent on the current ingestion cycle.

To access Zenko from the command line, you must first set up access to the S3 API.

Setting Up S3 API Access

Zenko supports a limited set of S3 API commands. For a comprehensive listing of supported S3 commands, see the Zenko Reference.

To access Zenko’s AWS S3 API, you must perform the following setup tasks. In the present example, server 1 is modified to be the AWS gateway.

  1. Using SSH, open any server in a running Zenko instance.

    $ ssh centos@10.0.0.1
    
  2. Install the EPEL libraries.

    [centos@node-01 ~]$ sudo yum -y install epel-release
    
  3. Install python-devel and python-pip.

    [centos@node-01 ~]$ sudo yum -y install python-devel python-pip
    
  4. Install awscli.

    [centos@node-01 ~]$ sudo pip install awscli
    
  5. Edit /etc/hosts.

    [centos@node-01 ~]$ sudo vi /etc/hosts
    
  6. Nominate a server node as zenko.local.

    # Ansible inventory hosts BEGIN
    10.0.0.1 node-01 node-01.cluster.local zenko.local
    10.0.0.2 node-02 node-02.cluster.local
    10.0.0.3 node-03 node-03.cluster.local
    10.0.0.4 node-04 node-04.cluster.local
    10.0.0.5 node-05 node-05.cluster.local
    # Ansible inventory hosts END
    
  7. Retrieve your Zenko access key ID and Zenko secret access key.

  8. Configure AWS using these keys.

    [centos@node-01 ~]$ aws configure
    AWS Access Key ID [None]: P6F776ZY4QZS7BY9E5KF
    AWS Secret Access Key [None]: lndN5vIeqL9K6g6HVKCMAjZbTX9KsCGw5Fa4MbRl
    Default region name [None]:
    Default output format [None]:
    

    Leave the Default region name and output format fields blank.

  9. Enter a test AWS command.

    [centos@node-01 ~]$ aws s3 ls --endpoint http://zenko.local
    2018-09-07 18:33:34 wasabi-bucket
    2018-09-05 22:17:18 zenko-bucket-01
    

Zenko can now respond to the set of S3 commands documented in the Zenko Reference.

Setting Up Backbeat API Access

Backbeat can be accessed from the command line using calls to CloudServer. These calls must be formatted with authentication as described in this section.

A pseudocode example of a model query is shown here.

Authorization = "AWS" + " " + ZenkoAccessKeyId + ":" + Signature;

Signature = Base64( HMAC-SHA1( YourSecretAccessKeyId, UTF-8-Encoding-Of( StringToSign ) ) );

StringToSign = HTTP-Verb + "\n" +
        Content-MD5 + "\n" +
        Content-Type + "\n" +
        Date + "\n" +
        CanonicalizedResource;

CanonicalizedResource = [ "/" + "_/backbeat/api/" ] +
        <HTTP-Request-URI, from the protocol name up to the query string>

Where:

  • ZenkoAccessKeyId is the public access key associated with a user account (see the Access Key column in https://admin.zenko.io/accounts).
  • YourSecretAccessKeyId is the secret key associated with the requesting user ID. It is generated in Orbit when the user account is created (see Add a New User).
  • CanonicalizedResource is as described in the AWS documentation.
  • HTTP-Verb is PUT or GET.

You must follow the instructions at https://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html to generate the CanonicalizedResource portion of the signature. A rudimentary script is provided below to help you formulate test requests with valid signatures.

Example Request:

{ host: '10.233.3.194',
  port: 80,
  method: 'GET',
  path: '/_/backbeat/api/metrics/crr/all',
  service: 's3',
  headers:
    { Host: '10.233.3.194:80',
      'X-Amz-Content-Sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
      'X-Amz-Date': '20190509T214138Z',
      Authorization: 'AWS4-HMAC-SHA256 Credential=BUQO8V4V6568AZKGWZ2H/20190509/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=69f85b5398e1b639407cce4f502bf0cb64b90a02462670f3467bcdb7b50bde9a'
    }
}

Example Response:

{
  "backlog": {
    "description": "Number of incomplete replication operations (count) and number of incomplete bytes transferred (size)",
    "results": { "count": 0, "size": 0 }
  },
  "completions": {
    "description": "Number of completed replication operations (count) and number of bytes transferred (size) in the last 86400 seconds",
    "results": { "count": 0, "size": 0 }
  },
  "failures": {
    "description": "Number of failed replication operations (count) and bytes (size) in the last 86400 seconds",
    "results": { "count": 0, "size": 0 }
  },
  "throughput": {
    "description": "Current throughput for replication operations in ops/sec (count) and bytes/sec (size) in the last 900 seconds",
    "results": { "count": "0.00", "size": "0.00" }
  },
  "pending": {
    "description": "Number of pending replication operations (count) and bytes (size)",
    "results": { "count": 0, "size": 0 }
  }
}
Helper Script

Note

Scality does not offer any support or warranty for the following script. It is included as a convenience. You must edit it to suit your installation.

  1. Access your Zenko cluster.

    $ ssh centos@10.0.0.1
    

    Substitute your cluster’s IP address.

  2. Install Node.js.

    $ sudo yum install nodejs
    
  3. Install AWS4.

    $ npm i aws4
    
  4. Open a text editor and copy the following to a .js file.

    // This script signs a Backbeat API request with AWS Signature Version 4
    // and prints both the signed request options and the response body.
    const http = require('http');
    const aws4 = require('aws4');

    // Replace with the access/secret key pair for your Zenko account.
    const credentials = {
        accessKeyId: 'BUQO8V4V6568AZKGWZ2H',
        secretAccessKey: 'q=1/VU49a82z6W1owyT+u60dTofxb3Z817S2Ok13',
    };

    // Request options: the CloudServer host and port, the Backbeat route to
    // test, and the 's3' service name, which selects S3-style signing.
    const requestOptions = {
        host: '10.233.3.194',
        port: 80,
        method: 'GET',
        path: '/_/backbeat/api/metrics/crr/all',
        service: 's3',
    };

    // aws4.sign() adds the Authorization and X-Amz-* headers to the options.
    const options = aws4.sign(requestOptions, credentials);

    console.log(options);

    // Issue the signed request and print the accumulated response body.
    const req = http.request(options, res => {
        const body = [];
        res.on('data', chunk => body.push(chunk));
        res.on('end', () => console.log(body.join('')));
    });

    req.on('error', console.log);
    req.end();
    
  5. Instantiate values for accessKeyId, secretAccessKey, host, and the method and path (route) you want to test and save a copy to another .js file (test-request.js for the present example).

  6. Run the script.

    $ node test-request.js
    

Zenko Reference

Introduction

Zenko is Scality’s multi-cloud controller. It provides an open-source, open-platform gateway to enable replication, management, and general ease of use to storage managers handling extreme data volumes over multiple clouds. Zenko provides a single integration point from which cloud data can be managed in several protocol spaces. Zenko either builds a namespace for cloud object data stores, or ingests the namespace of supported cloud data stores to perform powerful metadata-based file management and search tasks.

Zenko offers these capabilities by using the logic and much of the syntax of Amazon Web Services’ Simple Storage Service protocol (AWS S3) through its CloudServer module. CloudServer replicates select S3 API calls verbatim, providing ease of integration with existing cloud storage solutions. When requested, it can also replicate data and manage replicated data in other popular public clouds, such as Microsoft Azure Blob Storage and Google Cloud Storage, as well as private clouds like Scality’s RING.

Most Zenko tasks can be managed using the web-based Orbit service. More advanced users, however, may wish to interact directly with Zenko using its REST APIs. This guide provides an API reference for the benefit of such users.

Some properties can only be managed through other APIs. Documentation is also furnished here for addressing Prometheus and Backbeat. Prometheus API access is direct. The Backbeat API is accessed through CloudServer API calls.

API Basics

Zenko operates by deploying and managing containerized instances of CloudServer, a Scality service that reproduces relevant API calls from Amazon’s Simple Storage Service (S3) API. Using this RESTful API requires access credentials (an access/secret key pair) in a well-formed request. The following sections describe supported protocols and APIs, correct formatting for requests and responses, possible error messages, and bucket encryption methods.

API Support

Supported S3 API commands for Zenko are detailed here.

Operation Name | Operation Type | Available?
GET Service | Bucket |
DELETE Bucket | Bucket | Yes
GET Bucket Versioning | Bucket | Yes
GET Bucket Location | Bucket | Yes
GET Bucket (List Objects) | Bucket | Yes
GET Bucket Object Versions | Bucket | Yes
HEAD Bucket | Bucket | Yes
PUT Bucket | Bucket | Yes
PUT Bucket Versioning | Bucket | Yes
GET Bucket ACL | Bucket | Yes
PUT Bucket ACL | Bucket | Yes
List Multipart Uploads | Bucket | Yes
PUT Bucket Website | Bucket | Yes
GET Bucket Website | Bucket | Yes
DELETE Bucket Website | Bucket | Yes
PUT Bucket CORS | Bucket | Yes
GET Bucket CORS | Bucket | Yes
DELETE Bucket CORS | Bucket | Yes
DELETE Bucket Lifecycle | Bucket | Yes
DELETE Bucket Replication | Bucket | Yes
DELETE Bucket Policy | Bucket | Yes
DELETE Bucket Tagging | Bucket |
GET Bucket Lifecycle | Bucket | Yes
GET Bucket Replication | Bucket | Yes
GET Bucket Policy | Bucket | Yes
GET Bucket Logging | Bucket |
GET Bucket Notification | Bucket |
GET Bucket Tagging | Bucket |
GET Bucket RequestPayment | Bucket |
PUT Bucket Lifecycle | Bucket | Yes
PUT Bucket Replication | Bucket | Yes
PUT Bucket Policy | Bucket | Yes
PUT Bucket Logging | Bucket |
PUT Bucket Notification | Bucket |
PUT Bucket Tagging | Bucket |
PUT Bucket RequestPayment | Bucket |
DELETE Object | Object | Yes
DELETE Object Tagging | Object | Yes
Multi-Object Delete | Object | Yes
GET Object | Object | Yes
GET Object Tagging | Object | Yes
GET Object ACL | Object | Yes
HEAD Object | Object | Yes
GET Object Torrent | Object |
OPTIONS Object | Object |
POST Object | Object |
POST Object Restore | Object |
PUT Object | Object | Yes
PUT Object Tagging | Object | Yes
PUT Object ACL | Object | Yes
PUT Object - Copy | Object | Yes
Initiate Multipart Upload | Multipart Upload | Yes
Upload Part | Multipart Upload | Yes
Upload Part - Copy | Multipart Upload | Yes
Complete Multipart Upload | Multipart Upload | Yes
Abort Multipart Upload | Multipart Upload | Yes
List Parts | Multipart Upload | Yes

Special Notes
Transfer-stream-encoding for object PUT with v4 AUTH | Yes

HTTP Protocols

Zenko uses HTTP protocols as defined by RFC 2616. REST operations consist of sending HTTP requests to Zenko, which returns HTTP responses. These HTTP requests contain a request method, a URI with an optional query string, headers, and a body. The responses contain status codes, headers, and may contain a response body.

S3 supports the REST API, accessed via requests that employ an XML-based protocol. Input parameters are provided as an XML body (at the Service level) or as an XML array of entities (such as Buckets, Accounts or Users), plus a time range (start and end times expressed as UTC epoch timestamps). Output is delivered as an XML array, one element per entity. Bytes transferred and number of operations metrics are accumulated between the provided start and end times. For storage capacity, discrete values are returned in bytes for the start and the end times (not as an average between start and end).

Although request headers and response headers can be specific to a particular Zenko API operation or set of operations, many such elements are common to all operations.

Request headers that are typically found in Zenko requests include Authorization, Content-Length, Content-Type, Date, and Host.

Header | Description
Authorization | Contains the information required for authentication.
Content-Length | Message length (without headers), as specified by RFC 2616; required for PUT and operations that load XML, such as logging and ACLs.
Content-Type | Resource content type (e.g., text/plain). For PUT operations, the default is binary/octet-stream, and valid values are MIME types.
Date | Date and time of the request (default format is Thu, 31 Mar 2016 13:00:00 GMT, which conforms to RFC 2616 Section 3.3.1).
Host | Required for HTTP 1.1, the Host header points to the standard storage service. If the host contains anything other than the standard Zenko storage server, this information is interpreted as the bucket for the request. The Host header contains either the service host name or the virtual host (bucket.s3.basedomain.com), in addition to the bucket.

Important response headers that customarily appear in API operation responses include HTTP/1.1, x-amzn-request-id, Content-Length, Content-Type, and Date.

Header | Type | Description
HTTP/1.1 | string | Header followed by a status code; status code 200 indicates a successful operation.
x-amzn-request-id | string | A value created by Zenko that uniquely identifies a request. Values can be used to troubleshoot problems.
Content-Length | string | Length of response body, in bytes.
Content-Type | string | Message’s content type (typically application/hal+json).
Date | string | Date and time of the Zenko response.

Note

For detail on common request headers refer to Common Request Headers, and for detail on common response headers refer to Common Response Headers.

Common Request Headers

All request headers are passed as strings, but are not enclosed in quotation marks. They must be listed on separate lines, and each header included in the request signature must be followed by a newline marker (\n) in the signature string.
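
For example, following the StringToSign structure shown in Setting Up Backbeat API Access, a signature string for a simple GET request with no Content-MD5 or Content-Type might look like this sketch (the date and bucket name are illustrative):

GET\n
\n
\n
Thu, 31 Mar 2016 13:00:00 GMT\n
/example-bucket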

Header | Description
Authorization | Contains the information required for request authentication.
Content-Length | Length of the message (without the headers); required for PUTs and operations that load XML, such as logging and ACLs.
Content-Type | The content type of the resource, in case the request content is in the body (e.g., text/plain).
Content-MD5 | The base64-encoded 128-bit MD5 digest of the message (without the headers); can be used as a message integrity check to verify that the data is the same data that was originally sent.
Date | Current date and time according to the requester (e.g., Tue, 14 Jun 2011 08:30:00 GMT); either the x-amz-date or the Date header must be specified in the Authorization header.
Expect | When an application uses 100-continue, it does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the body of the message is not sent. Can be used only if a body is being sent. Valid Values: 100-continue.
Host | Host URI (e.g., s3.{{StorageService}}.com or {{BucketName}}.s3.{{StorageService}}.com); required for HTTP 1.1, optional for HTTP/1.0 requests.
x-amz-date | The current date and time according to the requester (e.g., Wed, 01 Mar 2006 12:00:00 GMT); either the x-amz-date or the Date header must be specified in the Authorization header. If both are specified, the value of the x-amz-date header takes precedence.
x-amz-security-token | The security token used with temporary security credentials. When making requests using temporary security credentials obtained from IAM, a security token must be provided using this header.
Using the Date Header as an “Expires” Field

Query string authentication can be used for giving HTTP or browser access to resources that require authentication. When using query string authentication, an Expires field must be included in the header request.

Properly speaking, Expires is not a common request header but a parameter of the URL passed to a client. Use it in place of the Date header field when distributing a request with the query string authentication mode. The Expires field indicates the time, expressed in seconds since the Unix epoch, at which the request signature expires.
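
A query-string-authenticated request carrying the Expires field looks like the following sketch; the access key, timestamp, and signature values are placeholders:

https://zenko.local/{{BucketName}}/{{ObjectName}}?AWSAccessKeyId={{AccessKeyId}}&Expires=1555555555&Signature={{URL-encoded-signature}}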

Common Response Headers

All Zenko response headers are listed on separate lines.

Content-Length (string)
  Length of the response body, in bytes.
  Default: None

Content-Type (string)
  The MIME type of the content (e.g., Content-Type: text/html; charset=utf-8).
  Default: None

Connection (enum)
  Indicates whether the connection to the server is open or closed.
  Valid Values: open | close
  Default: None

Date (string)
  Date and time of the response (e.g., Wed, 01 Mar 2006 12:00:00 GMT).
  Default: None

ETag (string)
  The entity tag (ETag) is a hash of the object that reflects changes only to the object’s contents, not to its metadata. It may or may not be an MD5 digest of the object data, depending on how the object was created and how it is encrypted:

  • Objects created by PUT Object and encrypted by SSE-S3, or not encrypted at all, have ETags that are an MD5 digest of their object data.
  • Objects created by PUT Object and encrypted by SSE-C or SSE-KMS have ETags that are not an MD5 digest of their object data.
  • If an object is created by Multipart Upload, the ETag is not an MD5 digest, regardless of the method of encryption.

Server (string)
  Name of the server that created the response.

x-amz-request-id (string)
  A created value that uniquely identifies the request; can be used to troubleshoot the problem.
  Default: None

x-amz-delete-marker (Boolean)
  Specifies whether the object returned was or was not a delete marker.
  Valid Values: true | false
  Default: false

x-amz-version-id (string)
  The version of the object. When versioning is enabled, Zenko generates a URL-ready hex string to identify objects added to a bucket. For example: 3939393939393939393939393939393939393939756e6437.
  When an object is PUT in a bucket where versioning has been suspended, the version ID is always null.

Error Messages
AWS S3 Error Messages

Zenko may return the following AWS error messages, which are available to the AWS-emulating CloudServer module:

Error Code Description
AccessDenied 403 Access denied
AccessForbidden 403 Access forbidden
AccountProblem 403 A problem with your account prevents the operation from completing. Please use Contact Us.
AmbiguousGrantByEmailAddress 400 The provided email address is associated with more than one account.
BadDigest 400 The Content-MD5 specified did not match what we received.
BucketAlreadyExists 409 The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
BucketAlreadyOwnedByYou 409 The request to create the named bucket succeeded and you already own it. This error is returned in all locations except us-east-1. In the us-east-1 location, you will get a “200 OK”, but the request is inoperative: if the bucket exists, S3 does nothing.
BucketNotEmpty 409 The bucket you tried to delete is not empty.
CredentialsNotSupported 400 This request does not support credentials.
CrossLocationLoggingProhibited 403 Cross-location logging not allowed. Buckets in one geographic location cannot log information to a bucket in another location.
DeleteConflict 409 The request was rejected because it attempted to delete a resource that has attached subordinate entities. The error message describes these entities.
EntityTooSmall 400 Proposed upload is smaller than the minimum allowed object size.
EntityTooLarge 400 Proposed upload exceeds the maximum allowed object size.
ExpiredToken 400 The provided token has expired.
IllegalVersioningConfigurationException 400 Indicates that the versioning configuration specified in the request is invalid.
IncompleteBody 400 The number of bytes specified by the Content-Length HTTP header were not provided.
IncorrectNumberOfFilesInPostRequest 400 POST requires exactly one file upload per request.
InlineDataTooLarge 400 Inline data exceeds the maximum allowed size.
InternalError 500 We encountered an internal error. Please try again.
InvalidAccessKeyId 403 The access key ID provided does not exist in our records.
InvalidAddressingHeader 400 You must specify the Anonymous role.
InvalidArgument 400 Invalid argument
InvalidBucketName 400 The specified bucket is not valid.
InvalidBucketState 409 The request is not valid with the current state of the bucket.
InvalidDigest 400 The specified Content-MD5 is not valid.
InvalidEncryptionAlgorithmError 400 The specified encryption request is not valid. The valid value is AES256.
InvalidLocationConstraint 400 The specified location constraint is not valid.
InvalidObjectState 403 The operation is not valid for the current state of the object.
InvalidPart 400 One or more of the specified parts could not be found. The part might not have been uploaded, or the specified entity tag might not have matched the part’s entity tag.
InvalidPartOrder 400 The list of parts was not in ascending order. The parts list must be specified in order by part number.
InvalidPartNumber 416 The requested part number is not satisfiable.
InvalidPayer 403 All access to this object has been disabled.
InvalidPolicyDocument 400 The content of the form does not meet the conditions specified in the policy document.
InvalidRange 416 The requested range cannot be satisfied.
InvalidRedirectLocation 400 The website redirect location must have a prefix of “http://”, “https://”, or “/”.
InvalidRequest 400 SOAP requests must be made over an HTTPS connection.
InvalidSecurity 403 The provided security credentials are not valid.
InvalidSOAPRequest 400 The SOAP request body is invalid.
InvalidStorageClass 400 The specified storage class is not valid.
InvalidTag 400 The provided tag is invalid.
InvalidTargetBucketForLogging 400 The target bucket for logging does not exist, is not yours, or does not have appropriate grants for the log-delivery group.
InvalidToken 400 The provided token is malformed or otherwise invalid.
InvalidURI 400 Couldn’t parse the specified URI.
KeyTooLong 400 Your key is too long.
LimitExceeded 409 The request was rejected because it attempted to create resources beyond current account limits. The error message describes the exceeded limit.
MalformedACLError 400 The XML you provided was not well formed or did not validate against our published schema.
MalformedPOSTRequest 400 The POST request body is not well-formed multipart data or form data.
MalformedXML 400 The provided XML was not well formed or did not validate against the published schema.
MaxMessageLengthExceeded 400 The request was too long.
MaxPostPreDataLengthExceededError 400 The POST request fields preceding the upload file were too long.
MetadataTooLarge 400 The metadata headers exceed the maximum allowed metadata size.
MethodNotAllowed 405 The specified method is not allowed against this resource.
MissingAttachment 400 A SOAP attachment was expected, but none was found.
MissingContentLength 411 Provide the Content-Length HTTP header.
MissingRequestBodyError 400 Request body is empty.
MissingRequiredParameter 400 Request is missing a required parameter.
MissingSecurityElement 400 The SOAP 1.1 request is missing a security element.
MissingSecurityHeader 400 Request is missing a required header.
NoLoggingStatusForKey 400 There are no logging status subresources for keys.
NoSuchBucket 404 The specified bucket does not exist.
NoSuchCORSConfiguration 404 The CORS configuration does not exist
NoSuchKey 404 The specified key does not exist.
NoSuchLifecycleConfiguration 404 The lifecycle configuration does not exist.
NoSuchWebsiteConfiguration 404 The specified bucket does not have a website configuration.
NoSuchUpload 404 The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.
NoSuchVersion 404 The version ID specified in the request does not match an existing version.
ReplicationConfigurationNotFoundError 404 The replication configuration was not found.
NotImplemented 501 A provided header implies functionality that is not implemented.
NotModified 304 Not modified.
NotSignedUp 403 Account is not signed up for the S3 service. You must sign up before you can use S3.
NoSuchBucketPolicy 404 The specified bucket does not have a bucket policy.
OperationAborted 409 A conflicting conditional operation is currently in progress against this resource. Try again.
PermanentRedirect 301 The bucket you are attempting to access must be addressed using the specified endpoint. Send all future requests to this endpoint.
PreconditionFailed 412 At least one of the specified preconditions did not hold.
Redirect 307 Temporary redirect.
RestoreAlreadyInProgress 409 Object restore is already in progress.
RequestIsNotMultiPartContent 400 Bucket POST must be of the multipart/form-data enclosure type.
RequestTimeout 400 Socket connection to the server was not read from or written to within the timeout period.
RequestTimeTooSkewed 403 The difference between the request time and the server’s time is too large.
RequestTorrentOfBucketError 400 Requesting the torrent file of a bucket is not permitted.
SignatureDoesNotMatch 403 The request signature we calculated does not match the signature you provided.
ServiceUnavailable 503 Reduce your request rate.
ServiceUnavailable 503 The request has failed due to a temporary server failure.
SlowDown 503 Reduce your request rate.
TemporaryRedirect 307 You are being redirected to the bucket while DNS updates.
TokenRefreshRequired 400 Refresh the provided token.
TooManyBuckets 400 You attempted to create more buckets than are allowed.
TooManyParts 400 You attempted to upload more parts than are allowed.
UnexpectedContent 400 This request does not support content.
UnresolvableGrantByEmailAddress 400 The provided email address does not match any account on record.
UserKeyMustBeSpecified 400 The bucket POST must contain the specified field name. If it is specified, check the order of the fields.
NoSuchEntity 404 The rejected request referenced an entity that does not exist. The error message describes the entity.
WrongFormat 400 Data entered by the user has a wrong format.
Forbidden 403 Authentication failed.
EntityDoesNotExist 404 Not found.
EntityAlreadyExists 409 The request was rejected because it attempted to create a resource that already exists.
KeyAlreadyExists 409 The request was rejected because it attempted to create a resource that already exists.
ServiceFailure 500 Server error: The request processing has failed because of an unknown error, exception or failure.
IncompleteSignature 400 The request signature does not conform to S3 standards.
InternalFailure 500 Request processing failed due to an unknown error, exception, or failure.
InvalidAction 400 The requested action or operation is invalid. Verify that the action is entered correctly.
InvalidClientTokenId 403 The X.509 certificate or AWS access key ID provided does not exist in our records.
InvalidParameterCombination 400 Parameters that must not be used together were used together.
InvalidParameterValue 400 An invalid or out-of-range value was supplied for the input parameter.
InvalidQueryParameter 400 The query string is malformed or does not conform to S3 standards.
MalformedQueryString 404 The query string contains a syntax error.
MissingAction 400 The request is missing an action or a required parameter.
MissingAuthenticationToken 403 The request must contain either a valid (registered) access key ID or X.509 certificate.
MissingParameter 400 A required parameter for the specified action is not supplied.
OptInRequired 403 The access key ID requires a subscription for the service.
RequestExpired 400 The request reached the service more than 15 minutes after the date stamp on the request or more than 15 minutes after the request expiration date (such as for pre-signed URLs), or the date stamp on the request is more than 15 minutes in the future.
Throttling 400 The request was denied due to request throttling.
AccountNotFound 404 No account was found in Vault. Contact your system administrator.
ValidationError 400 The specified value is invalid.
MalformedPolicyDocument 400 Syntax errors in policy.
InvalidInput 400 The request was rejected because an invalid or out-of-range value was supplied for an input parameter.
MalformedPolicy 400 This policy contains invalid JSON.
Non-AWS S3 Error Messages

Zenko may also return the following non-AWS S3 error message during a multipart upload:

Error Code Description
MPUinProgress 409 The bucket you tried to delete has an ongoing multipart upload.
Bucket Encryption

Slightly different from AWS SSE, Zenko bucket encryption is transparent to the application. Buckets are created with a special x-amz-scal-server-side-encryption header (value: AES256), which specifies that the bucket’s objects be encrypted, and thereafter there is no need to change any object PUT or GET calls in the application as the encrypt/decrypt behavior will simply occur (encrypt on PUT, decrypt on GET). In contrast, AWS SSE can be quite intrusive, as it requires special headers on all object-create calls, including Object Put, Object Copy, Object Post, and Multi Part Upload requests.

Zenko bucket encryption is similar to SSE-C in its integration with a key management service (KMS). Zenko requires users to provide the KMS, which generates encryption keys on PUT calls and retrieves the same encryption key on GET calls. Thus, Zenko does not store encryption keys.

Zenko also uses standard 256-bit OpenSSL encryption libraries to perform payload encryption and decryption. This supports the Intel AES-NI CPU acceleration library, making encryption nearly as fast as non-encrypted performance.
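
For illustration, the following sketch shows a bucket-creation request carrying this header, using the same placeholder conventions as the PUT Bucket syntax later in this reference:

PUT / HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}
x-amz-scal-server-side-encryption: AES256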

Access Controls

Zenko implements access controls that conform to the S3 API for:

  • Access Control Lists (ACLs)
  • Cross-Origin Resource Sharing (CORS)
  • Bucket policies

User-level access control conforming to the AWS Identity and Access Management (IAM) service protocols is under development as of version 1.1.0.

ACL (Access Control List)

Access Control Lists (ACLs) enable the management of access to buckets and objects.

Each bucket and object has an ACL attached to it as a subresource, defining which accounts or groups are granted access and the type of access. When a request is received against a resource, Zenko checks the corresponding ACL to verify the requester has the necessary access permissions.

When a bucket or object is created, Zenko creates a default ACL that grants the resource owner full control over the resource as shown in the following sample bucket ACL (the default object ACL has the same structure).

<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://example.com/doc/2006-03-01/">
  <Owner>
    <ID>*** Owner-Canonical-User-ID ***</ID>
    <DisplayName>owner-display-name</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="CanonicalUser">
        <ID>*** Owner-Canonical-User-ID ***</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>

The sample ACL includes an Owner element identifying the owner via the account’s canonical user ID. The Grant element identifies the grantee (either a specific account or a predefined group) and the permission granted. This default ACL has one Grant element for the owner. You grant permissions by adding Grant elements, each grant identifying the grantee and the permission.

Grantee Eligibility

A grantee can be an account or one of the predefined groups. Permission is granted to an account by the email address or the canonical user ID. However, if an email address is provided in the grant request, Zenko finds the canonical user ID for that account and adds it to the ACL. The resulting ACLs always contain the canonical user ID for the account, not the account’s email address.

AWS Canonical User ID

Canonical user IDs are associated with AWS accounts. When an AWS account is granted permissions by a grant request, a grant entry is added to the ACL with that account’s canonical user ID.

Predefined Amazon S3 Groups

Zenko offers the use of Amazon S3 predefined groups. When granting account access to such a group, specify one of the following URIs instead of a canonical user ID.

Authenticated Users

Represents all authenticated accounts. Access permission to this group allows any system account to access the resource. However, all requests must be signed (authenticated).

http://acs.example.com/groups/global/AuthenticatedUsers

Public

Access permission to this group allows anyone to access the resource. Requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authorization header in the request.

http://acs.example.com/groups/global/AllUsers

Log Delivery

WRITE permission on a bucket enables this group to write server access logs to the bucket.

http://acs.example.com/groups/s3/LogDelivery

Note

When using ACLs, a grantee can be an AWS account or one of the predefined Amazon S3 groups. However, the grantee cannot be an Identity and Access Management (IAM) user. When granting AWS accounts access to resources, be aware that the AWS accounts can delegate their permissions to users under their accounts (a practice known as cross-account access).

Grantable Permissions

The set of permissions Zenko supports in an ACL is detailed in the following table.

Permission | When Granted to a Bucket | When Granted to an Object
READ | Grantee can list the objects in the bucket. | Grantee can read the object data and its metadata.
WRITE | Grantee can create, overwrite, and delete any object in the bucket. | Not applicable
READ_ACP | Grantee can read the bucket ACL. | Grantee can read the object ACL.
WRITE_ACP | Grantee can write the ACL for the applicable bucket. | Grantee can write the ACL for the applicable object.
FULL_CONTROL | Allows grantee the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. | Allows grantee the READ, READ_ACP, and WRITE_ACP permissions on the object.

Note

The set of ACL permissions is the same for object ACLs and bucket ACLs. However, depending on the context (bucket ACL or object ACL), these ACL permissions grant permissions for specific bucket or object operations.

Specifying an ACL

Using Zenko, an ACL can be set at the creation point of a bucket or object. An ACL can also be applied to an existing bucket or object.

Set ACL using request headers: When sending a request to create a resource (bucket or object), set an ACL using the request headers. With these headers, it is possible to either specify a canned ACL or specify grants explicitly (identifying grantee and permissions explicitly).

Set ACL using request body: When you send a request to set an ACL on an existing resource, you can set the ACL either in the request header or in the body.
Sample ACL

The ACL on a bucket identifies the resource owner and a set of grants. The format is the XML representation of an ACL in the Zenko API. The bucket owner has FULL_CONTROL of the resource. In addition, the ACL shows how permissions are granted on a resource to two accounts, identified by canonical user ID, and two of the predefined Amazon S3 groups.

<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://example.com/doc/2006-03-01/">
  <Owner>
    <ID>Owner-canonical-user-ID</ID>
    <DisplayName>display-name</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>Owner-canonical-user-ID</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>

    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>user1-canonical-user-ID</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>WRITE</Permission>
    </Grant>

    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>user2-canonical-user-ID</ID>
        <DisplayName>display-name</DisplayName>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>

    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI>http://acs.example.com/groups/global/AllUsers</URI>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI>http://acs.example.com/groups/s3/LogDelivery</URI>
      </Grantee>
      <Permission>WRITE</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>

Bucket CORS Operations

Bucket CORS operations enable Zenko to permit cross-origin requests sent through the browser on a per-bucket basis. To enable cross-origin requests, configure an S3 bucket by adding a CORS subresource containing rules for the type of requests to permit.

Bucket CORS Specification

Zenko implements the AWS S3 Bucket CORS APIs.
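
As a sketch, a CORS configuration permitting simple cross-origin GET requests from a single origin might look like the following, using the standard AWS S3 CORS XML format; the origin and age values are illustrative:

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>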

Preflight CORS Requests

A preflight request with the HTTP OPTIONS method can be made against Zenko to determine whether CORS requests are permitted on a bucket before sending the actual request. (For detailed information on the preflight request and response, see the OPTIONS object.)

Warning

If several rules are specified, the first one matching the preflight request Origin, Access-Control-Request-Method header and Access-Control-Request-Headers header is the rule used to determine response headers that are relevant to CORS (the same behavior as AWS).

CORS Headers in Non-Options Requests and AWS Compatibility

With the exception of Access-Control-Allow-Headers, CORS-relevant response headers are sent in response to regular API calls if those requests possess the Origin header and are permitted by a CORS configuration on the bucket.

Note

Because responding with CORS headers requires making a call to metadata to retrieve the bucket’s CORS configuration, CORS headers are not returned if the request encounters an error before the API method retrieves the bucket from metadata (if, for example, a request is not properly authenticated). Such behavior deviates slightly from AWS, in favor of performance, anticipating that the preflight OPTIONS route will serve most client needs regarding CORS. If many rules are specified, the first rule that matches the request’s origin and HTTP method is used to determine response headers that are relevant to CORS (the same behavior as AWS).

Bucket Policy

Bucket policies let owners inscribe highly specific or powerfully general access controls to buckets and the objects they contain. By default, external accounts are denied access to buckets. A bucket policy, set as a JSON file attached to the bucket, can permit or deny specified accounts or users access to the bucket, to all objects in the bucket, or to individual objects in the bucket, and can permit or deny specific actions on specified buckets or objects.

Policies consist of four elements: an action, an effect, a resource, and a principal.

  • The action is the API being permitted or blocked. This can either take the form of a specific API for access/denial ("Action": ["s3:CreateBucket"]) or a wildcard assertion ("Action": "s3:*") which enables/disables access either to an entire bucket (for bucket API calls) or to objects in a bucket (for object API calls).

  • The effect is the condition placed on the action and principal: either “Allow” or “Deny”. It is expressed simply as "Effect": "Allow" or "Effect": "Deny".

  • The resource is the bucket or object to which access is being granted or denied. Resources are formatted in the policy as follows:

    Resource Type | ARN
    bucket | arn:aws:s3:::${BucketName}
    object | arn:aws:s3:::${BucketName}/${ObjectName}
  • The principal is defined by the account ID, account ARN, user ARN, or canonical ID of the user or entity being permitted or denied access. The basic format of a principal is "Principal": {"AWS": ["123456789012"]}, but many options are available, the syntax for which exceeds the scope of this documentation. Zenko follows the conventions for principals documented at https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html, but does not recognize federated users, IAM roles, or service roles.

The easiest method for creating bucket policies is to use the Amazon policy generator at https://awspolicygen.s3.amazonaws.com/policygen.html.
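
Assembled, these elements form a JSON policy document. The following is a minimal sketch; the account ID, bucket name, and action are placeholders you must adapt:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["123456789012"] },
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}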

You can set, review, and clear bucket policies using the PUT Bucket Policy, GET Bucket Policy, and DELETE Bucket Policy API calls.

Bucket Operations

This section presents a compendium of available API calls for bucket operations in Zenko.

PUT Bucket

The PUT Bucket operation creates a bucket and sets the account issuing the request as the bucket owner. Anonymous requests (requests that send no authentication information) cannot create buckets.

Requests
Syntax
PUT / HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Content-Length: {{length}}
Date: {{date}}
Authorization: {{authenticationInformation}}

<CreateBucketConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
<LocationConstraint>scality-us-west-1</LocationConstraint>
</CreateBucketConfiguration>

If the Host header of a PUT Bucket request does not match any of the REST endpoints in your configuration, and a region is not specified in the request, the bucket is automatically assigned the us-east-1 region.

This syntax illustrates only a portion of the request headers.

Parameters

The PUT Bucket operation does not use request parameters.

Headers

The PUT Bucket operation can use a number of optional request headers in addition to those that are common to all operations (refer to Common Request Headers). These request headers are used either to specify a predefined (or canned) ACL, or to explicitly specify access permissions.

Specifying a Canned ACL

Zenko supports a set of canned ACLs, each with a predefined set of grantees and permissions.

Header Type Description
x-amz-acl string

The canned ACL to apply to the bucket being created

Default: private

Valid Values: private | public-read | public-read-write | authenticated-read | bucket-owner-read | bucket-owner-full-control

Constraints: None

Explicitly Specifying Access Permissions

A set of headers is available for explicitly granting access permissions to specific Zenko accounts or groups, each of which maps to specific permissions Zenko supports in an ACL.

In the header value, specify a list of grantees who get the specific permission.

Header Type Description
x-amz-grant-read string

Allows grantee to list the objects in the bucket

Default: None

Constraints: None

x-amz-grant-write string

Allows grantee to create, overwrite, and delete any object in the bucket

Default: None

Constraints: None

x-amz-grant-read-acp string

Allows grantee to read the bucket ACL

Default: None

Constraints: None

x-amz-grant-write-acp string

Allows grantee to write the ACL for the applicable bucket

Default: None

Constraints: None

x-amz-grant-full-control string

Allows grantee READ, WRITE, READ_ACP, and WRITE_ACP permissions on the ACL

Default: None

Constraints: None

x-amz-scal-server-side-encryption string

Special optional header that specifies the source object is to be encrypted.

Default: AES256

Constraints: Must be AES256.

Each grantee is specified as a type=value pair, where the type can be any one of the following:

  • emailAddress (if value specified is the email address of an account)
  • id (if value specified is the canonical user ID of an account)
  • uri (if granting permission to a predefined group)

For example, the following x-amz-grant-read header grants the accounts identified by their email addresses permission to list objects:

x-amz-grant-read: emailAddress="xyz@scality.com", emailAddress="abc@scality.com"
Elements

The PUT Bucket operation can request the following items:

Element Type Description
CreateBucketConfiguration container Container for bucket configuration settings
LocationConstraint enum Specifies where the bucket will be created
Responses
Headers

The PUT Bucket operation uses only response headers that are common to all operations (see Common Response Headers).

Elements

The PUT Bucket operation does not return response elements.

Examples
Create a Bucket Named “Documentation”
Request
PUT / HTTP/1.1
Host: documentation.demo.s3.scality.com
Content-Length: 0
Date: Mon, 15 Feb 2016 15:30:07 GMT
Authorization: AWS pat:fxA/7CeKyl3QJewhIguziTMp8Cc=
Response
HTTP/1.1 200 OK
x-amz-id-2: YgIPIfBiKa2bj0KMg95r/0zo3emzU4dzsD4rcKCHQUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 236A8905248E5A01
Date: Mon, 15 Feb 2016 15:30:07 GMT

Location: /documentation
Content-Length: 0
Connection: close
Server: ScalityS3
Setting a Bucket’s Location Constraint
Request

A PUT Bucket operation example request that sets the location constraint of the bucket to EU.

PUT / HTTP/1.1
Host: {{bucketName}}.s3.{{storageService}}.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: {{authorizationString}}
Content-Type: text/plain
Content-Length: 124

<CreateBucketConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
<LocationConstraint>EU</LocationConstraint>
</CreateBucketConfiguration >
Creating a Bucket and Configuring Access Permission Using a Canned ACL
Request

A PUT Bucket operation example request that creates a bucket named “documentation” and sets the ACL to private.

PUT / HTTP/1.1
Host: documentation.s3.scality.com
Content-Length: 0
x-amz-acl: private
Date: Wed, 01 Mar  2006 12:00:00 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
x-amz-id-2: YgIPIfBiKa2bj0KMg95r/0zo3emzU4dzsD4rcKCHQUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 236A8905248E5A01
Date: Wed, 01 Mar  2006 12:00:00 GMT

Location: /documentation
Content-Length: 0
Connection: close
Server: ScalityS3
Creating a Bucket and Explicitly Configuring Access Permissions
Request

A PUT Bucket operation example request that creates a bucket named “documentation” and grants WRITE permission to the account identified by an email address.

PUT / HTTP/1.1
Host: documentation.s3.{{storageService}}.com
x-amz-date: Sat, 07 Apr 2012 00:54:40 GMT
Authorization: {{authorizationString}}
x-amz-grant-write: emailAddress="xyz@scality.com", emailAddress="abc@scality.com"
Response
HTTP/1.1 200 OK
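
The same requests can be issued from any S3-compatible SDK. The following is a minimal sketch using Python and boto3; the endpoint URL and bucket names are illustrative.

import boto3

# Hypothetical Zenko endpoint; credentials are read from the environment.
s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")

# Create a bucket with a location constraint and a canned ACL
# (equivalent to the x-amz-acl request header).
s3.create_bucket(
    Bucket="documentation",
    ACL="private",
    CreateBucketConfiguration={"LocationConstraint": "EU"},
)

# Grant WRITE permission explicitly (equivalent to the x-amz-grant-write header).
s3.create_bucket(
    Bucket="documentation2",
    GrantWrite='emailAddress="xyz@scality.com", emailAddress="abc@scality.com"',
)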

GET Bucket (List Objects)

The GET Bucket (List Objects) operation returns some or all of the objects in a bucket. A single response returns at most 1,000 objects, which is also the default; the max-keys parameter can lower this limit to any number less than 1,000.

The request parameters for GET Bucket (List Objects) can be used as selection criteria to return a subset of the objects in a bucket. Because a 200 OK response can contain valid or invalid XML, applications must be designed to parse the contents of the response and to handle them appropriately.

Requests
Syntax
GET / HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}
Parameters

The GET Bucket (List Objects) operation can use the following optional parameters to return a subset of objects in a bucket:

Parameter Type Description
delimiter string

Character used to group keys

All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If prefix is not specified, then the substring starts at the beginning of the key. The keys grouped under the CommonPrefixes result element are not returned elsewhere in the response.

encoding-type string Encodes keys with the method specified. Since XML 1.0 parsers cannot parse certain characters that may be included in an object key, the keys can be encoded in the response to ensure they are legible. Encoding is not set by default. Currently the only valid value is url.
marker string Specifies the key to start with when listing objects in a bucket. Zenko returns object keys in UTF-8 binary order, starting with the first key after the marker.
max-keys string Limits the number of keys included in the list. Default is 1000. The IsTruncated element returns true if the search criteria results for the request exceed the value set for the max-keys parameter.
prefix string Specifies a string that must be present at the beginning of a key in order for the key to be included in the GET Bucket response list.
Headers

The GET Bucket (List Objects) operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The GET Bucket (List Objects) operation does not use request elements.

Responses
Headers

The GET Bucket (List Objects) operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The GET Bucket (List Objects) operation can return the following XML elements in the response:

Element Type Description
Contents XML metadata Metadata about each object returned
CommonPrefixes string A response can contain CommonPrefixes only if Delimiter is specified. When that is the case, CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by Delimiter. In effect, CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix. All of the keys rolled up in a common prefix count as a single return when calculating the number of returns. Refer to MaxKeys.
Delimiter string Causes keys that contain the same string between the prefix and the first occurrence of the delimiter to be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. Each rolled-up result counts as only one return against the MaxKeys value.
DisplayName string Object owner’s name
Encoding-Type string

Encoding type used by Zenko to encode object key names in the XML response.

If the encoding-type request parameter is specified, Zenko includes this element in the response, and returns encoded key name values in the following response elements: Delimiter, Marker, Prefix, NextMarker, Key

ETag string The entity tag is an MD5 hash of the object. The ETag only reflects changes to the contents of an object, not its metadata.
ID string Object owner’s ID
IsTruncated Boolean Specifies whether (true) or not (false) all of the results were returned. All of the results may not be returned if the number of results exceeds that specified by MaxKeys.
Key string The object’s key
LastModified date Date and time the object was last modified
Marker string Indicates where in the bucket listing begins; Marker is included in the response if it was sent with the request
MaxKeys string The maximum number of keys returned in the response body
Name string Name of the bucket
NextMarker string

When the response is truncated (that is, the IsTruncated element value in the response is true), the key name in this field can be used as the marker in a subsequent request to get the next set of objects. Zenko lists objects in UTF-8 binary order.

Note that Zenko returns the NextMarker only if a Delimiter request parameter is specified (which runs counter to AWS practice).

Owner string Bucket owner
Prefix string Keys that begin with the indicated prefix
Size string Size in bytes of the object
Examples
Getting Objects in the Backup Bucket
Request
GET / HTTP/1.1
Host: backup.s3.scality.com
Date: Thu, 31 Mar 2016 15:11:47 GMT
Authorization: AWS pat:6nYhPMw6boadLgjywjSIyhfwRIA=
Presenting a Single Object
Response
<?xml version="1.0" encoding="UTF-8"?>
  <ListBucketResult xmlns="http://s3.scality.com/doc/2006-03-01/">
    <Name>backup</Name>
    <Prefix></Prefix>
    <Marker></Marker>
    <MaxKeys>1000</MaxKeys>
    <Delimiter>/</Delimiter>
    <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>support-20110614.md5</Key>
    <LastModified>2011-06-14T05:08:57.000Z</LastModified>
    <ETag>&quot;8aad2888fd4fafaeabb643ccdaa77872&quot;</ETag>
    <Size>155</Size>
    <Owner>
      <ID>3452783832C94517345278000000004000000120</ID>
      <DisplayName>Patrick</DisplayName>
    </Owner>
  </Contents>
  </ListBucketResult>
Using the max-keys Parameter

List up to four keys in the demo bucket.

Request
GET /?max-keys=4 HTTP/1.1
Host: demo.s3.scality.com
Accept: */*
Authorization: AWS pat:0YPPNCCa9yAbKOFdlLD/ixMLayg=
Date: Tue, 28 Jun 2011 09:27:15 GMT
Connection: close
Response
HTTP/1.1 200 OK
Date: Tue, 28 Jun 2011 09:27:15 GMT
Server: RestServer/1.0
Content-Length: 1499
Content-Type: application/xml
Cache-Control: no-cache
Connection: close

<?xml version="1.0" encoding="UTF-8"?>
  <ListBucketResult xmlns="http://s3.scality.com/doc/2006-03-01/">
    <Name>demo</Name>
    <Prefix></Prefix>
    <Marker></Marker>
    <MaxKeys>4</MaxKeys>
    <IsTruncated>true</IsTruncated>
   <Contents>
     <Key>DS_Store</Key>
     <LastModified>2011-06-26T23:45:35.000Z</LastModified>
     <ETag>&quot;02674163a1999de7c3fe664ae6f3085e&quot;</ETag>
     <Size>12292</Size>
     <Owner>
       <ID>3452783832C94517345278000000004000000120</ID>
       <DisplayName>pat</DisplayName>
     </Owner>
     <StorageClass>STANDARD</StorageClass>
   </Contents>
   <Contents>
     <Key>Aziende/cluster.sh</Key>
     <LastModified>2011-05-20T14:33:37.000Z</LastModified>
     <ETag>&quot;45ecf8f5ebc7740b034c40e0412250ec&quot;</ETag>
     <Size>74</Size>
     <Owner>
       <ID>3452783832C94517345278000000004000000120</ID>
       <DisplayName>pat</DisplayName>
     </Owner>
     <StorageClass>STANDARD</StorageClass>
   </Contents>
</ListBucketResult>
Using Prefix and Delimiter
Request

The following keys are present in the sample bucket:

  • greatshot.raw
  • photographs/2006/January/greatshot.raw
  • photographs/2006/February/greatshot_a.raw
  • photographs/2006/February/greatshot_b.raw
  • photographs/2006/February/greatshot_c.raw

The following GET request specifies the delimiter parameter with value “/”.

GET /?delimiter=/ HTTP/1.1
Host: example-bucket.s3.scality.com
Date: Wed, 01 Mar  2006 12:00:00 GMT
Authorization: {{authorizationString}}
Response

The key greatshot.raw does not contain the delimiter character, and Zenko returns it in the Contents element in the response. However, all other keys contain the delimiter character. Zenko groups these keys and returns a single CommonPrefixes element with the common prefix value photographs/, which is a substring from the beginning of these keys to the first occurrence of the specified delimiter.

<ListBucketResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Name>example-bucket</Name>
  <Prefix></Prefix>
  <Marker></Marker>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>greatshot.raw</Key>
    <LastModified>2011-02-26T01:56:20.000Z</LastModified>
    <ETag>&quot;bf1d737a4d46a19f3bced6905cc8b902&quot;</ETag>
    <Size>142863</Size>
    <Owner>
      <ID>accessKey-user-id</ID>
      <DisplayName>display-name</DisplayName>
    </Owner>
  </Contents>
  <CommonPrefixes>
    <Prefix>photographs/</Prefix>
  </CommonPrefixes>
</ListBucketResult>
Request

The following GET request specifies the delimiter parameter with value “/”, and the prefix parameter with value photographs/2006/.

GET /?prefix=photographs/2006/&delimiter=/ HTTP/1.1
Host: example-bucket.s3.scality.com
Date: Wed, 01 Mar  2006 12:00:00 GMT
Authorization: {{authorizationString}}
Response

In response, Zenko returns only the keys that start with the specified prefix. Further, it uses the delimiter character to group keys that contain the same substring until the first occurrence of the delimiter character after the specified prefix. For each such key group Zenko returns one CommonPrefixes element in the response. The keys grouped under this CommonPrefixes element are not returned elsewhere in the response. The value returned in the CommonPrefixes element is a substring, from the beginning of the key to the first occurrence of the specified delimiter after the prefix.

<ListBucketResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Name>example-bucket</Name>
  <Prefix>photographs/2006/</Prefix>
  <Marker></Marker>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <CommonPrefixes>
    <Prefix>photographs/2006/February/</Prefix>
 </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>photographs/2006/January/</Prefix>
  </CommonPrefixes>
</ListBucketResult>
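
The same grouping can be requested through boto3. This is a minimal sketch assuming the hypothetical endpoint used earlier and the example-bucket keys above.

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # hypothetical endpoint

resp = s3.list_objects(Bucket="example-bucket",
                       Prefix="photographs/2006/", Delimiter="/")
# Keys rolled up under a common prefix appear only in CommonPrefixes.
for cp in resp.get("CommonPrefixes", []):
    print("prefix:", cp["Prefix"])          # e.g. photographs/2006/February/
for obj in resp.get("Contents", []):
    print("key:", obj["Key"], obj["Size"])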

GET Bucket (List Objects) Version 2

The Version 2 GET operation returns some or all (up to 1000) of the objects in a bucket. The request parameters can be used as selection criteria to return a subset of the objects in a bucket. A 200 OK response can contain valid or invalid XML. Design applications to parse the contents of the response and to handle them appropriately.

Note

Using the v2 implementation requires READ access to the bucket.

To use this operation in an AWS Identity and Access Management (IAM) policy, you must have permission to perform the s3:ListBucket action. By default, the bucket owner has this permission and can grant it to others. For more information about permissions, see Permissions Related to Bucket Operations and Identity and Access Management in Amazon S3 in the Amazon Simple Storage Service Developer Guide.

Important

Use the revision of the API described in this topic, GET Bucket (List Objects) version 2, for application development. For backward compatibility, support is maintained for the prior version of this API operation, GET Bucket (List Objects).

Requests
Syntax
GET /?list-type=2 HTTP/1.1
Host: BucketName.s3.scality.com
Date: date
Authorization: authorization string
Parameters

GET Bucket (List Objects) Version 2 uses the following parameters:

Parameter Description Required
delimiter

A delimiter is a character for grouping keys.

If specifying a prefix, all keys that contain the same string between the prefix and the first delimiter occurring after the prefix are grouped under a single result element, CommonPrefixes. If the prefix parameter is not specified, the substring starts at the beginning of the key. Keys grouped under the CommonPrefixes result element are not returned elsewhere in the response.

Type: String

Default: None

No
encoding-type

Requests Zenko to encode the response and specifies the encoding method to use.

An object key can contain any Unicode character. However, XML 1.0 parsers cannot parse some characters, such as characters with an ASCII value from 0 to 10. For characters not supported in XML 1.0, add this parameter to request Zenko to encode the keys in the response.

Type: String

Default: None

Valid value: url

No
max-keys

Sets the maximum number of keys returned in the response body. To retrieve fewer than the default 1,000 keys, add this to the request.

The response may contain fewer than the specified value of keys, but never contains more. If additional keys that satisfy the search were not returned because max-keys was exceeded, the response contains <IsTruncated>true</IsTruncated>. To return these additional keys, see NextContinuationToken.

Type: String

Default: 1000

No
prefix

Limits the response to keys that begin with the specified prefix.

Prefixes can be used to separate a bucket into different groupings of keys. (You can think of using prefix to group objects as you’d use a folder in a file system.)

Type: String

Default: None

No
list-type

Version 2 of the API requires this parameter. Its value must be set to 2.

Type: String

Default: Value is always 2.

Yes
continuation-token

When the response to this API call is truncated (that is, the IsTruncated response element value is true), the response also includes the NextContinuationToken element. To list the next set of objects, use the NextContinuationToken element value in the next request as the continuation-token.

  • The continuation token is an opaque value that Zenko understands.
  • Zenko lists objects in UTF-8 character encoding in lexicographic order.

Type: String

Default: None

No
fetch-owner

By default, the API does not return Owner information in the response. To get owner information in the response, set this parameter to true.

Type: String

Default: false

No
start-after

Add this parameter to request the API to return key names after a specific object key in your key space. Zenko lists objects in UTF-8 character encoding in lexicographic order.

This parameter is valid only in the first request. If the response is truncated and a subsequent request specifies start-after together with the continuation-token parameter, CloudServer ignores start-after.

Type: String

Default: None

No
Elements

This operation does not use request elements.

Headers

This operation uses only request headers that are common to all operations (see Common Request Headers).

Responses
Headers

This operation uses only response headers that are common to most responses (see Common Response Headers).

Elements
Name Description
Contents

Metadata about each object returned.

Type: XML metadata

Ancestor: ListBucketResult

CommonPrefixes

All of the keys rolled up into a common prefix count as a single return when calculating the number of returns. See MaxKeys.

  • A response can contain CommonPrefixes only if a delimiter has been specified.
  • CommonPrefixes contains any existing keys between Prefix and the next occurrence of the string specified by a delimiter.
  • CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix.

For example, if the prefix is notes/ and the delimiter is a slash (/), as in notes/summer/july, the common prefix is notes/summer/.

Type: String

Ancestor: ListBucketResult

Delimiter

Causes keys containing the same string between the prefix and first occurrence of the delimiter to be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. Each rolled-up result counts as only one return against the MaxKeys value.

Type: String

Ancestor: ListBucketResult

DisplayName

Object owner’s name.

Type: String

Ancestor: ListBucketResult.Contents.Owner

Encoding-Type

Encoding type used by Zenko to encode object key names in the XML response.

If you specify the encoding-type request parameter, Zenko includes this element in the response, and returns encoded key name values in the Delimiter, Prefix, Key, and StartAfter response elements.

Type: String

Ancestor: ListBucketResult

ETag

The entity tag is an MD5 hash of the object. ETag reflects only changes to the contents of an object, not its metadata.

Type: String

Ancestor: ListBucketResult.Contents

ID

Object owner’s ID

Type: String

Ancestor: ListBucketResult.Contents.Owner

IsTruncated

Set to false if all results were returned.

Set to true if more keys are available to return.

If the number of results exceeds that specified by MaxKeys, all of the results might not be returned.

Type: Boolean

Ancestor: ListBucketResult

Key

The object’s key

Type: String

Ancestor: ListBucketResult.Contents

LastModified

Date and time the object was last modified

Type: Date

Ancestor: ListBucketResult.Contents

MaxKeys

The maximum number of keys returned in the response body

Type: String

Ancestor: ListBucketResult

Name

Name of the bucket

Type: String

Ancestor: ListBucketResult

Owner

Bucket owner

Type: String

Children: DisplayName, ID

Ancestor: ListBucketResult.Contents | CommonPrefixes

Prefix

Keys that begin with the indicated prefix

Type: String

Ancestor: ListBucketResult

Size

Size of the object (in bytes)

Type: String

Ancestor: ListBucketResult.Contents

StorageClass

STANDARD | STANDARD_IA | REDUCED_REDUNDANCY

Type: String

Ancestor: ListBucketResult.Contents

ContinuationToken

If ContinuationToken was sent with the request, it is included in the response.

Type: String

Ancestor: ListBucketResult

KeyCount

Returns the number of keys included in the response. The value is always less than or equal to the MaxKeys value.

Type: String

Ancestor: ListBucketResult

NextContinuationToken

If the response is truncated, Zenko returns this parameter with a continuation token. You can specify the token as the continuation-token in your next request to retrieve the next set of keys.

Type: String

Ancestor: ListBucketResult

StartAfter

If StartAfter was sent with the request, it is included in the response.

Type: String

Ancestor: ListBucketResult

Special Errors

This operation does not return special errors. For general information about the AWS errors Zenko uses, and a list of error codes, see Error Messages.

Examples
Listing Keys

This request returns the objects in BucketName. The request specifies the list-type parameter, which indicates version 2 of the API.

Request
GET /?list-type=2 HTTP/1.1
Host: bucket.s3.scality.com
x-amz-date: 20181108T233541Z
Authorization: authorization string
Content-Type: text/plain
Response
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>foob</Name>
  <Prefix/>
  <MaxKeys>1000</MaxKeys>
  <EncodingType>url</EncodingType>
  <IsTruncated>false</IsTruncated>
  <FetchOwner>undefined</FetchOwner>
  <Contents>
    <Key>fill-00</Key>
    <LastModified>2018-11-09T20:08:05.396Z</LastModified>
    <ETag>"f1c9645dbc14efddc7d8a322685f26eb"</ETag>
    <Size>10485760</Size>
    <StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
  ...
  </Contents>
</ListBucketResult>
Listing Keys Using the max-keys, prefix, and start-after Parameters

In addition to the list-type parameter that indicates version 2 of the API, the request also specifies additional parameters to retrieve up to three keys in the quotes bucket that start with E and occur lexicographically after ExampleGuide.pdf.

Request
GET /?list-type=2&max-keys=3&prefix=E&start-after=ExampleGuide.pdf HTTP/1.1
Host: quotes.s3.scality.com
x-amz-date: 20181108T232933Z
Authorization: authorization string
Response
HTTP/1.1 200 OK
x-amz-id-2: gyB+3jRPnrkN98ZajxHXr3u7EFM67bNgSAxexeEHndCX/7GRnfTXxReKUQF28IfP
x-amz-request-id: 3B3C7C725673C630
Date: Thu, 08 Nov 2018 23:29:37 GMT
Content-Type: application/xml
Content-Length: length
Connection: close
Server: ScalityS3

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>quotes</Name>
  <Prefix>E</Prefix>
  <StartAfter>ExampleGuide.pdf</StartAfter>
  <KeyCount>1</KeyCount>
  <MaxKeys>3</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>ExampleObject.txt</Key>
    <LastModified>2013-09-17T18:07:53.000Z</LastModified>
    <ETag>&quot;599bab3ed2c697f1d26842727561fd94&quot;</ETag>
    <Size>857</Size>
    <StorageClass>REDUCED_REDUNDANCY</StorageClass>
  </Contents>
</ListBucketResult>
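
The equivalent boto3 call, as a minimal sketch assuming the hypothetical endpoint used in the earlier sketches:

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # hypothetical endpoint

# Up to three keys starting with "E", lexicographically after ExampleGuide.pdf.
resp = s3.list_objects_v2(
    Bucket="quotes",
    MaxKeys=3,
    Prefix="E",
    StartAfter="ExampleGuide.pdf",
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])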
Listing Keys Using the prefix and delimiter Parameters
Request

This example illustrates the use of the prefix and the delimiter parameters in the request. This example assumes the following keys are in your bucket:

  • sample.jpg
  • photos/2006/January/sample.jpg
  • photos/2006/February/sample2.jpg
  • photos/2006/February/sample3.jpg
  • photos/2006/February/sample4.jpg

The following GET request specifies the delimiter parameter with value /.

GET /?list-type=2&delimiter=/ HTTP/1.1
Host: my-zenko.example.com
x-amz-date: 20181108T235931Z
Authorization: authorization string
Response

The sample.jpg key does not contain the delimiter character, and Zenko returns it in the Contents element in the response. However, all other keys contain the delimiter character. Zenko groups these keys and returns a single CommonPrefixes element with the prefix value photos/. The element is a substring that starts at the beginning of these keys and ends at the first occurrence of the specified delimiter.

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>example-bucket</Name>
  <Prefix></Prefix>
  <KeyCount>2</KeyCount>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>sample.jpg</Key>
    <LastModified>2017-02-26T01:56:20.000Z</LastModified>
    <ETag>&quot;bf1d737a4d46a19f3bced6905cc8b902&quot;</ETag>
    <Size>142863</Size>
    <StorageClass>STANDARD</StorageClass>
  </Contents>

   <CommonPrefixes>
     <Prefix>photos/</Prefix>
   </CommonPrefixes>
 </ListBucketResult>
Request

The following GET request specifies the delimiter parameter with value /, and the prefix parameter with value photos/2006/.

GET /?list-type=2&prefix=photos/2006/&delimiter=/ HTTP/1.1
Host: my-zenko.example.com
x-amz-date: 20181108T000433Z
Authorization: authorization string
Response

In response, Zenko returns only the keys that start with the specified prefix. Further, it uses the delimiter character to group keys that contain the same substring until the first occurrence of the delimiter character after the specified prefix. For each such key group Zenko returns one CommonPrefixes element in the response. The keys grouped under this CommonPrefixes element are not returned elsewhere in the response. The value returned in the CommonPrefixes element is a substring that starts at the beginning of the key and ends at the first occurrence of the specified delimiter after the prefix.

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>example-bucket</Name>
  <Prefix>photos/2006/</Prefix>
  <KeyCount>3</KeyCount>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>photos/2006/</Key>
    <LastModified>2016-04-30T23:51:29.000Z</LastModified>
    <ETag>&quot;d41d8cd98f00b204e9800998ecf8427e&quot;</ETag>
    <Size>0</Size>
    <StorageClass>STANDARD</StorageClass>
  </Contents>

  <CommonPrefixes>
    <Prefix>photos/2006/February/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>photos/2006/January/</Prefix>
  </CommonPrefixes>
</ListBucketResult>
Using a Continuation Token

In this example, the initial request returns more than 1000 keys. In response to this request, Zenko returns the IsTruncated element with the value set to true and with a NextContinuationToken element.

Request
GET /?list-type=2 HTTP/1.1
Host: my-zenko.example.com
Date: Thu, 08 Nov 2018 23:17:07 GMT
Authorization: authorization string
Response

The following is a sample response:

HTTP/1.1 200 OK
x-amz-id-2: gyB+3jRPnrkN98ZajxHXr3u7EFM67bNgSAxexeEHndCX/7GRnfTXxReKUQF28IfP
x-amz-request-id: 3B3C7C725673C630
Date: Thu, 08 Nov 2018 23:29:37 GMT
Content-Type: application/xml
Content-Length: length
Connection: close
Server: ScalityS3

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>bucket</Name>
  <Prefix></Prefix>
  <NextContinuationToken>1ueGcxLPRx1Tr/XYExHnhbYLgveDs2J/wm36Hy4vbOwM=</NextContinuationToken>
  <KeyCount>1000</KeyCount>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>true</IsTruncated>
  <Contents>
    <Key>happyface.jpg</Key>
    <LastModified>2014-11-21T19:40:05.000Z</LastModified>
    <ETag>&quot;70ee1738b6b21e2c8a43f3a5ab0eee71&quot;</ETag>
    <Size>11</Size>
    <StorageClass>STANDARD</StorageClass>
  </Contents>
   ...
</ListBucketResult>
Request

In the subsequent request, a continuation-token query parameter is included in the request with the <NextContinuationToken> value from the preceding response.

GET /?list-type=2&continuation-token=1ueGcxLPRx1Tr/XYExHnhbYLgveDs2J/wm36Hy4vbOwM= HTTP/1.1

Host: my-zenko.example.com
Date: Thu, 08 Nov 2018 23:17:07 GMT
Authorization: authorization string
Response

Zenko returns a list of the next set of keys starting where the previous request ended.

HTTP/1.1 200 OK
x-amz-id-2: gyB+3jRPnrkN98ZajxHXr3u7EFM67bNgSAxexeEHndCX/7GRnfTXxReKUQF28IfP
x-amz-request-id: 3B3C7C725673C630
Date: Thu, 08 Nov 2018 23:29:37 GMT
Content-Type: application/xml
Content-Length: length
Connection: close
Server: ScalityS3

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>bucket</Name>
  <Prefix></Prefix>
  <ContinuationToken>1ueGcxLPRx1Tr/XYExHnhbYLgveDs2J/wm36Hy4vbOwM=</ContinuationToken>
  <KeyCount>112</KeyCount>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>happyfacex.jpg</Key>
    <LastModified>2014-11-21T19:40:05.000Z</LastModified>
    <ETag>&quot;70ee1738b6b21e2c8a43f3a5ab0eee71&quot;</ETag>
    <Size>1111</Size>
    <StorageClass>STANDARD</StorageClass>
  </Contents>
   ...
</ListBucketResult>
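
A minimal sketch of this continuation loop in boto3 (hypothetical endpoint and bucket name); the SDK also provides a paginator that performs the same token exchange automatically.

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # hypothetical endpoint

kwargs = {"Bucket": "bucket"}
while True:
    resp = s3.list_objects_v2(**kwargs)
    for obj in resp.get("Contents", []):
        print(obj["Key"])
    if not resp.get("IsTruncated"):
        break
    # Feed NextContinuationToken back as continuation-token for the next page.
    kwargs["ContinuationToken"] = resp["NextContinuationToken"]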

DELETE Bucket

The DELETE Bucket operation deletes a named bucket only if the bucket is empty.

Note

Before a bucket can be deleted, all objects must be deleted from the bucket and all ongoing multipart uploads must be aborted.

Requests
Syntax
DELETE / HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}
Parameters

The DELETE Bucket operation does not use request parameters.

Headers

The DELETE Bucket operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The DELETE Bucket operation does not use request elements.

Responses
Headers

The DELETE Bucket operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The DELETE Bucket operation does not return response elements.

Examples
Deleting the “documentation” Bucket
Request
DELETE / HTTP/1.1
Host: documentation.s3.scality.com
Date: Tue, 21 Jun 2011 12:12:34 GMT
Authorization: AWS pat:BAupPCpkyeIGKH2s5Je4Bc32bc=
Response
HTTP/1.1 204 No Content
Date: Tue, 21 Jun 2011 12:12:34 GMT
Server: RestServer/1.0
Content-Type: application/octet-stream
Cache-Control: no-cache
Connection: close
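
Because the bucket must be empty, a deletion helper typically removes the remaining objects and aborts in-progress multipart uploads first. The following is a minimal sketch (hypothetical endpoint and bucket name) that assumes fewer than 1,000 objects and no versioning; larger or versioned buckets would need to paginate and delete object versions as well.

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # hypothetical endpoint
bucket = "documentation"

# Delete all objects (single listing page; paginate for larger buckets).
objs = s3.list_objects_v2(Bucket=bucket).get("Contents", [])
if objs:
    s3.delete_objects(Bucket=bucket,
                      Delete={"Objects": [{"Key": o["Key"]} for o in objs]})

# Abort any in-progress multipart uploads.
for up in s3.list_multipart_uploads(Bucket=bucket).get("Uploads", []):
    s3.abort_multipart_upload(Bucket=bucket, Key=up["Key"], UploadId=up["UploadId"])

s3.delete_bucket(Bucket=bucket)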

PUT Bucket Versioning

The PUT Bucket Versioning operation uses the versioning subresource to set the versioning state of an existing bucket. To set the versioning state, you must be the bucket owner.

You can set the versioning state with one of the following values:

  • Enabled — Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID.
  • Suspended — Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null.

If the versioning state has never been set on a bucket, it has no versioning state; a GET versioning request does not return a versioning state value.

Requests
Syntax
PUT / HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Content-Length: {{length}}
Date: {{date}}
Authorization: {{authenticationInformation}}

<VersioningConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
<Status>{{VersioningState}}</Status>
</VersioningConfiguration>

The Request Syntax illustrates only a portion of the request headers.

Parameters

The PUT Bucket Versioning operation does not use request parameters.

Headers

The PUT Bucket Versioning operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The PUT Bucket Versioning operation can request the following items:

Element Type Description
Status enum

Sets the versioning state of the bucket.

Valid Values: Suspended | Enabled

Ancestor: VersioningConfiguration

VersioningConfiguration container

Container for setting the versioning state

Children: Status

Ancestor: none

Responses
Headers

The PUT Bucket Versioning operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The PUT Bucket Versioning operation does not return response elements.

Examples
Enabling Versioning for a Specified Bucket
Request
PUT /?versioning HTTP/1.1
Host: bucket.s3.scality.com
Date: Wed, 01 Mar  2006 12:00:00 GMT
Authorization: {{authorization string}}
Content-Type: text/plain
Content-Length: 124

<VersioningConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
<Status>Enabled</Status>
</VersioningConfiguration>
Response
HTTP/1.1 200 OK
x-amz-id-2: YgIPIfBiKa2bj0KMg95r/0zo3emzU4dzsD4rcKCHQUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 236A8905248E5A01
Date: Wed, 01 Mar  2006 12:00:00 GMT
Suspending Versioning for a Specified Bucket
Request
PUT /?versioning HTTP/1.1
Host: bucket.s3.scality.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: {{authorization string}}
Content-Type: text/plain
Content-Length: 124

<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Status>Suspended</Status>
</VersioningConfiguration>
Response
HTTP/1.1 200 OK
x-amz-id-2: YgIPIfBiKa2bj0KMg95r/0zo3emzU4dzsD4rcKCHQUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 236A8905248E5A01
Date: Wed, 01 Mar  2006 12:00:00 GMT
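
In boto3, the same configuration is a single call; this minimal sketch assumes the hypothetical endpoint and bucket name used earlier.

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # hypothetical endpoint

# Status may be "Enabled" or "Suspended".
s3.put_bucket_versioning(
    Bucket="bucket",
    VersioningConfiguration={"Status": "Enabled"},
)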

GET Bucket Versioning

The GET Bucket Versioning operation uses the versioning subresource to return the versioning state of a bucket.

Note

Only the bucket owner can retrieve the versioning state of a bucket.

Depending on the bucket’s versioning state, the response body is one of the following:

  • Versioning is enabled on a bucket:

    <VersioningConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
      <Status>Enabled</Status>
    </VersioningConfiguration>

  • Versioning is suspended on a bucket:

    <VersioningConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
      <Status>Suspended</Status>
    </VersioningConfiguration>

  • Versioning has not been enabled (or suspended) on a bucket:

    <VersioningConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/"/>
Requests
Syntax
GET /?versioning HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Content-Length: {{length}}
Authorization: {{authenticationInformation}}
Parameters

The GET Bucket Versioning operation does not use request parameters.

Headers

The GET Bucket Versioning operation uses only request headers that are common to all operations (see Common Request Headers).

Elements

The GET Bucket Versioning operation does not use request elements.

Responses
Headers

The GET Bucket Versioning operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements
Element Type Description
Status enum

The versioning state of the bucket.

Valid Values: Suspended | Enabled

Ancestors: VersioningConfiguration

VersioningConfiguration Container Container for the status response element.
Examples

This example returns the versioning state of myBucket.

Request
GET /?versioning HTTP/1.1
Host: myBucket.s3.scality.com
Date: Tue, 13 Dec 2011 19:14:42 GMT
Authorization: {{authenticationInformation}}
Content-Type: text/plain
Response
<VersioningConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Status>Enabled</Status>
</VersioningConfiguration>
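
A boto3 sketch that distinguishes the three possible states (hypothetical endpoint and bucket name):

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # hypothetical endpoint

status = s3.get_bucket_versioning(Bucket="myBucket").get("Status")
# "Enabled", "Suspended", or None when versioning has never been configured.
print(status or "versioning has never been set on this bucket")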

GET Bucket Location

The GET Bucket Location operation uses the location subresource to return a bucket’s location. The bucket’s location is set using the LocationConstraint element of a PUT Bucket request. Refer to PUT Bucket.

Note

The possible options for a LocationConstraint are configured in the env_s3 setting of the S3 configuration.

Requests
Syntax
GET /?location HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}
Parameters

The GET Bucket Location operation does not use request parameters.

Headers

The GET Bucket Location operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The GET Bucket Location operation does not use request elements.

Responses
Headers

The GET Bucket Location operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The GET Bucket Location operation can return the following XML elements in the response:

Element Type Description
LocationConstraint String Specifies the location of the bucket. The LocationConstraint parameter is configured in the env_s3 setting of the S3 configuration.
Examples
Request
GET /?location HTTP/1.1
Host: myBucket.s3.scality.com
Date: Thu, 31 Mar 2016 15:11:47 GMT
Authorization: AWS pat:6nYhPMw6boadLgjywjSIyhfwRIA=
Response
<?xml version="1.0" encoding="UTF-8"?>
<LocationConstraint xmlns="http://s3.amazonaws.com/doc/2006-03-01/">EU</LocationConstraint>
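
The equivalent boto3 call, as a minimal sketch with a hypothetical endpoint and bucket name:

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # hypothetical endpoint

loc = s3.get_bucket_location(Bucket="myBucket")["LocationConstraint"]
# e.g. "EU"; AWS-style clients may report the default location as None.
print(loc)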

GET Bucket Object Versions

The GET Bucket Object Versions operation uses the versions subresource to list metadata about all of the versions of objects in a bucket. Request Parameters can also be used as selection criteria to return metadata about a subset of all the object versions. READ access to the bucket is necessary to use the GET Bucket Object Versions operation.

Note

A 200 OK response can contain valid or invalid XML. Make sure to design the application to parse the contents of the response and handle it appropriately.

Requests
Syntax
GET /?versions HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}
Parameters

The GET Bucket Object Versions operation can use the following optional parameters to return a subset of objects in a bucket:

Parameter Type Description
delimiter string

Character used to group keys

All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If the prefix is not specified, the substring starts at the beginning of the key. The keys grouped under the CommonPrefixes result element are not returned elsewhere in the response.

encoding-type string Encodes keys with the method specified. Since XML 1.0 parsers cannot parse certain characters that may be included in an object key, the keys in the response can be encoded to ensure they are legible. Encoding is not set by default. The only valid value is url.
key-marker string Specifies the key in the bucket to start listing from. Also, refer to version-id-marker.
max-keys string

Sets the maximum number of keys returned in the response body. The response might contain fewer keys, but will never contain more. If additional keys satisfy the search criteria but were not returned because max-keys was exceeded, the response contains <IsTruncated>true</IsTruncated>. To return the additional keys, refer to key-marker and version-id-marker.

Default: 1000

prefix string Use this parameter to select only keys that begin with the specified prefix. Use prefixes to separate a bucket into different groupings of keys. (Use prefix to make groups in the same way a folder is used in a file system.) Use prefix with delimiter to roll up numerous objects into a single result under CommonPrefixes.
version-id-marker string

Specifies the object version to start listing from. Also, refer to key-marker.

Valid Values: Valid version ID | Default

Constraint: May not be an empty string

Headers

The GET Bucket Object Versions operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The GET Bucket Object Versions operation does not use request elements.

Responses
Headers

The GET Bucket Object Versions operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The GET Bucket Object Versions operation can return the following XML elements in the response:

Element Type Description
DeleteMarker container

Container for an object that is a delete marker

Children: Key, VersionId, IsLatest, LastModified, Owner

Ancestor: ListVersionsResult

DisplayName string

Object owner’s name

Ancestor: ListVersionsResult.Version.Owner | ListVersionsResult.DeleteMarker.Owner

Encoding-Type string

Encoding type used by Zenko to encode object key names in the XML response.

If the encoding-type request parameter is specified, Zenko includes this element in the response, and returns encoded key name values in the following response elements:

KeyMarker, NextKeyMarker, Prefix, Key, and Delimiter.

ETag string

The entity tag is an MD5 hash of the object. The ETag reflects changes only to the contents of an object, not its metadata.

Ancestor: ListVersionsResult.Version

ID string

Object owner’s ID

Ancestor: ListVersionsResult.Version.Owner | ListVersionsResult.DeleteMarker.Owner

IsLatest Boolean Specifies whether the object is (true) or is not (false) the current version of an object
IsTruncated Boolean

Indicates whether (true) or not (false) all results matching the search criteria were returned. Not all of the results may be returned if the number of results exceeds that specified by MaxKeys. If the results were truncated, a follow-up paginated request can use the NextKeyMarker and NextVersionIdMarker response elements as the starting place in a subsequent request to return the rest of the results.

Ancestor: ListVersionsResult

Key string

The object’s key

Ancestor: ListVersionsResult.Version | ListVersionsResult.DeleteMarker

KeyMarker string

Marks the last key returned in a truncated response.

Ancestor: ListVersionsResult

LastModified date

Date and time the object was last modified

Ancestor: ListVersionsResult.Version | ListVersionsResult.DeleteMarker

ListVersionsResult container Container of the result
MaxKeys string

The maximum number of objects to return

Default: 1000

Ancestor: ListVersionsResult

Name string Name of the bucket
NextKeyMarker string When the number of responses exceeds the value of MaxKeys, NextKeyMarker specifies the first key not returned that satisfies the search criteria. Use this value for the key-marker request parameter in a subsequent request.
NextVersionIdMarker string

When the number of responses exceeds the value of MaxKeys, NextVersionIdMarker specifies the first object version not returned that satisfies the search criteria. Use this value for the version-id-marker request parameter in a subsequent request.

Ancestor: ListVersionsResult

Owner string Bucket owner
Prefix string Selects objects that start with the value supplied by this parameter.
Size string Size of the object, in bytes
StorageClass string Always STANDARD
Version container Container of version information
VersionId string Version ID of an object
VersionIdMarker string Marks the last version of the key returned in a truncated response
Examples
Getting All Versions of All Objects in a Specific Bucket
Request
GET /?versions HTTP/1.1
Host: BucketName.s3.scality.com
Date: Thu, 31 Mar 2016 15:11:47 GMT
Authorization: AWS pat:6nYhPMw6boadLgjywjSIyhfwRIA=
Response
<?xml version="1.0" encoding="UTF-8"?>
<ListVersionsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
    <Name>bucket</Name>
    <Prefix>my</Prefix>
    <KeyMarker/>
    <VersionIdMarker/>
    <MaxKeys>5</MaxKeys>
    <IsTruncated>false</IsTruncated>
    <Version>
        <Key>my-image.jpg</Key>
        <VersionId>3/L4kqtJl40Nr8X8gdRQBpUMLUo</VersionId>
        <IsLatest>true</IsLatest>
         <LastModified>2009-10-12T17:50:30.000Z</LastModified>
        <ETag>&quot;fba9dede5f27731c9771645a39863328&quot;</ETag>
        <Size>434234</Size>
        <StorageClass>STANDARD</StorageClass>
        <Owner>
            <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
            <DisplayName>mtd@amazon.com</DisplayName>
        </Owner>
    </Version>
    <DeleteMarker>
        <Key>my-second-image.jpg</Key>
        <VersionId>03jpff543dhffds434rfdsFDN943fdsFkdmqnh892</VersionId>
        <IsLatest>true</IsLatest>
        <LastModified>2009-11-12T17:50:30.000Z</LastModified>
        <Owner>
            <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
            <DisplayName>mtd@amazon.com</DisplayName>
        </Owner>
    </DeleteMarker>
    <Version>
        <Key>my-second-image.jpg</Key>
        <VersionId>QUpfdndhfd8438MNFDN93jdnJFkdmqnh893</VersionId>
        <IsLatest>false</IsLatest>
        <LastModified>2009-10-10T17:50:30.000Z</LastModified>
        <ETag>&quot;9b2cf535f27731c974343645a3985328&quot;</ETag>
        <Size>166434</Size>
        <StorageClass>STANDARD</StorageClass>
        <Owner>
            <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
            <DisplayName>mtd@amazon.com</DisplayName>
        </Owner>
     </Version>
</ListVersionsResult>
Getting Objects in the Order They Were Stored

The following GET request returns objects starting with the key specified by key-marker, most recently stored first.

Request
GET /?versions&key-marker=key2 HTTP/1.1
Host: demo.s3.scality.com
Pragma: no-cache
Accept: */*
Date: Tue, 28 Jun 2011 09:27:15 GMT
Authorization: AWS pat:0YPPNCCa9yAbKOFdlLD/ixMLayg=
Response
<?xml version="1.0" encoding="UTF-8"?>
<ListVersionsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Name>mtp-versioning-fresh</Name>
  <Prefix/>
  <KeyMarker>key2</KeyMarker>
  <VersionIdMarker/>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <Version>
    <Key>key3</Key>
    <VersionId>I5VhmK6CDDdQ5Pwfe1gcHZWmHDpcv7gfmfc29UBxsKU.</VersionId>
    <IsLatest>true</IsLatest>
    <LastModified>2009-12-09T00:19:04.000Z</LastModified>
    <ETag>&quot;396fefef536d5ce46c7537ecf978a360&quot;</ETag>
    <Size>217</Size>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Version>
  <DeleteMarker>
    <Key>sourcekey</Key>
    <VersionId>qDhprLU80sAlCFLu2DWgXAEDgKzWarn-HS_JU0TvYqs.</VersionId>
    <IsLatest>true</IsLatest>
    <LastModified>2009-12-10T16:38:11.000Z</LastModified>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
  </DeleteMarker>
  <Version>
    <Key>sourcekey</Key>
    <VersionId>wxxQ7ezLaL5JN2Sislq66Syxxo0k7uHTUpb9qiiMxNg.</VersionId>
    <IsLatest>false</IsLatest>
    <LastModified>2009-12-10T16:37:44.000Z</LastModified>
    <ETag>&quot;396fefef536d5ce46c7537ecf978a360&quot;</ETag>
    <Size>217</Size>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Version>
</ListVersionsResult>
Using prefix

The following GET request returns objects whose keys begin with source.

Request
GET /?versions&prefix=source HTTP/1.1
Host: bucket.s3.scality.com
Date: Wed, 01 Mar  2006 12:00:00 GMT
Authorization: {{authorizationString}}
Response
<?xml version="1.0" encoding="UTF-8"?>
<ListVersionsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Name>mtp-versioning-fresh</Name>
  <Prefix>source</Prefix>
  <KeyMarker/>
  <VersionIdMarker/>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <DeleteMarker>
    <Key>sourcekey</Key>
    <VersionId>qDhprLU80sAlCFLu2DWgXAEDgKzWarn-HS_JU0TvYqs.</VersionId>
    <IsLatest>true</IsLatest>
    <LastModified>2009-12-10T16:38:11.000Z</LastModified>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
  </DeleteMarker>
  <Version>
    <Key>sourcekey</Key>
    <VersionId>wxxQ7ezLaL5JN2Sislq66Syxxo0k7uHTUpb9qiiMxNg.</VersionId>
    <IsLatest>false</IsLatest>
    <LastModified>2009-12-10T16:37:44.000Z</LastModified>
    <ETag>&quot;396fefef536d5ce46c7537ecf978a360&quot;</ETag>
    <Size>217</Size>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Version>
</ListVersionsResult>
Using key-marker and version-id-marker

The following GET request returns objects starting at the specified key (key-marker) and version ID (version-id-marker).

Request
GET /?versions&key-marker=key3&version-id-marker=t46ZenlYTZBnj HTTP/1.1
Host: bucket.s3.scality.com
Date: Wed, 01 Mar  2006 12:00:00 GMT
Authorization: {{authorizationString}}
Response
<?xml version="1.0" encoding="UTF-8"?>
<ListVersionsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Name>mtp-versioning-fresh</Name>
  <Prefix/>
  <KeyMarker>key3</KeyMarker>
  <VersionIdMarker>t46ZenlYTZBnj</VersionIdMarker>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
  <DeleteMarker>
    <Key>sourcekey</Key>
    <VersionId>qDhprLU80sAlCFLu2DWgXAEDgKzWarn-HS_JU0TvYqs.</VersionId>
    <IsLatest>true</IsLatest>
    <LastModified>2009-12-10T16:38:11.000Z</LastModified>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
  </DeleteMarker>
  <Version>
    <Key>sourcekey</Key>
    <VersionId>wxxQ7ezLaL5JN2Sislq66Syxxo0k7uHTUpb9qiiMxNg.</VersionId>
    <IsLatest>false</IsLatest>
    <LastModified>2009-12-10T16:37:44.000Z</LastModified>
    <ETag>&quot;396fefef536d5ce46c7537ecf978a360&quot;</ETag>
    <Size>217</Size>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Version>
</ListVersionsResult>
Using key-marker, version-id-marker, and max-keys

The following GET request returns up to two (the value of max-keys) objects starting with the key specified by key-marker and the version ID specified by version-id-marker.

Request
GET /?versions&key-marker=key3&version-id-marker=t46Z0menlYTZBnj&max-keys=2 HTTP/1.1
Host: bucket.s3.scality.com
Date: Wed, 28 Oct 2009 22:32:00 +0000
Authorization: authorization string
Response
<?xml version="1.0" encoding="UTF-8"?>
<ListVersionsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Name>mtp-versioning-fresh</Name>
  <Prefix/>
  <KeyMarker>key3</KeyMarker>
  <VersionIdMarker>null</VersionIdMarker>
  <NextKeyMarker>key3</NextKeyMarker>
  <NextVersionIdMarker>d-d309mfjFrUmoQ0DBsVqmcMV15OI.</NextVersionIdMarker>
  <MaxKeys>2</MaxKeys>
  <IsTruncated>true</IsTruncated>
  <Version>
    <Key>key3</Key>
    <VersionId>8XECiENpj8pydEDJdd-_VRrvaGKAHOaGMNW7tg6UViI.</VersionId>
    <IsLatest>false</IsLatest>
    <LastModified>2009-12-09T00:18:23.000Z</LastModified>
    <ETag>&quot;396fefef536d5ce46c7537ecf978a360&quot;</ETag>
    <Size>217</Size>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Version>
  <Version>
    <Key>key3</Key>
    <VersionId>d-d309mfjFri40QYukDozqBt3UmoQ0DBsVqmcMV15OI.</VersionId>
    <IsLatest>false</IsLatest>
    <LastModified>2009-12-09T00:18:08.000Z</LastModified>
    <ETag>&quot;396fefef536d5ce46c7537ecf978a360&quot;</ETag>
    <Size>217</Size>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Version>
</ListVersionsResult>
Using the delimiter and prefix Parameters

Assume the following keys are in the bucket, example-bucket:

  • photos/2006/January/sample.jpg
  • photos/2006/February/sample.jpg
  • photos/2006/March/sample.jpg
  • videos/2006/March/sample.wmv
  • sample.jpg

The following GET request specifies the delimiter parameter with value “/”.

Request
GET /?versions&delimiter=/ HTTP/1.1
Host: example-bucket.s3.scality.com
Date: Wed, 02 Feb 2011 20:34:56 GMT
Authorization: authorization string
Response

The response returns the sample.jpg key in a <Version> element. However, because all the other keys contain the specified delimiter, Zenko returns a distinct substring from each of these keys (from the beginning of the key to the first occurrence of the delimiter) in a <CommonPrefixes> element. The key substrings in the <CommonPrefixes> element, photos/ and videos/, indicate that there are one or more keys with these key prefixes.

This is a useful scenario if key prefixes are used for the objects to create a logical folder-like structure. In this case the result can be interpreted as the folders photos/ and videos/ having one or more objects.

<ListVersionsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Name>mvbucketwithversionon1</Name>
  <Prefix></Prefix>
  <KeyMarker></KeyMarker>
  <VersionIdMarker></VersionIdMarker>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>

  <Version>
    <Key>sample.jpg</Key>
    <VersionId>toxMzQlBsGyGCz1YuMWMp90cdXLzqOCH</VersionId>
    <IsLatest>true</IsLatest>
    <LastModified>2011-02-02T18:46:20.000Z</LastModified>
    <ETag>&quot;3305f2cfc46c0f04559748bb039d69ae&quot;</ETag>
    <Size>3191</Size>
    <Owner>
      <ID>852b113e7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc</ID>
      <DisplayName>display-name</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Version>

  <CommonPrefixes>
    <Prefix>photos/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>videos/</Prefix>
  </CommonPrefixes>
</ListVersionsResult>
Request

In addition to the delimiter parameter, you can filter results by adding a prefix parameter, as shown in the following request:

GET /?versions&prefix=photos/2006/&delimiter=/ HTTP/1.1
Host: example-bucket.s3.scality.com
Date: Wed, 02 Feb 2011 19:34:02 GMT
Authorization: authorization string
Response

In this case, the response includes only object keys that start with the specified prefix. The value returned in the <CommonPrefixes> element is a substring from the beginning of the key to the first occurrence of the specified delimiter after the prefix.

<?xml version="1.0" encoding="UTF-8"?>
<ListVersionsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Name>example-bucket</Name>
  <Prefix>photos/2006/</Prefix>
  <KeyMarker></KeyMarker>
  <VersionIdMarker></VersionIdMarker>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <Version>
    <Key>photos/2006/</Key>
    <VersionId>3U275dAA4gz8ZOqOPHtJCUOi60krpCdy</VersionId>
    <IsLatest>true</IsLatest>
    <LastModified>2011-02-02T18:47:27.000Z</LastModified>
    <ETag>&quot;d41d8cd98f00b204e9800998ecf8427e&quot;</ETag>
    <Size>0</Size>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
      <DisplayName>display-name</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Version>
  <CommonPrefixes>
    <Prefix>photos/2006/February/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>photos/2006/January/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>photos/2006/March/</Prefix>
  </CommonPrefixes>
</ListVersionsResult>
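
The same delimiter and prefix listing can be issued through any AWS-compatible SDK. The following is a minimal sketch using Python and boto3; the endpoint URL and credentials are placeholders, not values defined elsewhere in this guide.

import boto3

# Placeholder endpoint and credentials; substitute your Zenko values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://zenko.example.com",
    aws_access_key_id="{{accessKey}}",
    aws_secret_access_key="{{secretKey}}",
)

# Equivalent of GET /?versions&prefix=photos/2006/&delimiter=/
resp = s3.list_object_versions(
    Bucket="example-bucket",
    Prefix="photos/2006/",
    Delimiter="/",
)
for version in resp.get("Versions", []):
    print(version["Key"], version["VersionId"], version["IsLatest"])
for cp in resp.get("CommonPrefixes", []):
    print("common prefix:", cp["Prefix"])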

HEAD Bucket

The HEAD Bucket operation is used to determine whether a bucket exists and can be accessed.

HEAD Bucket returns 200 OK if the bucket is in the system and accessible, otherwise the operation can return such responses as 404 Not Found or 403 Forbidden.

Requests
Syntax
HEAD / HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The HEAD Bucket operation does not use request parameters.

Headers

The HEAD Bucket operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The HEAD Bucket operation does not use request elements.

Responses
Headers

The HEAD Bucket operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The HEAD Bucket operation does not return response elements.

Examples
Determining the Status of a Particular Bucket
Request
HEAD / HTTP/1.1
Date: Fri, 10 Feb 2012 21:34:55 GMT
Authorization: {{authorizationString}}
Host: {{bucketname}}.s3.scality.com
Connection: Keep-Alive
Response
HTTP/1.1 200 OK
x-amz-id-2: JuKZqmXuiwFeDQxhD7M8KtsKobSzWA1QEjLbTMTagkKdBX2z7Il/jGhDeJ3j6s80
x-amz-request-id: 32FE2CEB32F5EE25
Date: Fri, 10 Feb 2012 21:34:56 GMT
Server: ScalityS3
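
Because HEAD Bucket signals existence and access through the HTTP status code alone, a client typically inspects the status of a failed call. The following is a minimal sketch using Python and boto3 (placeholder endpoint):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

def bucket_status(bucket):
    """Return 200 if the bucket exists and is accessible, else 403 or 404."""
    try:
        s3.head_bucket(Bucket=bucket)
        return 200
    except ClientError as err:
        # botocore exposes the HTTP status of the failed HEAD request.
        return err.response["ResponseMetadata"]["HTTPStatusCode"]

print(bucket_status("example-bucket"))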

List Multipart Uploads

The List Multipart Uploads operation catalogs in-progress multipart uploads (multipart uploads that have been initiated via the Initiate Multipart Upload request, but that have not yet been completed or aborted).

List Multipart Uploads returns at most 1,000 multipart uploads in the response (1,000 is also the default number of uploads a response can include, adjustable via the max-uploads request parameter). If additional multipart uploads satisfy the list criteria, the response returns an IsTruncated element with the value true. To list the additional multipart uploads, use the key-marker and upload-id-marker request parameters.

In the response, the uploads are sorted by key. If the application has initiated more than one multipart upload using the same object key, the uploads for that key are further sorted in ascending order by upload initiation time.

Requests
Syntax
GET /?uploads HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The List Multipart Uploads operation’s implementation of GET uses certain parameters to return a subset of the ongoing multipart uploads in a bucket.

Parameter Type Description
delimiter string

Character used to group keys.

All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If prefix is not specified, the substring starts at the beginning of the key. The keys grouped under the CommonPrefixes result element are not returned elsewhere in the response.

encoding-type string

Requests that Zenko encode the response and specifies the encoding method to use.

An object key can contain any Unicode character; however, an XML 1.0 parser cannot parse some characters, such as characters with an ASCII value from 0 to 10. For characters that are not supported in XML 1.0, add this parameter to request that Zenko encode the keys in the response.

Note: The only valid value for the encoding-type parameter is url.

Default: None

max-uploads integer

Sets the maximum number of multipart uploads, from 1 to 1,000, to return in the response body.

1,000 is the maximum number of uploads that can be returned in a response.

Default: 1,000

key-marker string

Together with upload-id-marker, the key-marker parameter specifies the multipart upload after which listing should begin.

If upload-id-marker is not specified, only the keys lexicographically greater than the specified key-marker are included in the list. If upload-id-marker is specified, any multipart uploads for a key equal to the key-marker might also be included, provided those multipart uploads have upload IDs lexicographically greater than the specified upload-id-marker.

prefix string Lists in-progress uploads only for those keys that begin with the specified prefix. This parameter can be used to separate a bucket into different grouping of keys. To illustrate, prefixes can be used to make groups, similar to the manner in which a folder is used in a file system.
upload-id-marker string Together with key-marker, specifies the multipart upload after which listing should begin. If key-marker is not specified, the parameter is ignored. Otherwise, any multipart uploads for a key equal to the key-marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload-id-marker.
Headers

The List Multipart Uploads operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The List Multipart Uploads operation does not use request elements.

Responses
Headers

List Multipart Uploads uses only the common response headers supported by Zenko (refer to Common Response Headers).

Elements

The List Multipart Uploads operation can return the following XML elements in its response (includes XML containers):

Element Type Description
ListMultipartUploadsResult container Container for the response
Bucket string Name of the bucket to which the multipart upload was initiated
KeyMarker string The key at or after which the listing began
UploadIdMarker string Upload ID after which listing began
NextKeyMarker string When a list is truncated, NextKeyMarker specifies the value that should be used for the key-marker request parameter in a subsequent request.
NextUploadIdMarker string When a list is truncated, NextUploadIdMarker specifies the value that should be used for the upload-id-marker request parameter in a subsequent request.
Encoding-Type string

Encoding type used by Zenko to encode object key names in the XML response.

If the encoding-type request parameter is specified, Zenko includes this element in the response, and returns encoded key name values in the following elements: Delimiter, KeyMarker, Prefix, NextKeyMarker, and Key.

MaxUploads integer Maximum number of multipart uploads that could have been included in the response
IsTruncated Boolean

Indicates whether the returned list of multipart uploads is truncated.

A true value indicates that the list was truncated. A list can be truncated if the number of multipart uploads exceeds the limit returned in the MaxUploads element.

Upload container Container for elements related to a particular multipart upload. A response can contain zero or more Upload elements.
Key string Key of the object for which the multipart upload was initiated
UploadId string Upload ID that identifies the multipart upload
Initiator container

Identifies the party that initiated the multipart upload

ID: User ID of the initiator

DisplayName: Name of the party initiating the request

Owner container

Container element that identifies the object owner, after the object is created

ID: Object owner User ID

DisplayName: Name of object owner

Initiated date Date and time the multipart upload was initiated
ListMultipartUploadsResult.Prefix string When a prefix is provided in the request, this field contains the specified prefix. The result contains only keys starting with the specified prefix.
Delimiter string

Contains the delimiter specified in the request

If a delimiter is not specified in the request, this element is absent from the response.

CommonPrefixes container If a delimiter is specified in the request, then the result returns each distinct key prefix containing the delimiter in a CommonPrefixes element. The distinct key prefixes are returned in the Prefix child element.
CommonPrefixes.Prefix string

If the request does not include the Prefix parameter, then CommonPrefixes.Prefix shows only the substring of the key that precedes the first occurrence of the delimiter character. These keys are not returned anywhere else in the response.

If the request includes the Prefix parameter, CommonPrefixes.Prefix shows the substring of the key from the beginning to the first occurrence of the delimiter after the prefix.

Examples
List Multipart Uploads
Request

The request sample lists three multipart uploads, specifying the max-uploads request parameter to set the maximum number of multipart uploads to return in the response body.

GET /?uploads&amp;max-uploads=3 HTTP/1.1
Host: example-bucket.{{StorageService}}.com
Date: Mon, 1 Nov 2010 20:34:56 GMT
Authorization: {{authorizationString}}

The response that follows indicates that the multipart upload list was truncated and provides the NextKeyMarker and the NextUploadIdMarker elements. These values are specified in subsequent requests to read the next set of multipart uploads. That is, send a subsequent request specifying key-marker=my-movie.m2ts (value of the NextKeyMarker element) and upload-id-marker=YW55IGlkZWEgd2h5IGVsdmluZydzIHVwbG9hZCBmYWlsZWQ (value of the NextUploadIdMarker element).

Response

The sample response shows two multipart uploads in progress with the same key (my-movie.m2ts). The uploads are sorted by key, and within each key they are sorted in ascending order by the time the multipart upload was initiated.

HTTP/1.1 200 OK
x-amz-id-2: Uuag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
Date: Mon, 1 Nov 2010 20:34:56 GMT
Content-Length: 1330
Connection: keep-alive
Server: ScalityS3

<?xml version="1.0" encoding="UTF-8"?>
<ListMultipartUploadsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Bucket>bucket</Bucket>
  <KeyMarker></KeyMarker>
  <UploadIdMarker></UploadIdMarker>
  <NextKeyMarker>my-movie.m2ts</NextKeyMarker>
  <NextUploadIdMarker>YW55IGlkZWEgd2h5IGVsdmluZydzIHVwbG9hZCBmYWlsZWQ</NextUploadIdMarker>
  <MaxUploads>3</MaxUploads>
  <IsTruncated>true</IsTruncated>
  <Upload>
    <Key>my-divisor</Key>
    <UploadId>XMgbGlrZSBlbHZpbmcncyBub3QgaGF2aW5nIG11Y2ggbHVjaw</UploadId>
    <Initiator>
      <ID>arn:aws:iam::111122223333:user/user1-11111a31-17b5-4fb7-9df5-b111111f13de</ID>
      <DisplayName>user1-11111a31-17b5-4fb7-9df5-b111111f13de</DisplayName>
    </Initiator>
    <Owner>
      <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
      <DisplayName>OwnerDisplayName</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
    <Initiated>2010-11-10T20:48:33.000Z</Initiated>
  </Upload>
  <Upload>
    <Key>my-movie.m2ts</Key>
    <UploadId>VXBsb2FkIElEIGZvciBlbHZpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZA</UploadId>
    <Initiator>
      <ID>b1d16700c70b0b05597d7acd6a3f92be</ID>
      <DisplayName>InitiatorDisplayName</DisplayName>
    </Initiator>
    <Owner>
      <ID>b1d16700c70b0b05597d7acd6a3f92be</ID>
      <DisplayName>OwnerDisplayName</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
    <Initiated>2010-11-10T20:48:33.000Z</Initiated>
  </Upload>
  <Upload>
    <Key>my-movie.m2ts</Key>
    <UploadId>YW55IGlkZWEgd2h5IGVsdmluZydzIHVwbG9hZCBmYWlsZWQ</UploadId>
    <Initiator>
      <ID>arn:aws:iam::444455556666:user/user1-22222a31-17b5-4fb7-9df5-b222222f13de</ID>
      <DisplayName>user1-22222a31-17b5-4fb7-9df5-b222222f13de</DisplayName>
    </Initiator>
    <Owner>
      <ID>b1d16700c70b0b05597d7acd6a3f92be</ID>
      <DisplayName>OwnerDisplayName</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
    <Initiated>2010-11-10T20:49:33.000Z</Initiated>
  </Upload>
</ListMultipartUploadsResult>
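
The marker-based pagination described above can be driven from an SDK. The following is a minimal boto3 sketch (placeholder endpoint); delimiter and prefix can be passed the same way via the Delimiter and Prefix keyword arguments.

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

kwargs = {"Bucket": "example-bucket", "MaxUploads": 3}
while True:
    resp = s3.list_multipart_uploads(**kwargs)
    for upload in resp.get("Uploads", []):
        print(upload["Key"], upload["UploadId"], upload["Initiated"])
    if not resp.get("IsTruncated"):
        break
    # Feed the markers back in to read the next set of uploads.
    kwargs["KeyMarker"] = resp["NextKeyMarker"]
    kwargs["UploadIdMarker"] = resp["NextUploadIdMarker"]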
Using the Delimiter and the Prefix Parameters

Assume a multipart upload is in progress for the following keys in the bucket, example-bucket.

  • greatshot.raw
  • photographs/2006/January/greatshot.raw
  • photographs/2006/February/greatshot.raw
  • photographs/2006/March/greatshot.raw
  • video_content/2006/March/greatvideo.raw
Request

The sample list multipart upload request specifies the delimiter parameter with value “/”.

GET /?uploads&amp;delimiter=/ HTTP/1.1
Host: example-bucket.s3.scality.com
Date: Mon, 1 Nov 2010 20:34:56 GMT
Authorization: {{authorizationString}}
Response

The response sample lists multipart uploads on the specified bucket, example-bucket.

The response returns the multipart upload for the greatshot.raw key in an Upload element. Because all the other keys contain the specified delimiter, however, Zenko returns a distinct substring from each of these keys (from the beginning of the key to the first occurrence of the delimiter) in a CommonPrefixes element. The key substrings photographs/ and video_content/ in the CommonPrefixes element indicate that there are one or more in-progress multipart uploads with these key prefixes.

This is a useful scenario if key prefixes are used for objects to create a logical folder-like structure. In this case the result can be interpreted as the folders photographs/ and video_content/ having one or more multipart uploads in progress.

<ListMultipartUploadsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Bucket>example-bucket</Bucket>
  <KeyMarker/>
  <UploadIdMarker/>
  <NextKeyMarker>greatshot.raw</NextKeyMarker>
  <NextUploadIdMarker>Xgw4MJT6ZPAVxpY0SAuGN7q4uWJJM22ZYg1W99trdp4tpO88.PT6.MhO0w2E17eutfAvQfQWoajgE_W2gpcxQw--</NextUploadIdMarker>
  <Delimiter>/</Delimiter>
  <Prefix/>
  <MaxUploads>1000</MaxUploads>
  <IsTruncated>false</IsTruncated>
  <Upload>
    <Key>greatshot.raw</Key>
    <UploadId>Agw4MJT6ZPAVxpY0SAuGN7q4uWJJM22ZYg1N99trdp4tpO88.PT6.MhO0w2E17eutfAvQfQWoajgE_W2gpcxQw--</UploadId>
    <Initiator>
      <ID>314133b66967d86f031c7249d1d9a80249109428335cd0ef1cdc487b4566cb1b</ID>
      <DisplayName>s3-nickname</DisplayName>
    </Initiator>
    <Owner>
      <ID>314133b66967d86f031c7249d1d9a80249109428335cd0ef1cdc487b4566cb1b</ID>
      <DisplayName>s3-nickname</DisplayName>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
    <Initiated>2010-11-26T19:24:17.000Z</Initiated>
  </Upload>
  <CommonPrefixes>
    <Prefix>photographs/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>video_content/</Prefix>
  </CommonPrefixes>
</ListMultipartUploadsResult>
Request

In addition to the delimiter parameter, results can be filtered by adding a prefix parameter.

GET /?uploads&amp;delimiter=/&amp;prefix=photographs/2006/ HTTP/1.1
Host: example-bucket.s3.scality.com
Date: Mon, 1 Nov 2010 20:34:56 GMT
Authorization: authorization string
Response

In this case, the response includes only multipart uploads for keys that start with the specified prefix. The value returned in the CommonPrefixes element is a substring from the beginning of the key to the first occurrence of the specified delimiter after the prefix.

<?xml version="1.0" encoding="UTF-8"?>
<ListMultipartUploadsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Bucket>example-bucket</Bucket>
  <KeyMarker/>
  <UploadIdMarker/>
  <NextKeyMarker/>
  <NextUploadIdMarker/>
  <Delimiter>/</Delimiter>
  <Prefix>photographs/2006/</Prefix>
  <MaxUploads>1000</MaxUploads>
  <IsTruncated>false</IsTruncated>
  <CommonPrefixes>
    <Prefix>photographs/2006/February/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>photographs/2006/January/</Prefix>
  </CommonPrefixes>
  <CommonPrefixes>
    <Prefix>photographs/2006/March/</Prefix>
  </CommonPrefixes>
</ListMultipartUploadsResult>

PUT Bucket Replication

In a versioning-enabled bucket, the PUT Bucket Replication operation creates a new replication configuration or replaces an existing one.

Requests
Syntax
PUT /?replication HTTP/1.1
Host: bucketname.s3.amazonaws.com
Content-Length: length
Date: date
Authorization: authorization string (see Authenticating Requests (AWS Signature Version 4))
Content-MD5: MD5

Replication configuration XML in the body
Parameters

The PUT Bucket Replication operation does not use request parameters.

Headers
Name Type Description Required
Content-MD5 String

The base64-encoded 128-bit MD5 digest of the data; must be used as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.

Default: None

Yes
Request Body

The replication configuration is specified in the request body. The configuration includes one or more rules, each providing information such as the key name prefix identifying the objects to replicate (an empty prefix indicates all objects), the rule status, and details about the destination.

The destination details include the bucket in which to store replicas and optional storage classes to use to store the replicas.

Zenko only acts on rules with an Enabled status. Zenko does not support IAM roles; instead, Zenko pre-creates service accounts, one for each service (Replication, Lifecycle, Ingestion, Garbage Collection, Metadata Search). Each service uses keys generated for its own account to execute an operation.

<ReplicationConfiguration>
    <Role>IAM-role-ARN</Role>
    <Rule>
        <ID>Rule-1</ID>
        <Status>rule-status</Status>
        <Prefix>key-prefix</Prefix>
        <Destination>
           <Bucket>arn:aws:s3:::bucket-name</Bucket>
           <StorageClass>optional-destination-storage-class-override</StorageClass>
        </Destination>
    </Rule>
    <Rule>
        <ID>Rule-2</ID>
         ...
    </Rule>
     ...
</ReplicationConfiguration>

The following table describes the XML elements in the replication configuration:

Name Type Description Required
ReplicationConfiguration Container

Container for replication rules. Up to 1,000 rules can be added. Total replication configuration size can be up to 2 MB.

Children: Rule

Ancestor: None

Yes
Role String

Amazon Resource Name (ARN) of an IAM role for Zenko to assume when replicating the objects.

Type: String

Ancestor: Rule

Yes
Rule Container

Container for information about a particular replication rule. Replication configuration must have at least one rule and can contain up to 1,000 rules.

Ancestor: ReplicationConfiguration

Yes
ID String

Unique identifier for the rule. The value cannot be longer than 255 characters.

Ancestor: Rule

No
Status String

The rule is ignored if status is not Enabled.

Ancestor: Rule

Valid Values: Enabled, Disabled

Yes
Prefix String

Object keyname prefix identifying one or more objects to which the rule applies.

Maximum prefix length can be up to 1,024 characters. Overlapping prefixes are not supported.

Ancestor: Rule

Yes
Destination Container

Container for destination information.

Ancestor: Rule

Yes
Bucket String

Amazon Resource Name (ARN) of the bucket where Zenko is to store replicas of the object identified by the rule.

If there are multiple rules in the replication configuration, all these rules must specify the same bucket as the destination. That is, replication configuration can replicate objects only to one destination bucket.

Ancestor: Destination

Yes
StorageClass String

Optional destination storage class override to use when replicating objects. If this element is not specified, Zenko uses the storage class of the source object to create object replica.

Zenko reinterprets this S3 call not as a service quality directive, but as a service locator. In other words, where Amazon S3 uses this directive to define a location by quality of service (e.g., STANDARD or GLACIER), Zenko uses it to direct replication to a location. The quality of service is determined and the replication destination is configured by the user.

Ancestor: Destination

Default: Storage class of the source object

Valid Values: Any defined destination name

No
Response
Headers

This operation uses only response headers that are common to most responses.

Elements

This operation does not return response elements.

Special Errors

This operation does not return special errors.

Examples
Add Replication Configuration
Request

The following is a sample PUT request that creates a replication subresource on the specified bucket and saves the replication configuration in it. The replication configuration specifies a rule to replicate to the {{exampleTargetBucket}} bucket any new objects created with the key name prefix “TaxDocs”.

After a replication configuration is added to a bucket, Zenko assumes the role specified in the configuration to replicate objects on behalf of the bucket owner, which is the account that created the bucket.

PUT /?replication HTTP/1.1
Host: examplebucket.s3.amazonaws.com
x-amz-date: Wed, 11 Feb 2015 02:11:21 GMT
Content-MD5: q6yJDlIkcBaGGfb3QLY69A==
Authorization: authorization string
Content-Length: 406

<ReplicationConfiguration>
  <Role>arn:aws:iam::35667example:role/CrossRegionReplicationRoleForS3</Role>
  <Rule>
    <ID>rule1</ID>
    <Prefix>TaxDocs</Prefix>
    <Status>Enabled</Status>
    <Destination>
      <Bucket>arn:aws:s3:::{{exampleTargetBucket}}</Bucket>
    </Destination>
  </Rule>
</ReplicationConfiguration>
Response
HTTP/1.1 200 OK
x-amz-id-2: r+qR7+nhXtJDDIJ0JJYcd+1j5nM/rUFiiiZ/fNbDOsd3JUE8NWMLNHXmvPfwMpdc
x-amz-request-id: 9E26D08072A8EF9E
Date: Wed, 11 Feb 2015 02:11:22 GMT
Content-Length: 0
Server: <serverName>
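
The same configuration can be applied through an SDK, which computes the required Content-MD5 header automatically. The following is a minimal boto3 sketch (placeholder endpoint); note that the SDK schema requires a Role element even though Zenko uses pre-created service accounts instead of IAM roles (see Request Body above):

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

s3.put_bucket_replication(
    Bucket="examplebucket",
    ReplicationConfiguration={
        # Placeholder ARN: Zenko does not use IAM roles.
        "Role": "arn:aws:iam::35667example:role/CrossRegionReplicationRoleForS3",
        "Rules": [
            {
                "ID": "rule1",
                "Prefix": "TaxDocs",
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::{{exampleTargetBucket}}"},
            }
        ],
    },
)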

GET Bucket Replication

Returns the replication configuration information set on the bucket.

Requests
Syntax
GET /?replication HTTP/1.1
Host: bucketname.s3.amazonaws.com
Date: date
Authorization: authorization string (see Authenticating Requests (AWS Signature Version 4))
Parameters

The GET Bucket Replication operation does not use request parameters.

Headers

This operation uses only request headers that are common to all operations.

Elements

This operation does not use request elements.

Responses
Headers

This operation uses only response headers that are common to most responses.

Elements

This implementation of GET returns the following response elements.

Name Description
ReplicationConfiguration

Container for replication rules. Up to 1,000 rules can be added. Total replication configuration size can be up to 2 MB.

Type: Container

Children: Rule

Ancestor: None

Role

Amazon Resource Name (ARN) of an IAM role for Amazon S3 to assume when replicating the objects.

Type: String

Ancestor: Rule

Rule

Container for information about a particular replication rule. Replication configuration must have at least one rule and can contain up to 1,000 rules.

Type: Container

Ancestor: ReplicationConfiguration

ID

Unique identifier for the rule. The value’s length cannot exceed 255 characters.

Type: String

Ancestor: Rule

Status

The rule is ignored if status is not Enabled.

Type: String

Ancestor: Rule

Valid Values: Enabled, Disabled

Prefix

Object keyname prefix identifying one or more objects to which the rule applies. Maximum prefix length is 1,024 characters. Overlapping prefixes are not supported.

Type: String

Ancestor: Rule

Destination

Container for destination information.

Type: Container

Ancestor: Rule

Bucket

Amazon Resource Name (ARN) of the bucket in which Amazon S3 is to store replicas of the object identified by the rule.

If there are multiple rules in the replication configuration, all these rules must specify the same bucket as the destination. That is, replication configuration can replicate objects only to one destination bucket.

Type: String

Ancestor: Destination

StorageClass

Optional destination storage class override to use when replicating objects. If this element is not specified, Zenko uses the source object’s storage class to create an object replica.

Zenko reinterprets this S3 call not as a service quality directive, but as a service locator. In other words, where Amazon S3 uses this directive to define a location by quality of service (e.g., STANDARD or GLACIER), Zenko uses it to direct replication to a location. The quality of service is determined and the replication destination is configured by the user.

Type: String

Ancestor: Destination

Default: Storage class of the source object

Valid Values: Any defined Zenko location

Special Errors
Name Description HTTP Status Code SOAP Fault Code Prefix
NoSuchReplicationConfiguration The replication configuration does not exist. 404 Not Found Client
Examples
Retrieve Replication Configuration Information
Request

The following example GET request retrieves replication configuration information set for the examplebucket bucket.

GET /?replication HTTP/1.1
Host: examplebucket.s3.amazonaws.com
x-amz-date: Tue, 10 Feb 2015 00:17:21 GMT
Authorization: signatureValue
Response

The following sample response shows that replication is enabled on the bucket, and the empty prefix indicates that Zenko will replicate all objects created in the examplebucket bucket. The Destination element shows the target bucket where Zenko creates the object replicas and the storage class (AzureCloud) that Zenko uses when creating replicas.

Zenko assumes the specified role to replicate objects on behalf of the bucket owner.

HTTP/1.1 200 OK
x-amz-id-2: ITnGT1y4RyTmXa3rPi4hklTXouTf0hccUjo0iCPjz6FnfIutBj3M7fPGlWO2SEWp
x-amz-request-id: 51991C342example
Date: Tue, 10 Feb 2015 00:17:23 GMT
Server: ScalityS3
Content-Length: contentlength

<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>rule1</ID>
    <Status>Enabled</Status>
    <Prefix></Prefix>
    <Destination>
      <Bucket>arn:aws:s3:::exampletargetbucket</Bucket>
      <StorageClass>AzureCloud</StorageClass>
    </Destination>
  </Rule>
  <Role>arn:aws:iam::35667example:role/CrossRegionReplicationRoleForS3</Role>
</ReplicationConfiguration>
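
A minimal boto3 sketch of the same retrieval (placeholder endpoint); the SDK returns the configuration as a dictionary rather than XML:

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

config = s3.get_bucket_replication(Bucket="examplebucket")
for rule in config["ReplicationConfiguration"]["Rules"]:
    print(rule["ID"], rule["Status"], rule["Destination"]["Bucket"])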

DELETE Bucket Replication

Deletes the replication subresource associated with the specified bucket. This operation requires permission for the s3:DeleteReplicationConfiguration action.

Requests
Syntax
DELETE /?replication HTTP/1.1
Host: bucketname.s3.amazonaws.com
Date: date
Authorization: authorization string (see Authenticating Requests (AWS Signature Version
        4))
Parameters

The DELETE Bucket Replication operation does not use request parameters.

Headers

This operation uses only request headers that are common to all operations.

Elements

This operation does not use request elements.

Responses
Headers

This operation uses only response headers that are common to most responses.

Elements

This operation does not return response elements.

Example

The following DELETE request deletes the replication subresource from the specified bucket. This removes the replication configuration set for the bucket.

DELETE /?replication HTTP/1.1
Host: examplebucket.s3.amazonaws.com
Date: Wed, 11 Feb 2015 05:37:16 GMT
Authorization: signatureValue
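
A minimal boto3 equivalent (placeholder endpoint):

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

# Removes the replication configuration set for the bucket.
s3.delete_bucket_replication(Bucket="examplebucket")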

GET Bucket Policy

This GET operation uses the policy subresource to return a specified bucket’s policy. For any identity other than the root user of the account that owns the bucket, the identity must have GetBucketPolicy permissions on the specified bucket and belong to the bucket owner’s account to use this operation.

In the absence of GetBucketPolicy permissions, Zenko returns a 403 Access Denied error. If the permissions are correct, but you are not using an identity that belongs to the bucket owner’s account, Zenko returns a 405 Method Not Allowed error.

Important

The root user of the account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

For more information about bucket policies, see Using Bucket Policies and User Policies in the Amazon Simple Storage Service Developer Guide.

Requests
Syntax
GET /?policy HTTP/1.1
Host: BucketName.s3.example.com
Date: date
Authorization: authorization string (see Authenticating Requests (AWS
Signature Version 4))
Request Parameters

This operation does not use request parameters.

Request Headers

This operation uses only request headers that are common to all operations.

Request Elements

This operation does not use request elements.

Responses
Response Headers

This operation uses only response headers that are common to most responses.

Response Elements

The response contains the (JSON) policy of the specified bucket.

Special Errors

This operation does not return special errors.

Examples
Sample Request

The following request returns the policy of the specified bucket.

GET ?policy HTTP/1.1
Host: bucket.s3.yourservice.com
Date: Fri, 27 Sep 2019 20:22:00 GMT
Authorization: authorization string
Sample Response
HTTP/1.1 200 OK
x-amz-id-2: Uuag1LuByru9pO4SAMPLEAtRPfTaOFg==
x-amz-request-id: 656c76696e67SAMPLE57374
Date: Fri, 27 Sep 2019 20:22:01 GMT
Connection: keep-alive
Server: S3Server


{
  "Version": "2008-10-17",
  "Id": "aaaa-bbbb-cccc-dddd",
  "Statement": [
    {
      "Effect": "Deny",
      "Sid": "1",
      "Principal": {
        "AWS": ["111122223333", "444455556666"]
      },
      "Action": ["s3:*"],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}

PUT Bucket Policy

This PUT operation uses the policy subresource to add to or replace the policy of a specified bucket. For any identity other than the root user of the account that owns the bucket, the calling identity must have PutBucketPolicy permissions on the specified bucket and belong to the bucket owner’s account to use this operation.

In the absence of PutBucketPolicy permissions, Zenko returns a 403 Access Denied error. If the permissions are correct, but you are not using an identity that belongs to the bucket owner’s account, Zenko returns a 405 Method Not Allowed error.

Important

The root user of the account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

For more information about bucket policies, see Using Bucket Policies and User Policies in the Amazon Simple Storage Service Developer Guide.

Requests
Syntax
PUT /?policy HTTP/1.1
Host: BucketName.s3.example.com
Date: date
Authorization: authorization string (see Authenticating Requests (AWS
Signature Version 4))

Policy written in JSON
Request Parameters

This operation does not use request parameters.

Request Headers

This operation uses only request headers that are common to all operations.

Request Elements

The body is a JSON string containing the policy statements.

Responses
Response Headers

This operation uses only response headers that are common to most responses.

Response Elements

The response elements indicate whether the operation succeeded.

Special Errors

This operation does not return special errors.

Examples
Sample Request

The following request shows the PUT individual policy request for the bucket.

PUT /?policy HTTP/1.1
Host: bucket.s3.example.com
Date: Fri, 27 Sep 2019 20:22:00 GMT
Authorization: authorization string

{
  "Version": "2008-10-17",
  "Id": "aaaa-bbbb-cccc-dddd",
  "Statement": [
    {
      "Effect": "Allow",
      "Sid": "1",
      "Principal": {
        "AWS": ["111122223333", "444455556666"]
      },
      "Action": ["s3:*"],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
Sample Response
HTTP/1.1 204 No Content
x-amz-id-2: Uuag1LuByR5Onimru9SAMPLEAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732SAMPLE7374
Date: Fri, 27 Sep 2019 20:22:01 GMT
Connection: keep-alive
Server: S3Server

DELETE Bucket Policy

This DELETE operation uses the policy subresource to delete a specified bucket’s policy. For any identity other than the root user of the account that owns the bucket, the identity must have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner’s account to use this operation.

In the absence of DeleteBucketPolicy permissions, Zenko returns a 403 Access Denied error. If the permissions are correct, but you are not using an identity that belongs to the bucket owner’s account, Zenko returns a 405 Method Not Allowed error.

Important

The root user of the AWS account that owns a bucket can always use this operation, even if the policy explicitly denies the root user the ability to perform this action.

For more information about bucket policies, see Using Bucket Policies and User Policies in the Amazon Simple Storage Service Developer Guide.

Requests
Syntax
DELETE /?policy HTTP/1.1
Host: BucketName.s3.example.com
Date: date
Authorization: authorization string (see Authenticating Requests (AWS
Signature Version 4))
Request Parameters

This operation does not use request parameters.

Request Headers

This operation uses only request headers that are common to all operations.

Request Elements

This operation does not use request elements.

Responses
Response Headers

This operation uses only response headers that are common to most responses.

Response Elements

The response elements contain the status of the DELETE operation including the error code if the request failed.

Special Errors

This operation does not return special errors.

Examples
Sample Request

This request deletes the policy of the bucket named “BucketName”.

DELETE /?policy HTTP/1.1
Host: BucketName.s3.example.com
Date: Fri, 27 Sep 2019 20:22:00 GMT
Authorization: signatureValue
Sample Response
HTTP/1.1 204 No Content
x-amz-id-2: Uuag1LuByRx9e6j5OnimrSAMPLEtRPfTaOFg==
x-amz-request-id: 656c76696e672SAMPLE5657374
Date: Fri, 27 Sep 2019 20:22:01 GMT
Connection: keep-alive
Server: my-zenko
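
The three bucket policy operations map directly onto SDK calls. The following is a minimal boto3 sketch (placeholder endpoint); the policy travels as a JSON string in both directions:

import json

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

policy = {
    "Version": "2008-10-17",
    "Id": "aaaa-bbbb-cccc-dddd",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {"AWS": ["111122223333", "444455556666"]},
            "Action": ["s3:*"],
            "Resource": "arn:aws:s3:::bucket/*",
        }
    ],
}

# PUT Bucket Policy: serialize the policy into the request body.
s3.put_bucket_policy(Bucket="bucket", Policy=json.dumps(policy))

# GET Bucket Policy: the response carries the policy back as a JSON string.
print(json.loads(s3.get_bucket_policy(Bucket="bucket")["Policy"]))

# DELETE Bucket Policy: removes the policy subresource from the bucket.
s3.delete_bucket_policy(Bucket="bucket")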

GET Bucket ACL

The GET Bucket ACL operation returns a bucket’s access control list (ACL) settings.

Requests
Syntax
GET /?acl HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}
Parameters

The GET Bucket ACL operation does not use request parameters.

Headers

The GET Bucket ACL operation uses only request headers common to all operations (see Common Request Headers).

Elements

The GET Bucket ACL operation does not use request elements.

Responses
Headers

The GET Bucket ACL operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The GET Bucket ACL operation can return the following XML elements in its response (includes XML containers):

Element Type Description
AccessControlList container Container for ACL information
AccessControlPolicy container Container for the response
DisplayName string Bucket owner’s display name; returned only if the owner’s e-mail address (or the forum name, if configured) can be determined from the ID.
Grant container Container for Grantee and Permission
Grantee container Container for DisplayName and ID of the person being granted permissions
ID string Bucket owner’s user ID
Owner container Container for bucket owner information
Permission string

Permission given to the Grantee for bucket

Valid Values: FULL_CONTROL | WRITE | WRITE_ACP | READ | READ_ACP

Examples
Getting the ACL of the Specified Bucket
Request Sample
GET ?acl HTTP/1.1
Host: {{bucketName}}.{{storageService}}.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: {{authorizationString}}
Response Sample
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
Date: Wed, 04 Sep 2019 21:22:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
Content-Length: 124
Content-Type: text/plain
Connection: close
Server: ScalityS3

<AccessControlPolicy xmlns="http://example.com/doc/2006-03-01/">
  <Owner>
    <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    <DisplayName>user@example.com</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
        <DisplayName>user@example.com</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
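
A minimal boto3 sketch of the same retrieval (placeholder endpoint); the Grants list mirrors the Grant elements in the XML above:

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

acl = s3.get_bucket_acl(Bucket="example-bucket")
print("owner:", acl["Owner"]["ID"])
for grant in acl["Grants"]:
    grantee = grant["Grantee"]
    # A grantee is identified by ID (CanonicalUser) or URI (Group).
    print(grantee.get("ID", grantee.get("URI")), grant["Permission"])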

PUT Bucket ACL

The PUT Bucket ACL operation uses the acl subresource to set the permissions on an existing bucket using its access control list (ACL).

WRITE_ACP access is required to set a bucket’s ACL.

Bucket permissions are set using one of the following two methods:

  • Specifying the ACL in the request body
  • Specifying permissions using request headers

Warning

Access permission cannot be specified using both the request body and the request headers.

Requests
Syntax

The request syntax that follows sends the ACL in the request body. If headers are used to specify the bucket’s permissions, the ACL cannot be sent in the request body (see Common Request Headers for a list of available headers).

PUT /?acl HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}

<AccessControlPolicy>
  <Owner>
    <ID>ID</ID>
    <DisplayName>EmailAddress</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>ID</ID>
        <DisplayName>EmailAddress</DisplayName>
      </Grantee>
      <Permission>Permission</Permission>
    </Grant>
    ...
  </AccessControlList>
</AccessControlPolicy>
Parameters

The PUT Bucket ACL operation does not use request parameters.

Headers

The PUT Bucket ACL operation can use a number of optional request headers in addition to those that are common to all operations (refer to Common Request Headers). These request headers are used either to specify a predefined (“canned”) ACL or to explicitly specify grantee permissions.

Specifying a Canned ACL

Zenko supports a set of canned ACLs, each of which has a predefined set of grantees and permissions.

To grant access permissions by specifying canned ACLs, use the following header and specify the canned ACL name as its value.

Note

If the x-amz-acl header is in use, other access-control-specific headers in the request are ignored.

Header Type Description
x-amz-acl string

The canned ACL to apply to the bucket

Default: private

Valid Values: private | public-read | public-read-write | authenticated-read | bucket-owner-read | bucket-owner-full-control

Constraints: None

Explicitly Specifying Grantee Access Permissions

A set of x-amz-grant-permission headers is available for explicitly granting individualized bucket access permissions to specific Zenko accounts or groups. Each of these headers maps to specific permissions that Zenko supports in an ACL.

Note

If an x-amz-acl header is sent, all ACL-specific headers are ignored in favor of the canned ACL.

Header Type Description
x-amz-grant-read string

Allows grantee to list the objects in the bucket

Default: None

Constraints: None

x-amz-grant-write string

Allows grantee to create, overwrite, and delete any object in the bucket

Default: None

Constraints: None

x-amz-grant-read-acp string

Allows grantee to read the bucket ACL

Default: None

Constraints: None

x-amz-grant-write-acp string

Allows grantee to write the ACL for the applicable bucket

Default: None

Constraints: None

x-amz-grant-full-control string

Allows grantee the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the ACL

Default: None

Constraints: None

For each header, the value is a comma-separated list of one or more grantees. Each grantee is specified as a type=value pair, where the type can be any one of the following:

  • emailAddress (if value specified is the email address of an account)
  • id (if value specified is the canonical user ID of an account)
  • uri (if granting permission to a predefined Amazon S3 group)

For example, the following x-amz-grant-write header grants permission to create, overwrite, and delete objects to a LogDelivery group predefined by Zenko and to two accounts identified by their email addresses.

x-amz-grant-write: uri="http://acs.example.com/groups/s3/LogDelivery", emailAddress="xyz@example.com", emailAddress="abc@example.com"

Note

Though cited here for purposes of example, the LogDelivery group permission is not currently used by Zenko.

Request Elements

If the request body is used to specify an ACL, the following elements must be used.

Note

If the request body is used, the request headers cannot be used to set an ACL.

Element Type Description
AccessControlList container Container for Grant, Grantee, and Permission
AccessControlPolicy string Contains the elements that set the ACL permissions for an object per grantee
DisplayName string Screen name of the bucket owner
Grant container Container for the grantee and his or her permissions
Grantee string The subject whose permissions are being set
ID string ID of the bucket owner, or the ID of the grantee
Owner container Container for the bucket owner’s display name and ID
Permission string Specifies the permission given to the grantee.
Grantee Values

Specify the person (grantee) to whom access rights are being assigned (using request elements):

  • By ID

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>{{ID}}</ID><DisplayName>GranteesEmail</DisplayName></Grantee>
    

    DisplayName is optional and is ignored in the request.

  • By Email Address

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ScalityCustomerByEmail"><EmailAddress>{{Grantees@email.com}}</EmailAddress></Grantee>
    

    The grantee is resolved to the CanonicalUser and, in a response to a GET Object acl request, appears as the CanonicalUser.

  • By URI

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>{{http://acs.example.com/groups/global/AuthenticatedUsers}}</URI></Grantee>
    
Responses
Headers

The PUT Bucket ACL operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The PUT Bucket ACL operation does not return response elements.

Examples
Access Permissions Specified in the Body

The request sample grants access permission to the existing example-bucket bucket, specifying the ACL in the body. In addition to granting full control to the bucket owner, the XML specifies the following grants.

  • Grant AllUsers group READ permission on the bucket.
  • Grant the LogDelivery group WRITE permission on the bucket.
  • Grant an AWS account, identified by email address, WRITE_ACP permission.
  • Grant an AWS account, identified by canonical user ID, READ_ACP permission.
Request Sample
PUT ?acl HTTP/1.1
Host: example-bucket.example.com
Content-Length: 1660
x-amz-date: Thu, 12 Apr 2012 20:04:21 GMT
Authorization: {{authorizationString}}

<AccessControlPolicy xmlns="http://example.com/doc/2006-03-01/">
  <Owner>
    <ID>852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID</ID>
    <DisplayName>OwnerDisplayName</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>852b113e7a2f25102679df27bb0ae12b3f85be6BucketOwnerCanonicalUserID</ID>
        <DisplayName>OwnerDisplayName</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI xmlns="">http://acs.scality.com/groups/global/AllUsers</URI>
      </Grantee>
      <Permission xmlns="">READ</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
        <URI xmlns="">http://acs.scality.com/groups/s3/LogDelivery</URI>
      </Grantee>
      <Permission xmlns="">WRITE</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="AmazonCustomerByEmail">
        <EmailAddress xmlns="">xyz@example.com</EmailAddress>
      </Grantee>
      <Permission xmlns="">WRITE_ACP</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID xmlns="">f30716ab7115dcb44a5ef76e9d74b8e20567f63TestAccountCanonicalUserID</ID>
      </Grantee>
      <Permission xmlns="">READ_ACP</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
Response Sample
HTTP/1.1 200 OK
x-amz-id-2: NxqO3PNiMHXXGwjgv15LLgUoAmPVmG0xtZw2sxePXLhpIvcyouXDrcQUaWWXcOK0
x-amz-request-id: C651BC9B4E1BD401
Date: Thu, 12 Apr 2012 20:04:28 GMT
Content-Length: 0
Server: ScalityS3
Access Permissions Specified Using Headers

The request sample uses ACL-specific request headers to grant the following permissions:

  • Write permission to the Zenko LogDelivery group and an account identified by the email xyz@example.com
  • Read permission to the Zenko AllUsers group
Request Sample
PUT ?acl HTTP/1.1
Host: example-bucket.example.com
x-amz-date: Sun, 29 Apr 2012 22:00:57 GMT
x-amz-grant-write: uri="http://acs.example.com/groups/s3/LogDelivery", emailAddress="xyz@example.com"
x-amz-grant-read: uri="http://acs.example.com/groups/global/AllUsers"
Accept: */*
Authorization: {{authorizationString}}
Response Sample
HTTP/1.1 200 OK
x-amz-id-2: 0w9iImt23VF9s6QofOTDzelF7mrryz7d04Mw23FQCi4O205Zw28Zn+d340/RytoQ
x-amz-request-id: A6A8F01A38EC7138
Date: Sun, 29 Apr 2012 22:01:10 GMT
Content-Length: 0
Server: ScalityS3
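
Both methods of setting bucket permissions are available through an SDK. The following is a minimal boto3 sketch (placeholder endpoint); as warned above, a canned ACL and explicit grants cannot be combined, so they are shown as separate calls:

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

# Canned ACL: sent as the x-amz-acl header.
s3.put_bucket_acl(Bucket="example-bucket", ACL="public-read")

# Explicit grants: sent as x-amz-grant-* headers.
s3.put_bucket_acl(
    Bucket="example-bucket",
    GrantRead='uri="http://acs.example.com/groups/global/AllUsers"',
    GrantWrite='emailAddress="xyz@example.com"',
)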

GET Bucket CORS

The GET Bucket CORS operation returns a bucket’s CORS configuration information. This operation requires the S3:GetBucketCORS permission.

Requests
Syntax
GET /?cors HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}

Note

The request syntax illustrates only a portion of the request headers.

Parameters

The GET Bucket CORS operation does not use request parameters.

Headers

The GET Bucket CORS operation uses only request headers that are common to all operations (see Common Request Headers).

Elements

The GET Bucket CORS operation does not use request elements.

Responses
Headers

The GET Bucket CORS operation uses only response headers that are common to all operations (see Common Response Headers).

Elements
Element Type Description
CORSConfiguration Container

Container for up to 100 CORSRule elements.

Ancestors: None

CORSRule Container

A set of origins and methods (the cross-origin access to allow). Up to 100 rules can be added to the configuration.

Ancestors: CORSConfiguration

Children: AllowedOrigin, AllowedMethod, MaxAgeSeconds, ExposeHeader, ID.

AllowedHeader String

Specifies which headers are allowed in a pre-flight OPTIONS request through the Access-Control-Request-Headers header. Each header name specified in Access-Control-Request-Headers must have a corresponding entry in the rule. Only the headers that were requested are sent back. This element can contain at most one * wildcard character.

Ancestor: CORSRule

AllowedMethod Enum

Identifies an HTTP method that the domain/origin specified in the rule is allowed to execute. Each CORSRule must contain at least one AllowedMethod and one AllowedOrigin element.

Ancestor: CORSRule

AllowedOrigin String

An origin from which cross-domain requests are allowed. Each CORSRule must have at least one AllowedOrigin element. The string value can include at most one “*” wildcard character, for example, “http://*.example.com”. Also, it is possible to specify only “*” to allow cross-origin access for all domains/origins.

Ancestor: CORSRule

ExposeHeader String

One or more headers in the response that users can access from their applications (for example, from a JavaScript XMLHttpRequest object). Add one ExposeHeader in the rule for each header.

Ancestor: CORSRule

ID String

An optional unique identifier for the rule. The ID value can be up to 255 characters long. The IDs can assist in finding a rule in the configuration.

Ancestor: CORSRule

MaxAgeSeconds Integer

The time in seconds that the browser is to cache the preflight response for the specified resource. A CORSRule can have at most one MaxAgeSeconds element.

Ancestor: CORSRule

Examples
Retrieve CORS Subresource

This request retrieves the cors subresource of a bucket.

Request Sample
GET /?cors HTTP/1.1
Host: example.com
Date: Tue, 13 Dec 2011 19:14:42 GMT
Authorization: {{authenticationInformation}}
Response Sample
HTTP/1.1 200 OK
x-amz-id-2: 0FmFIWsh/PpBuzZ0JFRC55ZGVmQW4SHJ7xVDqKwhEdJmf3q63RtrvH8ZuxW1Bol5
x-amz-request-id: 0CF038E9BCF63097
Date: Tue, 13 Dec 2011 19:14:42 GMT
Server: ScalityS3
Content-Length: 280

<CORSConfiguration>
     <CORSRule>
       <AllowedOrigin>http://www.example.com</AllowedOrigin>
       <AllowedMethod>GET</AllowedMethod>
       <MaxAgeSeconds>3000</MaxAgeSeconds>
       <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
     </CORSRule>
</CORSConfiguration>
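
A minimal boto3 sketch of the same retrieval (placeholder endpoint):

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

cors = s3.get_bucket_cors(Bucket="example-bucket")
for rule in cors["CORSRules"]:
    print(rule["AllowedOrigins"], rule["AllowedMethods"], rule.get("MaxAgeSeconds"))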

PUT Bucket CORS

The PUT Bucket CORS operation configures a bucket to accept cross-origin requests.

Requests
Syntax
PUT /?cors HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Content-Length: {{length}}
Date: {{date}}
Authorization: {{authenticationInformation}}
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>Origin you want to allow cross-domain requests from</AllowedOrigin>
    <AllowedOrigin>...</AllowedOrigin>
    ...
    <AllowedMethod>HTTP method</AllowedMethod>
    <AllowedMethod>...</AllowedMethod>
    ...
    <MaxAgeSeconds>Time in seconds your browser is to cache the pre-flight OPTIONS response for a resource</MaxAgeSeconds>
    <AllowedHeader>Headers that you want the browser to be allowed to send</AllowedHeader>
    <AllowedHeader>...</AllowedHeader>
     ...
    <ExposeHeader>Headers in the response that you want accessible from client application</ExposeHeader>
    <ExposeHeader>...</ExposeHeader>
     ...
  </CORSRule>
  <CORSRule>
    ...
  </CORSRule>
    ...
</CORSConfiguration>

Note

This request syntax example illustrates only a portion of the request headers.

Parameters

The PUT Bucket CORS operation does not use request parameters.

Elements
Element Type Description
CORSConfiguration Container

Container for up to 100 CORSRule elements.

Ancestors: None

CORSRule Container

A set of origins and methods (cross-origin access that you want to allow). You can add up to 100 rules to the configuration.

Ancestors: CORSConfiguration

ID String

A unique identifier for the rule. The ID value can be up to 255 characters long. The IDs help you find a rule in the configuration.

Ancestors: CORSRule

AllowedMethod Enum

An HTTP method that you want to allow the origin to execute. Each CORSRule must identify at least one origin and one method.

Ancestors: CORSRule

AllowedOrigin String

An origin from which you want to allow cross-domain requests. This can contain at most one * wildcard character. Each CORSRule must identify at least one origin and one method.

Ancestors: CORSRule

AllowedHeader String

Specifies which headers are allowed in a pre-flight OPTIONS request via the Access-Control-Request-Headers header. Each header name specified in the Access-Control-Request-Headers header must have a corresponding entry in the rule to get a 200 OK response from the preflight request. In a response, CloudServer sends only the allowed headers that were requested. This can contain at most one * wildcard character.

Ancestors: CORSRule

MaxAgeSeconds Integer

The time in seconds that your browser is to cache the preflight response for the specified resource. A CORSRule can have at most one MaxAgeSeconds element.

Ancestors: CORSRule

ExposeHeader String

One or more headers in the response that you want customers to be able to access from their applications (for example, from a JavaScript XMLHttpRequest object). Add one ExposeHeader element in the rule for each header.

Ancestors: CORSRule

Responses
Headers

The PUT Bucket CORS operation uses only response headers that are common to all operations (see Common Response Headers).

Elements

The PUT Bucket CORS operation does not return response elements.

Examples
Configure CORS

The following PUT request adds the cors subresource to a bucket.

Request Sample
PUT /?cors HTTP/1.1
Host: example.com
x-amz-date: Tue, 21 Aug 2012 17:54:50 GMT
Content-MD5: 8dYiLewFWZyGgV2Q5FNI4W==
Authorization: {{authenticationInformation}}
Content-Length: 216
<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>http://www.example.com</AllowedOrigin>
   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>
   <AllowedHeader>*</AllowedHeader>
   <MaxAgeSeconds>3000</MaxAgeSeconds>
   <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
 </CORSRule>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <AllowedHeader>*</AllowedHeader>
   <MaxAgeSeconds>3000</MaxAgeSeconds>
 </CORSRule>
</CORSConfiguration>
Response Sample
HTTP/1.1 200 OK
x-amz-id-2: CCshOvbOPfxzhwOADyC4qHj/Ck3F9Q0viXKw3rivZ+GcBoZSOOahvEJfPisZB7B
x-amz-request-id: BDC4B83DF5096BBE
Date: Tue, 21 Aug 2012 17:54:50 GMT
Server: ScalityS3

DELETE Bucket CORS

Use the DELETE Bucket CORS operation to remove the CORS configuration for a bucket. This operation requires the S3:PutBucketCORS permission.

Requests
Syntax
DELETE /?cors HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}

Note

The Request Syntax illustrates only a portion of the request headers.

Parameters

The DELETE Bucket CORS operation does not use Request Parameters.

Headers

The DELETE Bucket CORS operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

This operation does not use request elements.

Responses
Headers

The DELETE Bucket CORS operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

This operation does not use response elements.

Examples
Request Sample
DELETE ?cors HTTP/1.1
Host: example.com
Date: Mon, 15 Feb 2016 15:30:07 GMT
Authorization: {{authenticationInformation}}
Response Sample
HTTP/1.1 204 No Content
x-amz-id-2: YgIPIfBiKa2bj0KMgUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 3848CD259D811111
Date: Thu, 27 Jan 2011 00:49:26 GMT
Server: ScalityS3
Content-Length: 0
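
A minimal boto3 sketch covering both configuring and removing CORS (placeholder endpoint); the dictionary mirrors the two-rule PUT Bucket CORS example above:

import boto3

s3 = boto3.client("s3", endpoint_url="https://zenko.example.com")  # placeholder

s3.put_bucket_cors(
    Bucket="example-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["http://www.example.com"],
                "AllowedMethods": ["PUT", "POST", "DELETE"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
                "ExposeHeaders": ["x-amz-server-side-encryption"],
            },
            {
                "AllowedOrigins": ["*"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            },
        ]
    },
)

# DELETE Bucket CORS: remove the configuration again.
s3.delete_bucket_cors(Bucket="example-bucket")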

Bucket Ingestion Metrics

Zenko can make queries against S3 Connector instances named as bucket locations under its management. Querying the managed namespace, Zenko can retrieve information about the number of transfers completed and pending, and the rate at which they are completing.

Zenko provides a REST API that makes these metrics available to users. To access these, make a request to the endpoints as specified in the following sections. Enter the listed endpoints verbatim, substituting a location that matches the bucket/location queried if the query requires it.

For example,

$ curl http://zenko-instance.net/_/backbeat/api/metrics/ingestion/us-west-video-dailies/all

where zenko-instance.net is the Zenko server’s URL and us-west-video-dailies is the bucket name (location).

GET All Metrics

This request retrieves all metrics for all S3 Connector metadata ingestion locations. Zenko returns three categories of information (metrics) about system operations: completions, throughput, and pending operations. Completions are returned for the preceding 24 hours, throughput for the preceding 15 minutes, and pending transactions are returned as a simple aggregate.

Endpoint

/_/backbeat/api/metrics/ingestion/all

Sample Response

{
  "completions": {
    "description": "Number of completed ingestion operations (count) in the last 86400 seconds",
    "results": {
      "count":678979
    }
  },
  "throughput": {
    "description": "Current throughput for ingestion operations in ops/sec (count) in the last 900 seconds",
    "results": {
      "count":"34.25"
    }
  },
  "pending": {
    "description": "Number of pending ingestion operations (count)",
    "results": {
      "count":253417
    }
  }
}
GET Metrics for a Location

This request retrieves metrics for a single location, defined as a bucket in the Orbit UI.

The response from this endpoint is formatted identically to the GET All Metrics response (completions, throughput, and pending operations), but is constrained to the requested location.

Endpoint

/_/backbeat/api/metrics/ingestion/<location>/all

Sample Response

{
   "completions": {
      "description": "Number of completed ingestion operations (count) in the last 86400 seconds",
      "results": {
         "count":678979
      }
   },
   "throughput": {
      "description": "Current throughput for ingestion operations in ops/sec (count) in the last 900 seconds",
      "results": {
         "count":"34.25"
      }
   },
   "pending": {
      "description": "Number of pending ingestion operations (count)",
      "results": {
         "count":253417
      }
   }
}
GET Throughput per Location

This request queries the managed S3 namespace for throughput, expressed as operations per second, over the preceding 15 minutes.

Endpoint

/_/backbeat/api/metrics/ingestion/<location>/throughput

Sample Response

{
   "throughput": {
      "description":"Current throughput for ingestion operations in ops/sec (count) in the last 900 seconds",
      "results": {
         "count":"25.72"
      }
   }
}
GET Completions per Location

This request retrieves the number of operations Zenko ingested (completed) from a specific location over the preceding 24 hours.

Endpoint

/_/backbeat/api/metrics/ingestion/<location>/completions

Sample Response

{
   "completions": {
      "description":"Number of completed ingestion operations (count) in the last 86400 seconds",
      "results": {
         "count":668900
      }
   }
}
GET Pending Object Count

This request retrieves the number of objects queued for Zenko ingestion.

Endpoint

/_/backbeat/api/metrics/ingestion/<location>/pending

Sample Response

{
   "pending": {
      "description":"Number of pending ingestion operations (count)",
      "results": {
         "count":253409
      }
   }
}
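
To poll these endpoints from a script rather than curl, a short Python sketch using the requests library follows. The server name and location are the placeholders from the earlier example; adjust both for your deployment.

import requests

BASE = "http://zenko-instance.net/_/backbeat/api/metrics/ingestion"
LOCATION = "us-west-video-dailies"  # placeholder bucket (location) name

# Query each per-location ingestion metrics endpoint described above.
for metric in ("all", "throughput", "completions", "pending"):
    response = requests.get(f"{BASE}/{LOCATION}/{metric}")
    response.raise_for_status()
    print(metric, response.json())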

Bucket Lifecycle Operations

PUT Bucket Lifecycle

The PUT Bucket Lifecycle operation creates a new lifecycle configuration or replaces an existing one.

Requests
Syntax
PUT /?lifecycle HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Content-Length: {{length}}
Date: {{date}}
Authorization: {{authorizationString}}
Content-MD5: {{MD5}}
Parameters

The PUT Bucket Lifecycle operation does not use request parameters.

Headers
Name Description Required
Content-MD5

The base64-encoded 128-bit MD5 digest of the data; must be used as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.

Type: String

Default: None

Yes
Request Body

The lifecycle configuration can be specified in the request body. The configuration is specified as XML consisting of one or more rules.

<LifecycleConfiguration>
  <Rule>
    ...
  </Rule>
  <Rule>
    ...
  </Rule>
</LifecycleConfiguration>

Each rule consists of the following:

  • A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, or a combination of both.
  • A status, indicating whether the rule is in effect.
  • One or more lifecycle transition and expiration actions to perform on the objects identified by the filter. If the state of your bucket is versioning-enabled or versioning-suspended, you can have many versions of the same object (one current version, and zero or more non-current versions). Amazon S3 provides predefined actions that you can specify for current and non-current object versions.

For example:

<LifecycleConfiguration>
  <Rule>
    <Filter>
      <Prefix>key-prefix</Prefix>
    </Filter>
    <Status>rule-status</Status>
    [One or more Transition/Expiration lifecycle actions.]
  </Rule>
</LifecycleConfiguration>

The following table describes the XML elements in the lifecycle configuration:

Name Description Required
AbortIncompleteMultipartUpload

Container for specifying when an incomplete multipart upload becomes eligible for an abort operation.

When you specify this lifecycle action, the rule cannot specify a tag-based filter.

Type: Container

Child: DaysAfterInitiation

Ancestor: Rule

Yes, if no other action is specified for the rule.
And

Container for specifying rule filters. These filters determine the subset of objects to which the rule applies.

Type: Container

Ancestor: Rule

Yes, if more than one filter condition is specified (for example, one prefix and one or more tags).
Date

Date when action should occur. The date value must conform to the ISO 8601 format.

Type: String

Ancestor: Expiration or Transition

Yes, if Days and ExpiredObjectDeleteMarker are absent.
Days

Specifies the number of days after object creation when the specific rule action takes effect.

Type: Nonnegative Integer when used with Transition. Positive Integer when used with Expiration.

Ancestor: Expiration or Transition

Yes, if Date and ExpiredObjectDeleteMarker are absent.
DaysAfterInitiation

Specifies the number of days after initiating a multipart upload when the multipart upload must be completed. If it does not complete by the specified number of days, it becomes eligible for an abort operation and Amazon S3 aborts the incomplete multipart upload.

Type: Positive Integer

Ancestor: AbortIncompleteMultipartUpload

Yes, if ancestor is specified.
Expiration

This action specifies a period in an object’s lifetime when Amazon S3 should take the appropriate expiration action. Action taken depends on whether the bucket is versioning-enabled.

If versioning has never been enabled on the bucket, the only copy of the object is deleted permanently.

Otherwise, if your bucket is versioning-enabled or versioning-suspended, the action applies only to the current version of the object. A versioning-enabled bucket can have many versions of the same object, one current version, and zero or more noncurrent versions.

Instead of deleting the current version, the current version becomes a noncurrent version and a delete marker is added as the new current version.

Type: Container

Children: Days or Date

Ancestor: Rule

Yes, if no other action is present in the Rule.
Filter

Container for elements that describe the filter identifying a subset of objects to which the lifecycle rule applies. If you specify an empty filter, the rule applies to all objects in the bucket.

Type: Container

Children: Prefix, Tag, or And

Ancestor: Rule

Yes
ID

Unique identifier for the rule. The value cannot be longer than 255 characters.

Type: String

Ancestor: Rule

No
Key

Specifies the key of a tag. A tag key can be up to 128 Unicode characters in length.

Tag keys that you specify in a lifecycle rule filter must be unique.

Type: String

Ancestor: Tag

Yes, if Tag parent is specified.
LifecycleConfiguration

Container for lifecycle rules. You can add as many as 1,000 rules.

Type: Container

Children: Rule

Ancestor: None

Yes
ExpiredObjectDeleteMarker

On a versioning-enabled or versioning-suspended bucket, you can add this element in the lifecycle configuration to delete expired object delete markers.

On a non-versioned bucket, adding this element would do nothing because you cannot have delete markers.

When you specify this lifecycle action, the rule cannot specify a tag-based filter.

Type: String

Valid Values: true or false

Ancestor: Expiration

Yes, if Date and Days are absent.
NoncurrentDays

Specifies the number of days an object is non-current before performing the associated action.

Type: Positive Integer

Ancestor: NoncurrentVersionExpiration

Yes
NoncurrentVersionExpiration

Specifies when noncurrent object versions expire. Upon expiration, the noncurrent object versions are permanently deleted.

This lifecycle configuration action is set on a bucket that has versioning enabled (or suspended).

Type: Container

Children: NoncurrentDays

Ancestor: Rule

Yes, if no other action is present in the rule.
Prefix

Object key prefix identifying one or more objects to which the rule applies. Empty prefix indicates there is no filter based on key prefix.

There can be at most one Prefix in a lifecycle rule Filter.

Type: String

Ancestor: Filter or And (if you specify multiple filters such as a prefix and one or more tags)

No
Rule

Container for a lifecycle rule. A lifecycle configuration can contain as many as 1,000 rules.

Type: Container

Ancestor: LifecycleConfiguration

Yes
Status

If Enabled, the rule is executed when the condition occurs.

Type: String

Ancestor: Rule

Valid Values: Enabled or Disabled.

Yes
StorageClass

Specifies the storage class (Zenko location) to which you want the object to transition.

Type: String

Ancestor: Transition

Valid Values: Any defined location

Yes

This element is required only if you specify one or both of its ancestors.

Tag

Container for specifying a tag key and value. Each tag has a key and a value.

Type: Container

Children: Key and Value

Ancestor: Filter or And (if you specify multiple filters such as a prefix and one or more tags)

No
Transition

This action specifies a period in the objects’ lifetime when an object can transition to another storage class.

If versioning has never been enabled on the bucket, the object will transition to the specified storage class.

Otherwise, when your bucket is versioning-enabled or versioning-suspended, only the current version transitions to the specified storage class. Noncurrent versions are unaffected.

Type: Container

Children: Days or Date, and StorageClass

Ancestor: Rule

Yes, if no other action is present in the Rule.
Value

Specifies the value for a tag key. Each object tag is a key-value pair.

Tag value can be up to 256 Unicode characters in length.

Type: String

Ancestor: Tag

Yes, if Tag parent is specified
Responses
Headers

The PUT Bucket Lifecycle operation uses only response headers that are common to most responses (see Common Response Headers).

Elements

The PUT Bucket Lifecycle operation does not return response elements.

Special Errors

The PUT Bucket Lifecycle operation does not return special errors.

Examples
Add Lifecycle Configuration–Bucket Versioning Disabled

The following lifecycle configuration specifies two rules, each with one action.

  • The Transition action specifies objects with the “documents/” prefix
    to transition to the wasabi_cloud storage class 30 days after creation.
  • The Expiration action specifies objects with the “logs/” prefix to be
    deleted 365 days after creation.
<LifecycleConfiguration>
  <Rule>
    <ID>id1</ID>
    <Filter>
      <Prefix>documents/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>wasabi_cloud</StorageClass>
    </Transition>
  </Rule>
  <Rule>
    <ID>id2</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
Request

The following is a sample PUT /?lifecycle request that adds the preceding lifecycle configuration to the “examplebucket” bucket.

PUT /?lifecycle HTTP/1.1
Host: examplebucket.s3.example.com
x-amz-date: Wed, 14 May 2014 02:11:21 GMT
Content-MD5: q6yJDlIkcBaGGfb3QLY69A==
Authorization: {{authorizationString}}
Content-Length: 415
<LifecycleConfiguration>
  <Rule>
    <ID>id1</ID>
    <Filter>
      <Prefix>documents/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>wasabi_cloud</StorageClass>
    </Transition>
  </Rule>
  <Rule>
    <ID>id2</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>

The following is a sample response.

HTTP/1.1 200 OK
x-amz-id-2: r+qR7+nhXtJDDIJ0JJYcd+1j5nM/rUFiiiZ/fNbDOsd3JUE8NWMLNHXmvPfwMpdc
x-amz-request-id: 9E26D08072A8EF9E
Date: Wed, 14 May 2014 02:11:22 GMT
Content-Length: 0
Server: AmazonS3
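
The same two-rule configuration can be applied without hand-building the XML. The following is a minimal Python sketch using boto3, which also computes the required Content-MD5 header; the endpoint URL is a placeholder, and wasabi_cloud must already exist as a defined Zenko location.

import boto3

s3 = boto3.client("s3", endpoint_url="http://zenko.example.com")  # placeholder endpoint

s3.put_bucket_lifecycle_configuration(
    Bucket="examplebucket",
    LifecycleConfiguration={
        "Rules": [
            {
                # Transition objects under documents/ after 30 days.
                "ID": "id1",
                "Filter": {"Prefix": "documents/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "wasabi_cloud"}],
            },
            {
                # Delete objects under logs/ 365 days after creation.
                "ID": "id2",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 365},
            },
        ]
    },
)
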
Add Lifecycle Configuration–Bucket Versioning Enabled

The following lifecycle configuration specifies one rule, with one action to perform. Specify this action when your bucket is versioning-enabled or versioning is suspended.

The NoncurrentVersionExpiration action specifies non-current versions of objects with the “logs/” prefix to expire 100 days after the objects become non-current.

<LifecycleConfiguration>
  <Rule>
    <ID>DeleteAfterBecomingNonCurrent</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>100</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>
Request

The following is a sample PUT /?lifecycle request that adds the preceding lifecycle configuration to the `examplebucket` bucket.

PUT /?lifecycle HTTP/1.1
Host: examplebucket.s3.amazonaws.com
x-amz-date: Wed, 14 May 2014 02:21:48 GMT
Content-MD5: 96rxH9mDqVNKkaZDddgnw==
Authorization: authorization string
Content-Length: 598

<LifecycleConfiguration>
  <Rule>
    <ID>DeleteAfterBecomingNonCurrent</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>100</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>
Response

The following is a sample response:

HTTP/1.1 200 OK
x-amz-id-2:  aXQ+KbIrmMmoO//3bMdDTw/CnjArwje+J49Hf+j44yRb/VmbIkgIO5A+PT98Cp/6k07hf+LD2mY=
x-amz-request-id: 02D7EC4C10381EB1
Date: Wed, 14 May 2014 02:21:50 GMT
Content-Length: 0
Server: AmazonS3
GET Bucket Lifecycle

The GET Bucket Lifecycle operation returns the lifecycle configuration information set on the bucket. This GET operation requires the S3:GetLifecycleConfiguration permission.

Requests
Syntax
GET /?lifecycle HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The GET Bucket Lifecycle operation does not use request parameters.

Headers

The GET Bucket Lifecycle operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The GET Bucket Lifecycle operation does not use request elements.

Responses
Headers

The GET Bucket Lifecycle operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The GET Bucket Lifecycle operation returns the following response elements.

Name Description
And

Container for specifying Prefix and Tag based filters.

Child: Prefix and Tag

Type: Container

Ancestor: Filter

AbortIncompleteMultipartUpload

Container for specifying when an incomplete multipart upload becomes eligible for an abort operation.

Child: DaysAfterInitiation

Type: Container

Ancestor: Rule

Date

Date when you want the action to take place.

Type: String

Ancestor: Expiration or Transition

Days

Specifies the number of days after object creation when the specific rule action takes effect. The object’s eligibility time is calculated as creation time + the number of days.

Type: Nonnegative Integer when used with Transition, or Positive Integer when used with Expiration.

Ancestor: Transition or Expiration

DaysAfterInitiation

Specifies the number of days after initiating a multipart upload when the multipart upload must be completed. If it does not complete by the specified number of days, the incomplete multipart upload will be aborted.

Type: Positive Integer

Ancestor: AbortIncompleteMultipartUpload

Expiration

The expiration action occurs only on objects that are eligible according to the period specified in the child Date or Days element. The action depends on whether the bucket is versioning enabled (or suspended).

If versioning has never been enabled on the bucket, the object is permanently deleted.

Otherwise, if the bucket is versioning-enabled or versioning-suspended, the action applies only to the current version of the object. Buckets with versioning-enabled or versioning-suspended can have many versions of the same object, one current version, and zero or more noncurrent versions.

Instead of deleting the current version, the current version becomes a noncurrent version and a delete marker is added as the new current version.

Type: Container

Children: Days or Date

Ancestor: Rule

Filter

Container element describing one or more filters used to identify a subset of objects to which the lifecycle rule applies.

Child: Prefix, Tag, or And (if both prefix and tag are specified)

Type: Container

Ancestor: Rule

ID

Unique identifier for the rule. The value cannot be longer than 255 characters.

Type: String

Ancestor: Rule

Key

Tag key

Type: String

Ancestor: Tag

LifecycleConfiguration

Container for lifecycle rules. You can add as many as 1,000 rules.

Type: Container

Children: Rule

Ancestor: None

ExpiredObjectDeleteMarker

On a versioning-enabled or versioning-suspended bucket, this element indicates whether expired object delete markers are deleted in the bucket.

Type: String

Valid Values: true or false

Ancestor: Expiration

NoncurrentDays

Specifies the number of days an object is noncurrent before performing the associated action.

Type: Positive integer

Ancestor: NoncurrentVersionExpiration

NoncurrentVersionExpiration

Specifies when noncurrent object versions expire. Upon expiration, the applicable noncurrent object versions are permanently deleted.

You set this lifecycle configuration action on a bucket that has versioning enabled (or suspended).

Type: Container

Children: NoncurrentDays

Ancestor: Rule

Prefix

Object key prefix identifying one or more objects to which the rule applies.

Type: String

Ancestor: Filter or And (if you specify Prefix and Tag child elements in the Filter)

Rule

Container for a lifecycle rule.

Type: Container

Ancestor: LifecycleConfiguration

Status

Indicates whether the rule is in effect.

Type: String

Ancestor: Rule

Valid Values: Enabled or Disabled

StorageClass

Specifies the storage class to which you want to transition the object.

Zenko reinterprets this S3 call not as a service quality directive, but as a service locator. In other words, where Amazon S3 uses this directive to define a location by quality of service (e.g., STANDARD or GLACIER), Zenko uses it to direct replication to a location. The quality of service is determined and the replication destination is configured by the user.

Type: String

Ancestor: Transition

Valid Values: Any defined destination name

Tag

Container listing the tag key and value used to filter objects to which the rule applies.

Type: String

Ancestor: Filter

Transition

This action specifies a period in the objects’ lifetime to transition to another storage class.

If versioning has never been enabled on the bucket, the object will transition to the specified storage class.

Otherwise, when your bucket is versioning-enabled or versioning-suspended, only the current version of the object identified in the rule transitions to the specified storage class.

Type: Container

Children: Days or Date, and StorageClass

Ancestor: Rule

Value

Tag value.

Type: String

Ancestor: Tag

Special Errors
Error Code Description HTTP Status Code SOAP Fault Code Prefix
NoSuchLifecycleConfiguration The lifecycle configuration does not exist. 404 Not Found Client
Examples

The following example shows a GET request to retrieve the lifecycle configurations from a specified bucket.

Sample Request
GET /?lifecycle HTTP/1.1
Host: examplebucket.s3.amazonaws.com
x-amz-date: Thu, 15 Nov 2012 00:17:21 GMT
Authorization: signatureValue
Sample Response

The following is a sample response that shows a prefix of “projectdocs/” filter and multiple lifecycle configurations for these objects.

  • Transition to wasabi_cloud after 30 days
  • Transition to azure_cold_storage after 365 days
  • Expire after 3,650 days
HTTP/1.1 200 OK
x-amz-id-2:  ITnGT1y4RyTmXa3rPi4hklTXouTf0hccUjo0iCPjz6FnfIutBj3M7fPGlWO2SEWp
x-amz-request-id: 51991C342C575321
Date: Thu, 15 Nov 2012 00:17:23 GMT
Server: AmazonS3
Content-Length: 358
<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>Archive and then delete rule</ID>
    <Filter>
      <Prefix>projectdocs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>wasabi_cloud</StorageClass>
    </Transition>
    <Transition>
      <Days>365</Days>
      <StorageClass>azure_cold_storage</StorageClass>
    </Transition>
    <Expiration>
      <Days>3650</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
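
Reading the configuration back programmatically is a single call. A minimal boto3 sketch follows (placeholder endpoint and bucket names); if no configuration is set, the client raises a ClientError carrying the NoSuchLifecycleConfiguration code described above.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="http://zenko.example.com")  # placeholder

try:
    config = s3.get_bucket_lifecycle_configuration(Bucket="examplebucket")
    for rule in config["Rules"]:
        print(rule["ID"], rule["Status"])
except ClientError as err:
    if err.response["Error"]["Code"] != "NoSuchLifecycleConfiguration":
        raise
    print("No lifecycle configuration is set on this bucket")
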
DELETE Bucket Lifecycle

The DELETE Bucket Lifecycle operation removes the lifecycle configuration set on a bucket. To use this operation, you must have permission to perform the S3:PutLifecycleConfiguration action.

Requests
Syntax
DELETE /?lifecycle HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The DELETE Bucket Lifecycle operation does not use request parameters.

Headers

The DELETE Bucket Lifecycle operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The DELETE Bucket Lifecycle operation does not use request elements.

Responses
Headers

The DELETE Bucket Lifecycle operation uses only response headers that are common to most responses (refer to Common Response Headers).

Examples

The following example shows a DELETE request that deletes the lifecycle configuration from the specified bucket.

The following is a sample request.

Request
DELETE /?lifecycle HTTP/1.1
Host: examplebucket.s3.amazonaws.com
Date: Wed, 14 Dec 2011 05:37:16 GMT
Authorization: {{signatureValue}}
Response

The following sample response shows a successful “204 No Content” response. Objects in the bucket no longer expire.

HTTP/1.1 204 No Content
x-amz-id-2: Uuag1LuByRx9e6j5OnimrSAMPLEtRPfTaOAa==
x-amz-request-id: 656c76696e672SAMPLE5657374
Date: Wed, 14 Dec 2011 05:37:16 GMT
Connection: keep-alive
Server: AmazonS3
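
The equivalent SDK call is a one-liner; a minimal boto3 sketch with placeholder endpoint and bucket names:

import boto3

s3 = boto3.client("s3", endpoint_url="http://zenko.example.com")  # placeholder

# Remove the bucket's lifecycle configuration; objects no longer expire.
s3.delete_bucket_lifecycle(Bucket="examplebucket")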

Bucket Website Operations

Bucket Website operations enable S3 Buckets to host static websites, with web pages that include static content and potentially client-side scripts. To host a static website, configure an S3 bucket for website hosting and then upload the website content to the bucket.

Bucket Website Specification

Zenko implements the AWS S3 Bucket Website APIs per the AWS specifications. This makes the objects accessible through a bucket website.

Website Redirect Rules Attached to Particular Objects

When an object is put (either through a PUT Object call, an Initiate Multipart Upload call, or a PUT Object - Copy call), an x-amz-website-redirect-location header may be added to the call. If such a header is provided, it will be saved with an object’s metadata and will be retrieved on either a GET Object call or HEAD Object call. Requests to the object at the bucket website endpoint will be redirected to the location specified by the header.

The header is described by the AWS protocol for putting objects.

Any applicable redirect rule in a bucket website configuration will prevail over a rule sent with an x-amz-website-redirect-location header (the same behavior as AWS).

Using Bucket Websites

To experience bucket website behavior, a user must make a request to a bucket website endpoint rather than the usual REST endpoints. Refer to Website Endpoints for the difference in response from a bucket endpoint versus the usual REST endpoint.

To set up Zenko with website endpoints, the Federation env_s3 configuration must include a website_endpoints section listing all desired website endpoints (e.g., s3-website.scality.example.com). Then, if a user has a bucket foo, a bucket website request to Zenko is made to foo.s3-website.scality.example.com.

Note

To be served from the website endpoints, objects must be public, meaning that the ACL of such an object must be public-read. This ACL can be set when the object is originally put or through a PUT Object ACL call. The AWS instructions for setting up bucket websites suggest using a bucket policy to set all objects to public, but Zenko does not yet implement bucket policies so this option is not available.

PUT Bucket Website

The PUT Bucket Website operation configures a bucket to serve as a bucket website.

Requests
Syntax
PUT /?website HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Content-Length: {{length}}
Authorization: {{authenticationInformation}}

<WebsiteConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
<!-- website configuration information. -->
</WebsiteConfiguration>

Note

The request syntax illustrates only a portion of the request headers.

Parameters

The PUT Bucket Website operation does not use Request Parameters.

Headers

The PUT Bucket operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

You can use a website configuration to redirect all requests to the website endpoint of a bucket, or you can add routing rules that redirect only specific requests.

  • To redirect all website requests sent to the bucket’s website endpoint, add a website configuration with the following elements. Because all requests are sent to another website, you do not need to provide an index document name for the bucket.

    Element Type Description
    WebsiteConfiguration container

    The root element for the website configuration

    Ancestors: None

    RedirectAllRequestsTo container

    Describes the redirect behavior for every request to this bucket’s website endpoint. If this element is present, no other siblings are allowed.

    Ancestors: WebsiteConfiguration

    HostName string

    Name of the host where requests will be redirected.

    Ancestors: RedirectAllRequestsTo

    Protocol string

    Protocol to use (http, https) when redirecting requests. The default is the protocol that is used in the original request.

    Ancestors: RedirectAllRequestsTo

  • For granular control over redirects, use the following elements to add routing rules that describe conditions for redirecting requests and information about the redirect destination. In this case, the website configuration must provide an index document for the bucket, because some requests might not be redirected.

    Element Type Description
    WebsiteConfiguration Container

    Container for the request

    Ancestors: None

    IndexDocument Container

    Container for the Suffix element

    Ancestors: WebsiteConfiguration

    Suffix String

    A suffix that is appended to a request that is for a directory on the website endpoint (e.g., if the suffix is index.html and you make a request to samplebucket/images/, the data returned will be for the object with the key name images/index.html)

    The suffix must not be empty and must not include a slash character.

    Ancestors: WebsiteConfiguration.IndexDocument

    ErrorDocument Container

    Container for the Key element

    Ancestors: WebsiteConfiguration

    Key String

    The object key name to use when a 4XX-class error occurs. This key identifies the page that is returned when such an error occurs.

    Ancestors: WebsiteConfiguration.ErrorDocument

    Condition: Required when ErrorDocument is specified.

    RoutingRules Container

    Container for a collection of RoutingRule elements.

    Ancestors: WebsiteConfiguration

    RoutingRule Container

    Container for one routing rule that identifies a condition and a redirect that applies when the condition is met.

    Ancestors: WebsiteConfiguration.RoutingRules

    Condition: In a RoutingRules container, there must be at least one RoutingRule element.

    Condition Container

    A container for describing a condition that must be met for the specified redirect to apply.

    For example:

    • If request is for pages in the /docs folder, redirect to the /documents folder.
    • If request results in a 4xx HTTP error, redirect the request to another host to process the error.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule

    KeyPrefixEquals String

    The object key name prefix when the redirect is applied. For example, to redirect requests for ExamplePage.html, the key prefix is ExamplePage.html. To redirect requests for all pages with the prefix docs/, the key prefix is docs/, which identifies all objects in the docs/ folder.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule.Condition

    Condition: Required when the parent element Condition is specified and sibling HttpErrorCodeReturnedEquals is not specified. If both conditions are specified, both must be true for the redirect to be applied.

    HttpErrorCodeReturnedEquals String

    The HTTP error code when the redirect is applied. In the event of an error, if the error code equals this value, then the specified redirect is applied.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule.Condition

    Condition: Required when parent Condition element is specified and sibling KeyPrefixEquals is not specified. If both are specified, then both must be true for the redirect to be applied.

    Redirect Container

    Container for redirect information. You can redirect requests to another host, to another page, or with another protocol. In the event of an error, you can specify a different error code to return.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule

    Protocol String

    The protocol to use in the redirect request.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule.Redirect

    Valid Values: http, https

    Condition: Not required if one of the siblings is present

    HostName String

    The host name to use in the redirect request.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule.Redirect

    Condition: Not required if one of the siblings is present

    ReplaceKeyPrefixWith String

    The object key prefix to use in the redirect request. For example, to redirect requests for all pages with the prefix “docs/” (objects in the docs/ folder) to documents/, set a condition block with KeyPrefixEquals set to docs/ and in the Redirect set ReplaceKeyPrefixWith to “documents”.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule.Redirect

    Condition: Not required if one of the siblings is present. Can be present only if ReplaceKeyWith is not provided.

    ReplaceKeyWith String

    The specific object key to use in the redirect request. For example, redirect request to error.html.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule.Redirect

    Condition: Not required if one of the siblings is present. Can be present only if ReplaceKeyPrefixWith is not provided.

    HttpRedirectCode String

    The HTTP redirect code to use on the response.

    Ancestors: WebsiteConfiguration.RoutingRules.RoutingRule.Redirect

    Condition: Not required if one of the siblings is present.

Responses
Headers

The PUT Bucket Website operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The PUT Bucket Website operation does not return response elements.

Examples
Configure a Bucket as a Website (Add Website Configuration)

This request configures a bucket, example.com, as a website. The configuration in the request specifies index.html as the index document. It also specifies the optional error document, SomeErrorDocument.html.

Request
PUT /?website HTTP/1.1
Host: example.com.s3.scality.com
Content-Length: 256
Date: Thu, 27 Jan 2011 12:00:00 GMT
Authorization: {{authenticationInformation}}
<WebsiteConfiguration xmlns='http://s3.scality.com/doc/2006-03-01/'>
    <IndexDocument>
        <Suffix>index.html</Suffix>
    </IndexDocument>
    <ErrorDocument>
        <Key>SomeErrorDocument.html</Key>
    </ErrorDocument>
</WebsiteConfiguration>
Response
HTTP/1.1 200 OK
x-amz-id-2: YgIPIfBiKa2bj0KMgUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 80CD4368BD211111
Date: Thu, 27 Jan 2011 00:00:00 GMT
Content-Length: 0
Server: ScalityS3
Configure a Bucket as a Website but Redirect All Requests

The following request configures a bucket, www.example.com, as a website; however, the configuration specifies that all GET requests for the www.example.com bucket’s website endpoint are redirected to the host example.com.

Request
PUT /?website HTTP/1.1
Host: www.example.com.s3.scality.com
Content-Length: 256
Date: Mon, 15 Feb 2016 15:30:07 GMT
Authorization: {{authenticationInformation}}
<WebsiteConfiguration xmlns='http://s3.scality.com/doc/2006-03-01/'>
   <RedirectAllRequestsTo>
      <HostName>example.com</HostName>
    </RedirectAllRequestsTo>
</WebsiteConfiguration>
Configure a Bucket as a Website and Specify Optional Redirection Rules

You can further customize the website configuration by adding routing rules that redirect requests for one or more objects. For example, suppose your bucket contained the following objects:

  • index.html
  • docs/article1.html
  • docs/article2.html

If you decided to rename the folder from docs/ to documents/, you would need to redirect requests for the prefix docs/ to documents/. For example, a request for docs/article1.html needs to be redirected to documents/article1.html. In this case, update the website configuration and add a routing rule as shown in the following request:

Request
PUT /?website HTTP/1.1
Host: www.example.com.s3.scality.com
Content-Length: length-value
Date: Thu, 27 Jan 2011 12:00:00 GMT
Authorization: {{authenticationInformation}}
<WebsiteConfiguration xmlns='http://s3.scality.com/doc/2006-03-01/'>
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <ErrorDocument>
    <Key>Error.html</Key>
  </ErrorDocument>

  <RoutingRules>
    <RoutingRule>
    <Condition>
      <KeyPrefixEquals>docs/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>documents/</ReplaceKeyPrefixWith>
    </Redirect>
    </RoutingRule>
  </RoutingRules>
</WebsiteConfiguration>
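
The same configuration, index and error documents plus the docs/-to-documents/ redirect, can be expressed as a boto3 call; a minimal sketch with a placeholder endpoint:

import boto3

s3 = boto3.client("s3", endpoint_url="http://zenko.example.com")  # placeholder

s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "Error.html"},
        "RoutingRules": [
            {
                # Redirect requests for the old docs/ prefix to documents/.
                "Condition": {"KeyPrefixEquals": "docs/"},
                "Redirect": {"ReplaceKeyPrefixWith": "documents/"},
            }
        ],
    },
)
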
Configure a Bucket as a Website and Redirect Errors

You can use a routing rule to specify a condition that checks for a specific HTTP error code. When a page request results in this error, you can optionally reroute the request; for example, you might route requests to another host and optionally process the error. The routing rule in the following request redirects requests to an EC2 instance in the event of an HTTP 404 error. For illustration, the redirect also inserts the object key prefix report-404/ into the redirect. For example, if you request a page ExamplePage.html and it results in an HTTP 404 error, the request is routed to a page report-404/testPage.html on the specified EC2 instance. If there is no routing rule and an HTTP 404 error occurs, Error.html is returned.

Request
PUT /?website HTTP/1.1
Host: www.example.com.s3.scality.com
Content-Length: 580
Date: Thu, 27 Jan 2011 12:00:00 GMT
Authorization: {{authenticationInformation}}
<WebsiteConfiguration xmlns='http://s3.scality.com/doc/2006-03-01/'>
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <ErrorDocument>
    <Key>Error.html</Key>
  </ErrorDocument>

  <RoutingRules>
    <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>ec2-11-22-333-44.compute-1.scality.com</HostName>
      <ReplaceKeyPrefixWith>report-404/</ReplaceKeyPrefixWith>
    </Redirect>
    </RoutingRule>
  </RoutingRules>
</WebsiteConfiguration>
Configure a Bucket as a Website and Redirect Folder Requests to a Page

Suppose you have the following pages in your bucket:

  • images/photo1.jpg
  • images/photo2.jpg
  • images/photo3.jpg

and suppose you want to route requests for all pages with the images/ prefix to a single page, errorpage.html. You can add a website configuration to your bucket with the routing rule shown in the following request.

Request
PUT /?website HTTP/1.1
Host: www.example.com.s3.scality.com
Content-Length: 481
Date: Thu, 27 Jan 2011 12:00:00 GMT
Authorization: {{authenticationInformation}}
<WebsiteConfiguration xmlns='http://s3.scality.com/doc/2006-03-01/'>
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <ErrorDocument>
    <Key>Error.html</Key>
  </ErrorDocument>

  <RoutingRules>
    <RoutingRule>
    <Condition>
      <KeyPrefixEquals>images/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyWith>errorpage.html</ReplaceKeyWith>
    </Redirect>
    </RoutingRule>
  </RoutingRules>
</WebsiteConfiguration>
GET Bucket Website

Use the GET Bucket Website operation to retrieve a bucket website configuration. This GET operation requires the S3:GetBucketWebsite permission.

Requests
Syntax
GET /?website HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Content-Length: {{length}}
Authorization: {{authenticationInformation}}

Note

The Request Syntax illustrates only a portion of the request headers.

Parameters

The GET Bucket Website operation does not use Request Parameters.

Headers

The GET Bucket Website operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

This operation does not use request elements.

Responses
Headers

The GET Bucket Website operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The GET Bucket Website response XML includes the same elements that were uploaded when you configured the bucket as a website. For more information, refer to PUT Bucket Website.

Examples
Request
GET /?website HTTP/1.1
Host: example.com
Date: Mon, 15 Feb 2016 15:30:07 GMT
Authorization: {{authenticationInformation}}
Response
HTTP/1.1 200 OK
x-amz-id-2: YgIPIfBiKa2bj0KMgUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 3848CD259D811111
Date: Thu, 27 Jan 2011 00:49:26 GMT
Content-Length: 240
Content-Type: application/xml
Transfer-Encoding: chunked
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<WebsiteConfiguration xmlns="http://s3.scality.com/doc/2006-03-01/">
  <IndexDocument>
    <Suffix>index.html</Suffix>
  </IndexDocument>
  <ErrorDocument>
    <Key>404.html</Key>
  </ErrorDocument>
</WebsiteConfiguration>
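
Programmatically, the same configuration can be read back with boto3; a minimal sketch with placeholder endpoint and bucket names:

import boto3

s3 = boto3.client("s3", endpoint_url="http://zenko.example.com")  # placeholder

website = s3.get_bucket_website(Bucket="examplebucket")
print(website["IndexDocument"]["Suffix"])  # e.g., index.html
# ErrorDocument is present only if one was configured.
print(website.get("ErrorDocument", {}).get("Key"))
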
DELETE Bucket Website

Use the DELETE Bucket Website operation to remove the website configuration for a bucket. This operation requires the S3:DeleteBucketWebsite permission. If there is no bucket website configuration, this operation returns a 204 No Content response.

Requests
Syntax
DELETE /?website HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authenticationInformation}}

Note

The Request Syntax illustrates only a portion of the request headers.

Parameters

The DELETE Bucket Website operation does not use Request Parameters.

Headers

The DELETE Bucket Website operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

This operation does not use request elements.

Responses
Headers

The DELETE Bucket Website operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

This operation does not use response elements.

Examples
Request
DELETE /?website HTTP/1.1
Host: example.com
Date: Mon, 15 Feb 2016 15:30:07 GMT
Authorization: {{authenticationInformation}}
Response
HTTP/1.1 204 No Content
x-amz-id-2: YgIPIfBiKa2bj0KMgUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 3848CD259D811111
Date: Thu, 27 Jan 2011 00:49:26 GMT
Server: ScalityS3

Object Operations

This section presents a compendium of available API calls for object operations in Zenko.

DELETE Object

The DELETE Object operation removes the null version (if there is one) of an object and inserts a delete marker, which becomes the current version of the object. If there isn’t a null version, Zenko does not remove any objects.

Only the bucket owner can remove a specific version, using the versionId subresource, which permanently deletes the version. If the object deleted is a delete marker, Zenko sets the response header x-amz-delete-marker to true.

Requests
Syntax
DELETE /ObjectName HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The DELETE Object operation does not use Request Parameters.

Headers

The DELETE Object operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The DELETE Object operation does not use request elements.

Responses
Headers

The DELETE Object operation can include the following response headers in addition to the response headers that are common to all operations (see Common Response Headers).

Header Type Description
x-amz-delete-marker Boolean

Valid Values: true | false

Returns true if a delete marker was created by the delete operation. If a specific version was deleted, returns true if the deleted version was a delete marker.

Default: false

x-amz-version-id string

Returns the version ID of the delete marker created as a result of the DELETE operation. If a specific object version is deleted, the value returned by this header is the version ID of the object version deleted.

Default: None

Elements

The DELETE Object operation does not return response elements.

Examples
Single Object Delete

The request sample deletes the object, sampledocument.pdf.

Request
DELETE /sampledocument.pdf HTTP/1.1
Host: {{bucketname}}.s3.scality.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: {{authorizationString}}
Content-Type: text/plain
Response
HTTP/1.1 204 NoContent
x-amz-id-2: LriYPLdmOdAiIfgSm/F1YsViT1LW94/xUQxMsF7xiEb1a0wiIOIxl+zbwZ163pt7
x-amz-request-id: 0A49CE4060975EAC
Date: Wed, 12 Oct 2009 17:50:00 GMT
Content-Length: 0
Connection: close
Server: ScalityS3
Deleting a Specified Version of an Object

The following sample request deletes the specified version of the object, sampledocument2.pdf.

Request
DELETE /sampledocument2.pdf?versionId=UIORUnfndfiufdisojhr398493jfdkjFJjkndnqUifhnw89493jJFJ HTTP/1.1
Host: {{bucketname}}.s3.scality.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: {{authorizationString}}
Content-Type: text/plain
Content-Length: 0
Response
HTTP/1.1 204 NoContent
x-amz-id-2: LriYPLdmOdAiIfgSm/F1YsViT1LW94/xUQxMsF7xiEb1a0wiIOIxl+zbwZ163pt7
x-amz-request-id: 0A49CE4060975EAC
x-amz-version-id: UIORUnfndfiufdisojhr398493jfdkjFJjkndnqUifhnw89493jJFJ
Date: Wed, 12 Oct 2009 17:50:00 GMT
Content-Length: 0
Connection: close
Server: ScalityS3
Response if the Deleted Object Is a Delete Marker
HTTP/1.1 204 NoContent
x-amz-id-2: LriYPLdmOdAiIfgSm/F1YsViT1LW94/xUQxMsF7xiEb1a0wiIOIxl+zbwZ163pt7
x-amz-request-id: 0A49CE4060975EAC
x-amz-version-id: 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo
x-amz-delete-marker: true
Date: Wed, 12 Oct 2009 17:50:00 GMT
Content-Length: 0
Connection: close
Server: ScalityS3
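
Both the simple and the versioned delete map onto a single SDK call. A minimal boto3 sketch follows; the endpoint, bucket, and version ID are placeholders taken from the samples above.

import boto3

s3 = boto3.client("s3", endpoint_url="http://zenko.example.com")  # placeholder

# Simple delete: on a versioned bucket this inserts a delete marker.
resp = s3.delete_object(Bucket="examplebucket", Key="sampledocument.pdf")
print(resp.get("DeleteMarker"), resp.get("VersionId"))

# Versioned delete: permanently removes the named version.
s3.delete_object(
    Bucket="examplebucket",
    Key="sampledocument2.pdf",
    VersionId="UIORUnfndfiufdisojhr398493jfdkjFJjkndnqUifhnw89493jJFJ",
)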

DELETE Object Tagging

This implementation of the DELETE operation uses the tagging subresource to remove the entire tag set from the specified object. For more information about managing object tags, refer to Object Tagging in the Amazon Simple Storage Service Developer Guide.

To use the DELETE Object Tagging operation, the user must have permission to perform the s3:DeleteObjectTagging action.

To delete tags of a specific object version, add the versionId query parameter in the request (permission for the s3:DeleteObjectVersionTagging action is required).

Requests
Syntax
DELETE /ObjectKey?tagging HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Content-Length: {{length}}
Authorization: {{authenticationInformation}}
Parameters

The DELETE Object Tagging operation does not use Request Parameters.

Headers

The DELETE Object Tagging operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The DELETE Object Tagging operation does not use request elements.

Responses
Headers

The DELETE Object Tagging operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The DELETE Object Tagging operation does not return response elements.

Examples
Request
DELETE /exampleobject?tagging HTTP/1.1
Host: {{bucketname}}.s3.scality.com
Date: Wed, 12 Oct 2016 17:50:00 GMT
Authorization: {{authorizationString}}
Response

The following successful response shows Zenko returning a 204 No Content response. The tag set for the object has been removed.

HTTP/1.1 204 No Content
Date: Wed, 25 Nov 2016 12:00:00 GMT
Connection: close
Server: ScalityS3
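
A boto3 equivalent, with placeholder endpoint and names; pass VersionId as well to remove the tag set of a specific object version:

import boto3

s3 = boto3.client("s3", endpoint_url="http://zenko.example.com")  # placeholder

# Remove the entire tag set from the current version of the object.
s3.delete_object_tagging(Bucket="examplebucket", Key="exampleobject")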

Multi-Object Delete

The Multi-Object Delete operation enables the deletion of multiple objects from a bucket using a single HTTP request. If object keys to be deleted are known, this operation provides a suitable alternative to sending individual delete requests, reducing per-request overhead. Refer to DELETE Object.

The Multi-Object Delete request contains a list of up to 1000 keys to delete. In the XML, provide the object key names and, optionally, a version ID to delete a specific version of an object from a versioning-enabled bucket. For each key, Zenko performs a delete operation and returns the result of that delete, success or failure, in the response. If the object specified in the request is not found, Zenko returns the result as deleted.

The Multi-Object Delete operation supports two modes for the response—verbose and quiet. By default, the operation uses verbose mode in which the response includes the result of deletion of each key in the request. In quiet mode the response includes only keys where the delete operation encountered an error. For a successful deletion, the operation does not return any information about the delete in the response body.

Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.

Requests
Syntax
POST /?delete HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Authorization: {{authorizationString}}
Content-Length: {{length}}
Content-MD5: {{MD5}}

<?xml version="1.0" encoding="UTF-8"?>
<Delete>
    <Quiet>true</Quiet>
    <Object>
         <Key>Key</Key>
         <VersionId>VersionId</VersionId>
    </Object>
    <Object>
         <Key>Key</Key>
    </Object>
    ...
</Delete>
Parameters

The Multi-Object Delete operation requires a single query string parameter called “delete” to distinguish it from other bucket POST operations.

Headers

The Multi-Object Delete operation uses two request headers — Content-MD5, and Content-Length — in addition to those that are common to all operations (refer to Common Request Headers).

Header Type Description
Content-MD5 string The base64-encoded 128-bit MD5 digest of the data. This header must be used as a message integrity check to verify that the request body was not corrupted in transit.
Content-Length string Length of the body according to RFC 2616.
Elements

The Multi-Object Delete operation can request the following items:

Element Type Description
Delete Container

Container for the request

Ancestor: None

Children: One or more Object elements and an optional Quiet element

Quiet Boolean

Element to enable quiet mode for the request (when added, the element must be set to true)

Ancestor: Delete

Object Container

Element that describes the delete request for an object

Ancestor: Delete

Children: Key element and an optional VersionId element

Key String

Key name of the object to delete

Ancestor: Object

VersionId String

VersionId for the specific version of the object to delete

Ancestor: Object

Responses
Headers

The Multi-Object Delete operation uses only response headers that are common to all operations (see Common Response Headers).

Elements

The Multi-Object Delete operation can return the following XML elements in its response:

Element Type Description
DeleteResult Container

Container for the response

Ancestor: None

Children: Deleted, Error

Deleted Container

Container element for a successful delete (identifies the object that was successfully deleted)

Ancestor: DeleteResult

Children: Key, VersionId

Key String

Key name for the object that Amazon S3 attempted to delete

Ancestor: Deleted, Error

VersionId String

Version ID of the versioned object Zenko attempted to delete. This element is included only in the case of a versioned delete request.

Ancestor: Deleted or Error

DeleteMarker Boolean

DeleteMarker element with a true value indicates that the request accessed a delete marker. If a specific delete request either creates or deletes a delete marker, this element is returned in the response with a value of true. This is the case only when your Multi-Object Delete request is on a bucket that has versioning enabled or suspended.

Ancestor: Deleted

DeleteMarkerVersionId String

Version ID of the delete marker accessed (deleted or created) by the request.

If the specific delete request in the Multi-Object Delete either creates or deletes a delete marker, Zenko returns this element in the response with the version ID of the delete marker. When deleting an object in a bucket with versioning enabled, this value is present in the following two cases:

  • A non-versioned delete request is sent; that is, only the object key is specified and not the version ID. In this case, Zenko creates a delete marker and returns its version ID in the response.
  • A versioned delete request is sent; that is, an object key and a version ID are specified in the request; however, the version ID identifies a delete marker. In this case, Zenko deletes the delete marker and responds with the specific version ID.

Ancestor: Deleted

Error Container

Container for a failed delete operation that describes the object that Zenko attempted to delete and the error it encountered.

Ancestor: DeleteResult

Children: Key, VersionId, Code, Message

Key String

Key for the object Zenko attempted to delete

Ancestor: Error

Code String

Status code for the result of the failed delete

Valid Values: AccessDenied, InternalError

Ancestor: Error

Message String

Error description

Ancestor: Error

Examples
Multi-Object Delete Resulting in Mixed Success/Error Response

The request sample illustrates a Multi-Object Delete request to delete objects that result in mixed success and error responses.

Request

The request deletes two objects from {{bucketname}} (in this example, the requester does not have permission to delete the sample2.txt object).

POST /?delete HTTP/1.1
Host: {{bucketname}}.s3.scality.com
Accept: */*
x-amz-date: Wed, 12 Oct 2009 17:50:00 GMT
Content-MD5: p5/WA/oEr30qrEE121PAqw==
Authorization: {{authorizationString}}
Content-Length: {{length}}
Connection: Keep-Alive
<Delete>
  <Object>
    <Key>sample1.txt</Key>
  </Object>
  <Object>
    <Key>sample2.txt</Key>
  </Object>
</Delete>
Response

The response includes a DeleteResult element with a Deleted element for the object that Zenko successfully deleted, and an Error element for the object that Zenko did not delete because the user did not have permission to delete it.

HTTP/1.1 200 OK
x-amz-id-2: 5h4FxSNCUS7wP5z92eGCWDshNpMnRuXvETa4HH3LvvH6VAIr0jU7tH9kM7X+njXx
x-amz-request-id: A437B3B641629AEE
Date: Fri, 02 Dec 2011 01:53:42 GMT
Content-Type: application/xml
Server: ScalityS3
Content-Length: 251
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Deleted>
    <Key>sample1.txt</Key>
  </Deleted>
  <Error>
    <Key>sample2.txt</Key>
    <Code>AccessDenied</Code>
    <Message>Access Denied</Message>
  </Error>
</DeleteResult>
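
In boto3 this request is delete_objects, which builds the XML body and the required Content-MD5 header for you. A minimal sketch with placeholder names, reproducing the mixed success/error case:

import boto3

s3 = boto3.client("s3", endpoint_url="http://zenko.example.com")  # placeholder

resp = s3.delete_objects(
    Bucket="examplebucket",
    Delete={
        "Objects": [{"Key": "sample1.txt"}, {"Key": "sample2.txt"}],
        "Quiet": False,  # verbose mode: report the result for every key
    },
)
for deleted in resp.get("Deleted", []):
    print("deleted:", deleted["Key"])
for error in resp.get("Errors", []):
    print("failed:", error["Key"], error["Code"], error["Message"])
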
Deleting Object from a Versioned Bucket

When deleting an item from a versioning-enabled bucket, all versions of that object remain in the bucket; however, Zenko inserts a delete marker.

The following scenarios describe the behavior of a Multi-Object Delete request when versioning is enabled for a bucket.

Scenario 1: Simple Delete

As shown, the Multi-Object Delete request specifies only one key.

POST /?delete HTTP/1.1
Host: {{bucketname}}.s3.scality.com
Accept: */*
x-amz-date: Wed, 30 Nov 2011 03:39:05 GMT
Content-MD5: p5/WA/oEr30qrEEl21PAqw==
Authorization: {{authorizationString}}
Content-Length: {{length}}
Connection: Keep-Alive

<Delete>
  <Object>
    <Key>SampleDocument.txt</Key>
  </Object>
</Delete>

Because versioning is enabled on the bucket, Zenko does not delete the object, instead adding a delete marker. The response indicates that a delete marker was added (the DeleteMarker element in the response has a value of true) and the version number of the added delete marker.

HTTP/1.1 200 OK
x-amz-id-2: P3xqrhuhYxlrefdw3rEzmJh8z5KDtGzb+/FB7oiQaScI9Yaxd8olYXc7d1111ab+
x-amz-request-id: 264A17BF16E9E80A
Date: Wed, 30 Nov 2011 03:39:32 GMT
Content-Type: application/xml
Server: ScalityS3
Content-Length: 276
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Deleted>
    <Key>SampleDocument.txt</Key>
    <DeleteMarker>true</DeleteMarker>
    <DeleteMarkerVersionId>NeQt5xeFTfgPJD8B4CGWnkSLtluMr11s</DeleteMarkerVersionId>
  </Deleted>
</DeleteResult>
Scenario 2: Versioned Delete

As shown, the Multi-Object Delete attempts to delete a specific version of an object.

POST /?delete HTTP/1.1
Host: {{bucketname}}.s3.scality.com
Accept: */*
x-amz-date: Wed, 30 Nov 2011 03:39:05 GMT
Content-MD5: p5/WA/oEr30qrEEl21PAqw==
Authorization: {{authorizationString}}
Content-Length: {{length}}
Connection: Keep-Alive
<Delete>
  <Object>
    <Key>sampledocument.txt</Key>
    <VersionId>OYcLXagmS.WaD..oyH4KRguB95_YhLs7</VersionId>
  </Object>
</Delete>

In this case, Zenko deletes the specific object version from the bucket and returns the following response. In the response, Zenko returns the key and version ID of the deleted object.

HTTP/1.1 200 OK
x-amz-id-2: P3xqrhuhYxlrefdw3rEzmJh8z5KDtGzb+/FB7oiQaScI9Yaxd8olYXc7d1111xx+
x-amz-request-id: 264A17BF16E9E80A
Date: Wed, 30 Nov 2011 03:39:32 GMT
Content-Type: application/xml
Server: ScalityS3
Content-Length: 219
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult xmlns="http://s3.scality.com/doc/2006-03-01/">
  <Deleted>
    <Key>sampledocument.txt</Key>
    <VersionId>OYcLXagmS.WaD..oyH4KRguB95_YhLs7</VersionId>
  </Deleted>
</DeleteResult>
Scenario 3: Versioned Delete of a Delete Marker

If, as in the preceding example, the request refers to a delete marker (instead of an object), Zenko deletes the delete marker. The effect of this operation is to make the object reappear in the bucket. The response returned by Zenko indicates the deleted delete marker (a DeleteMarker element with value true) and the version ID of the delete marker.

HTTP/1.1 200 OK
x-amz-id-2: IIPUZrtolxDEmWsKOae9JlSZe6yWfTye3HQ3T2iAe0ZE4XHa6NKvAJcPp51zZaBr
x-amz-request-id: D6B284CEC9B05E4E
Date: Wed, 30 Nov 2011 03:43:25 GMT
Content-Type: application/xml
Server: ScalityS3
Content-Length: {{length}}
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult xmlns="http://s3.scalitys3.com/doc/2006-03-01/">
<Deleted>
<Key>sampledocument.txt</Key>
<VersionId>NeQt5xeFTfgPJD8B4CGWnkSLtluMr11s</VersionId>
<DeleteMarker>true</DeleteMarker>
<DeleteMarkerVersionId>NeQt5xeFTfgPJD8B4CGWnkSLtluMr11s</DeleteMarkerVersionId>
</Deleted>
</DeleteResult>

In general, when a Multi-Object Delete request results in Zenko either adding a delete marker or removing a delete marker, the response returns the following elements:

<DeleteMarker>true</DeleteMarker>
<DeleteMarkerVersionId>NeQt5xeFTfgPJD8B4CGWnkSLtluMr11s</DeleteMarkerVersionId>
Malformed XML in the Request

The request sample sends a malformed XML document (missing the Delete end element).

Request
POST /?delete HTTP/1.1
Host: {{bucketname}}.s3.scality.com
Accept: */*
x-amz-date: Wed, 30 Nov 2011 03:39:05 GMT
Content-MD5: p5/WA/oEr30qrEEl21PAqw==
Authorization: AWS AKIAIOSFODNN7EXAMPLE:W0qPYCLe6JwkZAD1ei6hp9XZIee=
Content-Length: 104
Connection: Keep-Alive
<Delete>
  <Object>
    <Key>404.txt</Key>
  </Object>
  <Object>
    <Key>a.txt</Key>
  </Object>
Response

The response returns an Error element that describes the error.

HTTP/1.1 200 OK
x-amz-id-2: P3xqrhuhYxlrefdw3rEzmJh8z5KDtGzb+/FB7oiQaScI9Yaxd8olYXc7d1111ab+
x-amz-request-id: 264A17BF16E9E80A
Date: Wed, 30 Nov 2011 03:39:32 GMT
Content-Type: application/xml
Server: ScalityS3
Content-Length: 207
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>MalformedXML</Code>
  <Message>The XML you provided was not well-formed or did not validate against our published schema</Message>
  <RequestId>264A17BF16E9E80A</RequestId>
  <HostId>P3xqrhuhYxlrefdw3rEzmJh8z5KDtGzb+/FB7oiQaScI9Yaxd8olYXc7d1111ab+</HostId>
</Error>

GET Object

Using GET Object requires read access to the target object. If read access is granted to an anonymous user, the object can be returned without using an authorization header.

By default, the GET Object operation returns the current version of an object. To return a different version, use the versionId subresource.

Tip

If the current version of the object is a delete marker, Zenko behaves as if the object were deleted and includes x-amz-delete-marker: true in the response.

Requests
Syntax
GET /ObjectName HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Range: bytes={{byteRange}}
Parameters

Values for a set of response headers can be overridden in the GET Object response using the query parameters listed in the following table. These response header values are sent only on a successful request, one in which a status code 200 OK is returned. The set of headers that can be overridden using these parameters is a subset of the headers that Zenko accepts when an object is created, including Content-Type, Content-Language, Expires, Cache-Control, Content-Disposition, and Content-Encoding.

Note

To use these parameters, the request must be signed, either with an Authorization header or a pre-signed URL. These parameters cannot be used with an unsigned (anonymous) request.

Parameter Type Description
response-content-type string

Sets the Content-Type header of the response

Default: None

response-content-language string

Sets the Content-Language header of the response

Default: None

response-expires string

Sets the Expires header of the response

Default: None

response-cache-control string

Sets the Cache-Control header of the response

Default: None

response-content-disposition string

Sets the Content-Disposition header of the response

Default: None

response-content-encoding string

Sets the Content-Encoding header of the response

Default: None
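
In boto3, these query parameters surface as the Response* parameters of get_object and can also be baked into a pre-signed URL. A minimal sketch, with the endpoint, bucket, and key as placeholders:

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

# Each Response* parameter maps to the corresponding response-* query
# parameter and overrides the stored header on this response only.
obj = s3.get_object(
    Bucket='mybucket',
    Key='Junk3.txt',
    ResponseCacheControl='No-cache',
    ResponseContentDisposition='attachment; filename=testing.txt',
    ResponseContentLanguage='mi, en',
    ResponseContentType='text/plain',
)
print(obj['CacheControl'], obj['ContentDisposition'])

# The same overrides in a pre-signed URL; the resulting request is
# signed, as these parameters require.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'mybucket', 'Key': 'Junk3.txt',
            'ResponseContentType': 'text/plain'},
    ExpiresIn=3600,
)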

Additional Parameters
Versioning

By default, the GET operation returns the current version of an object. To return a different version, use the versionId subresource.

PartNumber

The PartNumber parameter specifies the part number of the object to read. This is a positive integer between 1 and 10,000, and it effectively performs a “ranged” GET request for the specified part. This is useful for downloading just a part of an object that was originally uploaded by multipart upload.
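
A short boto3 sketch of both parameters (placeholder endpoint, bucket, keys, and version ID):

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

# versionId: read a noncurrent version instead of the current one.
obj = s3.get_object(Bucket='mybucket', Key='myObject',
                    VersionId='3/L4kqtJlcpXroDTDmpUMLUo')

# PartNumber: a "ranged" GET covering exactly one part of an object
# that was uploaded with multipart upload.
part = s3.get_object(Bucket='mybucket', Key='large-object', PartNumber=1)
print(part['ContentLength'], part.get('ContentRange'))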

Headers

The GET Object operation can use a number of optional request headers in addition to those that are common to all operations (see Common Request Headers).

Header Type Description
If-Modified-Since string

Return the object only if it has been modified since the specified time, otherwise return a 304 (not modified).

Default: None

Constraints: None

If-Unmodified-Since string

Return the object only if it has not been modified since the specified time, otherwise return a 412 (precondition failed).

Default: None

Constraints: None

If-Match string

Return the object only if its entity tag (ETag) is the same as the one specified; otherwise, return a 412 (precondition failed).

Default: None

Constraints: None

If-None-Match string

Return the object only if its entity tag (ETag) is different from the one specified; otherwise, return a 304 (not modified)

Default: None

Constraints: None

x-amz-location-constraint string

Return object from the location specified here. Location value must be a valid Zenko location name to which the object has been replicated, or an error is returned.

Default: None

Constraints: Location name provided in header must be a valid replication target.

Users can specify a location from which to read the object by providing the custom “x-amz-location-constraint” header with the name of the alternate location as its value. Using this header, an object can be retrieved even if it is unavailable in its original/preferred location. The location value must be a valid Zenko location name to which the object has been replicated, or an error is returned.

Note

This Zenko extension is not available in the standard S3 API. While applications may be modified to use this header for greater availability, doing so may incur egress fees for the specified cloud.
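
boto3 has no named parameter for this extension, but one possible way to send it is a botocore event hook that adds the header before the request is signed, so the x-amz- header is covered by the v4 signature. The endpoint and location name below are placeholders:

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

def add_location_constraint(request, **kwargs):
    # Added before signing, so the x-amz-* header is included in the
    # SigV4 signature as required. 'aws-location-1' is a placeholder
    # for a valid Zenko location name.
    request.headers['x-amz-location-constraint'] = 'aws-location-1'

s3.meta.events.register('before-sign.s3.GetObject', add_location_constraint)
obj = s3.get_object(Bucket='mybucket', Key='my-document.pdf')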

Elements

The GET Object operation does not use request elements.

Responses
Headers
Header Type Description
x-amz-delete-marker Boolean

Specifies whether the object retrieved was (true) or was not (false) a delete marker. If false, the response header does not appear in the response.

Valid Values: true | false

Default: false

x-amz-meta-* string Headers starting with this prefix are user-defined metadata, each of which is stored and returned as a set of key-value pairs. Zenko does not validate or interpret user-defined metadata.
x-amz-version-id string

Returns the version ID of the retrieved object if it has a unique version ID.

Default: None

x-amz-website-redirect-location string

When a bucket is configured as a website, this metadata can be set on the object so the website endpoint will evaluate the request for the object as a 301 redirect to another object in the same bucket or an external URL.

Default: None

Elements

The GET Object operation does not return response elements.

Examples
Returning the Object “my-document.pdf”
Request
GET /my-document.pdf HTTP/1.1
Host: {{bucketName}}.s3.scality.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Wed, 12 Oct 2009 17:50:00 GMT
ETag: "fba9dede5f27731c9771645a39863328"
Content-Length: 434234
Content-Type: text/plain
Connection: close
Server: ScalityS3
[434234 bytes of object data]

If the Latest Object is a Delete Marker:

HTTP/1.1 404 Not Found
x-amz-request-id: 318BC8BC148832E5
x-amz-id-2: eftixk72aD6Ap51Tnqzj7UDNEHGran
x-amz-version-id: 3GL4kqtJlcpXroDTDm3vjVBH40Nr8X8g
x-amz-delete-marker:  true
Date: Wed, 28 Oct 2009 22:32:00 GMT
Content-Type: text/plain
Connection: close
Server: ScalityS3

Because the current version of the object is a delete marker, Zenko returns a 404 Not Found error.

Getting a Specified Version of an Object
Request
GET /myObject?versionId=3/L4kqtJlcpXroDTDmpUMLUo HTTP/1.1
Host: {{bucketName}}.s3.scality.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap54OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
x-amz-version-id: 3/L4kqtJlcpXroDTDmJ+rmSpXd3QBpUMLUo
ETag: "fba9dede5f27731c9771645a39863328"
Content-Length: 434234
Content-Type: text/plain
Connection: close
Server: ScalityS3
[434234 bytes of object data]
Specifying All Query String Parameters, Overriding Response Header Values
Request
GET /Junk3.txt?response-cache-control=No-cache&response-content-disposition=attachment%3B%20filename%3Dtesting.txt&response-content-encoding=x-gzip&response-content-language=mi%2C%20en&response-expires=Thu%2C%2001%20Dec%201994%2016:00:00%20GMT HTTP/1.1
x-amz-date: Sun, 19 Dec 2010 01:53:44 GMT
Accept: */*
Authorization: AWS AKIAIOSFODNN7EXAMPLE:aaStE6nKnw8ihhiIdReoXYlMamW=
Response

In this sample, the response header values are set to the values specified in the request’s query parameters.

HTTP/1.1 200 OK
x-amz-id-2: SIidWAK3hK+Il3/Qqiu1ZKEuegzLAAspwsgwnwygb9GgFseeFHL5CII8NXSrfWW2
x-amz-request-id: 881B1CBD9DF17WA1
Date: Sun, 19 Dec 2010 01:54:01 GMT
x-amz-meta-param1: value 1
x-amz-meta-param2: value 2
Cache-Control: No-cache
Content-Language: mi, en
Expires: Thu, 01 Dec 1994 16:00:00 GMT
Content-Disposition: attachment; filename=testing.txt
Content-Encoding: x-gzip
Last-Modified: Fri, 17 Dec 2010 18:10:41 GMT
ETag: "0332bee1a7bf845f176c5c0d1ae7cf07"
Accept-Ranges: bytes
Content-Type: text/plain
Content-Length: 22
Server: ScalityS3
[object data not shown]
Request with a Range Header
Request

The request specifies the HTTP Range header to retrieve the first 10 bytes of an object.

GET /example-object HTTP/1.1
Host: {{bucketName}}.s3.scality.com
x-amz-date: Fri, 28 Jan 2011 21:32:02 GMT
Range: bytes=0-9
Authorization: AWS AKIAIOSFODNN7EXAMPLE:Yxg83MZaEgh3OZ3l0rLo5RTX11o=

Note

Zenko does not support retrieving multiple ranges of data per GET request.

Response

The response returns the first 10 bytes of the object, as requested by the Range header.

HTTP/1.1 206 Partial Content
x-amz-id-2: MzRISOwyjmnupCzjI1WC06l5TTAzm7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp
x-amz-request-id: 47622117804B3E11
Date: Fri, 28 Jan 2011 21:32:09 GMT
x-amz-meta-title: the title
Last-Modified: Fri, 28 Jan 2011 20:10:32 GMT
ETag: "b2419b1e3fd45d596ee22bdf62aaaa2f"
Accept-Ranges: bytes
Content-Range: bytes 0-9/443
Content-Type: text/plain
Content-Length: 10
Server: ScalityS3
[10 bytes of object data]

GET Object Tagging

The GET Object Tagging operation returns the tags associated with an object. You send the GET request against the tagging subresource associated with the object.

To use this operation, you must have permission to perform the s3:GetObjectTagging action. By default, the GET operation returns information about the current version of an object. For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve the tags of any other version, use the versionId query parameter; you also need permission for the s3:GetObjectVersionTagging action. By default, the bucket owner has these permissions and can grant them to others.
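
A minimal boto3 sketch (endpoint, bucket, key, and version ID are placeholders):

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

# Tags of the current version (requires s3:GetObjectTagging).
resp = s3.get_object_tagging(Bucket='examplebucket', Key='example-object')
for tag in resp['TagSet']:
    print(tag['Key'], tag['Value'])

# Tags of a specific version (requires s3:GetObjectVersionTagging).
resp = s3.get_object_tagging(Bucket='examplebucket', Key='example-object',
                             VersionId='OYcLXagmS.WaD..oyH4KRguB95_YhLs7')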

Requests
Syntax
GET /ObjectName?tagging HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The GET Object Tagging operation does not use Request Parameters.

Headers

The GET Object Tagging operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The GET Object Tagging operation does not use request elements.

Responses
Headers

The GET Object Tagging operation uses only response headers common to all responses (see Common Response Headers).

Elements

The GET Object Tagging operation can return the following XML elements in its response (includes XML containers):

Element Type Description
Tagging container Container for the TagSet element
TagSet container

Contains the tag set

Ancestors: Tagging

Tag container

Contains the tag information

Ancestors: TagSet

Key string

Name of the tag

Ancestors: Tag

Value string

Value of the tag

Ancestors: Tag

Examples
Request

The following request returns the tag set of the specified object.

GET /example-object?tagging HTTP/1.1
Host: {{BucketName}}.s3.scality.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
Date: Thu, 22 Sep 2016 21:33:08 GMT
Connection: close
Server: ScalityS3
<?xml version="1.0" encoding="UTF-8"?>
<Tagging xmlns="http://s3.scality.com/doc/2006-03-01/">
   <TagSet>
      <Tag>
         <Key>tag1</Key>
         <Value>val1</Value>
      </Tag>
      <Tag>
         <Key>tag2</Key>
         <Value>val2</Value>
      </Tag>
   </TagSet>
</Tagging>

GET Object ACL

The GET Object ACL operation returns an object’s access control list (ACL) permissions. This operation requires READ_ACP access to the object.

By default, GET returns ACL information about the current version of an object. To return ACL information about a different version, use the versionId subresource.
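
The equivalent boto3 calls, with placeholders for the endpoint, bucket, keys, and version ID:

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

# ACL of the current version (requires READ_ACP on the object).
acl = s3.get_object_acl(Bucket='mybucket', Key='greatshot_d.raw')
print(acl['Owner']['ID'])
for grant in acl['Grants']:
    print(grant['Grantee'].get('ID'), grant['Permission'])

# ACL of a specific version, via the versionId subresource.
acl = s3.get_object_acl(Bucket='mybucket', Key='my-image.jpg',
                        VersionId='3/L4kqtJlcpXroDVBH40Nr8X8gdRQBpUMLUo')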

Requests
Syntax
GET /ObjectName?acl HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The GET Object ACL operation does not use Request Parameters.

Headers

The GET Object ACL operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The GET Object ACL operation does not use request elements.

Responses
Headers

The GET Object ACL operation can include the following response headers in addition to the response headers common to all responses (refer to Common Response Headers).

Header Type Description
x-amz-version-id string

Returns the version ID of the retrieved object if it has a unique version ID

Default: None

Elements

The GET Object ACL operation can return the following XML elements in its response (includes XML containers):

Element Type Description
AccessControlList container Container for Grant, Grantee, Permission
AccessControlPolicy container Contains the elements that set the ACL permissions for an object per Grantee
DisplayName string Screen name of the bucket owner
Grant container Container for the grantee and his or her permissions
Grantee string The subject whose permissions are being set
ID string ID of the bucket owner, or the ID of the grantee
Owner container Container for the bucket owner’s display name and ID
Permission string Specifies the permission (FULL_CONTROL, READ, READ_ACP, WRITE_ACP) given to the grantee
Examples
Returning Object Information, Including ACL

The following sample retrieves the access control permissions for the specified file object, greatshot_d.raw:

Request
GET /greatshot_d.raw?acl HTTP/1.1
Host: bucket.example.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
x-amz-version-id: 4HL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nrjfkd
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
Content-Length: 124
Content-Type: text/plain
Connection: close
Server: ScalityS3
<AccessControlPolicy>
  <Owner>
    <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    <DisplayName>user@example.com</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
        <DisplayName>user@example.com</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
Getting and Showing the ACL of a Specific Object Version
Request
GET /my-image.jpg?versionId=3/L4kqtJlcpXroDVBH40Nr8X8gdRQBpUMLUo&acl HTTP/1.1
Host: {{bucketName}}.example.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
x-amz-version-id: 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo
Content-Length: 124
Content-Type: text/plain
Connection: close
Server: ScalityS3
<AccessControlPolicy>
  <Owner>
    <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    <DisplayName>user@example.com</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
        <DisplayName>user@example.com</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>

HEAD Object

The HEAD Object operation returns the metadata for an object without returning the object itself (READ access to the object is necessary to use the operation).

By default, the HEAD operation retrieves metadata from the current version of an object. If the current version is a delete marker, Zenko behaves as if the object were deleted. To retrieve metadata from a different version, use the versionId subresource.

Warning

The HEAD Object operation does not return a response body. Its response headers are the same as those for a GET Object operation.
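
A boto3 sketch of HEAD Object, including the delete-marker case (placeholder endpoint, bucket, key, and version ID); boto3 raises ClientError for the 404 that a delete marker produces:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

try:
    meta = s3.head_object(Bucket='mybucket', Key='my-document.pdf')
    print(meta['ETag'], meta['ContentLength'], meta.get('VersionId'))
except ClientError as err:
    # If the current version is a delete marker, the service answers 404
    # with x-amz-delete-marker: true in the response headers.
    headers = err.response['ResponseMetadata']['HTTPHeaders']
    if headers.get('x-amz-delete-marker') == 'true':
        print('current version is a delete marker')
    else:
        raise

# Metadata of a specific version, via the versionId subresource.
meta = s3.head_object(Bucket='mybucket', Key='my-document.pdf',
                      VersionId='3HL4kqCxf3vjVBH40Nrjfkd')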

Requests
Syntax
HEAD /{{ObjectName}} HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Authorization: {{authorizationString}}
Date: {{date}}
Parameters

The HEAD Object operation does not use Request Parameters.

Headers

The HEAD Object operation can use a number of optional request headers in addition to those that are common to all operations (refer to Common Request Headers).

Header Type Description
If-Modified-Since string

Return the object only if it has been modified since the specified time, otherwise return a 304 (not modified)

Default: None

Constraints: None

If-Unmodified-Since string

Return the object only if it has not been modified since the specified time, otherwise return a 412 (precondition failed)

Default: None

Constraints: None

If-Match string

Return the object only if its entity tag (ETag) is the same as the one specified; otherwise, return a 412 (precondition failed)

Default: None

Constraints: None

If-None-Match string

Return the object only if its entity tag (ETag) is different from the one specified; otherwise, return a 304 (not modified)

Default: None

Constraints: None

Elements

The HEAD Object operation does not use request elements.

Responses
Headers

The HEAD Object operation can include the following response headers in addition to the response headers common to all responses (refer to Common Response Headers).

Header Type Description
x-amz-meta-* string Headers starting with this prefix are user-defined metadata, each of which is stored and returned as a set of key-value pairs. Zenko does not validate or interpret user-defined metadata.
x-amz-version-id string

Returns the version ID of the retrieved object if it has a unique version ID

Default: None

x-amz-website-redirect-location string

When a bucket is configured as a website, this metadata can be set on the object so the website endpoint will evaluate the request for the object as a 301 redirect to another object in the same bucket or an external URL.

Default: None

Elements

The HEAD Object operation does not return response elements.

Examples
Returning an Object’s Metadata
Request
HEAD /my-document.pdf HTTP/1.1
Host: {{bucketName}}.s3.scality.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:02236Q3V0RonhpaBX5sCYVf1bNRuU=
Response
HTTP/1.1 200 OK
x-amz-id-2: ef8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC143432E5
x-amz-version-id: 3HL4kqtJlcpXroDTDmjVBH40Nrjfkd
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
ETag: "fba9dede5f27731c9771645a39863328"
Content-Length: 434234
Content-Type: text/plain
Connection: close
Server: ScalityS3
Getting Metadata from a Specified Version of an Object
Request
HEAD /my-document.pdf?versionId=3HL4kqCxf3vjVBH40Nrjfkd HTTP/1.1
Host: {{bucketName}}.s3.scality.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:02236Q3V0WpaBX5sCYVf1bNRuU=
Response
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51TnqcoF8epIszj7UDNEHGran
x-amz-request-id: 318BC8BC143432E5
x-amz-version-id: 3HL4kqtJlcpXrof3vjVBH40Nrjfkd
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
ETag: "fba9dede5f27731c9771645a39863328"
Content-Length: 434234
Content-Type: text/plain
Connection: close
Server: ScalityS3

PUT Object

The PUT Object operation adds an object to a bucket (WRITE permission on a bucket is necessary to add an object to it).

Note

Zenko never adds partial objects; if a success response is received, Zenko added the entire object to the bucket.

Object locking is not supported. If multiple PUT Object operations upload an object with the same key, the object from the most recent request overwrites the earlier ones; only the last object written is retained.

To ensure that data is not corrupted traversing the network, use the Content-MD5 header. When this header is in use, the system checks the object against the provided MD5 value and returns an error if they do not match. In addition, the MD5 can be calculated while putting an object into the system, and the returned ETag can subsequently be compared to the calculated MD5 value.
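
A sketch of both integrity checks in boto3 (placeholder endpoint, bucket, and key); note that comparing the ETag to the MD5 only holds for a single-part, unencrypted PUT:

import base64
import hashlib

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

data = b'example object payload'
md5_digest = hashlib.md5(data).digest()

# Content-MD5: the service rejects the PUT if the body does not match.
resp = s3.put_object(
    Bucket='mybucket',
    Key='my-document.pdf',
    Body=data,
    ContentMD5=base64.b64encode(md5_digest).decode('ascii'),
)

# End-to-end check: for a simple PUT the returned ETag is the hex MD5.
assert resp['ETag'].strip('"') == hashlib.md5(data).hexdigest()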

Use the 100-continue HTTP status code to configure your application to send the request headers before sending the request body. For PUT operations, this avoids sending the message body if the message is rejected based on the headers (for example, because of authentication failure or redirect).

When uploading an object, it is possible to specify the accounts or groups that should be granted specific permissions on the object. There are two ways to grant the appropriate permissions using the request headers:

  • Specify a canned (predefined) ACL using the x-amz-acl request header
  • Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions supported in an ACL.

Note

You cannot both use a canned ACL and explicitly specify access permissions.
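
The two styles look like this in boto3, where they surface as the ACL and Grant* parameters of put_object (endpoint, bucket, key, and grantees are placeholders):

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

# Canned ACL via the x-amz-acl header.
s3.put_object(Bucket='mybucket', Key='TestObject.txt',
              Body=b'example payload', ACL='public-read')

# Explicit grants via the x-amz-grant-* headers; each value is a
# comma-separated list of type=value grantee pairs. Only one of the
# two styles may be used in a single request.
s3.put_object(
    Bucket='mybucket',
    Key='TestObject.txt',
    Body=b'example payload',
    GrantFullControl='emailAddress="ExampleUser@scality.com"',
    GrantRead='emailAddress="xyz@scality.com", emailAddress="abc@scality.com"',
)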

Requests
Syntax
PUT /ObjectName HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The PUT Object operation does not use Request Parameters.

Headers

The PUT Object operation can use a number of optional request headers in addition to those that are common to all operations (see Common Request Headers).

Header Type Description
Cache-Control string

Can be used to specify caching behavior along the request/reply chain.

Default: None

Constraints: None

Content-Disposition string

Specifies presentational information for the object.

Default: None

Constraints: None

Content-Encoding string

Specifies what content encodings have been applied to the object and the decoding mechanisms that must be applied to obtain the media-type referenced by the Content-Type header field.

Default: None

Constraints: None

Content-Length string

The size of the object, in bytes.

Default: None

Constraints: None

Content-MD5 string

The base64-encoded 128-bit MD5 digest of the message (without the headers) according to RFC 1864. This header can be used as a message integrity check to verify that the data is the same data that was originally sent. Although it is optional, using the Content-MD5 mechanism is recommended as an end-to-end integrity check.

Default: None

Constraints: None

Content-Type string

A standard MIME type describing the format of the contents

Default: binary/octet-stream

Valid Values: MIME types

Constraints: None

Expect string

When the application uses 100-continue, it does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the message body is not sent.

Default: None

Valid Values: 100-continue

Constraints: None

Expires string

The date and time at which the object is no longer cacheable

Default: None

Constraints: None

x-amz-meta-* string

Headers starting with this prefix are user-defined metadata, each of which is stored and returned as a set of key-value pairs. Zenko does not validate or interpret user-defined metadata. Within the PUT request header, user-defined metadata is limited to 2 KB.

Default: None

Constraints: None

x-amz-meta-scal-location-constraint string

Setting this header with a locationConstraint on a PUT request defines where the object is saved. If no header is sent with a PUT object request, the location constraint of the bucket determines where the data is saved. If the bucket has no location constraint, the endpoint of the PUT request is used to determine location.

Default: None

Constraints: The value must be a location constraint listed in locationConfig.json.

x-amz-website-redirect-location string

When a bucket is configured as a website, this metadata can be set on the object so the website endpoint will evaluate the request for the object as a 301 redirect to another object in the same bucket or an external URL.

Default: None

Constraints: The value must be prefixed by “/”, “http://”, or “https://”. The length of the value is limited to 2 KB.
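
Because boto3 sends each Metadata entry as an x-amz-meta-* header, the Zenko location-constraint and website-redirect headers above can be set as follows (endpoint, bucket, key, and location name are placeholders; the location must be defined in locationConfig.json):

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

s3.put_object(
    Bucket='mybucket',
    Key='my-document.pdf',
    Body=b'example payload',
    # Sent as x-amz-meta-scal-location-constraint; 'aws-location-1' is a
    # placeholder for a location listed in locationConfig.json.
    Metadata={'scal-location-constraint': 'aws-location-1'},
    # Sent as x-amz-website-redirect-location.
    WebsiteRedirectLocation='/another-page.html',
)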

In addition, access control-related headers can be used with this operation. By default, all objects are private: only the owner has full control. When adding a new object, it is possible to grant permissions to individual accounts or predefined groups. These permissions are then used to create the Access Control List (ACL) on the object.

Specifying a Canned ACL

Zenko supports a set of canned ACLs, each of which has a predefined set of grantees and permissions.

Header Type Description
x-amz-acl string

The canned ACL to apply to the object

Default: private

Valid Values: private | public-read | public-read-write | authenticated-read | bucket-owner-read | bucket-owner-full-control

Constraints: None

Explicitly Specifying Access Permissions

A set of headers is available for explicitly granting access permissions to specific Zenko accounts or groups, each of which maps to specific permissions Zenko supports in an ACL.

In the header value, specify a list of grantees who get the specific permission.

Header Type Description
x-amz-grant-read string

Allows grantee to read the object data and its metadata.

Default: None

Constraints: None

x-amz-grant-read-acp string

Allows grantee to read the object ACL.

Default: None

Constraints: None

x-amz-grant-write-acp string

Allows grantee to write the ACL for the applicable object.

Default: None

Constraints: None

x-amz-grant-full-control string

Allows grantee the READ, READ_ACP, and WRITE_ACP permissions on the object.

Default: None

Constraints: None

Each grantee is specified as a type=value pair, where the type can be any one of the following:

  • emailAddress (if value specified is the email address of an account)
  • id (if value specified is the canonical user ID of an account)
  • uri (if granting permission to a predefined group)

For example, the following x-amz-grant-read header grants read permission to the accounts identified by their email addresses:

x-amz-grant-read: emailAddress="xyz@scality.com", emailAddress="abc@scality.com"
Responses
Headers

The PUT Object operation uses the x-amz-version-id response header in addition to response headers that are common to all operations (see Common Response Headers).

Header Type Description
x-amz-version-id string Version of the object.
Elements

The PUT Object operation does not return response elements.

Examples
Upload an Object
Request

Places the my-document.pdf object in the myDocsBucket bucket:

PUT /my-document.pdf HTTP/1.1
Host: myDocsBucket.s3.scality.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: {{authorizationString}}
Content-Type: text/plain
Content-Length: 11434
x-amz-meta-author: CharlieParker
Expect: 100-continue
[11434 bytes of object data]
Response with Versioning Suspended
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
x-amz-id-2: LriYPLdmOdAiIfgSm/F1YsViT1LW94/xUQxMsF7xiEb1a0wiIOIxl+zbwZ163pt7
x-amz-request-id: 0A49CE4060975EAC
Date: Wed, 12 Oct 2009 17:50:00 GMT
ETag: "1b2cf535f27731c974343645a3985328"
Content-Length: 0
Connection: close
Server: ScalityS3
Response with Versioning Enabled
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
x-amz-id-2: LriYPLdmOdAiIfgSm/F1YsViT1LW94/xUQxMsF7xiEb1a0wiIOIxl+zbwZ163pt7
x-amz-request-id: 0A49CE4060975EAC
x-amz-version-id: 43jfkodU8493jnFJD9fjj3HHNVfdsQUIFDNsidf038jfdsjGFDSIRp
Date: Wed, 12 Oct 2009 17:50:00 GMT
ETag: "fbacf535f27731c9771645a39863328"
Content-Length: 0
Connection: close
Server: ScalityS3
Upload an Object (Specify Access Permission Explicitly)
Request: Uploading an Object and Specifying Access Permissions Explicitly

This request sample stores the file TestObject.txt in the bucket myDocsBucket. The request specifies various ACL headers to grant permission to accounts specified using a canonical user ID and email addresses.

PUT TestObject.txt HTTP/1.1
Host: myDocsBucket.s3.scality.com
x-amz-date: Fri, 13 Apr 2012 05:40:14 GMT
Authorization: {{authorizationString}}
x-amz-grant-write-acp: id=8a6925ce4adf588a4532142d3f74dd8c71fa124ExampleCanonicalUserID
x-amz-grant-full-control: emailAddress="ExampleUser@scality.com"
x-amz-grant-write: emailAddress="ExampleUser1@scality.com", emailAddress="ExampleUser2@scality.com"
Content-Length: 300
Expect: 100-continue
Connection: Keep-Alive
...Object data in the body...
Response
HTTP/1.1 200 OK
x-amz-id-2: RUxG2sZJUfS+ezeAS2i0Xj6w/ST6xqF/8pFNHjTjTrECW56SCAUWGg+7QLVoj1GH
x-amz-request-id: 8D017A90827290BA
Date: Fri, 13 Apr 2012 05:40:25 GMT
ETag: "dd038b344cf9553547f8b395a814b274"
Content-Length: 0
Server: ScalityS3
Upload an Object (Specify Access Permission Using a Canned ACL)
Request: Using a Canned ACL to Set Access Permissions

This request sample stores the file TestObject.txt in the bucket myDocsBucket. The request uses an x-amz-acl header to specify a canned ACL to grant READ permission to the public.

PUT TestObject.txt HTTP/1.1
Host: myDocsBucket.s3.scality.com
x-amz-date: Fri, 13 Apr 2012 05:54:57 GMT
x-amz-acl: public-read
Authorization: {{authorizationString}}
Content-Length: 300
Expect: 100-continue
Connection: Keep-Alive
...Object data in the body...
Response
HTTP/1.1 200 OK
x-amz-id-2: Yd6PSJxJFQeTYJ/3dDO7miqJfVMXXW0S2Hijo3WFs4bz6oe2QCVXasxXLZdMfASd
x-amz-request-id: 80DF413BB3D28A25
Date: Fri, 13 Apr 2012 05:54:59 GMT
ETag: "dd038b344cf9553547f8b395a814b274"
Content-Length: 0
Server: ScalityS3

PUT Object Tagging

The Put Object Tagging operation uses the tagging subresource to add a set of tags to an existing object.

A tag is a key/value pair. You can associate tags with an object by sending a PUT request against the tagging subresource associated with the object. To retrieve tags, send a GET request. For more information, see GET Object Tagging.

For tagging restrictions related to characters and encodings, see Tag Restrictions in the AWS Billing and Cost Management User Guide. S3 limits the maximum number of tags to 10 tags per object.

This operation requires permission to perform the s3:PutObjectTagging action. By default, the bucket owner has this permission and can grant it to others.

To put tags of any other version, use the versionId query parameter. You also need permission for the s3:PutObjectVersionTagging action.
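
A minimal boto3 sketch (placeholders for endpoint, bucket, key, and version ID); the SDK normally computes the required Content-MD5 header automatically:

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

s3.put_object_tagging(
    Bucket='examplebucket',
    Key='object-key',
    Tagging={'TagSet': [
        {'Key': 'tag1', 'Value': 'val1'},
        {'Key': 'tag2', 'Value': 'val2'},
    ]},
)

# Tag a specific version (requires s3:PutObjectVersionTagging).
s3.put_object_tagging(
    Bucket='examplebucket',
    Key='object-key',
    VersionId='OYcLXagmS.WaD..oyH4KRguB95_YhLs7',
    Tagging={'TagSet': [{'Key': 'tag1', 'Value': 'val1'}]},
)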

Requests
Syntax

The following request shows the syntax for sending tagging information in the request body.

PUT /ObjectName?tagging HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
<Tagging>
  <TagSet>
     <Tag>
       <Key>Tag Name</Key>
       <Value>Tag Value</Value>
     </Tag>
  </TagSet>
</Tagging>
Parameters

The PUT Object Tagging operation does not use request parameters.

Headers

Content-MD5 is a required header for this operation.

Elements
Element Type Description Required
Tagging container Container for the TagSet element Yes
TagSet container

Contains the tag set

Ancestors: Tagging

Yes
Tag container

Contains the tag information

Ancestors: TagSet

No
Key string

Name of the tag

Ancestors: Tag

Yes, if Tag is specified
Value string

Value of the tag

Ancestors: Tag

No
Responses
Headers

The PUT Object Tagging operation uses only response headers common to all responses (see Common Response Headers).

Elements

The PUT Object Tagging operation does not return response elements.

Special Errors
  • InvalidTagError — The tag provided was not a valid tag. This error can occur if the tag did not pass input validation. See Object Tagging in the Amazon Simple Storage Service Developer Guide.
  • MalformedXMLError — The XML provided does not match the schema.
  • OperationAbortedError — A conflicting conditional operation is currently in progress against this resource. Please try again.
  • InternalError — The service was unable to apply the provided tag to the object.
Examples
Request

The following request adds a tag set to the existing object object-key in the examplebucket bucket.

PUT object-key?tagging HTTP/1.1
Host: {{BucketName}}.s3.scality.com
Content-Length: {{length}}
Content-MD5: pUNXr/BjKK5G2UKExample==
x-amz-date: 20160923T001956Z
Authorization: {{authorizationString}}
<Tagging>
   <TagSet>
      <Tag>
         <Key>tag1</Key>
         <Value>val1</Value>
      </Tag>
      <Tag>
         <Key>tag2</Key>
         <Value>val2</Value>
      </Tag>
   </TagSet>
</Tagging>
Response
HTTP/1.1 200 OK
x-amz-id-2: YgIPIfBiKa2bj0KMgUAdQkf3ShJTOOpXUueF6QKo
x-amz-request-id: 236A8905248E5A01
Date: Thu, 22 Sep 2016 21:33:08 GMT

PUT Object ACL

The PUT Object ACL operation uses the acl subresource to set the access control list (ACL) permissions for an object that exists in a storage system bucket. This operation requires WRITE_ACP permission for the object.


Object permissions are set using one of the following two methods:

  • Specifying the ACL in the request body
  • Specifying permissions using request headers


Warning

Access permission cannot be specified using both the request body and the request headers.

The ACL of an object is set at the object version level. By default, PUT sets the ACL of the current version of an object. To set the ACL of a different version, use the versionId subresource.
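
Both methods, and the versioned variant, in a boto3 sketch (endpoint, bucket, key, owner ID, and version ID are placeholders):

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

# Method 1: canned ACL via request header.
s3.put_object_acl(Bucket='mybucket', Key='my-document.pdf',
                  ACL='public-read')

# Method 2: full ACL in the request body (cannot be combined with the
# header style). The canonical user ID is a placeholder.
owner_id = '8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c'
s3.put_object_acl(
    Bucket='mybucket',
    Key='my-document.pdf',
    AccessControlPolicy={
        'Owner': {'ID': owner_id},
        'Grants': [{
            'Grantee': {'Type': 'CanonicalUser', 'ID': owner_id},
            'Permission': 'FULL_CONTROL',
        }],
    },
)

# Set the ACL of a specific version via the versionId subresource.
s3.put_object_acl(
    Bucket='mybucket',
    Key='my-document.pdf',
    ACL='private',
    VersionId='3HL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nrjfkd',
)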

Requests
Syntax

The request syntax that follows is for sending the ACL in the request body. If headers are used to specify the permissions for the object, the ACL cannot be sent in the request body (refer to Common Request Headers for a list of available headers).

PUT /{{ObjectName}}?acl HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
<AccessControlPolicy>
  <Owner>
    <ID>{{iD}}</ID>
    <DisplayName>{{emailAddress}}</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>{{iD}}</ID>
        <DisplayName>{{emailAddress}}</DisplayName>
      </Grantee>
      <Permission>{{permission}}</Permission>
    </Grant>
    ...
  </AccessControlList>
</AccessControlPolicy>
Parameters

The PUT Object ACL operation does not use Request Parameters.

Headers

The PUT Object ACL operation can use a number of optional request headers in addition to those that are common to all operations (refer to Common Request Headers). These request headers are used either to specify a predefined—or canned—ACL, or to explicitly specify grantee permissions.

Specifying a Canned ACL

Zenko supports a set of canned ACLs, each of which has a predefined set of grantees and permissions.

To grant access permissions by specifying canned ACLs, use the x-amz-acl header and specify the canned ACL name as its value.

Note

Other access control specific headers cannot be used when the x-amz-acl header is in use.

Header Type Description
x-amz-acl string

Sets the ACL of the object using the specified canned ACL.

Default: private

Valid Values: private | public-read | public-read-write | authenticated-read | bucket-owner-read | bucket-owner-full-control

Constraints: None

Explicitly Specifying Grantee Access Permissions

A set of x-amz-grant-permission headers is available for explicitly granting individualized object access permissions to specific Zenko accounts or groups.

Note

Each of the x-amz-grant-permission headers maps to specific permissions Zenko supports in an ACL. Note that using any of these ACL-specific headers negates the use of the x-amz-acl header to set a canned ACL.

Header Type Description
x-amz-grant-read string

Allows grantee to read the object data and its metadata

Default: None

Constraints: None

x-amz-grant-read-acp string

Allows grantee to read the object ACL

Default: None

Constraints: None

x-amz-grant-write-acp string

Allows grantee to write the ACL for the applicable object

Default: None

Constraints: None

x-amz-grant-full-control string

Allows grantee the READ, READ_ACP, and WRITE_ACP permissions on the object

Default: None

Constraints: None

For each header, the value is a comma-separated list of one or more grantees. Each grantee is specified as a type=value pair, where the type can be any one of the following:

  • emailAddress (if value specified is the email address of an account)
  • id (if value specified is the canonical user ID of an account)
  • uri (if granting permission to a predefined group)

For example, the following x-amz-grant-read header grants read permission to two accounts identified by their email addresses:

x-amz-grant-read:  emailAddress="xyz@example.com", emailAddress="abc@example.com"
Request Elements

If the request body is used to specify an ACL, the following elements must be used.

Element Type Description
AccessControlList container Container for Grant, Grantee, and Permission
AccessControlPolicy container Contains the elements that set the ACL permissions for an object per grantee
DisplayName string Screen name of the bucket owner
Grant container Container for the grantee and his or her permissions
Grantee string The subject whose permissions are being set
ID string ID of the bucket owner, or the ID of the grantee
Owner container Container for the bucket owner’s display name and ID
Permission string Specifies the permission given to the grantee

Note

If the request body is used to specify an ACL, the request headers cannot also be used to set an ACL.

Grantee Values

Specify the person (grantee) to whom access rights are being assigned (using request elements):

  • By ID

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
    <ID>{{ID}}</ID><DisplayName>GranteesEmail</DisplayName></Grantee>
    

    DisplayName is optional and is ignored in the request.

  • By Email Address

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ScalityCustomerByEmail"><EmailAddress>{{Grantees@email.com}}</EmailAddress></Grantee>
    

    The grantee is resolved to the CanonicalUser and, in a response to a GET Object acl request, appears as the CanonicalUser.

  • By URI

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>{{http://acs.example.com/groups/global/AuthenticatedUsers}}</URI></Grantee>
    
Responses
Headers

The PUT Object ACL operation can include the following response header in addition to the response headers common to all responses (refer to Common Response Headers).

Header Type Description
x-amz-version-id string

Returns the version ID of the retrieved object if it has a unique version ID.

Default: None

Elements

The PUT Object ACL operation does not return response elements.

Examples
Grant Access Permission to an Existing Object

The request sample grants access permission to an existing object, specifying the ACL in the body. In addition to granting full control to the object owner, the XML specifies full control to an account identified by its canonical user ID.

Request Sample
PUT /my-document.pdf?acl HTTP/1.1
Host: {{bucketName}}.example.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: {{authorizationString}}
Content-Length: 124

<AccessControlPolicy>
  <Owner>
    <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    <DisplayName>{{customersName}}@scality.com</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcExampleCanonicalUserID</ID>
        <DisplayName>{{customersName}}@scality.com</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
Response Sample
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51T9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
x-amz-version-id: 3/L4kqtJlcpXrof3vjVBH40Nr8X8gdRQBpUMLUo
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
Content-Length: 0
Connection: close
Server: ScalityS3
Setting the ACL of a Specified Object Version

The request sample sets the ACL on the specified version of the object.

Request Sample
PUT /my-document.pdf?acl&versionId=3HL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nrjfkd HTTP/1.1
Host: {{bucketName}}.example.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: {{authorizationString}}
Content-Length: 124

<AccessControlPolicy>
  <Owner>
    <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
    <DisplayName>user@example.com</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
        <DisplayName>user@example.com</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
Response Sample
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51u8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
x-amz-version-id: 3/L4kqtJlcpXro3vjVBH40Nr8X8gdRQBpUMLUo
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
Content-Length: 0
Connection: close
Server: ScalityS3
Access Permissions Specified Using Headers

The request sample uses the ACL-specific request header x-amz-acl and specifies the canned ACL public-read to grant object read access to everyone.

Request Sample
PUT ExampleObject.txt?acl HTTP/1.1
Host: {{bucketName}}.example.com
x-amz-acl: public-read
Accept: */*
Authorization: {{authorizationString}}
Connection: Keep-Alive
Response Sample
HTTP/1.1 200 OK
x-amz-id-2: w5YegkbG6ZDsje4WK56RWPxNQHIQ0CjrjyRVFZhEJI9E3kbabXnBO9w5G7Dmxsgk
x-amz-request-id: C13B2827BD8455B1
Date: Sun, 29 Apr 2012 23:24:12 GMT
Content-Length: 0
Server: ScalityS3

PUT Object - Copy

An implementation of the PUT operation, PUT Object - Copy creates a copy of an object that is already stored. On internal data backends, performing a PUT copy operation is the same as performing a GET and then a PUT. On external cloud data backends, data is copied directly to the designated backend. Adding the x-amz-copy-source request header causes the PUT operation to copy the source object into the destination bucket.

By default, x-amz-copy-source identifies the current version of an object to copy. To copy a different version, use the versionId subresource.

When copying an object, it is possible to preserve most of the metadata (default behavior) or specify new metadata with the x-amz-metadata-directive header. In the case of copying an object to a specific location constraint, the metadata directive must be set to REPLACE and the location constraint header specified. Otherwise, the default location for the object copied is the location constraint of the destination bucket.

The ACL, however, is not preserved and is set to private for the user making the request if no other ACL preference is sent with the request.

All copy requests must be authenticated and cannot contain a message body. Additionally, READ access is required for the source object, and WRITE access is required for the destination bucket.

To copy an object only under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, use the request headers x-amz-copy-source-if-match, x-amz-copy-source-if-none-match, x-amz-copy-source-if-unmodified-since, or x-amz-copy-source-if-modified-since.
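
A boto3 sketch of a simple copy and a conditional, metadata-replacing copy (endpoint, buckets, keys, version ID, and ETag are placeholders):

import boto3

s3 = boto3.client('s3', endpoint_url='http://zenko.example.com')  # placeholder

# Simple copy of the current version; metadata is preserved (COPY is
# the default metadata directive).
s3.copy_object(
    Bucket='destbucket',
    Key='my-document.pdf',
    CopySource={'Bucket': 'srcbucket', 'Key': 'my-pdf-document.pdf'},
)

# Copy a specific source version, replace its metadata, and proceed
# only if the source ETag still matches.
s3.copy_object(
    Bucket='destbucket',
    Key='my-document.pdf',
    CopySource={'Bucket': 'srcbucket', 'Key': 'my-pdf-document.pdf',
                'VersionId': '3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo'},
    MetadataDirective='REPLACE',
    Metadata={'author': 'CharlieParker'},
    CopySourceIfMatch='9b2cf535f27731c974343645a3985328',
)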

Warning

When using v4 authentication, all headers prefixed with x-amz- must be signed, including x-amz-copy-source.

The source object being copied can be encrypted or unencrypted, and the destination object can be stored encrypted or unencrypted. If bucket encryption is activated on the source bucket, the source object remains encrypted in its original location. If bucket encryption is activated on the destination bucket, the destination object is encrypted. If bucket encryption is not activated on the destination bucket, the object copy is stored unencrypted.

If the copy is successful, Zenko generates a response that contains information about the copied object.

Access Permissions

When copying an object, it is possible to specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:

  • Specify a canned ACL using the x-amz-acl request header.
  • Specify access permissions explicitly using the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. These headers map to the set of permissions Zenko supports in an ACL.

Note

Access permissions can be specified explicitly or enacted via a canned ACL, but the two methods cannot be used at the same time.

Requests
Syntax

The following shows the syntax of a copy request. Copy requests cannot contain a message body; the source object is identified with the x-amz-copy-source header.

PUT /destinationObject HTTP/1.1
Host: destinationBucket.s3.scality.com
x-amz-copy-source: /source_bucket/sourceObject
x-amz-metadata-directive: {{metadataDirective}}
x-amz-copy-source-if-match: {{etag}}
x-amz-copy-source-if-none-match: {{etag}}
x-amz-copy-source-if-unmodified-since: {{timeStamp}}
x-amz-copy-source-if-modified-since: {{timeStamp}}
<request metadata>
Authorization: {{authorizationString}}
Date: {{date}}

Note

The syntax shows only a representative sample of the possible request headers. For a complete list, refer to Common Request Headers.

Parameters

The PUT Object - Copy operation does not use request parameters.

Headers

The PUT Object - Copy operation can use the following optional request headers in addition to those that are common to all operations (see Common Request Headers).

Header Type Description
x-amz-copy-source string

The name of the source bucket and key name of the source object, separated by a slash (/). If versioning is enabled, this will copy the latest version of the key by default. To specify another version, append ?versionId={{version id}} after the object key.

Default: None

Constraints: This string must be URL-encoded. Additionally, the source bucket must be valid and READ access to the valid source object is required.

x-amz-metadata-directive string

Specifies whether the metadata is copied from the source object or replaced with metadata provided in the request.

If copied, the metadata, except for the version ID, remains unchanged. In addition, the server-side-encryption, storage-class, and website-redirect-location metadata from the source is not copied. If you specify this metadata explicitly in the copy request, Zenko adds it to the resulting object. If the request includes headers specifying user-defined metadata, Zenko ignores them. To apply new user-defined metadata, REPLACE must be selected.

If replaced, all original metadata is replaced by the specified metadata.

Default: COPY

Valid Values: COPY, REPLACE

Constraints: Values other than COPY or REPLACE result in an immediate 400-based error response. An object cannot be copied to itself unless the MetadataDirective header is specified and its value set to REPLACE (or, at the least, some metadata is changed, such as storage class).

x-amz-copy-source-if-match string

Copies the object if its entity tag (ETag) matches the specified tag; otherwise, the request returns a 412 HTTP status code error (failed precondition).

Default: None

Constraints: Can be used with x-amz-copy-source-if-unmodified-since, but cannot be used with other conditional copy headers.

x-amz-copy-source-if-none-match string

Copies the object if its entity tag (ETag) is different than the specified ETag; otherwise, the request returns a 412 HTTP status code error (failed precondition).

Default: None

Constraints: Can be used with x-amz-copy-source-if-modified-since, but cannot be used with other conditional copy headers.

x-amz-copy-source-if-unmodified-since string

Copies the object if it hasn’t been modified since the specified time; otherwise, the request returns a 412 HTTP status code error (failed precondition).

Default: None

Constraints: This must be a valid HTTP date. This header can be used with x-amz-copy-source-if-match, but cannot be used with other conditional copy headers.

x-amz-copy-source-if-modified-since string

Copies the object if it has been modified since the specified time; otherwise, the request returns a 412 HTTP status code error (failed condition).

Default: None

Constraints: This must be a valid HTTP date. This header can be used with x-amz-copy-source-if-none-match, but cannot be used with other conditional copy headers.

x-amz-storage-class enum

The default storage class is “Standard.” Currently, Zenko supports only one level of storage class.

Default: Standard

Valid Values: STANDARD, STANDARD_IA, REDUCED_REDUNDANCY

Note the following additional considerations about the preceding request headers:

  1. If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request as follows, Zenko returns 200 OK and copies the data:

    x-amz-copy-source-if-match condition evaluates to true, and
    x-amz-copy-source-if-unmodified-since condition evaluates to false

  2. If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request as follows, Zenko returns a 412 Precondition Failed response code:

    x-amz-copy-source-if-none-match condition evaluates to false, and
    x-amz-copy-source-if-modified-since condition evaluates to true
    

Additionally, the following access control-related (ACL) headers can be used with the PUT Object - Copy operation. By default, all objects are private; only the owner has full access control. When adding a new object, it is possible to grant permissions to individual accounts or predefined groups. These permissions are then added to the Access Control List (ACL) on the object. For more information, refer to ACL (Access Control List).

Specifying a Canned ACL

Zenko supports a set of predefined ACLs, each of which has a predefined set of grantees and permissions.

To grant access permissions by specifying canned ACLs, use the x-amz-acl header and specify the canned ACL name as its value.

Note

Other access control specific headers cannot be used when the x-amz-acl header is in use.

Header Type Description
x-amz-acl string

The canned ACL to apply to the object.

Default: private

Valid Values: private | public-read | public-read-write | aws-exec-read | authenticated-read | bucket-owner-read | bucket-owner-full-control

Constraints: None

Explicitly Specifying Grantee Access Permissions

A set of headers is available for explicitly granting access permissions to specific accounts or groups.

Note

Each of the x-amz-grant-permission headers maps to specific permissions that Zenko supports in an ACL. Please also note that the use of any of these ACL-specific headers negates the use of the x-amz-acl header to set a canned ACL.

Header Type Description
x-amz-grant-read string

Allows grantee to read the object data and its metadata.

Default: None

Constraints: None

x-amz-grant-write string

Not applicable. This applies only when granting access permissions on a bucket.

Default: None

Constraints: None

x-amz-grant-read-acp string

Allows grantee to read the object ACL.

Default: None

Constraints: None

x-amz-grant-write-acp string

Allows grantee to write the ACL for the applicable object.

Default: None

Constraints: None

x-amz-grant-full-control string

Allows grantee the READ, READ_ACP, and WRITE_ACP permissions on the object.

Default: None

Constraints: None

For each header, the value is a comma-separated list of one or more grantees. Each grantee is specified as a type=value pair, where the type can be any one of the following:

  • emailAddress (if the value specified is the email address of an account)
  • id (if the value specified is the canonical user ID of an account)
  • uri (if granting permission to a predefined group)

For example, the following x-amz-grant-read header grants read permission to two accounts identified by their email addresses:

x-amz-grant-read:  emailAddress="xyz@scality.com", emailAddress="abc@scality.com"
Elements

The PUT Object - Copy operation does not use request elements.

Responses
Headers

The PUT Object - Copy operation can include the following response headers in addition to the response headers common to all responses (refer to Common Response Headers).

Header Type Description
x-amz-copy-source-version-id string Returns the version ID of the source object that was copied.
x-amz-server-side-encryption string If server-side encryption is specified either with an AWS KMS or Zenko-managed encryption key in the copy request, the response includes this header, confirming the encryption algorithm that was used to encrypt the object.
x-amz-server-side-encryption-aws-kms-key-id string If the x-amz-server-side-encryption is present and has the value of aws:kms, this header specifies the ID of the AWS Key Management Service (KMS) master encryption key that was used for the object.
x-amz-server-side-encryption-customer-algorithm string

If server-side encryption with customer-provided encryption keys (SSE-C) encryption was requested, the response will include this header confirming the encryption algorithm used for the destination object.

Valid Values: AES256

x-amz-server-side-encryption-customer-key-MD5 string If SSE-C encryption was requested, the response includes this header to provide roundtrip message integrity verification of the customer-provided encryption key used to encrypt the destination object.
x-amz-version-id string Version of the copied object in the destination bucket.
Elements
Element Type Description
CopyObjectResult container

Container for all response elements.

Ancestor: None

ETag string

Returns the ETag of the new object. The ETag reflects changes only to the contents of an object, not its metadata. The source and destination ETag will be identical for a successfully copied object.

Ancestor: CopyObjectResult

LastModified string

Returns the date the object was last modified.

Ancestor: CopyObjectResult

Examples
Copying a File into a Bucket with a Different Key Name

The request sample copies a PDF file into a bucket under a different key name.

Request
PUT /my-document.pdf HTTP/1.1
Host: {{bucketName}}.s3.scality.com
Date: Wed, 21 Sep 2016 18:18:00 GMT
x-amz-copy-source: /{{bucketName}}/my-pdf-document.pdf
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
x-amz-copy-source-version-id: 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo
x-amz-version-id: QUpfdndhfd8438MNFDN93jdnJFkdmqnh893
Date: Wed, 21 Sep 2016 18:18:00 GMT
Connection: close
Server: ScalityS3
<CopyObjectResult>
   <LastModified>2009-10-28T22:32:00</LastModified>
   <ETag>"9b2cf535f27731c974343645a3985328"</ETag>
</CopyObjectResult>

x-amz-version-id returns the version ID of the object in the destination bucket, and x-amz-copy-source-version-id returns the version ID of the source object.

Copying a Specified Version of an Object

The request sample copies a specific version of a PDF file into the bucket {{bucketName}} under a different key name.

Request
PUT /my-document.pdf HTTP/1.1
Host: {{bucketName}}.s3.scality.com
Date: Wed, 21 Sep 2016 18:18:00 GMT
x-amz-copy-source: /{{bucketName}}/my-pdf-document.pdf?versionId=3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo
Authorization: {{authorizationString}}
Response: Copying a Versioned Object to a Version-Enabled Bucket

The response sample shows that an object was copied into a target bucket where versioning is enabled.

HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
x-amz-version-id: QUpfdndhfd8438MNFDN93jdnJFkdmqnh893
x-amz-copy-source-version-id: 09df8234529fjs0dfi0w52935029wefdj
Date: Wed, 21 Sep 2016 18:18:00 GMT
Connection: close
Server: ScalityS3
<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult>
   <LastModified>2009-10-28T22:32:00</LastModified>
   <ETag>"9b2cf535f27731c974343645a3985328"</ETag>
</CopyObjectResult>
Response: Copying a Versioned Object to a Version-Suspended Bucket

The response sample shows that an object was copied into a target bucket where versioning is suspended. Note that the response header x-amz-version-id does not appear.

HTTP/1.1 200 OK
x-amz-id-2: eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC148832E5
x-amz-copy-source-version-id: 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo
Date: Wed, 21 Sep 2016 18:18:00 GMT
Connection: close
Server: ScalityS3
<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult>
  <LastModified>2009-10-28T22:32:00</LastModified>
  <ETag>"9b2cf535f27731c974343645a3985328"</ETag>
</CopyObjectResult>
Copying an Unencrypted Object to a Server-Side Encrypted Object Using Your Encryption Keys

The request sample specifies the HTTP PUT headers to copy an unencrypted object to an object encrypted with server-side encryption with customer-provided encryption keys (SSE-C).

Request
PUT /exampleDestinationObject HTTP/1.1
Host: example-destination-bucket.s3.amazonaws.com
x-amz-server-side-encryption-customer-algorithm: AES256
x-amz-server-side-encryption-customer-key: Base64({{customerProvidedKey}})
x-amz-server-side-encryption-customer-key-MD5: Base64(MD5({{customerProvidedKey}}))
x-amz-metadata-directive: metadata_directive
x-amz-copy-source: /example_source_bucket/exampleSourceObject
x-amz-copy-source-if-match: {{etag}}
x-amz-copy-source-if-none-match: {{etag}}
x-amz-copy-source-if-unmodified-since: {{timeStamp}}
x-amz-copy-source-if-modified-since: {{timeStamp}}
<request metadata>
Authorization: {{authorizationString}}
Date: {{date}}
Copying from an SSE-C-Encrypted Object to an SSE-C-Encrypted Object

The request sample specifies the HTTP PUT headers to copy an object encrypted with server-side encryption with customer-provided encryption keys (SSE-C) to a new SSE-C-encrypted object, rotating the encryption key in the process.

Request
PUT /exampleDestinationObject HTTP/1.1
Host: example-destination-bucket.s3.amazonaws.com
x-amz-server-side-encryption-customer-algorithm: AES256
x-amz-server-side-encryption-customer-key: Base64({{customerProvidedKey}})
x-amz-server-side-encryption-customer-key-MD5: Base64(MD5({{customerProvidedKey}}))
x-amz-metadata-directive: metadata_directive
x-amz-copy-source: /source_bucket/sourceObject
x-amz-copy-source-if-match: {{etag}}
x-amz-copy-source-if-none-match: {{etag}}
x-amz-copy-source-if-unmodified-since: {{timeStamp}}
x-amz-copy-source-if-modified-since: {{timeStamp}}
x-amz-copy-source-server-side-encryption-customer-algorithm: AES256
x-amz-copy-source-server-side-encryption-customer-key: Base64({{oldKey}})
x-amz-copy-source-server-side-encryption-customer-key-MD5: Base64(MD5({{oldKey}}))
<request metadata>
Authorization: {{authorizationString}}
Date: {{date}}
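
A hedged boto3 sketch of this key-rotation copy follows; boto3 derives the key-MD5 headers automatically from the keys supplied. The endpoint, bucket, object names, and keys are placeholder assumptions.

import os

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.scality.com")

old_key = b"0" * 32        # placeholder: the 256-bit key the source was written with
new_key = os.urandom(32)   # placeholder: a fresh 256-bit key for the destination

s3.copy_object(
    Bucket="example-destination-bucket",
    Key="exampleDestinationObject",
    CopySource={"Bucket": "source_bucket", "Key": "sourceObject"},
    # Decryption parameters for the SSE-C source object:
    CopySourceSSECustomerAlgorithm="AES256",
    CopySourceSSECustomerKey=old_key,
    # Encryption parameters for the SSE-C destination object:
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=new_key,
)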

Initiate Multipart Upload

The Initiate Multipart Upload operation returns an upload ID that is used to associate all the parts in the specific Multipart Upload. The upload ID is specified in each subsequent upload part request (refer to Upload Part), and it is also included in the final request to either complete or abort the Multipart Upload request.

For request signing, Multipart Upload is just a series of regular requests: first the multipart upload is initiated, then one or more requests to upload parts are sent, and finally the multipart upload is completed. Each request is individually signed; there is nothing special about signing Multipart Upload requests.

Tip

Any metadata that is to be stored along with the final multipart object should be included in the headers of the Initiate Multipart Upload request.
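
As a sketch of that tip, the boto3 call below attaches user-defined metadata and a content type when initiating the upload; the endpoint, bucket, and metadata values are illustrative assumptions.

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.scality.com")

# Metadata supplied here is stored with the final assembled object.
resp = s3.create_multipart_upload(
    Bucket="example-bucket",
    Key="example-object",
    ContentType="video/mp2t",
    Metadata={"origin": "camera-7"},  # sent as x-amz-meta-origin
)
upload_id = resp["UploadId"]  # required by every subsequent part request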

Requests

Syntax

POST /{{ObjectName}}?uploads HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The Initiate Multipart Upload operation does not use request parameters.

Headers

The Initiate Multipart Upload operation can use a number of optional request headers in addition to those that are common to all operations (refer to Common Request Headers).

Header Type Description
Cache-Control string

Can be used to specify caching behavior along the request/reply chain

Default: None

Constraints: None

Content-Disposition string

Specifies presentational information for the object.

Default: None

Constraints: None

Content-Encoding string

Specifies which content encodings have been applied to the object and the decoding mechanisms that must be applied to obtain the media-type referenced by the Content-Type header field.

Default: None

Constraints: None

Content-Type string

A standard MIME type describing the format of the contents

Default: binary/octet-stream

Valid Values: MIME types

Constraints: None

Expires string

The date and time at which the object is no longer cacheable

Default: None

Constraints: None

x-amz-meta-* string

Headers starting with this prefix are user-defined metadata, each of which is stored and returned as a set of key-value pairs. Zenko does not validate or interpret user-defined metadata. Within the request headers, user-defined metadata is limited to 2 KB in total size.

Default: None

Constraints: None

x-amz-website-redirect-location string

When a bucket is configured as a website, this metadata can be set on the object so the website endpoint will evaluate the request for the object as a 301 redirect to another object in the same bucket or an external URL.

Default: None

Constraints: The value must be prefixed by "/", "http://", or "https://". The value's length is limited to 2 KB.

Access control-related headers can be used with this operation. By default, all objects are private. Only the owner has full control. When adding a new object, it is possible to grant permissions to individual accounts or predefined groups. These permissions are then used to create the Access Control List (ACL) on the object.

Specifying a Canned ACL

Zenko supports a set of canned ACLs, each of which has a predefined set of grantees and permissions.

Header Type Description
x-amz-acl string

The canned ACL to apply to the object being created

Default: private

Valid Values: private | public-read | public-read-write | authenticated-read | bucket-owner-read | bucket-owner-full-control

Constraints: None

Explicitly Specifying Access Permissions

A set of headers is available for explicitly granting access permissions to specific accounts or groups, each of which maps to specific permissions Zenko supports in an ACL.

In the header value, specify a list of grantees who get the specific permission.

Header Type Description
x-amz-grant-read string

Allows grantee to read the object data and its metadata.

Default: None

Constraints: None

x-amz-grant-read-acp string

Allows grantee to read the object ACL.

Default: None

Constraints: None

x-amz-grant-write-acp string

Allows grantee to write the ACL for the applicable object.

Default: None

Constraints: None

x-amz-grant-full-control string

Allows grantee the READ, READ_ACP, and WRITE_ACP permissions on the object.

Default: None

Constraints: None

Each grantee is specified as a type=value pair, where the type can be any one of the following:

  • emailAddress (if the value specified is the email address of an account)
  • id (if the value specified is the canonical user ID of an account)
  • uri (if granting permission to a predefined group)

For example, the following x-amz-grant-read header grants read permission to the accounts identified by their email addresses:

x-amz-grant-read: emailAddress="xyz@scality.com", emailAddress="abc@scality.com"
Elements

The Initiate Multipart Upload operation does not use request elements.

Responses
Headers

The Initiate Multipart Upload operation may include any of the common response headers supported by Zenko (see Common Response Headers).

Elements

The Initiate Multipart Upload operation can return the following XML elements in its response (includes XML containers):

Element Type Description
InitiateMultipartUploadResult container Container for the response
Bucket string Name of the bucket to which the multipart upload was initiated
Key string Object key for which the multipart upload was initiated
UploadId string ID for the initiated multipart upload
Examples
Initiating a Multipart Upload for the example-object Object
Request
POST /example-object?uploads HTTP/1.1
Host: example-bucket.s3.scality.com
Date: Mon, 1 Nov 2010 20:34:56 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
x-amz-id-2: Uuag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
Date:  Mon, 1 Nov 2010 20:34:56 GMT
Content-Length: 197
Connection: keep-alive
Server: ScalityS3
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult xmlns="http://s3.scality.com/doc/2006-03-01/">
<Bucket>example-bucket</Bucket>
<Key>example-object</Key>
<UploadId>VXBsb2FkIElEIGZvciA2aWWpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZA</UploadId>
</InitiateMultipartUploadResult>

Upload Part

Use the Upload Part operation to upload each part of an object being saved to storage via a multipart upload. Before using this operation, an Initiate Multipart Upload request must be issued for the object, as the upload ID returned by that operation is required for the Upload Part operation. Along with the upload ID, a part number must also be specified with each Upload Part operation.

Part numbers can be any number from 1 to 10,000, inclusive. A part number uniquely identifies a part and also defines its position within the object being created. If a new part is uploaded using the same part number that was used with a previous part, the previously uploaded part is overwritten.

The largest part size permitted is 5 GB, which means that the biggest object that can be split is 50 TB (10,000 * 5 GB). Each part must be at least 5 MB in size, except the last part. There is no minimum size threshold on the last part of a multipart upload.

After all the parts are uploaded, a Complete Multipart Upload request must be issued.
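
The loop below is a minimal boto3 sketch of this sequence: it slices a local file into 5 MB parts, uploads each with an ascending part number, and records the ETags needed later by Complete Multipart Upload. The endpoint, bucket, file name, and upload ID are placeholder assumptions.

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.scality.com")
upload_id = "{{uploadId}}"  # as returned by Initiate Multipart Upload

PART_SIZE = 5 * 1024 * 1024  # 5 MB minimum for every part except the last
parts = []

with open("my-movie.m2ts", "rb") as f:
    part_number = 1
    while True:
        data = f.read(PART_SIZE)
        if not data:
            break
        resp = s3.upload_part(
            Bucket="example-bucket",
            Key="my-movie.m2ts",
            PartNumber=part_number,
            UploadId=upload_id,
            Body=data,
        )
        # Keep each part's ETag for the Complete Multipart Upload request.
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1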

Requests
Syntax
PUT /{{ObjectName}}?partNumber={{PartNumber}}&uploadId={{UploadId}} HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Content-Length: {{Size}}
Authorization: {{authorizationString}}
Parameters

The Upload Part operation does not use request parameters.

Headers

The Upload Part operation can use a number of optional request headers in addition to those that are common to all operations (see Common Request Headers).

Header Type Description
Content-Length integer

The size of the object, in bytes

Default: None

Constraints: None

Content-MD5 string

The base64-encoded 128-bit MD5 digest of the message (without the headers) according to RFC 1864. This header can be used as a message integrity check to verify that the data is the same data that was originally sent. Although it is optional, the use of the Content-MD5 mechanism is recommended as an end-to-end integrity check.

Default: None

Constraints: None

Expect string

When your application uses 100-continue, it does not send the request body until it receives an acknowledgment. If the message is rejected based on the headers, the body of the message is not sent.

Default: None

Valid Values: 100-continue

Constraints: None

Expires string

The date and time at which the object is no longer cacheable.

Default: None

Constraints: None

Elements

The Upload Part operation does not use request elements.

Responses
Headers

The Upload Part operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The Upload Part operation does not return response elements.

Special Errors
Error HTTP Status Code Description
NoSuchUpload 404 Not Found Occurs when an invalid upload ID is provided in the Upload Part request, or when a multipart upload has already been either completed or aborted.
Examples
PUT Request Uploads a Part in a Multipart Upload
Request

The following request uploads part 1 of a multipart upload, using the upload ID returned by an Initiate Multipart Upload request:

PUT /my-movie.m2ts?partNumber=1&uploadId=VCVsb2FkIElEIGZvciBlbZZpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZR HTTP/1.1
Host: example-bucket.s3.scality.com
Date:  Mon, 1 Nov 2010 20:34:56 GMT
Content-Length: 10485760
Content-MD5: pUNXr/BjKK5G2UKvaRRrOA==
Authorization: {{authorizationString}}
***part data omitted***
Response

The response includes the ETag header, a value that is needed for sending the Complete Multipart Upload request.

HTTP/1.1 200 OK
x-amz-id-2: Vvag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
Date:  Mon, 1 Nov 2010 20:34:56 GMT
ETag: "b54357faf0632cce46e942fa68356b38"
Content-Length: 0
Connection: keep-alive
Server: ScalityS3

Upload Part - Copy

The Upload Part - Copy operation is used to upload a part by copying data from an existing object as the data source. The data source is specified by adding the request header x-amz-copy-source to the request, and a byte range is specified by adding the request header x-amz-copy-source-range.

The minimum allowable part size for a multipart upload is 5 MB.

Tip

Instead of using an existing object as part data, it is possible to use the Upload Part operation and provide data in the request. For more information, refer to Upload Part.

A multipart upload must be initiated before uploading any part. In response to the initiate request, Zenko returns a unique identifier — the upload ID — that must be included in the upload part request.
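
A minimal boto3 sketch of this operation follows; it copies the first 5 MB of an existing object as part 1 of a new multipart upload. The endpoint, bucket and object names, and upload ID are placeholder assumptions.

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.scality.com")
upload_id = "{{uploadId}}"  # as returned by Initiate Multipart Upload

resp = s3.upload_part_copy(
    Bucket="example-bucket",
    Key="newobject",
    PartNumber=1,
    UploadId=upload_id,
    CopySource={"Bucket": "source-bucket", "Key": "sourceobject"},
    CopySourceRange="bytes=0-5242879",  # zero-based, inclusive offsets (5 MB)
)
etag = resp["CopyPartResult"]["ETag"]  # needed for Complete Multipart Upload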

Requests
Syntax
PUT /{{objectName}}?partNumber={{partNumber}}&uploadId={{uploadId}} HTTP/1.1
Host: {{BucketName}}.s3.scality.com
x-amz-copy-source: /{{sourceBucket}}/{{sourceObject}}
x-amz-copy-source-range: bytes={{first-last}}
x-amz-copy-source-if-match: {{etag}}
x-amz-copy-source-if-none-match: {{etag}}
x-amz-copy-source-if-unmodified-since: {{timeStamp}}
x-amz-copy-source-if-modified-since: {{timeStamp}}
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The Upload Part - Copy operation does not use request parameters.

Headers

The Upload Part - Copy operation can use a number of optional request headers in addition to those that are common to all operations (see Common Request Headers).

Header Type Description
x-amz-copy-source String

The name of the source bucket and the source object key name separated by a slash (“/”).

Default: None

x-amz-copy-source-range Integer

The range of bytes to copy from the source object. The range value must use the form "bytes=first-last", where first and last are the zero-based byte offsets to copy (e.g., bytes=0-9 indicates copying the first ten bytes of the source).

x-amz-copy-source-range is not required when copying an entire source object.

Default: None

The following conditional headers are based on the object specified in the x-amz-copy-source header.

Header Type Description
x-amz-copy-source-if-match String

Perform a copy if the source object entity tag (ETag) matches the specified value. If the value does not match, Zenko returns an HTTP status code 412 Precondition Failed error.

Note

If x-amz-copy-source-if-match is requested and evaluates to true and x-amz-copy-source-if-unmodified-since is present in the request and evaluates to false, Zenko returns 200 OK and copies the data.

Default: None

x-amz-copy-source-if-none-match String

Perform a copy if the source object entity tag (ETag) is different than the value specified using this header. If the values match, Zenko returns an HTTP status code 412 Precondition Failed error.

Note

If x-amz-copy-source-if-none-match is present in the request and evaluates to false, and x-amz-copy-source-if-modified-since is present and evaluates to true, Zenko returns 412 Precondition Failed.

Default: None

x-amz-copy-source-if-unmodified-since String

Perform a copy if the source object is not modified after the time specified using this header. If the source object is modified, Zenko returns an HTTP 412 Precondition Failed error.

Note

If both the x-amz-copy-source-if-match header is present in the request and evaluates to true, and x-amz-copy-source-if-unmodified-since evaluates to false, Zenko returns 200 OK and copies the data.

Default: None

x-amz-copy-source-if-modified-since String

Perform a copy if the source object is modified after the time specified using the x-amz-copy-source-if-modified-since header. If the source object is not modified, Zenko returns an HTTP 412 Precondition Failed error.

Note

If x-amz-copy-source-if-none-match is present in the request and evaluates to false, and x-amz-copy-source-if-modified-since is present and evaluates to true, Zenko returns a 412 Precondition Failed response code.

Default: None

Server-Side Encryption-Specific Request Headers

If the source object is encrypted using server-side encryption with a customer-provided encryption key, you must use the following headers to provide the encryption information Zenko needs to decrypt the object for copying.

Header Type Description
x-amz-copy-source-server-side-encryption-customer-algorithm string

Specifies the algorithm to use when decrypting the source object.

Default: None

Valid Values: AES256

Constraints: Must be accompanied by valid x-amz-copy-source-server-side-encryption-customer-key and x-amz-copy-source-server-side-encryption-customer-key-MD5 headers.

x-amz-copy-source-server-side-encryption-customer-key string

Specifies the customer-provided base64-encoded encryption key for Zenko to use to decrypt the source object. The encryption key provided in this header must be the one that was used when the source object was created.

Default: None

Constraints: Must be accompanied by valid x-amz-copy-source-server-side-encryption-customer-algorithm and x-amz-copy-source-server-side-encryption-customer-key-MD5 headers.

x-amz-copy-source-server-side-encryption-customer-key-MD5 string

Specifies the base64-encoded 128-bit MD5 digest of the encryption key according to RFC 1321. Zenko uses this header for a message integrity check to ensure the encryption key was transmitted without error.

Default: None

Constraints: Must be accompanied by valid x-amz-copy-source-server-side-encryption-customer-algorithm and x-amz-copy-source-server-side-encryption-customer-key headers.

Elements

The Upload Part - Copy operation does not use request elements.

Versioning

If a bucket has versioning enabled, it is possible to have multiple versions of the same object. By default, x-amz-copy-source identifies the current version of the object to copy. If the current version is a delete marker and a versionId is not specified in the x-amz-copy-source, Zenko returns a 404 error, because the object does not exist. If versionId is specified in the x-amz-copy-source and the versionId is a delete marker, Zenko returns an HTTP 400 error, because a delete marker cannot be specified as a version for the x-amz-copy-source.

Optionally, a specific version of the source object to copy can be specified by adding the versionId subresource, as shown:

x-amz-copy-source: /bucket/object?versionId={{versionId}}
Responses
Headers

The Upload Part - Copy operation can include the following response headers in addition to the response headers that are common to all operations (see Common Response Headers).

Header Type Description
x-amz-copy-source-version-id string The version of the source object that was copied, if you have enabled versioning on the source bucket.
x-amz-server-side-encryption string If you specified server-side encryption either with an AWS KMS or Zenko-managed encryption key in your Initiate Multipart Upload request, the response includes this header. It confirms the encryption algorithm that Zenko used to encrypt the object.
x-amz-server-side-encryption-aws-kms-key-id string If the x-amz-server-side-encryption is present and has the value of aws:kms, this header specifies the ID of the AWS Key Management Service (KMS) master encryption key that was used for the object.
x-amz-server-side-encryption-customer-algorithm string

If server-side encryption with customer-provided encryption keys is requested, the response includes this header, confirming the encryption algorithm used.

Valid Values: AES256

x-amz-server-side-encryption-customer-key-MD5 string If server-side encryption with customer-provided encryption keys was requested, the response includes this header to provide roundtrip message integrity verification of the customer-provided encryption key.
Elements

The Upload Part - Copy operation can return the following XML elements in its response (includes XML containers):

Element Type Description
CopyPartResult container

Container for all response elements.

Ancestor: None

ETag string Returns the ETag of the new part.
LastModified string Returns the date the part was last modified.

Warning

Part boundaries are factored into ETag calculations, so if the part boundary on the source is different than on the destination, the ETag data between the two will not match. However, data integrity checks are performed with each copy to ensure that the data written to the destination matches the data at the source.

Special Errors
Error HTTP Status Code Description
NoSuchUpload 404 Not Found The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.
InvalidRequest 400 Bad Request The specified copy source is not supported as a byte-range copy source.
Examples
PUT Request Uploading One Part of a Multipart Upload
Request A

The PUT request uploads a part (part number 2) in a multipart upload. The request specifies a byte range from an existing object as the source of this upload. The request includes the upload ID received in response to an Initiate Multipart Upload request.

PUT /{{objectName}}?partNumber={{partNumber}}&uploadId={{uploadId}} HTTP/1.1
Host: {{BucketName}}.s3.scality.com
x-amz-copy-source: /{{sourceBucket}}/{{sourceObject}}
x-amz-copy-source-range: bytes={{first-last}}
x-amz-copy-source-if-match: {{etag}}
x-amz-copy-source-if-none-match: {{etag}}
x-amz-copy-source-if-unmodified-since: {{timeStamp}}
x-amz-copy-source-if-modified-since: {{timeStamp}}
Date: {{date}}
Authorization: {{authorizationString}}
Response A

The response includes the ETag header, a required value for sending the Complete Multipart Upload request.

HTTP/1.1 200 OK
x-amz-id-2: Vvag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
Date:  Mon, 7 Nov 2016 20:34:56 GMT
Server: ScalityS3
<CopyPartResult>
<LastModified>2009-10-28T22:32:00</LastModified>
<ETag>"9b2cf535f27731c974343645a3985328"</ETag>
</CopyPartResult>
Request B

The PUT request uploads a part (part number 2) in a multipart upload. The request omits the optional byte-range header, requesting that the entire source object be copied as part 2. The request includes the upload ID received in response to an Initiate Multipart Upload request.

PUT /newobject?partNumber=2&uploadId=VCVsb2FkIElEIGZvciBlbZZpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZR HTTP/1.1
Host: example-bucket.s3.scality.com
Date:  Mon, 7 Nov 2016 20:34:56 GMT
x-amz-copy-source: /source-bucket/sourceobject
Authorization: {{authorizationString}}
Response B

The Request B response structure is similar to the one specified in Response A.

Request C

The PUT request uploads a part (part number 2) in a multipart upload. The request specifies a specific version of the source object to copy by adding the versionId subresource. The byte range requests 6 MB of data, starting with byte 500, as the part to be uploaded.

PUT /newobject?partNumber=2&uploadId=VCVsb2FkIElEIGZvciBlbZZpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZR HTTP/1.1
Host: example-bucket.s3.scality.com
Date:  Mon, 7 Nov 2016 20:34:56 GMT
x-amz-copy-source: /source-bucket/sourceobject?versionId=3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo
x-amz-copy-source-range: bytes=500-6291456
Authorization: {{authorizationString}}
Response C

The response includes the ETag header, a value required for sending the Complete Multipart Upload request.

HTTP/1.1 200 OK
x-amz-id-2: Vvag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
x-amz-copy-source-version-id: 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo
Date:  Mon, 7 Nov 2016 20:34:56 GMT
Server: ScalityS3
<CopyPartResult>
<LastModified>2009-10-28T22:32:00</LastModified>
<ETag>"9b2cf535f27731c974343645a3985328"</ETag>
</CopyPartResult>

Complete Multipart Upload

The Complete Multipart Upload operation is the last step in the multipart upload of a large object. It pulls together previously uploaded parts, and can be called only after a multipart upload has been initiated and all relevant parts have been uploaded (refer to Upload Part). Upon receiving the Complete Multipart Upload request, Zenko concatenates all the parts in ascending order by part number to create a new object.

The parts list must be provided for a Complete Multipart Upload request. Care must be taken to ensure that the list is complete, and that the part number and ETag header value are provided for each part (both of which were returned with the successful uploading of the part).

Processing of a Complete Multipart Upload request can take several minutes to complete. Once Zenko begins processing the request, it sends an HTTP response header that specifies a 200 OK response. While processing is in progress, Zenko periodically sends whitespace characters to keep the connection from timing out. Because a request could fail after the initial response has been sent, it is important to check the response body to determine whether the request succeeded.
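
The boto3 sketch below assembles the parts list and issues the completion request; note that SDK clients generally parse the response body for the embedded-error case described above. The endpoint, bucket, upload ID, and ETag values are placeholder assumptions.

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.scality.com")
upload_id = "{{uploadId}}"

# Every uploaded part must be listed, in ascending PartNumber order,
# with the ETag returned when that part was uploaded.
parts = [
    {"PartNumber": 1, "ETag": '"a54357aff0632cce46d942af68356b38"'},
    {"PartNumber": 2, "ETag": '"0c78aef83f66abc1fa1e8477f296d394"'},
    {"PartNumber": 3, "ETag": '"acbd18db4cc2f85cedef654fccc4a4d8"'},
]

resp = s3.complete_multipart_upload(
    Bucket="example-bucket",
    Key="example-object",
    UploadId=upload_id,
    MultipartUpload={"Parts": parts},
)
print(resp["Location"], resp["ETag"])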

Requests
Syntax

An upload ID must be included in the URL query string supplied with the POST request for the Complete Multipart Upload operation:

POST /{{ObjectName}}?uploadId={{UploadId}} HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Content-Length: {{Size}}
Authorization: {{authorizationString}}
<CompleteMultipartUpload>
<Part>
<PartNumber>{{PartNumber}}</PartNumber>
<ETag>{{ETag}}</ETag>
</Part>
...
</CompleteMultipartUpload>
Parameters

The Complete Multipart Upload operation does not use request parameters.

Headers

The Complete Multipart Upload operation uses only request headers that are common to all operations (see Common Request Headers).

Elements
Element Type Description
CompleteMultipartUpload container Container for the request
Part container Container for elements related to a particular previously uploaded part
PartNumber integer Part number that identifies the part
ETag string Entity tag returned when the part was uploaded
Responses
Headers

The Complete Multipart Upload operation can include the following response header in addition to the response headers common to all responses (refer to Common Response Headers).

Header Type Description
x-amz-version-id string

Returns the version ID of the newly created object, if the destination bucket has versioning enabled.

Default: None

Elements

The Complete Multipart Upload operation can return the following XML elements in its response (includes XML containers):

Element Type Description
CompleteMultipartUploadResult container Container for the response
Location URI The URI that identifies the newly created object
Bucket string The name of the bucket that contains the newly created object
Key string The object key of the newly created object
ETag string Entity tag that identifies the newly created object’s data. Objects with different object data will have different entity tags. The entity tag is an opaque string. The entity tag may or may not be an MD5 digest of the object data. If the entity tag is not an MD5 digest of the object data, it will contain one or more non-hexadecimal characters and will consist of more or less than 32 hexadecimal digits.
Special Errors
Error HTTP Status Code Description
EntityTooSmall 400 Bad Request Occurs when a proposed upload is smaller than the minimum allowed object size. Each part must be at least 5 MB in size, except the last part.
InvalidPart 400 Bad Request One or more of the specified parts could not be found.
InvalidPartOrder 400 Bad Request The parts were not listed in ascending order.
NoSuchUpload 404 Not Found Occurs when an invalid upload ID is provided, or when a multipart upload has already been either completed or aborted.
Examples
Request Specifying Three Parts in the Operation Element
Request
POST /example-object?uploadId=AAAsb2FkIElEIGZvciBlbHZpbmcncyWeeS1tb3ZpZS5tMnRzIRRwbG9hZA HTTP/1.1
Host: Example-Bucket.{{StorageService}}.com
Date:  Mon, 1 Nov 2010 20:34:56 GMT
Content-Length: 391
Authorization: {{authorizationString}}
<CompleteMultipartUpload>
<Part>
<PartNumber>1</PartNumber>
<ETag>"a54357aff0632cce46d942af68356b38"</ETag>
</Part>
<Part>
<PartNumber>2</PartNumber>
<ETag>"0c78aef83f66abc1fa1e8477f296d394"</ETag>
</Part>
<Part>
<PartNumber>3</PartNumber>
<ETag>"acbd18db4cc2f85cedef654fccc4a4d8"</ETag>
</Part>
</CompleteMultipartUpload>
Response Sample Indicating Successful Object Assembly
HTTP/1.1 200 OK
x-amz-id-2: Uuag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
Date: Mon, 1 Nov 2010 20:34:56 GMT
Connection: close
Server: ScalityS3
<?xml version="1.0" encoding="UTF-8"?>
<CompleteMultipartUploadResult xmlns="http://s3.scality.com/doc/2006-03-01/">
<Location>http://Example-Bucket.s3.scality.com/Example-Object</Location>
<Bucket>Example-Bucket</Bucket>
<Key>Example-Object</Key>
<ETag>"3858f62230ac3c915f300c664312c11f-9"</ETag>
</CompleteMultipartUploadResult>
Response Sample with Error Specified in Header

The response sample indicates that an error occurred before the HTTP response header was sent.

HTTP/1.1 403 Forbidden
x-amz-id-2: Uuag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
Date:  Mon, 1 Nov 2010 20:34:56 GMT
Content-Length: 237
Connection: keep-alive
Server: ScalityS3
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>656c76696e6727732072657175657374</RequestId>
<HostId>Uuag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==</HostId>
</Error>
Response Sample with Error Specified in Body

The response sample indicates that an error occurred after the HTTP response header was sent.

Note

Although the HTTP status code is 200 OK, the request actually failed as described in the Error element.

HTTP/1.1 200 OK
x-amz-id-2: Uuag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
Date:  Mon, 1 Nov 2010 20:34:56 GMT
Connection: close
Server: ScalityS3
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InternalError</Code>
<Message>We encountered an internal error. Please try again.</Message>
<RequestId>656c76696e6727732072657175657374</RequestId>
<HostId>Uuag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==</HostId>
</Error>

Abort Multipart Upload

The Abort Multipart Upload operation is used to cancel a multipart upload (the upload ID for the affected object must be supplied). Once the upload is aborted, no additional parts can be uploaded using that upload ID.

Tip

In the event of an Abort Multipart Upload operation, the storage consumed by any previously uploaded parts is freed. However, any partial uploads currently in progress may or may not succeed. Therefore, aborting a given multipart upload multiple times may be required to completely free all storage consumed by all upload parts. To verify that all parts have been removed, call the List Parts operation to ensure the parts list is empty.
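
The boto3 sketch below aborts an upload and then follows the tip above, calling List Parts to confirm that no parts remain; the endpoint, names, and upload ID are placeholder assumptions.

import boto3
import botocore.exceptions

s3 = boto3.client("s3", endpoint_url="https://s3.scality.com")
upload_id = "{{uploadId}}"

s3.abort_multipart_upload(
    Bucket="example-bucket", Key="example-object", UploadId=upload_id
)

try:
    listing = s3.list_parts(
        Bucket="example-bucket", Key="example-object", UploadId=upload_id
    )
    if listing.get("Parts"):
        # Parts were still in flight; abort again to free their storage.
        s3.abort_multipart_upload(
            Bucket="example-bucket", Key="example-object", UploadId=upload_id
        )
except botocore.exceptions.ClientError as err:
    # NoSuchUpload means the upload is fully gone and all storage is freed.
    assert err.response["Error"]["Code"] == "NoSuchUpload"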

Requests
Syntax

An upload ID must be included in the URL query string supplied with the DELETE request for the Abort Multipart Upload operation:

DELETE /{{ObjectName}}?uploadId={{UploadId}} HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The Abort Multipart Upload operation does not use Request Parameters.

Headers

The Abort Multipart Upload operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The Abort Multipart Upload operation does not use request elements.

Responses
Headers

The Abort Multipart Upload operation uses only response headers that are common to all operations (refer to Common Response Headers).

Elements

The Abort Multipart Upload operation does not return response elements.

Errors
Error HTTP Status Code Description
NoSuchUpload 404 Not Found Occurs when an invalid upload ID is provided in the Upload Part request, or when a multipart upload has already been either completed or aborted.
Examples
Aborting a Multipart Upload Identified by Its Upload ID
Request
DELETE /example-object?uploadId=VXBsb2FkIElEIGZvciBlbHZpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZ HTTP/1.1
Host: example-bucket.s3.scality.com
Date:  Mon, 1 Nov 2010 20:34:56 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 204 No Content
x-amz-id-2: Weag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 996c76696e6727732072657175657374
Date:  Mon, 1 Nov 2010 20:34:56 GMT
Content-Length: 0
Connection: keep-alive
Server: ScalityS3

List Parts

The List Parts operation catalogs the parts that have been uploaded for a specific multipart upload. The operation must include the upload ID, which is obtained when the initiate multipart upload request is sent (refer to Initiate Multipart Upload).

List Parts returns a maximum of 1,000 parts (which is also the default setting for parts returned, adjustable via the max-parts request parameter). If the multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value of true, and a NextPartNumberMarker element. In subsequent List Parts requests it is possible to include the part-number-marker query string parameter and set its value to the NextPartNumberMarker field value from the previous response.
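
The pagination described above can be driven from boto3 as sketched below, accumulating all parts by following NextPartNumberMarker until IsTruncated is false; the endpoint and upload ID are placeholder assumptions.

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.scality.com")
upload_id = "{{uploadId}}"

parts = []
marker = 0
while True:
    resp = s3.list_parts(
        Bucket="example-bucket",
        Key="example-object",
        UploadId=upload_id,
        MaxParts=1000,
        PartNumberMarker=marker,
    )
    parts.extend(resp.get("Parts", []))
    if not resp.get("IsTruncated"):
        break
    marker = resp["NextPartNumberMarker"]  # resume after the last listed part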

Requests
Syntax
GET /{{ObjectName}}?uploadId={{UploadId}} HTTP/1.1
Host: {{BucketName}}.{{StorageService}}.com
Date: {{date}}
Authorization: {{authorizationString}}
Parameters

The List Parts operation’s GET implementation uses the following parameters to return a subset of a multipart upload’s parts.

Parameter Type Description
encoding-type string

Requests that Zenko encode the response and specifies the encoding method to use. An object key can contain any Unicode character; however, an XML 1.0 parser cannot parse some characters, such as characters with an ASCII value from 0 to 10. For characters that are not supported in XML 1.0, you can add this parameter to request that Zenko encode the keys in the response.

Default: None

uploadId string

Upload ID identifying the multipart upload whose parts are being listed

Default: None

max-parts string

Sets the maximum number of parts to return in the response body

Default: 1,000

part-number-marker string

Specifies the part after which listing should begin. Only parts with higher part numbers will be listed.

Default: None

Headers

The List Parts operation uses only request headers that are common to all operations (refer to Common Request Headers).

Elements

The List Parts operation does not use request elements.

Responses
Headers

The List Parts operation uses only the common response headers supported by Zenko (see Common Response Headers).

Elements

The List Parts operation can return the following XML elements in its response (includes XML containers):

Element Type Description
ListPartsResult container Container for the response
Bucket string Name of the bucket to which the multipart upload was initiated
Encoding-Type string

Encoding type used by Zenko to encode object key names in the XML response.

If the encoding-type request parameter is specified, Zenko includes this element in the response, and returns encoded key name values in the Key element.

Key string Object key for which the multipart upload was initiated
UploadId string Upload ID identifying the multipart upload whose parts are being listed
Initiator container Container element that identifies who initiated the multipart upload
ID string User ID
DisplayName string Principal’s name
Owner container Container element that identifies the object owner, after the object is created
PartNumberMarker integer Part number after which listing begins
NextPartNumberMarker integer When a list is truncated, this element specifies the last part in the list, as well as the value to use for the part-number-marker request parameter in a subsequent request.
MaxParts integer Maximum number of parts allowed in the response
IsTruncated Boolean Indicates whether the returned list of parts is truncated. A “true” value indicates that the list was truncated. A list can be truncated if the number of parts exceeds the limit returned in the MaxParts element.
Part string Container for elements related to a particular part. A response can contain zero or more Part elements.
PartNumber integer Part number identifying the part
LastModified date Date and time when the part was uploaded
ETag string Entity tag returned when the part was uploaded
Size integer Size of the uploaded part data
Examples
List Parts

Assume parts have been uploaded with sequential part numbers starting with 1.

The following example request specifies max-parts and part-number-marker query parameters. It lists the first two parts that follow part 1 (i.e., parts 2 and 3) in the response. If more parts exist, the result is truncated and the response returns an IsTruncated element with the value true. The response also returns the NextPartNumberMarker element with the value 3, which should be used for the value of the part-number-marker request query string parameter in the next List Parts request.

Request
GET /example-object?uploadId=XXBsb2FkIElEIGZvciBlbHZpbmcncyVcdS1tb3ZpZS5tMnRzEEEwbG9hZA&max-parts=2&part-number-marker=1 HTTP/1.1
Host:  example-bucket.{{StorageService}}.com
Date: Mon, 1 Nov 2010 20:34:56 GMT
Authorization: {{authorizationString}}
Response
HTTP/1.1 200 OK
x-amz-id-2: Uuag1LuByRx9e6j5Onimru9pO4ZVKnJ2Qz7/C1NPcfTWAtRPfTaOFg==
x-amz-request-id: 656c76696e6727732072657175657374
Date: Mon, 1 Nov 2010 20:34:56 GMT
Content-Length: 985
Connection: keep-alive
Server: ScalityS3
<?xml version="1.0" encoding="UTF-8"?>
<ListPartsResult xmlns="http://s3.scality.com/doc/2006-03-01/">
<Bucket>example-bucket</Bucket>
<Key>example-object</Key>
<UploadId>XXBsb2FkIElEIGZvciBlbHZpbmcncyVcdS1tb3ZpZS5tMnRzEEEwbG9hZA</UploadId>
<Initiator>
<ID>arn:aws:iam::111122223333:user/some-user-11116a31-17b5-4fb7-9df5-b288870f11xx</ID>
<DisplayName>umat-user-11116a31-17b5-4fb7-9df5-b288870f11xx</DisplayName>
</Initiator>
<Owner>
<ID>8b27d4b0fc460740425b9deef56fa1af6245fbcccdda813b691a8fda9be8ff0c</ID>
<DisplayName>someName</DisplayName>
</Owner>
<PartNumberMarker>1</PartNumberMarker>
<NextPartNumberMarker>3</NextPartNumberMarker>
<MaxParts>2</MaxParts>
<IsTruncated>true</IsTruncated>
<Part>
<PartNumber>2</PartNumber>
<LastModified>2010-11-10T20:48:34.000Z</LastModified>
<ETag>"7778aef83f66abc1fa1e8477f296d394"</ETag>
<Size>10485760</Size>
</Part>
<Part>
<PartNumber>3</PartNumber>
<LastModified>2010-11-10T20:48:33.000Z</LastModified>
<ETag>"aaaa18db4cc2f85cedef654fccc4a4x8"</ETag>
<Size>10485760</Size>
</Part>
</ListPartsResult>

Backbeat

Backbeat maintains job queues for CloudServer. It mediates between the core CloudServer logic and the transient Kafka lists and MongoDB databases where task information is queued.

Backbeat provides a REST API with endpoints for healthcheck, cross-region replication (CRR), and metrics. It is also instrumental in garbage collection.

Backbeat Healthcheck

The Healthcheck feature uses one API route, GET Healthcheck. Healthchecks return details on the health of Kafka and its topics.

Specifically, Backbeat Healthcheck returns an object with two main keys, topics and internalConnections; their structure and fields are described under GET Healthcheck below.

GET Healthcheck

Healthcheck returns details on the health of Kafka and its topics.

Request
GET /_/backbeat/api/healthcheck
Responses

If the response is not an HTTP error, it is structured as an object with two main keys: topics and internalConnections.

The topics key returns details on the Kafka CRR topic only. The name field returns the Kafka topic name, and the partitions field returns details of each partition for the CRR topic. id carries the partition ID, leader contains the current node responsible for all reads and writes for the given partition, the replicas array is the list of nodes that replicate the log for the given partition, and the isrs array is the list of in-sync replicas.

topics: {
  <topicName>: {
    name: <string>,
    partitions: [
      {
        id: <number>,
        leader: <number>,
        replicas: [<number>, ...],
        isrs: [<number>, ...]
      },
      ...
    ]
  }
}

The internalConnections key returns general details on the health of the system as a whole. It contains an object with three keys:

  • isrHealth, which carries a value of either ok or error and reports whether the minimum in-sync replicas count for every partition is met.
  • zookeeper, which indicates whether ZooKeeper is running properly, showing a status and status details (see https://github.com/alexguan/node-zookeeper-client#state for more detail).
  • kafkaProducer, which carries a value of either ok or error and reports the health of all Kafka producers for every topic.

The zookeeper details field provides a status name and status code taken directly from the node-zookeeper-client library.

internalConnections: {
  isrHealth: <ok || error>,
  zookeeper: {
    status: <ok || error>,
    details: {
      name: <value>,
      code: <value>
    }
  },
  kafkaProducer: {
    status: <ok || error>
  }
}
Example

Note

The following example is redacted for brevity.

{
  "topics": {
    "backbeat-replication": {
      "name": "backbeat-replication",
      "partitions": [
        {
          "id": 2,
          "leader": 4,
          "replicas": [2,3,4],
          "isrs": [4,2,3]
        },
        ...
        {
          "id": 0,
          "leader": 2,
          "replicas": [0,1,2],
          "isrs": [2,0,1]
        }
      ]
    }
  },
  "internalConnections": {
    "isrHealth": "ok",
    "zookeeper": {
      "status": "ok",
      "details": {
        "name": "SYNC_CONNECTED",
        "code": 3
      }
    },
    "kafkaProducer": {
      "status": "ok"
    }
  }
}
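
A small monitoring probe against this route might look like the Python sketch below, using the requests library; the host and port are assumptions (the Metrics design notes later in this section give 8900 as the BackbeatServer default port).

import requests

# Placeholder host; the route is the one documented above.
resp = requests.get("http://backbeat.example.com:8900/_/backbeat/api/healthcheck")
resp.raise_for_status()
health = resp.json()

# Flag degraded Kafka health from the internalConnections summary.
conns = health["internalConnections"]
if conns["isrHealth"] != "ok" or conns["kafkaProducer"]["status"] != "ok":
    raise RuntimeError("Backbeat reports degraded Kafka health")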

CRR Retry

The CRR Retry feature lets users monitor and retry failed CRR operations, enabling them to retrieve a list of failed operations and to retry specific CRR operations.

The CRR Retry command set comprises the following APIs:

Get All Failed Operations

This GET request retrieves a listing of failed operations at a site. Use this operation to learn if any CRR operations have failed at the site, and to retrieve the entire listing.

Requests
Syntax
GET /_/backbeat/api/crr/failed?sitename=<site>&marker=<next-marker>
Responses
Non-Truncated
{
  IsTruncated: false,
  Versions: [{
    Bucket: <bucket>,
    Key: <key>,
    VersionId: <version-id>,
    StorageClass: <site>,
    Size: <size>,
    LastModified: <last-modified>,
  }]
}
Truncated
{
  IsTruncated: true,
  NextMarker: <next-marker>,
  Versions: [{
    Bucket: <bucket>,
    Key: <key>,
    VersionId: <version-id>,
    StorageClass: <site>,
    Size: <size>,
    LastModified: <last-modified>,
  },
  ...
  ]
}
Get Failed Operations by Object

This GET request retrieves a listing of all failed operations for a specific object version. Use this operation to monitor a specific object’s replication status.

Requests
Syntax
GET /_/backbeat/api/crr/failed/<bucket>/<key>?versionId=<version-id>
Responses
{
  IsTruncated: false,
  Versions: [{
    Bucket: <bucket>,
    Key: <key>,
    VersionId: <version-id>,
    StorageClass: <site>,
    Size: <size>,
    LastModified: <last-modified>,
  }]
}

Note

The marker query parameter is not supported for this route because replication rules including more than 1,000 sites are not anticipated.

Retry Failed Operations

This POST request retries a set of failed operations.

Requests
Body
[{
  Bucket: <bucket>,
  Key: <key>,
  VersionId: <version-id>,
  StorageClass: <site>,
}]
Response
[{
  Bucket: <bucket>,
  Key: <key>,
  VersionId: <version-id>,
  StorageClass: <site>,
  Size: <size>,
  LastModified: <last-modified>,
  ReplicationStatus: 'PENDING',
}]
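
Chaining the two routes gives a simple retry loop, sketched below in Python with the requests library. The host is a placeholder, and the retry POST is assumed to target the same /crr/failed path as the GET listing route above.

import requests

BASE = "http://backbeat.example.com:8900/_/backbeat/api"  # placeholder host

# Page through every failed CRR operation for a site...
failed, marker = [], None
while True:
    params = {"sitename": "location1"}
    if marker:
        params["marker"] = marker
    page = requests.get(BASE + "/crr/failed", params=params).json()
    failed.extend(page["Versions"])
    if not page["IsTruncated"]:
        break
    marker = page["NextMarker"]

# ...then retry them all with the documented request body.
retry_body = [
    {
        "Bucket": v["Bucket"],
        "Key": v["Key"],
        "VersionId": v["VersionId"],
        "StorageClass": v["StorageClass"],
    }
    for v in failed
]
if retry_body:
    requests.post(BASE + "/crr/failed", json=retry_body)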

CRR Pause and Resume

CRR Pause and Resume offers a way for users to manually pause and resume cross-region replication (CRR) operations by storage location.

Users may also choose to resume CRR operations for a given storage location after a specified number of hours from the current time. This is particularly useful when the user knows a destination location will be down for a certain time and wants to schedule a time for CRR to resume.

Backbeat’s CRR Pause and Resume feature comprises the following API calls:

Get CRR Status

This GET request checks whether cross-region replication is enabled for all locations configured as destination replication endpoints.

Request
GET /_/backbeat/api/crr/status
Response
{
  "location1": "disabled",
  "location2": "enabled"
}
Get CRR Status for a Location

This GET request checks whether cross-region replication is enabled for a specified location, configured as a destination replication endpoint.

Request
GET /_/backbeat/api/crr/status/<location-name>
Response
{
  "<location-name>": "enabled"
}
Pause CRR for a Location

This POST request manually pauses cross-region replication for a specified location configured as a destination replication endpoint.

Request
POST /_/backbeat/api/crr/pause/<location-name>
Response
{}
Pause All CRR

This POST request manually pauses cross-region replication for all locations configured as destination replication endpoints.

Request
POST /_/backbeat/api/crr/pause
Response
{}
Resume CRR for a Location

This POST request resumes cross-region replication to a specified location configured as a destination replication endpoint.

Request
POST /_/backbeat/api/crr/resume/<location-name>
Response
{}
Resume All CRR

This POST request resumes cross-region replication for all locations configured as destination replication endpoints.

Request
POST /_/backbeat/api/crr/resume
Response
{}
Set CRR Resume Time

This POST request sets the resume time for paused cross-region replication at a specified location configured as a destination replication endpoint. Specifying all as the location name schedules a resume for all available paused destinations.

Providing a POST request body object with an hours key and a valid integer value schedules a resume to occur in the given number of hours.

If no request body is provided for this route, a default of 6 hours is applied.

Request
POST /_/backbeat/api/crr/resume/<location-name>/schedule
Example Body
{
  "hours": 6
}
Response
{}
Get CRR Resume Time

This GET request checks if the given location has a scheduled cross-region replication resume job. If a resume job is scheduled, Backbeat returns the date when the resume is scheduled to occur.

Note

CRR resumes are scheduled as unique, non-recurring events. Resumes cannot be scheduled as recurring events.

Request
GET /_/backbeat/api/crr/resume/<location-name>

Specifying all for the location name returns all scheduled resume jobs, if any.

Response

The response is formatted by location and contains a time for each requested location with a scheduled resume, or none for locations with no scheduled resume.

{
  "location1": "2018-06-28T05:40:20.600Z",
  "location2": "none"
}
Design

The RESTful API exposes methods for users to pause and resume cross-region replication operations.

Redis’s pub/sub function propagates requests to all active CRR Kafka Consumers on all nodes that have Backbeat containers set up for replication.

Backbeat’s design allows pausing and resuming the CRR service at the lowest level (pause and resume all Kafka consumers subscribed to the CRR topic) to stop processing any replication entries that might have already been populated by Kafka but have yet to be consumed and queued for replication. Any entries already consumed by the Kafka consumer and being processed for replication finish replication and are not paused.

The API has a Redis instance publishing messages to a specific channel. Queue processors subscribe to this channel, and on receiving a request to pause or resume CRR, notify all their Backbeat consumers to perform the action, if applicable. If an action occurs, the queue processor receives an update on the current status of each consumer. Based on the global status of a location, the status is updated in ZooKeeper if a change has occurred.

When a consumer pauses, the consumer process is kept alive and maintains any internal state, including offset. The consumer is no longer subscribed to the CRR topic, so it no longer tries to consume any entries. When the paused consumer is resumed, it again resumes consuming entries from its last offset.

Metrics

For basic metrics, Backbeat gathers, processes, and exposes six data points:

  • Number of operations (ops)
  • Number of completed operations (opsdone)
  • Number of failed operations (opsfail)
  • Number of bytes (bytes)
  • Number of completed bytes (bytesdone)
  • Number of failed bytes (bytesfail)

Common Metrics API Syntax

Backbeat metrics routes are organized as follows:

/_/backbeat/api/metrics/<extension-type>/<location-name>/[<metric-type>]/[<bucket>]/[<key>]?[versionId=<version-id>]

Where:

  • <extension-type> currently supports only crr for replication metrics.
  • <location-name> represents any current destination replication location(s) you have defined. To display metrics for all locations, use all. All metric routes contain a location-name variable.
  • <metric-type> is an optional field. If you specify a metric type, Backbeat returns the specified metric. If you omit it, Backbeat returns all available metrics for the given extension and location.
  • <bucket> is an optional field. It carries the name of the bucket in which the object is expected to exist.
  • <key> is an optional field. When getting CRR metrics for a particular object, it contains the object’s key.
  • <version-id> is an optional field. When getting CRR metrics for a particular object, it contains the object’s version ID.

The site name must match the name specified in env_replication_endpoints under the backbeat replication configurations in env/client_template/group_vars/all.

If the site is for a different cloud backend (e.g., AWS or Azure), use that backend’s defined type (aws_s3 or azure, for example).

Design

To collect metrics, a separate Kafka producer and consumer pair (MetricsProducer and MetricsConsumer) uses its own Kafka topic (defaulting to “backbeat-metrics”) to produce its own Kafka entries.

When a new CRR entry is sent to Kafka, a Kafka entry is produced to the metrics topic, indicating that ops and bytes should be increased. On consumption of this metrics entry, Redis keys are generated with the following schema:

Site-level CRR metrics Redis key:

<site-name>:<default-metrics-key>:<ops-or-bytes>:<normalized-timestamp>

Object-level CRR metrics Redis key:

<site-name>:<bucket-name>:<key-name>:<version-id>:<default-metrics-key>:<ops-or-bytes>:<normalized-timestamp>

A normalized timestamp determines the time interval on which to set the data. The default metrics key ends with the type of data point it represents.

When the CRR entry is consumed from Kafka, processed, and the replication status metadata updated to a completed state (i.e., COMPLETED or FAILED), a Kafka entry is sent to the metrics topic indicating that opsdone and bytesdone should be increased if replication was successful, or opsfail and bytesfail if it was unsuccessful. Again, on consumption of this metrics entry, Redis keys are generated for the respective data points.

It is important to note that a MetricsProducer is initialized and producing to the metrics topic both when the CRR topic BackbeatProducer produces and sends a Kafka entry, and when the CRR topic BackbeatConsumer consumes and processes its Kafka entries. The MetricsConsumer processes these Kafka metrics entries and produces to Redis.

A single-location CRR entry produces four keys in total. The data points stored in Redis are saved in intervals (default of 5 minutes) and are available up to an expiry time (default of 15 minutes).

An object-level CRR entry creates keys of its own. An initial key is set when the CRR operation begins, storing the total size of the object to be replicated. Then, for each part of the object that is transferred to the destination, another key is set (or incremented if a key already exists for the current timestamp) to reflect the number of bytes that have completed replication. The data points stored in Redis are saved in intervals (default of 5 minutes) and are available up to an expiry time (default of 24 hours).

Throughput for object CRR entries is available up to an expiry time (default of 15 minutes). Object CRR throughput is the average number of bytes transferred per second within the last 15 minutes.

A BackbeatServer (default port 8900) and BackbeatAPI expose these metrics stored in Redis by querying based on the prepended Redis keys. Using these data points, we can calculate simple metrics like backlog, number of completions, progress, throughput, etc.
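
As an illustration of those calculations, the sketch below derives backlog, throughput, and progress from the six data points listed earlier, over the default 15-minute window; the counter values are invented placeholders and the formulas are one plausible reading of the definitions above, not Backbeat’s exact code.

# Placeholder values for the six data points Backbeat stores in Redis.
ops, opsdone, opsfail = 120, 100, 5
bytes_total, bytesdone, bytesfail = 6_000_000, 5_000_000, 250_000

EXPIRY_SECONDS = 900  # completions and failures are kept for 15 minutes

backlog_ops = ops - opsdone                     # queued but not yet completed
throughput_ops = opsdone / EXPIRY_SECONDS       # completed ops per second
throughput_bytes = bytesdone / EXPIRY_SECONDS   # completed bytes per second
progress = bytesdone / bytes_total              # fraction of bytes replicated

print(f"backlog={backlog_ops} throughput={throughput_bytes:.2f} B/s "
      f"progress={progress:.0%}")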

Backbeat Metrics Routes

Backbeat offers routes for the following services:

Get All Metrics

This route gathers all metrics for the requested location name and extension type, returning the requested information in a JSON-formatted object.

Request
GET /_/backbeat/api/metrics/crr/<location-name>
Get Pending

This route returns pending replication in number of objects and number of total bytes. The byte total represents data only and does not include the size of associated metadata.

Pending replication represents objects that have been queued up for replication to another site, but for which the replication task has failed or has not been completed.

Request
GET /_/backbeat/api/metrics/crr/<location-name>/pending
Response
"pending":{
  "description":"Number of pending replication operations (count) and bytes (size)",
  "results":{
    "count":0,
    "size":0
  }
}
Get Backlog

This route returns the replication backlog in number of objects and number of total bytes for the specified extension type and location name. Replication backlog represents the objects that have been queued for replication to another location, but for which the replication task is not complete. If replication for an object fails, failed object metrics are considered backlog.

Request
GET /_/backbeat/api/metrics/crr/<location-name>/backlog
Response
"backlog":{
  "description":"Number of incomplete replication operations (count) and number of incomplete bytes transferred (size)",
  "results":{
    "count":4,
    "size":"6.12"
  }
}
Get Completions

This route returns the replication completions in number of objects and number of total bytes transferred for the specified extension type and location. Completions are collected only up to an EXPIRY time, set to 15 minutes by default.

Request
GET /_/backbeat/api/metrics/crr/<location-name>/completions
Response
"completions":{
  "description":"Number of completed replication operations (count) and number of bytes transferred (size) in the last 900 seconds",
  "results":{
    "count":31,
    "size":"47.04"
  }
}
Get Failures

This route returns replication failures in number of objects and number of total bytes for the specified extension type and location. Failures are collected only up to an EXPIRY time, set by default to 15 minutes.

Request
GET /_/backbeat/api/metrics/crr/<location-name>/failures
Response
"failures":{
  "description":"Number of failed replication operations (count) and bytes (size) in the last 900 seconds",
  "results":{
    "count":"5",
    "size":"10.12"
  }
}
Get Throughput: Ops/Sec

This route returns the current throughput in number of completed operations per second (that is, objects replicated per second) and total bytes completed per second for the specified extension type and location name.

Request
GET /_/backbeat/api/metrics/crr/<location-name>/throughput
Response
"throughput":{
  "description":"Current throughput for replication operations in ops/sec (count) and bytes/sec (size)",
  "results":{
    "count":"0.00",
    "size":"0.00"
  }
}
Get Throughput: Bytes/Sec

This route returns the throughput in total bytes completed per second for the specified object.

Request
GET /_/backbeat/api/metrics/crr/<location-name>/throughput/<bucket>/<key>?versionId=<version-id>
Response
{
  "description": "Current throughput for object replication in bytes/sec (throughput)",
  "throughput": "0.00"
}
Get Progress

This route returns replication progress in bytes transferred for the specified object.

Request
GET /_/backbeat/api/metrics/crr/<location-name>/progress/<bucket>/<key>?versionId=<version-id>
Response
{
  "description": "Number of bytes to be replicated (pending), number of bytes transferred to the destination (completed), and percentage of the object that has completed replication (progress)",
  "pending": 1000000,
  "completed": 3000000,
  "progress": "75%"
}
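
In this example, the progress percentage is consistent with the byte counts: completed / (pending + completed) = 3,000,000 / 4,000,000 = 75%.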

The Backbeat API provides REST endpoints for these features.

Common Request Headers

All Backbeat API endpoints are addressed through the /_/backbeat/api/... route, with endpoints entered as described in the sections above.

Access to the Backbeat API paths is detailed in the “Backbeat API” section of Zenko Operation.

The internal routes presented above are required for testing the overall health of Backbeat and for measuring the progress of an ongoing replication.

Response Codes

Backbeat exposes various metric routes that return a response with an HTTP code.

Response               Details
200 OK                 Success.
403 AccessDenied       The request IP address must be defined in conf/config.json, in the server.healthChecks.allowFrom field.
404 RouteNotFound      The route must be valid.
405 MethodNotAllowed   The HTTP verb must be GET.
500 InternalError      The error can originate in one of several components: the API server, Kafka, ZooKeeper, Redis, or one of the topic Producers.
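
For example, a conf/config.json that accepts requests from localhost might contain the following fragment (the field path comes from the table above; the address values are illustrative):

"server": {
  "healthChecks": {
    "allowFrom": ["127.0.0.1", "::1"]
  }
}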

Replication Status

A special status, PROCESSING, supports cross-region replication with a multiple-backend topology. Objects in CRR buckets transition from PENDING to PROCESSING to COMPLETED or FAILED.

  • PENDING: CRR to all backends is pending.
  • PROCESSING: Replication to at least one backend has completed, and the object is waiting for the remaining backend(s) to finish.
  • COMPLETED: All backends report a completed status.
  • FAILED: At least one backend failed.

Each backend’s replication status is reported as user metadata.

For example, if the site names configured in the replication endpoints are aws-backend-1, aws-backend-2, azure-backend-1, and azure-backend-2, the object's user metadata, as returned by a HEAD request, may appear as:

aws-backend-1-replication-status: COMPLETED
aws-backend-2-replication-status: PENDING
azure-backend-1-replication-status: COMPLETED
azure-backend-2-replication-status: PENDING

This user metadata is in addition to the object's overall replication status, which follows the logic laid out in the bucket replication configuration.
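
These per-backend statuses can also be read programmatically. The following minimal Python sketch assumes boto3 and uses hypothetical endpoint, credential, bucket, and object names; boto3 returns x-amz-meta-* user metadata in the Metadata dictionary of a HEAD response:

import boto3

# Hypothetical endpoint, credentials, bucket, and key; substitute your own.
s3 = boto3.client(
    "s3",
    endpoint_url="http://zenko.local:8000",
    aws_access_key_id="accessKey1",
    aws_secret_access_key="verySecretKey1",
)

head = s3.head_object(Bucket="crr-bucket", Key="my-object")

# Aggregate replication status (PENDING, PROCESSING, COMPLETED, or FAILED).
print("overall:", head.get("ReplicationStatus"))

# Per-backend statuses are carried as user metadata (x-amz-meta-* headers),
# which boto3 exposes through the Metadata dictionary.
for name, value in head.get("Metadata", {}).items():
    if name.endswith("-replication-status"):
        print(name, "=>", value)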

Prometheus

Prometheus, one of several dashboards available to Zenko users, provides insight into both the Kubernetes platform (MetalK8s) and Zenko functionality.

Lifecycle Metrics

The lifecycle transition policies feature generates useful metrics, currently exposed through the Prometheus toolkit (originally developed at SoundCloud). Exposure through the Backbeat API is under development.

The following Prometheus metrics are available (see the Prometheus documentation for how to access counter/gauge/histogram data and how to query the Prometheus API). Labels and their content are described below.

Available Metrics
zenko_replication_queued_total
  • type: counter
  • description: number of objects queued for replication
  • labels: origin, partition, fromLocation, fromLocationType, toLocation, toLocationType
zenko_replication_queued_bytes
  • type: counter
  • description: number of bytes queued for replication
  • labels: origin, partition, fromLocation, fromLocationType, toLocation, toLocationType
zenko_replication_processed_bytes
  • type: counter
  • description: number of bytes replicated
  • labels: origin, fromLocation, fromLocationType, toLocation, toLocationType, status
zenko_replication_elapsed_seconds
  • type: histogram
  • description: replication jobs elapsed time in seconds
  • labels: origin, fromLocation, fromLocationType, toLocation, toLocationType, status, contentLengthRange
Labels

The following labels are provided to one or more of the above metrics:

  • origin: Identifies the service that triggered the replication, e.g. “lifecycle”. Lifecycle is currently the only service whose replications are reflected in these metrics; as other services under development adopt replication, this label will distinguish replication tasks by the service that triggered them.
  • partition: The partition number of the “backbeat-data-mover” Kafka topic in which the replication task was queued.
  • fromLocation: The name of the source Zenko location.
  • fromLocationType: The type of the source location: aws_s3 for an AWS-compatible location, local for a local storage or RING location, azure for an Azure location, gcp for a Google Cloud location.
  • toLocation: The name of the target Zenko location.
  • toLocationType: The type of the target location, following the same convention as fromLocationType.
  • status: The status of the finished task: success if the replication completed successfully, or error if it did not.
  • contentLengthRange: Separates metric values into content-length (object size) buckets to provide more meaningful elapsed-time metrics grouped by object size range. The ranges used are: <10KB, 10KB..30KB, 30KB..100KB, […], 300GB..1TB, >1TB.
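
For example (a sketch relying on the standard Prometheus histogram convention, in which a histogram is exposed as *_bucket time series with an le label), the following PromQL query computes the 95th-percentile elapsed time of successful lifecycle-triggered replications, per target location, over the last 5 minutes:

histogram_quantile(0.95, sum by (le, toLocation) (rate(zenko_replication_elapsed_seconds_bucket{origin="lifecycle",status="success"}[5m])))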
Kafka Metrics

The Kafka deployment provides its own set of Prometheus metrics. One of them, kafka_consumergroup_lag, is especially useful for monitoring lifecycle transitions:

kafka_consumergroup_lag
  • type: gauge
  • description: Lag of consumer groups on a topic/partition, i.e. number of messages published but not consumed yet by this consumer group
  • labels: topic, partition, consumergroup
Labels

The following labels can be used to filter this metric:

  • topic: The topic name. The relevant topic names for transition policies are:

    • “backbeat-data-mover” for tracking the data mover (replication) topic
    • “backbeat-lifecycle-bucket-tasks” for tracking the bucket tasks topic (scanning buckets for lifecycle actions to execute on objects)
    • “backbeat-lifecycle-object-tasks” for tracking the object tasks topic (tasks to execute on each expiring or transitioning object)
    • “backbeat-gc” for tracking the garbage collection topic (The garbage collection service removes original data after a transition. It is also used for transient source data removal after successful CRR.)
  • partition: Partition number in the Kafka topic

  • consumergroup: This label carries the consumer group name. Relevant consumer group names for transition policies are:

    • backbeat-replication-group-[location name]: The consumer group that consumes replication tasks for transition actions targeting a particular location (e.g., “backbeat-replication-group-aws1” tracks the consumer group for the “aws1” location). This consumer group also consumes messages destined for other locations but does not process them, so its lag also counts replications to other locations.
    • backbeat-lifecycle-bucket-processor-group: The unique consumer group of the “backbeat-lifecycle-bucket-tasks” topic.
    • backbeat-lifecycle-object-processor-group: The unique consumer group of the “backbeat-lifecycle-object-tasks” topic.
    • backbeat-gc-consumer-group: The unique consumer group of the “backbeat-gc” topic.

You can use either the Prometheus API or the Prometheus UI and the PromQL query language to access these metrics.

Below are two examples. See the Prometheus documentation for a full PromQL reference.

  • Compute the total average throughput, in bytes per second, of successful replications triggered by lifecycle transitions over the last 5 minutes:

    sum(rate(zenko_replication_processed_bytes{origin="lifecycle",status="success"}[5m]))
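
  • As a second example (a sketch; the “aws1” location name is hypothetical), compute the total number of messages published to the data-mover topic but not yet consumed by the replication consumer group for that location:

    sum(kafka_consumergroup_lag{topic="backbeat-data-mover",consumergroup="backbeat-replication-group-aws1"})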
    

The foregoing descriptions are not encyclopedic. You may find other metrics not documented here to be suitable for your use case.