
Installation

System Requirements

  1. Linux or macOS with at least 1 GB of RAM
  2. One of the following object stores:
    • An Amazon Web Services account with access to S3
    • A Google Cloud Storage account
    • Any other S3-compatible object store

Installation

The ObjectiveFS binary is installed at /sbin/mount.objectivefs.

Red Hat

CentOS

Ubuntu

Debian

  • Step 1. Download the ObjectiveFS DEB package
  • Step 2. Update the package lists: apt-get update
  • Step 3. Install FUSE: apt-get install fuse
  • Step 4. Run the install: dpkg -i objectivefs_7.0_amd64.deb
  • Step 5. Verify that the NTP offset is less than 1 second: /usr/sbin/ntpdate -q pool.ntp.org

SUSE

Other Linux

macOS

Getting Started

This document is for release 5.0 and newer.

Overview

It’s quick and easy to get started with your ObjectiveFS file system.

Get Started with EC2 Shared Filesystem in 3 steps

What you need

3 easy steps

  1. Configure your credentials (one-time setup)

    Need your keys? See: how to get your S3 keys or GCS keys.
    For the default region, see the region list.

    $ sudo mount.objectivefs config
    Enter ObjectiveFS license: <your ObjectiveFS license>
    Enter Access Key Id: <your S3 or GCS access key>
    Enter Secret Access Key: <your S3 or GCS secret key>
    Enter Default Region (optional): <S3 or GCS region>

  2. Create a file system

    Use a globally unique, non-secret file system name.
    Choose a strong passphrase, write it down and store it somewhere safe.
    IMPORTANT: Without the passphrase, there is no way to recover any files.

    $ sudo mount.objectivefs create <your filesystem name>
    Passphrase (for s3://<filesystem>): <your passphrase>
    Verify passphrase (for s3://<filesystem>): <your passphrase>

    Note: Your filesystem will be created in the default region for S3 if specified in step 1, or us-west-2 otherwise. For GCS, it will be created in your physical location. To specify your region, see here.
  3. Mount the file system

    You need an existing empty directory to mount your file system, e.g. /ofs.
    Process will run in the background.

    $ sudo mkdir /ofs
    $ sudo mount.objectivefs <your filesystem name> /ofs
    Passphrase (for s3://<filesystem>): <your passphrase>


    You can mount this file system on all machines (e.g. laptop, EC2 servers, etc.) with which you’d like to share your data.

And you are up and running! It’s that easy.

See the User Guide for all commands and options.

User Guide

This document is for release 5.0 and newer.

Overview

It’s quick and easy to get started with your ObjectiveFS file system. Just 3 steps to get your new file system up and running. (see Get Started)

ObjectiveFS runs on your Linux and macOS machines, and implements a log-structured file system with a cloud object store backend, such as AWS S3, Google Cloud Storage (GCS) or your on-premise object store. Your data is encrypted before leaving your machine, and stays encrypted until it returns to your machine.

This user guide covers the commands and options supported by ObjectiveFS. For an overview of the commands, refer to the Command Overview section. For detailed description and usage of each command, refer to the Reference section.

Commands

Config
Sets up the required environment variables to run ObjectiveFS (details)
sudo mount.objectivefs config [<directory>]
Create
Creates a new file system (details)
sudo mount.objectivefs create [-l <region>] <filesystem>
List
Lists your file systems in S3 or GCS (details)
sudo mount.objectivefs list [-a] [<filesystem>]
Mount
Mounts your file system on your Linux or macOS machines (details)
Run in background: sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
Run in foreground: sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
Unmount
Unmounts your file system on your Linux or macOS machines (details)
sudo umount <dir>
Destroy
Destroys your file system (details)
sudo mount.objectivefs destroy <filesystem>

Reference

This section covers the detailed description and usage for each ObjectiveFS command.

Config

  • SUMMARY: Sets up the required environment variables to run ObjectiveFS

  • USAGE:
    sudo mount.objectivefs config [<directory>]
  • DESCRIPTION:
    Config is a one-time operation that sets up the required credentials, such as object store keys and your license, as environment variables in a directory. You can also optionally set your default region.
    <directory>
    Directory to store your environment variables. This should be a new non-existing directory.
    Default: /etc/objectivefs.env
  • WHAT YOU NEED:
  • DETAILS:

    Config sets up the following environment variables in /etc/objectivefs.env (if no argument is provided) or in the directory specified.

    • AWS_ACCESS_KEY_ID Your object store access key
    • AWS_SECRET_ACCESS_KEY Your object store secret key
    • OBJECTIVEFS_LICENSE Your ObjectiveFS license key
    • AWS_DEFAULT_REGION (optional) This optional field can be an AWS or GCS region or the S3-compatible endpoint (e.g. http://<object store>) for your on-premise object store
  • EXAMPLES:
    A. Default setup: your environment variables will be created in /etc/objectivefs.env
    $ sudo mount.objectivefs config
    Creating config in /etc/objectivefs.env
    Enter ObjectiveFS license: <your ObjectiveFS license>
    Enter Access Key Id: <your AWS or GCS access key>
    Enter Secret Access Key: <your AWS or GCS secret key>
    Enter Default Region (optional): <your preferred region>

    B. User-specified destination directory
    $ sudo mount.objectivefs config /home/ubuntu/.ofs.env
    Creating config in /home/ubuntu/.ofs.env
    Enter ObjectiveFS license: <your ObjectiveFS license>
    Enter Access Key Id: <your AWS or GCS access key>
    Enter Secret Access Key: <your AWS or GCS secret key>
    Enter Default Region (optional): <your preferred region>

  • TIPS:
    • To make changes to your keys or default region, edit the files in the environment directory directly (e.g. /etc/objectivefs.env/AWS_ACCESS_KEY_ID).
    • If you don’t want to use the variables in /etc/objectivefs.env, you can also set the environment variables directly on the command line. See environment variables on command line.
    • You can also manually create the environment variables directory without using the config command. You will need to set up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and OBJECTIVEFS_LICENSE in the environment directory. See environment variables section for details.
    • If you have an attached AWS EC2 IAM role to your EC2 instance, you can automatically rekey with IAM roles (see live rekeying) and don’t need to have the AWS_SECRET_ACCESS_KEY or AWS_ACCESS_KEY_ID environment variables.
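
    The third tip above (creating the environment directory by hand) can be sketched in plain shell. This is only an illustrative sketch: the directory and all key values below are placeholders, and a real setup would use /etc/objectivefs.env with your actual credentials.

    ```shell
    # Sketch: manually create an ObjectiveFS-style environment directory.
    # All values below are placeholders, not real credentials.
    dir=/tmp/ofs.env            # a real setup would use /etc/objectivefs.env
    mkdir -p "$dir"
    printf '%s\n' 'your-access-key' > "$dir/AWS_ACCESS_KEY_ID"
    printf '%s\n' 'your-secret-key' > "$dir/AWS_SECRET_ACCESS_KEY"
    printf '%s\n' 'your-license'    > "$dir/OBJECTIVEFS_LICENSE"
    chmod 700 "$dir" && chmod 600 "$dir"/*   # keep credentials owner-only
    ls "$dir"                                # lists the three credential files
    ```

    Each file name is an environment variable name, and the first line of the file is its value, which is the format the config command itself produces.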

Create

  • SUMMARY:

    Creates a new file system

  • USAGE:
    sudo mount.objectivefs create [-l <region>] <filesystem>
  • DESCRIPTION:

    This command creates a new file system in your S3, GCS or on-premise object store. You need to provide a passphrase for the new file system. Please choose a strong passphrase, write it down and store it somewhere safe.
    IMPORTANT: Without the passphrase, there is no way to recover any files.

    <filesystem>
    A globally unique, non-secret file system name. (Required)
    The filesystem name maps to an object store bucket, and S3/GCS require a globally unique namespace for buckets.
    For S3, you can optionally add the “s3://” prefix, e.g. s3://myfs.
    For GCS, you can optionally add the “gs://” prefix, e.g. gs://myfs.
    For on-premise object store, you can also specify an endpoint directly with the “http://” prefix, e.g. http://s3.example.com/foo
    -l <region>
    The region to store your file system in. (see region list)
    Default: The region specified by your AWS_DEFAULT_REGION environment variable (if set). Otherwise, S3’s default is us-west-2 and GCS’s default is based on your server’s location (us, eu or asia).
  • WHAT YOU NEED:
    • Your ObjectiveFS environment directory is set up (see Config section).
  • DETAILS:
    This command creates a new filesystem in your object store. You can specify the region to create your filesystem by using the -l <region> option or by setting the AWS_DEFAULT_REGION environment variable.

    ObjectiveFS also supports creating multiple file systems per bucket. Please refer to the Filesystem Pool section for details.

  • EXAMPLES:
    A. Create a file system in the default region
    $ sudo mount.objectivefs create myfs
    Passphrase (for s3://myfs): <your passphrase>
    Verify passphrase (for s3://myfs): <your passphrase>


    B. Create an S3 file system in a user-specified region (e.g. eu-central-1)

    $ sudo mount.objectivefs create -l eu-central-1 s3://myfs
    Passphrase (for s3://myfs): <your passphrase>
    Verify passphrase (for s3://myfs): <your passphrase>


    C. Create a GCS file system in a user-specified region (e.g. us)

    $ sudo mount.objectivefs create -l us gs://myfs
    Passphrase (for gs://myfs): <your passphrase>
    Verify passphrase (for gs://myfs): <your passphrase>


  • TIPS:
    • You can store your filesystem passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE. Please verify this file’s permission is restricted to root only.
    • To run with a different ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
      $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs create myfs
    • To create a filesystem without manually entering your passphrase (e.g. for scripting filesystem creation), you can use the admin mode and store your filesystem passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE.

List

  • SUMMARY: Lists your file systems, snapshots and buckets
  • USAGE:
    sudo mount.objectivefs list [-asz] [<filesystem>[@<time>]]
  • DESCRIPTION:

    This command lists your file systems, snapshots or buckets in your object store. The output includes the file system name, filesystem kind (regular filesystem or pool), snapshot (automatic, checkpoint, enabled status), region and location.

    -a
    List all buckets in your object store, including non-ObjectiveFS buckets.
    -s
    Enable listing of snapshots.
    -z
    Use UTC for snapshot timestamps.
    <filesystem>
    The filesystem name to list. If the filesystem doesn’t exist, nothing will be returned.
    <filesystem>@<time>
    The snapshot to list. The time specified can be in UTC (needs -z) or local time, in the ISO8601 format (e.g. 2016-12-31T15:40:00).
    If a prefix of a time is given, a list of snapshots matching the time prefix will be listed.
    default
    All ObjectiveFS file systems are listed.
  • WHAT YOU NEED:
    • Your ObjectiveFS environment directory is set up (see Config section).
  • DETAILS:

    The list command has several options to list your filesystems, snapshots and buckets in your object store. By default, it lists all your ObjectiveFS filesystems and pools. It can also list all buckets, including non-ObjectiveFS buckets, with the -a option. To list only a specific filesystem or filesystem pool, you can provide the filesystem name. For a description of snapshot listing, see Snapshots section.

    The output of the list command shows the filesystem name, filesystem kind, snapshot type, region and location.

    Example filesystem list output:
    NAME                KIND  SNAP REGION        LOCATION
    s3://myfs-1         ofs   -    eu-central-1  EU (Frankfurt)
    s3://myfs-2         ofs   -    us-west-2     US West (Oregon)
    s3://myfs-pool      pool  -    us-east-1     US East (N. Virginia)
    s3://myfs-pool/fsa  ofs   -    us-east-1     US East (N. Virginia)

    Example snapshot list output:
    NAME                           KIND  SNAP    REGION     LOCATION
    s3://myfs@2017-01-10T11:10:00  ofs   auto    eu-west-2  EU (London)
    s3://myfs@2017-01-10T11:17:00  ofs   manual  eu-west-2  EU (London)
    s3://myfs@2017-01-10T11:20:00  ofs   auto    eu-west-2  EU (London)
    s3://myfs                      ofs   on      eu-west-2  EU (London)

    Filesystem kinds:
    KIND    DESCRIPTION
    ofs     ObjectiveFS filesystem
    pool    ObjectiveFS filesystem pool
    -       Non-ObjectiveFS bucket
    ?       Error while querying the bucket
    access  No permission to access the bucket

    Snapshot types:
    SNAP    APPLICABLE FOR  DESCRIPTION
    auto    snapshot        Automatic snapshot
    manual  snapshot        Checkpoint (or manual) snapshot
    on      filesystem      Snapshots are activated on this filesystem
    -       filesystem      Snapshots are not activated
  • EXAMPLES:

    A. List all ObjectiveFS filesystems.

    $ sudo mount.objectivefs list
    NAME                KIND  SNAP REGION        LOCATION
    s3://myfs-1         ofs   -    eu-central-1  EU (Frankfurt)
    s3://myfs-2         ofs   on   us-west-2     US West (Oregon)
    s3://myfs-3         ofs   on   eu-west-2     EU (London)
    s3://myfs-pool      pool  -    us-east-1     US East (N. Virginia)
    s3://myfs-pool/fsa  ofs   -    us-east-1     US East (N. Virginia)
    s3://myfs-pool/fsb  ofs   -    us-east-1     US East (N. Virginia)
    s3://myfs-pool/fsc  ofs   -    us-east-1     US East (N. Virginia)


    B. List a specific file system, e.g. s3://myfs-3

    $ sudo mount.objectivefs list s3://myfs-3
    NAME                KIND  SNAP REGION        LOCATION
    s3://myfs-3         ofs   -    eu-west-2     EU (London)


    C. List everything, including non-ObjectiveFS buckets. In this example, my-bucket is a non-ObjectiveFS bucket.

    $ sudo mount.objectivefs list -a
    NAME                KIND  SNAP REGION  LOCATION
    gs://my-bucket      -     -    EU      European Union
    gs://myfs-a         ofs   -    US      United States
    gs://myfs-b         ofs   on   EU      European Union
    gs://myfs-c         ofs   -    ASIA    Asia Pacific


    D. List snapshots for myfs that match 2017-01-10T11

    $ sudo mount.objectivefs list -s myfs@2017-01-10T11
    NAME                           KIND  SNAP    REGION     LOCATION
    s3://myfs@2017-01-10T11:10:00  ofs   auto    eu-west-2  EU (London)
    s3://myfs@2017-01-10T11:17:00  ofs   manual  eu-west-2  EU (London)
    s3://myfs@2017-01-10T11:20:00  ofs   auto    eu-west-2  EU (London)
    s3://myfs@2017-01-10T11:30:00  ofs   auto    eu-west-2  EU (London)


    E. List snapshots for myfs that match 2017-01-10T12 in UTC

    $ sudo mount.objectivefs list -sz myfs@2017-01-10T12
    NAME                            KIND  SNAP    REGION     LOCATION
    s3://myfs@2017-01-10T12:10:00Z  ofs   auto    eu-west-2  EU (London)
    s3://myfs@2017-01-10T12:17:00Z  ofs   manual  eu-west-2  EU (London)
    s3://myfs@2017-01-10T12:20:00Z  ofs   auto    eu-west-2  EU (London)
    s3://myfs@2017-01-10T12:30:00Z  ofs   auto    eu-west-2  EU (London)


  • TIPS:
    • You can list partial snapshots by providing <filesystem>@<time prefix>, e.g. myfs@2017-01-10T12.
    • To run with a different ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
      $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs list

Mount

  • SUMMARY: Mounts your file system on your Linux or macOS machines.

  • USAGE:
    Run in background:
    sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
    Run in foreground:
    sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
  • DESCRIPTION:

    This command mounts your file system on a directory on your Linux or macOS machine. After the file system is mounted, you can read and write to it just like a local disk.

    You can mount the same file system on as many Linux or macOS machines as you need. Your license will always scale if you need more mounts, and it is not limited to the number of included licenses on your plan.

    NOTE: The mount command needs to run as root. It runs in the foreground if “mount” is provided, and runs in the background otherwise.

    <filesystem>
    A globally unique, non-secret file system name. (Required)
    For S3, you can optionally add the “s3://” prefix, e.g. s3://myfs.
    For GCS, you can optionally add the “gs://” prefix, e.g. gs://myfs.
    For on-premise object store, you can also specify an endpoint directly with the “http://” prefix, e.g. http://s3.example.com/foo.
    The filesystem can end with @<timestamp> for mounting snapshots.
    <dir>
    Directory (full path name) on your machine to mount your file system. (Required)
    This directory should be an existing empty directory.
  • General Mount Options

    -o env=<dir>
    Load environment variables from directory <dir>. See environment variable section.

    Mount Point Options

    -o acl | noacl
    Enable / disable Access Control Lists. (Default: Disable)
    -o dev | nodev
    Allow block and character devices. (Default: Allow)
    -o diratime | nodiratime
    Update / Don’t update directory access time. (Default: Update)
    -o exec | noexec
    Allow binary execution. (Default: Allow)
    -o export | noexport
    Enable / disable restart support for NFS or Samba exports. (Default: Disable)
    -o fsavail=<size>
    Set the reported available filesystem space. Value can be specified using SI/IEC prefixes, e.g. 100TB, 10.5TiB, 1.5PB, 3PiB, etc.
    -o nonempty
    Allow mounting on non-empty directory (Default: Disabled)
    -o ro | rw
    Read-only / Read-write file system. (Default: Read-write)
    -o strictatime | relatime | noatime
    Update / Smart update / Don’t update access time. (Default: Smart update)
    -o suid | nosuid
    Allow / Disallow suid bits. (Default: Allow)

    File System Mount Options

    -o bulkdata | nobulkdata
    Enable / disable bulk data mode. The bulk data mode improves the storage layout during high write activity to improve filesystem performance. To improve performance, the bulk data mode will use more PUT requests. The bulk data mode is useful during initial data upload and when there are lots of writes to the filesystem. (Default: Enable on 5.5 release and newer)
    -o clean[=1|=2] | noclean
    Set level for / disable storage cleaner. The storage cleaner reclaims storage from deleted snapshots together with the compaction process. 1 is for the standard cleaner and 2 is for the cleaner+. (Default: standard cleaner)
    -o compact[=<level>] | nocompact
    Set level for / disable background index compaction. (Default: Enable)
    -o fuse_conn=<NUM>
    Set the max background FUSE connections. Range is 1 to 1024. (Default: 96)
    -o freebw | nofreebw | autofreebw
    Regulates the network bandwidth usage for compaction. Use freebw when the network bandwidth is free (e.g. on-premise object store or ec2 instance connecting directly to an s3 bucket in the same region). Use nofreebw when the network bandwidth is charged. autofreebw will enable freebw when it detects that it is on an ec2 instance in the same region as the S3 bucket. Important: when using freebw and autofreebw, verify that there is no network routing through a paid network route such as a NAT gateway to avoid incurring bandwidth charges. Enabling freebw will incur extra AWS data transfer charges when running outside of the S3 bucket region. (Default: nofreebw) [6.8 release and newer]
    -o hpc | nohpc
    Enable / disable high performance computing mode. Enabling hpc assumes the server is running in the same data center as the object store. (Default: will check)
    -o mboost[=<minutes>] | nomboost
    Enable / disable memory index reduction. This feature will trade off performance for lower memory usage for larger filesystems. The mboost setting specifies how long certain data is allowed to stay in memory before being considered for memory reduction. The range is 10 to 10080 minutes (60 if not specified). (Default: Disable)
    -o mt | nomt | cputhreads=<N> | iothreads=<N>
    Specify multithreading options. (Default: 2 dedicated IO threads) (details in Multithreading section)
    -o nomem=<spin|stop>
    Set mount behavior when unable to allocate memory from the system. If spin is set, the mount will wait until memory is available. If stop is set, the mount will exit and accesses to the mount will return error. In both cases, a log message will be sent to syslog. Spin is useful in situations where memory can be freed (e.g. OOM killer) or added (e.g. swap). Stop may be useful for generic nodes behind a load balancer. (Default: spin) [6.3 release and newer]
    -o ocache | noocache
    Enable / disable caching in object store. When enabled, the object store cache (if generated) will shorten the mount time for medium to large filesystems. This cache will only be generated or updated on read-write mounts that have been mounted for at least 10 minutes. (Default: Enable on 5.5 release and newer)
    -o oob | nooob
    Enable / disable the out of band flag. Useful if your nodes communicate information about newly created files out of band and not through the filesystem.
    -o oom | nooom
    Linux only. Enable / disable oom protection to reduce the likelihood of being selected by the oom killer. To be exempt from the oom killer (use with care), you can specify oom twice. (Default: Enable)
    -o ratelimit | noratelimit
    Enable / disable the built-in request rate-limiter. The built-in request rate-limiter is designed to prevent runaway programs from running up the S3 bill. (Default: Enable)
    -o retry=<seconds>
    Duration in seconds to retry the connection to the object store if the connection could not be established upon start up. Range is 0 (no retry) to 31536000. (Default: 60 for background mount, 3 for foreground mount) [6.3 release and newer]
    -o snapshots | nosnapshots
    Enable / disable generation of automatic snapshots from this mount point. (Default: Enable)
  • WHAT YOU NEED:
    • Your ObjectiveFS environment directory is set up (see Config section).
    • The file system name (see Create section for creating a file system)
    • An empty directory to mount your file system
  • EXAMPLES:
    Assumptions:
    1. /ofs is an existing empty directory
    2. Your passphrase is in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE

    A. Mount an S3 file system in the foreground

    $ sudo mount.objectivefs mount myfs /ofs

    B. Mount a GCS file system with a different env directory, e.g. /home/ubuntu/.ofs.env

    $ sudo mount.objectivefs mount -o env=/home/ubuntu/.ofs.env gs://myfs /ofs

    C. Mount an S3 file system in the background

    $ sudo mount.objectivefs s3://myfs /ofs

    D. Mount a GCS file system with non-default options
    Assumption: /etc/objectivefs.env contains GCS keys.

    $ sudo mount.objectivefs -o nosuid,nodev,noexec,noatime gs://myfs /ofs
  • TIPS:
    • Control-C on a foreground mount will stop the file system and try to unmount it. To properly unmount the filesystem, use umount.
    • You can mount your filesystem without needing to manually enter your passphrase by storing your passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE. Please verify that your passphrase file’s permission is the same as other files in the environment directory (e.g. OBJECTIVEFS_LICENSE).
    • If your machine sleeps while the file system is mounted, the file system will remain mounted and working after it wakes up, even if your network has changed. This is useful for a laptop that is used in multiple locations.
    • To run with another ObjectiveFS environment directory, e.g. /home/ubuntu/.ofs.env:
    $ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs mount myfs /ofs

Mount on Boot

ObjectiveFS supports Mount on Boot, where your directory is mounted automatically upon reboot.

  • WHAT YOU NEED:
  • A. Linux
    1. Check that /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE exists so you can mount your filesystem without needing to enter the passphrase.
      If it doesn’t exist, create the file with your passphrase as the content.(see details)

    2. Add a line to /etc/fstab with:

      <filesystem> <mount dir> objectivefs auto,_netdev[,<opts>]  0  0 
      _netdev is used by many Linux distributions to mark the file system as a network file system.
    3. For more details, see Mount on Boot Setup Guide for Linux.

  • B. macOS

    macOS can use launchd to mount on boot. See Mount on Boot Setup Guide for macOS for details.
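
    For example, following the /etc/fstab format in the Linux steps above, a concrete line (the filesystem name and mount directory are the examples used earlier in this guide) might look like:

    ```
    s3://myfs  /ofs  objectivefs  auto,_netdev  0  0
    ```

    Additional mount options from the Mount section can be appended in the options field, e.g. auto,_netdev,noatime.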

Unmount

  • SUMMARY: Unmounts your file system on your Linux or macOS machines
  • USAGE:
    sudo umount <dir>
  • DESCRIPTION:
    To unmount your filesystem, run umount with the mount directory. Typing Control-C in the window where ObjectiveFS is running in the foreground will stop the filesystem, but may not unmount it. Please run umount to properly unmount the filesystem.
  • WHAT YOU NEED:

    • Not accessing any file or directory in the file system
  • EXAMPLE:
    $ sudo umount /ofs

Destroy

  • IMPORTANT: After a destroy, there is no way to recover any files or data.

  • SUMMARY: Destroys your file system. This is an irreversible operation.

  • USAGE:
    sudo mount.objectivefs destroy <filesystem>
  • DESCRIPTION:

    This command deletes your file system from your object store. Please make sure that you really don’t need your data anymore because this operation cannot be undone.

    You will be prompted for the authorization code available on your user profile page. This code changes periodically. Please refresh your user profile page to get the latest code.

    <filesystem>
    The file system that you want to destroy. (Required)
  • NOTE:
    Your file system should be unmounted from all of your machines before running destroy.
  • WHAT YOU NEED:
    • Your ObjectiveFS environment directory is set up (see Config section).
    • Your authorization code from your user profile page. (Note: the authorization code changes periodically. To get the latest code, please refresh your user profile page.)
  • EXAMPLE:
    $ sudo mount.objectivefs destroy s3://myfs
    *** WARNING ***
    The filesystem 's3://myfs' will be destroyed. All data (550 MB) will be lost permanently!
    Continue [y/n]? y
    Authorization code: <your authorization code>


Settings

This section covers the options you can run ObjectiveFS with.

Regions

This is a list of supported S3 and GCS regions. ObjectiveFS supports all regions for S3 and GCS.

  • AWS S3
    • us-east-1
    • us-east-2
    • us-west-1
    • us-west-2 [default for S3 if no default region is specified]
    • ca-central-1
    • eu-central-1
    • eu-west-1
    • eu-west-2
    • ap-south-1
    • ap-southeast-1
    • ap-southeast-2
    • ap-northeast-1
    • ap-northeast-2
    • sa-east-1
    • us-gov-west-1 [requires AWS GovCloud account]
  • GCS
    • Multi-regions:
      • us
      • eu
      • asia
    • Sub-regions:
      • us-central1
      • us-east1
      • us-west1
      • europe-west1
      • asia-east1
      • asia-northeast1
    • RELATED COMMANDS:
    • REFERENCE:
      AWS S3 regions, GCS regions

Endpoints

The table below lists the corresponding endpoints for the supported regions.

  • AWS S3
    REGION          ENDPOINT
    us-east-1       s3-external-1.amazonaws.com
    us-east-2       s3-us-east-2.amazonaws.com
    us-west-1       s3-us-west-1.amazonaws.com
    us-west-2       s3-us-west-2.amazonaws.com
    ca-central-1    s3-ca-central-1.amazonaws.com
    eu-central-1    s3-eu-central-1.amazonaws.com
    eu-west-1       s3-eu-west-1.amazonaws.com
    eu-west-2       s3-eu-west-2.amazonaws.com
    ap-south-1      s3-ap-south-1.amazonaws.com
    ap-southeast-1  s3-ap-southeast-1.amazonaws.com
    ap-southeast-2  s3-ap-southeast-2.amazonaws.com
    ap-northeast-1  s3-ap-northeast-1.amazonaws.com
    ap-northeast-2  s3-ap-northeast-2.amazonaws.com
    sa-east-1       s3-sa-east-1.amazonaws.com
    us-gov-west-1   s3-us-gov-west-1.amazonaws.com
  • GCS
    All GCS regions have the endpoint storage.googleapis.com

Environment Variables

ObjectiveFS uses environment variables for configuration. You can set them using any standard method (e.g. on the command line, in your shell). We also support reading environment variables from a directory.

The filesystem settings specified by the environment variables are set at start up. To update the settings (e.g. change the memory cache size, enable disk cache), please unmount your filesystem and remount it with the new settings (exception: manual rekeying).

A. Environment Variables from Directory

ObjectiveFS supports reading environment variables from files in a directory, similar to the envdir tool from the daemontools package.

Your environment variables are stored in a directory. Each file in the directory corresponds to an environment variable, where the file name is the environment variable name and the first line of the file content is the value.
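The directory convention can be mimicked with a short shell loop (a sketch of the envdir-style format described above, not ObjectiveFS's actual reader): each file name becomes a variable name and the file's first line becomes its value.

```shell
#!/bin/sh
# Sketch: read an envdir-style directory into environment variables.
# Each file name is the variable name; the first line is the value.
ENVDIR=$(mktemp -d)
printf 'AKIAEXAMPLEKEY\n' > "$ENVDIR/AWS_ACCESS_KEY_ID"
printf 'supersecret\n'    > "$ENVDIR/AWS_SECRET_ACCESS_KEY"

for f in "$ENVDIR"/*; do
    export "$(basename "$f")=$(head -n 1 "$f")"
done

echo "$AWS_ACCESS_KEY_ID"   # AKIAEXAMPLEKEY
```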

  • SETUP:

    The Config command sets up your environment directory with three main environment variables:

    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • OBJECTIVEFS_LICENSE

    You can also add additional environment variables in the same directory using the same format: where the file name is the environment variable and the first line of the file content is the value.

  • EXAMPLE:
    $ ls /etc/objectivefs.env/
    AWS_ACCESS_KEY_ID  AWS_SECRET_ACCESS_KEY  OBJECTIVEFS_LICENSE  OBJECTIVEFS_PASSPHRASE
    $ cat /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE
    your_objectivefs_passphrase

B. Environment Variables on Command Line

You can also set the environment variables on the command line. The user-provided environment variables will override the environment directory’s variables.

  • USAGE:
    sudo [<ENV VAR>='<value>'] mount.objectivefs <filesystem> <dir>
  • EXAMPLE:
    $ sudo CACHESIZE=30% mount.objectivefs myfs /ofs
SUPPORTED ENVIRONMENT VARIABLES

To enable a feature, set the corresponding environment variable and remount your filesystem (exception: manual rekeying).

ACCOUNT
Username of the user to run as. Root privileges are dropped after startup.
AWS_ACCESS_KEY_ID
Your object store access key. (Required or use AWS_METADATA_HOST)
AWS_DEFAULT_REGION
The default object store region to connect to.
AWS_METADATA_HOST
AWS STS host publishing session keys (for EC2 set to “169.254.169.254”). Sets and rekeys AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SECURITY_TOKEN.
AWS_SECRET_ACCESS_KEY
Your secret object store key. (Required or use AWS_METADATA_HOST)
AWS_SECURITY_TOKEN
Session security token when using AWS STS.
AWS_SERVER_SIDE_ENCRYPTION
Server-side encryption with AWS KMS support. (Enterprise plan feature) (see Server-side Encryption section)
AWS_TRANSFER_ACCELERATION
Set to 1 to use the AWS S3 acceleration endpoint. (Enterprise plan feature) (see S3 Transfer Acceleration section)
CACHESIZE
Set cache size as a percentage of memory (e.g. 30%) or an absolute value (e.g. 500M or 1G). (Default: 20%) (see Memory Cache section)
DISKCACHE_SIZE
Enable and set disk cache size and optional free disk size. (see Disk Cache section)
DISKCACHE_PATH
Location of disk cache when disk cache is enabled. (see Disk Cache section)
DNSCACHEIP
IP address of recursive name resolver. (Default: use /etc/resolv.conf)
http_proxy
HTTP proxy server address. (see HTTP Proxy section)
IDMAP
User ID and Group ID mapping. (see UID/GID Mapping section)
OBJECTIVEFS_ENV
Directory to read environment variables from. (Default: /etc/objectivefs.env)
OBJECTIVEFS_LICENSE
Your ObjectiveFS license key. (Required)
OBJECTIVEFS_PASSPHRASE
Passphrase for the filesystem. (Default: will prompt)

Features

Memory Cache

  • DESCRIPTION:

    ObjectiveFS uses memory to cache data and metadata locally to improve performance and to reduce the number of S3 operations.

  • USAGE:

    Set the CACHESIZE environment variable to one of the following:

    • the actual memory size (e.g. 500M or 2G)
    • as a percentage of memory (e.g. 30%)

    A minimum of 64MB will be used for CACHESIZE.

  • DEFAULT VALUE:

    If CACHESIZE is not specified, the default is 20% of memory for machines with 3GB+ memory or 5%-20% (with a minimum of 64MB) for machines with less than 3GB memory.

  • DETAILS:

    The cache size is applicable per mount. If you have multiple ObjectiveFS file systems on the same machine, the total cache memory used by ObjectiveFS will be the sum of the CACHESIZE values for all mounts.

    The memory cache is one component of the ObjectiveFS memory usage. The total memory used by ObjectiveFS is the sum of:
    a. memory cache usage (set by CACHESIZE)
    b. index memory usage (based on the number of S3 objects/filesystem size), and
    c. kernel memory usage

    Caching statistics, such as the cache hit rate and kernel memory usage, are sent to the log. The memory cache setting is also logged when the file system is mounted.

  • EXAMPLES:

    A. Set memory cache size to 30%

    $ sudo CACHESIZE=30% mount.objectivefs myfs /ofs

    B. Set memory cache size to 2GB

    $ sudo CACHESIZE=2G mount.objectivefs myfs /ofs

Disk Cache

  • DESCRIPTION:

    ObjectiveFS can use local disks to cache data and metadata locally to improve performance and to reduce the number of S3 operations. Once the disk cache is enabled, ObjectiveFS handles the operations automatically, with no additional maintenance required from the user.

    The disk cache is compressed, encrypted and has strong integrity checks. It is robust: it can safely be copied between machines, or even deleted manually, while in active use. So you can rsync the disk cache between machines to warm a cache or to update its content.

    Since the disk cache’s content persists when your file system is unmounted, you get the benefit of fast restart and fast access when you remount your file system.

    ObjectiveFS will always keep some free space on the disk, by periodically checking the free disk space. If your other applications use more disk space, ObjectiveFS will adjust and use less by shrinking its cache.

    Multiple file systems on the same machine can share the same disk cache without crosstalk, and they will collaborate to keep the most used data in the disk cache.

  • RECOMMENDATION:

    We recommend enabling the disk cache when a local SSD or hard drive is available. For EC2 instances, we recommend using the local SSD instance store instead of EBS, because EBS volumes may run into IOPS limits depending on the volume size. (See how to mount an instance store on EC2 for disk cache.)

  • USAGE:

    The disk cache uses DISKCACHE_SIZE and DISKCACHE_PATH environment variables (see environment variables section for how to set environment variables). To enable disk cache, set DISKCACHE_SIZE.

    DISKCACHE_SIZE:

    • Accepts values in the form <DISK CACHE SIZE>[:<FREE SPACE>].
    • <DISK CACHE SIZE>:

      • Set to the actual space you want ObjectiveFS to use (e.g. 20G or 1T).
      • If this value is larger than your actual disk (e.g. 1P), ObjectiveFS will try to use as much space as possible on the disk while preserving the free space.
    • <FREE SPACE> (optional):

      • Set to the amount of free space you want to keep on the volume (e.g. 5G).
      • The default value is 3G.
      • When it is set to 0G, ObjectiveFS will try to use as much space as possible (useful for dedicated disk cache partition).
    • The free space value takes precedence over the disk cache size: the actual disk cache size is the smaller of DISKCACHE_SIZE and (total disk space - free space).

    DISKCACHE_PATH

    • Specifies the location of the disk cache.
    • Default location:
      macOS: /Library/Caches/ObjectiveFS
      Linux: /var/cache/objectivefs
  • DEFAULT VALUE:

    Disk cache is disabled when DISKCACHE_SIZE is not specified.

  • EXAMPLES:

    A. Set disk cache size to 20GB and use default free space (3GB)

    $ sudo DISKCACHE_SIZE=20G mount.objectivefs myfs /ofs

    B. Use as much space as possible for disk cache and keep 10GB free space

    $ sudo DISKCACHE_SIZE=1P:10G mount.objectivefs myfs /ofs

    C. Set disk cache size to 20GB and free space to 10GB and specify the disk cache path

    $ sudo DISKCACHE_SIZE=20G:10G DISKCACHE_PATH=/var/cache/mydiskcache mount.objectivefs myfs /ofs

    D. Use the entire space for disk cache (when using dedicated volume for disk cache)

    $ sudo DISKCACHE_SIZE=1P:0G mount.objectivefs myfs /ofs
  • WHAT YOU NEED:
    • Local instance store or SSD mounted at your DISKCACHE_PATH (default: /var/cache/objectivefs)
  • TIPS:
    • Different file systems on the same machine can point to the same disk cache. They can also point to different locations by setting different DISKCACHE_PATH.
    • The DISKCACHE_SIZE is per disk cache directory. If multiple file systems are mounted concurrently using different DISKCACHE_SIZE values and point to the same DISKCACHE_PATH, ObjectiveFS will use the minimum disk cache size and the maximum free space value.
    • A background disk cache clean-up will keep the disk cache size within the specified limits by removing the oldest data.
    • One way to warm a new disk cache is to rsync the content from an existing disk cache.
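The size rules above (DISKCACHE_SIZE versus the free-space reservation) amount to taking the smaller of the two limits. A hypothetical sketch of the rule, with sizes in GB, not ObjectiveFS's actual accounting:

```shell
#!/bin/sh
# Effective disk cache = min(requested size, total disk - free space to keep),
# per the precedence rule described above. Sizes in GB for simplicity.
effective_cache_gb() {
    size=$1 total=$2 free=$3
    avail=$((total - free))
    if [ "$size" -lt "$avail" ]; then echo "$size"; else echo "$avail"; fi
}

effective_cache_gb 20 100 3     # 20 (requested size fits, free space preserved)
effective_cache_gb 1000 100 10  # 90 (oversized request shrinks to disk - free)
```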

Snapshots

ObjectiveFS supports automatic built-in snapshots and checkpoint snapshots. Automatic snapshots are managed by ObjectiveFS and are taken according to a snapshot schedule while your filesystem is mounted. Checkpoint snapshots can be taken at any time, and the filesystem doesn't need to be mounted. Snapshots can be mounted as a read-only filesystem to access your filesystem data as it was at that point in time. There is no limit to the number of filesystem snapshots that you can create and use.

Snapshots are not backups since they are part of the same filesystem and are stored in the same bucket. A backup is an independent copy of your data stored in a different location.

Snapshots can be useful for backups. To create a consistent point-in-time backup, you can mount a recent snapshot and use it as the source for backup, instead of running backup from your live filesystem. This way, your data will not change while the backup is in progress.

Activate Snapshots

Snapshots are activated on a filesystem upon the first mount with a snapshot-capable ObjectiveFS version (v5.0 or newer); no special mount options are needed. Upon activation, ObjectiveFS performs a one-time identification of existing snapshots and activates them if available. A log message is generated when snapshots have been activated on your filesystem. You can also use the list command to identify the filesystems with activated snapshots.

NOTE: even though many snapshots are generated, the storage used for each snapshot is incremental. If two snapshots contain the same data, there is no additional storage used for the second snapshot. If two snapshots are different, only the difference is stored, and not a new copy.

Create Snapshots

A. Create Automatic Snapshots
After initial activation, snapshots are automatically taken only when your filesystem is mounted. When your filesystem is not mounted, automatic snapshots will not be taken since there are no changes to the filesystem. Automatic snapshots are taken and retained based on the snapshot schedule in the table below. Older automatic snapshots are automatically removed to maintain the number of snapshots per interval.

Snapshot Interval   Number of Snapshots
  10-minute         72
  hourly            72
  daily             32
  weekly            16

RELEVANT INFO:

  • If there are multiple mounts of the same filesystem, only one snapshot will be generated at a given scheduled time.
  • Automatic snapshot generation is on by default and can be disabled with the nosnapshots mount option.

B. Create Checkpoint Snapshots (Pro and Enterprise Plan Feature)

sudo mount.objectivefs snapshot <filesystem>

After initial activation, checkpoint (i.e. manual point-in-time) snapshots can be taken at any time, even if your filesystem is not mounted. There is no limit to the number of checkpoint snapshots you can take. Checkpoint snapshots are kept until they are explicitly removed with the destroy command.

Checkpoint snapshots are useful for creating a snapshot right before making large changes to the filesystem. They can also be useful if you need snapshots at a specific time of the day or for compliance reasons.

List Snapshots

DESCRIPTION:

You can list one or more snapshots for your filesystem using the list command. Snapshots have the format <filesystem>@<time>, and are by default listed in local time, e.g. s3://myfs@2016-12-31T15:40:00. They can also be listed in UTC with the -z option.

The list command shows both automatic and checkpoint snapshots in your object stores. You can use it to list all snapshots available on your filesystem, or to list the snapshots matching a specific time prefix.

USAGE:

sudo mount.objectivefs list -s[z] [<filesystem>[@<time>]]

This command uses the -s option to enable listing of snapshots. It can further restrict the list of snapshots to a single filesystem by providing the filesystem name. The snapshots can be filtered by the time prefix such as myfs@2016-11 to list all snapshots in November 2016.

For more details on these options and the output format, see the list command.

EXAMPLES:

A. List all snapshots for myfs

$ sudo mount.objectivefs list -s myfs

B. List all snapshots for myfs that match 2017-01-10T11

$ sudo mount.objectivefs list -s myfs@2017-01-10T11

C. List all the snapshots for myfs that match 2017-01-10T12:30 in UTC

$ sudo mount.objectivefs list -sz myfs@2017-01-10T12:30
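The time-prefix filtering in the examples above behaves like a string-prefix match over snapshot names. A sketch with grep (illustration only; the real matching is done by the list command):

```shell
#!/bin/sh
# Sketch: prefix-match snapshot names the way `list -s myfs@<prefix>` does.
snapshots='myfs@2017-01-10T11:10:00
myfs@2017-01-10T11:40:00
myfs@2017-01-10T12:30:00'

# All snapshots in the 11 o'clock hour:
echo "$snapshots" | grep '^myfs@2017-01-10T11'
# myfs@2017-01-10T11:10:00
# myfs@2017-01-10T11:40:00
```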
Mount Snapshots

DESCRIPTION:

Snapshots can be mounted to access the filesystem as it was at that point in time. When a snapshot is mounted, it is accessible as a read-only filesystem.

You can mount both automatic and checkpoint snapshots. When an automatic snapshot is mounted, a checkpoint snapshot with the same timestamp is created, so that the snapshot is not automatically removed if its retention schedule expires while it is mounted. These checkpoint snapshots, when created for data recovery purposes only, are also included in the Basic plan.

Snapshots can be mounted using the same AWS keys used for mounting the filesystem. If you choose to use a different key, you need only read permission to mount a checkpoint snapshot, but both read and write permissions to mount an automatic snapshot.

A snapshot mount is a regular mount and will be counted as an ObjectiveFS instance while it is mounted.

USAGE:

sudo mount.objectivefs [-o <options>] <filesystem>@<time> <dir>
<filesystem>@<time>
The snapshot for the filesystem at a particular time. The time can be specified as local time or UTC in the ISO8601 format. You can use the list snapshots command to get the list of available snapshots for your filesystem.
<dir>
Directory (full path name) to mount your file system snapshot. (Required)
This directory should be an existing empty directory.
<options>
You can also use the same mount options as mounting your filesystem (some of them will have no effect since it is a read-only filesystem).

EXAMPLES:

A. Mount a snapshot specified in local time

$ sudo mount.objectivefs mount myfs@2017-01-10T11:10:00 /ofs

B. Mount a snapshot specified in UTC

$ sudo mount.objectivefs mount myfs@2017-01-10T12:30:00Z /ofs

C. Mount a snapshot with multithreading enabled

$ sudo mount.objectivefs mount -o mt myfs@2017-01-10T11:10:00 /ofs

TIPS:

  • Snapshots and Disk Cache: mounting snapshots with the same disk cache as your filesystem is safe. It can also improve performance by getting common data from the disk cache.
  • Backups: to create a consistent point-in-time backup, mount a recent snapshot and use it as the source for backup. Unlike using a live filesystem for backup, your data will not change while the backup is in progress.

Unmount Snapshots

Same as the regular unmount command.

Destroy Snapshots

To destroy a snapshot, use the regular destroy command with <filesystem>@<time>. Time should be specified in ISO8601 format (e.g. 2017-01-10T10:10:00) and can be either local time or UTC. Both automatic and checkpoint snapshots matching the timestamp will be removed.

sudo mount.objectivefs destroy <filesystem>@<time>

EXAMPLE:
Destroy a snapshot specified in local time

$ sudo mount.objectivefs destroy s3://myfs@2017-01-10T11:10:00
*** WARNING ***
The snapshot 's3://myfs@2017-01-10T11:10:00' will be destroyed. No other changes will be done to the filesystem.
Continue [y/n]? y

Recovering Files from Snapshots

If you need to recover a file from a snapshot, you can use the following steps:

  1. Identify the snapshot you want to recover from using list snapshots.

    $ sudo mount.objectivefs list -s myfs@2017-01-10
  2. Mount the snapshot on an empty directory, e.g. /ofs-snap.

    $ sudo mount.objectivefs myfs@2017-01-10T11:10:00 /ofs-snap
  3. Mount your filesystem on another empty directory, e.g. /ofs.

    $ sudo mount.objectivefs mount myfs /ofs
  4. Verify it is the right file to restore. Copy the file from the snapshot to your filesystem.

    $ cp /ofs-snap/<path to file> /ofs/<path to file>

Live Rekeying

ObjectiveFS supports live rekeying which lets you update your AWS keys while keeping your filesystem mounted. With live rekeying, you don’t need to unmount and remount your filesystem to change your AWS keys. ObjectiveFS supports both automatic and manual live rekeying.

Automatic Rekeying with IAM roles

If you have attached an AWS EC2 IAM role to your EC2 instance, you can set AWS_METADATA_HOST to 169.254.169.254 to automatically rekey. With this setting, you don’t need to use the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables.

Manual Rekeying

You can also manually rekey by updating the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables (and also AWS_SECURITY_TOKEN if used) and sending SIGHUP to mount.objectivefs. The running objectivefs program (i.e. mount.objectivefs) will automatically reload and pick up the updated keys.
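The manual rekey steps can be sketched as follows. This is an illustration under stated assumptions: a temp directory stands in for /etc/objectivefs.env, and the SIGHUP is only delivered if a mount.objectivefs process exists.

```shell
#!/bin/sh
# Sketch of a manual rekey: write the new keys into the env directory,
# then send SIGHUP so the running mount reloads them.
# A temp dir stands in for /etc/objectivefs.env in this illustration.
ENVDIR=$(mktemp -d)
printf 'NEWACCESSKEY\n' > "$ENVDIR/AWS_ACCESS_KEY_ID"
printf 'NEWSECRETKEY\n' > "$ENVDIR/AWS_SECRET_ACCESS_KEY"

# Signal the running mount.objectivefs (if any) to pick up the new keys.
pkill -HUP -x mount.objectivefs 2>/dev/null || true
```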


Compaction

  • DESCRIPTION:

    ObjectiveFS is a log-structured filesystem that uses an object store for storage. Compaction combines multiple small objects into a larger object and brings related data close together. It improves performance and reduces the number of object store operations needed for each filesystem access. Compaction is a background process and adjusts dynamically depending on your workload and your filesystem’s characteristics.

  • USAGE:

    You can specify the compaction rate by setting a mount option when mounting your filesystem. Faster compaction increases bandwidth usage. For more about mount options, see this section.

    • nocompact - disable compaction
    • compact - enable regular compaction
    • compact,compact - enable faster compaction (uses more bandwidth)
    • compact,compact,compact - enable fastest compaction (uses most bandwidth)
  • DEFAULT:

    Compaction is enabled by default. If the filesystem is mounted on an EC2 machine in the same region as your S3 bucket, compact,compact is the default. Otherwise, compact is the default.
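The default choice described above can be sketched as a simple conditional (hypothetical helper, not part of ObjectiveFS):

```shell
#!/bin/sh
# Sketch of the default compaction level: EC2 in the same region as the
# S3 bucket defaults to compact,compact; everything else to compact.
default_compaction() {
    # $1 = "yes" if mounting from EC2 in the same region as the S3 bucket
    if [ "$1" = "yes" ]; then
        echo "compact,compact"
    else
        echo "compact"
    fi
}

default_compaction yes   # compact,compact
default_compaction no    # compact
```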

  • EXAMPLES:

    A. To enable faster compaction

    $ sudo mount.objectivefs -o compact,compact myfs /ofs

    B. To disable compaction

    $ sudo mount.objectivefs -o nocompact myfs /ofs
  • TIPS:
    • You can find out the number of S3 objects for your ObjectiveFS filesystem in the IUsed column of df -i.
    • To increase the compaction rate of a filesystem, you can enable compaction on all mounts of that filesystem.
    • You can also set up temporary extra mounts with the fastest compaction option to increase the compaction rate.

Multithreading

Pro and Enterprise Plan Feature

  • DESCRIPTION:

    Multithreading is a performance feature that can lower latency and improve throughput for your workload. ObjectiveFS will spawn dedicated CPU and IO threads to handle operations such as data decompression, data integrity check, disk cache accesses and updates.

  • USAGE:

    Multithreading can be enabled with the mt mount option, which sets the dedicated CPU threads to 4 and the dedicated IO threads to 8. One read thread and one write thread are spawned for each specified CPU and IO thread. You can also explicitly specify the number of dedicated CPU and IO threads with the cputhreads and iothreads mount options, or disable multithreading with the nomt mount option.

    Mount option         Description
    -o mt                sets cputhreads to 4 and iothreads to 8
    -o cputhreads=<N>    sets the number of dedicated CPU threads to N (min: 0, max: 128)
    -o iothreads=<N>     sets the number of dedicated IO threads to N (min: 0, max: 128)
    -o nomt              sets cputhreads and iothreads to 0
  • DEFAULT VALUE:

    By default, there are 2 dedicated IO threads and no dedicated CPU threads.

  • EXAMPLE:

    A. Enable default multithreading option (4 cputhreads, 8 iothreads)

    $ sudo mount.objectivefs -o mt <filesystem> <dir> 

    B. Set CPU threads to 8 and IO threads to 16

    $ sudo mount.objectivefs -o cputhreads=8,iothreads=16 <filesystem> <dir> 

    C. Example fstab entry to enable multithreading

    s3://<filesystem> <dir> objectivefs auto,_netdev,mt 0 0

Filesystem Pool

Pro and Enterprise Plan Feature

Filesystem pool lets you have multiple file systems per bucket. Since AWS S3 has a limit of 100 buckets per account, you can use pools if you need lots of file systems.

A filesystem pool is a collection of regular filesystems to simplify the management of lots of filesystems. You can also use pools to organize your company’s file systems by teams or departments.

A file system in a pool is a regular file system. It has the same capabilities as a regular file system.

A pool is a top-level structure. This means that a pool can only contain file systems, and not other pools. Since a pool is not a filesystem, but a collection of filesystems, it cannot be mounted directly.

Reference: Managing Per-User Filesystems Using Filesystem Pool and IAM Policy

An example organization structure is:

|
|- myfs1                // one file system per bucket
|- myfs2                // one file system per bucket
|- mypool1 -+- /myfs1   // multiple file systems per bucket
|           |- /myfs2   // multiple file systems per bucket
|           |- /myfs3   // multiple file systems per bucket
|
|- mypool2 -+- /myfs1   // multiple file systems per bucket
|           |- /myfs2   // multiple file systems per bucket
|           |- /myfs3   // multiple file systems per bucket
:

Create

To create a file system in a pool, use the regular create command with
<pool name>/<file system> as the filesystem argument.

sudo mount.objectivefs create [-l <region>] <pool>/<filesystem>

NOTE:

  • You don’t need to create a pool explicitly. A pool is automatically created when you create the first file system in this pool.
  • The file system will reside in the same region as the pool. Therefore, any subsequent file systems created in a pool will be in the same region, regardless of the -l <region> specification.

EXAMPLE:
A. Create an S3 file system in the default region (us-west-2)

# Assumption: your /etc/objectivefs.env contains S3 keys
$ sudo mount.objectivefs create s3://mypool/myfs

B. Create a GCS file system in the EU region

# Assumption: your /etc/objectivefs.env contains GCS keys
$ sudo mount.objectivefs create -l EU gs://mypool/myfs
List

When you list your file system, you can distinguish a pool in the KIND column. A file system inside of a pool is listed with the pool prefix.

You can also list the file systems in a pool by specifying the pool name.

sudo mount.objectivefs list [<pool name>]

EXAMPLE:
A. In this example, there are two pools, myfs-pool and myfs-poolb. The file systems in each pool are listed with the pool prefix.

$ sudo mount.objectivefs list
NAME                        KIND      REGION
s3://myfs-1                 ofs       us-west-2
s3://myfs-2                 ofs       eu-central-1
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2
s3://myfs-poolb/            pool      us-west-1
s3://myfs-poolb/foo         ofs       us-west-1


B. List all file systems under a pool, e.g. myfs-pool

$ sudo mount.objectivefs list myfs-pool
NAME                        KIND      REGION
s3://myfs-pool/             pool      us-west-2
s3://myfs-pool/myfs-a       ofs       us-west-2
s3://myfs-pool/myfs-b       ofs       us-west-2
s3://myfs-pool/myfs-c       ofs       us-west-2

Mount

To mount a file system in a pool, use the regular mount command with
<pool name>/<file system> as the filesystem argument.

Run in background:

sudo mount.objectivefs [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>

Run in foreground:

sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>

EXAMPLES:
A. Mount an S3 file system and run the process in background

$ sudo mount.objectivefs s3://myfs-pool/myfs-a /ofs

B. Mount a GCS file system with a different env directory, e.g. /home/tom/.ofs_gcs.env, and run the process in foreground

$ sudo mount.objectivefs mount -o env=/home/tom/.ofs_gcs.env gs://mypool/myfs /ofs

Unmount

Same as the regular unmount command.

Destroy

To destroy a file system in a pool, use the regular destroy command with
<pool name>/<file system> as the filesystem argument.

sudo mount.objectivefs destroy <pool>/<filesystem>

NOTE:

  1. You can destroy a file system in a pool, and other file systems within the pool will not be affected.
  2. A pool can only be destroyed if it is empty.

EXAMPLE:
Destroying an S3 file system in a pool

$ sudo mount.objectivefs destroy s3://myfs-pool/myfs-a
*** WARNING ***
The filesystem 's3://myfs-pool/myfs-a' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>


UID/GID Mapping

Pro and Enterprise Plan Feature

  • DESCRIPTION:

    This feature lets you map local user ids and group ids to different ids in the remote filesystem. The id mappings should be 1-to-1, i.e. a single local id should only be mapped to a single remote id, and vice versa. If multiple ids are mapped to the same id, the behavior is undefined.

    When a uid is remapped and U* is not specified, all other unspecified uids will be mapped to the default uid: 65534 (aka nobody/nfsnobody). Similarly, all unspecified gids will be mapped to the default gid (65534) if a gid is remapped and G* is not specified.

  • USAGE:
    IDMAP="<Mapping>[:<Mapping>]"

    where Mapping is:
        U<local id or name> <remote id>
        G<local id or name> <remote id>
        U* <default id>
        G* <default id>


    Mapping Format
    A. Single User Mapping:    U<local id or name> <remote id>
    Maps a local user id or local user name to a remote user id.

    B. Single Group Mapping:    G<local id or name> <remote id>
    Maps a local group id or local group name to a remote group id.

    C. Default User Mapping:    U* <default id>
    Maps all unspecified local and remote users ids to the default id. If this mapping is not specified, all unspecified user ids will be mapped to uid 65534 (aka nobody/nfsnobody).

    D. Default Group Mapping:    G* <default id>
    Maps all unspecified local and remote group ids to the default id. If this mapping is not specified, all unspecified group ids will be mapped to gid 65534 (aka nobody/nfsnobody).

  • EXAMPLES:

    A. UID mapping only

    IDMAP="U600 350:Uec2-user 400:U* 800"
    • Local uid 600 is mapped to remote uid 350, and vice versa
    • Local ec2-user is mapped to remote uid 400, and vice versa
    • All other local uids are mapped to remote uid 800
    • All other remote uids are mapped to local uid 800
    • Group IDs are not remapped

    B. GID mapping only

    IDMAP="G800 225:Gstaff 400"
    • Local gid 800 is mapped to remote gid 225, and vice versa
    • Local group staff is mapped to remote gid 400, and vice versa
    • All other local gids are mapped to remote gid 65534 (aka nobody/nfsnobody)
    • All other remote gids are mapped to local gid 65534 (aka nobody/nfsnobody)
    • User IDs are not remapped

    C. UID and GID mapping

    IDMAP="U600 350:G800 225"
    • Local uid 600 is mapped to remote uid 350, and vice versa
    • Local gid 800 is mapped to remote gid 225, and vice versa
    • All other local uids and gids are mapped to remote 65534 (aka nobody/nfsnobody)
    • All other remote uids and gids are mapped to local id 65534 (aka nobody/nfsnobody)
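    Example A's uid translation can be sketched as a lookup with a default (hypothetical helper, not part of ObjectiveFS; name-based entries like Uec2-user are omitted for brevity):

```shell
#!/bin/sh
# Sketch of the uid mapping in IDMAP="U600 350:U* 800" from example A:
# explicit mappings are checked first, then the U* default applies.
map_uid() {
    case "$1" in
        600) echo 350 ;;   # U600 350
        *)   echo 800 ;;   # U* 800
    esac
}

map_uid 600    # 350
map_uid 1000   # 800
```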

HTTP Proxy

Pro and Enterprise Plan Feature

  • DESCRIPTION:

    You can run ObjectiveFS with an http proxy to connect to your object store. A common use case is to connect ObjectiveFS to the object store via a squid caching proxy.

  • USAGE:

    Set the http_proxy environment variable to the proxy server’s address (see environment variables section for how to set environment variables).

  • DEFAULT VALUE:

    If the http_proxy environment variable is not set, this feature is disabled.

  • EXAMPLE:

    Mount a filesystem (e.g. s3://myfs) with an http proxy running locally on port 3128:

    $ sudo http_proxy=http://localhost:3128 mount.objectivefs mount myfs /ofs

    Alternatively, you can set the http_proxy in your /etc/objectivefs.env directory

    $ ls /etc/objectivefs.env
    AWS_ACCESS_KEY_ID          OBJECTIVEFS_PASSPHRASE
    AWS_SECRET_ACCESS_KEY      http_proxy
    OBJECTIVEFS_LICENSE

    $ cat /etc/objectivefs.env/http_proxy
    http://localhost:3128

Admin Mode

Pro and Enterprise Plan Feature

  • DESCRIPTION:

    The admin mode provides an easy way to manage many filesystems in a programmatic way. You can use the admin mode to easily script the creation of many filesystems.

    The admin mode lets admins create filesystems without the interactive passphrase confirmation. To destroy a filesystem, admins only need to provide a 'y' confirmation and don't need an authorization code. Admins can list filesystems, like a regular user. However, admins are not permitted to mount a filesystem, keeping the admin and user functionality separate.

    Operation User Mode Admin Mode
    Create Needs passphrase confirmation No passphrase confirmation needed
    List Allowed Allowed
    Mount Allowed Not allowed
    Destroy Needs authorization code and confirmation  Only confirmation needed
  • USAGE:

    Enterprise plan users have an admin license key, in addition to their regular license key. Please contact us for this key.

    To use admin mode, we recommend creating an admin-specific objectivefs environment directory, e.g. /etc/objectivefs.admin.env. Please use your admin license key for OBJECTIVEFS_LICENSE.

    $ ls /etc/objectivefs.admin.env/
    AWS_ACCESS_KEY_ID      AWS_SECRET_ACCESS_KEY
    OBJECTIVEFS_LICENSE    OBJECTIVEFS_PASSPHRASE

    $ cat /etc/objectivefs.admin.env/OBJECTIVEFS_LICENSE
    your_admin_license_key

    You can have a separate user objectivefs environment directory, e.g. /etc/objectivefs.<user>.env, for each user to mount their individual filesystems.

  • EXAMPLES:

    A. Create a filesystem in admin mode with credentials in /etc/objectivefs.admin.env

    $ sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env mount.objectivefs create myfs

    B. Mount the filesystem as user tom in the background

    $ sudo OBJECTIVEFS_ENV=/etc/objectivefs.tom.env mount.objectivefs myfs /ofs
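    Because admin mode skips the interactive confirmations, filesystem creation is easy to script. A sketch of batch creation (the filesystem names are hypothetical, and the passphrase is assumed to come from OBJECTIVEFS_PASSPHRASE in the admin env directory):

    ```shell
    # Sketch: batch-create filesystems in admin mode. With the admin license
    # there is no interactive passphrase confirmation, so the loop does not
    # block on input. The filesystem names below are hypothetical.
    for fs in teama teamb teamc; do
        sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env \
             mount.objectivefs create "$fs"
    done
    ```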

Local License Check

Enterprise Plan Feature

  • DESCRIPTION:

    While our regular license check is very robust and can handle multi-day outages, some companies prefer to minimize external dependencies. For these cases, we offer a local license check feature that lets you run your infrastructure independent of any license server.

  • USAGE:

    Please talk with your enterprise support contact for instructions on how to enable the local license check on your account.

S3 Transfer Acceleration

Enterprise Plan Feature

  • DESCRIPTION:

    ObjectiveFS supports AWS S3 Transfer Acceleration, which enables fast transfers of files over long distances between your server and your S3 bucket.

  • USAGE:

    Set the AWS_TRANSFER_ACCELERATION environment variable to 1 to enable S3 transfer acceleration (see environment variables section for how to set environment variables).

  • REQUIREMENT:

    Your S3 bucket needs to be configured to enable Transfer Acceleration. This can be done from the AWS Console.

  • EXAMPLES:

    Mount a filesystem called myfs with S3 Transfer Acceleration enabled

    $ sudo AWS_TRANSFER_ACCELERATION=1 mount.objectivefs myfs /ofs

AWS KMS Encryption

Enterprise Plan Feature

  • DESCRIPTION:

    ObjectiveFS supports AWS Server-Side encryption using Amazon S3-Managed Keys (SSE-S3) and AWS KMS-Managed Keys (SSE-KMS).

  • USAGE:

    Use the AWS_SERVER_SIDE_ENCRYPTION environment variable (see environment variables section for how to set environment variables).

    The AWS_SERVER_SIDE_ENCRYPTION environment variable can be set to:

    • AES256  (for Amazon S3-Managed Keys (SSE-S3))
    • aws:kms  (for AWS KMS-Managed Keys (SSE-KMS) with default key)
    • <your kms key>  (for AWS KMS-Managed Keys (SSE-KMS) with the keys you create and manage)
  • REQUIREMENT:

    To run SSE-KMS, stunnel is required. See the following guide for setup instructions.

  • EXAMPLES:

    A. Create a filesystem called myfs with Amazon S3-Managed Keys (SSE-S3)

    $ sudo AWS_SERVER_SIDE_ENCRYPTION=AES256 mount.objectivefs create myfs

    B. Create a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using the default key
    Note: make sure stunnel is running. See setup instructions.

    $ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs create myfs

    C. Mount a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using the default key
    Note: make sure stunnel is running. See setup instructions.

    $ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs myfs /ofs

    D. Mount a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using a specific key
    Note: make sure stunnel is running. See setup instructions.

    $ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=<your aws kms key> mount.objectivefs myfs /ofs

Logging

Log information is printed to the terminal when running in the foreground, and is sent to syslog when running in the background. On macOS, the log is typically at /var/log/system.log. On Linux, the log is typically at /var/log/messages or /var/log/syslog.

Below is a list of common log messages. For error messages, please see the troubleshooting section.

A. Initial mount message
  • SUMMARY:

    The message logged every time an ObjectiveFS filesystem is mounted.

  • FORMAT:
    objectivefs starting [<fuse version>, <region>, <endpoint>, <cachesize>, <disk cache setting>]
  • DESCRIPTION:
    <fuse version>
    The fuse protocol version that the kernel uses
    <region>
    The region where your S3 or GCS bucket resides
    <endpoint>
    The endpoint used to access your S3 or GCS bucket, typically determined by the region
    <cachesize>
    The maximum size used for memory cache
    <disk cache setting>
    Indicates whether disk cache is on or off
  • EXAMPLE:

    objectivefs starting [fuse version 7.22, region us-west-2, endpoint http://s3-us-west-2.amazonaws.com, cachesize 753MB, diskcache off]

B. Regular log message
  • SUMMARY:

    The message logged while your filesystem is active. It shows the cumulative number of S3 operations and bandwidth usage since the initial mount message.

  • FORMAT:
    <put> <list> <get> <delete> <bandwidth in> <bandwidth out>
  • DESCRIPTION:
    <put> <list> <get> <delete>
    For this mount on this machine, the total number of put, list, get and delete operations to S3 or GCS since the filesystem was mounted. These numbers start at zero every time the filesystem is mounted.
    <bandwidth in> <bandwidth out>
    For this mount on this machine, the total amount of incoming and outgoing bandwidth since the filesystem was mounted. These numbers start at zero every time the filesystem is mounted.
  • EXAMPLE:

    1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT
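    As a sketch, the cumulative counters can be pulled out of such a line with standard tools, e.g. extracting the GET count with awk (the sample line is the one shown above):

    ```shell
    # Sketch: extract the cumulative GET count from a regular log line.
    line='1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT'
    echo "$line" | awk -F', ' '{
        for (i = 1; i <= NF; i++)
            if ($i ~ / GET$/) { split($i, a, " "); print a[1] }
    }'
    # → 76574
    ```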

C. Caching Statistics
  • SUMMARY:

    Caching statistics are part of the regular log message starting in ObjectiveFS v4.2. This data can be useful for tuning memory and disk cache sizes for your workload.

  • FORMAT:
    CACHE [<cache hit> <metadata> <data> <os>], DISK [<hit>]
  • DESCRIPTION:
    <cache hit>
    Percentage of total requests that hit in the memory cache (cumulative)
    <metadata>
    Percentage of metadata requests that hit in the memory cache (cumulative)
    <data>
    Percentage of data requests that hit in the memory cache (cumulative)
    <os>
    Amount of cached data referenced by the OS at the current time
    DISK [<hit>]
    Percentage of disk cache requests that hit in the disk cache (cumulative)
  • EXAMPLE:

    CACHE [74.9% HIT, 94.1% META, 68.1% DATA, 1.781 GB OS], DISK [99.0% HIT]
D. Error messages
  • SUMMARY:

    Error response from S3 or GCS

  • FORMAT:
    retrying <operation> due to <endpoint> response: <S3/GCS response> [x-amz-request-id:<amz-id>, x-amz-id-2:<amz-id2>]
  • DESCRIPTION:
    <operation>
    The operation (PUT, GET, LIST or DELETE) that encountered the error
    <endpoint>
    The endpoint used to access your S3 or GCS bucket, typically determined by the region
    <S3/GCS response>
    The error response from S3 or GCS
    <amz-id>
    S3 only: the unique request ID from Amazon S3 for the request that encountered the error. This unique ID can help Amazon with troubleshooting the problem.
    <amz-id2>
    S3 only: a second token from Amazon S3 for the request that encountered the error, also used for troubleshooting.
  • EXAMPLE:

    retrying GET due to s3-us-west-2.amazonaws.com response: 500 Internal Server Error, InternalError, x-amz-request-id:E854A4F04A83C125, x-amz-id-2:Zad39pZ2mkPGyT/axl8gMX32nsVn

Relevant Files

/etc/objectivefs.env
Default ObjectiveFS environment variable directory
/etc/resolv.conf
Recursive name resolvers from this file are used unless the DNSCACHEIP environment variable is set.
/var/log/messages
ObjectiveFS output log location on certain Linux distributions (e.g. RedHat) when running in the background.
/var/log/syslog
ObjectiveFS output log location on certain Linux distributions (e.g. Ubuntu) when running in the background.
/var/log/system.log
Default ObjectiveFS output log location on macOS when running in the background.

Troubleshooting

Initial Setup
403 Permission denied
Your S3 keys do not have permissions to access S3. Check that your user keys are added to a group with all S3 permissions.
./mount.objectivefs: Permission denied
mount.objectivefs needs executable permissions set. Run chmod +x mount.objectivefs
During Operation
Transport endpoint is not connected
ObjectiveFS process was killed. The most common reason is related to memory usage and the oom killer. Please see the Memory Optimization Guide for how to optimize memory usage.
Large delay for writes from one machine to appear at other machines
  1. Check that the time on these machines is synchronized. Please verify NTP has a small offset (<1 sec).
    To adjust the clock:
    On Linux: run /usr/sbin/ntpdate pool.ntp.org.
    On macOS: System Preferences → Date & Time → Set date and time automatically
  2. Check for any S3/GCS error responses in the log file
RequestTimeTooSkewed: The difference between the request time and the current time is too large.
The clock on your machine is too fast or too slow. To adjust the clock:
On Linux: run /usr/sbin/ntpdate pool.ntp.org.
On macOS: System Preferences → Date & Time → Set date and time automatically
Checksum error, bad cryptobox
The checksum error occurs when our end-to-end data integrity checker detects that the data stored on S3 differs from the data received when it is read back. Two common causes are:
1. Your S3/GCS bucket contains non-ObjectiveFS objects. Since ObjectiveFS is a log-structured filesystem that uses the object store for storage, it expects to fully manage the content of the bucket. Copying non-ObjectiveFS files directly into the S3/GCS bucket will cause the end-to-end data integrity check to fail with this error.
To fix this, move the non-ObjectiveFS objects out of this bucket.
2. You may be running behind a firewall/proxy that modifies the data in transit. Please contact us for the workaround.
Ratelimit delay
ObjectiveFS has a built-in request rate limiter to prevent runaway programs from running up your S3 bill. The built-in limit starts at 25 million GET requests and 1 million each for PUT and LIST requests. It is implemented as a leaky bucket with a fill rate of 10 million GET requests and 1 million PUT/LIST requests per day, and is reset upon mount.
To explicitly disable the rate-limiter, you can use the noratelimit mount option.
Filesystem format is too new
One likely cause of this error is when your S3/GCS bucket contains non-ObjectiveFS objects. Since ObjectiveFS is a log-structured filesystem that uses the object store for storage, it expects to fully manage the content of the bucket. Copying non-ObjectiveFS files directly into the S3/GCS bucket will cause this error to occur.
To fix this error, move the non-ObjectiveFS objects out of this bucket.
Unmount
Resource busy during unmount
Either a directory or a file in the file system is being accessed. Please verify that you are not accessing the file system anymore.

Upgrades

ObjectiveFS is forward and backward compatible. Upgrading or downgrading to a different release is straightforward. You can also do rolling upgrades for multiple servers. To upgrade: install the new version, unmount and then remount your filesystem.
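As a sketch for a Debian-based host (the package filename, the filesystem name myfs and the mount point /ofs are examples), an upgrade could look like:

```shell
# Sketch of an in-place upgrade on a Debian-based host. The package
# filename, filesystem name (myfs) and mount point (/ofs) are examples.
sudo dpkg -i objectivefs_7.0_amd64.deb    # install the new version
sudo umount /ofs                          # unmount the filesystem
sudo mount.objectivefs myfs /ofs          # remount with the new binary
```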

Questions

Don’t hesitate to send us an email.

Release Notes

7.0

Release date: October 5, 2022

A. New Features
  • Native support for more object stores. Now supporting:
    • Public Cloud
    • Amazon S3
    • Azure Blob Storage
    • Digital Ocean Spaces
    • Google Cloud Storage
    • IBM Cloud Object Storage
    • Oracle Cloud
    • Scaleway
    • Wasabi
    • S3-compatible object stores
    • Gov Cloud
    • Amazon S3 GovCloud
    • Azure GovCloud
    • Oracle GovCloud
    • On-Premise
    • Ceph
    • MinIO
    • IBM COS
    • S3-compatible object stores
  • Built-in TLS/SSL support. Supports AES native instructions and other processor-specific optimization
  • New connection manager for more efficient object store connection handling
  • New heuristics for object store connection reuse and slow connection handling
  • New kcache+ performance feature for storing directories and symlinks in kernel cache
  • New mkdir mount option to create the directory on mount if it doesn’t exist
  • Added ability to select Azure API instead of S3 API by setting the SIGNATURE variable
  • New ENDPOINT variable to directly specify object store endpoint
  • New PATHSTYLE variable to select path style addressing for object stores that do not use domain style addressing
  • New OBJECTSTORE variable to select the default object store
  • config command now generates object store specific configurations
  • macOS: Faster detection of network changes
  • macOS: Always create mount directory when mounting in /Volumes
  • Oracle Cloud: New NAMESPACE variable for Oracle Cloud namespace
  • IBM Public Cloud: Added specific handling for cloud region listing and mounting
  • New REGION, ACCESS_KEY, SECRET_KEY variables are synonyms for AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
B. Performance
  • Optimized syscall selection to improve performance of sequential requests with up to 2X speedup
  • Improved small file performance with up to 40% speedup
  • Implemented fine-grain locking for in-memory index structure for lower latency and less memory usage
  • Lowered latency of mboost for large filesystems
  • Linux: Optimized number of syscalls for connection handling
C. Improvements
  • SIGNATURE variable now supports s3v2 and s3v4, in addition to v2, v4 and azure
  • Descriptive names for more object stores and regions in list command output
  • create command shows available regions for the selected object store
  • config command shows available regions for the selected object store
  • Tuned unmount behavior for disk cache threads
  • Recognize Outposts and Glacier Instant Retrieval AWS storage tiers
  • Recognize more EC2 nitro instances for autofreebw heuristic
  • Log errno names when applicable
  • Updated Azure blob storage protocol version to the latest version: 2021-06-08
  • Improved log messages and error messages
D. Fixes
  • Fixed snapshot reads of partial blocks from uncoordinated concurrent writer nodes
  • Compatibility: All ObjectiveFS filesystems are compatible with this release

6.9.1

Release date: November 2, 2021

  • Updated ioctl for Linux file attributes to work with newer FUSE versions
  • MacOS: Added workaround to address macFUSE breaking backwards compatibility

6.9

Release date: October 2, 2021

A. New Features
  • Azure blob storage support integrated (learn more)
  • Oracle Cloud support (learn more)
  • Disk cache size can be specified as a percentage
  • Removed requirement for GetBucketLocation permission
  • MacOS: macOS extended ACL support
  • MacOS: Added compatibility with macFUSE 4.0+
  • MacOS: Added capability to work on Apple M1 machines
B. Improvements
  • Cleaner efficiency improvements to adapt to more filesystem types and workloads
  • Improvements to dynamic metadata object sizing for large filesystems
  • Improved the memory usage and efficiency for cleaner+
  • Improved multi-node locking performance
  • Improved cache performance when multiple nodes are optimizing the storage layout
  • Adjusted request timeout to better support object stores with slower response times
  • Improved multi-node compaction efficiency for very large filesystems
  • Tuned fsync coalescing for applications with lots of parallel fsyncs
  • MacOS: Improved support for very large macOS resource forks
  • SignatureV4 region defaults to us-east-1 if default region is not set
  • Added verbose flag to list command
  • Updated list command to only output snapshots when @ is specified
  • New logging format and more detailed logging for object store request exceptions
  • Improved log messages for DNS lookup and connection startup
  • Added new diagnostics messages for object store connection issues
C. Fixes
  • Fixed rare snapshot mount issue related to local time conversion
  • Refresh disk cache more often in multi node cleaner+ environment
  • Compatibility: All ObjectiveFS filesystems are compatible with this release

6.8

Release date: March 5, 2021

A. New Features
  • New storage cleaner architecture with more efficient compaction and improved cleaning heuristic (learn more)
  • New Size Tiered and Time Based dynamic compaction heuristic
  • New multithreaded storage cleaner and compactor
  • New object store tier aware compaction algorithm
  • New freebw, autofreebw mount options
  • New mtplus mount option (learn more)
  • New compaction progress monitoring and logging (learn more)
  • A free cleaner thread for all users and additional cleaner threads with multithreading
B. Performance
  • Improved worker threads scaling algorithm
  • Improved storage cleaner heuristics when used with snapshots
  • Increased concurrency for compaction operations at higher compaction levels
  • Improved disk cache performance for larger disk caches
  • Performance optimizations for Petabyte-size filesystems
  • Optimized object sizes for improved object store performance
  • Increased ocache size for very large filesystems
  • Improved storage layout heuristics
  • Faster performance for creation of lots of small files
  • Faster response time in the event of object store errors
  • Increased I/O threads for regular multithreading from 8 to 16
C. General Improvements
  • Improved EC2 detection for autofreebw and compaction level setting
  • Used newer AWS metadata server API version
  • Tuned concurrency to match updated AWS recommendations
  • Tuned compaction to utilize file system idle periods
  • Support filesystem creation in an existing bucket with restrictive permission
  • Adjusted disk cache queue response time
  • Tuned compaction on write for metadata and data
  • Tuned quick sync heuristic
  • Unified endpoint support for all AWS regions
  • Adjusted compaction rate limits
  • Updated various logging messages and exit codes
  • Increased retry handling during initial startup
  • MacOS only: Increased backwards compatibility to macOS 10.8
D. Fixes
  • Fix for disk cache when used together with storage cleaner
  • Fix for sending Google project id in request header
  • Compatibility: All releases after 2.1.1 are compatible with this release

6.7.2

Release date: June 16, 2020

  • Fixed compatibility when using http proxy and sigv2 with non-AWS object stores

6.7.1

Release date: April 25, 2020

  • Linux only: Make EC2 Instance Metadata Service v2 support work better with Docker

6.7

Release date: April 9, 2020

  • Dynamic scaling of threads to improve efficiency and resource usage
  • Reduced memory usage when running with multithreading
  • Speed up performance when running with disk cache
  • Support for EC2 Instance Metadata Service v2 (IMDSv2)
  • Improved disk cache performance resiliency when dealing with slow or faulty local disks
  • Improved performance for small file operations
  • Improved cleaner+ efficiency for active directories and files with many hard links
  • Improved write queue management to better handle contention during slow responses from the object store
  • Reduced max CPU usage for some workloads with improved heuristics
  • Reduced the number of GET requests when running with multiple nodes
  • Tuned S3 connection settings
  • Tuned destroy filesystem operation
  • Added beta support for workflows that communicate new filenames through a separate channel
  • Report final statistics to log file upon shutdown
  • Error message improvements
  • Compatibility: All releases after 2.1.1 are compatible with this release.

6.6

Release date: January 6, 2020

  • New memory caching algorithm for faster cache operations and better memory usage.
  • New heuristic for memory cache to better adapt to workload changes over time.
  • Improved parallelism for backend operations to speed up performance.
  • Reduced memory cache usage for active directories.
  • Added several optimizations to reduce OS memory usage.
  • Reduced number of FUSE interface operations in certain cases.
  • Added fix for rdirplus memory usage.
  • Added fix to address a case that could cause high cpu usage.
  • Amazon S3 only: use ISO 8601 date format for x-amz-date header for http proxy connections.
  • MacOS only: added support for inode creation time queries.
  • Various minor improvements for write workload performance and checkpoint snapshots.
  • Compatibility: All releases after 2.1.1 are compatible with this release.

6.5

Release date: November 28, 2019

A. New Features
  • Extended ACL support for Linux with new acl mount option learn more
  • Support for immutable and append-only file attributes for Linux
  • File capabilities support learn more
  • File attributes support for Linux learn more
  • File flags support for macOS learn more
  • Improved xattr support including more xattr namespace compatibility
B. Improvements
  • Faster cross-node locking with new directory locking algorithm
  • Improved synchronization speed between nodes
  • Improved speed and efficiency of cleaner+
  • Improved 99th percentile latency for filesystem operations
  • Improved write queue management to object store
  • Match atime, ctime and mtime updates closer with ext4
C. General
  • Additional error message logging for http retries
  • Support for macOS Catalina
  • Support for newer FUSE versions up to 7.31
  • Moved oldest supported FUSE version to 7.12
  • Compatibility: All releases after 2.1.1 are compatible with this release

6.4

Release date: August 18, 2019

  • Important fix for cleaner+ in the presence of clock skew between nodes (only for clean=2, not enabled by default)
  • Fixed an issue which can cause a directory to have slow file creation
  • Improved ocache handling of updates from slow writer nodes
  • Improved rdirplus inode memory usage
  • EC2 only: Nitro instance type and region are now reported in syslog
  • Compatibility: All releases after 2.1.1 are compatible with this release

6.3

Release date: June 24, 2019

  • Updated memory allocator to be more robust in low memory situations
  • New nomem mount option to select mount behavior when out of memory
  • New retry mount option for retrying connection to object store upon start up
  • Cleaner+ now skips ocache on initial mount
  • MacOS only: updated FUSE for macOS to version 3.9.2
  • Minor error message updates
  • Compatibility: All releases after 2.1.1 are compatible with this release

6.2

Release date: April 23, 2019

  • Support AWS S3 transfer acceleration through http proxy
  • Fixed cleaner+ low memory issue (only for clean=2, not enabled by default)
  • Added /opt/bin to search path for fusermount
  • Compatibility: All releases after 2.1.1 are compatible with this release

6.1

Release date: March 23, 2019

  • Amazon S3 only: Use Signature Version 4 (SigV4) in all regions to support the upcoming Amazon S3 deprecation of Signature Version 2 (SigV2) on June 24, 2019 (see AWS announcement). S3 regions that need this upgraded version are:
    • us-east-1 (N. Virginia)
    • us-west-1 (N. California)
    • us-west-2 (Oregon)
    • ap-southeast-1 (Singapore)
    • ap-southeast-2 (Sydney)
    • ap-northeast-1 (Tokyo)
    • eu-west-1 (Ireland)
    • sa-east-1 (Sao Paulo)
  • Compatibility: All releases after 2.1.1 are compatible with this release

5.5.3

Release date: March 23, 2019

  • Backported: Amazon S3 only: Use Signature Version 4 (SigV4) in all regions to support the upcoming Amazon S3 deprecation of Signature Version 2 (SigV2) on June 24, 2019 (see AWS announcement). S3 regions that need this upgraded version are:
    • us-east-1 (N. Virginia)
    • us-west-1 (N. California)
    • us-west-2 (Oregon)
    • ap-southeast-1 (Singapore)
    • ap-southeast-2 (Sydney)
    • ap-northeast-1 (Tokyo)
    • eu-west-1 (Ireland)
    • sa-east-1 (Sao Paulo)

6.0

Release date: January 26, 2019

A. New Features
  • New kernel cache kcache to improve re-read performance (learn more)
  • New cleaner+ storage cleaner (learn more)
  • New rdirplus that reduces stat call overhead for many common workloads (learn more)
  • New fuse_conn to set the max background FUSE connections (learn more)
B. Performance
  • Faster directory listing which can speed up metadata-heavy operations by up to 50%
  • Read-ahead performance improvements, including bigger step size for hpc mode
  • New early-issue for certain read requests for faster read performance
  • Compaction heuristics improvements with focus on metadata and boot time performance
  • Object size optimizations for filesystems in the TB to PB+ ranges
  • Lower latency for parallel directory operations
  • Lower memory usage for active directories and extended attributes
  • Lower memory usage for OS portion of cache for some workloads
  • Faster repeated metadata lookup to lower cpu usage for active workloads
C. General Improvements
  • Support for newer FUSE versions up to 7.26
  • Additional export flag tuning for NFS and Samba exports
  • Read and write operations cpu usage efficiency improvements
  • Increase compaction rate for max compaction level (compact=5)
  • Support newer EC2 instances with Nitro hypervisor for compaction level setting
  • Refresh indexes more often for more up-to-date filesystem size
  • Log availability zone and instance type on the starting line
  • Error message improvements
D. Fixes
  • Fixed link count for directories with block devices
  • Handled directly setting atime when using relatime or noatime mount options
  • Improved cleaning for filesystems which never enabled snapshots when used with the nosnapshots flag
E. Compatibility
  • Preferred block size is now reported as 128KB instead of 4KB for efficient filesystem I/O
  • ObjectiveFS 6.0 uses an extended storage format
    • Version 6.0 can mount filesystems created by all previous versions
    • Version 3.0 and newer can mount filesystems with the 6.0 extended storage format
    • Version 2.1.1 (2014) and older are not compatible with the 6.0 extended storage format, but a filesystem with the 6.0 extended storage format can be downgraded to be compatible to work with these versions if needed.

5.5.2

Release date: January 14, 2019

  • Fixed issue where nodes that create large number of files have a small chance of inode reuse
  • MacOS only: updated FUSE for macOS to version 3.8.3

5.4.2

Release date: January 14, 2019

  • Backported: Fixed issue where nodes that create large number of files have a small chance of inode reuse
  • Backported: MacOS only: updated FUSE for macOS to version 3.8.3

5.5.1

Release date: October 28, 2018

  • MacOS only: updated FUSE for macOS to version 3.8.2

5.5

Release date: June 21, 2018

  • Write performance improvements including improved buffer and link management
  • Improved write speed for remote object stores and high network latency links
  • Improved write speed for geo-distributed object stores
  • Improved write speed for on-premise object stores
  • Improved link bandwidth predictor
  • Improved compressible data write performance
  • New fsavail mount option to set the available filesystem space
  • bulkdata and ocache are now enabled by default
  • Added support for SI/IEC when setting CACHESIZE and DISKCACHE_SIZE
  • No changes in file system format. All versions of ObjectiveFS are compatible with 5.5

5.4.1

Release date: April 10, 2018

  • Fixed issue when using new bulkdata mode with slow links and certain write patterns
  • Improved support for very large memory cache size (500GB+ CACHESIZE)

5.4

Release date: March 26, 2018

  • New cache in object store (ocache) to improve mount time
  • New bulk data mode (bulkdata) to improve performance for filesystems with high write activity
  • New memory reduction option (mboost) to balance performance/memory for larger filesystems
  • Use default ACL handling for removexattr for better AUFS compatibility
  • Improved log messages
  • No changes in file system format. All versions of ObjectiveFS are compatible with 5.4

5.3.1

Release date: December 3, 2017

  • Fixed a resource leak in handling certain region-dependent contention cases, issue introduced in 5.3

5.3

Release date: November 22, 2017

  • Improved change detection heuristics for lower latency updates from other nodes
  • Reduced average write latency to S3 with new buffer management heuristics
  • Added support for getting filesystem passphrase from secrets management tools (learn more)
  • Allow larger writes when hpc option is set for better throughput on high latency links
  • Storage cleaner is enabled by default
  • Improved write throughput for very compressible data
  • Metadata host is automatically tried when no keys are provided
  • New config command option for IAM roles
  • Linux only: Updated network settings for faster detection of low-level connection errors
  • Set disk cache directory to be excluded from backup programs (learn more)
  • Updated config command to trim leading and trailing whitespace from user inputs
  • Added verbose flags -v and -vv for more verbose messages
  • Added diagnostic checks for config file permissions when using -v
  • Fixed request back-off handling when object store returns large number of errors
  • Fixed a case where the default compact level was used if multiple -o options were given
  • Fixes and improvements for error and informational messages
  • No changes in file system format. All versions of ObjectiveFS are compatible with 5.3

5.2

Release date: July 28, 2017

  • Optimized storage layout to improve mount time, list time and access time
  • New compaction algorithm to support the optimized storage layout
  • Added more compaction levels to enable fast storage layout optimization
  • Added storage cleaner to reclaim storage from snapshots
  • Tuned compaction on write to improve filesystem performance
  • Tuned fair queue algorithm to work better with large requests
  • Added new export option to better support remount for NFS/Samba exports
  • Added improvement to return free memory to the Linux kernel faster
  • Increased the number of reported inodes
  • MacOS only: updated FUSE for macOS to version 3.6.3
  • No changes in file system format. All versions of ObjectiveFS are compatible with 5.2

5.1.1

Release date: April 25, 2017

  • Tunneling proxy support for on-premise object store
  • Support for filesystem pool creation using an existing bucket
  • No changes in file system format. All versions of ObjectiveFS are compatible with 5.1.1

5.1

Release date: March 26, 2017

  • Performance improvements. Metadata up to 30% faster, large file writes up to 2.5X and large file reads up to 1.4X compared to v.5.0
  • Multithreaded write and compaction
  • Dynamically adjustable object sizes and parallelism on writes
  • Improved compaction heuristics for large filesystems
  • Improved multithreading performance
  • Improved DNS performance: faster retries and reduced number of total queries
  • Improved read and write performance for high-latency cross-region buckets
  • Optimized index layout for large and mixed file size workloads
  • Improved read ahead heuristics
  • Improved responsiveness during large file reads
  • Increased max reported filesystem size
  • Monitor disk cache performance and bypass to S3 if disk cache is too slow
  • Faster identification of DNS configuration changes
  • Google Cloud Storage: use AWS_DEFAULT_REGION on filesystem creation
  • No changes in file system format. All versions of ObjectiveFS are compatible with 5.1

5.0

Release date: January 12, 2017

  • Snapshots: automatic and checkpoint (learn more)
  • S3 Transfer Acceleration support (learn more)
  • Added protection against OOM killer (Linux)
  • Multithreading support on macOS
  • MacOS Sierra v.10.12 support
  • Simplified setup of on-premise object stores by connecting directly to endpoints
  • Simplified use of on-premise object stores with http AWS_DEFAULT_REGION support
  • Improved performance for large sequential reads
  • Improved Samba support for 16GB+ files
  • Better list command handling of a large number of filesystems
  • Better compatibility for empty extended attributes
  • Extended config to configure AWS_DEFAULT_REGION
  • Added AWS S3 new regions: Ohio (us-east-2), Canada (ca-central-1), London (eu-west-2)
  • No changes in file system format. All versions of ObjectiveFS are compatible with 5.0

4.3.4

Release date: October 3, 2016

  • Expanded AWS_DEFAULT_REGION support for all commands
  • Added AWS Asia Pacific (Mumbai) region: ap-south-1
  • Added Google Cloud Storage regional locations

4.3.3

Release date: August 15, 2016

  • Fixed a scenario where update propagation between nodes could sometimes be delayed
  • Improved user interface message when creating a filesystem using the default region

4.3.2

Release date: July 26, 2016

  • Linux only: Fixed Windows directory listing interoperability when exporting using Samba/NFS

4.3.1

Release date: June 28, 2016

  • Improved handling of DNS queries for certain network errors

4.2.2

Release date: June 28, 2016

  • Backported: improved handling of DNS queries for certain network errors

4.3

Release date: May 30, 2016

  • Local license check support (learn more)
  • Memory improvements that significantly reduce memory usage
  • Cache changes to increase memory efficiency
  • Disk cache cleaner with faster response and less memory and cpu usage
  • Disk cache cleaner updated to be inode-aware and to work better with EBS disks
  • Improved cache efficiency for reads of large files
  • Compaction rate improvements
  • New flag to enable mount on a non-empty directory
  • Improved help text
  • No changes in file system format. All versions of ObjectiveFS are compatible with 4.3

4.2.1

Release date: May 30, 2016

  • Fix for disk cache cleaner start-up with background mount

4.2

Release date: April 26, 2016

  • Multithreading support for Linux (learn more)
  • I/O thread pools
  • Automatically increases parallel fuse connections for parallel workloads
  • Up to 20% speedup for cached responses
  • Various caching performance improvements
  • Wait for mount directory to be ready before daemonizing
  • Caching statistics logging (learn more)
  • OSX only: support for new OSX FUSE version 3.2
  • Improved some error messages
  • No changes in file system format. All versions of ObjectiveFS are compatible with 4.2

4.1.6

Release date: April 26, 2016

  • Improved handling of slow S3 list responses

4.1.5

Release date: April 3, 2016

  • Faster retries on truncated S3 list responses

4.1.4

Release date: March 20, 2016

  • Added noratelimit flag to disable S3 requests rate limits

4.1.3

Release date: February 7, 2016

  • HTTP proxy only: support for reading proxy settings from a directory

4.1.2

Release date: January 28, 2016

  • IBM Cleversafe object store: changed back-off timing for retries

4.1.1

Release date: January 25, 2016

  • Fix for fair queue reads, issue introduced in 4.1

4.1

Release date: January 11, 2016

This major release has many new features and improvements including a large reduction in memory usage, HPC support for large sequential reads and writes, http proxy, user id mapping, Amazon server-side encryption (AWS KMS) support and ap-northeast-2 support.

A. Major Changes and Features
  • High-performance computing (HPC) support for faster (100+MB/s) read/write of large sequential files
  • Amazon server side encryption support including AWS Key Management Service (KMS) (learn more)
  • HTTP proxy support to access object store (learn more)
  • User ID and Group ID mapping support (learn more)
  • High-bandwidth mode support when hpc flag is set
  • AWS Asia Pacific (Seoul) region: ap-northeast-2
  • More robust clock skew handling and detection
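The HTTP proxy support above is typically configured through the standard proxy environment variable at mount time. A hedged sketch, not a definitive invocation: the proxy host, port, filesystem name and mount point below are placeholders, and the exact variable honored may vary by release.

```
# Hypothetical example: route object store traffic through an HTTP proxy.
# proxy.example.com:3128, myfs and /ofs are placeholders.
sudo http_proxy=http://proxy.example.com:3128 mount.objectivefs myfs /ofs
```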
B. Performance
  • Reduced memory usage for directories and extended attributes
  • Reduced memory usage for the memory cache
  • Updated the main index to a more memory-efficient data structure
  • Updated compaction heuristics to be more aggressive in reducing memory usage
  • Faster compression for small blocks
  • Added fair queue policy to let small requests bypass big/slow requests
  • Lowered the minimum required free space for disk cache for dedicated cache partitions
  • Optimized caching heuristics for web server workload to reduce latency bumps under high loads
  • Improved memory cache performance
C. Compatibility
  • No changes in file system format. All versions of ObjectiveFS are compatible with 4.1

4.0.3

Release date: October 31, 2015

  • Reliability fix for compaction (for v.4.0+)

4.0.2

Release date: October 15, 2015

  • OS X only: Added support for El Capitan System Integrity Protection (SIP) and changed kernel interface

4.0.1

Release date: September 28, 2015

  • Added workaround for Amazon S3 us-east-1 issue where incorrect bucket creation status code is returned
  • Removed overly-strict bucket name check in list command to handle non-ObjectiveFS buckets better
  • Improved reporting and error messaging

4.0

Release date: September 8, 2015

This major release has many new features such as disk cache, compaction on write, connection pooling, us-east-1 support and significant performance improvements. These improvements reduce latency, lower memory usage and reduce the number of S3 operations.

A. Major Changes and Features
  • Disk cache support (learn more)
    • Content is compressed, encrypted and has strong integrity checks
    • Can be shared between multiple filesystems
    • Faster start up time and fewer S3 operations
  • Connection pooling to reduce request latency
  • Compaction on write to reduce the number of S3 objects stored
  • US Standard (us-east-1) region support
  • Faster detection of compaction done on other nodes to lower memory usage
  • Index generation heuristics improvements to lower memory usage and to reduce S3 operations
  • Request queue improvement to reduce latency for parallel read operations
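The disk cache above is enabled at mount time. As a hedged sketch, assuming the DISKCACHE_SIZE and DISKCACHE_PATH environment variables described elsewhere in the ObjectiveFS documentation; the size, path, filesystem name and mount point are placeholders:

```
# Hypothetical example: mount with a 20GB on-disk cache under /var/cache.
sudo DISKCACHE_SIZE=20G DISKCACHE_PATH=/var/cache/objectivefs \
     mount.objectivefs myfs /ofs
```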
B. Performance
  • Caching algorithm update with more detailed tracking to improve latency for large directories with 100k+ files
  • Cache operations optimizations to improve speed
  • Prioritize metadata in cache
  • Prefetch algorithm improvements for small files
  • Read ahead improvements for very large files
  • Read ahead optimizations to improve performance for multiple streams of videos
  • Lower cpu usage
  • General responsiveness and latency improvements
C. Improvements
  • Reduce default cache size for low memory machines
  • Make use of the Linux O_NOATIME hint
  • Count kernel-referenced data as part of cache usage
  • Retry list operations for additional S3 error conditions
  • Improve user interface error messages
  • Reduce inode memory usage
D. Fixes
  • Fix for potential stale read corner case with very low cache size (only in 3.2)
E. Compatibility
  • No changes in file system format. All versions of ObjectiveFS are compatible with 4.0

3.2

Release date: July 21, 2015

New user interface with easy-to-use commands
  • New config command to easily set up the required variables
  • New list command to display user’s file systems with region and type info
  • Updated create command with simpler usage and ability to specify regions directly
  • Updated mount command with improved messaging and ability to run in background
  • Updated destroy command with simpler usage
  • Simpler installation with OSX package, and Linux rpm and deb packages
  • More detailed log messages for error responses from AWS S3/GCS
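The new commands above follow the same pattern as the Get Started section; for example (the filesystem name myfs is a placeholder):

```
$ sudo mount.objectivefs config           # one-time credential setup
$ sudo mount.objectivefs create myfs      # create a new filesystem
$ sudo mount.objectivefs list             # list your filesystems
$ sudo mount.objectivefs destroy s3://myfs
```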
Features
  • Multiple file systems per bucket support
  • AWS v4 signature support, including Frankfurt data center support
  • Smart key-based selection of AWS S3 vs GCS storage backend
  • Preliminary support for auto mount on OSX
  • Support for GCS regions
  • Mount on boot support for additional Linux distributions: e.g. Ubuntu, Debian, SUSE
  • OS shutdown signal response handling
Performance Improvements
  • Improved responsiveness for directory listing
  • Improved caching responsiveness to updates from other nodes
  • Significant reduction in list operations for common usage patterns
  • Improved start up performance
  • Improved compaction heuristics
  • Reduced memory usage
Fixes
  • Various GCS compatibility fixes, including adapting to variable response time from GCS
Compatibility
  • No changes in file system format. All versions of ObjectiveFS are compatible with 3.2

3.1

Release date: April 20, 2015

  • Significant reduction of memory footprint
  • Reduced inode cache usage
  • Fixed initial rsync slowdown for certain directory structures
  • Better support for mount on boot
  • Small latency improvement for first uncached directory access
  • Enable compaction by default
  • No changes in file system format. All versions of ObjectiveFS are compatible with 3.1

3.0.1

Release date: March 3, 2015

  • Beta support for Google Cloud Storage
  • Fix for small cache for handling certain workloads
  • Output cache size and S3 region at file system startup
  • No changes in file system format. All versions of ObjectiveFS are compatible with 3.0.1

3.0

Release date: February 25, 2015

A. Major Changes and Features
  • Easy mount on boot
    • Support for mounting file system from /etc/fstab on boot
  • Faster synchronization speed between machines
  • Re-key live file system
    • User can change AWS keys without restarting the file system
  • User settable cache size support
    • User can set the read cache size as a percentage of memory or absolute value
  • Read-only mode mount option
  • Additional access time settings
    • Added reltime, noatime and nodiratime, in addition to strictatime
  • Parallel writes to S3
    • For write performance speed up
  • Compaction support
  • Dropping of root after file system start to run as a different user
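Mount on boot via /etc/fstab can be sketched as below. This is an assumption-based example, not the definitive syntax for this release; the filesystem name, mount point and options are placeholders:

```
# Hypothetical /etc/fstab entry: mount s3://myfs at /ofs on boot
s3://myfs  /ofs  objectivefs  auto,_netdev  0  0
```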
B. Performance and Improvements
  • Reduced memory usage for large file systems
  • Improved decompression speed
  • Improved synchronization speed between machines
  • Added support for environment variables in files
  • Added additional error handling support
C. Fixes
  • Fixed file hole creation for uncached files
  • Added compatibility with Linux for non-empty directory move target
D. Compatibility
  • No changes in file system format. All versions of ObjectiveFS are compatible with 3.0

2.1.1

Release date: October 26, 2014

  • Better handling of sleeping on OS X
  • Fix for undocumented OS X limitation
  • No changes in file system format. All versions of ObjectiveFS are compatible with 2.1.1

2.1

Release date: July 28, 2014

  • Handle rekeying from Amazon
  • Minor improvements for DNS
  • No changes in file system format. All versions of ObjectiveFS are compatible with 2.1

2.0.1

Release date: July 16, 2014

  • Added support for session security token when using AWS STS
  • No changes in file system format. All versions of ObjectiveFS are compatible with 2.0.1

2.0

Release date: June 3, 2014

A. Major Changes and Features
  • Added unified metadata and data cache
  • Improved handling of Linux ACL
  • Added support for compressed data detection
  • Support environment variables for configuration
  • Support for user’s non-DevPay S3 buckets
  • Improved sync speed to object store
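Environment-variable configuration can be sketched as follows. The AWS key variables are the standard ones used elsewhere in this documentation; OBJECTIVEFS_LICENSE is assumed here, and all values are placeholders:

```
# Hypothetical example: configure via environment instead of interactive setup
export OBJECTIVEFS_LICENSE='<your ObjectiveFS license>'
export AWS_ACCESS_KEY_ID='<your access key>'
export AWS_SECRET_ACCESS_KEY='<your secret key>'
sudo -E mount.objectivefs myfs /ofs
```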
B. Performance and Improvements
  • Improved read ahead algorithm
  • Reduce cache memory use
  • Various performance optimizations
  • Faster synchronization between nodes
  • Speed improvement for compressed data
C. Fixes
  • Improved ctime handling
  • Better symlink compatibility
Compatibility
  • No changes in file system format. All versions of ObjectiveFS are compatible with 2.0

Pre 2.0

1.0 (Release date: July 30, 2013)

1.0 Release Candidate (Release date: May 23, 2013)

First public beta (Release date: April 3, 2013)
