System Requirements
- Linux or macOS with at least 1GB of RAM
- One of the following object stores:
  - An Amazon Web Services account with access to S3
  - A Google Cloud Storage account
  - Any other S3-compatible object store

The ObjectiveFS binary is installed as /sbin/mount.objectivefs.

On yum-based Linux distributions:
yum install objectivefs-7.2-1.x86_64.rpm
/usr/sbin/ntpdate -q pool.ntp.org

On dpkg-based Linux distributions:
apt-get update
apt-get install fuse
dpkg -i objectivefs_7.2_amd64.deb
/usr/sbin/ntpdate -q pool.ntp.org

On SUSE Linux:
sudo zypper install objectivefs-7.2-1.x86_64.rpm
/usr/sbin/ntpdate -q pool.ntp.org

On macOS, double-click objectivefs-7.2.pkg to install ObjectiveFS and OSXFUSE.

(The ntpdate -q command checks that the system clock is in sync; object store request signing requires an accurate clock.)
This document is for release 5.0 and newer.
It’s quick and easy to get started with your ObjectiveFS file system.
Need your keys? See: how to get your S3 keys or GCS keys.
For the default region, see the region list.
$ sudo mount.objectivefs config
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key Id: <your S3 or GCS access key>
Enter Secret Access Key: <your S3 or GCS secret key>
Enter Default Region (optional): <S3 or GCS region>
Use a globally unique, non-secret file system name.
Choose a strong passphrase, write it down and store it somewhere safe.
IMPORTANT: Without the passphrase, there is no way to recover any files.
$ sudo mount.objectivefs create <your filesystem name>
Passphrase (for s3://<filesystem>): <your passphrase>
Verify passphrase (for s3://<filesystem>): <your passphrase>
Note: Your filesystem will be created in the default region for S3 if specified in step 1, or us-west-2 otherwise. For GCS, it will be created in your physical location. To specify your region, see here.
You need an existing empty directory to mount your file system, e.g. /ofs. The mount process will run in the background.
$ sudo mkdir /ofs
$ sudo mount.objectivefs <your filesystem name> /ofs
Passphrase (for s3://<filesystem>): <your passphrase>
You can mount this file system on all machines (e.g. laptop, EC2 servers) where you'd like to share your data.
And you are up and running! It’s that easy.
See the User Guide for all commands and options.
This document is for release 5.0 and newer.
It’s quick and easy to get started with your ObjectiveFS file system. Just 3 steps to get your new file system up and running. (see Get Started)
ObjectiveFS runs on your Linux and macOS machines, and implements a log-structured file system with a cloud object store backend, such as AWS S3, Google Cloud Storage (GCS) or your on-premise object store. Your data is encrypted before leaving your machine, and stays encrypted until it returns to your machine.
This user guide covers the commands and options supported by ObjectiveFS. For an overview of the commands, refer to the Command Overview section. For detailed description and usage of each command, refer to the Reference section.
sudo mount.objectivefs config [<directory>]
sudo mount.objectivefs create [-l <region>] <filesystem>
sudo mount.objectivefs list [-a] [<filesystem>]
Run in background: sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
Run in foreground: sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
sudo umount <dir>
sudo mount.objectivefs destroy <filesystem>
This section covers the detailed description and usage for each ObjectiveFS command.
sudo mount.objectivefs config [<directory>]

<directory> (optional): The directory where the environment variables are written (default: /etc/objectivefs.env).

Config sets up the following environment variables in /etc/objectivefs.env (if no argument is provided) or in the directory specified:

AWS_ACCESS_KEY_ID: Your object store access key
AWS_SECRET_ACCESS_KEY: Your object store secret key
OBJECTIVEFS_LICENSE: Your ObjectiveFS license key
AWS_DEFAULT_REGION (optional): An AWS or GCS region, or the S3-compatible endpoint (e.g. http://<object store>) for your on-premise object store

A. Default destination directory (/etc/objectivefs.env)
$ sudo mount.objectivefs config
Creating config in /etc/objectivefs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key Id: <your AWS or GCS access key>
Enter Secret Access Key: <your AWS or GCS secret key>
Enter Default Region (optional): <your preferred region>
B. User-specified destination directory
$ sudo mount.objectivefs config /home/ubuntu/.ofs.env
Creating config in /home/ubuntu/.ofs.env
Enter ObjectiveFS license: <your ObjectiveFS license>
Enter Access Key Id: <your AWS or GCS access key>
Enter Secret Access Key: <your AWS or GCS secret key>
Enter Default Region (optional): <your preferred region>
Each environment variable is stored as a file in the environment directory (e.g. /etc/objectivefs.env/AWS_ACCESS_KEY_ID).
Instead of /etc/objectivefs.env, you can also set the environment variables directly on the command line. See environment variables on command line.
Config sets up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and OBJECTIVEFS_LICENSE in the environment directory. See environment variables section for details.
When an EC2 IAM role is used (see live rekeying), you don't need the AWS_SECRET_ACCESS_KEY or AWS_ACCESS_KEY_ID environment variables.
Creates a new file system
sudo mount.objectivefs create [-l <region>] <filesystem>
This command creates a new file system in your S3, GCS or on-premise object store. You need to provide a passphrase for the new file system. Please choose a strong passphrase, write it down and store it somewhere safe.
IMPORTANT: Without the passphrase, there is no way to recover any files.
<filesystem>: The file system name, prefixed with the object store: s3://myfs for S3, gs://myfs for GCS, or an S3-compatible endpoint such as http://s3.example.com/foo for an on-premise object store.

-l <region> (optional): The region to create the file system in. The default is the AWS_DEFAULT_REGION environment variable (if set). Otherwise, S3's default is us-west-2 and GCS's default is based on your server's location (us, eu or asia). To create the file system in a different region, use the -l <region> option or set the AWS_DEFAULT_REGION environment variable.
ObjectiveFS also supports creating multiple file systems per bucket. Please refer to the Filesystem Pool section for details.
A. Create a file system in the default region
$ sudo mount.objectivefs create myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs): <your passphrase>
B. Create an S3 file system in a user-specified region (e.g. eu-central-1)
$ sudo mount.objectivefs create -l eu-central-1 s3://myfs
Passphrase (for s3://myfs): <your passphrase>
Verify passphrase (for s3://myfs): <your passphrase>
C. Create a GCS file system in a user-specified region (e.g. us)
$ sudo mount.objectivefs create -l us gs://myfs
Passphrase (for gs://myfs): <your passphrase>
Verify passphrase (for gs://myfs): <your passphrase>
To skip the passphrase prompt, store your passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE. Please verify this file's permission is restricted to root only.
To use a different environment directory, e.g. /home/ubuntu/.ofs.env:
$ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs create myfs

sudo mount.objectivefs list [-asz] [<filesystem>[@<time>]]
This command lists your file systems, snapshots or buckets in your object store. The output includes the file system name, filesystem kind (regular filesystem or pool), snapshot (automatic, checkpoint, enabled status), region and location.
-a: List all buckets, including non-ObjectiveFS buckets
-s: List snapshots
-z: List snapshot times in UTC instead of local time
<filesystem>: List only the given file system or pool
<filesystem>@<time>: List the snapshots matching the given time prefix, in UTC (with -z) or local time, in the ISO8601 format (e.g. 2016-12-31T15:40:00)
The list command has several options to list your filesystems, snapshots and buckets in your object store.
By default, it lists all your ObjectiveFS filesystems and pools. It can also list all buckets, including non-ObjectiveFS buckets, with the -a
option. To list only a specific filesystem or filesystem pool, you can provide the filesystem name. For a description of snapshot listing, see Snapshots section.
The output of the list command shows the filesystem name, filesystem kind, snapshot type, region and location.
Example filesystem list output:
NAME KIND SNAP REGION LOCATION
s3://myfs-1 ofs - eu-central-1 EU (Frankfurt)
s3://myfs-2 ofs - us-west-2 US West (Oregon)
s3://myfs-pool pool - us-east-1 US East (N. Virginia)
s3://myfs-pool/fsa ofs - us-east-1 US East (N. Virginia)
Example snapshot list output:
NAME KIND SNAP REGION LOCATION
s3://myfs@2017-01-10T11:10:00 ofs auto eu-west-2 EU (London)
s3://myfs@2017-01-10T11:17:00 ofs manual eu-west-2 EU (London)
s3://myfs@2017-01-10T11:20:00 ofs auto eu-west-2 EU (London)
s3://myfs ofs on eu-west-2 EU (London)
Filesystem Kind | Description |
---|---|
ofs | ObjectiveFS filesystem |
pool | ObjectiveFS filesystem pool |
- | Non-ObjectiveFS bucket |
? | Error while querying the bucket |
access | No permission to access the bucket |
Snapshot type | Applicable for | Description |
---|---|---|
auto | snapshot | Automatic snapshot |
manual | snapshot | Checkpoint (or manual) snapshot |
on | filesystem | Snapshots are activated on this filesystem |
- | filesystem | Snapshots are not activated |
A. List all ObjectiveFS filesystems.
$ sudo mount.objectivefs list
NAME KIND SNAP REGION LOCATION
s3://myfs-1 ofs - eu-central-1 EU (Frankfurt)
s3://myfs-2 ofs on us-west-2 US West (Oregon)
s3://myfs-3 ofs on eu-west-2 EU (London)
s3://myfs-pool pool - us-east-1 US East (N. Virginia)
s3://myfs-pool/fsa ofs - us-east-1 US East (N. Virginia)
s3://myfs-pool/fsb ofs - us-east-1 US East (N. Virginia)
s3://myfs-pool/fsc ofs - us-east-1 US East (N. Virginia)
B. List a specific file system, e.g. s3://myfs-3
$ sudo mount.objectivefs list s3://myfs-3
NAME KIND SNAP REGION LOCATION
s3://myfs-3 ofs - eu-west-2 EU (London)
C. List everything, including non-ObjectiveFS buckets. In this example, my-bucket
is a non-ObjectiveFS bucket.
$ sudo mount.objectivefs list -a
NAME KIND SNAP REGION LOCATION
gs://my-bucket - - EU European Union
gs://myfs-a ofs - US United States
gs://myfs-b ofs on EU European Union
gs://myfs-c ofs - ASIA Asia Pacific
D. List snapshots for myfs
that match 2017-01-10T11
$ sudo mount.objectivefs list -s myfs@2017-01-10T11
NAME KIND SNAP REGION LOCATION
s3://myfs@2017-01-10T11:10:00 ofs auto eu-west-2 EU (London)
s3://myfs@2017-01-10T11:17:00 ofs manual eu-west-2 EU (London)
s3://myfs@2017-01-10T11:20:00 ofs auto eu-west-2 EU (London)
s3://myfs@2017-01-10T11:30:00 ofs auto eu-west-2 EU (London)
E. List snapshots for myfs
that match 2017-01-10T12
in UTC
$ sudo mount.objectivefs list -sz myfs@2017-01-10T12
NAME KIND SNAP REGION LOCATION
s3://myfs@2017-01-10T12:10:00Z ofs auto eu-west-2 EU (London)
s3://myfs@2017-01-10T12:17:00Z ofs manual eu-west-2 EU (London)
s3://myfs@2017-01-10T12:20:00Z ofs auto eu-west-2 EU (London)
s3://myfs@2017-01-10T12:30:00Z ofs auto eu-west-2 EU (London)
To filter snapshots by time, use <filesystem>@<time prefix>, e.g. myfs@2017-01-10T12.
To use a different environment directory, e.g. /home/ubuntu/.ofs.env:
$ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs list
sudo mount.objectivefs [-o <opt>[,<opt>]..] <filesystem> <dir>
sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <filesystem> <dir>
This command mounts your file system on a directory on your Linux or macOS machine. After the file system is mounted, you can read and write to it just like a local disk.
You can mount the same file system on as many Linux or macOS machines as you need. Your license will always scale if you need more mounts, and it is not limited to the number of included licenses on your plan.
NOTE: The mount command needs to run as root. It runs in the foreground if “mount” is provided, and runs in the background otherwise.
<filesystem>: The file system to mount: s3://myfs for S3, gs://myfs for GCS, or http://s3.example.com/foo for an on-premise object store. Use <filesystem>@<timestamp> for mounting snapshots.
<dir>: An existing empty directory to mount your file system on.
General Mount Options
-o env=<dir>
Mount Point Options
-o acl | noacl
-o dev | nodev
-o diratime | nodiratime
-o exec | noexec
-o export | noexport
-o fsavail=<size>
-o nonempty
-o ro | rw
-o strictatime | relatime | noatime
-o suid | nosuid
File System Mount Options
-o bulkdata | nobulkdata
-o clean[=1|=2] | noclean
-o compact[=<level>] | nocompact
-o fuse_conn=<NUM>
-o freebw | nofreebw | autofreebw
-o hpc | nohpc
-o mboost[=<minutes>] | nomboost
-o mt | nomt | cputhreads=<N> | iothreads=<N>
-o nomem=<spin|stop>
-o ocache | noocache
-o oob | nooob
-o oom | nooom (Default: Enable)
-o ratelimit | noratelimit
-o retry=<seconds>
-o snapshots | nosnapshots
The examples below assume /ofs is an existing empty directory and your passphrase is stored in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE.
A. Mount an S3 file system in the foreground
$ sudo mount.objectivefs mount myfs /ofs
B. Mount a GCS file system with a different env directory, e.g. /home/ubuntu/.ofs.env
$ sudo mount.objectivefs mount -o env=/home/ubuntu/.ofs.env gs://myfs /ofs
C. Mount an S3 file system in the background
$ sudo mount.objectivefs s3://myfs /ofs
D. Mount a GCS file system with non-default options
Assumption: /etc/objectivefs.env
contains GCS keys.
$ sudo mount.objectivefs -o nosuid,nodev,noexec,noatime gs://myfs /ofs
To skip the passphrase prompt, store your passphrase in /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE.
Please verify that your passphrase file's permission is the same as other files in the environment directory (e.g. OBJECTIVEFS_LICENSE).
To use a different environment directory, e.g. /home/ubuntu/.ofs.env:
$ sudo OBJECTIVEFS_ENV=/home/ubuntu/.ofs.env mount.objectivefs mount myfs /ofs
ObjectiveFS supports Mount on Boot, where your directory is mounted automatically upon reboot.
Check that /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE
exists so you can mount your filesystem without needing to enter the passphrase.
If it doesn’t exist, create the file with your passphrase as the content.(see details)
Add a line to /etc/fstab
with:
<filesystem> <mount dir> objectivefs auto,_netdev[,<opts>] 0 0
_netdev
is used by many Linux distributions to mark the file system as a network file system.
For more details, see Mount on Boot Setup Guide for Linux.
macOS can use launchd
to mount on boot. See Mount on Boot Setup Guide for macOS for details.
sudo umount <dir>

To unmount your file system, run umount with the mount directory.
Typing Control-C
in the window where ObjectiveFS is running in the foreground will stop the filesystem, but may not unmount it. Please run umount
to properly unmount the filesystem.
$ sudo umount /ofs
sudo mount.objectivefs destroy <filesystem>
This command deletes your file system from your object store. Please make sure that you really don’t need your data anymore because this operation cannot be undone.
You will be prompted for the authorization code available on your user profile page. This code changes periodically. Please refresh your user profile page to get the latest code.
<filesystem>: The file system you want to destroy.
$ sudo mount.objectivefs destroy s3://myfs
*** WARNING ***
The filesystem 's3://myfs' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>
This section covers the options you can run ObjectiveFS with.
This is a list of supported S3 and GCS regions. ObjectiveFS supports all regions for S3 and GCS.
The table below lists the corresponding endpoints for the supported regions.
Region | Endpoint |
---|---|
us-east-1 | s3-external-1.amazonaws.com |
us-east-2 | s3-us-east-2.amazonaws.com |
us-west-1 | s3-us-west-1.amazonaws.com |
us-west-2 | s3-us-west-2.amazonaws.com |
ca-central-1 | s3-ca-central-1.amazonaws.com |
eu-central-1 | s3-eu-central-1.amazonaws.com |
eu-west-1 | s3-eu-west-1.amazonaws.com |
eu-west-2 | s3-eu-west-2.amazonaws.com |
ap-south-1 | s3-ap-south-1.amazonaws.com |
ap-southeast-1 | s3-ap-southeast-1.amazonaws.com |
ap-southeast-2 | s3-ap-southeast-2.amazonaws.com |
ap-northeast-1 | s3-ap-northeast-1.amazonaws.com |
ap-northeast-2 | s3-ap-northeast-2.amazonaws.com |
sa-east-1 | s3-sa-east-1.amazonaws.com |
us-gov-west-1 | s3-us-gov-west-1.amazonaws.com |
ObjectiveFS uses environment variables for configuration. You can set them using any standard method (e.g. on the command line, in your shell). We also support reading environment variables from a directory.
The filesystem settings specified by the environment variables are set at start up. To update the settings (e.g. change the memory cache size, enable disk cache), please unmount your filesystem and remount it with the new settings (exception: manual rekeying).
ObjectiveFS supports reading environment variables from files in a directory, similar to the envdir
tool from the daemontools package.
Your environment variables are stored in a directory. Each file in the directory corresponds to an environment variable, where the file name is the environment variable name and the first line of the file content is the value.
The Config command sets up your environment directory with three main environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
OBJECTIVEFS_LICENSE
You can also add additional environment variables in the same directory using the same format: where the file name is the environment variable and the first line of the file content is the value.
$ ls /etc/objectivefs.env/
AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY OBJECTIVEFS_LICENSE OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.env/OBJECTIVEFS_PASSPHRASE
your_objectivefs_passphrase
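As an illustrative sketch of this envdir-style format (a temporary directory stands in for /etc/objectivefs.env, and CACHESIZE is used as the example variable):

```shell
# Sketch: adding an envdir-style variable. The file name is the variable
# name and the first line of the file is its value. A temporary directory
# stands in for /etc/objectivefs.env here.
ENVDIR=$(mktemp -d)
printf '%s\n' '30%' > "$ENVDIR/CACHESIZE"
head -n 1 "$ENVDIR/CACHESIZE"   # prints: 30%
```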
You can also set the environment variables on the command line. The user-provided environment variables will override the environment directory’s variables.
sudo [<ENV VAR>='<value>'] mount.objectivefs
$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs
To enable a feature, set the corresponding environment variable and remount your filesystem (exception: manual rekeying).
Commonly used environment variables include:
- AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SECURITY_TOKEN: your object store credentials.
- CACHESIZE: the memory cache size, as a percentage of memory (e.g. 30%) or an absolute value (e.g. 500M or 1G). (Default: 20%) (see Memory Cache section)

ObjectiveFS uses memory to cache data and metadata locally to improve performance and to reduce the number of S3 operations.
Set the CACHESIZE environment variable to one of the following:
- An absolute value (e.g. 500M or 2G)
- A percentage of memory (e.g. 30%)

A minimum of 64MB will be used for CACHESIZE. If CACHESIZE is not specified, the default is 20% of memory for machines with 3GB+ memory, or 5%-20% (with a minimum of 64MB) for machines with less than 3GB memory.
The cache size is applicable per mount. If you have multiple ObjectiveFS file systems on the same machine, the total cache memory used by ObjectiveFS will be the sum of the CACHESIZE
values for all mounts.
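For example (with two hypothetical mounts using absolute cache sizes), the totals simply add up:

```shell
# Illustration: the total ObjectiveFS memory cache on one machine is the
# sum of the per-mount CACHESIZE values (here CACHESIZE=2G and CACHESIZE=1G).
cache_fs1_mb=2048
cache_fs2_mb=1024
echo "total cache: $((cache_fs1_mb + cache_fs2_mb)) MB"   # prints: total cache: 3072 MB
```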
The memory cache is one component of the ObjectiveFS memory usage. The total memory used by ObjectiveFS is the sum of:
a. memory cache usage (set by CACHESIZE
)
b. index memory usage (based on the number of S3 objects/filesystem size), and
c. kernel memory usage
Caching statistics, such as the cache hit rate and kernel memory usage, are sent to the log. The memory cache setting is also sent to the log upon file system mount.
A. Set memory cache size to 30%
$ sudo CACHESIZE=30% mount.objectivefs myfs /ofs
B. Set memory cache size to 2GB
$ sudo CACHESIZE=2G mount.objectivefs myfs /ofs
ObjectiveFS can use local disks to cache data and metadata locally to improve performance and to reduce the number of S3 operations. Once the disk cache is enabled, ObjectiveFS handles the operations automatically, with no additional maintenance required from the user.
The disk cache is compressed, encrypted and has strong integrity checks. It is robust and can be copied between machines and even manually deleted, when in active use. So, you can rsync the disk cache between machines to warm the cache or to update the content.
Since the disk cache’s content persists when your file system is unmounted, you get the benefit of fast restart and fast access when you remount your file system.
ObjectiveFS will always keep some free space on the disk, by periodically checking the free disk space. If your other applications use more disk space, ObjectiveFS will adjust and use less by shrinking its cache.
Multiple file systems on the same machine can share the same disk cache without crosstalk, and they will collaborate to keep the most used data in the disk cache.
We recommend enabling the disk cache when a local SSD or hard drive is available. For EC2 instances, we recommend using the local SSD instance store instead of EBS, because EBS volumes may run into operation limits depending on the volume size. (See how to mount an instance store on EC2 for disk cache.)
The disk cache uses DISKCACHE_SIZE
and DISKCACHE_PATH
environment variables (see environment variables section for how to set environment variables). To enable disk cache, set DISKCACHE_SIZE
.
DISKCACHE_SIZE format: <DISK CACHE SIZE>[:<FREE SPACE>]

<DISK CACHE SIZE>:
- An absolute value (e.g. 20G or 1T).
- If set to a very large value (e.g. 1P), ObjectiveFS will try to use as much space as possible on the disk while preserving the free space.

<FREE SPACE> (optional):
- An absolute value (e.g. 5G). The default is 3G.
- If set to 0G, ObjectiveFS will try to use as much space as possible (useful for a dedicated disk cache partition).

The free space value has precedence over the disk cache size. The actual disk cache size is the smaller of DISKCACHE_SIZE and (total disk space - FREE SPACE).

DISKCACHE_PATH: The directory where the disk cache is stored. Disk cache is disabled when DISKCACHE_SIZE is not specified.
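The precedence of the free space value can be sketched numerically (all sizes below are hypothetical):

```shell
# Sketch: effective disk cache = min(DISKCACHE_SIZE, total disk - FREE SPACE)
total_gb=100       # total disk space
diskcache_gb=95    # requested DISKCACHE_SIZE
free_gb=10         # requested FREE SPACE
avail_gb=$((total_gb - free_gb))
if [ "$diskcache_gb" -lt "$avail_gb" ]; then
  effective_gb=$diskcache_gb
else
  effective_gb=$avail_gb    # the free space reservation wins
fi
echo "effective disk cache: ${effective_gb}G"   # prints: effective disk cache: 90G
```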
A. Set disk cache size to 20GB and use default free space (3GB)
$ sudo DISKCACHE_SIZE=20G mount.objectivefs myfs /ofs
B. Use as much space as possible for disk cache and keep 10GB free space
$ sudo DISKCACHE_SIZE=1P:10G mount.objectivefs myfs /ofs
C. Set disk cache size to 20GB and free space to 10GB and specify the disk cache path
$ sudo DISKCACHE_SIZE=20G:10G DISKCACHE_PATH=/var/cache/mydiskcache mount.objectivefs myfs /ofs
D. Use the entire space for disk cache (when using dedicated volume for disk cache)
$ sudo DISKCACHE_SIZE=1P:0G mount.objectivefs myfs /ofs
DISKCACHE_PATH defaults to /var/cache/objectivefs. To store the disk cache elsewhere, set DISKCACHE_PATH.
If multiple file systems specify different DISKCACHE_SIZE values and point to the same DISKCACHE_PATH, ObjectiveFS will use the minimum disk cache size and the maximum free space value.

ObjectiveFS supports automatic built-in snapshots and checkpoint snapshots. Automatic snapshots are managed by ObjectiveFS and are taken according to a snapshot schedule when your filesystem is mounted. Checkpoint snapshots can be taken at any time, and the filesystem doesn't need to be mounted. Snapshots can be mounted as a read-only filesystem to access your filesystem data as it was at that point in time. There is no limit to the number of filesystem snapshots that you can create and use.
Snapshots are not backups since they are part of the same filesystem and are stored in the same bucket. A backup is an independent copy of your data stored in a different location.
Snapshots can be useful for backups. To create a consistent point-in-time backup, you can mount a recent snapshot and use it as the source for backup, instead of running backup from your live filesystem. This way, your data will not change while the backup is in progress.
Snapshots are activated on a filesystem upon the first mount with an ObjectiveFS version with snapshots (v.5.0 or newer) and no special mount options are needed. Upon activation, ObjectiveFS will perform a one-time identification of existing snapshots and activate them if available. A log message will be generated when snapshots have been activated on your filesystem. You can also use the list command to identify the filesystems with activated snapshots.
NOTE: Even though many snapshots are generated, the storage used is incremental. If two snapshots contain the same data, no additional storage is used for the second snapshot; if they differ, only the difference is stored, not a new copy.
A. Create Automatic Snapshots
After initial activation, snapshots are automatically taken only when your filesystem is mounted. When your filesystem is not mounted, automatic snapshots will not be taken since there are no changes to the filesystem. Automatic snapshots are taken and retained based on the snapshot schedule in the table below. Older automatic snapshots are automatically removed to maintain the number of snapshots per interval.
Snapshot Interval | Number of Snapshots |
---|---|
10-minute | 72 |
hourly | 72 |
daily | 32 |
weekly | 16 |
RELEVANT INFO:
To disable automatic snapshots, use the nosnapshots mount option.

B. Create Checkpoint Snapshots (Pro and Enterprise Plan Feature)
sudo mount.objectivefs snapshot <filesystem>
After initial activation, checkpoint (i.e. manual point-in-time) snapshots can be taken at any time, even if your filesystem is not mounted. There is no limit to the number of checkpoint snapshots you can take. Checkpoint snapshots are kept until they are explicitly removed by the destroy command.
Checkpoint snapshots are useful for creating a snapshot right before making large changes to the filesystem. They can also be useful if you need snapshots at a specific time of the day or for compliance reasons.
DESCRIPTION:
You can list one or more snapshots for your filesystem using the list command. Snapshots have the format <filesystem>@<time>
, and are by default listed in local time, e.g. s3://myfs@2016-12-31T15:40:00
. They can also be listed in UTC with the -z
option.
The list command shows both automatic and checkpoint snapshots in your object stores. You can use it to list all snapshots available on your filesystem, or to list the snapshots matching a specific time prefix.
USAGE:
sudo mount.objectivefs list -s[z] [<filesystem>[@<time>]]
This command uses the -s
option to enable listing of snapshots. It can further restrict the list of snapshots to a single filesystem by providing the filesystem name. The snapshots can be filtered by the time prefix such as myfs@2016-11
to list all snapshots in November 2016.
For more details on these options and the output format, see the list command.
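As an illustration outside ObjectiveFS (the snapshot names below are made up), prefix matching on the ISO8601 timestamp works like this:

```shell
# Illustration: a time prefix such as myfs@2016-11 matches every snapshot
# name that starts with it, i.e. all snapshots taken in November 2016.
printf '%s\n' \
  'myfs@2016-10-31T23:50:00' \
  'myfs@2016-11-01T00:10:00' \
  'myfs@2016-11-15T12:00:00' \
  'myfs@2016-12-01T08:00:00' | grep '^myfs@2016-11'
# prints the two November snapshot names
```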
EXAMPLES:
A. List all snapshots for myfs
$ sudo mount.objectivefs list -s myfs
B. List all snapshots for myfs
that match 2017-01-10T11
$ sudo mount.objectivefs list -s myfs@2017-01-10T11
C. List all the snapshots for myfs
that match 2017-01-10T12:30
in UTC
$ sudo mount.objectivefs list -sz myfs@2017-01-10T12:30
DESCRIPTION:
Snapshots can be mounted to access the filesystem as it was at that point in time. When a snapshot is mounted, it is accessible as a read-only filesystem.
You can mount both automatic and checkpoint snapshots. When an automatic snapshot is mounted, a checkpoint snapshot with the same timestamp is created to prevent the snapshot from being automatically removed in case its retention schedule expires while it is mounted. These checkpoint snapshots, when created for data recovery purposes only, are also included in the BASIC Plan.
Snapshots can be mounted using the same AWS keys used for mounting the filesystem. If you choose to use a different key, you only need read permission for mounting a checkpoint snapshot, but need read and write permissions for mounting an automatic snapshot.
A snapshot mount is a regular mount and will be counted as an ObjectiveFS instance while it is mounted.
USAGE:
sudo mount.objectivefs [-o <options>] <filesystem>@<time> <dir>
<filesystem>@<time>: The snapshot to mount.
<dir>: An existing empty directory to mount the snapshot on.
<options>: Same as the regular mount options.
EXAMPLES:
A. Mount a snapshot specified in local time
$ sudo mount.objectivefs mount myfs@2017-01-10T11:10:00 /ofs
B. Mount a snapshot specified in UTC
$ sudo mount.objectivefs mount myfs@2017-01-10T12:30:00Z /ofs
C. Mount a snapshot with multithreading enabled
$ sudo mount.objectivefs mount -o mt myfs@2017-01-10T11:10:00 /ofs
To unmount a snapshot, use the regular unmount command.
To destroy a snapshot, use the regular destroy command with <filesystem>@<time>
. Time should be specified in ISO8601 format (e.g. 2017-01-10T10:10:00) and can be either local time or UTC. Both automatic and checkpoint snapshots matching the timestamp will be removed.
sudo mount.objectivefs destroy <filesystem>@<time>
EXAMPLE:
Destroy a snapshot specified in local time
$ sudo mount.objectivefs destroy s3://myfs@2017-01-10T11:10:00
*** WARNING ***
The snapshot 's3://myfs@2017-01-10T11:10:00' will be destroyed. No other changes will be done to the filesystem.
Continue [y/n]? y
If you need to recover a file from a snapshot, you can use the following steps:
1. Identify the snapshot you want to recover from using list snapshots.
   $ sudo mount.objectivefs list -s myfs@2017-01-10
2. Mount the snapshot on an empty directory, e.g. /ofs-snap.
   $ sudo mount.objectivefs myfs@2017-01-10T11:10:00 /ofs-snap
3. Mount your filesystem on another empty directory, e.g. /ofs.
   $ sudo mount.objectivefs mount myfs /ofs
4. Verify it is the right file to restore, then copy the file from the snapshot to your filesystem.
   $ cp /ofs-snap/<path to file> /ofs/<path to file>
ObjectiveFS supports live rekeying which lets you update your AWS keys while keeping your filesystem mounted. With live rekeying, you don’t need to unmount and remount your filesystem to change your AWS keys. ObjectiveFS supports both automatic and manual live rekeying.
If you have attached an AWS EC2 IAM role to your EC2 instance, you can set AWS_METADATA_HOST
to 169.254.169.254
to automatically rekey. With this setting, you don’t need to use the AWS_SECRET_ACCESS_KEY
and AWS_ACCESS_KEY_ID
environment variables.
You can also manually rekey by updating the AWS_SECRET_ACCESS_KEY
and AWS_ACCESS_KEY_ID
environment variables (and also AWS_SECURITY_TOKEN
if used) and sending SIGHUP to mount.objectivefs. The running objectivefs program (i.e. mount.objectivefs) will automatically reload and pick up the updated keys.
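A minimal sketch of the manual sequence, assuming the envdir setup from the environment variables section (a temporary directory and placeholder key values stand in for the real /etc/objectivefs.env and credentials):

```shell
# Sketch: manual live rekeying. Write the new credentials into the
# environment directory, then send SIGHUP so the running mount.objectivefs
# reloads them. The directory and key values here are placeholders.
ENVDIR=$(mktemp -d)   # stands in for /etc/objectivefs.env
printf '%s\n' 'NEWACCESSKEYID' > "$ENVDIR/AWS_ACCESS_KEY_ID"
printf '%s\n' 'NEWSECRETACCESSKEY' > "$ENVDIR/AWS_SECRET_ACCESS_KEY"
# reload keys without unmounting (no-op if no filesystem is mounted)
pkill -HUP -f mount.objectivefs || true
```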
ObjectiveFS is a log-structured filesystem that uses an object store for storage. Compaction combines multiple small objects into a larger object and brings related data close together. It improves performance and reduces the number of object store operations needed for each filesystem access. Compaction is a background process and adjusts dynamically depending on your workload and your filesystem’s characteristics.
You can specify the compaction rate with a mount option when mounting your filesystem. Faster compaction increases bandwidth usage. For more about mount options, see this section.
nocompact: disable compaction
compact: enable regular compaction
compact,compact: enable faster compaction (uses more bandwidth)
compact,compact,compact: enable fastest compaction (uses most bandwidth)

Compaction is enabled by default. If the filesystem is mounted on an EC2 machine in the same region as your S3 bucket, compact,compact is the default. Otherwise, compact is the default.
A. To enable faster compaction
$ sudo mount.objectivefs -o compact,compact myfs /ofs
B. To disable compaction
$ sudo mount.objectivefs -o nocompact myfs /ofs
You can monitor the number of objects in your filesystem in the IUsed column of df -i.

Pro and Enterprise Plan Feature
Multithreading is a performance feature that can lower latency and improve throughput for your workload. ObjectiveFS will spawn dedicated CPU and IO threads to handle operations such as data decompression, data integrity check, disk cache accesses and updates.
Multithreading mode can be enabled using the mt
mount option, which sets the dedicated CPU threads to 4 and dedicated IO threads to 8. One read thread and one write thread will be spawned for each specified CPU and IO thread. You can also explicitly specify the number of dedicated CPU threads and IO threads using the cputhreads
and iothreads
mount options. You can also use the nomt mount option to disable multithreading.
Mount options | Description |
---|---|
-o mt | sets cputhreads to 4 and iothreads to 8 |
-o cputhreads=<N> | sets the number of dedicated CPU threads to N (min:0, max:128) |
-o iothreads=<N> | sets the number of dedicated IO threads to N (min:0, max:128) |
-o nomt | sets cputhreads and iothreads to 0 |
By default, there are 2 dedicated IO threads and no dedicated CPU threads.
A. Enable default multithreading option (4 cputhreads, 8 iothreads)
$ sudo mount.objectivefs -o mt <filesystem> <dir>
B. Set CPU threads to 8 and IO threads to 16
$ sudo mount.objectivefs -o cputhreads=8,iothreads=16 <filesystem> <dir>
C. Example fstab entry to enable multithreading
s3://<filesystem> <dir> objectivefs auto,_netdev,mt 0 0
Pro and Enterprise Plan Feature
Filesystem pool lets you have multiple file systems per bucket. Since AWS S3 has a limit of 100 buckets per account, you can use pools if you need lots of file systems.
A filesystem pool is a collection of regular filesystems to simplify the management of lots of filesystems. You can also use pools to organize your company’s file systems by teams or departments.
A file system in a pool is a regular file system. It has the same capabilities as a regular file system.
A pool is a top-level structure. This means that a pool can only contain file systems, and not other pools. Since a pool is not a filesystem, but a collection of filesystems, it cannot be mounted directly.
Reference: Managing Per-User Filesystems Using Filesystem Pool and IAM Policy
An example organization structure is:
|
|- myfs1 // one file system per bucket
|- myfs2 // one file system per bucket
|- mypool1 -+- /myfs1 // multiple file systems per bucket
| |- /myfs2 // multiple file systems per bucket
| |- /myfs3 // multiple file systems per bucket
|
|- mypool2 -+- /myfs1 // multiple file systems per bucket
| |- /myfs2 // multiple file systems per bucket
| |- /myfs3 // multiple file systems per bucket
:
To create a file system in a pool, use the regular create command with <pool name>/<file system> as the filesystem argument.
sudo mount.objectivefs create [-l <region>] <pool>/<filesystem>
NOTE: The region is set by the optional -l <region> specification.
EXAMPLE:
A. Create an S3 file system in the default region (us-west-2)
# Assumption: your /etc/objectivefs.env contains S3 keys
$ sudo mount.objectivefs create s3://mypool/myfs
B. Create a GCS file system in the EU region
# Assumption: your /etc/objectivefs.env contains GCS keys
$ sudo mount.objectivefs create -l EU gs://mypool/myfs
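The pooled layout shown earlier can be created by scripting this same create command. A minimal sketch, assuming your object store keys and OBJECTIVEFS_PASSPHRASE are already set in /etc/objectivefs.env so create runs non-interactively (the pool and filesystem names are the illustrative ones from the diagram):

```shell
# Create three file systems inside the pool mypool1
for fs in myfs1 myfs2 myfs3; do
    sudo mount.objectivefs create "s3://mypool1/${fs}"
done
```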
When you list your file systems, you can distinguish a pool by the KIND column. A file system inside a pool is listed with the pool prefix.
You can also list the file systems in a pool by specifying the pool name.
sudo mount.objectivefs list [<pool name>]
EXAMPLE:
A. In this example, there are two pools, myfs-pool and myfs-poolb. The file systems in each pool are listed with the pool prefix.
$ sudo mount.objectivefs list
NAME KIND REGION
s3://myfs-1 ofs us-west-2
s3://myfs-2 ofs eu-central-1
s3://myfs-pool/ pool us-west-2
s3://myfs-pool/myfs-a ofs us-west-2
s3://myfs-pool/myfs-b ofs us-west-2
s3://myfs-pool/myfs-c ofs us-west-2
s3://myfs-poolb/ pool us-west-1
s3://myfs-poolb/foo ofs us-west-1
B. List all file systems under a pool, e.g. myfs-pool
$ sudo mount.objectivefs list myfs-pool
NAME KIND REGION
s3://myfs-pool/ pool us-west-2
s3://myfs-pool/myfs-a ofs us-west-2
s3://myfs-pool/myfs-b ofs us-west-2
s3://myfs-pool/myfs-c ofs us-west-2
To mount a file system in a pool, use the regular mount command with <pool name>/<file system> as the filesystem argument.
Run in background:
sudo mount.objectivefs [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>
Run in foreground:
sudo mount.objectivefs mount [-o <opt>[,<opt>]..] <pool>/<filesystem> <dir>
EXAMPLES:
A. Mount an S3 file system and run the process in background
$ sudo mount.objectivefs s3://myfs-pool/myfs-a /ofs
B. Mount a GCS file system with a different env directory, e.g. /home/tom/.ofs_gcs.env, and run the process in the foreground
$ sudo mount.objectivefs mount -o env=/home/tom/.ofs_gcs.env gs://mypool/myfs /ofs
Unmounting a file system in a pool works the same as the regular unmount command.
To destroy a file system in a pool, use the regular destroy command with <pool name>/<file system> as the filesystem argument.
sudo mount.objectivefs destroy <pool>/<filesystem>
EXAMPLE:
Destroying an S3 file system in a pool
$ sudo mount.objectivefs destroy s3://myfs-pool/myfs-a
*** WARNING ***
The filesystem 's3://myfs-pool/myfs-a' will be destroyed. All data (550 MB) will be lost permanently!
Continue [y/n]? y
Authorization code: <your authorization code>
Pro and Enterprise Plan Feature
This feature lets you map local user ids and group ids to different ids in the remote filesystem. The id mappings should be 1-to-1, i.e. a single local id should map to only a single remote id, and vice versa. If multiple ids are mapped to the same id, the behavior is undefined.
When a uid is remapped and U* is not specified, all other unspecified uids are mapped to the default uid 65534 (aka nobody/nfsnobody).
Similarly, all unspecified gids are mapped to the default gid 65534 if a gid is remapped and G* is not specified.
IDMAP="<Mapping>[:<Mapping>]"
where Mapping is:
U<local id or name> <remote id>
G<local id or name> <remote id>
U* <default id>
G* <default id>
Mapping Format
A. Single User Mapping: U<local id or name> <remote id>
Maps a local user id or local user name to a remote user id.
B. Single Group Mapping: G<local id or name> <remote id>
Maps a local group id or local group name to a remote group id.
C. Default User Mapping: U* <default id>
Maps all unspecified local and remote user ids to the default id. If this mapping is not specified, all unspecified user ids are mapped to uid 65534 (aka nobody/nfsnobody).
D. Default Group Mapping: G* <default id>
Maps all unspecified local and remote group ids to the default id. If this mapping is not specified, all unspecified group ids are mapped to gid 65534 (aka nobody/nfsnobody).
A. UID mapping only
IDMAP="U600 350:Uec2-user 400:U* 800"
- Local uid 600 is mapped to remote uid 350, and vice versa
- Local user ec2-user is mapped to remote uid 400, and vice versa
- All other uids are mapped to the default uid 800
B. GID mapping only
IDMAP="G800 225:Gstaff 400"
- Local gid 800 is mapped to remote gid 225, and vice versa
- Local group staff is mapped to remote gid 400, and vice versa
- All unspecified gids are mapped to gid 65534 (aka nobody/nfsnobody)
C. UID and GID mapping
IDMAP="U600 350:G800 225"
- Local uid 600 is mapped to remote uid 350, and vice versa
- Local gid 800 is mapped to remote gid 225, and vice versa
- All unspecified uids are mapped to uid 65534 (aka nobody/nfsnobody)
- All unspecified gids are mapped to gid 65534 (aka nobody/nfsnobody)
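Like the other ObjectiveFS environment variables, the mapping can be stored as a file in the environment directory so that every mount on the machine picks it up. A sketch using the illustrative mapping from example C, assuming IDMAP is set the same way as the variables described in the environment variables section:

```shell
# Persist the uid/gid mapping for all mounts on this machine
echo 'U600 350:G800 225' | sudo tee /etc/objectivefs.env/IDMAP
```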
Pro and Enterprise Plan Feature
You can run ObjectiveFS with an http proxy to connect to your object store. A common use case is to connect ObjectiveFS to the object store via a squid caching proxy.
Set the http_proxy environment variable to the proxy server’s address (see the environment variables section for how to set environment variables). This feature is disabled by default; it is enabled when the http_proxy environment variable is set.
Mount a filesystem (e.g. s3://myfs) with an http proxy running locally on port 3128:
$ sudo http_proxy=http://localhost:3128 mount.objectivefs mount myfs /ofs
Alternatively, you can set the http_proxy in your /etc/objectivefs.env directory:
$ ls /etc/objectivefs.env
AWS_ACCESS_KEY_ID OBJECTIVEFS_PASSPHRASE
AWS_SECRET_ACCESS_KEY http_proxy
OBJECTIVEFS_LICENSE
$ cat /etc/objectivefs.env/http_proxy
http://localhost:3128
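The env-directory setup shown above can be created with a single command. A sketch; adjust the proxy address for your setup:

```shell
# Store the proxy address as a file in the environment directory
echo 'http://localhost:3128' | sudo tee /etc/objectivefs.env/http_proxy
```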
Pro and Enterprise Plan Feature
The admin mode provides an easy way to manage many filesystems in a programmatic way. You can use the admin mode to easily script the creation of many filesystems.
The admin mode lets admins create filesystems without the interactive passphrase confirmations. To destroy a filesystem, admins only need to provide a ‘y’ confirmation and don’t need an authorization code. Admins can list filesystems, just like a regular user. However, admins are not permitted to mount a filesystem, which keeps the admin and user functionality separate.
Operation | User Mode | Admin Mode |
---|---|---|
Create | Needs passphrase confirmation | No passphrase confirmation needed |
List | Allowed | Allowed |
Mount | Allowed | Not allowed |
Destroy | Needs authorization code and confirmation | Only confirmation needed |
Enterprise plan users have an admin license key, in addition to their regular license key. Please contact us for this key.
To use admin mode, we recommend creating an admin-specific objectivefs environment directory, e.g. /etc/objectivefs.admin.env. Please use your admin license key for OBJECTIVEFS_LICENSE.
$ ls /etc/objectivefs.admin.env/
AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
OBJECTIVEFS_LICENSE OBJECTIVEFS_PASSPHRASE
$ cat /etc/objectivefs.admin.env/OBJECTIVEFS_LICENSE
your_admin_license_key
You can have a separate user objectivefs environment directory, e.g. /etc/objectivefs.<user>.env, for each user to mount their individual filesystems.
A. Create a filesystem in admin mode with credentials in /etc/objectivefs.admin.env
$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env mount.objectivefs create myfs
B. Mount the filesystem as user tom in the background
$ sudo OBJECTIVEFS_ENV=/etc/objectivefs.tom.env mount.objectivefs myfs /ofs
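The scripted-creation use case mentioned above can be sketched as a small loop. The pool and user names here are hypothetical, and this assumes OBJECTIVEFS_PASSPHRASE is set in /etc/objectivefs.admin.env so that admin-mode create runs without prompts:

```shell
# Create one filesystem per user, using the admin credentials
for user in alice bob carol; do
    sudo OBJECTIVEFS_ENV=/etc/objectivefs.admin.env \
         mount.objectivefs create "s3://userpool/${user}-fs"
done
```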
Enterprise Plan Feature
While our regular license check is very robust and can handle multi-day outages, some companies prefer to minimize external dependencies. For these cases, we offer a local license check feature that lets you run your infrastructure independent of any license server.
Please talk with your enterprise support contact for instructions on how to enable the local license check on your account.
Enterprise Plan Feature
ObjectiveFS supports AWS S3 Transfer Acceleration, which enables fast transfers of files over long distances between your server and your S3 bucket.
Set the AWS_TRANSFER_ACCELERATION environment variable to 1 to enable S3 transfer acceleration (see the environment variables section for how to set environment variables).
Your S3 bucket needs to be configured to enable Transfer Acceleration. This can be done from the AWS Console.
Mount a filesystem called myfs with S3 Transfer Acceleration enabled
$ sudo AWS_TRANSFER_ACCELERATION=1 mount.objectivefs myfs /ofs
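To enable acceleration for every mount rather than per command, the variable can be stored in the environment directory, following the same convention as the other environment variables in this guide (a sketch):

```shell
# Persist the transfer acceleration setting
echo '1' | sudo tee /etc/objectivefs.env/AWS_TRANSFER_ACCELERATION
```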
Enterprise Plan Feature
ObjectiveFS supports AWS Server-Side encryption using Amazon S3-Managed Keys (SSE-S3) and AWS KMS-Managed Keys (SSE-KMS).
Use the AWS_SERVER_SIDE_ENCRYPTION environment variable (see the environment variables section for how to set environment variables).
The AWS_SERVER_SIDE_ENCRYPTION environment variable can be set to:
- AES256: for Amazon S3-Managed Keys (SSE-S3)
- aws:kms: for AWS KMS-Managed Keys (SSE-KMS) with the default key
- <your kms key>: for AWS KMS-Managed Keys (SSE-KMS) with a key you create and manage
To run SSE-KMS, stunnel is required. See the following guide for setup instructions.
A. Create a filesystem called myfs with Amazon S3-Managed Keys (SSE-S3)
$ sudo AWS_SERVER_SIDE_ENCRYPTION=AES256 mount.objectivefs create myfs
B. Create a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using the default key
Note: make sure stunnel is running. See setup instructions.
$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs create myfs
C. Mount a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using the default key
Note: make sure stunnel is running. See setup instructions.
$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=aws:kms mount.objectivefs myfs /ofs
D. Mount a filesystem called myfs with AWS KMS-Managed Keys (SSE-KMS) using a specific key
Note: make sure stunnel is running. See setup instructions.
$ sudo http_proxy=http://localhost:8086 AWS_SERVER_SIDE_ENCRYPTION=<your aws kms key> mount.objectivefs myfs /ofs
Log information is printed to the terminal when running in the foreground, and is sent to syslog when running in the background. On macOS, the log is typically at /var/log/system.log. On Linux, the log is typically at /var/log/messages or /var/log/syslog.
Below is a list of common log messages. For error messages, please see troubleshooting section.
This message is logged every time an ObjectiveFS filesystem is mounted:
objectivefs starting [<fuse version>, <region>, <endpoint>, <cachesize>, <disk cache setting>]
- <fuse version>: the FUSE protocol version in use
- <region>: the region of your object store
- <endpoint>: the object store endpoint
- <cachesize>: the memory cache size
- <disk cache setting>: whether the disk cache is on or off
Example:
objectivefs starting [fuse version 7.22, region us-west-2, endpoint http://s3-us-west-2.amazonaws.com, cachesize 753MB, diskcache off]
This message is logged while your filesystem is active. It shows the cumulative number of S3 operations and bandwidth usage since the initial mount message.
<put> <list> <get> <delete> <bandwidth in> <bandwidth out>
- <put> <list> <get> <delete>: cumulative counts of PUT, LIST, GET and DELETE operations
- <bandwidth in> <bandwidth out>: cumulative bandwidth received and sent
Example:
1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT
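Because these statistics go to syslog when mounted in the background, they are easy to post-process with standard tools. A small sketch that pulls the GET count out of the sample line above:

```shell
# Extract the cumulative GET count from a sample statistics line
line='1403 PUT, 571 LIST, 76574 GET, 810 DELETE, 5.505 GB IN, 5.309 GB OUT'
gets=$(echo "$line" | awk -F', ' '{print $3}' | awk '{print $1}')
echo "$gets"   # prints 76574
```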
Caching statistics are part of the regular log message starting in ObjectiveFS v4.2. This data can be useful for tuning the memory and disk cache sizes for your workload.
CACHE [<cache hit> <metadata> <data> <os>], DISK [<hit>]
- <cache hit>: overall memory cache hit rate
- <metadata>: metadata cache hit rate
- <data>: data cache hit rate
- <os>: amount of data cached by the OS
- DISK [<hit>]: disk cache hit rate
Example:
CACHE [74.9% HIT, 94.1% META, 68.1% DATA, 1.781 GB OS], DISK [99.0% HIT]
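The hit rates in this line can likewise be extracted for monitoring or cache tuning. A small sketch using the sample line above:

```shell
# Extract the overall memory cache hit rate from a sample log line
line='CACHE [74.9% HIT, 94.1% META, 68.1% DATA, 1.781 GB OS], DISK [99.0% HIT]'
hit=$(echo "$line" | sed -n 's/.*CACHE \[\([0-9.]*\)% HIT.*/\1/p')
echo "$hit"   # prints 74.9
```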
Error responses from S3 or GCS are logged as:
retrying <operation> due to <endpoint> response: <S3/GCS response> [x-amz-request-id:<amz-id>, x-amz-id-2:<amz-id2>]
- <operation>: the PUT, GET, LIST or DELETE operation that encountered the error
- <endpoint>: the object store endpoint that returned the error
- <S3/GCS response>: the error response from S3 or GCS
- <amz-id>, <amz-id2>: the Amazon request ids associated with the request
Example:
retrying GET due to s3-us-west-2.amazonaws.com response: 500 Internal Server Error, InternalError, x-amz-request-id:E854A4F04A83C125, x-amz-id-2:Zad39pZ2mkPGyT/axl8gMX32nsVn
- DNS issues can be avoided when the DNSCACHEIP environment variable is set.
- If the mount.objectivefs binary is not executable, run: chmod +x mount.objectivefs
- If your clock is out of sync, sync it with: /usr/sbin/ntpdate pool.ntp.org
- Rate limiting behavior can be adjusted with the noratelimit mount option.
ObjectiveFS is forward and backward compatible. Upgrading or downgrading to a different release is straightforward. You can also do rolling upgrades for multiple servers. To upgrade: install the new version, unmount and then remount your filesystem.
Don’t hesitate to send us an email.
Release date: October 5, 2022
Release date: November 2, 2021
Release date: October 2, 2021
Release date: March 5, 2021
Release date: June 16, 2020
Release date: April 25, 2020
Release date: April 9, 2020
Release date: January 6, 2020
Release date: November 28, 2019
Release date: August 18, 2019
Release date: June 24, 2019
Release date: April 23, 2019
Release date: March 23, 2019
Release date: March 23, 2019
Release date: January 26, 2019
Release date: January 14, 2019
Release date: January 14, 2019
Release date: October 28, 2018
Release date: June 21, 2018
Release date: April 10, 2018
Release date: March 26, 2018
Release date: December 3, 2017
Release date: November 22, 2017
Release date: July 28, 2017
Release date: April 25, 2017
Release date: March 26, 2017
Release date: January 12, 2017
Release date: October 3, 2016
Release date: August 15, 2016
Release date: July 26, 2016
Release date: June 28, 2016
Release date: June 28, 2016
Release date: May 30, 2016
Release date: May 30, 2016
Release date: April 26, 2016
Release date: April 26, 2016
Release date: April 3, 2016
Release date: March 20, 2016
Release date: February 7, 2016
Release date: January 28, 2016
Release date: January 25, 2016
Release date: January 11, 2016
This major release has many new features and improvements including a large reduction in memory usage, HPC support for large sequential reads and writes, http proxy, user id mapping, Amazon server-side encryption (AWS KMS) support and ap-northeast-2 support.
Release date: October 31, 2015
Release date: October 15, 2015
Release date: September 28, 2015
Release date: September 8, 2015
This major release has many new features such as disk cache, compaction on write, connection pooling, us-east-1 support and significant performance improvements. These improvements reduce latency, lower memory usage and reduce the number of S3 operations.
Release date: July 21, 2015
- config command to easily set up the required variables
- list command to display the user’s file systems with region and type info
- create command with simpler usage and the ability to specify regions directly
- mount command with improved messaging and the ability to run in the background
- destroy command with simpler usage
Release date: April 20, 2015
Release date: March 3, 2015
Release date: February 25, 2015
Release date: October 26, 2014
Release date: July 28, 2014
Release date: July 16, 2014
Release date: June 3, 2014
1.0 (Release date: July 30, 2013)
1.0 Release Candidate (Release date: May 23, 2013)
First public beta (Release date: April 3, 2013)