How to Mount S3 bucket on EC2 Linux Instance – Cloudkul

An S3 bucket can be mounted on an AWS instance as a file system known as s3fs. s3fs is a FUSE-based file system that allows you to mount an Amazon S3 bucket as a local file system. It behaves like a network attached drive in that it doesn’t store anything on the Amazon EC2 instance, but the user can access the data on S3 from the EC2 instance.

Filesystem in Userspace (FUSE) is a simple interface for userspace programs to export a virtual filesystem to the Linux kernel. It is also intended to provide a secure method for non-privileged users to create and mount their own file system implementations.

The s3fs-fuse project is written in C++ and backed by Amazon’s Simple Storage Service (S3). Amazon offers an open API to build applications on top of this service, which several companies have done, using a variety of interfaces (web, rsync, FUSE, etc.).

Follow the steps below to mount your S3 bucket on your Linux instance.

This tutorial assumes you have a Linux EC2 instance running on AWS with root access and a bucket created in S3 that will be mounted on your Linux instance. You will also need an access and secret key pair with sufficient S3 permissions, or IAM access to create one.

We will perform the steps as the root user. You can also use the sudo command if you are a normal user with sudo access. So let’s get started.

Step 1:- If you are using a fresh CentOS or Ubuntu instance, update the system.

-> For CentOS or Red Hat

-> For Ubuntu
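As a rough sketch, the usual update commands for each family are:

yum update -y                            # CentOS / Red Hat
apt-get update && apt-get upgrade -y     # Ubuntu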

Step 2:- Install the dependencies.

-> For CentOS or Red Hat

-> For Ubuntu or Debian
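A sketch of the dependency installation; these package names follow the s3fs-fuse build documentation and may vary slightly between releases:

yum install -y automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel                                 # CentOS / Red Hat
apt-get install -y automake autotools-dev fuse g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config      # Ubuntu / Debian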

Step 3:- Clone the s3fs source code from GitHub.
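For example, cloning the upstream repository:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git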

Step 4:- Now change to the source code directory, then compile and install the code with the following commands:
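A typical build sequence for s3fs-fuse looks like this; the --prefix=/usr and --with-openssl flags are common choices rather than requirements:

cd s3fs-fuse
./autogen.sh
./configure --prefix=/usr --with-openssl
make
make install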

Step 5:- Use the following command to check where the s3fs command has been placed in the OS. It will also confirm that the installation completed successfully.
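For example:

which s3fs

The output is typically /usr/bin/s3fs (with the --prefix=/usr used above) or /usr/local/bin/s3fs with the default prefix.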

Step 6:- Obtain the access key and the secret key.

You will need an AWS access key and secret key with the proper permissions to access your S3 bucket from your EC2 instance. You can easily manage user permissions from the IAM (Identity and Access Management) service provided by AWS. Create an IAM user with full access to S3 (or a role with sufficient permissions), or use your account’s root credentials. Here we will use root credentials for simplicity.

Go to the AWS menu -> your AWS account name -> My Security Credentials. Your IAM console will appear here. Go to Users -> your account name, and in the Permissions tab check whether you have enough access to the S3 bucket. If not, you can manually attach an existing “S3 full access” policy or create a new policy with sufficient permissions.

Now go to the Security Credentials tab and create an access key. A new access key and secret key pair will be generated. Here you can see the access key and the secret key (the secret key becomes visible when you click the Show link), which you can also download. Copy these two keys separately.

Note that you can always use an existing access and secret key pair. Alternatively, you can create a new IAM user and give it enough permissions to generate the access and secret key.

Step 7:- Create a new file in /etc named passwd-s3fs and paste the access key and secret key into it in the following format.
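A sketch with placeholder values; replace them with your real key pair:

echo "YOUR_ACCESS_KEY_ID:YOUR_SECRET_ACCESS_KEY" > /etc/passwd-s3fs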

Step 8:- Change the file permissions.
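s3fs will refuse a credentials file that is readable by other users, so restrict it, for example:

chmod 640 /etc/passwd-s3fs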

Step 9:- Now create a directory (or provide the path of an existing directory) and mount the S3 bucket on it.
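For example, creating /mys3bucket, the mount point used in the rest of this guide:

mkdir /mys3bucket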

If you have a simple bucket without a dot (.) in the bucket name, use the command in point “a”; for a bucket with a dot (.) in the bucket name, follow point “b”:

a) Bucket name without a dot (.):
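A sketch of the mount command; the uid, cache directory, and multireq_max values are examples you can adjust:

s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket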

Where:
“your_bucketname” = the name of the S3 bucket you created in AWS S3
use_cache = use a directory for caching
allow_other = allow other users to write to the mount point
uid = UID of the user/owner of the mount point (you can also add “-o gid=1001” for the group)
mp_umask = remove the permissions of other users
multireq_max = maximum number of parallel requests sent to the S3 bucket
/mys3bucket = mount point where the bucket will be mounted

You can make an entry in /etc/rc.local to automatically remount the bucket after a reboot. Find the s3fs binary with the “which” command and add the entry before the “exit 0” line, as shown below.
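A sketch of what that rc.local entry might look like, assuming the binary was installed at /usr/bin/s3fs (confirm with “which s3fs”) and the same options as above:

/usr/bin/s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket
exit 0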

b) Bucket name with a dot (.):
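A sketch of the mount command for this case; it adds the use_path_request_style and url options because virtual-hosted-style HTTPS requests do not work reliably with dots in the bucket name:

s3fs your_bucketname /mys3bucket -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 -o use_path_request_style -o url=https://s3-{{aws_region}}.amazonaws.com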

Remember to replace “{{aws_region}}” with the region of your bucket (example: eu-west-1).

To debug at any time, add “-o dbglevel=info -f -o curldbg” to the s3fs mount command.

Step 10:- Check the mounted S3 bucket. The output will be similar to the one shown below, but the size used may differ.
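You can verify the mount with df; either form below works, and the filesystem type typically shows up as fuse.s3fs:

df -Th /mys3bucket
df -Th | grep s3fs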

If it shows the mounted filesystem, you have successfully mounted the S3 bucket on your EC2 instance. You can also test it further by creating a test file.
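For example (the file name is only an illustration):

echo "this is a test file to check the s3fs mount" > /mys3bucket/test.txt
ls -l /mys3bucket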

This change should also be reflected in the S3 bucket, so log in to your S3 bucket in the AWS console and check whether the test file is present.

Note: if you already had some data in the S3 bucket and it is not visible after mounting, you need to set the ACL permissions for that bucket in the S3 section of the AWS console.

Also, if you get an error from s3fs such as “transport endpoint is not connected”, you need to unmount and remount the filesystem. You can also do this through a custom script that automatically detects the problem and remounts.
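A sketch of the manual recovery, assuming the same mount point and options as in step 9:

umount /mys3bucket        # or, if that fails: fusermount -u /mys3bucket
s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket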

Congratulations! You have successfully mounted your S3 bucket on your EC2 instance. Any files written to /mys3bucket will be replicated to your Amazon S3 bucket.

Need help?

Thanks for reading this blog!

For more interesting blogs, stay in touch with us. If you need any kind of support, just create a ticket at https://webkul.uvdesk.com/en/.

For further assistance or inquiries, please contact us or create a ticket.
