FAQ · s3fs-fuse/s3fs-fuse Wiki · GitHub
Important files and folders used by s3fs:
- /usr/bin/s3fs
- /var/log/messages
- an entry in /etc/fstab (optional - **requires fuse to be fully installed**, see issue #115)
- the file $HOME/.passwd-s3fs or /etc/passwd-s3fs (optional; see the example after this list)
- the folder specified by use_cache (optional): a local file cache automatically maintained by s3fs, enabled with the "use_cache" option, e.g., -o use_cache=/tmp
- the file mime.types
  - This file is used to map file extensions to Content-Types
  - on Fedora /etc/mime.types comes from mailcap, so you can either (a) create this file yourself or (b) do a yum install mailcap
  - Usually, s3fs tries to detect /etc/mime.types by default regardless of the OS
  - Otherwise, s3fs tries to detect /etc/apache2/mime.types if the OS is macOS
  - s3fs exits with an error if these files do not exist
  - Alternatively, you can set the mime.types file path with the mime option instead of relying on these default files
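For reference, a minimal sketch of creating the credential file mentioned above (the keys below are the AWS documentation placeholder keys; the file format is ACCESS_KEY_ID:SECRET_ACCESS_KEY):
  echo "AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" > $HOME/.passwd-s3fs
  chmod 600 $HOME/.passwd-s3fs   # s3fs rejects a credential file that is readable by other users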
- s3fs stores files natively and transparently in Amazon S3; you can access the files with other tools, e.g., jets3t
- Does the bucket exist?
- Are your credentials correct?
- Is your local clock within 15 minutes of Amazon's? (RequestTimeTooSkewed)
- tail -f /var/log/messages
- Use the fuse -f switch, e.g., /usr/bin/s3fs -f my_bucket /mnt
- Try updating your version of libcurl: I've used 7.16 and 7.17
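Expanding on the -f tip above, a hedged example of running s3fs in the foreground with verbose logging (dbglevel and curldbg are standard s3fs debug options; bucket name and mount point are placeholders):
  /usr/bin/s3fs my_bucket /mnt -f -o dbglevel=info -o curldbg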
Q: when I mount a bucket only the current user can see it; other users cannot; how do I allow other users to see it?
- A: use 'allow_other'
- /usr/bin/s3fs -o allow_other mybucket /mnt
- or from /etc/fstab: s3fs#mybucket /mnt fuse _netdev,allow_other 0 0
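- Note (distribution-dependent): FUSE usually only honors allow_other for non-root users when user_allow_other is enabled in /etc/fuse.conf, e.g.:
  # /etc/fuse.conf
  user_allow_other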
- A: It is unbounded! If you want, you can use a cron job (e.g., a script in /etc/cron.daily) to periodically purge "~/.s3fs"; because of the reference-counted nature of POSIX file systems, a periodic purge will not interfere with the normal operation of the s3fs local file cache. A sketch of such a script follows.
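A minimal sketch of such a purge script (the cache directory and the 7-day age threshold are assumptions; point it at whatever you passed to -o use_cache):
  #!/bin/sh
  # hypothetical /etc/cron.daily/purge-s3fs-cache
  CACHE_DIR=/tmp/s3fs-cache     # assumed cache location
  find "$CACHE_DIR" -type f -atime +7 -delete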
- A: s3fs will upload contents of a file when the "close" is called from userspace, which in turn provokes "s3fs_flush" to be called. "s3fs_flush" is a synchronous call, which means that after "close" returns in userspace, all modified data has been uploaded to S3 (except on errors, of course). If you want to make sure data has been updated in S3 before closing the file, call "fsync" from userspace, which will trigger s3fs' "s3fs_fsync", uploading all modified file contents to S3 synchronously.
Q: s3fs uses x-amz-meta custom meta headers... will s3fs clobber any existing x-amz-meta custom header headers?
- A: No!
- A: try using the use_path_request_style option.
- A: Try adding the "_netdev" option to the s3fs entry in fstab; it delays mounting until the network is up.
- A: Start the netfs service on your instance; it loads the fuse module into the system (via modprobe).
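If the service is unavailable, you can check for and load the module by hand (standard Linux commands, not s3fs-specific):
  lsmod | grep fuse    # check whether the fuse module is loaded
  sudo modprobe fuse   # load it manually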
- A: You can use the ahbe_conf option and specify an ahbe_conf file for it; please see the s3fs man page.
- A: s3fs supports files and directories which are uploaded by other S3 tools (e.g., s3cmd or the S3 console). Those tools upload objects to S3 without the x-amz-meta-(mode,mtime,uid,gid) HTTP headers. s3fs uses these meta HTTP headers to present the objects as a filesystem, but it cannot know the metadata for such files because there is none. Thus s3fs displays "d---------" and "---------" for those file (directory) permissions. There are several ways to solve this. One is to give permissions to the files and directories with the chmod command, or to set the x-amz-meta- headers with other tools. Alternatively, you can use the umask, gid and uid options of s3fs.
- A: you can use the complement_stat option. It gives the file/directory permissions that are as appropriate as possible.
- A: Please use the following format for the s3fs entry in fstab.
<bucket name> <mount point> fuse.s3fs _netdev, 0 0
- A: Since s3fs version 1.82, s3fs connects to the S3 server over HTTPS (https://s3.amazonaws.com).
s3fs uses your bucket name and tries to connect to "https://<bucket name>.s3.amazonaws.com" by default.
If your bucket name includes a dot (e.g., "xxx.yyy"), the destination host name (FQDN) will be "xxx.yyy.s3.amazonaws.com".
Because the SSL certificate subject name of the S3 API is the wildcard "*.s3.amazonaws.com", which matches only a single label, you get an error when connecting to the S3 server.
Example error message from curl: "SSL: certificate subject name (*.s3.amazonaws.com) does not match target host name 'xxx.yyy.s3.amazonaws.com'"
In this case you can specify the use_path_request_style option, and if your bucket's region is not us-east-1, you need to specify the endpoint option (e.g., endpoint=us-west-2) too, as in the sketch below.
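A minimal example (bucket name, mount point and region are placeholders):
  /usr/bin/s3fs xxx.yyy /mnt -o use_path_request_style -o endpoint=us-west-2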
- A: See the "Managing Access Keys for IAM Users" documentation from AWS. For S3-compatible storage other than AWS S3, refer to that provider's documentation.
- A: This covers troubles where you receive "Permission denied" when accessing files and directories under a mount point mounted by s3fs.
Although there are several causes, this error is displayed because the file/directory does not have read/write/execute permission.
s3fs expects permissions to be set for each object on S3.
These permissions are stored in the object's headers as x-amz-meta-mode, x-amz-meta-mtime, x-amz-meta-uid and x-amz-meta-gid.
When the objects below the mounted bucket were created by s3fs, these permissions are set on the objects and no error occurs.
However, objects created with the AWS console, s3cmd or other tools do not have these permissions set.
Therefore, when you access such an object, you get "Permission denied".
To solve the permission denied error, you can use the umask/uid/gid/mp_umask options to give appropriate permissions when mounting the bucket (see the sketch after this answer).
Once a file/directory has been updated through s3fs, the permissions are set on the object, so you will no longer receive an error for that object even without these options.
- A: Another common cause of this problem is that, by default, s3fs only lets the user who created the mount view the directory. To fix this, see the related question above: "Q: when I mount a bucket only the current user can see it; other users cannot; how do I allow other users to see it?"
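A hedged example of mounting with those options (the uid/gid values are placeholders for the intended owner):
  /usr/bin/s3fs mybucket /mnt -o umask=022 -o mp_umask=022 -o uid=1000 -o gid=1000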
- A: updatedb (and locate) scans your mount points at certain times of the day. If you have lots of files to index, this will take a lot of CPU. Solution: add your mount point to PRUNEPATHS in /etc/updatedb.conf so updatedb does not include it when it scans.
- A: ListBucket and HeadObject API calls were being made by updatedb (and locate). Solution: add your mount point to PRUNEPATHS in /etc/updatedb.conf so updatedb does not include it when it scans.
- A: s3fs can mount a path below the bucket by specifying that path.
However, in rare cases it fails to mount a specified path.
In order to mount a specified path, the path must be a directory object that s3fs can recognize.
When a directory object is created in the bucket by a tool other than s3fs, it may not be recognized as a directory by s3fs.
(The directory object should be named "path/" for s3fs to recognize it.)
As a workaround, first mount only the bucket with s3fs, and create the desired path as a directory or update the directory information with chmod/chown etc.
After that, unmount it and mount the directory (specifying the bucket and the path) on the mount point.
If the path can then be recognized by s3fs as a directory object, it can be mounted.
When mounting, you also need to specify the endpoint correctly.
If you get an AuthorizationHeaderMalformed error, you need to specify the correct endpoint with the endpoint option.
A sketch of this workaround follows.
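A hedged sketch of the steps above (bucket name, path and mount point are placeholders):
  # 1. mount the whole bucket and create (or fix up) the directory object
  /usr/bin/s3fs mybucket /mnt
  mkdir /mnt/mydir        # or chmod/chown an existing directory so s3fs records it
  umount /mnt
  # 2. now mount only that path
  /usr/bin/s3fs mybucket:/mydir /mnt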
- A: If you are using updatedb, you may have to exclude the s3fs mount point in /etc/updatedb.conf.
If you are mounting a bucket which has a lot of files, updatedb's behavior can make your system unstable.
In that case you should edit /etc/updatedb.conf and set PRUNEFS and PRUNEPATHS to avoid this problem.
If you are using the s3fs use_cache option, you may also need to exclude the cache directory.
Please set these values appropriately while checking the system status; an example follows.
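For example, in /etc/updatedb.conf (the mount point and cache path below are assumed placeholders; add them to the existing lists rather than replacing the defaults):
  PRUNEFS = "fuse.s3fs"
  PRUNEPATHS = "/mnt/s3 /tmp/s3fs-cache"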