
OSSFS


Introduction

ossfs enables you to mount Alibaba Cloud OSS buckets to a local directory on Linux, macOS, and FreeBSD systems. Once mounted, you can conveniently operate on OSS objects through the local file system while sharing the data with other tools.

Features

ossfs is built on s3fs and inherits all of its features. Main features:

  • large subset of POSIX including reading/writing files, directories, symlinks, mode, uid/gid, and extended attributes
  • allows random writes and appends
  • large files via multi-part upload
  • renames via server-side copy
  • optional server-side encryption
  • data integrity via MD5 hashes
  • in-memory metadata caching
  • local disk data caching

In addition to the features above, ossfs provides features of its own:

  • renames via server-side single-part copy to improve rename performance for large files
  • optionally saves the symbolic link target in object user metadata
  • optionally improves readdir performance by ignoring metadata (atime/ctime, uid/gid, and permissions)

Precompiled installer

We provide installer packages for common Linux distributions:

  • Ubuntu-14.04 or later
  • CentOS-7.0 or later
  • Anolis-7 or later

Please select the corresponding installer on the [Version Releases Page][Releases], download it, and install the tool. The latest version is recommended.

  • For Ubuntu systems, the installation command is:
sudo apt-get update
sudo apt-get install gdebi-core
sudo gdebi your_ossfs_package
  • For CentOS, the installation command is:
sudo yum localinstall your_ossfs_package
  • For Anolis, the installation command is:
sudo yum localinstall your_ossfs_package

Install by source code

If no installer package is available for your system, you can also build the tool from source. First install the following dependencies:

Ubuntu 14.04:

sudo apt-get install automake autotools-dev g++ git libcurl4-gnutls-dev \
                     libfuse-dev libssl-dev libxml2-dev make pkg-config

CentOS 7.0:

sudo yum install automake gcc-c++ git libcurl-devel libxml2-devel \
                 fuse-devel make openssl-devel

Then you can download the source code from GitHub and compile the code for installing the tool:

git clone https://github.com/aliyun/ossfs.git
cd ossfs
./autogen.sh
./configure
make
sudo make install

Otherwise consult the compilation instructions.

Run OSSFS

The ossfs password file can be placed in either of two default locations:

  • a .passwd-ossfs file in the user's home directory (i.e. ${HOME}/.passwd-ossfs)
  • the system-wide /etc/passwd-ossfs file

Enter your credentials in a file ${HOME}/.passwd-ossfs and set owner-only permissions:

echo my-access-key-id:my-access-key-secret > ${HOME}/.passwd-ossfs
chmod 600 ${HOME}/.passwd-ossfs

Run ossfs with an existing bucket my-bucket and directory /path/to/mountpoint:

ossfs my-bucket /path/to/mountpoint -ourl=my-oss-endpoint

If you encounter any errors, enable debug output:

ossfs my-bucket /path/to/mountpoint -ourl=my-oss-endpoint -o dbglevel=info -f -o curldbg

You can also mount on boot by entering the following line to /etc/fstab:

my-bucket /path/to/mountpoint fuse.ossfs _netdev,allow_other,url=my-oss-endpoint 0 0

Note: You may also want to create the global credential file first:

echo my-access-key-id:my-access-key-secret > /etc/passwd-ossfs
chmod 600 /etc/passwd-ossfs

Note 2: You may also need to make sure the netfs service is started on boot.

Example

Mount the bucket 'my-bucket' to the directory '/tmp/ossfs', where the AccessKeyId is 'faint', the AccessKeySecret is '123', and the OSS endpoint is 'http://oss-cn-hangzhou.aliyuncs.com':

echo faint:123 > /etc/passwd-ossfs
chmod 600 /etc/passwd-ossfs
mkdir /tmp/ossfs
ossfs my-bucket /tmp/ossfs -ourl=http://oss-cn-hangzhou.aliyuncs.com

Unmount the bucket:

umount /tmp/ossfs # root user
fusermount -u /tmp/ossfs # non-root user

Common settings

  • You can use 'ossfs --version' to view the current version and 'ossfs -h' to view available parameters.

  • If you are using ossfs on an Alibaba Cloud ECS instance, you can use the internal endpoint to avoid traffic charges and improve speed:

      ossfs my-bucket /tmp/ossfs -ourl=http://oss-cn-hangzhou-internal.aliyuncs.com
    
  • In a Linux system, [updatedb][updatedb] will scan the file system on a regular basis. If you do not want the ossfs-mounted directory to be scanned, refer to [FAQ][FAQ-updatedb] to configure skipping the mounted directory.

  • The ossfs allows you to specify multiple sets of bucket/access_key_id/access_key_secret information. When multiple sets of information are in place, the format of the information written to passwd-ossfs is:

      bucket1:access_key_id1:access_key_secret1
      bucket2:access_key_id2:access_key_secret2
    
  • The [Supervisor][Supervisor] is recommended in a production environment to start and monitor the ossfs process. For usage see [FAQ][faq-supervisor].
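The multi-bucket format above can be sketched end to end; the bucket names, keys, and file path below are placeholders:

```shell
# Placeholder credentials file (use ${HOME}/.passwd-ossfs or /etc/passwd-ossfs in practice).
PASSWD_FILE=/tmp/demo-passwd-ossfs

cat > "$PASSWD_FILE" <<'EOF'
bucket1:access_key_id1:access_key_secret1
bucket2:access_key_id2:access_key_secret2
EOF
chmod 600 "$PASSWD_FILE"   # ossfs rejects credentials files readable by others

# ossfs selects the matching line by bucket name, so each bucket
# can be mounted separately (endpoint is a placeholder):
#   ossfs bucket1 /mnt/bucket1 -ourl=my-oss-endpoint
#   ossfs bucket2 /mnt/bucket2 -ourl=my-oss-endpoint
```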

Advanced settings

  • You can add the '-f -d' parameters to run the ossfs in the foreground and output the debug log.
  • You can use the '-o kernel_cache' parameter to enable the ossfs to use the page cache of the file system. If you have multiple servers mounted to the same bucket and require strong consistency, do not use this option.
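For example, a mount that enables the page cache, with the endpoint and names as in the earlier examples; this is only safe when no other client writes to the same bucket:

```shell
# Enable the kernel page cache for this single-writer mount.
ossfs my-bucket /path/to/mountpoint -ourl=my-oss-endpoint -o kernel_cache
```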

Errors

Do not panic in case of errors. Troubleshoot the problem following the steps below:

  1. Read and understand any error message that is printed.

  2. View '/var/log/syslog' or '/var/log/messages' to check for any related information.

     grep 's3fs' /var/log/syslog
     grep 'ossfs' /var/log/syslog
    
  3. Retry ossfs mounting and open the debug log:

     ossfs ... -o dbglevel=debug -f -d > /tmp/fs.log 2>&1
    

    Repeat the operation and save '/tmp/fs.log' so you can inspect it or send it to the maintainers.

Limitations

Generally OSS cannot offer the same performance or semantics as a local file system. More specifically:

  • random writes or appends to files require rewriting the entire object, optimized with multi-part upload copy
  • metadata operations such as listing directories have poor performance due to network latency
  • no atomic renames of files or directories
  • no coordination between multiple clients mounting the same bucket
  • no hard links
  • inotify detects only local modifications, not external ones by other clients or tools
  • not suitable for scenarios with highly concurrent reads/writes as it will increase the system load

Frequently Asked Questions

Related

  • ossfs Wiki
  • s3fs - mounts an S3 bucket to the local file system through the FUSE interface.

Contact us

License

Copyright (C) 2010 Randy Rizun [email protected]

Licensed under the GNU GPL version 2


ossfs's Issues

test_truncate_file occasionally fails

There are two causes of the failure:

  1. Due to a known fuse issue, at stat time ([ -s $file ]) the release of the truncate operation has not been called yet, so the previous result cached in StatCache is not deleted in time
  2. CentOS-6.5 does not include this patch, so open(O_TRUNC) does not update fuse's attr cache

How does the caching mechanism work? Accessing a file that already exists locally still produces network requests

The file heartbeatcheck.txt already exists locally, so why does the log below still show network requests?

Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:s3fs_getattr(821): [path=/heartbeatcheck.txt] uid=0, gid=0, mode=100777
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: [path=/heartbeatcheck.txt][flags=34816]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: delete stat cache entry[path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:check_parent_object_access(657): [path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:check_object_access(551): [path=/]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:get_object_attribute(405): [path=/]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:check_object_access(551): [path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:get_object_attribute(405): [path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: [tpath=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: [tpath=/heartbeatcheck.txt][bpath=][save=][sseckeypos=-1]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: curl.cpp:GetHandler(239): Get handler from pool: 31
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: URL is http://oss-cn-qingdao-internal.aliyuncs.com/yeteammedia/heartbeatcheck.txt
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: URL changed is http://yeteammedia.oss-cn-qingdao-internal.aliyuncs.com/heartbeatcheck.txt
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: curl.cpp:RequestPerform(1791): connecting to URL http://yeteammedia.oss-cn-qingdao-internal.aliyuncs.com/heartbeatcheck.txt
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: HTTP response code 200
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: curl.cpp:ReturnHandler(262): Return handler to pool: 31
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: add stat cache entry[path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: cache.cpp:GetStat(171): stat cache hit [path=/heartbeatcheck.txt][time=1487914325][hit count=0]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:get_object_attribute(405): [path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: cache.cpp:GetStat(171): stat cache hit [path=/heartbeatcheck.txt][time=1487914325][hit count=1]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Open(1809): [path=/heartbeatcheck.txt][size=16][time=1487321638]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Open(710): [path=/heartbeatcheck.txt][fd=-1][size=16][time=1487321638]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Open(196): file locked(/heartbeatcheck.txt - /alidata1/ossfs_local_cache/.yeteammedia.stat/heartbeatcheck.txt)
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Release(212): file unlocked(/heartbeatcheck.txt)
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: [path=/heartbeatcheck.txt][fd=28][time=1487321638]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:s3fs_read(2063): [path=/heartbeatcheck.txt][size=4096][offset=0][fd=28]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:ExistOpen(1860): [path=/heartbeatcheck.txt][fd=28][ignore_existfd=false]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Open(1809): [path=/heartbeatcheck.txt][size=-1][time=-1]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Open(710): [path=/heartbeatcheck.txt][fd=28][size=-1][time=-1]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Dup(695): [path=/heartbeatcheck.txt][fd=28][refcnt=2]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Read(1418): [path=/heartbeatcheck.txt][fd=28][offset=0][size=4096]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Close(1901): [ent->file=/heartbeatcheck.txt][ent->fd=28]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Close(671): [path=/heartbeatcheck.txt][fd=28][refcnt=1]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: [path=/heartbeatcheck.txt][fd=28]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:check_parent_object_access(657): [path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:check_object_access(551): [path=/]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:get_object_attribute(405): [path=/]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:check_object_access(551): [path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: s3fs.cpp:get_object_attribute(405): [path=/heartbeatcheck.txt]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: cache.cpp:GetStat(171): stat cache hit [path=/heartbeatcheck.txt][time=1487914325][hit count=2]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:ExistOpen(1860): [path=/heartbeatcheck.txt][fd=28][ignore_existfd=false]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Open(1809): [path=/heartbeatcheck.txt][size=-1][time=-1]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Open(710): [path=/heartbeatcheck.txt][fd=28][size=-1][time=-1]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Dup(695): [path=/heartbeatcheck.txt][fd=28][refcnt=2]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: [tpath=][path=/heartbeatcheck.txt][fd=28]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Close(1901): [ent->file=/heartbeatcheck.txt][ent->fd=28]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Close(671): [path=/heartbeatcheck.txt][fd=28][refcnt=1]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: [path=/heartbeatcheck.txt][fd=28]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: [path=/heartbeatcheck.txt][fd=28]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Close(1901): [ent->file=/heartbeatcheck.txt][ent->fd=28]
Feb 24 13:32:05 iZm5e0s7m7r0k69jqd34foZ s3fs[845]: fdcache.cpp:Close(671): [path=/heartbeatcheck.txt][fd=28][refcnt=0]

ossfs permission problem

ossfs can only be started as root; starting it as another user (www) produces this error:
ossfs: could not determine how to establish security credentials.

And after starting as root, ownership cannot be granted to www:
chown: changing ownership of '/ossfs/': Input/output error
The www user cannot enter the ossfs directory. How can this be done?

Cannot write files on macOS

When I copy a file into the mapped directory, the following error appears:

The Finder can't complete the operation because some data in "bees.jpg" can't be read or written.
(Error code -36)

Problem when writing a file a second time

Writing binary data with Python: first create a file and write 100 bytes of data, then reopen the file, seek to the beginning, and overwrite with 10 bytes. The 10 bytes are written correctly, but the remaining 90 bytes all become 0x00. What is the cause?
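On a POSIX file system the steps described above leave the trailing bytes intact; here is a minimal shell repro (the file path is a placeholder) to compare against the behavior reported on the mount:

```shell
f=/tmp/ossfs_overwrite_demo.bin

# Step 1: create a 100-byte file (100 x 'A').
printf 'A%.0s' $(seq 1 100) > "$f"

# Step 2: reopen and overwrite only the first 10 bytes with 'B',
# without truncating the rest (conv=notrunc).
printf 'B%.0s' $(seq 1 10) | dd of="$f" bs=1 count=10 conv=notrunc 2>/dev/null

# Expected on a local file system: size stays 100 and bytes 11-100
# are still 'A'; the issue reports them becoming 0x00 on ossfs.
wc -c < "$f"
```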

rsync/scp reports an error when copying large files (200 MB)

root@iZ23qc6ioeeZ:/tmp# rsync -avSH /root/file.200m /tmp/ossfs/
sending incremental file list
file.200m
rsync: write failed on "/tmp/ossfs/file.200m": Operation not permitted (1)
rsync error: error in file IO (code 11) at receiver.c(389) [receiver=3.1.0]
root@iZ23qc6ioeeZ:/tmp# dd if=/dev/zero of=/root/file.10m bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0103612 s, 1.0 GB/s
root@iZ23qc6ioeeZ:/tmp# rsync -avSH /root/file.10m /tmp/ossfs/
sending incremental file list
file.10m

sent 10,488,407 bytes  received 35 bytes  20,976,884.00 bytes/sec
total size is 10,485,760  speedup is 1.00
  1. cp does not have this problem
  2. rsync with smaller files does not have the problem either

How to mount with 0770 permissions

I use ownCloud to build a private cloud drive and point its data directory at the ossfs mount. With allow_other the permissions become 777 and ownCloud reports a permissions error. Without allow_other, but with the user ID changed to the apache ID, the permissions become 700, which makes the directory inaccessible.
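Assuming ossfs inherits s3fs's umask/uid/gid mount options (they are not documented in this README, so treat this as a sketch, and the apache uid/gid of 48 is hypothetical), combining allow_other with umask=007 should yield 0770; the mask arithmetic:

```shell
# 0777 masked by umask 007 gives 0770: owner+group rwx, others nothing.
printf '%o\n' $(( 0777 & ~0007 ))

# Hypothetical mount command; the endpoint and ids are placeholders:
#   ossfs my-bucket /path/to/mountpoint -ourl=my-oss-endpoint \
#         -o allow_other -o umask=007 -o uid=48 -o gid=48
```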

What does this message during installation mean?

My system is CentOS 6.5, 64-bit

Total size: 5.7 M
Installed size: 5.7 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test

Transaction Check Error:
file /usr/include/fuse.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/fuse/fuse.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/fuse/cuse_lowlevel.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/fuse/fuse_common.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/fuse/fuse_common_compat.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/fuse/fuse_compat.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/fuse/fuse_lowlevel.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/fuse/fuse_lowlevel_compat.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/fuse/fuse_opt.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /usr/include/ulockmgr.h from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-devel-2.8.3-4.el6.x86_64
file /sbin/mount.fuse from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-2.8.3-4.el6.x86_64
file /usr/bin/fusermount from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-2.8.3-4.el6.x86_64
file /usr/bin/ulockmgr_server from install of ossfs-1.7.9.2-1.x86_64 conflicts with file from package fuse-2.8.3-4.el6.x86_64

Error Summary

Phantom directory

Environment:
CentOS 7 x64
ossfs 1.8.0
bucket01 mounted locally at ~/bk01

Steps to reproduce:

  1. Create directory dirA under ~/bk01;
  2. Delete dirA remotely through the OSS API;
  3. ls -lsa ~/bk01 no longer shows the dirA directory;
  4. Problem: cd ~/bk01/dirA still works, and perl -d also shows that the directory ~/bk01/dirA still exists;

After running for a while, ls on the mounted directory reports 'Transport endpoint is not connected'

It worked fine before, but today one of the directories suddenly became inaccessible. ls -l output:
11111
The terminal error when accessing with ls:
ls: cannot access edu: Transport endpoint is not connected total 225
In the log (cat /var/log/syslog | grep ossfs):
ossfs[9858]: segfault at 0 ip 0000000000411a11 sp 00007f81f27fba70 error 4 in ossfs[400000+77000]
Remounting with umount; mount -a has already resolved the problem.
I would like to know what caused it, and whether it has been fixed in a newer version.

A flood of HEAD requests in no-cache mode

A user reported that uploading a 9 MB file over the ECS intranet takes about 5 s. The user had disabled the cache with max_stat_cache_size=0.

A packet capture on the server shows:

  1. Uploading the 9 MB file issued 758 HEAD requests

The actual upload took only 1 s; a large number of HEAD requests were made before the actual upload:

**14:44:35.303405** IP (tos 0x0, ttl 63, id 61238, offset 0, flags [DF], proto TCP (6), length 488)
    iZ23zgd4cp3Z.40775 > 10.157.220.8.http: Flags [P.], cksum 0x492b (incorrect -> 0xdde8), seq 1:449, ack 1, win 229, length 448
E....6@.?.......
....G.P....7.]EP...I+..PUT /test/Uploads/test_img.jpg HTTP/1.1^M
Host: router-qmt.oss-cn-hangzhou-internal.aliyuncs.com^M
Accept: */*^M
Authorization: OSS ACSMJ7iviWqjchjf:qlBZe7bxvSLAKLBVRrIXhC7CLeY=^M
Content-Type: image/jpeg^M
Date: Wed, 16 Mar 2016 06:44:35 GMT^M
User-Agent: aliyun-sdk-http/1.0()/ossfs1.7.9.1^M
x-oss-acl: private^M
x-oss-meta-gid: 0^M
x-oss-meta-mode: 33188^M
x-oss-meta-mtime: 1458110675^M
x-oss-meta-uid: 0^M
Content-Length: 9439196^M
Expect: 100-continue^M

**14:44:36.197117** IP (tos 0x48, ttl 56, id 48870, offset 0, flags [DF], proto TCP (6), length 244)
    10.157.220.8.http > iZ23zgd4cp3Z.40775: Flags [P.], cksum 0x2c79 (correct), seq 26:230, ack 9439645, win 3510, length 204
[email protected].;.
........P.G7.]^.&..P.^M.,y..HTTP/1.1 200 OK^M
Server: AliyunOSS^M
Date: Wed, 16 Mar 2016 06:44:36 GMT^M
Content-Length: 0^M
Connection: keep-alive^M
x-oss-request-id: 56E900D312034F2E51B6C6AE^M
ETag: "B3DB3CE643F2FB1E0ADBE7AC63C8FFEB"^M
  1. Uploading a simple text file issued only 38 HEAD requests

Files show an access date of 1970-01-01 after mounting with ossfs

After mounting ossfs on a Linux host, the stat command shows incorrect Access and Change times for the files:
Size: 188149 Blocks: 368 IO Block: 4096 regular file
Device: 24h/36d Inode: 13 Links: 1
Access: (0666/-rw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 1970-01-01 08:00:00.000000000 +0800
Modify: 2017-06-21 10:30:58.000000000 +0800
Change: 1970-01-01 08:00:00.000000000 +0800

ECS instances inside a VPC are not supported

$ossfs ossfs-fuse /opt/ossfs/ -ourl=http://vpc100-oss-cn-beijing.aliyuncs.com -f -d -d -d

[CRT] s3fs.cpp:set_s3fs_log_level(253): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_moutpoint_attribute(4113): PROC(uid=1001, gid=1001) - MountPoint(uid=1001, gid=1001, mode=40755)
ossfs: could not establish security credentials, check documentation.

Can an FTP root point to a directory inside a mounted OSS bucket?

After mounting OSS to a local directory, e.g. osspath, I can create an FTP root pointing at osspath; through FTP I can then access the mounted OSS and upload files. But when I create another FTP account pointing at a subdirectory inside the OSS mount, that account cannot log in. Why?

In some cases a directory is treated as a regular file

            if(strConType == "application/x-directory"){
              mode |= S_IFDIR;
            }else if(path && 0 < strlen(path) && '/' == path[strlen(path) - 1]){
              if(strConType == "binary/octet-stream" || strConType == "application/octet-stream"){
                mode |= S_IFDIR;
              }else{
                mode |= S_IFREG;
              }
            }else{
              mode |= S_IFREG;
            }

The logic above, which decides whether an object is a directory, requires the content-type to be "application/x-directory", "binary/octet-stream", or "application/octet-stream", rather than deciding purely by whether the object key ends with "/".

This is a bit odd: some tools create directory objects whose content-type does not match these values, so the directory is treated as a regular file.
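The quoted logic can be restated as a small shell sketch to see which combinations map to a directory (the keys and content-types are illustrative):

```shell
# Returns "dir" or "file" per the C++ logic quoted above.
classify() {
  key=$1; ctype=$2
  case "$ctype" in
    application/x-directory) echo dir; return ;;
  esac
  case "$key" in
    */) case "$ctype" in
          binary/octet-stream|application/octet-stream) echo dir ;;
          *) echo file ;;
        esac ;;
    *) echo file ;;
  esac
}

classify "photos/" "application/x-directory"   # dir
classify "photos/" "image/jpeg"                # file <- the surprising case
classify "a.jpg"   "image/jpeg"                # file
```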

Very high CPU usage when storing Discuz forum attachments on OSSFS

https://bbs.aliyun.com/read/296532.html

I use Discuz and store its attachments on the mount. Resource usage is so high that the server occasionally stops working properly.

ECS environment
CPU: 4 cores, memory: 16384 MB
TOP
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2653 root 20 0 3033364 657856 2000 S 109.3 4.1 68014:16 ossfs

OSSFS capacity: 579.656 GB

Estimated 50,000 page views per hour

Start command:
ossfs #aliyuntest /home/nginx/domains/demo.net/public/aliyun -ourl=http://oss-cn-qingdao-internal.aliyuncs.com -oallow_other -omax_stat_cache_size=100000  -ouid=1000 -ogid=1000

Allow users to specify owner information for files uploaded to OSS by other tools

When files are uploaded to a bucket by other tools and the bucket is then mounted with ossfs, those files carry no permission or owner metadata. Issue 7 added default permissions for such files, but the owner is still root. To change these files the user has to run chmod, which requires root privileges and can take a long time when there are many files. A reasonable approach is to let the user specify owner information at mount time.
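ossfs already accepts -ouid/-ogid (they appear in the Discuz start command elsewhere in these issues), so owner information can be fixed at mount time; a sketch, where the service account name www is hypothetical:

```shell
# Resolve the numeric uid/gid of the current user; for a service
# account you would use e.g. "id -u www" / "id -g www" instead.
uid=$(id -u)
gid=$(id -g)
echo "uid=$uid gid=$gid"

# Hypothetical mount command; the endpoint is a placeholder:
#   ossfs my-bucket /path/to/mountpoint -ourl=my-oss-endpoint \
#         -ouid="$uid" -ogid="$gid" -oallow_other
```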

Upload performance

I keep hearing that ossfs performs poorly under highly concurrent reads and writes. Are there concrete numbers on how bad it is? Is there a suitable benchmarking tool - would vdbench work?

Watching files

Can files or directories inside an ossfs mount be watched? I am using ossfs in Alibaba Cloud Container Service, mounted to OSS, and I cannot watch changes to the mounted files, while changes to local files on the ECS instance can be watched.

ossfs/scripts/automount.template

In the template:
ossfs your_bucket your_mountpoint -ourl=your_url -oallow_other
should be
ossfs your_bucket your_mountpoint -ourl=your_url -o allow_other

Permissions are still broken

Why not change this to the version in my PR?
Handling it with wget works:
6.png
With SSH it is still broken.
If you don't want to merge the PR, modifying it directly would also be fine.

curl.cpp:RequestPerform(1846): ### CURLE_WRITE_ERROR

Arch Linux
Compiled following these steps:

git clone https://github.com/aliyun/ossfs.git
cd ossfs
./autogen.sh
./configure
make
sudo make install

The mount succeeds:
ossfs xxxx /data -ourl=oss-cn-beijing.aliyuncs.com -o allow_other

/data is set as syncthing's sync folder.
This error occurred while syncing with dockerized Syncthing.
Previously the OSS bucket was mounted on an Alibaba Cloud Ubuntu machine, also syncing with docker syncthing, and it ran fine.

$ journalctl -xe
Jun 26 23:07:17 HomeServer s3fs[1047]: curl.cpp:RequestPerform(1961): ### giving up
Jun 26 23:07:17 HomeServer s3fs[1047]: fdcache.cpp:Write(1496): failed to load uninitialized area before writing(errno=-5)
Jun 26 23:07:19 HomeServer s3fs[1047]: curl.cpp:RequestPerform(1846): ### CURLE_WRITE_ERROR
Jun 26 23:07:21 HomeServer s3fs[1047]: curl.cpp:RequestPerform(1846): ### CURLE_WRITE_ERROR
Jun 26 23:07:26 HomeServer s3fs[1047]: curl.cpp:RequestPerform(1846): ### CURLE_WRITE_ERROR
Jun 26 23:07:28 HomeServer s3fs[1047]: curl.cpp:RequestPerform(1961): ### giving up
Jun 26 23:07:28 HomeServer s3fs[1047]: fdcache.cpp:Write(1496): failed to load uninitialized area before writing(errno=-5)

So, is my build wrong, or is there some other problem?

The file owner becomes root and the modification time is lost

Files were all written by an ordinary user, but after one night the owner had changed to root and the modification times were scrambled:
drwx------ 1 root root 0 Jan 1 1970 apm_test

osscmd meta shows that x-oss-meta-mtime has been set to 0. Once this happens, creating or deleting anything under the directory leaves the value at 0, and the time never returns to the correct value.

I don't know whether this is an OSSFS problem or an OSS problem.

fd leak in some cases

With ossfs completely idle, lsof shows unreleased fds:

root@ :~# lsof -n | grep deleted|more
ossfs       614             root   36u      REG              253,1 27284931439    1054186 /tmp/tmpf9kS0wq (deleted)
ossfs       614             root   37u      REG              253,1   449757208    1054188 /tmp/tmpfrIyzc4 (deleted)
ossfs       614  3593       root   36u      REG              253,1 27284931439    1054186 /tmp/tmpf9kS0wq (deleted)
ossfs       614  3593       root   37u      REG              253,1   449757208    1054188 /tmp/tmpfrIyzc4 (deleted)
ossfs       614  5884       root   36u      REG              253,1 27284931439    1054186 /tmp/tmpf9kS0wq (deleted)
ossfs       614  5884       root   37u      REG              253,1   449757208    1054188 /tmp/tmpfrIyzc4 (deleted)
ossfs       614  6068       root   36u      REG              253,1 27284931439    1054186 /tmp/tmpf9kS0wq (deleted)
ossfs       614  6068       root   37u      REG              253,1   449757208    1054188 /tmp/tmpfrIyzc4 (deleted)

ossfs does not use HTTP keepalive

Every request opens a new connection. Although each HTTP request carries a Connection: keep-alive header, the connection is never reused, which leads to:

  1. Poor performance
  2. Many connections stuck in TIME_WAIT

ls on a directory with many files is very slow - how to optimize?

Suppose directory d contains only n files. ls d triggers 1 opendir + 1 readdir + n getattr FUSE calls. When n is large, the getattr calls cause severe performance problems.

Two possible optimizations:

  • Increase the stat cache size with the -omax_stat_cache_size=xxx parameter. The first ls will still be slow, but subsequent runs are fast because the file metadata is in the local cache. The default is 1000 entries, consuming about 4 MB of memory; adjust it to fit your machine's memory.
  • Use ls -f, which eliminates the n getattr calls and improves performance.

Note: the numbers above were measured on CentOS 7.0 with fuse 2.9.2.
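Before raising the cache size, the memory cost can be estimated from the roughly 4 KB per entry implied by the 1000-entry/4 MB default (the target entry count below is hypothetical):

```shell
entries=100000          # hypothetical value for -omax_stat_cache_size
bytes_per_entry=4096    # ~4 MB / 1000 entries from the note above
echo "$(( entries * bytes_per_entry / 1024 / 1024 )) MB"   # prints: 390 MB
```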

ossfs mount never completes

When trying to mount OSS, the mount never completes. The full log is in the attached screenshot.
The system is CentOS 7.2 x64; version information is in the attached screenshot.

No error message when the mount fails

root@iZ23qc6ioeeZ:~/ossfs# ./src/ossfs oss-dev-tianlong /tmp/ossfs -ourl=http://oss-dev-tianlong.oss-cn-hangzhou-internal.aliyuncs.com

root@iZ23qc6ioeeZ:~/ossfs# ./src/ossfs oss-dev-tianlong /tmp/ossfs -ourl=http://oss-dev-tianlong.oss-cn-hangzhou-internal.aliyuncs.com -f -d
[CRT] s3fs.cpp:set_s3fs_log_level(253): change debug level from [CRT] to [INF]
[INF]     s3fs.cpp:set_moutpoint_attribute(4113): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
[CRT] s3fs.cpp:s3fs_init(3323): init v1.79.6(commit:5314f18) with OpenSSL
[INF] s3fs.cpp:s3fs_check_service(3682): check services.
[INF]       curl.cpp:CheckBucket(2590): check a bucket.
[INF]       curl.cpp:prepare_url(4025): URL is http://oss-dev-tianlong.oss-cn-hangzhou-internal.aliyuncs.com/oss-dev-tianlong/
[INF]       curl.cpp:prepare_url(4057): URL changed is http://oss-dev-tianlong.oss-dev-tianlong.oss-cn-hangzhou-internal.aliyuncs.com/oss-dev-tianlong/.oss-cn-hangzhou-internal.aliyuncs.com/oss-dev-tianlong/
[INF]       curl.cpp:RequestPerform(1823): HTTP response code 400 was returned, returing EIO.
[ERR] curl.cpp:CheckBucket(2625): Check bucket failed, OSS response: <?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>InvalidBucketName</Code>
  <Message>The specified bucket is not valid.</Message>
  <RequestId>574519588A5216A77C32A25D</RequestId>
  <HostId>oss-dev-tianlong.oss-dev-tianlong.oss-cn-hangzhou-internal.aliyuncs.com</HostId>
  <BucketName>oss-dev-tianlong.oss-dev-tianlong</BucketName>
</Error>

check failed

All it prints is: ossfs: Check OSS service failed. Run with -f option for more details.

Local file permissions are not fully preserved when copying to the bucket

I have a directory used by apache2 for uploading and downloading files, and I want to replace it transparently with my bucket via ossfs. Mounting works fine, but when I copy the local files into the bucket with cp -rp, files whose local permissions are 775 end up as 700 in the bucket, so apache2 cannot read some directories. How can this be solved?

Question about the number of files in an ossfs mount

The OSS bucket has no folders; there are very many files directly under the root.
If I mount it into one folder with ossfs, will some file-count limit be exceeded?

Problem copying large files with ossfs

Using ossfs to mount a bucket on Ubuntu 12.04 (32-bit) with fuse 2.8.6: copying files larger than 1.7 GB into the mounted directory reports 'operation not supported', and copying any other file into the mounted directory afterwards causes a Segmentation fault. The problem persists after umount or reboot and remounting.

Partial log:
[DBG] s3fs.cpp:check_parent_object_access(657): [path=/file1024]
[DBG] s3fs.cpp:check_object_access(551): [path=/]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/]
[INF] s3fs.cpp:create_file_object(888): [path=/file1024][mode=100600]
[INF] curl.cpp:PutRequest(2383): [tpath=/file1024]
[INF] curl.cpp:PutRequest(2400): create zero byte file object.
[DBG] curl.cpp:GetHandler(239): Get handler from pool: 31
[INF] curl.cpp:prepare_url(4025): URL is http://oss-cn-hangzhou-internal.aliyuncs.com/hellok/file1024
[INF] curl.cpp:prepare_url(4057): URL changed is http://hellok.oss-cn-hangzhou-internal.aliyuncs.com/file1024
[INF] curl.cpp:PutRequest(2485): uploading... [path=/file1024][fd=-1][size=0]
[DBG] curl.cpp:RequestPerform(1791): connecting to URL http://hellok.oss-cn-hangzhou-internal.aliyuncs.com/file1024
[INF] curl.cpp:RequestPerform(1811): HTTP response code 200
[DBG] curl.cpp:ReturnHandler(262): Return handler to pool: 31
[INF] cache.cpp:DelStat(404): delete stat cache entry[path=/file1024]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/file1024]
[INF] curl.cpp:HeadRequest(2232): [tpath=/file1024]
[INF] curl.cpp:PreHeadRequest(2174): [tpath=/file1024][bpath=][save=][sseckeypos=-1]
[DBG] curl.cpp:GetHandler(239): Get handler from pool: 31
[INF] curl.cpp:prepare_url(4025): URL is http://oss-cn-hangzhou-internal.aliyuncs.com/hellok/file1024
[INF] curl.cpp:prepare_url(4057): URL changed is http://hellok.oss-cn-hangzhou-internal.aliyuncs.com/file1024
[DBG] curl.cpp:RequestPerform(1791): connecting to URL http://hellok.oss-cn-hangzhou-internal.aliyuncs.com/file1024
[INF] curl.cpp:RequestPerform(1811): HTTP response code 200
[DBG] curl.cpp:ReturnHandler(262): Return handler to pool: 31
[INF] cache.cpp:AddStat(248): add stat cache entry[path=/file1024]
[DBG] cache.cpp:GetStat(171): stat cache hit [path=/file1024][time=1478237615][hit count=0]
[DBG] fdcache.cpp:Open(1809): [path=/file1024][size=0][time=-1]
[DBG] fdcache.cpp:Open(710): [path=/file1024][fd=-1][size=0][time=-1]
[INF] s3fs.cpp:s3fs_getattr(797): [path=/file1024]
[DBG] s3fs.cpp:check_parent_object_access(657): [path=/file1024]
[DBG] s3fs.cpp:check_object_access(551): [path=/]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/]
[DBG] s3fs.cpp:check_object_access(551): [path=/file1024]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/file1024]
[DBG] cache.cpp:GetStat(171): stat cache hit [path=/file1024][time=1478237615][hit count=1]
[DBG] fdcache.cpp:ExistOpen(1860): [path=/file1024][fd=-1][ignore_existfd=false]
[DBG] fdcache.cpp:Open(1809): [path=/file1024][size=-1][time=-1]
[DBG] s3fs.cpp:s3fs_getattr(821): [path=/file1024] uid=0, gid=0, mode=100600
[INF] s3fs.cpp:s3fs_getxattr(3061): [path=/file1024][name=security.capability][value=(nil)][size=0]
[DBG] s3fs.cpp:check_parent_object_access(657): [path=/file1024]
[DBG] s3fs.cpp:check_object_access(551): [path=/]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/]
[DBG] s3fs.cpp:get_object_attribute(405): [path=/file1024]
[DBG] cache.cpp:GetStat(171): stat cache hit [path=/file1024][time=1478237615][hit count=2]
[DBG] s3fs.cpp:s3fs_write(2094): [path=/file1024][size=1024][offset=0][fd=7]
[DBG] fdcache.cpp:ExistOpen(1860): [path=/file1024][fd=7][ignore_existfd=false]
[DBG] fdcache.cpp:Open(1809): [path=/file1024][size=-1][time=-1]
[DBG] fdcache.cpp:Dup(695): [path=/file1024][fd=7][refcnt=2]
[DBG] fdcache.cpp:Write(1478): [path=/file1024][fd=7][offset=0][size=1024]
Segmentation fault
2016-11-04 13:40:34
