
This repo contains the source of official Gluster documentation rendered at https://docs.gluster.org

License: MIT License


glusterdocs's Introduction

glusterdocs

Source code for the Gluster documentation: http://docs.gluster.org/

Important Note: This repo had its git history re-written on 19 May 2016. Please create a fresh fork or clone if you have an older local clone.

Building the docs

If you are on EPEL 7 or Fedora, the first thing you will need to do is install mkdocs, with the following command:

# sudo yum install mkdocs

For Fedora 30+ (run the following as root):

# dnf install python-pip
# pip install -r requirements.txt

Then run mkdocs from the root of the repository:

$ mkdocs build

If you see an error about docs_dir when using recent versions of mkdocs, try the additional steps below:

$ cp ./mkdocs.yml ../
$ cd ..

Edit the following entry in the copied mkdocs.yml file:

docs_dir: ./glusterdocs/

Then run mkdocs again:

$ mkdocs build

The result will be in the site/ subdirectory, in HTML.
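To preview the rendered docs locally, mkdocs also ships a development server (standard mkdocs behavior, not specific to this repo):

$ mkdocs serve

This serves the site on http://127.0.0.1:8000/ and rebuilds on changes.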

Building the docs in Docker

Included are a Makefile and a Dockerfile, which enable you to build the docs inside Docker without installing any dependencies on your system.

Simply run the following command to compile the docs:

make

This Makefile recipe builds a Docker image containing the dependencies required and runs mkdocs inside the built image, taking care to run the container as the current uid and gid so that your user has ownership of the results in the ./site directory.
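Roughly, the recipe amounts to something like the following sketch (the image tag and mount paths used by the actual Makefile are assumptions):

$ docker build -t glusterdocs-builder .
$ docker run --rm -u "$(id -u):$(id -g)" -v "$PWD:/docs" -w /docs glusterdocs-builder mkdocs build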


glusterdocs's Issues

Upgrade guides: include guidance for gfapi based applications running on remote systems

The upgrade guide does not specify how to handle gfapi-based applications running on remote systems. I think more clarity on ordering in this case would remove some guesswork.

upgrading gfapi clients and op-version

  • trusted pool is updated
  • gfapi clients are updated
  • op-version is updated

Is the gfapi client aware of the new op-version? Does the client need to be restarted to make use of the new op-version features? What mechanism is in play?

upgrading within major version

General guidance here would be appreciated, even if it's as simple as "review the release notes"

3.9 (applying to 3.8 in my specific case)

If you are using gfapi based applications (qemu, NFS-Ganesha, Samba etc.) on the servers, please stop those applications too.

(Where "servers" is 'every storage server in your trusted storage pool'?)

I read this as stop gfapi on the current trusted pool member. Is this correct? Should this be more specific?

After the upgrade is complete on all servers, run the following command:
# gluster volume set all cluster.op-version 30900

(where "servers" is 'every storage server in your trusted storage pool'

After reading op-version, I think I know what the right thing to do is. A link might be helpful (like in the 3.10 guide).

3.10

Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)

Does this apply to the 3.9 guide? Should the other improvements be applied there as well?

NIT: the number of steps is not correct.

General structure

Perhaps it would make more sense to have a general guide? There could be links or callouts to considerations for specific versions.

[RFE]: sequence diagram (UML) for complex fops

I would like to see us come up with sequence diagrams like the one below:

afr lookup

This was done with a basic description language, like below:

title AFR lookup()

TopLayer->Client:lookup()
Client->S1:lookup() with xdata \nset to get details
note right of S1: file\nmissing
Client->(1)S2:lookup()
Client->(1)S3:lookup()
Client<--S2: success(0)
Client<--S3: success(0)
Client<--S1: failure(ENOENT)
Client->S1: Start Self-heal from (S2)
Client<--S1: Self-heal complete
TopLayer<-Client:lookup() success(0)

I know this can be a tedious effort for many operations right now, but if people try to picture the operations in a diagram like the one above, we would be in good shape to ask questions about what happens when something returns early, returns an error, etc.

@gluster/gluster-all

Credits: http://sequencediagram.org/

Explain how ondemand scrub feature works

Description

We need to extend the description of bitrot detection so that it contains details about on-demand scrubbing and how to review the results.

What is missing:

  • full description of on-demand scrub in the section of the admin guide where bit rot detection is discussed
  • there is no description of the scrub status command anywhere (see the sketch after this list)
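For reference, a sketch of the commands in question (syntax as found in recent releases; VOLNAME is a placeholder):

# gluster volume bitrot VOLNAME scrub ondemand
# gluster volume bitrot VOLNAME scrub status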


See Also

For additional context, see: https://bugzilla.redhat.com/show_bug.cgi?id=1516484#c7

Config Glusterfs with SSL/TLS

Hi,
Following the guide at https://gluster.readthedocs.io/en/latest/Administrator%20Guide/SSL/, I could not get glusterfs working normally, so I want to ask some questions.
Env:
On the glusterfs host:
root@minion3:/etc/ssl# ll
total 72
drwxr-xr-x   4 root root  4096 Sep 29 15:52 ./
drwxr-xr-x 136 root root 12288 Sep 29 15:37 ../
drwxr-xr-x   3 root root 24576 Sep  1 22:13 certs/
-rw-r--r--   1 root root   733 Sep 29 15:52 glusterfs.ca
-rw-r--r--   1 root root   887 Sep 29 15:52 glusterfs.key
-rw-r--r--   1 root root   733 Sep 29 15:52 glusterfs.pem
...
glusterfs version:
root@minion3:/etc/ssl# gluster --version
glusterfs 3.6.9 built on Mar  2 2016 18:21:17
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
gluster volume info:
root@minion3:/etc/ssl# gluster volume info

Volume Name: vol-ssl
Type: Distribute
Volume ID: 56a6148c-6abb-4ffa-abcb-35f2f0df4d42
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ***.***.***.***:/export/vol-ssl
Options Reconfigured:
client.ssl: on
server.ssl: on

On the client host, I copied the SSL/TLS-related files from the glusterfs server host:
root@minion2:/# ll /etc/ssl/
total 72
drwxr-xr-x   4 root root  4096 Sep 29 16:08 ./
drwxr-xr-x 135 root root 12288 Sep 29 19:44 ../
drwxr-xr-x   3 root root 24576 Sep  1 22:13 certs/
-rw-r--r--   1 root root   733 Sep 29 16:05 glusterfs.ca
-rw-r--r--   1 root root   887 Sep 29 16:08 glusterfs.key
-rw-r--r--   1 root root   733 Sep 29 16:07 glusterfs.pem
...
Executing the mount.glusterfs command then failed:
root@minion2:/# mount.glusterfs ***.***.***.***:/vol-ssl /mnt
Mount failed. Please check the log file for more details.

LOG
On the client host:
[2016-09-29 11:44:34.850226] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-vol-ssl-client-0: changing port to 49152 (from 0)
[2016-09-29 11:44:34.855100] E [socket.c:384:ssl_setup_connection] 0-vol-ssl-client-0: SSL connect error
[2016-09-29 11:44:34.855160] E [socket.c:2386:socket_poller] 0-vol-ssl-client-0: client setup failed
[2016-09-29 11:44:34.860610] I [fuse-bridge.c:5086:fuse_graph_setup] 0-fuse: switched to graph 0
[2016-09-29 11:44:34.861495] I [fuse-bridge.c:4015:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.23
[2016-09-29 11:44:34.861799] W [fuse-bridge.c:779:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
[2016-09-29 11:44:34.870410] I [fuse-bridge.c:4927:fuse_thread_proc] 0-fuse: unmounting /mnt
[2016-09-29 11:44:34.870949] W [glusterfsd.c:1211:cleanup_and_exit] (--> 0-: received signum (15), shutting down
[2016-09-29 11:44:34.870976] I [fuse-bridge.c:5607:fini] 0-fuse: Unmounting '/mnt'.
On the server host:
[2016-09-29 12:03:38.884791] E [socket.c:1972:__socket_read_frag] 0-rpc: wrong MSG-TYPE (-1476197977) received from ***.***.***.***:1021

The error happens when connecting to the server with SSL/TLS. I don't know why.
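Not part of the original report, but one common cause worth checking, assuming the model from the SSL admin guide where glusterfs.ca must contain the certificates of all peers concatenated (the .pem file names below are illustrative):

cat server.pem client.pem > /etc/ssl/glusterfs.ca
openssl verify -CAfile /etc/ssl/glusterfs.ca /etc/ssl/glusterfs.pem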

Wrong path for Debian repository

On https://docs.gluster.org/en/latest/Install-Guide/Install/ the following steps are listed to create the repository file:

DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
DEBARCH=$(dpkg --print-architecture)
echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main > /etc/apt/sources.list.d/gluster.list

On Debian 9 this creates the repository URL
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/9/amd64/apt

But this URL does not exist, it only reports error 404. The correct URL now seems to be
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/10/stretch/amd64/apt
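Based on that observed layout, the echo line would presumably need a codename segment inserted, something like the sketch below (not verified against the mirror; note that the observed URL also starts with 10 rather than Debian 9's VERSION_ID, so the variables may need further adjustment):

echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBVER}/${DEBARCH}/apt ${DEBVER} main > /etc/apt/sources.list.d/gluster.list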

Feature request: Article on methods for replicating VMs across multiple zones.

In production, it is recommended to replicate your VMs across multiple zones. For the purpose of this tutorial it is overkill, but if anyone is interested in this, please let us know, since we are always looking to write articles on the most requested features and questions.

Would love to see documented details of HA AWS deployment scenarios.

Linux Kernel Tuning: do you really suggest high vfs_cache_pressure?

http://docs.gluster.org/en/latest/Administrator%20Guide/Linux%20Kernel%20Tuning/
https://github.com/gluster/glusterdocs/blob/master/Administrator%20Guide/Linux%20Kernel%20Tuning.md

"With GlusterFS, many users with a lot of storage and many small files easily end up using a lot of RAM on the server side due to 'inode/dentry' caching, leading to decreased performance when the kernel keeps crawling through data-structures on a 40GB RAM system. Changing this value higher than 100 has helped many users to achieve fair caching and more responsiveness from the kernel."

Either you need to specify more info about the example case, or I have a hard time believing that this really makes sense with modern Linux kernels.

Increasing vfs_cache_pressure above 100 should only make sense if your system is lacking RAM. In that case, setting the value higher than 100 can reduce the amount of RAM used for dentry caching in return for worse overall performance. Sure, if you do not have enough RAM for your workload, it may help you avoid really bad worst-case behavior. However, you could avoid that problem AND still have good overall performance simply by adding real RAM. As an alternative, one could try adding some swap and setting swappiness near 100.
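For context, the knob under discussion is read and set like this (the value 120 is purely illustrative):

# cat /proc/sys/vm/vfs_cache_pressure
100
# sysctl -w vm.vfs_cache_pressure=120

Values above the default of 100 make the kernel reclaim dentry/inode cache more aggressively relative to page cache.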

Invalid mount option: volfile-max-fetch-attempts

Hi everyone,
I encountered an issue in the Setting Up Clients page, under the Mounting Volumes heading.
It states that volfile-max-fetch-attempts is a valid mount option, and gives an example showing it being used. However, when attempting to use it (on 3.10), you get an error stating that it's an invalid option. fetch-attempts on its own is valid, however, and is simply translated to volfile-max-fetch-attempts internally.
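To illustrate (server and volume names are placeholders):

# as documented -- rejected on 3.10 as an invalid option:
mount -t glusterfs -o volfile-max-fetch-attempts=3 server:/volname /mnt
# accepted -- translated internally to volfile-max-fetch-attempts:
mount -t glusterfs -o fetch-attempts=3 server:/volname /mnt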
Cheers

apache does not have access to gluster volume

Hi
I have set up glusterfs and mounted it on /var/www/html, which Apache can read from and write to. Everything seems right and correct, but I'm getting 403 Forbidden errors while accessing the website.

Would you please guide me on how to solve the issue?

Thank you

Why mount failed?

Hi, I have four nodes. One of them was used as a server and the others were clients (the gluster version is 3.1.2). Recently, one client unmounted within a minute after reboot. The following is the related information:

[2017-06-10 09:21:29.606750] W [io-stats.c:1644:init] share-lv: dangling volume. check volfile 
[2017-06-10 09:21:29.606803] W [dict.c:1205:data_to_str] dict: @data=(nil)
[2017-06-10 09:21:29.606817] W [dict.c:1205:data_to_str] dict: @data=(nil)
Given volfile:
+------------------------------------------------------------------------------+
  1: volume share-lv-client-0
  2:     type protocol/client
  3:     option remote-host pluto
  4:     option remote-subvolume /rd1/brick
  5:     option transport-type tcp
  6: end-volume
  7: 
  8: volume share-lv-write-behind
  9:     type performance/write-behind
 10:     subvolumes share-lv-client-0
 11: end-volume
 12: 
 13: volume share-lv-read-ahead
 14:     type performance/read-ahead
 15:     subvolumes share-lv-write-behind
 16: end-volume
 17: 
 18: volume share-lv-io-cache
 19:     type performance/io-cache
 20:     subvolumes share-lv-read-ahead
 21: end-volume
 22: 
 23: volume share-lv-quick-read
 24:     type performance/quick-read
 25:     subvolumes share-lv-io-cache
 26: end-volume
 27: 
 28: volume share-lv-stat-prefetch
 29:     type performance/stat-prefetch
 30:     subvolumes share-lv-quick-read
 31: end-volume
 32: 
 33: volume share-lv
 34:     type debug/io-stats
 35:     subvolumes share-lv-stat-prefetch
 36: end-volume

+------------------------------------------------------------------------------+
[2017-06-10 09:21:33.606286] I [client-handshake.c:1005:select_server_supported_programs] share-lv-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2017-06-10 09:21:33.606755] I [client-handshake.c:841:client_setvolume_cbk] share-lv-client-0: Connected to 162.105.134.222:24009, attached to remote volume '/rd1/brick'.
[2017-06-10 09:21:33.611234] I [fuse-bridge.c:2821:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.24
pending frames:
frame : type(1) op(READLINK)
frame : type(1) op(READLINK)

patchset: v3.1.1-64-gf2a067c
signal received: 11
time of crash: 2017-06-10 10:04:46
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.1.2
/lib64/libc.so.6(+0x33370)[0x7ff6c183b370]
I then updated the version to 3.6.5 with emerge. However, it now fails to mount:
mount -t glusterfs pluto:/share-lv /mnt/share
Mount failed. Please check the log file for more details.

(1) Looking at the log, I found:

tail /var/log/glusterfs/mnt-share.log
[2017-06-11 15:35:08.377505] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.5 (args: /usr/sbin/glusterfs --volfile-server=pluto --volfile-id=/share-lv /mnt/share)
[2017-06-11 15:35:08.383616] E [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/share-lv)
[2017-06-11 15:35:08.383857] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
[2017-06-11 15:35:08.383885] I [fuse-bridge.c:5605:fini] 0-fuse: Unmounting '/mnt/share'.

(2) I then start glusterfs with debug:

/usr/sbin/glusterfs --debug --volfile-server=pluto --volfile-id=/share-lv /mnt/share
[2017-06-11 15:37:42.248970] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.5 (args: /usr/sbin/glusterfs --debug --volfile-server=pluto --volfile-id=/share-lv /mnt/share)
[2017-06-11 15:37:42.249190] D [MSGID: 0] [glusterfsd.c:409:set_fuse_mount_options] 0-glusterfsd: fopen-keep-cache mode 2
[2017-06-11 15:37:42.249213] D [MSGID: 0] [glusterfsd.c:463:set_fuse_mount_options] 0-glusterfsd: fuse direct io type 2
[2017-06-11 15:37:42.249229] D [MSGID: 0] [glusterfsd.c:487:set_fuse_mount_options] 0-glusterfsd: fuse no-root-squash mode 0
[2017-06-11 15:37:42.249289] D [options.c:1163:xlator_option_init_double] 0-fuse: option negative-timeout using set value 0.000000
[2017-06-11 15:37:42.249329] D [options.c:1160:xlator_option_init_bool] 0-fuse: option no-root-squash using set value disable
[2017-06-11 15:37:42.249641] D [logging.c:1763:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5
[2017-06-11 15:37:42.249903] D [rpc-clnt.c:972:rpc_clnt_connection_init] 0-glusterfs: defaulting frame-timeout to 30mins
[2017-06-11 15:37:42.249919] D [rpc-clnt.c:986:rpc_clnt_connection_init] 0-glusterfs: disable ping-timeout
[2017-06-11 15:37:42.249944] D [rpc-transport.c:262:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.6.5/rpc-transport/socket.so
[2017-06-11 15:37:42.252410] D [socket.c:3880:socket_init] 0-glusterfs: SSL support on the I/O path is NOT enabled
[2017-06-11 15:37:42.252432] D [socket.c:3883:socket_init] 0-glusterfs: SSL support for glusterd is NOT enabled
[2017-06-11 15:37:42.252439] D [socket.c:3900:socket_init] 0-glusterfs: using system polling thread
[2017-06-11 15:37:42.252460] D [rpc-clnt.c:1452:rpcclnt_cbk_program_register] 0-glusterfs: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1
[2017-06-11 15:37:42.253074] D [common-utils.c:253:gf_resolve_ip6] 0-resolver: returning ip-162.105.134.222 (port-24007) for hostname: pluto and port: 24007
[2017-06-11 15:37:42.253093] D [socket.c:2829:socket_fix_ssl_opts] 0-glusterfs: disabling SSL for portmapper connection
[2017-06-11 15:37:42.253563] D [rpc-clnt-ping.c:231:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning
[2017-06-11 15:37:42.253854] E [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/share-lv)
[2017-06-11 15:37:42.253878] D [logging.c:1740:gf_log_flush_extra_msgs] 0-logging-infra: Log buffer size reduced. About to flush 4 extra log messages
[2017-06-11 15:37:42.253899] D [logging.c:1743:gf_log_flush_extra_msgs] 0-logging-infra: Just flushed 4 extra log messages
[2017-06-11 15:37:42.254143] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
[2017-06-11 15:37:42.254162] D [glusterfsd-mgmt.c:2244:glusterfs_mgmt_pmap_signout] 0-fsd-mgmt: portmapper signout arguments not given
[2017-06-11 15:37:42.254178] I [fuse-bridge.c:5605:fini] 0-fuse: Unmounting '/mnt/share'.

(3) Checking the other client's log, I found the port is 24009, while above it is 24007.

tail /var/log/glusterfs/mnt-share.log 
32: 
 33: volume share-lv
 34:     type debug/io-stats
 35:     subvolumes share-lv-stat-prefetch
 36: end-volume

+------------------------------------------------------------------------------+
[2017-06-10 02:09:43.334108] I [client-handshake.c:1005:select_server_supported_programs] share-lv-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2017-06-10 02:09:43.334553] I [client-handshake.c:841:client_setvolume_cbk] share-lv-client-0: Connected to 162.105.134.222:24009, attached to remote volume '/rd1/brick'.
[2017-06-10 02:09:43.338079] I [fuse-bridge.c:2821:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.21
I had uninstalled glusterfs and re-installed it with these commands: "emerge -C sys-cluster/glusterfs; emerge -av sys-cluster/glusterfs". Now I hardly know what to do. Should I change the port (and how do I configure the port)? Or should I update all the old versions to 3.6.5?

It makes me crazy; any suggestion would be appreciated!

Thanks in advance!

"About op-version" section in the "Upgrading GlusterFS" chapter needs updates for 3.10

In 3.10 we introduced the option to fetch the maximum possible op-version that the cluster can be bumped to. This needs to be updated in this section of the guide.
Option: gluster volume get all cluster.max-op-version
Guide location: https://gluster.readthedocs.io/en/latest/Upgrade-Guide/op_version/

Further, checking the client op-version is also supported. From my understanding, we should also add that to this part of the guide, to state that a user can check the client op-version as well before making the decision to update the op-version of the cluster.
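For reference, a sketch of the commands involved (the first is named above; the second should exist in the same releases):

# gluster volume get all cluster.max-op-version
# gluster volume get all cluster.op-version

The first reports the highest op-version the cluster can be bumped to; the second reports the current cluster op-version.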

It would be nice to have a PR for this by Fri. 24th, 2017, before we announce the 3.10.0 release.

Architecture page does not reflect recommendation against striped volumes

The architecture page lists striped volumes, and variants involving striping, along with all other volume types.

Justification for using striped volumes is provided, which would of course lead any new user to believe it is a valid choice, and many may use them.

The architecture page should clearly indicate the reasons they are not recommended, if this is the opinion of the developers, so that users are not misled as to the viability and advantages of striped volumes.

https://joejulian.name/blog/should-i-use-stripe-on-glusterfs/
http://www.gluster.org/pipermail/gluster-devel/2016-August/050377.html

Add Web Hook

Please add a webhook for readthedocs service so that we need not build manually on each merge.

  • Click on Settings and select "Webhooks & services" from the left menu box.
  • Click on "Add service" button inside the "Services" section.
  • Type and select "ReadTheDocs" from drop down.
  • Finally check "active" and click on "Add service" button.

Installation documentation is wrong

This page has an error in the wget -O line. My Debian system returns an error.

https://gluster.readthedocs.io/en/latest/Install-Guide/Install/

wget -O - http://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -

converted 'http://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub' (ANSI_X3.4-1968) -> 'http://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub' (UTF-8)
--2017-03-01 08:04:00--  http://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
Resolving download.gluster.org (download.gluster.org)... 23.253.208.221, 2001:4801:7824:104:be76:4eff:fe10:23d8
Connecting to download.gluster.org (download.gluster.org)|23.253.208.221|:80...
connected.
HTTP request sent, awaiting response... 302 Found
Location: https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub [following]
converted 'https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub' (ANSI_X3.4-1968) -> 'https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub' (UTF-8)
--2017-03-01 08:04:00--  https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
Connecting to download.gluster.org (download.gluster.org)|23.253.208.221|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-03-01 08:04:01 ERROR 404: Not Found.

gpg: no valid OpenPGP data found.

Gluster Setup for Anything but CentOS/RHEL Incorrect

With the migration of Gluster into the CentOS Storage SIG, there are no longer "public" packages for "other" RHEL variants besides RHEL or CentOS... even the instructions for installing on RHEL are unclear, as they require a CentOS release package to install.

Meaning, there's no clear installation method on pretty much any system but Cent at this point. This locks out:

  • ROSA Enterprise Linux Server
  • Scientific Linux
  • ClearOS
  • Oracle Linux

Just to name a few.

At this point, to install on Oracle Linux, I had to track down the centos-release package, track down the centos-release-storage-common package, track down the centos-release-gluster38 package, and track down what that package contains... just to get to the repo file. From there, the baseurl in that repo file is broken, requiring me to manually go out to Cent's archives and browse for the URL.

Hand-waving your documentation with "Hey, if it's a RHEL clone, just go do what Cent tells you" as your installation instructions is just ridiculously short-sighted. Especially when Cent moves the repository into what is apparently a "CentOS-only format".

The repo is NOT an "open" repository in the sense that you can install the repo file via a downloadable repo package from the web, unlike standalone repositories like Docker and EPEL. It is VERY MUCH a CentOS-only repository with no "easy" installation method unless you are using CentOS (only) as your base OS. The RPM centos-release-gluster38 relies upon centos-release-storage-common, which itself relies upon centos-release to install. centos-release will conflict with ANY release package on ANY OS distribution except CentOS, meaning there is no RPM-based way to get hold of the repo file to install gluster.

If Gluster intends to be adopted by a broader enterprise community than "RHEL users" only, this kind of hand-waving documentation and tight integration with specific OS distributions needs to be strongly looked at and backed out.

The LEAST you guys could do would be to provide the proper repo spec and a location to obtain the GPG key (had to track that down myself also) directly in the documentation, instead of assuming every business organization in the world is using CentOS.
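Something like the following would go a long way (a sketch only: the baseurl and gpgkey locations shown are my assumptions from browsing the mirrors, not official values):

cat > /etc/yum.repos.d/centos-gluster38.repo <<'EOF'
[centos-gluster38]
name=CentOS Storage SIG - GlusterFS 3.8
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.8/
gpgcheck=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage
EOF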

Wrong key location in Debian install instructions

The docs [here](https://github.com/gluster/glusterdocs/blob/master/Install-Guide/Install.md) say that you should add the following GPG key:

http://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub

Unfortunately, there's no such key file there. glusterfs/LATEST appears to just be a link to glusterfs/3.12/LATEST, but the key that you need to use is located at glusterfs/3.12/rsa.pub.
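Based on that finding, the working command would presumably be (the version path will change over time):

wget -O - https://download.gluster.org/pub/gluster/glusterfs/3.12/rsa.pub | apt-key add -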

Either the instructions should be updated or some sort of link or duplicate of the required key should be maintained.

HTTPS mixed-content in www.gluster.org

The Gluster website doesn't get labeled "secure" because a single image is loaded over HTTP:

http://gluster.wpengine.com/wp-content/themes/glusterTheme/Resources/Sprites/main.png

Add documentation with full configuration options

Hello,

Thanks for this really useful and performant FS. I have used it for 3 years and so far I have had only success with it.

I think we previously had documentation with a reference of all options: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Tuning_Volume_Options

It was useful to be able to read all the options in one place. In the current documentation I find many cookbooks, but no separate reference tables (except for the very good description of the types of volume). In my mind, 4 tables could be useful for many people, maybe fewer if the options are in the same table (see also the sketch after this list):

  • volume options reference
  • client options reference
  • volume types reference (already present)
  • monitoring options reference
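As a stopgap, the CLI itself can dump the volume options it knows about, which could seed such reference tables (output formatting varies by version):

# gluster volume set help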

Thanks for this software and your good work,

Best Regards,

Nicolas

Duplication of glossary/terminology overview documents

We have glossary with gluster terminology in both Quick Start Guide and Administrator Guide.

The problem is that the content of glossary/terminology overview itself is duplicated in 2 different separate files, with different formatting. This is hard to maintain and keep in sync.

Details

There are 2 files with the gluster glossary overview:

  • Quick-Start-Guide/Terminologies.md
  • Administrator Guide/glossary.md

The number of items explained in each file differs greatly:

$ grep '^###' Quick-Start-Guide/Terminologies.md | sed 's/### //' | wc -l
19
$ grep '^\*\*' Administrator\ Guide/glossary.md | sed 's/\*\*//g' | wc -l
45

There is lots of duplication, see the following list of terms which are explained in both files:

$ comm -12 <(grep '^\*\*' Administrator\ Guide/glossary.md | sed 's/\*\*//g' | sort) <(grep '^###' Quick-Start-Guide/Terminologies.md | sed 's/### //' | sort)
Brick
Client
Cluster
Distributed File System
FUSE
Geo-Replication
glusterd
Metadata
Namespace
POSIX
RAID
RRDNS
Server
Trusted Storage Pool
Userspace
Volume

Here is an example of how the explanation for Brick looks in each file. First is from Quick-Start-Guide/Terminologies.md:

### Brick

Brick is the basic unit of storage, represented by an export directory
on a server in the trusted storage pool.

While this is from Administrator Guide/glossary.md:

**Brick**
:   A Brick is the basic unit of storage in GlusterFS, represented by an export
    directory on a server in the trusted storage pool.
    A brick is expressed by combining a server with an export directory in the following format:

        `SERVER:EXPORT`
    For example:
        `myhostname:/exports/myexportdir/`

Expected state

There should be no duplication of information.

We could, for example, store the content in a single file and then include that single file in both guides without duplicating the content.
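One possible mechanism, purely as a sketch: the markdown-include extension for mkdocs (an assumption; the package would need to be added to requirements.txt):

# mkdocs.yml
markdown_extensions:
  - markdown_include.include

Each guide's page would then pull in the shared file with an include directive such as {!glossary-common.md!} (file name hypothetical).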

Rephrase this repo's description

Currently, it says:
'This repo had it's git history re-written on 19 May 2016. Please create a fresh fork or clone if you have an older local clone.'

Surely we can improve.

Build instructions given in the README seem incorrect

The build instructions presented in the README.md at the root of this project seem incorrect.

When building using,

$ mkdocs build

from the root of the repository I get the following errors,

$ mkdocs build
WARNING -  Config value: 'extra_css'. Warning: The following files have been automatically included in the documentation build and will be added to the HTML: site/css/highlight.css,site/css/theme.css,site/css/theme_extra.css. This behavior is deprecated. In version 1.0 and later they will need to be explicitly listed in the 'extra_css' config setting. 
WARNING -  Config value: 'extra_javascript'. Warning: The following files have been automatically included in the documentation build and will be added to the HTML: site/js/highlight.pack.js,site/js/jquery-2.1.1.min.js,site/js/modernizr-2.8.3.min.js,site/js/theme.js,site/mkdocs/js/lunr.min.js,site/mkdocs/js/mustache.min.js,site/mkdocs/js/require.js,site/mkdocs/js/search.js,site/mkdocs/js/text.js. This behavior is deprecated. In version 1.0 and later they will need to be explicitly listed in the 'extra_javascript' config setting. 
ERROR   -  Config value: 'docs_dir'. Error: The 'docs_dir' should not be the parent directory of the config file. Use a child directory instead so that the config file is a sibling of the config file. 
ERROR   -  Config value: 'site_dir'. Error: The 'site_dir' should not be within the 'docs_dir' as this leads to the build directory being copied into itself and duplicate nested files in the 'site_dir'.(site_dir: '.../glusterdocs/site', docs_dir: '.../glusterdocs') 

Aborted with 2 Configuration Errors!

Tool and platform versions that I use to build,

$ cat /etc/fedora-release 
Fedora release 24 (Twenty Four)
$ mkdocs --version
mkdocs, version 0.16.1
$ pip --version
pip 8.0.2 from /usr/lib/python2.7/site-packages (python 2.7)

I overcame this problem by,

$ cp ./mkdocs.yml ../
$ cd ..
# edit "docs_dir: ./glusterdocs/" in the copied mkdocs.yml 
$ mkdocs build
# site directory is now a sibling of glusterdocs directory

I am not entirely sure if what I did is right or if this is the intended way to do it; the errors seem to indicate that this is how it possibly needs to be done.

So, in case this is right, I request that README.md be updated with the same.

Can't open file, link count is 0, thanks!

Hi!

I had used gluster for a while, then I suddenly found some files couldn't be opened. After restarting gluster, it is the same. The following are more details:

head /mnt/share/share/tool/perl/util.c
head: cannot open '/mnt/share/share/tool/perl/util.c' for reading: No such file or directory

Then, I found the link count of the file is 0:

ls -alh /mnt/share/share/tool/perl/util.c
-r--r--r-- 0 share bgm 174K Apr 9 2016 /mnt/share/share/tool/perl/util.c

As /mnt/share is the mount of brick /rd1/brick, I checked whether the raw file is good:

head /rd1/brick/share/tool/perl/util.c
/* util.c
 *
 *    Copyright (C) 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001,
 *    2002, 2003, 2004, 2005, 2006, 2007, 2008 by Larry Wall and others
 *
 *    You may distribute under the terms of either the GNU General Public
 *    License or the Artistic License, as specified in the README file.
 */

The volume had been created as:

gluster volume create share-lv pluto:/rd1/brick force

Since some files can be accessed properly, I think there may be some problem with the gluster file system. Could you tell me what to do?

Any suggestion would be appreciated!
Best wishes!

Red Hat 7 Installation

For Red Hat/CentOS
RPMs for CentOS and other RHEL clones are available from the CentOS Storage SIG mirrors.

Is this still valid? I am encountering difficulties attempting to find glusterfs-server and glusterfs-client for RHEL 7.x.
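On CentOS itself, the usual path is via the SIG release package (a sketch; on RHEL proper this may run into the conflicts described in the "Gluster Setup for Anything but CentOS/RHEL Incorrect" issue above):

# yum install centos-release-gluster
# yum install glusterfs-server glusterfs-fuse

The first command pulls in the Storage SIG repo definition; the second installs the server and FUSE client bits.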

Section 'Setting up GlusterFS with SSL/TLS' of the Administrator Guide is outdated

The section 'Setting up GlusterFS with SSL/TLS' of the Administrator Guide

  • still refers to option "ssl.cert-depth", which got renamed to "ssl.certificate-depth" in the meantime (see the sketch after this list)
  • does not take into account the 2015 SSL ciphers discussion on the [Gluster-users] mailing list regarding best-practice settings for option "ssl.cipher-list" (apart from two suggestions, the thread did not answer which ciphers can be included/excluded -- especially the latter information should be part of the guide!)
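A sketch of the options in question (the volume name and cipher string are purely illustrative, which is exactly the gap this issue points out):

# gluster volume set myvolume ssl.certificate-depth 2
# gluster volume set myvolume ssl.cipher-list 'HIGH:!SSLv2'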

glusterfs mount

Dear maintainers:
Ever since I started using glusterfs, my glusterfs client's memory has been growing linearly. What's the solution? Thank you.

The detail of "Open file stats" not display

For example, to view the open fd count and maximum fd count on brick server:/export of test-volume and list the top 10 open calls:

gluster volume top test-volume open brick server:/export list-cnt 10

Brick: server:/export/dir1
Current open fd's: 34 Max open fd's: 209
But why is the detail of "Open file stats" not displayed, like the following:
==========Open file stats========

open call count     file name

2                   /clients/client0/~dmtmp/PARADOX/COURSES.DB
11                  /clients/client0/~dmtmp/PARADOX/ENROLL.DB
11                  /clients/client0/~dmtmp/PARADOX/STUDENTS.DB
10                  /clients/client0/~dmtmp/PWRPNT/TIPS.PPT
10                  /clients/client0/~dmtmp/PWRPNT/PCBENCHM.PPT
9                   /clients/client7/~dmtmp/PARADOX/STUDENTS.DB
9                   /clients/client1/~dmtmp/PARADOX/STUDENTS.DB
9                   /clients/client2/~dmtmp/PARADOX/STUDENTS.DB
9                   /clients/client0/~dmtmp/PARADOX/STUDENTS.DB
9                   /clients/client8/~dmtmp/PARADOX/STUDENTS.DB

GlusterFS in container - comprehensive document of different projects

There are multiple projects related to GlusterFS in container like:

  1. gluster-kuberenets - https://github.com/gluster/gluster-kubernetes
  2. gluster-containers - https://github.com/gluster/gluster-containers/
  3. heketi - https://github.com/heketi/heketi
  4. docker hub. - https://hub.docker.com/r/gluster/

There is documentation present in every project which explains the project and its usage.

We should document an overall view of all these projects and how they fit together to provide the GlusterFS container solution. This will help guide users to consume GlusterFS in the container world better.

Clarify documentation on setting up lvm for gluster snapshots

According to this link:

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/

The following sentence exists.

"It is recommended that only one VG must be created from one storage device."

Not only is this grammatically incorrect, it's extremely confusing. I think what it's trying to say is that a single device should not be in more than one volume group.

According to the Red Hat doc Formatting_and_Mounting_Bricks, there's the following note:

To avoid performance problems resulting from the sharing of the same thin pool, Red Hat Storage recommends that the LV for each Red Hat Storage brick have a dedicated thin pool of its own. As Red Hat Storage volume snapshots are created, snapshot LVs will get created and share the thin pool with the brick LV.

Assuming the above is correct, then it seems like the sentence from the open source documentation should be replaced with something like the following section:

For performance reasons, we recommend the following LVM layout:

  • a single physical device should not be allocated to more than one volume group.
  • you may have as many physical devices as you like allocated to a single volume group.
  • you should allocate the entire device to the volume group instead of just a partition of that device
  • a thin pool is a type of LVM logical volume that consists of a metadata logical volume and a data logical volume. When setting up LVM for gluster snapshots, each gluster brick must be stored on a thin pool. Only store a single brick on each thin pool; this means you should have a thin pool for each brick.
  • when the metadata LV is created for the thin pool, it should be at least 5% of the data LV's disk size.
  • no more than one brick should be placed on a single logical volume (a sketch of this layout follows below).
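A minimal sketch of that layout (device names and sizes are illustrative assumptions):

pvcreate /dev/sdb                                                         # whole device, no partition
vgcreate vg_bricks /dev/sdb                                               # one device in one VG
lvcreate -L 100G --poolmetadatasize 5G --thinpool vg_bricks/brick1_pool   # dedicated thin pool per brick
lvcreate -V 100G --thin -n brick1_lv vg_bricks/brick1_pool                # single brick LV in that pool
mkfs.xfs /dev/vg_bricks/brick1_lv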

In addition, the open source documentation says this:

"The device name and the alignment value will vary based on the device you are using."

Many people will have no idea what that means. This Red Hat documentation does an excellent job of explaining exactly what it means.

So could we either incorporate that information into the open source docs or, alternatively, link to the Red Hat docs in that note?

SSL Cert expired

Hi,

it looks like your Let's Encrypt cert expired yesterday.

Greetings, Sascha
