Forum Linux.debian/ubuntu - Problem mounting a glusterfs volume after reboot

Posted by  . License CC By-SA.
Tags:
31 Oct. 2016

The mount works right after I create the volumes, but after a reboot there is no way to mount them again, short of deleting everything and rebuilding the volumes.
I have this problem on a Raspberry Pi running Raspbian Jessie (3.8.4, backport) AND on Xubuntu 16.04 x64 (3.7.16), for local mounts (127.0.0.1).
The disks are properly mounted, of course, and the volumes are started as root by a boot script with the command "sudo gluster volume start localPiGluster1".
The command "sudo glusterfs --volfile-server=127.0.0.1:/localPiGluster1 /media/localPiGluster1" returns nothing (as if it had worked, but the mount is not available), and "sudo mount -t glusterfs 127.0.0.1:/localPiGluster1 /media/localPiGluster1" returns "Mount failed. Please check the log file for more details."
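
For reference, the FUSE client can be asked to log more verbosely; a minimal sketch (assuming the standard --log-level and --log-file options of the glusterfs client, and a hypothetical log path):

sudo glusterfs --volfile-server=127.0.0.1 --volfile-id=/localPiGluster1 \
     --log-level=DEBUG --log-file=/tmp/localPiGluster1-mount.log \
     /media/localPiGluster1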

If someone has a solution, that would be really cool.

More info:

sudo gluster volume status

Status of volume: localPiGluster1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick PiGluster1:/media/Seagate3To1/gfs     N/A       N/A        N       N/A  
Brick PiGluster1:/media/HDDrive1500Go/gfs   N/A       N/A        N       N/A  
NFS Server on localhost                     N/A       N/A        N       N/A  

Task Status of Volume localPiGluster1
------------------------------------------------------------------------------
There are no active volume tasks

sudo tail -f /var/log/glusterfs/media-localPiGluster1.log

[2016-10-31 16:02:13.777789] I [MSGID: 100030] [glusterfsd.c:2408:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.4 (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=/localPiGluster1 /media/localPiGluster1)
[2016-10-31 16:02:13.834520] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-10-31 16:02:13.860436] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2016-10-31 16:02:13.866192] I [MSGID: 114020] [client.c:2356:notify] 0-localPiGluster1-client-0: parent translators are ready, attempting connect on transport
[2016-10-31 16:02:13.871613] I [MSGID: 114020] [client.c:2356:notify] 0-localPiGluster1-client-1: parent translators are ready, attempting connect on transport
[2016-10-31 16:02:13.873770] E [MSGID: 114058] [client-handshake.c:1533:client_query_portmap_cbk] 0-localPiGluster1-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2016-10-31 16:02:13.874690] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-localPiGluster1-client-0: disconnected from localPiGluster1-client-0. Client process will keep trying to connect to glusterd until brick's port is available
Final graph:
+------------------------------------------------------------------------------+
  1: volume localPiGluster1-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host PiGluster1
  5:     option remote-subvolume /media/Seagate3To1/gfs
  6:     option transport-type socket
  7:     option transport.address-family inet
  8:     option username blablablabla-censuré-blabalabla-censuré
  9:     option password blablablabla-censuré-blabalabla-censuré
 10:     option send-gids true
 11: end-volume
 12:  
 13: volume localPiGluster1-client-1
 14:     type protocol/client
 15:     option ping-timeout 42
 16:     option remote-host PiGluster1
 17:     option remote-subvolume /media/HDDrive1500Go/gfs
 18:     option transport-type socket
 19:     option transport.address-family inet
 20:     option username blablablabla-censuré-blabalabla-censuré
 21:     option password blablablabla-censuré-blabalabla-censuré
 22:     option send-gids true
 23: end-volume
 24:  
 25: volume localPiGluster1-dht
 26:     type cluster/distribute
 27:     option lock-migration off
 28:     subvolumes localPiGluster1-client-0 localPiGluster1-client-1
 29: end-volume
 30:  
 31: volume localPiGluster1-write-behind
 32:     type performance/write-behind
 33:     subvolumes localPiGluster1-dht
 34: end-volume
 35:  
 36: volume localPiGluster1-read-ahead
 37:     type performance/read-ahead
 38:     subvolumes localPiGluster1-write-behind
 39: end-volume
 40:  
 41: volume localPiGluster1-io-cache
 42:     type performance/io-cache
 43:     subvolumes localPiGluster1-read-ahead
 44: end-volume
 45:  
 46: volume localPiGluster1-quick-read
 47:     type performance/quick-read
 48:     subvolumes localPiGluster1-io-cache
 49: end-volume
 50:  
 51: volume localPiGluster1-open-behind
 52:     type performance/open-behind
 53:     subvolumes localPiGluster1-quick-read
 54: end-volume
 55:  
 56: volume localPiGluster1-md-cache
 57:     type performance/md-cache
 58:     subvolumes localPiGluster1-open-behind
 59: end-volume
 60:  
 61: volume localPiGluster1
 62:     type debug/io-stats
 63:     option log-level INFO
 64:     option latency-measurement off
 65:     option count-fop-hits off
 66:     subvolumes localPiGluster1-md-cache
 67: end-volume
 68:  
 69: volume meta-autoload
 70:     type meta
 71:     subvolumes localPiGluster1
 72: end-volume
 73:  
+------------------------------------------------------------------------------+
[2016-10-31 16:02:13.883060] E [MSGID: 114058] [client-handshake.c:1533:client_query_portmap_cbk] 0-localPiGluster1-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2016-10-31 16:02:13.883677] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-localPiGluster1-client-1: disconnected from localPiGluster1-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2016-10-31 16:02:13.901385] I [fuse-bridge.c:5241:fuse_graph_setup] 0-fuse: switched to graph 0
[2016-10-31 16:02:13.903203] I [fuse-bridge.c:4153:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23
[2016-10-31 16:02:13.904963] E [MSGID: 101172] [dht-helper.c:1666:dht_inode_ctx_time_update] 0-localPiGluster1-dht: invalid argument: inode [Invalid argument]
[2016-10-31 16:02:13.906544] W [fuse-bridge.c:767:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
[2016-10-31 16:02:13.973935] I [fuse-bridge.c:5082:fuse_thread_proc] 0-fuse: unmounting /media/localPiGluster1
The message "E [MSGID: 101172] [dht-helper.c:1666:dht_inode_ctx_time_update] 0-localPiGluster1-dht: invalid argument: inode [Invalid argument]" repeated 3 times between [2016-10-31 16:02:13.904963] and [2016-10-31 16:02:13.920165]
[2016-10-31 16:02:13.976259] W [MSGID: 100032] [glusterfsd.c:1286:cleanup_and_exit] 0-: received signum (15), shutting down
[2016-10-31 16:02:13.976646] I [fuse-bridge.c:5793:fini] 0-fuse: Unmounting '/media/localPiGluster1'.

sudo gluster volume info

Volume Name: localPiGluster1
Type: Distribute
Volume ID: blablablabla-censuré-blabalabla-censuré
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: PiGluster1:/media/Seagate3To1/gfs
Brick2: PiGluster1:/media/HDDrive1500Go/gfs
Options Reconfigured:
auth.allow: 127.0.*.*
transport.address-family: inet

restarting the brick (gluster volume stop|start VolumeName)
sudo tail -f /var/log/glusterfs/bricks/media-Seagate3To1-gfs.log

[2016-10-31 18:43:04.915413] I [MSGID: 100030] [glusterfsd.c:2408:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.4 (args: /usr/sbin/glusterfsd -s PiGluster1 --volfile-id localPiGluster1.PiGluster1.media-Seagate3To1-gfs -p /var/lib/glusterd/vols/localPiGluster1/run/PiGluster1-media-Seagate3To1-gfs.pid -S /var/run/gluster/c52a146c8c5fc0748aac6ed4a00178b3.socket --brick-name /media/Seagate3To1/gfs -l /var/log/glusterfs/bricks/media-Seagate3To1-gfs.log --xlator-option *-posix.glusterd-uuid=blablablabla-censuré-blabalabla-censuré --brick-port 49156 --xlator-option localPiGluster1-server.listen-port=49156)
[2016-10-31 18:43:04.965363] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-10-31 18:43:05.143250] I [MSGID: 101173] [graph.c:269:gf_add_cmdline_options] 0-localPiGluster1-server: adding option 'listen-port' for volume 'localPiGluster1-server' with value '49156'
[2016-10-31 18:43:05.143682] I [MSGID: 101173] [graph.c:269:gf_add_cmdline_options] 0-localPiGluster1-posix: adding option 'glusterd-uuid' for volume 'localPiGluster1-posix' with value 'blablablabla-censuré-blabalabla-censuré'
[2016-10-31 18:43:05.146279] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2016-10-31 18:43:05.146373] I [MSGID: 115034] [server.c:398:_check_for_auth_option] 0-localPiGluster1-decompounder: skip format check for non-addr auth option auth.login./media/Seagate3To1/gfs.allow
[2016-10-31 18:43:05.146932] I [MSGID: 115034] [server.c:398:_check_for_auth_option] 0-localPiGluster1-decompounder: skip format check for non-addr auth option auth.login.3805b7b6-7421-4ab1-9201-0c7140d10e60.password
[2016-10-31 18:43:05.156324] I [rpcsvc.c:2199:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2016-10-31 18:43:05.157696] W [MSGID: 101002] [options.c:954:xl_opt_validate] 0-localPiGluster1-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2016-10-31 18:43:05.185722] I [MSGID: 121050] [ctr-helper.c:259:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2016-10-31 18:43:05.186009] W [MSGID: 101105] [gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-localPiGluster1-changetimerecorder: Failed to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2016-10-31 18:43:05.186180] W [MSGID: 101105] [gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-localPiGluster1-changetimerecorder: Failed to retrieve sql-db-journalmode from params.Assigning default value: wal
[2016-10-31 18:43:05.186345] W [MSGID: 101105] [gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-localPiGluster1-changetimerecorder: Failed to retrieve sql-db-sync from params.Assigning default value: off
[2016-10-31 18:43:05.186494] W [MSGID: 101105] [gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-localPiGluster1-changetimerecorder: Failed to retrieve sql-db-autovacuum from params.Assigning default value: none
[2016-10-31 18:43:05.210820] I [trash.c:2414:init] 0-localPiGluster1-trash: no option specified for 'eliminate', using NULL
[2016-10-31 18:43:05.216691] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-localPiGluster1-server: option 'rpc-auth.auth-glusterfs' is not recognized
[2016-10-31 18:43:05.217177] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-localPiGluster1-server: option 'rpc-auth.auth-unix' is not recognized
[2016-10-31 18:43:05.217572] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-localPiGluster1-server: option 'rpc-auth.auth-null' is not recognized
[2016-10-31 18:43:05.218182] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-localPiGluster1-server: option 'auth-path' is not recognized
[2016-10-31 18:43:05.218540] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-localPiGluster1-quota: option 'timeout' is not recognized
[2016-10-31 18:43:05.219256] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-localPiGluster1-trash: option 'brick-path' is not recognized
[2016-10-31 18:43:05.237311] W [MSGID: 113026] [posix.c:1487:posix_mkdir] 0-localPiGluster1-posix: mkdir (/.trashcan/): gfid (00000000-0000-0000-0000-000000000005) is already associated with directory (/media/Seagate3To1/gfs/.glusterfs/00/00/00000000-0000-0000-0000-000000000001/.trashcan). Hence, both directories will share same gfid and this can lead to inconsistencies.
[2016-10-31 18:43:05.237613] E [MSGID: 113027] [posix.c:1594:posix_mkdir] 0-localPiGluster1-posix: mkdir of /media/Seagate3To1/gfs/.trashcan/ failed [File exists]
[2016-10-31 18:43:05.240161] W [MSGID: 113026] [posix.c:1487:posix_mkdir] 0-localPiGluster1-posix: mkdir (/.trashcan/internal_op): gfid (00000000-0000-0000-0000-000000000006) is already associated with directory (/media/Seagate3To1/gfs/.glusterfs/00/00/00000000-0000-0000-0000-000000000005/internal_op). Hence, both directories will share same gfid and this can lead to inconsistencies.
[2016-10-31 18:43:05.240410] E [MSGID: 113027] [posix.c:1594:posix_mkdir] 0-localPiGluster1-posix: mkdir of /media/Seagate3To1/gfs/.trashcan/internal_op failed [File exists]
Final graph:
+------------------------------------------------------------------------------+

[lots and lots of info about the brick's options]

138: volume localPiGluster1-server
139:     type protocol/server
140:     option transport.socket.listen-port 49156
141:     option rpc-auth.auth-glusterfs on
142:     option rpc-auth.auth-unix on
143:     option rpc-auth.auth-null on
144:     option rpc-auth-allow-insecure on
145:     option transport-type tcp
146:     option transport.address-family inet
147:     option auth.login./media/Seagate3To1/gfs.allow blablablabla-censuré-blabalabla-censuré
148:     option auth.login.blablablabla-censuré-blabalabla-censuré.password blablablabla-censuré-blabalabla-censuré
149:     option auth-path /media/Seagate3To1/gfs
150:     option auth.addr./media/Seagate3To1/gfs.allow 127.0.*.*
151:     subvolumes localPiGluster1-decompounder
152: end-volume
  • # My leads, without knowing anything about glusterfs

    Posted by  . Rated 4.

    Do things, and the diagnostics, in order.

    The mount says it failed and tells you to go look at the logs.

    The log says:

    [2016-10-31 16:02:13.873770] E [MSGID: 114058] [client-handshake.c:1533:client_query_portmap_cbk] 0-localPiGluster1-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.

    and suggests running gluster volume status to check whether the brick process is running,

    which you indeed run:

    sudo gluster volume status

    but which you don't seem to read:

    • every entry shows "Online = N"
    • and the status command concludes with: There are no active volume tasks

    The log also tells you it will keep trying to connect to glusterd:

    Client process will keep trying to connect to glusterd until brick's port is available

    So the question would be whether that glusterd is actually started at boot, before trying to do anything else…
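
    A quick way to verify that, as a sketch (assuming systemd; the unit is typically named glusterfs-server by the Debian/Ubuntu packages and glusterd elsewhere):

    sudo systemctl status glusterfs-server      # or: sudo systemctl status glusterd
    sudo systemctl is-enabled glusterfs-server  # is it configured to start at boot?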

    • [^] # Re: My leads, without knowing anything about glusterfs

      Posted by  . Rated 1.

      Thanks for your help.
      Yes, glusterd does seem to be running (command run right after a reboot):

      ps -aux | grep "gluster" | grep -v "grep"

      root       528  1.8  1.6  90452 15476 ?        Ssl  18:14   0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
      

      every entry shows "Online = N"
      and the status command concludes with: There are no active volume tasks

      I saw that, but it didn't help me.
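
      For what it's worth, glusterd is only the management daemon; each brick is served by its own glusterfsd process (as seen in the brick log), so a complementary check, sketched here, would be:

      pgrep -a glusterfsd   # should list one glusterfsd per brick that is actually running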


      • [^] # Re: My leads, without knowing anything about glusterfs

        Posted by  . Rated 1.

        I ran a port scan and nothing seems to show up (gluster uses 111, 24007, 24008 and one port per brick starting at 49152):

        sudo nmap 127.0.0.1 -sS -sU
        
        Starting Nmap 6.47 ( http://nmap.org ) at 2016-10-31 18:26 UTC
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.000085s latency).
        Not shown: 1995 closed ports
        PORT     STATE         SERVICE
        22/tcp   open          ssh
        25/tcp   open          smtp
        68/udp   open|filtered dhcpc
        123/udp  open          ntp
        5353/udp open|filtered zeroconf
        


        • [^] # Re: My leads, without knowing anything about glusterfs

          Posted by  . Rated 2.

          Client process will keep trying to connect to glusterd until brick's port is available

          Then you need to go read the documentation on "how to start a brick".

          • [^] # Re: My leads, without knowing anything about glusterfs

            Posted by  . Rated 1. Last edited on 01 November 2016 at 11:38.

            J"ai reposté un fichier log (/var/log/glusterfs/bricks/media-Seagate3To1-gfs.log) montrant le redemarrage d'un bricks. Dedans on peut y voir le port utilisé (--brick-port 49156).
            Mais le port utilisé n'est ni signalé dans volume status (port N/A) ni dans nmap.
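
            Another way to check locally whether anything is actually listening on that brick port would be, as a sketch (assuming ss from iproute2 is available):

            sudo ss -tlnp | grep -E '49156|gluster'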


  • # Resolved

    Posted by  . Rated 1.

    Well, it's resolved: you have to start the volume (or even restart it, at worst) adding force --mode=script at the end of the command.

    To start the volume from a script:

    sudo gluster volume start monVolume force --mode=script
    

    To stop the volume from a script:

    sudo gluster volume stop monVolume force --mode=script
    

    Then wait a little while.

    My script looks like this:

    #!/bin/bash
    # Only proceed if the script was invoked through sudo (so that we are root)
    if [ ! "$SUDO_USER" ]; then
        exit 0
    fi
    sleep 10   # small delay so the disks have time to be ready
    #sudo gluster volume stop localPiGluster1 force --mode=script
    gluster volume start localPiGluster1 force --mode=script
    sleep 1
    mount -t glusterfs 127.0.0.1:/localPiGluster1 /media/localPiGluster1
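
    For the record, the mount step at the end could also be delegated to /etc/fstab once glusterd and the volume are up; a sketch of such an entry (assuming the standard glusterfs mount helper):

    127.0.0.1:/localPiGluster1 /media/localPiGluster1 glusterfs defaults,_netdev 0 0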

