Discussion:
[Ocfs2-users] ceph osd mounting issue with ocfs2 file system
gjprabu
2015-07-30 09:57:15 UTC
Hi All,

We are using Ceph with two OSDs and three clients. The clients mount the shared RBD image with the OCFS2 file system. When I start mounting, only two of the clients mount properly; the third client gives the errors below. Sometimes the third client does mount, but data does not sync to it.

mount /dev/rbd/rbd/integdownloads /soho/build/downloads

mount.ocfs2: Invalid argument while mounting /dev/rbd0 on /soho/build/downloads. Check 'dmesg' for more information on this error.

dmesg

[1280548.676688] (mount.ocfs2,1807,4):dlm_send_nodeinfo:1294 ERROR: node mismatch -22, node 0
[1280548.676766] (mount.ocfs2,1807,4):dlm_try_to_join_domain:1681 ERROR: status = -22
[1280548.677278] (mount.ocfs2,1807,8):dlm_join_domain:1950 ERROR: status = -22
[1280548.677443] (mount.ocfs2,1807,8):dlm_register_domain:2210 ERROR: status = -22
[1280548.677541] (mount.ocfs2,1807,8):o2cb_cluster_connect:368 ERROR: status = -22
[1280548.677602] (mount.ocfs2,1807,8):ocfs2_dlm_init:2988 ERROR: status = -22
[1280548.677703] (mount.ocfs2,1807,8):ocfs2_mount_volume:1864 ERROR: status = -22
[1280548.677800] ocfs2: Unmounting device (252,0) on (node 0)
[1280548.677808] (mount.ocfs2,1807,8):ocfs2_fill_super:1238 ERROR: status = -22

OCFS2 configuration:

cluster:
        node_count = 3
        heartbeat_mode = local
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.112.192
        number = 0
        name = integ-hm5
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.113.42
        number = 1
        name = integ-soho
        cluster = ocfs2

node:
        ip_port = 7778
        ip_address = 192.168.112.115
        number = 2
        name = integ-hm2
        cluster = ocfs2

Version: o2cb 1.8.0

OS: CentOS 7, 64-bit (kernel 3.18.16)
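For readers hitting the same error: "node mismatch -22" from dlm_send_nodeinfo is generally a sign that the joining node's view of the cluster nodes differs from what the nodes already in the DLM domain have registered. A minimal check, assuming the standard o2cb configfs layout and the cluster name "ocfs2" from the configuration above (not part of the original report):

# Run on each of integ-hm5, integ-soho and integ-hm2 while o2cb is online.
# The directory name is the cluster name ("ocfs2" here, per cluster.conf).
ls /sys/kernel/config/cluster/ocfs2/node

# Node number, IP and port as the running stack sees them; these should
# agree on all three machines for the DLM join to succeed.
for n in /sys/kernel/config/cluster/ocfs2/node/*; do
    echo "$(basename "$n"): $(cat "$n/num") $(cat "$n/ipv4_address"):$(cat "$n/ipv4_port")"
done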





Regards

Prabu GJ
Guozhonghua
2015-07-30 10:59:18 UTC
Hi,

The node numbers should begin with 1 in cluster.conf; you can try that. I always start with 1.

You should also check the directory with: ls -al /sys/kernel/config/cluster/pool/node ; all three nodes should show the same node directory information.

If they do not match, unmount the file system on all nodes, copy one /etc/ocfs2/cluster.conf to the other two nodes, and run:
service o2cb offline/unload ; service o2cb online ;
Then remount the disk and check the node information again.

The cluster.conf file should be identical on every node.
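Put together, the suggested recovery might look roughly like the following. This is a sketch only: the hostnames and mount point are taken from this thread, scp is just one way to distribute the file, and the configfs path uses the cluster name, which in Prabu's configuration is "ocfs2" rather than "pool".

# On every node: unmount the OCFS2 volume first.
umount /soho/build/downloads

# From the node whose cluster.conf is correct, copy it to the other two.
scp /etc/ocfs2/cluster.conf integ-soho:/etc/ocfs2/
scp /etc/ocfs2/cluster.conf integ-hm2:/etc/ocfs2/

# On every node: restart the o2cb stack so it rereads cluster.conf.
service o2cb offline
service o2cb unload
service o2cb online

# On every node: the registered node entries should now be identical.
ls -al /sys/kernel/config/cluster/ocfs2/node

# Remount and retest.
mount /dev/rbd/rbd/integdownloads /soho/build/downloads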

gjprabu
2015-07-30 14:34:54 UTC
Thanks, Guozhonghua. It's working.

Regards
Prabu
gjprabu
2015-09-01 11:23:13 UTC
Hi Team,

We are going to use Ceph with OCFS2 in production. My question is whether a 1 Gbit network will give adequate performance and throughput for 12 clients, or whether we need to move to faster networking.
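For a rough, back-of-the-envelope sense of scale only (an assumption about link sharing, not a measurement from this setup):

1 Gbit/s               ~ 125 MB/s raw, roughly 110 MB/s of usable payload after Ethernet/TCP overhead
110 MB/s / 12 clients  ~ 9 MB/s per client, if all twelve funnel through one such link at the same time

Whether that is enough depends entirely on the workload, so measuring with the real application traffic is the only reliable way to decide.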



Regards

Prabu



