
Deploying a Greenplum Test Environment

WBOY · Original · 2016-06-07 16:00:26

This walkthrough sets up a test deployment on a Citrix virtualization environment, with three RHEL 6.4 hosts allocated.

1. Prepare the three hosts


Master: after creating it from the template, add one extra 20 GB disk (/dev/xvdb) and two extra NICs (eth1 and eth2)

Standby: after creating it from the template, add one extra 20 GB disk (/dev/xvdb) and two extra NICs (eth1 and eth2)

Segment01: after creating it from the template, add one extra 50 GB disk (/dev/xvdb) and two extra NICs (eth1 and eth2)

Network plan

            eth0 (external IP)         eth1             eth2
Master      192.168.9.123              172.16.10.101    172.16.11.101
Standby     192.168.9.124              172.16.10.102    172.16.11.102
Segment01   192.168.9.125 (optional)   172.16.10.1      172.16.11.1

Resources are limited in this test environment, so only three nodes are configured for now; Segment02, Segment03, ... may be added later as needed.

Change the hostnames

Set the hostnames of the Master, Standby, and Segment01 hosts to mdw, smdw, and sdw1 respectively.

How to change a hostname:

hostname <new-hostname>              # takes effect immediately
vi /etc/sysconfig/network            # set HOSTNAME=<new-hostname> so it persists across reboots
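For example, on the Master (a sketch; the sed one-liner is just shorthand for editing the file by hand):

hostname mdw
sed -i 's/^HOSTNAME=.*/HOSTNAME=mdw/' /etc/sysconfig/network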

Optional: helper scripts that make it easy to sync configuration between the nodes during setup (the helpers used throughout this article are sketched below).

export NODE_LIST='MDW SMDW SDW1'
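The cluster_run_all_nodes and cluster_copy_all_nodes helpers used throughout this article are never shown in it. A minimal sketch of what they might look like, assuming they simply loop over NODE_LIST with ssh/scp (you would put these in root's ~/.bashrc or similar):

# Run the given command on every node in NODE_LIST
cluster_run_all_nodes() {
    local node
    for node in $NODE_LIST; do
        ssh "$node" "$1"
    done
}

# Copy local file $1 to path $2 on every node in NODE_LIST
cluster_copy_all_nodes() {
    local node
    for node in $NODE_LIST; do
        scp "$1" "$node:$2"
    done
}

Hostname lookup via /etc/hosts is case-insensitive, so the uppercase names in NODE_LIST resolve to the lowercase entries below.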

vi /etc/hosts (temporary entries for now; the full set, including the interconnect aliases, is added in step 5)

192.168.9.123 mdw
192.168.9.124 smdw
192.168.9.125 sdw1

Configure passwordless SSH from the first node to itself and the other hosts

ssh-keygen -t rsa                    # accept the defaults; empty passphrase
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.123
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.124
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.125
cluster_run_all_nodes "hostname ; date"    # verify: every node answers without a password prompt

Disk layout

Greenplum recommends the XFS filesystem, so the xfsprogs dependency package must be installed on all nodes:
# rpm -ivh xfsprogs-3.1.1-10.el6.x86_64.rpm

Create a /data directory on every node as the mount point for the XFS filesystem:

mkdir /data

mkfs.xfs /dev/xvdb

(mkfs.xfs prints the new filesystem's geometry: allocation group count and size, block and inode sizes, internal log, and so on.)
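Before relying on the fstab entry below, you can sanity-check and mount the new filesystem by hand (a quick check that is not in the original article):

blkid /dev/xvdb        # should report TYPE="xfs"
mount -t xfs -o rw,noatime,inode64,allocsize=16m /dev/xvdb /data
df -h /data            # confirm it is mounted on /data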

vi /etc/fstab and add the following line:

/dev/xvdb /data xfs rw,noatime,inode64,allocsize=16m 1 1

2. Disable iptables and SELinux

cluster_run_all_nodes "hostname; service iptables stop"
cluster_run_all_nodes "hostname; chkconfig iptables off"
cluster_run_all_nodes "hostname; chkconfig ip6tables off"
cluster_run_all_nodes "hostname; chkconfig libvirtd off"
cluster_run_all_nodes "hostname; setenforce 0"
cluster_run_all_nodes "hostname; sestatus"
vi /etc/selinux/config               # set SELINUX=disabled
cluster_copy_all_nodes /etc/selinux/config /etc/selinux/

Note: these settings must be identical on all nodes. Because SSH trust was configured earlier, I sync them with the helper scripts; without that trust, you would have to configure each host by hand.

3. Set the recommended kernel parameters

vi /etc/sysctl.conf

kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
kernel.msgmni = 2048

vi /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
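These limits only apply to new login sessions, and on RHEL 6 the file /etc/security/limits.d/90-nproc.conf can override the nproc values set here, so check it if they do not stick. After logging in again, verify:

ulimit -n    # open files; expect 65536
ulimit -u    # max user processes; expect 131072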

Sync to all nodes:

cluster_copy_all_nodes /etc/sysctl.conf /etc/sysctl.conf
cluster_copy_all_nodes /etc/security/limits.conf /etc/security/limits.conf
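The article relies on the reboot below to load the kernel settings; to apply them immediately on every node you could also run (using the helper sketched earlier):

cluster_run_all_nodes "hostname; sysctl -p"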

Disk read-ahead and the deadline I/O scheduler

Add the following to /etc/rc.d/rc.local:

blockdev --setra 16385 /dev/xvdb
echo deadline > /sys/block/xvdb/queue/scheduler

cluster_copy_all_nodes /etc/rc.d/rc.local /etc/rc.d/rc.local

Note: after a reboot, run blockdev --getra /dev/xvdb to verify that the setting took effect.

Verify the locale on all nodes

cluster_run_all_nodes "hostname; echo \$LANG"    # escape $LANG so it expands on the remote node, not locally

Reboot all nodes and verify the changes took effect:

blockdev --getra /dev/xvdb
more /sys/block/xvdb/queue/scheduler
cluster_run_all_nodes "hostname; service iptables status"

4. Install on the Master

mkdir -p /data/soft

Upload greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.zip to the Master, then:

unzip greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.zip
/bin/bash greenplum-db-4.3.4.2-build-1-RHEL5-x86_64.bin

5. Install and configure Greenplum on all nodes
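The .bin installer in step 4 prompts for a license agreement and an install path; assuming it used the default and created the /usr/local/greenplum-db symlink, load the Greenplum environment on the Master before running the cluster utilities below:

source /usr/local/greenplum-db/greenplum_path.sh
which gpssh-exkeys    # the gp* admin utilities should now be on PATH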

Configure /etc/hosts

192.168.9.123 mdw
172.16.10.101 mdw-1
172.16.11.101 mdw-2
192.168.9.124 smdw
172.16.10.102 smdw-1
172.16.11.102 smdw-2
192.168.9.125 sdw1
172.16.10.1 sdw1-1
172.16.11.1 sdw1-2

Sync the /etc/hosts configuration:

cluster_copy_all_nodes /etc/hosts /etc/hosts

Configure the SSH trust that Greenplum needs
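The article breaks off here. In Greenplum 4.3 this step is normally done with gpssh-exkeys, along the lines of the following sketch (the hostfile path is hypothetical; it lists every hostname and interconnect alias from the /etc/hosts entries above):

cat > /data/soft/hostfile <<EOF
mdw
mdw-1
mdw-2
smdw
smdw-1
smdw-2
sdw1
sdw1-1
sdw1-2
EOF
gpssh-exkeys -f /data/soft/hostfile    # exchanges keys so every host trusts every other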
