Is there an installation guide for Ceph?

Source: 11-4 Shared Storage --- PV, PVC and StorageClass (Part 1).mp4

idefav

2019-10-30

Is there a Ceph installation document for the shared-storage section?


3 Answers

可汗

2019-10-30

# Ceph Preparation

```bash
# Configure a domestic yum mirror (Tsinghua) for Ceph Nautilus
# (quote the heredoc delimiter so the shell leaves $basearch for yum to expand)
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF

# Install ceph-deploy
yum install -y ceph-deploy

# Install time-synchronization packages
yum install -y ntp ntpdate ntp-doc

# Disable firewalld and SELinux
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
sed -i 's/enforcing/disabled/g' /etc/selinux/config

# Work around a Python packaging issue that causes "ImportError: No module named pkg_resources"
wget https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip --no-check-certificate
unzip distribute-0.7.3.zip
mv distribute-0.7.3 /usr/local/
cd /usr/local/distribute-0.7.3/
python setup.py install

# Set up passwordless SSH from the admin node to every Ceph node
ssh-keygen
ssh-copy-id root@ceph-n1
ssh-copy-id root@ceph-n2
ssh-copy-id root@ceph-n3

```
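
Before moving on, it can help to confirm the prerequisites actually took effect on every node. A minimal sanity check, assuming the hostnames ceph-n1 through ceph-n3 above resolve from the admin node:

```bash
# Passwordless SSH should work, SELinux should report Permissive or Disabled,
# and firewalld should report inactive on each node
for node in ceph-n1 ceph-n2 ceph-n3; do
  ssh root@$node 'hostname; getenforce; systemctl is-active firewalld'
done
```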


# Installing the Cluster

```bash
# If the installation fails, use the following commands to wipe it and start over
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*

# create a directory on your admin node for maintaining the configuration files and keys
# that ceph-deploy generates for your cluster.
mkdir ceph-cluster
cd ceph-cluster

# Create the cluster (ceph-n1 is the initial monitor node)
ceph-deploy new ceph-n1

# Install the Ceph packages on every node
ceph-deploy install ceph-n1 ceph-n2 ceph-n3

# Deploy the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial

# copy the configuration file and admin key to your admin node and your Ceph Nodes
ceph-deploy admin ceph-n1 ceph-n2 ceph-n3

# Deploy a manager daemon
ceph-deploy mgr create ceph-n1

# Add OSDs
ceph-deploy osd create --data /dev/sdb ceph-n2
ceph-deploy osd create --data /dev/sdb ceph-n3
...

# check cluster status
ceph -s

# Add Ceph Monitors
# Add the new nodes and the public network to the ceph.conf configuration file:
mon_initial_members = ceph-n1,ceph-n2,ceph-n3
mon_host = 192.168.193.161,192.168.193.162,192.168.193.163
public_network = 192.168.193.0/24

# Push the updated config to every node
ceph-deploy --overwrite-conf config push ceph-n1 ceph-n2 ceph-n3

# Add the new monitor (repeat for ceph-n3 as well)
ceph-deploy mon add ceph-n2

# add manager
ceph-deploy mgr create ceph-n2

# Object storage is served through radosgw (the RADOS object gateway), so bring up a gateway first
ceph-deploy rgw create ceph-n1
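
# If the gateway came up, it should answer HTTP on its default port 7480
# (a quick sanity check, assuming the default rgw frontend settings):
curl http://ceph-n1:7480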

# Time sync: use the ceph-deploy node as the cluster's NTP server
# Edit /etc/ntp.conf to point at Alibaba Cloud's time servers:
server time1.aliyun.com minpoll 3 maxpoll 4 iburst
server time2.aliyun.com minpoll 3 maxpoll 4 iburst
server time3.aliyun.com minpoll 3 maxpoll 4 iburst

# Enable and start the service
systemctl enable ntpd.service
systemctl start ntpd.service

# On every Ceph node, point /etc/ntp.conf at this NTP server, then start ntpd.service:
server ceph-deploy minpoll 3 maxpoll 4 iburst

# Enable and start the service
systemctl enable ntpd.service
systemctl start ntpd.service

# Verify time synchronization
ntpq -pn

# Store a test object
echo hello world > testfile.txt
ceph osd pool create mytest 64 64
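# Optionally tag the pool with an application; on Nautilus this avoids a
# POOL_APP_NOT_ENABLED health warning
ceph osd pool application enable mytest rados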
rados put test-object-1 testfile.txt --pool=mytest

# To verify that the Ceph Storage Cluster stored the object, execute the following:
rados -p mytest ls

# identify the object location:
ceph osd map mytest test-object-1

# remove the test object
rados rm test-object-1 --pool=mytest

# Delete the mytest pool. Ceph requires the pool name twice plus a confirmation
# flag, and the monitors must allow pool deletion:
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it

```
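
Since this thread comes from the PV/PVC/StorageClass lesson, the usual next step is to expose the new cluster to Kubernetes. A rough sketch, assuming a hypothetical pool name `kube`; the key printed below is what you would place in a Kubernetes Secret for an RBD-backed PV or StorageClass:

```bash
# Create an RBD pool for Kubernetes volumes (the name "kube" is an assumption)
ceph osd pool create kube 64 64
ceph osd pool application enable kube rbd

# Print the admin key base64-encoded, ready to paste into a Kubernetes Secret
ceph auth get-key client.admin | base64
```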


idefav

Original poster

2019-11-01

Thanks, I'll go try it out.


可汗

2019-10-30

The above is what I put together from my own installation.

