Preliminary preparation
System requirements
Python 3
Systemd
Podman or Docker for running containers
Time synchronization (such as chrony or NTP)
LVM2 for provisioning storage devices
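On CentOS 7 the requirements above can be met roughly as follows (a minimal sketch: package names assume the stock CentOS repos, and Docker from the extras repo is used because the bootstrap output later shows /usr/bin/docker; Podman works just as well):
# time synchronization and LVM2
yum install -y chrony lvm2
systemctl enable --now chronyd
# container runtime
yum install -y docker
systemctl enable --now docker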
Configure hostnames: add the following entries to /etc/hosts on every node
10.10.101.10 ceph-01
10.10.101.11 ceph-02
10.10.101.12 ceph-03
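A sketch of how these entries might be applied, run once on every node (the hostnamectl argument changes to match the node):
hostnamectl set-hostname ceph-01
cat >> /etc/hosts <<EOF
10.10.101.10 ceph-01
10.10.101.11 ceph-02
10.10.101.12 ceph-03
EOF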
Update the system to the latest packages
yum update -y
Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
Disable the firewall
systemctl disable --now firewalld
Configure passwordless SSH between the nodes
ssh-keygen -f /root/.ssh/id_rsa -P ''
ssh-copy-id -o StrictHostKeyChecking=no root@10.10.101.10
ssh-copy-id -o StrictHostKeyChecking=no root@10.10.101.11
ssh-copy-id -o StrictHostKeyChecking=no root@10.10.101.12
Install Python 3
yum install -y python3
Curl-based installation
Use curl to fetch the standalone cephadm script for the desired release
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
In my case the download kept failing no matter what, so I simply copied the script contents by hand
Make the script executable
chmod +x cephadm
Install the packages that provide the cephadm command (this walkthrough deploys the Octopus release)
[root@ceph-01 ~]# ./cephadm add-repo --release octopus
[root@ceph-01 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph $basearch
baseurl=https://download.ceph.com/rpm-octopus/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch
baseurl=https://download.ceph.com/rpm-octopus/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-source]
name=Ceph SRPMS
baseurl=https://download.ceph.com/rpm-octopus/el7/SRPMS
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
# Switch the download source to the Aliyun mirror
[root@ceph-01 ~]# sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#' /etc/yum.repos.d/ceph.repo
[root@ceph-01 ~]# ./cephadm install
[root@ceph-01 ~]# which cephadm
/usr/sbin/cephadm
Bootstrap a new cluster
[root@ceph-01 ~]# cephadm bootstrap --mon-ip 10.10.101.10
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 5a0ef854-fb8c-11ed-b60e-005056b5e7bd
Verifying IP 10.10.101.10 port 3300 ...
Verifying IP 10.10.101.10 port 6789 ...
Mon IP 10.10.101.10 is in CIDR network 10.10.101.0/24
Pulling container image quay.io/ceph/ceph:v15...
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network...
Creating mgr...
Verifying port 9283 ...
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Wrote config to /etc/ceph/ceph.conf
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/10)...
mgr not available, waiting (2/10)...
mgr not available, waiting (3/10)...
mgr not available, waiting (4/10)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 5...
Mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost's authorized_keys...
Adding host ceph-01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling mgr prometheus module...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 13...
Mgr epoch 13 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
URL: https://ceph-01:8443/
User: admin
Password: pv56q6l7pe
You can access the Ceph CLI with:
sudo /usr/sbin/cephadm shell --fsid 5a0ef854-fb8c-11ed-b60e-005056b5e7bd -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
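If you would rather pick the dashboard credentials yourself, or skip the monitoring stack on a small test cluster, bootstrap accepts extra flags; a hedged example (check cephadm bootstrap --help on your version, the password below is made up):
cephadm bootstrap --mon-ip 10.10.101.10 \
  --initial-dashboard-user admin \
  --initial-dashboard-password 'MySecret123' \
  --skip-monitoring-stack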
Check the cluster status
[root@ceph-01 ~]# cephadm shell
Inferring fsid 5a0ef854-fb8c-11ed-b60e-005056b5e7bd
Inferring config /var/lib/ceph/5a0ef854-fb8c-11ed-b60e-005056b5e7bd/mon.ceph-01/config
Using recent ceph image quay.io/ceph/ceph@sha256:c08064dde4bba4e72a1f55d90ca32df9ef5aafab82efe2e0a0722444a5aaacca
[ceph: root@ceph-01 /]# ceph -s
cluster:
id: 5a0ef854-fb8c-11ed-b60e-005056b5e7bd
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum ceph-01 (age 5m)
mgr: ceph-01.tvpflp(active, since 4m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
[ceph: root@ceph-01 /]# ceph orch ps
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
alertmanager.ceph-01 ceph-01 running (2m) 40s ago 4m 0.20.0 quay.io/prometheus/alertmanager:v0.20.0 0881eb8f169f 8e1e36854d97
crash.ceph-01 ceph-01 running (4m) 40s ago 4m 15.2.17 quay.io/ceph/ceph:v15 93146564743f 593abfc7e78c
grafana.ceph-01 ceph-01 running (2m) 40s ago 3m 6.7.4 quay.io/ceph/ceph-grafana:6.7.4 557c83e11646 724b4e757e92
mgr.ceph-01.tvpflp ceph-01 running (5m) 40s ago 5m 15.2.17 quay.io/ceph/ceph:v15 93146564743f 52e36aeac870
mon.ceph-01 ceph-01 running (5m) 40s ago 6m 15.2.17 quay.io/ceph/ceph:v15 93146564743f ba8833732dd1
node-exporter.ceph-01 ceph-01 running (2m) 40s ago 3m 0.18.1 quay.io/prometheus/node-exporter:v0.18.1 e5a616e4b9cf 344d820ca236
prometheus.ceph-01 ceph-01 running (2m) 40s ago 2m 2.18.1 quay.io/prometheus/prometheus:v2.18.1 de242295e225 7fa3f4c02b18
[ceph: root@ceph-01 /]# ceph orch ps --daemon-type mon
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
mon.ceph-01 ceph-01 running (6m) 52s ago 6m 15.2.17 quay.io/ceph/ceph:v15 93146564743f ba8833732dd1
[root@ceph-01 ~]#
[root@ceph-01 ~]# ceph -v
ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)
Distribute the cluster's public SSH key to every node so the orchestrator can manage them
[root@ceph-01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph-02'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph-03'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph-01'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-01 ~]# cephadm shell
Inferring fsid 5a0ef854-fb8c-11ed-b60e-005056b5e7bd
Inferring config /var/lib/ceph/5a0ef854-fb8c-11ed-b60e-005056b5e7bd/mon.ceph-01/config
Using recent ceph image quay.io/ceph/ceph@sha256:c08064dde4bba4e72a1f55d90ca32df9ef5aafab82efe2e0a0722444a5aaacca
# Add the remaining hosts to the cluster so MON and MGR daemons can be scheduled on them
[ceph: root@ceph-01 /]# ceph orch host add ceph-02
Added host 'ceph-02'
[ceph: root@ceph-01 /]# ceph orch host add ceph-03
Added host 'ceph-03'
[ceph: root@ceph-01 /]# ceph orch ls
NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
alertmanager 1/1 9s ago 9m count:1 quay.io/prometheus/alertmanager:v0.20.0 0881eb8f169f
crash 1/3 9s ago 9m * quay.io/ceph/ceph:v15 93146564743f
grafana 1/1 9s ago 9m count:1 quay.io/ceph/ceph-grafana:6.7.4 557c83e11646
mgr 1/2 9s ago 9m count:2 quay.io/ceph/ceph:v15 93146564743f
mon 1/5 9s ago 9m count:5 quay.io/ceph/ceph:v15 93146564743f
node-exporter 1/3 9s ago 9m * quay.io/prometheus/node-exporter:v0.18.1 e5a616e4b9cf
prometheus 1/1 9s ago 9m count:1 quay.io/prometheus/prometheus:v2.18.1 de242295e225
# List the hosts currently managed by the cluster
[ceph: root@ceph-01 /]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-01 ceph-01
ceph-02 ceph-02
ceph-03 ceph-03
Create MONs and MGRs
[ceph: root@ceph-01 /]# ceph orch apply mon 3
Scheduled mon update...
[ceph: root@ceph-01 /]# ceph orch apply mon ceph-01,ceph-02,ceph-03
Scheduled mon update...
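Instead of listing the hosts explicitly, placement can also be driven by host labels, which is easier to maintain as hosts come and go; a sketch (the label name mon is arbitrary):
ceph orch host label add ceph-01 mon
ceph orch host label add ceph-02 mon
ceph orch host label add ceph-03 mon
ceph orch apply mon label:mon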
[ceph: root@ceph-01 /]# ceph mon dump
dumped monmap epoch 1
epoch 1
fsid 5a0ef854-fb8c-11ed-b60e-005056b5e7bd
last_changed 2023-05-26T06:17:00.155294+0000
created 2023-05-26T06:17:00.155294+0000
min_mon_release 15 (octopus)
0: [v2:10.10.101.10:3300/0,v1:10.10.101.10:6789/0] mon.ceph-01
The monmap still shows a single monitor at this point; the additional MONs only appear in the dump once the orchestrator has actually deployed them on the new hosts.
Create OSDs
[ceph: root@ceph-01 /]# ceph orch daemon add osd ceph-01:/dev/sdb
Created osd(s) 0 on host 'ceph-01'
[ceph: root@ceph-01 /]# ceph orch daemon add osd ceph-02:/dev/sdb
Created osd(s) 1 on host 'ceph-02'
[ceph: root@ceph-01 /]# ceph orch daemon add osd ceph-03:/dev/sdb
Created osd(s) 2 on host 'ceph-03'
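Before adding OSDs disk by disk it can help to see which devices the orchestrator considers usable, and on a homogeneous lab cluster all eligible disks can be consumed in one go; a sketch:
# list the devices each host reports as available
ceph orch device ls
# or let cephadm create an OSD on every unused, eligible device
ceph orch apply osd --all-available-devices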
Create MDS daemons (CephFS)
[ceph: root@ceph-01 /]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[ceph: root@ceph-01 /]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[ceph: root@ceph-01 /]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
[ceph: root@ceph-01 /]# ceph orch apply mds cephfs --placement="3 ceph-01 ceph-02 ceph-03"
Scheduled mds.cephfs update...
[ceph: root@ceph-01 /]# ceph orch ps --daemon-type mds
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
mds.cephfs.ceph-01.lvzpgk ceph-01 running (59s) 55s ago 59s 15.2.17 quay.io/ceph/ceph:v15 93146564743f 33256c710a97
mds.cephfs.ceph-02.ohhfia ceph-02 running (61s) 55s ago 61s 15.2.17 quay.io/ceph/ceph:v15 93146564743f 37dee2bc1e20
mds.cephfs.ceph-03.fguquy ceph-03 running (63s) 55s ago 63s 15.2.17 quay.io/ceph/ceph:v15 93146564743f 237f93101298
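With the MDS daemons running, the cephfs filesystem can be mounted with the kernel client; a minimal sketch, assuming a made-up client name client.cephfs and mount point /mnt/cephfs:
# create a client key restricted to this filesystem
ceph fs authorize cephfs client.cephfs / rw
# mount from any host that can reach the monitors
mkdir -p /mnt/cephfs
mount -t ceph 10.10.101.10:6789:/ /mnt/cephfs -o name=cephfs,secret=$(ceph auth get-key client.cephfs)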
Create RGW
# First, create a realm
[ceph: root@ceph-01 /]# radosgw-admin realm create --rgw-realm=myorg --default
{
"id": "11b167f7-3a4b-48cb-80a3-a4c3f396049e",
"name": "myorg",
"current_period": "009302d5-14fb-493c-877e-4586d0c5f017",
"epoch": 1
}
# Create a zonegroup
[ceph: root@ceph-01 /]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
{
"id": "bdc088f7-1f2a-4af1-96c6-7f2fb027bc1b",
"name": "default",
"api_name": "default",
"is_master": "true",
"endpoints": [],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "",
"zones": [],
"placement_targets": [],
"default_placement": "",
"realm_id": "11b167f7-3a4b-48cb-80a3-a4c3f396049e",
"sync_policy": {
"groups": []
}
}
# Create a zone
[ceph: root@ceph-01 /]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
{
"id": "59935956-4617-47f8-8cca-086890d844ac",
"name": "cn-east-1",
"domain_root": "cn-east-1.rgw.meta:root",
"control_pool": "cn-east-1.rgw.control",
"gc_pool": "cn-east-1.rgw.log:gc",
"lc_pool": "cn-east-1.rgw.log:lc",
"log_pool": "cn-east-1.rgw.log",
"intent_log_pool": "cn-east-1.rgw.log:intent",
"usage_log_pool": "cn-east-1.rgw.log:usage",
"roles_pool": "cn-east-1.rgw.meta:roles",
"reshard_pool": "cn-east-1.rgw.log:reshard",
"user_keys_pool": "cn-east-1.rgw.meta:users.keys",
"user_email_pool": "cn-east-1.rgw.meta:users.email",
"user_swift_pool": "cn-east-1.rgw.meta:users.swift",
"user_uid_pool": "cn-east-1.rgw.meta:users.uid",
"otp_pool": "cn-east-1.rgw.otp",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "cn-east-1.rgw.buckets.index",
"storage_classes": {
"STANDARD": {
"data_pool": "cn-east-1.rgw.buckets.data"
}
},
"data_extra_pool": "cn-east-1.rgw.buckets.non-ec",
"index_type": 0
}
}
],
"realm_id": "11b167f7-3a4b-48cb-80a3-a4c3f396049e"
}
[ceph: root@ceph-01 /]# ceph orch apply rgw myorg cn-east-1 --placement="3 ceph-01 ceph-02 ceph-03"
Scheduled rgw.myorg.cn-east-1 update...
[ceph: root@ceph-01 /]# ceph orch ps --daemon-type rgw
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID
rgw.myorg.cn-east-1.ceph-03.qeiawx ceph-03 starting - - <unknown> <unknown> <unknown>
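Once the RGW daemons report running, the S3 endpoint can be smoke-tested with curl from any node; a sketch, assuming cephadm's default non-SSL RGW port of 80:
curl -s http://10.10.101.10:80
# an anonymous request should return a ListAllMyBucketsResult XML document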
[ceph: root@ceph-01 /]# ceph orch daemon add osd ceph-03:/dev/sdc
Created osd(s) 3 on host 'ceph-03'
[ceph: root@ceph-01 /]# ceph orch daemon add osd ceph-02:/dev/sdc
Created osd(s) 4 on host 'ceph-02'
[ceph: root@ceph-01 /]# ceph orch daemon add osd ceph-01:/dev/sdc
Created osd(s) 5 on host 'ceph-01'
Check how the disks have been allocated
[ceph: root@ceph-01 /]# exit
[root@ceph-01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 39G 0 part
├─centos-root 253:0 0 35G 0 lvm /
└─centos-swap 253:1 0 4G 0 lvm [SWAP]
sdb 8:16 0 500G 0 disk
└─ceph--e1455cf0--714c--4c9a--a5f8--18521631de08-osd--block--53c70dee--5829--4aa4--82c3--d408b44ae079 253:2 0 500G 0 lvm
sdc 8:32 0 500G 0 disk
└─ceph--7cef9e9e--c432--41ee--9f56--2069a18baf44-osd--block--7b22409b--ff2b--4e89--ab1a--d9373086e145 253:3 0 500G 0 lvm
sr0 11:0 1 1024M 0 rom
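The same allocation can be confirmed from Ceph's point of view; a quick sketch:
ceph osd tree    # shows all six OSDs and the host each one lives on
ceph df          # shows raw capacity and per-pool usage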
Create an RBD image
[root@ceph-01 ~]# ceph osd pool create rbd 16
pool 'rbd' created
[root@ceph-01 ~]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
[root@ceph-01 ~]# rbd create rbd1 --size 204800
[root@ceph-01 ~]# rbd --image rbd1 info
rbd image 'rbd1':
size 200 GiB in 51200 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 3933e86757f8
block_name_prefix: rbd_data.3933e86757f8
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Fri May 26 15:01:09 2023
access_timestamp: Fri May 26 15:01:09 2023
modify_timestamp: Fri May 26 15:01:09 2023
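To actually use the image from a CentOS 7 client, the features the old el7 kernel client cannot handle are normally disabled first, then the image is mapped, formatted and mounted; a sketch (the /dev/rbd0 device name and /mnt/rbd1 mount point are assumptions):
# the el7 kernel RBD client does not support these features; very old kernels may also need exclusive-lock disabled
rbd feature disable rbd1 object-map fast-diff deep-flatten
rbd map rbd1
mkfs.xfs /dev/rbd0
mkdir -p /mnt/rbd1
mount /dev/rbd0 /mnt/rbd1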
[root@ceph-01 ~]# radosgw-admin user create --uid="admin" --display-name="admin user"
{
"user_id": "admin",
"display_name": "admin user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "admin",
"access_key": "4UDRE994TNPSF00C6ZVO",
"secret_key": "q1l3fOugHCJy9mDwYDao0Ssl6ewHnXQj7PNlUZqx"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
[root@ceph-01 ~]# radosgw-admin user info --uid=admin
{
"user_id": "admin",
"display_name": "admin user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "admin",
"access_key": "4UDRE994TNPSF00C6ZVO",
"secret_key": "q1l3fOugHCJy9mDwYDao0Ssl6ewHnXQj7PNlUZqx"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
[root@ceph-01 ~]# yum install s3cmd -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* epel: mirror.lzu.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package s3cmd.noarch 0:2.3.0-4.el7 will be installed
--> Processing Dependency: python-dateutil for package: s3cmd-2.3.0-4.el7.noarch
--> Processing Dependency: python-magic for package: s3cmd-2.3.0-4.el7.noarch
--> Running transaction check
---> Package python-dateutil.noarch 0:1.5-7.el7 will be installed
---> Package python-magic.noarch 0:5.11-37.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
======================================================================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================================================================
Installing:
s3cmd noarch 2.3.0-4.el7 epel 208 k
Installing for dependencies:
python-dateutil noarch 1.5-7.el7 base 85 k
python-magic noarch 5.11-37.el7 base 34 k
Transaction Summary
======================================================================================================================================================================================
Install 1 Package (+2 Dependent packages)
Total download size: 326 k
Installed size: 1.1 M
Downloading packages:
(1/3): python-magic-5.11-37.el7.noarch.rpm | 34 kB 00:00:00
(2/3): python-dateutil-1.5-7.el7.noarch.rpm | 85 kB 00:00:00
(3/3): s3cmd-2.3.0-4.el7.noarch.rpm | 208 kB 00:00:00
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
总计 446 kB/s | 326 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python-magic-5.11-37.el7.noarch 1/3
Installing : python-dateutil-1.5-7.el7.noarch 2/3
Installing : s3cmd-2.3.0-4.el7.noarch 3/3
Verifying : python-dateutil-1.5-7.el7.noarch 1/3
Verifying : python-magic-5.11-37.el7.noarch 2/3
Verifying : s3cmd-2.3.0-4.el7.noarch 3/3
Installed:
s3cmd.noarch 0:2.3.0-4.el7
Dependency Installed:
python-dateutil.noarch 0:1.5-7.el7 python-magic.noarch 0:5.11-37.el7
Complete!
[root@ceph-01 ~]# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [4UDRE994TNPSF00C6ZVO]:
Secret Key [q1l3fOugHCJy9mDwYDao0Ssl6ewHnXQj7PNlUZqx]:
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [10.10.101.10]:
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 10.10.101.10
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings:
Access Key: 4UDRE994TNPSF00C6ZVO
Secret Key: q1l3fOugHCJy9mDwYDao0Ssl6ewHnXQj7PNlUZqx
Default Region: US
S3 Endpoint: 10.10.101.10
DNS-style bucket+hostname:port template for accessing a bucket: 10.10.101.10
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
[root@ceph-01 ~]# s3cmd mb s3://mybucket
Bucket 's3://mybucket/' created
[root@ceph-01 ~]# s3cmd ls
2023-05-26 07:20 s3://mybucket
[root@ceph-01 ~]# s3cmd put /etc/hosts s3://mybucket
upload: '/etc/hosts' -> 's3://mybucket/hosts' [1 of 1]
222 of 222 100% in 0s 354.24 KB/s
222 of 222 100% in 3s 63.56 B/s done
[root@ceph-01 ~]#
[root@ceph-01 ~]# s3cmd ls
2023-05-26 07:20 s3://mybucket
[root@ceph-01 ~]# s3cmd ls s3://mybucket
2023-05-26 07:21 222 s3://mybucket/hosts
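A quick round trip to confirm the object came back intact (the local path is arbitrary):
s3cmd get s3://mybucket/hosts /tmp/hosts.copy
diff /etc/hosts /tmp/hosts.copy && echo "object matches the original"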
[root@ceph-01 ~]# radosgw-admin user info --uid=admin
{
"user_id": "admin",
"display_name": "admin user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "admin",
"access_key": "4UDRE994TNPSF00C6ZVO",
"secret_key": "q1l3fOugHCJy9mDwYDao0Ssl6ewHnXQj7PNlUZqx"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
[root@ceph-01 ~]# echo "4UDRE994TNPSF00C6ZVO" > access_key
[root@ceph-01 ~]# echo "q1l3fOugHCJy9mDwYDao0Ssl6ewHnXQj7PNlUZqx" > secret_key
[root@ceph-01 ~]# ceph dashboard set-rgw-api-access-key -i access_key
Option RGW_API_ACCESS_KEY updated
[root@ceph-01 ~]# ceph dashboard set-rgw-api-secret-key -i secret_key
Option RGW_API_SECRET_KEY updated
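The values can be read back, and the dashboard URL recovered, through the mgr; a sketch:
ceph dashboard get-rgw-api-access-key
ceph mgr services    # prints the dashboard and prometheus URLs as JSON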