Ceph PGs per Pool Calculator
Instructions
- Confirm your understanding of the fields by reading through the Key below.
- Select a "Ceph Use Case" from the drop down menu.
- Adjust the values in the "Green" shaded fields below. Tip: Headers can be clicked to change the value throughout the table.
- You will see the Suggested PG Count update based on your inputs.
- Click the "Add Pool" button to create a new line for a new pool.
- Click the delete icon next to a pool to remove that specific pool.
- For more details on the logic used and some important details, see the area below the table.
- Once all values have been adjusted, click the "Generate Commands" button to get the pool creation commands.
Key
Pool Name
Name of the pool in question. Typical pool names are included below.
Size
Number of replicas the pool will have. Default value of 3 is pre-filled.
OSD #
Number of OSDs which this Pool will have PGs in. Typically, this is the entire Cluster OSD count, but could be less based on CRUSH rules. (e.g. Separate SSD and SATA disk sets)
%Data
This value represents the approximate percentage of data which will be contained in this pool for that specific OSD set. Examples are pre-filled below for guidance.
Target PGs per OSD
The target number of PGs per OSD, chosen according to the cluster's expected growth: use 100 if the OSD count is not expected to increase, or 200 if it may increase.
This article describes how Placement Group (PG) counts are calculated in Ceph to ensure even data distribution across the cluster. The target PGs per OSD is chosen based on expected cluster growth, e.g. 100 (no expansion planned) or 200 (expansion likely). The formula is `(Target PGs per OSD) x (OSD #) x (%Data) / (Size)`. Common preset PG_NUM values are also discussed, along with the impact of PG count on cluster performance and data durability. A pool is a logical partition for storing data and can be configured with replication or erasure coding (EC) to improve data availability.
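The calculator's formula above can be sketched in a few lines of Python. This is an illustrative sketch, not the calculator's actual source: the function name is hypothetical, and the rounding step is a simplification (the real calculator applies its own power-of-two rounding rule; rounding straight up is used here).

```python
def suggested_pg_count(target_pgs_per_osd, osd_count, percent_data, size):
    """Suggest a pg_num for one pool using the calculator's formula.

    target_pgs_per_osd: e.g. 100 (no growth expected) or 200 (growth expected)
    osd_count: number of OSDs this pool's PGs will be spread across
    percent_data: fraction of this OSD set's data held by the pool (0.25 = 25%)
    size: replica count of the pool
    """
    raw = target_pgs_per_osd * osd_count * percent_data / size
    # Round up to the nearest power of two (simplified rounding; the
    # calculator's own rule may pick the lower power in some cases).
    power = 1
    while power < raw:
        power *= 2
    return power

# Example: 200 target PGs/OSD, 36 OSDs, 25% of the data, size 3
# raw = 200 * 36 * 0.25 / 3 = 600, rounded up to 1024
print(suggested_pg_count(200, 36, 0.25, 3))
```

Running the example prints 1024: the raw value 600 is rounded up to the next power of two.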