Metrics scraped by Prometheus's scrapeManager can be stored in the local TSDB (time-series database), which is simple and efficient but offers no durable long-term persistence, so local or remote storage can be chosen as needed. This article does not cover the storage implementations themselves; it analyzes the validity checks applied to metrics before they are stored, i.e. the metric cache layer (scrapeCache).
As described in the earlier post in this series, Prometheus source code series: metric collection (scrapeManager), the scrapeLoop struct contains a scrapeCache, and the scrapeLoop.append method is called to handle metric storage. Inside append, each metric is passed to the corresponding scrapeCache methods for validation, filtering, and caching.
-
The scrapeCache struct and its constructor are as follows:
prometheus/scrape/scrape.go
// scrapeCache tracks mappings of exposed metric strings to label sets and
// storage references. Additionally, it tracks staleness of series between
// scrapes.
type scrapeCache struct {
	iter uint64 // Current scrape iteration; used to age out cache entries.

	// Parsed string to an entry with information about the actual label set
	// and its storage reference.
	series map[string]*cacheEntry // key is the metric string, value is a cacheEntry

	// Cache of dropped metric strings and their iteration. The iteration must
	// be a pointer so we can update it without setting a new entry with an unsafe
	// string in addDropped().
	droppedSeries map[string]*uint64 // caches invalid (dropped) metrics

	// seriesCur and seriesPrev store the labels of series that were seen
	// in the current and previous scrape.
	// We hold two maps and swap them out to save allocations.
	seriesCur  map[uint64]labels.Labels // metrics seen in the current scrape
	seriesPrev map[uint64]labels.Labels // metrics seen in the previous scrape

	metaMtx  sync.Mutex            // guards metadata
	metadata map[string]*metaEntry // metric metadata
}
func newScrapeCache() *scrapeCache {
	return &scrapeCache{
		series:        map[string]*cacheEntry{},
		droppedSeries: map[string]*uint64{},
		seriesCur:     map[uint64]labels.Labels{},
		seriesPrev:    map[uint64]labels.Labels{},
		metadata:      map[string]*metaEntry{},
	}
}
-
The scrapeCache struct provides several methods, listed below in the order they appear in append.
(1) Check whether a metric has been marked invalid (dropped)
prometheus/scrape/scrape.go
func (c *scrapeCache) getDropped(met string) bool {
	// Check whether the metric is in the droppedSeries map of invalid metrics;
	// the key is the metric string, the value is a pointer to the iteration number.
	iterp, ok := c.droppedSeries[met]
	if ok {
		// Refresh the iteration so the entry is not evicted as stale.
		*iterp = c.iter
	}
	return ok
}
(2) Look up the cacheEntry for a metric
prometheus/scrape/scrape.go
func (c *scrapeCache) get(met string) (*cacheEntry, bool) {
	e, ok := c.series[met]
	if !ok {
		return nil, false
	}
	// Refresh the entry's iteration so it survives cache eviction.
	e.lastIter = c.iter
	return e, true
}