Elasticsearch as a Time Series Data Store

This article looks at using Elasticsearch as a time series database in place of the Graphite TSDB, and at how tuning the index mapping and storage strategy significantly reduced the storage space required.

Original post: https://www.elastic.co/blog/elasticsearch-as-a-time-series-data-store

As the project manager of stagemonitor, an open source performance monitoring tool, I've recently been looking for a database to replace the cool-but-aging Graphite Time Series DataBase (TSDB) as the backend. TSDBs are specialised packages for storing (performance) metric data, like the response time of your app or the CPU utilisation of a server. Ultimately, we were looking for a datastore that is easy to install, scalable, supports a variety of functions, and has great support for visualizing the metrics.

We've previously worked with Elasticsearch, so we know it is easy to install, scalable, offers many aggregations, and has a great visualisation tool in Kibana. But we didn't know if Elasticsearch was suited for time series data. We were not the only ones asking this question. In fact, CERN (you know, those folks who shoot protons in circles) did a performance comparison between Elasticsearch, InfluxDB and OpenTSDB and declared Elasticsearch the winner.

The Decision Process

Elasticsearch is a fantastic tool for storing, searching, and analyzing structured and unstructured data — including free text, system logs, database records, and more. With the right tweaking, you also get a great platform to store your time series metrics from tools like collectd or statsd.

It also scales very easily as you add more metrics. Elasticsearch has built in redundancy thanks to shard replicas and allows simple backups with Snapshot & Restore, which makes management of your cluster and your data a lot easier.

Elasticsearch is also highly API driven, and with integration tools like Logstash, it's simple to build data processing pipelines that can handle very large amounts of data with great efficiency. Once you add Kibana to the mix you have a platform that allows you to ingest and analyse multiple datasets and draw correlations from metric and other data side-by-side.

Another benefit that isn't immediately obvious is instead of storing metrics that have been transformed via a calculation to provide an endpoint value that gets graphed, you are storing the raw value and then running the suite of powerful aggregations built into Elasticsearch over these values. This means that if you change your mind after a few months of monitoring a metric and you want to calculate or display the metric differently, it's as simple as changing the aggregation over the dataset, for both historic and current data. Or to put it another way: You have the ability to ask and answer questions that you didn't think about when the data was stored!

With all that in mind, the obvious question we wanted to answer is: What's the best method for setting up Elasticsearch as a time series database?

First Stop: Mappings

The most important thing to get right up front is your mapping. Defining your mapping ahead of time means that the analysis and storage of data in Elasticsearch is as optimal as possible.

Here's an example of how we do the mappings at stagemonitor. You can find the original over in our GitHub repo:

{
  "template": "stagemonitor-metrics-*",
  "settings": {
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "strings": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping":   { "type": "string",  "doc_values": true, "index": "not_analyzed" }
          }
        }
      ],
      "_all":            { "enabled": false },
      "_source":         { "enabled": false },
      "properties": {
        "@timestamp":    { "type": "date",    "doc_values": true },
        "count":         { "type": "integer", "doc_values": true, "index": "no" },
        "m1_rate":       { "type": "float",   "doc_values": true, "index": "no" },
        "m5_rate":       { "type": "float",   "doc_values": true, "index": "no" },
        "m15_rate":      { "type": "float",   "doc_values": true, "index": "no" },
        "max":           { "type": "integer", "doc_values": true, "index": "no" },
        "mean":          { "type": "integer", "doc_values": true, "index": "no" },
        "mean_rate":     { "type": "float",   "doc_values": true, "index": "no" },
        "median":        { "type": "float",   "doc_values": true, "index": "no" },
        "min":           { "type": "float",   "doc_values": true, "index": "no" },
        "p25":           { "type": "float",   "doc_values": true, "index": "no" },
        "p75":           { "type": "float",   "doc_values": true, "index": "no" },
        "p95":           { "type": "float",   "doc_values": true, "index": "no" },
        "p98":           { "type": "float",   "doc_values": true, "index": "no" },
        "p99":           { "type": "float",   "doc_values": true, "index": "no" },
        "p999":          { "type": "float",   "doc_values": true, "index": "no" },
        "std":           { "type": "float",   "doc_values": true, "index": "no" },
        "value":         { "type": "float",   "doc_values": true, "index": "no" },
        "value_boolean": { "type": "boolean", "doc_values": true, "index": "no" },
        "value_string":  { "type": "string",  "doc_values": true, "index": "no" }
      }
    }
  }
}

You can see here we have disabled _source and _all as we are only ever going to be building aggregations, so we save on disk space as the document stored will be smaller. The downside is that we won't be able to see the actual JSON documents or to reindex to a new mapping or index structure (see the documentation for disabling source for more information), but for our metrics based use case this isn't a major worry for us.

Just to reiterate: For most use cases you do not want to disable source!

We are also not analyzing string values, as we won't perform full text searches on the metric documents. In this case, we only want to filter by exact names or perform terms aggregations on fields like metricName, host, or application, so that we can filter our metrics by certain hosts or get a list of all hosts. It's also better to use doc_values as much as possible to reduce heap use.
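As a sketch of the kind of query this enables (assuming the stagemonitor-metrics-* index pattern from the template above and an Elasticsearch node on localhost), a terms aggregation on the host field returns the list of all hosts without ever touching the unindexed metric values:

```shell
$ curl -XPOST 'localhost:9200/stagemonitor-metrics-*/_search' -d '
{
  "size": 0,
  "aggs": {
    "hosts": {
      "terms": { "field": "host" }
    }
  }
}'
```

Because host is not_analyzed with doc_values enabled, this aggregation runs off the columnar doc_values store rather than the JVM heap.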

There are two more quite aggressive optimisations which may not be suitable for all use cases. The first is to use "index": "no" for all metric values. This reduces the index size, but it also means we can't search the values — which is fine if we want to show all values in a graph rather than only a subset, like values between 2.7182 and 3.1415. The second is to use the smallest numeric type that fits (for us it was float) to optimize the index further. If the values in your use case are out of range for a float, you could use doubles.

Next Up: Optimising for long term storage

The next important step in optimising this data for long term storage is to force merge (previously known as optimize) the indices after all the data has been indexed into them. This involves merging the existing shards into just a few and removing any deleted documents in the same step. “Optimize”, as it was known, is a bit of a misleading term in Elasticsearch — the process does improve resource use, but may require a lot of CPU and disk resources as the system purges any deleted documents and then merges all the underlying Lucene segments. This is why we recommend force merging during off peak periods or running it on nodes with more CPU and disk resources.

The merging process does happen automatically in the background, but only while data is being written to the index. We want to explicitly call it once we are sure all the events have been sent to Elasticsearch and the index is no longer being modified by additions, updates, or deletions.

An optimize is usually best left until 24-48 hours after the newest index has been created (be it hourly, daily, weekly, etc) to allow any late events to reach Elasticsearch. After that period we can easily use Curator to handle the optimize call:
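If you would rather call the API directly than go through Curator, the same result can be had with the optimize endpoint, shown here against a hypothetical daily index (in Elasticsearch 2.1 and later this endpoint was renamed to _forcemerge):

```shell
$ curl -XPOST 'localhost:9200/stagemonitor-metrics-2015.09.13/_optimize?max_num_segments=1'
```

Setting max_num_segments=1 collapses each shard down to a single Lucene segment, which is the ideal state for an index that will never be written to again.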

$ curator optimize --delay 2 --max_num_segments 1 indices --older-than 1 --time-unit days --timestring %Y.%m.%d --prefix stagemonitor-metrics-

Another great benefit of running this optimize after all data has been written is that a synced flush is automatically applied, which speeds up cluster recovery and node restarts.

If you are using stagemonitor, the optimize process is triggered automatically every night, so you don't even need to use Curator.

The Outcome

To test this, we sent a randomised set of just over 23 million data points from our platform to Elasticsearch, equal to roughly a week's worth. This is a sample of what the data looks like:

{
    "@timestamp": 1442165810,
    "name": "timer_1",
    "application": "Metrics Store Benchmark",
    "host": "my_hostname",
    "instance": "Local",
    "count": 21,
    "mean": 714.86,
    "min": 248.00,
    "max": 979.00,
    "stddev": 216.63,
    "p50": 741.00,
    "p75": 925.00,
    "p95": 977.00,
    "p98": 979.00,
    "p99": 979.00,
    "p999": 979.00,
    "mean_rate": 2.03,
    "m1_rate": 2.18,
    "m5_rate": 2.20,
    "m15_rate": 2.20
}

After running a few indexing and optimising cycles we saw the following figures:

               Initial Size   Post-Optimize Size
Sample Run 1   2.2G           508.6M
Sample Run 2                  514.1M
Sample Run 3                  510.9M
Sample Run 4                  510.9M
Sample Run 5                  510.9M

You can see how important the optimize process was. Even with Elasticsearch doing this work in the background, it is well worth running for long term storage alone.
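To sketch what graphing from the raw values looks like (index pattern and field names as in the mapping and sample document above), a date_histogram with a nested avg aggregation produces the data points for a mean-response-time chart for a single timer:

```shell
$ curl -XPOST 'localhost:9200/stagemonitor-metrics-*/_search' -d '
{
  "size": 0,
  "query": { "term": { "name": "timer_1" } },
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "5m" },
      "aggs": {
        "avg_response_time": { "avg": { "field": "mean" } }
      }
    }
  }
}'
```

This is exactly the "store raw, aggregate later" approach described earlier: changing the interval or swapping avg for a percentiles aggregation requires no change to the stored data.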

Now, with all this data in Elasticsearch, what can we discover? Well, here are a few samples of what we have built with the system:

[Screenshots of Kibana dashboards: application, host, and instance metrics; response times; throughput by status code; top requests; page load time; slowest requests (median and individual); highest throughput; most errors]

If you'd like to replicate the testing that we did here, you can find the code in the stagemonitor GitHub repo.

The Future

With Elasticsearch 2.0 there are a lot of features that make it even more flexible and suitable for time series data users.

Pipeline aggregations open a whole new level for analyzing and transforming of data points. For example, it is possible to smooth graphs with moving averages, use a Holt Winters forecast to check if the data matches historic patterns, or even calculate derivatives.
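As a hedged sketch of what that looks like (reusing the index pattern and fields from above), the moving_avg pipeline aggregation in Elasticsearch 2.0 can smooth the per-bucket averages from a date_histogram, here with the Holt-Winters model:

```shell
$ curl -XPOST 'localhost:9200/stagemonitor-metrics-*/_search' -d '
{
  "size": 0,
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "5m" },
      "aggs": {
        "avg_rt": { "avg": { "field": "mean" } },
        "smoothed_rt": {
          "moving_avg": {
            "buckets_path": "avg_rt",
            "model": "holt_winters"
          }
        }
      }
    }
  }
}'
```

The buckets_path points the pipeline aggregation at the sibling avg_rt aggregation, so the smoothing is computed over already-aggregated buckets rather than raw documents.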

And finally, in the mapping described above we had to manually enable doc_values to improve heap efficiency. In 2.0, doc_values are enabled by default for any not_analyzed field, which means less work for you!
