Why Redis beats Memcached for caching

This article compares Redis and Memcached, two popular in-memory data stores. Both serve as high-performance caching layers, but they differ in features, flexibility, and data persistence. Redis offers richer data types and optional data persistence, while Memcached holds a slight edge in horizontal scaling and in storing small, static data.

Adapted from http://www.javaworld.com/article/2836878/developer-tools-ide/why-redis-beats-memcached-for-caching.html

Redis is the first choice in most cases. It is the newer and more versatile data store, but Memcached still has its place.

Memcached or Redis? It's a question that nearly always arises in any discussion about squeezing more performance out of a modern, database-driven Web application. When performance needs to be improved, caching is often the first step employed, and Memcached and Redis are typically the first places to turn.

Let's start with the similarities. Both Memcached and Redis are in-memory, key-value data stores. They both belong to the NoSQL family of data management solutions, and both are based on the same key-value data model. They both keep all data in RAM, which of course makes them supremely useful as a caching layer. In terms of performance, the two data stores are also remarkably similar, exhibiting almost identical characteristics (and metrics) with respect to throughput and latency.

Besides being in-memory, key-value data stores, both Memcached and Redis are mature and hugely popular open source projects. Memcached was originally developed by Brad Fitzpatrick in 2003 for the LiveJournal website. Since then, Memcached has been rewritten in C (the original implementation was in Perl) and put in the public domain, where it has become a cornerstone of modern Web applications. Current development of Memcached is focused on stability and optimizations rather than adding new features.

Redis was created by Salvatore Sanfilippo in 2009, and Sanfilippo remains the lead developer and the sole maintainer of the project today. Redis is sometimes described as "Memcached on steroids," which is hardly surprising considering that parts of Redis were built in response to lessons learned from using Memcached. Redis has more features than Memcached, which makes it more powerful and flexible but also more complex.

Used by many companies and in countless mission-critical production environments, both Memcached and Redis are supported by client libraries implemented in every conceivable programming language, and both are included in a multitude of libraries and packages that developers use. In fact, it's a rare Web stack that does not include built-in support for either Memcached or Redis.

Why are Memcached and Redis so popular? Not only are they extremely effective, they're also relatively simple. Getting started with either Memcached or Redis is considered easy work for a developer. It takes only a few minutes to set them up and get them working with an application. Thus a small investment of time and effort can have an immediate, dramatic impact on performance -- usually by orders of magnitude. A simple solution with a huge benefit: That's as close to magic as you can get.

When to use Memcached
Because Redis is newer and has more features than Memcached, Redis is almost always the better choice. But there are two specific scenarios in which Memcached could be preferable. The first is caching small, static data, such as HTML code fragments. Memcached's internal memory management, while not as sophisticated as Redis', is more efficient because Memcached consumes comparatively fewer memory resources for metadata. Strings, the only data type Memcached supports, are ideal for storing data that is only read, because strings require no further processing.
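To make the string-only model concrete, here is a minimal pure-Python stand-in for a Memcached-style cache (illustrative only, not a real client; the class and key names are hypothetical). A static HTML fragment is stored and served verbatim, while structured data must round-trip through a serializer chosen by the caller:

```python
import json

# A minimal stand-in for Memcached's string-only model (illustrative only,
# not a real client): values must be flat strings, so structured data has
# to be serialized by the caller before set() and parsed after get().
class StringOnlyCache:
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        assert isinstance(value, str), "only plain strings are stored"
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

cache = StringOnlyCache()

# Static HTML fragments are the ideal case: stored and served verbatim,
# with no serialization step on either side.
cache.set("fragment:header", "<header><h1>My Site</h1></header>")
print(cache.get("fragment:header"))

# Structured data, by contrast, must round-trip through a serializer.
cache.set("user:42", json.dumps({"name": "ada", "visits": 7}))
user = json.loads(cache.get("user:42"))
print(user["visits"])
```

The serialization round-trip on the second example is exactly the overhead that Redis Hashes avoid, as discussed below.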

The second scenario in which Memcached still has a slight advantage over Redis is horizontal scaling. Due in part to its design and in part to its simpler capabilities, Memcached is much easier to scale. That said, there are several tested and accepted approaches to scaling Redis beyond a single server, and the upcoming version 3.0 (read the release candidate notes) will include built-in clustering for exactly that purpose.
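Memcached's ease of scaling comes from the fact that sharding is done entirely client-side: each key is hashed onto a server in the pool, and the servers need not know about one another. A sketch of the idea, with hypothetical server addresses (real clients often use consistent hashing such as ketama so that resizing the pool moves only a fraction of the keys; plain modulo is shown for brevity):

```python
import hashlib

# Hypothetical server pool; Memcached clients typically shard keys
# client-side, so adding capacity is just adding another address here.
SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211"]

def server_for(key: str) -> str:
    # Hash the key and map it deterministically onto the pool, so every
    # client with the same pool list routes a given key the same way.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(server_for("user:42"))
print(server_for("fragment:header"))
```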

When to use Redis
Unless you are working under constraints (e.g. a legacy application) that require the use of Memcached, or your use case matches one of the two scenarios above, you'll almost always want to use Redis instead. By using Redis as a cache, you gain a lot of power -- such as the ability to fine-tune cache contents and durability -- and greater efficiency overall.

Redis' superiority is evident in almost every aspect of cache management. Caches employ a mechanism called data eviction to delete old data from memory in order to make room for new data. Memcached's data eviction mechanism uses an LRU (Least Recently Used) algorithm and somewhat arbitrarily evicts data that's similar in size to the new data. Redis, by contrast, allows for fine-grained control over eviction through a choice of six different eviction policies. Redis also employs more sophisticated approaches to memory management and eviction candidate selection.
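The LRU idea underlying Memcached's default eviction can be sketched in a few lines of pure Python (a toy model, not Memcached's actual slab-based implementation): when the cache is full, the entry that has gone longest without being touched is dropped.

```python
from collections import OrderedDict

# Toy LRU cache: illustrates the eviction idea, not Memcached's
# actual slab-allocator implementation.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict least recently used

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")        # "a" is now the most recently used entry
cache.set("c", 3)     # capacity exceeded: "b" is evicted, not "a"
print(cache.get("b")) # None
print(cache.get("a")) # 1
```

Redis replaces this single fixed behavior with a configurable `maxmemory-policy`, letting the operator choose, for example, whether eviction considers all keys or only those with an expiry set.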

Redis gives you much greater flexibility regarding the objects you can cache. Whereas Memcached limits key names to 250 bytes, limits values to 1MB, and works only with plain strings, Redis allows key names and values to be as large as 512MB each, and they are binary safe. Redis has six data types that enable more intelligent caching and manipulation of cached data, opening up a world of possibilities to the application developer.

Instead of storing objects as serialized strings, the developer can use a Redis Hash to store an object's fields and values and manage them using a single key. Redis Hash saves developers the need to fetch the entire string, de-serialize it, update a value, re-serialize the object, and replace the entire string in the cache with its new value for every trivial update -- and that means lower resource consumption and increased performance. Other data types that Redis offers, such as Lists and Sets, can be leveraged to implement even more complex cache management patterns.
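The two update patterns can be contrasted side by side, using plain dicts as stand-ins for the data store (illustrative only; the key and field names are hypothetical). With a serialized string, every field update means fetch, deserialize, modify, reserialize, and replace; with Hash-style semantics, the store updates the single field in place:

```python
import json

# Pattern 1: object cached as one serialized string. Updating a single
# field forces the whole object through a round-trip.
string_store = {"user:42": json.dumps({"name": "ada", "visits": 7})}

obj = json.loads(string_store["user:42"])   # fetch + deserialize everything
obj["visits"] += 1                          # change one field
string_store["user:42"] = json.dumps(obj)   # reserialize + replace everything

# Pattern 2: object cached as a field->value map (HSET/HINCRBY-style
# semantics). Only the one field is touched.
hash_store = {"user:42": {"name": "ada", "visits": "7"}}
hash_store["user:42"]["visits"] = str(int(hash_store["user:42"]["visits"]) + 1)

print(json.loads(string_store["user:42"])["visits"])  # 8
print(hash_store["user:42"]["visits"])                # 8
```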

Another important advantage of Redis is that the data it stores isn't opaque, meaning that the server can manipulate it directly. A considerable share of the 160-plus commands available in Redis is devoted to data processing operations and embedding logic in the data store itself via server-side scripting. These built-in commands and user scripts give you the flexibility of handling data processing tasks directly in Redis, without having to ship data across the network to another system for processing.
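The benefit of non-opaque data can be sketched with a toy store that exposes a "server-side" operation (an illustrative stand-in, not the Redis API): the caller gets back one number instead of the whole list, which is the same saving that Redis commands and Lua scripts provide over shipping data elsewhere for processing.

```python
# Toy store contrasting client-side vs server-side processing
# (illustrative only; method names are hypothetical).
class ToyStore:
    def __init__(self):
        self._data = {}

    def rpush(self, key, *values):
        self._data.setdefault(key, []).extend(values)

    # Without server-side processing, the client must fetch the
    # entire list over the network and aggregate it locally.
    def lrange_all(self, key):
        return list(self._data.get(key, []))

    # With logic embedded in the store (as Redis commands and Lua
    # scripts allow), only the result travels back to the client.
    def server_side_sum(self, key):
        return sum(self._data.get(key, []))

store = ToyStore()
store.rpush("latencies", 12, 7, 30)
print(store.server_side_sum("latencies"))  # 49
```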

Redis offers optional and tunable data persistence, which is designed to bootstrap the cache after a planned shutdown or an unplanned failure. While we tend to regard the data in caches as volatile and transient, persisting data to disk can be quite valuable in caching scenarios. Having the cache's data available for loading immediately after restart allows for much shorter cache warm-up periods and removes the load involved in repopulating and recalculating cache contents from the primary data store.
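As a rough sketch, the two persistence mechanisms are configured in redis.conf along these lines (illustrative values; tune them for your workload):

```conf
# RDB snapshots: dump the dataset to disk if at least N changes
# occurred within the given window.
save 900 1       # after 900 s if at least 1 key changed
save 300 10      # after 300 s if at least 10 keys changed

# AOF: log every write command; replayed on restart to rebuild state.
appendonly yes
appendfsync everysec   # fsync once per second, a common durability/speed trade-off
```

For a pure cache, RDB snapshots alone are often enough to shorten warm-up after a restart, since losing the last few minutes of writes only means a few extra cache misses.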

Last but not least, Redis offers replication. Replication can be used for implementing a highly available cache setup that can withstand failures and provide uninterrupted service to the application. Considering a cache failure falls only slightly short of application failure in terms of the impact on user experience and application performance, having a proven solution that guarantees the cache's contents and service availability is a major advantage in most cases.
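Enabling replication is a one-line redis.conf change on the replica (a sketch; the directive was named `slaveof` in the Redis versions contemporary with this article and was renamed `replicaof` in Redis 5; the address shown is hypothetical):

```conf
# redis.conf on the replica
slaveof 10.0.0.1 6379

# Replicas serve reads but reject writes by default, which suits a
# read-heavy cache tier.
slave-read-only yes
```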


