Flash Sales and Panic Buying -- Designing for Large-Scale Concurrency
In recent work I have repeatedly run into requirements that directly or implicitly involve high-concurrency scenarios, so this article summarizes my own experience together with some of the better posts on the web. The primary reference is Xu Hanbin's blog post at http://www.kuqin.com/shuoit/20141203/343669.html, whose treatment of large-scale concurrency is incisive.
That author covers three aspects of what the server side itself must provide: back-end storage, asynchronous writes, and the per-machine MaxClient setting;
two strategies for coping with floods of requests: restart/overload protection, and defense against attacks;
and the causes of, and remedies for, the data-safety problems of underselling and overselling.
Below we analyze in more detail the design strategies for a flash-sale system that must absorb a massive volume of requests.
First consider the stages of the flash-sale flow. The basic sequence is: browse -> place order -> pay -> ship.
The step that matters most for concurrency handling is decrementing the inventory. It involves not only concurrency but also data safety, and it directly determines whether the product is undersold or oversold. Inventory can be decremented either after the order is placed or after payment succeeds. Decrementing at order time places higher demands on concurrency handling but matches user expectations better: the stock is deducted the moment the order succeeds, even if the user has not yet paid, and when the unpaid order expires the reserved stock is returned to the flash-sale pool so that other users can pick it up, as the short sketch after this paragraph illustrates. This design fits real-world requirements more closely.
A system with weaker concurrency handling can instead decrement inventory after payment. Every user may attempt the flash sale and may place an order, but not every user actually pays; even if a burst of concurrent payments causes the product to be oversold, the merchant can limit the damage by issuing refunds. The trade-off is a noticeably worse user experience.
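To make the order-time-decrement flow concrete, here is a minimal sketch under stated assumptions: OrderService, StockRepository and the 30-minute payment window are illustrative and are not part of the sample code that follows.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch of "decrement at order time, restock when an unpaid order expires".
// OrderService, StockRepository and the 30-minute payment window are illustrative assumptions.
public class OrderService {

    private final StockRepository stockRepository;            // stock + order storage
    private final ScheduledExecutorService expiryScheduler =
            Executors.newScheduledThreadPool(1);              // fires when the payment window closes

    public OrderService(StockRepository stockRepository) {
        this.stockRepository = stockRepository;
    }

    /** Deduct the stock first, then create the order; stock is returned if payment never arrives. */
    public boolean placeOrder(final long goodsCode, final int buyCount) {
        // atomic conditional decrement, e.g. the conditional UPDATE discussed in the sections below
        if (!stockRepository.tryDecrement(goodsCode, buyCount)) {
            return false;                                     // sold out
        }
        final long orderId = stockRepository.createPendingOrder(goodsCode, buyCount);
        expiryScheduler.schedule(new Runnable() {
            public void run() {
                if (!stockRepository.isPaid(orderId)) {       // user never paid in time
                    stockRepository.cancelOrderAndRestock(orderId);
                }
            }
        }, 30, TimeUnit.MINUTES);
        return true;
    }
}

/** Illustrative storage contract; tryDecrement must be atomic in any real implementation. */
interface StockRepository {
    boolean tryDecrement(long goodsCode, int buyCount);
    long createPendingOrder(long goodsCode, int buyCount);
    boolean isPaid(long orderId);
    void cancelOrderAndRestock(long orderId);
}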
I. Design based on database optimistic locking
When the level of concurrency is not that high, the flash sale can be implemented with an optimistic-locking mechanism. This design was long present in the big e-commerce systems; Taobao, for example, was still using it for flash sales around 2006. The precondition for adopting it is that the volume of concurrent requests is not enormous. Optimistic locking means every thread optimistically goes ahead and modifies the stock; the scheme is looser than pessimistic locking and simpler to implement, typically via a version number or a status value. Put simply, many threads may read and attempt to modify the stock, but only a modification that still satisfies the version condition actually updates it.
For example, suppose the goods are identified by the column code, the stock column is mount, and user A orders buyCount units of the product. The optimistic-lock update can be written as:
int affected = update goods_table set mount = mount - buyCount where code = #{code} and mount - buyCount >= 0
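The sample project below uses this stock-condition form of the update. For comparison, the version-number variant mentioned above could look like the following sketch; the version column and the #{versionRead} parameter are illustrative assumptions and are not part of the sample schema.

-- hypothetical version-column variant of optimistic locking (not used in the sample project)
-- 1. read the current stock together with its version
select mount, version from goods_table where code = #{code};
-- 2. update only if nobody has modified the row since the read
update goods_table
   set mount = mount - #{buyCount},
       version = version + 1
 where code = #{code}
   and version = #{versionRead}
   and mount - #{buyCount} >= 0;
-- 0 affected rows means another thread won the race: re-read and retry, or fail the purchase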
Below we use MySQL to write a test case; the full code can be downloaded from the attachment.
1 Define the entity class
package com.jfn.miaosha.model;
public class Goods {
/**ID**/
private Integer id;
/** goods code **/
private Integer code;
/** stock count **/
private Integer count;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public Integer getCode() {
return code;
}
public void setCode(Integer code) {
this.code = code;
}
public Integer getCount() {
return count;
}
public void setCount(Integer count) {
this.count = count;
}
}
2 Define the MyBatis mapper file
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.jfn.miaosha.dao.GoodsMapper">
    <resultMap id="BaseResultMap" type="com.jfn.miaosha.model.Goods">
        <id column="id" jdbcType="BIGINT" property="id" />
        <result column="code" jdbcType="BIGINT" property="code" />
        <result column="count" jdbcType="BIGINT" property="count" />
    </resultMap>
    <update id="modifyCountDB" parameterType="java.util.HashMap">
        update goods set count = count - #{count}
        where count - #{count} >= 0 and code = #{code}
    </update>
    <insert id="insert" parameterType="com.jfn.miaosha.model.Goods">
        insert into goods(id,code,count) values(#{id},#{code},#{count})
    </insert>
    <delete id="delete" parameterType="com.jfn.miaosha.model.Goods">
        delete from goods where code = #{code}
    </delete>
    <select id="queryGoods" parameterType="com.jfn.miaosha.model.Goods" resultType="com.jfn.miaosha.model.Goods">
        select * from goods where code = #{code}
    </select>
</mapper>
3 Define the DAO interface
package com.jfn.miaosha.dao;
import java.util.Map;
import com.jfn.miaosha.model.Goods;
public interface GoodsMapper {
/** Decrement the stock.
 * Flash sale based on the database optimistic-locking mechanism:
 * the row is only modified while the stock condition (the "version") still holds, i.e.
 * int affected = update goods set count = count - #{buy_count}
 *                where count - #{buy_count} >= 0 and code = #{code}
 * Pros: simple, accurate, highly reliable.
 * Cons: low concurrency; on average roughly 300-700 conditional updates per second,
 *       depending on disk speed (mechanical disks at the low end, SSDs at the high end).
 * @param map carries the goods code ("code") and the purchase quantity ("count")
 * @return the number of rows affected (1 on success, 0 if the stock cannot cover the purchase)
 */
public Integer modifyCountDB(Map<String,Integer> map);
public Integer insert(Goods good);
public void delete(Goods good);
public Goods queryGoods(Goods good);
}
4 Define the service interface
package com.jfn.miaosha.service;
import com.jfn.miaosha.model.Goods;
public interface MiaoShaDBService {
public boolean modifyCountDB(Integer code,Integer count);
public Integer insert(Goods good);
}
5 Define the service implementation
package com.jfn.miaosha.service;
import java.util.HashMap;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import com.jfn.miaosha.dao.GoodsMapper;
import com.jfn.miaosha.model.Goods;
@Repository
public class MiaoShaDBServiceImpl implements MiaoShaDBService {
@Autowired
private GoodsMapper goodsDao;
@Override
public boolean modifyCountDB(Integer code, Integer count) {
Map<String,Integer> map = new HashMap<String,Integer>();
map.put("code", code);
map.put("count", count);
return this.goodsDao.modifyCountDB(map).equals(1);
}
@Override
public Integer insert(Goods good) {
return goodsDao.insert(good);
}
public void delete(Goods good) {
goodsDao.delete(good);
}
}
6 Define the Spring application context
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">
    <context:annotation-config />
    <context:component-scan base-package="com.jfn.miaosha" />
    <context:property-placeholder location="classpath:app.properties" />
    <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
        <property name="user" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
        <property name="jdbcUrl" value="${jdbc.url}"/>
        <property name="driverClass" value="${jdbc.driverClassName}"/>
        <property name="minPoolSize" value="${jdbc.minPoolSize}" />
        <property name="maxPoolSize" value="${jdbc.maxPoolSize}" />
        <property name="maxIdleTime" value="${jdbc.maxIdleTime}" />
    </bean>
    <bean id="sessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="mapperLocations">
            <list>
                <value>classpath:mapper/**/*.xml</value>
            </list>
        </property>
    </bean>
    <bean id="mapperScannerConfigurer_crm" class="org.mybatis.spring.mapper.MapperScannerConfigurer">
        <property name="basePackage" value="com.jfn.miaosha.dao" />
        <property name="sqlSessionFactoryBeanName" value="sessionFactory" />
    </bean>
</beans>
7 Define the Maven pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.jfn</groupId>
    <artifactId>miaosha1</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <target>1.6</target>
                    <source>1.6</source>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <!-- https://mvnrepository.com/artifact/junit/junit -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.springframework/spring-test -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>4.1.6.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.mybatis</groupId>
            <artifactId>mybatis</artifactId>
            <version>3.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.mybatis</groupId>
            <artifactId>mybatis-spring</artifactId>
            <version>1.1.1</version>
        </dependency>
        <!-- mybatis end -->
        <!-- mysql start -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.18</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
            <version>4.1.6.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>4.1.6.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>4.1.6.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>c3p0</groupId>
            <artifactId>c3p0</artifactId>
            <version>0.9.1.1</version>
        </dependency>
    </dependencies>
</project>
8 Define the database connection properties (app.properties)
jdbc.username=iperp_m2
jdbc.password=QbdxeI8MUyOsfcIv0xNNvGIC
jdbc.url=jdbc:mysql://192.168.30.26:3306/test?autoReconnect=true&useUnicode=true&characterEncoding=utf-8&generateSimpleParameterMetadata=true
jdbc.minPoolSize=500
jdbc.maxPoolSize=1000
jdbc.maxIdleTime=60
jdbc.driverClassName=com.mysql.jdbc.Driver
9 Define the test class
package com.jfn.miaosha;
import java.util.concurrent.CountDownLatch;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import com.jfn.miaosha.dao.GoodsMapper;
import com.jfn.miaosha.model.Goods;
import com.jfn.miaosha.service.MiaoShaDBServiceImpl;
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:applicationContext.xml")
public class MiaoshaDBTest {
// goods code under test
private Integer goodsCode = 1133;
@Autowired
private MiaoShaDBServiceImpl miaoShaService;
@Autowired
private GoodsMapper goodsDao;
@Before
public void before(){
Goods good = new Goods();
good.setCode(1133);
good.setCount(100);
good.setId(1122);
miaoShaService.insert(good);
}
@After
public void after(){
Goods good = new Goods();
good.setCode(1133);
good.setCount(100);
good.setId(1122);
Goods g = goodsDao.queryGoods(good);
System.out.println("数据库剩余商品...."+g.getCount());
miaoShaService.delete(good);
}
@Test
public void testMiaoshaDB(){
for(int i = 0;i<100;i++){
Thread t = new Thread(new Customer(3));
t.start();
latch.countDown();
}
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("卖出去商品------"+allCount+"*****顾客数量--------"+manCount);
}
//start gate that releases all customer threads at once
private final CountDownLatch latch = new CountDownLatch(100);
//guards the two counters below
private final Object counterLock = new Object();
//total quantity sold
private int allCount = 0;
//number of buyers who succeeded
private int manCount = 0;
private class Customer implements Runnable{
//quantity each customer buys
private Integer count;
public Customer(Integer count){
this.count = count;
}
public void run() {
try {
//wait for the start signal
latch.await();
} catch (InterruptedException e) {
e.printStackTrace();
}
boolean result = miaoShaService.modifyCountDB(goodsCode, count);
if(result){
synchronized (counterLock) {
allCount += count;
manCount ++;
}
}
}
}
}
The code for the database-optimistic-lock flash sale can be downloaded from the attachment below. A database-backed flash sale suits scenarios with low concurrency and relatively low user activity; what it really stresses is the read/write capability of the database disks, giving a concurrency capacity of roughly 300-700 conditional updates per second, which may rise as disks get faster. A flash sale backed by an in-memory store is covered in the next section.
II. Asynchronous writes backed by an in-memory store
(1) Given the scale and speed at which the Internet has grown, a reasonably large e-commerce system can hardly run a flash sale on the database optimistic-locking mechanism any more, so other options have to be considered.
The first thing to consider is storage. A large-scale flash sale has to be as fast as possible, so disk-oriented relational storage is no longer a good fit; memory-level storage is a must, and here Redis is taken as the storage target. For a business requirement as demanding as a flash sale, the durable writes should be completed through an asynchronous (write-behind) mechanism, roughly sketched below.
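The sketch below illustrates the write-behind idea under stated assumptions: StockCache, OrderDao and Order are hypothetical helper types, not part of the attached code. The request thread only touches the in-memory store; a single background thread persists the accepted orders to the database.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Rough sketch of write-behind persistence: the stock check/decrement happens in memory,
// while a background thread writes the accepted orders to the database.
// StockCache, OrderDao and Order are assumed types, not part of the attached sample code.
public class AsyncOrderWriter {

    private final BlockingQueue<Order> pendingOrders = new ArrayBlockingQueue<Order>(10000);
    private final StockCache stockCache; // in-memory store (memcached / redis) holding the stock
    private final OrderDao orderDao;     // durable storage (the relational database)

    public AsyncOrderWriter(StockCache stockCache, OrderDao orderDao) {
        this.stockCache = stockCache;
        this.orderDao = orderDao;
        Thread writer = new Thread(new Runnable() {
            public void run() { drainQueue(); }
        }, "order-writer");
        writer.setDaemon(true);
        writer.start();
    }

    /** Fast path, executed on the request thread. */
    public boolean buy(String goodsCode, int count, long userId) throws InterruptedException {
        if (!stockCache.tryDecrement(goodsCode, count)) {
            return false;                                        // sold out: reject immediately
        }
        pendingOrders.put(new Order(goodsCode, count, userId));  // blocks if the writer falls behind
        return true;
    }

    /** Slow path, executed by the single background writer thread. */
    private void drainQueue() {
        try {
            while (true) {
                orderDao.insert(pendingOrders.take());           // blocks until an order is available
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

class Order {
    final String goodsCode; final int count; final long userId;
    Order(String goodsCode, int count, long userId) {
        this.goodsCode = goodsCode; this.count = count; this.userId = userId;
    }
}
interface StockCache { boolean tryDecrement(String goodsCode, int count); }
interface OrderDao   { void insert(Order order); }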
(2) The web server's MaxClient setting must also be considered.
MaxClient is the maximum number of connections a single server will handle. It has to be chosen with the machine's CPU, memory and other physical resources in mind; bigger is not automatically better. Suppose we have 10 web servers, each of which can really sustain at most 500 connections, but we configure 700. Under large-scale concurrency a flood of requests reaches every server, each server exceeds its service capacity and opens too many worker threads, the CPU has to perform far more context switches, and that extra CPU cost lengthens the response time of the business interfaces. And the nightmare is only beginning: the longer interface latency drags down the average response time of the whole web server until the server fails. User behaviour makes it worse: the slower the system responds, the more users click, so even more requests pour in; they get redistributed to the remaining healthy servers, which are then overloaded in turn, until the entire web tier is unavailable.
Network bandwidth and load balancing also deserve attention during this period, but given the nature of the business (flash-sale request payloads are small), they rarely become the bottleneck. An illustrative MaxClient-style configuration is sketched below.
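As a point of reference only: on Apache httpd with the prefork MPM this ceiling is the MaxClients directive. The figures below are placeholders that must be derived from the machine's real CPU/memory capacity and from load testing; they are not recommendations.

# Illustrative Apache httpd prefork tuning; every number here is a placeholder.
<IfModule mpm_prefork_module>
    StartServers          10
    MinSpareServers       10
    MaxSpareServers       50
    # MaxClients is the per-machine worker/connection ceiling discussed above
    MaxClients           500
    MaxRequestsPerChild 10000
</IfModule>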
Case 1: memcached
Unlike Redis, which processes commands on a single thread, memcached is a multi-threaded in-memory store, so its write throughput is very high.
The flash-sale scenario is demonstrated below using memcached.
1 Add the dependencies (pom.xml)
The pom.xml is the same as in section I, with one addition: the spymemcached client.

        <!-- https://mvnrepository.com/artifact/net.spy/spymemcached -->
        <dependency>
            <groupId>net.spy</groupId>
            <artifactId>spymemcached</artifactId>
            <version>2.11.2</version>
        </dependency>
2 Configure the memcached client (Spring)
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:cache="http://www.springframework.org/schema/cache"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache-3.1.xsd">
    <bean id="spyMemcachedClient" class="net.spy.memcached.spring.MemcachedClientFactoryBean">
        <property name="servers" value="${memcached.servers}" />
        <property name="protocol" value="${memcached.protocol}" />
        <property name="transcoder">
            <bean class="net.spy.memcached.transcoders.SerializingTranscoder">
                <property name="compressionThreshold" value="1024" />
            </bean>
        </property>
        <property name="opTimeout" value="${memcached.opTimeout}" />
        <property name="timeoutExceptionThreshold" value="${memcached.timeoutExceptionThreshold}" />
        <property name="hashAlg">
            <value type="net.spy.memcached.DefaultHashAlgorithm">KETAMA_HASH</value>
        </property>
        <property name="locatorType" value="${memcached.locatorType}" />
        <property name="failureMode" value="${memcached.failureMode}" />
        <property name="useNagleAlgorithm" value="${memcached.useNagleAlgorithm}" />
    </bean>
    <bean id="memcachedClient" class="com.jfn.miaosha.memcached.SpyMemcachedCacheClient">
        <property name="memcachedClient" ref="spyMemcachedClient"></property>
    </bean>
</beans>
3 Configure the memcached connection properties
#memcached.servers=192.168.0.125:11211,192.168.0.125:11211
memcached.servers=127.0.0.1:11211
#memcached.servers=10.1.7.104:11211,10.1.7.105:11211
memcached.protocol=BINARY
memcached.opTimeout=6000
memcached.timeoutExceptionThreshold=1998
memcached.locatorType=CONSISTENT
memcached.failureMode=Redistribute
memcached.useNagleAlgorithm=false
4 The memcached client classes
package com.jfn.miaosha.memcached;
import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import net.spy.memcached.CASResponse;
import net.spy.memcached.CASValue;
import net.spy.memcached.MemcachedClient;
/**
* SpyMemcached client wrapper
*
*/
public class SpyMemcachedCacheClient implements MemcachedCacheClient {
private final Map<String,String> transMap = new HashMap<String,String>();
public SpyMemcachedCacheClient(){
String[][] transArr = {
{"pid","process ID"},
{"version","server version"},
{"curr_items","items currently stored"},
{"total_items","total items ever stored"},
{"bytes","bytes used by current items"},
{"curr_connections","current open connections"},
{"total_connections","total connections opened"},
{"cmd_get","total get commands executed"},
{"cmd_set","total set commands executed"},
{"get_hits","get hits"},
{"get_misses","get misses"},
{"bytes_read","total bytes read"},
{"bytes_written","total bytes written"},
{"uptime","server uptime in seconds"},
{"time","current server time"},
{"accepting_conns","whether connections are being accepted"},
{"evictions","items evicted to free space"},
{"limit_maxbytes","bytes of memory allocated to memcached"},
};
for( String[] transEntry : transArr ){
transMap.put(transEntry[0], transEntry[1]);
}
}
/** The memcached client. */
private MemcachedClient memcachedClient;
@Override
public Object get(String key) {
return memcachedClient.get(key);
}
@Override
public void add(String key, int expiredDuration, Object obj) {
memcachedClient.add(key, expiredDuration, obj);
}
@Override
public Object delete(String key) {
return memcachedClient.delete( key);
}
@Override
public void flush() {
memcachedClient.flush();
}
public void setMemcachedClient(MemcachedClient memcachedClient) {
this.memcachedClient = memcachedClient;
}
@Override
public List<KeyValueEntity> getEntity(String cacheName) {
List<KeyValueEntity> result = new ArrayList<KeyValueEntity>();
Map<SocketAddress, Map<String, String>> map = memcachedClient.getStats();
Set<SocketAddress> socketAddresses = map.keySet();
// Collection<Map<String, String>> values = map.values();
for (SocketAddress address : socketAddresses) {
KeyValueEntity entity = new KeyValueEntity();
entity.setKey(cacheName + ":" + address.toString());
Map<String, String> valueMap = map.get(address);
Set<String> keyValue = new TreeSet<String>(valueMap.keySet());
if (keyValue != null) {
StringBuilder sb = new StringBuilder();
for (String key : keyValue) {
String valueValue = valueMap.get(key);
String transKey = transMap.get(key.trim());
sb.append( transKey == null ? key : transKey ).append(':').append(valueValue).append(',');
}
entity.setValue(sb.toString());
}
result.add(entity);
}
return result;
}
public MemcachedClient getMemcachedClient() {
return memcachedClient;
}
public CASValue gets(String goods_cache_code) {
return memcachedClient.gets(goods_cache_code);
}
public CASResponse cas(String goods_cache_code, long cas, int i) {
return memcachedClient.cas(goods_cache_code, cas, i);
}
}
package com.jfn.miaosha.memcached;
import java.util.List;
/**
* memcached client interface
*
*/
public interface MemcachedCacheClient {
public Object get(String key);
public void add(String key,int expiredDuration, Object obj);
public Object delete(String key);
public void flush();
/**
 * Return the server statistics as a list of key/value entries.
 * @return one entry per memcached server
 */
public List<KeyValueEntity> getEntity(String cacheName);
}
package com.jfn.miaosha.memcached;
import java.io.Serializable;
/**
* Key Value Mapping.
*/
public class KeyValueEntity implements Serializable, Cloneable {
/** The Constant serialVersionUID. */
private static final long serialVersionUID = 5568358970483740841L;
/** The key. */
private String key;
/** The value. */
private String value = "";
/**
* Instantiates a new key value entity.
*/
public KeyValueEntity() {
}
/**
* Instantiates a new key value entity.
*
* @param key
* the key
* @param value
* the value
*/
public KeyValueEntity(String key, String value) {
this.key = key;
this.value = value;
}
public KeyValueEntity(Integer key, String value) {
this.key = String.valueOf(key);
this.value = value;
}
/**
* Gets the key.
*
* @return the key
*/
public String getKey() {
return key;
}
/**
* Sets the key.
*
* @param key
* the new key
*/
public void setKey(String key) {
this.key = key;
}
/**
* Gets the value.
*
* @return the value
*/
public String getValue() {
return value;
}
/**
* Sets the value.
*
* @param value
* the new value
*/
public void setValue(String value) {
this.value = value;
}
/*
* (non-Javadoc)
*
* @see java.lang.Object#hashCode()
*/
public int hashCode() {
int result = 17;
result = 31 * result + key.hashCode();
result = 31 * result + value.hashCode();
return result;
}
/*
* (non-Javadoc)
*
* @see java.lang.Object#equals(java.lang.Object)
*/
public boolean equals(Object obj) {
if (obj instanceof KeyValueEntity) {
KeyValueEntity entity = (KeyValueEntity) obj;
if (entity.getKey() != null && entity.getValue() != null && entity.getKey().equals(this.key)
&& entity.getValue().equals(this.value)) {
return true;
}
}
return false;
}
public KeyValueEntity clone(){
KeyValueEntity entity = new KeyValueEntity();
entity.setKey(this.getKey());
entity.setValue(this.getValue());
return entity;
}
}
5 The test class
package com.jfn.miaosha;
import java.util.concurrent.CountDownLatch;
import net.spy.memcached.CASResponse;
import net.spy.memcached.CASValue;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import com.jfn.miaosha.memcached.SpyMemcachedCacheClient;
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:applicationContext.xml")
public class MiaoshaMemcachedTest {
// cache key under which the goods stock is stored
private String goods_cache_code = "good_code_1122";
@Autowired
private SpyMemcachedCacheClient memcachedClient;
@Before
public void before(){
memcachedClient.add(goods_cache_code, 30000, 100);
Integer goodCount = (Integer)memcachedClient.get(goods_cache_code);
System.out.println("memcached 初始商品...."+goodCount);
}
@After
public void after(){
Integer goodCount = (Integer)memcachedClient.get(goods_cache_code);
System.out.println("memcached 剩余商品...."+goodCount);
memcachedClient.delete(goods_cache_code);
}
@Test
public void testMiaoshaMemcached(){
for(int i = 0;i<100000;i++){
Thread t = new Thread(new Customer(3));
t.start();
latch.countDown();
}
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("卖出去商品------"+allCount+"*****顾客数量--------"+manCount);
}
//start gate for the customer threads (counted down as they are launched)
private final CountDownLatch latch = new CountDownLatch(100);
//guards the two counters below
private final Object counterLock = new Object();
//total quantity sold
private int allCount = 0;
//number of buyers who succeeded
private int manCount = 0;
private class Customer implements Runnable{
//quantity each customer buys
private Integer count;
public Customer(Integer count){
this.count = count;
}
public void run() {
try {
//wait for the start signal
latch.await();
} catch (InterruptedException e) {
e.printStackTrace();
}
CASValue goodReset = memcachedClient.gets(goods_cache_code);
//check that enough stock remains
if((Integer)goodReset.getValue()-count < 0) return;
CASResponse response = memcachedClient.cas(goods_cache_code, goodReset.getCas(),
(Integer)goodReset.getValue()-count);
if(response == CASResponse.OK){
synchronized (counterLock) {
manCount ++;
allCount += count;
}
}
}
}
}
See the attachment for the full code.
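One caveat about the test above: each customer calls gets/cas exactly once, so under heavy contention a purchase can fail even though stock is still available. A hypothetical helper that retries the compare-and-swap a few times might look like the sketch below; it reuses the gets/cas methods of the SpyMemcachedCacheClient shown earlier, while the class itself (RetryingBuyer) and MAX_RETRY are purely illustrative.

package com.jfn.miaosha;

import net.spy.memcached.CASResponse;
import net.spy.memcached.CASValue;

import com.jfn.miaosha.memcached.SpyMemcachedCacheClient;

// Illustrative helper (not part of the attached code): retries the compare-and-swap a few
// times instead of giving up after a single attempt, as the test above does.
public class RetryingBuyer {

    private static final int MAX_RETRY = 3;

    private final SpyMemcachedCacheClient client;

    public RetryingBuyer(SpyMemcachedCacheClient client) {
        this.client = client;
    }

    /** Returns true if the stock stored under cacheKey was decremented by count. */
    public boolean buy(String cacheKey, int count) {
        for (int attempt = 0; attempt < MAX_RETRY; attempt++) {
            CASValue goodReset = client.gets(cacheKey);
            if (goodReset == null) {
                return false;                      // key missing: nothing to sell
            }
            int remaining = (Integer) goodReset.getValue();
            if (remaining - count < 0) {
                return false;                      // genuinely sold out
            }
            CASResponse response = client.cas(cacheKey, goodReset.getCas(), remaining - count);
            if (response == CASResponse.OK) {
                return true;                       // our decrement won the race
            }
            // another customer modified the value first: re-read the stock and try again
        }
        return false;                              // still contended after MAX_RETRY attempts
    }
}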
A Redis-based version will be added in a follow-up post.
III. Restart and overload protection
http://www.kuqin.com/shuoit/20141203/343669.html
http://developer.51cto.com/art/201601/503511.htm#topx
http://blog.youkuaiyun.com/csdn265/article/details/51461466
http://www.tuicool.com/articles/Erqm6z
http://wenku.baidu.com/link?url=TCQJMKUiXRPIu--jOuRgdGiBbX2s-1DjEId1L1IIjUiHmxioyVUIUU8ec11EorT0_KhDooZpHxtjDVw86G2FbJadN29V3bOz0MAmo4ldMYa