A Brief Look at the Times33 Hash Function

This article looks at the wide adoption of the Times33 hash function in projects such as Redis, PHP, and Memcached, where it is prized for being fast and producing few collisions. The core ideas covered include replacing the multiplication with bit operations and a loop-unrolling strategy geared toward short string keys. The article also covers the adjustments made in the latest PHP source to suit modern CPU architectures.

While idly flipping through 《Redis5 设计与源码分析》 I ran into the times33 hash function again. I remembered seeing it while studying the PHP source, so I decided to dig into it properly, but a long search turned up nothing satisfying. Parts of what follows are taken from Laruence's blog post introducing the hash algorithm used in PHP; the original is at http://www.laruence.com/2009/07/23/994.html

The hash PHP uses is the currently most widespread DJBX33A (Daniel J. Bernstein, Times 33 with Addition). The algorithm is widely used across many software projects: Apache, Perl, Berkeley DB, and so on. For strings it is one of the best hash algorithms known, because it is very fast and distributes very well (few collisions, even spread).

The core idea of the algorithm is simply: hash(i) = hash(i-1) * 33 + str[i]
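As a worked example (my own sketch, not from the quoted sources), here is the recurrence applied to the two-character string "ab" starting from hash = 0, the seed used by the classic Apache/Perl variant shown further below:

#include <stdio.h>

/* Worked example: times33 over "ab", starting from hash = 0
 * (the classic Apache/Perl variant; PHP starts from 5381 instead). */
int main(void)
{
    unsigned long hash = 0;
    hash = hash * 33 + 'a';   /* 0 * 33 + 97  = 97   */
    hash = hash * 33 + 'b';   /* 97 * 33 + 98 = 3299 */
    printf("%lu\n", hash);    /* prints 3299 */
    return 0;
}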
In zend_hash.h we can find PHP's implementation of this algorithm:

static inline ulong zend_inline_hash_func(char *arKey, uint nKeyLength)
{
    register ulong hash = 5381;
 
    /* variant with the hash unrolled eight times */
    for (; nKeyLength >= 8; nKeyLength -= 8) {
        hash = ((hash << 5) + hash) + *arKey++;
        hash = ((hash << 5) + hash) + *arKey++;
        hash = ((hash << 5) + hash) + *arKey++;
        hash = ((hash << 5) + hash) + *arKey++;
        hash = ((hash << 5) + hash) + *arKey++;
        hash = ((hash << 5) + hash) + *arKey++;
        hash = ((hash << 5) + hash) + *arKey++;
        hash = ((hash << 5) + hash) + *arKey++;
    }
    switch (nKeyLength) {
        case 7: hash = ((hash << 5) + hash) + *arKey++; /* fallthrough... */
        case 6: hash = ((hash << 5) + hash) + *arKey++; /* fallthrough... */
        case 5: hash = ((hash << 5) + hash) + *arKey++; /* fallthrough... */
        case 4: hash = ((hash << 5) + hash) + *arKey++; /* fallthrough... */
        case 3: hash = ((hash << 5) + hash) + *arKey++; /* fallthrough... */
        case 2: hash = ((hash << 5) + hash) + *arKey++; /* fallthrough... */
        case 1: hash = ((hash << 5) + hash) + *arKey++; break;
        case 0: break;
EMPTY_SWITCH_DEFAULT_CASE()
    }
    return hash;
}
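To try this function outside the PHP source tree, the Zend typedefs and the EMPTY_SWITCH_DEFAULT_CASE macro would have to be supplied. Here is a minimal standalone sketch instead: the unrolled body collapsed back into a plain loop, which computes the same value for ASCII keys (for bytes above 0x7f the unsigned char cast may differ from the original's plain char on signed-char platforms):

#include <stdio.h>
#include <string.h>

/* Same computation as the unrolled Zend function above, collapsed
 * back into a plain loop for brevity. */
static unsigned long djbx33a(const char *key, unsigned int len)
{
    unsigned long hash = 5381;
    while (len-- > 0) {
        hash = ((hash << 5) + hash) + (unsigned char)*key++;
    }
    return hash;
}

int main(void)
{
    const char *key = "example_key";  /* hypothetical sample key */
    printf("%lu\n", djbx33a(key, (unsigned int)strlen(key)));
    return 0;
}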

Compare this with the classic Times 33 algorithm used directly in Apache and Perl:

hashing function used in Perl 5.005:
  # Return the hashed value of a string: $hash = perlhash("key")
  # (Defined by the PERL_HASH macro in hv.h)
  sub perlhash
  {
      $hash = 0;
      foreach (split //, shift) {
          $hash = $hash*33 + ord($_);
      }
      return $hash;
  }

Looking at PHP's hash function, we can spot quite a few careful differences.

  • First, the most striking difference: PHP does not multiply by 33 directly; it computes (hash << 5) + hash with a shift and an addition, which is of course faster than the multiplication (a small sketch verifying the identity appears a few lines below).
  • Second, pay particular attention to the loop unrolling. A few days ago I read an article on Discuz!'s caching mechanism; one point was that Discuz! applies different caching strategies depending on how hot a thread is and, following user habits, caches only the first page of a thread (since people rarely page through).

Following a similar line of thought, PHP favors string keys of no more than eight characters: the loop is unrolled in units of eight to improve efficiency, and any shorter remainder falls straight through to the switch. A small but very thoughtful detail.
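Here is a quick sketch (mine, not PHP's) verifying the identity behind the first point: hash * 33 = hash * 32 + hash = (hash << 5) + hash.

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* hash * 33 == hash * 32 + hash == (hash << 5) + hash */
    unsigned long samples[] = { 0UL, 1UL, 5381UL, 123456789UL };
    for (int i = 0; i < 4; i++) {
        unsigned long h = samples[i];
        assert(h * 33 == (h << 5) + h);
    }
    puts("(hash << 5) + hash equals hash * 33 for all samples");
    return 0;
}

Whether the shift form is actually faster depends on the CPU, though: the newer php-src listing further below deliberately switches back to plain multiplication on x86 and ARM64, where the hardware multiplier is fast.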

On top of that there are the inline hint and the register variable... you can see how much effort PHP's developers have put into optimizing this hash.

Finally, the initial hash value is set to 5381, whereas both the Times 33 in Apache and the hash in Perl start from an initial hash of 0. Why 5381? I do not know the exact reason, but I did notice some properties of 5381:

Magic Constant 5381:
  1. odd number
  2. prime number
  3. deficient number
  4. 001/010/100/000/101 b

Looking at these, I have reason to believe this choice of initial value yields a better distribution.
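A small program to check the listed properties (a sketch using naive trial division; "deficient" means the proper divisors of 5381 sum to less than 5381, which is trivially true for a prime):

#include <stdio.h>

int main(void)
{
    unsigned n = 5381, divisor_sum = 0;
    int is_prime = 1;

    for (unsigned d = 1; d < n; d++) {
        if (n % d == 0) {
            divisor_sum += d;          /* sum of proper divisors */
            if (d > 1) is_prime = 0;   /* a divisor other than 1 */
        }
    }

    printf("odd:       %s\n", (n % 2) ? "yes" : "no");
    printf("prime:     %s\n", is_prime ? "yes" : "no");
    printf("deficient: %s\n", (divisor_sum < n) ? "yes" : "no");
    /* binary: 1 010 100 000 101, matching the grouping above */
    return 0;
}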

As for why it is Times 33 rather than Times some other number, the comment in PHP's hash source offers an explanation, which I hope is useful to interested readers:

  DJBX33A (Daniel J. Bernstein, Times 33 with Addition)
 
  This is Daniel J. Bernstein's popular 'times 33' hash function as
  posted by him years ago on comp.lang.c. It basically uses a function
  like "hash(i) = hash(i-1) * 33 + str[i]". This is one of the best
  known hash functions for strings. Because it is both computed very
  fast and distributes very well.
 
  The magic of number 33, i.e. why it works better than many other
  constants, prime or not, has never been adequately explained by
  anyone. So I try an explanation: if one experimentally tests all
  multipliers between 1 and 256 (as RSE did now) one detects that even
  numbers are not useable at all. The remaining 128 odd numbers
  (except for the number 1) work more or less all equally well. They
  all distribute in an acceptable way and this way fill a hash table
  with an average percent of approx. 86%.
 
  If one compares the Chi^2 values of the variants, the number 33 not
  even has the best value. But the number 33 and a few other equally
  good numbers like 17, 31, 63, 127 and 129 have nevertheless a great
  advantage to the remaining numbers in the large set of possible
  multipliers: their multiply operation can be replaced by a faster
  operation based on just one shift plus either a single addition
  or subtraction operation. And because a hash function has to both
  distribute good _and_ has to be very fast to compute, those few
  numbers should be preferred and seems to be the reason why Daniel J.
  Bernstein also preferred it.
  
                   -- Ralf S. Engelschall <rse@engelschall.com>

The gist of the comment above: Times 33 is a hash algorithm Daniel J. Bernstein posted on comp.lang.c years ago, built on a recurrence of the form "hash(i) = hash(i-1) * 33 + str[i]". Because it is both fast to compute and fairly evenly distributed, it has become one of the best known string hashes.

Why the magic number 33 works better than many other constants, prime or not, has never been adequately explained by anyone. Engelschall's conjecture: if you test every multiplier between 1 and 256 (as he did), the even numbers turn out to be unusable, while the remaining 128 odd numbers (except 1) all work roughly equally well, distributing acceptably and filling a hash table to about 86% on average.

Judged by Chi^2 values, 33 is not even the best performer. But 33 and a few equally good numbers such as 17, 31, 63, 127, and 129 have one clear advantage over the other odd multipliers: each differs from 16, 32, 64, or 128 by exactly 1, so the multiplication decomposes into a shift plus a single addition or subtraction, which computes faster. Since a hash function has to balance even distribution with cheap computation, that is presumably why Daniel J. Bernstein preferred it.
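Writing those decompositions out makes the point concrete; a minimal sketch (any input value works):

#include <assert.h>
#include <stdio.h>

int main(void)
{
    unsigned long x = 5381; /* arbitrary sample value */

    assert(x * 17  == (x << 4) + x);  /* 16 + 1  */
    assert(x * 31  == (x << 5) - x);  /* 32 - 1  */
    assert(x * 33  == (x << 5) + x);  /* 32 + 1  */
    assert(x * 63  == (x << 6) - x);  /* 64 - 1  */
    assert(x * 127 == (x << 7) - x);  /* 128 - 1 */
    assert(x * 129 == (x << 7) + x);  /* 128 + 1 */

    puts("all shift +/- add decompositions hold");
    return 0;
}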

Since Laruence's post dates back to 2009, I looked up the latest PHP source (2019-11-02, taken from the php-src repository on GitHub). The new version follows; you can see it has been tuned for today's CPU architectures:

/*
 * DJBX33A (Daniel J. Bernstein, Times 33 with Addition)
 *
 * This is Daniel J. Bernstein's popular `times 33' hash function as
 * posted by him years ago on comp.lang.c. It basically uses a function
 * like ``hash(i) = hash(i-1) * 33 + str[i]''. This is one of the best
 * known hash functions for strings. Because it is both computed very
 * fast and distributes very well.
 *
 * The magic of number 33, i.e. why it works better than many other
 * constants, prime or not, has never been adequately explained by
 * anyone. So I try an explanation: if one experimentally tests all
 * multipliers between 1 and 256 (as RSE did now) one detects that even
 * numbers are not usable at all. The remaining 128 odd numbers
 * (except for the number 1) work more or less all equally well. They
 * all distribute in an acceptable way and this way fill a hash table
 * with an average percent of approx. 86%.
 *
 * If one compares the Chi^2 values of the variants, the number 33 not
 * even has the best value. But the number 33 and a few other equally
 * good numbers like 17, 31, 63, 127 and 129 have nevertheless a great
 * advantage to the remaining numbers in the large set of possible
 * multipliers: their multiply operation can be replaced by a faster
 * operation based on just one shift plus either a single addition
 * or subtraction operation. And because a hash function has to both
 * distribute good _and_ has to be very fast to compute, those few
 * numbers should be preferred and seems to be the reason why Daniel J.
 * Bernstein also preferred it.
 *
 *
 *                  -- Ralf S. Engelschall <rse@engelschall.com>
 */

static zend_always_inline zend_ulong zend_inline_hash_func(const char *str, size_t len)
{
	zend_ulong hash = Z_UL(5381);

#if defined(_WIN32) || defined(__i386__) || defined(__x86_64__) || defined(__aarch64__)
	/* Version with multiplication works better on modern CPU */
	for (; len >= 8; len -= 8, str += 8) {
# if defined(__aarch64__) && !defined(WORDS_BIGENDIAN)
		/* On some architectures it is beneficial to load 8 bytes at a
		   time and extract each byte with a bit field extract instr. */
		uint64_t chunk;

		memcpy(&chunk, str, sizeof(chunk));
		hash =
			hash                        * 33 * 33 * 33 * 33 +
			((chunk >> (8 * 0)) & 0xff) * 33 * 33 * 33 +
			((chunk >> (8 * 1)) & 0xff) * 33 * 33 +
			((chunk >> (8 * 2)) & 0xff) * 33 +
			((chunk >> (8 * 3)) & 0xff);
		hash =
			hash                        * 33 * 33 * 33 * 33 +
			((chunk >> (8 * 4)) & 0xff) * 33 * 33 * 33 +
			((chunk >> (8 * 5)) & 0xff) * 33 * 33 +
			((chunk >> (8 * 6)) & 0xff) * 33 +
			((chunk >> (8 * 7)) & 0xff);
# else
		hash =
			hash   * 33 * 33 * 33 * 33 +
			str[0] * 33 * 33 * 33 +
			str[1] * 33 * 33 +
			str[2] * 33 +
			str[3];
		hash =
			hash   * 33 * 33 * 33 * 33 +
			str[4] * 33 * 33 * 33 +
			str[5] * 33 * 33 +
			str[6] * 33 +
			str[7];
# endif
	}
	if (len >= 4) {
		hash =
			hash   * 33 * 33 * 33 * 33 +
			str[0] * 33 * 33 * 33 +
			str[1] * 33 * 33 +
			str[2] * 33 +
			str[3];
		len -= 4;
		str += 4;
	}
	if (len >= 2) {
		if (len > 2) {
			hash =
				hash   * 33 * 33 * 33 +
				str[0] * 33 * 33 +
				str[1] * 33 +
				str[2];
		} else {
			hash =
				hash   * 33 * 33 +
				str[0] * 33 +
				str[1];
		}
	} else if (len != 0) {
		hash = hash * 33 + *str;
	}
#else
	/* variant with the hash unrolled eight times */
	for (; len >= 8; len -= 8) {
		hash = ((hash << 5) + hash) + *str++;
		hash = ((hash << 5) + hash) + *str++;
		hash = ((hash << 5) + hash) + *str++;
		hash = ((hash << 5) + hash) + *str++;
		hash = ((hash << 5) + hash) + *str++;
		hash = ((hash << 5) + hash) + *str++;
		hash = ((hash << 5) + hash) + *str++;
		hash = ((hash << 5) + hash) + *str++;
	}
	switch (len) {
		case 7: hash = ((hash << 5) + hash) + *str++; /* fallthrough... */
		case 6: hash = ((hash << 5) + hash) + *str++; /* fallthrough... */
		case 5: hash = ((hash << 5) + hash) + *str++; /* fallthrough... */
		case 4: hash = ((hash << 5) + hash) + *str++; /* fallthrough... */
		case 3: hash = ((hash << 5) + hash) + *str++; /* fallthrough... */
		case 2: hash = ((hash << 5) + hash) + *str++; /* fallthrough... */
		case 1: hash = ((hash << 5) + hash) + *str++; break;
		case 0: break;
EMPTY_SWITCH_DEFAULT_CASE()
	}
#endif

	/* Hash value can't be zero, so we always set the high bit */
#if SIZEOF_ZEND_LONG == 8
	return hash | Z_UL(0x8000000000000000);
#elif SIZEOF_ZEND_LONG == 4
	return hash | Z_UL(0x80000000);
#else
# error "Unknown SIZEOF_ZEND_LONG"
#endif
}
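To watch the forced high bit in action outside the Zend tree, here is a hypothetical standalone reduction of the 64-bit path: Z_UL and the Zend types replaced with plain C, and the per-architecture unrolling dropped, which does not change the resulting value for ASCII keys:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Standalone reduction: same recurrence, same forced high bit,
 * without the unrolling or the Zend macros. */
static uint64_t times33(const char *str, size_t len)
{
    uint64_t hash = 5381;
    while (len-- > 0) {
        hash = hash * 33 + (unsigned char)*str++;
    }
    /* Hash value can't be zero, so the high bit is always set. */
    return hash | 0x8000000000000000ULL;
}

int main(void)
{
    const char *keys[] = { "foo", "bar", "example_key" };
    for (int i = 0; i < 3; i++) {
        uint64_t h = times33(keys[i], strlen(keys[i]));
        printf("%-12s -> %016llx  high bit: %d\n",
               keys[i], (unsigned long long)h, (int)(h >> 63));
    }
    return 0;
}

Guaranteeing a nonzero hash presumably lets callers treat a stored hash value of 0 as "not yet computed".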

Parts of this article are excerpted from: http://www.laruence.com/2009/07/23/994.html
