sparse data structure - matrix

This article describes an efficient way to store a sparse matrix: a header array plus linked lists is used to cut unnecessary memory use, and the time cost of search, insert, delete and update operations is analyzed. It covers the data structure definition, the operation implementations, and a comparison with a plain two-dimensional array.

sparse data structure

 

sparse data structure - means most elements are empty (or share the same default value),

so we can try to use a better data structure to reduce memory use,
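
for example (made-up values), a 4 x 5 matrix where only 3 of the 20 entries are non-zero counts as sparse:

0 0 7 0 0
0 0 0 0 0
5 0 0 0 2
0 0 0 0 0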

 

------

matrix

 

we can use a 'header array' & 'linked lists' to represent a matrix,

 

how to:

define a data structure to represent an element of the matrix; each element contains: x, y, value, next,

use a linked list to store the elements of each column, in order of increasing x,

use a header array to store a pointer to the first element of each column, as illustrated below,
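
a minimal sketch of the resulting layout (made-up elements, 3 columns; x is the row index, y the column index):

colheader[0] -> (x=1, y=0, value=5) -> (x=4, y=0, value=9) -> NULL
colheader[1] -> NULL
colheader[2] -> (x=2, y=2, value=7) -> NULL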

 

memory:

only existing elements use memory (plus one pointer per column in the header array),
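
a rough, made-up comparison: a 1000 x 1000 int matrix stored as a plain two-dimensional array needs about 1000 * 1000 * 4 bytes = 4 MB; if only 1% of the elements exist (10,000 elements) and each node takes roughly 24 bytes (x, y, value, next, with padding on a 64-bit machine), the linked representation needs about 10,000 * 24 bytes = 240 KB, plus 1000 header pointers (8 KB).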

 

search time:

in a matrix of size a * b (a rows, b columns), the maximum element count is Nmax = a * b and the actual count is N, where 0 <= N <= Nmax,

then the average time to search an element by its coordinates (x, y) is proportional to:

N / (2b)

or equivalently

a * t / 2 (where t = N / Nmax is the fill ratio, and Nmax = a * b)
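
a small worked example with made-up numbers: for a = 100 rows, b = 50 columns and N = 500 stored elements, the average column list holds N / b = 10 nodes, so a search scans about N / (2b) = 5 nodes on average; equivalently t = N / (a * b) = 0.1, and a * t / 2 = 100 * 0.1 / 2 = 5.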

 

drawback:

the search/insert/delete/update operations take somewhat longer than on the original matrix (a two-dimensional array), because a two-dimensional array offers O(1) direct indexing while this structure has to traverse a column's linked list,

 

------

code

 

#include <stdio.h>
/**
 * (sparse data structure - use less memory to store elements)
 * a matrix might be sparse; we can use a header array & linked lists to represent it, which can save a lot of memory,
 */

/** data structure for each element in the linked list */
struct matrix_ele { 
	int x,y,value;           // x: row index, y: column index, value: the stored value,
	struct matrix_ele *next; // next element in the same column, in order of increasing x,
};
typedef struct matrix_ele matrix_ele;

/**
 * add an element to the matrix,
 * @param colheader
 	stores the pointer to each column's first element,
 * @param me_new
 	pointer to the new element to insert,
 * 
 * @return
 	1 means a new element was added, 0 means the value of an existing element was updated,
 */

int matrix_add(matrix_ele **colheader, matrix_ele *me_new) {
	matrix_ele *me = *(colheader + me_new->y);
	if(me == NULL) { // linked list is empty
		*(colheader + me_new->y) = me_new;
		return 1;
	} else { // linked list is not empty
		matrix_ele *me_pre = NULL;
		for(; me != NULL; me = me->next) {
			if(me->x == me_new->x) { // element already exists,
				me->value = me_new->value; // update the value of the existing element,
				return 0;
			} else if(me->x > me_new->x) {
				if(me_pre == NULL) { // insert before the first element in linked list,
					me_new->next = me;
					*(colheader+me_new->y) = me_new;
				}
				else { // the one replaced is not first in linked list,
					me_new->next = me;
					me_pre->next = me_new;
				}
				return 1;
			} else {
				me_pre = me; // record previous ele in linked list,
			}
		}

		// new element has largest x, put it at the end of linked list,
		me_pre->next = me_new;
		return 1;
	}
}

/**
 * delete an element from the matrix,
 * @param colheader
 	stores the pointer to each column's first element,
 * @param me_del
 	pointer to the element to delete (matched by x and y),
 * 
 * @return
 	1 means the element was deleted, -1 means the element does not exist,
 */

int matrix_del(matrix_ele **colheader, matrix_ele *me_del) {
	matrix_ele *me = *(colheader + me_del->y);
	if(me == NULL) { // linked list is empty
		return -1;
	} else { // linked list is not empty
		matrix_ele *me_pre = NULL;
		for(; me != NULL; me = me->next) {
			if(me->x == me_del->x) { // element found
				if(me_pre == NULL) { // deleted element is the first element in linked list
					*(colheader + me_del->y) = me->next;
				} else {
					me_pre->next = me->next;
				}
				me->next = NULL; // detach the removed element from the rest of the list,
				return 1;
			} else if(me->x < me_del->x) {
				me_pre = me; // record previous ele in linked list,
			} else { // not found
				return -1;
			}
		}
	}
	return -1; // reached the end of the column list: the element does not exist,
}

/**
 * search for an element in the matrix by (x, y); return its value, or -1 if not found (this assumes -1 is never stored as a value),
 * average time: N / (2b), (N is the total element count, b is the number of columns)
 */
int matrix_search(matrix_ele **colheader, int x, int y) {
	matrix_ele *me = *(colheader+y);
	for(; me != NULL; me = me->next) {
		if(me->x == x)
			return me->value;
	}
	return -1;
}

int main() {
	int row = 10,col = 20; // matrix dimensions (row is only documentation here),
	matrix_ele *colheader[20];
	int i;
	for(i = 0;i<col;i++) { // initialize all pointers to NULL; without this their values would be indeterminate and bugs might happen,
		colheader[i] = NULL;
	}
	matrix_ele e_1_1 = {1,1,101,NULL};
	matrix_ele e_3_1 = {3,1,301,NULL};
	matrix_ele e_4_1 = {4,1,401,NULL};
	matrix_ele e_3_2 = {3,2,302,NULL};
	matrix_ele e_6_2 = {6,2,602,NULL};
	matrix_ele e_7_2 = {7,2,702,NULL};

	// add
	matrix_add(colheader, &e_1_1);
	matrix_add(colheader, &e_3_1);
	matrix_add(colheader, &e_4_1);
	matrix_add(colheader, &e_3_2);
	matrix_add(colheader, &e_6_2);
	matrix_add(colheader, &e_7_2);

	// search
	int v_1_1 = matrix_search(colheader, 1, 1);
	printf("expect: %d,\tactual: %d,\n",101,v_1_1);

	int v_3_1 = matrix_search(colheader, 3, 1);
	printf("expect: %d,\tactual: %d,\n",301,v_3_1);

	int v_4_1 = matrix_search(colheader, 4, 1);
	printf("expect: %d,\tactual: %d,\n",401,v_4_1);

	int v_3_2 = matrix_search(colheader, 3, 2);
	printf("expect: %d,\tactual: %d,\n",302,v_3_2);

	int v_6_2 = matrix_search(colheader, 6, 2);
	printf("expect: %d,\tactual: %d,\n",602,v_6_2);

	int v_7_2 = matrix_search(colheader, 7, 2);
	printf("expect: %d,\tactual: %d,\n",702,v_7_2);

	int v_4_2 = matrix_search(colheader, 4, 2);
	printf("expect: %d,\tactual: %d,\n",-1,v_4_2);

	int v_x_x = matrix_search(colheader, 3, 9);
	printf("expect: %d,\tactual: %d,\n",-1,v_x_x);

	// test delete
	int d_3_1 = matrix_del(colheader, &e_3_1);
	v_3_1 = matrix_search(colheader,3,1);
	printf("expect: %d,%d,\tactual: %d,%d,\n",1,-1,d_3_1,v_3_1);

	matrix_ele e_x_x = {3,9,309,NULL};
	int d_x_x = matrix_del(colheader, &e_x_x);
	v_x_x = matrix_search(colheader,3,9);
	printf("expect: %d,%d,\tactual: %d,%d,\n",-1,-1,d_x_x,v_x_x);

	return 0;
}
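
to try the code above, assuming it is saved as sparse_matrix.c (the file name here is just for illustration), it can be compiled and run like this:

gcc sparse_matrix.c -o sparse_matrix
./sparse_matrix

each printed line should then show matching 'expect' and 'actual' values.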
 

 
