A Possible Deep-Copy Problem in Operator Overloading

This article explains why implementing a deep copy matters in C++, especially for classes that hold dynamically allocated memory. By walking through the problems a shallow copy can cause, such as confused memory ownership and double deletion, it shows a correct deep-copy implementation.

Normally the compiler supplies a default copy constructor to handle copying. But in some cases a class member must allocate heap memory dynamically, and if we let the compiler-generated copy handle this instead of defining our own copy constructor, ownership of that heap memory becomes confused. A heap block originally allocated by object a ends up, after the copy, also held by object b at the very same address. When destruction happens and the heap is released, the program has no way to know which object truly owns that block, and when the destructor runs a second time on the same address, a runtime error (a double free) occurs.

A more detailed explanation is available at http://pcedu.pconline.com.cn/empolder/gj/c/0503/570112_1.html

 

As in my forum post http://topic.youkuaiyun.com/u/20100514/17/aa2ea6c2-0fb8-4fc4-b467-d08f979fa5d4.html

[code=C/C++]
CMatrix& CMatrix::operator=(const CMatrix &other)
{
    if (&other == this)
    {
        return *this;
    }
    m_NumColumns = other.m_NumColumns;
    m_NumRows    = other.m_NumRows;
    m_pData      = other.m_pData;   // shallow copy: both objects now point at the same buffer
    return *this;
}
[/code]

In this code, `m_pData = other.m_pData;` is only a shallow copy, not a deep one: both objects end up pointing at the same heap block. Once the temporary object is destructed, the later destruction of the remaining object frees that same memory again, which is a serious error.

Changing it like this fixes the problem:

[code=C/C++]
CMatrix::CMatrix(const CMatrix &other)
{
    m_NumColumns = other.m_NumColumns;
    m_NumRows    = other.m_NumRows;
    // allocate a fresh buffer and copy the elements: a deep copy
    m_pData = new double[m_NumColumns * m_NumRows];
    memcpy(m_pData, other.m_pData, m_NumColumns * m_NumRows * sizeof(double));
}
[/code]

Introduction
============
This is a class for symmetric-matrix computations. It can be used for symmetric matrix diagonalization and inversion. Given the covariance matrix, users can apply the class to principal component analysis (PCA) and Fisher discriminant analysis (FDA). It can also be used for some elementary matrix and vector computations.

Usage
=====
It is a C++ program for symmetric matrix diagonalization, inversion and principal component analysis (PCA). To use it, define an instance of the CMatrix class, initialize the matrix, call the public functions, and finally free the matrix. For example, for PCA:

    CMatrix theMat;             // define CMatrix instance
    float** C;                  // define n*n matrix
    C = theMat.allocMat( n );
    // ... fill C (e.g., the covariance matrix computed from the data) ...
    float *phi, *lambda;        // eigenvectors and eigenvalues
    int vecNum;                 // number of eigenvectors (<= n)
    phi = new float [n*vecNum];
    lambda = new float [vecNum];
    theMat.PCA( C, n, phi, lambda, vecNum );
    delete [] phi;
    delete [] lambda;
    theMat.freeMat( C, n );

The matrix diagonalization function can also be applied to singular value decomposition (SVD), Fisher linear discriminant analysis (FLDA) and kernel PCA (KPCA) if the symmetric matrix is formed appropriately. For data of very high dimensionality n, computing an n*n matrix is very expensive on a personal computer; but if the number m of samples (vectors) is smaller than the dimensionality, the problem can be converted to the computation of an m*m matrix. Readers are referred to the KPCA paper for how to form the m*m matrix: B. Schölkopf, A. Smola, K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem, Neural Computation, 10(5): 1299-1319, 1998.

Example
=======
Refer to the `example' directory for a simple demonstration.