
Math & Stat
Average article quality score: 94
EverNoob
simply bumping around
Descriptive Statistics
Measures of … Repost · 2024-08-01 15:39:09 · 115 reads · 0 comments
DeConvolution(Transposed Convolution)
DeConv fundamentals. Original · 2023-11-09 20:47:31 · 269 reads · 0 comments
Automatic Differentiation
For beginners, the most daunting aspect of deep learning algorithms is perhaps Back-Propagation (BP), which requires derivations of some highly complex mathematical expressions. Luckily, when actually implementing BP, we do not have to rely on summary symbolic … Original · 2023-07-28 13:46:54 · 320 reads · 0 comments
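The point above — that an implementation builds gradients from local rules rather than one big symbolic derivation — can be sketched with a minimal reverse-mode autodiff on scalars (the `Var` class and all names here are hypothetical, for illustration only):

```python
class Var:
    """Scalar value that records how it was computed, for reverse-mode AD."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent_var, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Chain rule applied node-by-node; no global symbolic expression needed.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x      # z = xy + x, so dz/dx = y + 1 = 5 and dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Each operator only knows its own local derivative; BP emerges from chaining them backward through the recorded graph.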
Bilinear Interpolation 双线性插值
Summary: 双线性插值(Bilinear Interpolation) - 马语者 - 博客园. Intro: 一篇文章为你讲透双线性插值 - 知乎. Bilinear interpolation computes 1 point of the new image from 4 (2×2) points of the original image; its quality is slightly below bicubic interpolation but it is faster — a balanced trade-off that makes it the default algorithm in many frameworks. 1. Nearest Interpolation: the nearest-neighbor method requires no computation, only a lookup of the corresponding point in the original image, so it is the fastest, but it damages the original image's … Original · 2022-01-21 14:05:51 · 3256 reads · 0 comments
Hierarchical Clustering: Agglomerative and Divisive
efficient … accurate … Repost · 2023-04-04 18:05:53 · 192 reads · 0 comments
RC, RL, LC, RLC
All taken from Wikipedia. RC: https://en.wikipedia.org/wiki/RC_circuit, using Kirchhoff's current law. Series Circuit: here we exploit reactance/impedance, see later in the capacitor section. "s" is for "second", since Q = I * t and C = Q/V ==> 1/C ~ Oh… Repost · 2022-05-10 15:11:44 · 564 reads · 0 comments
Cross Entropy (Loss)
Cross Entropy: https://en.wikipedia.org/wiki/Cross_entropy. Cross Entropy Loss: https://towardsdatascience.com/cross-entropy-loss-function-f38c4ec8643e. A Gentle Introduction to Cross-Entropy for Machine Learning. Repost · 2022-05-08 12:35:59 · 938 reads · 0 comments
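The definition behind both links — H(p, q) = -Σᵢ pᵢ log qᵢ — fits in a few lines; a minimal sketch (the `cross_entropy` helper and the `eps` guard are my own, not from the linked posts):

```python
import math

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_i p_i * log(q_i); eps guards against log(0)."""
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

# With a one-hot target, cross entropy reduces to -log of the
# predicted probability of the true class (the usual classification loss).
target = [0.0, 1.0, 0.0]
pred   = [0.1, 0.7, 0.2]
loss = cross_entropy(target, pred)
print(round(loss, 4))  # -log(0.7) ≈ 0.3567
```

A confident correct prediction (q → 1 on the true class) drives the loss toward 0; a confident wrong one blows it up, which is why it trains classifiers well.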
RoI: Region of Interest Projection and Pooling
RoI is a technique/layer introduced in the Fast-RCNN paper: https://arxiv.org/abs/1504.08083. Here is an easy-to-read intro: Understanding Region of Interest (RoI Pooling) - Blog by Kemal Erdem. ==> in short, RoI projection shrinks the RoI after the CNN pre-proces… Repost · 2022-05-06 11:19:17 · 326 reads · 0 comments
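The pooling half of the technique — turning an arbitrary-sized region of the feature map into a fixed-size grid — can be sketched as follows (a simplified illustration, not the paper's implementation: the `roi_max_pool` helper is hypothetical and assumes the RoI divides evenly into the output bins):

```python
def roi_max_pool(fmap, roi, out_h, out_w):
    """Max-pool the region roi = (x0, y0, x1, y1) of a 2-D feature map
    down to a fixed out_h x out_w grid, one max per bin."""
    x0, y0, x1, y1 = roi
    bin_h = (y1 - y0) // out_h
    bin_w = (x1 - x0) // out_w
    return [[max(fmap[y][x]
                 for y in range(y0 + i * bin_h, y0 + (i + 1) * bin_h)
                 for x in range(x0 + j * bin_w, x0 + (j + 1) * bin_w))
             for j in range(out_w)]
            for i in range(out_h)]

fmap = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 feature map
print(roi_max_pool(fmap, (0, 0, 4, 4), 2, 2))  # [[5, 7], [13, 15]]
```

The fixed output size is the point: it lets regions of any shape feed the same fully-connected head.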
Ridge, Lasso, Group Lasso and Sparse Group Lasso
Main Article ==> this is a great introductory article with visual cues about the statistical regularization techniques: https://en.wikipedia.org/wiki/Lasso_(statistics) (secondary title: Complete Guide Using Scikit-Learn). Moving on from a very impor… Repost · 2022-04-19 15:40:01 · 804 reads · 0 comments
(Weight) Sparsity in Deep Learning
SOTA Overview: [Submitted on 31 Jan 2021] Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks, https://arxiv.org/abs/2102.00554. The growing energy and performance costs of deep learning have driven th… Original · 2022-04-20 18:43:40 · 3790 reads · 0 comments
PINN: Physics Informed Neural Networks
Intro: https://en.wikipedia.org/wiki/Physics-informed_neural_networks. Physics-informed neural networks (PINNs) are a type of universal function approximators that can embed the knowledge of any physical laws that govern a given data set in the learning p… Original · 2022-04-15 10:42:58 · 2023 reads · 0 comments
Batch Normalization: BP
Understanding the backward pass through Batch Normalization Layer, Flair of Machine Learning, posted on February 12, 2016. (For intro and how it possibly could work, see: Batch Normalization_EverNoob的博客-优快云博客.) (For a concise mathematical solution, see: B… Repost · 2022-03-28 15:46:10 · 274 reads · 0 comments
Batch Normalization: Basics and Intuition
Wiki Intro: https://en.wikipedia.org/wiki/Batch_normalization ==> this wiki article is technical enough for further reference on related concepts and deeper looks. Batch normalization (also known as batch norm) is a method used to make artificial neu… Repost · 2022-03-28 14:27:09 · 1128 reads · 0 comments
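The forward computation the wiki article describes — normalize each batch to zero mean and unit variance, then apply a learnable scale and shift — is short enough to sketch (the `batch_norm` helper is hypothetical; real layers also track running statistics for inference, omitted here):

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """y_i = gamma * (x_i - mean) / sqrt(var + eps) + beta,
    with mean and var computed over the batch."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

batch = [1.0, 2.0, 3.0, 4.0]
normed = batch_norm(batch)
# The normalized batch has (near-)zero mean and (near-)unit variance;
# gamma and beta then let the network undo the normalization if useful.
print([round(v, 3) for v in normed])
```

The eps term keeps the division stable when the batch variance is tiny; gamma and beta are the layer's learnable parameters.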
Means
From Measures of Central Tendency: The means, J Pharmacol Pharmacother. 2011 Apr-Jun; 2(2): 140–142. doi: 10.4103/0976-500X.81920. PMCID: PMC3127352. PMID: 21772786. Measures of central tendency: The mean, S. Manikandan. Author information, Copyright and … Repost · 2022-03-22 12:35:42 · 224 reads · 0 comments