- Disentangling Feature Extractor
What is Disentangled Representation Learning?
Disentangled representation is an unsupervised learning technique that breaks down, or disentangles, each feature into narrowly defined variables and encodes them as separate dimensions. The goal is to mimic the quick intuition of a human, which uses both “high”- and “low”-dimensional reasoning.
For example, in a predictive network processing images of people, “higher-dimensional” features such as height and clothing would be used to determine sex. In a generative-network version of that model, designed to produce images of people from a stock photo database, these would be broken down into separate, lower-dimensional features such as the total height of each person, the length of the arms and legs, the type of shirt, the type of pants, the type of shoes, and so on.
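As a minimal sketch of what “separate dimensions” means, assuming hypothetical factor names drawn from the example above, a disentangled code simply dedicates one latent coordinate to each generative factor, so editing one factor changes exactly one coordinate:

```python
import numpy as np

# Hypothetical factor names taken from the example above.
FACTOR_NAMES = ["total_height", "arm_length", "leg_length",
                "shirt_type", "pants_type", "shoe_type"]

def encode_person(height_cm, arm_cm, leg_cm, shirt_id, pants_id, shoe_id):
    """Pack each generative factor into its own latent dimension."""
    return np.array([height_cm, arm_cm, leg_cm, shirt_id, pants_id, shoe_id],
                    dtype=np.float32)

z = encode_person(172.0, 60.0, 80.0, shirt_id=2, pants_id=1, shoe_id=3)

# Editing one factor touches exactly one coordinate and leaves the rest
# intact, which is what makes such a code useful for controlled generation.
z_taller = z.copy()
z_taller[FACTOR_NAMES.index("total_height")] = 190.0

print(dict(zip(FACTOR_NAMES, z)))
print(dict(zip(FACTOR_NAMES, z_taller)))
```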
Distributed vs Disentangled Representation:
In disentangled representation, a single node, or neuron, can learn a complete feature independently of the other nodes if the disentangled dimensions match the output’s dimensions. For example, when trying to tell the difference between a child and an adult, a node checking the disentangled left-leg-length dimension can conclude on its own that the whole picture shows a child, because the measured value falls far below that parameter’s maximum, even though that wasn’t the node’s objective.
In distributed representation, objects are represented by their particular location in the vector space, so the algorithm must “run its course” through all of the nodes before arriving at the output “child.”
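A toy sketch of the contrast, with assumed numbers and an arbitrary Hadamard mixing matrix standing in for a distributed encoding: in the disentangled code, thresholding the single leg-length coordinate already tells child from adult, while in the mixed (distributed) code no single coordinate is reliable and only a readout over all of them recovers the label:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
is_adult = np.arange(n) >= n // 2

# Disentangled code: dim 0 is a standardized leg length, dims 1-3 are
# unrelated factors. Children sit around -1 on dim 0, adults around +1.
leg = np.where(is_adult, 1.0, -1.0) + rng.normal(0, 0.15, n)
other = rng.normal(0, 1, (n, 3))
z_dis = np.column_stack([leg, other])

# A single node thresholding dim 0 already recovers the label.
print("disentangled, one dimension :", ((z_dis[:, 0] > 0) == is_adult).mean())

# Distributed code: mix the factors with an orthogonal (Hadamard) matrix so
# the leg-length signal is spread equally over every coordinate.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0
z_dist = z_dis @ H

# No single coordinate is a reliable child/adult detector any more ...
best_single = max(((z_dist[:, j] > 0) == is_adult).mean() for j in range(4))
print("distributed, best single dim:", best_single)

# ... but a readout over all coordinates (here: undoing the mixing, since
# H is its own inverse) separates the classes perfectly again.
print("distributed, all dimensions :",
      (((z_dist @ H)[:, 0] > 0) == is_adult).mean())
```

The orthogonal mixing loses no information; it only redistributes it, which is exactly the sense in which a distributed representation forces the whole vector to be read.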
- Feature extractor must be nonlinear
Feature extraction must be nonlinear; a linear feature map cannot turn a nonlinear problem into a linear one.
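A small illustration of this point using the classic XOR problem (the grid search and the handcrafted x1·x2 feature are choices made for this sketch, not something from the note): no linear classifier on the raw inputs gets past 3/4 accuracy, but adding one nonlinear feature makes the problem linearly separable:

```python
import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels

def best_linear_accuracy(features):
    """Brute-force a grid of linear classifiers sign(w . f + b) and return
    the best accuracy any of them reaches on the four XOR points."""
    grid = np.linspace(-2, 2, 17)
    best = 0.0
    for w in itertools.product(grid, repeat=features.shape[1]):
        for b in grid:
            pred = (features @ np.array(w) + b > 0).astype(int)
            best = max(best, float((pred == y).mean()))
    return best

# With only the raw (linearly extracted) inputs, no linear classifier does
# better than 3/4 -- XOR stays a nonlinear problem.
print("raw linear features      :", best_linear_accuracy(X))

# One nonlinear feature, x1 * x2, makes the same problem linearly separable.
X_nl = np.column_stack([X, X[:, 0] * X[:, 1]])
print("with a nonlinear feature :", best_linear_accuracy(X_nl))
```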
With p points in an n-dimensional space, the probability that they are linearly separable (under a random labeling) is high when p is small relative to n and low when p is large relative to n.
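This is essentially Cover's function-counting theorem. The sketch below evaluates the exact fraction of the 2^p labelings of p points in general position that a hyperplane through the origin can realize in R^n (the particular p and n values are arbitrary): the fraction stays at 1 while p ≤ n, drops to 1/2 at p = 2n, and falls toward 0 beyond that:

```python
from math import comb

def frac_separable(p, n):
    """Cover's count C(p, n) = 2 * sum_{k < n} comb(p - 1, k): the number of
    the 2**p labelings of p points in general position in R^n that a
    hyperplane through the origin can realize, as a fraction of 2**p."""
    return 2 * sum(comb(p - 1, k) for k in range(n)) / 2 ** p

n = 20
for p in (10, 20, 40, 80):
    print(f"p={p:>3}, n={n}: P(linearly separable) = {frac_separable(p, n):.3f}")
```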
ref: https://deepai.org/machine-learning-glossary-and-terms/disentangled-representation-learning