Emmm, getting carried along by the experts in my group...
I Introduction
Background
Over the past decade or so, artificial intelligence, especially deep learning, has grown by leaps and bounds. Researchers are constantly experimenting with new engineering designs and different neural-network architectures; countless AI applications have emerged, ushering in the dawn of a new era. Disappointingly, however, the theoretical foundations of deep learning remain weak and underdeveloped compared with what we have achieved in AI engineering.
Scientists have made many attempts to interpret neural networks. From applying t-SNE maps to datasets to visualizing the attention maps of transformers, these approaches provide intuitive ways to probe the inner workings of AI. Explaining AI requires a two-pronged approach: understanding what it is learning and how it is learning it. Topology provides a rationale for both, along with intuitive interpretations.
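To make the first technique concrete, here is a minimal sketch, assuming scikit-learn and matplotlib are available; the digits dataset and the hyperparameters are illustrative assumptions, not choices from the original text.

```python
# Minimal sketch: project 64-dimensional digit images to 2-D with t-SNE
# and color each point by its class label.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

X, y = load_digits(return_X_y=True)  # X: (1797, 64) raw pixel vectors

# perplexity=30 and random_state=0 are illustrative, not canonical values
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

plt.scatter(emb[:, 0], emb[:, 1], c=y, s=5, cmap="tab10")
plt.title("t-SNE embedding of the digits dataset")
plt.show()
```

Clusters that emerge in the 2-D plot suggest how the raw high-dimensional data is organized, which is exactly the kind of intuition these visualization methods are meant to provide.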

Manifold Hypothesis (Methods)
Early in the development of neuroscience and manifold learning, when deep learning had barely begun to evolve, scientists proposed a hypothesis to explain the intrinsic nature of datasets, which came to be called the Manifold Hypothesis.
The manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space. This hypothesis is widely accepted in neuroscience, and many interesting theories have been built on top of it. Neuroscientists believe that human action is composed of a small set of neural modes, and that the latent variables controlling these neural modes are distributed over a lower-dimensional manifold. A manifold shared between different individuals may form the root of empathy and perception.
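The hypothesis is easy to demonstrate on synthetic data. Below is a minimal sketch, again assuming scikit-learn: points sampled in 3-D actually lie on a 2-D "swiss roll" manifold, and a manifold-learning method (Isomap here, an illustrative choice) recovers the low-dimensional latent coordinates.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points that in fact lie on a 2-D swiss-roll surface; t is the
# underlying 1-D unrolling parameter of each point.
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Isomap approximates geodesic distances along the manifold and embeds
# the points in 2-D, "unrolling" the roll.
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(emb.shape)  # (1500, 2): the recovered low-dimensional coordinates
```

The ambient dimension here is 3 only so the example stays small; the same picture is what the hypothesis claims for, say, image datasets living in pixel space.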
To study the topological properties
