AI Resources Digest: Issue 27 (20170208)

This issue looks at PathNet, the modular deep learning architecture proposed by DeepMind, and its relevance to artificial general intelligence, and introduces how generative adversarial networks (GANs) work and why they hold promise. PathNet achieves modular learning by combining evolution channels with gradient descent inside a super neural network, while GANs are framed as a game between two networks.



1. [Blog] DeepMind's PathNet: A Modular Deep Learning Architecture for AGI

Summary:

PathNet is a new Modular Deep Learning (DL) architecture, brought to you by who else but DeepMind, that highlights the latest trend in DL research: melding Modular Deep Learning, Meta-Learning and Reinforcement Learning into a solution that leads to more capable DL systems. An arXiv paper submitted on January 20th, 2017, "PathNet: Evolution Channels Gradient Descent in Super Neural Networks" (Fernando et al.), gives an interesting description of the work in its abstract.
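The paper's title hints at the mechanism: pathways through a fixed "super network" of modules compete via tournament selection, while gradient descent trains the modules a pathway activates. The following is a deliberately simplified sketch of that evolutionary loop; the grid size, the stand-in `fitness` function, and all constants are illustrative assumptions, not the paper's actual setup (where fitness would come from training the active modules on a task).

```python
import random

# Simplified sketch of PathNet-style pathway evolution.
# A pathway selects a few active modules per layer of a super network;
# binary tournaments copy and mutate the winning pathway.
random.seed(0)
LAYERS, MODULES = 3, 10  # hypothetical super network: 3 layers x 10 modules

def random_pathway():
    # A pathway activates 3 of the 10 modules in each layer.
    return [sorted(random.sample(range(MODULES), 3)) for _ in range(LAYERS)]

def fitness(path):
    # Stand-in for "train the active modules by SGD and measure accuracy";
    # here we simply reward pathways that use low-index modules.
    return -sum(m for layer in path for m in layer)

def mutate(path):
    # Copy the pathway and re-draw one module slot at random.
    new = [layer[:] for layer in path]
    layer, slot = random.randrange(LAYERS), random.randrange(3)
    new[layer][slot] = random.randrange(MODULES)
    new[layer].sort()
    return new

population = [random_pathway() for _ in range(16)]
for _ in range(200):
    # Binary tournament: the winner's mutated copy overwrites the loser.
    a, b = random.sample(range(len(population)), 2)
    if fitness(population[a]) < fitness(population[b]):
        a, b = b, a  # ensure a is the winner
    population[b] = mutate(population[a])

best = max(population, key=fitness)
print(fitness(best))
```

In the real architecture, modules on the winning pathway keep their gradient-trained weights, which is what lets later tasks reuse what earlier tasks learned.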

Link: https://medium.com/intuitionmachine/pathnet-a-modular-deep-learning-architecture-for-agi-5302fcf53273#.9uwk331d5


2. [Blog] On the intuition behind deep learning & GANs—towards a fundamental understanding

Summary:

A generative adversarial network (GAN) is composed of two separate networks - the generator and the discriminator. It poses the unsupervised learning problem as a game between the two. In this post we will see why GANs have so much potential, and frame GANs as a boxing match between two opponents.
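The "game" between the two networks can be written down as a value function the discriminator tries to maximize and the generator tries to minimize. Below is a minimal numeric sketch of that value function on toy 1-D data; the fixed logistic discriminator and affine generator are made-up stand-ins (real training would update both by gradient steps).

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x, w=2.0, b=0.0):
    """Discriminator: probability that sample x is real (logistic score)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def G(z, shift=-1.0):
    """Generator: maps latent noise z to a fake sample."""
    return z + shift

real = rng.normal(loc=1.0, scale=0.5, size=1000)  # "real" data distribution
noise = rng.normal(size=1000)                     # latent noise fed to G

# GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]:
# D maximizes it, G minimizes it.
value = np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(G(noise))))
print(value)
```

Training alternates gradient steps on this objective: the discriminator sharpens its scores, the generator shifts its output toward the real distribution, and the match continues.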

Link: https://hackernoon.com/introduction-to-gans-a-boxing-match-b-w-neural-nets-b4e5319cc935#.lyaaodaih


3. [Code] nmtpy

Summary:

**nmtpy** is a suite of Python tools, primarily based on the starter code provided in **dl4mt-tutorial**, for training neural machine translation networks using Theano.

The basic motivation behind forking **dl4mt-tutorial** was to create a framework where it would be easy to implement a new model by just copying and modifying an existing model class (or even inheriting from it and overriding some of its methods).
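The inherit-and-override workflow described above can be sketched as follows; the class and method names are purely illustrative and are not nmtpy's actual API.

```python
class AttentionNMT:
    """Hypothetical baseline encoder-decoder model with attention."""

    def build_encoder(self, src):
        # Stand-in for building a bidirectional RNN encoder graph.
        return f"bi-rnn({src})"

    def build_decoder(self, ctx):
        # Stand-in for building a conditional GRU decoder graph.
        return f"cond-gru({ctx})"

    def build(self, src):
        return self.build_decoder(self.build_encoder(src))

class MultimodalNMT(AttentionNMT):
    """A new model: inherit the baseline, override only what changes."""

    def build_decoder(self, ctx):
        # Fuse an (illustrative) image context into the decoder.
        return f"cond-gru({ctx} + img-feats)"

print(AttentionNMT().build("tokens"))   # cond-gru(bi-rnn(tokens))
print(MultimodalNMT().build("tokens"))  # cond-gru(bi-rnn(tokens) + img-feats)
```

The point of the design is that a new model only touches the methods it actually changes, so experiments stay small diffs against a working baseline.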

Link: https://github.com/lium-lst/nmtpy


4. [Blog] Demystifying Word2Vec

Summary:

Research into word embeddings is one of the most interesting areas in the deep learning world at the moment, even though embeddings were introduced as early as 2003 by Bengio et al. Most prominent among these new techniques has been a group of related algorithms commonly referred to as Word2Vec, which came out of Google research.

In this post we are going to investigate the significance of Word2Vec for NLP research going forward, and how it relates and compares to prior art in the field. In particular, we are going to examine some desired properties of word embeddings and the shortcomings of other popular approaches centered around the concept of a Bag of Words (henceforth referred to simply as BoW), such as Latent Semantic Analysis. This shall motivate a detailed exposition of how and why Word2Vec works, and whether the word embeddings derived from this method can remedy some of the shortcomings of BoW-based approaches. Word2Vec and the concept of word embeddings originate in the domain of NLP; however, as we shall see, the idea of words in the context of a sentence or a surrounding word window can be generalized to any problem domain dealing with sequences or sets of related data points.
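One BoW shortcoming the post alludes to can be shown in a few lines: one-hot (BoW-style) word vectors are mutually orthogonal, so cosine similarity sees no relation between any two distinct words, while dense embeddings can place related words close together. The 3-dimensional vectors below are made up for illustration; real Word2Vec embeddings are learned from context windows.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One-hot (BoW-style) vectors: every distinct word is orthogonal.
cat_oh, dog_oh = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

# Hypothetical dense embeddings: related words share a direction.
cat_e = np.array([0.9, 0.8, 0.1])
dog_e = np.array([0.8, 0.9, 0.2])
car_e = np.array([0.1, 0.2, 0.9])

print(cosine(cat_oh, dog_oh))                       # 0.0 -- no relation visible
print(cosine(cat_e, dog_e) > cosine(cat_e, car_e))  # True
```

This is the property that makes embeddings useful as inputs to downstream models: similarity in meaning becomes similarity in geometry.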

Link: http://www.deeplearningweekly.com/blog/demystifying-word2vec


5. [Blog] Highlights and tutorials for concepts discussed in "Richard Socher on the future of deep learning"

Summary:

Bruner, Jon. "Richard Socher on the Future of Deep Learning." The O'Reilly Bots Podcast, audio blog post. O'Reilly, December 1, 2016.

Raw interview: I highly encourage listening to the podcast because the questions were so well crafted.

TL;DR: Richard Socher of Salesforce (formerly Stanford and MetaMind) offers insight into the current and future state of deep learning for NLP. We need one model that can do lots of different tasks, and we need to be wary of bias in our models. The future of conversational bots is multimodal, and Salesforce research is awesome.

Disclaimer: This is my interpretation of the interview. I have included the pertinent questions I found interesting.

Link: https://theneuralperspective.com/2016/12/20/highlights-and-tutorials-for-concepts-discussed-in-richard-socher-on-the-future-of-deep-learning/

