1.【Resources】AN ANNOTATED DEEP LEARNING BIBLIOGRAPHY
Introduction:
A large collection of annotated deep learning references.
Original link: http://memkite.com/deep-learning-bibliography/#santos2014learning
2.【Blog & Code】Nexar’s Deep Learning Challenge: the winners reveal their secrets
Introduction:
A while back we looked at this competition on recognizing traffic lights with deep learning; the winners’ code has now been released, so you can study it hands-on.
3.【Blog】Distributed Deep Learning with Apache Spark and Keras
Introduction:
In the following blog posts we study the topic of Distributed Deep Learning, or rather, how to parallelize gradient descent using data-parallel methods. We start by laying out the theory, while supplying you with some intuition into the techniques we applied. At the end of this blog post, we conduct some experiments to evaluate how different optimization schemes perform in identical situations. We also introduce dist-keras, our distributed deep learning framework built on top of Apache Spark and Keras, for which we provide several notebooks and examples. This framework is mainly used to test our distributed optimization schemes; however, it also has several practical applications at CERN, not only for distributed learning but also for model serving. For example, we provide several examples that show how to integrate this framework with Spark Streaming and Apache Kafka. Finally, this series contains parts of my master’s-thesis research and will mainly track my research progress, but some might find the approaches I present here useful in their own work.
Original link: http://maxpumperla.github.io/elephas/
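The core idea of the data-parallel methods the post describes can be sketched without Spark or Keras: each worker computes a gradient on its own data shard, and a driver averages the gradients before updating the shared weights. The sketch below uses plain NumPy and a linear least-squares model purely for illustration; the function names and hyperparameters are my own, not dist-keras APIs.

```python
import numpy as np

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, shards, lr=0.1):
    """One synchronous update: average per-shard gradients, then descend."""
    # In a real distributed setup these gradients would be computed on workers.
    grads = [gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

# Toy data with true weights [2, -1], split into two shards ("workers")
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])
shards = [(X[:50], y[:50]), (X[50:], y[50:])]

w = np.zeros(2)
for _ in range(200):
    w = data_parallel_step(w, shards)
```

Because the shards are disjoint, averaging the per-shard gradients recovers the full-batch gradient here; the asynchronous schemes the series evaluates relax exactly this synchronization step.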
4.【Blog】Learning in Brains and Machines
Introduction:
We all make mistakes, and as is often said, only then can we learn. Our mistakes allow us to gain insight, and the ability to make better judgements and fewer mistakes in future. In their influential paper, the neuroscientists Robert Rescorla and Allan Wagner put this more succinctly, ‘organisms only learn when events violate their expectations’ [1]. And so too of learning in machines. In both brains and machines we learn by trading the currency of violated expectations: mistakes that are represented as prediction errors.
We rely on predictions to aid every part of our decision-making. We make predictions about the position of objects as they fall to catch them, the emotional state of other people to set the tone of our conversations, the future behaviour of economic indicators, and of the potentially adverse effects of new medical treatments. Of the multitude of prediction problems that exist, the prediction of rewards is one of the most fundamental and one that brains are especially good at. This post explores the neuroscience and mathematics of rewards, and the mutual inspirations these fields offer us for the understanding and design of intelligent systems.
Original link: http://blog.shakirm.com/2016/02/learning-in-brains-and-machines-1/
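The Rescorla-Wagner idea quoted above — learning happens only when events violate expectations — reduces to a one-line update: the value estimate moves toward the received reward in proportion to the prediction error. A minimal sketch (the learning rate and reward sequence are illustrative, not taken from the paper):

```python
def rescorla_wagner(rewards, alpha=0.2, v0=0.0):
    """Track an expected-reward estimate driven only by prediction errors."""
    v = v0
    trace = []
    for r in rewards:
        delta = r - v        # prediction error: zero when expectations are met
        v += alpha * delta   # no violated expectation, no learning
        trace.append(v)
    return trace

# A repeated reward of 1.0: the estimate converges and learning slows to zero
values = rescorla_wagner([1.0] * 30)
```

As the estimate approaches the true reward, the prediction error (and hence the update) shrinks toward zero — the formal version of "organisms only learn when events violate their expectations".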
5.【Blog & Code】Deep Learning for Supervised Language Identification for Short and Long Texts!
Introduction:
In this post, we will look at language identification for written text: given some text and a set of candidate languages, identify which language the text belongs to. To this end, I use the Genesis dataset from NLTK, which covers six languages: Finnish, English, German, French, Swedish, and Portuguese.
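To make the task concrete before reaching for deep learning, the problem can be illustrated with a simple character-bigram baseline: build a frequency profile per language and pick the language whose profile overlaps most with the input text. The tiny corpora below are stand-ins for the NLTK Genesis dataset the post actually uses, and the functions are my own illustration, not the post’s model.

```python
from collections import Counter

def profile(text, n=2):
    """Normalized character n-gram frequencies for a text."""
    text = text.lower()
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def identify(text, corpora, n=2):
    """Pick the language whose profile overlaps most with the text's profile."""
    p = profile(text, n)
    def overlap(lang):
        q = profile(corpora[lang], n)
        return sum(min(p.get(g, 0.0), q.get(g, 0.0)) for g in p)
    return max(corpora, key=overlap)

# Tiny stand-in corpora (the post trains on the full Genesis texts instead)
corpora = {
    "english": "in the beginning god created the heaven and the earth",
    "german": "am anfang schuf gott himmel und erde",
    "french": "au commencement dieu crea les cieux et la terre",
}
lang = identify("the earth was without form", corpora)
```

A neural model, as in the post, replaces the hand-built profiles and overlap score with learned features, which matters most for very short texts where bigram counts are sparse.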
This digest collects several notable deep learning resources, including a competition on recognizing traffic lights with deep learning and the secrets its winners shared, hands-on research on distributed deep learning, a comparison of learning mechanisms in brains and machines, and an experiment in supervised language identification for short and long texts.
