AI Resource Collection: Issue 22 (20170131)
Published: 2019-05-10



1. [Video] Can Cognitive Neuroscience Provide a Theory of Deep Learning

Summary:

This video discusses whether cognitive neuroscience can provide theoretical support for deep learning.

Original link:

Slides (PPT) link:


2. [Blog] Preparing a large-scale image dataset with TensorFlow’s tfrecord files

Summary:

There are several methods of reading image data in TensorFlow as mentioned in its documentation:

From disk: Using the typical feed_dict argument when running a session for the train_op. However, this is not always possible if your dataset is too large to be held in memory for training.

From CSV Files: Not as relevant for dealing with images.

From TFRecord files: This is done by first converting images that are already properly arranged in sub-directories according to their classes into a readable format for TensorFlow, so that you don’t have to read in raw images in real-time as you train. This is much faster than reading images from disk.
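As a concrete illustration of that write-then-read workflow, here is a minimal sketch using TensorFlow's current tf.io / tf.data API (the linked post predates this API, so treat it as an approximation rather than the post's own code); the feature names image_raw and label, the JPEG inputs, and the 224x224 resize are assumptions made for the example.

```python
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def write_tfrecord(image_paths, labels, out_path):
    """Serialize (JPEG bytes, integer label) pairs into a single .tfrecord file."""
    with tf.io.TFRecordWriter(out_path) as writer:
        for path, label in zip(image_paths, labels):
            with open(path, "rb") as f:
                example = tf.train.Example(features=tf.train.Features(feature={
                    "image_raw": _bytes_feature(f.read()),
                    "label": _int64_feature(int(label)),
                }))
            writer.write(example.SerializeToString())

def parse_example(serialized):
    """Inverse of the writer: decode one serialized Example into (image, label)."""
    feats = tf.io.parse_single_example(serialized, {
        "image_raw": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.io.decode_jpeg(feats["image_raw"], channels=3)
    image = tf.image.resize(image, [224, 224])  # arbitrary size so examples batch cleanly
    return image, feats["label"]

# Reading side: stream records during training instead of feeding raw files via feed_dict.
dataset = tf.data.TFRecordDataset("train.tfrecord").map(parse_example).batch(32)
```

Because decoding happens inside the tf.data pipeline, the images never need to be held in memory all at once.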

Original link:


3. [Paper] Optimization Methods for Large-Scale Machine Learning

Summary:

This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.
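To make the abstract concrete: the basic stochastic gradient (SG) iteration the paper studies updates w_{k+1} = w_k - alpha_k * grad f_{i_k}(w_k), where i_k indexes a randomly drawn training sample. Below is a minimal NumPy sketch of that iteration on a synthetic least-squares problem; the problem, step size, and epoch count are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

def sgd(grad_fn, w0, samples, lr=0.05, epochs=20, seed=0):
    """Basic SG iteration: repeatedly update w <- w - lr * grad_fn(w, sample),
    drawing samples from the training set in a shuffled order."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        for i in rng.permutation(len(samples)):
            w = w - lr * grad_fn(w, samples[i])
    return w

# Toy least-squares problem: recover true_w from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Per-sample gradient of the squared error (x @ w - y)**2.
grad = lambda w, s: 2.0 * (s[0] @ w - s[1]) * s[0]
w_hat = sgd(grad, np.zeros(3), list(zip(X, y)))
print(w_hat)  # should land close to true_w
```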

Original link:


4. [Q&A] 41 Essential Machine Learning Interview Questions (with answers)

Summary:

Machine learning interview questions are an integral part of the data science interview and the path to becoming a data scientist, machine learning engineer, or data engineer. Springboard has put together resources for these interviews, so we know exactly how they can trip candidates up. To help with that, here is a curated list of key questions you could see in a machine learning interview, along with some answers so you don’t get stumped. After reading through this piece, you’ll be well prepared for machine learning questions in any job interview.

Original link:


5. [Code] Word Prediction using Convolutional Neural Networks

Summary:

In this project, we examine how well neural networks can predict the current or next word. Language modeling is one of the most important NLP tasks, and deep learning approaches to it are easy to find. Our contribution is threefold. First, we want a model that simulates a mobile environment rather than one built for general modeling purposes. Therefore, instead of assessing perplexity, we try to measure the keystrokes the user saves while typing. To this end, we manually typed 64 English paragraphs on an iPhone 7 for comparison. It was super boring, but hopefully it will be useful for others. Next, we use CNNs instead of RNNs, which are more widely used in language modeling tasks. RNNs, even improved variants such as LSTM or GRU, suffer from short-term memory; deep stacks of CNN layers are expected to overcome that limitation. Finally, we employ a character-to-word model: we predict the current or next word from the preceding 50 characters. Because a prediction is needed at every keystroke, a word-to-word model does not fit well, and a char-to-char model is limited by its autoregressive assumption. Our current belief is that the character-to-word model is best suited to this task. Although our relatively simple model is still a few steps behind the iPhone 7 keyboard, we observed its potential.
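For readers who want to picture the character-to-word setup, here is a rough Keras-style sketch (not the repository's actual architecture; the vocabulary sizes, filter widths, and dilation schedule are assumptions for illustration): the last 50 characters are embedded, passed through stacked 1-D convolutions, and mapped to a distribution over the word vocabulary.

```python
import tensorflow as tf

SEQ_LEN = 50       # preceding characters used as context (as in the write-up)
NUM_CHARS = 100    # character vocabulary size -- assumed
NUM_WORDS = 50000  # word vocabulary size -- assumed

def build_char_to_word_cnn():
    """Characters in, one word out: embed characters, apply dilated 1-D
    convolutions to widen the receptive field, then classify the target word."""
    inputs = tf.keras.Input(shape=(SEQ_LEN,), dtype="int32")
    x = tf.keras.layers.Embedding(NUM_CHARS, 64)(inputs)
    for dilation in (1, 2, 4):
        x = tf.keras.layers.Conv1D(128, kernel_size=3, dilation_rate=dilation,
                                   padding="causal", activation="relu")(x)
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    outputs = tf.keras.layers.Dense(NUM_WORDS, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# Training would pair each 50-character window with the index of the word being typed.
model = build_char_to_word_cnn()
model.summary()
```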

Original link:


