Differences for blog/deep-learning-alien-labs between revisions #2 and #16

Differences between titles:

All aboard the deep learning train: Alien Labs
All aboard the deep learning train and Alien Labs

Differences between content:

I've been thinking about deep learning ever since I saw Andrew Ng's 'talk at Google':https://www.youtube.com/watch?v=n1ViNeWhC24 back in 2013, but somehow never made much progress other than talking to people.
I've been thinking about deep learning ever since I saw Andrew Ng's 'talk at Google':https://www.youtube.com/watch?v=n1ViNeWhC24 back in 2012, but somehow never made much progress other than talking to people.
I don't buy into the whole brain metaphor, and I agree that the field is a bit overhyped. Robots aren't replacing us just yet. But the deep learning technique is so effective across a large set of problems that it would be a shame not to spend some time on it.
Deep learning has recently (~2010-2012) become very effective across a large set of problems where progress had been very slow. It would be a shame not to spend some time on it.
I think the infoarena community should switch its focus from solving algorithm puzzles to machine learning, and deep learning in particular, but probably this discussion deserves its own blog post.
I think the infoarena community should move some of its focus from solving algorithm puzzles to machine learning, and deep learning in particular.
Mircea Pasoi and Cristian Strat, after their successful stint at Twitter, recently founded Alien Labs. They use deep learning to build intelligent chat bots for an office environment. This is an awesome opportunity to work together again; it's been 8 years since we last did. At Google, we worked on solving an ads inventory management problem using 'network flows':http://goo.gl/5SDRrM. Our claim to fame is that we got help from Cliff Stein, the S in CLRS :). This is also an opportunity for me to jump on the deep learning train, tackling real world problems.
Here are some of my notes about this:
Yesterday I started at Alien Labs, based in San Francisco. The first thing I'm going to work on is figuring out whether two questions are similar. This should be fun!
*Anecdotes*:
 
* Doing object recognition used to be a hard problem. If you needed a car detector, you would hire five PhDs and let them work on it for a few years. Now it's a solved problem; even your phone has enough computing power to do a good job.
* Google voice was unusable for me a few years ago, now it gets my bad English accent.
* DeepMind f*king solved Go. A game which 2 years ago was thought to be 10 years out of reach.
* Object detection runs in real time; one can use it for self-driving cars, so the radar solution might be getting outdated.
* Baidu is working on speech recognition for Mandarin (lots of illiterate people with phones).
* Word embeddings give you natural language models that do away with the tweaks one used to need to encode all sorts of idiosyncrasies of the English language.
 
*Shallow field* OpenAI CTO: "As Ilya likes to say, deep learning is a shallow field — it's actually relatively easy to pick up and start making contributions."
 
* deep learning techniques started to be effective recently (2010? 2012?)
* you can do a lot without being a hard core mathematician
* the field is still a bit of an art so coding contest guys shouldn't feel bad about tweaks and hacks
* previous experience in the old techniques is not very relevant
* a home setup or some AWS resources are enough to start (a Google datacenter of GPUs would help)
* state of the art results significantly improving on the previous state of the art (in real time object detection, speech recognition)
 
*Interesting concepts*:
 
* *Word2Vec* - a neat idea that maps words to points in n-dimensional space. Then you can do algebra on the vector representations:
vec(“king”) – vec(“man”) + vec(“woman”) =~ vec(“queen”), or vec(“Montreal Canadiens”) – vec(“Montreal”) + vec(“Toronto”) resembles the vector for “Toronto Maple Leafs” (Mikolov, Tomas; Sutskever, Ilya; Chen, Kai; Corrado, Greg S.; Dean, Jeff (2013). Distributed representations of words and phrases and their compositionality)
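A toy illustration of that arithmetic (the 3-dimensional vectors below are made up for the example; real word2vec embeddings have hundreds of dimensions and are learned from large text corpora):

```python
import numpy as np

# Made-up 3-dimensional "embeddings", just for illustration.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "apple": np.array([0.1, 0.0, 0.2]),
}

def nearest(target, exclude):
    """Vocabulary word whose vector is most cosine-similar to target."""
    def cos(a, b):
        return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

# vec("king") - vec("man") + vec("woman") should land near vec("queen").
target = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # queen
```

With real embeddings one would do the same nearest-neighbor lookup over the whole vocabulary.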
 
* *Back propagation* - I see it as a dynamic programming technique that works well for the neural network setup, computing the partial derivatives one needs when running gradient descent
* *Convolutional neural networks* - a convolutional layer reuses the same k weights instead of having k^2 weights between two layers (this concept makes sense for image input and makes the algorithms much faster)
* *Rectified Linear Units* f(x) = max(0, x) work much better than the historically used sigmoid and hyperbolic tangent functions
* *Dropout* - some neurons are ignored with a set probability, inspired by how the neurons in the brain aren't always on
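To make the back propagation and ReLU bullets concrete, here is a minimal numpy sketch of gradient descent on a tiny one-hidden-layer network (the shapes, input, target and learning rate are all made up for illustration):

```python
import numpy as np

np.random.seed(0)

def relu(x):
    return np.maximum(0.0, x)

# Tiny network: 3 inputs -> 4 hidden units (ReLU) -> 1 output, squared loss.
W1 = np.random.randn(4, 3) * 0.1
W2 = np.random.randn(1, 4) * 0.1
x = np.array([1.0, 2.0, -1.0])
y = 0.5  # target output

lr = 0.1
for step in range(200):
    # Forward pass, caching intermediates (the "DP table").
    h_pre = W1 @ x          # hidden pre-activations
    h = relu(h_pre)         # hidden activations
    out = (W2 @ h)[0]       # scalar prediction
    loss = (out - y) ** 2

    # Backward pass: the chain rule, reusing the cached intermediates.
    d_out = 2 * (out - y)               # dL/d_out
    dW2 = d_out * h[np.newaxis, :]      # dL/dW2
    d_h = d_out * W2[0]                 # dL/dh
    d_h_pre = d_h * (h_pre > 0)         # ReLU gradient is 0 or 1
    dW1 = np.outer(d_h_pre, x)          # dL/dW1

    W1 -= lr * dW1
    W2 -= lr * dW2

print(loss)  # should be near zero after training
```

The key point is that one forward pass stores every intermediate value, and one backward pass reuses them all, so computing all partial derivatives costs about as much as the forward pass itself.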
 
*Some deep learning news*:
 
* 2016 "ACRush, 3 times Google Code Jam winner, joins Baidu to work on autonomous cars":https://twitter.com/AndrewYNg/status/723640197875830784
So coding contest guys are getting into the field :).
* "Introducing OpenAi":https://openai.com/blog/introducing-openai/
"these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years."
* 2014 "Baidu started a deep learning lab, Andrew Ng and other Stanford people went there":http://deeplearning.net/2014/05/17/andrew-ng-is-hired-by-baidu/
* 2014 'Oxford natural language processing research group bought by Google':http://techcrunch.com/2014/10/23/googles-deepmind-acqui-hires-two-ai-teams-in-the-uk-partners-with-oxford/
* 2014 "Google acquired DeepMind for more than 500 million dollars.":http://deeplearning.net/2014/01/27/google-acquires-deep-mind/ That's how much it costs to beat Go.
* 2013 'Facebook starts a deep learning lab':https://gigaom.com/2013/12/09/facebook-hires-nyu-deep-learning-expert-to-run-its-new-ai-lab/
* 2013 'Wired interview with Geoff Hinton, one of the neural net pioneers':http://www.wired.com/2013/03/google_hinton/
* 2012 'Google cat detector on 16000 machines':http://www.wired.com/2012/06/google-x-neural-network/
 
*And some news*:
 
Mircea Pasoi and Cristian Strat, after their successful stint at Twitter, recently founded Alien Labs. They use deep learning to build intelligent chat bots for an office environment. This is an awesome opportunity to work together again; it's been 8 years since we last did. At Google, we worked on an ads inventory management problem using 'network flows':http://goo.gl/5SDRrM. Our claim to fame is that we got help from Cliff Stein, the S in 'CLRS':https://mitpress.mit.edu/books/introduction-algorithms :). This is also an opportunity for me to jump on the deep learning train while tackling real world problems.
 
Yesterday I started at *Alien Labs*, based in San Francisco. The first thing I'm going to work on is figuring out whether two questions are similar. This should be fun!
 
Here's the 'Alien Labs website':https://pogo.ai/ , have a look.
I'll follow up with a few posts on deep learning that may encourage you to try it if you haven't already.

Differences between security settings:

private
protected

Differences between forum topic:

 
10782