Bill Gates Response to AMA Question on Feb 11 2014
If you were a current computer science student what area would you start studying heavily?
If you feel like expanding on that, why do you think this area deserves the attention and how do you see it changing the technology game in the next 10 years?
Bill Gates Response:
The ultimate is computers that learn. So called deep learning which started at Microsoft and is now being used by many researchers looks like a real advance that may finally learn. It has already made a big difference in video and audio recognition – more progress in the last 3 years than ever before.
Google Acquires Deep Mind
The Race to Buy the Human Brains Behind Deep Learning Machines
Any aspiring science fiction writer looking for a good protagonist could do worse than ripping off the Wikipedia page for Demis Hassabis: He grew up in England as a chess prodigy and built absurdly sophisticated video games before getting a degree in computer science from Cambridge, started studying neuroscience and publishing respected papers on amnesia and other topics, and then proceeded to co-found one of the hottest artificial-intelligence startups. Now that his company, DeepMind, has been snapped up by Google (GOOG) for a reported $400 million to $500 million (depending on your tech blog of choice), exactly how this latest twist will change his story remains to be seen—but there’s a decent chance Hassabis will ultimately become commander of an army of humanoid Googlebots.
Google’s acquisition of Hassabis and the rest of the DeepMind team points to the surging interest in the field of deep learning, a funky part of computer science seen as key to building truly intelligent machines. It centers on having computers learn to do tasks and find patterns on their own. Google, for example, received attention a couple of years ago, when its network of self-learning computers was able to understand the concept of a cat and find cats in YouTube videos. (There’s obviously way more complexity to deep learning than cat videos, and you’re welcome to expand your horizons on the subject here.)
Facebook’s ‘Deep Learning’ Guru Reveals the Future of AI
New York University professor Yann LeCun has spent the last 30 years exploring artificial intelligence, designing “deep learning” computing systems that process information in ways not unlike the human brain. And now he’s bringing this work to Facebook.
Earlier this week, the social networking giant told the world it had hired the French-born scientist to head its new artificial intelligence lab, which will span operations in California, London, and New York. From Facebook’s new offices on Manhattan’s Astor Place, LeCun will oversee the development of deep-learning tools that can help Facebook analyze data and behavior on its massively popular social networking service — and ultimately revamp the way the thing operates.
Netflix Is Building an Artificial Brain Using Amazon’s Cloud
Nothing beats a movie recommendation from a friend who knows your tastes. At least not yet. Netflix wants to change that, aiming to build an online recommendation engine that outperforms even your closest friends.
The online movie and TV outfit once sponsored what it called the Netflix Prize, asking the world’s data scientists to build new algorithms that could better predict what movies and shows you want to see. And though this certainly advanced the state of the art, Netflix is now exploring yet another leap forward. In an effort to further hone its recommendation engine, the company is delving into “deep learning,” a branch of artificial intelligence that seeks to solve particularly hard problems using computer systems that mimic the structure and behavior of the human brain. The company details these efforts in a recent blog post.
From Netflix Blog Post
As we have described previously on this blog, at Netflix we are constantly innovating by looking for better ways to find the best movies and TV shows for our members. When a new algorithmic technique such as Deep Learning shows promising results in other domains (e.g. Image Recognition, Neuro-imaging, Language Models, and Speech Recognition), it should not come as a surprise that we would try to figure out how to apply such techniques to improve our product. In this post, we will focus on what we have learned while building infrastructure for experimenting with these approaches at Netflix. We hope that this will be useful for others working on similar algorithms, especially if they are also leveraging the Amazon Web Services (AWS) infrastructure. However, we will not detail how we are using variants of Artificial Neural Networks for personalization, since it is an active area of research.
My Prior Email on Jan 28 2014
Subject: Software Vs Humans (Again)
Technology and wealth inequality
Technology makes wealth inequality worse by giving people leverage and compounding differences in ability and amount of work. It also often replaces human jobs with machines. A long time ago, differences in ability and work ethic had a linear effect on wealth; now it’s exponential.  Technology leads to increasing wealth inequality for lots of other reasons, too—for example, it makes it much easier to reach large audiences all at once, and a great product can be sold immediately worldwide instead of in just one area.
As the following table shows, wealth inequality has been growing in America for some time, not just the last few years. It’s noticeable between the top 20% and bottom 80%, and particularly noticeable between the top 1% and bottom 99%.
And here is a graph that shows the income share of the top 1% over time:
The best thing one can probably say about this widening inequality is that it means we are making technological progress—if it were not happening, something would be going wrong with innovation. But it’s a problem for obvious reasons (and the traditional endings to extreme wealth inequality in a society are never good).
We are becoming a nation of haves and have-nots—of prosperous San Francisco vs. bankrupt Detroit. In San Francisco, the average house costs around $1mm. In Detroit, the average house costs less than a Chevy Malibu made there. And yet, I’d view a $1mm house in San Francisco as a better investment than 20 $50k houses in Detroit. As the relentless march of technology continues, whole classes of lost jobs are never coming back, and cities dependent on those lost jobs are in bad shape.
This widening wealth divide is happening at all levels—people, companies, and countries. And either it will keep going, or innovation will stop.
But it feels really unfair. People seem to be more sensitive to relative economic status than absolute. So even if people are much better off being poor today than being a king 500 years ago, most people compare themselves to the richest people today, and not the richest people from the past.
And importantly, it really is unfair. Trying to live on minimum wage in the United States is atrocious (http://www.forbes.com/sites/laurashin/2013/07/18/why-mcdonalds-employee-budget-has-everyone-up-in-arms/). That budget, incidentally, assumes that the worker is working two jobs. Even though they’re outputting less value, those people are certainly working harder than I am. We should do more to help people like this.
Real minimum wage has declined, failing to track real average wages and massively failing to track the wages of the top 1%.
In a world where ideas and networks are what matter, and manufacturing costs trend towards zero, we are going to have to get comfortable with a smaller and smaller number of people creating more and more of the wealth. And we need a new solution for the people not creating most of the wealth—many of the minimum wage jobs are going to get innovated away anyway.
There are no obvious/easy solutions, or this would all be resolved. I don’t have any great answers, so I’ll just throw out some thoughts.
We should assume that computers will replace effectively all manufacturing, and also most “rote work” of any kind. So we have to figure out what humans are better at than computers. If really great AI comes along, all bets are off, but at least for now, humans still have the market cornered on new ideas. In an ideal world, we’d divide labor among humans and computers so that we can each focus on what we’re good at.
There is reason to be optimistic. When the steam engine came along, a lot of people lost their manual labor jobs. But they found other things to do. And when factories came along, the picture looked much worse. And yet, again, we found new kinds of jobs. This time around, we may see lots more programmers and startups.
Better education—in the right areas—is probably the best way to solve this. I am skeptical of many current education startups, but I do believe this is a solvable problem. A rapid change in what and how we teach people is critical—if everything is changing, we cannot keep the same model for education and expect it to continue to work. If large classes of jobs get eliminated, hopefully we can teach people new skills and encourage them to do new things.
Education, unlike a lot of other government spending, is actually an investment—we ought to get an ROI on it in terms of increased GDP (but of course it takes a long time to pay back).
However, if we cannot find a new kind of work for billions of people, we’ll be faced with a new idle class. The obvious conclusion is that the government will just have to give these people money, and there’s been increasing talk about a “basic income”—i.e., any adult who wanted it could have, say, $15,000 a year.
You can run the numbers in a way that sort of makes sense—if we did this for every adult in the US, it’d be about $3.5 trillion a year, or a little more than 20% of our GDP. However, we’d knock out a lot of existing entitlement spending, maybe 10% of GDP. And we’d probably phase it out for people making over a certain threshold, which could cut it substantially.
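The back-of-envelope numbers above are easy to check. A minimal sketch (the adult population and GDP figures are rough 2014 assumptions, not exact statistics):

```python
# Back-of-envelope check of the basic-income cost estimate.
# Assumed figures: ~235 million US adults, ~$17 trillion US GDP (circa 2014).
BASIC_INCOME = 15_000            # dollars per adult per year
US_ADULTS = 235_000_000          # rough adult population
US_GDP = 17_000_000_000_000      # rough GDP in dollars

total_cost = BASIC_INCOME * US_ADULTS       # annual cost of the program
share_of_gdp = total_cost / US_GDP          # fraction of GDP it represents

print(f"${total_cost / 1e12:.2f} trillion per year")
print(f"{share_of_gdp:.1%} of GDP")
```

With these assumptions the total comes out to roughly $3.5 trillion a year, a little more than 20% of GDP, matching the figures above; phasing the payment out above an income threshold would shrink the headline number substantially.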
There are benefits to this—we’d end up helping truly poor people more and middle class people less, and we’d presumably cut a ton of government bureaucracy. We could perhaps end poverty overnight (although, no doubt, anything like this would cause prices to rise). And likely most of this money would be spent, providing some boost to the economy. We could require 10 hours a week of work for the government, or not. A big problem with this strategy is that I don’t think it’ll do much to address the feeling of inequality.
Many people have a visceral dislike of the idea of giving away money (though I think some redistribution of wealth is required to reasonably equalize opportunity), and certainly the default worry is that people would just sit around and waste time on the Internet. But maybe, if everyone knew they had a safety net, we’d get more startups, or more new research, or more novels. Even if only a small percentage of people were productive, in a world where some people create 10,000x more value than others, that’d be ok. The main point I’m trying to make is that we’re likely going to have to do something new and uncomfortable, and we should be open to any new ideas.
But this still doesn’t address the fundamental issue—I believe most people want to be productive. And I think figuring out a much better way to teach a lot more people about technology is likely the best way to make that happen.
Thanks to Nick Sivo for reading a draft of this.
Follow me on Twitter here: http://twitter.com/sama
There are lots of other significant factors that cause wealth inequality—for example, having money makes it easier to make more money—but technology is an important and often-overlooked piece.
 I was recently in Detroit and was curious to see some of the neighborhoods where you can buy houses for $10-20k. Here are some pictures: