Is AI MIA?

There’s been a lot of talk recently about AI being super-scary and representing one of humanity’s greatest risks. This has been mentioned, with almost hyperbolic sincerity, by Elon Musk and Stephen Hawking. While that might happen someday, the current state of AI is simply not there.

Google search, while impressive, isn’t “thinking”. At its heart, Google search is looking through a bunch of big lists for key terms and returning the associated results. Then, periodically, those lists are tweaked and updated to become more accurate over time (note: Google is super smart; I’m way-trivializing here to make a point). While they are likely using all sorts of machine learning (iterating over big sets of data to generate more precise associations) to do this, the GoogleBot is not actually, literally, understanding what we’re typing. Most voice recognition stuff (think Siri, and I’m guessing Echo too, though I don’t know for sure) uses similar techniques.
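If you want to picture the “big lists” idea, here’s a toy sketch. Everything in it (the page names, the snippets, the scoring) is invented by me for illustration; actual Google is unimaginably more sophisticated:

```python
from collections import defaultdict

# Invented pages: each maps to a blob of key terms.
pages = {
    "calculator_widget": "what is 2+2 sum add plus calculator",
    "wikipedia_number_2": "2 two numeral number integer",
    "weather_page": "weather forecast temperature today",
}

# Build the big list: term -> set of pages containing it.
index = defaultdict(set)
for page, text in pages.items():
    for term in text.split():
        index[term].add(page)

def search(query):
    # No understanding, just counting how many query terms each page shares.
    scores = defaultdict(int)
    for term in query.lower().split():
        for page in index.get(term, ()):
            scores[page] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("what's 2+2"))  # ['calculator_widget'] -- matched on the literal "2+2"
```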

What’s the difference? Well, if I ask, “what’s 2+2”, Google looks through its lists and finds that people searching for “what’s 2+2” want an answer of four, so it opens up a little calculator app with 2+2 as the input and 4 on the output screen. However, if I ask the Google: “Google, what is the sum of numeral two added with the number two?” the Google looks through a bunch of lists and finds a bunch of possible associations. The first result Google decides to show me is the Wikipedia entry for the number “2”. So, yeah. If I had asked you the second question, you would have said, “Four. Why are you so damn verbose?”
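Here’s a guess at why the wordy phrasing trips it up: a literal matcher just counts shared tokens, and the verbose question shares almost nothing with whatever trigger fires the calculator. The trigger terms below are pure invention on my part:

```python
# Guessed-at trigger terms for two "pages"; nothing here is real.
calculator_trigger = {"2+2", "plus", "calculator"}
number_2_page = {"2", "two", "numeral", "number", "integer"}

terse = set("what's 2+2".split())
verbose = set("what is the sum of numeral two added with the number two".split())

def overlap(query, page_terms):
    # Count shared tokens; there is no comprehension anywhere in here.
    return len(query & page_terms)

print(overlap(terse, calculator_trigger), overlap(terse, number_2_page))
# 1 0 -> the calculator wins
print(overlap(verbose, calculator_trigger), overlap(verbose, number_2_page))
# 0 3 -> the Wikipedia "2" entry wins, and "four" never shows up
```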

The big AI breakthrough will come when machines can finally comprehend and truly learn. And maybe once that happens, there will be an avalanche of AI progress, and then the computer overlords will smite us. That’s certainly possible, as this article about AI progress at Google seems to hint. I would love to hear the finer details here, though I’m sure they would be difficult for me to comprehend.

My guess is that this will be an area that eludes us for some time. I usually err on the side of a faster pace when prognosticating on tech advancements, but I think this problem is exponentially harder than many of the things the tech industry has solved up until now (I’m looking at you, My Singing Monster). Machine learning, in its current form, is a long way from machine comprehension.

3 thoughts on “Is AI MIA?”

  1. I vaguely remember from school that theoretical models of the human brain would take 1.2 trillion fuckpotloads of transistors. Even with Moore’s law working in high gear, it’ll be a while before we get there. It might have to wait for biological computing, in which case, is it really a machine?

    1. Yeah, bio is where it gets spooky. Did you read about the whole thought-vector concept? I’m not sure I fully grok it; it sounds like dynamic lists where each “thought” (phrase?) is really just encoded as a vector. Interesting stuff, would love to read further on it.

      1. Had to look that up to remind myself what it is. Pretty simple concept, but difficult to implement. This article: http://www.extremetech.com/extreme/206521-thought-vectors-could-revolutionize-artificial-intelligence has a very brief description and a quote from the British High Priest of AI, Geoffrey Hinton. OK, first question: how do I apply for the job of British High Priest of AI? Second question: did you really need to take this potshot at us Yanks? “Irony is going to be hard to get, [as] you have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits.” It’s true, but c’mon, man.
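        A super-crude way to picture it: squash a phrase into one fixed-size list of numbers, then compare lists. The tiny table of numbers below is made up, and Hinton’s actual systems learn encodings with recurrent nets rather than the dumb averaging shown here:

```python
import math

# Made-up word vectors; real systems learn these from huge corpora.
word_vectors = {
    "the":    [0.0, 0.1, 0.0],
    "dog":    [0.8, 0.2, 0.1],
    "cat":    [0.9, 0.1, 0.1],
    "chased": [0.1, 0.9, 0.3],
}

def thought_vector(sentence):
    # Average the word vectors into a single fixed-size "encoding."
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

v1 = thought_vector("the dog chased the cat")
v2 = thought_vector("the cat chased the dog")
print(cosine(v1, v2))  # ~1.0 -- averaging throws away word order, one obvious limit
```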

        I’ve always thought that the solution will be brilliantly simple. I think it will be a combination of four things:

        1. A simple neural net or thought-vector system to simulate the “brain.” Now, it will be simple, but fucking huge.
        2. Sensory perception. I think this is key. Without perception of its environment, how can there be learning?
        3. Simple rules that it is trying to maximize / minimize, i.e., pleasure/pain, Asimov’s laws, or whatever.
        4. The ability to manipulate its environment.

        The learning is going to come from manipulate -> pleasure/pain feedback -> manipulate, ad infinitum (see the toy sketch at the end of this comment).

        You could start simple, with a simulated environment, then work your way up to a full robot.

        Set that shit up, then let it run for an extended period of time, and see what you get. Run it 1,000,000 times, find the Shakespeare amongst all the monkey gibberish, and then clone it.
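        To make the recipe concrete, here’s a toy version of that loop: a one-weight “brain,” a single sense (distance to food), pain proportional to that distance, and an action that moves the agent. It’s random hill-climbing over a simulated world, not real learning, and every detail is invented:

```python
import random

def run_agent(weight, steps=50):
    position, food = 0.0, 10.0
    pleasure = 0.0
    for _ in range(steps):
        sensed = food - position                 # 2. sensory perception
        action = weight * sensed                 # 1. the (very) simple brain
        position += max(-1.0, min(1.0, action))  # 4. manipulate the environment
        pleasure -= abs(food - position)         # 3. pain grows with distance
    return pleasure

# Run it 1,000 times (not quite a million) and keep the Shakespeare.
candidates = [random.uniform(-2.0, 2.0) for _ in range(1000)]
best = max(candidates, key=run_agent)
print("best weight:", best, "pleasure:", run_agent(best))
```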

Josh and J are very lonely. Please leave a reply. Pretty please?