I've been doing a lot of thinking about artificial intelligence lately – where it's going in the next several decades, how it might be integrated into our society and culture, how it might impact the way we live and the course of history. It's the kind of thing you spend time thinking about when you're writing a sci-fi backstory for a game where one of the two main characters is an AI.
This is not, however, a post about where we might be a century from now, but more about the starting point for those speculations: where we are now and what direction things seem to be moving in. And for the time being, it seems to me that where we are now is best described as "high-octane stupid being branded as 'smart.'" I say this not as an indictment of present-day technology itself — the scientists, programmers, and engineers behind some of the techie toys we take for granted these days have done amazing things, and it's because of them that I think we have better days ahead of us. No, rather, I am pointing the blame at pretty much every other link in the chain: the business folks selling dolled-up stupid as "smart" and the everyday consumers who buy it. These people are why, before the wide-eyed promises of The Future start to manifest, things are going to kind of fall flat for a while.
As with any indictment, it follows that I should provide evidence, and I will. But first, so we're on the same page, let me explain what I mean by "high-octane stupid." Imagine, if you will, two people, both American citizens. The first has never actually been to Europe, never actually plans to go to Europe, and does not correspond with any Europeans, but has for whatever reason decided to memorize the Fodor's travel guides for every European country. The second has not done any of that memorization but has actually been on two or three trips to Europe and has a handful of friends out in Europe.
The first person represents what I am calling "high-octane stupid." He's not useless, far from it. Knowing the addresses and operating hours of every museum in Prague can come in handy if that's what you're looking for. But he only understands Europe on one level. He can tell you where the most popular cafés in Paris are, but not what the food tastes like. He doesn't have any stories about that one cute waitress in Barcelona or any photos of Venice.
Our second person represents "low-octane smart." Sure, he's never been to Poland or Switzerland or most other countries for that matter and can't tell you anything about them. But he knows all about this one great pub in Dublin, remembers what the ruins in Crete smelled like, and can show you a cool video he took of sunset from the Eiffel Tower. Both of these people have something useful to share, but what each has to offer is fundamentally different, and it's important not to confuse one for the other.
As an aside, these two individuals hint at a third we have not yet spoken of, a "high-octane smart" specimen who has somehow acquired several lifetimes' worth of intimate knowledge about every teeny little bit of Europe. It's possible that we'll get there some day, but we are not there yet. That's an article for a different day.
So back to my charge that we have been sold high-octane stupid in a wrapper that says "smart." Where's my evidence? Computers seem to be doing some pretty smart things these days, you might think, and you'd be right: Computers seem to be doing some pretty smart things. I can think of no clearer example than Watson, the computer that quite handily defeated a pair of mere mortals at a game of Jeopardy!. Watch what it does during Final Jeopardy:
The category is U.S. Cities, and its response, "What is Toronto?" is a city that is decidedly not a part of the U.S. Watson effectively demonstrated that it is a magnificent word association machine that has no solid internal concept of what a Toronto is. However, if you watched the entire episode of the show, you'll notice that despite Watson's single-layer understanding of the world, it still had enough raw computational horsepower behind it to outperform by an order of magnitude two of the greatest Jeopardy! contestants our species has been able to produce.
Still not convinced that Watson isn't "smart"? Willing to write off Watson's mistake as a small wrinkle that will be ironed out in short order? Exhibit B is a cute feature that Google added to Google Docs spreadsheets at some point, and you can try it at home. Go to any Google Docs spreadsheet, type two similar terms in two adjacent cells, select them, then hold down the alt or option key as you expand your selection. Google will work some magic behind the scenes and auto-populate the cells with terms it thinks are similar.
It works amazingly well in certain specific situations. Type "Wisconsin" and "Minnesota," expand your selection to 50 items, and you'll have a list of the 50 U.S. states; ask it for 4 more and it'll give you DC, Puerto Rico, the Virgin Islands, and Guam. Type "Genesis" and "Exodus," expand it, and you'll have a list of the books of the Bible. Pretty smart, huh? Well... it seems that way until you see the system break down.
Try the same exercise with "Ontario" and "Quebec." For the first 10 entries it does just fine, listing seven of Canada's eight remaining provinces as well as its three territories. But... for the thirteenth entry, rather than tell you about Prince Edward Island, it gives you an entry called "Newfoundland Labrador," demonstrating not only that Google thinks the word "and" is unnecessary but that it thinks "Newfoundland Labrador" is some province fundamentally different from "Newfoundland," which it already listed on line eleven. Expand the list further and it will first suggest Québec — with an accent, as if that's enough to make it different from Quebec — then list a half dozen Canadian cities, followed by French translations of a couple of province names, and it doesn't give up its grudge against PEI until about the 22nd entry in the list (and even then, it only refers to it by abbreviation).
Google Sets, just like Watson, is great at making word associations but terrible at understanding the concepts those words represent. Play around with it for a bit and you'll be surprised by what works and what doesn't. Type "Hydrogen" and "Helium," and not only does it fail to list the elements of the periodic table, it deviates from chemistry entirely within a couple dozen entries.
But go back to our Canada example for a moment. Strange grudge against PEI aside, where it really starts to get peculiar is when you ask it for 100 entries. I won't criticize it for getting weird after filling up the first thirteen cells. After all, imagine being presented with the same exercise as a human: "Ontario," "Quebec," 98 blank lines, and the only instructions are "complete the list with items similar to the two already listed." You'd have to make some judgement calls and get creative once you finished listing the provinces and territories of Canada. One person might decide it is a list of Canadian places and start listing cities, national parks, and geographical features to complete the list. Another might decide it is a list of the administrative divisions of North America and go on to list the states, territories, and districts of the U.S. and Mexico. Both would have equally valid lists by the end.
So the really interesting part isn't just that Google Sets starts grasping at straws after a while. The interesting part is that it just goes ahead, makes a judgement call, and merrily presents a list to you as if nothing is wrong. Do it a second time, and it will give you the exact same list. At no point does it say, "Hey, I wasn't sure what you wanted after I listed every province that wasn't the setting for Anne of Green Gables, so I just listed a mishmash of Canadian cities and U.S. states, but you might want to double-check the list to make sure it's what you want." A truly smart system would start a dialogue: "I filled in what I could. What else do you want? U.S. states? Canadian cities? Historical regions like Keewatin? Or were the provinces and territories all you really wanted, and you just didn't know how many there were?"
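The word-association mechanic behind this behavior is easy to reproduce in miniature. The sketch below is purely illustrative: a toy co-occurrence scorer over a made-up corpus, nothing resembling Google's actual pipeline. But it shows how a system can confidently rank "related" terms while having no idea that some of them are provinces and some are cities:

```python
# Toy sketch of set expansion by word co-occurrence. The corpus is invented
# for illustration; this is NOT Google's actual algorithm.
from collections import Counter

# Each "document" is just a bag of terms that appeared together somewhere.
documents = [
    ["ontario", "quebec", "manitoba", "alberta"],
    ["ontario", "quebec", "toronto", "montreal"],
    ["quebec", "montreal", "toronto", "ottawa"],
    ["manitoba", "alberta", "saskatchewan"],
    ["toronto", "ottawa", "ontario"],
]

def expand_set(seeds, documents, n):
    """Score every term by how often it co-occurs with the seed terms."""
    scores = Counter()
    for doc in documents:
        overlap = sum(term in doc for term in seeds)
        if overlap:
            for term in doc:
                if term not in seeds:
                    scores[term] += overlap
    return [term for term, _ in scores.most_common(n)]

# Cities outscore actual provinces, because the scorer only knows
# "these words show up together," not what any of them mean.
print(expand_set({"ontario", "quebec"}, documents, 4))
# → ['toronto', 'montreal', 'manitoba', 'alberta']
```

Note that the top suggestions are two cities, not the remaining provinces: pure association happily mixes categories, which is exactly the mishmash behavior described above.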
This is the part where it is going to get worse before it gets better — not because the technology itself is going to get worse, but because it's going to become more prevalent at a much faster rate than it becomes more helpful. Google Sets is a bell and whistle that's easy to ignore if it isn't working for you, but it's far from the only place where Google has decided to make an ass of u and me. Google's primary search is just as guilty. A prime example: last summer I was searching for something with "Newfoundland" in the query. Trying to be helpful, Google also matched pages containing the postal abbreviation "NL." Unfortunately, this brought in a bunch of false positives about locations in the Netherlands, whose country code is also "nl." The most maddening part was that I couldn't figure out how to turn this behavior off, so I was stuck sifting through a bunch of Dutch results I never asked for and didn't want.
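The "NL" incident is the same disease in another costume. Here's a hypothetical sketch of naive alias expansion (the alias table and pages are invented for illustration; this is not Google's real pipeline) showing how one well-meaning abbreviation match drags in results from the wrong continent:

```python
# Hypothetical sketch of naive query-alias expansion. The alias table and
# the "pages" are invented for illustration; not how Google actually works.
ALIASES = {"newfoundland": ["nl"]}  # "helpfully" maps the province to its abbreviation

pages = [
    "hiking trails in newfoundland canada",
    "ferry schedules nl canada",
    "tulip season in nl netherlands",  # false positive: "nl" is also a country code
    "amsterdam nl travel guide",       # false positive
]

def search(query, pages):
    # Expand the query with its aliases, then match any page containing any term.
    terms = [query] + ALIASES.get(query, [])
    return [page for page in pages if any(term in page.split() for term in terms)]

# Every page matches, including the two Dutch ones the user never asked for,
# and there is no switch to turn the expansion off.
print(search("newfoundland", pages))
```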
Google is far from the only culprit. Take Apple's infernal autocorrect: for reasons I will never understand, Apple made their autocorrect opt-out instead of opt-in. There are likely studies showing that people prefer quick and simple to accurate, and the number of people who don't immediately disable autocorrect (mercifully, unlike Google's "helping" features, Apple's can be disabled) probably proves them right. There must be people out there in the world for whom not needing to remember the difference between "there," "their," and "they're" is worth turning a trip to Disney World into a divorce once in a while.
But high-octane stupid technologies getting rolled out to individual consumers are just the tip of the iceberg. Remember Watson? A machine that didn't know what it was doing, got a lot of answers comically wrong, but still managed to outperform low-octane smart? The sad extrapolation is that fast and inaccurate is going to be better for the bottom line than slow and accurate in a lot of situations. Heck, marketing companies have been using high-octane stupid techniques for years, using bizarre statistical correlations to decide whose throats to shove their ads down. They get it wrong sometimes, a lot of the time in fact, but it generally doesn't hurt them because they're mostly just annoying the people who weren't going to buy their product anyway. And that's just one example. You think automated phone customer service labyrinths are awful now? They're about to get a whole lot worse. Companies using high-octane stupid techniques to decide who to hire or fire? Wouldn't put it past them. Legislators using high-octane stupid techniques to inform their votes on legislation? Hell, they're already using low-octane stupid techniques, so that might even be an improvement.
Here's the worst-case scenario, though... a perfect storm of technology becoming an industrial-scale irritant before it starts to improve. There are a couple of other trends going on right now. First, "attention" has become a currency of sorts. Companies are vying for likes and shares and retweets and a bunch of other things that were not nouns ten years ago. They are finding new ways to worm those memetic pathogens they call "advertisements" into our brains. They want your attention and good ratings because they know they can convert that attention into money. Second, technology is following us more closely than ever before. Whether or not Google Glass bombs, over the next few years, chances are, we're going to have more technology in our faces than we have today. Add these two things together with the high-octane stupid algorithms they're rolling out in adolescence under brand names like "smart" and "genius," and you know what it's starting to remind me of?
I don't think we're headed towards GLaDOS, or Skynet, or the Matrix anytime soon. But if my fears are correct... we're on a direct course for Navi.
God have mercy on our souls.
The one saving grace is that this adolescent period of artificial intelligence in our lives won't last forever. Although the baseline strategy of the corporate world these days seems to be "make it popular first, make it good later," there are still going to be those folks behind the scenes legitimately improving things. And once we weather this storm, who knows what might be in store for us...