

Autonomous cars and personal space


The vision that keeps getting served up, for when self-driving cars finally become ubiquitous, is that of a fleet of vehicles capable of computationally precise pathfinding. By talking to one another, they will be able to tool down the freeway, bumpers within inches, rearranging their groupings to allow for maximum aerodynamics and the most efficient travel times. It’s well within the bounds of these vehicle designs: computers can react thousands of times faster than we do, and if we give them the ability to communicate with one another, cross-platform, they can do a brilliant job of staying out of one another’s travel paths.

But as we know, or many suspect, the most efficient solution is not always the “best” one (best in the human-comfort sense, at any rate).  The autopilot on an airplane can land it in a storm with little problem, but its decision making keeps the plane intact, with little regard for the comfort or even safety of the passengers. The world is riddled with examples of human “best” taking the lead over efficiency.  Everybody wants a window or an aisle seat; people will take the time to wait for the next elevator if the one arriving seems too full (even if it is still well within its weight capacity). What is “best” for a computer is simply not always “best” for a person.

This got me thinking about customization and user preferences for these kinds of vehicles. Obviously you *can* drive with your bumpers within five inches of each other, but anyone who’s had their hands on the wheel is going to be more comfortable and familiar with the “two second rule”, where a driver maintains a two-second gap between their car and the one ahead. Granted, there are a few out there who seem to have no concept of personal space while maneuvering a 2000-pound pile of carbon fiber and steel, but by and large, most drivers leave a comfortable distance between themselves and the cars around them.
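The two-second rule is really just a speed-to-distance conversion, and it scales automatically with how fast you’re going. A minimal sketch (units and speeds here are illustrative):

```python
def following_distance_m(speed_kmh: float, gap_seconds: float = 2.0) -> float:
    """Distance covered in `gap_seconds` at the given speed, in meters."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * gap_seconds

# At city speeds (36 km/h) the two-second gap is 20 meters;
# at highway speeds (100 km/h) it balloons to roughly 55 meters.
print(following_distance_m(36))         # 20.0
print(round(following_distance_m(100), 1))  # 55.6
```

Compare that with the inches-apart platooning vision: the human-comfort gap is two orders of magnitude larger than what the machines could technically manage.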

And as you sit, safely ensconced in your self-driving box (assuming you’re not distracted by a film or some quality “personal time”), you may not at all enjoy your vehicle tailgating someone else or someone else’s vehicle tailgating yours. The most efficient mode of travel may simply be nerve-wracking to anyone who hasn’t grown up with it.

I imagine that the initial solution is going to be the allowance of “user preferences”.  You can choose wake times and ringtones on your phone, so it follows that car manufacturers might allow a user to set things like minimum distance to the car in front, or maximum tailgate distance (in fact, aggressive human drivers may be able to take advantage of gaps in traffic left by two cars whose owners requested a large buffer around them). Cars would communicate these preferences to each other and roll them into their driving calculations as they go.
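The negotiation described above could be quite simple in principle: each car broadcasts its occupant’s comfort limits, and the pair honors whichever constraint is strictest. This sketch is purely hypothetical (the message format, field names, and the 0.2-second “efficient” floor are all my own inventions, not any manufacturer’s protocol):

```python
from dataclasses import dataclass

@dataclass
class GapPreference:
    """Hypothetical preference message a car broadcasts to its neighbors."""
    min_front_gap_s: float  # smallest gap the occupant tolerates ahead
    min_rear_gap_s: float   # smallest gap the occupant tolerates behind

def negotiated_gap(lead: GapPreference, follower: GapPreference,
                   efficient_gap_s: float = 0.2) -> float:
    """Both cars' comfort limits must be honored; efficiency is only the floor."""
    return max(efficient_gap_s, lead.min_rear_gap_s, follower.min_front_gap_s)

# A nervous passenger's 2.0 s preference overrides the platoon's 0.2 s optimum.
print(negotiated_gap(GapPreference(0.5, 1.0), GapPreference(2.0, 0.5)))  # 2.0
```

Note the asymmetry this creates: one anxious rider expands the gap for everyone behind them, which is exactly the opening an aggressive human driver would exploit.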

The Simple Exchange of Please and Thank You

I’d like to make a request of all you personal AIssistant programmers: you engineers at Apple, Google, and Microsoft, all of you who are responsible for iterating on human/AI exchanges.

I’d like to be able to say please and thank you to my voice controlled computing.

It seems like a minor thing, doesn’t it?  A quaint nicety falling by the wayside in the pursuit of one more step toward the Singularity.  But what you are forgetting, my engineers, is that while you are training your AIs to talk to us, those AIs are training us to talk to them.

Much like cats, but with less shedding.

A request from a person often forms a sort of closed loop.  It’s a format we learn, something that most cultures have: an In, a Confirmation, a Request, a Confirmation, and an Out.  To your average human, this feels complete.  In fact, interrupting this sequence feels rude.  Failing to complete it just leaves one feeling uncomfortable, the same kind of uncomfortable you get when someone fails to say “goodbye” before they hang up the phone.  Depending on the person and culture, this feeling can range from mild annoyance to an offense that requires a response.

It’s not always pretty.


As an example, let’s say we have a diner in a restaurant, ordering a meal from an AIssistant (like Siri or Google Assistant).  The interaction might go something like this:

DINER: “Hey Waiter.” (In)

WAITER: “What do you want to order?” (Confirmation)

DINER: “I would like the Salmon Mousse, please.” (Request)

WAITER: “One Salmon Mousse, coming right up.” (Confirmation)

DINER: “Thank you.” (Out)

You’ve probably had thousands of exchanges like this over the course of your lifetime.  At the end, the waiter is released from the encounter by the Out, and both parties are free to move on to other things.  There is a clear In and Out; nobody is left hanging, waiting for a followup or a new request.  In fact, you may have had an experience or two where the waiter left the exchange early, before the second Confirmation or before the Out.

It left you feeling a bit slighted, didn’t it?  Maybe a little confused.  Definitely not quite right, though you might not have understood why.

This type of exchange flows smoothly; we have an idea in our heads of how it will play out.  It’s comfortable, familiar.  Its successful execution triggers a feeling of satisfaction in both parties, similar to the way you feel when picking up resources in Clash of Clans or creating a cascade in Candy Crush.

With the current state of Voice Recognition Technology, this same exchange is truncated, cut short:

DINER: “Hey, Waiter?”

WAITER: “Yes?”

DINER: “I would like the Salmon Mousse, please.”

WAITER: “Salmon mousse with peas.”

And boom, you’re done.  Misunderstanding of the word “please” aside, there’s no Out here.  The Diner has to trust that they will get what they want.  They are left hanging and, when the Waiter delivers peas alongside the Salmon Mousse, they are frustrated and annoyed.  The exchange fails in the user’s mind, and the AIssistant is cast as unreliable.

Once you’ve had a few of these sub-optimal exchanges with your AIssistant, you stop using natural language.  Every please and thank you gets dropped, because they are so often misunderstood, ignored, or the cause of a misunderstanding.  These conditioned responses, designed to get the best possible reaction from a human, become a burden when talking to an AI.  Your exchange becomes:

DINER: “Hey, Waiter. Salmon Mousse, plate, dining room, extra fork.”

WAITER: Delivers plate of Salmon Mousse on a plate to the dining room with an extra fork.

Yikes! This is no longer a “natural language” request.  The diner has started simply delivering a string of keywords in order to get the end result they are looking for.  The user, the human part of this equation that natural language voice recognition is specifically being designed for, has abandoned natural language entirely when talking to their AIssistant.  They have run up against the Uncanny Valley of voice and have begun treating the AIssistant like a garden-variety search engine.

Which wouldn’t be a problem if it only affected the AIssistant.  In fact, it makes things run much more smoothly.  But these voice patterns tend to stick.  They backflush into the common lexicon (look at words like LOL and l33t, which have entered spoken language and are here to stay; they exist only because of the constraints of technology).  Listen to a voice message left by someone who habitually uses Voice to Text.  You’ll find they have a tendency to automatically speak their punctuation out loud, just like you need to when dictating an email or a text message.

Please and thank you cease to be Ins and Outs of a conversation; they instead become stumbling blocks, places where your command sequence fails.  These niceties that we use to frame requests in spoken language start to get dropped not because nobody’s teaching them, not because humans are getting ruder, but because they are being trained back out again by interaction with AIssistants that fall just shy of being human.

The next step becomes complex.  Do we split language into a “conversation” form and a “command” form?  Or do we end up abandoning the conversational form altogether in favor of the much more efficient (but far less communicative) string of keywords?  It will be interesting to see if we pass each other in the night, humans and AIssistants, with human language patterns becoming ever more AI-friendly even as AI language recognition software gets better at handling our natural way of speaking.

Either way, please and thank you, those natural addresses that help to keep requests couched in a tidy little package, may be one of the first victims.