
New Languages and New Technologies

http://www.fastcodesign.com/3041174/48-crazy-ui-ideas-coming-from-the-500-million-stealth-startup-magic-leap#22

There’s an etiquette that needs to go with every new technology.  Google’s Glass Explorer experiment was an exercise in this.  Some might regard it as a failure, but I tend to look at it as another necessary step.  Without an established etiquette, a visual body language adopted by the users, a code of interaction that anyone *not* employing or familiar with Glass could understand, conflicts arose.  In some cases those misunderstandings bordered on violent.  It’s a lesson to all developers of interactive wearables, and I don’t think many of them have taken it to heart just yet.

This is not the first time technology has required social norms to evolve to suit.  As cellular phones entered the marketplace, then became smaller and smaller, users were called selfish and inconsiderate for answering their phones and speaking aloud in public spaces (to the point where some restaurants banned phones entirely).  When hands-free devices became commonplace, it got even worse, because you simply could not tell if the person was listening to you or to a voice on the other end of the line.  It got better over time: people using their Bluetooth headsets learned to turn away, avoid eye contact, and hold their conversations in their cars.  Other people learned to check whether the person was on their device, helped by the flashing blue light on the side that drew your attention to even the subtlest earpiece on the market.

Within Magic Leap’s patent artwork (pictured at the top), we can see allowances for different styles of interactivity, many of which convey a clear body language to those looking in from the outside.  What remains to be seen is whether they will run the experiment, whether they will allow their product out into the wild so they can see how the human factor reacts, and what work they will need to do to smooth that transition into common usage.

Talk Data to Me


http://arxiv.org/pdf/1506.05869v2.pdf

There’s a difference, a pretty large difference, between an AI and a chatbot.  It’s perhaps hard to see if you’re on the receiving end and don’t know what to look for, but the way they act and react is different, and in the case of a chatbot, once you figure out how the logic behind it works, you can talk it in circles.  Which is a good way to kill an afternoon, if you’re bored on the intarwebz.

Not that I have ever done this.  Oh no, not me.

The point of a chatbot, usually, is to mimic conversation.  They are often not capable of *steering* a conversation themselves; they don’t, or can’t, ask leading questions unless the developer has planned ahead (and even then, you can tell when the canned questions come into play, the segues are never terribly smooth).  What they can do reasonably well, however, is continue a conversation in much the same way that many humans do.  A chatbot deconstructs your sentence, pulls out the appropriate verbs and subjects, and constructs a question or response of its own.
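That deconstruct-and-echo trick is decades old; ELIZA worked this way in the 1960s.  As a minimal sketch (the patterns and responses here are my own invented examples, not any particular product’s logic), a pattern-matching chatbot can be a handful of regex rules that capture a fragment of your sentence, flip the pronouns, and hand it back as a question:

```python
import re

# Pronoun reflections: swap perspective so the bot can echo the user's phrasing.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(phrase):
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

# Ordered (pattern, response-template) pairs.  The captured group is reflected
# back into the template: pull the subject, build a question around it.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"i (?:want|need) (.*)", "Why do you want {0}?"),
    (r"(.*)\?", "Why do you ask that?"),
]

def respond(sentence):
    sentence = sentence.strip().rstrip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # No rule matched: fall back to a generic nudge to keep the user talking.
    return "Tell me more."

print(respond("I am bored on the intarwebz"))
# -> Why do you say you are bored on the intarwebz?
```

The fixed rule list is also exactly why you can talk such a bot in circles: feed it a sentence shape it has no pattern for and it falls straight through to the same canned fallback every time.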

If you’ve ever received a customer service call, or contacted customer service through one of those “live chat” services offered by banks and online retailers, you’ve likely encountered a few chatbots.  Depending on their sophistication, they are often used just to collect your basic information before passing you off to a real, live human, but you can hear the difference if you listen.