Space Exploration has Two Camps


http://www.sciencealert.com/nasa-s-messenger-spacecraft-will-soon-crash-into-the-surface-of-mercury?perpetual=yes&limitstart=1

As a rule, humanity is pretty d*mn stubborn.  If we want to do a thing, we do a thing.  If we don’t want to do a thing, we don’t do a thing.  There are a ton of “reasons” we might do or not do, ranging from corporate greed to altruistic intentions and higher-order awareness, down to plain “I donwanna”.

So, at hand right now, in everyday life, we have what seem to be two primary camps for space exploration.

Number One is the Meat Plan.  The Big Sexxy, right?  Strap a rocket to the back of a test pilot and fire them off into the great unknown.  WOOOOOOOHOOOOOO!  Take THAT, Universe!  It’s a fabulous vision, and there is some logic to it.  People are useful.  We can do many, many things given enough time and materials, so sending humans + materials (or at least instructions) does make some sense (outside of the potential death/madness thing).  BUT it is infinitely harder to do, because people die and rockets explode.

BUT, Plan Two is Robots all the way down.  Remotely Operated Vehicles, rovers, probes: essentially, they are all extensions of humanity, just without the meat part.  Human designed and human built, they are not as adaptable, but they don’t die quite the same way we do, they are easier to power, easier to support emotionally (just don’t look at their Twitter feeds), and they are the current functional plan.  Since they are designed by humans, they will, as our ambassadors, reflect human preconceptions and frailties.

My humble opinion is that both plans are going to work.  The “Meat” plan is going to take longer and be harder to execute.  The “Robot” plan is already underway.  But once we get the technologies and developments from BOTH plans working together?  Watch out Universe.  Here we come….

Neural nets and modular capabilities

http://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1004128

This abstract, as it was originally presented via one of the science aggregation websites, was about robots.

Except it wasn’t.  Not really.  I mean, “robots” is the end goal that most people will understand, but what these researchers are examining is the way that memories are written and rewritten.  They are examining how the learning process adapts, and how that can be applied to the idea of building neural networks.

Right now, when we build an AI, whether it be to assemble cars or act as an enemy in a videogame, it’s very task-based.  In an ultra-simplified form you get, “If this happens, you deliver that result.”

But this not only leaves holes in the logic (because what if THAT happens and you didn’t think of it beforehand?), it also makes the programming rigid.  The AI can only operate according to the rules you have set, so if a dog runs into the automotive assembly plant, or a player decides to sneak around the left side of the building instead of the right, you end up with a broken situation.
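The sneaking-around-the-left problem above can be sketched in a few lines.  This is a hypothetical toy (not from any real game or factory codebase): a rule table maps every situation the designer thought of to a reaction, and anything unforeseen falls straight through the cracks.

```python
def rigid_guard_ai(event):
    """Rule-based AI: every situation must be anticipated in advance."""
    rules = {
        "player_approaches_right": "turn_and_attack",
        "player_approaches_front": "attack",
        "alarm_sounds": "run_to_alarm",
    }
    # Anything the designer didn't foresee falls through to a broken,
    # do-nothing state -- the "holes in the logic" problem.
    return rules.get(event, "stand_still_confused")

print(rigid_guard_ai("player_approaches_right"))  # covered by a rule
print(rigid_guard_ai("player_sneaks_left"))       # unforeseen: broken behavior
```

The point isn’t that the table is too short; you can add rules forever and a sufficiently creative dog (or player) will still find the gap.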

There is a risk (seemingly) that as we continue to develop neural-network-based AIs, this rigidity will get coded in there too.  Not always on purpose, but because that is familiar ground.  It is something we can test easily.  It is something we can codify into a clear result that can be shown to colleagues, investors, etc. to keep the funding and interest going.

But in order to truly make a neural net efficient at learning and executing new tasks, it’s got to retain the old tasks.  It’s got to use the knowledge it already has to learn the new stuff even faster (rather than having it handed to it in the form of a programming block).
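Why retaining old tasks is hard can be shown with a deliberately tiny sketch (plain Python, one weight, no ML library; the tasks are made up for illustration).  Train the model on task A, then naively retrain it on task B: nothing in plain gradient descent protects the old knowledge, so task A is simply overwritten.

```python
def train(w, data, lr=0.1, epochs=200):
    """Fit y = w * x to the data by gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -3.0), (2.0, -6.0)]  # task B: y = -3x

w = train(0.0, task_a)
error_a_before = abs(w * 1.0 - 2.0)  # near zero: task A learned

w = train(w, task_b)                 # naive sequential retraining on task B
error_a_after = abs(w * 1.0 - 2.0)   # large: task A completely forgotten
```

In the research-speak this is “catastrophic forgetting,” and it is exactly the thing a modular, adaptively rewritten memory is trying to avoid: keep the old skill around while the new one gets learned.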