Hatrack River Writers Workshop   


  This topic comprises 2 pages: 1  2   
Author Topic: Naval/Military Etiquette
EricJamesStone
Member
Member # 1681

 - posted
Doc,

OK, imagine a confrontation between a human armed only with a toolbox and a present-day ICBM carrying several thermonuclear MIRV warheads, with no human intelligence to initiate or guide its actions.

I'd bet on the human with the toolbox as being more likely to survive the confrontation intact.

Even if we start talking about far-future technology, there is no guarantee that trans-human artificial intelligence is possible. It may be possible to build hardware capable of sustaining such an intelligence, but unless humans can design the software for an intelligence greater than their own (or design software capable of creating such software, recursively), trans-human intelligence will remain impossible.

Even if it is possible, problems with the earliest AIs may lead to a prohibition on creating such entities.

So while it is possible the human brain may become completely obsolete, it is not inevitable.

As for edge-of-the-envelope spaceflight, technology may allow humans to tolerate far more than they can now. After all, if we're talking about a civilization with FTL travel, there's no reason it couldn't also have inertial compensators and artificial gravity. In David Weber's Honor Harrington series, the humans don't even blink at the idea of manned warships accelerating at hundreds of gees.

Similarly, technology can shield humans from near-misses by weapons such as nukes, grasers, and antimatter warheads.


Posts: 1517 | Registered: Jul 2003
Survivor
Member
Member # 213

 - posted
I think EJS has some mixed points there. The strongest is that humans don't like things that could lead to the extinction of humanity. We already have the myth of the evil AI that turns on its creators, and we've never even built one, let alone had one turn on anyone.

The first time it happens might be the last time for any number of reasons. But presuming that we survive, it will probably be the last time anyone views creating a true AI as a less serious crime than premeditated genocide.

Also, there is no reason to believe it wouldn't be just as easy to "harden" a human as to create an equally "hard" intelligent system from scratch. And humans are very cheap (and surprisingly hard, particularly against things like nukes). For ordinary exploration, it may well be that automated probes will be more cost-effective. But for real wars...human life may be priceless in some remote philosophical sense, but the truth is that on a battlefield, lives are worth very little.

Most humans don't like to think of their lives as cheap, but such is the case when the hard facts are examined.


Posts: 8322 | Registered: Aug 1999


Copyright © 2008 Hatrack River Enterprises Inc. All rights reserved.
Reproduction in whole or in part without permission is prohibited.

