Hatrack River Writers Workshop   

Topic: Robot Soldiers

Keeley
 - posted
While searching on a slightly related topic, I found this article.

According to this NYTimes article, the military is planning on spending "tens of billions" of dollars on robots that can kill people. Currently, they are remote-controlled, but the military hopes that within a decade, the robots will be able to think for themselves (obviously without all the hang-ups humans have about killing other members of their genetic pool).

I'm working on background for a story that may include these (not in the actual story... only in the background), so I ask everyone here who's interested in responding: what do you think?

[This message has been edited by Keeley (edited February 16, 2005).]


HSO
 - posted
Yes. Robots would be good. We won't feel nearly as guilty about killing people with robots as we currently do about killing them with people. However, it is one of the most fundamental functions of the military to kill people. If you don't want to do that, you shouldn't have a military at all. But the modern military has always been pursuing ways of keeping people off the battlefield, for many reasons that would take pages of text to explain properly.

I read a story (or several, actually) where whole battles were fought by machines on battlefields (some were computer simulated) and the human generals and officers sat in cozy air-conditioned avionics vans behind their computers to control the troops, safe from harm. We're pretty much there already. These robots are a natural progression of an idea that has been around ever since man has been fighting, I would guess.


mikemunsil
 - posted
I can see the use of robots for assisting troops in gaining and holding urban territory, but I have a problem with them holding that territory on their own, or being of significant use on a widely spread battlefield.

I can see them being used to deny territory to an opponent, but so can minefields, and mines continue to be cheaper to produce and distribute. If I were the commander of a mixed forces group, I would use them to prepare the battlefield in my favor, and to set up killing fields.

I might also use them as expendable scouts.


Christine
 - posted
The first questions that come to mind are:

1. Where's the shut-off switch?
2. Which people do the robots kill and how do they identify them?
3. Where's the shut-off switch?
4. Do they understand or respond to surrender?
5. Where's the shut-off switch?

The biggest problem I see with AI stories is that they never explain why the shut-off switch was never used. Either that or they insinuate that people are too stupid to put one in.
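
For story purposes, the "switch" is usually less a big red button than a dead-man signal: the robot stays armed only while it keeps receiving an authenticated "keep going" message, and it stands down as soon as the messages stop. Here is a rough sketch of that idea in Python; every name and number in it is invented purely for illustration.

code:
import hashlib
import hmac
import time

SHARED_KEY = b"loaded-before-deployment"   # invented pre-shared key
TIMEOUT_SECONDS = 5.0                      # silence longer than this disarms the robot

def heartbeat_is_valid(message: bytes, signature: bytes) -> bool:
    """Accept only 'keep going' messages signed with the pre-shared key."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

class KillSwitch:
    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self, message: bytes, signature: bytes) -> None:
        if heartbeat_is_valid(message, signature):
            self.last_heartbeat = time.monotonic()

    def may_engage(self) -> bool:
        """Weapons stay live only while authenticated heartbeats keep arriving."""
        return (time.monotonic() - self.last_heartbeat) < TIMEOUT_SECONDS

The failure modes a writer would want are visible right in the sketch: the key leaks, the timeout gets set absurdly long, or somebody wires around the check in the field.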


HSO
 - posted
I think your final comment, Christine, is right on the money -- that is precisely what would happen: people would forget, or they would install dodgy, low-cost, imported cutoff switches that failed, resulting in scandalous accidental deaths and friendly-fire killings.

But I laughed at the three instances of "where's the cutoff switch?" Point well taken.


Corpsegrinder
 - posted
Here's a really interesting book on how the US Army trains soldiers to "temporarily" overcome a human being's natural disinclination to kill other human beings...

ON KILLING
by David Grossman

The author isn't exactly the most polished wordsmith, but the info in this book is very interesting.

Much of the R&D for robotic combat systems falls within the rubric of a program known as "Future Combat Systems".


Minister
 - posted
Without major leaps in several areas of technology, I can't see effective robotized infantry in the near future. Especially not robotized infantry that looks even vaguely human.

Robotized convoys -- sure; they still need some work, as the Times article pointed out, but they're probably doable within the next few decades (assuming that your convoy never has to deal with enemy fire, anyhow). Robotized air power isn't far off. It's already going up for surveillance. Most aerial combat today depends upon the weaponry, with the airplane simply a platform for launch and the pilot a guide to get it there, anyhow. The guy with the better missile will win most dogfights. Robotized artillery isn't unreasonable either, as long as a human is around to tell it where to point and where to drive. And we're already seeing the value of robots in bomb disposal and information gathering.

But truly autonomous robotic cavalry and infantry are probably a long way off, if they are ever practical. First, to be really effective, they have to be essentially autonomous. Any method of communication can be jammed. If your infantry/cavalry depend on someone telling them what to do, they are useless as soon as the signal is interfered with. And they are terribly dangerous if someone else is able to trick them into taking the wrong orders. (There's a story for you -- computer hackers becoming the elite combat forces, as they try to break into and take over the opposition's forces, while protecting their own. Hmmm...)

Second, movement is an issue, especially for infantry. The human body is an all-terrain vehicle practically without parallel. But the engineering involved in the human body is generations beyond what modern robotics can accomplish. And the more complex the robotics become, the less reliable, more fragile, and harder to maintain they are.

Maintenance is the next major problem. The article presents the fact that robots don't need to eat, etc., as a strength. But they still need either fuel or an electrical charge. And robots cannot heal themselves -- even the most minor damage will require a trained technician to deal with, as well as a constant flow of parts. I suspect that maintenance and supply issues would actually increase with the use of battlefield robots.

Finally, there is simply no substitute for human intelligence and initiative. While Deep Blue showed us that within a simple fixed set of parameters, and with superlative programming, an artificial intelligence can equal or even beat the best of humans, that is far from showing that even the best of AIs could compete with trained, intelligent humans on the battlefield (especially since, contrary to what movies generally show, the humans will probably be more mobile, and more resistant to injury that prevents them from functioning, than the robots are).

In short, although robotic warriors can make good, plausible science fiction, and although robotics can help our troops in a lot of ways, I just can't see them becoming the infantry, cavalry, and decision making elements of our military any time in the foreseeable future.


Keeley
 - posted
I heartily agree with the shut-off switch issue.

Robotics (last I heard anyway) was leaning more toward emulating insects and the way they move. If robots weren't made to look like humans or move like humans, if they were like a swarm of insects, would that be more effective in combat?

Specifically, I'm thinking of a cloud of locusts effect.

As for maintenance, that's a pretty big problem. The money saved on medals probably wouldn't cover the cost of repairing the machines. It would be even worse with smaller, insect-like robots unless they were considered disposable from the start.

Minister, I love your story idea.

Edited to thank Corpsegrinder for the book recommendation.

[This message has been edited by Keeley (edited February 18, 2005).]


Jeraliey
 - posted
Hmmm, a shutoff switch for an AI...could that be seen as similar to killing?

No answer needed; the value is in the consideration of the question.


Robyn_Hood
 - posted
If a shut-off switch does equate to killing the AI, then it raises the question: is it ethical to use AIs as cannon fodder?

I don't think a shut-off switch would be equal to death. It would be more like an induced coma. But if the machines are sentient, then there is still an ethical question of imposing a coma-like state on them.


Survivor
 - posted
Hmmm...mm.

I think that the greatest problem is moral/morale.

quote:
Pentagon officials and military contractors say the ultimate ideal of unmanned warfare is combat without casualties.

You'll note that this is our primary justification for developing such a system. But this idea is incompatible with fighting and winning a war, even a completely defensive war. Once the people making military decisions are making them on the basis of avoiding casualties at all costs, the nation is doomed to lose in war. A nation that is consistently losing wars is not in a good position to make a major military-technology advance. It soon won't be in a position to do much of anything at all, in fact.

It is only after listing the real reason that we're moving to robots that the "economic" justification is offered.

quote:
The median lifetime cost of a soldier is about $4 million today and growing, according to a Pentagon study. Robot soldiers could cost a tenth of that or less.

This is well and good in some ways, but the truth is that the listed price tag for fielding a soldier is artificially inflated by our cultural attitude towards the risk of taking casualties. The economics of this only appears to hold up because we've already factored in the morale issue by means of programs to cover the moral cost of casualties economically.

Robot soldiers taking the place of human soldiers is a pipe dream. Robots can be used to do important tasks on the battlefield; in fact, they're already doing that. But a society that regards eliminating the risk of human casualties as a reason for fielding robots will never be able to accomplish it effectively. Only when the robots have taken over will they be the ones fighting the wars for us (even then, they'll probably recruit humans for certain tasks, the way we use dolphins and dogs).


Hildy9595
 - posted
Weird.

I just came back from Universal Studios, where I went on the Terminator "ride." Part of the storyline is a demo of the new soldiers being developed by Cyberdyne: the Terminator robots. The dialogue and the real-life news reports are eerily similar.

Not that I think the robot soldiers are going to come to sentience and stomp us all into the ground; merely pointing out how odd it was to step out of a sci-fi simulation and then read/watch actual news reports on the (basically) same development.


Corpsegrinder
 - posted
Autonomosity (Is that even a word?) will creep into existing combat systems very slowly, a little bit at a time.

Right now, remotely piloted vehicles (RPVs) like the Predator possess limited degrees of autonomosity. For example, if communication is abruptly lost, the RPV has the ability to choose between several potential rendezvous/loiter/recovery destinations. The RPV then flies to the selected destination and awaits further instructions.

That may not sound very impressive, but the Predator is also armed with a pair of Hellfire antitank missiles.

To my knowledge, no "autonomous" Hellfire kills have been made by Predators, but I wouldn't be surprised if it has already happened...
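
That lost-link behavior is basically a tiny state machine: fly the mission while the link is up, orbit a pre-briefed rendezvous point when it drops, and head for a recovery point when fuel gets low. Here's a toy sketch of that logic in Python -- nothing in it is from an actual Predator; the mode names, coordinates, and fuel threshold are all made up.

code:
from enum import Enum, auto

class Mode(Enum):
    MISSION = auto()     # ground station is flying the aircraft
    LOST_LINK = auto()   # orbit a rendezvous point and wait for instructions
    RECOVERY = auto()    # head home while there is still fuel to get there

# Pre-briefed fallback destinations (names and coordinates invented).
RENDEZVOUS_POINT = ("loiter_alpha", 34.10, 45.20)
RECOVERY_STRIP = ("home_strip", 33.90, 44.80)

def choose_mode(link_up: bool, fuel_fraction: float) -> Mode:
    """Pick behavior from link status and remaining fuel."""
    if link_up:
        return Mode.MISSION
    if fuel_fraction > 0.3:
        return Mode.LOST_LINK
    return Mode.RECOVERY

def destination(mode: Mode):
    if mode is Mode.MISSION:
        return None            # the operator is steering, no fallback needed
    if mode is Mode.LOST_LINK:
        return RENDEZVOUS_POINT
    return RECOVERY_STRIP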


Survivor
 - posted
"Autonomy" pretty much covers anything that you might mean by "autonomosity" (except the amusing parallel with "animosity").

Anyone unimpressed by Hellfire missiles can volunteer for target practice for all I care. I certainly don't go about my day to day business with the firepower to take out a couple of light tanks should the occasion arise.

However, the Predators do not currently have the ability to choose their own targets. That would require at least a next-generation advance in their onboard visual processing capabilities for them even to be able to pick out a ground target for themselves, let alone decide whether it would be a good idea to blow it up. Actually, I'm not sure that the Predator has any ability to use the visual information it collects; it probably does its semi-independent navigation by GPS and instruments for the most part. But I wouldn't be surprised if they can "see" some things. And ground targets would be a useful first step; that way they could run basically unmonitored until they "noticed" something interesting. But they aren't to that stage just yet.

But even with the best visual processing capabilities we could put in them, they would only be able to spot things that might be ground targets. For a good long while, the best bet is to set them to report anything that looks like it could be a target and let a human decide whether it was worth a missile. If you were deploying them in massive swarms over a "kill everything that moves or looks artificial" area, that would be different (and kinda cool to see), but I don't see that happening very much.
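
That report-and-wait loop is easy to state precisely, and it's what most people mean by keeping a human "in the loop": the vehicle only nominates candidates, and nothing is released without an operator explicitly approving a specific nomination. A purely illustrative sketch in Python (the class and method names are mine, not anyone's real interface):

code:
import itertools
from dataclasses import dataclass

@dataclass
class Candidate:
    track_id: int
    description: str
    approved: bool = False      # flipped only by a human operator

class TargetQueue:
    """The vehicle nominates; a human approves; release requires both."""

    def __init__(self) -> None:
        self._ids = itertools.count(1)
        self.candidates: dict[int, Candidate] = {}

    def nominate(self, description: str) -> int:
        """Called by the vehicle when something looks like it might be a target."""
        track_id = next(self._ids)
        self.candidates[track_id] = Candidate(track_id, description)
        return track_id         # forwarded to the operator for review

    def operator_approve(self, track_id: int) -> None:
        self.candidates[track_id].approved = True

    def may_release(self, track_id: int) -> bool:
        candidate = self.candidates.get(track_id)
        return candidate is not None and candidate.approved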


Doc Brown
 - posted
If the human race continues to wage war long enough it will eventually become the job of AI machines to wage it for us. Paying humans to fight is just too expensive.

We've already seen this with nuclear weapons; we use them because they give us more destruction at less cost. Reducing the need for human labor is the name of the game.

While AIs might never march across the battlefield like footsoldiers (Star Wars episode I was silly in this aspect), we will certainly see something like an AI fighter plane or ballistic missile submarine within a generation or two.


Keeley
 - posted
Survivor, I thought the Pentagon's comment about avoiding casualties was just a way of appealing to the masses. I seriously doubt anyone there thinks we can wage a war without losing some of our soldiers in the process.

But then, I've never been in the military.


JBSkaggs
 - posted
Intelligent bombs -- crawling up from sewers, trees, the air, water, whatever. Moving slowly, and very small, to infiltrate critical targets. No one would notice a barrel-sized robot moving at five or six miles per hour easing into position at night. Once it arrived, it would simply detonate. Controlled by simple GPS guidance. Smaller versions crawling up out of toilets and exploding in offices or living rooms would be useful.

High velocity, mass, and human delivery are what people watch for in bombs now.

[This message has been edited by JBSkaggs (edited February 18, 2005).]


Survivor
 - posted
Well, there's a certain diversity of opinion in the Pentagon. But overall there is a noticeable creep of a "let's find ways to get paid without getting shot at" attitude. That's a perfectly sensible attitude for most lines of work.

I think that I would notice a barrel-sized robot moving at a rapid walking pace, be it day or night. You'd have to disguise it as something else.

On the subject of robotic soldiers, we can all get our "acronym" designations expanded to reveal our true functionality over at http://www.cyborgname.com . Sorry, there's an eleven letter limit on expanding names.


Warbric
 - posted
JBSkaggs -- your post brought to mind hundreds or thousands of minuscule bomblet-bots infiltrating and congregating at one place where their combined explosive potential could be devastating.

Christine's shut-off switch could be a key controller-bot without which the bomblet-bots remain individually impotent and collectively harmless.


Corpsegrinder
 - posted
Well, I don’t mean to pick on Survivor, but…

“Well, there's a certain diversity of opinion in the Pentagon. But overall there is a noticeable creep of "let's find ways to get paid without getting shot at" attitude. That's a perfectly sensible attitude for most lines of work.”

Well, no. I take that back. I DO mean to pick on Survivor, but only because he’s so much fun to debate with—I mean that as a compliment. No, really!

Anyway, the "noticeable creep" that Survivor talks about is actually one of the distinguishing characteristics of democracies at war. Democracies are always less willing than totalitarian states to accept large numbers of casualties, for obvious reasons. But on the other hand, this characteristic also means that a democracy will only declare war if it thinks it has a good chance of winning. Also, the armies of democratic nations tend to be more lethal than those of totalitarian states. In other words, soldiers from democratic societies are very efficient killers -- I'm not just talking about technologically advanced nations like the US; this is a trend that goes all the way back to ancient Athens. The Athenians, after all, invented the concept of the "pitched battle".

As a result, democracies tend to win about 80% of the wars they fight. Totalitarian states only win about 50%.


Survivor
 - posted
That's both true and...misleading.

The common fallacy that drags down the military of a totalitarian state is Clausewitzian thinking. Basically, that you can make your soldiers fight best by inspiring them to fear their own officers and government (and the related idea that the best way to get the enemy to stop fighting is to make them fear your military). This idea persists because you can get soldiers to go out and fight by inspiring fear of their own side. But the judgement of history is very clear: normal humans fight best when their fear (and consequent hatred) of the enemy is the fundamental motive.

This is the basic reason that democracies are able to field more lethal, better motivated armies in the wars they end up fighting, because the individual soldiers are eager to fight the enemy. Every soldier on a battlefield that is thinking first and foremost about how to evade the wrath of his own officers and government is basically a minus when compared to any soldier that is there because he personally wants to kill the enemy.

Note, in none of this thinking is there ever a calculation of whether casualties will be heavy or light. The urge to avoid taking casualties is unrelated to the general desire--always present in a democratic army fighting a popular war--to inflict casualties on the enemy.

Perhaps you said it best, "the armies of democratic nations tend to be more lethal than those of totalitarian states." It is the individual soldier's will to kill the enemy that makes such armies successful.

But when avoiding casualties is your primary motivation, you are much more susceptible to the motives posited by Clausewitz, because those motives depend on the cowardice of both the citizens and soldiers of the nations involved. Soldiers that fight against those they fear must be willing to overcome fear of death. If they cannot, they will only be able to serve those they fear most. In an army where the soldiers have more reason to fear their own officers than to fear the enemy, such soldiers can still be made to fight. But in our army, they cannot be made to fight.


Corpsegrinder
 - posted
Ah...Clausewitz. Yes, Clausewitz says a lot of very interesting things. However, you are correct when you assert that there are indeed flaws in his martial philosophy. The biggest flaw in Clausewitz is that he writes ONLY about wars between totalitarian states. After all, he was a veteran of the wars between Napoleonic France and Prussia. He was simply not capable of writing about democratic states at war.

Anyway, to the issue at hand. I don’t think you’re correct when you assert that soldiers of democratic armies are better motivated to KILL THE ENEMY than those of totalitarian states. In point of fact, during the Civil War, World War One, and World War Two, American and British soldiers were among the most poorly motivated to kill bad guys. (Then, of course, there were the French…we won’t go there.) The soldiers of Nazi Germany, Communist Russia, and the Imperial Japanese were much better motivated to kill the enemy—especially when they were fighting against a foreign invader on their own soil, as happened during the later stages of WW2. As far as individual soldiers were concerned, the German and Japanese soldiers were the best.

S.L.A. Marshall calculates that in combat conditions, only 1/6 of individual American soldiers fired their weapons at the enemy, when given a clear target. (Marshall has his critics, to be sure, but so far no one has managed to directly disprove his thesis. Furthermore, the works of earlier researchers tend to support Marshall.)

So, how did the Yanks and Brits manage to beat the best and most highly motivated individual soldiers in the world? (The war on the Eastern Front is, by definition, beyond the scope of this question.) Here’s a hint: they did not rely on individual soldiers.


Survivor
 - posted
It can be a mistake to use the examples of how poorly individual soldiers fielded by democratic societies fared in those wars. For one thing, those armies were then committed to embracing the Clausewitz/Prussian model of military discipline/motivation for their soldiers.

And that's my essential point. The Clausewitz/Prussian model, when applied to soldiers from a democratic society, is fundamentally defective. It worked far better for totalitarian armies. And even though reducing the soldier's fear of the enemy by allowing him to kill with little risk to himself is somewhat different from threatening him with dire punishment should he fail, it is still an expression of the same model of motivating soldiers to fight.

Marshall's estimates are hard to reduce to a single, neat figure. There were a variety of different circumstances posited for what percentage of soldiers were carrying out what level of combat behavior. For instance, in the circumstance where an enemy was in sight and an officer or non-com was present, most soldiers would discharge their weapons in the direction of the enemy, and the percentage went up further if the officer actually reminded the soldier to do so. But the evidence suggests that many of them were not actually shooting to kill or even hit the enemy. On the other hand, soldiers not being directly supervised would often engage in such "combat activities" as constantly reloading or adjusting their weapons over and over rather than firing at all.

In practical terms, depending on what you count as effective combat activity, the figure drops to below a tenth for most WWII engagements, and something far less than that even for many of the Civil War battles (including some of the bloodiest and "hardest fought" battles). Paradoxically, because such ineffective fire tends to lead to stagnant battles of attrition, these kinds of combat avoidance tend to mean more and worse casualties in absolute terms.


Corpsegrinder
 - posted
"Marshall's estimates are hard to reduce to a single, neat figure."

Yeah, Marshall himself floats around between 25% and 15%. He never really boiled it down to a repeatable constant.

"...the figure drops to below a tenth for most WWII engagements and something far less than that even for many of the Civil War battles..."

I can believe that! Several years ago, I participated in a Civil War reenactment in Missouri. There were approximately twenty thousand uniformed participants, 100 mounted cavalry, and at least that many full-sized artillery pieces. The battles were all pre-scripted and very tightly controlled. Anyway, as I stood in the front rank with several thousand other Yankees, reloading and firing blank cartridges from my musket, I wondered how in hell ANYBODY could actually do this in actual combat. The noise and smoke had to be experienced to be believed--and we were shooting blanks with only thirty grains of black powder! A typical Civil War cartridge had seventy grains. And on top of that, a Minié ball makes quite a bit of noise as it cuts through the air. (I know that from standing in a target trench at a shooting range, raising and lowering targets while other shooters fired black powder rounds over my head. I simply can't imagine what canister must have sounded like...)

Anyway, one of the big trump cards the Brits and Yanks had over the Germans was the fact that they had the best artillery in the world. It's been said (though I can't remember by whom) that the war in Europe was basically a series of battles over the best places to situate forward artillery observers.


HSO
 - posted
Is anyone else getting a sense of déjà vu?


Corpsegrinder
 - posted
Bwahahahahahahahahah!!!
Survivor
 - posted
I think that the big trump card came down to the American economy in both WWI and WWII. Superiorities in particular weapon systems/tactical capabilities don't really become trumps until the enemy lacks the resources to use his strength in another area to counter your specific/limited superiority.

The same thing is true in an actual game of cards, as it so happens. Not that that has anything to do with anything.

I think that the term "pressure" used so frequently in describing Civil War era tactics is such a wonderful military euphemism for "scaring the other guy till he wants to pee himself." One of the things that helped make N.B.Forrest a tactical genius is that he could state the actual functions of the available tactics in terms of their practical effect without getting red in the face.


Corpsegrinder
 - posted
Well yeah, the economic power was the biggest trump card of all, but the disparity between Anglo-American and German artillery was like the disparity between the Sherman tank and the Panther, except in reverse.

For example, American artillery routinely fired TOT barrages (Time On Target--that’s where hundreds of guns scattered over many miles would fire in such a way as to allow their shells to land on target within a few seconds of each other). German TOT barrages were extremely rare, and also very small. Also, the Allied counterattack after the Ardennes Offensive was spear-headed by a massive barrage of shells with radar proximity fuses. These things were designed to explode approximately fifty feet above the ground, showering everything below with shell fragments.
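
The arithmetic behind a TOT shoot is straightforward: each battery knows (from its range tables) how long its shells take to reach the target, so every gun fires early by exactly its own time of flight and everything arrives together. A toy illustration in Python, with invented battery names and flight times:

code:
# Shell time of flight from each battery to the target, in seconds (invented numbers).
time_of_flight = {"Battery A": 23.0, "Battery B": 41.5, "Battery C": 35.2}

IMPACT_TIME = 120.0   # seconds from now when every round should land

# Each battery fires early by its own time of flight so the shells land together.
fire_times = {battery: IMPACT_TIME - tof for battery, tof in time_of_flight.items()}

for battery, fire_at in sorted(fire_times.items(), key=lambda item: item[1]):
    print(f"{battery} fires at T+{fire_at:5.1f} s; impact at T+{IMPACT_TIME:.0f} s")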

Getting back to the original topic, there was one particular robotic weapon deployed during WW2—the Mk 24 acoustic homing torpedo. This was an air-dropped torpedo that ran a pre-programmed search pattern for a period of about fifteen minutes. It had a vacuum-tube signal processing system that determined when to deviate from the search pattern and home in on a potential target. If it didn’t locate a target by the time its batteries ran down, it would automatically sink to the bottom.

It was a very successful weapon, by the standards of the day. Out of approximately 340 Mk 24’s launched in combat, 37 German and Japanese subs were either sunk or damaged—that’s a hit ratio of about 11%.
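
Described that way, the Mk 24 is essentially a small state machine: run the search pattern, switch to homing when the hydrophones pick something up, and scuttle when the battery is spent. A rough software analogue in Python -- the signal threshold is invented, and only the fifteen-minute figure comes from the description above:

code:
from enum import Enum, auto

class Phase(Enum):
    SEARCH = auto()    # circle on the pre-set search pattern
    HOMING = auto()    # steer toward the strongest acoustic contact
    SCUTTLE = auto()   # battery exhausted: sink to the bottom

BATTERY_LIFE_SECONDS = 15 * 60   # roughly fifteen minutes of running time
SIGNAL_THRESHOLD = 0.6           # invented: contact strength needed to leave the search

def choose_phase(elapsed_seconds: float, signal_strength: float) -> Phase:
    """Pick behavior from elapsed running time and the loudest hydrophone contact."""
    if elapsed_seconds >= BATTERY_LIFE_SECONDS:
        return Phase.SCUTTLE
    if signal_strength >= SIGNAL_THRESHOLD:
        return Phase.HOMING
    return Phase.SEARCH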

[This message has been edited by Corpsegrinder (edited February 22, 2005).]


Survivor
 - posted
I suppose that in a certain sense, it's true that we already use a lot of "robots" on the battlefield. But in a practical sense, we don't (and won't, not for a long time) have robots that can replace the human in combat. The essential thing isn't hit/miss ratios; it's something that suitably replicates human judgement, the ability to consider whether or not something is a good idea.

That element is clearly necessary to an entity that is not just going to be responsible for hitting the target, but for making the decision whether or not to fire in the first place. For the time being, developing a non-expendable automaton that can be entrusted with its own survival while carrying out a mission is the expression of that challenge. We might have those in under a decade, or not. It depends. I know there are bright people with military experience and electronics engineering training, and there are more in the pipe. Whether there are enough, along with the other necessary factors, is still an open question.

TOT barrages...you can see why the Germans didn't feel they needed them. TOT barrages are a method of concentrating firepower in order to weaken a defensive position prior to an assault. They had no real place in the (highly effective) attack strategies developed by the German command, and they are nearly useless for a defender faced with steady losses of territory. There was simply no phase of the war during which the Germans would have had much reason to use them.

It's actually interesting where Marshall's research, artillery, and military robots meet. While I don't approve of the concept of "bloodless combat", I do think that there has to be a fundamental understanding that the vast majority of humans (the kind that can be substantially affected by conventional artillery, for instance) have no place on the battlefield (except as hostages, impromptu or otherwise). But it is important that we not forget that there are other humans that are...different.

That is what we're trying to replace with our robotic soldiers, and it is a daunting task. I don't think that it is possible to create something better suited to war than the type of soldier that is drawn from this population. But that is just an opinion, after all.


   
