28199. iiibbb - 4/19/2006 10:11:12 PM
The Turing test breaks down in light of Koko... as well as a lot of mentally deficient humans.
I'm more likely to buy the empathy argument... or at least some sort of introspective capacity... the capacity for abstract thought is another good "test"... the capacity to link seemingly unrelated topics.
I think it might be very hard for us to judge something we made.

28200. alistairConnor - 4/19/2006 10:12:01 PM
Would you sentence someone to the death penalty for killing it?
That's easy...
I would never sentence anyone to the death penalty, for killing anything.

28201. Adam Selene - 4/20/2006 12:59:00 AM
Alistair - what about life imprisonment? For killing a "sentient machine?"

28202. anomie - 4/20/2006 11:38:49 AM
Not to put too fine a point on things, but we already do create entities "as real as us". They start out small and inconvenient but are ready for punishments and self-replication in about 14 years. Some are ready for the death penalty at that age in places like Kansas.
As for crimes committed by an AI entity, the death penalty couldn't possibly apply, since it is by definition not "alive". We'd need new punishment protocols like... disassembly with no possibility of repair. Not life imprisonment, but long-term storage, perhaps with eligibility to be cannibalized for parts. For lesser - not crimes, but errors - perhaps a double re-boot, a RAM swap-out, a motherboard rebuild. The possibilities for humiliating the machine are endless.

28203. alistairconnor - 4/20/2006 12:17:30 PM
Well. I'm sure we can all recite Asimov's Three Laws of Robotics. I don't concede that an AI entity could ever have any legal standing whatever.
That's an easy rule. The only problem is if people start monkeying around at the frontier between what's human and what isn't.
* Implanting electronics to enhance human intelligence: problematic; would that diminish legal responsibility?
* Growing a protein brain in a vat, and wiring it up to the outside world? Yecch.
Personally I think all frontier stuff should be outlawed, because I think it's important that we avoid getting into such moral ambiguities.

28204. Adam Selene - 4/20/2006 3:26:20 PM
Outlawed or not, it will happen, just like every other possible (and profitable) technology.
One possibility is that, rather than elevating machines to human status, humans will be reduced to bio-machines. And this isn't necessarily bad. For example, if you could "fix" a criminal's brain so they didn't want to commit crimes any more... isn't that better than punishment? Rather than "kill" a machine by disassembly, just fix it.
Asimov's rules are only applicable if we truly treat robots as a separate kind of entity and forever formalize their distinction from human beings. (Assuming such high-level, cognitive concepts as the Three Laws could ever be hardwired in the first place.)

28205. alistairconnor - 4/20/2006 4:58:08 PM
Profitable technology?
At university, one of the profs had a good quote about the futility of AI, which is unlikely ever to be cost-effective given the cheap availability of the protein variety.

28206. Adam Selene - 4/20/2006 6:37:55 PM
Well, if you duplicate a human at higher cost, then ya, hardly profitable. But if you create a "pure" intelligence that doesn't need sleep, take coffee breaks, ask for a salary, get pregnant, go on strike, need oxygen, sue anyone, etc... now that's a whole 'nother story.

28207. PelleNilsson - 4/20/2006 6:58:28 PM
Exactly. What is the profit in creating machines that emulate the fuzziness, the unpredictability, the moodiness, the irrationality of us humans? The whole thing is a strawman created by Adam, the best use of which is to chop it up, perhaps by the machine below, and use the proceeds in alistair's pony stables.
International Harvester, model M, 6 HP, 1929.

28208. Adam Selene - 4/20/2006 7:48:10 PM
The "strawman" failed to elicit the response I expected. Funny that. Not too many years ago, people would have been all, "you can never make a machine that is really intelligent," or "it will always be a machine, it won't be alive or anything like that." That was waaayy back in the days when we thought of making people more machine-like (à la Mr. Spock). But in these post-Data android days... we think more about making machines human-like, and no one seems aghast that we could even possibly create such a thing.
Times, they are a-changin'.

28209. sakonige - 4/21/2006 7:13:38 AM
Doesn't seem that strange to me to love a machine. People do it all the time, especially men. A beautiful machine seems alive. It's a small step from there to a beautiful intelligent machine being alive.

28210. alistairconnor - 4/21/2006 9:28:40 AM
Outlawed or not, it will happen just like every other possible (and profitable) technology.
I don't accept that as inevitable. If there is a moral imperative involved (and I believe there is), then as moral beings we must oppose it. For example: would you concede that genocide is inevitable? I contend that it is not: all moral entities must remain vigilant and intervene by all means to prevent it.

28211. Adam Selene - 4/21/2006 2:11:22 PM
"Personally I think all frontier stuff should be outlawed, because I think it's important that we avoid getting into such moral ambiguities." - Alistair
I think I need more clarification... there are lots of issues that are moral ambiguities to some but not to others. Convince me that it's immoral to have an intelligent and alive creature that is created artificially by man.

28212. PelleNilsson - 4/21/2006 4:28:46 PM
Define "alive".

28213. Adam Selene - 4/21/2006 6:58:13 PM
Define alive.
Oh, you know, the usual definition. ;)
Use whatever definition suits you, as long as it is modified to include: "intentionally created by humans from non-living materials." It can have progeny, or be progeny, but it or some specific ancestor must be the first one that was created by humans. It cannot result from a reproduction (natural or otherwise) of an existing life form; it has to be an original design.
A typical definition would include:
1) adapts to its environment
2) reacts to stimuli
3) reproduces itself
4) capable of perpetuating itself in a "natural" environment (without intentional and ongoing intervention by others, that is, apart from what occurs in an ecosystem)
and I would also require intelligence, which is just as hard to define, but would include:
1) self-aware
2) aware of its own mortality
3) capable of abstract communication.
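Adam's two checklists amount to a conjunction of predicates. A minimal sketch in Python, purely illustrative: the `Entity` type and the function names below are invented for the example, not anything from the thread.

```python
# Illustrative only: Adam's "alive" and "intelligent" checklists,
# encoded as predicates over a hypothetical Entity record.
from dataclasses import dataclass

@dataclass
class Entity:
    adapts_to_environment: bool
    reacts_to_stimuli: bool
    reproduces: bool
    self_perpetuating: bool       # survives without intentional, ongoing intervention
    self_aware: bool
    aware_of_mortality: bool
    abstract_communication: bool
    created_by_humans: bool       # it, or some ancestor, was an original human design

def is_alive(e: Entity) -> bool:
    # criteria 1-4 of the "alive" list
    return all([e.adapts_to_environment, e.reacts_to_stimuli,
                e.reproduces, e.self_perpetuating])

def is_intelligent(e: Entity) -> bool:
    # criteria 1-3 of the "intelligence" list
    return all([e.self_aware, e.aware_of_mortality, e.abstract_communication])

def qualifies(e: Entity) -> bool:
    # Adam's full criterion: alive, intelligent, and of artificial origin
    return is_alive(e) and is_intelligent(e) and e.created_by_humans
```

On this reading, a human fails only the last clause, which is exactly where the later argument about "blurring the lines" picks up.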
So - why would creating such a thing be immoral?

28214. Ulgine Barrows - 4/22/2006 6:37:24 AM
heh, they're already working your ass, aren't they

28215. Ulgine Barrows - 4/22/2006 6:39:24 AM
mmmm..... that comment was to the new host, Adam
yep, they're workin ya

28216. alistairConnor - 4/22/2006 3:57:24 PM
Convince me that it's immoral to have an intelligent and alive creature that is created artificially by man.
You misread me, Adam; that's not my thesis. I don't have a problem with that (or if I do, it's an entirely different problem). What I want to outlaw is any blurring of the lines between human intelligence and created intelligence.

28217. PelleNilsson - 4/22/2006 4:26:26 PM
Alistair is clearly an Asimovian. How shall we label Adam? Would Philipdickian do?

28218. Adam Selene - 4/22/2006 5:56:16 PM
alistair, could you explain what you mean by blurring the lines? You mean by granting artificial intelligences any kind of human rights?
Pelle, Adam is... well, consider the source of my namesake. :)