Spider AL, 04-22-2007, 03:17 PM (#16)
TK, I wonder why you'd want my input? Achilles has already done a good job of covering all the most significant points.

I may have a couple of things to add:

1. On the idea of representing moral values mathematically in general:

As others have noted, your idea is a fine intellectual exercise, very intriguing as a puzzle. But there are two questions that immediately spring up:

1. Does your equation tell us anything about morality that hasn't already been elucidated in longhand by the major moral philosophers of history?

2. More importantly, COULD it tell us anything new about morality in the future?

The answer to question one would appear to be "no", but that's a minor issue. The answer to question two is "maybe, but only if all moral values were accurately represented in the equation. Perhaps then the numbers could be manipulated mathematically to show us something new and interesting."

But of course, to accurately represent all the variables would probably be the work of a lifetime. And while your OP shows one way that moral values might be transcribed, it does not cover the whole gamut (on which I'll comment specifically shortly).

So in short, you've started on what would be a quite serious undertaking. There's no reason you shouldn't be the one to complete it, however. I for one would be most interested to see what you eventually come up with.


2. On the specifics of your original "moral equation":

Quote:
Originally posted by tk102:

So how do we measure the morality of an action? It is inversely proportional to the amount of distress, D, the particular act, x, causes.
Up to this line there was little to add. However, as has been noted subsequently, distress is not the only variable in the moral equation. It's an important variable, to be sure; central, in fact. But it's not the only one. As well as physical and psychological suffering, there's the concept of "loss". Let me illustrate:

If you murder someone it's immoral, whether the method you use is painless or not. You can murder someone painlessly; you can, for instance, drug them so that they first fall unconscious and then expire. The fact that they did not feel any physical or psychological distress is a factor (someone who tortures a person for three weeks before killing them has arguably committed a more immoral act), but any killing of this type, painful or painless, results in the loss of the subject's life.

Existence, rationally speaking, is all we have, and our time on this earth is all we ever will have. For various reasons which have been discussed elsewhere, we feel a desire to maintain our lives. When you kill a creature (whether it's aware of its impending doom or not) you are, to paraphrase a Clint Eastwood line, taking away all it has and all it ever would have had.

This is why I have tried to define morality in the past (as you may remember from my response to one of your own questions in the "moral relativism" thread) as the objective, universal standard of behaviour that aims to minimise one's negative impact on other creatures. This heading of "negative impact" encompasses any and all suffering, but also loss of life and any more minor violations of established rights, etcetera.

Therefore any purely mathematical expression of the moral equation would have to incorporate these additional variables, possibly under a heading similar to "negative impact".
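Just to make that concrete, here is a minimal sketch (in Python, since your equation rather invites it) of one hypothetical way the "negative impact" heading might be transcribed. Every name and figure below is my own invention for illustration, not your actual equation:

Code:
from dataclasses import dataclass

@dataclass
class Impact:
    suffering: float        # physical and psychological distress caused
    loss: float             # life, or future experience, taken away
    rights_violated: float  # lesser violations of established rights

def negative_impact(impacts):
    """Total negative impact of an act across all affected creatures."""
    return sum(i.suffering + i.loss + i.rights_violated for i in impacts)

def morality(impacts):
    # Inverse proportionality, as in your OP; the +1 merely avoids
    # division by zero for a perfectly harmless act.
    return 1.0 / (1.0 + negative_impact(impacts))

# A painless killing still scores badly: the loss term is large even
# though the suffering term is zero.
print(morality([Impact(suffering=0.0, loss=100.0, rights_violated=0.0)]))

Note how the painless murder comes out as immoral under this scheme, which a distress-only version cannot capture.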

Quote:
Originally posted by tk102:

A mosquito's relative distress at being smashed would be much greater than the discomfort a mosquito bite would cause in a person, but because K_person >> K_mosquito, D_don't-smash > D_smash.
Secondly, this assumption that the person's proportion of "universal distress" would be "much greater than" the mosquito's (in essence, that the person is intrinsically more "valuable" than the mosquito, because the person's capacity to suffer is so much larger) invites a certain degree of analysis. Once again, the question of "distress" is by no means the be-all and end-all of morality, but let's address distress alone at this point.

It is a general social convention that we humans are "more valuable" than other animals. But let's examine that convention and see whether we can discern the reasoning behind it, and whether this reasoning gives us any insight into the question of how we should classify other organisms on the "distress" or "suffering" scale specifically.

First we must define the boundaries of such a scale.

Science has made this first step quite easy, by teaching us a lot about the biology of simple life-forms. There are forms of life that have literally zero cognitive ability, literally zero capacity to feel suffering, fear, pain etcetera. It stands to reason that we should not concern ourselves with causing suffering to a creature that is unable to suffer.

Once again, let us define the boundaries: We know that higher life-forms show signs of fear and pain, and that their brains are really quite similar to our own, structurally and relatively speaking. Therefore it is reasonable to assume that most of the more complex mammals- including humans- would be highest on the scale of "capacity to suffer".

We know that the simplest life-forms (single-celled organisms, for instance) have no complex nervous system and no higher reasoning powers, nor any organ that fulfils a similar function to the complex brain of the higher life-forms. Therefore we have successfully (if roughly) classified the positions of several forms of life on the "capacity to suffer" scale. At one end we have mere "biological robots", those rudimentary animals that operate on a simple set of rules (a few lines of code, one might say), and at the other end we have highly developed mammals.
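Transcribed roughly into your terms, the classification so far might look something like this sketch. The figures are pure invention; only the ordering, and the uncertainty in the middle, is the point:

Code:
# A rough sketch of the "capacity to suffer" scale described above.
# Figures invented purely for illustration.
CAPACITY_TO_SUFFER = {
    "bacterium": 0.0,   # no nervous system: a "biological robot"
    "earthworm": None,  # nociception, yes; "distress", unknown (see below)
    "cat":       0.9,
    "human":     1.0,
}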

Therefore we have two simple moral rules already:

1. Indiscriminate attacks on, say, bacteria through artificial means such as disinfectant are not intrinsically immoral from a "suffering" perspective. (Leaving aside for now the question of whether such robotic life-forms have an intrinsic right to life.)

2. Any maltreatment of the highest forms (highly developed mammals) is VERY MUCH immoral.

However we run into a sticking point in the middle of the scale, pretty much AS SOON as life-forms become complex to any degree. As soon as the rudiments of a nervous system appear in a primitive creature, our previously easily defined boundaries become blurred.

As an example, we might dig up an earthworm. A simple invertebrate, it seems to operate on a simple set of rules. At first glance it would appear to be a biological robot, a form of life too simple to warrant the care and attention afforded to a cat, a dog, a monkey or another person. But of course, research has shown that earthworms do indeed have a nervous system developed enough to pass along information about injuries and to trigger reflex reactions to these stimuli... but research also suggests that the brain of the worm is probably not complex enough to accommodate the higher functions that we might define as "distress" (emotional responses like fear and horror at the pain one is suffering, combined with the fervent desire to live, etcetera...).

So if the earthworm does indeed register pain, but cannot interpret it quite the way we do, is it capable of what we call "suffering"? Well, in an attempt to answer this difficult question, let's use a hypothetical:

Suppose some very advanced alien life-forms arrive on earth from another distant world. Then suppose that their equivalent of brain functions are so advanced and complex that to them, we seem like mere robots, mere biological automatons. Suppose that they, like us, have some sort of logical standard of moral behaviour that they wish to adhere to. Then suppose that they decide that they can do whatever horrible things they want to us without danger of being immoral, because we are simply unable to experience the complex emotional state that they define as "distress".

Clearly we would consider this an appallingly unfair and short-sighted decision on the part of the aliens. But from the aliens' perspective, it might seem quite logical. In this respect it's comparable to our routine decision to value humans more highly than other animals. What do humans have that other animals do not? Merely slightly more complex brains.

This hypothetical highlights the fact that "rating" other organisms on a scale of intrinsic value purely by the apparent complexity of their cognitive functions may not be a moral thing to do. After all, if taken to its inevitable conclusion, this concept of "intellect as value" would lead us to terrifying consequences. It would perhaps mean that torturing a severely mentally retarded person would be regarded as "more moral" than torturing a college professor... And since children's brains don't develop fully for some years, by the above standard it would presumably be regarded as "more moral" to torture a young schoolkid than to torture his or her adult teacher.

In short, such a scale probably isn't moral anyway.

So, returning finally to the question of the single mosquito... you can't arbitrarily decide that a mosquito's suffering is in some way less intrinsically important than a man's suffering. What you CAN do is note that in some countries mosquitos carry fatal or severely debilitating diseases. Therefore if you're IN one of those countries, you should kill the mosquito as the risk to the human is great indeed. If you're NOT in one of those countries, why not let the critter bite you?

Because in my country mosquitos present little or no danger to me, I don't kill mosquitos. If they appear, I let them fly around my house, and they can bite me if they wish. A small insect bite that may itch for a couple of hours is a TINY inconvenience to me: it poses no danger, and it doesn't affect my life in any meaningful way. Therefore it certainly does not warrant the killing of the organism in question.
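Reduced to your equation's terms, the rule I'm applying is something like the following sketch. The threshold and the risk figures are invented; the point is only that the kill is justified when the risk to the human clearly outweighs the cost to the insect:

Code:
def should_kill_mosquito(disease_risk, bite_nuisance=0.01):
    """disease_risk: probability-weighted harm of a bite in your region."""
    return disease_risk > bite_nuisance

print(should_kill_mosquito(disease_risk=0.5))    # malarial region: True
print(should_kill_mosquito(disease_risk=0.001))  # my country: False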

Anyway, in the course of our reasoning, even without addressing any question other than "capacity for suffering", we've arrived at the fairly conservative principle that the moral man must give other creatures the benefit of the doubt whenever possible, in terms of their capacity to suffer. This fairly universal principle would have to be factored into any mathematical "moral equation" of the type you're attempting to construct.
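In equation form, that principle amounts to taking the upper bound whenever a creature's capacity to suffer is uncertain. A minimal sketch, with invented bounds for the earthworm:

Code:
def assumed_capacity(lower_bound, upper_bound):
    # Benefit of the doubt: where the evidence leaves a range of
    # plausible capacities to suffer, err on the side of the creature.
    return upper_bound

# The earthworm again: pain signalling is established, higher "distress"
# is not, so the plausible range is wide. The bounds are invented.
print(assumed_capacity(lower_bound=0.1, upper_bound=0.6))  # -> 0.6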

I'm afraid that's all I can think of on the topic right now.

-

Quote:
Originally posted by Achilles:

Would it be inaccurate to interject that what we're looking for is a creature's capacity for suffering as compared to its capacity for happiness? A bee has relatively diminutive capacity for suffering or happiness especially when compared to the highly allergic human that it is about to sting, correct? So it wouldn't be immoral to kill a bee that was trying to attack you. Conversely, a cat has a relatively higher capacity of suffering and happiness, therefore it would not be moral for an allergic person to randomly kill cats.
Hmm. On these examples, Achilles: A bee-sting can be fatal to a person allergic to bee-stings. Therefore it might well be moral for this allergic fellow to kill the bee, as it qualifies as self-defence.

If the person allergic to cats might ALSO be killed by the cat (improbable), AND if killing the cat efficiently removed the threat (which it probably would not), then the killing of the cat might also be self-defence and therefore moral.
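In other words, the test being applied in both examples is a simple conjunction, something like this sketch (the labels are mine):

Code:
def self_defence_justified(threat_is_mortal, kill_removes_threat):
    # Killing is defensible only if the threat is real AND the killing
    # actually removes it.
    return threat_is_mortal and kill_removes_threat

print(self_defence_justified(True, True))    # allergic man vs. bee: True
print(self_defence_justified(False, False))  # allergic man vs. cat: False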

Given these variables and the probable circumstances surrounding each example, I personally don't think that the "capacity for suffering vs. happiness" question is addressed by the examples at all. Equally, I don't think the question is relevant to any discussion of these examples specifically.

-

Quote:
Originally posted by Tyrion:

Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence.
I'm afraid that's the same non sequitur that many people seem to churn out in these debates, Tyrion. Human objectivity (or lack of it) has NOTHING whatsoever to do with moral objectivism. Logic dictates that morality must be universal or it is not morality. Therefore, morality by definition IS objective, and must be applied objectively to be moral.

Whether people are CAPABLE of doing this is neither here nor there. It's literally completely irrelevant.

In essence, your stance is: "people aren't objective, therefore morality can never be objective." Which is like saying: "people aren't objective, therefore mathematics can never be objective." Which is obviously nonsense. There is a right answer to a calculation and there are wrong answers. People may assert that "2 + 2 = 5", but that doesn't MAKE it five. That doesn't MAKE the numbers relative.

Numbers are numbers, just as morality is morality. People make mistakes while exploring mathematical calculations, people make mistakes while deciding what is morally right. But that doesn't make "maths relative". It doesn't make "morality relative". It just means people are fallible. "Moral relativism" is an irrational, illogical, and by definition immoral stance.

Therefore, your position doesn't make any sense.


[FW] Spider AL