ANNOUNCEMENT: This forum has been archived and its community moved to the Star Wars Gamer forum.

04-13-2007, 06:56 PM   #1
tk102
@tk102
Well past expiration date

Join Date: Jan 2004
Posts: 5,767
Current Game: FTL

Revisiting Moral Objectivism with Mathematical Notation

Once upon a time I said
Quote:
 Originally Posted by tk102 With no objective way to measure morality, the argument of what is the most moral course of action is word-play for politicians. You cannot pretend that it is anything like a mathematical equation. It does not hold the same "truth".
Well, that statement has bothered me for a couple of months, and I wanted to explore the idea of moral objectivity further with the help of mathematical notation. I hope others can provide insights into these definitions. I admit I had to consult Wikipedia's Naive Set Theory article to remember the syntax of set notation.

* * * *

If x is an action in the set of all possible actions that you can perform, A:
x ∈ A

And if m(x) is the morality of action x, then the set of all possible moral outcomes, M, is:
M := { m(x) : x ∈ A}

And the moral person seeks to perform the most moral act:
Mmax = max { m(x) : x ∈ A}

So how do we measure the morality of an action? It is inversely proportional to the amount of distress, D, that the particular act, x, causes.
m(x) ∝ 1/D(x)

For the sake of clarity, let's define our D function such that we can write this as an equality.
m(x) = 1/D(x)

So our goal of maximizing morality can be rephrased: we seek to minimize distress.
Mmax = 1/Dmin, where Dmin = min{ D(x) : x ∈ A}
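As a sanity check, the equivalence between maximizing m(x) = 1/D(x) and minimizing D(x) can be sketched in a few lines of Python (the distress values here are invented purely for illustration):

```python
# Hypothetical distress values D(x) for three candidate actions.
D = {"a1": 4.0, "a2": 1.5, "a3": 9.0}

# m(x) = 1/D(x), as defined above.
m = {x: 1.0 / d for x, d in D.items()}

# The action that maximizes morality is the action that minimizes distress.
assert max(m, key=m.get) == min(D, key=D.get)  # both pick "a2"
```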

So how do we measure D(x)? It should be the summation of all distress felt by all organisms for the given action. To arrive at that formula we must define its elements.

Given an organism capable of feeling distress, y, in the population of all organisms capable of feeling distress, P:
y ∈ P

Then the set of relative distresses, δ, over all organisms and all actions is:
δ := { δxy : (x ∈ A) and (y ∈ P) }

But how do we compare distresses of one organism to another? We don't weigh distresses of one organism the same as the distresses of another organism. Most would consider it morally right to kill a mosquito that landed on a friend's neck, for example. In order to define D(x), we will need to translate these relative distresses into an absolute scale that can be summed. Let us propose a conversion factor K where:

For a given action x and a given organism y, Ky is the ratio of universal distress, Dx, to relative distress, δxy:
Ky = Dx / δxy

A mosquito's relative distress at being smashed would be much greater than the discomfort a mosquito bite would cause in a person, but because Kperson >> Kmosquito, we get D(don't smash) > D(smash).

We can now define D(x) as:
D(x) := ∑ δxyKy : (y ∈ P)

(Pardon the notation -- I'm limited by bbcode... read the above as a summation of δ*K for each member of y in P for a given action, x.)

And the most moral act is therefore the action x that achieves
Dmin = min { D(x) : x ∈ A} = min { (∑ δxyKy : (y ∈ P) ) : x ∈ A }
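To make the definitions concrete, here is a small Python sketch of the whole construction, using the mosquito example from above. Every number (the δ values and the K factors) is an invented placeholder, not a claim about actual distress:

```python
# Two candidate actions and the population P of distress-capable organisms.
actions = ["smash", "dont_smash"]
population = ["person", "mosquito"]

# delta[x][y]: relative distress caused to organism y by action x (assumed).
delta = {
    "smash":      {"person": 0.0, "mosquito": 100.0},
    "dont_smash": {"person": 5.0, "mosquito": 0.0},
}

# K[y]: conversion factor to a universal scale; K_person >> K_mosquito.
K = {"person": 1.0, "mosquito": 0.001}

def D(x):
    """Universal distress: D(x) = sum over y in P of delta[x][y] * K[y]."""
    return sum(delta[x][y] * K[y] for y in population)

# The most moral act minimizes universal distress over the set A.
most_moral = min(actions, key=D)
print(most_moral)  # -> smash, since D(smash) = 0.1 < D(dont_smash) = 5.0
```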

* * * *

Okay, so what do we see here? Mmax is a function of δxy and Ky. It remains up to us to use our faculties to interpret δxy and Ky as well as to recognize the full sets A and P. Let's assume A and P are well-defined. δxy could be estimated using reason, heuristics, and an understanding of organism y. Ky is more difficult to define in an objective manner. Does the size of the organism matter? Do its mental faculties matter? Or maybe its own ability and desire to act morally? Have I oversimplified something?
04-13-2007, 10:18 PM   #2
Achilles
@Achilles
Dapper Chimp

Join Date: May 2004
Posts: 8,227
Current Game: Pillars of Eternity

I find moments like these very humbling.

04-14-2007, 01:59 PM   #3
Gargoyle King
@Gargoyle King
Veteran

Join Date: Apr 2007
Location: In My Own Little World!
Posts: 895

Well, this has got me lost, dude! I never was too good at maths! Shows I've still got a lot to learn in life, I suppose.
04-15-2007, 02:41 AM   #4
tk102
@tk102
Well past expiration date

Join Date: Jan 2004
Posts: 5,767
Current Game: FTL

Quote:
 Originally Posted by tk102 And the most moral act is therefore the action achieving min { (∑ δxyKy : (y ∈ P) ) : x ∈ A }
In other words, the most moral action is the one that causes the least total distress to all other organisms, where each organism's distress is 'weighted' so that it can be compared with that of other organisms.

Therefore, we have to be able to determine the magnitude of distress an organism will feel from a particular action (δxy), and we have to determine how to weigh that organism's distress in relation to other organisms (Ky). These two determinations are still a bit mysterious, but I suppose that is a different subject for analysis -- this exercise was just an attempt to define what factors need to be evaluated to derive morality objectively.

Did I miss anything? Seems oversimplified somehow, but maybe all the difficulty lies in interpreting the variables themselves.
04-15-2007, 03:54 AM   #5
Tyrion
@Tyrion
nothing is real

Join Date: May 2002
Location: no one I think is in my tree, I mean it must be high or low
Posts: 6,917

Er, what was the point of adding in mathematical notation? It didn't do much besides add a layer of confusion; your words alone were enough to say the same thing. Despite that, I do find it a fascinating brainstorm.

Anyway, you do fall prey to oversimplification in your descriptions, tk102. For instance, what is distress? Is it simply the avoidance of pain? Or is it the avoidance of anything that causes pain? Consider the difference between finding a bee on a windowsill and either avoiding it or squashing it; the former would cause one to be cautious and perhaps retreat to another room, while the latter might compel one to destroy the bee in one's vicinity. In other words, despite both options resulting in the same amount of pain to the human, people would still go to the stretch of killing the bee. Why one would act upon killing the bee would depend on a variety of reasons: phobia, paranoia, bad experiences, curiosity, etc. Such emotions, particularly curiosity, do not fall under direct pain and give ambiguity to the notion of distress as the sole moral impetus.

(There are other factors besides distress, such as knowledge and ignorance. A little kid could flood an ant farm with no concept of the fact that ants are dying and in pain, while a teenager could very well do the same with malevolent intent. Despite the fact that both cause the same level of distress to the ants and to the participants, most would say that the kid is less evil because he was ignorant of his actions. How would you apply that to the equation?)

Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence. Without knowing all of the ramifications that actions can have, and without knowing all the possible sides to a given argument or idea, morality is based less on ultimate distress caused and is instead cut off at some arbitrary level depending on the person. For instance, killing a Jew means something different to a Christian, a Nazi, a Muslim, a Jew, a Humanist, ad infinitum. It would depend on the person's past, ideas, point of view, knowledge and ignorance, mental wellbeing, mood, et cetera; such things are inescapable, and so it would be impossible for humans to ever create a totally objective moral system. Therefore, it's pointless to bring up a particular formula for morality, as all the variables would be at the whim of the person with the pen.

That's just my opinion, though. I could be wrong.

Last edited by Tyrion; 04-15-2007 at 04:12 AM.
04-15-2007, 04:49 AM   #6
tk102
@tk102
Well past expiration date

Join Date: Jan 2004
Posts: 5,767
Current Game: FTL

Quote:
 Originally Posted by Tyrion Er, what was the point of adding in mathematical notation? It didn't do much besides adding a layer of confusion, even though your words were enough to say the same.
The point was to abstract the concepts with symbology in order to avoid misinterpretation that comes from words. If morality can truly be considered objective, it must be able to conform to logical statements with unambiguous terms. Symbology provides a simple way to keep terminology constant.
Quote:
 Originally Posted by Tyrion Anyway, you do fall prey to oversimplification in your descriptions, tk102. For instance, what is distress? Is it simply the avoidance of pain? Or is it the avoidance of anything that causes pain? For instance, it would be the difference of avoiding being stung by a bee on a windowsill and squashing it; the former would cause one to be cautious and perhaps retreat to another room, while the latter might compel one to destroy the bee in his vicinity. In other words, despite both options resulting in the same amount of pain to the victim, people would still go to either option depending on altogether separate reasons. This means that distress is not the sole factor in morality.
Well, if the bee were not considered to be part of the set P, then there would be no difference in the morality of the two actions. However, because the bee is an organism that can be distressed, it is part of P, and so the action of leaving the room causes less universal distress than the act of killing the bee (Dleave < Dkill), and so leaving the room has the greater morality.

As for the definition of distress, it's true I didn't make any attempt to define it. I have a dictionary I could quote if it makes any difference; I think each creature knows distress when it feels it. I did suggest that each distress is a local phenomenon that must be converted to some sort of universal scale so that it can be weighed against other organisms' distresses.
Quote:
 Originally Posted by Tyrion Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence. Without knowing all of the ramifications that actions can have, and without knowing all the possible sides to a given argument or idea, morality is based less on ultimate distress caused and instead cut off by some arbitrary level depending on the person.
I wouldn't say that makes morality subjective. It's just that the precision of the variables discerned by different actors may differ. Around 1650 BC, an Egyptian scribe named Ahmes took π to be 256/81 -- off by about 0.6%. In 1997, π was calculated out to 51.5 billion digits. It's still not exactly right, but it's much closer. You wouldn't say that the ratio of a circle's circumference to its diameter is subjective, though, right?
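For what it's worth, the size of Ahmes' error is easy to check with a throwaway snippet:

```python
from math import pi

ahmes = 256 / 81                  # Ahmes' value, about 3.1605
rel_error = abs(ahmes - pi) / pi  # relative error versus the true ratio
print(f"{rel_error:.2%}")         # -> 0.60%
```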
Quote:
 Originally Posted by Tyrion Therefore, it's pointless to bring up a particular formula for morality as all variables would be to the whim of the person with the pen. That's just my opinion, though. I could be wrong.
Believe me, I've been wrestling with that too. That's why I started this thread. I appreciate your response. The counter-argument to my analogy with pi would be that unlike morality, the values of pi have been shown to converge with increasing precision. That's why we can believe there is a true number π. There's no way to show this with a given action because we can't freeze all the conditions and perform an analysis to ever greater precisions. If there's no convergence, then it's impossible to show there is a true (objective) morality.

Quote:
 Originally Posted by Jae Onasi One man's pain is another man's pleasure. The severity of the pain would be on some kind of bell curve, with no pain being zero, and for instance severity of pain increases negatively and pleasure increases positively. The bell curve would be skewed well into the negative, but there'd be a few on the positive who got some kind of pleasure out of the pain. You'd have to add in a probability equation (just to make things more fun).
Distresses are local to the individual and inherent to their own dispositions, tolerances, and adaptability to external stimuli. The summation adds up the distresses felt by each individual, so I think that's covered. I take it you're suggesting a probability equation to estimate the unknown distresses across the unknown population P? Okay.
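One way to read Jae's suggestion: since δxy is unknown across the population, treat it as a random variable and estimate the total by sampling. This is only a sketch; the distribution, its parameters, and the uniform K are all assumptions invented for illustration:

```python
import random

random.seed(0)  # reproducible sketch

K = 1.0         # assume one conversion factor for the whole population
N = 10_000      # population size

def sample_delta():
    # Assumed "bell curve" of distress: mostly moderate values,
    # truncated at zero (no negative distress modeled here).
    return max(0.0, random.gauss(mu=5.0, sigma=2.0))

# Estimate D(x) = sum of delta_xy * K over the population by simulation.
estimated_D = sum(sample_delta() * K for _ in range(N))
print(round(estimated_D / N, 1))  # mean delta, typically close to 5.0
```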

Quote:
 Originally Posted by Jae Onasi Also, maximize morality is not always inversely proportional to distress. Some medical procedures cause distress, but are not immoral. You'd have to account for those things that cause distress but have little or no bearing on morality, unless you limit your sets to only those things where distress has an impact on morality, and even that will vary to some degree.
Well, I assume a doctor performs a procedure to improve or save a life, and wouldn't undertake a procedure that didn't have some long-run benefit (an overall decrease in distress, with the assumption that death is the ultimate distress).
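This reply can be sketched as summing hypothetical distress over time, with negative values standing in for relief; every number below is invented for illustration:

```python
# (time_step, distress) pairs; negative distress = relief/benefit (assumed scale).
procedure =    [(0, 8.0), (1, -3.0), (2, -3.0), (3, -3.0)]  # painful now, helps later
no_procedure = [(0, 0.0), (1, 4.0), (2, 4.0), (3, 4.0)]     # untreated illness worsens

def net_distress(timeline):
    """Total distress over the whole timeline, not just the first moment."""
    return sum(d for _, d in timeline)

print(net_distress(procedure), net_distress(no_procedure))  # -1.0 12.0
```

On this accounting the procedure wins despite its immediate distress, which is the point of the long-run framing.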

Last edited by tk102; 04-15-2007 at 05:00 AM.
04-15-2007, 05:04 AM   #7
Achilles
@Achilles
Dapper Chimp

Join Date: May 2004
Posts: 8,227
Current Game: Pillars of Eternity

Quote:
 Originally Posted by Tyrion Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence.
If something is objective, isn't that to say that it exists outside of human bias? In other words, wouldn't it be reasoned out rather than opined?

Quote:
 Originally Posted by Tyrion and so it would impossible for humans to ever create a totally objective moral system.
Indeed it would probably be impossible for humans to create an objective moral system. However if objective morality does exist, it would not need to be created just as we didn't "create" the principles that govern mathematics or the physical laws of our universe.

Quote:
 Originally Posted by Jae Onasi One man's pain is another man's pleasure.
That is true but that is obviously not a basis for any objective system of morals.

Quote:
 Originally Posted by Jae Onasi Also, maximize morality is not always inversely proportional to distress.
I actually agree with you here. Sometimes following the moral option requires some exposure to pain or distress. Anyone that has children knows that immunization is not a lot of fun for your little ones. It's not uncommon to have to restrain a child so that they can be poked with a big sharp needle and injected with solution that will cause them to become mildly ill for a few days.

If we were to follow a mathematical model for morality, the formula might spit out a result that says the pain and distress caused by immunization would make the process immoral. Since the comparative result (disease or general illness) is not certain, our formula might lead us astray. Then again, math has never been my strong point and my reliance on conceptualization might be doing me a disservice here.
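Achilles' worry about uncertain outcomes can be phrased as comparing expected distress rather than certain distress. The probability and distress values below are pure invention, and in reality the probabilities themselves are uncertain, which is part of the objection:

```python
p_disease = 0.3    # assumed chance of catching the disease if not immunized
d_shot = 2.0       # certain, mild distress of immunization
d_disease = 20.0   # severe distress if the disease is caught

# Expected distress of skipping the shot vs. the certain distress of taking it.
expected_skip = p_disease * d_disease
print(expected_skip > d_shot)  # True: immunizing minimizes expected distress here
```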

Regardless, I'm pretty sure this is why all the ethics courses fall under the philosophy umbrella rather than the applied sciences.

EDIT for teekay's post above:

Quote:
 As for the defintion of distress, it's true I didn't make any attempt to define it. I have a dictionary that I could quote if it makes any difference. I think each creature knows distress when it feels it. I did suggest that each distress is a local phenomenon that must be converted to some sort of universal scale so that it could be weighed against other organisms' distresses.
Would it be inaccurate to interject that what we're looking for is a creature's capacity for suffering as compared to its capacity for happiness? A bee has a relatively diminutive capacity for suffering or happiness, especially when compared to the highly allergic human that it is about to sting, correct? So it wouldn't be immoral to kill a bee that was trying to attack you. Conversely, a cat has a relatively higher capacity for suffering and happiness; therefore, it would not be moral for an allergic person to randomly kill cats.

Also, we would then have to somehow show individual suffering/happiness as compared to social suffering/happiness (e.g. deontology). For instance, is it moral for someone to throw oneself onto a grenade in order to save a group of complete strangers? How about to save a hive of honey bees?

Lastly, I'd like to inquire about the component of will. If I'm a doctor and I have 5 patients that will all die within an hour if they do not receive organ transplants and I have a patient that just died that happens to have matching organs, would it be ethical to perform the transplants if the deceased is not an organ donor? What if the potential donor were not dead but in a persistent vegetative state? Unconscious from a head wound, but expected to make a full recovery within hours? Fully alert and there to see about a sprained ankle?

I think my point is that you can have a completely objective set of morals that are not represented by one equation. Mathematics is objective, yet we do not try to nail the study down to just one rule, rather we accept that there are a wide variety of rules. Similarly, just as "math" represents arithmetic, algebra, calculus, geometry, finite mathematics, etc, "morality" might not be reducible to a single line of logical statements.

Somehow I feel as though I derailed the section that I quoted, but hopefully, I have not.

Last edited by Achilles; 04-15-2007 at 09:12 AM.
04-15-2007, 11:27 AM   #8
tk102
@tk102
Well past expiration date

Join Date: Jan 2004
Posts: 5,767
Current Game: FTL

Quote:
 Originally Posted by Achilles Would it be inaccurate to interject that what we're looking for is a creature's capacity for suffering as compared to its capacity for happiness? A bee has relatively diminutive capacity for suffering or happiness especially when compared to the highly allergic human that it is about to sting, correct? So it wouldn't be immoral to kill a bee that was trying to attack you. Conversely, a cat has a relatively higher capacity of suffering and happiness, therefore it would not be moral for an allergic person to randomly kill cats.
That is the whole concept of Ky above. A person has a high Ky and a bee has a small Ky. A person allergic to bees would also have a high δxy for being stung.

Quote:
 Also, we would then have to somehow show individual suffering/happiness as compared to social suffering/happiness (e.g. deontology). For instance, is it moral for someone to throw oneself onto a grenade in order to save a group of complete strangers? How about to save a hive of honey bees?
That's where the summation comes in (∑ δxyKy).

Quote:
 Lastly, I'd like to inquire about the component of will. If I'm a doctor and I have 5 patients that will all die within an hour if they do not receive organ transplants and I have a patient that just died that happens to have matching organs, would it be ethical to perform the transplants if the deceased is not an organ donor?
I think we're talking about the distress that would be caused to society (e.g. the families affected, the hospital, etc.) in knowing that the doctor disregarded the will of the deceased. But what if no one knew, and the doctor felt no distress over his action? Hmm, I don't see how the equation would take that into account. There's no factor for something like a categorical imperative.

Quote:
 What if the potential donor were not dead but in a persistent vegetative state? Unconscious from a head wound, but expected to make a full recovery within hours? Fully alert and there to see about a sprained ankle?
Again, forgiving the lack of a categorical imperative... assuming society knew about the doctor's action, there would be an increasing δsociety for each of those scenarios. Since the donor is still alive, there would also be some factor δpatient to account for, especially since organ donation would likely result in his death (assumed to be the maximum distress for an organism).
Quote:
 I think my point is that you can have a completely objective set of morals that are not represented by one equation. Mathematics is objective, yet we do not try to nail the study down to just one rule, rather we accept that there are a wide variety of rules. Similarly, just as "math" represents arithmetic, algebra, calculus, geometry, finite mathematics, etc, "morality" might not be reducible to a single line of logical statements.
The idea that objective morality exists suggests there should be a definable function of variables that defines morality. The assertion is that morality can be arrived at by logic. If we cannot apply logical symbology to morality, then we cannot make this assertion. And if we cannot say morality is based on logic, we cannot say it is objective. If there are instead many varied theories and fields describing the origins of morality, then morality is indeed subjective.
04-15-2007, 01:37 PM   #9
Achilles
@Achilles
Dapper Chimp

Join Date: May 2004
Posts: 8,227
Current Game: Pillars of Eternity

Quote:
 Originally Posted by tk102 That is the whole concept of Ky above. A person has a high Ky and a bee has a small Ky. A person allergic to bees would also have a high δxy. That's where the summation comes in (∑ δxyKy).
Fair enough. Thanks for clarifying.

Quote:
 Originally Posted by tk102 I think we're talking about the distress that would be caused to society (eg. families affected, the hospital, etc.) to know that doctor disregarded the will of the deceased. But what if no one knew and the doctor felt no distress over his action?
This seems to assume that the family and/or the doctor should feel distress over his actions (assuming the "dead" scenario and not any of the others).

EDIT: I just re-read this section and realize that we may have missed each other slightly here. You appear to be looking at this from the perspective of the negative social consequences for the hospital, the deceased person's family, etc. This assumes that the action itself is inherently immoral and should be viewed negatively. My argument (which was vague) is that such judgments should be the result of the process, rather than part of the process itself.

To restate my point without posing it as a question:

I believe that it would be absolutely moral for a doctor to "part out" a corpse to save 5 lives, because this would maximize social happiness (for the other patients, their families, loved ones, etc.) with absolutely no impact on social (or individual) suffering. The patient is dead, so his or her rights and freedoms aren't violated because they are no longer applicable. In some cultures the deceased person's family would feel the act is immoral; however, if the deceased was an organ donor, they would not. This tells me that the act itself is not immoral; rather, how the act is perceived determines its "morality". In other words, this moral stance is relative, rather than absolute.

There, I think I got it all that time.

Quote:
 Originally Posted by tk102 Hmm, I don't see how the equation would take that into account. There's no factor for something like a categorical imperative.
The categorical imperative is what I had hoped to highlight.

Quote:
 Originally Posted by tk102 Again, forgiving the lack of categorical imperative... assuming society knew about the doctor's action, there would be increasing δsociety for each of those scenarios. Since the donor would is still, there would be some factor δpatient to account for, especially since organ donation would likely result in his death (assumed to be the maximum distress for an organism).
Which is tricky. Certainly we would be providing the maximum social benefit with minimum social suffering by parting out the sore ankle guy, but we would still be murdering a healthy man. Thank goodness the categorical imperative prevents us from doing so.

However there might be a case for deceased/vegetative state scenarios. The reason that I brought up will is that I think it should be reduced to 0 in some cases and such an equation would need to factor that in (unless this is accounted for/agreed upon in the part that accounts for categorical imperative).

Quote:
 Originally Posted by tk102 The idea that there exists objective morality suggests there should be a definable function of variables that defines morality. The assertion is that morality can be arrived at by logic. If we cannot apply logical symbology to morality then we cannot make this assertion. And if we cannot say morality is based on logic, we cannot say it objective. If there are many various theories and fields describing the origins of morality, then morality indeed is subjective.
No, I agree, but my point was that we don't point to one line of logical symbols and claim that it represents all of mathematics (at least I assume that we don't). In other words, there is no one equation that perfectly encapsulates the entire study of mathematics, so why should that be the case for something equally complex?

Then again, it might be that I'm not understanding your point as well as I think I am.

Last edited by Achilles; 04-15-2007 at 04:10 PM.
04-17-2007, 12:29 PM   #10
Jae Onasi
@Jae Onasi
Antiquis temporibus, nati tibi similes in rupibus ventosissimis exponebantur ad necem

Status: Super Moderator
Join Date: Aug 2005
Posts: 10,937
Current Game: Guild Wars 2, VtMB, TOR

Quote:
Originally Posted by Achilles
Quote:
 Originally Posted by Jae one man's pain is another man's pleasure
That is true but that is obviously not a basis for any objective system of morals.
So work in the probability stat to cover that.
Quote:
 Originally Posted by Achilles I actually agree with you here. Sometimes following the moral option requires some exposure to pain or distress. If we were to follow a mathematical model for morality, the formula might spit out a result that says the pain and distress caused by immunization would make the process immoral. Since the comparative result (disease or general illness) is not certain, our formula might lead us astray. Then again, math has never been my strong point and my reliance on conceptualization might be doing me a disservice here.
If you work in another equation that compares long-term gain over short-term distress, it would fix that problem.

Quote:
 Originally Posted by Achiles Regardless, I'm pretty sure this is why all the ethics courses fall under the philosophy umbrella rather than applied sciences

Quote:
 Originally Posted by tk I guess you're suggesting to use a probability equation to estimate the unknown distresses of the unknown population P? Okay
I think you'd have to do that, since it would not be a simple coefficient (i.e. the amount of distress from an action is not exactly the same for every single person); it would vary within the population.

From MST3K's spoof of "Hercules Unchained"--heard as Roman medic soldiers carry off an unconscious Greek Hercules on a 1950's Army green canvas stretcher: "Hi, we're IX-I-I. Did somebody dial IX-I-I?"

Read The Adventures of Jolee Bindo and see the amazing Peep Surgery
Story WIP: The Dragonfighters
My blog: Confessions of a Geeky Mom--Latest post: Security Alerts!

04-15-2007, 04:28 AM   #11
Jae Onasi
@Jae Onasi
Antiquis temporibus, nati tibi similes in rupibus ventosissimis exponebantur ad necem

Status: Super Moderator
Join Date: Aug 2005
Posts: 10,937
Current Game: Guild Wars 2, VtMB, TOR

One man's pain is another man's pleasure. The severity of pain would be on some kind of bell curve, with no pain being zero: severity of pain increases negatively, and pleasure increases positively. The bell curve would be skewed well into the negative, but there'd be a few on the positive side who got some kind of pleasure out of the pain. You'd have to add in a probability equation (just to make things more fun).

Also, maximizing morality is not always inversely proportional to distress. Some medical procedures cause distress but are not immoral. You'd have to account for those things that cause distress but have little or no bearing on morality, unless you limit your sets to only those things where distress has an impact on morality -- and even that will vary to some degree.
04-16-2007, 09:13 AM   #12
Ray Jones
@Ray Jones
[armleglegarmhead]

Join Date: Jun 2003
Location: digital
Posts: 8,301

I find the hardest thing in getting the "real level of morality" (read: the amount of distress caused to other beings) of an action is that one would have to wait until the end of time to be sure that all "distressing reactions" to that action have been taken into consideration. Everything else is just assumption about the morality of an action.

And, for instance, you could not simply consider killing Hitler a pretty moral action. Killing him in December 1944 would be less moral than if he'd been killed by a drunken driver in a car accident as a young boy, which on the other hand might just increase the chances that Russia takes over the whole of Europe -- except if he dies one day later, after he's been pushed down a cliff and could tell someone about his plans to take on the world, who will now carry them out instead, with more success, btw.

Err, well Spider, my words have been led ad absurdum already.

Last edited by Ray Jones; 04-16-2007 at 09:26 AM.
04-17-2007, 01:02 PM   #13
tk102
@tk102
Well past expiration date

Join Date: Jan 2004
Posts: 5,767
Current Game: FTL

Actually, the probability equation would be an estimate to determine δ for a given populace, which would then be substituted into the above equation. That would be more of an auxiliary equation, whereas the above attempts to define the variables at play. I think the categorical imperative should be added, but I'm not sure how to define that one yet... I was taking a break from this to let it sink in some more and to decide whether it was worth further pursuit.

Spoiler: I was hoping Spider would weigh in at some point, but alas. :-)
04-22-2007, 01:12 AM   #14
Mike Windu
@Mike Windu
Je suis l'agent du chaos.

Join Date: Jul 2003
Location: Stars Hollow
Posts: 3,562

Moral objectivism is easily represented mathematically: f(x) = sin(x).

A political AND mathematical joke? Good lord, I'm good.

That's the last time I buy anything just because it's furry!
04-22-2007, 08:52 AM   #15
Windu Chi
@Windu Chi
Banned

Status: Banned
Join Date: Apr 2004
Location: Getting revenge on that traitor, Anakin.
Posts: 882

Quote:
 Originally Posted by tk102 Once upon a time I said Well that statement has bothered me for a couple months and I wanted to explore further the idea of moral objectivity with the help of mathematical notation. I hope others can provide insights into these definitions. I admit having to consult Wiki's Naive Set Theory to remember how to syntax set notation. * * * * If x is an action in the set of all possible actions that you can perform, A: x ∈ A And m(x) is the morality of action x, then set of all possible moral outcomes, M, is: M := { m(x) : x ∈ A} And the moral person seeks to perform the most moral act: Mmax = max { m(x) : x ∈ A} So how do we measure the morality of an action? It is inversely proporational to the amount of distress, D, the particular act, x, causes. m(x) ∝ 1/D(x)
What about doing something that negates m(x) with respect to society's moral standards but benefits the individual, if the fear
F(x) >> D(x) : x ∈ I

where F(x) is the fear function of not doing something highly immoral with respect to society's set moral standards -- if that immoral action x, an element of I, can lead to the survival of the individual -- since survival will take a higher value than M, and D(x) is the distress the specific individual will have in committing an immoral act.
When x is a element ∈ of a immoral set I.

F(x):= { if x ∈ Ei then F(x)∝Ei/D(s) } where Ei is the bias influencing emotions i of a individual that will negate the moral objectivity of that individual.

That will produce different set rule in D(x) ∝ UM(s)/Ei if x ∈ I where s ∈ M

where UM(x) is universal morality function stantards of a society.

So if D(x) decreases, caused by an increase in Ei,

then this set rule will have to apply:

UM(x) @ m(x) if { m(x) ∝ Ei/Dx : x ∈ I }

So, to remain on the moral bandwagon of society's moral standards, UM will have to be substituted (@) for the morality-of-action function m(x), to counter the immoral action set I that changes the distress D(x) in a negative way, making the act of committing an immoral act less distressful to specific individuals with different values of Ei.

Also, Ei will have to be a statistical variable, since the emotions of specific individuals in a specific society are ruled by the uncertainty of probability.

Quote:
 δ := { δxy : (x ∈ A) and (y ∈ P) }
Also δxy = KyUM/Fxy and Fxy >> Dx : x ∈ I, I suspect!

where Fxy is the relative fear for all organisms for all actions x if P := { m(x) ∝ 1/Fxy : if x ∈ I and y ∈ P }

So, if relative distress is to be minimized, society's moral standards will probably have to take a greater value than a specific individual's own moral standards when that individual's actions are greatly influenced by fear of death.
Otherwise, an immoral act may result.

But fear, as a biasing emotion, can negate the objectivity of any individual that has emotions; and since all organisms have emotions, society's moral standards probably can't, in the long run, take a higher value than fear of death for some specific individuals.

That is, for some, cowardice takes a higher value than the relative distress of individuals in society.

So an immoral act will result for those who have a cowardly characteristic.

Example: in the death camps, some Nazi soldiers didn't want to kill Jews, because given the relative distress to society, anyone with his faculties could see it would result in an immoral act, obviously.
But if they didn't commit the immoral act of murder, they would be killed themselves.
So, if Ei ∈ C, where C is a cowardice set that is ∝ F(x), then F(x) >> UM and DPI, where DPI is the relative total distress to society of committing an immoral act when x ∈ I.

So the immoral act of murder may result,
because F ∝ Ei/UM(s) and F(x) >> D(x), where x ∈ I and s ∈ M,

and it may result because the emotions Ei are ruled by probability.

If UM(s) increases its influence over the fear function F(x) at a constant Ei value, then fear of death goes down and murder = 0,

and Ei ∉ C, the cowardice set.

(∉ means "is not an element of".)

Also, good use of Set Theory to illustrate your thoughts on morality, tk102.
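One possible reading of the F(x) >> D(x) rule above can be put into a short Python sketch (an interpretation only; the threshold, names, and numbers are invented for illustration, not taken from the post): the individual chooses the immoral but survival-promoting action when fear greatly outweighs distress.

```python
# Sketch of one reading of the rule above: an individual commits the
# immoral act x (x in I) when the fear F(x) of not acting greatly exceeds
# the distress D(x) of acting. Reading ">>" as "10x larger" is an
# assumption of this sketch.

def chooses_immoral_act(fear, distress, dominance_factor=10.0):
    """True when fear 'greatly exceeds' distress, i.e. F(x) >> D(x)."""
    return fear > dominance_factor * distress

# Fear of death dwarfs the distress of the act: the immoral act results.
assert chooses_immoral_act(fear=100.0, distress=2.0)
# Fear comparable to distress: society's standard can still hold.
assert not chooses_immoral_act(fear=5.0, distress=2.0)
```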

Last edited by windu6; 04-23-2007 at 06:40 PM.
04-22-2007, 03:17 PM   #16
Spider AL
@Spider AL
A well-spoken villain...

Join Date: Jan 2002
Location: Help, help, I'm stapled to my workstation.
Posts: 2,162
TK, I wonder why you'd want my input? Achilles has already done a good job of covering all the most significant points.

I may have a couple of things to add:

1. On the idea of representing moral values mathematically in general:

As others have noted, your idea is a fine intellectual exercise, very intriguing from the perspective of a puzzle. But there are two questions that immediately spring up:

1. Does your equation tell us anything about morality that hasn't already been elucidated in longhand by the major moral philosophers of history?

2. More importantly, COULD it tell us anything new about morality in the future?

The answer to no.1 would appear to be "no", but that's a minor issue. The answer to number two is "maybe, but only if all moral values were accurately represented in the equation. Perhaps then the numbers could be manipulated mathematically to show us something new and interesting."

But of course, to accurately represent all variables would probably be the work of a lifetime. And while your OP shows one way that moral values might be transcribed, it does not cover the whole gamut. (which I'll comment specifically on shortly.)

So in short, you've started on what would be a quite serious undertaking. There's no reason you shouldn't be the one to complete it, however. I for one would be most interested to see what you eventually come up with.

2: On the specifics of your original "moral equation":

Quote:
 Originally posted by tk102: So how do we measure the morality of an action? It is inversely proportional to the amount of distress, D, the particular act, x, causes.
Up to this line, there was little to add, however- as has been noted subsequently- distress is not the only variable in the moral equation. It's an important variable, to be sure. It is central, in fact. But it's not the only one. As well as physical and psychological suffering, there's the concept of "loss". Let me illustrate:

If you murder someone it's immoral, whether the method you use is painless or not. You can murder someone painlessly; you can for instance drug them so that they first fall unconscious and then expire. The fact that they did not feel any physical or psychological distress is a factor- I mean, someone who tortures a person for three weeks before killing them has arguably committed a more immoral act- but any killing of this type, painful or painless, results in the loss of the subject's life.

Existence- rationally speaking- is all we have. And our time on this earth is all we ever will have. For various reasons which have been discussed elsewhere, we feel a desire to maintain our lives. When you kill a creature (whether it's aware of its impending doom or not) you are literally taking away all it has, and all it ever would have had. To paraphrase a Clint Eastwood line.

This is why I have tried to define morality in the past- as you may remember from my response to one of your own questions in the "moral relativism" thread- as the objective, universal standard of behaviour that aims to minimise one's negative impact on other creatures... This heading of "negative impact" encompasses any and all suffering, but also loss of life and also any more minor violations of established rights, etcetera.

Therefore any purely mathematical expression of the moral equation would have to incorporate these additional variables, possibly under a heading similar to "negative impact".

Quote:
 Originally posted by tk102: A mosquito's relative distress at being smashed would be much greater than the discomfort a mosquito bite would cause in a person, but because Kperson>>Kmosquito, Ddon't smash > Dsmash.
Secondly, this assumption that the person's proportion of "universal distress" would be "much greater than" the mosquito's (in essence that the person is intrinsically more "valuable" than the mosquito, because its capacity to suffer is so much larger) invites a certain degree of analysis. Once again, the question of "distress" is by no means the be-all and end-all of morality, but let's address distress alone at this point.

It is a general social convention that we humans are "more valuable" than other animals. But let's examine that convention and see whether we can discern the reasoning behind it, and whether this reasoning gives us any insight into the question of how we should classify other organisms on the "distress" or "suffering" scale specifically.

First we must define the boundaries of such a scale.

Science has made this first step quite easy, by teaching us a lot about the biology of simple life-forms. There are forms of life that have literally zero cognitive ability, literally zero capacity to feel suffering, fear, pain etcetera. It stands to reason that we should not concern ourselves with causing suffering to a creature that is unable to suffer.

Once again, let us define the boundaries: We know that higher life-forms show signs of fear and pain, and that their brains are really quite similar to our own, structurally and relatively speaking. Therefore it is reasonable to assume that most of the more complex mammals- including humans- would be highest on the scale of "capacity to suffer".

We know that the most simple life-forms (single-celled organisms for instance) have no complex nervous system and no higher reasoning powers, nor any organ that fulfils a similar function to the complex brain of the higher life-forms. Therefore we have successfully (if roughly) classified the positions of several forms of life on the "capacity to suffer" scale. At the one end we have mere "biological robots", those rudimentary animals that operate on a simple set of rules (a few lines of code, one might say) and at the other end we have highly developed mammals.

Therefore we have two simple moral rules already:

1. Indiscriminate attacks on say... bacteria through artificial means such as disinfectant, are not intrinsically immoral from a "suffering" perspective. (Leaving aside for now the question of whether such robotic life-forms have an intrinsic right to life.)

2. any maltreatment of the highest forms- highly developed mammals- is VERY MUCH immoral.

However we run into a sticking point in the middle of the scale, pretty much AS SOON as life-forms become complex to any degree. As soon as the rudiments of a nervous system appear in a primitive creature, our previously easily defined boundaries become blurred.

As an example, we might dig up an earthworm. A simple invertebrate, it seems to operate on a simple set of rules. At first appearance it would appear to be a biological robot, a form of life too simple to warrant the care and attention afforded to a cat, a dog, a monkey or another person. But of course, research has shown that earthworms do indeed have a nervous system developed enough to pass along information about injuries and to trigger reflex reactions to these stimuli... but research also suggests that the brain of the worm is probably not complex enough to accommodate higher functions that we might define as "distress". (As in emotional responses like fear and horror at the pain one is suffering combined with the fervent desire to live, etcetera...)

So if the earthworm does indeed register pain... but cannot interpret it quite the way we do, is it capable of what we call "suffering"? Well in an attempt to answer this difficult question, let's use a hypothetical:

Suppose some very advanced alien lifeforms arrive on earth from another distant world. Then suppose that their equivalent of brain functions are so advanced and complex that to them, we seem like mere robots, mere biological automatons. Suppose that they- like us- have some sort of logical standard of moral behaviour that they wish to adhere to. Then suppose that they decide that they can do whatever horrible things they want to us, without danger of being immoral. Because we are simply unable to experience the complex emotional state that they define as "distress".

Clearly we would consider this an appallingly unfair and short-sighted decision on the part of the aliens. But from the aliens' perspective, it might seem quite logical. In this respect it's comparable to our routine decision to value humans more highly than other animals. What do humans have that other animals do not? Merely slightly more complex brains.

This hypothetical highlights the fact that "rating" other organisms on a scale of intrinsic value purely by the apparent complexity of their cognitive functions may not be a moral thing to do. After all, if taken to its inevitable conclusion, this concept of "intellect as value" would lead us to terrifying consequences. It would perhaps mean that torturing a severely mentally retarded person would be regarded as "more moral" than torturing a college professor... Children's brains don't develop fully for some years, by the above standard it would presumably be regarded as being "more moral" to torture a young schoolkid than it would to torture his or her adult teacher.

In short, such a scale probably isn't moral anyway.

So, returning finally to the question of the single mosquito... you can't arbitrarily decide that a mosquito's suffering is in some way less intrinsically important than a man's suffering. What you CAN do is note that in some countries mosquitos carry fatal or severely debilitating diseases. Therefore if you're IN one of those countries, you should kill the mosquito as the risk to the human is great indeed. If you're NOT in one of those countries, why not let the critter bite you?

Because in my country mosquitos present little or no danger to me, I don't kill mosquitos. If they appear, I let them fly around my house, and they can bite me if they wish. Because a small insect bite that may itch for a couple of hours is a TINY inconvenience to me... It poses no danger to me, it doesn't affect my life in any meaningful way. Therefore it certainly does not warrant the killing of the organism in question.

Anyway, in the course of our reasoning, even without addressing any question other than the question of "capacity for suffering", we've arrived at the fairly conservative principle that the moral man must give other creatures the benefit of the doubt whenever possible, in terms of their capacity to suffer. This fairly universal principle would have to be factored into any mathematical "moral equation" of the type you're attempting to construct.
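The benefit-of-the-doubt principle can be sketched as a rule for handling uncertainty (a minimal illustration; the capacity ranges and names below are assumptions, not measurements): when a creature's capacity to suffer is uncertain, take the upper bound of the estimate, so that uncertainty never lowers the moral weight of harming it.

```python
# Sketch of the conservative "benefit of the doubt" rule: given an
# uncertain estimate of an organism's capacity to suffer, use the upper
# bound of the range as its moral weight. All ranges here are invented
# purely for illustration.

def moral_weight(capacity_range):
    """Conservative weight: the highest plausible capacity to suffer."""
    low, high = capacity_range
    return high

capacities = {
    "bacterium": (0.0, 0.0),  # no nervous system: no doubt to extend
    "earthworm": (0.0, 0.5),  # rudimentary nerves: uncertain, weight 0.5
    "human":     (1.0, 1.0),
}

assert moral_weight(capacities["earthworm"]) == 0.5  # doubt resolved upward
```

The design point is that the earthworm's uncertainty interval is collapsed toward caution, which is exactly what the "most conservative view" demands.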

I'm afraid that's all I can think of on the topic right now.

-

Quote:
 Originally posted by Achilles: Would it be inaccurate to interject that what we're looking for is a creature's capacity for suffering as compared to its capacity for happiness? A bee has relatively diminutive capacity for suffering or happiness especially when compared to the highly allergic human that it is about to sting, correct? So it wouldn't be immoral to kill a bee that was trying to attack you. Conversely, a cat has a relatively higher capacity of suffering and happiness, therefore it would not be moral for an allergic person to randomly kill cats.
Hmm. On these examples, Achilles: A bee-sting can be fatal to a person allergic to bee-stings. Therefore it might well be moral for this allergic fellow to kill the bee, as it qualifies as self-defence.

If the person allergic to cats might ALSO be killed by the cat, (improbable) AND if the killing of the cat efficiently removed the threat (which it probably would not) then the killing of the cat might also be self-defence and therefore moral.

Given these variables and the probable circumstances surrounding each example, I personally don't think that the "capacity for suffering vs. happiness" question is addressed by the examples at all. Conversely, I don't think the question is relevant to any discussion of these examples specifically.

-

Quote:
 Originally posted by Tyrion: Also, the main problem I have with moral objectivity is that humans can never be truly objective; we always have some bias because we always have an opinion and a limited view of existence.
I'm afraid that's the same non-sequitur that many people seem to churn out in these debates, Tyrion. Human objectivity (or lack of it) has NOTHING whatsoever to do with moral objectivism. Logic dictates that morality must be universal or it is not morality. Therefore, morality by definition IS objective, and must be applied objectively to be moral.

Whether people are CAPABLE of doing this is neither here nor there. It's literally completely irrelevant.

In essence, your stance is that: "people aren't objective therefore morality can never be objective". Which is like saying: "people aren't objective therefore mathematics can never be objective". Which is obviously nonsense. There is a right answer to a calculation and there are wrong answers. People may assert that "2 + 2 = 5", but that doesn't MAKE it five. That doesn't MAKE the numbers relative.

Numbers are numbers, just as morality is morality. People make mistakes while exploring mathematical calculations, people make mistakes while deciding what is morally right. But that doesn't make "maths relative". It doesn't make "morality relative". It just means people are fallible. "Moral relativism" is an irrational, illogical, and by definition immoral stance.

Therefore, your position doesn't make any sense.

[FW] Spider AL
--
Hewwo, meesa Jar-Jar Binks. Yeah. Excusing me, but me needs to go bust meesa head in with dissa claw-hammer, because yousa have stripped away meesa will to living.
04-24-2007, 03:26 AM   #17
tk102
@tk102
Well past expiration date

Join Date: Jan 2004
Posts: 5,767
Current Game: FTL

@windu6:
I think I understand the direction you're investigating: trying to determine how a person will choose to act given his predisposition and various social factors. That goal is a bit different from what I'm seeking with this thread, which was to determine an objective definition for morality independent of the individual and his society.

@Spider AL:
Quote:
 1. Does your equation tell us anything about morality that hasn't already been elucidated in longhand by the major moral philosophers of history?
Oh probably not... but the syntax is concise, with the goal of eliminating all ambiguity in the terms. When you say m(x) and I say m(x), I know we're talking about exactly the same thing, for example.
Quote:
 More importantly, COULD it tell us anything new about morality in the future?
Well, I'll admit my goal was less lofty than that. Rather than discover something new about morality, I wanted to gather the known factors of morality and relate them in a form of shorthand. I figured having a formula such as that would make an easy reference guide. At least on paper, if not in practice.

I understand what you mean by loss. I tried to cheat it into the D(x) factor assuming death as being the maximum distress for an individual:
Quote:
 Originally Posted by tk102 ...especially since organ donation would likely result in his death (assumed to be the maximum distress for an organism)...
I believe that was the only negative impact you cited that didn't fall under D(x). "Violations of established rights", I believe, would qualify as psychological distress. I'm open to suggestions, though, of how better to define m(x) in terms of physical distress, psychological distress, and death. Maybe each of those gets its own variable instead of being lumped under D(x).

I'm glad you took issue with Ky. It bothered me too; it seemed quite anthropocentric. But at the same time, I am surprised you wouldn't kill a mosquito. If I saw one biting my son, I wouldn't hesitate, even without the risk of disease.

But getting back: you are suggesting that in order to remain objective, we simplify the equation:

D(x) := ∑ δxyKy : (y ∈ P)

to

D(x) := ∑ δxy : (y ∈ P)

That is quite a conservative view, considering it puts the suffering of a mosquito on par with the suffering of a human. It's not unheard of, though -- Jainism follows this belief precisely. Plus, it does eliminate the rather troublesome Ky value.

But why does Ky trouble us? As you suggested with your alien scenario, what appears to an advanced alien as moral may appear to us as cruel and immoral. If we assume the universe has a fixed number of species within it, it is possible there does exist a scale of complexity that could be objectively defined. By inference then, the seemingly subjective Ky would in fact be an objective value. So in that case, perhaps it is true that Khuman << Kalien. Oh yeah, that's why it troubles us.
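The weighted and unweighted definitions of D(x) can be contrasted with a toy calculation (every δ and K value below is invented purely for illustration):

```python
# Toy comparison of the two candidate distress sums discussed above:
# weighted:   D(x) = sum over y in P of delta_xy * K_y
# unweighted: D(x) = sum over y in P of delta_xy
# The delta and K numbers are invented for illustration only.

def D_weighted(deltas, K):
    return sum(deltas[y] * K[y] for y in deltas)

def D_unweighted(deltas):
    return sum(deltas.values())

K = {"person": 1000.0, "mosquito": 1.0}      # assumed capacity weights

smash = {"person": 0.001, "mosquito": 1.0}   # mosquito's distress dominates
dont_smash = {"person": 0.01, "mosquito": 0.0}

# With K_person >> K_mosquito, not smashing carries the greater distress...
assert D_weighted(dont_smash, K) > D_weighted(smash, K)
# ...but dropping K (the Jainist reading) reverses the conclusion.
assert D_unweighted(smash) > D_unweighted(dont_smash)
```

So the entire mosquito verdict hinges on K, which is why the objectivity of K matters so much.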

Quote:
 TK, I wonder why you'd want my input?
Because of the lightbulb.

I still haven't figured out how to represent a categorical imperative into this set of equations. Anyone have any suggestions?
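For what it's worth, one speculative way to bolt a categorical imperative onto this notation (an assumption of this sketch, not a settled reading of Kant): treat it as a universalisability constraint, permitting x only if total distress stays bounded when every agent performs x.

```python
# Speculative sketch: encode a categorical imperative as the constraint
# that an action x is permissible only if, universalised across all
# n_agents, the total distress it causes stays under some bound. Names,
# numbers, and the bound itself are illustrative assumptions only.

def passes_categorical_imperative(distress_of, x, n_agents, threshold):
    """Permit x only if universal adoption keeps total distress bounded."""
    return n_agents * distress_of(x) <= threshold

distress = lambda x: {"lie": 2.0, "tell_truth": 0.1}[x]

assert passes_categorical_imperative(distress, "tell_truth", 1000, 500.0)
assert not passes_categorical_imperative(distress, "lie", 1000, 500.0)
```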
04-24-2007, 06:53 PM   #18
Windu Chi
@Windu Chi
Banned

Status: Banned
Join Date: Apr 2004
Location: Getting revenge on that traitor, Anakin.
Posts: 882

Quote:
 Originally Posted by tk102 @windu6: I see think I understand the direction you investigating in trying to determine how a person will choose to act given his predisposition and various social factors. That goal is a bit different than what I seeking with this thread which was to determine an objective definition for morality independent of the individual and his society.
You must mean an A.I., or someone, if such a person exists on Earth, who has no emotions.
But that human may exist; nothing is impossible.

Or, thinking outside the box, a supreme universal intelligence for our universe?

A universal intelligence that is composed of all the intelligent life in our visible universe.

Quote:
 Originally Posted by tk102 I still haven't figured out how to represent a categorical imperative into this set of equations. Anyone have any suggestions?
categorical imperative: The moral principle that behaviour should be determined by duty.

Well, this one may be tough; I will have to do some structural analysis with Set Theory.

I will post if I figure this one out, tk102.
05-14-2007, 07:37 PM   #19
Spider AL
@Spider AL
A well-spoken villain...

Join Date: Jan 2002
Location: Help, help, I'm stapled to my workstation.
Posts: 2,162
I'm back.

Quote:
 Originally posted by tk102: Oh probably not... but the syntax is concise and with the goal of eliminating all ambiguity in the terms. When you say m(x) and I say m(x) I know we're talking about the exactly the same thing for example.
Removing ambiguity is a laudable goal, but since your notation merely substitutes symbols directly for longhand words and concepts, it's not going to be any more or any less ambiguous than the original terms were. We could certainly quibble over the meaning of "m(x)" to precisely the same extent as we could quibble over a longhand term like "the morality of a given action".

Concision is also a laudable goal, but I'm afraid that since every public usage of a "morality equation" will have to be accompanied by a longhand key, it'll be slightly less concise than just using the longhand.

Don't get me wrong, I still think that it's an interesting exercise that might yield new insights from manipulation of the variables... but I rather think that's all it's useful for.

Quote:
 Originally posted by tk102: I understand what you mean by loss. I tried to cheat it into the D(x) factor assuming death as being the maximum distress for an individual: I believe that was the only negative impact you cited that didn't fall under D(x). "Violations of established rights" I believe would qualify as psychological distress. I'm open to suggestions though of how better to define m(x) in terms of physical distress, psychological distress, death. Maybe each of those get their own variable instead of being lumped under D(x).
Absolutely they need their own values. I mean, conflating these values totally oversimplifies the equation. If the values are joined at the hip, there's no way to properly delineate say... a painless murder and on the other end of the scale, torture without physical injury.
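Splitting D(x) into separate components, as suggested, might look like this in a rough Python sketch (the weights and names are assumptions of the sketch only):

```python
# Sketch of splitting the single distress term into separate components:
# physical distress, psychological distress, and loss (of life, of rights,
# etc.), combined into one "negative impact" score. The weights below are
# invented for illustration.

def negative_impact(physical, psychological, loss,
                    w_phys=1.0, w_psych=1.0, w_loss=10.0):
    """Weighted sum that keeps the components distinguishable."""
    return w_phys * physical + w_psych * psychological + w_loss * loss

painless_murder = negative_impact(physical=0.0, psychological=0.0, loss=1.0)
injuryless_torture = negative_impact(physical=0.0, psychological=5.0, loss=0.0)

# A single lumped D(x) could not tell these apart; separate terms can.
assert painless_murder != injuryless_torture
```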

Quote:
 Originally posted by tk102: I'm glad you took issue with Ky. It bothered me too. Seemed quite anthrocentric, but at the same time I am surprised you wouldn't kill a mosquito. If I saw one biting my son, I wouldn't hesitate even without the risk of disease.
Why would you kill it? Merely because of parental instinct? If so, I'm sure you'll agree that such protective instincts- while human and understandable- are not means by which we can determine a moral course of action.

Quote:
 Originally posted by tk102: But getting back, you are suggesting that in order to remain objective, we simplify the equation: D(x) := ∑ δxyKy : (y ∈ P) to D(x) := ∑ δxy : (y ∈ P) That is quite a conservative view, considering it puts the suffering of a mosquito on par with the suffering of a human. It's not unheard of though -- Jainism follows this belief precisely. Plus it does eliminate the rather troublesome Ky value.
Which once again highlights this point: In order to be optimally moral, it follows that we must follow the most conservative view. By this, I mean the optimally moral individual would follow a course of action that eliminated even the RISK of behaving immorally.

Thus, while I am not a follower of Jainism, I would have a hard time arguing against the assertion that they're being optimally moral in their treatment of other creatures.

Quote:
 Originally posted by tk102: But why does Ky trouble us? As you suggested with your alien scenario, what appears to an advanced alien as moral may appear to us as cruel and immoral. If we assume the universe has a fixed number of species within it, it is possible there does exist a scale of complexity that could be objectively defined. By inference then, the seemingly subjective Ky would in fact be an objective value. So in that case, perhaps it is true that Khuman << Kalien. Oh yeah, that's why it troubles us.
Mmm, bit of a leap there, TK. I rather think the reason the alien scenario shows that anthropocentric views are immoral is that we know that we suffer and die, and that suffering and death are negative experiences for us. So regardless of whether the hypothetical aliens are more complex than us in both body and mind, that doesn't make their version of suffering more "valuable" than ours. It doesn't make our lives less "important". After all, empathy is about putting yourself in the shoes of others, not weighing them against your self-image.

So I think the question of whether there's an objectively definable scale of complexity on which all higher animal life can fit... is an utter irrelevance to the question of morality and moral treatment of other creatures.

Quote:
 Originally posted by tk102: I still haven't figured out how to represent a categorical imperative into this set of equations. Anyone have any suggestions?
The categorical imperative is essentially a plea for moral universality. Universality is already implied (assumed, actually) in the equation. I don't think you need to fit the CI in anywhere to properly represent the values you're trying to describe. It would take a separate (and prohibitively complex) equation to deal with the question "why should a person be moral?"

[FW] Spider AL
--
Hewwo, meesa Jar-Jar Binks. Yeah. Excusing me, but me needs to go bust meesa head in with dissa claw-hammer, because yousa have stripped away meesa will to living.
 05-30-2007, 11:30 AM #20 Nancy Allen`` @Nancy Allen`` Banned     Status: Banned Join Date: Jan 2006 Posts: 1,948
There are two problems I see with moral objectivism: cultural differences and personal bias.

Cultural difference means that what is seen as immoral in one part of the world is completely normal in another. For example, women being treated as objects, or as ornamental, in Asia does not fly well with the Western world. In fact, culture may not have as much to do with it as one's upbringing and beliefs. One person may believe the best way to take action is a confrontational one, another might feel not so, and yet another, all three being from the same culture, may feel it best to just let something slide.

With regard to personal bias, I see that someone will feel that their morals are right regardless of what someone else may think. They may feel that it's perfectly acceptable, or that they are entitled, to act the way they do. They may even go further and claim that it's alright for them to act that way and not others, or simply pick on others for the same faults they refuse to accept in themselves.
 06-09-2007, 07:01 PM #21 Spider AL @Spider AL A well-spoken villain...     Join Date: Jan 2002 Location: Help, help, I'm stapled to my workstation. Posts: 2,162 Your "problems" with objective morality were all addressed in the earlier thread entitled "Moral Relativism". In short, your contentions make no sense. Just because something is regarded as moral in one culture and immoral in another means nothing, except that at least ONE of those cultures has gotten it wrong. Like numbers, objective morality is an abstract objective standard, it remains static whether people perceive it correctly or not. [FW] Spider AL -- Hewwo, meesa Jar-Jar Binks. Yeah. Excusing me, but me needs to go bust meesa head in with dissa claw-hammer, because yousa have stripped away meesa will to living. you may:
06-09-2007, 10:33 PM   #22
Windu Chi
@Windu Chi
Banned

Status: Banned
Join Date: Apr 2004
Location: Getting revenge on that traitor, Anakin.
Posts: 882

Quote:
 Originally Posted by Spider AL Just because something is regarded as moral in one culture and immoral in another means nothing, except that at least ONE of those cultures has gotten it wrong.
Before you seek knowledge, you have to know that you are ignorant.
It might be the case that there is no preference between right and wrong with respect to other cultures.
Maybe there is no static wrong or right, but a choice of the society.
Well, of course, not other cultures on Earth; maybe far beyond Earth.
Other universes, maybe!

But I'm wasting my time telling you wisdom, you only care for logic.

Quote:
 Originally Posted by Spider AL Like numbers, objective morality is an abstract objective standard, it remains static whether people perceive it correctly or not.
I don't agree that morality can be objective for intelligent life forms, since most intelligent living beings that we know of here have emotions and personal bias.
And the abstract concept of morality was conceived by humans, who had emotions and personal bias, of course.

Maybe an A.I. (artificial intelligence) can be objective about morality, since it won't have emotions, unless an intelligent life form can figure out how to give it human (or other intelligent life form) emotions.

Since it looks like, presently (maybe we will know more when we become more aware of the rest of existence), an A.I. doesn't get created by the universe, an intelligent life form will have to create the A.I., and that intelligent life form will have its personal bias influencing the preferences of the A.I.

That will, of course, lead to negating the A.I.'s moral objectivity.

Last edited by windu6; 06-10-2007 at 01:28 AM.
06-10-2007, 01:14 AM   #23
Det. Bart Lasiter
@Det. Bart Lasiter
obama.png

Join Date: Mar 2005
Location: `(•.°)~
Posts: 7,997
Current Game: all

Quote:
 Originally Posted by Spider AL In short, your contentions make no sense. Just because something is regarded as moral in one culture and immoral in another means nothing, except that at least ONE of those cultures has gotten it wrong.
Haha, what an ass way of thinking.

"No, Mama. You can bet your sweet ass and half a titty whoever put that hit on you already got the cops in their back pocket." ~Black Dynamite
 06-09-2007, 07:18 PM #24 Nancy Allen`` @Nancy Allen`` Banned     Status: Banned Join Date: Jan 2006 Posts: 1,948 And who defines the standard? Exactly. No one has a sheet of mathematics saying x + y = this is what's right and wrong.
 06-09-2007, 07:24 PM #25 Spider AL @Spider AL A well-spoken villain...     Join Date: Jan 2002 Location: Help, help, I'm stapled to my workstation. Posts: 2,162 The standard is easy to define, through a basic application of logic. Go and read through that thread again, it's all explained in great depth by several people. Just as in mathematics, simple logic allows one to examine all variables in the moral equation and extrapolate from them a ruleset by which moral behaviour can be quantified. [FW] Spider AL -- Hewwo, meesa Jar-Jar Binks. Yeah. Excusing me, but me needs to go bust meesa head in with dissa claw-hammer, because yousa have stripped away meesa will to living. you may:
 06-09-2007, 07:35 PM #26 Nancy Allen`` @Nancy Allen`` Banned     Status: Banned Join Date: Jan 2006 Posts: 1,948 So the answer would be what? Force the same set of standards onto everyone? Clearly a lot of people are wrong in their set of standards.
 06-09-2007, 07:39 PM #27 Spider AL @Spider AL A well-spoken villain...     Join Date: Jan 2002 Location: Help, help, I'm stapled to my workstation. Posts: 2,162 "Forcing" others to conform to a moral standard? Well that would depend, wouldn't it Nancy. Laws already "force" people to conform to a societal standard, in a way. I personally wouldn't have a problem "forcing" an axe-murderer to stop his immoral behaviour. What specific example were you thinking of when it comes to "forcing" others to behave morally? [FW] Spider AL -- Hewwo, meesa Jar-Jar Binks. Yeah. Excusing me, but me needs to go bust meesa head in with dissa claw-hammer, because yousa have stripped away meesa will to living. you may:
 06-09-2007, 07:45 PM #28 Nancy Allen`` @Nancy Allen`` Banned     Status: Banned Join Date: Jan 2006 Posts: 1,948 You would also be against torture I take it, genocide, being allowed to get away with atrocities, etc.; that would be correct, wouldn't it? Well, you tell me. Obviously a lot of people are morally wrong; they don't abide by a mathematical sheet like a computer. So the question is how to fix that.
06-09-2007, 07:55 PM   #29
Spider AL
@Spider AL
A well-spoken villain...

Join Date: Jan 2002
Location: Help, help, I'm stapled to my workstation.
Posts: 2,162
Quote:
 You would also be against torture I take it, genocide, being allowed to get away with atrocities, etc.; that would be correct, wouldn't it?
Absolutely correct. That is why... for instance... I oppose US/UK torture, US/UK sponsored genocide and US/UK atrocities worldwide. It's basic morality.

Quote:
 Well, you tell me. Obviously a lot of people are morally wrong; they don't abide by a mathematical sheet like a computer. So the question is how to fix that.
Well Nancy, how do you try to "fix" the fact that a lot of people commit CRIMES? You penalise them and take measures to prevent further crimes.

It'd be the same with morality, and ideally, the law SHOULD be pure morality. The fact that the law is currently flawed means that campaigning is necessary to improve the legal system and our lexicon of laws. That's all.

 06-09-2007, 08:01 PM #30 Nancy Allen`` @Nancy Allen`` Banned     Status: Banned Join Date: Jan 2006 Posts: 1,948 Iraqi torture, Palestinian/Iranian-sponsored genocide, Middle Eastern Muslim terrorist atrocities. The law would be of absolutes? Would there be any thought given to whether or not, for example, a criminal was abused? Or a vigilante acted out of lack of police action?
06-09-2007, 08:03 PM   #31
Spider AL
@Spider AL
A well-spoken villain...

Join Date: Jan 2002
Location: Help, help, I'm stapled to my workstation.
Posts: 2,162
Quote:
 Iraqi torture, Palestinian/Iranian-sponsored genocide, Middle Eastern Muslim terrorist atrocities.
All torture, all genocide, all terrorism. Whether committed by us or by someone else. That's morality. It's one standard for everyone.

If they do it, it's wrong. If we do it it is also wrong.

Quote:
 The law would be of absolutes? Would there be any thought given to whether or not for example a criminal was abused? Or a vigilante acted out of lack of police action?
One presumes that under the ideal legal system, there would be at LEAST the same amount of consideration for individual circumstances as there is now. Is there some point you're trying to make?

 06-09-2007, 08:08 PM #32 Nancy Allen`` @Nancy Allen`` Banned     Status: Banned Join Date: Jan 2006 Posts: 1,948 Yes, under an absolute legal system it would be x + y = z. However, by adding + v, in other words allowing for extenuating circumstances, the result is changed to allow for them, making an exception to the rule.
 06-09-2007, 08:11 PM #33 Spider AL @Spider AL A well-spoken villain...     Join Date: Jan 2002 Location: Help, help, I'm stapled to my workstation. Posts: 2,162 Not really an exception to the rule, Nancy. The ruleset must be set up to ACCOMMODATE individual variations in order to be optimally moral. Like our self-defence laws. We are allowed under the law to defend ourselves, and if it's necessary to kill someone who is attacking you with lethal force, it is legally justifiable to do so. This is not an "exception to the anti murder law" so much as it is a recognised special circumstance that warrants its own rule.
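The distinction drawn in post #33, that a recognised circumstance is a rule in its own right rather than an exception, can be sketched as a small ruleset. The rules and verdicts below are illustrative assumptions, not a claim about any real legal system:

```python
# Sketch of a ruleset that ACCOMMODATES circumstances rather than
# granting "exceptions": self-defence against lethal force is its own
# rule, not a carve-out from the general anti-murder rule. Illustrative only.

def judge_killing(circumstances):
    """Return a moral verdict for a killing, given a dict of circumstances."""
    if circumstances.get("self_defence") and circumstances.get("lethal_threat"):
        return "justified"  # recognised special circumstance: its own rule
    return "immoral"        # the general anti-murder rule applies

print(judge_killing({"self_defence": True, "lethal_threat": True}))  # justified
print(judge_killing({"self_defence": True}))                         # immoral
```

Note that adding a circumstance does not modify the base rule; it adds a branch, which is the structural point being made about self-defence law.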
06-10-2007, 10:28 AM   #35
Windu Chi
@Windu Chi
Banned

Status: Banned
Join Date: Apr 2004
Location: Getting revenge on that traitor, Anakin.
Posts: 882

Quote:
 Originally Posted by Spider AL windu6: You're back! I can say without fear of contradiction from any quarter: The place most definitely wasn't the same without you.
Yeah, whatever!
I can assure you no one here missed me.
I'm the weird one!
Also, I'm hated by others.
Quote:
 Originally Posted by Spider AL As for your contentions that morality cannot be objective because people have personal bias
I didn't say only personal bias; I also said emotions.

Quote:
 Originally Posted by Spider AL it's nonsense
There is no definite nonsense; there is only uncertainty.
Meaning is relative!
Nonsense is relative!
Some knowledge seems nonsensical only because of ignorance of the infinite existence; the infinite meaning of existence.

Quote:
 Originally Posted by Spider AL Numbers remain objective even when mathematicians get their sums wrong
Do you mean numbers already exist before intelligent lifeforms conceive of them?
If that is the case, then you will have to accept the possibility that knowledge is infinite, and you can't simply say someone got it wrong, since infinity implies none of us has the total abstract truth; right or wrong with respect to morality will be fraught with uncertainty.
We will always be seeking knowledge, and will possibly always remain ignorant of the true abstract truth.

Quote:
 Originally Posted by Spider AL morality remains objective even when nobody in the whole world has perceived the correct, moral course of action to take in a certain circumstance.
How can you be sure that we know what is truly right or wrong in existence?
The only evidence we have of morality is here on Earth.
You should consider thinking outside the box.
Quote:
 Originally Posted by Spider AL Morality is a dry equation.
What is a dry equation? I never heard of that one.

Quote:
 Originally Posted by Spider AL To each moral question, there is a "right" answer (an optimally moral answer) and there are infinite "wrong" answers. (Less moral courses of action.)
Ok, I'm going to ask you some questions.
Suppose you had to save your loved one from death and a stranger from death, by preventing them from falling off a cliff, but you can only save one of them.
Which one will you choose to save, your loved one or the stranger?
Since you can only save one of them and one of them will have to die, is it morally right or wrong to choose?
If you decide to fall with them, is it morally right or wrong to commit suicide?
If you save one of them, is it morally right or wrong to make that choice?
If you save neither of them, is it morally right or wrong to make that choice?
Remember, if you let either one of them die you are immoral, according to your reasoning.
Also, will you save your loved one over the stranger?
Are you amoral if you commit suicide?

Quote:
 Originally Posted by Spider AL It doesn't matter whether people choose the right answer or not. The answer remains out in the aether, an abstract truth that may be attainable to those with the necessary reasoning power to see it, and the necessary empathy to want to see it.
Empathy: if you depend on that, you can't be objective; feelings are biased by the emotions.
Feelings: emotional or moral sensitivity (especially in relation to personal principles or dignity).
Personal, it says; bias!

Quote:
 Originally Posted by Spider AL As for your persistent references to alien life forms and artificial intelligences... They're all irrelevant and laughable. No offence.
They are relevant; you are being biased, Spider.
Our society is not the only one.
Also, artificial intelligences will have emotions one day; we should include them in discussion of the philosophy of morality.
You should consider breaking down your enclosing walls of skepticism.
Or you will remain ignorant of the abstract truth of existence.
No offence, and I'm trying to be nice here.

Last edited by windu6; 06-10-2007 at 11:36 AM.
06-10-2007, 03:42 PM   #36
Det. Bart Lasiter
@Det. Bart Lasiter
obama.png

Join Date: Mar 2005
Location: `(•.°)~
Posts: 7,997
Current Game: all

Quote:
 Originally Posted by Spider AL jmac7142: Slavery was (and is currently, I might add) considered perfectly acceptable in many cultures throughout history. All those cultures got their moral sums wrong. The fact that they believed it to be acceptable does not make it moral. Likewise in our own culture and our own time, there are those who believe invading a sovereign nation without just cause is moral. They are wrong. Their belief affects the moral equation not a jot.
No, you believe that they are immoral.

Nice job trying to get me to argue for slavery and the Iraq War by the way.

06-10-2007, 05:25 PM   #37
Spider AL
@Spider AL
A well-spoken villain...

Join Date: Jan 2002
Location: Help, help, I'm stapled to my workstation.
Posts: 2,162
Jmac:

Quote:
 Originally Posted by jmac7142 No, you believe that they are immoral.
Of course I believe that such things as slavery and illegal international aggression are immoral, because an unbroken chain of logical reasoning leads me to believe that they are immoral. Morality is defined by reason and logic.

Quote:
 Originally Posted by jmac7142 Nice job trying to get me to argue for slavery and the Iraq War by the way.
I don't think you understood the point I was trying to make, Jmac. Perhaps I didn't make myself clear enough. I'm not "trying to get you to argue for" anything. I am pointing out that just because a group of people (be it a minority or an entire culture... or the whole world) believes something to be moral... doesn't make it moral.

If two people have different ideas about the most moral course of action in a given situation... one or both of them are wrong. That is the point. That morality is not "whatever we feel like" at the time, it is an abstract that we either perceive correctly... or fail to perceive correctly.

You may sit there and state that this is an "ass way of thinking", but once again... you have no good reasons to make such statements, you have not CITED any such reasons and frankly you will never find such reasons, in my estimation.

Windu:

Quote:
 Originally Posted by windu6 I didn't say only personal bias also I said emotions.
Emotions are merely contributing factors within the larger issue of bias, Windu. Therefore this has already been addressed.

Quote:
 Originally Posted by windu6 Nonsense is relative!
No, there is either sense... that is: that which makes sense, that which is logical... or nonsense. That which is not logical. They are absolutes. There is no issue of relativism when it comes to the question of whether someone is "making sense".

Quote:
 Originally Posted by windu6 We will always be seeking knowledge and will possibly always remain igorrant of the true abstract truth.
Of course. As an objectivist, I automatically have to accept the possibility that I will never attain the state of perceiving absolute truth.

And furthermore, I must accept the certainty that I will never KNOW to an absolute degree whether I have attained such a state or not, even if I have in fact attained it.

But that doesn't signify anything. The fact that we are fallible human organisms doesn't mean that there is no objective truth. It merely means that we may be incapable of perceiving it.

Quote:
 Originally Posted by windu6 Ok I'm going to ask you some questions.

First to even out the question a little, let's assume that the two people (your lover and the stranger) are of the same age and apparent physical health.

Let's discard your options that involve jumping off after them, as suicide would serve no useful purpose, moral or otherwise.

Next let's suggest that it's morally necessary to save at least one of them, as letting them both die would merely be shirking one's moral responsibility to help if possible.

Next let's point out that making a judgement based on one's personal affections for one of the falling individuals is immoral. It's perfectly human and understandable, but personal bias should be disregarded, ideally.

So we have come to the conclusion that- morally speaking- one must save one of the individuals. It is therefore morally necessary to make a dispassionate choice as to which person to save. Since we know nothing about the stranger, it's hard to weigh the impact of his/her death against the death of one's lover. Perhaps if the stranger was a family man supporting several children and one's lover was childless it would be more moral to save the stranger. But this set of factors would have to be established before such a judgement could be made.

In a situation where nothing is known about the stranger, it would be impossible to weigh their life against the life of one's lover... therefore to save either would be equally moral.
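[Editorial sketch: this weighing procedure can be phrased in the opening post's terms as minimising D(x). With nothing known about the stranger, both rescues carry the same estimated distress, so either is equally moral; known facts break the tie. The distress numbers below are purely illustrative assumptions.]

```python
# Cliff dilemma as distress minimisation. `facts` holds what is known
# about each person; with no facts, both choices tie. Illustrative only.

def expected_distress(saved, facts):
    """Estimated total distress if `saved` is rescued and the other falls."""
    victim = "stranger" if saved == "lover" else "lover"
    base = 10.0  # assumed distress of one death, whoever it is
    # Known dependants of the victim add to the distress of their death
    return base + facts.get(victim, {}).get("dependants", 0) * 5.0

no_info = {}
print(expected_distress("lover", no_info) == expected_distress("stranger", no_info))  # True

# If the stranger is known to support three children, saving the
# stranger now minimises total distress:
with_info = {"stranger": {"dependants": 3}}
print(expected_distress("stranger", with_info) < expected_distress("lover", with_info))  # True
```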

Quote:
 Originally Posted by windu6 Empathy, if you depend on that you can't be objective; feelings are bias by the emotions.
As noted before, objective morality is MOTIVATED by empathy, not DEFINED by empathy. Logic defines the correct- most moral- course of action in each situation. Not empathy.

Quote:
 Originally Posted by windu6 They are relevant, you are being bias, Spider. Our society is not the only one.
Alien civilisations ARE irrelevant Windu, not least because you have no evidence to suggest the existence of such civilisations... but also because we are discussing a very human and very terrestrial issue, the issue of morality.

 06-10-2007, 10:01 PM #38 Det. Bart Lasiter @Det. Bart Lasiter obama.png     Join Date: Mar 2005 Location: `(•.°)~ Posts: 7,997 Current Game: all Heh, AL, you make it sound as though morality itself is some sort of deity whose expectations we can either live up to or not.
06-11-2007, 02:45 AM   #39
Windu Chi
@Windu Chi
Banned

Status: Banned
Join Date: Apr 2004
Location: Getting revenge on that traitor, Anakin.
Posts: 882

Quote:
 Originally Posted by Spider AL Emotions are merely contributing factors within the larger issue of bias, Windu. Therefore this has already been addressed.
But you can't ignore emotions.
Morality is useless without emotions or empathy; you won't give a damn about doing good or evil if you have no emotions or sympathy for others.

Quote:
 Originally Posted by Spider AL No, there is either sense... that is: that which makes sense, that which is logical... or nonsense. That which is not logical. They are absolutes. There is no issue of relativism when it comes to the question of whether someone is "making sense".
Yes there is; the knowledge we have acquired today about the universe was once thought nonsense, until over time meaning was gained.
You keep believing that we are the only ones and that the current knowledge about the universe is known only to us.
That implies the rest of the knowledge of the universe remains, relative to us, nonsensical for the time being.
As time goes by and we continue to explore existence, the present nonsense will become meaningful to us.
For example, the existence of black holes was once thought of as nonsense, until more and more knowledge was gained about the universe; then the meaningful explanation of the existence of black holes was obtained.
The current nonsense is the big bang singularity that is supposed to have created our universe, but as the unproved theories of our universe, such as String Theory, M-Theory and Quantum Gravity, slowly bring understanding, more meaning replaces the nonsense.
Where do you suppose the concept of morality comes from?
Nowhere!

The explanation of our own existence is nonsense in terms of logic.

A first creator of existence makes no logical sense, because it is a contradiction: it conflicts with a logical principle called the principle of the excluded middle, which states, on the assumption of a person who uses logic, that no proposition is both true and false; it must be only true or only false. A first creator won't have a creator, which means a logician would have to conclude that the first creator had to create itself out of nothing, which is a contradiction of that principle.
A creator created itself!
The creator came from nothing.
So that would make the first creator infinite, which implies a contradiction in its own creation.
Since infinity is true and false at the same time.
A logician would have to call the idea of a first creator illogical, if he/she follows logic strictly; but an atheist doesn't believe in a creator or creators of any kind, and the idea of no creator is also illogical.
Because we would have to be created from nothing, and nothing, to a logician, is nonsense.

If you follow logic as strongly as a religion, as you do Spider, the concept of morality had to come from somewhere; it was acquired by the ancient philosophers, who gave it to our society today.

Quote:
 Originally Posted by Spider AL Of course. As an objectivist, I automatically have to accept the possibility that I will never attain the state of perceiving absolute truth. And furthermore, I must accept the certainty that I will never KNOW to an absolute degree whether I have attained such a state or not, even if I have in fact attained it.
Well, then you can't boldly say someone has gotten it wrong with high confidence.
Uncertainty seems to rule existence.

Quote:
 Originally Posted by Spider AL But that doesn't signify anything. The fact that we are fallible human organisms doesn't mean that there is no objective truth. It merely means that we may be incapable of perceiving it.
Then it's possible that something out there can perceive it, which would imply we aren't alone in existence.
And we aren't superior!

Quote:
 Originally Posted by Spider AL Let's discard your options that involve jumping off after them, as suicide would serve no useful purpose, moral or otherwise.
Well, I included that because some people will commit suicide out of guilt, or to save many others in dire circumstances.
Like the people in the twin towers on 9/11 who jumped with others because of guilt.
Or a soldier who jumps into the line of fire to save his buddies in combat; suicide he commits.
Quote:
 Originally Posted by Spider AL Next let's point out that making a judgement based on one's personal affections for one of the falling individuals is immoral. It's perfectly human and understandable, but personal bias should be disregarded, ideally.
What do you mean, ideally? There seems to be nothing ideal in real life on Earth.
You know, this is laughable; you're saying you will let your family member die over the stranger?
Because you don't want to be immoral?

Quote:
 Originally Posted by Spider AL It is therefore morally necessary to make a dispassionate choice as to which person to save. Since we know nothing about the stranger, it's hard to weigh the impact of his/her death against the death of one's lover. Perhaps if the stranger was a family man supporting several children and one's lover was childless it would be more moral to save the stranger. But this set of factors would have to be established before such a judgement could be made.
Man, you are sounding like a computer A.I.
So, you're saying some stranger's children are more important than your own family member?
Also, I was not talking about a lover; I meant your family member when I said loved one.

Quote:
 Originally Posted by Spider AL In a situation where nothing is known about the stranger, it would be impossible to weigh their life against the life of one's lover... therefore to save either would be equally moral.
Since you believe that our society is the ONLY one.
And we are superior!
No it won't; you let someone die whose death you could have prevented, and in this society that is against the rules of morality; negligent homicide.

No one's life should be weighed, ever.
In the concentration camps, the victims were forced to make that choice of weighing someone's life over another's, by the immoral SS concentration camp guards.
So those victims who made choices like that will be labelled immoral by some in our society.
The point I'm trying to make is that if you have emotions, sympathy and personal bias, you possibly can't be truly morally objective.

Quote:
 Originally Posted by Spider AL As noted before, objective morality is MOTIVATED by empathy, not DEFINED by empathy. Logic defines the correct- most moral- course of action in each situation. Not empathy.
So, you're saying logic should determine who should die and who shouldn't.
Should logic alone determine whose life is more valuable than someone else's?
That's ridiculous!
The Nazis used that kind of reasoning when they embraced evil in the death camps.
The Jews' lives were not considered valuable at all, unlike those of other ethnic groups that were death camp victims.
A Jew's life was considered less valuable than other concentration camp prisoners', because they were considered inferior. So, with the logic the Nazis used, the more of them (Jews) died, the less immoral it was to them (Nazis).
Because they were seen, in their inferiority, like wood; so worthless.
I think it's really amoral to use logic alone to determine who will die and who won't.

Quote:
 Originally Posted by Spider AL but also because we are discussing a very human and very terrestrial issue, the issue of morality.
What, you believe morality and other knowledge were acquired and invented only by us?
What about animals like elephants, gorillas, monkeys, dogs, etc.? They should be included too.
They have empathy!
It's utter arrogance to include only us and to say we are the only ones in existence.
What, do you believe that all those planets, stars and galaxies out there are just decoration?

Last edited by Windu Chi; 06-11-2007 at 06:22 PM.
 06-10-2007, 06:39 PM #40 Nancy Allen`` @Nancy Allen`` Banned     Status: Banned Join Date: Jan 2006 Posts: 1,948 One set of morals, one set of ideals. One belief, one vision; that's what we're talking about here, isn't it?