Here is a brief overview of the interpretive principles of each school of literary theory. The main divide appears to lie between those strands of thought associated with structuralism (modernism) and those associated with post-structuralism (postmodernism). The former finds its bearings within the Platonic, scientific, and ordered tradition, while the latter finds its bearings within the Nietzschean, chaotic, and creative tradition.
Modernism:
(1) Structuralism: Concerned with the forms of literary expression and the meanings those forms convey. Less concerned with authorial intent or with the author's background and biography, and less concerned with the cultural or historical circumstances of the writing. Structuralism is therefore anti-historicist, since the meaning of the text can be understood from the words and forms used, independent of anything else. The main principle binding structuralist thought is its focus on underlying plots and values rather than on the specifics of how a plot is developed or how the values are endorsed. New Criticism and formalism are related strands that similarly focus on the text itself. (Claude Lévi-Strauss and Ferdinand de Saussure)
Post-Modernism:
(2) Post-Structuralism: Advocates believe that forms of expression and words are inherently unclear, as are the intentions of the author. The position draws on W.V.O. Quine's idea of the indeterminacy of translation. The meaning of words is typically established by observable behavior or, in literary theory, by the common usage of words. Since these can't be known with certainty, meaning itself can't be certain. The 'structure' of literature, furthermore, must be interpreted by structuralists as meaning something; otherwise it would have no cognitive significance and would not be worth noticing and evaluating. Consequently, post-structuralists attack the structuralists' notion that meaning can be gathered from the structure of a text, since the concept of meaning is itself indeterminate.
Post-structuralism also attacks the fundamental principles, values, or morality expressed and furthered in a text by attacking those principles, values, or morality directly. That is, it attacks the motives of the author while attempting to reveal the shaky foundations upon which those principles or values rest. Post-structuralism has often been seen as providing the intellectual origin of New Historicism, since the latter focuses its criticism on the belief that ideas about fundamental values and morality are ultimately limited to the horizon within which they are thought. For instance, the self-evident proposition that men have inalienable rights to life, liberty, and property is merely a prejudice of our current, liberal age that may appear antiquated to future generations, which may have their own self-evident truths. Reader-response theory is also a natural extension of post-structuralism. It holds that there is no true or correct interpretation of a text; each interpretation depends on the reader and his interpretive community, with its own values, customs, and methods of interpretation. It's thus apparent that post-structuralism finds its philosophic home in relativism about the values and principles taken to be eternal truths by Platonic realists, which may include structuralists. (Jacques Derrida)
Thursday, December 28, 2006
Thursday, December 21, 2006
Philosophy of Language
Since I'm a newcomer to the study of the philosophy of language, I'll try to elucidate the classifications and conflicts within the field, not only to help those interested in the subject but to help myself understand the material as well. As Wittgenstein argued in the Tractatus, only those ideas in the mind that are intelligible (logically conceivable) have meaning. By writing out the doctrines of the major schools within the philosophy of language, I can give my ideas about the subject determinate meaning and therefore value. The major divide in the area appears to be between truth-conditional theories of meaning (Frege, Russell, Tarski, the logical positivists) and use theories of meaning (Wittgenstein, Quine, and the speech-act theorists).
(1) Referential Theory of Meaning (J.S. Mill): Mill made the distinction between denotation, or reference (subject; extension), and connotation, or the implication of an attribute to an object (predicate; intension). This school believed that the meaning of a sentence relies on its referent (subject), the denoted object in the real world that the subject term picks out. If one knows what an object in the real world 'is,' then one can know the meaning of sentences about that object. For instance, if I know what the Lincoln Memorial is, then I can understand the meaning of the sentence: The Lincoln Memorial sits in Washington D.C. The obvious difficulty with such an approach is that it's hard to account for non-existent referents. Although Pegasus doesn't exist, I can still understand the meaning of sentences about it (its intension). The theory also has difficulty explaining how identity statements can be informative. The referential theory would maintain that the sentence "the evening star is the morning star" adds nothing to our knowledge, but someone may know that one term refers to Venus without knowing that the other does. Learning that the two terms are co-referents of Venus is therefore a genuine gain in knowledge.
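To make the puzzle vivid, here is a minimal symbolization of the identity problem (my own illustration in logical notation, not Mill's):

% Let h and m abbreviate "the evening star" and "the morning star"; both denote Venus.
\[ \text{(a)}\quad h = h \qquad \text{(trivial, knowable a priori)} \]
\[ \text{(b)}\quad h = m \qquad \text{(informative: an astronomical discovery)} \]
% If meaning were exhausted by reference, (a) and (b) would say the same thing,
% since h and m co-refer; yet only (b) extends our knowledge.

This is essentially the puzzle that motivates Frege's sense/reference distinction in the next entry.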
(2) Connotative Theory of Meaning (Gottlob Frege): Frege made the distinction between sense (intension) and reference (extension), which appears to correspond to the distinction between connotation and denotation. Yet instead of finding meaning in referents (denotation), one should find meaning in predicates (intension; attributes of objects; the descriptive content of subjects). Objects, on this view, are nothing more than bundles of descriptions, akin to Hume's idea of the self. We understand the meaning of sentences, as well as objects, by knowing what the predicates (attributes or properties) mean. Russell, moreover, accounted for sentences with non-referring subjects (and for negative existentials) by saying that the sense of the sentence's logical formulation makes it understandable. For instance, the sentence "the present king of France is bald" is understandable because we know the predicate, "is bald," and so we could attach it to a subject if one existed. The objection to this theory is that proper names, objects, or subjects rarely have fixed descriptions or properties. A tree may have leaves in spring but not in winter, for example. Kripke and Putnam responded by introducing the idea of rigid designators, which allows the descriptive content of an object to change without changing the referent: the tree referred to in the example is the same tree in every possible world, so to speak.
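Russell's treatment can be made explicit. On the standard reconstruction of his theory of descriptions (a textbook rendering, not a quotation from Russell), "the present king of France is bald" unpacks into existence, uniqueness, and predication:

% K(x): x is presently king of France; B(x): x is bald.
\[ \exists x \, \bigl( K(x) \wedge \forall y \, ( K(y) \rightarrow y = x ) \wedge B(x) \bigr) \]
% Since no such x exists, the sentence comes out false rather than meaningless;
% its sense is carried by this logical form, not by a referent.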
(3) Picture Theory of Language and Language Games (Wittgenstein): Wittgenstein supposed that for a sentence to be meaningful, (1) senses must be attached to objects and (2) the forms of propositions must correspond to (resemble) facts in the world. In other words, one must be able to logically conceive of what a sentence says in order for it to be meaningful. The later Wittgenstein supposed that language's purpose was to lead to action, and that denotation, connotation, and the picture theory were all meant to get people to do things. He also introduced the famous private language argument, which holds that Descartes's idea that one can appeal to private experiences and thus create a private language is impossible.
(4) Verificationist Theory of Meaning (logical positivists): The logical positivists seized on the analytic/synthetic distinction to establish their idea of meaning. The test for whether a sentence has meaning is whether it is analytically true (true by virtue of the meaning of the terms in the sentence) or synthetic (verifiable through observational experience). This conception of meaning was anti-metaphysical (Gary Becker mentioned in his blog that metaphysics covers those things that are not verifiable). The positivists are considered naturalists, which means they hold that everything can be explained by observation of the natural world; morality can be explained by neurology, for instance. The sentence "God exists outside of space and time" is, on this view, meaningless. It's not analytically true, since nothing in the concept of God as omnipotent, omnipresent, and benevolent entails that He exists outside space-time (in fact, His omnipresence seems to place Him within it). Nor is it synthetically true, since the proposition isn't verifiable: experience presupposes (as Kant would say) the a priori concepts--categories--of space and time.
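The positivists' criterion can be stated schematically as follows (my paraphrase of the general idea, not a quotation from any particular positivist):

% The verification criterion of meaning, stated as a schema:
\[ \text{$S$ is cognitively meaningful} \iff \text{$S$ is analytic} \;\lor\; \text{$S$ is empirically verifiable} \]
% The first objection below applies this schema to itself: the criterion is
% neither analytic nor empirically verifiable, so by its own test it would be meaningless.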
The main objection to this theory is that the theory itself is meaningless by its own standards, since it's neither analytically nor synthetically true--it's self-refuting. Another difficulty is imparting meaning to general laws of nature, since experience doesn't confirm laws but, according to Karl Popper, only fails to falsify them. A final difficulty is that the theory has trouble imparting meaning to counterfactual propositions: they are neither analytically nor synthetically true, yet they seem to be meaningful and understandable.
(5) Tarski's Theory of Truth (Alfred Tarski): Tarski believed that the meaning of sentences rests in their syntax, or sentence and logical structure, as opposed to semantics, the meaning of the object or predicate (reference or sense). Propositions must correspond to facts in the world to be true. Tarski also introduced the distinction between metalanguage and object language; the former is a language used to analyze the latter. The object-language sentence "snow is white" can be analyzed in the metalanguage as: "snow is white" is true if and only if snow is white.
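The general pattern here is often called Convention T, or the T-schema; a standard rendering (not Tarski's exact wording) is:

% For each object-language sentence S, with p its translation into the metalanguage,
% an adequate truth definition must entail:
\[ \text{``} S \text{'' is true} \iff p \]
% Instance for the example above:
\[ \text{``snow is white'' is true} \iff \text{snow is white.} \]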
(6) Semantic Holism (W.V.O. Quine): Quine held that the meaning of sentences must be understood in terms of the speaker's and hearer's holistic experience rather than the particular experiences emphasized by the logical positivists; he thus sought to dismantle the analytic-synthetic distinction. Quine also introduced the idea of "radical translation," the idea that users acquire understanding of the meaning of a language's signs by understanding co-referents and common assent among users. He further introduced the "indeterminacy of translation," the idea that demonstratives (signs given meaning in context by the sign-giver) and indexicals (signs that change meaning in different contexts) are ultimately uncertain, since interpreters can only acquire the meaning of signs through the observable behavior of the sign-giver or through common usage. This doesn't mean, though, that some interpretations of signs aren't better than others.
(7) Speech-Act Theory (Paul Grice and J.L. Austin): Speech-act theorists focused on pragmatics (the context-dependency of meaning) more than the other theories did. They believed that language can be expressed (locution) in four ways. (1) Illocutionary acts are speech-acts meant to lead the hearer to recognize the speaker's intention; a promise, for example, is made precisely so that the one promised will recognize that the speaker intends to perform some action in the future. (2) Perlocutionary acts are meant for persuasion: the speaker intends the hearer to perform some action or to change or preserve a belief. (3) Utterance acts are the acts of speaking or writing, while (4) propositional acts are acts in which a speaker refers to an object and attributes properties, or qualities, to it.
Speech-act theorists also posited the idea of "conversational implicatures," the alteration of meaning by subtle changes in the context or form of an utterance; a common understanding of what these alterations imply is necessary to understand the meaning of the altered sentence. Lastly, they set out the conditions under which communication is possible, called the cooperative principle. For communication to be possible, the content of an utterance must satisfy the constraints of (1) quality (the speaker must believe the content of the utterance), (2) quantity (the speaker must be relatively efficient with word use), (3) relevance, and (4) manner (be perspicuous; avoid obscurity and ambiguity).
Tuesday, December 19, 2006
Gottlob Frege and "The Thought: A Logical Inquiry"
A terrible windstorm, lasting a full day, recently hit the area where my house is located and has knocked out the electricity at my house for the past week (at the time of this posting). My family has therefore decided to stay at my uncle and auntie's house until all gets back to normal. This unfortunate occurrence, like other unfortunate occurrences, has given me the opportunity to watch the excellent television series "House," as well as to read an article by Gottlob Frege in preparation for the philosophy of language class I intend to take this upcoming quarter. While I don't intend to provide any concerted criticism of the article, I'll merely provide a description of his views--criticism may follow in later posts.
Frege makes what he takes to be a crucial distinction between the meaning of 'thought' and 'idea.' Proceeding from the classic debate between materialists (empiricists) and idealists, Frege uses the distinction to argue ultimately for the materialist viewpoint. Since I have never seriously written about metaphysics before, let's slowly examine the metaphysical debate from its philosophical roots.
All materialists share the belief that objects exterior to themselves cause their perceptions--through sense experience--of the 'contents of their consciousness.' This presumes that objects exist independently of perception and therefore independently of minds. My perception, which Frege calls 'idea,' of seeing the monitor of a laptop and feeling my fingers type these words, for instance, is directly caused by the laptop's actually existing.
To those not conditioned to academic philosophy, this materialist principle would appear to be a tautology, which is to say that thinking otherwise would not only be incoherent but would also warrant committing the person uttering it to a mental asylum. On further reflection, though, such a reaction is clearly too hasty when one considers the possibility of hallucinations, dreams, and matrix worlds. If we take the previous example into consideration again, we'll notice that it's logically possible (I can conceive of it) that my typing this post may be part of a hallucination I'm experiencing and presuming to be real. The certainty that the laptop exists (in the world independent of my mind's thinking it) therefore vanishes. If one responds by saying it's 'probably' true that the laptop independently exists, then one wonders on what basis one can tell whether something exists independently or not, for it appears that some objective criterion must be used to distinguish between something existing and something not existing. Yet if the probability is based on experience, how would I know that all my earlier experiences were not similarly illusory? And if the probability is based on reason, there doesn't appear to be any more reason to believe the laptop exists than to believe it doesn't. To make the point another way, how do I know I'm not stuck in a matrix world? Skepticism about the existence of objects independent of the mind therefore ensues--realism appears to fall into deep trouble.
The alternative to materialism, idealism, was advocated by philosophers like George Berkeley, Immanuel Kant, and the German idealists (Hegel, Schopenhauer, Fichte)--although each defended a different form of idealism. Idealists begin from a position of solipsism, the idea that all I can know for certain are my own perceptions (experiences)--what I hear, see, taste, smell, and feel. Given that I know I have these perceptions for certain, I can then reflect upon the necessary conditions for my having them. These perceptions, I will notice, presuppose a priori space, time, and causality. For if I had these experiences in the absence of space, there would be no movement of the objects of perception, since there would be no space in which to move. If there were no time, there would be no movement either, because time (the succession of events) is necessary for an object to move from one place to another. The perceiver must also presuppose the concept of causality, since when one object moves another, the movement is caused by the former object.
Nietzsche, in his typical polemical way, attacked Kant and the idealists for supposing that these perceptions and a priori concepts reflect what is in the real world. The world of perception does not necessarily reflect the noumenal world (the thing-in-itself). Moreover, he critiqued the whole enterprise of pursuing the truth of the noumenal world altogether: since we do have these experiences, why should we value knowing the truth about the noumenal world, when we would have the same experiences either way? He also thought that all values were ultimately baseless--including the supreme value of truth-seeking--since one would ultimately come to the conclusion that we ourselves created values in the first instance, according to our emotions (what we wanted), and so we can change them if we want. Creativity, then, becomes the chief virtue for human beings: creating something out of nothing. It's a matter of human ingenuity for each person to create values for himself, for Nietzsche doesn't give specific prescriptions as to what those values should be. Relativism, as opposed to objectivism, takes over the world.
Frege may be best associated with the materialists. He thought thoughts themselves were the link between our mind and the external world. We can all share thoughts about these objects, but we can't share ideas, which are necessarily stuck within our own minds. He undertakes a type of conceptual analysis, trying to identify the necessary and sufficient conditions for thoughts and ideas so as to gain knowledge about them and the rest of metaphysics. He supposed that the idea of the self was not itself an idea like all other ideas (ideas about perceptions) but was
Friday, August 18, 2006
Justice Cardozo's Legal Realism: Subjectivity as Judicial Abuse of Power
I have recently finished reading Justice Benjamin Cardozo's (a justice of the Roosevelt era) book, "The Nature of the Judicial Process." It took me roughly twenty-four hours in all to finish, a stretch that began right after I finished my classes in Accounting and Business Communications at the University of Washington.
The central thesis of Cardozo's book is that subconscious forces inevitably taint (as opposed to merely 'affect') judicial decisions, no matter how objective a judge supposes himself to be. This thesis implies that judges are constrained by convention; that is, they can never completely escape the shackles of what Freud would call their 'subconscious' prejudices and biases, which result from the circumstances of the environment in which they live. Cardozo believes that although the ideal judge (most likely akin to Dworkin's Hercules judge) should base decisions on objective assessments and applications of the law, the subjective prejudices and perspectives of the self can't be overcome, and one may infer that he seeks to acknowledge that no judicial decision is ever free of subjective bias. He also claims that the central purpose of jurisprudence lies in creating social welfare (though this idea is slightly difficult to make sense of given Cardozo's ambiguous definition of that term).
Cardozo, moreover, believes that there are four sources from which the judge makes decisions: (1) the method of philosophy, (2) the method of history, (3) the method of custom, and (4) the method of sociology (of promoting 'social welfare'). (While I don't intend to go in depth into each of these methods, I'll touch on some or all of them briefly when they are useful in making other points.) He defines the source of judicial decisions based on philosophy as one of almost a priori reasoning. He thinks of reasoning philosophically--in adjudicating cases--as reasoning by analogy or as carrying out legal principles to their logical conclusion. The former is almost exactly like the common law, meaning that relevant similarities between the facts of distinct cases warrant judges to make the same ruling in both instances. The latter is like what John Rawls would call reflective equilibrium: it is to formulate legal principles and to work out a priori (which Cardozo seems to think is the only way of reasoning 'philosophically') the conclusions or applications of those principles in particular cases--it is like finding unintended consequences, which may provide grounds for rejecting a legal principle in favor of some other formulation.
Cardozo thinks that subconscious forces dictate which source of judicial decision-making, or standard for adjudication, takes precedence; for instance, one judge may think that social welfare effects matter more than logical consistency or the customs of society. The point is that judges can't escape whatever subconscious forces lead them to favor one 'method' over another.
Cardozo's theory of jurisprudence came at a time when the legal movement known as legal realism was taking hold of law schools and of general jurisprudential theory. Cardozo and his friend and colleague Justice Oliver Wendell Holmes Jr. were two powerful proponents of this theory at the time. It was an almost natural extension of the then-popular philosophy of pragmatism, which was making its mark on academia through the influence of William James.
My biggest point of difference with Cardozo's jurisprudential theory is the implied idea that law should evolve with the people and that judges should adapt their decisions to the changing circumstances of society. This is to say that judges should interpret the meaning of statutes and constitutions--the law--in light of the fact that the interests, customs, and composition of society change. The theory implies that judges know these customs and that they can change social welfare for the better. This seems to leave much room for judicial discretion and an almost baseless means of adjudicating cases--instead of simply acknowledging that subjectivity can't ever be completely removed, the theory actually leads to too much subjectivity. The old notion of judges making decisions based on the last book they read or what they ate for breakfast comes to mind. While Cardozo acknowledges that judicial 'law-making' is only allowed when legislative intention isn't completely settled, his theory still allows for massive abuses and implies that judges somehow know the customs of society, and can improve social welfare, better than legislatures--it invites judicial abuse: decisions based on political persuasion rather than on application of the law. This is what I believe is the logical extension of his argument.
What does it mean to take an objective approach to the law? Justice Scalia thinks that judges, in their role as judges, should apply the definitions of the words used by those who passed the laws (the legislature) to find the meaning of the law. In this way, judges avoid judicial law-making and leave that job to the legislature--the people elected to make law and represent the interests of the people. They, after all, know the culture, the customs, and the ways to promote social welfare better than judges, who tend to come from older generations. This post has become too long, so I will continue with this topic some other time.
Saturday, February 25, 2006
Career and Life Aspirations
I am perhaps overly concerned with the future of my life, and overly focused on education with respect to that future in particular, but I have become distraught at my seeming indecisiveness and ambivalence about my educational goals as an undergraduate. Do I want to work in the corporate environment, or do I want to work in academia as a college professor? Does majoring in a particular field inhibit my aspirations toward one of these paths if I actually, definitively choose one of them?
If I do decide that the corporate environment, either as a corporate lawyer or as an investment banker, really is the career in my best interest, it would be because of the salary I would likely be paid and the argumentative and analytical skills I like to exercise. My natural or acquired inclinations excite in me the idea of living without financial worries, with a loving family, and with the joy I would feel from owning a membership in a country club where I could play golf at my convenience. This aspiration perhaps conflates the "good life" with an aristocratic, high-society model, but I believe it is a reasonably noble aspiration, to say the least.
A possible career as an academic, however, also excites my fancy, because I love studying the greatest thinkers and their greatest thoughts, and I love researching and simply learning about them. This inclination of mine is perhaps properly credited to my over-estimation of my own intellectual abilities, but
Saturday, February 04, 2006
A New Path
I hope my foray into blog-posting goes well, so here I go:
I am currently a junior at the University of Washington, and yet I haven't even formally declared the majors that I plan to pursue. Although my major is technically listed as 'political science,' I have abandoned my pursuit of this major in favor of philosophy and business administration with a concentration in accounting. The reason I no longer want to pursue political science is not because I have grown weary of the subject (the department offers some of the more interesting and engaging classes at the UW that I would like to take), but rather, I realize that holding a political science degree is not worth much to employers. This is due in large part, I think, to the fact that many students graduate with a political science degree, and that major is one of 'last resort'; that is, graduating from the political science department is relatively easy, and one will usually declare the political science major after an unsuccessful pursuit of other, more difficult majors.
The primary reason for my choice of philosophy and accounting as majors rests in large part on my aspiration to attend law school and, once there, to specialize in corporate or tax law. Philosophy courses are, as professors and fellow students have told me and as I have inferred from taking the classes, excellent preparation for the skills needed by successful lawyers of all specializations. These skills include critical thinking, reading comprehension, and writing. Critical thinking means, more specifically in terms of how I define it in relation to the law, the ability to identify an argument; to assess it by first reconstructing it and then attacking the truth or falsity of the premises or the way in which the premises establish the conclusion (the reasoning); and to formulate one's own argument and to predict and defend against argumentative attacks by critics. Philosophy also emphasizes and improves one's reading comprehension of difficult material. Most importantly, one must have the intellectual capacity to recognize the main arguments of a text and the reasons put forth by the author in support of the conclusions of those arguments. Philosophy, moreover, stresses the ability to write clearly and coherently; that is, each sentence must lead to the next and have a reason for being in the essay, and each key word must be defined explicitly in order to avoid equivocation (changing the meaning of words) in other parts of the essay. Since the work of a lawyer is inherently argumentative (persuading an audience of one's position on an issue and discrediting the other side), the general ability of argumentation, as embodied in the skills of critical thinking, reading comprehension, and writing, is essential to successful lawyers.
I chose to major in Accounting primarily to gain the fundamental knowledge necessary for working in a corporate environment. Furthermore, if future events do not accord with my plans, and I am not able to attend law school immediately after I graduate, then I will always have a profession that I can fall back upon; whereas with a political science degree, I would probably end up having to work a pathetic service-sector job making only $24,000 a year.
The reason I desire to work in the field of corporate law is ultimately the salary I would be paid. Although this may seem like a shallow reason for choosing the job, I will provide two reasons for my choice. (1) Since I enjoy argumentation and debate so much, I have known since my freshman year of college that I want to become a lawyer; and since I am going to become a lawyer, I should be a corporate lawyer, since they generally receive the highest salaries. (2) Since corporate lawyers do get paid the most, I would have enough money to support my parents in their old age. This is important to me not only because of their love and support but also because it is customary in Filipino families for the children to care for their parents as the latter age. In a sense, the children are like the parents' retirement funds, for the parents invest much in the young so that they will grow up and become successful. So, in conclusion, I just hope that I will be able to make my parents proud by becoming a successful legal professional, and my path to this end begins with a successful experience as a philosophy and accounting major.