
Thursday, March 25, 2010

The Invalidation of One-way Knowable Subjective Experience

By Austin Garrido

The so-called “experience” of a phenomenon is indistinguishable from the processing of the information contained in that phenomenal experience, and it is not one-way knowable.

I propose that the qualia of "experiencing" a phenomenon are inseparable from the pattern of physical changes that occur in the system doing the experiencing. Whatever qualia exist, they exist embodied in whatever physical changes are occurring. If the neural pathways activated by experiencing a phenomenon were removed, most would agree that any conscious mind experiencing that phenomenon would also disappear. Since the experience of the qualia is intrinsically tied to whatever system (in this case the "neuronal firings") is doing the experiencing, I propose that any claim that distinguishes the two is a product of the inherent imprecision of our (and any) spoken language.

There is a classic thought experiment involving a woman, Mary, who knows all physical facts, with a specialty in the neural basis of vision, color, and perception. Mary is aware of all physical reactions, down to the sub-atomic level, that occur when a biological brain perceives the color “red”. Mary herself, however, has never seen red: she lives in a hypothetical house where nothing emits light in the red part of the spectrum. We assume that she retains the ability to see red, but that ability has never manifested itself. The question is whether, upon going outside her house and actually experiencing a red rose, Mary gains any new information. Philosophers like Thomas Nagel and Frank Jackson are likely to say that she does. If she does, then knowing/experiencing something must be non-physical, since even complete knowledge of the physical reactions to an experience is still consciously different from perceiving the stimulus externally.

Pete Mandik initially summarizes the “subjective experience is one-way knowable” argument as (K):

“(K): For all types of phenomenal character, in order to know what it is like to have a conscious experience with a phenomenal character of a type, one must have, at that or some prior time, a conscious experience with a phenomenal character of the same type.”

He goes on,

“However, even fans of Nagel and Jackson are likely to reject (K) on the grounds that there are many types of phenomenal characters for which (K) is highly implausible. Suppose that there was some shade of gray or some polygon that Mary had never seen before. Few philosophers are likely to suppose that Mary would be surprised on seeing a 65.5% gray or a 17-sided polygon for the first time. Perhaps, then, the idea behind subjectivity considerations is better put by modifying (K) by replacing “the same” with “a relevantly similar” and replacing “all” with "at least one," resulting in the following:

(K+): For at least one type of phenomenal character, in order to know what it is like to have a conscious experience with a phenomenal character of a type, one must have, at that or some prior time, a conscious experience with a phenomenal character of a relevantly similar type.”

If Mary indeed knows all physical facts, then, in effect, she is running a point-by-point simulation of the physical processes involved in perceiving that phenomenon, by the very fact that she knows the location of every particle involved in the perception of the color red at each successive step.[1]

The act of remembering a phenomenal experience is, according to argument (K+), impossible for someone who has never consciously experienced that particular phenomenon or a relevantly similar one (such as Mary having seen, at some point, a different shade of red from that of the rose), and remembering is therefore intrinsically tied to the state of externally perceiving said phenomenon. Indeed, it is difficult to separate the two. They are highly interconnected, and I propose that when the experience of seeing red is remembered, the actual system being recreated is the neural state of “perceiving” the color. As memories fade, the ability to recreate those past neural states fails, part by part, forming a fuzzier and fuzzier recreation of what the experience was in the first place.
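
To make the proposal concrete, here is a minimal, purely illustrative Python sketch of the idea that recall recreates a stored "neural state" and that fading amounts to an increasingly noisy recreation. The random vector standing in for a neural state and the `fade` parameter are my own hypothetical stand-ins, not anything drawn from the literature.

```python
import numpy as np

# Toy sketch: a remembered "neural state" is a recreation of the original
# pattern, and fading memory corresponds to an increasingly noisy recreation.

rng = np.random.default_rng(0)
original_state = rng.normal(size=100)   # stand-in for the perceiving state

def recall(state, fade):
    """Recreate the state with noise proportional to how much it has faded."""
    return state + fade * rng.normal(size=state.shape)

for fade in (0.0, 0.5, 2.0):
    recreated = recall(original_state, fade)
    similarity = np.corrcoef(original_state, recreated)[0, 1]
    print(f"fade={fade}: similarity to original = {similarity:.2f}")
```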

What is a memory? A memory is a particular neural state: the one characteristic of experiencing a given phenomenon. In fact, brain scans show remarkably similar activity when physically doing an activity and when visualizing that same activity introspectively [1].

Neural signals in neocortical layers are not only feed-forward but also feedback, and some combination of the two results in conscious thought. If the “feedback” part of that system were removed, no conscious thought could take place, since the time it takes a neural signal to reach the top of a cortical column in a single forward sweep is negligible. A thought arises either through introspection or through external stimulation; it is the difference between thinking of a politician because one was thinking of politics and physically seeing that politician.

Importantly, introspection and external stimulation are nearly indistinguishable to the neural network itself: introspection uses the output of one neural structure as the input to another iteration, whereupon more pathways are activated, while external stimulation feeds data into that same neural system, the only difference being that the data happens not to come from some other thought process. A neural architecture that only runs in one direction would be exhausted very rapidly. In experiments with anesthetized animals [2], the brain still responds to stimuli, but the response is purely feed-forward and lacks feedback from higher to lower levels. So while a stimulus was still being registered, consciousness was not achieved because of the missing feedback in the neural structures, pointing to the kind of neural structure that produces consciousness.
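
As a rough illustration of the contrast drawn above, the following Python sketch (with invented weights and layer sizes) compares a single feed-forward sweep with a loop that feeds the “higher” layer’s output back into the “lower” layer. It is a toy model of recurrent processing, not a claim about how the cited experiments were analyzed.

```python
import numpy as np

# Contrast one feed-forward sweep with a loop that mixes higher-level output
# back into the lower level for further processing.

rng = np.random.default_rng(0)
W_low = rng.normal(size=(8, 4))       # "lower" layer weights
W_high = rng.normal(size=(4, 8))      # "higher" layer weights
W_feedback = rng.normal(size=(8, 4))  # feedback projection, high -> low

def feedforward_only(stimulus):
    """One upward sweep: stimulus -> lower layer -> higher layer."""
    low = np.tanh(W_low @ stimulus)
    return np.tanh(W_high @ low)

def with_feedback(stimulus, iterations=5):
    """Repeatedly feed the higher layer's output back into the lower layer."""
    low = np.tanh(W_low @ stimulus)
    high = np.tanh(W_high @ low)
    for _ in range(iterations):
        low = np.tanh(W_low @ stimulus + W_feedback @ high)  # recurrent loop
        high = np.tanh(W_high @ low)
    return high

stimulus = rng.normal(size=4)
print(feedforward_only(stimulus))
print(with_feedback(stimulus))
```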

Mary, in her infinite physical wisdom, would certainly know whatever physical neural state corresponds to “experiencing” the color red. Since the act of remembering a phenomenon is intrinsically connected to (and in practice identical to) experiencing that phenomenon, recreating the neural state of experiencing a phenomenon, as in Mary’s “simulation”, is identical to externally perceiving the color red, and Mary would not gain any new, non-physical information by physically experiencing it.

Finally, red-green color-blind synesthetes [3] were tested to see what colors they perceived in response to certain stimuli. They reported that certain stimuli produced colors they had never experienced in real life, colors so bizarre to them that they called them “Martian colors”. We can guess what those “Martian colors” are: the hues their color blindness had kept them from seeing externally, for which their brains already had the perceptual architecture, but which had never been activated by external stimulation. It is probable that the same thing would happen to Mary, experiencing the color red as the synesthetes experience their “Martian colors”.

The important point is that, despite never actually physically experiencing the phenomenon, the experience was still consciously achieved through different means, rendering (K+)’s assertion that subjective experience is one-way knowable invalid.

[1]

[2] Lamme et al. (1998)

[3] Ramachandran, V.S.; Hubbard, Edward M. (April 14, 2003). “More Common Questions About Synesthesia”. Scientific American.



[1] Before I proceed, let me examine a problem that may arise in carrying out the thought experiment in this universe. Very interestingly, Mary’s “simulation” of perceiving the color red requires her to have explicit knowledge of each involved particle’s exact position at each incremental step of experiencing the phenomenon. While it is physically impossible in this universe to know both a particle’s exact position and momentum, according to Heisenberg’s uncertainty principle, this difficulty can be sidestepped here, as the physical system representing “Mary” can be of infinite size, and can accordingly calculate the quantum positions of all the involved particles within the universe, which is finite in size.

The Spontaneous Convergence of Intelligent Systems into a Super-Inclusive Intelligent System

By Austin Garrido

A biosphere composed of conscious, perceiving nodes mirrors the interactions of those nodes’ own components, not least the components that give rise to consciousness in their highly complex, quantum neural architecture, resulting in an encompassing intelligence composed of every physical object interacting with the nodes.

In the basic design of the class of systems called “neural networks”, summation nodes are connected to one another through weighted “synapses” that dictate how much influence a given signal has on the next neuron. Information is carried in spike trains that encode the source, duration, and kind of stimulus activating the neuron. In the biological brain, this is principally encoded in the timing intervals between spikes, as this is the most compact and concise way to convey all the necessary information with an otherwise binary system of communication. The product of internal processing can and will be expressed externally, providing phenomenal input to other biological brains in other, separate organisms.
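
The summation-and-threshold behavior described above can be sketched in a few lines of Python. The weights and threshold below are invented for illustration, and the binary fire/no-fire output is a simplification of real spike trains.

```python
# A minimal node: inputs arrive over weighted "synapses", are summed, and only
# pass down the "axon" if the total clears a threshold.

def node_output(inputs, weights, threshold=1.0):
    """Weighted summation with a hard threshold, perceptron-style."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0   # fire / don't fire

# Example: three incoming signals with different synaptic weights.
print(node_output([1, 0, 1], [0.6, 0.9, 0.5]))  # 1.1 >= 1.0 -> fires
print(node_output([1, 1, 0], [0.6, 0.3, 0.5]))  # 0.9 <  1.0 -> stays silent
```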

Human language evolved as a way for us to communicate thoughts and concepts to one another. Thoughts and feelings are merely the result of phenomenal stimulation percolating through neuronal interactions; indeed, a mind with no stimulation will be exceedingly entropic.[1]

Human brains are equipped to censor information, choosing not to display or communicate certain thoughts and feelings to other organisms. This mirrors the threshold function that discriminates which information is sent along the axon in biological neural systems. Obviously, with a much more complicated system for conveying information, almost every aspect of a biological neuron is expanded, made more complex, and increasingly subject to entropy. The most important point, however, is that every characteristic of a neural system is mirrored by biological organisms interacting with one another; in essence, any collection of dynamically processing creatures that can share information with one another assumes the form of a neural network, and therefore of intelligence on a massive scale.

Synapses are mirrored by human interactions. In the biological brain, a given neuron is principally connected to nearby neurons, but may also share information with neurons much farther away. Because human beings evolved in tribal societies, they are subject to “Dunbar’s number”, the supposed cognitive limit on the number of individuals with whom any one person can maintain stable social relationships. This number is believed to be about 148, and is estimated to match the size of the largest hunter-gatherer societies. With the advent of the telephone and radio, humans began to enter the final stages of super-inclusive neural simulation. When networks such as the internet gave individuals the ability to convey specific information to other individuals, or to groups of individuals, our biosphere completed the last necessary step in complexity to mirror its component parts and emerge as a conscious system. An individual human being interacts principally with nearby humans with whom he or she has relationships, but may also occasionally share information with somebody a considerable distance away, just as a neuron in a neural network does with its peers, if necessary forming strong relationships with nodes that are physically distant.
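
A toy Python sketch of the connectivity pattern described above: each node links to a Dunbar-sized local neighborhood plus the occasional distant node. The ring layout, node count, and long-range probability are assumptions chosen purely for illustration.

```python
import random

# Nodes (people or neurons) connect mostly to nearest neighbours on a ring,
# with rare long-range ties to distant nodes.

random.seed(1)
N = 500                  # number of nodes arranged on a ring
LOCAL_LINKS = 148        # nearby contacts per node, echoing Dunbar's number
LONG_RANGE_PROB = 0.01   # chance of an extra distant link per node

network = {i: set() for i in range(N)}
for i in range(N):
    for offset in range(1, LOCAL_LINKS // 2 + 1):   # local neighbourhood
        network[i].add((i + offset) % N)
        network[i].add((i - offset) % N)
    if random.random() < LONG_RANGE_PROB:           # rare distant tie
        network[i].add(random.randrange(N))

avg_degree = sum(len(v) for v in network.values()) / N
print(f"average contacts per node: {avg_degree:.1f}")
```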

One difference between a neural network as found in the human brain and a massive universal intelligence is the nature of input versus output. In the human brain, there is some, albeit trivial, level of external input and output. Given a stimulus, the brain stores it as some sort of memory, whether sensory, short-term, or long-term.[2] The stimulus may then be discarded almost entirely, as happens to most of the information that enters our minds, while its neural effects continue percolating introspectively. Effectively, the output of the “top” layer of neurons is fed back into the “bottom”, causing the information to be reprocessed and reclassified.[3] In fact, to a neural structure there is no difference between classifying new information from an external source and processing information that has already been partially classified. It is important to remember that this is the basic nature of a global intelligence: as an intelligent system grows larger and its basic constituent parts grow more complex and constitute a greater percentage of its own environment, the amount of closed-loop thinking increases, and thus the amount of classification increases.

On an imaginary timeline of humanity, with (for the sake of argument) definite points of discovery about the world, one can easily see this closed-loop thinking in effect. Say that point “A” on this simplified timeline is the beginning of humanity. The entire world is one large chunk of information, with no distinguishing subclasses. Then, say, at point “B”, humans discover something edible, the first input of information. News of this eventually passes through all nearby members of the species. Now humans have discovered that the world is divided into at least two parts, “edible” and “non-edible.” This is reprocessed by the global intelligence, and at point “C” man discovers that some edible items taste better and are more nutritious than others. Now man’s world is divided into at least three groups: “good edible”, “bad edible”, and “non-edible.”
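
The closed-loop reclassification in this timeline can be mimicked with a short Python sketch. The items and their properties are invented, and the two passes correspond to points “B” and “C” above.

```python
# Each pass over the same data splits an existing category into finer
# subcategories, with no new external information required.

items = {"rock": {"edible": False},
         "berry": {"edible": True, "tasty": True},
         "bitter root": {"edible": True, "tasty": False}}

# Point "B": first pass divides the world into edible / non-edible.
classes = {name: ("edible" if props["edible"] else "non-edible")
           for name, props in items.items()}
print(classes)

# Point "C": reprocessing the *same* items refines "edible" further.
classes = {name: (cls if cls == "non-edible"
                  else ("good edible" if items[name]["tasty"] else "bad edible"))
           for name, cls in classes.items()}
print(classes)
```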

All of the data needed to classify the world up to the complexity we see today is contained within the system, which includes not only the humans but the very foods and objects they were gathering information about. Unlike neurons, which are not complex enough to examine their own environment, humans possess the differentiation of language necessary to gather information about theirs. Closed-loop processing occurring constantly throughout the existence of the earth’s biosphere has given us all of the technology and infrastructure we have today, through the reclassification and manipulation of old data about the earth.



[1] Some believe that, through complex “emergence”, our own thought processes can be altered by will or by some quantum effect. This is fully within the scope of this paper, as any unknown processes that lead to consciousness would be encompassed in the super-inclusive consciousness, including possible linguistic or quantum effects. (A super-inclusive system must be at least as entropic as its component parts, and in practice usually much more so.)

[2] The appropriate analogue of this in the proposed global intelligence is the development of written language and the keeping of long-term records. They are more permanent extensions of the neural networks of their creators, who in turn had been affected by all previous physical neural stimulation.

[3] It is similar to a cryptographer studying a complex cipher for long enough that a pattern emerges and the cipher is solved. Even though no new information is given, the cryptographer has reprocessed the cipher long enough to classify it into “encoded” and “solved” (and all intermediary stages).

Understanding the Federal Reserve and its Consequences

By Austin Garrido




The Federal Reserve Act of 1913 was passed primarily to prevent banking panics like the Panic of 1907 and to remove the heavy influence of the stock markets and banks on the economy. Originally conceived by representatives of major banks and industry leaders as an almost entirely privately owned institution, the plan was presented to Congress in 1912 by Republican majority leader Nelson Aldrich. House Democrats then proposed an entirely public central bank, and some conservative Democrats proposed a private but decentralized banking system. The Federal Reserve System was eventually settled upon as a hybrid of public and private elements.

The current Federal Reserve System is composed of twelve regional institutions, and remains a hybrid of public and private ownership. The majority shareholders are large private banking institutions, which each receive a 6% annual dividend on their stock. This group of private banks is able to elect the members of the regional boards of directors, and through them five of the twelve members of the Federal Open Market Committee, which is the ultimate authority on regulating the money supply and setting interest rates. While the President of the United States appoints the remaining seven (a voting majority of) members of the Federal Open Market Committee, the Federal Reserve is still considered an independent central bank because its decisions do not have to be ratified by the President or anyone else in the executive or legislative branch of government, it does not receive funding appropriated by Congress, and the terms of the members of the Board of Governors span multiple presidential and congressional terms.[1]

The only accountability for the system began in 1978 with the Federal Banking Agency Audit Act, under which the Government Accountability Office (GAO) was given permission to periodically audit the Federal Reserve, along with the ability to alter its responsibilities by statute. It is important to note, however, that these audits may not include:

1. Transactions for or with a foreign central bank or government, or non-private international financing organization

2. Deliberations, decisions, or actions on monetary policy matters

3. Transactions made under the direction of the Federal Open Market Committee

4. Any part of a discussion or communication among or between members of the Board of Governors and officers and employees of the Federal Reserve System related to items (1), (2), or (3)

With the advent of a centralized banking system, the Federal Reserve could theoretically calm the waters of the economic ocean by tightening and loosening the money supply in response to the economic climate. For such a system to work, complete transparency and close monitoring are necessary. In practice, however, it is far too easy to simply print more money whenever it is perceived as necessary, such as during wartime or recession (in extreme cases leading to hyperinflation, as happened in Germany after World War I).

A standard of monetary trade acquires its exchange value from the labor expended to acquire it. Gold has traditionally been the standard backing for currency, since the gradual reduction in the difficulty of acquiring it roughly mirrors the total expansion of commodities in a society. An ounce of gold today is approximately equivalent in purchasing power to an ounce of gold at any other time in history (when adjusted for use-value, that is, for the natural reduction of the intrinsic value of gold or of any other monetary standard).[2] With the elimination of the gold standard for currency in 1971, the United States completed its transition from a representative money system to a fiat money system. A representative money system uses intrinsically valueless money, such as paper notes, that represents a quantity of something of known value, such as gold, silk, or silver. A fiat monetary system gives value to currency by fiat, that is, by arbitrary statute, without actual physical backing. Fiat money systems have been shown throughout history to be subject to unreasonable inflation, and have subsequently been abandoned. In the early days of the United States, a fiat money system known as “continental currency” was put into practice, but abandoned because of its high inflation rates (leading to the colloquial phrase “not worth a continental”). In response to this failure, the Constitution was written with the clause that

The Congress shall have Power To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures; no State shall make any Thing but gold and silver Coin a Tender in Payment of Debts.

This scenario has played out numerous times throughout history, from fiat systems in early China to the “bills of credit” used in England’s American colonies. Indeed, the United States has gone through several cycles of inflationary fiat money, followed by effectively deflationary “hard” money.

The bulk of the money used by Congress is borrowed from the Federal Reserve, at interest.[3] Taxes and other sources of revenue simply do not fully cover congressional spending. If the resulting debt cannot be paid off with revenue gains (such as taxes), more money must once again be borrowed, still at interest, creating a cycle of debt. Inflation is the natural result, due to the devaluing of the currency, not an increase in the value of commodities.[4]
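
As a back-of-the-envelope illustration of the debt cycle described above, the following Python sketch compounds a yearly shortfall at interest. The revenue, spending, and interest figures are invented and carry no empirical weight.

```python
# Toy arithmetic: spending exceeds revenue, the shortfall is borrowed, and
# interest accrues on the growing balance year after year.

revenue = 100.0        # yearly revenue, e.g. taxes
spending = 110.0       # yearly spending exceeds revenue
interest_rate = 0.05   # interest owed on the borrowed shortfall
debt = 0.0

for year in range(1, 11):
    debt += spending - revenue      # borrow to cover the shortfall
    debt *= 1 + interest_rate       # interest accrues on the whole balance
    print(f"year {year:2d}: outstanding debt = {debt:6.1f}")
```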

Wages and commodity prices are determined by the perceived total amount of currency in the system. Should the total amount of currency suddenly increase, there is a time gap before the market becomes “aware” of the influx and appropriately raises the prices of commodities. In this way, the first recipients of the newly created currency are the beneficiaries: they are able to purchase goods at prices that underestimate the amount of money in the system, in effect underpaying for the commodities purchased.
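
The timing effect described above can be illustrated with a toy calculation. The money-supply figures, and the assumption that prices eventually adjust in exact proportion to the expanded supply, are simplifications for the sake of the example.

```python
# New money enters the system, but prices adjust only after a lag, so early
# recipients buy at prices that still reflect the old money supply.

money_supply = 1000.0
price_level = 1.0          # prices track the perceived money supply
new_money = 100.0          # freshly created currency

# Early recipient spends before the market "notices" the larger supply.
goods_bought_early = new_money / price_level          # 100 units of goods

# Later, prices catch up in proportion to the expanded supply.
price_level *= (money_supply + new_money) / money_supply
goods_bought_late = new_money / price_level           # ~90.9 units of goods

print(goods_bought_early, round(goods_bought_late, 1))
```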

The result of inflation is that, on average, money earned through labor one day will have less value the next. Since the purpose of currency is to be a constant exchange medium facilitating the free trade of commodities, inflation, and the system that creates it, undervalues and exploits the labor of the earner. In this system, the value lost to the earner goes to the creator of the newly made currency: namely, the shareholders of the Federal Reserve.

What is the result of this system? As T. Smeeding wrote,

Americans have the highest income inequality in the rich world and over the past 20–30 years Americans have also experienced the greatest increase in income inequality among rich nations. The more detailed the data we can use to observe this change, the more skewed the change appears to be... the majority of large gains are indeed at the top of the distribution.[5]

The definite trend since the institution of the Federal Reserve has been a greater discrepancy between the upper and lower income classes, most dramatically since the elimination of the gold standard in 1971.[6] When the Federal Reserve System was put into place, the Gini coefficient (a measure of income inequality) began to fluctuate wildly, in a steep upward trend.[7] Heavy government spending in the post-Depression era mitigated this trend, leading to a lower Gini coefficient but a skyrocketing national debt.[8] Once the gold standard was fully eliminated in 1971, the Gini coefficient once again began to climb, this time at a predictably constant rate, as money was steadily channeled away from society and into the pockets of a select few.
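
For readers unfamiliar with the measure, here is a minimal Python sketch of how a Gini coefficient can be computed from a list of incomes. The sample incomes are invented, and this is a standard textbook formula rather than the specific methodology behind the figures cited above.

```python
# Gini coefficient: 0 = perfect equality, values approaching 1 = extreme
# inequality, computed from the rank-weighted sum of sorted incomes.

def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(round(gini([20, 20, 20, 20]), 3))   # perfectly equal -> 0.0
print(round(gini([1, 2, 5, 90]), 3))      # highly unequal  -> about 0.689
```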

The final view is one that does not necessarily point to a solution, but instead highlights a problem. Sixteen years after the Federal Reserve System was created, the Great Depression hit, showing that centralized banking had not fulfilled its primary goal of calming economic debt fluctuations. In fact, not only did the debt fluctuations continue to occur at a nearly identical frequency, they began to fluctuate with much greater intensity.[8] Much as Marx showed the instability of the capitalist system through the underpayment and exploitation of labor, the creation of currency not backed by labor essentially creates value from nothing, and is contrary to the nature of an economy in which wages and prices are still dictated by labor, leading to instability and exploitation. Since bank interest rates almost always outstrip the inflation rate, those with the most money in savings (the upper earning percentiles) do not feel the effects as much as those whose money supply is much more variable (the lower and middle earning percentiles). With technological progress, the total amount of labor needed to create a product tends to decrease over time, which should lead to a decrease in the value of commodities. The fact that prices are instead increasing highlights how contrary the fiat monetary system is to a stable system. If there is to be a stable economic system, the fiat money system, and the current Federal Reserve System that perpetuates it, must be eliminated.
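
As a toy illustration of the savings point above, the following sketch compares the purchasing power of money earning interest slightly above the inflation rate with money earning nothing. The rates and time horizon are invented for illustration.

```python
# Compare real (inflation-adjusted) purchasing power after ten years for money
# earning interest above inflation versus money earning no interest.

inflation = 0.04
savings_rate = 0.05
years = 10

nominal_savings = 100.0 * (1 + savings_rate) ** years   # earns interest
nominal_cash = 100.0                                     # earns nothing

deflator = (1 + inflation) ** years
print(round(nominal_savings / deflator, 1))  # ~110.0: purchasing power grows
print(round(nominal_cash / deflator, 1))     # ~67.6: purchasing power erodes
```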



[1] Board of Governors of the Federal Reserve System. "Frequently Asked Questions: Who Owns The Federal Reserve"

[3] To whom is the Federal Debt Owed?

[5] Smeeding, T. (2005). Public Policy, Economic Inequality, and Poverty: The United States in Comparative Perspective. Social Science Quarterly, 86, 956-983.