* You can't find anything earlier than Klevius on this topic - no matter where you search! It's original research and it fulfills the criterion of fitting in the gaps that existing research has failed to explain. When in 1994 Klevius tried to publish the text in scientific AI magazines, one rejected it as 'too philosophical' for their type of magazine and the other as 'too empirical'! Moreover, wherever Klevius has presented the theory he has always asked the recipients to comment, question or challenge it. No one, except for one of Klevius' sons (who argues that a fruit fly has "consciousness" as well), has done so thus far.
** Just consider how many animals could have been saved from suffering and death by directing research in accordance with Klevius' theory, hence avoiding a lot of unnecessary dead ends.
Despite many extensive physical descriptions of the Thalamus (compare 19th century research), only Peter Klevius has so far produced a theory (1992, 1994 and 2003 on the web) that not only fits this physicality but also makes it self-evident that the Thalamus is the only possible "center of consciousness/awareness" in vertebrates. And although Francis Crick didn't seem to get it when Klevius contacted him in 1994 (he and Koch were too entangled with "brain waves" etc.), the web presentation of the theory (EMAH 2003) got top hits ('consciousness thalamus' and 'consciousness ai') for almost a decade and therefore has to be considered at least as available as any Nature etc. paper.
The Thalamus, according to Klevius, is what paved the way for human intelligence. However, this aspect had not been taken into account at all before Klevius.
Btw, do note that Klevius is a true scientist - not a researcher.
What really laid the foundation for EMAH was Klevius' criticism of Habermas' division of observation and understanding in The Theory of Communicative Action (1984), as well as Karl Popper's view on "primitive societies". Simply put, according to Klevius there's no way one can "observe" anything without some kind of "understanding". As a consequence, what Habermas calls understanding is just retrospective "correction", i.e. new "observation/understanding".
So why is Klevius bragging - or is he?
Answer: In the service of better science. Why/how? Dichotomies created out of pre-established goals have accumulated in science at an accelerating tempo, in line with Klevius' chapter Science and its References (1992:40). Financing, politics and religion are some of the main culprits behind this streamlining. On the other hand, the best (least biased) science now resides outside "established" (compare media) science channels - but with limited access to basic resources, which are withheld, usually behind a paywall, by keeping new findings secret etc. Even though there are a few "established" proponents of a more open research realm (e.g. John Hawks), real openings for the future now lie in the independent blogosphere (compare e.g. Eurogenes). However, even such a forum is equally contaminated with less capable minds as "established" science is. When Klevius made his thoughts permanent with publishing dates, ISBN numbers, correspondence etc. he had a much lower expectation of his own capabilities compared to "established" science, but thought he might have had something new or of some importance to say. However, in retrospect Klevius has become increasingly disappointed with what "established" science has produced in some particular areas of his (and hopefully humankind's) interest. So Klevius isn't bragging - just disappointed with the frequent use of "science blockers".
Peter Klevius first in the world to explain why/how the Thalamus is at the center of your "consciousness", and more importantly, what "consciousness" really is
The text below is mostly very old and poorly edited - have mercy or donate! And do remember that Klevius will answer/explain any question you may pose via comments!
Klevius preface: Fake "research" vs. true science.
* Do note the difference between 'research' and 'science', e.g. by using Klevius' definition and analysis of science. Normal research is the screwing and hammering of pieces together. However, it often turns into more or less fake "research" under a financially or culturally biased professor etc. Science, again, is ideally pure logic, but as pure logic per se has no meaningful existence, it has to be connected to human existence-centrism (see the chapter of the same name in Klevius 1992:21), i.e. a human interface. So to do true science means avoiding, as much as possible, contaminating it with bias. And of course, the goal isn't and could never be to eliminate (human) bias - only to make its human interface as wide as possible. The downside of this, however, is that good scientists and good science will always clash with someone/something. Compare this problem to much of today's democracy, which tries to capture a picture of the will of the people with a 2-pixel democracy camera, resulting in a 1-pixel result, when the self-evident solution would be high-resolution digitalization via deeply profiled (with a variety of possible voter interests taken into account) web voting, giving a direct connection between the will of the people and factual politics. Such a system could also include triggers for existing laws re. particular issues, so as to make it easy for the voter to pinpoint his wishes within the already existing political/legal reality.
Peter Klevius' anti-bias science CV timeline, with examples of how truly scientific analysis* is made:
1 1979-1980 Klevius, in an effort to track the social origins and consequences of early civilizations, first concluded that the traces pointed north and to Central Asia. Klevius then sent a letter to the Finland-Swedish philosopher Georg Henrik von Wright (Wittgenstein's successor at Cambridge) about these thoughts and the new concept of 'extended demand for resources'. The letter can be found in the archive of the now late GHvW (Klevius signed it with his mother's surname, Kotilainen). The answer was very supportive, as GHvW saw the concept and its embeddings as both 'original' and 'of significant importance for our understanding and analyzing of civilizations'.
2 1981 Klevius then published his thoughts for the first time under the title 'Demand for Resources' (still using his mother's surname). The publication was first delayed (possibly due to its heavy intellectuality for such a forum) but was eventually released thanks to GHvW being a friend of Jan Magnus Jansson*, the editor-in-chief of Hufvudstadsbladet, in which the article first appeared - and it was rewarded with Fmk 500, which at that time corresponded to a third of an average monthly net salary.
* Jan Magnus Jansson was Professor of general state science at the University of Helsinki 1954-74 and chancellor of Åbo Akademi University 1985 to 1991. He was the chairman of the (Finland-)Swedish People's Party (SPP) 1966 to 1973 and Minister of Trade and Industry 1 January 1973 - 30 September 1974. Jansson was his Finland-Swedish party's presidential candidate in the elections of 1982.
3 1984 Klevius (still as Kotilainen) published his article 'The Green Dilemma', in which he warned of, on the one hand, the "Pentti Linkola effect"* of what some used to call "eco-fascism", and on the other hand a "green movement" that is "green" in name only (Klevius exemplified this with a family where the well-paid husband travels around the country in a big, expensive and fuel-consuming car selling cheap car products etc. to service stations, shops etc., while his non-working wife drives around in another car meeting her "green" friends in activities of minor, or even opposite, greenish value). Moreover, this was Klevius' first warning of the politicization of the green movement that today has made it a supporter of state socialism and islamofascism.
* Pentti Linkola is a radical Finnish deep ecologist, polemicist and fisherman. He has written widely about his ideas and in Finland is a prominent, and highly controversial, thinker. Linkola was a year-round fisherman from 1959 to 1995. Linkola blames humans for the continuous degradation of the environment. He promotes rapid population decline in order to combat the problems commonly attributed to overpopulation. He is also strongly in favor of deindustrialization and opposes democracy, which he calls the "Religion of Death", believing it to be an agent of wasteful capitalism and consumerism. He considers the proponents of economic growth to be ignorant of the destructive effects which free market policies have had on the biosphere over the past two centuries.
1989 Klevius made a program about human evolution in which he also interviewed Richard Leakey. He also met with some guys from South Africa who knew a lot about "Bushmen". As a consequence, Klevius studied what was known about them in the literature, e.g. Lee's !Kung reports. One simple question stood clear: Why would Africa's oldest population be cold-adapted, i.e. have clear mongoloid traits, even though they were already heavily mixed with non-mongoloid Africans?
1990 Klevius wrote the short but intense (perhaps too intense, according to GHvW) book Demand for Resources (published 1992) as a follow-up to the 1981 article Demand for Resources. It's an analysis of physical and cultural evolution whose main methodology is to look behind prevalent contemporary bias (defined as unproved convictions), such as e.g.:
1 The evolution-out-of-nothing/God proposition, countered by the question: What would "nothing" be? - hence revealing the total meaninglessness of the question: Why are we here? The equally meaningless counter-question would be: Why would we be nowhere?
2 Human evolution to what we are today. The unproved populist "out of Africa" theory seemed spurious in general and did not fit the fact that native Africans such as Khoesan speakers are mongoloids, and that big-brained early Homos (e.g. Jinniushan) roamed northern China (i.e. cold "mongoloid territory") already almost 300,000 years ago. Also note that we lack Neanderthal skulls from the very north, and that Europe is a receiver of the Gulf Stream. So far we don't even know what "Neanderthal" skulls from more northern parts really looked like. Interestingly, Georg Henrik von Wright considered the last chapter, named Khoe, San and Bantu, the least important in the book.
3 The linguistic terms Khoe-San and Bantu in Africa exemplify three main categories of ways of living: hunter-gatherers (called gatherer-hunters in the book), pastoralists, and farmers. These reveal a transition from an almost neutral* demand for resources to an expanded demand for resources (this should not be confused with the fact that hunter-gatherers need much more space). You want what you need, but you don't necessarily need what you want. Expanded demand for resources is the basis for investment, compared to the neutral demand for resources in many pre-civilisatoric societies - a fact that Karl Marx missed but Claude Levi-Strauss sort of touched upon with his division of societies into 'hot' and 'cold', using terms from thermodynamics.
4 The observation/understanding dichotomy seems to be a repetition of the ghost/machine dichotomy, trapped as it is in its own "Homunculus spirality". So instead Klevius proposed an intellectual "digitalization" of the brain, i.e. a "relief" from the old view of "categories". One such "category" is 'language', although there's no definition of it that matches its use as particularly human. Not that 'language' isn't a useful concept in our everyday "language game", but rather that 'language' cannot be distinguished from other activities in a meaningful way when it comes to understanding the mind.
2003/4 Klevius for the first time presented the view of a better packed brain as the reason for the sudden jump in human sophistication. However, this was clearly already hinted at in Klevius' book Demand for Resources (1992, ISBN 9173288411), where the question was posed why big-skulled Homos some 300,000 bp didn't manage to do what we've done.
2004 the discovery of Homo floresiensis in SE Asia was presented, and Klevius immediately connected it with the obvious possibility of similar evolutionary island dwarfing even north of the Wallace line, which could later branch towards the big-skulled northerners. The discoveries from 2010 on of a 50,000 bp sewing needle, a more than 40,000 bp sophisticated stone bracelet, and the DNA evidence of at least three Homo species mixing/hybridizing in the Altai region/southern Siberia made Klevius' theory even more plausible. At the same time conventional theorists were "confused" and "puzzled" because their theories failed to fit the new landscape.
The Viking age started in the east ca. 750, i.e. many decades before going west. Also consider that Gotland used to be part of Kvenland/Finnland.
Origin of the Vikings
2005/6 Klevius realized the equally obvious answer to the question of why Swedish Vikings would first have gone north and east into Finnish territories instead of south and west, where everyone spoke the same language. The answer was of course that they weren't Swedes but Finland-Swedes, i.e. Finns who had become bilingual on the borderline between Indo-European and Uralic.
All of these "insights" were simple - as long as you just question prevailing bias and possess enough intellectual power (knowledge) as well as processing power (intelligence). When people get annoyed by Klevius' fast and broad thinking (which he himself can't avoid without acting more stupid than he is), they can be assured that they always have time on their side in case they are right and Klevius wrong.
Klevius' analysis of bias in sex segregation/apartheid (with numerous writings, debates, radio/TV programs, film etc.) has gone on since his teenage years, as part of his view that the ideology of (negative) Human Rights, as stated in the 1948 Universal Human Rights Declaration, is the only logical way to go that can't be opposed without being racist/sexist: How can males and females "have the same sexuality" if testosterone is the "sex hormone" and females have 10-15 times less of it? Moreover, if reproduction happens in females, then males have to be attracted to females - not necessarily the other way round.
EMAH, the theory of mind that makes our brain less human and ourselves more human
The text below, based on Peter Klevius' book Demand for Resources (1992, in Swedish) and presented to Francis Crick (1994-5), was made globally accessible online in 2004. In today's communicative environment, and with some additional findings, Klevius would perhaps have honed it slightly differently, although not altering the basis of the theory at all. However, here it is in its original form (main text from 1992 and 1994-5, plus the 2004 web introduction on www.klevius.info*).
* Do note that www.klevius.info is an experimental web museum created in 2003, with no changes for more than a decade. Keep this in mind when reading it.
EMAH text from Klevius web museum:
EMAH (the Even More Astonishing Hypothesis**)
Continuous integration in the Thalamus of complex neural patterns, without the assistance of a Homunculus, constitutes the basis for memory and "consciousness"
(*AI = artificial intelligence)
(** The EMAH title, applied in 1994, alludes to Francis Crick's book The Astonishing Hypothesis)
by Peter Klevius (1992-94, and 2004)
These links were on the original 2004 web page
*compare Francis Crick's The Astonishing Hypothesis
Translation from Resursbegär (Demand for Resources, 1992, pp. 32-33).
A critique of Habermas' dichotomy observing/understanding in The Theory of Communicative Action (1981):
1 Observing a stone = perception understood by the viewer
2 I observe a stone = the word 'stone' (uttered, written etc.), i.e. intelligible to another person
Although I assume that Habermas would consider the latter example communication because of an allusion (via the language) to the former, I would argue that this "extension" of the meaning of the utterance cannot be demonstrated to be essentially different from the original observation/understanding. Consequently, there exists no "abstract" meaning of symbols, which of course eliminates the symbol itself. The print color/waves (sound or light etc.) of the word "stone" do not differ from the corresponding ones of a real or a fake (e.g. papier mâché) stone.
The observation/understanding dichotomy hence cannot be upheld because there is no theoretically defensible difference. What is usually expressed in language games as understanding is a historical - and often hierarchical - aspect of a particular phenomenon/association. Thus it is not surprising that Karl Popper and John C. Eccles tend to use culture-evolutionary interpretations to make pre-civilized human cultures fit into Popper's World 1 to World 3 system of intellectual transition.
"Subliminal" selection of what we want to interpret as meaningful
The ever-present subsidiary awareness that lies behind the naive concept of "subliminal perceptions" is no more mystifying than the fact that we can walk and play musical instruments without paying direct awareness/attention to it.
Representations and properties
Representations are dependent on properties but if there are no properties (and there is certainly a philosophical lack of any such evidence although the concept is still popular in many camps) then there are no representations either. What should be represented (see above and below)?
The lost ghost in the machine and the psychoanalytic chameleon Mr. Nobody
There has been an ongoing development within biology, genetics, AI research and robot technology which narrows our view on not only the difference between animals and humans, but also the gap between what is considered living and dead matter. Not only free will, but also properties and representations/symbols are becoming ever more complicated and vanishing, as their subjective meaning seems less usable in a newly emerging understanding of our environmental positioning. Although the psychoanalytic movement seems ready to confirm/adapt to this development as fast as Freud himself changed his ideas to fit new scientific discoveries (it was a pity he didn't get a chance to hear about Francis Crick), psychoanalysis is forever locked out from this reality. PA is doomed to hang on the back of development, just like feminism and middle-class politics, without any clue about the direction (neither on the individual nor on the collective/cultural level).
Psychoanalysis has survived just because of its weakest (in fact, absent) link, namely the lack of a border between folk psychology and itself. The diagnosis for psychoanalysis would consequently be borderline.
Sigmund's dream of a biological psychoanalysis was his biggest mistake.
The entire EMAH hypothesis (1994) as it emerged after the above criticism of Habermas and some new research about cortex-thalamus connections.
1991 presented to Georg Henrik von Wright, 1994 presented to Francis Crick, and 2004 presented on the world wide web*.
* this text used to be on Yahoo's GeoCities, which has since been terminated - by Yahoo
Understanding how social behavior and its maintenance evolved in human and other forms of life (incl. plants etc.) has nothing to do with "the balance between self-interest and co-operative behavior" but everything to do with kinship and friendship. Although humans may be attributed a more chaotic (i.e. more incalculable) "personality", they are, like life in general, just robots (i.e. active fighters against entropy - see Demand for Resources - on the right to be poor). Misunderstanding (or plain ignorance, or alternatively ideological avoidance) of kinship (kin recognition), friendship (symbiosis), and AI (robotics) paves the way for the formulation of unnecessary, not to say construed, problems which, by extension, may become problematic themselves precisely because they hinder open access to direct problem solving (see e.g. Angels of Antichrist - kinship vs. social state).
The Future of a "Gap" (copyright P. Klevius 1992-2004)
Human: What is a human being? Can the answer be found in a non-rational a priori statement (compare e.g. the axiomatic Human Rights individual) or in a logical analysis of the "gap" between human beings and others? The following analysis uses an "anti-gap" approach. It also rests on the struggle and success of research performed in the field of artificial intelligence (AI), robotics etc.
Signal: A "signal gap" is commonly understood as a break in the transition from input to output, i.e., from perception to behavior. Mentalists tend to fill the gap with "mind", while behaviorists don't bother because they can't even see it.
Matter: Berkeley never believed in matter. What you experience is what you get and the rest is in the hands of "God" (i.e. uncertainty). This view makes him a super-determinist without "real" matter.
Mind: The confusing mind-body debate originates in the Cartesian dualism, which divides the world into two different substances that, when put together, are assumed to make the world intelligible. However, on the contrary, they seem to have created a new problem based on this very assumption.
Free will: Following a mind-body world view, many scholars prefer to regard human beings as intentional animals fueled by free will. It is, however, a challenging task to defend such a philosophical standpoint. Not even Martin Luther managed to do it, but rather transferred free will to God despite loud protests from Erasmus and other humanists. Although Luther's thoughts in other respects have had a tremendous influence on Western thinking, this particular angle of view has been less emphasized.
Future: When asked about the "really human" way of thinking, many mentalists refer to our capacity to "calculate" the future. But is there really a future out there? All concepts of the future seem trapped in the past. We cannot actually talk about a certain date in the future as real future. What we do talk about is, for example, just a date in an almanac. Although it is a good guess that we are going to die, the basis for this reasoning always lies in the past. The present hence is the impenetrable mirror between the "real future" and ourselves. Consequently, our every effort to approach this future brings us back into history. We seem closest to the future when we live intensely in the immediate present without even thinking about the future. As a consequence, the gap between sophisticated human planning and "instinctual" animal behavior seems less obvious. Is primitive thinking that primitive after all?
An additional aspect of the future is that neither youth, deep freezing nor a pill against aging will do as insurance for surviving tomorrow.
Observation and Understanding (copyright P. Klevius 1992-2004)
If one cannot observe something without understanding it, all our experiences are illusions because of the eternal string of corrections made by later experiences. What seems to be true at a particular moment may turn out to be something else the next, and what we call understanding hence is merely a result of retrospection. The conventional way of grasping the connection between sensory input and behavioral output can be described as observation, i.e. as sensory stimulation followed by understanding. The understanding that it is a stone, for example, follows the observing of a stone. This understanding might in turn produce behavior such as verbal information. To do these simple tasks, however, the observer has to be equipped with some kind of "knowledge," i.e., shared experience that makes him/her culturally competent to "understand" and communicate. This understanding includes the cultural heritage embedded in the very concept of a stone.
Categorization belongs to the language department, which, on the brain level, is only one among many other behavioral reactions. But due to its capability to paraphrase itself, it has the power to confuse our view on how we synchronize our stock of experience. When we look at a stone, our understanding synchronizes with the accumulated inputs associated with the concept of a stone. "It must be a stone out there because it looks like a stone," we think. As a result of such synchronization, our brain intends to continue on the same path and perhaps do something more (with "intention"). For example, we might think, "Let's tell someone about it." The logical behavior that follows can be an expression such as, "Hey look, it's a stone out there." Thus, what we get in the end is a concept of a stone and, after a closer look, our pattern of experience hidden in it. If the stone, when touched, turns out to be made of papier mâché, then the previous perception is not deepened but instead switched to a completely new one.
One might say that a stone in a picture is a real stone, while the word "stone" written on a piece of paper is not. The gap here is not due to different representations but rather to different contexts. When one tries to equalize observation with understanding, the conventional view of primitive and sophisticated thinking might be put in question. We act like no more than complex worms, and the rest, such as sophistication, is only a matter of biased views built on different stocks of experience. But a worm, just like a computer, is more than the sum of its parts.
Therefore, meaning, explanation and understanding are all descriptions of the same basic principle of how we synchronize perceptions with previous experiences. For the fetus or the newborn child, the inexperienced (unsynchronized, or uncertainty/"god" if you prefer) part of the inside-outside communication is considerably large. Hence the chaotic outside world (i.e., the lack of its patterns of meaningfulness) has to be copied in a stream of experiences, little by little, into the network couplings of the brain. When the neural pattern matches the totality (meaningfulness), its information potential disappears. On top of this, there is in the fetus a continuous growth of new neurons, which have to be connected to the network. As a result of these processes, the outside world is, at least partly, synchronized with the inside, mental world. Heureka, the baby finally begins to think and exist! In other words, the baby records changes against a background of synchronized inputs.
* see "existence centrism" in Demand for Resources for a discussion about a shrinking god and the almighty human!
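The synchronization idea above - a pattern that already matches stored experience carries no remaining information potential, so only mismatches are recorded as change - can be sketched as a toy novelty filter. This is purely an illustrative analogy of mine, not a formalization from the original text; the function names and the set-based "experience" are assumptions.

```python
# Toy sketch of "synchronization": only the unsynchronized part of an
# incoming pattern carries information potential; once copied into the
# "network couplings" (here, a plain set) it is no longer recorded as change.

def information_potential(pattern, experience):
    """Return the parts of a pattern not yet synchronized with experience."""
    return set(pattern) - experience

def perceive(pattern, experience):
    """Record only the novel part of a pattern, then absorb it."""
    novel = information_potential(pattern, experience)
    experience |= novel   # copy the novelty into accumulated experience
    return novel          # what was actually recorded as change

experience = set()        # the newborn starts largely unsynchronized
first = perceive({"edge", "light"}, experience)   # everything is novel
second = perceive({"edge", "light"}, experience)  # fully synchronized: nothing
```

On this reading, the same stimulus yields a large recorded change at first exposure and none at repetition, mirroring the claim that the baby "records changes against a background of synchronized inputs."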
The Category of the Uniquely Human (copyright P. Klevius 1992-2004)
A main difficulty in formulating the concept of consciousness is our pride (presumably we should have been equally proud as mice) and our strong belief in "something uniquely human." However, if we try to follow the die-hard determinists, we would probably find free will and destiny easier to cope with, and also that the concept of "the unique human being" is rather a question of point of view. Following this line of thought, I suggest turning to old Berkeley as well as to Ryle, but excluding Skinnerian Utopias. Those who think the word determinism sounds rude and blunt can try to adorn it with complexity to make it look more chaotic. Chaos here means something you cannot overview, no matter how deterministic it might be. We seem to like complexity just because we cannot follow the underlying determinism. Maybe the same is to be said of what it really is to be a human? A passion for uncertainty, i.e. life itself. Francis Crick in The Astonishing Hypothesis: "... your sense of personal identity and free will are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules."
This statement is easy to agree with, so let me continue with another, perhaps more useful, quote from Crick: "Categories are not given to us as absolutes. They are human inventions." I think these two statements create an efficient basis for further investigations into the mystery of thinking. Hopefully you will forgive me now, as I'm going to try to abolish not only memory but also free will and consciousness altogether. Then I will go even one step further and deny that there are any thoughts (pictures, representations, etc.) at all in the cortex. At this point many might agree, particularly regarding the cortex of the author of this text.
The main problem here is the storage of memories, with all their colors, smells, feelings and sounds. Crick suggests dividing memory into three parts: episodic, categorical and procedural. While that would be semantically useful, I'm afraid it would act more like an obstacle in the investigation of the brain, because it presupposes that the hardware uses the same basis of classification and, like a virus, hence infects most of our analyses.
Nerves, Loops and "Meet-puts" (copyright P. Klevius 1992-2004)
According to Crick, "each thalamic area also receives massive connections from the cortical areas to which it sends information. The exact purpose of these back connections is not yet known." In the following paragraphs, I will outline a hypothetical model in line with this question. The interpretation of the interface between the brain and its surroundings as presented here has the same starting point as Crick's theory, but divides thinking into a relay/network system in the cortex and the perception terminals (or their representatives in the thalamus) around the body, like an eternal kaleidoscope. Under this model, imagination would be a back-projected pattern of nerve signals, equal to the original event that caused them but with the signals faded. This view suggests that there are not only inputs and outputs but also "meet-puts," i.e., when an input signal goes through and evolves into other signals in the cortex, these new signals meet other input signals in the thalamus.
There is no limit to the possible number of patterns in such a system, and there is no need for memory storage but rather for network couplings. These "couplings," or signals, are constantly running in loops (not all simultaneously, but some at any given moment) from the nerve endings in our bodies through the network in the cortex and back again to the thalamus. Of course the back-projected signals have to be discriminated from incoming signals, thereby avoiding confusion regarding fantasy and reality. But this process, though still unknown, could be quite simple and perhaps achieved simply by the direction the signal comes from. As a consequence of the loops, the back-projected pattern differs from the incoming signals, or the stimuli. Therefore, every signal from the body (perceptions, hormonal signals and so on) either finds its familiar old routes or patterns of association in the network (established experiences) or creates new connections (new experiences) that can be of varying durability. For example, if someone is blind from the moment of birth, he or she will have normal neuronal activity in the cortex area of vision. On the other hand, in the case of acquired blindness, the level of activity in the same area will become significantly lower over time. This is logical according to the EMAH model because, in the former case, the neurons never became involved in association patterns of vision but were engaged in other tasks. In the latter case, the neurons have partly remained in previous vision patterns, which are no longer in use, while the rest have moved on to other new tasks.
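The loop described above can be rendered as a minimal sketch: an input passes through a cortical transformation and returns, faded, to meet fresh inputs in a thalamus-like relay, with the direction of each signal kept explicit so echo and input are never confused. All names here (FADE, cortex, thalamus_meet) and the dictionary encoding of signals are my illustrative assumptions, not terminology fixed by EMAH.

```python
# Minimal sketch of the EMAH loop: input -> cortex -> faded back-projection
# -> "meet-put" with the next input in a thalamus-like relay.

FADE = 0.5   # back-projected signals return weaker than the originals

def cortex(signal):
    # Stand-in for the cortical network couplings: a fixed transformation
    # that fades the pattern on its way back.
    return {channel: strength * FADE for channel, strength in signal.items()}

def thalamus_meet(incoming, back_projected):
    """'Meet-put': combine a fresh input with the faded echo of earlier ones.
    Direction is kept explicit, so fantasy (echo) and reality (input)
    are discriminated by where the signal comes from."""
    channels = incoming.keys() | back_projected.keys()
    return {
        "input": incoming,          # tagged by direction: from the body
        "echo": back_projected,     # tagged by direction: from the cortex
        "combined": {c: incoming.get(c, 0.0) + back_projected.get(c, 0.0)
                     for c in channels},
    }

stone = {"visual": 1.0}
echo = cortex(stone)                       # the loop returns a faded pattern
meet = thalamus_meet({"touch": 1.0}, echo) # new input meets the old echo
```

Note the design choice: nothing is "stored" as a memory item; the faded echo simply re-enters the relay and shapes what the next input meets, which is the sense in which the model replaces memory storage with network couplings.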
It is important to note that human thinking, contrary to what today's computers do, involves the perceptions that originate from the chemical processes in the body's hormonal system, what we carelessly name "emotions." This, I think, is the main source behind the term "human behavior." The difference between man and machine is a source of concern but, as I see it, there is no point in making a "human machine." But perhaps someone might be interested in building a "human-like machine".
Body vs. Environment - a History of Illusions (copyright P. Klevius 1992-2004)
According to the EMAH model, our nerves define our body. This view does not exactly resemble our conventional view of the human body. Thus the hormonal signals inside our body, for example, can be viewed, at least partially, as belonging to the environment surrounding the EMAH-body. The meaning of life is to uphold complexity by guarding the borders, and it is ultimately a fight against entropy. In this struggle, life is supported by a certain genetic structure and a metabolism, which synchronizes its dealings with the surrounding environment. Balancing and neutralizing these dealings is a job done by the nerves.
A major and crucial feature of this "body-guarding" mechanism is knowledge of the difference in direction between incoming signals and outgoing, processed signals. On top of this, both domains change continuously and thus have to be matched against each other to uphold or even improve complexity. According to this model, people suffering from schizophrenia, just like healthy people, have no problem discriminating between inputs and outputs. In fact, we can safely assume that the way they sometimes experience hallucinations is just like the way we experience nightmares. Both hallucinations and nightmares seem so frightening because they are perceived as incoming signals and thus confused with real perceptions. The problem for the schizophrenic lies in a processing defect caused by abnormal functions in and among the receptors on the neurons, which makes the association pattern unstable and "creative" in a way completely different from controlled fantasies. In the case of nightmares, the confusion is related to low and fluctuating energy levels during sleep. A frightful hallucination is always real because it is based on perceptions. What makes it an illusion is that it is later viewed historically from a new point of view, or experienced in a new "now," i.e., weighed and recorded as illusory from a standpoint that differs from the original one. In conclusion, one can argue that what really differentiates a frightful ghost from a harmless fantasy is that we know the latter is created inside our body, whereas we feel unsure about the former.
EMAH Computing as Matched Changes (copyright P. Klevius 1992-2004)
EMAH does not support the idea that information is conveyed over distances, in both the peripheral and central nervous systems, by the times of occurrence of action potentials.
"All we are hypothesizing is that the activity in V1 does not directly enter awareness. What does enter awareness, we believe, is some form of the neural activity in certain higher visual areas, since they do project directly to prefrontal areas. This seems well established for cortical areas in the fifth tier of the visual hierarchy, such as MT and V4." (Crick & Koch, 1995a,b). In a computer, hardware is, together with the program, specified at the outset. In the brain, by contrast, a high level of flexibility is made possible by the hardware's ability to unceasingly adapt itself to incoming signals. This is partly what differentiates human beings from machines. The remaining differentiating factors include our perceptions of body chemistry, such as hormones. Programming a computer equipped with flexible hardware, i.e., making it function like neurons, will, according to the EMAH model, make the machine resemble the development of a fetus or infant to a certain extent. The development of such a machine depends on the type of its input terminals.
All input signals in the human, including emotional ones, involve a feedback process that matches the incoming signals from the environment with a changing copy of it in the form of representations in the brain's network couplings. Life starts with a basic set of neurons, the connections of which grow as experiences come flooding in. This complex body of neuronal connections can be divided into permanent couplings, the sum of experiences that constitutes your "personality," and temporary couplings, the short-term "memories" of everyday use.
A certain relay connection, if activated, results in a back-projected signal toward every receptor originally involved and thus creates, in collaboration with millions of other signals, a "collage" that we often call awareness. This is a constant flow and is in fact what we refer to as the mysterious consciousness. At this stage, it is important to note that every thought, fantasy or association is a mix of different kinds of signals. You cannot, for example, think about a color alone, because it is always "in" or "on" something else (on a surface or embedded in some kind of substance) and connected by relay couplings to other perceptions or hormonal systems. "Meaning" is thus derived from a complex mix of the loops between perceptions and back-projected perceptions. This can be compared to a video camera system with a receiving screen and a back-projecting screen. The light meter is the "personality" and the aperture control is the motor system. However, this system lacks the complex network found in the cortex and thus has no way to "remember." The recorded signal is of course not equivalent to the brain's network couplings because it is fixed. To save "bytes," our brains actually tend to "forget" what has been synchronized rather than remember it. Such changes in the brain, not memories, are what build up our awareness. This process is in fact a common technique in transmitting compressed data.
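The compression technique alluded to above is delta encoding: instead of retransmitting a whole signal, only the positions where it differs from the previous state are sent. This minimal sketch (illustrative only; the function names are mine, not Klevius') shows the principle on two "frames" of values.

```python
# Delta encoding sketch: transmit only what changed between two frames.

def delta_encode(prev, curr):
    """Return (index, value) pairs where curr differs from prev."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]

def delta_apply(prev, delta):
    """Reconstruct the current frame from the previous one plus the delta."""
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame

prev = [0, 0, 5, 0]
curr = [0, 7, 5, 0]
delta = delta_encode(prev, curr)   # only the single changed position is kept
assert delta_apply(prev, delta) == curr
```

What has already been "synchronized" between sender and receiver need not be carried again; only the differences travel, which loosely parallels the essay's claim that the brain keeps changes rather than full records.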
Short-Term Memories and Dreams (copyright P. Klevius 1992-2004)
At any given moment, incoming signals, or perceptions, have to be understood through fitting and dissolving in the net of associations. If there are new, incomprehensible signals, they become linked (coupled) to the existing net and localized in the present pattern of associations. Whether their couplings finally vanish or stay depends on how they fit into the previous pattern and/or what happens next.
As a consequence of this coupling process, memories in a conventional, semantic meaning do not exist, because everything happens now. Consciousness or awareness is not something one can influence, but rather an ongoing flow of information to and from nerve endings through the brain (a relay station). For every given moment (now), there is consequently only one possible way of acting. One cannot escape awareness or decisions, because whatever one thinks is based on the past and will rule the future. Memories are thus similar to fantasies of the future, based on and created by experiences. Regarding short-term memory, I agree with Crick's view and hypothesis. But I certainly would not call it memory, only weaker or vanishing couplings between neurons. Remember that in this model, the mental image of something or someone seen long ago always has to be projected back onto the ports where it came through, thus enabling the appropriate association pattern. Although the signals in each individual nerve are all equal, the back-projected pattern makes sense only as a combination of signals. The relay couplings in the cortex are the "code," and the receptor system is the "screen." Because this system does not allow any "escape" from the ever-changing "now," which determines all dealings with the surrounding environment, living creatures are forced to develop their software by living.
Dreams are, according to this model, remains of short-term memories from the previous day(s), connected and mixed with relevant association patterns but excluding a major part of the finer association structures. This is why dreams differ from conscious thinking. The lack of finer association structures is due to low or irregular activity levels in the brain during sleep. The results are "confused thoughts," quite similar to those of demented people, whose finer neural structures are damaged by tissue death due to a lack of appropriate blood flow. Dreams are thus relevantly structured, but in no way the secret messages psychoanalysts take them for; patients with dementia, on the other hand, tend to go back to their childhood due to the irreversible nature of the physical degeneration process. Investigating dreams and their meanings by interpreting them is essentially the same as labeling them as psychological (in a psychoanalytical sense). A better and less biased result would emerge if the researcher actually lived with the subject the day before the dream occurred. Rather than analyzing pale and almost vanished childhood experiences from a view trapped in theoretical prejudices that describe an uncertain future, the researcher should perhaps put more effort into the logic of the present.
Donald Duck and a Stone in the Holy Land of Language (copyright P. Klevius 1992-2004)
Wittgenstein: "Sie ist kein Etwas, aber auch nicht ein Nichts!" (Phil. Untersuch. 304). Also see P. Klevius' analysis of a stone (in Demand for Resources - on the right to be poor, 1992).
Although Wittgenstein describes language as a tool, it seems more appropriate to classify it as human behavior. Unlike tools, language is a set (family) of a certain kind of bodily reactions (internal and/or directed toward the environment). We have to reject not only "the grammar which tries to force itself on us," but also, and perhaps even more so, the representations we, without any particular reason, assign to language.
Language is basically vocal, but apart from that little has been said about its real boundaries. One could actually argue that the best definition is perhaps that language is a human territory. The question of whether animals have language then becomes meaningless. On the other hand, Wittgenstein denied the existence of a "private language," because applying it could never prove the validity of its products. We are trapped in the words and connotations of language, although these categories themselves, like language in general, are completely arbitrary "language games," as Wittgenstein would have put it. (No offense, Mr Chomsky and others, but this is the tough reality for those trying to make sense of it in the effort of constructing intelligent, talking computers.) Furthermore, these categories change over time and within different contexts with overlapping borders.
Changing language games provide endless possibilities for creating new "language products," such as psychodynamic psychology. I believe this is exactly what Wittgenstein had in mind when he found Freud interesting as a player of such games but with nothing to say about the scientific roots of mental phenomena. Let's imagine Donald Duck and a picture of a stone. Like many psychological terms, Donald Duck is very real in his symbolized form but nonetheless without any direct connection to the reality he symbolizes. In this sense, even the word "stone" has no connection to reality for those who don't speak English. Words and languages are shared experiences.
It is said that a crucial feature of language is its ability to express past and future time. This might be true, but it in no way makes language solely human. When bees arrive at their hive, they are able, in symbolic form, to express what they have seen in the past so that other bees will "understand" what to do in the future. Calling this an instinct just because bees have such an uncomplicated brain does not justify a classification different from that of human thinking. If, as I proposed in Demand for Resources (1992), we stop dividing our interactions with the surrounding world into observation and understanding (because there is no way of separating them), we will find it easier to compare different human societies. By categorization, language is an extension of perception/experience patterns and marks us as human only in the sense that we have different experiences. Words are just like everything else that hits our receptors. There is no principal difference between thinking through words and thinking through sounds, smells (although these do not pass through the thalamus), pictures or other "categories." Ultimately, language is, like other types of communication with the surrounding world, just a form of resistance against entropy.
To define it more narrowly, language is also the room where psychoanalysis is supposed to live and work. A stone does not belong to language, but the word "stone" does. What is the difference? How does the word differ from the symbolic expression of a "real" stone in front of you? Or, to put it the other way round: what precisely makes it a stone? Nothing, except the symbolic value derived from the word "stone." The term "observation" thus implies an underlying "private language." When Turing mixed up his collapsing bridges with math, he was corrected by Wittgenstein, just as Freud was corrected when he tried to build psychological courses of events on a basis of natural science. Wittgenstein's "no" to Turing at the famous lecture at Cambridge hit home the difference between games and reality.
Archetypes and grammar as evolutionary tracks imprinted in our genes are a favorite theme among certain scholars. But what about other skills? Can there also be some hidden imprints that make driving or playing computer games possible? And what about ice hockey, football, chess, talk shows, chats and so on? The list can go on forever. Again, there is no distinguishing border between evolutionary "imprints" and other stimulus/response features of ordinary life.
"Primitive" vs. "Sophisticated" Thinking (copyright P. Klevius 1992-2004)
The more synchronized (informed) something or someone is with its surrounding reality, the less dynamics/interest this something or someone invests in its relationship with that particular reality. Interest causes investment, and social entropy excludes an investment economy, because economy is always at war with entropy. The key to economic success is luck, and thus includes a lack of knowledge. No matter how well a business idea is outlined and performed, its success or lack of success is ultimately unforeseeable. In Demand for Resources I discussed the possibility of some serious prejudice hidden in Karl Popper's top achievement of civilization, namely "World 3," and in his and Eccles' assumption of an increasing level of sophistication from the primitive to the modern stage of development. It is of course easy to be impressed by the sophistication of the artificial, technical environment constructed by man, including language and literature, etc. But there is nonetheless a striking lack of evidence in support of a higher degree of complexity in civilized human thinking than in that of, e.g., Australian Aboriginals, say 25,000 years ago. Needless to say, many hunting-gathering societies have been affluent in the sense that they have had food, shelter and enough time to enrich World 3, but in reality they have failed to do so.
Even on the level of physical anthropology, human evolution gives no good, single answer to our originality. What is "uniquely human" has rested on a "gap," which is now closed, according to Richard Leakey and Roger Lewin, among others. This gap is presumably the same as the one between sensory input and behavioral output mentioned above. From an anthropological point of view, it can be said that a computer lacks genetic kinship, which is a rule without exception in the animate world, although we in the West seem to have underestimated its real power.
De-constructing the Mind (copyright P. Klevius 1992-2004)
A deconstruction of our underlying concepts of the brain can easily end up in serious trouble due to the problem of language manipulation. Wittgenstein would probably have suggested we leave it as it is. If language is a way of manipulating a certain area - language - then the confusion becomes even greater if we try to manipulate the manipulation! But why not try to find out how suitable "the inner environment" is for deconstruction? After all, this environment presupposes some kind of biology, at least at the borderline between the outside and the inside world. Are not behavioral reactions, as well as intra-bodily causes (e.g., hormones), highly dependent on presumed biological "starting points"? How do skin color or sex hormones affect our thinking? Where do causes and reactions start, and isn't even the question itself a kind of explanation and understanding?
Determinists usually do not recognize the point of free will, although they admit the possible existence of freedom. Why? Obviously this needs some Wittgensteinian cleaning of the language. Unfortunately I am not prepared for that task, so let's pick up only the best-looking parts: that words such as freedom, will and mind are semantic inventions, and that they have no connection to anything else (i.e., matter) unless proved by convincing and understandable evidence. Does this sound familiar, and maybe even boring? Here comes the gap again. Seeing stimulus and response purely as a reflex is not always correct, says G. H. von Wright, because sometimes there may be a particular reason causing an action. According to von Wright, an acoustic sensation, for example, is mental and semantic and thus out of reach of the scientific understanding of body-mind interaction. Is this the view of a diplomatic gentleman eating the cake and wanting to keep it too? To me, it is a deterministic indeterminist's view.
G. H. von Wright concludes that what we experience in our brain is the meaning of its behavioral effects. In concluding that it is rather a question of two different ways of narrowing one's view of living beings, von Wright seems to confine himself to Spinoza's view. Is meaning meaningful, or is it perhaps only the interpreter's random projection of himself or herself? Is it, in other words, based only on the existence of the word "meaning"?
Aristotle divided the world primarily into matter and definable reality (psyche). Like many other Greek philosophers, Aristotle was an individualist and would have fitted quite well into the Western discourse of today. Berkeley, who was a full-blooded determinist, however, recognized the sameness of mind and matter and handed both over to "God." Consequently, Philonous' perceived sensations in the mind were not directly aligned with Hylas's view of immediate perceptions. We thus end up with Berkeley as a spiritual die-hard determinist challenging materialistic humanism.
In conclusion, one might propose a rethinking of the conventional hierarchy of the brain. What we usually call "higher levels," perhaps because they are more pronounced in humans, are in fact only huge "neural mirrors" for the real genius, the thalamus (with its capacity for two-way communication with extensions in the cerebellum, spine, nerve endings, etc.), i.e., what has sometimes been interpreted as part of the "primitive" system. In other words, one may propose a view describing the "gap" between humans and animals as a quantitative difference in the amount/power of cerebral "mirroring" and communication with the thalamus, rather than as a distinct qualitative feature. Nothing, except our "emotions," seems to hinder us from making a "human machine." And because these very "emotions" are lived experiences (there is, for example, no way to scientifically establish what could be considered "emotions" in a fetus), nothing, except the meaninglessness of the project itself, could hinder us from allowing a machine to "live" a "human life."
So what about human rights for a computer (Honda's Asimo robot) loaded with all possible human "emotions"? Is Asimo human, or Klevius inhuman? Is death what ultimately unites humans? And what about a hypothetical memory card containing a lifetime of experience? Or a fetus with hardly any experience at all?
Klevius comment: A thoroughly honest approach towards others, combined with negative human rights, seems to be the only acceptable framework for being really human. This approach hence excludes segregation as well as "monotheist"* religions (but see Klevius' definition of religion).