
Wednesday, February 26, 2020

In 1991-1994 Peter Klevius wrote a book and told his friends and Francis Crick that there's no essential difference between a brick and them. Now, some 30 years later, physicists imply the same about sand dunes.


Peter Klevius' brain tutorial: The only "consciousness" there is - is the word 'consciousness'.


"They're definitely communicating," physicist Nathalie Vriend from Cambridge University said in an interview with The Washington Post.

And here's Peter Klevius' take on sand dunes:




Here Peter Klevius (right) has just informed his friend (middle) that there's no difference between Peter Klevius' wife (left) and a brick. That's why his friend looks slightly puzzled.



Whoever is interested in what consciousness is and how the brain works needs to read Peter Klevius' stone example (see below) and the Even More Astonishing Hypothesis (see below). In fact, Peter Klevius' stone example should be compulsory reading for everyone - just like a vaccination against dumb or deliberately evil "spirituality".

That will cure much of your "religiosity" and similar biases.

Peter Klevius knows about aliens because he is one - and so are you. Most parents see their children turning into aliens already in their teens (that's how the concept 'teenager' emerged). Changing education/job and location also alienates. And when we send humans on multi-generational space trips, their grand-grand-grand etc. -children will have absolutely nothing in common with the grand-grand-grand etc. -children of those humans who stayed on Earth or went to other places. Of course, there's no reason to send people in the future - read Rendezvous with Rama by Arthur C. Clarke, the best science fiction ever written. Why isn't there a movie?!

The reason why Peter Klevius is so successful in scientific analysis is (apart from his slightly "different" brain) the fact that he simply checks for bias (religious, political, economic etc.) - and the results reveal themselves naturally. And according to Weininger (who had a big influence on Wittgenstein), 'the Woman' is the main obstacle against women's emancipation, and according to Klevius, 'the Human' is the main obstacle against science. Klevius may accordingly be one of the last human scientists.

And of course, checking for all kinds of bias erases pretty much every possible source of support.

The first and most important redundancy to understand is to skip 'understanding' altogether and replace it with 'adaptation'. That simple maneuver will clear the playing field of distractions more than anything else.

We don't "observe" or "understand" - we adapt. And not only to our outer surrounding but eqaully to our own body incl. our brain. Or a brick turning into grovel/sand. Or a star etc.

Is the flying dust from what used to be a brick less or more "complex"? Or the supernova?

Although the brain/nerve system is more complex, it's no different from e.g. light skin that gets tanned in the sun.

And when Klevius says "we" he really means it. There's no "I" (other than as origo) or "self". As Klevius wrote on the web in 2003: "In creating this text Klevius would have been helpless without an assisting world." Wittgenstein showed the impossibility of a "private language" and Klevius showed (see the stone example below) that information is the flow of perception and that there's no difference between observation and understanding.

As a consequence there's no free will (even Luther realized this and threw it in the face of Erasmus) because free will is a linguistic mirage (although Luther called it dependency on a "god").




Peter Klevius' scientific biography: My bio-parents were both highly intelligent, thank you. Lincoln Barnett's book (co-written with Albert Einstein) was my favourite at age 14. In my early twenties I wrote an unpublished essay about the universe and another about automation. I see my best asset as a scientist as being my lack of political, religious, academic etc. ties. I also see the danger in this setup, as it could just as well be the perfect road to "private pseudo-science", i.e. individual charlatanism, or "public pseudo-science", i.e. collective charlatanism. The latter may well include so-called "highly respected" researchers.

However, being too much ahead of one's time most often doesn't pay off. The only time I've been paid for my scientific revelations was back in 1981, when my mentor Georg Henrik von Wright (Wittgenstein's successor at Cambridge) convinced a paper to publish an article that apparently none of its editors and few of its readers understood. It was called Resursbegär (Demand for Resources) and was, ten years later - again assisted by Georg Henrik von Wright - self-published in a 71 pp "book" in which I analyzed our "existence-centrism" in an unreachable universe. This included a new understanding of "consciousness" that emerged from my criticism of Habermas's division of communicative action into observation and understanding.


Klevius' stone example

My view was and is that there can only be adaptation.

I exemplified it (1992:31-33, ISBN 9173288411) with 1) someone seeing (photons) a stone and 2) telling (sound waves) about it to 3) someone who writes it down (text) for 4) someone else who sees (photons) the text. 5) Then the initial "observer" kicks the stone, which turns out to be made of papier-mâché.

Klevius argues that this example covers everything essential for understanding human information flow.



There's only now -

The example again, now with annotations (1992:31-33, ISBN 9173288411): 1) someone sees (photons/understanding/language?*) a stone and 2) tells (sound waves/language) about it to 3) someone who writes it down (text/language) for 4) someone else who sees (photons) the text. 5) Then the initial "observer" kicks the stone, which turns out to be made of papier-mâché.


1) is a specific perception (understanding) and thought (interpretation) synonymous with the individual's communicative use (compare Wittgenstein's "language game") of the concept 'stone'.

2) is the second individual's linguistic interpretation and understanding of a 'stone', but limited to this individual's specific communicative use of the term.

3) same as above

4) same as above

5) a new perception (understanding)

This last one isn't to be seen as a "correction" but just as part of a continuous flow of adaptation.

* some people understand without necessarily putting a word on it



In 1994 I developed my theory, calling it EMAH ('the Even More Astonishing Hypothesis', alluding to Francis Crick's 1994 book The Astonishing Hypothesis). EMAH has been on the web since 2003. In the "book" I also warned how vulnerable research is to being choked by its own peer-steered citation cartels. I finished the book by exemplifying a new division of human cultures in a chapter called Khoi, San and Bantu. My moral bedrock since my teens rests on universal (negative) Human Rights for everyone - incl. women.


Here's a rough, Google Translate based "translation" from Klevius' 1992 book Demand for Resources:

What I want to say is that there is a culturally independent "thought intelligence" and that there is an intellectual difference between cats, monkeys and humans even when bodily and environmental experiences are equated. A person can simply draw on experiences in a way impossible for a cat, no matter how dull the lives they both have lived. Put this way the matter seems obvious, but it is often blurred in the debate.

I also find it difficult to see the relevance of the theory of so-called "Machiavellian intelligence", which states that social manipulation skills would have shaped and driven our intelligence. To me, successful social play today seems to lack any correlation with intelligence and/or thought. It is rather the exception, in relation to other factors, that intriguing actually bears fruit. One could just as well see memory capacity (in an ecological context) as a consequence of the selection factor and social interaction.

An interesting detail in this context is that the large brain of man, as of other animals, is actually the "neck" by which the so-called olfactory center (the smell organ) is connected to the brain stem. This "olfactory neck" has in man received its longest and most complex design, while man actually has denser accumulations of odor-producing fat glands than any other animal. One can therefore rightly assume that the olfactory organ has played a crucial role in shaping the way in which we perceive/think the world. You only need to think about how strong and overwhelming emotions some smell you have not encountered for a long time can induce. For me personally, this seems to apply mainly to smells with positive associations from my childhood. At the beginning of my work it appeared to me, as something of an intuitive feeling, that the "psychological" view of the brain, so strong in the 20th century, has led us away from the evolutionarily self-evident explanations of thought.

In line with modern "hygiene", whose main purpose seems to be to conceal body scents and replace them with artificial ones, allergies and immune problems increase. If one assumes that thinking has a close connection with smells and scents, this undoubtedly gives food for thought. Does it also affect us on other levels?

In his book The Scented Ape, Michael Stoddart argues that man, when he started hunting in packs, would have lost large parts of his odor-communication ability because the then necessary monogamy was threatened by sexual odor signals from the females; at the same time, one can come up with several alternative scenarios and objections. The complexity of man's "olfactory neck" suggests, for example, that something closely related to the olfactory organ has driven our intelligence. Dogs, for example, which admittedly have a good sense of smell, lack this very strongly developed "olfactory neck", and the brain's development can therefore hardly, except possibly marginally, be linked to simpler degrees of odor detection. In fact, we know rather little about the more subtle odor perceptions that man unconsciously registers.

The connection between intelligence/intellect and its biological anchors can thus appear problematic on several levels. This applies, among other things, to the connection between sensory impression and abstraction. In a remark about rational reconstruction, Jürgen Habermas makes a distinction between what he calls sensory experience (observation) and communicative experience (understanding). Against this one can object, if one sees the thought process as consisting of parts of memory patterns and experiences that must be processed/understood in order to be meaningful at all.

  sees a stone = visual impression as understood by the viewer
  "I see a stone" = statement as understood by another person

I suppose Habermas sees the latter example as communication because it conveys (via language) the original stone viewer's visual impression of the stone. Against this I would claim that this "extension" of the meaning in the statement cannot be shown to be of a different nature from the thought/understanding process that lies behind the first example. The understanding of the stone does not differ from the understanding of an abstract symbol such as, for example, a letter or a word, written or spoken. The statement "I see a stone" is likewise a direct sense impression which, like the stone as an object, lacks all meaning if it is not understood. Here one can object that the word stone, in contrast to the phenomenon stone, can transmit meanings (symbolic construction according to Habermas). Nevertheless, I would insist that this too is only apparent, and a consequence of our way of perceiving language and Popper's third world (see below).

A stone can be perceived as everything from the printing ink in a word to an advanced symbolic design. It is not a matter of a difference between observation and understanding, but only of different, unrelated levels of understanding. Nor does the division into "pure observation" and "reflective observation" have any other than purely comparative significance, since any delimitation (other than the purely comparative) is not meaningful. Doesn't it matter, then, that communication takes place between two conscious, thinking beings? Certainly, Habermas and others are free to elevate communication between individuals to another category than the communication the stone viewer has with himself and his cultural heritage via reflection on the stone, but in that case this is merely an ethnocentric position without relevance for the observation/understanding distinction.

For me there is therefore no fundamental difference in the symbol combination in the sensory experience of a stone or of Habermas's text. This of course does not mean that I would in any way express any kind of valuation of Habermas or of the stone. What it does mean, however, is that I want to question the division observation/understanding, and thus also the division primitive/civilized thinking. In fairness it should be said that Habermas's example is based on a completely different chain of thought with a different purpose than the one mentioned here, and that I only want to demonstrate the danger of generalizing the relationship observation/understanding. In other contexts it becomes, almost unnoticed, a linguistic axiom (a virus, to borrow an example from information technology) which then both generates and accumulates differences that do not exist.

In the book Evolution of the Brain: Creation of the Self (with preface by Karl Popper), John C. Eccles notes, among other things, that: "It is surprising how slow the growth of World 3 (K. Popper's and J. Eccles' division of existence and experiences; World 1 = physical objects and states, World 2 = states of consciousness, World 3 = knowledge in the objective sense) was in the earlier thousands of years of Homo sapiens sapiens. And even today there are races of mankind with negligible cultural creativity. Only when the societies could provide the primary needs of shelter, food, clothing, and security were [they] able to participate effectively in cultural creativity, so enriching World 3."

This quote shows Eccles' and Popper's legitimate concern about the issue, and partly the cultural-revolutionary retreat path they use to leave the question (see the chapter Khoi, San and Bantu in this book). It also reveals a certain, perhaps unconscious, aversion to the idea that societies would voluntarily settle for meeting their "primary needs".
Karl Popper has, with reason, made himself known as an advocate of freedom, and here I fully share his attitude. Freedom (implicitly a humane and responsible freedom) is in clear deficit in the modern welfare state. At the same time, the concept of freedom does not exist at all among the hunter-gatherer cultures referred to in this discussion. The concept of freedom, like diamonds, is created only under pressure.


Peter Klevius wrote:


Tuesday, May 15, 2018

Peter Klevius contribution to the AI/consciousness debate.

The thoughts below were first presented in 1979-81 in an article and in correspondence with Georg Henrik von Wright (Wittgenstein's successor at Cambridge), and later published in a book in 1992, in a letter to Francis Crick (Salk) in 1994, and on the web in 2003.

Evolution means change - a fact missed by many neo-creationists*


* Exemplified by the eager "humanifying" of Neanderthals and other extinct creatures. Or the equally eager (not to say desperate) search for a hiding place where "consciousness" can be protected against de-mystifiers such as e.g. Peter Klevius.


In Demand for Resources (1992 ISBN 9173288411) Klevius crossed the boundaries between consciousness-observation-understanding-language and wrapped it all in one, i.e. adaptation.
According to Klevius' analysis everything is adaptation. There's no principal analytical difference between how planets adapt to their star or how humans adapt to their environment. And no dude, this is not "simplifying away" or diluting it. When the bedrock of the Indo-Australian Plate met the bedrock of the Asian plate, the landscape was almost flat. However, look at the Himalayas today. Same rock but a completely different and extremely wrinkled appearance, and a new name, mountain range.

Consciousness is neither simple nor complicated - and certainly not a "mystery". The real mystery is how people "mystify" it - from Penrose's hiding in quantum microtubules to Koch's escape into the brain's olfactory channels. The former is outside falsifiability, and the latter outside any kind of scientific consensus and, more importantly, clearly related to the fact that brain evolution started with a smell organ which later on was mounted with additional gadgets (vision, hearing etc.) connected via the thalamus. In short, as Klevius wrote in 1992, this is why olfactory "memories" feel so different. This is also why the claustrum is oriented towards the olfactory lobes, i.e. functioning as a "translator" and transferrer of these signals, which weren't originally connected to the thalamus at all.

And please, don't get stuck in the frontal lobe just because you find some difference compared to other parts of the brain. The simple reason is just that the frontal lobe happens to be the last expansion in brain evolution and is lacking in non-humans.

The  "mystery" of drivingness - or carness.


An undriving car doesn't move.

A self-driving car makes intentional decisions based on history and the present. These decisions wouldn't be any different with a human driver with exactly the same information available. A surprising-looking choice of route may just be based on info not available to the surprised observer.


Humans have humanness rather than "consciousness"*


* Humans have skin. So where's the mystery of "skinness"?
According to Peter Klevius (1981, 1992, 1994, 2003) humans have trapped themselves in language and have a borderline problem re. what can be said across the border between humans and "the rest".

In Demand for Resources (1992, ISBN 9173288411), Peter Klevius presented the following - his own (as far as he is aware) - original observations re. evolution and awareness/mind:

Existence is change - not creation out of nothing.

Among so-called "primitive" societies which had had no contact with monotheisms, the very thought that something could appear out of nothing was impossible.

So why did monotheisms come up with such a ridiculous idea? It's very simple. The racist "chosen people" supremacist ideology created a "god" that was not part of the world he (yes, he) had created out of nothing, i.e. making a clean sheet on which the chosen ones could exist (see the chapter Existencecentrism in Demand for Resources, 1992 ISBN 9173288411).

Culture is that (arbitrarily defined and bordered) part of adaptation that is shared by others.


Warning/advice: To better your understanding of Klevius' writings you need to realize that he is extremely critical of how concepts are created and used. Not in a stiff/absolute sense of meaning, but rather of how concepts may cluelessly (or deliberately) migrate within a particular discourse. So when Dennett talks about "deliberate design" he contrasts it against "clueless design", although such a distinction isn't possible. Evolution is neither clueless nor deliberate. And whatever we are up to, it can't be distinguished from evolution other than as a purely human assessment - in which case it can't include evolution. Only humans can evaluate human behavior, which fact renders such evaluations pointless outside the realm of humans. Getting this seems to constitute a main obstacle in debates about AI and singularity.

This is why Klevius always refers to the individual human's negative Human Rights, i.e. everyone we agree is a human. This is also why Klevius can emphasize the Denisova bracelet, genetics etc. finds in Siberia/Altai as proof of modern humans evolving there (with some help from island South East Asia), not in Africa. Most humans living today would have been incapable of intellectually performing the task, because the IQ peak has long since been diluted in the mass of humans. We're all one family of humans, but the top of the line of human intelligence was a combination of island shrinking brains and their genetic transfer to big-skulled relatives in the north - as Klevius has pointed out since 2004 on the web.

Peter Klevius' EMAH update on "consciousness" 2018:


Acknowledgement: I've never in my life met anyone whom I've felt to be more intelligent* than I am. This means I've had no reason to worship human intelligence. And all my life I've been told it's unfair that I see things faster and clearer than others - or even worse, that I "turn black into white" (some real idiots from the 1970-80s). But how could it be "unfair" when I can't use it for my own advantage without others sooner or later catching up and shaming me? And when you're in the front line no one understands you, and therefore no one pays you. Which fact has added valuable neutrality and reduced malign bias to/from Klevius' analysis.

* Klevius' intelligence was perhaps best described by the Finnish neuroscientist J. Juurmaa, who in the 1990s wrote: "Peter Kleviuksen ajatuksen kulku on ilmavan lennokas ja samalla iskevän ytimekäs", which translated to English would mean something like: "Peter Klevius' train of thought is airily high-flying and at the same time strikingly concise." This he wrote in a long letter answering Klevius' question about the effects on the visual cortex in individuals who have been blind from birth. This inquiry was part of Klevius' check-up of his already published EMAH theory, so as to get a qualified confirmation that the "visual cortex" in people born blind is fully employed with other things than vision. Juurmaa's description of Klevius is in line with philosopher Georg Henrik von Wright's 1980 assessment, and more importantly with Klevius' own experience, and perhaps most importantly when assessing AI/deep learning etc.

Only in true science and Human Rights does Klevius' intelligence matter. And with AI singularity "pure" science will be dead anyway (although some idiots will never get it). Why? Because human existencecentrism (look it up in Klevius' 1992 book, pp 21-22) will only follow AI to the point of singularity.

Peter Klevius has - since he at age 14 read Einstein's and Barnett's book - been fascinated by humans' aversion to checking themselves in the mirror of existencecentrism.

Future democracy will be cloud based and filtered through (negative) Human Rights equality. This means that we get rid of the distorting bottleneck our politicians now constitute.

This also means the definitive end of islam as we know it, i.e. as a Human Rights violating excuse for racism, sexism, and power greed.

It's astonishing how the avoidance of negative Human Rights affects every debate. And most of this is due to our politicians' defense of the Saudi dictator family. Why? Simply because they stand as the "guardians" of islam and 1.6 billion muslims, who are all lumped together and protected by the label "islamophobia" - which in fact only protects the Saudi dictator family and those who want to deal with it and its Human Rights violating sharia (e.g. OIC etc.).

There's no way to copy a brain without a total break between individuals. That's perhaps one definition of what it means to be a human.

What makes humans individuals (atoms) and robots collective? Robot memories are shared, and if you destroy the hardware, the software will still be alive and well.

However, a human individual is extremely vulnerable to individual extinction.

And a "pet" copy is an other individual - although it remembers and behaves like the original.


Peter Klevius in Demand for Resources (1992:23, ISBN 9173288411):



The basis of existence is change, and causality constitutes a complex of evolution and devolution. Evolution may be seen as the consequence of causality's variables in time, where complexity in existing structures is reinforced. This stands in opposition to thermodynamics, which theoretically leads to maximal entropy (i.e. energy equilibrium) where time/change finally ends. Someone might then say that the products of evolution are just temporary components on causality's road towards uniformity (Klevius 1981, 1992 - text copied from Klevius' 1981 article Demand for Resources).


The Even More Astonishing Hypothesis (EMAH)

by Peter Klevius


In 1991, years before Crick's book, the original idea was presented to Georg Henrik von Wright (Wittgenstein's own choice of successor to his Cambridge chair), then published in Demand for Resources (1992, ISBN 9173288411), presented to Francis Crick in 1994, and presented on the world wide web in 2004.

Abstract: Consciousness may be seen as environmental adaptation rather than something "uniquely human". Although the neocortex constitutes the mass of adaptations, the thalamus is the least discussed yet perhaps the most important piece in the "puzzle of mind", due to its central function as the main relay station between body actions, brain and environment. A critical assessment of concepts such as observation/understanding, mind/body, free will, knowledge and language reveals an inescapable awareness in the thalamic "meet-puts". In conclusion, memories may hence be better described as associations causing linguistic traps (i.e. self-inflicted "problems" produced in language) rather than as distinct entities. The continuity model proposed in EMAH avoids the limitations of a "discrete packets of information" model, and does so without Cartesian dualism or the homunculus fallacy.

Note: In some respects the neural network of "lower" systems such as the spinal cord and cerebellum by far outperforms the cortex. This is because of different tasks (fast motorics vs. slow adaptation) and due to differences in processing. (Copyright Peter Klevius.)


Introduction

Understanding how social behavior and its maintenance evolved in human and other forms of life (incl. plants etc.) has nothing to do with "the balance between self-interest and co-operative behavior" but everything to do with kinship and friendship adaptation. Everything is "self-interest" - how could it not be? Although humans may be attributed a more chaotic (i.e. more incalculable) "personality", they are, like life in general, just adaptive "robots" (i.e. active fighters against entropy - see Demand for Resources, 1992 ISBN 9173288411). Misunderstanding (or plain ignorance of, alternatively ideological avoidance of) kin recognition/friendship (symbiosis) and AI (robotics) paves the way for the formulation of unnecessary, not to say construed, problems which may in turn become problematic themselves, precisely because they hinder open access to direct problem solving (see e.g. Angels of Antichrist - kinship vs. social state).

Mentalists trap themselves in self-inflicted astonishment over phenomena they think are beyond determinism. When Chomsky says "there are things beyond comprehension" he should ask himself: who are you to talk about things beyond comprehension (compare 'existencecentrism' in Klevius' Demand for Resources, 1992 ISBN 9173288411), i.e. about something that can't even be asked about - without just pushing the border a little, or rather, without just making it a new comprehensible adaptation? And if it seems incomprehensible, it's no more so than e.g. Donald Duck (see below).


The Future of a "Gap" (copyright P. Klevius 1992-2004)

Human: What is a human being? Can the answer be found in a non-rational a priori statement (compare e.g. the axiomatic Human Rights individual) or in a logical analysis of the alleged "gap" between human beings and others? The following analysis uses an "anti-gap" approach. It also rests on the struggle and success of research performed in the field of artificial intelligence (AI), automation/robotics etc.

Signal: A "signal gap" is commonly understood as a break in the transition from input to output, i.e., from perception to behavior. Mentalists use to fill the gap with "mind" and "consciousness" while behaviorists don't bother because they can't even see it. A five minute timelaps of Earth spanning 4.5 Billion years would make a very lively planet. However, where's "consiousness" between input (the single frames) and output (the running video)? Or, what/whom should we allow to possess "consciousness"? And if we limit it only to humans we are stuck with it being just a human thing - hence impossible to use in general meaning. An easier way out is to avoid the signal "gap" and call it what it is, a network. But a network that continuously builds new patterns on top of already existing ones. 

Matter: Berkeley never believed in matter. What you experience is what you get and the rest is in the hand of "God" (i.e. uncertainty). This view makes him a super-determinist without "real" matter. Klevius just adds the fact that Berkeley's "God" is truly metaphysical and therefore not even worth talking about.

Mind: The confusing mind-body debate originated in the Cartesian dualism, which divides the world into two different substances, which, when put together, are assumed to make the world intelligible. However, on the contrary, they seem to have created a new problem based on this very assumption. But a problem that has become popular among those who want to talk metaphysics, i.e. giving an impression of talking about what can't be talked about.

Free will: Following a mind-body world view, many scholars prefer to regard human beings as intentional animals fueled by free will. It is, however, a challenging task to defend such a philosophical standpoint. Not even Martin Luther managed to do it, but rather transferred free will to God despite loud protests from Erasmus. Although Luther's thoughts in other respects have had a tremendous influence on Western thinking, this particular angle of view has been less emphasized. However, 'free will' can only be used locally.

Future: When asked about the "really human" way of thinking, many mentalists refer to our capacity to "calculate" the future. But is there really a future out there? All concepts of the future seem trapped in the past. We cannot actually talk about a certain date in the future as real future. What we do talk about is, for example, just a date in a calendar. Although it is a good guess that we are going to die, the basis for this reasoning always lies in the past. The present is hence the impenetrable mirror between the "real future" and ourselves. Consequently, our every effort to approach this future brings us back in history. Closest to the future we seem to be when we live intensely in the immediate present without even thinking about the future. As a consequence, the gap between sophisticated human planning and "instinctual" animal behavior seems less obvious. Is primitive thinking that primitive after all? And isn't 'instinct' just an excuse for ignorance?

An additional aspect of the future is that neither youth, deep freezing nor a pill against aging will do as insurance for surviving tomorrow. The human individual is lost in a crash, whereas the robot brain safely hovers in the cloud - in many copies.


Observation and Understanding (copyright P. Klevius 1992-2004)

If one cannot observe something without understanding it, all our experiences are illusions because of the eternal string of corrections made by later experience. What seems to be true at a particular moment may turn out to be something else in the next, and what we call understanding is merely retrospection.

The conventional way of grasping the connection between sensory input and behavioral output can be described as observation, i.e. as sensory stimulation followed by understanding. The understanding that it is a stone, for example, follows the observation of a stone. This understanding might in turn produce behavior such as verbal information. To do these simple tasks, however, the observer has to be equipped with some kind of "knowledge," i.e., shared experience that makes him/her culturally competent to "understand" and communicate. This understanding includes the cultural heritage embedded in the very concept of a stone, i.e. it's a prerequisite for observation. As a consequence it's not meaningful to separate observation and understanding. This, of course, doesn't exclude "local" (non-analytical) use of the terms in speech and literature etc. for the purpose of catching subtle nuances.

Categorization belongs to the language department, which, on the brain level, is only one among many other behavioral reactions. But due to its capability to paraphrase itself, it has the power to confuse our view on how we synchronize our stock of experience. When we watch a stone, our understanding synchronizes with the accumulated inputs associated with the concept of a stone. "It must be a stone because it looks like a stone," we think. As a result of such synchronization, our brain intends to continue on the same path and perhaps do something more (with "intention"). For example, we might think (as a result of our adaptation to the situation), "Let's tell someone about it." The logical behavior that follows can be an expression such as, "Hey look, it's a stone out there." Thus, what we get in the end is a concept of a stone and, after a closer look, our pattern of experience hidden in it. If the stone, when touched, turns out to be made of papier-mâché, then the previous perception is not deepened, but instead switched to a completely new one.

It's almost frightening how often one hears researchers/scientists/philosophers etc. who think they are at least average in intelligence, telling others that "previously we didn't understand what X was", for example that "water consists of molecules and atoms". This kind of schizophrenic "thinking" reflects the depth of the mind/body hoax many are trapped in.

One might say that a stone in a picture is a "real" stone, while the word 'stone' written on a piece of paper is not. The gap here is not due to different representations but rather to different contexts. When one tries to equalize observation with understanding, the conventional view of primitive and sophisticated thinking might be put in question. We still act like complex worms, and sophistication is only a matter of biased views built on different stocks of experience (adaptation) and the overwhelming complexity that appears chaotic. Moreover, a worm, just like a computer, is more than the sum of its parts.

Therefore, meaning, explanation and understanding are all descriptions of the same basic principle of how we synchronize (adapt) perception with previous experience. For the fetus or the newborn child, the inexperienced (unsynchronized, or uncertainty/"god" if you prefer) part of the inside-outside communication is huge compared to a grown up. Hence the chaotic outside world (i.e., the lack of its patterns of meaningfulness) has to be copied (adapted) in a stream of experience, little by little, into the network couplings of the brain. When the neural pattern matches the totality (meaningfulness) its information potential disappears. Our brain doesn't store information - it kills information. From an analytical point of view "storing of information" is an oxymoron. On top of this, there is a continuous growth of new neurons, which have to be connected to the network. As a result of these processes, the outside world is, at least partly, synchronized with the inside, "mental" world. Heureka, the baby appears to think and exist! In other words, the baby records changes against a background of already synchronized (adapted) inputs.

* see "existence-centrism" in Demand for Resources (1992) for a discussion abt a shrinking god and the allmighty human!


The Category of the Uniquely Human Category Mistake (copyright P. Klevius 1992-2004)

It's meaningless to state that we are the best (or the worst) humankind. However, category mistakes re. humans and non-humans are still common and many researchers/scientists don't even seem to realize how carelessly they handle this important distinction.

It's equally meaningless to ask what something is when we don't know what 'it' is. 'Consciousness' is easily understood when used in comparison with 'unconscious'. However, how stupid is it when we mystify the term beyond comprehension by squeezing in random additional properties and then ask the question: "What is this mystery of consciousness?"

A main difficulty in formulating the concept of consciousness is our pride (presumably we should have been equally proud as mice) and our tautological belief in "something uniquely human". However, if we try to follow the die-hard determinists, we would find free will and destiny easier to cope with, and also find that the concept of "the unique human being" is rather a question of point of view and of carelessly crossing borders of concepts.

Following this line of thought, I suggest turning to old Berkeley as well as to Ryle but excluding Skinnerian Utopias. Those who think the word determinism sounds rude and blunt can try to adorn it with complexity to make it look more chaotic. Chaos here means something you cannot overview no matter how deterministic it is. We seem to like complexity just because we cannot follow the underlying determinism. The same could be said about what it really is to be a human: a passion for uncertainty, i.e. life itself. Francis Crick in The Astonishing Hypothesis: "... your sense of personal identity and free will are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules."

This statement is easy to agree on, so let me continue with another, perhaps more useful, quote from Crick: "Categories are not given to us as absolutes. They are human inventions." I think these two statements create an efficient basis for further investigations into the mystery of thinking. Hopefully you will forgive me now as I'm going to abolish not only memory but also free will and consciousness altogether. Then, I will go even one step further to deny that there are any thoughts (pictures, representations, etc.) at all in the cortex. At this point, many might agree, particularly regarding the cortex of the author of this text.

The main problem here is the storage of memories, with all their colors, smells, feelings and sounds. Crick suggests the dividing of memory into three parts: episodic, categorical and procedural. While that would be semantically useful, I'm afraid it would act more like an obstacle in the investigation of the brain, because it presupposes that the hardware uses the same basis of classification and, like a virus, hence infects our analyses.

The analysis presented here is the result of de-categorization. The only thing that distinguishes us from the rest of nature (and 'nature' includes all artefacts, non-human as well as human ones) is the structure and complexity most (but not all) humans possess. In other words, there's no point at which something "special" happens. This is why Klevius in 1994 said that there's no principal difference between a brick and his girlfriend - a comment which raised the eyebrows of his pal, who admired Klevius' girlfriend.

Instead of categorization, this analysis sees only adaptation to the surrounding world, incl. one's own brain, which consists of layers of previous adaptations where the latest one is awareness, consciousness, or the present now if you like.


Nerves, Loops and "Meet-puts" (copyright P. Klevius 1992-2004)

According to Crick, "each thalamic area also receives massive connections from the cortical areas to which it sends information. The exact purpose of these back connections is not yet known." In the following paragraphs, I will outline a hypothetical model in line with this question. The interpretation of the interface between the brain and its surroundings as presented here has the same starting point as Crick's theory but divides thinking into a relay/network system in the cortex and the perception terminals (or their representatives in the thalamus) around the body, like an eternal kaleidoscope. Under this model, imagination would be a back-projected pattern of nerve signals, associated with the original events that caused them but with the signals faded and localized as "internal" based on the direction of the nerve signals. This view suggests that there are not only inputs and outputs but also what one might name "meet-puts," i.e., when an input signal goes through and evolves into other signals in the cortex, these new signals meet other input signals in the thalamus.

There is no limit to the possible number of patterns/associations in such a system, and there is no need for memory storage but rather for adaptive network couplings. These "couplings," or signal pathways, are constantly running in loops (not all simultaneously but some at any given moment, i.e. what we call awareness) from the nerve endings in our bodies through the network in the cortex and back again to the thalamus. Of course the back-projected signals have to be discriminated from incoming signals, thereby avoiding confusion regarding fantasy and reality. But this process, though still unknown, could be quite simple and perhaps based simply on the direction the signal comes from. As a consequence of the loops, the back-projected pattern differs from the incoming signals, or the stimuli. Therefore, every signal from the body/perceptions, hormonal signals and so on, either finds its familiar old route or pattern of association in the network (established/adapted experiences) or creates new connections (new experiences) that can be of varying durability depending on how they settle with older associations. For example, if someone is blind from the moment of birth, s/he will have normal neuronal activity in the cortex area of vision. On the other hand, in case of acquired blindness, the level of activity in the same area will become significantly lower over time. This is logical according to the EMAH model because, in the former case, the neurons have never become involved in association patterns of vision but were engaged in other tasks. In the latter case, the neurons have partly remained in previous vision patterns, which are no longer in use, while the rest have moved on to other new tasks.
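To make the loop idea a bit more tangible, here is a toy Python sketch written for this text (not Klevius' own formalism): signals pass through an adaptive association network standing in for the cortex, and the back-projected result meets the next incoming signal at a relay standing in for the thalamus, with a direction tag keeping back-projections apart from fresh inputs.

    from collections import defaultdict

    class LoopNetwork:
        def __init__(self):
            # association strengths between signal patterns ("couplings")
            self.couplings = defaultdict(float)

        def cortex(self, signal):
            # strengthen the coupling for a pattern and return a
            # back-projected version, tagged with its direction
            self.couplings[signal] += 1.0
            return {"pattern": signal, "direction": "back-projected"}

        def thalamus(self, incoming, back_projected):
            # the "meet-put": an incoming signal meets the back-projection
            # of earlier signals; the direction tag keeps fantasy and reality apart
            return {
                "incoming": incoming,
                "expected": back_projected["pattern"],
                "novel": self.couplings[incoming] == 0.0,
            }

    net = LoopNetwork()
    back = net.cortex("stone")                 # first encounter with a stone
    print(net.thalamus("stone", back))         # the same pattern returns: nothing novel
    print(net.thalamus("papier-mache", back))  # the kicked stone: a new pattern to adapt to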

It is important to note that human thinking, contrary to what today's computers do, involves the perceptions that originate from the chemical processes in the body's hormonal system, what we carelessly name "emotions." This, I think, is the main source behind the term "human behavior." The difference between man and machine is a source of concern but, as I see it, there is no point in making a "human machine". But perhaps someone might be interested in building a "human-like machine".


Body vs. Environment - a History of Illusions (copyright P. Klevius 1992-2004)

The surface of our body isn't the border of consciousness. A better candidate is the neuronal system/Thalamus.

According to the EMAH model, nerves define our body. Thus, the hormonal signals inside our body can be viewed as belonging to the environment surrounding the nervous system. As the meaning of life is to uphold complexity by guarding the borders, it's ultimately a fight against entropy. In this struggle, life is supported by a certain genetic structure and metabolism, which synchronizes its dealings (adaptation) with the surrounding environment. Balancing and neutralizing these dealings is a job done by nerves. Also consider Klevius' point about gut bacteria and the brain.

A major and crucial feature of this "body-guarding" mechanism is knowing the difference in direction between incoming signals and outgoing, processed signals. On top of this, both areas change continuously and thus have to be matched against each other to uphold or even improve the complexity. According to this model, people suffering from schizophrenia, just like healthy people, have no problem in discriminating between inputs and outputs. In fact, we can safely assume that the way they sometimes experience hallucinations is just like the way we experience nightmares. Both hallucinations and nightmares seem so frightening because they are perceived as incoming signals and confused with real perceptions. The problem for the schizophrenic lies in a defect in processing due to abnormal functions in and among the receptors on the neurons, which makes the association pattern unstable and "creative" in a way that is completely different compared with controlled fantasies. In the case of nightmares, the confusion is related to low and fluctuating energy levels during sleep. However, a frightful hallucination is always real because it is based on perceptions. What makes it an illusion is when it is viewed historically from a new point of view or experienced in a new "now," i.e., weighed and recorded as illusory from a standpoint that differs from the original one. In conclusion, one may argue that what really differentiates a frightful ghost from a harmless fantasy is that we know the latter is created inside our body, whereas we feel unsure about the former.



EMAH Computing as Matched Changes (copyright P. Klevius 1992-2004)

EMAH does not support the idea that information is conveyed over distance, in both the peripheral and central nervous system, by the time of occurrence of action potentials.

"All we are hypothesizing is that the activity in V1 does not directly enter awareness. What does enter awareness, we believe, is some form of the neural activity in certain higher visual areas, since they do project directly to prefrontal areas. This seems well established for cortical areas in the fifth tier of the visual hierarchy, such as MT and V4." (Crick & Koch, 1995a,b).  Hardware in a computer is, together with software (should be “a program” because this word signals programming more directly), specified at the outset. A high level of flexibility is made possible through the hardware's ability to unceasingly customize to incoming signals. This is partly what differs human beings from a machine. The rest of the differentiating factors include our perceptions of body chemistry such as hormones, etc. Programming a computer equipped with flexible hardware, i.e., to make them function like neurons, will, according to the EMAH-model, make the machine resemble the development of a fetus or infant. The development of this machine depends on the type of input terminals.

All input signals in the human, including emotional ones, involve a feedback process that matches the incoming signals from the environment with a changing copy of it in the form of representations (or rather adaptations) in the brain's network couplings. Life starts with a basic set of neurons, the connections of which grow as experiences come flooding in. This complex body of neuronal connections can be divided into permanent couplings, the sum of experiences that is your "personality," and temporary couplings, short-term and more shallow "memories"/imprints for the time being.
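A minimal sketch of this division, under assumptions made for this text (the decay rate and the "permanent" threshold are illustrative numbers, not Klevius' own): couplings that keep being reactivated grow toward the permanent, "personality" end, while one-off imprints fade away.

    DECAY = 0.5        # assumed fraction of strength kept per day without activation
    PERMANENT = 3.0    # assumed threshold above which a coupling counts as "personality"

    couplings = {}     # pattern -> coupling strength

    def pass_day(activated):
        """Strengthen the couplings used today; let unused ones fade."""
        for pattern in activated:
            couplings[pattern] = couplings.get(pattern, 0.0) + 1.0
        for pattern in list(couplings):
            if pattern not in activated:
                couplings[pattern] *= DECAY
                if couplings[pattern] < 0.1:   # a vanishing short-term imprint
                    del couplings[pattern]

    pass_day({"mother tongue", "phone number heard once"})
    for _ in range(5):
        pass_day({"mother tongue"})            # reactivated every day

    for pattern, strength in couplings.items():
        kind = "permanent" if strength > PERMANENT else "temporary"
        print(pattern, round(strength, 2), kind)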

A certain relay connection, if activated, results in a back-projected signal toward every receptor originally involved and thus creates, in collaboration with millions of other signals, a "collage" that we often call awareness. This is a constant flow and is in fact what we refer to as the mysterious consciousness. At this stage, it is important to note that every thought, fantasy or association is a mix of different kinds of signals. You cannot, for example, think about a color alone, because it is always "in" or "on" something else (on a surface or embedded in some kind of substance) and connected by relay couplings to other perceptions or hormonal systems. "Meaning" is thus derived from a complex mix of the loops between perceptions and back-projected perceptions. This can be compared to a video camera system with a receiving screen and a back-projecting screen. The light meter is the "personality" and the aperture control the motor system. However, this system lacks the complex network system found in the cortex and thus has no possibility to "remember"/adapt. The recorded signal is of course not equivalent to the brain's network couplings because it is fixed. To save "bytes," our brain actually "forgets" what has been synchronized (adapted) rather than "remembers" it. Such changes in the brain - not memories - are what build up our awareness. This process is in fact a common technique in transmitting compressed data. It's also similar to how we first actively learn to walk, and then stop thinking about it.
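The compression analogy can be made concrete. The text does not name a specific technique; delta encoding is one common example: only what differs from the already shared state is transmitted, while everything that already matches is, in effect, "forgotten". A minimal sketch:

    def delta_encode(prev_frame, new_frame):
        """Keep only the positions where the new frame differs from the previous one."""
        return {i: v for i, (p, v) in enumerate(zip(prev_frame, new_frame)) if p != v}

    def delta_decode(prev_frame, delta):
        """Rebuild the new frame from the shared previous frame plus the changes."""
        return [delta.get(i, p) for i, p in enumerate(prev_frame)]

    already_adapted = [0, 0, 1, 1, 0]   # what sender and receiver already "share"
    new_input       = [0, 1, 1, 1, 0]   # the next moment

    changes = delta_encode(already_adapted, new_input)
    print(changes)                                   # {1: 1} - the matched rest is "forgotten"
    print(delta_decode(already_adapted, changes))    # [0, 1, 1, 1, 0]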


Short-Term Memories and Dreams (copyright P. Klevius 1992-2004)

At any given moment, incoming signals, or perceptions, have to be understood through fitting and dissolving in a network of associations. If there are new, incomprehensible signals, they become linked (coupled) to the existing net and localized in the present pattern of associations. Whether their couplings finally vanish or stay depends on how they fit into the previous pattern and/or what happens next.

As a consequence of this coupling process - a process that could be described rather as a flow - memories in a conventional, semantic meaning do not exist, because everything happens now. Consciousness or awareness is something one cannot influence, but rather something that involves an ongoing flow of information to and from nerve endings through the brain (a relay station incl. the thalamus). For every given moment (now) there is consequently only one possible way of acting, i.e. no absolute "free will". One cannot escape awareness or decisions because whatever one thinks, it is based on the past and will rule the future. Memories are thus similar to fantasies of the future, based on and created by experiences. Regarding short-term memory, I agree with Crick's view and hypothesis. But I certainly would not call it memory, only weaker or vanishing superficial couplings between neurons. Remember that with this model, the imagination of something or someone seen a long time ago always has to be projected back on the ports where it came through, thus enabling the appropriate association pattern. Although signals in each individual nerve are all equal, the back-projected pattern makes sense only as a combination of signals. The relay couplings in the cortex are the "code," and the receptor system is the "screen." This system does not allow any "escape" from the ever-changing "now", which determines the dealings with the surrounding environment. Living creatures develop their software by living.

Dreams are, according to this model, remnants of short-term memories from the previous day(s), connected and mixed with relevant patterns of associations but excluding a major part of the finer association structures. This is why dreams differ from conscious thinking. The lack of finer association structures is due to low or irregular activity levels in the brain during sleep. The results are "confused thoughts", which are quite similar to those of demented people, whose finer neural structures are damaged because of tissue death due to a lack of appropriate blood flow. Thus dreams are relevantly structured but in no way a secret message in the way psychoanalysts see them, whereas patients with dementia tend to go back to their childhood due to the irrevocable nature of the physical retardation process. Investigating dreams and their meaning by interpreting them is essentially the same as labeling them as psychological (in a psychoanalytical sense). A better and less biased result would emerge if the researcher actually lived with the subject the day before the dream occurred. Rather than analyzing pale and almost vanished childhood experiences from a view trapped in theoretical prejudices that describe an uncertain future, the researcher should perhaps put more effort into the logic of the present.





Donald Duck and a Stone in the Holy Land of Language (copyright P. Klevius 1992-2004)

Wittgenstein: "Sie ist kein Etwas, aber auch nicht ein Nichts!" (Phil. Untersuch. 304). Also see P. Klevius' analysis of a stone (in Demand for Resources - on the right to be poor, 1992).

Although Wittgenstein describes language as a tool, it seems more appropriate to classify it as human behavior. Unlike tools, language is a set (family) of a certain kind of bodily reactions (internal and/or towards the environment). We have to reject not only "the grammar which tries to force itself on us", but also, and perhaps even more so, the representations we, without any particular reason, assign to language.

Language is basically vocal but apart from that, little has been said about its real boundaries. One could actually argue that the best definition is perhaps the view that language is human territory. The question whether animals have a language is then consequently meaningless. On the other hand, Wittgenstein denied the existence of a "private language" because applying it could never prove the validity of its products. We are trapped in words and connotations of language although these categories themselves, like language in general, are completely arbitrary "language games," as Wittgenstein would have put it. (no offense, Mr Chomsky and others, but this is the tough reality for those trying to make sense of it in the efforts of constructing intelligent, talking computers). Furthermore, these categories change over time and within different contexts with overlapping borders.

Changing language games provide endless possibilities for creating new "language products", such as e.g. psycho-dynamic psychology. I believe this is exactly what Wittgenstein had in mind when he found Freud interesting as a player of such games but with nothing to say about the scientific roots of the mental phenomenon.

Let's imagine Donald Duck and a picture of a stone. Like many psychological terms, Donald Duck is very real in his symbolized form but nonetheless without any direct connection to the reality of the stone. In this sense, even the word stone has no connection to reality for those who don't speak English. Words and languages are shared experience.

It is said that a crucial feature of language is its ability to express past and future time. This might be true but in no way makes language solely human. When bees arrive at their hive they are able, in symbolic form, to express what they have seen in the past so that other bees will "understand" what to do in the future. Naming this an instinct just because bees have such an uncomplicated brain does not justify a different classification from that of human thinking.

If, as I proposed in Demand for Resources (1992), we stop dividing our interaction with the surrounding world in terms of observation and understanding (because there is no way of separating them), we will find it easier to compare different human societies. Language is a categorizing extension of perception/experience patterns and discriminates us as human only in the sense that we have different experiences.

Language has developed from a tool for communication into an additional tool of deception within itself. In Demand for Resources (1992 ISBN 9173288411) I used the example of a stone that turned out to be papier-mâché, as well as the word existence, which has transformed from emerge to exist, i.e. losing its root and hence opening up the question of how we can exist.

However, words and language are just like everything else that hits our receptors. There is no principal difference between thinking through the use of words and thinking through sounds, smells (albeit not through the thalamus), pictures or other "categories". Ultimately, language is, like other types of communication with the surrounding world, just a form of adaptation to one's environment (in a broad sense of course), i.e. resistance against entropy.



Wikipedia: Language is a system that consists of the development, acquisition, maintenance and use of complex systems of communication, particularly the human ability to do so.
Human language has the properties of productivity and displacement, and relies entirely on social convention and learning. Its complex structure affords a much wider range of expressions than any known system of animal communication. Writing is a medium of human communication that represents language and emotion with signs and symbols.

This short "definition" reveals the meaninglessness of the definition.



It's important to note the difference between everyday use of language, and language used about itself.

What's the difference between an image of a distant galaxy taken via a space telescope and smell molecules left on a path?

And long before humans realized how nature performs photosynthesis, they already thought of themselves as the masters of the Universe.

And contrary to what Chomsky and others say, Klevius doesn't think in language except when preparing to answer someone through language. Is this why Klevius is a lousier talker than most early teenagers, who don't have a clue what Klevius is talking about?

Words constitute rigid traps compared to free, smoothly running thinking/analysis - unless you're gambling with words, as Freud did while waiting for reality to catch up with the speculations we call psychoanalysis (see Klevius Psychosocial Freud timeline).

However, words are also so imprecise that they are useless for construction work etc., where we need math and geometry instead. Words describe what something is; math describes how it is.

Everyday language needs its greatest asset, volatility, which at the same time constitutes its main security risk with regard to faked or misleading communication.

To define it more narrowly, language is also the room where psychoanalysis is supposed to live and work. A stone does not belong to language, but the word "stone" does. What is the difference? How does the word differ from the symbolic expression of a "real" stone in front of you? Or, to put it the other way round: what precisely makes it a stone? Nothing, except the symbolic value derived from the word "stone". The term "observation" thus implies an underlying "private language". When Turing mixed up his collapsing bridges with math, he was corrected by Wittgenstein, just as Freud was corrected when he tried to build psychological courses of events on a fantasy of natural science. Wittgenstein's "no" to Turing at the famous lecture at Cambridge hit home the difference between games and reality.

Archetypes and grammar as evolutionary tracks imprinted in our genes are a favorite theme among certain scholars. But what about other skills? Can there also be hidden imprints that make driving or playing computer games possible? And what about ice hockey, football, chess, talk shows, chats and so on? The list could go on forever. Again, there is no distinguishing border between evolutionary "imprints" (i.e. adaptation) and other stimulus/response features in ordinary life.


"Primitive" vs. "Sophisticated" Thinking (copyright P. Klevius 1992-2004)

The more synchronized (informed) something or someone is with its surrounding reality, the less dynamics/interest this something or someone invests in its relationship with that particular reality. Interest causes investment, and social entropy excludes investment economy, because economy is always at war with entropy. The key to economic success is luck and thus includes a lack of knowledge: no matter how well a business idea is outlined and performed, its success or failure is ultimately unforeseeable.

In Demand for Resources (1992) I discussed the possibility of some serious prejudice hidden in Karl Popper's top achievement of civilization, namely "World 3", and his and Eccles' assumption of an increasing level of sophistication from the primitive to the modern stage of development. It is of course easy to be impressed by the sophistication of the artificial, technical environment constructed by man, including language and literature, etc. But there is nonetheless a striking lack of evidence for a higher degree of complexity in civilized human thinking than in that of, e.g., Australian Aboriginals some 25,000 years ago. Needless to say, many hunter-gatherer societies have been affluent in the sense that they had food, shelter and enough time to enrich World 3, yet in reality they have failed to do so.

Even on the level of physical anthropology, human evolution gives no good, single answer to our originality. What is "uniquely human" has rested on a "gap," which is now closed, according to Richard Leakey and Roger Lewin, among others. This gap is presumably the same as the one between sensory input and behavioral output mentioned above. From an anthropological point of view, it can be said that a computer lacks genetic kinship, which is a rule without exception in the animate world, although we in the West seem to have underestimated its real power.


De-constructing the Mind (copyright P. Klevius 1992-2004)

A deconstruction of our underlying concepts of the brain can easily end up in serious trouble due to the problem of language manipulation. Wittgenstein would probably have suggested that we leave it as it is. If language is a way of manipulating a certain area - language - then the confusion will only become greater if we try to manipulate the manipulation! But why not try to find out how suitable "the inner environment" is for deconstruction? After all, this environment presupposes some kind of biology, at least on the borderline between the outside and the inside world. Are not behavioral reactions, as well as intra-bodily causes such as hormones, highly dependent on presumed biological "starting points"? How do skin color or sex hormones affect our thinking? Where do causes and reactions start, and isn't even the question a kind of explanation and understanding?

Determinists usually do not recognize the point of free will, although they admit the possible existence of freedom. Why? Obviously this needs some Wittgensteinian cleaning of language. Unfortunately I'm not prepared for the task, so let's pick up only the best-looking parts, i.e. that words such as freedom, will, mind, etc. are semantic inventions and have no connection to anything else unless proved by convincing and understandable evidence. Does this sound familiar and maybe even boring? Here comes the gap again. Stimulus and response seen purely as reflex/adaptation is not always correct, says G. H. von Wright, because sometimes there may be a particular reason causing an action. According to von Wright, an acoustic sensation, for example, is mental and semantic and thus out of reach for the scientific understanding of the body-mind interaction. Is this the view of a diplomatic gentleman eating the cake and wanting to keep it too? To me, it is a deterministic indeterminist's view.

G. H. von Wright concludes that what we experience in our brain is the meaning of its behavioral effects. In concluding that it is rather a question of two different ways of narrowing one's view of living beings, von Wright seems to narrow himself to Spinoza's view. Is meaning meaningful, or is it perhaps only the interpreter's random projection of him/herself? Is it, in other words, based only on the existence of the word meaning?

Aristotle divided the world primarily into matter and definable reality (psyche). Like many other Greek philosophers, Aristotle was an individualist and would have fitted quite well into the Western discourse of today. Berkeley, who was a full-blooded determinist, however, recognized the sameness of mind and matter and handed both over to "god". Consequently, Philonous' perceived sensations in the mind were not directly aligned with Hylas' view of immediate perceptions. We thus end up with Berkeley as a spiritual die-hard determinist challenging materialistic humanism.


Conclusion
                                                                            
In conclusion, one might propose a rethinking of the conventional hierarchy of the brain. What we usually call "higher levels", perhaps because they are more pronounced in humans, are in fact only huge "neural mirrors" for the real genius, the thalamus (with its capability of two-way communication with the cortex and its extensions in the cerebellum, spine, nerve ends etc.), i.e. what is part of the "primitive" system. In other words, one may propose a view describing the "gap" between humans and animals as a quantitative difference in the amount/power of cerebral "mirroring" and communication with the thalamus, rather than as a distinct qualitative feature. Nothing, except our "emotions", seems to hinder us from making a "human machine". And because these very "emotions" are lived experience (there is, for example, no way to scientifically establish what could be considered "emotions" in a fetus), nothing, except the meaninglessness of the project itself, could hinder us from allowing a machine to "live" a "human life".
