Intelligence - A Form of Self-Organization?



Reply Mon 29 Dec, 2008 04:26 am
The idea of intelligence is going through changes.
These days we frequently hear the term swarm intelligence.
Once in a while I read the term organizational intelligence, but it is not as popular as the first one. In discussions it always gets confused with collective intelligence, which means something completely different.
Collective intelligence is about an accumulation of IQ. For example, a company as an entity is expected to behave more intelligently (than, say, another one) when its employees have more intelligence to contribute. The participants of the collective already carry the high IQ.
Even though IQ cannot simply be summed up (unfortunately), something like a partial accumulation might actually take place.
Organizational intelligence though is something completely different.
It takes place at an extremely low level.
Organizational intelligence is not an output of the participants' minds but rather a result of the way something is organized. Ants probably provide the most common example. They have a habit of following very simple rules without questioning them, accidentally creating an effect which is an optimization method for finding the shortest way from A to B, a method that has been transferred into a formal system as so-called ant algorithms.
These algorithms can easily be applied to robots that have no further AI and no function other than following these simple rules.
Since these algorithms contain no such thing as a concept of what a shortest distance could mean, it is interesting to see that they still work as a method for finding the shortest distance.
This also shows that the ants do not need a concept of distance or of the shortest way; the effect results from the way the system is organized.
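To make the point concrete, here is a minimal sketch of the double-bridge idea behind ant algorithms (a toy model with made-up numbers, not any particular published algorithm): ants choose between two routes in proportion to the pheromone on them, deposit pheromone inversely proportional to the route's length, and pheromone evaporates at a constant rate. The shorter route ends up dominating even though no ant "knows" which route is shorter.

```python
import random

# Toy double-bridge model: two routes of different length, pheromone deposit
# proportional to 1/length, constant evaporation. Purely illustrative numbers.
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.1
ANTS_PER_STEP = 20

for step in range(100):
    for _ in range(ANTS_PER_STEP):
        total = pheromone["short"] + pheromone["long"]
        # each ant picks a route with probability proportional to its pheromone
        route = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[route] += 1.0 / lengths[route]  # the shorter route is reinforced more per trip
    for route in pheromone:
        pheromone[route] *= (1.0 - EVAPORATION)   # old trails fade unless refreshed

print(pheromone)  # pheromone on "short" ends up far higher: the colony "found" the shorter way
```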
A quite similar effect can be observed with evolutionary algorithms.
They are used for solving problems whose exact optimum cannot be found (e.g. for time reasons). Evolutionary algorithms find solutions that get close to the optimum.
Regardless of whether somebody wants to believe in a god as the designer of evolution, the fact is that the effect of evolutionary algorithms results from the way the system is organized, making a central controlling unit obsolete.
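A minimal sketch of the idea (the toy problem and all numbers are assumptions for illustration): the "problem" is to find the x that minimizes (x - 3)^2, the worst individuals are eliminated each generation, and the survivors produce noisy copies of themselves. No individual carries a concept of the optimum, yet the population drifts toward it.

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2          # higher is better; the (unknown) optimum is x = 3

population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                  # selection: eliminate the worse half
    offspring = [x + random.gauss(0, 0.5) for x in survivors]    # reproduction with mutation
    population = survivors + offspring

print(max(population, key=fitness))  # very close to 3.0 after a few generations
```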
But why call optimization methods intelligence?
Let's come back to the swarm intelligence. Here we have the same effect.
Keeping a large number of living creatures together instead of having them spread in all directions, and further having them all do more or less the same thing, is an effort which would demand a high performance from a central controlling unit guiding the swarm with its single intelligence.
This is probably why this output is considered the result of some kind of intelligence.
Again we have the same effect:
It has been possible to prove that keeping up the functionality of a swarm can be achieved with only a handful of rules. Again you can apply these algorithms to robots or reconstruct the swarm behaviour in computer simulations to see how the effect shows up when mindless virtual units simply follow a couple of instructions.
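A sketch of that "handful of rules", in the spirit of Reynolds' boids but reduced to one dimension and two rules (all constants here are arbitrary): each unit steers slightly toward the average position of the group and away from any neighbour that gets too close. Nothing guides the swarm, yet it holds together.

```python
import random

positions = [random.uniform(0.0, 100.0) for _ in range(10)]

for step in range(200):
    center = sum(positions) / len(positions)
    new_positions = []
    for i, p in enumerate(positions):
        move = 0.05 * (center - p)                     # rule 1: cohesion
        for j, q in enumerate(positions):
            if i != j and abs(q - p) < 1.0:
                move += 0.1 * (p - q)                  # rule 2: separation
        new_positions.append(p + move)
    positions = new_positions

print(max(positions) - min(positions))  # much smaller than the initial spread: the group holds together without a leader
```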
The important thing to take note of here is that these mindless units or living creatures do not need to have the intention of creating that effect.
They do not need a concept of distances, optimizations, swarms or whatever.
In all these cases the individuals' behaviour results in an "intelligent" system behaviour.
In other words what we observe here is an emerging system intelligence.
The system "swarm" as an entity can show an intelligence which the individual may have no idea of.
I wouldn't say this should lead to too much optimism, because there might certainly also be cases in which the system swarm shows signs of stupidity of which the individual has no idea; but the point I want to make is not whether or not mankind can be saved...
So the fact that a "swarm" as an entity CAN show intelligent behaviour should not lead us to believe that it MUST show intelligent behaviour.
First of all it's more than obvious that this kind of intelligence is actually far more primitive than anything that we connect to the word "mind".
Having realized that this intelligence is not a result of the participants' IQ but also emerges when simulated by mindless virtual units, the next question I am asking is:
Why should it only be observed among intelligent or living creatures?
The fact that we can easily simulate these effects in a computer shows that they are simply based on logical principles that could also appear in non-living systems.
In this case, would we call it intelligence?
People tend to say no.
They tend to demand something like a consciousness or an intention motivating a behaviour before they call it the result of intelligence, considering it the product of mind and/or reasoning.
The whole discussion about Searle's Chinese Room and the Turing test is an example of how this idea of intelligence dominates at least one branch of philosophy, and it is actually pretty dominant in people's minds in general.
For a complete understanding of the term intelligence all of these concepts definitely have to be taken into account; however, my purpose is to talk about a different aspect of intelligence.
So whenever I use the word intelligence I do not refer to it as the output of mind or reasoning.
The developments in the field of information technology have caused a slightly more liberal usage of the word intelligence, with talk of strong and weak artificial intelligence.
The so-called weak AI is interested in even the lowest units of intelligence.
Anything that could possibly be measured as a positive intelligent output is of interest for weak AI, e.g. a simple algorithm that causes a group of machines to find the shortest distance to a target (ant algorithms), which can be considered a cognitive process.
In the case of AI we have humans putting this intelligence into the system, and in the case of ants we have living creatures being involved, so one could argue that this intelligent output can only appear where a more or less intelligent creature causes it from the background.
My hypothesis is that self-organization itself comes with the whole potential of intelligence, containing both: the molecules of intelligence and the glue that accumulates them to what we call mind.
And if somebody is curious about it, it would be my pleasure to explain how and why.
 
paulhanke
 
Reply Mon 29 Dec, 2008 11:13 am
@Exebeche,
Exebeche wrote:
My hypothesis is that self-organization itself comes with the whole potential of intelligence, containing both: the molecules of intelligence and the glue that accumulates them to what we call mind.
And if somebody is curious about it, it would be my pleasure to explain how and why.


... a very interesting topic! ... from the importance of morphological computation (embodiment) in robotics, to the idea of scaffolding in cognitive science, it seems that the concept of "intelligence" is becoming less and less brain-centered and more and more a complex dynamic of brain-body-world ... what's also interesting is that it appears that philosophical germs of these ideas were put forth over 50 years ago ... e.g., Merleau-Ponty's ideas regarding the "reversible" (i.e., feedback) relationships between embodied self-others-things in a never-ending creative process of "expression", where expressed creations sediment into new layers of reality that provide the ground for the next waves of creative expression (if you've read anything on spontaneous self-organization in non-equilibrium thermodynamic systems - which it sounds like you have - this should sound exceedingly familiar!) ... anyhoo, I'd be interested in hearing more of your thoughts on this subject!
 
Aedes
 
Reply Mon 29 Dec, 2008 12:50 pm
@Exebeche,
Exebeche wrote:
For example, a company as an entity is expected to behave more intelligently (than, say, another one) when its employees have more intelligence to contribute.
I highly doubt that this is universally true, and I think overall you are really oversimplifying and overstating what intelligence is and how it operates.
 
paulhanke
 
Reply Mon 29 Dec, 2008 03:52 pm
@Aedes,
Aedes wrote:
I highly doubt that this is universally true, and I think overall you are really oversimplifying and overstating what intelligence is and how it operates.


... does intelligence really have an operational ("glass box") definition? ... or is it rather a functional ("black box") definition? ... or perhaps a categorical (terra-centric, bio-centric, anthropocentric, etc.) definition? ...
 
Aedes
 
Reply Mon 29 Dec, 2008 04:47 pm
@Exebeche,
Intelligence not only has many definitions, both qualitative and quantitative, but there are thought to be many discrete and distinct types of intelligence. IQ has many technical and interpretive limitations.

But an important point is that for a group to act intelligently there needs to be complementarity -- for everyone to be smart just doesn't work. You need leadership, communication, differentiation of skills and tasks, etc.
 
paulhanke
 
Reply Mon 29 Dec, 2008 05:24 pm
@Aedes,
Aedes wrote:
But an important point is that for a group to act intelligently there needs to be complementarity -- for everyone to be smart just doesn't work. You need leadership, communication, differentiation of skills and tasks, etc.


... ah - I think I see the disconnect here ... the sentence you quoted from Exebeche's original post is part of a brief dismissal of popular corporate notions of "organizational intelligence" so that he can get on with the business of talking about what an AI researcher might mean when discussing "organizational intelligence" ... perhaps the dismissal is a bit shallow, but then again perhaps not (your description of an engineered corporate organization is often a lesson learned by start-ups via the school of hard knocks) ... at any rate, the popular corporate notions of "organizational intelligence" could equally be dismissed as a form of engineered intelligence, since it is self-organized intelligence that Exebeche seems to want to discuss ...
 
Exebeche
 
Reply Tue 30 Dec, 2008 05:02 am
@paulhanke,
Thanks a lot for the response.
It is just the way PaulHanke explained it.
I am in fact not too close to the idea of collective intelligence, and I wrote this (maybe a little too) short explanation of the term only to draw a clear line between the terms collective and organizational intelligence.
In fact there will be opposition from some directions not allowing me to make such a cut; for example, I read about a Japanese trend which is about to monopolize the term organizational intelligence and instrumentalize it for the old idea of a synergetic effect caused by a community.
Whatever the case, working with a term that is subject to many definitions is not uncommon, making it not only legitimate but even necessary to exclude part of the definitions.
In other words, as long as we are clear about what precisely is the subject of our discussion we have a basis for it, regardless of whether the definition is right or wrong.

For that reason I want to clarify that I am NOT talking about a definition of intelligence that is universally true.
I am also not trying to find a definition of the word intelligence that covers all its meanings, including reasoning and mind.
I do NOT dismiss the general importance of reasoning and mind in the discussion about intelligence.
But I DO explicitly exclude these aspects of intelligence when I talk about organizational intelligence.
I refer to organizational intelligence as a simple kind of intelligence in the sense used by weak AI.

And I will come up with a few ideas about the origins of intelligence; I don't think I am promising too much when I say it's going to be at least exciting.
Doing this in a foreign language, however, might take me a few minutes more than usual.
For now I say thank you for your interest.
 
Exebeche
 
Reply Tue 30 Dec, 2008 07:55 pm
@paulhanke,
paulhanke wrote:
what's also interesting is that it appears that philosophical germs of these ideas were put forth over 50 years ago ... e.g., Merleau-Ponty's ideas regarding the "reversible" (i.e., feedback) relationships between embodied self-others-things in a never-ending creative process of "expression", where expressed creations sediment into new layers of reality that provide the ground for the next waves of creative expression


Wow,
I have to find out more about this one.

Now I have to keep a promise...
Let me begin with functional information processing.
Functional information processing can take place without an intelligent unit.
A pressure valve is a classic example of a self-regulating feedback loop.
The function is obvious: It keeps the system stable.
One more word about the information processing: cybernetics divides the feedback loop into four phases:
Information input, processing, output and feedback.
If this looks somehow familiar, it might be because the first three of these phases have been well known as the IPO principle in informatics since the days of von Neumann.
Since cybernetics is the science of control, it is naturally at the same time a science of information.
It is not, however, what many assume: a science of machines. Though used for industrial purposes, principles like the self-regulating feedback loop are found all over nature.
Our breath, our heartbeat and our sleep are examples of it, as is the relation between a rabbit population, the plants they eat, and the number of their predators. Even the oxygen of the atmosphere and the world climate are subject to self-regulating feedback loops.
From the tiniest organism to the global biosphere, nature is crowded with self-regulating feedback loops. This should not take us by surprise, because life itself is essentially based on this principle. We could even call it the logical structure that life is based upon, because when we look at the origin of the first protocells we find that the process called autopoiesis is based upon dissipative structures, which in turn are based on - you guessed it - the self-regulating feedback loop.
So special configurations of feedback loops can result in the appearance of dissipative structures, some special ones of which CAN show autopoietic behaviour.
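To illustrate the four phases with something runnable, here is a toy pressure-valve loop (the model and every constant in it are assumptions for illustration): the deviation from a setpoint is measured (input), a valve opening is computed from it (processing), pressure is released through the valve (output), and the resulting pressure becomes the next measurement (feedback).

```python
SETPOINT = 5.0   # desired pressure
GAIN = 1.0       # how strongly the valve reacts to a deviation
INFLOW = 1.0     # pressure added per time step

pressure = 0.0
for step in range(30):
    error = pressure - SETPOINT              # input:      measure the deviation
    valve_opening = max(0.0, GAIN * error)   # processing: decide how far to open the valve
    outflow = valve_opening                  # output:     release pressure through the valve
    pressure += INFLOW - outflow             # feedback:   the result is the next input
    if step % 5 == 0:
        print(f"step {step:2d}  pressure {pressure:.2f}")
# the pressure rises and then levels off a little above the setpoint - the loop keeps
# the system stable although no component "knows" what stability is
```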
Why am I telling you all this?
Because looking at nature from a cybernetic perspective makes it a giant information processing machine.
This idea alone is enough to cause allergic reactions in many people, for two reasons:
Some can't stand mother earth being compared to a computer, others don't accept this picture because it's too close to a religious way of seeing nature as an intelligent giant Gaia-Organism.
Well, I don't want to discuss the Gaia theory, to be honest, but I think if we can free our minds of this idea for a minute, we will clearly see that as much as there is an obvious difference between nature and a computer, there is an undeniably huge process of information processing going on.
The most important difference from a computer is probably that there are no instructions for how to handle the information, as a program would provide.
The same kind of information gets processed billions upon billions of times in redundant loops. Still, it gets done.
And the most important thing: since the loops are interlocked, information can flow from one system to another. A single blade of grass drying out does not have a great effect, but billions of them drying out is information that will echo in the world of the rabbits' predators and in other systems interconnected with these.
As you can see, the information that gets processed here is quite analogue, as opposed to the digital information in our computers, which we have got so used to that we consider it the natural condition of information. (The transmission of information in neurons, by the way, is also considered analogue.)
I think there is no need to further explain the mechanisms. The life cycles of germs may serve as an example of how micro-cycles affect macro-cycles.
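A toy sketch of such interlocked loops (a crude discrete predator-prey model; every number here is made up for illustration): rabbits depend on a plant loop, predators depend on the rabbit loop, and a change in the plant loop "echoes" through the others.

```python
rabbits, foxes = 100.0, 10.0

for year in range(30):
    plants = 1.0 if year < 15 else 0.5       # after year 15, half the plants "dry out"
    rabbit_growth = 0.5 * plants             # the rabbit loop is coupled to the plant loop
    rabbits += rabbit_growth * rabbits - 0.02 * rabbits * foxes
    foxes   += 0.002 * rabbits * foxes - 0.4 * foxes   # the predator loop is coupled to the rabbit loop
    print(f"year {year:2d}  rabbits {rabbits:7.1f}  foxes {foxes:5.1f}")
# the drop in plants is never "announced" to the foxes, yet it shows up in their numbers
```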
Let me come back to the term functional information processing. The pressure valve had a certain function that was given by human intention - stabilizing the system. The natural feedback loop simply functions the same way without having been given that function by anybody.
What we see here is how a system can have an accidental benefit from its own behaviour.
We could call it a collateral profit.
Nature, being a huge web of stabilizing mechanisms, has turned life into something that has survived severe global catastrophes.
When a system benefits from its own behaviour this happens more or less accidentally. But it can simply be the result of the way something is organized.
It's the same with the ants, who accidentally optimize the transport route between a food source and their anthill.
We called that optimization a somewhat cognitive process. The fact that ant algorithms are considered a problem-solving strategy allows us to do so.
This leads me to the really mind-blowing effects of the global feedback-loop network.
The so-called evolutionary algorithms are also considered problem-solving strategies. There's nothing mysterious about them, just a set of rules, too, like "eliminate a certain number of individuals, then mix the properties of the individuals ... etc." (In case somebody is interested, I have a tiny open source Java program that demonstrates on a very simplified basis how baffling the effects of these primitive algorithms are.)
Now the fact that information can travel from one cycle into another actually means that no information really gets lost.
Even missing information (like that from an ending heartbeat) can flow into the meta-systems.
This is what allows the system to find cognitive ways of solving problems that actually belong to the individuals.
What causes an insect to look like the leaf of a tree?
The idea of camouflaging oneself as a leaf is an intellectual effort that is far beyond an insect's capacities. One might say that these are just processes of accommodation; of course they are, but they are not a coincidence: they are a result of information processing.
In nature we see millions of problem-solving strategies.
In fact the feedback-loop network has the effect that creatures that are not able, as individuals, to perform the intellectual effort needed to solve a certain problem nonetheless pass through a phase of problem solving as a complete species - an effect similar to the swarm effect, except that the swarm (the species) is extended not spatially but temporally.
This is in my eyes a mind-blowing intelligent output, caused by the way the system is organised.
Besides, it's a better explanation of those puzzling effects for which many people still want to credit a "higher" intelligence as a designer.
There are outputs that can be measured as intelligent, but they are not caused by a designer.

I want to make the step from organizational to actual intelligence - after New Year's Eve.
 
paulhanke
 
Reply Tue 30 Dec, 2008 08:53 pm
@Exebeche,
... fascinating stuff, ain't it???!!! ... personally, I'm still wrestling with the use of the term "information" in these contexts ... to most engineers, "information theory" is Shannon or Kolmogorov - whereas some folks working on emergence as it applies to mind think Peirce's semiotics is a better conception of information in this respect ... but in any case, I think that looking at what ants do through the lens of "information processing" needs to be understood for what it is: a model - and as is the case with all models, certain elements of the real world are ignored in order for the models be understandable and tractable ... unanticipated experimental results like those of Bird and Layzell (http://www.informatics.sussex.ac.uk/research/groups/ccnr/Papers/Downloads/Bird_CEC2002.pdf) demonstrate how important this is to keep in mind.

Another mind-blowing aspect of insect intelligence is just how much of the computation is done by interaction with the environment ... the whole concept of stigmergy - communication and coordination through the environment - is just too cool! ... the fact that the environment can accept ant pheromones while at the same time evaporating them at a certain rate is an absolutely critical element of ant intelligence - the environment is, in fact, an ant colony's short-term memory! ... likewise, the environment is, in fact, an ant colony's long-term memory - the system of tunnels that make up an ant hill constrain the behavior of individual ants in habitual ways ... seeing the components of an emergent intelligence laid out in this way is just incredible! ... and is also another reason why I'm still wrestling with the use of the term "information" in these contexts - to characterize an ant colony as "information processing" seems to do small justice to what is occurring here! (but that's not to imply that an information processing perspective isn't extremely insightful!)

Then there's the long-term dynamics of the ant hill ... a lot of research has been done as far as the ant colony's short-term intelligence (optimizing location of and access to food sources), but what about the ant colony's long-term behavior? ... how much of this is constrained by the "habits" that have accreted over time into the ant hill itself? ... how changeable are these habits? (that is, to what extent can an ant colony reconstruct its ant hill in order to change the colony's behaviors in response to environmental pressures?)

This stuff is too cool!!! ... looking forward to your next post!
 
Exebeche
 
Reply Fri 2 Jan, 2009 04:11 pm
@paulhanke,
Honestly, I hadn't expected to find somebody of your competence so quickly.
You provide some really exciting input for me to do some research and reading on.
I need to find out more about Peirce, who admittedly was new to me. In fact the term "information" is a highly critical point in this discussion, so complex that it certainly deserves at least its own thread.
I'm looking forward to discussing it; my personal favorite in terms of the definition of information is Tom Stonier, who did some amazing investigation on the physics of information.
The idea of the environment being part of a swarm's memory is of course extremely inspiring.
Who knows, we might end up seeing mind and nature as reflections of each other.
The anthill reminds me of another thing:
Those termites in Zimbabwe cultivate a fungus that has to be kept at precisely 87 degrees F, while the outside temperature shifts between 35 and 107 degrees. In the heat of Africa they have developed a natural air-conditioning system based on chimney architecture and other techniques which we would find amazing even if it were the result of ancient Maya or other human architecture.
Being the output of insect intelligence makes it a miracle.
Of course it would be overly simplified to reduce this and other outputs to the effect of the information processing done by feedback loops only. There are processes taking place that can be described as evolutionary or genetic algorithms and a lot more.
But it's the billions of interlocked feedback cycles that guarantee that the information is passed on and actually processed instead of simply transmitted.
The difference between transmission and processing of information shall be discussed after I keep my promise of making the step from organizational to actual intelligence, which should be my next post.
For today thank you for your inspiring input.
 
Exebeche
 
Reply Sat 3 Jan, 2009 06:00 pm
@paulhanke,
The example of the self-regulating feedback loop shows how simple information processing can have a functional effect.
Typically, for the system that is based on the loop, the loop's functionality is literally existential - the loop keeps up the system's existence.
And that is basically the function that causes natural intelligence to emerge: Keeping up a system's existence.
This statement is likely to be criticised, as I know, and I want to make it a hundred percent clear what I see as the origin of intelligence:
The origin of intelligence is functional processing of information.
In case of natural intelligence the function of the information processing is keeping up the existence of the particular system.
The self-regulating feedback loop, which has a lot in common with the regular information processing we know from our computers (see above),
is an example of how even the most primitive mechanisms in nature can provide functional information processing.
For a system to establish itself in an environment that is not strictly friendly, the most relevant information it can get will normally be information about this environment.
As I explained, a system like the biosphere is open in every direction to the input of information by means of feedback loops.
A living organism, however, by the nature of dissipative structures, needs to have an energetic opening but also a boundary that separates it from the environment. So any information about the environment has to come either through the energetic opening or, for the more advanced organisms, from an organ of perception.
One of the first organs of perception we can locate is the photosensitive organelle carried by the protozoan Euglena. A simple pigment spot shades the organelle depending on how Euglena is oriented relative to the source of light. Since Euglena doesn't have anything like neurons it doesn't "see" the light, but it simply responds with an automatic reaction whenever the shadow appears. This automatic reaction could be coded in a simple algorithm:
If shadow=true then move flagellum 1.
This simple instruction results in Euglena always heading for the light, which means an enormous increase in success at finding food.
The information processing, as we can see, has a primary and a secondary function: triggering a movement in the first place, and furthermore helping the system to survive.
Although we can hardly even call it perception, this simple information processing already makes the protozoan's behaviour considerably more intelligent.
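Here is a runnable version of that one-line rule, under toy assumptions (the cell's heading is a single angle, the shadow appears whenever the cell is not roughly pointed at the light, and the only possible response is a small random change of course; all constants are arbitrary):

```python
import math
import random

LIGHT = 0.0                                   # direction of the light source
heading = math.radians(150)                   # start pointing well away from it

for step in range(5000):
    off = math.atan2(math.sin(heading - LIGHT), math.cos(heading - LIGHT))
    shadow = abs(off) > math.radians(10)      # the pigment spot shades the organelle while misaligned
    if shadow:
        heading += random.uniform(-0.3, 0.3)  # "move flagellum 1": change course a little
    # no shadow: keep swimming straight, i.e. keep heading toward the light

print(round(math.degrees(heading) % 360, 1))  # typically near 0 or 360: facing the light
```

No representation of "light" or "goal" appears anywhere in the rule; the phototaxis falls out of the coupling between the rule and the geometry of the situation.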
I guess this is not sufficient though to be a valid substantiation for my statement of functional information processing being the root of intelligence.
Let me give another example:
A very primitive little organism living in fresh water, the hydra, has the ability to "sense" whether something that gets into its tentacles is a water flea or not. A particular protein molecule that will hardly be found anywhere else but in the skin of a water flea causes a chemical reaction which results in the reflex of the mouth opening.
So again, as in Euglena's case, we don't have real "sensing" here, because the organism is far from having neurons. It's a simple chemical reaction taking place, and thus nothing but information processing (though with a certain function).
The fact that its behaviour is more intelligent with this function than without it is obvious: not only would it be a waste of energy to try to catch any crab it can get, but it could also do serious harm.
It's equally obvious that ANY ability that functions in a way comparable to a "sense" raises the intelligence of an organism's behaviour:
Taste the difference between poison and food. Smell or hear or see a predator. Smell or hear or see food/prey.
And the actual point that I am trying to make is that these primitive information-processing mechanisms are of course just pre-stages of what turned into sense organs. Being instruments of functional information processing, the more efficiently an organism used its sense organs, the more intelligently that organism behaved.
And here comes the key point:
The instruments of information processing which we call sense organs actually provided the physical basis for the development of intelligence.
As the sense organs became more and more elaborate, the connections between them came more and more into play, also turning into something more and more elaborate.
They turned into neurons being able to process increasingly complex information, expanding and growing during evolution, knotted and swelling to lumps and finally working as what today we call the brain.
The intelligence organ.
An organ being able to process information so complex that we hardly know what it's good for.

So as we see: functional processing of information can take place simply by virtue of the way something is already organised.
The interconnectedness of the whole system caused an accumulation that is not itself simple, but is simply caused by that organisation.
 
Holiday20310401
 
Reply Sat 3 Jan, 2009 07:22 pm
@Exebeche,
I'm really interested in all this and I have many questions. Would it be ok to post some here? I've been reading some stuff off the internet, any newbie sites you guys could recommend?
 
paulhanke
 
Reply Sat 3 Jan, 2009 08:15 pm
@Exebeche,
Exebeche wrote:
The instruments of information processing which we call sense organs actually provided the physical basis for the development of intelligence.


... and also provide the physical basis for the development of life ... it has been argued that life is autopoiesis + cognition ... that is, an autopoietic system that does not sense and adapt to its environment is only proto-life ... anyhoo, all great stuff you've got here! :a-ok:

By the way, I've been reading a book entitled Mind in Life: Biology, Phenomenology, and the Sciences of Mind by Evan Thompson (a Canadian philosopher) ... it is an attempt to weave together multiple points of view in order to fill all of the "explanatory gaps" that continue to haunt cognitive science and AI ... I read the first half of it about six months ago, but then decided I didn't have enough knowledge of phenomenology to really "get" Thompson's arguments ... so I've spent the last six months diving into philosophy ... phenomenology, obviously, but also process metaphysics and such (it would appear that scientific complexity/emergence theorists implicitly assume the ontological priority of process over matter, a significant deviation from the historical mainstream of Western metaphysics) ... at any rate, I'm back to skimming over the first half of Thompson's book - and yes, the background research has helped (often in unexpected ways - passages I thought I got the gist of on the first go-round now expose significant depth that I just didn't see before) ... but getting back to why I brought this up in the first place: you might find this book interesting ... to give you a taste of what Thompson's multidisciplinary fusion is all about, here's a brief quote:

Quote:

The first idea is that living beings are autonomous agents that actively generate and maintain themselves, and thereby also enact or bring forth their own cognitive domains. The second idea is that the nervous system is an autonomous dynamic system: It actively generates and maintains its own coherent and meaningful patterns of activity, according to its operation as a circular and reentrant network of interacting neurons. The nervous system does not process information in the computationalist sense, but creates meaning. The third idea is that cognition is the exercise of skillful know-how in situated and embodied action. Cognitive structures and processes emerge from recurrent sensorimotor patterns of perception and action. Sensorimotor coupling between organisms and environment modulates, but does not determine, the formation of endogenous, dynamic patterns of neural activity, which in turn inform sensorimotor coupling. The fourth idea is that a cognitive being's world is not a prespecified, external realm, represented internally by its brain, but a relational domain enacted or brought forth by that being's autonomous agency and mode of coupling with the environment. The fifth idea is that experience is not an epiphenomenal side issue, but central to any understanding of the mind, and needs to be investigated in a careful phenomenological manner.
(Evan Thompson, Mind in Life)

As you can see, Thompson covers much of the same ground you've just covered ... but he does so from a novel multidisciplinary perspective (which is why I think you'd find his book interesting!)

EDIT: an online review: Evan Thompson - Mind in Life: Biology, Phenomenology, and the Sciences of Mind - Reviewed by Charles Siewert, University of California, Riverside - Philosophical Reviews - University of Notre Dame
 
paulhanke
 
Reply Sat 3 Jan, 2009 08:41 pm
@Holiday20310401,
Holiday20310401 wrote:
I'm really interested in all this and I have many questions. Would it be ok to post some here? I've been reading some stuff off the internet, any newbie sites you guys could recommend?


... feel free to jump in anywhere! ... it's quite obvious that Exebeche is bursting at the seams to share his/her ideas ... myself, I'm enormously interested in this stuff, too ... unfortunately, I don't know of any worthwhile sites on the Internet dedicated to this stuff - I've just been following a trail of books on the subject, one bibliography at a time Wink
 
Exebeche
 
Reply Mon 5 Jan, 2009 05:52 pm
@paulhanke,
Hello Holiday,
of course you can ask whatever you're curious about.
I am glad to find people who share my fascination about this topic.
Telling you links is not so easy, though, especially because most of what I have read was in German. On the other hand, I remember that "Principia Cybernetica" is a very helpful link, for example for finding definitions, and you will notice that some things you don't find in the index can be found using the search function. There's far more than definitions to be found there, though.
("The web of life" by Fritjof Capra might be a good book to get started with.)

Hello Paul
paulhanke wrote:
... and also provide the physical basis for the development of life ... it has been argued that life is autopoiesis + cognition ... that is, an autopoietic system that does not sense and adapt to its environment is only proto-life ... anyhoo, all great stuff you've got here! :a-ok:

Thanx a lot.
You are even kind of paving the way for me. I was in fact going to make that connection.
Intelligence being rooted in particular kinds of self-organization, and autopoiesis being a certain kind of self-organization, the question becomes obvious: how big is the difference between life and intelligence after all?
The feedback loop, serving as an example of organizational intelligence as well as being an inevitable component of autopoiesis, represents a clear connection between organizational intelligence and life.
In fact the origins of life are based on functional processing of information.
This is probably the point where a definition of information becomes obligatory.
I want to try coming up with some ideas about information in my next post.
The book you recommended is definitely a good hint for me; too bad that I can't find it in German, because the vocabulary, Jesus, totally beats my everyday English. I guess I'm going to have to face it and get it in English, because what I read about it:
"Where there is life, Thompson argues, there is mind: life and mind share common principles of self-organisation, and the self-organising features of mind are an enriched version of the self-organizing features of life." sounds like somebody has already done all the work, and it will certainly save me from years of research on this topic if I just get this book and read it.
Funny. Many thanks one more time.
To me this is a confirmation of this (what I considered my own, haha) idea.
 
paulhanke
 
Reply Mon 5 Jan, 2009 06:12 pm
@Exebeche,
Exebeche wrote:
The feedback loop, serving as an example of organizational intelligence as well as being an inevitable component of autopoiesis, represents a clear connection between organizational intelligence and life.


... and to think that there was ever a "mind-body problem", eh?

Exebeche wrote:
To me this is a confirmation of this (what I considered my own, haha) idea.


... welcome to the club - I've never had a far-reaching idea that I didn't eventually find had already been thought by someone else ... the Internet is humbling that way Wink
 
Exebeche
 
Reply Mon 12 Jan, 2009 06:20 am
@paulhanke,
paulhanke wrote:

... welcome to the club - I've never had a far-reaching idea that I didn't eventually find had already been thought by someone else ... the Internet is humbling that way Wink


In a way yes.
On the other hand that's positive. Having found somebody who tells me about that book which is precisely about this topic gives me a chance of progressing ten times faster.

Now to the term "information":
You already mentioned Shannon's technical view of the term information, which is kind of competing with semantic interpretations.
In a way Shannon's idea of information is an anti-semantic one.
Why is that? First of all, Shannon chose Boltzmann's entropy equation as a basis for calculating the amount of information in a system.
We have to remember, though, what kind of systems Shannon was focused on: telephone cables.
We should not forget that Shannon explicitly denied having created a theory of information. His own statement was that he never had any intention of creating a theory of information. His concern was how a communication system has to be set up physically to deal with the problem of transmitting the maximum of information, at a time when nobody had an idea of how to count or measure information.
What he did was use Boltzmann's entropy equation as a measure of information.
This means the less ordered a system is, the more information it contains.
He had to do this because he had to treat anything as information regardless of its content. If you treat the snowy noise of dots on a TV set without reception as a signal, and you want to transmit it or store it on media so as to recall it precisely the way it was recorded, it actually demands the most resources, because any meaningful signal, like a picture, would be continuous and would thus take fewer resources.
Shannon analyzed his problem from a totally practical and technical perspective.
He never had an intention of finding an essential definition of what information is.
His point of view is totally functional, but only for systems that presume that anything has to be taken as information regardless of its content.
One could believe that this would be the way information has to be seen in physics.
But on the contrary:
It was Tom Stonier whose investigations also picked up on the entropy function but in the opposite direction.
According to Stonier it's the INVERSE entropy function that describes the amount of information a system contains.
The more entropic a system is, the less information it contains and vice versa. The more information you add to a system the more ordered (less entropic) it will become.
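The contrast can be made concrete with a few lines of code (using character frequencies as a crude stand-in for a source's statistics; the example strings are arbitrary): Shannon's per-symbol entropy is highest for noise, while on Stonier's reading it is the ordered string that embodies information.

```python
import math
import random
from collections import Counter

def entropy_per_symbol(text):
    """Shannon entropy in bits per symbol, estimated from character frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

ordered = "ABABABABABABABABABAB"                                   # highly ordered, predictable
noise = "".join(random.choice("ABCDEFGH") for _ in range(20))      # "snow" on the TV screen

print(entropy_per_symbol(ordered))  # 1.0 bit per symbol
print(entropy_per_symbol(noise))    # approaches 3 bits per symbol: Shannon-information peaks for noise
# Stonier's ranking is the inverse: the ordered string carries structure (low entropy),
# the noise carries almost none.
```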
Even though Stonier's work is completely based on physics, functional significance comes into play. His analysis includes systems becoming more complex, for which a chemical reaction can have a functional significance.
I don't have a precise memory right now, but it's easily explained what is meant by functional significance:
A system might, for its existence, need the presence of a particular molecule, like an enzyme that reduces the energy necessary for activating a reaction.
Reducing the activation energy is a physical process that an enzyme would also perform when floating somewhere where it has no meaning for anything. Some systems, however, benefit from this effect, and therefore the enzyme, when applied to such a system, has a functional significance in this particular place.
Needless to say, the enzyme's action is to be seen as an exchange of information.
Every physical event is an exchange of information.
Two atoms connecting to each other, building a molecule, exchange information. When a carbon atom and an oxygen atom meet, for example, they have a very precise exchange of information about how many electrons they carry and how many they would like to be carrying. The information exchanged is always understood by both, so we can rely on the result always being the same.
The reason why I am going into physical details is that this is actually the bridge between physical and semantic information.
Any content - and that means semantic information - descends from that: functional significance.
This is where the raw information makes a step up the ladder of complexity. As soon as it acquires functional significance, a piece of information can be used as a signal.
Whether it's the smell of fire alarming an animal, or the current starting the engine of your car, it's all signals.
And of course this is also where we can take the next step and talk about messages.
Signals can be used as signs and further turn into spoken or written words and anything that is subject of semantics and semiotics.
It's all a question of complexity. And therefore a question of how much you reduce the entropy thus adding information and increasing the order of a system.
It also shouldn't take us by surprise now that language is regarded as one essential component of mind by the branch of philosophy that is named after just that.
All the pieces of the puzzle fit together.
 
paulhanke
 
Reply Mon 12 Jan, 2009 08:17 pm
@Exebeche,
... I was figuring your ideas on "information" wouldn't be mainstream - and I wasn't disappointed! Wink ... good stuff! ... btw, I just finished reading an essay you might be interested in ... it's an essay on how substance metaphysics falls short when it comes to explaining mental representations - the author takes the position that process metaphysics is a better starting point ... the result is something the author calls "Interactivism", a philosophical position where mental representation is a complex dynamic network of contacts with the world and associated interactive possibilities which those contacts evoke ... http://www.lehigh.edu/~mhb0/ProcessEmergence.pdf ...
 
Sekiko
 
Reply Fri 16 Jan, 2009 04:06 pm
@Exebeche,
Fascinating stuff. I haven't read the whole thread, but you're giving me a lot of stuff to chew on. Danke!
 
Ultracrepidarian
 
Reply Sun 26 Apr, 2009 06:02 am
@Sekiko,
I enjoy thinking about these terms, although I do struggle with them.
My objection is that it seems important concepts eventually degenerate into something very, hmm, axiomatic-sounding. Information ends up sounding like another word for motion. Life into self-organization into pattern. Intelligence into feedback into, hmm, connectedness. And even as I feel these terms lose, hmm, specificity, I regard it as enlightening, because there is some truth to the proposition that there is much sameness in the world. The mind and its environment. Where the information comes from and where it goes. Ultimately, though, and this is the feeling from which I'm writing now, the lines become too blurred. I am tired. I've enjoyed reading this thread. I plan to read that paper you linked to, PaulHanke.
 
 

 