BBO Discussion Forums: Consciousness - BBO Discussion Forums


Consciousness: What's your favorite theory?

#81 User is offline   Al_U_Card 

  • Group: Advanced Members
  • Posts: 6,080
  • Joined: 2005-May-16
  • Gender:Male

Posted 2007-June-28, 08:52

helene_t, on Jun 28 2007, 03:06 AM, said:

DrTodd13, on Jun 28 2007, 02:04 AM, said:

What is the evolutionary benefit to sentience in a deterministic universe?

Tough one.

Not THAT tough....creativity.

Sentience begat consciousness that begat self-awareness that resulted in the one real thing that we share with creation.

All things evolve or devolve into states that optimize their creative potential. Only the inherent internal interference caused by the systemic structures of the developing psyche is an impediment to this creative potential. Our first step after the birth of our comprehension of this situation is the systematic elimination of everything that would impede this process.

Talk about raison d'être... :)
The Grand Design, reflected in the face of Chaos...it's a fluke!

#82 User is offline   jtfanclub 

  • Group: Advanced Members
  • Posts: 3,937
  • Joined: 2004-June-05

Posted 2007-June-28, 09:25

helene_t, on Jun 28 2007, 03:06 AM, said:

What is the evolutionary benefit to sentience in a deterministic universe?

A fawn and her mother are drinking at a stream. A mountain lion appears and chases the fawn.

If the fawn moves deterministically, she'll get caught: if all fawns move right, the mountain lion will have learned that and will anticipate the move to the right.

If the fawn moves randomly, she and her mom will never get together again, and the fawn will starve to death.

If the fawn moves sentiently, then the fawn will move towards something she 'likes'. Since the doe also knows what the fawn 'likes', the doe will know where to go. Since the mountain lion doesn't have that information, the mountain lion cannot anticipate where the fawn is going.

To me, the primary purpose of sentience is 'irrational' likes and dislikes. Having preferences which can be predicted by your friends but cannot be predicted by your enemies is a powerful survival tool.

With enough observation....
A Turing machine can predict the actions of another Turing machine.
A human can predict the actions of a Turing machine.
A Turing machine cannot predict the actions of a human.

That's how all the Turing machines made so far have been found out. With enough time talking to them, humans figure out that they're Turing machines because they become predictable.

#83 User is offline   luke warm 

  • Group: Advanced Members
  • Posts: 6,951
  • Joined: 2003-September-07
  • Gender:Male
  • Interests:Bridge, poker, politics

Posted 2007-June-28, 10:00

DrTodd13, on Jun 27 2007, 07:04 PM, said:

What is the evolutionary benefit to sentience in a deterministic universe? It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything. Sentience combined with illusory free will might convince you you had some control when you didn't but I still don't see the evolutionary benefit of it. Sentience is just an accident of increased brain power?

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?
"Paul Krugman is a stupid person's idea of what a smart person sounds like." Newt Gingrich (paraphrased)

#84 User is offline   DrTodd13 

  • Group: Advanced Members
  • Posts: 1,156
  • Joined: 2003-July-03
  • Location:Portland, Oregon

Posted 2007-June-28, 11:23

Al_U_Card, on Jun 28 2007, 06:52 AM, said:

helene_t, on Jun 28 2007, 03:06 AM, said:

DrTodd13, on Jun 28 2007, 02:04 AM, said:

What is the evolutionary benefit to sentience in a deterministic universe?

Tough one.

Not THAT tough....creativity.

Sentience begat consciousness that begat self-awareness that resulted in the one real thing that we share with creation.

All things evolve or devolve into states that optimize their creative potential. Only the inherent internal interference caused by the systemic structures of the developing psyche is an impediment to this creative potential. Our first step after the birth of our comprehension of this situation is the systematic elimination of everything that would impede this process.

Talk about raison d'être... :blink:

Creativity cannot exist in a deterministic universe.

#85 User is offline   DrTodd13 

  • Group: Advanced Members
  • Posts: 1,156
  • Joined: 2003-July-03
  • Location:Portland, Oregon

Posted 2007-June-28, 11:34

luke warm, on Jun 28 2007, 08:00 AM, said:

DrTodd13, on Jun 27 2007, 07:04 PM, said:

What is the evolutionary benefit to sentience in a deterministic universe?  It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything.  Sentience combined with illusory free will might convince you you had some control when you didn't but I still don't see the evolutionary benefit of it.  Sentience is just an accident of increased brain power?

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?

I guess I would conclude that it was a result of the randomness of evolution. If I believed this way though I might start to conclude that dividing the world into "me and everything else" doesn't make much sense and I might start to lose the notion of "I."

#86 User is offline   Al_U_Card 

  • Group: Advanced Members
  • Posts: 6,080
  • Joined: 2005-May-16
  • Gender:Male

Posted 2007-June-28, 12:27

DrTodd13, on Jun 28 2007, 12:23 PM, said:

Al_U_Card, on Jun 28 2007, 06:52 AM, said:

helene_t, on Jun 28 2007, 03:06 AM, said:

DrTodd13, on Jun 28 2007, 02:04 AM, said:

What is the evolutionary benefit to sentience in a deterministic universe?

Tough one.

Not THAT tough....creativity.

Sentience begat consciousness that begat self-awareness that resulted in the one real thing that we share with creation.

All things evolve or devolve into states that optimize their creative potential. Only the inherent internal interference caused by the systemic structures of the developing psyche is an impediment to this creative potential. Our first step after the birth of our comprehension of this situation is the systematic elimination of everything that would impede this process.

Talk about raison d'être... :blink:

Creativity cannot exist in a deterministic universe.

I can appreciate the intent, but that statement, deterministically, is creative. B)
The Grand Design, reflected in the face of Chaos...it's a fluke!

#87 User is offline   DrTodd13 

  • Group: Advanced Members
  • Posts: 1,156
  • Joined: 2003-July-03
  • Location:Portland, Oregon

Posted 2007-June-28, 16:01

The statement didn't exist before I typed it...that is true but that doesn't mean it is creative. I think you are confusing new and creative.

I couldn't find my copy of Shadows of the Mind, but the previous link contains a summary of Penrose's suggested proof. Some commentary on the proof is below, and a response to the commentary by Penrose is available.

#88 User is offline   luke warm 

  • Group: Advanced Members
  • Posts: 6,951
  • Joined: 2003-September-07
  • Gender:Male
  • Interests:Bridge, poker, politics

Posted 2007-June-28, 16:42

DrTodd13, on Jun 28 2007, 12:34 PM, said:

luke warm, on Jun 28 2007, 08:00 AM, said:

DrTodd13, on Jun 27 2007, 07:04 PM, said:

What is the evolutionary benefit to sentience in a deterministic universe?  It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything.  Sentience combined with illusory free will might convince you you had some control when you didn't but I still don't see the evolutionary benefit of it.  Sentience is just an accident of increased brain power?

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?

I guess I would conclude that it was a result of the randomness of evolution. If I believed this way though I might start to conclude that dividing the world into "me and everything else" doesn't make much sense and I might start to lose the notion of "I."

that seems a reasonable conclusion... how reasonable is this one? God created you in his image, said image including (of necessity) sentience... he preordained all things having to do with his creatures (you included)... would you possess free will or simply the perception of free will? if perception, how does that differ from reality?

i don't really want to get a religious conversation going, so feel free to ignore this if you want..
"Paul Krugman is a stupid person's idea of what a smart person sounds like." Newt Gingrich (paraphrased)

#89 User is offline   Al_U_Card 

  • Group: Advanced Members
  • Posts: 6,080
  • Joined: 2005-May-16
  • Gender:Male

Posted 2007-June-28, 16:47

DrTodd13, on Jun 28 2007, 05:01 PM, said:

The statement didn't exist before I typed it...that is true but that doesn't mean it is creative.  I think you are confusing new and creative. 

I couldn't find my copy of Shadows of the Mind, but the previous link contains a summary of Penrose's suggested proof. Some commentary on the proof is below, and a response to the commentary by Penrose is available.

I can hear the sound of hairs splitting in the distance..... :)

Creativity is what you make (of it) :)

As for religious arguments......I have no problem with pre-ordination but my creative spark will ALWAYS be able to "remake" what I was able to make "initially" which is why it is important to realize (make real) this. B)
The Grand Design, reflected in the face of Chaos...it's a fluke!

#90 User is offline   mikeh 

  • Group: Advanced Members
  • Posts: 13,176
  • Joined: 2005-June-15
  • Gender:Male
  • Location:Canada
  • Interests:Bridge, golf, wine (red), cooking, reading eclectically but insatiably, travelling, making bad posts.

Posted 2007-June-28, 17:03

There is no way that so many sentient posters would have devoted so much time to such a meaningless thread if any of us possessed a shred of free will. The mere fact that we have written and read so much with respect to a subject that, if it exists, cannot ever be proved (free will) suggests that we had no choice.... no sentient being with free will would waste this much time. Therefore, free will does not exist, at least in BBF.

QED

Or maybe we ain't that sentient? :)
'one of the great markers of the advance of human kindness is the howls you will hear from the Men of God' Johann Hari

#91 User is offline   DrTodd13 

  • Group: Advanced Members
  • Posts: 1,156
  • Joined: 2003-July-03
  • Location:Portland, Oregon

Posted 2007-June-28, 17:28

luke warm, on Jun 28 2007, 02:42 PM, said:

DrTodd13, on Jun 28 2007, 12:34 PM, said:

luke warm, on Jun 28 2007, 08:00 AM, said:

DrTodd13, on Jun 27 2007, 07:04 PM, said:

What is the evolutionary benefit to sentience in a deterministic universe?  It seems like it would only cause a feeling of helplessness if you realized that you were merely a captive spectator to life and couldn't influence anything.  Sentience combined with illusory free will might convince you you had some control when you didn't but I still don't see the evolutionary benefit of it.  Sentience is just an accident of increased brain power?

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?

I guess I would conclude that it was a result of the randomness of evolution. If I believed this way though I might start to conclude that dividing the world into "me and everything else" doesn't make much sense and I might start to lose the notion of "I."

that seems a reasonable conclusion... how reasonable is this one? God created you in his image, said image including (of necessity) sentience... he preordained all things having to do with his creatures (you included)... would you possess free will or simply the perception of free will? if perception, how does that differ from reality?

i don't really want to get a religious conversation going, so feel free to ignore this if you want..

I can, and do, accept a God who would create beings with free will and sentience. What I could not accept is a God who created beings with sentience but not free will, especially if he gave them immortal souls, some of which will be tormented for deeds they were powerless not to do.

At this point, I see no reason to be pessimistic about the likelihood that this issue will be resolved scientifically someday. Is this discussion a waste of time? Probably, but what else could we be doing and why is that any more meaningful than debating this topic? Meaning has no meaning in a deterministic universe so perhaps this is the first thing we should decide. Does anything have meaning? If no, become a hedonist. If yes, then do something meaningful.

#92 User is offline   akhare 

  • Group: Advanced Members
  • Posts: 1,261
  • Joined: 2005-September-04
  • Gender:Male

Posted 2007-June-29, 00:09

helene_t, on Jun 20 2007, 03:11 AM, said:

I still liked his book "The emperor's new mind" a lot, though. Natural scientists can sometimes write refreshing stuff abut problems that usually belongs to the humanities. I like what he writes about the role of language in consciousness:

I too agree w/ your comments about Penrose. I really liked his first book, but can't really say I care too much about "Shadows of the Mind". Besides, the whole book sounded like a tirade against AI and seemed to be bent on proving that (silicon) machines are incapable of becoming conscious. I went to one of his lectures and while it was very good for the most part, it did in the end get derailed by a prolonged discussion of how "quantum processes" in the neurons in our brains give rise to "emergent phenomena" like consciousness.

Granted, it's plausible, but how is this different from a conjecture that, say, claims that the quantum processes in our heart (or pick an organ of choice) give rise to the "soul"? Of course, the concept of consciousness is slightly more tangible and secular than the soul, but once we start blending physics with metaphysics, it's a slippery slope.

So, do I think that we are nothing but carbon machines and that the silicon ones will become sentient at some point in the future? Frankly, I don't know and doubt I will find out in my lifetime either. Anyway, enough of this and back to working on Skynet in due earnestness -- after all, we need those androids from the future who will help answer some of these questions :D...
foobar on BBO

#93 User is online   mike777 

  • Group: Advanced Members
  • Posts: 17,045
  • Joined: 2003-October-07
  • Gender:Male

Posted 2007-June-29, 00:35

"....too agree w/ your comments about Penrose. I really liked his first book, but can't really say I care too much about "Shadows of the Mind". Besides, the whole book sounded like a tirade against AI and seemed to be bent on proving that (silicon) machines are incapable of becoming conscious. I went to one of his lectures and while it..."


I repeat: even if we assume an AI cannot be conscious, that consciousness for it is against the laws of known science however you define it, can it still be 100 million times more "intelligent" than the entire human race by 2050? If so, does it matter?

Just sidestep such terms as conscious, alive, or free will.

Even if we assume that the AI does not possess or cannot possess any of the above, it does not follow that we can assert 100% control over it or understand it fully.

#94 User is offline   helene_t 

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,223
  • Joined: 2004-April-22
  • Gender:Female
  • Location:Copenhagen, Denmark
  • Interests:History, languages

Posted 2007-June-29, 00:49

luke warm, on Jun 28 2007, 06:00 PM, said:

assume there is no evolutionary benefit to sentience in a universe where you have no free will, for the sake of argument.. now assume you in fact have no free will, yet are sentient... what conclusion do you draw?

Strange assumptions, IMHO. There are plenty of theories about the evolutionary advantages of perception, empathy, memory and mental images. I'm not aware of any that involve free will. Whether those psychological phenomena constitute sentience may be a matter of semantics or metaphysics, but they do seem related to sentience in some way. Note that while jtfanclub's scenario involves what I would call "the illusion of free will", he talks about "sentience" rather than "free will". That is quite typical, I think. (Btw, I'm sceptical as to jtfanclub's final remarks about Turing machines, which seem to be based on Penrose's interpretation of consciousness. Then again, what do I know about computability.)

Also, if you talk about evolutionary advantage you should try to specify: advantage to whom? There are plenty of genetic traits that evolved not because of the advantage to the individual or species that possesses the gene, but to something else, such as to the gene itself, or to a parasite that induced the selective advantage of the gene.

Or even have no advantage to anyone. Aging, for example, probably gives net selective disadvantages to the genes that "cause" aging, in the sense that alternative alleles would have given the individual a longer life span. This doesn't make aging an evolutionary mystery, of course. The same genes may protect the individual against cancer by shortening the telomeres, thereby ultimately letting the individual succumb to aging. Or the selective pressure against aging may be too weak to overcome random decay. After all, a healthy living body decays much slower than a dead body, so the "trait" of aging is something like the "trait" of not being able to jump to Cassiopeia.

All this notwithstanding, one might be tempted to draw the conclusion that under your assumptions, sentience does not seem to have a genetic basis. That would be interesting, but not shocking. Some see sentience as a cultural trait.

Todd said:

Meaning has no meaning in a deterministic universe

Sic! You must have a completely different notion about those concepts than I have, since to me this is utterly absurd. You said the same about creativity. I would say that the issue of "determinism" should be confined to the ivory tower of theoretical physics, or maybe even to that of metaphysics, while "creativity" and "meaning" are down-to-Earth concepts that exist in a cultural context and can be discussed without reference to neurophysiology, let alone physics. I can barely think of concepts more distant from each other than "determinism" versus "creativity" and "meaning".

That's just me, I'm sure your notion is as coherent as mine. But I do find it difficult to empathize with that notion. I can empathize a little bit with the belief in "free will" because I have been brought up in a culture where many people seem to believe in free will.
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#95 User is offline   helene_t 

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,223
  • Joined: 2004-April-22
  • Gender:Female
  • Location:Copenhagen, Denmark
  • Interests:History, languages

Posted 2007-June-29, 00:52

mike777, on Jun 29 2007, 08:35 AM, said:

Even if we assume that the AI does not possess or cannot possess any of the above, it does not follow that we can assert 100% control over it or understand it fully.

Good point. Mikeh said that if we really possessed free will we would not be discussing this topic. Maybe some day someone will invent a computer that really possesses free will, and that computer would not worry about sentience and free will but would spend its time philosophizing about control and understanding instead.
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#96 User is online   mike777 

  • Group: Advanced Members
  • Posts: 17,045
  • Joined: 2003-October-07
  • Gender:Male

Posted 2007-June-29, 01:04

helene_t, on Jun 29 2007, 01:52 AM, said:

mike777, on Jun 29 2007, 08:35 AM, said:

Even if we assume that the AI does not possess or cannot possess any of the above, it does not follow that we can assert 100% control over it or understand it fully.

Good point. Mikeh said that if we really possessed free will we would not be discussing this topic. Maybe some day someone will invent a computer that really possesses free will, and that computer would not worry about sentience and free will but would spend its time philosophizing about control and understanding instead.

I remain convinced that some, a few, highly improbable events will occur between now and 2051. Events of such improbable importance that we will be unprepared. :o

They will be extremely important events in human history and they will be highly improbable. :D

On the scale, if not more so, of a few, very few, guys with box cutters in 2001.

#97 User is offline   jtfanclub 

  • Group: Advanced Members
  • Posts: 3,937
  • Joined: 2004-June-05

Posted 2007-June-29, 09:39

helene_t, on Jun 29 2007, 01:49 AM, said:

Note that while jtfanclub's scenario involves what I would call "the illusion of free will", he talks about "sentience" rather than "free will". That is quite typical, I think. (Btw, I'm sceptical as to jtfanclub's final remarks about Turing machines, which seem to be based on Penrose's interpretation of consciousness. Then again, what do I know about computability.)

To me, the definition of sentience is irrational (or maybe I should say non-rational) likes and dislikes. Preferences that are based on neither logic nor randomness.

A Turing machine is a very simple model...you have an input, and a state (or table or whatever you want to call it). All computers can be replicated by input and state. You can even make a Turing machine that will mimic a particular person. However, in a Turing machine, if you give me the state and the input, I can tell you the output. But with a person, even if you make the state the entire universe and the input everything the person is getting, you cannot perfectly predict what the output will be.

It is that unpredictability, which I claim is inherent in the mind, that makes it non-Turing. However, that doesn't mean that we have free will. And while I believe that no computer can perfectly predict a human (even if the computer is infinitely large), that doesn't mean that we cannot create a computer that is also impossible to perfectly predict.

Would such a computer be sentient? I don't know- I guess it would be.

#98 User is offline   DrTodd13 

  • Group: Advanced Members
  • Posts: 1,156
  • Joined: 2003-July-03
  • Location:Portland, Oregon

Posted 2007-June-29, 12:13

jtfanclub, on Jun 29 2007, 07:39 AM, said:

A Turing machine is a very simple model...you have an input, and a state (or table or whatever you want to call it). All computers can be replicated by input and state. You can even make a Turing machine that will mimic a particular person. However, in a Turing machine, if you give me the state and the input, I can tell you the output. But with a person, even if you make the state the entire universe and the input everything the person is getting, you cannot perfectly predict what the output will be.

It is that unpredictability, which I claim is inherent in the mind, that makes it non-Turing. However, that doesn't mean that we have free will. And while I believe that no computer can perfectly predict a human (even if the computer is infinitely large), that doesn't mean that we cannot create a computer that is also impossible to perfectly predict.

Would such a computer be sentient? I don't know- I guess it would be.

For very large systems, you can get "unpredictability" through chaos theory even though everything is completely deterministic. Obviously, you can also get unpredictability through sources of randomness, such as at the quantum level. If you think the brain does not use quantum effects, then I think you are forced to believe it could be simulated by a computer of finite size. It is also very likely a chaotic system, such that knowing the initial state perfectly is terribly important and nearly impossible. Those who believe the brain does not have quantum effects must, I think, believe that a computer performing a full simulation of a brain would also be sentient.

#99 User is offline   barmar 

  • Group: Admin
  • Posts: 21,640
  • Joined: 2004-August-21
  • Gender:Male

Posted 2007-June-29, 14:01

Thank you, Dr Todd -- I was just thinking about Chaos Theory myself in this regard. It's the reason why deterministic doesn't mean predictable, and it allows for the illusion of free will. Another example is weather -- it's deterministic, since it's just the result of fluid dynamics and energy transfer; but there are so many components to the system that it's not predictable to any fine degree. Terms like "brain storm" may be more meaningful than the coiners imagined.

Don't feel bad that free will and consciousness may just be illusions. Life and society work just fine with this illusion, so go with it. Many aspects of our physiology and psychology evolved as a result of the macroscopic nature of physics. We see colors, not wavelengths, because evolution discovered that this was a useful and efficient way to categorize objects in the world. We expect continuity in objects, because that's the way things work at the macro level; quantum processes are confusing precisely because nothing in our experience, or that of all our ancestors, is like them. And we evolved to believe in free will because it's a useful approximation of what happens.

#100 User is online   mike777 

  • Group: Advanced Members
  • Posts: 17,045
  • Joined: 2003-October-07
  • Gender:Male

Posted 2007-June-29, 14:31

barmar, on Jun 29 2007, 03:01 PM, said:

Thank you, Dr Todd -- I was just thinking about Chaos Theory myself in this regard.  It's the reason why deterministic doesn't mean predictable, and it allows for the illusion of free will.  Another example is weather -- it's deterministic, since it's just the result of fluid dynamics and energy transfer; but there are so many components to the system that it's not predictable to any fine degree.  Terms like "brain storm" may be more meaningful than the coiners imagined.


This may help.

Solomonoff Induction
June 25th, 2007 – Nick Hay


The problem of prediction is: given a series of past observations, what future observations do you expect? When we are rigorous about expectations we assign probabilities to the different possibilities. For example, given the weather today we assign 50% probability to a rainy day tomorrow, 30% probability to a cloudy day, and 20% probability to a sunny one.

How can we determine the probability of a future given the past? Solomonoff induction is a solution to this problem. Solomonoff induction has a strong performance guarantee: any other method assigns at most a constant factor larger probability to the actual future. This constant is equal to the complexity of that predictor.

Solomonoff induction itself is uncomputable, but there are computable analogs. It serves as a simple method of specifying a device which accurately predicts a series of observations. Were such a device to exist, we would think it highly intelligent, as it would correctly predict any patternful sequence we entered with little error.

Below the fold I describe some of the machinery behind Solomonoff induction. I describe a computable approximation which can be exactly and efficiently solved. Although this computable predictor is not particularly intelligent, it shares the same structure as Solomonoff induction.

Suppose we are predicting a series of observations, and that we assign:

probability 0.2 (i.e. 20%) to the series beginning with 0 (we denote this p(0) = 0.2),
probability 0.4 to the series beginning with 1 (we denote this p(1) = 0.4).
This means we think the series is twice as likely to begin with a 1 as a 0, but there is a 40% chance (1-p(0)-p(1) = 1-0.2-0.4 = 0.4) that it begins with nothing at all, i.e. there are no observations. We also have:

p(00) = 0.1, i.e. probability 0.1 that the series begins with 00,
p(01) = 0.1,
p(01000) = 0.05.
The first two entries mean that if the series begins with 0, it is equally likely to be followed by either a 0 or a 1. We say the probability of a 0 given a 0 is 0.5 (p(0|0) = p(00)/p(0) = 0.1/0.2 = 0.5) and similarly the probability of a 1 given a 0 is 0.5 (p(1|0) = p(01)/p(0) = 0.1/0.2 = 0.5). Finally, given that the series begins with 01 there is a 50% chance the sequence 000 follows (p(000|01) = p(01000)/p(01) = 0.05/0.1 = 0.5) and a 50% chance that something else happens.

Determining the probability that a series begins with a certain sequence is enough to predict everything else.
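The conditional probabilities above follow mechanically from the prefix probabilities. A minimal sketch in Python (the dictionary `p` and the helper `cond` are illustrative names; `p` holds exactly the prefix probabilities from the example):

```python
# Prefix probabilities taken from the example above.
p = {
    "0": 0.2,       # p(0)
    "1": 0.4,       # p(1)
    "00": 0.1,      # p(00)
    "01": 0.1,      # p(01)
    "01000": 0.05,  # p(01000)
}

def cond(continuation, prefix):
    """p(continuation | prefix) = p(prefix + continuation) / p(prefix)."""
    return p[prefix + continuation] / p[prefix]

print(cond("0", "0"))     # p(0|0)   = 0.1/0.2  = 0.5
print(cond("1", "0"))     # p(1|0)   = 0.1/0.2  = 0.5
print(cond("000", "01"))  # p(000|01) = 0.05/0.1 = 0.5
```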

Underlying Solomonoff induction is a programming language for describing sequences. Consider the following simplified language. Programs are sequences of the 4 commands: {0,1,L,E}. Examples:

00110101E: outputs the finite sequence 00110101.
L01E: outputs the infinite sequence 01010101….
111L0E: outputs the infinite sequence 1110000….
The program executes from left to right. The commands 0 and 1 output 0 and 1 respectively. If it reaches an L it records the start of a loop and continues. Upon reading a second L it jumps back to the start of the loop. If it reads an E it either ends the sequence (when no loop is open) or jumps back to the start of a loop.
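These semantics are simple enough to implement directly. A minimal sketch of an interpreter for the toy language (`run` and its parameters are illustrative names; a step limit cuts off loops that produce no further output):

```python
def run(program, max_out=8, max_steps=1000):
    """Execute a program over {0,1,L,E}, returning up to max_out output symbols.

    0/1 output themselves; the first L opens a loop; a second L, or an E
    while a loop is open, jumps back to the loop start; an E with no open
    loop ends the program.
    """
    out = []
    pc = 0        # instruction pointer
    loop = None   # position just after the opening L, if a loop is open
    for _ in range(max_steps):
        if pc >= len(program) or len(out) >= max_out:
            break
        c = program[pc]
        if c in "01":
            out.append(c)
            pc += 1
        elif c == "L":
            if loop is None:
                loop = pc + 1   # record the loop start
                pc += 1
            else:
                pc = loop       # closing L: jump back
        elif loop is None:
            break               # E outside a loop: halt
        else:
            pc = loop           # E inside a loop: jump back
    return "".join(out)

print(run("00110101E"))  # the finite sequence 00110101
print(run("L01E"))       # first 8 symbols of 010101...
print(run("111L0E"))     # first 8 symbols of 1110000...
```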

Solomonoff induction predicts sequences by assuming they are produced by a random program. The program is generated by selecting each character randomly until we reach the end of the program. In the above example, there are 4 different characters so each is chosen with probability 1/4. The program L0E is 3 characters long so is generated with probability (1/4)*(1/4)*(1/4) = 1/64.

The probability the series begins with a given sequence is the probability the random program's output begins with that sequence. This means if a sequence is generated by a short program it is likely, and if it is only generated by long programs it is unlikely. This is a form of Occam's razor: simpler sequences (i.e. those described by short programs) are given higher prior probability than complex (i.e. long description) ones.

To compute the probability the series begins with 0, consider all the programs whose output begins with 0. These are exactly the programs which begin with either 0 or L0: if a program outputs 0, its first character cannot be either 1 or E, and if its first character is L its second must be 0. The probability the series begins with 0 is therefore (1/4) + (1/4)*(1/4) = 5/16. Similarly, the probability it begins with 1 is 5/16. This doesn’t sum to 1 because the programs E, LE, and LL output nothing. They together have probability 1/4 + 2/16 = 6/16, and reassuringly 6/16 + 5/16 + 5/16 = 1 i.e. either the series is empty, or it begins with either 0 or 1.

For this simple language we can compute the probability the series begins with a sequence by studying that sequence’s structure. For example, if the series starts with 1010 then its program must begin with either:

1010: probability 1/256,
101L0, 10L10, 1L010, L1010: probability 1/1024 each, 4/1024 in total,
L10E, L10L, 1L01E, 1L01L: probability 2*(1/256) + 2*(1/1024) = 10/1024.
So the probability the series begins with 1010 is 1/256 + 4/1024 + 10/1024 = 18/1024. The first two lines of programs are routine: every sequence has descriptions of these forms. The last line exists only because of the pattern: 1010 is 10 repeated twice.
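This enumeration can be automated as a breadth-first search over program prefixes: prune a prefix as soon as its output disagrees with the target, credit it with probability (1/4)^length as soon as it guarantees the first bits, and extend it while it is still undecided. A sketch (function names are ours, not from the text):

```python
from fractions import Fraction

def first_bits(prog, n):
    """Run a (possibly incomplete) program; return (bits, settled):
    up to n output bits, and whether they are final. settled is True
    once the program is complete (an E, or a second L) or n bits are
    out; it is False if we ran out of symbols first."""
    out, loop_start, at_pass, i = [], None, 0, 0
    while len(out) < n:
        if i >= len(prog):
            return out, False             # incomplete: more symbols may follow
        c = prog[i]
        if c in "01":
            out.append(int(c))
            i += 1
        elif c == "L" and loop_start is None:
            loop_start, at_pass = i + 1, len(out)
            i += 1
        else:                             # an E, or a second L
            if loop_start is None or len(out) == at_pass:
                return out, True          # sequence ends (or empty loop)
            at_pass, i = len(out), loop_start
    return out, True                      # the first n bits are fixed

def prob_starts_with(target):
    """Exact probability the series begins with the given bits."""
    n, total, frontier = len(target), Fraction(0), [""]
    while frontier:
        new_frontier = []
        for p in frontier:
            for c in "01LE":
                q = p + c
                bits, settled = first_bits(q, n)
                if bits != target[:len(bits)]:
                    continue                   # output contradicts the target
                if len(bits) == n:
                    total += Fraction(1, 4) ** len(q)
                elif not settled:
                    new_frontier.append(q)     # undecided: keep extending
        frontier = new_frontier
    return total

print(prob_starts_with([1, 0, 1, 0]))  # 9/512, i.e. 18/1024
```

With target [0] this gives 5/16, matching the earlier hand computation.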

Solomonoff induction has this same structure. For any given sequence, there is a finite set of programs which generate it (more precisely, program prefixes: e.g. 1010 is not a complete program and can be completed in different ways). The probability the series begins with this sequence is the probability that any of these programs is generated, formed by the kinds of sums we have above.

To be continued….

Comment by Nick Tarleton
Jun 25, 2007 2:42 pm
What programming language is used for “real” Solomonoff induction, and why that one?

Comment by Nick Hay
Jun 25, 2007 3:47 pm
Real Solomonoff induction uses any Turing-complete language. Turing-completeness is required to prove that it assigns at most a constant factor less probability to the actual series than any other predictor.

Turing-complete isn’t quite enough. You need a language L with the following property: for any other language, you can translate any of its programs into L with only a constant increase in length. Intuitively, you can write an interpreter in L for any other language, and you can quote that language’s programs in constant space.

Machine code (if you allowed unbounded memory) would be a suitable language. You can write an interpreter for any language in machine code, and you can directly embed that language’s programs into its data area.

Comment by Nick Tarleton
Jun 26, 2007 4:57 am
So does Solomonoff induction give different probabilities for different choices of language? (Say you were using machine code with a primitive instruction for one particular sequence with ridiculously high information content.) Or do they all converge to the One True Probability in the uncomputable limit?
Comment by Nick Hay
Jun 26, 2007 2:20 pm
It will give different probabilities, but the difference is bounded.

Roughly, if your magic instruction contains N bits of information relative to the original language (i.e. requires a program of length N to implement), and this is useful information about the world, then magic-Solomonoff can perform at most N bits better than regular Solomonoff, i.e. assign probability at most a factor of 2^N higher to the true sequence.

You can make deliberately pathological languages where Solomonoff induction isn’t very powerful for all practical lengths of time, just as you can make pathological Turing-complete languages where it’s really hard to write useful programs.


Comment by Sebastian Hagen
Jun 27, 2007 1:37 pm
“and reassuringly 1/4 + 5/16 + 5/16 = 1”

Actually, 1/4 + 5/16 + 5/16 = 14/16 = 0.875 != 1.

At least some of the missing 1/8 of probability goes to programs that are not “E” and don’t output anything.
Obviously LE gets a probability of 1/16, and there are infinitely many more of them: LLE, LLLE, LL0E, LL1E, etc.


Comment by Nick Hay
Jun 27, 2007 4:56 pm
Thanks! I forgot the infinite empty loops LE and LL. LLE, LLLE etc. aren’t proper programs, since the decompressor stops reading symbols after the first two L’s. They are included under LL since they begin with it: we don’t want to double count our probability.

So, (1/4 + 2/16) + 5/16 + 5/16 = 1.


© 2007 Singularity Institute for Artificial Intelligence, Inc.