bab9, on Jul 22 2010, 03:42 AM, said:
what if instead of using 1 neural network, a different one was used for each round of bidding? Kind of getting each neural network to specialize in a particular round of bidding. There would be a strong argument for having information from the previous neural network feed into the current one.
The way I like to think of the network is that the intermediate layer represents the information on which bidding decisions are made. The synapses connecting the input layer to the intermediate layer condense/preprocess the information, while the synapses connecting the intermediate layer to the output make the decision. Therefore I chose a one-and-a-half-network model: for the input-to-intermediate ("inp-interm") synapses I had one set for 1st/2nd seat and one for 3rd/4th seat, but for the intermediate-to-output ("interm-outp") synapses I had only one set. The idea is that the decision rules should be the same but the preprocessing may differ, because a pass in first seat can be informative, so the implications of an opening bid in 3rd/4th seat may differ from those of one in 1st/2nd.
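For concreteness, here is a rough sketch of what I mean by the one-and-a-half model. The layer sizes, the random initialization and the plain feed-forward pass are made up for illustration; the point is only that the inp-interm weights are selected by seat pair while the interm-outp weights are shared:

```python
# Minimal sketch of the "one-and-a-half network" (invented sizes/initialization):
# two sets of input->intermediate weights, one per seat pair, and a single
# shared intermediate->output set that makes the actual bidding decision.
import numpy as np

N_INPUT, N_HIDDEN, N_CALLS = 60, 30, 36        # hypothetical layer sizes

rng = np.random.default_rng(0)
W_in = {                                       # preprocessing differs by seat pair
    "1st/2nd": rng.normal(size=(N_HIDDEN, N_INPUT)),
    "3rd/4th": rng.normal(size=(N_HIDDEN, N_INPUT)),
}
W_out = rng.normal(size=(N_CALLS, N_HIDDEN))   # decision rules are shared

def choose_call(hand_and_auction, seat_pair):
    """Forward pass: seat-specific preprocessing, shared decision layer."""
    hidden = np.tanh(W_in[seat_pair] @ hand_and_auction)
    scores = W_out @ hidden
    return int(np.argmax(scores))              # index of the chosen call
```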
But now I feel that this is inconsistent, because it is no different from a 1♥ response to 1♦ having different implications than a 1♥ response to 1♣, so by the same logic I should have a separate set of inp-interm synapses for each opening bid.
So I would consider going back to only one network. This would probably imply a certain symmetry in the bidding system. Although the system can still be highly asymmetric (a pass in 1st seat is a different input value from "it is not my turn yet" in 1st seat), I would expect the architecture of the network to bias the outcome of the evolution towards somewhat symmetric systems. "Symmetric" here means that the opening scheme in 3rd/4th seat would be similar to that in 1st/2nd, and by a similar argument the responses to the different opening bids would be similar to each other, etc.*
Now if I were to include the inference from earlier calls as input, I might consider having separate inp-interm synapses for each round. Maybe it should depend on how I would formalize the inference from the earlier calls. But even if I still had only one set of inp-interm synapses, there would still be a distinction, at the level of the new input neurons, between "the inference from the previous round is blahblahblah" and "the inference from the previous round is void, as this is the first round".
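To illustrate that distinction at the input level, here is a toy encoding; the codes, vector sizes and the "no inference yet" flag are all invented (the real formalization would differ), but it also shows the pass vs. not-my-turn-yet distinction mentioned above:

```python
# Toy input encoding (invented names and sizes): "nothing yet" states get their
# own explicit values rather than being all-zero, so the network can distinguish
# a real pass from "not my turn yet", and a real inference from "no previous round".
import numpy as np

CALL_CODES = {"not_my_turn_yet": 0.0, "pass": 0.5, "bid": 1.0}  # one code per earlier seat

def encode_inputs(earlier_calls, prev_round_inference):
    """earlier_calls: list of codes for the seats before us this round.
    prev_round_inference: length-8 vector of inferred features, or None in round one."""
    auction_part = np.array([CALL_CODES[c] for c in earlier_calls])
    if prev_round_inference is None:
        # dedicated "no inference yet" flag plus a neutral inference vector
        inference_part = np.concatenate(([1.0], np.zeros(8)))
    else:
        inference_part = np.concatenate(([0.0], prev_round_inference))
    return np.concatenate((auction_part, inference_part))
```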
*Sidetrack: come to think of it, the fact that I have the same output neuron representing, say, the 2NT call regardless of what the previous call was makes it difficult for the network to emulate step responses. Having one output neuron for the 1st step, one for the 2nd step, etc. would make step responses more likely to evolve. The fixed neuron->call mapping probably favors natural bidding. Transfers would be equally easy to emulate with the fixed mapping, but the transfer accept would not be. This would change by having the inference from earlier calls as input: then a transfer accept is a natural call, since it relates to the suit in which partner has shown length.
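To make the contrast concrete, here is a toy version of the two output conventions (the call list is truncated and the helper names are mine):

```python
# Toy contrast of the two output conventions: with the fixed mapping, output
# neuron k always means the same call; with a step mapping, neuron k means
# "k+1 steps above the last bid", so step responses come for free.
CALLS = ["1C", "1D", "1H", "1S", "1NT", "2C", "2D", "2H", "2S", "2NT"]  # truncated

def fixed_mapping(neuron_index):
    return CALLS[neuron_index]                  # same call whatever was bid before

def step_mapping(neuron_index, last_bid):
    return CALLS[CALLS.index(last_bid) + 1 + neuron_index]  # relative to last bid

# e.g. over a 1S opening, step neuron 1 means 2C with the step mapping,
# while with the fixed mapping the network must learn that relationship itself.
```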
Anyway, I will let the neural networks rest for now. I think Tysen is much better than I am at building neural networks, and there are probably others who also know more about that issue than I do. So I will focus on my decision trees instead.