BBO Discussion Forums: Reverse engineering GIB (part 1?) - a treatise on the insanity of bidding simulations


Reverse engineering GIB (part 1?) - a treatise on the insanity of bidding simulations

#1 smerriman

  • Group: Advanced Members
  • Posts: 4,042
  • Joined: 2014-March-15
  • Gender:Male

Posted 2023-April-24, 22:12

Disclaimer #1: This entire post applies to an old version of GIB from around 2012. It’s possible that BBO have changed some of the logic discussed below in the current version of GIB. However, the general feeling based on prior forum posts is that they have basically only made improvements to the database, leaving the rest of the algorithm virtually untouched, as they don’t really understand the rest well enough to make changes. So I would be very surprised if the vast majority of the below doesn’t still apply.

Disclaimer #2: This is a long post. Most people are probably not going to reach the end :) But if GIB interests you even half as much as it does me, enjoy! And if it’s just me, which is a reasonable chance, well I had fun anyway ;)

Disclaimer #3: since there are so many referenced hands in this post, I have linked to handviewer URLs, rather than embedding them, to prevent too many iframes loading at once.

As some of you may know, I have been playing around with some debugging features of the old version of GIB, trying to understand more about how it works.

These debugging features are very limited - you get some information about simulated deals, scores, and totals, but with no details of what anything actually means. Despite rumours of the code being hard to understand, this type of analysis would be very straightforward for me with access to it. But since I’ve had no luck achieving that dream, I’ve instead had to take the painful route - many hours of testing hands, hacking the database to force certain bidding situations, searching for and verifying patterns in the debugging output, and so on.

Here’s what I’ve found.

The publicised algorithm

GIB’s bidding logic has always been generally understood to work as follows (a rough sketch in code follows the list):

  • a large, complex database of rules tells GIB the ‘default’ bid to make in any particular situation (and, often independently from the logic, how to describe it).
  • several database rules, especially those early on in bidding sequences, have a flag that tells GIB that it *must* make the default bid. But in all other circumstances, GIB is allowed to test alternative bids that are ‘close’ to the hand that they hold; ‘close’ means that bid also has a matching rule, but of a lower priority.
  • in these situations, GIB simulates a number of deals which match the bidding to date.
  • for each alternative, GIB extrapolates what the auction would be if everyone followed the database from that point onwards, and determines the score of the resulting contract.
  • it then compares the results, and chooses the bid which gives the best average score, taking IMP / MP into account.
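Very roughly, and only to fix ideas - I have no access to the code, so every function name below is mine, not GIB’s - those steps look something like this:

```python
# Rough sketch only; the helpers are passed in because the real implementations are unknown.
def choose_bid(hand, auction, scoring, db, simulate_deals, extrapolate, dd_score, compare):
    rule = db.lookup(hand, auction)                  # 1. the 'default' (book) bid + description
    if rule.forced:                                  # 2. some rules say: must make the book bid
        return rule.bid
    candidates = [rule.bid] + db.close_alternatives(hand, auction)
    deals = simulate_deals(hand, auction)            # 3. deals 'matching' the auction so far
    scores = {}
    for bid in candidates:
        scores[bid] = []
        for deal in deals:
            contract = extrapolate(deal, auction + [bid])   # 4. everyone follows the database
            scores[bid].append(dd_score(deal, contract))    #    (what this score really is - see below)
        # most of this post is about what goes into that per-deal score
    return compare(scores, book=rule.bid, scoring=scoring)  # 5. IMP/MP comparison, pick the best
```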

Free robots only do the first step, but this cripples them too much so they aren’t relevant to this thread - the database was never designed to be used standalone.

For this post, I’m ignoring the actual database, including rule priorities for how GIB decides which bids are ‘close’ enough to analyse.

I’m also ignoring how GIB decides which deals ‘match’ the bidding. Some day I may write about that in a part 2, since it’s just as weird and not even remotely close to what you may think. (Would you believe, when simulating a 1NT - 3NT auction, literally half of the simulated deals *wouldn’t* have gone 1NT - 3NT?)

This post is focusing on the last two steps - what GIB does after simulating hands - because this is far more complicated (and buggy) than originally believed.

But firstly, two preliminary notes:

Number of simulations

Nobody knows how many deals GIB simulates.

In the old version of GIB, the default during the play of the hand is 100 deals, though it’s customisable. And this version of GIB plays more slowly than the advanced robots on BBO, so BBO’s setting is likely lower.

But no matter what I set the play value to, the number of bidding simulations didn’t seem to change, so perhaps wasn’t configurable in the old version. It differs on each hand, but is usually somewhere in the 20s or 30s. I guess the fact it has to calculate double dummy results for multiple contracts on each hand means it’s always several times slower than play simulations.

This isn’t really anywhere near enough to give much statistical accuracy. But still clearly light years ahead of the free robot’s 0.

An extra reason not to simulate

In GIB’s original pre-BBO documentation, there is a configurable parameter p - default value 0.55 - which says: after you have dealt a set of hands, if the predicted probability that the auction will continue after our bid is > p, do not simulate - just use the book bid. The idea seems to be that we’ll get another chance to bid, so we can stick with the book for now and simulate next time.

When I try to adjust the parameter in BBO’s old version of GIB, it does not accept it. However, the logic appears baked in, as I see messages like:

book bid [rule 11710] 3H is 83% to continue auction; returning it

I’ve seen this percentage as low as 63%, so I suspect the default 0.55 is still in place.
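As a minimal sketch of what this parameter appears to do (the 0.55 default is from the original documentation; the function and argument names here are mine):

```python
def use_book_bid_without_simulating(book_bid, deals, auction_continues, p=0.55):
    """If the extrapolated auctions suggest the bidding will continue after our
    book bid more than p of the time, skip the comparison and just return it.
    auction_continues(deal, bid) stands in for GIB extrapolating the auction."""
    prob_continue = sum(auction_continues(deal, book_bid) for deal in deals) / len(deals)
    if prob_continue > p:
        # e.g. "book bid [rule 11710] 3H is 83% to continue auction; returning it"
        return book_bid
    return None  # otherwise fall through to the full simulation / comparison
```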

This parameter / concept makes absolutely no sense at all.

For example, in a forcing pass situation when GIB has a close decision between passing and doubling.. if the database says ‘pass, but you’re allowed to simulate to see if double will work better’.. and then the algorithm says ‘oh, but on 100% of the simulated deals the pass is forcing, so let’s not even consider double’.

Or recently, there was a thread where GIB bid 1 - 1 - 3 with a hand that was really too strong to just invite. In this case, the database used the flag saying don’t simulate in this situation, so I deleted it to see whether a simulation would lead it to a different conclusion. But nope - it detected that 3 was 80% likely to continue the auction, so it refused to consider anything else. Uh, the other 20% is the whole reason a simulation would have been useful in the first place..

Having said all this, there have been a few occasions where I’ve been looking at a bid made by the modern BBO robot, and have been unable to replicate it in the old version due to this parameter disabling simulations. I do wonder if BBO disabled this parameter entirely at some point in between - I hope so, because if it’s there, it’s surely causing problems.

The very last step

I’m going to work in reverse order, since this is the part of GIB that makes the most sense. Assume GIB has already assigned each bid a score for each simulated deal.

For each deal, bids are compared in pairs in order to determine what the matchpoint or IMP difference would be if one bid is made at one table, and the other at another table - basically forming an NxN table of scores for each deal. If one bid would gain 3 IMPs against another bid, the former will get a +3, and the latter a -3. (And at MPs, a +1 vs a -1).

It then averages out the results over all pairs and all deals, to get an expected IMP or MP gain / loss for each bid. (The resulting scores thus all end up adding to 0.) It picks the bid with the highest overall score.

This means that it’s technically possible that option x could come off better than option y, even if a straight comparison of x to y results in y being better (due to differences in gains when comparing to options other than those two).

But bridge scoring is non-transitive - there are bidding situations where option x > option y > option z > option x; some clever commenters posted examples here - so you have to have some way of breaking the cycle, and this makes sense to me.

But one twist - after performing the above calculations, it adds 0.5 IMPs (or 0.2 MPs, where 1 is a top as mentioned above) to the book bid. So it will only choose an alternative to the book bid if it results in a somewhat noticeable gain.
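To make that concrete, here’s a small sketch of the whole comparison step as I understand it. The IMP scale is the standard one - whether GIB uses exactly this table is an assumption on my part - and the +1/-1 matchpoint comparison and book-bid bonus are as described above:

```python
# Standard IMP scale: (minimum point difference, IMPs). Assuming GIB uses this.
IMP_SCALE = [
    (0, 0), (20, 1), (50, 2), (90, 3), (130, 4), (170, 5), (220, 6), (270, 7),
    (320, 8), (370, 9), (430, 10), (500, 11), (600, 12), (750, 13), (900, 14),
    (1100, 15), (1300, 16), (1500, 17), (1750, 18), (2000, 19), (2250, 20),
    (2500, 21), (3000, 22), (3500, 23), (4000, 24),
]

def imps(diff):
    sign = 1 if diff >= 0 else -1
    d = abs(diff)
    return sign * max(i for lo, i in IMP_SCALE if d >= lo)

def pick_bid(deal_scores, book_bid, scoring="imps"):
    """deal_scores maps each candidate bid to its list of per-deal final scores.
    Every pair of bids is compared on every deal, the gains are averaged, the
    book bid gets its bonus, and the highest total wins."""
    bids = list(deal_scores)
    n_deals = len(deal_scores[book_bid])
    totals = {bid: 0.0 for bid in bids}
    for d in range(n_deals):
        for x in bids:
            for y in bids:
                if x == y:
                    continue
                diff = deal_scores[x][d] - deal_scores[y][d]
                if scoring == "imps":
                    totals[x] += imps(diff)
                else:                        # matchpoints: +1 for a win, -1 for a loss
                    totals[x] += (diff > 0) - (diff < 0)
    avg = {bid: totals[bid] / (n_deals * (len(bids) - 1)) for bid in bids}
    avg[book_bid] += 0.5 if scoring == "imps" else 0.2    # the book-bid bonus
    return max(avg, key=avg.get)
```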

Scoring a bid for a given deal - a high level overview

Surely this is easy? Just calculate the double dummy score for the predicted contract. Done, and that’s the end of this thread.

Nope.

Not. Even. Remotely. Close.

For each bid, the debugging output shows three separate scores.

The first is the ‘book’ score. For now, take this to be the double dummy score of the extrapolated contract as we all believed, though there are a few surprises to come.

The second is the ‘par’ score. More detail on this later too, but as a concept for now, given the wording of ‘par’, take this to mean the best score you can guarantee after making this bid.

(Note that this is close to, but not the same as, the standard ‘par score’ for a deal. Here, the par score may differ for each bid you’re thinking of making. For example, if the auction starts 1 - 3 and game is makeable double dummy, the par score assigned to pass will be the partscore, since the opponents aren’t going to give you another chance to bid the game).

It then calculates a final score based on the above two. If the book score that double dummy gives you is lower than the par score you could have achieved, then it ignores the par score - you get what you expect to bid, and nothing more.

But if the double dummy result is expected to be better than par, it uses the *average* of the book and par scores. The general feeling here seems to be that even if the database doesn’t predict it will happen, the opponents have the ability to do better than the database is predicting - so let’s go somewhere halfway in between.

This final score - either book, or halfway between book and par - is what is provided to the last step of the algorithm.
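In code, the blending rule is tiny (the numbers in the checks are the pass and double/3♣ options from the deal in the next section):

```python
def final_score(book, par):
    # If double dummy already expects us to do no better than par, just use book;
    # if it expects us to beat par, meet it halfway.
    return float(book) if book <= par else (book + par) / 2.0

assert final_score(-170, -420) == -295.0   # pass in the example below
assert final_score(-420, -420) == -420.0   # double / 3C in the example below
```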

An example

Here’s an example. (This was taken from a hand posted in the forum a while ago that I was looking at, where GIB bid 4 in the following position - in fact, the older version of GIB doesn’t consider 4 at all. So this isn’t really relevant to that thread, but some interesting results popped up nonetheless).

https://www.bridgeba...q7643dq4c54&v=n

North is contemplating what to bid after West bids 2♠, and this is one of the deals it has simulated. It comes up with three options - pass, double, or 3♣.

-295.0 		par  -420 	book  -170 	P.P.1S.X.P.2C.2S.P 	P.P
-420.0 		par  -420 	book  -420 	P.P.1S.X.P.2C.2S.X 	4S.P.P.P
-420.0 		par  -420 	book  -420 	P.P.1S.X.P.2C.2S.3C 	4S.P.P.P

The weird GIB database tells it that if it passes, 2♠ will be passed out; but if it doubles, or bids 3♣, East will leap to 4♠ preemptively, which will be the final contract.

Both of the latter bids are thus scored as -420, as 4♠ is making and optimal.

Pass is however scored as -295: the average of the double dummy score (-170) and the par score (-420, as in this case the opponents still have the opportunity to find game over your pass).

-295 vs -420 is a +125 differential, which is converted to +3 IMPs. It believes that passing 2♠ will gain 3 IMPs on the deal compared to the other contracts.

Does this make sense?

It just so happens that if you assume the opponents will find game over a pass half the time, it’ll work out to the same +3 IMP average (since game is 6 IMPs over the partscore). But with IMPs not being linear, taking a midpoint of two total point scores seems to have no mathematical foundation, and can lead to some *very* unusual scoring (especially when different pars are involved for different bids).
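To illustrate why, here’s a purely hypothetical case - made-up numbers, reusing the imps() helper from the sketch above - where the midpoint and the ‘they find it half the time’ interpretation diverge badly:

```python
# Hypothetical: pass has book -170 but par -1430 (say they could have bid a vulnerable
# slam), and the alternative bid scores -1430 outright on this deal.
book, par, other = -170, -1430, -1430

midpoint_gain = imps((book + par) / 2 - other)                  # imps(630)  -> 12 IMPs
half_the_time = (imps(book - other) + imps(par - other)) / 2    # (15 + 0)/2 -> 7.5 IMPs
```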

Also, this makes even less sense at matchpoints. If the idea was to assume they’d find game half the time if you pass 2♠, then passing should be scoring 0.75 (75%) compared to the other two options. But nope, it compares the midpoint of -295 to -420.. the first number is bigger, so passing gets a score of 1 (100%) for this deal. In other words, you get exactly the same result as if you had ignored the par calculation entirely.

In fact, matchpoint scoring is luckily unaffected by the par score a reasonable amount of the time. But there are rare occasions when the par score calculation results in the relative order of two bids changing. If this happens, then it will give a 0% score to the bid the database suggests is better - meaning it assumes the opponents will find the unlikely par scores 100% of the time!

Going deeper - the real book score

I just told you that the ‘book’ score was the double dummy score. That wasn’t true, for so many reasons.

I ran a large number of experiments where I fed the deals from the debugging output into a third-party double dummy solver and compared the results, looking for patterns. And I found plenty.

#1: Slams are reduced in value.

As everyone knows, in bridge, making a small slam gives you a 500 point bonus non-vulnerable, and 750 point bonus vulnerable. A grand slam is 1000 points non-vulnerable, and 1500 points vulnerable.

When GIB is simulating bids, it believes that making a small slam gives you a .. wait for it.. 425 point bonus non-vulnerable, and 637 point bonus vulnerable (both a 15% reduction). A grand slam bonus is reduced even further - 612 points non-vulnerable, and 918 points vulnerable (the same 38.8% reduction in both cases).

So, for example, when weighing up whether to bid on to slam or not, it’s considering that making 6H vulnerable will score a remarkable +1317.
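For reference, here are those bonuses side by side, with the normal duplicate bonuses in comments for comparison:

```python
# Slam bonuses GIB appears to use during bidding simulations only.
SIM_SLAM_BONUS = {
    ("small", False): 425,   # normally 500 non-vulnerable
    ("small", True):  637,   # normally 750 vulnerable
    ("grand", False): 612,   # normally 1000 non-vulnerable
    ("grand", True):  918,   # normally 1500 vulnerable
}

# 6H vulnerable making: 180 trick score + 500 game bonus + 637 "slam bonus" = 1317
assert 180 + 500 + SIM_SLAM_BONUS[("small", True)] == 1317
```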

This is clearly intentional, as during the play of the hands, GIB does use the correct values for slam contracts, and thus is using the correct percentages when deciding what lines to choose. These values only apply during bidding simulations.

I have no idea where these numbers come from. The only explanation I can think of is that this is somehow trying to take into account the fact that other tables may end up stopping in game. Perhaps GIB was plugged into some large datasets, and the figures required to bid slam were tinkered with until they produced optimal average scoring. That would explain why they’re such weird numbers as well. But there’s definitely no concept of this in the play of the hand; GIB assumes the other tables are in the same contract 100% of the time.

Very intriguing.

#2: Excess undertricks are automatically doubled.

GIB is extremely bad at penalty doubles, with virtually all doubles taken out, to the chagrin of most humans.

When simulating, if GIB predicts that we’ll end up in an undoubled contract, but double dummy tells it that the resulting contract is going down 3 or more tricks, it calculates the score as if it were doubled.

So if we’re vulnerable, for example, down 1 in an undoubled contract is scored as -100, down 2 is scored as -200, and down 3 is scored as -800.

(This is only true for “us” - the side that is simulating what to bid next. If the auction after our bid is predicted to end with the opponents declaring and going down 3 undoubled, we’ll only get the +300 our way.)
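In code, the rule looks something like this - the penalty arithmetic is just standard duplicate scoring, and only the down-3-or-more adjustment is the GIB behaviour described above (function names are mine):

```python
def undoubled_penalty(down, vul):
    return down * (100 if vul else 50)

def doubled_penalty(down, vul):
    if vul:
        return 200 + 300 * (down - 1)
    return 100 + 200 * min(down - 1, 2) + 300 * max(down - 3, 0)

def simulated_penalty(down, vul):
    """If *our* extrapolated contract is off by 3 or more, it's scored as if it
    had been doubled, even though the book auction ends undoubled."""
    return doubled_penalty(down, vul) if down >= 3 else undoubled_penalty(down, vul)

# Vulnerable: penalties of 100, 200, 800 - i.e. the -100 / -200 / -800 in the example above.
assert [simulated_penalty(d, vul=True) for d in (1, 2, 3)] == [100, 200, 800]
```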

This can have quite a big impact on the results - an outlier where the contract happens to go down 3 can contribute a hefty penalty to the overall average at IMPs, even if you’re bidding to a normal contract and neither opponent had a hand where they would (or even could) have doubled.

Maybe this makes sense to compensate for GIB’s horrendous lack of penalty doubles, to prevent GIB from making too many outrageous bids without thinking it’ll get punished. But of course, it will surely be less accurate than if penalty doubles could be programmed more accurately in the first place.

#3: On some occasions, *all* undertricks are automatically doubled

If we’re playing at the 5 level or higher, even down one is scored as if it were doubled. Maybe similar weird logic to the above.

If the opponents opened the bidding (excluding preempts), down 1 for us is always scored as doubled. Why?

If the opponents doubled at *any* point during the auction to date, down 1 is scored as doubled. Why? This even involves doubles that weren’t showing values. We open 2 vulnerable, LHO doubles to show some clubs, pass, pass. GIB simulates, and sees 2NT (passed out) might occasionally be down 1.. for -200.

It gets more confusing. Suppose we open 1, partner raises to 2, we pass.. and LHO balances with 3.

If partner simulates whether or not to compete to 3, it will be treated normally - no double if it’s down 1 or 2.

But if instead there are two more passes, and it’s us simulating whether to compete to 3 in the balancing seat, down one is doubled again.

Is the rule that our undertricks are doubled if we’re simulating after two passes? I’m still not 100% sure on this, but it all seems very bizarre.
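Pulling the observations in this section together, my best current guess at the rule looks something like the predicate below - but, as I say next, I haven’t pinned these conditions down exactly, so treat every clause as provisional:

```python
def our_undertricks_scored_as_doubled(down, level, opps_opened_non_preempt,
                                      opps_doubled_earlier, we_are_balancing):
    """Provisional guess only - none of these conditions is confirmed."""
    return (down >= 3                     # excess undertricks (point #2 above)
            or level >= 5                 # contracts at the 5 level or higher
            or opps_opened_non_preempt    # the opponents opened (other than a preempt)
            or opps_doubled_earlier       # any earlier double by them, even artificial
            or we_are_balancing)          # simulating after two passes - least certain
```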

In fact, I haven’t been able to come up with an exact set of rules as to when our undertricks are scored normally, and when they’re doubled. The main reason for that is that it’s hard to isolate the effects of the staggering point #4..

#4: Double dummy scores are regularly wrong.

Oh dear, oh dear, oh dear.

https://www.bridgeba...62&d=s&a=pp2h2s

One of the choices GIB considers is pass:

-420.0		par -420 	book -420	P.P.2H.2S.P 	3H.P.4S.P.P.P


We can take 4♠ down 1, but it treats it as if game were making.

https://www.bridgeba...s&a=pp2h2s3h4sp

Two of GIB’s choices here are pass and 5♥:

+450.0	par +450	book +450 	P.P.2H.2S.3H.4S.P.P 	P
-100.0	par +450	book -100 	P.P.2H.2S.3H.4S.P.5H 	P.6S.P.P.P


6♠ is cold double dummy, but it treats it as if it were down 1 (doubled, non-vulnerable; you can see from the +450 that it thinks you can take 11 tricks in spades, as opposed to being down 2).

I ran an experiment where I hacked the database to force South to simulate opening everything from 1 through 7 on a few hundred deals, with the other three hands passing every time. By comparing the results for different levels, I could tell how many tricks it calculates you would take, eliminating any confusion around whether they were scored as doubled or not.

On about 15% of occasions, the book score had South taking the wrong number of tricks.

!!!!!!!!!!!!!!

On all but one of these occasions, it was off by a single trick; the other time it was actually off by 2 tricks.

With normal bidding, after taking all of the exceptions listed above into account, I’ve hit around the same 15% mismatch figure, which is why I believe I’ve caught most of the scoring rules, with the rest due to double dummy failures - but it’s hard to know for certain.

Two possible explanations:

a) The book score isn’t the double dummy score after all; it’s something else (like assuming the opponents will make some form of book lead too??)
b) It simply calculates double dummy scores wrongly on a regular basis, as has already been proven in the 0% play thread.

I’m guessing it’s not option a). I did find an old post in the rec.games.bridge newsgroup from literally 20 years ago where Matt Ginsberg was aware of an issue causing occasional incorrect double dummy results. But that was meant to have been fixed, well before GIB was provided to BBO. Has something gone badly wrong since? Has it been re-fixed since then?

Oh dear indeed.

So how exactly are these mysterious par scores calculated?

I’ve been trying to figure this out for a long time, and kept putting off this post because I wanted to have a definitive answer before sharing it.

While I haven’t been able to come up with a definitive answer, I am 100% sure about one fact:

Whatever the par score was intended to be, it most certainly is not.

There are several reasons that have made it tricky for me to try to reverse engineer how the par scores are being calculated in detail. The first is that the broken double dummy calculations affect par scores just as much as book scores, meaning a decent proportion of them are flat out wrong even in simple cases. But here are a few other interesting points:

GIB only considers a subset of possible par contracts

https://www.bridgeba...=n&a=pp1sxp2cpp

GIB is trying to decide whether to balance over 2♣, and simulates the above deal. The results:

-170.0 		par  -170 	book  -170 	P.P.1S.X.P.2C.P.P.P
-713.5 		par -1257 	book  -170 	P.P.1S.X.P.2C.P.P.2D 		P.2S.3C.P.P.P
-713.5 		par -1257 	book  -170 	P.P.1S.X.P.2C.P.P.2S 		P.P.3C.P.P.3S.X.P.4C.P.P.P


Despite everything ending in partscores, E/W can actually make slam in clubs. So it uses par figures based on the slam making (the fun 1257 figure - that’s 6♣ with the reduced vulnerable slam bonus from earlier: 120 + 500 + 637), which results in it assigning the balancing bids a beautiful -713.5 points.

Now look what happens when I hack the database to force GIB to consider a leap to 7♦ as a fourth option:

-170.0 		par  -170 	book  -170 	P.P.1S.X.P.2C.P.P.P
-485.0 		par  -800 	book  -170 	P.P.1S.X.P.2C.P.P.2D 		P.2S.3C.P.P.P
-485.0 		par  -800 	book  -170 	P.P.1S.X.P.2C.P.P.2S 		P.P.3C.P.P.3S.X.P.4C.P.P.P
-1100.0 	par  -800 	book -1100 	P.P.1S.X.P.2C.P.P.7D 		P.P.P


Having calculated that 7♦ is down 5, it suddenly realises that we can actually sacrifice in 6♦ over their 6♣ slam - scoring only -800 - resulting in the balancing bids being assigned a different score of -485 instead.

(Ignore momentarily that 7♦ is assigned a par of -800 too - more on this in a moment).

From all my investigations, it appears GIB only considers potential trump suits if there is some book sequence that ends in that trump suit being the final contract. This is backed up by the fact it takes longer and longer for GIB to perform its analysis if I hack in more potential final contracts.

I guess it makes some sense that, with double dummy calculations being the most performance-intensive part of the algorithm, removing suits that nobody would ever play in is a useful way to speed up the results. But here the diamond contract was missed despite the fact that one of the book sequences even has us competing in diamonds, and there are plenty of other examples where a perfectly logical final trump suit never appears as a final book contract. If there is any sense at all to calculating and using par scores - and that’s a big if already - this optimisation renders it pretty useless.
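To make the optimisation concrete, this is roughly what it appears to be doing - only denominations that show up as the final contract of some extrapolated book auction are candidates for the par calculation. The parsing here is mine, purely to illustrate the idea on the debug lines above:

```python
def par_denominations(extrapolated_auctions):
    """In the first example above this yields {'C'}: every book sequence ends in
    clubs, so a diamond contract is never considered even though 6D turns out to
    be the obvious save. Adding the 7D option adds 'D' to the set."""
    denoms = set()
    for auction in extrapolated_auctions:
        calls = auction.replace("\t", ".").replace(" ", ".").split(".")
        bids = [c for c in calls if c and c not in ("P", "X", "XX")]
        if bids:
            denoms.add(bids[-1][-1])    # denomination letter of the last bid made
    return denoms

print(par_denominations([
    "P.P.1S.X.P.2C.P.P.P",
    "P.P.1S.X.P.2C.P.P.2D P.2S.3C.P.P.P",
    "P.P.1S.X.P.2C.P.P.2S P.P.3C.P.P.3S.X.P.4C.P.P.P",
]))   # -> {'C'}
```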

GIB outputs the wrong par score for many bids - but maybe mostly deliberately?

This is what really stumped me for a very long time. A significant proportion of par scores reported in the debugging output don’t make sense - but they also don’t impact the results, and I believe this is deliberate in many cases. Some examples will help explain - I took the same bidding sequence as above, but hacked several additional nonsense bids in to look at what was really going on.

https://www.bridgeba...=n&a=pp1sxp2cpp

+200.0 		par  +200	book  +200 	P.P.1S.X.P.2C.P.P.P
-500.0 		par  +200 	book  -500 	P.P.1S.X.P.2C.P.P.2D	 	P.2N.P.3S.P.3N.P.P.P
-100.0 		par  -100 	book  -100 	P.P.1S.X.P.2C.P.P.2S	 	P.P.P
-500.0 		par  -100 	book  -500 	P.P.1S.X.P.2C.P.P.3D		P.4S.P.P.P
-300.0 		par  -300 	book  -300 	P.P.1S.X.P.2C.P.P.4D		P.P.P
-1700.0 	par -1100 	book -1700 	P.P.1S.X.P.2C.P.P.7C		P.P.P
-1100.0 	par -1100 	book -1100 	P.P.1S.X.P.2C.P.P.7D		P.P.P
-1700.0 	par -1400 	book -1700	P.P.1S.X.P.2C.P.P.7H		P.P.P


In line 1, if it passes 2♣ out, it gets +200, which makes sense.

Skipping line 2 momentarily, in line 3, if it bids 2♠, that’s down 1 (doubled, under the prior-double rule) for -100. This receives a par of -100, since neither side can improve on that. Makes sense.

In line 2, 2♦ is extrapolated to 3NT, which is scored as -500. But par is still set to +200, which could only be achieved by passing 2♣ out. All of the other lines make it clear that this can’t be considered as an option. Where does the +200 come from? Seems strange, at first.

https://www.bridgeba...=n&a=pp1sxp2cpp

+100.0 		par  +100 	book  +100 	P.P.1S.X.P.2C.P.P.P
-300.0 		par  +110	book  -300 	P.P.1S.X.P.2C.P.P.2D 	P.3S.P.4S.P.P.P
+110.0 		par  +110 	book  +110 	P.P.1S.X.P.2C.P.P.2S 	P.P.P
-300.0 		par  +110 	book  -300 	P.P.1S.X.P.2C.P.P.3D 	P.4S.P.P.P
-100.0 		par	+0	book  -100 	P.P.1S.X.P.2C.P.P.4D 	P.P.P
-2000.0 	par	+0 	book -2000 	P.P.1S.X.P.2C.P.P.7C 	P.P.P
-800.0 		par	+0 	book  -800 	P.P.1S.X.P.2C.P.P.7D 	P.P.P
-1400.0 	par	+0 	book -1400 	P.P.1S.X.P.2C.P.P.7H 	P.P.P
-1100.0 	par	+0 	book -1100 	P.P.1S.X.P.2C.P.P.7S 	P.P.P


This time, bidding 2♦, 2♠, or 3♦ outperforms passing 2♣ out. But all bids from 4♦ up are scored as par 0! Note that 0 seems to be an ‘uncalculated’ par, rather than an actual 0; it pops up quite regularly, but when it accompanies a positive book score, only the book score is used (rather than pulling the score halfway to 0).

https://www.bridgeba...=n&a=pp1sxp2cpp

-110.0 		par  -110 	book  -110 	P.P.1S.X.P.2C.P.P.P
-110.0 		par  -110 	book  -110 	P.P.1S.X.P.2C.P.P.2D 	P.2S.3C.P.P.P
-800.0 		par  -110 	book  -800 	P.P.1S.X.P.2C.P.P.2S 	P.P.3C.P.P.3S.P.P.P
-800.0 		par  -110 	book  -800 	P.P.1S.X.P.2C.P.P.3D 	P.3S.P.P.P
-800.0 		par  -110 	book  -800 	P.P.1S.X.P.2C.P.P.4D 	P.P.P
-2600.0 	par  -110 	book -2600 	P.P.1S.X.P.2C.P.P.7C 	P.P.P
-1700.0 	par  -110 	book -1700 	P.P.1S.X.P.2C.P.P.7D 	P.P.P
-2600.0 	par  -110 	book -2600 	P.P.1S.X.P.2C.P.P.7H 	P.P.P
-2000.0 	par  -110 	book -2000 	P.P.1S.X.P.2C.P.P.7S 	P.P.P


And this time all bids are scored as a par of -110, despite this being obviously nonsensical.

It appears that all of this comes down to another optimisation. As par only affects the end result when it is *worse* for us than what we would otherwise achieve, any time we’re getting a much worse score anyway, it doesn’t matter too much what par is set to.

In the last case, 2♣ is already par to begin with, so it seems to largely skip calculating other pars, since they won’t be important. In some cases, it sets the par to a previous value; in other cases it sets it to 0.

I haven’t found any hard and fast rules for when it does what - at times it seems very consistent and I think I’ve come up with a rule, only to then find a new example which puts me back to square one. But in the vast majority of cases, whatever it comes up with is greater than the book score - so it matters not.

It does, however, seem to come up with a figure of ‘0’ in some cases where I believe it shouldn’t. I have a feeling this is a bug related to the first optimisation mentioned previously about not computing scores for all trump suits. In one test, when I put in bids of 7 of every suit, they all had par 0 - after I added a 7N bid too, they were all fixed to proper values, as if they are marked as 0 because it doesn’t have sufficient information to figure out what par should actually be.

Doubles are broken, yet again

https://www.bridgeba...=pp1sxp2c2s&v=n

North is trying to decide what to bid over 2♠ and simulates the above deal.

-110.0		par  -110	book  -110	 P.P.1S.X.P.2C.2S.P 		P.P
+105.0		par  +100	book  +110	 P.P.1S.X.P.2C.2S.3C 		P.P.P
-180.0		par  -470	book  +110	 P.P.1S.X.P.2C.2S.X 		P.3C.P.P.P


If it passes, the opps will make 2♠ for -110. It doesn’t consider the fact that partner may bid 3♣, but since par is only relevant when it improves things for the opponents, this doesn’t matter much.

If it bids 3♣, it calculates a par of just +100, presumably because the opponents could escape to the better (for them) contract of 3♠x-1, resulting in a final average of +105.

If it doubles, it calculates a par of.. -470! It decides that, rather than bidding the making 3♣ like the database says we will, the opponents may bewitch us into passing the takeout double for no reason. This results in doubling being considered the clear worst choice at both IMPs and MPs for this deal, despite that being ludicrous.

Yet on a similar deal - in this case the opponents push higher in spades in the middle auction - par doesn’t leave the double in:

https://www.bridgeba...=pp1sxp2c2s&v=n

-110.0	par  -110   book  -110  P.P.1S.X.P.2C.2S.P  	P.P
-30.0	par  -110   book   +50  P.P.1S.X.P.2C.2S.3C     3S.P.P.P
-200.0	par  -110   book  -200  P.P.1S.X.P.2C.2S.X  	P.3C.P.P.P


I have no clue at all what’s going on.

Pars sometimes simply make no sense at all

I had some other examples I was going to put here, but since I started this months ago, I can no longer find a couple of them. Suffice it to say there are plenty of other situations where par scores have just left me stumped.

Two last oddities to finish.

Oddity 1 - fully zeroed out scores

Here’s a strange little quirk I discovered.

If, on a given simulated deal, all potential bids lead to the same final contract, double dummy and par scores are not computed at all, instead resulting in output like this:

 +0.0	 	par +0 		book +0	 P.P.1S.X.P.2C.2S.P 	P.3C.P.P.4S.P.P.P
 +0.0 		par +0 		book +0	 P.P.1S.X.P.2C.2S.X 	4S.P.P.P
 +0.0 		par +0 		book +0	 P.P.1S.X.P.2C.2S.3C 	4S.P.P.P


Now, this small optimisation in itself makes total sense; the double dummy and par calculations are what takes up all of the time, so the less you have to do, the better. And since all scores are computed as differences pair-wise, you get the same 0 assigned to each pair as if you had performed the full analysis, so this isn’t influencing any overall averages.

Or is it?

For a completely baffling reason, the logic for IMPs and MPs differs.

When the scoring is set to MPs, these 0s are included when computing the final averages. If there are 30 deals, and 10 of these are zeroed out, the total sum of scores for each bid is still divided by 30 to calculate the final average.

But when the scoring is set to IMPs, the zeroed out hands are ignored - the total sum of scores is divided by 20.

This would make absolutely no difference to the relative positions of the final bids.. except for the fact the book bid is given a fixed bonus.

If a simulated bid averages a 0.6 IMP gain over the book bid on the 20 non-zeroed-out hands in this example, GIB would choose that bid because it’s above the 0.5 IMP bonus threshold.. even though the average over all 30 hands is only 0.4.
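In numbers, using that example:

```python
# 30 simulated deals, 10 of them zeroed out, and an alternative bid gaining
# 0.6 IMPs per deal on the other 20.
gains = [0.6] * 20 + [0.0] * 10

imp_style_average = sum(gains) / 20   # zeroed deals excluded -> 0.6
mp_style_average  = sum(gains) / 30   # zeroed deals included -> 0.4

# Only the IMP-style average clears the 0.5 book-bid bonus, so at IMPs the
# alternative gets chosen; divide the MP way and it wouldn't be.
assert imp_style_average > 0.5 > mp_style_average
```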

Why GIB deals with these hands differently at MPs vs IMPs is a complete mystery.

Oddity 2 - once in a blue moon

I said that the final score for each bid is either the book score, or the average of book and par, depending on whether you expected to outperform par or not.

On my latest runs of tests, this was true on 2379 out of 2383 calculations.

On the other 4, the book score was higher than par, but the par score was used as the final score, rather than the average:

https://www.bridgeba...a=1hp1np2dp2sxp

+200.0 	par  +200 	book  +800 	1H.P.1N.P.2D.P.2S.X.P 	P.3C.P.4H.4S.P.P.X.P.P.P
+600.0 	par  +600 	book  +600 	1H.P.1N.P.2D.P.2S.X.3D 	4S.P.P.5D.P.P.P
+600.0 	par  +600 	book  +800 	1H.P.1N.P.2D.P.2S.X.4H 	4S.P.P.X.P.P.P


The first and third lines don’t follow the normal rules. I can’t find any pattern to these - there are plenty with similar scores where the average is used as expected - but they are *extremely* rare, so maybe it’s just a glitch in the code.

Conclusion

I may have misinterpreted some aspects of how GIB works, having zero access to the actual algorithm. But overall it really does seem that some parts of GIB are fixable.

#2 thorvald

  • Group: Full Members
  • Posts: 376
  • Joined: 2012-September-05
  • Gender:Male
  • Location:Denmark

Posted 2023-April-26, 06:09

Impressive work Stephen

From today's free Daylong tournament I had this deal:



I think most players would bid 2 on the North hand instead of passing 2.

I assume we are out of the defined sequences, so this calls for a simulation

Following your explanation, 2 going down will always be doubled, and then it seems better to let the opps play 2 - if the simulation cannot be fixed, this just calls for a generic rule that over partner's opening you should not let the opponents play below 2 of your major when you hold a 6-card suit.

For those still here, there was also an interesting observation in the play - just follow the play.
Thorvald Aagaard
Mobile : +45 22 99 55 25
http://www.netbridge.dk
http://www.thorvald.dk

#3 smerriman

  • Group: Advanced Members
  • Posts: 4,042
  • Joined: 2014-March-15
  • Gender:Male

Posted 2023-April-26, 14:18

thorvald, on 2023-April-26, 06:09, said:

I assume we are out of the defined sequences, so this calls for a simulation

Following your explanation 2 going down will always be doubled, and then it seems better to let opps play 2

No, this is wrong on a couple of counts - for one, simulating 2 going down would only be doubled on the deals it was found to be down 3 or more; none of the other conditions match.

But far more importantly, you said you're playing in the free daylong; free robots do not simulate during the auction, and make the database bid 100% of the time as I mentioned early in the above post. These robots are of zero interest / relevance to this thread; the entire database was based around working in conjunction with simulations, so using it by itself is always going to be horrible. Proper robots do simulate and comfortably bid 2 here.

#4 pilowsky

  • Group: Advanced Members
  • Posts: 3,774
  • Joined: 2019-October-04
  • Gender:Male
  • Location:Poland

Posted 2023-April-26, 19:07

smerriman, on 2023-April-26, 14:18, said:

No, this is wrong on a couple of counts - for one, simulating 2 going down would only be doubled on the deals it was found to be down 3 or more; none of the other conditions match.

But far more importantly, you said you're playing in the free daylong; free robots do not simulate during the auction, and make the database bid 100% of the time as I mentioned early in the above post. These robots are of zero interest / relevance to this thread; the entire database was based around working in conjunction with simulations, so using it by itself is always going to be horrible. Proper robots do simulate and comfortably bid 2 here.


Really? I thought the free daylongs used "advanced" GIB?
I'm heartbroken.


Fortuna Fortis Felix

#5 smerriman

  • Group: Advanced Members
  • Posts: 4,042
  • Joined: 2014-March-15
  • Gender:Male

Posted 2023-April-26, 20:36

pilowsky, on 2023-April-26, 19:07, said:

Really? I thought the free daylongs used "advanced" GIB?
I'm heartbroken.

I suspect you're kidding but "advanced" is a synonym for "paid". The weekly free instant is an exception, since there the original players had paid robots, so you have to have the same ones to get accurate comparisons.

#6 thepossum

  • Group: Advanced Members
  • Posts: 2,572
  • Joined: 2018-July-04
  • Gender:Male
  • Location:Australia

Posted 2023-April-27, 18:55

Has anyone ever organised a team match between the Free and Advanced/Paid bots? Is there a difference between Free and Basic?

I would even consider paying to watch

#7 pescetom

  • Group: Advanced Members
  • Posts: 7,920
  • Joined: 2014-February-18
  • Gender:Male
  • Location:Italy

Posted 2023-April-29, 15:12

Just to say well done.

I hope and trust that BBO feel obliged to provide an adequate response.

Maybe a moderator could also ensure this discussion is pinned.
