[Ben] And I Believed the Key Card Bid Description!
#1
Posted 2024-January-21, 07:56
Hello! Bid description says a 5♣ response to the 4NT asking bid shows 0 or 3 key cards. The robot actually had one. Maybe it's just me, but somehow the robot's math seems wrong.
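For reference, a minimal sketch of the standard 0314 key card responses the bid description implies (the function name is illustrative, not BBO's or Ben's actual code):

# Standard 0314 Roman Key Card Blackwood responses to 4NT.
# "Key cards" = the four aces plus the king of the agreed trump suit.
def rkcb_0314_response(key_cards: int, has_trump_queen: bool = False) -> str:
    if key_cards in (0, 3):
        return "5C"  # 0 or 3 key cards
    if key_cards in (1, 4):
        return "5D"  # 1 or 4 key cards
    if key_cards == 2:
        # With exactly 2 key cards, the trump queen splits the responses.
        return "5S" if has_trump_queen else "5H"
    # Holding all 5 key cards is rare and omitted from this sketch.
    raise ValueError(f"unhandled key card count: {key_cards}")

print(rkcb_0314_response(1))  # the linked hand held one key card -> "5D", not the 5C the robot bid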
Mike
http://tinyurl.com/yr2x42ht
#2
Posted 2024-January-22, 00:31
msheald, on 2024-January-21, 07:56, said:
Hello! Bid description says a 5♣ response to the 4NT asking bid shows 0 or 3 key cards. The robot actually had one. Maybe it's just me, but somehow the robot's math seems wrong.
Mike
http://tinyurl.com/yr2x42ht
Same problem: http://tinyurl.com/yrt6v6nn
#5
Posted 2024-January-24, 07:56
Interesting. Fortunately we have two different pieces of data which confirm my impression: the responses to 4NT by msheald (#1) and zhenya_S (#2), 5♣ and 6♣, are clearly incorrect, the responder holding in the first case only one key card (the ace) and in the second two key cards plus the trump queen. Without wanting to offer a justification, it looks as though the robot in North is reacting to the three key cards in its partner's hand in #1, and to the void in #2, where partner is the one asking with RKCB. In these cases a misleading situation is created which must be urgently remedied, to avoid misunderstandings and the related, justified complaints. (Lovera)
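For concreteness, here is how the two reported hands map under the advertised 0314 scheme (hand facts taken from this post; the function is a hypothetical sketch, not the robots' actual logic):

def rkcb_0314(key_cards, has_trump_queen=False):
    # Standard 0314 responses to 4NT, as in the sketch under post #1.
    if key_cards in (0, 3):
        return "5C"
    if key_cards in (1, 4):
        return "5D"
    return "5S" if has_trump_queen else "5H"  # the two-key-card cases

print(rkcb_0314(1))        # hand #1: one key card -> 5D, yet the robot bid 5C
print(rkcb_0314(2, True))  # hand #2: two key cards + trump queen -> 5S, yet the robot bid 6C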
#6
Posted 2024-January-24, 15:02
The robots used in the "Try our AI Bridge Engine" games were an experimental version of the open-source Ben robot -- those games did not use BBO's usual GIB robots. I've noted the feedback and passed it on to the programmers.
I edited the topic title to reflect this, so that it won't confuse players looking for GIB feedback.
#7
Posted 2024-January-24, 16:07
diana_eva, on 2024-January-24, 15:02, said:
The robots used in the "Try our AI Bridge Engine" games were an experimental version of the open-source Ben robot -- those games did not use BBO's usual GIB robots. I've noted the feedback and passed it on to the programmers.
I edited the topic title to reflect this, so that it won't confuse players looking for GIB feedback.
As with the basic GIB, these robots should also have a convention card (CC) that can be checked for their bidding agreements, carding, and the conventions they use, with exact explanations of the responses. If AI stands for Artificial Intelligence, you have just experienced what happens, as we already know from other situations: we still have to wait before we see good results.
#9
Posted 2024-January-25, 10:08
If you're waiting for AI to actually read and follow what they're spewing out as "reality", you've got a long wait. More likely to be useful than more generic LLMs, but still.
"They don't play blackwood right, even though they say that's what they're showing". Wow, totally surprising. Never would have believed that. Well, that, or bidding 2NT Jacoby with totally unsuitable hands, or responding in 3-card spade suits, or ... One of them, anyway.
Frankly, anybody who expected "AI bot" with this much experience being able to do *anything* without occasionally falling off a clearly marked, sign-posted cliff, isdelusional believing the "rise of AI will take over everything" (except what I do, which is clearly more complicated than [topic of the day, which I know enough to be wrong about], of course).
Yeah, it's surprising to us all that the big cliff was Ace-asking, but that's because humans don't "think" like LLMs do, and can see why "these are rigid constraints". LLMs can't - they just see that (against their training bots), slams win more than they lose, when they "think" about their blackwood responses (and their LLM partner "thinks" about whether to bid 6, and their LLM opponents "know" that they aren't off two aces).
It's *really hard* to program a computer to follow a bidding system - especially to know which constraints are *critical* and which are subject to judgement. Almost as hard as programming your regular partner, even.
"They don't play blackwood right, even though they say that's what they're showing". Wow, totally surprising. Never would have believed that. Well, that, or bidding 2NT Jacoby with totally unsuitable hands, or responding in 3-card spade suits, or ... One of them, anyway.
Frankly, anybody who expected "AI bot" with this much experience being able to do *anything* without occasionally falling off a clearly marked, sign-posted cliff, is
Yeah, it's surprising to us all that the big cliff was Ace-asking, but that's because humans don't "think" like LLMs do, and can see why "these are rigid constraints". LLMs can't - they just see that (against their training bots), slams win more than they lose, when they "think" about their blackwood responses (and their LLM partner "thinks" about whether to bid 6, and their LLM opponents "know" that they aren't off two aces).
It's *really hard* to program a computer to follow a bidding system - especially to know which constraints are *critical* and which are subject to judgement. Almost as hard as programming your regular partner, even.
When I go to sea, don't fear for me, Fear For The Storm -- Birdie and the Swansong (tSCoSI)