I have come across a new bridge-playing program which has a particular appeal for me: it genuinely allows you to enter complete bidding systems!
I propose to review it (and subsequently the other robots I have) briefly under the headings: general, good features, bad features, bidding performance, play performance, and tentative conclusion.
Here goes for Oxford bridge:
(1) General:
Oxford bridge featured in the very early world computer bridge championships but has never won a championship and has not featured in recent years. To download a trial version of the program, Google "oxford bridge".
(2) Good features:
Oxford bridge has a pleasant interface and is user-friendly.
Its bidding systems have been programmed in a database language (think Prolog or Lisp) and are open-ended. Most programs allow you to add conventions from an existing collection; Oxford bridge also allows you to change the meanings of/requirements for bids, in relatively normal if stilted English. Thus you can extend and correct the built-in bidding systems and also enter completely new bidding systems from scratch. The only limitation is imposed by a limited vocabulary. I am not au fait with the latest bidding systems, but I would guess the vocabulary would cope with them. It could cope with Fred's Modern Standard or Reese's Little Major; Roman Club might require a good deal of ingenuity.
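To make this concrete, here is a minimal sketch in Python of what an open-ended, rule-based bid database might look like. The names and syntax are entirely my own illustration; Oxford bridge's actual rule language is a constrained form of English, not code.

    from dataclasses import dataclass

    HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}  # standard high-card points

    @dataclass
    class Hand:
        spades: str
        hearts: str
        diamonds: str
        clubs: str

        def suits(self):
            return (self.spades, self.hearts, self.diamonds, self.clubs)

        def hcp(self):
            return sum(HCP.get(c, 0) for suit in self.suits() for c in suit)

        def is_balanced(self):
            # No singleton or void, and at most one doubleton.
            lengths = sorted(len(s) for s in self.suits())
            return lengths[0] >= 2 and lengths[1] >= 3

    # The rule table is plain data: you can append new entries or edit the
    # conditions, which is the open-endedness praised above. The first
    # matching rule wins, so order encodes priority.
    OPENING_RULES = [
        ("1NT", lambda h: h.is_balanced() and 12 <= h.hcp() <= 14),  # weak no-trump
        ("1S",  lambda h: len(h.spades) >= 5 and h.hcp() >= 12),
        ("1H",  lambda h: len(h.hearts) >= 5 and h.hcp() >= 12),
        ("PASS", lambda h: True),  # fallback when nothing else applies
    ]

    def opening_bid(hand):
        for bid, applies in OPENING_RULES:
            if applies(hand):
                return bid

    print(opening_bid(Hand("AKQ72", "8", "KJ54", "962")))  # -> 1S (13 HCP, unbalanced)

Replacing or reordering entries in such a table is all it takes to change a bid's requirements, which is roughly the facility the program offers in English.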
The play engine is pragmatic reasoning backed up by random simulation, potentially the most powerful approach, but I think the cross-over point between the two needs more work. There is a play commentary which explains plays retrospectively in recognisable but limited English.
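As a rough illustration of that architecture, here is a Python sketch of a hybrid card-chooser: rules first, simulation as back-up. All the names here are hypothetical; this is not Oxford bridge's code.

    import random

    def choose_card(legal_cards, rules, sample_layout, evaluate, n_samples=100):
        # 1. Pragmatic pass: play the first rule's confident suggestion.
        for rule in rules:
            suggestion = rule(legal_cards)
            if suggestion is not None:
                return suggestion

        # 2. Simulation pass: deal the unseen cards at random many times,
        #    score every legal card in each layout (say, by double-dummy
        #    tricks), and play the card with the best average.
        totals = {card: 0.0 for card in legal_cards}
        for _ in range(n_samples):
            layout = sample_layout()
            for card in legal_cards:
                totals[card] += evaluate(card, layout)
        return max(totals, key=totals.get)

    # Toy demo with stub components, just to show the control flow.
    legal = ["SA", "S7", "S2"]
    silent_rule = lambda cards: None                               # no rule fires
    layout = lambda: random.random()                               # stub deal
    tricks = lambda card, lay: lay + (0.5 if card == "SA" else 0)  # stub scorer
    print(choose_card(legal, [silent_rule], layout, tricks))       # -> SA

The cross-over point I am complaining about is exactly the hand-off in step 1: how readily the rules stand aside and let the simulation decide.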
(3) Bad features:
I would say that both its marketing and its copy protection need re-thinking. The former is off-putting and the latter comes over as paranoid.
It does not recognise PBN (Portable Bridge Notation) files.
(4) Bidding performance:
The built-in systems seem overly cautious, but this may not matter since they can be modified in detail.
(5) Play performance:
Unfortunately, Oxford bridge plays well below its potential, and the random simulation seems to kick in only when the contract is already down.
I tested Oxford bridge (why not call it Oxbridge?) on a sample of hands from BM2000: the first five deals with no competitive bidding from each level of section A. A simple criterion of 1 point if BM says correct and 0 if BM says incorrect gives the following results:
Level 1: 5/5 (100%)
Level 2: 4/5 (80%)
Level 3: 0/5 (0%)
Level 4: 0/5 (0%)
Level 5: 0/5 (0%)
(6) Tentative conclusion:
I keep looking for the "great white hope", a program based on pragmatic reasoning which would outplay random simulations. And I keep getting disappointed.
I fear this program is not the answer. Perhaps if the play engine were open to modification like the bidding, this could be the genesis of a great program?
#2
Posted 2012-October-03, 23:46
Oxford bridge also allows bids to have alternative meanings. These are distinguished by priorities. In bid explanations, only the features common to all meanings are described. (My maths are too rusty for me to decide whether this should be HCF or LCD?)
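For what it is worth, an intersection of feature sets behaves more like an HCF (what every alternative shares) than an LCD. A minimal Python sketch of that explanation rule, with invented names rather than the program's own:

    # meanings: one feature set per alternative, highest priority first.
    def explain(meanings):
        common = set.intersection(*meanings)
        return ", ".join(sorted(common)) if common else "(no common features)"

    # E.g. a response that is either natural in diamonds or a heart transfer:
    natural  = {"5+ diamonds", "game-forcing"}
    transfer = {"5+ hearts", "game-forcing"}
    print(explain([natural, transfer]))  # -> game-forcing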