BBO Discussion Forums: Rating Players



Rating Players: Basic theory

#21 barmar

  • Group: Admin
  • Posts: 21,581
  • Joined: 2004-August-21
  • Gender:Male

Posted 2009-November-21, 00:27

bid_em_up, on Nov 20 2009, 07:54 PM, said:

Just out of curiosity, why does the rating need to be displayed for everyone to see? Assuming a reasonable rating system could be established, simply show it to each person when they log on so that they are the only person who actually knows what their "rating" is. Hopefully, this might convince at least some people to assess their profile "level" somewhat more accurately.

Other random thoughts:

It could be an optional item to be displayed on profile. (Check a box for on or off).

Certain "levels" could be prevented from claiming to be Expert or World Class status based upon their rating proficiency. This doesn't have to be based in rocket science. It's quite irritating to look at someone's MyHands records who has expert/WC in their profile, and yet, they have a negative 2.5 imp score in 1500 hands. It's fairly safe to say, this person is not Expert, much less World Class, and if that hurts their itty bitty feelings....well, so be it.

If somebody decides they don't wish to see how good/bad they are performing, it should be a simple matter to make it possible to "opt out" of the rating system entirely.

just my $0.05 (inflation is killing me)

If the rating isn't displayed, it isn't useful. The only people who care about ratings are OTHER people, who want to decide whether to partner with you, or whether they're willing to play against you. On OKbridge, table hosts typically advertise a range of Lehmans that they're willing to accept at their table (not too much lower than theirs because they want decent competition, but not too much higher because they don't want to be out of their league).

If displaying ratings were optional, people with good ratings would generally display them and people with bad ratings would not. Anyone looking for a good partner would simply ignore people with hidden ratings, on the assumption that they must be bad if they're not willing to show them.

What's strange, though, is that other games don't seem to have this problem. A rating system has been part of chess for many years. Do chess players with poor ratings get discouraged and stop playing, or does it spur them to keep trying to improve?

#22 Old York

  • Group: Full Members
  • Posts: 447
  • Joined: 2007-January-26
  • Location:York, England
  • Interests:People, Places, Humour

Posted 2009-November-21, 11:53

Self-rating works reasonably well, but there will always be exceptions.
BBO masterpoints are awarded to players who are prepared to play/sub in certain tournaments, but many "novices" build up high numbers by playing in BiL tournies.

I would like to see a way of restricting tournies to masterpoint holders, and maybe have the option to set the level to 20+ MP etc.

Tony
Hanging on in quiet desperation, is the English way (Pink Floyd)

#23 TylerE

  • Group: Advanced Members
  • Posts: 2,760
  • Joined: 2006-January-30

Posted 2009-November-21, 12:15

Old York, on Nov 21 2009, 12:53 PM, said:

Self rating works reasonably well

In what parallel universe?

#24 suokko

  • Group: Full Members
  • Posts: 289
  • Joined: 2005-October-18
  • Gender:Male
  • Location:Helsinki (Finland)
  • Interests:*dreaming*

Posted 2010-June-05, 02:10

There is one big problem with rating systems: most people will treat ratings as serious competition. That will change the attitude at random social games for the worse.

Also, individual ratings don't really match the actual skill level of players in bridge. There is often a huge difference in bidding and defence performance depending on the partner. If someone can play good bridge in a regular partnership, that doesn't tell you how good she would be with a random partner.

There is a relatively simple rating system for tournaments with different levels of players.

Take the average rating of the top 10%. That is the competition rating (C).

Then from the results calculate the average score of the top 10%. That is the average result (A).

The other variables are the number of competitors (N), the result (R) and the position in the rankings (P).

rating points = C * (R/A) * (1 - 0.15*(P-1)/(N-1))

R and A have to be positive values, so a zero in IMP scoring would need to be translated so that everyone has a positive result for rating purposes.

The actual pair/player rating is then the average of the 5 best results in the last year. If fewer than 5 tournaments have been played, the rating is reduced by 3% for each missing one.

The rating list is published periodically (every month). Ratings are scaled so that the first-ranked player always has a score of 100.

Unrated pairs/players joining a tournament are assigned a rating of 70.

This system is a slightly modified version of the one used in the Finnish orienteering ranking. It gives quite good rankings for different levels of competition.
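
A minimal Python sketch of the formula and the rating rule above. All names are illustrative; taking the "top 10%" as the highest-rated/highest-scoring entrants, and applying the 3% reduction per missing event, are my readings of the post, not things it spells out:

```python
def tournament_rating_points(scores, ratings, player_score, player_rank):
    """Rating points for one event, per the formula above.

    scores       - positive results of all competitors in the event
    ratings      - current ratings of all competitors
    player_score - this player's result R (must be positive)
    player_rank  - this player's position in the rankings P (1 = winner)
    Assumes at least two competitors.
    """
    n = len(scores)                                      # number of competitors N
    top = max(1, n // 10)                                # size of the top 10%
    c = sum(sorted(ratings, reverse=True)[:top]) / top   # competition rating C
    a = sum(sorted(scores, reverse=True)[:top]) / top    # average result of the top 10%, A
    return c * (player_score / a) * (1 - 0.15 * (player_rank - 1) / (n - 1))


def player_rating(event_points):
    """Average of the 5 best results from the last year (event_points must be
    non-empty); one reading of the '3% for each' rule: reduce by 3% for every
    event short of 5."""
    best = sorted(event_points, reverse=True)[:5]
    rating = sum(best) / len(best)
    missing = max(0, 5 - len(event_points))
    return rating * (1 - 0.03 * missing)
```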

I agree with Gnome's ladder idea if people want competitive rated games. Of course that would require improvements to the team game system. Ladder ratings could decay 5-10% towards the starting rating.

Here is a list of features/improvements that would help:
- Pre-registering to a team
- Creating a team match against a random team
- Some limitations on team game options for a ladder match
- Selecting the TD

#25 helene_t

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,196
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2010-June-05, 04:17

If a rating system were based on robot reward tournaments only, it would avoid two of the major pitfalls of rating systems:
- Your human partner would not be in a position to ruin your rating, so you would not need to avoid bad/overrated partners, and you would not get into fights with them of the "you ruined my rating!" type that are so frequent on sites with ratings.
- There would be no extra incentive to cheat (or to accuse other people of cheating), since you can't cheat in the robot award tourneys. (OK, there may be good players who would be willing to play robot award tourneys using someone else's account to boost the client's rating and get paid for it, but I don't think that would be widespread.)

People could still manipulate the system by creating multiple accounts, playing some robot awards with each, and then continuing with the one that got luckiest. But the money and time that would cost would deter most people from doing so.
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#26 Tola18

  • Group: Full Members
  • Posts: 333
  • Joined: 2006-January-19
  • Location:Sweden
  • Interests:Cats.

Posted 2010-June-05, 06:23

helene_t, on Jun 5 2010, 05:17 AM, said:

(OK, there may be good players who would be willing to play robot award tourneys using someone else's account to boost the client's rating and get paid for it, but I don't think that would be widespread.)

People could still manipulate the system by creating multiple accounts, playing some robot awards with each, and then continuing with the one that got luckiest. But the money and time that would cost would deter most people from doing so.

Sure, but the same can probably be said about almost any rating system, unless perhaps you must play under your own name and face.

So, for the time being, your suggestion of using robot tournaments as a rating aid, IF we really need a rating system, is very interesting, I think.

I second this proposal.
Cats bring joy and a feeling of harmony and well-being into a home.
Many homeless cats seek a home.
Adopt one. Contact a cat shelter!
You too can be an everyday hero. :)

#27 Wackojack

  • Group: Full Members
  • Posts: 925
  • Joined: 2004-September-13
  • Gender:Male
  • Location:England
  • Interests:I have discovered that the water cooler is a chrono-synclastic infundibulum

Posted 2010-June-06, 04:36

Helene's idea of using robots for rating looks attractive. Suggestion:

1. Use the robot reward format with IMP scoring and random hands (not best hands).
2. Free admission.
3. Participants' names are hidden; min number, say, 30.
4. To get onto the ladder, play a certain number of tournaments such that the level of confidence is high. 200 hands, say, looks reasonable; mathematicians to work this out (a rough calculation follows below).
5. No limit to the number of tourneys that can be played.
6. Rolling expiry of, say, 2 years.
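
A back-of-envelope sketch for point 4. The 6-IMP per-board spread and the target standard error are my own assumptions, not figures from the post:

```python
import math

def boards_needed(per_board_sd=6.0, target_se=0.5):
    """Boards needed so that the standard error of the per-board IMP average
    drops to the target: se = sd / sqrt(n)  =>  n = (sd / se) ** 2."""
    return math.ceil((per_board_sd / target_se) ** 2)

print(boards_needed())               # 144 boards for ~0.5 IMPs/board
print(boards_needed(target_se=0.4))  # 225 boards, roughly the 200 suggested
```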

One problem that I can see is that GIB plays 2/1 exclusively, so those who don't play this will be disadvantaged and discouraged from participating. Another is that the more you play with GIB, the more you learn to adjust to its foibles, so no doubt the more you play in these tournies, the better your rating will get.
May 2003: Mission accomplished
Oct 2006: Mission impossible
Soon: Mission illegal

#28 helene_t

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,196
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2010-June-06, 04:51

1. IMP scoring against the field doesn't work for robot reward since the boards are not duplicated. IMPing against PAR would be fine. IMPing against a passout might be better than total points or money won. Note that it is essential that boards are not duplicated, to avoid cheating.

2. Hmmm, not sure if this would be in BBO's interests! Currently the fee is 25 cents for 25 minutes for robot races, but I think robot reward and robot rebate should be included as well.

3. This doesn't matter since they are not duplicated.

4. Yeah, or maybe you start with a prior belief that your percentile relative to the whole ladder is [0; 100], and then a 95% credibility range is reported. That way you can see from someone's rating not only the estimate but also how uncertain it is. Someone new to BBO would have a range of [2.5; 97.5]; an average player with 1000 recent boards might have [49; 51] (a rough sketch follows after this list).

5. Agree

6. Yeah, agree. Although exponential forgetting is cooler, a rolling expiry of two years is easier to understand.
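
A rough sketch of point 4, under my own choice of model (the post only asks for an estimate plus its uncertainty): treat "did better than the field median on a board" as a Bernoulli trial with a uniform Beta(1, 1) prior and report a 95% credible range for the player's percentile:

```python
from scipy.stats import beta

def percentile_credible_range(boards_better, boards_played):
    """95% credible range (in percentiles) for where a player sits on the ladder."""
    a = 1 + boards_better                      # successes plus prior
    b = 1 + (boards_played - boards_better)    # failures plus prior
    lo, hi = beta.ppf([0.025, 0.975], a, b)
    return 100 * lo, 100 * hi

print(percentile_credible_range(0, 0))        # ~(2.5, 97.5): a brand-new player
print(percentile_credible_range(500, 1000))   # ~(46.9, 53.1): narrows with data
```
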
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#29 Wackojack

  • Group: Full Members
  • Posts: 925
  • Joined: 2004-September-13
  • Gender:Male
  • Location:England
  • Interests:I have discovered that the water cooler is a chrono-synclastic infundibulum

Posted 2010-June-06, 06:44

Comparison against par is capricious. Is it really essential for boards not to be duplicated? Suppose that, over the tournament, duplicated boards were played in a random order for each player, with no board number and no player identification. Then the scope for cheating would be virtually zero. I believe the best assessment of ability is comparison with other players.
May 2003: Mission accomplished
Oct 2006: Mission impossible
Soon: Mission illegal

#30 helene_t

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,196
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2010-June-06, 06:54

Yes, I think it is essential that boards are not duplicated: not only to avoid cheating, but also to avoid the issue of whether results should be adjusted for the strength of the field and, if so, how.

Obviously IMPing against a table with four robots would be better than IMPing against par.
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#31 hrothgar

  • Group: Advanced Members
  • Posts: 15,480
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2010-June-06, 06:56

Wackojack, on Jun 6 2010, 03:44 PM, said:

Comparison against par is capricious. Is it really essential for boards not to be duplicated? Suppose over the tournament, duplicated boards were played in a random order for each player and had no number and had no player identification. Then the scope for cheating would be virtually zero. I believe the best assessment of ability is to compare with other players.

This seems nonsensical

1. You don't base a rating system on a single tournament. You need a large corpus of hands. It's impractical to have players complete a meaningful hand sample in a single event. Equally important, you really don't want to be using the same hands for multiple events stretched across multiple days.

2. GIB uses an intrinsically stochastic process for decision making. Even if players use the exact same boards, there is no guarantee that they will face the same bidding / play decisions.

When it comes to rating systems, duplication is both unnecessary and undesirable.
Alderaan delenda est

#32 helene_t

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,196
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2010-June-06, 07:24

I think you exaggerate, Richard. In non-duplicate scoring, probably most of the variance between players on a single round is attributable to differences between the deals played, and it would be nice to get rid of it.

But IMHO it just doesn't trump the arguments for having non-duplicated boards. And I suspect that IMPing against PAR would remove most of the unwanted variance.
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#33 hrothgar

  • Group: Advanced Members
  • Posts: 15,480
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2010-June-06, 08:01

helene_t, on Jun 6 2010, 04:24 PM, said:

I think you exaggerate, Richard. In non-duplicate scoring, probably most of the variance between players on a single round is attributable to differences between the deals played, and it would be nice to get rid of it.

I don't disagree with this; however, I don't think that the single-round criterion is in any way meaningful to a rating system.

I'd be leery about using a single day or a single event for a meaningful long-term rating.
Alderaan delenda est

#34 helene_t

  • The Abbess
  • Group: Advanced Members
  • Posts: 17,196
  • Joined: 2004-April-22
  • Gender:Female
  • Location:UK

Posted 2010-June-06, 08:09

Hey Richard, of course a single round is not meaningful. Jack suggested 200 rounds. Then the ratio between the between-player variance and the residual variance improves by a factor of 200, fine. But that is true regardless of which method you use, so duplicated results over 200 boards will still be much better than non-duplicated results over 200 boards. There is a reason that all serious competition is duplicate.
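
A quick sanity check of the factor-200 claim, under the standard assumption (my framing, not from the post) of independent per-board noise: if the between-player variance is s^2 and the per-board residual variance is r^2, then the average over n boards has residual variance r^2/n, so the ratio becomes s^2 / (r^2/n) = n * s^2 / r^2, i.e. it improves by exactly n = 200 whichever scoring comparison is used.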
The world would be such a happy place, if only everyone played Acol :) --- TramTicket

#35 Wackojack

  • Group: Full Members
  • Posts: 925
  • Joined: 2004-September-13
  • Gender:Male
  • Location:England
  • Interests:I have discovered that the water cooler is a chrono-synclastic infundibulum

Posted 2010-June-06, 08:14

hrothgar, on Jun 6 2010, 07:56 AM, said:

Wackojack, on Jun 6 2010, 03:44 PM, said:

Comparison against par is capricious. Is it really essential for boards not to be duplicated? Suppose that, over the tournament, duplicated boards were played in a random order for each player, with no board number and no player identification. Then the scope for cheating would be virtually zero. I believe the best assessment of ability is comparison with other players.

This seems nonsensical

1. You don't base a rating system off of a single tournament. You need a large corpus of hands. Its impractical to have players complete a meaningful hand sample in the single event. Equally important, you really don't want to be using the same hands for multiple events stretched across multiple days.

2. GIB uses an intrinsically stochastic process for decision making. Even if players use the exact same boards, there is no guarantee that they will face the same bidding / play decisions.

When it comes to rating systems, duplication is both unnecessary and undesirable.

1. I was not suggesting basing a rating system on a single tournament. I was suggesting how, in a single tournament, cheating might be avoided.

2. That's why I was suggesting that the rating be based on comparison with humans and not with par.

Comparing with par, even when IMPed, would need many times more hands on which to base a rating than comparing with humans, or indeed with other GIBs.
May 2003: Mission accomplished
Oct 2006: Mission impossible
Soon: Mission illegal

#36 hrothgar

  • Group: Advanced Members
  • Posts: 15,480
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2010-June-06, 08:28

helene_t, on Jun 6 2010, 05:09 PM, said:

Hey Richard, of course a single round is not meaningful. Jack suggested 200 rounds. Then the ratio between the between-player variance and the residual variance improves by a factor of 200, fine. But that is true regardless of which method you use, so duplicated results over 200 boards will still be much better than non-duplicated results over 200 boards. There is a reason that all serious competition is duplicate.

Hi Helene

I don't think that we're actually disagreeing...

If I thought that it were possible to use duplicated boards in a meaningful way for online bridge ratings, I would prefer to do so.

I don't think that duplicated boards can be used for a broadly deployed system. Furthermore, I don't consider this particularly troublesome.

A system based on nonduplicated boards will require a larger sample to generate the same degree of accuracy. Such is life.
Alderaan delenda est

#37 Wackojack

  • Group: Full Members
  • Posts: 925
  • Joined: 2004-September-13
  • Gender:Male
  • Location:England
  • Interests:I have discovered that the water cooler is a chrono-synclastic infundibulum

Posted 2010-June-06, 11:55

As evidence, this is my experience with MP robot tourneys where you play 8 boards for 25 cents and your results are matchpointed against other humans partnering robots playing the same boards.

I started about 2 years ago and have now played 1080 hands. My play in these tourneys has tailed off to about one 8-board tourney a week. Over the last 500 hands played, the 128-hand moving average and the overall average from the start have stabilised to within a 0.2% band on a matchpoint percentage scale, and at present they are exactly equal.

What I think this tells me is that my percentage record accurately reflects my playing ability at matchpoints relative to the rest of the field that plays in these tourneys. What it of course does not tell me is how good the field is relative to average BBOers or to good tournament players.
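
A minimal sketch of the convergence check described above (names are illustrative):

```python
def moving_average(mp_percentages, window=128):
    """128-hand moving average of per-hand matchpoint percentages."""
    return [sum(mp_percentages[i - window:i]) / window
            for i in range(window, len(mp_percentages) + 1)]

def overall_average(mp_percentages):
    return sum(mp_percentages) / len(mp_percentages)

# If the recent values of moving_average(...) stay within a narrow band around
# overall_average(...), the percentage record has stabilised.
```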
May 2003: Mission accomplished
Oct 2006: Mission impossible
Soon: Mission illegal

#38 spotlight7

  • Group: Full Members
  • Posts: 342
  • Joined: 2009-March-21

Posted 2010-June-06, 15:43

Hi:

Rating new players using 'hands played' may not be wise.

Mike Cappelletti had one of the top teams in the Washington D.C. area several decades ago. They played a 28 board IMP match against a team with zero master points and lost to the newbies.

The 'new team' could play bridge; they just did so in their own circle, which was quite good.

What is the problem? I normally know how good a person is after a few hands.
Do you really expect to find that many good 'pick-up players' on BBO?

Label people as friends if they play at/near your level. Avoid playing with players that your notes list as 'butcher' or 'wild overbidder.'

Regards,
Robert

#39 eyhung

  • Group: Full Members
  • Posts: 345
  • Joined: 2003-February-13
  • Location:San Jose, CA
  • Interests:bridge, poker, literature, boardgames, computers, classical music, baseball, history

Posted 2010-June-09, 17:32

As I stated in another thread, I think it's far more important to rate people on the basis of their behavior rather than their skill. In particular, the unpleasant behavior of quitting in the middle of a hand is easily quantifiable and can be discouraged with a reputation system that tracks how many times a player has left prematurely in the last 50 hands played, and by allowing main room servers to specify how tolerant they want to be (barring people with more than a 10% drop rate, for example).
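
A minimal sketch of such a reputation check, using the 50-hand window and 10% tolerance from the post (class and method names are illustrative):

```python
from collections import deque

class DropTracker:
    """Tracks premature exits over the last 50 hands for one player."""

    def __init__(self, window=50):
        self.recent = deque(maxlen=window)   # True = left mid-hand

    def record_hand(self, left_prematurely):
        self.recent.append(left_prematurely)

    def drop_rate(self):
        return sum(self.recent) / len(self.recent) if self.recent else 0.0

    def allowed_at_table(self, max_drop_rate=0.10):
        """Host-chosen tolerance: bar players above the given drop rate."""
        return self.drop_rate() <= max_drop_rate
```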

Right now the main room of BBO is trending towards the unpleasant nature of Yahoo! bridge with people popping in, seeing a bad hand, and leaving, or leaving immediately when their partner makes what they consider a bad mistake. I think this is fine for the Relaxed Club, but the Main Room should be a little more serious. It's my belief that this behavior has induced most of the good players on BBO to only play in set games, tournaments, or team matches.
Eugene Hung

#40 cloa513

  • Group: Advanced Members
  • Posts: 1,529
  • Joined: 2008-December-02

Posted 2010-June-10, 00:02

What if you have to quit, or your location has erratic internet?
