BBO Discussion Forums: Bridge Hands on Bridgebase



Bridge Hands on Bridgebase. Are the Bridge Hands on Bridgebase truly random?

#41 User is offline   mythdoc 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 114
  • Joined: 2020-January-12
  • Gender:Not Telling
  • Location:Tennessee USA

Posted 2021-March-19, 10:06

View Postpilowsky, on 2021-March-19, 09:52, said:

Or, to put it another way, the drunk man searches for his lost keys under the lamp-post because it's too dark to see anywhere else?


Prejudicial and completely unhelpful. Almost as if someone wanted to personify the thing I have criticized.
0

#42 User is offline   mythdoc 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 114
  • Joined: 2020-January-12
  • Gender:Not Telling
  • Location:Tennessee USA

Posted 2021-March-19, 10:11

View Posthrothgar, on 2021-March-19, 09:59, said:


The deal pool system does not need to care about whether hands are biased with respect to strength, the distribution of various shapes, or any of that sort of stuff. Presuming that the hand generators aren't borked, you can simply take a stream of inputs and slice and dice them into different pools.

Indeed, trying to add logic to look at hand strength (and tweak the allocation of hands accordingly) adds code, adds complexity, and increases the chances that something will get screwed up.



I have stated at least twice that I am not asserting that the deal pool cares about distribution or strength. But what if it cares (by which I mean the BBO program is made to care) about creating events that, as one possible motivation, ensure good players who play well are more likely to win, instead of being beaten by bad players whose bad plays succeed because the robots are bad?
0

#43 User is offline   hrothgar 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2021-March-19, 10:18

View Postmythdoc, on 2021-March-19, 10:04, said:

Neither of these. These questions (indeed your comment as a whole asking me to state an hypothesis) show your own assumption about “measurables.”

They reflect the factors that create scoring variance under the two scoring types. This has much less to do with distributions or strength than with the following:

MPs: capturing an overtrick or conceding another undertrick; competing correctly in part-score battles.
IMPs: game or slam hands, or hands where a swing comes from being set in a part score (particularly doubled) vs. making a part score. Overtricks are not particularly important at all.

As I have said, you will see the difference most obviously in just declare hands where the bidding has been done by the computer as well.


If this is truly obvious, state a simple, concise, and testable hypothesis.
Alderaan delenda est
0

#44 User is offline   mythdoc 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 114
  • Joined: 2020-January-12
  • Gender:Not Telling
  • Location:Tennessee USA

Posted 2021-March-19, 11:44

View Posthrothgar, on 2021-March-19, 10:18, said:

If this is truly obvious, state a simple, concise, and testable hypothesis.


If it’s as obvious to you as it was to me, you won’t need an hypothesis, lol.

For a rigorous test, wouldn’t one need to have as a control group a significantly sized group of players, similar to BBO’s in makeup, playing random hands obtained outside of BBO, against BBO robots, and analyzing scoring variance in both scoring types against the BBO group playing daylongs of both scoring types?

And wouldn’t one be looking for a significantly wider bell curve of scoring results, both on an average hand by hand basis, and upon an event by event basis, in the BBO deal pool group vs. the control group?

I don’t know how one would perform such a test given that these conditions cannot be replicated.

Other interesting tests while we’re discussing (a sketch of how such variance comparisons might be scored follows below):
1. One could plot bell curves of the scoring variance of different BBO events: live robot games vs. daylong robot games vs. robot national events.
2. One could also plot bell curves comparing results obtained (by a computer) playing under a brand-new BBO account vs. accounts of current players of varying records and experience on BBO.
I’m not asserting we’d find anything in test 2, but it would be interesting data to have a look at.
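A minimal sketch of how such a variance comparison might be scored, assuming one somehow had per-event scores for a BBO group and for an independent control group playing externally dealt hands against the robots. The data and names here are purely illustrative; no such data set exists in this thread.

Code

import random
import statistics

def spread_gap(bbo_scores, control_scores):
    """Difference in standard deviation between the two groups' scores."""
    return statistics.stdev(bbo_scores) - statistics.stdev(control_scores)

def permutation_p_value(bbo_scores, control_scores, trials=10_000, seed=0):
    """Two-sided permutation test: how often does a random relabelling of the
    pooled scores produce a spread gap at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = abs(spread_gap(bbo_scores, control_scores))
    pooled = list(bbo_scores) + list(control_scores)
    n = len(bbo_scores)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if abs(spread_gap(pooled[:n], pooled[n:])) >= observed:
            hits += 1
    return hits / trials

# Example with made-up MP percentages:
# p = permutation_p_value([48, 61, 55, 70, 39], [50, 52, 49, 53, 51])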
0

#45 User is offline   hrothgar 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2021-March-19, 12:00

View Postmythdoc, on 2021-March-19, 11:44, said:

If it’s as obvious to you as it was to me, you won’t need an hypothesis, lol.



You are the one making claims about how this is "obvious" to you and your friends.
If things are that simple, then you should be able to describe them.

Put up or shut up.
Alderaan delenda est
0

#46 User is offline   hrothgar 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2021-March-19, 12:04

View Postmythdoc, on 2021-March-19, 11:44, said:


For a rigorous test, wouldn’t one need to have as a control group a significantly sized group of players, similar to BBO’s in makeup, playing random hands obtained outside of BBO, against BBO robots, and analyzing scoring variance in both scoring types against the BBO group playing daylongs of both scoring types?



Your claim is that there are differences in the BBO deal pool.

More specifically, you stated that you and your friends

Quote

played a series of “non-best-hand” “just declare” MP’s challenges followed by a series of “non-best-hand” “just declare” IMP’s challenges. The hands were immediately, obviously different from one scoring method to the other.


Stop trying to muddy the waters with random distractions

Let's focus on a simple and specific claim that you made
Alderaan delenda est
0

#47 User is offline   mythdoc 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 114
  • Joined: 2020-January-12
  • Gender:Not Telling
  • Location:Tennessee USA

Posted 2021-March-19, 12:24

@Hrothgar: Respectfully, I think I have been quite clear.

I also think you are writing as if I am on the stand under cross examination. I think I have no obligation to play the role of “agitator” you have assigned me, while you play the role of “defender of order.” BBO is more than just our game, it is a for-profit concern. I have no idea why they do or don’t do anything, but I certainly did not come to BBO looking to make waves. What would be my motivation? What would be the motivation of all the people who have posted here over the years claiming to detect something not completely random in the mechanism? Yes, we all could be the saps who don’t recognize our own bias...but that is not science, that is psychology. We all have one. That just closes off the debate.

I outlined a test that would satisfy me. It’s clear but, alas, impossible to do. It seems (as I’ve repeated) that you would like a simple measurable that you can run through a computer on some hands—irrespective of the very factors I have stated about scoring types, events, players, and variance, complicated as they are. Perhaps as a reason not to try the test, yourself?

I think I have finished saying my say, and thanks for reading.
0

#48 User is offline   hrothgar 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2021-March-19, 13:21

View Postmythdoc, on 2021-March-19, 12:24, said:

@Hrothgar: Respectfully, I think I have been quite clear.

I also think you are writing as if I am on the stand under cross examination. I think I have no obligation to play the role of “agitator” you have assigned me, while you play the role of “defender of order.” BBO is more than just our game, it is a for-profit concern. I have no idea why they do or don’t do anything, but I certainly did not come to BBO looking to make waves. What would be my motivation? What would be the motivation of all the people who have posted here over the years claiming to detect something not completely random in the mechanism? Yes, we all could be the saps who don’t recognize our own bias...but that is not science, that is psychology. We all have one. That just closes off the debate.

I outlined a test that would satisfy me. It’s clear but, alas, impossible to do. It seems (as I’ve repeated) that you would like a simple measurable that you can run through a computer on some hands—irrespective of the very factors I have stated about scoring types, events, players, and variance, complicated as they are. Perhaps as a reason not to try the test, yourself?

I think I have finished saying my say, and thanks for reading.


You are asking people to do work to test and try to prove or disprove your claims.

If you want people to do this, you need to be more precise than "this is obvious to me, take a go at it".
Alderaan delenda est
0

#49 User is online   smerriman 

  • PipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 3,722
  • Joined: 2014-March-15
  • Gender:Male

Posted 2021-March-19, 13:30

Mythdoc, I'm really not understanding your posts at all.

Firstly, you say that you do not believe the generation of deals is flawed, and that you are only referring to the "deal pool". You included a reference to how BBO describes the deal pool. But then you talked about noticing the effects in robot challenges.

Deal pooling, as defined by BBO, works like this: for every board in a daylong, a fixed number of hands is randomly generated (a pool). Each time a human plays a board in the tournament, one of that board's hands is selected at random from the pool. That way every hand in the pool is played approximately the same number of times, while avoiding the effects of cheating by people who enter the tournament multiple times. This is not a secret; it's just a trivial algorithm.
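For readers who want the mechanics spelled out, here is a minimal sketch of that pooling idea, assuming plain random dealing; the function names and pool sizes are illustrative, not BBO's actual code.

Code

import random

RANKS = "23456789TJQKA"
SUITS = "SHDC"
DECK = [rank + suit for suit in SUITS for rank in RANKS]

def random_deal(rng):
    """Deal the 52-card deck into four 13-card hands (N, E, S, W)."""
    cards = DECK[:]
    rng.shuffle(cards)
    return {seat: sorted(cards[i * 13:(i + 1) * 13])
            for i, seat in enumerate("NESW")}

def build_pools(board_count, pool_size, seed=42):
    """Pre-generate a fixed pool of random deals for every board in the daylong."""
    rng = random.Random(seed)
    return [[random_deal(rng) for _ in range(pool_size)]
            for _ in range(board_count)]

def deal_for_player(pools, board_number, rng):
    """Each entrant gets one deal drawn at random from that board's pool,
    so every pooled deal is played roughly equally often."""
    return rng.choice(pools[board_number])

# e.g. an 8-board daylong with 30 deals per board:
# pools = build_pools(8, 30); my_board_3 = deal_for_player(pools, 2, random.Random())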

In a robot challenge, both players get the same boards. Deal pooling is therefore 100% inapplicable.

Correct me if I'm wrong, but it appears you've seen the words 'deal pool', misunderstood them, and are talking about something completely unrelated - that you believe that BBO are intentionally biasing the hand generator by not dealing random hands, but by throwing out 'flat hands' based on the scoring.

So let's continue based on that.

You then stated two things:

a) That it was "immediately obvious" with a "clear difference" solely by playing challenge hands.

b) That the only accurate way to test whether this is true or not is to get a large set of players to complete truly randomly generated hands and compare with BBO's "maybe-not-so-random" hands. And that testing this is basically impossible.

Your earlier posts lined up with a) - you made it extremely clear you thought this was testable:

Quote

As I’ve said twice previously, this exact same test (two sets of robot challenges described above) can be done by anyone who is interested in really investigating what I am talking about, as opposed to diverting the conversation back to the deal generator blah blah. So far, no one has mentioned they even tried it. That is disappointing but also predictable. Confirmation bias and unwillingness to explore beyond one’s comfortable beliefs can cut both ways.


If this is true, all you have to do is clearly quantify the factor that made it completely obvious to you. Then that can immediately be put to the test.

Yet when pressed to quantify it, you've moved towards statement b), which basically admits that your 'immediately obvious' was completely made up.

So which is it? I am happy to run tests based on a). All you have to do is quantify what was 'obvious' in your head.
0

#50 User is offline   mythdoc 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 114
  • Joined: 2020-January-12
  • Gender:Not Telling
  • Location:Tennessee USA

Posted 2021-March-19, 16:08

View Postsmerriman, on 2021-March-19, 13:30, said:

.....that you believe that BBO are intentionally biasing the hand generator by not dealing random hands, but by throwing out 'flat hands' based on the scoring.


Ok, yes, I have been saying this. Whether we call these “deal pool” hands or not is subsidiary to this point. I don’t think the hands from one event to another are the same, or from one scoring type to another. My hypothesis (hat tip to Hrothgar) is that hands that are flat for MP scoring occur less frequently than normal in an MP game, and hands that are flat for IMP scoring occur less frequently in an IMPs game.

[EDIT—note added] I am talking about individual robot games, which is what I have mainly played

Quote

So let's continue based on that.

You then stated two things:

a) That it was "immediately obvious" with a "clear difference" solely by playing challenge hands.


Yes, the thing that really threw up some questions for me and my friends was when we played the series of challenges. “Why in these just declare challenges were the MP set of hands so, so different from each IMP set, if they were not put through a filter?” we kept asking ourselves. I tried to elucidate these differences above, but perhaps I was too unclear.

—The MP hands typically had multiple decisions per hand designed to test one’s technique and appetite for risk in pursuing overtricks, saving undertricks, ruffing losers, finesses and other cardplay devices, establishing side suits, etc. (NOTE: The MP hands compared each to another didn’t have the same decisions, and these decisions were only occasionally influenced by distributions, splits and the like. All good bridge players know that the game is not as simple as distributions and splits.)
—The IMP hands were ridiculously simple by comparison.

Of course, my buddies and I playing was not a scientific process. But yes, seemingly time after time the MP hands just happened to have these MP-intensive decisions and the IMP hands didn’t. (I am not saying you could never go for an overtrick in an IMP hand. Please don’t infer that.)

Quote

b) If this is true, all you have to do is clearly quantify the factor that made it completely obvious to you. Then that can immediately be put to the test.

Yet when pressed to quantify it, you've moved towards statement b


Well then, do you know of an easy way to state a specific, simple quantitative factor that one could use to test hands for a “preponderance of scoring-method-specific decisions to be made”? I don’t.

But one could certainly test the hypothesis of fewer flat hands. That’s where method b), as you put it, comes in. In order to do so, one would have to do a study along the lines of what I described above. Testing for fewer flat hands would not be a comprehensive test of the ways and purposes an algorithm might use to select or leave out hands, but it is ONE specific way to inquire, and perhaps the only way, since bridge is such a complicated card game that scoring results don’t come down to one or another specific quantitative factor.
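A minimal sketch of how the flat-hand comparison could be scored, assuming someone had already classified each board in two samples as flat or not (say, by double-dummy analysis or by the spread of scores actually recorded on it). The counts in the usage comment are placeholders, not real data.

Code

import math

def two_proportion_z(flat_a, total_a, flat_b, total_b):
    """Two-sided z-test for a difference in the rate of flat boards."""
    p_a, p_b = flat_a / total_a, flat_b / total_b
    pooled = (flat_a + flat_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# e.g. 40 flat boards out of 400 BBO daylong boards vs. 55 out of 400 control boards:
# z, p = two_proportion_z(40, 400, 55, 400)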

I hope this is clearer. Thank you for allowing me to elucidate in answer to less contentious questions.
0

#51 User is online   smerriman 

  • PipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 3,722
  • Joined: 2014-March-15
  • Gender:Male

Posted 2021-March-19, 17:15

View Postmythdoc, on 2021-March-19, 16:08, said:

—The MP hands typically had multiple decisions per hand designed to test one’s technique and appetite for risk in pursuing overtricks, saving undertricks, ruffing losers, finesses and other cardplay devices, establishing side suits, etc. (NOTE: The MP hands compared each to another didn’t have the same decisions, and these decisions were only occasionally influenced by distributions, splits and the like. All good bridge players know that the game is not as simple as distributions and splits.)
—The IMP hands were ridiculously simple by comparison.

I think this is absolutely true. But it's also absolutely true for 100% random hands as well - playing MPs is far, far more complex than playing IMPs.

It's well known that when playing IMP tournaments, the majority of hands are completely meaningless - the whole tournament comes down to a small number of decisions in a small subset of hands - while when playing MP, every trick in every hand is important.

So the fact there are far more decisions to be made in MP vs IMPs doesn't imply non-randomness - it's what you'd expect as a baseline.

View Postmythdoc, on 2021-March-19, 16:08, said:

Well then, do you know of an easy way to state a specific, simple quantitative factor that one could use to test hands for a “preponderance of scoring-method-specific decisions to be made”? I don’t.


Nope, I have no idea how you could possibly test this. The fact that neither of us does in a sense strengthens the argument against you: the only way BBO could bias such hands is to have an algorithm for detecting them, so they would know which hands to throw out, and to do it in a way that doesn't affect the overall distributions, etc. I do not believe they are capable of having such an algorithm.

Since a large scale test is unfeasible, I would recommend you get a source of deals dealt via BigDeal, load them through a teaching table, and play 'as if it were a challenge'. I am confident that you would see exactly the same 'obvious' signs that the MP versions involve considerably more decisions.

If the manipulation is indeed obvious, and you go into the test with the right mindset (which is tough, since you'll be wanting to look for evidence you're right, rather than being unbiased), you should notice the difference straight away. You can then elaborate on specific differences that you noticed, with examples.
0

#52 User is online   smerriman 

  • PipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 3,722
  • Joined: 2014-March-15
  • Gender:Male

Posted 2021-March-19, 18:41

Actually, scrub the last paragraph. It's easy to run that test without bias: just have someone else mix together groups of deals that come from BigDeal and deals that come from challenges (or, even simpler, BBO IMP vs. BBO MP). Your task would be to guess which was which. If you are correct that there is an obvious difference, you should be able to guess consistently better than chance.

Let me know if you're game and I can set up the deals for you.
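A minimal sketch of that blinding step, assuming the deals from each source are already available as lists; how they are produced (BigDeal files, BBO challenge records) is outside the sketch, and the names are illustrative.

Code

import random

def build_blind_test(bbo_deals, control_deals, seed=None):
    """Mix the two sources and return (shuffled_deals, answer_key);
    only the experimenter keeps the key."""
    rng = random.Random(seed)
    labelled = [(deal, "BBO") for deal in bbo_deals] + \
               [(deal, "control") for deal in control_deals]
    rng.shuffle(labelled)
    deals = [deal for deal, _ in labelled]
    answer_key = [label for _, label in labelled]
    return deals, answer_key

def score_guesses(guesses, answer_key):
    """Fraction of deals labelled correctly (0.5 is roughly coin flipping)."""
    correct = sum(guess == answer for guess, answer in zip(guesses, answer_key))
    return correct / len(answer_key)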
0

#53 User is offline   mythdoc 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 114
  • Joined: 2020-January-12
  • Gender:Not Telling
  • Location:Tennessee USA

Posted 2021-March-19, 20:14

View Postsmerriman, on 2021-March-19, 17:15, said:

Nope, I have no idea how you could possibly test this. The fact that neither of us does in a sense strengthens the argument against you: the only way BBO could bias such hands is to have an algorithm for detecting them, so they would know which hands to throw out, and to do it in a way that doesn't affect the overall distributions, etc. I do not believe they are capable of having such an algorithm.


Actually, it would be quite easy for BBO servers to create this outcome. Thousands upon thousands of hands are played on BBO every day, generating scores at both MPs and IMPs. Hands played at anonymous tables, hands played at live tables; there is no shortage. All that is necessary is to recycle these hands, making sure not to deliver the same hand twice to the same user, while dropping out a few of the flattest and/or selecting out some of the boards generating wider swings.
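Purely as an illustration of the mechanism being speculated about here, and not anything BBO is known or shown to do, such a recycling filter could be sketched like this (all field names invented):

Code

def recycle_boards(played_boards, seen_deal_ids, drop_fraction=0.2):
    """played_boards: dicts like {"deal_id": ..., "deal": ..., "score_spread": float},
    where score_spread summarises how much the recorded results varied.
    Keep boards this user has not seen, then discard the flattest share."""
    unseen = [b for b in played_boards if b["deal_id"] not in seen_deal_ids]
    unseen.sort(key=lambda b: b["score_spread"])   # flattest boards first
    cutoff = int(len(unseen) * drop_fraction)
    return unseen[cutoff:]                         # serve the swingier remainder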

I want to make one final point that I think is likely to have been missed in all this back and forth. I don’t think there is some nefarious plot afoot to make BBO less fair for the average user. My belief is that, over the long haul, better players will get better results and lesser players will get lesser results, perhaps even more so if there is a filter being used to select harder or suppress easier hands. I, for one, welcome this challenge, if you’ll pardon the pun. I do speculate, however, that there may well be (at least) two motivations a profit-making online bridge website would find worthwhile in using such a filter: 1) it could provide a more engaging, more valuable experience for the player spending US$0.40, or whatever it costs in your local currency, to play more interesting hands as opposed to a daylong with 2 or 3 of the 8 hands flat; 2) it could lessen the instances when the robots generate flat boards by making an embarrassingly bad play (like leading an ace against certain slam contracts and enabling a laydown claim).

As to your other point, namely that MP hands and MP tournaments are inherently harder... sure! But surely you aren’t saying that an experienced bridge player can’t assess the difficulty of a given hand and (more importantly) its likelihood of generating a swing at MP scoring vs. at IMP scoring. I also won’t take you up on your offer to send me hands to identify. Take the hour you would spend generating deals for me to look at, and look at them yourself. Remember, they are “just declare,” “non-best-hand” MP challenges and IMP challenges. Thanks for reading. mythdoc out.
0

#54 User is online   smerriman 

  • PipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 3,722
  • Joined: 2014-March-15
  • Gender:Male

Posted 2021-March-19, 21:52

View Postmythdoc, on 2021-March-19, 20:14, said:

But surely you aren’t saying that an experienced bridge player can’t assess the difficulty of a given hand and (more importantly) its likelihood of generating a swing at MP scoring vs. at IMP scoring.

Huh? I'm saying that if BBO is biasing the hands and you think this is obvious, then it should be *easy* for you to look at the hands and determine which are the biased sets. If there is no bias, then you will be unable to do so.

If you're unwilling to take a simple test that will actually prove whether your claim is right or wrong, then I guess enough said, and there's no point anyone spending any more time on it.
0

#55 User is offline   hrothgar 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2021-March-20, 05:19

View Postsmerriman, on 2021-March-19, 21:52, said:

Huh? I'm saying that if BBO is biasing the hands and you think this is obvious, then it should be *easy* for you to look at the hands and determine which are the biased sets. If there is no bias, then you will be unable to do so.

If you're unwilling to take a simple test that will actually prove whether your claim is right or wrong, then I guess enough said, and there's no point anyone spending any more time on it.


Might make sense to do this with one set of hands that are mixed up, and then have people label the individual hands.
Alderaan delenda est
0

#56 User is offline   mythdoc 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 114
  • Joined: 2020-January-12
  • Gender:Not Telling
  • Location:Tennessee USA

Posted 2021-March-20, 07:13

Why not let everyone play? Just please understand that it is not scientific and doesn’t prove that BBO conditions hand delivery for the two scoring types. It only suggests it.

These two sets of four hands (four board robot challenges, just declare, non best hand) were generated this morning. Only these two sets were dealt. I didn’t pick and choose among sets, lol. If an algorithm was being used to enhance scoring swings, which set do you think would be MP’s scoring, and which would be IMPs scoring?

SET 1

https://tinyurl.com/ygh6cpy7

https://tinyurl.com/yhp5l466

https://tinyurl.com/yh79nqt5

https://tinyurl.com/yzpxg35h

SET 2

https://tinyurl.com/ygrz9jj8

https://tinyurl.com/ygrsto4k

https://tinyurl.com/yetatguk

https://tinyurl.com/yge2hw3x
0

#57 User is offline   mythdoc 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 114
  • Joined: 2020-January-12
  • Gender:Not Telling
  • Location:Tennessee USA

Posted 2021-March-20, 07:21

And also, I’d love a reply to this paragraph that I wrote above, that you guys ignored. smerriman, you said it would be impossible for BBO to have a computer sophisticated enough to generate conditioned hands. I wrote:

View Postmythdoc, on 2021-March-19, 20:14, said:

Actually, it would be quite easy for BBO servers to create this outcome. Thousands upon thousands of hands are played on BBO every day, generating scores at both MPs and IMPs. Hands played at anonymous tables, hands played at live tables; there is no shortage. All that is necessary is to recycle these hands, making sure not to deliver the same hand twice to the same user, while dropping out a few of the flattest and/or selecting out some of the boards generating wider swings.


Do you agree, then, it is quite doable?
0

#58 User is offline   mycroft 

  • Secretary Bird
  • PipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 7,058
  • Joined: 2003-July-12
  • Gender:Male
  • Location:Calgary, D18; Chapala, D16

Posted 2021-March-20, 09:34

That would last exactly until someone, replying to another person's post on the forums, pointed out that "when they held that hand in the main room, the auction went..." It hasn't happened.

It's possible, but it fails the corporation test: "How much money does it cost? How much money does it make? How expensive would it be to get caught? How easy would it be to find?" My answers, respectively: "lots", "none", "very", and "I wouldn't bet against it falling out of Nic Hammond going full bore", IMHO.

David Stevenson once said approximately, on these forums:

There are three types of bridge sessions:
  • hand-dealt, exciting and swingy: "These hands sure are weird tonight."
  • computer-dealt, exciting and swingy: "Those damned computer hands again."
  • flat, normal hands, hand- or computer-dealt: "Thanks for the game, James."

When I go to sea, don't fear for me, Fear For The Storm -- Birdie and the Swansong (tSCoSI)
0

#59 User is offline   hrothgar 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2021-March-20, 12:11

View Postmythdoc, on 2021-March-20, 07:13, said:

Why not let everyone play? Just please understand that it is not scientific and doesn’t prove that BBO conditions hand delivery for the two scoring types. It only suggests it.

These two sets of four hands (four board robot challenges, just declare, non best hand) were generated this morning. Only these two sets were dealt. I didn’t pick and choose among sets, lol.


No offense, but this isn't how you set up this kind of experiment:

I propose the following:

1. We establish a set of rules (in advance) regarding how the sets of deals will be selected. For example, we might choose the first 100 IMP boards and the first 100 MP boards on some given day. I don't much care what the rules are, but rather that some objective method is chosen in advance.

2. That set of 200 boards (or however many get chosen) gets mixed up.

3. People vote on individual boards.

4. We assess whether folks' ability to identify MP or IMP hands differs from a coin-tossing exercise (a simple way to score this is sketched below).
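A minimal sketch of how step 4 could be scored: an exact two-sided binomial test of the voters' hit rate against the coin-tossing baseline. It needs only the standard library (math.comb, Python 3.8+), and the example counts are placeholders.

Code

import math

def binomial_two_sided_p(hits, trials, p=0.5):
    """Probability of a result at least as extreme as `hits` under Binomial(trials, p)."""
    def pmf(k):
        return math.comb(trials, k) * p**k * (1 - p)**(trials - k)
    observed = pmf(hits)
    return sum(pmf(k) for k in range(trials + 1) if pmf(k) <= observed + 1e-12)

# e.g. voters identified 120 of 200 mixed boards correctly:
# p_value = binomial_two_sided_p(120, 200)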
Alderaan delenda est
0

#60 User is offline   pilowsky 

  • PipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 3,620
  • Joined: 2019-October-04
  • Gender:Male
  • Location:Israel

Posted 2021-March-20, 12:50

This doesn't sound like a mathematics problem as much as a Turing test problem.
How do you define "interesting" mathematically? Who decides?
It really sounds as though the wrong question is being asked. In which case, no satisfactory answer is possible.


Why not climb back out of Alice-in-wonderland world?
What is needed is an 'interest test.'


Here's a simple example. Find 100 boards that mythdoc played 100 days ago and see if he bids/plays them in the same way.


The percentage variance from the original bidding and play would be an "interest index".
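A minimal sketch of that index, assuming the bidding and play for each board are stored as comparable records; the record format is an assumption for illustration.

Code

def interest_index(original_records, replay_records):
    """Percentage of boards on which the bidding or play changed at the replay."""
    changed = sum(first != second
                  for first, second in zip(original_records, replay_records))
    return 100.0 * changed / len(original_records)

# e.g. interest_index(boards_played_100_days_ago, boards_replayed_today)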


0

