Fluffy, on 2016-February-02, 06:41, said:
Google has beaten Go, what could it do to bridge?
#22
Posted 2016-February-03, 02:31
psyck, on 2016-February-02, 06:44, said:
Yes, but so far GNU Bridge has not been a great success. Could change, of course.
#23
Posted 2016-February-03, 07:33
I think the question is how long it will take before they can beat a top ten or twenty player or pair. I would expect this before 2050, as we gain a fuller understanding of how the hardware and software of the brain work at a fundamental level. This research is being done now.
#24
Posted 2016-February-03, 09:04
mike777, on 2016-February-03, 07:33, said:
I think it has already happened. 15 years ago, Jack played at about "meesterklasse" level (highest level of the Dutch competition, comprising about 100 players out of an organized bridge population of about 150,000). I don't know how much it has improved since then but the increased CPU speed since year 2000 alone might be enough for it to win the Bermuda Bowl.
It is not easy to measure, though. In Go or Chess, just a handful of games against a World champ is enough to give you an idea of the relative strength, and those games are easy to set up. In bridge, it is a lot of work to make the computer understand the human bidding and carding system, and then you need a sample size of several hundred boards to make a reasonably robust verdict.
#25
Posted 2016-February-03, 09:09
There is something on YouTube on neural nets from Oriol Vinyals of Google DeepMind: "Recent Advances in Deep Learning", published on Jan 28, 2016 (dated January 27th, 2016).
#26
Posted 2016-February-03, 13:37
#27
Posted 2016-February-03, 16:33
#28
Posted 2016-February-04, 03:07
helene_t, on 2016-February-03, 09:04, said:
It is not easy to measure, though. In Go or Chess, just a handful of games against a World champ is enough to give you an idea of the relative strength, and those games are easy to set up. In bridge, it is a lot of work to make the computer understand the human bidding and carding system, and then you need a sample size of several hundred boards to make a reasonably robust verdict.
As explained above, much of the skill required in bridge is partnership-oriented.
What you can do on your own is only one ingredient of the overall picture. What really matters is what you can accomplish in a partnership.
I would claim that at high levels of Bridge the majority of poor and good results have a lot to do with partnership communication and understanding (or lack of it).
Do you seriously believe that playing with Jack as partner would make you competitive several levels higher than you are currently capable of, say qualifying for and taking part in the Bermuda Bowl, because Jack is so much better in certain aspects than the majority of bridge players?
I seriously doubt your claim.
Letting Jack play with Jack is mickey mouse in comparison. The real test is what BBO does with its robots.
Show me the partnership of such a robot with a human or an independently developed robot beating anyone else.
Rainer Herrmann
#29
Posted 2016-February-04, 03:24
rhm, on 2016-February-04, 03:07, said:
What you can do on your own is only one ingredient of the overall picture. What really matters is what you can accomplish in a partnership.
I would claim that at high levels of Bridge the majority of poor and good results have a lot to do with partnership communication and understanding.
Do you seriously believe that playing with Jack as partner would make you competitive several levels higher than you are currently capable of, say qualifying for and taking part in the Bermuda Bowl, because Jack is so much better in certain aspects than the majority of bridge players?
I seriously doubt your claim, and I am not aware of a single experiment where this has even been tried, for good reasons.
Letting Jack play with Jack is mickey mouse in comparison.
Rainer Herrmann
I concede that Jack+Jack vs. two humans is slightly unfair, because each Jack will be perfectly tuned to its partner, while the human opponents (unless we are talking about very stable partnerships like deWijs/Muller, the Hackett twins or Meckwell) will have less than perfect partnership understanding. But maybe your point is more that Jack lacks the skill to develop a distinct understanding with each partner.
Jack actually can do some of this: if multiple users use the same Jack installation, they have to log in with separate IDs, because Jack analyses their style and slowly builds up a knowledge base of each human's preempt style, overcall style and falsecarding style (not sure about the latter; maybe it is restricted to bidding). But of course, Jack is not human-like in this respect.
You could then play an individual tournament in which 99 expert humans (unknown to each other) participated together with one Jack. My guess is that Jack would be very close to, if not above, the world elite in such an experiment. I might be wrong today, but then I will probably be right within the next 5-10 years.
#30
Posted 2016-February-04, 10:42
rhm, on 2016-February-04, 03:07, said:
There are also other psychological features that are hard to program. During the play of the hand, a significant aspect is trying to infer what's in the unseen hands based on how those players are playing. One distinctively human ability is called "theory of mind" -- this is when you imagine what someone else is thinking, based on their actions. We do it all the time, mostly without thinking, even as infants. It's generally considered fundamental to our language ability.
I've thought on a number of occasions how we might add this to a program like GIB. When GIB does its simulations, it mostly uses information from the auction, and the known cards that have been played, to calculate likely types of hands of the other players. To make better inferences, it would have to go back through all the previous plays of the other players, perform simulations at each step, and determine which hands are consistent with the actual plays.
Ginsberg has told us that he tried something like this in GIB, but the computational expense was overwhelming. If he limited the number of simulations so the time was acceptable, the results were not very helpful.
If you play against the BBO robots now, you can generally tell when they're doing a complicated simulation: there's a very noticeable hesitation. Imagine multiplying that by 10 or 100 for almost every play, which is probably what would happen if we tried something like that.
This is where neural networks would probably do better. Rather than having to explicitly simulate all the possibilities, the network would learn to recognize common patterns and deduce the implications.
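The simulate-and-filter idea described above can be sketched in a few lines. This is a toy illustration only: the card values, the `play_model` opponent model, and the function names are all hypothetical, and a real program like GIB works on full 52-card deals, the auction, and every prior trick, not a six-card fragment.

```python
import random

# Six unseen cards, to be split 3-3 between the two hidden opponents.
UNSEEN = [2, 5, 9, 11, 13, 14]

def play_model(hand):
    # Crude, assumed opponent model: this opponent always leads their
    # lowest card. A real model would encode the agreed carding methods.
    return min(hand)

def consistent_layouts(observed_lead, trials=1000):
    """Deal the unseen cards at random many times and keep only the
    layouts in which the modeled opponent would have made the play
    we actually observed."""
    layouts = []
    for _ in range(trials):
        cards = UNSEEN[:]
        random.shuffle(cards)
        west, east = sorted(cards[:3]), sorted(cards[3:])
        if play_model(west) == observed_lead:
            layouts.append((tuple(west), tuple(east)))
    return layouts

# If West led the 5 rather than the 2, every surviving layout must
# place the 2 in East's hand -- an inference "for free" from the filter.
layouts = consistent_layouts(observed_lead=5)
```

The computational problem the post describes shows up immediately: to draw inferences from *every* earlier play, you would have to run this filtering at each step with the opponent model itself invoking simulations, which is where the cost explodes.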
#31
Posted 2016-February-05, 06:56
You can get some idea of current robot strength from Wbridge, which runs free daily duplicate matchpoint events with three of the players being robots and one a live player. Wbridge plays some insane methods, which is a drawback; Jack is at least configurable enough that you can probably find a convention card you can stand to play with it.
It might be interesting to see how Jack or Wbridge does in a robot/human partnership. If anyone wants to try, let me know: I can run Wbridge on BBO reasonably easily, as it has a one-hand mode, unlike Jack.
Jack's bidding is worse than top players'; its declarer play is its best ability, which I claim is on a par with pretty much the best.
#32
Posted 2016-February-05, 08:02
George Carlin
#33
Posted 2016-February-05, 08:25
Assuming the brain really is only made up of the stuff we can see, without inventing something extra, there is obviously no reason to suppose a computer can't do everything a human brain can.
#34
Posted 2016-February-05, 08:44
etha, on 2016-February-05, 08:25, said:
Absolutely. All we need to do now is let the computer evolve naturally over millions of years. Perhaps someone should start a breeding programme.
#36
Posted 2016-February-05, 10:05
StevenG, on 2016-February-05, 08:44, said:
If you think you're being sarcastic, you're not. There's actually a method of programming called "genetic algorithms". It starts with numerous versions of a program with random differences, and they each tackle a problem. Then the versions that did the best are "mated" and give birth to children, which are new versions of the program with random parts of each parent swapped (analogous to the crossover in biological sexual reproduction) and occasional random mutations as well. This is repeated over and over, which mimics natural selection.
It doesn't take millions of years; effective results can be obtained in hours.
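The steps described above (random population, fitness-based selection, crossover, mutation, repeat) can be sketched in a few dozen lines. This is a minimal illustration on an assumed toy problem (evolving a bit string toward a target); every name and parameter here is illustrative, not taken from any actual bridge program.

```python
import random

# Toy fitness target: a fixed bit string the population should evolve toward.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # Number of positions matching the target; higher is better.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Swap random parts of each parent, analogous to biological recombination.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    # Occasionally flip a bit, mimicking random mutation.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fittest half survives and breeds the next generation.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
        if fitness(population[0]) == len(TARGET):
            break  # perfect match found
    return max(population, key=fitness)

best = evolve()
```

Even this crude version typically converges in a fraction of a second, which is the point of the post: the "evolution" is simulated, not waited for.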
#37
Posted 2016-February-06, 05:15
According to TeachWeb, the dates of the challenge match are March 9, 10, 12, 13 and 15.
It is said that AlphaGo already has the ability to learn on its own, and that with the passage of time the AI will only get stronger, so some experts in our country think the first game is key.
Google spent $400 million on DeepMind; there were only 20 people then, and now there are more than 200 in total, and they keep investing heavily. Playing against the world Go champion, the AI will show its great progress, and its future looks bright. I think an "AlphaBridge" will beat the world's top players at bridge some day.
#38
Posted 2016-March-09, 08:49
barmar, on 2016-February-05, 10:05, said:
It doesn't take millions of years; effective results can be obtained in hours.
We can play with this kind of stuff ourselves here:
http://rednuht.org/genetic_cars_2/
_________________
Valiant were the efforts of the declarer // to thwart the wiles of the defender // however, as the cards lay // the contract had no play // except through the eyes of a kibitzer.
#39
Posted 2016-March-09, 09:56
lycier, on 2016-January-29, 14:33, said:
After the date of the challenge was determined, Lee said it was a great pleasure for him to play against artificial intelligence: "Whatever the outcome will be, it will be a very meaningful event in the history of Go. I heard that the artificial intelligence is unexpectedly strong, but I am confident I can win, this time at least."
Many readers of course support Lee. They think the biggest difference between artificial intelligence and humans is that every move the computer calculates is its best choice, while a human's plan is not necessarily best, but a human is able to set a trap. Of course, many readers strongly support AlphaGo: "Don't look down on artificial intelligence; the AI has super computing power. Can a human match that?"
I will vote for Lee Se-Dol. I think it is impossible for AlphaGo to beat a human at present; even though AlphaGo has beaten the European Go champion, compared to Go champions from China, Japan and South Korea, it is too weak.
So, looks like Lee Se-Dol lost the first game.
Better hope that he wins three out of the next four...
#40
Posted 2016-March-09, 14:21
hrothgar, on 2016-March-09, 09:56, said:
Better hope that he wins three out of the next four...
After watching the first game, I don't think that's going to happen. To my amateur eye, it seemed like AlphaGo was playing a bit slack in the endgame because it was confident of winning, which is to say that it was not really a close game.
-- Bertrand Russell