Everything posted by westcarmo

1. I was researching the web and found this; it seems we will have to think a lot.

The Card Clumping Myth
by Bryce Carlson

The whole card-clumping concept is largely a fraud, Jerry Patterson and his cult of voodoo Blackjack notwithstanding. Last year, I published an in-depth study-cum-exposé on card clumping. It is reprinted below, and it should answer most of your questions regarding this concept.

I have resisted getting involved in the newsgroup's card-clumping controversy because so much of what is written seems to be flash and flame, and so little appears to be a real search for truth. However, there are a lot of players out there who really do want to know the truth of the matter--and have no idea what or whom to believe. So, in an effort to shed some light on the matter, I have conducted a series of computer studies, consisting of several billion hands of simulated Blackjack, that I believe go a long way toward clarifying the issues involved. These simulations were run using the Omega II Blackjack Casino v1.2 for MS-DOS and Windows. This program has the ability to perform real-world shuffles (washes, riffles, strips, cuts, etc.) as they are done in a casino, as well as perform random card selections using a pseudo-random number generator (P-RNG).

Before we get into the results of the simulations, however, let's take a look at the psychodynamics that I believe are driving the card-clumping "industry." First off, *accurate* card counting is difficult; consequently, the vast majority of would-be card counters are, and always will be, losers. It's that simple. Becoming a winning counter takes a lot of hard work, and a lot of good judgment. Not surprisingly, therefore, many players yearn for an easier way: a shortcut that will allow them to target games they can beat using simple methods they can master in a few short hours. This hunger for a free lunch is not unique to Blackjack, but as I say in Blackjack for Blood, "...in the game of Blackjack, as in the game of life, winning is tough. It requires determination, preparation, and plenty of perspiration." But, unfortunately, "...this is not what most people (are) looking for." What they want, instead, is "...a simple rule for riches they (can) memorize on the taxi ride to the casino." So, from the players' perspective, card-clumping systems are very seductive--they promise an easy alternative method for winning play.

Now, let's take a look at it from the perspective of the entrepreneurs who sell books, systems, programs and tapes hawking card-clumping "technology." What's in it for them? Answer: $$$. Big $$$. Players WANT to believe in a simple "winning" concept such as card clumping--and where there's a want, sooner or later someone will market a way. Of course, authors of card-counting works also sell books, systems and programs to wanna-be winners. So, what's the difference? The answer is both simple and straightforward: Bryce Carlson, Stanford Wong, Arnold Snyder, Kenny Uston, Peter Griffin, etc., have all based their works on accepted scientific principles, with a minimum of speculation, estimation, or guesswork. Publishers of card-clumping systems, on the other hand, have based their works almost entirely on plausible-sounding theories backed up by anecdotal testimonials. Science versus "religion." Fact versus faith. A modern paradigm for an age-old conundrum.

Now, having said all this, you are likely expecting me to state categorically that there is nothing to the card-clumping concept. That it just doesn't happen.
Well, surprise, I'm not prepared to do that. Based on the computer studies that follow, it does appear that under certain (unusual) circumstances, some non-random "clumping" effects are produced in some (unrealistically incomplete) multiple-deck shuffles. As we shall see, these effects are generally small, probably not exploitable, rarely (if ever) encountered in the real world--and such biases seem to favor the player as often as they favor the house. But, they are there. Ready for more? OK, here are the studies, and the results.

Let's begin by discussing the Omega II Blackjack Casino's ability to perform real-world shuffles, as well as simple random card selections based on a P-RNG (pseudo-random number generator). The card-clumping system sellers and their faithful followers discount the fact that computer simulations have not backed up their claims of bias by stating that these effects only occur in games where the decks are actually shuffled as they would be in a casino, not in computer-simulated games where the cards are selected by a P-RNG. They have a point. Almost all Blackjack simulation programs do select cards with a P-RNG. Note, however, that the operative word is "almost." The Omega II Blackjack Casino v1.2 DOES do real-world shuffles as they are done in a casino. The card-shuffling routines in the Omega II Blackjack Casino have been thoroughly analyzed and tested. You can trust these results. And, just in case you don't trust what you can't see, the Blackjack Casino allows you to step through the various shuffle routines--all the while visually displaying the cards in their current sequence and order. John Imming's highly regarded UBE also does such real-world shuffles. So, there are programs out there capable of performing realistic casino-style shuffles.

Although the Blackjack Casino is capable of performing a large number of different shuffle-related routines, the studies done here used only the following procedures: a wash of "new" decks (W); a two-block zone riffle of the entire pack (R); a strip of the entire pack (S); a random cut (C); and the introduction of a fresh pack into the game in new-deck order (F). These procedures are performed by the Omega II Blackjack Casino in the following manner:

(W) Wash: The pack is randomly broken into packets of from 1 to 8 cards. The program then randomly puts these packets back together. The cards within a packet maintain their initial order; only the packets themselves are randomly reordered.

(R) Two-block zone riffle: The pack is divided into two approximately equal blocks. Then half-deck "picks" from each block are riffled (randomly interleaved) together to form a new stack. This procedure is repeated until all the cards are in this new stack.

(S) Strip: The program randomly strips small packets of from 1 to 4 cards off the top of the pack and places them in another stack. This procedure tends to reverse the order of the pack.

(C) Cut: The program performs a random cut. All possible cuts are equally likely.

(F) Fresh pack: A fresh pack is brought into the game in new-deck order.
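For readers who want to experiment with these procedures, here is a rough Python sketch of how such routines might be implemented. It is only an illustration based on the descriptions above; the function names, packet sizes, and simplified new-deck ordering are assumptions of the sketch, not the Omega II Blackjack Casino's actual code.

import random

def wash(pack):
    # (W) Break the pack into packets of 1 to 8 cards, then randomly
    # reassemble the packets; cards within a packet keep their order.
    packets, i = [], 0
    while i < len(pack):
        size = random.randint(1, 8)
        packets.append(pack[i:i + size])
        i += size
    random.shuffle(packets)
    return [card for packet in packets for card in packet]

def riffle(a, b):
    # Randomly interleave two packets, preserving each packet's internal order.
    out = []
    while a or b:
        if a and (not b or random.random() < len(a) / (len(a) + len(b))):
            out.append(a.pop(0))
        else:
            out.append(b.pop(0))
    return out

def zone_riffle(pack):
    # (R) Split the pack into two roughly equal blocks, then riffle
    # half-deck (26-card) picks from each block onto a new stack until
    # all the cards are in the new stack.
    half = len(pack) // 2
    left, right = pack[:half], pack[half:]
    out = []
    while left or right:
        pick_l, left = left[:26], left[26:]
        pick_r, right = right[:26], right[26:]
        out.extend(riffle(pick_l, pick_r))
    return out

def strip(pack):
    # (S) Strip packets of 1 to 4 cards off the top onto a new stack,
    # which tends to reverse the order of the pack.
    out, i = [], 0
    while i < len(pack):
        size = random.randint(1, 4)
        out = pack[i:i + size] + out
        i += size
    return out

def cut(pack):
    # (C) A random cut; all cut points are equally likely.
    point = random.randrange(len(pack))
    return pack[point:] + pack[:point]

def fresh_pack(num_decks=6):
    # (F) A fresh pack in "new-deck order" -- simplified here to decks and
    # suits simply concatenated, ranks 1 (Ace) through 13 (King).
    return [rank for _ in range(num_decks) for _suit in range(4)
            for rank in range(1, 14)]

Printing the pack after each routine is the plain-text analogue of the Blackjack Casino's visual step-through feature.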
The simulations were all performed assuming a 6-deck game with Las Vegas Strip rules, including double after splits and resplitting of all pairs, including Aces. Penetration varied slightly, but was always to a fixed number of rounds that generally totaled about 245 cards. The game was dealt face-up, and blackjacks and busted hands were immediately "placed" in the discard tray (buffer). A fresh pack of 6 decks in new-deck order was brought in periodically, just as it would be in a real casino. From other computer studies, as well as from well-documented direct probability studies, we know that the theoretical expectation for this game, assuming flat bets and Basic Strategy play, is -.34% (of original bets) for the players. In other words, the house enjoys a slight edge in this game of +.34%, assuming Basic Strategy play. Each computer simulation consisted of 100,000,000 (one hundred million) rounds. The Omega II Blackjack Casino is fast, so such extensive studies were feasible. The percent standard deviation for each player's expectation in each simulation was about +/- .011%. Each shuffle study consisted of seven individual simulations. The simulations were identical except for the number of players (from 1 to 7).

Study #1 did not use real-world casino-style shuffles, but instead performed random card selections using a pseudo-random number generator (P-RNG). The results were virtually the same for all seven simulations (1 player, 2 players, 3 players, etc.). Since the results did not differ regardless of the number of players at the "table," only the results for the 7-player simulation are shown below:

PLAYER 1   RESULT -.35%   DELTA -.01%
PLAYER 2   RESULT -.33%   DELTA +.01%
PLAYER 3   RESULT -.34%   DELTA +.00%
PLAYER 4   RESULT -.35%   DELTA -.01%   } MEAN DELTA +.00%
PLAYER 5   RESULT -.35%   DELTA -.01%
PLAYER 6   RESULT -.34%   DELTA +.00%
PLAYER 7   RESULT -.34%   DELTA +.00%

As expected, no biases or other unusual effects were obtained. The results are almost exactly as predicted by theory (-.34%).

Study #2 did use real-world casino-style shuffles. The shuffle was typical of that performed in many casinos and consisted of the following shuffle sequences: for fresh packs brought into the game in new-deck order, the shuffle sequence was FWRRSRC (fresh pack, wash, zone-riffle, zone-riffle, strip, zone-riffle, cut); for reshuffles of the pack in play, the shuffle sequence was RRSRC (zone-riffle, zone-riffle, strip, zone-riffle, cut). As with Study #1, the results were virtually the same for all seven simulations (1 player, 2 players, 3 players, etc.). Since the results did not differ regardless of the number of players at the "table," only the results for the 7-player simulation are shown below:

PLAYER 1   RESULT -.32%   DELTA +.02%
PLAYER 2   RESULT -.35%   DELTA -.01%
PLAYER 3   RESULT -.31%   DELTA +.03%
PLAYER 4   RESULT -.33%   DELTA +.01%   } MEAN DELTA +.01%
PLAYER 5   RESULT -.35%   DELTA -.01%
PLAYER 6   RESULT -.32%   DELTA +.02%
PLAYER 7   RESULT -.34%   DELTA +.00%

In this 7-player simulation, a fresh 6-deck pack was introduced every 40 rounds. As can be seen, little if any bias is evident. Given the large number of rounds (100,000,000), the player results do vary slightly more than would be expected on statistical grounds, and this minor increased variance probably is due to non-random effects. But these effects, if they exist, are very, very small, seem to favor neither the players as a group nor the house, and are of no practical significance whatsoever.
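Before turning to Study #3, here is how shuffle sequences written in this letter notation might be composed from the primitives sketched earlier. Again, this is an illustrative sketch (the run_sequence helper is hypothetical), not the program's actual interface.

# Compose shuffle sequences such as FWRRSRC or RRSRC from the primitives
# sketched earlier (wash, zone_riffle, strip, cut, fresh_pack).
SHUFFLE_STEPS = {"W": wash, "R": zone_riffle, "S": strip, "C": cut}

def run_sequence(pack, sequence):
    # Apply each step code in turn, e.g. "RRSRC" = riffle, riffle, strip,
    # riffle, cut.
    for code in sequence:
        pack = SHUFFLE_STEPS[code](pack)
    return pack

pack = run_sequence(fresh_pack(6), "WRRSRC")   # F, then WRRSRC: fresh-pack shuffle
pack = run_sequence(pack, "RRSRC")             # reshuffle of the pack in play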
Study #3: The card-clumping "gurus" generally blame the wash (W) for producing most of the biases they claim exist in multiple-deck games. To test for this, the above study was run again, except that this time, when a new 6-deck pack was introduced into the game, NO wash was performed. The fresh-pack shuffle, therefore, consisted of FRRSRC. Reshuffles of the pack in play did not change (RRSRC). This time non-random effects, though small, were evident. Furthermore, these effects varied based primarily on the number of players at the table. Therefore, all seven simulations are presented below:

Simulation #1. One (1) player. Fresh pack every 1120 rounds. Penetration to 43 rounds per "shoe."

PLAYER 1   RESULT -.44%   DELTA -.10%   } MEAN DELTA -.10%

Simulation #2. Two (2) players. Fresh pack every 960 rounds. Penetration to 29 rounds per "shoe."

PLAYER 1   RESULT -.43%   DELTA -.09%
PLAYER 2   RESULT -.38%   DELTA -.04%   } MEAN DELTA -.07%

Simulation #3. Three (3) players. Fresh pack every 800 rounds. Penetration to 22 rounds per "shoe."

PLAYER 1   RESULT -.35%   DELTA -.01%
PLAYER 2   RESULT -.33%   DELTA +.01%   } MEAN DELTA -.01%
PLAYER 3   RESULT -.37%   DELTA -.03%

Simulation #4. Four (4) players. Fresh pack every 640 rounds. Penetration to 18 rounds per "shoe."

PLAYER 1   RESULT -.31%   DELTA +.03%
PLAYER 2   RESULT -.34%   DELTA +.00%
PLAYER 3   RESULT -.32%   DELTA +.02%   } MEAN DELTA +.02%
PLAYER 4   RESULT -.31%   DELTA +.03%

Simulation #5. Five (5) players. Fresh pack every 560 rounds. Penetration to 15 rounds per "shoe."

PLAYER 1   RESULT -.25%   DELTA +.09%
PLAYER 2   RESULT -.22%   DELTA +.12%
PLAYER 3   RESULT -.27%   DELTA +.07%   } MEAN DELTA +.10%
PLAYER 4   RESULT -.25%   DELTA +.09%
PLAYER 5   RESULT -.23%   DELTA +.11%

Simulation #6. Six (6) players. Fresh pack every 480 rounds. Penetration to 13 rounds per "shoe."

PLAYER 1   RESULT -.19%   DELTA +.15%
PLAYER 2   RESULT -.16%   DELTA +.18%
PLAYER 3   RESULT -.22%   DELTA +.12%
PLAYER 4   RESULT -.17%   DELTA +.17%   } MEAN DELTA +.16%
PLAYER 5   RESULT -.20%   DELTA +.14%
PLAYER 6   RESULT -.16%   DELTA +.18%

Simulation #7. Seven (7) players. Fresh pack every 440 rounds. Penetration to 11 rounds per "shoe."

PLAYER 1   RESULT -.11%   DELTA +.23%
PLAYER 2   RESULT -.13%   DELTA +.21%
PLAYER 3   RESULT -.16%   DELTA +.18%
PLAYER 4   RESULT -.12%   DELTA +.22%   } MEAN DELTA +.21%
PLAYER 5   RESULT -.13%   DELTA +.21%
PLAYER 6   RESULT -.15%   DELTA +.19%
PLAYER 7   RESULT -.14%   DELTA +.20%

Clearly, there is some (small) bias present in this study. In addition, it appears that the more players at the table, the better off the players are. With one or two players, there appears to be a small bias of about .1% against the players. With three or four players, any biases, if present, appear to cancel out, resulting in no net effect (except for the peculiar increased variance of results noted in Study #2, above). With five, six, or seven players at the table, there appears to be a small (.1% to .2%) net bias working for the players.

This number-of-players-dependent bias pattern was not expected (not by me, anyway). To see whether or not it was repeatable, and whether small changes could alter it, I ran the entire 7-simulation study again, this time varying the penetration and frequency of fresh-pack introductions somewhat. The results, though varying slightly from the previous study, showed the same pattern of increasing bias favoring the players as the number of players at the "table" increased, with the break-even point at three or four players. This effect, though small, seems to be both real and persistent (at least when all players use Basic Strategy). It is interesting to note that a player's position at the table does not seem to be a factor correlating with expectation.
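As a rough sanity check on how seriously to take a mean delta of -.10% from a 100,000,000-round simulation: assuming a per-round standard deviation of roughly 1.1 bets (a typical figure for flat-bet Basic Strategy blackjack, not a number taken from the article), the standard error works out to about .011%, matching the figure quoted above, and a -.10% delta is then on the order of nine standard errors from zero. A back-of-the-envelope sketch:

import math

# Assumed per-round standard deviation of ~1.1 bets; the exact value
# depends on the rules and is not given in the article.
rounds = 100_000_000
sd_per_round = 1.1
std_error = sd_per_round / math.sqrt(rounds)   # about 0.00011, i.e. .011%

observed_delta = -0.0010                       # -.10% (Study #3, one player)
z_score = observed_delta / std_error           # roughly -9
print(f"standard error = {std_error:.3%}, z = {z_score:.1f}")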
Encouraged (though not necessarily happy) with these results, I next ran a series of studies degrading the shuffle more and more in an attempt to get biased results large enough to be potentially meaningful and exploitable. From the perspective of the card-clumping community, the results of these subsequent studies were disappointing. As the shuffle got more and more primitive, only a slight increase in the bias effect was noted. As before, with one or two players it hurt the players, with three or four players it seemed to virtually disappear, and with five, six, or seven players the players were somewhat favored.

Finally, in an attempt to get a bias effect large enough to mean anything, I ran a study using a VERY primitive shuffle. For fresh packs the shuffle sequence was FRC (fresh pack, zone-riffle, cut). For reshuffles of packs in play the sequence was simply RC (zone-riffle, cut). Note the lack of a wash (W) with fresh packs. There is no casino in the world, that I know of, that uses a shuffle this primitive and incomplete. Furthermore, even with this unrealistically incomplete shuffle, if a wash (W) were introduced into the fresh-pack shuffle sequence, any bias virtually disappeared. Here are the results of this final study.

Simulation #1. One (1) player. Fresh pack every 1120 rounds. Penetration to 43 rounds per "shoe."

PLAYER 1   RESULT -1.10%   DELTA -.76%   } MEAN DELTA -.76%

Simulation #2. Two (2) players. Fresh pack every 960 rounds. Penetration to 29 rounds per "shoe."

PLAYER 1   RESULT -.80%   DELTA -.46%
PLAYER 2   RESULT -.68%   DELTA -.34%   } MEAN DELTA -.40%

Simulation #3. Three (3) players. Fresh pack every 800 rounds. Penetration to 22 rounds per "shoe."

PLAYER 1   RESULT -.44%   DELTA -.10%
PLAYER 2   RESULT -.43%   DELTA -.09%   } MEAN DELTA -.09%
PLAYER 3   RESULT -.41%   DELTA -.07%

Simulation #4. Four (4) players. Fresh pack every 640 rounds. Penetration to 18 rounds per "shoe."

PLAYER 1   RESULT -.29%   DELTA +.05%
PLAYER 2   RESULT -.36%   DELTA -.02%
PLAYER 3   RESULT -.31%   DELTA +.03%   } MEAN DELTA +.02%
PLAYER 4   RESULT -.34%   DELTA +.00%

Simulation #5. Five (5) players. Fresh pack every 560 rounds. Penetration to 15 rounds per "shoe."

PLAYER 1   RESULT -.18%   DELTA +.16%
PLAYER 2   RESULT -.15%   DELTA +.19%
PLAYER 3   RESULT -.13%   DELTA +.21%   } MEAN DELTA +.16%
PLAYER 4   RESULT -.24%   DELTA +.10%
PLAYER 5   RESULT -.18%   DELTA +.16%

Simulation #6. Six (6) players. Fresh pack every 480 rounds. Penetration to 13 rounds per "shoe."

PLAYER 1   RESULT -.17%   DELTA +.17%
PLAYER 2   RESULT -.19%   DELTA +.15%
PLAYER 3   RESULT -.23%   DELTA +.11%
PLAYER 4   RESULT -.14%   DELTA +.20%   } MEAN DELTA +.19%
PLAYER 5   RESULT -.10%   DELTA +.24%
PLAYER 6   RESULT -.09%   DELTA +.25%

Simulation #7. Seven (7) players. Fresh pack every 440 rounds. Penetration to 11 rounds per "shoe."

PLAYER 1   RESULT +.19%   DELTA +.53%
PLAYER 2   RESULT +.11%   DELTA +.45%
PLAYER 3   RESULT +.10%   DELTA +.44%
PLAYER 4   RESULT +.15%   DELTA +.49%   } MEAN DELTA +.47%
PLAYER 5   RESULT +.16%   DELTA +.50%
PLAYER 6   RESULT +.12%   DELTA +.46%
PLAYER 7   RESULT +.09%   DELTA +.43%

As with the previous study, I was suspicious of the trend toward a player-favored bias as the number of players at the "table" increased. So, as before, I ran the entire 7-simulation study again, varying the penetration and frequency of fresh-pack introductions somewhat. As before, the results, though varying slightly, continued to show the same pattern of increasing bias favoring the players as the number of players at the "table" increased, with the break-even point at three or four players. Also, as before, a player's position at the table does not seem to be a factor correlating with expectation. With this final study, we, at last, seem to have an effect worthy of the term "bias."
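To see intuitively why the unwashed fresh pack is doing the damage in this final study, one can riffle and cut a new-deck-order pack once (the RC treatment) and count how often ten-valued cards still sit next to each other, compared with a fully randomized pack. The metric below is purely illustrative and assumes the primitives from the earlier sketch are in scope; it is not a measurement from the article.

import random

def adjacent_ten_pairs(pack):
    # Count positions where two ten-valued cards (10, J, Q, K) sit side by
    # side; clumped packs score higher than well-mixed ones.
    tens = [rank >= 10 for rank in pack]
    return sum(1 for a, b in zip(tens, tens[1:]) if a and b)

frc_pack = cut(zone_riffle(fresh_pack(6)))   # unwashed fresh pack: one riffle, one cut
baseline = fresh_pack(6)
random.shuffle(baseline)                     # fully random comparison pack

print("FRC-style pack :", adjacent_ten_pairs(frc_pack))
print("random baseline:", adjacent_ten_pairs(baseline))

How quickly that residual structure disappears once washes and repeated riffles are added is exactly what the studies above measure.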
Whether or not it is exploitable is another story. As noted, to get significant non-random effects resulting in a noticeable bias, it was necessary to limit the shuffle to an unrealistically incomplete zone-riffle, cut (RC) sequence that is not found anywhere in the world that I know of. Also, as noted, even with this primitive shuffle, the introduction of a simple wash (W) into fresh-pack introductions virtually eliminated the bias. The card-clumping "gurus" have claimed that orthodox researchers have failed to detect biases in the deal because their studies have been distorted by unrealistic conditions (such as the use of P-RNGs in simulations). It appears, however, that it is the card-clumping wonks themselves who are trading in unrealistic conditions. You could search the world over and never find a casino so careless as to use the simplistic shuffles necessary to produce a meaningful bias.

We're not quite through yet. The argument could be made that just because the AVERAGE bias produced by realistic casino shuffles is too small to matter, it does not necessarily follow that meaningful--exploitable--opportunities do not arise for clump trackers, any more than the fact that Basic Strategy expectation is on AVERAGE close to zero means that meaningful--exploitable--opportunities do not arise for card counters. That's a plausible-sounding argument. But there are serious problems with it.

To begin with, to the extent that card-clumping concepts are valid, card-counting concepts are not. They are essentially opposites. In card-counting theory, the best predictor of the next card being "big" is that the last several cards have been "small" (a "plus" count). In card-clumping theory, the best predictor of the next card being "big" is that the last several cards have also been "big" (a "big"-card clump). Consequently, if card-clumping theory were valid, multiple-deck team play wouldn't work. A "Big Player" being called into a "plus" shoe would generally walk into an ambush of dealer three- and four-card 20s and 21s. But that's not what happens. Team play *does* work. Kenny Uston's teams made a fortune with it. I have done very well with it; and teams, led by players you've never heard of, are out tonight making money with team play. This fact, alone, argues strongly against the card-clumping concept.

Here's another important point: if the pack were often strongly "polarized" with biased clumps not conforming to a normal distribution, Basic Strategy would be very ineffectual--especially in "small"-card clumps. Consequently, the very fact that any kind of reasonable shuffle produces (as the above studies have shown) at best (worst?) a nominally detectable average bias against Basic Strategy play is strong evidence that no such biased clumping or polarization of the pack occurs.

I know that none of this is going to have the slightest impact on the Jerry Pattersons, or any of the other financially or emotionally invested faithful in the card-clumping cult. They will argue that these studies are far from comprehensive. That I didn't look hard enough, or long enough, or in the right places. And that, in any case, I DID find the elusive biases the orthodox cognoscenti say don't exist. Perhaps; and I do look forward to further research and results. But it's not up to us to prove that real-world biases can't exist--it's up to them to prove that they can, and that they do. And that is something they have never done. And probably never will. Caveat emptor.
Let the buyer beware.
2. Dear E. Clifton Davis or K. Smith, I just sent a payment for the blackjack manuals via PayPal, but I haven't received any download link. I also sent an email to ellis@beatthecasino.com. I will wait for your response and the link to download the manuals. Thanks, Peter