The SQL Server 2014 Cardinality Estimator white paper says:
The new CE, however, uses a simpler algorithm that assumes that there is a one-to-many join association between a large table and a small table. This assumes that each row in the large table matches exactly one row in the small table. This algorithm returns the estimated size of the larger input as the join cardinality.
But it doesn’t say how SQL Server determines what is a “large table” and “small table” for purposes of this optimization.
Are these criteria documented anywhere? Is it a simple threshold (e.g. “small table” must be under 10,000 rows), a percentage (e.g. “small table” must be <5% of rows in the “large table”), or some more complicated function?
Also, is there a trace flag or query hint that forces use of this optimization for a particular join?
Finally, does this optimization have a name that I can use for further Googling?
I’m asking because I want this “use the cardinality of the large table” cardinality estimation behavior in a join of master/detail tables, but my “small table” (the master) is 1M rows and my “large table” (the detail) is 22M rows. So I’m trying to learn more about this optimization to see whether I can adjust my queries to force its use.
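For concreteness, here is a sketch of the shape of join I mean (table and column names are made up for illustration). I’m aware of trace flags 2312 (force the new CE) and 9481 (force the legacy CE), but as far as I can tell those switch the estimator for the entire query rather than forcing this particular join-estimation behavior:

```sql
-- Illustrative master/detail join; table and column names are hypothetical.
-- dbo.Master: ~1M rows ("small" input); dbo.Detail: ~22M rows ("large" input),
-- with a foreign key to Master.
SELECT d.DetailId, m.MasterName
FROM dbo.Detail AS d
JOIN dbo.Master AS m
    ON m.MasterId = d.MasterId
-- TF 2312 forces the new CE for the whole query (TF 9481 forces the legacy
-- CE) -- not just for this one join, which is what I'm after:
OPTION (QUERYTRACEON 2312);
```

Under the behavior described in the white paper, I’d expect the join estimate here to come out near the cardinality of the larger input (~22M rows).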