I am dealing with the following. We upgraded a vendor solution to a newer version, including new DB servers. I know, change one thing at a time, but how can a DBA refuse new servers with more CPU, much more RAM, and HA capability? After the upgrade, let's call them “data extraction” queries that were taking 5 minutes or less on the old server are now taking over 70 minutes. The old server was SQL 2014 Standard; the new server is SQL 2014 Enterprise.

I have done testing: compared server and SQL configurations, checked the DB compatibility level and the cardinality estimator, tested MAXDOP and cost threshold for parallelism, turned virus scanning on and off, turned the AG on and off, excluding everything I can think of. The only thing I can actually pin down is that if I restore the new production DB to ANY SQL Standard instance, it runs fast again, even on DEV servers with only a few cores and much less RAM. If I restore the same DB to ANY SQL Enterprise instance, it runs much slower.

The only information I could find relating to this mentioned the new cardinality estimator in SQL 2014, but in our case we moved from SQL 2014 Std to SQL 2014 Ent. Oh, and the DB compatibility level is set to SQL 2008 R2 (100), left over from before I came; I didn't know that until after the migration. Maybe it was a good idea I didn't change it, since at compat level 100 the legacy CE should be in use on both editions anyway.

Can anyone think of something I may have missed or not thought of? I am kind of leaning towards hardware / network / infrastructure, as I haven't seen performance differences like this between Standard and Enterprise before.
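For reference, this is roughly how I compared the instance and database settings between the Standard and Enterprise boxes (just a sketch; 'VendorDB' is a stand-in for our actual database name):

    -- Instance-level settings I compared on both boxes
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name IN ('max degree of parallelism',
                   'cost threshold for parallelism',
                   'max server memory (MB)');

    -- Edition and version, to confirm what each instance actually is
    SELECT SERVERPROPERTY('Edition') AS edition,
           SERVERPROPERTY('ProductVersion') AS version;

    -- Database compatibility level (ours reports 100, i.e. SQL 2008 R2)
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'VendorDB';

These all come back essentially identical apart from the edition itself.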
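To test the hardware / network / infrastructure theory, my next step is to diff wait stats around one slow run on the Enterprise instance to see where the 70 minutes actually go (a rough sketch; #waits_before is just an illustrative temp table name):

    -- Snapshot cumulative waits before the extraction query
    SELECT wait_type, wait_time_ms, signal_wait_time_ms
    INTO #waits_before
    FROM sys.dm_os_wait_stats;

    -- ... run the slow extraction query here ...

    -- Diff the snapshots: the biggest deltas show what the run waited on
    SELECT w.wait_type,
           w.wait_time_ms - b.wait_time_ms AS wait_ms_delta
    FROM sys.dm_os_wait_stats AS w
    JOIN #waits_before AS b ON b.wait_type = w.wait_type
    WHERE w.wait_time_ms - b.wait_time_ms > 0
    ORDER BY wait_ms_delta DESC;

If anyone has seen a particular wait type dominate in a Standard-vs-Enterprise situation like this, I'd appreciate a pointer.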