Thanks in advance for reading, and for any help you can hopefully offer.
My company uses Magento for our marketplace. We've always had occasional issues with AOE Scheduler being unable to kick off an enterprise_refresh_index, but lately it's been happening far more often, and at a rate that's genuinely hampering our Merchandising team: the rogue enterprise_refresh_index appears to hold table locks that cause INSERT/DELETE/UPDATE timeouts on very simple merchandising operations.
It's crippling their ability to merchandise and update products and categories throughout the day, so it's crucial we track down what's triggering this rogue enterprise_refresh_index that isn't being handled by AOE Scheduler.
Useful info about our setup:
- Magento ver. 22.214.171.124
- MySQL 5.6.34
Extensions of note:
- AOE Scheduler (1.5.0)
- LogicBroker (1.0.8)
- Amasty (Mass Product Actions 1.5.8, Extended Product Grid with Editor 4.12.4, Shipping Table Rates 2.2.3, Image-Based Dynamic Configurable Products 3.11.7)
- Klevu (Replaces Mage search entirely, 1.2.5)
(Screenshot: enterprise_refresh_index cron setup)
(Screenshot: index management configured in the backend)
enterprise_refresh_index error details from our investigations:
- An enterprise_refresh_index is being kicked off by something we can’t trace
We notice the issue via the usual error AOE Scheduler throws when it’s having indexing issues:
Cron error while executing enterprise_refresh_index:
exception 'Enterprise_Index_Exception' with message 'Can't lock indexer process.' in /APP/magento/app/code/core/Enterprise/Index/Model/Observer.php:140 Stack trace:
[internal function]: Enterprise_Index_Model_Observer->refreshIndex(Object(Aoe_Scheduler_Model_Schedule))
This doesn't appear to be an AOE Scheduler-triggered run, since AOE Scheduler itself is throwing ERRORs in the Job List when it tries to execute the job (see the cron_schedule query after this list).
- AOE Scheduler errors out until the running re-index finishes
- This rogue index usually takes 5 to 20 minutes to complete
- We've had it go on for as long as 4 hours recently, with only brief pauses between the errors
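For reference, both the stock Magento cron and AOE Scheduler record their runs in the core cron_schedule table, so a query like the sketch below is how we line the errored runs up against what was actually scheduled. It only uses core columns; AOE Scheduler may add extra columns of its own on top of these.

    -- Core cron_schedule columns only; AOE Scheduler may add extras on top.
    -- Lines the errored enterprise_refresh_index runs up against what was scheduled.
    SELECT schedule_id, job_code, status, created_at, scheduled_at,
           executed_at, finished_at, messages
    FROM cron_schedule
    WHERE job_code = 'enterprise_refresh_index'
    ORDER BY scheduled_at DESC
    LIMIT 50;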
We can see the active re-index in progress when viewing the client connections to the marketplace DB; the queries for an enterprise_refresh_index are chugging away, with no insight into what triggered the run.
Here's a look at our MySQL connections while it is happening. Note that there are no other sleeping magentouser threads that could be blocking the index or anything like that; just the rogue re-index running wild.
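For anyone who wants to reproduce that check, a minimal sketch against information_schema.PROCESSLIST (standard in MySQL 5.6) shows the same thing as the screenshot; 'magentouser' is just our application DB user, so substitute your own:

    -- Snapshot active connections while the rogue re-index is running.
    -- 'magentouser' is our application DB user; substitute your own.
    SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, LEFT(INFO, 200) AS query_snippet
    FROM information_schema.PROCESSLIST
    WHERE USER = 'magentouser'
    ORDER BY TIME DESC;

The HOST column is the main thing we look at, since it at least tells us which app server the rogue connection is coming from.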
This rogue indexer often blocks our merchandising team’s updates to products & categories
- They hit lock timeouts on product updates at an extreme rate during this window, even though the updates are small and should be trivial to execute were the indexer not causing issues
The merchandising lock timeouts don't seem to happen when the rogue index isn't running. We've gone through every failed save in the admin access log, every single one, and all of them correlated to a time when AOE Scheduler was throwing ERRORs while trying to run enterprise_refresh_index. Even the most basic product / category updates can be blocked by this re-index.
Locks hit our merch team most frequently when trying to DELETE/INSERT into catalog_product_entity_int / catalog_category_entity / catalog_category_product.
The locks even hit simple row-level operations, which seems to imply the indexer is taking table-level locks (though we can't pinpoint where the lock is held or what manages it):
error: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction, query was: DELETE FROM catalog_product_entity_int WHERE (attribute_id = '306') AND (entity_id = '205532') AND (store_id <> 0)
error: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction, query was: UPDATE … created_at = '2017-12-26 20:47:53', updated_at = '2018-01-26 21:49:18', children_count = ? WHERE (
Running DB statistics queries while these rogue indexer runs are active does NOT show any database or table locks in use, or any actively locked MySQL indexes on these tables.
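For anyone about to suggest the InnoDB lock views: on MySQL 5.6, information_schema.INNODB_LOCKS / INNODB_LOCK_WAITS only list locks that are involved in an active conflict, so a sketch like the one below only returns rows while one of the merch team's saves is actually stuck waiting; that may be why our stats queries come back empty. Nothing here is Magento-specific.

    -- Run while a merchandising save is actually stuck waiting.
    -- On MySQL 5.6, INNODB_LOCKS / INNODB_LOCK_WAITS only list locks
    -- that are part of an active conflict.
    SELECT
        rt.trx_mysql_thread_id AS waiting_thread,
        rt.trx_query           AS waiting_query,
        bt.trx_mysql_thread_id AS blocking_thread,
        bt.trx_query           AS blocking_query,
        bl.lock_table,
        bl.lock_mode
    FROM information_schema.INNODB_LOCK_WAITS w
    JOIN information_schema.INNODB_TRX rt ON rt.trx_id = w.requesting_trx_id
    JOIN information_schema.INNODB_TRX bt ON bt.trx_id = w.blocking_trx_id
    JOIN information_schema.INNODB_LOCKS bl ON bl.lock_id = w.blocking_lock_id;

    -- The 1205 errors above fire after this many seconds of waiting (default 50):
    SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';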
After exhausting every debugging option we can think of, we're at the point where we assume one of two things:
- A 3rd-party module we don't expect to be doing so is kicking off an enterprise_refresh_index, or is triggering an indexing observer with some of the updates it performs (see the changelog-backlog sketch after this list)
- Related to theory 1: despite manual reindexing and AOE Scheduler handling enterprise_refresh_index, a default Magento observer is being kicked off naturally by some part of our cron setup, or by an action a 3rd-party module takes that triggers an observer
- We're hoping it's not this one, as that's far harder to track down in theory…
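As a rough signal for theory 1: on EE 1.13/1.14 the partial indexer that enterprise_refresh_index drives works off the *_cl changelog tables, so if a 3rd-party module is flooding those between runs, the job will run for a very long time. This is only a sketch: TABLE_ROWS is an estimate for InnoDB, and the '%\_cl' suffix is the stock EE naming convention, so adjust if your build differs.

    -- Rough changelog backlog per materialized view (stock EE *_cl naming).
    -- TABLE_ROWS is only an estimate for InnoDB, but a huge number here means
    -- something is generating a large volume of entity changes between runs.
    SELECT TABLE_NAME, TABLE_ROWS
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME LIKE '%\_cl'
    ORDER BY TABLE_ROWS DESC;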
Worthwhile bonus data:
- The memory usage of the failed / errored enterprise_refresh_index jobs in AOE Scheduler’s list is always 11.06 MB.
- Every failed Merchandising "Save" correlates to these enterprise_refresh_index errors, though the team is occasionally able to sneak product updates in while the errors are occurring
We're hoping for any avenues or ideas from people who may have had similar experiences but never documented a solution online, or from shops currently experiencing similar issues.
We know this general issue has been discussed a lot, but we haven't found this exact scenario, where an unknown enterprise_refresh_index holds tables hostage for so long that simple merchandising updates fail. Our merch team has been super patient, but the more we grow as a company and the more our catalog grows, the bigger the problem becomes… Any help is appreciated.
Let me know if providing more info will help you with your theories / ideas. Thanks all.