thirty bees forum

Slow DB queries after 1.3 -> 1.4 update


Question

Posted

Hello

What changes in 1.4 could cause high load and memory usage after updating from 1.3 to 1.4?

I run a lot of custom import scripts over 1-5 million products. I mostly use the Db::execute() method for my updates and inserts.

The general store speed seems about the same, but these CLI scripts are VERY slow now and take a lot of resources.

I have made no changes to the import scripts or server config.

PHP runs at 90-100% CPU and memory climbs to 30 GB+ (if I let it).

There are no errors in the logs ...

Best regards

Kasper 

4 answers to this question

Recommended Posts

Posted
2 hours ago, braffas said:

What changes in 1.4 could cause high load and memory usage after updating from 1.3 to 1.4? [...]

What comes to mind is error handling.

If you do a lot of work in cycles, this might be caused by some repeating PHP warning. All PHP messages are collected in one array (unlimited, which is not good; we should definitely limit this). And if you have the collectlogs module installed, there will also be overhead related to that module.
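To illustrate how a single repeating warning adds up over millions of iterations, here is a standalone sketch (not the actual thirty bees error collector, which lives in core): an error handler that stores every message in an unbounded array, so a warning fired once per cycle grows the array without limit.

```php
<?php
// Minimal sketch (NOT thirty bees core code): an error handler that
// appends every PHP message to an unbounded array, the way an
// unlimited error collector would. One warning per import cycle
// means one new entry per cycle, forever.

$collectedMessages = [];

set_error_handler(function ($severity, $message, $file, $line) use (&$collectedMessages) {
    // Every warning/notice lands here and is stored permanently.
    $collectedMessages[] = [
        'severity' => $severity,
        'message'  => $message,
        'file'     => $file,
        'line'     => $line,
    ];
    return true; // suppress default handling, as a collector typically does
});

// Simulate an import loop that triggers the same warning each cycle.
for ($i = 0; $i < 10000; $i++) {
    trigger_error('example repeating warning', E_USER_WARNING);
}

restore_error_handler();

echo count($collectedMessages), "\n"; // 10000 entries for one repeated warning
```

With 1-5 million products, even a cheap notice repeated per cycle turns into millions of array entries, which matches the ever-growing memory you are seeing.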

Posted

Ah, that makes sense. It definitely involves a lot of cycles. I go over every line of the data feeds, analyse the data and compare it with existing data before updating / inserting / skipping.

The import script was built with speed and low memory usage in mind; it was really fast and had relatively low memory usage.

Db::executeS() seems unaffected. Generation of our own data feeds seems unaffected.

I did install the collectlogs module but it did not seem to make a huge difference. I will uninstall it again and check.

Which array in core should i be looking for (error collection)?

Best regards

Kasper

Posted

Since 1.4, Db::executeS() raises a warning if used for non-SELECT SQL statements. That might be the reason why you are seeing this problem.

For SQL statements like INSERT or UPDATE you should use Db::execute(); executeS() should be used for SELECT queries only. Even for SELECTs there are better alternatives, as Db::executeS() does not guarantee that the return type is an array (it can be false or null). You should use Db::getArray() / Db::getRow() / Db::getValue() instead.
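The method-per-statement mapping can be shown with a self-contained sketch. `DbStub` below is a stand-in for the real thirty bees `Db` class (backed here by an in-memory SQLite database via PDO so it runs anywhere); the `product` table and column names are examples only.

```php
<?php
// Self-contained illustration (NOT the real thirty bees Db class):
// a tiny stand-in exposing the same method names, so the intended
// statement-to-method mapping is clear.
class DbStub
{
    private PDO $pdo;

    public function __construct()
    {
        $this->pdo = new PDO('sqlite::memory:');
        $this->pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }

    // Data-changing statements (INSERT/UPDATE/DELETE): returns bool.
    public function execute(string $sql): bool
    {
        return $this->pdo->exec($sql) !== false;
    }

    // SELECT returning many rows: always returns an array.
    public function getArray(string $sql): array
    {
        return $this->pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
    }

    // SELECT returning the first row (empty array when there is none).
    public function getRow(string $sql): array
    {
        $row = $this->pdo->query($sql)->fetch(PDO::FETCH_ASSOC);
        return $row === false ? [] : $row;
    }

    // SELECT returning a single scalar value.
    public function getValue(string $sql)
    {
        return $this->pdo->query($sql)->fetchColumn();
    }
}

$db = new DbStub();
$db->execute('CREATE TABLE product (id_product INTEGER PRIMARY KEY, price REAL)');
$db->execute('INSERT INTO product (id_product, price) VALUES (1, 10.0), (2, 20.0)');
$db->execute('UPDATE product SET price = price * 2 WHERE id_product = 1'); // execute() for UPDATE

$rows  = $db->getArray('SELECT id_product, price FROM product');          // all rows, always an array
$price = $db->getValue('SELECT price FROM product WHERE id_product = 1'); // single scalar
```

The point of the getArray()/getRow()/getValue() variants is that the caller never has to check for false or null before iterating or indexing the result.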

I have fixed the error collection in core, you can look here: https://github.com/thirtybees/thirtybees/commit/3fa91183536ebac1454f3641d429d7c193beac25


Posted

I know about the different DB methods; I use execute(), executeS(), getValue(), etc. (and I fixed the places where I had used the wrong method by mistake; nice addition to the core, btw).

Sadly, the error handling does not seem to be the problem. I added the limit from your fix, but the slowness persists. It starts after around 10k cycles on the heavy combined updates/inserts (products, features, manufacturers, suppliers, etc.).

It may be some override or something.

I will let you know when I find the cause. Right now, I'm going to run the script on small portions of the data every minute or so.
