
aggregate fails with SQLSTATE[HY000]: General error: 1206 The total number of locks exceeds the lock table size #828

Open
WolfgangFahl opened this issue Oct 7, 2020 · 4 comments

@WolfgangFahl
Contributor

Setup is the same as in #827.

/usr/bin/php /srv/www/volkszaehler.org.2020/bin/aggregate run -l day
Performing 'delta' aggregation on 'day' level

 [>---------------------------]   0%  < 1 sec/< 1 sec   0 channels
 [==>-------------------------]  10%  23 secs/3 mins    1 channels
 [=====>----------------------]  20%   3 mins/19 mins   2 channels
In AbstractMySQLDriver.php line 128:

  An exception occurred while executing the following statement with params [3, 3, "3", "3", 3, "3"]:

  REPLACE INTO aggregate (channel_id, type, timestamp, value, count)
  SELECT channel_id, ? AS type, MAX(agg.timestamp) AS timestamp,
    COALESCE(SUM(agg.val_by_time) / (MAX(agg.timestamp) - MIN(agg.prev_timestamp)), AVG(agg.value)) AS value,
    COUNT(agg.value) AS count
  FROM (
    SELECT channel_id, timestamp, value,
      value * (timestamp - @prev_timestamp) AS val_by_time,
      COALESCE(@prev_timestamp, 0) AS prev_timestamp,
      @prev_timestamp := timestamp
    FROM data
    CROSS JOIN (
      SELECT @prev_timestamp := UNIX_TIMESTAMP(DATE_ADD(FROM_UNIXTIME(MAX(timestamp) / 1000, "%Y-%m-%d"), INTERVAL 1 day)) * 1000
      FROM aggregate WHERE type = ? AND aggregate.channel_id = ?
    ) AS vars
    WHERE channel_id = ?
      AND timestamp >= IFNULL((
        SELECT UNIX_TIMESTAMP(DATE_ADD(FROM_UNIXTIME(MAX(timestamp) / 1000, "%Y-%m-%d"), INTERVAL 1 day)) * 1000
        FROM aggregate WHERE type = ? AND aggregate.channel_id = ?
      ), 0)
      AND timestamp < UNIX_TIMESTAMP(DATE_FORMAT(NOW(), "%Y-%m-%d")) * 1000
  ) AS agg
  GROUP BY channel_id, YEAR(FROM_UNIXTIME(timestamp/1000)), DAYOFYEAR(FROM_UNIXTIME(timestamp/1000))

  SQLSTATE[HY000]: General error: 1206 The total number of locks exceeds the lock table size
                                                                               

In Exception.php line 18:

  SQLSTATE[HY000]: General error: 1206 The total number of locks exceeds the lock table size

In PDOStatement.php line 141:

  SQLSTATE[HY000]: General error: 1206 The total number of locks exceeds the lock table size

run [-l|--level LEVEL] [-m|--mode MODE] [-p|--periods PERIODS] [-v|--verbose] [--] [<uuid>...]

@WolfgangFahl
Contributor Author

WolfgangFahl commented Oct 7, 2020

See e.g. the current buffer pool size:

SELECT @@innodb_buffer_pool_size/1024/1024/1024;
+------------------------------------------+
| @@innodb_buffer_pool_size/1024/1024/1024 |
+------------------------------------------+
|                           0.125000000000 |
+------------------------------------------+
1 row in set (0.00 sec)
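For reference (an editor's note, not from the original thread): 0.125 GiB is InnoDB's 128 MiB default buffer pool, and the 402653184 used below is simply 384 MiB expressed in bytes. A quick shell check of that arithmetic:

```shell
# 0.125 GiB = the 128 MiB InnoDB default buffer pool size
awk 'BEGIN { printf "%.3f GiB\n", 128 * 1024 * 1024 / 1024^3 }'

# 384 MiB in bytes -- the value passed to SET GLOBAL innodb_buffer_pool_size
echo $((384 * 1024 * 1024))

# and back: 402653184 bytes expressed in GiB, matching the SELECT output below
awk 'BEGIN { printf "%.3f GiB\n", 402653184 / 1024^3 }'
```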

@WolfgangFahl
Contributor Author

I am trying:

mysql> SET GLOBAL innodb_buffer_pool_size=402653184;
Query OK, 0 rows affected (0.00 sec)

mysql>  SELECT @@innodb_buffer_pool_size/1024/1024/1024;
+------------------------------------------+
| @@innodb_buffer_pool_size/1024/1024/1024 |
+------------------------------------------+
|                           0.375000000000 |
+------------------------------------------+
1 row in set (0.00 sec)

and restarting the aggregation run.
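Note that SET GLOBAL is not persistent: the value reverts to the configured default on the next MySQL restart. To keep the larger buffer pool permanently, the equivalent setting can go into the server configuration - a sketch, assuming a typical my.cnf layout (the file path and section placement vary by distribution):

```ini
# /etc/mysql/my.cnf (path is distribution-dependent)
[mysqld]
# 384M corresponds to SET GLOBAL innodb_buffer_pool_size=402653184
innodb_buffer_pool_size = 384M
```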

@andig
Contributor

andig commented Oct 7, 2020

I suspect you are writing to the table at high frequency at the same time? In that case I would stop the writes while the initial aggregation is running. Vzlogger can buffer, so you could simply take the middleware (which vzlogger writes through) offline for that time.

@andig andig added the Question label Oct 7, 2020
@WolfgangFahl
Contributor Author

@andig Thanks for the hint. Yes, data keeps arriving during the aggregate run. I don't really need the daily aggregate, since sbfspot can already do that. The monthly aggregation seems to run fine and is in the crontab - we'll see whether problems show up there over the next few months.
