This repository has been archived by the owner on Feb 26, 2024. It is now read-only.

eth_getLogs causes JavaScript heap to run out of memory when forking mainnet #1575

Open
robrichard opened this issue Nov 15, 2021 · 7 comments

Comments

@robrichard

robrichard commented Nov 15, 2021

Using latest published alpha version

ganache-cli ethereum --fork.url https://eth-mainnet.alchemyapi.io/v2/XXX

Send rpc request

{
  "jsonrpc": "2.0",
  "method": "eth_getLogs",
  "params": [
    {
      "address": "0x7e88916c4dD22D6C1d04Ec87Ca54Bf777cA0B2B6",
      "fromBlock": "0x0",
      "toBlock": "latest",
      "topics": [
        "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"
      ]
    }
  ],
  "id": 1
}

Result

RPC Listening on 127.0.0.1:8545
eth_getLogs

<--- Last few GCs --->

[38796:0x1049dd000]    43027 ms: Mark-sweep (reduce) 4094.8 (4112.2) -> 4093.7 (4114.9) MB, 2503.9 / 20.3 ms  (+ 0.5 ms in 716 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 2987 ms) (average mu = 0.622, current mu = 0.[38796:0x1049dd000]    47016 ms: Mark-sweep (reduce) 4094.7 (4104.9) -> 4094.4 (4104.7) MB, 3983.1 / 18.7 ms  (average mu = 0.352, current mu = 0.002) allocation failure scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory
 1: 0x1013024b5 node::Abort() (.cold.1) [/usr/local/bin/node]
 2: 0x1000b1919 node::Abort() [/usr/local/bin/node]
 3: 0x1000b1a7f node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
 4: 0x1001f5bb7 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 5: 0x1001f5b53 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 6: 0x1003a2ed5 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 7: 0x1003fef13 v8::internal::EvacuateNewSpaceVisitor::Visit(v8::internal::HeapObject, int) [/usr/local/bin/node]
 8: 0x1003e677b void v8::internal::LiveObjectVisitor::VisitBlackObjectsNoFail<v8::internal::EvacuateNewSpaceVisitor, v8::internal::MajorNonAtomicMarkingState>(v8::internal::MemoryChunk*, v8::internal::MajorNonAtomicMarkingState*, v8::internal::EvacuateNewSpaceVisitor*, v8::internal::LiveObjectVisitor::IterationMode) [/usr/local/bin/node]
 9: 0x1003e62c5 v8::internal::FullEvacuator::RawEvacuatePage(v8::internal::MemoryChunk*, long*) [/usr/local/bin/node]
10: 0x1003e6006 v8::internal::Evacuator::EvacuatePage(v8::internal::MemoryChunk*) [/usr/local/bin/node]
11: 0x10040393e v8::internal::PageEvacuationTask::RunInParallel(v8::internal::ItemParallelJob::Task::Runner) [/usr/local/bin/node]
12: 0x1003bd8f2 v8::internal::ItemParallelJob::Task::RunInternal() [/usr/local/bin/node]
13: 0x1003bdd78 v8::internal::ItemParallelJob::Run() [/usr/local/bin/node]
14: 0x1003e8075 void v8::internal::MarkCompactCollectorBase::CreateAndExecuteEvacuationTasks<v8::internal::FullEvacuator, v8::internal::MarkCompactCollector>(v8::internal::MarkCompactCollector*, v8::internal::ItemParallelJob*, v8::internal::MigrationObserver*, long) [/usr/local/bin/node]
15: 0x1003e7c76 v8::internal::MarkCompactCollector::EvacuatePagesInParallel() [/usr/local/bin/node]
16: 0x1003d33e7 v8::internal::MarkCompactCollector::Evacuate() [/usr/local/bin/node]
17: 0x1003d0c7b v8::internal::MarkCompactCollector::CollectGarbage() [/usr/local/bin/node]
18: 0x1003a359b v8::internal::Heap::MarkCompact() [/usr/local/bin/node]
19: 0x10039fb89 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
20: 0x10039d9d0 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
21: 0x1003ac0da v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]
22: 0x100379772 v8::internal::Factory::CodeBuilder::BuildInternal(bool) [/usr/local/bin/node]
23: 0x10104de32 v8::internal::compiler::CodeGenerator::FinalizeCode() [/usr/local/bin/node]
24: 0x101237875 void v8::internal::compiler::PipelineImpl::Run<v8::internal::compiler::FinalizeCodePhase>() [/usr/local/bin/node]
25: 0x10122c2fe v8::internal::compiler::PipelineImpl::FinalizeCode(bool) [/usr/local/bin/node]
26: 0x10122c15b v8::internal::compiler::PipelineCompilationJob::FinalizeJobImpl(v8::internal::Isolate*) [/usr/local/bin/node]
27: 0x1002c2d51 v8::internal::Compiler::FinalizeOptimizedCompilationJob(v8::internal::OptimizedCompilationJob*, v8::internal::Isolate*) [/usr/local/bin/node]
28: 0x1002e218b v8::internal::OptimizingCompileDispatcher::InstallOptimizedFunctions() [/usr/local/bin/node]
29: 0x100359104 v8::internal::StackGuard::HandleInterrupts() [/usr/local/bin/node]
30: 0x1006f9897 v8::internal::Runtime_StackGuardWithGap(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node]
31: 0x100a80639 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/usr/local/bin/node]
32: 0x368d8cb0e14e 
[1]    38796 abort      ganache-cli ethereum --logging.debug --fork.url 

Increasing the heap limit to 6 GB (node --max-old-space-size=6144) did not help.

@davidmurdoch
Member

Can you try ganache@7.0.0-alpha.2 and let me know if it still runs into memory issues? Release notes: https://github.com/trufflesuite/ganache/releases/tag/ganache%407.0.0-alpha.2

To install globally run:

npm uninstall ganache-cli --global
npm install ganache@alpha --global

@robrichard
Author

@davidmurdoch yes, this is the version I am using:

ganache v7.0.0-alpha.2 (@ganache/cli: 0.1.1-alpha.2, @ganache/core: 0.1.1-alpha.2)

@davidmurdoch
Member

Ah, I see the problem in the code. Ganache attempts to fetch every block from "0x0" to "latest" individually instead of just asking the remote node for the data directly; this will eat up some memory very quickly. :-)

Until we fix this, you may be able to work around it by requesting the logs in chunks (maybe 10,000 blocks at a time? You'll have to experiment).
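The chunking workaround can be sketched roughly as follows. This is an illustrative sketch, not part of Ganache: getLogsChunked, chunkRanges, and the default chunk size are made-up names, and the provider is assumed to expose an EIP-1193-style request({ method, params }) method.

```javascript
// Hypothetical workaround sketch: split one huge eth_getLogs query into
// fixed-size block ranges and merge the results, so no single request
// forces the node to materialize the whole history at once.

// Pure helper: split the inclusive range [fromBlock, toBlock] into
// sub-ranges of at most `size` blocks.
function chunkRanges(fromBlock, toBlock, size) {
  const ranges = [];
  for (let start = fromBlock; start <= toBlock; start += size) {
    ranges.push([start, Math.min(start + size - 1, toBlock)]);
  }
  return ranges;
}

// Issue one eth_getLogs call per sub-range and concatenate the logs.
// `filter` carries the address/topics; fromBlock/toBlock are overridden
// per chunk with hex-encoded block numbers.
async function getLogsChunked(provider, filter, fromBlock, toBlock, size = 10000) {
  const logs = [];
  for (const [start, end] of chunkRanges(fromBlock, toBlock, size)) {
    const chunk = await provider.request({
      method: "eth_getLogs",
      params: [{
        ...filter,
        fromBlock: "0x" + start.toString(16),
        toBlock: "0x" + end.toString(16),
      }],
    });
    logs.push(...chunk);
  }
  return logs;
}
```

The right chunk size depends on how many logs the contract emits per block range, so it is worth tuning experimentally as suggested above.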

For a known contract like this one, you can easily look up its "Creator" transaction on Etherscan (https://etherscan.io/address/0x7e88916c4dD22D6C1d04Ec87Ca54Bf777cA0B2B6, see "Creator:") and then divide the work up using that transaction's block number as the starting point. However, for contracts not known ahead of time, you might need to binary search the blockchain to discover when the contract was created before chunking the requests.
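That binary search can be sketched like so. This is a hypothetical illustration: findCreationBlock is a made-up name, and hasCodeAt stands in for an async predicate backed by eth_getCode at a given block (returning true when the result is something other than "0x").

```javascript
// Hypothetical sketch: find the first block at which a contract has code,
// i.e. its deployment block, via binary search. `hasCodeAt(blockNumber)`
// is an assumed async predicate, e.g. wrapping
// provider.request({ method: "eth_getCode", params: [address, blockHex] }).
async function findCreationBlock(hasCodeAt, latestBlock) {
  let lo = 0;
  let hi = latestBlock; // assumes the contract has code at latestBlock
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (await hasCodeAt(mid)) {
      hi = mid;     // code already present: creation is at mid or earlier
    } else {
      lo = mid + 1; // no code yet: creation must be after mid
    }
  }
  return lo; // first block with code, i.e. the creation block
}
```

Each probe is a single RPC call, so this needs only O(log n) requests over the chain height before the chunked eth_getLogs queries begin.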

I'll put this bug in our backlog. Thanks for reporting it!

@davidmurdoch
Member

Related: #145

@CharlieMc0

Any updates on this issue or the workaround? I am running into this issue as well.

@MicaiahReid
Contributor

@CharlieMc0 we have not started on this one yet. Will the workaround that @davidmurdoch posted above not work for you?

@davidmurdoch davidmurdoch moved this to Inbox in Ganache Jul 19, 2022
@davidmurdoch davidmurdoch moved this from Inbox to Backlog in Ganache Jul 19, 2022
@adjisb
Contributor

adjisb commented Sep 20, 2022

I'm trying to fix it with: #3692
I don't know much about Ganache. I don't understand why BlockLogs and BlockLogManager.get are so tightly coupled to a single block, and I'm also not sure whether I need to deserialize and then serialize back in getFromFork.
