geth API performance degradation #15739
When this happens again, could you do a `debug.cpuProfile`?
I restarted and it's acting badly already :(
We're seeing the same, mainly when it is used to serve a lot of RPC requests.
Edit: the image was so large that I made it a link instead of displaying it inline.
Could background database compaction be the culprit?
It does not really look like compaction to me; more like many simultaneous db requests (during state.copy?) and IO lag when it tries to open database files.
Same problem when I send more than 5 RPC requests at the same time: memory gets consumed and is never released. In Bitcoin Core, bitcoind can limit rpcthreads to avoid this problem.
Adding another debug.cpuProfile("geth.cpu", 60), captured during a period when estimateGas took over 1 minute to return.
And another. geth is under zero load; everything is offline and response times are still in the tens of seconds. This should be obvious to track down. Please help.
@richiela which version of geth are you running?
I'm asking because I can't make sense of the profile without having the binary that created it. It doesn't seem to be geth 1.7.2 from geth.ethereum.org/downloads.
name: "Geth/v1.7.2-stable/linux-amd64/go1.8", Compiled from source.
In that case it would be good to have the actual geth binary.
Note: you should consider compiling with go 1.9.2 instead of go 1.8. They added debug symbols to profiles in go 1.9. We still need the "Geth/v1.7.2-stable/linux-amd64/go1.8" binary to look at the profiles you posted.
@richiela could you provide us with the binary?
Regarding estimateGas: I have a hunch that if you provide a "gas" value (the maximum gas) in the call instead of letting geth supply it, you'll get back to very quick responses. @richiela, could you please test that? Specify something like
I'm not able to repro myself, but I would very much like to know if this "solves" it, because if so, I know what the underlying issue is.
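A hypothetical request body illustrating that suggestion (the addresses and the gas cap of 0x15f90 = 90,000 are placeholders, not values from this thread); with "gas" supplied explicitly, geth does not need to run its own gas estimation against the pending state:

```json
{
  "jsonrpc": "2.0",
  "method": "eth_sendTransaction",
  "params": [{
    "from": "0x0000000000000000000000000000000000000001",
    "to": "0x0000000000000000000000000000000000000002",
    "value": "0x1",
    "gas": "0x15f90"
  }],
  "id": 1
}
```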
@richiela also, when you measure
```
time curl -X POST http://10.0.0.23:8101 -d @test.json
real	0m5.283s
cat test.json
```

Doesn't solve the problem. As for the sendDuration, it is not constant because in some cases we have to unlock and in others we don't. Following Nick's suggestion of using a static gas value, our problem seems to have been solved, so the majority of it was caused by that single call. This is what it looks like today:

EthereumDeposit [ETH] : Ethereum.Deposit: swept 0xcca1917d40043229c82e44cf9afc239dbea027a1 192 / 198: getBalanceDuration 38 ms sendDuration 1260 ms

Any call over 1 s is usually an unlock; anything under 1 s is a deposit contract. I will upload a binary as soon as I get a second to go back on that server.
geth (2).zip |
Hmm, eth_call is done on latest and usually just on a small number of calls. I don't have a good way to test eth_call on pending.
Ok, I just took what was a slow estimateGas RPC and turned it into an eth_call. That is, I issue the same eth_call twice, once against 'latest' and once against 'pending', on a node that's processing other requests. Sometimes they both take roughly the same time (tens of milliseconds), but very often the 'pending' one takes 6-10 seconds.
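That comparison amounts to issuing the same call body with only the block parameter changed. A sketch of the two raw JSON-RPC requests (the `to` and `data` fields are placeholders):

```json
{"jsonrpc":"2.0","method":"eth_call","params":[{"to":"0x0000000000000000000000000000000000000002","data":"0x"},"latest"],"id":1}
{"jsonrpc":"2.0","method":"eth_call","params":[{"to":"0x0000000000000000000000000000000000000002","data":"0x"},"pending"],"id":2}
```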
This is too old to do anything meaningful. Some amount of fluctuation is to be expected when interacting with the pending state because access to it must synchronize with generation of the pending state. |
Hi there,
please note that this is an issue tracker reserved for bug reports and feature requests.
For general questions please use the gitter channel or the Ethereum stack exchange at https://ethereum.stackexchange.com.
System information
Geth version: "Geth/v1.7.2-stable/linux-amd64/go1.8"
OS & Version: Ubuntu 16.04
Commit hash: (if `develop`)
Expected behaviour
daemon responds to API calls in a timely manner
Actual behaviour
After running for 10-12 hours, the daemon's jsonrpc interface response times for certain calls start slowing down. getBalance calls seem to take the same time, as does eth_call. But eth_sendTransaction, regardless of whether it is from a locked or unlocked (coinbase) account, slows down to the point where it's unusable. A restart of the daemon fixes the issue.
This is what it looks like after restart of the daemon and for the first few hours
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0xc3cc155e320bb7da40b65c7a36ad4d944aafaa86 2056 / 2816: getBalanceDuration 55 ms sendDuration 802 ms 2017-12-22 05:56:20.350
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0x6045ed31b513c80c44e31594c4feb27333022bd7 2055 / 2816: getBalanceDuration 51 ms sendDuration 436 ms 2017-12-22 05:56:19.337
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0x5cf085326ad6e992409806bef9d93b99eb968261 2054 / 2816: getBalanceDuration 48 ms sendDuration 494 ms 2017-12-22 05:56:18.807
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0x6034f9fecdca8355dbfe971f80e82ba42236f5ef 2053 / 2816: getBalanceDuration 52 ms sendDuration 463 ms 2017-12-22 05:56:18.243
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0x30277fc62f48a2b54903cb427628acef6c33f287 2052 / 2816: getBalanceDuration 41 ms sendDuration 1884 ms 2017-12-22 05:56:17.697
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0xab8b9e9d6cf1cd0cefac392d8a87d48e0597fc58 2051 / 2816: getBalanceDuration 44 ms sendDuration 364 ms 2017-12-22 05:56:15.730
Here is the response time after about 10 hours:
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0xfbb30fd8cb6869a711455348b01f740ecfcc7a1f 575 / 618: getBalanceDuration 40 ms sendDuration 11508 ms
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0x40b378794490d77acd52e188619bf0aff5178202 574 / 618: getBalanceDuration 2 ms sendDuration 149232 ms
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0x83aea7637dd722283a2f86e3037716a57ec8082e 573 / 618: getBalanceDuration 40 ms sendDuration 49735 ms
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0xab744eab6f50422b44b30b55a599f277a61f7465 572 / 618: getBalanceDuration 44 ms sendDuration 144540 ms
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0xe0a21dfac0182b71850b90b0a72e8b2f8c8114e9 571 / 618: getBalanceDuration 40 ms sendDuration 90995 ms
EthereumDeposit [ETH] : Ethereum.Deposit: swept 0x635b5f9af7edabc406f4b7afc0ebb9d52f3e8875 570 / 618: getBalanceDuration 41 ms sendDuration 152650 ms
geth is not resource constrained, but it keeps climbing in memory usage:
47771 101001 20 0 22.521g 0.016t 45992 S 161.8 30.3 823:12.48 geth
Steps to reproduce the behaviour
Just let the daemon run and use it normally.
Backtrace