Replace generic-pool with tarn #808
Given the response to coopernurse/node-pool#256 (that it is intentional that resources continue to be created/released after draining the pool), we probably should look at moving away from `generic-pool`.
@patriksimek it seems that the travis builds aren't running or being triggered properly - any chance you can look at it?

Ah, don't worry - I have access - I can't see anything amiss at the moment.
OK - I think travis is not building draft PRs for some reason.
@gshively11 that looks interesting, though I've tried a patch of my own here: #806. I'm not entirely following how the patch you linked to solves the problem, because if we lose the DB connection we'll error immediately and that will win the promise "race"...

As explained above, unfortunately this means if we detect a connection error and close the pool, the library will continue to create a resource until the acquire timeout is reached (if it's set) or infinitely until the DB comes back.
OK - so the idea is that the pool error would have been recorded before the next tick picks up the timer and evaluates the promise... OK, JS is funky.
Yea, something like that. I'm going to play with it today to see how it behaves. Even if the error doesn't map 1:1 with an acquire call, if we can propagate the error up to tedious and prevent an infinite loop, that'll be good enough for our use case.
@gshively11 would you mind testing out my fix in #806? It's not going to fix an infinite-loop problem if there's no acquire timeout set on the pool, but it stops the pool trying to create hundreds of connections a second. The only other option I can think of is a fairly aggressive destruction of the pool on connection errors, which seems a bit over the top.
Yea will do, although we definitely want to fix both problems.
Yes, but the infinite loop problem can only be fixed by:
You can already attempt to close the pool on connection errors:

```js
pool.connect().then(connection => {
  connection.on('error', (e) => {
    if (connection.connected || connection.connecting) {
      connection.close()
    }
  });
});
```

This can be used in conjunction with an acquire timeout to stop pools getting stuck.
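The role the acquire timeout plays can be sketched with a plain `Promise.race`: a hypothetical `acquireWithTimeout` helper (not node-mssql or generic-pool API) where whichever settles first, the acquire or the timer, wins the race and bounds how long a stuck pool can hold a caller:

```js
// Hypothetical helper illustrating the acquire-timeout idea: race the
// acquire promise against a timer so a stuck pool can't block forever.
function acquireWithTimeout(acquirePromise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`acquire timed out after ${ms}ms`)),
      ms
    );
  });
  // Clear the timer either way so the process can exit cleanly.
  return Promise.race([acquirePromise, timeout])
    .finally(() => clearTimeout(timer));
}
```

If the DB never comes back, the caller sees a timeout error instead of waiting indefinitely, which is exactly the "stuck pool" escape hatch described above.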
We want to stop generic-pool from getting stuck in an infinite dispatch loop. I think we can do that similar to how sequelize did it. Does #806 handle that piece as well, or is it only making sure that pools get closed and connection attempts aren't spammed?
That's right, but I'd recommend a v5 upgrade as there aren't any significant breaking changes; it adds pausable stream support to requests and (soon) a fix for this pool issue.
At the moment, no, because the only way to do that is to swallow any errors and release a dud connection object. That would be quite a significant change to how this lib works and would certainly require a major version bump. It would also mean one bad connection has the ability to trash the entire pool...
Ok, I think we'll do a quick temporary fork just for our stuff, because a dud connection object is way more desirable than the current state. We have a wrapper around node-mssql that does failover/retry etc. to handle that situation anyway. Once we're stable again I can test out v5 and the lib replacement fix.
Ok, I thought I was going to be able to work a hack into node-mssql to make generic-pool work, but I don't think I can without monkey patching. So I need to focus on either patching/forking generic-pool, or looking at your tarn solution. I'm looking for the internal pool used by node-mssql to function as follows - could you let me know if this is how it'll be in your PR please, @dhensby?
This behavior is critical due to how we handle password rotations, failovers, etc. for our databases. We cannot have the pool indefinitely try to recreate. It can try once per request, but then it needs to bubble that failure up so we can take action on it (kill the ConnectionPool, create a new one with new DB config).
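The behaviour asked for above - one connection attempt per request, with the failure surfaced to the caller rather than retried internally - can be sketched like this (a hypothetical `createOnce` wrapper, not the actual PR code or tarn's factory API):

```js
// Hypothetical create wrapper: one connection attempt per acquire, no
// internal retry loop. A failed attempt rejects so the caller can act
// (e.g. destroy the ConnectionPool and rebuild it with fresh DB config).
async function createOnce(connectFn) {
  try {
    return await connectFn();
  } catch (err) {
    // Bubble the failure up instead of looping until the DB comes back.
    err.message = `pool create failed: ${err.message}`;
    throw err;
  }
}
```

The key design point is that the pool never swallows a create failure: each acquire either gets a real connection or a rejection it can handle.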
It seems like it does, since you're using
On line 262 of tedious.js, would we need to add a check for
I've made it so that the reject/resolve calls can only be done once, without the need for a variable to be passed around.
Ideally for v6 we'd also upgrade the tedious version (again); see #818.
Looks good. A changelog entry should be added, along with a short migration guide.

Could we augment the pool property with `size`, `available`, `pending` and `borrowed` as gettable variables that emit deprecation warnings and return the tarn equivalents? I can think of at least one project that uses those props in production 😉

Edit: just to clarify, I'm talking about using `Object.defineProperty` with a getter, on the pool property, and having that getter emit a deprecation warning.
There's actually very little migration required because the config API is so similar. The main changes are accessing the properties, and if I add in those properties that'll basically be it. I'm not sure where would be appropriate to put the upgrade guide, either :/
I would stick it in the changelog and/or the GH pages branch where there's a 3-4x migration guide already.
@dhensby Thanks for actioning/answering feedback, this just needs rebasing now (annoyingly!) but otherwise it looks ready to go from my point of view.
@willmorgan rebased :)
Rebase of #624

`generic-pool` does seem to be pretty broken from what I can tell (see #805). Tarn JS seems reasonable, but it doesn't appear to be actively maintained (last release was 1 year ago).