Pull Requests/Merge to master #306
@aheusingfeld any idea why the Travis build is failing?
I think the reason the build fails is not the right one. The build now fails when any test fails. But the tests point out incompatibilities/bugs in various servers, so almost by definition not all tests can pass. For instance, no matter the commit, the build currently ALWAYS fails because some early test in Batch fails. But if JBoss (which I think the tests are run against) itself has a bug here, then the test is still valid, and JBoss should fix it. Meanwhile, the build should not fail here. For a PR, the build should probably only fail if the code doesn't compile.
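One way to get that behaviour, sketched under the assumption that the Travis build invokes Maven directly (the exact goal and flags here are illustrative, not taken from the project's actual `.travis.yml`):

```yaml
# .travis.yml (sketch): run the full test suite, but only fail the
# job on compilation errors, not on test failures.
language: java
script:
  # -Dmaven.test.failure.ignore=true records test failures in the
  # surefire reports but lets the build itself succeed; a compile
  # error still fails the mvn invocation and hence the job.
  - mvn -B install -Dmaven.test.failure.ignore=true
```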
I do personally test every PR, but what do you want to see as a result? That's my question. A failing test does not mean the PR is invalid.
It seems it actually runs the tests and aborts on the first failure, then marks the entire build as failed. See my comments here as well: #303
Which essentially means that we (and people potentially interested in Java EE goodies) have no idea whether this code is valid unless they somehow manage to run the relevant parts themselves. I think the problem lies in how the repository is structured, but also in how we have the builds defined. I believe we can do much better. At this point, all of the infrastructure around it serves no purpose, in my humble opinion.
No, the test is valid; their server is not. It points out to people that they can't use that feature on their specific server. It would be good, though, to run all tests and not have the run abort on the very first failure.

That said, we should check external and each other's commits to really make sure each test is indeed valid. This is not always easy; I mean, who here knows enough about JASPIC to double-check that my JASPIC tests are valid? Again, a failing test does not indicate an invalid test. I personally go to great lengths to check my tests, validate with the spec lead of JASPIC in case of doubt, and run them on multiple servers. In some cases a seemingly simple commit has taken weeks of testing the test itself.

I would like an easier way to run just the JPA tests or just the JASPIC tests, though. Right now the easiest way seems to be to comment out the unneeded modules in the main pom.xml file.
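For running a single module, Maven's `-pl`/`-am` options might already be enough, without touching the pom. A sketch, assuming the module directories are named `jpa` and `jaspic` (check the actual `<module>` entries in the root pom.xml):

```shell
# build and test only the JPA module (plus the modules it depends on)
mvn verify -pl jpa -am

# likewise for just the JASPIC tests
mvn verify -pl jaspic -am
```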
That's not totally correct. It seems that Travis builds cannot have more than 10,000 lines of output. If you take a look at all the builds, they are all stuck at this log line no matter what it is. I'll ask the Travis guys about it. Ah, and by the way
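If the log cap is really the issue, trimming the output before it reaches the Travis console could be a workaround. A minimal sketch in plain shell (the 10,000-line figure is taken from the comment above, not from Travis documentation):

```shell
# Simulate a verbose build log and keep only the tail, so the job's
# console output stays well under any line cap; the full log could
# still be archived as a build artifact.
seq 1 50000 > full-build.log      # stand-in for a huge Maven log
tail -n 500 full-build.log        # print only the last 500 lines
```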
I'm 100% with Bartosz on this one. I think it'd be much better if we created separate builds for the submodules; the builds would also be much faster.
Okay, that's interesting. It looked like the abort was on the first failure, but that could be a coincidence then.
Intermediate result: downloading the dependencies alone takes ~3 minutes! https://api.travis-ci.org/jobs/64117841/log.txt?deansi=true
Our build was killed after 20 minutes, probably because of the 'quiet' flag. I will remove the flag again so Travis shows the real failure, and I'll also add the
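That theory fits how Travis behaves: a job that produces no output for 10 minutes gets terminated. If we want a small log without tripping that timeout, `travis_wait` can be wrapped around the quiet Maven run. A sketch (the 30-minute value is an assumption, not a measured build time):

```yaml
# .travis.yml (sketch): travis_wait keeps the job alive even though
# the quiet (-q) Maven run prints nothing for long stretches.
script:
  - travis_wait 30 mvn -q -B install
```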
There really are a lot of broken tests in the Travis build. Maybe one of you already wants to take a closer look at them while I'm trying to get the build working? https://api.travis-ci.org/jobs/64335298/log.txt?deansi=true
One additional type of "failure" I see in the log is that some tests are run against TomEE, but TomEE is a Web Profile+ implementation and doesn't support all the technologies being tested. For instance, JASPIC fails on TomEE, but TomEE doesn't support JASPIC.
FYI, I split the Travis build by subfolder so the parts could theoretically run in parallel. I think the output of this very clearly highlights our current problem: it's just too huge to be one piece! If the build of a single subproject takes more than 6 minutes, how can we still speak of "fast feedback"?

@bartoszmajsak I guess splitting by subproject is a reasonable first step. We now need to see whether we can limit builds to only the subprojects where changes have been made. The obvious way would be to really split the GitHub project, but I don't like this too much as it has (organizational) side effects. If none of you minds, I'd like to focus on splitting only the build and try to filter the subprojects via Travis' explicit include feature. If this doesn't work, we can still think about plan B. Does that make sense?
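Travis' build matrix could express the split without splitting the repository: each `env` entry becomes its own parallel job. A sketch of the idea, with hypothetical module names (the explicit include feature mentioned above could then add or exclude specific combinations):

```yaml
# .travis.yml (sketch): one Travis job per submodule, run in parallel.
language: java
env:
  - MODULE=jpa
  - MODULE=jaspic
  - MODULE=batch
script:
  # -pl builds just the named module, -am adds the modules it needs
  - mvn -B verify -pl $MODULE -am
```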
Splitting by subprojects makes perfect sense. Though I have a different question: we are testing against
@kubamarchwicki forget what I said in the comment above; I tried different things locally, and I think that splitting into multiple subprojects, i.e.
I have no feelings about this. I doubt, though, that switching the server will have an impact on the OutOfMemoryError we currently have for the JPA build :-o
Didn't know about the OOM with JPA :o Maybe a different approach: have a WildFly / GlassFish instance running. Splitting into multiple independent projects might be a way to go as well. What's your view?
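On the OutOfMemoryError side, a first thing worth trying might be simply raising the heap available to the Maven JVM (the 1 GiB value below is an assumption, not a measured requirement; surefire-forked test JVMs would need their own tuning via the surefire `argLine`):

```shell
# Give the Maven JVM more heap before running the JPA build.
export MAVEN_OPTS="-Xmx1024m"
echo "$MAVEN_OPTS"   # → -Xmx1024m
```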
I switched the build to GlassFish; let's see how it goes.
We'll need a
There are two util projects: utils for Batch, and generic test utils. Though that's quite a revolution :)
Hi guys,
I understand we gave up on the rebase approach and we do merges from GitHub. Fine with me. I'm just wondering why we merge PRs that lead to a failing build? Such an approach risks introducing lots of issues into the whole project, because we are missing an opportunity to validate them.
I assume a PR is at least executed on the local env of the approver before hitting the magic button, isn't it?