GTA does not detect process termination triggered by CRT memory leak detection #266
Thanks for filing the issue and for the sample code! Is it OK if I add that code to my SampleTests suite? Now I better understand the approach, and I might indeed have a solution.

Currently, GTA completely ignores the exit code of the test executables (except during test discovery, but that is merely to recognize whether something has gone wrong there). My idea (which I have already partly implemented) is as follows: I will provide a new option which allows specifying a test name (e.g., MemoryLeakTest). If that option is set, GTA will add a test with that name to each test executable, and that test will pass if and only if the executable exits with code 0. For your code, this would mean that you would not return the result of RUN_ALL_TESTS(), but would always return 0 from main, leaving the non-zero exit code to the CRT leak check.

The only drawback I see is that the test executable will not behave as "normal" gtest executables do (which produce an exit code of 0 if and only if no test has failed), but if the tests are only executed through GTA (be it locally or on the build server), that shouldn't be an issue. I would however probably add some output to the test executable to indicate that changed behavior. Would that solve your problem?
Sure, feel free to add the memory leak test code to your sample tests suite; there is nothing proprietary in it. Thanks for explaining your proposal. Let me summarize to make sure I understand correctly:
This approach should work and should solve the issue of not detecting a memory leak failure indicated by a non-zero exit code. I just have a few questions:
Thanks!
The lists of tests didn't show up correctly; here's what I meant:
Also, question 1 above did not show up correctly; it should be:
I'd seen the corrected lists in the meantime. In any case, the point is that the added test would be green iff the executable returns 0 when run by GTA. Concerning your questions: your description is accurate. The added test will be treated as any other test (despite having no source location) and will thus be included in the total list of tests.
OK, if I understand correctly:
Is this logic correct? If so, should the test main still always return 0? When the test executable is run standalone (typing test_program.exe at a command prompt), GTA is not involved at all, so the return code from RUN_ALL_TESTS() is the only success/failure indication. At our workplace we have 3 scenarios under which unit tests are run:
So for scenario 2), the test executables are run without GTA, and so I think returning the status of RUN_ALL_TESTS() is necessary. Also, scenario 2) is an important use case because scenario 1), running unit tests in Test Explorer, will only run the unit tests in the currently opened solution. We have multiple Visual Studio solutions, each with its own set of unit tests, so scenario 2) is a way for developers to quickly execute all unit tests (outside Visual Studio) across all solutions and make sure their changes did not break anything. Let me know what you think.
Yes, your logic is correct, and that's the reason why I suggested letting main always return 0. I see. I have just tested the executable's behavior on the console, and it seems that memory leaks only cause an exit code of 1 if compiled in Debug mode (in Release mode they do not seem to have any effect), making matters even worse from GTA's point of view :-) However, here's what you could do:
Yes, CRT memory leak detection is only compiled into Debug builds, and that is perfectly OK, as it is expensive to perform and you wouldn't want it enabled in your optimized release code. We use it in debug code as part of the Gated Build so memory leaks are never allowed to be checked into the code base.
So I really don't think we need to concern ourselves with memory leak detection in Release builds if that will make matters slightly better for GTA :) I think there are just 3 scenarios to consider, running with or without GTA:
For each scenario when running with GTA, where a MemoryLeakTest is added:
For each scenario when running without GTA (console), where no MemoryLeakTest is added:
This is also why I still think test main should just return RUN_ALL_TESTS() (instead of always returning 0), to support case 2) when running without GTA. I think in all these 6 cases the outcomes are acceptable, in the sense that if something has failed then the overall status is failure and the build won't pass. The only case that is slightly unexpected is case 2) with GTA, where a test failure also causes the MemoryLeakTest to show up as a failure. It's not too bad and I would be willing to live with that limitation. I'll ponder this some more though...
I was surprised about the Release behavior because I usually try to minimize differences between Release and Debug behavior for the sake of better maintainability (and after all, I guess you are not delivering your test executables to your customers ;-) )... But that's of course completely up to you. With the approach described above, we don't have to make that compromise. The main method will look something like this:

```cpp
int result = RUN_ALL_TESTS();
// "running_inside_gta" stands for a command-line switch of your choosing
// that is passed only when the tests are run through GTA.
if (running_inside_gta)
    return 0;       // GTA evaluates the exit code via the added exit code test
else
    return result;  // console runs keep the usual gtest behavior
```

That way, the executable will behave correctly both when run inside GTA and from the console. You could even use that switch to produce some meaningful output: for instance, only print the result code output tag if the switch is enabled, and print something like "No memory leak checks are performed in Release mode" in Release mode and the output of the memory leak check in Debug mode (if it has failed), so you can distinguish between the test having passed because no memory leaks have been found vs. the test having passed because no check has been performed. I will probably provide a first version for testing tonight - you can see what I mean in the sample code if you are unsure.
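For reference, a minimal self-contained sketch of such a main method, assuming a hypothetical -running_inside_gta switch (the switch name and the way it is passed to the executable are illustrative, not something GTA defines):

```cpp
#include <string>
#include <gtest/gtest.h>

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);

    // Hypothetical switch; pass it only when the executable is launched by GTA.
    bool running_inside_gta = false;
    for (int i = 1; i < argc; ++i) {
        if (std::string(argv[i]) == "-running_inside_gta")
            running_inside_gta = true;
    }

    int result = RUN_ALL_TESTS();

    // Inside GTA, return 0 so that only the CRT leak check (which terminates
    // the process with a non-zero code on a leak) determines the exit code
    // seen by GTA's added exit code test; on the console, keep gtest's usual
    // behavior of returning non-zero when any test failed.
    return running_inside_gta ? 0 : result;
}
```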
Thanks, I'm looking forward to testing it. On a separate note, I uninstalled TAFGT yesterday and installed GTA in Visual Studio, but ran into a very strange test discovery and execution issue. I don't think it is a problem in GTA, as it would have been seen by many others using GTA. It may be related to my VS configuration, but I can't see anything that is obviously wrong, and I'm wondering if you have come across this before. Sorry for being off topic and please let me know if I should move this to another forum. Some of the discovered tests seem to "lose" their "test_case_name" and are left with just ".test_name" (instead of the full "test_case_name.test_name"). It's hard to explain, so I'll try and attach a few screenshots. I did not run into the same issues with TAFGT, and I was careful with exiting and restarting Visual Studio during the adapter re-installation. I can still run the tests in Test Explorer, but there are a lot fewer tests than there actually are.
I think I have found the issue after searching and reviewing issue 196: I have some initialization/shutdown code in test main that runs during test discovery. I didn't think it would interfere with test discovery, and it was not a problem with TAFGT, but it certainly interferes with GTA. I've excluded the initialization/shutdown code from the test discovery pass and all the tests are running now.
I have provided a first version for testing. See C++ code and tests for usage. There are some remaining issues (some of which I hope to be able to resolve with your help):
Thanks for providing a beta for testing so quickly. I tested it this morning and it looks very good :)
Now for feedback:
I see the additional tests show up as "gtest_Solvers_exe.MemoryLeakTest" in the output, but in the Test Explorer hierarchy it looks like "gtest_Solvers.gtest_Solvers_exe.MemoryLeakTest". Some more warnings I saw in the output window:
Actually, I was wrong about the extra "MemoryLeakTest" test not showing up in the test hierarchy if the test passed. Somehow I missed it in the screenshot above. I see that all test executables have an extra MemoryLeakTest.
Thanks for the kudos! Btw, in case we get this running smoothly for your use cases: your company might consider donating ;-)

I have just pushed a couple of bugfixes; it should work better now. I may have let you test a bit too early, sorry for that - I must admit that I didn't have time to test within VS...

Gtest leaks: interesting... I encourage you to provide that pull request :-)

Output: I do not want to produce that output within GTA, but within the test executable. Currently, I provide output in case a leak occurs and in case no leak detection is performed. It would feel more symmetric to provide some output such as "No memory leaks found." in the remaining case. Seeing your comment on the message of the failing memory leak test, I realize that you apparently haven't noticed the two tags in the output which make GTA parse what follows and use it as the test's message :-) I'd like to use that mechanism for the success case, too...

Btw, is there any way to get more output from leak detection? If that's possible, I'd like to add it to further improve the sample code.
For sure - you have been very proactive in providing support and I'll certainly take that into consideration! Ahh, I did not notice the two tags in the output; I will incorporate that into our code and do some testing. There are ways to get more detailed output from CRT leak detection. I did not supply that code initially as it would have taken longer to put together the memory leak test, but I'll put together something for your sample code next week.
I've fixed the remaining issues as far as I can see - please try this version (note that the output tags have changed slightly). I've also put together some documentation. As a side note: Given what I have so far, it shouldn't be too difficult to automatically identify the actual test(s) which produce leaks. Would that be an interesting use case for you?
Thanks, I'll try out the new version as soon as I can, in both Visual Studio and in automated Gated Builds, so it may be a few days before I have all the testing done, but I'll provide feedback as soon as it is available. I'll also try to extract some of the memory leak message formatting code for you to use. It would be nice to automatically identify the actual test(s) which produce the leaks (I assume you mean narrowing it down to a single test or a small number of tests within a test executable containing a large number of unit tests), but I don't see how you would be able to do this. The CRT memory leak check runs at application shutdown, after a set of test cases has run, and any one of those test cases could have caused the leak. The memory leak message formatting code which I'll provide can identify the source code location where the leaked memory was allocated, but it cannot identify which unit test the leak is associated with. The only way I know of is to manually debug the tests in Visual Studio and use breakpoints to track down which test has a memory leak. Anyway, if you know of a way to track down a leak at a finer granularity than the test executable, let me know :)
A recursive approach would work: split the tests in half and run both sets; mark all tests in runs without leaks as "not leaking"; mark a run consisting of a single test as "leaking" if it leaks; repeat with all tests in leaking runs; stop when every test is marked either "leaking" or "not leaking". There are probably smarter ways to do this, but even this naive approach would probably be sufficiently fast (assuming that there are only a few leaking tests). This would of course not happen during a test run, but could be invoked separately (e.g., over a lunch break :-) ). Anyway, if you start testing, please go for the PR build of this PR (see checks) to get the latest version...
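A minimal sketch of that bisection idea, assuming a hypothetical runAndCheckForLeak helper that runs the executable restricted to the given tests (e.g., via --gtest_filter) and reports whether that run signaled a leak; this merely illustrates the algorithm and is not GTA code:

```cpp
#include <functional>
#include <set>
#include <string>
#include <vector>

// Hypothetical: runs the test executable with only 'tests' enabled
// (e.g., via --gtest_filter) and returns true if that run reported a leak.
using LeakCheck = std::function<bool(const std::vector<std::string>&)>;

void FindLeakingTests(const std::vector<std::string>& tests,
                      const LeakCheck& runAndCheckForLeak,
                      std::set<std::string>& leaking) {
    if (tests.empty() || !runAndCheckForLeak(tests))
        return;  // no leak in this subset: all of its tests count as "not leaking"
    if (tests.size() == 1) {
        leaking.insert(tests.front());  // a single leaking test has been identified
        return;
    }
    // Split the subset in half and recurse into both halves.
    const auto middle = tests.begin() + tests.size() / 2;
    std::vector<std::string> left(tests.begin(), middle);
    std::vector<std::string> right(middle, tests.end());
    FindLeakingTests(left, runAndCheckForLeak, leaking);
    FindLeakingTests(right, runAndCheckForLeak, leaking);
}
```

Note that this only narrows things down cleanly if each leaking test also leaks when run in a smaller subset; leaks caused by interactions between tests would not be attributed to a single test.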
Some feedback on 0.14.4.1268
In the screenshot below, I have a test executable, gtest_Solvers. The unit test fe_stress.tresca has a deliberate memory leak. If I right-click on "tresca" and pick "Run Selected Tests" to run just this one test, then I see the expected MemoryLeakTest failure. If I right-click on "fe_stress" and pick "Run Selected Tests" to run all tests in "fe_stress", I would expect "MemoryLeakTest" to fail, but it passes in this case. The same happens if I run all tests in "gtest_Solvers": the "MemoryLeakTest" passes.
That's surprising - a scenario like this behaves as desired with the SampleTests solution. I am tempted to think that your executable does not always return an appropriate exit code, I must admit... I have just checked in some debug statements which log the exit code whenever a process execution is completed. Would you mind double-checking whether your executable returns the correct value in all cases? Note that I have changed the naming scheme of the exit code tests - now the configured name comes first and is followed by the executable. This should imho result in nicer grouping and make the tests easier to find - let me know what you think...
I forgot to mention: Switch on Debug mode to see the log output, and look for "returned with exit code".
Thanks, I did some more testing and there's something going on... The RunHook_ function that is called on memory leak detection (and which calls exit(1)) is not getting called when I run the tests from the parent node.
Hm, that's not my fault, I guess... If it helps, have a look at the MemoryLeakTests project of the SampleTests solution - it works exactly as desired and is based on your code... As a side note: you are probably aware of the Print test output option!? It might help with debugging your case. Let me know whether I can be of any further assistance (for instance, if you would like some more debug output)!
Yes, the memory leak detection should be triggered in all cases; I'll have to confirm that part first.
So after more testing using simulated memory leaks in different branches of the unit test hierarchy, I'm now certain the issue is related to side effects of some tests on one another causing the CRT memory leak checker not to be called. Those are problems with our tests, though. All that to say, I think the reporting of memory leaks through GTA to Test Explorer is good :) I also tried 0.14.4.1270, and the rearrangement of the "MemoryLeakTest" to be higher in the test hierarchy is good - it makes it more prominent and noticeable. I'll do some testing on the build server in the next day or two.
Ok, good luck with debugging your tests. FYI: I have just pushed another version where the exit code tests receive the executable's main method as their source location.
I've added some more code to dump memory leak messages from the CRT; just look at the differences in the RunHook_ function.
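The RunHook_ changes themselves are not shown in this thread; as an illustration only (not necessarily how RunHook_ does it), one standard way to get the CRT leak report onto stdout, where a runner can capture it, is to redirect the CRT's debug report output before the tests run:

```cpp
#include <crtdbg.h>

// Route CRT debug reports (including the leak dump produced at shutdown when
// _CRTDBG_LEAK_CHECK_DF is set) to stdout instead of the debugger output window.
void RouteCrtLeakReportToStdout() {
#ifdef _DEBUG
    _CrtSetReportMode(_CRT_WARN, _CRTDBG_MODE_FILE);
    _CrtSetReportFile(_CRT_WARN, _CRTDBG_FILE_STDOUT);
    _CrtSetReportMode(_CRT_ERROR, _CRTDBG_MODE_FILE);
    _CrtSetReportFile(_CRT_ERROR, _CRTDBG_FILE_STDOUT);
#endif
}
```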
Thanks! I've just added it to my sample leak detection tests - weird stuff :-), but it seems to be working nicely. Any news from your side, e.g. concerning the build server? From my point of view, the feature is now pretty much ready for release...
I've been busy with other work and have not finished testing GTA on the build server yet. However, all the new functionality provided by GTA to support CRT memory leak detection has been tested from Visual Studio Test Explorer, so I don't expect any issues. I should be done with build server testing soon, so don't let me hold things up - feel free to release any time. Thanks again.
Ok, take your time... I have pushed a couple of other small changes: exit code tests can now also be skipped if they fail, and they can also be run without "real" tests. Finally, I have refactored the memory leak detection code such that it can easily be reused. Which leads to the following question: is there anything I should let my users know in case they indeed want to reuse leak detection for their own test projects? For instance, it seems that only memory allocated through the CRT heap (e.g., via new or malloc) is tracked. Could you maybe (if your time allows) quickly browse through the code (leak detection, invocation) I came up with, and maybe even the docs?
Sorry for the delay. I've managed to verify memory leak detection using the new GTA ExitCodeTest functionality on a Gated Build running on a build server. Everything is working.

I have also browsed through the leak detection and invocation code you referenced above. The only comment I have is where you write out the GTA marker tag:

```cpp
std::cout << "GTA_EXIT_CODE_OUTPUT_BEGIN\n";
```

During testing I found that this tag was not detected by GTA in time for it to extract the CRT memory leak messages. The difference in behaviour may be related to timing differences between your test application and our code. I found that I had to flush the output to ensure the logs are captured, by adding:

```cpp
std::flush(std::wcout);
```

So you may want to include this in your sample code to ensure consistent behaviour. This was a bit tricky to resolve.

The link to your docs above is not working, although I did look through the docs initially when it was working and did not see anything incorrect. If you have updated the docs since then, I can take another look.

BTW, if you don't mind me asking, how far off are you from reaching your goal of funding your replacement laptop? Feel free to send me a private message.
Thanks for your feedback, and for tracking down that "output issue"! Where did you have to put the std::flush statement? Afaik, I can't send private messages through GitHub... Would you mind dropping me an email at my first name at my last name dot de?
The std::flush statement should immediately follow the output of the GTA_EXIT_CODE_OUTPUT_BEGIN tag. Thanks, I'll send you a direct email.
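A minimal sketch of that ordering; the GTA_EXIT_CODE_OUTPUT_END tag name below is an assumption mirroring the BEGIN tag quoted above, so check the adapter's documentation for the exact tag names in your version:

```cpp
#include <iostream>
#include <string>

// Emit the leak check message between the GTA output tags, flushing right
// after the BEGIN tag (and at the end) so the adapter reliably captures it.
void ReportLeakMessageToGta(const std::string& message) {
    std::cout << "GTA_EXIT_CODE_OUTPUT_BEGIN\n" << std::flush;
    std::cout << message << "\n";
    std::cout << "GTA_EXIT_CODE_OUTPUT_END\n" << std::flush;  // hypothetical END tag
}
```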
We use the CRT debug heap to detect memory leaks in our applications and unit tests, as described here:
https://docs.microsoft.com/en-us/visualstudio/debugger/finding-memory-leaks-using-the-crt-library?view=vs-2017
This is also used to fail our Builds by terminating the process with exit code 1.
Unfortunately, since migrating to Google Test using the Google Test Adapter, I'm finding that memory leaks no longer trigger build failures; I think the Google Test Adapter does not reflect the process exit code in test failures. Is there any way to trigger test/build failures from abnormal process termination at process exit?
I've attached 2 files, test_leak.cpp and test_leak_main.cpp, containing sample code to reproduce the issue. When a memory leak is simulated in the test file, the leak is detected on process shutdown and the process exits with exit code 1.
test_leak.zip
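The attached files are not reproduced here. As a rough illustration only (not necessarily the attachment's exact mechanism, which later comments describe as a report hook named RunHook_ calling exit(1) at CRT shutdown), a gtest main that fails the process with exit code 1 when the CRT debug heap reports leaks might look like this:

```cpp
#include <crtdbg.h>
#include <gtest/gtest.h>

// A deliberately leaking test to exercise the leak check.
TEST(LeakDemo, DeliberateLeak) {
    int* leaked = new int[16];  // never deleted
    (void)leaked;
    SUCCEED();
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    int result = RUN_ALL_TESTS();
#ifdef _DEBUG
    // _CrtDumpMemoryLeaks() returns TRUE if the CRT debug heap still holds
    // allocations at this point. Allocations released later by static
    // destructors (including gtest internals) can show up as false positives,
    // which is why hooking the dump performed at CRT shutdown is the more
    // robust approach.
    if (_CrtDumpMemoryLeaks())
        return 1;
#endif
    return result;
}
```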