tensorflow version - NEON.h5 (file signature not found) #198
Thanks. I am writing a document today about how best to update. I think the cleanest approach is to manually load the desired tensorflow release model: download it from
https://github.com/weecology/DeepForest/releases/tag/v0.3.0 and then load the weights. Another user just reported that performance differed between use_release() and the downloaded model, which is extremely surprising since they are the same model (#195). Please let me know if you experience any change in performance. Close when ready. |
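A minimal sketch of that workaround, pinning the release model explicitly instead of relying on use_release(). The asset name `NEON.h5` and the `weights=` constructor argument of the tensorflow-era API are assumptions here, not confirmed by the thread:

```python
# Sketch: pin the v0.3.0 tensorflow release model instead of use_release().
# The asset name and the weights= constructor argument are assumptions.
RELEASE_TAG = "v0.3.0"
ASSET = "NEON.h5"
RELEASE_URL = ("https://github.com/weecology/DeepForest/releases/download/"
               f"{RELEASE_TAG}/{ASSET}")

def load_pinned_model(weights_path: str = ASSET):
    """Load manually downloaded weights with the pre-1.0 (tensorflow) API."""
    from deepforest import deepforest  # tensorflow-era import path
    return deepforest.deepforest(weights=weights_path)
```

Downloading the asset once and loading it by path keeps the model fixed even if the release pointed to by use_release() changes later.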
Thanks for the quick response. Yes, I also have a performance problem with the pytorch version. I'm using DeepForest in an urban area (and I'm going to train it on new urban data to get a better model). The tensorflow version is pretty good, but the pytorch version is really not great and is very sensitive to changes in patch_size and overlap. |
Can you show some example images? The underlying data and algorithm are basically identical.
Ben Weinstein, Ph.D., Postdoctoral Fellow, University of Florida, http://benweinstein.weebly.com/ |
Here is an example with the same image and the same parameters (patch_size = 400, patch_overlap = 0.15, iou_threshold = 0.15):
The first with tensorflow:
[image: tensorflow] <https://user-images.githubusercontent.com/43454650/121590723-f1e18480-ca38-11eb-9142-06317a06fb67.png>
The second with pytorch:
[image: pytorch] <https://user-images.githubusercontent.com/43454650/121590722-f148ee00-ca38-11eb-8a7e-3d8d79d37aed.png> |
Can you pass me that raw image? This is surprising; there must be an additional parameter change somewhere, perhaps the minimum score threshold between the models? We test against the benchmark here: https://github.com/weecology/NeonTreeEvaluation and the pytorch model is within 1% of the tensorflow model, with no visible differences. The training data are the same between those two releases. I just retrained a model to add new annotations, including a small urban tile, which I will upload. Would you be willing to share your annotations so I can add them in for future users? I'm going to have a look at your image, but something doesn't quite make sense. I'm going to test it on predict_image first, to make sure it isn't some tiling method difference between the old and new versions. That seems more likely than the actual model weights.
|
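For reference, the comparison under discussion can be reproduced with a call along these lines. This is a sketch assuming the pytorch-era main.deepforest API; the raster file name is hypothetical, while the parameter values match those reported in the thread:

```python
def predict_urban_tile(raster_path="urban_scene.tif"):
    """Run tiled prediction with the parameters reported in the thread.

    raster_path is a hypothetical file name; patch_size, patch_overlap
    and iou_threshold are the values from the comparison above.
    """
    from deepforest import main  # pytorch-era import path
    model = main.deepforest()
    model.use_release()
    return model.predict_tile(raster_path,
                              patch_size=400,
                              patch_overlap=0.15,
                              iou_threshold=0.15)
```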
Here is the image I used. I didn't check whether other parameters were different (I assumed all default parameters were the same). The annotations I have are for the Detect Trees challenge at Hack4Nature (https://hack4nature.slite.com/p/note/Fs34nEyzDG61edEM5oHnUF/Challenge-4-Detect-Tree). We have a Slite wiki with everything we use, and soon we will post a link to a web page for online annotations and all the data we use. I can keep you posted on our progress ;-) |
Great, thanks, keep me posted. I'm downloading that image and playing with it. I just trained a new model. I really can't imagine the tensorflow and pytorch versions should be meaningfully different. It must be in preprocessing. I'll update. |
Just a little follow-up on the performance issue: I saw your pretty good prediction with predict_image, and I wondered about the evaluation method. A comment on how evaluation works: right now the evaluation method main.deepforest.evaluate uses predict_image to make the prediction, which explains why everything works on predict_image predictions and not on predict_tile predictions. My own prediction: I used the exact same code you used and got a slightly different prediction, which is strange. My tests: here is the notebook I used for my evaluation tests. Edit: I found out that changing patch_size drastically changes the performance of the pytorch predict_tile. |
Changing patch_size will definitely affect performance for predict_tile; that's a feature, not a bug. For every input resolution there will be an optimal patch size, and there is no way to anticipate it for all users. The question is why earlier versions of deepforest predict_tile worked better for you. I will check the default patch size and try to see what has changed. I cannot fathom why using the exact same code would produce different results; that seems like an important thing to investigate. Thanks for following up. |
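To see why patch_size interacts with input resolution, here is an illustrative computation of the sliding windows a given patch_size and patch_overlap produce. This is not deepforest's actual implementation, just a sketch of the mechanism: larger patches mean fewer crops, and each tree occupies fewer pixels relative to the window the detector sees:

```python
def tile_windows(width, height, patch_size, patch_overlap):
    """Illustrative sliding-window layout for a raster.

    Not deepforest's exact implementation; it shows how patch_size and
    patch_overlap determine how many crops a tile is cut into.
    """
    stride = max(1, int(round(patch_size * (1 - patch_overlap))))

    def starts(extent):
        s = list(range(0, max(extent - patch_size, 0) + 1, stride))
        # Add a final window flush with the edge so no pixels are dropped.
        if extent > patch_size and s[-1] != extent - patch_size:
            s.append(extent - patch_size)
        return s

    return [(x, y, patch_size, patch_size)
            for y in starts(height) for x in starts(width)]
```

For an 800x800 image, patch_size=400 with 0.15 overlap yields a 3x3 grid of crops, while patch_size=800 yields a single window covering the whole image, so the detector sees objects at very different relative scales.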
I'm still trying to wrap my head around this. I think you are not using the code quite as intended; let's figure out how to make the docs better. The evaluate method is meant for when you have a .csv of annotations for each image. You are 100% right that there is no evaluate method directly for a tile. You would need to cut the tile into pieces using preprocess.split_raster, save the output, and then evaluate on this. I can make an example in the docs, and maybe a quick video on evaluation. |
Evaluating tiles: the evaluation method uses deepforest.predict_image for each of the paths supplied in the image_path column. This means that the entire image is passed for prediction, which will not work for large orthomosaics. The deepforest.predict_tile method does a couple of things under the hood that need to be repeated for evaluation. Pseudo-code:
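The split-then-evaluate workflow described above can be sketched as follows. This is hedged: it assumes the pytorch-era preprocess.split_raster and main.deepforest.evaluate signatures, the paths are hypothetical, and the argument defaults are illustrative rather than deepforest's documented defaults:

```python
import os

def evaluate_tile(raster_path, annotations_csv, crop_dir,
                  patch_size=400, patch_overlap=0.05, iou_threshold=0.4):
    """Split a large tile into crops, then evaluate on the crop annotations.

    A sketch of the workflow described above; defaults are illustrative.
    """
    from deepforest import main, preprocess

    # 1. Cut the orthomosaic into patches; split_raster remaps the
    #    tile-level annotations into per-crop coordinates.
    crop_annotations = preprocess.split_raster(
        path_to_raster=raster_path,
        annotations_file=annotations_csv,
        base_dir=crop_dir,
        patch_size=patch_size,
        patch_overlap=patch_overlap,
    )
    crop_csv = os.path.join(crop_dir, "crop_annotations.csv")
    crop_annotations.to_csv(crop_csv, index=False)

    # 2. Evaluate; each crop is now small enough for predict_image.
    model = main.deepforest()
    model.use_release()
    return model.evaluate(csv_file=crop_csv, root_dir=crop_dir,
                          iou_threshold=iou_threshold)
```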
|
@bw4sz How do I download weights for the "Alive" and "Dead" labels? |
Hi,
Since the pytorch release, I'm having an issue with the tensorflow version. I tried using my script to make a prediction, but when loading the model I got an error. I tried to reload the model by changing the name in current_release.csv to force a new download, but I still got the same error. Thankfully I had an old backup virtual environment; I replaced the NEON.h5 file with the one from that old env and it worked.
Here is an example of my error:
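The "file signature not found" message is HDF5's way of saying the file does not start with the fixed 8-byte HDF5 magic number, which usually means a truncated or corrupted download (for example, an HTML error page saved as NEON.h5). A quick way to check a downloaded file before loading it:

```python
# HDF5 files begin with this fixed 8-byte signature.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"

def looks_like_hdf5(path):
    """Return True if the file starts with the HDF5 magic number.

    A failed or interrupted download will fail this check and trigger
    the "file signature not found" error when the model is loaded.
    """
    with open(path, "rb") as f:
        return f.read(len(HDF5_MAGIC)) == HDF5_MAGIC
```

If the check fails, re-downloading the release asset (or, as above, restoring a known-good copy from a backup environment) resolves the error.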