[Package Validation] We should support excluding tfms when running the validation #22901

Closed
safern opened this issue Dec 3, 2021 · 14 comments · Fixed by #36669
Comments

@safern
Member

safern commented Dec 3, 2021

In dotnet/runtime we have been following a convention with our packages where we drop TFMs that are out of support. This results in a lot of errors that need to be suppressed because we are intentionally dropping these TFMs.

Also, when this happens in baseline validation, if an asset is dropped, the validation picks another asset in the new package that is compatible with that TFM. I.e., if we had netcoreapp3.1 and netstandard2.0 assets and we drop netcoreapp3.1, it will now compare the netcoreapp3.1 asset from the baseline package against the netstandard2.0 asset from the new version. Because the API surface might be fairly different, this results in a lot of errors; that comparison is often not valuable and can become very noisy.

I suggest we add support for something like an ItemGroup to specify TFMs to ignore in baseline validation, i.e.:

<PackageValidationBaselineIgnoreTfm Include="netcoreapp3.1" />

Or we could introduce a flag to ignore out-of-support TFMs, using the SDK's list of which TFMs are out of support, i.e.:

<PackageValidationBaselineIgnoreEolTfms>true</PackageValidationBaselineIgnoreEolTfms>
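
To make the two options concrete, here is how they could look in a project file. This is a sketch only; neither name exists yet, they are the proposals above:

<ItemGroup>
  <!-- Option 1 (proposed): explicitly list TFMs to skip during baseline validation. -->
  <PackageValidationBaselineIgnoreTfm Include="netcoreapp3.1" />
</ItemGroup>

<PropertyGroup>
  <!-- Option 2 (proposed): a single switch driven by the SDK's list of EOL TFMs. -->
  <PackageValidationBaselineIgnoreEolTfms>true</PackageValidationBaselineIgnoreEolTfms>
</PropertyGroup>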

cc: @ericstj @joperezr

@dotnet-issue-labeler bot added the Area-ApiCompat and untriaged (request triage from a team member) labels Dec 3, 2021
@safern safern removed the untriaged (request triage from a team member) label Dec 3, 2021
@safern safern added this to the 7.0.1xx milestone Dec 3, 2021
@joperezr
Member

joperezr commented Dec 3, 2021

While I agree this will become more common as time goes on and TFMs continue to go out of support, I have a few thoughts on the design of the feature. Here are some of the things on my mind:

  • Technically we already support this by baselining the error in the suppression file, and you can easily add a comment on top of the specific suppression noting that it is due to the TFM going out of support. That said, because the suppression file is auto-generated, you might lose the comment if the file is regenerated later.
  • The problem with expressing this suppression via an item or property is that the consumer project might keep suppressions for a lot of TFMs that will no longer warn once the baseline version is updated and the error is gone.
  • Regarding your comment that this results in a lot of errors because the API surface might be fairly different, making the comparison not valuable: I actually think it is great that you basically get a preview of the impact that your users targeting the dropped TFM will see after they upgrade. For instance, we just dropped some TFMs from our dotnet/iot packages, and in our case PackageValidation helped a lot in measuring the kind of impact consumers would see once they upgraded the package.

In general, I'm fine with adding this feature. As for your options for executing it, I probably wouldn't have a flag like <PackageValidationBaselineIgnoreEolTfms>true</PackageValidationBaselineIgnoreEolTfms>, as that would add unnecessary complexity, such as:

  • How/When do we calculate if a TFM reached EOL?
  • How can we make builds reproducible across time if we do a live check? For instance, building a commit hash today versus building the same hash a year from now could log different warnings, which I'm not sure we want.

@safern
Member Author

safern commented Dec 4, 2021

Thanks for the input. I completely agree that it is helpful to get the errors and evaluate the impact, and that is why I would lean towards an ItemGroup, which in the end is like using NoWarn.

> Technically we already support this by baselining the error in the suppression file, and you can easily add a comment on top of the specific suppression noting that it is due to the TFM going out of support. That said, because the suppression file is auto-generated, you might lose the comment if the file is regenerated later.
>
> Regarding your comment that this results in a lot of errors because the API surface might be fairly different, making the comparison not valuable: I actually think it is great that you basically get a preview of the impact that your users targeting the dropped TFM will see after they upgrade.

The problem is not the error that the TFM was dropped; the problem is the fallback to the latest compatible asset when an asset was dropped, as in the example I provided. Say we have a netcoreapp3.1 asset with an API A.B that is only part of netcoreapp3.1+, and we also have a netstandard2.0 asset which doesn't contain that API. If the new version of the package drops the netcoreapp3.1 asset, baseline validation will compare the netcoreapp3.1 asset from the old package against the netstandard2.0 asset in the new package, since that is the best compatible asset for netcoreapp3.1. That results in a PKV0006 warning, and it also results in a CP0002 warning for the A.B API. Those A.B warnings can get pretty noisy in the suppression file, so instead of suppressing all the APIs that were lost because of the dropped asset, I would just declare:

<PackageValidationBaselineIgnoreTfm Include="netcoreapp3.1" />

> The problem with expressing this suppression via an item or property is that the consumer project might keep suppressions for a lot of TFMs that will no longer warn once the baseline version is updated and the error is gone.

I don't imagine this feature being on by default, but rather an intentional opt-in from the user via the property or the item. The workflow would be: I drop the TFM, I see the errors I get, I decide the impact makes sense, and rather than writing a lot of suppressions into the suppressions file, I just include the TFM in the item group.
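
To illustrate the contrast: each API lost through the fallback comparison would otherwise need its own auto-generated entry in CompatibilitySuppressions.xml, roughly of this shape (an approximate sketch of one entry; the target member is a made-up example):

<Suppression>
  <!-- One entry like this per API missing from the fallback asset. -->
  <DiagnosticId>CP0002</DiagnosticId>
  <Target>M:A.B.SomeMethod</Target>
  <Left>lib/netcoreapp3.1/MyLib.dll</Left>
  <Right>lib/netstandard2.0/MyLib.dll</Right>
  <IsBaselineSuppression>true</IsBaselineSuppression>
</Suppression>

A single ignored-TFM item would replace all of those entries at once.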

> How/When do we calculate if a TFM reached EOL?

I believe the SDK has a list of EOL TFMs for the current .NET SDK version; i.e., the .NET 6 SDK knows about 1.0, 1.1, 2.0 and 2.1, and the .NET 7 SDK will have those plus 3.1 and 5.0. The .NET SDK already warns if you target an EOL TFM, which is why I was thinking we could benefit from that list. But now that I think about your comments, it makes more sense for the user to explicitly list the TFMs to ignore, as that is more intentional.

@joperezr
Member

joperezr commented Dec 4, 2021

> The problem is not the error that the TFM was dropped; the problem is the fallback to the latest compatible asset when an asset was dropped, as in the example I provided.

Yeah, but that is what I mean: this really shows you what your customers will see if they target the dropped framework. For example, in your particular case you may have dropped netcoreapp3.1 thinking there wouldn't be many fallbacks, but getting all of these warnings (like the CP0002 on A.B) tells you exactly what folks who target netcoreapp3.1 and upgrade to your new package will see. Those warnings are especially relevant when a dropped asset results in a different asset being picked, because you may think you still support a TFM without realizing all the issues that come with that. Anyway, I think it is fine to have a feature that lets you opt out of those diagnostics if you don't care; I just don't think they are noise.

@dotMorten

It would also be nice if we could override the baseline TFM. I.e., if I drop net5.0 and add net6.0, I could tell the baseline validator to compare my current net6.0 against the old net5.0 APIs and ensure that when people upgrade, the only breaking change is the TFM itself.
This is especially the case for Xamarin.Android and Xamarin.iOS, where we are now moving towards net6.0-android and net6.0-ios but otherwise want to make sure the API didn't actually change.
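
A sketch of what such an override could look like; the item and metadata names here are invented purely to illustrate the request:

<ItemGroup>
  <!-- Hypothetical item: validate the current net6.0 asset against the baseline's net5.0 asset. -->
  <PackageValidationBaselineTfmOverride Include="net6.0" BaselineTfm="net5.0" />
</ItemGroup>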

@joperezr
Member

joperezr commented May 26, 2022

Mmm, that feels like a fairly specific use case to me, so I'm not sure this is something that should live inside package validation. There are two different layers to the Package Validation feature today:

  • The PackageValidation layer, which understands NuGet packages, knows how to parse them, and, given two packages, knows which assemblies should be compared against each other for compatibility. It then calls into the second layer, which is...
  • ApiCompat itself. This isn't aware of any NuGet concepts; it only takes two inputs, a left and a right, and knows how to check whether the two are compatible.

Today, this feature is only exposed via PackageValidation, but the plan (and I believe this is still in scope for .NET 7, cc @ViktorHofer) is to also ship MSBuild tasks and targets that expose the second layer directly, so it can be called from users' targets. I think your scenario is a special case, so instead of adding this feature as a characteristic of the first layer (package validation), I think we should enable you to easily run ApiCompat via targets directly; you can then select whichever two assets you want from two different packages to perform the comparisons. @dotMorten thoughts?
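
A rough sketch of what calling that assembly-level layer from a user's own targets might look like; the task name and its parameters below are placeholders for illustration, not a shipped API:

<!-- Hypothetical target: the "ApiCompat" task name and its parameters are illustrative only. -->
<Target Name="CompareAcrossDroppedTfm" AfterTargets="Pack">
  <ApiCompat
    LeftAssembly="$(BaselinePackageContentDir)lib/net5.0/MyLib.dll"
    RightAssembly="$(OutputPath)MyLib.dll" />
</Target>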

@dotMorten

dotMorten commented May 26, 2022

That could probably work, but I'm not sure I agree it is specific. Dropping older targets and replacing them with new ones in major versions is pretty common.

@joperezr
Member

I guess that is fair. I probably chose my words incorrectly; my thought was more that doing so is effectively breaking your previous package, whether that is common or not, and to me the purpose of the Package Validation layer was to be objective and find all breaking changes between two packages. That said, your point that dropping support for some frameworks is becoming more common is valid, so I think it is fine to consider adding this extensibility point.

@dotMorten

dotMorten commented May 26, 2022

Example: net461 is just now going out of support. Hopefully all my customers have already moved on, so it should be more or less safe to drop net461 and target net462 in my package instead. But how do I make sure that the only inconvenience I introduce is the new minimum TFM, without breaking anything else? The package validation tool is still super important here. It currently falls back to validating against the included netstandard2.0 asset, which is rather incorrect, as most consumers would be using the net462 assembly instead (and the netstandard one often has fewer APIs exposed). I'm finding myself with a mess of a suppression file and, for this entire release, not a whole lot of assurance that I'm not introducing any other breaking changes.

So it would be nice if I could just configure the tool to compare the current net462 asset against the net461 baseline. I believe that would also take care of the scenario initially presented here, but with much better validation that nothing else changed. And for anything I do change, I'll have the suppression file to use as a basis for documentation along the lines of "hey, this is what we removed in the new version when you target the new TFM".

@ericstj
Member

ericstj commented Sep 19, 2023

I'm curious if the tuple you're talking about would already be present in the comparison?

Suppose the package previously had netstandard2.0;net5.0 and the new package has netstandard2.0;net6.0.
I would expect the tool to do the following baseline comparisons:
netstandard2.0¹ -> netstandard2.0² (as if the user was targeting netstandard2.0)
net5.0¹ -> netstandard2.0² (as if the user was targeting net5.0)
net5.0¹ -> net6.0² (as if the user was targeting net6.0)

So I think your comparison should be covered in that case, and you're just saying you want to ignore the net5.0¹ -> netstandard2.0² one, since your new package no longer supports net5.0. I still think the tool should raise it, since you are breaking the customers who move. It'd be nice if the tool provided a succinct way to communicate that you no longer need that tuple -- that's what this issue is about.

Then there's a different shape. Suppose the package previously had netstandard2.0;net5.0;net6.0 and the new package has netstandard2.0;net6.0.

You wouldn't get the net5.0¹ -> net6.0² comparison here as part of the baseline check, but you presumably did the regular compat check when you shipped the first package (net5.0¹ -> net6.0¹), and then you're doing a net6.0¹ -> net6.0² baseline check on the new package, so you get the net5.0¹ -> net6.0¹ -> net6.0² comparison transitively.

I do think an interesting advanced feature for APICompat might be to both add and remove comparison tuples. I could see such a feature taking the form of describing the tuples as "calculate the applicable TFM for this left framework (optionally from the baseline package), and this right framework". Another scenario I can see this helping with is migration from a package that supported .NETFramework to .NET.
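
Sketching what adding and removing tuples might look like; the item and metadata names below are invented purely for illustration:

<ItemGroup>
  <!-- Hypothetical: request an extra left -> right comparison. -->
  <ApiCompatAddComparison Include="net5.0" RightFramework="net6.0" />
  <!-- Hypothetical: drop a default left -> right comparison. -->
  <ApiCompatRemoveComparison Include="net5.0" RightFramework="netstandard2.0" />
</ItemGroup>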

@dotMorten

Here's a much simpler case:
My v1.0 library only targets net6.0. Now that net8.0 is going LTS, I'm only going to support net8.0 going forward in v2.0.
However, I do want to ensure that the only thing I break is the TFM minimum requirement. I want to make sure I don't remove or change any APIs in the library.

That means a user who already targets .NET 8 and uses v1.0 can easily move to v2.0 and know that nothing will break. Yes, a user targeting .NET 6/7 will also have to change their TFM, but again, they should have confidence that that is it, and I should have confidence nothing else broke.

Dropping older, unsupported target frameworks is just something we will all be doing going forward. It would be nice to easily check that this isn't causing any other issues, but currently, every time we make a TFM change, we basically have to disable baseline validation completely, risking that we break something (this actually happened in the .NET MAUI SDK when moving from net6 to net7, and it wasn't caught until I smashed into it and reported it!).

ViktorHofer added a commit that referenced this issue Nov 6, 2023
Fixes #22901

Add frontend option to allow ignoring TFMs when performing baseline
package validation.
@dotMorten

Thanks for working on this.
However, would it be more useful to redirect instead of ignore?
I.e., if we have a package with netstandard2.0 and net6.0, and in the next version we drop net6.0 and add net8.0, I want the validator not to ignore net8.0 but rather compare it against net6.0, since that's the asset I expect people to be moving up from, and I want to be absolutely sure I didn't break anything except the newer TFM requirement.

@ViktorHofer
Member

ViktorHofer commented Nov 10, 2023

We still validate compatible TFMs inside the current package; i.e., the net8.0 assembly would be validated against the netstandard2.0 assembly. If you are concerned about public API that is only defined in .NETCoreApp, I would recommend targeting multiple TFMs in your package.

I.e. in dotnet/runtime we target the following TFM properties: https://github.com/dotnet/arcade/blob/576b0a6fca70266087aa247d68896376ecec8c6e/src/Microsoft.DotNet.Arcade.Sdk/tools/TargetFrameworkDefaults.props#L12-L21

While you probably don't want to target three .NETCoreApp TFMs, I would recommend targeting the minimum officially supported .NETCoreApp version (for breadth) and the latest .NETCoreApp version (for max perf).
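
For illustration, a package following that recommendation might multi-target like this (the specific TFMs are only an example):

<PropertyGroup>
  <!-- Oldest officially supported .NETCoreApp for breadth, latest for perf, plus .NET Standard. -->
  <TargetFrameworks>netstandard2.0;net6.0;net8.0</TargetFrameworks>
</PropertyGroup>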

@dotMorten

dotMorten commented Nov 10, 2023

My point is that it's normal to drop out-of-support targets. We dropped 3.1 a few years ago, and by the end of the year net6.0 will be out of support and we'd naturally move to net8.0. By that time, users of my package will likely have already moved to net8.0, and hopefully that version is a smooth upgrade with no breaking changes besides the TFM increase. However, there's no way to have the validator ensure that today. With the PR merged, it just becomes simpler to ignore all the APIs that might be additional to what's in netstandard, but there is no way to check whether the net8.0 target is missing a member net6.0 had.

So while I appreciate the work done here and the simplification, it doesn't actually solve an issue that wasn't already solvable.

@ViktorHofer
Member

I definitely get your point. This happens quite often, yes.

I was trying to provide a recommendation on how to solve this with multi-targeting. I would generally recommend targeting at least two .NETCoreApp TFMs for the reasons stated above (breadth and perf). I expect this pattern to become more popular as people stop supporting .NET Standard.

Redirection is non-trivial, as it would swap the "left" and the "right". The left is usually the baseline assembly and the right is the current assembly; with redirection, net8.0 would become the baseline (left) and net6.0 the current (right).
