
Merge .NET Standard Engine back into the main solution #388

Closed · rprouse opened this issue Mar 23, 2018 · 16 comments
@rprouse (Member) commented Mar 23, 2018

I am worried about the code for the .NET Standard engine diverging from the full framework engine, so I would like to merge the source and projects back into multi-target csproj projects using the new Visual Studio 2017 project format (see the sketch after the list below). At the same time, or possibly in subsequent PRs:

  • Include the .NET Standard engine in the main engine's NuGet package and deprecate the old .NET Standard engine package.
  • Figure out how to do multi-targeting of engine addins?
  • Add a .NET Standard 2.0 engine for use in Xamarin and a possible future dotnet nunit.
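
For illustration, here is roughly what such a merged project file could look like. This is a sketch only; the target framework monikers shown are assumptions, not decisions:

```xml
<!-- Hypothetical multi-target engine project in the VS 2017 SDK format. -->
<!-- The TFMs below are illustrative, not the project's actual targets. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- One csproj builds both the full framework and .NET Standard engines -->
    <TargetFrameworks>net20;netstandard1.6;netstandard2.0</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup Condition="'$(TargetFramework)' == 'net20'">
    <!-- Remoting only exists on the desktop framework -->
    <Reference Include="System.Runtime.Remoting" />
  </ItemGroup>
</Project>
```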
rprouse added this to the 3.9 milestone Mar 23, 2018
rprouse self-assigned this Mar 23, 2018
@ChrisMaddock (Member) commented

Very excited for this!

Fancy splitting some of this out? I could have a look at the addins point separately? Sounds like fun!

@CharliePoole (Member) commented

For all practical purposes, we have two engine configurations right now: the primary engine, which starts everything running, and the secondary engine, which runs under an agent. The secondary engine has a desktop flavor and a .NET Standard flavor. It would really help to clarify which we are talking about and perhaps even have separate builds for the primary engine and the secondary one.

That secondary engine was originally a make-do solution I figured we would get rid of eventually. The code had all the same capabilities as the primary engine, but the features not used in a secondary engine were disabled by configuration. Frankly, leaving it like that was probably technical debt, and it led to confusion about what "engine" really means in our architecture. When Rob created the .NET Standard "engine" he removed most of the same capabilities that I had de-configured in the first place.

What I'm suggesting is that maybe the real distinction is not between .NET Framework and .NET Standard but between the primary engine and the secondary (sort of) engine. What if we made those two separate projects with shared code and used configs to split platforms as necessary in one or both of them?
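
To sketch one hypothetical shape for that split (the project name, paths, and constant here are all made up): two thin projects compiling a shared source tree, each declaring only the targets and features it needs.

```xml
<!-- Hypothetical nunit.engine.secondary.csproj. A sibling nunit.engine.csproj -->
<!-- would include the same shared sources plus the primary-only services. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>net20;netstandard1.6</TargetFrameworks>
    <!-- Compile primary-only features out, instead of de-configuring them -->
    <DefineConstants>$(DefineConstants);SECONDARY_ENGINE</DefineConstants>
  </PropertyGroup>
  <ItemGroup>
    <!-- Code common to the primary and secondary engines -->
    <Compile Include="..\engine.core\**\*.cs" />
  </ItemGroup>
</Project>
```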

@CharliePoole (Member) commented

Illustrating the last point with an example...

There is no need, at least as our architecture is designed right now, for the secondary engine to know anything about project formats. All projects are translated to a set of assemblies and settings by the primary engine.

Another... we don't permit a secondary engine to start a new process by design. So the Process Runner is not currently configured for secondary engines and never should be.

Glad to clarify more if needed.

@ChrisMaddock (Member) commented

> What I'm suggesting is that maybe the real distinction is not between .NET Framework and .NET Standard but between the primary engine and the secondary (sort of) engine. What if we made those two separate projects with shared code and used configs to split platforms as necessary in one or both of them?

I understand your points about primary/secondary engine, but I think splitting those into distinct assemblies is something of a separate issue?

My thought is that the .NET Standard engine currently only matches the 'secondary engine', as that was the bare minimum required at the time for the adapter and the easiest MVP to get released. Our long-term aim, in my opinion, should be to build the full engine with all functionality for .NET Standard, to allow the same testing experience across .NET Core and Xamarin platforms as on the full framework.

@CharliePoole (Member) commented

@ChrisMaddock I believe that the distinction between primary and secondary has to be made first, at least in our heads. That enables us to discuss exactly the point you bring up: which builds of the primary engine we need.

@rprouse (Member, Author) commented Mar 26, 2018

@CharliePoole I agree that we should make the distinction, if just for discussion. The .NET Standard version of the engine is what you are calling a secondary engine and I had previously talked about as "the parts of the engine used by the agent". I think I like secondary engine better and it eases discussion 😄

@CharliePoole (Member) commented

My original request was that it not be called the engine at all, but that ship has sailed. 😄

I agree that the .NET Standard engine build is equivalent to the secondary engine. But the engine that is incorporated with the agent in the .NET Framework build is also the secondary engine. It's just that the features are de-configured rather than not being present in the code. IMO, it's cleaner if they are simply not present.

WRT the primary engine... do you feel it needs to have a .NET Standard build? Are there platforms where developers want to run the "full" engine under (e.g.) .NET Core? If so, we probably need a .NET Standard 2.0 primary engine.

Assuming "yes" to the last question (we need a .NET Standard primary engine) what happens to the desktop build? Do we still need it, or can we limit the primary engine to running on Core 2.0 compatible platforms? (Actually, we can, so I guess the question is really do we want to.)

@ChrisMaddock (Member) commented Mar 27, 2018

My opinions:

> WRT the primary engine... do you feel it needs to have a .NET Standard build? Are there platforms where developers want to run the "full" engine under (e.g.) .NET Core? If so, we probably need a .NET Standard 2.0 primary engine.

I agree we do. At work, we're currently running NUnit tests with .NET Core on Linux. No Mono, so we're currently using the VS adapter, which lacks the functionality we have in the console.

> what happens to the desktop build? Do we still need it, or can we limit the primary engine to running on Core 2.0 compatible platforms?

My understanding is that the latter would drop our support for any platform earlier than .NET 4.6.1. (Correct me if I'm wrong!) I don't think that's a good idea, so we should instead be looking at two separate builds of the engine: one for .NET Standard (2.0?) and one for .NET Framework 2.0. I would expect that they can share the vast majority of code.

@CharliePoole (Member) commented

If the primary engine were .NET Standard 2.0 only, that would only drop in-process support for earlier platforms.

I believe the questions to consider, in order, are

  1. On what platforms do you want to support runners?

  2. Since the (primary) engine has to run on those platforms, what is the lowest level of .NET Standard it can target?

  3. On what platforms do you want to run tests? This determines how many kinds of agents are needed and what they are. In some cases, the same agent can be used for several platforms, as is now the case.

I'll enlarge on this when I get to my computer.

@CharliePoole (Member) commented

@ChrisMaddock Re: In-process Support

As an example, the GUI some of us are working on targets .NET 4.5. Obviously, you have to have .NET 4.5 installed in order to use it. If you use a menu setting or command-line option to force the code to run in-process, then it will run under .NET 4.5 if possible; otherwise you get an error. Of course, if you use the defaults, you get a separate process and the engine tries to figure out how the test assembly wants to be run. If it targets .NET 2.0, for example, it will try to run a process using .NET 2.0.

The consequence is that a developer for .NET 2.0 wanting to use the GUI has to have both .NET 2.0 and 4.5 or higher installed on their machine. This does not seem like too much of a burden.

The same logic could be applied to the primary engine. If so, every runner using the engine would inherit the same restriction. For example, if the engine required .NET 3.5+, then every runner that used it would require 3.5 or higher too. WRT 3.5, that's probably a decision we could get away with. For higher levels of .NET, I'm not sure but we should discuss it.

This indicates (I hope) more clearly why I think the engine and the piece of code that is bundled with the agent (the "partial" or "secondary" engine) are really two entirely different things. If you increase the required runtime for the engine, you limit how tests may be run. If you increase it for the agent, you limit which tests may be run.
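
As a concrete aside on "the engine tries to figure out how the test assembly wants to be run": on the desktop framework, the relevant metadata can be read without executing the assembly. This snippet only illustrates that kind of probe; it is not the engine's actual code:

```csharp
// Self-contained illustration (not engine code): reading the CLR version an
// assembly was built against, the kind of metadata a runner consults before
// choosing which runtime to launch.
using System;
using System.Reflection;

class RuntimeProbe
{
    static void Main(string[] args)
    {
        // ReflectionOnlyLoadFrom inspects metadata without executing the assembly.
        Assembly testAssembly = Assembly.ReflectionOnlyLoadFrom(args[0]);

        // Prints e.g. "v2.0.50727" for a .NET 2.0 assembly, "v4.0.30319" for 4.x.
        Console.WriteLine("Image runtime: " + testAssembly.ImageRuntimeVersion);
    }
}
```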

@CharliePoole (Member) commented

@ChrisMaddock

FWIW, here's the sequence of development steps I had planned for the engine at one point in time. They are only for illustrative purposes and I'm not advocating that the NUnit Project should do it this way.

  1. Eliminate use of the engine per se by the agents. Instead, create a piece of code that includes everything needed to run in process. This is basically the DirectRunner, the logic to identify and select a driver, and the drivers themselves (a rough sketch of driver selection follows this list). A starting point would be to remove the code that is already de-configured within the agent (e.g. services that are not initialized and runners that are never used).

  2. Make a very important decision: Should we continue to support in-process execution at all?
    2.1 If not, remove all the code that is used for in-process execution from the engine. That's approximately the same code that was added to the agent in step 1.
    2.2 If yes, share the code from step 1 either as a separate assembly or a Git submodule.

  3. Start developing more agents that know how to run code in other (non-.NET Framework) targets.
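
To make the driver-selection piece of step 1 concrete, here is a hedged sketch. The names are illustrative, and the real engine discovers drivers through extension points and reads references with Mono.Cecil rather than reflection:

```csharp
// Hypothetical sketch of driver selection: pick a driver based on which
// test framework the candidate assembly references.
using System;
using System.Linq;
using System.Reflection;

static class DriverSelector
{
    public static string SelectDriver(string assemblyPath)
    {
        // Inspect the assembly's references without executing its code.
        AssemblyName[] refs = Assembly.ReflectionOnlyLoadFrom(assemblyPath)
                                      .GetReferencedAssemblies();

        AssemblyName nunitRef = refs.FirstOrDefault(r => r.Name == "nunit.framework");
        if (nunitRef != null)
            return nunitRef.Version.Major >= 3
                ? "NUnit3FrameworkDriver"
                : "NUnit2FrameworkDriver";

        throw new InvalidOperationException("No suitable driver for " + assemblyPath);
    }
}
```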

I realize that NUnit has started down a different path, but the above was my approach and was never shared generally, so I thought I'd throw it up against the wall here for your benefit. 😄

@CharliePoole (Member) commented

@ChrisMaddock BTW, the agents we now have are of a certain type: what I called "process agents." They run the tests in a separate process on the same machine. At one point, I think we had code for in-process agents, which turned out not to be needed; that could be revived for any platform that can run in-process. We also had projected agents that would run on separate machines and devices. That explains why the engine gets an agent first and then asks the agent for a runner, rather than doing it in one step, which might seem odd if you don't know about the intent of having multiple agent types.
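
In code, that two-step acquisition has roughly this shape (a simplified sketch; the actual TestAgency and ITestAgent signatures differ in detail):

```csharp
// Getting an agent first, then asking it for a runner, leaves room for agent
// types beyond local process agents (in-process, remote machine, device, ...).
static ITestEngineRunner AcquireRunner(TestAgency testAgency, TestPackage package)
{
    ITestAgent agent = testAgency.GetAgent(package); // step 1: obtain an agent
    return agent.CreateRunner(package);              // step 2: ask it for a runner
}
```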

@rprouse (Member, Author) commented Mar 28, 2018

Lots of good conversation and ideas here, sorry I'm swamped at work and only have time for a quick update.

As @ChrisMaddock said earlier, I do think we will eventually want a full .NET Standard engine. As he also said, we will also likely need a .NET 2.0/3.5 version, unless we drop support for in-process execution, which I wouldn't be against. The issue at the moment is that remoting is not supported on .NET Standard, .NET Core, or most target platforms other than the desktop framework, so we would need to replace it.
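
Until a replacement exists, a merged multi-target codebase would have to fence off the remoting-dependent pieces per target. A minimal sketch, assuming a net20/netstandard2.0 split and an illustrative class name:

```csharp
// Illustrative only: conditional compilation around remoting-dependent code.
// NET20 is the symbol the SDK defines for a net20 target in the new csproj format.
using System;
#if NET20
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
#endif

static class AgentChannel
{
    public static void Initialize()
    {
#if NET20
        // Desktop-only: register the TCP channel used to talk to agents.
        ChannelServices.RegisterChannel(new TcpChannel(0), ensureSecurity: false);
#else
        // No remoting here: a different transport (sockets, pipes, ...) is needed.
        throw new PlatformNotSupportedException(
            "Agent communication needs a non-remoting transport on this platform.");
#endif
    }
}
```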

My plan for this issue was to take the first step of bringing the code back together into a single solution and codebase, then start taking the steps that are being discussed in this issue. Maybe we should create a GitHub project (to replace ZenHub Epics) for this to track everything over time and this can be the Epic issue to collect the design and discussion?

@ChrisMaddock (Member) commented Mar 29, 2018

Thanks for elaborating on all this Charlie. 🙂

Dropping in-process support for certain platforms is an interesting thought. My main question is how much we'd actually gain by dropping 'full runner support' for .NET 2.0/3.5, given...

  • The communications protocol must be supported by both primary engine and agent, meaning (I believe) that it must be available on the lowest platform. (.NET 2.0 is our current pain point here.)

  • In-process running is our 'go-to' debugging method for agent crashes. Would losing that debug tool be a risk? Of course, the older platforms are lesser used - especially via the console.

  • A sizable chunk of our code (everything currently used by the agent) would still need to be built for the old platforms. The console itself does nothing complicated; how much of the code that is purely in the 'primary engine' would actually benefit from being compiled for a single platform?

  • Mono.Cecil is currently used in the drivers - unless we can refactor that away, we still have that dependency issue.

  • Multi-targeting with the new csproj format is trivial compared to what it used to be.

> My plan for this issue was to take the first step of bringing the code back together into a single solution and codebase, then start taking the steps that are being discussed in this issue. Maybe we should create a GitHub project (to replace ZenHub Epics) for this to track everything over time and this can be the Epic issue to collect the design and discussion?

I'm a fan of both of these ideas.

@CharliePoole (Member) commented

@ChrisMaddock FWIW, I think relying on a single communications protocol is a mistake. Ideally, the protocol used to link things together should be abstracted as a separate sublayer. Right now we use remoting and the necessary infrastructure exists in TestAgency and in the agent, but not well separated from the actual logic of running tests. I'd factor out the connection into a separate object, which could then more easily be replaced. That would make it much easier to experiment with different protocols.

A bonus here would be that the same object structure might be usable at the driver level as well as the runner level.
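
A hedged sketch of what that factored-out connection object might look like (hypothetical names; nothing like this exists in the codebase today):

```csharp
// Hypothetical connection abstraction: a remoting-based implementation would
// wrap today's TestAgency plumbing, while experimental implementations could
// use sockets or named pipes without touching test-execution logic. The same
// shape might also serve at the driver level, as noted above.
using System;

public interface IAgentConnection : IDisposable
{
    // Establish the link between runner and agent.
    void Connect();

    // Send a serialized command and block for the reply; the wire format is
    // deliberately left unspecified by this abstraction.
    string Send(string message);
}
```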

@ChrisMaddock (Member) commented

Completed in #400 😄
