@rprouse commented on Mon May 04 2015
Visual Studio 2015 and Roslyn introduced code analyzers and the concept of code-aware libraries. There are many ways that users can incorrectly use NUnit, so we could provide analyzers to detect incorrect usage and suggest fixes. This would help users become more productive and reduce the number of errors reported because features are incorrectly used.
If we do this, it should be a separate analyzer assembly that is a dependency of the framework NuGet package.
Some things we could check:
- Constructors and test methods are public
- TestCase attributes have the correct number and type of arguments for the given method
- If a test method has a return type, the test attribute has ExpectedResult set
- If we `Assert.That(x, Is.Not.Null)` and variants, x is a reference type
- Classes that contain tests are marked with the TestFixture attribute
- Usage of `TestCaseSource` within the same class (see #320)
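To make these concrete, here is a hedged sketch of the kinds of user code such analyzers would flag (the fixture and method names are invented for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class AccountTests
{
    // Would be flagged: test methods must be public.
    [Test]
    private void PrivateTest() { }

    // Would be flagged: the [TestCase] arguments don't match the method signature.
    [TestCase(1, 2)]
    public void Adds(int a, int b, int expected) { }

    // Would be flagged: the method returns a value but ExpectedResult is not set.
    [TestCase(2, 3)]
    public int Sum(int a, int b) => a + b;

    // Would be flagged (warning): 5 is a value type, so Is.Not.Null can never fail.
    [Test]
    public void ValueTypeNullCheck() => Assert.That(5, Is.Not.Null);
}
```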
Good idea? Are there other improper usages that cannot be caught at compile time that we should consider?
@CharliePoole commented on Mon May 04 2015
I like the idea. The key question is when do we do it - before or after 3.0. Clearly any time we spend on it is taken away from other features, so how important is it?
Moving ahead, I think it could be a separate project under the nunit organization.
Currently, we detect most of the errors you describe at runtime. I think we should continue to do that, marking the tests as non-runnable. It has always been a principle of NUnit that we treat anything you try to mark as a test as a test, giving an error if it's not quite right. That's definitely better than silently ignoring the errors at runtime.
Comments on the errors listed:
- Constructors and test methods are public
That makes sense. The issue is what's a test method and what's a test fixture. We don't want to fall into the trap of some third-party runners, maintaining separate code for identifying tests. Ideally, we would use NUnit itself to do the identification, but that requires an assembly.
- TestCase attributes have the correct number and type of arguments for the given method
Yes
- If a test method has a return type, the test attribute has ExpectedResult set
Yes
- If we `Assert.That(x, Is.Not.Null)` and variants, x is a reference type

Should be a warning, unless NUnit is modified to give an error. Currently, such a test "works" - that is, `Assert.That(5, Is.Not.Null)` succeeds.
- Classes that contain tests are marked with the TestFixture attribute
It's optional for most kinds of tests, so that one doesn't make sense to me.
- Usage of `TestCaseSource` within the same class (see #320)

Definitely! Should be an error if there is no default constructor, otherwise a warning. When we deprecate certain patterns that can't be marked with [Obsolete], this would be the way to do it.
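For reference, the pattern in question looks roughly like this (a sketch; in NUnit 3 the source member referenced by `TestCaseSource` must be static, which is the kind of rule an analyzer could check at edit time):

```csharp
using System.Collections;
using NUnit.Framework;

[TestFixture]
public class DivideTests
{
    // Source in the same class. Making this non-static (or removing the
    // fixture's default constructor) is the sort of mistake an analyzer
    // could flag before the test ever runs.
    private static IEnumerable DivideCases
    {
        get
        {
            yield return new TestCaseData(12, 3).Returns(4);
            yield return new TestCaseData(12, 4).Returns(3);
        }
    }

    [TestCaseSource(nameof(DivideCases))]
    public int DivideTest(int n, int d) => n / d;
}
```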
Others:
- Test Fixtures with no tests
- Test methods with no asserts
- TestFixture on abstract classes
Maybe we should set up a spec page for this in the dev wiki.
@rprouse commented on Tue May 05 2015
I think that this is a post-3.0 release task; I just wanted to document it. I agree that it should probably be in a separate repository, and with your other comments. I will update the labels and milestone and we can come back to this later.
@rprouse commented on Tue May 26 2015
Another candidate would be testing the correct usage of the Range attribute. See #472
@CharliePoole commented on Sun Jul 03 2016
@rprouse This issue came up in a discussion with @ChrisMaddock who is interested in working on it. I can assign it to him if you have no objection, but I think we ought to clarify some stuff first.
From reading back, it seems that your vision was of something we would distribute to users. Would this be a standalone program? Or would it plug into some existing static analyzer package as an extension? If that's the case, would we expect to support more than one analyzer? Or is all this stuff - as I expect - what we need to figure out?
I guess this ended up on our Backlog when I eliminated the Future milestone last year. Based on how we categorize things nowadays, this seems like an "idea" trying to become a "feature", and I relabeled it accordingly. Since @ChrisMaddock has started experimenting with this, he would have the job of shepherding the idea on the way to feature-hood, but I think having a bit more about what you imagined as a way of delivering this feature would help him.
Of course, this is likely to end up as an entirely separate project from NUnit, but right now this seems like the best place to discuss it.
@ChrisMaddock commented on Sun Jul 03 2016
Thanks for writing this up @CharliePoole. :-)
Presuming Rob's talking about what I think he is - Roslyn analyzers are essentially 'plug-ins' to Roslyn, that can provide users with warnings/code refactorings in Visual Studio - inline, and prior to compilation. Hence why NUnit would be a great fit - working via reflection means we currently can't detect a lot of user errors until runtime. Distribution is by nuget package, or vsix. Wrapping up framework and analyzer in a single package seems to be the suggested method when already distributing via nuget package - although we'd need to decide (I imagine much later on!) if this is in the default framework packages, or additional ones.
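As a rough illustration of the mechanism (a minimal sketch only - the diagnostic id, messages, and the naive string match are invented here, not the actual nunit.analyzers code, which would resolve symbols via the semantic model), a Roslyn analyzer is a class that registers callbacks the compiler invokes as code is edited:

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class ClassicAssertAnalyzer : DiagnosticAnalyzer
{
    // Hypothetical rule; real ids and messages would come from the project itself.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "NUNIT9999",
        title: "Consider the constraint model",
        messageFormat: "Use Assert.That(condition, Is.True) instead of Assert.IsTrue(condition)",
        category: "Usage",
        defaultSeverity: DiagnosticSeverity.Info,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context) =>
        context.RegisterSyntaxNodeAction(Analyze, SyntaxKind.InvocationExpression);

    private static void Analyze(SyntaxNodeAnalysisContext context)
    {
        var invocation = (InvocationExpressionSyntax)context.Node;
        // Naive textual check for brevity; real analyzers check the bound symbol.
        if (invocation.Expression.ToString() == "Assert.IsTrue")
            context.ReportDiagnostic(Diagnostic.Create(Rule, invocation.GetLocation()));
    }
}
```

A matching code-fix provider (a separate class) would then offer the automatic rewrite in the light-bulb menu.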
An additional area this could be really useful is in NUnit 2 -> 3 updates - I've lost count of the number of times I've find/replace'd IsStringContaining()! Using static analyzers, this could be a one-click operation to adjust across the whole solution.
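For example, the NUnit 2-style string constraint and its NUnit 3 replacement (`Does.Contain` is the documented NUnit 3 equivalent; the variable name here is invented):

```csharp
// NUnit 2.x style - deprecated in NUnit 3:
Assert.That(log, Is.StringContaining("error"));

// NUnit 3 equivalent that a code fix could apply solution-wide:
Assert.That(log, Does.Contain("error"));
```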
I'll try and get a couple of the simpler examples up soon, and we can see what we're looking at.
@rprouse commented on Sun Jul 03 2016
@ChrisMaddock You are thinking what I am thinking.
Feel free to grab the issue and run with it. I think we will want these in a separate repository unless we plan to ship them in the Framework NuGet package. When they were introduced at Build a couple of years ago, the idea was that they would be bundled with libraries. It is a cool idea to do it that way, but I also don't like forcing them on people. Then again, they just provide code fixes.
For now, maybe start in a personal repository and we can discuss where they belong?
@ChrisMaddock commented on Sun Jul 03 2016
Sounds like a plan, will try a few. :-)
@rprouse commented on Sat Nov 19 2016
@JasonBock, this is the issue that contains other ideas for the Roslyn analyzer. I will assign it to you once you accept the team invitation. We might want to move it to the new repo once you start working there and convert it to a ZenHub epic or split it into child issues.
@JasonBock commented on Fri Nov 25 2016
I'm really close to getting a first commit done, should be done tomorrow :)
One quick comment on one idea @CharliePoole had mentioned before....
"Test methods with no asserts" - It's not uncommon for me to write unit tests where I'm using a mocking framework and I don't have any `Assert` assertions; my assertions are in the calls that verify the mock was used correctly - e.g. `myMockObject.VerifyAll()`. So I'm not sure this is something that can be enforced all the time.
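For instance, with a mocking framework such as Moq (used here purely for illustration; `IAuditLog`, `OrderService`, and `Order` are hypothetical types), a perfectly reasonable test can contain no Assert at all:

```csharp
using Moq;
using NUnit.Framework;

[Test]
public void SavingAnOrderWritesToTheAuditLog()
{
    var audit = new Mock<IAuditLog>();
    audit.Setup(a => a.Record("order-saved"));

    var service = new OrderService(audit.Object);
    service.Save(new Order());

    // The "assertion" is the mock verification - no Assert.* call anywhere.
    audit.VerifyAll();
}
```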
@godrose commented on Fri Nov 25 2016
It looks like all failure points can be reduced to 2 cases: assertions which are performed upon the SUT, and verifications that are performed upon a mock.
Can the latter be mapped with a regex? Possibly with specific statements for each isolation framework.
@JasonBock commented on Fri Nov 25 2016
@godrose it could be, if we're willing to keep track of all mocking frameworks and how they work, which wouldn't be trivial. Also, what if a team decides to use some helper methods to do assertions and then you wouldn't know that the test is "correct" because it is doing assertions, just indirectly. Or what if the developer is using another assertion framework like FluentAssertions? I don't think an analyzer can be written that would cover all the cases effectively - the scope is just too broad.
@godrose commented on Fri Nov 25 2016
Certainly. You don't have to cover all cases, just the most important ones - like you would in a product case. You don't solve all use cases when you first release a product to the market; you only hit the ones that maximize the value for the client.
As to usage inference, I absolutely agree with you. I don't use NUnit assertions at all, preferring FluentAssertions instead. Helper methods are out of the question unless you want to compile the delegates/reflected info to expressions and so on.
Just consider the cost and the value for one particular case and see if it's worth a shot.
@JasonBock commented on Fri Nov 25 2016
@godrose, agreed, and that's kind of what I'm getting at. There is no solid use case for this particular analyzer request. It's just too broad, and I think time can be used more wisely addressing other cases.
@godrose commented on Fri Nov 25 2016
Fair enough. Good luck in developing the project!
@CharliePoole commented on Fri Nov 25 2016
@JasonBock I may have the wrong idea about what you are implementing. My assumption was that the user or team would get to choose which checks to apply using some sort of settings dialog or file. If that's the case then a simple check for asserts is feasible, but if there are no choices available to the user then only things that are always wrong can be flagged.
I guess it will be clearer when there's code to look at.
@JasonBock commented on Fri Nov 25 2016
@CharliePoole VS allows the user to turn analyzers on and off and change their severity (error, warning, info) from the default if they want. But there's no way (or no easy way) to change what the analyzer does with configuration. And yes, once I get the initial code in, everyone can try them and give feedback. I WILL have something in soon, promise!
@JasonBock commented on Fri Nov 25 2016
Finally pushed the code:
https://github.com/nunit/nunit.analyzers
So....I think the next step is for team members and others to review it and provide feedback/issues/etc. I can address those and fix any issues, and then I think getting stories and tasks in ZenHub in place for the rest of the analyzers so they can be tracked easier, rather than lumping them all into one issue here.
@CharliePoole commented on Sat Nov 26 2016
I'd like to exercise this but I could use some instructions. The readme takes me through building and installing, but not usage.
@JasonBock commented on Sat Nov 26 2016
@CharliePoole ...I think you're asking for better instructions in the readme file :). I'll get on that.
@JasonBock commented on Sat Nov 26 2016
OK, the readme file now has some basic build instructions. Let me know if you need more.
@CharliePoole commented on Sat Nov 26 2016
Wow! I just read my message. Autocorrect gone mad! Just updated it. Basically, I don't know how one uses these "analyzers" once they are installed. However I found some online videos. Eventually, there should probably be at least a paragraph that tells people what happens after they install it.
Welcome to NUnit @JasonBock !
@OsirisTerje commented on Sat Nov 26 2016
Just cloned it and built, and was rewarded by this:
It seems the build looks under the tools folder for the .ps1 files, but they are not there. I copied them from the packages subfolder and it builds, but they should be added - I hope the copies are the same...
@JasonBock commented on Sat Nov 26 2016
@OsirisTerje Sorry, the gitignore I used has the tools folder ignored. Should be fixed now.
@JasonBock commented on Sat Nov 26 2016
@CharliePoole Yes, I plan on having a lot more documentation on each analyzer, what they do, and what code fixes are available if any.
@JasonBock commented on Sat Nov 26 2016
@CharliePoole what I'm looking for now is, do they work as expected? Do they add value? Are there any bugs with the implementation? Suggestions for improvement? I can address those concerns first and then start working on other analyzer code after that.
@CharliePoole commented on Sat Nov 26 2016
@JasonBock I just wanted to point out that some of your eventual users - like me - may never have seen any Roslyn analyzers before. Even the name is confusing to someone who is used to what "analyzing" has always meant in the past with static analysis. In this case, the analysis is coupled with automated fixes, a new idea to us old-timers!
@CharliePoole commented on Sat Nov 26 2016
@rprouse We really need to get that .gitignore file standardized!
@CharliePoole commented on Sat Nov 26 2016
@JasonBock Absolutely... that's exactly the review we need right now.
There is a sort of meta-discussion that I think also has to take place around the question of what do we want to be telling users they ought to do or not do.
Here's an example for a future analyzer: using `[Parallelizable(ParallelScope.Self)]` at the assembly level should probably be considered an outright error, although it's not possible to catch it normally at compile time. OTOH, we have never told users not to use `Assert.True` before. That's a stylistic decision that might be made by some users. Should we publish such an analyzer? I don't know, but it's still a good example to use in this initial implementation. We can focus on how well it works and worry later about exactly what rules we want to include.
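For concreteness, the assembly-level usage in question compiles without complaint, which is exactly why only an analyzer could catch it:

```csharp
using NUnit.Framework;

// ParallelScope.Self makes no sense at the assembly level - the compiler
// accepts it, so an analyzer would be the place to report it as an error.
[assembly: Parallelizable(ParallelScope.Self)]
```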
@nunit/core-team I went on at length about this because I wanted to point out that it's the kind of vision-oriented decision I'd like to see the core team take charge of in future. How didactic or dogmatic do we want to be toward users? I have a pretty clear idea of what I would do if I were continuing, but I'm not, so I'd like to have a process where we decide things like this above and beyond the individual projects. We can talk more as we continue to get organized.
@OsirisTerje commented on Sat Nov 26 2016
The Roslyn analyzers are a great idea. In VS 2015 they have been a bit sluggish on larger solutions, but with VS 2017 that is improved. I agree a meta discussion is needed, but as long as one can set the error level, and one is conservative with respect to the defaults (not enabling too much), I feel one can add quite a bit. It is important to also have the refactoring in though, not only the analysis.
I have tried the nunit analyzer, and the TestCase check works as it should; however, Assert.AreEqual doesn't give me anything.
It works in the rules editor, but only for the project where it is installed. I see e.g. the sonar analyzer only needs to be installed once, and then works for all projects.
Set Rule Severity in the Ref/Analyzer does not work.
@JasonBock commented on Sat Nov 26 2016
About the `DiagnosticSeverity` level - I agree, a good default value is needed. For example, for the classic model ones, I have it set to `Error` (keep in mind a user can always change that in a project). But is it really an error to use `Assert.IsTrue()`? Well...no. It'll still work. So maybe making it `Info` is a better choice. I've generally been against non-error "things" in the past because the developer will usually ignore them, but even with the `Info` level you'll still see a squiggly line in the editor.
@OsirisTerje can you post a screen shot of the issue? Maybe I'll be able to figure it out.
@OsirisTerje commented on Sat Nov 26 2016
- Restarting Visual Studio made it possible to change the severity using Set Rule Set Severity under Ref/Analyzers. So that one is ok.
- I am uncertain about the analyzers and how they are to be installed. I haven't played much with them lately, and the SonarLint analyzer is not installed per project, but still works for all. Is that because it is a vsix perhaps? Think that might be it. I also see this issue referred to in the roslyn GitHub repo. Installing for all projects adds a build performance penalty. But this is not ours anyway.
- The Assert.AreEqual check works in a small project, but I tried adding it to a larger one, and it doesn't work there.
From the small project:
From the larger project:
I added the two marked with red arrows there, and no underline appears.
The TestCase analyzer does work for the project:
I also tried to add an Assert.AreEqual to the exact same method as the one above, which reacts to the TestCase, but it still doesn't light up.
@JasonBock commented on Sat Nov 26 2016
If you "install" the analyzers by running the VSIX, it's VS-wide. Meaning you can't set things per-project, nor do errors cause a build to fail.
I've noticed that sometimes the analyzers stop running for some reason. If you edit a line of code - like deleting the opening parenthesis and adding it back - the analyzers run again. I'm not sure if this is an issue with my analyzer or something that VS is doing. Try that and see if you get the same behavior.
@OsirisTerje commented on Sat Nov 26 2016
I have tried that, and no change. It still doesn't light up. I do see that it is shown in the Build output, just not as squiggly lines.
And now I see they have disappeared from the small test project too. But only the AreEqual ones.
@OsirisTerje commented on Sat Nov 26 2016
We should also be very careful when marking these rules as Error. The default should be Warning. Errors should only be things comparable to a compiler error - something that would cause NUnit to crash or not work. If we add them as Errors we force people to change them, and my larger project now has >500 errors ;-)
@OsirisTerje commented on Sat Nov 26 2016
Perhaps these errors, from the small test project, can help:
@OsirisTerje commented on Sat Nov 26 2016
I see the package includes nunit.framework. I guess that is where some of the issues come from.
@CharliePoole commented on Sat Nov 26 2016
If the analyzer has to have a reference to a copy of the framework, then the deployment story will need to change substantially.
@CharliePoole commented on Sat Nov 26 2016
@JasonBock With a framework reference, you would have to rebuild and republish the analyzers with each release of the framework, so it might have to be part of the framework distribution. However, it kind of depends on how you use the framework.
@JasonBock commented on Sat Nov 26 2016
This is something I asked about before: whether to reference the nunit.framework assembly or not. I can change it so it doesn't, but there's an advantage to having a reference to the assembly: you have the right names for members when you're writing your tests.
Still not seeing why it's not working for `Assert.AreEqual()`....
@JasonBock commented on Sat Nov 26 2016
With CSLA, the analyzers are distributed with the framework, and that is the way I'd prefer them to be. It makes a lot more sense than having them as a separate thing.
@JasonBock commented on Sat Nov 26 2016
I am seeing the errors for `Assert.AreEqual()` as well as others:
@CharliePoole commented on Sat Nov 26 2016
@JasonBock Distributing it as a part of the framework would mean having separate builds for .NET 2.0, 3.5, 4.0 and 4.5, Portable and NetStandard - at least at the moment. Your code could all be 4.5+, of course, but each version would have to reference a different framework build.
If you did not reference the framework itself, you might have to build in some knowledge of which features appear in which framework versions.
There is probably a tradeoff to be made here.
@JasonBock commented on Sat Nov 26 2016
@CharliePoole this could only be for the portable version. The analyzer stuff doesn't work with any version of Visual Studio prior to VS2015, so that would be the only one it could work with - basically projects that target .NET 4.5 and up.
I really don't care either way whether the analyzers take a reference to the framework or not. If I don't, I just create constants that represent the names of the members I care about. It'll work either way.
@OsirisTerje commented on Sat Nov 26 2016
@JasonBock What is the target framework for your test program? I just removed the framework from the NuGet package, and that did not help with the error either.
@JasonBock commented on Sat Nov 26 2016
@OsirisTerje 4.6.2
@JasonBock commented on Sat Nov 26 2016
@OsirisTerje if you removed nunit.framework from the analyzer project, it wouldn't even compile. There are lots of places where I use `nameof` in the code....well, come to think of it, because of that, the names get resolved at compile time, so nunit.framework isn't even referenced at runtime.
Just not sure why you're having issues with this.... :(
@JasonBock commented on Sat Nov 26 2016
@OsirisTerje did you try launching the VSIX with the debugger and see where the code fails? You should be able to do that.
@OsirisTerje commented on Sat Nov 26 2016
@JasonBock VSIX: No, not yet. I'll try that.
Package: I didn't remove it from the project, I kept the package there, I just removed it from the produced package output.
@CharliePoole commented on Sat Nov 26 2016
@JasonBock VS2015 allows targeting .NET 2.0 plus... are you saying users whose tests target 2.0 through 4.0 can't use this feature?
But in any case, if you reference the framework at runtime, you have to reference some version and runtime build of it. The user could be referencing some other version and/or build. Is that a problem, or is your code executing somewhere where it won't interfere with the user's test AppDomain?
If you don't reference the framework at runtime, that's easy... just don't distribute it with the packages.
@JasonBock commented on Sat Nov 26 2016
@CharliePoole Yes, analyzers are only for 4.6 and up I believe. If you look at 4.6 projects you'll see an "analyzers" node within the project. This doesn't exist in previous projects. There's also a new command line switch to csc that allows you to pass in analyzers so, yes, this isn't available with previous project versions.
The analyzers aren't run at test time. They're run as users are editing and building code.
I still think the analyzers should come along with the nunit.framework package at some point, rather than being a separate NuGet package. It's distributed and versioned with the framework itself.
@CharliePoole commented on Sat Nov 26 2016
@JasonBock Well, that could limit the number of versions needed to 4.5, portable and netstandard. What counts here is not what the user is targeting, but which framework build they reference. Most people building for 4.6 and above probably reference the nunit 4.5 build, but I have 4.5 tests that reference the portable build as well. So having a relation between the analyzer build and a framework build makes sense.
We would probably not want to bundle it, however, since we don't in other cases. We generally make use of inter-package dependencies in NuGet. Since users may be targeting different runtimes in different projects, this makes it sound as if vsix distribution is not such a great idea for us.
@JasonBock commented on Sat Nov 26 2016
@CharliePoole if you don't bundle it, then developers will have to know that the analyzers exist and need to do a separate step to include them. But again, that's not my call.
I would definitely NOT want to distribute as a VSIX purely because any errors flagged by an analyzer won't stop the build process. It's too coarse-grained for my tastes. Keeping it as a NuGet package is usually the right approach for analyzers.
@JasonBock commented on Sat Nov 26 2016
@CharliePoole BTW you said 4.5....there's a NUnit 4.5 version? The latest one I see on NuGet is 3.5....or were you referring to .NET 4.5?
@CharliePoole commented on Sat Nov 26 2016
@JasonBock I'd be all for bundling it in our distributions that bundle other stuff - e.g. msi and zip - but not in NuGet. Folks who choose to use NuGet are essentially choosing a more granular way of installing. So, for example, if you just install the framework, you don't get nunitlite, even though nunitlite versions map one-to-one to framework versions. The downside is that you may not even know nunitlite exists, but that's the tradeoff based on how users choose to install.
Yes, I was talking about the .NET 4.5 build of the 3.5 framework.
@JasonBock commented on Sat Nov 26 2016
@CharliePoole Analyzers arguably are meant to enforce practices and guidelines for a particular framework. You can also write general-purpose analyzers that are "tied" to the entire .NET Framework, in which case you can't bundle them.
Again, though we don't have to worry about that right now. Frankly, I'd much rather get feedback on the analyzer code itself and how they work (or don't) :)
@ChrisMaddock commented on Sat Nov 26 2016
I'd like to see the analyzers bundled in the framework NuGet packages. From what I remember, NuGet has specific functionality to install analyzers and interact with Roslyn where relevant, and ignore them otherwise. It wouldn't introduce any extra references, and users can still turn them off if they don't want to be notified of their compile-time errors, for any reason?!
Also agree with the discussion above, errors should just be used for what would be runtime errors. Classic assertions should just have the codefix available in the least intrusive way, in my opinion (Info?) - as we haven't yet deprecated them, and from what I gather, some people still prefer them.
@JasonBock - I'll try and have a test run in the next couple of days, looking forward to seeing them in action!
@JasonBock commented on Sat Nov 26 2016
I changed the severity for the classic model assertions to `Info`, and then VS doesn't give any indication in the UI that something could change. You do get the light bulb on the line, but not until you give focus to that line of code or look at the messages in the Error List. Making it a Warning at least gives a squiggle in VS, so I'm thinking that is the right setting.
@CharliePoole commented on Sat Nov 26 2016
I'm in the middle of setting up a new development laptop, but I'll also try it out as soon as I can.
@JasonBock The issue I have with the transformation of some of the asserts is that we have never told users they should not use them. On the contrary, Assert.True is one of a set that we actively encouraged people to use. Others are Assert.False, Assert.Null, Assert.NotNull. But that's all academic at the moment. If you give us a good vehicle, we can write lots of issues for what we would like to see included in it.
It sounds like Info level is more like an optional refactoring than an error or warning message.
@ChrisMaddock What you say makes it sound like a good choice. Can't the bundling be done via dependencies, however?
@JasonBock commented on Sat Nov 26 2016
@CharliePoole you can also create refactorings (a different extensibility project type), and you're right, this may be a better fit for a refactoring. But if we really want to get people onto the constraint model and ultimately get rid of the classic model, a warning with an easy way to fix all instances solution-wide with one click of a menu option may be the right approach. Even if you keep the analyzer at `Info`, you get the VS feature to apply it to all instances in a solution; refactorings don't have that capability.
@CharliePoole commented on Sat Nov 26 2016
@JasonBock There was a time, about 7 years ago, when I wanted to get rid of Classic asserts. In fact, I was calling them legacy asserts back then. User feedback strongly favored keeping them and so I dropped the goal of dropping them. :-)
@JasonBock commented on Sat Nov 26 2016
@CharliePoole I'm curious as to why they wanted to keep them. Was one of the reasons that changing over to the constraint model would be too time-consuming? Or, "hey, my test code works, why force me to change?" (which is a valid reason).
@CharliePoole commented on Sun Nov 27 2016
That's one reason, but some people just don't like the constraint-based syntax. I didn't think I would like it myself, but got talked into implementing it. Tool-builders have to be more flexible than users. :-)
BTW, the fluent syntax was created as part of the separate NUnitLite project and then migrated back to NUnit. With NUnit 3, we merged nunitlite entirely into NUnit again. Anyway, the old NUnitLite did drop most of the classic asserts but kept Assert.True, Assert.False, Assert.Null, Assert.NotNull, Assert.AreEqual and a few others. In my own work, I use the constraint-based syntax 90% of the time, but I still use those few classic asserts plus Assert.Throws, since it gives you the exception for further testing.
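To make the mapping concrete (a sketch of the equivalences under discussion; `parser` is a hypothetical object):

```csharp
// Classic asserts and their constraint-model equivalents:
Assert.AreEqual(42, answer);   // Assert.That(answer, Is.EqualTo(42));
Assert.True(flag);             // Assert.That(flag, Is.True);
Assert.NotNull(result);        // Assert.That(result, Is.Not.Null);

// Assert.Throws returns the exception, which is why it stays useful:
var ex = Assert.Throws<ArgumentException>(() => parser.Parse(null));
Assert.That(ex.Message, Does.Contain("input"));
```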
@ChrisMaddock commented on Sun Nov 27 2016
> @ChrisMaddock What you say makes it sound like a good choice. Can't the bundling be done via dependencies, however?
Could be, yes. Although, if we get the obtrusiveness-balance right, I think they'd be better bundled in the same package, purely for discoverability. Hopefully we should be able to bundle something which is helpful to everyone - although sounds like we'll need to consider the performance issues @OsirisTerje mentioned.
> I changed the severity for the classic model assertions to Info, and then VS doesn't give any indication in the UI that something could change.
That's irritating, I was hoping it would give the little dotted line - that must be a Resharper thing. Although yes, until a point where we explicitly deprecate the classic syntax, I don't think warnings would be right. Doing such a thing would be a bigger change than anything from NUnit 2 -> 3, and sounds unfeasible/unpopular. (Although I'd very much be in favour!)
@ChrisMaddock commented on Sun Nov 27 2016
One other thought, could we perhaps write some analysers and finally deprecate/spin-out/remove AssertionHelper? nunit/nunit#1212
@JasonBock commented on Sun Nov 27 2016
@ChrisMaddock definitely could be done. One thing I'd like to do is start using the nunit.analyzers wiki to document analyzers, and issues to propose analyzer ideas. This could go there.
Really, though, what I'd like right now is for people to try the analyzers that currently exist with projects they have and see how they work. Weed out bugs. Provide code review suggestions. That sort of thing. Then more analyzers can be written.
@CharliePoole commented on Sun Nov 27 2016
@JasonBock No problem doing any of that in the wiki or in issues, but keep in mind we may end up putting the analyzer somewhere else; in fact, we will if we follow your recommendation to bundle it. It's not a huge job to move issues or wiki pages, but keep it in mind.
@JasonBock commented on Sun Nov 27 2016
@CharliePoole true, I've been debating this in my head as well. If we put too much into the current repo and then move it, it may be a PITA to move certain assets over. So... maybe I won't do too much of that right now.
@JasonBock commented on Sun Nov 27 2016
What I am proposing, though, is that this issue should be closed as there are now analyzers for NUnit, at least in flight :). Having specific discussions on the design and implementation of analyzers and the overall deployment story should probably move to separate issues on the nunit.analyzers issues page for now.
@CharliePoole commented on Mon Nov 28 2016
@JasonBock Many of us have been tied up with other things so your actual code isn't getting much review so far. Hang on for a bit and once a few of us have managed to review and exercise it we'll be ready to make a decision about where it goes and I think things will start to flow more quickly after that.
@ChrisMaddock commented on Mon Nov 28 2016
Had a quick play just now and posted a couple of definite issues in the repo. I'm seeing the same FileIO exceptions as @OsirisTerje, although I'm working with an internal test library which has a reference to nunit.framework, so it could well be that something is getting crossed there. I wonder if it would be worth factoring out the nunit.framework reference, at least temporarily, while we're testing? That should hopefully allow us to just drop the package in wherever we like for now, until such a point as this is packaged appropriately with the framework. (No worries if that's actually a big job!)
The NullReferenceException seems more likely to be something separate, stack trace below. (Assembly name redacted, because work!)
```
Error AD0001 Analyzer 'NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer' threw an exception of type 'System.NullReferenceException' with message 'Object reference not set to an instance of an object.'. XXXX 1 Active

Analyzer 'NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer' threw the following exception:
'Exception occurred with following context:
Compilation: XXXXXXX
SyntaxTree: XXXXXXX
SyntaxNode: Test [AttributeSyntax]@[762..766) (21,9)-(21,13)

System.NullReferenceException: Object reference not set to an instance of an object.
   at NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer.AnalyzeAttribute(SyntaxNodeAnalysisContext context)
   at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.<>c__DisplayClass42_1`1.<ExecuteSyntaxNodeAction>b__1()
   at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.ExecuteAndCatchIfThrows_NoLock(DiagnosticAnalyzer analyzer, Action analyze, Nullable`1 info)
-----
'.
```
Unfortunately, I don't seem to be able to test out the classic model refactorings. :-( When I double-click the error to open the file, the red squiggly line appears for about a second before disappearing again, and I don't seem to be able to access the quickfix. I jumped straight into a pretty big test suite (12k tests), so maybe it's some sort of overload thing? I don't know how to get any sort of debug info out of it, sorry; let me know if there's anything I can look at!
Still a big fan of the concept, having static checking of some run-time errors would be a great time saver! :smile:
@tom-dudley commented on Thu May 11 2017
For configuration of rules, analyzers can access Additional Files, which looks to be the recommended method (see the Roslyn docs, "Using Additional Files"). StyleCop Analyzers uses this approach for its configuration (see "Configuring StyleCop Analyzers").
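As a sketch of how that wiring looks (the file name `nunit.analyzers.json` is purely hypothetical), the consuming project registers the file as an `AdditionalFiles` item, and the analyzer can then read it through `context.Options.AdditionalFiles`:

```xml
<!-- In the test project's .csproj: register a settings file so the
     analyzer can see it via AnalyzerOptions.AdditionalFiles. -->
<ItemGroup>
  <AdditionalFiles Include="nunit.analyzers.json" />
</ItemGroup>
```

On the analyzer side, each entry surfaces as an `AdditionalText` whose contents are retrieved with `GetText()`.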
@CharliePoole commented on Thu May 11 2017
Can we close this issue so future discussion gets pushed to the analyzers project?
@rprouse commented on Thu May 18 2017
Yes, let's close this. We have a contributor project working on analyzers now. Any feedback should be entered in that project.