nunit / nunit.analyzers

Roslyn analyzers for writing unit tests with NUnit

License: MIT License

C# 99.76% PowerShell 0.22% Batchfile 0.01% Shell 0.02%
nunit nunit-analyzers roslyn-analyzer dotnet csharp hacktoberfest

nunit.analyzers's Introduction

NUnit Analyzers


This is a suite of analyzers that target the NUnit testing framework. Right now the code is separate from the NUnit framework, so if you want to try out the analyzers you'll need to install them as a separate NuGet package. In the future the analyzers may become part of the NUnit framework package, but that hasn't been done yet.

Download

The latest stable release of the NUnit Analyzers is available on NuGet or can be downloaded from GitHub. Note that Visual Studio 2017 requires analyzer versions below 3.0; these versions are no longer updated, and version 2.10.0 is the last one that works in Visual Studio 2017. Version 3.0 and upwards require Visual Studio 2019 (version 16.3) or newer; these versions also enable suppression of compiler errors, such as those arising from nullable reference types.

Prerelease NuGet packages can be found on MyGet. Please try out the packages and report bugs and feature requests.

Analyzers

The full list of analyzers can be found in the documentation.

Below we give two examples of analyzers. The first looks for methods with the [TestCase] attribute and makes sure the argument values match the types of the method parameters, along with the ExpectedResult value if it is provided.

[image: TestCase analyzer]

The other analyzer looks for classic model assertions (e.g. Assert.AreEqual(), Assert.IsTrue(), etc.). This analyzer contains a fixer that can translate the classic model assertions into constraint model assertions (Assert.That()).

[image: classic model assertions analyzer]
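
As a quick illustration (the variable result here is just an example), applying the fix rewrites classic assertions such as

// Classic model assertions...
Assert.AreEqual(42, result);
Assert.IsTrue(result > 0);

// ...into their constraint model equivalents:
Assert.That(result, Is.EqualTo(42));
Assert.That(result > 0, Is.True);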

Which version works with Unity Test Framework

If your Unity project is built with a Unity version below 2021.2, use NUnit.Analyzers v2.x.

If your Unity project is built with Unity 2021.2 or later, use NUnit.Analyzers v3.3 (v3.4 and later of the analyzers do not work with Unity).

You should use an analyzer built with the same version of Microsoft.CodeAnalysis.CSharp as the one embedded in the Unity Editor.

License

The NUnit analyzers are Open Source software released under the MIT license, which allows the use of the analyzers in free and commercial applications and libraries without restrictions.

Contributing

There are several ways to contribute to this project. One can try things out, report bugs, propose improvements and new functionality, work on issues (especially the issues marked with the labels help wanted and Good First Issue), and in general join in the conversations. See Contributing for more information.

This project has adopted the Code of Conduct from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4. See the Code of Conduct for more information.

Contributors

NUnit.Analyzers was created by Jason Bock. A complete list of contributors can be found on the GitHub contributors page.

nunit.analyzers's People

Contributors

304notmodified, 333fred, andrewimcclement, antash, aolszowka, bartleby2718, collinalpert, corniel, dependabot[bot], dreamescaper, get-me-power, henryzhang-zhy, jasonbock, jcurl, jnm2, johanlarsson, lahma, laniusexcubitor, maettu-this, manfred-brands, matode, maximrouiller, mgyongyosi, mikkelbu, nowsprinting, oskar, rprouse, seankilleen, stevenaw, verdie-g


nunit.analyzers's Issues

NRE in ClassicModelAssertUsageAnalyzer.AnalyzeInvocation

The method ClassicModelAssertUsageAnalyzer.AnalyzeInvocation can fail with an NRE, since invocationSymbol can be null. The problem is that the type of context.SemanticModel.GetSymbolInfo(invocationNode.Expression).Symbol is SourceSimpleParameterSymbol and not IMethodSymbol. The problem can be seen by opening the file nunit.nunit\src\NUnitFramework\tests\Constraints\MsgUtilTests.cs (in the nunit project). The code then fails in

 [Test]
 public static void FormatValue_ContextualCustomFormatterInvoked_FactoryArg()
 {
     TestContext.AddFormatter(next => val => (val is CustomFormattableType) ? "custom_formatted" : next(val));

     Assert.That(MsgUtils.FormatValue(new CustomFormattableType()), Is.EqualTo("custom_formatted"));
 }

and invocationNode references next(val). I have not looked much into the problem yet.
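
A minimal defensive fix - sketched here on the assumption that the analyzer should simply skip invocations that do not resolve to a method symbol; this is not the actual patch - would be:

var invocationSymbol = context.SemanticModel
    .GetSymbolInfo(invocationNode.Expression).Symbol as IMethodSymbol;
if (invocationSymbol == null)
{
    return; // e.g. next(val) resolves to a parameter symbol, not a method
}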

Analyzer for TestCaseAttribute should also work for nullable types

The analyzer should also handle the following (from nunit\src\NUnitFramework\tests\Attributes\TestCaseAttributeTests.cs). Currently, it gives the warning "The value of the argument at position 0 cannot be assigned to the argument x".

        [TestCase(1)]
        public void CanConvertIntToNullableLong(long? x)
        {
            Assert.That(x.HasValue);
            Assert.That(x.Value, Is.EqualTo(1));
        }

NRE when writing an attribute before adding a reference to NUnit

The following code makes the analyzer throw an NRE if one has not added a reference to NUnit.

[TestFixture]
public class Class1
{
    [Test]
    public void Test1()
    {
    }
}

The method should probably just return if testCaseType is null.
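
A sketch of that guard, assuming the attribute type is looked up by metadata name (the actual code may differ):

var testCaseType = context.SemanticModel.Compilation
    .GetTypeByMetadataName("NUnit.Framework.TestCaseAttribute");
if (testCaseType == null)
{
    return; // NUnit is not referenced, so there is nothing to analyze
}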

Exception

System.NullReferenceException was unhandled by user code
  HResult=-2147467261
  Message=Object reference not set to an instance of an object.
  Source=nunit.analyzers
  StackTrace:
       at NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer.AnalyzeAttribute(SyntaxNodeAnalysisContext context) in F:\TEMP\Github\nunit.analyzers\src\nunit.analyzers\TestCaseUsage\TestCaseUsageAnalyzer.cs:line 53
       at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.<>c__DisplayClass42_1`1.<ExecuteSyntaxNodeAction>b__1()
       at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.ExecuteAndCatchIfThrows_NoLock(DiagnosticAnalyzer analyzer, Action analyze, Nullable`1 info)
  InnerException: 

Analyzer for TestCaseAttribute should also work for generic TestFixtures

The analyzer should also handle the following (from \nunit\src\NUnitFramework\tests\Internal\TypeParameterUsedWithTestMethod.cs). Currently, it gives the warning "The value of the argument at position 0 cannot be assigned to the argument x".

    [Category("Generics")]
    [TestFixture(typeof(double))]
    public class TypeParameterUsedWithTestMethod<T>
    {
        [TestCase(5)]
        [TestCase(1.23)]
        public void TestMyArgType(T x)
        {
            Assert.That(x, Is.TypeOf(typeof(T)));
        }
    }

Add error if tests/testcases have a return value, but no ExpectedResult

If tests/testcases have a return value but no ExpectedResult, they currently fail at runtime with the message "Method has non-void return value, but no result is expected". We should handle this in the analyzers.

[Test]
public string Test3()
{
    return "12";
}

[TestCase(1)]
public string Test4(int i)
{
    return "12";
}

Add CI support for master and PRs

Probably using AppVeyor and Cake. To begin with we will focus on the build and testing parts; a later issue will add packaging functionality.
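
A minimal build.cake along these lines might look as follows - a sketch only; the task names, paths, and tool versions are illustrative rather than the project's actual targets:

#tool nuget:?package=NUnit.ConsoleRunner

var target = Argument("target", "Default");

Task("Build")
    .Does(() => MSBuild("./src/nunit.analyzers.sln"));

Task("Test")
    .IsDependentOn("Build")
    .Does(() => NUnit3("./src/**/bin/**/*.Tests.dll"));

Task("Default")
    .IsDependentOn("Test");

RunTarget(target);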

TestCase analyser doesn't handle nullables

The TestCase analyser incorrectly flags the test below, which has nullable parameters.

[screenshots of the flagged test omitted]

I had a quick test with optional parameters and params, and they both look good!

Suggestion: Analyzer Should Fix TestCaseSource(string) usage to use nameof()

Thank you for this tool.

We have several Test Cases that look similar to this:

[TestCaseSource( "_SetupIsNotEqualToNoEntryErrorBehaviorTestCases" )]

These were written before the introduction of nameof() and if they were to be re-written today would look like this:

[TestCaseSource( nameof(_SetupIsNotEqualToNoEntryErrorBehaviorTestCases) )]

This way Intellisense would operate correctly (F12/Renaming/Find Usages/etc).

I realize that you would be required to use a compiler/IDE that supports C# 6.0 (which usually means Visual Studio 2015), but it would encourage others to keep current.

Thank you for your consideration.

Warning if literal or const value is provided as actual value

In most cases (in fact, it is hard to think of a case where it should be otherwise) a constant value should not be provided as the actual value to an assert.
However, this can easily happen when the actual and expected arguments are swapped.
That mostly applies to the classic assertion model, but might be helpful for the constraint model as well.
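
For example (a hypothetical snippet), the first assertion below would be flagged because the constant occupies the actual-value position, which usually means the arguments were swapped:

const int ExpectedCount = 3;

Assert.AreEqual(items.Count, ExpectedCount);  // suspicious: constant passed as the actual value
Assert.AreEqual(ExpectedCount, items.Count);  // intended: Assert.AreEqual(expected, actual)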

Message: Async test method must have Task<T> return type when a result is expected

We should warn about the following problem.

Message: Async test method must have Task<T> return type when a result is expected

    [TestCase(ExpectedResult = 1)]
    public async System.Threading.Tasks.Task AsyncTaskTestCaseWithExpectedResult()
    {
      await Task.Run(() => 1);
    }

and

    [TestCase(ExpectedResult = 1)]
    public System.Threading.Tasks.Task TaskTestCaseWithExpectedResult()
    {
      return Task.Run(() => 1);
    }
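
The fix in both cases is to return Task<T> with T matching the type of ExpectedResult, e.g.:

    [TestCase(ExpectedResult = 1)]
    public async System.Threading.Tasks.Task<int> AsyncTaskTestCaseWithExpectedResult()
    {
      return await Task.Run(() => 1);
    }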

NRE in AnalyzePositionalArgumentsAndParameters

In the method TestCaseUsageAnalyzer.AnalyzePositionalArgumentsAndParameters there is the assumption that all expressions in the attribute have type LiteralExpressionSyntax. This is not always the case. The assumption fails in e.g. nunit.nunit\src\NUnitFramework\tests\Constraints\MsgUtilTests.cs, where the attribute arguments reference the constant s52, whose expression is an IdentifierNameSyntax.

        private const string s52 = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

        [TestCase(s52, 52, 0, s52, TestName="NoClippingNeeded")]
        [TestCase(s52, 29, 0, "abcdefghijklmnopqrstuvwxyz...", TestName="ClipAtEnd")]
        [TestCase(s52, 29, 26, "...ABCDEFGHIJKLMNOPQRSTUVWXYZ", TestName="ClipAtStart")]
        [TestCase(s52, 28, 26, "...ABCDEFGHIJKLMNOPQRSTUV...", TestName="ClipAtStartAndEnd")]
        public static void TestClipString(string input, int max, int start, string result)
        {
            System.Console.WriteLine("input=  \"{0}\"", input);
            System.Console.WriteLine("result= \"{0}\"", result);
            Assert.That(MsgUtils.ClipString(input, max, start), Is.EqualTo(result));
        }
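
Rather than special-casing each syntax kind, the analyzer could ask the semantic model for the constant value, which covers literals and const identifiers alike (a sketch, not the actual patch):

var constantValue = context.SemanticModel.GetConstantValue(argumentExpression);
if (!constantValue.HasValue)
{
    return; // not a compile-time constant, so nothing to check
}
// ...then compare constantValue.Value against the parameter type as before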

Do Not Allow Any Assertions to be Used in TearDown

Description

Based on discussion from here: nunit/docs#149
If a method is marked with [TearDown], no assertion calls from the Assert class should be allowed.

Example

[TearDown]
public void TearDown()
{
  Assert.That(true, Is.True); // This line would be flagged.
}

Analyzer Message

"No methods from the Assert class may be used in a [TearDown] method"

Default Severity Level

Error

Code Fixes

none

"Sequence contains no elements" in ClassicModelAssertUsageAnalyzer.AnalyzeInvocation

The code assumes that every InvocationExpression is inside a MethodDeclarationSyntax. This is not the case for nunit.nunit\src\NUnitFramework\tests\Api\FrameworkControllerTests.cs, which has invocations as part of a field assignment

public class FrameworkControllerTests
{
...
    private static readonly string MOCK_ASSEMBLY_NAME = typeof(MockAssembly).GetTypeInfo().Assembly.FullName;
...
}

So the code fails with a System.InvalidOperationException - "Sequence contains no elements" - in

if(!context.Node.Ancestors().OfType<MethodDeclarationSyntax>().Single().ContainsDiagnostics)
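
One possible shape of the fix, replacing Single() with a null-tolerant lookup (a sketch only):

var containingMethod = context.Node.Ancestors()
    .OfType<MethodDeclarationSyntax>()
    .FirstOrDefault();
if (containingMethod != null && containingMethod.ContainsDiagnostics)
{
    return; // skip methods that already contain compiler errors; field initializers pass through
}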

ClassicModelAssertUsageAnalyser throws InvalidOperationException

I haven't looked into this too much, but I believe it may happen when the analyzer encounters a property in the TestFixture class.

Error	AD0001	Analyzer 'NUnit.Analyzers.ClassicModelAssertUsage.ClassicModelAssertUsageAnalyzer' threw an exception of type 'System.InvalidOperationException' with message 'Sequence contains no elements'.	XXX		1	Active	Analyzer 'NUnit.Analyzers.ClassicModelAssertUsage.ClassicModelAssertUsageAnalyzer' threw the following exception:
'Exception occurred with following context:
Compilation: XXX
SyntaxTree: XXX
SyntaxNode: XXX[InvocationExpressionSyntax]@[6922..6945) (162,25)-(162,48)

System.InvalidOperationException: Sequence contains no elements
   at System.Linq.Enumerable.Single[TSource](IEnumerable`1 source)
   at NUnit.Analyzers.ClassicModelAssertUsage.ClassicModelAssertUsageAnalyzer.AnalyzeInvocation(SyntaxNodeAnalysisContext context)
   at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.<>c__DisplayClass42_1`1.<ExecuteSyntaxNodeAction>b__1()
   at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.ExecuteAndCatchIfThrows_NoLock(DiagnosticAnalyzer analyzer, Action analyze, Nullable`1 info)
-----
'.

Add Roslyn Code Analyzers for NUnit

@rprouse commented on Mon May 04 2015

Visual Studio 2015 and Roslyn introduced code analyzers and the concept of code-aware libraries. There are many ways that users can incorrectly use NUnit, so we could provide analyzers to detect incorrect usage and suggest fixes. This would help users become more productive and reduce the number of errors reported because features are incorrectly used.

If we do this, it should be a separate analyzer assembly that is a dependency of the framework NuGet package.

Some things we could check:

  • Constructors and test methods are public
  • TestCase attributes have the correct number and type of arguments for the given method
  • If a test method has a return type, the test attribute has ExpectedResult set
  • If we Assert.That(x, Is.Not.Null) and variants, x is a reference type
  • Classes that contain tests are marked with the TestFixture attribute
  • Usage of TestCaseSource within the same class (see #320)

Good idea? Are there other improper usages that cannot be caught at compile time that we should consider?


@CharliePoole commented on Mon May 04 2015

I like the idea. The key question is when do we do it - before or after 3.0. Clearly any time we spend on it is taken away from other features, so how important is it?

Moving ahead, I think it could be a separate project under the nunit organization.

Currently, we detect most of the errors you describe at runtime. I think we should continue to do that, marking the tests as non-runnable. It has always been a principle of NUnit that we treat anything you try to mark as a test as a test, giving an error if it's not quite right. That's definitely better than silently ignoring the errors at runtime.

Comments on the errors listed:

  • Constructors and test methods are public
    That makes sense. The issue is what's a test method and what's a test fixture. We don't want to fall into the trap of some third-party runners, maintaining separate code for identifying tests. Ideally, we would use NUnit itself to do the identification, but that requires an assembly.
  • TestCase attributes have the correct number and type of arguments for the given method
    Yes
  • If a test method has a return type, the test attribute has ExpectedResult set
    Yes
  • If we Assert.That(x, Is.Not.Null) and variants, x is a reference type
    Should be a warning, unless NUnit is modified to give an error. Currently, such a test "works" - that is, Assert.That(5, Is.Not.Null) succeeds.
  • Classes that contain tests are marked with the TestFixture attribute
    It's optional for most kinds of tests, so that one doesn't make sense to me.
  • Usage of TestCaseSource within the same class (see #320)
    Definitely! Should be an error if there is no default constructor, otherwise a warning. When we deprecate certain patterns that can't be marked with [Obsolete] this would be the way to do it.

Others:

  • Test Fixtures with no tests
  • Test methods with no asserts
  • TestFixture on abstract methods

Maybe we should set up a spec page for this in the dev wiki.


@rprouse commented on Tue May 05 2015

I think that this is a post 3.0 release task, I just wanted to document it. I agree that it should probably be in a separate repository and with your other comments. I will update the labels and milestone and we can come back to this later.


@rprouse commented on Tue May 26 2015

Another candidate would be testing the correct usage of the Range attribute. See #472


@CharliePoole commented on Sun Jul 03 2016

@rprouse This issue came up in a discussion with @ChrisMaddock who is interested in working on it. I can assign it to him if you have no objection, but I think we ought to clarify some stuff first.

From reading back, it seems that your vision was of something we would distribute to users. Would this be a standalone program? Or would it plug into some existing static analyzer package as an extension? If that's the case, would we expect to support more than one analyzer? Or is all this stuff - as I expect - what we need to figure out?

I guess this ended up on our Backlog when I eliminated the Future milestone last year. Based on how we categorize things nowadays, this seems like an "idea" trying to become a "feature", and I relabeled it accordingly. Since @ChrisMaddock has started experimenting with this, he would have the job of shepherding the idea on the way to feature-hood, but I think having a bit more about what you imagined as a way to delivering this feature would help him.

Of course, this is likely to end up as an entirely separate project from NUnit, but right now this seems like the best place to discuss it.


@ChrisMaddock commented on Sun Jul 03 2016

Thanks for writing this up @CharliePoole. :-)

Presuming Rob's talking about what I think he is - Roslyn analyzers are essentially 'plug-ins' to Roslyn that can provide users with warnings/code refactorings in Visual Studio - inline, and prior to compilation. Hence NUnit would be a great fit - working via reflection means we currently can't detect a lot of user errors until runtime. Distribution is by NuGet package or VSIX. Wrapping up the framework and analyzer in a single package seems to be the suggested method when already distributing via NuGet - although we'd need to decide (I imagine much later on!) whether this goes in the default framework packages or additional ones.

An additional area where this could be really useful is NUnit 2 -> 3 upgrades - I've lost count of the number of times I've find/replace'd IsStringContaining()! Using static analyzers, this could be a one-click operation across the whole solution.

I'll try and get a couple of the simpler examples up soon, and we can see what we're looking at.


@rprouse commented on Sun Jul 03 2016

@ChrisMaddock You are thinking what I am thinking 👍

Feel free to grab the issue and run with it. I think we will want these in a separate repository unless we plan to ship them in the Framework NuGet package. When they were introduced at Build a couple of years ago, the idea was that they would be bundled with libraries. It is a cool idea to do it that way, but I also don't like forcing them on people. Then again, they just provide code fixes.

For now, maybe start in a personal repository and we can discuss where they belong?


@ChrisMaddock commented on Sun Jul 03 2016

Sounds like a plan, will try a few. :-)


@rprouse commented on Sat Nov 19 2016

@JasonBock, this is the issue that contains other ideas for the Roslyn analyzer. I will assign it to you once you accept the team invitation. We might want to move it to the new repo once you start working there and convert it to a ZenHub epic or split it into child issues.


@JasonBock commented on Fri Nov 25 2016

I'm really close to getting a first commit done, should be done tomorrow :)

One quick comment on one idea @CharliePoole had mentioned before....

"Test methods with no asserts" - It's not uncommon for me to write unit tests where I'm using a mocking framework and I don't have any Assert assertions, but my assertions are in the calls that verify the mock was used correctly - e.g. myMockObject.VerifyAll(). So I'm not sure this is something that can be enforced all the time.


@godrose commented on Fri Nov 25 2016

It looks like all failure points can be reduced to 2 cases: assertions which are performed upon the SUT, and verifications that are performed upon a mock. Can the latter be mapped with a regex? Possibly with specific statements for each isolation framework.



@JasonBock commented on Fri Nov 25 2016

@godrose it could be, if we're willing to keep track of all mocking frameworks and how they work, which wouldn't be trivial. Also, what if a team decides to use some helper methods to do assertions? Then you wouldn't know that the test is "correct", because it is doing assertions, just indirectly. Or what if the developer is using another assertion framework like FluentAssertions? I don't think an analyzer can be written that would cover all the cases effectively - the scope is just too broad.


@godrose commented on Fri Nov 25 2016

Certainly. You don't have to cover all cases, just the most important ones - like you would in a product. You don't solve all use cases when you first release a product to the market; you only hit the ones that maximize the value for the client.

As to usage inference, I absolutely agree with you. I don't use NUnit assertions at all, preferring FluentAssertions instead. Helper methods are out of the question unless you want to compile the delegates/reflected info to expressions and so on.

Just consider the cost and the value for one particular case and see if it's worth a shot.



@JasonBock commented on Fri Nov 25 2016

@godrose, agreed, and that's kind of what I'm getting at. There is no solid use case for this particular analyzer request. It's just too broad, and I think time can be used more wisely addressing other cases.


@godrose commented on Fri Nov 25 2016

Fair enough. Good luck in developing the project!



@CharliePoole commented on Fri Nov 25 2016

@JasonBock I may have the wrong idea about what you are implementing. My assumption was that the user or team would get to choose which checks to apply using some sort of settings dialog or file. If that's the case then a simple check for asserts is feasible, but if there are no choices available to the user then only things that are always wrong can be flagged.

I guess it will be clearer when there's code to look at.


@JasonBock commented on Fri Nov 25 2016

@CharliePoole VS allows the user to turn analyzers on and off and change their setting (error, warning, info) from the default if they want. But there's no way (or no easy way) to change what the analyzer does with configuration. And yes, once I get the initial code in everyone can try them and give feedback. I WILL have something in soon, promise!


@JasonBock commented on Fri Nov 25 2016

Finally pushed the code:

https://github.com/nunit/nunit.analyzers

So....I think the next step is for team members and others to review it and provide feedback/issues/etc. I can address those and fix any issues, and then I think getting stories and tasks in ZenHub in place for the rest of the analyzers so they can be tracked easier, rather than lumping them all into one issue here.


@CharliePoole commented on Sat Nov 26 2016

I'd like to exercise this but I could use some instructions. The readme takes me through building and installing, but not usage.


@JasonBock commented on Sat Nov 26 2016

@CharliePoole ...I think you're asking for better instructions in the readme file :). I'll get on that.


@JasonBock commented on Sat Nov 26 2016

OK, the readme file now has some basic build instructions. Let me know if you need more.


@CharliePoole commented on Sat Nov 26 2016

Wow! I just read my message. Autocorrect gone mad! Just updated it. Basically, I don't know how one uses these "analyzers" once they are installed. However I found some online videos. Eventually, there should probably be at least a paragraph that tells people what happens after they install it.

Welcome to NUnit @JasonBock !


@OsirisTerje commented on Sat Nov 26 2016

Just cloned it and built, and was rewarded by this:
[screenshot of the build error omitted]

It seems it looks under the tools folder for the ps1 files, but they are not there. I copied them from the packages subfolder and it builds, but they should be added - I hope they are the same......


@JasonBock commented on Sat Nov 26 2016

@OsirisTerje Sorry, the gitignore I used has the tools folder ignored. Should be fixed now.


@JasonBock commented on Sat Nov 26 2016

@CharliePoole Yes, I plan on having a lot more documentation on each analyzer, what they do, and what code fixes are available if any.


@JasonBock commented on Sat Nov 26 2016

@CharliePoole what I'm looking for now is, do they work as expected? Do they add value? Are there any bugs with the implementation? Suggestions for improvement? I can address those concerns first and then start working on other analyzer code after that.


@CharliePoole commented on Sat Nov 26 2016

@JasonBock I just wanted to point out that some of your eventual users - like me - may never have seen any Roslyn analyzers before. Even the name is confusing to someone who is used to what "analyzing" has always meant in the past with static analysis. In this case, the analysis is coupled with automated fixes, a new idea to us old-timers!


@CharliePoole commented on Sat Nov 26 2016

@rprouse We really need to get that .gitignore file standardized!


@CharliePoole commented on Sat Nov 26 2016

@JasonBock Absolutely... that's exactly the review we need right now.

There is a sort of meta-discussion that I think also has to take place around the question of what do we want to be telling users they ought to do or not do.

Here's an example for a future analyzer: Using [Parallelizable(ParallelScope.Self)] at the assembly level should probably be considered an outright error, although it's not possible to catch it normally at compile time. OTOH, we have never told users not to use Assert.True before. That's a stylistic decision that might be made by some users. Should we publish such an analyzer? I don't know, but it's still a good example to use in this initial implementation. We can focus on how well it works and worry later about exactly what rules we want to include.

@nunit/core-team I went on at length about this because I wanted to point out that it's the kind of vision-oriented decision I'd like to see the core team take charge of in future. How didactic or dogmatic do we want to be toward users? I have a pretty clear idea of what I would do if I were continuing, but I'm not, so I'd like to have a process where we decide things like this above and beyond the individual projects. We can talk more as we continue to get organized.


@OsirisTerje commented on Sat Nov 26 2016

The Roslyn analyzers are a great idea. In VS 2015 they have been a bit sluggish on larger solutions, but with VS 2017 that is improved. I agree a meta discussion is needed, but as long as one can set the error level, and one is conservative with respect to the defaults (not enabling too much), I feel one can add quite a bit. It is important to also have the refactoring in though, not only the analysis.

I have tried the nunit analyzer, and the testcase check works as it should; however, the Assert.AreEqual one doesn't give me anything.
It works in the rules editor, but only for the project where it is installed. I see that e.g. the Sonar analyzer only needs to be installed once and then works for all projects.
Set Rule Severity in the Ref/Analyzer node does not work.


@JasonBock commented on Sat Nov 26 2016

About the DiagnosticSeverity level - I agree, a good default value is needed. For example, for the classic model ones, I have it set to Error (keep in mind a user can always change that in a project). But is it really an error to use Assert.IsTrue()? Well...no. It'll still work. So maybe making it Info is a better choice. I've generally been against non-error "things" in the past because the developer will usually ignore them, but even with the Info level you'll still see a squiggly line in the editor.

@OsirisTerje can you post a screen shot of the issue? Maybe I'll be able to figure it out.


@OsirisTerje commented on Sat Nov 26 2016

  1. Restarting Visual Studio made it possible to change the severity using Set Rule Set Severity under Ref/Analyzers. So that one is ok.
  2. I am uncertain about the analyzers and how they are to be installed. I haven't played much with them lately, and the SonarLint analyzer is not installed per project but still works for all. Is that because it is a VSIX perhaps? Think that might be it. I also see this issue referred to in the roslyn GitHub repo. Installing for all projects adds a build performance penalty. But this is not ours anyway.
  3. The Assert.AreEqual works in a small project, but I tried adding it to a larger one, and it doesn't work there.
    From the small project:
    [screenshot omitted]

From the larger project:
[screenshot omitted]

I added the two marked with red arrows there, and no underline appears. The analyzer works for the project - this is the TestCase one, which works:
[screenshot omitted]

I also tried to add an Assert.AreEqual to the exact same method as the one above that reacts to the testcase, but it still doesn't light up.


@JasonBock commented on Sat Nov 26 2016

If you "install" the analyzers by running the VSIX, it's VS-wide. Meaning you can't set things per-project, nor do errors cause a build to fail.

I've noticed that sometimes the analyzers stop running for some reason. If you edit a line of code, like deleting the opening parenthesis and adding it - the analyzers run again. I'm not sure if this is an issue with my analyzer or something that VS is doing. Try that and see if you get the same behavior.


@OsirisTerje commented on Sat Nov 26 2016

I have tried that, and no change. It still doesn't light up. I do see that it is shown in the Build output, just not as squiggly lines.
And now I see they have disappeared from the small test project too. But only the AreEquals.


@OsirisTerje commented on Sat Nov 26 2016

We should also be very careful when marking these rules as Error. The default should be Warning. Errors should only be things comparable to a compiler error - something that would cause NUnit to crash or not work. If we mark them as Errors we force people to change them, and my larger project now has >500 errors ;-)


@OsirisTerje commented on Sat Nov 26 2016

Perhaps these errors, from the small test project, can help:
[screenshot of the error list omitted]


@OsirisTerje commented on Sat Nov 26 2016

I see the package includes nunit.framework. I guess that is where some of the issues come from.


@CharliePoole commented on Sat Nov 26 2016

If the analyzer has to have a reference to a copy of the framework, then the deployment story will need to change substantially.


@CharliePoole commented on Sat Nov 26 2016

@JasonBock With a framework reference, you would have to rebuild and republish the analyzers with each issue of the framework, so it might have to be a part of the framework distribution. However, it kind of depends on how you use the framework.


@JasonBock commented on Sat Nov 26 2016

This is something I asked about before: whether to reference the nunit.framework assembly or not. I can change it so it doesn't, but there's an advantage of having a reference to the assembly as you have the right names for members when you're doing your tests.

Still not seeing why it's not working for Assert.AreEqual()....


@JasonBock commented on Sat Nov 26 2016

With CSLA, the analyzers are distributed with the framework, and that would be way I'd prefer them to be. It makes a lot more sense than having them as a separate thing.


@JasonBock commented on Sat Nov 26 2016

I am seeing the errors for Assert.AreEqual() as well as others:

[screenshot omitted]


@CharliePoole commented on Sat Nov 26 2016

@JasonBock Distributing it as a part of the framework would mean having separate builds for .NET 2.0, 3.5, 4.0 and 4.5, Portable and NetStandard - at least at the moment. Your code could all be 4.5+, of course, but each version would have to reference a different framework build.

If you did not reference the framework itself, you might have to build in some knowledge of which features appear in which framework versions.

There is probably a tradeoff to be made here.


@JasonBock commented on Sat Nov 26 2016

@CharliePoole this could only be for the portable version. The analyzer stuff doesn't work with any version up until VS2015, so that would be the only one it could work with - basically projects that target .NET 4.5 and up.

I really don't care either way whether the analyzers take a reference to the framework or not. If I don't, I just create constants that represent the names of the members I care about. It'll work either way.


@OsirisTerje commented on Sat Nov 26 2016

@JasonBock What is the target framework of your test program? I just removed the framework from the NuGet package, and that did not help with the error either.


@JasonBock commented on Sat Nov 26 2016

@OsirisTerje 4.6.2


@JasonBock commented on Sat Nov 26 2016

@OsirisTerje if you removed nunit.framework from the analyzer project it wouldn't even compile - there are lots of places where I use nameof in the code....well, come to think of it, because of that, the names get resolved at compile time and the analyzers don't even reference nunit.framework at runtime.

Just not sure why you're having issues with this.... :(


@JasonBock commented on Sat Nov 26 2016

@OsirisTerje did you try launching the VSIX with the debugger and see where the code fails? You should be able to do that.


@OsirisTerje commented on Sat Nov 26 2016

@JasonBock VSIX: No, not yet. I'll try that.
Package: I didn't remove it from the project, I kept the package there, I just removed it from the produced package output.


@CharliePoole commented on Sat Nov 26 2016

@JasonBock VS2015 allows targeting .NET 2.0 plus... are you saying users whose tests target 2.0 through 4.0 can't use this feature?

But in any case, if you reference the framework at runtime, you have to reference some version and runtime build of it. The user could be referencing some other version and/or build. Is that a problem, or is your code executing somewhere where it won't interfere with the user test AppDomain?

If you don't reference the framework at runtime, that's easy... just don't distribute it with the packages.


@JasonBock commented on Sat Nov 26 2016

@CharliePoole Yes, analyzers are only for 4.6 and up I believe. If you look at 4.6 projects you'll see an "analyzers" node within the project. This doesn't exist in previous projects. There's also a new command line switch to csc that allows you to pass in analyzers so, yes, this isn't available with previous project versions.

The analyzers aren't run at test time. They're run as users are editing and building code.

I still think the analyzers should come along with the nunit.framework package at some point, rather than being a separate NuGet package. It's distributed and versioned with the framework itself.


@CharliePoole commented on Sat Nov 26 2016

@JasonBock Well, that could limit the number of versions needed to 4.5, portable and netstandard. What counts here is not what the user is targeting, but which framework version they reference. Most people building for 4.6 and above probably reference the nunit 4.5 build, but I have 4.5 tests that reference the portable build as well. So having a relation between the analyzer build and a framework build makes sense.

We would probably not want to bundle it, however, since we don't in other cases. We generally make use of inter-package dependencies in NuGet. Since users may be targeting different runtimes in different projects, this makes it sound as if VSIX distribution is not such a great idea for us.


@JasonBock commented on Sat Nov 26 2016

@CharliePoole if you don't bundle it, then developers will have to know that the analyzers exist and need to do a separate step to include them. But again, that's not my call.

I would definitely NOT want to distribute as a VSIX purely because any errors flagged by an analyzer won't stop the build process. It's too coarse-grained for my tastes. Keeping it as a NuGet package is usually the right approach for analyzers.


@JasonBock commented on Sat Nov 26 2016

@CharliePoole BTW you said 4.5....there's a NUnit 4.5 version? The latest one I see on NuGet is 3.5....or were you referring to .NET 4.5?


@CharliePoole commented on Sat Nov 26 2016

@JasonBock I'd be all for bundling it in our distributions that bundle other stuff - e.g. msi and zip - but not in NuGet. Folks who choose to use NuGet are essentially choosing a more granular way of installing. So, for example, if you just install the framework, you don't get nunitlite, even though nunitlite maps to the framework version for version. The downside is that you may not even know nunitlite exists, but that's the tradeoff based on how users choose to install.

Yes, I was talking about the .NET 4.5 build of the 3.5 framework.


@JasonBock commented on Sat Nov 26 2016

@CharliePoole Analyzers arguably are meant to enforce practices and guidelines for a particular framework. You can also write general-purpose analyzers that are "tied" to the entire .NET framework, in which case you can't bundle them.

Again, though we don't have to worry about that right now. Frankly, I'd much rather get feedback on the analyzer code itself and how they work (or don't) :)


@ChrisMaddock commented on Sat Nov 26 2016

I'd like to see the analyzers bundled in the framework NuGet packages. From what I remember, NuGet has specific functionality to install analyzers and interact with Roslyn where relevant, and ignore them otherwise. It wouldn't introduce any extra references, and users can still turn them off if they don't want to be notified of their compile errors, for any reason?!

I also agree with the discussion above: errors should just be used for what would be runtime errors. Classic assertions should just have the code fix available in the least intrusive way, in my opinion (Info?) - as we haven't yet deprecated them, and from what I gather, some people still prefer them.

@JasonBock - I'll try and have a test run in the next couple of days, looking forward to seeing them in action!


@JasonBock commented on Sat Nov 26 2016

I changed the severity for the classic model assertions to Info, and then VS doesn't give any indication in the UI that something could change. You do get the light bulb on the line, but not until you give focus to that line of code or look at the message list in the Error List. Making it a Warning at least gives a squiggle in VS, so I'm thinking that is the right setting.


@CharliePoole commented on Sat Nov 26 2016

I'm in the middle of setting up a new development laptop, but I'll also try it out as soon as I can.

@JasonBock The issue I have with the transformation of some of the asserts is that we have never told users they should not use them. On the contrary, Assert.True is one of a set that we actively encouraged people to use. Others are Assert.False, Assert.Null, Assert.NotNull. But that's all academic at the moment. If you give us a good vehicle, we can write lots of issues for what we would like to see included in it.

It sounds like Info level is more like an optional refactoring than an error or warning message.

@ChrisMaddock What you say makes it sound like a good choice. Can't the bundling be done via dependencies, however?


@JasonBock commented on Sat Nov 26 2016

@CharliePoole you can also create refactorings (a different extensibility project type), and you're right, this may be a better fit for a refactoring. But if we want to really get people onto the constraint model and ultimately get rid of the classic model, doing it as a warning - with an easy way to apply that analyzer's fix solution-wide with one click of a menu option - may be the right approach. Even if you keep the analyzer as an Info, you get the VS feature to apply it to all instances in a solution; refactorings don't have that capability.


@CharliePoole commented on Sat Nov 26 2016

@JasonBock There was a time, about 7 years ago, when I wanted to get rid of Classic asserts. In fact, I was calling them legacy asserts back then. User feedback strongly favored keeping them and so I dropped the goal of dropping them. :-)


@JasonBock commented on Sat Nov 26 2016

@CharliePoole I'm curious as to why they wanted to keep them. Was one of the reasons that changing over to the constraint model would be too time-consuming? Or, "hey, my test code works, why force me to change?" (which is a valid reason).


@CharliePoole commented on Sun Nov 27 2016

That's one reason, but some people just don't like the constraint-based syntax. I didn't think I would like it myself, but got talked into implementing it. Tool-builders have to be more flexible than users. :-)

BTW, the fluent syntax was created as part of the separate NUnitLite project and then migrated back to NUnit. With NUnit 3, we merged nunitlite entirely into NUnit again. Anyway, the old NUnitLite did drop most of the classic asserts but kept Assert.True, Assert.False, Assert.Null, Assert.NotNull, Assert.AreEqual and a few others. In my own work, I use the constraint-based syntax 90% of the time, but I still use those few classic asserts plus Assert.Throws, since it gives you the exception for further testing.


@ChrisMaddock commented on Sun Nov 27 2016

@ChrisMaddock What you say makes it sound like a good choice. Can't the bundling be done via dependencies, however?

Could be, yes. Although, if we get the obtrusiveness balance right, I think they'd be better bundled in the same package, purely for discoverability. Hopefully we should be able to bundle something which is helpful to everyone - although it sounds like we'll need to consider the performance issues @OsirisTerje mentioned.

I changed the severity for the classic model assertions to Info, and then VS doesn't give any indication in the UI that something could change.

That's irritating - I was hoping it would give the little dotted line; that must be a ReSharper thing. Although yes, until the point where we explicitly deprecate the classic syntax, I don't think warnings would be right. Doing such a thing would be a bigger change than anything from NUnit 2 -> 3, and sounds unfeasible/unpopular. (Although I'd very much be in favour!)


@ChrisMaddock commented on Sun Nov 27 2016

One other thought, could we perhaps write some analysers and finally deprecate/spin-out/remove AssertionHelper? nunit/nunit#1212


@JasonBock commented on Sun Nov 27 2016

@ChrisMaddock definitely could be done. One thing I'd like to do is start using the nunit.analyzers wiki to document analyzers, and issues to propose analyzer ideas. This could go there.

Really, though, what I'd like right now is for people to try the analyzers that currently exist with projects they have and see how they work. Weed out bugs. Provide code review suggestions. That sort of thing. Then more analyzers can be written.


@CharliePoole commented on Sun Nov 27 2016

@JasonBock No problem doing any of that in the wiki or in issues, but keep in mind we may end up putting the analyzer somewhere else - actually, we will if we follow your recommendation to bundle it. It's not a real big job to move issues or wiki pages, but keep it in mind.


@JasonBock commented on Sun Nov 27 2016

@CharliePoole true, I've been debating this in my head as well. If we put too much into the current repo and then we move it, it may be a PITA to move certain assets over. So.....maybe I don't do too much of that right now.


@JasonBock commented on Sun Nov 27 2016

What I am proposing, though, is that this issue should be closed as there are now analyzers for NUnit, at least in flight :). Having specific discussions on the design and implementation of analyzers and the overall deployment story should probably move to separate issues on the nunit.analyzers issues page for now.


@CharliePoole commented on Mon Nov 28 2016

@JasonBock Many of us have been tied up with other things so your actual code isn't getting much review so far. Hang on for a bit and once a few of us have managed to review and exercise it we'll be ready to make a decision about where it goes and I think things will start to flow more quickly after that.


@ChrisMaddock commented on Mon Nov 28 2016

Had a quick play just now - posted a couple of definite issues in the repo. I'm seeing the same FileIO exceptions as @OsirisTerje - although I'm working with an internal test library which has a reference to nunit.framework, so it could well be that something is getting crossed there. I wonder if it would be worth factoring out the nunit.framework reference, at least temporarily, while we're testing? That should hopefully allow us to just drop the package into places for now - until such a point as this is packaged appropriately with a framework. (No worries if that's actually a big job!)

The NullReferenceException seems more likely to be something separate, stack trace below. (Assembly name redacted, because work!)

Error AD0001  Analyzer 'NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer' threw an exception of type 'System.NullReferenceException' with message 'Object reference not set to an instance of an object.'. XXXX   1 Active  Analyzer 'NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer' threw the following exception:
'Exception occurred with following context:
Compilation: XXXXXXX
SyntaxTree: XXXXXXX
SyntaxNode: Test [AttributeSyntax]@[762..766) (21,9)-(21,13)

System.NullReferenceException: Object reference not set to an instance of an object.
   at NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer.AnalyzeAttribute(SyntaxNodeAnalysisContext context)
   at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.<>c__DisplayClass42_1`1.<ExecuteSyntaxNodeAction>b__1()
   at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.ExecuteAndCatchIfThrows_NoLock(DiagnosticAnalyzer analyzer, Action analyze, Nullable`1 info)
-----
'.

Unfortunately, I don't seem to be able to test out the classic model refactorings. :-( When I double-click the error to open the file, the red squiggly line appears for about a second before disappearing again - and I don't seem to be able to access the quick fix. I jumped straight into a pretty big test suite (12k tests), so maybe it's some sort of overload thing? I don't know how to get any sort of debug info out of it, sorry - let me know if there's anything I can look at!

Still a big fan of the concept - having static checking of some run-time errors would be a great time saver! 😄


@tom-dudley commented on Thu May 11 2017

For configuration of rules, analyzers can access Additional Files which looks to be the(?) recommended method: Using Additional Files

StyleCop Analyzers uses this for Configuring StyleCop Analyzers
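
For reference, an analyzer reads such files through its options. A minimal sketch - the configuration file name here is hypothetical:

public override void Initialize(AnalysisContext context)
{
    context.RegisterCompilationStartAction(startContext =>
    {
        // AdditionalFiles exposes the files listed as <AdditionalFiles> items in the project.
        var configFile = startContext.Options.AdditionalFiles
            .FirstOrDefault(f => Path.GetFileName(f.Path) == "nunit.analyzers.config"); // hypothetical name
        var text = configFile?.GetText(startContext.CancellationToken);
        // ...parse settings from 'text' and register further actions accordingly
    });
}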


@CharliePoole commented on Thu May 11 2017

Can we close this issue so future discussion gets pushed to the analyzers project?


@rprouse commented on Thu May 18 2017

Yes, let's close this. We have a contributor project working on analyzers now. Any feedback should be entered in that project.

Make Pack work

Extend the cake script to be able to pack a NuGet package.

Ensure at least one (relevant) constructor is public

Description:

A class marked with [TestFixture] must have at least one public constructor, and any methods marked with [Test] or [TestCase] must be public.

Example:

[TestFixture]
internal class MyTests 
{ 
  [Test]
  internal void TestMethod() { }
}

To fix it:

[TestFixture]
public class MyTests 
{ 
  [Test]
  public void TestMethod() { }
}

Analyzer Message:

"A constructor for a test fixture class must be public."
"A test method must be public."

Code Fixes:

Change the visibility value for the constructor or test method to public.

NUNIT_7 does not cater for Task and ValueTask

The NUNIT_7 rule currently reports an error for

[TestCase(false, ExpectedResult = 942)]
[TestCase(true, ExpectedResult = 10)]
public Task<int> Sample(bool input)
{
     return Task.FromResult(1);
}

The analyzer needs to cater for async methods and check for Task<T> and ValueTask<T> in this case.

Would you accept a PR to fix this?
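
A sketch of the check the analyzer would need - assuming it unwraps the method's return type before comparing it against the ExpectedResult value; the names are illustrative:

// Unwrap Task<T> / ValueTask<T> before checking the ExpectedResult type.
var returnType = methodSymbol.ReturnType as INamedTypeSymbol;
if (returnType != null && returnType.IsGenericType &&
    (returnType.ConstructedFrom.Name == "Task" || returnType.ConstructedFrom.Name == "ValueTask"))
{
    returnType = returnType.TypeArguments[0] as INamedTypeSymbol;
}
// ...then verify that the ExpectedResult value is assignable to returnType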

TestCaseUsageAnalyzer throws NullReferenceException for non-existent attribute

The TestCaseUsageAnalyzer throws a NullReferenceException for any attribute which doesn't exist. I found this out when I stopped typing partway through the word TestCase for long enough that the analyzer kicked in and tried to find a TestC attribute!

I'm not sure this 'breaks' anything as such, but it does fill up my errors window. Unfortunately, the warnings don't clear once you type the rest of the attribute. 👎 This is VS2015 Update 3.

Repro:

using System;
using NUnit.Framework;

namespace UnitTestProject1
{
    [TestFixture]
    public class UnitTest1
    {
        [TestC]
        public void TestMethod1(string hey, double left)
        {
        }
    }
}

StackTrace:

Warning	AD0001	Analyzer 'NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer' threw an exception of type 'System.NullReferenceException' with message 'Object reference not set to an instance of an object.'.	UnitTestProject1		1	Active	Analyzer 'NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer' threw the following exception:
'Exception occurred with following context:
Compilation: UnitTestProject1
SyntaxTree: c:\Users\Chris\Documents\Visual Studio 2015\Projects\UnitTestProject1\UnitTestProject1\UnitTest1.cs
SyntaxNode: TestC [AttributeSyntax]@[135..140) (8,9)-(8,14)

System.NullReferenceException: Object reference not set to an instance of an object.
   at NUnit.Analyzers.TestCaseUsage.TestCaseUsageAnalyzer.AnalyzeAttribute(SyntaxNodeAnalysisContext context) in C:\Users\Chris\Documents\git\nunit.analyzers\src\nunit.analyzers\TestCaseUsage\TestCaseUsageAnalyzer.cs:line 53
   at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.<>c__DisplayClass42_1`1.<ExecuteSyntaxNodeAction>b__1()
   at Microsoft.CodeAnalysis.Diagnostics.AnalyzerExecutor.ExecuteAndCatchIfThrows_NoLock(DiagnosticAnalyzer analyzer, Action analyze, Nullable`1 info)
-----
'.

Refine cake configuration

In #17 we added the initial cake configuration. But we took some shortcuts:

  • The Clean target was commented out
  • Some of the paths could be more refined and less hardcoded

Ensure Classes With TestFixtureAttribute Have at Least One Test

Description:

If a class is marked with [TestFixture], it must have at least one method marked with [Test] or [TestCase].

Example:

[TestFixture]
public class MyTests { }

To fix it:

[TestFixture]
public class MyTests
{
  [Test]
  public void TestMethod() { }
}

Analyzer Message:

"A test fixture class must have at least one test method."

Default Severity Level:

Error

Code Fixes:

none

Suggestion: Analyzer Should Identify Duplicated TestNames

Consider the following example:

namespace BadNUnitExamples
{
    using NUnit.Framework;
    class NUnitTestCaseDataResult
    {
        [TestCase(1, 1, ExpectedResult = 0, TestName = "Test")]
        [TestCase(2, 2, ExpectedResult = 0, TestName = "Test")]
        public int Subtraction(int a, int b)
        {
            return a - b;
        }
    }
}

This causes the NUnit Adapter (as of 3.11.2.0) to emit a warning alerting you that this test will not be added to the Test Window.

[11/27/2018 11:44:39 AM Informational] ------ Discover test started ------
[11/27/2018 11:44:40 AM Informational] NUnit Adapter 3.11.2.0: Test discovery starting
[11/27/2018 11:44:40 AM Informational] NUnit Adapter 3.11.2.0: Test discovery complete
[11/27/2018 11:44:40 AM Warning] A test with the same name 'BadNUnitExamples.NUnitTestCaseDataResult.Test' already exists. This test is not added to the test window.
[11/27/2018 11:44:40 AM Informational] ========== Discover test finished: 4 found (0:00:00.8339919) ==========

It would be desirable to have this analyzer alert you to this possibility.

There are other scenarios in which you can encounter this same behavior, such as:

namespace BadNUnitExamples
{
    using NUnit.Framework;
    using System.Collections.Generic;

    class NUnitTestCaseDataStatic
    {
        public static IEnumerable<TestCaseData> TestData
        {
            get
            {
                yield return new TestCaseData(1, 1).Returns(2).SetName("TestAddition");
                yield return new TestCaseData(2, 2).Returns(4).SetName("TestAddition");
            }
        }

        [TestCaseSource(nameof(TestData))]
        public int Addition(int a, int b)
        {
            return a + b;
        }
    }
}

Which will also yield a similar warning

[11/27/2018 11:53:33 AM Informational] ------ Discover test started ------
[11/27/2018 11:53:34 AM Informational] NUnit Adapter 3.11.2.0: Test discovery starting
[11/27/2018 11:53:34 AM Informational] NUnit Adapter 3.11.2.0: Test discovery complete
[11/27/2018 11:53:34 AM Warning] Unable to fetch source Information for test method: BadNUnitExamples.NUnitTestCaseDataStatic.TestAddition contained in project: BadNUnitExamples.
[11/27/2018 11:53:34 AM Warning] Unable to fetch source Information for test method: BadNUnitExamples.NUnitTestCaseDataStatic.TestAddition contained in project: BadNUnitExamples.
[11/27/2018 11:53:34 AM Warning] A test with the same name 'BadNUnitExamples.NUnitTestCaseDataResult.Test' already exists. This test is not added to the test window.
[11/27/2018 11:53:34 AM Warning] A test with the same name 'BadNUnitExamples.NUnitTestCaseDataStatic.TestAddition' already exists. This test is not added to the test window.
[11/27/2018 11:53:34 AM Informational] ========== Discover test finished: 5 found (0:00:01.0549237) ==========

Analyzer for TestCaseAttribute should also work for generic TestCases with deduced types

The analyzer should also handle the following (from nunit\src\NUnitFramework\tests\Internal\DeduceTypeArgsFromArgs.cs). Currently, it gives a warning for 5 ("The value of the argument at position 0 cannot be assigned to the argument t1") and a similar warning for the second argument.

    [Category("Generics")]
    [TestFixture(100.0, 42)]
    [TestFixture(42, 100.0)]
    public class DeduceTypeArgsFromArgs<T1, T2>
    {
        T1 t1;
        T2 t2;

        public DeduceTypeArgsFromArgs(T1 t1, T2 t2)
        {
            this.t1 = t1;
            this.t2 = t2;
        }

        [TestCase(5, 7)]
        public void TestMyArgTypes(T1 t1, T2 t2)
        {
            Assert.That(t1, Is.TypeOf<T1>());
            Assert.That(t2, Is.TypeOf<T2>());
        }
    }

Simplify test in this project

Hi,

I was trying to add a new fix + test, but IMO adding a test case to this project is pretty complicated.

In my opinion:

  1. The test class names are really too long

  2. The tests in TestCaseUsageAnalyzerTests use too much nameof

  3. The BasePath in TestCaseUsageAnalyzerTests is a bit of magic

  4. The test cases are duplicated and hard to read, even in the simple case

  5. RunAnalysisAsync should not accept an Action<ImmutableArray<Diagnostic>>, but rather the expected diagnostics (of type IEnumerable<Diagnostic>), as sketched below
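
A sketch of the signature change suggested in point 5 (the string parameter is a guess at the existing shape; names are illustrative):

    // Current shape: the caller inspects the diagnostics in a callback.
    Task RunAnalysisAsync(string code, Action<ImmutableArray<Diagnostic>> handler);

    // Suggested shape: the caller declares the expected diagnostics up front.
    Task RunAnalysisAsync(string code, IEnumerable<Diagnostic> expectedDiagnostics);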

Analyzer for TestCaseAttribute should also work for generic TestCases

The analyzer should also handle the following (from nunit\src\NUnitFramework\tests\Attributes\TestCaseAttributeTests.cs). Currently, it gives a warning for 1 ("The value of the argument at position 0 cannot be assigned to the argument arg1") and for the ExpectedResult ("The ExpectedResult value cannot be assigned to the return type, T").

        [TestCase(1, ExpectedResult = 1)]
        public T TestWithGenericReturnType<T>(T arg1)
        {
            return arg1;
        }
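
As with the deduced-type-args issue above, one possible shape for the fix is to skip validation while the types are still open type parameters. A minimal sketch, assuming the analyzer exposes the parameter and return type symbols (parameterSymbol and returnTypeSymbol are illustrative names, not the analyzer's actual code):

    // An open type parameter (T) cannot be validated until it is bound, so
    // skip both the argument check and the ExpectedResult check.
    if (parameterSymbol.Type.TypeKind == TypeKind.TypeParameter ||
        returnTypeSymbol.TypeKind == TypeKind.TypeParameter)
    {
        return;
    }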

Update README.md

We should at least do the following:

  • Add badges to build status
  • Link to "Code of Conduct"
  • Update section "Building"
  • Link to "Contributing"
  • An image to make the presentation nicer
  • Perhaps a "Downloads" section with link to myget (when we have nuget functionality)

Make conversions work in netcoreapp2.0

The analyzer should also handle the following (from nunit\src\NUnitFramework\tests\Attributes\TestCaseAttributeTests.cs). Currently, it gives warnings for the parameters ("The value of the argument at position 0 cannot be assigned to the argument x", and similar for position 1 and y) and for the ExpectedResult ("The ExpectedResult value cannot be assigned to the return type, ___").

Note that there are no red squiggles, so perhaps the analyzer also crashes.

Edit: The problem is only for the netcoreapp2.0 build and not for the other builds, so something is different for netcoreapp2.0 (probably due to reflection).

        [TestCase(2, 2, ExpectedResult=4)]
        public double CanConvertIntToDouble(double x, double y)
        {
            return x + y;
        }

        [TestCase("2.2", "3.3", ExpectedResult = 5.5)]
        public decimal CanConvertStringToDecimal(decimal x, decimal y)
        {
            return x + y;
        }

        [TestCase(2.2, 3.3, ExpectedResult = 5.5)]
        public decimal CanConvertDoubleToDecimal(decimal x, decimal y)
        {
            return x + y;
        }

        [TestCase(5, 2, ExpectedResult = 7)]
        public decimal CanConvertIntToDecimal(decimal x, decimal y)
        {
            return x + y;
        }

        [TestCase(5, 2, ExpectedResult = 7)]
        public short CanConvertSmallIntsToShort(short x, short y)
        {
            return (short)(x + y);
        }

        [TestCase(5, 2, ExpectedResult = 7)]
        public byte CanConvertSmallIntsToByte(byte x, byte y)
        {
            return (byte)(x + y);
        }

        [TestCase(5, 2, ExpectedResult = 7)]
        public sbyte CanConvertSmallIntsToSByte(sbyte x, sbyte y)
        {
            return (sbyte)(x + y);
        }
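
For reference, the conversions exercised above correspond to plain runtime conversions like the following (illustrative only; NUnit's own conversion path may differ, which is presumably where netcoreapp2.0 behaves differently):

    decimal fromInt    = Convert.ToDecimal(5);      // int -> decimal
    decimal fromDouble = Convert.ToDecimal(2.2);    // double -> decimal
    decimal fromString = decimal.Parse("2.2", System.Globalization.CultureInfo.InvariantCulture);
    short   fromSmall  = Convert.ToInt16(5);        // small int -> short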

Add myget feed to tooling

AppVeyor should upload the different builds (master and PRs) to MyGet. This is waiting on #36, which enables packaging of the NuGet packages.

We probably also need to rethink how we represent the version, as AppVeyor is currently at version 1 (+ the build number).

NRE in AttributeArgumentSyntaxExtensions.CanAssignTo

AttributeArgumentSyntaxExtensions.CanAssignTo fails with a NullReferenceException on the following test (from nunit\src\NUnitFramework\tests\Attributes\TestCaseAttributeTests.cs):

[TestCase("x", ExpectedResult = new []{"x", "b", "c"})]
[TestCase("x", "y", ExpectedResult = new[] { "x", "y", "c" })]
[TestCase("x", "y", "z", ExpectedResult = new[] { "x", "y", "z" })]
public string[] HandlesOptionalArguments(string s1, string s2 = "b", string s3 = "c")
{
    return new[] {s1, s2, s3};
}

The problem is that @this.Expression as LiteralExpressionSyntax evaluates to null since the type of @this.Expression is ImplicitArrayCreationExpressionSyntax (the syntax is new []{"x", "b", "c"}).
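
A minimal null-safe sketch of that spot, assuming @this is the AttributeArgumentSyntax extension receiver as in the issue:

    // Pattern matching avoids the unguarded cast that produced the NRE.
    switch (@this.Expression)
    {
        case LiteralExpressionSyntax literal:
            // existing single-value handling
            break;
        case ImplicitArrayCreationExpressionSyntax arrayCreation:
            // handle new [] { ... } used for ExpectedResult
            break;
    }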

Remove false positive from ExpectedResult when used in connection with async tests

The following tests are correct, but the analyzer gives the warning "The ExpectedResult value cannot be assigned to the return type, Task`1".

    [TestCase(ExpectedResult = 1)]
    public async Task<int> AsyncGenericTaskTestCaseWithExpectedResult()
    {
      return await Task.Run(() => 1);
    }

and

    [TestCase(ExpectedResult = 1)]
    public Task<int> GenericTaskTestCaseWithExpectedResult()
    {
      return Task.Run(() => 1);
    }
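
A possible fix, sketched under the assumption that returnType holds the method's return ITypeSymbol (the names are illustrative, not the analyzer's actual code): unwrap Task<T> and compare ExpectedResult against T instead.

    // Check ExpectedResult against T rather than Task<T>.
    if (returnType is INamedTypeSymbol named &&
        named.IsGenericType &&
        named.Name == "Task" &&
        named.ContainingNamespace.ToDisplayString() == "System.Threading.Tasks")
    {
        returnType = named.TypeArguments[0];
    }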

Reuse TestCaseAttribute logic for ValuesAttribute

We should reuse the logic that tests whether the value of a TestCaseAttribute matches the argument of the test method, so that it also covers ValuesAttribute. The following currently gives an analyzer error on "a".

    [TestCase("a")]
    [TestCase(-4.0)]
    [TestCase(-9.0)]
    public void Test1(double d)
    {
...
    }

But for the following there are no analyzer errors

    [Test]
    public void Test2([Values("a", -4.0, -9.0)] double d)
    {
...
    }

Warning if actual type does not match expected type

Currently there are no type checks for the actual and expected values, so this is quite a common mistake (at least for me), which an analyzer could help to avoid.

Basic case:
Assert.That(arg1, Is.EqualTo(arg2))
arg1 and arg2 should be of the same type (at runtime).
From the analyzer's point of view the easiest way, I believe, is to check ClassifyConversion between the type symbols.
Exceptions: conversions between number types (e.g. int-double is fine) and between strings and numbers. Anything else?
The case where a custom comparer is provided should be skipped as well.

Other possible asserts:
Collection asserts (contains, subset, superset, etc.), probably others.
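
A minimal sketch of the ClassifyConversion idea, assuming we have the SemanticModel, the expected-value expression, and the actual value's ITypeSymbol (semanticModel, expectedExpression, and actualType are illustrative names):

    // If no conversion exists, the two values can never compare equal.
    var conversion = semanticModel.ClassifyConversion(expectedExpression, actualType);
    if (!conversion.Exists)
    {
        // report a diagnostic: actual and expected types are incompatible
    }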

TestCase analyser interprets number types differently to framework

(screenshot: TestCase attributes with numeric arguments flagged by the analyzer)

The analyser incorrectly flags the above TestCases as not being doubles, which NUnit is happy to accept.

With this and #1, I wonder if this is entering something of a rabbit hole. Maybe we should revisit exposing and sharing this functionality from the framework - although from what you said before @JasonBock, that may be technically impossible? What do you think? 😄

NUNIT_7 Throws Unexpectedly On Integer to Decimal Conversions

This may be a duplicate of #50

Consider the following Test Program

namespace NUNIT_7TriggersOnDecimal
{
    using NUnit.Framework;

    [TestFixture]
    public class NUNIT7_TriggersOnDecimalTests
    {
        // Throws NUNIT_7
        [TestCase(-600)]
        public void AssertNegativeDecimal(decimal input)
        {
            Assert.That(input, Is.EqualTo(-600m));
        }

        // No Problem Here
        [TestCase(600)]
        public void AssertPositiveDecimal(decimal input)
        {
            Assert.That(input, Is.EqualTo(600m));
        }
    }
}

Part of the problem is that you cannot pass a decimal in an attribute (see dotnet/roslyn#16898), and therefore our unit tests were always written to use an integer constant.

I would not expect NUNIT_7 to throw on the first test case (AssertNegativeDecimal) because it does not throw on the second case. Any thoughts?
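
One plausible explanation (an assumption on my part, not confirmed in the issue): -600 parses as a unary-minus expression wrapping the literal 600, so a check that only handles plain literals misses negative values. Sketch:

    // -600 is a PrefixUnaryExpressionSyntax, not a LiteralExpressionSyntax.
    if (expression is PrefixUnaryExpressionSyntax unary &&
        unary.IsKind(SyntaxKind.UnaryMinusExpression) &&
        unary.Operand is LiteralExpressionSyntax literal)
    {
        // negate literal.Token.Value before running the conversion check
    }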

Analyzer for TestCaseAttribute should work when passed 1 to 3 arguments

The analyzer should also handle the following (from nunit\src\NUnitFramework\tests\Attributes\TestCaseAttributeTests.cs). Currently, it gives the warning "There are too many arguments provided from test TestCaseAttribute for the method".

        [TestCase("a", "b")]
        public void ArgumentsAreCoalescedInObjectArray(object[] array)
        {
            Assert.AreEqual("a", array[0]);
            Assert.AreEqual("b", array[1]);
        }

        [TestCase(1, "b")]
        public void ArgumentsOfDifferentTypeAreCoalescedInObjectArray(object[] array)
        {
            Assert.AreEqual(1, array[0]);
            Assert.AreEqual("b", array[1]);
        }
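
One way the arity check could special-case this, assuming the analyzer has the method's parameter symbols (parameters is an illustrative name): when the sole parameter is object[], NUnit coalesces all TestCase arguments into a single array, so any argument count is valid.

    if (parameters.Length == 1 &&
        parameters[0].Type is IArrayTypeSymbol arrayType &&
        arrayType.ElementType.SpecialType == SpecialType.System_Object)
    {
        return; // the arguments will be coalesced into the object[] parameter
    }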

Add an Analyzer to Verify ParallelScope Usage

According to this doc, values of ParallelScope are only valid depending on which member the [Parallelizable] attribute is applied to. For example, ParallelScope.Children cannot be used on methods. Writing an analyzer to inform the developer of improper value usage would be a nice thing to have.
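
An illustrative example of the misuse such an analyzer would flag, based on the restriction above:

    // ParallelScope.Children is not valid on a simple test method, so the
    // proposed analyzer would flag this attribute usage.
    [Test]
    [Parallelizable(ParallelScope.Children)]
    public void MyTest() { }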

Edit by mikkelbu: Corrected link.
