tryatsoftware / cleantests

TryAtSoftware.CleanTests is a modern framework for execution and generation of automated tests in .NET

License: MIT License

C# 100.00%
automated-testing combinatorics dotnet test-generation

cleantests's Introduction


About the project

TryAtSoftware.CleanTests is a library designed to simplify automated testing for complex setups.

A recurring pattern we have observed in some advanced projects is that whenever new features are added or old ones are refactored, adding new tests or modifying existing ones can become a tough challenge.

One of the private projects that uses our library has many polymorphic components, and every concrete implementation has completely different logic. There were two main test assemblies - an old one (let's call it Standard for brevity) where standard testing patterns were applied, and a new one (let's call it Clean for brevity) integrating TryAtSoftware.CleanTests. Since we worked on them independently, we could easily compare the two approaches.

In the past, there were multiple test assemblies with more than 1500 tests that took over 10 minutes to execute. TryAtSoftware.CleanTests was integrated in a separate assembly, which made the two approaches easy to compare. Moreover, the more components and logical branches there are in the code, the fewer scenarios end up covered.

After finalizing the integration, we could notice the following:

Criteria                        Standard test assembly    Clean test assembly
Number of written tests         > 1500                    < 100
Number of test cases            < 1700                    > 20000
Execution time (approximate)    15 minutes                20-25 minutes
Code coverage (approximate)     < 20%                     > 80%
Number of bugs found            0                         > 20

As you can see, the difference is substantial! With much less effort we managed to achieve remarkable results. If we had stuck to the Standard testing approach in order to increase the code coverage and the number of test cases while optimizing performance, we would have had to write a lot of code. And if we had to do the same for every new piece of functionality, that would slow down the software development process significantly. TryAtSoftware.CleanTests gave us an alternative approach to automated testing that not only improved the quality of the product but also saved us a lot of time that we could invest in adding more features.

The main goals that have been accomplished are:

  • Automatic generation of test cases using all proper combinations of registered clean utilities
  • Every clean utility can define external demands that represent conditions about what other clean utilities should be present within a combination in order to generate a test case with it
  • Every clean utility can depend internally on other clean utilities
  • Every clean utility can define internal demands that represent conditions about what clean utilities should be injected upon initialization
  • Every clean utility can define outer demands that represent conditions oriented towards the superior utilities
  • Global and local clean utilities - local clean utilities are instantiated for every test case; global clean utilities are instantiated only once and can be used to share common context between similar test cases
  • Parallel execution of test cases

About us

Try At Software is a software development company based in Bulgaria. We are mainly using dotnet technologies (C#, ASP.NET Core, Entity Framework Core, etc.) and our main idea is to provide a set of tools that can simplify the majority of work a developer does on a daily basis.

Getting started

Installing the package

Before writing any clean tests, you need to install the package. The simplest way to do this is to use either the NuGet package manager or the dotnet CLI.

Using the NuGet package manager console within Visual Studio, you can install the package using the following command:

Install-Package TryAtSoftware.CleanTests

Or using the dotnet CLI from a terminal window:

dotnet add package TryAtSoftware.CleanTests

Configurations

In order to use the features of this library, there is one mandatory step. Your test assembly should be decorated with an attribute defining which test framework should be used for the execution of test cases. Add the following line anywhere in your project (most commonly within an AssemblyInfo.cs file):

[assembly: Xunit.TestFramework("TryAtSoftware.CleanTests.Core.XUnit.CleanTestFramework", "TryAtSoftware.CleanTests.Core")]

Modifying behavior

Additionally, you can modify the behavior of the clean tests execution framework using the ConfigureCleanTestsFramework attribute. Here is the list of parameters that can be controlled:

  • UtilitiesPresentations - A value used to control the presentation of the clean utilities used to generate a test case. The default value is CleanTestMetadataPresentations.None. For a detailed description see the Metadata presentation section.
  • GenericTypeMappingPresentations - A value used to control the presentation of the generic types configuration used for the execution of a test case. The default value is CleanTestMetadataPresentations.InTestCaseName. For a detailed description see the Metadata presentation section.
  • MaxDegreeOfParallelism - A value representing the maximum number of test cases executed in parallel. It should always be positive. There is no concrete formula that can be used to determine which is the most optimal value - it depends on the characteristics of the executing machine, specifics related to the test environment and many other circumstances. The default value is 5.

Example:

[assembly: TryAtSoftware.CleanTests.Core.Attributes.ConfigureCleanTestsFramework(UtilitiesPresentations = CleanTestMetadataPresentations.InTraits, GenericTypeMappingPresentations = CleanTestMetadataPresentations.InTraits | CleanTestMetadataPresentations.InTestCaseName, MaxDegreeOfParallelism = 3)]

Moreover, the execution behavior of clean tests can be finely controlled by utilizing the ExecutionConfigurationOverride attribute. This attribute allows you to apply overrides either for all test methods within a given class, or for individual test methods. Currently, the only parameter that can be controlled is the MaxDegreeOfParallelism (however, it is worth noting that this will be enhanced in future releases).

There are several scenarios where this capability is beneficial:

  • If we need to use multiple threads within a single test case, it is often useful to reduce the max degree of parallelism.

Example:

[CleanFact]
[ExecutionConfigurationOverride(MaxDegreeOfParallelism = 1)]
public async Task OperationShouldSucceedInParallel()
{
    const int parallelTasksCount = 1_000;
    var tasks = new Task[parallelTasksCount];
    
    for (int i = 0; i < parallelTasksCount; i++) tasks[i] = ExecuteOperation();
    
    await Task.WhenAll(tasks);
    AssertState();
}

Metadata presentation

The CleanTestMetadataPresentations enum offers three options for configuring how test metadata is presented by the clean tests execution framework:

  • CleanTestMetadataPresentations.None - Test metadata will not be included as a part of a test case.
  • CleanTestMetadataPresentations.InTestCaseName - Test metadata will be included within the display name of a test case.
  • CleanTestMetadataPresentations.InTraits - Test metadata will be included within the traits of a test case.

This is a flag enumeration, i.e. test metadata presentation methods can be easily combined. For example, this is a valid test metadata presentation method: CleanTestMetadataPresentations.InTestCaseName | CleanTestMetadataPresentations.InTraits.

Enabling test metadata presentation methods often has a performance impact on the discovery process when dealing with a large number of tests, because of the additional data that must be stored with every test case.

What are the clean utilities?

The clean utility is the key component of our library. Every clean utility has a required category and name.

One test may require utilities from many categories. The corresponding test cases will be generated using unique combinations of utilities from the required categories.

Every clean utility can be marked as local or global. Local clean utilities will be instantiated at least once for every test case requiring their participation. Global clean utilities will be instantiated only once for all test cases sharing a common context.

Moreover, every clean utility can optionally define its own characteristics. These characteristics can be used to filter the utilities that we want to use when generating the cases for a given test. They often correspond to essential aspects of the requested component's behavior. We use demands to make sure that the capabilities our test needs are present in the resolved utilities used to execute it.

In order to use a type as a clean utility, it should be marked with the CleanUtility attribute that accepts a category, a name, and optional characteristics. You can also explicitly set a value for the IsGlobal flag, as shown in the second example below.

Example:

[CleanUtility(Categories.Writers, "Console writer", Characteristics.UsesConsole, Characteristics.ActiveWriter)]
public class ConsoleWriter : IWriter
{
    public void Write(string text) => Console.WriteLine(text);
}

[CleanUtility(Categories.Writers, "File writer", Characteristics.UsesFile, Characteristics.ActiveWriter)]
public class FileWriter : IWriter
{
    public void Write(string text) => File.WriteAllText("C:/path_to_document", text);
}

[CleanUtility(Categories.Writers, "Fake writer")]
public class FakeWriter : IWriter
{
    public void Write(string text) { /* Do nothing */ }
}
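
Global clean utilities are declared in the same way. A minimal sketch, assuming IsGlobal is exposed as a named property of the CleanUtility attribute (as suggested above) and reusing the illustrative Categories and Characteristics constants from the previous example:

[CleanUtility(Categories.Writers, "In-memory writer", Characteristics.ActiveWriter, IsGlobal = true)]
public class InMemoryWriter : IWriter
{
    // Instantiated only once and shared by all test cases with a common context.
    private readonly List<string> _lines = new();

    public void Write(string text) => this._lines.Add(text);
}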

All clean utilities should be located within the test assembly. If this is not possible, the test assembly should be explicitly decorated with an attribute denoting where the shared clean utilities are defined.

[assembly: TryAtSoftware.CleanTests.Core.Attributes.SharesUtilitiesWith("Assembly.With.Shared.CleanUtilities")]

Dependencies

Every clean utility can depend on other clean utilities. This relationship can be modelled through the WithRequirements attribute.

When generating test cases, each unique instantiation procedure (i.e. the resolution of dependencies) for a given clean utility will be presumed as a separate member of the combinatorial set. For example, if the dependencies of a given utility can be resolved in N different ways, the generation process will use all N different instantiation procedures as if they were different utilities.
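
For illustration, here is a minimal sketch of a dependent clean utility, assuming (as in the Engine example further below) that the required dependencies are injected through the constructor; ValidatingConsoleReader and its behavior are purely hypothetical:

[CleanUtility(Categories.Readers, "Validating console reader")]
[WithRequirements(Categories.Writers)]
public class ValidatingConsoleReader : IReader
{
    private readonly IWriter _writer;

    // If N utilities are registered for the `Writers` category, this reader yields
    // N distinct instantiation procedures - one per resolvable dependency set.
    public ValidatingConsoleReader(IWriter writer)
    {
        this._writer = writer ?? throw new ArgumentNullException(nameof(writer));
    }

    public string Read()
    {
        string text = Console.ReadLine() ?? string.Empty;
        this._writer.Write($"Read: {text}");
        return text;
    }
}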

External demands

Every clean utility can define external demands through the ExternalDemands attribute. These demanded characteristics alter the way combinations of clean utilities are generated - all external demands should be satisfied by all utilities participating in the combination.

Example:

[CleanUtility(Categories.Readers, "Console reader")]
[ExternalDemands(Categories.Writers, Characteristics.UsesConsole)]
public class ConsoleReader : IReader
{
    public string Read() => Console.ReadLine();
}

Internal demands

Every clean utility can define internal demands through the InternalDemands attribute. This type of demanded characteristics can be used to filter the clean utilities that will be injected as dependencies.

Example:

[CleanUtility(Categories.Engines, "Default engine")]
[WithRequirements(Categories.Readers, Categories.Writers)]
[InternalDemands(Categories.Writers, Characteristics.ActiveWriter)]
public class Engine : IEngine
{
    private readonly IReader _reader;
    private readonly IWriter _writer;
    
    public Engine(IReader reader, IWriter writer)
    {
        this._reader = reader ?? throw new ArgumentNullException(nameof(reader));
        this._writer = writer ?? throw new ArgumentNullException(nameof(writer));
    }
    
    /* further implementation of the `IEngine` interface... */
}

Outer demands

Every clean utility can define outer demands through the OuterDemands attribute. This type of demanded characteristics can be used to model conditions oriented towards utilities in the outer scope (also known as superior utilities).
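
A minimal sketch, assuming the OuterDemands attribute accepts a category and a set of demanded characteristics (analogously to ExternalDemands and InternalDemands) and using a hypothetical Characteristics.Buffered characteristic defined for the superior Engines utilities:

[CleanUtility(Categories.Writers, "Buffered writer", Characteristics.ActiveWriter)]
[OuterDemands(Categories.Engines, Characteristics.Buffered)]
public class BufferedWriter : IWriter
{
    // This writer will only participate in combinations where the superior `Engines`
    // utility declares the hypothetical `Buffered` characteristic.
    public void Write(string text) => Console.Write(text);
}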

How to use clean tests?

This library is built atop xUnit, so if you are familiar with how that framework operates, you are most likely ready to use clean tests. There are only two requirements:

  • The test should be marked with either CleanFact (instead of Fact) or CleanTheory (instead of Theory).

You can still use tests that are marked with other attributes, however, they will be executed as standard tests and will have none of the behavior clean tests can benefit from.

  • The type containing the requested test should implement the ICleanTest interface. We suggest reusing the abstract CleanTest class that we expose, as it makes accessing instances of the registered clean utilities easier and relieves you from handling various internal processes.

Clean tests can define requirements representing the set of categories for which clean utilities should be provided. The WithRequirements attribute can be used in order to achieve that.

Clean tests can also define demands to filter out only a specific subset of the clean utilities that can be used for the generation of test cases. The TestDemands attribute can be used in order to achieve that - for each category a set of demanded characteristics can be defined.

Example:

[CleanFact]
[WithRequirements(Categories.Writers)]
[TestDemands(Categories.Writers, Characteristics.ActiveWriter)]
public void WriteShouldSucceed()
{
    IWriter writer = this.GetService<IWriter>();
    writer.Write("Some text");
}
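
The same applies to data-driven tests. A minimal sketch using CleanTheory, assuming that (as with a regular xUnit theory) each data row produces its own set of generated test cases:

[CleanTheory]
[InlineData("Some text")]
[InlineData("")]
[WithRequirements(Categories.Writers)]
[TestDemands(Categories.Writers, Characteristics.ActiveWriter)]
public void WriteShouldSucceedForAnyInput(string text)
{
    IWriter writer = this.GetService<IWriter>();
    writer.Write(text);
}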

Helpful Links

For additional information on troubleshooting, migration guides, answers to Frequently asked questions (FAQ), and more, you can refer to the Wiki pages of this project.

Acknowledgements

We appreciate the effort of everyone who made valuable contributions to this project with their ideas, suggestions, opinions, and source code.

Furthermore, special thanks to JetBrains for supporting this project!



cleantests's Issues

Use DI for global utilities

Currently, this is not implemented perfectly, which causes trouble when we try to instantiate more complex global utility instances, e.g. ones that depend on multiple other global utilities.

Global utilities should be able to access other global utilities they are combined with

This is again a feature related to the process of pre-configuring global utilities. In some cases global utilities need to cooperate with others.
We need to be able to determine, for each global utility, which other global utilities it cooperates with, according to the set of tests that are about to be executed.
It would also be nice to know the maximum degree of parallelism in order to reduce the excessive amount of slots that will be preconfigured. See #36 for more details.

Improve tests covering `Outer demands`

Currently, we have two types of tests - testing at the root level or one level deeper. We would like to extend this so we can validate that everything is implemented correctly regardless of the depth.

Provide information about the unique `testId`

The testId is a number indicating the slot index for a given test. This can be useful whenever multiple tests are executed in parallel and we want to uniquely identify the resources used by every one of the currently executed cases.

Refine the parallel execution per test

Describe your idea
Some test definitions are designed to work with multiple threads. In this case, having multiple test cases executed in parallel would be an issue. A simple mechanism to refine the parallel execution settings per test case would solve this problem.

Limit the number of combinations produced for each test case

Add functionality to the Combinatorial machine that will calculate the number of combinations that would be eventually generated.
We need this in order to limit the number of generated test cases, because even in some "simple" situations this could be a problem. Imagine a setup with 10 categories of 5 clean utilities each. All possible combinations are 5^10 = 9,765,625. We do not want to support this (unless it is explicitly allowed), because the generation process would be slow in such situations and most likely the UI tools for test execution would fail to handle it.

Global utilities should know about the maximum number of test cases executed in parallel

Currently, there is a problem related to the fact that initialization of resources is happening as a part of the test execution.
The global utilities should be pre-configured. Because test cases may be executed in parallel, most global utilities are initialized with slots that are used proportionally. The number of these slots is equal to the MaxDegreeOfParallelism configuration parameter.

Member data in generic tests is not correctly consumed

Describe the bug
I have a generic test class that defines theories with MemberData (pointing to a member within the same class). However, they are not executing for some reason. If I start using InlineData, everything is OK.

Expected behavior
I expect to be able to use MemberData even within generic test classes.

Environment (please complete the following information):

  • OS: [e.g. Windows 11, Ubuntu 23.04]
  • IDE: Visual Studio 2022, JetBrains Rider 2023.1
  • Version: 1.0.0-alpha.12

Outer demands

Describe your idea
This is a new type of demands that could be applied to manage dependencies between clean utilities. Outer demands should address the principal clean utility's characteristics. They are the opposite of Inner demands.
