powershell / powershell-rfc

RFC (Request for Comments) documents for community feedback on design changes and improvements to PowerShell ecosystem

License: MIT License


powershell-rfc's Introduction

PowerShell-RFC

RFC documents for community feedback on proposed changes and improvements

See https://github.com/PowerShell/powershell-rfc/blob/master/RFC0000-RFC-Process.md for process information

Along with making PowerShell open source, we are also inviting the community to author RFCs on proposed design changes (instead of having long threads in issues). The PowerShell Team will meet about once a week (depending on the amount of feedback) to review the feedback and respond.

We'll continue to refine this process as we learn from it.

powershell-rfc's People

Contributors

adityapatwardhan, anmenaga, corbob, darwinjs, daxian-dbw, hemantmahawar, isazonov, jameswtruher, jaykul, joeyaiello, justingrote, jzeiders, kirkmunro, lzybkr, mklement0, msadministrator, onewingedshark, paulhigin, rjmholt, sdwheeler, sethvs, stevel-msft, stevel-powershell, stevenbucher98, sydneyhsmith, thejasonhelmick, tibmeister, travisez13, vexx32, vors


powershell-rfc's Issues

RFC for pluggable logging model

Per #106, this is to track the need for an RFC that would create a pluggable provider model for logging. Quoting me from that thread:

@PowerShell/powershell-committee discussed this one in detail today. We believe that there's massive value in figuring out how to tee logs off to remote logging providers, but a bunch of the semantics in this RFC are specific to Splunk.

Instead, we think an RFC should be authored that builds a provider/plugin model for allowing multiple remote logging providers. This RFC should also give consideration to whether local logging targets should be treated the same way (e.g. if you want to exclusively log to a remote target, maybe in serverless/stateless scenarios). There should also be a consideration of whether we should have something like a Get-PSLog or Write-PSLog that are agnostic to logging providers.

The PowerShell Team has a strong interest in picking this up, but we have no idea when we can get to it right now. We only know that it should be a flexible, pluggable model.

RFC: Extract DSC from Existing Environments

--
RFC: RFC00xx
Author: Nik Charlebois

Reverse Engineering Existing Environment into DSC

Users have been complaining about the learning curve to get started with DSC and about the amount of work it takes for them to re-write their existing technology investment onto DSC for monitoring purposes. There have been various community projects to solve this issue by extracting DSC configurations out of existing environments, such as ReverseDSC, which had incredible success with clients wanting to adopt Configuration as Code practices within their organizations.

The purpose of this RFC is to propose that a fourth function, one that retrieves all instances of a resource from an existing environment, be required in DSC resources for them to qualify as HQRM (High Quality Resource Modules).

Motivation

  • Reduce amount of work required to on-board existing environments onto DSC
  • Reduce learning curve to get started with DSC
  • Provide an automated path to migrate on-premises environments onto Azure
  • Allow users to easily document their environment’s configuration as code
  • Quickly compare configuration deltas between two environments

Specifications

There are three components to the requested solution:

  • Component 1 needs to iterate through all instances of the current resource and pass the Get-TargetResource function the key parameters it needs to retrieve all other parameters associated with each resource instance.
  • Component 2 needs to take the hashtables returned by Get-TargetResource and convert them into a DSC string. This is already automated and handled by the existing ReverseDSC Core module: https://github.com/microsoft/reversedsc
  • Component 3 needs to orchestrate the calls to each resource's Components 1 and 2 and collect the results into a single output file: a DSC script that can be used to replicate the environment.

The vision is as follows:
Components 1 and 2 would be combined into a new fourth function required by every HQRM resource, call it Export-TargetResource. Component 3 would be a global utility module, contained within the DSC module, which would orchestrate all the calls to each module's Export-TargetResource functions (see SharePointDSC.Reverse at https://github.com/microsoft/sharepointdsc.reverse for a reference).
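The proposed pattern might be sketched roughly as follows. All names other than Get-TargetResource and Export-TargetResource (the enumerator, the converter, and the resource parameters) are hypothetical placeholders; the real contract would be defined by the HQRM guidelines and the ReverseDSC Core module.

```powershell
function Export-TargetResource {
    [CmdletBinding()]
    param ()

    $content = [System.Text.StringBuilder]::new()

    # Component 1: enumerate every instance of this resource in the current
    # environment and query its full state via Get-TargetResource.
    # Get-AllResourceInstances is a hypothetical enumerator.
    foreach ($instance in Get-AllResourceInstances) {
        $state = Get-TargetResource -Name $instance.Name

        # Component 2: convert the returned hashtable into a DSC string
        # (handled in practice by the ReverseDSC Core module).
        # ConvertTo-DSCBlock is a hypothetical converter.
        $block = ConvertTo-DSCBlock -Params $state
        [void]$content.AppendLine($block)
    }

    # Component 3 would collect these strings from every resource's
    # Export-TargetResource into one replayable DSC script.
    return $content.ToString()
}
```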

RFC Static Website generator

Can we add an enhancement to this GitHub repository with a static web page builder to provide a basic RFC home page?

I'm not asking for a "corporate" website, just a way to improve readability over raw Markdown.


RFC Proposal: Make terminating errors terminate the right way in PowerShell

Make terminating errors terminate the right way in PowerShell

By default in PowerShell, terminating errors do not actually terminate. For example, if you invoke this command in global scope, you will see the output "Why?" after the terminating error caused by the previous command:

& {
    $string = 'Hello'
    $string.Substring(99)
    'Why?'
}

PowerShell has upheld this behaviour since version 1.0 of the language. You can make the terminating error actually terminate execution of the command by wrapping it in try/catch, like this:

try {
    $string = 'Hello'
    $string.Substring(99)
    'Why?'
} catch {
    throw
}

You can also convert the terminating-yet-handled-as-a-non-terminating error into an actual terminating error like this:

& {
    $ErrorActionPreference = 'Stop'
    $string = 'Hello'
    $string.Substring(99)
    'Why?'
}

In those last two examples, the exception raised by the .NET method terminates execution of the running command.

The difference between the first example and the workarounds poses a risk to scripters who share scripts or modules with the community.

In the first workaround, the risk is that end users of a shared script or module may see different behaviour from its logic depending on whether or not they were inside a try block when they invoked the script or a command exported by the module. That risk is undesirable enough that many community members who share scripts/modules wrap their logic in try/catch { throw } (or similar) scaffolding to ensure that their code behaves consistently no matter where or how it is invoked.

In the second workaround, if the shared script does not use $ErrorActionPreference = 'Stop', a caller can get different behaviour by manipulating their $ErrorActionPreference. The caller should not be able to manipulate terminating error behavior in commands that they invoke -- that's up to the command author, and they shouldn't have to use extra handling to make terminating errors terminate.
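The second risk can be seen in a short sketch (the function name is illustrative). Because preference variables flow into the functions a caller invokes, the caller's $ErrorActionPreference changes how code they did not write behaves:

```powershell
function Invoke-SharedLogic {
    [CmdletBinding()]
    param()
    $string = 'Hello'
    $string.Substring(99)      # "terminating" error... or is it?
    'This should never run'
}

# With the default preference, execution continues past the error and
# 'This should never run' is emitted after the error message:
$ErrorActionPreference = 'Continue'
Invoke-SharedLogic

# By changing a preference variable in their own scope, the caller makes
# the same error actually terminate execution:
$ErrorActionPreference = 'Stop'
Invoke-SharedLogic
```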

Now consider this code snippet:

New-Module -Name ThisShouldNotImport {
    # The next line uses "Generics" in the type name, when it should use "Generic"
    $myList = [System.Collections.Generics.List[string]]::new()

    function Test-RFC {
        [CmdletBinding()]
        param()
        'Some logic'
        1 / 0 # Oops
        'Some more logic'
    }
} | Import-Module

There are several things wrong with this snippet:

  1. If you invoke that snippet, the ThisShouldNotImport module imports successfully because the terminating error ([System.Collections.Generics.List[string]] is not a valid type name) does not actually terminate the loading of the module. This can leave your module loaded in an unexpected state, which is a bad idea.
  2. If you loaded your module implicitly by invoking a command it defines, the terminating error raised during module loading is not shown at all, so that command may behave in undesirable ways even though loading the module generated a "terminating" error, and you won't have a clue why.
  3. The Test-RFC command exported by this module produces a terminating error, yet continues to execute after that error.
  4. If the caller either loads your module or invokes your command inside of a try block, they will see different behaviour.

Any execution of code beyond a terminating error should be intentional, not accidental like it is in both of these cases, and it most certainly should not be influenced by whether or not the caller loaded the module or invoked the command inside of a try block. Binary modules do not behave this way. Why should script modules be any different?

Now have a look at the same module definition, this time with some extra scaffolding in place to make sure that terminating errors actually terminate:

New-Module -Name ThisShouldNotImport {
    trap {
        break
    }
    $myList = [System.Collections.Generic.List[string]]::new()

    function Test-RFC {
        [CmdletBinding()]
        param()
        $callerEAP = $ErrorActionPreference
        try {
            'Some logic'
            1 / 0 # Oops
            'Some more logic'
        } catch {
            Write-Error -ErrorRecord $_ -ErrorAction $callerEAP
        }
    }
} | Import-Module

With this definition, if the script module generates a terminating error, the module will properly fail to load (note, however, that the type name has been corrected in case you want to try this out). Further, if the command encounters a terminating error, it will properly terminate execution and the error returned to the caller will properly indicate that the Test-RFC command encountered an error. This scaffolding is so helpful that members of the community apply it to every module and every function they define within that module, just to get things to work the right way in PowerShell.

All of this is simply absurd. Any script module that generates a terminating error in the module body should fail to import without extra effort, with an appropriate error message indicating why it did not import. Any advanced function defined within a script module that encounters a terminating error should terminate gracefully, such that the error message indicates which function the error came from, without requiring extra scaffolding code to make it work that way.

Between the issues identified above, and the workarounds that include anti-patterns (naked try/catch blocks and trap{break} statements are anti-patterns), the PowerShell community is clearly in need of a solution that automatically resolves these issues in a non-breaking way.

Motivation

As a script, function, or module author,
I can write scripts with confidence knowing that terminating errors will terminate those commands the right way, without needing to add any scaffolding to correct inappropriate behaviours in PowerShell
so that I can keep my logic focused on the work that needs to be done.

User experience

The way forward for this issue is to add an optional feature (see the RFC proposal for optional features in PowerShell) that makes terminating errors terminate correctly. The script below demonstrates that a manifest can be generated with the ImplicitTerminatingErrorHandling optional feature enabled, and with that enabled the module author can write the script module and the advanced functions in that module knowing that terminating errors will be handled properly. No scaffolding is required once the optional feature is enabled, because it will correct the issues that need correcting to make this just work the right way, transparently.

$moduleName = 'ModuleWithBetterErrorHandling'
$modulePath = Join-Path -Path $([Environment]::GetFolderPath('MyDocuments')) -ChildPath PowerShell/Modules/${moduleName}
New-Item -Path $modulePath -ItemType Directory -Force > $null
$nmmParameters = @{
    Path = "${modulePath}/${moduleName}.psd1"
    RootModule = "./${moduleName}.psm1"
    FunctionsToExport = @('Test-ErrorHandling')
}

#
# Create the module manifest, enabling the optional ImplicitTerminatingErrorHandling feature in the module it loads
#
New-ModuleManifest @nmmParameters -OptionalFeatures ImplicitTerminatingErrorHandling

$scriptModulePath = Join-Path -Path $modulePath -ChildPath "${moduleName}.psm1"
New-Item -Path $scriptModulePath -ItemType File | Set-Content -Encoding UTF8 -Value @'
    # If the next command is uncommented, Import-Module would fail when trying to load
    # this module due to the terminating error actually terminating like it should
    # 1/0 # Oops!
    function Test-ErrorHandling {
        [cmdletBinding()]
        param()
        'Some logic'
        # The next command generates a terminating error, which will be treated as
        # terminating and Test-ErrorHandling will fail properly, with the error text
        # showing details about the Test-ErrorHandling invocation rather than details
        # about the internals of Test-ErrorHandling.
        Get-Process -Id 12345678 -ErrorAction Stop
        'Some more logic'
    }
'@

Module authors who want this behaviour in every module they create can invoke the following command to make it default for module manifests created with New-ModuleManifest.

Enable-OptionalFeature -Name ImplicitTerminatingErrorHandling -NewModuleManifests

Scripters wanting the behaviour in their scripts can use the #requires statement:

#requires -OptionalFeatures ImplicitTerminatingErrorHandling

Specification

Implementation of this RFC would require the following:

Implementation of optional feature support

See the RFC proposal for optional features in PowerShell for more information.

Addition of the ImplicitTerminatingErrorHandling optional feature definition

This would require adding the feature name and description in the appropriate locations so that the feature can be discovered and enabled.

PowerShell engine updates

The PowerShell engine would have to be updated such that:

  • scripts invoked with the optional feature enabled treat terminating errors as terminating
  • scripts and functions with CmdletBinding attributes when this optional feature is enabled treat terminating errors as terminating and gracefully report errors back to the caller (i.e. these commands should not throw exceptions)

Alternate proposals and considerations

Make this optional feature on by default for new module manifests

This feature is so useful that I would recommend it as a best practice. If making it just work this way globally wouldn't incur a breaking change in PowerShell, I would want it to always work that way by default. Since making it work this way globally would incur a breaking change, my recommendation is to make this optional feature on in new module manifests by default so that anyone not wanting it to work this way has to turn the optional feature off. That corrects the behaviour going forward while allowing authors of older modules/scripts to opt in to the feature when they are ready.

Related issue

PowerShell Issue #9855 is very closely related to this RFC, and it would be worth considering fixing that issue as part of this RFC if it is not already resolved at that time.

RFC for pluggable script block/AMSI logging model

from #161/ #106

@PowerShell/powershell-committee discussed this one in detail today. We believe that there's massive value in figuring out how to tee ScriptBlock and AMSI style logs off to remote logging providers, but a bunch of the semantics in this RFC are specific to Splunk.

Instead, we think an RFC should be authored that builds a provider/plugin model for allowing multiple remote logging providers. This RFC should also give consideration to whether local logging targets should be treated the same way (e.g. if you want to exclusively log to a remote target, maybe in serverless/stateless scenarios). There should also be a consideration of whether we should have something like a Get-PSLog or Write-PSLog that are agnostic to logging providers.

The PowerShell Team has a strong interest in picking this up, but we have no idea when we can get to it right now. We only know that it should be a flexible, pluggable model.

ConvertFrom/To-Hashtable and/or ConvertTo-PSCustomObject cmdlets

Related discussion in #109, specifically this comment:

Finally got around to reviewing this with @PowerShell/powershell-committee.

It sounds like the problem that's being solved here is to convert nested hashtables to PSCustomObjects when the keys are all strings (when they're not strings, conversion will likely result in key collision due to ToString() returning type names).

To that end, we think it's more generically useful to build a ConvertFrom-Hashtable cmdlet that would allow more generic and custom conversion of hashtables to PSCustomObjects and potentially other types using constructors that expect specific hashtable "schemas". Another possible implementation is to do a ConvertTo-PSCustomObject that does more custom handling (and these aren't mutually exclusive). Additionally, a ConvertTo-Hashtable is probably useful.

We don't have this on our radar right now, but if someone wants to give a go at a ConvertFrom-Hashtable RFC and implementation, go for it.

We think the scenario we're trying to address is primarily casting nested hashtables of hashtables into PSCustomObjects (or possibly even other types).
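A minimal sketch of what such a conversion might look like, assuming all keys are strings. The function name is hypothetical; no ConvertFrom-Hashtable cmdlet ships with PowerShell today:

```powershell
function ConvertFrom-HashtableSketch {
    param([hashtable]$InputObject)

    $properties = @{}
    foreach ($key in $InputObject.Keys) {
        $value = $InputObject[$key]
        # Recurse into nested hashtables so the result is a
        # PSCustomObject all the way down.
        if ($value -is [hashtable]) {
            $value = ConvertFrom-HashtableSketch -InputObject $value
        }
        $properties[$key] = $value
    }
    [pscustomobject]$properties
}

$obj = ConvertFrom-HashtableSketch @{ Name = 'test'; Nested = @{ Depth = 2 } }
$obj.Nested.Depth   # 2
```

A real implementation would also need a policy for non-string keys, which (as the committee notes above) can collide once ToString() is applied.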

RFC Proposal: Optional features in PowerShell

There are several important issues in the PowerShell language that cannot be corrected without introducing breaking changes. At the same time, the number of breaking changes introduced in a new version of PowerShell needs to be as minimal as possible, so that there is a low barrier to adoption of new versions, allowing community members to transition scripts and modules across versions more easily. Given that those two statements are in conflict with one another, we need to consider how we can optionally introduce breaking changes into PowerShell over time.

PowerShell has support for experimental features, which some may think covers this need; however, the intent of experimental features is to allow the community to try pre-release versions of PowerShell with breaking changes that are deemed necessary or new features that are not necessarily fully tested/polished so that they can more accurately assess the impact of those features. For release versions of PowerShell, an experimental feature has one of three possible outcomes:

  1. The experimental feature is deemed necessary, adequately tested/polished, and accepted by the community as not harmful to adoption of new versions, in which case the experimental feature is no longer marked as experimental.
  2. The experimental feature is deemed necessary, and adequately tested/polished, but considered harmful to adoption of new versions, in which case the experimental feature is changed to an optional feature.
  3. The experimental feature is deemed not useful enough, in which case the experimental feature is deprecated.

In some cases a breaking change may be implemented immediately as an optional feature, when it is known up front that such a breaking change would be considered harmful to adoption of new versions of PowerShell if it was in place by default yet is still found important enough to implement as an optional feature.

Given all of that, we need to add support for optional features in PowerShell so that what is described above becomes a reality.

As an example of a feature that will be optional if implemented, see the RFC proposal to allow execution preferences to persist beyond module or script scope or the RFC proposal to make terminating errors terminate the right way in PowerShell.

Motivation

As a script, function, or module author,
I can enable optional features for specific users or in specific scopes,
So that I can leverage important new functionality that includes breaking changes in my code without risk of breaking other scripts, functions or modules.

User experience

# Create a module manifest, specifically enabling or disabling one or more optional
# features in the manifest
New-ModuleManifest `
    -Path ./test.psd1 `
    -OptionalFeatures @(
        'OptionalFeature1',
        @{Name='OptionalFeature2';Enabled=$false}
    ) `
    -PassThru | Get-Content

# Output:
#
# @{
#
# <snip>
#
# # Optional features enabled or disabled in this module.
# OptionalFeatures = @(
#     'OptionalFeature1'
#     @{Name='OptionalFeature2';Enabled=$false}
# )
#
# }

# Create a script file, enabling or disabling one or more optional features in the file
@'
#requires -OptionalFeature OptionalFeature1
#requires -OptionalFeature OptionalFeature2 -Disabled

<snip>
'@ | Out-File -FilePath ./test.ps1

# Get the current optional feature configuration for the current user and all users
Get-OptionalFeatureConfiguration

# Output:
#
#     For: AllUsers
#
# Name                              Session                  NewManifest
# ----                              ------                   -----------
# OptionalFeature1                  False                    True
# OptionalFeature2                  True                     False
# OptionalFeature4                  True                     True
# OptionalFeature5                  False                    False
#
#
#     For: CurrentUser
#
# Name                              Session                  NewManifest
# ----                              ------                   -----------
# OptionalFeature2                  False                    True
# OptionalFeature3                  False                    True
#

# Get a list of optional features, their source, and their descriptions
Get-OptionalFeature

# Output:
#
# Name                            Source                  Description
# ----                            ------                  -----------
# OptionalFeature1                PSEngine                Description of optional feature 1
# OptionalFeature2                PSEngine                Description of optional feature 2
# OptionalFeature3                PSEngine                Description of optional feature 3
# OptionalFeature4                PSEngine                Description of optional feature 4

# Enable an optional feature in current and future PowerShell sessions for all
# users in PowerShell.
Enable-OptionalFeature -Name OptionalFeature1 -For AllUsers

# Output:
# None

# Disable an optional feature in current and future PowerShell sessions for the
# current user in PowerShell.
Disable-OptionalFeature -Name OptionalFeature1 -For CurrentUser

# Output:
# None, unless the feature was explicitly enabled for all users and is being
# disabled only for the current user as an override, as is the case here,
# in which case they get prompted to confirm. This is described below.

# Enable an optional feature in all new module manifests created with
# New-ModuleManifest in the current and future PowerShell sessions for any user
# in PowerShell.
Enable-OptionalFeature -Name OptionalFeature2 -For AllUsers -InNewModuleManifests

# Output:
# None

# Enable an optional feature in all new module manifests created with
# New-ModuleManifest in the current and future PowerShell sessions for the
# current user in PowerShell.
Disable-OptionalFeature -Name OptionalFeature3 -InNewModuleManifests

# Output:
# None

# Enable and disable an optional feature the duration of the script block being
# invoked.
Use-OptionalFeature -Enable OptionalFeature1 -Disable OptionalFeature2 -ScriptBlock {
    # Do things using OptionalFeature1 here
    # OptionalFeature2 cannot be used here
}
# If OptionalFeature1 was not enabled before this invocation, it is still
# not enabled here. If OptionalFeature2 was enabled before this invocation,
# it is still enabled here. In short, their state before the call is
# preserved.

Specification

Unlike experimental features, which can only be enabled or disabled in PowerShell sessions created after enabling or disabling them, optional features can be enabled or disabled in the current PowerShell session as well as in future PowerShell sessions. This is necessary to allow certain functionality to be "lit up" in packaged modules or scripts.

Below you will find details describing how this functionality will be implemented.

System and User powershell.config.json configuration files

Enabling optional features automatically in future PowerShell sessions requires creating or updating one of two powershell.config.json configuration files that are read on startup of a new PowerShell session:

  • one in $PSHOME, which applies to all user sessions
  • one in $HOME\Documents\PowerShell\powershell.config.json on Windows or $HOME/.config/powershell/powershell.config.json on Linux and macOS, which applies only to current user sessions.

This RFC will enable optional feature defaults to be read from these configuration files, with current user configuration taking precedence over system (all users) configuration. System config is not policy so this should be acceptable and expected.
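For illustration, a user-level powershell.config.json with such defaults might look like the fragment below. The "OptionalFeatures" key and its shape are an assumption of this sketch, not something the RFC specifies; only keys like "Microsoft.PowerShell:ExecutionPolicy" exist in the file today.

```json
{
  "Microsoft.PowerShell:ExecutionPolicy": "RemoteSigned",
  "OptionalFeatures": {
    "OptionalFeature1": { "Session": true, "NewManifest": true },
    "OptionalFeature2": { "Session": false, "NewManifest": false }
  }
}
```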

Add parameter to New-ModuleManifest

[-OptionalFeatures <object[]>]

This parameter would configure specific optional features in the new module manifest that is generated.

The values provided to this parameter would be combined with optional features that are enabled or disabled by default according to the session configuration files, with the values specified in the New-ModuleManifest command overriding the settings for optional features with the same name in the configuration files. Entries in this collection would be either a string (the name of the optional feature to enable) or a hashtable with two keys: Name (a string) and Enabled (a boolean value). The hashtable allows an optional feature to be explicitly disabled instead of enabled within a module, which is necessary if an older module does not yet support a newer optional feature.

A terminating error is generated if the same optional feature name is used twice in the collection passed into the -OptionalFeatures parameter.

Add parameter set to #requires statement

#requires -OptionalFeatures <string[]> [-Disabled]

This parameter set would enable, or disable if -Disabled is used, the optional features named in the -OptionalFeatures value for the current script file.

New command: Get-OptionalFeatureConfiguration

Get-OptionalFeatureConfiguration [[-Name] <string[]>] [-For { CurrentUser | AllUsers }]
[<CommonParameters>]

This command would return the current configuration of optional features that are available in PowerShell, read from the configuration files.

The properties on the S.M.A.OptionalFeatureConfiguration object would be Name, Session, NewManifest, and For, defined as follows:

| Property | Description |
| --- | --- |
| Name | A string value that identifies the optional feature name |
| Session | A boolean value that identifies whether the optional feature is enabled or disabled in the current and new PowerShell sessions |
| NewManifest | A boolean value that identifies whether the optional feature is enabled or disabled in manifests created by New-ModuleManifest in the current and new PowerShell sessions |
| For | An enumeration flag identifying whether the optional feature configuration was set up for the CurrentUser or AllUsers |

The default output format is of type table with the properties Name, Session, and NewManifest with the results grouped by For.

When this command is invoked with the -For parameter, the results are automatically filtered for that configuration file. The default value for -For is both flags, showing configuration values from both configuration files.

New command: Get-OptionalFeature

Get-OptionalFeature [[-Name] <string[]>] [<CommonParameters>]

This command will return a list of the optional features that are available in PowerShell, along with their source and description.

The properties on the S.M.A.OptionalFeature object would be Name, Source, Description, defined as follows:

| Property | Description |
| --- | --- |
| Name | A string value that identifies the optional feature name |
| Source | A string value that identifies the area of PowerShell that is affected by this optional feature |
| Description | A string value that describes the optional feature |

The default output format would be of type table with the properties Name, Source, and Description.

Enabling and disabling optional features in current and future PowerShell sessions

Enable-OptionalFeature [-Name] <string[]> [-For { CurrentUser | AllUsers }]
[-InNewModuleManifests] [-WhatIf] [-Confirm] [<CommonParameters>]

Disable-OptionalFeature [-Name] <string[]> [-For { CurrentUser | AllUsers }]
[-InNewModuleManifests] [-WhatIf] [-Confirm] [<CommonParameters>]

It is important to note up front that there are three default states for an optional feature: enabled by default, implicitly disabled by default, and explicitly disabled by default. The only time an optional feature needs to be explicitly disabled by default is if it is enabled by default in the AllUsers configuration file and a specific user wants to disable it for their sessions. This impacts how Disable-OptionalFeature works.

The Enable-OptionalFeature command will enable an optional feature in current and future PowerShell sessions either globally (if the -InNewModuleManifests switch is not used) or only in manifests created by New-ModuleManifest.

The Disable-OptionalFeature command will disable an optional feature in current and future PowerShell sessions either globally (if the -InNewModuleManifests switch is not used) or only in manifests created by New-ModuleManifest. The behaviour depends on the current configuration:

  • If the feature is being disabled for AllUsers and the optional feature is completely disabled in that configuration file as a result of this command, the entry is removed from the configuration file.
  • If the feature is being disabled for AllUsers and there is no entry in the system (all users) configuration file, nothing happens.
  • If the feature is being disabled for the CurrentUser and there is no entry in the system (all users) or current user configuration files, nothing happens.
  • If the feature is being disabled for the CurrentUser and the optional feature is enabled in the AllUsers configuration file, the user is informed that the feature is enabled for all users and asked to confirm that they want to explicitly disable it for the current user in current and future PowerShell sessions. They can always re-enable it later.

New command: Use-OptionalFeature

Use-OptionalFeature [[-Enable] <string[]>] [[-Disable] <string[]>] [-ScriptBlock]
<ScriptBlock> [-Confirm] [<CommonParameters>]

This command would enable or disable the optional features whose names are identified in the -Enable and -Disable parameters for the duration of the ScriptBlock identified in the -ScriptBlock parameter, and return the features to their previous state afterwards. This allows for easy use of optional features over a small section of code. If neither -Enable nor -Disable is used, a terminating error is thrown.
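For example, a script could opt into a feature for just one section of code (the feature name and invoked script are hypothetical):

Use-OptionalFeature -Enable PersistCommandExecutionPreferences -ScriptBlock {
    # This code runs with the feature enabled; the previous
    # state is restored once the script block returns
    & ./build.ps1
}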

Checking optional feature states within the PowerShell runtime

Optional features can be enabled or disabled in a session, module, script, or script block. Since enabling or disabling an optional feature can happen at different levels, the current state of an optional feature should be maintained in a stack, where the validation logic simply peeks at the top of the stack to see if the feature is enabled or not, popping the top of the stack off when appropriate (when leaving the scope of the module, script, or script block where the feature is enabled).
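A minimal sketch of that bookkeeping (the variable name is illustrative, not part of the proposal):

# Entering a scope that changes the feature state pushes the new state
$featureState.Push($true)

# Validation logic anywhere in the engine only ever peeks at the top
if ($featureState.Peek()) {
    # apply the feature-specific behaviour
}

# Leaving the module/script/script block scope pops the state back off
$featureState.Pop() > $null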

Alternate proposals and considerations

Extend experimental features to support the enhancements defined in this RFC

At a glance, experimental features and optional features are very similar to one another, so it was proposed that it may make sense to have them both use the same functionality when it comes to enabling/disabling them in scripts and modules; however, experimental features have a completely different intent (to try out new functionality in a PowerShell session), are only for future PowerShell sessions, and they only have a single on/off state. On the other hand, optional features are for the current and future PowerShell sessions, for the current user or all users, and may be enabled or disabled in various scopes within those sessions. For that reason, this approach doesn't seem like a viable solution. If we want to change that, perhaps someone should file an RFC against experimental features to make that change.

Enable/disable optional features in jobs according to their current state when the job is launched

Jobs run in the background have their own session, and therefore will not automatically have optional features enabled or disabled according to the current state when the job is launched. We should consider updating how jobs are launched in the engine such that they do "inherit" optional feature configuration to allow them to use optional features without additional code.

You might think that you can accomplish this using #requires in a script block launched as a job, but that doesn't work -- the #requires statement is ignored when you do this at the moment. For example, see the results of the following script when launched from PowerShell Core:

Start-Job {
    #requires -PSEdition Desktop
    $PSVersionTable
} | Receive-Job -Wait

The result of that command shows that the version of PowerShell where the job was run was Core, not Desktop, yet the job ran anyway despite the #requires statement. This may be a bug. If it is, and if that bug is corrected, then you could use #requires to enable/disable features, but regardless it would still be preferable (and more intuitive) for jobs to "inherit" the current optional feature configuration when they are invoked. This includes jobs launched with Start-Job, Start-ThreadJob, the & background operator, parallelized foreach statements or ForEach-Object commands, or the generic -AsJob parameter.

Suggestions on how the RFC process could be improved

I originally shared these directly with @SteveL-MSFT and @joeyaiello, but better to share these thoughts in the open so that others can chime in.

The current RFC process is challenging, for a number of reasons:

  • working through the initial process while an RFC is in a PR takes way too long
  • managing and following discussions through a PR is very challenging
  • reading RFCs through a PR is a very challenging task (especially when one PR contains multiple RFCs)

For some reason, I always thought the process was to submit a PR, get some initial corrective feedback (if there were points of confusion, etc.), make updates, and get the RFC published in the draft state, with comments coming in an issue. That seems to not be the case, and RFCs remain in a PR state with long, disjointed discussions for quite some time before (if) they make it to the draft state with comments coming in an issue. I think that process is failing, so here's my take on some rough and some not-so-rough ideas on how I would improve the process:

First, a few structural changes to the repository to support what is proposed below, as follows:

  • copy the RFC template into .github/ISSUE_TEMPLATES/rfc-proposal.md
  • create two additional new templates in .github/ISSUE_TEMPLATES:
    • rfc-review.md (used when an RFC enters the review stage)
    • general.md (used for general issues with or questions about the repository or the process)
  • configure the repository to use the three templates for issues: RFC proposals, RFC reviews, and general questions or issues with the repository or about the process

Then, fix #190 to eliminate the dead link issue.

Next, implement #191 to provide a user-friendly reading view of the RFCs in the repository.

Structural changes out of the way, I'd add at least one maintainer who is willing to help keep the process moving (hide outdated/addressed comments when necessary to keep discussions clean, etc. -- see below).

Once that is in place, I would set up the following process:

  1. All RFC proposals are submitted as new Issues instead of as PRs.

    Benefits:

    • adds a needed stage that the current process does not convey well: a proposal will not necessarily become an RFC
    • conversation about the proposal is naturally threaded
    • individual comments in the issue discussion can be hidden by maintainers if they are out of
      date at some point (if the author updates the RFC in the issue, addressing the comment)
    • repository maintainers can directly edit issues (fix simple typos without creating conversation noise, for example)
    • issues can be closed if an RFC is deemed not appropriate, if the author needs to do some redesign or take a new approach, etc.
    • this leverages the "new" (been there a while now) GitHub model for multiple templates, where users can choose to submit an RFC proposal, automatically get the template, and type it in there or copy/paste/modify externally (or work with a copy of the template md file) and then paste the result back in

    Downsides:

    • comments cannot be hidden by the person who logged the issue, so a maintainer or repository owner needs to be involved (if only we knew someone who knew someone who worked at GitHub, who could influence better permissions, something they have been lacking for-e-v-e-r ;))
    • related RFCs cannot be published in one chunk (as I did under PR #187); some may see that as an upside as well, and I could have just as well done those under a bunch of individual issues that reference one another.
  2. Initial discussion about the RFC happens under the proposal, with the goal being to clean up the proposal and bring it to a draft state, add alternate proposals that are suggested by the community, etc. (note: there should be a formal specification identifying how to document which alternate proposals will not be pursued and why -- this isn't in the docs right now).

  3. After iterating over discussions and things settle down (time-bomb this to 2 weeks -- should be enough time given that this is just to get the RFC to initial draft, when there are more opportunities to review RFCs that are actually in a draft state), the author who posted the RFC proposal (or an automated bot that monitors time passed since RFC proposals show up as an issue) mentions the PSCommittee for review of the RFC. The committee checks the RFC for completeness, and one of three things happens:

    1. If it is complete, the committee asks the author (or a bot) to move the RFC to draft (i.e. to PR the RFC). If this process is automated with a bot (which would be ideal), the issue gets copied into a numbered RFC file in the correct location (hopefully in a flat structure -- see #190) with an appropriate name, the markdown document listing RFCs gets updated automatically, a new issue gets automatically created for comments on the draft RFC, referencing that file, and highlighting the deadline date for comments, plus notifications about the RFC being open for comment for 30 days are shared on social media. If it's not automated, that work would have to be done manually.
    2. If the RFC proposal is not complete, and if the author is unresponsive, the RFC proposal is closed as abandoned (it can always be reopened later).
    3. If the RFC is deemed inappropriate (maybe it needs rework or needs to be opened as a new RFC), the RFC is closed as proposal rejected. This could happen earlier in the process if discussions discover early on that the RFC proposal should be closed.
  4. When a new RFC enters the official review process, the community needs to be notified (maybe via a bot posting tweets as hinted at above). A simple blog post mentioning the RFCs that have entered the process along with the deadlines for comment, or maybe just some tweets to notify users, or bring them up during the community call. Or all of the above.

At that point, the RFC goes through the draft, draft-approved, etc. phases of the process as part of the PS Team RFC meetings, hopefully on a schedule with a bot that they can ask for more time or a reminder, etc., like they do in the PowerShell repo.

Feedback on this process is welcome and appreciated.

RFC0000-RFC-Process - need more details about preferred git workflow

I am not clear on the git workflow.

I don't have permissions for even the Drafts folder - I think that means I need to fork and do a pull request just to submit a draft, right? (the doc only mentions doing a pull request when ready for voting)

Also - could you clarify if you prefer a new branch in PRs - some of the existing ones have them and some do not.

Also, is there a sample draft as a separate file - rather than the one embedded in RFC0000?

Thanks!

Private members in PowerShell classes

I am not really sure if here is the right place for this feedback, but for some reason I couldn't reach the Connect for PowerShell anymore and therefore decided to post my issue here.

I really like the new PowerShell 5 class feature, however there is one essential thing that's missing for me: private members.

Syntactically it could look something like this:

class SomeClass {
  private [string] $property; # or maybe [private] or [Property(Visibility=Private)]?

  SomeClass([string]$property) {
    $this.property = $property
  }
}

$sc = [SomeClass]::new("property")
$sc.property # ERROR: Not accessible

Is there any plan to add the possibility to make members private in the (near) future? Is there another proposed solution to achieve proper encapsulation?

RFC Proposal: ScriptBlocks to handle non-terminating message processing

@jpsnover suggested in PowerShell Issue #6010 that an [-OnError <ScriptBlock>] is added to the common parameters in PowerShell that takes precedence over -ErrorAction and $ErrorActionPreference. In response to that issue, PR #182 has been opened by @TylerLeonhardt with an RFC that proposes we change the trap statement to accommodate non-terminating errors. There are several challenges between the original issue and the proposed RFC:

  1. Both designs are only for error messages. It would be more useful to provide a solution that works for any type of message (warning, verbose, debug, information, progress) so that everything can be handled (e.g. logged) the same way.
  2. Blurring the line between terminating and non-terminating errors is a risky proposition. There is a reason that terminating errors are terminating. Executing code beyond a terminating error should require intentional logic to allow that to happen. Blurring the line between terminating and non-terminating errors is a long standing problem with PowerShell (terminating errors don't actually terminate in PowerShell unless they are wrapped in try/catch, resulting in widespread use of an anti-pattern in scripts today), and any further blurring of that line risks even more mishandling of terminating errors in PowerShell than we already see today.

With those challenges in mind, I propose instead that we extend what is allowed in -*Action common parameters, such that a ScriptBlock can be passed into those parameters. Further, I also propose that we allow a ScriptBlock to be assigned to any $*Preference variable as well. This will allow scripters and script, function and module authors to apply custom message processing to their scripts for any type of non-terminating message that is not silenced or ignored.

Terminating messages will remain handled by try/catch statements or trap statements the way they are defined in PowerShell 6.2 and earlier releases.

Motivation

As a scripter or a script, function, or module author,
I can use a ScriptBlock with *Preference variables and -*Action parameters,
So that I can perform custom processing for all messages generated by my scripts without the complexity of redirection operators in many different locations.

User experience

Here is an example demonstrating how a scripter may handle non-terminating (as well as terminating) messages in PowerShell once this RFC is implemented:

$messageLog = [System.Collections.ArrayList]::new()

function Write-MessageLog {
    [CmdletBinding(DefaultParameterSetName='ErrorRecord')]
    param(
        [Parameter(Position=0, Mandatory=$true, ParameterSetName='ErrorRecord')]
        [ValidateNotNull()]
        [System.Management.Automation.ErrorRecord]
        $ErrorRecord,

        [Parameter(Position=0, Mandatory=$true, ParameterSetName='InformationRecord')]
        [ValidateNotNull()]
        [System.Management.Automation.InformationRecord]
        $InformationRecord
    )
    $now = [System.DateTime]::UtcNow
    if ($PSCmdlet.ParameterSetName -eq 'ErrorRecord') {
        # Record the error however you would record it in a log file or database here
        $message = $ErrorRecord | Out-String
    } else {
        # Record the information record however you record it in a log file or database here
        $message = $InformationRecord.Message
    }
    # Suppress the index that ArrayList.Add returns so it doesn't leak into the output stream
    $messageLog.Add([pscustomobject]@{
        Timestamp = $now
        Message = $message
    }) > $null
}

Set-StrictMode -Version Latest
$sb = {
    [CmdletBinding()]
    param([int]$Id = $PID)
    Write-Verbose -Verbose -Message "Looking for process with ID ${Id}..."
    $process = Get-Process `
        -Id $Id `
        -ErrorAction {
            # Note: WriteInternalErrorLog is not included in this script to keep it focused on
            #       external handling of errors
            WriteInternalErrorLog $_
            [ActionPreference]::Ignore
        }
    if ($process -ne $null) {
        Write-Verbose -Verbose -Message "Found process with ID ${Id}."
        Write-Output "Name: $($process.Name)"
        Write-Output "Id: $($process.Id)"
    } else {
        Write-Warning -Message "Process ${Id} was not found."
    }
}

# Run the script, recording all non-terminating errors that are not internally silenced
# or ignored in the error log, output them on the screen, and store them in $error
& $sb -Id 12345678 -ErrorAction {Write-MessageLog $_; [ActionPreference]::Continue}

# Run the script again, recording all messages, including verbose and debug, as well as
# any terminating error that occurs in the message log without showing them on screen.
# Errors will be stored in $error.
$ErrorActionPreference = $WarningPreference = $VerbosePreference = $DebugPreference = {
    Write-MessageLog $_
    [ActionPreference]::SilentlyContinue
}
try {
    & $sb
} catch {
    Write-MessageLog $_
    throw
}

In the case of the first example, the message log will contain the first verbose message and the warning message, and the internal error message log (that may be from a module) will contain the internal errors that were silenced.

In the case of the second example, the message log will contain both verbose messages.

This approach offers more functionality than the RFC in PR #182 without mixing up the important distinction and decisions that need to be made when handling terminating and non-terminating errors.

Specification

If a ScriptBlock is present in a $*Preference variable when a message of the appropriate type is raised, the ScriptBlock would be run with $_ assigned to the appropriate ErrorRecord or InformationalRecord instance. These ScriptBlock instances would be used to process whatever messages they received, and they would identify the action the scripter would like taken once the processing is complete by returning an ActionPreference enumeration value.

To make logging messages easier, if the ScriptBlock does not return an ActionPreference, PowerShell would automatically apply the default ActionPreference for that type of message (Continue for progress, warning and error messages, SilentlyContinue for information, verbose and debug messages).
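Under that rule, a logging-only handler stays trivial. In the sketch below (which assumes the proposed ScriptBlock-to-preference-variable assignment, and reuses the Write-MessageLog example function defined earlier), the ScriptBlock returns nothing, so warnings would still use the default Continue action:

$WarningPreference = {
    Write-MessageLog $_
    # no ActionPreference returned, so the default (Continue) applies
}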

While those two paragraphs explain the functionality simply enough, this would probably be a decent amount of work to implement.

It is important to note that this design would not be a breaking change because today you cannot pass a ScriptBlock to a -*Action common parameter, nor can you assign one to a $*Preference variable.

Alternate proposals and considerations

Make the ScriptBlock an EventHandler

The ScriptBlock implementation looks like event handlers, so an alternative approach would be to define a specific event handler type and have the ScriptBlock design conform to that event handler. For example, in PowerShell we could define a StreamedMessageEventArgs class that has an Action property of type ActionPreference, and require that the ScriptBlock take parameters ($MessageRecord, $EventArgs), where $MessageRecord is the message that was raised, and $EventArgs is an instance of StreamedMessageEventArgs used to define the ActionPreference to take once the message is processed. For this approach, $_ would still be assigned to the message record to allow the ScriptBlock logic to remain as simple as possible. Scripters would need to assign a value to $EventArgs.Action in the ScriptBlock in order to change the default behavior (it would be assigned to the default behavior for the corresponding message type by default).

The benefits of this alternative are as follows:

  • The ScriptBlock return ActionPreference is a little more explicit (PowerShell will return whatever is output from the ScriptBlock by default, so this makes the important part of what is returned clear).
  • Users who just want to log messages or perform some other handling without mucking around with the return type can still leave the param block out and not bother with updating $EventArgs.Action in the ScriptBlock, keeping their code simple.
  • There can only be one handler for each type of message at a time, so even using an event handler definition, scripters wouldn't have to worry about adding or removing event handlers. They just need to assign the ScriptBlock to the parameter or variable they want, and PowerShell will typecast it as an event handler appropriately.

The downsides to this approach are as follows:

  • Scripters need to explicitly define params and work with those parameters in the ScriptBlock if they want to change the default ActionPreference, which may be a little more complicated than simply letting an ActionPreference enumeration value (which could even be a string) be returned from the ScriptBlock.
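For comparison, a handler under this alternative might look like the following sketch (StreamedMessageEventArgs is the hypothetical type described above, and Write-MessageLog is the example function defined earlier):

$ErrorActionPreference = {
    param($MessageRecord, $EventArgs)
    Write-MessageLog $MessageRecord
    # explicitly override the default action for this message type
    $EventArgs.Action = [ActionPreference]::SilentlyContinue
}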

Add -VerboseAction, -DebugAction and -ProgressAction common parameters

It is important to consider the RFC proposal to allow execution preferences to persist beyond module and script scope here because it uses common parameters to pass execution preferences to other modules and/or scripts. In order for that to work properly for all message types, such that ScriptBlock action preferences for verbose, debug, and progress messages also propagate beyond module/script scope, we would need to add -VerboseAction, -DebugAction, and -ProgressAction to the common parameter lists. The implementation of these would simply be the same as -WarningAction or -InformationAction, but for their respective streams.

The benefits of having these additional common parameters are as follows:

  • Users are provided a consistent mechanism for dealing with non-terminating messages of any type.
  • Scripters can run scripts that leverage progress messages heavily unattended with logging so that users can see how far the script has made it after the fact, or they can silence progress messages since they are running scripts unattended and the display processing is not necessary.
  • Tool makers and advanced scripters can display or log messages of any type however they need.
  • With the RFC proposal to allow execution preferences to persist beyond module and script scope implemented, even verbose, debug and progress ActionPreference values or ScriptBlock message handlers can propagate beyond the scope of modules or scripts, allowing them to function more like cmdlets do.

The downsides to these common parameters are as follows:

  • We already have -Verbose and -Debug common parameters, so there is some overlap; however, the PowerShell engine would raise an error if both -Verbose and -VerboseAction were used in an invocation, or -Debug and -DebugAction were used in an invocation, so there would be no conflict in invocation. Scripters would simply choose one or the other.

RFC0007-Weak-Aliases comments

[PowerShell Committee has Rejected this RFC, see decision for details]

This issue is to discuss the weak aliases RFC.

This proposal helps address issues related to the curl/wget aliases introduced in PowerShell V3.

The discussion should focus on the strengths and weaknesses of this specific proposal and on avoiding or minimizing breaking changes. Ideally, existing scripts should continue to run without any changes when a new version of Windows PowerShell is installed.

Note that doing nothing is the only way to completely avoid breaking changes - the goal of this RFC is to find a compromise.

This RFC does propose a breaking change, but only in scripts where curl and wget are meant to call Invoke-WebRequest but curl.exe or wget.exe are found via the path. We don't have data on how often this might be an issue - but if a future version of Windows installs curl.exe and wget.exe by default, then it may be pointless to implement this RFC as that would be equivalent to removing the aliases.

Thoughts about breaking up the Splatting RFC into manageable chunks and revisiting the proposed syntax

The RFC for generalized splatting is currently in the Draft-Approved state, but not being implemented due to a combination of other priorities and complexities in the implementation.

While it's in this wait state, can I express some strong objections to what it proposes, and suggest that more thought is needed here? I can write these up as additional competing RFCs, but I don't want to go through that process if people just disagree with me, so I'll just share my issues with the RFC as it is today for now. And yes, this is opinionated, but I really feel that the syntax proposed in that RFC is making things much more difficult than they need to be. The details below suggest how it may be broken up and simplified, such that there aren't that many splatting-specific improvements needed at all.

Generalized Splatting is not the right way to solve the backtick issue

The Motivation section of the RFC highlights how backticks are not a great solution for multi-line continuance, and neither is splatting. Part of the solution to that problem, as proposed by the RFC, is inline splatting, yet inline splatting has its own issues:

  • splatting uses a different syntax than normal use of parameters in a command invocation, and this inconsistency isn't necessary.
  • conversion to/from splatted parameters or normal use of parameters is a nuisance, even if you have inline splatting.
  • inline splatting can help multi-line continuance for command invocations, but it doesn't allow scripters to wrap other lines in a way that makes code easier to read and maintain (e.g. wrapping multiple method invocations with the dot-reference operator at the beginning of a line).

An alternative to inline splatting (and to getting rid of backtick in general, by allowing scripters to enable multi-line continuance) is described in this RFC proposal. It covers more scenarios, and can be implemented completely independent of splatting improvements.

Splatting expressions shouldn't need special syntax combining @ and $

The RFC proposes splatting the value of a variable using this syntax:

command @$PSBoundParameters

It is not clear what value that syntax offers in the RFC today.

If the intent is that we can support using members on a variable while splatting, why can't we support this less complicated syntax instead?

command @PSCmdlet.MyInvocation.BoundParameters

# or

Invoke-Something @obj.GetInvokerArgs()

As long as the result is a hashtable or an array, we should be able to just make the parser support those syntaxes, shouldn't we? They are unambiguous, and non-breaking. This is an actual splatting improvement that could be implemented as part of a splatting RFC.

Relaxed splatting

Does it even make sense to do this when you can just splat multiple separate hashtables into a command?

The only specific need I know of related to this is mentioned in the Modifying hashtables for splatting alternate proposal, where it would be useful to be able to splat part of a hashtable. For example, something like this:

# Splat all parameters but 'Force'
command @PSBoundParameters-['Force']
# Splat all parameters but 'Force' and 'WhatIf'
command @PSBoundParameters-['Force','WhatIf']
# Splat only the literal path and filter parameters
command @PSBoundParameters+['LiteralPath','Filter']

These aren't really specific to splatting though, they're just about having new index operators for collections that allow you to generate a copy of a collection that excludes certain indices (-[...]) or that only includes certain indices (+[...]). I would recommend that be handled in a separate RFC that could be implemented independently, and then added to splatting as part of the splatting RFC so that it supports those operators inline when splatting.

Splatting in method invocations

This section proposes passing in specific arguments to a method by name, like C# allows. That's a worthy addition to consider; however, instead of using a complicated syntax with @@{...} enclosures, how about the following syntax:

$str.SubString(2, length:2)

It's a non-breaking syntactical change, and it's much easier to type and read. The same rules would still apply (named arguments must be at the end).

This is not really splatting and could be proposed as a completely separate RFC and implemented independently of splatting.

RFC Proposal: Allow execution preferences to persist beyond module or script scope

PowerShell has a long-standing issue where execution preferences such as those defined by the -ErrorAction, -WarningAction, -InformationAction, -Debug, -Verbose, -WhatIf and -Confirm common parameters, or those defined by any of the $*Preference variables, do not persist from a module to another module or script, nor do they persist from a script to another module or script. This is a result of how modules and scripts are scoped within PowerShell. It impacts modules written by Microsoft as well as scripts and modules written by the community, and it is often identified as a bug when it shows up in various places. You can see some of the discussion around this in PowerShell Issue #4568.

Regardless of whether you are authoring a script, an advanced function, or a cmdlet, you should be able to do so knowing that all execution preferences will be carried through the entire invocation of a command, end to end, regardless of the implementation details of that command. Cmdlets already work this way today. You can invoke cmdlets within a script, or within an advanced function defined in a module, and the execution preferences used during the invocation of that script or advanced function will be respected by the cmdlets that are invoked. This RFC is about making it possible for scripts or advanced functions to work that way as well.

It is important to note that the only way to implement this feature such that it is not a breaking change is by making it optional; however, even with it optional, it can be enabled by default for new modules to correct this problem going forward, and since it is optional, existing scripts and modules could be updated to support it as well, when they are ready to take on the responsibility of testing that out. That would allow this feature to be adopted more easily for new modules, while existing modules could be updated over time. While we have experimental feature support, those are for a different purpose so an additional RFC is being published at the same time as this RFC to add support for optional feature definition in PowerShell (see the RFC proposal for optional features in PowerShell).

Motivation

As a scripter or a command author,
I can invoke commands without having to care what type of command (cmdlet vs advanced function) I am invoking,
So that I can focus on my script without having to worry about the implementation details of commands I use.

As a command author,
I can change a published command from an advanced function to a cmdlet or vice versa if needed without worrying about invocation nuances in PowerShell,
So that I can focus on what is the best way to code my commands for myself and my team.

User experience

# First, create a folder for a new module
$moduleName = 'RFCPropagateExecPrefDemo'
$modulePath = Join-Path `
    -Path $([Environment]::GetFolderPath('MyDocuments')) `
    -ChildPath PowerShell/Modules/${moduleName}
New-Item -Path $modulePath -ItemType Directory -Force > $null

# Then, create the manifest (which would have the PersistCommandExecutionPreferences
# optional feature enabled by default, to correct the behaviour moving forward in a
# non-breaking way; downlevel versions of PowerShell would ignore the optional feature
# flags)
$nmmParameters = @{
    Path = "${modulePath}/${moduleName}.psd1"
    RootModule = "./${moduleName}.psm1"
    FunctionsToExport = @('Test-1')
    PassThru = $true
}
New-ModuleManifest @nmmParameters | Get-Content

# Output:
#
# @{
#
# RootModule = './RFCPropagateExecPrefDemo.psm1'
#
# # Private data to pass to the module specified in RootModule/ModuleToProcess. This may
# # also contain a PSData hashtable with additional module metadata used by PowerShell.
# PrivateData = @{
#
#     <snip>
#
#     PSData = @{
#
#         # Optional features enabled in this module.
#         OptionalFeatures = @(
#             'PersistCommandExecutionPreferences'
#         )
#
#         <snip>
#
#     } # End of PSData hashtable
#
#     <snip>
#
# } # End of PrivateData hashtable
#
# }

# Then, create the script module file, along with a second module in memory that it invokes,
# and import both modules
$scriptModulePath = Join-Path -Path $modulePath -ChildPath ${moduleName}.psm1
New-Item -Path $scriptModulePath -ItemType File | Set-Content -Encoding UTF8 -Value @'
    function Test-1 {
        [CmdletBinding()]
        param()
        Test-2
    }
'@
Import-Module $moduleName
New-Module -Name test2 {
    function Test-2 {
        [CmdletBinding()]
        param()
        Write-Verbose 'Verbose output'
    }
} | Import-Module

# When invoking the Test-2 command with -Verbose, it shows verbose output, as expected
Test-2 -Verbose

# Output:
#
# VERBOSE: Verbose output

# Thanks to this feature, when invoking the Test-1 command with -Verbose, it also shows
# verbose output. In PowerShell 6.2 and earlier, no verbose output would appear as a
# result of this command due to the issue preventing execution preferences from propagating
# beyond module/script scope
Test-1 -Verbose

# Output:
#
# VERBOSE: Verbose output

Specification

To resolve this problem, a new optional feature called PersistCommandExecutionPreferences would be defined in PowerShell. When this feature is enabled in a script or module, it would change how common parameters work in that script or module.

Today, if you invoke a script or advanced function with -ErrorAction, -WarningAction, -InformationAction, -Debug, -Verbose, -WhatIf, or -Confirm, the corresponding $*Preference variable is set within the scope of that command. That behaviour would remain the same; however, when the optional feature is enabled, the names and values you supplied to those common parameters would also be stored in a new ExecutionPreferences dictionary property on the $PSCmdlet instance. Once $PSCmdlet.ExecutionPreferences is set, any common parameters stored in it that are not explicitly used in the invocation of another command within that command's scope would be automatically passed through, provided the command being invoked supports common parameters.

It is important to note that parameter/value pairs in $PSCmdlet.ExecutionPreferences, which represent command execution preferences, would take priority and be applied to a command invocation before values in $PSDefaultParameterValues, which represents user/module author parameter preferences (i.e. if both dictionaries have a value to be applied to a common parameter, only the value in $PSCmdlet.ExecutionPreferences would be applied).
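The proposed precedence mirrors how explicitly passed parameters already override $PSDefaultParameterValues today. A minimal demonstration of that existing rule (the function name Test-Precedence is hypothetical, chosen for illustration):

```powershell
function Test-Precedence {
    [CmdletBinding()]
    param([string]$Message = 'default')
    "Message = $Message"
}

# A user/module author preference, applied only when the parameter is not
# supplied explicitly at the call site
$PSDefaultParameterValues['Test-Precedence:Message'] = 'from defaults'

Test-Precedence                      # Output: Message = from defaults
Test-Precedence -Message 'explicit'  # Output: Message = explicit

# Clean up the preference entry
$PSDefaultParameterValues.Remove('Test-Precedence:Message')
```

Under this proposal, values in $PSCmdlet.ExecutionPreferences would sit between those two: stronger than $PSDefaultParameterValues, weaker than an explicit argument.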

As per the optional feature specification, the optional feature can be enabled in a module manifest (see example above), or a script file via #requires. For more details on how that works, see the RFC proposal for optional features in PowerShell.

Alternate proposals and considerations

Rip off the bandaid

Some members of the community feel it would be better to break compatibility here. On the plus side, not having to deal with this as an optional feature would be ideal; however, to increase adoption of PowerShell 7, it would be better to ease the transition from PowerShell 5.1 to 7 by keeping breaking changes to a minimum.

One way to achieve this while supporting users who don't want the breaking change would be to invert the optional feature, so that the breaking change is in place by default and users opt out of it instead of opting into it. Another way would be to change the optional feature design such that users can turn features off in scripts/modules that are not ready for the breaking change. See the Alternate Proposals and Considerations section of the RFC proposal for optional features in PowerShell for more information.

Support -DebugAction, -VerboseAction, and -ProgressAction if those common parameters are added

The RFC proposal for ScriptBlocks to handle non-terminating message processing suggests that we consider adding -DebugAction, -VerboseAction, and -ProgressAction common parameters. These are important to consider adding, because beyond the -Debug and -Verbose switch common parameters (which only support ActionPreference.Continue), the new common parameters would be the only way to propagate execution preferences for debug, verbose, and progress messages to all commands that are invoked.

Reduce scope of Default File Encoding work

I would like to suggest a staged approach for https://github.com/PowerShell/PowerShell-RFC/blob/master/2-Draft-Accepted/RFC0020-DefaultFileEncoding.md

By removing support for WindowsLegacy as an encoding type and removing support for PSDefaultFileEncoding, I believe we can satisfy the bulk of our customers by standardizing our various encoding parameter types on a single type, System.Text.Encoding, and providing a default of UTF-8 without a BOM. A type forwarder could be constructed to support UTF-8 with a BOM. We would use our current code when appending to a file and encoding is not specified.
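The proposed default can already be previewed today, since PowerShell 6+ exposes it as an explicit encoding name (and uses it when -Encoding is omitted):

```powershell
# utf8NoBOM is a valid -Encoding argument in PowerShell 6 and later
'hello' | Out-File -FilePath ./demo.txt -Encoding utf8NoBOM

# Confirm no BOM was written: the first bytes are 'h' 'e' 'l', not EF BB BF
[byte[]]$bytes = Get-Content -Path ./demo.txt -AsByteStream -TotalCount 3
'{0:X2} {1:X2} {2:X2}' -f $bytes[0], $bytes[1], $bytes[2]   # Output: 68 65 6C

Remove-Item ./demo.txt
```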

If customer demand for WindowsLegacy is strong, we can implement it at a later date. This provides a staged approach while landing the breaking change earlier to gather feedback.

Experimental features missing elements

Experimental features should have two additional elements:

  1. An indication of quality similar to:

    • Alpha
    • Beta
    • RC
  2. A link to the design and/or a place to give feedback. This may be a link to the RFC for the feature.

RFC0009-Glob-for-Native-Tools comments

The Committee voted to Reject this RFC, but we still need a solution to globbing. Notes are in the RFC.

This issue is to discuss the RFC to extend globbing to native tools, available per #31.

Please review the RFC.

RFC for a text based editor appropriate here?

It's driving me up the wall that Core and Nano don't even have a text mode editor.

This one-liner brings down nano for Windows:

iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex ; choco install nano -y 

But it seems they've stopped releasing it for Windows, as I can't find any version past 2.5.3 (the one on Chocolatey), even though they are now at 2.7.4 (https://www.nano-editor.org/download.php).

Would it be appropriate to propose a text mode editor or even an updated and ongoing port of Nano in this repo?

It would be nice if it were on the PowerShell Gallery, since PackageManagement support for Chocolatey is still not really happening.

RFC Proposal Template enhancement suggestion

As use of the RFC Process grows and proposals to enhance and expand language features increase, I feel there is a need to make a minor amendment to the RFC Template: it should encourage RFC authors to think about wider items outside the RFC proposal itself, including what others can focus on after the RFC is implemented to help drive the successful education, use, and adoption of these new features.

Therefore I propose that we amend the RFC template to include a small additional section where the author can outline what they think will aid the successful adoption of these new features in a newer version of the language (v7 / v8 / v9, etc.), so that those who, like myself, blog/present/stream can look for suggested areas to focus on when creating new content that aids adoption. This would also help identify areas in the supporting tooling, for example PSScriptAnalyzer/PES, where new features could be added to further aid adoption going forward.

I would also suggest adding an area where the RFC author could indicate that, with some support and guidance, they intend to implement the RFC themselves, as well as a section detailing any support they may require in order to do so.

This could look like the below:

---
RFC: RFC<four digit unique incrementing number assigned by Committee, this shall be left blank by the author>
Author: <First Last>
Status: <Draft | Experimental | Accepted | Rejected | Withdrawn | Final>
SupercededBy: <link to another RFC>
Version: <Major>.<Minor>
Area: <Area within the PowerShell language>
Comments Due: <Date for submitting comments to current draft (minimum 1 month)>
Plan to implement: <Yes | No>
Require Support to implement:  <Yes | No>
---

# Title

Description and rationale.

## Motivation

    As a <<user_profile>>,
    I can <<functionality>>,
    so that <<benefit>>.

## User Experience

Example of user experience with example code/script.
Include example of input and output.

## Specification

## Alternate Proposals and Considerations

## Supporting Adoption 

## Support required to Implement

I'll submit a PR for this change if it seems OK to others.

Repository cleanup to eliminate dead links

I'd like to propose some cleanup to eliminate the dead links that are created during the processing of RFCs.

Currently as RFCs move from a PR into the various "stage" folders (Draft, Draft-Accepted, Experimental, etc.), each move creates dead links, because the RFC files themselves get moved from one folder to the next. I've followed dead links to a number of RFCs from issues in the PowerShell repository, tweets, etc., only to have to dig around to find the RFC in a different stage. This isn't a good model to follow long term.

Instead, how about we fix the dead links going forward by doing the following:

  1. Place all RFC documents into a single RFC folder, and keep them there forever. Or, if the number of files in one folder is a concern, eventually move them into an Archive subfolder when they become very, very old. Given the low volume of RFCs, however, having a single folder will probably do for quite a while.
  2. Update README.md with multiple 2-column tables containing a list of all RFC files in each of the respective stages. Column 1 would contain the RFC filename (link). Column 2 would contain a brief (1-2 sentence) description. This could alternatively be maintained in a file other than the README, but having this in the README makes this information visible to anyone visiting the community.
  3. Update the PR template requesting users create a new file from the RFC template in the root of the repo and add an entry to the Draft table in the README.md file (or other file if it is stored elsewhere).
  4. Optionally add commands to the recently created RFC module that automatically create the file and table entry from a single command invocation.

At any rate, you get the gist of this issue: we should eliminate the possibility of dead links so that contributors can link/refer to RFCs regardless of their current status, and the links will work. Since the RFC documents contain YAML with the current stage at the top of each document, anyone following a link will see front and center what stage the current RFC is in, as long as that metadata is properly maintained.

@joeyaiello Thoughts? Is this PR-able, or do you have internal tooling that would break if this was changed in a PR?

In-place replacement or side-by-side

Need to get a clarity on the following scenarios:

Install-Module foobar -RequiredVersion 1.0.0
Install-Module foobar -RequiredVersion 1.0.0-beta.1
Install-Module foobar -RequiredVersion 1.0.0-alpha.12
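For context, side-by-side installation would mean multiple versions coexist on disk and remain individually loadable (foobar is the placeholder module name from the scenario above):

```powershell
# List every installed version of the module side by side
Get-Module -Name foobar -ListAvailable |
    Select-Object Name, Version

# Load one specific version into the current session
Import-Module -Name foobar -RequiredVersion 1.0.0
```

In-place replacement, by contrast, would leave only the most recently installed version on disk, making the Get-Module listing above return a single entry.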

Question re. Governance

I'm concerned that the current governance document is not being viewed as a set of rules, or a contract with the community, but merely as suggestions. It appears that the document is being regularly revised --without discussion via RFC-- and yet also routinely ignored.

I'm not just talking about minor problems, either, but about the big issues of governance like changes to:

  • membership in the committee
  • repository maintainers
  • the governance process

In my reading of the document, I believed that all three of those would "require a written RFC and ample time for the community to respond with their feedback" -- in particular, "the addition of new PowerShell Committee Members or Repository Maintainers" is spelled out in black and white as always requiring this. Additionally, "changes to the process of maintaining the PowerShell repository (including the responsibilities of Committee Members, Repository Maintainers, and Area Experts)" are also spelled out as requiring RFCs and comment periods -- and I have assumed this obviously encompasses the governance document itself.

To be explicit:

One need only look at the commit history of the maintainers document or the governance document to see that these documents are regularly altered -- and new maintainers and new committee members are being added.

In fact, the maintainers document was changed last July to remove mention of RFCs as the mechanism for adding maintainers. Of course, immediately after that, new maintainers were added without fanfare or comment. The actual governance document still requires an RFC for such changes, but that's been ignored.

The same thing is happening with committee members, who have not only been added without an RFC, but as far as I can tell, without even a committee vote. As a side note: it's been disappointing to watch as each committee member who has left the PowerShell team has subsequently quit the committee -- even when they are still active maintainers of projects.

I understand that the need for transparency and following your documented change processes is a new constraint -- but this needs to be addressed and remedied ASAP if you want to maintain the faith of the community.

Additionally, I would propose that

  1. Committee meeting minutes (or recordings) need to be published
  2. Those meetings should be held electronically, since it currently appears it's not possible for "community members" to actually be a part of the committee

Initial discussion about encoding cmdlets

Our goal is to gather ideas about encoding: new cmdlets that are needed for encoding work, and possible improvements to the existing cmdlets.

Related discussions:

  1. RFC0018-Add-cmdlet-Get-StringHash #65 - It makes sense to move Base64 to encoding cmdlets
    We spoke about Get-FileHash/Get-StringHash/Get-Hash vs ConvertTo-Base64/ConvertFrom-Base64.
    We should accept objects of different types as input: string(s), stream(s), byte(s)...

  2. Add Get-FileEncoding cmdlet or function. PowerShell/PowerShell#2290 - requests a cmdlet to detect a file's encoding (and maybe a file's content type).
    We spoke about Get-FileEncoding, Convert-FileEncoding, and Convert-StringEncoding.
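To make the Base64 discussion concrete, here is a minimal sketch of string-oriented ConvertTo-Base64/ConvertFrom-Base64 wrappers over the existing .NET APIs. The cmdlet names come from the discussion above and are not shipped commands; the UTF-8 choice for the string-to-bytes step is an assumption for the sketch:

```powershell
function ConvertTo-Base64 {
    [CmdletBinding()]
    param([Parameter(Mandatory, ValueFromPipeline)][string]$InputObject)
    process {
        # Assumes UTF-8 for the string-to-bytes conversion
        [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($InputObject))
    }
}

function ConvertFrom-Base64 {
    [CmdletBinding()]
    param([Parameter(Mandatory, ValueFromPipeline)][string]$InputObject)
    process {
        [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($InputObject))
    }
}

'hello' | ConvertTo-Base64        # Output: aGVsbG8=
'aGVsbG8=' | ConvertFrom-Base64   # Output: hello
```

A real implementation would also need parameter sets for streams and byte arrays, per the input types listed above.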

Currently the following cmdlets have an Encoding parameter:

PS C:\> Get-Command -ParameterName "Encoding"

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Format-Hex                                         3.1.0.0    Microsoft.PowerShell.Utility
Cmdlet          Add-Content                                        3.1.0.0    Microsoft.PowerShell.Management
Cmdlet          Export-Clixml                                      3.1.0.0    Microsoft.PowerShell.Utility
Cmdlet          Export-Csv                                         3.1.0.0    Microsoft.PowerShell.Utility
Cmdlet          Export-PSSession                                   3.1.0.0    Microsoft.PowerShell.Utility
Cmdlet          Get-Content                                        3.1.0.0    Microsoft.PowerShell.Management
Cmdlet          Import-Csv                                         3.1.0.0    Microsoft.PowerShell.Utility
Cmdlet          Out-File                                           3.1.0.0    Microsoft.PowerShell.Utility
Cmdlet          Select-String                                      3.1.0.0    Microsoft.PowerShell.Utility
Cmdlet          Send-MailMessage                                   3.1.0.0    Microsoft.PowerShell.Utility
Cmdlet          Set-Content                                        3.1.0.0    Microsoft.PowerShell.Management
