lang-team's Introduction

lang-team

A home for the Rust language design team. The language design team is generally responsible for decisions involving the design of the Rust language itself, such as its syntax, semantics, or specification. This repository houses planning documents, meeting minutes, and other such things.

Rendered form

Visit the rendered form of this site at lang-team.rust-lang.org.

Code of Conduct and licensing

All interactions on this repository (whether on issues, PRs, or elsewhere) are governed by the Rust Code of Conduct.

Further, all content on this repository is subject to the standard Rust licensing.

lang-team's People

Contributors

alice-i-cecile, cad97, carbotaniuman, centril, chorman0773, craftspider, cramertj, dylan-dpc, jonas-schievink, joshtriplett, jules-bertholet, lcnr, m-ou-se, mark-simulacrum, markbt, nagisa, nikomatsakis, notriddle, pablohn26, pnkfelix, programmerjake, qmx, qwuke, scottmcm, showengineer, timnn, tmandry, traviscross, yaahc, yiannis-had

lang-team's Issues

lang team path to membership and MCP procedure

Summary

I would like to talk about overall lang-team "path to membership" questions and discuss the way we are handling MCPs.

Some thoughts:

  • I think we should discuss a bit more the role of a liaison, and in particular consider the idea of having liaisons that may not be members of the lang-team (but who do commit to bringing information back to the triage meeting and otherwise playing the role of a liaison).
    • In particular, I'm wondering if we should say that liaisons are explicitly more of a neutral role with respect to a project group, so that there is someone who is playing more of a "facilitation" role. Leads, in contrast, are driving the process and making decisions. This might not be feasible, though, just in terms of available bandwidth, but I think it'd be ideal.
  • The MCP process had as a goal to make clear how much bandwidth we have and how it's being used. I'd like to discuss what is not present on the project board and should be.
  • I'd like to talk about what the expectation is for timely feedback on MCPs and how we can meet that requirement.
  • In particular, I think it's clear we're going to need more bandwidth, and so I'd like to talk about a more explicit "path to membership". Specifically, I think we should say that prospective lang-team members ought to:
    • Lead a project group, thereby serving as the main designer
    • Serve as liaison for a project group, acting more as a collaborator and guide than as a leader
    • Perhaps other requirements? We should be able to agree on qualities we are looking for.
    • The idea would be that we identify people we think might be candidates and we (privately) discuss the idea of membership with them. If they are interested, we can lay out explicitly the next step at each point, and then have internal lang-team discussions to see if any concerns have been raised and what might allay them.
  • Finally, I'd like to talk a bit about expectations from lang-team members in general in terms of engagement. Should lang-team members have to attend triage meetings? Design meetings? Should we make explicit different "categories" or roles? Is that overkill? The intent here is not to point the finger at existing members; I honestly don't know what's reasonable, and in particular I think we should try to make sure that we can accommodate people who have less time available (while still ensuring that we have enough folks who do have time available for us to get work done).

Background reading

None.

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

ffi-unwind

Summary

Working out the rules for cross-language unwinding as well as longjmp.

Status

This section lists the efforts that we are working towards.

"C-unwind" ABI

The "C-unwind" ABI allows you to invoke foreign functions that may unwind using the native ABI.

  • Specification: rust-lang/rfcs#2945 has been merged. This RFC introduces the "C-unwind" ABI.
  • Implementation: We are looking for someone to implement! The tracking issue for the implementation work is rust-lang/rust#74990.
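As a rough sketch of what the RFC enables (the "C-unwind" ABI string was unstable at the time of this issue; it was later stabilized in Rust 1.71), a function can opt into an ABI that permits unwinding across the FFI boundary:

```rust
// Hypothetical example: a function with an ABI that allows panics to
// unwind out of it ("C-unwind"); with plain "C", unwinding out would be
// undefined behavior (and, under the RFC, a guaranteed abort).
extern "C-unwind" fn double_or_panic(x: i32) -> i32 {
    if x < 0 {
        panic!("negative input"); // may unwind across the FFI boundary
    }
    x * 2
}

fn main() {
    assert_eq!(double_or_panic(21), 42);
}
```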

Interaction with catch_unwind

We need to define what happens when foreign exceptions interact with catch_unwind.
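For context, catch_unwind today stops Rust panics at a boundary; the open question is what it should do when the unwinding payload is a foreign exception. A minimal illustration of the Rust-panic case:

```rust
use std::panic;

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence the default panic message
    // catch_unwind intercepts a Rust panic and returns Err with the payload.
    // Whether it should catch, abort on, or pass through a *foreign*
    // exception is exactly what this effort needs to define.
    let res = panic::catch_unwind(|| -> i32 { panic!("boom") });
    assert!(res.is_err());
    let ok = panic::catch_unwind(|| 42);
    assert_eq!(ok.unwrap(), 42);
}
```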

longjmp compatibility

We need to define the conditions in which it is legal to longjmp through Rust frames. We've done some initial conversation about this.

Links

const-generics

Summary

Implement and design the const_generics feature with the initial goal of stabilizing the min_const_generics subset.
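As a rough illustration, the min_const_generics subset (since stabilized in Rust 1.51) covers integer-valued const parameters such as array lengths:

```rust
// Generic over an array length N — the core use case that the
// min_const_generics subset covers.
fn sum<const N: usize>(xs: [u32; N]) -> u32 {
    xs.iter().sum()
}

fn main() {
    assert_eq!(sum([1, 2, 3]), 6); // N = 3 inferred
    assert_eq!(sum([10; 4]), 40);  // N = 4
}
```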

Info

What is this issue?

This issue represents an active project group. It is meant to be used for
the group to post updates to the lang team (and others) in a lightweight
fashion. Please do not use the comments here for discussion, that should be kept
in the Zulip stream (discussion comments here will be marked as off-topic).

growing the team

Summary

Let's discuss growing the lang team and how we plan to do it. This issue is a bit of a placeholder.

Background reading

None.

Discuss the possibility of denying `bare_trait_objects` in 2021 edition

Summary

Bare trait objects do not interact well with const generics: if a user does not surround a const expression with {}, it can be parsed as a trait object (e.g. foo + bar is interpreted as dyn foo + bar), which leads to confusing errors. See, for instance: rust-lang/rust#77502 (comment). If we deny bare_trait_objects in the 2021 edition, which is a permitted edition change, we can emit better errors for const generics, a feature we hope to stabilize in the form of min_const_generics in the near future.

bare_trait_objects would continue to work pre-2021 edition. Thus, the better error messages will only be available in the 2021 edition.
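To make the parsing issue concrete, here is a sketch (using a const item rather than a generic expression, so it compiles under min_const_generics): a const argument containing + must be wrapped in braces, precisely because foo + bar otherwise reads as bare trait object syntax:

```rust
fn first_n<const N: usize>() -> usize {
    N
}

const A: usize = 3;

fn main() {
    // `first_n::<A + 4>()` would not parse: `A + 4` looks like the bare
    // trait object syntax `A + 4` (i.e. `dyn A + 4`). Braces disambiguate:
    let n = first_n::<{ A + 4 }>();
    assert_eq!(n, 7);
}
```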

Restrict promotion to infallible operations

Proposal

Summary and problem statement

I propose to resolve the "promotion mess" by only promoting code that we know will not fail to const-evaluate.

Motivation, use-cases, and solution sketches

Promotion of code to a const has been a constant source of "challenges" (aka problems) and surprises over the years. The first I saw is this soundness issue and since then we kept accumulating special cases in various parts of the compiler. Also see why we cannot promote arbitrary const fn calls, and this "meta issue".

I think we can solve this problem once and for all by ensuring that we only promote code that cannot fail to const-evaluate. Then we can get rid of all the code in rustc that has to somehow do something when evaluating a promoted failed. If we also make const_err a hard error we can in fact assume that const-evaluation errors are always directly reported to the user, which leads to even further simplifications and enables us to fix some diagnostics issues.

Technical note: it might seem that we have to rule out promotion of arithmetic in debug mode, as overflows would cause evaluation failure. That is however not the case. An addition in debug mode is compiled to a CheckedAdd MIR operation that never fails: it returns an (<int>, bool) and is followed by a check of said bool that may raise a panic. We only ever promote the CheckedAdd, so evaluation of the promoted will never fail, even if the operation overflows.
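A rough surface-level analogue of that MIR shape (using overflowing_add to stand in for CheckedAdd; the actual lowering is internal to rustc):

```rust
fn main() {
    // CheckedAdd-like step: the operation itself never fails; it yields the
    // (possibly wrapped) value plus an overflow flag.
    let (val, overflowed) = i32::MAX.overflowing_add(1);
    assert!(overflowed);
    assert_eq!(val, i32::MIN); // wrapped value, no panic here
    // The panic, if any, comes from a *separate* check of the flag — and
    // only the first step is ever promoted.
    if overflowed {
        // a debug build's generated check would panic here; the promoted
        // CheckedAdd itself never reaches this point
    }
}
```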

So I think we should work towards making all CTFE failures hard errors, and I started putting down some notes for that. However, this will require some breaking changes around promotion:

  • We should no longer promote things like &(1/0) or &[2][12]. When promoting fallible operations like division, modulo, and indexing (and I think those are all the fallible operations we promote, but I might have missed something), then we have to make sure that this concrete promoted will not fail -- we need to check for div-by-0 and do the bounds check before accepting an expression for promotion. I propose we check if the index/divisor are constants, in which case the analysis is trivial, and just reject promotion for non-constant indices/divisors. If that is too breaking, a backup plan might be to somehow treat this more like CheckedAdd, where we promote the addition but not the assertion, which does ensure that the promoted never fails to evaluate even on overflow. (But I think that only works for division/modulo, where we could return a "dummy value"; it doesn't work for indexing in general.)
  • To achieve full "promoteds never fail", we have to severely dial back promotion inside const/static initializers -- basically to the same level as promotion inside fn and const fn. Currently there are two ways in which promotion inside const/static initializers is special here: first of all union field accesses are promoted (I am trying to take that back in rust-lang/rust#77526), and secondly calls to all const fn are promoted. If we cannot take back both of these, we will instead need to treat MIR that originates from const/static initializers more carefully than other MIR -- even in code that ought to compile, there might be constants in there which fail to evaluate, so MIR optimizations and MIR evaluation (aka Miri) need to be careful to not evaluate such consts. This would be some unfortunate technical debt, but in my opinion still way better than the situation we currently find ourselves in.
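To illustrate what promotion means and what the proposal would restrict, a small sketch: an infallible borrow is promoted to 'static, while a fallible expression like &(1/0) must be rejected:

```rust
fn main() {
    // `&2` is promoted: the value is lifted into an anonymous `'static`
    // constant, so the borrow can outlive the enclosing frame.
    let r: &'static i32 = &2;
    assert_eq!(*r, 2);
    // Under this proposal, only such infallible expressions qualify;
    // `let r: &'static i32 = &(1 / 0);` must be rejected, since evaluating
    // the promoted would fail (division by zero).
}
```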

Alternative: restrict promotion to patterns

I have in the past raised support for restricting promotion even further, namely to only those expressions that would also be legal as patterns. @ecstatic-morse has also expressed support for this goal. However, I now think that this is unnecessarily restrictive -- I do not see any further benefit that we would gain by ruling out expressions that will always succeed, but would not be legal as patterns. In any case, even if we want to go for pattern-only promotion in the future, that would only mean ruling out even more promotion than what I am proposing, so this proposal should still be a reasonable first step in that direction.

Prioritization

I guess this falls under the "Const generics and constant evaluation" priority.

Links and related work

Initial people involved

@ecstatic-morse, @oli-obk and me (aka the const-eval WG) have been talking about this and slowly chipping away at const promotion to make it less ill-behaved. The state described above is the result of as much cleanup as we felt comfortable doing just based on crater runs and the "accidental stabilization" argument.

What happens now?

This issue is part of the experimental MCP process described in RFC 2936. Once this issue is filed, a Zulip topic will be opened for discussion, and the lang-team will review open MCPs in its weekly triage meetings. You should receive feedback within a week or two.

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

RFC 2229

Summary

RFC 2229 modifies closures so that they do not capture entire variables but instead more precise paths.

MCP: pub(macro)

Proposal

Summary and problem statement

This idea came from this line of code from the log crate https://github.com/rust-lang/log/blob/master/src/lib.rs#L1441

There is currently no way of keeping a function private while also using it in a public macro. This causes crate developers to resort to warning comments and function-name prefixes.

With my feature, functions marked pub(macro) would become public to macros defined in the same crate.

Motivation, use-cases, and solution sketches

With my feature

// WARNING: this is not part of the crate's public API and is subject to change at any time
#[doc(hidden)]
pub fn __private_api_log(

would become

pub(macro) fn log(
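For reference, the current workaround pattern looks like this (modeled loosely on the log crate; the names here are illustrative): the helper must be pub, doc-hidden, and conspicuously named, yet remains callable by downstream code:

```rust
// Public only because exported macros need it — the situation pub(macro)
// would eliminate.
#[doc(hidden)]
pub fn __private_api_log(msg: &str) -> String {
    format!("LOG: {}", msg)
}

#[macro_export]
macro_rules! log_msg {
    ($msg:expr) => {
        // `$crate` makes the path work from other crates too
        $crate::__private_api_log($msg)
    };
}

fn main() {
    assert_eq!(log_msg!("hello"), "LOG: hello");
}
```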

Prioritization

This fits with "Targeted ergonomic wins and extensions". Warning comments and function prefixes do not feel ergonomic.

Links and related work

I understand that hygienic macros could make this feature no longer needed in future. I hope this could act as a smaller, easier to implement incremental step.

rust-lang/rfcs#2968

Initial people involved

TBD

Declarative macro repetition counts

Summary

Add new syntax to allow declarative macro authors to easily access the count or index of declarative macro repetitions.
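For context, counting repetitions today requires a workaround such as a recursive helper macro (a commonly used pattern; the project would replace this boilerplate with dedicated syntax):

```rust
// Counts the number of token trees passed to it — the kind of boilerplate
// the proposed repetition-count syntax would make unnecessary.
macro_rules! count_tts {
    () => { 0usize };
    ($_head:tt $($tail:tt)*) => { 1usize + count_tts!($($tail)*) };
}

fn main() {
    assert_eq!(count_tts!(), 0);
    assert_eq!(count_tts!(a b c), 3);
}
```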

Info

What is this issue?

This issue represents an active project group. It is meant to be used for
the group to post updates to the lang team (and others) in a lightweight
fashion. Please do not use the comments here for discussion, that should be kept
in the Zulip stream (discussion comments here will be marked as off-topic).

Add placement by return (GCE, unsized returns)

Proposal

This is a summary of the proposal that was initially proposed as PR-2884 in the RFC repository.

Summary

Implement Guaranteed Copy Elision (GCE) in simple cases, so that functions returning any non-trivial amount of data are guaranteed to write that data directly to the slice of memory that will store it at the end of the return chain.

In cases where GCE is possible, allow functions to return unsized types directly (rather than boxed versions or references to these types). To make this possible, functions returning unsized types are split into two functions, as a special kind of generator:

  • The first half yields the memory layout of the return value
  • The second half writes the return value into a given chunk of memory (which must have the previously yielded layout).

Various unsafe functions can then call the first half, allocate memory based on the given layout, and pass the allocated memory to the second half. Simplified example:

impl<T: ?Sized> Box<T> {
    fn new_with<F: FnOnce() -> T>(f: F) -> Self {
        unsafe {
            // Pseudocode: run `f` up to the point where the layout of the
            // return value is known (the "first half" of the generator).
            let state = CALL_FIRST_HALF_UNSIZED(f);
            // Allocate storage matching the yielded layout...
            let p = NonNull::from_mut_ptr(GlobalAlloc.alloc(state.layout()));
            // ...then let the "second half" write the return value into it.
            CALL_SECOND_HALF_UNSIZED(f, state, p.as_mut_ptr() as *mut MaybeUninit<T>);
            Box { p }
        }
    }
}

Motivation

Although this might seem like a niche feature at first glance, it has a surprising number of use cases, and enables some highly-demanded developments. Some use cases:

I have empirically observed that the RFC has been quoted fairly often in discussions both on r/rust and the rust-internals forums, in answer to people asking "I need to find a way to do X, how could rust implement it?"

Links and related work

Mentors or Reviewers

No mentor yet.

The Major Change Process

Once this MCP is filed, a Zulip topic will be opened for discussion. Ultimately, one of the following things can happen:

  • If this is a small change, and the team is in favor, it may be approved to be implemented directly, without the need for an RFC.
  • If this is a larger change, then someone from the team may opt to work with you and form a project group to work on an RFC (and ultimately see the work through to implementation).
  • Alternatively, it may be that the issue gets closed without being accepted. This could happen because:
    • There is no bandwidth available to take on this project right now.
    • The project is not a good fit for the current priorities.
    • The motivation doesn't seem strong enough to justify the change.

2021 idiom lint overview

Summary

Go over the state of the various idiom lints that may be part of the 2021 Edition and make decisions about the course of action we expect.

Preparation

@scottmcm to prepare, for each idiom lint:

  • summarize current state (warn by default? etc)
  • summarize concerns that have been raised
  • proposed action in 2021 Edition
    • (default: Deny by default)
  • does the warning have automatic migrations
  • crater results, if applicable

Make a place for a "lang team wishlist"

We should have a way to say "we're interested in someone forming a group to tackle $(problem)".

That came up in triage today with (IIRC) the prelude stuff, and I think there are some others (like maybe improving FRU? rust-lang/rfcs#2528 (comment)) that we've talked about repeatedly but on which no lang members are actively working.

It would be nice to have that set of things somewhere so that interested parties could have more confidence in picking them up.

That might also include things that we've explicitly postponed, as a way to say "please don't start working on ______ because we've talked about it and not right now". There are a few things that come to mind here, like variadic generics: I suspect we want them eventually, but right now we're closing proposals in that direction that show up.

Stabilizing a subset of const generics

Summary

I want to propose that we get a subset of const generics on track to stabilization in the near future. We should have a meeting to discuss the specifics and attempt to achieve consensus on this plan.

Background reading

Stop ignoring trailing semicolons in a macro body when a macro is invoked in expression position

Proposal

Summary and problem statement

When a macro_rules! macro is invoked in expression position, a trailing semicolon in the macro body is silently ignored (see issue rust-lang/rust#33953). For example, the following code compiles:

macro_rules! foo {
    () => {
        true;
    }
}

fn main() {
    let val = match true {
        true => false,
        _ => foo!()
    };
}

This behavior is inconsistent with how semicolons normally work. In any other context, <expr>; produces a value of (), regardless of the type of <expr>. If the type of <expr> has drop glue, then this could lead to unexpected runtime behavior.
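The inconsistency is easy to see outside of macros, where a trailing semicolon always discards the value:

```rust
fn main() {
    // A block whose last statement ends in `;` evaluates to `()`,
    // regardless of the discarded expression's type.
    let x: () = {
        true;
    };
    assert_eq!(x, ());
    // Only without the semicolon does the block have type `bool`:
    let y: bool = { true };
    assert!(y);
}
```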

Motivation, use-cases, and solution sketches

I propose to remove this special handling of trailing semicolons. As a result, the following code will stop compiling:

macro_rules! foo {
    () => {
        true;
    }
}

fn main() {
    let val = match true {
        true => false,
        _ => foo!() //~ ERROR: unexpected semicolon
    };
    let _ = foo!(); //~ ERROR: unexpected semicolon
    let _ = false || foo!(); //~ ERROR: unexpected semicolon
}

The match arm case is straightforward: _ => true; is a syntax error, since a match arm cannot end with a semicolon.

The two let statements require some explanation. Under the macro expansion proposal described by @petrochenkov, macro expansion works by only reparsing certain tokens after macro expansion. In both let _ = foo!(); and let _ = false || foo!();, the invocation foo!() is used in expression position. As a result, macro expansion will cause us to attempt to parse true; as an expression, which fails.

The alternative would be to reparse the entire let expression - that is, we would try to parse let _ = true;;, resulting in a statement let _ = true; followed by an empty statement ;. In addition to complicating parsing, this would make understanding a macro definition more difficult. After seeing <expr>; as the trailing statement in a macro body, the user now needs to examine the call sites of the macro to determine if the result of <expr> is actually used.

Rolling out this change would take a significant amount of time. As demonstrated in rust-lang/rust#78685, many crates in the ecosystem rely on this behavior, to the point where several upstream fixes are needed for the compiler to even be able to bootstrap. To make matters worse, rustfmt was inserting semicolons into macro arms up until a very recent version (it was fixed by rust-lang/rustfmt#4507). This means that any crates gating CI on stable rustfmt may find it impossible to make the necessary changes until the latest rustfmt rides the release train to stable.

I propose the following strategy for rolling out this change:

  1. Add an allow-by-default future-compatibility lint, and deny it for internal rustc crates
  2. Do a Crater run to determine the extent of impact
  3. When the necessary rustfmt fix makes its way into stable (or earlier, if we determine the impact to be small enough), switch the lint to warn-by-default
  4. Mark the lint for inclusion in the cargo future-incompat-report (see rust-lang/rust#71249)
  5. After some time passes, switch the lint to deny-by-default
  6. Make the lint into a hard error (possibly only for macros defined in a crate in a new Rust edition).

Fortunately, this change is very easy on a technical level: we simply need to emit a warning in this code.

Prioritization

This doesn't appear to fit into any particular lang-team priority. However, it's part of a larger effort to fix bugs and inconsistencies in Rust's macro system.

Links and related work

Some PRs to crates removing trailing semicolons:

Initial people involved

I plan to implement this if accepted, with @petrochenkov reviewing the implementation.

Add a `NOOP_METHOD_CALL` lint for methods which should never be directly called

Proposal

Summary and problem statement

Some types have trivial implementations of particular traits - for example, <&T as Clone>::clone and <&T as Borrow>::borrow. These methods are useful in generic contexts, since they allow things like passing a reference to a function with a T: Clone parameter. However, directly calling one of these methods (e.g. (&NonCloneStruct).clone()) is useless. This can also lead to confusing error messages - for example, calling some_ref.to_owned() may return either a &Foo or a Foo, depending on whether the call to to_owned resolves to <&Foo as ToOwned>::to_owned or <Foo as ToOwned>::to_owned.
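A minimal illustration of the hazard, using a hypothetical type with no Clone impl:

```rust
struct NonClone; // deliberately does not implement Clone

fn main() {
    let r = &NonClone;
    // This compiles even though `NonClone` is not `Clone`: the call
    // resolves to `<&NonClone as Clone>::clone`, which merely copies the
    // reference.
    let r2 = r.clone();
    // `r2` is still `&NonClone` — a no-op the proposed lint would flag.
    assert!(std::ptr::eq(r, r2));
}
```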

Motivation, use-cases, and solution sketches

I propose introducing a lint NOOP_METHOD_CALL (name bikesheddable), which will fire on direct calls to any 'useless' methods. Initially, it will fire on direct calls to the following methods:

  • <&T as Clone>::clone
  • <&T as Borrow>::borrow
  • <&T as Deref>::deref
  • <&T as ToOwned>::to_owned

Note that we will intentionally not perform any kind of post-monomorphization checks. This lint will only fire on calls that are known to have the proper receiver (&T) at the call site (where the user could just remove the call).

For example

struct Foo;

fn clone_it<T: Clone>(val: T) -> T {
    val.clone() // No warning - we don't know if `T` is `&T`
}

fn main() {
    let val = &Foo;
    val.clone(); // WARNING: noop method call
    clone_it(val);
}

For now, this lint will only work for types and traits in the standard library. In the future, this could be extended to third-party code via some mechanism, allowing crates to mark methods as only being useful in generic contexts.

However, more design work will be required for such a mechanism. Method calls like <&T as ToOwned>::to_owned go through a blanket impl, so applying an attribute to a method in an impl block is not sufficient to cover all use cases. For the standard library, we can simply hard-code the desired paths, or use some other perma-unstable mechanism.

Prioritization

I believe this fits into the 'Targeted ergonomic wins and extensions' lang team priority. Anecdotally, I've seen users on the Rust discord servers accidentally call some of these methods, and get confused by the resulting error messages. A lint would point users in the right direction.

Links and related work

This was initially proposed as the compiler-team MCP rust-lang/compiler-team#375, then reworded and re-opened here.

Initial people involved

I'm planning to implement this.

async foundations

Summary

To pursue foundational work related to Async I/O. Much of this work involves extending the language with new foundational concepts.

The group also pursues work that is unrelated to the lang team, such as compiler-team polish.

Stream trait and related issues

Summary

In the context of the Async Foundations group, @nellshamrell has been drafting an RFC to stabilize the Stream trait. See e.g. rust-lang/wg-async#15 for the latest round of edits. While the actual content of the RFC is perhaps more libs than lang (introducing a trait), the choice of whether or not to introduce this trait touches on a number of lang issues, particularly around forwards compatibility, and I thought it would be good to discuss as a group.

Some of the issues I thought particularly relevant:

  • Method dispatch ambiguity issues around migrating extension methods from futures crate to the stdlib (a common scenario that also applies to e.g. itertools for iterators)
  • Coherence issues if we attempted to bridge a Stream trait with a future LendingStream (a.k.a., "streaming stream" or "attached stream") trait
  • The possibility of generator syntax and potential compatibility issues that may arise there
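The first bullet can be sketched in miniature: when two traits in scope provide a method with the same name, method-call syntax becomes ambiguous. The traits below are hypothetical, standing in for an extension trait in the futures crate versus a future stdlib counterpart:

```rust
// Two extension traits offering the same method name — the situation that
// arises if the stdlib later adds methods that an ecosystem extension
// trait already provides.
trait ExtA { fn next_len(&self) -> usize; }
trait ExtB { fn next_len(&self) -> usize; }

struct S;
impl ExtA for S { fn next_len(&self) -> usize { 1 } }
impl ExtB for S { fn next_len(&self) -> usize { 2 } }

fn main() {
    // `S.next_len()` would fail with "multiple applicable items in scope";
    // callers must disambiguate with fully qualified syntax:
    assert_eq!(ExtA::next_len(&S), 1);
    assert_eq!(<S as ExtB>::next_len(&S), 2);
}
```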

cc @rust-lang/wg-async-foundations @sfackler

Background reading

Draft RFC: rust-lang/wg-async#15

Disallow keywords as macro arguments names

Proposal

Summary and problem statement

It was recently determined (see this Zulip discussion: https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/postfix.20macros/near/216030097) that self, if, and other keywords are valid as metavariable names in macros ($self, $if, etc).

That prevents adding new special metavariables like $crate, since there are no reserved identifiers. So it would be nice to treat these like normal identifiers, where keywords are not allowed. And because these are just identifiers, and thus unaffected by alpha-conversion, all expressible macros remain expressible. (And arguably, not calling things $if is good practice anyway.)
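A sketch of the status quo: keywords currently work as metavariable names (as the linked Zulip thread established), which is exactly what the proposal would forbid in a future edition:

```rust
// `$if` is accepted as a metavariable name today — the proposal would
// reject keywords here in a new edition, freeing them up for special
// meanings like `$self`.
macro_rules! add_one {
    ($if:expr) => {
        $if + 1
    };
}

fn main() {
    assert_eq!(add_one!(41), 42);
}
```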

Motivation, use-cases, and solution sketches

  • It's more consistent to have the same identifier rules here as elsewhere
  • Both rust-lang/rfcs#2442 and rust-lang/rfcs#2968 discussed adding $self with special meaning, which is currently a breaking change without this.

This would just be an edition change for the edition in which the macro is written.

Prioritization

This would need to happen fairly promptly if it were to make it for the 2021 edition.

Links and related work

https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/postfix.20macros/near/216031552

Initial people involved

@joshtriplett

MCP: per-edition preludes

Proposal

Summary and problem statement

To give libs more flexibility, we should allow different preludes per edition.

Motivation, use-cases, and solution sketches

Due to nuances of trait resolution, adding a trait to the prelude is a (technically allowed) breaking change. To help avoid heavy impact, however, it would be nice to be able to make those changes opt-in. The edition mechanism seems like a reasonable place to do this: it means that new code (using cargo new, which defaults to the new edition) will get the new traits in its prelude, but old code using traits that conflict won't be immediately broken.

The basic change here is easy: instead of putting use std::prelude::v1::*; in every module, put use std::prelude::v2018::*; (or analogously for other editions). Giving edition warnings and structured fixes would be much harder, I suspect.

EDIT: petrochenkov points out that preludes for macros may also be hairy.

Out of scope

I would like to leave what, if anything, would change in such a prelude out of scope for this conversation. We can start with all of them being the same as the v1 module (which could then be deprecated). Lang and/or libs can then consider individual changes in a future edition (or existing ones) as separate changes.

Prioritization

This fits decently under "Targeted ergonomic wins and extensions". Having TryInto available in the prelude, for example, would let the compiler mention it in error messages about conversions, without the confusion of the suggestion not working until an additional trait is imported.
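As a concrete sketch of the ergonomics at stake, the explicit import below is exactly what a per-edition prelude could make unnecessary (this uses the existing std TryInto trait, nothing proposed):

```rust
// On edition 2018 this import is required before .try_into() works;
// a per-edition prelude could include it automatically for new code.
use std::convert::TryInto;

fn main() {
    let n: i64 = 300;
    // i64 -> u8 is fallible; try_into makes the failure explicit.
    let clamped: u8 = n.try_into().unwrap_or(u8::MAX);
    assert_eq!(clamped, 255);
}
```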

Links and related work

rust-lang/rust#65512

Initial people involved

@scottmcm

What happens now?

This issue is part of the experimental MCP process described in RFC 2936. Once this issue is filed, a Zulip topic will be opened for discussion, and the lang-team will review open MCPs in its weekly triage meetings. You should receive feedback within a week or two.

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

MCP: repr(align(ptr))

The AtomicPtr type needs three lines of cfg_attr just to say what it really wants to say: "I am aligned to the size of a pointer". This is slightly silly, and worse, it is a little error prone: if the pointer width were ever smaller (unlikely) or larger (not impossible), none of the cfg_attr lines would apply. The type would then not only lack the right alignment, it would not even have repr(C), which could make other code that assumes AtomicPtr has repr(C) suddenly be UB.

Conclusion: the align() attribute should allow a value of ptr, to request pointer alignment for a type in a "blessed", portable way.
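For reference, the workaround this proposal wants to replace looks roughly like the following (a sketch modeled on the standard library's AtomicPtr, not its exact source; the struct name is made up):

```rust
// One cfg_attr per supported pointer width; if a new width ever
// appeared, none of these would apply and the type would silently
// lose both its pointer alignment and its repr(C).
#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
pub struct PtrAligned(*mut ());

fn main() {
    // On a supported width, the alignment matches the pointer size.
    assert_eq!(
        std::mem::align_of::<PtrAligned>(),
        std::mem::size_of::<*mut ()>()
    );
}
```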

Discuss WF checks and type aliases

Summary

We should discuss our plans around well-formedness requirements for type aliases. This has been a long-standing point, and with the edition coming up, we may want to make some changes here.

Background reading

You don't necessarily have to read all these, but before the meeting I at least would try to prepare some information to present based on these sources:

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

Discuss RFC 2580 (Pointer metadata and vtable)

Summary

@SimonSapin wrote rust-lang/rfcs#2580 in 2018 attempting to identify some generic APIs for manipulating fat pointers in a forward compatible way. I think we probably want to move forward with this RFC, but I'd like to evaluate it by having a discussion about what is being proposed and what the implications are.

Background reading

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

Yield closures

Proposal

Summary and problem statement

Rust has the ability to yield and resume function calls by transforming functions into state machines. However, this ability is currently available to users in a very limited fashion (async blocks and functions) because of the complex design choices required in generalizing the capability. I believe that we have now found a very simple version of "stackless coroutines" which will resolve this.

In short, ordinary closures should be allowed to yield in addition to return. For example, to skip alternate elements of an iterator:

iter.filter(|_| {
    yield true;
    false
})

As expected, arguments can be moved by the closure at any point. If an argument is not moved prior to yield or return, it will be dropped. When the closure is resumed after either yield or return, all arguments are reassigned:

|x| {
    // <-- x gets (re)assigned
    let y = x;
    yield;
    // <-- x gets (re)assigned
    dbg!(x, y);
}

From the outside, yield closures work the same as non-yield closures: they implement any applicable Fn* traits. Since a yield closure must at least mutate a discriminant within the closure state, it would not implement Fn. Yield closures which require stack-pinning would additionally be !FnMut, instead implementing a new FnPin trait. Note that all FnMut + Unpin types should also implement FnPin.

pub trait FnPin<Args>: FnOnce<Args> {
    extern "rust-call" fn call_pin(self: Pin<&mut Self>, args: Args) -> Self::Output;
}

Motivation, use-cases, and solution sketches

Yield closures would act as the fundamental "coroutine" in the Rust language, which in-language sugars and user-defined macros could use to build futures, iterators, streams, sinks, etc. However, those abstractions should not be the focus of this proposal. Yield closures should be justified as a language feature based on their own merits. To that end, below are some example use-cases.

Since yield closures are simply functions, they can be used with existing combinators. Here a closure is used with a char iterator to decode string escapes:

escaped_text.chars().filter_map(|c| {
    if c != '\\' {
        // Not escaped
        return Some(c);
    }

    // Go past the \
    yield None;

    // Unescaped-char
    Some(match c {
        // Hexadecimal
        'x' => {
            yield None; // Go past the x
            let most = c.to_digit(16);
            yield None; // Go past the first digit
            let least = c.to_digit(16);
            // Yield the decoded char if valid
            char::from_u32(most? << 4 | least?)
        },
        // Simple escapes
        'n' => '\n',
        'r' => '\r',
        't' => '\t',
        '0' => '\0',
        '\\' => '\\',
        // Unnecessary escape
        _ => c,
    })
})

Here is a similar pushdown parser utility which assists in base64 decoding:

|sextet, output| {
    let a = sextet;
    yield;
    let b = sextet;
    output.push(a << 2 | b >> 4); // aaaaaabb
    yield;
    let c = sextet;
    output.push((b & 0b1111) << 4 | c >> 2); // bbbbcccc
    yield;
    output.push((c & 0b11) << 6 | sextet) // ccdddddd
}

Since yield closures are a very concise way of writing state machines, they could be very useful for describing agent behavior in games and simulations:

|is_opponent_near, my_health| loop {
    // Find opponent
    while !is_opponent_near {
        yield Wander;
    }

    // Do battle!
    let mut min_health = my_health;
    while my_health > 1 && is_opponent_near {
        yield Attack;
        if my_health < min_health {
            min_health = my_health;
            yield Evade;
        }
    }

    // Recover
    if my_health < 5 {
        yield Heal;
    }
}

And of course, yield closures make it easy to write all kinds of async primitives which are difficult to describe with async/await. Here is an async reader → byte stream combinator:

pub fn read_to_stream(read: impl AsyncRead) -> impl TryStream<Item=u8> {
    stream::poll_fn(move mut |ctx: &mut Context| {
        let mut buffer = [0u8; 4096];
        pin_mut!(read);

        loop {
            let n = await_with!(AsyncRead::poll_read, read.as_mut(), ctx, &mut buffer)?;

            if n == 0 {
                return Ready(None);
            }

            for &byte in buffer.iter().take(n) {
                yield Ready(Some(Ok(byte)));
            }
        }
    })
}

Once closures

Some closures consume captured data and thus cannot be restarted. Currently such closures avoid restart by exclusively implementing FnOnce. However, a FnOnce-only yield closure is useless even if unrestartable, since it still might be resumed an arbitrary number of times. Thankfully, there is a different way to prevent restart: a closure could enter a "poisoned" state after returning or panicking. This behavior is generally undesirable for non-yield closures but could be switched on when needed. I recommend a mut modifier for this purpose since it is A. syntactically unambiguous and B. invokes the idea that a FnMut implementation is being requested:

fn closure(only_one_copy: Foo) -> impl FnMut() {
    move mut || {
        yield;
        drop(only_one_copy);
    }
}

Alternatively, all yield closures could be poisoned by default and opt-out with loop:

|| loop {
    yield true;
    yield false;
}

Poisoned-by-default is closer to the current behavior of generators but breaks the consistency between yield and non-yield closures. I believe the better consistency of the mut modifier will make the behavior of yield simpler and less surprising. However, that trade-off should be discussed further.

GeneratorState-wrapping

A try { } block produces a Result by wrapping outputs in Ok/Err. An async { } block produces a Future by wrapping outputs in Pending/Ready. Similarly, an iterator! { } block could produce an Iterator by wrapping outputs in Some/None, and a stream! { } block could produce a Stream by wrapping outputs in Pending/Ready(Some)/Ready(None).

However, there is a common pattern here. Users often want to discriminate values output by yield (Pending, Some, etc) from values output by return (Ready, None, etc). Because of this, it may make sense to have all yield-closures automatically wrap values in a GeneratorState enum in the same way as the existing, unstable generator syntax.
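For reference, the enum in question has two variants, one for each kind of output; the following is a standalone copy for illustration (at the time, the real type lived at std::ops::GeneratorState on nightly):

```rust
// Standalone copy of the generator-state enum, for illustration only.
pub enum GeneratorState<Y, R> {
    /// A value produced by `yield`.
    Yielded(Y),
    /// The final value produced by `return`.
    Complete(R),
}

fn main() {
    let s: GeneratorState<u32, &str> = GeneratorState::Yielded(1);
    match s {
        GeneratorState::Yielded(n) => assert_eq!(n, 1),
        GeneratorState::Complete(_) => unreachable!(),
    }
}
```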

Although this should be discussed, I believe that enum-wrapping is a separate concern better served by higher-level try/async/iterator/stream blocks.

Async closures

There is an open question regarding the behavior of async + yield closures. The obvious behavior of such a closure is to produce futures, in the same way that a non-yield async closure produces a future. However, the natural desugaring of async || { yield ... } into || async { yield ... } doesn't make a whole lot of sense (how should a Future yield anything other than Pending?) and it is not clear if an alternate desugar along the lines of || yield async { ... } is even possible.

For now I would recommend disallowing such closures since async closures are unstable anyway.

Prioritization

In addition to the general ergonomic wins for all kinds of tasks involving closures, a general way of accessing coroutines allows users a far less frustrating way to implement more complex futures and streams. It will also allow crates like async-stream and propane to implement useful syntax sugars for all kinds of generators or iterators, sinks, streams, etc.

Links and related work

The effort to generalize coroutines has been going on ever since the original coroutines eRFC. This solution is very closely related to Unified coroutines a.k.a. Generator resume arguments (RFC-2781). Further refinement of that proposal with the goal of fully unifying closures and generators can be found under a draft RFC authored by @CAD97.

Initial people involved

What happens now?

This issue is part of the experimental MCP process described in RFC 2936. Once this issue is filed, a Zulip topic will be opened for discussion, and the lang-team will review open MCPs in its weekly triage meetings. You should receive feedback within a week or two.

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

Extend the reference team to "lang docs" team

(Original) Summary

(I'm not sure which the template I should use, so opening as a blank issue.)
Currently, we don't have maintainers of the nomicon.
We're tracking its toolstate, and of course we should keep it as up-to-date as possible. I think it's reasonable to form a nomicon team and maintain it regularly, like the reference.
I can take the lead; feel free to comment here if you're interested in helping maintain it. It'd be great if some of the lang team participated.

related Zulip topic: https://rust-lang.zulipchat.com/#narrow/stream/196563-t-lang.2Fwg-meta/topic/maintain.20status.20of.20nomicon

clarify lint policy

@RalfJung raised the question of "what is needed for a new lint". We felt like nominating the PR and FCP is the right step. In general, we should try to make clear when that is ok.

Review Safe Transmute

Summary

Transmuting values through std::mem::transmute and related APIs is surprisingly unsafe, even though the rules for when it is safe to transmute values are fairly straightforward and often verifiable statically. A working group has been established to introduce mechanisms into Rust for safely transmuting values, but progress has stagnated as a consensus on the best way to achieve this cannot be reached.

A tangentially related topic is the question of marker traits which describe the layout of a type, for ensuring safety in unsafe code. For instance, a Zeroable trait which guarantees its implementers can be zero-initialized.

Background reading

Safe transmute has been fairly deeply explored in the Rust ecosystem. There are currently two fairly orthogonal approaches which attempt to address this issue, each with its own pros and cons.

Marker Trait Based

The first approach is through using marker traits and associated derive macros to establish certain static properties of a type's layout in memory that can then be used to build safe wrapper functions around the unsafe std::mem::transmute. These marker traits include FixedLayout for types with layouts that can be relied upon and FromBytes for types that can be safely transmuted from an appropriately sized and aligned byte array.

This approach is currently being explored in the mem-markers repo. You can also read more about this approach (albeit from a slightly different angle than mem-markers) in this internals post.
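A minimal sketch of the marker-trait idea follows. The FromBytes name comes from the proposal text above, but the fixed-size variant here (FromBytes4) and the wrapper function are made up for illustration:

```rust
use std::mem;

/// Marker asserting that every 4-byte bit pattern is a valid value of
/// Self. Implementing it is an unsafe promise made once, so that safe
/// code can use a sound wrapper around transmute.
unsafe trait FromBytes4: Sized {}

unsafe impl FromBytes4 for u32 {}
unsafe impl FromBytes4 for i32 {}

/// Safe wrapper around an unsafe transmute, sound because the marker
/// guarantees T accepts any 4-byte bit pattern.
fn from_bytes<T: FromBytes4>(bytes: [u8; 4]) -> T {
    assert_eq!(mem::size_of::<T>(), 4);
    // SAFETY: T is 4 bytes and accepts any bit pattern, per FromBytes4.
    unsafe { mem::transmute_copy(&bytes) }
}

fn main() {
    let bytes = 0xDEADBEEFu32.to_ne_bytes();
    let x: u32 = from_bytes(bytes);
    assert_eq!(x, 0xDEADBEEF);
}
```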

Pros

  • Fairly simple, and relies on the same mechanisms (namely marker traits and derive macros) that are already employed by the language.
  • Provides markers that are useful beyond straight transmute (e.g., specifying whether a type is safe to be zeroed). The other approach mentioned here can cover this use case but in a less straightforward way.

Cons

  • Not very fine grained, because marker traits can only expose so much about a type; the approach is thus also fairly conservative, meaning that some transmute operations that are safe would not be allowed.
  • Can't handle lifetime lengthening and shortening (e.g., allowing transmute from 'static to 'a, but not allowing transmute from 'a to 'static)

Type-Level Layout

This approach attempts to model a type's layout in the type system and use type checking to prove whether two layout types are equivalent. It uses type-level programming to model a type's layout as a trait, and then sees if this layout can be transformed into the layout of another type.

This approach is being explored in the typic crate.

Pros

  • Strictly more flexible than the marker-trait approach as all the marker traits can be modeled by it, and it can represent transformations that marker traits cannot (e.g., two types have fields of equal offset that are booleans).
  • Can handle lifetime lengthening and shortening

Cons

  • Much more complex, as it relies on type-level programming to model type layouts. It should be possible to expose friendly traits for most use cases so that end users are never exposed to how this works, but this is not 100% known yet. Also, the flexibility of this approach may mean having to expose a somewhat complex API to end users even if it is not exposed as type-level programming.

Subtleties

The following are various subtleties of the design space that were found to be surprising or not initially considered. We leave them here to ensure they are kept in mind.

  • Safe vs. Sound Transmute: Sound transmutes represent legal transmutes of memory contents that do not necessarily preserve application-level invariants. Safe transmutes are sound and additionally preserve application-level invariants. For instance, transmuting from u16 to u8 is neither safe nor sound, while transmuting from u8 to [repr(transparent)] MyNonZerou8(u8) is sound but not safe. Having this distinction can be very helpful, as developers may wish to check invariants themselves and then perform a sound transmute that cannot be statically guaranteed to be safe. It is often up to the user to declare that their type has no invariants.
  • Owned vs Reference Transmute: The rules for what is safe to transmute change slightly when dealing with owned transmute to reference transmute. For instance, an owned NonZeroU8 can be turned into a u8 legally, but a &mut NonZeroU8 cannot since others can eventually view this memory as NonZeroU8 and the zero invariant may not have been upheld.
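A small illustration of the owned-vs-reference distinction using NonZeroU8. No actual transmute is performed; the owned direction is spelled safely via .get(), and the unsound direction is only described in comments:

```rust
use std::num::NonZeroU8;

fn main() {
    // Owned direction: every NonZeroU8 is a valid u8, so converting
    // an owned value is fine (written here with the safe .get()).
    let n = NonZeroU8::new(7).unwrap();
    let b: u8 = n.get();
    assert_eq!(b, 7);

    // Reference direction: a hypothetical &mut NonZeroU8 -> &mut u8
    // transmute would be unsound, because writing 0 through the u8
    // view would break the non-zero invariant that later readers of
    // the NonZeroU8 are entitled to rely on.
}
```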

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

MCP: Rust-style `#[inline]` attribute.

Proposal

Summary and problem statement

Change the semantics of #[inline] attribute.

  • Currently in Rust #[inline] is following LLVM-style inline policies
    ** #[inline(always)] -> alwaysinline
    ** #[inline] -> inlinehint
    ** N/A -> N/A
    ** #[inline(never)] -> noinline

  • However this is unnecessarily complex and hard to use.

  • It also causes compilation performance issues. (e.g. thousands of copies of Option::map in rustc_middle according to cargo-llvm-lines, to ask llvm to examine the inlining possibilities one-by-one)
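The three spellings in the mapping above, as they exist in today's Rust (a trivial runnable sketch; the function names are made up):

```rust
// Each attribute currently maps to an LLVM inlining hint, as listed
// above: alwaysinline, inlinehint, and noinline respectively.
#[inline(always)]
fn add_always(a: u32, b: u32) -> u32 { a + b }

#[inline]
fn add_hint(a: u32, b: u32) -> u32 { a + b }

#[inline(never)]
fn add_never(a: u32, b: u32) -> u32 { a + b }

fn main() {
    assert_eq!(add_always(1, 2) + add_hint(3, 4) + add_never(5, 6), 21);
}
```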

Motivation, use-cases, and solution sketches

  • Inlining should ideally happen before monomorphization.

  • Inlining should ideally be made as deterministic as possible (respecting user's intention).

  • I propose replacing the semantics of these attributes to:
    ** #[inline(always)] -> do inline, not just a hint (always invoke deterministic inlining before monomorphization), raise a warning or error when failing to do so.
    *** Feedback: It was suggested that this become #[inline(required)], and the check be a lint.
    ** #[inline] -> do inline, not just a hint (always invoke deterministic inlining before monomorphization), fallback to not inlining when failing to do so with good reason TBD: <Language should identify and list the possible reasons.> (and maybe tag as alwaysinline to ask llvm to try again).
    ** N/A -> keeping current behavior here: heuristically determine whether to inline or not, left to compiler internals (maybe invoke deterministic inlining before monomorphization, inlinehint).
    ** #[inline(never)] -> do not inline, not just a hint (do not invoke deterministic inlining before monomorphization, noinline to stop llvm from doing so too)

  • This will not be a breaking change. There are performance impacts, but hopefully positive ones.

Prioritization

  • This is related to the "Targeted ergonomic wins and extensions" because it improves the building experience. It is also a relatively small change.

  • It was mentioned on zulip that the current MIR-inliner implementation is not fully ready. (Need to recheck the feasibility of using the MIR inliner to provide the expected "deterministic inlining before monomorphization" behavior, and a timeframe approximation.)

Links and related work

  • The inline keyword in C++ is currently mainly used to work around the ODR (one-definition rule); it causes confusion for beginners too.

Initial people involved

TBD

What happens now?

This issue is part of the experimental MCP process described in RFC 2936. Once this issue is filed, a Zulip topic will be opened for discussion, and the lang-team will review open MCPs in its weekly triage meetings. You should receive feedback within a week or two.

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.


EDIT:

  • edited all mentioning of "MIR-inlining" to "deterministic inlining before monomorphization", since the former is an implementation detail.
  • clarified that for the 'N/A' case, the semantics are not changed.
  • clarified that it's no longer a hint.

Discuss the amount of oversight that lang should have for lints

Summary

This might not need a whole meeting, but I don't know the best place to put stuff...

There seem to be a bunch of different kinds of lints, and I don't know that lang should care about some of them. For example, I think it would be fine for libs to approve a lint like "don't call .step_by(0) because that always panics".

We probably do care about lints that would warn on valid uses of language features (as they're sort of a deprecation).

But maybe we can write some more nuanced guidance here about which teams need to approve which things, and avoid multiple MCPs when they're not providing value.


Background reading

This came up on https://rust-lang.zulipchat.com/#narrow/stream/233931-t-compiler.2Fmajor-changes/topic/Uplift.20the.20.60invalid_atomic_ordering.60.20lint.20from.20clippy/near/218463202


About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

Require ABI for extern in 2021

Proposal

Summary and problem statement

We do not currently require users to specify which ABI an extern corresponds to, in any context. This can be confusing (e.g., see rust-lang/rust#75030), and seems like an obvious case to fix. Requiring users to specify the ABI forces them to think through which ABI they want.

Motivation, use-cases, and solution sketches

The primary motivation is to ease reading code. Particularly with the introduction of "C-unwind" ABI, it seems increasingly true that knowing up front which ABI is associated with function pointers and function declarations is useful.
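A small sketch of the two spellings in today's Rust; both currently compile, and the proposal would require the explicit form:

```rust
// Explicit ABI: what the proposal would require everywhere.
extern "C" fn explicit() -> u32 { 1 }

// Implicit ABI: defaults to "C" today; this is what would become an
// error (or at least a lint) in the 2021 edition under this proposal.
extern fn implicit() -> u32 { 2 }

// The same default applies to function pointer types.
type ExplicitFn = extern "C" fn() -> u32;
type ImplicitFn = extern fn() -> u32;

fn main() {
    let a: ExplicitFn = explicit;
    let b: ImplicitFn = implicit;
    assert_eq!(a() + b(), 3);
}
```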

Prioritization

I think this best fits the targeted ergonomic wins -- similar to dyn, at least for me, seeing extern "C" is much clearer than just a bare extern. Though related to C Parity / embedded, it does not enable any new behavior that was absent previously.

Links and related work

C++ mandates only C and C++ ABIs, and requires an ABI to be specified on extern blocks, but not extern functions.

Initial people involved

No one beyond myself, at this point.

What happens now?

This issue is part of the experimental MCP process described in RFC 2936. Once this issue is filed, a Zulip topic will be opened for discussion, and the lang-team will review open MCPs in its weekly triage meetings. You should receive feedback within a week or two.

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

never type stabilization

Summary

The long-term goal is to stabilize the ! type, and in particular to alter fallback for "diverging type variables" to use ! and not ().

Current status

Working towards a lint that catches the problems associated with changing fallback (rust-lang/rust#66173). Have a draft PR rust-lang/rust#74535 that is being developed by @blitzerr.
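The fallback in question can be observed on today's stable Rust without executing any diverging code (a sketch; the helper function is made up, and the assertion reflects the current ()-fallback that this project aims to change):

```rust
use std::any::type_name;

// Reports the inferred return type of a closure without calling it.
fn return_type_of<T, F: FnOnce() -> T>(_f: F) -> &'static str {
    type_name::<T>()
}

fn main() {
    // The closure body has type `!`, leaving its return type an
    // otherwise-unconstrained "diverging type variable". Under the
    // current rules it falls back to `()`; the goal is `!`.
    let name = return_type_of(|| panic!("never actually called"));
    assert_eq!(name, "()");
}
```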

Info

What is this issue?

This issue represents an active project group. It is meant to be used for
the group to post updates to the lang team (and others) in a lightweight
fashion. Please do not use the comments here for discussion, that should be kept
in the Zulip stream (discussion comments here will be marked as off-topic).

How to dismantle an `&Atomic` bomb.

Summary

At the end of 2020, I tried to get some discussion going about how to write correct code that manages memory based on atomic counters in the memory being managed.

At this point I think the UCG WG has a good proposal: to treat the deallocation capability on the same footing as the mutation capability. I.e., if the compiler (or unsafe code author) has a pointer to some memory M where a concurrent actor is allowed to mutate M, then one must also allow for the possibility that the concurrent actor may deallocate M (unless of course the compiler/author has some proof or established invariant that the memory M cannot be deallocated).

Background reading

See UCG zulip and T-lang zulip and rust-lang/unsafe-code-guidelines#252

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

Improving trust in the Rust compiler

Summary

I would like to propose a path to improving the trust level of the Rust compiler. Trust is a function of many things: quality improvement of internals, validation of existing work and also understanding of flaws. Rust already has a lot of great base strength to start such an effort: good ties to universities, an interest in applying research and some ongoing effort in making Rust safer.

We believe it is the right moment to start taking steps to climb the trust ladder. Rust is popular now and will start to see more industry adoption, including in safety-critical and mission-critical industries. Positioning Rust at a place where such concerns are discussed is a win, both for the industry and the project.

In this design meeting, we would like:

  • To discuss a number of ongoing projects that we believe would boost this topic naturally (chalk, polonius, miri)
  • Discuss the importance of, and the opportunities to be found in, MIR
  • Discuss organisational concerns to make sure such an effort has an open roadmap and contribution possibilities

We appreciate that such an endeavor can't be discussed in one meeting. The goal of this meeting is to start working on planning and experimental work, leading to stronger discussions and maybe RFCs later, but not in a short timeframe.

Background reading

This is based on the pitch for "Sealed Rust" last year, a name which we have now retired.
https://ferrous-systems.com/blog/sealed-rust-the-pitch/
https://ferrous-systems.com/blog/sealed-rust-the-plan/

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

Types as const Parameters

Project Proposal: Types as const Parameters

Summary and problem statement

Rust's current(ly planned) generics allow three distinct and unrelated forms of generic parameter: types, lifetimes, and const values. Here we propose a way to unify the three by making the first two particular cases of the third, retaining the existing separate syntax as a simple sugar over the unified form, and thus preserving full backwards compatibility. This automatically subsumes variadic generics, as well as arbitrarily more complex and expressive forms of data structures and computation over types, as ordinary const Rust.

Motivation, use-cases, and solution sketches

As Rust gets more and more expressive const computation, and unlocks const generics, it's become apparent that the language for working with types is noticeably less expressive than the language for working with const values. One particular pain point is that values can be used with data structures such as slices or Options, but types have no such capabilities. Variadic generics have been proposed a number of times to address this partially, but none of these attempts have gotten far. Further, there are cases of a type constructor wanting to accept a variable number of types in a non-list-like way, which variadic generics don't handle well, if at all.
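For contrast, the const generics referred to above already allow value parameters in plain Rust today, just not Type or Lifetime values (a stable-Rust sketch, not the proposed extension):

```rust
// A const *value* parameter works today...
struct Buf<const N: usize> {
    bytes: [u8; N],
}

fn main() {
    let b = Buf::<4> { bytes: [0; 4] };
    assert_eq!(b.bytes.len(), 4);
    // ...but there is no way to write e.g. `const T: Type`,
    // which is exactly the gap this proposal fills.
}
```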

Here we propose a single extension to Rust's generics system that automatically solves both of the above problems and then some, while arguably simplifying the generics model rather than further complicating it. The idea is to treat each of types and lifetimes as just another type of const value, desugaring "normal" type and lifetime generic parameters to const generic parameters (e.g.,

Foo<'a, 'b, X, Y, Z>

desugars to

Foo<{'a}, {'b}, {X}, {Y}, {Z}>

, and each can be written in user code just when the other can). To accomplish this, a new standard module {core, std}::type_level will be introduced, and types Type and Lifetime will be placed within it (names very bikesheddable). These two types can only appear in const context: as the types of const values, const generic parameters, and function parameters of const fns (list not meant to be exhaustive but only suggestive). The previous example's declaration would then desugar from (e.g.)

struct Foo<'x, 'y, A, B, C> { ... }

to

struct Foo<const x: Lifetime, const y: Lifetime, const A: Type, const B: Type, const C: Type> { ... }

. Likewise, (non-generic) associated types in traits would desugar to associated consts of type Type, and similarly for non-associated type aliases. (Making that desugaring work for the generic case naturally extends the ability to have generic parameters to consts of all kinds, which seems reasonable, if not particularly motivated unto itself.)

What does this unification buy us? For one thing, we now have variadic generics "for free": we can just use slices of types! For example:

struct VarFoo<const tys: &[Type]> { ... }
// …
let vf: VarFoo<{&[i64, i32, i64, u32, String]}> = …

Tuples of const-computed form can be supported easily by introducing {core, std}::tuple::Tuple with exactly the above declaration signature, and making existing tuples desugar to it.

Having types and lifetimes as const values lets us write const fns manipulating them, and lets us put them in additional data structures besides just slices. For example:

  • a rose tree of types Rose<Type> where Rose is defined as:
    #[derive(PartialEq, Eq, Clone, Debug)]
    enum Rose<T> {
        Leaf(T),
        Node(Vec<Rose<T>>),
    }
    would be a useful const generic parameter to a type of "heterogeneous trees", a.k.a. nested tuples;
  • an Option<Type> would be a useful const generic parameter to an "optionally typed box", i.e., something like Box<Any> but where the contained type might or might not actually be specified;
  • a descriptor for a finite state machine FSM<Type>, where each node is associated with a type and there's a marked "current" node, is a useful generic parameter to a coroutine/generator in order to describe which possible types it can yield when.

The unification of types and lifetimes under consts also makes it easier (though still not immediate or automatic) to implement higher-rank constraints quantifying over types and const values rather than just lifetimes, since the work of dealing with lifetimes as a special case will already have been done and much of it could probably treat types and (other) consts the same way.

A third member of {core, std}::type_level is needed if we want to express const computations around constraints: Constraint would be the type of (fully specified) constraints, while bounds would be treated as unary type constructors of eventual type Constraint rather than Type. Like its fellows, Constraint would only be usable at typechecking/const-evaluation time. We don't see a need to introduce Constraint at the same time as Type and Lifetime, though; it can be added later, or not at all, and the rest of the above will still work perfectly well. Having Constraint would also make static assertions much easier to specify and use, as they could just take one or more Constraints and check them in the standard way.

Prioritization

This fits into the lang team priorities under both โ€œConst generics and constant evaluationโ€ and โ€œTrait and type system extensionsโ€, as well as to a more limited extent under โ€œBorrow checker expressiveness and other lifetime issuesโ€.

Links and related work

In addition to the attempts at variadic generics linked above, this also relates by its nature to HKTs and GATs, as well as const generics as a whole. The author is certain there are many more interested parties but doesn't know how to find or link them; help would be very appreciated here.

The ideas here are of course broadly related to dependent types and the uses they've been put to; a closer analog to this exact feature are the DataKinds and ConstraintKinds features of GHC Haskell. To the author's knowledge, no other language has implemented something like this short of implementing full dependent types; in particular, C++ continues to maintainโ€”and even reinforceโ€”the distinction between types and constexpr values that this proposal would like to erase.

Initial people involved

The author (@pthariensflame) has been privately stewing this idea over for a few months; to their knowledge no one else has yet proposed this for Rust.

Add a `NOOP_METHOD_CALL` lint for methods which should never be directly called

Based on rust-lang/compiler-team#375

Summary

Add a new lint NOOP_METHOD_CALL, which fires on calls to methods which are known to do nothing. To start with, we would lint calls to the following methods:

  • <&T as Clone>::clone
  • <&T as Borrow>::borrow
  • <&T as Deref>::deref
  • <&T as ToOwned>::to_owned

These trait impls are useful in generic code (e.g. you pass an &T to a function expecting a Clone argument), but are pointless when called directly (e.g. <&bool as Clone>::clone(&&true)).

Note that we will intentionally not perform any kind of post-monomorphization checks. This lint will only fire on calls that are known to have the proper receiver (&T) at the call site (where the user could just remove the call).

For example

struct Foo;

fn clone_it<T: Clone>(val: T) -> T {
    val.clone() // No warning - we don't know if `T` is `&T`
}

fn main() {
    let val = &Foo;
    val.clone(); // WARNING: noop method call
    clone_it(val);
}
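For illustration, the no-op behavior can be observed on stable Rust today. This is a minimal sketch (the `Foo` type and `ref_clone_is_noop` helper are invented for the example): because `Foo` does not implement `Clone`, `r.clone()` falls back to the blanket `impl<T> Clone for &T`, which merely copies the reference.

```rust
use std::ptr;

#[derive(Debug)]
struct Foo(i32); // note: `Foo` deliberately does NOT implement `Clone`

// Since `Foo: !Clone`, method resolution falls back to the blanket
// `impl<T> Clone for &T`, so the "clone" just copies the reference.
fn ref_clone_is_noop(r: &Foo) -> &Foo {
    #[allow(noop_method_call)]
    let copied = r.clone();
    copied
}

fn main() {
    let owned = Foo(1);
    let copied = ref_clone_is_noop(&owned);
    // Both references point at the same `Foo`; nothing was cloned.
    assert!(ptr::eq(copied, &owned));
}
```

If `Foo` did derive `Clone`, method resolution would instead auto-deref and call `Foo::clone`, returning an owned `Foo`; the lint only applies when the reference impl is the one selected.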

The precise mechanism used to indicate that these methods should be linted is not specified.
To start with, we could add an internal attribute #[noop], or hard-code a list of method paths in a lint visitor.

In the future, this could be made available to user code by stabilizing some mechanism to mark a function as being linted by NOOP_METHOD_CALL.
However, any such mechanism will need to have a way of dealing with blanket impls (e.g. <&T as ToOwned>::to_owned goes through impl<T: Clone> ToOwned for T), which will require additional design work.

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

Discuss RFC 3058, try_trait_v2

Summary

@joshtriplett mentioned on zulip that they'd like to talk about this in a design meeting, so here's an issue for that ๐Ÿ™‚

Background reading

The new RFC: rust-lang/rfcs#3058
The previous RFC: https://rust-lang.github.io/rfcs/1859-try-trait.html
The previous-previous RFC: https://rust-lang.github.io/rfcs/0243-trait-based-exception-handling.html#generalize-over-result-option-and-other-result-carrying-types
Niko's great experience report on the tracking issue: rust-lang/rust#42327 (comment)
Previous design meeting zulip topic: https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/Design.20meeting.3A.20try.2C.20oh.20my!

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds
to a possible topic of discussion that may be scheduled for deeper discussion
during one of our design meetings.

Review const evaluation skill-tree and unsafe questions

Meeting proposal info

  • Title: Const evaluation skill-tree
  • Type: technical

Summary

We should discuss the work that @oli-obk did to prepare a const evaluation roadmap. I'd also like to talk about the definition of unsafe in const functions. Do they both fit in one meeting? We could narrow the topic down.

Background reading:

Some other folks that may want to be involved:

About this issue

This issue corresponds to a lang-team design meeting proposal. It corresponds to a possible topic of discussion that may be scheduled for deeper discussion during one of our design meetings.

A `FunctionPointer` trait to represent all `fn` types

WARNING

The Major Change Process was proposed in RFC 2936 and is not yet in
full operation. This template is meant to show how it could work.

Proposal

Summary

Create a FunctionPointer trait that is "fundamental" (in the coherence sense) and built-in to the compiler. It is automatically implemented for all fn types, regardless of any other details (ABI, argument types, and so forth).

Motivation

You can't write an impl that applies to any function pointer

It is not possible to write an impl that is parametric over all fn types today. This is for a number of reasons:

  • You can't write an impl that is generic over ABI.
  • You can't write an impl that is generic over the number of parameters.
  • You can't write an impl that is generic over where binding occurs.

We are unlikely to ever make it possible to write an impl generic over all of those things.

And yet, there is a frequent need to write impls that work for any function pointer. For example, it would be nice if all function pointers were Ord, just as all raw pointers are Ord.

To work around this, it is common to find a suite of impls that attempts to emulate an impl over all function pointer types. Consider this code from the trace crate, for example:

trace_acyclic!(<X> fn() -> X);

trace_acyclic!(<A, X> fn(&A) -> X);
trace_acyclic!(<A, X> fn(A) -> X);

trace_acyclic!(<A, B, X> fn(&A, &B) -> X);
trace_acyclic!(<A, B, X> fn(A, &B) -> X);
trace_acyclic!(<A, B, X> fn(&A, B) -> X);
trace_acyclic!(<A, B, X> fn(A, B) -> X);
...

Or this code in the standard library.
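A minimal, self-contained reproduction of this pattern is sketched below (the `AnyFnPtr` trait and `impl_any_fn_ptr!` macro are invented for illustration): one macro invocation per arity, which is exactly the boilerplate a built-in `FunctionPointer` trait would eliminate.

```rust
// A hand-rolled marker trait for function pointers, implemented
// one arity at a time -- the pattern the proposal wants to replace.
trait AnyFnPtr: Copy {
    fn addr(self) -> usize;
}

macro_rules! impl_any_fn_ptr {
    ( $( $A:ident ),* ) => {
        impl<R, $( $A ),*> AnyFnPtr for fn($( $A ),*) -> R {
            fn addr(self) -> usize {
                // Casting a fn pointer to usize is already stable.
                self as usize
            }
        }
    };
}

// One invocation per arity; real crates stamp out a dozen or more.
impl_any_fn_ptr!();
impl_any_fn_ptr!(A);
impl_any_fn_ptr!(A, B);

fn double(x: i32) -> i32 { x * 2 }

fn main() {
    let p: fn(i32) -> i32 = double;
    assert_ne!(p.addr(), 0);
}
```

Note that even this suite cannot cover differing ABIs or higher-ranked signatures like `for<'a> fn(&'a u32)` without yet more impls, which is the core of the motivation above.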

Bug fixes in rustc endanger existing approaches

As part of the work to remove the leak-check in the compiler, we introduced a warning about potential overlap between impls like

impl<T> Trait for fn(T)
impl<U> Trait for fn(&U)

This is a complex topic. Likely we will ultimately accept those impls as non-overlapping, since wasm-bindgen relies on this pattern, as do numerous other crates -- though there may be other limitations. But many of the use cases where those sorts of impls exist would be better handled with an opaque FunctionPointer trait anyhow, since what they're typically really trying to express is "any function pointer" (wasm-bindgen is actually somewhat different in this regard, as it has a special case for fns that take references that is distinct from fns that take ownership).

Proposal

Add in a trait FunctionPointer that is implemented for any fn type (but only fn types). It is built-in to the compiler, tagged as #[fundamental], and does not permit user-defined implementations. It offers a core operation, as_usize, for converting to a usize, which in turn can be used to implement the various built-in traits:

#[fundamental]
pub trait FunctionPointer: Copy + Ord + Eq {
    fn as_usize(self) -> usize; // but see alternatives below
}

impl<T: FunctionPointer> Ord for T {
    fn cmp(&self, other: &T) -> std::cmp::Ordering {
        self.as_usize().cmp(&other.as_usize())
    }
}

impl<T: FunctionPointer> PartialEq for T {
    fn eq(&self, other: &T) -> bool {
        self.as_usize() == other.as_usize()
    }
}

impl<T: FunctionPointer> Eq for T { }

In terms of the implementation, this would be integrated into the rustc trait solver, which would know that only fn(_): FunctionPointer.

As with Sized, no user-defined impls would be permitted.

Concerns and alternative designs

  • Will we get negative coherence interactions because of the blanket impls?
    • I think that the #[fundamental] trait should handle that, but we have to experiment to see how smart the trait checker is.
  • Will function pointers always be representable by a usize?
    • On Linux, dlsym returns a pointer, so in practice this is a pretty hard requirement.
    • Platforms that want more than a single pointer (e.g., AVR) generally implement that via trampolines or other techniques.
    • It's already possible to transmute from fn to usize (or to cast with as), so to some extent we've already baked in this dependency.
  • Seems rather ad-hoc, what about other categories of types, like integers?
    • Fair enough. However, function pointers have some unique challenges, as listed in the motivation.
    • We could pursue this path for other types if it proves out.
  • What about dyn Trait and friends?
    • It's true that those dyn types have similar challenges to fn types, since there is no way to be generic over all the different sorts of bound regions one might have (e.g., over for<'a> dyn Fn(&'a u32) and so forth).
    • Unlike fn types, their size is not fixed, so as_usize could not work, which might argue for the "extended set of operations" approach.
    • Specifically one might confuse &dyn Fn() for fn().
    • Perhaps adding a fundamental DynType trait would be a good addition.
  • What about FnDef types (the unique types for each function)?
    • If we made FunctionPointer apply to FnDef types, that can be an ergonomic win and quite useful.
    • The as_usize could trigger us to reify a function pointer.
    • The trait name might then not be a good fit, as a FnDef is not, in fact, a function pointer, just something that could be used to create a function pointer.
  • What about const interactions?
    • I think we can provide const impls for the FunctionPointer trait, so that as_usize and friends can be used from const functions

Alternative designs

Instead of the as_usize method, we might have methods like ord(Self, Self) -> Ordering that can be used to implement the traits. That set can grow over time since no user-defined impls are permitted.

This is obviously less 'minimal' but might work better (as noted above) if we extend to exotic platforms or for dyn types.

However, it may be that there is extant code that relies on converting fn pointers to usize and such code could not be converted to use fn traits.

The Major Change Process

Once this MCP is filed, a Zulip topic will be opened for discussion. Ultimately, one of the following things can happen:

  • If this is a small change, and the team is in favor, it may be approved to be implemented directly, without the need for an RFC.
  • If this is a larger change, then someone from the team may opt to work with you and form a project group to work on an RFC (and ultimately see the work through to implementation).
  • Alternatively, it may be that the issue gets closed without being accepted. This could happen because:
    • There is no bandwidth available to take on this project right now.
    • The project is not a good fit for the current priorities.
    • The motivation doesn't seem strong enough to justify the change.

You can read [more about the lang-team MCP process on forge].

Comments

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

Declarative macro repetition counts

WARNING

The Major Change Process was proposed in RFC 2936 and is not yet in
full operation. This template is meant to show how it could work.

Proposal

Summary

Add syntax to declarative macros to allow determination of the number of metavariable repetitions.

Motivation, use-cases, and solution sketches

Macros with repetitions often expand to code that needs to know or could
benefit from knowing how many repetitions there are. Consider the standard
sample macro to create a vector, recreating the standard library vec! macro:

macro_rules! myvec {
    ($($value:expr),* $(,)?) => {
        {
            let mut v = Vec::new();
            $(
                v.push($value);
            )*
            v
        }
    };
}

This would be more efficient if it could use Vec::with_capacity to
preallocate the vector with the correct length. However, there is no standard
facility in declarative macros to achieve this.

There are various ways to work around this limitation. Some common approaches
that users take are listed below, along with some of their drawbacks.

Use recursion

Use a recursive macro to calculate the length.

macro_rules! count_exprs {
    () => {0usize};
    ($head:expr, $($tail:expr,)*) => {1usize + count_exprs!($($tail,)*)};
}

macro_rules! myvec {
    ($($value:expr),* $(,)?) => {
        {
            let size = count_exprs!($($value,)*);
            let mut v = Vec::with_capacity(size);
            $(
                v.push($value);
            )*
            v
        }
    };
}

Whilst this is among the first approaches that a novice macro programmer
might take, it is also the worst performing. It rapidly hits the recursion
limit, and if the recursion limit is raised, it takes more than 25 seconds to
compile a sequence of 2,000 items. Sequences of 10,000 items can crash
the compiler with a stack overflow.
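For small inputs the recursive counter does behave as expected; a runnable check is below. Note that its grammar requires a trailing comma after each expression, which is why the `myvec!` wrapper passes `$($value,)*`.

```rust
// The recursive counting macro from above, exercised directly.
// Each recursion peels off one expression and adds 1usize.
macro_rules! count_exprs {
    () => { 0usize };
    ($head:expr, $($tail:expr,)*) => { 1usize + count_exprs!($($tail,)*) };
}

fn main() {
    assert_eq!(count_exprs!(), 0);
    assert_eq!(count_exprs!(1, 2, 3,), 3);
    // The count depends only on the number of expressions, not their values.
    assert_eq!(count_exprs!("a", "b",), 2);
}
```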

Generate a sum of 1s

This example is courtesy of @dtolnay.
Create a macro expansion that results in an expression like 0 + 1 + ... + 1.
There are various ways to do this, but one example is:

macro_rules! myvec {
    ( $( $value:expr ),* $(,)? ) => {
        {
            let size = 0 $( + { stringify!($value); 1 } )*;
            let mut v = Vec::with_capacity(size);
            $(
                v.push($value);
            )*
            v
        }
    };
}

This performs better than recursion, however large numbers of items still
cause problems. It takes nearly 4 seconds to compile a sequence of 2,000
items. Sequences of 10,000 items can still crash the compiler with a stack
overflow.

Generate a slice and take its length

This example is taken from
[https://danielkeep.github.io/tlborm/book/blk-counting.html]. Create a macro
expansion that results in a slice of the form [(), (), ... ()] and take its
length.

macro_rules! replace_expr {
    ($_t:tt $sub:expr) => {$sub};
}

macro_rules! myvec {
    ( $( $value:expr ),* $(,)? ) => {
        {
            let size = <[()]>::len(&[$(replace_expr!(($value) ())),*]);
            let mut v = Vec::with_capacity(size);
            $(
                v.push($value);
            )*
            v
        }
    };
}

This is more efficient, taking less than 2 seconds to compile 2,000 items,
and just over 6 seconds to compile 10,000 items.
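The slice-length trick can be checked end to end; this is the same `replace_expr!`/`myvec!` pair as above, with assertions confirming that the capacity is preallocated from the counted length.

```rust
// Replace any token tree with a substitute expression; used to turn
// each `$value` into a unit `()` without evaluating it.
macro_rules! replace_expr {
    ($_t:tt $sub:expr) => { $sub };
}

macro_rules! myvec {
    ( $( $value:expr ),* $(,)? ) => {{
        // `[(), (), ...]` has one unit per input expression.
        let size = <[()]>::len(&[ $( replace_expr!(($value) ()) ),* ]);
        let mut v = Vec::with_capacity(size);
        $( v.push($value); )*
        v
    }};
}

fn main() {
    let v = myvec![1, 2, 3];
    assert_eq!(v, vec![1, 2, 3]);
    assert!(v.capacity() >= 3);
}
```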

Discoverability

Just considering the performance comparisons misses the point. While we
can work around these limitations with carefully crafted macros, for a
developer unfamiliar with the subtleties of macro expansions it is hard
to discover which is the most efficient way.

Furthermore, whichever method is used, code readability is harmed by the
convoluted expressions involved.

Proposal

The compiler already knows how many repetitions there are. What is
missing is a way to obtain it.

I propose we add syntax to allow this to be expressed directly:

macro_rules! myvec {
    ( $( $value:expr ),* $(,)? ) => {
        {
            let mut v = Vec::with_capacity($#value);
            $(
                v.push($value);
            )*
            v
        }
    };
}

The new "metavariable count" expansion $#ident expands to a literal
number equal to the number of times ident would be expanded at the depth
that it appears.

A prototype implementation indicates this compiles a 2,000 item sequence
in less than 1s, and a 10,000 item sequence in just over 2s.

Nested repetitions

In the case of nested repetitions, the value depends on the depth of the
metavariable count expansion, where it expands to the number of repetitions
at that level.

Consider a more complex nested example:

macro_rules! nested {
    ( $( { $( { $( $x:expr ),* } ),* } ),* ) => {
        {
            println!("depth 0: {} repetitions", $#x);
            $(
                println!("  depth 1: {} repetitions", $#x);
                $(
                    println!("    depth 2: {} repetitions", $#x);
                    $(
                        println!("      depth 3: x = {}", $x);
                    )*
                )*
            )*
        }
    };
}

And given a call of:

   nested! { { { 1, 2, 3, 4 }, { 5, 6, 7 }, { 8, 9 } },
             { { 10, 11, 12 }, { 13, 14 }, { 15 } } };

This program will print:

depth 0: 2 repetitions
  depth 1: 3 repetitions
    depth 2: 4 repetitions
      depth 3: x = 1
      depth 3: x = 2
      depth 3: x = 3
      depth 3: x = 4
    depth 2: 3 repetitions
      depth 3: x = 5
      depth 3: x = 6
      depth 3: x = 7
    depth 2: 2 repetitions
      depth 3: x = 8
      depth 3: x = 9
  depth 1: 3 repetitions
    depth 2: 3 repetitions
      depth 3: x = 10
      depth 3: x = 11
      depth 3: x = 12
    depth 2: 2 repetitions
      depth 3: x = 13
      depth 3: x = 14
    depth 2: 1 repetitions
      depth 3: x = 15

Alternative designs

The macro could expand to a usize literal (e.g. 3usize) rather than just
a number literal. This matches what the number is internally in the compiler,
and may help with type inference, but it would prevent users using
stringify!($#x) to get the number as a string.

In its simplest form, this only expands to the repetition count for a single level of nesting.
In the example above, if we wanted to know the total count of
repetitions (i.e., 15), we would be unable to do so easily. There are a
couple of alternatives we could use for this:

  • $#var could expand to the total count, rather than the count at the
    current level. But this would make it hard to find the count at a particular
    level, which is also useful.

  • We could use the number of '#' characters to indicate the number of depths
    to sum over. In the example above, at the outer-most level, $#x expands
    to 2, $##x expands to 6, and $###x expands to 15.

The syntax being proposed is specific to counting the number of repetitions of
a metavariable, and isn't easily extensible to future ideas without more
special syntax. A more general form might be:

   ${count(ident)}

In this syntax extension, ${ ... } in macro expansions would contain
metafunctions that operate on the macro's definition itself. The syntax could
then be extended by future RFCs that add new metafunctions. Metafunctions
could take additional arguments, so the alternative to count repetitions at
multiple depths ($##x above) could be represented as ${count(x, 2)}.

There's nothing to preclude this being a later step, in which case $#ident
would become sugar for ${count(ident)}.

Links and related work

See https://danielkeep.github.io/tlborm/book/blk-counting.html for some common workarounds.

Declarative macros with repetition are commonly used in Rust for things that
are implemented using variadic functions in other languages.

  • In Java, variadic arguments are passed as an array, so the array's .length
    attribute can be used.

  • In dynamically-typed languages like Perl and Python, variadic arguments are
    passed as lists, and the usual length operations can be used there, too.

The syntax is similar to obtaining the length of strings and arrays in Bash:

string=some-string
array=(1 2 3)
echo ${#string}  # 11
echo ${#array[@]}  # 3

Similarly, the variable $# contains the number of arguments in a function.

The Major Change Process

Once this MCP is filed, a Zulip topic will be opened for discussion. Ultimately, one of the following things can happen:

  • If this is a small change, and the team is in favor, it may be approved to be implemented directly, without the need for an RFC.
  • If this is a larger change, then someone from the team may opt to work with you and form a project group to work on an RFC (and ultimately see the work through to implementation).
  • Alternatively, it may be that the issue gets closed without being accepted. This could happen because:
    • There is no bandwidth available to take on this project right now.
    • The project is not a good fit for the current priorities.
    • The motivation doesn't seem strong enough to justify the change.

You can read [more about the lang-team MCP process on forge].

Comments

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

Portable SIMD project group

WARNING

The Major Change Process was proposed in RFC 2936 and is not yet in
full operation. This template is meant to show how it could work.

Proposal

Create a project group for considering what portable SIMD in the standard library should look like.

Motivation, use-cases, and solution sketches

While Rust presently exposes ALU features of the underlying ISAs in a portable way, it doesn't expose SIMD capabilities in a portable way except for autovectorization.

A wide variety of computation tasks can be accomplished faster using SIMD than using the ALU capabilities. Relying on autovectorization to go from ALU-oriented source code to SIMD-using object code is not a proper programming model. It is brittle and depends on the programmer being able to guess correctly what the compiler back end will do. Requiring godbolting for every step is not good for programmer productivity.

Using ISA-specific instructions results in ISA-specific code. Operations like "perform lane-wise addition on these two vectors of 16 u8 lanes" should be portable for the same reason that "add these two u8 scalars" is a portable operation that does not require the programmer to write ISA-specific code.

Typical use cases for SIMD involve text encoding conversion and graphics operations on bitmaps. Firefox already relies on the Rust packed_simd crate for text encoding conversion.

Compiler back ends in general and LLVM in particular provide a notion of portable SIMD where the types are lane-aware and of particular size and the operations are ISA-independent and lower to ISA-specific instructions later. To avoid a massive task of replicating the capabilities of LLVM's optimizer and back ends, it makes sense to leverage this existing capability.

However, to avoid exposing the potentially subject-to-change LLVM intrinsics, it makes sense to expose an API that is conceptually close and maps rather directly to the LLVM concepts while making sense for Rust and being stable for Rust applications. This means introducing lane-aware types of typical vector sizes, such as u8x16, i16x8, f32x4, etc., and providing lane-wise operations that are broadly supported by various ISAs on these types. This means basic lane-wise arithmetic and comparisons.

Additionally, it is essential to provide shuffles where what lane goes where is known at compile time. Also, unlike the LLVM layer, it makes sense to provide distinct boolean/mask vector types for the outputs of lanewise comparisons, because encoding the invariant that all bits of a lane are either one or zero allows operations like "are all lanes true" or "is at least one lane true" to be implemented more efficiently especially on x86/x86_64.

When the target doesn't support SIMD, LLVM provides ALU-based emulation, which might not be a performance win compared to manual ALU code, but at least keeps the code portable.

When the target does support SIMD, the portable types must be zero-cost transmutable to the types that vendor intrinsics accept, so that specific things can be optimized with ISA-specific alternative code paths.

The packed_simd crate provides an implementation that already works across a wide variety of Rust targets and that has already been developed with the intent that it could become std::simd. It makes sense not to start from scratch but to start from there.

The code needs to go in the standard library if it is assumed that rustc won't, on stable Rust, expose the kind of compiler internals that packed_simd depends on.

Please see the FAQ.

Links and related work

The Major Change Process

Once this MCP is filed, a Zulip topic will be opened for discussion. Ultimately, one of the following things can happen:

  • If this is a small change, and the team is in favor, it may be approved to be implemented directly, without the need for an RFC.
  • If this is a larger change, then someone from the team may opt to work with you and form a project group to work on an RFC (and ultimately see the work through to implementation).
  • Alternatively, it may be that the issue gets closed without being accepted. This could happen because:
    • There is no bandwidth available to take on this project right now.
    • The project is not a good fit for the current priorities.
    • The motivation doesn't seem strong enough to justify the change.

You can read [more about the lang-team MCP process on forge].

Comments

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

MCP: Deref Patterns

Proposal

Allow pattern matching through types that impl Deref or DerefMut.

Summary and problem statement

Currently in Rust, pattern matching is blocked by boundaries like smart pointers, containers, and some wrappers. To get around this you need either if let guards (unstable) or nested match/if-let expressions. The former is limited to one such level, and the latter can become excessive for deeply nested types. To solve this, I propose that "deref patterns" be added, allowing such matching to be performed.

An exception to the above problem is that Box<T> can be matched with feature(box_patterns). However, this is magic behaviour of Box, and I am not a fan of this kind of magic.

Motivation, use-cases, and solution sketches

Recursive types necessarily include smart pointers, even when you could normally match through them.
For example, in a proc-macro I worked on to support restricted variadic generics, I wanted to match "fold expressions", which take the form (<pattern> <op> ...), so I would need to match against Expr::Paren(ParenExpr{ expr: Expr::Binary(ExprBinary{ left, op, right: Expr::Verbatim(t), .. }), .. }). However, this is currently not possible and requires nested matches. This generalizes to any case where you need to check some pattern but hit a deref boundary.
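The problem can be seen with a much smaller example than the syn one above. In this sketch (the `Tree` type and `left_leaf` function are invented for illustration), the pattern `Tree::Node(Tree::Leaf(v), _)` is rejected today because the node's children sit behind a `Box`, so a nested match is required:

```rust
enum Tree {
    Leaf(i32),
    Node(Box<Tree>, Box<Tree>),
}

// Deref patterns would let this be a single pattern like
// `Tree::Node(Tree::Leaf(v), _)`; today we must match, deref, and
// match again.
fn left_leaf(t: &Tree) -> Option<i32> {
    match t {
        Tree::Node(l, _) => match &**l {
            Tree::Leaf(v) => Some(*v),
            _ => None,
        },
        _ => None,
    }
}

fn main() {
    let t = Tree::Node(Box::new(Tree::Leaf(7)), Box::new(Tree::Leaf(8)));
    assert_eq!(left_leaf(&t), Some(7));
    assert_eq!(left_leaf(&Tree::Leaf(0)), None);
}
```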

Prioritization

I do not believe this fits into any of the listed priorities. It might be considered a "Targeted ergonomic wins and extensions" item; however, I believe it is larger than is intended for that category.

Links and related work

This has been discussed on the Rust Internals Forum at https://internals.rust-lang.org/t/somewhat-random-idea-deref-patterns/13813, as well as on zulip at https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/Deref.20patterns.

A tracking document of all currently discussed questions and potential answers can be found here https://hackmd.io/GBTt4ptjTh219SBhDCPO4A.

Prior discussions raised on the IRLO thread:

Initial people involved

I would be involved initially, as well as Nadreiril on zulip. I would be open to anyone who wished to help with it.

Comments

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.

MCP: Elevated privileges for #[test_util]

Proposal

Motivation, use-cases, and solution sketches

When writing unit tests using the #[test] proc_macro, you get access to private functions and member variables. When projects get larger it often makes sense to introduce utility functions for your tests that are shared between unit tests of different modules.

A problem with those test utils is that they lose the elevated privileges that would e.g. allow you to call a constructor with private fields. To solve this you could make the respective fields pub(crate) but that would leak beyond just a test use case.

I propose the addition of a #[test_util] proc_macro that would elevate privileges of such utility functions in test environments only. I am open to ideas for naming.

File: lib.rs

mod foo;
mod bar;

#[cfg(test)]
mod test_utils;

File: foo.rs

pub struct Foo {
    pub(crate) a: String, // leak data to allow `create_fake_foo` to construct
    pub b: f32,
}

// Logic goes here

#[cfg(test)]
mod tests {
    use crate::test_utils::create_fake_foo;

    #[test]
    fn test_foo() {
        let foo = create_fake_foo();
        // unit test goes here
    }
}

File: bar.rs

use crate::foo::Foo;

struct Bar(Foo);

// Logic goes here

#[cfg(test)]
mod tests {
    use super::Bar;
    use crate::test_utils::create_fake_foo;

    #[test]
    fn test_bar() {
        let foo = create_fake_foo();
        let bar = Bar(foo);
        // unit test goes here
    }
}

File: test_utils.rs

use crate::foo::Foo;

// #[test_util] would prevent the need to leak `Foo`'s fields in non-test environments
pub(crate) fn create_fake_foo() -> Foo {
    Foo {
        a: "foo".to_string(),
        b: 1.234,
    }
}
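The status quo workaround the proposal describes can be condensed into a single runnable file (module layout and values as in the multi-file example above): `a` is made `pub(crate)` solely so the shared helper can construct a `Foo` outside its defining module, which is exactly the leak `#[test_util]` would avoid.

```rust
// Single-file sketch of today's workaround, mirroring the
// foo.rs / test_utils.rs split above.
mod foo {
    pub struct Foo {
        pub(crate) a: String, // leaked only so the helper can construct Foo
        pub b: f32,
    }
}

mod test_utils {
    use crate::foo::Foo;

    pub(crate) fn create_fake_foo() -> Foo {
        Foo {
            a: "foo".to_string(),
            b: 1.234,
        }
    }
}

fn main() {
    let foo = test_utils::create_fake_foo();
    // The helper works, but at the cost of widening `a`'s visibility
    // in non-test builds as well.
    assert_eq!(foo.a, "foo");
    assert_eq!(foo.b, 1.234);
}
```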

Links and related work

Original Zulip discussion

The Major Change Process

Once this MCP is filed, a Zulip topic will be opened for discussion. Ultimately, one of the following things can happen:

  • If this is a small change, and the team is in favor, it may be approved to be implemented directly, without the need for an RFC.
  • If this is a larger change, then someone from the team may opt to work with you and form a project group to work on an RFC (and ultimately see the work through to implementation).
  • Alternatively, it may be that the issue gets closed without being accepted. This could happen because:
    • There is no bandwidth available to take on this project right now.
    • The project is not a good fit for the current priorities.
    • The motivation doesn't seem strong enough to justify the change.

You can read [more about the lang-team MCP process on forge].

Comments

This issue is not meant to be used for technical discussion. There is a Zulip stream for that. Use this issue to leave procedural comments, such as volunteering to review, indicating that you second the proposal (or third, etc), or raising a concern that you would like to be addressed.
