Rust Forge
Welcome to the Rust Forge! Rust Forge serves as a repository of supplementary documentation useful for members of the Rust Programming Language project. If you find any mistakes or typos, or want to add to the Rust Forge, feel free to file an issue or PR on the Rust Forge GitHub.
Help Wanted
Want to contribute to Rust, but don't know where to start? Here's a list of rust-lang projects that have marked issues that need help and issues that are good first issues.
Repository | Description |
---|---|
rust | The Rust Language & Compiler |
cargo | The Rust package manager |
crates.io | Source code for crates.io |
www.rust-lang.org | The Rust website |
Current Release Versions
Channel | Version | Release Date |
---|---|---|
Stable | ||
Beta | ||
Nightly | ||
Nightly +1 | ||
No Tools Breakage Week
To ensure the beta release includes all the tools, no tool breakages are allowed in the week before the beta cutoff (except for nightly-only tools).
Beta Cut | No Breakage Week |
---|---|
External Links
- Bibliography of research papers and other projects that influenced Rust.
- Rust Pontoon is a translation management system used to localize the Rust website.
Platforms
Rust uses a number of different platforms for organizing work and internal communications between teams. This is not an exhaustive list; rather, it documents the policies for a select few platforms used by the teams.
Discord
Rust's Discord is currently used by a variety of teams such as Community, Ops, and Documentation, as well as their working groups. It is also maintained as a communication tool for Domain Working Groups, and provides a space for general discussion among Rust users, contributors, and beginners.
Where to go for help with using Discord
Discord's support center provides documentation about its user interface and account settings.
Getting started
- Understand community standards
Discord, like all official Rust spaces, is governed by the Code of Conduct. Before joining the conversation there, you can prepare by reading the Code of Conduct and Moderation Guidelines. It is also useful to read Discord's Community Guidelines.
- Access channels
To access the Rust Discord, visit https://discord.gg/rust-lang. If you do not already have a Discord account, you can register for one as part of the process of gaining access. Your first action should be agreeing to our Code of Conduct by following the instructions in #welcome.
- Configure notifications
It is a good idea to disable notifications for channels that are not relevant to you, so that you will not be overwhelmed with messages. Select the expansion arrow next to the server name banner (titled "The Rust Programming Language") and select Notifications from the dropdown. Then follow the configuration instructions provided on the Discord Support site.
Appropriate conversation
Discussions should be related to the channel's purpose. In team channels, conversation should be related to team business. All channels are expected to be used for purposes related to the Rust project; discussions of (for example) wildlife or sightseeing are not appropriate.
Channels
The following channels are relevant to newcomers to the Rust project:
- welcome - Where you agree to the CoC.
- rust-usage - This is a channel where you can access support for resolving specific language use questions. The Rust Users Forum is also relevant to your needs.
- beginners - Here, you can meet people who began using Rust relatively recently.
- contribute - Interested in contributing to the Rust project? In addition to joining this channel, you can subscribe to the This Week In Rust newsletter, where many opportunities are regularly posted. It may also help to find out more about specific teams.
Channels outside of General are for contributors to Rust.
Messages
Discord conversation takes place when people are available, so you should not generally expect that your messages will receive a response quickly unless a meeting is taking place. Depending on how your notifications are configured, you will see a red circle on top of the Discord icon in your system tray when new messages are received. If you wish to communicate with a specific individual, right-click on their user icon and select "Message" in the dropdown menu.
Read-only view
Set up a Discord account (as described in Getting Started, above) in order to access Discord. There is not currently a read-only archive view available.
Email
While most of Rust's discussion happens on other platforms, email is eternal and we occasionally need a way to approach individuals or groups privately. Our email is hosted through Mailgun (provided by Mozilla). We create and edit the mailing lists for teams through the rust-lang/team repository. Our email domain is rust-lang.org, e.g. ferris@rust-lang.org.
Sending a public broadcast
If your team needs to reach everyone in the Rust organisation, it can send an email to all@. It is recommended that you only use this mailing list when you know that you need to contact every member, such as for organising a members event like the All Hands, or for security alerts.
Keeping responses private
When sending a message to all@, do not put all@ in the To field; that would mean any replies to your broadcast are also sent to everyone. Instead, put your team's email address in the To field, and place all@ in the Bcc field. Then any replies will be sent to just your team.
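For example, a broadcast from a hypothetical team address could set its headers like this:
To: my-team@rust-lang.org
Bcc: all@rust-lang.org
Replies to the message will then go only to my-team@rust-lang.org.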
GitHub
GitHub is where the Rust project hosts all of its code, as well as a large part of its discussions.
Organisations
- rust-lang — The Rust project organisation.
- rust-embedded — The Embedded Working Group organisation.
- rustwasm — The WebAssembly Working Group organisation.
- rust-cli — The Command Line Application Working Group organisation.
- rust-secure-code — The Secure Code Working Group organisation.
- rust-gamedev — The Game Development Working Group organisation.
Team Repositories
- Compiler Team
- Core Team
- Crates.io Team
- Infrastructure Team
- Language Team
- Moderation Team
- Release Team
Administration FAQ
Who administrates the rust-lang organisation?
All core team members have the admin role.
How do I create a new repository under the rust-lang organisation, or make changes that require admin-level permissions?
You can contact a GitHub admin directly, or post the request in the project leads (public) stream on Zulip and an admin will be able to respond to your request there.
Zulip
Rust's Zulip is used by a number of teams, notably the compiler, language, and library teams, along with their working groups.
Zulip can be an unintuitive platform to get started with. To get started, take a look at the getting started guide. For more detail, examine the Zulip user documentation!
Where to go for help with using Zulip
If you're testing a feature, or want to get help, the #zulip stream is the place to go. Like elsewhere, the best thing to do is to create a new topic for each question.
Getting started
It is recommended to first look at the official getting started guide. Like Rust itself, Zulip is a bit special and reading the documentation before digging can be really helpful.
You'll definitely want to configure the streams that you're subscribed to when getting started; the default set is quite limited, and there are many groups that exist beyond it. Subscribing to a stream is very low cost -- it is similar to being "in" an IRC channel, except that logs are available for all streams, regardless of subscription status.
It's not necessary to introduce yourself, but feel free to say hello in the #new members stream.
User groups
User groups can be pinged by anyone with the @<group> notation, the same as pinging another user. Groups can be created by anyone, and anyone can join a group.
Users should feel free to join (or leave) groups on their own. Furthermore, users should feel free to create groups as needed, though it is currently expected that this is somewhat rare. You should name your group similarly to how you would name a stream for the same purpose, though groups can be more fine-grained (or less). For example, @T-compiler/meeting currently does not have a dedicated stream.
Appropriate conversation
In most streams, you should try to keep conversations related to team business. The #general stream is a bit broader, but even there, discussions should be closely related to Rust (though they may not relate to the projects of any particular team). All channels are expected to be used for discussions related to the Rust project, though; discussions of (for example) wildlife or sightseeing are not appropriate.
Streams
These are similar to "channels" on other platforms (i.e., there should not be too many). On the other hand, you can choose which streams you subscribe to, so there can be more of them than there would be channels on other platforms. Read Zulip's documentation for more details.
Streams are appropriate for any official Rust group. For example, working groups, project groups, and teams are all examples of official groups. These should ideally also be represented in the team repository.
Default streams
This section is still under debate, and it is not yet clear which direction we will go. It is non-normative, and should not be used yet for modifications to the Zulip instance.
The default set of streams is chosen to allow incoming people to be able to have at least one place to go that can then, if necessary, direct them to a more specific location.
Currently that means that every top-level group present on Zulip is visible by default. Specifically, no stream that contains a / will be enabled by default.
Currently this set is:
- general
- t-lang
- t-compiler
- t-libs
- project-ffi-unwind
- project-inline-asm
- project-safe-transmute
- rust-survey-2019
- wg-async-foundations
- wg-database
- wg-formal-methods
- wg-secure-code
- wg-traits
- zulip
An alternative, minimalistic, approach is to use:
- general
- zulip
- announce
- new members
as the default set, which would push people into customizing their default set when starting out.
Stream naming
A stream should be named like #t-{team}/{group name}. For example, #t-compiler/wg-parallel-rustc. More levels of nesting are fine, e.g., a working group might want "subgroups" as well, though you may want to omit the team name in such a case -- keeping the stream name short is good for usability, and helps avoid confusion between different streams which share the same prefix.
If no top-level team exists, or the group spans multiple teams (e.g., project-ffi-unwind), then the top level team should be omitted.
Streams should be clearly communicated as being for a specific purpose. That purpose can be broad, but it should likely include a group of some kind (even if that group is transient, e.g., people who are having trouble with the rust build system, or people working on the compiler). Furthermore, we do not currently intend for this Zulip to be a general place for community projects not affiliated with the Rust organization; if they wish to use Zulip, it is free for open source.
When a new stream is created, you should announce it in #announce. This is generally done automatically by Zulip.
Topics
A topic is attached to every message within a given stream (these are the subdivisions within streams). Topics are generally transient, and live for as long as there is active discussion on a topic. Thinking of topics like email subjects is helpful.
New conversations in a given stream should almost always start in a new topic, not a preexisting one. Unlike (for example) GitHub issues, you should not attempt to search for a past topic on the same subject. Do not spend too long on the name of the topic, either, beyond trying to make it short. Topics should generally be no longer than 20 characters (loosely two to three words), to make sure they remain visible to users.
You should eagerly fork new discussion topics into fresh topics. Note that this can be done with the tail of another topic (if accidentally you diverge into another area of discussion).
To fork from an existing topic, see Zulip's documentation here.
Messages
Zulip is a unique platform which combines synchronous and asynchronous communication in one location. You should not generally expect that your messages will receive a response quickly, and unlike (for example) Discord, there is likely not much reason to "re-ping" on a particular issue every few hours as your message is unlikely to vanish into history, being isolated to a specific topic.
Linkifiers
Our Zulip supports a lot of helpful linkifiers, and we're generally happy to add more on request. See the documentation for the format. Propose one in #zulip!
Generally, github-org/repo#123 works for linking to an issue or PR; the list below gives a few more "special cased" repositories.
Don't forget that standard Markdown syntax for links also works.
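For reference, a Zulip linkifier is defined by a regular expression with a named capture group plus a URL template. A hypothetical entry for the miri linkifier might look like this (syntax per Zulip's linkifier documentation):
Pattern: miri#(?P<id>[0-9]+)
URL template: https://github.com/rust-lang/miri/issues/%(id)s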
We currently support linking to issues on a few repositories:
- rust-lang/rust with #4545 or rust#4545
- rust-lang/rfcs with RFC#3434 or rfc#3434
- rust-lang/async-book with async-book#2334
- rust-lang-nursery/chalk with chalk#2334
- rust-lang/compiler-team with compiler-team#3433
- rust-lang-nursery/ena with ena#3434
- rust-lang/miri with miri#3434
- rust-lang-nursery/polonius with polonius#3434
- rust-analyzer/rust-analyzer with rust-analyzer#3434
- rust-lang/rustc-dev-guide with rustc-dev-guide#3434
- rust-lang/stdarch with stdarch#3434
- rust-lang/team with team#3434
- rust-lang/unsafe-code-guidelines with ucg#3434
We currently support linking to commits on these repositories:
- rust-lang/rust with 40-character long SHAs, e.g., 25434f898b499876203a3b95c1b38bad5ed2cc5d
Read-only view
Zulip by default requires an account for interaction, though this "bug" is being actively worked on by the Zulip developers. We currently maintain a read-only view of the Zulip at https://zulip-archive.rust-lang.org; this has relatively up-to-date information (we update every 5 minutes). If you're linking to Zulip from a GitHub comment with the intent to leave a permanent link, this is a good place to link to. There is not currently good tooling for generating the links.
Zulip Moderation
Zulip, like all official Rust spaces, is governed by the Code of Conduct. If you have concerns, please feel free to escalate to the moderation team.
However, though the moderation team is the top-level body here, it is not the only place where you can seek help with moderation within Zulip.
One method for reaching the Zulip administrators privately is to email zulip-admin.239bd484c0347d2d43214d8581f3e125.show-sender@streams.zulipchat.com. See this page for details on how this works.
You can also ping the @mods group on Zulip; note that this will be public.
It is not currently possible for normal users to self-administrate (e.g., muting another user). However, each individual stream, including private streams, can be muted.
For admins/moderators
Some common actions for moderators are listed on this page.
Notably,
- in "Organization permissions" we can restrict users to mandate invitations before joining (this is the "no new users" button)
New admins/moderators should add themselves to the mods group on Zulip. (Note that this is something that any user can do!)
Triagebot
Triaging issues on the rust-lang repositories is an important step in taking care of them. The triagebot is the tool that allows anyone to help by assigning, self-assigning, or labeling issues without being a member of the rust-lang organization.
To enable triagebot on a particular repository (currently only in the rust-lang organization), add a triagebot.toml file in the repository root. It should have a section per "feature"; a combined example follows the feature list below. Please read this page to learn how to enable each feature and the options supported; if you spot something missing, please let us know by filing an issue, thanks!
- Issue assignment
- Issue notifications
- Ping a team
- Glacier
- Triage
- Apply labels to issues
- Request prioritization
- Autolabel an issue
- Notify Zulip
- Major Changes
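As a sketch, a minimal triagebot.toml enabling a few of the features above might look like the following (option values are illustrative; see each feature's section below for the full set of options):
[assign]

[relabel]
allow-unauthenticated = ["C-*"]

[prioritize]
label = "I-prioritize"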
Issue assignment
Any user belonging to the rust-lang organization can claim an issue via @rustbot claim. If the user is not part of the rust-lang organization, rustbot will assign the issue to itself and add a "claimed" message in the top-level comment, to signal who the current assignee is. It is possible to override someone else's claim (no warning/error is given).
You can drop your claim to the issue via @rustbot release-assignment; Rust team members can do the same if they want to release someone else's assignment.
@rustbot assign @user can be used only by Rust team members and will assign that user to the issue (with the same rules as before -- either directly or indirectly).
Soon (when the "highfive" bot migration is complete, see rust-lang/highfive#258), r? will also assign reviewers to PRs, though unlike issues, non-team members cannot be assigned. Anyone can invoke the command.
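For quick reference, the assignment commands described above are:
@rustbot claim
@rustbot release-assignment
@rustbot assign @user
(The last command is restricted to Rust team members.)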
To enable on a repository, add the following to a triagebot.toml file in the repository root.
[assign]
Issue notifications
Each registered team member has a notifications page at:
https://triage.rust-lang.org/notifications?user=<github-username>
This page is populated from direct mentions (@user) and team mentions (@rust-lang/libs) across the rust-lang organization.
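For example, a team member with the (hypothetical) GitHub username ferris would find their page at https://triage.rust-lang.org/notifications?user=ferris.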
It can also be edited via Zulip by private-messaging triagebot. Any Rust organization member can edit their notifications page, or the pages of other Rust organization team members. To do so, the editor must have a zulip-id listed in their people/username.toml file in the team repository. The bot will tell you which ID to use when talking to it for the first time; please r? @Mark-Simulacrum on PRs adding Zulip IDs.
The following commands are supported:
- acknowledge <url> (or short form ack <url>)
- acknowledge <idx> (or short form ack <idx>)
These both acknowledge (and remove) a notification from the list.
- add <url> <description... (multiple words)>
This adds a new notification to the list.
- move <from> <to>
This moves the notification at index <from> to index <to>.
- meta <idx> <metadata...>
This adds some text as a sub-bullet to the notification at <idx>. If the metadata is empty, the text is removed.
- as <github username> <command...>
This executes any of the above commands as if you were the other GitHub user.
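For example, a private-message session with triagebot might look like the following sketch (the URL and indices are hypothetical placeholders):
add https://github.com/rust-lang/rust/issues/12345 look into this regression
move 2 1
ack https://github.com/rust-lang/rust/issues/12345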
Ping a team
The bot can be used to "ping" teams of people that do not have corresponding GitHub teams. This is useful because sometimes we want groups of people whom we can notify, but we don't want to add all the members of those groups to the GitHub org, as that would imply that they are members of the Rust team (for example, GitHub would decorate their names with "member" and so forth). The compiler team uses this feature to reach the notification groups.
When a team is pinged, we will both post a message to the issue and add a label. The message will include a cc line that @-mentions all members of the team.
Teams that can be pinged
To be pinged, teams have to be created in the Rust team repository. Frequently those teams will be marked as marker-team, meaning that they do not appear on the website. The LLVM team is an example.
Configuration
To enable a team (e.g. TeamName) to be pinged, add a section to the triagebot.toml file at the root of the repository, like so:
[ping.TeamName]
message = """\
Put your message here. It will be added as a Github comment,
so it can include Markdown and other markup.
"""
label = "help wanted"
This configuration would post the given message and also add the label help wanted to the issue.
You can also define aliases that add additional labels referring to the same target team. Aliases can be useful for adding mnemonic labels or accommodating slight misspellings (such as "llvms" instead of "llvm"); see the following example:
[ping.cleanup-crew]
alias = ["cleanup", "cleanups", "shrink", "reduce", "bisect"]
message = """\
message content...
"""
This will allow the command @rustbot ping cleanup-crew to be understood with all the aliased variants, e.g.:
@rustbot ping cleanup
@rustbot ping shrink
...
Check out the rust-lang/rust configuration for up-to-date examples.
Pinging teams
To ping the team XXX, simply leave a comment with the command:
@rustbot ping XXX
Related issues
- Requested in https://github.com/rust-lang/triagebot/issues/169
Glacier
This adds the option to track ICEs (Internal Compiler Errors). Note that the GitHub Gist must be from a Rust Playground link, and the link must be in quotes (""). Example:
@rustbot glacier "https://gist.github.com/rust-play/xxx"
where xxx is the SHA1 hash of the GitHub gist generated by the Playground "share" button.
Triage
This command can be used by people in charge of prioritizing issues, to assign either low or high priorities to issues. This is mostly done by the Compiler Prioritization WG for compiler bugs.
@rustbot triage {high,medium,low}
The configuration for this feature is:
[triage]
remove = ["I-prioritize"] # the set of labels to remove when this command is invoked
high = "P-high"
medium = "P-medium"
low = "P-low"
Apply labels to issues
This command lets anyone apply labels to issues. This is most useful when opening an issue. In general, labels get applied to issues by the Triage WG. If you are interested in helping triaging issues, see the Triage WG procedure.
The specific grammar can be found here, but some examples are listed below. The grammar is intended to be fairly intuitive for people, to prevent needing to reach for documentation when using the bot.
@rustbot modify labels to +T-lang, -T-compiler
This will remove the T-compiler label and add the T-lang label. You can also omit the + sign, if you want, and it'll be implied.
You can also write the same command in a few other ways:
@rustbot modify labels to +T-lang and -T-compiler
@rustbot modify labels: +T-lang and -T-compiler
@rustbot modify labels to +T-lang -T-compiler
Note that the command must terminate with a . or a newline; otherwise the bot will not parse the command successfully.
Errors
The bot currently restricts the labels that can be applied by people outside the Rust teams. For example, they can't add the I-unsound label. Most of the time, you shouldn't hit this. Feel free to ping the release team if you feel that a label should be added to the set of allowed labels!
Enabling
[relabel]
# any label is allowed to be set by team members (anyone on a team in rust-lang/team)
# but these can be set by anyone in the world
allow-unauthenticated = [
"C-*", # any C- prefixed label will be allowed for anyone
# independent of authorization with rust-lang/team
"!C-bug", # but not C-bug (order does not matter)
]
Request prioritization
Users can request an issue to be prioritized by the Prioritization WG.
To do so, you can invoke the following command:
@rustbot prioritize
This will simply add the I-prioritize label to the issue.
Errors
The command fails if the issue has already been requested for prioritization (i.e. it already has the I-prioritize label).
Enabling
[prioritize]
# Name of the label used for requesting prioritization on issues
label = "I-prioritize"
Autolabel an issue
When certain trigger labels are added to an issue, this feature automatically applies another label to it. In the following example, the "I-prioritize" label will be added automatically to any issue that receives one of the labels in trigger_labels, but only if the issue is not already labeled with one of those in exclude_labels (this is to avoid applying unrelated labels to issues).
[autolabel."I-prioritize"]
trigger_labels = [
"regression-from-stable-to-stable",
"regression-from-stable-to-beta",
"regression-from-stable-to-nightly"
]
exclude_labels = [
"P-*",
"T-infra",
"T-release"
]
Notify Zulip
When a prioritization label is added to an issue, this feature will create a new topic on Zulip in the designated stream ("245100" in the following example), replacing {number} and {title} with the issue's GitHub number and title:
[notify-zulip."I-prioritize"]
zulip_stream = 245100 # t-compiler/wg-prioritization/alerts
topic = "I-prioritize #{number} {title}"
message_on_add = "@**WG-prioritization** issue #{number} has been requested for prioritization."
message_on_remove = "Issue #{number}'s prioritization request has been removed."
The subscribers of that Zulip stream will receive a notification and can discuss the prioritization of the issue.
Major Changes
A major change is a change that will have a big impact on users. See this page on the MCP process for detailed explanations.
The compiler team uses the major change process, which requires:
- An issue
- A "second", who is an expert from the Compiler team who thinks the proposal is a good idea
We have supporting automation for both parts.
First, on the rust-lang/compiler-team repository, an issue with the "major-change" label (MCP = Major Change Proposal) is created via the template. Once that's opened, it automatically gains the "to-announce" label which should be removed when it's announced at a compiler team meeting.
For seconds, a team member tells rustbot @rustbot seconded or @rustbot second and it will apply the relevant label. Only team members can do so.
Configuration:
[major-change]
# Label to apply once an MCP is seconded
second_label = "final-comment-period"
# Label to apply when an MCP is created
meeting_label = "to-announce"
# The Zulip stream to automatically create topics about MCPs in
# Can be found by looking for the first number in URLs, e.g. https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler
zulip_stream = 131828
Core
This section documents policies established by the core team. These policies tend to apply for "project-wide resources", such as the Rust blogs.
Rust Blog Guidelines
Context
The Rust project maintains two blogs. The “main blog” (blog.rust-lang.org) and a “team blog” (blog.rust-lang.org/inside-rust). This document provides the guidelines for what it takes to write a post for each of those blogs, as well as how to propose a post and to choose which blog is most appropriate.
How to select the right blog: audience
So you want to write a Rust blog post, and you’d like to know which blog you should post it on. Ultimately, there are three options:
- The main Rust blog
- Suitable when your audience is “all Rust users or potential users”
- Written from an “official position”, even if signed by an individual
- The team Rust blog
- Suitable when your audience is “all Rust contributors or potential contributors”
- Written from an “official position”, even if signed by an individual
- Your own personal blog
- Everything else
There are two key questions to answer in deciding which of these seems right:
- Are you speaking in an “official capacity” or as a “private citizen”?
- Who is the audience for your post?
In general, if you are speaking as a “private citizen”, then you are probably best off writing on your own personal blog.
If, however, you are writing in an official capacity, then one of the Rust blogs would be a good fit. Note that this doesn’t mean you can’t write as an individual. Plenty of the posts on Rust’s blog are signed by individuals, and, in fact, that is the preferred option. However, those posts are typically documenting the official position of a team — a good example is Aaron Turon’s classic post on Rust’s language ergonomics initiative. Sometimes, the posts are describing an exciting project, but again in a way that represents the project as a whole (e.g., Manish Goregaokar’s report on Fearless Concurrency in Firefox Quantum).
To decide between the main blog and the team blog, the question to ask yourself is who is the audience for your post. Posts on the main blog should be targeting all Rust users or potential users — they tend to be lighter on technical detail, and written without requiring as much context. Posts on the team blog can assume a lot more context and familiarity with Rust.
Writing for the Main Rust blog
The core team ultimately decides what to post on the main Rust blog.
Post proposals describing exciting developments from within the Rust org are welcome, as well as posts that describe exciting applications of Rust. We do not generally do “promotional cross-posting” with other projects, however.
If you would like to propose a blog post for the main blog, please reach out to a core team member. Please do not just open PRs against the main Rust blog adding posts without first discussing them with a core team member.
Release note blog posts
One special case are the regular release note posts that accompany every Rust release. These are managed by the release team and go on the main blog.
The blog posts are published on the same day as the release by the person on the release team who is running the release. Releases always happen on Thursdays.
Before publishing a release post, it goes through a drafting process:
- The milestone (e.g. for 1.39.0) for the release is consulted.
- PRs that we think are sufficiently important are included, and some items are headlined. The writing of a blog post typically happens through a hackmd document.
- Headlined items are sometimes written by different people, and we try to peer-review each subsection.
- The blog post draft is submitted as a PR on the blog repo for final review a few days before the release.
Team Rust blogs
Teams can generally decide for themselves what to write on the team Rust blog.
Typical subjects for team Rust blog posts include:
- New initiatives and calls for participation
- Updates and status reports from ongoing work
- Design notes
To propose a blog post for the team blog of a particular team, reach out to the team lead or some other team representative.
Legal counsel
Some of the matters handled by the Rust Project (for example, legal policies or DMCA takedowns) require legal counsel to be handled effectively. Mozilla is providing legal counsel for such matters, and this page documents how to access it.
How to request counsel
Team members can email Niko Matsakis (nmatsakis AT mozilla DOT com) with
their inquiry, and the request will be forwarded to Mozilla Legal. Due to an
internal Mozilla policy, team members won't be CCed in the conversation with
the lawyers, but they will receive a summary of the discussion and the outcome
after the matter is resolved.
Community
This section documents the processes of the community team, and related projects.
External Links
- The Community team GitHub repository contains information about how the community team organizes.
- The RustBridge website contains information on hosting your own local RustBridge event.
- Rustlings is a project with small exercises designed around getting newcomers used to reading and writing Rust.
State of Rust Survey FAQ
In this FAQ we try to answer common questions about the Annual State of the Rust Language Community Survey. If in your opinion there is a missing question or if you have a concern about this document, please do not hesitate to contact the Rust Community Team or open an issue with the Community Team.
Why is this survey important for the Rust project?
Rust is an Open Source project. As such, we want to hear both from people inside and outside our ecosystem about the language, how it is perceived, and how we can make the language more accessible and our community more welcoming. This feedback gives our community the opportunity to participate in shaping the future of the project. We want to focus on the requirements of the language's current and potential users to offer them a compelling tool for solving real-world problems in a safe, efficient, and modern way.
What are the goals of the survey?
- To understand the community's main development priorities and needs
- To categorize the population of users of the language
- To focus our efforts on events and conferences to drive more impact
- To identify potential new contributors to the community goals
How much time will it take to answer the survey?
On average, it should take 10 to 15 minutes.
What kind of questions are included in the survey?
It includes some basic questions about how respondents use Rust and their opinion of the ecosystem's tools and libraries, some basic questions regarding the respondents' employer or organization and its intention to use Rust, technical background and demographic questions, and some feedback related to the Rust project's community activities and general priorities.
How will we use the data from the survey responses?
The answers from the survey will be anonymized, aggregated, and summarized. A high level writeup will be posted to https://blog.rust-lang.org.
How is personally identifiable information handled?
Nearly every question in the survey is optional. You are welcome to share as much or as little information as you are comfortable with. Only the Rust language Core Team and the Community Team Survey Leads will have access to the raw data from the survey. All the answers are anonymized prior to being shared with the rest of the teams and prior to publication of the results.
Why is the survey collecting contact information?
The survey optionally collects contact information if you expressed interest in any of the following:
- future conferences or meetups in your area
- helping to organize a Rust event, meetup, or conference
- talking to a Rust team about using Rust inside your company
- Rust training
- interest in a Rust team contacting you about your survey responses
If you would like to be contacted about any of this, or any other concerns, but you don't want to associate your email with your survey responses, you can instead email the Rust Community Team at community-team@rust-lang.org or the Core Team at core-team@rust-lang.org, and we will connect you to the right people.
Where and when is the survey results report published?
We expect to publish results from the survey within a month or two of the survey's completion. The survey results will be posted to the project's blog.
Where can I see the previous survey reports?
Compiler
This section documents the Rust compiler itself, its APIs, and how to contribute and provide bug fixes for the compiler.
External Links
- The Rustc Dev Guide documents how the compiler works, as well as providing helpful information to help get new contributors involved in the development.
- Rustc's internal documentation.
- The Compiler team website is the home for all of the compiler team's planning.
- oli-obk's FIXME page lists all of the FIXME comments in the Rust compiler.
Cross Compilation
This subsection documents cross compiling your code from one platform to another.
Windows
- Acquire LLD somehow. Either your distro provides it or you have to build it from source.
- You'll need an lld-link wrapper, which is just lld using the link flavor so it accepts the same flags as link.exe. You may either have a binary called lld-link, or you may have to write some sort of script to wrap lld.
- If you want to be able to cross compile C/C++ as well, you will need to obtain clang-cl, which is clang pretending to be cl.
- You'll need libraries from an existing msvc installation on Windows to link
your Rust code against. You'll need the VC++ libraries from either VS 2015 or
VS 2017, and the system libraries from either the Windows 8.1 or Windows 10
SDK. Here are some approximate paths which may vary depending on the exact
version you have installed. Copy them over to your non-windows machine.
- VS 2015: C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\lib
- VS 2017: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.24728\lib
- Windows 10 SDK: C:\Program Files (x86)\Windows Kits\10\Lib\10.0.14393.0
- Windows 8.1 SDK: C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3
- If you want to cross compile C/C++ you'll also need headers. Replace lib in the above paths with include to get the appropriate headers.
- Set your LIB and INCLUDE environment variables to semicolon-separated lists of all the relevant directories for the correct architecture.
- In your .cargo/config, add:
[target.x86_64-pc-windows-msvc]
linker = "lld-link"
(or whatever your lld pretending to be link.exe is called).
- For cross compiling C/C++, you'll need to get the gcc crate working correctly. I never tested it for cross compiling, so I have no idea whether it will do anything sane.
- Install the appropriate target using rustup and pass --target=x86_64-pc-windows-msvc while building. Hopefully it works. If it doesn't, well... I don't know.
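Putting the last few steps together, a rough sketch of the workflow on the non-Windows machine might look like this (all paths are hypothetical and depend on where you copied the Windows libraries):
# Semicolon-separated lists, per the LIB/INCLUDE step above
export LIB="$HOME/msvc/VC/lib/amd64;$HOME/msvc/sdk/Lib/10.0.14393.0/um/x64;$HOME/msvc/sdk/Lib/10.0.14393.0/ucrt/x64"
export INCLUDE="$HOME/msvc/VC/include;$HOME/msvc/sdk/Include/10.0.14393.0/ucrt"
# Install the target's standard library, then build against it
rustup target add x86_64-pc-windows-msvc
cargo build --target=x86_64-pc-windows-msvc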
Review policies
Every PR that lands in the compiler and its associated crates must be reviewed by at least one person who is knowledgeable with the code in question.
When a PR is opened, you can request a reviewer by including r? @username in the PR description. If you don't do so, the highfive bot will automatically assign someone.
It is common to leave a r? @username comment at some later point to request review from someone else. This will also reassign the PR.
bors
We never merge PRs directly. Instead, we use bors. A qualified reviewer with bors privileges (e.g., a compiler contributor) will leave a comment like @bors r+. This indicates that they approve the PR.
People with bors privileges may also leave a @bors r=username command. This indicates that the PR was already approved by @username. This is commonly done after rebasing.
Finally, in some cases, PRs can be "delegated" by writing @bors delegate+ or @bors delegate=username. This will allow the PR author to approve the PR by issuing @bors commands like the ones above (but this privilege is limited to the single PR).
High priority issues
When merging high priority issues (P-critical and P-high) it's recommended to avoid rollups and to bump the priority of the PR in the homu queue a bit by issuing @bors r+ rollup=never p=1.
Expectations for r+
bors privileges are binary: the bot doesn't know which code you are familiar with and which you are not. They must therefore be used with discretion. Do not r+ code that you do not know well -- you can definitely review such code, but try to hand off reviewing to someone else for the final r+.
Similarly, never issue a r=username command unless that person has done the review, and the code has not changed substantially since the review was done. Rebasing is fine, but changes in functionality typically require re-review (though it's a good idea to try and highlight what has changed, to help the reviewer).
So you want to add a new (stable) option to rustc
So you want to add a new command-line flag to rustc. What is the procedure?
Is this a perma-unstable option?
The first question to ask yourself is:
- Is this a "perma-unstable" option meant only for debugging rustc (e.g., -Ztreat-err-as-bug)?
If so, you can just add it in a PR, no check-off is required beyond ordinary review.
Other options
If this option is meant to be used by end-users or to be exposed on the stable channel, however, it represents a "public commitment" on the part of rustc that we will have to maintain, and hence there are a few more details to take care of.
There are two main things to take care of, and they can proceed in either order, but both must be completed:
- Proposal and check-off
- Implementation and documentation
Finally, some options begin as unstable and only get stabilized over time, in which case you will also need:
- Tracking issue and stabilization
Proposal and check-off
The "proposal" part describes the motivation and design of the new option you wish to add. It doesn't necessarily have to be very long. It takes the form of a Major Change Proposal.
The proposal should include the following:
- Motivation: what is this flag used for?
- Design: What input does the flag take and what is its observable effect?
- Implementation notes: You don't normally have to talk about the implementation, but if there are any key things to note (e.g., it was very invasive to implement), you might note them here.
- Precedent, links, and related material: Are similar flags available on other compilers/linkers/tools, like clang or lld?
- Alternatives, concerns, and key decisions: Were there any alternatives considered? If so, why did you pick this design?
Note that it is fine if you don't have any implementation notes, precedent, or alternatives to discuss.
Also, one good approach to writing the MCP is basically to write the documentation you will have to write anyway to explain to users how the option works, and then add any additional notes on alternatives and so forth that are required.
Once you've written up the proposal, you can open an MCP issue. Note that since this MCP is proposing a permanent change, a full compiler-team FCP is required, not just a "second". A team member can start the FCP with @rfcbot fcp merge.
Implementation, documentation
Naturally your new option will also have to be implemented. You can implement the option and open up a PR. Often, this implementation work actually happens before the MCP is created, and that's fine -- we'll just ask you to open an MCP with the write-up.
See the Command-line Arguments chapter in the rustc dev guide for guidelines on how to name and define a new argument.
A few notes that are sometimes overlooked:
- Many options begin as "unstable" options, either because they use -Z or because they require -Zunstable-options to use.
- You should document the option. Often this documentation can just be copied from the MCP text. Where you add this documentation depends on whether the option is available on stable Rust:
- If it is unstable, then document the option in the Unstable Book, whose sources are in src/doc/unstable-book.
- Once the option is stabilized, it should be documented in the Rustc book, whose sources are in src/doc/rustc.
Stabilization and tracking issue
Typically options begin as unstable, meaning that they are either used with -Z or require -Zunstable-options.
Once the implementation lands, we should create a tracking issue that links to the MCP and where stabilization can be proposed.
Stabilization generally proceeds when the option has seen a bit of use and the implementation seems to be working as expected for its intended purpose.
Remember that when stabilization occurs, documentation should be moved from the Unstable Book to the Rustc Book.
Major Change Proposals
Introduced in RFC 2904, a "major change proposal" is a lightweight form of RFC that the compiler team uses for architectural changes that are not end-user facing. (It can also be used for small user-facing changes like adding new compiler flags, though in that case we also require an rfcbot fcp to get full approval from the team.) Larger changes or modifications to the Rust language itself require a full RFC (the latter fall under the lang team's purview).
Motivation
As the compiler grows in complexity, it becomes harder and harder to track what's going on. We don't currently have a clear channel for people to signal their intention to make "major changes" that may impact other developers in a lightweight way (and potentially receive feedback).
Our goal is to create a channel for signaling intentions that lies somewhere between opening a PR (and perhaps cc'ing others on that PR) and creating a compiler team design meeting proposal or RFC.
Goals
Our goals with the MCP are as follows:
- Encourage people making a major change to write at least a few paragraphs about what they plan to do.
- Ensure that folks in the compiler team are aware the change is happening and given a chance to respond.
- Ensure that every proposal has a "second", meaning some expert from the team who thinks it's a good idea.
- Ensure that major changes have an assigned and willing reviewer.
- Avoid the phenomenon of large, sweeping PRs landing "out of nowhere" onto someone's review queue.
- Avoid the phenomenon of PRs living in limbo because it's not clear what level of approval is required for them to land.
Major Change Proposals
If you would like to make a major change to the compiler, the process is as follows:
- Open a tracking issue on the rust-lang/compiler-team repo using the major change template.
  - A Zulip topic in the stream #t-compiler/major changes will automatically be created for you by a bot.
- If concerns are raised, you may want to modify the proposal to address those concerns.
  - Alternatively, you can submit a design meeting proposal to have a longer, focused discussion.
- To be accepted, a major change proposal needs three things:
  - One or more reviewers, who commit to reviewing the work. This can be the person making the proposal, if they intend to mentor others.
  - A second, a member of the compiler team or a contributor who approves of the idea, but is not the one originating the proposal.
  - A final comment period (a 10 day wait to give people time to comment).
    - The FCP can be skipped if the change is easily reversed and/or further objections are considered unlikely. This often happens if there has been a lot of prior discussion, for example.
- Once the FCP completes, if there are no outstanding concerns, PRs can start to land.
  - If those PRs make outward-facing changes that affect stable code, then either the MCP or the PR(s) must be approved with a rfcbot fcp merge comment.
Conditional acceptance
Some major change proposals will be conditionally accepted. This indicates that we'd like to see the work land, but we'd like to re-evaluate the decision of whether to commit to the design after we've had time to gain experience. We should try to be clear about the things we'd like to evaluate, and ideally a timeline.
Deferred or not accepted
Some proposals will not be accepted. Some of the possible reasons:
- You may be asked to do some prototyping or experimentation before a final decision is reached
- The idea might be reasonable, but there may not be bandwidth to do the reviewing, or there may just be too many other things going on.
- The idea may be good, but it may be judged that the resulting code would be too complex to maintain, and not worth the benefits.
- There may be flaws in the idea, or it may not have sufficient benefit.
What happens if someone opens a PR that seems like a major change without doing this process?
The PR should be closed or marked as blocked, with a request to create a major change proposal first.
If the PR description already contains suitable text that could serve as an MCP, then simply copy and paste that into an MCP issue. Using an issue consistently helps to ensure that the tooling and process works smoothly.
Can I work on code experimentally before an MCP is accepted?
Of course! You are free to work on PRs or write code. But those PRs should be marked as experimental and they should not land, nor should anyone be expected to review them (unless folks want to).
What constitutes a major change?
The rough intuition is "something that would require updates to the rustc-dev-guide or the rustc book". In other words:
- Something that alters the architecture of some part(s) of the compiler, since this is what the rustc-dev-guide aims to document.
- A simple change that affects a lot of people, such as altering the names of very common types or changing coding conventions.
- Adding a compiler flag or other public facing changes, which should be documented (ultimately) in the rustc book. This is only appropriate for "minor" tweaks, however, and not major things that may impact a lot of users. (Also, public facing changes will require a full FCP before landing on stable, but an MCP can be a good way to propose the idea.)
Note that, in some cases, the change may be deemed too big and a full FCP or RFC may be required to move forward. This could occur with significant public facing change or with sufficiently large changes to the architecture. The compiler team leads can make this call.
Note that whether something is a major change proposal is not necessarily related to the number of lines of code that are affected. Renaming a method can affect a large number of lines, and even require edits to the rustc-dev-guide, but it may not be a major change. At the same time, changing names that are very broadly used could constitute a major change (for example, renaming the tcx context in the compiler to something else would be a major change).
Public-facing changes require rfcbot fcp
The MCP "seconding" process is only meant to be used to get agreement on the technical architecture we plan to use. It is not sufficient to stabilize new features or make public-facing changes like adding a -C flag. For that, an rfcbot fcp is required (or perhaps an RFC, if the change is large enough).
For landing compiler flags in particular, a good approach is to start with an MCP introducing a -Z flag and then "stabilize" the flag by moving it to -C in a later PR (which would require rfcbot fcp).
Major change proposals are not sufficient for language changes or changes that affect cargo.
Steps to open a MCP
- Open a tracking issue on the rust-lang/compiler-team repo using the major change template.
- Create a Zulip topic in the stream #t-compiler/major changes:
  - The topic should be named something like "modify the whiz-bang component compiler-team#123", which describes the change and links to the tracking issue.
  - The stream will be used for people to ask questions or propose changes.
What kinds of comments should go on the tracking issue in compiler-team repo?
Please direct technical conversation to the Zulip stream.
The compiler-team repo issues are intended to be low traffic and used for procedural purposes, such as:
- Announcing that you "second" or approve of the design.
- Announcing that you would be able to review or mentor the work.
- Noting a concern that you don't want to be overlooked.
- Announcing that the proposal will be entering FCP or is accepted.
Note that to "second" a design or offer to review, you should be someone who is familiar with the code, typically but not necessarily a compiler team member or contributor.
How does one register as reviewer, register approval, or raise an objection?
These types of procedural comments can be left on the issue (it's also good to leave a message in Zulip). See the previous section.
Who decides whether a concern is unresolved?
Usually the experts in the given area will reach a consensus here. But if there is some need for a "tie breaker" vote or judgment call, the compiler-team leads make the final call.
What are some examples of major changes from the past?
Here are some examples of changes that were made in the past that would warrant the major change process:
- overhauling the way we encode crate metadata
- merging the gcx, tcx arenas
- renaming a widely used, core abstraction, such as the Ty type
- introducing cargo pipelining
- adding a new -C flag that exposes some minor variant
What are some examples of things that are too big for the major change process?
Here are some examples of changes that are too big for the major change process, or which at least would require auxiliary design meetings or a more fleshed out design before they can proceed:
- introducing incremental or the query system
- introducing MIR or some new IR
- introducing parallel execution
- adding ThinLTO support
What are some examples of things that are too small for the major change process?
Here are some examples of things that don't merit any MCP:
- adding new information into metadata
- fixing an ICE or tweaking diagnostics
- renaming "less widely used" methods
When should Major Change Proposals be closed?
Major Change Proposals can be closed:
- by the author, if they have lost interest in pursuing it.
- by a team lead or expert, if there are strong objections from key members of the team that don't look likely to be overcome.
- by folks doing triage, if there have been three months of inactivity. In this case, people should feel free to re-open the issue if they would like to "rejuvenate" it.
Membership
This section discusses membership in the compiler team. There are currently two levels of membership:
- contributors: regular contributors with r+ rights, bot privileges, and access to infrastructure
- full members: full members who vote on RFCs
The path to membership
People who are looking to contribute to the compiler typically start in one of two ways. They may tackle "one off" issues, or they may get involved in some kind of existing working group. They don't know much about the compiler yet and have no particular privileges. They are assigned to issues using the triagebot and (typically) work with a mentor or mentoring instructions.
Compiler team contributors
Once a working group participant has been contributing regularly for some time, they can be promoted to the level of a compiler team contributor (see the section on how decisions are made below). This title indicates that they are someone who contributes regularly.
It is hard to define the precise conditions when such a promotion is appropriate. Being promoted to contributor is not just a function of checking various boxes. But the general sense is that someone is ready when they have demonstrated three things:
- "Staying power" -- the person should be contributing on a regular basis in some way. This might for example mean that they have completed a few projects.
- "Independence and familiarity" -- they should be acting somewhat independently when taking on tasks, at least within the scope of the working group. They should plausibly be able to mentor others on simple PRs.
- "Cordiality" -- contributors will be members of the organization and are held to a higher standard with respect to the Code of Conduct. They should not only obey the letter of the CoC but also its spirit.
Being promoted to contributor implies a number of privileges:
- Contributors have r+ privileges and can do reviews (they are expected to use those powers appropriately, as discussed previously). They also have access to control perf/rustc-timer and other similar bots.
- Contributors are members of the organization so they can modify labels and be assigned to issues.
- Contributors are a member of the rust-lang/compiler team on GitHub, so that they receive pings when people are looking to address the team as a whole.
- Contributors are listed on the rust-lang.org web page.
It also implies some obligations (in some cases, optional obligations):
- Contributors will be asked if they wish to be added to the highfive rotation.
- Contributors are held to a higher standard than ordinary folk when it comes to the Code of Conduct.
Full members
As a contributor gains in experience, they may be asked to become a compiler team member. This implies that they are not only a regular contributor, but are actively helping to shape the direction of the team or some part of the compiler (or multiple parts).
- Compiler team members are the ones who select when people should be promoted to compiler team contributor or to the level of member.
- Compiler team members are consulted on FCP decisions (which, in the compiler team, are relatively rare).
- There will be a distinct GitHub team containing only the compiler team members, but the name of this team is "to be determined".
- Working groups must always include at least one compiler team member as a lead (though groups may have other leads who are not yet full members).
How promotion decisions are made
Promotion decisions (from participant to contributor, and from contributor to member) are made by having an active team member send an e-mail to the alias compiler-private@rust-lang.org. This e-mail should include:
- the name of the person to be promoted
- a draft of the public announcement that will be made
Compiler-team members should send e-mail giving their explicit assent, or with objections. Objections should always be resolved before the decision is made final. E-mails can also include edits or additions for the public announcement.
To make the final decision:
- All objections must be resolved.
- There should be a "sufficient number" (see below) of explicit e-mails in favor of addition (including the team lead).
- The nominator (or some member of the team) should reach out to the person in question and check that they wish to join.
We do not require all team members to send e-mail, as historically these decisions are not particularly controversial. For promotion to a contributor, the only requirement is that the compiler team lead agrees. For promotion to a full member, more explicit mails in favor are recommended.
Once we have decided to promote, then the announcement can be posted to internals, and the person added to the team repository.
Not just code
It is worth emphasizing that becoming a contributor or member of the compiler team does not necessarily imply writing PRs. There are a wide variety of tasks that need to be done to support the compiler and which should make one eligible for membership. Such tasks would include organizing meetings, participating in meetings, bisecting and triaging issues, writing documentation, and working on the rustc-guide. The most important criterion for elevation to contributor, in particular, is regular and consistent participation. The most important criterion for elevation to member is actively shaping the direction of the team or compiler.
Alumni status
If at any time a current contributor or member wishes to take a break from participating, they can opt to put themselves into alumni status. When in alumni status, they will be removed from GitHub aliases and the like, so that they need not be bothered with pings and messages. They will also not have r+ privileges. Alumni members will, however, still remain members of the GitHub org overall.
People in alumni status can ask to return to "active" status at any time. This request would ordinarily be granted automatically barring extraordinary circumstances.
People in alumni status are still members of the team at the level they previously attained and they may publicly indicate that, though they should indicate the time period for which they were active as well.
Changing back to contributor
If desired, a team member may also ask to move back to contributor status. This would indicate a continued desire to be involved in rustc, but that they do not wish to be involved in some of the weightier decisions, such as who to add to the team. Like full alumni, people who were once full team members but who went back to contributor status may ask to return to full team member status. This request would ordinarily be granted automatically barring extraordinary circumstances.
Automatic alumni status after 6 months of inactivity
If a contributor or a member has been inactive in the compiler for 6 months, then we will ask them if they would like to go to alumni status. If they respond yes or do not respond, they can be placed on alumni status. If they would prefer to remain active, that is also fine, but they will get asked again periodically if they continue to be inactive.
Notification groups
The compiler team has a number of notification groups that we use to ping people and draw their attention to issues. Notification groups are set up so that anyone can join them if they want.
Creating a notification group
If you'd like to create a notification group, here are the steps. First, you want to get approval from the compiler team:
- Propose the group by preparing a Major Change Proposal. If your group is not analogous to some existing group, it is probably a good idea to ping compiler team leads beforehand or as part of the MCP.
- The MCP should specify what GitHub label will be associated with the notification group. Often this is an existing label, such as `O-Windows`.
Once the MCP is accepted, here are the steps to actually create the group. In some cases we include an example PR from some other group.
- File a tracking issue in the rust-lang/compiler-team repository to collect your progress.
- Create a PR against the rust-lang/team repository adding the notification group. Example PR.
- Configure the rust-lang/rust repository to accept triagebot commands for this group. Example PR.
- Create a PR for the rustc-dev-guide amending the notification group section to mention your group.
- Create a sample PR for the rust-lang/team repository showing how one can add oneself (a rough sketch follows this list). This will be referenced by your blog post to show people how to join. Example PR.
- Write an announcement blog post for Inside Rust and open a PR against blog.rust-lang.org. Example PR.
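The sample PR showing how to add oneself usually boils down to a one-line diff against the team repository. A minimal sketch, assuming the group's membership lives in a TOML file in that repo (the exact path and file format are assumptions; copy the example PR above for the real layout):

```sh
# clone the team repo and add your GitHub username to the group's TOML file
git clone https://github.com/rust-lang/team && cd team
git checkout -b join-notification-group
# ... edit the notification group's file to add yourself ...
git commit -am "Add <username> to the <group> notification group"
# then push to your fork and open a pull request against rust-lang/team
```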
Compiler-team Triage Meeting
What is it?
The triage meeting is a weekly meeting where we go over the open issues, look at regressions, consider beta backports, and other such business. In the tail end of the meeting, we also do brief check-ins with active working groups to get an idea what they've been working on.
When and where is it?
See the compiler team meeting calendar for the canonical date and time. The meetings take place in the #t-compiler stream on the rust-lang Zulip.
Where can I learn more?
The meeting procedure is documented in rust-lang/rust#54818.
The working group check-in schedule is available on the compiler-team website.
Compiler-team Steering Meeting
What is it?
The "steering meeting" is a weekly meeting dedicated to planning and high-level discussion. The meeting operates on a repeating schedule:
- Week 1: Planning
- Week 2: Technical or non-technical discussion
- Week 3: Technical or non-technical discussion
- Week 4: Non-technical discussion
The first meeting of the 4-week cycle is used for planning. The primary purpose of this meeting is to select the topics for the next three meetings. The topics are selected from a set of topic proposals, which must be uploaded and available for perusal before the meeting starts. The planning meeting is also an opportunity to check on the "overall balance" of our priorities.
The remaining meetings are used for design or general discussion. Weeks 2 and 3 can be used for technical or non-technical discussion; it is also possible to use both weeks to discuss the same topic, if that topic is complex. Week 4 is reserved for non-technical topics, so as to ensure that we are keeping an eye on the overall health and functioning of the team.
Announcing the schedule
After each planning meeting, the topics for the next three weeks are added to the compiler-team meeting calendar and a blog post is posted to the Inside Rust blog.
When and where is it?
See the compiler team meeting calendar for the canonical date and time. The meetings take place in the #t-compiler stream on the rust-lang Zulip.
Submitting a proposal
If you would like to submit a proposal to the steering meeting for group discussion, read on! This page has all the details.
TL;DR
In short, all you have to do is:
- open an issue on the compiler-team repository
- use the template for meeting proposals
- you only need a few sentences to start, but by the time the meeting takes place we typically expect a more detailed writeup, e.g. using this template
You don't have to have a lot of details to start: just a few sentences is enough. But, especially for technical design discussions, we will typically expect that some form of more detailed overview be made available by the time the meeting takes place.
Examples of good candidates for discussing at the steering meeting
Here are some examples of possible technical topics that would be suitable for the steering meeting:
- A working group has an idea to refactor the HIR to make some part of their job easier. They have sketched out a proposal and would like feedback.
- Someone has encountered a problem that is really hard to solve with the existing data structures. They would like feedback on a good solution to their problem.
- Someone has done major refactoring work on a PR and they would like to be able to explain the work they did and request review.
Steering meetings are also a good place to discuss other kinds of proposals:
- A proposal to move some part of the compiler into an out-of-tree crate.
- A proposal to start a new working group.
Note that a steering meeting is not required to create a new working group or an out-of-tree crate, but it can be useful if the proposal is complex or controversial, and you would like a dedicated time to talk out the plans in more detail.
Criteria for selection
When deciding the topics for upcoming meetings, we must balance a number of things:
- We don't want to spend time on design work unless there are known people who will implement it and support it; this includes not only the "main coder" but also a suitable reviewer.
- We don't want to take on "too many" tasks at once, even if there are people to implement them.
- We also don't want to have active projects that will be "stepping on each others' toes", changing the same set of code in deep ways.
Meetings are not mandatory
It is perfectly acceptable to choose not to schedule a particular slot. This could happen if (e.g.) there are no proposals available or if nothing seems important enough to discuss at this moment. Note that, to keep the "time expectations" under control, we should generally stick to the same 4-week cycle and simply opt to skip meetings, rather than (e.g.) planning things at the last minute.
Adding a proposal
Proposals can be added by opening an issue on the compiler-team repository. There is an issue template for meeting proposals that gives directions. The basic idea is that you open an issue with a few sentences describing what you would like to talk about.
Some details that might be useful to include:
- how complex of a topic you think this is
- people in the compiler team that you think should be present for the meeting
Expectations for the meeting
By the time the meeting takes place, we generally would prefer to have a more detailed write-up or proposal. You can find a template for such a proposal here. This should be created in the form of a hackmd document -- usually we will then update this document with the minutes and consensus from the meeting. The final notes are then stored in the minutes directory of the compiler-team repository.
Expectations for a non-technical proposal
The requirements for non-technical proposals are somewhat looser. A few sentences or paragraphs may well suffice, as long as the aims of the discussion are clear.
Frequently asked questions
What happens if there are not enough proposals? As noted above, meetings are not mandatory. If there aren't enough proposals in some particular iteration, then we can just opt to not discuss anything.
How to run the planning meeting
Week of the meeting
- Announce the meeting in the triage meeting
- Skim over the list of proposals and ping people who have open proposals to get their availability over the next few weeks
Day of the meeting
- Create a `design meeting YYYY.MM.DD` topic
  - Ping `@t-compiler/meeting`, ideally 1h or so before the meeting actually starts, to remind people
- At the time of the meeting, return to the topic
  - Ping `@t-compiler/meeting` to let people know the meeting is starting
- We typically begin with a 5min announcement period
- Visit the compiler-team repository to get a list of proposed meetings
To actually make the final selection, we recommend
- First, try to identify topics that are clear non-candidates
  - for example, sometimes more investigative work (e.g., data gathering) is needed
  - try to identify people to do those tasks
  - other issues may be out of date, or clear non-starters, and they can be closed
- Next tackle technical design meetings, then non-technical
- Typical ratio is 2 technical, 1 non-technical, but this is not set in stone
- It's ok to have fewer than 3 meetings
Announce the meetings
For each scheduled meeting, create a calendar event:
- invite key participants to the meeting
- set the location to `#t-compiler, Zulip`
- include a link to the design meeting issue in the event
In the relevant issues, add the `meeting-scheduled` label and add a message like:
In today's [planning meeting], we decided to schedule this meeting for **DATE**.
[Calendar event]
[planning meeting]: XXX link to Zulip topic
[Calendar event]: XXX link to calendar event
You can get the link to the calendar event by clicking on the event in Google Calendar and selecting "publish".
Publish a blog post
Add a blog post to the Inside Rust blog using the template found on the compiler-team repository.
How to run the design meeting
Week of the meeting
- Announce the meeting in the triage meeting
- Skim over the list of proposals and ping people who have open proposals to get their availability over the next few weeks
- Make sure that a write-up is available and nag the meeting person otherwise
Day of the meeting
- Create a `design meeting YYYY.MM.DD` topic
  - Ping `@t-compiler/meeting`, ideally 1h or so before the meeting actually starts, to remind people
  - Include a link to the design meeting write-up
- At the time of the meeting, return to the topic
  - Ping `@t-compiler/meeting` to let people know the meeting is starting
  - Include a link to the design meeting write-up
- We typically begin with a 5min announcement period
To guide the meeting, create a shared hackmd document everyone can view (or adapt an existing one, if there is a write-up). Use this to help structure the meeting, document consensus, and take live notes. Try to ensure that the meeting ends with some sort of consensus statement, even if that consensus is just "here are the problems, here is a space of solutions and their pros/cons, but we don't have consensus on which solution to take".
After the meeting
- Post the final contents of the summary hackmd as minutes to the `minutes/design-meeting` directory in the compiler-team repository
- (Optional) Create an Inside Rust blog post pointing people at the minutes and maybe giving a few notes
crates.io
This section documents the processes of the crates.io team.
Crate removal procedure
If we get a DMCA takedown notice, here's what needs to happen:
Contact Mozilla Legal
Before removing the crates, get in touch with legal and ask for their opinion on the received request and on whether we have to comply with it.
Remove relevant version(s) and/or entire crates from crates.io
- Remove it from the database:

  heroku run -a crates-io -- target/release/delete-crate [crate-name]

  or

  heroku run -a crates-io -- target/release/delete-version [crate-name] [version-number]

- Remove the crate or version from the index. To remove an entire crate, remove the entire crate file. For a version, remove the line corresponding to the relevant version. (A sketch of this step and the S3 step follows this list.)
- Remove the crate archive(s) and readme file(s) from S3.
- Invalidate the CloudFront cache:

  aws cloudfront create-invalidation --distribution-id EJED5RT0WA7HA --paths '/*'
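As a concrete illustration of the index and S3 steps above, here is a rough sketch for a hypothetical crate `foo-bar` version `1.2.3`. The index path scheme and the S3 key layout are assumptions based on the usual crates.io conventions; verify both before running anything:

```sh
# Index: files for names of 4+ characters live at {first two}/{next two}/{name}
cd crates.io-index
grep -v '"vers":"1.2.3"' fo/o-/foo-bar > tmp && mv tmp fo/o-/foo-bar  # drop one version
# (to remove the whole crate, delete the fo/o-/foo-bar file instead)
git commit -am "Remove foo-bar 1.2.3" && git push

# S3: remove the crate archives and readmes (bucket name is a placeholder)
aws s3 rm --recursive "s3://<crates-io-bucket>/crates/foo-bar/"
aws s3 rm --recursive "s3://<crates-io-bucket>/readmes/foo-bar/"
```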
Remove entire crates from docs.rs
The docs.rs application supports deleting all the documentation ever published for a crate by running a CLI command. The people who currently have permission to access the server and run it are:
- docs.rs Team:
- Infrastructure Team:
- People with elevated 1password access
You can find the documentation on how to run the command here.
docs.rs
docs.rs is a website that hosts documentation for crates published to crates.io.
External Links
- Source code: rust-lang/docs.rs
- Hosted on: `docsrs.infra.rust-lang.org` (behind the bastion -- how to connect)
- Maintainers: docs.rs team
- Instance metrics (only available to infra team members).
- Application metrics (only available to infra team members).
Add a dependency to the build environment
Rustwide internally uses rustops/crates-build-env as the build environment for the crate. If you want to add a system package for crates to link to, this is the place you're looking for.
Getting started
First, clone the crates-build-env repo:
git clone https://github.com/rust-lang/crates-build-env && cd crates-build-env
Next, add the package to `packages.txt`. This should be the name of a package in the Ubuntu 18.04 repositories. See the package home page for a full list/search bar, or use `apt search` locally.
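For example, to let crates link against a hypothetical system library `libfoo`, you would look up the exact name of its Ubuntu dev package and append it to the list:

```sh
# search the Ubuntu repositories for the right package name
apt search libfoo
# append it to the package list (one package name per line)
echo "libfoo-dev" >> packages.txt
```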
Building the image
Now build the image. This will take a very long time, probably 10-20 minutes.
docker build --tag build-env .
Testing the image
Use the image to build your crate.
cd /path/to/docs.rs
docker-compose build
# NOTE: this must be an absolute path, not a relative path
# On platforms with coreutils, you can instead use `$(realpath ../relative/path)`
YOUR_CRATE=/path/to/your/crate
# avoid docker-compose creating the volume if it doesn't exist
if [ -e "$YOUR_CRATE" ]; then
docker-compose run -e DOCS_RS_LOCAL_DOCKER_IMAGE=build-env \
-e RUST_BACKTRACE=1 \
-v "$YOUR_CRATE":/opt/rustwide/workdir \
web build crate --local /opt/rustwide/workdir
else
echo "$YOUR_CRATE does not exist";
fi
Making multiple changes
If your build fails even after your changes, it will be annoying to rebuild the image from scratch just to add a single package. Instead, you can make changes directly to the Dockerfile so that the existing packages are cached. Be sure to move these new packages from the Dockerfile to `packages.txt` once you are sure they work.

On line 7 of the Dockerfile, add this line: `RUN apt-get install -y your_second_package`.
Rerun the build and start the container; it should take much less time now:
cd /path/to/crates-build-env
docker build --tag build-env .
cd /path/to/docs.rs
docker-compose run -e DOCS_RS_LOCAL_DOCKER_IMAGE=build-env \
-v "$YOUR_CRATE":/opt/rustwide/workdir \
web build crate --local /opt/rustwide/workdir
Run the lint script
Before you make a PR, run the shell script `ci/lint.sh` and make sure it passes. It ensures `packages.txt` is in order and will tell you exactly what changes you need to make if not.
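For example, from the root of your crates-build-env checkout:

```sh
# lint packages.txt; fix anything it reports before opening a PR
./ci/lint.sh
```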
Make a pull request
Once you are sure your package builds, you can make a pull request to get it adopted upstream for docs.rs and crater. Go to https://github.com/rust-lang/crates-build-env and click 'Fork' in the top right. Locally, add your fork as a remote in git and push your changes:
git remote add personal https://github.com/<your_username_here>/crates-build-env
git add -u
git commit -m 'add packages necessary for <your_package_here> to compile'
git push personal
Back on GitHub, make a pull request:
- Go to https://github.com/rust-lang/crates-build-env/compare
- Click 'compare across forks'
- Click 'head repository' -> <your_username>/crates-build-env
- Click 'Create pull request'
- Add a description of what packages you added and what crate they fixed
- Click 'Create pull request' again in the bottom right.
Hopefully your changes will be merged quickly! After that, you can either publish a point release (which rebuilds your docs immediately) or ask a member of the docs.rs team to schedule a new build (which may take a while depending on their schedules).
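If you maintain the crate yourself, a point release is just a version bump plus a publish; docs.rs picks up newly published versions automatically:

```sh
# bump the version in Cargo.toml (e.g. 0.3.1 -> 0.3.2), then:
cargo publish
```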
Developing locally without docker-compose
These are instructions for developing docs.rs locally. For deploying in a production environment, see Self-hosting a docs.rs instance.
While docker-compose allows for easier setup of the required dependencies and environment, here is a breakdown of how to use the service without an outer docker container. This is useful, e.g., for quickly iterating during development.
Note that this does not remove the docker dependency altogether, since docs.rs uses docker at runtime for sandboxing. This just allows you to run commands more quickly since you are building in debug mode instead of release and are also caching more of the build.
Requirements
The commands and package names on this page will assume a Debian-like machine with `apt` installed, but most dependencies should be relatively easy to find on Linux. Do note, however, that this requires the host to be `x86_64-unknown-linux-gnu`.
Docs.rs has a few basic requirements:
- Rust
- Git
- CMake, GCC, G++, and `pkg-config` (to build dependencies for crates and docs.rs itself)
- OpenSSL, zlib, curl, and `libmagic` (to link against)
- PostgreSQL
# apt install build-essential git curl cmake gcc g++ pkg-config libmagic-dev libssl-dev zlib1g-dev postgresql
$ sudo -u postgres psql -c "CREATE USER cratesfyi WITH PASSWORD 'password';"
$ sudo -u postgres psql -c "CREATE DATABASE cratesfyi OWNER cratesfyi;"
Building the site
Be warned - this builds over 350 crates! Depending on your computer, this may take upwards of 10 minutes.
$ git clone https://github.com/rust-lang/docs.rs && cd docs.rs
$ cargo build
The "prefix" directory
docs.rs stores several files in a "prefix" directory. This can be anywhere, but if you put it in the docs.rs repo, it should go under the ./ignored/ directory so that it is not seen by git or the docker daemon.
$ mkdir -p ignored/cratesfyi-prefix
$ cd ignored/cratesfyi-prefix
$ mkdir -vp documentations public_html sources
$ git clone https://github.com/rust-lang/crates.io-index && cd crates.io-index
$ git branch crates-index-diff_last-seen
(That last command is used to set up the `crates-index-diff` crate, so we can start monitoring new crate releases.)
Docker group
docs.rs needs to run docker containers for sandboxing. Therefore, you need to be in the 'docker' group to build crates. If you are already in the docker group, you can skip this step (you can check with `groups`).
# usermod -a -G docker "$USER"
$ # now logout and back in to your shell
$ cd /path/to/docs.rs/ignored/cratesfyi-prefix
Environment for the server
To ensure that the docs.rs server is configured properly, we need to set a few environment variables. This is most easily done by making a shell script.
$ cd ..
$ echo '
export CRATESFYI_PREFIX=.
# or add an appropriate username/password as necessary
export CRATESFYI_DATABASE_URL=postgresql://cratesfyi:password@localhost
export CRATESFYI_GITHUB_USERNAME=
export CRATESFYI_GITHUB_ACCESSTOKEN=
export RUST_LOG=cratesfyi,rustwide=info
' > env.sh
This last command assumes you put the shell script in `./env.sh`, but you can name it anything as long as it is in the current directory.
$ . ./env.sh
$ cargo run database migrate
$ cargo run database update-search-index
$ cargo run database update-release-activity
# This will take between 5 and 30 minutes on the first run, depending on your internet speed.
# It downloads the rustops/crates-build-env crates which is almost 1 GB.
# It does not currently display a progress bar, this is https://github.com/rust-lang/rustwide/issues/9
# As a workaround, you can run `docker pull rustops/crates-build-env` in a separate terminal.
$ cargo run build crate rand 0.5.5
Building on platforms other than Linux
This is not currently possible. We assume in several places that the rustup toolchain is x86_64-unknown-linux-gnu. As a result, the only way to build on Mac or Alpine is to use the docker-compose file.
Resetting the database
Occasionally, if you make changes to the migrations in a PR, those migrations will be saved in the database but will not be in the code when you switch back to the master branch. In this case, there is no way to undo the migration without knowing exactly which version of the code made the change (`cargo migrate` will have no effect). Here is a convenient shell script to reset the database so you don't have to remember how to undo your changes.
NOTE: DO NOT RUN THIS IN PRODUCTION.
#!/bin/sh
set -euv
. ./env.sh
sudo -u postgres dropdb cratesfyi --if-exists
sudo -u postgres psql -c 'CREATE DATABASE cratesfyi OWNER cratesfyi;'
cargo run database migrate
Self hosting a docs.rs instance
These are instructions for deploying the server in a production environment. For instructions on developing locally without docker-compose, see Developing without docker-compose.
Here is a breakdown of what it takes to turn a regular server into its own version of docs.rs.
Beware: This process is rather rough! Attempts at cleaning it up, automating setup components, etc, would be greatly appreciated!
Requirements
The commands and package names on this page will assume an Ubuntu server running systemd, but hopefully the explanatory text should give enough information to adapt to other systems. Note that docs.rs depends on the host being `x86_64-unknown-linux-gnu`.
Docs.rs has a few basic requirements:
- Rust (preferably via `rustup`)
- Git
- CMake, GCC, G++, and `pkg-config` (to build dependencies for crates and docs.rs itself)
- OpenSSL, zlib, curl, and `libmagic` (to link against)
- PostgreSQL
- LXC tools (doc builds run inside an LXC container)
$ curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain nightly
$ source $HOME/.cargo/env
# apt install build-essential git curl cmake gcc g++ pkg-config libmagic-dev libssl-dev zlib1g-dev postgresql lxc-utils
The `cratesfyi` user
To help things out later on, we can create a new unprivileged user that will run the server process. This user will own all the files required by the docs.rs process. This user will need to be able to run `lxc-attach` through `sudo` to be able to run docs builds, so give it a sudoers file at the same time:
# adduser --disabled-login --disabled-password --gecos "" cratesfyi
# echo 'cratesfyi ALL=(ALL) NOPASSWD: /usr/bin/lxc-attach' > /etc/sudoers.d/cratesfyi
(The name `cratesfyi` is a historical one: before the site was called "docs.rs", it was called "crates.fyi" instead. If you want to update the name of the user, feel free! Just be aware that the name `cratesfyi` will be used throughout this document.)
The "prefix" directory
In addition to the LXC container, docs.rs also stores several related files in a "prefix" directory. This directory can be stored anywhere, but the `cratesfyi` user needs to be able to access it:
# mkdir /cratesfyi-prefix
# chown cratesfyi:cratesfyi /cratesfyi-prefix
Now we can set up some required folders. To make sure they all have proper ownership, run them all as `cratesfyi`:
$ sudo -u cratesfyi mkdir -vp /cratesfyi-prefix/documentations /cratesfyi-prefix/public_html /cratesfyi-prefix/sources
$ sudo -u cratesfyi git clone https://github.com/rust-lang/crates.io-index.git /cratesfyi-prefix/crates.io-index
$ sudo -u cratesfyi git --git-dir=/cratesfyi-prefix/crates.io-index/.git branch crates-index-diff_last-seen
(That last command is used to set up the `crates-index-diff` crate, so we can start monitoring new crate releases.)
LXC container
To help contain what crates' build scripts can access, documentation builds run inside an LXC container. To create one inside the prefix directory:
# LANG=C lxc-create -n cratesfyi-container -P /cratesfyi-prefix -t download -- --dist ubuntu --release bionic --arch amd64
# ln -s /cratesfyi-prefix/cratesfyi-container /var/lib/lxc
# chmod 755 /cratesfyi-prefix/cratesfyi-container
# chmod 755 /var/lib/lxc
(To make deployment simpler, it's important that the OS the container is using is the same as the host! In this case, the host is assumed to be running 64-bit Ubuntu 18.04. If you make the container use a different release or distribution, you'll need to build docs.rs separately inside the container when deploying.)
You'll also need to configure networking for the container. The following is a sample `/etc/default/lxc-net` that enables NAT networking for the container:
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
LXC_DHCP_CONFILE=""
LXC_DOMAIN=""
In addition, you'll need to set the container's configuration to use this. Add the following lines to `/cratesfyi-prefix/cratesfyi-container/config`:
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
Now you can reload the LXC network configuration, start up the container, and set it up to auto-start when the host boots:
# systemctl restart lxc-net
# systemctl enable lxc@cratesfyi-container.service
# systemctl start lxc@cratesfyi-container.service
Now we need to do some setup inside this container. You can either copy all these commands so that each one attaches on its own, or you can run `lxc-console -n cratesfyi-container` to open a root shell inside the container and skip the `lxc-attach` prefix.
# lxc-attach -n cratesfyi-container -- apt update
# lxc-attach -n cratesfyi-container -- apt upgrade
# lxc-attach -n cratesfyi-container -- apt install curl ca-certificates binutils gcc libc6-dev libmagic1 pkg-config build-essential
Inside the container, we also need to set up a `cratesfyi` user, and install Rust for it. In addition to the base Rust installation, we also need to install all the default targets so that we can build docs for all the Tier 1 platforms. The Rust compiler installed inside the container is the one that builds all the docs, so if you want to use a new Rustdoc feature, this is the compiler to update.
lxc-attach -n cratesfyi-container -- adduser --disabled-login --disabled-password --gecos "" cratesfyi
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain nightly'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-apple-darwin'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-pc-windows-msvc'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add i686-unknown-linux-gnu'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add x86_64-apple-darwin'
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup target add x86_64-pc-windows-msvc'
Now that we have Rust installed inside the container, we can use a trick to give the `cratesfyi` user on the host the same Rust compiler as the container. By symlinking the following directories into its user directory, we don't need to track a third toolchain.
for directory in .cargo .rustup .multirust; do
    [[ -h /home/cratesfyi/$directory ]] || \
        sudo -u cratesfyi ln -vs \
            /var/lib/lxc/cratesfyi-container/rootfs/home/cratesfyi/$directory \
            /home/cratesfyi/
done
Environment for the `cratesfyi` user
To ensure that the docs.rs server is configured properly, we need to set a few environment variables. The primary ones are going into a separate environment file, so we can load them into the systemd service that will manage the server.
Write the following into `/home/cratesfyi/.cratesfyi.env`. If you have a GitHub access token that the site can use to collect repository information, add it here, but otherwise leave it blank. The variables need to exist, but they can be blank to skip that collection.
CRATESFYI_PREFIX=/cratesfyi-prefix
CRATESFYI_DATABASE_URL=postgresql://cratesfyi:password@localhost
CRATESFYI_CONTAINER_NAME=cratesfyi-container
CRATESFYI_GITHUB_USERNAME=
CRATESFYI_GITHUB_ACCESSTOKEN=
RUST_LOG=cratesfyi
Now add the following to `/home/cratesfyi/.profile`:
export $(cat $HOME/.cratesfyi.env | xargs -d '\n')
export PATH="$HOME/.cargo/bin:$PATH"
export PATH="$PATH:$HOME/docs.rs/target/release"
Docs.rs build
Now we can actually clone and build the docs.rs source! The location of it doesn't matter much, but again, we want it to be owned by `cratesfyi` so it can build and run the final executable. In addition, we copy the built `cratesfyi` binary into the container so that it can be used to arrange builds on the inside.
sudo -u cratesfyi git clone https://github.com/rust-lang/docs.rs.git ~cratesfyi/docs.rs
sudo su - cratesfyi -c 'cd ~/docs.rs && cargo build --release'
cp -v /home/cratesfyi/docs.rs/target/release/cratesfyi /var/lib/lxc/cratesfyi-container/rootfs/usr/local/bin
PostgreSQL
Now that we have the repository built, we can use it to set up the database. Docs.rs uses a Postgres database to store information about crates and their documentation. To set one up, we first need to ask Postgres to create the database, and then run the docs.rs command to create the initial tables and content:
sudo -u postgres sh -c "psql -c \"CREATE USER cratesfyi WITH PASSWORD 'password';\""
sudo -u postgres sh -c "psql -c \"CREATE DATABASE cratesfyi OWNER cratesfyi;\""
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database init"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build add-essential-files"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build crate rand 0.5.5"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database update-search-index"
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- database update-release-activity"
Server configuration
We're almost there! At this point, we've got all the pieces in place to run the site. Now we can set up a systemd service to run the daemon that collects crate information, orchestrates builds, and serves the website. The following systemd service file can be placed in `/etc/systemd/system/cratesfyi.service`:
[Unit]
Description=Cratesfyi daemon
After=network.target postgresql.service
[Service]
User=cratesfyi
Group=cratesfyi
Type=forking
PIDFile=/cratesfyi-prefix/cratesfyi.pid
EnvironmentFile=/home/cratesfyi/.cratesfyi.env
ExecStart=/home/cratesfyi/docs.rs/target/release/cratesfyi daemon
WorkingDirectory=/home/cratesfyi/docs.rs
[Install]
WantedBy=multi-user.target
Enabling and running that will serve the website on `http://localhost:3000`, so if you want to route public traffic to it, you'll need to set up something like nginx to proxy the connections to it.
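For example, a minimal nginx reverse-proxy might look like the sketch below. The domain name is a placeholder, and the `sites-available` layout assumes a Debian/Ubuntu nginx package:

```sh
# write a minimal proxy config forwarding port 80 to the daemon on port 3000
cat > /etc/nginx/sites-available/cratesfyi <<'EOF'
server {
    listen 80;
    server_name docs.example.com;
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }
}
EOF
ln -s /etc/nginx/sites-available/cratesfyi /etc/nginx/sites-enabled/
systemctl reload nginx
```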
Updating Rust
If you want to update the Rust compiler used to build crates (and the Rustdoc that comes with it), you need to make sure you don't interrupt any existing crate builds. The daemon waits for 60 seconds between checking for new crates, so you need to make sure you catch it during that window. Since we hooked the daemon into systemd, the logs will be available in its journal. Running `journalctl -efu cratesfyi` (it may need to be run as root if nothing appears) will show the latest log output and show new entries as they appear. You're looking for a message like "Finished building new crates, going back to sleep" or "Queue is empty, going back to sleep", which indicates that the crate-building thread is waiting.
To prevent the queue from building more crates, run the following:
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build lock"
This will create a lock file in the prefix directory that will prevent more crates from being built. At this point, you can update the rustc inside the container and add the rustdoc static files to the database:
lxc-attach -n cratesfyi-container -- su - cratesfyi -c 'rustup update'
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build add-essential-files"
Once this is done, you can unlock the queue to allow crates to build again:
sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build unlock"
And we're done! New crates will start being built with the new rustc. If you want to rebuild any existing docs with the new rustdoc, you need to manually build them - there's no automated way to rebuild failed docs or docs from a certain rust version yet.
Updating docs.rs
To update the code for docs.rs itself, you can follow a similar approach. First, watch the logs so you can stop the daemon from building more crates. (You can replace the lock command with `systemctl stop cratesfyi` if you don't mind the web server being down while you build.)
# journalctl -efu cratesfyi
(wait for build daemon to sleep)
$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build lock"
Once the daemon has stopped, you can start updating the code and rebuilding:
$ sudo su - cratesfyi -c "cd ~/docs.rs && git pull"
$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo build --release"
Now that we have a shiny new build, we need to make sure the service is using it:
# cp -v /home/cratesfyi/docs.rs/target/release/cratesfyi /var/lib/lxc/cratesfyi-container/rootfs/usr/local/bin
# systemctl restart cratesfyi
Next, we can unlock the builder so it can start checking new crates:
$ sudo su - cratesfyi -c "cd ~/docs.rs && cargo run --release -- build unlock"
And we're done! Changes to the site or the build behavior should be visible now.
Common maintenance procedures
Temporarily remove a crate from the queue
It might happen that a crate fails to build repeatedly due to a docs.rs bug, clogging up the queue and preventing other crates from building. In this case it's possible to temporarily remove the crate from the queue until the bug is fixed. To do that, log into the machine and open a PostgreSQL shell with:
$ psql
Then you can run this SQL query to remove the crate:
UPDATE queue SET attempt = 100 WHERE name = '<CRATE_NAME>';
To add the crate back to the queue, run this query in the PostgreSQL shell:
UPDATE queue SET attempt = 0 WHERE name = '<CRATE_NAME>';
Pinning a version of nightly
Sometimes the latest nightly might be broken, causing doc builds to fail. In those cases it's possible to tell docs.rs to stop updating to the latest nightly and instead pin a specific release. To do that you need to edit the `/home/cratesfyi/.docs-rs-env` file, adding or changing this environment variable:
CRATESFYI_TOOLCHAIN=nightly-YYYY-MM-DD
Once the file is changed, docs.rs needs to be restarted:
systemctl restart docs.rs
To return to the latest nightly, simply remove the environment variable and restart docs.rs again.
Rebuild a specific crate
If a bug was recently fixed, you may want to rebuild a crate so that it builds with the latest version. From the docs.rs machine:
cratesfyi queue add <crate> <version>
This will add the crate with a lower priority than new crates by default; you can change the priority with the `-p` option.
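For example, to re-queue a hypothetical crate at the same priority as newly published crates (check `cratesfyi queue add --help` for the exact priority semantics, which are an assumption here):

```sh
cratesfyi queue add regex 1.3.1 -p 0
```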
Raise the limits for a specific crate
Occasionally crates will ask for their build limits to be raised. You can raise them from the docs.rs machine with `psql`.
Raising a memory limit to 8 GB:
# memory is measured in bytes
cratesfyi=> INSERT INTO sandbox_overrides (crate_name, max_memory_bytes)
VALUES ('crate name', 8589934592);
Raising a timeout to 15 minutes:
cratesfyi=> INSERT INTO sandbox_overrides (crate_name, timeout_seconds)
VALUES ('crate name', 900);
Raising limits for multiple crates at once:
cratesfyi=> INSERT INTO sandbox_overrides (crate_name, max_memory_bytes)
VALUES ('stm32f4', 8589934592), ('stm32h7', 8589934592), ('stm32g4', 8589934592);
Set a group of crates to be automatically de-prioritized
When many crates from the same project are published at once, they take up a lot of space in the queue. You can de-prioritize groups of crates at once like this:
cratesfyi=> INSERT INTO crate_priorities (pattern, priority)
VALUES ('group-%', 1);
The `pattern` should be a `LIKE` pattern as documented on https://www.postgresql.org/docs/current/functions-matching.html.
Note that this only sets the default priority for crates with that name. If there are crates already in the queue, you'll have to update those manually:
cratesfyi=> UPDATE queue SET priority = 1 WHERE name LIKE 'group-%';
Adding all the crates that failed after a date back to the queue
After an outage you might want to add all the failed builds back to the queue. To do that, log into the machine and open a PostgreSQL shell with:
psql
Then you can run this SQL query to add all the crates that failed after `YYYY-MM-DD HH:MM:SS` back to the queue:
UPDATE queue SET attempt = 0 WHERE attempt >= 5 AND build_time > 'YYYY-MM-DD HH:MM:SS';
Removing a crate from the website
Sometimes it may be necessary to remove all the content related to a crate from docs.rs (for example, after receiving a DMCA takedown notice). To do that, log into the server and run:
cratesfyi database delete-crate CRATE_NAME
The command will remove all the data from the database, and then remove the files from S3.
Blacklisting crates
Occasionally it may be necessary to prevent a crate from being built on docs.rs, for example if we can't legally host its content. To add a crate to the blacklist, preventing new builds for it, you can run:
cratesfyi database blacklist add <CRATE_NAME>
Other operations (such as `list` and `remove`) are also supported.
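For example, assuming those subcommands take the same shape as `add`:

```sh
cratesfyi database blacklist list
cratesfyi database blacklist remove <CRATE_NAME>
```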
Warning: blacklisting a crate doesn't remove existing content from the website, it just prevents new versions from being built!
Migrating from a local database to S3
If you're running your own instance of docs.rs for personal development and you'd like to test out the S3 integration, there are a couple of steps involved. It's mostly straightforward, but there's at least one major caveat to take into account.
Requirements
Since docs.rs uses a fixed bucket name to upload files, you'll need to set up an independent server that implements the Amazon S3 API. Minio is an example of a server you can set up. Instructions for installing and configuring Minio (or any S3-compliant provider) are beyond the scope of this article.
Configuring your docs.rs instance
Once you have your server and credentials set up, you need to tell the docs.rs server to use it. You'll need to add some extra environment variables to the environment file you use to configure docs.rs. It uses the `S3_ENDPOINT` variable to determine where to call, and the other variables to configure its access privileges. The available environment variables are documented in `rusoto`'s `EnvironmentProvider`.
AWS_ACCESS_KEY_ID=<access key>
AWS_SECRET_ACCESS_KEY=<access secret>
S3_ENDPOINT=<endpoint url>
Migrating files out of the database
Before you restart the process to load the new credentials, the files that are currently in the local database need to be migrated out to S3. Once docs.rs sees that it has AWS credentials in its environment, it will stop reading from the `files` table entirely. Therefore, to properly transition to using the new file storage, you need to run an extra command to upload them. It's helpful to lock the build queue while this is happening, so that no new files will sneak in while the migration happens.
cratesfyi build lock
# edit the environment file with the above variables if you haven't already
cratesfyi database move-to-s3 # this will take a while, depending on the size of your database
sudo systemctl restart cratesfyi # or however you manage your docs.rs service
# verify that files are loading from S3/Minio/etc
cratesfyi build unlock
Once this is done, new crate builds will upload files directly to your file storage service rather than into the database. The `move-to-s3` command will remove rows from the `files` table in the database as it uploads them, so once you're done, you can compact the database to shrink its on-disk size.
Governance
IMPORTANT This document is adapted from RFC 1068 and is currently being actively worked on, however there may be large parts of Rust's governance that are missing, incomplete, or out of date.
Core team
The core team serves as leadership for the Rust project as a whole. In particular, it:
- Sets the overall direction and vision for the project. That means setting the core values that are used when making decisions about technical tradeoffs. It means steering the project toward specific use cases where Rust can have a major impact. It means leading the discussion, and writing RFCs for, major initiatives in the project.
- Sets the priorities and release schedule. Design bandwidth is limited, and it's dangerous to try to grow the language too quickly; the core team makes some difficult decisions about which areas to prioritize for new design, based on the core values and target use cases.
- Focuses on broad, cross-cutting concerns. The core team is specifically designed to take a global view of the project, to make sure the pieces are fitting together in a coherent way.
- Spins up or shuts down subteams. Over time, we may want to expand the set of subteams, and it may make sense to have temporary "strike teams" that focus on a particular, limited task.
- Decides whether/when to ungate a feature. While the subteams make decisions on RFCs, the core team is responsible for pulling the trigger that moves a feature from nightly to stable. This provides an extra check that features have adequately addressed cross-cutting concerns, that the implementation quality is high enough, and that language/library commitments are reasonable.
The core team should include both the subteam leaders, and, over time, a diverse set of other stakeholders that are both actively involved in the Rust community, and can speak to the needs of major Rust constituencies, to ensure that the project is addressing real-world needs.
Subteams
The primary roles of each subteam are:
- Shepherding RFCs for the subteam area. As always, that means (1) ensuring that stakeholders are aware of the RFC, (2) working to tease out various design tradeoffs and alternatives, and (3) helping build consensus.
- Accepting or rejecting RFCs in the subteam area.
- Setting policy on what changes in the subteam area require RFCs, and reviewing direct PRs for changes that do not require an RFC.
- Delegating reviewer rights for the subteam area. The ability to `r+` is not limited to team members, and in fact earning `r+` rights is a good stepping stone toward team membership. Each team should set reviewing policy, manage reviewing rights, and ensure that reviews take place in a timely manner. (Thanks to Nick Cameron for this suggestion.)
Subteams make it possible to involve a larger, more diverse group in the decision-making process. In particular, they should involve a mix of:
- Rust project leadership, in the form of at least one core team member (the leader of the subteam).
- Area experts: people who have a lot of interest and expertise in the subteam area, but who may be far less engaged with other areas of the project.
- Stakeholders: people who are strongly affected by decisions in the subteam area, but who may not be experts in the design or implementation of that area. It is crucial that some people heavily using Rust for applications/libraries have a seat at the table, to make sure we are actually addressing real-world needs.
Members should have demonstrated a good sense for design and dealing with tradeoffs, an ability to work within a framework of consensus, and of course sufficient knowledge about or experience with the subteam area. Leaders should in addition have demonstrated exceptional communication, design, and people skills. They must be able to work with a diverse group of people and help lead it toward consensus and execution.
Each subteam is led by a member of the core team. The leader is responsible for:
- Setting up the subteam:
  - Deciding on the initial membership of the subteam (in consultation with the core team).
  - Once the subteam is up and running, working with subteam members to determine and publish subteam policies and mechanics, including the way that subteam members join or leave the team (which should be based on subteam consensus).
- Communicating core team vision downward to the subteam.
- Alerting the core team to subteam RFCs that need global, cross-cutting attention, and to RFCs that have entered the "final comment period" (see below).
- Ensuring that RFCs and PRs are progressing at a reasonable rate, re-assigning shepherds/reviewers as needed.
- Making final decisions in cases of contentious RFCs that are unable to reach consensus otherwise (should be rare).
The way that subteams communicate internally and externally is left to each subteam to decide, but:
- Technical discussion should take place as much as possible on public forums, ideally on RFC/PR threads and tagged discuss posts.
- Each subteam will have a dedicated internals forum tag.
- Subteams should actively seek out discussion and input from stakeholders who are not members of the team.
- Subteams should have some kind of regular meeting or other way of making decisions. The content of this meeting should be summarized with the rationale for each decision -- and, as explained below, decisions should generally be about weighting a set of already-known tradeoffs, not discussing or discovering new rationale.
- Subteams should regularly publish the status of RFCs, PRs, and other news related to their area. Ideally, this would be done in part via a dashboard like the Homu queue.
Decision-making
Consensus
Rust has long used a form of consensus decision-making. In a nutshell the premise is that a successful outcome is not where one side of a debate has "won", but rather where concerns from all sides have been addressed in some way. This emphatically does not entail design by committee, nor compromised design. Rather, it's a recognition that
... every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.
Breakthrough designs sometimes end up changing the playing field by eliminating tradeoffs altogether, but more often difficult decisions have to be made. The key is to have a clear vision and set of values and priorities, which is the core team's responsibility to set and communicate, and the subteam's responsibility to act upon.
Whenever possible, we seek to reach consensus through discussion and design revision. Concretely, the steps are:
- Initial RFC proposed, with initial analysis of tradeoffs.
- Comments reveal additional drawbacks, problems, or tradeoffs.
- RFC revised to address comments, often by improving the design.
- Repeat above until "major objections" are fully addressed, or it's clear that there is a fundamental choice to be made.
Consensus is reached when most people are left with only "minor" objections, i.e., while they might choose the tradeoffs slightly differently they do not feel a strong need to actively block the RFC from progressing.
One important question is: consensus among which people, exactly? Of course, the broader the consensus, the better. But at the very least, consensus within the members of the subteam should be the norm for most decisions. If the core team has done its job of communicating the values and priorities, it should be possible to fit the debate about the RFC into that framework and reach a fairly clear outcome.
Lack of consensus
In some cases, though, consensus cannot be reached. These cases tend to split into two very different camps:
- "Trivial" reasons, e.g., there is not widespread agreement about naming, but there is consensus about the substance.
- "Deep" reasons, e.g., the design fundamentally improves one set of concerns at the expense of another, and people on both sides feel strongly about it.
In either case, an alternative form of decision-making is needed.
- For the "trivial" case, usually either the RFC shepherd or subteam leader will make an executive decision.
- For the "deep" case, the subteam leader is empowered to make a final decision, but should consult with the rest of the core team before doing so.
How and when RFC decisions are made, and the "final comment period"
Each RFC has a shepherd drawn from the relevant subteam. The shepherd is responsible for driving the consensus process -- working with both the RFC author and the broader community to dig out problems, alternatives, and improved design, always working to reach broader consensus.
At some point, the RFC comments will reach a kind of "steady state", where no new tradeoffs are being discovered, and either objections have been addressed, or it's clear that the design has fundamental downsides that need to be weighed.
At that point, the shepherd will announce that the RFC is in a "final comment period" (which lasts for one week). This is a kind of "last call" for strong objections to the RFC. The announcement of the final comment period for an RFC should be very visible; it should be included in the subteam's periodic communications.
Note that the final comment period is in part intended to help keep RFCs moving. Historically, RFCs sometimes stall out at a point where discussion has died down but a decision isn't needed urgently. In this proposed model, the RFC author could ask the shepherd to move to the final comment period (and hence toward a decision).
After the final comment period, the subteam can make a decision on the RFC. The role of the subteam at that point is not to reveal any new technical issues or arguments; if these come up during discussion, they should be added as comments to the RFC, and it should undergo another final comment period.
Instead, the subteam decision is based on weighing the already-revealed tradeoffs against the project's priorities and values (which the core team is responsible for setting, globally). In the end, these decisions are about how to weight tradeoffs. The decision should be communicated in these terms, pointing out the tradeoffs that were raised and explaining how they were weighted, and never introducing new arguments.
Keeping things lightweight
In addition to the "final comment period" proposed above, this RFC proposes some further adjustments to the RFC process to keep it lightweight.
A key observation is that, thanks to the stability system and nightly/stable distinction, it's easy to experiment with features without commitment.
Clarifying what needs an RFC
Over time, we've been drifting toward requiring an RFC for essentially any user-facing change, which sometimes means that very minor changes get stuck awaiting an RFC decision. While subteams + final comment period should help keep the pipeline flowing a bit better, it would also be good to allow "minor" changes to go through without an RFC, provided there is sufficient review in some other way. (And in the end, the core team ungates features, which ensures at least a final review.)
This RFC does not attempt to answer the question "What needs an RFC", because that question will vary for each subteam. However, this RFC stipulates that each subteam should set an explicit policy about:
- What requires an RFC for the subteam's area, and
- What the non-RFC review process is.
These guidelines should try to keep the process lightweight for minor changes.
Clarifying the "finality" of RFCs
While RFCs are very important, they do not represent the final state of a design. Often new issues or improvements arise during implementation, or after gaining some experience with a feature. The nightly/stable distinction exists in part to allow for such design iteration.
Thus RFCs do not need to be "perfect" before acceptance. If consensus is reached on major points, the minor details can be left to implementation and revision.
Later, if an implementation differs from the RFC in substantial ways, the subteam should be alerted, and may ask for an explicit amendment RFC. Otherwise, the changes should just be explained in the commit/PR.
The teams
With all of that out of the way, what subteams should we start with? This RFC proposes the following initial set:
- Language design
- Libraries
- Compiler
- Tooling and infrastructure
- Moderation
In the long run, we will likely also want teams for documentation and for community events, but these can be spun up once there is a more clear need (and available resources).
Language design team
Focuses on the design of language-level features; not all team members need to have extensive implementation experience.
Some example RFCs that fall into this area:
- Associated types and multidispatch
- DST coercions
- Trait-based exception handling
- Rebalancing coherence
- Integer overflow (this has high overlap with the library subteam)
- Sound generic drop
Library team
Oversees both `std` and, ultimately, other crates in the rust-lang GitHub organization. The focus up to this point has been the standard library, but we will want "official" libraries that aren't quite `std` territory but are still vital for Rust. (The precise plan here, as well as the long-term plan for `std`, is one of the first important areas of debate for the subteam.) Also includes API conventions.
Some example RFCs that fall into this area:
- Collections reform
- IO reform
- Debug improvements
- Simplifying std::hash
- Conventions for ownership variants
Compiler team
Focuses on compiler internals, including implementation of language features. This broad category includes work in codegen, factoring of compiler data structures, type inference, borrowck, and so on.
There is a more limited set of example RFCs for this subteam, in part because we haven't generally required RFCs for this kind of internals work, but here are two:
- Non-zeroing dynamic drops (this has high overlap with language design)
- Incremental compilation
Tooling and infrastructure team
Even more broad is the "tooling" subteam, which at inception is planned to encompass every "official" (rust-lang managed) non-`rustc` tool:
- rustdoc
- rustfmt
- Cargo
- crates.io
- CI infrastructure
- Debugging tools
- Profiling tools
- Editor/IDE integration
- Refactoring tools
It's not presently clear exactly what tools will end up under this umbrella, nor which should be prioritized.
Moderation team
Finally, the moderation team is responsible for dealing with CoC violations.
One key difference from the other subteams is that the moderation team does not have a leader. Its members are chosen directly by the core team, and should be community members who have demonstrated the highest standard of discourse and maturity. To limit conflicts of interest, the moderation subteam should not include any core team members. However, the subteam is free to consult with the core team as it deems appropriate.
The moderation team will have a public email address that can be used to raise complaints about CoC violations (forwards to all active moderators).
Initial plan for moderation
What follows is an initial proposal for the mechanics of moderation. The moderation subteam may choose to revise this proposal by drafting an RFC, which will be approved by the core team.
Moderation begins whenever a moderator becomes aware of a CoC problem, either through a complaint or by observing it directly. In general, the enforcement steps are as follows:
These steps are adapted from text written by Manish Goregaokar, who helped articulate them from experience as a Stack Exchange moderator.
-
Except for extreme cases (see below), try first to address the problem with a light public comment on thread, aimed to de-escalate the situation. These comments should strive for as much empathy as possible. Moderators should emphasize that dissenting opinions are valued, and strive to ensure that the technical points are heard even as they work to cool things down.
When a discussion has just gotten a bit heated, the comment can just be a reminder to be respectful and that there is rarely a clear "right" answer. In cases that are more clearly over the line into personal attacks, it can directly call out a problematic comment.
- If the problem persists on thread, or if a particular person repeatedly comes close to or steps over the line of a CoC violation, moderators then email the offender privately. The message should include relevant portions of the CoC together with the offending comments. Again, the goal is to de-escalate, and the email should be written in a dispassionate and empathetic way. However, the message should also make clear that continued violations may result in a ban.
- If problems still persist, the moderators can ban the offender. Banning should occur for progressively longer periods, for example starting at 1 day, then 1 week, then permanent. The moderation subteam will determine the precise guidelines here.
In general, moderators can and should unilaterally take the first step, but steps beyond that (particularly banning) should be done via consensus with the other moderators. Permanent bans require core team approval.
Some situations call for more immediate, drastic measures: deeply inappropriate comments, harassment, or comments that make people feel unsafe. (See the code of conduct for some more details about this kind of comment). In these cases, an individual moderator is free to take immediate, unilateral steps including redacting or removing comments, or instituting a short-term ban until the subteam can convene to deal with the situation.
The moderation team is responsible for interpreting the CoC. Drastic measures like bans should only be used in cases of clear, repeated violations.
Moderators themselves are held to a very high standard of behavior, and should strive for professional and impersonal interactions when dealing with a CoC violation. They should always push to de-escalate. And they should recuse themselves from moderation in threads where they are actively participating in the technical debate or otherwise have a conflict of interest. Moderators who fail to keep up this standard, or who abuse the moderation process, may be removed by the core team.
Subteam members, and especially core team members, are also held to a high standard of behavior. Part of the reason to separate the moderation subteam is to ensure that CoC violations by Rust's leadership are addressed through the same independent body of moderators.
Moderation covers all rust-lang venues, which currently include github repos, IRC channels (#rust, #rust-internals, #rustc, #rust-libs), and the two discourse forums. (The subreddit already has its own moderation structure, and isn't directly associated with the rust-lang organization.)
Infrastructure
This section documents Rust's infrastructure, and how it is maintained.
External Links
- rust-toolstate records build and test status of external tools bundled with the Rust repository.
Other Rust Installation Methods
Which installer should you use?
Rust runs on many platforms, and there are many ways to install Rust. If you want to install Rust in the most straightforward, recommended way, then follow the instructions on the main installation page.
That page describes installation via rustup, a tool that manages multiple Rust toolchains in a consistent way across all platforms Rust supports. Why might one not want to install using those instructions?
- Offline installation. rustup downloads components from the internet on demand. If you need to install Rust without access to the internet, rustup is not suitable.
- Preference for the system package manager. On Linux in particular, but also on macOS with Homebrew, and Windows with Chocolatey or Scoop, developers sometimes prefer to install Rust with their platform's package manager.
- Preference against curl | sh. On Unix, we usually install rustup by running a shell script via curl. Some have concerns about the security of this arrangement and would prefer to download and run the installer themselves.
- Validating signatures. Although rustup performs its downloads over HTTPS, the only way to verify the signatures of Rust installers today is to do so manually with the standalone installers.
- GUI installation and integration with "Add/Remove Programs" on Windows. rustup runs in the console and does not register its installation like typical Windows programs. If you prefer a more typical GUI installation on Windows there are standalone .msi installers. In the future rustup will also have a GUI installer on Windows.
Rust's platform support is defined in three tiers, which correspond closely with the installation methods available: in general, the Rust project provides binary builds for all tier 1 and tier 2 platforms, and they are all installable via rustup. Some tier 2 platforms, though, have only the standard library available, not the compiler itself; that is, they are cross-compilation targets only. Rust code can run on those platforms, but they do not run the compiler itself. Such targets can be installed with the rustup target add command.
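As a quick illustration (x86_64-unknown-linux-musl here is just one example of a tier 2, standard-library-only target), installing such a target looks like this:

rustup target list
rustup target add x86_64-unknown-linux-musl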
Other ways to install rustup
The way to install rustup differs by platform:
- On Unix, run curl https://sh.rustup.rs -sSf | sh in your shell. This downloads and runs rustup-init.sh, which in turn downloads and runs the correct version of the rustup-init executable for your platform.
- On Windows, download and run rustup-init.exe.
rustup-init can be configured interactively, and all options can additionally be controlled by command-line arguments, which can be passed through the shell script. Pass --help to rustup-init as follows to display the arguments rustup-init accepts:
curl https://sh.rustup.rs -sSf | sh -s -- --help
If you prefer not to use the shell script, you may directly download rustup-init for the platform of your choice:
- aarch64-linux-android
- aarch64-unknown-linux-gnu
- arm-linux-androideabi
- arm-unknown-linux-gnueabi
- arm-unknown-linux-gnueabihf
- armv7-linux-androideabi
- armv7-unknown-linux-gnueabihf
- i686-apple-darwin
- i686-linux-android
- i686-pc-windows-gnu
- i686-pc-windows-msvc
- i686-unknown-linux-gnu
- mips-unknown-linux-gnu
- mips64-unknown-linux-gnuabi64
- mips64el-unknown-linux-gnuabi64
- mipsel-unknown-linux-gnu
- powerpc-unknown-linux-gnu
- powerpc64-unknown-linux-gnu
- powerpc64le-unknown-linux-gnu
- s390x-unknown-linux-gnu
- x86_64-apple-darwin
- x86_64-linux-android
- x86_64-pc-windows-gnu
- x86_64-pc-windows-msvc
- x86_64-unknown-freebsd
- x86_64-unknown-linux-gnu
- x86_64-unknown-linux-musl
- x86_64-unknown-netbsd
Standalone installers
The official Rust standalone installers contain a single release of Rust, and are suitable for offline installation. They come in three forms: tarballs (extension .tar.gz), which work in any Unix-like environment; Windows installers (.msi); and Mac installers (.pkg). These installers come with rustc, cargo, rustdoc, the standard library, and the standard documentation, but do not provide access to additional cross-targets like rustup does.
The most common reasons to use these are:
- Offline installation
- Preferring a more platform-integrated, graphical installer on Windows
Each of these binaries is signed by the Rust build infrastructure with GPG, using the Rust signing key, which is available on keybase.io. In the tables below, the .asc files are the signatures.
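As a sketch of how manual verification might look (the installer file name below is illustrative, and the key location should be double-checked before use), you would import the Rust signing key and verify the detached signature:

# Import the Rust signing key, also available on keybase.io
curl -O https://static.rust-lang.org/rust-key.gpg.ascii
gpg --import rust-key.gpg.ascii
# Check the .asc signature against the downloaded installer
gpg --verify rust-1.47.0-x86_64-unknown-linux-gnu.tar.gz.asc rust-1.47.0-x86_64-unknown-linux-gnu.tar.gz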
Source code
Channel | Archives + Signatures |
---|---|
stable (1.47.0) | tar.gz tar.gz.asc |
beta | tar.gz tar.gz.asc |
nightly | tar.gz tar.gz.asc |
The Rust Release Channel Layout
NOTE This document should be considered incomplete and descriptive rather than normative. Do not rely on anything described herein to be fully correct or a definition of how things should be done.
A lot of the content herein is derived from a posting made to the Rust internals forum by Brian Anderson back in 2016.
Rust releases are deployed onto static.rust-lang.org where they are served via https. There are several parts to a release channel (stable, beta, nightly) but they all key off a manifest file and then go from there.
Channel manifests
There is a top level directory /dist/ which contains the channel manifests. The manifests are named channel-rust-[channelname].toml. Each channel manifest is accompanied by a .sha256 file which is a checksum of the manifest file and can be used to check integrity of the downloaded data. In addition, each channel's manifest is accompanied by a .asc file which is a detached GPG signature which can be used to check not only the integrity but also the authenticity of the channel manifest.
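As a sketch, checking the integrity of the stable channel manifest could look like this (assuming the .sha256 file uses the standard hash-plus-filename format understood by sha256sum):

curl -sO https://static.rust-lang.org/dist/channel-rust-stable.toml
curl -sO https://static.rust-lang.org/dist/channel-rust-stable.toml.sha256
sha256sum -c channel-rust-stable.toml.sha256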
In addition to the stable, beta, and nightly channels, there is also a manifest for each release which will be called channel-rust-x.yy.z.toml with its associated .sha256 and .asc files.
To support date-based channels, there is an archive folder for each day (labelled YYYY-MM-DD) which contains copies of the requisite channel files on that day. So, for example, if you installed nightly-2019-02-16 then the channel file would be https://static.rust-lang.org/dist/2019-02-16/channel-rust-nightly.toml.
Content of channel manifests
Channel manifests are toml files. These are known as v2 manifests. The v1 manifests are simply lists of the files associated with a release, and are not generated for every channel all of the time. Currently it is recommended to work only with the v2 manifests, and these are the topic of this section.
The top level of the .toml file consists of two important key/value pairs. Firstly the manifest-version which is, at this time, "2", and secondly the date of the manifest (date) whose value is of the form "YYYY-MM-DD".
There are then a number of top level sections (tables) which are:
- pkg - This contains the bulk of the manifest and lists the packages which are part of the release. Typically this will be things like rust, rustc, cargo etc. The rust package is semi-special and currently is used to specify the subset of other packages which will be installed by default.
  Within packages are components and extensions. Currently components are installed by default by rustup; extensions are optional components and are available via rustup component add and friends.
- renames - This contains a set of package renames which can be used to determine the correct package to fetch when the user enters an alias for it.
  Typically renames are used when a package leaves its preview state and is considered to be release quality. For example, the actual package for rustfmt is called rustfmt-preview, but since its release there has been a renames.rustfmt table whose to field is rustfmt-preview. When the user runs rustup component add rustfmt the name is automatically translated to rustfmt-preview, and when the user runs rustup component list then rustfmt-preview is automatically renamed back to rustfmt for display to the user.
- profiles - This is part of the future setup for deciding the default component set to install. Instead of choosing the components of pkg.rust, rustup will honor one of the entries in the profiles table. Usually this will be the default entry, which essentially (though not exactly) boils down to ["rustc", "cargo", "rust-std", "rust-docs", "rustfmt", "clippy"].
  Other profiles include minimal (["rustc", "cargo", "rust-std"]) and complete, which adds in additional tools such as the rls, a copy of the standard library source (rust-src), miri, lldb, llvm-tools, and rust-analysis.
Package entries in the channel manifest
As stated above, packages list their components and extensions (mostly just the rust package) and they can provide per-target tarball and sha256 data.
For example, a package might be:
[pkg.cargo.target.powerpc64-unknown-linux-gnu]
available = true
url = "https://static.rust-lang.org/dist/2019-05-23/cargo-0.36.0-powerpc64-unknown-linux-gnu.tar.gz"
hash = "279f3a84f40e3547a8532c64643f38068accb91c21f04cd16e46579c893f5a06"
xz_url = "https://static.rust-lang.org/dist/2019-05-23/cargo-0.36.0-powerpc64-unknown-linux-gnu.tar.xz"
xz_hash = "cf93b387508f4aea4e64f8b4887d70cc07a00906b981dc0c143e92e918682e4a"
Here you can see that this is for the cargo package, and for the powerpc64-unknown-linux-gnu target. The url/hash combo is for a .tar.gz, and the xz_url/xz_hash pair for the same tarball compressed with xz. Either pair of url and hash could be present, both may be present, but it is not useful for neither to be present unless available is set to false to indicate that the particular combination of package and target is unavailable in this channel at this time.
In addition, there will be a single entry providing the version for a package in the form:
[pkg.cargo]
version = "0.36.0 (6f3e9c367 2019-04-04)"
Here version will effectively be the $tool --version output, minus the tool's name.
Targets
Targets are the same triples you might use when building something with cargo build --target=$target, and you can add them to your installation using rustup target add $target. When you do that, what rustup actually does is find the rust-std package for the target in question and install it. It is essentially like an imaginary rustup component add rust-std.$target.
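Using the powerpc64 target from the manifest example above, that flow looks like this (a suitable cross-linker is still needed to produce final binaries):

rustup target add powerpc64-unknown-linux-gnu
cargo build --target=powerpc64-unknown-linux-gnu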
If a rust-std package for a target is not available = true, then that target cannot be installed via rustup. This can happen for lower tier targets from time to time.
Since components and extensions are target-specific in the pkg tables, you will be able to see that rust-std for every target is specified in every rust target's extensions. This allows for cross-compilation by installing any rust-std on any build system.
Service Infrastructure
Most services in the Rust Infrastructure are deployed via rust-central-station. Questions about infrastructure, including current status, should go to the #infra Discord channel.
Our stability guarantees: many of our services rely on publicly-accessible storage and APIs, but not all of these are intended for public consumption. At the moment, only the resources behind static.rust-lang.org are considered stable, meaning that those resources will not change without (at least) prior notice. If you are relying on other parts of the Rust project infrastructure for your own work, please let the infrastructure team know.
Highfive
Highfive is a bot (bot user account) which welcomes newcomers and assigns reviewers.
Rust Log Analyzer
The Rust Log Analyzer analyzes CI build logs to extract error messages and posts them to the pull request. It is run by TimNN.
Homu / bors
Homu is a bot which manages pull requests. It is often referred to as "bors" due to the name of its bot user account. Approved pull requests are placed in a queue from which tests are run.
Documentation on homu commands can be found here.
Please contact Alex Crichton if something goes wrong with the bot.
rfcbot
rfcbot is a bot (bot user account) which helps manage async decision making on issues and PRs (typically RFCs). Team members can view any pending requests for review on the FCP dashboard.
Documentation on rfcbot commands can be found in the rfcbot repository.
rustbot
rustbot is a bot (bot user account) to assist with managing issues and PRs to allow users to label and assign without GitHub permissions. See the wiki for more information.
DXR
DXR is a cross-referenced source index for Rust, allowing the Rust source tree to be navigated and searched with ease. It is generated by rust-dxr.
perf / rust-timer
perf offers information about the performance of rustc over time, and a bot for on-demand benchmarking.
It is split into a data collector and a web frontend + bot. The raw performance data is available here and can be browsed on the perf website.
One-off performance runs can be done by addressing the rust-timer bot (bot user account). You can trigger the necessary try-build and queue a perf run by saying
@bors try @rust-timer queue
(Technically, the requirement is that the queue command finishes executing prior to the try build completing successfully.)
Rust Playground
Rust Playground allows you to experiment with Rust before you install it locally, or in any other case where you might not have the compiler available. The Rust playground can be accessed here.
Crater
Crater is a tool to run experiments across the whole Rust ecosystem. Its primary purpose is to detect regressions in the Rust compiler, and it does this by building a large number of crates, running their test suites, and comparing the results between two versions of the Rust compiler.
Crater comes with a bot to trigger experiments.
docs.rs
docs.rs builds and serves the rustdoc documentation for all crates on crates.io. Issues may be filed on the docs.rs repository. See the #docs-rs channel on Discord for discussion or urgent issues.
Toolstate
The state of tools included with Rust is tracked on the toolstate page. When each PR is merged via CI, the status of each tool is recorded in a JSON file and stored in the toolstate repo. For further information, see the toolstate system documentation.
Rustup components history
The rustup components history tracks the status of every rustup component for every platform over time. See the repository for more information.
CI Timing Tracker
The CI Timing Tracker tracks and compares how long CI jobs take over time. It is run by Alex Crichton.
Team Maintenance
The roster of the Rust teams is always in flux. From time to time, new people are added, but also people sometimes opt into "alumni status", meaning that they are not currently an active part of the decision-making process. Unfortunately, whenever a new person is added or someone goes into alumni status, there are a number of disparate places that need to be updated. This page aims to document that list. If you have any questions, or need someone with more privileges to make a change for you, a good place to ask is #infra on Discord.
Team repo
Membership of teams is primarily driven by the config files in the rust-lang/team repo. Several systems use the team repo data to control access:
- the team website
- bors r+ rights
- rfcbot interaction
- Mailgun email lists
Team membership is duplicated in a few other places listed below, but the long-term goal is to centralize on the team repo.
Full team membership
To make a full team member, the following places need to be modified:
- the team repo
- the rust-lang/TEAM and (in some cases) rust-lang-nursery/TEAM teams on github must be updated
- the internals discussion board has per-team groups
- the list of reviewers highfive uses is set in rust-lang/highfive
  - the configs are set per-repo; some teams are listed in rust.json, whereas those that span multiple repos are set in _global.json
Team member departure
Remove the team member from any and all places:
- highfive
- 1password
- The GitHub team, GitHub nursery team
- team repo
- toolstate notifications
Handling of tools embedded in the rustc repo ("toolstate")
The Rust repository contains several external tools and documents as git submodules (e.g. clippy, rls, the Book, the Reference). Many of those are very tightly coupled to the compiler and depend on internal APIs that change all the time, but they are not actually essential to get the compiler itself to work. To make API changes less painful, these tools are allowed to "break" temporarily. PRs can still land and nightlies still get released even when some tools are broken. Their current status is managed by the toolstate system. (Cargo is not subject to the toolstate system and instead just has to always work.)
The three possible states of a "tool" (this includes the documentation managed by the toolstate system, where we run doctests) are: test-pass, test-fail, build-fail.
This page gives a rough overview how the toolstate system works, and what the rules are for when which tools are (not) allowed to break.
Toolstate Rules
- For all tools, if a PR changes that tool (if it changes the commit used by the submodule), the tool has to be in test-pass after this PR or else CI will fail.
- For all tools except for "nightly only" tools, the following extra rules are applied:
  - If a PR lands on the beta or stable branch, the tool has to be test-pass.
  - If a PR lands on master in the week before the beta is cut, and that PR regresses the tool (if it makes the state "worse"), CI fails. This is to help make sure all these tools become test-pass so that a beta can be cut. (See the Forge index for when the next beta cutoff is happening.)

At the time of writing, the following tools are "nightly only": rustc-dev-guide, miri, embedded-book.
Updating the toolstate repository
Updating the toolstate repository happens in two steps: when CI runs on the auto branch (where bors moves a PR to test if it is good for integration), the "tool" runners for the individual platforms (at the time of writing, Linux and Windows) each submit a JSON file to the repository recording the state of each tool for the commit they are testing. Later, if that commit actually entirely passed CI and bors moves it to the master branch, the "current tool status" in the toolstate repository is updated appropriately.
These scripts also automatically ping some people and create issues when tools break.
For further details, see the comments in the involved files: checktools.sh, publish_toolstate.py, as well as the other files mentioned there.
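As an illustration, the recorded status can also be read straight out of the toolstate repository. Assuming the repository lives at rust-lang-nursery/rust-toolstate (verify the current location before relying on this), something like the following prints the latest states:

curl -s https://raw.githubusercontent.com/rust-lang-nursery/rust-toolstate/master/latest.json | jq .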
Adding a tool
To add a new tool to be tracked, the following steps must be taken:
- Create a PR to rust-lang/rust that adds the submodule along with any necessary build system / bootstrap updates. Be careful that the tests properly support ./x.py --no-fail-fast to avoid issues like this (see the sketch after this list).
- Include changes to checktools.sh:
  - Build the tool at the top. This is the step that actually generates the JSON status for the tool. When save-toolstates is set in config.toml, the rust build system will write a JSON file with the status of each test.
  - Add the tool to status_check with whether it should be a beta blocker or not.
- Update publish_toolstate.py to add the tool. This includes a list of people to ping if the tool is broken, and its source repo. (Note: At the time of this writing, these users must have permissions to be assignable on rust-lang/rust GitHub.)
- Submit a PR to the toolstate repository to manually add the tool to the latest.json file.
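A minimal sketch of what supporting ./x.py --no-fail-fast means in practice, using a hypothetical tool path (src/tools/mytool is made up; save-toolstates is the config.toml key from the list above, and its exact section in config.toml may differ):

# In config.toml, enable per-test JSON status output, for example:
#   save-toolstates = "/tmp/toolstates.json"
# Then run the tool's test suite without aborting at the first failure:
./x.py test src/tools/mytool --no-fail-fast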
Policies of the infrastructure team
This section documents the policies created by the infrastructure team.
Policy on broken nightlies
Sometimes the nightlies released automatically by our CI end up being broken for some people or even everyone. This policy defines what the infra team's response will be in those cases.
Which nightly will be rolled back
A nightly can only be rolled back in the following cases:
- If it contains destructive code, for example if the included compiler deletes all the user's files.
- If an infra problem caused it to be broken for a big percentage of users on any Tier 1 platform. Issues affecting only lower tier platforms do not warrant a rollback, since we don't guarantee working builds for those platforms anyway.
A nightly will not be rolled back if it's broken by a critical compiler bug: those bugs are supposed to be caught by CI, and nightly can have compiler regressions anyway. There are no exceptions, even if big projects are broken because of this.
What are we going to fix
Once any member of the infra team decides to roll back a nightly under this policy, we will roll back to the most recent working nightly. The rollback has to fix installing the nightly with rustup:
$ rustup toolchain install nightly
It's not required to roll back other things like the documentation or the manually downloadable artifacts. After the nightly is rolled back, we have to announce the rollback on the @rustlang Twitter account and on the status page.
Infrastructure guidelines
This section contains the guidelines written by the infrastructure team for other teams who want to use the project's infrastructure.
Rust Infrastructure hosting for static websites
The Rust Infrastructure team provides hosting for static websites available for all Rust teams. This document explains the requirements a website needs to meet and how to set one up.
Requirements for hosting websites
- The website must be managed by a Rust team, or be officially affiliated with the project. The infrastructure team has finite resources and we can't offer hosting for community projects.
- The website's content and build tooling must be hosted on a GitHub repository in either the rust-lang or rust-lang-nursery organizations. The infrastructure team must be able to rebuild the website content at any time (for example if we need to switch hosting), and having it hosted on a GitHub repository inside infra-managed organizations is the best way for us to ensure that. Even though we'd prefer for all the repositories to be public, it's not a requirement.
- The website must be built and deployed with a CI service. We have custom tooling built around hosting static websites on our infra, and at the moment it works with Travis CI and Azure Pipelines. If you need different CI services ask us in advance and we'll adapt the tooling to your provider of choice.
- The website must reach an A+ grade on the Mozilla Observatory. Browsers have multiple security features toggleable only through HTTP response headers, and those features enhance users' privacy and prevent exploits from working. An A+ grade on the Observatory indicates all the important headers are correctly set.
- The website must be hosted on platforms vetted by the infra team. We recommend either GitHub Pages or Amazon S3 (in the rust-lang AWS account) as the hosting and CloudFront as the CDN, but if you need other platforms that's good as long as we consider them secure and reliable.
Static websites configuration
To avoid limitations of some hosting providers we have set up CloudFront to enable additional, custom behaviors. These behaviors are configured through a file named website_config.json at the root of the generated website content.
Adding custom headers
One of the requirements for having a static website hosted by the infrastructure team is to reach an A+ grade on the Mozilla Observatory, and that requires custom headers to be set. To set up custom headers you need to add a headers section to website_config.json. This example content includes all the headers needed to reach grade B on the Observatory (to reach grade A+ a Content Security Policy is required):
{
"headers": {
"Strict-Transport-Security": "max-age=63072000",
"X-Content-Type-Options": "nosniff",
"X-Frame-Options": "DENY",
"X-XSS-Protection": "1; mode=block",
"Referrer-Policy": "no-referrer, strict-origin-when-cross-origin"
}
}
Fixing GitHub Pages redirects
GitHub Pages behaves weirdly when it sits behind CloudFront and needs to issue redirects: since it doesn't know the real domain name it will use http://org-name.github.io/repo-name as the base of the redirect instead of the correct protocol and domain. To prevent this behavior the github_pages_origin key needs to be added to website_config.json with the origin base url as the value (excluding the protocol):
{
"github_pages_origin": "org-name.github.io/repo-name"
}
Deployment guide
These deployments steps are meant to be executed by a member of the infrastructure team since they require access to our AWS account.
Configuring AWS
Create a CloudFront web distribution and set the following properties:
- Origin Domain Name: rust-lang.github.io/repo-name
- Origin Protocol Policy: HTTPS Only
- Viewer Protocol Policy: Redirect HTTP to HTTPS
- Lambda Function Association:
- Viewer Response: arn:aws:lambda:us-east-1:890664054962:function:static-websites:4
- Alternate Domain Names: your-subdomain-name.rust-lang.org
- SSL Certificate: Custom SSL Certificate
- You will need to request the certificate for that subdomain name through ACM (please use the DNS challenge to validate the certificate)
- Comment: your-subdomain-name.rust-lang.org
Wait until the distribution is propagated and take note of its .cloudfront.net domain name.
Head over to the domain’s Route 53 hosted zone and create a new record set:
- Name: your-subdomain-name
- Type: CNAME
- Value: the .cloudfront.net domain name you saw earlier
Create an AWS IAM user to allow the CI provider used to deploy website changes to perform whitelisted automatic actions. Use ci--ORG-NAME--REPO-NAME (for example ci--rust-lang--rust) as the user name, allow programmatic access to it and add it to the ci-static-websites IAM group. Then take note of the access key id and the secret access key since you'll need those later.
Adding deploy keys
To deploy websites we don't use GitHub tokens (since they don't have granular access scoping) but a deploy key with write access, unique for each repository. To set up the deploy key you need to be an administrator on the repository, clone the simpleinfra repository and run this command:
$ cargo run --bin setup-deploy-keys rust-lang/repo-name
The command requires the GITHUB_TOKEN (you can generate one here) and the TRAVIS_TOKEN (you can see yours here) to be present. It will generate a brand new key, upload it to GitHub and configure Travis CI to use it if the repo is active there.
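Putting it together, a run might look like the following (both token values are placeholders you supply yourself):

export GITHUB_TOKEN="<your GitHub token>"
export TRAVIS_TOKEN="<your Travis CI token>"
cargo run --bin setup-deploy-keys rust-lang/repo-name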
Configuring Travis CI
To actually deploy the website, this snippet needs to be added to your .travis.yml (please replace the contents of RUSTINFRA_DEPLOY_DIR and RUSTINFRA_CLOUDFRONT_DISTRIBUTION):
env:
RUSTINFRA_DEPLOY_DIR: path/to/be/deployed
RUSTINFRA_CLOUDFRONT_DISTRIBUTION: ABCDEFGHIJKLMN
import:
- rust-lang/simpleinfra:travis-configs/static-websites.yml
You will also need to set the contents of the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables on the Travis CI web UI with the credentials of the IAM user you created earlier. The secret access key must be hidden from the build log, while the access key id should be publicly visible.
Configuring Azure Pipelines
To actually deploy the website, this snippet needs to be added at the top of your pipeline's YAML file:
resources:
repositories:
- repository: rustinfra
type: github
name: rust-lang/simpleinfra
endpoint: rust-lang
Then you can add these steps when you want to execute the deploy (please replace the contents of deploy_dir and cloudfront_distribution):
- template: azure-configs/static-websites.yml@rustinfra
parameters:
deploy_dir: path/to/output
# Optional, only needed if GitHub pages is behind CloudFront
cloudfront_distribution: AAAAAAAAAAAAAA
You will also need to set the following environment variables in the pipeline:
- GITHUB_DEPLOY_KEY: value outputted when adding the deploy key earlier (secret)
- AWS_ACCESS_KEY_ID: access key ID of the IAM user allowed to invalidate CloudFront (public)
- AWS_SECRET_ACCESS_KEY: access key of the IAM user allowed to invalidate CloudFront (secret)
Infrastructure team documentation
This section contains the documentation about the services hosted and managed by the Rust Infrastructure Team. Most of the linked resources and instructions are only available to infra team members though.
AWS access for team members
Selected members of the Rust Team have access to the AWS account of the project. This includes both members of the Infrastructure Team and members of teams with services hosted on AWS.
This document explains how to access our AWS account, and how to interact with it. If you're an infrastructure team member and you need to set up or revoke access for another person, read the "AWS access management" page.
Setting up your user after receiving the credentials
The first thing you need to do after receiving your credentials is to change your password and enable 2-factor authentication: until you do these things, access will be restricted automatically to just the permissions needed to configure 2FA.
Sign into the console with the temporary credentials given to you by the infrastructure team member who created the user. You'll be prompted to change the temporary password: change it and log in again. Then, go to the "My Security Credentials" page, located in the dropdown at the top:
Scroll down and click the "Assign MFA device" button. Choose "Virtual MFA device" (which is classic TOTP) and configure it with your authenticator app. Once you're done, log out of the console and log in again to gain access to the resources you're authorized to use.
Do not choose "U2F security key", even if you own one: due to limitations of the AWS API, that would prevent you from using the CLI, restricting your access to the console alone.
Using the AWS console
The AWS console provides a visual interface to most of the resources in our AWS account.
Using the AWS CLI
The AWS CLI allows you to interact with our AWS account from a terminal or a script. To set it up the first time, follow Amazon's documentation to install it and configure your credentials. The CLI doesn't use your console password to authenticate: you'll need to create an access key from the "My Security Credentials" page on the console.
2-factor authentication
To ensure the security of our AWS account, 2-factor authentication is required to interact with the CLI. The Infrastructure Team developed a script that eases the authentication process by creating a temporary session validated with 2FA for the current shell. The session expires in 12 hours, and it's valid for an unlimited number of invocations.
To use the script, clone the rust-lang/simpleinfra repository in a directory. Then, every time you need to use the AWS CLI run this command in your shell:
eval $(~/PATH/TO/SIMPLEINFRA/aws-creds.py)
That command will prompt you for your 2FA code, and it will set a few environment variables in the current shell with the temporary credentials. You'll need to run the command again after 12 hours, or if you want the credentials on another shell.
Plaintext credentials
By default, the AWS CLI stores your credentials (including the secret key) in the ~/.aws/credentials file, without any kind of encryption. While the danger of having plaintext credentials stored in your home directory is partially mitigated by the 2FA requirement, it'd be best not to store them anyway.
If you use a password manager with a CLI interface, an approach you can take to avoid the problem is to store your credentials in the password manager, and configure the CLI to call your password manager to fetch the credentials when needed.
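One possible shape of that setup, using the AWS CLI's credential_process option and a hypothetical helper script that fetches the keys from your password manager (the script path is made up; the helper must print credentials as JSON in the format documented by the AWS CLI):

# Tell the AWS CLI to obtain credentials from an external command
# instead of reading them from ~/.aws/credentials
cat >> ~/.aws/config <<'EOF'
[default]
credential_process = /home/you/bin/aws-creds-from-password-manager
EOF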
AWS access management
This document explains how to setup and manage AWS access for Rust team members. If you're a team member and you need to access AWS with your existing credentials, or you have received your credentials for the first time, check out the "AWS access for team members" page.
Granting access
To grant access to a person, go to team-members-access/_users.tf in the Terraform configuration and add the new user to it, specifying which teams they should be a member of. The user will be created as soon as you apply the configuration.
By default, there will be no credentials attached to the user. To allow the user to log in, go to the IAM console, open the security credentials page of the user you just created, and enable a console password. Let AWS generate a random one, and require the password to be changed on first login.
Finally communicate to the user that they can join with the generated password, and to follow the "AWS access for team members" page to learn how to enable 2FA and gain access to their account.
Revoking access
To revoke access from a person, log into the IAM console, open the security credentials page of the user you want to delete, and:
- Disable console access by clicking "Manage" on the console password
- Disable 2-factor authentication by clicking "Manage" on the assigned MFA device
- Remove all the access keys, including inactive ones, by clicking the "x".
Once all access has been removed from the console, go to team-members-access/_users.tf in the Terraform configuration, remove the user, and apply the configuration.
Bastion server
- FQDN: bastion.infra.rust-lang.org
- Ansible playbook to deploy this server.
- Terraform configuration to create AWS resources.
- Instance metrics (only available to infra team members).
Logging into servers through the bastion
To improve the security of our infrastructure it's not possible to connect directly to a production server with SSH. Instead, all connections must come from a small server called the "bastion", which only allows connections from a few whitelisted networks and logs any connection attempt.
To log into a server through the bastion you can use SSH's -J flag:
ssh -J bastion.infra.rust-lang.org servername.infra.rust-lang.org
It's also possible to configure SSH to always jump through the bastion when connecting to a host. Add this snippet to your SSH configuration file (usually located in ~/.ssh/config):
Host servername.infra.rust-lang.org
ProxyJump bastion.infra.rust-lang.org
Please remember the bastion server only allows connections from a small list of IP addresses. Infra team members with AWS access can change the whitelist, but it's good practice to either have your own bastion server or a static IP address.
The SSH keys authorized to log into each account are stored in the simpleinfra repository. Additionally, people with sensitive 1password access can use the master key stored in the vault to log into every account, provided their connection comes from any whitelisted IP.
Common maintenance procedures
Adding a new user to the bastion server
To add a new user to the bastion you need to add their key to a file named <username>.pub in ansible/roles/common/files/ssh-keys, and change the Ansible playbook, adding the user to the list of unprivileged users. Please leave a comment clarifying which servers the user will have access to.
Once that's done apply the playbook and add a new whitelisted IP address.
Adding a whitelisted IP
Due to privacy reasons, all the static IP addresses of team members with access to the bastion are stored on AWS SSM Parameter Store instead of public git repositories. To add an IP address you can run this command (taking care of replacing USERNAME and IP_ADDRESS with the proper values):
aws ssm put-parameter --type String --name "/prod/bastion/allowed-ips/USERNAME" --value "IP_ADDRESS/32"
You'll also need to add the username to the list in terraform/services.tf (key allowed_users in the service_bastion module). Once you have made all the needed changes you need to apply the Terraform configuration.
Updating a whitelisted IP
Due to privacy reasons, all the static IP addresses of team members with access to the bastion are stored on AWS SSM Parameter Store instead of public git repositories. To update an IP address you can run this command (taking care of replacing USERNAME and IP_ADDRESS with the proper values):
aws ssm put-parameter --overwrite --type String --name "/prod/bastion/allowed-ips/USERNAME" --value "IP_ADDRESS/32"
Once you have made all the needed changes you need to apply the Terraform configuration.
Removing a whitelisted IP
Due to privacy reasons, all the static IP addresses of team members with access to the bastion are stored on AWS SSM Parameter Store instead of public git repositories. To remove an IP address you can run this command (taking care of replacing USERNAME with the proper value):
aws ssm delete-parameter --name "/prod/bastion/allowed-ips/USERNAME"
You'll also need to remove the username from the list in terraform/services.tf (key allowed_users in the service_bastion module). Once you have made all the needed changes you need to apply the Terraform configuration.
Crater agents
- Source code: rust-lang/crater
- Hosted on:
  - crater-aws-1.infra.rust-lang.org (behind the bastion -- how to connect)
  - crater-azure-1.infra.rust-lang.org (behind the bastion -- how to connect)
- Maintainers: pietroalbini
- Application metrics (only available to infra team members).
- Instance metrics (only available to infra team members):
Service configuration
Crater agents are servers with our standard configuration running a Docker container hosting the agent. A timer checks for updates every 5 minutes, and if a newer Docker image is present the container will automatically be updated and restarted. This service is managed with Ansible.
Common maintenance procedures
Starting and stopping the agent
The agent is managed by the container-crater-agent.service systemd unit. That means it's possible to start, stop and restart it with the usual systemctl commands:
systemctl stop container-crater-agent.service
systemctl start container-crater-agent.service
systemctl restart container-crater-agent.service
Inspecting the logs of the agent
Logs of the agents are forwarded and collected by journald. To see them you can use journalctl:
journalctl -u container-crater-agent.service
Manually updating the container image
The container is updated automatically every 5 minutes (provided a newer image is present). If you need to update them sooner you can manually start the updater service by running this command:
systemctl start docker-images-update.service
Custom GitHub Actions runners
The Infrastructure Team manages a pool of self-hosted GitHub Actions runners, meant to be used by whitelisted repositories that need to run tests on platforms not supported by the GitHub-hosted runners. We're currently running the following machines:
- ci-arm-1.infra.rust-lang.org: AArch64 runners, hosted on Packet (configuration).
The server configuration for the runners is managed with Ansible (playbook, role), and the source code for the tooling run on the server is in the gha-self-hosted repository.
Please get in touch with the Infrastructure Team if you need to run builds on this pool for your project in the rust-lang organization.
Maintenance procedures
Changing the instances configuration
The set of instances available in each host is configured through the Ansible configuration located in the simpleinfra repo:
ansible/envs/prod/host_vars/{hostname}.yml
You'll be able to add, remove and resize instances by changing that file and applying the changes:
ansible/apply prod gha-self-hosted
Forcing an update of the source code
The server checks for source code updates every 15 minutes, but it's possible to start such check in advance. You need to log into the machine you want to act on, and run the following command:
sudo systemctl start gha-self-hosted-update
If the contents of the images/ directory were changed, an image rebuild will also be started. The new image will be used by each VM after it finishes processing the current job.
Forcing a rebuild of the images
The server automatically rebuilds the images every week, but it's possible to rebuild them in advance. You need to log into the machine you want to act on, and run the following command:
sudo systemctl start gha-self-hosted-rebuild-image
Managing the lifecycle of virtual machines
Each virtual machine is assigned a name and its own systemd unit, called gha-vm-{name}.service. For example, the arm-1-1 VM is managed by the gha-vm-arm-1-1.service systemd unit. You can stop, start and restart the virtual machine by stopping, starting and restarting the systemd unit.
Virtual machines are configured to restart after each build finishes.
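For example, to restart the arm-1-1 virtual machine mentioned above:

sudo systemctl restart gha-vm-arm-1-1.service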
Logging into the virtual machines
It's possible to log into the virtual machines from localhost to debug builds. This should be used as a last resort. Each VM binds SSH on a custom port on the host (configured in the host Ansible configuration), and allows access to the manage user (with password password). For example, to log into the VM with port 2201 you can run:
ssh manage@localhost -p 2201
Note that the VM image regenerates its own host key every time it boots, so you'll likely get host key mismatch errors when connecting to a freshly booted VM.
Discord moderation bot
- Source code: rust-lang/discord-mods-bot
- Hosted on: rust-ecs-prod ECS cluster
- Maintainers: technetos
The bot is hosted on the rust-ecs-prod ECS cluster, on the project's AWS account, with the discord-mods-bot service name. Its container image is stored in an ECR repository with the same name, and its data is stored in the shared RDS PostgreSQL instance.
Automatic deploys are set up from the rust-lang/discord-mods-bot GitHub repository.
The Discord bot account is rustbot#4299. pietroalbini, Mark-Simulacrum, alexcrichton and aidanhs have access to the developer portal.
Common maintenance procedures
Instructions on how to manage ECS services are available here.
Domain names and DNS
All the DNS records of the domains owned by the Rust Infrastructure team are hosted on AWS Route 53, and can be tweaked by members of the team. This document contains instructions for them on how to make changes.
- Changing DNS records of a domain managed with Terraform
- Managing DNS for a new domain with Terraform
- Adding subdomain redirects
Changing DNS records of a domain managed with Terraform
Warning: not all domain names are yet managed with Terraform. In the console, if a zone's comment doesn't start with [terraform] you'll need to make changes manually from the UI. Work is underway to migrate every domain to Terraform though.
Warning: terraform/services/dns only contains the definition of DNS records pointing to resources managed outside of Terraform. When Terraform manages a resource it will automatically add the required records on its own. See the service's documentation to learn where its Terraform configuration lives.
DNS records are managed in the terraform/services/dns directory of our Terraform configuration. A file named after the domain name, ending in .tf, exists for each managed domain, and it contains some basic information plus its records.
The configuration supports adding A, CNAME, MX and TXT records. Inside the module definition contained in the domain's file, each record type has its own map: the map keys are the names of the records, while the values are a list of record values.
For example, to add a pages.rust-lang.org CNAME pointing to rust-lang.github.io you'll need to add this to terraform/services/dns/rust-lang.org:
module "rust_lang_org" {
# ...
CNAME = {
"pages.rust-lang.org." = ["rust-lang.github.io"],
# ...
}
}
Once you have made all the changes you can apply them with:
terraform apply
Managing DNS for a new domain with Terraform
Setting up Terraform to manage the DNS records of a new domain name involves a few steps. First of all you need to decide the identifier used inside Terraform for that domain. By convention, the identifier is the domain name itself with . and - replaced with _. For example rust-lang.org becomes rust_lang_org.
Then you can create a file in terraform/services/dns named after the domain name, ending in .tf, with this content (take care of replacing the placeholders):
module "<IDENTIFIER>" {
source = "./domain"
domain = "<DOMAIN-NAME>"
comment = "<COMMENT-FOR-THE-DOMAIN>"
ttl = 300
}
Finally you need to output the ID of the Route53 zone, allowing other parts of our Terraform configuration to add records. Add this snippet to terraform/services/dns/outputs.tf:
# ...
output "zone_<IDENTIFIER>" {
value = module.<IDENTIFIER>.zone_id
}
Once you're done you can apply the changes with:
terraform init
terraform apply
Adding subdomain redirects
Our Terraform configuration supports creating redirects from an arbitrary number of subdomains we control to a URL. Redirects are created with these pieces of infrastructure:
- An S3 bucket for each set of redirects, named rust-http-redirect-<HASH>. The bucket has website hosting enabled, configured to redirect all the incoming requests to the chosen URL. This allows implementing redirects without an underlying server.
- An ACM certificate (plus the DNS records to validate it) for each set of redirects, with all the sources as alternate names. This is used to enable HTTPS redirects.
- A CloudFront distribution for each set of redirects to support HTTPS requests, using the previously generated ACM certificate and forwarding requests to the S3 bucket.
- Route53 records for each redirect in the related zones: CNAMEs for subdomains, and ALIASes for apex domains.
All the redirects are defined in terraform/redirects.tf, with a module for each destination URL. Either create a new module if you need to redirect to a new URL, or add a new subdomain to an existing module. See an example module here (take care of replacing the placeholders):
module "redirect_<IDENTIFIER>" {
source = "./modules/subdomain-redirect"
providers = {
aws = "aws"
aws.east1 = "aws.east1"
}
to = "<DESTINATION-URL>"
from = {
"<SUBDOMAIN-1>" = module.dns.zone_<DOMAIN-1-IDENTIFIER>,
"<SUBDOMAIN-2>" = module.dns.zone_<DOMAIN-2-IDENTIFIER>,
}
}
Once you have made all the changes you can apply the configuration with:
terraform init
terraform apply
Note that each change is going to take around 15 minutes to deploy, as CloudFront distribution changes are really slow to propagate. Also, it's normal to see a bunch of resources being recreated when a domain is added or removed from an existing redirect, as the ACM certificate will need to be regenerated.
docs.rs
- Source code: rust-lang/docs.rs
- Hosted on: docsrs.infra.rust-lang.org (behind the bastion -- how to connect)
- Maintainers: Joshua Nelson, Pietro Albini
(behind the bastion -- how to connect) - Maintainers: Joshua Nelson, Pietro Albini
- Instance metrics (only available to infra team members).
- Application metrics (only available to infra team members).
- Common maintenance procedures
ECS services management
Some applications running on the project's infrastructure are hosted in ECS clusters on our AWS account. This document explains the common maintenance procedures one should follow when operating them. Most of the actions explained here require AWS access.
Note: our ECS cluster is located in the Northern California (us-west-1) AWS region. Make sure it's the selected region when interacting with the AWS console.
Inspecting the logs
Logs for applications hosted on ECS are stored in CloudWatch Logs, and can be inspected in the AWS Console. Open the console, go to CloudWatch Logs and select the log group called /ecs/<service-name>. There are two ways to inspect the logs:
- If you need to look at the application as a whole, you can get an aggregated view by clicking the "View all log events" button (or, on the classic interface, "Search Log Group").
- If you need to debug a specific instance of a container, separate log streams for each running task are available. The streams are named after the container name and the task ID.
Logs are periodically purged (retention varies based on the specific application).
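If you prefer the terminal over the console, recent versions of the AWS CLI (v2) can tail a log group directly; a sketch using the log group name from above:

aws logs tail /ecs/<service-name> --follow --region us-west-1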
Restarting an application
To restart an application, you can force a new deployment without actually pushing any new code beforehand. To do so, run this command:
aws ecs update-service --cluster rust-ecs-prod --service <service-name> --force-new-deployment
Rolling back a deployment
To roll back a bad deployment you can run the aws-rollback.py script (stored in the simpleinfra repository) with your AWS credentials present in the shell. The script requires the name of the ECR container image repository as its first and only argument:
./aws-rollback.py <image-repository-name>
The script will show the list of images available in the repository, and ask for the image number to roll back to. Once that's inserted the script will point the latest tag to the image you chose, and if an ECS service with the same name as the repository exists that service will be restarted too.
Deploying application changes
Each application stores its own Docker container in an ECR repository in our AWS account. You can deploy changes both manually and automatically (with GitHub Actions).
For production applications it's recommended to set up automatic deployments.
Manual deployments
To manually deploy a local build, you first need to tag your built image with its ECR name:
docker tag <image-tag> 890664054962.dkr.ecr.us-west-1.amazonaws.com/<repository-name>:latest
Then you can authenticate with ECR and push it:
$(aws ecr get-login --no-include-email --region us-west-1)
docker push 890664054962.dkr.ecr.us-west-1.amazonaws.com/<repository-name>:latest
Finally, you need to force a new deployment of the ECS service with:
aws ecs update-service --cluster rust-ecs-prod --service <service-name> --force-new-deployment
Automatic deployments with GitHub Actions
The infrastructure team prepared an action for GitHub Actions that automates deployments from CI. To use it, ask a team member to setup AWS credentials in your repository, and then add this snippet to your workflow:
- name: Build the Docker image
run: docker build -t deploy-image .
- name: Deploy to production
uses: rust-lang/simpleinfra/github-actions/upload-docker-image@master
with:
image: deploy-image
repository: <ecr-repository-name>
region: us-west-1
redeploy_ecs_cluster: rust-ecs-prod
redeploy_ecs_service: <service-name>
aws_access_key_id: "${{ secrets.AWS_ACCESS_KEY_ID }}"
aws_secret_access_key: "${{ secrets.AWS_SECRET_ACCESS_KEY }}"
if: github.ref == 'refs/heads/<deploy-branch>'
Be sure to replace <ecr-repository-name>, <service-name> and <deploy-branch> with the correct values for your workflow. Once the workflow changes are merged in the branch you chose for deploys, any future commits pushed there will be deployed to the ECS cluster.
Monitoring
- Hosted on: monitoring.infra.rust-lang.org (behind the bastion -- how to connect)
- Maintainers: pietroalbini, infra team
- Public URL: grafana.rust-lang.org
- Ansible playbook to deploy this server.
- Instance metrics (only available to infra team members).
Service configuration
Our monitoring service is composed of three parts: Prometheus to scrape, collect and monitor metrics, Alertmanager to dispatch the alerts generated by Prometheus, and Grafana to display the metrics. All the parts are configured through Ansible.
The metrics are not backed up, as Prometheus purges them after 7 days anyway, but the Grafana dashboards are stored in a PostgreSQL database, which is backed up with restic in the rust-backups bucket (monitoring subdirectory). The password to decrypt the backups is in 1password.
Common maintenance procedures
Scrape a new metrics source
Prometheus works by periodically scraping a list of HTTP endpoints for metrics, written in its custom format. In our configuration the list is located in the prometheus_scrape section of the ansible/playbooks/monitoring.yml file in the simpleinfra repository.
To add a new metrics source, add your endpoint to an existing job or, if the metrics you're scraping are not related to any other job, a new one. The endpoint must be reachable from the monitoring instance. You can read the Prometheus documentation to find all the available options.
Create a new alert
Alerts are generated by Prometheus every time a custom rule defined in its configuration evaluates to true. In our configuration the list of rules is located in the prometheus_rule_groups section of the ansible/playbooks/monitoring.yml file in the simpleinfra repository.
To add a new alert you need to create an alerting rule either in an existing group or a new one. The full list of options is available in the Prometheus documentation.
Add permissions to a user
There are two steps needed to grant access to our Grafana instance to a user. First of all, to enable the user to log into the instance with their GitHub account, they need to be a member of a team authorized to log in. The list of teams is defined in the grafana_github_teams section of the ansible/playbooks/monitoring.yml file in the simpleinfra repository, and it contains a list of GitHub team IDs. To fetch an ID you can run this command:
curl -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/orgs/<ORG>/teams/<NAME> | jq .id
Once the user is a member of a team authorized to log in they will automatically be added to the main Grafana organization with "viewer" permissions. For infrastructure team members that needs to be changed to "admin" (in "Configuration" -> "Users"); otherwise leave it as viewer.
By default a viewer only has access to the unrestricted dashboards. To grant access to other dashboards you'll need to add them to a team (in the "Configuration" -> "Teams" page). It's also possible to grant admin privileges to the whole Grafana instance in the "Server Admin" -> "Users" -> "<username>" page. Do not grant those permissions except to trusted infra team members.
Additional resources
rust-bots
- FQDN: bots.infra.rust-lang.org (behind the bastion -- how to connect)
- Instance metrics (only available to infra team members).
Common maintenance procedures
Adding a new domain
First, run sudo vim /etc/nginx/nginx.conf to edit the nginx configuration and add the domain:
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name <domain>.infra.rust-lang.org; # Edit <domain> to match here
location /.well-known/acme-challenge {
root /home/ssl-renew/challenges;
}
location / {
# configure the domain here
}
}
Then run sudo -i -u ssl-renew vim renew.sh and add a --domains line to the script with the domain you're adding.
Finally, run the script: sudo -i -u ssl-renew ./renew.sh
How the Rust CI works
Which jobs we run
The rust-lang/rust repository uses Azure Pipelines to test all the platforms we support. We currently have two kinds of jobs running for each commit we want to merge to master:
-
Dist jobs build a full release of the compiler for that platform, including all the tools we ship through rustup; Those builds are then uploaded to the
rust-lang-ci2
S3 bucket and are available to be locally installed with the rustup-toolchain-install-master tool; The same builds are also used for actual releases: our release process basically consists of copying those artifacts fromrust-lang-ci2
to the production endpoint and signing them. -
Non-dist jobs run our full test suite on the platform, and the test suite of all the tools we ship through rustup; The amount of stuff we test depends on the platform (for example some tests are run only on Tier 1 platforms), and some quicker platforms are grouped together on the same builder to avoid wasting CI resources.
All the builds except those on macOS and Windows are executed inside that platform’s custom Docker container. This has a lot of advantages for us:
- The build environment is consistent regardless of changes to the underlying image (switching from the trusty image to xenial was painless for us).
- We can use ancient build environments to ensure maximum binary compatibility, for example using CentOS 5 on our Linux builders.
- We can avoid reinstalling tools (like QEMU or the Android emulator) every time thanks to Docker image caching.
- Users can run the same tests in the same environment locally by just running
src/ci/docker/run.sh image-name
, which is awesome to debug failures.
We also run tests for less common architectures (mainly Tier 2 and Tier 3 platforms) on Azure Pipelines. Since those platforms are not x86 we either run everything inside QEMU or just cross-compile if we don’t want to run the tests for that platform.
Merging PRs serially with bors
CI services usually test the last commit of a branch merged with the last commit in master, and while that’s great to check if the feature works in isolation it doesn’t provide any guarantee the code is going to work once it’s merged. Breakages like these usually happen when another, incompatible PR is merged after the build happened.
To ensure a master that works all the time we forbid manual merges: instead all PRs have to be approved through our bot, bors (the software behind it is called homu). All the approved PRs are put in a queue (sorted by priority and creation date) and are automatically tested one at a time. If all the builders are green the PR is merged; otherwise the failure is recorded and the PR will have to be re-approved.
Bors doesn’t interact with CI services directly, but it works by pushing the
merge commit it wants to test to a branch called auto
, and detecting the
outcome of the build by listening for either Commit Statuses or Check Runs.
Since the merge commit is based on the latest master and only one can be tested
at a time, when the results are green master is fast-forwarded to that
merge commit.
Unfortunately testing a single PR at a time, combined with our long CI (~3.5 hours for a full run), means we can't merge too many PRs in a single day, and a single failure greatly impacts our throughput for the day. The maximum number of PRs we can merge in a day is 7.
Rollups
Some PRs don’t need the full test suite to be executed: trivial changes like typo fixes or README improvements shouldn’t break the build, and testing every single one of them for 2 to 3 hours is a big waste of time. To solve this we do a "rollup", a PR where we merge all the trivial PRs so they can be tested together. Rollups are created manually by a team member who uses their judgement to decide if a PR is risky or not, and are the best tool we have at the moment to keep the queue in a manageable state.
Try builds
Sometimes we need a working compiler build before approving a PR, usually for
benchmarking or checking the impact of the PR across the
ecosystem. Bors supports creating them by pushing the merge commit on
a separate branch (try
), and they basically work the same as normal builds,
without the actual merge at the end. Any number of try builds can happen at the
same time, even if there is a normal PR in progress.
Which branches we test
Our builders are defined in src/ci/azure-pipelines/
, and they depend on the
branch used for the build. Each job is configured in one of the top .yml
files.
PR builds
All the commits pushed in a PR run a limited set of tests: a job containing a
bunch of lints plus a cross-compile check build to Windows mingw (without
producing any artifacts) and the x86_64-gnu-llvm-6.0
non-dist builder. Those
two builders are enough to catch most of the common errors introduced in a PR,
but they don’t cover other platforms at all. Unfortunately it would take too
many resources to run the full test suite for each commit on every PR.
Additionally, if the PR changes submodules the x86_64-gnu-tools
non-dist
builder is run.
The try
branch
On the main rust repo try builds produce just a Linux toolchain. Builds on
those branches run a job containing the lint builder and both the dist and
non-dist builders for linux-x86_64
. Usually we don’t need try
builds for
other platforms, but in the rare cases when this is needed we just add a
temporary commit that changes the src/ci/azure-pipelines/try.yml
file to
enable the builders we need on that platform (disabling Linux to avoid wasting
CI resources).
The auto
branch
This branch is used by bors to run all the tests on a PR before merging it, so all the builders are enabled for it. bors will repeatedly force-push on it (every time a new commit is tested).
The master
branch
Since all the commits to master
are fast-forwarded from the auto
branch (if
they pass all the tests there) we don’t need to build or test anything. A quick
job is executed on each push to update toolstate (see the toolstate description
below).
Other branches
Other branches are just disabled and don't run any kind of builds, since all the in-progress branches will eventually be tested in a PR. We encourage contributors to create branches on their own fork instead, but there is no way for us to prevent branches from being created on the main repository.
Caching
The main rust repository doesn’t use the native Azure Pipelines caching tools.
All our caching is uploaded to an S3 bucket we control
(rust-lang-ci-sccache2
), and it’s used mainly for two things:
Docker images caching
The Docker images we use to run most of the Linux-based builders take a long
time to fully build: every time we need to build them (for example when the CI
scripts change) we consistently reach the build timeout, forcing us to retry
the merge. To avoid the timeouts we cache the exported images on the S3 bucket
(with docker save
/docker load
).
Since we test multiple, diverged branches (master
, beta
and stable
) we
can’t rely on a single cache for the images, otherwise builds on a branch would
override the cache for the others. Instead we store the images identifying them
with a custom hash, made from the host’s Docker version and the contents of all
the Dockerfiles and related scripts.
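A rough sketch of what that save/restore flow looks like (illustrative only: the bucket layout, image name and key derivation here are made up, and the real logic lives in the src/ci scripts):

```sh
# Derive a cache key from the host Docker version and the Dockerfile contents.
key=$(cat <(docker --version) src/ci/docker/my-image/Dockerfile | sha256sum | cut -d ' ' -f 1)
url="s3://rust-lang-ci-sccache2/docker/$key/image.tar.gz"

# Try to restore a previously exported image from the bucket...
if aws s3 cp "$url" - | gunzip | docker load; then
    echo "cache hit for $key"
else
    # ...otherwise build the image and export it for the next run.
    docker build -t my-image src/ci/docker/my-image
    docker save my-image | gzip | aws s3 cp - "$url"
fi
```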
LLVM caching with sccache
We build some C/C++ stuff during the build and we rely on sccache to cache intermediate LLVM artifacts. Sccache is a distributed ccache developed by Mozilla, and it can use an object storage bucket as the storage backend, like we do with our S3 bucket.
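For reference, sccache is configured through environment variables; pointing it at the bucket looks roughly like this (the region is a placeholder, and the exact wiring in our CI differs):

```sh
export SCCACHE_BUCKET=rust-lang-ci-sccache2
export SCCACHE_REGION=us-east-1   # placeholder region
# C/C++ compiler invocations then go through the sccache wrapper:
export CC="sccache cc"
export CXX="sccache c++"
```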
Custom tooling around CI
Over the years we've developed some custom tooling to improve our CI experience.
Cancelbot to keep the queue short
We have limited CI capacity on Azure Pipelines, and while that's enough for a single build we can't run more than one at a time. Unfortunately when a job fails the other jobs on the same build will continue to run, limiting the available capacity. To avoid the issue we have a tool called cancelbot that runs via cron every 2 minutes and kills all the jobs not related to a running build through the API.
Rust Log Analyzer to show the error message in PRs
The build logs for rust-lang/rust
are huge, and it’s not practical to find
what caused the build to fail by looking at the logs. To improve the
developers’ experience we developed a bot called Rust Log Analyzer (RLA)
that receives the build logs on failure and extracts the error message
automatically, posting it on the PR.
The bot is not hardcoded to look for error strings, but was trained on a bunch of build failures to recognize which lines are common between builds and which are not. While the generated snippets can be weird sometimes, the bot is pretty good at identifying the relevant lines even for errors we've never seen before.
Toolstate to support allowed failures
The rust-lang/rust
repo doesn’t only test the compiler on its CI, but also
all the tools distributed through rustup (like rls, rustfmt, clippy…). Since
those tools rely on the compiler internals (which don’t have any kind of
stability guarantee) they often break after the compiler code is changed.
If we blocked merging rustc PRs on the tools being fixed we would be stuck in a chicken-and-egg problem, because the tools need the new rustc to be fixed but we can’t merge the rustc change until the tools are fixed. To avoid the problem most of the tools are allowed to fail, and their status is recorded in rust-toolstate. When a tool breaks a bot automatically pings the tool authors so they know about the breakage, and it records the failure on the toolstate repository. The release process will then ignore broken tools on nightly, removing them from the shipped nightlies.
While tool failures are allowed most of the time, they’re automatically forbidden a week before a release: we don’t care if tools are broken on nightly but they must work on beta and stable, so they also need to work on nightly a few days before we promote nightly to beta.
More information is available in the toolstate documentation.
Language
This section documents meta processes by the language team.
External Links
RFC Merge Procedure
Once an RFC has been accepted (i.e., the final comment period is complete, and no major issues were raised), it must be merged. Right now this is a manual process, though just about anyone can do it (if you're not a subteam member, though, you'll have to open a PR rather than merge the RFC manually). Here is the complete set of steps to merge an RFC -- in some cases, not all the steps will be applicable.
Step 1: Open tracking issue
Open a tracking issue over on rust-lang/rust. Here is a template for the issue text. You'll have to adjust the various places labeled XXX with some suitable content (e.g., the name of the RFC, or the most appropriate team).
This is a tracking issue for the RFC "XXX" (rust-lang/rfcs#NNN).
**Steps:**
- [ ] Implement the RFC (cc @rust-lang/XXX -- can anyone write up mentoring
instructions?)
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
**Unresolved questions:**
XXX --- list all the "unresolved questions" found in the RFC to ensure they are
not forgotten
Add the following labels to the issue:
- B-rfc-approved
- C-tracking-issue
- the appropriate T-XXX label
(If you don't have permissions to do so, leave a note cc'ing the appropriate team and asking them to do so.)
Step 2: Merge the RFC PR itself
In your local git checkout:
- Merge the RFC PR into master in your fork
- Add a commit that moves the file name from 0000- to its RFC number
- Edit the new file to include links to the RFC PR and the tracking issue you just created in the header
- Open a PR or push directly to the master branch on rust-lang/rfcs, as appropriate
Step 3: Leave a comment
Leave a final comment on the PR directing everyone to the tracking issue. Something like this, but feel free to add your own personal flavor (and change the team):
**Huzzah!** The @rust-lang/lang team has decided **to accept** this RFC.
To track further discussion, subscribe to the tracking issue here:
rust-lang/rust#41517
Step 4: Update the rendered link
Update the rendered link in the first post of the PR to the permanent home under
https://github.com/rust-lang/rfcs/blob/master/text/
.
(This way future visitors can open it easily after the PR branch is deleted.)
That's it, you're done!
Triage meeting procedure
This page documents how to run a lang team triage meeting, should you have the misfortune of being forced to do so.
Attending a meeting
If you would just like to attend a lang-team triage meeting, all you have to do is join the zoom call (the URL is attached to the calendar invite below).
Scheduling
Note that the scheduling for all meetings is recorded in the team calendar, links to which can be found on the rust-lang/lang-team repository.
Pre-triage
To start, we have a pre-triage meeting which occurs before the main meeting. This is not recorded. It is boring.
To execute this meeting you:
- Open the Current Meeting Dropbox Paper document
- Skim down the action items and look to see if there are any you know have been handled
- they can be checked off and removed
- Skip down to the Triage section
- For each Triage section, click on the link and populate it with what you find
- typically it is best to copy-and-paste the title of the issue, so that links remain intact
- For each item, click in and try to add a few notes as to the main topic
- look for things where there isn't much discussion needed, or just reminders
- these can be handled quickly in the meeting, or perhaps not at all
- items that require more discussion will need time allotted for them
Main meeting
- Ping the team on discord
@lang-team
- Begin the recording on Zoom, if you have access
- If nobody has access to the recording, oh well, we don't do it every week
- Discuss item by item and take some notes on what was said
- Add specific actions to the action items section above
- If a consensus arises, make sure to create an action item to document it!
- The goal should be that we leave some comment on every issue
After meeting
- Export the meeting file to markdown
- you will need to clean up "check boxes" -- Niko usually searches and replaces ^(\s*)[ ] with \1* [ ] or something like that to insert a * before them, which makes them valid markdown (see the sketch after this list)
- Upload video to youtube if applicable and get the URL
- Add the file to the minutes directory of rust-lang/lang-team repository
with a file name like
YYYY-MM-DD.md
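As a sketch, that checkbox cleanup can be done with a one-liner like the following (the file name is hypothetical; any editor's regex search-and-replace works just as well):

```sh
# Prefix markdown check boxes with `* ` so they become valid list items.
sed -E -i 's/^([[:space:]]*)\[([ xX])\]/\1* [\2]/' 2020-01-01.md
```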
Libs
This section documents meta processes by the Libs team.
Where to find us
The libs team hangs out primarily in the rust-lang Zulip these days in the #t-libs
stream.
You can also find out more details about Zulip and how the Rust community uses it.
Maintaining the standard library
Everything I wish I knew before somebody gave me
r+
This document is an effort to capture some of the context needed to develop and maintain the Rust standard library. Its goal is to help members of the Libs team share the process and experience they bring to working on the standard library so other members can benefit. It'll probably accumulate a lot of trivia that might also be interesting to members of the wider Rust community.
This document doesn't attempt to discuss best practices or good style. For that, see the API Guidelines.
Contributing
If you spot anything that is outdated, under specified, missing, or just plain incorrect then feel free to open up a PR on the rust-lang/rust-forge
repository!
Terms
- Libs. That's us! The team responsible for development and maintenance of the standard library (among other things).
- Pull request (PR). A regular GitHub pull request against
rust-lang/rust
. - Request for Comment (RFC). A formal document created in
rust-lang/rfcs
that introduces new features. - Tracking Issue. A regular issue on GitHub that’s tagged with
C-tracking-issue
. - Final Comment Period (FCP). Coordinated by
rfcbot
that gives relevant teams a chance to review RFCs and PRs.
If you’re ever unsure…
Maintaining the standard library can feel like a daunting responsibility! Through highfive
, the automated reviewer assignment, you’ll find yourself dropped into a lot of new contexts.
Ping the @rust-lang/libs
team on GitHub anytime. We’re all here to help!
If you don’t think you’re the best person to review a PR then use highfive
to assign it to somebody else.
Finding reviews waiting for your input
Please remember to regularly check https://rfcbot.rs/. Click on any occurrence of your nickname to go to a page like https://rfcbot.rs/fcp/SimonSapin that only shows the reviews that are waiting for your input.
Reviewing PRs
As a member of the Libs team you’ll find yourself assigned to PRs that need reviewing, and your input requested on issues in the Rust project.
When is an RFC needed?
New unstable features don't need an RFC before they can be merged. If the feature is small, and the design space is straightforward, stabilizing it usually only requires the feature to go through FCP. Sometimes however, you may ask for an RFC before stabilizing.
Is there any unsafe
?
Unsafe code blocks in the standard library need a comment explaining why they're ok. There's a tidy
lint that checks this. The unsafe code also needs to actually be ok.
The rules around what's sound and what's not can be subtle. See the Unsafe Code Guidelines WG for current thinking, and consider pinging @rust-lang/libs
, @rust-lang/lang
, and/or somebody from the WG if you're in any doubt. We love debating the soundness of unsafe code, and the more eyes on it the better!
Is that #[inline]
right?
Inlining is a trade-off between potential execution speed, compile time and code size.
You should add #[inline]
:
- To public, small, non-generic functions.
You shouldn’t need #[inline]
:
- On methods that have any generics in scope.
- On methods on traits that don’t have a default implementation.
What about #[inline(always)]
?
You should just about never need #[inline(always)]
. It may be beneficial for private helper methods that are used in a limited number of places or for trivial operators. A micro benchmark should justify the attribute.
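As a sketch of the first guideline above (hypothetical function, not from the standard library):

```rust
// Public, small and non-generic: without `#[inline]` this could not be
// inlined across crate boundaries (generic functions don't need the
// attribute because they are instantiated in the calling crate anyway).
#[inline]
pub fn nanos_to_millis(nanos: u64) -> u64 {
    nanos / 1_000_000
}
```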
Is there any potential breakage?
Breaking changes should be avoided when possible. RFC 1105 lays the foundations for what constitutes a breaking change. Breakage may be deemed acceptable or not based on its actual impact, which can be approximated with a crater
run.
There are strategies for mitigating breakage depending on the impact.
For changes where the value is high and the impact is high too:
- Using compiler lints to try to phase out broken behavior.
If the impact isn't too high:
- Looping in maintainers of broken crates and submitting PRs to fix them.
Are there new impls for stable traits?
A lot of PRs to the standard library are adding new impls for already stable traits, which can break consumers in many weird and wonderful ways. The following sections give some examples of breakage from new trait impls that may not be obvious just from the change made to the standard library.
Inference breaks when a second generic impl is introduced
Rust will use the fact that there's only a single impl for a generic trait during inference. This breaks once a second impl makes the type of that generic ambiguous. Say we have:
```rust
// in `std`
impl From<&str> for Arc<str> { .. }
```
```rust
// in an external `lib`
let b = Arc::from("a");
```
then we add:
impl From<&str> for Arc<str> { .. }
+ impl From<&str> for Arc<String> { .. }
then
```rust
let b = Arc::from("a");
```
will no longer compile, because we've previously been relying on inference to figure out the T in Arc<T>.
This kind of breakage can be ok, but a crater
run should estimate the scope.
Deref coercion breaks when a new impl is introduced
Rust will use deref coercion to find a valid trait impl if the arguments don't type check directly. This only seems to occur if there's a single impl, so introducing a new one may break consumers relying on deref coercion. Say we have:
```rust
// in `std`
impl Add<&str> for String { .. }

impl Deref for String { type Target = str; .. }
```
```rust
// in an external `lib`
let a = String::from("a");
let b = String::from("b");
let c = a + &b;
```
then we add:
impl Add<&str> for String { .. }
+ impl Add<char> for String { .. }
then
```rust
let c = a + &b;
```
will no longer compile, because we won't attempt to use deref to coerce the &String
into &str
.
This kind of breakage can be ok, but a crater
run should estimate the scope.
Could an implementation use existing functionality?
Types like String are implemented in terms of Vec<u8> and can use methods on str through deref coercion. Vec<T> can use methods on [T] through deref coercion. When possible, methods on a wrapping type like String should defer to methods that already exist on their underlying storage or deref target.
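A minimal sketch of the pattern (MyString is a hypothetical stand-in, not real standard library code):

```rust
// Hypothetical wrapper type, mirroring how `String` wraps `Vec<u8>`.
pub struct MyString {
    vec: Vec<u8>,
}

impl MyString {
    /// Defers to the method that already exists on the underlying storage.
    pub fn len(&self) -> usize {
        self.vec.len()
    }

    /// Defers to the deref target (`str`) for string-specific behaviour.
    pub fn is_char_boundary(&self, index: usize) -> bool {
        self.as_str().is_char_boundary(index)
    }

    fn as_str(&self) -> &str {
        // Assumes `vec` always holds valid UTF-8, as `String` does.
        std::str::from_utf8(&self.vec).unwrap()
    }
}
```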
Are there #[fundamental]
items involved?
Blanket trait impls can't be added to #[fundamental]
types because they have different coherence rules. See RFC 1023 for details. That includes:
&T
&mut T
Box<T>
Pin<T>
Is specialization involved?
NOTE(2019-02-10): Due to recent soundness holes introduced by specialization in the standard library (cf. #68358 and #67194), the language team decided in a design meeting to place a moratorium on new uses of specialization until we have some checks in place ensuring soundness for internal uses.
We try to avoid leaning on specialization too heavily, limiting its use to optimizing specific implementations. These specialized optimizations use a private trait to find the correct implementation, rather than specializing the public method itself. Any use of specialization that changes how methods are dispatched for external callers should be carefully considered.
Are there public enums?
Public enums should have a #[non_exhaustive]
attribute if there's any possibility of new variants being introduced, so that they can be added without causing breakage.
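For example (hypothetical enum, not from the standard library):

```rust
// Downstream crates must add a wildcard arm when matching on this enum,
// so introducing a new variant later is not a breaking change.
#[non_exhaustive]
pub enum FetchError {
    NotFound,
    PermissionDenied,
}
```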
Does this change drop order?
Changes to collection internals may affect the order their items are dropped in. This has been accepted in the past, but should be noted.
How could mem
break assumptions?
mem::replace
and mem::swap
Any value behind a &mut
reference can be replaced with a new one using mem::replace
or mem::swap
, so code shouldn't assume any reachable mutable references can't have their internals changed by replacing.
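An illustrative example of what safe code can always do with a &mut reference:

```rust
use std::mem;

// Safe code can move the old value out of any `&mut` slot, leaving a new
// one behind, so invariants must not assume a reachable value stays put.
fn take_buffer(slot: &mut Vec<u8>) -> Vec<u8> {
    mem::replace(slot, Vec::new())
}
```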
mem::forget
Rust doesn't guarantee destructors will run when a value is leaked (which can be done with mem::forget
), so code should avoid relying on them for maintaining safety. Remember, everyone poops.
It's ok not to run a destructor when a value is leaked because its storage isn't deallocated or repurposed. If the storage is initialized and is being deallocated or repurposed then destructors need to be run first, because memory may be pinned. Having said that, there can still be exceptions for skipping destructors when deallocating if you can guarantee there's never pinning involved.
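A minimal illustration of why soundness can't rely on destructors running:

```rust
use std::mem;

fn main() {
    let buffer = vec![0u8; 1024];
    // Safe code can leak any value: its `Drop` impl never runs and the
    // storage is never deallocated or repurposed.
    mem::forget(buffer);
}
```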
How is performance impacted?
Changes to hot code might impact performance in consumers, for better or for worse. Appropriate benchmarks should give an idea of how performance characteristics change. For changes that affect rustc
itself, you can also do a rust-timer
run.
Is the commit log tidy?
PRs shouldn’t have merge commits in them. If they become out of date with master
then they need to be rebased.
Merging PRs
PRs to rust-lang/rust
aren’t merged manually using GitHub’s UI or by pushing remote branches. Everything goes through bors
.
When to rollup
For Libs PRs, rolling up is usually fine, in particular if it's only a new unstable addition or if it only touches docs (with the exception of intra doc links which complicates things while the feature has bugs...).
If a submodule is affected then probably don't rollup
. If the feature affects perf then also avoid rollup
-- mark it as rollup=never
.
When there are new public items
If the feature is new, then a tracking issue should be opened for it. Have a look at some previous tracking issues to get an idea of what needs to go in there. The issue
field on #[unstable]
attributes should be updated with the tracking issue number.
Unstable features can be merged as normal through bors
once they look ready.
When there are new trait impls
There's no way to make a trait impl for a stable trait unstable, so any PRs that add new impls for already stable traits must go through an FCP before merging. If the trait itself is unstable though, then the impl needs to be unstable too.
When a feature is being stabilized
Features can be stabilized in a PR that replaces #[unstable]
attributes with #[stable]
ones. The feature needs to have an accepted RFC before stabilizing, and it also needs to go through an FCP before merging.
You can find the right version to use in the #[stable]
attribute by checking the Forge.
When a const
function is being stabilized
Const functions can be stabilized in a PR that replaces #[rustc_const_unstable]
attributes with #[rustc_const_stable]
ones. The Constant Evaluation WG should be pinged for input on whether or not the const
-ness is something we want to commit to. If it is an intrinsic being exposed that is const-stabilized then @rust-lang/lang
should also be included in the FCP.
Check whether the function internally depends on other unstable const
functions through #[allow_internal_unstable]
attributes and consider how the function could be implemented if its internally unstable calls were removed. See the Stability attributes page for more details on #[allow_internal_unstable]
.
Where unsafe and const are involved, e.g., for operations which are "unconst", the const safety argument for the usage should also be documented. That is, a const fn has additional determinism restrictions (e.g. run-time and compile-time results must correspond, and the function's output may only depend on its inputs) that must be preserved, and those should be argued for when unsafe is used.
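As a rough sketch of what such an argument can look like (hypothetical function; this assumes a toolchain where u8::unchecked_add is a const unsafe fn):

```rust
// The comment documents both the memory safety and the const safety
// arguments for the `unsafe` block.
pub const unsafe fn add_unchecked(a: u8, b: u8) -> u8 {
    // SAFETY: the caller guarantees that `a + b` does not overflow.
    // CONST SAFETY: the result depends only on `a` and `b`, so the
    // compile-time and run-time results always correspond.
    unsafe { a.unchecked_add(b) }
}
```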
Release
This section documents the process around creating a new release of the compiler and tools, as well as information on The Rust Programming Language's platform support.
External Links
- The Homu/Bors page provides links to the pull request testing queues for the
rust-lang
GitHub organisation, as well as providing an overview of the bot's syntax you can use to interact with it. - Rustup Component History documents when a component was last available (if it was available) for a specific platform on nightly.
- PR Tracking provides visualisations of pull requests made to the
rust-lang/rust
repository. - kennytm's
rustup-toolchain-install-master
is a utility to install the latest generated artifacts from CI intorustup
.
Beta Backporting
There's a steady trickle of patches that need to be ported to the beta branch. Only a few people are even aware of the process, but this is actually something anybody can do.
Backporting in rust-lang/rust
When somebody identifies a PR that should be backported to beta they tag it
beta-nominated
.
That means they want one of the teams to evaluate whether the patch should be
backported. I also suggest applying the I-nominated
and a T-
(team) tag
as appropriate: that'll really get their attention. Anybody with triage access
is free to make these tags. Backports are mostly done to fix regressions. If the
team thinks it should be backported they'll then additionally tag it
beta-accepted
.
At that point the PR is ready to be backported. So the list of patches ready for
a backport is those tagged
both beta-nominated
and beta-accepted
.
So now somebody needs to go through those PR's and cherry-pick their commits to
the beta branch. Those cherry-picks are then submitted as a PR against the
beta branch, with a title starting with [beta]
(so reviewers can see its
specialness). The OP of that PR should contain links to all the PRs being
backported. Here's an example.
Anybody can make these PRs!
After that a reviewer needs to verify that the backport looks correct, that it's
submitted to the beta branch, and then approve via @bors: r+
. Finally, they
need to follow the links to the original PRs and remove the beta-nominated
tag (people forget to do this a lot). Removing the tag indicates that the backport has been completed, so together the beta-nominated and beta-accepted tags effectively track three states: nominated, accepted, and completed.
Backporting in rust-lang/cargo
The procedure for backporting fixes to Cargo is similar but slightly more
extended than the rust-lang/rust
repo's procedure. Currently there aren't
backport tags in the Cargo repository, but you'll initiate the backport process
by commenting on an associated PR, requesting a backport. Once a Cargo team
member has approved the backport to happen you're good to start sending PRs!
-
First you'll send a PR to the
rust-1.21.0
branch of Cargo (replace 1.21 with the current rustc beta version number). Like withrust-lang/rust
you'll prefix the title of your PR with[beta]
and ensure it's flagged as going to beta. -
Next a Cargo reviewer will
@bors: r+
the PR and put it into the queue. Eventually bors will automatically merge the PR (when tests are passing) to the appropriate Cargo branch. -
Finally you'll send a PR to the
rust-lang/rust
repository'sbeta
branch, updating the Cargo submodule. The Cargo submodule should be updated to the tip of therust-1.21.0
branch (the branch your Cargo PR was merged to). As like before, ensure you've got[beta]
in the PR title.
After that's all said and done the Cargo change is ready to get scheduled onto the beta release!
Rust Platform Support
The Rust compiler runs on, and compiles to, a great number of platforms, though not all platforms are equally supported. Rust's support levels are organized into three tiers, each with a different set of guarantees.
Platforms are identified by their "target triple" which is the string to inform the compiler what kind of output should be produced. The columns below indicate whether the corresponding component works on the specified platform.
Tier 1
Tier 1 platforms can be thought of as "guaranteed to work". Specifically they will each satisfy the following requirements:
- Official binary releases are provided for the platform.
- Automated testing is set up to run tests for the platform.
- Landing changes to the
rust-lang/rust
repository's master branch is gated on tests passing. - Documentation for how to use and how to build the platform is available.
target | std | rustc | cargo | notes |
---|---|---|---|---|
i686-pc-windows-gnu | ✓ | ✓ | ✓ | 32-bit MinGW (Windows 7+) |
i686-pc-windows-msvc | ✓ | ✓ | ✓ | 32-bit MSVC (Windows 7+) |
i686-unknown-linux-gnu | ✓ | ✓ | ✓ | 32-bit Linux (2.6.18+) |
x86_64-apple-darwin | ✓ | ✓ | ✓ | 64-bit OSX (10.7+, Lion+) |
x86_64-pc-windows-gnu | ✓ | ✓ | ✓ | 64-bit MinGW (Windows 7+) |
x86_64-pc-windows-msvc | ✓ | ✓ | ✓ | 64-bit MSVC (Windows 7+) |
x86_64-unknown-linux-gnu | ✓ | ✓ | ✓ | 64-bit Linux (2.6.18+) |
Tier 2
Tier 2 platforms can be thought of as "guaranteed to build". Automated tests are not run so it's not guaranteed to produce a working build, but platforms often work to quite a good degree and patches are always welcome! Specifically, these platforms are required to have each of the following:
- Official binary releases are provided for the platform.
- Automated building is set up, but may not be running tests.
- Landing changes to the
rust-lang/rust
repository's master branch is gated on platforms building. For some platforms only the standard library is compiled, but for othersrustc
andcargo
are too.
target | std | rustc | cargo | notes |
---|---|---|---|---|
aarch64-apple-ios | ✓ | | | ARM64 iOS |
aarch64-fuchsia | ✓ | | | ARM64 Fuchsia |
aarch64-linux-android | ✓ | | | ARM64 Android |
aarch64-pc-windows-msvc | ✓ | | | ARM64 Windows MSVC |
aarch64-unknown-linux-gnu | ✓ | ✓ | ✓ | ARM64 Linux |
aarch64-unknown-linux-musl | ✓ | | | ARM64 Linux with MUSL |
aarch64-unknown-none | * | | | Bare ARM64, hardfloat |
aarch64-unknown-none-softfloat | * | | | Bare ARM64, softfloat |
arm-linux-androideabi | ✓ | | | ARMv7 Android |
arm-unknown-linux-gnueabi | ✓ | ✓ | ✓ | ARMv6 Linux |
arm-unknown-linux-gnueabihf | ✓ | ✓ | ✓ | ARMv6 Linux, hardfloat |
arm-unknown-linux-musleabi | ✓ | | | ARMv6 Linux with MUSL |
arm-unknown-linux-musleabihf | ✓ | | | ARMv6 Linux with MUSL, hardfloat |
armebv7r-none-eabi | * | | | Bare ARMv7-R, Big Endian |
armebv7r-none-eabihf | * | | | Bare ARMv7-R, Big Endian, hardfloat |
armv5te-unknown-linux-gnueabi | ✓ | | | ARMv5TE Linux |
armv5te-unknown-linux-musleabi | ✓ | | | ARMv5TE Linux with MUSL |
armv7-linux-androideabi | ✓ | | | ARMv7a Android |
armv7a-none-eabi | * | | | Bare ARMv7-A |
armv7r-none-eabi | * | | | Bare ARMv7-R |
armv7r-none-eabihf | * | | | Bare ARMv7-R, hardfloat |
armv7-unknown-linux-gnueabi | ✓ | | | ARMv7 Linux, glibc |
armv7-unknown-linux-gnueabihf | ✓ | ✓ | ✓ | ARMv7 Linux, hardfloat |
armv7-unknown-linux-musleabi | ✓ | | | ARMv7 Linux, MUSL |
armv7-unknown-linux-musleabihf | ✓ | | | ARMv7 Linux with MUSL |
asmjs-unknown-emscripten | ✓ | | | asm.js via Emscripten |
i586-pc-windows-msvc | ✓ | | | 32-bit Windows w/o SSE |
i586-unknown-linux-gnu | ✓ | | | 32-bit Linux w/o SSE |
i586-unknown-linux-musl | ✓ | | | 32-bit Linux w/o SSE, MUSL |
i686-linux-android | ✓ | | | 32-bit x86 Android |
i686-unknown-freebsd | ✓ | ✓ | ✓ | 32-bit FreeBSD |
i686-unknown-linux-musl | ✓ | | | 32-bit Linux with MUSL |
mips-unknown-linux-gnu | ✓ | ✓ | ✓ | MIPS Linux |
mips-unknown-linux-musl | ✓ | | | MIPS Linux with MUSL |
mips64-unknown-linux-gnuabi64 | ✓ | ✓ | ✓ | MIPS64 Linux, n64 ABI |
mips64-unknown-linux-muslabi64 | ✓ | | | MIPS64 Linux, n64 ABI, MUSL |
mips64el-unknown-linux-gnuabi64 | ✓ | ✓ | ✓ | MIPS64 (LE) Linux, n64 ABI |
mips64el-unknown-linux-muslabi64 | ✓ | | | MIPS64 (LE) Linux, n64 ABI, MUSL |
mipsel-unknown-linux-gnu | ✓ | ✓ | ✓ | MIPS (LE) Linux |
mipsel-unknown-linux-musl | ✓ | | | MIPS (LE) Linux with MUSL |
nvptx64-nvidia-cuda | ✓ | | | --emit=asm generates PTX code that runs on NVIDIA GPUs |
powerpc-unknown-linux-gnu | ✓ | ✓ | ✓ | PowerPC Linux |
powerpc64-unknown-linux-gnu | ✓ | ✓ | ✓ | PPC64 Linux |
powerpc64le-unknown-linux-gnu | ✓ | ✓ | ✓ | PPC64LE Linux |
riscv32i-unknown-none-elf | * | | | Bare RISC-V (RV32I ISA) |
riscv32imac-unknown-none-elf | * | | | Bare RISC-V (RV32IMAC ISA) |
riscv32imc-unknown-none-elf | * | | | Bare RISC-V (RV32IMC ISA) |
riscv64gc-unknown-linux-gnu | ✓ | ✓ | ✓ | RISC-V Linux |
riscv64gc-unknown-none-elf | * | | | Bare RISC-V (RV64IMAFDC ISA) |
riscv64imac-unknown-none-elf | * | | | Bare RISC-V (RV64IMAC ISA) |
s390x-unknown-linux-gnu | ✓ | ✓ | ✓ | S390x Linux |
sparc64-unknown-linux-gnu | ✓ | | | SPARC Linux |
sparcv9-sun-solaris | ✓ | | | SPARC Solaris 10/11, illumos |
thumbv6m-none-eabi | * | | | Bare Cortex-M0, M0+, M1 |
thumbv7em-none-eabi | * | | | Bare Cortex-M4, M7 |
thumbv7em-none-eabihf | * | | | Bare Cortex-M4F, M7F, FPU, hardfloat |
thumbv7m-none-eabi | * | | | Bare Cortex-M3 |
thumbv7neon-linux-androideabi | ✓ | | | Thumb2-mode ARMv7a Android with NEON |
thumbv7neon-unknown-linux-gnueabihf | ✓ | | | Thumb2-mode ARMv7a Linux with NEON |
thumbv8m.base-none-eabi | * | | | ARMv8-M Baseline |
thumbv8m.main-none-eabi | * | | | ARMv8-M Mainline |
thumbv8m.main-none-eabihf | * | | | ARMv8-M Mainline, hardfloat |
wasm32-unknown-emscripten | ✓ | | | WebAssembly via Emscripten |
wasm32-unknown-unknown | ✓ | | | WebAssembly |
wasm32-wasi | ✓ | | | WebAssembly with WASI |
x86_64-apple-ios | ✓ | | | 64-bit x86 iOS |
x86_64-fortanix-unknown-sgx | ✓ | | | Fortanix ABI for 64-bit Intel SGX |
x86_64-fuchsia | ✓ | | | 64-bit Fuchsia |
x86_64-linux-android | ✓ | | | 64-bit x86 Android |
x86_64-rumprun-netbsd | ✓ | | | 64-bit NetBSD Rump Kernel |
x86_64-sun-solaris | ✓ | | | 64-bit Solaris 10/11, illumos |
x86_64-unknown-cloudabi | ✓ | | | 64-bit CloudABI |
x86_64-unknown-freebsd | ✓ | ✓ | ✓ | 64-bit FreeBSD |
x86_64-unknown-linux-gnux32 | ✓ | | | 64-bit Linux (x32 ABI) |
x86_64-unknown-linux-musl | ✓ | ✓ | ✓ | 64-bit Linux with MUSL |
x86_64-unknown-netbsd | ✓ | ✓ | ✓ | NetBSD/amd64 |
x86_64-unknown-redox | ✓ | | | Redox OS |
Tier 3
Tier 3 platforms are those which the Rust codebase has support for, but which are not built or tested automatically, and may not work. Official builds are not available.
target | std | rustc | cargo | notes |
---|---|---|---|---|
aarch64-apple-tvos | ** | | | ARM64 tvOS |
aarch64-unknown-cloudabi | ✓ | | | ARM64 CloudABI |
aarch64-unknown-freebsd | ✓ | ✓ | ✓ | ARM64 FreeBSD |
aarch64-unknown-hermit | ? | | | |
aarch64-unknown-netbsd | ? | | | |
aarch64-unknown-openbsd | ✓ | ✓ | ✓ | ARM64 OpenBSD |
aarch64-unknown-redox | ? | | | ARM64 Redox OS |
aarch64-uwp-windows-msvc | ? | | | |
aarch64-wrs-vxworks | ? | | | |
armv4t-unknown-linux-gnueabi | ? | | | |
armv6-unknown-freebsd | ✓ | ✓ | ✓ | ARMv6 FreeBSD |
armv6-unknown-netbsd-eabihf | ? | | | |
armv7-apple-ios | ✓ | | | ARMv7 iOS, Cortex- |
armv7-unknown-cloudabi-eabihf | ✓ | | | ARMv7 CloudABI, hardfloat |
armv7-unknown-freebsd | ✓ | ✓ | ✓ | ARMv7 FreeBSD |
armv7-unknown-netbsd-eabihf | ? | | | |
armv7-wrs-vxworks-eabihf | ? | | | |
armv7a-none-eabihf | * | | | ARM Cortex-A, hardfloat |
armv7s-apple-ios | ✓ | | | |
avr-unknown-unknown | ? | | | AVR |
hexagon-unknown-linux-musl | ? | | | |
i386-apple-ios | ✓ | | | 32-bit x86 iOS |
i686-apple-darwin | ✓ | ✓ | ✓ | 32-bit OSX (10.7+, Lion+) |
i686-pc-windows-msvc | ✓ | | | 32-bit Windows XP support |
i686-unknown-cloudabi | ✓ | | | 32-bit CloudABI |
i686-unknown-uefi | ? | | | 32-bit UEFI |
i686-unknown-haiku | ✓ | ✓ | ✓ | 32-bit Haiku |
i686-unknown-netbsd | ✓ | | | NetBSD/i386 with SSE2 |
i686-unknown-openbsd | ✓ | ✓ | ✓ | 32-bit OpenBSD |
i686-uwp-windows-gnu | ? | | | |
i686-uwp-windows-msvc | ? | | | |
i686-wrs-vxworks | ? | | | |
mips-unknown-linux-uclibc | ✓ | | | MIPS Linux with uClibc |
mipsel-unknown-linux-uclibc | ✓ | | | MIPS (LE) Linux with uClibc |
mipsel-sony-psp | ** | | | MIPS (LE) Sony PlayStation Portable (PSP) |
mipsisa32r6-unknown-linux-gnu | ? | | | |
mipsisa32r6el-unknown-linux-gnu | ? | | | |
mipsisa64r6-unknown-linux-gnuabi64 | ? | | | |
mipsisa64r6el-unknown-linux-gnuabi64 | ? | | | |
msp430-none-elf | * | | | 16-bit MSP430 microcontrollers |
powerpc-unknown-linux-gnuspe | ✓ | | | PowerPC SPE Linux |
powerpc-unknown-linux-musl | ? | | | |
powerpc-unknown-netbsd | ? | | | |
powerpc-wrs-vxworks | ? | | | |
powerpc-wrs-vxworks-spe | ? | | | |
powerpc64-unknown-freebsd | ✓ | ✓ | ✓ | PPC64 FreeBSD (ELFv1 and ELFv2) |
powerpc64-unknown-linux-musl | ? | | | |
powerpc64-wrs-vxworks | ? | | | |
powerpc64le-unknown-linux-musl | ? | | | |
sparc-unknown-linux-gnu | ✓ | | | 32-bit SPARC Linux |
sparc64-unknown-netbsd | ✓ | ✓ | | NetBSD/sparc64 |
sparc64-unknown-openbsd | ? | | | |
thumbv7a-pc-windows-msvc | ? | | | |
thumbv7a-uwp-windows-msvc | ✓ | | | |
thumbv7neon-unknown-linux-musleabihf | ? | | | Thumb2-mode ARMv7a Linux with NEON, MUSL |
x86_64-apple-ios-macabi | ✓ | | | Apple Catalyst |
x86_64-apple-tvos | ** | | | x86 64-bit tvOS |
x86_64-linux-kernel | ? | | | Linux kernel modules |
x86_64-pc-solaris | ? | | | |
x86_64-pc-windows-msvc | ✓ | | | 64-bit Windows XP support |
x86_64-unknown-dragonfly | ✓ | ✓ | ✓ | 64-bit DragonFlyBSD |
x86_64-unknown-haiku | ✓ | ✓ | ✓ | 64-bit Haiku |
x86_64-unknown-hermit | ? | | | |
x86_64-unknown-hermit-kernel | ? | | | HermitCore kernel |
x86_64-unknown-illumos | ✓ | | | illumos |
x86_64-unknown-l4re-uclibc | ? | | | |
x86_64-unknown-openbsd | ✓ | ✓ | ✓ | 64-bit OpenBSD |
x86_64-unknown-uefi | ? | | | |
x86_64-uwp-windows-gnu | ✓ | | | |
x86_64-uwp-windows-msvc | ✓ | | | |
x86_64-wrs-vxworks | ? | | | |
* These targets only support core, not alloc or std.
** These targets only support core or alloc, not std.
? These are targets that haven't yet been documented here. If you can shed some light on these platforms' support, please create an issue or PR on the Rust Forge repo.
But those aren't the only platforms Rust can compile to! Those are the ones with built-in target definitions and/or standard library support. When linking only to the core library, Rust can also target additional "bare metal" platforms in the x86, ARM, MIPS, and PowerPC families, though it may require defining custom target specifications to do so.
Preparing Release Notes
The release notes for the next release should be compiled at the beginning of the beta cycle, 6 weeks ahead of the release.
Clone the relnotes utility. This program pulls all pull requests made against
rust-lang/rust
and rust-lang/cargo
within the latest release cycle and
prints out a markdown document containing all the pull requests, categorised
into their respective sections where possible, and prints the document to
stdout
.
Only pull requests that impact stable users of Rust should be included. Generally, more exciting items go toward the top of sections. Most items are simply links to the PR that landed them; some that need more explanation have additional, unlinked text; anything supported by an RFC has an additional RFC link. Reuse the PR titles or write descriptions as needed for clarity.
Try to keep the language of the document independent of en-US or en-UK; when it can't be avoided, defer to en-US grammar and syntax.
The Rust Release Process
Here's how Rust is currently released:
Promote beta to stable (T-3 days, Monday)
Promote beta to stable. Temporarily turn off GitHub branch protection for the
stable
branch in rust-lang/rust repo. In your local Rust repo:
$ git fetch origin
$ git push origin origin/beta:stable -f
# make sure that the release notes file is as fresh as possible
$ git checkout origin/master -- RELEASES.md
Re-enable branch protection for the stable
branch. Send a PR to rust-lang/rust
on the stable branch making the following changes:
- Update
src/ci/run.sh
to passchannel=stable
, notchannel=beta
.
Once the PR is sent, r+ it and give it a high priority with p=1000.
.
The stable build will not deploy automatically to prod. The rust-central-station repository is configured to upload to dev every hour if it detects a change. You should be able to browse changes in dev.
As soon as this build is done post a message to irlo asking for testing. The index is https://dev-static-rust-lang-org.s3.amazonaws.com/dist/2015-09-17/index.html and our URL is then https://dev-static.rust-lang.org/dist/2015-09-17/index.html.
Test rustup with
RUSTUP_DIST_SERVER=https://dev-static.rust-lang.org rustup update stable
If something goes wrong, and we rebuild stable artifacts, you'll need to invalidate the dev-static bucket for RCS to re-release it.
- Download https://dev-static.rust-lang.org/dist/channel-rust-1.35.0.toml The version number must be less than the current release, but otherwise doesn't matter.
- Rename the file locally to channel-rust-stable.toml
- Upload the file to the dev-static bucket into the dist folder, replacing channel-rust-stable.toml.
- Go to CloudFront in AWS, to the dev-static bucket, and issue an invalidation for "/dist/channel-rust-stable.toml". This is necessary until https://github.com/rust-lang/rust-central-station/issues/49 is fixed.
- (optional) login to central station, and run the following. This starts the dev-static promotion immediately, vs. waiting till the next hour.
docker exec -d -it rcs bash -c 'promote-release /tmp/stable stable /data/secrets-dev.toml 2>&1 | logger --tag release-stable'
Promote master to beta (T-2 days, Tuesday)
Create a new branch on rust-lang/cargo
for the new beta. Here, rust-lang
is
the remote for https://github.com/rust-lang/rust.git. Replace YY
with the
minor version of master. First determine the branch point for cargo in
rust-lang/rust
, and then create a new branch:
$ cd rust
$ git fetch rust-lang
$ CARGO_SHA=`git rev-parse rust-lang/master:src/tools/cargo`
$ cd src/tools/cargo
$ git branch rust-1.YY.0 $CARGO_SHA
$ git push origin rust-1.YY.0
You'll need to temporarily disable branch protection on GitHub to push the new branch.
In theory one day we'll do the same for rust-lang/rls, but for now we haven't done this yet.
Temporarily disable branch protection on GitHub for the beta branch of the Rust repo. Promote rust-lang/rust's master branch to beta, as you did with stable yesterday:
$ git fetch rust-lang
$ git push rust-lang rust-lang/master:beta -f
Re-enable branch protection on GitHub. Send a PR to the freshly created beta branch of rust-lang/rust which:
- Update src/stage0.txt
- Change
date
to "YYYY-MM-DD" where the date is the archive date the stable build was uploaded - Change
rustc
to "X.Y.Z" where that's the version of rustc you just build - Change
cargo
to "A.B.C" where it's Cargo's version. That's typically "0.(Y+1).0" wrt the rustc version. - Comment
rustfmt: nightly-YYYY-MM-DD
- Uncomment
dev: 1
- Change
- Update src/ci/run.sh to pass "--release-channel=beta".
Note that you probably don't want to update the RLS if it's working, but if it's not working beta won't land and it'll need to get updated. After this PR merges (through @bors) the beta should be automatically released.
Master bootstrap update (T-1 day, Wednesday)
Write a new blog post, update rust-www, and update rust-forge. Submit PRs for tomorrow.
Send a PR to the master branch to:
- modify src/stage0.txt to bootstrap from yesterday's beta
- modify src/bootstrap/channel.rs with the new version number
Release day (Thursday)
Decide on a time to do the release, T.
-
T-30m - This is on rust-central-station:
docker exec -d -it rcs bash -c 'promote-release /tmp/stable stable /data/secrets.toml 2>&1 | logger --tag release-stable-realz'
That'll, in the background, schedule the
promote-release
binary to run on the production secrets (not the dev secrets). That'll sign everything, upload it, update the html index pages, and invalidate the CDN. Note that this takes about 30 minutes right now. Logs are in/opt/rcs/logs
. -
T-10m - Locally, tag the new release and upload it. Use "x.y.z release" as the commit message.
$ git tag -u FA1BE5FE 1.3.0 $COMMIT_SHA
$ git push rust-lang 1.3.0
After this, update thanks.rust-lang.org by triggering a build on Travis.
-
T-5m - Merge blog post.
-
T - Tweet and post everything!
- Twitter @rustlang
- Reddit /r/rust
- Hacker News
- Users forum
-
T+5m - Tag Cargo the same way as rust-lang/rust and then run
cargo publish
for the tag you just created. You'll first need to comment outcargo-test-macro
from Cargo.toml, then publishcrates-io
(incrates/crates-io
) and finally publishcargo
itself.To publish Cargo you may have to bump the version numbers for the crates-io and Cargo crates; there's no need to do that in a formal commit though, so your tag and the published code may differentiate in that way.
-
T+1hr Send a PR to the beta branch to comment out
dev: 1
again and update the date to download from (modifyingsrc/stage0.txt
).
Bask in your success.
Update dependencies (T+1 day, Friday)
In the repo:
$ cd src
$ cargo update
The very ambitious can use https://crates.io/crates/cargo-outdated and update through breaking changes.
Publishing a nightly based off a try build
Sometimes a PR requires testing how it behaves when downloaded from rustup, for example after a manifest change. In those cases it's possible to publish a new nightly based off that PR on dev-static.rust-lang.org.
Once the try build finishes make sure the merge commit for your PR is at the
top of the try
branch, log into the rust-central-station server
and run this command:
docker exec -d -it rcs bash -c 'PROMOTE_RELEASE_OVERRIDE_BRANCH=try promote-release /tmp/nightly-tmp nightly /data/secrets-dev.toml 2>&1 | logger --tag release-nightly-tmp'
If the try
branch doesn't contain the merge commit (because a new build
started in the meantime) you can create a new branch pointing to the merge
commit and run (replacing BRANCH_NAME
with the name of the branch):
docker exec -d -it rcs bash -c 'PROMOTE_RELEASE_OVERRIDE_BRANCH=BRANCH_NAME promote-release /tmp/nightly-tmp nightly /data/secrets-dev.toml 2>&1 | logger --tag release-nightly-tmp'
You can follow the build progress with:
sudo tail -n 1000 -f /var/log/syslog | grep release-nightly-tmp
Once the build ends it's possible to install the new nightly with:
RUSTUP_DIST_SERVER=https://dev-static.rust-lang.org rustup toolchain install nightly
Rollup Procedure
Background
The Rust project has a policy that every pull request must be tested after merge before it can be pushed to master. As PR volume increases this can scale poorly, especially given the long (~3.5hr) current CI duration for Rust.
Enter rollups! Changes that are small, not performance sensitive, or not platform
dependent are marked with the rollup
command to bors (@bors r+ rollup
to
approve a PR and mark as a rollup, @bors rollup
to mark a previously approved
PR, @bors rollup-
to un-mark as a rollup). 'Performing a Rollup' then means
collecting these changes into one PR and merging them all at once. The rollup
command accepts three values: always
, maybe
, and never
. @bors rollup
is
equivalent to rollup=always
(which will indicate that a PR should always be
included in a rollup), and @bors rollup-
is equivalent to @bors rollup=maybe
which is used to indicate that someone should try to roll up the PR. rollup=never
indicates that a PR should never be included in a rollup; this should generally
only be used for PRs which are large additions or changes which could cause
breakage or large perf changes.
You can see the list of rollup PRs on Rust's Homu queue, they are listed at the bottom of the 'approved' queue with a priority of 'rollup' meaning they will not be merged by themselves until everything in front of them in the queue has been merged.
Making a Rollup
-
Using the interface on Homu queue, select a few pull requests and then use the "rollup" button to make one. (The text about fairness can be ignored.)
-
Run the following command in the pull request thread:
@rustbot modify labels: +rollup @bors r+ rollup=never p=<NUMBER_OF_PRS_IN_ROLLUP>
-
If the rollup fails, use the logs provided by rust-highfive (actually generated by the Rust Log Analyzer) to bisect the failure to a specific PR and do
@bors r-
. If the PR is running, you need to do@bors r- retry
. Otherwise, your rollup succeeded. If it did, proceed to the next rollup (every now and then letrollup=never
and toolstate PRs progress). -
Recreate the rollup without the offending PR starting again from 1.
Selecting Pull Requests
This is something you will learn to improve over time. Some basic tips include (from obvious to less):
- Avoid
rollup=never
PRs (these are red in the interface). - Include all PRs marked with
rollup=always
(these are green). Try to check if some of the pull requests in this list shouldn't be rolled up — in the interest of time you can do so sometimes after you've made the rollup PR. - Avoid pull requests that touch the CI configuration or bootstrap.
(unless the CI PRs have been marked as rollup; see 2)
- Avoid having too many large diff pull requests in the same rollup.
- Avoid having too many submodule updates in the same rollup (especially LLVM). (Updating LLVM frequently forces most devs to rebuild LLVM which is not fun.)
- Do not include toolstate PRs like those fixing Miri, Clippy, etc.
- Do include docs PRs (they should hopefully be marked as green).
Failed rollups
If the rollup has failed, run the @bors retry
command if the
failure was spurious (e.g. due to a network problem or a timeout). If it wasn't spurious,
find the offending PR and throw it out by copying a link to the rust-highfive comment,
and writing Failed in <link_to_comment>, @bors r-
. Hopefully,
the author or reviewer will give feedback to get the PR fixed or confirm that it's not
at fault.
Once you've removed the offending PR, re-create your rollup without it (see 1.).
Sometimes however, it is hard to find the offending PR. If so, use your intuition
to avoid the PRs that you suspect are the problem and recreate the rollup.
Another strategy is to raise the priority of the PRs you suspect,
mark them as rollup=never
and let bors test them standalone to dismiss
or confirm your hypothesis.
If a rollup continues to fail you can run the @bors rollup=never
command to
never rollup the PR in question.
Triage Procedure
Pull Request Triage
Status Tags
- S-waiting-on-author - Author needs to make changes to address reviewer comments, or merge conflicts/test failures are present. This also covers more obscure cases, like a PR being blocked on another, or waiting for a crater run -- it is the author's responsibility to push the PR forward.
- S-waiting-on-review - Review is incomplete
- S-waiting-on-team - A T- label is marked, and team has been cc-ed for feedback.
- S-waiting-on-bors - Currently approved, waiting to merge. Managed by Bors.
- S-waiting-on-crater - Waiting to see what the impact the PR will have on the ecosystem
- S-waiting-on-bikeshed - Waiting on the consensus over a minor detail
- S-waiting-on-perf - Waiting on the results of a perf run
- S-blocked - Waiting for another PR to be merged or for discussion to be resolved
- S-blocked-closed - Closed because resolving the block is expected to take a long time
- S-inactive-closed - Closed due to inactivity.
Procedure
IMPORTANT: Whenever you do PR triage, please fill out the following form: goo.gl/forms/YKYVFYjBq28Hm3qQ2. If you want to create a bookmark for yourself, you can adapt this link to prefill your GitHub username.
Note: When you are pinging people in triage comments, you should mention that you are doing triage in the comment you post. For example, start your comments with something like "Ping from triage ...".
First ensure that the status tag matches the current state of the PR. Change the tag if necessary, and apply the procedure for the new tag.
Unassigned PRs
All PRs that have no assignee (except rollups) should be assigned to a random member of the responsible team.
Unlabeled PRs
All unlabeled PRs should be processed. The steps below are not mutually exclusive, any number of them may apply.
When no review has happened, if the PR is a work in progress (e.g., test
failures, merge conflict) mark S-waiting-on-author
. Otherwise, mark
S-waiting-on-review
. If no human has checked in yet and you don't recognise
the submitter as a regular contributor, leave a comment saying something like
"Thanks for the PR! We’ll periodically check in on it to make sure that
@reviewer or someone else from the team reviews it soon."
At this point, all PRs must have a tag applied.
S-waiting-on-author PRs
PRs with, roughly, more than a week of inactivity need to be processed. These can be found by looking at the "updated X days ago" on GitHub's PR list.
If the author hasn't responded for more than a week to a request for changes or a status update, ping the author on GitHub asking for them to check in on the PR's state. If they've given advance warning that they are unavailable for a period of time and therefore won't be able to address comments, do not ping until after that time. It is a good idea to start the message with "Ping from Triage..." so that the concerned parties know it is coming from the triage team.
If the author has not responded to a previous ping, meaning more than 2 weeks
have passed with no activity, the PR should be closed with a message thanking
the author for their work, asking them to reopen when they have a chance to
make the necessary changes, and warning them not to push to the PR while it is
closed as that prevents it from being reopened. Tag the PR with
S-inactive-closed
.
TIP: if an author is unavailable and you know they won't have a chance to come back to a PR for a while, you can 'bump' the PR by removing and re-adding the tag (note that removing/re-adding requires clicking off the tag selection dropdown between the two actions).
If the PR is blocked on another PR, issue, or some kind of discussion, add a
comment clearly identifying what is blocking the PR (something like "This PR
appears to be blocked on #12345") and change the state to S-blocked
. Follow
the instruction for S-blocked
to determine whether you should also close the
PR.
S-waiting-on-review PRs
PRs with, roughly, more than a week of inactivity need to be processed. These can be found by looking at the "updated X days ago" on GitHub's PR list.
If the review is complete the label should be changed from S-waiting-on-review
to S-waiting-on-author
.
Otherwise, the reviewer should be pinged. It is a good idea to start the message with "Ping from Triage..." so that the concerned parties know it is coming from the triage team; the message should ask the reviewer to either review or update their review of the PR. If the reviewer has already been pinged, meaning more than 2 weeks have passed with no activity, another reviewer on their team should be pinged. Note that if the reviewer has expressed that they are busy, do not ping them until they are available again. If the PR is not already labeled with a team (`T-`), find the team assigned to the PR's issue, which should have a `T-` label.
The `r?` command is needed to override a reviewer; however, not all triagers will have sufficient permissions. In this case, sending a message to the #triage-wg Discord channel or pinging @Dylan-DPC will be necessary.
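For reference, the command is issued as a comment on the PR: `r?` followed by the new reviewer's GitHub handle (the handle below is a placeholder):
r? @new-reviewer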
If the PR is blocked on another PR, add a comment clearly identifying the blocking PR (something like "This PR appears to be blocked on #12345") and change the state to `S-blocked`.
If the PR is tagged with `final-comment-period`, it does not need to be triaged unless the process has stalled for a reasonable period of time. These PRs have a form from RFCbot that looks like:
Team member @Donald has proposed to merge this. The next step is review by the rest of the tagged team members:
- @Huey
- @Dewey
- @Louie
At this point, ping the appropriate people to check their boxes to sign off on the PR.
If this stalls, nominate the PR for prioritization at the next team triage meeting by marking it with `I-nominated`.
PRs tagged with `finished-final-comment-period` are eligible for triage.
S-waiting-on-team PRs
PRs active within the last 4 days or inactive for greater than 2 weeks need to be processed. These can be found by looking at the "updated X days ago" on GitHub's PR list.
First, ensure that the status tag matches the current state of the PR. Change the tag if necessary, and apply the procedure for the new tag now. Verify that there is a `T-` tag for all PRs that remain in this category.
If the PR has been inactive for greater than 2 weeks, add the `I-nominated` label and ping the team, requesting a new assignee or other appropriate action.
If there has been recent activity, the team might have taken some action, meaning the state has changed but the label has not yet been updated; this is why the most recently updated PRs are also checked.
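Both ends of that window can be expressed as GitHub search filters. For example, assuming a hypothetical triage date of 2019-11-05, the first query below finds PRs active within the last 4 days and the second finds those inactive for more than 2 weeks:
is:pr is:open label:S-waiting-on-team updated:>=2019-11-01
is:pr is:open label:S-waiting-on-team updated:<2019-10-22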
S-waiting-on-bors PRs
Bors automatically manages this label, but human intervention may be required if there is an issue.
S-waiting-on-crater PRs
All PRs should be processed.
If the PR has been active in the last three days, make sure it's present on the crater spreadsheet. Fill in the link to the PR and set its status to "Pending".
If crater has been run and the results include failures, change the tag to `S-waiting-on-review`; the reviewer is responsible for deciding what should be done with the information provided by the failures.
If crater has been run and the results do not include failures, change the tag to `S-waiting-on-review` for the reviewer to take one last look and approve.
If crater has not been run and it has been more than 3 days since a crater run was requested, ping the last three distinct people listed on the spreadsheet in the infra IRC channel and request a crater run.
If crater has been started (the person starting should leave a comment) and it has been more than 5 days since an update, ping the person who started the run on IRC and GitHub.
S-waiting-on-bikeshed PRs
PRs inactive for greater than 7 days need to be processed. These can be found by looking at the "updated X days ago" on GitHub's PR list.
Find the source of the discussion and see if it has been resolved.
If it has been resolved, move it back to `S-waiting-on-author` or `S-waiting-on-review` as appropriate. Add a comment notifying the author or reviewer that the PR is now unblocked.
If it has not been resolved, remove and re-add the `S-waiting-on-bikeshed` tag. This resets the update time so the PR won't be reviewed for another week.
S-blocked PRs
Blocked PRs can remain blocked for a long time. To avoid needlessly re-triaging them, they should be closed if the blocking issue is unlikely to be resolved soon. If you close a blocked PR, apply the S-blocked-closed label and invite the author to re-open the PR once the issue has been resolved. If you feel uncomfortable just closing the PR, feel free to link to this document. As a rule of thumb, consider these guidelines:
- PRs blocked on discussion (such as RFCs or WG decisions) should be closed immediately, since those discussions generally take a long time.
- PRs blocked on other PRs should be closed, unless the blocking PR looks like it's going to be merged soon.
- PRs which have already been blocked for two weeks should generally be closed, unless there is a clear indication that they will be unblocked soon.
Blocked PRs which have not been closed should be triaged as follows:
PRs inactive for greater than 7 days need to be processed. These can be found by looking at the "updated X days ago" on GitHub's PR list.
Find the blocking issue from the triage comment and see if it has been resolved.
If it has been resolved, move it back to `S-waiting-on-author` or `S-waiting-on-review` as appropriate. Add a comment notifying the author or reviewer that the PR is now unblocked.
If it has not been resolved, remove and re-add the `S-blocked` tag. This resets the update time so the PR won't be reviewed for another week.
S-blocked-closed PRs
These never need to be looked at, although if you want you can go through the PRs and see if any have been unblocked. This label is for PRs which are blocked and have been closed because it is unlikely that the blocking issue will be resolved soon.
S-inactive-closed PRs
These never need to be looked at; they are PRs which have been closed due to inactivity. This is a terminal state for the time being, primarily oriented towards easing future work.
Issue triage
Issue triage is much simpler than PR triage. After finishing PR triage, go to the list of untagged issues and add tags as you see fit. The following categories should, ideally, be assigned to each issue (a worked example follows the list):
- At least one `A-` tag. This represents the area of the issue, so an issue relating to benchmarks or testing would get `A-libtest`. If you can't find an appropriate tag for the issue, it's possible that creating one is the right thing to do. Try to pick just one tag, unless you're giving the `A-diagnostics` tag, in which case one more tag is a good idea.
- One, and only one, `C-` tag. This represents the category of the issue.
  - `C-bug`: Bugs. These are things like ICEs or other failures of the compiler to do what it's supposed to in a way that breaks correct user code. It's not always easy to tell whether the code is correct and the compiler broken, but tend towards assuming it's the compiler's fault: at the least, we should give a better diagnostic. Note that as of now, `I-slow` and `I-compile{time,mem}` are not considered bugs; rather, they are enhancements, since they do not break user code.
  - `C-cleanup`: Refactoring and cleanup work within the compiler.
  - `C-enhancement`: Primarily diagnostic improvements, or other nice-to-haves that are not critical issues. Somewhat implies that this is a minor addition.
  - `C-feature-request`: An addition of an impl is the primary thing here. Sometimes minor lang features also qualify, though in general it's likely those should be closed in favor of RFCs. It's recommended that triagers close such issues in favor of the author opening a thread on internals or rust-lang/rfcs for language changes that are more significant than adding an impl.
  - `C-feature-accepted`: Feature requests that a relevant team would like to see an implementation for before final judgement is rendered. It's likely that such an implementation would be merged, unless breakage (e.g., inference-related) occurs.
  - `C-future-compatibility`: Used for tracking issues for future-compatibility lints.
  - `C-tracking-issue`: Used both for feature tracking issues (feature gates) and for issues which track some question or concern of a team. These are maintained on GitHub rather than internals because GitHub is a more stable location and easier to refer to in the long run.
- At least one `T-` tag. Assign the appropriate team to the issue; sometimes this is multiple teams, but it usually falls to dev-tools, compiler, or libs. Sometimes the lang team needs to make a decision.
- If necessary, add `I-` tags as you see fit. In particular, `I-ICE` is the dominant tag to be added.
- If applicable, add platform tags (`O-`). It's fine to add more than one.
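For instance, a hypothetical report of an internal compiler error hit while compiling a match expression on Windows might end up with a label set like the following (the area tag here is illustrative; pick whichever fits the actual issue):
A-patterns, C-bug, I-ICE, T-compiler, O-windows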
If an issue has been tagged with an `E-` category tag, such as `E-help-wanted`, and has been taken up by someone, but there has been no activity for 7 days, ask if they require assistance, and inform them that after 14 days the issue will be made available to anyone. After 14 days, re-add the help tag and unassign them if necessary.
State Of Rust Triage
- Visit the State Of Rust project page. Each card has three pieces of information.
- “Feature” — The name of the feature with a link to the tracking issue.
- “What’s next?” — What we are waiting on to implement and stabilise the RFC.
- “Last Update” — The last time this card has been triaged.
- For each card that you choose to triage:
- Visit the respective tracking issue, and any related issues that the tracking issue is recently mentioned in.
- If the “What’s next?” on the card does not match what you think the current state is, update it with the new information.
- If the implementation of an RFC has changed since the last update, move the card to the relevant column.
- If there are PRs merged that implement the RFC the card would move to “Implemented”.
- If there are only open PRs or the PRs don’t implement the full RFC the card would be moved to “Implementation in progress”.
- If there has been a decision to deprecate the RFC, move that to the “Deprecated” column.
- If there have been no meaningful changes to the RFC within 21 days, ping someone for an update on the status of the feature.
- If there have been PRs implementing the RFC, ping the author(s).
- If the author has not responded within a week, or there are no relevant PRs, ping the relevant team.
- If there is no clear choice for the team that should be doing the implementation, please add this to release team meeting notes (which can be found in the #release Discord channel).
- Update the date in “Last Update” and move the card to the bottom of the column.
Triaging Crater Runs
Running crater
We regularly run crater, and this documents the procedure for triaging a beta run; it may also be applicable to non-release-team runs (e.g., PR crater runs) with minor modifications.
First, file a new issue titled "Crater runs for 1.x" (example)
A crater run for beta should be started as soon as we have beta out. Use the following craterbot invocations.
$BETA_VERSION is e.g. 1.40.0-1; increment the 1 if it's not the first beta crater run. You can also use the auto-incremented counter in the beta `rustc --version`.
$STABLE is e.g. 1.39.0 (the stable release).
$BETA is beta-YYYY-MM-DD; get the date by looking at https://static.rust-lang.org/manifests.txt and taking the date of the most recent channel-rust-beta.toml.
@craterbot run name=beta-$BETA_VERSION start=$STABLE end=$BETA mode=build-and-test cap-lints=warn p=10
@craterbot run name=beta-rustdoc-$BETA_VERSION start=$STABLE end=$BETA mode=rustdoc cap-lints=warn p=5
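Concretely, with the variables filled in for a hypothetical first beta crater run of 1.40.0 (the beta date here is made up for illustration), the first invocation would look like:
@craterbot run name=beta-1.40.0-1 start=1.39.0 end=beta-2019-11-04 mode=build-and-test cap-lints=warn p=10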
Once the runs complete, you want to triage them.
Triaging
These steps should generally be done for the normal rustc run, and then followed up by a triage of the rustdoc run. Ignore failures in rustdoc that look to be rooted in rustc (i.e., duplicate failures).
There will usually be quite a few regressions. There are a couple of tools that can help reduce the amount of work that you need to do; it's mostly a matter of personal preference which is more helpful.
- https://github.com/Mark-Simulacrum/crater-generate-report/
- This groups regressions by 'root' by parsing the logs to look for the compilation failed messages printed by Cargo
- https://github.com/Centril/crater-cat-errors
- This groups regressions by the "error" message, also by parsing logs
If you've written a tool, feel free to add it here! We're still figuring out what the best UI for this is.
Regardless of the tool you've run, you ultimately need to read through a bunch of logs and try to quickly determine if they're genuine failures or spurious. Most of the time, a compiler failure is genuine, and test failures are mostly spurious, but this usually requires some level of guessing.
Once you've determined that something is a genuine failure, add it to a list somewhere (local file, HackMD, whatever) with the error "category." Mostly, you're trying to group things such that the regressions in a single group are all caused by the same set of commits, and different groups have different causes.
Once this is done, and you have all the regressions triaged into their separate groups, you want to file a new issue for each group. It should have the `regression-from-stable-to-beta` and `T-compiler` labels by default, possibly `T-libs` if it's a standard library regression, but that's relatively rare. If you happen to think you know the PR that caused the failure, cc the PR author in a separate comment and link to the PR; otherwise the compiler team will triage the issue soon.
Leave a comment on the original issue with the crater runs linking to all the issues you just opened, ideally with the issue titles as well.
You're done!
Re-running rustc on a crate
For the crates which we're not sure about, you can try running crater locally, or build the crate directly (`cratesio-curl` can be helpful). Be careful: regardless of what you do, you are running arbitrary code locally. It's also fine to file issues for the crates you're not sure about and let the triage process naturally categorize the error, though it's not good to do this for all the crates. Once you've triaged a crater run a couple of times, you get a pretty good sense of what is spurious and what isn't.
You can run crater on just a single crate by doing something like this (at least, as of now). Note that this will download several gigabytes (on first use) and requires Docker to be running.
git clone https://github.com/rust-lang/crater
cd crater
cargo run -- prepare-local
CRATES="crates-io-crate-0.4.0,owner/repository-name" # Edit this.
cargo run -- define-ex --crate-select=list:$CRATES --cap-lints=forbid 1.38.0 beta # Edit the stable version.
cargo run -- run-graph --threads 4
cargo run -- gen-report work/ex/default/
# view report for this crate
It's also possible to re-queue a subset of crates onto the official builders; for that, take a look at: https://gist.github.com/ecstatic-morse/be799bfa4d3b3d6e163fa61a9c30706f
Determining the root cause of the regression
It's not always apparent why a crate stopped building. This isn't generally something done as part of crater triage, but it can be a good followup. Here, `cargo-bisect-rustc` and Felix's minimization guide are excellent tools to apply.
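As a rough sketch of the bisection step (the date range below is hypothetical; pick nightlies that bracket the suspect change, and see the tool's documentation for the full set of flags), you would run something like this from the regressed crate's directory:
cargo install cargo-bisect-rustc
cargo bisect-rustc --start=2019-10-15 --end=2019-11-04 --regress=error
This builds the crate against a bisected series of nightlies until it finds the first failing one, then tries to narrow the regression down to the responsible commits.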
Archive
This section is for content that has become outdated, but that we want to keep available to be read for historical/archival reasons.
Friends of the Tree
The Rust Team likes to occasionally recognize people who have made outstanding contributions to The Rust Project, its ecosystem, and its community. These people are 'Friends of the Tree', archived here for eternal glory.
2016-02-26 @mitaa
This week we would like to nominate @mitaa as Friend of the Tree. Recently @mitaa has sent a wave of fixes to rustdoc (yes those are all separate links) with even more on the way! Rustdoc has historically been a tool in need of some love, and the extra help in fixing bugs is especially appreciated. Thanks @mitaa!
2016-02-12 Jeffrey Seyfried (@jseyfried)
This week's friend of the tree is Jeffrey Seyfried (@jseyfried)!
Jeffrey Seyfried (@jseyfried) has made some awesome contributions to name resolution. He has fixed a ton of bugs, reported previously unknown edge cases, and done some big refactorings, all of which have helped improve a complex and somewhat neglected part of the compiler.
2015-12-04 Vadim Petrochenkov @petrochenkov
This week we'd like to nominate @petrochenkov for Friend of the Tree. Vadim has been doing some absolutely amazing compiler work recently such as fixing privacy bugs, fixing hygiene bugs, fixing pattern bugs, paving the way for and implementing `#[deprecated]`, fixing and closing many privacy holes, refactoring and improving the HIR, and reviving the old type ascription PR. The list of outstanding bugs and projects in the compiler is growing ever smaller now; thanks @petrochenkov!
2015-11-16 Peter Atashian (WindowsBunny, retep998)
In his own words, WindowsBunny is "a hopping encyclopedia of all the issues windows users might run into and how to solve them." One of the heroes that make Rust work on Windows, he actively pushes the frontiers of what Rust can do on the platform. He is also notably the maintainer of the winapi family of crates, a comprehensive set of bindings to the Windows system APIs. You da bunny, WindowsBunny. Also, a friend of the tree.
2015-10-31 Marcus Klaas
Today @nrc would like to nominate @marcusklaas as Friend of the Tree:
Marcus is one of the primary authors of rustfmt. He has been involved since the early days and is now the top contributor. He has fixed innumerable bugs, implemented new features, reviewed a tonne of PRs, and contributed to the design of the project. Rustfmt would not be the software it is today without his hard work; he is indeed a Friend Of The Tree.
2015-10-16 Ryan Prichard
nmatsakis would also like to declare Ryan Prichard a Friend of the Tree. Over the last few months, Ryan has been comparing the Rust compiler's parsing behavior with that of the rust-grammar project, which aims to create a LALR(1) grammar for parsing Rust. Ryan has found a number of inconsistencies and bugs between the two. This kind of work is useful for two reasons: it finds bugs, obviously, which are often hard to uncover any other way. Second, it helps pave the way for a true Rust reference grammar outside of the compiler source itself. So Ryan Prichard, thanks!
2015-10-02 Vikrant Chaudhary
Vikrant Chaudhary (nasa42) is an individual who believes in the Rust community. Since June he has been contributing to This Week in Rust, coordinating its publication on urlo, and stirring up contributions. He recently rolled out an overhaul to the site's design that brings it more in line with the main website. Today Vikrant is the main editor on the weekly newsletter, assisted by llogiq and other contributors. Thanks for keeping TWiR running, Vikrant, you friend of the tree.
2015-07-24 Tshepang Lekhonkhobe
@Gankro has nominated @tshepang for Friend of the Tree this week:
Over the last year Tshepang has landed over 100 improvements to our documentation. Tshepang saw where documentation was not, and said "No. This will not do."
We should all endeavor to care about docs as much as Tshepang.
2015-05-19 Chris Morgan
I'd like to nominate Chris Morgan (@chris-morgan) for Friend of the Tree today. Chris recently redesigned the play.rust-lang.org site for the 1.0 release, giving the site a more modern and rustic feel to it. Chris has been contributing to Rust for quite some time now, his first contribution dating back to July 2013 and also being one of the early pioneers in the space of HTTP libraries written in Rust. Chris truly is a friend of the tree!
2015-03-24 Andrew Gallant (BurntSushi)
BurntSushi is an individual who practically needs no introduction. He's written many of the world's most popular crates, including docopt.rs, regex, quickcheck, cbor, and byteorder. Don't forget his CSV swiss-army-knife, xsv, built on rust-csv. Feedback from his early work on libraries helped inform the evolution of Rust during a critical time in its development, and BurntSushi continues to churn out the kind of Rust gems that can only come from someone who is a skilled friendofthetree.
2015-03-03 Manish Goregaokar (Manishearth)
Manish started working on Servo as part of the GSoC program in 2014, where he implemented XMLHttpRequest. Since then he's become an integral part of the Servo team while finishing his university studies and organizing Rust community events. In 2015 he took an interest in bors' queue and started making rollup PRs to accelerate the integration process. Nursing the PR queue is the kind of time-consuming labor that creates friends of the tree like Manish, the rollup friend of the tree.
2015-02-17 Toby Scrace
Today I would like to nominate Toby Scrace as Friend of the Tree. Toby emailed me over the weekend about a login vulnerability on crates.io where you could log in to whomever the previously logged in user was regardless of whether the GitHub authentication was successful or not. I very much appreciate Toby emailing me privately ahead of time, and I definitely feel that Toby has earned becoming Friend of the Tree.
2015-02-10 Jonathan Reem (reem)
Jonathan Reem has been making an impact on Rust since May 2014. His primary contribution has been as the main author of the prominent Iron web framework, though he has also created several other popular projects including the testing framework stainless. His practical experience with these projects has led to several improvements in upstream Rust, most notably his complete rewrite of the `TaskPool` type. Reem is doing everything he can to advance the Rust cause.
2015-01-20 Barosl Lee (barosl)
Today I would like to nominate Barosl Lee (@barosl) for Friend of the Tree. Barosl has recently rewritten our bors cron job in a new project called homu. Homu has a number of benefits including:
- Zero "down time" between testing different PRs (compared to 30+ minutes for bors!)
- A new rollup button to create separate rollup PRs from other PRs.
- Multiple repositories are supported (Cargo and Rust are on the same page)
Homu was recently deployed for rust-lang/rust thanks to a number of issues being closed out by Barosl, and it's been working fantastically so far! Barosl has also been super responsive to any new issues cropping up. Barosl truly is a Friend of the Tree!
2015-01-13 Kang Seonghoon (lifthrasiir, Yurume)
Seonghoon has been an active member of the Rust community since early 2013, and although he has made a number of valuable contributions to Rust itself, his greatest work has been in developing key libraries out of tree. rust-encoding, one of the most popular crates in Cargo, performs character encoding, and rust-chrono date / time handling, both of which fill critical holes in the functionality of the standard library. rust-strconv is a prototype of efficient numerical string conversions that is a candidate for future inclusion in the standard library. He maintains a blog where he discusses his work.
2015-01-06 Jorge Aparicio (japaric)
I nominate Jorge Aparicio (japaric) for Friend of the Tree (for the second time, no less!). japaric has done tremendous work porting the codebase to use the new language features that are now available. First, he converted APIs in the standard library to take full advantage of DST after it landed. Next, he converted APIs to use unboxed closures. Then, he converted a large portion of the libraries to use associated types. Finally, he removed boxed closures from the compiler entirely. He has also worked to roll out RFCs changing the overloaded operators and comparison traits, including both their definitions and their impact on the standard library. And this list excludes a number of smaller changes, like deprecating older syntax. The alpha release would not be where it is without him; Japaric is simply one of the best friends the tree has ever had.
2014-12-30 Kevin Ballard (kballard, Eridius)
This is a belated recognition of Kevin Ballard (aka @kballard, aka Eridius) as a friend of the tree. Kevin put a lot of work into Unicode issues in Rust, especially as related to platform-specific constraints. He wrote the current path module in part to accommodate these constraints, and participated in the recent redesign of the module. He has also been a dedicated and watchful reviewer. Thanks, Kevin, for your contributions!
2014-12-16 Gábor Lehel (glaebhoerl)
Gabor's major contributions to Rust have been in the area of language design. In the last year he has produced a number of very high quality RFCs, and though many of them have not yet been accepted, his ideas are often thought-provoking and have had a strong influence on the direction of the language. His trait based exception handling RFC was particularly innovative, as was the one for future-proofing checked arithmetic. Gabor is an exceedingly clever Friend of the Tree.
2014-11-11 Brian Koropoff (unwound)
In the last few weeks, he has fixed many, many tricky ICEs all over the compiler, but particularly in the area of unboxed closures and the borrow checker. He has also completely rewritten how unboxed closures interact with monomorphization and had a huge impact on making them usable. Brian Koropoff is truly a Friend of the Tree.
2014-10-07 Alexis Beingessner (Gankro)
Alexis Beingessner (aka @Gankro) began contributing to Rust in July, and has already had a major impact on several library-related areas. His main focus has been collections. He completely rewrote BTree, providing a vastly more complete and efficient implementation. He proposed and implemented the new Entry API. He's written extensive new documentation for the collections crate. He pitched in on collections reform.
And he added collapse-all to rustdoc!
Alexis is, without a doubt, a FOTT.
2014-09-02 Jorge Aparicio (japaric)
Jorge has made several high-impact contributions to the wider Rust community. He is the primary author of rustbyexample.com, and last week published "eulermark", a comparison of language performance on project Euler problems, which happily showed Rust performing quite well. As part of his benchmarking work he has ported the 'criterion' benchmarking framework to Rust.
2014-07-29 Björn Steinbrink (dotdash, doener)
Contributing since April 2013. Björn has done many optimizations for Rust, including removing allocation bloat in iterators, fmt, and managed boxes; optimizing `fail!`; adding strategic inlining in the libraries; speeding up data structures in the compiler; and eliminating quadratic blowup in translation and other IR bloat problems.
He's really done an amazing number of optimizations to Rust.
Most recently he earned huge kudos by teaching LLVM about the lifetime of variables, allowing Rust to make much more efficient use of the stack.
Björn is a total FOTT.
2014-07-22 Jonas Hietala (treeman)
Jonas Hietala, aka @treeman, has been contributing a large amount of documentation examples recently for modules such as hashmap, treemap, priority_queue, collections, bigint, and vec. He has also been fixing UI bugs in the compiler such as those related to `format!`.
Jonas continues to add new examples/documentation every day, making documentation more approachable and understandable for all newcomers. Jonas truly is a friend of the tree!
2014-07-08 Sven Nilson (bvssvni, long_void)
Sven Nilson has done a great deal of work to build up the Rust crate ecosystem, starting with the well-regarded rust-empty project that provides boilerplate build infrastructure and - crucially - integrates well with other tools like Cargo.
His Piston project is one of the most promising Rust projects, and it's one that integrates a number of crates, stressing Rust's tooling at just the right time: when we need to start learning how to support large-scale external projects.
Sven is a friend of the tree.
2014-06-24 Jakub Wieczorek (jakub-)
jakub-, otherwise known as Jakub Wieczorek, has recently been working very hard to improve and fix lots of match-related functionality, a place where very few dare to venture! Most of this code appears to be untouched for quite some time now, and it's receiving some well-deserved love now.
Jakub has fixed 10 bugs this month alone, many of which have been long-standing problems in the compiler. He has also been very responsive in fixing bugs as well as triaging issues that come up from fun match assertions.
Jakub truly is a friend of the tree!
2014-04-22 klutzy
klutzy has been doing an amazing amount of Windows work for years now. He picks up issues that affect our quality on Windows and picks them off 1 by 1. It's tedious and doesn't get a ton of thanks, but is hugely appreciated by us. As part of the Korean community, he has also done a lot of work for the local community there. He is a friend of the tree. Thank you!
- Rust on Windows crusader
- Fixed issues with x86 C ABI struct arguments
- Fixed multiple issues with non-US locales
2014-03-18 Clark Gaebel (cgaebel)
This week's friend of the tree is Clark Gaebel. He just landed a huge first contribution to Rust. He dove in and made our hashmaps significantly faster by implementing Robin Hood hashing. He is an excellent friend of the tree.
2014-02-25 Erick Tryzelaar (erickt)
- Contributing since May 2011
- Wrote the serialization crate
- Organizes the bay area Rust meetups
- Just rewrote the Hash trait
2014-02-11 Flavio Percoco (FlaPer87)
- Contributing since September
- Does issue triage
- Organizing community events in Italy
- Optimized the 'pow' function
- Recently been fixing lots of small but important bugs
2014-01-27 - Jeff Olson (olsonjefferey)
- Contributing since February 2012
- Did the original libuv integration
- Implemented our second attempt at I/O, first using libuv
- Ported parts of the C++ runtime to Rust
- Implemented file I/O for the newest runtime
- Last week published an article about file I/O on the Safari books blog
2014-01-21 - Steven Fackler (sfackler)
- Contributing since last May
- CMU grad
- Lots of library improvements, Base64, Bitv, I/O
- Rustdoc improvements
- Mut/RefCell
- std::io::util
- external module loading
2014-01-14 - Eduard Burtescu (eddyb)
- Contributing since October
- Working on the compiler, including trans
- Reduced rustc memory usage
- Optimized vector operations
- Helping refactor the compiler to eliminate use of deprecated features
- Cleaned up ancient code in the compiler
- Removed our long-standing incorrect use of the environment argument to pass the self param
2014-01-07 - Vadim Chugunov (vadimcn)
- Contributing since June
- Fixed numerous bugs on Windows
- Fixing broken tests
- Improved compatibility with newer mingw versions
- Eliminated our runtime C++ dependency by implementing unwinding through libunwind
Rust Release history
This is an archive of Rust release artifacts from 0.1–1.7.0. Each release is signed with the Rust GPG signing key (older key, even older key).
1.7.0
- Announcement
- Release notes
- Source code (signature)
- Windows x86_64 .exe gnu installer (signature)
- Windows x86_64 .msi gnu installer (signature)
- Windows x86_64 .exe MSVC installer (signature)
- Windows x86_64 .msi MSVC installer (signature)
- Windows i686 .exe gnu installer (signature)
- Windows i686 .msi gnu installer (signature)
- Windows i686 .exe MSVC installer (signature)
- Windows i686 .msi MSVC installer (signature)
- Linux x86_64 tarball (signature)
- Linux i686 tarball (signature)
- Mac OS X i686 pkg (signature)
- Mac OS X i686 tarball (signature)
- Mac OS X x86_64 pkg (signature)
1.6.0
1.5.0
1.4.0
1.3.0
1.2.0
1.1.0
1.0.0
1.0.0-beta
1.0.0-alpha.2
- Announcement
- Release notes
- Source code (signature)
- Windows x86_64 .exe installer (signature)
- Windows i686 .exe installer (signature)
- Windows x86_64 .msi installer (signature)
- Windows i686 .msi installer (signature)
- Linux x86_64 tarball (signature)
- Linux i686 tarball (signature)
- Mac OS X x86_64 pkg (signature)
- Mac OS X i686 pkg (signature)
- Mac OS X x86_64 tarball (signature)
- Mac OS X i686 tarball (signature)
- Documentation
1.0.0-alpha
- Announcement
- Release notes
- Source code (signature)
- Windows x86_64 installer (signature)
- Windows i686 installer (signature)
- Linux x86_64 tarball (signature)
- Linux i686 tarball (signature)
- Mac OS X x86_64 pkg (signature)
- Mac OS X i686 pkg (signature)
- Mac OS X x86_64 tarball (signature)
- Mac OS X i686 tarball (signature)
- Documentation
Rust 0.x
In addition to the short-form release announcement included on the mailing list, each 0.x release has a longer explanation in the release notes.
0.12.0
- Announcement
- Release notes
- Source code (signature)
- Windows x86_64 installer (signature)
- Windows i686 installer (signature)
- Linux x86_64 tarball (signature)
- Linux i686 tarball (signature)
- Mac OS X x86_64 pkg (signature)
- Mac OS X i686 pkg (signature)
- Mac OS X x86_64 tarball (signature)
- Mac OS X i686 tarball (signature)
- Documentation
0.11.0
- Announcement
- Release notes
- Source code (signature)
- Windows installer (signature)
- Linux x86_64 tarball (signature)
- Linux i686 tarball (signature)
- Mac OS X x86_64 pkg (signature)
- Mac OS X i686 pkg (signature)
- Mac OS X x86_64 tarball (signature)
- Mac OS X i686 tarball (signature)
- Documentation
0.10
- Announcement
- Release notes
- Source code (signature)
- Windows installer (signature)
- Linux x86_64 tarball (signature)
- Linux i686 tarball (signature)
- Mac OS X x86_64 pkg (signature)
- Mac OS X i686 pkg (signature)
- Mac OS X x86_64 tarball (signature)
- Mac OS X i686 tarball (signature)
- Documentation
0.9
0.8
- Announcement
- Release notes
- Source code (signature)
- Windows installer (signature)
- Tutorial
- borrowed pointers | conditions | containers | ffi | macros | rustpkg | tasks
- Manual (PDF)
- Rustpkg manual
- Standard library docs
- Extra library docs
0.7
- Announcement
- Release notes
- Source code (signature)
- Windows installer (signature)
- Tutorial
- Manual (PDF)
- Standard library docs
- Extra library docs
0.6
- Announcement
- Release notes
- Source code (signature)
- Windows installer (signature)
- Tutorial
- Manual (PDF)
- Core library docs
- Standard library docs
0.5
- Announcement
- Release notes
- Source code (signature)
- Windows installer (signature)
- Tutorial
- Manual (PDF)
- Core library docs
- Standard library docs
0.4
- Announcement
- Release notes
- Source code (signature)
- Windows installer (signature)
- Tutorial
- Manual (PDF)
- Core library docs
- Standard library docs
0.3.1
This was an OS X bugfix release.
0.3
- Announcement
- Release notes
- Source code (signature)
- Windows installer (signature)
- Tutorial
- Manual (PDF)
- Core library docs
- Standard library docs