githubtraining / using-github-actions-for-ci

Course repository for "GitHub Actions: Continuous Integration"

Home Page: https://lab.github.com/githubtraining/github-actions:-continuous-integration

License: Creative Commons Attribution 4.0 International

learning-lab course hacktoberfest

using-github-actions-for-ci's Introduction

Learning Lab bot

Course: GitHub Actions: Continuous Integration

This repository powers the Learning Lab course GitHub Actions: Continuous Integration.

Every Learning Lab course is made up of a course repository and a template repository.

The course repository is written in YAML and Markdown. The template repository can be written in any language that supports the learning objectives.

For more information on the goals of this course, check out the course-details.md.

Contribute

See something we could improve? Check out the contributing guide in the community contributors repository for more information on the types of contributions we ❤️ and instructions.

We ❤️ our community and take great care to ensure it is fun, safe and rewarding. Please review our Code of Conduct for community expectations and guidelines for reporting concerns.

License

All Learning Lab course repositories are licensed under CC-BY-4.0 (c) 2019 GitHub, Inc. The template repositories associated with each course may have different licenses.

When using the GitHub logos, be sure to follow the GitHub logo guidelines

using-github-actions-for-ci's People

Contributors

a-a-ron, aliabbasmerchant, brianamarie, crichid, hectorsector, jasonetco, lmkeston, mattdavis0351, parroty, rkzm, superitman, xeonacid


using-github-actions-for-ci's Issues

fail in step 5 again with a message "1 snapshot obsolete."

Bug Report

Current behavior

After completing the activity in step 5, github-learning-lab reports that Actions could not finish the workflow correctly.

In the workflow results, npm run test raised a warning: "Snapshot is obsolete".

FYI, this relates to #55. I solved that problem directly in githubtraining/using-github-actions-for-ci-template#15, but this next problem appeared afterward.

Reproduction
Same as #55

Expected behavior
Same as #55

Possible solution
I solved this by fixing the code and updating the snapshot in githubtraining/using-github-actions-for-ci-template#17.

Additional context
My Learning Lab practice repo where this problem occurs is here.
https://github.com/dzeyelid/github-actions-for-ci_keep2/pull/2

The problem appears here.
https://github.com/dzeyelid/github-actions-for-ci_keep2/pull/2#issuecomment-859223483

Bug: Test results in the first step

Bug Report

Current behavior

The Node 12.x build fails, and the 14.x and 16.x builds are cancelled right after it. There is also no response from the github-learning-lab bot.

(Screenshots of the failed and cancelled builds were attached.)

Reproduction

Steps to reproduce the behavior in the course:

  1. Created a Node.js workflow
  2. Used the Node.js template: nodejs.yml
  3. Created a pull request titled as the bot requested
  4. Came across failed and cancelled builds
  5. Received no response from the bot

Expected behavior

Expected a response from the github-learning-lab bot after GitHub Actions finished running the workflow, in order to proceed to the next step.

Suggestions

It seems the template workflow broke some of the learning-lab bot's assumptions, which is why there is no response. FYI, this is similar to issue #11.

Course Feedback from MSFT

The feedback below was sent to me by Harshitha, a PM intern under Usha, the Product Manager for GitHub Actions for Azure.

  • The label “Step 4” appeared as “step 3” in the “Hello GitHub Actions” course while performing step 4.
  • On the “GitHub Actions: Continuous Integration” course page, suggested courses appear at the bottom; “Continuous Delivery with Azure” could also be included there.
  • Some users may focus on completing each step so that the bot responds, rather than on learning the actual procedure. Adding videos to courses that lack them would let users choose their preferred learning method.
  • A follow-up link for a course named Continuous Delivery with GitHub Actions leads to “Continuous delivery using AWS”, but the link text doesn't hint at that.

consider not using `master` for setup-node action

Bug Report

Current behavior
CI best practice is to reference explicit versions of actions, at least major versions. For setup-node we are using master, which is not best practice unless it is done deliberately to educate.

Reproduction
See nodejs.yml workflow file

Expected behavior
Remove the reference to master and pin a version instead:
https://help.github.com/en/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#jobsjob_idstepsuses
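A sketch of the pinned reference, assuming the course's nodejs.yml (the version numbers here are illustrative):

```yaml
# Instead of:  uses: actions/setup-node@master
# pin the action to a released major version:
- name: Use Node.js
  uses: actions/setup-node@v1
  with:
    node-version: 10.x
```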

in step 5, an unrelated test still fails after applying the suggested fix

Bug Report

Current behavior
In step 5, “Fix the test”, the tests still fail after applying the fix, because a different test fails:

 App › Contains the compiled JavaScript

    Test functions cannot both take a 'done' callback and return something. Either use a 'done' callback, or return a promise.
    Returned value: Promise {}

      3 |
      4 | describe('App', () => {
    > 5 |   it('Contains the compiled JavaScript', async (done) => {
        |   ^
      6 |     fs.readFile('./public/main.js', 'utf8', (err, data) => {
      7 |       expect(err).toBe(null)
      8 |       expect(data).toMatchSnapshot()

      at Object.<anonymous>.describe (__test__/game.test.js:5:3)
      at Object.<anonymous> (__test__/game.test.js:4:1)

Reproduction
Steps to reproduce the behavior in the course:

  1. Go to 'GitHub Actions: Continuous Integration'
  2. In Step 5, commit the suggested fix for the expected failing test
  3. GitHub Learning Lab Bot will reply on the pull request 'The Actions workflow failed and I was expecting it to succeed. Let's see if we can find how to fix it.'
  4. See error in the Actions log

Expected behavior
After committing the suggested fix in step 5, all tests should pass.

Possible solution
I don't have much Node experience, but the failing test explains it: “Test functions cannot both take a 'done' callback and return something. Either use a 'done' callback, or return a promise.”

Additional context
My Learning Lab repo where I'm experiencing the error:
jhampson-dbre/github-actions-for-ci#2

There is also a report of the same error in the forums:
https://github.community/t/github-actions-for-ci-stuck-on-step-5/182063

Come up with a great title

The current title is 🆗, but I'd love some ideas on the right name for this course: one that properly sets it apart from the other CI content and lets folks know it's Actions-specific.

test issue

This issue tests the project automation action

Typo in the Workflow file [OS matrix field] but still works

Bug Report

Current behavior

Discovered a bug in the workflow file nodejs.yml that has me wondering how the virtual environments are set up.

The whole course works fine, but observant learners will ask what's going on:

👀 at the line

os: [ubuntu-lastest, windows-2016]

should be -latest

name: Node CI

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest

    strategy:
      matrix:
        os: [ubuntu-lastest, windows-2016]
        node-version: [8.x, 10.x]

    steps:
    - uses: actions/checkout@v1
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - name: npm install, build, and test
      run: |
        npm ci
        npm run build --if-present
        npm test
      env:
        CI: true


Will update these 👇

Reproduction
Steps to reproduce the behavior in the course:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Possible solution
If you have suggestions, we'd love to hear them. If not, that's ok too.

Additional context
Add any other context about the problem here. If applicable, add screenshots to help explain.

Separate branches for workflow with build and test jobs

Starting at step 10 (Run multiple jobs), we make big changes to the workflow file that aren't easy to deliver as suggested changes. Right now we tell learners they can copy/paste a complete workflow, but we could instead have these new workflows ready on a branch to merge, should someone want to incorporate the changes automatically.

undocumented payload keys in use

Bug Report

Current behavior
There are steps in the config.yml that reference payload keys which aren't documented in the GitHub Event Types & Payloads documentation.

Reproduction

GitHub API docs:

 "app": {
      "id": 29310,
      "node_id": "MDM6QXBwMjkzMTA=",
      "owner": {
        "login": "Octocoders",
        "id": 38302899,
        "node_id": "MDEyOk9yZ2FuaXphdGlvbjM4MzAyODk5",
        "avatar_url": "https://avatars1.githubusercontent.com/u/38302899?v=4",
        "gravatar_id": "",
        "url": "https://api.github.com/users/Octocoders",
        "html_url": "https://github.com/Octocoders",
        "followers_url": "https://api.github.com/users/Octocoders/followers",
        "following_url": "https://api.github.com/users/Octocoders/following{/other_user}",
        "gists_url": "https://api.github.com/users/Octocoders/gists{/gist_id}",
        "starred_url": "https://api.github.com/users/Octocoders/starred{/owner}{/repo}",
        "subscriptions_url": "https://api.github.com/users/Octocoders/subscriptions",
        "organizations_url": "https://api.github.com/users/Octocoders/orgs",
        "repos_url": "https://api.github.com/users/Octocoders/repos",
        "events_url": "https://api.github.com/users/Octocoders/events{/privacy}",
        "received_events_url": "https://api.github.com/users/Octocoders/received_events",
        "type": "Organization",
        "site_admin": false
      },

Smee Payload:

"app": {
  "id": 15368,
  "slug": "github-actions",
  "node_id": "MDM6QXBwMTUzNjg=",
  "owner": { ... },
  "name": "GitHub Actions",
  "description": "Powers your `.github/main.workflow`.",
  "external_url": "https://developer.github.com/actions/",
  "html_url": "https://github.com/apps/github-actions",
  "created_at": "2018-07-30T09:30:17Z",
  "updated_at": "2019-01-25T22:34:06Z",
  "permissions": { ... },
  "events": [ ... ]
}

Config.yml:

  - title: Run a templated workflow
    description: Wait for GitHub to run the templated workflow and report back the results
    event: check_suite.completed
    link: '{{ repoUrl }}/pull/2'
    actions:
      - type: gate
        left: '%payload.check_suite.app.slug%'
        operator: ===
        right: github-actions
      - type: respond
        with: 0X_merge.md
        issue: Add jest tests
        action_id: jestPr
      - type: respond
        issue: CI for Node
        with: 02_template-workflow-ran.md
        data:
          url: '%actions.jestPr.data.html_url%'
          actionsUrl: '%payload.repository.html_url%/actions'

Possible solution
We should probably stick to what's accessible in the docs to avoid confusing anyone who looks at this code.

Node 8.x fails build, 10.x and higher work

Bug Report

Current behavior
Step 7 and later (06_custom-workflow.md) ask learners to use Node 8.x and 10.x; however, all 8.x builds from that point on fail.

Reproduction
Steps to reproduce the behavior in the course:

  1. In Step 7, use something similar to node-version: [8.x, 10.x] in .github/workflows/nodejs.yml
  2. Click on Actions: the 8.x build consistently fails through the rest of the course.

Expected behavior
After following the course steps, I expect all builds to pass before “Step 9: Use multiple jobs”.

Possible solution
Use Node [10.x, 12.x] or [12.x, 14.x] throughout the course.
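The suggested change could look like this, sketched against the matrix the course's workflow already uses (OS list assumed unchanged):

```yaml
strategy:
  matrix:
    os: [ubuntu-latest, windows-2016]
    node-version: [12.x, 14.x]   # was [8.x, 10.x]
```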

Remove Node 8 from workflows

We have some responses/code examples that use Node v8. That version is deprecated, so we should remove it! If we still want to show multiple versions, I'd recommend 12 and 14, though 10 and 12 is also okay!

Bug: Test results in first step

In the first step, I am not sure what to expect from the tests; but, I see that the CI is failing or cancelled and I'm not sure how to proceed.


Response style

The responses need to follow the same style as other courses. Things like:

  • headers for activities and calls to action
  • dividers for troubleshooting or what the bot is waiting for
  • reminders and hints to refresh the page at key moments

The `Needs` Key is Missing From Download Artifact Lab

Bug Report

Current behavior
Without `needs: build`, the test job will run before the build artifact is uploaded, and will fail.

Reproduction
Just follow the comments from the bot as is.

Expected behavior
The builds work.

Possible solution
There is a PR suggestion for this now, but it would be helpful to include it in the copy/paste example as well, since the needs concept has not yet been introduced at that point. https://github.com/githubtraining/using-github-actions-for-ci/blob/master/responses/11_needs-suggestion.md
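A sketch of where the key belongs, assuming the lab's two-job build/test workflow (job names, artifact name, and action versions here are assumptions matching the course era):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - run: npm ci && npm run build
      - uses: actions/upload-artifact@v1
        with:
          name: webpack artifacts
          path: public/
  test:
    needs: build   # without this line, test can start before build has uploaded the artifact
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: actions/download-artifact@v1
        with:
          name: webpack artifacts
          path: public
      - run: npm ci && npm test
```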

Additional context
Great lab all in all!

Create illustrations

There are a lot of steps in this course that could benefit from illustrations. Steps such as:

  • relationship between a workflow and an action
  • encapsulation and scope of jobs
  • artifact storage, upload and download
  • relationships between scripts and actions
  • passing of environment variables into actions

Course flow feedback

Hello @hectorsector! I looked through this course with the most recent changes from the full-responses branch. Here's my feedback. You'll notice it's much sparser toward the end, partly because a lot of the feedback was repetitive, and partly because much of it concerns response details that I don't think you're asking about at this point.

Let me know if you have any questions about this or if you'd like to walk through it synchronously. Hopefully this is helpful!

General

  • I think the names of the steps can be more specific and contextualized in terms of the broader workflow and concepts we're trying to teach
  • Since this course is advertised as CI, we should be incorporating the CI concepts into each step, referencing how this specific technical example serves a larger practice

Step 1

  • What is a workflow and an action? Even if these are explained in more detail later in the course, as a learner, if I saw this amount of information with these steps, I would assume that I don't know enough to proceed. We should include some level of explanation at this point: even if it's rudimentary and it says we'll go into more detail later. Currently there's no conceptual learning happening with this step at all.

Step 2

  • If I name the PR something inexact, I get an error. Is that core to what this course is trying to teach? Is there a reason we care so much? It doesn't seem related to the topic at hand.
  • In 01_explain-template.md
    • We should specify which file
    • We also should specify what is being run, not just "CI"
    • We should advise the users on how long they should expect to wait
    • We should let the learner know that they can expect more clarification about the points in the file below in the line comments.
  • Step 2 isn't actually anything that the learner does. As a learner, I expect to complete an action to move forward at every step. Can we combine steps 2 and 3? I think a wait step in part 2, which waits for the feedback from the action, then moves on and gives instructions, would solve it. Ex: the CircleCI course
  • Why is the pull request for jest being created at this point? As a learner, I think I'm waiting for the Actions to be completed - so it's unclear why the Jest PR is being created now. It doesn't look like we're referencing to it or asking the learner to look at it yet, either. Can this be created later?
    • Do we have the learner create a pull request, then merge a different pull request? Why? Does this mimic a typical workflow? If we want them to appear, why not have the bot merge? If we want the learner to go through the pull request, why not merge the first PR first? I understand the flow we're creating, but it seems like an avoidable opportunity for confusion. I think it makes more sense to have it all committed and merged automatically into the same PR.

Step 3

  • I don't think this merge should be something the learner does - I think it confuses the natural workflow that we recommend. I'm a fan of the method of calling out specific things through line comments and explaining them. I recommend we merge this automatically once the build succeeds, and point out separately at this point the successful build, what it has and doesn't have, and the new commits that we made through merge. I think the flow of having them merge would definitely point out the changes, but I think it would confuse the learner...it confused me. OR, like above, I think these should be two entirely different pull requests that we have the learner merge chronologically, or both to master.

Step 4

  • I LOVE the authentication step of having the learner type the name of one of the failed tests. ✨

Step 5

  • I think it's great that we have the user introduce the tests, but introducing a workflow and the action + fixing the build don't necessarily make sense in the same PR. I understand that we always want master to have a passing build, but I would always recommend that learners keep their PRs logically grouped. I think introducing an action and fixing a bug belong in separate PRs.

Step 6

  • Same notes as before about content breadth of each pull request

Step 7

  • It's very unclear to me what editing the version of node has to do with implementing a new CI workflow. How are these related? What are the learning outcomes of this step?
  • Same feedback as before - does it really matter what the pull request title is? Is that directly related to the learning goal, strongly enough that I'm blocked if I don't do it? If we're enforcing this practice, are we equally enforcing all of the other practices that users should be doing, like self-assigning or including content in the body of the PR?

Step 8

  • This step starts to make clearer what we're trying to accomplish with this workflow. I think the specific requirements for what this workflow is trying to accomplish should be called out more loudly, and the CI concepts we're trying to teach should also be called out here. E.g., why does it matter for CI practices to include this level of detail in our actions and tests? Why does this help?

Step 9

  • This step seems to add a lot. Similarly to how the steps were called out separately in step 1, I think these would be much more approachable if they're broken up into their own steps. Here would be the place I'd expect more detail and information on the possibilities of these fields, the syntax requirements, different examples, etc.

Step 10

  • Same feedback as step 2 - can we let this be a waiting gate for the previous step vs a step that implies the learner has to do something?

Step 11

  • 👍 This is the level of granularity per step that would benefit step 9.
  • Are we adding an action to the workflow file? This is a great opportunity to reinforce that vocabulary.

Step 12

  • Same for step 11

Step 13

  • Same general feedback about opportunity to reinforce the specific technical wins here and how they map to broader CI concepts

Step 14

  • Love the checklist - could this checklist be in an issue to mimic the practices we'd recommend for a team? I think that would be a nice touch and make it more realistic
  • Why do some things belong in different actions and some things belong in different workflows? What necessitates a different workflow? This should be explained here

Step 15

  • Love the modularity of this step

Step 16

  • Love the modularity of this step, and that it's reinforcing the same concepts in different ways with different examples

Step 17

  • This step seems to add a lot more than the previous steps - we should be explaining line by line after or before they commit

Step 18

  • We're asking the user to set branch protections, but not really testing that. I like the workaround that you did here to make it work.

Use job outputs in place of artifact storage

This course uses artifact storage because, at the time we created it, jobs couldn't pass data to each other.

Now, we've got job outputs! Let's update the course to use these new parameters.

/cc @mattdavis0351 I'd love your help in making this happen sometime this week -- folks are going to be using this course for Satellite workshops.
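For reference, a minimal sketch of job outputs (job and output names here are illustrative, not from the course; `set-output` was the mechanism at the time, newer runners use `$GITHUB_OUTPUT`):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      build-sha: ${{ steps.meta.outputs.sha }}
    steps:
      - id: meta
        run: echo "::set-output name=sha::$GITHUB_SHA"
  report:
    needs: build   # outputs are only visible to jobs that depend on the producer
    runs-on: ubuntu-latest
    steps:
      - run: echo "build job produced ${{ needs.build.outputs.build-sha }}"
```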

Auditing Course Prior to Universe

Using GitHub Actions for CI

I have broken things down into multiple categories and have tried to keep notes along every step of the way as I took this course. Please forgive me for not breaking these out into steps directly; sometimes it's hard to tell which Learning Lab step I am in.

| Category | Description | Icon |
| --- | --- | --- |
| Flow | Changes that I think would improve the learner's ability to navigate the course along our intended path | 💧 |
| Concern | Something jumped out at me and I wanted to bring it up for discussion | 🤔 |
| Bug | Something broke while I was following the course steps as intended | 🐛 |
| Vulnerability | Something broke because of an unintended action I took. The course is vulnerable to this type of action happening and breaks because of it. | ⚠️ |

So let's get started!


💧

Severity: Low

Problem:
After joining the course I have no real indication about what the first step is. The only thing I am presented with when I visit my repository is this README.md file.

I know from experience that there is either an issue or a pull request open for me, but it's unfair to expect our learners to have that experience.

When I click the link on the course steps page, I am taken to the issue to begin the course, as expected.

Suggestions:
Add a link to the README.md that also points the learner to the first issue.


🤔

Severity: Low

The desired name for the first pull request is CI for Node, and it is case sensitive. If I use a lowercase version of the same name, ci for node, the validation step fails.

Can we standardize the input we collect from the user to be less case sensitive? Consider how form input on a webpage might be collected and then transformed to all lowercase on the backend for consistent logic.

JavaScript can do this with the toLowerCase() function:

const userInput = "My User Does SiLLy ThInGs";

const ourInput = userInput.toLowerCase();

console.log(ourInput);

output:

my user does silly things

Having something like this would tolerate minor mistakes from the learner without impeding the flow of the course.

This may also improve the speed at which a course can progress, since we wouldn't always be waiting on a Learning Lab response explaining to the user that they didn't use the proper case.


🐛

Severity: Moderate (prevents course progress)

Problem:
If the user names the first pull request incorrectly, they can end up in an infinite loop of being told to name it correctly. This prevents the course from progressing.

Steps to reproduce:

  1. Continue naming the pull request incorrectly
  2. This will continue forever as long as the user provides invalid names

Suggestions:

  • Consider having the bot name the pull request if the user fails to get this step right after n attempts.
  • Any time we let the bot correct the user's behavior, we should also explain what we did and why.

🐛

Severity: Moderate (prevents course from progressing)

Problem:
If the check_suite finishes before the bot has a chance to listen for the payload, the course does not continue unless the user manually triggers the check_suite again.

Steps to reproduce:

  1. Name the first pull request incorrectly (see other bug ☝️)
  2. Wait for the check_suite to finish running
  3. Name the first pull request to the correct value
  4. Once the bot finishes explaining the workflow file, it responds by instructing you to refresh the page
  5. Refresh the page as instructed by the bot
  6. Refreshing does not trigger the check_suite again
  7. The course has officially stalled

Probable Cause:

  • There is no webhook event firing to tell the bot that the check_suite has finished its run. Because of this, Learning Lab never knows when to respond with an explanation of the CI logs
  • This is fixable if the learner navigates to the Actions tab and manually triggers the check_suite by clicking Re-run checks
  • Once the check_suite finishes, the Learning Lab bot will do its job and trigger the desired explanation

Suggestions:

  • If the learner has incorrectly named the pull request, we can safely assume they may also be too slow in renaming it correctly, and will trigger this bug.

  • We can run some sort of check:

    if the pull request was named improperly:

      then: serve a response indicating they may need to do more than
      refresh the browser to trigger the CI explanation

      else: serve the standard response that only mentions a browser refresh

⚠️

Severity: High (prevents course progress, creates rework for the learner)

When following the steps outlined in the first issue:

  • Go to the Actions tab.
  • Choose the template Node.js workflow.
  • Commit the workflow to a new branch.
  • Create a pull request titled CI for Node.

It is entirely possible to break this course by selecting the incorrect workflow template.

Steps to reproduce:

  1. Navigate to the Actions tab
  2. Select any workflow other than the intended one; in this case I chose the Node.js Package template
  3. Commit the workflow to a new branch
  4. Create a pull request titled CI for Node
  5. The Learning Lab bot will comment on our pull request with the intended response, but will never continue, even after following the suggestion to refresh
  6. The following error is generated by Learning Lab:
    HttpError: {"message":"Validation Failed","errors":[{"resource":"PullRequestReviewComment","code":"invalid","field":"path"}],"documentation_url":"https://developer.github.com/v3/pulls/comments/#create-a-comment"}
        at /app/node_modules/@octokit/rest/lib/request/request.js:72:19
        at processTicksAndRejections (internal/process/task_queues.js:93:5)
        at async Context.runActions (/app/lib/context.js:216:24)
        at async Course.runHandler (/app/lib/course.js:184:32)
    

Probable Cause:
The config.yml for this course expects a very specific file path after the templated workflow is committed. When the wrong template is used, the expected filename changes, and the course breaks at this step.

As we can see, the file parameter is looking for .github/workflows/nodejs.yml, but because our learner selected a different template workflow (as seen in step 2 above), the file that actually exists is .github/workflows/npmpublish.yml.

Further Findings:

  • Once the error gets thrown, renaming the file to nodejs.yml does not allow the course to progress.
    • Closing the existing pull request and creating a new one that follows the instructions allows the course to progress.

Suggestions

  • Before the Learning Lab bot tries to run this action there should be some sort of validation to make sure the proper workflow file was selected. This can most likely be accomplished by implementing a gate action to check for the proper file, which can then be used to prompt the learner to double check that they selected the proper workflow.
  • Instead of asking the learner to fix the workflow file, we could potentially overwrite it for them. The .github/workflows path is protected when the repository is initialized, but once the learner creates a file in this path, the Learning Lab bot can write changes to that file. So maybe we give them a chance to fix it, and if they get it wrong twice we overwrite the file and explain what we did and why?
  • Edit this step to be more explicit by making Node.js bold:
    • Before: Choose the template Node.js workflow.
    • After: Choose the template **Node.js** workflow.

🐛

Severity: Low (course progresses, feels untested and rushed)

Problem:
In the CI for Node pull request, the user is prompted to enter the name of a failing test before the course progresses. The course progresses regardless of what is typed.

No actual learning is reinforced

Steps to reproduce:

  1. When prompted to enter the name of a failing test, type something other than what is asked
  2. I typed the word bread and the course progressed

Suggestions:

  • Parse the body of the comment and validate what has been typed.
  • Have the Learning Lab bot respond accordingly based on what is typed.

🐛

Severity: Low (can break progression, easily avoidable)

Problem:
Continual bad changes to game.js keep the learner in an endless loop of identical responses.

Steps to reproduce:

  1. Ignore the suggested changes from the Learning Lab bot
  2. Change the value of this.p2 to anything you want
  3. Watch the bot respond with help
  4. Change the value of this.p2 to another improper value
  5. Watch the bot respond with help
  6. Continue this loop forever

Suggestions:

  • If the learner has failed to change the value correctly after n attempts, we should prompt them with a more helpful and direct response. This is hard, since we can't anticipate what they are going to change, but we can see the diff of the game.js file.
  • Consider giving them the suggested changes again if they fail to edit the file by themselves.

🤔

Severity: Low (not tested, just guessing)

Problem:
I am guessing that the Improve CI pull request will suffer from all of the same pitfalls the CI for Node pull request did.

  • Infinite response loops, offering no new guidance, if proper changes aren't made
  • Lack of any real input validation
    • What happens if we change the workflow filename?
    • What happens if we change a value in the workflow to call an extra step?
    • What if we try to use a different GitHub Action in our workflow?
      • We are only validating that the name of the workflow file matches what we expect; we are ignoring its contents.

⚠️

Severity: High (course progresses when it shouldn't, eventually breaks)

Problem:
When editing the nodejs.yml workflow file in the Improve CI pull request, any change made to the file triggers a 'success'-style response from the bot.

Steps to reproduce:

  1. Ignore the suggested changes to .github/workflows/nodejs.yml
  2. Edit .github/workflows/nodejs.yml by making any change you wish to
    Screen Shot 2019-11-06 at 3.15.01 PM
  3. Commit changes to current branch the PR is for
  4. Watch the bot respond with success style message
  5. Finally, accept the suggested changes, ignoring what is currently being asked of you.
  6. The course is now entirely out of sync, since we are only listening for the completion of check_suites; we can no longer accept suggestions.
    Screen Shot 2019-11-06 at 3.27.36 PM

Further findings:
The edited workflow file executes, which we expect, but which we are not accounting for. Our learners could run out of usage quota without realizing it by making unexpected changes to these files. Although that's a risk they take, we should do our best to mitigate it by handling the body of these files better than we currently do.

Take a look at the workflow running.

Screen Shot 2019-11-06 at 3.16.45 PM

As a new learner I might not expect this behavior, especially since the Learning Lab bot didn't inform me that the changes that I made were incorrect.

This poses a new challenge for us as course authors. We haven't had to think about how Actions and Learning Lab impact one another. For every good thing we can do when these two features are married together there are ten things we need to account for to provide a better experience.

At this point in the course, my best option is to restart the entire thing.

Screen Shot 2019-11-06 at 3.32.21 PM

Suggestions:

  • We can access what is changed in any given file; we should take care to ensure the contents are what we expect.
  • We may want to consider having the bot cancel check_suites when the content of them isn't what we expect from the course.
  • The bot did not respond properly in this instance; can we somehow enforce accepting the suggestion?
    if change was made by accepting suggestion
        then: respond with success
        else: check the body of contents and act accordingly
    
  • Once the course breaks, it would be quite a task for the learner to fix it, especially if their own changes are what broke it. They may also keep triggering Learning Lab Actions while trying to fix it.
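The pseudocode suggestion above could be realized as a small decision function. This is a hypothetical sketch; the response identifiers are assumptions, and a real implementation would compare diffs rather than raw strings.

```javascript
// Hypothetical sketch of the suggested logic: only respond with success when
// the committed change matches the suggestion the bot made.
function respond(committedChange, suggestedChange) {
  if (committedChange === suggestedChange) {
    return 'success'
  }
  // Otherwise inspect the contents and act accordingly,
  // e.g. re-post the suggestion with troubleshooting help.
  return 'retry-with-guidance'
}

console.log(respond('runs-on: ubuntu-latest', 'runs-on: ubuntu-latest')) // success
console.log(respond('runs-on: my-machine', 'runs-on: ubuntu-latest'))    // retry-with-guidance
```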

Biggest Takeaways

There are many things we need to consider from the course design perspective. Working to incorporate more thorough validation of inputs and exit strategies for infinite loops will improve the quality of the courses we create in the future.

Addressing these two issues alone will help us maintain the confidence people have in the Learning Lab platform. These changes will also dramatically improve the quality of our courses.

The question I ask is: how can we not only guide the learner as they progress through the course, but also help them dislodge themselves when they get stuck on something they don't understand?

We cannot assume the learner knows anything about GitHub, writing code, working with branches, issues or pull requests. The moment we assume that anything is familiar to them is the moment we fail them.

The exception to that mindset is when we explicitly guide them down a learning path, without allowing access to assets before we know for certain they have the required familiarity.

Need test job to fail until it receives build artifacts

Currently, step 11 asks the learner to upload the build job's webpack-generated files to artifact storage, and then expects the test job to fail when it runs npm test if those build artifacts aren't accessible. However, it doesn't fail.
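For reference, a workflow shaped like the one step 11 is aiming for might look like the sketch below. Job names, action versions, and the `public/` path are illustrative assumptions, not the course's exact files.

```yaml
# Sketch of the intended job dependency (names, versions, and paths are illustrative).
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install && npm run build
      - uses: actions/upload-artifact@v2
        with:
          name: webpack artifacts
          path: public/
  test:
    needs: build # test waits for build to finish
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/download-artifact@v2
        with:
          name: webpack artifacts
          path: public
      - run: npm install && npm test # should fail if public/ is missing
```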

Failure in Step 14, github-learning-lab bot fails to comment on PR "Validation Failed"

bug report

Description:
The GitHub Actions for CI course fails to progress past step 14; the bot never posts the next-step instructions for the Learning Lab user.

Context:

Debug Issue raised in Learning Lab templated repository here

Error in github-actions:-continuous-integration:automate-the-review-process:createPullRequestComment[2]

HttpError: {"message":"Validation Failed","errors":[{"resource":"PullRequestReviewComment","code":"invalid","field":"pull_request_review_thread.path"},{"resource":"PullRequestReviewComment","code":"missing_field","field":"pull_request_review_thread.diff_hunk"}],"documentation_url":"https://docs.github.com/rest"}
    at /usr/src/learning-lab/node_modules/@octokit/rest/lib/request/request.js:72:19
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async Context.runActions (/usr/src/learning-lab/lib/context.js:233:24)
    at async Course.runHandler (/usr/src/learning-lab/lib/course.js:174:32)

runner os with strategy value

Bug Report

Current behavior
Tests run only on ubuntu-latest.

Reproduction

  1. view actions -> test results -> os

Expected behavior
The OS should differ between the Ubuntu and Windows job instances.

Possible solution
pull request #49
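The usual cause of this symptom is a hard-coded `runs-on` value that ignores the matrix. A sketch of the intended shape (matrix values are illustrative):

```yaml
# Sketch: run the test job once per OS in the matrix.
jobs:
  test:
    runs-on: ${{ matrix.os }} # must reference the matrix, not a hard-coded OS
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
    steps:
      - uses: actions/checkout@v2
      - run: npm install && npm test
```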

Flesh out responses

There are currently some stubbed out responses. We should ensure they're properly fleshed out with the format and elements that are used in all other Learning Lab courses. This includes elements like:

  • the Activity indicators (example)
  • each terminal response should explain what it's waiting for before continuing
  • separating out informational sections from call-to-action (CTAs)
  • flesh out troubleshooting-specific responses with common mistakes to help a learner that might be stuck

Step 3 Missing 'Contains the compiled JavaScript' test in list of failing tests

Bug Report

Current behavior

When user comments 'Contains the compiled JavaScript' in response to '⌨️ Activity: Tell the bot which test is failing so we can fix it', github-learning-lab bot doesn't recognize it as one of the expected failing tests. However, this is indeed one of the failing tests.

https://github.com/achekerylla/github-actions-for-ci/pull/2

Screenshot (7)_LI

https://github.com/achekerylla/github-actions-for-ci/pull/2/checks?check_run_id=3779688078

Screenshot (8)_LI

Reproduction
Steps to reproduce the behavior in the course:

  1. Go to my user comment 'Contains the compiled JavaScript' at https://github.com/achekerylla/github-actions-for-ci/pull/2#issuecomment-932851874
  2. Scroll down to the next comment from github-learning-lab bot 'That wasn't the test name I expected, but that's alright. If you typed something slightly different than what I looked for that may explain it.'
  3. See error in that comment

Expected behavior
Expected github-learning-lab bot to recognize 'Contains the compiled JavaScript' as one of the expected failing tests.

Possible solution
PR Add contains test to list of failing tests #60

Additional context
In test output for 'Contains the compiled JavaScript', the failure is here

       5 |   it('Contains the compiled JavaScript', async () => {
       6 |     const data = fs.readFileSync('./public/main.js', 'utf8')
    >  7 |     expect(data).toMatchSnapshot()
         |                  ^
       8 |   })
       9 | })
      10 |

      at Object.<anonymous> (__test__/game.test.js:7:18)

Comparing the snapshot and received output (the JS test output rewrapped to multi-line format) with git diff --no-index shows that the difference is in the class r constructor.

$ git diff --no-index ./snapshot.js ./received.js
diff --git a/./snapshot.js b/./received.js
index 2cff790..5b01bee 100644
--- a/./snapshot.js
+++ b/./received.js
@@ -50,7 +50,7 @@
   }));
   class r {
     constructor(t, e) {
-      this.p1 = t, this.p2 = e, this.board = [
+      this.p1 = t, this.p2 = "Bananas", this.board = [
         [null, null, null],
         [null, null, null],
         [null, null, null]

In Step 11, suggested change is on wrong line of file

Bug Report

Current behavior
In Step 11, a code change is suggested by the bot. There is an associated "suggested change" in the pull request, but it is not applied to the correct location in the file.

Reproduction
Steps to reproduce the behavior in the course:

  1. Go through the course until reaching Step 11, skipping over the other bugs in the course as needed
  2. Look at the "suggested change" provided in Step 11

Expected behavior
The suggested change should match the change described by the bot.

Possible solution
Seems like the course itself needs some better CI, since there are multiple bugs in it currently. 😅

Additional context
Here's what the bot asks for in the instructions:
(screenshot)

Here's what the suggested change is. Note that it is being applied in the test section rather than the build section.
(screenshot)

Node starter workflow is causing problems

Bug Report

The starter workflow for Node.js has changed, and because of that the Learning Lab bot is no longer responding the way we expect it to.

Further troubleshooting is needed to figure out exactly what is missing or changed. This is coming from the community forum.

Suggested changes no longer match diff

Bug Report

Current behavior
The starter workflow for Node.js is different (some prior history in #41 and #42), and some of the "copy and paste" workflows throughout the course use the older versions.

Reproduction
Steps to reproduce the behavior in the course:

  1. Join the course
  2. Whenever you see a "copy and paste" workflow, use that.
  3. Notice two bugs when you work this way: commit suggestions no longer match the line numbers, and the copy-and-paste workflows no longer match what's on the diff.

Expected behavior
The copy-and-paste workflows should match the starter workflow as much as possible.

Possible solution

  • Can replace the text in the copy-and-paste workflows to match starter workflow.
  • Alternatively, can remove the copy-and-paste options.
  • Before suggesting a change, we could test to see if the line we're suggesting on is what we expect
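The third suggestion above could be sketched as a small pre-check. This is a hypothetical illustration; the function name and sample workflow text are assumptions.

```javascript
// Hypothetical sketch: before posting a suggestion on line N, verify that
// the line actually contains what the course expects from the starter workflow.
function lineMatches(fileContents, lineNumber, expectedText) {
  const lines = fileContents.split('\n')
  const line = lines[lineNumber - 1] // lineNumber is 1-indexed
  return line !== undefined && line.trim() === expectedText.trim()
}

const workflow = 'name: Node CI\non: [push]\n'
console.log(lineMatches(workflow, 2, 'on: [push]'))         // true
console.log(lineMatches(workflow, 2, 'on: [pull_request]')) // false
```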

Additional context
Screen Shot 2020-07-30 at 12 28 24 PM
