Merge branch 'main' into techievivek_workspace_chat_for_first_time_user
techievivek committed Oct 19, 2022
2 parents 321990e + ca501d9 commit 59f7091
Showing 141 changed files with 3,877 additions and 1,007 deletions.
40 changes: 23 additions & 17 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -5,17 +5,22 @@

### Fixed Issues
<!---
Please replace GH_LINK with the link to the GitHub issue this Pull Request is fixing.
1. Please replace GH_LINK with a URL link to the GitHub issue this Pull Request is fixing.
2. Please replace PROPOSAL: GH_LINK_ISSUE(COMMENT) with a URL link to your GitHub comment, which contains the approved proposal (i.e. the proposal that was approved by Expensify).
Do NOT add the special GH keywords like `fixed` etc, we have our own process of managing the flow.
It MUST be an entire link to the issue; otherwise, the linking will not work as expected.
It MUST be an entire link to the GitHub issue and your comment proposal; otherwise, the linking will not work as expected.
Make sure this section looks similar to this (you can link multiple issues using the same formatting, just add a new line):
$ https://github.com/Expensify/App/issues/<number-of-the-issue>
$ https://github.com/Expensify/App/issues/<number-of-the-issue(comment)>
Do NOT only link the issue number like this: $ #<number-of-the-issue>
--->
$ GH_LINK
$ GH_LINK
PROPOSAL: GH_LINK_ISSUE(COMMENT)


### Tests
<!---
@@ -31,6 +36,20 @@ For example:

- [ ] Verify that no errors appear in the JS console

### QA Steps
<!---
Add a numbered list of manual tests that can be performed by our QA engineers on the staging environment to validate that your changes work on all platforms, and that there are no regressions present.
Add any additional QA steps if test steps are unique to a particular platform.
Manual test steps should be written so that the QA engineer can repeat and verify one or more expected outcomes in the staging environment.
For example:
1. Click on the text input to bring it into focus
2. Upload an image via copy paste
3. Verify a modal appears displaying a preview of that image
--->

- [ ] Verify that no errors appear in the JS console

### PR Review Checklist
<!--
This is a checklist for PR authors & reviewers. Please make sure to complete all tasks and check them off once you do, or else Expensify has the right not to merge your PR!
@@ -95,6 +114,7 @@ The reviewer will copy/paste it into a new comment and complete it after the aut
- [ ] I verified the steps cover any possible failure scenarios (i.e. verify an input displays the correct error message if the entered data is not correct)
- [ ] I turned off my network connection and tested it while offline to ensure it matches the expected behavior (i.e. verify the default avatar icon is displayed if app is offline)
- [ ] I checked that screenshots or videos are included for tests on [all platforms](https://github.com/Expensify/App/blob/main/contributingGuides/CONTRIBUTING.md#make-sure-you-can-test-on-all-platforms)
- [ ] I included screenshots or videos for tests on [all platforms](https://github.com/Expensify/App/blob/main/contributingGuides/CONTRIBUTING.md#make-sure-you-can-test-on-all-platforms)
- [ ] I verified tests pass on **all platforms** & I tested again on:
- [ ] iOS / native
- [ ] Android / native
@@ -135,20 +155,6 @@ The reviewer will copy/paste it into a new comment and complete it after the aut

</details>

### QA Steps
<!---
Add a numbered list of manual tests that can be performed by our QA engineers on the staging environment to validate that your changes work on all platforms, and that there are no regressions present.
Add any additional QA steps if test steps are unique to a particular platform.
Manual test steps should be written so that the QA engineer can repeat and verify one or more expected outcomes in the staging environment.
For example:
1. Click on the text input to bring it into focus
2. Upload an image via copy paste
3. Verify a modal appears displaying a preview of that image
--->

- [ ] Verify that no errors appear in the JS console

### Screenshots
<!-- Add screenshots for all platforms tested. Pull requests won't be merged unless the screenshots show the app was tested on all platforms.-->

@@ -35,7 +35,6 @@ const completedAuthorChecklist = `- [x] I linked the correct issue in the \`###
- [x] If a new component is created I verified that:
- [x] A similar component doesn't exist in the codebase
- [x] All props are defined accurately and each prop has a \`/** comment above it */\`
- [x] Any functional components have the \`displayName\` property
- [x] The file is named correctly
- [x] The component has a clear name that is non-ambiguous and the purpose of the component can be inferred from the name alone
- [x] The only data being stored in the state is data necessary for rendering and nothing else
@@ -58,6 +57,7 @@ const completedReviewerChecklist = `- [x] I have verified the author checklist i
- [x] I verified the steps cover any possible failure scenarios (i.e. verify an input displays the correct error message if the entered data is not correct)
- [x] I turned off my network connection and tested it while offline to ensure it matches the expected behavior (i.e. verify the default avatar icon is displayed if app is offline)
- [x] I checked that screenshots or videos are included for tests on [all platforms](https://github.com/Expensify/App/blob/main/contributingGuides/CONTRIBUTING.md#make-sure-you-can-test-on-all-platforms)
- [x] I included screenshots or videos for tests on [all platforms](https://github.com/Expensify/App/blob/main/contributingGuides/CONTRIBUTING.md#make-sure-you-can-test-on-all-platforms)
- [x] I verified tests pass on **all platforms** & I tested again on:
- [x] iOS / native
- [x] Android / native
@@ -82,7 +82,6 @@ const completedReviewerChecklist = `- [x] I have verified the author checklist i
- [x] If a new component is created I verified that:
- [x] A similar component doesn't exist in the codebase
- [x] All props are defined accurately and each prop has a \`/** comment above it */\`
- [x] Any functional components have the \`displayName\` property
- [x] The file is named correctly
- [x] The component has a clear name that is non-ambiguous and the purpose of the component can be inferred from the name alone
- [x] The only data being stored in the state is data necessary for rendering and nothing else
3 changes: 1 addition & 2 deletions .github/actions/javascript/contributorChecklist/index.js
@@ -45,7 +45,6 @@ const completedAuthorChecklist = `- [x] I linked the correct issue in the \`###
- [x] If a new component is created I verified that:
- [x] A similar component doesn't exist in the codebase
- [x] All props are defined accurately and each prop has a \`/** comment above it */\`
- [x] Any functional components have the \`displayName\` property
- [x] The file is named correctly
- [x] The component has a clear name that is non-ambiguous and the purpose of the component can be inferred from the name alone
- [x] The only data being stored in the state is data necessary for rendering and nothing else
@@ -68,6 +67,7 @@ const completedReviewerChecklist = `- [x] I have verified the author checklist i
- [x] I verified the steps cover any possible failure scenarios (i.e. verify an input displays the correct error message if the entered data is not correct)
- [x] I turned off my network connection and tested it while offline to ensure it matches the expected behavior (i.e. verify the default avatar icon is displayed if app is offline)
- [x] I checked that screenshots or videos are included for tests on [all platforms](https://github.com/Expensify/App/blob/main/contributingGuides/CONTRIBUTING.md#make-sure-you-can-test-on-all-platforms)
- [x] I included screenshots or videos for tests on [all platforms](https://github.com/Expensify/App/blob/main/contributingGuides/CONTRIBUTING.md#make-sure-you-can-test-on-all-platforms)
- [x] I verified tests pass on **all platforms** & I tested again on:
- [x] iOS / native
- [x] Android / native
@@ -92,7 +92,6 @@ const completedReviewerChecklist = `- [x] I have verified the author checklist i
- [x] If a new component is created I verified that:
- [x] A similar component doesn't exist in the codebase
- [x] All props are defined accurately and each prop has a \`/** comment above it */\`
- [x] Any functional components have the \`displayName\` property
- [x] The file is named correctly
- [x] The component has a clear name that is non-ambiguous and the purpose of the component can be inferred from the name alone
- [x] The only data being stored in the state is data necessary for rendering and nothing else
@@ -28,5 +28,5 @@ inputs:
description: "Web job result ('success', 'failure', 'cancelled', or 'skipped')"
required: true
runs:
using: "node12"
using: "node16"
main: "./index.js"
79 changes: 79 additions & 0 deletions .github/workflows/e2ePerformanceRegressionTests.yml
@@ -0,0 +1,79 @@
name: Run e2e performance regression tests

on:
pull_request:
types: [labeled]

jobs:
e2e-tests:
if: ${{ github.event.label.name == 'e2e' }}
name: "Run e2e performance regression tests"
# Although the tests will run on an Android emulator, we use macOS since it's more performant
runs-on: macos-11
steps:
- uses: Expensify/App/.github/actions/composite/setupNode@main

- uses: ruby/setup-ruby@08245253a76fa4d1e459b7809579c62bd9eb718a
with:
ruby-version: '2.7'
bundler-cache: true

# Improve emulator startup time, see https://github.com/marketplace/actions/android-emulator-runner
- name: Gradle cache
uses: gradle/gradle-build-action@v2

- name: AVD cache
uses: actions/cache@v3
id: avd-cache
with:
path: |
~/.android/avd/*
~/.android/adb*
key: avd-28

- name: Create AVD and generate snapshot for caching
if: steps.avd-cache.outputs.cache-hit != 'true'
uses: reactivecircus/android-emulator-runner@v2
with:
api-level: 28
ram-size: 3072M
heap-size: 512M
force-avd-creation: false
emulator-options: -no-window -gpu swiftshader_indirect -noaudio -no-boot-anim -camera-back none
disable-animations: false
script: echo "Generated AVD snapshot for caching."

# Note: if the Android build fails, the logs can be incomplete. It can help to run the build once manually to get a full log.
- name: Preheat build system
env:
JAVA_HOME: ${{ env.JAVA_HOME_11_X64 }}
run: |
npm run android-build-e2e
- name: Start emulator and run tests
id: tests
uses: reactivecircus/android-emulator-runner@v2
env:
JAVA_HOME: ${{ env.JAVA_HOME_11_X64 }}
INTERACTION_TIMEOUT: 120000 # 2 minutes
# When logging progress, only refresh the log every 30 seconds
LOGGER_PROGRESS_REFRESH_RATE: 30000
# TODO: remove this once the implementation is done.
baseline: dev/ci-e2e-tests
with:
api-level: 28
ram-size: 3072M
heap-size: 512M
force-avd-creation: false
emulator-options: -no-snapshot-save -no-window -gpu swiftshader_indirect -noaudio -no-boot-anim -camera-back none
disable-animations: true
script: npm run test:e2e

- name: If tests failed, upload logs and video
if: ${{ failure() && steps.tests.conclusion == 'failure' }}
uses: actions/upload-artifact@v3
with:
name: test-failure-logs
path: e2e/.results
retention-days: 5
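
For reference, the new workflow's heavy lifting happens in two `run` steps: `npm run android-build-e2e` builds the app once (the "Preheat build system" step), and `npm run test:e2e` drives the suite on the emulator, writing its output to `e2e/.results` (the path uploaded on failure and added to `.gitignore` below). A minimal local sketch, assuming those npm scripts behave the same outside CI, an API 28 emulator is already running, and `JAVA_HOME` points at a JDK 11:

```bash
# Local sketch only - the workflow above is the source of truth.
export INTERACTION_TIMEOUT=120000          # 2 minutes, as in the workflow env
export LOGGER_PROGRESS_REFRESH_RATE=30000  # refresh progress logs every 30 seconds

npm run android-build-e2e   # build the e2e app once; rerun manually if CI logs are incomplete
npm run test:e2e            # run the performance regression suite; results land in e2e/.results
```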

15 changes: 4 additions & 11 deletions .github/workflows/platformDeploy.yml
@@ -210,15 +210,8 @@ jobs:
steps:
- uses: Expensify/App/.github/actions/composite/setupNode@main

- name: Setup python
run: sudo apt-get install python3-setuptools

- name: Setup Cloudflare CLI
run: |
# Pip 21 doesn't support python 3.5, so use the version before it
sudo python3 -m pip install --upgrade pip==20.3.4
pip3 install wheel # need wheel before cloudflare, this is the only way to ensure order.
pip3 install cloudflare
run: pip3 install cloudflare

- name: Configure AWS Credentials
# Version: 1.5.5
@@ -290,7 +283,7 @@ jobs:
channel: '#announce',
attachments: [{
color: 'good',
text: `🎉️ Successfully deployed ${process.env.AS_REPO} v${{ env.VERSION }} to ${{ fromJSON(env.SHOULD_DEPLOY_PRODUCTION) && 'production' || 'staging' }} 🎉️`,
text: `🎉️ Successfully deployed ${process.env.AS_REPO} <https://github.com/Expensify/App/releases/tag/${{ env.VERSION }}|${{ env.VERSION }}> to ${{ fromJSON(env.SHOULD_DEPLOY_PRODUCTION) && 'production' || 'staging' }} 🎉️`,
}]
}
env:
@@ -306,7 +299,7 @@
channel: '#deployer',
attachments: [{
color: 'good',
text: `🎉️ Successfully deployed ${process.env.AS_REPO} v${{ env.VERSION }} to ${{ fromJSON(env.SHOULD_DEPLOY_PRODUCTION) && 'production' || 'staging' }} 🎉️`,
text: `🎉️ Successfully deployed ${process.env.AS_REPO} <https://github.com/Expensify/App/releases/tag/${{ env.VERSION }}|${{ env.VERSION }}> to ${{ fromJSON(env.SHOULD_DEPLOY_PRODUCTION) && 'production' || 'staging' }} 🎉️`,
}]
}
env:
@@ -323,7 +316,7 @@
channel: '#expensify-open-source',
attachments: [{
color: 'good',
text: `🎉️ Successfully deployed ${process.env.AS_REPO} v${{ env.VERSION }} to production 🎉️`,
text: `🎉️ Successfully deployed ${process.env.AS_REPO} <https://github.com/Expensify/App/releases/tag/${{ env.VERSION }}|${{ env.VERSION }}> to production 🎉️`,
}]
}
env:
45 changes: 41 additions & 4 deletions .github/workflows/test.yml
@@ -6,10 +6,38 @@ on:
types: [opened, synchronize]
branches-ignore: [staging, production]

env:
# Number of parallel jobs for jest tests
CHUNKS: 3
jobs:
config:
runs-on: ubuntu-latest
name: Define matrix parameters
outputs:
MATRIX: ${{ steps.set-matrix.outputs.MATRIX }}
JEST_CHUNKS: ${{ steps.set-matrix.outputs.JEST_CHUNKS }}
steps:
- name: Set Matrix
id: set-matrix
uses: actions/github-script@v6
with:
# Generate the matrix array [1, 2, ..., CHUNKS] for the test job
script: |
core.setOutput('MATRIX', Array.from({ length: Number(process.env.CHUNKS) }, (v, i) => i + 1));
core.setOutput('JEST_CHUNKS', Number(process.env.CHUNKS) - 1);
test:
needs: config
if: ${{ github.actor != 'OSBotify' || github.event_name == 'workflow_call' }}
runs-on: ubuntu-latest
name: test (job ${{ fromJSON(matrix.chunk) }})
env:
CI: true
strategy:
fail-fast: false
matrix:
chunk: ${{fromJson(needs.config.outputs.MATRIX)}}

steps:
- uses: Expensify/App/.github/actions/composite/setupNode@main

@@ -23,10 +51,19 @@
exit 1
fi
- name: Jest Unit Tests
run: npm run test
env:
CI: true
- name: Cache Jest cache
id: cache-jest-cache
uses: actions/cache@v1
with:
path: .jest-cache
key: ${{ runner.os }}-jest

- name: All Unit Tests
if: ${{ fromJSON(matrix.chunk) < fromJSON(env.CHUNKS) }}
# Split the Jest test files into multiple chunks/groups and execute them in parallel across different jobs/runners.
run: npx jest --listTests --json | jq -cM '[_nwise(length / ${{ fromJSON(needs.config.outputs.JEST_CHUNKS) }} | ceil)]' | jq '[[]] + .' | jq '.[${{ fromJSON(matrix.chunk) }}] | .[] | @text' | xargs npm test

- name: Pull Request Tests
# Pull-request-related tests run in a separate runner, in parallel.
if: ${{ fromJSON(matrix.chunk) == fromJSON(env.CHUNKS) }}
run: tests/unit/getPullRequestsMergedBetweenTest.sh
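
To unpack the new parallelization: the `config` job emits `MATRIX` as `[1, 2, 3]` (for `CHUNKS: 3`) and `JEST_CHUNKS` as `2`; chunks below `CHUNKS` each run a slice of the Jest suite, while the last chunk runs `tests/unit/getPullRequestsMergedBetweenTest.sh`. The "All Unit Tests" one-liner lists every Jest test file, splits the list into `JEST_CHUNKS` groups with jq's `_nwise` helper, prepends an empty group so group N lives at index N, and feeds this runner's group to `npm test`. A rough local expansion of that pipeline, with the matrix values hard-coded to illustrative numbers (`CHUNK=1` below stands in for `matrix.chunk`):

```bash
# Local sketch of the "All Unit Tests" step; the workflow substitutes these from the job matrix.
JEST_CHUNKS=2   # CHUNKS - 1, as computed by the config job
CHUNK=1         # this runner's slot (1..JEST_CHUNKS run Jest; the final chunk runs the PR tests)

npx jest --listTests --json |                        # JSON array of every Jest test file
  jq -cM "[_nwise(length / $JEST_CHUNKS | ceil)]" |  # split the list into JEST_CHUNKS groups
  jq '[[]] + .' |                                    # prepend an empty group so group N sits at index N
  jq ".[$CHUNK] | .[] | @text" |                     # select this runner's group as quoted file paths
  xargs npm test                                     # pass those paths through to `npm test` (Jest)
```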
5 changes: 5 additions & 0 deletions .gitignore
@@ -89,3 +89,8 @@ storybook-static
# Jest coverage report
/coverage.data
/coverage/

.jest-cache

# E2E test reports
e2e/.results/