Embed 2 lets clients set up Ada Web Chat in their web applications. These docs are for internal use only; for client-facing docs, check out our docs repo.
These instructions will get Embed 2 up and running on your local machine for development and testing purposes. See the deployment section for notes on how to deploy the project to a live system.
1. Start Embed 2
Run this command:
yarn && yarn start
This will install dependencies and start the webpack dev server.
2. Open the example page
🎉 Head over to http://test.localhost:9001/example/. You should now be good to go! 🎉
The subdomain test is the bot handle; you can also use a different handle to point the example page at a different local bot.
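Concretely, switching bots is just a subdomain swap; a minimal sketch, with `my-bot` standing in for a real handle that exists in your local environment:

```shell
# The subdomain portion of the URL is the bot handle ("test" by default).
# "my-bot" is a hypothetical handle; use one from your local environment.
HANDLE="my-bot"
echo "http://${HANDLE}.localhost:9001/example/"
```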
If you are new to Embed 2, the next step is to familiarize yourself with its architecture. You will find a README.md in each of the major directories explaining how that part of the project works.
You can also test different configuration options in
Finally, it is highly recommended that you install the following plugins (or their equivalents) in your text editor / IDE:
Linting and Type Checking
You can check for ESLint violations and type errors by running:
These checks also run automatically during CI.
If you want to run Sentry locally, you will need to set the value for the Sentry DSN in your .env file. Create and open your .env file:
cp .env.example .env
vim .env
Find the Sentry DSN for the Embed 2 project at https://docs.sentry.io/error-reporting/configuration/?platform=browser by selecting "Embed" from the dropdown menu above the code snippet. Copy this value (a string that looks like a URL) into your .env file.
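Once you have the DSN, add it to your .env. A sketch, assuming the variable is named SENTRY_DSN (check .env.example for the key Embed 2 actually reads):

```shell
# Append the DSN to .env. "SENTRY_DSN" is an assumed variable name;
# the placeholder value mirrors the usual DSN shape from the Sentry UI.
echo 'SENTRY_DSN=https://<public-key>@sentry.io/<project-id>' >> .env
```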
Checking Bundle Size
Embed 2 is the entry point to Ada's web chat, and is downloaded more than any other Ada script. As such, keeping the bundle small is extremely important. We do this by leveraging the browser cache and using lightweight modules. Additionally, separate legacy bundles are generated, so the majority of users on modern browser versions do not need to fetch superfluous polyfills.
Bundle size should be checked periodically. You can check the bundle size by running:
You will notice that many tabs open when running this command. Because Embed 2 is split into many sub-applications (representing framed components), a bundle analyzer is run for each one. The total bundle size is equal to the sum of the individual bundle sizes, plus external dependencies.
Unit testing is done with Karma and Jasmine. Tests can be run on a local Chrome browser with:
TestCafe can run E2E tests on real browsers via LambdaTest. These tests will run automatically during the CI pipeline, but can also be run locally.
To run them locally, you will first need to add LT_ACCESS_KEY to your .env file. You can find your username and access key in the LambdaTest dashboard. If you have not used LambdaTest before, follow the setup guide here.
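A sketch of wiring up the credentials; LT_ACCESS_KEY is named above, while LT_USERNAME is an assumed companion key for the username — confirm both against .env.example:

```shell
# Append LambdaTest credentials to .env (hypothetical sketch).
# LT_USERNAME is an assumed variable name; check .env.example for the
# exact keys the test runner expects.
cat >> .env <<'EOF'
LT_USERNAME=<your-lambdatest-username>
LT_ACCESS_KEY=<your-lambdatest-access-key>
EOF
```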
Once you have added valid keys, you can now run TestCafe with:
Note that this is the same command that runs during CI, and will make requests against the production API. If you would like to run against your local API, you can run:
Cypress E2E tests can be run using:
Steps to Deploy
Follow the instructions listed on this Notion page
Deployment of the beta script is handled automatically by CircleCI when a branch is merged into master. Once merged, two different scripts will be added to the Ada CDN:
# Versioned
https://static.ada.support/embed-beta/legacy/entry/<FIRST 7 CHAR OF COMMIT HASH>/embed2.beta.js
# Versionless
https://static.ada.support/embed2.beta.js
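Assuming the CDN layout above, the versioned URL for a given merge commit can be reconstructed from the first 7 characters of its hash, e.g.:

```shell
# Derive the versioned beta URL for the current commit (sketch; assumes
# the CDN path layout shown above and that you are inside the repo).
HASH=$(git rev-parse HEAD | cut -c1-7)
echo "https://static.ada.support/embed-beta/legacy/entry/${HASH}/embed2.beta.js"
```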
Before deploying to production, you should manually smoke test the beta script here to make sure it is working.
Once a beta script has been created, a production deployment will be "held" until approved by an Embed 2 admin. To approve the deploy, open the CircleCI job in question, and click on "Approve Job". You can read more about manual approval here.
Deploying production will create the following scripts:
# Versioned
https://static.ada.support/embed/legacy/entry/<FIRST 7 CHAR OF COMMIT HASH>/embed2.js
# Versionless
https://static.ada.support/embed2.js
If you require access to the Embed Production Deploys team (needed for production deployment), please reach out to an Engineering admin.
Loom explanation: https://www.loom.com/share/22f7b0060b3f46c8b3781afb1aae4b89
There are artificial timeouts in the application that may prevent you from seeing errors in the feature you are working on. The longest timeout is 60000 ms (60 s). Please shorten it locally when testing your work. You can adjust this here:
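The exact file is not specified here; a quick (hypothetical) way to locate the constant, assuming the 60000 ms value appears literally in the source tree:

```shell
# Search the source tree for the 60s timeout constant so it can be
# shortened locally while testing (the src/ path is an assumption).
grep -rn '60000' src/ || true
```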