Merge branch 'master' of github.com:Budibase/budibase into nord-theme

This commit is contained in:
Andrew Kingston 2022-07-26 11:46:28 +01:00
commit ec6e17748a
1039 changed files with 44267 additions and 21983 deletions

.dockerignore Normal file

@ -0,0 +1,9 @@
packages/server/node_modules
packages/builder
packages/frontend-core
packages/backend-core
packages/worker/node_modules
packages/cli
packages/client
packages/bbui
packages/string-templates


@ -6,4 +6,5 @@ packages/server/coverage
packages/server/client
packages/builder/.routify
packages/builder/cypress/support/queryLevelTransformerFunction.js
packages/builder/cypress/support/queryLevelTransformerFunctionWithData.js
packages/builder/cypress/support/queryLevelTransformerFunctionWithData.js
packages/builder/cypress/reports


@ -1,76 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at community@budibase.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq

.github/CODE_OF_CONDUCT.md vendored Symbolic link

@ -0,0 +1 @@
../docs/CODE_OF_CONDUCT.md


@ -1,206 +0,0 @@
# Contributing
From opening a bug report to creating a pull request: every contribution is appreciated and welcome. If you're planning to implement a new feature or change the API please create an issue first. This way we can ensure that your precious work is not in vain.
## Not Sure Where to Start?
Budibase is a low-code web application builder that creates svelte-based web applications.
Budibase is a monorepo managed by [lerna](https://github.com/lerna/lerna). Lerna manages the building and publishing of the budibase packages. At a high level, here are the packages that make up budibase.
- **packages/builder** - contains code for the budibase builder client side svelte application.
- **packages/client** - A module that runs in the browser responsible for reading JSON definition and creating living, breathing web apps from it.
- **packages/server** - The budibase server. This [Koa](https://koajs.com/) app is responsible for serving the JS for the builder and budibase apps, as well as providing the API for interaction with the database and file system.
- **packages/worker** - This [Koa](https://koajs.com/) app is responsible for providing global apis for managing your budibase installation. Authentication, Users, Email, Org and Auth configs are all provided by the worker.
## Contributor License Agreement (CLA)
In order to accept your pull request, we need you to submit a CLA. You only need to do this once. If you are submitting a pull request for the first time, just submit a Pull Request and our CLA Bot will give you instructions on how to sign the CLA before merging your Pull Request.
All contributors must sign an [Individual Contributor License Agreement](https://github.com/budibase/budibase/blob/next/.github/cla/individual-cla.md).
If contributing on behalf of your company, your company must sign a [Corporate Contributor License Agreement](https://github.com/budibase/budibase/blob/next/.github/cla/corporate-cla.md). If so, please contact us via community@budibase.com.
## Glossary of Terms
To understand the budibase API, it can be helpful to understand the top level entities that make up Budibase.
### Client
A client represents a single budibase customer. Each budibase client will have 1 or more budibase servers. Every client is assigned a unique ID.
### App
A client can have one or more budibase applications. Budibase applications would be things like "Developer Inventory Management" or "Goat Herder CRM". Think of a budibase application as a tree.
### Database
An App can have one or more databases. Keeping with our [dendrology](https://en.wikipedia.org/wiki/Dendrology) analogy - think of a database as a branch on the tree. Databases are used to keep data separate for different instances of your app. For example, if you had a CRM app, you may create a database for your US office, and a database for your Australian office. Databases allow us to support [multitenancy](https://www.gartner.com/en/information-technology/glossary/multitenancy) in budibase applications.
### Table
Tables in budibase are almost akin to tables in relational databases. A table may be a "Car" or an "Employee". They are the main building blocks for the creation and management of backend data in budibase.
### View
A View is an advanced feature in budibase that allows you to write a custom query using [MapReduce](https://pouchdb.com/guides/queries.html) queries. Views enable powerful query functionality and calculations, allowing you to do more with your data.
### Page
A page in budibase is actually a single, self-contained svelte web app. There are only 2 pages in budibase. The **login** page and the **main** page.
### Screen
A screen is a component within a single page. Generally, screens represent client side routes, and can be switched without refreshing the page.
### Component
A component is the basic frontend building block of a budibase app.
### Component Library
Component libraries are collections of components as well as the definition of their props contained in a file called `components.json`.
## Contributing to Budibase
* Please maintain the existing code style.
* Please try to keep your commits small and focused.
* Please write tests.
* If the project diverges from your branch, please rebase instead of merging. This makes the commit graph easier to read.
* Once your work is completed, please raise a PR against the `develop` branch with some information about what has changed and why.
### Getting Started For Contributors
#### 1. Prerequisites
NodeJS Version `14.x.x`
*yarn* - `npm install -g yarn`
*jest* - `npm install -g jest`
#### 2. Clone this repository
`git clone https://github.com/Budibase/budibase.git`
then `cd ` into your local copy.
#### 3. Install and Build
To develop the Budibase platform you'll need [Docker](https://www.docker.com/) and [Docker Compose](https://docs.docker.com/compose/) installed.
##### Quick method
`yarn setup` will check that all necessary components are installed and set up the repo for usage.
##### Manual method
The following commands can be executed to manually get Budibase up and running (assuming Docker/Docker Compose has been installed).
`yarn` to install project dependencies
`yarn bootstrap` will install all budibase modules and symlink them together using lerna.
`yarn build` will build all budibase packages.
#### 4. Running
To run the budibase server and builder in dev mode (i.e. with live reloading):
1. Open a new console
2. `yarn dev` (from root)
3. Access the builder on http://localhost:10000/builder
This will enable watch mode for both the builder app, server, client library and any component libraries.
#### 5. Debugging using VS Code
To debug the budibase server and worker a VS Code launch configuration has been provided.
Visit the debug window and select `Budibase Server` or `Budibase Worker` to debug the respective component.
Alternatively to start both components simultaneously select `Start Budibase`.
In addition to the above, the remaining budibase components may be run in dev mode using: `yarn dev:noserver`.
#### 6. Cleanup
If you wish to delete all the apps created in development and reset the environment then run the following:
1. `yarn nuke:docker` will wipe all the Budibase services
2. `yarn dev` will restart all the services
### Backend
For the backend we run [Redis](https://redis.io/), [CouchDB](https://couchdb.apache.org/), [MinIO](https://min.io/) and [NGINX](https://www.nginx.com/) in Docker compose. This means that to develop Budibase you will need Docker and Docker compose installed. The backend services are then run separately as Node services with nodemon so that they can be debugged outside of Docker.
### Data Storage
When you are running locally, budibase stores data on disk using docker volumes. The volumes and the types of data associated with each are:
- `redis_data`
  - Sessions, email tokens
- `couchdb3_data`
  - Global and app databases
- `minio_data`
  - App manifest, budibase client, static assets
### Development Modes
A combination of environment variables controls the mode that budibase runs in.
Yarn commands can be used to mimic the different modes that budibase can be run in:
#### Self Hosted
The default mode. A single tenant installation with no usage restrictions.
To enable this mode, use:
```
yarn mode:self
```
#### Cloud
The cloud mode, with account portal turned off.
To enable this mode, use:
```
yarn mode:cloud
```
#### Cloud & Account
The cloud mode, with account portal turned on. This is a replica of the mode that runs at https://budibase.app
To enable this mode, use:
```
yarn mode:account
```
### CI
An overview of the CI pipelines can be found [here](./workflows/README.md)
### Troubleshooting
Sometimes, things go wrong. This can be due to incompatible updates on the budibase platform. To clear down your development environment and start again follow **Step 6. Cleanup**, then proceed from **Step 3. Install and Build** in the setup guide above. You should have a fresh Budibase installation.
### Running tests
#### End-to-end Tests
Budibase uses Cypress to run a number of E2E tests. To run the tests execute the following command in the root folder:
```
yarn test:e2e
```
Or if you are in the builder you can run `yarn cy:test`.
### Other Useful Information
* The contributors are listed in [AUTHORS.md](https://github.com/Budibase/budibase/blob/master/.github/AUTHORS.md) (add yourself).
* This project uses a modified version of the MPLv2 license, see [LICENSE](https://github.com/budibase/server/blob/master/LICENSE).
* We use the [C4 (Collective Code Construction Contract)](https://rfc.zeromq.org/spec:42/C4/) process for contributions.
Please read this if you are unfamiliar with it.

.github/CONTRIBUTING.md vendored Symbolic link

@ -0,0 +1 @@
../docs/CONTRIBUTING.md


@ -7,6 +7,15 @@ assignees: ''
---
**Hosting**
<!-- Delete as appropriate -->
- Self
  - Method: <method> <!-- One of: k8s, docker single image, docker compose, digital ocean: -->
  - Budibase Version: <version> <!-- e.g. 1.0.105 -->
  - App Version: <version> <!-- Indicate app version if bug is related to an application -->
- Cloud
  - Tenant ID: <tenantId> <!-- shown in URL as <tenantID>.budibase.app -->
**Describe the bug**
A clear and concise description of what the bug is.
@ -22,6 +31,9 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**App Export**
If possible - please attach an export of your budibase application for debugging/reproduction purposes.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]

.github/stale.yml vendored

@ -14,7 +14,6 @@ staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions.
recent activity.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false


@ -6,7 +6,7 @@ Welcome to the budibase CI pipelines directory. This document details what each
## All CI Pipelines
### Note
- When running workflow dispatch jobs, ensure you always run them off the `master` branch. It defaults to `develop`, so double check before running any jobs.
- When running workflow dispatch jobs, ensure you always run them off the `master` branch. It defaults to `develop`, so double check before running any jobs. The exception to this is the `deploy-release` job, which must be run from the `develop` branch.
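If you kick these off with the GitHub CLI rather than the web UI, the ref can be pinned explicitly; a sketch using the workflow file names covered below:
```
# pin dispatch runs to the correct branch (deploy-release is the develop exception)
gh workflow run deploy-preprod.yml --ref master
gh workflow run deploy-release.yml --ref develop
```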
### Standard CI Build Job (budibase_ci.yml)
Triggers:
@ -24,14 +24,14 @@ The standard CI Build job is what runs when you raise a PR to develop or master.
Triggers:
- Push to develop
The job responsible for building, tagging and pushing docker images out to the test and staging environments.
The job responsible for building, tagging and pushing docker images out to the test and release environments.
- Installs all dependencies
- builds the project
- run the unit tests
- publish the budibase JS packages under a prerelease tag to NPM
- build, tag and push docker images under the `develop` tag to docker hub
These images will then be pulled by the test and staging environments, updating the latest automatically. Discord notifications are sent to the #infra channel when this occurs.
These images will then be pulled by the test and release environments, updating the latest automatically. Discord notifications are sent to the #infra channel when this occurs.
### Release Job (release.yml)
Triggers:
@ -57,8 +57,33 @@ This job relies on the release job to have run first, so the latest image is pus
- Build and release the budibase helm chart for kubernetes users
- Perform a github release with the latest version. You can see previous releases here (https://github.com/Budibase/budibase/releases)
### Deploy Release (deploy-release.yml)
Triggers:
- Manual Workflow Dispatch Trigger
### Cloud Deploy (deploy-cloud.yml)
This job is responsible for deploying to our release cloud Kubernetes environment. You must run the release job first, to ensure that the latest images have been built and pushed to Docker Hub. After kicking off this job, the following will occur:
- Checks out the release branch
- Pulls the latest `values.yaml` from budibase-infra, a private repo containing budibase's infrastructure configuration
- Gets the latest budibase version from `lerna.json`, if it hasn't been specified in the workflow when you kicked it off
- Configures AWS Credentials
- Deploys the helm chart in the budibase repo to our preproduction EKS cluster, injecting the `values.yaml` we pulled from budibase-infra
- Fires off a discord webhook in the #infra channel to show that the deployment completed successfully.
### Deploy Preprod (deploy-preprod.yml)
Triggers:
- Manual Workflow Dispatch Trigger
This job is responsible for deploying to our preprod cloud Kubernetes environment. You must run the release job first, to ensure that the latest images have been built and pushed to Docker Hub. After kicking off this job, the following will occur:
- Checks out the master branch
- Pulls the latest `values.yaml` from budibase-infra, a private repo containing budibase's infrastructure configuration
- Gets the latest budibase version from `lerna.json`, if it hasn't been specified in the workflow when you kicked it off
- Configures AWS Credentials
- Deploys the helm chart in the budibase repo to our preprod EKS cluster, injecting the `values.yaml` we pulled from budibase-infra
- Fires off a discord webhook in the #infra channel to show that the deployment completed successfully.
### Deploy Production (deploy-cloud.yml)
Triggers:
- Manual Workflow Dispatch Trigger
@ -90,4 +115,75 @@ This job is responsible for deploying to our production, cloud kubernetes enviro
### Rollback A Bad Cloud Deployment
- Kick off cloud deploy job
- Ensure you are running off master
- Enter the version number of the last known good version of budibase. For example `1.0.0`
- Enter the version number of the last known good version of budibase. For example `1.0.0`
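With the GitHub CLI the rollback might look like the sketch below (the `version` input name is illustrative, check the workflow file for the real one):
```
# re-deploy the last known good version from master
gh workflow run deploy-cloud.yml --ref master -f version=1.0.0
```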
## Pro
### Installing Pro
The pro package is always installed from source in our CI jobs.
This is done to prevent pro needing to be published prior to CI runs in budibase. This is required for two reasons:
- To reduce the need for developers to manually bump versions, i.e.:
  - release pro, bump the pro dep in budibase, and only then can CI run successfully
- The cyclic dependency on backend-core, i.e.:
  - pro depends on backend-core
  - server depends on pro
  - backend-core lives in the monorepo, so it can't be released independently to be used in pro
  - therefore the only option is to pull pro from source and release it as a part of the monorepo release, as if it were part of the monorepo
The install is performed using the same steps as local development, via the `yarn bootstrap` command, see the [Contributing Guide#Pro](../CONTRIBUTING.md#pro)
The branch to install pro from can vary depending on the ref of the commit that triggered the budibase CI job. This is done to enable branches which have changes in both the monorepo and the pro repo to have their CI pass successfully.
This is done using the [pro/install.sh](../../scripts/pro/install.sh) script. The script will (see the sketch after this list):
- Clone pro to its default branch (`develop`)
  - Check if the clone worked, on forked versions of budibase this will fail due to no access
    - This is fine as the `yarn` command will install the version from NPM
    - Community PRs should never touch pro so this will always work
- Checkout the `BRANCH` argument, if this fails fall back to `BASE_BRANCH`
  - This enables the more complex case of a feature branch being merged to another feature branch, e.g.
    - I am working on a branch `epic/stonks` which exists on budibase and pro.
    - I want to merge a change to this branch in budibase from `feature/stonks-ui`, which only exists in budibase
    - The base branch ensures that `epic/stonks` in pro will still be checked out for the CI run, rather than falling back to `develop`
- Run `yarn setup` to build and install dependencies
  - `yarn`
  - `yarn bootstrap`
  - `yarn build`
    - This will build the .ts files, and also update the `main` and `types` of `package.json` to point to `dist` rather than `src`
    - The build command will only ever work in CI, it is prevented in local dev
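A minimal sketch of that clone-and-fallback flow, with illustrative repo and variable names (the authoritative logic lives in [pro/install.sh](../../scripts/pro/install.sh)):
```
#!/bin/bash
BRANCH=$1       # branch that triggered the budibase CI run
BASE_BRANCH=$2  # base branch of the PR, used as a fallback

# clone pro on its default branch; forks without access keep the NPM version
git clone git@github.com:Budibase/budibase-pro.git || exit 0
cd budibase-pro

# prefer the triggering branch, fall back to the base branch, else stay on develop
git checkout "$BRANCH" 2>/dev/null || git checkout "$BASE_BRANCH" 2>/dev/null || true

# build and install dependencies, exactly as in local dev
yarn && yarn bootstrap && yarn build
```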
#### `BRANCH` and `BASE_BRANCH` arguments
These arguments are supplied by the various budibase build and release pipelines:
- `budibase_ci`
  - `BRANCH: ${{ github.event.pull_request.head.ref }}` -> The branch being merged
  - `BASE_BRANCH: ${{ github.event.pull_request.base.ref}}` -> The base branch
- `release-develop`
  - `BRANCH: develop` -> always use the `develop` branch in pro
- `release`
  - `BRANCH: master` -> always use the `master` branch in pro
### Releasing Pro
After budibase dependencies have been released we will release the new version of pro to match the release version of budibase dependencies. This is to ensure that we are always keeping the version of `backend-core` in sync in the pro package and in budibase packages. Without this we could run into scenarios where different versions are being used when installed via `yarn` inside the docker images, creating cases that are very difficult to debug.
Pro is released using the [pro/release.sh](../../scripts/pro/release.sh) script. The script will (a sketch follows the list):
- Inspect the `VERSION` from the `lerna.json` file in budibase
- Determine whether to use the `latest` or `develop` tag based on the command argument
- Go to the pro directory
  - install npm creds
  - update the version of `backend-core` to be `VERSION`, the version just released by lerna
  - publish to npm. Uses a `lerna publish` command, as pro itself is a monorepo.
  - force the version to be the same as `VERSION` to keep pro and budibase in sync
  - revert the changes to `main` and `types` in `package.json` that were made by the build step, to point back to source
  - commit & push: `Prep next development iteration`
- Go to budibase
  - Update to the new version of pro in `server` and `worker` so the latest pro version is used in the docker builds
  - commit & push: `Update pro version to $VERSION`
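A condensed sketch of those steps (paths, flags and the jq edit are illustrative; [pro/release.sh](../../scripts/pro/release.sh) is the real script):
```
#!/bin/bash
VERSION=$(jq -r '.version' lerna.json)  # version just released by lerna
TAG=${1:-latest}                        # 'latest' for release, 'develop' for prerelease

cd ../budibase-pro
echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" >> .npmrc

# pin pro's backend-core dependency to the budibase release (path illustrative)
jq ".dependencies[\"@budibase/backend-core\"] = \"$VERSION\"" packages/pro/package.json \
  > tmp.json && mv tmp.json packages/pro/package.json

# publish pro at the same version so it stays in lockstep with budibase
yarn lerna publish "$VERSION" --dist-tag "$TAG" --yes --force-publish
git commit -am "Prep next development iteration" && git push
```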
#### `COMMAND` argument
This argument is supplied by the existing `release` and `release:develop` budibase commands, which invoke the pro release.
- `release` will supply no command and default to use `latest`
- `release:develop` will supply `develop`


@ -11,6 +11,13 @@ on:
branches:
- master
- develop
- release
workflow_dispatch:
env:
BRANCH: ${{ github.event.pull_request.head.ref }}
BASE_BRANCH: ${{ github.event.pull_request.base.ref}}
PERSONAL_ACCESS_TOKEN : ${{ secrets.PERSONAL_ACCESS_TOKEN }}
jobs:
build:
@ -27,6 +34,10 @@ jobs:
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install Pro
run: yarn install:pro $BRANCH $BASE_BRANCH
- run: yarn
- run: yarn bootstrap
- run: yarn lint


@ -1,4 +1,4 @@
name: Budibase Cloud Deploy
name: Budibase Deploy Production
on:
workflow_dispatch:
@ -66,7 +66,7 @@ jobs:
config-files: values.production.yaml
chart-path: charts/budibase
namespace: budibase
values: globals.appVersion=v${{ env.RELEASE_VERSION }}
values: globals.appVersion=v${{ env.RELEASE_VERSION }},services.couchdb.url=${{ secrets.PRODUCTION_COUCHDB_URL }},services.couchdb.password=${{ secrets.PRODUCTION_COUCHDB_PASSWORD }}
name: budibase-prod
- name: Discord Webhook Action


@ -1,12 +1,10 @@
name: Budibase Release Preprod
name: Budibase Deploy Preprod
on:
workflow_dispatch:
env:
POSTHOG_TOKEN: ${{ secrets.POSTHOG_TOKEN }}
INTERCOM_TOKEN: ${{ secrets.INTERCOM_TOKEN }}
POSTHOG_URL: ${{ secrets.POSTHOG_URL }}
SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
jobs:

.github/workflows/deploy-release.yml vendored Normal file

@ -0,0 +1,77 @@
name: Budibase Deploy Release
on:
workflow_dispatch:
jobs:
release:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: eu-west-1
- name: Fail if branch is not develop
if: github.ref != 'refs/heads/develop'
run: |
echo "Ref is not develop, you must run this job from develop."
exit 1
- name: Get the latest budibase release version
id: version
run: |
release_version=$(cat lerna.json | jq -r '.version')
echo "RELEASE_VERSION=$release_version" >> $GITHUB_ENV
- name: Tag and release Proxy service docker image
run: |
docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
yarn build:docker:proxy:release
docker tag proxy-service budibase/proxy:$RELEASE_TAG
docker push budibase/proxy:$RELEASE_TAG
env:
DOCKER_USER: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_API_KEY }}
RELEASE_TAG: k8s-release
- name: Pull values.yaml from budibase-infra
run: |
curl -H "Authorization: token ${{ secrets.GH_PERSONAL_TOKEN }}" \
-H 'Accept: application/vnd.github.v3.raw' \
-o values.release.yaml \
-L https://api.github.com/repos/budibase/budibase-infra/contents/kubernetes/budibase-release/values.yaml
wc -l values.release.yaml
- name: Deploy to Release Environment
uses: glopezep/helm@v1.7.1
with:
release: budibase-release
namespace: budibase
chart: charts/budibase
token: ${{ github.token }}
helm: helm3
values: |
globals:
appVersion: develop
ingress:
enabled: true
nginx: true
value-files: >-
[
"values.release.yaml"
]
env:
KUBECONFIG_FILE: '${{ secrets.RELEASE_KUBECONFIG }}'
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v4.0.0
with:
webhook-url: ${{ secrets.PROD_DEPLOY_WEBHOOK_URL }}
content: "Release Env Deployment Complete: ${{ env.RELEASE_VERSION }} deployed to Budibase Release Env."
embed-title: ${{ env.RELEASE_VERSION }}


@ -0,0 +1,68 @@
name: Deploy Budibase Single Container Image to DockerHub
on:
workflow_dispatch:
env:
BASE_BRANCH: ${{ github.event.pull_request.base.ref}}
BRANCH: ${{ github.event.pull_request.head.ref }}
CI: true
PERSONAL_ACCESS_TOKEN : ${{ secrets.PERSONAL_ACCESS_TOKEN }}
REGISTRY_URL: registry.hub.docker.com
jobs:
build:
name: "build"
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [14.x]
steps:
- name: "Checkout"
uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Setup QEMU
uses: docker/setup-qemu-action@v1
- name: Setup Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Install Pro
run: yarn install:pro $BRANCH $BASE_BRANCH
- name: Run Yarn
run: yarn
- name: Run Yarn Bootstrap
run: yarn bootstrap
- name: Run Yarn Lint
run: yarn lint
- name: Run Yarn Build
run: yarn build:docker:pre
- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_API_KEY }}
- name: Get the latest release version
id: version
run: |
release_version=$(cat lerna.json | jq -r '.version')
echo $release_version
echo "RELEASE_VERSION=$release_version" >> $GITHUB_ENV
- name: Tag and release Budibase service docker image
uses: docker/build-push-action@v2
with:
context: .
push: true
platforms: linux/amd64,linux/arm64
tags: budibase/budibase,budibase/budibase:v${{ env.RELEASE_VERSION }}
file: ./hosting/single/Dockerfile
- name: Tag and release Budibase Azure App Service docker image
uses: docker/build-push-action@v2
with:
context: .
push: true
platforms: linux/amd64
build-args: TARGETBUILD=aas
tags: budibase/budibase-aas,budibase/budibase-aas:v${{ env.RELEASE_VERSION }}
file: ./hosting/single/Dockerfile


@ -1,4 +1,5 @@
name: Budibase Release Staging
name: Budibase Prerelease
concurrency: release-prerelease
on:
push:
@ -14,21 +15,33 @@ on:
- 'yarn.lock'
- 'package.json'
- 'yarn.lock'
workflow_dispatch:
env:
POSTHOG_TOKEN: ${{ secrets.POSTHOG_TOKEN }}
# Posthog token used by ui at build time
POSTHOG_TOKEN: phc_uDYOfnFt6wAbBAXkC6STjcrTpAFiWIhqgFcsC1UVO5F
INTERCOM_TOKEN: ${{ secrets.INTERCOM_TOKEN }}
POSTHOG_URL: ${{ secrets.POSTHOG_URL }}
PERSONAL_ACCESS_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
FEATURE_PREVIEW_URL: https://budirelease.live
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Fail if branch is not develop
if: github.ref != 'refs/heads/develop'
run: |
echo "Ref is not develop, you must run this job from develop."
exit 1
- uses: actions/checkout@v2
- uses: actions/setup-node@v1
with:
node-version: 14.x
- name: Install Pro
run: yarn install:pro develop
- run: yarn
- run: yarn bootstrap
- run: yarn lint
@ -46,9 +59,9 @@ jobs:
env:
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
run: |
# setup the username and email. I tend to use 'GitHub Actions Bot' with no email by default
git config user.name "Budibase Staging Release Bot"
git config user.email "<>"
# setup the username and email.
git config --global user.name "Budibase Staging Release Bot"
git config --global user.email "<>"
echo //registry.npmjs.org/:_authToken=${NPM_TOKEN} >> .npmrc
yarn release:develop
@ -60,3 +73,56 @@ jobs:
env:
DOCKER_USER: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_API_KEY }}
- name: Get the latest budibase release version
id: version
run: |
release_version=$(cat lerna.json | jq -r '.version')
echo "RELEASE_VERSION=$release_version" >> $GITHUB_ENV
- name: Tag and release Proxy service docker image
run: |
docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
yarn build:docker:proxy:release
docker tag proxy-service budibase/proxy:$RELEASE_TAG
docker push budibase/proxy:$RELEASE_TAG
env:
DOCKER_USER: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_API_KEY }}
RELEASE_TAG: k8s-release
- name: Pull values.yaml from budibase-infra
run: |
curl -H "Authorization: token ${{ secrets.GH_PERSONAL_TOKEN }}" \
-H 'Accept: application/vnd.github.v3.raw' \
-o values.release.yaml \
-L https://api.github.com/repos/budibase/budibase-infra/contents/kubernetes/budibase-release/values.yaml
wc -l values.release.yaml
- name: Deploy to Release Environment
uses: glopezep/helm@v1.7.1
with:
release: budibase-release
namespace: budibase
chart: charts/budibase
token: ${{ github.token }}
helm: helm3
values: |
globals:
appVersion: develop
ingress:
enabled: true
nginx: true
value-files: >-
[
"values.release.yaml"
]
env:
KUBECONFIG_FILE: '${{ secrets.RELEASE_KUBECONFIG }}'
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v4.0.0
with:
webhook-url: ${{ secrets.PROD_DEPLOY_WEBHOOK_URL }}
content: "Release Env Deployment Complete: ${{ env.RELEASE_VERSION }} deployed to Budibase Release Env."
embed-title: ${{ env.RELEASE_VERSION }}


@ -87,3 +87,10 @@ jobs:
packages/cli/build/cli-macos
packages/server/specs/openapi.yaml
packages/server/specs/openapi.json
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v4.0.0
with:
webhook-url: ${{ secrets.PROD_DEPLOY_WEBHOOK_URL }}
content: "Self Host Deployment Complete: ${{ env.RELEASE_VERSION }} deployed to Self Host."
embed-title: ${{ env.RELEASE_VERSION }}


@ -1,4 +1,5 @@
name: Budibase Release
concurrency: release
on:
push:
@ -14,22 +15,43 @@ on:
- 'yarn.lock'
- 'package.json'
- 'yarn.lock'
workflow_dispatch:
inputs:
versioning:
type: choice
description: "Versioning type: patch, minor, major"
default: patch
options:
- patch
- minor
- major
required: true
env:
POSTHOG_TOKEN: ${{ secrets.POSTHOG_TOKEN }}
# Posthog token used by ui at build time
POSTHOG_TOKEN: phc_fg5I3nDOf6oJVMHSaycEhpPdlgS8rzXG2r6F2IpxCHS
INTERCOM_TOKEN: ${{ secrets.INTERCOM_TOKEN }}
POSTHOG_URL: ${{ secrets.POSTHOG_URL }}
SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
PERSONAL_ACCESS_TOKEN : ${{ secrets.PERSONAL_ACCESS_TOKEN }}
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Fail if branch is not master
if: github.ref != 'refs/heads/master'
run: |
echo "Ref is not master, you must run this job from master."
exit 1
- uses: actions/checkout@v2
- uses: actions/setup-node@v1
with:
node-version: 14.x
- name: Install Pro
run: yarn install:pro master
- run: yarn
- run: yarn bootstrap
- run: yarn lint
@ -46,10 +68,11 @@ jobs:
- name: Publish budibase packages to NPM
env:
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
RELEASE_VERSION_TYPE: ${{ github.event.inputs.versioning }}
run: |
# setup the username and email. I tend to use 'GitHub Actions Bot' with no email by default
git config user.name "Budibase Release Bot"
git config user.email "<>"
git config --global user.name "Budibase Release Bot"
git config --global user.email "<>"
echo //registry.npmjs.org/:_authToken=${NPM_TOKEN} >> .npmrc
yarn release
@ -66,3 +89,57 @@ jobs:
DOCKER_USER: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_API_KEY }}
BUDIBASE_RELEASE_VERSION: ${{ steps.previoustag.outputs.tag }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: eu-west-1
- name: Tag and release Proxy service docker image
run: |
docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
yarn build:docker:proxy:preprod
docker tag proxy-service budibase/proxy:$PREPROD_TAG
docker push budibase/proxy:$PREPROD_TAG
env:
DOCKER_USER: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_API_KEY }}
PREPROD_TAG: k8s-preprod
- name: Pull values.yaml from budibase-infra
run: |
curl -H "Authorization: token ${{ secrets.GH_PERSONAL_TOKEN }}" \
-H 'Accept: application/vnd.github.v3.raw' \
-o values.preprod.yaml \
-L https://api.github.com/repos/budibase/budibase-infra/contents/kubernetes/budibase-preprod/values.yaml
wc -l values.preprod.yaml
- name: Deploy to Preprod Environment
uses: glopezep/helm@v1.7.1
with:
release: budibase-preprod
namespace: budibase
chart: charts/budibase
token: ${{ github.token }}
helm: helm3
values: |
globals:
appVersion: ${{ steps.previoustag.outputs.tag }}
ingress:
enabled: true
nginx: true
value-files: >-
[
"values.preprod.yaml"
]
env:
KUBECONFIG_FILE: '${{ secrets.PREPROD_KUBECONFIG }}'
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v4.0.0
with:
webhook-url: ${{ secrets.PROD_DEPLOY_WEBHOOK_URL }}
content: "Preprod Deployment Complete: ${{ steps.previoustag.outputs.tag }} deployed to Budibase Pre-prod."
embed-title: ${{ steps.previoustag.outputs.tag }}


@ -28,24 +28,25 @@ jobs:
- name: Cypress run
id: cypress
continue-on-error: true
uses: cypress-io/github-action@v2
with:
record: true
install: false
command: yarn test:e2e:ci
tag: nightly
command: yarn test:e2e:ci:record
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
# TODO: upload recordings to s3
# - name: Configure AWS Credentials
# uses: aws-actions/configure-aws-credentials@v1
# with:
# aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
# aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
# aws-region: eu-west-1
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v4.0.0
- uses: actions/upload-artifact@v3
with:
webhook-url: ${{ secrets.BUDI_QA_WEBHOOK }}
content: "Smoke test run completed with ${{ steps.cypress.outcome }}. See results at ${{ steps.cypress.dashboardUrl }}"
embed-title: ${{ steps.cypress.outcome }}
embed-color: ${{ steps.cypress.outcome == 'success' && '3066993' || '15548997' }}
name: Test Reports
path: packages/builder/cypress/reports/testReport.html
- name: Cypress Discord Notify
run: yarn test:e2e:ci:notify
env:
CYPRESS_WEBHOOK_URL: ${{ secrets.BUDI_QA_WEBHOOK }}
CYPRESS_OUTCOME: ${{ steps.cypress.outcome }}
CYPRESS_DASHBOARD_URL: ${{ steps.cypress.outputs.dashboardUrl }}
GITHUB_RUN_URL: $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID

.gitignore vendored

@ -97,5 +97,9 @@ hosting/proxy/.generated-nginx.prod.conf
bin/
hosting/.generated*
packages/builder/cypress.env.json
stats.html
packages/builder/cypress.env.json
packages/builder/cypress/reports
stats.html
# TypeScript cache
*.tsbuildinfo

.vscode/settings.json vendored

@ -3,5 +3,17 @@
"editor.codeActionsOnSave": {
"source.fixAll": true
},
"editor.defaultFormatter": "svelte.svelte-vscode"
"editor.defaultFormatter": "svelte.svelte-vscode",
"[json]": {
"editor.defaultFormatter": "vscode.json-language-features"
},
"[javascript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"debug.javascript.terminalOptions": {
"skipFiles": [
"${workspaceFolder}/packages/backend-core/node_modules/**",
"<node_internals>/**"
]
},
}

.yarnrc Normal file

@ -0,0 +1 @@
network-timeout 100000


@ -174,6 +174,7 @@ Budibase is dedicated to providing a welcoming, diverse, and harassment-free ex
## 🙌 Contributing to Budibase
From opening a bug report to creating a pull request: every contribution is appreciated and welcomed. If you're planning to implement a new feature or change the API please create an issue first. This way we can ensure your work is not in vain.
Environment setup instructions are available for [Debian](https://github.com/Budibase/budibase/tree/HEAD/docs/DEV-SETUP-DEBIAN.md) and [MacOSX](https://github.com/Budibase/budibase/tree/HEAD/docs/DEV-SETUP-MACOSX.md)
### Not Sure Where to Start?
A good place to start contributing is the [First time issues project](https://github.com/Budibase/budibase/projects/22).
@ -187,7 +188,7 @@ Budibase is a monorepo managed by lerna. Lerna manages the building and publishi
- [packages/server](https://github.com/Budibase/budibase/tree/HEAD/packages/server) - The budibase server. This Koa app is responsible for serving the JS for the builder and budibase apps, as well as providing the API for interaction with the database and file system.
For more information, see [CONTRIBUTING.md](https://github.com/Budibase/budibase/blob/HEAD/.github/CONTRIBUTING.md)
For more information, see [CONTRIBUTING.md](https://github.com/Budibase/budibase/blob/HEAD/docs/CONTRIBUTING.md)
<br /><br />
@ -202,7 +203,7 @@ Budibase is open-source, licensed as [GPL v3](https://www.gnu.org/licenses/gpl-3
[![Stargazers over time](https://starchart.cc/Budibase/budibase.svg)](https://starchart.cc/Budibase/budibase)
If you are having issues between updates of the builder, please use the guide [here](https://github.com/Budibase/budibase/blob/HEAD/.github/CONTRIBUTING.md#troubleshooting) to clear down your environment.
If you are having issues between updates of the builder, please use the guide [here](https://github.com/Budibase/budibase/blob/HEAD/docs/CONTRIBUTING.md#troubleshooting) to clear down your environment.
<br /><br />


@ -11,8 +11,8 @@ sources:
- https://github.com/Budibase/budibase
- https://budibase.com
type: application
version: 0.2.8
appVersion: 1.0.48
version: 0.2.11
appVersion: 1.0.214
dependencies:
- name: couchdb
version: 3.6.1


@ -28,12 +28,15 @@ spec:
- env:
- name: BUDIBASE_ENVIRONMENT
value: {{ .Values.globals.budibaseEnv }}
- name: DEPLOYMENT_ENVIRONMENT
value: "kubernetes"
- name: COUCH_DB_URL
{{ if .Values.services.couchdb.url }}
value: {{ .Values.services.couchdb.url }}
{{ else }}
value: http://{{ .Release.Name }}-svc-couchdb:{{ .Values.services.couchdb.port }}
{{ end }}
{{ if .Values.services.couchdb.enabled }}
- name: COUCH_DB_USER
valueFrom:
secretKeyRef:
@ -44,6 +47,7 @@ spec:
secretKeyRef:
name: {{ template "couchdb.fullname" . }}
key: adminPassword
{{ end }}
- name: ENABLE_ANALYTICS
value: {{ .Values.globals.enableAnalytics | quote }}
- name: INTERNAL_API_KEY
@ -76,6 +80,10 @@ spec:
value: {{ .Values.services.objectStore.url }}
- name: PORT
value: {{ .Values.services.apps.port | quote }}
{{ if .Values.globals.apps.publicApiRateLimitPerSecond }}
- name: API_REQ_LIMIT_PER_SEC
value: {{ .Values.globals.apps.publicApiRateLimitPerSecond | quote }}
{{ end }}
- name: MULTI_TENANCY
value: {{ .Values.globals.multiTenancy | quote }}
- name: LOG_LEVEL
@ -98,10 +106,6 @@ spec:
value: http://worker-service:{{ .Values.services.worker.port }}
- name: PLATFORM_URL
value: {{ .Values.globals.platformUrl | quote }}
- name: USE_QUOTAS
value: {{ .Values.globals.useQuotas | quote }}
- name: EXCLUDE_QUOTAS_TENANTS
value: {{ .Values.globals.excludeQuotasTenants | quote }}
- name: ACCOUNT_PORTAL_URL
value: {{ .Values.globals.accountPortalUrl | quote }}
- name: ACCOUNT_PORTAL_API_KEY
@ -114,12 +118,39 @@ spec:
value: {{ .Values.globals.google.clientId | quote }}
- name: GOOGLE_CLIENT_SECRET
value: {{ .Values.globals.google.secret | quote }}
- name: AUTOMATION_MAX_ITERATIONS
value: {{ .Values.globals.automationMaxIterations | quote }}
- name: TENANT_FEATURE_FLAGS
value: {{ .Values.globals.tenantFeatureFlags | quote }}
{{ if .Values.globals.bbAdminUserEmail }}
- name: BB_ADMIN_USER_EMAIL
value: {{ .Values.globals.bbAdminUserEmail | quote }}
{{ end }}
{{ if .Values.globals.bbAdminUserPassword }}
- name: BB_ADMIN_USER_PASSWORD
value: {{ .Values.globals.bbAdminUserPassword | quote }}
{{ end }}
image: budibase/apps:{{ .Values.globals.appVersion }}
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /health
port: {{ .Values.services.apps.port }}
initialDelaySeconds: 5
periodSeconds: 5
name: bbapps
ports:
- containerPort: {{ .Values.services.apps.port }}
resources: {}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Always
serviceAccountName: ""
status: {}


@ -39,5 +39,13 @@ spec:
imagePullPolicy: Always
name: couchdb-backup
resources: {}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
status: {}
{{- end }}


@ -12,5 +12,8 @@ spec:
resources:
requests:
storage: {{ .Values.services.objectStore.storage }}
{{ if .Values.services.objectStore.storageClass }}
storageClassName: {{ .Values.services.objectStore.storageClass }}
{{- end }}
status: {}
{{- end }}
{{- end }}


@ -60,6 +60,14 @@ spec:
volumeMounts:
- mountPath: /data
name: minio-data
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Always
serviceAccountName: ""
volumes:


@ -32,6 +32,14 @@ spec:
- containerPort: {{ .Values.services.proxy.port }}
resources: {}
volumeMounts:
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Always
serviceAccountName: ""
volumes:


@ -12,5 +12,8 @@ spec:
resources:
requests:
storage: {{ .Values.services.redis.storage }}
{{ if .Values.services.redis.storageClass }}
storageClassName: {{ .Values.services.redis.storageClass }}
{{ end }}
status: {}
{{- end }}
{{- end }}


@ -39,6 +39,14 @@ spec:
volumeMounts:
- mountPath: /data
name: redis-data
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Always
serviceAccountName: ""
volumes:


@ -27,8 +27,11 @@ spec:
spec:
containers:
- env:
- name: DEPLOYMENT_ENVIRONMENT
value: "kubernetes"
- name: CLUSTER_PORT
value: {{ .Values.services.worker.port | quote }}
{{ if .Values.services.couchdb.enabled }}
- name: COUCH_DB_USER
valueFrom:
secretKeyRef:
@ -39,6 +42,7 @@ spec:
secretKeyRef:
name: {{ template "couchdb.fullname" . }}
key: adminPassword
{{ end }}
- name: COUCH_DB_URL
{{ if .Values.services.couchdb.url }}
value: {{ .Values.services.couchdb.url }}
@ -89,6 +93,10 @@ spec:
value: {{ .Values.globals.selfHosted | quote }}
- name: SENTRY_DSN
value: {{ .Values.globals.sentryDSN }}
- name: ENABLE_ANALYTICS
value: {{ .Values.globals.enableAnalytics | quote }}
- name: POSTHOG_TOKEN
value: {{ .Values.globals.posthogToken }}
- name: ACCOUNT_PORTAL_URL
value: {{ .Values.globals.accountPortalUrl | quote }}
- name: ACCOUNT_PORTAL_API_KEY
@ -115,12 +123,28 @@ spec:
value: {{ .Values.globals.google.clientId | quote }}
- name: GOOGLE_CLIENT_SECRET
value: {{ .Values.globals.google.secret | quote }}
- name: TENANT_FEATURE_FLAGS
value: {{ .Values.globals.tenantFeatureFlags | quote }}
image: budibase/worker:{{ .Values.globals.appVersion }}
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /health
port: {{ .Values.services.worker.port }}
initialDelaySeconds: 5
periodSeconds: 5
name: bbworker
ports:
- containerPort: {{ .Values.services.worker.port }}
resources: {}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Always
serviceAccountName: ""
status: {}


@ -47,6 +47,8 @@ ingress:
className: ""
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/client-max-body-size: 150M
nginx.ingress.kubernetes.io/proxy-body-size: 50m
hosts:
- host: # change if using custom domain
paths:
@ -87,22 +89,21 @@ affinity: {}
globals:
appVersion: "latest"
budibaseEnv: PRODUCTION
enableAnalytics: true
enableAnalytics: "1"
sentryDSN: ""
posthogToken: ""
posthogToken: "phc_fg5I3nDOf6oJVMHSaycEhpPdlgS8rzXG2r6F2IpxCHS"
logLevel: info
selfHosted: "1" # set to 0 for budibase cloud environment, set to 1 for self-hosted setup
multiTenancy: "0" # set to 0 to disable multiple orgs, set to 1 to enable multiple orgs
useQuotas: "0"
excludeQuotasTenants: "" # comma separated list of tenants to exclude from quotas
accountPortalUrl: ""
accountPortalApiKey: ""
cookieDomain: ""
platformUrl: ""
httpMigrations: "0"
google:
clientId: ""
clientId: ""
secret: ""
automationMaxIterations: "200"
createSecrets: true # creates an internal API key, JWT secrets and redis password for you
@ -150,6 +151,11 @@ services:
url: "" # only change if pointing to existing redis cluster and enabled: false
password: "budibase" # recommended to override if using built-in redis
storage: 100Mi
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
storageClass: ""
objectStore:
minio: true
@ -161,6 +167,11 @@ services:
region: "" # AWS_REGION if using S3 or existing minio secret
url: "http://minio-service:9000" # only change if pointing to existing minio cluster or S3 and minio: false
storage: 100Mi
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
storageClass: ""
# Override values in couchDB subchart
couchdb:
@ -204,7 +215,7 @@ couchdb:
## The CouchDB image
image:
repository: couchdb
tag: 3.1.0
tag: 3.2.1
pullPolicy: IfNotPresent
## Experimental integration with Lucene-powered fulltext search
@ -230,6 +241,8 @@ couchdb:
## Optional tolerations
tolerations: []
affinity: {}
service:
# annotations:
enabled: true
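Assuming the chart path and value names above, overriding these at install time could look like the following sketch (release name, namespace and storage class are illustrative):
```
helm upgrade --install budibase charts/budibase \
  --namespace budibase \
  --set globals.appVersion=v1.0.214 \
  --set services.redis.storageClass=gp2 \
  --set services.objectStore.storageClass=gp2
```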

docs/CODE_OF_CONDUCT.md Normal file

@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at community@budibase.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq

docs/CONTRIBUTING.md Normal file

@ -0,0 +1,231 @@
# Contributing
From opening a bug report to creating a pull request: every contribution is appreciated and welcome. If you're planning to implement a new feature or change the API please [create an issue](https://github.com/Budibase/budibase/issues/new/choose) first. This way we can ensure that your precious work is not in vain.
## Table of contents
- [Not Sure Where to Start?](#not-sure-where-to-start)
- [Contributor License Agreement (CLA)](#contributor-license-agreement-cla)
- [Glossary of Terms](#glossary-of-terms)
- [Contributing to Budibase](#contributing-to-budibase)
- [Running tests](#running-tests)
## Not Sure Where to Start?
Budibase is a low-code web application builder that creates svelte-based web applications.
Budibase is a monorepo managed by [lerna](https://github.com/lerna/lerna). Lerna manages the building and publishing of the budibase packages. At a high level, here are the packages that make up budibase.
- **packages/builder** - contains code for the budibase builder client side svelte application.
- **packages/client** - A module that runs in the browser responsible for reading JSON definition and creating living, breathing web apps from it.
- **packages/server** - The budibase server. This [Koa](https://koajs.com/) app is responsible for serving the JS for the builder and budibase apps, as well as providing the API for interaction with the database and file system.
- **packages/worker** - This [Koa](https://koajs.com/) app is responsible for providing global apis for managing your budibase installation. Authentication, Users, Email, Org and Auth configs are all provided by the worker.
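Once dependencies are installed, lerna can print the package graph itself, which helps when orienting yourself in the monorepo (a minimal example):
```
# list every package lerna manages, with version and location
npx lerna ls --long
```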
## Contributor License Agreement (CLA)
In order to accept your pull request, we need you to submit a CLA. You only need to do this once. If you are submitting a pull request for the first time, just submit a Pull Request and our CLA Bot will give you instructions on how to sign the CLA before merging your Pull Request.
All contributors must sign an [Individual Contributor License Agreement](https://github.com/budibase/budibase/blob/next/.github/cla/individual-cla.md).
If contributing on behalf of your company, your company must sign a [Corporate Contributor License Agreement](https://github.com/budibase/budibase/blob/next/.github/cla/corporate-cla.md). If so, please contact us via community@budibase.com.
## Glossary of Terms
To understand the budibase API, it can be helpful to understand the top level entities that make up Budibase.
### Client
A client represents a single budibase customer. Each budibase client will have 1 or more budibase servers. Every client is assigned a unique ID.
### App
A client can have one or more budibase applications. Budibase applications would be things like "Developer Inventory Management" or "Goat Herder CRM". Think of a budibase application as a tree.
### Database
An App can have one or more databases. Keeping with our [dendrology](https://en.wikipedia.org/wiki/Dendrology) analogy - think of a database as a branch on the tree. Databases are used to keep data separate for different instances of your app. For example, if you had a CRM app, you may create a database for your US office, and a database for your Australian office. Databases allow us to support [multitenancy](https://www.gartner.com/en/information-technology/glossary/multitenancy) in budibase applications.
### Table
Tables in budibase are almost akin to tables in relational databases. A table may be a "Car" or an "Employee". They are the main building blocks for the creation and management of backend data in budibase.
### View
A View is an advanced feature in budibase that allows you to write a custom query using [MapReduce](https://pouchdb.com/guides/queries.html) queries. Views enable powerful query functionality and calculations, allowing you to do more with your data.
### Page
A page in budibase is actually a single, self-contained svelte web app. There are only 2 pages in budibase. The **login** page and the **main** page.
### Screen
A screen is a component within a single page. Generally, screens represent client side routes, and can be switched without refreshing the page.
### Component
A component is the basic frontend building block of a budibase app.
### Component Library
Component libraries are collections of components as well as the definition of their props contained in a file called `components.json`.
## Contributing to Budibase
* Please maintain the existing code style.
* Please try to keep your commits small and focused.
* Please write tests.
* If the project diverges from your branch, please rebase instead of merging. This makes the commit graph easier to read.
* Once your work is completed, please raise a PR against the `develop` branch with some information about what has changed and why.
### Getting Started For Contributors
#### 1. Prerequisites
NodeJS Version `14.x.x`
*yarn* - `npm install -g yarn`
*jest* - `npm install -g jest`
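A quick way to confirm your toolchain matches these prerequisites:
```
node --version   # expect v14.x.x
yarn --version
jest --version
```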
#### 2. Clone this repository
`git clone https://github.com/Budibase/budibase.git`
then `cd ` into your local copy.
#### 3. Install and Build
> **NOTE**: On Windows, all yarn commands must be executed in a bash shell (e.g. git bash)
To develop the Budibase platform you'll need [Docker](https://www.docker.com/) and [Docker Compose](https://docs.docker.com/compose/) installed.
##### Quick method
`yarn setup` will check that all necessary components are installed and set up the repo for usage.
##### Manual method
The following commands can be executed to manually get Budibase up and running (assuming Docker/Docker Compose has been installed).
`yarn` to install project dependencies
`yarn bootstrap` will install all budibase modules and symlink them together using lerna.
`yarn build` will build all budibase packages.
#### 4. Running
To run the budibase server and builder in dev mode (i.e. with live reloading):
1. Open a new console
2. `yarn dev` (from root)
3. Access the builder on http://localhost:10000/builder
This will enable watch mode for both the builder app, server, client library and any component libraries.
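Once `yarn dev` is up, a quick sanity check that the builder is being served (port taken from the URL above):
```
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:10000/builder
```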
#### 5. Debugging using VS Code
To debug the budibase server and worker a VS Code launch configuration has been provided.
Visit the debug window and select `Budibase Server` or `Budibase Worker` to debug the respective component.
Alternatively, select `Start Budibase` to start both components simultaneously.
In addition to the above, the remaining budibase components may be run in dev mode using: `yarn dev:noserver`.
#### 6. Cleanup
If you wish to delete all the apps created in development and reset the environment then run the following:
1. `yarn nuke:docker` will wipe all the Budibase services
2. `yarn dev` will restart all the services
### Backend
For the backend we run [Redis](https://redis.io/), [CouchDB](https://couchdb.apache.org/), [MinIO](https://min.io/) and [NGINX](https://www.nginx.com/) in Docker Compose. This means that to develop Budibase you will need Docker and Docker Compose installed. The backend services are then run separately as Node services with nodemon so that they can be debugged outside of Docker.
### Data Storage
When you are running locally, budibase stores data on disk using Docker volumes. The volumes, and the types of data associated with each, are listed below; inspection commands follow the list.
- `redis_data`
- Sessions, email tokens
- `couchdb3_data`
- Global and app databases
- `minio_data`
- App manifest, budibase client, static assets
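To see where this data lives, the standard Docker volume commands work; note that the names may carry a compose project prefix, so check `docker volume ls` first:
```
docker volume ls
docker volume inspect couchdb3_data
```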
### Development Modes
A combination of environment variables controls the mode budibase runs in.
Yarn commands can be used to mimic the different modes as described in the sections below:
#### Self Hosted
The default mode. A single tenant installation with no usage restrictions.
To enable this mode, use:
```
yarn mode:self
```
#### Cloud
The cloud mode, with account portal turned off.
To enable this mode, use:
```
yarn mode:cloud
```
#### Cloud & Account
The cloud mode, with account portal turned on. This is a replica of the mode that runs at https://budibase.app.
To enable this mode, use:
```
yarn mode:account
```
### CI
An overview of the CI pipelines can be found [here](./workflows/README.md)
### Pro
@budibase/pro is the closed source package that supports licensed features in budibase. By default the package will be pulled from NPM and will not normally need to be touched in local development. If you need to update code inside the pro package, it can be cloned to the same root level as budibase, e.g.
```
.
|_ budibase
|_ budibase-pro
```
Note that only budibase maintainers will be able to access the pro repo.
The `yarn bootstrap` command can be used to replace the NPM-supplied dependency with the local, source-aware version. This is achieved using the `yarn link` command. To see specifically how dependencies are linked see [scripts/link-dependencies.sh](../scripts/link-dependencies.sh). The same link script is used to link dependencies to account-portal in local dev.
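As a rough sketch of what the link step boils down to (the paths below are hypothetical; the real steps live in the script above), `yarn link` registers a local package and then points the consumer at it:
```
# in the local pro package (hypothetical path)
cd ../budibase-pro/packages/pro
yarn link

# in the consuming budibase package
cd ../../../budibase/packages/server
yarn link "@budibase/pro"
```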
### Troubleshooting
Sometimes, things go wrong. This can be due to incompatible updates on the budibase platform. To clear down your development environment and start again follow **Step 6. Cleanup**, then proceed from **Step 3. Install and Build** in the setup guide above to create a fresh Budibase installation.
### Running tests
#### End-to-end Tests
Budibase uses Cypress to run a number of E2E tests. To run the tests, execute the following command in the root folder:
```
yarn test:e2e
```
Or, if you are in the builder package, you can run `yarn cy:test`.
### Other Useful Information
* The contributors are listed in [AUTHORS.md](https://github.com/Budibase/budibase/blob/master/.github/AUTHORS.md) (add yourself).
* This project uses a modified version of the MPLv2 license, see [LICENSE](https://github.com/budibase/server/blob/master/LICENSE).
* We use the [C4 (Collective Code Construction Contract)](https://rfc.zeromq.org/spec:42/C4/) process for contributions.
Please read this if you are unfamiliar with it.

52
docs/DEV-SETUP-DEBIAN.md Normal file
View File

@ -0,0 +1,52 @@
## Dev Environment on Debian 11
### Install Node
Budibase requires a recent version of node (14+):
```
curl -sL https://deb.nodesource.com/setup_16.x | sudo bash -
apt -y install nodejs
node -v
```
### Install npm requirements
```
npm install -g yarn jest lerna
```
### Install Docker and Docker Compose
```
apt install docker.io
pip3 install docker-compose
```
### Clone the repo
```
git clone https://github.com/Budibase/budibase.git
```
### Check Versions
This setup process was tested on Debian 11 (bullseye) with the version numbers shown below. Your mileage may vary using anything else.
- Docker: 20.10.5
- Docker-Compose: 1.29.2
- Node: v16.15.1
- Yarn: 1.22.19
- Lerna: 5.1.4
### Build
```
cd budibase
yarn setup
```
The `yarn setup` command runs several build steps, i.e.
```
node ./hosting/scripts/setup.js && yarn && yarn bootstrap && yarn build && yarn dev
```
So this command will actually run the application in dev mode. It creates `.env` files under `./packages/server` and `./packages/worker` and runs Docker containers for each service via docker-compose.
The dev version will be available on port 10000, i.e.
http://127.0.0.1:10000/builder/admin
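To sanity-check the result, a couple of standard commands are useful:
```
docker ps                          # backend service containers should be up
curl -I http://127.0.0.1:10000/    # the proxy should answer on port 10000
```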

54
docs/DEV-SETUP-MACOSX.md Normal file
View File

@ -0,0 +1,54 @@
## Dev Environment on Mac OSX 12 (Monterey)
### Install Homebrew
Install instructions [here](https://brew.sh/)
### Install Node
Budibase requires a recent version of node (14+):
```
brew install node
node -v
```
### Install npm requirements
```
npm install -g yarn jest lerna
```
### Install Docker and Docker Compose
```
brew install docker docker-compose
```
### Clone the repo
```
git clone https://github.com/Budibase/budibase.git
```
### Check Versions
This setup process was tested on Mac OSX 12 (Monterey) with version numbers shown below. Your mileage may vary using anything else.
- Docker: 20.10.14
- Docker-Compose: 2.6.0
- Node: 18.3.0
- Yarn: 1.22.19
- Lerna: 5.1.4
### Build
```
cd budibase
yarn setup
```
The `yarn setup` command runs several build steps, i.e.
```
node ./hosting/scripts/setup.js && yarn && yarn bootstrap && yarn build && yarn dev
```
So this command will actually run the application in dev mode. It creates `.env` files under `./packages/server` and `./packages/worker` and runs Docker containers for each service via docker-compose.
The dev version will be available on port 10000, i.e.
http://127.0.0.1:10000/builder/admin

View File

@ -1894,9 +1894,9 @@ minimist-options@4.1.0:
kind-of "^6.0.3"
minimist@^1.2.0:
version "1.2.5"
resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.5.tgz#67d66014b66a6a8aaa0c083c5fd58df4e4e97602"
integrity sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw==
version "1.2.6"
resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.6.tgz#8637a5b759ea0d6e98702cfb3a9283323c93af44"
integrity sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q==
minipass-collect@^1.0.2:
version "1.0.2"

View File

@ -18,4 +18,8 @@ MINIO_PORT=4004
COUCH_DB_PORT=4005
REDIS_PORT=6379
WATCHTOWER_PORT=6161
BUDIBASE_ENVIRONMENT=PRODUCTION
BUDIBASE_ENVIRONMENT=PRODUCTION
# An admin user can be automatically created initially if these are set
BB_ADMIN_USER_EMAIL=
BB_ADMIN_USER_PASSWORD=

View File

@ -27,6 +27,7 @@ services:
image: nginx:latest
volumes:
- ./.generated-nginx.dev.conf:/etc/nginx/nginx.conf
- ./proxy/error.html:/usr/share/nginx/html/error.html
ports:
- "${MAIN_PORT}:10000"
depends_on:

View File

@ -23,6 +23,8 @@ services:
ENABLE_ANALYTICS: "true"
REDIS_URL: redis-service:6379
REDIS_PASSWORD: ${REDIS_PASSWORD}
BB_ADMIN_USER_EMAIL: ${BB_ADMIN_USER_EMAIL}
BB_ADMIN_USER_PASSWORD: ${BB_ADMIN_USER_PASSWORD}
depends_on:
- worker-service
- redis-service
@ -117,7 +119,6 @@ services:
labels:
- "com.centurylinklabs.watchtower.enable=false"
volumes:
couchdb3_data:
driver: local

View File

@ -19,3 +19,7 @@ COUCH_DB_PORT=4005
REDIS_PORT=6379
WATCHTOWER_PORT=6161
BUDIBASE_ENVIRONMENT=PRODUCTION
# An admin user can be automatically created initially if these are set
BB_ADMIN_USER_EMAIL=
BB_ADMIN_USER_PASSWORD=

View File

@ -0,0 +1,13 @@
#!/bin/bash
CUSTOM_DOMAIN="$1"
if [[ ! -z "${CUSTOM_DOMAIN}" ]]; then
certbot certonly --webroot --webroot-path="/var/www/html" \
--register-unsafely-without-email \
--domains $CUSTOM_DOMAIN \
--rsa-key-size 4096 \
--agree-tos \
--force-renewal
nginx -s reload
fi

View File

@ -0,0 +1,23 @@
#!/bin/bash
CUSTOM_DOMAIN="$1"
# Request from Lets Encrypt
certbot certonly --webroot --webroot-path="/var/www/html" \
--register-unsafely-without-email \
--domains $CUSTOM_DOMAIN \
--rsa-key-size 4096 \
--agree-tos \
--force-renewal
if (($? != 0)); then
echo "ERROR: certbot request failed for $CUSTOM_DOMAIN use http on port 80 - exiting"
exit 1
else
cp /app/letsencrypt/options-ssl-nginx.conf /etc/letsencrypt/options-ssl-nginx.conf
cp /app/letsencrypt/ssl-dhparams.pem /etc/letsencrypt/ssl-dhparams.pem
cp /app/letsencrypt/nginx-ssl.conf /etc/nginx/sites-available/nginx-ssl.conf
sed -i "s/CUSTOM_DOMAIN/$CUSTOM_DOMAIN/g" /etc/nginx/sites-available/nginx-ssl.conf
ln -s /etc/nginx/sites-available/nginx-ssl.conf /etc/nginx/sites-enabled/nginx-ssl.conf
echo "INFO: restart nginx after certbot request"
/etc/init.d/nginx restart
fi

View File

@ -0,0 +1,96 @@
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name _;
ssl_certificate /etc/letsencrypt/live/CUSTOM_DOMAIN/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/CUSTOM_DOMAIN/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
client_max_body_size 1000m;
ignore_invalid_headers off;
proxy_buffering off;
# port_in_redirect off;
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /var/www/html;
break;
}
location = /.well-known/acme-challenge/ {
return 404;
}
location /app {
proxy_pass http://127.0.0.1:4001;
}
location = / {
proxy_pass http://127.0.0.1:4001;
}
location ~ ^/(builder|app_) {
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:4001;
}
location ~ ^/api/(system|admin|global)/ {
proxy_pass http://127.0.0.1:4002;
}
location /worker/ {
proxy_pass http://127.0.0.1:4002;
rewrite ^/worker/(.*)$ /$1 break;
}
location /api/ {
# calls to the API are rate limited with bursting
limit_req zone=ratelimit burst=20 nodelay;
# 120s timeout on API requests
proxy_read_timeout 120s;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:4001;
}
location /db/ {
proxy_pass http://127.0.0.1:5984;
rewrite ^/db/(.*)$ /$1 break;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://127.0.0.1:9000;
}
client_header_timeout 60;
client_body_timeout 60;
keepalive_timeout 60;
# gzip
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;
}

View File

@ -0,0 +1,13 @@
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";

View File

@ -0,0 +1,8 @@
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz
+8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a
87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7
YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi
7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD
ssbzSibBsu/6iGtCOGEoXJf//////////wIBAg==
-----END DH PARAMETERS-----

View File

@ -28,6 +28,12 @@ http {
ignore_invalid_headers off;
proxy_buffering off;
error_page 502 503 504 /error.html;
location = /error.html {
root /usr/share/nginx/html;
internal;
}
location /db/ {
proxy_pass http://couchdb-service:5984;
rewrite ^/db/(.*)$ /$1 break;

View File

@ -30,7 +30,7 @@ http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
map $http_upgrade $connection_upgrade {
default "upgrade";
}
@ -42,13 +42,13 @@ http {
client_max_body_size 1000m;
ignore_invalid_headers off;
proxy_buffering off;
set $csp_default "default-src 'self'"
set $csp_default "default-src 'self'";
set $csp_script "script-src 'self' 'unsafe-inline' 'unsafe-eval' https://cdn.budi.live https://js.intercomcdn.com https://widget.intercom.io";
set $csp_style "style-src 'self' 'unsafe-inline' https://cdn.jsdelivr.net https://fonts.googleapis.com https://rsms.me https://maxcdn.bootstrapcdn.com";
set $csp_object "object-src 'none'";
set $csp_base_uri "base-uri 'self'";
set $csp_connect "connect-src 'self' https://api-iam.intercom.io https://api-iam.intercom.io https://api-ping.intercom.io https://app.posthog.com wss://nexus-websocket-a.intercom.io wss://nexus-websocket-b.intercom.io https://nexus-websocket-a.intercom.io https://nexus-websocket-b.intercom.io https://uploads.intercomcdn.com https://uploads.intercomusercontent.com";
set $csp_connect "connect-src 'self' https://api-iam.intercom.io https://api-iam.intercom.io https://api-ping.intercom.io https://app.posthog.com wss://nexus-websocket-a.intercom.io wss://nexus-websocket-b.intercom.io https://nexus-websocket-a.intercom.io https://nexus-websocket-b.intercom.io https://uploads.intercomcdn.com https://uploads.intercomusercontent.com https://*.s3.amazonaws.com https://*.s3.us-east-2.amazonaws.com https://*.s3.us-east-1.amazonaws.com https://*.s3.us-west-1.amazonaws.com https://*.s3.us-west-2.amazonaws.com https://*.s3.af-south-1.amazonaws.com https://*.s3.ap-east-1.amazonaws.com https://*.s3.ap-southeast-3.amazonaws.com https://*.s3.ap-south-1.amazonaws.com https://*.s3.ap-northeast-3.amazonaws.com https://*.s3.ap-northeast-2.amazonaws.com https://*.s3.ap-southeast-1.amazonaws.com https://*.s3.ap-southeast-2.amazonaws.com https://*.s3.ap-northeast-1.amazonaws.com https://*.s3.ca-central-1.amazonaws.com https://*.s3.cn-north-1.amazonaws.com https://*.s3.cn-northwest-1.amazonaws.com https://*.s3.eu-central-1.amazonaws.com https://*.s3.eu-west-1.amazonaws.com https://*.s3.eu-west-2.amazonaws.com https://*.s3.eu-south-1.amazonaws.com https://*.s3.eu-west-3.amazonaws.com https://*.s3.eu-north-1.amazonaws.com https://*.s3.sa-east-1.amazonaws.com https://*.s3.me-south-1.amazonaws.com https://*.s3.us-gov-east-1.amazonaws.com https://*.s3.us-gov-west-1.amazonaws.com";
set $csp_font "font-src 'self' data: https://cdn.jsdelivr.net https://fonts.gstatic.com https://rsms.me https://maxcdn.bootstrapcdn.com https://js.intercomcdn.com https://fonts.intercomcdn.com";
set $csp_frame "frame-src 'self' https:";
set $csp_img "img-src http: https: data: blob:";
@ -56,11 +56,17 @@ http {
set $csp_media "media-src 'self' https://js.intercomcdn.com";
set $csp_worker "worker-src 'none'";
error_page 502 503 504 /error.html;
location = /error.html {
root /usr/share/nginx/html;
internal;
}
# Security Headers
add_header X-Frame-Options SAMEORIGIN always;
add_header X-Content-Type-Options nosniff always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Content-Security-Policy ${csp_default}; ${csp_script}; ${csp_style}; ${csp_object}; ${csp_base_uri}; ${csp_connect}; ${csp_font}; ${csp_frame}; ${csp_img}; ${csp_manifest}; ${csp_media}; ${csp_worker};" always;
add_header Content-Security-Policy "${csp_default}; ${csp_script}; ${csp_style}; ${csp_object}; ${csp_base_uri}; ${csp_connect}; ${csp_font}; ${csp_frame}; ${csp_img}; ${csp_manifest}; ${csp_media}; ${csp_worker};" always;
# upstreams
set $apps {{ apps }};
@ -148,4 +154,4 @@ http {
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;
}
}
}

View File

@ -0,0 +1,94 @@
{
"version": "2",
"templates": [
{
"type": 3,
"title": "Budibase",
"categories": ["Tools"],
"description": "Build modern business apps in minutes",
"logo": "https://budibase.com/favicon.ico",
"platform": "linux",
"repository": {
"url": "https://github.com/Budibase/budibase",
"stackfile": "hosting/docker-compose.yaml"
},
"env": [
{
"name": "MAIN_PORT",
"label": "Main port",
"default": "10000"
},
{
"name": "JWT_SECRET",
"label": "JWT secret",
"default": "change-me"
},
{
"name": "MINIO_ACCESS_KEY",
"label": "MinIO access key",
"default": "change-me"
},
{
"name": "MINIO_SECRET_KEY",
"label": "MinIO secret key",
"default": "change-me"
},
{
"name": "COUCH_DB_USER",
"default": "budibase",
"preset": true
},
{
"name": "COUCH_DB_PASSWORD",
"label": "Couch DB password",
"default": "change-me"
},
{
"name": "REDIS_PASSWORD",
"label": "Redis password",
"default": "change-me"
},
{
"name": "INTERNAL_API_KEY",
"label": "Internal API key",
"default": "change-me"
},
{
"name": "APP_PORT",
"default": "4002",
"preset": true
},
{
"name": "WORKER_PORT",
"default": "4003",
"preset": true
},
{
"name": "MINIO_PORT",
"default": "4004",
"preset": true
},
{
"name": "COUCH_DB_PORT",
"default": "4005",
"preset": true
},
{
"name": "REDIS_PORT",
"default": "6379",
"preset": true
},
{
"name": "WATCHTOWER_PORT",
"default": "6161",
"preset": true
},
{
"name": "BUDIBASE_ENVIRONMENT",
"default": "PRODUCTION",
"preset": true
}
]
}
]
}

View File

@ -1,2 +1,3 @@
FROM nginx:latest
COPY .generated-nginx.prod.conf /etc/nginx/nginx.conf
COPY .generated-nginx.prod.conf /etc/nginx/nginx.conf
COPY error.html /usr/share/nginx/html/error.html

175
hosting/proxy/error.html Normal file
View File

@ -0,0 +1,175 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Budibase</title>
<meta name="viewport" content="width=device-width,initial-scale=1">
</head>
<script>
function checkStatusButton() {
if (window.location.href.includes("budibase.app")) {
var button = document.getElementById("statusButton")
button.removeAttribute("hidden")
}
}
function goToStatus() {
window.location.href = "https://status.budibase.com";
}
function goHome() {
window.location.href = window.location.origin;
}
function getStatus() {
var http = new XMLHttpRequest()
var url = window.location.href
http.open('GET', url, true)
http.send()
http.onreadystatechange = (e) => {
var status = http.status
document.getElementById("status").innerHTML = status
var message
if (status === 502) {
message = "Bad gateway. Please try again later."
} else if (status === 503) {
message = "Service Unavailable. Please try again later."
} else if (status === 504) {
message = "Gateway timeout. Please try again later."
} else {
message = "Please try again later."
}
document.getElementById("message").innerHTML = message
}
}
window.onload = function() {
checkStatusButton()
getStatus()
};
</script>
<style>
:root {
--spectrum-global-color-gray-600: rgb(144,144,144);
--spectrum-global-color-gray-900: rgb(255,255,255);
--spectrum-global-color-gray-800: rgb(227,227,227);
--spectrum-global-color-static-blue-600: rgb(20,115,230);
--spectrum-global-color-static-blue-hover: rgb( 18, 103, 207);
}
html, body {
background-color: #1a1a1a;
padding: 0;
margin: 0;
overflow: hidden;
color: #e7e7e7;
font-family: 'Roboto', sans-serif;
}
button {
color: #e7e7e7;
font-family: 'Roboto', sans-serif;
border: none;
font-size: 15px;
border-radius: 15px;
padding: 8px 22px;
}
button:hover {
cursor: pointer;
}
.main {
height: 100vh;
display: flex;
flex-direction: row;
align-items: center;
justify-content: center;
}
.info {
display: flex;
flex-direction: column;
align-items: left;
}
@media only screen and (max-width: 600px) {
.info {
align-items: center;
}
}
.status {
color: var(--spectrum-global-color-gray-600)
}
.title {
font-weight: 400;
color: var(--spectrum-global-color-gray-900)
}
.message {
font-weight: 200;
color: var(--spectrum-global-color-gray-800)
}
.buttons {
display: flex;
flex-direction: row;
margin-top: 15px;
}
.homeButton {
background-color: var(--spectrum-global-color-static-blue-600);
}
.homeButton:hover {
background-color: var(--spectrum-global-color-static-blue-hover);
}
.statusButton {
background-color: transparent;
margin-left: 20px;
border: none;
}
.hero {
height: 160px;
width: 160px;
margin-right: 80px;
}
.content {
display: flex;
flex-direction: row;
align-items: flex-end;
justify-content: center;
}
@media only screen and (max-width: 600px) {
.content {
flex-direction: column;
}
}
</style>
<script src="">
</script>
<body>
<div class="main">
<div class="content">
<div class="hero">
<img src="https://raw.githubusercontent.com/Budibase/budibase/master/packages/builder/assets/bb-space-man.svg" alt="Budibase Logo">
</div>
<div class="info">
<div>
<h4 id="status" class="status"></h4>
<h1 class="title">
Houston we have a problem!
</h1>
<h3 id="message" class="message">
</h3>
</div>
<div class="buttons">
<button class="homeButton" onclick=goHome()>Return home</button>
<button id="statusButton" class="statusButton" hidden="true" onclick=goToStatus()>Check out status</button>
</div>
</div>
</div>
</div>
</body>
</html>

View File

@ -0,0 +1,17 @@
#!/bin/bash
echo ${TARGETBUILD} > /buildtarget.txt
if [[ "${TARGETBUILD}" = "aas" ]]; then
# Azure AppService uses /home for persistent data & SSH on port 2222
mkdir -p /home/{search,minio,couch}
mkdir -p /home/couch/{dbs,views}
chown -R couchdb:couchdb /home/couch/
apt update
apt-get install -y openssh-server
sed -i 's#dir=/opt/couchdb/data/search#dir=/home/search#' /opt/clouseau/clouseau.ini
sed -i 's#/minio/minio server /minio &#/minio/minio server /home/minio &#' /runner.sh
sed -i 's#database_dir = ./data#database_dir = /home/couch/dbs#' /opt/couchdb/etc/default.ini
sed -i 's#view_index_dir = ./data#view_index_dir = /home/couch/views#' /opt/couchdb/etc/default.ini
sed -i "s/#Port 22/Port 2222/" /etc/ssh/sshd_config
/etc/init.d/ssh restart
fi

148
hosting/single/Dockerfile Normal file
View File

@ -0,0 +1,148 @@
FROM node:14-slim as build
# install node-gyp dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y --no-install-recommends apt-utils cron g++ make python
# add pin script
WORKDIR /
ADD scripts/pinVersions.js scripts/cleanup.sh ./
RUN chmod +x /cleanup.sh
# build server
WORKDIR /app
ADD packages/server .
RUN node /pinVersions.js && yarn && yarn build && /cleanup.sh
# build worker
WORKDIR /worker
ADD packages/worker .
RUN node /pinVersions.js && yarn && yarn build && /cleanup.sh
FROM couchdb:3.2.1
# TARGETARCH can be amd64 or arm e.g. docker build --build-arg TARGETARCH=amd64
ARG TARGETARCH=amd64
#TARGETBUILD can be set to single (for single docker image) or aas (for azure app service)
# e.g. docker build --build-arg TARGETBUILD=aas ....
ARG TARGETBUILD=single
ENV TARGETBUILD $TARGETBUILD
COPY --from=build /app /app
COPY --from=build /worker /worker
ENV \
APP_PORT=4001 \
ARCHITECTURE=amd \
BUDIBASE_ENVIRONMENT=PRODUCTION \
CLUSTER_PORT=80 \
# CUSTOM_DOMAIN=budi001.custom.com \
DEPLOYMENT_ENVIRONMENT=docker \
MINIO_URL=http://localhost:9000 \
POSTHOG_TOKEN=phc_fg5I3nDOf6oJVMHSaycEhpPdlgS8rzXG2r6F2IpxCHS \
REDIS_URL=localhost:6379 \
SELF_HOSTED=1 \
TARGETBUILD=$TARGETBUILD \
WORKER_PORT=4002 \
WORKER_URL=http://localhost:4002 \
APPS_URL=http://localhost:4001
# These secret env variables are generated by the runner at startup
# their values can be overridden by the user; they will be written
# to the .env file in the /data directory for use later on
# REDIS_PASSWORD=budibase \
# COUCHDB_PASSWORD=budibase \
# COUCHDB_USER=budibase \
# COUCH_DB_URL=http://budibase:budibase@localhost:5984 \
# INTERNAL_API_KEY=budibase \
# JWT_SECRET=testsecret \
# MINIO_ACCESS_KEY=budibase \
# MINIO_SECRET_KEY=budibase \
# install base dependencies
RUN apt-get update && \
apt-get install -y software-properties-common wget nginx uuid-runtime && \
apt-add-repository 'deb http://security.debian.org/debian-security stretch/updates main' && \
apt-get update
# install other dependencies, nodejs, oracle requirements, jdk8, redis, nginx
WORKDIR /nodejs
RUN curl -sL https://deb.nodesource.com/setup_16.x -o /tmp/nodesource_setup.sh && \
bash /tmp/nodesource_setup.sh && \
apt-get install -y libaio1 nodejs nginx openjdk-8-jdk redis-server unzip && \
npm install --global yarn pm2
# setup nginx
ADD hosting/single/nginx/nginx.conf /etc/nginx
ADD hosting/single/nginx/nginx-default-site.conf /etc/nginx/sites-enabled/default
RUN mkdir -p /var/log/nginx && \
touch /var/log/nginx/error.log && \
touch /var/run/nginx.pid
WORKDIR /
RUN mkdir -p scripts/integrations/oracle
ADD packages/server/scripts/integrations/oracle scripts/integrations/oracle
RUN /bin/bash -e ./scripts/integrations/oracle/instantclient/linux/install.sh
# setup clouseau
WORKDIR /
RUN wget https://github.com/cloudant-labs/clouseau/releases/download/2.21.0/clouseau-2.21.0-dist.zip && \
unzip clouseau-2.21.0-dist.zip && \
mv clouseau-2.21.0 /opt/clouseau && \
rm clouseau-2.21.0-dist.zip
WORKDIR /opt/clouseau
RUN mkdir ./bin
ADD hosting/single/clouseau/clouseau ./bin/
ADD hosting/single/clouseau/log4j.properties hosting/single/clouseau/clouseau.ini ./
RUN chmod +x ./bin/clouseau
# setup CouchDB
WORKDIR /opt/couchdb
ADD hosting/single/couch/vm.args hosting/single/couch/local.ini ./etc/
# setup minio
WORKDIR /minio
ADD scripts/install-minio.sh ./install.sh
RUN chmod +x install.sh && ./install.sh
# setup runner file
WORKDIR /
ADD hosting/single/runner.sh .
RUN chmod +x ./runner.sh
ADD hosting/single/healthcheck.sh .
RUN chmod +x ./healthcheck.sh
ADD hosting/scripts/build-target-paths.sh .
RUN chmod +x ./build-target-paths.sh
# For Azure App Service install SSH & point data locations to /home
RUN /build-target-paths.sh
# cleanup cache
RUN yarn cache clean -f
EXPOSE 80
EXPOSE 443
VOLUME /data
# setup letsencrypt certificate
RUN apt-get install -y certbot python3-certbot-nginx
ADD hosting/letsencrypt /app/letsencrypt
RUN chmod +x /app/letsencrypt/certificate-request.sh /app/letsencrypt/certificate-renew.sh
# Remove cached files
RUN rm -rf \
/root/.cache \
/root/.npm \
/root/.pip \
/usr/local/share/doc \
/usr/share/doc \
/usr/share/man \
/var/lib/apt/lists/* \
/tmp/*
HEALTHCHECK --interval=15s --timeout=15s --start-period=45s CMD "/healthcheck.sh"
# must set this just before running
ENV NODE_ENV=production
WORKDIR /
CMD ["./runner.sh"]

112
hosting/single/README.md Normal file
View File

@ -0,0 +1,112 @@
# Docker Single Image for Budibase
## Overview
As an alternative to running several docker containers via docker-compose, the files under ./hosting/single can be used to build a docker image containing all of the Budibase components (minio, couch, clouseau, etc.).
We call this the 'single image' container as the Dockerfile adds all the components to a single docker image.
## Usage
- Amend Environment Variables
- Build Requirements
- Build the Image
- Run the Container
### Amend Environment Variables
Edit the Dockerfile in this directory amending the environment variables to suit your usage. Pay particular attention to changing passwords.
The CUSTOM_DOMAIN variable will be used to request a certificate from LetsEncrypt and, if successful, you can point traffic to port 443. If you choose to use the CUSTOM_DOMAIN variable, ensure that the DNS for your custom domain points to the public IP address where you are running Budibase - otherwise the certificate issuance will fail.
If you have other arrangements for a proxy in front of the single image container, you can omit the CUSTOM_DOMAIN environment variable and the request to LetsEncrypt will be skipped. You can then point traffic to port 80.
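Because the runner script only acts on CUSTOM_DOMAIN when it is set, you can also supply it at run time instead of baking it into the image; the domain below is a placeholder:
```
docker run -d -p 80:80 -p 443:443 \
  -e CUSTOM_DOMAIN=budibase.example.com \
  --name budibase budibase:latest
```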
### Build Requirements
We would suggest building the image with 6GB of RAM and 20GB of free disk space for build artifacts. The resulting image will use approx 2GB of disk space.
### Build the Image
The guidance below is based on building the Budibase single image on Debian 11 and AlmaLinux 8. If you use another distro or OS you will need to amend the commands to suit.
#### Install Node
Budibase requires a more recent version of node (14+) than is available in the base Debian repos so:
```
curl -sL https://deb.nodesource.com/setup_16.x | sudo bash -
apt install -y nodejs
node -v
```
Install yarn, jest and lerna:
```
npm install -g yarn jest lerna
```
#### Install Docker
```
apt install -y docker.io
```
Check the version of each installed component. This process was tested with the version numbers below so YMMV using anything else:
- Docker: 20.10.5
- node: 16.15.1
- yarn: 1.22.19
- lerna: 5.1.4
#### Get the Code
Clone the Budibase repo
```
git clone https://github.com/Budibase/budibase.git
cd budibase
```
#### Setup Node
Node setup:
```
node ./hosting/scripts/setup.js
yarn
yarn bootstrap
yarn build
```
#### Build Image
The following yarn command does some prep and then runs the docker build command:
```
yarn build:docker:single
```
If the docker build step fails, try running that step again manually with:
```
docker build --build-arg TARGETARCH=amd --no-cache -t budibase:latest -f ./hosting/single/Dockerfile .
```
#### Azure App Services
Azure has some specific requirements for running a container in its App Service: specifically, installation of SSH on port 2222 and data storage under /home. If you would like to build a budibase container for Azure App Service, add the build argument shown below, setting it to 'aas'. You can remove the CUSTOM_DOMAIN env variable from the Dockerfile too, as Azure terminates SSL before requests reach the container.
```
docker build --build-arg TARGETARCH=amd --build-arg TARGETBUILD=aas -t budibase:latest -f ./hosting/single/Dockerfile .
```
### Run the Container
```
docker run -d -p 80:80 -p 443:443 --name budibase budibase:latest
```
Where:
- -d runs the container in detached mode
- -p forwards ports from your host to the ports inside the container. If you are already using port 80 on your host for something else you can try running with an alternative port e.g. `-p 8080:80`
- --name is the name for the container as shown in `docker ps` and can be used with other docker commands e.g. `docker restart budibase`
When the container runs you should be able to access it over http at your host address, e.g. http://1.2.3.4/, or using your custom domain, e.g. https://my.custom.domain/
When the Budibase UI appears you will be prompted to create an account to get started.
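The image declares a `/data` volume for its stateful services, so if you want app data to survive the container being removed and re-created (not just restarted), consider mounting a named volume:
```
docker run -d -p 80:80 -p 443:443 \
  -v budibase_data:/data \
  --name budibase budibase:latest
```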
### Podman
The single image container builds fine when using podman in place of docker. You may be prompted for the registry to use for the CouchDB image, and the HEALTHCHECK parameter is not OCI compliant so it is ignored.
### Check
There are many things that could go wrong, so if your container is not building or running as expected, please check the following before opening a support issue.
Verify the healthcheck status of the container:
```
docker ps
```
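`docker ps` shows the health state in its STATUS column; to print just the health value you can also use the standard inspect format:
```
docker inspect --format '{{.State.Health.Status}}' budibase
```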
Check the container logs:
```
docker logs budibase
```
### Support
This single image build is still a work-in-progress so if you open an issue please provide the following information:
- The OS and OS version you are building on
- The versions you are using of docker, docker-compose, yarn, node, lerna
- For build errors please provide zipped output
- For container errors please provide zipped container logs

View File

@ -0,0 +1,12 @@
#!/bin/sh
/usr/bin/java -server \
-Xmx2G \
-Dsun.net.inetaddr.ttl=30 \
-Dsun.net.inetaddr.negative.ttl=30 \
-Dlog4j.configuration=file:/opt/clouseau/log4j.properties \
-XX:OnOutOfMemoryError="kill -9 %p" \
-XX:+UseConcMarkSweepGC \
-XX:+CMSParallelRemarkEnabled \
-classpath '/opt/clouseau/*' \
com.cloudant.clouseau.Main \
/opt/clouseau/clouseau.ini

View File

@ -0,0 +1,13 @@
[clouseau]
; the name of the Erlang node created by the service, leave this unchanged
name=clouseau@127.0.0.1
; set this to the same distributed Erlang cookie used by the CouchDB nodes
cookie=monster
; the path where you would like to store the search index files
dir=/data/search
; the number of search indexes that can be open simultaneously
max_indexes_open=500

View File

@ -0,0 +1,4 @@
log4j.rootLogger=debug, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %c [%p] %m%n

View File

@ -0,0 +1,5 @@
; CouchDB Configuration Settings
[couchdb]
database_dir = /data/couch/dbs
view_index_dir = /data/couch/views

View File

@ -0,0 +1,32 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
# erlang cookie for clouseau security
-name couchdb@127.0.0.1
-setcookie monster
# Ensure that the Erlang VM listens on a known port
-kernel inet_dist_listen_min 9100
-kernel inet_dist_listen_max 9100
# Tell kernel and SASL not to log anything
-kernel error_logger silent
-sasl sasl_error_logger false
# Use kernel poll functionality if supported by emulator
+K true
# Start a pool of asynchronous IO threads
+A 16
# Comment this line out to enable the interactive Erlang shell on startup
+Bd -noinput

View File

@ -0,0 +1,44 @@
#!/usr/bin/env bash
healthy=true
if [ -f "/data/.env" ]; then
export $(cat /data/.env | xargs)
fi
if [[ $(curl -Lfk -s -w "%{http_code}\n" http://localhost/ -o /dev/null) -ne 200 ]]; then
echo 'ERROR: Budibase is not running';
healthy=false
fi
if [[ $(curl -s -w "%{http_code}\n" http://localhost:4001/health -o /dev/null) -ne 200 ]]; then
echo 'ERROR: Budibase backend is not running';
healthy=false
fi
if [[ $(curl -s -w "%{http_code}\n" http://localhost:4002/health -o /dev/null) -ne 200 ]]; then
echo 'ERROR: Budibase worker is not running';
healthy=false
fi
if [[ $(curl -s -w "%{http_code}\n" http://localhost:5984/ -o /dev/null) -ne 200 ]]; then
echo 'ERROR: CouchDB is not running';
healthy=false
fi
if [[ $(redis-cli -a $REDIS_PASSWORD --no-auth-warning ping) != 'PONG' ]]; then
echo 'ERROR: Redis is down';
healthy=false
fi
# minio and clouseau are not checked here yet
nginx -t -q
NGINX_STATUS=$?
if [[ $NGINX_STATUS -gt 0 ]]; then
echo 'ERROR: Nginx config problem';
healthy=false
fi
if [ $healthy == true ]; then
exit 0
else
exit 1
fi

View File

@ -0,0 +1,91 @@
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
client_max_body_size 1000m;
ignore_invalid_headers off;
proxy_buffering off;
# port_in_redirect off;
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /var/www/html;
break;
}
location = /.well-known/acme-challenge/ {
return 404;
}
location /app {
proxy_pass http://127.0.0.1:4001;
}
location = / {
proxy_pass http://127.0.0.1:4001;
}
location ~ ^/(builder|app_) {
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:4001;
}
location ~ ^/api/(system|admin|global)/ {
proxy_pass http://127.0.0.1:4002;
}
location /worker/ {
proxy_pass http://127.0.0.1:4002;
rewrite ^/worker/(.*)$ /$1 break;
}
location /api/ {
# calls to the API are rate limited with bursting
limit_req zone=ratelimit burst=20 nodelay;
# 120s timeout on API requests
proxy_read_timeout 120s;
proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_http_version 1.1;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:4001;
}
location /db/ {
proxy_pass http://127.0.0.1:5984;
rewrite ^/db/(.*)$ /$1 break;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://127.0.0.1:9000;
}
client_header_timeout 60;
client_body_timeout 60;
keepalive_timeout 60;
# gzip
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;
}

View File

@ -0,0 +1,37 @@
user www-data www-data;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 8192;
events {
worker_connections 1024;
}
http {
limit_req_zone $binary_remote_addr zone=ratelimit:10m rate=20r/s;
proxy_set_header Host $host;
charset utf-8;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
types_hash_max_size 2048;
# buffering
client_header_buffer_size 1k;
client_max_body_size 20M;
ignore_invalid_headers off;
proxy_buffering off;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
map $http_upgrade $connection_upgrade {
default "upgrade";
}
include /etc/nginx/sites-enabled/*;
}

52
hosting/single/runner.sh Normal file
View File

@ -0,0 +1,52 @@
#!/bin/bash
declare -a ENV_VARS=("COUCHDB_USER" "COUCHDB_PASSWORD" "MINIO_ACCESS_KEY" "MINIO_SECRET_KEY" "INTERNAL_API_KEY" "JWT_SECRET" "REDIS_PASSWORD")
if [ -f "/data/.env" ]; then
export $(cat /data/.env | xargs)
fi
# first randomise any unset environment variables
for ENV_VAR in "${ENV_VARS[@]}"
do
temp=$(eval "echo \$$ENV_VAR")
if [[ -z "${temp}" ]]; then
eval "export $ENV_VAR=$(uuidgen | sed -e 's/-//g')"
fi
done
if [[ -z "${COUCH_DB_URL}" ]]; then
export COUCH_DB_URL=http://$COUCHDB_USER:$COUCHDB_PASSWORD@localhost:5984
fi
if [ ! -f "/data/.env" ]; then
touch /data/.env
for ENV_VAR in "${ENV_VARS[@]}"
do
temp=$(eval "echo \$$ENV_VAR")
echo "$ENV_VAR=$temp" >> /data/.env
done
fi
# make these directories in runner, in case of mount
mkdir -p /data/couch/{dbs,views} /home/couch/{dbs,views}
chown -R couchdb:couchdb /data/couch /home/couch
redis-server --requirepass $REDIS_PASSWORD &
/opt/clouseau/bin/clouseau &
/minio/minio server /data/minio &
/docker-entrypoint.sh /opt/couchdb/bin/couchdb &
/etc/init.d/nginx restart
if [[ ! -z "${CUSTOM_DOMAIN}" ]]; then
# Add monthly cron job to renew certbot certificate
echo -n "* * 2 * * root exec /app/letsencrypt/certificate-renew.sh ${CUSTOM_DOMAIN}" >> /etc/cron.d/certificate-renew
chmod +x /etc/cron.d/certificate-renew
# Request the certbot certificate
/app/letsencrypt/certificate-request.sh ${CUSTOM_DOMAIN}
fi
/etc/init.d/nginx restart
pushd app
pm2 start --name app "yarn run:docker"
popd
pushd worker
pm2 start --name worker "yarn run:docker"
popd
sleep 10
curl -X PUT ${COUCH_DB_URL}/_users
curl -X PUT ${COUCH_DB_URL}/_replicator
sleep infinity

4
hosting/single/test.sh Executable file
View File

@ -0,0 +1,4 @@
#!/bin/bash
id=$(docker run -t -d -p 8080:80 budibase:latest)
docker exec -it $id bash
docker kill $id

View File

@ -1,5 +1,5 @@
{
"version": "1.0.104-alpha.0",
"version": "1.1.22",
"npmClient": "yarn",
"packages": [
"packages/*"

View File

@ -3,6 +3,8 @@
"private": true,
"devDependencies": {
"@rollup/plugin-json": "^4.0.2",
"@types/mongodb": "3.6.3",
"@typescript-eslint/parser": "4.28.0",
"babel-eslint": "^10.0.3",
"eslint": "^7.28.0",
"eslint-plugin-cypress": "^2.11.3",
@ -16,30 +18,30 @@
"rimraf": "^3.0.2",
"rollup-plugin-replace": "^2.2.0",
"svelte": "^3.38.2",
"@typescript-eslint/parser": "4.28.0",
"typescript": "4.5.5"
},
"scripts": {
"setup": "node ./hosting/scripts/setup.js && yarn && yarn bootstrap && yarn build && yarn dev",
"bootstrap": "lerna link && lerna bootstrap",
"bootstrap": "lerna bootstrap && lerna link && ./scripts/link-dependencies.sh",
"build": "lerna run build",
"publishdev": "lerna run publishdev",
"publishnpm": "yarn build && lerna publish --force-publish",
"release": "lerna publish patch --yes --force-publish",
"release:develop": "lerna publish prerelease --yes --force-publish --dist-tag develop",
"build:dev": "lerna run prebuild && tsc --build --watch --preserveWatchOutput",
"release": "lerna publish ${RELEASE_VERSION_TYPE:-patch} --yes --force-publish && yarn release:pro",
"release:develop": "lerna publish prerelease --yes --force-publish --dist-tag develop && yarn release:pro:develop",
"release:pro": "bash scripts/pro/release.sh",
"release:pro:develop": "bash scripts/pro/release.sh develop",
"restore": "yarn run clean && yarn run bootstrap && yarn run build",
"nuke": "yarn run nuke:packages && yarn run nuke:docker",
"nuke:packages": "yarn run restore",
"nuke:docker": "lerna run --parallel dev:stack:nuke",
"clean": "lerna clean",
"kill-port": "kill-port 4001",
"kill-builder": "kill-port 3000",
"kill-server": "kill-port 4001 4002",
"kill-all": "yarn run kill-builder && yarn run kill-server",
"dev": "yarn run kill-all && lerna link && lerna run --parallel dev:builder --concurrency 1",
"dev:noserver": "yarn run kill-builder && lerna link && lerna run dev:stack:up && lerna run --parallel dev:builder --concurrency 1 --ignore @budibase/server --ignore @budibase/worker",
"dev:server": "yarn run kill-server && lerna run --parallel dev:builder --concurrency 1 --scope @budibase/worker --scope @budibase/server",
"test": "lerna run test",
"dev:noserver": "yarn run kill-builder && lerna link && lerna run dev:stack:up && lerna run --parallel dev:builder --concurrency 1 --ignore @budibase/backend-core --ignore @budibase/server --ignore @budibase/worker",
"dev:server": "yarn run kill-server && lerna run --parallel dev:builder --concurrency 1 --scope @budibase/backend-core --scope @budibase/worker --scope @budibase/server",
"test": "lerna run test && yarn test:pro",
"test:pro": "bash scripts/pro/test.sh",
"lint:eslint": "eslint packages",
"lint:prettier": "prettier --check \"packages/**/*.{js,ts,svelte}\"",
"lint": "yarn run lint:eslint && yarn run lint:prettier",
@ -48,16 +50,23 @@
"lint:fix": "yarn run lint:fix:prettier && yarn run lint:fix:eslint",
"test:e2e": "lerna run cy:test --stream",
"test:e2e:ci": "lerna run cy:ci --stream",
"test:e2e:ci:record": "lerna run cy:ci:record --stream",
"test:e2e:ci:notify": "lerna run cy:ci:notify",
"build:specs": "lerna run specs",
"build:docker": "lerna run build:docker && npm run build:docker:proxy:compose && cd hosting/scripts/linux/ && ./release-to-docker-hub.sh $BUDIBASE_RELEASE_VERSION && cd -",
"build:docker:pre": "lerna run build && lerna run predocker",
"build:docker:proxy": "docker build hosting/proxy -t proxy-service",
"build:docker:proxy:compose": "node scripts/proxy/generateProxyConfig compose && npm run build:docker:proxy",
"build:docker:proxy:preprod": "node scripts/proxy/generateProxyConfig preprod && npm run build:docker:proxy",
"build:docker:proxy:release": "node scripts/proxy/generateProxyConfig release && npm run build:docker:proxy",
"build:docker:proxy:prod": "node scripts/proxy/generateProxyConfig prod && npm run build:docker:proxy",
"build:docker:selfhost": "lerna run build:docker && cd hosting/scripts/linux/ && ./release-to-docker-hub.sh latest && cd -",
"build:docker:develop": "node scripts/pinVersions && lerna run build:docker && npm run build:docker:proxy:compose && cd hosting/scripts/linux/ && ./release-to-docker-hub.sh develop && cd -",
"build:docker:airgap": "node hosting/scripts/airgapped/airgappedDockerBuild",
"build:digitalocean": "cd hosting/digitalocean && ./build.sh && cd -",
"build:docker:single:multiarch": "docker buildx build --platform linux/arm64,linux/amd64 -f hosting/single/Dockerfile -t budibase:latest .",
"build:docker:single:image": "docker build -f hosting/single/Dockerfile -t budibase:latest .",
"build:docker:single": "npm run build:docker:pre && npm run build:docker:single:image",
"build:docs": "lerna run build:docs",
"release:helm": "node scripts/releaseHelmChart",
"env:multi:enable": "lerna run env:multi:enable",
@ -72,6 +81,8 @@
"mode:cloud": "yarn env:selfhost:disable && yarn env:multi:enable && yarn env:account:disable",
"mode:account": "yarn mode:cloud && yarn env:account:enable",
"security:audit": "node scripts/audit.js",
"postinstall": "husky install"
"postinstall": "husky install",
"install:pro": "bash scripts/pro/install.sh",
"dep:clean": "yarn clean && yarn bootstrap"
}
}

View File

@ -44,9 +44,6 @@ jspm_packages/
# Snowpack dependency directory (https://snowpack.dev/)
web_modules/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm

View File

@ -1,4 +1,9 @@
const generic = require("./src/cache/generic")
module.exports = {
user: require("./src/cache/user"),
app: require("./src/cache/appMetadata"),
writethrough: require("./src/cache/writethrough"),
...generic,
cache: generic,
}

View File

@ -5,8 +5,11 @@ const {
getAppId,
updateAppId,
doInAppContext,
doInTenant,
} = require("./src/context")
const identity = require("./src/context/identity")
module.exports = {
getAppDB,
getDevAppDB,
@ -14,4 +17,6 @@ module.exports = {
getAppId,
updateAppId,
doInAppContext,
doInTenant,
identity,
}

View File

@ -3,4 +3,5 @@ module.exports = {
...require("./src/db/constants"),
...require("./src/db"),
...require("./src/db/views"),
...require("./src/db/pouch"),
}

View File

@ -0,0 +1 @@
module.exports = require("./src/logging")

View File

@ -1,45 +1,83 @@
{
"name": "@budibase/backend-core",
"version": "1.0.104-alpha.0",
"version": "1.1.22",
"description": "Budibase backend core libraries used in server and worker",
"main": "src/index.js",
"main": "dist/src/index.js",
"types": "dist/src/index.d.ts",
"exports": {
".": "./dist/src/index.js",
"./tests": "./dist/tests/index.js",
"./*": "./dist/*.js"
},
"author": "Budibase",
"license": "GPL-3.0",
"scripts": {
"prebuild": "rimraf dist/",
"prepack": "cp package.json dist",
"build": "tsc -p tsconfig.build.json",
"build:dev": "yarn prebuild && tsc --build --watch --preserveWatchOutput",
"test": "jest",
"test:watch": "jest --watchAll"
},
"dependencies": {
"@techpass/passport-openidconnect": "^0.3.0",
"aws-sdk": "^2.901.0",
"bcryptjs": "^2.4.3",
"cls-hooked": "^4.2.2",
"ioredis": "^4.27.1",
"jsonwebtoken": "^8.5.1",
"koa-passport": "^4.1.4",
"lodash": "^4.17.21",
"lodash.isarguments": "^3.1.0",
"node-fetch": "^2.6.1",
"passport-google-auth": "^1.0.2",
"passport-google-oauth": "^2.0.0",
"passport-jwt": "^4.0.0",
"passport-local": "^1.0.0",
"sanitize-s3-objectkey": "^0.0.1",
"tar-fs": "^2.1.1",
"uuid": "^8.3.2",
"zlib": "^1.0.5"
"@budibase/types": "^1.1.22",
"@techpass/passport-openidconnect": "0.3.2",
"aws-sdk": "2.1030.0",
"bcrypt": "5.0.1",
"dotenv": "16.0.1",
"emitter-listener": "1.1.2",
"ioredis": "4.28.0",
"jsonwebtoken": "8.5.1",
"koa-passport": "4.1.4",
"lodash": "4.17.21",
"lodash.isarguments": "3.1.0",
"node-fetch": "2.6.7",
"passport-google-auth": "1.0.2",
"passport-google-oauth": "2.0.0",
"passport-jwt": "4.0.0",
"passport-local": "1.0.0",
"passport-oauth2-refresh": "^2.1.0",
"posthog-node": "1.3.0",
"pouchdb": "7.3.0",
"pouchdb-find": "7.2.2",
"pouchdb-replication-stream": "1.2.9",
"redlock": "4.2.0",
"sanitize-s3-objectkey": "0.0.1",
"semver": "7.3.7",
"tar-fs": "2.1.1",
"uuid": "8.3.2",
"zlib": "1.0.5"
},
"jest": {
"preset": "ts-jest",
"testEnvironment": "node",
"moduleNameMapper": {
"@budibase/types": "<rootDir>/../types/src"
},
"setupFiles": [
"./scripts/jestSetup.js"
"./scripts/jestSetup.ts"
]
},
"devDependencies": {
"ioredis-mock": "^5.5.5",
"jest": "^26.6.3",
"pouchdb": "^7.2.1",
"pouchdb-adapter-memory": "^7.2.2",
"pouchdb-all-dbs": "^1.0.2"
"@shopify/jest-koa-mocks": "3.1.5",
"@types/jest": "27.5.1",
"@types/koa": "2.0.52",
"@types/lodash": "4.14.180",
"@types/node": "14.18.20",
"@types/node-fetch": "2.6.1",
"@types/pouchdb": "6.4.0",
"@types/redlock": "4.0.3",
"@types/semver": "7.3.7",
"@types/tar-fs": "2.0.1",
"@types/uuid": "8.3.4",
"ioredis-mock": "5.8.0",
"jest": "27.5.1",
"koa": "2.7.0",
"nodemon": "2.0.16",
"pouchdb-adapter-memory": "7.2.2",
"timekeeper": "2.2.0",
"ts-jest": "27.1.5",
"typescript": "4.7.3"
},
"gitHead": "d1836a898cab3f8ab80ee6d8f42be1a9eed7dcdc"
}

View File

@ -1,4 +1,5 @@
module.exports = {
Client: require("./src/redis"),
utils: require("./src/redis/utils"),
clients: require("./src/redis/init"),
}

View File

@ -1,6 +0,0 @@
const env = require("../src/environment")
env._set("SELF_HOSTED", "1")
env._set("NODE_ENV", "jest")
env._set("JWT_SECRET", "test-jwtsecret")
env._set("LOG_LEVEL", "silent")

View File

@ -0,0 +1,12 @@
import env from "../src/environment"
import { mocks } from "../tests/utilities"
// mock all dates to 2020-01-01T00:00:00.000Z
// use tk.reset() to use real dates in individual tests
import tk from "timekeeper"
tk.freeze(mocks.date.MOCK_DATE)
env._set("SELF_HOSTED", "1")
env._set("NODE_ENV", "jest")
env._set("JWT_SECRET", "test-jwtsecret")
env._set("LOG_LEVEL", "silent")

View File

@ -2,6 +2,9 @@ const passport = require("koa-passport")
const LocalStrategy = require("passport-local").Strategy
const JwtStrategy = require("passport-jwt").Strategy
const { getGlobalDB } = require("./tenancy")
const refresh = require("passport-oauth2-refresh")
const { Configs } = require("./constants")
const { getScopedConfig } = require("./db/utils")
const {
jwt,
local,
@ -12,10 +15,13 @@ const {
tenancy,
appTenancy,
authError,
ssoCallbackUrl,
csrf,
internalApi,
} = require("./middleware")
const { invalidateUser } = require("./cache/user")
// Strategies
passport.use(new LocalStrategy(local.options, local.authenticate))
passport.use(new JwtStrategy(jwt.options, jwt.authenticate))
@ -29,11 +35,129 @@ passport.deserializeUser(async (user, done) => {
const user = await db.get(user._id)
return done(null, user)
} catch (err) {
console.error("User not found", err)
console.error(`User not found`, err)
return done(null, false, { message: "User not found" })
}
})
async function refreshOIDCAccessToken(db, chosenConfig, refreshToken) {
const callbackUrl = await oidc.getCallbackUrl(db, chosenConfig)
let enrichedConfig
let strategy
try {
enrichedConfig = await oidc.fetchStrategyConfig(chosenConfig, callbackUrl)
if (!enrichedConfig) {
throw new Error("OIDC Config contents invalid")
}
strategy = await oidc.strategyFactory(enrichedConfig)
} catch (err) {
console.error(err)
throw new Error("Could not refresh OAuth Token")
}
refresh.use(strategy, {
setRefreshOAuth2() {
return strategy._getOAuth2Client(enrichedConfig)
},
})
return new Promise(resolve => {
refresh.requestNewAccessToken(
Configs.OIDC,
refreshToken,
(err, accessToken, refreshToken, params) => {
resolve({ err, accessToken, refreshToken, params })
}
)
})
}
async function refreshGoogleAccessToken(db, config, refreshToken) {
let callbackUrl = await google.getCallbackUrl(db, config)
let strategy
try {
strategy = await google.strategyFactory(config, callbackUrl)
} catch (err) {
console.error(err)
throw new Error("Error constructing OIDC refresh strategy", err)
}
refresh.use(strategy)
return new Promise(resolve => {
refresh.requestNewAccessToken(
Configs.GOOGLE,
refreshToken,
(err, accessToken, refreshToken, params) => {
resolve({ err, accessToken, refreshToken, params })
}
)
})
}
async function refreshOAuthToken(refreshToken, configType, configId) {
const db = getGlobalDB()
const config = await getScopedConfig(db, {
type: configType,
group: {},
})
let chosenConfig = {}
let refreshResponse
if (configType === Configs.OIDC) {
// configId - retrieved from cookie.
chosenConfig = config.configs.filter(c => c.uuid === configId)[0]
if (!chosenConfig) {
throw new Error("Invalid OIDC configuration")
}
refreshResponse = await refreshOIDCAccessToken(
db,
chosenConfig,
refreshToken
)
} else {
chosenConfig = config
refreshResponse = await refreshGoogleAccessToken(
db,
chosenConfig,
refreshToken
)
}
return refreshResponse
}
async function updateUserOAuth(userId, oAuthConfig) {
const details = {
accessToken: oAuthConfig.accessToken,
refreshToken: oAuthConfig.refreshToken,
}
try {
const db = getGlobalDB()
const dbUser = await db.get(userId)
//Do not overwrite the refresh token if a valid one is not provided.
if (typeof details.refreshToken !== "string") {
delete details.refreshToken
}
dbUser.oauth2 = {
...dbUser.oauth2,
...details,
}
await db.put(dbUser)
await invalidateUser(userId)
} catch (e) {
console.error("Could not update OAuth details for current user", e)
}
}
module.exports = {
buildAuthMiddleware: authenticated,
passport,
@ -46,4 +170,7 @@ module.exports = {
authError,
buildCsrfMiddleware: csrf,
internalApi,
refreshOAuthToken,
updateUserOAuth,
ssoCallbackUrl,
}

View File

@ -1,5 +1,5 @@
const redis = require("../redis/authRedis")
const { getCouch } = require("../db")
const redis = require("../redis/init")
const { doWithDB } = require("../db")
const { DocumentTypes } = require("../db/constants")
const AppState = {
@ -10,12 +10,14 @@ const EXPIRY_SECONDS = 3600
/**
* The default populate app metadata function
*/
const populateFromDB = async (appId, CouchDB = null) => {
if (!CouchDB) {
CouchDB = getCouch()
}
const db = new CouchDB(appId, { skip_setup: true })
return db.get(DocumentTypes.APP_METADATA)
const populateFromDB = async appId => {
return doWithDB(
appId,
db => {
return db.get(DocumentTypes.APP_METADATA)
},
{ skip_setup: true }
)
}
const isInvalid = metadata => {
@ -27,17 +29,16 @@ const isInvalid = metadata => {
* Use redis cache to first read the app metadata.
* If not present fallback to loading the app metadata directly and re-caching.
* @param {string} appId the id of the app to get metadata from.
* @param {object} CouchDB the database being passed
* @returns {object} the app metadata.
*/
exports.getAppMetadata = async (appId, CouchDB = null) => {
exports.getAppMetadata = async appId => {
const client = await redis.getAppClient()
// try cache
let metadata = await client.get(appId)
if (!metadata) {
let expiry = EXPIRY_SECONDS
try {
metadata = await populateFromDB(appId, CouchDB)
metadata = await populateFromDB(appId)
} catch (err) {
// app DB left around, but no metadata, it is invalid
if (err && err.status === 404) {

View File

@ -0,0 +1,92 @@
import { getTenantId } from "../../context"
import redis from "../../redis/init"
import RedisWrapper from "../../redis"
function generateTenantKey(key: string) {
const tenantId = getTenantId()
return `${key}:${tenantId}`
}
export = class BaseCache {
client: RedisWrapper | undefined
constructor(client: RedisWrapper | undefined = undefined) {
this.client = client
}
async getClient() {
return !this.client ? await redis.getCacheClient() : this.client
}
async keys(pattern: string) {
const client = await this.getClient()
return client.keys(pattern)
}
/**
* Read only from the cache.
*/
async get(key: string, opts = { useTenancy: true }) {
key = opts.useTenancy ? generateTenantKey(key) : key
const client = await this.getClient()
return client.get(key)
}
/**
* Write to the cache.
*/
async store(
key: string,
value: any,
ttl: number | null = null,
opts = { useTenancy: true }
) {
key = opts.useTenancy ? generateTenantKey(key) : key
const client = await this.getClient()
await client.store(key, value, ttl)
}
/**
* Remove from cache.
*/
async delete(key: string, opts = { useTenancy: true }) {
key = opts.useTenancy ? generateTenantKey(key) : key
const client = await this.getClient()
return client.delete(key)
}
/**
* Read from the cache. Write to the cache if not exists.
*/
async withCache(
key: string,
ttl: number,
fetchFn: any,
opts = { useTenancy: true }
) {
const cachedValue = await this.get(key, opts)
if (cachedValue) {
return cachedValue
}
try {
const fetchedValue = await fetchFn()
await this.store(key, fetchedValue, ttl, opts)
return fetchedValue
} catch (err) {
console.error("Error fetching before cache - ", err)
throw err
}
}
async bustCache(key: string, opts = { client: null }) {
const client = await this.getClient()
try {
await client.delete(generateTenantKey(key))
} catch (err) {
console.error("Error busting cache - ", err)
throw err
}
}
}

View File

@ -0,0 +1,29 @@
const BaseCache = require("./base")
const GENERIC = new BaseCache()
exports.CacheKeys = {
CHECKLIST: "checklist",
INSTALLATION: "installation",
ANALYTICS_ENABLED: "analyticsEnabled",
UNIQUE_TENANT_ID: "uniqueTenantId",
EVENTS: "events",
BACKFILL_METADATA: "backfillMetadata",
}
exports.TTL = {
ONE_MINUTE: 60,
ONE_HOUR: 3600,
ONE_DAY: 86400,
}
function performExport(funcName) {
return (...args) => GENERIC[funcName](...args)
}
exports.keys = performExport("keys")
exports.get = performExport("get")
exports.store = performExport("store")
exports.delete = performExport("delete")
exports.withCache = performExport("withCache")
exports.bustCache = performExport("bustCache")
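
A short usage sketch for the generic cache wrapper above (loadChecklist is a hypothetical async loader):

const { CacheKeys, TTL, withCache } = require("./generic")

async function getChecklist(loadChecklist) {
  // caches the checklist under "checklist:<tenantId>" for one hour
  return withCache(CacheKeys.CHECKLIST, TTL.ONE_HOUR, loadChecklist)
}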

View File

@ -0,0 +1,59 @@
require("../../../tests/utilities/TestConfiguration")
const { Writethrough } = require("../writethrough")
const { dangerousGetDB } = require("../../db")
const tk = require("timekeeper")
const START_DATE = Date.now()
tk.freeze(START_DATE)
const DELAY = 5000
const db = dangerousGetDB("test")
const db2 = dangerousGetDB("test2")
const writethrough = new Writethrough(db, DELAY)
const writethrough2 = new Writethrough(db2, DELAY)
describe("writethrough", () => {
describe("put", () => {
let first
it("should be able to store, will go to DB", async () => {
const response = await writethrough.put({ _id: "test", value: 1 })
const output = await db.get(response.id)
first = output
expect(output.value).toBe(1)
})
it("second put shouldn't update DB", async () => {
const response = await writethrough.put({ ...first, value: 2 })
const output = await db.get(response.id)
expect(first._rev).toBe(output._rev)
expect(output.value).toBe(1)
})
it("should put it again after delay period", async () => {
tk.freeze(START_DATE + DELAY + 1)
const response = await writethrough.put({ ...first, value: 3 })
const output = await db.get(response.id)
expect(response.rev).not.toBe(first._rev)
expect(output.value).toBe(3)
})
})
describe("get", () => {
it("should be able to retrieve", async () => {
const response = await writethrough.get("test")
expect(response.value).toBe(3)
})
})
describe("same doc, different databases (tenancy)", () => {
it("should be able to two different databases", async () => {
const resp1 = await writethrough.put({ _id: "db1", value: "first" })
const resp2 = await writethrough2.put({ _id: "db1", value: "second" })
expect(resp1.rev).toBeDefined()
expect(resp2.rev).toBeDefined()
expect((await db.get("db1")).value).toBe("first")
expect((await db2.get("db1")).value).toBe("second")
})
})
})

View File

@ -1,5 +1,5 @@
const redis = require("../redis/authRedis")
const { getTenantId, lookupTenantId, getGlobalDB } = require("../tenancy")
const redis = require("../redis/init")
const { getTenantId, lookupTenantId, doWithGlobalDB } = require("../tenancy")
const env = require("../environment")
const accounts = require("../cloud/accounts")
@ -9,9 +9,8 @@ const EXPIRY_SECONDS = 3600
* The default populate user function
*/
const populateFromDB = async (userId, tenantId) => {
const user = await getGlobalDB(tenantId).get(userId)
const user = await doWithGlobalDB(tenantId, db => db.get(userId))
user.budibaseAccess = true
if (!env.SELF_HOSTED && !env.DISABLE_ACCOUNT_PORTAL) {
const account = await accounts.getAccount(user.email)
if (account) {

View File

@ -0,0 +1,119 @@
import BaseCache from "./base"
import { getWritethroughClient } from "../redis/init"
import { logWarn } from "../logging"
const DEFAULT_WRITE_RATE_MS = 10000
let CACHE: BaseCache | null = null
interface CacheItem {
doc: any
lastWrite: number
}
async function getCache() {
if (!CACHE) {
const client = await getWritethroughClient()
CACHE = new BaseCache(client)
}
return CACHE
}
function makeCacheKey(db: PouchDB.Database, key: string) {
return db.name + key
}
function makeCacheItem(doc: any, lastWrite: number | null = null): CacheItem {
return { doc, lastWrite: lastWrite || Date.now() }
}
export async function put(
db: PouchDB.Database,
doc: any,
writeRateMs: number = DEFAULT_WRITE_RATE_MS
) {
const cache = await getCache()
const key = doc._id
let cacheItem: CacheItem | undefined = await cache.get(makeCacheKey(db, key))
const updateDb = !cacheItem || cacheItem.lastWrite < Date.now() - writeRateMs
let output = doc
if (updateDb) {
const writeDb = async (toWrite: any) => {
// doc should contain the _id and _rev
const response = await db.put(toWrite)
output = {
...doc,
_id: response.id,
_rev: response.rev,
}
}
try {
await writeDb(doc)
} catch (err: any) {
if (err.status !== 409) {
throw err
} else {
// Swallow 409s but log them
logWarn(`Ignoring conflict in write-through cache`)
}
}
}
// if we are updating the DB then we need to set the lastWrite to now
cacheItem = makeCacheItem(output, updateDb ? null : cacheItem?.lastWrite)
await cache.store(makeCacheKey(db, key), cacheItem)
return { ok: true, id: output._id, rev: output._rev }
}
export async function get(db: PouchDB.Database, id: string): Promise<any> {
const cache = await getCache()
const cacheKey = makeCacheKey(db, id)
let cacheItem: CacheItem = await cache.get(cacheKey)
if (!cacheItem) {
const doc = await db.get(id)
cacheItem = makeCacheItem(doc)
await cache.store(cacheKey, cacheItem)
}
return cacheItem.doc
}
export async function remove(
db: PouchDB.Database,
docOrId: any,
rev?: any
): Promise<void> {
const cache = await getCache()
if (!docOrId) {
throw new Error("No ID/Rev provided.")
}
const id = typeof docOrId === "string" ? docOrId : docOrId._id
rev = typeof docOrId === "string" ? rev : docOrId._rev
try {
await cache.delete(makeCacheKey(db, id))
} finally {
await db.remove(id, rev)
}
}
export class Writethrough {
db: PouchDB.Database
writeRateMs: number
constructor(
db: PouchDB.Database,
writeRateMs: number = DEFAULT_WRITE_RATE_MS
) {
this.db = db
this.writeRateMs = writeRateMs
}
async put(doc: any) {
return put(this.db, doc, this.writeRateMs)
}
async get(id: string) {
return get(this.db, id)
}
async remove(docOrId: any, rev?: any) {
return remove(this.db, docOrId, rev)
}
}
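
A usage sketch for Writethrough: puts inside the write-rate window only touch the Redis cache, while older writes are flushed through to the database (db is assumed to be any open PouchDB instance):

import { Writethrough } from "./writethrough"

async function example(db: PouchDB.Database) {
  const writethrough = new Writethrough(db, 10000)
  await writethrough.put({ _id: "doc1", value: 1 }) // flushed to the DB
  await writethrough.put({ _id: "doc1", value: 2 }) // within 10s, cache only
  return writethrough.get("doc1") // served from the cache: value 2
}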

View File

@ -1,39 +0,0 @@
const API = require("./api")
const env = require("../environment")
const { Headers } = require("../constants")
const api = new API(env.ACCOUNT_PORTAL_URL)
exports.getAccount = async email => {
const payload = {
email,
}
const response = await api.post(`/api/accounts/search`, {
body: payload,
headers: {
[Headers.API_KEY]: env.ACCOUNT_PORTAL_API_KEY,
},
})
const json = await response.json()
if (response.status !== 200) {
throw new Error(`Error getting account by email ${email}`, json)
}
return json[0]
}
exports.getStatus = async () => {
const response = await api.get(`/api/status`, {
headers: {
[Headers.API_KEY]: env.ACCOUNT_PORTAL_API_KEY,
},
})
const json = await response.json()
if (response.status !== 200) {
throw new Error(`Error getting status`)
}
return json
}

View File

@ -0,0 +1,63 @@
import API from "./api"
import env from "../environment"
import { Headers } from "../constants"
import { CloudAccount } from "@budibase/types"
const api = new API(env.ACCOUNT_PORTAL_URL)
export const getAccount = async (
email: string
): Promise<CloudAccount | undefined> => {
const payload = {
email,
}
const response = await api.post(`/api/accounts/search`, {
body: payload,
headers: {
[Headers.API_KEY]: env.ACCOUNT_PORTAL_API_KEY,
},
})
if (response.status !== 200) {
throw new Error(`Error getting account by email ${email}`)
}
const json: CloudAccount[] = await response.json()
return json[0]
}
export const getAccountByTenantId = async (
tenantId: string
): Promise<CloudAccount | undefined> => {
const payload = {
tenantId,
}
const response = await api.post(`/api/accounts/search`, {
body: payload,
headers: {
[Headers.API_KEY]: env.ACCOUNT_PORTAL_API_KEY,
},
})
if (response.status !== 200) {
throw new Error(`Error getting account by tenantId ${tenantId}`)
}
const json: CloudAccount[] = await response.json()
return json[0]
}
export const getStatus = async () => {
const response = await api.get(`/api/status`, {
headers: {
[Headers.API_KEY]: env.ACCOUNT_PORTAL_API_KEY,
},
})
const json = await response.json()
if (response.status !== 200) {
throw new Error(`Error getting status`)
}
return json
}
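
A brief usage sketch for the typed client above; the CloudAccount fields come from @budibase/types and the portal URL/API key from environment config:

import { getAccount } from "./accounts"

async function findTenantByEmail(email: string) {
  const account = await getAccount(email) // undefined when no account matches
  return account?.tenantId
}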

View File

@ -29,9 +29,7 @@ class API {
credentials: "include",
}
const resp = await fetch(`${this.host}${url}`, requestOptions)
return resp
return await fetch(`${this.host}${url}`, requestOptions)
}
post = this.apiCall("POST")

View File

@ -0,0 +1,650 @@
const util = require("util")
const assert = require("assert")
const wrapEmitter = require("emitter-listener")
const async_hooks = require("async_hooks")
const CONTEXTS_SYMBOL = "cls@contexts"
const ERROR_SYMBOL = "error@context"
const DEBUG_CLS_HOOKED = process.env.DEBUG_CLS_HOOKED
let currentUid = -1
module.exports = {
getNamespace: getNamespace,
createNamespace: createNamespace,
destroyNamespace: destroyNamespace,
reset: reset,
ERROR_SYMBOL: ERROR_SYMBOL,
}
function Namespace(name) {
this.name = name
// changed in 2.7: no default context
this.active = null
this._set = []
this.id = null
this._contexts = new Map()
this._indent = 0
this._hook = null
}
Namespace.prototype.set = function set(key, value) {
if (!this.active) {
throw new Error(
"No context available. ns.run() or ns.bind() must be called first."
)
}
this.active[key] = value
if (DEBUG_CLS_HOOKED) {
const indentStr = " ".repeat(this._indent < 0 ? 0 : this._indent)
debug2(
indentStr +
"CONTEXT-SET KEY:" +
key +
"=" +
value +
" in ns:" +
this.name +
" currentUid:" +
currentUid +
" active:" +
util.inspect(this.active, { showHidden: true, depth: 2, colors: true })
)
}
return value
}
Namespace.prototype.get = function get(key) {
if (!this.active) {
if (DEBUG_CLS_HOOKED) {
const asyncHooksCurrentId = async_hooks.currentId()
const triggerId = async_hooks.triggerAsyncId()
const indentStr = " ".repeat(this._indent < 0 ? 0 : this._indent)
debug2(
`${indentStr}CONTEXT-GETTING KEY NO ACTIVE NS: (${this.name}) ${key}=undefined currentUid:${currentUid} asyncHooksCurrentId:${asyncHooksCurrentId} triggerId:${triggerId} len:${this._set.length}`
)
}
return undefined
}
if (DEBUG_CLS_HOOKED) {
const asyncHooksCurrentId = async_hooks.executionAsyncId()
const triggerId = async_hooks.triggerAsyncId()
const indentStr = " ".repeat(this._indent < 0 ? 0 : this._indent)
debug2(
indentStr +
"CONTEXT-GETTING KEY:" +
key +
"=" +
this.active[key] +
" (" +
this.name +
") currentUid:" +
currentUid +
" active:" +
util.inspect(this.active, { showHidden: true, depth: 2, colors: true })
)
debug2(
`${indentStr}CONTEXT-GETTING KEY: (${this.name}) ${key}=${
this.active[key]
} currentUid:${currentUid} asyncHooksCurrentId:${asyncHooksCurrentId} triggerId:${triggerId} len:${
this._set.length
} active:${util.inspect(this.active)}`
)
}
return this.active[key]
}
Namespace.prototype.createContext = function createContext() {
// Prototype-inherit the existing context when creating a new child context within an existing one.
let context = Object.create(this.active ? this.active : Object.prototype)
context._ns_name = this.name
context.id = currentUid
if (DEBUG_CLS_HOOKED) {
const asyncHooksCurrentId = async_hooks.executionAsyncId()
const triggerId = async_hooks.triggerAsyncId()
const indentStr = " ".repeat(this._indent < 0 ? 0 : this._indent)
debug2(
`${indentStr}CONTEXT-CREATED Context: (${
this.name
}) currentUid:${currentUid} asyncHooksCurrentId:${asyncHooksCurrentId} triggerId:${triggerId} len:${
this._set.length
} context:${util.inspect(context, {
showHidden: true,
depth: 2,
colors: true,
})}`
)
}
return context
}
Namespace.prototype.run = function run(fn) {
let context = this.createContext()
this.enter(context)
try {
if (DEBUG_CLS_HOOKED) {
const triggerId = async_hooks.triggerAsyncId()
const asyncHooksCurrentId = async_hooks.executionAsyncId()
const indentStr = " ".repeat(this._indent < 0 ? 0 : this._indent)
debug2(
`${indentStr}CONTEXT-RUN BEGIN: (${
this.name
}) currentUid:${currentUid} triggerId:${triggerId} asyncHooksCurrentId:${asyncHooksCurrentId} len:${
this._set.length
} context:${util.inspect(context)}`
)
}
fn(context)
return context
} catch (exception) {
if (exception) {
exception[ERROR_SYMBOL] = context
}
throw exception
} finally {
if (DEBUG_CLS_HOOKED) {
const triggerId = async_hooks.triggerAsyncId()
const asyncHooksCurrentId = async_hooks.executionAsyncId()
const indentStr = " ".repeat(this._indent < 0 ? 0 : this._indent)
debug2(
`${indentStr}CONTEXT-RUN END: (${
this.name
}) currentUid:${currentUid} triggerId:${triggerId} asyncHooksCurrentId:${asyncHooksCurrentId} len:${
this._set.length
} ${util.inspect(context)}`
)
}
this.exit(context)
}
}
Namespace.prototype.runAndReturn = function runAndReturn(fn) {
let value
this.run(function (context) {
value = fn(context)
})
return value
}
/**
* Uses global Promise and assumes Promise is cls friendly or wrapped already.
* @param {function} fn
* @returns {*}
*/
Namespace.prototype.runPromise = function runPromise(fn) {
let context = this.createContext()
this.enter(context)
let promise = fn(context)
if (!promise || !promise.then || !promise.catch) {
throw new Error("fn must return a promise.")
}
if (DEBUG_CLS_HOOKED) {
debug2(
"CONTEXT-runPromise BEFORE: (" +
this.name +
") currentUid:" +
currentUid +
" len:" +
this._set.length +
" " +
util.inspect(context)
)
}
return promise
.then(result => {
if (DEBUG_CLS_HOOKED) {
debug2(
"CONTEXT-runPromise AFTER then: (" +
this.name +
") currentUid:" +
currentUid +
" len:" +
this._set.length +
" " +
util.inspect(context)
)
}
this.exit(context)
return result
})
.catch(err => {
err[ERROR_SYMBOL] = context
if (DEBUG_CLS_HOOKED) {
debug2(
"CONTEXT-runPromise AFTER catch: (" +
this.name +
") currentUid:" +
currentUid +
" len:" +
this._set.length +
" " +
util.inspect(context)
)
}
this.exit(context)
throw err
})
}
Namespace.prototype.bind = function bindFactory(fn, context) {
if (!context) {
if (!this.active) {
context = this.createContext()
} else {
context = this.active
}
}
let self = this
return function clsBind() {
self.enter(context)
try {
return fn.apply(this, arguments)
} catch (exception) {
if (exception) {
exception[ERROR_SYMBOL] = context
}
throw exception
} finally {
self.exit(context)
}
}
}
Namespace.prototype.enter = function enter(context) {
assert.ok(context, "context must be provided for entering")
if (DEBUG_CLS_HOOKED) {
const asyncHooksCurrentId = async_hooks.executionAsyncId()
const triggerId = async_hooks.triggerAsyncId()
const indentStr = " ".repeat(this._indent < 0 ? 0 : this._indent)
debug2(
`${indentStr}CONTEXT-ENTER: (${
this.name
}) currentUid:${currentUid} triggerId:${triggerId} asyncHooksCurrentId:${asyncHooksCurrentId} len:${
this._set.length
} ${util.inspect(context)}`
)
}
this._set.push(this.active)
this.active = context
}
Namespace.prototype.exit = function exit(context) {
assert.ok(context, "context must be provided for exiting")
if (DEBUG_CLS_HOOKED) {
const asyncHooksCurrentId = async_hooks.executionAsyncId()
const triggerId = async_hooks.triggerAsyncId()
const indentStr = " ".repeat(this._indent < 0 ? 0 : this._indent)
debug2(
`${indentStr}CONTEXT-EXIT: (${
this.name
}) currentUid:${currentUid} triggerId:${triggerId} asyncHooksCurrentId:${asyncHooksCurrentId} len:${
this._set.length
} ${util.inspect(context)}`
)
}
// Fast path for most exits that are at the top of the stack
if (this.active === context) {
assert.ok(this._set.length, "can't remove top context")
this.active = this._set.pop()
return
}
// Fast search in the stack using lastIndexOf
let index = this._set.lastIndexOf(context)
if (index < 0) {
if (DEBUG_CLS_HOOKED) {
debug2(
"??ERROR?? context exiting but not entered - ignoring: " +
util.inspect(context)
)
}
assert.ok(
index >= 0,
"context not currently entered; can't exit. \n" +
util.inspect(this) +
"\n" +
util.inspect(context)
)
} else {
assert.ok(index, "can't remove top context")
this._set.splice(index, 1)
}
}
Namespace.prototype.bindEmitter = function bindEmitter(emitter) {
assert.ok(
emitter.on && emitter.addListener && emitter.emit,
"can only bind real EEs"
)
let namespace = this
let thisSymbol = "context@" + this.name
// Capture the context active at the time the emitter is bound.
function attach(listener) {
if (!listener) {
return
}
if (!listener[CONTEXTS_SYMBOL]) {
listener[CONTEXTS_SYMBOL] = Object.create(null)
}
listener[CONTEXTS_SYMBOL][thisSymbol] = {
namespace: namespace,
context: namespace.active,
}
}
// At emit time, bind the listener within the correct context.
function bind(unwrapped) {
if (!(unwrapped && unwrapped[CONTEXTS_SYMBOL])) {
return unwrapped
}
let wrapped = unwrapped
let unwrappedContexts = unwrapped[CONTEXTS_SYMBOL]
Object.keys(unwrappedContexts).forEach(function (name) {
let thunk = unwrappedContexts[name]
wrapped = thunk.namespace.bind(wrapped, thunk.context)
})
return wrapped
}
wrapEmitter(emitter, attach, bind)
}
/**
* If an error comes out of a namespace, it will have a context attached to it.
* This function knows how to find it.
*
* @param {Error} exception Possibly annotated error.
*/
Namespace.prototype.fromException = function fromException(exception) {
return exception[ERROR_SYMBOL]
}
function getNamespace(name) {
return process.namespaces[name]
}
function createNamespace(name) {
assert.ok(name, "namespace must be given a name.")
if (DEBUG_CLS_HOOKED) {
debug2(`NS-CREATING NAMESPACE (${name})`)
}
let namespace = new Namespace(name)
namespace.id = currentUid
const hook = async_hooks.createHook({
init(asyncId, type, triggerId, resource) {
currentUid = async_hooks.executionAsyncId()
//CHAIN Parent's Context onto child if none exists. This is needed to pass net-events.spec
// let initContext = namespace.active;
// if(!initContext && triggerId) {
// let parentContext = namespace._contexts.get(triggerId);
// if (parentContext) {
// namespace.active = parentContext;
// namespace._contexts.set(currentUid, parentContext);
// if (DEBUG_CLS_HOOKED) {
// const indentStr = ' '.repeat(namespace._indent < 0 ? 0 : namespace._indent);
// debug2(`${indentStr}INIT [${type}] (${name}) WITH PARENT CONTEXT asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(namespace.active, true)} resource:${resource}`);
// }
// } else if (DEBUG_CLS_HOOKED) {
// const indentStr = ' '.repeat(namespace._indent < 0 ? 0 : namespace._indent);
// debug2(`${indentStr}INIT [${type}] (${name}) MISSING CONTEXT asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(namespace.active, true)} resource:${resource}`);
// }
// }else {
// namespace._contexts.set(currentUid, namespace.active);
// if (DEBUG_CLS_HOOKED) {
// const indentStr = ' '.repeat(namespace._indent < 0 ? 0 : namespace._indent);
// debug2(`${indentStr}INIT [${type}] (${name}) asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(namespace.active, true)} resource:${resource}`);
// }
// }
if (namespace.active) {
namespace._contexts.set(asyncId, namespace.active)
if (DEBUG_CLS_HOOKED) {
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}INIT [${type}] (${name}) asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} resource:${resource}`
)
}
} else if (currentUid === 0) {
// CurrentId will be 0 when triggered from C++. Promise events
// https://github.com/nodejs/node/blob/master/doc/api/async_hooks.md#triggerid
const triggerId = async_hooks.triggerAsyncId()
const triggerIdContext = namespace._contexts.get(triggerId)
if (triggerIdContext) {
namespace._contexts.set(asyncId, triggerIdContext)
if (DEBUG_CLS_HOOKED) {
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}INIT USING CONTEXT FROM TRIGGERID [${type}] (${name}) asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} resource:${resource}`
)
}
} else if (DEBUG_CLS_HOOKED) {
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}INIT MISSING CONTEXT [${type}] (${name}) asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} resource:${resource}`
)
}
}
if (DEBUG_CLS_HOOKED && type === "PROMISE") {
debug2(util.inspect(resource, { showHidden: true }))
const parentId = resource.parentId
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}INIT RESOURCE-PROMISE [${type}] (${name}) parentId:${parentId} asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} resource:${resource}`
)
}
},
before(asyncId) {
currentUid = async_hooks.executionAsyncId()
let context
/*
if(currentUid === 0){
// CurrentId will be 0 when triggered from C++. Promise events
// https://github.com/nodejs/node/blob/master/doc/api/async_hooks.md#triggerid
//const triggerId = async_hooks.triggerAsyncId();
context = namespace._contexts.get(asyncId); // || namespace._contexts.get(triggerId);
}else{
context = namespace._contexts.get(currentUid);
}
*/
//HACK to work with promises until they are fixed in node > 8.1.1
context =
namespace._contexts.get(asyncId) || namespace._contexts.get(currentUid)
if (context) {
if (DEBUG_CLS_HOOKED) {
const triggerId = async_hooks.triggerAsyncId()
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}BEFORE (${name}) asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} context:${util.inspect(context)}`
)
namespace._indent += 2
}
namespace.enter(context)
} else if (DEBUG_CLS_HOOKED) {
const triggerId = async_hooks.triggerAsyncId()
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}BEFORE MISSING CONTEXT (${name}) asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} namespace._contexts:${util.inspect(namespace._contexts, {
showHidden: true,
depth: 2,
colors: true,
})}`
)
namespace._indent += 2
}
},
after(asyncId) {
currentUid = async_hooks.executionAsyncId()
let context // = namespace._contexts.get(currentUid);
/*
if(currentUid === 0){
// CurrentId will be 0 when triggered from C++. Promise events
// https://github.com/nodejs/node/blob/master/doc/api/async_hooks.md#triggerid
//const triggerId = async_hooks.triggerAsyncId();
context = namespace._contexts.get(asyncId); // || namespace._contexts.get(triggerId);
}else{
context = namespace._contexts.get(currentUid);
}
*/
//HACK to work with promises until they are fixed in node > 8.1.1
context =
namespace._contexts.get(asyncId) || namespace._contexts.get(currentUid)
if (context) {
if (DEBUG_CLS_HOOKED) {
const triggerId = async_hooks.triggerAsyncId()
namespace._indent -= 2
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}AFTER (${name}) asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} context:${util.inspect(context)}`
)
}
namespace.exit(context)
} else if (DEBUG_CLS_HOOKED) {
const triggerId = async_hooks.triggerAsyncId()
namespace._indent -= 2
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}AFTER MISSING CONTEXT (${name}) asyncId:${asyncId} currentUid:${currentUid} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} context:${util.inspect(context)}`
)
}
},
destroy(asyncId) {
currentUid = async_hooks.executionAsyncId()
if (DEBUG_CLS_HOOKED) {
const triggerId = async_hooks.triggerAsyncId()
const indentStr = " ".repeat(
namespace._indent < 0 ? 0 : namespace._indent
)
debug2(
`${indentStr}DESTROY (${name}) currentUid:${currentUid} asyncId:${asyncId} triggerId:${triggerId} active:${util.inspect(
namespace.active,
{ showHidden: true, depth: 2, colors: true }
)} context:${util.inspect(namespace._contexts.get(currentUid))}`
)
}
namespace._contexts.delete(asyncId)
},
})
hook.enable()
namespace._hook = hook
process.namespaces[name] = namespace
return namespace
}
function destroyNamespace(name) {
let namespace = getNamespace(name)
assert.ok(namespace, "can't delete nonexistent namespace! \"" + name + '"')
assert.ok(
namespace.id,
"don't assign to process.namespaces directly! " + util.inspect(namespace)
)
namespace._hook.disable()
namespace._contexts = null
process.namespaces[name] = null
}
function reset() {
// must unregister async listeners
if (process.namespaces) {
Object.keys(process.namespaces).forEach(function (name) {
destroyNamespace(name)
})
}
process.namespaces = Object.create(null)
}
process.namespaces = process.namespaces || {}
//const fs = require('fs');
function debug2(...args) {
if (DEBUG_CLS_HOOKED) {
//fs.writeSync(1, `${util.format(...args)}\n`);
process._rawDebug(`${util.format(...args)}`)
}
}
/*function getFunctionName(fn) {
if (!fn) {
return fn;
}
if (typeof fn === 'function') {
if (fn.name) {
return fn.name;
}
return (fn.toString().trim().match(/^function\s*([^\s(]+)/) || [])[1];
} else if (fn.constructor && fn.constructor.name) {
return fn.constructor.name;
}
}*/
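
A minimal sketch of how this vendored cls-hooked copy is consumed (FunctionContext below requires it as ../clshooked): values set inside a namespace run are visible to any async work started within it.

const cls = require("../clshooked")

const ns = cls.createNamespace("example")

async function example() {
  return ns.runAndReturn(async () => {
    ns.set("requestId", "abc123")
    await Promise.resolve() // the context survives async boundaries
    return ns.get("requestId") // "abc123"
  })
}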

View File

@ -13,9 +13,11 @@ exports.Cookies = {
exports.Headers = {
API_KEY: "x-budibase-api-key",
LICENSE_KEY: "x-budibase-license-key",
API_VER: "x-budibase-api-version",
APP_ID: "x-budibase-app-id",
TYPE: "x-budibase-type",
PREVIEW_ROLE: "x-budibase-role",
TENANT_ID: "x-budibase-tenant-id",
TOKEN: "x-budibase-token",
CSRF_TOKEN: "x-csrf-token",

View File

@ -1,73 +1,47 @@
const cls = require("cls-hooked")
const cls = require("../clshooked")
const { newid } = require("../hashing")
const REQUEST_ID_KEY = "requestId"
const MAIN_CTX = cls.createNamespace("main")
function getContextStorage(namespace) {
if (namespace && namespace.active) {
let contextData = namespace.active
delete contextData.id
delete contextData._ns_name
return contextData
}
return {}
}
class FunctionContext {
static getMiddleware(updateCtxFn = null, contextName = "session") {
const namespace = this.createNamespace(contextName)
return async function (ctx, next) {
await new Promise(
namespace.bind(function (resolve, reject) {
// store a contextual request ID that can be used anywhere (audit logs)
namespace.set(REQUEST_ID_KEY, newid())
namespace.bindEmitter(ctx.req)
namespace.bindEmitter(ctx.res)
if (updateCtxFn) {
updateCtxFn(ctx)
}
next().then(resolve).catch(reject)
})
)
}
static run(callback) {
return MAIN_CTX.runAndReturn(async () => {
const namespaceId = newid()
MAIN_CTX.set(REQUEST_ID_KEY, namespaceId)
const namespace = cls.createNamespace(namespaceId)
let response = await namespace.runAndReturn(callback)
cls.destroyNamespace(namespaceId)
return response
})
}
static run(callback, contextName = "session") {
const namespace = this.createNamespace(contextName)
return namespace.runAndReturn(callback)
}
static setOnContext(key, value, contextName = "session") {
const namespace = this.createNamespace(contextName)
static setOnContext(key, value) {
const namespaceId = MAIN_CTX.get(REQUEST_ID_KEY)
const namespace = cls.getNamespace(namespaceId)
namespace.set(key, value)
}
static getContextStorage() {
if (this._namespace && this._namespace.active) {
let contextData = this._namespace.active
delete contextData.id
delete contextData._ns_name
return contextData
}
return {}
}
static getFromContext(key) {
const context = this.getContextStorage()
const namespaceId = MAIN_CTX.get(REQUEST_ID_KEY)
const namespace = cls.getNamespace(namespaceId)
const context = getContextStorage(namespace)
if (context) {
return context[key]
} else {
return null
}
}
static destroyNamespace(name = "session") {
if (this._namespace) {
cls.destroyNamespace(name)
this._namespace = null
}
}
static createNamespace(name = "session") {
if (!this._namespace) {
this._namespace = cls.createNamespace(name)
}
return this._namespace
}
}
module.exports = FunctionContext
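
A short sketch of the reworked API (module path assumed): run creates a uniquely named child namespace keyed by a request ID held on the main namespace, which setOnContext and getFromContext then resolve implicitly.

const FunctionContext = require("./FunctionContext")

async function example() {
  return FunctionContext.run(async () => {
    FunctionContext.setOnContext("tenantId", "tenant1")
    return FunctionContext.getFromContext("tenantId") // "tenant1"
  })
}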

View File

@ -0,0 +1,17 @@
export enum ContextKeys {
TENANT_ID = "tenantId",
GLOBAL_DB = "globalDb",
APP_ID = "appId",
IDENTITY = "identity",
// whatever the request app DB was
CURRENT_DB = "currentDb",
// get the prod app DB from the request
PROD_DB = "prodDb",
// get the dev app DB from the request
DEV_DB = "devDb",
DB_OPTS = "dbOpts",
// check if something else is using the context, don't close DB
TENANCY_IN_USE = "tenancyInUse",
APP_IN_USE = "appInUse",
IDENTITY_IN_USE = "identityInUse",
}

View File

@ -1,6 +1,6 @@
const { getGlobalUserParams, getAllApps } = require("../db/utils")
const { getDB } = require("../db")
const { getGlobalDB } = require("../tenancy")
const { doWithDB } = require("../db")
const { doWithGlobalDB } = require("../tenancy")
const { StaticDatabases } = require("../db/constants")
const TENANT_DOC = StaticDatabases.PLATFORM_INFO.docs.tenants
@ -8,11 +8,12 @@ const PLATFORM_INFO_DB = StaticDatabases.PLATFORM_INFO.name
const removeTenantFromInfoDB = async tenantId => {
try {
const infoDb = getDB(PLATFORM_INFO_DB)
let tenants = await infoDb.get(TENANT_DOC)
tenants.tenantIds = tenants.tenantIds.filter(id => id !== tenantId)
await doWithDB(PLATFORM_INFO_DB, async infoDb => {
let tenants = await infoDb.get(TENANT_DOC)
tenants.tenantIds = tenants.tenantIds.filter(id => id !== tenantId)
await infoDb.put(tenants)
await infoDb.put(tenants)
})
} catch (err) {
console.error(`Error removing tenant ${tenantId} from info db`, err)
throw err
@ -20,36 +21,8 @@ const removeTenantFromInfoDB = async tenantId => {
}
exports.removeUserFromInfoDB = async dbUser => {
const infoDb = getDB(PLATFORM_INFO_DB)
const keys = [dbUser._id, dbUser.email]
const userDocs = await infoDb.allDocs({
keys,
include_docs: true,
})
const toDelete = userDocs.rows.map(row => {
return {
...row.doc,
_deleted: true,
}
})
await infoDb.bulkDocs(toDelete)
}
const removeUsersFromInfoDB = async tenantId => {
try {
const globalDb = getGlobalDB(tenantId)
const infoDb = getDB(PLATFORM_INFO_DB)
const allUsers = await globalDb.allDocs(
getGlobalUserParams(null, {
include_docs: true,
})
)
const allEmails = allUsers.rows.map(row => row.doc.email)
// get the id docs
let keys = allUsers.rows.map(row => row.id)
// and the email docs
keys = keys.concat(allEmails)
// retrieve the docs and delete them
await doWithDB(PLATFORM_INFO_DB, async infoDb => {
const keys = [dbUser._id, dbUser.email]
const userDocs = await infoDb.allDocs({
keys,
include_docs: true,
@ -61,26 +34,60 @@ const removeUsersFromInfoDB = async tenantId => {
}
})
await infoDb.bulkDocs(toDelete)
} catch (err) {
console.error(`Error removing tenant ${tenantId} users from info db`, err)
throw err
}
})
}
const removeUsersFromInfoDB = async tenantId => {
return doWithGlobalDB(tenantId, async db => {
try {
const allUsers = await db.allDocs(
getGlobalUserParams(null, {
include_docs: true,
})
)
await doWithDB(PLATFORM_INFO_DB, async infoDb => {
const allEmails = allUsers.rows.map(row => row.doc.email)
// get the id docs
let keys = allUsers.rows.map(row => row.id)
// and the email docs
keys = keys.concat(allEmails)
// retrieve the docs and delete them
const userDocs = await infoDb.allDocs({
keys,
include_docs: true,
})
const toDelete = userDocs.rows.map(row => {
return {
...row.doc,
_deleted: true,
}
})
await infoDb.bulkDocs(toDelete)
})
} catch (err) {
console.error(`Error removing tenant ${tenantId} users from info db`, err)
throw err
}
})
}
const removeGlobalDB = async tenantId => {
try {
const globalDb = getGlobalDB(tenantId)
await globalDb.destroy()
} catch (err) {
console.error(`Error removing tenant ${tenantId} users from info db`, err)
throw err
}
return doWithGlobalDB(tenantId, async db => {
try {
await db.destroy()
} catch (err) {
console.error(`Error removing tenant ${tenantId} global db`, err)
throw err
}
})
}
const removeTenantApps = async tenantId => {
try {
const apps = await getAllApps({ all: true })
const destroyPromises = apps.map(app => getDB(app.appId).destroy())
const destroyPromises = apps.map(app =>
doWithDB(app.appId, db => db.destroy())
)
await Promise.allSettled(destroyPromises)
} catch (err) {
console.error(`Error removing tenant ${tenantId} apps`, err)

View File

@ -0,0 +1,50 @@
import {
IdentityContext,
IdentityType,
User,
UserContext,
isCloudAccount,
Account,
AccountUserContext,
} from "@budibase/types"
import * as context from "."
export const getIdentity = (): IdentityContext | undefined => {
return context.getIdentity()
}
export const doInIdentityContext = (identity: IdentityContext, task: any) => {
return context.doInIdentityContext(identity, task)
}
export const doInUserContext = (user: User, task: any) => {
const userContext: UserContext = {
...user,
_id: user._id as string,
type: IdentityType.USER,
}
return doInIdentityContext(userContext, task)
}
export const doInAccountContext = (account: Account, task: any) => {
const _id = getAccountUserId(account)
const tenantId = account.tenantId
const accountContext: AccountUserContext = {
_id,
type: IdentityType.USER,
tenantId,
account,
}
return doInIdentityContext(accountContext, task)
}
export const getAccountUserId = (account: Account) => {
let userId: string
if (isCloudAccount(account)) {
userId = account.budibaseUserId
} else {
// use account id as user id for self hosting
userId = account.accountId
}
return userId
}
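
A usage sketch for the identity helpers above, assuming a User shape from @budibase/types:

import { User } from "@budibase/types"
import { doInUserContext, getIdentity } from "./identity"

async function auditAction(user: User) {
  return doInUserContext(user, async () => {
    // inside the task, the identity context is the user
    return getIdentity()?._id
  })
}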

View File

@ -1,227 +0,0 @@
const env = require("../environment")
const { Headers } = require("../../constants")
const { SEPARATOR, DocumentTypes } = require("../db/constants")
const cls = require("./FunctionContext")
const { getCouch } = require("../db")
const { getProdAppID, getDevelopmentAppID } = require("../db/conversions")
const { isEqual } = require("lodash")
// some test cases call functions directly, need to
// store an app ID to pretend there is a context
let TEST_APP_ID = null
const ContextKeys = {
TENANT_ID: "tenantId",
APP_ID: "appId",
// whatever the request app DB was
CURRENT_DB: "currentDb",
// get the prod app DB from the request
PROD_DB: "prodDb",
// get the dev app DB from the request
DEV_DB: "devDb",
DB_OPTS: "dbOpts",
}
exports.DEFAULT_TENANT_ID = "default"
exports.isDefaultTenant = () => {
return exports.getTenantId() === exports.DEFAULT_TENANT_ID
}
exports.isMultiTenant = () => {
return env.MULTI_TENANCY
}
// used for automations, API endpoints should always be in context already
exports.doInTenant = (tenantId, task) => {
return cls.run(() => {
// set the tenant id
cls.setOnContext(ContextKeys.TENANT_ID, tenantId)
// invoke the task
return task()
})
}
/**
* Given an app ID this will attempt to retrieve the tenant ID from it.
* @return {null|string} The tenant ID found within the app ID.
*/
exports.getTenantIDFromAppID = appId => {
if (!appId) {
return null
}
const split = appId.split(SEPARATOR)
const hasDev = split[1] === DocumentTypes.DEV
if ((hasDev && split.length === 3) || (!hasDev && split.length === 2)) {
return null
}
if (hasDev) {
return split[2]
} else {
return split[1]
}
}
const setAppTenantId = appId => {
const appTenantId = this.getTenantIDFromAppID(appId) || this.DEFAULT_TENANT_ID
this.updateTenantId(appTenantId)
}
exports.doInAppContext = (appId, task) => {
if (!appId) {
throw new Error("appId is required")
}
return cls.run(() => {
// set the app tenant id
setAppTenantId(appId)
// set the app ID
cls.setOnContext(ContextKeys.APP_ID, appId)
// invoke the task
return task()
})
}
exports.updateTenantId = tenantId => {
cls.setOnContext(ContextKeys.TENANT_ID, tenantId)
}
exports.updateAppId = appId => {
try {
cls.setOnContext(ContextKeys.APP_ID, appId)
cls.setOnContext(ContextKeys.PROD_DB, null)
cls.setOnContext(ContextKeys.DEV_DB, null)
cls.setOnContext(ContextKeys.CURRENT_DB, null)
cls.setOnContext(ContextKeys.DB_OPTS, null)
} catch (err) {
if (env.isTest()) {
TEST_APP_ID = appId
} else {
throw err
}
}
}
exports.setTenantId = (
ctx,
opts = { allowQs: false, allowNoTenant: false }
) => {
let tenantId
// exit early if not multi-tenant
if (!exports.isMultiTenant()) {
cls.setOnContext(ContextKeys.TENANT_ID, this.DEFAULT_TENANT_ID)
return
}
const allowQs = opts && opts.allowQs
const allowNoTenant = opts && opts.allowNoTenant
const header = ctx.request.headers[Headers.TENANT_ID]
const user = ctx.user || {}
if (allowQs) {
const query = ctx.request.query || {}
tenantId = query.tenantId
}
// override query string (if allowed) by user, or header
// URL params cannot be used in a middleware, as they are
// processed later in the chain
tenantId = user.tenantId || header || tenantId
// Set the tenantId from the subdomain
if (!tenantId) {
tenantId = ctx.subdomains && ctx.subdomains[0]
}
if (!tenantId && !allowNoTenant) {
ctx.throw(403, "Tenant id not set")
}
// check the tenant ID just in case no tenant was allowed
if (tenantId) {
cls.setOnContext(ContextKeys.TENANT_ID, tenantId)
}
}
exports.isTenantIdSet = () => {
const tenantId = cls.getFromContext(ContextKeys.TENANT_ID)
return !!tenantId
}
exports.getTenantId = () => {
if (!exports.isMultiTenant()) {
return exports.DEFAULT_TENANT_ID
}
const tenantId = cls.getFromContext(ContextKeys.TENANT_ID)
if (!tenantId) {
throw new Error("Tenant id not found")
}
return tenantId
}
exports.getAppId = () => {
const foundId = cls.getFromContext(ContextKeys.APP_ID)
if (!foundId && env.isTest() && TEST_APP_ID) {
return TEST_APP_ID
} else {
return foundId
}
}
function getDB(key, opts) {
const dbOptsKey = `${key}${ContextKeys.DB_OPTS}`
let storedOpts = cls.getFromContext(dbOptsKey)
let db = cls.getFromContext(key)
if (db && isEqual(opts, storedOpts)) {
return db
}
const appId = exports.getAppId()
const CouchDB = getCouch()
let toUseAppId
switch (key) {
case ContextKeys.CURRENT_DB:
toUseAppId = appId
break
case ContextKeys.PROD_DB:
toUseAppId = getProdAppID(appId)
break
case ContextKeys.DEV_DB:
toUseAppId = getDevelopmentAppID(appId)
break
}
db = new CouchDB(toUseAppId, opts)
try {
cls.setOnContext(key, db)
if (opts) {
cls.setOnContext(dbOptsKey, opts)
}
} catch (err) {
if (!env.isTest()) {
throw err
}
}
return db
}
/**
* Opens the app database based on whatever the request
* contained, dev or prod.
*/
exports.getAppDB = opts => {
return getDB(ContextKeys.CURRENT_DB, opts)
}
/**
* This specifically gets the prod app ID, if the request
* contained a development app ID, this will open the prod one.
*/
exports.getProdAppDB = opts => {
return getDB(ContextKeys.PROD_DB, opts)
}
/**
* This specifically gets the dev app ID, if the request
* contained a prod app ID, this will open the dev one.
*/
exports.getDevAppDB = opts => {
return getDB(ContextKeys.DEV_DB, opts)
}

View File

@ -0,0 +1,251 @@
import env from "../environment"
import { SEPARATOR, DocumentTypes } from "../db/constants"
import cls from "./FunctionContext"
import { dangerousGetDB, closeDB } from "../db"
import { baseGlobalDBName } from "../tenancy/utils"
import { IdentityContext } from "@budibase/types"
import { DEFAULT_TENANT_ID as _DEFAULT_TENANT_ID } from "../constants"
import { ContextKeys } from "./constants"
import {
updateUsing,
closeWithUsing,
setAppTenantId,
setIdentity,
closeAppDBs,
getContextDB,
} from "./utils"
export const DEFAULT_TENANT_ID = _DEFAULT_TENANT_ID
// some test cases call functions directly, need to
// store an app ID to pretend there is a context
let TEST_APP_ID: string | null = null
export const closeTenancy = async () => {
let db
try {
if (env.USE_COUCH) {
db = getGlobalDB()
}
} catch (err) {
// no DB found - skip closing
return
}
await closeDB(db)
// clear from context now that database is closed/task is finished
cls.setOnContext(ContextKeys.TENANT_ID, null)
cls.setOnContext(ContextKeys.GLOBAL_DB, null)
}
// export const isDefaultTenant = () => {
// return getTenantId() === DEFAULT_TENANT_ID
// }
export const isMultiTenant = () => {
return env.MULTI_TENANCY
}
/**
* Given an app ID this will attempt to retrieve the tenant ID from it.
* @return {null|string} The tenant ID found within the app ID.
*/
export const getTenantIDFromAppID = (appId: string) => {
if (!appId) {
return null
}
const split = appId.split(SEPARATOR)
const hasDev = split[1] === DocumentTypes.DEV
if ((hasDev && split.length === 3) || (!hasDev && split.length === 2)) {
return null
}
if (hasDev) {
return split[2]
} else {
return split[1]
}
}
// used for automations, API endpoints should always be in context already
export const doInTenant = (tenantId: string | null, task: any) => {
// make sure the default tenant is always selected in single tenancy
if (!env.MULTI_TENANCY) {
tenantId = tenantId || DEFAULT_TENANT_ID
}
// the internal function is so that we can re-use an existing
// context - don't want to close DB on a parent context
async function internal(opts = { existing: false }) {
// set the tenant id + global db if this is a new context
if (!opts.existing) {
updateTenantId(tenantId)
}
try {
// invoke the task
return await task()
} finally {
await closeWithUsing(ContextKeys.TENANCY_IN_USE, () => {
return closeTenancy()
})
}
}
const existing = cls.getFromContext(ContextKeys.TENANT_ID) === tenantId
return updateUsing(ContextKeys.TENANCY_IN_USE, existing, internal)
}
export const doInAppContext = (appId: string, task: any) => {
if (!appId) {
throw new Error("appId is required")
}
const identity = getIdentity()
// the internal function is so that we can re-use an existing
// context - don't want to close DB on a parent context
async function internal(opts = { existing: false }) {
// set the app tenant id
if (!opts.existing) {
setAppTenantId(appId)
}
// set the app ID
cls.setOnContext(ContextKeys.APP_ID, appId)
// preserve the identity
if (identity) {
setIdentity(identity)
}
try {
// invoke the task
return await task()
} finally {
await closeWithUsing(ContextKeys.APP_IN_USE, async () => {
await closeAppDBs()
await closeTenancy()
})
}
}
const existing = cls.getFromContext(ContextKeys.APP_ID) === appId
return updateUsing(ContextKeys.APP_IN_USE, existing, internal)
}
export const doInIdentityContext = (identity: IdentityContext, task: any) => {
if (!identity) {
throw new Error("identity is required")
}
async function internal(opts = { existing: false }) {
if (!opts.existing) {
cls.setOnContext(ContextKeys.IDENTITY, identity)
// set the tenant so that doInTenant will preserve identity
if (identity.tenantId) {
updateTenantId(identity.tenantId)
}
}
try {
// invoke the task
return await task()
} finally {
await closeWithUsing(ContextKeys.IDENTITY_IN_USE, async () => {
setIdentity(null)
await closeTenancy()
})
}
}
const existing = cls.getFromContext(ContextKeys.IDENTITY)
return updateUsing(ContextKeys.IDENTITY_IN_USE, existing, internal)
}
export const getIdentity = (): IdentityContext | undefined => {
try {
return cls.getFromContext(ContextKeys.IDENTITY)
} catch (e) {
// do nothing - identity is not in context
}
}
export const updateTenantId = (tenantId: string | null) => {
cls.setOnContext(ContextKeys.TENANT_ID, tenantId)
if (env.USE_COUCH) {
setGlobalDB(tenantId)
}
}
export const updateAppId = async (appId: string) => {
try {
// have to close first, before removing the databases from context
await closeAppDBs()
cls.setOnContext(ContextKeys.APP_ID, appId)
} catch (err) {
if (env.isTest()) {
TEST_APP_ID = appId
} else {
throw err
}
}
}
export const setGlobalDB = (tenantId: string | null) => {
const dbName = baseGlobalDBName(tenantId)
const db = dangerousGetDB(dbName)
cls.setOnContext(ContextKeys.GLOBAL_DB, db)
return db
}
export const getGlobalDB = () => {
const db = cls.getFromContext(ContextKeys.GLOBAL_DB)
if (!db) {
throw new Error("Global DB not found")
}
return db
}
export const isTenantIdSet = () => {
const tenantId = cls.getFromContext(ContextKeys.TENANT_ID)
return !!tenantId
}
export const getTenantId = () => {
if (!isMultiTenant()) {
return DEFAULT_TENANT_ID
}
const tenantId = cls.getFromContext(ContextKeys.TENANT_ID)
if (!tenantId) {
throw new Error("Tenant id not found")
}
return tenantId
}
export const getAppId = () => {
const foundId = cls.getFromContext(ContextKeys.APP_ID)
if (!foundId && env.isTest() && TEST_APP_ID) {
return TEST_APP_ID
} else {
return foundId
}
}
/**
* Opens the app database based on whatever the request
* contained, dev or prod.
*/
export const getAppDB = (opts?: any) => {
return getContextDB(ContextKeys.CURRENT_DB, opts)
}
/**
* This specifically gets the prod app ID, if the request
* contained a development app ID, this will open the prod one.
*/
export const getProdAppDB = (opts?: any) => {
return getContextDB(ContextKeys.PROD_DB, opts)
}
/**
* This specifically gets the dev app ID, if the request
* contained a prod app ID, this will open the dev one.
*/
export const getDevAppDB = (opts?: any) => {
return getContextDB(ContextKeys.DEV_DB, opts)
}

View File

@ -0,0 +1,148 @@
import "../../../tests/utilities/TestConfiguration"
import * as context from ".."
import { DEFAULT_TENANT_ID } from "../../constants"
import env from "../../environment"
// must use require to spy index file exports due to known issue in jest
const dbUtils = require("../../db")
jest.spyOn(dbUtils, "closeDB")
jest.spyOn(dbUtils, "dangerousGetDB")
describe("context", () => {
beforeEach(() => {
jest.clearAllMocks()
})
describe("doInTenant", () => {
describe("single-tenancy", () => {
it("defaults to the default tenant", () => {
const tenantId = context.getTenantId()
expect(tenantId).toBe(DEFAULT_TENANT_ID)
})
it("defaults to the default tenant db", async () => {
await context.doInTenant(DEFAULT_TENANT_ID, () => {
const db = context.getGlobalDB()
expect(db.name).toBe("global-db")
})
expect(dbUtils.dangerousGetDB).toHaveBeenCalledTimes(1)
expect(dbUtils.closeDB).toHaveBeenCalledTimes(1)
})
})
describe("multi-tenancy", () => {
beforeEach(() => {
env._set("MULTI_TENANCY", 1)
})
it("fails when no tenant id is set", () => {
const test = () => {
let error
try {
context.getTenantId()
} catch (e: any) {
error = e
}
expect(error.message).toBe("Tenant id not found")
}
// test under no tenancy
test()
// test after tenancy has been accessed to ensure cleanup
context.doInTenant("test", () => {})
test()
})
it("fails when no tenant db is set", () => {
const test = () => {
let error
try {
context.getGlobalDB()
} catch (e: any) {
error = e
}
expect(error.message).toBe("Global DB not found")
}
// test under no tenancy
test()
// test after tenancy has been accessed to ensure cleanup
context.doInTenant("test", () => {})
test()
})
it("sets tenant id", () => {
context.doInTenant("test", () => {
const tenantId = context.getTenantId()
expect(tenantId).toBe("test")
})
})
it("initialises the tenant db", async () => {
await context.doInTenant("test", () => {
const db = context.getGlobalDB()
expect(db.name).toBe("test_global-db")
})
expect(dbUtils.dangerousGetDB).toHaveBeenCalledTimes(1)
expect(dbUtils.closeDB).toHaveBeenCalledTimes(1)
})
it("sets the tenant id when nested with same tenant id", async () => {
await context.doInTenant("test", async () => {
const tenantId = context.getTenantId()
expect(tenantId).toBe("test")
await context.doInTenant("test", async () => {
const tenantId = context.getTenantId()
expect(tenantId).toBe("test")
await context.doInTenant("test", () => {
const tenantId = context.getTenantId()
expect(tenantId).toBe("test")
})
})
})
})
it("initialises the tenant db when nested with same tenant id", async () => {
await context.doInTenant("test", async () => {
const db = context.getGlobalDB()
expect(db.name).toBe("test_global-db")
await context.doInTenant("test", async () => {
const db = context.getGlobalDB()
expect(db.name).toBe("test_global-db")
await context.doInTenant("test", () => {
const db = context.getGlobalDB()
expect(db.name).toBe("test_global-db")
})
})
})
// only 1 db is opened and closed
expect(dbUtils.dangerousGetDB).toHaveBeenCalledTimes(1)
expect(dbUtils.closeDB).toHaveBeenCalledTimes(1)
})
it("sets different tenant id inside another context", () => {
context.doInTenant("test", () => {
const tenantId = context.getTenantId()
expect(tenantId).toBe("test")
context.doInTenant("nested", () => {
const tenantId = context.getTenantId()
expect(tenantId).toBe("nested")
context.doInTenant("double-nested", () => {
const tenantId = context.getTenantId()
expect(tenantId).toBe("double-nested")
})
})
})
})
})
})
})

View File

@ -0,0 +1,113 @@
import {
DEFAULT_TENANT_ID,
getAppId,
getTenantIDFromAppID,
updateTenantId,
} from "./index"
import cls from "./FunctionContext"
import { IdentityContext } from "@budibase/types"
import { ContextKeys } from "./constants"
import { dangerousGetDB, closeDB } from "../db"
import { isEqual } from "lodash"
import { getDevelopmentAppID, getProdAppID } from "../db/conversions"
import env from "../environment"
export async function updateUsing(
usingKey: string,
existing: boolean,
internal: (opts: { existing: boolean }) => Promise<any>
) {
const using = cls.getFromContext(usingKey)
if (using && existing) {
cls.setOnContext(usingKey, using + 1)
return internal({ existing: true })
} else {
return cls.run(async () => {
cls.setOnContext(usingKey, 1)
return internal({ existing: false })
})
}
}
export async function closeWithUsing(
usingKey: string,
closeFn: () => Promise<any>
) {
const using = cls.getFromContext(usingKey)
if (!using || using <= 1) {
await closeFn()
} else {
cls.setOnContext(usingKey, using - 1)
}
}
export const setAppTenantId = (appId: string) => {
const appTenantId = getTenantIDFromAppID(appId) || DEFAULT_TENANT_ID
updateTenantId(appTenantId)
}
export const setIdentity = (identity: IdentityContext | null) => {
cls.setOnContext(ContextKeys.IDENTITY, identity)
}
// this function makes sure the PouchDB objects are closed and
// fully deleted when finished - this protects against memory leaks
export async function closeAppDBs() {
const dbKeys = [
ContextKeys.CURRENT_DB,
ContextKeys.PROD_DB,
ContextKeys.DEV_DB,
]
for (let dbKey of dbKeys) {
const db = cls.getFromContext(dbKey)
if (!db) {
continue
}
await closeDB(db)
// clear the DB from context, in case someone tries to use it again
cls.setOnContext(dbKey, null)
}
// clear the app ID now that the databases are closed
if (cls.getFromContext(ContextKeys.APP_ID)) {
cls.setOnContext(ContextKeys.APP_ID, null)
}
if (cls.getFromContext(ContextKeys.DB_OPTS)) {
cls.setOnContext(ContextKeys.DB_OPTS, null)
}
}
export function getContextDB(key: string, opts: any) {
const dbOptsKey = `${key}${ContextKeys.DB_OPTS}`
let storedOpts = cls.getFromContext(dbOptsKey)
let db = cls.getFromContext(key)
if (db && isEqual(opts, storedOpts)) {
return db
}
const appId = getAppId()
let toUseAppId
switch (key) {
case ContextKeys.CURRENT_DB:
toUseAppId = appId
break
case ContextKeys.PROD_DB:
toUseAppId = getProdAppID(appId)
break
case ContextKeys.DEV_DB:
toUseAppId = getDevelopmentAppID(appId)
break
}
db = dangerousGetDB(toUseAppId, opts)
try {
cls.setOnContext(key, db)
if (opts) {
cls.setOnContext(dbOptsKey, opts)
}
} catch (err) {
if (!env.isTest()) {
throw err
}
}
return db
}
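
The updateUsing/closeWithUsing pair implements reference counting on the CLS context: re-entry with a matching key bumps a counter instead of opening a new context, and only the outermost exit runs the real close. A hedged sketch of the pattern ("somethingInUse" is a hypothetical key):

async function doInSomething(matchesExisting: boolean, work: () => Promise<any>) {
  const internal = async (opts: { existing: boolean }) => {
    // opts.existing is true when re-using the parent context
    try {
      return await work()
    } finally {
      // the close function only fires when the counter drops to <= 1
      await closeWithUsing("somethingInUse", async () => {
        /* release resources here */
      })
    }
  }
  return updateUsing("somethingInUse", matchesExisting, internal)
}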

View File

@ -1,27 +1,35 @@
const { getDB } = require(".")
import { dangerousGetDB, closeDB } from "."
class Replication {
source: any
target: any
replication: any
/**
*
* @param {String} source - the DB you want to replicate or rollback to
* @param {String} target - the DB you want to replicate to, or rollback from
*/
constructor({ source, target }) {
this.source = getDB(source)
this.target = getDB(target)
constructor({ source, target }: any) {
this.source = dangerousGetDB(source)
this.target = dangerousGetDB(target)
}
promisify(operation, opts = {}) {
close() {
return Promise.all([closeDB(this.source), closeDB(this.target)])
}
promisify(operation: any, opts = {}) {
return new Promise(resolve => {
operation(this.target, opts)
.on("denied", function (err) {
.on("denied", function (err: any) {
// a document failed to replicate (e.g. due to permissions)
throw new Error(`Denied: Document failed to replicate ${err}`)
})
.on("complete", function (info) {
.on("complete", function (info: any) {
return resolve(info)
})
.on("error", function (err) {
.on("error", function (err: any) {
throw new Error(`Replication Error: ${err}`)
})
})
@ -51,7 +59,7 @@ class Replication {
async rollback() {
await this.target.destroy()
// Recreate the DB again
this.target = getDB(this.target.name)
this.target = dangerousGetDB(this.target.name)
await this.replicate()
}
@ -60,4 +68,4 @@ class Replication {
}
}
module.exports = Replication
export default Replication
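
A usage sketch for the TypeScript Replication wrapper (module path assumed); since it opens its databases with dangerousGetDB, the new close method must be called once finished:

import Replication from "./Replication"

async function rollbackApp() {
  const replication = new Replication({ source: "app_dev_123", target: "app_123" })
  try {
    await replication.rollback() // destroys the target, then re-replicates from source
  } finally {
    await replication.close()
  }
}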

View File

@ -1,39 +0,0 @@
exports.SEPARATOR = "_"
const PRE_APP = "app"
const PRE_DEV = "dev"
exports.DocumentTypes = {
USER: "us",
WORKSPACE: "workspace",
CONFIG: "config",
TEMPLATE: "template",
APP: PRE_APP,
DEV: PRE_DEV,
APP_DEV: `${PRE_APP}${exports.SEPARATOR}${PRE_DEV}`,
APP_METADATA: `${PRE_APP}${exports.SEPARATOR}metadata`,
ROLE: "role",
MIGRATIONS: "migrations",
DEV_INFO: "devinfo",
}
exports.StaticDatabases = {
GLOBAL: {
name: "global-db",
docs: {
apiKeys: "apikeys",
usageQuota: "usage_quota",
},
},
// contains information about tenancy and so on
PLATFORM_INFO: {
name: "global-info",
docs: {
tenants: "tenants",
},
},
}
exports.APP_PREFIX = exports.DocumentTypes.APP + exports.SEPARATOR
exports.APP_DEV = exports.APP_DEV_PREFIX =
exports.DocumentTypes.APP_DEV + exports.SEPARATOR

View File

@ -0,0 +1,65 @@
export const SEPARATOR = "_"
export const UNICODE_MAX = "\ufff0"
/**
* Can be used to create a few different forms of querying a view.
*/
export enum AutomationViewModes {
ALL = "all",
AUTOMATION = "automation",
STATUS = "status",
}
export enum ViewNames {
USER_BY_EMAIL = "by_email2",
BY_API_KEY = "by_api_key",
USER_BY_BUILDERS = "by_builders",
LINK = "by_link",
ROUTING = "screen_routes",
AUTOMATION_LOGS = "automation_logs",
}
export const DeprecatedViews = {
[ViewNames.USER_BY_EMAIL]: [
// removed due to inaccuracy in view doc filter logic
"by_email",
],
}
export enum DocumentTypes {
USER = "us",
WORKSPACE = "workspace",
CONFIG = "config",
TEMPLATE = "template",
APP = "app",
DEV = "dev",
APP_DEV = "app_dev",
APP_METADATA = "app_metadata",
ROLE = "role",
MIGRATIONS = "migrations",
DEV_INFO = "devinfo",
AUTOMATION_LOG = "log_au",
}
export const StaticDatabases = {
GLOBAL: {
name: "global-db",
docs: {
apiKeys: "apikeys",
usageQuota: "usage_quota",
licenseInfo: "license_info",
},
},
// contains information about tenancy and so on
PLATFORM_INFO: {
name: "global-info",
docs: {
tenants: "tenants",
install: "install",
},
},
}
export const APP_PREFIX = DocumentTypes.APP + SEPARATOR
export const APP_DEV = DocumentTypes.APP_DEV + SEPARATOR
export const APP_DEV_PREFIX = APP_DEV

View File

@ -23,24 +23,30 @@ exports.isDevApp = app => {
}
/**
* Convert a development app ID to a deployed app ID.
* Generates a development app ID from a real app ID.
* @returns {string} the dev app ID which can be used for dev database.
*/
exports.getProdAppID = appId => {
// if dev, convert it
if (appId.startsWith(APP_DEV_PREFIX)) {
const id = appId.split(APP_DEV_PREFIX)[1]
return `${APP_PREFIX}${id}`
exports.getDevelopmentAppID = appId => {
if (!appId || appId.startsWith(APP_DEV_PREFIX)) {
return appId
}
return appId
// split to take off the app_ element, then join it back together in case any other app_ occurrences exist
const split = appId.split(APP_PREFIX)
split.shift()
const rest = split.join(APP_PREFIX)
return `${APP_DEV_PREFIX}${rest}`
}
/**
* Convert a deployed app ID to a development app ID.
* Convert a development app ID to a deployed app ID.
*/
exports.getDevelopmentAppID = appId => {
if (!appId.startsWith(APP_DEV_PREFIX)) {
const id = appId.split(APP_PREFIX)[1]
return `${APP_DEV_PREFIX}${id}`
exports.getProdAppID = appId => {
if (!appId || !appId.startsWith(APP_DEV_PREFIX)) {
return appId
}
return appId
// split to take off the app_dev element, then join it back together in case any other app_ occurrences exist
const split = appId.split(APP_DEV_PREFIX)
split.shift()
const rest = split.join(APP_DEV_PREFIX)
return `${APP_PREFIX}${rest}`
}
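
Worked examples of the fixed conversions; the split/shift/join approach preserves any later app_ occurrences in the ID instead of truncating at the first match:

const { getDevelopmentAppID, getProdAppID } = require("./conversions")

getDevelopmentAppID("app_12345")     // "app_dev_12345"
getDevelopmentAppID("app_dev_12345") // already a dev ID, returned unchanged
getProdAppID("app_dev_12345")        // "app_12345"
getProdAppID("app_12345")            // already a prod ID, returned unchanged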

View File

@ -1,13 +1,91 @@
let Pouch
const pouch = require("./pouch")
const env = require("../environment")
module.exports.setDB = pouch => {
Pouch = pouch
const openDbs = []
let PouchDB
let initialised = false
const dbList = new Set()
if (env.MEMORY_LEAK_CHECK) {
setInterval(() => {
console.log("--- OPEN DBS ---")
console.log(openDbs)
}, 5000)
}
module.exports.getDB = dbName => {
return new Pouch(dbName)
const put =
dbPut =>
async (doc, options = {}) => {
if (!doc.createdAt) {
doc.createdAt = new Date().toISOString()
}
doc.updatedAt = new Date().toISOString()
return dbPut(doc, options)
}
const checkInitialised = () => {
if (!initialised) {
throw new Error("init has not been called")
}
}
module.exports.getCouch = () => {
return Pouch
exports.init = opts => {
PouchDB = pouch.getPouch(opts)
initialised = true
}
// NOTE: THIS IS A DANGEROUS FUNCTION - USE WITH CAUTION
// this function is prone to leaks and should only be used
// in situations where the doWithDB function cannot be used
exports.dangerousGetDB = (dbName, opts) => {
checkInitialised()
if (env.isTest()) {
dbList.add(dbName)
}
const db = new PouchDB(dbName, opts)
if (env.MEMORY_LEAK_CHECK) {
openDbs.push(db.name)
}
const dbPut = db.put
db.put = put(dbPut)
return db
}
// use this function if you have called dangerousGetDB - close
// the databases you've opened once finished
exports.closeDB = async db => {
if (!db || env.isTest()) {
return
}
if (env.MEMORY_LEAK_CHECK) {
openDbs.splice(openDbs.indexOf(db.name), 1)
}
try {
// specifically await so that if there is an error, it can be ignored
return await db.close()
} catch (err) {
// ignore error, already closed
}
}
// we have to use a callback for this so that we can close
// the DB when we're done; without this, callers would need to
// manually close the database when finished to avoid memory leaks
exports.doWithDB = async (dbName, cb, opts = {}) => {
const db = exports.dangerousGetDB(dbName, opts)
// need this to be async so that we can correctly close DB after all
// async operations have been completed
try {
return await cb(db)
} finally {
await exports.closeDB(db)
}
}
exports.allDbs = () => {
if (!env.isTest()) {
throw new Error("Cannot be used outside test environment.")
}
checkInitialised()
return [...dbList]
}
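
A usage sketch contrasting the two access patterns above ("my-db" and "doc-id" are placeholders):

const { doWithDB, dangerousGetDB, closeDB } = require(".")

async function example() {
  // preferred: the DB is closed automatically when the callback settles
  const doc = await doWithDB("my-db", db => db.get("doc-id"))

  // escape hatch: the caller owns the lifecycle
  const db = dangerousGetDB("my-db")
  try {
    await db.put({ ...doc, updated: true })
  } finally {
    await closeDB(db)
  }
  return doc
}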

Some files were not shown because too many files have changed in this diff.