merge with master

Commit: a09fabc54b

@@ -79,6 +79,8 @@ Component libraries are collections of components as well as the definition of t

### Getting Started For Contributors

#### 1. Prerequisites

NodeJS Version `14.x.x`

*yarn* - `npm install -g yarn`

*jest* - `npm install -g jest`

@@ -0,0 +1,93 @@

# Budibase CI Pipelines

Welcome to the Budibase CI pipelines directory. This document details what each of the CI pipelines is for, and some common combinations.

## All CI Pipelines

### Note
- When running workflow dispatch jobs, ensure you always run them off the `master` branch. It defaults to `develop`, so double check before running any jobs (see the sketch below).
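
If you use the GitHub CLI for dispatch jobs, the branch can be passed explicitly; a minimal sketch (any of the workflow files listed below can be substituted):

```bash
# Run a dispatch job off master rather than the default branch (develop).
gh workflow run release-selfhost.yml --ref master
```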

### Standard CI Build Job (budibase_ci.yml)
Triggers:
- PR or push to develop
- PR or push to master

The standard CI build job is what runs when you raise a PR to develop or master. It:
- Installs all dependencies
- Builds the project
- Runs the unit tests
- Generates test coverage metrics with Codecov
- Runs the Cypress tests
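
You can run roughly the same steps locally before raising a PR; a sketch, assuming the root package scripts mirror the pipeline (budibase_ci.yml is authoritative):

```bash
# Install dependencies, build, and run the unit tests locally.
yarn
yarn build
yarn test
```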

### Release Develop Job (release-develop.yml)
Triggers:
- Push to develop

The job responsible for building, tagging and pushing Docker images out to the test and staging environments. It:
- Installs all dependencies
- Builds the project
- Runs the unit tests
- Publishes the Budibase JS packages under a prerelease tag to NPM
- Builds, tags and pushes Docker images under the `develop` tag to Docker Hub

These images will then be pulled by the test and staging environments, updating them to the latest build automatically. Discord notifications are sent to the #infra channel when this occurs.
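
Concretely, the environments (or anyone testing the bleeding edge) pull images like these, using the two service images that appear throughout this document:

```bash
# Fetch the images that track the develop branch.
docker pull budibase/apps:develop
docker pull budibase/worker:develop
```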

### Release Job (release.yml)
Triggers:
- Push to master

This job is responsible for building and pushing the latest code to NPM and Docker Hub, so that it can be deployed. It:
- Installs all dependencies
- Builds the project
- Runs the unit tests
- Publishes the Budibase JS packages under a release tag to NPM (always incremented by patch versions)
- Builds, tags and pushes Docker images under the `v.x.x.x` tag (the tag of the NPM release) to Docker Hub

### Release Selfhost Job (release-selfhost.yml)
Triggers:
- Manual Workflow Dispatch Trigger

This job is responsible for delivering the latest version of Budibase to those that are self-hosting.

This job relies on the release job having run first, so that the latest images have been pushed to Docker Hub. The job will then pull the latest version from `lerna.json` and try to find an image in Docker Hub corresponding to that version. For example, if the version in `lerna.json` is `1.0.0`, it will:
- Pull the images for all Budibase services tagged `v1.0.0` from Docker Hub
- Tag these images as `latest`
- Push them back to Docker Hub. This now means anyone who pulls `latest` (self-hosters using docker-compose) will get the latest version.
- Build and release the Budibase Helm chart for Kubernetes users
- Perform a GitHub release with the latest version. You can see previous releases [here](https://github.com/Budibase/budibase/releases).
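
Boiled down, the retag step looks like this for one service (a sketch; the authoritative commands are in release-selfhost.yml, shown later in this document):

```bash
# Promote the versioned image to the `latest` tag on Docker Hub.
release_version=$(jq -r '.version' lerna.json)   # e.g. 1.0.0
docker pull budibase/apps:v$release_version
docker tag budibase/apps:v$release_version budibase/apps:latest
docker push budibase/apps:latest
```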

### Cloud Deploy (deploy-cloud.yml)
Triggers:
- Manual Workflow Dispatch Trigger

This job is responsible for deploying to our production, cloud Kubernetes environment. You must run the release job first, to ensure that the latest images have been built and pushed to Docker Hub. You can also manually enter a version number for this job, so you can perform rollbacks or upgrade to a specific version. After kicking off this job, the following will occur:

- Checks out the master branch
- Pulls the latest `values.yaml` from budibase-infra, a private repo containing Budibase's infrastructure configuration
- Gets the latest Budibase version from `lerna.json`, if it hasn't been specified in the workflow when you kicked it off
- Configures AWS credentials
- Deploys the Helm chart in the Budibase repo to our production EKS cluster, injecting the `values.yaml` we pulled from budibase-infra
- Fires off a Discord webhook in the #infra channel to show that the deployment completed successfully
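
The deploy step is roughly equivalent to the following, assembled from the parameters in deploy-cloud.yml (a sketch; the workflow actually delegates to the eks-helm-deploy-action rather than calling helm directly):

```bash
# Point kubectl/helm at the production cluster, then upgrade the release.
aws eks update-kubeconfig --name budibase-eks-production --region eu-west-1
helm upgrade --install budibase-prod charts/budibase \
  --namespace budibase \
  -f values.production.yaml \
  --set globals.appVersion=v1.0.0   # version resolved from lerna.json or the manual input
```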

## Common Workflows

### Deploy Changes to Production (Release)
- Merge `develop` into `master`
- Wait for the Budibase CI job and release job to run
- Run the cloud deploy job
- Run the release selfhost job

### Deploy Changes to Production (Hotfix)
- Branch off `master`
- Perform your hotfix
- Merge back into `master`
- Wait for the Budibase CI job and release job to run
- Run the cloud deploy job
- Run the release selfhost job

### Rollback A Bad Cloud Deployment
- Kick off the cloud deploy job
- Ensure you are running off master
- Enter the version number of the last known good version of Budibase, for example `1.0.0` (see the sketch below)
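
With the GitHub CLI, a rollback kick-off looks like this (the `version` input is defined in deploy-cloud.yml):

```bash
# Redeploy a known good version from master.
gh workflow run deploy-cloud.yml --ref master -f version=1.0.0
```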

@@ -41,4 +41,6 @@ jobs:
          files: ./packages/server/coverage/clover.xml
          name: codecov-umbrella
          verbose: true

      # TODO: parallelise this
      - run: yarn test:e2e:ci

@@ -0,0 +1,61 @@
name: Budibase Cloud Deploy

on:
  workflow_dispatch:
    inputs:
      version:
        description: Budibase release version. For example - 1.0.0
        required: false

jobs:
  release:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Pull values.yaml from budibase-infra
        run: |
          curl -H "Authorization: token ${{ secrets.GH_PERSONAL_TOKEN }}" \
          -H 'Accept: application/vnd.github.v3.raw' \
          -o values.production.yaml \
          -L https://api.github.com/repos/budibase/budibase-infra/contents/kubernetes/values.yaml
          wc -l values.production.yaml

      - name: Get the latest budibase release version
        id: version
        run: |
          if [ -z "${{ github.event.inputs.version }}" ]; then
            release_version=$(cat lerna.json | jq -r '.version')
          else
            release_version=${{ github.event.inputs.version }}
          fi
          echo "RELEASE_VERSION=$release_version" >> $GITHUB_ENV

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1

      - name: Deploy to EKS
        uses: craftech-io/eks-helm-deploy-action@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1
          cluster-name: budibase-eks-production
          config-files: values.production.yaml
          chart-path: charts/budibase
          namespace: budibase
          values: globals.appVersion=v${{ env.RELEASE_VERSION }}
          name: budibase-prod

      - name: Discord Webhook Action
        uses: tsickert/discord-webhook@v4.0.0
        with:
          webhook-url: ${{ secrets.PROD_DEPLOY_WEBHOOK_URL }}
          content: "Production Deployment Complete: ${{ env.RELEASE_VERSION }} deployed to Budibase Cloud."
          embed-title: ${{ env.RELEASE_VERSION }}
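
Before running this workflow you can sanity-check that the release job actually pushed the tag you intend to deploy; a sketch, assuming Docker Hub's public v2 tags API:

```bash
# Prints HTTP 200 if the tag exists on Docker Hub, 404 otherwise.
curl -s -o /dev/null -w "%{http_code}\n" \
  https://hub.docker.com/v2/repositories/budibase/apps/tags/v1.0.0
```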

@@ -0,0 +1,66 @@
name: Budibase Release Preprod

on:
  workflow_dispatch:

env:
  POSTHOG_TOKEN: ${{ secrets.POSTHOG_TOKEN }}
  INTERCOM_TOKEN: ${{ secrets.INTERCOM_TOKEN }}
  POSTHOG_URL: ${{ secrets.POSTHOG_URL }}
  SENTRY_DSN: ${{ secrets.SENTRY_DSN }}

jobs:
  release:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1

      - name: Get the latest budibase release version
        id: version
        run: |
          release_version=$(cat lerna.json | jq -r '.version')
          echo "RELEASE_VERSION=$release_version" >> $GITHUB_ENV

      - name: Pull values.yaml from budibase-infra
        run: |
          curl -H "Authorization: token ${{ secrets.GH_PERSONAL_TOKEN }}" \
          -H 'Accept: application/vnd.github.v3.raw' \
          -o values.preprod.yaml \
          -L https://api.github.com/repos/budibase/budibase-infra/contents/kubernetes/budibase-preprod/values.yaml
          wc -l values.preprod.yaml

      - name: Deploy to Preprod Environment
        uses: deliverybot/helm@v1
        with:
          release: budibase-preprod
          namespace: budibase
          chart: charts/budibase
          token: ${{ github.token }}
          helm: helm3
          values: |
            globals:
              appVersion: v${{ env.RELEASE_VERSION }}
            ingress:
              enabled: true
              nginx: true
          value-files: >-
            [
              "values.preprod.yaml"
            ]
        env:
          KUBECONFIG_FILE: '${{ secrets.PREPROD_KUBECONFIG }}'

      - name: Discord Webhook Action
        uses: tsickert/discord-webhook@v4.0.0
        with:
          webhook-url: ${{ secrets.PROD_DEPLOY_WEBHOOK_URL }}
          content: "Preprod Deployment Complete: ${{ env.RELEASE_VERSION }} deployed to Budibase Pre-prod."
          embed-title: ${{ env.RELEASE_VERSION }}

@@ -3,53 +3,62 @@ name: Budibase Release Selfhost

on:
  workflow_dispatch:

env:
  POSTHOG_TOKEN: ${{ secrets.POSTHOG_TOKEN }}
  INTERCOM_TOKEN: ${{ secrets.INTERCOM_TOKEN }}
  POSTHOG_URL: ${{ secrets.POSTHOG_URL }}

jobs:
  release:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: 14.x
      - run: yarn
      - run: yarn bootstrap
        with:
          fetch_depth: 0

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-1

      - name: 'Get Previous tag'
        id: previoustag
        uses: "WyriHaximus/github-action-get-previous-tag@v1"

      - name: Build/release Docker images (Self Host)
      - name: Tag and release Docker images (Self Host)
        run: |
          docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
          yarn build
          yarn build:docker:selfhost

          # Get latest release version
          release_version=$(cat lerna.json | jq -r '.version')
          echo "RELEASE_VERSION=$release_version" >> $GITHUB_ENV
          release_tag=v$release_version

          # Pull apps and worker images
          docker pull budibase/apps:$release_tag
          docker pull budibase/worker:$release_tag

          # Tag apps and worker images
          docker tag budibase/apps:$release_tag budibase/apps:$SELFHOST_TAG
          docker tag budibase/worker:$release_tag budibase/worker:$SELFHOST_TAG

          # Push images
          docker push budibase/apps:$SELFHOST_TAG
          docker push budibase/worker:$SELFHOST_TAG
        env:
          DOCKER_USER: ${{ secrets.DOCKER_USERNAME }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_API_KEY }}
          BUDIBASE_RELEASE_VERSION: ${{ steps.previoustag.outputs.tag }}
          SELFHOST_TAG: latest

      - uses: azure/setup-helm@v1
        id: install
      - name: Setup Helm
        uses: azure/setup-helm@v1
        id: helm-install

      # So, we need to inject the values into this
      - run: yarn release:helm

      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1.1.0
        with:
          charts_dir: docs
      - name: Build and release helm chart
        run: |
          git config user.name "Budibase Helm Bot"
          git config user.email "<>"
          git pull
          helm package charts/budibase
          git checkout gh-pages
          mv *.tgz docs
          helm repo index docs
          git add -A
          git commit -m "Helm Release: ${{ env.RELEASE_VERSION }}"
          git push
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Perform Github Release
        uses: softprops/action-gh-release@v1
        with:
          name: v${{ env.RELEASE_VERSION }}
          tag_name: v${{ env.RELEASE_VERSION }}
          generate_release_notes: true
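
Once the packaged chart lands on the gh-pages branch, self-hosters consume it as a standard Helm repository; a sketch (the repo URL matches the chart index later in this document):

```bash
helm repo add budibase https://budibase.github.io/budibase/
helm repo update
helm install budibase budibase/budibase --namespace budibase --create-namespace
```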

@@ -0,0 +1,9 @@
dependencies:
- name: couchdb
  repository: https://apache.github.io/couchdb-helm
  version: 3.3.4
- name: ingress-nginx
  repository: https://kubernetes.github.io/ingress-nginx
  version: 4.0.13
digest: sha256:20892705c2d8e64c98257d181063a514ac55013e2b43399a6e54868a97f97845
generated: "2021-12-30T18:55:30.878411Z"
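
This lock file is generated from the dependencies declared in Chart.yaml (next) and can be regenerated with:

```bash
helm dependency update charts/budibase
```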

@@ -0,0 +1,24 @@
apiVersion: v2
name: budibase
description: >-
  Budibase is an open source low-code platform, helping thousands of teams build
  apps for their workplace in minutes.
keywords:
- low-code
- database
- cluster
sources:
- https://github.com/Budibase/budibase
- https://budibase.com
type: application
version: 0.2.5
appVersion: 1.0.25
dependencies:
- name: couchdb
  version: 3.3.4
  repository: https://apache.github.io/couchdb-helm
  condition: services.couchdb.enabled
- name: ingress-nginx
  version: 4.0.13
  repository: https://kubernetes.github.io/ingress-nginx
  condition: ingress.nginx
Binary file not shown.
Binary file not shown.

@@ -73,17 +73,13 @@ spec:
                  name: {{ template "budibase.fullname" . }}
                  key: objectStoreSecret
            - name: MINIO_URL
              {{ if .Values.services.objectStore.url }}
              value: {{ .Values.services.objectStore.url }}
              {{ else }}
              value: http://minio-service:{{ .Values.services.objectStore.port }}
              {{ end }}
            - name: PORT
              value: {{ .Values.services.apps.port | quote }}
            - name: MULTI_TENANCY
              value: {{ .Values.globals.multiTenancy | quote }}
            - name: LOG_LEVEL
              value: {{ .Values.services.apps.logLevel | quote }}
              value: {{ default "info" .Values.services.apps.logLevel | quote }}
            - name: REDIS_PASSWORD
              value: {{ .Values.services.redis.password }}
            - name: REDIS_URL

@@ -110,7 +106,7 @@ spec:
              value: {{ .Values.globals.accountPortalApiKey | quote }}
            - name: COOKIE_DOMAIN
              value: {{ .Values.globals.cookieDomain | quote }}
          image: budibase/apps
          image: budibase/apps:{{ .Values.globals.appVersion }}
          imagePullPolicy: Always
          name: bbapps
          ports:
|
|||
{{- if .Values.services.couchdb.backup.enabled }}
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
annotations:
|
||||
kompose.cmd: kompose convert
|
||||
kompose.version: 1.21.0 (992df58d8)
|
||||
creationTimestamp: null
|
||||
labels:
|
||||
app.kubernetes.io/name: couchdb-backup
|
||||
name: couchdb-backup
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: couchdb-backup
|
||||
strategy:
|
||||
type: Recreate
|
||||
template:
|
||||
metadata:
|
||||
annotations:
|
||||
kompose.cmd: kompose convert
|
||||
kompose.version: 1.21.0 (992df58d8)
|
||||
creationTimestamp: null
|
||||
labels:
|
||||
app.kubernetes.io/name: couchdb-backup
|
||||
spec:
|
||||
containers:
|
||||
- env:
|
||||
- name: SOURCE
|
||||
value: {{ .Values.services.couchdb.url }}
|
||||
- name: TARGET
|
||||
value: {{ .Values.services.couchdb.backup.target | quote }}
|
||||
- name: RUN_EVERY_SECS
|
||||
value: {{ .Values.services.couchdb.backup.interval | quote }}
|
||||
- name: VERBOSE
|
||||
value: "true"
|
||||
image: redgeoff/replicate-couchdb-cluster
|
||||
imagePullPolicy: Always
|
||||
name: couchdb-backup
|
||||
resources: {}
|
||||
status: {}
|
||||
{{- end }}
|
|
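
Because the whole Deployment is wrapped in `{{- if .Values.services.couchdb.backup.enabled }}`, it only renders when backups are switched on; a sketch (the target URL is a hypothetical example):

```bash
helm template charts/budibase \
  --set services.couchdb.backup.enabled=true \
  --set services.couchdb.backup.target=http://target-couch:5984 \
  --set services.couchdb.backup.interval=3600
```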
@ -70,17 +70,13 @@ spec:
|
|||
name: {{ template "budibase.fullname" . }}
|
||||
key: objectStoreSecret
|
||||
- name: MINIO_URL
|
||||
{{ if .Values.services.objectStore.url }}
|
||||
value: {{ .Values.services.objectStore.url }}
|
||||
{{ else }}
|
||||
value: http://minio-service:{{ .Values.services.objectStore.port }}
|
||||
{{ end }}
|
||||
- name: PORT
|
||||
value: {{ .Values.services.worker.port | quote }}
|
||||
- name: MULTI_TENANCY
|
||||
value: {{ .Values.globals.multiTenancy | quote }}
|
||||
- name: LOG_LEVEL
|
||||
value: {{ .Values.services.worker.logLevel | quote }}
|
||||
value: {{ default "info" .Values.services.worker.logLevel | quote }}
|
||||
- name: REDIS_PASSWORD
|
||||
value: {{ .Values.services.redis.password | quote }}
|
||||
- name: REDIS_URL
|
||||
|
@ -115,7 +111,7 @@ spec:
|
|||
value: {{ .Values.globals.smtp.from | quote }}
|
||||
- name: APPS_URL
|
||||
value: http://app-service:{{ .Values.services.apps.port }}
|
||||
image: budibase/worker
|
||||
image: budibase/worker:{{ .Values.globals.appVersion }}
|
||||
imagePullPolicy: Always
|
||||
name: bbworker
|
||||
ports:
|
|
@ -0,0 +1,305 @@
|
|||
# Default values for budibase.
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into your templates.
|
||||
|
||||
image:
|
||||
pullPolicy: IfNotPresent
|
||||
# Overrides the image tag whose default is the chart appVersion.
|
||||
tag: ""
|
||||
|
||||
imagePullSecrets: []
|
||||
nameOverride: ""
|
||||
# fullnameOverride: ""
|
||||
|
||||
serviceAccount:
|
||||
# Specifies whether a service account should be created
|
||||
create: true
|
||||
# Annotations to add to the service account
|
||||
annotations: {}
|
||||
# The name of the service account to use.
|
||||
# If not set and create is true, a name is generated using the fullname template
|
||||
name: ""
|
||||
|
||||
podAnnotations: {}
|
||||
|
||||
podSecurityContext:
|
||||
{}
|
||||
# fsGroup: 2000
|
||||
|
||||
securityContext:
|
||||
{}
|
||||
# capabilities:
|
||||
# drop:
|
||||
# - ALL
|
||||
# readOnlyRootFilesystem: true
|
||||
# runAsNonRoot: true
|
||||
# runAsUser: 1000
|
||||
|
||||
service:
|
||||
type: ClusterIP
|
||||
port: 10000
|
||||
|
||||
ingress:
|
||||
enabled: true
|
||||
aws: false
|
||||
nginx: true
|
||||
certificateArn: ""
|
||||
className: ""
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: nginx
|
||||
hosts:
|
||||
- host: # change if using custom domain
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: proxy-service
|
||||
port:
|
||||
number: 10000
|
||||
|
||||
resources:
|
||||
{}
|
||||
# We usually recommend not to specify default resources and to leave this as a conscious
|
||||
# choice for the user. This also increases chances charts run on environments with little
|
||||
# resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
# limits:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
# requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
|
||||
autoscaling:
|
||||
enabled: false
|
||||
minReplicas: 1
|
||||
maxReplicas: 100
|
||||
targetCPUUtilizationPercentage: 80
|
||||
# targetMemoryUtilizationPercentage: 80
|
||||
|
||||
nodeSelector: {}
|
||||
|
||||
tolerations: []
|
||||
|
||||
affinity: {}
|
||||
|
||||
globals:
|
||||
appVersion: "latest"
|
||||
budibaseEnv: PRODUCTION
|
||||
enableAnalytics: true
|
||||
sentryDSN: ""
|
||||
posthogToken: ""
|
||||
logLevel: info
|
||||
selfHosted: "1" # set to 0 for budibase cloud environment, set to 1 for self-hosted setup
|
||||
multiTenancy: "0" # set to 0 to disable multiple orgs, set to 1 to enable multiple orgs
|
||||
accountPortalUrl: ""
|
||||
accountPortalApiKey: ""
|
||||
cookieDomain: ""
|
||||
platformUrl: ""
|
||||
|
||||
createSecrets: true # creates an internal API key, JWT secrets and redis password for you
|
||||
|
||||
# if createSecrets is set to false, you can hard-code your secrets here
|
||||
internalApiKey: ""
|
||||
jwtSecret: ""
|
||||
|
||||
smtp:
|
||||
enabled: false
|
||||
|
||||
services:
|
||||
budibaseVersion: latest
|
||||
dns: cluster.local
|
||||
|
||||
proxy:
|
||||
port: 10000
|
||||
replicaCount: 1
|
||||
|
||||
apps:
|
||||
port: 4002
|
||||
replicaCount: 1
|
||||
logLevel: info
|
||||
|
||||
worker:
|
||||
port: 4001
|
||||
replicaCount: 1
|
||||
|
||||
couchdb:
|
||||
enabled: true
|
||||
# url: "" # only change if pointing to existing couch server
|
||||
# user: "" # only change if pointing to existing couch server
|
||||
# password: "" # only change if pointing to existing couch server
|
||||
port: 5984
|
||||
backup:
|
||||
enabled: false
|
||||
# target couchDB instance to back up to
|
||||
target: ""
|
||||
# backup interval in seconds
|
||||
interval: ""
|
||||
|
||||
redis:
|
||||
enabled: true # disable if using external redis
|
||||
port: 6379
|
||||
replicaCount: 1
|
||||
url: "" # only change if pointing to existing redis cluster and enabled: false
|
||||
password: "budibase" # recommended to override if using built-in redis
|
||||
storage: 100Mi
|
||||
|
||||
objectStore:
|
||||
minio: true
|
||||
browser: true
|
||||
port: 9000
|
||||
replicaCount: 1
|
||||
accessKey: "" # AWS_ACCESS_KEY if using S3 or existing minio access key
|
||||
secretKey: "" # AWS_SECRET_ACCESS_KEY if using S3 or existing minio secret
|
||||
region: "" # AWS_REGION if using S3 or existing minio secret
|
||||
url: "http://minio-service:9000" # only change if pointing to existing minio cluster or S3 and minio: false
|
||||
storage: 100Mi
|
||||
|
||||
# Override values in couchDB subchart
|
||||
couchdb:
|
||||
## clusterSize is the initial size of the CouchDB cluster.
|
||||
clusterSize: 3
|
||||
allowAdminParty: false
|
||||
|
||||
# Secret Management
|
||||
createAdminSecret: true
|
||||
|
||||
# adminUsername: budibase
|
||||
# adminPassword: budibase
|
||||
# adminHash: -pbkdf2-this_is_not_necessarily_secure_either
|
||||
# cookieAuthSecret: admin
|
||||
|
||||
## When enabled, will deploy a networkpolicy that allows CouchDB pods to
|
||||
## communicate with each other for clustering and ingress on port 5984
|
||||
networkPolicy:
|
||||
enabled: true
|
||||
|
||||
# Use a service account
|
||||
serviceAccount:
|
||||
enabled: true
|
||||
create: true
|
||||
# name:
|
||||
# imagePullSecrets:
|
||||
# - name: myimagepullsecret
|
||||
|
||||
## The storage volume used by each Pod in the StatefulSet. If a
|
||||
## persistentVolume is not enabled, the Pods will use `emptyDir` ephemeral
|
||||
## local storage. Setting the storageClass attribute to "-" disables dynamic
|
||||
## provisioning of Persistent Volumes; leaving it unset will invoke the default
|
||||
## provisioner.
|
||||
persistentVolume:
|
||||
enabled: false
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
size: 10Gi
|
||||
storageClass: ""
|
||||
|
||||
## The CouchDB image
|
||||
image:
|
||||
repository: couchdb
|
||||
tag: 3.1.0
|
||||
pullPolicy: IfNotPresent
|
||||
|
||||
## Experimental integration with Lucene-powered fulltext search
|
||||
enableSearch: true
|
||||
searchImage:
|
||||
repository: kocolosk/couchdb-search
|
||||
tag: 0.2.0
|
||||
pullPolicy: IfNotPresent
|
||||
|
||||
initImage:
|
||||
repository: busybox
|
||||
tag: latest
|
||||
pullPolicy: Always
|
||||
|
||||
## CouchDB is happy to spin up cluster nodes in parallel, but if you encounter
|
||||
## problems you can try setting podManagementPolicy to the StatefulSet default
|
||||
## `OrderedReady`
|
||||
podManagementPolicy: Parallel
|
||||
|
||||
## Optional pod annotations
|
||||
annotations: {}
|
||||
|
||||
## Optional tolerations
|
||||
tolerations: []
|
||||
|
||||
service:
|
||||
# annotations:
|
||||
enabled: true
|
||||
type: ClusterIP
|
||||
externalPort: 5984
|
||||
|
||||
## An Ingress resource can provide name-based virtual hosting and TLS
|
||||
## termination among other things for CouchDB deployments which are accessed
|
||||
## from outside the Kubernetes cluster.
|
||||
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
|
||||
ingress:
|
||||
enabled: false
|
||||
hosts:
|
||||
- chart-example.local
|
||||
path: /
|
||||
annotations: []
|
||||
# kubernetes.io/ingress.class: nginx
|
||||
# kubernetes.io/tls-acme: "true"
|
||||
tls:
|
||||
# Secrets must be manually created in the namespace.
|
||||
# - secretName: chart-example-tls
|
||||
# hosts:
|
||||
# - chart-example.local
|
||||
|
||||
## Optional resource requests and limits for the CouchDB container
|
||||
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
|
||||
resources:
|
||||
{}
|
||||
# requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
# limits:
|
||||
# cpu: 56
|
||||
# memory: 256Gi
|
||||
|
||||
## erlangFlags is a map that is passed to the Erlang VM as flags using the
|
||||
## ERL_FLAGS env. `name` and `setcookie` flags are minimally required to
|
||||
## establish connectivity between cluster nodes.
|
||||
## ref: http://erlang.org/doc/man/erl.html#init_flags
|
||||
erlangFlags:
|
||||
name: couchdb
|
||||
setcookie: monster
|
||||
|
||||
## couchdbConfig will override default CouchDB configuration settings.
|
||||
## The contents of this map are reformatted into a .ini file laid down
|
||||
## by a ConfigMap object.
|
||||
## ref: http://docs.couchdb.org/en/latest/config/index.html
|
||||
couchdbConfig:
|
||||
couchdb:
|
||||
uuid: budibase-couchdb # REQUIRED: Unique identifier for this CouchDB server instance
|
||||
# cluster:
|
||||
# q: 8 # Create 8 shards for each database
|
||||
chttpd:
|
||||
bind_address: any
|
||||
# chttpd.require_valid_user disables all the anonymous requests to the port
|
||||
# 5984 when is set to true.
|
||||
require_valid_user: false
|
||||
|
||||
# Kubernetes local cluster domain.
|
||||
# This is used to generate FQDNs for peers when joining the CouchDB cluster.
|
||||
dns:
|
||||
clusterDomainSuffix: cluster.local
|
||||
|
||||
## Configure liveness and readiness probe values
|
||||
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
|
||||
livenessProbe:
|
||||
enabled: true
|
||||
failureThreshold: 3
|
||||
initialDelaySeconds: 0
|
||||
periodSeconds: 10
|
||||
successThreshold: 1
|
||||
timeoutSeconds: 1
|
||||
readinessProbe:
|
||||
enabled: true
|
||||
failureThreshold: 3
|
||||
initialDelaySeconds: 0
|
||||
periodSeconds: 10
|
||||
successThreshold: 1
|
||||
timeoutSeconds: 1
|
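
Any of these defaults can be overridden at install time without editing the chart; a sketch (my-values.yaml is a hypothetical override file):

```bash
# Write a small override file, then apply it on top of the chart defaults.
cat > my-values.yaml <<'EOF'
globals:
  appVersion: "v1.0.25"
services:
  redis:
    password: "change-me"
EOF
helm upgrade --install budibase charts/budibase -f my-values.yaml
```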
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.

@@ -1,9 +0,0 @@
<html>
<head>
  <title>Budibase Helm Chart Repo</title>
</head>
<body>
  <h1>Budibase Charts Repo</h1>
  <p>Point Helm at this repo to see charts.</p>
</body>
</html>
docs/index.yaml

@@ -1,132 +0,0 @@
apiVersion: v1
entries:
  budibase:
  - apiVersion: v2
    appVersion: 0.9.169
    created: "2021-10-20T14:27:23.521358+01:00"
    dependencies:
    - condition: services.couchdb.enabled
      name: couchdb
      repository: https://apache.github.io/couchdb-helm
      version: 3.3.4
    - condition: ingress.nginx
      name: ingress-nginx
      repository: https://github.com/kubernetes/ingress-nginx
      version: 3.35.0
    description: Budibase is an open source low-code platform, helping thousands of teams build apps for their workplace in minutes.
    digest: 57f365d799fcaace4658883cb8ec961a7905383a68acf065af4f6e57f9878ff8
    keywords:
    - low-code
    - database
    - cluster
    name: budibase
    sources:
    - https://github.com/Budibase/budibase
    - https://budibase.com
    type: application
    urls:
    - https://budibase.github.io/budibase/budibase-0.2.2.tgz
    version: 0.2.2
  - apiVersion: v2
    appVersion: 0.9.163
    created: "2021-10-20T14:27:23.5153+01:00"
    dependencies:
    - condition: services.couchdb.enabled
      name: couchdb
      repository: https://apache.github.io/couchdb-helm
      version: 3.3.4
    - condition: ingress.nginx
      name: ingress-nginx
      repository: https://github.com/kubernetes/ingress-nginx
      version: 3.35.0
    description: Budibase is an open source low-code platform, helping thousands of teams build apps for their workplace in minutes.
    digest: ebac6d8631cc38b266c3689508b5123f5afc395f23bdb02738be26c7cae0b0b5
    keywords:
    - low-code
    - database
    - cluster
    name: budibase
    sources:
    - https://github.com/Budibase/budibase
    - https://budibase.com
    type: application
    urls:
    - https://budibase.github.io/budibase/budibase-0.2.1.tgz
    version: 0.2.1
  - apiVersion: v2
    appVersion: 0.9.163
    created: "2021-10-20T14:27:23.510041+01:00"
    dependencies:
    - condition: services.couchdb.enabled
      name: couchdb
      repository: https://apache.github.io/couchdb-helm
      version: 3.3.4
    - condition: ingress.nginx
      name: ingress-nginx
      repository: https://github.com/kubernetes/ingress-nginx
      version: 3.35.0
    description: Budibase is an open source low-code platform, helping thousands of teams build apps for their workplace in minutes.
    digest: f369536c0eac1f6959d51e8ce6d74a87a7a9df29ae84fb9cbed0a273ab77429b
    keywords:
    - low-code
    - database
    - cluster
    name: budibase
    sources:
    - https://github.com/Budibase/budibase
    - https://budibase.com
    type: application
    urls:
    - https://budibase.github.io/budibase/budibase-0.2.0.tgz
    version: 0.2.0
  - apiVersion: v2
    appVersion: 0.9.56
    created: "2021-10-20T14:27:23.504543+01:00"
    dependencies:
    - condition: services.couchdb.enabled
      name: couchdb
      repository: https://apache.github.io/couchdb-helm
      version: 3.3.4
    - name: ingress-nginx
      repository: https://github.com/kubernetes/ingress-nginx
      version: 3.35.0
    description: Budibase is an open source low-code platform, helping thousands of teams build apps for their workplace in minutes.
    digest: 8dc4f2ed4d98cad5adf25936aefea680042d3e4e17832f846b961fd8708ad192
    keywords:
    - low-code
    - database
    - cluster
    name: budibase
    sources:
    - https://github.com/Budibase/budibase
    - https://budibase.com
    type: application
    urls:
    - https://budibase.github.io/budibase/budibase-0.1.1.tgz
    version: 0.1.1
  - apiVersion: v2
    appVersion: 0.9.56
    created: "2021-10-20T14:27:23.496847+01:00"
    dependencies:
    - condition: services.couchdb.enabled
      name: couchdb
      repository: https://apache.github.io/couchdb-helm
      version: 3.3.4
    - name: ingress-nginx
      repository: https://github.com/kubernetes/ingress-nginx
      version: 3.35.0
    description: Budibase is an open source low-code platform, helping thousands of teams build apps for their workplace in minutes.
    digest: 08031b0803cce0eff64472e569d454d9176119c8207aa9873a9c95ee66cc7d3f
    keywords:
    - low-code
    - database
    - cluster
    name: budibase
    sources:
    - https://github.com/Budibase/budibase
    - https://budibase.com
    type: application
    urls:
    - https://budibase.github.io/budibase/budibase-0.1.0.tgz
    version: 0.1.0
generated: "2021-10-20T14:27:23.491132+01:00"

@@ -0,0 +1,19 @@
# Budibase DigitalOcean One Click

In this directory you will find the configuration for packaging and creating a snapshot for the Budibase DigitalOcean 1-Click build. We use this configuration to have an immutable and reproducible build package for DigitalOcean that rarely needs updating.

## Prerequisites

You must install HashiCorp's `packer` to build the snapshot for DigitalOcean. Follow the instructions to install packer [here](https://learn.hashicorp.com/tutorials/packer/get-started-install-cli).

You must have the `DIGITALOCEAN_TOKEN` environment variable set, so that packer can reach out to the DigitalOcean API for build information.

## Building

Just run the following command:

```
yarn build:digitalocean
```

## Uploading to Marketplace

You can upload the snapshot to the DigitalOcean vendor portal at the following link (requires a vendor account):

https://marketplace.digitalocean.com/vendorportal
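
Under the hood `yarn build:digitalocean` shells out to packer (see build.sh below), so the token must be exported first; a sketch (the token value is a placeholder):

```bash
export DIGITALOCEAN_TOKEN=your-token-here
packer build template.json
```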

@@ -0,0 +1,2 @@
#!/bin/bash
packer build template.json

@@ -0,0 +1,19 @@
#!/bin/sh
#
# Configured as part of the DigitalOcean 1-Click Image build process

myip=$(hostname -I | awk '{print$1}')
cat <<EOF
********************************************************************************

Welcome to the Budibase DigitalOcean 1-Click Droplet.
To keep this Droplet secure, the UFW firewall is enabled.
All ports are BLOCKED except 22 (SSH), 80 (HTTP), 443 (HTTPS), and 10000

* Budibase website: http://budibase.com

For help and more information, visit https://docs.budibase.com/self-hosting/hosting-methods/digitalocean

********************************************************************************
To delete this message of the day: rm -rf $(readlink -f ${0})
EOF

@@ -0,0 +1,22 @@
#!/bin/bash

# go into the app dir
cd /root

# fetch envoy and docker-compose files
wget https://raw.githubusercontent.com/Budibase/budibase/master/hosting/docker-compose.yaml
wget https://raw.githubusercontent.com/Budibase/budibase/master/hosting/envoy.yaml
wget https://raw.githubusercontent.com/Budibase/budibase/master/hosting/hosting.properties

# Create .env file from hosting.properties using bash and then remove it
while read line; do
  uuid=$(uuidgen)
  echo $line | sed "s/budibase/$uuid/g" | sed "s/testsecret/$uuid/g" >> .env
done <hosting.properties
rm hosting.properties

# boot the stack
docker-compose up -d

# return
cd -
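
The `while` loop above swaps every default credential in hosting.properties for a fresh UUID as it builds `.env`; for a single (hypothetical) line it behaves like this:

```bash
line="COUCH_DB_PASSWORD=budibase"   # hypothetical entry from hosting.properties
uuid=$(uuidgen)
echo $line | sed "s/budibase/$uuid/g" | sed "s/testsecret/$uuid/g"
# -> COUCH_DB_PASSWORD=1b671a64-40d5-491e-99b0-da01ff1f3341
```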

@@ -0,0 +1,49 @@
#!/bin/bash

# DigitalOcean Marketplace Image Validation Tool
# © 2021 DigitalOcean LLC.
# This code is licensed under Apache 2.0 license (see LICENSE.md for details)

set -o errexit

# Ensure /tmp exists and has the proper permissions before
# checking for security updates
# https://github.com/digitalocean/marketplace-partners/issues/94
if [[ ! -d /tmp ]]; then
  mkdir /tmp
fi
chmod 1777 /tmp

if [ -n "$(command -v yum)" ]; then
  yum update -y
  yum clean all
elif [ -n "$(command -v apt-get)" ]; then
  export DEBIAN_FRONTEND=noninteractive
  apt-get -y update
  apt-get -o Dpkg::Options::="--force-confold" upgrade -q -y --force-yes
  apt-get -y autoremove
  apt-get -y autoclean
fi

rm -rf /tmp/* /var/tmp/*
history -c
cat /dev/null > /root/.bash_history
unset HISTFILE
find /var/log -mtime -1 -type f -exec truncate -s 0 {} \;
rm -rf /var/log/*.gz /var/log/*.[0-9] /var/log/*-????????
rm -rf /var/lib/cloud/instances/*
rm -f /root/.ssh/authorized_keys /etc/ssh/*key*
touch /etc/ssh/revoked_keys
chmod 600 /etc/ssh/revoked_keys

# Securely erase the unused portion of the filesystem
GREEN='\033[0;32m'
NC='\033[0m'
printf "\n${GREEN}Writing zeros to the remaining disk space to securely
erase the unused portion of the file system.
Depending on your disk size this may take several minutes.
The secure erase will complete successfully when you see:${NC}
dd: writing to '/zerofile': No space left on device\n
Beginning secure erase now\n"

dd if=/dev/zero of=/zerofile bs=4096 || rm /zerofile

@@ -0,0 +1,617 @@
#!/bin/bash

# DigitalOcean Marketplace Image Validation Tool
# © 2021 DigitalOcean LLC.
# This code is licensed under Apache 2.0 license (see LICENSE.md for details)

VERSION="v. 1.6"
RUNDATE=$( date )

# Script should be run with SUDO
if [ "$EUID" -ne 0 ]
  then echo "[Error] - This script must be run with sudo or as the root user."
  exit 1
fi

STATUS=0
PASS=0
WARN=0
FAIL=0

# $1 == command to check for
# returns: 0 == true, 1 == false
cmdExists() {
  if command -v "$1" > /dev/null 2>&1; then
    return 0
  else
    return 1
  fi
}

function getDistro {
  if [ -f /etc/os-release ]; then
    # freedesktop.org and systemd
    . /etc/os-release
    OS=$NAME
    VER=$VERSION_ID
  elif type lsb_release >/dev/null 2>&1; then
    # linuxbase.org
    OS=$(lsb_release -si)
    VER=$(lsb_release -sr)
  elif [ -f /etc/lsb-release ]; then
    # For some versions of Debian/Ubuntu without lsb_release command
    . /etc/lsb-release
    OS=$DISTRIB_ID
    VER=$DISTRIB_RELEASE
  elif [ -f /etc/debian_version ]; then
    # Older Debian/Ubuntu/etc.
    OS=Debian
    VER=$(cat /etc/debian_version)
  elif [ -f /etc/SuSe-release ]; then
    # Older SuSE/etc.
    :
  elif [ -f /etc/redhat-release ]; then
    # Older Red Hat, CentOS, etc.
    VER=$( cat /etc/redhat-release | cut -d" " -f3 | cut -d "." -f1)
    d=$( cat /etc/redhat-release | cut -d" " -f1 | cut -d "." -f1)
    if [[ $d == "CentOS" ]]; then
      OS="CentOS Linux"
    fi
  else
    # Fall back to uname, e.g. "Linux <version>", also works for BSD, etc.
    OS=$(uname -s)
    VER=$(uname -r)
  fi
}
function loadPasswords {
  SHADOW=$(cat /etc/shadow)
}

function checkAgent {
  # Check for the presence of the do-agent in the filesystem
  if [ -d /var/opt/digitalocean/do-agent ];then
    echo -en "\e[41m[FAIL]\e[0m DigitalOcean Monitoring Agent detected.\n"
    ((FAIL++))
    STATUS=2
    if [[ $OS == "CentOS Linux" ]] || [[ $OS == "CentOS Stream" ]] || [[ $OS == "Rocky Linux" ]]; then
      echo "The agent can be removed with 'sudo yum remove do-agent' "
    elif [[ $OS == "Ubuntu" ]]; then
      echo "The agent can be removed with 'sudo apt-get purge do-agent' "
    fi
  else
    echo -en "\e[32m[PASS]\e[0m DigitalOcean Monitoring agent was not found\n"
    ((PASS++))
  fi
}

function checkLogs {
  cp_ignore="/var/log/cpanel-install.log"
  echo -en "\nChecking for log files in /var/log\n\n"
  # Check if there are log archives or log files that have not been recently cleared.
  for f in /var/log/*-????????; do
    [[ -e $f ]] || break
    if [ $f != $cp_ignore ]; then
      echo -en "\e[93m[WARN]\e[0m Log archive ${f} found\n"
      ((WARN++))
      if [[ $STATUS != 2 ]]; then
        STATUS=1
      fi
    fi
  done
  for f in /var/log/*.[0-9];do
    [[ -e $f ]] || break
    echo -en "\e[93m[WARN]\e[0m Log archive ${f} found\n"
    ((WARN++))
    if [[ $STATUS != 2 ]]; then
      STATUS=1
    fi
  done
  for f in /var/log/*.log; do
    [[ -e $f ]] || break
    if [[ "${f}" = '/var/log/lfd.log' && "$( cat "${f}" | egrep -v '/var/log/messages has been reset| Watching /var/log/messages' | wc -c)" -gt 50 ]]; then
      if [ $f != $cp_ignore ]; then
        echo -en "\e[93m[WARN]\e[0m un-cleared log file, ${f} found\n"
        ((WARN++))
        if [[ $STATUS != 2 ]]; then
          STATUS=1
        fi
      fi
    elif [[ "${f}" != '/var/log/lfd.log' && "$( cat "${f}" | wc -c)" -gt 50 ]]; then
      if [ $f != $cp_ignore ]; then
        echo -en "\e[93m[WARN]\e[0m un-cleared log file, ${f} found\n"
        ((WARN++))
        if [[ $STATUS != 2 ]]; then
          STATUS=1
        fi
      fi
    fi
  done
}
function checkTMP {
  # Check the /tmp directory to ensure it is empty. Warn on any files found.
  return 1
}
function checkRoot {
  user="root"
  uhome="/root"
  for usr in $SHADOW
  do
    IFS=':' read -r -a u <<< "$usr"
    if [[ "${u[0]}" == "${user}" ]]; then
      if [[ ${u[1]} == "!" ]] || [[ ${u[1]} == "!!" ]] || [[ ${u[1]} == "*" ]]; then
        echo -en "\e[32m[PASS]\e[0m User ${user} has no password set.\n"
        ((PASS++))
      else
        echo -en "\e[41m[FAIL]\e[0m User ${user} has a password set on their account.\n"
        ((FAIL++))
        STATUS=2
      fi
    fi
  done
  if [ -d ${uhome}/ ]; then
    if [ -d ${uhome}/.ssh/ ]; then
      if ls ${uhome}/.ssh/*> /dev/null 2>&1; then
        for key in ${uhome}/.ssh/*
        do
          if [ "${key}" == "${uhome}/.ssh/authorized_keys" ]; then

            if [ "$( cat "${key}" | wc -c)" -gt 50 ]; then
              echo -en "\e[41m[FAIL]\e[0m User \e[1m${user}\e[0m has a populated authorized_keys file in \e[93m${key}\e[0m\n"
              akey=$(cat ${key})
              echo "File Contents:"
              echo $akey
              echo "--------------"
              ((FAIL++))
              STATUS=2
            fi
          elif [ "${key}" == "${uhome}/.ssh/id_rsa" ]; then
            if [ "$( cat "${key}" | wc -c)" -gt 0 ]; then
              echo -en "\e[41m[FAIL]\e[0m User \e[1m${user}\e[0m has a private key file in \e[93m${key}\e[0m\n"
              akey=$(cat ${key})
              echo "File Contents:"
              echo $akey
              echo "--------------"
              ((FAIL++))
              STATUS=2
            else
              echo -en "\e[93m[WARN]\e[0m User \e[1m${user}\e[0m has empty private key file in \e[93m${key}\e[0m\n"
              ((WARN++))
              if [[ $STATUS != 2 ]]; then
                STATUS=1
              fi
            fi
          elif [ "${key}" != "${uhome}/.ssh/known_hosts" ]; then
            echo -en "\e[93m[WARN]\e[0m User \e[1m${user}\e[0m has a file in their .ssh directory at \e[93m${key}\e[0m\n"
            ((WARN++))
            if [[ $STATUS != 2 ]]; then
              STATUS=1
            fi
          else
            if [ "$( cat "${key}" | wc -c)" -gt 50 ]; then
              echo -en "\e[93m[WARN]\e[0m User \e[1m${user}\e[0m has a populated known_hosts file in \e[93m${key}\e[0m\n"
              ((WARN++))
              if [[ $STATUS != 2 ]]; then
                STATUS=1
              fi
            fi
          fi
        done
      else
        echo -en "\e[32m[ OK ]\e[0m User \e[1m${user}\e[0m has no SSH keys present\n"
      fi
    else
      echo -en "\e[32m[ OK ]\e[0m User \e[1m${user}\e[0m does not have an .ssh directory\n"
    fi
    if [ -f /root/.bash_history ];then

      BH_S=$( cat /root/.bash_history | wc -c)

      if [[ $BH_S -lt 200 ]]; then
        echo -en "\e[32m[PASS]\e[0m ${user}'s Bash History appears to have been cleared\n"
        ((PASS++))
      else
        echo -en "\e[41m[FAIL]\e[0m ${user}'s Bash History should be cleared to prevent sensitive information from leaking\n"
        ((FAIL++))
        STATUS=2
      fi

      return 1;
    else
      echo -en "\e[32m[PASS]\e[0m The Root User's Bash History is not present\n"
      ((PASS++))
    fi
  else
    echo -en "\e[32m[ OK ]\e[0m User \e[1m${user}\e[0m does not have a directory in /home\n"
  fi
  echo -en "\n\n"
  return 1
}

function checkUsers {
  # Check each user-created account
  for user in $(awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}' /etc/passwd;)
  do
    # Skip some other non-user system accounts
    if [[ $user == "centos" ]]; then
      :
    elif [[ $user == "nfsnobody" ]]; then
      :
    else
      echo -en "\nChecking user: ${user}...\n"
      for usr in $SHADOW
      do
        IFS=':' read -r -a u <<< "$usr"
        if [[ "${u[0]}" == "${user}" ]]; then
          if [[ ${u[1]} == "!" ]] || [[ ${u[1]} == "!!" ]] || [[ ${u[1]} == "*" ]]; then
            echo -en "\e[32m[PASS]\e[0m User ${user} has no password set.\n"
            ((PASS++))
          else
            echo -en "\e[41m[FAIL]\e[0m User ${user} has a password set on their account. Only system users are allowed on the image.\n"
            ((FAIL++))
            STATUS=2
          fi
        fi
      done
      #echo "User Found: ${user}"
      uhome="/home/${user}"
      if [ -d "${uhome}/" ]; then
        if [ -d "${uhome}/.ssh/" ]; then
          if ls "${uhome}/.ssh/*"> /dev/null 2>&1; then
            for key in ${uhome}/.ssh/*
            do
              if [ "${key}" == "${uhome}/.ssh/authorized_keys" ]; then
                if [ "$( cat "${key}" | wc -c)" -gt 50 ]; then
                  echo -en "\e[41m[FAIL]\e[0m User \e[1m${user}\e[0m has a populated authorized_keys file in \e[93m${key}\e[0m\n"
                  akey=$(cat ${key})
                  echo "File Contents:"
                  echo $akey
                  echo "--------------"
                  ((FAIL++))
                  STATUS=2
                fi
              elif [ "${key}" == "${uhome}/.ssh/id_rsa" ]; then
                if [ "$( cat "${key}" | wc -c)" -gt 0 ]; then
                  echo -en "\e[41m[FAIL]\e[0m User \e[1m${user}\e[0m has a private key file in \e[93m${key}\e[0m\n"
                  akey=$(cat ${key})
                  echo "File Contents:"
                  echo $akey
                  echo "--------------"
                  ((FAIL++))
                  STATUS=2
                else
                  echo -en "\e[93m[WARN]\e[0m User \e[1m${user}\e[0m has empty private key file in \e[93m${key}\e[0m\n"
                  ((WARN++))
                  if [[ $STATUS != 2 ]]; then
                    STATUS=1
                  fi
                fi
              elif [ "${key}" != "${uhome}/.ssh/known_hosts" ]; then

                echo -en "\e[93m[WARN]\e[0m User \e[1m${user}\e[0m has a file in their .ssh directory named \e[93m${key}\e[0m\n"
                ((WARN++))
                if [[ $STATUS != 2 ]]; then
                  STATUS=1
                fi

              else
                if [ "$( cat "${key}" | wc -c)" -gt 50 ]; then
                  echo -en "\e[93m[WARN]\e[0m User \e[1m${user}\e[0m has a known_hosts file in \e[93m${key}\e[0m\n"
                  ((WARN++))
                  if [[ $STATUS != 2 ]]; then
                    STATUS=1
                  fi
                fi
              fi

            done
          else
            echo -en "\e[32m[ OK ]\e[0m User \e[1m${user}\e[0m has no SSH keys present\n"
          fi
        else
          echo -en "\e[32m[ OK ]\e[0m User \e[1m${user}\e[0m does not have an .ssh directory\n"
        fi
      else
        echo -en "\e[32m[ OK ]\e[0m User \e[1m${user}\e[0m does not have a directory in /home\n"
      fi

      # Check for an uncleared .bash_history for this user
      if [ -f "${uhome}/.bash_history" ]; then
        BH_S=$( cat "${uhome}/.bash_history" | wc -c )

        if [[ $BH_S -lt 200 ]]; then
          echo -en "\e[32m[PASS]\e[0m ${user}'s Bash History appears to have been cleared\n"
          ((PASS++))
        else
          echo -en "\e[41m[FAIL]\e[0m ${user}'s Bash History should be cleared to prevent sensitive information from leaking\n"
          ((FAIL++))
          STATUS=2

        fi
        echo -en "\n\n"
      fi
    fi
  done
}
function checkFirewall {

  if [[ $OS == "Ubuntu" ]]; then
    fw="ufw"
    ufwa=$(ufw status |head -1| sed -e "s/^Status:\ //")
    if [[ $ufwa == "active" ]]; then
      FW_VER="\e[32m[PASS]\e[0m Firewall service (${fw}) is active\n"
      ((PASS++))
    else
      FW_VER="\e[93m[WARN]\e[0m No firewall is configured. Ensure ${fw} is installed and configured\n"
      ((WARN++))
    fi
  elif [[ $OS == "CentOS Linux" ]] || [[ $OS == "CentOS Stream" ]] || [[ $OS == "Rocky Linux" ]]; then
    if [ -f /usr/lib/systemd/system/csf.service ]; then
      fw="csf"
      if [[ $(systemctl status $fw >/dev/null 2>&1) ]]; then

        FW_VER="\e[32m[PASS]\e[0m Firewall service (${fw}) is active\n"
        ((PASS++))
      elif cmdExists "firewall-cmd"; then
        if [[ $(systemctl is-active firewalld >/dev/null 2>&1 && echo 1 || echo 0) ]]; then
          FW_VER="\e[32m[PASS]\e[0m Firewall service (${fw}) is active\n"
          ((PASS++))
        else
          FW_VER="\e[93m[WARN]\e[0m No firewall is configured. Ensure ${fw} is installed and configured\n"
          ((WARN++))
        fi
      else
        FW_VER="\e[93m[WARN]\e[0m No firewall is configured. Ensure ${fw} is installed and configured\n"
        ((WARN++))
      fi
    else
      fw="firewalld"
      if [[ $(systemctl is-active firewalld >/dev/null 2>&1 && echo 1 || echo 0) ]]; then
        FW_VER="\e[32m[PASS]\e[0m Firewall service (${fw}) is active\n"
        ((PASS++))
      else
        FW_VER="\e[93m[WARN]\e[0m No firewall is configured. Ensure ${fw} is installed and configured\n"
        ((WARN++))
      fi
    fi
  elif [[ "$OS" =~ Debian.* ]]; then
    # user could be using a number of different services for managing their firewall
    # we will check some of the most common
    if cmdExists 'ufw'; then
      fw="ufw"
      ufwa=$(ufw status |head -1| sed -e "s/^Status:\ //")
      if [[ $ufwa == "active" ]]; then
        FW_VER="\e[32m[PASS]\e[0m Firewall service (${fw}) is active\n"
        ((PASS++))
      else
        FW_VER="\e[93m[WARN]\e[0m No firewall is configured. Ensure ${fw} is installed and configured\n"
        ((WARN++))
      fi
    elif cmdExists "firewall-cmd"; then
      fw="firewalld"
      if [[ $(systemctl is-active --quiet $fw) ]]; then
        FW_VER="\e[32m[PASS]\e[0m Firewall service (${fw}) is active\n"
        ((PASS++))
      else
        FW_VER="\e[93m[WARN]\e[0m No firewall is configured. Ensure ${fw} is installed and configured\n"
        ((WARN++))
      fi
    else
      # user could be using vanilla iptables, check if kernel module is loaded
      fw="iptables"
      if [[ $(lsmod | grep -q '^ip_tables' 2>/dev/null) ]]; then
        FW_VER="\e[32m[PASS]\e[0m Firewall service (${fw}) is active\n"
        ((PASS++))
      else
        FW_VER="\e[93m[WARN]\e[0m No firewall is configured. Ensure ${fw} is installed and configured\n"
        ((WARN++))
      fi
    fi
  fi

}
function checkUpdates {
  if [[ $OS == "Ubuntu" ]] || [[ "$OS" =~ Debian.* ]]; then
    # Ensure /tmp exists and has the proper permissions before
    # checking for security updates
    # https://github.com/digitalocean/marketplace-partners/issues/94
    if [[ ! -d /tmp ]]; then
      mkdir /tmp
    fi
    chmod 1777 /tmp

    echo -en "\nUpdating apt package database to check for security updates, this may take a minute...\n\n"
    apt-get -y update > /dev/null

    uc=$(apt-get --just-print upgrade | grep -i "security" | wc -l)
    if [[ $uc -gt 0 ]]; then
      update_count=$(( ${uc} / 2 ))
    else
      update_count=0
    fi

    if [[ $update_count -gt 0 ]]; then
      echo -en "\e[41m[FAIL]\e[0m There are ${update_count} security updates available for this image that have not been installed.\n"
      echo -en
      echo -en "Here is a list of the security updates that are not installed:\n"
      sleep 2
      apt-get --just-print upgrade | grep -i security | awk '{print $2}' | awk '!seen[$0]++'
      echo -en
      ((FAIL++))
      STATUS=2
    else
      echo -en "\e[32m[PASS]\e[0m There are no pending security updates for this image.\n\n"
    fi
  elif [[ $OS == "CentOS Linux" ]] || [[ $OS == "CentOS Stream" ]] || [[ $OS == "Rocky Linux" ]]; then
    echo -en "\nChecking for available security updates, this may take a minute...\n\n"

    update_count=$(yum check-update --security --quiet | wc -l)
    if [[ $update_count -gt 0 ]]; then
      echo -en "\e[41m[FAIL]\e[0m There are ${update_count} security updates available for this image that have not been installed.\n"
      ((FAIL++))
      STATUS=2
    else
      echo -en "\e[32m[PASS]\e[0m There are no pending security updates for this image.\n"
      ((PASS++))
    fi
  else
    echo "Error encountered"
    exit 1
  fi

  return 1;
}
function checkCloudInit {

  if hash cloud-init 2>/dev/null; then
    CI="\e[32m[PASS]\e[0m Cloud-init is installed.\n"
    ((PASS++))
  else
    CI="\e[41m[FAIL]\e[0m No valid version of cloud-init was found.\n"
    ((FAIL++))
    STATUS=2
  fi
  return 1
}

function version_gt() { test "$(printf '%s\n' "$@" | sort -V | head -n 1)" != "$1"; }


clear
echo "DigitalOcean Marketplace Image Validation Tool ${VERSION}"
echo "Executed on: ${RUNDATE}"
echo "Checking local system for Marketplace compatibility..."

getDistro

echo -en "\n\e[1mDistribution:\e[0m ${OS}\n"
echo -en "\e[1mVersion:\e[0m ${VER}\n\n"

ost=0
osv=0

if [[ $OS == "Ubuntu" ]]; then
  ost=1
  if [[ $VER == "20.04" ]]; then
    osv=1
  elif [[ $VER == "18.04" ]]; then
    osv=1
  elif [[ $VER == "16.04" ]]; then
    osv=1
  else
    osv=0
  fi

elif [[ "$OS" =~ Debian.* ]]; then
  ost=1
  case "$VER" in
    9)
      osv=1
      ;;
    10)
      osv=1
      ;;
    *)
      osv=2
      ;;
  esac

elif [[ $OS == "CentOS Linux" ]]; then
  ost=1
  if [[ $VER == "8" ]]; then
    osv=1
  elif [[ $VER == "7" ]]; then
    osv=1
  elif [[ $VER == "6" ]]; then
    osv=1
  else
    osv=2
  fi
elif [[ $OS == "CentOS Stream" ]]; then
  ost=1
  if [[ $VER == "8" ]]; then
    osv=1
  else
    osv=2
  fi
elif [[ $OS == "Rocky Linux" ]]; then
  ost=1
  if [[ $VER =~ "8." ]]; then
    osv=1
  else
    osv=2
  fi
else
  ost=0
fi

if [[ $ost == 1 ]]; then
  echo -en "\e[32m[PASS]\e[0m Supported Operating System Detected: ${OS}\n"
  ((PASS++))
else
  echo -en "\e[41m[FAIL]\e[0m ${OS} is not a supported Operating System\n"
  ((FAIL++))
  STATUS=2
fi

if [[ $osv == 1 ]]; then
  echo -en "\e[32m[PASS]\e[0m Supported Release Detected: ${VER}\n"
  ((PASS++))
elif [[ $ost == 1 ]]; then
  echo -en "\e[41m[FAIL]\e[0m ${OS} ${VER} is not a supported Operating System Version\n"
  ((FAIL++))
  STATUS=2
else
  echo "Exiting..."
  exit 1
fi

checkCloudInit

echo -en "${CI}"

checkFirewall

echo -en "${FW_VER}"

checkUpdates

loadPasswords

checkLogs

echo -en "\n\nChecking all user-created accounts...\n"
checkUsers

echo -en "\n\nChecking the root account...\n"
checkRoot

checkAgent


# Summary
echo -en "\n\n---------------------------------------------------------------------------------------------------\n"

if [[ $STATUS == 0 ]]; then
  echo -en "Scan Complete.\n\e[32mAll Tests Passed!\e[0m\n"
elif [[ $STATUS == 1 ]]; then
  echo -en "Scan Complete. \n\e[93mSome non-critical tests failed. Please review these items.\e[0m\e[0m\n"
else
  echo -en "Scan Complete. \n\e[41mOne or more tests failed. Please review these items and re-test.\e[0m\n"
fi
echo "---------------------------------------------------------------------------------------------------"
echo -en "\e[1m${PASS} Tests PASSED\e[0m\n"
echo -en "\e[1m${WARN} WARNINGS\e[0m\n"
echo -en "\e[1m${FAIL} Tests FAILED\e[0m\n"
echo -en "---------------------------------------------------------------------------------------------------\n"

if [[ $STATUS == 0 ]]; then
  echo -en "We did not detect any issues with this image. Please be sure to manually ensure that all software installed on the base system is functional, secure and properly configured (or facilities for configuration on first-boot have been created).\n\n"
  exit 0
elif [[ $STATUS == 1 ]]; then
  echo -en "Please review all [WARN] items above and ensure they are intended or resolved. If you do not have a specific requirement, we recommend resolving these items before image submission\n\n"
  exit 0
else
  echo -en "Some critical tests failed. These items must be resolved and this scan re-run before you submit your image to the DigitalOcean Marketplace.\n\n"
  exit 1
fi
@@ -0,0 +1,65 @@
{
  "variables": {
    "token": "{{env `DIGITALOCEAN_TOKEN`}}",
    "image_name": "budibase-marketplace-snapshot-{{timestamp}}",
    "apt_packages": "jq"
  },
  "builders": [
    {
      "type": "digitalocean",
      "api_token": "{{user `token`}}",
      "image": "docker-20-04",
      "region": "lon1",
      "size": "s-1vcpu-1gb",
      "ssh_username": "root",
      "snapshot_name": "{{user `image_name`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "cloud-init status --wait"
      ]
    },
    {
      "type": "file",
      "source": "files/etc/",
      "destination": "/etc/"
    },
    {
      "type": "file",
      "source": "files/var/",
      "destination": "/var/"
    },
    {
      "type": "shell",
      "environment_vars": [
        "DEBIAN_FRONTEND=noninteractive",
        "LC_ALL=C",
        "LANG=en_US.UTF-8",
        "LC_CTYPE=en_US.UTF-8"
      ],
      "inline": [
        "apt -qqy update",
        "apt -qqy -o Dpkg::Options::='--force-confdef' -o Dpkg::Options::='--force-confold' full-upgrade",
        "apt -qqy -o Dpkg::Options::='--force-confdef' -o Dpkg::Options::='--force-confold' install {{user `apt_packages`}}"
      ]
    },
    {
      "type": "shell",
      "environment_vars": [
        "application_name={{user `application_name`}}",
        "application_version={{user `application_version`}}",
        "DEBIAN_FRONTEND=noninteractive",
        "LC_ALL=C",
        "LANG=en_US.UTF-8",
        "LC_CTYPE=en_US.UTF-8"
      ],
      "scripts": [
        "scripts/90-cleanup.sh",
        "scripts/99-img_check.sh"
      ]
    }
  ]
}
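For reference, a template like this is driven entirely by the `DIGITALOCEAN_TOKEN` environment variable it declares above. A minimal usage sketch (the `template.json` filename is illustrative, not part of this commit):

```bash
# Packer resolves the api_token from the exported environment variable
export DIGITALOCEAN_TOKEN="<your-do-api-token>"
packer build template.json
```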
@@ -1,41 +0,0 @@
apiVersion: v2
name: budibase
description: Budibase is an open source low-code platform, helping thousands of teams build apps for their workplace in minutes.
keywords:
- low-code
- database
- cluster
sources:
- https://github.com/Budibase/budibase
- https://budibase.com

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.2

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.9.169"

dependencies:
- name: couchdb
  version: 3.3.4
  repository: https://apache.github.io/couchdb-helm
  condition: services.couchdb.enabled
- name: ingress-nginx
  version: 3.35.0
  repository: https://github.com/kubernetes/ingress-nginx
  condition: ingress.nginx
@@ -1,19 +0,0 @@
apiVersion: v1
appVersion: 3.1.0
description: A database featuring seamless multi-master sync, that scales from big
  data to mobile, with an intuitive HTTP/JSON API and designed for reliability.
home: https://couchdb.apache.org/
icon: http://couchdb.apache.org/CouchDB-visual-identity/logo/CouchDB-couch-symbol.svg
keywords:
- couchdb
- database
- nosql
maintainers:
- email: kocolosk@apache.org
  name: kocolosk
- email: willholley@apache.org
  name: willholley
name: couchdb
sources:
- https://github.com/apache/couchdb-docker
version: 3.3.4
@@ -1,244 +0,0 @@
# CouchDB

Apache CouchDB is a database featuring seamless multi-master sync, that scales
from big data to mobile, with an intuitive HTTP/JSON API and designed for
reliability.

This chart deploys a CouchDB cluster as a StatefulSet. It creates a ClusterIP
Service in front of the Deployment for load balancing by default, but can also
be configured to deploy other Service types or an Ingress Controller. The
default persistence mechanism is simply the ephemeral local filesystem, but
production deployments should set `persistentVolume.enabled` to `true` to attach
storage volumes to each Pod in the Deployment.
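For example, persistence can be switched on at install time. A minimal sketch (both flags appear in the configuration tables below):

```bash
$ helm install couchdb/couchdb \
  --set persistentVolume.enabled=true \
  --set persistentVolume.size=10Gi \
  --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad
```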

## TL;DR

```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
$ helm install couchdb/couchdb \
  --set allowAdminParty=true \
  --set couchdbConfig.couchdb.uuid=$(curl https://www.uuidgenerator.net/api/version4 2>/dev/null | tr -d -)
```

## Prerequisites

- Kubernetes 1.9+ with Beta APIs enabled
- Ingress requires Kubernetes 1.14+

## Installing the Chart

To install the chart with the release name `my-release`:

Add the CouchDB Helm repository:

```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
```

Afterwards install the chart replacing the UUID
`decafbaddecafbaddecafbaddecafbad` with a custom one:

```bash
$ helm install \
  --name my-release \
  --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad \
  couchdb/couchdb
```

This will create a Secret containing the admin credentials for the cluster.
Those credentials can be retrieved as follows:

```bash
$ kubectl get secret my-release-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode
```

If you prefer to configure the admin credentials directly you can create a
Secret containing `adminUsername`, `adminPassword` and `cookieAuthSecret` keys:

```bash
$ kubectl create secret generic my-release-couchdb --from-literal=adminUsername=foo --from-literal=adminPassword=bar --from-literal=cookieAuthSecret=baz
```

If you want to set the `adminHash` directly to achieve consistent salts between
different nodes you need to additionally add the key `password.ini` to the secret:

```bash
$ kubectl create secret generic my-release-couchdb \
  --from-literal=adminUsername=foo \
  --from-literal=cookieAuthSecret=baz \
  --from-file=./my-password.ini
```

With the following contents in `my-password.ini`:

```
[admins]
foo = <pbkdf2-hash>
```

and then install the chart while overriding the `createAdminSecret` setting:

```bash
$ helm install \
  --name my-release \
  --set createAdminSecret=false \
  --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad \
  couchdb/couchdb
```

This Helm chart deploys CouchDB on the Kubernetes cluster in a default
configuration. The [configuration](#configuration) section lists
the parameters that can be configured during installation.

> **Tip**: List all releases using `helm list`

## Uninstalling the Chart

To uninstall/delete the `my-release` Deployment:

```bash
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and
deletes the release.

## Upgrading an existing Release to a new major version

A major chart version change (like v0.2.3 -> v1.0.0) indicates that there is an
incompatible breaking change needing manual actions.

### Upgrade to 3.0.0

Since version 3.0.0 setting the CouchDB server instance UUID is mandatory.
Therefore you need to generate a UUID and supply it as a value during the
upgrade as follows:

```bash
$ helm upgrade <release-name> \
  --reuse-values \
  --set couchdbConfig.couchdb.uuid=<UUID> \
  couchdb/couchdb
```

## Migrating from stable/couchdb

This chart replaces the `stable/couchdb` chart previously hosted by Helm and continues the
version semantics. You can upgrade directly from `stable/couchdb` to this chart using:

```bash
$ helm repo add couchdb https://apache.github.io/couchdb-helm
$ helm upgrade my-release couchdb/couchdb
```

## Configuration

The following table lists the most commonly configured parameters of the
CouchDB chart and their default values:

| Parameter                  | Description                                             | Default                            |
|----------------------------|---------------------------------------------------------|------------------------------------|
| `clusterSize`              | The initial number of nodes in the CouchDB cluster      | 3                                  |
| `couchdbConfig`            | Map allowing override elements of server .ini config    | *See below*                        |
| `allowAdminParty`          | If enabled, start cluster without admin account         | false (requires creating a Secret) |
| `createAdminSecret`        | If enabled, create an admin account and cookie secret   | true                               |
| `schedulerName`            | Name of the k8s scheduler (other than default)          | `nil`                              |
| `erlangFlags`              | Map of flags supplied to the underlying Erlang VM       | name: couchdb, setcookie: monster  |
| `persistentVolume.enabled` | Boolean determining whether to attach a PV to each node | false                              |
| `persistentVolume.size`    | If enabled, the size of the persistent volume to attach | 10Gi                               |
| `enableSearch`             | Adds a sidecar for Lucene-powered text search           | false                              |

You can set the values of the `couchdbConfig` map according to the
[official configuration][4]. The following shows the map's default values and
required options to set:

| Parameter                   | Description                                                        | Default |
|-----------------------------|--------------------------------------------------------------------|---------|
| `couchdb.uuid`              | UUID for this CouchDB server instance ([Required in a cluster][5]) |         |
| `chttpd.bind_address`       | listens on all interfaces when set to any                          | any     |
| `chttpd.require_valid_user` | disables all the anonymous requests to the port 5984 when true     | false   |
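As an illustration, the same map can also be set from a values file — here enabling eight shards per database alongside the mandatory UUID (a sketch; the `cluster.q` shard-count option comes from the official configuration linked above):

```yaml
couchdbConfig:
  couchdb:
    uuid: decafbaddecafbaddecafbaddecafbad
  cluster:
    q: 8
```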

A variety of other parameters are also configurable. See the comments in the
`values.yaml` file for further details:

| Parameter                             | Default                      |
|---------------------------------------|------------------------------|
| `adminUsername`                       | admin                        |
| `adminPassword`                       | auto-generated               |
| `adminHash`                           |                              |
| `cookieAuthSecret`                    | auto-generated               |
| `image.repository`                    | couchdb                      |
| `image.tag`                           | 3.1.0                        |
| `image.pullPolicy`                    | IfNotPresent                 |
| `searchImage.repository`              | kocolosk/couchdb-search      |
| `searchImage.tag`                     | 0.1.0                        |
| `searchImage.pullPolicy`              | IfNotPresent                 |
| `initImage.repository`                | busybox                      |
| `initImage.tag`                       | latest                       |
| `initImage.pullPolicy`                | Always                       |
| `ingress.enabled`                     | false                        |
| `ingress.hosts`                       | chart-example.local          |
| `ingress.annotations`                 |                              |
| `ingress.path`                        | /                            |
| `ingress.tls`                         |                              |
| `persistentVolume.accessModes`        | ReadWriteOnce                |
| `persistentVolume.storageClass`       | Default for the Kube cluster |
| `podManagementPolicy`                 | Parallel                     |
| `affinity`                            |                              |
| `annotations`                         |                              |
| `tolerations`                         |                              |
| `resources`                           |                              |
| `service.annotations`                 |                              |
| `service.enabled`                     | true                         |
| `service.type`                        | ClusterIP                    |
| `service.externalPort`                | 5984                         |
| `dns.clusterDomainSuffix`             | cluster.local                |
| `networkPolicy.enabled`               | true                         |
| `serviceAccount.enabled`              | true                         |
| `serviceAccount.create`               | true                         |
| `serviceAccount.imagePullSecrets`     |                              |
| `sidecars`                            | {}                           |
| `livenessProbe.enabled`               | true                         |
| `livenessProbe.failureThreshold`      | 3                            |
| `livenessProbe.initialDelaySeconds`   | 0                            |
| `livenessProbe.periodSeconds`         | 10                           |
| `livenessProbe.successThreshold`      | 1                            |
| `livenessProbe.timeoutSeconds`        | 1                            |
| `readinessProbe.enabled`              | true                         |
| `readinessProbe.failureThreshold`     | 3                            |
| `readinessProbe.initialDelaySeconds`  | 0                            |
| `readinessProbe.periodSeconds`        | 10                           |
| `readinessProbe.successThreshold`     | 1                            |
| `readinessProbe.timeoutSeconds`       | 1                            |

## Feedback, Issues, Contributing

General feedback is welcome at our [user][1] or [developer][2] mailing lists.

Apache CouchDB has a [CONTRIBUTING][3] file with details on how to get started
with issue reporting or contributing to the upkeep of this project. In short,
use GitHub Issues, do not report anything on Docker's website.

## Non-Apache CouchDB Development Team Contributors

- [@natarajaya](https://github.com/natarajaya)
- [@satchpx](https://github.com/satchpx)
- [@spanato](https://github.com/spanato)
- [@jpds](https://github.com/jpds)
- [@sebastien-prudhomme](https://github.com/sebastien-prudhomme)
- [@stepanstipl](https://github.com/stepanstipl)
- [@amatas](https://github.com/amatas)
- [@Chimney42](https://github.com/Chimney42)
- [@mattjmcnaughton](https://github.com/mattjmcnaughton)
- [@mainephd](https://github.com/mainephd)
- [@AdamDang](https://github.com/AdamDang)
- [@mrtyler](https://github.com/mrtyler)
- [@kevinwlau](https://github.com/kevinwlau)
- [@jeyenzo](https://github.com/jeyenzo)
- [@Pinpin31](https://github.com/Pinpin31)

[1]: http://mail-archives.apache.org/mod_mbox/couchdb-user/
[2]: http://mail-archives.apache.org/mod_mbox/couchdb-dev/
[3]: https://github.com/apache/couchdb/blob/master/CONTRIBUTING.md
[4]: https://docs.couchdb.org/en/stable/config/index.html
[5]: https://docs.couchdb.org/en/latest/setup/cluster.html#preparing-couchdb-nodes-to-be-joined-into-a-cluster
@@ -1,3 +0,0 @@
couchdbConfig:
  couchdb:
    uuid: "decafbaddecafbaddecafbaddecafbad"
@@ -1,9 +0,0 @@
sidecars:
  - name: foo
    image: "busybox"
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: "0.1"
        memory: 10Mi
    command: ['while true; do echo "foo"; sleep 5; done;']
@@ -1,2 +0,0 @@
[admins]
{{ .Values.adminUsername }} = {{ .Values.adminHash }}
@@ -1,20 +0,0 @@
Apache CouchDB is starting. Check the status of the Pods using:

  kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "couchdb.name" . }},release={{ .Release.Name }}"

Once all of the Pods are fully Ready, execute the following command to create
some required system databases:

  kubectl exec --namespace {{ .Release.Namespace }} {{ if not .Values.allowAdminParty }}-it {{ end }}{{ template "couchdb.fullname" . }}-0 -c couchdb -- \
    curl -s \
    http://127.0.0.1:5984/_cluster_setup \
    -X POST \
    -H "Content-Type: application/json" \
{{- if .Values.allowAdminParty }}
    -d '{"action": "finish_cluster"}'
{{- else }}
    -d '{"action": "finish_cluster"}' \
    -u <adminUsername>
{{- end }}

Then it's time to relax.
@@ -1,81 +0,0 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "couchdb.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "couchdb.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- printf "%s-%s" .Values.fullnameOverride .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}

{{/*
In the event that we create both a headless service and a traditional one,
ensure that the latter gets a unique name.
*/}}
{{- define "couchdb.svcname" -}}
{{- if .Values.fullnameOverride -}}
{{- printf "%s-svc-%s" .Values.fullnameOverride .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-svc-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}

{{/*
Create a random string if the supplied key does not exist
*/}}
{{- define "couchdb.defaultsecret" -}}
{{- if . -}}
{{- . | b64enc | quote -}}
{{- else -}}
{{- randAlphaNum 20 | b64enc | quote -}}
{{- end -}}
{{- end -}}

{{/*
Labels used to define Pods in the CouchDB statefulset
*/}}
{{- define "couchdb.ss.selector" -}}
app: {{ template "couchdb.name" . }}
release: {{ .Release.Name }}
{{- end -}}

{{/*
Generates a comma delimited list of nodes in the cluster
*/}}
{{- define "couchdb.seedlist" -}}
{{- $nodeCount := min 5 .Values.clusterSize | int }}
{{- range $index0 := until $nodeCount -}}
{{- $index1 := $index0 | add1 -}}
{{ $.Values.erlangFlags.name }}@{{ template "couchdb.fullname" $ }}-{{ $index0 }}.{{ template "couchdb.fullname" $ }}.{{ $.Release.Namespace }}.svc.{{ $.Values.dns.clusterDomainSuffix }}{{ if ne $index1 $nodeCount }},{{ end }}
{{- end -}}
{{- end -}}

{{/*
If serviceAccount.name is specified, use that, else use the couchdb instance name
*/}}
{{- define "couchdb.serviceAccount" -}}
{{- if .Values.serviceAccount.name -}}
{{- .Values.serviceAccount.name }}
{{- else -}}
{{- template "couchdb.fullname" . -}}
{{- end -}}
{{- end -}}

{{/*
Fail if couchdbConfig.couchdb.uuid is undefined
*/}}
{{- define "couchdb.uuid" -}}
{{- required "A value for couchdbConfig.couchdb.uuid must be set" (.Values.couchdbConfig.couchdb | default dict).uuid -}}
{{- end -}}
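To make the `couchdb.seedlist` helper concrete: with a hypothetical release named `my-release` in the `default` namespace, the default `erlangFlags.name` of `couchdb` and a `clusterSize` of 3, it renders a list of the form:

```
couchdb@my-release-couchdb-0.my-release-couchdb.default.svc.cluster.local,couchdb@my-release-couchdb-1.my-release-couchdb.default.svc.cluster.local,couchdb@my-release-couchdb-2.my-release-couchdb.default.svc.cluster.local
```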
@@ -1,23 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "couchdb.fullname" . }}
  labels:
    app: {{ template "couchdb.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
data:
  inifile: |
    {{ $couchdbConfig := dict "couchdb" (dict "uuid" (include "couchdb.uuid" .)) -}}
    {{- $couchdbConfig := merge $couchdbConfig .Values.couchdbConfig -}}
    {{- range $section, $settings := $couchdbConfig -}}
    {{ printf "[%s]" $section }}
    {{ range $key, $value := $settings -}}
    {{ printf "%s = %s" $key ($value | toString) }}
    {{ end }}
    {{ end }}

  seedlistinifile: |
    [cluster]
    seedlist = {{ template "couchdb.seedlist" . }}
@@ -1,17 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ template "couchdb.fullname" . }}
  labels:
    app: {{ template "couchdb.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: couchdb
      port: 5984
  selector:
{{ include "couchdb.ss.selector" . | indent 4 }}
@@ -1,33 +0,0 @@
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "couchdb.fullname" . -}}
{{- $servicePort := .Values.service.externalPort -}}
{{- $path := .Values.ingress.path | quote -}}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ template "couchdb.fullname" . }}
  labels:
    app: {{ template "couchdb.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    {{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
spec:
  rules:
    {{- range $host := .Values.ingress.hosts }}
    - host: {{ $host }}
      http:
        paths:
          - path: {{ $path }}
            backend:
              serviceName: {{ $serviceName }}
              servicePort: {{ $servicePort }}
    {{- end -}}
  {{- if .Values.ingress.tls }}
  tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
  {{- end -}}
{{- end -}}
@@ -1,31 +0,0 @@

{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ template "couchdb.fullname" . }}
  labels:
    app: {{ template "couchdb.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  podSelector:
    matchLabels:
{{ include "couchdb.ss.selector" . | indent 6 }}
  ingress:
    - ports:
        - protocol: TCP
          port: 5984
    - ports:
        - protocol: TCP
          port: 9100
        - protocol: TCP
          port: 4369
      from:
        - podSelector:
            matchLabels:
{{ include "couchdb.ss.selector" . | indent 14 }}
  policyTypes:
    - Ingress
{{- end }}
@@ -1,19 +0,0 @@
{{- if .Values.createAdminSecret -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "couchdb.fullname" . }}
  labels:
    app: {{ template "couchdb.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
type: Opaque
data:
  adminUsername: {{ template "couchdb.defaultsecret" .Values.adminUsername }}
  adminPassword: {{ template "couchdb.defaultsecret" .Values.adminPassword }}
  cookieAuthSecret: {{ template "couchdb.defaultsecret" .Values.cookieAuthSecret }}
{{- if .Values.adminHash }}
  password.ini: {{ tpl (.Files.Get "password.ini") . | b64enc }}
{{- end -}}
{{- end -}}
@@ -1,23 +0,0 @@
{{- if .Values.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ template "couchdb.svcname" . }}
  labels:
    app: {{ template "couchdb.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.service.annotations }}
  annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
spec:
  ports:
    - port: {{ .Values.service.externalPort }}
      protocol: TCP
      targetPort: 5984
  type: {{ .Values.service.type }}
  selector:
{{ include "couchdb.ss.selector" . | indent 4 }}
{{- end -}}
@@ -1,15 +0,0 @@
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "couchdb.serviceAccount" . }}
  labels:
    app: {{ template "couchdb.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.serviceAccount.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.serviceAccount.imagePullSecrets }}
{{- end }}
{{- end }}
@@ -1,202 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ template "couchdb.fullname" . }}
  labels:
    app: {{ template "couchdb.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.clusterSize }}
  serviceName: {{ template "couchdb.fullname" . }}
  podManagementPolicy: {{ .Values.podManagementPolicy }}
  selector:
    matchLabels:
{{ include "couchdb.ss.selector" . | indent 6 }}
  template:
    metadata:
      labels:
{{ include "couchdb.ss.selector" . | indent 8 }}
{{- with .Values.annotations }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{ toYaml . | indent 8 }}
{{- end }}
    spec:
      {{- if .Values.schedulerName }}
      schedulerName: "{{ .Values.schedulerName }}"
      {{- end }}
      {{- if .Values.serviceAccount.enabled }}
      serviceAccountName: {{ template "couchdb.serviceAccount" . }}
      {{- end }}
      initContainers:
        - name: init-copy
          image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
          imagePullPolicy: {{ .Values.initImage.pullPolicy }}
          command: ['sh','-c','cp /tmp/chart.ini /default.d; cp /tmp/seedlist.ini /default.d; ls -lrt /default.d;']
          volumeMounts:
            - name: config
              mountPath: /tmp/
            - name: config-storage
              mountPath: /default.d
        {{- if .Values.adminHash }}
        - name: admin-hash-copy
          image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
          imagePullPolicy: {{ .Values.initImage.pullPolicy }}
          command: ['sh','-c','cp /tmp/password.ini /local.d/ ;']
          volumeMounts:
            - name: admin-password
              mountPath: /tmp/password.ini
              subPath: "password.ini"
            - name: local-config-storage
              mountPath: /local.d
        {{- end }}
      containers:
        - name: couchdb
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: couchdb
              containerPort: 5984
            - name: epmd
              containerPort: 4369
            - containerPort: 9100
          env:
            {{- if not .Values.allowAdminParty }}
            - name: COUCHDB_USER
              valueFrom:
                secretKeyRef:
                  name: {{ template "couchdb.fullname" . }}
                  key: adminUsername
            - name: COUCHDB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ template "couchdb.fullname" . }}
                  key: adminPassword
            - name: COUCHDB_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ template "couchdb.fullname" . }}
                  key: cookieAuthSecret
            {{- end }}
            - name: ERL_FLAGS
              value: "{{ range $k, $v := .Values.erlangFlags }} -{{ $k }} {{ $v }} {{ end }}"
          {{- if .Values.livenessProbe.enabled }}
          livenessProbe:
            {{- if .Values.couchdbConfig.chttpd.require_valid_user }}
            exec:
              command:
                - sh
                - -c
                - curl -G --silent --fail -u ${COUCHDB_USER}:${COUCHDB_PASSWORD} http://localhost:5984/_up
            {{- else }}
            httpGet:
              path: /_up
              port: 5984
            {{- end }}
            failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
            initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
            successThreshold: {{ .Values.livenessProbe.successThreshold }}
            timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
          {{- end }}
          {{- if .Values.readinessProbe.enabled }}
          readinessProbe:
            {{- if .Values.couchdbConfig.chttpd.require_valid_user }}
            exec:
              command:
                - sh
                - -c
                - curl -G --silent --fail -u ${COUCHDB_USER}:${COUCHDB_PASSWORD} http://localhost:5984/_up
            {{- else }}
            httpGet:
              path: /_up
              port: 5984
            {{- end }}
            failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
            initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
            successThreshold: {{ .Values.readinessProbe.successThreshold }}
            timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
          {{- end }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
          volumeMounts:
            - name: config-storage
              mountPath: /opt/couchdb/etc/default.d
            {{- if .Values.adminHash }}
            - name: local-config-storage
              mountPath: /opt/couchdb/etc/local.d
            {{- end }}
            - name: database-storage
              mountPath: /opt/couchdb/data
        {{- if .Values.enableSearch }}
        - name: clouseau
          image: "{{ .Values.searchImage.repository }}:{{ .Values.searchImage.tag }}"
          imagePullPolicy: {{ .Values.searchImage.pullPolicy }}
          volumeMounts:
            - name: database-storage
              mountPath: /opt/couchdb-search/data
        {{- end }}
        {{- if .Values.sidecars }}
{{ toYaml .Values.sidecars | indent 8}}
        {{- end }}
      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      volumes:
        - name: config-storage
          emptyDir: {}
        - name: config
          configMap:
            name: {{ template "couchdb.fullname" . }}
            items:
              - key: inifile
                path: chart.ini
              - key: seedlistinifile
                path: seedlist.ini

{{- if .Values.adminHash }}
        - name: local-config-storage
          emptyDir: {}
        - name: admin-password
          secret:
            secretName: {{ template "couchdb.fullname" . }}
{{- end -}}

{{- if not .Values.persistentVolume.enabled }}
        - name: database-storage
          emptyDir: {}
{{- else }}
  volumeClaimTemplates:
    - metadata:
        name: database-storage
        labels:
          app: {{ template "couchdb.name" . }}
          release: {{ .Release.Name }}
      spec:
        accessModes:
        {{- range .Values.persistentVolume.accessModes }}
          - {{ . | quote }}
        {{- end }}
        resources:
          requests:
            storage: {{ .Values.persistentVolume.size | quote }}
        {{- if .Values.persistentVolume.storageClass }}
        {{- if (eq "-" .Values.persistentVolume.storageClass) }}
        storageClassName: ""
        {{- else }}
        storageClassName: "{{ .Values.persistentVolume.storageClass }}"
        {{- end }}
        {{- end }}
{{- end }}
@@ -1,201 +0,0 @@
## clusterSize is the initial size of the CouchDB cluster.
clusterSize: 3

## If allowAdminParty is enabled the cluster will start up without any database
## administrator account; i.e., all users will be granted administrative
## access. Otherwise, the system will look for a Secret called
## <ReleaseName>-couchdb containing `adminUsername`, `adminPassword` and
## `cookieAuthSecret` keys. See the `createAdminSecret` flag.
## ref: https://kubernetes.io/docs/concepts/configuration/secret/
allowAdminParty: false

## If createAdminSecret is enabled a Secret called <ReleaseName>-couchdb will
## be created containing auto-generated credentials. Users who prefer to set
## these values themselves have a couple of options:
##
## 1) The `adminUsername`, `adminPassword`, `adminHash`, and `cookieAuthSecret`
## can be defined directly in the chart's values. Note that all of a chart's
## values are currently stored in plaintext in a ConfigMap in the tiller
## namespace.
##
## 2) This flag can be disabled and a Secret with the required keys can be
## created ahead of time.
createAdminSecret: true

# adminUsername: budibase
# adminPassword: budibase
# adminHash: -pbkdf2-this_is_not_necessarily_secure_either
# cookieAuthSecret: admin

## When enabled, will deploy a networkpolicy that allows CouchDB pods to
## communicate with each other for clustering and ingress on port 5984
networkPolicy:
  enabled: true

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

# Use a service account
serviceAccount:
  enabled: true
  create: true
  # name:
  # imagePullSecrets:
  # - name: myimagepullsecret

## The storage volume used by each Pod in the StatefulSet. If a
## persistentVolume is not enabled, the Pods will use `emptyDir` ephemeral
## local storage. Setting the storageClass attribute to "-" disables dynamic
## provisioning of Persistent Volumes; leaving it unset will invoke the default
## provisioner.
persistentVolume:
  enabled: false
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  storageClass: ""

## The CouchDB image
image:
  repository: couchdb
  tag: 3.1.0
  pullPolicy: IfNotPresent

## Experimental integration with Lucene-powered fulltext search
searchImage:
  repository: kocolosk/couchdb-search
  tag: 0.2.0
  pullPolicy: IfNotPresent

## Flip this flag to include the Search container in each Pod
enableSearch: true

initImage:
  repository: busybox
  tag: latest
  pullPolicy: Always

## CouchDB is happy to spin up cluster nodes in parallel, but if you encounter
## problems you can try setting podManagementPolicy to the StatefulSet default
## `OrderedReady`
podManagementPolicy: Parallel

## To better tolerate Node failures, we can prevent Kubernetes scheduler from
## assigning more than one Pod of CouchDB StatefulSet per Node using podAntiAffinity.
affinity: {}
# podAntiAffinity:
#   requiredDuringSchedulingIgnoredDuringExecution:
#     - labelSelector:
#         matchExpressions:
#           - key: "app"
#             operator: In
#             values:
#             - couchdb
#       topologyKey: "kubernetes.io/hostname"

## Optional pod annotations
annotations: {}

## Optional tolerations
tolerations: []

## A StatefulSet requires a headless Service to establish the stable network
## identities of the Pods, and that Service is created automatically by this
## chart without any additional configuration. The Service block below refers
## to a second Service that governs how clients connect to the CouchDB cluster.
service:
  # annotations:
  enabled: true
  type: ClusterIP
  externalPort: 5984

## An Ingress resource can provide name-based virtual hosting and TLS
## termination among other things for CouchDB deployments which are accessed
## from outside the Kubernetes cluster.
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
  enabled: false
  hosts:
    - chart-example.local
  path: /
  annotations: []
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  tls:
  # Secrets must be manually created in the namespace.
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local

## Optional resource requests and limits for the CouchDB container
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
resources:
  {}
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  # limits:
  #   cpu: 56
  #   memory: 256Gi

## erlangFlags is a map that is passed to the Erlang VM as flags using the
## ERL_FLAGS env. `name` and `setcookie` flags are minimally required to
## establish connectivity between cluster nodes.
## ref: http://erlang.org/doc/man/erl.html#init_flags
erlangFlags:
  name: couchdb
  setcookie: monster

## couchdbConfig will override default CouchDB configuration settings.
## The contents of this map are reformatted into a .ini file laid down
## by a ConfigMap object.
## ref: http://docs.couchdb.org/en/latest/config/index.html
couchdbConfig:
  couchdb:
    uuid: budibase-couchdb # REQUIRED: Unique identifier for this CouchDB server instance
  # cluster:
  #   q: 8 # Create 8 shards for each database
  chttpd:
    bind_address: any
    # chttpd.require_valid_user disables all the anonymous requests to the port
    # 5984 when it is set to true.
    require_valid_user: false

# Kubernetes local cluster domain.
# This is used to generate FQDNs for peers when joining the CouchDB cluster.
dns:
  clusterDomainSuffix: cluster.local

## Configure liveness and readiness probe values
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 0
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
readinessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 0
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1

# Configure arbitrary sidecar containers for CouchDB pods created by the
# StatefulSet
sidecars: {}
# - name: foo
#   image: "busybox"
#   imagePullPolicy: IfNotPresent
#   resources:
#     requests:
#       cpu: "0.1"
#       memory: 10Mi
#   command: ['echo "foo";']
#   volumeMounts:
#     - name: database-storage
#       mountPath: /opt/couchdb/data/
Binary file not shown.
@@ -1,153 +0,0 @@
# Default values for budibase.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
# fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext:
  {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 10000

ingress:
  enabled: false
  aws: false
  nginx: true
  certificateArn: ""
  className: ""
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: # change if using custom domain
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: proxy-service
              port:
                number: 10000

resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

globals:
  budibaseEnv: PRODUCTION
  enableAnalytics: true
  sentryDSN: ""
  posthogToken: ""
  logLevel: info
  selfHosted: "1" # set to 0 for budibase cloud environment, set to 1 for self-hosted setup
  multiTenancy: "0" # set to 0 to disable multiple orgs, set to 1 to enable multiple orgs
  accountPortalUrl: ""
  accountPortalApiKey: ""
  cookieDomain: ""
  platformUrl: ""

  createSecrets: true # creates an internal API key, JWT secrets and redis password for you

  # if createSecrets is set to false, you can hard-code your secrets here
  internalApiKey: ""
  jwtSecret: ""

  smtp:
    enabled: false

services:
  dns: cluster.local

  proxy:
    port: 10000
    replicaCount: 1

  apps:
    port: 4002
    replicaCount: 1
    logLevel: info

  worker:
    port: 4001
    replicaCount: 1

  couchdb:
    enabled: true
    replicaCount: 3
    # url: "" # only change if pointing to existing couch server
    # user: "" # only change if pointing to existing couch server
    # password: "" # only change if pointing to existing couch server
    port: 5984
    storage: 100Mi

  redis:
    enabled: true # disable if using external redis
    port: 6379
    replicaCount: 1
    url: "" # only change if pointing to existing redis cluster and enabled: false
    password: "budibase" # recommended to override if using built-in redis
    storage: 100Mi

  objectStore:
    minio: true
    browser: true
    port: 9000
    replicaCount: 1
    accessKey: "" # AWS_ACCESS_KEY if using S3 or existing minio access key
    secretKey: "" # AWS_SECRET_ACCESS_KEY if using S3 or existing minio secret
    region: "" # AWS_REGION if using S3 or existing minio secret
    url: "" # only change if pointing to existing minio cluster and minio: false
    storage: 100Mi
@@ -50,6 +50,14 @@ static_resources:
             route:
               cluster: app-service
 
+          - match:
+              safe_regex:
+                google_re2: {}
+                regex: "/api/.*/export"
+            route:
+              timeout: 0s
+              cluster: app-service
+
           - match: { path: "/api/deploy" }
             route:
               timeout: 60s
@@ -0,0 +1,3 @@
apiVersion: v1
entries: {}
generated: "2021-12-13T12:46:40.291206+01:00"
@@ -1,5 +1,5 @@
 {
-  "version": "0.9.190-alpha.1",
+  "version": "1.0.30",
   "npmClient": "yarn",
   "packages": [
     "packages/*"
@@ -8,6 +8,7 @@
     "eslint-plugin-cypress": "^2.11.3",
     "eslint-plugin-svelte3": "^3.2.0",
     "husky": "^7.0.1",
+    "js-yaml": "^4.1.0",
     "kill-port": "^1.6.1",
     "lerna": "3.14.1",
     "prettier": "^2.3.1",
@@ -22,8 +23,8 @@
     "build": "lerna run build",
     "publishdev": "lerna run publishdev",
     "publishnpm": "yarn build && lerna publish --force-publish",
-    "release": "yarn build && lerna publish patch --yes --force-publish",
-    "release:develop": "yarn build && lerna publish prerelease --yes --force-publish --dist-tag develop",
+    "release": "lerna publish patch --yes --force-publish",
+    "release:develop": "lerna publish prerelease --yes --force-publish --dist-tag develop",
     "restore": "yarn run clean && yarn run bootstrap && yarn run build",
     "nuke": "yarn run nuke:packages && yarn run nuke:docker",
     "nuke:packages": "yarn run restore",
@@ -47,7 +48,9 @@
     "build:docker:selfhost": "lerna run build:docker && cd hosting/scripts/linux/ && ./release-to-docker-hub.sh latest && cd -",
     "build:docker:develop": "node scripts/pinVersions && lerna run build:docker && cd hosting/scripts/linux/ && ./release-to-docker-hub.sh develop && cd -",
     "build:docker:airgap": "node hosting/scripts/airgapped/airgappedDockerBuild",
-    "release:helm": "./scripts/release_helm_chart.sh",
+    "build:digitalocean": "cd hosting/digitalocean && ./build.sh && cd -",
+    "build:docs": "lerna run build:docs",
+    "release:helm": "node scripts/releaseHelmChart",
     "env:multi:enable": "lerna run env:multi:enable",
     "env:multi:disable": "lerna run env:multi:disable",
     "env:selfhost:enable": "lerna run env:selfhost:enable",
@@ -1,6 +1,6 @@
 {
   "name": "@budibase/auth",
-  "version": "0.9.190-alpha.1",
+  "version": "1.0.30",
   "description": "Authentication middlewares for budibase builder and apps",
   "main": "src/index.js",
   "author": "Budibase",
@@ -34,4 +34,5 @@ exports.Configs = {
   OIDC_LOGOS: "logos_oidc",
 }
 
+exports.MAX_VALID_DATE = new Date(2147483647000)
 exports.DEFAULT_TENANT_ID = "default"
@@ -3,10 +3,15 @@ const Replication = require("./Replication")
 const { DEFAULT_TENANT_ID, Configs } = require("../constants")
 const env = require("../environment")
 const { StaticDatabases, SEPARATOR, DocumentTypes } = require("./constants")
-const { getTenantId, getTenantIDFromAppID } = require("../tenancy")
+const {
+  getTenantId,
+  getTenantIDFromAppID,
+  getGlobalDBName,
+} = require("../tenancy")
 const fetch = require("node-fetch")
 const { getCouch } = require("./index")
 const { getAppMetadata } = require("../cache/appMetadata")
+const { checkSlashesInUrl } = require("../helpers")
 
 const NO_APP_ERROR = "No app provided"
 
@@ -194,6 +199,11 @@ exports.getCouchUrl = () => {
   return `${protocol}://${env.COUCH_DB_USERNAME}:${env.COUCH_DB_PASSWORD}@${rest}`
 }
 
+exports.getStartEndKeyURL = (base, baseKey, tenantId = null) => {
+  const tenancy = tenantId ? `${SEPARATOR}${tenantId}` : ""
+  return `${base}?startkey="${baseKey}${tenancy}"&endkey="${baseKey}${tenancy}${UNICODE_MAX}"`
+}
+
 /**
  * if in production this will use the CouchDB _all_dbs call to retrieve a list of databases. If testing
  * when using Pouch it will use the pouchdb-all-dbs package.
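To make the new helper concrete, a hedged sketch of its output (illustrative values; `SEPARATOR` and `UNICODE_MAX` come from this module, and the usual `_` separator with `DocumentTypes.APP` being the string `app` is assumed):

```js
// getStartEndKeyURL("http://couch:5984/_all_dbs", DocumentTypes.APP, "mytenant")
// => 'http://couch:5984/_all_dbs?startkey="app_mytenant"&endkey="app_mytenant<UNICODE_MAX>"'
// i.e. a CouchDB range query matching every database ID with the tenant-scoped prefix
```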
@@ -203,12 +213,34 @@ exports.getAllDbs = async () => {
   if (env.isTest()) {
     return getCouch().allDbs()
   }
-  const response = await fetch(`${exports.getCouchUrl()}/_all_dbs`)
-  if (response.status === 200) {
-    return response.json()
-  } else {
-    throw "Cannot connect to CouchDB instance"
+  let dbs = []
+  async function addDbs(url) {
+    const response = await fetch(checkSlashesInUrl(encodeURI(url)))
+    if (response.status === 200) {
+      let json = await response.json()
+      dbs = dbs.concat(json)
+    } else {
+      throw "Cannot connect to CouchDB instance"
+    }
   }
+  let couchUrl = `${exports.getCouchUrl()}/_all_dbs`
+  if (env.MULTI_TENANCY) {
+    let tenantId = getTenantId()
+    // get prod apps
+    await addDbs(
+      exports.getStartEndKeyURL(couchUrl, DocumentTypes.APP, tenantId)
+    )
+    // get dev apps
+    await addDbs(
+      exports.getStartEndKeyURL(couchUrl, DocumentTypes.APP_DEV, tenantId)
+    )
+    // add global db name
+    dbs.push(getGlobalDBName(tenantId))
+  } else {
+    // just get all DBs in self host
+    await addDbs(couchUrl)
+  }
+  return dbs
 }
 
 /**
@@ -389,7 +421,7 @@ const getScopedFullConfig = async function (db, { type, user, workspace }) {
 }
 
 const getPlatformUrl = async settings => {
-  let platformUrl = env.PLATFORM_URL
+  let platformUrl = env.PLATFORM_URL || "http://localhost:10000"
 
   if (!env.SELF_HOSTED && env.MULTI_TENANCY) {
     // cloud and multi tenant - add the tenant to the default platform url
@@ -404,7 +436,7 @@ const getPlatformUrl = async settings => {
     }
   }
 
-  return platformUrl ? platformUrl : "http://localhost:10000"
+  return platformUrl
 }
 
 async function getScopedConfig(db, params) {
@@ -0,0 +1,9 @@
/**
 * Makes sure that a URL has the correct number of slashes, while maintaining the
 * http(s):// double slashes.
 * @param {string} url The URL to test and remove any extra double slashes.
 * @return {string} The updated url.
 */
exports.checkSlashesInUrl = url => {
  return url.replace(/(https?:\/\/)|(\/)+/g, "$1$2")
}
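A brief sketch of the behaviour this new helper gives, following its JSDoc (example values are illustrative):

```js
// Collapses runs of slashes in the path while preserving the protocol's "//"
checkSlashesInUrl("http://host:5984//db//_all_docs")
// => "http://host:5984/db/_all_docs"
```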
@@ -1,6 +1,7 @@
 const redis = require("../redis/authRedis")
 
-const EXPIRY_SECONDS = 86400
+// a week in seconds
+const EXPIRY_SECONDS = 86400 * 7
 
 async function getSessionsForUser(userId) {
   const client = await redis.getSessionClient()
@@ -7,7 +7,7 @@ const {
 const jwt = require("jsonwebtoken")
 const { options } = require("./middleware/passport/jwt")
 const { createUserEmailView } = require("./db/views")
-const { Headers, UserStatus, Cookies } = require("./constants")
+const { Headers, UserStatus, Cookies, MAX_VALID_DATE } = require("./constants")
 const {
   getGlobalDB,
   updateTenantId,
@@ -83,14 +83,15 @@ exports.getCookie = (ctx, name) => {
  * @param {object} ctx The request which is to be manipulated.
  * @param {string} name The name of the cookie to set.
  * @param {string|object} value The value of cookie which will be set.
+ * @param {object} opts options like whether to sign.
  */
-exports.setCookie = (ctx, value, name = "builder") => {
-  if (value) {
+exports.setCookie = (ctx, value, name = "builder", opts = { sign: true }) => {
+  if (value && opts && opts.sign) {
     value = jwt.sign(value, options.secretOrKey)
   }
 
   const config = {
-    maxAge: Number.MAX_SAFE_INTEGER,
+    expires: MAX_VALID_DATE,
     path: "/",
     httpOnly: false,
     overwrite: true,
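A usage sketch of the extended signature (hypothetical values, not from this commit) — the new `opts` parameter lets callers skip JWT signing for a value that is already a signed token:

```js
// default behaviour: value is JWT-signed before the cookie is set
setCookie(ctx, sessionPayload)
// opt out of signing when the value is already a signed token
setCookie(ctx, existingJwt, "budibase", { sign: false })
```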
@@ -1,7 +1,7 @@
 {
   "name": "@budibase/bbui",
   "description": "A UI solution used in the different Budibase projects.",
-  "version": "0.9.190-alpha.1",
+  "version": "1.0.30",
   "license": "MPL-2.0",
   "svelte": "src/index.js",
   "module": "dist/bbui.es.js",
@@ -63,9 +63,9 @@
   const getFilteredOptions = (options, term, getLabel) => {
     if (autocomplete && term) {
       const lowerCaseTerm = term.toLowerCase()
-      return options.filter(option =>
-        getLabel(option)?.toLowerCase().includes(lowerCaseTerm)
-      )
+      return options.filter(option => {
+        return `${getLabel(option)}`.toLowerCase().includes(lowerCaseTerm)
+      })
     }
     return options
   }
@@ -12,7 +12,7 @@ context("Create a automation", () => {
     cy.get("[data-cy='new-screen'] > .spectrum-Icon").click()
     cy.get(".modal-inner-wrapper").within(() => {
       cy.get("input").type("Add Row")
-      cy.contains("Row Created").click()
+      cy.contains("Row Created").click({ force: true })
       cy.wait(500)
       cy.get(".spectrum-Button--cta").click()
     })
@@ -182,7 +182,9 @@ Cypress.Commands.add("navigateToFrontend", () => {
   cy.wait(1000)
   cy.contains("Design").click()
   cy.get(".spectrum-Search").type("/")
-  cy.get(".nav-item").contains("Home").click()
+  cy.createScreen("home", "home")
+  cy.addComponent("Elements", "Headline")
+  cy.get(".nav-item").contains("home").click()
 })
 
 Cypress.Commands.add("createScreen", (screenName, route) => {
@@ -1,6 +1,6 @@
 {
   "name": "@budibase/builder",
-  "version": "0.9.190-alpha.1",
+  "version": "1.0.30",
   "license": "GPL-3.0",
   "private": true,
   "scripts": {
@@ -14,7 +14,7 @@
     "cy:setup": "node ./cypress/setup.js",
     "cy:run": "cypress run",
     "cy:open": "cypress open",
-    "cy:run:ci": "cypress run --record --key f308590b-6070-41af-b970-794a3823d451",
+    "cy:run:ci": "cypress run --record",
     "cy:test": "start-server-and-test cy:setup http://localhost:10001/builder cy:run",
     "cy:ci": "start-server-and-test cy:setup http://localhost:10001/builder cy:run",
     "cy:debug": "start-server-and-test cy:setup http://localhost:10001/builder cy:open"
@@ -65,10 +65,10 @@
     }
   },
   "dependencies": {
-    "@budibase/bbui": "^0.9.190-alpha.1",
-    "@budibase/client": "^0.9.190-alpha.1",
+    "@budibase/bbui": "^1.0.30",
+    "@budibase/client": "^1.0.30",
     "@budibase/colorpicker": "1.1.2",
-    "@budibase/string-templates": "^0.9.190-alpha.1",
+    "@budibase/string-templates": "^1.0.30",
     "@sentry/browser": "5.19.1",
     "@spectrum-css/page": "^3.0.1",
     "@spectrum-css/vars": "^3.0.1",
@@ -61,7 +61,7 @@ export const getComponentBindableProperties = (asset, componentId) => {
 /**
  * Gets all data provider components above a component.
  */
-export const getDataProviderComponents = (asset, componentId) => {
+export const getContextProviderComponents = (asset, componentId, type) => {
   if (!asset || !componentId) {
     return []
   }
@@ -74,7 +74,18 @@ export const getDataProviderComponents = (asset, componentId) => {
   // Filter by only data provider components
   return path.filter(component => {
     const def = store.actions.components.getDefinition(component._component)
-    return def?.context != null
+    if (!def?.context) {
+      return false
+    }
+
+    // If no type specified, return anything that exposes context
+    if (!type) {
+      return true
+    }
+
+    // Otherwise only match components with the specific context type
+    const contexts = Array.isArray(def.context) ? def.context : [def.context]
+    return contexts.find(context => context.type === type) != null
   })
 }
 
@@ -143,7 +154,7 @@ export const getDatasourceForProvider = (asset, component) => {
  */
 const getContextBindings = (asset, componentId) => {
   // Extract any components which provide data contexts
-  const dataProviders = getDataProviderComponents(asset, componentId)
+  const dataProviders = getContextProviderComponents(asset, componentId)
 
   // Generate bindings for all matching components
   return getProviderContextBindings(asset, dataProviders)
@@ -82,7 +82,7 @@ export const getFrontendStore = () => {
         libraries: application.componentLibraries,
         components,
         clientFeatures: {
-          ...state.clientFeatures,
+          ...INITIAL_FRONTEND_STATE.clientFeatures,
           ...components.features,
         },
         name: application.name,
@@ -524,7 +524,7 @@
           }
         }
       },
-      paste: async (targetComponent, mode) => {
+      paste: async (targetComponent, mode, preserveBindings = false) => {
         let promises = []
         store.update(state => {
          // Stop if we have nothing to paste
@@ -536,7 +536,7 @@ export const getFrontendStore = () => {
         const cut = state.componentToPaste.isCut

         // immediately need to remove bindings, currently these aren't valid when pasted
-        if (!cut) {
+        if (!cut && !preserveBindings) {
          state.componentToPaste = removeBindings(state.componentToPaste)
        }

@@ -3,7 +3,14 @@
   import { database } from "stores/backend"
   import { automationStore } from "builderStore"
   import { notifications } from "@budibase/bbui"
-  import { Input, ModalContent, Layout, Body, Icon } from "@budibase/bbui"
+  import {
+    Input,
+    InlineAlert,
+    ModalContent,
+    Layout,
+    Body,
+    Icon,
+  } from "@budibase/bbui"
   import analytics, { Events } from "analytics"

   let name
@@ -56,6 +63,10 @@
   onConfirm={createAutomation}
   disabled={!selectedTrigger || !name}
 >
+  <InlineAlert
+    header="You must publish your app to activate your automations."
+    message="To test your automation before publishing, you can use the 'Run Test' functionality on the next screen."
+  />
   <Body size="XS"
     >Please name your automation, then select a trigger. Every automation must
     start with a trigger.
@@ -96,13 +96,16 @@
         allSteps[idx].schema?.outputs?.properties ?? {}
       )
       bindings = bindings.concat(
-        outputs.map(([name, value]) => ({
-          label: name,
-          type: value.type,
-          description: value.description,
-          category: idx === 0 ? "Trigger outputs" : `Step ${idx} outputs`,
-          path: idx === 0 ? `trigger.${name}` : `steps.${idx}.${name}`,
-        }))
+        outputs.map(([name, value]) => {
+          const runtime = idx === 0 ? `trigger.${name}` : `steps.${idx}.${name}`
+          return {
+            label: runtime,
+            type: value.type,
+            description: value.description,
+            category: idx === 0 ? "Trigger outputs" : `Step ${idx} outputs`,
+            path: runtime,
+          }
+        })
       )
     }
     return bindings
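To make the relabelling concrete, a small worked example with a hypothetical trigger output named `row` — the binding's label now shows the runtime path rather than the bare output name:

```js
// Hypothetical output: name = "row" on the trigger step (idx === 0).
const idx = 0
const name = "row"
const runtime = idx === 0 ? `trigger.${name}` : `steps.${idx}.${name}`
// Before this change: { label: "row",         path: "trigger.row" }
// After this change:  { label: "trigger.row", path: "trigger.row" }
```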
@@ -261,7 +264,6 @@
       value={inputData[key]}
       on:change={e => onChange(e, key)}
       {bindings}
-      allowJS={false}
     />
   </div>
 {/if}
@@ -102,6 +102,9 @@
     if (field.type === AUTO_TYPE) {
       field = buildAutoColumn($tables.draft.name, field.name, field.subtype)
     }
+    if (field.type !== LINK_TYPE) {
+      delete field.fieldName
+    }
     try {
       await tables.saveField({
         originalName,
@@ -19,7 +19,10 @@
   let fields = []
   let hasValidated = false

-  $: valid = !schema || fields.every(column => schema[column].success)
+  $: valid =
+    !schema ||
+    (fields.every(column => schema[column].success) &&
+      (!hasValidated || Object.keys(schema).length > 0))
   $: dataImport = {
     valid,
     schema: buildTableSchema(schema),
@@ -53,7 +53,7 @@

   const duplicateComponent = () => {
     storeComponentForCopy(false)
-    pasteComponent("below")
+    pasteComponent("below", true)
   }

   const deleteComponent = async () => {
@@ -69,9 +69,9 @@
     store.actions.components.copy(component, cut)
   }

-  const pasteComponent = mode => {
+  const pasteComponent = (mode, preserveBindings = false) => {
     // lives in store - also used by drag drop
-    store.actions.components.paste(component, mode)
+    store.actions.components.paste(component, mode, preserveBindings)
   }
 </script>

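Combined with the `paste` change to the frontend store earlier in this diff, duplication now keeps a component's bindings while an ordinary paste still strips them. Roughly:

```js
// Sketch of the two call paths, using the functions from the hunks above.
duplicateComponent() // calls pasteComponent("below", true): bindings survive

pasteComponent("inside") // preserveBindings defaults to false, so bindings
// are removed, as they may not be valid in the new location
```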
@@ -25,8 +25,7 @@
       key: "layout",
     },
   ]
-
-  let modal
+  let newLayoutModal
   $: selected = tabs.find(t => t.key === $params.assetType)?.title || "Screens"

   const navigate = ({ detail }) => {
@@ -93,14 +92,18 @@
         {#each $store.layouts as layout, idx (layout._id)}
           <Layout {layout} border={idx > 0} />
         {/each}
-        <Modal bind:this={modal}>
+        <Modal bind:this={newLayoutModal}>
           <NewLayoutModal />
         </Modal>
       </div>
     </Tab>
   </Tabs>
   <div class="add-button">
-    <Icon hoverable name="AddCircle" on:click={showModal()} />
+    <Icon
+      hoverable
+      name="AddCircle"
+      on:click={selected === "Layouts" ? newLayoutModal.show() : showModal()}
+    />
   </div>
 </div>

@@ -10,16 +10,39 @@
   ProgressCircle,
 } from "@budibase/bbui"
 import getTemplates from "builderStore/store/screenTemplates"
-import { onDestroy } from "svelte"
+import { createEventDispatcher } from "svelte"

-export let selectedScreens = []
 export let chooseModal
 export let save
 export let showProgressCircle = false

+let selectedScreens = []
+
 const blankScreen = "createFromScratch"
+const dispatch = createEventDispatcher()
+
+function setScreens() {
+  dispatch("save", {
+    screens: selectedScreens,
+  })
+}
+
 $: blankSelected = selectedScreens?.length === 1
 $: autoSelected = selectedScreens?.length > 0 && !blankSelected

 let templates = getTemplates($store, $tables.list)

+const confirm = async () => {
+  if (autoSelected) {
+    setScreens()
+    await save()
+  } else {
+    setScreens()
+    chooseModal(1)
+  }
+}
 const toggleScreenSelection = table => {
   if (selectedScreens.find(s => s.table === table.name)) {
     selectedScreens = selectedScreens.filter(
@@ -32,14 +55,18 @@
       selectedScreens = [...partialTemplates, ...selectedScreens]
     }
   }

-  onDestroy(() => {
-    selectedScreens = []
-  })
 </script>

-<div style="overflow-y: auto; max-height: 1000px">
+<div>
   <ModalContent
     title="Add screens"
     confirmText="Add Screens"
     cancelText="Cancel"
-    onConfirm={() => (autoSelected ? save() : chooseModal(1))}
+    onConfirm={() => confirm()}
     disabled={!selectedScreens.length}
     size="L"
   >
@@ -70,29 +97,31 @@
 {/if}
 </div>
 </div>
 <Detail size="S">Autogenerated Screens</Detail>
 {#if $tables.list.filter(table => table._id !== "ta_users").length > 0}
 <Detail size="S">Autogenerated Screens</Detail>

 {#each $tables.list.filter(table => table._id !== "ta_users") as table}
 <div
 class:disabled={blankSelected}
 class:selected={selectedScreens.find(x => x.table === table.name)}
 on:click={() => toggleScreenSelection(table)}
 class="item"
 >
 <div class="content">
 <div class="text">{table.name}</div>
 </div>
 {#each $tables.list.filter(table => table._id !== "ta_users") as table}
 <div
 style="color: var(--spectrum-global-color-green-600); float: right"
 class:disabled={blankSelected}
 class:selected={selectedScreens.find(x => x.table === table.name)}
 on:click={() => toggleScreenSelection(table)}
 class="item"
 >
 {#if selectedScreens.find(x => x.table === table.name)}
 <div class="checkmark-spacing">
 <Icon size="S" name="CheckmarkCircleOutline" />
 </div>
 {/if}
 <div class="content">
 <div class="text">{table.name}</div>
 </div>
 <div
 style="color: var(--spectrum-global-color-green-600); float: right"
 >
 {#if selectedScreens.find(x => x.table === table.name)}
 <div class="checkmark-spacing">
 <Icon size="S" name="CheckmarkCircleOutline" />
 </div>
 {/if}
 </div>
 </div>
 </div>
 {/each}
 {/each}
 {/if}
 </Layout>
 <div slot="footer">
 {#if showProgressCircle}
@@ -132,7 +161,7 @@
     padding: var(--spectrum-alias-item-padding-s);
     background: var(--spectrum-alias-background-color-primary);
     transition: 0.3s all;
-    border: 1px solid #e7e7e7;
+    border: 1px solid var(--spectrum-global-color-gray-300);
     border-radius: 4px;
     box-sizing: border-box;
     border-width: 1px;
@@ -2,6 +2,7 @@
   import { ModalContent, Input, ProgressCircle } from "@budibase/bbui"
   import sanitizeUrl from "builderStore/store/screenTemplates/utils/sanitizeUrl"
   import { selectedAccessRole, allScreens } from "builderStore"
+  import { onDestroy } from "svelte"

   export let screenName
   export let url
@@ -32,6 +33,11 @@
       screen.routing.roleId === roleId
     )
   }

+  onDestroy(() => {
+    screenName = ""
+    url = ""
+  })
 </script>

 <ModalContent
@@ -4,7 +4,6 @@
   import sanitizeUrl from "builderStore/store/screenTemplates/utils/sanitizeUrl"
   import { Modal } from "@budibase/bbui"
   import { store, selectedAccessRole, allScreens } from "builderStore"
-  import { onDestroy } from "svelte"
   import analytics, { Events } from "analytics"

   let newScreenModal
@@ -34,7 +33,6 @@
     for (let screen of createdScreens) {
       await saveScreens(screen)
     }
-
     await store.actions.routing.fetch()
     selectedScreens = []
     createdScreens = []
@@ -42,6 +40,7 @@
     url = ""
+    showProgressCircle = false
   }

   const saveScreens = async draftScreen => {
     let existingScreenCount = $store.screens.filter(
       s => s.props._instanceName == draftScreen.props._instanceName
@@ -90,17 +89,14 @@
     )
   }

-  onDestroy(() => {
-    selectedScreens = []
-    screenName = ""
-    url = ""
-    createdScreens = []
-  })
-
   export const showModal = () => {
     newScreenModal.show()
   }

+  const setScreens = evt => {
+    selectedScreens = evt.detail.screens
+  }
+
   const chooseModal = index => {
     /*
       0 = newScreenModal
@@ -119,7 +115,7 @@

 <Modal bind:this={newScreenModal}>
   <NewScreenModal
-    bind:selectedScreens
+    on:save={setScreens}
     {showProgressCircle}
     {save}
     {chooseModal}

@@ -1,5 +1,5 @@
 <script>
-  import { getDataProviderComponents } from "builderStore/dataBinding"
+  import { getContextProviderComponents } from "builderStore/dataBinding"
   import {
     Button,
     Popover,
@@ -17,7 +17,6 @@
     queries as queriesStore,
   } from "stores/backend"
-  import { datasources, integrations } from "stores/backend"
   import { notifications } from "@budibase/bbui"
   import ParameterBuilder from "components/integration/QueryParameterBuilder.svelte"
   import IntegrationQueryEditor from "components/integration/index.svelte"
   import { makePropSafe as safe } from "@budibase/string-templates"
@@ -31,6 +30,7 @@
   const arrayTypes = ["attachment", "array"]
   let anchorRight, dropdownRight
   let drawer
+  let tmpQueryParams

   $: text = value?.label ?? "Choose an option"
   $: tables = $tablesStore.list.map(m => ({
@@ -58,16 +58,19 @@
     ...query,
     type: "query",
   }))
-  $: dataProviders = getDataProviderComponents(
+  $: contextProviders = getContextProviderComponents(
     $currentAsset,
     $store.selectedComponentId
-  ).map(provider => ({
-    label: provider._instanceName,
-    name: provider._instanceName,
-    providerId: provider._id,
-    value: `{{ literal ${safe(provider._id)} }}`,
-    type: "provider",
-  }))
+  )
+  $: dataProviders = contextProviders
+    .filter(component => component._component?.endsWith("/dataprovider"))
+    .map(provider => ({
+      label: provider._instanceName,
+      name: provider._instanceName,
+      providerId: provider._id,
+      value: `{{ literal ${safe(provider._id)} }}`,
+      type: "provider",
+    }))
   $: links = bindings
     .filter(x => x.fieldSchema?.type === "link")
     .map(binding => {
@@ -102,12 +105,12 @@
     }
   })

-  function handleSelected(selected) {
+  const handleSelected = selected => {
     dispatch("change", selected)
     dropdownRight.hide()
   }

-  function fetchQueryDefinition(query) {
+  const fetchQueryDefinition = query => {
     const source = $datasources.list.find(
       ds => ds._id === query.datasourceId
     ).source
@@ -121,6 +124,19 @@
   const getQueryDatasource = query => {
     return $datasources.list.find(ds => ds._id === query?.datasourceId)
   }

+  const openQueryParamsDrawer = () => {
+    tmpQueryParams = value.queryParams
+    drawer.show()
+  }
+
+  const saveQueryParams = () => {
+    handleSelected({
+      ...value,
+      queryParams: tmpQueryParams,
+    })
+    drawer.hide()
+  }
 </script>

 <div class="container" bind:this={anchorRight}>
@@ -131,24 +147,14 @@
     on:click={dropdownRight.show}
   />
   {#if value?.type === "query"}
-    <i class="ri-settings-5-line" on:click={drawer.show} />
+    <i class="ri-settings-5-line" on:click={openQueryParamsDrawer} />
     <Drawer title={"Query Parameters"} bind:this={drawer}>
-      <Button
-        slot="buttons"
-        cta
-        on:click={() => {
-          notifications.success("Query parameters saved.")
-          handleSelected(value)
-          drawer.hide()
-        }}
-      >
-        Save
-      </Button>
+      <Button slot="buttons" cta on:click={saveQueryParams}>Save</Button>
      <DrawerContent slot="body">
        <Layout noPadding>
-          {#if getQueryParams(value._id).length > 0}
+          {#if getQueryParams(value).length > 0}
            <ParameterBuilder
-              bind:customParams={value.queryParams}
+              bind:customParams={tmpQueryParams}
              parameters={getQueryParams(value)}
              {bindings}
            />
@@ -4,6 +4,7 @@
   import { notifications } from "@budibase/bbui"
   import EventEditor from "./EventEditor.svelte"
   import { automationStore } from "builderStore"
+  import { cloneDeep } from "lodash/fp"

   const dispatch = createEventDispatcher()

@@ -12,17 +13,23 @@
   export let bindings

   let drawer
+  let tmpValue
+
+  const openDrawer = () => {
+    tmpValue = cloneDeep(value)
+    drawer.show()
+  }

   const saveEventData = async () => {
     // any automations that need created from event triggers
-    const automationsToCreate = value.filter(
+    const automationsToCreate = tmpValue.filter(
       action => action["##eventHandlerType"] === "Trigger Automation"
     )
     for (let action of automationsToCreate) {
       await createAutomation(action.parameters)
     }

-    dispatch("change", value)
+    dispatch("change", tmpValue)
     notifications.success("Component actions saved.")
     drawer.hide()
   }
@@ -54,11 +61,16 @@
   }
 </script>

-<ActionButton on:click={drawer.show}>Define actions</ActionButton>
+<ActionButton on:click={openDrawer}>Define actions</ActionButton>
 <Drawer bind:this={drawer} title={"Actions"}>
   <svelte:fragment slot="description">
     Define what actions to run.
   </svelte:fragment>
   <Button cta slot="buttons" on:click={saveEventData}>Save</Button>
-  <EventEditor slot="body" bind:actions={value} eventType={name} {bindings} />
+  <EventEditor
+    slot="body"
+    bind:actions={tmpValue}
+    eventType={name}
+    {bindings}
+  />
 </Drawer>

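This drawer and the query-parameters drawer earlier in the diff now share the same draft-state pattern: clone the committed value when the drawer opens, edit the clone, and only dispatch it on save. A generic sketch, with all names illustrative rather than taken from any one component:

```js
import { createEventDispatcher } from "svelte"
import { cloneDeep } from "lodash/fp"

const dispatch = createEventDispatcher()

let value = [] // committed value, owned by the parent
let tmpValue // draft edited while the drawer is open
let drawer // bound drawer instance (illustrative)

const openDrawer = () => {
  tmpValue = cloneDeep(value) // closing without saving discards edits
  drawer.show()
}

const save = () => {
  dispatch("change", tmpValue) // commit the draft only on explicit save
  drawer.hide()
}
```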
@@ -3,7 +3,7 @@
   import { store, currentAsset } from "builderStore"
   import { tables } from "stores/backend"
   import {
-    getDataProviderComponents,
+    getContextProviderComponents,
     getSchemaForDatasource,
   } from "builderStore/dataBinding"
   import SaveFields from "./SaveFields.svelte"
@@ -11,13 +11,54 @@
   export let parameters
   export let bindings = []

-  $: dataProviderComponents = getDataProviderComponents(
+  $: formComponents = getContextProviderComponents(
     $currentAsset,
-    $store.selectedComponentId
+    $store.selectedComponentId,
+    "form"
   )
+  $: schemaComponents = getContextProviderComponents(
+    $currentAsset,
+    $store.selectedComponentId,
+    "schema"
+  )
+  $: providerOptions = getProviderOptions(formComponents, schemaComponents)
   $: schemaFields = getSchemaFields($currentAsset, parameters?.tableId)
   $: tableOptions = $tables.list || []

+  // Gets a context definition of a certain type from a component definition
+  const extractComponentContext = (component, contextType) => {
+    const def = store.actions.components.getDefinition(component?._component)
+    if (!def) {
+      return null
+    }
+    const contexts = Array.isArray(def.context) ? def.context : [def.context]
+    return contexts.find(context => context?.type === contextType)
+  }
+
+  // Gets options for valid context keys which provide valid data to submit
+  const getProviderOptions = (formComponents, schemaComponents) => {
+    const formContexts = formComponents.map(component => ({
+      component,
+      context: extractComponentContext(component, "form"),
+    }))
+    const schemaContexts = schemaComponents.map(component => ({
+      component,
+      context: extractComponentContext(component, "schema"),
+    }))
+    const allContexts = formContexts.concat(schemaContexts)
+
+    return allContexts.map(({ component, context }) => {
+      let runtimeBinding = component._id
+      if (context.suffix) {
+        runtimeBinding += `-${context.suffix}`
+      }
+      return {
+        label: component._instanceName,
+        value: runtimeBinding,
+      }
+    })
+  }
+
   const getSchemaFields = (asset, tableId) => {
     const { schema } = getSchemaForDatasource(asset, { type: "table", tableId })
     return Object.values(schema || {})
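To make the suffix handling concrete: a hypothetical form component whose context definition carries `suffix: "form"` would produce an option like the following (sketch, values invented):

```js
// Hypothetical inputs, following the getProviderOptions logic above.
const component = { _id: "abc123", _instanceName: "Row Form" }
const context = { type: "form", suffix: "form" }

let runtimeBinding = component._id
if (context.suffix) {
  runtimeBinding += `-${context.suffix}`
}
// Resulting option: { label: "Row Form", value: "abc123-form" }
```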
@@ -39,10 +80,8 @@
   <Label small>Data Source</Label>
   <Select
     bind:value={parameters.providerId}
-    options={dataProviderComponents}
+    options={providerOptions}
     placeholder="None"
-    getOptionLabel={option => option._instanceName}
-    getOptionValue={option => option._id}
   />

   <Label small>Table</Label>
@@ -2,13 +2,15 @@
   import { Button, ActionButton, Drawer } from "@budibase/bbui"
   import { createEventDispatcher } from "svelte"
   import NavigationDrawer from "./NavigationDrawer.svelte"
+  import { cloneDeep } from "lodash/fp"

   export let value = []
   let drawer
+  let links = cloneDeep(value || [])

   const dispatch = createEventDispatcher()
   const save = () => {
-    dispatch("change", value)
+    dispatch("change", links)
     drawer.hide()
   }
 </script>
@@ -19,5 +21,5 @@
     Configure the links in your navigation bar.
   </svelte:fragment>
   <Button cta slot="buttons" on:click={save}>Save</Button>
-  <NavigationDrawer slot="body" bind:links={value} />
+  <NavigationDrawer slot="body" bind:links />
 </Drawer>

@@ -32,31 +32,22 @@
   import { onMount } from "svelte"

   export let query
-  export let fields = []

+  let fields = query.schema ? schemaToFields(query.schema) : []
   let parameters
   let data = []
   let roleId
   const transformerDocs =
     "https://docs.budibase.com/building-apps/data/transformers"
   const typeOptions = [
-    { label: "Text", value: "STRING" },
-    { label: "Number", value: "NUMBER" },
-    { label: "Boolean", value: "BOOLEAN" },
-    { label: "Datetime", value: "DATETIME" },
+    { label: "Text", value: "string" },
+    { label: "Number", value: "number" },
+    { label: "Boolean", value: "boolean" },
+    { label: "Datetime", value: "datetime" },
   ]

   $: datasource = $datasources.list.find(ds => ds._id === query.datasourceId)
-  $: query.schema = fields.reduce(
-    (acc, next) => ({
-      ...acc,
-      [next.name]: {
-        name: next.name,
-        type: "string",
-      },
-    }),
-    {}
-  )
+  $: query.schema = fieldsToSchema(fields)
   $: datasourceType = datasource?.source
   $: integrationInfo = $integrations[datasourceType]
   $: queryConfig = integrationInfo?.query
@@ -135,7 +126,7 @@
       // unique fields returned by the server
       fields = json.schemaFields.map(field => ({
         name: field,
-        type: "STRING",
+        type: "string",
       }))
     } catch (err) {
       notifications.error(`Query Error: ${err.message}`)
@@ -155,6 +146,26 @@
     }
   }

+  function schemaToFields(schema) {
+    return Object.keys(schema).map(key => ({
+      name: key,
+      type: query.schema[key].type,
+    }))
+  }
+
+  function fieldsToSchema(fieldsToConvert) {
+    return fieldsToConvert.reduce(
+      (acc, next) => ({
+        ...acc,
+        [next.name]: {
+          name: next.name,
+          type: next.type,
+        },
+      }),
+      {}
+    )
+  }
+
   onMount(async () => {
     if (!query || !query._id) {
       roleId = Roles.BASIC
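The two helpers added above round-trip a query schema. A quick worked example with a hypothetical one-column schema — note that `schemaToFields` reads its types from `query.schema`, so it assumes it is handed the query's own schema:

```js
// fields -> schema
fieldsToSchema([{ name: "age", type: "number" }])
// => { age: { name: "age", type: "number" } }

// schema -> fields (assuming query.schema is this same object)
schemaToFields({ age: { name: "age", type: "number" } })
// => [{ name: "age", type: "number" }]
```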
@@ -83,12 +83,11 @@
   }

   async function createNewApp() {
-    const letTemplateToUse =
-      Object.keys(template).length === 0 ? null : template
+    const templateToUse = Object.keys(template).length === 0 ? null : template
     submitting = true

     // Check a template exists if we are important
-    if (letTemplateToUse?.fromFile && !$values.file) {
+    if (templateToUse?.fromFile && !$values.file) {
       $errors.file = "Please choose a file to import"
       valid = false
       submitting = false
@@ -99,10 +98,10 @@
     // Create form data to create app
     let data = new FormData()
     data.append("name", $values.name.trim())
-    data.append("useTemplate", letTemplateToUse != null)
-    if (letTemplateToUse) {
-      data.append("templateName", letTemplateToUse.name)
-      data.append("templateKey", letTemplateToUse.key)
+    data.append("useTemplate", templateToUse != null)
+    if (templateToUse) {
+      data.append("templateName", templateToUse.name)
+      data.append("templateKey", templateToUse.key)
       data.append("templateFile", $values.file)
     }

@@ -116,7 +115,7 @@
     analytics.captureEvent(Events.APP.CREATED, {
       name: $values.name,
       appId: appJson.instance._id,
-      letTemplateToUse,
+      templateToUse,
     })

     // Select Correct Application/DB in prep for creating user
Some files were not shown because too many files have changed in this diff.