Merge branch 'v4' into develop
# Conflicts:
#	.github/workflows/ci.yml
#	composer.json
#	composer.lock
#	ecs.php
#	phpstan.neon
#	src/Plugin.php
#	src/base/Element.php
#	src/base/Field.php
#	src/elements/Entry.php
#	src/fields/Assets.php
#	src/fields/Checkboxes.php
#	src/fields/Linkit.php
#	src/fields/TypedLink.php
#	src/helpers/DataHelper.php
#	src/models/FeedModel.php
#	src/services/Process.php
angrybrad committed Mar 16, 2023
2 parents 9c272ab + 289ef9f commit 30a1f8a
Showing 28 changed files with 763 additions and 101 deletions.
5 changes: 5 additions & 0 deletions .ddev/commands/.gitattributes
@@ -0,0 +1,5 @@
# #ddev-generated
# Everything in the commands directory needs LF line-endings
# Not CRLF as from Windows.
# bash especially just can't cope if it finds CRLF in a script.
* -text eol=lf
5 changes: 5 additions & 0 deletions .ddev/commands/db/README.txt
@@ -0,0 +1,5 @@
#ddev-generated
Scripts in this directory will be executed inside the db
container. A number of environment variables are supplied to the scripts.

See https://ddev.readthedocs.io/en/stable/users/extend/custom-commands/#environment-variables-provided for a list of environment variables.
6 changes: 6 additions & 0 deletions .ddev/commands/host/README.txt
@@ -0,0 +1,6 @@
#ddev-generated
Scripts in this directory will be executed on the host
but they can easily take action on containers by using
`ddev exec`.

See https://ddev.readthedocs.io/en/stable/users/extend/custom-commands/#environment-variables-provided for a list of environment variables that can be used in the scripts.
11 changes: 11 additions & 0 deletions .ddev/commands/host/solrtail.example
@@ -0,0 +1,11 @@
#!/bin/bash

## #ddev-generated
## Description: Tail the main solr log
## Usage: solrtail
## Example: ddev solrtail

# This can't work unless you have a solr service,
# See https://ddev.readthedocs.io/en/stable/users/extend/additional-services/

ddev exec -s solr tail -40lf /opt/solr/server/logs/solr.log
15 changes: 15 additions & 0 deletions .ddev/commands/solr/README.txt
@@ -0,0 +1,15 @@
#ddev-generated
Scripts in this directory will be executed inside the solr
container (if it exists, of course). This is just an example,
but any named service can have a directory with commands.

Note that /mnt/ddev_config must be mounted into the 3rd-party service
with a stanza like this in the docker-compose.solr.yaml:

volumes:
  - type: "bind"
    source: "."
    target: "/mnt/ddev_config"


See https://ddev.readthedocs.io/en/stable/users/extend/custom-commands/#environment-variables-provided for a list of environment variables that can be used in the scripts.
13 changes: 13 additions & 0 deletions .ddev/commands/solr/solrtail.example
@@ -0,0 +1,13 @@
#!/bin/bash

## #ddev-generated
## Description: Tail the main solr log
## Usage: solrtail
## Example: ddev solrtail

# This example runs inside the solr container.
# Note that this requires that /mnt/ddev_config be mounted
# into the solr container and of course that you have a container
# named solr.

tail -f /opt/solr/server/logs/solr.log
4 changes: 4 additions & 0 deletions .ddev/commands/web/README.txt
@@ -0,0 +1,4 @@
#ddev-generated
Scripts in this directory will be executed inside the web container.

See https://ddev.readthedocs.io/en/stable/users/extend/custom-commands/#environment-variables-provided for a list of environment variables that can be used in the scripts.
7 changes: 7 additions & 0 deletions .ddev/homeadditions/README.txt
@@ -0,0 +1,7 @@
#ddev-generated
Files in .ddev/homeadditions will be copied into the web container's home directory.

An example bash_aliases.example is provided here. To make this file active you can either

cp bash_aliases.example .bash_aliases
or ln -s bash_aliases.example .bash_aliases
6 changes: 6 additions & 0 deletions .ddev/homeadditions/bash_aliases.example
@@ -0,0 +1,6 @@
# #ddev-generated
# To make this file active you can either
# cp bash_aliases.example .bash_aliases
# or ln -s bash_aliases.example .bash_aliases

alias ll="ls -lhA"
34 changes: 34 additions & 0 deletions .ddev/providers/README.txt
@@ -0,0 +1,34 @@
Providers README
================

#ddev-generated

## Introduction to Hosting Provider Integration

DDEV's hosting provider integration lets you integrate with any upstream source of database dumps and files (such as your production or staging server) and provides examples of configuration for Acquia, Platform.sh, Pantheon, rsync, etc.

The best part of this is that you can change and adapt them any way you need to; they're all short scripted recipes. There are several example recipes created in the .ddev/providers directory of every project, or see them in the code at https://github.com/ddev/ddev/tree/master/cmd/ddev/cmd/dotddev_assets/providers.

ddev provides the `pull` command with whatever recipes you have configured. For example, `ddev pull acquia` if you have created `.ddev/providers/acquia.yaml`.

ddev also provides the `push` command to push database and files to upstream. This is very dangerous to your upstream site and should only be used with extreme caution. It's recommended not even to implement the push stanzas in your yaml file, but if it fits your workflow, use it well.

Each provider recipe is a yaml file that can be named any way you want to name it. The examples are mostly named after the hosting providers, but they could be named "upstream.yaml" or "live.yaml", so you could `ddev pull upstream` or `ddev pull live`. If you wanted different upstream environments to pull from, you could name one "prod" and one "dev" and `ddev pull prod` and `ddev pull dev`.

Several example recipes are at https://github.com/ddev/ddev/tree/master/cmd/ddev/cmd/dotddev_assets/providers and in this directory.

Each provider recipe is a file named `<provider>.yaml` and consists of several mostly-optional stanzas:

* `environment_variables`: Environment variables will be created in the web container for each of these during pull or push operations. They're used to provide context (project id, environment name, etc.) for each of the other stanzas.
* `db_pull_command`: A script that determines how ddev should pull a database. Its job is to create a gzipped database dump in /var/www/html/.ddev/.downloads/db.sql.gz.
* `files_pull_command`: A script that determines how ddev can get user-generated files from upstream. Its job is to copy the files from upstream to /var/www/html/.ddev/.downloads/files.
* `db_push_command`: A script that determines how ddev should push a database. Its job is to take a gzipped database dump from /var/www/html/.ddev/.downloads/db.sql.gz and load it on the hosting provider.
* `files_push_command`: A script that determines how ddev pushes user-generated files to upstream. Its job is to copy the files from the project's user-files directory ($DDEV_FILES_DIR) to the correct place on the upstream provider.

The environment variables provided to custom commands (see https://ddev.readthedocs.io/en/stable/users/extend/custom-commands/#environment-variables-provided) are also available for use in these recipes.

### Provider Debugging

You can uncomment the `set -x` in each stanza to see more of what's going on. It really helps.

Although the various commands could be executed on the host or in other containers if configured that way, most commands are executed in the web container. So the best thing to do is to `ddev ssh` and manually execute each command you want to use. When you have it right, use it in the yaml file.
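The stanza list above can be sketched as a minimal recipe skeleton. Everything in this sketch is illustrative: `example.yaml` and `some-hosting-cli` are hypothetical placeholders, not a real provider; only the download paths come from the descriptions above.

```yaml
# .ddev/providers/example.yaml -- hypothetical skeleton, not a working recipe
environment_variables:
  project_id: yourproject.dev

db_pull_command:
  command: |
    set -eu -o pipefail
    # Must leave a gzipped dump at exactly this path:
    some-hosting-cli db:dump | gzip > /var/www/html/.ddev/.downloads/db.sql.gz

files_pull_command:
  command: |
    set -eu -o pipefail
    # Must copy user-generated files into this directory:
    some-hosting-cli files:sync /var/www/html/.ddev/.downloads/files
```

With the file in place, `ddev pull example` runs the recipe.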
82 changes: 82 additions & 0 deletions .ddev/providers/acquia.yaml.example
@@ -0,0 +1,82 @@
#ddev-generated
# Example Acquia provider configuration.

# To use this configuration,

# 1. Get your Acquia API token from your Account Settings->API Tokens.
# 2. Make sure your ssh key is authorized on your Acquia account at Account Settings->SSH Keys
# 3. `ddev auth ssh` (this typically needs only be done once per ddev session, not every pull.)
# 4. Add / update the web_environment section in ~/.ddev/global_config.yaml with the API keys:
# ```yaml
# web_environment:
# - ACQUIA_API_KEY=xxxxxxxx
# - ACQUIA_API_SECRET=xxxxx
# ```
# 5. Copy .ddev/providers/acquia.yaml.example to .ddev/providers/acquia.yaml.
# 6. Update the project_id and database corresponding to the environment you want to work with.
# - If you have acli installed, you can use the following command: `acli remote:aliases:list`
# - Or, on the Acquia Cloud Platform, navigate to the environments page, click on the header, and look for the "SSH URL" line. E.g. `[email protected]` would have a project ID of `project1.dev`
# 7. Your project must include drush; `ddev composer require drush/drush` if it isn't there already.
# 8. `ddev restart`
# 9. Use `ddev pull acquia` to pull the project database and files.
# 10. Optionally use `ddev push acquia` to push local files and database to Acquia. Note that `ddev push` is a command that can potentially damage your production site, so this is not recommended.

# Debugging: Use `ddev exec acli command` and `ddev exec acli auth:login`
# Make sure you remembered to `ddev auth ssh`

environment_variables:
  project_id: yourproject.dev
  database_name: yourproject

auth_command:
  command: |
    set -eu -o pipefail
    if [ -z "${ACQUIA_API_KEY:-}" ] || [ -z "${ACQUIA_API_SECRET:-}" ]; then echo "Please make sure you have set ACQUIA_API_KEY and ACQUIA_API_SECRET in ~/.ddev/global_config.yaml" && exit 1; fi
    if ! command -v drush >/dev/null ; then echo "Please make sure your project contains drush, ddev composer require drush/drush" && exit 1; fi
    ssh-add -l >/dev/null || ( echo "Please 'ddev auth ssh' before running this command." && exit 1 )

    acli -n auth:login -n --key="${ACQUIA_API_KEY}" --secret="${ACQUIA_API_SECRET}"
    acli -n remote:aliases:download --all --destination-dir $HOME/.drush -n >/dev/null

db_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    # If no database_name is configured, infer it from project_id
    if [ -z "${database_name:-}" ]; then database_name=${project_id%%.*}; fi
    backup_time=$(acli -n api:environments:database-backup-list ${project_id} ${database_name} --limit=1 | jq -r .[].completed_at)
    backup_id="$(acli -n api:environments:database-backup-list ${project_id} ${database_name} --limit=1 | jq -r .[].id)"
    backup_url=$(acli -n api:environments:database-backup-download ${project_id} ${database_name} ${backup_id} | jq -r .url)
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    echo "Downloading backup $backup_id from $backup_time"
    curl -o /var/www/html/.ddev/.downloads/db.sql.gz ${backup_url}

files_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    drush -r docroot rsync --exclude-paths='styles:css:js' --alias-path=~/.drush -q -y @${project_id}:%files ./files

# push is a dangerous command. If not absolutely needed it's better to delete these lines.
db_push_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    TIMESTAMP=$(date +%y%m%d%H%M%S)
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    cd /var/www/html/.ddev/.downloads
    drush rsync -y --alias-path=~/.drush ./db.sql.gz @${project_id}:/tmp/db.${TIMESTAMP}.sql.gz
    acli -n remote:ssh -n ${project_id} -- "cd /tmp && gunzip db.${TIMESTAMP}.sql.gz"
    acli -n remote:drush -n ${project_id} -- "sql-cli </tmp/db.${TIMESTAMP}.sql"
    acli -n remote:drush -n ${project_id} -- cr
    acli -n remote:ssh -n ${project_id} -- "rm /tmp/db.${TIMESTAMP}.*"

# push is a dangerous command. If not absolutely needed it's better to delete these lines.
files_push_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    ls ${DDEV_FILES_DIR} >/dev/null # This just refreshes stale NFS if possible
    drush rsync -y --alias-path=~/.drush @self:%files @${project_id}:%files
40 changes: 40 additions & 0 deletions .ddev/providers/git.yaml.example
@@ -0,0 +1,40 @@
#ddev-generated
# Example git provider configuration.

# To use this configuration,

# 1. Create a git repository that contains a database dump (db.sql.gz) and a files tarball. It can be private or public, but for most people it will be private.
# 2. Configure access to the repository so that it can be accessed from where you need it. For example, on Gitpod you'll need to enable access to GitHub or GitLab. On a regular local dev environment, you'll need to be able to access GitHub via HTTPS or SSH.
# 3. Update the environment_variables below to point to the git repository that contains your database dump and files.

environment_variables:
  project_url: https://github.com/ddev/ddev-pull-git-test-repo
  branch: main
  checkout_dir: ~/tmp/ddev-pull-git-test-repo

auth_command:
  service: host
  # This actually doesn't auth, but rather just checks out the repository
  command: |
    set -eu -o pipefail
    if [ ! -d ${checkout_dir}/.git ] ; then
      git clone -q ${project_url} --branch=${branch} ${checkout_dir}
    else
      cd ${checkout_dir}
      git reset --hard -q && git fetch && git checkout -q origin/${branch}
    fi

db_import_command:
  service: host
  command: |
    set -eu -o pipefail
    # set -x
    ddev import-db --src="${checkout_dir}/db.sql.gz"

files_import_command:
  service: host
  command: |
    set -eu -o pipefail
    # set -x
    ddev import-files --src="${checkout_dir}/files"
38 changes: 38 additions & 0 deletions .ddev/providers/localfile.yaml.example
@@ -0,0 +1,38 @@
#ddev-generated
# Example local file provider configuration.

# This will pull a database and files from an existing location, for example,
# from a Dropbox location on disk

# To use this configuration,
# 1. You need a database dump and/or user-generated files tarball.
# 2. Copy localfile.yaml.example to localfile.yaml.
# 3. Change the copy commands as needed.
# 4. Use `ddev pull localfile` to pull the project database and files.

# In this example, db_pull_command is not used

# Here db_import_command imports directly from the source location
# instead of looking in .ddev/.downloads/files
db_import_command:
  command: |
    set -eu -o pipefail
    echo $PATH
    ddev --version
    set -x
    gzip -dc ~/Dropbox/db.sql.gz | ddev mysql db
  service: host

# In this example, files_pull_command is not used

# files_import_command is an example of a custom importer
# that directly untars the files into their appropriate destination
files_import_command:
  command: |
    set -eu -o pipefail
    echo $PATH
    ddev --version
    set -x
    mkdir -p web/sites/default/files
    tar -zxf ~/Dropbox/files.tar.gz -C web/sites/default/files
  service: host
88 changes: 88 additions & 0 deletions .ddev/providers/pantheon.yaml.example
@@ -0,0 +1,88 @@
#ddev-generated
# Example Pantheon.io provider configuration.
# This example is Drupal/drush oriented,
# but can be adapted for other CMSs supported by Pantheon

# To use this configuration:
#
# 1. Get your Pantheon.io machine token:
# a. Login to your Pantheon Dashboard, and [Generate a Machine Token](https://pantheon.io/docs/machine-tokens/) for ddev to use.
# b. Add the API token to the `web_environment` section in your global ddev configuration at ~/.ddev/global_config.yaml
#
# ```
# web_environment:
# - TERMINUS_MACHINE_TOKEN=abcdeyourtoken
# ```
#
# 2. Choose a Pantheon site and environment you want to use with ddev. You can usually use the site name, but in some environments you may need the site uuid, which is the long 3rd component of your site dashboard URL. So if the site dashboard is at <https://dashboard.pantheon.io/sites/009a2cda-2c22-4eee-8f9d-96f017321555#dev/>, the site ID is 009a2cda-2c22-4eee-8f9d-96f017321555.
#
# 3. On the pantheon dashboard, make sure that at least one backup has been created. (When you need to refresh what you pull, do a new backup.)
#
# 4. Make sure your public ssh key is configured in Pantheon (Account->SSH Keys)
#
# 5. Check out project codebase from Pantheon. Enable the "Git Connection Mode" and use `git clone` to check out the code locally.
#
# 6. Configure the local checkout for ddev using `ddev config`
#
# 7. Verify that drush is installed in your project, `ddev composer require drush/drush`
#
# 8. In your project's .ddev/providers directory, copy pantheon.yaml.example to pantheon.yaml and edit the "project" under `environment_variables` (change it from `yourproject.dev`). If you want to use a different environment than "dev", change `dev` to the name of the environment.
#
# 9. If using Colima, you may need to set an explicit nameserver such as `1.1.1.1` in `~/.colima/default/colima.yaml`. If this configuration is changed, you may also need to restart Colima.
#
# 10. `ddev restart`
#
# 11. Run `ddev pull pantheon`. The ddev environment will download the Pantheon database and files using terminus and will import the database and files into the ddev environment. You should now be able to access the project locally.
#
# 12. Optionally use `ddev push pantheon` to push local files and database to Pantheon. Note that `ddev push` is a command that can potentially damage your production site, so this is not recommended.
#

# Debugging: Use `ddev exec terminus auth:whoami` to see what terminus knows about
# `ddev exec terminus site:list` will show available sites

environment_variables:
  project: yourproject.dev

auth_command:
  command: |
    set -eu -o pipefail
    ssh-add -l >/dev/null || ( echo "Please 'ddev auth ssh' before running this command." && exit 1 )
    if ! command -v drush >/dev/null ; then echo "Please make sure your project contains drush, ddev composer require drush/drush" && exit 1; fi
    if [ -z "${TERMINUS_MACHINE_TOKEN:-}" ]; then echo "Please make sure you have set TERMINUS_MACHINE_TOKEN in ~/.ddev/global_config.yaml" && exit 1; fi
    terminus auth:login --machine-token="${TERMINUS_MACHINE_TOKEN}" || ( echo "terminus auth login failed, check your TERMINUS_MACHINE_TOKEN" && exit 1 )
    terminus aliases 2>/dev/null

db_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    terminus backup:get ${project} --element=db --to=db.sql.gz

files_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    terminus backup:get ${project} --element=files --to=files.tgz
    mkdir -p files && tar --strip-components=1 -C files -zxf files.tgz

# push is a dangerous command. If not absolutely needed it's better to delete these lines.
db_push_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    terminus remote:drush ${project} -- sql-drop -y
    gzip -dc db.sql.gz | terminus remote:drush ${project} -- sql-cli

# push is a dangerous command. If not absolutely needed it's better to delete these lines.
files_push_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    ls ${DDEV_FILES_DIR} >/dev/null # This just refreshes stale NFS if possible
    drush rsync -y @self:%files @${project}:%files