From 7844330bce99e146cf9404c460ad7beff9a21b97 Mon Sep 17 00:00:00 2001
From: Marcelle <53578688+m-goggins@users.noreply.github.com>
Date: Tue, 26 Nov 2024 10:07:45 -0800
Subject: [PATCH] Update README to include documentation on queryBuilding data
 (#177)

---
 query-connector/README.md       | 31 ++++++++++++++++++++++++++++++-
 query-connector/VS_DUMP_INFO.md | 19 -------------------
 2 files changed, 30 insertions(+), 20 deletions(-)
 delete mode 100644 query-connector/VS_DUMP_INFO.md

diff --git a/query-connector/README.md b/query-connector/README.md
index 5e6af20f1..652829503 100644
--- a/query-connector/README.md
+++ b/query-connector/README.md
@@ -77,7 +77,36 @@ The Query Connector will eventually require other inputs from other DIBBs servic
 
 ### Developer Documentation
 
-Can be found in [api-documentation.md](api-documentation.md).
+A Postman collection demonstrating use of the API can be found [here](https://github.com/CDCgov/dibbs-query-connector/blob/main/query-connector/src/app/assets/DIBBs_Query_Connector_API.postman_collection.json).
+
+### Query Connector Data for Query Building
+
+When initializing the backend database for the first time, the Query Connector makes the value sets associated with 200+ reportable conditions available to users tasked with building queries for their jurisdiction. To group value sets by condition and to group the conditions by type, the Query Connector obtains and organizes data from the eRSD and the VSAC as follows:
+
+1. The Query Connector retrieves the 200+ reportable conditions from the eRSD, as well as the IDs of the value sets associated with each condition.
+2. Using the value set IDs from the eRSD, the Query Connector retrieves each value set's full definition from the VSAC, i.e., the LOINC, SNOMED, etc. codes associated with each value set ID.
+3. The Query Connector then organizes these value sets according to the conditions with which they're associated, making the result available to users interested in building queries. The conditions are additionally organized by category, e.g., sexually transmitted diseases or respiratory conditions, using a mapping curated by HLN Consulting.
+
+#### Query Building Data in `dev` mode
+
+To keep the dev process as low-lift as possible, we want to avoid executing the `db-creation` scripts when booting up the application in dev mode via `npm run dev` or `npm run dev-win`. To that end, we've created a `pg_dump` file containing all the value sets, concepts, and foreign key mappings that would be extracted from a fresh pull of the eRSD and processed through our creation functions. This file, `vs_dump.sql`, is mounted into the Docker volume of our Postgres DB as an entrypoint script when running in dev mode, which means it is executed automatically whenever the DB is freshly spun up. You shouldn't need to do anything to facilitate this mounting or execution.
+
+#### Updating the pg_dump
+
+If the DB extract file ever needs to be updated, use the following process:
+
+1. Start the application on your local machine with a regular `docker compose up`, and wait for the DB to be ready.
+2. Load the eRSD and value sets into the DIBBs DB using the `Create Query` button on the `/queryBuilding` page. Optionally, use DBeaver to verify that the value sets exist in the database.
+3. In a fresh terminal window, run
+
+```
+pg_dump -U postgres -f vs_dump.sql -h localhost -p 5432 tefca_db
+```
+
+If the above doesn't work, try replacing `localhost` with `0.0.0.0`.
+
+4. Enter the DB password when prompted.
+5. The extract file, `vs_dump.sql`, should now be created. It should automatically be located in `/query-connector`; if it isn't, move it there.
 
 ### Architecture Diagram
 
diff --git a/query-connector/VS_DUMP_INFO.md b/query-connector/VS_DUMP_INFO.md
deleted file mode 100644
index d6c3ab06f..000000000
--- a/query-connector/VS_DUMP_INFO.md
+++ /dev/null
@@ -1,19 +0,0 @@
-## ValueSet SQL Dump Information
-
-In order to make the dev process as low-lift as possible, we want to avoid executing the `db-creation` scripts when booting up the application in dev mode via `npm run dev`. To that end, we've created a `pg_dump` file containing all the valusets, concepts, and foreign key mappings that would be extracted from a fresh pull of the eRSD and processed through our creation functions. This file, `vs_dump.sql` has been mounted into the docker volume of our postgres DB when running in dev mode as an entrypoint script. This means it will be automatically executed when the DB is freshly spun up. You shouldn't need to do anything to facilitate this mounting or file running.
-
-## Updating the pg_dump
-
-If the DB extract file ever needs to be updated, you can use the following simple process:
-
-1. Start up the application on your local machine using a regular `docker compose up`, and wait for the DB to be ready.
-2. Load the eRSD and valuesets into the DIBBs DB by using the `Create Query` button on the `/queryBuilding` page. Optionally, use DBeaver to verify that valuesets exist in the database.
-3. In a fresh terminal window, run
-
-```
-pg_dump -U postgres -f vs_dump.sql -h localhost -p 5432 tefca_db
-```
-
-If the above doesn't work, try replacing `localhost` with `0.0.0.0`.
-4. Enter the DB password when prompted.
-5. The extract file should now be created. It should automatically be located in `/query-connector`, but if it isn't, put it there.
\ No newline at end of file
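For reference, the entrypoint-script mounting this patch documents relies on the official `postgres` image's convention of executing any `*.sql` file placed in `/docker-entrypoint-initdb.d` once, when the database is initialized on an empty data directory. A minimal compose sketch of that mechanism follows; the service name, image tag, and password are illustrative assumptions, not the project's actual configuration — only `vs_dump.sql`, the `tefca_db` database name, and port 5432 come from the patch itself:

```yaml
# Sketch only: service name, image tag, and credentials are assumptions;
# the database name and port are taken from the pg_dump command in the patch.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: tefca_db
      POSTGRES_PASSWORD: example_password   # placeholder, not the real value
    ports:
      - "5432:5432"
    volumes:
      # The official postgres image runs *.sql files found in
      # /docker-entrypoint-initdb.d on a fresh (empty) data directory,
      # so mounting vs_dump.sql here restores the extract automatically.
      - ./vs_dump.sql:/docker-entrypoint-initdb.d/vs_dump.sql
```

Note that these init scripts run only on first initialization: if a Docker volume already holds data, the dump is skipped until the volume is removed and recreated.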