Learn about the data ingested, benefits of this integration, and how to use it with JupiterOne in the integration documentation.
- Install Node.js using the installer or a version manager such as nvm or fnm.
- Install dependencies with `yarn install`.
- Register an account in the system this integration targets for ingestion and obtain API credentials.
- Run `cp .env.example .env` and add the necessary values for runtime configuration.

When an integration executes, it needs API credentials and any other configuration parameters necessary for its work (provider API credentials, data ingestion parameters, etc.). The names of these parameters are defined by the `IntegrationInstanceConfigFieldMap` in `src/config.ts`. When the integration is executed outside the JupiterOne managed environment (local development or on-prem), values for these parameters are read from Node's `process.env` by converting config field names to constant case. For example, `clientId` is read from `process.env.CLIENT_ID`.

The `.env` file is loaded into `process.env` before the integration code is executed. The file is not required if you configure the environment another way. `.gitignore` is configured to avoid committing the `.env` file.
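As a rough illustration of the constant-case mapping described above, the helper below converts a camelCase config field name into the environment variable name the integration would read. This is a sketch, not part of the SDK's public API; the function name is hypothetical.

```typescript
// Illustrative helper (hypothetical, not from the SDK): maps a camelCase
// config field name to the constant-case environment variable name,
// e.g. clientId -> CLIENT_ID.
function toConstantCase(fieldName: string): string {
  // Insert an underscore before each uppercase letter that follows a
  // lowercase letter or digit, then uppercase the whole string.
  return fieldName.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toUpperCase();
}

console.log(toConstantCase("clientId")); // CLIENT_ID
console.log(toConstantCase("clientSecret")); // CLIENT_SECRET
```

So a field declared as `clientId` in `src/config.ts` would be supplied locally via a `CLIENT_ID=...` line in `.env`.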
- `yarn start` to collect data
- `yarn graph` to show a visualization of the collected data
- `yarn j1-integration -h` for additional commands
In JupiterOne:

1. Create a custom integration in JupiterOne (apps.us.jupiterone.io/integrations/custom).
2. Generate an integration API key for use with this custom integration.

In this project / CLI:

1. Clone this repo (`git clone git@github.com:JupiterOne/graph-aws-extender-scan-factory.git`) or download and unzip this project.
2. Run `yarn install` from a command line.
3. Create a `.env` file at the root of this project with the following values:

   ```
   JUPITERONE_API_KEY=<jupiterone-api-key>
   JUPITERONE_ACCOUNT=<jupiterone-account-id>
   ```

4. Update the `src/config.ts` file with the scan factory scans & pointers you would like to ingest.
5. Run `yarn j1-integration run --integrationInstanceId <integration-instance-id>` from a command line.
Start by taking a look at the source code. The integration is essentially a set of functions called steps, each of which ingests a collection of resources and relationships. The goal is to limit each step to as few resource types as possible, so that if ingestion of one type of data fails, it does not necessarily prevent the ingestion of other, unrelated data. That should be enough information to get you started coding!
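The step pattern described above can be sketched in simplified form. The real integration uses types from `@jupiterone/integration-sdk-core`; here the shapes are stubbed out so the structure is visible in isolation, and all names (`fetchUsersStep`, `example_user`, etc.) are illustrative rather than taken from this repo.

```typescript
// Simplified sketch of the "step" pattern, with SDK types stubbed out.
// All identifiers here are hypothetical examples, not from this project.

type Entity = { _key: string; _type: string; _class: string[] };

interface JobState {
  addEntity(e: Entity): Promise<void>;
}

interface IntegrationStep {
  id: string;
  name: string;
  entities: { resourceName: string; _type: string }[];
  dependsOn: string[];
  executionHandler: (jobState: JobState) => Promise<void>;
}

// Each step ingests as few resource types as possible, so a failure in
// one step does not block unrelated data from being collected.
const fetchUsersStep: IntegrationStep = {
  id: "fetch-users",
  name: "Fetch Users",
  entities: [{ resourceName: "User", _type: "example_user" }],
  dependsOn: ["fetch-account"],
  async executionHandler(jobState) {
    // Stand-in for a provider API call that pages through users.
    const users = [{ id: "u1", name: "Alice" }];
    for (const u of users) {
      await jobState.addEntity({
        _key: `example_user:${u.id}`,
        _type: "example_user",
        _class: ["User"],
      });
    }
  },
};
```

The `dependsOn` array lets the SDK order steps so that, for example, users are only fetched after the account entity they relate to exists.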
See the SDK development documentation for a deep dive into the mechanics of how integrations work.
See docs/development.md for any additional details about developing this integration.
The history of this integration's development can be viewed at CHANGELOG.md.