diff --git a/docs/nodes/AI_ML/IMAGE_CLASSIFICATION/HUGGING_FACE_PIPELINE/a1-[autogen]/docstring.txt b/docs/nodes/AI_ML/IMAGE_CLASSIFICATION/HUGGING_FACE_PIPELINE/a1-[autogen]/docstring.txt index 6775992d4..395f8a84c 100644 --- a/docs/nodes/AI_ML/IMAGE_CLASSIFICATION/HUGGING_FACE_PIPELINE/a1-[autogen]/docstring.txt +++ b/docs/nodes/AI_ML/IMAGE_CLASSIFICATION/HUGGING_FACE_PIPELINE/a1-[autogen]/docstring.txt @@ -1,20 +1,29 @@ -Hugging Face Pipeline for Image Classification. +The HUGGING_FACE_PIPELINE node uses a classification pipeline to process and classify an image. + + For more information about Vision Transformers, + see: https://huggingface.co/google/vit-base-patch16-224 + + For a complete list of models, see: + https://huggingface.co/models?pipeline_tag=image-classification + + For examples of how a revision parameter (such as 'main') is used, + see: https://huggingface.co/google/vit-base-patch16-224/commits/main Parameters ---------- - default: Image - The input image to be classified. The image must be a PIL.Image object wrapped in a flojoy Image object. - model: str + default : Image + The input image to be classified. + The image must be a PIL.Image object, wrapped in a Flojoy Image object. + model : str The model to be used for classification. - If not specified, Vision Transformers (i.e. `google/vit-base-patch16-224`) are used. - For more information about Vision Transformers, see: https://huggingface.co/google/vit-base-patch16-224 - For a complete list of models see: https://huggingface.co/models?pipeline_tag=image-classification - revision: str + If not specified, Vision Transformers (i.e. 'google/vit-base-patch16-224') are used. + revision : str The revision of the model to be used for classification. - If not specified, main is `used`. For instance see: https://huggingface.co/google/vit-base-patch16-224/commits/main + If not specified, 'main' is used. Returns ------- DataFrame: - A DataFrame containing as columns the `label` classification label and `score`, its confidence score. - All scores are between 0 and 1 and sum to 1. + A DataFrame containing the columns 'label' (the classification label) + and 'score' (the confidence score). + All scores are between 0 and 1, and sum to 1. diff --git a/docs/nodes/AI_ML/NLP/COUNT_VECTORIZER/a1-[autogen]/docstring.txt b/docs/nodes/AI_ML/NLP/COUNT_VECTORIZER/a1-[autogen]/docstring.txt index f44858ce9..16ec78a2b 100644 --- a/docs/nodes/AI_ML/NLP/COUNT_VECTORIZER/a1-[autogen]/docstring.txt +++ b/docs/nodes/AI_ML/NLP/COUNT_VECTORIZER/a1-[autogen]/docstring.txt @@ -1,10 +1,8 @@ +The COUNT_VECTORIZER node receives a collection (matrix, vector or dataframe) of text documents and converts it to a matrix of token counts. -The COUNT_VECTORIZER node receives a collection (matrix, vector or dataframe) of -text documents to a matrix of token counts. - -Returns -------- -tokens: DataFrame - holds all the unique tokens observed from the input. -word_count_vector: Vector - contains the occurences of these tokens from each sentence. + Returns + ------- + tokens: DataFrame + Holds all the unique tokens observed from the input. + word_count_vector: Vector + Contains the occurrences of these tokens from each sentence.
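A minimal sketch of the kind of classification pipeline the HUGGING_FACE_PIPELINE docstring above describes, assuming the transformers, Pillow, and pandas packages are available; the Flojoy Image/DataFrame wrapping is omitted and the image path is hypothetical.

```python
from PIL import Image
from transformers import pipeline
import pandas as pd

# Build an image-classification pipeline with an explicit model and revision,
# mirroring the node's 'model' and 'revision' parameters.
classifier = pipeline(
    task="image-classification",
    model="google/vit-base-patch16-224",
    revision="main",
)

image = Image.open("example.jpg")  # hypothetical path; the node receives a PIL.Image
predictions = classifier(image)    # a list of {"label": ..., "score": ...} dicts
df = pd.DataFrame(predictions)     # columns 'label' and 'score', as in the node's output
print(df)
```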
diff --git a/docs/nodes/AI_ML/PREDICT_TIME_SERIES/PROPHET_PREDICT/a1-[autogen]/docstring.txt b/docs/nodes/AI_ML/PREDICT_TIME_SERIES/PROPHET_PREDICT/a1-[autogen]/docstring.txt index d53b6eef5..cc5b56f6d 100644 --- a/docs/nodes/AI_ML/PREDICT_TIME_SERIES/PROPHET_PREDICT/a1-[autogen]/docstring.txt +++ b/docs/nodes/AI_ML/PREDICT_TIME_SERIES/PROPHET_PREDICT/a1-[autogen]/docstring.txt @@ -1,44 +1,44 @@ +The PROPHET_PREDICT node runs a Prophet model on the incoming dataframe. -The PROPHET_PREDICT node rains a Prophet model on the incoming dataframe. + The DataContainer input type must be a dataframe, and the first column (or index) of the dataframe must be of a datetime type. -The DataContainer input type must be a dataframe, and the first column (or index) of dataframe must be of a datetime type. + This node always returns a DataContainer of a dataframe type. It will also always return an 'extra' field with a key 'prophet' of which the value is the JSONified Prophet model. + This model can be loaded as follows: -This node always returns a DataContainer of a dataframe type. It will also always return an "extra" field with a key "prophet" of which the value is the JSONified Prophet model. -This model can be loaded as follows: - ```python - from prophet.serialize import model_from_json + ```python + from prophet.serialize import model_from_json - model = model_from_json(dc_inputs.extra["prophet"]) - ``` + model = model_from_json(dc_inputs.extra["prophet"]) + ``` -Parameters ----------- -run_forecast : bool - If True (default case), the dataframe of the returning DataContainer - ("m" parameter of the DataContainer) will be the forecasted dataframe. - It will also have an "extra" field with the key "original", which is - the original dataframe passed in. + Parameters + ---------- + run_forecast : bool + If True (default case), the dataframe of the returning DataContainer + ('m' parameter of the DataContainer) will be the forecasted dataframe. + It will also have an 'extra' field with the key 'original', which is + the original dataframe passed in. - If False, the returning dataframe will be the original data. + If False, the returning dataframe will be the original data. - This node will also always have an "extra" field, run_forecast, which - matches that of the parameters passed in. This is for future nodes - to know if a forecast has already been run. + This node will also always have an 'extra' field, run_forecast, which + matches that of the parameters passed in. This is for future nodes + to know if a forecast has already been run. - Default = True + Default = True -periods : int - The number of periods to predict out. Only used if run_forecast is True. - Default = 365 + periods : int + The number of periods to predict out. Only used if run_forecast is True. + Default = 365 -Returns -------- -DataFrame - With parameter as df. - Indicates either the original df passed in, or the forecasted df - (depending on if run_forecast is True). + Returns + ------- + DataFrame + With parameter as df. + Indicates either the original df passed in, or the forecasted df + (depending on if run_forecast is True). -DataContainer - With parameter as "extra". - Contains keys run_forecast which correspond to the input parameter, - and potentially "original" in the event that run_forecast is True. + DataContainer + With parameter as 'extra'. + Contains keys run_forecast which correspond to the input parameter, + and potentially 'original' in the event that run_forecast is True. 
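To make the PROPHET_PREDICT behaviour above concrete, here is a sketch of the underlying Prophet workflow (fit, forecast, and the JSON serialization that matches the docstring's loading snippet), using a synthetic dataframe; the Flojoy DataContainer plumbing is omitted and the column names follow Prophet's 'ds'/'y' convention.

```python
import pandas as pd
from prophet import Prophet
from prophet.serialize import model_to_json, model_from_json

# Synthetic input: the first column is datetime-typed, as the node requires.
df = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=100, freq="D"),
    "y": range(100),
})

model = Prophet()
model.fit(df)                                      # train on the incoming dataframe
future = model.make_future_dataframe(periods=365)  # the node's 'periods' parameter
forecast = model.predict(future)                   # the forecasted dataframe

serialized = model_to_json(model)       # the kind of value stored under extra["prophet"]
restored = model_from_json(serialized)  # mirrors the docstring's loading snippet
```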
diff --git a/docs/nodes/AI_ML/REGRESSION/LEAST_SQUARES/a1-[autogen]/docstring.txt b/docs/nodes/AI_ML/REGRESSION/LEAST_SQUARES/a1-[autogen]/docstring.txt index 3e7c6bfed..89914eb83 100644 --- a/docs/nodes/AI_ML/REGRESSION/LEAST_SQUARES/a1-[autogen]/docstring.txt +++ b/docs/nodes/AI_ML/REGRESSION/LEAST_SQUARES/a1-[autogen]/docstring.txt @@ -1,10 +1,9 @@ +The LEAST_SQUARE node computes the coefficients that minimize the distance between the inputs 'Matrix' or 'OrderedPair' class and the regression. -The LEAST_SQUARE node computes the coefficients that minimizes the distance between the inputs 'Matrix' or 'OrderedPair' class and the regression. - -Returns -------- -OrderedPair - x: input matrix (data points) - y: fitted line computed with returned regression weights -Matrix - m : fitted matrix computed with returned regression weights + Returns + ------- + OrderedPair + x: input matrix (data points) + y: fitted line computed with returned regression weights + Matrix + m: fitted matrix computed with returned regression weights diff --git a/docs/nodes/AI_ML/SEGMENTATION/DEEPLAB_V3/a1-[autogen]/docstring.txt b/docs/nodes/AI_ML/SEGMENTATION/DEEPLAB_V3/a1-[autogen]/docstring.txt index 4f5e32308..cfeed0f5c 100644 --- a/docs/nodes/AI_ML/SEGMENTATION/DEEPLAB_V3/a1-[autogen]/docstring.txt +++ b/docs/nodes/AI_ML/SEGMENTATION/DEEPLAB_V3/a1-[autogen]/docstring.txt @@ -1,10 +1,9 @@ - The DEEPLAB_V3 node returns a segmentation mask from an input image in a dataframe. -The input image is expected to be a DataContainer of an "image" type. + The input image is expected to be a DataContainer of an 'image' type. -The output is a DataContainer of an "image" type with the same dimensions as the input image, but with the red, green, and blue channels replaced with the segmentation mask. + The output is a DataContainer of an 'image' type with the same dimensions as the input image, but with the red, green, and blue channels replaced with the segmentation mask. -Returns -------- -Image + Returns + ------- + Image diff --git a/docs/nodes/EXTRACTORS/FILE/READ_S3/a1-[autogen]/docstring.txt b/docs/nodes/EXTRACTORS/FILE/READ_S3/a1-[autogen]/docstring.txt index 6e569c5c9..00ab9cac4 100644 --- a/docs/nodes/EXTRACTORS/FILE/READ_S3/a1-[autogen]/docstring.txt +++ b/docs/nodes/EXTRACTORS/FILE/READ_S3/a1-[autogen]/docstring.txt @@ -7,9 +7,9 @@ The READ_S3 node takes a S3_key name, S3 bucket name, and file name as input, an Parameters ---------- s3_name : str - name of the key that the user used to save access and secret access key + name of the key that the user used to save the access and secret access keys bucket_name : str - AWS S3 bucket name that they are trying to access + Amazon S3 bucket name that they are trying to access file_name : str name of the file that they want to extract diff --git a/docs/nodes/GENERATORS/SAMPLE_DATASETS/R_DATASET/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SAMPLE_DATASETS/R_DATASET/a1-[autogen]/docstring.txt index 48d2243d3..deed9387e 100644 --- a/docs/nodes/GENERATORS/SAMPLE_DATASETS/R_DATASET/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SAMPLE_DATASETS/R_DATASET/a1-[autogen]/docstring.txt @@ -1,11 +1,10 @@ +The R_DATASET node retrieves a pandas DataFrame from 'rdatasets', using the provided dataset_key parameter, and returns it wrapped in a DataContainer. -The R_DATASET node retrieves a pandas DataFrame from rdatasets using the provided dataset_key parameter and returns it wrapped in a DataContainer. 
+ Parameters + ---------- + dataset_key : str -Parameters ----------- -dataset_key : str - -Returns -------- -DataFrame - A DataContainer object containing the retrieved pandas DataFrame. + Returns + ------- + DataFrame + A DataContainer object containing the retrieved pandas DataFrame. diff --git a/docs/nodes/GENERATORS/SAMPLE_DATASETS/TEXT_DATASET/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SAMPLE_DATASETS/TEXT_DATASET/a1-[autogen]/docstring.txt index 0301de899..82c6b61d1 100644 --- a/docs/nodes/GENERATORS/SAMPLE_DATASETS/TEXT_DATASET/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SAMPLE_DATASETS/TEXT_DATASET/a1-[autogen]/docstring.txt @@ -1,37 +1,41 @@ The TEXT_DATASET node loads the 20 newsgroups dataset from scikit-learn. -The data is returned as a dataframe with one column containing the text -and the other containing the category. -Parameters ----------- -subset: "train" | "test" | "all", default="train" - Select the dataset to load: "train" for the training set, "test" for the test set, "all" for both. -categories: list of str, optional - Select the categories to load. By default, all categories are loaded. - The list of all categories is: - 'alt.atheism', - 'comp.graphics', - 'comp.os.ms-windows.misc', - 'comp.sys.ibm.pc.hardware', - 'comp.sys.mac.hardware', - 'comp.windows.x', - 'misc.forsale', - 'rec.autos', - 'rec.motorcycles', - 'rec.sport.baseball', - 'rec.sport.hockey', - 'sci.crypt', - 'sci.electronics', - 'sci.med', - 'sci.space', - 'soc.religion.christian', - 'talk.politics.guns', - 'talk.politics.mideast', - 'talk.politics.misc', - 'talk.religion.misc' -remove_headers: boolean, default=false - Remove the headers from the data. -remove_footers: boolean, default=false - Remove the footers from the data. -remove_quotes: boolean, default=false - Remove the quotes from the data. + The data is returned as a dataframe with one column containing the text and the other containing the category. + + Parameters + ---------- + subset : "train" | "test" | "all", default="train" + Select the dataset to load: "train" for the training set, "test" for the test set, "all" for both. + categories : list of str, optional + Select the categories to load. By default, all categories are loaded. + The list of all categories is: + 'alt.atheism', + 'comp.graphics', + 'comp.os.ms-windows.misc', + 'comp.sys.ibm.pc.hardware', + 'comp.sys.mac.hardware', + 'comp.windows.x', + 'misc.forsale', + 'rec.autos', + 'rec.motorcycles', + 'rec.sport.baseball', + 'rec.sport.hockey', + 'sci.crypt', + 'sci.electronics', + 'sci.med', + 'sci.space', + 'soc.religion.christian', + 'talk.politics.guns', + 'talk.politics.mideast', + 'talk.politics.misc', + 'talk.religion.misc' + remove_headers : boolean, default=false + Remove the headers from the data. + remove_footers : boolean, default=false + Remove the footers from the data. + remove_quotes : boolean, default=false + Remove the quotes from the data. + + Returns + ------- + DataFrame diff --git a/docs/nodes/GENERATORS/SAMPLE_IMAGES/SKIMAGE/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SAMPLE_IMAGES/SKIMAGE/a1-[autogen]/docstring.txt index 9bb381540..a3b43408e 100644 --- a/docs/nodes/GENERATORS/SAMPLE_IMAGES/SKIMAGE/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SAMPLE_IMAGES/SKIMAGE/a1-[autogen]/docstring.txt @@ -1,4 +1,4 @@ -The SKIMAGE node is designed to load example images from scikit-image. +The SKIMAGE node is designed to load example images from 'scikit-image'. 
Examples can be found here: https://scikit-image.org/docs/stable/auto_examples/index.html diff --git a/docs/nodes/GENERATORS/SIMULATIONS/BASIC_OSCILLATOR/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SIMULATIONS/BASIC_OSCILLATOR/a1-[autogen]/docstring.txt index bc0619426..dbbab7a95 100644 --- a/docs/nodes/GENERATORS/SIMULATIONS/BASIC_OSCILLATOR/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SIMULATIONS/BASIC_OSCILLATOR/a1-[autogen]/docstring.txt @@ -1,27 +1,26 @@ - The BASIC_OSCILLATOR node is a combination of the LINSPACE and SINE nodes. -It offers a more straightforward way to generate signals, with sample rate and the time in seconds as parameters, along with all the parameters in the SINE node. + It offers a more straightforward way to generate signals, with sample rate and the time in seconds as parameters, along with all the parameters in the SINE node. -Parameters ----------- -sample_rate : float - How many samples are taken in a second. -time : float - The total amount of time of the signal. -waveform : select - The waveform type of the wave. -amplitude : float - The amplitude of the wave. -frequency : float - The wave frequency in radians/2pi. -offset : float - The y axis offset of the function. -phase : float - The x axis offset of the function. + Parameters + ---------- + sample_rate : float + The number of samples that are taken in a second. + time : float + The total amount of time of the signal. + waveform : select + The waveform type of the wave. + amplitude : float + The amplitude of the wave. + frequency : float + The wave frequency in radians/2pi. + offset : float + The y axis offset of the function. + phase : float + The x axis offset of the function. -Returns -------- -OrderedPair - x: time domain - y: generated signal + Returns + ------- + OrderedPair + x: time domain + y: generated signal diff --git a/docs/nodes/GENERATORS/SIMULATIONS/CONSTANT/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SIMULATIONS/CONSTANT/a1-[autogen]/docstring.txt index a141bbfba..ccbced4f2 100644 --- a/docs/nodes/GENERATORS/SIMULATIONS/CONSTANT/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SIMULATIONS/CONSTANT/a1-[autogen]/docstring.txt @@ -1,29 +1,26 @@ - The CONSTANT node generates a single x-y vector of numeric (floating point) constants. -Inputs ------- -default : OrderedPair|Vector - Optional input that defines the size of the output. - -Parameters ----------- -dc_type : select - The type of DataContainer to return. -constant : float - The value of the y axis output. -step : int - The size of the y and x axes. + Inputs + ------ + default : OrderedPair|Vector + Optional input that defines the size of the output. -Returns -------- -OrderedPair + Parameters + ---------- + dc_type : select + The type of DataContainer to return. + constant : float + The value of the y axis output. + step : int + The size of the y and x axes. 
-OrderedPair|Vector|Scalar - OrderedPair if selected - x: the x axis generated with size 'step' - y: the resulting constant with size 'step' - Vector if selected - v: the resulting constant with size 'step' - Scalar if selected - c: the resulting constant + Returns + ------- + OrderedPair|Vector|Scalar + OrderedPair if selected + x: the x axis generated with size 'step' + y: the resulting constant with size 'step' + Vector if selected + v: the resulting constant with size 'step' + Scalar if selected + c: the resulting constant diff --git a/docs/nodes/GENERATORS/SIMULATIONS/LINSPACE/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SIMULATIONS/LINSPACE/a1-[autogen]/docstring.txt index 9e42f9ac1..a85d945fe 100644 --- a/docs/nodes/GENERATORS/SIMULATIONS/LINSPACE/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SIMULATIONS/LINSPACE/a1-[autogen]/docstring.txt @@ -1,23 +1,22 @@ - The LINSPACE node generates data spaced evenly between two points. -It uses the numpy function linspace. It is useful for generating an x axis for the ordered pair data type. + It uses the 'linspace' numpy function. It is useful for generating an x-axis for the OrderedPair data type. -Inputs ------- -default : OrderedPair - Optional input in case LINSPACE is used in a loop. Not used. + Inputs + ------ + default : OrderedPair + Optional input in case LINSPACE is used in a loop. Not used. -Parameters ----------- -start : float - The start point of the data. -end : float - The end point of the data. -step : float - The number of points in the vector. + Parameters + ---------- + start : float + The start point of the data. + end : float + The end point of the data. + step : float + The number of points in the vector. -Returns -------- -Vector - v: the vector between start and end with step number of points. + Returns + ------- + Vector + v: the vector between 'start' and 'end' with a 'step' number of points. diff --git a/docs/nodes/GENERATORS/SIMULATIONS/MATRIX/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SIMULATIONS/MATRIX/a1-[autogen]/docstring.txt index 0a14bedc5..8ca988e7a 100644 --- a/docs/nodes/GENERATORS/SIMULATIONS/MATRIX/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SIMULATIONS/MATRIX/a1-[autogen]/docstring.txt @@ -1,16 +1,15 @@ +The MATRIX node takes two arguments, 'row' and 'col', as input. -The MATRIX node takes two arguments, row and col, as input. + Based on these inputs, it generates a random matrix where the integers inside the matrix are between 0 and 19. -Based on these inputs, it generates a random matrix where the integers inside the matrix are between 0 and 19. + Parameters + ---------- + row : int + number of rows + column : int + number of columns -Parameters ----------- -row : int - number of rows -column : int - number of columns - -Returns -------- -matrix - randomly generated matrix + Returns + ------- + matrix + randomly generated matrix diff --git a/docs/nodes/GENERATORS/SIMULATIONS/SECOND_ORDER_SYSTEM/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SIMULATIONS/SECOND_ORDER_SYSTEM/a1-[autogen]/docstring.txt index 6ec33d558..2b30a5433 100644 --- a/docs/nodes/GENERATORS/SIMULATIONS/SECOND_ORDER_SYSTEM/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SIMULATIONS/SECOND_ORDER_SYSTEM/a1-[autogen]/docstring.txt @@ -1,9 +1,11 @@ -The SECOND_ORDER_SYSTEM has a second order exponential function. This node is designed to be used in a loop. The data is appended as the loop progress and written to memory. +The SECOND_ORDER_SYSTEM has a second order exponential function. 
+ + This node is designed to be used in a loop. The data is appended as the loop progresses and written to memory. Inputs ------ default : Scalar - PID node output + PID node output. Parameters ---------- @@ -11,7 +13,6 @@ The SECOND_ORDER_SYSTEM has a second order exponential function. This node is de The first time constant. d2 : float The second time constant. - Returns ------- Scalar diff --git a/docs/nodes/GENERATORS/SIMULATIONS/SINE/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SIMULATIONS/SINE/a1-[autogen]/docstring.txt index 5d04404a3..f66caa8ac 100644 --- a/docs/nodes/GENERATORS/SIMULATIONS/SINE/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SIMULATIONS/SINE/a1-[autogen]/docstring.txt @@ -1,27 +1,25 @@ +The SINE node generates a waveform function with the shape being defined by the input. -The SINE node generates a waveform function. With the shape being defined -by the input. + Inputs + ------ + default : OrderedPair|Vector + Input that defines the x-axis values of the function and output. -Inputs ------- -default : OrderedPair|Vector - Input that defines the x axis values of the function and output. + Parameters + ---------- + waveform : select + The waveform type of the wave. + amplitude : float + The amplitude of the wave. + frequency : float + The wave frequency in radians/2pi. + offset : float + The y axis offset of the function. + phase : float + The x axis offset of the function. -Parameters ----------- -waveform : select - The waveform type of the wave. -amplitude : float - The amplitude of the wave. -frequency : float - The wave frequency in radians/2pi. -offset : float - The y axis offset of the function. -phase : float - The x axis offset of the function. - -Returns -------- -OrderedPair - x: the input v or x values - y: the resulting sine function + Returns + ------- + OrderedPair + x: the input v or x values + y: the resulting sine function diff --git a/docs/nodes/GENERATORS/SIMULATIONS/TEXT/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SIMULATIONS/TEXT/a1-[autogen]/docstring.txt index df01bac62..1870eb33a 100644 --- a/docs/nodes/GENERATORS/SIMULATIONS/TEXT/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SIMULATIONS/TEXT/a1-[autogen]/docstring.txt @@ -2,10 +2,10 @@ The TEXT node returns a TextBlob DataContainer. Parameters ---------- - value: str - The value set in Parameters + value : str + The value set in Parameters. Returns ------- TextBlob - text_blob: return the value being set in Parameters + Return the value being set in Parameters. diff --git a/docs/nodes/GENERATORS/SIMULATIONS/TIMESERIES/a1-[autogen]/docstring.txt b/docs/nodes/GENERATORS/SIMULATIONS/TIMESERIES/a1-[autogen]/docstring.txt index 067936259..cbd576e54 100644 --- a/docs/nodes/GENERATORS/SIMULATIONS/TIMESERIES/a1-[autogen]/docstring.txt +++ b/docs/nodes/GENERATORS/SIMULATIONS/TIMESERIES/a1-[autogen]/docstring.txt @@ -1,15 +1,13 @@ - - The TIMESERIES node generates a random timeseries vector (as a DataFrame). -Parameters ----------- -start_date : str - The start date of the timeseries in the format YYYY:MM:DD. -end_date : str - The end date of the timeseries in the format YYYY:MM:DD. - -Returns -------- -DataFrame - m: the resulting timeseries + Parameters + ---------- + start_date : str + The start date of the timeseries in the format 'YYYY:MM:DD'. + end_date : str + The end date of the timeseries in the format 'YYYY:MM:DD'. 
+ + Returns + ------- + DataFrame + m: the resulting timeseries diff --git a/docs/nodes/IO/ANALOG_SENSORS/PRESSURE_SENSORS/FLEXIFORCE_25LB/a1-[autogen]/docstring.txt b/docs/nodes/IO/ANALOG_SENSORS/PRESSURE_SENSORS/FLEXIFORCE_25LB/a1-[autogen]/docstring.txt index 69d69fe34..c00dd021a 100644 --- a/docs/nodes/IO/ANALOG_SENSORS/PRESSURE_SENSORS/FLEXIFORCE_25LB/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/ANALOG_SENSORS/PRESSURE_SENSORS/FLEXIFORCE_25LB/a1-[autogen]/docstring.txt @@ -1,6 +1,8 @@ - The Flexiforce node allows you to convert voltages measured with the Phidget Interface Kit into pressures. -Calibration1 : float - Calibration parameters to convert voltage into pressure. -calibration2 : float - Calibration parameters to convert voltage into pressure. + + Parameters + ---------- + Calibration1 : float + Calibration parameters to convert voltage into pressure. + calibration2 : float + Calibration parameters to convert voltage into pressure. diff --git a/docs/nodes/IO/ANALOG_SENSORS/THERMOCOUPLES/LM34/a1-[autogen]/docstring.txt b/docs/nodes/IO/ANALOG_SENSORS/THERMOCOUPLES/LM34/a1-[autogen]/docstring.txt index b2af7fe64..449826951 100644 --- a/docs/nodes/IO/ANALOG_SENSORS/THERMOCOUPLES/LM34/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/ANALOG_SENSORS/THERMOCOUPLES/LM34/a1-[autogen]/docstring.txt @@ -1,4 +1,6 @@ - The LM34 node allows you to convert voltages measured with a thermocouple (LM34) connected to a LabJack U3 device into temperatures. -Calibration1, Calibration2, Calibration3 : float - Calibration parameters to convert voltage into temperature in Celcius. + + Parameters + ---------- + Calibration1, Calibration2, Calibration3 : float + Calibration parameters to convert voltage into temperature in Celsius. diff --git a/docs/nodes/IO/INSTRUMENTS/DAQ_BOARDS/LABJACK/U3/BASIC/READ_A0_PINS/a1-[autogen]/docstring.txt b/docs/nodes/IO/INSTRUMENTS/DAQ_BOARDS/LABJACK/U3/BASIC/READ_A0_PINS/a1-[autogen]/docstring.txt index 1d58d4d9e..882bcc731 100644 --- a/docs/nodes/IO/INSTRUMENTS/DAQ_BOARDS/LABJACK/U3/BASIC/READ_A0_PINS/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/INSTRUMENTS/DAQ_BOARDS/LABJACK/U3/BASIC/READ_A0_PINS/a1-[autogen]/docstring.txt @@ -1,7 +1,8 @@ -The READ_A0_PINS node allows you to record and return voltages from a sensor connected to a LABJACK U3 device. eturns temperature measurements with a - Use the sensor node to convert voltage into temperature measurements +The READ_A0_PINS node allows you to record and return voltages from a sensor connected to a LABJACK U3 device. + + The SENSOR node can be used to convert voltage into temperature measurements. Parameters ---------- number : int - Defines the number of temperature sensors connected to the LabJack U3 device. + Defines the number of temperature sensors connected to the LabJack U3 device. diff --git a/docs/nodes/IO/INSTRUMENTS/MOCK/WEINSCHEL8320/a1-[autogen]/docstring.txt b/docs/nodes/IO/INSTRUMENTS/MOCK/WEINSCHEL8320/a1-[autogen]/docstring.txt index 445682604..d725b25f4 100644 --- a/docs/nodes/IO/INSTRUMENTS/MOCK/WEINSCHEL8320/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/INSTRUMENTS/MOCK/WEINSCHEL8320/a1-[autogen]/docstring.txt @@ -1,12 +1,11 @@ -Note this node is for testing purposes only. +The WEINSCHEL8320 node mocks the WEINSCHEL 8320 instrument, which attenuates the input signal. - The WEINSCHEL8320 node mocks the instrument WEINSCHEL 8320. - The Weinschel 8320 attenuates the input signal. + Note: This node is for testing purposes only.
 Parameters ---------- attenuation : int - Value that the instrument would attenuate the input signal (mocked). + Value at which the instrument would attenuate the input signal (mocked). Returns ------- diff --git a/docs/nodes/IO/INSTRUMENTS/QCODES/CLOSE_ALL/a1-[autogen]/docstring.txt b/docs/nodes/IO/INSTRUMENTS/QCODES/CLOSE_ALL/a1-[autogen]/docstring.txt index dff87e7f9..918cc77c7 100644 --- a/docs/nodes/IO/INSTRUMENTS/QCODES/CLOSE_ALL/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/INSTRUMENTS/QCODES/CLOSE_ALL/a1-[autogen]/docstring.txt @@ -1,7 +1,6 @@ -The CLOSE_ALL node closes all qcodes instruments and should be ran at - the end of each Flojoy app that uses qcodes (and possibly the beginning). +The CLOSE_ALL node closes all QCoDeS instruments and should be run at the start and end of each Flojoy app that uses QCoDeS. Returns ------- DataContainer - optional: The input DataContainer is returned. + optional: The input DataContainer is returned. diff --git a/docs/nodes/IO/INSTRUMENTS/SOURCEMETERS/KEITHLEY/24XX/BASIC/IV_SWEEP/a1-[autogen]/docstring.txt b/docs/nodes/IO/INSTRUMENTS/SOURCEMETERS/KEITHLEY/24XX/BASIC/IV_SWEEP/a1-[autogen]/docstring.txt index 474beb1cb..862fbadae 100644 --- a/docs/nodes/IO/INSTRUMENTS/SOURCEMETERS/KEITHLEY/24XX/BASIC/IV_SWEEP/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/INSTRUMENTS/SOURCEMETERS/KEITHLEY/24XX/BASIC/IV_SWEEP/a1-[autogen]/docstring.txt @@ -1,8 +1,12 @@ -The KEITHLEY2400 node takes a IV curve measurement with a Keithley 2400 source meter, send voltages, and measures currents. +The KEITHLEY2400 node takes an I-V curve measurement with a Keithley 2400 source meter, sends voltages, and measures currents. Parameters - ----------- - comport : string - defines the serial communication port for the Keithley2400 source meter. + ---------- + comport : str + Defines the serial communication port for the Keithley2400 source meter. baudrate : float - specifies the baud rate for the serial communication between the Keithley2400 and the computer + Specifies the baud rate for the serial communication between the Keithley2400 and the computer. + + Returns + ------- + OrderedPair diff --git a/docs/nodes/IO/MOTION/MOTOR_DRIVER/STEPPER/POLULU/TIC/a1-[autogen]/docstring.txt b/docs/nodes/IO/MOTION/MOTOR_DRIVER/STEPPER/POLULU/TIC/a1-[autogen]/docstring.txt index fc690317e..cafc92d37 100644 --- a/docs/nodes/IO/MOTION/MOTOR_DRIVER/STEPPER/POLULU/TIC/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/MOTION/MOTOR_DRIVER/STEPPER/POLULU/TIC/a1-[autogen]/docstring.txt @@ -1,4 +1,4 @@ -The STEPPER_DRIVER_TIC node controls a stepper motor movement with a TIC driver. +The STEPPER_DRIVER_TIC node controls a stepper motor's movement with a TIC driver. The user defines the speed and the sleep time between movements. diff --git a/docs/nodes/IO/MOTION/MOTOR_DRIVER/STEPPER/POLULU/TIC_KNOB/a1-[autogen]/docstring.txt b/docs/nodes/IO/MOTION/MOTOR_DRIVER/STEPPER/POLULU/TIC_KNOB/a1-[autogen]/docstring.txt index 447ce525e..529afb278 100644 --- a/docs/nodes/IO/MOTION/MOTOR_DRIVER/STEPPER/POLULU/TIC_KNOB/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/MOTION/MOTOR_DRIVER/STEPPER/POLULU/TIC_KNOB/a1-[autogen]/docstring.txt @@ -1,6 +1,6 @@ -The STEPPER_DRIVER_TIC_KNOB controls a stepper motor movement with a TIC driver. +The STEPPER_DRIVER_TIC_KNOB controls a stepper motor's movement with a TIC driver. - The user controls the motor rotation with the knob position in the node's parameters. + The user controls the motor's rotation with the knob position, specified in the node's parameters.
Parameters ---------- diff --git a/docs/nodes/IO/PROTOCOLS/SERIAL/BASIC/SERIAL_SINGLE_MEASUREMENT/a1-[autogen]/docstring.txt b/docs/nodes/IO/PROTOCOLS/SERIAL/BASIC/SERIAL_SINGLE_MEASUREMENT/a1-[autogen]/docstring.txt index a21e3b24a..4796448e8 100644 --- a/docs/nodes/IO/PROTOCOLS/SERIAL/BASIC/SERIAL_SINGLE_MEASUREMENT/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/PROTOCOLS/SERIAL/BASIC/SERIAL_SINGLE_MEASUREMENT/a1-[autogen]/docstring.txt @@ -1,4 +1,4 @@ -The SERIAL_SINGLE_MEASUREMENT node takes a single reading of data from an Ardunio or a similar serial device. +The SERIAL_SINGLE_MEASUREMENT node takes a single reading of data from an Arduino or a similar serial device. Parameters ---------- diff --git a/docs/nodes/IO/PROTOCOLS/SERIAL/BASIC/SERIAL_TIMESERIES/a1-[autogen]/docstring.txt b/docs/nodes/IO/PROTOCOLS/SERIAL/BASIC/SERIAL_TIMESERIES/a1-[autogen]/docstring.txt index 660071c9e..6c0503b23 100644 --- a/docs/nodes/IO/PROTOCOLS/SERIAL/BASIC/SERIAL_TIMESERIES/a1-[autogen]/docstring.txt +++ b/docs/nodes/IO/PROTOCOLS/SERIAL/BASIC/SERIAL_TIMESERIES/a1-[autogen]/docstring.txt @@ -1,4 +1,4 @@ -The SERIAL_TIMESERIES node extracts simple time-dependent 1D data from an Ardunio or a similar serial device. +The SERIAL_TIMESERIES node extracts simple time-dependent 1D data from an Arduino or a similar serial device. Parameters ---------- diff --git a/docs/nodes/LOADERS/LOCAL_FILE_SYSTEM/LOCAL_FILE/a1-[autogen]/docstring.txt b/docs/nodes/LOADERS/LOCAL_FILE_SYSTEM/LOCAL_FILE/a1-[autogen]/docstring.txt index 8ec6490eb..7634cca41 100644 --- a/docs/nodes/LOADERS/LOCAL_FILE_SYSTEM/LOCAL_FILE/a1-[autogen]/docstring.txt +++ b/docs/nodes/LOADERS/LOCAL_FILE_SYSTEM/LOCAL_FILE/a1-[autogen]/docstring.txt @@ -3,24 +3,20 @@ The LOCAL_FILE node loads a local file of a different type and converts it to a Parameters ---------- file_path : str - path to the file to be loaded - default : Optional[TextBlob] - If this input node is connected, the filename will be taken from - the output of the connected node. To be used in conjunction with batch processing + Path to the file to be loaded. + default : Optional[TextBlob] + If this input node is connected, the file name will be taken from + the output of the connected node. + To be used in conjunction with batch processing. file_type : str - type of file to load, default = image - - Notes - ----- - If both file_path and default are not specified when `file_type="Image"`, a default image will be loaded. - - Raises - ------ - ValueError - If the file path is not specified and the default input is not connected, a ValueError is raised. + Type of file to load, default = image. + If both 'file_path' and 'default' are not specified when 'file_type="Image"', + a default image will be loaded. + If the file path is not specified and the default input is not connected, + a ValueError is raised. Returns ------- - Image|DataFrame - Image for file_type 'image' - DataFrame for file_type 'json', 'csv', 'excel', 'xml' + Image | DataFrame + Image for file_type 'image'. + DataFrame for file_type 'json', 'csv', 'excel', and 'xml'. 
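As an illustration of the file_type dispatch that the LOCAL_FILE docstring above describes, here is a hypothetical helper (a sketch only, not the node's actual implementation) built from Pillow and pandas readers for the supported types.

```python
import pandas as pd
from PIL import Image


def load_local_file(file_path: str, file_type: str = "image"):
    # Hypothetical sketch: the real node also wraps results in Flojoy
    # containers and falls back to a default image when no path is given.
    if file_type.lower() == "image":
        return Image.open(file_path)
    readers = {
        "json": pd.read_json,
        "csv": pd.read_csv,
        "excel": pd.read_excel,
        "xml": pd.read_xml,
    }
    if file_type not in readers:
        raise ValueError(f"Unsupported file_type: {file_type}")
    return readers[file_type](file_path)


# Example (hypothetical file): df = load_local_file("data.csv", file_type="csv")
```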
diff --git a/docs/nodes/LOADERS/REMOTE_FILE_SYSTEM/REMOTE_FILE/a1-[autogen]/docstring.txt b/docs/nodes/LOADERS/REMOTE_FILE_SYSTEM/REMOTE_FILE/a1-[autogen]/docstring.txt index 3ad916105..f58d91c47 100644 --- a/docs/nodes/LOADERS/REMOTE_FILE_SYSTEM/REMOTE_FILE/a1-[autogen]/docstring.txt +++ b/docs/nodes/LOADERS/REMOTE_FILE_SYSTEM/REMOTE_FILE/a1-[autogen]/docstring.txt @@ -1,30 +1,24 @@ The REMOTE_FILE node loads a remote file using an HTTP URL and converts it to a DataContainer class. + Note: If both the file_url and default are not specified when file_type="Image", a default image will be loaded. + + For now, REMOTE_FILE only supports HTTP file URLs. In particular, GCP URLs (starting with gcp://), S3 URLs (starting with s3://), and other bucket-like URLs are not supported. + + If the file url is not specified and the default input is not connected, or if the file url is not a valid URL, a ValueError is raised. + Parameters ---------- file_url : str - URL the file to be loaded - default : Optional[TextBlob] + URL of the file to be loaded. + default : Optional[TextBlob] If this input node is connected, the file URL will be taken from - the output of the connected node. To be used in conjunction with batch processing + the output of the connected node. + To be used in conjunction with batch processing. file_type : str - type of file to load, default = image - - Notes - ----- - If both file_url and default are not specified when `file_type="Image"`, a default image will be loaded. - - REMOTE_FILE for now only supports HTTP file URLs, in particular GCP URL (starting with `gcp://`), - S3 URL (starting with `s3://`) and other bucket-like URLs are not supported. - - Raises - ------ - ValueError - If the file url is not specified and the default input is not connected, a ValueError is raised. - If the file url is not a valid URL, a ValueError is raised. + Type of file to load, default = image. Returns ------- Image|DataFrame - Image for file_type 'image' - DataFrame for file_type 'json', 'csv', 'excel', 'xml' + Image for file_type 'image'. + DataFrame for file_type 'json', 'csv', 'excel', 'xml'. diff --git a/docs/nodes/NUMPY/LINALG/EIG/a1-[autogen]/docstring.txt b/docs/nodes/NUMPY/LINALG/EIG/a1-[autogen]/docstring.txt index a24160ea0..62fae98ac 100644 --- a/docs/nodes/NUMPY/LINALG/EIG/a1-[autogen]/docstring.txt +++ b/docs/nodes/NUMPY/LINALG/EIG/a1-[autogen]/docstring.txt @@ -1,19 +1,18 @@ - The EIG node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute the eigenvalues and right eigenvectors of a square array. + Compute the eigenvalues and right eigenvectors of a square array. -Parameters ----------- -select_return : This function has returns for multiple objects ['w', 'v']. - Select the desired one to return. - See the respective function docs for descriptors. -a : (..., M, M) array - Matrices for which the eigenvalues and right eigenvectors will be computed. + Parameters + ---------- + select_return : 'w', 'v' + Select the desired object to return. + See the respective function docs for descriptors. + a : (..., M, M) array + Matrices for which the eigenvalues and right eigenvectors will be computed.
-Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/NUMPY/LINALG/PINV/a1-[autogen]/docstring.txt b/docs/nodes/NUMPY/LINALG/PINV/a1-[autogen]/docstring.txt index 1133c2f05..9d43a2fd1 100644 --- a/docs/nodes/NUMPY/LINALG/PINV/a1-[autogen]/docstring.txt +++ b/docs/nodes/NUMPY/LINALG/PINV/a1-[autogen]/docstring.txt @@ -1,31 +1,30 @@ - The PINV node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute the (Moore-Penrose) pseudo-inverse of a matrix. + Compute the (Moore-Penrose) pseudo-inverse of a matrix. - Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all *large* singular values. + Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all **large** singular values. -.. versionchanged:: 1.14 - Can now operate on stacks of matrices + .. versionchanged:: 1.14 + Can now operate on stacks of matrices -Parameters ----------- -a : (..., M, N) array_like - Matrix or stack of matrices to be pseudo-inverted. -rcond : (...) array_like of float - Cutoff for small singular values. - Singular values less than or equal to "rcond * largest_singular_value" are set to zero. - Broadcasts against the stack of matrices. -hermitian : bool, optional - If True, "a" is assumed to be Hermitian (symmetric if real-valued), enabling a more - efficient method for finding singular values. - Defaults to False. + Parameters + ---------- + a : (..., M, N) array_like + Matrix or stack of matrices to be pseudo-inverted. + rcond : (...) array_like of float + Cutoff for small singular values. + Singular values less than or equal to "rcond * largest_singular_value" are set to zero. + Broadcasts against the stack of matrices. + hermitian : bool, optional + If True, "a" is assumed to be Hermitian (symmetric if real-valued), enabling a more + efficient method for finding singular values. + Defaults to False. -.. versionadded:: 1.17.0 + .. versionadded:: 1.17.0 -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/NUMPY/LINALG/QR/a1-[autogen]/docstring.txt b/docs/nodes/NUMPY/LINALG/QR/a1-[autogen]/docstring.txt index c9b39e193..82e80bffc 100644 --- a/docs/nodes/NUMPY/LINALG/QR/a1-[autogen]/docstring.txt +++ b/docs/nodes/NUMPY/LINALG/QR/a1-[autogen]/docstring.txt @@ -1,35 +1,33 @@ - The QR node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute the qr factorization of a matrix. + Compute the qr factorization of a matrix. - Factor the matrix 'a' as *qr*, where 'q' is orthonormal and 'r' is upper-triangular. + Factor the matrix 'a' as *qr*, where 'q' is orthonormal and 'r' is upper-triangular. -Parameters ----------- -select_return : This function has returns for multiple objects ['q', 'r', '(h, tau)']. - Select the desired one to return. - See the respective function docs for descriptors. -a : array_like, shape (..., M, N) - An array-like object with the dimensionality of at least 2. 
-mode : {'reduced', 'complete', 'r', 'raw'}, optional - If K = min(M, N), then: - 'reduced' : returns q, r with dimensions (..., M, K), (..., K, N) (default) - 'complete' : returns q, r with dimensions (..., M, M), (..., M, N) - 'r' : returns r only with dimensions (..., K, N) - 'raw' : returns h, tau with dimensions (..., N, M), (..., K,) + For the 'mode' parameter, the options 'reduced', 'complete', and 'raw' are new in numpy 1.8 (see the notes for more information). + The default is 'reduced', and to maintain backward compatibility with earlier versions of numpy, both it and the old default 'full' can be omitted. + Note that array h returned in 'raw' mode is transposed for calling Fortran. + The 'economic' mode is deprecated. + The modes 'full' and 'economic' may be passed using only the first letter for backwards compatibility, + but all others must be spelled out (see the Notes for further explanation). - The options 'reduced', 'complete, and 'raw' are new in numpy 1.8 (see the notes for more information). - The default is 'reduced', and to maintain backward compatibility with earlier versions of numpy, - both it and the old default 'full' can be omitted. - Note that array h returned in 'raw' mode is transposed for calling Fortran. - The 'economic' mode is deprecated. - The modes 'full' and 'economic' may be passed using only the first letter for backwards compatibility, - but all others must be spelled out (see the Notes for further explanation). + Parameters + ---------- + select_return : 'q', 'r', '(h, tau)' + Select the desired object to return. + See the respective function docs for descriptors. + a : array_like, shape (..., M, N) + An array-like object with the dimensionality of at least 2. + mode : {'reduced', 'complete', 'r', 'raw'}, optional + If K = min(M, N), then: + 'reduced' : returns q, r with dimensions (..., M, K), (..., K, N) (default) + 'complete' : returns q, r with dimensions (..., M, M), (..., M, N) + 'r' : returns r only with dimensions (..., K, N) + 'raw' : returns h, tau with dimensions (..., N, M), (..., K,) -Returns ------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/NUMPY/LINALG/SLOGDET/a1-[autogen]/docstring.txt b/docs/nodes/NUMPY/LINALG/SLOGDET/a1-[autogen]/docstring.txt index ef2809663..2bbbb0273 100644 --- a/docs/nodes/NUMPY/LINALG/SLOGDET/a1-[autogen]/docstring.txt +++ b/docs/nodes/NUMPY/LINALG/SLOGDET/a1-[autogen]/docstring.txt @@ -1,23 +1,22 @@ - The SLOGDET node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute the sign and (natural) logarithm of the determinant of an array. + Compute the sign and (natural) logarithm of the determinant of an array. - If an array has a very small or very large determinant, then a call to 'det' may overflow or underflow. + If an array has a very small or very large determinant, then a call to 'det' may overflow or underflow. - This routine is more robust against such issues, because it computes the logarithm of the determinant rather than the determinant itself. + This routine is more robust against such issues, because it computes the logarithm of the determinant rather than the determinant itself. -Parameters ----------- -select_return : This function has returns multiple objects []'sign', 'logdet']. - Select the desired one to return. - See the respective function documents for descriptors.
-a : (..., M, M) array_like - Input array, has to be a square 2-D array. + Parameters + ---------- + select_return : 'sign', 'logdet' + Select the desired object to return. + See the respective function documents for descriptors. + a : (..., M, M) array_like + Input array, has to be a square 2-D array. -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/NUMPY/LINALG/SVD/a1-[autogen]/docstring.txt b/docs/nodes/NUMPY/LINALG/SVD/a1-[autogen]/docstring.txt index 92f714980..1b4ab1b84 100644 --- a/docs/nodes/NUMPY/LINALG/SVD/a1-[autogen]/docstring.txt +++ b/docs/nodes/NUMPY/LINALG/SVD/a1-[autogen]/docstring.txt @@ -1,35 +1,34 @@ - The SVD node is based on a numpy or scipy function. -The description of that function is as follows: - - Singular Value Decomposition. - - When 'a' is a 2D array, and "full_matrices=False", then it is factorized as "u @ np.diag(s) @ vh = (u * s) @ vh", - where 'u' and the Hermitian transpose of 'vh' are 2D arrays with orthonormal columns and 's' is a 1D array of 'a' singular values. - - When 'a' is higher-dimensional, SVD is applied in stacked mode as explained below. - -Parameters ----------- -select_return : This function has returns multiple objects ['u', 's', 'vh']. - Select the desired one to return. - See the respective function docs for descriptors. -a : (..., M, N) array_like - A real or complex array with "a.ndim >= 2". -full_matrices : bool, optional - If True (default), 'u' and 'vh' have the shapes "(..., M, M)" and "(..., N, N)", respectively. - Otherwise, the shapes are "(..., M, K)" and "(..., K, N)", respectively, where "K = min(M, N)". -compute_uv : bool, optional - Whether or not to compute 'u' and 'vh' in addition to 's'. - True by default. -hermitian : bool, optional - If True, 'a' is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. - Defaults to False. - -.. versionadded:: 1.17.0 - -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + The description of that function is as follows: + + Singular Value Decomposition. + + When 'a' is a 2D array, and "full_matrices=False", then it is factorized as "u @ np.diag(s) @ vh = (u * s) @ vh", + where 'u' and the Hermitian transpose of 'vh' are 2D arrays with orthonormal columns and 's' is a 1D array of 'a' singular values. + + When 'a' is higher-dimensional, SVD is applied in stacked mode as explained below. + + Parameters + ---------- + select_return : 'u', 's', 'vh' + Select the desired object to return. + See the respective function docs for descriptors. + a : (..., M, N) array_like + A real or complex array with "a.ndim >= 2". + full_matrices : bool, optional + If True (default), 'u' and 'vh' have the shapes "(..., M, M)" and "(..., N, N)", respectively. + Otherwise, the shapes are "(..., M, K)" and "(..., K, N)", respectively, where "K = min(M, N)". + compute_uv : bool, optional + Whether or not to compute 'u' and 'vh' in addition to 's'. + True by default. + hermitian : bool, optional + If True, 'a' is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. + Defaults to False. + + .. 
versionadded:: 1.17.0 + + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/SIGNAL/ARGRELMAX/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/SIGNAL/ARGRELMAX/a1-[autogen]/docstring.txt index c7760fde4..825a6dc1c 100644 --- a/docs/nodes/SCIPY/SIGNAL/ARGRELMAX/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/SIGNAL/ARGRELMAX/a1-[autogen]/docstring.txt @@ -1,25 +1,25 @@ - The ARGRELMAX node is based on a numpy or scipy function. -The description of that function is as follows: - Calculate the relative maxima of `data`. + The description of that function is as follows: + + Calculate the relative maxima of 'data'. -Parameters ----------- -data : ndarray - Array in which to find the relative maxima. -axis : int, optional - Axis over which to select from 'data'. Default is 0. -order : int, optional - How many points on each side to use for the comparison - to consider "comparator(n, n+x)" to be True. -mode : str, optional - How the edges of the vector are treated. - Available options are 'wrap' (wrap around) or 'clip' (treat overflow - as the same as the last (or first) element). - Default 'clip'. See numpy.take. + Parameters + ---------- + data : ndarray + Array in which to find the relative maxima. + axis : int, optional + Axis over which to select from 'data'. Default is 0. + order : int, optional + How many points on each side to use for the comparison + to consider "comparator(n, n+x)" to be True. + mode : str, optional + How the edges of the vector are treated. + Available options are 'wrap' (wrap around) or 'clip' (treat overflow + as the same as the last (or first) element). + Default 'clip'. See numpy.take. -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/SIGNAL/DETREND/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/SIGNAL/DETREND/a1-[autogen]/docstring.txt index 3a1f8f402..ded390873 100644 --- a/docs/nodes/SCIPY/SIGNAL/DETREND/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/SIGNAL/DETREND/a1-[autogen]/docstring.txt @@ -1,30 +1,30 @@ - The DETREND node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Remove linear trend along axis from data. + Remove a linear trend along an axis from data. -Parameters ----------- -data : array_like - The input data. -axis : int, optional - The axis along which to detrend the data. By default this is the - last axis (-1). -type : {'linear', 'constant'}, optional - The type of detrending. If type == 'linear' (default), - the result of a linear least-squares fit to 'data' is subtracted from 'data'. - If type == 'constant', only the mean of 'data' is subtracted. -bp : array_like of ints, optional - A sequence of break points. If given, an individual linear fit is - performed for each part of 'data' between two break points. - Break points are specified as indices into 'data'. This parameter - only has an effect when type == 'linear'. -overwrite_data : bool, optional - If True, perform in place detrending and avoid a copy. Default is False. + Parameters + ---------- + data : array_like + The input data. + axis : int, optional + The axis along which to detrend the data. + By default this is the last axis (-1). + type : {'linear', 'constant'}, optional + The type of detrending. 
If type == 'linear' (default), + the result of a linear least-squares fit to 'data' is subtracted from 'data'. + If type == 'constant', only the mean of 'data' is subtracted. + bp : array_like of ints, optional + A sequence of break points. If given, an individual linear fit is + performed for each part of 'data' between two break points. + Break points are specified as indices into 'data'. + This parameter only has an effect when type == 'linear'. + overwrite_data : bool, optional + If True, perform in place detrending and avoid a copy. + Default is False. -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/SIGNAL/PERIODOGRAM/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/SIGNAL/PERIODOGRAM/a1-[autogen]/docstring.txt index 6d6ef59f8..1751e5dde 100644 --- a/docs/nodes/SCIPY/SIGNAL/PERIODOGRAM/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/SIGNAL/PERIODOGRAM/a1-[autogen]/docstring.txt @@ -1,48 +1,53 @@ - The PERIODOGRAM node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Estimate power spectral density using a periodogram. + Estimate power spectral density using a periodogram. -Parameters ----------- -select_return : This function has returns multiple objects ['f', 'Pxx']. - Select the desired one to return. - See the respective function docs for descriptors. -x : array_like - Time series of measurement values. -fs : float, optional - Sampling frequency of the 'x' time series. Defaults to 1.0. -window : str or tuple or array_like, optional - Desired window to use. If 'window' is a string or tuple, it is - passed to 'get_window' to generate the window values, which are - DFT-even by default. See 'get_window' for a list of windows and - required parameters. If 'window' is array_like it will be used - directly as the window and its length must be nperseg. - Defaults to 'boxcar'. -nfft : int, optional - Length of the FFT used. If 'None' the length of 'x' will be used. -detrend : str or function or 'False', optional - Specifies how to detrend each segment. If 'detrend' is a - string, it is passed as the 'type' argument to the 'detrend' function. - If it is a function, it takes a segment and returns a - detrended segment. If 'detrend' is 'False', no detrending is done. - Defaults to 'constant'. -return_onesided : bool, optional - If 'True', return a one-sided spectrum for real data. - If 'False' return a two-sided spectrum. Defaults to 'True', but for - complex data, a two-sided spectrum is always returned. -scaling : { 'density', 'spectrum' }, optional - Selects between computing the power spectral density ('density') - where 'Pxx' has units of V**2/Hz and computing the power - spectrum ('spectrum') where 'Pxx' has units of V**2, if 'x' - is measured in V and 'fs' is measured in Hz. Defaults to 'density'. -axis : int, optional - Axis along which the periodogram is computed; the default is - over the last axis (i.e. axis=-1). + Parameters + ---------- + select_return : 'f', 'Pxx'. + Select the desired object to return. + See the respective function docs for descriptors. + x : array_like + Time series of measurement values. + fs : float, optional + Sampling frequency of the 'x' time series. + Defaults to 1.0. + window : str or tuple or array_like, optional + Desired window to use. 
+ If 'window' is a string or tuple, it is passed to 'get_window' to + generate the window values, which are DFT-even by default. + See 'get_window' for a list of windows and required parameters. + If 'window' is array_like, it will be used directly as the window + and its length must be nperseg. + Defaults to 'boxcar'. + nfft : int, optional + Length of the FFT used. + If 'None', the length of 'x' will be used. + detrend : str or function or 'False', optional + Specifies how to detrend each segment. + If 'detrend' is a string, it is passed as the 'type' argument + to the 'detrend' function. + If it is a function, it takes a segment and returns a detrended segment. + If 'detrend' is 'False', no detrending is done. + Defaults to 'constant'. + return_onesided : bool, optional + If 'True', return a one-sided spectrum for real data. + If 'False', return a two-sided spectrum. + Defaults to 'True', but for complex data, + a two-sided spectrum is always returned. + scaling : { 'density', 'spectrum' }, optional + Selects between computing the power spectral density ('density') + where 'Pxx' has units of V**2/Hz and computing the power + spectrum ('spectrum') where 'Pxx' has units of V**2, if 'x' + is measured in V and 'fs' is measured in Hz. + Defaults to 'density'. + axis : int, optional + Axis along which the periodogram is computed; + the default is over the last axis (i.e. axis=-1). -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/SIGNAL/SAVGOL_FILTER/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/SIGNAL/SAVGOL_FILTER/a1-[autogen]/docstring.txt index fd7d28fb4..4b998c4fd 100644 --- a/docs/nodes/SCIPY/SIGNAL/SAVGOL_FILTER/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/SIGNAL/SAVGOL_FILTER/a1-[autogen]/docstring.txt @@ -1,48 +1,48 @@ - The SAVGOL_FILTER node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Apply a Savitzky-Golay filter to an array. + Apply a Savitzky-Golay filter to an array. - This is a 1-D filter. If 'x' has dimension greater than 1, 'axis' determines the axis along which the filter is applied. + This is a 1-D filter. If 'x' has a dimension greater than 1, 'axis' determines the axis along which the filter is applied. -Parameters ----------- -x : array_like - The data to be filtered. If 'x' is not a single or double precision - floating point array, it will be converted to type numpy.float64 before filtering. -window_length : int - The length of the filter window (i.e., the number of coefficients). - If 'mode' is 'interp', 'window_length' must be less than or equal to the size of 'x'. -polyorder : int - The order of the polynomial used to fit the samples. - 'polyorder' must be less than 'window_length'. -deriv : int, optional - The order of the derivative to compute. This must be a - nonnegative integer. The default is 0, which means to filter - the data without differentiating. -delta : float, optional - The spacing of the samples to which the filter will be applied. - This is only used if deriv > 0. Default is 1.0. -axis : int, optional - The axis of the array 'x' along which the filter is to be applied. - Default is -1. -mode : str, optional - Must be 'mirror', 'constant', 'nearest', 'wrap' or 'interp'. This - determines the type of extension to use for the padded signal to - which the filter is applied. 
When 'mode' is 'constant', the padding - value is given by 'cval'. See the Notes for more details on 'mirror', - 'constant', 'wrap', and 'nearest'. - When the 'interp' mode is selected (the default), no extension - is used. Instead, a degree 'polyorder' polynomial is fit to the - last 'window_length' values of the edges, and this polynomial is - used to evaluate the last 'window_length // 2' output values. -cval : scalar, optional - Value to fill past the edges of the input if 'mode' is 'constant'. - Default is 0.0. + Parameters + ---------- + x : array_like + The data to be filtered. + If 'x' is not a single or double precision floating point array, + it will be converted to type numpy.float64 before filtering. + window_length : int + The length of the filter window (i.e., the number of coefficients). + If 'mode' is 'interp', 'window_length' must be less than or equal to the size of 'x'. + polyorder : int + The order of the polynomial used to fit the samples. + 'polyorder' must be less than 'window_length'. + deriv : int, optional + The order of the derivative to compute. + This must be a nonnegative integer. + The default is 0, which means to filter the data without differentiating. + delta : float, optional + The spacing of the samples to which the filter will be applied. + This is only used if deriv > 0. Default is 1.0. + axis : int, optional + The axis of the array 'x' along which the filter is to be applied. + Default is -1. + mode : str, optional + Must be 'mirror', 'constant', 'nearest', 'wrap' or 'interp'. + This determines the type of extension to use for the padded signal to + which the filter is applied. + When 'mode' is 'constant', the padding value is given by 'cval'. + See the Notes for more details on 'mirror', 'constant', 'wrap', and 'nearest'. + When the 'interp' mode is selected (the default), no extension is used. + Instead, a degree 'polyorder' polynomial is fit to the last + 'window_length' values of the edges, and this polynomial is + used to evaluate the last 'window_length // 2' output values. + cval : scalar, optional + Value to fill past the edges of the input if 'mode' is 'constant'. + Default is 0.0. -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/SIGNAL/STFT/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/SIGNAL/STFT/a1-[autogen]/docstring.txt index 175101c63..507217444 100644 --- a/docs/nodes/SCIPY/SIGNAL/STFT/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/SIGNAL/STFT/a1-[autogen]/docstring.txt @@ -1,75 +1,79 @@ - The STFT node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute the Short Time Fourier Transform (STFT). + Compute the Short Time Fourier Transform (STFT). - STFTs can be used as a way of quantifying the change of a nonstationary signal's frequency and phase content over time. + STFTs can be used as a way of quantifying the change of a nonstationary signal's frequency and phase content over time. -Parameters ----------- -select_return : This function has returns multiple objects []'f', 't', 'Zxx']. - Select the desired one to return. - See the respective function docs for descriptors. -x : array_like - Time series of measurement values -fs : float, optional - Sampling frequency of the 'x' time series. Defaults to 1.0. -window : str or tuple or array_like, optional - Desired window to use. 
If 'window' is a string or tuple, it is - passed to 'get_window' to generate the window values, which are - DFT-even by default. See 'get_window' for a list of windows and - required parameters. If 'window' is array_like it will be used - directly as the window and its length must be nperseg. Defaults - to a Hann window. -nperseg : int, optional - Length of each segment. Defaults to 256. -noverlap : int, optional - Number of points to overlap between segments. If 'None', - noverlap = nperseg // 2. Defaults to 'None'. When - specified, the COLA constraint must be met (see Notes below). -nfft : int, optional - Length of the FFT used, if a zero padded FFT is desired. If - 'None', the FFT length is 'nperseg'. Defaults to 'None'. -detrend : str or function or 'False', optional - Specifies how to detrend each segment. If 'detrend' is a - string, it is passed as the 'type' argument to the 'detrend' - function. If it is a function, it takes a segment and returns a - detrended segment. If 'detrend' is 'False', no detrending is - done. Defaults to 'False'. -return_onesided : bool, optional - If 'True', return a one-sided spectrum for real data. - If 'False' return a two-sided spectrum. Defaults to 'True', but for - complex data, a two-sided spectrum is always returned. -boundary : str or None, optional - Specifies whether the input signal is extended at both ends, and - how to generate the new values, in order to center the first - windowed segment on the first input point. This has the benefit - of enabling reconstruction of the first input point when the - employed window function starts at zero. Valid options are - ['even', 'odd', 'constant', 'zeros', None]. Defaults to - 'zeros', for zero padding extension. I.e. [1, 2, 3, 4] is - extended to [0, 1, 2, 3, 4, 0] for nperseg=3. -padded : bool, optional - Specifies whether the input signal is zero-padded at the end to - make the signal fit exactly into an integer number of window - segments, so that all of the signal is included in the output. - Defaults to 'True'. Padding occurs after boundary extension, if - 'boundary' is not 'None', and 'padded' is 'True', as is the - default. -axis : int, optional - Axis along which the STFT is computed; the default is over the - last axis (i.e. axis=-1). -scaling: {'spectrum', 'psd'} - The default 'spectrum' scaling allows each frequency line of 'Zxx' to - be interpreted as a magnitude spectrum. The 'psd' option scales each - line to a power spectral density - it allows to calculate the signal's - energy by numerically integrating over abs(Zxx)**2. + Parameters + ---------- + select_return : 'f', 't', 'Zxx' + Select the desired object to return. + See the respective function docs for descriptors. + x : array_like + Time series of measurement values. + fs : float, optional + Sampling frequency of the 'x' time series. + Defaults to 1.0. + window : str or tuple or array_like, optional + Desired window to use. + If 'window' is a string or tuple, it is passed to 'get_window' + to generate the window values, which are DFT-even by default. + See 'get_window' for a list of windows and required parameters. + If 'window' is array_like it will be used directly as the window + and its length must be nperseg. + Defaults to a Hann window. + nperseg : int, optional + Length of each segment. + Defaults to 256. + noverlap : int, optional + Number of points to overlap between segments. + If 'None', noverlap = nperseg // 2. + Defaults to 'None'. + When specified, the COLA constraint must be met (see Notes below). 
+ nfft : int, optional + Length of the FFT used, if a zero padded FFT is desired. + If 'None', the FFT length is 'nperseg'. + Defaults to 'None'. + detrend : str or function or 'False', optional + Specifies how to detrend each segment. + If 'detrend' is a string, it is passed as the 'type' argument to the 'detrend' function. + If it is a function, it takes a segment and returns a detrended segment. + If 'detrend' is 'False', no detrending is done. + Defaults to 'False'. + return_onesided : bool, optional + If 'True', return a one-sided spectrum for real data. + If 'False' return a two-sided spectrum. + Defaults to 'True', but for complex data, a two-sided spectrum is always returned. + boundary : str or None, optional + Specifies whether the input signal is extended at both ends, and + how to generate the new values, in order to center the first + windowed segment on the first input point. + This has the benefit of enabling reconstruction of the first input point + when the employed window function starts at zero. + Valid options are ['even', 'odd', 'constant', 'zeros', None]. + Defaults to 'zeros', for zero padding extension. + I.e. [1, 2, 3, 4] is extended to [0, 1, 2, 3, 4, 0] for nperseg=3. + padded : bool, optional + Specifies whether the input signal is zero-padded at the end to + make the signal fit exactly into an integer number of window + segments, so that all of the signal is included in the output. + Defaults to 'True'. + Padding occurs after boundary extension, if 'boundary' is not 'None', + and 'padded' is 'True', as is the default. + axis : int, optional + Axis along which the STFT is computed. + The default is over the last axis (i.e. axis=-1). + scaling: {'spectrum', 'psd'} + The default 'spectrum' scaling allows each frequency line of 'Zxx' to + be interpreted as a magnitude spectrum. + The 'psd' option scales each line to a power spectral density. + It allows to calculate the signal's energy by numerically integrating over abs(Zxx)**2. -.. versionadded:: 1.9.0 + .. versionadded:: 1.9.0 -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/SIGNAL/WELCH/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/SIGNAL/WELCH/a1-[autogen]/docstring.txt index ed84d832c..70611ea42 100644 --- a/docs/nodes/SCIPY/SIGNAL/WELCH/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/SIGNAL/WELCH/a1-[autogen]/docstring.txt @@ -1,64 +1,68 @@ - The WELCH node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Estimate power spectral density using Welch's method. + Estimate power spectral density using Welch's method. - Welch's method [1]_ computes an estimate of the power spectral density by dividing the data into overlapping segments, - computing a modified periodogram for each segment, and averaging the periodograms. + Welch's method [1]_ computes an estimate of the power spectral density by dividing the data into overlapping segments, + computing a modified periodogram for each segment, and averaging the periodograms. -Parameters ----------- -select_return : This function has returns multiple objects ['f', 'Pxx']. - Select the desired one to return. - See the respective function docs for descriptors. -x : array_like - Time series of measurement values -fs : float, optional - Sampling frequency of the 'x' time series. Defaults to 1.0. 
-window : str or tuple or array_like, optional - Desired window to use. If 'window' is a string or tuple, it is - passed to 'get_window' to generate the window values, which are - DFT-even by default. See 'get_window' for a list of windows and - required parameters. If 'window' is array_like it will be used - directly as the window and its length must be nperseg. Defaults - to a Hann window. -nperseg : int, optional - Length of each segment. Defaults to None, but if window is str or - tuple, is set to 256, and if window is array_like, is set to the - length of the window. -noverlap : int, optional - Number of points to overlap between segments. If 'None', - noverlap = nperseg // 2. Defaults to 'None'. -nfft : int, optional - Length of the FFT used, if a zero padded FFT is desired. - If 'None', the FFT length is 'nperseg'. Defaults to 'None'. -detrend : str or function or 'False', optional - Specifies how to detrend each segment. If 'detrend' is a - string, it is passed as the 'type' argument to the 'detrend' - function. If it is a function, it takes a segment and returns a - detrended segment. If 'detrend' is 'False', no detrending is - done. Defaults to 'constant'. -return_onesided : bool, optional - If 'True', return a one-sided spectrum for real data. If - 'False' return a two-sided spectrum. Defaults to 'True', but for - complex data, a two-sided spectrum is always returned. -scaling : { 'density', 'spectrum' }, optional - Selects between computing the power spectral density ('density') - where 'Pxx' has units of V**2/Hz and computing the power - spectrum ('spectrum') where 'Pxx' has units of V**2, if 'x' - is measured in V and 'fs' is measured in Hz. Defaults to - 'density' -axis : int, optional - Axis along which the periodogram is computed; the default is - over the last axis (i.e. axis=-1). -average : { 'mean', 'median' }, optional - Method to use when averaging periodograms. Defaults to 'mean'. + Parameters + ---------- + select_return : 'f', 'Pxx' + Select the desired object to return. + See the respective function docs for descriptors. + x : array_like + Time series of measurement values. + fs : float, optional + Sampling frequency of the 'x' time series. + Defaults to 1.0. + window : str or tuple or array_like, optional + Desired window to use. If 'window' is a string or tuple, it is + passed to 'get_window' to generate the window values, which are + DFT-even by default. + See 'get_window' for a list of windows and required parameters. + If 'window' is array_like,it will be used directly as the window + and its length must be nperseg. + Defaults to a Hann window. + nperseg : int, optional + Length of each segment. + Defaults to None, but if window is str or tuple, is set to 256, + and if window is array_like, is set to the length of the window. + noverlap : int, optional + Number of points to overlap between segments. + If 'None', noverlap = nperseg // 2. + Defaults to 'None'. + nfft : int, optional + Length of the FFT used, if a zero padded FFT is desired. + If 'None', the FFT length is 'nperseg'. + Defaults to 'None'. + detrend : str or function or 'False', optional + Specifies how to detrend each segment. + If 'detrend' is a string, it is passed as the 'type' argument to the 'detrend' function. + If it is a function, it takes a segment and returns a detrended segment. + If 'detrend' is 'False', no detrending is done. + Defaults to 'constant'. + return_onesided : bool, optional + If 'True', returns a one-sided spectrum for real data. 
+ If 'False', returns a two-sided spectrum. + Defaults to 'True', but for complex data, a two-sided spectrum is always returned. + scaling : { 'density', 'spectrum' }, optional + Selects between computing the power spectral density ('density') + where 'Pxx' has units of V**2/Hz and computing the power + spectrum ('spectrum') where 'Pxx' has units of V**2, if 'x' + is measured in V and 'fs' is measured in Hz. + Defaults to 'density'. + axis : int, optional + Axis along which the periodogram is computed. + The default is over the last axis (i.e. axis=-1). + average : { 'mean', 'median' }, optional + Method to use when averaging periodograms. + Defaults to 'mean'. -.. versionadded:: 1.2.0 + .. versionadded:: 1.2.0 -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/JARQUE_BERA/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/JARQUE_BERA/a1-[autogen]/docstring.txt index 979d13f38..f6bb41fcd 100644 --- a/docs/nodes/SCIPY/STATS/JARQUE_BERA/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/JARQUE_BERA/a1-[autogen]/docstring.txt @@ -1,23 +1,22 @@ - The JARQUE_BERA node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Perform the Jarque-Bera goodness of fit test on sample data. + Perform the Jarque-Bera goodness of fit test on sample data. - The Jarque-Bera test tests whether the sample data has the skewness and kurtosis matching a normal distribution. + The Jarque-Bera test tests whether the sample data has the skewness and kurtosis matching a normal distribution. - Note that this test only works for a large enough number of data samples (>2000) as the test statistic asymptotically has a Chi-squared distribution with 2 degrees of freedom. + Note that this test only works for a large enough number of data samples (>2000) as the test statistic asymptotically has a Chi-squared distribution with 2 degrees of freedom. -Parameters ----------- -select_return : This function has returns multiple objects ['jb_value', 'p']. - Select the desired one to return. - See the respective function docs for descriptors. -x : array_like - Observations of a random variable. + Parameters + ---------- + select_return : 'jb_value', 'p' + Select the desired object to return. + See the respective function docs for descriptors. + x : array_like + Observations of a random variable. -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/MVSDIST/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/MVSDIST/a1-[autogen]/docstring.txt index d3a7d2d4d..668623b45 100644 --- a/docs/nodes/SCIPY/STATS/MVSDIST/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/MVSDIST/a1-[autogen]/docstring.txt @@ -1,20 +1,19 @@ - The MVSDIST node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - 'Frozen' distributions for mean, variance, and standard deviation of data. + 'Frozen' distributions for mean, variance, and standard deviation of data. -Parameters ----------- -select_return : This function has returns multiple Objects ['mdist', 'vdist', 'sdist']. - Select the desired one to return. - See the respective function docs for descriptors. -data : array_like - Input array. 
Converted to 1-D using ravel. - Requires 2 or more data-points. + Parameters + ---------- + select_return : 'mdist', 'vdist', 'sdist' + Select the desired object to return. + See the respective function docs for descriptors. + data : array_like + Input array. Converted to 1-D using ravel. + Requires 2 or more data-points. -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/NORMALTEST/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/NORMALTEST/a1-[autogen]/docstring.txt index bcacd30ec..88f4df63e 100644 --- a/docs/nodes/SCIPY/STATS/NORMALTEST/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/NORMALTEST/a1-[autogen]/docstring.txt @@ -1,31 +1,31 @@ - The NORMALTEST node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Test whether a sample differs from a normal distribution. + Test whether a sample differs from a normal distribution. - This function tests the null hypothesis that a sample comes from a normal distribution. - It is based on D'Agostino and Pearson's [1]_, [2]_ test that combines skew and kurtosis to produce an omnibus test of normality. + This function tests the null hypothesis that a sample comes from a normal distribution. + It is based on D'Agostino and Pearson's [1]_, [2]_ test that combines skewness and kurtosis to produce an omnibus test of normality. -Parameters ----------- -select_return : This function has returns multiple objects ['statistic', 'pvalue']. - Select the desired one to return. - See the respective function docs for descriptors. -a : array_like - The array containing the sample to be tested. -axis : int or None, optional - Axis along which to compute test. Default is 0. - If None, compute over the whole array 'a'. -nan_policy : {'propagate', 'raise', 'omit'}, optional - Defines how to handle when input contains nan. - The following options are available (default is 'propagate'): - 'propagate' : returns nan - 'raise' : throws an error - 'omit' : performs the calculations ignoring nan values + Parameters + ---------- + select_return : 'statistic', 'pvalue' + Select the desired object to return. + See the respective function docs for descriptors. + a : array_like + The array containing the sample to be tested. + axis : int or None, optional + Axis along which to compute test. + Default is 0. + If None, compute over the whole array 'a'. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + 'propagate' : returns nan + 'raise' : raises an error + 'omit' : performs the calculations ignoring nan values -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/SEM/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/SEM/a1-[autogen]/docstring.txt index b77ffae5b..c80a29ff8 100644 --- a/docs/nodes/SCIPY/STATS/SEM/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/SEM/a1-[autogen]/docstring.txt @@ -1,30 +1,31 @@ - The SEM node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute standard error of the mean. + Compute the standard error of the mean. 
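    As a quick illustration of what the SEM node's underlying SciPy call computes, here is a hedged sketch of scipy.stats.sem used directly (not the node's implementation): the standard error is the sample standard deviation divided by the square root of the sample size.

    ```python
    import numpy as np
    from scipy.stats import sem

    rng = np.random.default_rng(0)
    x = rng.normal(loc=10.0, scale=2.0, size=400)

    # scipy.stats.sem uses ddof=1 by default, so it matches
    # the sample standard deviation divided by sqrt(N).
    print(sem(x))
    print(np.std(x, ddof=1) / np.sqrt(len(x)))  # same value
    ```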
- Calculate the standard error of the mean (or standard error of measurement) of the values in the input array. + Calculate the standard error of the mean (or standard error of measurement) of the values in the input array. -Parameters ----------- -a : array_like - An array containing the values for which the standard error is returned. -axis : int or None, optional - Axis along which to operate. Default is 0. If None, compute over the whole array 'a'. -ddof : int, optional - Delta degrees-of-freedom. How many degrees of freedom to adjust - for bias in limited samples relative to the population estimate of variance. - Defaults to 1. -nan_policy : {'propagate', 'raise', 'omit'}, optional - Defines how to handle when input contains nan. - The following options are available (default is 'propagate'): - 'propagate' : returns nan - 'raise' : throws an error - 'omit' : performs the calculations ignoring nan values + Parameters + ---------- + a : array_like + An array containing the values for which the standard error is returned. + axis : int or None, optional + Axis along which to operate. + Default is 0. + If None, compute over the whole array 'a'. + ddof : int, optional + Delta degrees-of-freedom. How many degrees of freedom to adjust + for bias in limited samples relative to the population estimate of variance. + Defaults to 1. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + 'propagate' : returns nan + 'raise' : raises an error + 'omit' : performs the calculations ignoring nan values -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/SHAPIRO/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/SHAPIRO/a1-[autogen]/docstring.txt index d13d35cb1..587f591ed 100644 --- a/docs/nodes/SCIPY/STATS/SHAPIRO/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/SHAPIRO/a1-[autogen]/docstring.txt @@ -1,21 +1,20 @@ - The SHAPIRO node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Perform the Shapiro-Wilk test for normality. + Perform the Shapiro-Wilk test for normality. - The Shapiro-Wilk test tests the null hypothesis that the data was drawn from a normal distribution. + The Shapiro-Wilk test tests the null hypothesis that the data was drawn from a normal distribution. -Parameters ----------- -select_return : This function has returns multiple objects ['statistic', 'p-value']. - Select the desired one to return. - See the respective function docs for descriptors. -x : array_like - Array of sample data. + Parameters + ---------- + select_return : 'statistic', 'p-value' + Select the desired object to return. + See the respective function docs for descriptors. + x : array_like + Array of sample data. -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/SKEW/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/SKEW/a1-[autogen]/docstring.txt index 51f99edc8..de67d844d 100644 --- a/docs/nodes/SCIPY/STATS/SKEW/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/SKEW/a1-[autogen]/docstring.txt @@ -1,40 +1,40 @@ - The SKEW node is based on a numpy or scipy function. 
-The description of that function is as follows: + The description of that function is as follows: - Compute the sample skewness of a data set. + Compute the sample skewness of a dataset. - For normally distributed data, the skewness should be about zero. - For unimodal continuous distributions, a skewness value greater than zero means that there is more weight in the right tail of the distribution. \ - The function 'skewtest' can be used to determine if the skewness value is close enough to zero, statistically speaking. + For normally distributed data, the skewness should be about zero. + For unimodal continuous distributions, a skewness value greater than zero means that there is more weight in the right tail of the distribution. \ + The function 'skewtest' can be used to determine if the skewness value is close enough to zero, statistically speaking. -Parameters ----------- -a : ndarray - Input array. -axis : int or None, default: 0 - If an int, the axis of the input along which to compute the statistic. - The statistic of each axis-slice (e.g. row) of the input will appear in a - corresponding element of the output. - If None, the input will be raveled before computing the statistic. -bias : bool, optional - If False, then the calculations are corrected for statistical bias. -nan_policy : {'propagate', 'omit', 'raise'} - Defines how to handle input NaNs. - - propagate : if a NaN is present in the axis slice (e.g. row) along - which the statistic is computed, the corresponding entry of the output - will be NaN. - - omit : NaNs will be omitted when performing the calculation. - If insufficient data remains in the axis slice along which the - statistic is computed, the corresponding entry of the output will be NaN. - - raise : if a NaN is present, a ValueError will be raised. -keepdims : bool, default: False - If this is set to True, the axes which are reduced are left - in the result as dimensions with size one. With this option, - the result will broadcast correctly against the input array. + Parameters + ---------- + a : ndarray + Input array. + axis : int + Default = 0. + If an int, the axis of the input along which to compute the statistic. + The statistic of each axis-slice (e.g. row) of the input will appear in a + corresponding element of the output. + If None, the input will be raveled before computing the statistic. + bias : bool, optional + If False, then the calculations are corrected for statistical bias. + nan_policy : {'propagate', 'omit', 'raise'} + Defines how to handle input NaNs. + - propagate : if a NaN is present in the axis slice (e.g. row) along + which the statistic is computed, the corresponding entry of the output + will be NaN. + - omit : NaNs will be omitted when performing the calculation. + If insufficient data remains in the axis slice along which the + statistic is computed, the corresponding entry of the output will be NaN. + - raise : if a NaN is present, a ValueError will be raised. + keepdims : bool, default: False + If this is set to True, the axes which are reduced are left + in the result as dimensions with size one. With this option, + the result will broadcast correctly against the input array. 
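    For orientation, a minimal sketch of the SciPy function the SKEW node wraps, calling scipy.stats.skew directly rather than through the node:

    ```python
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(0)

    # A symmetric (normal) sample has skewness near 0, while an
    # exponential sample is right-skewed with skewness near 2.
    symmetric = rng.normal(size=10_000)
    right_skewed = rng.exponential(size=10_000)

    print(skew(symmetric))                 # approximately 0
    print(skew(right_skewed))              # approximately 2
    print(skew(right_skewed, bias=False))  # bias-corrected sample skewness
    ```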
-Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/SKEWTEST/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/SKEWTEST/a1-[autogen]/docstring.txt index 1697375b2..ecdb19f12 100644 --- a/docs/nodes/SCIPY/STATS/SKEWTEST/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/SKEWTEST/a1-[autogen]/docstring.txt @@ -1,41 +1,42 @@ - The SKEWTEST node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Test whether the skew is different from the normal distribution. + Test whether the skewness is different from the normal distribution. - This function tests the null hypothesis that the skewness of the population that the sample was drawn from is the same as that of a corresponding normal distribution. + This function tests the null hypothesis that the skewness of the population that the sample was drawn from is the same as that of a corresponding normal distribution. -Parameters ----------- -select_return : This function has returns multiple objects ['statistic', 'pvalue']. - Select the desired one to return. - See the respective function docs for descriptors. -a : array - The data to be tested. -axis : int or None, optional - Axis along which statistics are calculated. Default is 0. - If None, compute over the whole array `a`. -nan_policy : {'propagate', 'raise', 'omit'}, optional - Defines how to handle when input contains nan. - The following options are available (default is 'propagate'): - 'propagate' : returns nan - 'raise' : throws an error - 'omit' : performs the calculations ignoring nan values -alternative : {'two-sided', 'less', 'greater'}, optional - Defines the alternative hypothesis. Default is 'two-sided'. - The following options are available: - 'two-sided' : the skewness of the distribution underlying the sample - is different from that of the normal distribution (i.e. 0) - 'less' : the skewness of the distribution underlying the sample - is less than that of the normal distribution - 'greater' : the skewness of the distribution underlying the sample - is greater than that of the normal distribution + Parameters + ---------- + select_return : 'statistic', 'pvalue' + Select the desired object to return. + See the respective function docs for descriptors. + a : array + The data to be tested. + axis : int or None, optional + Axis along which statistics are calculated. + Default is 0. + If None, compute over the whole array 'a'. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + 'propagate' : returns nan + 'raise' : throws an error + 'omit' : performs the calculations ignoring nan values + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. + Default is 'two-sided'. + The following options are available: + 'two-sided' : the skewness of the distribution underlying the sample + is different from that of the normal distribution (i.e. 0) + 'less' : the skewness of the distribution underlying the sample + is less than that of the normal distribution + 'greater' : the skewness of the distribution underlying the sample + is greater than that of the normal distribution -.. versionadded:: 1.7.0 + .. 
versionadded:: 1.7.0 -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/TMAX/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/TMAX/a1-[autogen]/docstring.txt index 800705386..059ef6931 100644 --- a/docs/nodes/SCIPY/STATS/TMAX/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/TMAX/a1-[autogen]/docstring.txt @@ -1,32 +1,34 @@ - The TMAX node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute the trimmed maximum. + Compute the trimmed maximum. - This function computes the maximum value of an array along a given axis, while ignoring values larger than a specified upper limit. + This function computes the maximum value of an array along a given axis, while ignoring values larger than a specified upper limit. -Parameters ----------- -a : array_like - Array of values. -upperlimit : None or float, optional - Values in the input array greater than the given limit will be ignored. - When upperlimit is None, then all values are used. The default value is None. -axis : int or None, optional - Axis along which to operate. Default is 0. If None, compute over the whole array 'a'. -inclusive : {True, False}, optional - This flag determines whether values exactly equal to the upper limit are included. - The default value is True. -nan_policy : {'propagate', 'raise', 'omit'}, optional - Defines how to handle when input contains nan. - The following options are available (default is 'propagate'): - 'propagate' : returns nan - 'raise' : throws an error - 'omit' : performs the calculations ignoring nan values + Parameters + ---------- + a : array_like + Array of values. + upperlimit : None or float, optional + Values in the input array greater than the given limit will be ignored. + When upperlimit is None, then all values are used. + The default value is None. + axis : int or None, optional + Axis along which to operate. + Default is 0. + If None, compute over the whole array 'a'. + inclusive : {True, False}, optional + This flag determines whether values exactly equal to the upper limit are included. + The default value is True. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + 'propagate' : returns nan + 'raise' : raises an error + 'omit' : performs the calculations ignoring nan values -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/TMIN/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/TMIN/a1-[autogen]/docstring.txt index aaeda4015..859e0ad9c 100644 --- a/docs/nodes/SCIPY/STATS/TMIN/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/TMIN/a1-[autogen]/docstring.txt @@ -1,32 +1,34 @@ - The TMIN node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute the trimmed minimum. + Compute the trimmed minimum. - This function finds the miminum value of an array 'a' along the specified axis, but only considering values greater than a specified lower limit. + This function finds the miminum value of an array 'a' along the specified axis, but only considering values greater than a specified lower limit. 
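    A small sketch of the trimmed statistics described for the TMAX and TMIN nodes, using the scipy.stats functions directly rather than the nodes themselves:

    ```python
    import numpy as np
    from scipy.stats import tmin, tmax

    a = np.array([1.0, 4.0, 7.0, 10.0, 13.0])

    # Trimmed minimum: values below the lower limit are ignored.
    print(tmin(a, lowerlimit=5))                  # 7.0
    # inclusive=True (the default) keeps values equal to the limit.
    print(tmin(a, lowerlimit=7, inclusive=True))  # 7.0
    # Trimmed maximum: values above the upper limit are ignored.
    print(tmax(a, upperlimit=10))                 # 10.0
    ```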
-Parameters ----------- -a : array_like - Array of values. -lowerlimit : None or float, optional - Values in the input array less than the given limit will be ignored. - When lowerlimit is None, then all values are used. The default value is None. -axis : int or None, optional - Axis along which to operate. Default is 0. If None, compute over the whole array 'a'. -inclusive : {True, False}, optional - This flag determines whether values exactly equal to the lower limit are included. - The default value is True. -nan_policy : {'propagate', 'raise', 'omit'}, optional - Defines how to handle when input contains nan. - The following options are available (default is 'propagate'): - 'propagate' : returns nan - 'raise' : throws an error - 'omit' : performs the calculations ignoring nan values + Parameters + ---------- + a : array_like + Array of values. + lowerlimit : None or float, optional + Values in the input array less than the given limit will be ignored. + When lowerlimit is None, then all values are used. + The default value is None. + axis : int or None, optional + Axis along which to operate. + Default is 0. + If None, compute over the whole array 'a'. + inclusive : {True, False}, optional + This flag determines whether values exactly equal to the lower limit are included. + The default value is True. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + 'propagate' : returns nan + 'raise' : raises an error + 'omit' : performs the calculations ignoring nan values -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/TRIM_MEAN/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/TRIM_MEAN/a1-[autogen]/docstring.txt index 29777b050..0fbe59945 100644 --- a/docs/nodes/SCIPY/STATS/TRIM_MEAN/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/TRIM_MEAN/a1-[autogen]/docstring.txt @@ -1,25 +1,25 @@ - The TRIM_MEAN node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Return mean of array after trimming distribution from both tails. + Return the mean of an array after trimming distribution from both tails. - If `proportiontocut` = 0.1, slices off 'leftmost' and 'rightmost' 10% of scores. - The input is sorted before slicing. - Slices off less if proportion results in a non-integer slice index (i.e. conservatively slices off 'proportiontocut'). + If `proportiontocut` = 0.1, slices off 'leftmost' and 'rightmost' 10% of scores. + The input is sorted before slicing. + Slices off less if proportion results in a non-integer slice index (i.e. conservatively slices off 'proportiontocut'). -Parameters ----------- -a : array_like - Input array. -proportiontocut : float - Fraction to cut off of both tails of the distribution. -axis : int or None, optional - Axis along which the trimmed means are computed. Default is 0. - If None, compute over the whole array `a`. + Parameters + ---------- + a : array_like + Input array. + proportiontocut : float + Fraction to cut off of both tails of the distribution. + axis : int, optional + Axis along which the trimmed means are computed. + Default is 0. + If None, compute over the whole array 'a'. 
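    To make the trimming behaviour concrete, a hedged sketch using scipy.stats.trim_mean directly (not the node itself):

    ```python
    import numpy as np
    from scipy.stats import trim_mean

    a = np.arange(20)  # 0, 1, ..., 19

    # proportiontocut=0.1 slices off 10% of scores from each tail
    # (here 2 values per tail), then averages what remains.
    print(np.mean(a))                         # 9.5
    print(trim_mean(a, proportiontocut=0.1))  # 9.5 (symmetric data)

    # With an outlier, the trimmed mean is far less affected than the mean.
    b = np.append(np.arange(19), 1000.0)
    print(np.mean(b), trim_mean(b, 0.1))
    ```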
-Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/TTEST_1SAMP/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/TTEST_1SAMP/a1-[autogen]/docstring.txt index 630d5e45d..59d99b12e 100644 --- a/docs/nodes/SCIPY/STATS/TTEST_1SAMP/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/TTEST_1SAMP/a1-[autogen]/docstring.txt @@ -1,43 +1,44 @@ - The TTEST_1SAMP node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Calculate the T-test for the mean of ONE group of scores. + Calculate the T-test for the mean of ONE group of scores. - This is a test for the null hypothesis that the expected value (mean) of a sample of independent observations 'a' is equal to the given population mean, 'popmean'. + This is a test for the null hypothesis that the expected value (mean) of a sample of independent observations 'a' is equal to the given population mean, 'popmean'. -Parameters ----------- -select_return : This function has returns multiple objects ['statistic', 'pvalue']. - Select the desired one to return. - See the respective function docs for descriptors. -a : array_like - Sample observation. -popmean : float or array_like - Expected value in null hypothesis. - If array_like, then it must have the same shape as 'a' excluding the axis dimension. -axis : int or None, optional - Axis along which to compute test; default is 0. If None, compute over the whole array 'a'. -nan_policy : {'propagate', 'raise', 'omit'}, optional - Defines how to handle when input contains nan. - The following options are available (default is 'propagate'): - 'propagate' : returns nan - 'raise' : throws an error - 'omit' : performs the calculations ignoring nan values -alternative : {'two-sided', 'less', 'greater'}, optional - Defines the alternative hypothesis. - The following options are available (default is 'two-sided'): - 'two-sided' : the mean of the underlying distribution of the sample - is different than the given population mean (`popmean`) - 'less' : the mean of the underlying distribution of the sample is - less than the given population mean (`popmean`) - 'greater' : the mean of the underlying distribution of the sample is - greater than the given population mean (`popmean`) + Parameters + ---------- + select_return : 'statistic', 'pvalue' + Select the desired object to return. + See the respective function docs for descriptors. + a : array_like + Sample observation. + popmean : float or array_like + Expected value in null hypothesis. + If array_like, then it must have the same shape as 'a' excluding the axis dimension. + axis : int or None, optional + Axis along which to compute test. + Default is 0. + If None, compute over the whole array 'a'. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. + The following options are available (default is 'propagate'): + 'propagate' : returns nan + 'raise' : raises an error + 'omit' : performs the calculations ignoring nan values + alternative : {'two-sided', 'less', 'greater'}, optional + Defines the alternative hypothesis. 
+ The following options are available (default is 'two-sided'): + 'two-sided' : the mean of the underlying distribution of the sample + is different than the given population mean (`popmean`) + 'less' : the mean of the underlying distribution of the sample is + less than the given population mean (`popmean`) + 'greater' : the mean of the underlying distribution of the sample is + greater than the given population mean (`popmean`) -.. versionadded:: 1.6.0 + .. versionadded:: 1.6.0 -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/VARIATION/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/VARIATION/a1-[autogen]/docstring.txt index 62855db65..1b397c58e 100644 --- a/docs/nodes/SCIPY/STATS/VARIATION/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/VARIATION/a1-[autogen]/docstring.txt @@ -1,51 +1,50 @@ - The VARIATION node is based on a numpy or scipy function. -The description of that function is as follows: - - Compute the coefficient of variation. - - The coefficient of variation is the standard deviation divided by the mean. - - This function is equivalent to:: - - np.std(x, axis=axis, ddof=ddof) / np.mean(x) - - The default for "ddof" is 0, but many definitions of the coefficient of variation - use the square root of the unbiased sample variance for the sample standard deviation, which corresponds to "ddof=1". - - The function does not take the absolute value of the mean of the data, so the return value is negative if the mean is negative. - -Parameters ----------- -a : array_like - Input array. -axis : int or None, optional - Axis along which to calculate the coefficient of variation. - Default is 0. If None, compute over the whole array 'a'. -nan_policy : {'propagate', 'raise', 'omit'}, optional - Defines how to handle when input contains 'nan'. - The following options are available: - 'propagate' : return 'nan' - 'raise' : raise an exception - 'omit' : perform the calculation with 'nan' values omitted - The default is 'propagate'. -ddof : int, optional - Gives the "Delta Degrees Of Freedom" used when computing the - standard deviation. The divisor used in the calculation of the - standard deviation is 'N - ddof', where 'N' is the number of - elements. 'ddof' must be less than 'N'; if it isn't, the result - will be 'nan' or 'inf', depending on 'N' and the values in - the array. By default `ddof` is zero for backwards compatibility, - but it is recommended to use 'ddof=1' to ensure that the sample - standard deviation is computed as the square root of the unbiased - sample variance. -keepdims : bool, optional - If this is set to True, the axes which are reduced are left in the - result as dimensions with size one. With this option, the result - will broadcast correctly against the input array. - -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + The description of that function is as follows: + + Compute the coefficient of variation. + + The coefficient of variation is the standard deviation divided by the mean. + + This function is equivalent to: + + np.std(x, axis=axis, ddof=ddof) / np.mean(x) + + The default for 'ddof' is 0, but many definitions of the coefficient of variation use the square root of the unbiased sample variance for the sample standard deviation, which corresponds to 'ddof=1'. 
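    The equivalence stated above can be checked directly; a short sketch, assuming a SciPy version that exposes the ddof keyword documented here:

    ```python
    import numpy as np
    from scipy.stats import variation

    x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

    # Default ddof=0: identical to the population std divided by the mean.
    print(variation(x))
    print(np.std(x, ddof=0) / np.mean(x))

    # ddof=1 uses the square root of the unbiased sample variance instead.
    print(variation(x, ddof=1))
    ```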
+ + The function does not take the absolute value of the mean of the data, so the return value is negative if the mean is negative. + + Parameters + ---------- + a : array_like + Input array. + axis : int, optional + Axis along which to calculate the coefficient of variation. + Default is 0. + If None, compute over the whole array 'a'. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains 'nan'. + The following options are available: + 'propagate' : return 'nan' + 'raise' : raise an exception + 'omit' : perform the calculation with 'nan' values omitted + The default is 'propagate'. + ddof : int, optional + Gives the "Delta Degrees Of Freedom" used when computing the standard deviation. + The divisor used in the calculation of the standard deviation is 'N - ddof', + where 'N' is the number of elements. + 'ddof' must be less than 'N'; if it isn't, the result will be 'nan' or 'inf', + depending on 'N' and the values in the array. + By default, 'ddof' is zero for backwards compatibility, + but it is recommended to use 'ddof=1' to ensure that the sample + standard deviation is computed as the square root of the unbiased + sample variance. + keepdims : bool, optional + If this is set to True, the axes which are reduced are left in the + result as dimensions with size one. + With this option, the result will broadcast correctly against the input array. + + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/SCIPY/STATS/ZSCORE/a1-[autogen]/docstring.txt b/docs/nodes/SCIPY/STATS/ZSCORE/a1-[autogen]/docstring.txt index 20e8208ad..699f09c6d 100644 --- a/docs/nodes/SCIPY/STATS/ZSCORE/a1-[autogen]/docstring.txt +++ b/docs/nodes/SCIPY/STATS/ZSCORE/a1-[autogen]/docstring.txt @@ -1,29 +1,30 @@ - The ZSCORE node is based on a numpy or scipy function. -The description of that function is as follows: + The description of that function is as follows: - Compute the z score. + Compute the z-score. - Compute the z score of each value in the sample, relative to the sample mean and standard deviation. + Compute the z-score of each value in the sample, relative to the sample mean and standard deviation. -Parameters ----------- -a : array_like - An array like object containing the sample data. -axis : int or None, optional - Axis along which to operate. Default is 0. If None, compute over the whole array 'a'. -ddof : int, optional - Degrees of freedom correction in the calculation of the standard deviation. - Default is 0. -nan_policy : {'propagate', 'raise', 'omit'}, optional - Defines how to handle when input contains nan. 'propagate' returns nan, - 'raise' throws an error, 'omit' performs the calculations ignoring nan - values. Default is 'propagate'. Note that when the value is 'omit', - nans in the input also propagate to the output, but they do not affect - the z-scores computed for the non-nan values. + Parameters + ---------- + a : array_like + An array like object containing the sample data. + axis : int, optional + Axis along which to operate. + Default is 0. + If None, compute over the whole array 'a'. + ddof : int, optional + Degrees of freedom correction in the calculation of the standard deviation. + Default is 0. + nan_policy : {'propagate', 'raise', 'omit'}, optional + Defines how to handle when input contains nan. 'propagate' returns nan, + 'raise' throws an error, 'omit' performs the calculations ignoring nan values. + Default is 'propagate'. 
+ Note that when the value is 'omit', nans in the input also propagate to the output, + but they do not affect the z-scores computed for the non-nan values. -Returns -------- -DataContainer - type 'ordered pair', 'scalar', or 'matrix' + Returns + ------- + DataContainer + type 'ordered pair', 'scalar', or 'matrix' diff --git a/docs/nodes/TRANSFORMERS/CALCULUS/DOUBLE_DEFINITE_INTEGRAL/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/CALCULUS/DOUBLE_DEFINITE_INTEGRAL/a1-[autogen]/docstring.txt index 7d2172b7e..73c84bc81 100644 --- a/docs/nodes/TRANSFORMERS/CALCULUS/DOUBLE_DEFINITE_INTEGRAL/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/CALCULUS/DOUBLE_DEFINITE_INTEGRAL/a1-[autogen]/docstring.txt @@ -1,9 +1,11 @@ -The DOUBLE_DEFINITE_INTEGRAL node takes a function, upper, and lower bounds as input. It then computes double integral of the given function. +The DOUBLE_DEFINITE_INTEGRAL node takes a function, upper, and lower bounds as input. - Proper Syntax for function input example: + It then computes a double integral of the given function. + + Example of proper syntax for the function input: 2*x*y - Improper Syntax for function input example: + Example of improper syntax for the function input: 2xy Parameters diff --git a/docs/nodes/TRANSFORMERS/CALCULUS/DOUBLE_INDEFINITE_INTEGRAL/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/CALCULUS/DOUBLE_INDEFINITE_INTEGRAL/a1-[autogen]/docstring.txt index 07f1a723b..edba4c7ac 100644 --- a/docs/nodes/TRANSFORMERS/CALCULUS/DOUBLE_INDEFINITE_INTEGRAL/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/CALCULUS/DOUBLE_INDEFINITE_INTEGRAL/a1-[autogen]/docstring.txt @@ -1,8 +1,9 @@ -The DOUBLE_INDEFINITE_INTEGRAL node takes an OrderedTriple (x,y,z) and have the width and height parameters. +The DOUBLE_INDEFINITE_INTEGRAL node takes an OrderedTriple (x,y,z) and has width and height parameters. - The width and height represent the number of columns and rows, respectively, that the x, y, and z reshape matrices will have. Here it is important to note that the length of x, y, and z is the same and that the width times the height needs to be equal to the length of x, y, and z. + The width and height represent the number of columns and rows, respectively, that the x, y, and z reshaped matrices will have. + Here it is important to note that the length of x, y, and z is the same, and that the width times the height needs to be equal to the length of x, y, and z. - It computes the double integral approximation according to given dimensions of the matrices, and it returns a matrix where each cell represents the volume up to the given point. + It computes the double integral approximation according to given dimensions of the matrices, and returns a matrix where each cell represents the volume up to the given point. Inputs ------ diff --git a/docs/nodes/TRANSFORMERS/IMAGE_PROCESSING/REGION_PROPERTIES/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/IMAGE_PROCESSING/REGION_PROPERTIES/a1-[autogen]/docstring.txt index 983276d19..945be4ade 100644 --- a/docs/nodes/TRANSFORMERS/IMAGE_PROCESSING/REGION_PROPERTIES/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/IMAGE_PROCESSING/REGION_PROPERTIES/a1-[autogen]/docstring.txt @@ -1,5 +1,6 @@ -The image processing REGION_PROPERTIES node is a stand-alone visualizer for analyzing - an input array of data. There are multiple input 'DataContainer' types for which +The image processing REGION_PROPERTIES node is a stand-alone visualizer for analyzing an input array of data. 
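    As the rest of this docstring explains, the node's analysis is a two-step label-then-measure process built on scikit-image; a standalone sketch of those two scikit-image calls (not the node's own code):

    ```python
    import numpy as np
    from skimage.measure import label, regionprops

    # A tiny binary image with two separate rectangular regions.
    img = np.zeros((10, 10), dtype=int)
    img[1:4, 1:4] = 1
    img[6:9, 5:9] = 1

    # Step 1: identify and label the connected regions of the integer image.
    labeled = label(img)

    # Step 2: analyze each labeled region.
    for region in regionprops(labeled):
        print(region.label, region.area, region.centroid, region.bbox)
    ```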
+ + There are multiple input 'DataContainer' types for which this function is applicable: 'Image', 'Grayscale', or 'Matrix'. Often in image analysis, it is necessary to determine subvolumes / subregions @@ -10,7 +11,7 @@ The image processing REGION_PROPERTIES node is a stand-alone visualizer for anal is entirely provided by this node in a two-step process: - First, the regions of the INTEGER image are identified and labelled. - - Second, the regions are analysed. + - Second, the regions are analyzed. The first step is provided by the morphology library of scikit-image's label function, while the second is provided by scikit-image's regionprops function. @@ -31,5 +32,5 @@ The image processing REGION_PROPERTIES node is a stand-alone visualizer for anal Returns ------- - fig : Plotly + fig: Plotly A Plotly figure containing the illustrated features as determined by this node. diff --git a/docs/nodes/TRANSFORMERS/MATRIX_MANIPULATION/DOT_PRODUCT/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/MATRIX_MANIPULATION/DOT_PRODUCT/a1-[autogen]/docstring.txt index 7fceea010..396982511 100644 --- a/docs/nodes/TRANSFORMERS/MATRIX_MANIPULATION/DOT_PRODUCT/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/MATRIX_MANIPULATION/DOT_PRODUCT/a1-[autogen]/docstring.txt @@ -1,7 +1,6 @@ -The DOT_PRODUCT node takes two input matrices, multiplies them - (by dot product), and returns the result. +The DOT_PRODUCT node takes two input matrices, multiplies them (by dot product), and returns the result. - When multiplying a scalar use the MULTIPLY node. + To multiply a scalar, use the MULTIPLY node. Inputs ------ diff --git a/docs/nodes/TRANSFORMERS/ORDERED_PAIR_MANIPULATION/ORDERED_PAIR_XY_INVERT/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/ORDERED_PAIR_MANIPULATION/ORDERED_PAIR_XY_INVERT/a1-[autogen]/docstring.txt index 833af5506..612c65b34 100644 --- a/docs/nodes/TRANSFORMERS/ORDERED_PAIR_MANIPULATION/ORDERED_PAIR_XY_INVERT/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/ORDERED_PAIR_MANIPULATION/ORDERED_PAIR_XY_INVERT/a1-[autogen]/docstring.txt @@ -1,12 +1,11 @@ -The ORDERED_PAIR_XY_INVERT node returns the OrderedPair - where the axes are inverted +The ORDERED_PAIR_XY_INVERT node returns the OrderedPair where the axes are inverted. Inputs ------ default : OrderedPair - The input OrderedPair that we would like to invert the axes + The input OrderedPair that we would like to invert the axes. Returns ------- OrderedPair - The OrderedPair that is inverted + The OrderedPair that is inverted. diff --git a/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/IFFT/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/IFFT/a1-[autogen]/docstring.txt index 1faa3ecd6..d48e82f31 100644 --- a/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/IFFT/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/IFFT/a1-[autogen]/docstring.txt @@ -1,20 +1,19 @@ - The IFFT node performs the Inverse Discrete Fourier Transform on the input signal. -With the IFFT algorith, the input signal will be transformed from the frequency domain back into the time domain. + With the IFFT algorithm, the input signal will be transformed from the frequency domain back into the time domain. -Inputs ------- -default : OrderedPair - The data to apply inverse FFT to. + Inputs + ------ + default : OrderedPair + The data to apply inverse FFT to. 
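    For reference, the frequency-to-time round trip the IFFT node performs can be sketched with plain NumPy; this is the standard library call, not the node's exact implementation:

    ```python
    import numpy as np

    # A simple real time-domain signal.
    t = np.linspace(0.0, 1.0, 256, endpoint=False)
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

    spectrum = np.fft.fft(signal)          # to the frequency domain
    reconstructed = np.fft.ifft(spectrum)  # back to the time domain

    # For a real input, the reconstruction matches up to numerical error
    # and the residual imaginary part is negligible.
    print(np.allclose(signal, reconstructed.real))  # True
    ```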
-Parameters ----------- -real_signal : boolean - whether the input signal is real (true) or complex (false) + Parameters + ---------- + real_signal : boolean + whether the input signal is real (true) or complex (false) -Returns -------- -OrderedPair - x = time - y = reconstructed signal + Returns + ------- + OrderedPair + x = time + y = reconstructed signal diff --git a/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/PID/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/PID/a1-[autogen]/docstring.txt index 16d29c1c8..21a95cc7e 100644 --- a/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/PID/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/PID/a1-[autogen]/docstring.txt @@ -1,10 +1,10 @@ The PID node acts like a PID function. - The returned value with be modified according to the - PID parameters Kp, Ki, and Kd. + + The returned value will be modified according to the PID parameters Kp, Ki, and Kd. Inputs ------ - default : Scalar + single_input : Scalar The data to apply the PID function to. Parameters diff --git a/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/TWO_DIMENSIONAL_FFT/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/TWO_DIMENSIONAL_FFT/a1-[autogen]/docstring.txt index 8f08af22c..c21e1b098 100644 --- a/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/TWO_DIMENSIONAL_FFT/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/SIGNAL_PROCESSING/TWO_DIMENSIONAL_FFT/a1-[autogen]/docstring.txt @@ -1,27 +1,26 @@ +The TWO_DIMENSIONAL_FFT node performs a two-dimensional fast fourier transform function on the input matrix. -The TWO_DIMENSIONAL_FFT node performs a two-dimensional fourier transform function on the input matrix. + With the FFT algorithm, the input matrix will undergo a change of basis from the space domain into the frequency domain. -With the FFT algorithm, the input matrix will undergo a change of basis from the space domain into the frequency domain. + grayscale, dataframe, image, or matrix -grayscale, dataframe, image, or matrix + Inputs + ------ + default : Grayscale|DataFrame|Image|Matrix + The 2D data to apply 2DFFT to. -Inputs ------- -default : Grayscale|DataFrame|Image|Matrix - The 2D data to apply 2DFFT to. + Parameters + ---------- + real_signal : bool + true if the input matrix consists of only real numbers, false otherwise + color : select + if the input is an RGBA or RGB image, this parameter selects the color channel to perform the FFT on -Parameters ----------- -real_input : boolean - true if the input matrix consists of only real numbers, false otherwise -color : select - if the input is an RGBA or RGB image, this parameter selects the color channel to perform the FFT on - -Returns -------- -Matrix if input is Matrix - m: the matrix after 2DFFT -DataFrame if input is Dataframe - m: the dataframe after 2DFFT -Image - the frequency spectrum of the color channel + Returns + ------- + Matrix if input is Matrix + m: the matrix after 2DFFT + DataFrame if input is Dataframe + m: the dataframe after 2DFFT + Image + the frequency spectrum of the color channel diff --git a/docs/nodes/TRANSFORMERS/TYPE_CASTING/NP_2_DF/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/TYPE_CASTING/NP_2_DF/a1-[autogen]/docstring.txt index 18f17e95f..6106328d5 100644 --- a/docs/nodes/TRANSFORMERS/TYPE_CASTING/NP_2_DF/a1-[autogen]/docstring.txt +++ b/docs/nodes/TRANSFORMERS/TYPE_CASTING/NP_2_DF/a1-[autogen]/docstring.txt @@ -3,7 +3,7 @@ The NP_2_DF node converts numpy array data into dataframe type data. 
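    The conversion this node performs amounts to the usual pandas constructor; a minimal sketch (the column names are illustrative, not the node's defaults):

    ```python
    import numpy as np
    import pandas as pd

    arr = np.array([[1.0, 2.0, 3.0],
                    [4.0, 5.0, 6.0]])

    # A 2-D NumPy array maps directly onto a DataFrame.
    df = pd.DataFrame(arr, columns=["a", "b", "c"])
    print(df)
    ```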
     Inputs
     ------
     default : DataContainer
-        The input numpy array to which we apply the conversion to.
+        The input numpy array to which the conversion is applied.
 
     Returns
     -------
diff --git a/docs/nodes/TRANSFORMERS/TYPE_CASTING/VECTOR_2_ORDERED_PAIR/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/TYPE_CASTING/VECTOR_2_ORDERED_PAIR/a1-[autogen]/docstring.txt
index 8cde12fbe..3fafc3930 100644
--- a/docs/nodes/TRANSFORMERS/TYPE_CASTING/VECTOR_2_ORDERED_PAIR/a1-[autogen]/docstring.txt
+++ b/docs/nodes/TRANSFORMERS/TYPE_CASTING/VECTOR_2_ORDERED_PAIR/a1-[autogen]/docstring.txt
@@ -1,5 +1,4 @@
-The VECTOR_2_ORDERED_PAIR node returns the OrderedPair
- where x and y axes are the input nodes
+The VECTOR_2_ORDERED_PAIR node returns the OrderedPair where the x and y axes are the input nodes.
 
     Inputs
     ------
diff --git a/docs/nodes/TRANSFORMERS/VECTOR_MANIPULATION/VECTOR_INDEXING/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/VECTOR_MANIPULATION/VECTOR_INDEXING/a1-[autogen]/docstring.txt
index c525cc796..f035944c8 100644
--- a/docs/nodes/TRANSFORMERS/VECTOR_MANIPULATION/VECTOR_INDEXING/a1-[autogen]/docstring.txt
+++ b/docs/nodes/TRANSFORMERS/VECTOR_MANIPULATION/VECTOR_INDEXING/a1-[autogen]/docstring.txt
@@ -1,9 +1,8 @@
-The VECTOR_INDEXING node returns the value of the Vector at the
- requested index.
+The VECTOR_INDEXING node returns the value of the vector at the requested index.
 
     Inputs
     ------
     v : Vector
         The input vector to index.
 
     Parameters
diff --git a/docs/nodes/TRANSFORMERS/VECTOR_MANIPULATION/VECTOR_LENGTH/a1-[autogen]/docstring.txt b/docs/nodes/TRANSFORMERS/VECTOR_MANIPULATION/VECTOR_LENGTH/a1-[autogen]/docstring.txt
index 7017eb5e2..28c55e0d8 100644
--- a/docs/nodes/TRANSFORMERS/VECTOR_MANIPULATION/VECTOR_LENGTH/a1-[autogen]/docstring.txt
+++ b/docs/nodes/TRANSFORMERS/VECTOR_MANIPULATION/VECTOR_LENGTH/a1-[autogen]/docstring.txt
@@ -1,8 +1,8 @@
-The VECTOR_LENGTH node returns the length of the input
+The VECTOR_LENGTH node returns the length of the input vector.
 
     Inputs
     ------
     v : Vector
         The input vector to find the length of.
 
     Returns
diff --git a/docs/nodes/VISUALIZERS/DATA_STRUCTURE/ARRAY_VIEW/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/DATA_STRUCTURE/ARRAY_VIEW/a1-[autogen]/docstring.txt
index 3bea08e93..52102e2d0 100644
--- a/docs/nodes/VISUALIZERS/DATA_STRUCTURE/ARRAY_VIEW/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/DATA_STRUCTURE/ARRAY_VIEW/a1-[autogen]/docstring.txt
@@ -1,12 +1,11 @@
-The ARRAY_VIEW node takes "OrderedPair", "DataFrame", "Matrix", and "Image" objects of DataContainer class as input
- and displays its visualization in an array format.
+The ARRAY_VIEW node takes OrderedPair, DataFrame, Matrix, and Image DataContainer objects as input and visualizes the input in array format.
     Inputs
     ------
     default : OrderedPair | DataFrame | Matrix | Image
-        the DataContainer to be visualized in an array format
+        the DataContainer to be visualized in array format
 
     Returns
     -------
     Plotly
-        the DataContainer containing visualization of the input in an array format
+        the DataContainer containing the visualization of the input in array format
diff --git a/docs/nodes/VISUALIZERS/DATA_STRUCTURE/MATRIX_VIEW/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/DATA_STRUCTURE/MATRIX_VIEW/a1-[autogen]/docstring.txt
index 4eebce32e..8728a2f18 100644
--- a/docs/nodes/VISUALIZERS/DATA_STRUCTURE/MATRIX_VIEW/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/DATA_STRUCTURE/MATRIX_VIEW/a1-[autogen]/docstring.txt
@@ -1,8 +1,7 @@
-The MATRIX_VIEW node takes a Matrix or OrderedPair object of DataContainer class as input and
- displays its visualization using a Plotly table in matrix format.
+The MATRIX_VIEW node takes a Matrix or OrderedPair DataContainer object as input and visualizes it using a Plotly table in matrix format.
 
     Inputs
-    -------
+    ------
     default : OrderedPair | Matrix
         the DataContainer to be visualized in matrix format.
 
diff --git a/docs/nodes/VISUALIZERS/PLOTLY/BAR/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/BAR/a1-[autogen]/docstring.txt
index 50ef165cd..c39f6423d 100644
--- a/docs/nodes/VISUALIZERS/PLOTLY/BAR/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/PLOTLY/BAR/a1-[autogen]/docstring.txt
@@ -1,11 +1,11 @@
-The BAR node creates a Plotly Bar visualization for a given input data container.
+The BAR node creates a Plotly Bar visualization for a given input DataContainer.
 
     Inputs
     ------
     default : OrderedPair|DataFrame|Matrix|Vector
-        the DataContainer to be visualized in bar chart
+        the DataContainer to be visualized in a bar chart
 
     Returns
     -------
     Plotly
-        the DataContainer containing Plotly Bar chart visualization
+        the DataContainer containing the Plotly Bar chart visualization
diff --git a/docs/nodes/VISUALIZERS/PLOTLY/BIG_NUMBER/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/BIG_NUMBER/a1-[autogen]/docstring.txt
index 3af43a98c..7ed5044c7 100644
--- a/docs/nodes/VISUALIZERS/PLOTLY/BIG_NUMBER/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/PLOTLY/BIG_NUMBER/a1-[autogen]/docstring.txt
@@ -8,15 +8,15 @@ The BIG_NUMBER node generates a Plotly figure, displaying a big number with an o
     Parameters
     ----------
     relative_delta : bool
-        whether to show relative delta from last run along with big number
+        whether or not to show the relative delta from the last run along with the big number
     suffix : str
         any suffix to show with big number
     prefix : str
         any prefix to show with big number
     title : str
-        title of the plot, default "BIG_NUMBER"
+        title of the plot, default = "BIG_NUMBER"
 
     Returns
     -------
     Plotly
-        the DataContainer containing Plotly big number visualization
+        the DataContainer containing the Plotly big number visualization
diff --git a/docs/nodes/VISUALIZERS/PLOTLY/HEATMAP/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/HEATMAP/a1-[autogen]/docstring.txt
index 1004d7fdd..ee7ce35a6 100644
--- a/docs/nodes/VISUALIZERS/PLOTLY/HEATMAP/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/PLOTLY/HEATMAP/a1-[autogen]/docstring.txt
@@ -1,4 +1,4 @@
-The HEATMAP node creates a Plotly Heatmap visualization for a given input data container.
+The HEATMAP node creates a Plotly Heatmap visualization for a given input DataContainer.
     Inputs
     ------
@@ -8,12 +8,12 @@ The HEATMAP node creates a Plotly Heatmap visualization for a given input data c
     Parameters
     ----------
     show_text : bool
-        whether to show the text inside the heatmap color blocks
+        whether or not to show the text inside the heatmap color blocks
     histogram : bool
         whether or not to render a histogram of the image next to the render
 
     Returns
     -------
     Plotly
-        the DataContainer containing Plotly heatmap visualization
+        the DataContainer containing the Plotly heatmap visualization
diff --git a/docs/nodes/VISUALIZERS/PLOTLY/HISTOGRAM/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/HISTOGRAM/a1-[autogen]/docstring.txt
index dd2b17b73..27aeaec85 100644
--- a/docs/nodes/VISUALIZERS/PLOTLY/HISTOGRAM/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/PLOTLY/HISTOGRAM/a1-[autogen]/docstring.txt
@@ -1,4 +1,4 @@
-The HISTOGRAM node creates a Plotly Histogram visualization for a given input data container.
+The HISTOGRAM node creates a Plotly Histogram visualization for a given input DataContainer.
 
     Inputs
     ------
@@ -8,4 +8,4 @@ The HISTOGRAM node creates a Plotly Histogram visualization for a given input da
     Returns
     -------
     Plotly
-        the DataContainer containing Plotly Histogram visualization
+        the DataContainer containing the Plotly Histogram visualization
diff --git a/docs/nodes/VISUALIZERS/PLOTLY/IMAGE/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/IMAGE/a1-[autogen]/docstring.txt
index 115e446b1..20efb15aa 100644
--- a/docs/nodes/VISUALIZERS/PLOTLY/IMAGE/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/PLOTLY/IMAGE/a1-[autogen]/docstring.txt
@@ -1,4 +1,4 @@
-The IMAGE node creates a Plotly Image visualization for a given input data container type of image.
+The IMAGE node creates a Plotly Image visualization for a given input DataContainer of the Image type.
 
     Inputs
     ------
@@ -8,4 +8,4 @@ The IMAGE node creates a Plotly Image visualization for a given input data conta
     Returns
     -------
     Plotly
-        the DataContainer containing Plotly Image visualization of the input image
+        the DataContainer containing the Plotly Image visualization of the input image
diff --git a/docs/nodes/VISUALIZERS/PLOTLY/LINE/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/LINE/a1-[autogen]/docstring.txt
index 72ed8c43d..20022112e 100644
--- a/docs/nodes/VISUALIZERS/PLOTLY/LINE/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/PLOTLY/LINE/a1-[autogen]/docstring.txt
@@ -1,4 +1,4 @@
-The LINE node creates a Plotly Line visualization for a given input data container.
+The LINE node creates a Plotly Line visualization for a given input DataContainer.
 
     Inputs
     ------
@@ -8,4 +8,4 @@ The LINE node creates a Plotly Line visualization for a given input data contain
     Returns
     -------
     Plotly
-        the DataContainer containing Plotly Line visualization of the input data
+        the DataContainer containing the Plotly Line visualization of the input data
diff --git a/docs/nodes/VISUALIZERS/PLOTLY/PROPHET_COMPONENTS/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/PROPHET_COMPONENTS/a1-[autogen]/docstring.txt
index ccac75493..e01e8dec7 100644
--- a/docs/nodes/VISUALIZERS/PLOTLY/PROPHET_COMPONENTS/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/PLOTLY/PROPHET_COMPONENTS/a1-[autogen]/docstring.txt
@@ -1,11 +1,12 @@
 The PROPHET_COMPONENTS node plots the components of the prophet model trained in the PROPHET_PREDICT node.
-    This is the output plotly graph from the "plot_components_plotly" function from "prophet.plot".
+
+    This is the output plotly graph from the 'plot_components_plotly' function from 'prophet.plot'.
     It expects the trained Prophet model from the PROPHET_PREDICT node as input.
-    If "run_forecast" was True in that node, the forecasted dataframe will be available as the "m" attribute of the default input.
-    Otherwise, this will make the predictions on the raw dataframe (in which case it will be the "m" attribute of the default input).
+    If 'run_forecast' was True in that node, the forecasted dataframe will be available as the 'm' attribute of the default input.
+    Otherwise, this will make the predictions on the raw dataframe (in which case it will be the 'm' attribute of the default input).
 
-    You can tell if that forecasted dataframe is available via the "extra" field of data input, "run_forecast" (data.extra["run_forecast"]).
+    You can tell if that forecasted dataframe is available via the 'run_forecast' key in the 'extra' field of the data input (data.extra["run_forecast"]).
 
     Inputs
     ------
@@ -13,7 +14,7 @@ The PROPHET_COMPONENTS node plots the components of the prophet model trained in
         the DataContainer to be visualized
 
     data : DataContainer
-        the DataContainer that holds prophet model and forecast data in the `extra` field
+        the DataContainer that holds the prophet model and forecast data in the 'extra' field
 
 
     Parameters
@@ -28,4 +29,4 @@ The PROPHET_COMPONENTS node plots the components of the prophet model trained in
     Returns
     -------
     Plotly
-        the DataContainer containing Plotly visualization of the prophet model
+        the DataContainer containing the Plotly visualization of the prophet model
diff --git a/docs/nodes/VISUALIZERS/PLOTLY/PROPHET_PLOT/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/PROPHET_PLOT/a1-[autogen]/docstring.txt
index 73a9cacff..1387e5f2e 100644
--- a/docs/nodes/VISUALIZERS/PLOTLY/PROPHET_PLOT/a1-[autogen]/docstring.txt
+++ b/docs/nodes/VISUALIZERS/PLOTLY/PROPHET_PLOT/a1-[autogen]/docstring.txt
@@ -1,11 +1,12 @@
 The PROPHET_PLOT node plots the forecasted trend of the time series data that was passed in.
-    This is the output plotly graph from the "plot_plotly" function from "prophet.plot".
+
+    This is the output plotly graph from the 'plot_plotly' function from 'prophet.plot'.
     It expects the trained Prophet model from the PROPHET_PREDICT node as input.
-    If "run_forecast" was True in that node, the forecasted dataframe will be available as the "m" attribute of the default input.
-    Otherwise, this will make the predictions on the raw dataframe (in which case it will be the "m" attribute of the default input).
+    If 'run_forecast' was True in that node, the forecasted dataframe will be available as the 'm' attribute of the default input.
+    Otherwise, this will make the predictions on the raw dataframe (in which case it will be the 'm' attribute of the default input).
 
-    You can tell if that forecasted dataframe is available via the "extra" field of data input, "run_forecast" (data.extra["run_forecast"]).
+    You can tell if that forecasted dataframe is available via the 'run_forecast' key in the 'extra' field of the data input (data.extra["run_forecast"]).
Inputs ------ @@ -13,7 +14,7 @@ The PROPHET_PLOT node plots the forecasted trend of the time series data that wa the DataContainer to be visualized data : DataContainer - the DataContainer that holds the prophet model and forecast data in the `extra` field + the DataContainer that holds the prophet model and forecast data in the 'extra' field Parameters ---------- @@ -27,4 +28,4 @@ The PROPHET_PLOT node plots the forecasted trend of the time series data that wa Returns ------- Plotly - the DataContainer containing Plotly visualization of the prophet model + the DataContainer containing the Plotly visualization of the prophet model diff --git a/docs/nodes/VISUALIZERS/PLOTLY/SCATTER/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/SCATTER/a1-[autogen]/docstring.txt index fdcc69513..267da4d88 100644 --- a/docs/nodes/VISUALIZERS/PLOTLY/SCATTER/a1-[autogen]/docstring.txt +++ b/docs/nodes/VISUALIZERS/PLOTLY/SCATTER/a1-[autogen]/docstring.txt @@ -1,4 +1,4 @@ -The SCATTER node creates a Plotly Scatter visualization for a given input data container. +The SCATTER node creates a Plotly Scatter visualization for a given input DataContainer. Inputs ------ @@ -8,4 +8,4 @@ The SCATTER node creates a Plotly Scatter visualization for a given input data c Returns ------- Plotly - the DataContainer containing Plotly Scatter visualization + the DataContainer containing the Plotly Scatter visualization diff --git a/docs/nodes/VISUALIZERS/PLOTLY/SCATTER3D/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/SCATTER3D/a1-[autogen]/docstring.txt index 130bd052e..124eb35e6 100644 --- a/docs/nodes/VISUALIZERS/PLOTLY/SCATTER3D/a1-[autogen]/docstring.txt +++ b/docs/nodes/VISUALIZERS/PLOTLY/SCATTER3D/a1-[autogen]/docstring.txt @@ -1,4 +1,4 @@ -The SCATTER3D node creates a Plotly 3D Scatter visualization for a given input data container. +The SCATTER3D node creates a Plotly 3D Scatter visualization for a given input DataContainer. Inputs ------ @@ -8,4 +8,4 @@ The SCATTER3D node creates a Plotly 3D Scatter visualization for a given input d Returns ------- Plotly - the DataContainer containing Plotly 3D Scatter visualization + the DataContainer containing the Plotly 3D Scatter visualization diff --git a/docs/nodes/VISUALIZERS/PLOTLY/SURFACE3D/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/SURFACE3D/a1-[autogen]/docstring.txt index 7890bdb86..0b61956f3 100644 --- a/docs/nodes/VISUALIZERS/PLOTLY/SURFACE3D/a1-[autogen]/docstring.txt +++ b/docs/nodes/VISUALIZERS/PLOTLY/SURFACE3D/a1-[autogen]/docstring.txt @@ -1,4 +1,4 @@ -The SURFACE3D node creates a Plotly 3D Surface visualization for a given input data container. +The SURFACE3D node creates a Plotly 3D Surface visualization for a given input DataContainer. Inputs ------ @@ -8,5 +8,5 @@ The SURFACE3D node creates a Plotly 3D Surface visualization for a given input d Returns ------- Plotly - the DataContainer containing Plotly 3D Surface visualization + the DataContainer containing the Plotly 3D Surface visualization diff --git a/docs/nodes/VISUALIZERS/PLOTLY/TABLE/a1-[autogen]/docstring.txt b/docs/nodes/VISUALIZERS/PLOTLY/TABLE/a1-[autogen]/docstring.txt index bc301e205..d155f4a88 100644 --- a/docs/nodes/VISUALIZERS/PLOTLY/TABLE/a1-[autogen]/docstring.txt +++ b/docs/nodes/VISUALIZERS/PLOTLY/TABLE/a1-[autogen]/docstring.txt @@ -1,4 +1,4 @@ -The TABLE node creates a Plotly Table visualization for a given input data container. +The TABLE node creates a Plotly Table visualization for a given input DataContainer. 
Inputs ------ @@ -8,5 +8,4 @@ The TABLE node creates a Plotly Table visualization for a given input data conta Returns ------- Plotly - the DataContainer containing Plotly Table visualization - + the DataContainer containing the Plotly Table visualization diff --git a/docs/nodes/VISUALIZERS/PLOTLY/TABLE/a1-[autogen]/python_code.txt b/docs/nodes/VISUALIZERS/PLOTLY/TABLE/a1-[autogen]/python_code.txt index 976e1548d..f207438b4 100644 --- a/docs/nodes/VISUALIZERS/PLOTLY/TABLE/a1-[autogen]/python_code.txt +++ b/docs/nodes/VISUALIZERS/PLOTLY/TABLE/a1-[autogen]/python_code.txt @@ -5,7 +5,7 @@ from nodes.VISUALIZERS.template import plot_layout @flojoy -def TABLE(default: OrderedTriple | OrderedPair | DataFrame | Vector | Scalar) -> Plotly: +def TABLE(default: OrderedTriple | OrderedPair | DataFrame | Vector) -> Plotly: layout = plot_layout(title="TABLE")
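
A few of the docstrings changed above describe behaviour that is easier to follow with short, self-contained examples. The REGION_PROPERTIES docstring describes a two-step process: label the integer image, then analyse the labelled regions with scikit-image. As a point of reference only, a minimal sketch of that idea, assuming the commonly used skimage.measure functions rather than the node's own imports and plotting, is:

```python
# Illustrative sketch only -- not the REGION_PROPERTIES node's implementation.
import numpy as np
from skimage.measure import label, regionprops

# A tiny integer image with two square regions.
binary = np.zeros((10, 10), dtype=int)
binary[2:5, 2:5] = 1
binary[6:9, 6:9] = 1

labelled = label(binary)                 # step 1: identify and label the regions
for region in regionprops(labelled):     # step 2: analyse each labelled region
    print(region.label, region.area, region.centroid)
```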
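The IFFT docstring says the input is transformed from the frequency domain back into the time domain. A small NumPy round trip, independent of the node itself, illustrates that claim:

```python
# Illustrative sketch only: an FFT followed by an IFFT recovers the original time-domain signal.
import numpy as np

t = np.linspace(0.0, 1.0, 128, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)       # a 5 Hz sine sampled over one second

spectrum = np.fft.fft(signal)            # time domain -> frequency domain
reconstructed = np.fft.ifft(spectrum)    # frequency domain -> time domain

assert np.allclose(reconstructed.real, signal)
```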
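The PID docstring states that the returned value is modified according to Kp, Ki, and Kd. A generic discrete PID update, written here as a hypothetical helper rather than the node's actual body (the node's state handling is not visible in this diff), could look like:

```python
# Illustrative sketch only: one discrete PID update per incoming sample.
def pid_step(error, state, kp=1.0, ki=0.1, kd=0.0, dt=1.0):
    state["integral"] = state.get("integral", 0.0) + error * dt
    derivative = (error - state.get("prev_error", 0.0)) / dt
    state["prev_error"] = error
    # proportional + integral + derivative contributions
    return kp * error + ki * state["integral"] + kd * derivative

state = {}
print(pid_step(2.0, state))   # first sample
print(pid_step(1.5, state))   # later samples reuse the accumulated state
```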
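Finally, the TABLE python_code change above only narrows the node's accepted input types. For context, the kind of Plotly table such a node builds from a dataframe-like input can be sketched as follows, using hypothetical sample data (this is not the node body shown in the diff):

```python
# Illustrative sketch only: building a Plotly Table figure from a small dataframe.
import pandas as pd
import plotly.graph_objects as go

df = pd.DataFrame({"x": [1, 2, 3], "y": [4.0, 5.0, 6.0]})

fig = go.Figure(
    data=[
        go.Table(
            header=dict(values=list(df.columns)),
            cells=dict(values=[df[c].tolist() for c in df.columns]),
        )
    ]
)
fig.update_layout(title="TABLE")
```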