Open Data Source (Data Import Wizard) brings data into BayesiaLab to create a new Bayesian network.
BayesiaLab can load data from flat text files (e.g., CSV, TXT) or connected databases.
In Step 1 — Data Structure Definition: Text File of the five-step Data Import Wizard, you need to define the dataset structure for BayesiaLab so that the data can be imported and interpreted correctly.
The Data Structure Definition window opens up.
Specify all Settings & Options (see below).
Click Next to proceed to Step 2 — Definition of Variable Types.
Many of the settings can be immediately reviewed and validated in the Data Preview panel. However, Missing Values or Filtered Values can be mischaracterized, go unnoticed, and later introduce major problems that produce misleading analysis results.
The Data Import Wizard will attempt to automatically identify the separator or delimiter of the fields in the data table.
However, there can be ambiguous situations in which you need to specify the separator by checking the appropriate box:
Tab
Semicolon
Comma
Space
Other
If you prepare a dataset externally for import into BayesiaLab, ensure that separators are unique and do not appear as content in any data field. So, if any data fields contain text with commas as content, you cannot use commas as the separator. In such a case, try a tab or semicolon.
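The collision between field content and separator can be sketched with Python's standard csv module (the field values here are hypothetical, and this is an illustration of the general principle, not BayesiaLab code):

```python
import csv
import io

# A field that contains a comma as content...
rows = [["name", "city"], ["Doe, Jane", "Boston"]]

# ...is mangled if the file is written and re-read with a naive comma split:
text = "\n".join(",".join(r) for r in rows)
naive = [line.split(",") for line in text.splitlines()]
print(len(naive[1]))  # 3 columns instead of 2 -- the comma split the field

# Using a semicolon (or tab) as the separator avoids the collision:
buf = io.StringIO()
csv.writer(buf, delimiter=";").writerows(rows)
parsed = list(csv.reader(io.StringIO(buf.getvalue()), delimiter=";"))
print(parsed[1])  # ['Doe, Jane', 'Boston'] -- two clean fields
```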
The Encoding drop-down list allows you to select an alternative encoding for the dataset to be imported. This can become necessary for importing data from certain legacy systems.
Specifying the correct code for Missing Values is essential so that BayesiaLab can process them appropriately.
The list shows a number of codes that are commonly used for Missing Values. However, this is not necessarily comprehensive, and your dataset may contain different codes, such as "." (dot) or "-9999", etc.
Click Add to create a new entry in this list for the current data import.
Clicking Remove deletes the selected entries.
Deleting a default entry such as NR (for no response) may become necessary, for instance, if a data field contains the string "NR" as a valid value. That would be the case if your data set included New York Stock Exchange ticker symbols. In this context, "NR" would be the symbol of Newpark Resources, Inc. Unless you address this issue, all "NR" strings would be treated as Missing Values.
You can set your own default list of codes under Main Menu > Windows > Preferences > Data > Import & Associate > Missing & Filtered Values.
Just as important as the correct definition of Missing Values is a clear understanding of a Filtered Value.
A Filtered Value occurs when a variable cannot have any value for logical reasons. For instance, a demographic dataset could include a field Age at Retirement, yet the record of a 16-year-old high school student could contain no value for that field. This situation must not be treated as a Missing Value! A Missing Value implies that a value exists but is unknown. In the student's record, a value is logically impossible, not missing. So, instead of a numerical value or a blank, you must specify a code that says that there can be no value. This is the purpose of assigning a Filtered Value code.
Importantly, you must encode any Filtered Values before importing your dataset into BayesiaLab. In BayesiaLab, you merely need to declare what code you used in your dataset to represent Filtered Values. BayesiaLab will create a Filtered State as an additional state in each node for which Filtered Values are encountered during data import.
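A minimal sketch of this pre-encoding step, using hypothetical records and an arbitrary code "FV" (the code itself is your choice; you only declare it in the wizard):

```python
# Hypothetical records: Age at Retirement is logically impossible for
# respondents who have not retired, so we encode a Filtered Value code
# rather than leaving the field blank (a blank would be read as a
# Missing Value).
FILTERED_CODE = "FV"  # arbitrary; must be declared during import

records = [
    {"age": 67, "retired": True,  "age_at_retirement": 65},
    {"age": 16, "retired": False, "age_at_retirement": None},
]

for rec in records:
    if not rec["retired"]:
        # Logically impossible -> Filtered Value, not Missing Value.
        rec["age_at_retirement"] = FILTERED_CODE

print(records[1]["age_at_retirement"])  # FV
```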
Click Add to create a new entry in this list for the current data import.
Clicking Remove deletes the selected entries.
You can set your own default list of codes under Main Menu > Windows > Preferences > Data > Import & Associate > Missing & Filtered Values.
In Data Preview, all Filtered Values are marked with an asterisk (*) in the data table.
Understanding the difference between Missing and Filtered Values is critically important.
Clicking the Define Sample button opens a window that allows you to sample records from your data source.
This is particularly useful for the preliminary analysis of large datasets. By default, BayesiaLab imports all records from the data.
You can define a subset in three ways:
Random Sample — Size in Percent: specify the size of the random sample as a percentage of the original dataset size.
Random Sample — Size: specify the number of records in the sampled dataset.
Custom Range — First Row to Last Row: specify the range of records to be imported.
Checking the option Fixed Seed and specifying a number ensures that you can repeat exactly the same random sampling for each iteration of the import. This allows you to reproduce your results as you develop your model.
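BayesiaLab's sampler is internal, but the fixed-seed principle can be illustrated with Python's random module (population and sample sizes are hypothetical):

```python
import random

population = list(range(1000))  # stand-in for dataset row indices

def sample_rows(seed, k=100):
    rng = random.Random(seed)  # fixed seed -> reproducible draw
    return rng.sample(population, k)

# The same seed yields the identical sample on every run...
assert sample_rows(42) == sample_rows(42)
# ...while a different seed yields a different sample.
assert sample_rows(42) != sample_rows(7)
```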
By default, the Data Import Wizard loads the entire dataset as a Learning Set.
By clicking the Define Learning/Test Sets button, you can set aside a Test Set (or holdout sample).
You can define the Learning Set/Test Set split in three ways:
Random Test Set — Size in Percent: specify the size of the Test Set as a percentage of the original dataset size.
Random Test Set — Size: specify the number of records in the Test Set.
Custom Test Set — First Row to Last Row: select a specific range of records for a Test Set.
Checking the option Fixed Seed and specifying a number ensures that you can obtain the same Test Set with each iteration of the import. This allows you to reproduce your results and validation measures as you develop your model.
In addition to specifying a Learning Set/Test Set split here, you can define a split in other ways:
You can designate a variable in the original dataset to assign records to the Learning Set and Test Set. You can select such a variable in the next step of the Data Import Wizard: Step 2 — Definition of Variable Types.
Main Menu > Data > Data Set > Generate Learning/Test Split
Furthermore, you can remove the Learning Set/Test Set split at any time:
Main Menu > Data > Data Set > Remove Learning/Test Split.
The Options Panel allows you to manage the interpretation of the to-be-imported dataset.
Title Line:
By checking this option, BayesiaLab reads the first row of the dataset and uses its values as column headers.
If the values in the first row are not compatible, e.g., due to missing values or duplicate values, you are prompted to accept the proposed corrections, which include adding suffixes for duplicate names and substituting missing values with generic column headers, e.g., N0, N1, N2, etc.
End of Line Character:
With some files, it may be necessary to specify a certain character so that BayesiaLab can correctly detect the end of a row in a data table.
Consider Identical Consecutive Separators as One:
Check this box so that if you have multiple consecutive separators of the same type, e.g., “;;;”, the Data Import Wizard will treat them as a single separator.
Consider Different Consecutive Separators as One:
Check this box so that if you have multiple consecutive separators of any type, e.g., “;,|”, the Data Import Wizard will treat them as a single separator.
Double Quotes:
Remove
As String Delimiters
Simple Quotes:
Remove
As String Delimiters
Transpose:
By default, BayesiaLab expects the data source to be arranged in
columns corresponding to variables and
rows corresponding to samples, records, or observations.
Checking the Transpose option allows you to accept an alternate format, i.e.,
rows corresponding to variables and
columns corresponding to samples, records, or observations.
The transposed format is commonly used in bioinformatics. For instance, variables representing genes — sometimes tens of thousands — are arranged row by row. Observations — sometimes only a few dozen — are placed in columns side by side.
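The two layouts are related by a simple matrix transpose, sketched here with hypothetical gene and sample names:

```python
# Gene-expression style layout: rows are variables (genes),
# columns are observations (samples). All names are hypothetical.
transposed = [
    ["gene",  "sample1", "sample2", "sample3"],
    ["BRCA1", 0.9,       1.2,       0.4],
    ["TP53",  2.1,       0.3,       1.8],
]

# Transposing restores the conventional layout:
# columns = variables, rows = observations.
conventional = [list(col) for col in zip(*transposed)]
print(conventional[0])  # ['gene', 'BRCA1', 'TP53'] -- variables head the columns
```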
The data table at the bottom of the window provides a preview of how the Data Import Wizard sees and interprets your dataset.
Blank fields indicate a Missing Value.
Asterisks (*) mark Filtered Values. In the dataset shown below, for instance, Filtered Values were assigned to all males and post-menopausal women for the variable Pregnancy Status. For those two groups and for obvious reasons, pregnancy is impossible.
Horizontal and vertical sliders allow you to scroll and view the entire dataset. Alternatively, you can move your mouse's scroll wheel up and down.
If a variable name exceeds the column width, you can click on the divider between column headers and drag it into the desired position. Alternatively, double-click the divider to auto-fit the column width to the variable name.
In the following animation, we show a dataset that requires numerous settings to be adjusted for proper import:
The dataset uses the pipe character ("|") as a delimiter.
All fields are enclosed in double quotes.
Multiple, arbitrary codes are used for Missing Values:
"Refused"
"unknown"
"Not Applicable" is the code for Filtered Value used in this dataset.
Note that there are no standardized codes for Missing Values and Filtered Values. They can be as arbitrary as in this example. Therefore, it is of utmost importance that whoever prepares the dataset must convey the precise meaning of these codes to the analyst who imports the data into BayesiaLab.
The Data Import Wizard is the principal tool in BayesiaLab for preprocessing and importing external data.
You can use BayesiaLab's Data Import Wizard to import data from two types of sources:
Data tables in text format, in which data fields are separated by delimiters, such as comma, semicolon, tab, or pipe "|". The most common format is CSV.
Data tables in SQL-compatible databases can be accessed via a JDBC driver. Third-party JDBC drivers are available for all major databases.
All data sources must be structured as a single table, i.e., with rows and columns. All table joins must be performed before importing the data in BayesiaLab.
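A sketch of flattening two normalized tables into one result set before import, using Python's built-in sqlite3 module (the schema and values are hypothetical; any SQL database works the same way):

```python
import sqlite3

# Two normalized tables must be joined into one flat table before
# the result is handed to BayesiaLab.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'East'), (2, 'West');
    INSERT INTO orders VALUES (1, 120.0), (1, 80.0), (2, 45.0);
""")

# Perform the join up front; export this single result set.
flat = con.execute("""
    SELECT c.id, c.region, o.amount
    FROM customers c JOIN orders o ON o.customer_id = c.id
    ORDER BY c.id, o.amount
""").fetchall()

print(flat)  # one row per order, with customer attributes repeated
```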
To launch the Data Import Wizard for a data table in a text file, select Main Menu > Data > Open Data Source > Text File.
To launch it for a data table in a database, select Main Menu > Data > Open Data Source > Database.
Then, the Data Import Wizard guides you through five sequential steps. The first step of the Data Import Wizard depends on the data source, i.e., text file or database. All subsequent steps of the Data Import Wizard are the same for both types of data sources.
Data Structure Definition
Data table in a database
Definition of Variable Types
Data Selection, Filtering, and Missing Value Processing
Discretization and Aggregation
Import Report
In Step 2 — Definition of Variable Types of the five-step Data Import Wizard, you need to define variable types.
Step 2 contains four panels that relate to each other in their content and available actions.
With the radio buttons in the Type panel, you can define the type of each variable.
Before you start making your determinations, BayesiaLab has already made some guesses regarding the appropriate variable type, i.e., Discrete versus Continuous.
Furthermore, some variables have limited options regarding the variable type because of their distributions:
If a variable has the same value for all observations, it falls into the Unused variable type. Such a not-distributed variable cannot be imported at all into BayesiaLab.
Variables that contain any text values cannot be declared Continuous variables.
Variables with Missing Values cannot be of the type Weight, Row Identifier, or Learn/Test.
You can perform the selection of multiple variables with keystroke combinations commonly used in spreadsheet editing:
Ctrl+Click
: add a variable to the current selection.
Shift+Click
: add all variables between the currently selected and the clicked variable to the selection.
Shift+End
: select all variables from the currently selected variable to the rightmost variable in the table.
Shift+Home
: select all variables from the currently selected variable to the leftmost variable in the table.
The current selection is highlighted by showing the selected columns in a darker shade of their current color.
Discrete
The Discrete type considers each unique value of the variable a distinct state.
Any variable that contains text will be considered Discrete by default.
The maximum number of unique values that can be accommodated can be specified under Main Menu > Window > Preferences > Editing > Node > Maximum Number of States.
Continuous
The Continuous type applies to numerical variables, which must be discretized in Step 4 — Discretization and Aggregation.
If a variable contains integer values above a certain threshold, the variable will be considered Continuous.
You can specify this threshold under Main Menu > Windows > Preferences > Data > Import & Associate > Threshold for Assuming Integers as Continuous
. The default threshold value is 5.
Learn more about Discrete and Continuous nodes in the Node Editor topic.
Weight
Weighting is often applied to surveys to make a survey sample representative of the demographics of the underlying population.
If your dataset contains such a Weight variable, select it by clicking on the corresponding column.
Then, select the Weight button in the Type panel.
Later, in Step 4 — Discretization and Aggregation, you can specify whether or not to normalize the Weight variable.
Learning/Test
For a dataset that has already been split into a Learning Set and a Test Set, you can use such an existing definition to import your data into BayesiaLab.
Both the Learning Set and the Test Set need to be in the same data table, rather than in separate files.
A binary indicator variable needs to identify each set with a unique code.
With a Learning/Test variable defined, in Step 4 — Discretization and Aggregation of the Data Import Wizard, you need to assign which of your codes corresponds to BayesiaLab's Learning and Test states.
Row Identifier
You can assign one or more variables to serve as Row Identifiers. The values of Row Identifiers are imported but not processed in any way. They serve as labels that are attached to each record.
There are numerous functions in BayesiaLab that allow you to look up what record in the dataset corresponds to what is currently on display on the screen.
For instance, Automatic Evidence-Setting displays the Row Identifier in the Status Bar.
By selecting the Unused button, you can skip the import of the selected variables. In previous versions of BayesiaLab, this option was also known as "Not Distributed."
Unused is automatically applied to variables containing only a single value across all observations, i.e., when the variable is "not distributed," hence the original name.
Unused variables will appear grayed out in the remaining steps of the Data Import Wizard.
The Multiple Typing panel allows you to quickly assign variable types across multiple variables.
By clicking either button, all previous type assignments are replaced.
You can automatically remove variables, i.e., set them to the Unused type, if they exceed a certain percentage of Missing Values in their column.
Click the Set Missing Values Threshold button.
From the pop-up window, set the percentage.
All variables that exceed the specified threshold are set to Unused.
The Information panel provides a range of statistics relating to the current type assignment of variables:
Number of Rows refers to the number of records in the to-be-imported dataset. In the context of datasets, rows, records, cases, samples, and observations all have equivalent meanings.
Others displays the count of all variables assigned to the types Row Identifier, Weight, or Learn/Test.
Unused shows the absolute count of variables currently assigned to the Unused type. The percentage refers to the proportion of Unused variables among all variables.
Missing Values displays the count of cells in the dataset that contain Missing Values. The percentage refers to the proportion of cells in the dataset that contain Missing Values, including all variable types, even Unused, Row Identifier, and Learning/Test.
Filtered Values displays the count of cells in the dataset that contain Filtered Values, as indicated by the asterisk (*). The percentage refers to the proportion of cells in the dataset that contain Filtered Values, including all variable types, even Unused, Row Identifier, and Learning/Test.
Horizontal and vertical scrolling allows you to view the entire dataset that will be imported.
Step 3 of the five-step Data Import Wizard deals with Data Selection, Filtering, and Missing Values Processing.
This Data panel resembles the Data panel from Step 2 — Definition of Variable Types.
However, there are several important additional pieces of information available:
For Discrete variables, it shows the frequencies of all states, including Missing Values and Filtered Values:
As you experiment with checking/unchecking, you can see how the Number of Rows in the Information panel changes.
In terms of a data query, the Filter checkbox would be the equivalent of a nominal value row filter.
Note that the number of Filtered Values does not refer to the number of excluded rows due to an unchecked Filter checkbox.
For Continuous variables, it shows the standard statistics, such as Minimum, Maximum, Mean, and Standard Deviation. Additionally, the table displays the frequencies of non-missing values, Missing Values, and Filtered Values:
Three actions are available in this panel:
You can choose the logic for combining the Filters and Minima/Maxima assigned in the Data panel:
OR: a row will be removed if ANY of the selected Filters or specified Minima/Maxima across all variables apply to that row.
AND: a row will be removed only if ALL of the selected Filters and specified Minima/Maxima across all variables apply to that row.
Click the Show Selections button to review what Filters and Minima/Maxima are currently in place.
Note the syntax for Discrete variables: The variable name is followed by "in" (i.e., is an element of) followed by the included values shown as an array in square brackets.
Further logical expressions are shown as conjunctions (AND) or disjunctions (OR) in separate lines.
Clicking the Delete Selections button removes all Filters and Minima/Maxima currently in place.
In the Missing Value Processing panel you can specify which kind of processing to apply to variables with Missing Values, i.e., Filter, Replace, and Infer.
The Filter function allows you to remove rows from the dataset that contain Missing Values. This is equivalent to what is commonly known as casewise deletion.
You can apply the Filter individually to any variable that contains Missing Values.
Usage
Then, check the Filter checkbox in the Missing Values Processing panel.
Next, choose the logical condition to apply when you select multiple variables to be subject to the Filter.
OR: a row will be removed if ANY of the selected variables contain a Missing Value in that row.
AND: a row will be removed only if ALL of the selected variables contain a Missing Value in that row.
Before applying Filter, please consider the implications discussed in Chapter 9: Missing Values Processing.
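The OR/AND distinction for casewise deletion can be sketched as follows (rows and variable names are hypothetical; None stands in for a Missing Value):

```python
# Hypothetical rows; None marks a Missing Value. Two variables,
# "income" and "education", are selected for the Filter.
rows = [
    {"income": 50,   "education": "BA"},
    {"income": None, "education": "BA"},
    {"income": None, "education": None},
]
selected = ("income", "education")

# OR: drop a row if ANY selected variable is missing in that row.
keep_or = [r for r in rows if not any(r[v] is None for v in selected)]

# AND: drop a row only if ALL selected variables are missing in that row.
keep_and = [r for r in rows if not all(r[v] is None for v in selected)]

print(len(keep_or), len(keep_and))  # OR keeps 1 row, AND keeps 2 rows
```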
With the Replace By function, you can specify a value for replacing the Missing Values in the selected variable.
You have several options in this regard:
You can set a specific value:
For a Discrete variable, you can select among the values observed in the variable from a drop-down list.
Alternatively, you can choose the Modal value, i.e., the most frequently occurring value of the variable in the dataset.
For a Continuous variable, you can select to use the Mean value computed from the dataset.
As an alternative, you can specify any arbitrary value.
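The Modal and Mean replacement options amount to the following, sketched with hypothetical columns and Python's statistics module:

```python
from statistics import mean, mode

# Hypothetical columns; None marks a Missing Value.
color = ["red", "blue", "red", None]   # Discrete variable
height = [170.0, 180.0, None, 190.0]   # Continuous variable

# Modal value (most frequent observed value) for a Discrete variable:
modal = mode(v for v in color if v is not None)
color_filled = [v if v is not None else modal for v in color]

# Mean of the observed values for a Continuous variable:
avg = mean(v for v in height if v is not None)
height_filled = [v if v is not None else avg for v in height]

print(color_filled[-1], height_filled[2])  # red 180.0
```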
For practical analysis purposes, the Infer option is the most common method for Missing Values Processing.
The Methods in Detail:
Infer — Static Imputation
Infer — Dynamic Imputation
Infer — Structural EM
Infer — Entropy-Based Imputations
In Modeling Mode, select Main Menu > Data > Open Data Source > Text File.
Right-click on the database icon in the Status Bar and select Generate Learning/Test Split.
Right-click on the database icon in the Status Bar and select Remove Learning/Test Split.
To select a variable, click on the variable header or click anywhere inside the column in the panel.
Ctrl+A
: select all variables in the panel.
Click Set All to Discrete to apply the type to all variables, if possible.
Click Set All to Continuous to apply the type to all variables, if possible.
Discrete shows the absolute count of variables currently assigned to the type. The percentage refers to the proportion of Discrete variables among all variables, including the type Unused.
Continuous shows the absolute count of variables currently assigned to the type. The percentage refers to the proportion of Continuous variables among all variables, including the type Unused.
The Data panel visualizes the current variable selection and type assignment with colors (see above).
We start with the Data panel — although it is at the bottom of the window — as it can help inform decisions about the settings in the other panels.
A Missing Values icon indicates the presence of at least one Missing Value in the corresponding variable.
A triangle icon indicates that variable-specific statistics are available. It appears on all variable headers except those of certain variable types.
Clicking on the triangle icon or the associated variable header brings up a table with variable statistics:
The Filter checkboxes allow you to uncheck/deselect specific values.
The checked box means that the value is included, which is the default condition.
The unchecked box means that the value is excluded and that all rows that contain that value will be filtered, i.e., removed.
The Select Values panel relates to the Filter checkboxes plus any Minima/Maxima applied in the Data panel.
This panel is only active if you select one of the variables that feature a small question mark icon. This icon indicates that the corresponding variable contains at least one Missing Value.
In the Data panel, click on the header or into the column of the variable with Missing Values.
To learn about Missing Values Processing beyond Filter and Replace, please see Missing Values Processing in Chapter 9 of our e-book.
The Information panel is identical in its functionality to the Information panel in Step 2 — Definition of Variable Types. Please refer to that topic for details.
This screen is only available if you designated a Weight variable in Step 2 — Definition of Variable Types.
Click on that Weight variable in the Data panel, and the Normalize Weights checkbox appears as the only option on the screen.
You need to determine whether to apply Normalize Weights or not:
If yes, the Weights will be normalized so that the total number of cases considered by BayesiaLab for machine learning is equal to the actual number of samples in the dataset.
If no, the Weight variable will be treated as representing the actual number of observed cases. So, a weight of 10 for one observation would be treated and counted like ten instances of that same observation. As a result, the total number of cases considered by BayesiaLab would correspond to the population from which the weight was calculated.
This example illustrates the situation for a survey consisting of 10 observations:
If you do not normalize, BayesiaLab would consider a sample of 100 for learning purposes and presumably find spurious relationships. This "over-counting" by a factor of 10 has the same effect as reducing the Structural Coefficient to 0.1.
If you normalize, BayesiaLab considers the correct proportions of the weighted samples but still only considers ten observations in total for learning purposes.
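The normalization itself is a simple rescaling so that the weights sum to the number of observations (the weight values below are hypothetical):

```python
# Survey of 10 observations with a Weight variable that sums to 100.
weights = [5, 5, 10, 10, 10, 10, 10, 10, 15, 15]
n = len(weights)

# Rescale so the weights sum to the actual number of observations,
# while preserving the relative proportions between records.
norm = [w * n / sum(weights) for w in weights]

print(sum(weights))  # 100  -- unnormalized: counts as 100 cases
print(sum(norm))     # 10.0 -- normalized: counts as 10 cases
```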
If you have specified a Weight variable, it will be taken into account in the Discretization and Aggregation algorithms.
Step 4 — Discretization and Aggregation requires you to make several more important choices before concluding the import process.
As opposed to the previous steps, which all consisted of a single screen, Step 4 provides one screen per variable type, for a total of six screens.
As you go from Step 3 to Step 4, the variable that you last selected in Step 3 remains highlighted.
And depending on the variable type, Step 4 starts with one of six possible screens, one for each variable type. Click on the thumbnails in the following table for a preview.
Note that for Row Identifier and Unused variables, no actions are available. Except for the Data panel, the corresponding screens are blank.
For all other variable types, we discuss all available options in detail in separate sections:
Weights
Learning/Test
Discretization
Aggregation
This screen is only available if you designated a Learning/Test variable in Step 2 — Definition of Variable Types.
Select the Learning/Test variable by clicking on its header or into the corresponding column.
Select BayesiaLab's learning and test labels from the drop-down lists to match the codes in your dataset.
Additionally, you can see the proportion of cases for each code in your dataset.
Given that you have a variable of the type Learn/Test, only the "learning" rows will be taken into account for Discretization and Aggregation. Otherwise, you would partially defeat the purpose of having a hold-out set.
BayesiaLab requires the discretization of all Continuous variables, and in this screen, you need to specify how to discretize those variables.
The Discretization process determines how a Continuous variable will be imported into BayesiaLab, i.e.,
the number of intervals (or bins);
the values of the thresholds which define the ranges of the intervals.
These attributes define the transformation of the underlying Continuous variable in the dataset into a discretized Continuous node in BayesiaLab.
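Given a set of thresholds, the mapping from a continuous value to its interval can be sketched with Python's bisect module (thresholds and values are hypothetical):

```python
import bisect

# Discretizing a Continuous variable (hypothetical heights, in cm)
# with two thresholds -> three intervals (bins).
thresholds = [160.0, 175.0]
values = [150.2, 163.9, 171.0, 182.5]

# bisect assigns each value the index of its interval:
# bin 0: (-inf, 160), bin 1: [160, 175), bin 2: [175, +inf)
bins = [bisect.bisect_right(thresholds, v) for v in values]
print(bins)  # [0, 1, 1, 2]
```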
To learn more about the important distinction between Continuous and Discrete nodes, please see these topics:
Continuous Nodes
Discrete Nodes
Select one or more Continuous variables and click into one of the headers or one of the corresponding columns.
The Discretization panel appears.
The first item in the Discretization panel is the Discretization Type drop-down menu.
The items on this list can be grouped into Automatic Discretization versus Manual Discretization.
The bottom item on the drop-down menu, Manual, refers to a Manual Discretization approach in which you have full control over thresholds, etc.
The remaining eleven items all refer to different kinds of Automatic Discretization.
However, even in Manual Discretization, you take advantage of the algorithms available with Automatic Discretization.
Manual Discretization
Automatic Discretization
Tree is a bivariate discretization method. It machine-learns a decision tree that uses the to-be-discretized variable to represent the conditional probability distributions of the Target variable given that variable. Once the Tree is learned, it is analyzed to extract the most useful thresholds.
It is the method of choice in the context of Supervised Learning, i.e., if you plan to machine-learn a model to predict the Target variable.
At the same time, we do not recommend using Tree in the context of Unsupervised Learning. The Tree algorithm creates bins that are biased toward the designated Target variable. Naturally, emphasizing one particular variable would run counter to the intent of Unsupervised Learning.
Note that if the to-be-discretized variable is independent of the selected Target variable, it will be impossible to build a tree, and BayesiaLab will prompt you to select a univariate discretization algorithm.
All manually discretized variables can be used as a Target variable for Tree discretization.
Using a Target variable for Discretization does not create a Target Node in the network.
Except for Manual, all items in the Type menu represent Automatic Discretization algorithms.
Selecting a Discretization algorithm applies variable by variable, i.e., you can use a different algorithm for each Continuous variable.
To select a variable, click on the variable header or anywhere inside the column.
You can perform the selection and deselection of multiple variables with keystroke combinations commonly used in spreadsheet editing:
Ctrl+Click: add a variable to the current selection.
Shift+Click: add all variables between the currently selected and the clicked variable to the selection.
Ctrl+A: select all variables in the Data panel. However, selecting all variables is not useful here in Step 4, as there are no actions that can apply to all variable types.
Shift+End: select all variables from the currently selected variable to the rightmost variable in the table.
Shift+Home: select all variables from the currently selected variable to the leftmost variable in the table.
Click the Select All Continuous button to select all Continuous variables.
Note that this action will also select any variables which you have already discretized manually. As a result, you may override your previous choices.
Note that Continuous variables already discretized manually are highlighted in soft blue.
If you neither specify an algorithm for a variable nor discretize it manually, the default Discretization algorithm with its default settings will be used.
You can set the default Discretization algorithm under Main Menu > Window > Preferences > Discretization.
For the following algorithms, a Log Transformation is available as an option:
Applying the Log Transformation is useful if you have a high density of values at the bottom end of the variable domain. This "stretches" the scale for small values approaching zero.
Note that the Log Transformation is only used temporarily for discretization purposes. Thus, the values of the thresholds and values of the intervals can all be interpreted based on the original scale.
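The round trip can be sketched as follows; for illustration only, this assumes equal-width binning in log space (BayesiaLab's own algorithms differ), with hypothetical values:

```python
import math

# Right-skewed values with a high density near zero (hypothetical).
values = [1.0, 2.0, 4.0, 8.0, 16.0]

# Compute equal-width thresholds in log space (3 thresholds -> 4 bins)...
lo, hi = math.log(min(values)), math.log(max(values))
log_thresholds = [lo + (hi - lo) * i / 4 for i in (1, 2, 3)]

# ...then map them back so they read on the original scale.
thresholds = [math.exp(t) for t in log_thresholds]
print([round(t, 2) for t in thresholds])  # evenly spaced in log space
```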
For the following algorithms, the option Isolate Zeros is available:
Separating 0 into a separate interval can be useful for zero-inflated distributions so as to clearly separate small values from "absolutely nothing."
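A sketch of the idea behind Isolate Zeros, with a hypothetical variable and threshold: exact zeros get their own interval, and only the non-zero values are binned.

```python
# Zero-inflated variable (hypothetical): the "zero" bin is reserved
# for exact zeros; remaining values split at a hypothetical threshold.
values = [0.0, 0.0, 0.0, 1.2, 0.4, 7.5]
threshold = 2.0

def bin_of(v):
    if v == 0.0:
        return "zero"  # isolated zero interval
    return "low" if v < threshold else "high"

print([bin_of(v) for v in values])
```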
Click Finish to perform the Discretization.
A progress bar displays the status of the Discretization process:
If a Filtered Value is defined for a Continuous variable, a new artificial interval with an infinitesimally small width of 10⁻⁷ will be added after the intervals defined in this step. This newly created state will serve as the Filtered State, and "*", i.e., the asterisk character, will be its State Name.
At its conclusion, BayesiaLab opens up a Graph Window with all imported variables now represented as nodes.
Select Manual from the drop-down menu.
Several additional items and buttons appear on the left side, plus a Cumulative Distribution Function (CDF) is shown on the right. This CDF plot can help in selecting appropriate discretization intervals.
In the screenshot below, the variable Standing Height (cm) is selected, meaning that the CDF plot corresponds to that variable.
Click on the Density Function button, and the Probability Density Function (PDF) of the same variable appears.
Now the button reads Distribution Function, and by clicking it, you can toggle back to the CDF view.
By default, only one threshold is placed at the mean value of the corresponding variable.
This threshold appears as a horizontal line on the CDF and a vertical line on the PDF.
The CDF and PDF plots are interactive; you can add, delete, and modify thresholds.
The following instructions apply to both plots:
To select a threshold, left-click on that threshold.
The selected threshold is highlighted in red.
The remaining thresholds on the plot remain blue.
The precise numerical value of a selected threshold is shown in the Threshold Value field to the right of the plot.
To move a threshold, click on it and hold, then move it. Release to fix its position.
The percentages displayed at the end of a selected threshold refer to the share of observations that fall into the intervals above and below this threshold.
Instead of moving the selected threshold with your cursor, you can type a specific value into the Threshold Value field.
To add an additional threshold, right-click with your cursor on the desired position.
To remove an existing threshold, right-click on it to delete it.
A zoom function is available for examining the plot in detail:
Hold the Ctrl key, click and hold the left mouse button, then move the cursor across the range you wish to focus on.
To revert to the default zoom, hold Ctrl, then double-click anywhere in the plot area.
You can zoom in repeatedly until you have reached the desired magnification level.
As an alternative to selecting a threshold by left-clicking, you can scroll through all thresholds using the Previous and Next buttons.
Note that as soon as a threshold is defined on a Continuous variable, it is considered Discretized, and the variable's data column is colored in soft blue.
The interactive CDF and PDF plots are similar to the editing functions available under Curve View in the Node Editor.
We re-use the dataset from the previous steps, so we can fast-forward straight to Step 4.
While remaining on the Manual Discretization screen, you can also utilize the Generate a Discretization function.
Click on the Generate a Discretization button.
Then, select the Type from the drop-down menu, e.g., the R2-GenOpt algorithm. Nine algorithms are available here, i.e., only the univariate methods.
Choose the number of Intervals, e.g., 5.
Set a Minimum Interval Weight, which defines the minimum prior probability of an interval in percent. The default value is 1%.
Note that you can set defaults for the above settings under Main Menu > Window > Preferences > Discretization.
Additionally, there are options for Log Transformation and Isolate Zeros, which we discuss in the context of Automatic Discretization.
Click OK to perform the Discretization.
Select the source variable from which you wish to copy the thresholds.
Click the Transfer the Discretization Thresholds button.
A new window opens up that allows you to select one or more target variables.
Select the target variables.
Click OK.
This checkbox is synchronized across Manual and Automatic Discretization processes.
If checked, BayesiaLab automatically creates Classes for each type of Discretization, i.e., all variables that are discretized with the same algorithm will belong to the same Class.
Note that variables that were discretized manually, even if you used the Generate a Discretization button, will all become members of the Class MANUAL.
You can review the Class memberships in the Class Editor after the data import process is complete.
This function allows you to load a Discretization Dictionary with saved Discretization Intervals and Discretization Methods.
This approach is particularly helpful when you repeatedly import datasets with the same variables for which you have already found a suitable discretization.
The following text file illustrates the syntax of a Discretization Dictionary.
The Discretization screen is part of Step 4 — Discretization and Aggregation within the Data Import Wizard.
This screen is only available if you designated at least one Continuous variable in Step 2 — Definition of Variable Types.
At the bottom of the screen, the Data panel carries over from the previous steps, although now without any options.
Tree is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
Automatic Discretization covers numerous discretization algorithms that are part of Step 4 — Discretization and Aggregation of the Data Import Wizard.
Most of these algorithms can also be accessed via the Generate a Discretization function within the Manual Discretization screen.
Simultaneously, a window pops up that offers you an optional Import Report, which is Step 5 of the Data Import Wizard.
Manual Discretization is one type of Discretization available in Step 4 — Discretization and Aggregation of the Data Import Wizard.
It allows you to use the algorithms from Automatic Discretization, but in a more controlled environment where you can closely observe the results of the Discretization.
In certain situations, you may have carefully chosen thresholds for a variable. Perhaps another variable, or multiple variables, should have exactly the same discretization. In that case, you can use the Transfer the Discretization Thresholds button.
| Observation No. | Weight | Normalized Weight |
| --- | --- | --- |
| 1 | 10 | 1.0 |
| 2 | 12 | 1.2 |
| 3 | 8 | 0.8 |
| 4 | 9 | 0.9 |
| 5 | 11 | 1.1 |
| 6 | 13 | 1.3 |
| 7 | 7 | 0.7 |
| 8 | 4 | 0.4 |
| 9 | 15 | 1.5 |
| 10 | 11 | 1.1 |
| Sum | 100 | 10 |
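The Normalized Weight column is consistent with dividing each weight by the mean weight (an observation for illustration; BayesiaLab's exact rule is not stated here). This normalization makes the weights average 1, so the weighted sample size equals the actual number of observations:

```python
import numpy as np

weights = np.array([10, 12, 8, 9, 11, 13, 7, 4, 15, 11], dtype=float)

# Divide by the mean weight so the normalized weights average 1;
# their sum then equals the number of observations (10).
normalized = weights / weights.mean()
print(normalized)        # [1.  1.2 0.8 0.9 1.1 1.3 0.7 0.4 1.5 1.1]
print(normalized.sum())  # 10.0
```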
Perturbed Tree is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
The Perturbed Tree algorithm is designed to optimize the representation of the probabilistic dependency between a Target variable and the to-be-discretized variable. It is an extension of the Tree discretization algorithm, and it functions as follows:
Data Perturbation generates a range of datasets.
For each perturbed dataset, a univariate tree is learned to predict the Target variable with the to-be-discretized continuous variable.
Extracting the most frequent thresholds produces the final discretization.
The Perturbed Tree algorithm takes into account the Minimum Interval Weight and can reduce the number of bins if necessary. It can also be more robust than the simple Tree discretization.
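The three steps above can be sketched as follows. This is a simplified illustration under assumptions of our own (a binary target, a depth-1 "tree" found by exhaustive Gini search, bootstrap resampling as the perturbation, and thresholds rounded before voting), not BayesiaLab's actual implementation:

```python
import numpy as np
from collections import Counter

def gini(y):
    p = y.mean()
    return 2 * p * (1 - p)

def best_split(x, y):
    """Best single threshold on x for a binary target y (max Gini reduction)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    n = len(y)
    best_t, best_gain = None, -1.0
    for i in range(1, n):
        t = (xs[i - 1] + xs[i]) / 2
        gain = gini(ys) - (i * gini(ys[:i]) + (n - i) * gini(ys[i:])) / n
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 300)
y = (x > 5).astype(float)          # hypothetical binary target

# Perturbed Tree sketch: best split per bootstrap resample, keep the mode
votes = Counter()
for _ in range(50):
    idx = rng.integers(0, len(x), len(x))
    votes[round(best_split(x[idx], y[idx]), 1)] += 1
threshold = votes.most_common(1)[0][0]
print(threshold)
```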
Supervised Multivariate is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
The Supervised Multivariate discretization algorithm focuses on representing the multivariate probabilistic dependencies involving a Target variable.
It utilizes Random Forests to find the most useful thresholds for predicting the Target variable.
Its function can be summarized as follows:
Data Perturbation generates a range of datasets.
For each perturbed dataset, a multivariate tree is learned to predict the Target variable with a subset of variables. If a structure is already defined, it is used to bias the selection of the variables for each dataset.
Extracting the most frequent thresholds produces the final discretization.
The Supervised Multivariate algorithm takes into account the Minimum Interval Weight and can improve the generalization capability of the model.
Being based on Random Forests, this algorithm is computationally expensive and stochastic by nature.
After the conclusion of the Data Import Wizard, the Supervised Multivariate discretization algorithm is also available from Main Menu > Learning > Discretization.
Note that the Supervised Multivariate discretization algorithm is not available via Node Context Menu > Node Editor > States > Curve > Generate a Discretization.
R2-GenOpt is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
The R2-GenOpt algorithm utilizes a Genetic Algorithm to find a discretization that maximizes the R2 between the discretized variable and its corresponding (hidden) Continuous variable.
As such, it is the optimal approach for achieving the first objective of discretization, i.e., finding a precise representation of the values of a Continuous variable.
This algorithm takes into account the Minimum Interval Weight and can also create a specific bin for representing zeros if the Isolate Zeros option is set.
In Validation Mode, the R2 value between the Discretized variable and its corresponding Continuous variable can be retrieved in the Information Mode by hovering over the monitor.
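The objective can be made concrete: represent each interval by the mean of the data points it contains, then compute the R² between this piecewise-constant representation and the original variable. The sketch below is our own illustration of that measure, not BayesiaLab's code:

```python
import numpy as np

def discretization_r2(x, thresholds):
    """R² between x and its discretized version, where each interval is
    represented by the mean of the data points it contains."""
    bins = np.digitize(x, thresholds)
    fitted = np.empty_like(x, dtype=float)
    for b in np.unique(bins):
        fitted[bins == b] = x[bins == b].mean()
    ss_res = ((x - fitted) ** 2).sum()
    ss_tot = ((x - x.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

x = np.random.default_rng(0).normal(0, 1, 1000)
r2_coarse = discretization_r2(x, [0.0])
r2_fine = discretization_r2(x, [-1.0, 0.0, 1.0])
print(r2_coarse, r2_fine)
```

Refining a discretization (adding thresholds) can only decrease the residual error, so the finer discretization always scores at least as high.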
The K-Means algorithm is based on the classical K-Means data clustering algorithm but uses only one dimension, which is the to-be-discretized variable.
K-Means returns a discretization that directly depends on the Probability Density Function of the variable.
More specifically, it employs the Expectation-Maximization algorithm with the following steps:
Initialization: random creation of K centers
Expectation: each point is associated with the closest center
Maximization: each center position is computed as the barycenter of its associated points
Steps 2 and 3 are repeated until convergence is reached.
Based on the K centers, the discretization thresholds are defined as the midpoints between consecutive sorted centers.
The following figure illustrates how the algorithm works with K=3.
For example, applying a three-bin K-Means Discretization to a normally distributed variable would create a central bin representing 50% of the data points and one bin of 25% for each of the distribution's tails.
Without a Target variable, or if little else is known about the variation domain and distribution of the Continuous variables, K-Means is recommended as the default method.
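The EM loop above can be sketched in one dimension. The code below is an illustrative implementation under our own assumptions (random initialization from the data, a fixed iteration cap), not BayesiaLab's exact algorithm; thresholds are placed at the midpoints between consecutive sorted centers:

```python
import numpy as np

def kmeans_1d_thresholds(x, k, n_iter=100, seed=0):
    """Illustrative 1D K-Means: EM-style loop, then thresholds at the
    midpoints between consecutive sorted centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)       # 1. random initialization
    for _ in range(n_iter):
        # 2. Expectation: associate each point with the closest center
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        # 3. Maximization: move each center to the mean of its points
        new_centers = np.array([x[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):            # convergence reached
            break
        centers = new_centers
    c = np.sort(centers)
    return (c[:-1] + c[1:]) / 2.0

rng = np.random.default_rng(1)
x = rng.normal(loc=170, scale=10, size=1000)
thresholds = kmeans_1d_thresholds(x, k=3)
print(thresholds)
```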
R2-GenOpt* is a modified version of R2-GenOpt and uses a specific MDL score to choose the number of bins.
With 100 observations, even though we selected 8 bins, only 3 were created for the variable 8- Wrist girth.
With 1,500 observations, even though we selected 10 bins, only 5 were created for AGN, and 6 for ALL.
K-Means is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
R2-GenOpt* is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
Normalized Equal Distance is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
The Normalized Equal Distance algorithm pre-processes the data with a smoothing algorithm to remove outliers before computing equal partitions.
As a result, the algorithm is less sensitive to outliers than the Equal Distance algorithm.
The algorithm also takes into account the Minimum Interval Weight that defines the minimum prior probability of a bin.
You can adjust the default Minimum Interval Weight under Main Menu > Window > Preferences > Discretization.
Density Approximation is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
The Density Approximation discretization detects changes in the sign of the derivative of the Probability Density Function (PDF) in order to identify local minima and maxima.
Between each local minimum and maximum, the algorithm creates a threshold.
Also, the algorithm automatically detects the optimal number of bins, although you can specify the maximum number of bins.
The minimum size permitted for bins is 1% of the data points.
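The mechanism can be sketched with a smoothed histogram as a crude density estimate. This is our own simplified illustration, not BayesiaLab's implementation; it places a threshold at each local extremum, i.e., wherever the derivative of the smoothed density changes sign:

```python
import numpy as np

def density_extrema_thresholds(x, bins=100, smooth=5):
    """Sketch: smooth a histogram density estimate, then place a threshold
    wherever the derivative (np.diff) changes sign, i.e., at local extrema."""
    hist, edges = np.histogram(x, bins=bins, density=True)
    kernel = np.ones(smooth) / smooth
    density = np.convolve(hist, kernel, mode="same")   # crude moving-average smoothing
    d = np.diff(density)
    sign_change = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0] + 1
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[sign_change]

# Bimodal example: the local minimum between the two modes yields a threshold
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 1000), rng.normal(6, 1, 1000)])
thresholds = density_extrema_thresholds(x)
```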
The Equal Distance algorithm computes intervals of equal width based on the range of the variable.
This method is particularly useful for discretizing variables that share the same variation domain (e.g., satisfaction measures in surveys).
Additionally, this method is suitable for obtaining a discrete representation of the density function.
The Equal Frequency algorithm defines thresholds so that each interval contains the same number of observations.
This approach typically produces a uniform distribution.
As a result, the shape of the original density function is no longer apparent upon discretization.
This also leads to an artificial increase in the entropy of the system, directly affecting the complexity of machine-learned models.
However, this type of discretization can be useful — once a structure is learned — for further increasing the precision of the representation of continuous values.
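The contrast between the two methods can be seen directly with NumPy (our illustration): Equal Distance splits the range into equal widths, while Equal Frequency uses quantiles, producing near-uniform bin counts that flatten the density shape.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 10_000)
k = 4  # number of intervals

# Equal Distance: split the range [min, max] into k equal-width intervals
eq_dist = np.linspace(x.min(), x.max(), k + 1)[1:-1]

# Equal Frequency: thresholds are quantiles, so each interval holds
# roughly the same number of observations
eq_freq = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])

counts = np.bincount(np.digitize(x, eq_freq))
print(eq_dist)
print(eq_freq)
print(counts)   # near-uniform counts: the original density shape is lost
```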
Equal Distance is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
However, the Equal Distance algorithm is extremely sensitive to outliers and can generate intervals that do not contain any data points. Please see the Normalized Equal Distance algorithm, which addresses this particular issue.
Equal Frequency is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
Unlike the Discretization step, which is mandatory for Continuous variables, Aggregation is optional for Discrete variables.
Note that an analogous function, Generate Aggregations, is also available for Discrete nodes in the States tab of the Node Editor.
This function is useful when dealing with a large number of values in a Discrete variable. Once imported, the large number of resulting Node States would make it difficult to discover any relationships with that node.
The Aggregation function in the Data Import Wizard is available for single Discrete variables and for multiple Discrete variables.
Please see the usage instructions and examples in the corresponding sub-topics:
Aggregation of Single Variable
Aggregation of Multiple Variables
This multivariate discretization method is based on analyzing the relationship between variables.
The Unsupervised Multivariate discretization algorithm focuses on representing multivariate probabilistic dependencies using Random Forests.
Its functionality can be described as follows:
A new dataset is created as a clone of the original one.
In this new dataset, each variable is independently shuffled to render all the variables independent while keeping the same statistics for each variable.
The cloned dataset is concatenated with the original dataset. Then, a target variable is created to differentiate the clone from the original, indicating the independent set versus the original dependent set.
Various datasets are generated from this concatenated dataset with Data Perturbation.
For each perturbed dataset, a multivariate tree is learned to predict the target variable with a subset of variables. If a structure is already defined, it is used to bias the selection of the variables for each dataset.
Extracting the most frequent thresholds produces the discretization.
Being based on Random Forests, this algorithm is computationally expensive and stochastic by nature, especially when the number of variables is large.
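Steps 1–3 of this scheme can be sketched with pandas. The variable names and the dependency below are hypothetical; this merely illustrates the clone-shuffle-concatenate construction, not BayesiaLab's implementation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
height = rng.normal(170, 10, n)
weight = 0.8 * height - 66 + rng.normal(0, 4, n)   # dependent on height
original = pd.DataFrame({"height": height, "weight": weight})

# Step 2: shuffle each column independently in the clone — the marginal
# statistics are preserved, but the dependencies are destroyed.
clone = original.apply(lambda c: rng.permutation(c.to_numpy()))

# Step 3: concatenate and label; the target differentiates clone vs. original
combined = pd.concat([original, clone], ignore_index=True)
combined["is_original"] = [1] * n + [0] * n
# A tree predicting is_original can only succeed by exploiting the joint
# height/weight structure, so its split thresholds reflect the dependency.
print(combined.shape)
```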
The Unsupervised Multivariate discretization algorithm is also available after the data import via Main Menu > Learning > Discretization.
However, it is not available in the Node Editor (Node Context Menu > Edit > Curve > Generate a Discretization).
Unsupervised Multivariate is one of the Automatic Discretization algorithms for Continuous variables in Step 4 — Discretization and Aggregation of the Data Import Wizard.
Individual variables can be aggregated manually or automatically in Step 4 of the Data Import Wizard.
To illustrate all related workflows, we use an American auto buyer satisfaction survey containing 42,397 responses. Each record contains attributes of the purchased vehicle, such as make (or brand), model, body style, vehicle segment, number of cylinders, transmission, price paid, self-reported fuel economy, plus hundreds of other variables.
First, we want to manually aggregate all 37 automobile brands that appear in the survey into just two states, i.e., Premium Brands and Non-Premium Brands.
This manual aggregation will be based exclusively on our subjective perception of the auto industry as of 2009, which is when this particular survey was conducted.
Click on the Brand variable in the Data panel.
From the States list on the left, select the values you wish to aggregate using Shift+Click or Ctrl+Click.
Then, click the Aggregate button.
The newly-formed, aggregated state appears in the Aggregates list on the right.
By default, the original values are concatenated using the "+" symbol as a delimiter. An underscore "_" is added as a prefix.
As necessary, you can select more values from the States list and create additional aggregated states.
In the list of Aggregates, you can now replace the automatically-generated state names with more meaningful ones.
You can now proceed to any other variable or click Finish to conclude the Data Import Wizard.
In addition to the Manual Aggregation described above, BayesiaLab can support you in making the aggregation decisions. For this purpose, BayesiaLab can show how the original values of the to-be-aggregated variable correlate with those of other variables.
Continuing with the previous example, we now perform an aggregation of the same variable, Brand. Now, however, we use each brand's correlation with Price as a guide instead of our judgment.
For the purpose of this demonstration, we have already discretized the Price variable manually into three (arbitrary) intervals using two thresholds, i.e., $25,000 and $45,000.
We now want to use the correlation of each brand with the top interval, i.e., $45,000+, as a measure of its "premium appeal" so that we can reduce the 37 brands into three states, Mainstream, Premium, and Luxury.
For reference, 8.65% of all survey responses reported a vehicle purchase price of $45,000 or higher.
Click on the Brand variable in the Data panel.
Click the Show Correlations box.
Select Target and State.
Review the values shown in the Correlations column. By hovering with your cursor over the Correlation bars in each row, a Tooltip displays the percentage difference of the corresponding row versus the marginal value.
The colored bars show how each value compares to the marginal probability of the selected state of the target. A green-colored bar indicates a probability higher than the marginal probability, and a red bar suggests a lower probability.
Select the states to aggregate using Ctrl+Click.
Once you have selected the values, click the Aggregate button.
The newly aggregated values now appear as a single item in the Aggregates list.
Review the newly aggregated states and, if necessary, assign new names to replace the ones that were generated automatically.
To reverse the aggregation, select the aggregated items in the Aggregates list and click Delete.
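The quantity behind the Correlations column can be reproduced with pandas. The brands and prices below are a made-up mini-sample (the survey data itself is not shown here); the computation is the conditional probability of the target state per brand minus the marginal probability, expressed in percentage points:

```python
import pandas as pd

# Hypothetical mini-survey: Brand and a discretized Price state per response
df = pd.DataFrame({
    "Brand": ["Porsche", "Porsche", "Mercury", "Ford",
              "Ford", "BMW", "BMW", "BMW"],
    "Price": ["$45,000+", "$45,000+", "<$25,000", "<$25,000",
              "$25,000-$45,000", "$45,000+", "$25,000-$45,000", "$45,000+"],
})

target_state = df["Price"] == "$45,000+"
marginal = target_state.mean()                      # P($45,000+) overall
by_brand = target_state.groupby(df["Brand"]).mean() # P($45,000+ | Brand)
deviation = (by_brand - marginal) * 100             # percentage points vs. marginal
print(deviation.sort_values())
```

A positive deviation corresponds to a green bar (above the marginal), a negative one to a red bar.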
The Correlation-Aided Automatic Aggregation is very similar to the Correlation-Aided Manual Aggregation.
The principal difference is that you don't select your to-be-aggregated values manually but rather specify thresholds that determine the aggregation.
So, the initial steps are analogous to the Correlation-Aided Manual Aggregation.
Click on a Discrete variable in the Data panel.
Click the Show Correlations box.
Select Target and State.
Review the values shown in the Correlations column. By hovering with your cursor over the Correlation bars in each row, a Tooltip displays the percentage difference of the corresponding row versus the marginal value.
The colored bars show how each value compares to the marginal probability of the selected state of the target. A green-colored bar indicates a probability higher than the marginal probability, and a red bar suggests a lower probability.
Now, instead of manually selecting the values you want to aggregate, click the Automatic Aggregation button.
The Automatic Aggregation window opens up.
The colored bar at the top visualizes the percentage differences versus the marginal probability of the selected state of the target.
In our example, there is one brand, Mercury, which had no observations in the $45,000+ interval. As a result, it marks the bottom end of the spectrum, i.e., it is 8.65 percentage points below the marginal probability.
On the other end of the spectrum, Porsche is 83.97 percentage points above the marginal probability.
A default threshold is placed at 0, which is marked by the pink-to-red color change in the bar.
You can manually add thresholds by right-clicking on the bar.
As soon as you add a threshold, a corresponding entry appears in the list below.
Right-clicking again on an existing threshold removes that threshold.
You can move an existing threshold by clicking on it and then dragging it to the desired value.
Also, in the table below the colored bar, you can type in a threshold value.
By clicking OK, you confirm the specified thresholds, and all values in the States list will be aggregated accordingly.
Alternatively, you can click on Generate Aggregates and specify the desired number of intervals.
You obtain a set of aggregation thresholds, which you can further modify or accept by clicking OK.
Now you have a new set of states in the list of Aggregates.
Similar to the workflow for the Aggregation of a Single Variable, you can also perform an Aggregation of Multiple Variables.
We use the same auto buyer survey dataset to illustrate the process. In the auto industry, numerous schemes are used to group vehicle types and body styles into so-called segments. Each segment carries a descriptive name, e.g., Compact Car, Full-Size SUV, Minivan, Mid-Size Pickup, Mid-Size Crossover. In our dataset, we have four variables, which each represent such a segmentation scheme. While all these segmentation schemes roughly convey the same information, they differ in their granularity: for instance, variable Segmentation 3 has 23 states; Segmentation 4 has 33. Our objective is now to reduce each one of the segmentation schemes down to three states.
This time, instead of Price, we use the variable MPG - Combined as a target. It represents the survey respondents' estimates of their vehicles' combined fuel economy in miles per gallon (MPG). In other words, we want to create a new aggregation for each segmentation scheme based on fuel economy. Also, the variable MPG - Combined only has two intervals, with one threshold at 22.5. This number has been used in the past as a criterion for so-called "gas guzzlers." So, we are going to use the state <=22.5 as a proxy for poor fuel economy. As a result, we expect each of the existing segments to be "remapped" according to fuel economy.
In the Data panel, using Ctrl+Click or Shift+Click, select the variables Segmentation 1, Segmentation 2, Segmentation 3, and Segmentation 4.
This brings up the Multiple Aggregation panel.
Set Target to MPG - Combined, and State to <=22.5.
Set Final Number of States to 3.
Click the Aggregate button to perform the aggregation.
Note that there will be no immediate feedback regarding the results of the aggregation.
Rather, we can only see the results of the aggregation in the Import Report in Step 5 of the Data Import Wizard.
Click Finish to complete Step 4 of the Data Import Wizard.
BayesiaLab opens a new Graph Window with all variables now presented as nodes.
Simultaneously, a prompt comes up offering to display the Import Report.
Click Yes, and the Import Report — featuring all variables, not just the aggregated variables — appears in a new window.
The Import Report is the fifth and final step of the Data Import Wizard.
After you click Finish in Step 4 of the Data Import Wizard, two progress bars inform you about the status:
Depending on the size of your dataset, the selected discretization algorithms, and the number of Missing Values, this may take anywhere from a fraction of a second to several minutes.
Once completed, BayesiaLab opens up a new Graph Window with all imported variables now represented as nodes.
At the same time, a prompt appears, offering you the Import Report.
Note that this report is entirely optional. So whether you display it or not does not affect the completion of the Data Import Wizard.
Click Yes to bring up the Import Report window.
The first column displays the names of the imported variables.
The second column displays the type associated with each variable.
For a Weight variable, no further information is available or provided.
For a Learn/Test variable, the association with BayesiaLab's Learn and Test labels is shown, plus the corresponding number of cases.
The third column shows all States of each variable, if applicable.
The right part of the report depends on the variable type:
Discrete Variables:
The report shows each state and, adjacent to it, any aggregations that were performed. Furthermore, the color of the rightmost cell in the row highlights that an aggregation took place.
Continuous Variables:
The names of the discretized states are shown.
The next two columns to the right report the lower and upper thresholds for each interval.
The rightmost column is colored according to the discretization algorithm used.
Asked/Obtained contrasts the requested discretization algorithm with the one that was actually used, e.g., a fallback option.
Note that you can save this Import Report as an HTML file, so you can subsequently open the fully-formatted report in Excel or any other spreadsheet software.
Data Discretization:
Dataset Creation:
Missing Values Estimation: