The Data Tables feature (and tab on the main screen) allows the management of internal tables specific to your user. When there is no requirement, desire or knowledge to use an external database, the Data Tables feature can be used to store and manage the data for your application. The respective database tables can be easily managed, are secured, and are unique to your user (and any application you choose to use them in).
The following features are available:
A little thought should be put into the database structure and field names, as altering them later will require alteration of the models which use the data (this is currently not automatic).
Data types currently supported include:
Sharing between users is possible, but this is an advanced topic that can be explored by raising a support request from within the system.
A field may also be defined as the primary key (see below).
Names (of Tables, Fields, etc.) must conform to the following rules:
One field in each table may be designated as a primary key. This field will automatically be indexed, must be unique, and will be set to auto-increment (which means that if you don't supply a value, the next incremental value after the last automatically generated one will be used); this behaviour is illustrated in the sketch after the note below.
Note:
It is good practice to set the field which is mainly used as the identifying id for the data as the primary key.
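The Data Tables feature manages primary keys for you, but as a rough illustration of the same auto-increment behaviour, here is a minimal Python sketch using SQLite as a stand-in (the orders table and its columns are hypothetical and not part of the product):

```python
import sqlite3

# Stand-in illustration of auto-increment primary key behaviour.
# The "orders" table and its columns are hypothetical examples.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders ("
    "  order_id INTEGER PRIMARY KEY AUTOINCREMENT,"  # indexed, unique, auto-generated
    "  customer TEXT,"
    "  amount   REAL"
    ")"
)

# No order_id supplied: the next incremental value is generated automatically.
conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", ("Alice", 19.99))
conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", ("Bob", 5.00))

for row in conn.execute("SELECT order_id, customer, amount FROM orders"):
    print(row)   # (1, 'Alice', 19.99) then (2, 'Bob', 5.0)
conn.close()
```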
To use a Data Table (whether reading or writing), first create a Data Source (tab on the Model View), selecting the JDBC Type as Data Table and setting the Default Table to the name of the desired Data Table. The Data Source name can be a name of your choice.
Then use that Data Source name as the Source within the Data processing node.
Data can be imported from a CSV file from the table list. The first row MUST contain the name of each column, which MUST match a relevant column name in the Data Table. Column order in the file does not matter.
If one of the columns is designated as the Primary Key, the values in this column must be unique; otherwise any duplicate records will not be loaded.
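As an illustration of the expected file layout, the following Python sketch writes a CSV for a hypothetical Data Table with columns order_id, customer and amount (names assumed for the example); the header row carries the column names, and the column order need not match the table:

```python
import csv

# Hypothetical rows for a Data Table with columns order_id, customer and amount.
# The fieldnames below become the required header row; their order in the file
# does not need to match the column order in the Data Table.
rows = [
    {"customer": "Alice", "order_id": 1, "amount": 19.99},
    {"customer": "Bob",   "order_id": 2, "amount": 5.00},
]

with open("orders.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["customer", "order_id", "amount"])
    writer.writeheader()    # first row: column names matching the Data Table
    writer.writerows(rows)  # order_id values kept unique for the primary key
```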
When importing large datasets (10k+ records), load time can be improved by removing any field indexes and re-applying them afterwards; for smaller datasets the difference is unlikely to be noticeable (this is generally the case with most databases).
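Within Data Tables this simply means removing the field indexes before the import and re-adding them afterwards; for reference, the equivalent pattern on an ordinary SQL database looks roughly like the sketch below (SQLite used as a stand-in, with hypothetical table and index names):

```python
import sqlite3

# Illustration of the "drop indexes, bulk load, re-index" pattern.
# The table and index names are hypothetical; the Data Tables import
# itself is driven from the table list in the UI.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

conn.execute("DROP INDEX idx_orders_customer")  # remove the index before the bulk load

rows = [(i, f"customer_{i}", i * 1.5) for i in range(100_000)]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")  # re-apply afterwards
conn.commit()
conn.close()
```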