SmartPredict Modules

    Introduction

    This article describes the concept of modules in SmartPredict, along with the tips you need to use them.

    What’s a module in SmartPredict?

    A module is one of the building blocks that you drag and drop into the Build and Deploy spaces to form a flowchart. Each module represents a basic step in the training, evaluation, or deployment of an AI model.

    You can find them in the right pane, in the Module menu, under the Core Modules drop-down list.

    Users can also create their own modules with the Custom Module feature. The next article is a tutorial on how to create a Custom Module.

    (Video: dragging and dropping a dataset module, a Dataset Processor for missing-data handling, data splitting, a Custom Module, and a run)

    Module menu in SmartPredict

    The Module menu is available in the Build and Deploy spaces, where modules are organized into five drop-down lists:

    • Datasets: each uploaded dataset is usable as a module. Dataset modules are usually the first module of any flowchart in the Build space.
    • Core Modules: where users can find the exhaustive list of all available modules categorized in drop-down sub-lists.
    • Custom Modules: where modules created by users are stored. They can be created from either a Python class or a function.
    • Processing pipeline: where processing pipeline modules saved by users are stored. As a reminder, a processing pipeline is a sequence of processing operations, used mainly for data preprocessing.
    • Trained Models: where users retrieve models trained after each run and saved by the Item Saver module.
      (Screencast: the Module menu)
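    Custom Modules are written as a Python class or function. SmartPredict's exact interface is not shown in this article, so the following is only a hypothetical sketch of what a function-based custom module could look like; the `process` name and the one-DataFrame-in, one-DataFrame-out convention are assumptions, not the platform's real API.

```python
import pandas as pd

def process(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical custom module: drop rows whose "target" value is missing.

    The function name and the DataFrame-in / DataFrame-out convention are
    illustrative assumptions, not SmartPredict's real custom-module API.
    """
    return df.dropna(subset=["target"]).reset_index(drop=True)

# One row of the example input has a missing target value.
raw = pd.DataFrame({"feature": [1, 2, 3],
                    "target": [10.0, None, 30.0]})
clean = process(raw)
print(len(clean))  # → 2
```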

    Warning

    When two interconnected modules are placed one after the other, the data produced by the first is presented as the input of the next. It is therefore important to make sure that the output data type of a module matches the input type of the module that follows it. You can check the data type with the Data Object Logger module and read the log displayed at the bottom of the workspace.
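    Outside the platform, the same rule can be sketched in plain Python: the hypothetical `connect` helper below refuses to chain two steps whose output and input types disagree, which is what checking the logs lets you verify by eye.

```python
def producer():
    # First module: outputs a list of rows (list of dicts).
    return [{"x": 1}, {"x": 2}]

def consumer(rows):
    # Next module: expects a list and counts its rows.
    return len(rows)

def connect(output, expected_type):
    """Hypothetical helper (not a SmartPredict API): fail fast when the
    produced data does not match the next module's expected input type."""
    if not isinstance(output, expected_type):
        raise TypeError(
            f"expected {expected_type.__name__}, got {type(output).__name__}"
        )
    return output

data = connect(producer(), list)  # OK: a list flows into a list-consuming step
print(consumer(data))  # → 2
```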

    Tip

    You do not need to check the documentation every time you retrieve and configure a module: the module names are intuitive, so just make sure you define the processing sequence you want. In addition, a wizard indicates what to configure in the settings.

    Note

    If you need to run a specific process that is not included in the basic modules, you can create your own module with the custom modules.

    Module’s features

    All modules share the following features:

    • Drag and drop: users can drag, drop, and duplicate any module in the menu, in both the Build and Deploy spaces.
    • Configurable: once in the workspace, a module can be set up to perform the desired processing operation.
    • Inter-connectable: once placed in the workspace, a module's output can be connected to the input of the module that directly follows it, provided the data types match.

    Some specific modules


    In the drop-down list of the Control Modules category, we find:

    - the Item Saver module: used to save a process as a module. It is necessarily used in the Build space during model training to save a trained ML/DL model, as well as the whole data preprocessing process. The generated modules are retrievable in the Trained Models drop-down list and are mainly used to build the flowchart in the Deploy space.

    - the Data Visualizer module: as its name suggests, it lets you visualize and profile your data. Drag and drop it right after your dataset to obtain graphs and quickly gain insights through data exploration.

    In the drop-down list of the Data Preprocessing category, we find:

    - the Dataset Processor module: it contains a large set of data preprocessing operations, from data cleaning to normalizing.
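    To make "from data cleaning to normalizing" concrete, here is what those two operations look like in plain pandas; this is only a sketch of the kind of processing the Dataset Processor bundles, not its actual implementation.

```python
import pandas as pd

# A small dataset with missing values in both columns.
df = pd.DataFrame({"age": [20.0, None, 40.0],
                   "income": [1000.0, 2000.0, None]})

# Cleaning: fill each missing value with its column's mean.
df = df.fillna(df.mean())

# Normalizing: min-max scale every column into [0, 1].
df = (df - df.min()) / (df.max() - df.min())

print(df["age"].tolist())     # → [0.0, 0.5, 1.0]
print(df["income"].tolist())  # → [0.0, 1.0, 0.5]
```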

    In the drop-down list of the Helpers category, we find:

    - the Data/Object/Type Logger module: as its name indicates, it records the operating conditions of the build and deployment and displays logs after each run so you can track it. This makes it easier to pinpoint what to debug in case of errors.

    You can see how to work with them in the tutorial where we deal with an end-to-end AI project.

    See the next article "Custom Modules" to learn how to create a module with the Custom Module.
