polus

Welcome to Polus!!!

Polus is a powerful TensorFlow toolkit for creating and training complex deep learning models in a functional way.

This toolkit is currently under development and focuses on academic research, such as biomedical tasks, although it can also be used in other domains.

Main packages

Polus consists of a main API that resides under the polus package and more task-specific APIs that reside under the task name. For instance, polus.ner and polus.ir are sub-packages focused on NER (named entity recognition) and IR (information retrieval) tasks.

The main API

The main API consists of:

  • polus.training: Training API that contains the most basic training loop
  • polus.data: DataLoaders API that extends the tf.data.Dataset functionality to build more useful data loaders with easy-to-use caching mechanisms.
  • polus.callbacks: Main source of interaction with the main training loop
  • polus.models: Model API that extends tf.keras.Model by handling storing and loading of the entire model architecture.
  • polus.metrics: Metrics API that describes how metrics should be implemented so that they can be used efficiently during the training loop.
  • polus.core: The Core API defines some base classes and variables that are used throughout the framework. It also exposes some functions to change the internal behaviour of the polus framework, e.g., the use of XLA.
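As a point of reference for the DataLoaders API, the caching mechanism of tf.data.Dataset that polus.data builds on can be sketched in plain TensorFlow. This uses only the standard tf.data API, not Polus's own loaders:

```python
import tensorflow as tf

# A small tf.data pipeline; polus.data's DataLoaders extend this
# tf.data.Dataset functionality with additional caching helpers.
dataset = tf.data.Dataset.range(10)
dataset = dataset.map(lambda x: x * 2)

# cache() stores the mapped elements after the first pass, so the
# (potentially expensive) map is only executed once across epochs.
dataset = dataset.cache()

values = [int(x) for x in dataset]
print(values)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```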

Remainder of the polus package

The remaining files act as a code repository and hold some utility classes, e.g. polus.layers contains ready-to-use layers that can be imported and used by tf.keras.Model(s).
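As a minimal sketch of how such layers plug into a model, the example below uses a hypothetical ScaleLayer stand-in (not an actual polus.layers class); any tf.keras.layers.Layer subclass composes with a tf.keras.Model the same way:

```python
import tensorflow as tf

# Stand-in layer: a hypothetical example, NOT an actual polus.layers class.
class ScaleLayer(tf.keras.layers.Layer):
    def __init__(self, factor=2.0, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        # elementwise scaling of the input tensor
        return inputs * self.factor

# use the layer inside a standard functional tf.keras.Model
inputs = tf.keras.Input(shape=(4,))
outputs = ScaleLayer(factor=3.0)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

out = model(tf.ones((1, 4)))
print(out.numpy())  # [[3. 3. 3. 3.]]
```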

Notebooks and examples

At the time of writing, there are no notebooks available... work in progress

TensorFlow focused

Since this framework was designed from scratch with TensorFlow 2.3+ in mind, we leverage its most recent features to make sure that the code runs as smoothly and as fast as possible. For instance, we internally use static computational graphs during training, and XLA is enabled by default; this behaviour can be changed through the polus.core API.
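A minimal illustration of the underlying TensorFlow mechanism (plain TensorFlow, not Polus-specific code): wrapping a function in tf.function yields a static computational graph, and jit_compile=True (named experimental_compile in older 2.x releases) requests XLA compilation:

```python
import tensorflow as tf

# A training-style step traced into a static graph; jit_compile=True asks
# TensorFlow to lower it through XLA (requires a TF 2.x build with XLA support).
@tf.function(jit_compile=True)
def squared_error(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

loss = squared_error(tf.constant([1.0, 2.0]), tf.constant([1.5, 2.5]))
print(float(loss))  # 0.25
```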

r'''
# Welcome to Polus!!!

Polus is a powerful TensorFlow toolkit for creating and training
complex deep learning models in a functional way.

This toolkit is currently under development and focuses on academic research,
such as biomedical tasks, although it can also be used in other domains.

# Main packages

Polus consists of a main API that resides under the polus package
and more task-specific APIs that reside under the task name.
For instance, polus.ner and polus.ir are sub-packages
focused on NER (named entity recognition) and IR (information
retrieval) tasks.

## The main API

The main API consists of:

- `polus.training`: Training API that contains the most basic training loop
- `polus.data`: DataLoaders API that extends the tf.data.Dataset functionality
 to build more useful data loaders with easy-to-use caching mechanisms.
- `polus.callbacks`: Main source of interaction with the main training loop
- `polus.models`: Model API that extends tf.keras.Model by handling
 storing and loading of the entire model architecture.
- `polus.metrics`: Metrics API that describes how metrics should be implemented so that
 they can be used efficiently during the training loop.
- `polus.core`: The Core API defines some base classes and variables that are used
 throughout the framework. It also exposes some functions to change the internal
 behaviour of the polus framework, e.g., the use of XLA.

## Remainder of the polus package

The remaining files act as a code repository and hold
some utility classes, e.g. `polus.layers` contains ready-to-use layers that
can be imported and used by tf.keras.Model(s).

# Notebooks and examples

At the time of writing, there are no notebooks available... work in progress

# TensorFlow focused

Since this framework was designed from scratch with TensorFlow 2.3+ in mind,
we leverage its most recent features to make sure that the code runs
as smoothly and as fast as possible. For instance, we internally use static
computational graphs during training, and XLA is
enabled by default; this behaviour can be changed through the polus.core API.

'''

__version__ = "0.2.1"

import logging
from logging.handlers import TimedRotatingFileHandler

import os
import sys

# setting up logger
logger = logging.getLogger(__name__)

FORMATTER = logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
DEBUG_FORMATTER = logging.Formatter("%(asctime)s %(filename)s:%(name)s:%(funcName)s:%(lineno)d: %(message)s")

if "POLUS_LOGGER_LEVEL" in os.environ:
    m = {"DEBUG": logging.DEBUG,
         "INFO": logging.INFO,
         "WARN": logging.WARN,
         "ERROR": logging.ERROR}
    # map the environment variable to a logging level, defaulting to DEBUG
    logger.setLevel(m.get(os.environ["POLUS_LOGGER_LEVEL"], logging.DEBUG))
else:
    logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(FORMATTER)

logger.addHandler(console_handler)

if not os.path.exists('logs'):
    os.makedirs('logs')

file_handler = TimedRotatingFileHandler(os.path.join("logs", "polus.log"), when='midnight', encoding='utf-8')
file_handler.setLevel(logging.WARN)
file_handler.setFormatter(FORMATTER)
logger.addHandler(file_handler)

file_handler_db = TimedRotatingFileHandler(os.path.join("logs", "debug.log"), when='midnight', encoding='utf-8')
file_handler_db.setLevel(logging.DEBUG)
file_handler_db.setFormatter(DEBUG_FORMATTER)
logger.addHandler(file_handler_db)

import tensorflow as tf

try:
    import horovod.tensorflow as hvd
except ModuleNotFoundError:
    import polus.mock.horovod as hvd

from polus.utils import Singleton

# init some vars
class PolusContext(metaclass=Singleton):

    def __init__(self):
        logger.debug("-----------------DEBUG INIT POLUS CONTEXT-------------")
        self.use_horovod = False
        gpus = tf.config.experimental.list_physical_devices('GPU')
        if len(gpus) > 1:
            if hvd.init() == "mock":
                logger.info("The script found multiple GPUs, however it cannot use them since multi-GPU"
                            " training requires the horovod.tensorflow module to be installed.\n"
                            "Instead the process will only use one GPU")
            else:
                if hvd.size() <= 1:
                    logger.info("The script found multiple GPUs and a horovod.tensorflow installation. However,"
                                " only one process was initialized; please check if you are running the script with horovodrun or mpirun.")
                else:
                    if hvd.local_rank() == 0:
                        logger.info(f"MultiGPU training enabled, using {hvd.size()} processes")
                    self.use_horovod = True

            tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

    def is_horovod_enabled(self):
        return self.use_horovod

PolusContext()


# add main lib sub packages
#import polus.callbacks
#import polus.core
#import polus.data
#import polus.layers
#import polus.losses
#import polus.metrics
#import polus.models
#import polus.schedulers
#import polus.training
#import polus.utils
#import polus.hpo
#import polus.ir
#import polus.ner
#import polus.experimental