bfgn.architectures package

Submodules

bfgn.architectures.alex_net module

class bfgn.architectures.alex_net.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.AutoencoderMixin, bfgn.architectures.config_sections.BlockMixin, bfgn.architectures.config_sections.GrowthMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.alex_net.create_model(inshape, n_classes, output_activation, block_structure=(2, 2, 2, 2), filters=64, kernel_size=(3, 3), padding='same', pool_size=(2, 2), use_batch_norm=False, use_growth=False, use_initial_colorspace_transformation_layer=False)[source]
Return type

Model
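
A minimal usage sketch for this architecture; the window shape, class count, and output activation below are illustrative values rather than library defaults:

    from bfgn.architectures import alex_net

    # Illustrative input window: 128 x 128 pixels with 4 bands.
    model = alex_net.create_model(
        inshape=(128, 128, 4),
        n_classes=5,
        output_activation='softmax',
    )
    model.summary()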

bfgn.architectures.config_sections module

class bfgn.architectures.config_sections.BaseArchitectureConfigSection[source]

Bases: bfgn.configuration.sections.BaseConfigSection

Base class for architecture config sections; includes options that are generic to all architectures.

filters = 'REQUIRED'

The number of filters to use for the initial convolutions; this may increase in later layers for architectures that support the use_growth option.

Type

int

kernel_size = (3, 3)

The kernel size used for convolutions. Most often (3, 3) for a 3x3 kernel.

Type

tuple

n_classes = 'REQUIRED'

The number of classes (filters) used in the final network layer. Example 1: if the network is being trained to predict a single continuous variable, this should be 1. Example 2: if the network is being trained to classify pixels into five classes, this should be 5.

Type

int

internal_activation = 'relu'

The internal activation function used between layers. See Keras documentation for more details and available options.

Type

str

output_activation = 'REQUIRED'

The activation type for the final output layer. See Keras documentation for more details and available options.

Type

str

padding = 'same'

See Keras documentation for more details and available options.

Type

str

use_batch_norm = False

Whether to use batch normalization layers. Currently only implemented after convolutions, but there’s evidence that it may be useful before convolutions in at least some architectures or applications, and we plan on supporting both options in the near future.

Type

bool

use_initial_colorspace_transformation_layer = False

Whether to use an initial colorspace transformation layer. There is evidence that model-learned color transformations can be more effective than other types of transformations.

Type

bool

get_option_keys()[source]
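
A brief sketch of inspecting which options a concrete architecture exposes; it assumes 'unet' is among the names returned by bfgn.architectures.get_available_architectures():

    from bfgn.architectures.config_sections import get_architecture_config_section

    config_section = get_architecture_config_section('unet')
    # Expected to list the generic options above plus any mixin-specific
    # options (e.g. block_structure, pool_size, use_growth).
    print(config_section.get_option_keys())
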
class bfgn.architectures.config_sections.AutoencoderMixin[source]

Bases: object

Mixin for architectures with autoencoder downsampling/upsampling characteristics.

pool_size = (2, 2)

Pooling and upsampling size during each downsampling/upsampling step.

Type

tuple

class bfgn.architectures.config_sections.BlockMixin[source]

Bases: object

Mixin for architectures with block/layer patterns.

block_structure = (2, 2, 2, 2)

The number of layers in each network block. The length of the tuple is the number of network blocks and the value at each index i is the number of layers in block i. Example: (2, 3, 4) configures a network where the first block has two layers, the second block has three layers, and the third block has four layers.

Type

tuple

class bfgn.architectures.config_sections.DilationMixin[source]

Bases: object

Mixin for architectures with dilated convolutions.

dilation_rate = 2

The dilation rate for dilated convolutions.

Type

int

class bfgn.architectures.config_sections.FlatMixin[source]

Bases: object

Mixin for flat architectures.

num_layers = 8

The number of layers in the network.

Type

int

class bfgn.architectures.config_sections.GrowthMixin[source]

Bases: object

Mixin for architectures where growth in the number of filters could be useful.

use_growth = False

Whether to increase the number of filters at each layer in the network.

Type

bool

bfgn.architectures.config_sections.create_model_from_architecture_config_section(architecture_name, architecture_config_section, inshape)[source]

Creates a Keras model for a specific architecture using the provided options.

Parameters
  • architecture_name (str) – Architecture to create. Get a list of currently available architectures using bfgn.architectures.get_available_architectures().

  • architecture_config_section (BaseArchitectureConfigSection) – Options for the specified architecture.

  • inshape (tuple) – Shape of the input data.

Return type

Model

Returns

Keras model object.

bfgn.architectures.config_sections.get_architecture_config_section(architecture_name)[source]

Gets architecture options for the specified architecture.

Parameters
  • architecture_name (str) – Architecture to create. Get a list of currently available architectures using bfgn.architectures.get_available_architectures().

Return type

BaseArchitectureConfigSection

Returns

Options for the specified architecture.
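
A sketch of the config-driven path through the two functions above. It assumes 'unet' is an available architecture name and that the required options can be set directly as attributes on the returned config section; in a full workflow these values would normally come from a parsed bfgn configuration:

    from bfgn.architectures.config_sections import (
        create_model_from_architecture_config_section,
        get_architecture_config_section,
    )

    config_section = get_architecture_config_section('unet')
    # Required options (values are illustrative).
    config_section.filters = 64
    config_section.n_classes = 1
    config_section.output_activation = 'linear'

    model = create_model_from_architecture_config_section(
        'unet', config_section, inshape=(128, 128, 4)
    )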

bfgn.architectures.dense_flat_net module

class bfgn.architectures.dense_flat_net.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.BlockMixin, bfgn.architectures.config_sections.GrowthMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.dense_flat_net.create_model(inshape, n_classes, output_activation, block_structure=(2, 2, 2, 2), filters=64, kernel_size=(3, 3), padding='same', use_batch_norm=False, use_growth=False, use_initial_colorspace_transformation_layer=False)[source]
Return type

Model

bfgn.architectures.dense_unet module

class bfgn.architectures.dense_unet.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.AutoencoderMixin, bfgn.architectures.config_sections.BlockMixin, bfgn.architectures.config_sections.GrowthMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.dense_unet.create_model(inshape, n_classes, output_activation, block_structure=(2, 2, 2, 2), filters=64, internal_activation='relu', kernel_size=(3, 3), padding='same', pool_size=(2, 2), use_batch_norm=False, use_growth=False, use_initial_colorspace_transformation_layer=False)[source]
Return type

Model

bfgn.architectures.dilation_net module

class bfgn.architectures.dilation_net.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.DilationMixin, bfgn.architectures.config_sections.FlatMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.dilation_net.create_model(inshape, n_classes, output_activation, dilation_rate=2, filters=64, kernel_size=(3, 3), num_layers=8, padding='same', use_batch_norm=False, use_initial_colorspace_transformation_layer=False)[source]
Return type

Model

bfgn.architectures.flat_net module

class bfgn.architectures.flat_net.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.FlatMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.flat_net.create_model(inshape, n_classes, output_activation, filters=64, kernel_size=(3, 3), num_layers=8, padding='same', use_batch_norm=False, use_initial_colorspace_transformation_layer=False)[source]

Construct a flat-style network with a flexible shape.

Return type

Model

bfgn.architectures.network_sections module

bfgn.architectures.network_sections.colorspace_transformation(inshape, inlayer, batch_normalization=False)[source]

Perform a series of layer transformations prior to the start of the main network.

Parameters
  • inshape (Tuple[int, int, int]) – Shape of the incoming layer.

  • inlayer (keras layer) – Input layer to the transformation.

  • batch_normalization (bool) – Whether or not to use batch normalization.

Returns

Keras layer ready to start the main network

Return type

output_layer
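
A hedged sketch of calling this helper on a fresh Keras input layer; the input shape is illustrative:

    import keras
    from bfgn.architectures import network_sections

    inshape = (128, 128, 4)
    inlayer = keras.layers.Input(shape=inshape)
    transformed = network_sections.colorspace_transformation(
        inshape, inlayer, batch_normalization=True
    )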

bfgn.architectures.network_sections.Conv2D_Options(inlayer, options)[source]

Perform a keras 2D convolution with the specified options.

Parameters
  • inlayer (keras layer) – Input layer to the convolution.

  • options (dict) – All options to pass to the convolution layer.

Returns

Keras layer ready to start the main network

Return type

output_layer
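
A sketch of the expected usage; the keys in the options dict are standard keras.layers.Conv2D keyword arguments and are illustrative, not a required or exhaustive set:

    import keras
    from bfgn.architectures import network_sections

    inlayer = keras.layers.Input(shape=(128, 128, 4))
    conv_options = {
        'filters': 64,
        'kernel_size': (3, 3),
        'padding': 'same',
        'activation': 'relu',
    }
    outlayer = network_sections.Conv2D_Options(inlayer, conv_options)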

bfgn.architectures.network_sections.dense_2d_block(inlayer, conv_options, block_depth)[source]

Create a single, dense block.

Parameters
  • inlayer (keras layer) – Input layer to the convolution.

  • conv_options (dict) – All options to pass into the input convolution layer.

  • block_depth (int) – The depth (number of layers) of the dense block.

Returns

Keras layer ready to start the main network

Return type

output_layer
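
A sketch of building a single dense block from a convolution-options dict like the one above; block_depth and the option values are illustrative:

    import keras
    from bfgn.architectures import network_sections

    inlayer = keras.layers.Input(shape=(128, 128, 4))
    conv_options = {'filters': 64, 'kernel_size': (3, 3), 'padding': 'same'}
    block = network_sections.dense_2d_block(inlayer, conv_options, block_depth=2)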

bfgn.architectures.residual_dilation_net module

class bfgn.architectures.residual_dilation_net.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.BlockMixin, bfgn.architectures.config_sections.DilationMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.residual_dilation_net.create_model(inshape, n_classes, output_activation, block_structure=(2, 2, 2, 2), dilation_rate=2, filters=64, kernel_size=(3, 3), padding='same', use_batch_norm=False, use_initial_colorspace_transformation_layer=False)[source]
Return type

Model

bfgn.architectures.residual_flat_net module

class bfgn.architectures.residual_flat_net.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.BlockMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.residual_flat_net.create_model(inshape, n_classes, output_activation, block_structure=(2, 2, 2, 2), filters=64, kernel_size=(3, 3), padding='same', use_batch_norm=False, use_initial_colorspace_transformation_layer=False)[source]
Return type

Model

bfgn.architectures.residual_unet module

class bfgn.architectures.residual_unet.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.AutoencoderMixin, bfgn.architectures.config_sections.BlockMixin, bfgn.architectures.config_sections.GrowthMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.residual_unet.create_model(inshape, n_classes, output_activation, block_structure=(2, 2, 2, 2), filters=64, kernel_size=(3, 3), padding='same', pool_size=(2, 2), use_batch_norm=False, use_growth=False, use_initial_colorspace_transformation_layer=False)[source]
Return type

Model

bfgn.architectures.unet module

class bfgn.architectures.unet.ArchitectureConfigSection[source]

Bases: bfgn.architectures.config_sections.AutoencoderMixin, bfgn.architectures.config_sections.BlockMixin, bfgn.architectures.config_sections.GrowthMixin, bfgn.architectures.config_sections.BaseArchitectureConfigSection

bfgn.architectures.unet.create_model(inshape, n_classes, output_activation, block_structure=(2, 2, 2, 2), filters=64, internal_activation='relu', kernel_size=(3, 3), padding='same', pool_size=(2, 2), use_batch_norm=False, use_growth=False, use_initial_colorspace_transformation_layer=False)[source]

Construct a U-Net-style network with a flexible shape.

Return type

Model
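
A usage sketch for the U-Net constructor; the values are illustrative. As with most U-Net-style networks, the window dimensions generally need to remain evenly divisible by pool_size at each downsampling step:

    from bfgn.architectures import unet

    model = unet.create_model(
        inshape=(128, 128, 4),  # 128 halves cleanly through four (2, 2) pooling steps
        n_classes=2,
        output_activation='softmax',
        block_structure=(2, 2, 2, 2),
        pool_size=(2, 2),
        use_batch_norm=True,
    )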

Module contents

bfgn.architectures.get_available_architectures()[source]

Gets a list of available architectures.

Return type

List[str]

Returns

List of available architectures.
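
A short sketch of querying the architecture registry; the names returned depend on the installed version:

    import bfgn.architectures

    for name in bfgn.architectures.get_available_architectures():
        print(name)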