dgs.utils.nn.fc_linear¶
- dgs.utils.nn.fc_linear(hidden_layers: list[int], bias: bool | list[bool] = True, act_func: list[str | None | torch.nn.Module] | tuple[str | None | torch.nn.Module, ...] | None = None) → torch.nn.Sequential [source]¶
Create a network consisting of one or more fully connected linear layers with input and output sizes given by hidden_layers.
- Parameters:
hidden_layers – A list containing the sizes of the input, hidden, and output layers. It is possible to use the set_up_hidden_layer_sizes() function to create this list. The length of hidden_layers is denoted L.
bias – Whether to use a bias in every layer. Can be a single value for the whole network or a list of length L - 1 containing one value per layer. Default is True.
act_func – A list containing the activation function to apply after each of the fully connected layers, i.e., one activation function per layer. Therefore, act_func should have a length of L - 1. Every value can either be a torch.nn.Module or the string name of the activation function, e.g., “ReLU” for torch.nn.ReLU. Defaults to adding no activation functions. A usage sketch follows the Returns section below.
- Returns:
A sequential model containing L - 1 fully connected layers.
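
A minimal usage sketch based on the signature above. The concrete sizes, the input shape, and the choice of one act_func entry per linear layer are illustrative assumptions, not part of the documented API.

```python
import torch

from dgs.utils.nn import fc_linear

# Three sizes in hidden_layers define L - 1 = 2 linear layers: 128 -> 64 -> 10.
model = fc_linear(
    hidden_layers=[128, 64, 10],
    bias=True,                # a single bool applied to every layer
    act_func=["ReLU", None],  # assumed: ReLU after the first layer, none after the last
)

x = torch.rand(4, 128)  # batch of 4 samples with 128 features each
y = model(x)            # y has shape (4, 10)
```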