11. Activations

Activation functions are the logic that determines whether, and to what extent, a given neuron activates. They are most useful when they introduce non-linearity, since a non-linear neuron can model more complex behaviours than a purely linear one.
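To see why the non-linearity matters, note that stacking linear layers without an activation between them collapses into a single linear map. A minimal sketch (NumPy, illustrative weights only):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # first linear layer
W2 = rng.normal(size=(2, 4))  # second linear layer
x = rng.normal(size=3)

# Two stacked linear layers with no activation in between...
two_layer = W2 @ (W1 @ x)
# ...are exactly equivalent to one combined linear layer.
collapsed = (W2 @ W1) @ x
assert np.allclose(two_layer, collapsed)

# Inserting a non-linearity (here plain ReLU) between the layers
# breaks that equivalence, so the network can express more functions.
with_relu = W2 @ np.maximum(W1 @ x, 0)
```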

In our case, all activations are children of the Node abstraction, which keeps the interface consistent and reduces redundant code.
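As a rough illustration of that pattern, an activation node exposes a forward pass and a backward pass over a shared interface. The `Node` base class below is a hypothetical stand-in, not the library's actual definition:

```python
import numpy as np

class Node:
    """Hypothetical stand-in for the library's Node abstraction."""
    def forward(self, x):
        raise NotImplementedError
    def backward(self, gradient):
        raise NotImplementedError

class ReLU(Node):
    """Exact (non-approximated) ReLU, for illustration only."""
    def forward(self, x):
        self.x = np.asarray(x)          # cache the input for the backward pass
        return np.maximum(self.x, 0)
    def backward(self, gradient):
        return gradient * (self.x > 0)  # local derivative: 1 where x > 0, else 0
```

Usage follows the same shape for every activation: `forward` during inference, `backward` during gradient computation.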

Activations Status

| Category   | Name             | Docs          | Forward                | Backward               |
|------------|------------------|---------------|------------------------|------------------------|
| activation | Argmax           | Argmax        | FHE incompatible       | FHE incompatible       |
| activation | Linear           | Linear        | Full FHE compatibility | Full FHE compatibility |
| activation | ReLU (Approx)    | R_a(x)        | Full FHE compatibility | Full FHE compatibility |
| activation | Sigmoid (Approx) | \sigma_a(x)   | Full FHE compatibility | Full FHE compatibility |
| activation | Softmax          | Softmax       | FHE incompatible       | FHE incompatible       |
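The "(Approx)" entries exist because FHE schemes only evaluate additions and multiplications, so non-polynomial functions such as sigmoid must be replaced by low-degree polynomial approximations. A minimal sketch using the common degree-3 least-squares fit from the FHE literature (the coefficients are an assumption here, not necessarily the ones this library uses):

```python
import math

def sigmoid_approx(x):
    # Degree-3 polynomial approximation of sigmoid, valid over a bounded
    # input range (roughly [-5, 5]); only uses + and *, so it is
    # evaluable under FHE. Coefficients are an assumed illustrative fit.
    return 0.5 + 0.197 * x - 0.004 * x ** 3

def sigmoid(x):
    # Exact sigmoid, for comparison; NOT evaluable under FHE.
    return 1.0 / (1.0 + math.exp(-x))
```

Argmax and Softmax remain incompatible because they require comparisons and exponentials/divisions respectively, which have no low-degree polynomial form.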