Range: 0 to 1. Commonly used in the output layer for binary classification problems.
import math
from typing import Union

def calculate_sigmoid(z: Union[int, float]) -> float:
    """
    Calculate the sigmoid of a given input.
    The sigmoid function is defined as 1 / (1 + exp(-z)).
    Args:
        z (Union[int, float]): The input value for which to calculate the sigmoid.
    Returns:
        float: The sigmoid of the input value.
    Raises:
        TypeError: If the input is not an integer or a float.
    """
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    try:
        return 1 / (1 + math.exp(-z))
    except OverflowError:
        # exp(-z) overflows for very negative z; return the limiting value instead
        return 0.0 if z < 0 else 1.0

# Example usage
if __name__ == "__main__":
    print(calculate_sigmoid(0))      # Output: 0.5
    print(calculate_sigmoid(2))      # Output: 0.8807970779778823
    print(calculate_sigmoid(-1000))  # Output: 0.0 (saturates instead of overflowing)
    print(calculate_sigmoid(1000))   # Output: 1.0 (saturates for large positive inputs)
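To connect this to the binary-classification use mentioned above, here is a minimal sketch of thresholding the sigmoid output at 0.5 to pick a class; the logit value is a made-up number used only for illustration.

# Sketch: turning a sigmoid output into a class label (assumed threshold of 0.5)
if __name__ == "__main__":
    logit = 2.0  # hypothetical raw model output
    probability = calculate_sigmoid(logit)
    predicted_class = 1 if probability >= 0.5 else 0
    print(probability, predicted_class)  # Output: 0.8807970779778823 1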
Range: -1 to 1. Often used in hidden layers.
import math
from typing import Union

def calculate_tanh(z: Union[int, float]) -> float:
    """
    Calculate the hyperbolic tangent (tanh) of a given input.
    The tanh function is defined as (exp(z) - exp(-z)) / (exp(z) + exp(-z)).
    Args:
        z (Union[int, float]): The input value for which to calculate the tanh.
    Returns:
        float: The tanh of the input value.
    Raises:
        TypeError: If the input is not an integer or a float.
    """
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    try:
        exp_z = math.exp(z)
        exp_neg_z = math.exp(-z)
        return (exp_z - exp_neg_z) / (exp_z + exp_neg_z)
    except OverflowError:
        # For large positive or negative values, tanh approaches 1 or -1
        return 1.0 if z > 0 else -1.0

# Example usage
if __name__ == "__main__":
    print(calculate_tanh(1))      # Output: 0.7615941559557649
    print(calculate_tanh(0))      # Output: 0.0
    print(calculate_tanh(-1))     # Output: -0.7615941559557649
    print(calculate_tanh(1000))   # Output: 1.0 (handles overflow)
    print(calculate_tanh(-1000))  # Output: -1.0 (handles overflow)
Range: 0 to ∞. Widely used because of its simplicity and effectiveness in deep learning.
from typing import Union

def calculate_relu(z: Union[int, float]) -> float:
    """
    Calculate the Rectified Linear Unit (ReLU) of a given input.
    The ReLU function is defined as max(0, z).
    Args:
        z (Union[int, float]): The input value for which to calculate the ReLU.
    Returns:
        float: The ReLU of the input value.
    Raises:
        TypeError: If the input is not an integer or a float.
    """
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    return max(0, z)

# Example usage
if __name__ == "__main__":
    print(calculate_relu(1))    # Output: 1
    print(calculate_relu(-1))   # Output: 0
    print(calculate_relu(0))    # Output: 0
    print(calculate_relu(3.5))  # Output: 3.5
Range: -∞ to ∞. Helps avoid the "dying ReLU" problem by allowing a small gradient when the unit is not active.
General form: LeakyReLU(x) = max(alpha * x, x), where alpha is a small slope such as 0.01.
The implementation below uses alpha = 0.1, i.e. LeakyReLU(x) = max(0.1 * x, x).
from typing import Union

def calculate_leaky_relu(z: Union[int, float]) -> float:
    """
    Calculate the Leaky Rectified Linear Unit (Leaky ReLU) of a given input.
    The Leaky ReLU function is defined as max(0.1 * z, z).
    Args:
        z (Union[int, float]): The input value for which to calculate the Leaky ReLU.
    Returns:
        float: The Leaky ReLU of the input value.
    Raises:
        TypeError: If the input is not an integer or a float.
    """
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    return max(0.1 * z, z)

# Example usage
if __name__ == "__main__":
    print(calculate_leaky_relu(1))   # Output: 1
    print(calculate_leaky_relu(-1))  # Output: -0.1
    print(calculate_leaky_relu(0))   # Output: 0.0
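Since the slope is commonly treated as a hyperparameter (the 0.01 in the general form above), a minimal sketch of a parameterized variant could look like the following; the calculate_leaky_relu_alpha name and the alpha keyword argument are assumptions for illustration, not part of the original code.

from typing import Union

def calculate_leaky_relu_alpha(z: Union[int, float], alpha: float = 0.01) -> float:
    """Leaky ReLU with a configurable slope (hypothetical variant of the function above)."""
    if not isinstance(z, (int, float)):
        raise TypeError("Input must be an integer or a float")
    return max(alpha * z, z)

if __name__ == "__main__":
    print(calculate_leaky_relu_alpha(-1))             # Output: -0.01
    print(calculate_leaky_relu_alpha(-1, alpha=0.1))  # Output: -0.1 (matches calculate_leaky_relu above)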
Activation functions are integral to the functioning of neural networks, enabling them to learn from complex data and make accurate predictions. Choosing the right activation function can significantly affect the performance and effectiveness of your neural network model.
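As a rough illustration of how differently these functions treat the same inputs, here is a small comparison sketch, assuming the four functions defined above are available in the same file or module:

# Sketch: evaluate the four activations side by side on a few sample inputs
if __name__ == "__main__":
    for z in (-2, -0.5, 0, 0.5, 2):
        print(f"z={z:>4}: "
              f"sigmoid={calculate_sigmoid(z):.4f}, "
              f"tanh={calculate_tanh(z):.4f}, "
              f"relu={calculate_relu(z):.4f}, "
              f"leaky_relu={calculate_leaky_relu(z):.4f}")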
For more insights and updates on machine learning and neural networks, stay tuned to our blog!