Loss functions : Machines learn by means of a loss function, which measures the error of the model's prediction on a single training example. Our aim is to minimize the loss function so that the model makes more accurate predictions. Loss functions fall into two major categories: regression losses and classification losses. Classification predicts an output from a finite set of categorical values, whereas regression predicts a continuous value.
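As a minimal sketch of the two categories, the functions below (hypothetical names, plain Python) compute a per-example regression loss (squared error) and a per-example classification loss (binary cross-entropy):

```python
import math

# Squared-error loss for one regression example: penalizes the
# distance between the true value and the predicted value.
def squared_error(y_true, y_pred):
    return (y_true - y_pred) ** 2

# Binary cross-entropy for one classification example: penalizes
# assigning low probability to the correct class (p_pred in (0, 1)).
def binary_cross_entropy(y_true, p_pred):
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

print(squared_error(3.0, 2.5))        # 0.25
print(binary_cross_entropy(1, 0.9))   # small loss: confident and correct
```

A training loop would average these per-example losses over a batch or the whole dataset and minimize that average.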
Activation functions : Activation functions are incorporated into neural networks to help the network learn complex patterns in the data. An activation function decides whether a neuron should be activated by computing the weighted sum of its inputs and adding a bias to it; the activation then restricts the output to a bounded range, which could otherwise grow arbitrarily large. It takes the output signal from the previous cell and converts it into a form that can be taken as input to the next cell.
Epochs : An epoch is a hyperparameter of gradient descent that defines the number of times the learning algorithm will work through the entire training dataset. The number of epochs can be set to any positive integer. You can run the algorithm for as long as you like and even stop it using criteria other than a fixed number of epochs, such as a change (or lack of change) in model error over time.
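The stopping criterion mentioned above can be sketched as follows: a toy training loop (assumed one-weight problem, names hypothetical) that runs for at most a fixed number of epochs but stops early once the loss stops improving.

```python
def train(lr=0.1, max_epochs=100, tol=1e-6):
    w = 0.0                      # single weight; loss is minimized at w = 3
    prev_loss = float("inf")
    for epoch in range(max_epochs):
        loss = (w - 3.0) ** 2    # one pass over the (trivial) "dataset"
        if prev_loss - loss < tol:
            return epoch, w      # stop early: error stopped changing
        grad = 2.0 * (w - 3.0)   # gradient of the loss w.r.t. w
        w -= lr * grad           # one gradient-descent update per epoch
        prev_loss = loss
    return max_epochs, w

epochs_run, w = train()
print(epochs_run, round(w, 3))   # stops well before the 100-epoch cap
```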
Learning rate : The learning rate is a hyperparameter used in the training of neural networks that has a small positive value, often in the range between 0.0 and 1.0. The learning rate controls how quickly the model adapts to the problem. Smaller learning rates require more training epochs, given the smaller changes made to the weights on each update, whereas larger learning rates result in rapid changes and require fewer training epochs; too large a learning rate, however, can overshoot the minimum and prevent training from converging at all.
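The trade-off above can be seen directly in a sketch (assumed one-weight quadratic problem, names hypothetical): count how many gradient-descent updates each learning rate needs to get close to the minimum.

```python
def steps_to_converge(lr, target=3.0, tol=1e-3, max_steps=10_000):
    # Gradient descent on f(w) = (w - target)^2, starting from w = 0.
    w = 0.0
    for step in range(max_steps):
        if abs(w - target) < tol:
            return step
        w -= lr * 2.0 * (w - target)   # gradient of (w - target)^2 is 2(w - target)
        # Note: with lr > 1.0 this update overshoots and diverges.
    return max_steps

for lr in (0.01, 0.1, 0.5):
    print(f"lr={lr}: {steps_to_converge(lr)} steps")
```

Smaller learning rates take hundreds of small steps where a larger one reaches the same tolerance in a handful, matching the trade-off described above.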