Fig. 2 | Virology Journal

From: Application of machine learning in understanding plant virus pathogenesis: trends and perspectives on emergence, diagnosis, host-virus interplay and management

Fig. 2

A schematic representation of a standard artificial neural network (ANN). The network is divided into three major components: the input layer, multiple hidden layers and the output layer. In this figure, the input layer is assumed to have 3 independent variables, each of which is passed through a set of weights and activation functions in the hidden layers and finally the output layer to yield the model output. The activation functions are nonlinear mathematical functions such as Tanh, Sigmoid and ReLU that introduce nonlinearity into the model. Depending on the network structure, each hidden layer may contain ‘n’ neurons (also called hidden layer units), and there may be multiple hidden layers; any ANN with more than one hidden layer is technically a deep ANN. Once an input is fed into the network, the hidden layers are evaluated one after another until the output layer is reached and activated, producing the final result. The weights in each layer are trained by means of the backpropagation algorithm
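For readers less familiar with the forward pass and weight training described in the caption, the following is a minimal sketch of such a network in Python/NumPy. It assumes 3 input variables, two hidden layers of 4 units with ReLU activations, a single sigmoid output unit, a squared-error loss and a learning rate of 0.1; these sizes and the toy data are illustrative assumptions, not values taken from the article or its figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 3 inputs, two hidden layers of 4 units, 1 output (assumed for illustration)
sizes = [3, 4, 4, 1]
weights = [rng.normal(0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros((m, 1)) for m in sizes[1:]]

def relu(z):
    return np.maximum(0, z)

def relu_grad(z):
    return (z > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Propagate a column-vector input through the network,
    keeping pre-activations (zs) and activations (acts) for backpropagation."""
    acts, zs = [x], []
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        zs.append(z)
        # ReLU in the hidden layers, sigmoid at the output layer
        a = sigmoid(z) if i == len(weights) - 1 else relu(z)
        acts.append(a)
    return acts, zs

def backprop(x, y, lr=0.1):
    """One gradient-descent step on the squared error for a single example."""
    acts, zs = forward(x)
    a_out = acts[-1]
    # Error term at the output layer (squared-error loss, sigmoid derivative)
    delta = (a_out - y) * a_out * (1 - a_out)
    for l in range(len(weights) - 1, -1, -1):
        grad_W = delta @ acts[l].T
        grad_b = delta
        if l > 0:
            # Propagate the error backwards through the weights and ReLU derivative
            delta = (weights[l].T @ delta) * relu_grad(zs[l - 1])
        weights[l] -= lr * grad_W
        biases[l] -= lr * grad_b

# Toy usage: a single 3-feature sample with a binary target (hypothetical data)
x = np.array([[0.2], [0.7], [0.1]])
y = np.array([[1.0]])
for _ in range(100):
    backprop(x, y)
print(forward(x)[0][-1])  # the output approaches the target of 1.0
```

Propagating the error term backwards through each layer's weights and activation derivative, then nudging the weights against the gradient, is the essence of the backpropagation training referenced in the caption.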
