Introduction: TensorFlow.js is a machine learning framework for JavaScript applications. It lets us build models that run directly in the browser or in a Node.js server. Here we use the framework to build a recommender that applies deep learning to find the best results among millions of candidates.
Description: TensorFlow Recommenders is built on two core capabilities – fast approximate retrieval and better techniques for modeling feature interactions. The recommender is exported as a SavedModel that takes query features as input and returns recommendations as output. Feature interactions are modeled with deep and cross networks, which are efficient deep learning architectures for learning cross features; such features are essential for spanning the large, sparse feature spaces typical of recommender datasets. ScaNN is a state-of-the-art nearest-neighbor search (NNS) library, and it integrates with TensorFlow Recommenders.
For example, we can build and populate an approximate retrieval index:
import tensorflow_recommenders as tfrs
# Wrap the trained query tower in a ScaNN retrieval layer
scann = tfrs.layers.factorized_top_k.ScaNN(model.user_model)
# Index the candidate embeddings alongside their identifiers
scann.index(movies.batch(100).map(model.movie_model), movies)
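The deep and cross architecture mentioned above can be sketched in a few lines. The following is a minimal illustration, not the TFRS implementation: a single cross layer computes x_next = x0 * (W @ x + b) + x, so stacking layers raises the polynomial degree of the feature crosses. All shapes and values here are hypothetical.

```python
import numpy as np

def cross_layer(x0, x, W, b):
    # Explicit feature crossing: elementwise product with the input
    # features x0, plus a residual connection back to x.
    return x0 * (W @ x + b) + x

rng = np.random.default_rng(0)
d = 4                                  # toy feature dimension
x0 = rng.normal(size=d)                # original (embedded) features
W = rng.normal(size=(d, d))            # learned weight matrix (random here)
b = np.zeros(d)                        # learned bias (zeros here)

x1 = cross_layer(x0, x0, W, b)         # first cross layer
x2 = cross_layer(x0, x1, W, b)         # second layer: higher-order crosses
```

Each layer costs only O(d^2) parameters, which is why cross networks are an efficient way to model interactions over sparse features.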
Keras serves as the authoring API: a model written with Keras can be deployed to an environment such as Colab and trained there on a GPU. Once training is done, the model can be loaded and run anywhere else, including a browser. The power of TensorFlow.js is its ability to load the model and make predictions in the browser itself.
While brute-force retrieval serves fewer queries per second, ScaNN is scalable and efficient.
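To make the contrast concrete, here is a sketch of the exact brute-force baseline that ScaNN approximates, using plain NumPy arrays as hypothetical embeddings (not the TFRS movie model): every candidate is scored with a dot product, then the top k are kept.

```python
import numpy as np

def brute_force_top_k(query, candidates, k=5):
    scores = candidates @ query             # one dot-product score per candidate
    top = np.argsort(scores)[::-1][:k]      # indices of the k highest scores
    return top, scores[top]

rng = np.random.default_rng(0)
candidates = rng.normal(size=(1000, 16))    # 1,000 items, 16-dim embeddings
query = rng.normal(size=16)                 # one user embedding
top, scores = brute_force_top_k(query, candidates)
```

This exact scan is O(number of candidates) per query, which is why approximate methods like ScaNN win once the catalog reaches millions of items.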
As with any ML example, the data is split into a 70% training set and a 30% test set. Since there is no inherent order to the data, the split is taken over a random shuffle.
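A minimal sketch of such a split, assuming the dataset is a plain Python list (real pipelines would use tf.data or a library utility):

```python
import random

def train_test_split(data, train_fraction=0.7, seed=42):
    shuffled = data[:]                        # copy so the input is untouched
    random.Random(seed).shuffle(shuffled)     # no order in the data: shuffle first
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(list(range(100)))
```

Fixing the seed makes the split reproducible across runs.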
TensorFlow makes it easy to construct this model using the Keras API. The model can only produce meaningful output after it is trained, and training requires the data to have labels assigned, which might be done by hand. A model with fewer parameters is generally faster to train and less prone to overfitting. A summary of the model's layers and parameter counts can be printed for inspection, and with a chosen number of epochs and a batch size, the model can be trained.
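The parameter counts that a Keras model summary reports can be reproduced by hand. As a sketch with hypothetical layer sizes: a dense layer with n_in inputs and n_out units has n_in * n_out weights plus n_out biases.

```python
def dense_params(n_in, n_out):
    # weights (n_in * n_out) plus one bias per unit
    return n_in * n_out + n_out

# hypothetical three-layer network: (inputs, units) per dense layer
layers = [(16, 64), (64, 32), (32, 1)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # 3201
```

Counting parameters this way makes it easy to see how quickly a model grows, and why a smaller model may be preferable.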
With the model and the training/test sets defined, it is now easy to evaluate the model and run inference. The model can also be saved and restored, and it executes faster when a GPU is available.
Training proceeds in batches of a predefined size, and the number of passes over the entire training dataset, called epochs, can also be set up front. These are the model's tuning parameters, or hyperparameters. Every model trades off speed against accuracy: typically, the higher the Mean Average Precision, the lower the serving speed. It is helpful to visualize training with a chart that updates with the loss after each epoch. Usually there is a downward trend in the loss, which indicates that the model is converging.
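The epoch/batch structure described above can be sketched with a toy least-squares problem in NumPy (hypothetical data, not the recommender itself); the per-epoch losses collected here are exactly what such a chart would plot, and they trend downward as the model converges.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 4))                    # toy features
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                                   # toy labels

w = np.zeros(4)                                  # model parameters
batch_size, epochs, lr = 32, 5, 0.1              # the tuning parameters

losses = []
for epoch in range(epochs):                      # one epoch = one full pass
    for start in range(0, len(X), batch_size):   # batches of predefined size
        xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)
        w -= lr * grad                           # gradient-descent step
    losses.append(float(np.mean((X @ w - y) ** 2)))  # loss after each epoch
```

Plotting `losses` against the epoch number gives the downward-trending curve described above.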
Training the model might take a long time, say about 4 hours. Once the test data has been evaluated, the model's effectiveness can be measured using precision and recall: precision is the fraction of the model's positive inferences that were indeed positive, and recall is the fraction of the truly positive examples that the model identified.
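These two definitions reduce to simple counting. A sketch over toy item IDs (hypothetical labels, not the actual model's output):

```python
def precision_recall(predicted, actual):
    predicted, actual = set(predicted), set(actual)
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted)  # of what was flagged, how much was right
    recall = true_positives / len(actual)        # of what was right, how much was flagged
    return precision, recall

# 2 true positives out of 4 predicted and 4 actual positives
p, r = precision_recall(predicted=[1, 2, 3, 4], actual=[2, 4, 6, 8])
```

Here both precision and recall come out to 0.5; in practice a retrieval model is tuned to balance the two.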
Conclusion: TensorFlow.js is becoming a standard for implementing machine learning models. Its usage is simple, but the choice of model and the preparation of data take significantly more time than setting it up, evaluating it, and using it.
Similar article: https://1drv.ms/w/s!Ashlm-Nw-wnWxRyK0mra9TtAhEhU?e=TOdNXy
#codingexercise: https://1drv.ms/w/s!Ashlm-Nw-wnWrwRgdOFj3KLA0XSi