Introduction: Mobile applications can bring machine learning models closer to end users so that predictions run against local data. They are also efficient because the model has already been trained and does not require a large dataset on the device where it runs. A few frameworks allow machine learning models to be ported to mobile devices, among them TensorFlow Lite and the BeeWare development framework. Both allow Python-based development and make it easy to use the machine learning libraries available in that language. The differences between TensorFlow Lite and BeeWare are called out below.
Description: BeeWare is a write-once, run-everywhere framework that works very
well for writing the business logic once, irrespective of the platform targeted
for the mobile application. Popular platforms include Android and iOS. The
former requires Java bytecode, and the latter is written with Objective-C.
BeeWare allows the Python bytecode to be reinterpreted for Java so that the
logic runs natively on the Android platform. Similarly, the conversion for the
iOS platform is performed during the build, and a suitable installer binary is
generated during the packaging stage. This gives developers the opportunity to
write little or no platform-specific code and to focus entirely on the business
logic. When a machine learning model is used, this logic usually makes
predictions against data in real time.
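Because BeeWare keeps the business logic in plain Python, a prediction routine can be written once and reused on every platform. The sketch below assumes a hypothetical linear model with hard-coded weights; it illustrates the separation of logic from UI and is not BeeWare API code.

```python
def predict(features, weights, bias=0.0):
    """Score a single observation with a simple linear model.

    The weights and bias here are hypothetical placeholders; a real
    application would load them from a trained model file.
    """
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# The UI layer (e.g., a Toga app packaged by BeeWare) would call predict()
# with data gathered on-device; only that call site differs per platform.
sample = [0.5, -1.2, 3.0]
weights = [0.8, 0.1, 0.4]
label = predict(sample, weights)
```

Since this module imports nothing platform-specific, the same file can be packaged unchanged for Android or iOS, which is the point of the write-once approach described above.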
TensorFlow is a dedicated machine learning framework for authoring models, and
the TensorFlow Lite Model Maker makes it easy to construct a model for mobile
applications. The model can make predictions only after it is trained, so it
must be run only after the training data has labels assigned. This
might be done by hand. The model generally works better with fewer parameters.
The model is compiled with a loss function and trained with an optimizer such
as tf.train.AdamOptimizer(), and a metric such as top-k categorical accuracy
helps tune the model. A summary of the model can be printed for review.
Training is controlled with a set number of epochs and a batch size.
Annotations help the TensorFlow Lite converter fuse TF.Text API operations,
and this fusion leads to a significant speedup over conventional models. The
model architecture can also be tweaked to include a projection layer along
with the usual convolutional layer and attention encoder mechanism, which
achieves similar accuracy with a much smaller model size. There is also
native support for hash tables for NLP models.
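The training controls mentioned above (a loss function, an optimizer step, epochs, and batches) can be sketched without TensorFlow at all. The framework-free loop below is a conceptual stand-in for what model.compile() and model.fit() do, not Model Maker code; the learning rate and toy data are assumptions for illustration.

```python
import random

def train(data, epochs=20, batch_size=4, lr=0.1):
    """Fit y ~ w * x by mini-batch gradient descent on squared loss."""
    w = 0.0
    for _ in range(epochs):                        # epochs: passes over the data
        random.shuffle(data)
        for i in range(0, len(data), batch_size):  # batches: step granularity
            batch = data[i:i + batch_size]
            # gradient of mean squared error with respect to w
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad                         # optimizer step (plain SGD)
    return w

# Toy data generated from y = 3x; training should recover w close to 3.
data = [(0.1 * i, 0.3 * i) for i in range(1, 11)]
w = train(list(data))
```

Raising the epoch count tightens the fit at the cost of training time, while the batch size trades gradient noise against step frequency, which is exactly the control the text above attributes to epochs and batches.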
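The native hash-table support mentioned for NLP models is typically used to map tokens to integer ids inside the model graph. The snippet below is a plain-Python stand-in for that lookup with a hypothetical vocabulary; it does not use the TensorFlow lookup API itself.

```python
OOV_ID = 0  # id reserved for out-of-vocabulary tokens (an assumed convention)

# Hypothetical vocabulary table; a real NLP model would ship one with
# thousands of entries baked into the converted model.
vocab = {"the": 1, "model": 2, "runs": 3, "on": 4, "device": 5}

def tokens_to_ids(tokens, table=vocab):
    """Look each token up in the table, falling back to the OOV id."""
    return [table.get(token.lower(), OOV_ID) for token in tokens]

ids = tokens_to_ids("The model runs on device today".split())
# "today" is not in the vocabulary, so it maps to OOV_ID
```

Keeping this lookup inside the model, as the native hash-table support allows, means the application does not have to replicate the vocabulary in platform code.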
On the other hand, all of the build and test steps for BeeWare can be performed
as if the application were written for the desktop. Packaging the binaries
creates a redistributable that can be tested with a suitable emulator. When the
emulator shows launch failures, there might be nothing to see on the emulator
itself, but the debug console on certain frameworks provides additional
details. The proper SDK and debug symbols must be provided to such a framework
for use with the package on the emulator, and a debug build of the package is
better for diagnosis than a release build. Switching the framework to load and
run a simulator allows more visibility into the execution of the application
on the targeted platform. Differences in the behavior of the application
between desktop and emulator might be attributed to application lifecycle
routines on the targeted platform. These can be exercised on the emulator once
all the dependencies and their versions have ensured a successful launch.
#codingexercise: https://1drv.ms/w/s!Ashlm-Nw-wnWzGb-l7RO6fnpMcTH?e=g72pCx