JASON MAYES: So let's talk about machine learning in JavaScript now that you know what this is all about. Currently, over 68% of professional developers use JavaScript in production, and we have over 51% of developers using Node.js as a primary framework in production, too.
Now, as you may have guessed, TensorFlow.js is an open-source library for machine learning in JavaScript written by Google, and it actually started its life when Google researchers needed a way to visualize and use ML models in the browser, which led to the release of deeplearn.js back in 2017, around one year after TensorFlow itself was born.
This first incarnation was a very low-level mathematical library that allowed researchers to demonstrate simple models live in the browser, mostly for educational purposes, and you really needed to be an ML expert to use it at this time. It lacked the high-level features that ML developers were used to working with that made creating new models more manageable, but its initial success proved there was a desire to use ML in JavaScript.
As such, the need for a production-quality JavaScript library that was aligned with the original vision of TensorFlow emerged, which led to the birth of TensorFlow.js. This version of the library added support for high-level APIs that mimicked the popular Keras API for TensorFlow, making ML far more accessible in JavaScript. This, in turn, led to a wave of ML researchers porting cutting-edge models to the JavaScript ecosystem.
And now TensorFlow.js is used by companies of all sizes, from startups to large multinationals, and even individuals, from academics to hobbyists, who all embrace the unique benefits of using TensorFlow.js. In fact, using TensorFlow.js means developers can run machine learning anywhere that JavaScript can run, which is basically everywhere. This equates to zero install on billions of devices globally.
You can just follow a link, and it'll just work. Here are all the environments we currently support: common web browsers, server-side via Node.js, mobile native via React Native or Progressive Web Apps, desktop native via Electron, and even IoT on devices like a Raspberry Pi via Node.js.
Running client side on device is just one advantage of using TensorFlow.js, and this is very different from using TensorFlow Python, which can currently only run on the server side. This, however, brings a unique set of challenges that you must consider, such as the types of devices your models might run on. With server-side-based ML, you have a fixed hardware setup that is known in advance, along with the expected performance of a model on that specific machine. When you run models on the client side on device, the hardware can change from user to user, which will affect the speed it can run and, of course, the execution time of the model. Clearly, a five-year-old smartphone with a weak graphics processor will most likely run a given model slower than a current-generation phone, if you try and execute the model on the graphics card.
So let's take a moment to focus on how TensorFlow.js is structured to better understand how you can get the best performance from your models and clear up some of the terminology around the library itself.
TensorFlow.js has two APIs you can work with. The first is our high-level Layers API, which is actually very similar to Keras, which, for those of you new to the TensorFlow ecosystem, is essentially an API that was made available in the Python form of TensorFlow that allows you to work at a much higher level when making custom models. So for those of you who have prior experience with this in Python, you would feel very comfortable here. And for those of you who are new to all of this, there's no need to worry, as you'll be focusing on using the TensorFlow.js APIs in this course, which we'll be teaching you later.
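To make that concrete, here is a minimal sketch of the Layers API, not taken from the course itself: a tiny model that learns the line y = 2x - 1, assuming TensorFlow.js is available as the global tf (for example, loaded via a script tag).

```js
// A minimal Layers API sketch: define, compile, train, and predict.
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

// Training data for y = 2x - 1.
const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);

model.fit(xs, ys, {epochs: 200}).then(() => {
  // Should print a value close to 19 for x = 10.
  model.predict(tf.tensor2d([10], [1, 1])).print();
});
```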
Next, you have our low-level Ops API, which is more mathematical in nature and originates from our deeplearn.js beginnings. It enables you to go lower level, allowing you to do things like linear algebra to build pretty much anything you want. And typically, researchers will work at this level, creating new features for cutting-edge machine-learning research.
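For a flavor of what that lower level looks like, here is a small sketch, again assuming the global tf: raw linear algebra on tensors, with no layers involved.

```js
// Ops API sketch: plain tensor math instead of layers.
const a = tf.tensor2d([[1, 2], [3, 4]]);
const b = tf.tensor2d([[5, 6], [7, 8]]);

// Matrix multiply, then an element-wise ReLU.
const result = tf.matMul(a, b).relu();
result.print();

// GPU-backed tensor memory is not garbage collected automatically,
// so dispose of tensors when done (or wrap the work in tf.tidy).
a.dispose(); b.dispose(); result.dispose();
```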
In this introductory course, you'll spend most of your time working at the Layers level when making custom models later on in the course.
So here, you can see how it all comes together. At the top, you have our pre-made models. These are built upon our Layers API, which itself sits on top of the lower-level Ops API. Now, this then understands how to work within many different environments, such as the client side, which includes things like the web browser, for example, and each one of these environments can execute on a number of different backends.
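Before moving on to backends, here is a quick illustrative sketch of the pre-made models layer at the top of that stack, using the MobileNet image classifier package; the imgElement parameter is a hypothetical image element from your page.

```js
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

// Classify an <img> element with the pre-made MobileNet model.
async function classify(imgElement) {
  const model = await mobilenet.load();  // Fetches the model weights.
  const predictions = await model.classify(imgElement);
  console.log(predictions);  // Array of {className, probability}.
}
```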
Now, by backend here, I do not mean server side. Backend, in this context, refers to the hardware the code will actually execute on.
So let's dive into backends in a little more detail. On the left, you have the CPU, which is always available. This is essentially vanilla JavaScript, but it's the slowest form of execution, running single-threaded. Now, you also have WebAssembly for faster CPU execution, with support for SIMD and multithreading commands, which allow smaller models to run extremely fast on the CPU, sometimes even matching GPU performance, that is, on the graphics card, when running such models. You then have WebGL, which can leverage a graphics card, or GPU, if you will, and which supports 97.6% of devices. And that means you can run on more than just NVIDIA GPUs. You can run on AMD and Intel, too, so that means you can get great performance even on something like a MacBook that might not have an NVIDIA graphics card. And looking towards the future, we're seeing new web standards being formed, namely around WebNN and WebGPU, which are also being investigated to accelerate performance even further.
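In code, choosing a backend looks something like the sketch below; the fallback order shown is an illustrative choice, not a recommendation from the course. The WebAssembly backend ships in its own @tensorflow/tfjs-backend-wasm package, while 'webgl' and 'cpu' come with the core library.

```js
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm';

async function initBackend() {
  // Try the GPU first, then WebAssembly, then plain JavaScript.
  for (const backend of ['webgl', 'wasm', 'cpu']) {
    // setBackend resolves to true if the backend initialized successfully.
    if (await tf.setBackend(backend)) {
      break;
    }
  }
  await tf.ready();
  console.log('Active backend:', tf.getBackend());
}
```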
And there's a similar story for server-side environments, which is provided by the Node.js version of TensorFlow.js. For those of you new to Node.js, you can think of it as a special version of JavaScript that's designed to run on the server side and has tighter integration with the operating system instead of the web browser. Note here that our Node.js environment supports the same TensorFlow CPU and GPU bindings as Python has. In fact, both the Python and Node versions of TensorFlow are simply just wrappers around the C++ core that server-side TensorFlow is written in behind the scenes.
This allows you to get the same or better performance than Python, as you, too, can leverage the CUDA and AVX acceleration that's typically associated with the Python version of TensorFlow. But as JavaScript developers, you can also leverage the just-in-time compilation features of JavaScript in Node, which can lead to some great performance gains over Python when running on the server side, and we'll talk more about that later.
Now, some folks you work with may prefer to use Python for their machine-learning research. That, of course, is completely fine, and TensorFlow.js in Node supports the ingestion of Keras and TensorFlow SavedModels directly within Node without any sort of conversion. And this is great for Python developers who directly integrate with web teams, who are most likely going to be using Node.js as their preferred server-side framework of choice.
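Here is a brief sketch of what that ingestion looks like with the tfjs-node bindings; the model path and input shape are hypothetical.

```js
const tf = require('@tensorflow/tfjs-node');

async function run() {
  // Load a TensorFlow SavedModel directly, with no conversion step.
  const model = await tf.node.loadSavedModel('./my_saved_model');
  const input = tf.zeros([1, 224, 224, 3]);  // Hypothetical input shape.
  const output = model.predict(input);
  output.print();
}
run();
```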
Now, if you wish to execute Python models in the web browser, however, you can use our command line converter to convert them to the format needed to run in the web browser. We'll cover this in more detail later in the course.
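As a rough sketch of that workflow (paths and URL hypothetical): convert once on the command line, then load the result in the browser as a graph model.

```js
// One-time conversion, run in a terminal:
//   tensorflowjs_converter --input_format=tf_saved_model \
//       ./my_saved_model ./web_model
import * as tf from '@tensorflow/tfjs';

async function load() {
  // Load the converted artifacts served as static files.
  const model = await tf.loadGraphModel('https://example.com/web_model/model.json');
  model.predict(tf.zeros([1, 224, 224, 3])).print();
}
```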
So what's the performance like? As you can see here, executing a model named MobileNet V2, which is basically an image-recognition model, on both Python and Node.js with GPU acceleration leads to less than 1 millisecond of difference in raw execution time. However, if you've got a lot of preprocessing, which is the act of converting data into a form that the model can use as an input, or postprocessing, which is the act of taking the output of a model and transforming it into something useful that you can use in your app, then Node.js can leverage the just-in-time compiler of JavaScript to see a significant boost in speed over Python.
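To make the pre- and postprocessing idea concrete, here is a hedged sketch of a typical image pipeline; the model, its 224x224 input size, and imgElement are all assumptions for illustration.

```js
async function predictClass(model, imgElement) {
  // Preprocessing: image -> normalized, batched tensor.
  const logits = tf.tidy(() => {
    const input = tf.browser.fromPixels(imgElement)
        .resizeBilinear([224, 224])   // Resize to the model's input size.
        .toFloat()
        .div(255)                     // Normalize pixel values to [0, 1].
        .expandDims(0);               // Add a batch dimension.
    return model.predict(input);
  });

  // Postprocessing: raw scores -> the highest-scoring class index.
  const classId = (await logits.argMax(-1).data())[0];
  logits.dispose();
  return classId;
}
```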
And here, you can see how Hugging Face, a company very well known for their natural-language-processing models, converted their model known as DistilBERT to Node.js, and this resulted in a two times performance boost over the Python equivalent. So if performance is top of mind, you might want to give Node.js a try, especially if your end goal is to expose a server-side web API for inference, for which Node is very well suited.
In summary, here are the benefits of using Node.js on the server side. First, you can use the TensorFlow SavedModel format directly, without conversion. This, in turn, may enable you to run larger models than you can on the client side. In some situations, it might not make sense to transfer a large model to the client side in the browser if it's gigabytes in size.

Next, it allows you to code in just one language, and this is great for developers already using JavaScript. Currently, 68% of developers do use JavaScript, and that can enable code reuse across the stack, which, for a small startup, can be very beneficial, as your existing JavaScript developers can also deploy server-side ML models for you, too, enabling greater use cases for your business.

And then we've got performance. As mentioned, our Node.js implementation talks to the same C++ bindings behind the scenes as Python does, so you'll get the same server-side hardware acceleration for both the CPU and the GPU. However, as mentioned, due to the just-in-time compiler in JavaScript, if your model requires pre- or postprocessing of data, you can get a performance boost over Python.
And finally, as this course is primarily about machine learning in the browser, let's talk about client-side superpowers that can only be achieved by running in the web browser. As the model runs entirely on the client machine, no data is ever sent to a third-party server, maintaining data privacy for the end user. This is particularly important for industries where it might be a requirement not to transfer data to a third party, not to mention growing concerns around privacy these days. And here, you get that privacy for free with TensorFlow.js.
As JavaScript has direct access to the device sensors, such as the microphone, camera, accelerometer, and more, there's no round-trip time to the server and back. Now, latency to the server could be close to 100 milliseconds on, say, a mobile connection, and assuming zero latency for using the machine-learning model itself, the maximum frame rate would cap out at about 10 frames per second if you're going to send images one by one, which is less than ideal. Now, with TensorFlow.js running on device, you can go much faster than that.
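For instance, here is a sketch of a client-side loop that reads frames straight from the webcam with no server round trip; the model and the video element with id "webcam" are assumptions for illustration.

```js
async function runCameraLoop(model) {
  const video = document.getElementById('webcam');  // A <video> element.
  const cam = await tf.data.webcam(video);
  while (true) {
    const frame = await cam.capture();  // One camera frame as a tensor.
    const prediction = model.predict(frame.expandDims(0));
    prediction.print();
    frame.dispose();
    prediction.dispose();
    await tf.nextFrame();  // Yield to the browser before the next frame.
  }
}
```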
As there's no data sent to the server, you'll have lower bandwidth and server hardware costs, as there's no server-side CPU, GPU, and RAM that needs to be hired for inference. This means you just have to pay for hosting of the website assets and model files, which is far cheaper than running a dedicated ML server all of the time.
Web tech was designed for the display of content in a device-agnostic way. From day 1, it supported graphical content and has evolved to handle even richer formats, like video. As such, it has far more mature libraries for graphics and data visualization than other languages, such as Three.js or D3.js, allowing you to code your ideas in hours instead of days or weeks.
Finally, you've got the reach and scale of the web. Anyone, anywhere, can click a link and load a web page, and that machine learning will just work. Zero install is required to do this. And of course, you get more eyes on your cutting-edge research and can see it used in other ways across industries.
Now, let's head on to the next video to learn about the three ways you can use or create TensorFlow.js models using this library.