NSFWJS
NSFW detection on the client-side via TensorFlow.js
Install / Use
A simple JavaScript library to help you quickly identify unseemly images, all in the client's browser. NSFWJS isn't perfect, but it's pretty accurate (~90% with the small model and ~93% with the midsized model)... and it's getting more accurate all the time.
Why would this be useful? Check out the announcement blog post.
<p align="center"> <img src="https://github.com/infinitered/nsfwjs/raw/master/_art/nsfw_demo.gif" alt="demo example" width="800" align="center" /> </p>

NOTE
If you're trying to access the CloudFront-hosted model and are running into an error, it's likely because the model has been moved to a new location. Please take a look at our Host your own model section. We will restore the model once some hotlinkers have been dealt with.
Table of Contents
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- QUICK: How to use the module
- Library API
- Production
- Backend selection
- Install
- Run the Examples
- Learn TensorFlow.js
- More!
- Contributors
The library categorizes image probabilities in the following 5 classes:
- Drawing - safe for work drawings (including anime)
- Hentai - hentai and pornographic drawings
- Neutral - safe for work neutral images
- Porn - pornographic images, sexual acts
- Sexy - sexually explicit images, not pornography
The demo is continuously deployed from this repo - give it a go: http://nsfwjs.com
QUICK: How to use the module
import * as nsfwjs from "nsfwjs";
const img = document.getElementById("img");
// If you want to host models yourself or use a different model from the ones available, see the "Host your own model" section.
const model = await nsfwjs.load();
// Classify the image
const predictions = await model.classify(img);
console.log("Predictions: ", predictions);
Selective model bundles (tree-shaking)
The default nsfwjs entrypoint keeps the existing behavior and includes the built-in model definitions. For selective bundling, import from nsfwjs/core and pass only the models you want in modelDefinitions.
import { load } from "nsfwjs/core";
import { MobileNetV2Model } from "nsfwjs/models/mobilenet_v2";
import { MobileNetV2MidModel } from "nsfwjs/models/mobilenet_v2_mid";
const model = await load("MobileNetV2", {
modelDefinitions: [MobileNetV2Model, MobileNetV2MidModel],
});
If you pass an empty model registry, named bundled model loads will fail:
await load("MobileNetV2", { modelDefinitions: [] }); // throws
Library API
load the model
Before you can classify any image, you'll need to load the model.
const model = await nsfwjs.load(); // Default: "MobileNetV2"
You can use the optional first parameter to specify which of the three built-in bundled models to use. Defaults to "MobileNetV2".
For tree-shaken selective model bundling, use nsfwjs/core and pass modelDefinitions as shown above.
const model = await nsfwjs.load("MobileNetV2Mid"); // "MobileNetV2" | "MobileNetV2Mid" | "InceptionV3"
You can also use the same parameter to load the model from your own website/server, as explained in the Host your own model section. Doing so can shrink the model download by roughly 25%, because the hosted binary files avoid the ~33% size overhead of the base64 encoding used for the models bundled with the package. For example, the bundled "MobileNetV2" model is 3.5MB, versus 2.6MB for the hosted binary files. This only makes a difference if you load the model each time (without caching) in the client-side browser; on the server-side, you'd only load the model once, at server start.
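The ~33% figure comes straight from base64 encoding, which represents every 3 bytes of binary data as 4 ASCII characters. A quick Node.js sketch (illustrative only, not part of the NSFWJS API):

```javascript
// Base64 represents every 3 binary bytes as 4 characters,
// so an encoded model is ~4/3 the size of the raw binary weights.
const binary = Buffer.alloc(3 * 1024 * 1024); // 3 MB of raw bytes
const encoded = binary.toString("base64");

console.log(encoded.length / binary.length); // ≈ 1.33
```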
If you are hosting your own model via URL and want the smallest app bundle, import load from nsfwjs/core instead of nsfwjs. The core entrypoint does not include built-in model definitions by default, so bundlers do not pull those model assets into your app bundle.
import { load } from "nsfwjs/core";
const model = await load("/path/to/mobilenet_v2/model.json");
Model MobileNetV2 - 224x224
const model = await nsfwjs.load("/path/to/mobilenet_v2/");
If you're using a model that needs an image of dimension other than 224x224, you can pass the size in the options parameter.
Model MobileNetV2Mid - Graph
/* You may need to load this model with graph type */
const model = await nsfwjs.load("/path/to/mobilenet_v2_mid/", { type: 'graph' });
If you're using a graph model, you cannot use the infer method, and you'll need to tell model load that you're dealing with a graph model in options.
Model InceptionV3 - 299x299
const model = await nsfwjs.load("/path/to/inception_v3/", { size: 299 });
Caching
If you're running in the browser and would like subsequent loads to come from IndexedDB or local storage (NOTE: the model may be too large for local storage!), you can save the underlying model using the appropriate scheme and load it from there.
const initialLoad = await nsfwjs.load(
"/path/to/different/model/" /*, { ...options }*/
);
await initialLoad.model.save("indexeddb://exampleModel");
const model = await nsfwjs.load("indexeddb://exampleModel" /*, { ...options }*/);
Parameters
Initial Load:
- URL or path to folder containing model.json.
- Optional object with size or type property that your model expects.
Subsequent Load:
- IndexedDB path.
- Optional object with size or type property that your model expects.
Returns
- Ready to use NSFWJS model object
Troubleshooting
- On the tab where the model is being loaded, inspect element and navigate to the "Application" tab. On the left pane under the "Storage" section, there is a subsection named "IndexedDB", where you can verify that the model has been saved.
For a complete browser worker implementation (including backend initialization, IndexedDB-first load, and save-on-miss caching), see examples/nsfw_demo/src/nsfwjs.worker.ts.
classify an image
This function can take any browser-based image elements (<img>, <video>, <canvas>) and returns an array of most likely predictions and their confidence levels.
// Return top 3 guesses (instead of all 5)
const predictions = await model.classify(img, 3);
Parameters
- Tensor, Image data, Image element, video element, or canvas element to check
- Number of results to return (default all 5)
Returns
- Array of objects that contain className and probability. Array size is determined by the second parameter in the classify function.
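As a usage sketch, here's a hypothetical helper that turns that array into a single boolean flag. The UNSAFE_CLASSES grouping and the 0.7 threshold are illustrative choices for this example, not part of the NSFWJS API:

```javascript
// Illustrative helper: flag an image when any "unsafe" class
// scores at or above a chosen threshold.
const UNSAFE_CLASSES = new Set(["Porn", "Hentai", "Sexy"]);

function isUnsafe(predictions, threshold = 0.7) {
  return predictions.some(
    (p) => UNSAFE_CLASSES.has(p.className) && p.probability >= threshold
  );
}

// The object shape matches what classify() returns:
isUnsafe([
  { className: "Neutral", probability: 0.92 },
  { className: "Drawing", probability: 0.05 },
]); // → false

isUnsafe([{ className: "Porn", probability: 0.88 }]); // → true
```

Tune the threshold to your tolerance for false positives; a stricter gate (lower threshold) flags more images.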
dispose a loaded model
If you are done with a model instance, call dispose() to release held tensors and model resources.
const model = await nsfwjs.load();
// ... classify/infer
model.dispose();
Production
TensorFlow.js offers two flags, enableProdMode and enableDebugMode. If you're going to use NSFWJS in production, be sure to enable prod mode before loading the NSFWJS model.
import * as tf from "@tensorflow/tfjs";
import * as nsfwjs from "nsfwjs";
tf.enableProdMode();
//...
let model = await nsfwjs.load(`${urlToNSFWJSModel}`);
NOTE: Consider downloading and hosting the model yourself before moving to production as explained in the Host your own model section. This could potentially improve the initial load time of the model. Furthermore, consider Caching the model, if you are using it in the browser.
Backend selection
NSFWJS uses whichever TensorFlow.js backend is active.
Setting a backend explicitly is optional. If you import one or more backends and call await tf.ready(), TensorFlow.js will pick the best available backend.
Use tf.setBackend(...) only when you want deterministic behavior across devices.
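Explicit backend selection might look like the following sketch; tf.setBackend and tf.ready are standard TensorFlow.js calls, and the backend names available depend on which backend packages your app imports:

```javascript
import * as tf from "@tensorflow/tfjs";
import * as nsfwjs from "nsfwjs";

// Pin one backend for deterministic behavior across devices.
await tf.setBackend("webgl"); // e.g. "webgl", "wasm", or "cpu"
await tf.ready();

const model = await nsfwjs.load();
```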
Automatic backend selection:
import * as tf from "@tensorflow/tfjs";
impo
