Toxic Comment Classifier

Posted by Marcus Zillman

Toxic Comment Classifier
https://developer.ibm.com/exchanges/models/all/max-toxic-comment-classifier/

This model detects six types of toxicity in a text fragment: toxic, severe toxic, obscene, threat, insult, and identity hate. The underlying neural network is based on the pre-trained BERT-Base, English Uncased model and was fine-tuned on the Toxic Comment Classification Dataset using the Hugging Face BERT PyTorch repository. A brief definition of the six toxicity types can be found at this site. This will be added to Artificial Intelligence Resources Subject Tracer™. This will be added to Business Intelligence Resources Subject Tracer™. This will be added to Entrepreneurial Resources Subject Tracer™. This will be added to the tools section of Research Resources Subject Tracer™.
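As a rough illustration of how such a model might be queried in practice, here is a minimal Python sketch that posts comments to a locally deployed copy of the classifier. The localhost URL, endpoint path, and payload shape are assumptions based on the common IBM MAX model-serving convention, not details given in this post; check the model page linked above for the exact API.

```python
# Minimal sketch: query a locally running MAX Toxic Comment Classifier.
# The port, endpoint path, and JSON payload below are assumed defaults
# for a MAX Docker deployment and should be verified against the
# model's documentation.
import requests

MAX_URL = "http://localhost:5000/model/predict"  # assumed local deployment URL

def classify(comments):
    """Send a list of text fragments and return the service's JSON response."""
    response = requests.post(MAX_URL, json={"text": comments})
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = classify(["You are a wonderful person.", "I will hurt you."])
    # Expected (assumed) shape: one prediction per input with scores for
    # toxic, severe_toxic, obscene, threat, insult, and identity_hate.
    print(result)
```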
