Research Article
Method of Profanity Detection Using Word Embedding and LSTM
>>> from keras.models import Sequential
>>> from keras.layers import LSTM, Dense
>>> model = Sequential()
>>> model.add(LSTM(units=1, input_shape=(25, 100)))
>>> model.add(Dense(1, activation='sigmoid'))
>>> model.compile(loss='binary_crossentropy', optimizer='RMSprop', metrics=['accuracy'])
>>> model.fit(X_train, y_train, epochs=2, validation_data=(X_test, y_test))
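The listing above presumes that X_train and y_train already exist: each sentence is a sequence of 25 tokens, each represented by a 100-dimensional word embedding, with a binary profanity label per sentence. A minimal self-contained sketch of the same model, assuming synthetic data in place of the real embedded corpus and TensorFlow's bundled Keras:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Synthetic stand-in for the embedded corpus (an assumption for illustration):
# 25 tokens per sentence, 100-dimensional embeddings, binary labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(64, 25, 100)).astype("float32")
y_train = rng.integers(0, 2, size=(64, 1)).astype("float32")
X_test = rng.normal(size=(16, 25, 100)).astype("float32")
y_test = rng.integers(0, 2, size=(16, 1)).astype("float32")

# Same architecture as the listing: a single-unit LSTM over the
# embedding sequence, followed by a sigmoid output for the binary label.
model = Sequential()
model.add(LSTM(units=1, input_shape=(25, 100)))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="RMSprop", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=2, validation_data=(X_test, y_test), verbose=0)

# One profanity probability in [0, 1] per test sentence.
probs = model.predict(X_test, verbose=0)
```

With real data, X would be produced by looking up each token of a sentence in a trained word-embedding table and padding or truncating to 25 tokens.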