Your organization manages an online message board. A few months ago, you discovered an increase in toxic language and bullying on it, so you deployed an automated text classifier that flags certain comments as toxic or harmful. Now some users are reporting that benign comments referencing their religion are being misclassified as abusive. Upon further inspection, you find that your classifier's false positive rate is higher for comments that reference certain underrepresented religious groups. Your team has a limited budget and is already overextended. What should you do?
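Since the scenario hinges on a slice-level false positive rate disparity, here is a minimal sketch of how that disparity could be surfaced in practice: compute the FPR separately for each subgroup of comments and compare. The column names (label, prediction, group) and the toy data are hypothetical, not from the question.

```python
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """FPR = FP / (FP + TN): the share of truly benign (label == 0)
    comments that the classifier nonetheless flagged (prediction == 1)."""
    benign = df[df["label"] == 0]
    if len(benign) == 0:
        return float("nan")
    return float((benign["prediction"] == 1).mean())

def fpr_by_group(df: pd.DataFrame) -> pd.Series:
    """Per-slice FPR; a large gap between slices is the kind of
    bias the users in the scenario are reporting."""
    return df.groupby("group").apply(false_positive_rate)

# Toy example: comments referencing religious group "a" vs. group "b".
data = pd.DataFrame({
    "label":      [0, 0, 0, 0, 0, 0, 1, 1],  # 0 = benign, 1 = toxic
    "prediction": [1, 1, 0, 0, 0, 0, 1, 1],
    "group":      ["a", "a", "a", "b", "b", "b", "a", "b"],
})
print(fpr_by_group(data))
# group a: 0.67, group b: 0.00 -> FPR disparity across slices
```

Evaluating the metric per slice rather than in aggregate is the key point: an overall FPR can look acceptable while one subgroup bears most of the misclassifications.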