As the number of Padleteers has grown, so have the ways in which people use Padlet. Some of the new ways are super exciting. Others, not so much.
In light of the not-so-exciting ways, I want to share with you what we are doing to keep content that violates our Content Policy off our platform.
- Promotion and Glorification of Self-Harm
- Gore and Mutilation Content
- Sexually Explicit Content
- Username and URL Abuse
- Mass Registration and Automation
- Copyright and Trademark Infringement
- Impersonation, Stalking, or Harassment
- Privacy Violations
- Disruptions, Exploits, and Resource Abuse
- Unlawful Uses and Content
- Malicious Bigotry
- Harm to Minors
- Child Sexual Abuse
Every padlet comes standard with several content moderation features. You can turn on name attribution, filter profanity, require post approval, and even force users to sign into an account before accessing your padlet.
To further combat abuse, we recently created a system that automatically detects and removes sexually explicit material on Padlet. The first time this material is detected, the user is warned. The second time, the user is suspended. If the sexually explicit content includes minors, we have an existing system in place to inform authorities with information about the uploader.
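The escalation described above can be sketched as simple logic. This is purely illustrative; all names here (`User`, `handle_explicit_upload`, `strikes`) are hypothetical and do not reflect Padlet's actual implementation:

```python
# Illustrative sketch of the warn-then-suspend escalation described above.
# Names and structure are hypothetical, not Padlet's actual code.
from dataclasses import dataclass

@dataclass
class User:
    strikes: int = 0
    suspended: bool = False

def handle_explicit_upload(user: User, involves_minors: bool) -> str:
    """Apply the escalation policy to one detected upload."""
    if involves_minors:
        # Content involving minors is escalated to authorities,
        # with information about the uploader.
        return "reported_to_authorities"
    user.strikes += 1
    if user.strikes == 1:
        return "warned"       # first detection: warn the user
    user.suspended = True
    return "suspended"        # second detection: suspend the account
```

For example, calling `handle_explicit_upload` twice on the same user would return `"warned"` and then `"suspended"`.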
At the scale at which Padlet operates, and to protect our users' privacy, we cannot have humans review every uploaded image to determine whether it is sexually explicit, so we rely on machine learning algorithms. As good as these algorithms are, they're not always right. This means that some explicit content will still slip through our system, and it's also possible that we will flag content that is not explicit.
For example, as some of you brought to our attention via email, Instagram, and Twitter, images of nursing mothers were flagged by our system as explicit. We have obviously restored all of that content and improved our detection algorithm to prevent that from happening again.
We acknowledge that this system won't be 100% accurate, and there will be both false positives and content that falls through the cracks, but we want you to know that we are working hard on this. In the six weeks that the system has been live, we have already seen a huge improvement in detection accuracy. We are now actively working on detecting self-harm content and, once it is detected, directing people to the proper resources so that they can get help.
As we've learned, content safety is a pretty complicated problem, and, as such, it's not something that we can handle perfectly tomorrow, next month, or maybe ever. It will be a process. It is a huge priority for us, and we will be improving it constantly.
We need your help to keep Padlet a safe platform for everyone. So if you find a padlet that has objectionable content, you can report it to us via our contact form or via email at firstname.lastname@example.org, and we'll take it from there. Soon we will have a dedicated button in every padlet so you can report bad content even faster.
It might take us some time to get to a point where we, and you, are completely happy. In the meantime, bear with us, and please, let us know what we're missing. Keep reporting padlets with objectionable content, let us know if we are flagging the wrong materials, and, if you have any ideas, please pass those along as well.