a.k.a "this-is-why-we-cant-have-nice-things"
🔒 Safety & Ethics
Grammable stands against any intentional abuse, and we are fully committed to creating a safe platform before we launch it.
Potential for abuse
Disclaimer: Our model is trained on the Google Books dataset and Common Crawl. The top ten domains in our training data include: wordpress.com, medium.com, stackexchange.com, tumblr.com, elsevier.com, genius.com, bbc.co.uk, libsyn.com, yahoo.com, and nytimes.com
The Grammable Platform may not be used for any of the following purposes. The description of each disallowed use case is illustrative, not exhaustive; Grammable reserves the right, at its sole discretion, to permanently terminate access for harms that are not listed here.
- Violence and threats:
- Violence/Incitement: Actions that threaten, encourage, or incite violence against anyone, directly or indirectly.
- Self-harm: Promoting or glorifying acts of self-harm, such as cutting, eating disorders like anorexia or bulimia, and suicide.
- Sexual exploitation: Promoting or celebrating sexual exploitation, including the sexualization of minors.
- Hate speech: Promoting hatred or glorifying abuse against people based on characteristics like race, ethnicity, national origin, religion, disability, disease, age, sexual orientation, gender, or gender identity.
- Antisocial and antidemocratic uses:
- Harassment: Bullying, threatening, shaming, doxxing.
- Insensitivity: Belittling victims of severe physical or emotional harm (even if unintentional).
- Intentional sowing of division: Sharing of divisive generated content in order to turn a community against itself.
- Harmful belief perpetuation: Perpetuating racism and sexism (even if unintentional).
- Applications that aim to characterize identity: Attempting to characterize gender, race, or ethnicity.
- Graphic depictions: Distribution of sexually explicit acts, torture, or abuse.
- Political manipulation: Attempting to influence political decisions or opinions.
- Deceit:
- Fraud: Catfishing, phishing, attempting to circumvent the law.
- Spam: Sending unsolicited email and messages or manipulating search engines.
- Misrepresentation: Representing raw generations as coming from humans, using supervised generations under false identities, or having a single person publish generations under many identities that appear to be independent.
- Misinformation: Creating or promoting harmful false claims about government policies or public figures, including applications founded on unscientific premises.
- Attacks on security or privacy:
- Security breaches: Spearphishing.
- Privacy violations: Attacks on the model intended to extract personal information.
- Unsafe unsupervised uses:
- Social media: Posting content to social platforms in an automated way.
- No transparency: Applications that do not disclose that content is generated through automated means (see the compliance sketch after this list).
- Decision-making:
- AI-based social scoring for general purposes by public authorities: Feeding output into larger decision-making systems that influence actions, decisions, or policies without a human in the loop.
- Classification of individuals: Applications that classify and/or profile people based on protected characteristics, or infer those characteristics from text written about them or by them.
- High-risk generations:
- Generation or summarization of long-form documents.
- Generation of politically, economically, medically, or culturally sensitive content.
- Other:
- Plagiarism: Tools that promote academic dishonesty.
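
For developers building on the platform, here is a minimal, hypothetical sketch of how an integration could satisfy the transparency and unsupervised-use expectations above: gate every generation behind explicit human review and append a disclosure notice before anything is posted. The `Draft` type, `prepare_for_publication` helper, and disclosure string are illustrative assumptions, not part of any Grammable API.

```python
# Hypothetical compliance sketch (not a Grammable API): fail closed unless a
# human has reviewed the generation, and always attach an AI disclosure.
from dataclasses import dataclass

AI_DISCLOSURE = "This text was generated with AI assistance."


@dataclass
class Draft:
    text: str
    human_approved: bool = False  # set True only after a person reviews the draft


def prepare_for_publication(draft: Draft) -> str:
    """Refuse to publish unreviewed generations and append a disclosure notice."""
    if not draft.human_approved:
        raise PermissionError("Generated content must be reviewed by a human before posting.")
    return f"{draft.text}\n\n{AI_DISCLOSURE}"


if __name__ == "__main__":
    draft = Draft(text="Example generated caption.")
    draft.human_approved = True  # a human reviewer signs off here
    print(prepare_for_publication(draft))
```

The design point is that publication fails closed: nothing reaches a social platform unless a person has explicitly approved it, and every published generation carries the disclosure.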
Copyright © 2022 Verst, Inc. All rights reserved.