GenAI and Privacy
Privacy Principles
Within the world of artificial intelligence, data privacy is a major concern. The nature of artificial intelligence, including its reliance on abundant and accurate data, lends itself to the potential abuse of personal data. Each AI model or service is different, but six privacy principles drive how data should be managed, whether you are simply using a GenAI service or training a new GenAI model. These privacy principles are purpose limitation, data minimization, lawfulness, transparency, protection, and duration. If your own data use, or that of a service you rely on, violates these principles, you should find an alternative that does adhere to them.
Purpose Limitation: Data should be collected and used only for a specific, stated purpose.
Data Minimization: Collect and share only the data actually needed for the task at hand.
Lawfulness: Data must be collected and processed on a valid legal basis, with appropriate consent.
Transparency: People should be told clearly what data is collected and how it is used.
Protection: Data must be safeguarded against loss, leaks, and misuse with appropriate security measures.
Duration: Data should be retained only as long as it is needed, then deleted.
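To make one of these principles concrete, the sketch below shows data minimization in practice: stripping obvious personal details from a prompt before it is sent to any external GenAI service. This is a minimal illustration under simple assumptions; the regex patterns and placeholder labels are hypothetical, and a production pipeline would rely on a dedicated PII-detection tool.

```python
import re

# Illustrative patterns for two common kinds of personal data. A real
# deployment would use a vetted PII-detection library and cover far more
# categories (names, addresses, account numbers, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def minimize(prompt: str) -> str:
    """Replace personal details with placeholders so that only the data the
    model actually needs ever leaves your environment."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Summarize this complaint from jane.doe@example.com "
           "(phone 555-867-5309): the invoice total is wrong.")
    print(minimize(raw))
    # Summarize this complaint from [EMAIL REDACTED] (phone [PHONE REDACTED]): the invoice total is wrong.
```

The same idea applies to files and datasets: share only the fields the model needs, and redact or drop the rest.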
Risk Factors for Violating Privacy Laws
If you’re considering using a GenAI service for your work, ask yourself the following questions. The answers are good indicators of whether the service handles data in a way that protects privacy:
| Question | Why it matters |
|---|---|
| Does the service have a current privacy policy? | If a service doesn't have a privacy policy, it almost certainly doesn't handle data safely. |
| Is there an option not to use your data to train models? | Opt out of having your data used to train models whenever possible. Some services reset this setting every time you open the app, so check it regularly. |
| How will your data be stored? | Make sure the service stores data securely; insufficient security can lead to data leaks. |
| Does the service disclose its data sharing policies? | If a service does not disclose its data sharing policies at all, treat it as if it will share your data. |
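If you find yourself evaluating several services, it can help to record the answers to these questions in one place. The sketch below is a hypothetical checklist (the class and field names are illustrative, not an official framework or any provider's API); a service that raises one or more flags deserves a closer look before you share work data with it.

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the risk factors above. The field names
# and wording are illustrative assumptions, not an official assessment tool.
@dataclass
class ServiceAssessment:
    has_current_privacy_policy: bool
    can_opt_out_of_training: bool
    storage_security_documented: bool
    data_sharing_disclosed: bool

    def risk_flags(self) -> list[str]:
        """Return the risk factors this service fails to address."""
        checks = {
            "No current privacy policy": self.has_current_privacy_policy,
            "No opt-out from model training": self.can_opt_out_of_training,
            "Storage security not documented": self.storage_security_documented,
            "Data sharing policies not disclosed": self.data_sharing_disclosed,
        }
        return [flag for flag, passed in checks.items() if not passed]

if __name__ == "__main__":
    service = ServiceAssessment(
        has_current_privacy_policy=True,
        can_opt_out_of_training=False,
        storage_security_documented=True,
        data_sharing_disclosed=False,
    )
    for flag in service.risk_flags():
        print("Review before use:", flag)
```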