
Artificial Intelligence & Data Protection
Since the release of OpenAI's ChatGPT chatbot, artificial intelligence has burst into the consumer world, and applications such as Lensa AI have begun to go viral: after the user uploads a sufficient number of photos, they generate avatars and even genuine artworks based on them.
❓Are these types of apps and chatbots privacy-proof?
🧠 As the use of AI grows, it has become increasingly important to protect the sensitive data these systems collect and process.
An article in Libero dated 15 December addresses this, explaining that Prisma Labs (the company behind Lensa AI) claims to delete all photos immediately after generating the modified images.
❓But is this the case for all companies?
📑 To date there have been no “scandals” involving these types of applications, but it is always important to read the Privacy Policy and Terms of Service before using a service, and to remember that data is the fuel of AI: if it is not adequately protected, it can be misused, even dangerously.
Furthermore, we need to be aware that AI can perpetuate the stereotypes and discrimination present in the data used to train it.
Therefore, it is important for companies to pay attention to diversity and inclusion in the collection and use of data, and to take appropriate measures to ensure the privacy and security of their customers.
❓In conclusion, what should I do as a user?
✅ Always check the Privacy Policy and Terms of Service
✅ Research the app or platform before using it
✅ Always think twice about the images and data you put online and/or upload to these applications
✅ Last but not least, never trust blindly.
Undoubtedly it can be cool to have a trendy avatar, but let’s always remember that whoever created an app has one goal: to make money, and companies often pursue it in ways that are not entirely ethical.