The Personal Data Protection Authority's (KVKK) guide titled "The Use of Generative Artificial Intelligence Tools in the Workplace" and its broader AI framework have clarified the legal expectations for AI systems used daily by institutions and professionals. The overarching message of these documents is clear: fearing errors and banning the technology outright is not a solution, because prohibitions breed "Shadow IT": uncontrolled, hidden, and insecure usage. What companies must do instead is bring these tools within a legal framework through rigorous administrative and technical measures.
At Ertuğ & Partners, we recommend following this step-by-step compliance checklist to make these risks manageable and avoid administrative fines.
A. Corporate Inventory and Data Mapping
Achieving compliance is impossible without knowing exactly which applications are processing what kinds of data. The KVKK's foremost expectation is a transparent inventory.
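In practice, such an inventory is easier to audit when kept as structured records rather than prose. A minimal sketch in Python follows; the class and field names are our own illustrative assumptions, not a KVKK-mandated schema:

```python
from dataclasses import dataclass
from enum import Enum

class KVKKRole(Enum):
    """Role under KVKK Article 3 for a given tool."""
    DATA_CONTROLLER = "data controller"
    DATA_PROCESSOR = "data processor"

@dataclass
class AIToolRecord:
    name: str                      # e.g. "Microsoft Copilot"
    acquired_via: str              # "corporate license" or "individual use"
    data_processed: list[str]      # categories of personal data in prompts/files
    processing_location: str       # "cloud", "on-device", or "sent to LLM provider"
    trains_provider_model: bool    # does the provider train its base model on this data?
    server_countries: list[str]    # feeds the KVKK Art. 9 transfer analysis later
    kvkk_role: KVKKRole

# One entry per tool; individually acquired tools are Shadow IT candidates.
inventory = [
    AIToolRecord(
        name="ChatGPT",
        acquired_via="individual use",
        data_processed=["client correspondence"],
        processing_location="sent to LLM provider",
        trains_provider_model=True,
        server_countries=["USA"],
        kvkk_role=KVKKRole.DATA_CONTROLLER,
    ),
]
```

Keeping the record machine-readable lets the same inventory drive the transfer analysis in section D and the VERBİS update in section F.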
[ ] Catalog all AI tools used in the company, whether acquired via corporate license or used individually by employees (including ChatGPT, Claude, Microsoft Copilot, Notion AI, Midjourney, and sector-specific applications).
[ ] Chart a detailed data flow map for each application: are the data (prompts and uploaded files) processed in the cloud, locally on the device, or sent to the LLM provider? Crucially, does the provider use this data to train its own base model?
[ ] To detect Shadow IT, conduct network traffic analysis with the IT department to determine which unapproved AI websites employees visit outside of corporate policy.
[ ] Clarify your role as either "Data Controller" or "Data Processor" under KVKK Article 3 for each tool. This distinction is vital, as it determines liability in the event of a breach.

B. Legal Grounds and Processing Limitations
Feeding personal data to an artificial intelligence model must rely on a legitimate legal basis articulated in KVKK Articles 5 and 6.
[ ] Determine a clear legal processing condition for every AI use case, such as "stipulated by law," "performance of a contract," or "legitimate interest."
[ ] If relying on explicit consent, ensure that it is freely given specifically for artificial intelligence processing. Blanket consents (e.g., "You may process all my data however you wish") are legally invalid in Turkish courts.
[ ] If relying on the "legitimate interest" basis (e.g., an AI tool summarizing internal correspondence), conduct and formally document a balance-of-interests test showing that the processing does not harm the fundamental rights of the data subject.

C. Strict Protection for Special Category Personal Data (KVKK Art. 6)
[ ] Establish a company-wide prohibition against entering special category data listed in Article 6 (such as health records, biometric data, union or association memberships, ethnic origin, and criminal convictions) into AI systems, especially cloud-based ones.
[ ] In mandatory exceptional scenarios, such as in the healthcare or insurance sectors, document that you fulfill the additional conditions laid out in Art. 6 (including the technical measures expected by the Board). Consent alone is rarely sufficient for special category data; strict IT requirements such as logging and encryption must also be met.

D. Compliance Framework for Cross-Border Data Transfers
Most widely used tools, such as Copilot or ChatGPT, host their servers in the USA or Europe. The moment you input personal data into these tools, a "transfer of data abroad" under KVKK Article 9 legally occurs.
[ ] Formally identify the countries where the application provider's data centers (servers) are located.
[ ] Select one of the statutory transfer mechanisms approved by the Authority (Standard Contractual Clauses, Binding Corporate Rules, or an adequacy decision). Most global technology companies currently opt to sign Standard Contractual Clauses (SCCs).
[ ] Scrutinize "Enterprise" contracts to ensure they contain minimum clauses preventing the provider from using your enterprise data to train its own models (zero-data-retention/opt-out clauses).

E. Technical & Administrative Data Security and DPIA (KVKK Art. 12)
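Several of the measures in this section lend themselves to automation. For example, a DLP rule that stops Turkish ID numbers (TCKN) from leaving the network can be sketched as a regex scan combined with the publicly documented TCKN checksum. This is a minimal illustration, not a substitute for a full DLP product; function names are our own:

```python
import re

def is_valid_tckn(candidate: str) -> bool:
    """Validate a Turkish ID number (TCKN) using its public checksum rules."""
    # 11 digits, first digit non-zero
    if not re.fullmatch(r"[1-9]\d{10}", candidate):
        return False
    d = [int(c) for c in candidate]
    # 10th digit = ((sum of digits 1,3,5,7,9) * 7 - (sum of digits 2,4,6,8)) mod 10
    if d[9] != ((d[0] + d[2] + d[4] + d[6] + d[8]) * 7
                - (d[1] + d[3] + d[5] + d[7])) % 10:
        return False
    # 11th digit = sum of the first ten digits mod 10
    return d[10] == sum(d[:10]) % 10

def contains_tckn(prompt: str) -> bool:
    """Return True if the prompt contains a number that checksums as a TCKN."""
    return any(is_valid_tckn(m) for m in re.findall(r"\b[1-9]\d{10}\b", prompt))
```

The checksum step matters: blocking every 11-digit number would flood users with false positives on invoice or order numbers, while a checksum match is a strong signal that real personal data is about to leave the network.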
[ ] Enforce Role-Based Access Control (RBAC) across systems: each employee should be able to use AI features only on folders they are authorized to view.
[ ] Implement Data Loss Prevention (DLP) rules in IT systems to technically block employees from pasting Turkish ID numbers, medical results, or documents classified as "Confidential" into external AI prompt boxes.
[ ] Perform a Data Protection Impact Assessment (DPIA), which the Authority strongly recommends. Treat a DPIA as mandatory before deploying high-risk automated processes such as HR recruitment algorithms or automated loan approvals.
[ ] Ensure that prompt histories are encrypted so they remain inaccessible to unauthorized personnel.

F. Corporate Implementation and Transparency
The obligation to inform (Article 10) is not just legal paperwork; it embodies the principle of honesty toward the data subject.
[ ] In documents aimed at employees (Employee Privacy Notice) and clients (Client/Visitor Privacy Notices), explicitly state that their data may be subjected to artificial intelligence analysis and processed via tools whose servers are located abroad.
[ ] Publish an internal Artificial Intelligence Usage Policy annually or during onboarding. The policy must concretely specify which tools are permitted and for what purposes. To give it teeth, define the disciplinary and termination consequences of violations in line with Turkish Labor Law.
[ ] Update your VERBİS (Data Controllers Registry) record to include "technology companies providing artificial intelligence services" under "Recipient Groups to Whom Personal Data Is Transferred."

---
This compliance checklist is prepared for general legal information purposes and does not constitute legal advice.