Ertuğ & Partners
Blog
Oct 28, 2025

What KVKK's Generative AI Guide (2026) Changes for Businesses

KVKK, AI

---

The Personal Data Protection Authority of Turkey (KVKK) published a comprehensive guide titled "Generative Artificial Intelligence and the Protection of Personal Data (in 15 Questions)" on November 24, 2025. This guide has clarified the usage standards for Generative AI (GenAI) tools like ChatGPT, Claude, and Midjourney, which have become deeply integrated into the daily workflows of companies. Together with the subsequently published "Use of Generative AI Tools in the Workplace" document, these two texts outline the de facto legal compliance framework for businesses in Turkey. Corporate entities are no longer permitted to use GenAI tools haphazardly but must operate under the strict regulations of the KVKK legislation.

Core Messages of the Guide and Legal Analysis for Companies

1. The Entire Lifecycle is Under KVKK Scope:

The most critical point of the guide is that the KVKK regulations apply not only at the moment the AI provides output to the end user (inference) but across all stages, including training data collection (web scraping, etc.), model fine-tuning, and user prompting. Even if the design of a model does not intend to "target or profile individuals," if the AI generates data relating to a real person in any way, KVKK provisions apply immediately.

2. Data Controller and Data Processor Distinctions Become Clearer:

The Authority emphasizes that the distinction between a data controller and a data processor must be made based on the specific circumstances of the case, rather than through a theoretical approach. A GenAI provider (such as OpenAI) is considered a "Data Controller" because it trains the foundational model with its own determined purposes and means. However, a company that integrates these tools into its internal operations or customer service processes (e.g., designing a chatbot using its own data via an API) attains independent "Data Controller" status for that specific transaction.

3. Narrow Interpretation of the Anonymous Data Exception:

Under KVKK Article 3/1-b, anonymized data—which cannot be related to an identified or identifiable natural person—falls outside the law's scope. However, the Authority interprets the anonymization threshold exceedingly narrowly. Merely deleting direct identifiers such as names or Turkish ID numbers does not render the data "anonymous." If there is any probability that the model can make the individual identifiable through contextual analysis (combining various data points), the data is not considered anonymized. The KVKK exception applies only if all inputted data is masked using an irreversible, 100% anonymization technique.
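The re-identification risk described above can be illustrated with a minimal sketch. The records, field names, and values below are entirely hypothetical; the point is that after deleting the direct identifier, a unique combination of remaining attributes (quasi-identifiers) can still single out one person, so the data is pseudonymized rather than anonymous.

```python
# Illustrative only: removing direct identifiers does not anonymize data.
# All records and field names below are hypothetical.
from collections import Counter

records = [
    {"name": "A. Yilmaz", "birth_year": 1985, "city": "Eskisehir", "job": "notary"},
    {"name": "B. Demir",  "birth_year": 1990, "city": "Istanbul",  "job": "teacher"},
    {"name": "C. Kaya",   "birth_year": 1990, "city": "Istanbul",  "job": "teacher"},
]

# Step 1: delete the direct identifier -- the naive "anonymization" attempt.
stripped = [{k: v for k, v in r.items() if k != "name"} for r in records]

# Step 2: count how many records share each remaining attribute combination.
combos = Counter((r["birth_year"], r["city"], r["job"]) for r in stripped)

# Any combination held by exactly one record still points to an identifiable
# individual -- not anonymous in the sense of KVKK Article 3/1-b.
unique_records = [c for c, n in combos.items() if n == 1]
print(unique_records)  # the lone 1985/Eskisehir/notary record stands out
```

In this toy example the two teachers are indistinguishable from each other, but the single notary remains identifiable by context alone, which is exactly the "contextual analysis" scenario the Authority warns about.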

4. No Permission for Special Category Personal Data:

Processing special category personal data (health data, union membership, ethnic origin, etc.) listed in KVKK Article 6 through artificial intelligence tools harbors severe risks. Under the new legislation framework, passing such data through the training or prompt filters of a cloud-based AI system has become nearly impossible without "Explicit Consent" and a formal "Board Decision."

5. New Rules for Cross-Border Data Transfer:

The servers of the most widely used GenAI tools in the business world are mostly located abroad (e.g., USA, Ireland). Even using these services in a browser tab constitutes the "Transfer of Personal Data Abroad" (KVKK Article 9). Companies must either sign Standard Contractual Clauses and notify the KVKK, rely on an Adequacy Decision (which are not widely available currently), or obtain Explicit Consent. Otherwise, a company's transmission of personal data to foreign AI providers is unlawful.

The Danger of "Shadow IT" and the Workplace Usage Document

The second supportive document published by the KVKK targets employers. The fundamental point is the abandonment of a prohibitionist mindset:

  • Banning is Not the Solution: Completely banning these tools, which enhance employee productivity, pushes personnel toward covert usage ("Shadow IT") on personal accounts and unencrypted networks far from IT supervision. This maximizes the risk of data leaks.
  • Requirement for a Corporate AI Policy: The Authority expects a written, clear, and binding "Artificial Intelligence Usage Policy" for employees. The policy must communicate exactly which tools are permitted, what types of customer contracts can be uploaded, and what personal data must be completely omitted from prompts.
  • Human in the Loop: Because decisions generated by AI (due to "hallucinations" or biased outputs) may have negative consequences for individuals, it is essential that an authorized employee verifies the outputs at the end of every process. The right to intervene in automated decision-making mechanisms is crucial.
6 Steps Businesses Must Take for 2026

1. Map Your Data Flow: Inventory both the data that internal assistants like Copilot or Notion AI can reach through your corporate systems (email, documents, calendars) and the data employees manually input into ChatGPT/Claude.

2. Establish "Prompting" Rules: Declare a policy requiring personnel to completely anonymize names, titles, financial values, and contract texts before entering prompts.

3. Make KVKK-Approved Purchases: Prefer B2B-licensed API access or "Enterprise" AI plans, which generally pledge not to use your data for model training, and restrict the use of free, consumer-tier AI versions.

4. Prepare a Data Protection Impact Assessment (DPIA): If you are to decide on using AI-supported software, prepare an impact assessment detailing the potential KVKK violation risks (the Authority strongly recommends this).

5. Update Privacy Notices and VERBİS: If an AI chatbot or similar service is providing customer support, your privacy notice must include a statement such as "This generative artificial intelligence provider is utilized in the processing of your request, and data is transferred abroad." The recipient groups in the VERBİS registry must also be updated.

6. Organize Training Sessions: Providing periodic training to personnel on information security and AI data privacy is a legal obligation.
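The prompt-hygiene rule in step 2 can be sketched as a simple pre-filter that masks obvious identifiers before text leaves the company network. The patterns and placeholders below are hypothetical and deliberately incomplete; regex masking is pseudonymization at best, so sensitive documents still should not be pasted into external tools.

```python
import re

# Hypothetical pre-prompt filter. Pattern coverage is illustrative,
# not exhaustive -- names and free-text identifiers are not caught here.
PATTERNS = [
    (re.compile(r"\b\d{11}\b"), "[TCKN]"),    # 11-digit Turkish ID numbers
    (re.compile(r"\bTR\d{24}\b"), "[IBAN]"),  # Turkish IBANs
    (re.compile(r"\b\d[\d.,]*\s?(?:TL|TRY|EUR|USD)\b"), "[AMOUNT]"),  # money
]

def redact(prompt: str) -> str:
    """Replace each matched identifier with a neutral placeholder."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Client 12345678901 will pay 250.000 TL to TR120006200000000123456789."))
# -> "Client [TCKN] will pay [AMOUNT] to [IBAN]."
```

A filter like this can run as a browser extension or proxy layer in front of approved tools, turning the written policy into an enforced default rather than a memo employees must remember.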

Administrative Fines are Growing

Administrative fines under Article 18 for KVKK violations have reached a much more devastating magnitude with the 2026 revaluation rates. The gaps created by new technologies in data privacy can result in administrative sanctions of millions of liras and severe damage to brand reputation.

---

This article is for general information purposes only and does not constitute legal advice.