---
While traditional generative AI systems built on large language models (LLMs) operate predominantly on a question-and-answer logic, 2025 and 2026 have become the years of "Agentic AI" (autonomous artificial intelligence systems). Agentic AI refers to systems capable of making autonomous decisions, calling tools independently (via API integrations), executing multi-step tasks without human intervention, and defining their own sub-tasks to achieve a given objective. In corporate settings, these systems wield broad authority, ranging from issuing invoices and managing employee calendars to executing automatic payments and altering data in CRM systems. This unprecedented level of autonomy creates unique vulnerabilities regarding data protection and liability under Turkish Law, particularly under the Code of Obligations (TBK) and the Personal Data Protection Law (KVKK).
Classic GenAI vs. Agentic AI: Where is the Legal Divide?
In a classic AI system (e.g., standard ChatGPT), the data flow is limited and predictable: a user enters a "prompt," and the model generates a response based on its training. A breach typically occurs only if the user's prompt violates privacy.
With Agentic AI, the model takes multiple autonomous steps toward a general goal given to it (e.g., "Send a welcome email to the new client and create a CRM record").
At each step, different data processing activities occur, the majority of which are autonomous choices not "hard-coded" in advance. This black-box structure inherently challenges the principle of transparency.
Direct Risks and Issues Under KVKK
1. The Dilemma of Determining the "Data Controller"
According to KVKK Article 3, the data controller is the entity that determines the "purposes and means" of processing personal data. In Agentic AI systems, the model itself autonomously determines the "means" and sometimes even sub-"purposes." Although some industry opinions argue that highly advanced agents blur the line between data controller and data processor, the practice of the Turkish Personal Data Protection Authority is strict: the organization that implements, finances, and sets the primary goal for the agentic system is the sole data controller. Because autonomous AI lacks legal personality, the company remains the only legal subject that can legitimize the processing. The fact that the company "does not know exactly how" the agent processed the data will not shield it from administrative fines.
2. Violation of the "Relevant, Limited and Proportionate" (Minimization) Principle
Per KVKK Article 4/2-ç, data must be processed in a manner that is relevant, limited, and proportionate to the purpose. Autonomous agents frequently tend to dive into data pools larger than necessary (data scraping/extraction) in order to complete a task or make a "better" decision. An agent that, while pursuing its goal, scans a restricted folder outside its authorization commits a clear data breach. Sandboxing corporate agents is therefore the natural technical consequence of this principle.
3. How to Fulfill the Obligation to Inform?
The obligation to inform (KVKK Article 10) dictates that data subjects must be clearly told "for what purposes" their data will be processed. If it is impossible to predict with full certainty what data an autonomous agent will integrate, in what form, and from where, how can privacy notices remain "clear" and "understandable"? Vague statements such as "Your data may be processed by our AI assistant" are considered invalid by the Authority. Activity-based, specific disclosures are mandatory.
4. The Right to be Forgotten and Automated Deletion
Agentic systems can store personal data they learn or index during autonomous tasks in their "memories" (vector databases or RAG systems). Unlike traditional systems, when a deletion request arrives (Article 7), the agent must be able to delete this data not just from a table, but from its active memory loops—which demands complex technical architecture.
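One way to make such deletion requests technically executable is to tag every memory entry with the data subject it concerns, so an erasure request resolves to a metadata filter rather than a manual search. Below is a minimal Python sketch of that idea; all names (`MemoryStore`, `subject_id`, `erase_subject`) are purely illustrative, standing in for the metadata-filtered delete that real vector databases expose.

```python
# Minimal sketch of per-subject erasure from an agent's memory store.
# All names here are illustrative, not a real vector-database API.

class MemoryStore:
    def __init__(self):
        # entry_id -> {"text": ..., "subject_id": ...}; in a real agent this
        # would be a vector store with embeddings attached to each entry
        self.entries = {}

    def add(self, entry_id, text, subject_id):
        self.entries[entry_id] = {"text": text, "subject_id": subject_id}

    def erase_subject(self, subject_id):
        """Purge every memory entry linked to one data subject (KVKK Art. 7)."""
        doomed = [k for k, v in self.entries.items()
                  if v["subject_id"] == subject_id]
        for k in doomed:
            del self.entries[k]
        return len(doomed)  # purge count, to be written to the audit log

store = MemoryStore()
store.add("m1", "Client A prefers email contact", subject_id="client-a")
store.add("m2", "Client B invoice is overdue", subject_id="client-b")
purged = store.erase_subject("client-a")  # only client-b's entry remains
```

The design point is that erasure is driven by subject metadata attached at write time: if entries are not labeled when the agent stores them, no later deletion routine can reliably find them.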
The Liability Regime Under the Turkish Code of Obligations (TBK)
If an autonomous AI agent causes commercial damage, a default, or a data leak, the defense "The AI did it, it was out of our control" holds no legal standing under Turkish Law: the company answers for the agent's conduct just as it answers for the acts of its employees and auxiliaries, both in tort and in contract.
Corporate Implementation: "Agentic AI" Compliance Checklist
Measures businesses must take to protect themselves from penal and civil sanctions:
1. Role-Based Access (Sandboxing): Prevent Agentic AI from acting as a "super-user" with comprehensive administrative rights. If it has permission to read the CRM, restrict its write permission; grant it only "query" authorization for banking APIs. The agent's autonomy must be constrained by technical walls.
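In code, such a constraint can be as simple as an explicit per-tool whitelist that every tool call must pass through. The sketch below assumes hypothetical names (`AGENT_PERMISSIONS`, `guarded_call`, the tool ids) and is not any vendor's API:

```python
# Illustrative whitelist of what the agent may do per tool.
AGENT_PERMISSIONS = {
    "crm": {"read"},           # may read CRM records, never write
    "banking_api": {"query"},  # balance queries only, no transfers
    "email": {"read", "send"},
}

def guarded_call(tool, operation, action):
    """Run `action` only if (tool, operation) is explicitly whitelisted."""
    if operation not in AGENT_PERMISSIONS.get(tool, set()):
        raise PermissionError(f"agent may not '{operation}' on '{tool}'")
    return action()

record = guarded_call("crm", "read", lambda: "record-42")  # allowed
# guarded_call("crm", "write", ...) raises PermissionError: the technical wall
```

Anything not explicitly granted is denied by default, which is exactly the posture the minimization principle demands.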
2. Comprehensive Logging (Audit Trail): Every decision the agent makes, every API it calls, and every database row it extracts must be logged with a timestamp. In the event of a KVKK audit or litigation, you must prove "what the agent did," not "what the agent thought."
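A minimal shape for such a trail is a structured, timestamped record written before each action executes. This is a sketch under assumed names (`AUDIT_LOG`, `log_action`); a production system would write to append-only, tamper-evident storage rather than an in-process list:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # illustrative; use append-only storage in production

def log_action(agent_id, tool, operation, detail):
    """Record what the agent did, with a UTC timestamp, before it runs."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "operation": operation,
        "detail": detail,
    }
    AUDIT_LOG.append(json.dumps(entry, ensure_ascii=False))
    return entry

log_action("agent-1", "crm", "read", "fetched client record 42")
```

Structured JSON entries keep the trail machine-searchable, so an auditor can later reconstruct exactly which records the agent touched and when.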
3. Human-in-the-Loop Rule: For tasks with high impact levels (e.g., outgoing payments from client accounts, executing database deletion requests, becoming a party to a legal contract), the agent's final action must unequivocally require a human to click an "Approve" button.
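The gate itself can be a thin wrapper: a set of high-impact action names that never execute without a truthy answer from a human-facing approval callback. The action ids and function names below are hypothetical examples, not a standard:

```python
# High-impact actions that must never run without explicit human approval.
HIGH_IMPACT = {"payment.execute", "db.delete", "contract.sign"}

def execute(action_name, payload, approve):
    """Run low-impact actions directly; route high-impact ones to a human.

    `approve` stands in for the UI step that returns True only after a
    human has reviewed the payload and clicked "Approve".
    """
    if action_name in HIGH_IMPACT and not approve(action_name, payload):
        return {"status": "blocked", "reason": "human approval required"}
    return {"status": "done", "action": action_name}

blocked = execute("payment.execute", {"amount": 100}, approve=lambda a, p: False)
done = execute("crm.read", {}, approve=lambda a, p: False)
```

Keeping the high-impact list in one place also gives auditors a single artifact showing which agent actions are human-gated.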
4. DPIA Reporting: Deploying Agentic AI is an inherently "high-risk" processing activity under KVKK. Taking an agent live without proactively conducting a Data Protection Impact Assessment (DPIA) poses a severe risk.
5. Updated Privacy Notices: Provide transparency addendums explaining to your customers that processes are "executed by an autonomous AI agent" and outlining the basic logic of the data processing involved.
---
This article is for general information purposes only and does not constitute legal advice.
