| Eylül GÜRCEĞİZ Trainee Lawyer | Roşan ÖZBİNGÖL Trainee Lawyer |
ABSTRACT
Automated decision-making mechanisms, while providing speed and efficiency, encompass matters that must be observed particularly concerning the processing of personal data with respect to privacy, transparency, algorithmic bias and data subject rights. This study will address the legal framework pertaining to automated decision-making mechanisms within the scope of European Union regulations and national law, and data subject rights and transparency obligations will be discussed in light of current case law on the subject. Subsequently, the boundaries of the relationship between automated decision-making mechanisms and profiling will be delineated, and approaches aimed at mitigating the risks of such mechanisms will be evaluated by examining the nature of human oversight in such applications, the elements necessary for effective oversight, and erroneous assumptions.
Keywords: Automated Decision-Making Mechanism, Human Oversight, Automation, Personal Data, Schufa Decision.
INTRODUCTION
The pursuit of speed, cost-effectiveness and operational efficiency in the management of business processes ranks among the driving factors behind innovation in technology. As technology has advanced, automation systems, initially developed solely to facilitate mechanical processes, have transformed into systems capable of making independent decisions with virtually no need for human intervention or contribution. This transformation has also brought a proliferation of tools that render business processes more practical. To ensure such practicality, automated decision-making (“ADM”) mechanisms, defined by the United Kingdom Information Commissioner’s Office (“ICO”) as “the process of making decisions by automated means without any human involvement1,” are widely utilised. According to the ICO, automated decision-making may be based on factual data, but it may equally rely on digitally created profiles. While ADM mechanisms possess positive attributes such as speed, consistency, applicability across various sectors and more stable decisions, these structures, in which personal data is processed intensively, may also produce adverse consequences for the protection of privacy. Data subjects may not even realise that their personal data is subject to an ADM process, and as ADM mechanisms become embedded within digital tools, they may also create risks of algorithmic bias and misclassification, with further adverse effects on data subjects. For this reason, the presence of human oversight in such mechanisms emerges as a significant, albeit not on its own sufficient, safeguard against these risks.
On the other hand, operating ADM mechanisms under human oversight may not offer a sufficient solution on its own where the system is not properly designed, and may even prove inadequate in preventing erroneous or biased outcomes. This also entails the risk that human oversight becomes a merely formal instrument, limited to shifting the liability arising from the system onto the human factor; in other words, a formality to which responsibility has been delegated.
I. ADM WITHIN THE SCOPE OF EUROPEAN UNION LEGISLATION AND NATIONAL LAW
Article 22 of the General Data Protection Regulation (“GDPR”) regulates automated individual decision-making, including profiling, and provides that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her2.” Exceptions to this provision are granted solely where: (i) the decision is necessary for entering into, or the performance of, a contract between the data controller and the data subject, (ii) it is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, or (iii) it is based on the data subject’s explicit consent. The scope of application of the article thus rests on three main elements: (i) the existence of a decision rendered vis-à-vis the data subject, (ii) that decision being rendered on an individual basis by an ADM mechanism, including profiling, and (iii) that decision having a significant effect on the data subject, irrespective of whether such effect is positive or negative. Indeed, in the OQ v Land Hessen3 (“Schufa”) decision of the Court of Justice of the European Union (“CJEU”), the same three elements are required for the applicability of Article 22 of the GDPR. While emphasising that these three elements must be present cumulatively, the CJEU also explicitly states that the outcome resulting from the ADM mechanism must either directly produce a legal effect on the data subject or similarly significantly affect them.
What is meant by this effect is that a third party acts concerning the data subject based on the probability value determined by the data controller, in other words, that the decision-making process “strongly relies” on the said probability value. For instance, in the system examined in the Schufa decision, Schufa, which holds the status of data controller as a private company subject to German law, provides probability-based scores concerning the creditworthiness of third parties to its contractual partners, and these scores, calculated through mathematical and statistical operations, reflect an estimate regarding the person’s future behavioural probability (e.g., the person’s payment capacity, possible changes in financial strength). The creation of scores (scoring) is based on the assumption that the data subject is assigned to a group with similar characteristics, and individuals within the said group will exhibit similar behaviours. In essence, behavioural patterns within the assigned group are attributed to the data subject. The CJEU assessed that the aforementioned situation “produces legal effects concerning the data subject” or “significantly affects the data subject” on the grounds that the probability value determined by Schufa and communicated to the bank plays a decisive role in the granting of credit to the data subject, and qualified the relevant mechanism as ADM. Additionally, it is observed that ADM mechanisms are explicitly referenced in Article 15 of the GDPR, which regulates the data subject’s right of access. Namely, pursuant to paragraph 1(h) of the said Article, the data subject, who has the right to request information about the existence of automated decision-making processes including profiling, has the right to access meaningful information about the logic involved in such processing, as well as the significance and the envisaged consequences of such processing for the data subject4. 
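The mechanics described above can be sketched in a few lines of code. This is an illustrative simplification only, not Schufa’s actual scoring method: the group keys, statistics, threshold and field names are all hypothetical assumptions introduced for the example.

```python
# Illustrative sketch of a third party "strongly relying" on an externally
# supplied probability value, as in the Schufa scenario. The scoring logic,
# group keys, threshold and data fields are hypothetical, not Schufa's method.

def group_based_score(applicant: dict, group_stats: dict) -> float:
    """Assign the applicant the historical repayment rate of the group they
    are mapped to: behaviour observed in the group is attributed to the
    individual, as described in the scoring assumption above."""
    group = (applicant["region"], applicant["age_band"])
    return group_stats.get(group, 0.5)  # default when no group data exists

def bank_decision(score: float, threshold: float = 0.7) -> str:
    """The bank's decision 'strongly relies' on the score: crossing the
    threshold alone determines the outcome for the data subject."""
    return "grant credit" if score >= threshold else "refuse credit"

group_stats = {("north", "30-39"): 0.82, ("south", "30-39"): 0.55}
applicant = {"region": "south", "age_band": "30-39"}
score = group_based_score(applicant, group_stats)
print(bank_decision(score))  # the group's pattern decides the individual's case
```

The point of the sketch is that the human at the bank adds nothing: once the probability value is produced, the outcome for the data subject follows mechanically, which is why the CJEU treated the scoring itself as the decisive step.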
Within this framework, the importance of the right to obtain meaningful information regarding ADM processes becomes even more apparent, since data subjects often do not even realise that they are subject to an ADM mechanism, and when they do, they may need to request further explanation of how the system functions. Indeed, this matter has found reflection in the decisions of both courts and data protection authorities. In a decision of the Austrian Data Protection Authority concerning a data subject who, exercising the right of access under Article 15 of the GDPR, requested information from the data controller company about the manner in which their personal data was processed, the company’s response was found to be incomplete and insufficient. In that decision, which concluded that the data subject’s right of access had been violated, it was determined that the failure to provide a sufficient explanation, particularly regarding the personal data used in the ADM process and its impact on the credit score, infringed paragraph 1(h) of Article 15 of the GDPR. The company challenged that decision before the Federal Administrative Court, which partially upheld the data protection authority’s decision. The company then lodged a further appeal, and the Austrian Supreme Administrative Court (Verwaltungsgerichtshof), in the Dun & Bradstreet Austria5 decision, rejected the appeal, emphasising that the company was indeed operating an ADM process yet had failed to provide meaningful information to the data subject. That decision also stressed that the information provided by the company was extremely general and vague, and that it was not disclosed with sufficient clarity which data affected the credit score and how.
The concept of “meaningful information” referred to in paragraph 1(h) of Article 15 of the GDPR is elucidated in the said decision, which clearly sets forth criteria for the nature of the information that must be provided to the data subject in ADM processes. For information to qualify as meaningful, it must be clear, transparent and comprehensible, and must enable the data subject to understand how their personal data was used and which factors influenced the outcome. The company’s reluctance to share information on the grounds of trade secrecy was likewise not accepted. According to the Court, while the credit scoring method may have the character of a trade secret, this does not eliminate the data subject’s right to learn the logic of the ADM process and the impact of their personal data on the outcome. CJEU case law supports this approach, stating that trade secrets should not, on their own, constitute sufficient grounds for a complete denial of access to information, and emphasising the importance of striking a balance according to the circumstances of the case. Therefore, even where trade secret protection exists, the data subject must be granted access to meaningful information. Moreover, the fact that data subjects are often unable to ascertain whether they are included in an ADM process further increases the importance of this obligation. The European Data Protection Board (“EDPB”) states that ADM mechanisms generally operate in the background, invisible to the data subject, and that individuals therefore cannot always detect automated decision processes on their own6.
Due to this invisibility stemming from the design of ADM mechanisms, it becomes difficult for the data subject to distinguish whether a decision they have received is based on automated processing7. This situation also necessitates the examination of how legal safeguards provided against ADM are regulated at the national level. Although there is no article directly regulating ADM mechanisms within the scope of Law No. 6698 on the Protection of Personal Data (“Law”), paragraph 1(g) of Article 11 entitled “Rights of the data subject” regulates the right “to object to the occurrence of a result to the detriment of the person through the analysis of the processed data exclusively by means of automated systems8.” Here, as in Article 22 of the GDPR, it is required that a result be produced concerning the data subject, but unlike the GDPR, it is specified that this result must be particularly “to the detriment.” As can be seen, data subjects possess various rights such as requesting information from the data controller, objecting to assessments that produce results to their detriment exclusively through automated operations, and similar rights.
II. BOUNDARIES OF ADM AND PROFILING
Profiling is defined by the EDPB as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements9.” Although ADM mechanisms and profiling may appear intertwined, not every ADM mechanism necessarily entails profiling. An ADM process may also be carried out by processing the concrete data at hand within a framework of defined, fixed rules, without making any inference or assessment about the data subject’s personal characteristics. While profiling has a structure based on analysis and prediction, automated decision-making may equally be bound to a deterministic algorithm that follows a fixed causal rule. As the EDPB’s example on this matter illustrates, a highway speed-control system that measures a vehicle’s speed, compares it with the legal limit and automatically generates a penalty constitutes an ADM process. However, since this process does not produce a profile of driving habits or the driver’s personal characteristics and renders a decision solely on concrete factual data, it does not qualify as profiling. Another example is an e-commerce system that processes the age and identity information entered by users and automatically restricts the online sale of alcohol or other products reserved for persons over 18. These examples demonstrate that ADM mechanisms need not invariably create profiles or perform analyses based on complex algorithms. The frequent association of ADM mechanisms with artificial intelligence rests on a similar misconception.
Just as with profiling, ADM mechanisms may also be built from programmed, deterministic rules and algorithms without relying on artificial intelligence. The fact that a system is not based on artificial intelligence therefore does not mean that it is not an ADM mechanism. What is determinative is how the system operates and through which processing steps the decision emerges.
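The two EDPB-style examples above, a speed-control penalty and an age gate, can be sketched as purely deterministic rules. The limit values and function names below are illustrative assumptions, not taken from any actual system; the point is simply that no profile, inference or artificial intelligence is involved.

```python
# Deterministic, rule-based ADM: each decision follows mechanically from a
# fixed rule applied to concrete factual data, with no profiling involved.

def speeding_penalty(measured_speed_kmh: float, legal_limit_kmh: float) -> bool:
    """Highway speed control: compare the measured speed against the legal
    limit. No profile of driving habits is built; the concrete fact decides."""
    return measured_speed_kmh > legal_limit_kmh

def may_buy_alcohol(age: int) -> bool:
    """E-commerce age gate: a fixed rule applied to the declared age, with
    no inference or prediction about the person's characteristics."""
    return age >= 18

print(speeding_penalty(132.0, 120.0))  # True: penalty generated automatically
print(may_buy_alcohol(17))             # False: sale refused automatically
```

Both functions are ADM in the sense of Article 22 when their output significantly affects the person, yet neither analyses or predicts personal aspects, which is precisely the boundary between ADM and profiling drawn above.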
III. HUMAN OVERSIGHT IN ADM MECHANISMS
How can compliance with personal data protection legislation be ensured while still benefiting from the speed and practicality of ADM mechanisms? The method that comes to mind to secure both benefits simultaneously is to incorporate human oversight into ADM processes. This approach also eliminates the element of “being based solely on automated processing” required by the CJEU’s Schufa decision with respect to Article 22 of the GDPR. However, the mere presence of human oversight is not sufficient: for oversight to be effective, the human must be able to evaluate the system’s decisions and intervene when necessary. In the European Data Protection Supervisor’s TechDispatch10 (“TechDispatch”) document entitled Human Oversight of Automated Decision-Making, human oversight is defined as “at least one human operator monitoring ADM system operations, evaluating the system’s decisions, and having the authority to intervene when necessary,” and, in discussing the different types of oversight according to the stage at which it can be implemented, the document states that real-time oversight (ex-ante monitoring) is of particular importance11.
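A minimal sketch of such an oversight arrangement, an operator who can evaluate a decision and intervene before it takes effect, could look like the following. The confidence field, the threshold and the review interface are assumptions introduced for illustration; real systems do not necessarily expose a confidence value, and this is not the EDPS's prescribed design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float  # hypothetical field; assumed for this sketch

def with_human_oversight(decision: Decision,
                         review: Callable[[Decision], str],
                         confidence_floor: float = 0.9) -> str:
    """Oversight sketch: decisions the system is unsure about are routed to
    a human operator, who may confirm or override the outcome before it
    produces any effect on the data subject."""
    if decision.confidence >= confidence_floor:
        return decision.outcome
    return review(decision)  # the human has real authority to change it

# Usage: an operator who diverts low-confidence outcomes to manual handling.
operator = lambda d: "manual review"
d = Decision("subject-42", "refuse", confidence=0.4)
print(with_human_oversight(d, operator))  # routed to the operator
```

Note that this only satisfies the TechDispatch definition if the reviewer genuinely evaluates the case: as the automation-bias discussion below shows, an operator who rubber-stamps the system's output provides oversight in form only.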
TechDispatch focuses in particular on the importance of real-time oversight in the production environment and illustrates erroneous assumptions regarding human oversight in ADM mechanisms with examples. One of these assumptions is that “ADM systems will operate in accordance with specific and predetermined conditions.” However, unforeseen situations are frequently encountered in technological systems, and designing for every possibility is rarely practical. For this reason, the assumption that ADM mechanisms will always operate under certain conditions poses a risk. For instance, in a vehicle operating on autopilot, the lane-keeping feature depends on the correct detection of lane markings on the road. Where these markings are missing or erroneous, the system will not function correctly, creating a safety risk. Erroneous assumptions may arise not only about the technology itself but also about human-technology interaction. One of the first methods that comes to mind to render human oversight effective in ADM mechanisms is to present the decision or outcome produced by the ADM system to a human for evaluation, so that the final decision is rendered by the human. However, as stated in TechDispatch, it is an erroneous assumption that ADM mechanisms will not influence human judgment. For example, in a study published in the journal Radiology in 2023, 27 radiologists examining 50 mammograms were observed to be significantly influenced by the Breast Imaging Reporting and Data System (BI-RADS) categories suggested by artificial intelligence. The study used two separate sets containing both correct and incorrect categorisations, and it was determined that the artificial intelligence guidance significantly altered the experts’ decisions.
As a result of this experiment, it was observed that radiologists were meaningfully influenced by the artificial intelligence’s predictions regarding BI-RADS categories. In particular, less experienced radiologists were more inclined to adopt the predictions given by the artificial intelligence than highly experienced radiologists12. This research demonstrates that even when the final decision belongs to the human, ADM outputs can readily influence experts and steer them towards decisions they would not otherwise make. Beyond erroneous assumptions about ADM mechanisms themselves, mistaken beliefs may also arise concerning how human oversight will function. Therefore, where an ADM mechanism incorporating human oversight is to be operated, it is important, for the mechanism to function in a secure and predictable manner, that interfaces be designed to present information clearly and accessibly, and that similar arrangements be implemented to prevent potential risks. In line with this, persons tasked with oversight in ADM mechanisms should receive regular training on how their perceptions may be misled and how to reach sounder decisions despite this. Such training should be adapted to persons with different levels of expertise, experience and prior knowledge13.
CONCLUSION
The inclusion of ADM mechanisms in decision-making processes constitutes a distinct matter of evaluation under personal data protection legislation. The regulations and decisions examined demonstrate that ADM mechanisms can attain a privacy-sensitive structure only where sufficient transparency, explainability and effective human intervention are present. Within this scope, under both national and international legislation, it is important to make the elements of application, objection and information, which enable data subjects to exercise control over their personal data, more functional. In particular, to avert the adverse consequences that ADM mechanisms may produce for data subjects, the information obligation has been emphasised, and it has been underscored that the existence of trade secrets cannot, on its own, justify refusing to explain the matters underlying algorithmic assessments. Ultimately, fully automated systems that make decisions without human intervention may give rise to erroneous assessments, and such systems need to be supported by a supervision structure that is monitored, updated and effectively operated in practice.
1 Information Commissioner’s Office (ICO) “Automated decision-making and profiling,” Access Date: 20 November 2025 https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/automated-decision-making-and-profiling/
2 General Data Protection Regulation (GDPR), “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016,” Official Journal of the European Union, L119, 1–88, Access Date: 20 November 2025 https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng
3 Court of Justice of the European Union, “OQ v Land Hessen, Case C-634/21, ECLI:EU:C:2023:957, 7 December 2023, EUR-Lex” Access Date: 21 November 2025 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62021CJ0634
4 General Data Protection Regulation (GDPR), “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016,” Official Journal of the European Union, L119, 1–88, Access Date: 20 November 2025 https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng
5 Verwaltungsgerichtshof, Erkenntnis vom 7. April 2022, Ro 2020/04/0010 Access Date: 27 November 2025 https://gdprhub.eu/index.php?title=VwGH_(Austria)_-_Ro_2020/04/0010; Details pertaining to the Austrian Data Protection Authority decision subject to appeal are included in the Verwaltungsgerichtshof decision.
6 European Data Protection Board (EDPB) “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679” Access Date: 4 December 2025 p. 9–10 https://ec.europa.eu/newsroom/article29/items/612053/en
7 European Data Protection Board (EDPB) “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679” Access Date: 4 December 2025 p. 8–10 https://ec.europa.eu/newsroom/article29/items/612053/en
8 Kişisel Verilerin Korunması Kanunu, No. 6698 (2016), Resmî Gazete, No. 29677, 7 April 2016. Access Date: 3 December 2025
9 European Data Protection Board (EDPB) “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679” Access Date: 4 December 2025 https://ec.europa.eu/newsroom/article29/items/612053/en
10 European Data Protection Supervisor. (2025). TechDispatch – Human Oversight of Automated Decision-Making. European Union. Access Date: 2 December 2025 https://www.edps.europa.eu/system/files/2025-09/25-09-15_techdispatch-human-oversight_en.pdf
11 European Data Protection Supervisor. (2025). TechDispatch – Human Oversight of Automated Decision-Making. European Union. Access Date: 2 December 2025 https://www.edps.europa.eu/system/files/2025-09/25-09-15_techdispatch-human-oversight_en.pdf
12 A. YALA, C. D. LEHMAN, M. SCHAPIRO, & R. BARZILAY, (2023). “The Impact of Artificial Intelligence BI-RADS Suggestions on Reader Performance. Radiology”, 307(4), e222176. Access Date: 2 December 2025 https://doi.org/10.1148/radiol.222176
13 W. Nicholson PRICE II / Rebecca CROOTOF / Margot E. KAMINSKI, “Humans in the Loop”, Vanderbilt Law Review, Vol.76, No.2, 2023, p. 501–502. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4066781
