Memory as cookies? Applying cookie regulations to local AI memory
Local AI memory: what it is and why it matters
Local AI memory features (hereafter “memory”) are becoming an essential capability in recent AI applications. AI agents like OpenClaw, which can automate daily tasks such as sending emails, managing calendars, and even interacting on online services, are reshaping how people work, live, and think. By retaining interaction history, agents can now move beyond single-task execution toward adaptive action on behalf of users. According to OpenClaw’s own statement, the model can “remember” what gets written to disk. Recently, a new generation of AI-enabled smartphones has also introduced built-in general-purpose agents that memorise users’ instructions by default in order to interact with them in a more personalised manner. Memory allows these models to retain and act upon information across conversations, enabling them to follow user instructions, personalise user experiences, and support continual learning, significantly improving both usability and efficiency.
As people increasingly integrate AI devices into their daily lives, the memory features of AI systems are capable of recording vast amounts of personal information, thoughts, and behaviours. Retaining these data across interactions introduces significant privacy risks, including unauthorised access or misuse, as well as challenges for data retention. Pandora’s box has just been opened: the development of AI will only depend more on memory. As OpenAI CEO Sam Altman claimed regarding GPT‑6, people want memory, and memory is the key to making AI truly personal. This intuition is echoed in the community: researchers from Google DeepMind argue that incorporating longer memory would be the next key step in improving frontier AI, and more recent research from DeepSeek argues that conditional memory should be treated as an indispensable modeling primitive for next-generation AI systems. Against this backdrop, urgent questions arise as to how memory should be regulated within the existing legal framework, and how its use can be aligned with the protection of privacy and fundamental human rights.
Regulating memories under EU law: memory as cookies?
While emerging generative AI technologies present new challenges to the existing regulatory framework, the EU’s digital regulatory instruments, such as the AI Act and the General Data Protection Regulation (GDPR), address a broad range of AI applications. Accordingly, even if memory as a technology is novel, its legal status should be determined by the characteristics of its processing activities.
A reading of EU data protection law points to a possible answer: local AI memory can, if not should, be treated as a cookie or similar tracking technology. Article 5(3) of the ePrivacy Directive establishes that storing information on, or accessing information from, a user’s terminal equipment is in principle subject to prior user consent, unless it is strictly necessary for providing a service explicitly requested by the user. Importantly, both the European Data Protection Board (EDPB) and the UK Information Commissioner’s Office (ICO) have clarified that the scope of Article 5(3) of the ePrivacy Directive is not limited to traditional cookies, allowing emerging tracking technologies to also fall within scope where the criteria are met.
Under the ePrivacy Directive, a technology falls within scope where three cumulative criteria are met, and AI memory functions easily satisfy all three. First, Article 5(3) applies to operations relating to “information”, and AI memory clearly involves “information” within this meaning, given that it often consists of highly unstructured, open-ended, and context-dependent content, including user conversations and preference settings. Second, the protection attaches to the terminal equipment associated with the user, and local AI memory is generated, stored, and accessed on users’ terminal equipment, such as mobile phones or personal computers. Third, the Directive covers technologies that involve both the storage of information and subsequent access to that stored information, and AI memory necessarily entails storing and retrieving users’ interaction data. Moreover, AI memory serves the same functional purposes as cookie technologies, including user recognition, personalisation, continuity of service, and behavioural analysis over time.
Seen in this light, AI memory does not present an entirely novel challenge to existing regulations, but rather an alternative form of cookie or similar tracking technology under the current EU data protection framework. This conclusion has significant regulatory implications, particularly regarding how consent requirements under the ePrivacy Directive are operationalised.
Applying cookie requirements to memory: regulatory challenges
Under EU cookie regulations, the basic rule is that storing or accessing information on a user’s device requires transparency and, by default, valid user consent, subject to a few narrowly defined exceptions. This requirement must be further situated within the broader, and still unresolved, interaction between the ePrivacy Directive and the GDPR. As Santos et al. (2020) illustrate, complying with the ePrivacy Directive and the GDPR requires a cookie banner consent mechanism to meet 22 cumulative legal requirements, many of which cannot be properly assessed given the architecture of the web.
Therefore, given the persistent nature of AI memory, treating it as a form of cookie or similar tracking technology would raise significant compliance challenges and further complicate an already dense and cumbersome framework. First and most obviously, it would increase the complexity and cost of compliance. Second, it would strain the coherence of the framework, which consists of at least the ePrivacy Directive and the GDPR, and arguably also the AI Act, the Product Liability Directive, and the Digital Fairness Act proposal. This is especially so given the ongoing uncertainty over the categorisation of AI agents under the AI Act, i.e., whether they should be treated as AI systems or as general-purpose AI models. Third and most importantly, the difficulty of maintaining equivalent levels of protection across these laws enables regulatory arbitrage and creates unintended incentives. For example, if local AI memory carries greater compliance burdens yet is less privacy-intrusive than cloud-based alternatives, model providers may be discouraged from adopting such privacy-preserving designs.
Conclusion
If local AI memory is to be covered as a cookie or similar technology under EU law, the positive and negative effects on multiple stakeholders must be carefully assessed. These include data subjects, AI model developers and deployers, end-device manufacturers and distributors, and even advertising media and publishers, should memory be used for personalised advertising in the near future. Existing regulatory practice already reflects adaptation to technological developments. The European Commission’s proposed Digital Omnibus introduces a narrowly framed exception in Article 88a(b–c) for explicitly requested services and for aggregated audience measurement carried out exclusively for the controller’s own use. A new recital could be added to support the application of Article 88a(b–c) to memory and other similar processing operations performed by an agent, provided the GDPR is eventually simplified in line with the released proposal. The ongoing “simplification” project of EU digital regulation offers an opportunity to rethink the implications of, and conflicts among, different EU regulations on this issue, and to achieve a coherent approach that balances individuals’ fundamental rights over their “memory” with the rapid and promising development of frontier AI innovations.