Many scientific working groups are currently discussing data storage, focusing on reducing storage management costs, supporting standard protocols, and exploiting new paradigms such as the data lake. Cache systems are a key technology for fast access to geographically distant resources or resources outside NRENs, enabling optimal usage of the network backbone. My research activity focuses on the design and implementation of a cache technology compatible with the security mechanisms widely used in distributed infrastructures for scientific computing and experiments, such as the HEP ones. The goal is to develop the software components needed to bridge standard protocols and the requirements of HPC/HTC applications. I am investigating federation technologies in order to aggregate traditional storage areas with volatile or opportunistic resources, including those exposing cloud interfaces. These technologies present the storage systems within a single namespace with fault-tolerant capabilities, resilient to the loss of part of the distributed filesystem. Moreover, the federation provides a common authentication and authorization method based on X.509 certificates and VOMS extensions. The first outcome was the design of an architectural model, defined as a protocol stack composed of four layers: storage, cache, federation, and application. The model has been implemented in a local testbed using a set of open-source software components, such as Disk Pool Manager (DPM) and DynaFED. In particular, I developed plugins implementing the cache layer that take advantage of the volatile pools feature introduced in the latest DPM release. The main use cases are the BELLE II and ATLAS experiments, where the cache layer is built on top of a set of production storage systems through the dynamic federator DynaFED. Preliminary tests validated the model, showing promising results and opening the way to new functionalities in future experiment computing models.
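The read path through the four-layer stack can be illustrated with a minimal sketch. The class names and lookup logic below are illustrative assumptions only, not the actual DPM or DynaFED interfaces: a federation layer resolves a logical path first against a volatile cache pool and, on a miss, against the underlying storage endpoints.

```python
# Hypothetical sketch of the four-layer read path:
# application -> federation -> cache -> storage.
# All names are illustrative, not the DPM/DynaFED APIs.

class Storage:
    """Storage layer: one production storage endpoint."""
    def __init__(self, name, files):
        self.name = name
        self.files = files  # logical path -> content

    def read(self, path):
        return self.files.get(path)


class VolatileCache:
    """Cache layer: a volatile pool holding copies of recently read files."""
    def __init__(self):
        self.pool = {}

    def get(self, path):
        return self.pool.get(path)

    def put(self, path, data):
        self.pool[path] = data


class Federation:
    """Federation layer: exposes all storages under a single namespace,
    tolerating the loss of individual endpoints."""
    def __init__(self, cache, storages):
        self.cache = cache
        self.storages = storages

    def read(self, path):
        data = self.cache.get(path)           # cache hit: no backbone traffic
        if data is not None:
            return data
        for storage in self.storages:         # cache miss: query each endpoint
            data = storage.read(path)
            if data is not None:
                self.cache.put(path, data)    # populate the volatile pool
                return data
        return None                           # not found in any storage


# Application layer: reads through the federated namespace.
fed = Federation(VolatileCache(), [
    Storage("site-a", {"/belle2/raw/evt1": b"event-1"}),
    Storage("site-b", {"/atlas/aod/f2": b"aod-2"}),
])
first = fed.read("/belle2/raw/evt1")   # miss: fetched from site-a, then cached
second = fed.read("/belle2/raw/evt1")  # hit: served from the volatile pool
```

In this toy model a second read of the same file is served from the cache without touching any storage endpoint, which is the behaviour the cache layer is meant to provide for geographically distant resources.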