Abstract
This paper examines the efficiency of various caching strategies for reducing application latency. A test application was developed to measure latency under various conditions using logging and profiling tools. The test scenarios simulated high traffic loads, large data sets, and frequent access patterns. The simulation was implemented in Java, and t-tests and ANOVA were conducted to assess the statistical significance of the results. The findings showed that in-memory caching achieved the largest latency reduction, improving response time by up to 62.6% compared to non-cached scenarios. File-based caching decreased request-processing latency by about 36.6%, while database caching provided an improvement of 55.1%. These results underscore the substantial benefits of applying appropriate caching mechanisms. In-memory caching proved most efficient for applications requiring high-speed data access, whereas file-based and database caching were more useful in certain content-heavy scenarios. This study offers developers guidance on selecting and implementing suitable caching mechanisms to improve application responsiveness and efficiency. Recommendations for further improvement include hybrid caching strategies, additional optimization of eviction policies, and integration with edge computing for even better performance.
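As context for the in-memory approach compared above, the following is a minimal sketch of a TTL-based in-memory cache in Java. It is not the paper's actual test application; the class name, the `getOrCompute` method, and the TTL parameter are illustrative assumptions, shown only to make the technique concrete.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative in-memory cache with a simple time-to-live (TTL) policy.
// Names and parameters are hypothetical, not taken from the paper's test setup.
public class InMemoryCache<K, V> {
    private record Entry<V>(V value, long expiresAtMillis) {}

    private final ConcurrentHashMap<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public InMemoryCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Returns the cached value, or recomputes and caches it on a miss or expiry.
    public V getOrCompute(K key, Function<K, V> loader) {
        Entry<V> entry = store.get(key);
        if (entry == null || System.currentTimeMillis() >= entry.expiresAtMillis()) {
            V value = loader.apply(key);  // the expensive fetch, e.g. a database read
            store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
            return value;
        }
        return entry.value();  // cache hit: skip the expensive fetch entirely
    }
}
```

Latency comparisons like those reported in the abstract can then be made by timing `loader.apply(key)` directly (the non-cached scenario) against repeated `getOrCompute` calls on warm keys.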
Publisher
Libertatem Media Private Limited