Learn JavaScript by Implementing Memoization
Deep dive into the nuances of memoization and memory management

I really wanted to create something interactive, so I highly recommend switching between reading this article and codesandbox.io to complete the tasks offered along the way. For that reason, the article is best read on a PC, not a phone.
The article is designed to be accessible and useful for everyone, especially beginners. However, some topics may require significant effort from developers with little experience. I have shortened the theoretical part and lyrical digressions to make the material more practical; I trust that those who need the missing theory will google it for a more detailed understanding.
I hope that after reading, you will have a confident and comprehensive understanding of memoization, as well as an understanding of such topics as closure, higher-order functions, pure functions, currying, TDD, recursion, and property-based testing. And most importantly, an understanding of how and where to apply it.
Table of contents
- Why Reinvent the Wheel
- Memoization
- Protocol of Interaction With This Article's Practice Part
- Practice
- Function without arguments
- What about undefined
- Let’s add an argument
- Argument comparison policy
- API Requests
- More arguments
- Cache lifetime
- Max size
- Manual invalidation
- Context
- Transform arguments
- Example of CRUD service with memoization
- About Test Coverage
- Restrictions
- Memoization of Recursive Functions
- NPM Package
Why Reinvent the Wheel
The phrase “to reinvent the wheel” is usually used in a negative context, but that’s exactly what we are going to do here because it’s an effective method of learning a topic. By trying to implement something on our own, we will better understand the tools that usually do part of the work for us. After that, we can extract more value from familiar tools.
For example, knowledge of the internal workings of certain systems will allow you to debug a problem faster because you will know what could have gone wrong. After attempting to implement something yourself, some things become easier and stop being magic. And some libraries that seemed simple turn out to be so steeped in nuances that you become grateful to the package creators for their work.
Memoization
Memoization is a specific type of caching. Any storage of data for future reuse can be considered as caching, whereas memoization specifically refers to caching the result of a function call. Overall, it is a way to optimize the code of our programs.
As mentioned earlier, plenty of ready-made implementations are available on the internet. Here is a partial list you can use: the lodash memoize function, p-memoize, async, memoizee (my favorite), moize, fast-memoize, and the ES7 @memoize decorator from decko.
Even if you are not explicitly using memoization, it is likely that some of the libraries you use regularly do. For example, in the world of React, it could be libraries like reselect, react-query, apollo-client, the useMemo hook, higher-order component memo, and much more.
Protocol of Interaction With This Article's Practice Part
There is a starter template in CodeSandbox where you should write your own implementation of a memoization function. It contains two files: withMemo.js and withMemo.test.js.
Advancing through the article involves several iterations of the following steps:
- I suggest implementing or improving something in our function.
- Then, I provide tests that need to be added to withMemo.test.js to check the new functionality.
- You write some code in withMemo.js so that the new tests pass successfully and the old ones do not fail.
- I suggest my own solutions and, if necessary, add explanations.
This approach to writing code (doing several iterations of writing tests and then updating the code to make the tests pass successfully) is called Test Driven Development (TDD).
Practice
Function without arguments
1. Let's try to memoize functions of this kind:
// Reading into the function body isn't necessary
function generateDigitsOfPi() {
const DIGITS_COUNT = 10_001;
let q = 1n;
let r = 180n;
let t = 60n;
let i = 2n;
let str = '';
for (let j = 0; j < DIGITS_COUNT; j++) {
let digit = ((i * 27n - 12n) * q + r * 5n) / (t * 5n);
str += digit;
let u = i * 3n;
u = (u + 1n) * 3n * (u + 2n);
r = u * 10n * (q * (i * 5n - 2n) + r - t * digit);
q *= 10n * i * (i++ * 2n - 1n);
t *= u;
}
const arr = str.split('');
arr.splice(1, 0, '.');
return arr.join('');
}
This function will return the value of pi as a string with a precision of 10,000 digits after the decimal point. Such a function is a good candidate for memoization because its execution requires significant processor resources. And after memoization, the calculations will not be performed on each function call, only on the first one.
2. Add these tests to the withMemo.test.js file.
it(
"should call original function only ones and " +
"always return original result (no args)",
() => {
const result = Math.random();
const fn = jest.fn(() => result);
const memoizedFn = withMemo(fn);
expect(fn).toHaveBeenCalledTimes(0);
expect(memoizedFn()).toBe(result);
expect(fn).toHaveBeenCalledTimes(1);
expect(memoizedFn()).toBe(result);
expect(fn).toHaveBeenCalledTimes(1);
}
);
Did you notice that I assigned a random value to the result variable instead of something specific? This is one of the practices of property-based testing. It is explained quite well in this video. Such tests are a bit harder to write, but they cover more cases. Special libraries will run the same test several times, generating different input data and always checking corner values (such as empty strings, empty arrays, or undefined) that you might forget about.
3. The task requires writing the implementation of the function withMemo in the file withMemo.js to make the tests pass successfully.
To run the tests, go to the Tests tab.

Sometimes, CodeSandbox does not rerun tests after making changes. Just refresh the page to fix it.
Here’s a possible solution:
// the function takes in another function `originFn` as an argument.
export const withMemo = (originFn) => {
let cache;
// the function returns a new function
// that is a memoized version of the original function.
return () => {
// if there is no cache, we populate it using the original function.
if (cache === undefined) {
cache = originFn();
}
// at the end it always return the value from the cache.
return cache;
};
};
As you can see, there is nothing complicated. Let’s pay attention to some concepts.
Firstly, withMemo is a Higher-Order Function. This means that the function either takes another function as an argument or creates and returns a new function. The withMemo function does both.
Secondly, this example involves a closure, which is represented by the variable cache. The point is that the variable isn't deleted after the withMemo function is called. It remains because withMemo returns a new function that works with that variable internally (i.e., has a reference to it). However, the developer doesn't have a direct reference to that cache variable to work with it.
The concept of closure is closely related to the scope of variables. Scopes are nested within each other and form a tree-like structure. Sometimes it’s good to understand how it works.
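To make the closure idea concrete, here is a minimal, self-contained sketch (the createCounter name is purely illustrative):

```javascript
// `count` lives in the scope of createCounter. After createCounter returns,
// the variable is not destroyed, because the returned function still holds
// a reference to it. Outside code has no direct access to `count`.
const createCounter = () => {
  let count = 0;
  return () => {
    count += 1;
    return count;
  };
};

const counterA = createCounter();
const counterB = createCounter();
counterA(); // 1
counterA(); // 2
counterB(); // 1, because counterB closes over its own `count`
```

Each call to createCounter produces a fresh scope, so the two counters do not interfere, just as each call to withMemo produces its own private cache.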
What about undefined
1. In the previous step's implementation, it is assumed that the original function should not return undefined. If this value is in the cache, the cache will still be considered empty. This approach is acceptable and will not cause problems in most cases.
2. However, let's fix this corner case by adding these tests to the withMemo.test.js file, as shown below:
it("'undefined' is valid value", () => {
const result = undefined;
const fn = jest.fn(() => result);
const memoizedFn = withMemo(fn);
expect(fn).toHaveBeenCalledTimes(0);
expect(memoizedFn()).toBe(result);
expect(fn).toHaveBeenCalledTimes(1);
expect(memoizedFn()).toBe(result);
expect(fn).toHaveBeenCalledTimes(1);
});
By the way, if we had used the fast-check library to write tests in the previous step, it would have pointed out that withMemo wasn't working correctly in this corner case. Don't always rely solely on your own attention when there are plenty of useful tools.
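To see what such a tool would catch, here is a hand-rolled sketch of the idea: run the same check over a set of inputs that always includes corner values. The naiveWithMemo helper reproduces the first solution from the previous step, and the sample values stand in for what a generator like fast-check would produce.

```javascript
// The first withMemo version, which treats an `undefined` cache as empty.
const naiveWithMemo = (originFn) => {
  let cache;
  return () => {
    if (cache === undefined) cache = originFn();
    return cache;
  };
};

// Generated inputs always include corner values such as undefined.
const sampleResults = [undefined, null, 0, "", 42, Math.random()];

const findFailingInput = () => {
  for (const result of sampleResults) {
    let calls = 0;
    const memoized = naiveWithMemo(() => { calls += 1; return result; });
    memoized();
    memoized();
    if (calls !== 1) return result; // memoization silently broke
  }
  return null;
};

findFailingInput(); // → undefined is reported as the failing input
```

A real property-based library goes further and shrinks failing inputs to the smallest reproducing case, but the core loop is the same.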
3. Update the body of the function withMemo in the file withMemo.js so that the new tests pass successfully and the old ones don't fail.
Here’s a possible solution:
export const withMemo = (originFn) => {
const cache = {
value: undefined,
isCached: false, // new boolean variable
};
return () => {
if (!cache.isCached) {
cache.value = originFn();
cache.isCached = true;
}
return cache.value;
};
};
Click this link to see the diff.
I introduced a separate boolean variable to indicate whether the value is explicitly cached.
Let’s add an argument
1. Functions usually do take arguments, after all. For example, the generateDigitsOfPi
function can take the digitsCount
value as an argument instead of hardcoding the number of digits after the decimal point into the DIGITS_COUNT
variable.
function generateDigitsOfPi(digitsCount) {
let q = 1n;
let r = 180n;
let t = 60n;
let i = 2n;
let str = '';
for (let j = 0; j <= digitsCount; j++) {
let digit = ((i * 27n - 12n) * q + r * 5n) / (t * 5n);
str += digit;
let u = i * 3n;
u = (u + 1n) * 3n * (u + 2n);
r = u * 10n * (q * (i * 5n - 2n) + r - t * digit);
q *= 10n * i * (i++ * 2n - 1n);
t *= u;
}
const arr = str.split('');
arr.splice(1, 0, '.');
return arr.join('');
}
2. Add these tests to the withMemo.test.js file.
it("should cache depending on argument", () => {
const factor = Math.random();
const fn = jest.fn((arg) => arg * factor);
const memoizedFn = withMemo(fn);
expect(fn).toHaveBeenCalledTimes(0);
expect(memoizedFn(1)).toBe(factor);
expect(fn).toHaveBeenCalledTimes(1);
expect(memoizedFn(1)).toBe(factor);
expect(fn).toHaveBeenCalledTimes(1);
expect(memoizedFn(2)).toBe(2 * factor);
expect(fn).toHaveBeenCalledTimes(2);
expect(memoizedFn(2)).toBe(2 * factor);
expect(fn).toHaveBeenCalledTimes(2);
expect(memoizedFn(1)).toBe(factor);
expect(fn).toHaveBeenCalledTimes(2);
});
3. Update the body of the function withMemo in the file withMemo.js so that the tests pass successfully and the old ones don't fail.
Here’s a possible solution:
export const withMemo = (originFn) => {
// the `cache` variable will now be an object, where the keys
// are the argument values and the object values are the results
// of calling the original function with the corresponding argument.
const cache = {};
return (arg) => {
// this notation may be less familiar. Essentially, it is similar to
// `if(!cache[arg])` but will be true if `null`, `undefined`,
// `0` or `''` are stored in the cache.
if (!(arg in cache)) {
cache[arg] = originFn(arg);
}
return cache[arg];
};
};
Click this link to see the diff.
I hope there’s nothing difficult for now. Right?
Argument comparison policy
1. Are the following two objects equivalent? Should such arguments be considered identical?
const obj1 = {
foo: 'bar',
}
const obj2 = {
foo: 'bar',
}
It depends on the circumstances. So, let's add a new option getKey to our withMemo function to make it more versatile.
2. Add these tests to the withMemo.test.js file.
it("should cache depending on arg", () => {
const num = Math.random();
// create 2 fucntions
const fn1 = jest.fn((arg) => ({ ...arg, num }));
const fn2 = jest.fn((arg) => ({ ...arg, num }));
// in the first function, we will compare arguments by reference.
const memoizedFn1 = withMemo(fn1, {
// The function should now optionally accept a function getKey.
getKey: (arg) => arg,
});
// in the second function, we will compare arguments by their content.
const memoizedFn2 = withMemo(fn2, {
getKey: (arg) => JSON.stringify(arg),
});
const argA1 = { a: "a" };
const argA2 = { a: "a" };
memoizedFn1(argA1);
expect(fn1).toHaveBeenCalledTimes(1);
memoizedFn1(argA2);
expect(fn1).toHaveBeenCalledTimes(2);
memoizedFn2(argA1);
expect(fn2).toHaveBeenCalledTimes(1);
memoizedFn2(argA2);
expect(fn2).toHaveBeenCalledTimes(1);
});
3. Update the body of the function withMemo in the file withMemo.js so that the tests pass successfully and the old ones don't fail.
Here’s a possible solution:
export const withMemo = (originFn, { getKey = (arg) => arg } = {}) => {
// Now the `cache` variable isn't just an object but an instance of a Map.
// This is done so that keys in the `cache` can be not only strings.
// Before, for all objects the key was the string "[object Object]".
const cache = new Map();
return (arg) => {
const cacheKey = getKey(arg); // Before, the key was the argument itself.
// Now, it's whatever the function getKey returns.
if (!cache.has(cacheKey)) {
cache.set(cacheKey, originFn(arg));
}
return cache.get(cacheKey);
};
};
Click this link to see the diff.
Different functions can be useful to pass as getKey in different situations:
- (arg) => JSON.stringify(arg)
- (arg) => stableStringify(arg), where stableStringify is a function from the fast-json-stable-stringify package. The idea is to serialize the objects in the example below in the same way, regardless of the order of field declarations.
import stableStringify from 'fast-json-stable-stringify'
const obj1 = {a: 'a', b: 'b'};
const obj2 = {b: 'b', a: 'a'};
JSON.stringify(obj1) === JSON.stringify(obj2) // false
stableStringify(obj1) === stableStringify(obj2) // true
- (arg) => hash(arg), where hash is any hash function of your choice. This can be useful, for example, if you want to store keys as strings using JSON.stringify, but the arguments are objects so large that they would become overly long strings and take up too much memory. A hash function reduces them to a fixed length.
- (arg) => arg. I have chosen this option as the default for our function because it is suitable for most cases, and it is the fastest way to compare two objects: simply by reference.
I would like to draw your attention to how I added a second argument to withMemo. Let's take a closer look at two ways it could have been done:
// 1)
const withMemo = (originFn, getKey = (arg) => arg)
// 2)
const withMemo = (originFn, { getKey = (arg) => arg } = {})
The second option will be more convenient for us in the future when adding new optional arguments to the function. It will be easier to skip optional arguments. In addition, the arguments are named, which is more convenient when calling the function.
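A small illustrative comparison of the two signatures (the functions below just echo their options back so the difference at the call site is visible; they are not real implementations):

```javascript
// A throwaway stand-in for the original function.
const fn = () => {};

// Positional optional arguments: to set a later option you must also spell
// out every earlier one, and the call site doesn't say what 1000 means.
const withMemoPositional = (originFn, getKey = (arg) => arg, ttl = null) => {
  return { originFn, getKey, ttl }; // echoing the options just to inspect them
};
withMemoPositional(fn, (arg) => arg, 1000); // must repeat the default getKey

// Named options in a destructured object: skip what you don't need,
// and every value is labeled at the call site.
const withMemoNamed = (originFn, { getKey = (arg) => arg, ttl = null } = {}) => {
  return { originFn, getKey, ttl };
};
withMemoNamed(fn, { ttl: 1000 }); // getKey silently keeps its default
```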
Instead of Map, we could use WeakMap. In that case, memory would be managed more efficiently. However, we wouldn't be able to store primitives (e.g., strings) as cache keys, so this is not universal. A good solution would be to allow the user to choose the structure in which the cache will be stored. The most important thing is that the structure implements the necessary interface:
interface Map<K, V> {
clear(): void; // not used yet
delete(key: K): boolean; // not used yet
get(key: K): V | undefined;
has(key: K): boolean;
set(key: K, value: V): this;
readonly size: number; // not used yet
}
It may look like this:
export const withMemo = (
originFn,
{
getKey = (arg) => arg,
cache = new Map(),
} = {}
) => {
return (arg) => {
const cacheKey = getKey(arg);
if (!cache.has(cacheKey)) {
cache.set(cacheKey, originFn(arg));
}
return cache.get(cacheKey);
};
};
Click this link to see the diff.
API Requests
1. As you may have sensed, memoization works best for pure functions. Calls to server methods are hardly pure, but API calls are often memoized: it saves network traffic and slightly reduces the load on the server. However, a server request can fail for reasons beyond our control, and we would like to handle the failures, for example, by simply not writing anything to the cache when an error happens.
2. Add these tests to the withMemo.test.js file.
it("should clear cache on error", async () => {
const result = Math.random();
let callIndex = 0;
const fn = jest.fn(async () => {
callIndex++;
if (callIndex === 1) return Promise.reject("Some error");
return Promise.resolve(result);
});
const memoizedFn = withMemo(fn, {
cacheRejectedPromise: false
});
await expect(memoizedFn()).rejects.toEqual("Some error");
expect(fn).toHaveBeenCalledTimes(1);
expect(await memoizedFn()).toBe(result);
expect(fn).toHaveBeenCalledTimes(2);
expect(await memoizedFn()).toBe(result);
expect(fn).toHaveBeenCalledTimes(2);
});
it("should not clear cache on error", async () => {
const error = Math.random();
const fn = jest.fn(async () => Promise.reject(error));
const memoizedFn = withMemo(fn, {
cacheRejectedPromise: true
});
await expect(memoizedFn()).rejects.toEqual(error);
expect(fn).toHaveBeenCalledTimes(1);
await expect(memoizedFn()).rejects.toEqual(error);
expect(fn).toHaveBeenCalledTimes(1);
});
3. Update the body of the function withMemo in the file withMemo.js so that the tests pass successfully and the old ones don't fail.
Here’s a possible solution:
export const withMemo = (
originFn,
{
getKey = (arg) => arg,
cache = new Map(),
cacheRejectedPromise = false, // new option
} = {}
) => {
return (arg) => {
const cacheKey = getKey(arg);
if (!cache.has(cacheKey)) {
cache.set(cacheKey, originFn(arg));
}
const cachedValue = cache.get(cacheKey);
// Remove the value from cache if the value is a rejected promise.
if (!cacheRejectedPromise) {
Promise.resolve(cachedValue).catch(() => {
cache.delete(cacheKey);
});
}
return cachedValue;
};
};
Click this link to see the diff.
Note that the cache will store not the result of the promise but the promise itself. You could have implemented something like this instead:
originFn(arg).then(result => cache.set(cacheKey, result));
// or
const result = await originFn(arg); // this makes the parent function always return a promise :(
cache.set(cacheKey, result);
If this is your case, then its main disadvantage is that if a second call occurs before the first call's promise is resolved, the cache will be useless. For example, if two parts of the application (header and sidebar) simultaneously request getCurrentUser, then two identical requests will be sent to the server. But if you store the promise itself in the cache, there will be only one request. Instead of creating a new promise, the second call will wait for the first call's promise to resolve.
I also want to explain the following notation:
Promise.resolve(cachedValue)
The result of that expression will always be a Promise. If cachedValue is already a Promise, nothing changes; otherwise, cachedValue is wrapped in a Promise. As a result, we can always work with this value as with a Promise and avoid adding unnecessary conditions.
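A quick self-contained demonstration of that behavior:

```javascript
// Promise.resolve returns its argument unchanged if it is already a native
// promise, and wraps any other value in a new, already-resolved promise.
const alreadyAPromise = Promise.resolve("cached value");
const wrapped = Promise.resolve(42);

Promise.resolve(alreadyAPromise) === alreadyAPromise; // true: the very same object
wrapped instanceof Promise; // true: the plain value 42 was wrapped
```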
I also want to note that HTTP requests can be cached even without such intervention. Smart people have already made it work well in most cases.
More arguments
1. Some people believe that a function should not have more than one argument, and they have good arguments for it. In reality, functions can have an arbitrary number of arguments. Let’s cover that case.
The easiest way to compare all arguments is to imagine we don't have many arguments but only one array argument. Then the task can be reduced to the previous one simply by serializing all arguments using JSON.stringify. Here's an example:
export const withMemo = (originFn) => {
const cache = new Map();
return (...args) => {
const cacheKey = JSON.stringify(args); // All arguments are serialized at once.
if (!cache.has(cacheKey)) {
cache.set(cacheKey, originFn(...args));
}
return cache.get(cacheKey);
};
};
The downside of that approach is that we can no longer choose the argument comparison strategy. And comparing by reference is the fastest and least memory-intensive option.
I want to suggest two ways of handling multiple arguments. The first idea is that the cache should be multi-level. Let me try to illustrate:
const sum = (...args) => args.reduce((acc, num) => acc + num, 0)
// at first, the cache is empty.
cache = {}
sum(4, 6) // 10
// the cache starts filling up
cache = {
4: {
6: 10, // This statement implies that if the first argument is 4
// and the second argument is 6, then the result will be 10.
}
}
sum(4, 8) // 12
cache = {
4: {
6: 10,
8: 12,
}
}
sum(2, 1) // 3
cache = {
2: {
1: 3,
},
4: {
6: 10,
8: 12,
}
}
sum(4) // 4
cache = {
2: {
1: 3,
},
4: {
6: 10,
8: 12,
'': 4,
}
}
The second idea is neater. First, we need to use currying. Currying is a transformation of functions such that they accept arguments not like f(a, b, c), but like f(a)(b)(c). Each function accepts only one argument and returns a new function that also accepts only one argument. And since our withMemo function already works well with one argument, it can be made recursive.
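A minimal sketch of currying on a concrete function (the names are illustrative):

```javascript
// A regular three-argument function...
const add = (a, b, c) => a + b + c;

// ...and its curried form: each call accepts a single argument and returns
// a function waiting for the next one, until all arguments are collected.
const curriedAdd = (a) => (b) => (c) => a + b + c;

add(1, 2, 3); // 6
curriedAdd(1)(2)(3); // 6
```

In the curried solution shown later, each cached sub-function plays the role of one of these intermediate functions: it has already "absorbed" one argument and waits for the rest.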
2. Add these tests to the withMemo.test.js file.
it("multiple arguments", () => {
const fn = jest.fn((...args) => args.reduce((acc, num) => acc + num, 0));
const memoizedFn = withMemo(fn);
expect(memoizedFn(4, 6)).toBe(10);
expect(fn).toHaveBeenCalledTimes(1);
expect(memoizedFn(4, 6)).toBe(10);
expect(fn).toHaveBeenCalledTimes(1);
expect(memoizedFn(4, 8)).toBe(12);
expect(fn).toHaveBeenCalledTimes(2);
expect(memoizedFn(4)).toBe(4);
expect(fn).toHaveBeenCalledTimes(3);
expect(memoizedFn(4, 6)).toBe(10);
expect(fn).toHaveBeenCalledTimes(3);
});
3. Update the body of the function withMemo in the file withMemo.js so that the tests pass successfully and the old ones don't fail.
You can use one of the suggested ideas or devise your own solution. I strongly recommend trying several options for maximum profit.
Here’s a possible solution:
export const withMemo = (
originFn,
{
getKey = (arg) => arg,
getCacheStore = () => new Map(), // small change
cacheRejectedPromise = false,
} = {}
) => {
const createSubCache = () => ({
subCaches: getCacheStore(),
result: null,
isCached: false
});
const cache = createSubCache();
return (...args) => {
let currentCache = cache;
for (const arg of args) {
const cacheKey = getKey(arg);
if (!currentCache.subCaches.has(cacheKey)) {
currentCache.subCaches.set(cacheKey, createSubCache());
}
currentCache = currentCache.subCaches.get(cacheKey);
}
if (!currentCache.isCached) {
currentCache.result = originFn(...args);
currentCache.isCached = true;
}
if (!cacheRejectedPromise) {
Promise.resolve(currentCache.result).catch(() => {
currentCache.isCached = false;
currentCache.result = null;
});
}
return currentCache.result;
};
};
Click this link to see the diff.
Solution with currying and recursion:
// don't destructure `options` here so we can pass it recursively later
export const withMemo = (originFn, options = {}) => {
const {
getKey = (arg) => arg,
getCacheStore = () => new Map(),
cacheRejectedPromise = false,
} = options;
const cache = {
subFunctions: getCacheStore(),
result: null,
isCached: false
};
return (...args) => {
// This is the recursion exit condition - there are no more arguments left.
if (!args.length) {
if (!cache.isCached) {
cache.result = originFn();
cache.isCached = true;
}
if (!cacheRejectedPromise) {
Promise.resolve(cache.result).catch(() => {
cache.result = null;
cache.isCached = false;
});
}
return cache.result;
}
const [arg, ...otherArgs] = args;
const cacheKey = getKey(arg);
if (!cache.subFunctions.has(cacheKey)) {
cache.subFunctions.set(
cacheKey,
// Recursion is here.
withMemo((...theArgs) => originFn(arg, ...theArgs), options)
);
}
const subFunction = cache.subFunctions.get(cacheKey);
return subFunction(...otherArgs);
};
};
If you still have a poor understanding of recursion but want to figure it out, I suggest you carefully look at the second solution and try to reproduce it. If you write a similar function without peeking, you can consider yourself successful. If you had to peek a bit, repeat the iteration.
Cache lifetime
1. We have already covered memoization quite well. I hope that you already have some ideas on how to apply it in your current project. But we still have a lot of room for improvement.
There are two hard problems in computer science: cache invalidation, naming things, and off-by-1 errors.
— Leon Bambrick
Since we have started memoizing impure functions such as API requests, we need to take care of cache invalidation, i.e., clearing it.
There is an article that describes different advanced caching and invalidation techniques. For now, we will focus on simpler things. But why do we need to invalidate the cache at all? One reason is that the data in the cache becomes outdated over time. Another reason is that the cache begins to occupy too much memory.
One of the simplest solutions to the first problem is to define the cache's time-to-live (TTL). Let's add a new option ttl to our function.
2. Add these tests to the withMemo.test.js file.
it("ttl", async () => {
const fn = jest.fn();
const memoizedFn = withMemo(fn, { ttl: 1000 }); // milliseconds
memoizedFn();
expect(fn).toHaveBeenCalledTimes(1);
await wait(500);
memoizedFn();
expect(fn).toHaveBeenCalledTimes(1);
await wait(500);
memoizedFn();
expect(fn).toHaveBeenCalledTimes(2);
});
3. Update the body of the function withMemo in the file withMemo.js so that the tests pass successfully and the old ones don't fail.
Here’s a possible solution:
export const withMemo = (
originFn,
{
getKey = (arg) => arg,
getCacheStore = () => new Map(),
cacheRejectedPromise = false,
ttl
} = {}
) => {
const createSubCache = () => ({
subCaches: getCacheStore(),
result: null,
isCached: false,
invalidationTimeoutId: null,
});
const cache = createSubCache();
const invalidateByCache = (theCache) => {
theCache.isCached = false;
theCache.result = null;
clearTimeout(theCache.invalidationTimeoutId);
};
return (...args) => {
let currentCache = cache;
for (const arg of args) {
const cacheKey = getKey(arg);
if (!currentCache.subCaches.has(cacheKey)) {
currentCache.subCaches.set(cacheKey, createSubCache());
}
currentCache = currentCache.subCaches.get(cacheKey);
}
if (!currentCache.isCached) {
currentCache.result = originFn(...args);
currentCache.isCached = true;
// set timeout to clear cache
if (ttl != null) {
currentCache.invalidationTimeoutId = setTimeout(
() => invalidateByCache(currentCache),
ttl
);
}
}
if (!cacheRejectedPromise) {
Promise.resolve(currentCache.result).catch(() =>
invalidateByCache(currentCache)
);
}
return currentCache.result;
};
};
Click this link to see the diff.
Now cache invalidation happens in two places (by timeout and by rejected promise). That's why this logic is extracted into a separate function called invalidateByCache: the DRY principle.
In this example, I did not implement cache invalidation propagation up the tree for simplicity. But it will be useful to do so. Here’s an example to illustrate:
// Suppose we have a memoized function for summing arguments.
memoizedSum(4, 6, 5) // 15
memoizedSum(4, 6, 10) // 20
// After these calls, the cache will look like this:
cache == {
4: {
6: {
5: 15,
10: 20,
}
}
}
// After invalidating the cache for the arguments (4, 6, 5),
// the overall cache will look like this:
cache == {
4: {
6: {
10: 20,
}
}
}
// After invalidating the cache for the arguments (4, 6, 10),
// the overall cache will look like this:
cache == {
4: {
6: {
}
}
}
// It would be more correct to do it like this:
cache == {}
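One possible way to implement that propagation, sketched on the plain-object cache shape used in the illustration above (the actual solutions in this article use Map-based sub-caches, so the details would differ):

```javascript
// Walks the path of cache keys, deletes the leaf entry, then removes every
// ancestor sub-cache that became empty, so invalidating (4, 6, 10) in the
// example above leaves cache == {} instead of nested empty objects.
const invalidatePath = (cache, keys) => {
  if (keys.length === 0) return;
  const [key, ...rest] = keys;
  if (!(key in cache)) return;
  if (rest.length === 0) {
    delete cache[key];
  } else {
    invalidatePath(cache[key], rest);
    // propagate upward: drop the sub-cache if nothing is left inside it
    if (Object.keys(cache[key]).length === 0) delete cache[key];
  }
};

const cache = { 4: { 6: { 5: 15, 10: 20 } } };
invalidatePath(cache, [4, 6, 5]);  // cache is now { 4: { 6: { 10: 20 } } }
invalidatePath(cache, [4, 6, 10]); // cache is now {}
```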
Max size
1. Now, let's try to limit the maximum memory consumption of our function. However, we will limit the number of entries in the cache, not the amount of memory consumed in bytes (calculating memory consumption is not as simple as it may seem). But these two indicators usually correlate directly, so this will be enough for us. Let's create a new option maxSize for our function.
After reaching the maximum number of entries in the cache, we must remove one of the previous entries when adding a new entry. You need to choose a cache replacement policy. For example, the least recently used, the least frequently used, or just the one added first. There are many strategies, and different functions and situations require different ones.
Even the MRU (Most Recently Used) policy sometimes turns out to be the best choice.
When a file is being repeatedly scanned in a [Looping Sequential] reference pattern, MRU is the best replacement algorithm.
— Hong-Tai Chou and David J. DeWitt, PDF
I suggest implementing the LRU (Least Recently Used) cache replacement policy but also allowing the end user to provide their own policy. For any implementation (except random eviction), we need to collect some statistics on function calls. For the general case, an array of hit timestamps will suffice. This means our memoization function will consume more memory to store these statistics, and additional computational power will be needed to choose the evicted entries based on the selected policy.
So, if you are implementing your memoization function for a real project, do not add this feature unnecessarily. Otherwise, your implementation will definitely not be called “the fastest memoization function.”
2. Add these tests to the withMemo.test.js file.
it("LRU", async () => {
const fn = jest.fn((...args) => args.join("-"));
const memoizedFn = withMemo(fn, {
cacheReplacementPolicy: {
maxSize: 3
}
});
memoizedFn(1, 2);
await wait(0);
memoizedFn(1, 3);
await wait(0);
memoizedFn(2, 3);
await wait(0);
memoizedFn(1, 2);
expect(fn).toHaveBeenCalledTimes(3);
await wait(0);
memoizedFn(3, 3); // replace cache for (1, 3)
expect(fn).toHaveBeenCalledTimes(4);
await wait(0);
memoizedFn(1, 2);
expect(fn).toHaveBeenCalledTimes(4);
await wait(0);
memoizedFn(2, 3);
expect(fn).toHaveBeenCalledTimes(4);
await wait(0);
memoizedFn(3, 3);
expect(fn).toHaveBeenCalledTimes(4);
await wait(0);
memoizedFn(1, 3); // replace cache for (1, 2)
expect(fn).toHaveBeenCalledTimes(5);
});
Before each function call, I added await wait(0); to make the hit timestamps different. Otherwise, all hits would have the same time, and we wouldn't be able to select the least recently used cache. If this doesn't suit you, you can, for example, store an array of objects like {index, timestamp} in hits. The index will definitely not be repeated.
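A sketch of that {index, timestamp} idea: a module-level counter provides a strictly increasing tiebreaker even when two hits land in the same millisecond. The helper names here are my own, not part of the article's solution.

```javascript
let hitIndex = 0;

// Each hit records when it happened plus a monotonically increasing index,
// so two hits within the same millisecond can still be ordered.
const recordHit = (hits) => {
  hits.push({ index: hitIndex++, timestamp: Date.now() });
};

// Compare by timestamp first, falling back to the index as a tiebreaker.
const compareHits = (a, b) =>
  a.timestamp - b.timestamp || a.index - b.index;

const hits = [];
recordHit(hits);
recordHit(hits); // possibly the same timestamp, but a larger index
hits.slice().sort(compareHits); // the order is always deterministic
```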
3. Update the body of the function withMemo in the file withMemo.js so that the tests pass successfully and the old ones don't fail.
Here’s a possible solution:
const cacheReplacementStrategies = {
// Implementation of Least Recently Used cache replacement policy
lru: (itemsSet) => {
const asArray = Array.from(itemsSet)
.sort((a, b) => a.hits.at(-1) - b.hits.at(-1));
// I've decided that the function should return an array of caches
// for eviction. Sometimes, for optimization purposes, it's better
// to free up a lot of space at once (for example, 10%) than to call
// this function too often.
return asArray.slice(0, 1);
// If we want to clear 10% of cache records
// return asArray.slice(0, Math.round(asArray.length * 0.1));
}
};
export const withMemo = (
originFn,
{
getKey = (arg) => arg,
getCacheStore = () => new Map(),
cacheRejectedPromise = false,
ttl,
// new option. it can be an object with properties `maxSize` and `strategy`
cacheReplacementPolicy = null
} = {}
) => {
if (cacheReplacementPolicy && !cacheReplacementPolicy.maxSize) {
throw new Error("maxSize is mandatory for cacheReplacementPolicy");
}
const createSubCache = () => ({
subCaches: getCacheStore(),
result: null,
isCached: false,
invalidationTimeoutId: null,
hits: [], // new property of cache
});
const rootCache = createSubCache();
// We need a flat structure of all caches (not a tree) to be able pass all
// cache entries to the LRU function and also quickly obtain the cache size.
// allCacheRecords could be an array instead of a set.
// However, removing an element from the middle of an array is a slow
// operation because it requires shifting all subsequent elements by one.
const allCacheRecords = new Set();
const invalidateByCache = (theCache) => {
clearTimeout(theCache.invalidationTimeoutId);
theCache.isCached = false;
theCache.result = null;
theCache.invalidationTimeoutId = null;
// need 2 more actions on cache invalidation
theCache.hits = [];
allCacheRecords.delete(theCache);
};
return (...args) => {
let currentCache = rootCache;
for (const arg of args) {
const cacheKey = getKey(arg);
if (!currentCache.subCaches.has(cacheKey)) {
currentCache.subCaches.set(cacheKey, createSubCache());
}
currentCache = currentCache.subCaches.get(cacheKey);
}
if (!currentCache.isCached) {
// if the cache size is exceeded and the cacheReplacementPolicy
// is specified, then invalidate part of the cache
if (cacheReplacementPolicy) {
const {
maxSize,
strategy = cacheReplacementStrategies.lru
} = cacheReplacementPolicy;
if (allCacheRecords.size >= maxSize) {
const cachesToReplace = strategy(allCacheRecords);
cachesToReplace.forEach(invalidateByCache);
}
}
currentCache.result = originFn(...args);
currentCache.isCached = true;
if (ttl != null) {
currentCache.invalidationTimeoutId = setTimeout(
() => invalidateByCache(currentCache),
ttl
);
}
// add new record into allCacheRecords
allCacheRecords.add(currentCache);
}
// add a hit
currentCache.hits.push(+new Date());
if (!cacheRejectedPromise) {
Promise.resolve(currentCache.result).catch(() =>
invalidateByCache(currentCache)
);
}
return currentCache.result;
};
};
Click this link to see the diff.
Here I want to draw attention to the fact that the actual size of the cache may be smaller than allCacheRecords.size. For example, if you pass a getCacheStore that returns a WeakMap, some cache keys may be garbage-collected without going through our invalidateByCache. Therefore, before evicting any values from the cache, you may want to traverse the tree recursively and measure the actual cache size.
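To make that concrete, here is a minimal sketch of such a recursive measurement. It assumes the cache-node shape from our implementation and a hand-built tree; note that it can only descend into Map-backed stores, because a WeakMap has no iteration protocol, which is exactly why allCacheRecords.size may overestimate the real size:

```javascript
// Count how many nodes in the cache tree actually hold a cached result.
const measureCacheSize = (cacheNode) => {
  let size = cacheNode.isCached ? 1 : 0;
  // WeakMap is not iterable, so only Map-backed stores can be traversed
  if (cacheNode.subCaches instanceof Map) {
    for (const child of cacheNode.subCaches.values()) {
      size += measureCacheSize(child);
    }
  }
  return size;
};

// Tiny hand-built tree: root -> middle -> leaf, only the leaf is cached
const leaf = { subCaches: new Map(), isCached: true, result: 42 };
const middle = { subCaches: new Map([[1, leaf]]), isCached: false, result: null };
const root = { subCaches: new Map([["a", middle]]), isCached: false, result: null };

console.log(measureCacheSize(root)); // 1
```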
If you are going to do something similar, I would recommend making the strategy property mandatory instead of using LRU by default. That way, users will make a more conscious choice. However, don't make your colleagues write their own strategies: provide a set of strategies they can import and pass to the memoization function.
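Such a catalog could look like the sketch below. Each strategy receives the set of cache records (with the hits timestamp array from our implementation) and returns an array of records to evict; the names and shapes here are illustrative, not a fixed API:

```javascript
const strategies = {
  // Least Recently Used: evict the record whose most recent hit is oldest
  lru: (records) =>
    Array.from(records)
      .sort((a, b) => a.hits.at(-1) - b.hits.at(-1))
      .slice(0, 1),
  // Least Frequently Used: evict the record with the fewest hits overall
  lfu: (records) =>
    Array.from(records)
      .sort((a, b) => a.hits.length - b.hits.length)
      .slice(0, 1),
};

// recA was hit 3 times, most recently at t=9; recB twice, most recently at t=3
const recA = { hits: [1, 5, 9] };
const recB = { hits: [2, 3] };

console.log(strategies.lru(new Set([recA, recB]))[0] === recB); // true
console.log(strategies.lfu(new Set([recA, recB]))[0] === recB); // true
```

A user would then pass one of these as the strategy property of cacheReplacementPolicy instead of writing their own.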
Manual invalidation
1. We have taught our function to clear the cache based on several different criteria. But that may not be enough. Sometimes we want to imperatively invalidate the cache for certain arguments or even clear the entire cache (for example, when the user logs out).
2. Add these tests to the withMemo.test.js file.
it("invalidateCache", () => {
const fn = jest.fn();
const memoizedFn = withMemo(fn);
memoizedFn(1, 2);
memoizedFn(1, 3);
memoizedFn.invalidateCache();
expect(fn).toHaveBeenCalledTimes(2);
memoizedFn(1, 3);
expect(fn).toHaveBeenCalledTimes(3);
memoizedFn(1, 2);
expect(fn).toHaveBeenCalledTimes(4);
});
it("invalidateCacheByArgs", () => {
const fn = jest.fn();
const memoizedFn = withMemo(fn);
memoizedFn(1, 2);
memoizedFn(1, 3);
memoizedFn.invalidateCacheByArgs(1, 2);
expect(fn).toHaveBeenCalledTimes(2);
memoizedFn(1, 3);
expect(fn).toHaveBeenCalledTimes(2);
memoizedFn(1, 2);
expect(fn).toHaveBeenCalledTimes(3);
});
I want to remind you that functions in JS are objects. That means that, as with any object, you can assign a value by key to them: obj.key = value. Therefore, memoizedFn.invalidateCacheByArgs(1, 2) is a completely valid construction.
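A quick illustration of this, with an arbitrary toy function (the property names are made up for the example):

```javascript
// A function is an object: it can carry data and even other functions
const greet = (name) => `Hello, ${name}!`;

greet.lang = "en"; // attach a plain value, just like on an object literal
greet.shout = (name) => greet(name).toUpperCase(); // attach a helper method

console.log(greet("Ann"));       // "Hello, Ann!"
console.log(greet.lang);         // "en"
console.log(greet.shout("Ann")); // "HELLO, ANN!"
```

This is precisely the mechanism that lets us hang invalidateCache and invalidateCacheByArgs directly on the memoized function.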
3. Update the body of the withMemo function in the file withMemo.js so that the new tests pass and the old ones don't fail.
Here’s a possible solution:
const cacheReplacementStrategies = {
lru: (itemsSet) => {
const asArray = Array.from(itemsSet)
.sort((a, b) => a.hits.at(-1) - b.hits.at(-1))
return asArray.slice(0, 1);
}
};
export const withMemo = (
originFn,
{
getKey = (arg) => arg,
getCacheStore = () => new Map(),
cacheRejectedPromise = false,
ttl,
cacheReplacementPolicy = null
} = {}
) => {
if (cacheReplacementPolicy && !cacheReplacementPolicy.maxSize) {
throw new Error("maxSize is mandatory for cacheReplacementPolicy");
}
const createSubCache = () => ({
subCaches: getCacheStore(),
result: null,
isCached: false,
invalidationTimeoutId: null,
hits: []
});
const rootCache = createSubCache();
const allCacheRecords = new Set();
const invalidateByCache = (theCache) => {
clearTimeout(theCache.invalidationTimeoutId);
theCache.isCached = false;
theCache.result = null;
theCache.invalidationTimeoutId = null;
theCache.hits = [];
allCacheRecords.delete(theCache);
};
// Now we are not returning the memoized version of the function
// immediately, but storing it in a variable to add new properties later.
const memoizedFn = (...args) => {
let currentCache = rootCache;
for (const arg of args) {
const cacheKey = getKey(arg);
if (!currentCache.subCaches.has(cacheKey)) {
currentCache.subCaches.set(cacheKey, createSubCache());
}
currentCache = currentCache.subCaches.get(cacheKey);
}
if (!currentCache.isCached) {
if (cacheReplacementPolicy) {
const {
maxSize,
strategy = cacheReplacementStrategies.lru
} = cacheReplacementPolicy;
if (allCacheRecords.size >= maxSize) {
const cachesToReplace = strategy(allCacheRecords);
cachesToReplace.forEach(invalidateByCache);
}
}
currentCache.result = originFn(...args);
currentCache.isCached = true;
if (ttl != null) {
currentCache.invalidationTimeoutId = setTimeout(
() => invalidateByCache(currentCache),
ttl
);
}
allCacheRecords.add(currentCache);
}
currentCache.hits.push(+new Date());
if (!cacheRejectedPromise) {
Promise.resolve(currentCache.result).catch(() =>
invalidateByCache(currentCache)
);
}
return currentCache.result;
};
// add two functions for imperative cache clearing.
memoizedFn.invalidateCache = () => {
if (ttl != null) {
allCacheRecords.forEach(
cacheData => clearTimeout(cacheData.invalidationTimeoutId)
)
}
allCacheRecords.clear();
Object.assign(rootCache, createSubCache());
};
memoizedFn.invalidateCacheByArgs = (...args) => {
let currentCache = rootCache;
for (const arg of args) {
const cacheKey = getKey(arg);
currentCache = currentCache.subCaches.get(cacheKey);
if (!currentCache) return;
}
invalidateByCache(currentCache);
};
// Return the memoized version of the original function.
return memoizedFn;
};
Click this link to see the diff.
Context
1. If you want to memoize a function that depends on the context (this), then most likely you are doing something wrong. A few things to note:
- There can be legitimate use cases.
- For our educational purposes, it is not so important.
So, let's make our function work properly depending on the context. To do that, let's add a getContextKey option by analogy with getKey. If this option is passed, we treat the context as one more function argument and reduce the task to the previous one.
2. Add these tests to the withMemo.test.js file.
it("context", () => {
class Point {
constructor(x, y) {
this.x = x;
this.y = y;
}
doubleX() {
return this.x * 2
}
}
const fn = jest.fn(Point.prototype.doubleX);
Point.prototype.doubleX = withMemo(fn, {
getContextKey: (context) => context.x
});
const p1 = new Point(3, 100);
const p2 = new Point(5, 50);
expect(p1.doubleX()).toBe(6);
expect(fn).toHaveBeenCalledTimes(1);
expect(p1.doubleX()).toBe(6);
expect(fn).toHaveBeenCalledTimes(1);
expect(p2.doubleX()).toBe(10);
expect(fn).toHaveBeenCalledTimes(2);
p2.x = 3;
expect(p2.doubleX()).toBe(6);
expect(fn).toHaveBeenCalledTimes(2);
p2.x = 33;
expect(p2.doubleX()).toBe(66);
expect(fn).toHaveBeenCalledTimes(3);
});
it("invalidate with context", () => {
const fn = jest.fn();
const memoizedFn = withMemo(fn, {
getContextKey: (context) => context
});
class Dummy {}
Dummy.prototype.memoizedFn = memoizedFn;
const obj1 = new Dummy();
const obj2 = new Dummy();
obj1.memoizedFn(2);
obj1.memoizedFn(3);
obj1.memoizedFn(3);
expect(fn).toHaveBeenCalledTimes(2);
obj2.memoizedFn(2);
obj2.memoizedFn(3);
expect(fn).toHaveBeenCalledTimes(4);
obj1.memoizedFn.invalidateCacheByContextAndArgs(obj1, 3);
obj2.memoizedFn(3);
expect(fn).toHaveBeenCalledTimes(4);
obj1.memoizedFn(2);
expect(fn).toHaveBeenCalledTimes(4);
obj1.memoizedFn(3);
expect(fn).toHaveBeenCalledTimes(5);
});
As you may have noticed, it is suggested to add a separate method, invalidateCacheByContextAndArgs. We can't reuse invalidateCacheByArgs here without changing its signature to obtain information about the context.
3. Update the body of the withMemo function in the file withMemo.js so that the new tests pass and the old ones don't fail.
Here’s a possible solution:
const cacheReplacementStrategies = {
lru: (itemsSet) => {
const asArray = Array.from(itemsSet)
.sort((a, b) => a.hits.at(-1) - b.hits.at(-1))
return asArray.slice(0, 1);
}
};
export const withMemo = (
originFn,
{
getKey = (arg) => arg,
getContextKey = null, // new option
getCacheStore = () => new Map(),
cacheRejectedPromise = false,
ttl,
cacheReplacementPolicy = null
} = {}
) => {
if (cacheReplacementPolicy && !cacheReplacementPolicy.maxSize) {
throw new Error("maxSize is mandatory for cacheReplacementPolicy");
}
const createSubCache = () => ({
subCaches: getCacheStore(),
result: null,
isCached: false,
invalidationTimeoutId: null,
hits: []
});
const rootCache = createSubCache();
const allCacheRecords = new Set();
const invalidateByCache = (theCache) => {
clearTimeout(theCache.invalidationTimeoutId);
theCache.isCached = false;
theCache.result = null;
theCache.invalidationTimeoutId = null;
theCache.hits = [];
allCacheRecords.delete(theCache);
};
// To access the context at the moment of the call, we need to
// replace the arrow function with a regular one.
const memoizedFn = function (...args) {
let currentCache = rootCache;
// if getContextKey passed, then root cache is for context
if (getContextKey) {
const cacheKey = getContextKey(this);
if (!currentCache.subCaches.has(cacheKey)) {
currentCache.subCaches.set(cacheKey, createSubCache());
}
currentCache = currentCache.subCaches.get(cacheKey);
}
for (const arg of args) {
const cacheKey = getKey(arg);
if (!currentCache.subCaches.has(cacheKey)) {
currentCache.subCaches.set(cacheKey, createSubCache());
}
currentCache = currentCache.subCaches.get(cacheKey);
}
if (!currentCache.isCached) {
if (cacheReplacementPolicy) {
const {
maxSize,
strategy = cacheReplacementStrategies.lru
} = cacheReplacementPolicy;
if (allCacheRecords.size >= maxSize) {
const cachesToReplace = strategy(allCacheRecords);
cachesToReplace.forEach(invalidateByCache);
}
}
currentCache.result = originFn.call(this, ...args);
currentCache.isCached = true;
if (ttl != null) {
currentCache.invalidationTimeoutId = setTimeout(
() => invalidateByCache(currentCache),
ttl
);
}
allCacheRecords.add(currentCache);
}
currentCache.hits.push(+new Date());
if (!cacheRejectedPromise) {
Promise.resolve(currentCache.result).catch(() =>
invalidateByCache(currentCache)
);
}
return currentCache.result;
};
memoizedFn.invalidateCache = () => {
if (ttl != null) {
allCacheRecords.forEach(
cacheData => clearTimeout(cacheData.invalidationTimeoutId)
)
}
allCacheRecords.clear();
Object.assign(rootCache, createSubCache());
};
memoizedFn.invalidateCacheByArgs = (...args) => {
if (getContextKey) throw new Error("Use invalidateCacheByContextAndArgs instead");
let currentCache = rootCache;
for (const arg of args) {
const cacheKey = getKey(arg);
currentCache = currentCache.subCaches.get(cacheKey);
if (!currentCache) return;
}
invalidateByCache(currentCache);
};
// new function
memoizedFn.invalidateCacheByContextAndArgs = (context, ...args) => {
if (!getContextKey) throw new Error("Use invalidateCacheByArgs instead");
let currentCache = rootCache;
const contextKey = getContextKey(context);
currentCache = currentCache.subCaches.get(contextKey);
if (!currentCache) return;
for (const arg of args) {
const cacheKey = getKey(arg);
currentCache = currentCache.subCaches.get(cacheKey);
if (!currentCache) return;
}
invalidateByCache(currentCache);
};
return memoizedFn;
};
Click this link to see the diff.
Calling invalidateCacheByContextAndArgs without passing getContextKey doesn't make sense. Therefore, when this happens, it is better to throw an error with a hint. The same goes for invalidateCacheByArgs when getContextKey is passed.
Transform arguments
1. The last update I suggest implementing is the transformArgs option. Let's say we have a commutative function (one whose result does not depend on the order of the arguments), for example, a sum function. If the value for sum(4, 9) is already cached, then for sum(9, 4) we would like to get the value from the cache. To achieve that, we can sort the arguments before working with them. The transformArgs option also allows skipping some arguments or transforming them in other ways.
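Before wiring this into withMemo, it's worth looking at the transformers themselves. The two below are illustrative examples: a numeric sort for commutative functions (note the comparator: the default Array.prototype.sort compares numbers as strings, so [10, 9].sort() stays [10, 9]) and a transformer that drops a trailing argument that shouldn't affect the cache key, such as a logger or an options bag:

```javascript
// Sort arguments so that sum(9, 4) and sum(4, 9) share one cache path.
// Copy with spread first, so the caller's array is never mutated.
const sortNumeric = (...args) => [...args].sort((a, b) => a - b);

// Exclude the last argument from the cache key (purely illustrative)
const dropLast = (...args) => args.slice(0, -1);

console.log(sortNumeric(9, 4));             // [4, 9]
console.log(sortNumeric(10, 9));            // [9, 10]
console.log(dropLast(1, 2, { log: true })); // [1, 2]
```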
2. Add these tests to the withMemo.test.js file.
it("transformArgs", () => {
const fn = jest.fn((a, b) => a + b);
const memoizedFn = withMemo(fn, {
transformArgs: (...args) => args.sort((a, b) => a - b) // numeric comparator: the default sort compares as strings
});
expect(memoizedFn(4, 9)).toBe(13);
expect(memoizedFn(9, 4)).toBe(13);
expect(fn).toHaveBeenCalledTimes(1);
expect(memoizedFn(4, 4)).toBe(8);
expect(fn).toHaveBeenCalledTimes(2);
});
3. Update the body of the withMemo function in the file withMemo.js so that the new tests pass and the old ones don't fail.
Here’s a possible solution:
const cacheReplacementStrategies = {
lru: (itemsSet) => {
const asArray = Array.from(itemsSet)
.sort((a, b) => a.hits.at(-1) - b.hits.at(-1))
return asArray.slice(0, 1);
}
};
export const withMemo = (
originFn,
{
getKey = (arg) => arg,
getContextKey = null,
getCacheStore = () => new Map(),
cacheRejectedPromise = false,
ttl,
cacheReplacementPolicy = null,
transformArgs = (...args) => args // new option
} = {}
) => {
if (cacheReplacementPolicy && !cacheReplacementPolicy.maxSize) {
throw new Error("maxSize is mandatory for cacheReplacementPolicy");
}
const createSubCache = () => ({
subCaches: getCacheStore(),
result: null,
isCached: false,
invalidationTimeoutId: null,
hits: []
});
const rootCache = createSubCache();
const allCacheRecords = new Set();
const invalidateByCache = (theCache) => {
clearTimeout(theCache.invalidationTimeoutId);
theCache.isCached = false;
theCache.result = null;
theCache.invalidationTimeoutId = null;
theCache.hits = [];
allCacheRecords.delete(theCache);
};
const memoizedFn = function (...args) {
let currentCache = rootCache;
if (getContextKey) {
const cacheKey = getContextKey(this);
if (!currentCache.subCaches.has(cacheKey)) {
currentCache.subCaches.set(cacheKey, createSubCache());
}
currentCache = currentCache.subCaches.get(cacheKey);
}
// use transformArgs here and in invalidateCacheByArgs|invalidateCacheByContextAndArgs
for (const arg of transformArgs(...args)) {
const cacheKey = getKey(arg);
if (!currentCache.subCaches.has(cacheKey)) {
currentCache.subCaches.set(cacheKey, createSubCache());
}
currentCache = currentCache.subCaches.get(cacheKey);
}
if (!currentCache.isCached) {
if (cacheReplacementPolicy) {
const {
maxSize,
strategy = cacheReplacementStrategies.lru
} = cacheReplacementPolicy;
if (allCacheRecords.size >= maxSize) {
const cachesToReplace = strategy(allCacheRecords);
cachesToReplace.forEach(invalidateByCache);
}
}
currentCache.result = originFn.call(this, ...args);
currentCache.isCached = true;
if (ttl != null) {
currentCache.invalidationTimeoutId = setTimeout(
() => invalidateByCache(currentCache),
ttl
);
}
allCacheRecords.add(currentCache);
}
currentCache.hits.push(+new Date());
if (!cacheRejectedPromise) {
Promise.resolve(currentCache.result).catch(() =>
invalidateByCache(currentCache)
);
}
return currentCache.result;
};
memoizedFn.invalidateCache = () => {
if (ttl != null) {
allCacheRecords.forEach(
cacheData => clearTimeout(cacheData.invalidationTimeoutId)
)
}
allCacheRecords.clear();
Object.assign(rootCache, createSubCache());
};
memoizedFn.invalidateCacheByArgs = (...args) => {
if (getContextKey)
throw new Error("Use invalidateCacheByContextAndArgs instead");
let currentCache = rootCache;
for (const arg of transformArgs(...args)) {
const cacheKey = getKey(arg);
currentCache = currentCache.subCaches.get(cacheKey);
if (!currentCache) return;
}
invalidateByCache(currentCache);
};
memoizedFn.invalidateCacheByContextAndArgs = (context, ...args) => {
if (!getContextKey) throw new Error("Use invalidateCacheByArgs instead");
let currentCache = rootCache;
const contextKey = getContextKey(context);
currentCache = currentCache.subCaches.get(contextKey);
if (!currentCache) return;
for (const arg of transformArgs(...args)) {
const cacheKey = getKey(arg);
currentCache = currentCache.subCaches.get(cacheKey);
if (!currentCache) return;
}
invalidateByCache(currentCache);
};
return memoizedFn;
};
Click this link to see the diff.
That’s all for the main part, but there are a couple more things I’d like to talk about.
Example of CRUD service with memoization
I want to suggest a way of using our function that can be quickly implemented in most projects and will immediately bring benefits.
Let’s say we have a REST API for the User entity and need to implement basic CRUD functions. Here’s how it can look, using the withMemo function that we implemented together:
const createMemoizedCrudService = (endpoint) => {
const get = withMemo(
(id) => fetch(`${endpoint}/${id}`),
{ ttl: 5 * 60 * 1000 } // 5 minutes
)
const getAll = withMemo(
(paginationAndFiltersParams) => fetch(`${endpoint}?${new URLSearchParams(paginationAndFiltersParams)}`),
{ ttl: 1 * 60 * 1000 } // 1 minute
)
const create = (id, newEntity) => {
getAll.invalidateCache();
return fetch(`${endpoint}`, {
method: 'POST',
body: JSON.stringify(newEntity),
})
}
const update = (id, updatedEntity) => {
getAll.invalidateCache();
get.invalidateCacheByArgs(id);
return fetch(`${endpoint}/${id}`, {
method: 'PUT',
body: JSON.stringify(updatedEntity),
})
}
const bulkUpdate = (filters, updatedValues) => {
getAll.invalidateCache();
get.invalidateCache();
return fetch(`${endpoint}/bulk`, {
method: 'PUT',
body: JSON.stringify({ filters, updatedValues }),
})
}
const remove = (id) => {
getAll.invalidateCache();
get.invalidateCacheByArgs(id);
return fetch(`${endpoint}/${id}`, {
method: 'DELETE',
})
}
return {
get,
getAll,
create,
update,
bulkUpdate,
remove,
}
}
Example of usage:
const usersCrudService = createMemoizedCrudService('/api/users')
usersCrudService.get(1) // request
usersCrudService.get(1) // no request. get result from cache
usersCrudService.get(2) // request
usersCrudService.getAll() // request
usersCrudService.getAll() // cache
usersCrudService.update(1, {name: 'Victor'})
usersCrudService.get(1) // request, because `update` invalidates cache
usersCrudService.get(2) // cache
usersCrudService.getAll() // request
// after 5 minutes
usersCrudService.get(2) // request, because the cache was invalidated by timeout
// other CRUD services
const companiesCrudService = createMemoizedCrudService('/api/companies')
const phonesCrudService = createMemoizedCrudService('/api/phones')
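One detail worth noticing in this service: what gets cached is the promise returned by fetch, not the resolved data. That means concurrent callers share a single in-flight request. The self-contained sketch below demonstrates the idea with a stub standing in for the network call (fetchUser, getUser, and the user shape are all invented for the example; withMemo plays the role of the tiny inline memoizer here):

```javascript
let requests = 0;
// Stub "network" call that records how many times it was actually invoked
const fetchUser = (id) => {
  requests += 1;
  return Promise.resolve({ id, name: `user-${id}` });
};

const cache = new Map();
const getUser = (id) => {
  // Cache the promise itself, not the resolved value
  if (!cache.has(id)) cache.set(id, fetchUser(id));
  return cache.get(id);
};

// Three callers fire "simultaneously", but only two requests go out
Promise.all([getUser(1), getUser(1), getUser(2)]).then((users) => {
  console.log(requests);      // 2, not 3
  console.log(users[0].name); // "user-1"
});
```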
I would be glad to see your ideas and ways of application in the comments.
About Test Coverage
If you think the tests we have written provide good coverage of our function, you are mistaken. The main problem with our withMemo function is that it has too many options and, therefore, many if statements inside. Ideally, we should write tests covering all possible option combinations to ensure everything works as expected, but this leads to a combinatorial explosion.
This reminds me of a video from the Fun Fun Function channel on YouTube, which I never miss. The video is called “Settings are evil,” and it emotionally explains this problem in detail.
If you copy our withMemo code as is, it makes sense to wrap it in a function where you fix all the options, and then write your own tests covering the option combination you have chosen.
Restrictions
It should be understood that only pure functions are good candidates for memoization. And not all pure functions are suitable, but only those that are called frequently and consume a lot of CPU resources. If a function is impure, we always risk getting an outdated value from the cache.
It is important to keep in mind that memoization is never free. Storing the results of function calls for future reuse incurs significant memory overhead. The speed gain may not outweigh the increase in memory consumption if the original function is already fast (e.g., there is no need to memoize a sum function) or if it is called infrequently. Most functions can be called with an unbounded range of argument values, which means that memory consumption under memoization can also grow without limit. If there are constraints on memory usage, a cache replacement policy is needed.
Memoization is always a compromise between memory usage and function speed.
Memoization of Recursive Functions
If you write a recursive function and then pass it to withMemo, as in this example:
const factorial = (x) => {
if (x === 0) {
return 1;
} else {
return x * factorial(x - 1);
}
}
const memoizedFactorial = withMemo(factorial)
Then, the original function will be applied for internal recursive calls, not its memoized version.
memoizedFactorial(50)
memoizedFactorial(49) // will not use cache. 49 recursive calls
memoizedFactorial(51) // will not use cache. 51 recursive calls
memoizedFactorial(50) // will use cache
But this is easily fixable. Simply rewrite the function declaration as follows:
const memoizedFactorial = withMemo(
(x) => {
if (x === 0) {
return 1;
} else {
return x * memoizedFactorial(x - 1);
}
}
)
memoizedFactorial(50)
memoizedFactorial(49) // will use cache
memoizedFactorial(51) // only one call
memoizedFactorial(50) // will use cache
NPM Package

I couldn’t resist making an npm package with this code, especially since most of the work was already done. All that remained was to rewrite everything in TypeScript, add a few cache replacement policies, optimize a bit, and add a couple of tests. By the way, this is my first package on npm: @eabgaryan/with-memo.
While rewriting, I made a few changes to the final implementation. I added cache invalidation propagation up the tree, which I mentioned in the “Cache lifetime” section. I also removed allCacheRecords, replacing it with a cacheSize number and a getAllCacheRecords() function. If you’re interested in implementation details, here is the source code.
That’s all. Thank you all for your attention. Those who were able to follow the protocol all the way through — well done! You are breathtaking!