Commit 0e6b7e2 ("docs: first step docs")
1 parent 46d3581

2 files changed: 116 additions & 259 deletions

docs/content/docs/introduction.md (85 additions & 139 deletions)
---
summary: Verrou is a locking library for managing locks in a Node.js application.
---

# Introduction

Verrou is a locking library for managing locks (mutexes) in a Node.js application.

- 🔒 Easy usage
- 🔄 Multiple drivers (Redis, Postgres, MySQL, SQLite, In-Memory and others)
- 🔑 Customizable named locks
- 🌐 Consistent API across all drivers
- 🧪 Easy testing by switching to an in-memory driver
- 🔨 Easily extensible with your own drivers

---

:::codegroup

```ts
// title: Basic example
import { Verrou } from 'verrou'
import { redisStore } from 'verrou/drivers/redis'
import { memoryStore } from 'verrou/drivers/memory'

const verrou = new Verrou({
  default: 'redis',
  drivers: {
    redis: { driver: redisStore() },
    memory: { driver: memoryStore() }
  }
})

await verrou.createLock('my-resource').run(async () => {
  await doSomething()
}) // Lock is automatically released
```

```ts
// title: Manual lock
import { Verrou, E_LOCK_TIMEOUT } from 'verrou'

const lock = verrou.createLock('my-resource')

try {
  await lock.acquire()
  await doSomething()
} catch (error) {
  if (error instanceof E_LOCK_TIMEOUT) {
    // handle timeout
  }
} finally {
  await lock.release()
}
```

69-
![Redis vs Multi-tier caching](content/docs/redis_vs_mtier.webp)
70-
71-
So a pretty huge difference.
72-
73-
74-
## Features
75-
76-
Below is a list of the main features of BentoCache. If you want to know more, you can read each associated documentation page.
77-
78-
### Multi layer caching
79-
80-
Multi-layer caching allows you to combine the speed of in-memory caching with the persistence of a distributed cache. Best of both worlds.
81-
82-
### Lot of drivers
83-
84-
Many drivers available to suit all situations: Redis, Upstash, Database (MySQL, SQLite, PostgreSQL), DynamoDB, Filesystem, In-memory (LRU Cache), Vercel KV...
57+
```ts
58+
// title: using keyword
59+
import { Verrou } from 'verrou'
8560

86-
See the [drivers documentation](./cache_drivers.md) for list of available drivers. Also very easy to extend the library and [add your own driver](tbd)
61+
const lock = verrou.createLock('my-resource')
8762

88-
<!-- :::warning
89-
Only a Redis driver for the bus is currently available. We probably have drivers for other backends like Zookeeper, Kafka, RabbitMQ... Let us know with an issue if you are interested in this.
90-
::: -->
63+
function myFunction() {
64+
await using handle = await lock.acquire()
9165

66+
await doSomething()
67+
} // Lock is automatically released here thanks to the using keyword
68+
```
9269

93-
### Resiliency
70+
:::
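The `using` variant above relies on JavaScript's explicit resource management proposal: `await using` calls the handle's `Symbol.asyncDispose` method when the enclosing block exits. As a rough, hypothetical sketch (this is not Verrou's actual implementation), a disposable lock handle could look like this:

```typescript
// Hypothetical sketch of a disposable lock handle (not Verrou's real code).
// `await using handle = await lock.acquire()` works when the returned handle
// implements Symbol.asyncDispose, which the runtime invokes on scope exit.

// Polyfill for runtimes that predate the proposal
const asyncDispose: symbol =
  ((Symbol as any).asyncDispose ??= Symbol('Symbol.asyncDispose'))

class LockHandle {
  constructor(private release: () => Promise<void>) {}

  // The runtime calls this automatically at the end of an `await using` block
  async [asyncDispose]() {
    await this.release()
  }
}
```

Calling `handle[Symbol.asyncDispose]()` manually is exactly what the runtime does implicitly when the `await using` scope ends.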

## Why Verrou?

The main advantage of Verrou is that it provides a consistent API across all drivers. This means you can switch from one driver to another without changing your code. It also means you can switch to an in-memory driver in your test environment, making tests faster and easier to set up (no infrastructure required).
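For instance, reusing the configuration API from the first example (a sketch, assuming the same `Verrou` and `memoryStore` setup shown above), a test setup could simply point the default store at the in-memory driver:

```typescript
import { Verrou } from 'verrou'
import { memoryStore } from 'verrou/drivers/memory'

// In tests, the same locking code runs against an in-memory driver:
// no Redis or database needed.
const verrou = new Verrou({
  default: 'memory',
  drivers: {
    memory: { driver: memoryStore() },
  },
})
```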

Having a consistent API also means that you don't have to learn a new API when switching from one driver to another. Today, the Node.js ecosystem has several npm packages for managing locks, but they all have different APIs and behaviors.

A consistent API doesn't mean a less powerful one: Verrou provides every feature you would expect from a locking library, and more.

## Why would I need a locking library?

Locks are a very common pattern in software development. They are used to prevent multiple processes, or concurrent pieces of code, from accessing a shared resource at the same time. That probably sounds a bit abstract, so let's take a concrete example.

Let's say you are writing code for a banking system, with a function that transfers money from one account to another. We will first implement it naively, then see what can go wrong.

```ts
router.get('/transfer', async () => {
  const fromAccount = getAccountFromDb(request.input('from'))
  const toAccount = getAccountFromDb(request.input('to'))

  fromAccount.balance -= request.input('amount')
  toAccount.balance += request.input('amount')

  await fromAccount.save()
  await toAccount.save()
})
```

It works when we try it locally. But what happens if two users call the same endpoint at the same time?

1. User A's request reads the balance of Account X.
2. Concurrently, User B's request also reads the balance of Account X.
3. User A's request deducts the transfer amount from Account X and saves it.
4. Almost simultaneously, User B's request does the same, unaware of the change made by User A's request because it read the old balance.

As a result, Account X's balance ends up incorrect. This is a classic example of a race condition.
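The interleaving above can be reproduced deterministically with a small, self-contained simulation (hypothetical in-memory balances, not the banking code itself):

```typescript
// Simulates the lost update: two concurrent withdrawals both read the
// old balance before either one writes its result back.
const balances = new Map<string, number>([['X', 100]])

async function withdraw(amount: number) {
  const current = balances.get('X')!            // steps 1-2: both requests read 100
  await new Promise((r) => setTimeout(r, 10))   // simulate the DB round trip
  balances.set('X', current - amount)           // steps 3-4: both write 70
}

await Promise.all([withdraw(30), withdraw(30)])

// Two withdrawals of 30 should leave 40, but the balance is 70:
// one update was silently lost.
console.log(balances.get('X'))
```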

There are multiple ways to solve this problem. A simple one is to use a lock. By adding a lock, we prevent concurrent requests from executing the same piece of code at the same time:

```ts
router.get('/transfer', async () => {
  // Other requests will wait here until the lock is released
  await verrou.createLock('transfer').run(async () => {
    const fromAccount = getAccountFromDb(request.input('from'))
    const toAccount = getAccountFromDb(request.input('to'))

    fromAccount.balance -= request.input('amount')
    toAccount.balance += request.input('amount')

    await fromAccount.save()
    await toAccount.save()
  }) // Lock is automatically released after the callback is executed
})
```

Now, if two users call the same endpoint at the same time, the second request waits for the first to finish before executing the critical section. This way, we are sure the balance will be correct.
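To see why serializing the critical section fixes the balance, here is a minimal in-process mutex (an illustration only: Verrou's drivers also handle multi-process and multi-host locking, which a sketch like this cannot):

```typescript
// Minimal promise-chain mutex: each run() waits for the previous one to finish.
class SimpleMutex {
  private last: Promise<void> = Promise.resolve()

  run<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.last.then(fn)
    // Keep the chain alive even if fn rejects
    this.last = result.then(() => undefined, () => undefined)
    return result
  }
}

const mutex = new SimpleMutex()
const balances = new Map<string, number>([['X', 100]])

async function withdraw(amount: number) {
  await mutex.run(async () => {
    const current = balances.get('X')!           // second caller now reads 70, not 100
    await new Promise((r) => setTimeout(r, 10))
    balances.set('X', current - amount)
  })
}

await Promise.all([withdraw(30), withdraw(30)])
console.log(balances.get('X')) // 40: both deductions applied
```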

## Sponsor

If you like this project, [please consider supporting it by sponsoring it](https://github.com/sponsors/Julien-R44/). It will help a lot to maintain and improve it. Thanks a lot!