
Commit 03c9dcf
Add a note about LRU usage (#103)
1 parent 866a3cb

1 file changed: README.md (38 additions & 0 deletions)
@@ -67,6 +67,7 @@ Note: It's probably not a good idea to use `readFileSync` in production.
   * <a href="#analyse">Word analysis</a>
   * <a href="#generate">Word generation</a>
 3. <a href="#notes">Notes</a>
+  * <a href="#improving-performance">Improving performance</a>
   * <a href="#notes-warning-on-synchronous-methods">A Warning on Synchronous Methods</a>
   * <a href="#notes-open-office-dictionaries">A Note About Open Office Dictionaries</a>
   * <a href="#notes-creating-dictionaries">A Note About Creating Dictionaries</a>
@@ -199,6 +200,43 @@ await nodehun.generate('told', 'run') // => [ 'tell' ]
 
 ## <a id="notes"></a>Notes
 
+### <a id="improving-performance"></a>Improving Performance
+
+If the native performance isn't fast enough for your workload, you can put an LRU cache in front of your operations: cache each result, and repeat the underlying operation only on a cache miss.
+
+```js
+const LRUCache = require('lru-native2')
+
+const cache = new LRUCache({ maxElements: 1000 })
+
+async function suggestCached(word) {
+  const cachedResult = cache.get(word)
+  if (cachedResult) {
+    // cache hit
+    return cachedResult
+  } else {
+    // cache miss
+    const result = await nodehun.suggest(word)
+    cache.set(word, result)
+    return result
+  }
+}
+
+// ... example usage:
+
+const suggestions = await suggestCached('Wintre')
+// now 'Wintre' results are cached
+
+// ... some time later...
+
+const laterSuggestions = await suggestCached('Wintre')
+// => this one is served from the cache
+```
+
+Here are two LRU implementations you can consider:
+* [lru-native2](https://github.com/adzerk/node-lru-native)
+* [lru-cache](https://github.com/isaacs/node-lru-cache)
+
 ### <a id="notes-warning-on-synchronous-methods"></a>A Warning on Synchronous Methods
 
 There are synchronous versions of all the methods listed above, but they are not documented, as they are only intended for people who really know and understand what they are doing. I highly recommend reading the C++ source before using these methods in a production environment, because the locks involved can create counterintuitive situations. For example, if you remove a word synchronously while many suggestion threads are working in the background, the remove call can take seconds to complete while it waits to acquire the read-write lock. This is obviously disastrous in a situation where you are servicing many requests.
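The caching pattern the diff adds can be sketched end to end without any native modules. In this sketch a plain `Map` stands in for the LRU cache (it never evicts, unlike a real LRU), and `slowSuggest` is a hypothetical stand-in for `nodehun.suggest`, so the names and results here are illustrative assumptions rather than the library's actual API:

```javascript
// Sketch of the cache-aside pattern from the added README section.
// A plain Map stands in for the LRU cache (a real LRU would also evict
// old entries), and slowSuggest is a hypothetical stand-in for
// nodehun.suggest so the example runs without native dependencies.
const cache = new Map()
let lookups = 0 // counts how often the "expensive" operation actually runs

async function slowSuggest(word) {
  lookups++
  return [word.toLowerCase()] // placeholder result
}

async function suggestCached(word) {
  if (cache.has(word)) return cache.get(word) // cache hit
  const result = await slowSuggest(word)      // cache miss
  cache.set(word, result)
  return result
}

const demo = (async () => {
  await suggestCached('Wintre') // miss: runs slowSuggest
  await suggestCached('Wintre') // hit: served from the Map
  console.log(lookups) // 1 -- the expensive call ran only once
})()
```

Because a `Map` never evicts, memory grows with every distinct word; for an unbounded stream of inputs you would swap in one of the bounded LRU libraries listed in the section above.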
