If the native performance isn't fast enough for your workload, you can try using an LRU cache for your operations. The idea is to cache the result of each operation and only repeat the work on cache misses.
```js
const LRUCache = require('lru-native2')

const cache = new LRUCache({ maxElements: 1000 })

async function suggestCached(word) {
  let cachedResult = cache.get(word)
  if (cachedResult) {
    // cache hit
    return cachedResult
  } else {
    // cache miss
    let result = await nodehun.suggest(word)
    cache.set(word, result)
    return result
  }
}

// ... example usage:

const suggestions = await suggestCached('Wintre')
// now 'Wintre' results are cached

// ... some time later...

const moreSuggestions = await suggestCached('Wintre')
// => this is fetched from the cache
```

Here are two LRU implementations you can consider:
### <a id="notes-warning-on-synchronous-methods"></a>A Warning on Synchronous Methods
There are synchronous versions of all the methods listed above, but they are not documented, as they are only present for people who really know and understand what they are doing. I highly recommend looking at the C++ source code if you are going to use these methods in a production environment, as the locks involved can create some counterintuitive situations. For example, if you were to remove a word synchronously while many suggestion threads were working in the background, the remove-word method could take seconds to complete while it waits to take control of the read-write lock. This is obviously disastrous in a situation where you are servicing many requests.
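To make the hazard concrete, here is a minimal sketch in plain Node.js (no nodehun involved) that simulates a writer waiting on a read-write lock: the "remove" step cannot proceed until every in-flight "suggest" reader has released the lock, so its latency is dictated by the slowest reader. All names here (`ReadWriteLock`, `suggest-1`, etc.) are hypothetical and only illustrate the blocking behavior described above, not the actual nodehun internals.

```javascript
// Hypothetical simulation of read-write lock contention.
// Readers (analogous to background suggestion work) hold a shared lock;
// a writer (analogous to a synchronous remove) must wait until the
// reader count drops to zero before it can proceed.
class ReadWriteLock {
  constructor() {
    this.readers = 0
    this.waitingWriters = []
  }
  acquireRead() {
    this.readers++
  }
  releaseRead() {
    if (--this.readers === 0) {
      // last reader out: wake any writers blocked on the lock
      this.waitingWriters.forEach(resolve => resolve())
      this.waitingWriters = []
    }
  }
  acquireWrite() {
    return this.readers === 0
      ? Promise.resolve()
      : new Promise(resolve => this.waitingWriters.push(resolve))
  }
}

const lock = new ReadWriteLock()
const order = []

// Each "reader" holds the read lock for `ms` milliseconds,
// standing in for a long-running suggestion call.
function reader(name, ms) {
  lock.acquireRead()
  return new Promise(resolve => setTimeout(() => {
    order.push(name)
    lock.releaseRead()
    resolve()
  }, ms))
}

async function main() {
  const r1 = reader('suggest-1', 50)
  const r2 = reader('suggest-2', 100)
  await lock.acquireWrite() // the "synchronous remove" blocks here
  order.push('remove')
  await Promise.all([r1, r2])
  console.log(order.join(',')) // remove finishes last, after both readers
}
main()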