// MSG_DONTWAIT — non-blocking for this single call (regardless of O_NONBLOCK)
// MSG_NOSIGNAL — don't raise SIGPIPE on broken connection

// For SOCK_DGRAM readBuffer: each Uint8Array element is one complete datagram.
// Message boundaries are preserved — two 100-byte sends produce two 100-byte recvs.
// For SOCK_STREAM readBuffer: elements may be coalesced or split at arbitrary boundaries.
// Max UDP datagram size: 65535 bytes. Max receive queue depth: 128 datagrams.

// Wildcard address matching: connect('127.0.0.1', 8080) matches a listener
// bound to '0.0.0.0:8080'. The listeners map must check both exact and wildcard.

// Error semantics for send() on closed connection: EPIPE (+ SIGPIPE unless MSG_NOSIGNAL).
// Error semantics for send() on reset connection: ECONNRESET.
// Error semantics for send() on unconnected SOCK_STREAM: ENOTCONN.
```
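
The exact-then-wildcard rule amounts to two map probes. A minimal sketch, assuming the listeners map is keyed by `"addr:port"` strings; the key format and `SocketListener` shape are assumptions, not the actual socket-table implementation:

```ts
// Lookup sketch only — key format and SocketListener shape are assumptions.
interface SocketListener {
  pending: unknown[]; // queued connections awaiting accept()
}

function findListener(
  listeners: Map<string, SocketListener>,
  addr: string,
  port: number,
): SocketListener | undefined {
  // Exact bind takes precedence; fall back to the 0.0.0.0 wildcard bind.
  return listeners.get(`${addr}:${port}`) ?? listeners.get(`0.0.0.0:${port}`);
}
```
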
**Testing:** Standalone test in `packages/core/test/kernel/socket-table.test.ts`:

Runtimes call kernel DNS before falling through to host adapter.

- TTL expiry → host adapter called again
- Flush → all entries cleared
### 2.4 Unified Blocking I/O Wait System (K-10)
Currently each blocking operation (pipe read, socket recv, flock, poll) implements its own wait/wake logic. Add a unified `WaitHandle` primitive in `packages/core/src/kernel/wait.ts`:

```
WaitHandle {
  wait(timeoutMs?: number): Promise<void>  // suspends caller until woken or timeout
  wake(): void                             // wakes one waiter
  wakeAll(): void                          // wakes all waiters
}

WaitQueue {
  private waiters: WaitHandle[]
  enqueue(): WaitHandle  // creates and enqueues a new WaitHandle
  wakeOne(): void
  wakeAll(): void
}
```

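For concreteness, one possible Promise-based sketch of these primitives follows. The resolver wiring, the `Promise.race` timeout, and the collapsing of `wake()`/`wakeAll()` into a single `wake()` on a one-waiter handle are choices of this sketch, not the prescribed `wait.ts` implementation:

```ts
class WaitHandle {
  private resolve!: () => void;
  private readonly promise = new Promise<void>((res) => { this.resolve = res; });

  // Suspends the caller until wake() or the timeout, whichever comes first.
  // (Timer cleanup on early wake is omitted for brevity.)
  wait(timeoutMs?: number): Promise<void> {
    if (timeoutMs === undefined) return this.promise;
    return Promise.race([
      this.promise,
      new Promise<void>((res) => setTimeout(res, timeoutMs)),
    ]);
  }

  wake(): void {
    this.resolve(); // resolving twice is a harmless no-op
  }
}

class WaitQueue {
  private waiters: WaitHandle[] = [];

  enqueue(): WaitHandle {
    const handle = new WaitHandle();
    this.waiters.push(handle);
    return handle;
  }

  wakeOne(): void {
    this.waiters.shift()?.wake();
  }

  wakeAll(): void {
    for (const w of this.waiters.splice(0)) w.wake();
  }
}
```

Because a timed-out waiter may still sit in the queue, callers should re-check their wake condition after `wait()` resolves rather than assume a wake implies progress.
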
All kernel subsystems use `WaitQueue` for blocking:
- **Socket accept** (no pending connection) → `socket.acceptWaiters.enqueue().wait()`
- **Socket recv** (no data) → `socket.readWaiters.enqueue().wait()`
- **flock** (lock held by another process) → `fileLock.waiters.enqueue().wait()`
- **poll() with timeout -1** → `waitQueue.enqueue().wait()` on each polled FD, race with timeout
**WasmVM integration:** The WasmVM worker thread blocks on `Atomics.wait()` during any syscall. The main thread handler calls `waitQueue.enqueue().wait()` (which is a JS Promise). When the condition is met, `wake()` resolves the Promise, the main thread writes the response to the signal buffer, and `Atomics.notify()` wakes the worker. The existing 30s `RPC_WAIT_TIMEOUT_MS` applies — for indefinite waits (poll timeout -1), the main thread handler loops: wait → timeout → check condition → re-wait.
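
The wait → timeout → check → re-wait loop might look like the following sketch. `RPC_WAIT_TIMEOUT_MS` is the existing constant; `condition` and `writeResponseAndNotify` are hypothetical placeholders for the handler's wake condition and for the response-write-plus-`Atomics.notify()` step:

```ts
declare const RPC_WAIT_TIMEOUT_MS: number;       // existing 30s constant
declare function writeResponseAndNotify(): void; // hypothetical: write signal
                                                 // buffer + Atomics.notify()

// Sketch of an indefinite wait (poll timeout -1) on the main thread,
// using the WaitQueue sketch above.
async function blockUntil(queue: WaitQueue, condition: () => boolean): Promise<void> {
  // Re-arm a bounded wait each iteration, so no single wait() call
  // exceeds RPC_WAIT_TIMEOUT_MS even though the overall wait is unbounded.
  while (!condition()) {
    await queue.enqueue().wait(RPC_WAIT_TIMEOUT_MS);
  }
  writeResponseAndNotify(); // worker blocked in Atomics.wait() resumes
}
```
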
**Node.js integration:** The Node.js bridge is async. Blocking semantics are implemented via `applySyncPromise` (V8's synchronous Promise resolution). `recv()` returns a Promise that resolves when the WaitHandle is woken. The isolate event loop pumps until the Promise settles.
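
On the host side, a blocking `recv()` then reduces to awaiting the wait queue; the isolate blocks on the resulting Promise. A sketch, assuming a `Socket` shape with the `readBuffer` and `readWaiters` fields named earlier (the shape itself is an assumption):

```ts
interface Socket {
  readBuffer: Uint8Array[]; // one element per datagram (SOCK_DGRAM)
  readWaiters: WaitQueue;   // from the WaitQueue sketch above
}

// Host-side async recv; the isolate-side recv() blocks until this settles.
async function recvBlocking(socket: Socket): Promise<Uint8Array> {
  // Re-check after every wake: another waiter may have drained the
  // buffer between wakeOne() and this continuation running.
  while (socket.readBuffer.length === 0) {
    await socket.readWaiters.enqueue().wait();
  }
  return socket.readBuffer.shift()!;
}
```
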
**Testing:** Standalone test in `packages/core/test/kernel/wait-queue.test.ts`:
- Create WaitHandle, wake it — verify wait() resolves
- Create WaitHandle with timeout — verify it times out
- Multiple waiters, wakeOne — verify only one wakes
- wakeAll — verify all wake
- Wait on pipe read with empty buffer — write data — verify read unblocks
- Wait on flock held by process A — process A unlocks — verify process B unblocks

### 2.5 Inode Layer (K-11)
Add `packages/core/src/kernel/inode-table.ts`:

```
Inode {
  ino: number          // unique inode number
  nlink: number        // hard link count
  openRefCount: number // number of open FDs referencing this inode
  mode: number         // file type + permissions (S_IFREG, S_IFDIR, etc.)
  uid: number
  gid: number
  size: number
  atime: Date
  mtime: Date
  ctime: Date
  birthtime: Date
}

InodeTable {
  private inodes: Map<number, Inode>
  private nextIno: number

  allocate(mode, uid, gid): Inode
  get(ino: number): Inode | null
  incrementLinks(ino): void    // hard link created
  decrementLinks(ino): void    // hard link or directory entry removed
  incrementOpenRefs(ino): void // FD opened
  decrementOpenRefs(ino): void // FD closed — if nlink=0 and openRefCount=0, delete data
  shouldDelete(ino): boolean   // true once nlink=0 and openRefCount=0
}
```

VFS nodes reference inodes by `ino` number. Multiple directory entries (hard links) share the same inode. `stat()` returns inode metadata.
**Deferred deletion:** When `unlink()` removes the last directory entry (`nlink → 0`) but FDs are still open (`openRefCount > 0`), the inode and its data persist. The file disappears from directory listings but remains accessible via open FDs. When the last FD is closed (`openRefCount → 0`), the inode and data are deleted. `stat()` on an open FD to an unlinked file returns `nlink: 0`.
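
A sketch of the refcount interplay; `deleteData()` is a hypothetical hook for freeing file contents, and the method bodies are illustrative rather than the actual `inode-table.ts`:

```ts
interface Inode {
  ino: number;
  nlink: number;
  openRefCount: number;
  // mode, uid, gid, size, timestamps omitted for brevity
}

class InodeTable {
  private inodes = new Map<number, Inode>();

  shouldDelete(ino: number): boolean {
    const inode = this.inodes.get(ino);
    return !!inode && inode.nlink === 0 && inode.openRefCount === 0;
  }

  decrementLinks(ino: number): void {
    const inode = this.inodes.get(ino);
    if (!inode) return;
    inode.nlink--;
    // unlink() of the last name: delete now only if no FDs are open;
    // otherwise the inode lingers until the last close.
    this.reapIfDead(ino);
  }

  decrementOpenRefs(ino: number): void {
    const inode = this.inodes.get(ino);
    if (!inode) return;
    inode.openRefCount--;
    this.reapIfDead(ino); // last close of an already-unlinked file
  }

  private reapIfDead(ino: number): void {
    if (this.shouldDelete(ino)) {
      this.inodes.delete(ino);
      this.deleteData(ino);
    }
  }

  private deleteData(_ino: number): void {
    /* hypothetical: release the file's data blocks */
  }
}
```
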
**Hard links:** `link(existingPath, newPath)` creates a new directory entry pointing to the same inode. `incrementLinks()` bumps `nlink`. Both paths return the same `ino` from `stat()`.
**Integration with FD table:** `ProcessFDTable.open()` calls `inodeTable.incrementOpenRefs(ino)`. `ProcessFDTable.close()` calls `inodeTable.decrementOpenRefs(ino)` and checks `shouldDelete()`.
**Testing:** Standalone test in `packages/core/test/kernel/inode-table.test.ts`:
- Allocate inode, verify ino is unique
- Create hard link — verify nlink increments, both paths return same ino
- Unlink file with open FD — verify data persists, stat returns nlink=0
- Close last FD on unlinked file — verify inode and data are deleted
- stat() on unlinked-but-open file — verify correct metadata

### 2.6 Signal Handler Registry (K-8, expanded)
Expand beyond section 4.8's basic signal delivery to full POSIX sigaction semantics:

```
SignalHandler {
  handler: 'default' | 'ignore' | FunctionPointer // SIG_DFL, SIG_IGN, or user function
  mask: Set<number>                               // signals blocked during handler execution (sa_mask)
  flags: number                                   // SA_RESTART, SA_NOCLDSTOP, etc.
}

ProcessSignalState {
  handlers: Map<number, SignalHandler> // signal number → handler
  blockedSignals: Set<number>          // sigprocmask: currently blocked signals
  pendingSignals: Map<number, number>  // signal → count (queued while blocked)
}
```

**sigaction(signal, handler, mask, flags):** Registers a handler for `signal`. When the signal is delivered:
1. If handler is `'ignore'` → signal is discarded
2. If handler is `'default'` → kernel applies default action (SIGTERM→exit, SIGINT→exit, SIGCHLD→ignore, etc.)
3. If handler is a function pointer → kernel invokes it with `sa_mask` signals temporarily blocked

**SA_RESTART:** If a signal interrupts a blocking syscall (recv, accept, read, wait, poll) and SA_RESTART is set, the syscall is restarted automatically after the handler returns. Without SA_RESTART, the syscall returns EINTR.
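
One way to express this around a blocking wait, as a sketch; the `WaitOutcome` shape and `blockOnce()` are illustrative placeholders, not part of the plan:

```ts
// 'interrupted' means a signal handler ran while we were blocked.
type WaitOutcome<T> = { kind: 'done'; value: T } | { kind: 'interrupted' };
declare function blockOnce<T>(): Promise<WaitOutcome<T>>; // one blocking attempt
const EINTR = 4;

async function restartableSyscall<T>(saRestart: boolean): Promise<T | number> {
  for (;;) {
    const outcome = await blockOnce<T>();
    if (outcome.kind === 'done') return outcome.value;
    // A signal handler ran while we were blocked:
    if (!saRestart) return -EINTR; // no SA_RESTART: fail with EINTR
    // SA_RESTART set: the handler has returned; transparently retry.
  }
}
```
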
**sigprocmask(how, set):** `SIG_BLOCK` adds signals to `blockedSignals`, `SIG_UNBLOCK` removes them, `SIG_SETMASK` replaces. Signals delivered while blocked are queued in `pendingSignals`. When unblocked, pending signals are delivered in order (lowest signal number first, per POSIX).
**Signal coalescing:** Standard signals (1-31) are coalesced — if SIGINT is delivered twice while blocked, only one instance is queued. The `pendingSignals` count is capped at 1 for standard signals.
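
Pulling these rules together, delivery against `ProcessSignalState` might be sketched as follows; `applyDefaultAction` and `invokeUserHandler` are hypothetical placeholders, and only the data structures come from the plan above:

```ts
type FunctionPointer = number;

interface SignalHandler {
  handler: 'default' | 'ignore' | FunctionPointer;
  mask: Set<number>;
  flags: number;
}

interface ProcessSignalState {
  handlers: Map<number, SignalHandler>;
  blockedSignals: Set<number>;
  pendingSignals: Map<number, number>;
}

declare function applyDefaultAction(sig: number): void; // SIGTERM→exit, SIGCHLD→ignore, ...
declare function invokeUserHandler(h: SignalHandler, sig: number): void;

function deliverSignal(state: ProcessSignalState, sig: number): void {
  if (state.blockedSignals.has(sig)) {
    // Standard signals (1-31) coalesce: cap the pending count at 1.
    const pending = state.pendingSignals.get(sig) ?? 0;
    state.pendingSignals.set(sig, Math.min(pending + 1, 1));
    return;
  }
  const registered = state.handlers.get(sig);
  const action = registered?.handler ?? 'default';
  if (action === 'ignore') return; // SIG_IGN: discard
  if (action === 'default') {      // SIG_DFL: kernel default action
    applyDefaultAction(sig);
    return;
  }
  // User handler: block sa_mask signals for the duration of the handler.
  const saved = new Set(state.blockedSignals);
  for (const s of registered!.mask) state.blockedSignals.add(s);
  try {
    invokeUserHandler(registered!, sig);
  } finally {
    state.blockedSignals = saved;
  }
}

// After sigprocmask(SIG_UNBLOCK, ...): deliver pending signals,
// lowest signal number first, per POSIX.
function drainPending(state: ProcessSignalState): void {
  for (const sig of [...state.pendingSignals.keys()].sort((a, b) => a - b)) {
    if (!state.blockedSignals.has(sig)) {
      state.pendingSignals.delete(sig);
      deliverSignal(state, sig);
    }
  }
}
```
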
**Testing:** Standalone test in `packages/core/test/kernel/signal-handlers.test.ts`:
- Register SIGINT handler, deliver SIGINT — verify handler called instead of default exit