Last-emo-boy/rikune

Rikune

Rikune is an MCP server for reverse engineering Windows executables and related binary formats. It combines sample intake, static triage, Ghidra-assisted function recovery, plugin-driven specialist tooling, artifact management, and optional isolated Windows runtime execution behind a Model Context Protocol interface.

The current server is organized around a staged analysis pipeline:

  1. Import a sample with sample.ingest or request a durable upload session with sample.request_upload.
  2. Start analysis with workflow.analyze.start.
  3. Poll with workflow.analyze.status.
  4. Promote deeper stages with workflow.analyze.promote.
  5. Inspect artifacts with artifact.*, analysis.context.get, reporting tools, or semantic review workflows.

workflow.triage is still available as a compatibility and quick-profile facade, but new clients should prefer workflow.analyze.start/status/promote.
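The five steps above can be sketched as a client-side helper. This is a minimal sketch, assuming a generic callTool(name, args) wrapper supplied by your MCP client library; the tool names come from the pipeline above, but the response field names (sample_id, run_id, done) are illustrative.

```typescript
// Minimal sketch of the staged flow. `CallTool` stands in for whatever your
// MCP client exposes; the response shapes (sample_id, run_id, done) are assumed.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

async function analyze(callTool: CallTool, path: string): Promise<any> {
  // Step 1: import the sample; later tools take sample_id, not the path.
  const { sample_id } = await callTool("sample.ingest", { path });
  // Step 2: start (or reuse) an analysis run.
  const { run_id } = await callTool("workflow.analyze.start", { sample_id });

  // Step 3: poll until the run settles. Requesting deeper stages via
  // workflow.analyze.promote (step 4) is left to the caller.
  let status = await callTool("workflow.analyze.status", { run_id });
  while (!status.done) {
    await new Promise((r) => setTimeout(r, 2000));
    status = await callTool("workflow.analyze.status", { run_id });
  }
  return status;
}
```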

What Rikune Provides

  • MCP stdio server for AI clients and agent runtimes.
  • Optional HTTP API and dashboard for uploads, downloads, health checks, SSE events, and artifact access.
  • SHA-256 based sample workspaces with durable original files, cache directories, analysis artifacts, and upload sessions.
  • SQLite-backed persistence for samples, analyses, jobs, evidence, artifacts, batches, debug sessions, and scheduler telemetry.
  • Plugin architecture with 56 built-in plugins and external plugin discovery.
  • Progressive tool surface: core tools are always visible, specialist tools are exposed according to sample type, findings, or explicit discovery.
  • Static analysis and enrichment for PE, ELF, Mach-O, APK/DEX, Office, firmware, strings, YARA, SBOM, signatures, packers, .NET, Go, Rust, and more.
  • Ghidra, Rizin, RetDec, angr, Capstone, Graphviz, Qiling, PANDA, Speakeasy, Wine, Frida, and dynamic-runtime integration where available.
  • Optional Analyzer/Runtime split for live Windows execution through a Windows Host Agent, Windows Sandbox, or Hyper-V VM.
  • Policy gates for live execution, network access, external upload, and bulk decompilation.

Quick Start

Static Docker Analyzer

Static Docker is the safest default. It does not execute samples.

Windows (PowerShell):

.\rikune.ps1 install -Profile static -DataRoot "D:\Docker\rikune"

Linux/macOS:

./rikune.sh install --profile static --data-root "$HOME/.rikune"

Manual equivalent:

npm install
npm run build
npm run docker:generate:all
docker compose --env-file .docker-runtime.env -f docker-compose.analyzer.yml up -d --build analyzer

Hybrid Docker + Windows Runtime

Hybrid mode runs the Analyzer in Docker and delegates live Windows work to a Windows Host Agent. The Host Agent can start Windows Sandbox on demand or control a configured Hyper-V VM.

Windows (PowerShell):

.\rikune.ps1 install -Profile hybrid -InstallRuntime

From Linux/macOS with a remote Windows runtime host:

./rikune.sh install --profile hybrid --windows-host <windows-host> --windows-user <windows-user>

Connecting an MCP client does not start Windows Sandbox or run a sample. Live runtime work only starts when a tool explicitly requests it, such as runtime.debug.session.start, runtime.debug.command, sandbox.execute, or a promoted dynamic execution stage.

Native Development

npm install
npm run build
npm test
node dist/index.js

The root package requires Node.js 22 or newer. Some runtime subpackages can run on older Node versions, but repository development and the published root CLI should use Node 22+.

Primary MCP Flow

Upload Or Ingest

Use one of:

  • sample.ingest with a server-readable path or bytes_b64.
  • sample.request_upload to create an upload URL, then POST raw bytes to the embedded HTTP server.
  • POST /api/v1/samples when the HTTP API is enabled.

Successful ingest returns a sample_id. Analysis tools should use sample_id, not a local path, after import.
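For the bytes_b64 path, the sample bytes are sent inline as standard base64. A minimal sketch in Node, assuming bytes_b64 is plain base64 of the raw file contents:

```typescript
// Sketch: build the bytes_b64 argument for sample.ingest from raw bytes.
// bytes_b64 is named in the text above; treating it as standard base64 of the
// raw file contents is an assumption.
function toIngestArgs(bytes: Uint8Array): { bytes_b64: string } {
  return { bytes_b64: Buffer.from(bytes).toString("base64") };
}
```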

Start Analysis

Call workflow.analyze.start with the sample_id. The first stage performs a fast profile and creates or reuses an analysis run.

Promote Stages

Use workflow.analyze.promote to request deeper stages. The pipeline currently models these stages:

  • fast_profile
  • enrich_static
  • function_map
  • reconstruct
  • semantic_reviews
  • dynamic_plan
  • dynamic_execute
  • summarize

Long-running work is queued through the job system. Poll with workflow.analyze.status and task.status.

workflow.analyze.status is the primary staged-run view. Large historical stage payloads may be pruned with a top-level warning; use artifact.read for full artifacts. task.status is the raw queue/process view and includes external_active_* memory telemetry for analyzer subprocesses.
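The stage ladder above can be sketched as an ordered list. Modelling promotion as "advance to the next stage in this fixed order" is an assumption; the real pipeline may allow skipping or branching.

```typescript
// Sketch of the stage ladder from the list above.
const STAGES = [
  "fast_profile", "enrich_static", "function_map", "reconstruct",
  "semantic_reviews", "dynamic_plan", "dynamic_execute", "summarize",
] as const;

// Returns the stage a promote request would target next, or null when the
// current stage is terminal or unknown.
function nextStage(current: string): string | null {
  const i = STAGES.indexOf(current as (typeof STAGES)[number]);
  if (i < 0 || i === STAGES.length - 1) return null;
  return STAGES[i + 1];
}
```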

Review Results

Useful follow-up surfaces:

  • sample.profile.get
  • analysis.context.get
  • artifact.list, artifact.read, artifact.diff, artifact.download
  • report.summarize, report.generate, workflow.summarize
  • workflow.semantic_name_review
  • workflow.function_explanation_review
  • workflow.module_reconstruction_review
  • tools.discover and tool.readiness

Architecture

The current code path is:

src/index.ts
  -> loadConfig()
  -> WorkspaceManager / DatabaseManager / PolicyGuard / CacheManager / StorageManager / JobQueue
  -> optional RuntimeClient or Windows sandbox bootstrap
  -> registerAllTools()
  -> MCP stdio server

Core server modules live under src/core/:

Area                                 Current file
MCP server wrapper                   src/core/server.ts
MCP tool/prompt/resource registry    src/core/mcp-registry.ts
Tool execution, validation, hooks    src/core/tool-executor.ts
Registry orchestration               src/core/tool-registry.ts
Built-in registry slices             src/core/tool-registry/*.ts
Plugin manager facade                src/core/plugins.ts
Plugin discovery/loading             src/core/plugin-orchestrator.ts
Progressive tool exposure            src/core/tool-surface-manager.ts

Some root-level files such as src/server.ts, src/tool-registry.ts, and src/plugins.ts remain compatibility forwarders. New code should target src/core/*.

Deployment Planes

  • Analyzer: MCP stdio server, HTTP API, storage, jobs, static tools, and plugin orchestration (src/index.ts, src/core/*)
  • Runtime Node: isolated task executor inside a sandbox or VM (packages/runtime-node/*)
  • Windows Host Agent: starts/stops the Windows Sandbox or Hyper-V runtime and exposes runtime control endpoints (packages/windows-host-agent/*)
  • Agent Gateway: MCP gateway/proxy for analyzer/runtime connection management (src/rikune-agent-gateway.ts)

Runtime modes are configured through runtime.mode or environment variables:

  • disabled: no runtime delegation.
  • manual: connect to a supplied runtime endpoint.
  • remote-sandbox: delegate to a Windows Host Agent.
  • auto-sandbox: Windows-native analyzer launches Windows Sandbox locally.

Docker/WSL analyzers should use remote-sandbox, not auto-sandbox.
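The auto-sandbox restriction can be enforced with a small guard. This is a sketch, not the server's actual validation; how the analyzer detects a Windows-native host is an assumption here.

```typescript
// Sketch of the runtime.mode guidance above: auto-sandbox assumes a
// Windows-native analyzer, so a containerized (Docker/WSL) analyzer should
// fall back to remote-sandbox instead.
type RuntimeMode = "disabled" | "manual" | "remote-sandbox" | "auto-sandbox";

function validateRuntimeMode(mode: RuntimeMode, isWindowsHost: boolean): RuntimeMode {
  if (mode === "auto-sandbox" && !isWindowsHost) {
    // Docker/WSL analyzers cannot launch Windows Sandbox locally.
    return "remote-sandbox";
  }
  return mode;
}
```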

Plugin System

Rikune currently includes 56 built-in plugins under src/plugins/<id>/. Plugins can register tools, declare dependencies, expose configuration schema, participate in lifecycle hooks, and provide Docker metadata.

Plugin loading is controlled by PLUGINS:

PLUGINS=*                 # all built-ins
PLUGINS=pe-analysis,yara  # selected plugins
PLUGINS=-dynamic          # all except dynamic
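The three selector forms above can be sketched as a predicate. The exact precedence rules when forms are mixed are an assumption; only the three cases shown are grounded in the examples.

```typescript
// Sketch of the PLUGINS selector: "*" enables all built-ins, a comma list
// enables only those plugins, and a leading "-" excludes a plugin.
function pluginEnabled(spec: string, id: string, builtins: string[]): boolean {
  const parts = spec.split(",").map((p) => p.trim()).filter(Boolean);
  const excluded = parts.filter((p) => p.startsWith("-")).map((p) => p.slice(1));
  if (excluded.includes(id)) return false;
  // "*" or an exclusion-only spec means "all built-ins" as the base set.
  if (parts.includes("*") || parts.every((p) => p.startsWith("-"))) {
    return builtins.includes(id);
  }
  return parts.includes(id);
}
```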

Use these MCP tools at runtime:

  • plugin.list
  • plugin.enable
  • plugin.disable
  • tools.discover
  • tool.readiness

See docs/PLUGINS.md and packages/plugin-sdk/README.md.

HTTP API

When api.enabled is true, the embedded file server exposes:

Endpoint                       Purpose
/dashboard and /               Dashboard UI
/api/v1/health                 Liveness
/api/v1/ready                  Readiness across database, queue, runtime, and plugin backends
/api/v1/events                 SSE events
/api/v1/samples                Direct sample upload
/api/v1/samples/:id            Sample metadata
/api/v1/samples/:id/download   Original sample download
/api/v1/artifacts              Artifact listing
/api/v1/artifacts/:id          Artifact read/delete
/api/v1/uploads/:token         Durable upload session POST/status

API key auth, rate limiting, security headers, and limited CORS are handled by the HTTP layer.
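Posting raw bytes to a durable upload session can be sketched as request construction. The Content-Type value is an assumption; the endpoint shape comes from the table above.

```typescript
// Sketch: build the POST request for a durable upload session token returned
// by sample.request_upload. Content-Type is assumed; the path is documented.
function uploadRequest(baseUrl: string, token: string, bytes: Uint8Array) {
  return {
    url: `${baseUrl}/api/v1/uploads/${encodeURIComponent(token)}`,
    method: "POST" as const,
    headers: { "Content-Type": "application/octet-stream" },
    body: bytes,
  };
}
```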

Prerequisites

Minimum development baseline:

  • Node.js 22+
  • npm
  • Python 3.11+ recommended for workers and analysis scripts
  • Docker 20.10+ and Docker Compose v2 for Docker profiles
  • Java 21+ for modern Ghidra releases
  • Ghidra for decompiler-backed function analysis
  • Windows 10/11 Pro, Enterprise, or equivalent VM support for Windows Sandbox and Hyper-V runtime paths

Optional tools are plugin-specific. Run system.health, system.setup.guide, tool.readiness, and plugin.list to see what is missing in a given environment.

Project Layout

src/
  index.ts                    main server entry
  core/                       MCP server, registry, executor, plugin orchestration
  core/tool-registry/         built-in tool/prompt/resource registration slices
  tools/                      core tool implementations
  workflows/                  staged analysis, triage, reconstruction, review workflows
  analysis/                   run state and background task runner
  plugins/                    56 built-in plugins
  persistence/                SQLite and workspace persistence
  sample/                     sample finalization and workspace inspection
  storage/                    artifacts, uploads, retention
  runtime-client/             analyzer-side runtime delegation client
  worker/                     Ghidra and Python worker orchestration
packages/
  plugin-sdk/                 public plugin SDK
  shared/                     runtime and tool contract types
  runtime-node/               isolated runtime executor
  windows-host-agent/         Windows Sandbox / Hyper-V host agent
workers/                      Python worker scripts and YARA rules
docker/                       generated Dockerfile templates and profile files
docs/                         architecture, plugin, runtime, deployment docs
tests/                        unit, integration, and e2e tests

Development Commands

npm install
npm run build
npm test
npm run typecheck
npm run validate
npm run docker:generate:all

Useful focused checks:

npm run test:unit
npm run test:integration
npm run test:e2e
npm run build:runtime

MCP Client Configuration

Local build:

{
  "mcpServers": {
    "rikune": {
      "command": "node",
      "args": ["D:/Playground/windows-exe-decompiler-mcp-server/dist/index.js"],
      "env": {
        "API_ENABLED": "true",
        "API_PORT": "18080",
        "PLUGINS": "*"
      }
    }
  }
}

Docker stdio:

{
  "mcpServers": {
    "rikune": {
      "command": "docker",
      "args": ["exec", "-i", "rikune-analyzer", "node", "dist/index.js"]
    }
  }
}

Published package:

npm install -g rikune
rikune
rikune docker-stdio
rikune agent

Storage

By default Rikune stores persistent data under the user-level Rikune root. Docker installers usually map that root to a host directory such as D:\Docker\rikune.

Common subdirectories:

  • samples/
  • artifacts/
  • uploads/
  • cache/
  • logs/
  • SQLite database file
  • audit log JSONL

Sample workspaces are bucketed by SHA-256 to avoid path collisions and preserve immutable originals.
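SHA-256 bucketing can be sketched as follows. The exact directory shape (samples/<first two hex chars>/<full hash>) is an assumption; the text only says workspaces are keyed by SHA-256.

```typescript
import { createHash } from "node:crypto";

// Sketch of SHA-256 workspace bucketing: hash the sample bytes and derive a
// collision-free, content-addressed workspace directory from the digest.
function workspacePath(bytes: Uint8Array): string {
  const digest = createHash("sha256").update(bytes).digest("hex");
  return `samples/${digest.slice(0, 2)}/${digest}`;
}
```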

Security Boundaries

Rikune is designed for malware and untrusted binary analysis, but it is not a magic safety boundary by itself.

  • Static Docker mode should be the default for routine analysis.
  • Live Windows execution must happen inside Windows Sandbox or an isolated VM.
  • Runtime Node refuses unsafe startup unless explicitly overridden.
  • Dangerous actions are guarded by PolicyGuard.
  • Command execution uses structured process APIs and allowlisted command validation.
  • Do not run unknown samples on a host workstation outside the runtime isolation model.

See SECURITY.md and TROUBLESHOOTING.md.

Documentation Map

See docs/ for architecture, plugin, runtime, and deployment documentation, including docs/PLUGINS.md and packages/plugin-sdk/README.md for plugin development.

License

MIT
