This guide shows how to migrate profile-based integrations (e.g., Spark/Hadoop) to BTrace extensions without mutating the global classpath.
- API on bootstrap: expose minimal, stable APIs with simple/value types.
- Impl in isolated CL: load implementation in an extension classloader; no shading of app libs.
- Runtime linking: access application types via object hand-off and TCCL instead of compile-time imports.
- No classpath injection: avoid boot/system CL changes; reserve the escape hatch for exceptional cases only.
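The "runtime linking" principle can be illustrated with plain JDK code: given an object handed off from the application, resolve its type through its own defining classloader and invoke members via `MethodHandles`, with no compile-time import of the application type. This is only a sketch; the class and method names are hypothetical, and `java.lang.String` stands in for an application type:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Hypothetical adapter: receives an application object the probe handed off
// and reads a property via the object's own defining classloader, with no
// compile-time dependency on the application type.
public final class HandOffSketch {
    // Resolve the app type from the handed-off instance, not from our classpath.
    static int readLength(Object handedOff) throws Throwable {
        ClassLoader definingLoader = handedOff.getClass().getClassLoader();
        // For core JDK types the defining loader is null (bootstrap); fall back.
        Class<?> cls = Class.forName(handedOff.getClass().getName(), false,
                definingLoader != null ? definingLoader : ClassLoader.getSystemClassLoader());
        MethodHandle mh = MethodHandles.publicLookup()
                .findVirtual(cls, "length", MethodType.methodType(int.class));
        return (int) mh.invoke(handedOff);
    }

    public static void main(String[] args) throws Throwable {
        // Stand-in for an application object; in a real extension this would be
        // e.g. a Spark event instance handed to the probe.
        System.out.println(readLength("hello")); // 5
    }
}
```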
- `io.btrace.extension.util.ClassLoadingUtil`
  - Loaders: `tccl()`, `definingLoader(Object)`
  - Class loading: `load(String, ClassLoader)`, `load(String, Object)`, `tryLoad(String, ClassLoader)`
  - Context: `withTCCL(ClassLoader, Supplier<T>)`, `withTCCL(ClassLoader, Runnable)`, `withDefiningLoader(Object, Supplier<T>)`
  - Services: `loadService(Class<T>, ClassLoader)`, `loadServices(Class<T>, ClassLoader)`
  - Resources: `getResource(String, ClassLoader)`, `openResource(String, ClassLoader)`
  - Optional child loader: `newChildURLClassLoader(List<Path>, ClassLoader)`, `safeClose(ClassLoader)`
- `io.btrace.extension.util.MethodHandleCache`
  - Caches public `MethodHandle`s for reflective adapters.
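The context helpers presumably follow the standard save/set/restore pattern for the thread context classloader. A JDK-only sketch of what `withTCCL(ClassLoader, Supplier<T>)` likely does (the helper class name here is hypothetical, not the BTrace implementation):

```java
import java.util.function.Supplier;

// Sketch of the save/set/restore pattern a helper like
// ClassLoadingUtil.withTCCL(ClassLoader, Supplier<T>) presumably implements:
// run the supplier with the given context classloader installed, and always
// restore the previous one, even on exception.
public final class TcclScope {
    public static <T> T withTCCL(ClassLoader cl, Supplier<T> body) {
        Thread t = Thread.currentThread();
        ClassLoader previous = t.getContextClassLoader();
        t.setContextClassLoader(cl);
        try {
            return body.get();
        } finally {
            t.setContextClassLoader(previous); // restore unconditionally
        }
    }

    public static void main(String[] args) {
        ClassLoader system = ClassLoader.getSystemClassLoader();
        ClassLoader seen = withTCCL(system,
                () -> Thread.currentThread().getContextClassLoader());
        System.out.println(seen == system); // true
    }
}
```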
```java
// exported API (on bootstrap)
package org.example.btrace.spark.api;

public interface SparkApi {
    void onJobStart(Object jobStartEvent);
    void onStageCompleted(Object stageInfo);
}
```

```java
// implementation (extension CL)
package org.example.btrace.spark.impl;

import org.example.btrace.spark.api.SparkApi;
import io.btrace.extension.util.ClassLoadingUtil;
import io.btrace.extension.util.MethodHandleCache;
import java.lang.invoke.MethodHandle;

public final class SparkApiImpl implements SparkApi {
    private final MethodHandleCache mh = new MethodHandleCache();

    @Override
    public void onJobStart(Object evt) {
        ClassLoadingUtil.withDefiningLoader(evt, () -> {
            try {
                // Resolve the Spark type through the event's defining loader.
                Class<?> cls = ClassLoadingUtil.load(
                        "org.apache.spark.scheduler.SparkListenerJobStart", evt);
                MethodHandle getJobId = mh.findVirtual(cls, "jobId", int.class);
                int jobId = (int) getJobId.invoke(evt);
                // emit metrics/logs...
            } catch (Throwable t) {
                // log and continue; never let probe errors escape
            }
            return null;
        });
    }

    @Override
    public void onStageCompleted(Object stageInfo) {
        // analogous pattern
    }
}
```

Writing reflective adapters by hand with `ClassLoadingUtil` + `MethodHandleCache` works, but it has three ergonomic costs: string method names aren't refactor-safe, eager `static final MethodHandle` fields fail extension init if the target class isn't yet visible, and every reflective call expands into 5+ lines of try/catch and cache plumbing.
The @ExternalType annotation + build-time annotation processor removes all three.
Declare an interface in your extension's exported API set marked with `@ExternalType("fully.qualified.AppType")`. In practice this means an API-facing interface under `src/main/java`. The BTrace extension Gradle plugin auto-registers the annotation processor, which generates a companion `<InterfaceSimpleName>$Ext` class in the same package with typed public static dispatchers for each method.
```java
package com.example.spark.api;

import io.btrace.core.extensions.ExternalType;

@ExternalType("org.apache.spark.scheduler.SparkListenerJobStart")
public interface JobStart {
    int jobId();
    long time();
}
```

The generated `JobStart$Ext` can then be called directly from the impl:
```java
int id = JobStart$Ext.jobId(event);
long ts = JobStart$Ext.time(event);
```

Each dispatcher uses a per-method volatile `MethodHandle` field with lazy resolution: on the first call, the method looks up the target class via `self.getClass().getClassLoader()` (virtual methods) or `Thread.currentThread().getContextClassLoader()` (static methods, see below), then calls `MethodHandles.publicLookup().findVirtual` / `findStatic`. Subsequent calls reuse the cached handle; once warm, the volatile read plus `MethodHandle.invoke` is JIT-inlineable.
If the external class isn't yet loaded when the dispatcher is first called, the resolver throws; the volatile field stays null, so the next call retries. No eager init, no ExceptionInInitializerError at extension load.
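The shape of such a lazily resolved dispatcher can be sketched with plain JDK code. This is an illustration of the pattern, not the actual generated source; the class name is hypothetical and `java.lang.String` stands in for the external application type:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Sketch of a generated dispatcher: a per-method volatile MethodHandle,
// resolved lazily on first call via the receiver's defining classloader.
public final class LazyDispatcherSketch {
    private static volatile MethodHandle LENGTH; // null until first successful resolution

    public static int length(Object self) {
        MethodHandle h = LENGTH;
        if (h == null) {
            try {
                ClassLoader cl = self.getClass().getClassLoader();
                Class<?> target = Class.forName("java.lang.String", false,
                        cl != null ? cl : ClassLoader.getSystemClassLoader());
                h = MethodHandles.publicLookup()
                        .findVirtual(target, "length", MethodType.methodType(int.class));
                LENGTH = h; // benign race: two threads may resolve the same handle
            } catch (ReflectiveOperationException e) {
                // target class not visible yet: field stays null, next call retries
                throw new IllegalStateException(e);
            }
        }
        try {
            return (int) h.invoke(self);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(length("hello")); // 5; the second call hits the cache
        System.out.println(length("ab"));    // 2
    }
}
```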
- Target: `ElementType.TYPE`, interfaces only. Annotating a class (rather than an interface) is a compile-time error.
- Annotation value: a non-empty fully-qualified class name. An empty string is a compile-time error.
- Method return and parameter types: anything resolvable at build time is fine. Types you can't put on the extension's compile classpath (app-private types, classes that only exist at runtime) must be typed as `Object`.
- Static methods: annotate the interface method with `@ExternalType.Static`; the generated dispatcher calls `findStatic` and uses TCCL for class loading.
- Default methods and static interface methods: skipped (they already have bodies).
The following are not yet handled by the processor. Use `ClassLoadingUtil` / `MethodHandleCache` directly as a workaround; all items in the table below are planned for a future version of `@ExternalType`:
| Feature | Status | Manual workaround |
|---|---|---|
| Field access (read/write) | Planned | `MethodHandleCache.findGetter` / `findSetter` |
| Constructors (`new ExternalType(...)`) | Planned | `MethodHandleCache.findConstructor` |
| `instanceof` / `checkcast` on external types | Planned | `ClassLoadingUtil.load(...)` + `Class.isInstance` |
| Chained `@ExternalType` references | Planned | Manual adapter per level |
| Non-public methods | Planned | `MethodHandles.privateLookupIn` (Java 9+) |
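As an illustration of the field-access and non-public rows, here is a JDK-only sketch of a `privateLookupIn`-based getter (Java 9+). The names are hypothetical; a nested `Point` class stands in for an external type with a non-public field:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;

// Sketch of the manual workaround for private field access: obtain a
// full-privilege lookup on the target class, then build a field getter.
public final class PrivateAccessSketch {
    static final class Point {        // stand-in for an external type
        private final int x;
        Point(int x) { this.x = x; }
    }

    public static int readX(Object point) throws Throwable {
        Class<?> cls = point.getClass();
        // privateLookupIn requires that our module can access cls; within the
        // same classloader/module (as here) that holds.
        MethodHandles.Lookup lookup =
                MethodHandles.privateLookupIn(cls, MethodHandles.lookup());
        MethodHandle getter = lookup.findGetter(cls, "x", int.class);
        return (int) getter.invoke(point);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(readX(new Point(42))); // 42
    }
}
```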
The hand-written pattern in the "Impl Sketch" section above works alongside @ExternalType-based adapters in the same impl class until these gaps are closed.
- Detect driver/executor via system properties or presence of marker classes using TCCL.
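A sketch of such marker-class detection follows; the Spark class names used as markers are illustrative guesses, not confirmed internals, and the class name is hypothetical:

```java
// Sketch: decide the process role by probing for marker classes on the TCCL,
// with a system-property override taking precedence.
public final class RoleDetector {
    public static String detectRole() {
        // Explicit override via system property wins over auto-detection.
        String forced = System.getProperty("btrace-spark.role");
        if (forced != null && !forced.equals("auto")) {
            return forced;
        }
        ClassLoader tccl = Thread.currentThread().getContextClassLoader();
        if (isPresent("org.apache.spark.executor.Executor", tccl)) {   // illustrative marker
            return "executor";
        }
        if (isPresent("org.apache.spark.SparkContext", tccl)) {        // illustrative marker
            return "driver";
        }
        return "unknown";
    }

    static boolean isPresent(String name, ClassLoader cl) {
        try {
            Class.forName(name, false, cl); // probe without initializing the class
            return true;
        } catch (ClassNotFoundException | LinkageError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Prints "unknown" when Spark is not on the classpath.
        System.out.println(detectRole());
    }
}
```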
`extensions.conf` (examples):

```properties
# Spark
btrace-spark.enabled=true
btrace-spark.role=auto  # auto|driver|executor
# optional: only if the app does not ship required libs
btrace-spark.classpath=/opt/spark/jars

# Hadoop
btrace-hadoop.enabled=false
# btrace-hadoop.classpath=/opt/hadoop/share/hadoop/common
```

- Typical: `REFLECTION`, `THREADS`, `SYSTEM_PROPS`.
- Optional: `CLASSLOADER` if creating a child `URLClassLoader` from configured paths.
- If absolutely unavoidable, append a single jar to the system CL:
  `-Dbtrace.system.appendJar=/abs/path/lib.jar -Dbtrace.trusted=true`
- Restricted to `BTRACE_HOME` by default; override with `-Dbtrace.allowExternalLibs=true`.
```java
public interface HadoopApi { void onFsOp(Object op); }

public final class HadoopApiImpl implements HadoopApi {
    @Override
    public void onFsOp(Object op) {
        // Resolve org.apache.hadoop.fs.FileSystem via TCCL and reflectively extract fields
    }
}
```

- Extract a minimal API for probes; avoid app types.
- Move environment-specific logic to impl; resolve app types via object hand-off/TCCL.
- Add extension config (role, optional classpath hints).
- Request permissions; add no-op shims when unavailable.
- Prefer eager load if APIs must be present before probes start.
- Keep APIs small and stable; impls can evolve independently.
- Cache MethodHandles for performance; avoid repeated reflective lookups.
- Do not rely on global classpath mutation; it’s discouraged and may be removed.
- Spark example: `btrace-extensions/examples/btrace-spark`
- Hadoop example: `btrace-extensions/examples/btrace-hadoop`
See also: docs/examples/README.md for quick build and configuration snippets.