
RFC-0051-Structured-call_hierarchy-metadata-for-FX-nodes #92

Open

andrewd-aws wants to merge 1 commit into pytorch:master from andrewd-aws:andrewd/hierarchy_rfc

Conversation

@andrewd-aws

RFC for call hierarchy metadata


meta-cla bot commented Apr 7, 2026

Hi @andrewd-aws!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!


The implementation builds on existing Dynamo infrastructure. During proxy creation, `stack_trace` is already built by walking the InstructionTranslator (tx) parent chain. `call_hierarchy` is built during the same walk: at each frame, if `nn_module_stack` has new keys compared to the previous frame, the frame is a module entry; otherwise it is a function entry (with torch-internal and other non-meaningful frames filtered out). Because both module and function entries come from the same ordered traversal, the interleaving is correct by construction. This works regardless of how modules override `__call__` or `forward`. Module invocation counts come from `nn_module_stack`'s existing `@N` key suffix. Function invocation counts are tracked via a `function_call_counts` dict shared across the tx chain, incremented each time an `InliningInstructionTranslator` is created.
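
A minimal sketch of that walk, using simplified stand-ins for Dynamo's internals (`Frame`, its fields, and the entry format are illustrative, not the real types):

```python
from dataclasses import dataclass

# Illustrative stand-in: the real walk operates on InstructionTranslator
# frames, not this simplified Frame type.
@dataclass
class Frame:
    nn_module_stack: dict   # keys carry the existing "@N" invocation suffix
    function_name: str
    filename: str

def build_call_hierarchy(frames, function_call_counts):
    """Walk frames outermost-first, emitting interleaved entries."""
    hierarchy = []
    seen_module_keys = set()
    for frame in frames:
        new_keys = [k for k in frame.nn_module_stack
                    if k not in seen_module_keys]
        if new_keys:
            # New nn_module_stack keys mean this frame entered a module.
            hierarchy.extend(("module", k) for k in new_keys)
            seen_module_keys.update(new_keys)
        elif not frame.filename.startswith("torch/"):
            # Otherwise it is a plain function entry; torch-internal
            # frames are skipped, mirroring the stack_trace filtering.
            count = function_call_counts.get(frame.function_name, 0)
            hierarchy.append(("function", f"{frame.function_name}@{count}"))
    return hierarchy
```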

The feature can be gated by `torch._dynamo.config.record_call_hierarchy` (default `False`); when disabled, no additional work is performed. Alternatively, if the overhead proves low enough, it could be enabled by default. A usage sketch follows.
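
The flag below is the one proposed in this RFC and does not exist in torch today, so this is an aspirational sketch rather than working code:

```python
import torch
import torch.nn as nn

# Proposed in this RFC; not an existing config option yet.
torch._dynamo.config.record_call_hierarchy = True

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.proj(x))

def inspect_backend(gm, example_inputs):
    # Under the proposal, each FX node's meta would carry the new key
    # alongside the existing "stack_trace" and "nn_module_stack".
    for node in gm.graph.nodes:
        print(node.name, node.meta.get("call_hierarchy"))
    return gm.forward

compiled = torch.compile(Tiny(), backend=inspect_backend)
compiled(torch.randn(2, 4))
```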

I think this is an interesting callout - from the prototyped implementation, do we have a feeling for the runtime overhead?


We're actively working on getting some numbers on this.


## Alternatives

- Parse `stack_trace` strings at consumption time. This is what consumers do today; it is fragile, lacks invocation counts, and cannot be correctly interleaved with `nn_module_stack` (see the sketch below).
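
For illustration, a minimal sketch of this status quo: scraping the `stack_trace` string stored in `node.meta` (the traceback text here is made up):

```python
import re

# A stack_trace value is a plain traceback dump; this one is invented.
stack_trace = (
    '  File "model.py", line 8, in forward\n'
    "    q = self.qkv(x)\n"
)

# Scrape (file, line, function) triples out of the string.
frames = re.findall(r'File "(.+?)", line (\d+), in (\w+)', stack_trace)
print(frames)  # [('model.py', '8', 'forward')]

# Fragile: module frames were already filtered out before the string was
# built, repeated calls to the same line are indistinguishable (no
# invocation counts), and mapping "self.qkv" back to a module requires
# reading model.py at consumption time.
```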

I'm interested in why this is fragile. stack_trace gives you the exact line, which one can use for line-level profiling, etc. Can you give a little more detail on why the new proposal would be easier to use than stack_trace?


It would be helpful if you could describe a concrete use case for the new node meta.

@nklshy-aws commented Apr 13, 2026


Sorry for the delay on this.

Yes, I agree stack_trace works well for line-level profiling and pointing developers to source locations.

A concrete use case where this breaks down for us is profiling: we want to display each instruction annotated with its origin in the model as a hierarchical path, so users can expand and collapse the model structure to identify bottlenecks across related ops at any level of granularity. Building this path requires knowing, for each FX node, the complete interleaved sequence of modules and functions that produced it, including invocation counts to distinguish repeated calls to shared modules or helpers. nn_module_stack provides the module nesting but misses function calls entirely, while stack_trace includes functions but filters out module frames and carries no invocation counts; neither alone nor in combination lets you reconstruct the interleaved path with invocation counts correctly. The kind of annotation we are after is sketched below.
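
For concreteness (module and function names here are made up for illustration):

```python
# Hypothetical origin paths for two FX nodes produced by the same shared
# helper; the "@N" counts are what let a profiler UI keep them apart.
node_origin = {
    "add_1": ["Block@0", "Attention@0", "split_heads@1"],
    "add_2": ["Block@1", "Attention@0", "split_heads@0"],
}
# Grouping nodes by path prefix yields an expandable tree: collapsing
# "Block@0" aggregates its ops, expanding it drills into Attention@0, etc.
```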

So essentially the difficulty is reconstructing the hierarchical scope tree from what we currently output for op provenance. Module frames from torch/nn/modules/ are filtered out by is_co_filename_from_nn_modules(), so module boundaries don't appear in the trace. You could try to recover them by parsing the source file at the line number indicated in the trace (e.g., read line 8, see self.qkv(x), infer the module), but this requires source-file access at consumption time, and I expect it to break for dynamic attribute access, generated code, etc. Invocation counts for shared modules or functions called in a loop are also unrecoverable. The proposed call_hierarchy would provide this information directly; a sketch of how we would consume it follows.
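
Assuming the ("module"/"function", "name@N") entry shape discussed above (not a settled API), consumption on our side could look like:

```python
from collections import defaultdict

def new_scope():
    # Each scope holds child scopes plus the FX nodes attributed to it.
    return {"children": defaultdict(new_scope), "nodes": []}

def insert(root, call_hierarchy, fx_node_name):
    scope = root
    for kind, name in call_hierarchy:
        scope = scope["children"][(kind, name)]
    scope["nodes"].append(fx_node_name)

root = new_scope()
# Two ops from repeated invocations of a shared module land in distinct
# subtrees because their "@N" counts differ; no source access needed.
insert(root, [("module", "Block@0"), ("function", "helper@0")], "add_1")
insert(root, [("module", "Block@1"), ("function", "helper@0")], "add_2")
```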
