Feature: Automated Creation Based on Example for PyTorch Linear Modules with ReLU Activations #126
Conversation
Please feel free to give feedback and contribute to this feature, especially regarding the subsequent Keras support.
Great job on this pull request! Clear structure and use of examples.
```python
arch.append(pnn.to_begin())
# Walk the torchinfo layer summaries, skipping the first two entries.
for idx, layer in enumerate(summary_list[2:], start=1):
    if layer.class_name == "Linear":
```
If more module types are to be parsed in the future, helper functions or a builder class keyed on layer.class_name would avoid cluttering the parse() function with ever more if conditions.
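A minimal sketch of that idea, with hypothetical builder names (nothing here is taken from the PR itself):

```python
# Hypothetical sketch: dispatch on the torchinfo class_name instead of
# growing if/elif chains inside parse(). All names are illustrative.

def build_linear(idx, layer):
    # 'layer' is assumed to be a torchinfo LayerInfo-like object.
    return f"% tikz for fully connected block {idx}, out={layer.output_size[-1]}"

def build_relu(idx, layer):
    return f"% tikz for activation block {idx}"

BUILDERS = {
    "Linear": build_linear,
    "ReLU": build_relu,
    # Future module types register here instead of extending parse().
}

def parse_layer(idx, layer):
    builder = BUILDERS.get(layer.class_name)
    return builder(idx, layer) if builder is not None else None
```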
I agree, and this becomes even more important as the number of supported PyTorch module types (layers, activations) grows. However, I oriented myself on the existing coding style in blocks.py and tikzeng.py.

Further, I propose abstracting the parsing and tikz-code construction into collections of layers that map to a similar tikz representation, plus a general activation representation that maps to all PyTorch-implemented activation functions.

That said, all of this could, and in my opinion should, be an improvement and extension built on top of this basic functionality.
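A rough sketch of that grouping, assuming the activation classes can be discovered from torch.nn.modules.activation (illustrative only, not part of this PR):

```python
# Hypothetical sketch: group module types that share one tikz representation,
# and derive the set of activation classes from torch.nn.modules.activation
# rather than listing them by hand.
import inspect

import torch.nn as nn
import torch.nn.modules.activation as nn_act

# All nn.Module subclasses defined in the activation module; note that
# MultiheadAttention also lives there, so it (and the imported Module base)
# must be excluded.
ACTIVATION_NAMES = {
    name
    for name, obj in inspect.getmembers(nn_act, inspect.isclass)
    if issubclass(obj, nn.Module)
} - {"MultiheadAttention", "Module"}

DENSE_LIKE = {"Linear", "Bilinear", "LazyLinear"}  # share one tikz block style

def tikz_group(class_name: str) -> str:
    """Map a torchinfo class_name to a shared tikz representation group."""
    if class_name in DENSE_LIKE:
        return "dense"
    if class_name in ACTIVATION_NAMES:
        return "activation"
    return "unsupported"
```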
Addressed comments from @MyGodItsFull0fStars
Ready as initial functionality for PyTorch automated generation support. Please review and merge if deemed OK.
hey guys, any plan to support conv layers?
Hey @space192, I am currently occupied and unable to push the PR further, but we would happily accept your extension to CNNs. You can create a PR against that branch of my fork so it shows up here.
Work in progress

The PR addresses #124: automated generation from a PyTorch module class (a `torch.nn.Module` child), leveraging the `torchinfo` architecture summary interface, comparable to the TensorFlow/Keras `summary` method.

[Example figure, generated from the example code attached to the PR.]

The `Conv` tikz block is used for fully connected layers until a specialized function is provided.

TODOs for subsequent PRs

Addressed #124 with respect to PyTorch
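For context, a minimal sketch of the `torchinfo` interface this parser consumes; the model and sizes are made up for illustration:

```python
# Sketch of obtaining the summary_list consumed by the parser. summary()
# returns a ModelStatistics object; its summary_list holds one LayerInfo per
# module, with attributes such as class_name and output_size.
import torch.nn as nn
from torchinfo import summary

model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

stats = summary(model, input_size=(1, 784), verbose=0)
summary_list = stats.summary_list
for info in summary_list:
    print(info.class_name, info.output_size)
```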