Greetings!
I just wanted to clarify some doubts and present my rough idea and plan for idea [12] "Develop a drag-and-drop GUI for deep learning experimentation".
First of all, the queries I'd like to be resolved are:

1. Will the proposed application be used only for working with digital imagery (i.e. torchvision), or for all-round neural network development/experimentation, supporting base torch and torchaudio as well?

2. The idea description mentions that "Predefined network architectures should be embedded as blocks." What would these predefined architectures actually be? Do they refer to pre-trained architectures such as VGG or ResNet, or to basic building blocks like simple hidden-layer (MLP) and CNN architectures?

3. The idea description also mentions PyQt5. Is this absolutely necessary for the project? A browser-based front-end can easily be used on low-end devices, with the actual torch model running on a remote server.

4. torch is a fairly large library, so it is hard for a GUI to wrap every feature the Python library provides. There has to be some level of abstraction in the project, and some fine-grained tweaking and niche functionality will have to be sacrificed for a clutter-free, comfortable experience. I'd like to know what level of abstraction should be followed here.
Rough Idea:
I have decided to follow a modular approach with low coupling, so that a browser-based GUI (or any other front-end) can be easily connected to the back-end of the application. The back-end and wrapper would expose a set of APIs that any other front-end implementation or library could use if needed in the future. The GUI would write a config.json file, which is forwarded to a wrapper that decodes the JSON and communicates with the torch back-end to build and interact with the neural networks. Access to the JSON file would be guarded by a lock (mutex) so that both ends stay in sync and do not overwrite each other's changes.
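To make the JSON-to-torch flow concrete, here is a minimal sketch of how the wrapper might decode a config.json into a model. The schema (a `model.layers` list with a `type` field per layer) and the `build_model` helper are assumptions for illustration, not the final format:

```python
import json
import torch.nn as nn

# Hypothetical config.json written by the GUI; the exact schema is not fixed yet.
CONFIG = """
{
  "model": {
    "layers": [
      {"type": "Linear", "in_features": 8, "out_features": 16},
      {"type": "ReLU"},
      {"type": "Linear", "in_features": 16, "out_features": 1}
    ]
  }
}
"""

def build_model(config: dict) -> nn.Sequential:
    """Decode the layer list from the config and assemble an nn.Sequential model."""
    layers = []
    for spec in config["model"]["layers"]:
        layer_cls = getattr(nn, spec.pop("type"))  # e.g. nn.Linear, nn.ReLU
        layers.append(layer_cls(**spec))           # remaining keys become layer kwargs
    return nn.Sequential(*layers)

model = build_model(json.loads(CONFIG))
print(model)
```

Keeping the wrapper as the only component that touches torch directly is what lets the front-end be swapped out (PyQt5, browser, or anything else) without changing the back-end.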
Current state:
Currently, I am working on a demo; in its current state it supports data loading and creating basic regression and classification models. It also supports building neural networks layer by layer as well as via PyTorch's Sequential() API. I have tested the application by creating networks on the classic Titanic and California Housing datasets.
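For reference, the two construction paths the demo supports correspond roughly to the following plain-PyTorch patterns (the layer names and sizes for a Titanic-style tabular input are illustrative, not the demo's actual defaults):

```python
import torch.nn as nn

# Path 1: build the network layer by layer, e.g. as the user drops in one block at a time.
model_stepwise = nn.Sequential()
model_stepwise.add_module("fc1", nn.Linear(7, 32))  # 7 tabular input features (illustrative)
model_stepwise.add_module("act1", nn.ReLU())
model_stepwise.add_module("fc2", nn.Linear(32, 1))  # single logit for binary classification

# Path 2: hand the whole stack to the Sequential() API in one go.
model_sequential = nn.Sequential(
    nn.Linear(7, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)
```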
I currently plan to improve the prototype/demo and write some basic documentation. Once the prototype reaches a presentable stage, I'll share it along with my final proposal.
I would also appreciate feedback on the current approach, possible improvements, or alternate paths worth following.
Thank you!