
JOSS paper review remarks #45

Open
inpefess opened this issue Aug 11, 2024 · 0 comments

Good job! I enjoyed reading the paper draft (openjournals/joss-reviews#6468 (comment)) and learning more about your project. Here is the list of review remarks line by line:

  • line 7, "Hierarchical reasoning poses a fundamental challenge in the field of artificial intelligence." --- please cite any sources on hierarchical reasoning and its AI challenges
  • line 8, "Existing methods may struggle when confronted with hierarchical tasks" --- please cite papers confirming this claim
  • lines 8-9, "there is a scarcity of suitable environments or benchmarks designed to comprehend how the structure of the underlying hierarchy influence a task difficulty" --- I haven't found an explanation of how HierarchyCraft helps to comprehend the influence of hierarchy structure on task difficulty.
  • lines 14-15, "tasks ... that do not necessitate feature extraction. This includes tasks containing pixel images, text, sound" --- I can't help reading it as "tasks that do not necessitate feature extraction include tasks containing images etc". Could you please reformulate these two sentences to make them less ambiguous?
  • lines 15-16, "or any data requiring deep-learning based feature extraction" --- I agree that deep learning is a go-to method for feature extraction nowadays, but I don't think that any data requires it. Could you please reformulate?
  • line 24, "current hierarchical benchmarks often limit themselves to a single hierarchical structure per benchmark" --- HierarchyCraft is compared to RL benchmarks, but is it a benchmark itself? I don't see such a statement anywhere in the paper.
  • line 52, "a undeniably complex hierarchical structure" --- probably "an undeniably"
  • lines 52-53, "this underlying hierarchical structures is fixed" --- probably "these underlying hierarchical structures are fixed"
  • line 63, "e.g., Swords " --- is the capital letter really needed here?
  • line 64, "easier.), " --- probably the full stop is unnecessary
  • line 87, "But each Transformations has" --- probably "each Transformation" or "each of Transformations"
  • line 88, "(eg. have" --- it's probably better to use consistent abbreviations throughout the paper, and you use "e.g." in other cases
  • line 88, "enought" --- probably "enough"
  • line 90, "HierarchyCraft directly provides a low-dimensional latent representation that does not require learning, as depicted in Figure 5." --- I don't see how representations in Figure 5 are latent. Could you please explain or drop the word "latent"?
  • line 100, "This not only saves computational time" --- I would suggest highlighting your contribution of a library of environments in HierarchyCraft. If I got it right, one has to code the transformations by hand to avoid representation learning. So, for included environments, it's an important contribution in my opinion, but the reader should also be warned that if they want to add environments of their own, they will have to do this job themselves. The framework's design won't do it for them automatically.

I also have several general remarks.

  1. The title promises to talk about benchmarking with HierarchyCraft but it doesn't seem to happen. Instead, you call HierarchyCraft "a lightweight environment builder". I understand that "a set of pre-defined hierarchical environments" that you mention is indeed a benchmark, but I would like it to be clearly stated.
  2. From the paper, I don't learn anything about the environments available in HierarchyCraft. I think it's important to mention them, in particular because the creation of requirements graphs for them is exactly the contribution that helps other researchers skip the representation-learning part and focus on the study of hierarchical structures per se. On the other hand, descriptions of existing benchmarks might be shortened if needed to keep the paper within JOSS size standards.
  3. Please double check the citations and add DOIs as highlighted by the editorial bot. For example, I see that the citation for MiniGrid is not the one recommended (https://github.com/Farama-Foundation/Minigrid?tab=readme-ov-file#citation).