Problems About Memory Monitoring #125

Open
MrLi2018 opened this issue Mar 5, 2022 · 15 comments

Comments


MrLi2018 commented Mar 5, 2022

In the following case, an infinite loop is performed, but I do not know why the memory usage increases sharply.


MrLi2018 commented Mar 5, 2022

[screenshot attached]


MrLi2018 commented Mar 5, 2022

[screenshot attached]

mxro (Collaborator) commented Mar 5, 2022

Thanks for raising this issue!

Interesting. I would assume the sandbox adds the interrupt function and then calls it on every loop iteration:

var __it = Java.type('delight.nashornsandbox.internal.InterruptTest');var __if=function(){__it.test();};

You can double-check what JS is generated by setting the property log4j.logger.delight.nashornsandbox.NashornSandbox=DEBUG.

I don't think this should lead to a memory leak, though. Maybe it is Nashorn internals that allocate more objects?
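The injection pattern described above can be sketched in plain Java (illustrative names only, not the library's actual code): a flag is polled once per loop iteration, and a monitoring thread flips it to break the script out of an otherwise infinite loop.

```java
// Sketch of the per-iteration interrupt check the sandbox injects into
// scripts. Names are illustrative; this is not the library's source.
public class InterruptCheckSketch {
    static volatile boolean stopRequested = false;

    // Analogous in spirit to delight.nashornsandbox.internal.InterruptTest.test():
    // invoked once per loop iteration, throws to break out of the script.
    static void test() {
        if (stopRequested) {
            throw new RuntimeException("Script interrupted");
        }
    }

    // Runs an "infinite" loop with the injected check; in the sandbox a
    // monitor thread would set the stop condition when limits are exceeded.
    static long runUntilStopped() {
        long iterations = 0;
        try {
            while (true) {
                test();                       // injected check
                if (++iterations == 1_000_000) {
                    stopRequested = true;     // simulate the monitor firing
                }
            }
        } catch (RuntimeException e) {
            return iterations;
        }
    }

    public static void main(String[] args) {
        // prints "stopped after 1000000 iterations"
        System.out.println("stopped after " + runUntilStopped() + " iterations");
    }
}
```

The check itself allocates nothing per iteration, which supports the point above: the injected call is unlikely by itself to explain a sharp memory increase.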


MrLi2018 commented Mar 7, 2022

Thank you for your answer. I'm not saying it causes a memory leak; I'm asking why the memory increase is related to this injection method, which seems very simple.


MrLi2018 commented Mar 7, 2022

Because now I'm finding that some JS code that previously did not run out of memory now does, and I'm not sure how to analyze it.

mxro (Collaborator) commented Mar 10, 2022

Yes, I don't see anything in the code here that could cause a memory leak. I would assume it is some allocation that Nashorn makes on every method invocation.

But if you see anything in the code generated by the sandbox that looks like it could cause a memory leak, please let me know and I'll be happy to patch it!

@MrLi2018

threadMonitor.registerThreadToMonitor(Thread.currentThread());
synchronized (this.monitor) {
    if (this.threadToMonitor == null) {
        this.monitor.wait((this.maxCPUTime + 500L) / 1000000L);
    }

    if (this.threadToMonitor == null) {
        this.timedOutWaitingForThreadToMonitor = true;
        throw new IllegalStateException("Executor thread not set after " + this.maxCPUTime / 1000000L + " ms");
    }
}

Hello, I'd like to ask whether this.threadToMonitor can be null when Thread.currentThread() is in the interrupted state.
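(An editorial aside relevant to this question: Object.wait() throws InterruptedException immediately when the calling thread's interrupt status is already set, so an interrupted thread would not sit out the timeout and fall through with threadToMonitor still null; it would exit the wait via the exception instead. A self-contained demonstration of that JDK behavior, independent of the sandbox:)

```java
public class WaitInterruptSketch {
    // Demonstrates that Object.wait() throws InterruptedException right away
    // when the calling thread's interrupt status is already set on entry.
    static boolean waitWasInterrupted() {
        Object monitor = new Object();
        Thread.currentThread().interrupt();   // enter wait() already interrupted
        synchronized (monitor) {
            try {
                monitor.wait(100);            // would otherwise block ~100 ms
                return false;
            } catch (InterruptedException e) {
                return true;                  // thrown without waiting
            }
        }
    }

    public static void main(String[] args) {
        // prints "interrupted: true"
        System.out.println("interrupted: " + waitWasInterrupted());
    }
}
```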

@MrLi2018

Hello mxro, I've found a strange phenomenon: my total heap memory is only 800 MB, but the sandbox calculates 3488 MB. I don't know why the error is so large.

[screenshots attached]

@mxro

mxro (Collaborator) commented Mar 27, 2022

Could this be related to the stage memory?

// if maxMemory is larger than 100M, split heap memory count into 4 stage, and offset will be 2

Otherwise the Sandbox doesn't do much more than read the memory through the bean:

private long getCurrentMemory() throws InterruptedException {
- assuming that's not the most accurate to begin with?
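(Editorial note: if getCurrentMemory() is backed by com.sun.management.ThreadMXBean.getThreadAllocatedBytes, which is an assumption here but a common way to meter a script thread, then the value read is a cumulative allocation counter, not the live heap size. Such a counter can legitimately exceed the configured heap, e.g. report thousands of MB against an 800 MB heap, because already-collected garbage still counts. A quick demonstration:)

```java
import java.lang.management.ManagementFactory;

public class AllocatedBytesSketch {
    // Allocates ~512 MB of short-lived arrays and reports how much the
    // per-thread allocation counter grew. The counter is cumulative: it
    // keeps growing even though almost none of this memory stays live,
    // so it can exceed the configured heap size.
    static long allocatedDeltaMb() {
        com.sun.management.ThreadMXBean bean =
                (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
        long id = Thread.currentThread().getId();
        long before = bean.getThreadAllocatedBytes(id);
        for (int i = 0; i < 512; i++) {
            byte[] garbage = new byte[1024 * 1024]; // 1 MB, immediately collectable
            garbage[0] = 1;                         // keep the allocation observable
        }
        long after = bean.getThreadAllocatedBytes(id);
        return (after - before) / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("allocated ~" + allocatedDeltaMb() + " MB on this thread");
    }
}
```

Whether this is what the sandbox actually measures should be confirmed against ThreadMonitor; if it is, comparing the reading directly against total heap size would be misleading.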

@MrLi2018

It can't be related to stage memory. I used version 0.2.0.

@MrLi2018

After my thread runs for a long time, the memory value suddenly increases at some point. I suspect that the memory calculation thread becomes inaccurate after running for a long time.

@MrLi2018

delight-nashorn-sandbox/src/main/java/delight/nashornsandbox/internal/ThreadMonitor.java

Line 65 in 73b8a92

	// if maxMemory is larger than 100M, split heap memory count into 4 stage, and offset will be 2

This problem also occurs when threads using less than 100 MB of memory run for a long time. I'd suggest fixing this.


mxro (Collaborator) commented Apr 2, 2022

So what would be the easiest solution?


MrLi2018 commented Apr 2, 2022

I'm not sure at present. For now I've just set the memory limit to 100 MB so the staged calculation is used, but that only avoids the problem; the error still exists.
