java.lang.NoClassDefFoundError: Could not initialize class org.apache.jena.riot.system.RiotLib #6
Hi. Also, can you describe how you created the Maven artifact? I guess
Hi Lorenz, thanks. I figured this was just a test dir for beginners. I'm using the exact POM file from the develop branch, which uses the 0.7.2 version. And you're correct, I'm using Do you recommend switching to the 0.7.1 version?
Well, the latest version should work, so no need to go back, I think. Let me check what's going wrong here. I've seen this issue before, but I thought it had been resolved already - at least it shouldn't happen with the ServicesResourceTransformer in the Maven Shade plugin enabled, which is the case here. By the way, I'll also reply to your mailing list question once I've found a good answer.
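For context, the transformer mentioned above is typically enabled with a shade-plugin fragment along these lines (a minimal sketch; plugin version, executions, and the rest of the POM are omitted, and whether this alone resolves the error here is not confirmed). Jena registers its subsystems via `ServiceLoader`, so if the `META-INF/services` files are dropped while building an uber-jar, initialization fails and classes like `RiotLib` surface as `NoClassDefFoundError`:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <!-- Merges META-INF/services entries from all dependencies so that
           Jena's ServiceLoader-based initialization still finds them
           in the shaded jar. -->
      <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
    </transformers>
  </configuration>
</plugin>
```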
I'm also having the same issue on Spark 2.2.1, Scala 2.11.8, JDK 1.8.
Hi. Do you really want to use such an old Spark version?
I have just switched to Spark 2.4.8. I also tried the example in https://github.com/SANSA-Stack/SANSA-Stack, but the problem still persists. I have now downgraded SANSA to
Wait a second: what exactly do you want to do (loading which files), and what exactly are you doing to use SANSA? I mean, the Maven template is nothing more than a stub of the dependencies; you won't even need all of them if, for example, you just want to load the RDF data. And which file format do you want to load? The most efficient one is certainly N-Triples, as this format is splittable.
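The splittability point can be illustrated: in N-Triples every line is a complete triple, so record boundaries always coincide with line boundaries and a file can be split at arbitrary byte offsets (as HDFS/Spark do) with each split still parseable on its own. A minimal sketch of a per-line parse (not SANSA's or Jena's parser; it ignores literal escaping, comments, and blank lines):

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NtLine {
    // Each N-Triples line is a self-contained "subject predicate object ."
    // record, which is what makes the format splittable: a partition
    // starting at any line boundary can be parsed independently.
    private static final Pattern TRIPLE =
            Pattern.compile("^\\s*(\\S+)\\s+(\\S+)\\s+(.+?)\\s*\\.\\s*$");

    static List<String> parse(String line) {
        Matcher m = TRIPLE.matcher(line);
        if (!m.matches()) {
            throw new IllegalArgumentException("not a triple: " + line);
        }
        return List.of(m.group(1), m.group(2), m.group(3));
    }
}
```

By contrast, formats such as RDF/XML or Turtle carry document-wide state (prefixes, nesting), so a worker cannot start parsing mid-file, and the whole file must go through a single reader.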
We want to use SANSA for loading RDF into Spark, as you suspected. I am aware that we only need
Just to update: I have tried many things. I couldn't fix it, but I found an obvious workaround that I didn't think of before: the
Hello,
When running the example on a Spark cluster using 'spark-submit', the following error is encountered. Any ideas what might be causing this?
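Independent of SANSA, the JVM behaviour behind this error message can be reproduced in a few lines (a self-contained sketch; `Flaky` merely stands in for `RiotLib`): "Could not initialize class X" means X's static initializer already failed once, and every later use of X is reported as a bare `NoClassDefFoundError` that hides the root cause. The first failure - often much earlier in the driver or executor logs - is the one worth hunting for.

```java
// Demonstrates the two-phase failure: the first use of a class whose
// static initializer throws yields ExceptionInInitializerError (with the
// root cause attached); every subsequent use yields
// "NoClassDefFoundError: Could not initialize class ...".
public class InitFailureDemo {

    static class Flaky {
        static {
            // "if (true)" keeps the compiler happy about reachability.
            if (true) throw new RuntimeException("simulated init failure");
        }
    }

    private static String cached;

    static String probe() {
        if (cached != null) return cached;
        cached = attempt() + " then " + attempt();
        return cached;
    }

    private static String attempt() {
        try {
            new Flaky();
            return "ok";
        } catch (Throwable t) {
            return t.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
        // ExceptionInInitializerError then NoClassDefFoundError
    }
}
```

With Jena, a classic trigger for that first failure is an uber-jar built without merging `META-INF/services` entries, so Jena's `ServiceLoader`-driven initialization cannot find its subsystem registrations.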