HDFS support #7
The way I've added HDFS support into this:
- Masters are NameNodes in an HA configuration with auto-failover (might as well while we're at it, I figure...)
Once I clean things up (like removing my AWS creds from cluster.yaml) I can fork and create a PR if you want. Doing this required some appreciable changes to the Vagrantfile for multinodes to ensure the HDFS configuration was inserted into the chef.json object (I am NOT a Ruby programmer, so there is probably a better/more robust way to do it than mine, but my way works...), plus adding the hadoop cookbook to the Berksfile.
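For a rough idea of the shape of those two changes, here is a minimal sketch, assuming the Vagrantfile already builds a per-node attribute hash before handing it to the Chef provisioner. The variable name `chef_json`, the nameservice id, and the values are illustrative assumptions rather than the actual code; the `hdfs_site` attribute layout just follows the community hadoop cookbook's convention:

```ruby
# Berksfile -- pull in the community hadoop cookbook alongside the existing entries
cookbook 'hadoop'

# Vagrantfile (multinode) -- merge HDFS/HA attributes into the per-node hash
# (called chef_json here; the real variable name will differ) before it is
# handed to the Chef provisioner. The values below are purely illustrative.
chef_json[:hadoop] = {
  :hdfs_site => {
    'dfs.nameservices'                  => 'mesoscluster',
    'dfs.ha.namenodes.mesoscluster'     => 'nn1,nn2',
    'dfs.ha.automatic-failover.enabled' => 'true'
  }
}

node.vm.provision :chef_solo do |chef|
  chef.json = chef_json          # HDFS config now rides along with everything else
  chef.add_recipe 'hadoop'       # cookbook's default recipe; exact run list TBD
end
```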
Thank you @24601 !! Has your PR already been merged?? Yes, I agree with you: I think it would be good for JournalNode and DataNode to run only on slaves. I really appreciate your contributions, and I'm happy to review your changes to chef.json.
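For the master/slave split being discussed, one way it could look in the Vagrantfile is sketched below; the per-daemon recipe names are a guess at the community hadoop cookbook's layout (check its README for the exact names), and `master?` is a hypothetical per-node flag:

```ruby
# Sketch only: NameNode plus failover controller on masters, JournalNode and
# DataNode on slaves. Recipe names are assumed, not verified against the cookbook.
node.vm.provision :chef_solo do |chef|
  if master?                                      # hypothetical per-node flag
    chef.add_recipe 'hadoop::hadoop_hdfs_namenode'
    chef.add_recipe 'hadoop::hadoop_hdfs_zkfc'    # automatic-failover controller
  else
    chef.add_recipe 'hadoop::hadoop_hdfs_journalnode'
    chef.add_recipe 'hadoop::hadoop_hdfs_datanode'
  end
end
```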
@everpeace, thanks for the quick reply! Happy to help, and I hope my contribution is useful. I'll be cleaning up the code and will submit a PR soon. Here are a few answers before that:
Still working on the original project this was done for; I'll clean up and submit the PR once that's done! *Uh, I take part of that back: 13.04 is already EOL'ed. I could step back to 12.04 LTS, which has good support, but I'd rather figure out the leap forward to 14.04 while I'm at it. This is a bit of a separate issue, but I'll likely work on it and might just roll all my changes into one PR. I know the hadoop cookbook works well with 14.04, albeit officially unsupported.
@everpeace , I'm making the changes/doing the clean-up discussed above to include HDFS support and move things to Ubuntu 14.04 LTS. Not ready for a PR yet, but if you want, the changes are being made and occasionally synced to my fork here: https://github.com/24601/vagrant-mesos Feel free to make suggestions/comments. Like I said, I'm not a Ruby programmer or even too proficient with Vagrant, but I know enough to bumble-F my way through this to get it working as part of a larger project, and I'm happy to share my work even if it's not the greatest.
I greatly appreciate your contribution again, @24601! I'm not so proficient in HDFS myself, so I'm really glad for your help! I've looked over your Vagrantfile and the various comments. After your clean-up, I expect that
About Ubuntu, I'm not such a heavy user of it. I think
One of the problems I've had with HDFS is that it requires absolute (static) IP addresses; for some reason, a lot of the Hadoop ecosystem doesn't play well with addresses that change. I'm not sure whether this has been resolved, but HDFS would be very helpful. I'm not a Chef expert, so I've installed HDFS manually, along with Spark.
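A common Vagrant-side workaround for that is to pin each box to a static private-network IP (and make its hostname resolve to it) so the HDFS daemons always see the same address. A minimal sketch, with a made-up hostname and IP:

```ruby
# Sketch: fix the box's private-network IP so HDFS always binds to / advertises
# the same address. The hostname and IP below are made up for illustration.
config.vm.define 'master1' do |node|
  node.vm.hostname = 'master1'
  node.vm.network :private_network, ip: '192.168.33.10'
  # make the name resolve to the fixed IP on the box itself
  node.vm.provision :shell,
    inline: "grep -q master1 /etc/hosts || echo '192.168.33.10 master1' >> /etc/hosts"
end
```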