Minnesota Supercomputing Institute
1.0.3
1.0.3, 2.5.1, 2.7.1, 2.7.1.java8, 2.8.5, testing.3.2.0
Tuesday, August 29, 2023
The Hadoop Map/Reduce framework harnesses a cluster of machines and executes user-defined Map/Reduce jobs across the nodes in the cluster. On Itasca, a script exists to create an ephemeral Hadoop cluster on the set of nodes assigned by the scheduler. The setup_cluster script formats an HDFS filesystem on the local scratch disks of those nodes.
This resource is best suited for application benchmarking and algorithm testing. Because the cluster is ephemeral, all data must be moved into HDFS after the cluster is brought up at the start of the job, and any data you wish to save must be moved back to your home directory before the job completes. Many job scripts will follow the pattern:
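A minimal sketch of that pattern is shown below, assuming a PBS batch script; the node counts, walltime, module version, jar name, and file paths are illustrative placeholders, not required values.

    #!/bin/bash -l
    #PBS -l nodes=4:ppn=8,walltime=02:00:00
    #PBS -m abe

    cd $PBS_O_WORKDIR

    # Load Hadoop and build the ephemeral cluster on the assigned nodes
    # (module name/version are assumptions; check "module avail hadoop")
    module load hadoop/2.8.5
    setup_cluster

    # Stage input data into HDFS once the cluster is up
    hadoop fs -mkdir -p /user/$USER/input
    hadoop fs -put mydata.txt /user/$USER/input/

    # Run the Map/Reduce job (wordcount.jar and its arguments are placeholders)
    hadoop jar wordcount.jar WordCount /user/$USER/input /user/$USER/output

    # Copy results back to the home directory before the job, and the HDFS
    # filesystem on local scratch, go away
    hadoop fs -get /user/$USER/output $HOME/wordcount_results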