Apache Spark 2.x Machine Learning Cookbook

How to do it...

The following are the steps for configuring IntelliJ to work with Spark MLlib and for running the sample ML code that Spark provides in the examples directory, which can be found under the Spark home directory. Use the Scala samples to proceed:
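To get a feel for what you will be running, here is a condensed sketch in the spirit of the bundled Scala ML examples. The object name is illustrative rather than taken from the examples directory verbatim, and the sample data file ships under the data/mllib directory of the Spark distribution:

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.sql.SparkSession

// A condensed, example-style program; names are illustrative.
object SampleSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SampleSketch")
      .master("local[*]") // run inside the IDE without a cluster
      .getOrCreate()

    // Sample data shipped with the Spark distribution
    val training = spark.read.format("libsvm")
      .load("data/mllib/sample_libsvm_data.txt")

    val model = new LogisticRegression()
      .setMaxIter(10)
      .setRegParam(0.01)
      .fit(training)

    println(s"Coefficients: ${model.coefficients}")
    spark.stop()
  }
}
```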

  1. Click on the Project Structure... option, as shown in the following screenshot, to configure the project settings:
  2. Verify the settings:
  3. Configure Global Libraries. Select Scala SDK as your global library:
  4. Select the JARs for the new Scala SDK and let the download complete:
  5. Select the project name:
  6. Verify the settings and additional libraries:
  7. Add dependency JARs. Select Modules under Project Settings in the left-hand pane and click on Dependencies to choose the required JARs, as shown in the following screenshot (an sbt-based alternative is sketched after this list):
  8. Select the JAR files provided by Spark. Choose Spark's default installation directory and then select the lib directory:
  9. Select the JAR files for the examples that are provided with Spark out of the box:
  10. Add the required JARs by verifying that you selected and imported all the JARs listed under External Libraries in the left-hand pane:
  11. Spark 2.0 uses Scala 2.11. Two new streaming JARs, for Flume and Kafka, are needed to run the examples; they can be downloaded from the Maven repository, as described in the steps that follow.
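If you prefer to let a build tool resolve these dependencies instead of hand-picking JARs, a build.sbt along the following lines achieves the same setup. This is a minimal sketch under assumptions: Spark 2.0.0 on Scala 2.11, and the Kafka 0.8 flavor of the connector; adjust the versions to match your installation:

```scala
// build.sbt -- a hypothetical sbt equivalent of the manual JAR setup.
// Versions are assumptions; match them to your Spark installation.
name := "spark-ml-cookbook"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.0.0",
  "org.apache.spark" %% "spark-sql"   % "2.0.0",
  "org.apache.spark" %% "spark-mllib" % "2.0.0",
  // The Flume and Kafka streaming modules; sbt resolves these from
  // Maven Central, so no manual download is needed.
  "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.0.0",
  "org.apache.spark" %% "spark-streaming-flume"     % "2.0.0"
)
```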

The next step is to download and install the Flume and Kafka JARs. For the purposes of this book, we have used the Maven repo:

  12. Download and install the Kafka assembly:
  13. Download and install the Flume assembly:
  14. After the download is complete, move the downloaded JAR files to the lib directory of Spark. We used the C drive when we installed Spark:
  15. Open your IDE and verify that all the JARs under the External Libraries folder in the left-hand pane, as shown in the following screenshot, are present in your setup:
  16. Build the example projects in Spark to verify the setup (a small classpath check is sketched after this list):
  17. Verify that the build was successful:
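Beyond a successful build, a tiny program can confirm at runtime that the streaming JARs actually made it onto the classpath. This is a hypothetical check, assuming the Kafka 0.8 flavor of the connector; a clean run with no ClassNotFoundException means both assemblies are visible:

```scala
// A hypothetical classpath check: Class.forName throws
// ClassNotFoundException if the corresponding JAR is missing.
object StreamingJarCheck {
  def main(args: Array[String]): Unit = {
    Seq(
      "org.apache.spark.streaming.kafka.KafkaUtils", // Kafka 0.8 assembly
      "org.apache.spark.streaming.flume.FlumeUtils"  // Flume assembly
    ).foreach { className =>
      Class.forName(className)
      println(s"Found $className")
    }
  }
}
```

Run it from the IDE like any other Scala application; if either class fails to load, revisit step 14 and confirm that both JARs landed in Spark's lib directory.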