Eclipse Scala

For Eclipse users, ScalaTest offers a powerful plugin that provides seamless support for testing in the Scala IDE for Eclipse. Not only does this plugin give Scala programmers the level of test-framework/IDE integration that Java programmers have enjoyed with JUnit, it goes quite a bit farther:

  1. You can right-click on any test or collection of tests and run them.
  2. You can run just the tests you select in code, run tests you select in reported results, rerun all tests or just previously failed tests, or run tests in a selected class, file, or package.
  3. The results pane mirrors the structure of the specification (i.e., if your BDD-style specification text is nested in the source, it will appear nested in the results pane).
  4. You can hop from results to test, scope, class, or line of failed code.
  5. You can unfold the top of the stack trace that is automatically folded so only the offending line of code is shown.
  6. And, because ScalaTest is a platform that can support different styles of testing, the plugin can be extended to grant full IDE support for non-ScalaTest-native styles, such as ScalaCheck Properties classes, Specs2 Specifications, or custom styles.
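
To illustrate the nested results and per-scope running described above, here is a minimal sketch of a BDD-style suite. The StackSpec name and its behaviors are made up for illustration, and it assumes ScalaTest 3.2.x's AnyFunSpec style:

```scala
import org.scalatest.funspec.AnyFunSpec

import scala.collection.mutable

// A hypothetical nested BDD-style suite: the plugin's results pane
// mirrors this describe/it nesting, and each describe block is a
// scope that can be run on its own via right-click.
class StackSpec extends AnyFunSpec {

  describe("A Stack") {

    describe("when empty") {
      it("should report size 0") {
        assert(mutable.Stack[Int]().size == 0)
      }
    }

    describe("when an element is pushed") {
      it("should pop that element back") {
        val stack = mutable.Stack[Int]()
        stack.push(42)
        assert(stack.pop() == 42)
      }
    }
  }
}
```

Right-clicking on either inner describe block in the editor runs just that scope, and the results pane shows the same two-level nesting.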

You can use the plugin with any release of ScalaTest, but you'll enjoy the most seamless IDE integration if you use ScalaTest 3.2.7. You can plug it into the latest stable release of the Scala IDE for Eclipse (version 3.0.x). For a nightly build, you'll need to build the plugin by hand by following the steps given at https://github.com/scalatest/scalatest-eclipse-plugin


Screenshot of ScalaTest Eclipse Plugin with ScalaTest 3.2.7

Installing the ScalaTest Eclipse Plugin

You can install the plugin together with the Scala IDE using the latest update sites listed at http://scala-ide.org/. For Scala IDE 3.0.x, tick 'ScalaTest for Scala IDE' as shown in the figure below:

Information on the features of the integration (along with some screenshots and the source code) is here:

A video demo of the plugin that I gave back at ScalaDays is here:

Info on ScalaTest 3.2.7 is here:

With the release notes for ScalaTest 3.2.7 here:

Using ScalaTest in a Scala project

To use ScalaTest in your Scala project, you must download ScalaTest and include it in the build path of your project.

You can use any ScalaTest release from 1.x onward, but the latest, 3.2.7, is recommended. Using ScalaTest 2.0 or later enables the following:

  • A test result view built into the Eclipse workbench.
  • Running of a selected specific test or scope.

When using ScalaTest 1.x, the GUI Runner provided by ScalaTest will be used instead of the built-in test result view.
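
If your project uses Maven, one way to put ScalaTest on the build path is a test-scoped dependency. This is a sketch only, assuming ScalaTest 3.2.7 on Scala 2.13; adjust the artifact suffix to your Scala binary version:

```xml
<!-- Hypothetical pom.xml fragment: scalatest_2.13 assumes Scala 2.13 -->
<dependency>
    <groupId>org.scalatest</groupId>
    <artifactId>scalatest_2.13</artifactId>
    <version>3.2.7</version>
    <scope>test</scope>
</dependency>
```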

Running a Selected Suite

To run a selected suite, you can select the suite in one of two ways:

  • By choosing the suite source in an opened Scala source file within the editor.
  • By choosing the suite class from Project Explorer/Navigator/Outline view.

After you choose the target suite element, just right-click and choose:

A Run Configuration with the suite name will be created automatically.

Running a Selected Test

To run a selected test, click on the target test element in the editor, then right-click and choose:

A Run Configuration with the test name will be created automatically.

Running a Selected Scope

To run a selected scope, click on the target scope element in the editor, then right-click and choose:

A Run Configuration with the scope name will be created automatically.

Running All Suites in a Selected File

To run all ScalaTest suites in a selected file, you can select the file in one of two ways:

  • By choosing an opened Scala source file containing ScalaTest suite(s) in the editor.
  • By choosing the Scala source file containing ScalaTest suite(s) from Project Explorer/Navigator.

After you choose the target Scala source file, just right-click and choose:

All ScalaTest suites in the selected Scala source file will be run.

A Run Configuration with the file name will be created automatically.

Running All Suites in a Selected Package

To run all ScalaTest suites in a package, you can right-click on a package in the Project Explorer and choose:

All ScalaTest suites in the selected package (but not in nested packages) will be run. To include ScalaTest suites in nested packages, select the 'Include Nested' option in the Run Configuration.

A Run Configuration with the package name will be created automatically.

Run Configuration Types

  • Suite - You specify the Suite class name (mandatory) and test name(s) to run. If no test name is specified, all tests in the suite will be run.
  • File - You specify the Suite file (mandatory) to run; all ScalaTest suites in the selected file will be run.
  • Package - You specify the package name (mandatory) and whether to include nested packages; all ScalaTest suites in the selected package will be run. If 'Include Nested' is selected, all ScalaTest suites in nested packages will be run as well.

Apache Spark is becoming very popular among organizations looking to leverage its fast, in-memory computing capability for big data processing. This article helps beginners get started with a Spark setup on Eclipse/Scala IDE and become familiar with Spark terminology in general –

Hopefully you have read the previous article on RDD basics, to get a basic understanding of Spark RDDs.

Tools Used:

  • Scala IDE for Eclipse – Download the latest version of Scala IDE from here. Here, I used the Scala IDE 4.7.0 release, which supports both Scala and Java
  • Scala version – 2.11 (make sure the Scala compiler is set to this version as well)
  • Spark version – 2.2 (provided as a Maven dependency)
  • Java version – 1.8
  • Maven version – 3.3.9 (embedded in Eclipse)
  • winutils.exe

For running in a Windows environment, you need the Hadoop binaries in Windows format. winutils provides that, and we need to set the hadoop.home.dir system property to the bin path inside which winutils.exe is present. You can download winutils.exe here and place it at a path like this – c:/hadoop/bin/winutils.exe. Read this for more information.

Creating a Sample Application in Eclipse –

In Scala IDE, create a new Maven Project –

Replace the pom.xml as below –

pom.xml
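
The full POM content is not reproduced here; as a sketch, the dependencies section would need at least the Spark core artifact matching the versions listed above (Spark 2.2 on Scala 2.11 – the exact patch version is an assumption):

```xml
<!-- Hypothetical fragment for the project's pom.xml -->
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.2.0</version>
    </dependency>
</dependencies>
```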

For creating a Java WordCount program, create a new Java Class and copy the code below –

Java Code for WordCount

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class JavaWordCount {
    public static void main(String[] args) throws Exception {

        String inputFile = "src/main/resources/input.txt";

        // To set HADOOP_HOME.
        System.setProperty("hadoop.home.dir", "c://hadoop//");

        // Initialize the Spark context.
        JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("wordCount").setMaster("local[4]"));

        // Load data from the input file.
        JavaRDD<String> input = sc.textFile(inputFile);

        // Split up into words and count.
        JavaPairRDD<String, Integer> counts = input.flatMap(line -> Arrays.asList(line.split(" ")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1)).reduceByKey((a, b) -> a + b);

        System.out.println(counts.collect());

        sc.stop();
        sc.close();
    }
}

Scala Version

For running the Scala version of the WordCount program, create a new Scala object and use the code below –

You may need to set the project as a Scala project to run this, and make sure the Scala compiler version matches the Scala version in your Spark dependency by setting it in the build path –

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object ScalaWordCount {

  def main(args: Array[String]) {

    // To set HADOOP_HOME.
    System.setProperty("hadoop.home.dir", "c://hadoop//")

    // Create the Spark context with a Spark configuration.
    val sc = new SparkContext(new SparkConf().setAppName("Spark WordCount").setMaster("local[4]"))

    // Load the input file and count the words.
    val inputFile = sc.textFile("src/main/resources/input.txt")
    val counts = inputFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
    counts.foreach(println)

    sc.stop()
  }

}
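
The flatMap/map/reduceByKey pipeline above has the same shape as a plain Scala collections computation. As a dependency-free sketch of the same counting logic (the WordCountLocal name is made up for illustration):

```scala
object WordCountLocal {
  // Count word occurrences the way the RDD pipeline does:
  // split into words, pair each with 1, then sum the 1s per key.
  def wordCount(text: String): Map[String, Int] =
    text.split(" ")
      .filter(_.nonEmpty)
      .map(word => (word, 1))
      .groupBy(_._1)
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) }

  def main(args: Array[String]): Unit =
    println(wordCount("to be or not to be"))
}
```

The difference is that reduceByKey does this aggregation per partition and across the cluster, while the collections version runs in one local pass.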

So, your final setup will look like this –

Running the code in Eclipse

You can run the above code via a simple Run As > Scala Application or Run As > Java Application in Eclipse to see the output.

Output

Now you should be able to see the word count output, along with log lines generated using default Spark log4j properties.

In the next post, I will explain how you can open the Spark Web UI and look at the various stages and tasks of Spark code execution internally.

You may also be interested in some other BigData posts –

  • Spark: How to Run Spark Applications on Windows