I was really glad to find out that my presentation about Scala and Java 8 was retweeted more than other similar content, so I decided to write up some notes and share them with you. We are going to talk about the differences between Scala and Java and about why each of them matters. The innovations are mutual: each language has borrowed something from the other. The question is: do you have to learn Scala when Java is available? Definitely! The more languages you know, the more professional you are.

If you ask a Scala engineer about the principal differences between Scala and Java, he will not recite all the nuances of lambda functions and traits. Instead, he will give you the following example:

Java

public class Person
{
  private String firstName;
  private String lastName;

  public String getFirstName() { return firstName; }
  public void setFirstName(String firstName) { this.firstName = firstName; }
  public String getLastName() { return lastName; }
  public void setLastName(String lastName) { this.lastName = lastName; }

  @Override
  public int hashCode() { .... }

  @Override
  public boolean equals(Object o) { .... }
}

Scala

case class Person(firstName: String, lastName: String)

Thus, one line of Scala corresponds to some twenty lines of Java. On the other hand, this lack of compactness is characteristic not only of Java as a language, but also of the culture that has formed in the world of Java developers. Actually, we could write it this way:

public class Person extends DTOBase
{
  public String  firstName;
  public String  lastName;
}

hashCode and equals are overridden in DTOBase with the help of reflection. Since Scala has shown that compactness really pays off, I can now write a plain public field without getters and setters and not get told off for it. In other words, idiomatic Java is moving towards compactness as well.
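For illustration, such a DTOBase might delegate equals and hashCode to a reflection-based helper. A minimal sketch using Apache Commons Lang (the base class itself comes from the example above; the rest is an assumption):

import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;

// equals/hashCode are computed over all fields via reflection,
// so subclasses only declare public fields and inherit value semantics
public abstract class DTOBase
{
    @Override
    public boolean equals(Object o) {
        return EqualsBuilder.reflectionEquals(this, o);
    }

    @Override
    public int hashCode() {
        return HashCodeBuilder.reflectionHashCode(this);
    }
}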

Java 8 introduces several innovations that make the functional programming style handier. At first sight, they repeat the corresponding Scala constructs: lambda expressions, default methods in interfaces, and stream operations on collections.

Let's review them in detail.

Lambda Expressions

Java

list.sort((x, y) -> {
    int cmp = x.lastName.compareTo(y.lastName);
    return cmp != 0 ? cmp : x.firstName.compareTo(y.firstName);
});

Scala

list.sort((x, y) => {
    val cmp = x.lastName.compareTo(y.lastName)
    if (cmp != 0) cmp else x.firstName.compareTo(y.firstName)
})

We can see that the code is really similar, but:

Scala

var (maxFirstLen, maxLastLen) = (0, 0)
list.foreach { x =>
    maxFirstLen = math.max(maxFirstLen, x.firstName.length)
    maxLastLen  = math.max(maxLastLen, x.lastName.length)
}

Java

[?] (it is impossible: a Java lambda may only read local variables of the enclosing scope, and they must be effectively final).

Thus, lambda expressions in Java are essentially syntactic sugar over anonymous classes and can only read the (effectively) final variables of their context. In Scala they are full closures with complete access to the enclosing context.
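For illustration, the usual Java workaround is to keep the mutable state in an effectively final holder; a minimal sketch reusing the Person fields from above (the single-element arrays are just the simplest possible holder):

// The array references are effectively final, so the lambda may capture them,
// while their contents can still be mutated
int[] maxFirstLen = { 0 };
int[] maxLastLen  = { 0 };

list.forEach(x -> {
    maxFirstLen[0] = Math.max(maxFirstLen[0], x.firstName.length());
    maxLastLen[0]  = Math.max(maxLastLen[0],  x.lastName.length());
});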

Default Methods in Interfaces

Another feature Java has borrowed from Scala is default methods in interfaces. In a way, they correspond to Scala traits.

Java

import java.util.concurrent.CompletableFuture;

interface AsyncInput<T>
{
   void onReceive(Acceptor<T> acceptor);

   default CompletableFuture<T> read() {
      final CompletableFuture<T> promise = new CompletableFuture<>();
      onReceive(x -> promise.complete(x));
      return promise;
   }
}

Scala

import scala.concurrent.{Future, Promise}

trait AsyncInput[T]
{
   def onReceive(acceptor: T => Unit): Unit

   def read: Future[T] = {
       val p = Promise[T]()
       onReceive(p.success(_))
       p.future
   }
}

At first glance, they are the same, but:

Scala

trait LoggedAsyncInput[T] extends AsyncInput[T]
{
    abstract override def onReceive(acceptor: T => Unit): Unit =
         super.onReceive(x => { println(s"received: ${x}")
                                acceptor(x) })
}

Java
[?] (Java does not provide such functionality; an aspect-oriented approach could serve as a rough analogue here)

Another example (a bit less important):

Scala

trait MyToString
{
  override def toString = s"[${super.toString}]"
}

Java

[?] (it is impossible to override the methods of Object in an interface).

We can see that default methods and traits are in fact quite different constructs. In Java, a default method is merely a specification of call dispatching. In Scala, a trait is a more general construct that takes part in building the final class through linearization.
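For illustration, here is a rough sketch of how a trait participates in linearization when the final class is assembled; the concrete class ConsoleAsyncInput is made up for the example:

class ConsoleAsyncInput extends AsyncInput[String]
{
   // reads one line and hands it to the acceptor
   def onReceive(acceptor: String => Unit): Unit = acceptor(scala.io.StdIn.readLine())
}

// Linearization puts LoggedAsyncInput "after" ConsoleAsyncInput, so super.onReceive
// inside the trait refers to the implementation of the class it is mixed over
val input = new ConsoleAsyncInput with LoggedAsyncInput[String]
input.read   // logs "received: ..." and completes the resulting Future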

Stream Operations on Collections

The third Java 8 innovation is the stream interface to collections, whose design resembles the standard Scala collection library.

Java

peoples.stream().filter(x -> x.firstName.equals("Jon")).collect(Collectors.toList())

Scala

peoples.filter(_.firstName == "Jon")

They are really similar, but in Java we first have to obtain a stream from the collection and then collect the result back into a collection. The main reason for this is the encapsulation of the existing collection interfaces.

Since Java already has a fairly complete, non-functional collections API, bolting yet another, functional interface onto it would be inappropriate (in terms of API design and the simplicity of its modification, use and understanding). This is the price we pay for slow, evolutionary development.

But let’s keep comparing:

Java

persons.parallelStream().filter(x -> x.firstName.equals("Jon")).collect(Collectors.toList())

Scala

persons.par.filter(_.firstName == "Jon")

The solutions are very similar here: in Java we create a "parallel" stream, while in Scala we create a parallel collection.

Access to SQL databases looks like this (the Scala example is in the style of the Slick library):

Scala

db.persons.filter(_.firstName === "Jon").toList

There is an analogue in the Java ecosystem; we can write the following:

dbStream(em, Person.class).filter(x -> x.firstName.equals("Jon")).toList()

It's interesting to take a look at how the mapping of collections to database tables is implemented in both cases.

In Scala, the operations are typed as operations over a description of the data. Roughly speaking:

persons is of type TableQuery[PersonTable],

in which PersonTable <: Table[Person] is a structure with firstName and lastName column methods.

firstName === "Jon" applies the binary operation === (in Scala we can define our own infix operations), whose type is roughly Column[X] * Column[Y] => SqlExpr[Boolean].

Query[T] has a

filter: SqlExpr[Boolean] => Query[T]

method, plus some method that generates the SQL. Hence, the whole query is expressed as an expression over Table[Person], which is a representation of Person.

It’s quite simple and even trivial in a way.
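For illustration, such a table description in Slick looks roughly like this (a sketch: the H2 profile, table and column names are assumptions, and API details vary between Slick versions — e.g. Column[X] is called Rep[X] in Slick 3):

import slick.jdbc.H2Profile.api._

case class Person(firstName: String, lastName: String)

// Table[Person] describes the mapping; firstName and lastName are column expressions
class PersonTable(tag: Tag) extends Table[Person](tag, "PERSON") {
  def firstName = column[String]("FIRST_NAME")
  def lastName  = column[String]("LAST_NAME")
  def *         = (firstName, lastName) <> (Person.tupled, Person.unapply)
}

val persons = TableQuery[PersonTable]

// === builds a typed SQL expression; filter returns a Query that is compiled to SQL
val query = persons.filter(_.firstName === "Jon")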

Now, let's take a look at how this functionality is implemented in jinq:

dbStream(em, Person.class).filter(x -> x.firstName.equals("Jon")).toList()

Here the type of x is Person, and x.firstName is a String. The filter method accepts a Person -> Boolean function as a parameter. But how do we generate SQL from it?

filter analyzes the bytecode of the lambda. There is a sort of bytecode interpreter that executes its instructions "symbolically". The result of this symbolic execution is the chain of getter and function calls, from which the SQL is built.

Actually, it's a really nice idea. However, all of this happens dynamically at runtime, so it takes quite a lot of time. And if inside our filter we use a function outside a fixed whitelist (one that jinq does not know how to translate to SQL), we will also find out about it only at runtime.

Thus, the Scala code is more or less trivial, while in Java quite sophisticated technology is used under the hood.

That's it about the Scala borrowings in Java. As you can see, the Java «versions» of these features differ quite a lot from their Scala counterparts.

Scala Borrowings from Java

It's time to talk about what Scala has borrowed from Java 8; the flow of innovations goes in both directions. One Java 8 innovation has been adopted by Scala: SAM conversion. In Scala 2.11 it can be turned on with a compiler option; in 2.12 it is enabled by default.

Look at the two code fragments:

Java

interface AsyncInputOutput<T>
{
  void onReceive(Acceptor<T> acceptor);
  void onSend(Generator<T> generator);
}

Scala

trait AsyncInputOutput[T]
{
   def onReceive(f: T => Unit): Unit
   def onSend(f: Unit => T): Unit
}

As we can see, the parameter types of the Java methods are Acceptor and Generator; at the bytecode level they are represented by the corresponding classes. In Scala they are the functions T => Unit and Unit => T, both represented by Function1.class.

A SAM type (Single Abstract Method) is a class or interface containing exactly one abstract method. In Java, if a method accepts a SAM type as a parameter, we can pass a lambda to it. In Scala up to version 2.11 the situation was different: a function literal could only have a function type, i.e. be a subclass of Function1[A,B].
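For illustration, here is how SAM conversion looks from Scala 2.12 on (a sketch; the Acceptor trait mirrors the Java interface mentioned above):

// A trait with exactly one abstract method is a SAM type
trait Acceptor[T] { def accept(x: T): Unit }

def onReceive(acceptor: Acceptor[String]): Unit = acceptor.accept("hello")

// Before 2.12 this required an explicit anonymous class;
// with SAM conversion the lambda is turned into an Acceptor[String] automatically
onReceive(x => println(s"received: $x"))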

At first glance, the change is not really significant, except for the fact that we can now describe a functional API in an object-oriented way. In practice, this feature has a very important application: using SAM interfaces in performance-critical parts of the code. Why? The efficiency of bytecode execution depends on the JIT compiler's ability to perform aggressive inlining.

But with the generic functional interfaces, the parameter class is Function1 for any one-argument function, Function2 for any two-argument function, and so on. Such call sites are definitely not easy to inline. That was a hidden problem: we'd better not use functional interfaces in performance-critical low-level parts of the code, as the JIT will not be able to inline them. With SAM conversion we can rewrite such places via dedicated local SAM types, which the JIT can inline, and the problem disappears.
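For illustration, a performance-critical loop can be written against a dedicated local SAM type instead of the generic Function1 (a sketch assuming Scala 2.12; IntPredicate and countMatching are made-up names):

// A dedicated SAM type keeps the call site monomorphic and easy for the JIT to inline,
// unlike Function1, which is shared by every one-argument lambda in the program
trait IntPredicate { def test(x: Int): Boolean }

def countMatching(xs: Array[Int], p: IntPredicate): Int = {
  var i = 0
  var n = 0
  while (i < xs.length) {
    if (p.test(xs(i))) n += 1
    i += 1
  }
  n
}

countMatching(Array(1, 2, 3, 4), x => x % 2 == 0)   // the lambda becomes an IntPredicate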

Though, to benefit from this in existing libraries, we would have to change already existing code. Some things (such as the collection interfaces) could be rewritten via SAM types, but the new and the old interfaces would have to be combined so that everything still fits together.

We observed the same problem in Java when talking about the collection interfaces. This shows how evolution works. Java has been improved in one way, and it is not perfect; Scala has been improved in another way, and it is not perfect either. Both languages carry «bugs» of this kind, caused by slow adaptation, which leaves room for yet another language to offer a "perfect" interface for some period of time. That's the way evolution works.

We can divide the Scala constructs that are absent in Java into two groups:

  1. Those likely to be adopted in Java 9, 10, 11, 12 (if we ever see those releases and Java is still interesting to anyone). That's the logic of development, just as Fortran 90 became object-oriented.
  2. Those showing the difference between the Java and Scala ideologies.

The first group includes case classes and automatic type inference. Everything else goes into the second one.

At the very beginning, we used the following code fragment:

case class Person(firstName: String, lastName: String)

Why are case classes called "case" classes? Because we can use them with the match/case operator:

p match {
   case Person("Jon", "Galt") => "Hi, who are you?"
   case Person(firstName, lastName) => s"Hi, ${firstName} ${lastName}"
   case _ => "You are not a person"
}

The first case matches the name Jon Galt. The second one matches any other Person value and, in addition, binds the names firstName and lastName. This is called ML-style pattern matching, because the construct was first proposed in the ML language, created in 1973.

Nowadays, most of the "new" languages, such as Scala, Kotlin, Ceylon and Apple Swift, support it.

Scala Peculiarities

So, what is the main Scala peculiarity? What capabilities absent from Java does it provide? The answer is the ability to build internal DSLs (Domain Specific Languages). Scala is structured so that we can build a strongly typed model of a domain area and then express it directly in language constructs.

These constructs are built in a statically typed environment. What are the basic features that allow us to build them?

Let’s start with the syntax flexibility. What does it mean in practice?

1. Methods can have any names:

def +++(x: Int, y: Int) = x*x*y*y

2. Infix method calls for methods with one parameter:

1 to 100  ==  1.to(100)

3. The only difference between round and curly brackets is that a block in curly brackets may contain multiple expressions. For calls with one parameter they are interchangeable:

future(1) == future{1}

4. We can define functions with several lists of arguments:

def until(cond: =>Boolean)(body: => Unit) : Unit

5. We can pass a block of code as an argument so that it is evaluated each time the corresponding parameter is used (passing arguments "by name"):

def until(cond: => Boolean)(body: => Unit): Unit =
  { body; while (!cond) { body } }

until(x == 10)(x += 1)

Let’s make a DSL for Do/until:

object Do
{
   def apply(body: => Unit) = new DoBody(body)
}

class DoBody(body: => Unit)
{
   def until(cond: => Boolean): Unit =
      { body
        while (!cond)
            body
      }
}

Now, we can use something like this:

Do {
  x += 1
} until ( x == 10 )

Another feature that helps to create DSLs is special syntax for certain dedicated method names.

For instance, the following expression:

for (x <- collection) { doSomething }

is just syntactic sugar for the method call:

collection.foreach(x => doSomething)

So, if we write our own class with a foreach method that accepts a function (X => Unit), we will be able to use the for syntax with our own type.
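For illustration, a minimal sketch of such a type (the Box class is made up for the example):

// Because Box defines foreach, the for(...) syntax works on it
class Box[A](value: A) {
  def foreach(f: A => Unit): Unit = f(value)
}

for (x <- new Box(42)) println(x)   // desugars to: new Box(42).foreach(x => println(x))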

The same goes for the for/yield construct (map), nested iterations (flatMap), and conditions inside the loop (withFilter):

Therefore,

for(x <- fun1 if (x.isGood);
    y <- fun2(x) ) yield z(x,y)

is just another syntax for

fun1.withFilter(_.isGood).flatMap(x => fun2(x).map(y => z(x,y)))

By the way, there is a Scala extension called Scala-virtualized. It is a stand-alone project and, unfortunately, will hardly become part of the Scala standard. In it, all the syntactic constructs (if, match and others) are virtualized, so we can plug in completely different semantics. Examples of applications: generating code for GPUs, a specialized language for machine learning, translation to JavaScript.

At the same time, compilation to JavaScript already exists in the current ecosystem: Scala.js is a Scala compiler backend that generates JavaScript, and you can use it right now.

Another useful Scala feature is macros: transformations of program code performed at compile time. Let's take a look at a simple example:

// Scala 2.11+ macro API
import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

object Log
{
  def apply(msg: String): Unit = macro applyImpl

  def applyImpl(c: Context)(msg: c.Expr[String]): c.Expr[Unit] =
  {
    import c.universe._
    val tree = q"""if (Log.enabled) {
                       Log.log(${msg})
                   }
                """
    c.Expr[Unit](tree)
  }
}

The expression Log(message) will be replaced with:

if (Log.enabled) {
    Log.log(message)
 }

Why are they useful?

First of all, with the help of macros we can generate the so-called 'boilerplate' code, which is quite obvious but still has to be written somehow. Examples are XML/JSON converters or mapping case classes to databases. In Java we can also «shorten» boilerplate by using reflection, but that imposes restrictions in places that are critical to execution speed, since reflection is not free.

Secondly, macros let us make more significant transformations of programs than just passing functions around: we can provide our own implementations of language constructs or rewrite code globally.

An example is the async interface, a counterpart of the C# one. Inside an async block:

async {
     val x = future { long-running-code-1 }
     val y = future { long-running-code-2 }
     val z = await(x) + await(y)
}

Read literally, this block starts the computations of x and y, and then z waits for both of them to complete. In fact, the code inside async is rewritten so that all the waits become non-blocking.
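For illustration, the block above is conceptually equivalent to non-blocking composition with Future combinators; this is a sketch, not the actual code generated by the macro (longRunningCode1/2 stand in for the placeholder computations above):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def longRunningCode1(): Int = 1   // placeholder for long-running-code-1
def longRunningCode2(): Int = 2   // placeholder for long-running-code-2

// Both futures start immediately; the sum is computed when both complete,
// without blocking any thread on await
val x = Future { longRunningCode1() }
val y = Future { longRunningCode2() }
val z: Future[Int] = x.flatMap(a => y.map(b => a + b))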

It's interesting that the async/await API here is implemented as a library of macros. In C# this required releasing a new compiler version, whereas in Scala we can just write a library.

jscala is another example: a macro that converts a subset of Scala code to JavaScript. So if you want to do some frontend development but do not feel like switching to JavaScript, you can write Scala and the macro will take care of the rest.

Summary

To sum up, comparing Java and Scala makes most sense in the area where both operate on existing content and the level of abstraction is classes and objects. When it is necessary to raise the abstraction level and describe something new, in Scala we can design an internal DSL (Domain Specific Language), while in Java we have to resort to solutions such as building an external DSL or aspect-oriented programming.

It would be wrong to say that one approach is definitely better in all situations. In Java, we should feel the moment when we leave the boundaries of the language and need to build some infrastructure around it; in Scala, we can build that infrastructure "in the language".

There are plenty of internal problems in Scala. Its facilities are sometimes imbalanced, and that's quite a long story. There are plenty of experimental solutions, and it would be great to see them in the mainline of development. It feels as if we have entered a new world and can see both the possibilities of building in this new dimension and all the problems of the current structure. In Java, such a dimension simply does not exist.


Comments

matthewhsilver
The case class example at the beginning does not generate setter methods, only getter methods. You would need to change the constructor parameters to vars to generate setter methods. By default, they're treated as vals.
weehawken
You have a slight typo, should be «new DoBody» not «new DoDody»:

object Do {
  def apply(body: => Unit) = new DoDody(body)
}
class DoBody(body: => Unit) {
  def until(cond: => Unit): Unit = { body; while (!cond) body }
}
Kukuruku Hub
Oh, good catch! Resolved. P.S. There is a «code» tag for commenting code blocks, e.g.
if (Log.enabled) {
    Log.log(message)
}
Jochen Bedersdorfer
And I would submit to you that some of the «Scala Peculiarities» should be used rarely, if at all. If you don't agree on a common, simple standard for Scala code in your organization, you will accumulate a lot of technical debt. Code is read more often than it is written or changed. If reading it is a chore, you've failed. And in languages like Scala it is very easy to write code that is hard to read.
Kukuruku Hub
Very true about code readability. Scala is definitely one of the tools that can produce a lot of spaghetti code unless you use the language «the right way». Just curious, how often do you use implicits?
Jochen Bedersdorfer
I avoid implicits like the plague.
It's yet another convenience (aka write less) taken too far.
Thanks for getting back to me.

I'd love to see similar articles about Groovy, Kotlin, Ceylon and Fantom! :)
Kukuruku Hub
Heh. We'll see. Just in case you want to write/publish your own one, there is a "+" button in navbar :)
Alfonso da Silva
Great article. I just want to note another typo: it's written equals and hashMap methods, but I suppose you wanted to write about equals and hashCode.
Kukuruku Hub
Hey, good point, should be ok now ;)
