After wasting a lot of time, I found the problem.
Even though my application doesn't use Hadoop/HDFS directly, the Hadoop
client still matters. The problem was that the hadoop-client version in my
application differed from the Hadoop version Spark was built against.
Spark's Hadoop version was 1.2.1, but my application pulled in 2.4.
Once I changed the hadoop-client version in my app to 1.2.1, I was able
to execute Spark code on the cluster.
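For reference, here is roughly what the fix looks like as a Maven dependency (assuming a Maven build; the sbt equivalent is a matching `libraryDependencies` entry). The version shown is from my case; replace it with whatever Hadoop version your Spark distribution was built against:

```xml
<!-- Pin hadoop-client to the Hadoop version your Spark build uses
     (1.2.1 in my case; check your Spark distribution) -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>
```

You can check which Hadoop version is actually ending up on your classpath with `mvn dependency:tree` and looking for the `org.apache.hadoop` artifacts.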