path: root/core/src
author    Matei Zaharia <matei@databricks.com>  2014-01-08 00:32:18 -0500
committer Matei Zaharia <matei@databricks.com>  2014-01-08 00:32:18 -0500
commit    11891e68c32d1078ed16c65cca23e28a1f171bb7 (patch)
tree      70c76f2130e924673acdc0e3151fdf394bbfc6f3 /core/src
parent    7d0aac917b3263c1e2f037df4cf0e1f8ee836620 (diff)
parent    4689ce29fd506d001d15c863ab4fe29bfac90326 (diff)
download  spark-11891e68c32d1078ed16c65cca23e28a1f171bb7.tar.gz
          spark-11891e68c32d1078ed16c65cca23e28a1f171bb7.tar.bz2
          spark-11891e68c32d1078ed16c65cca23e28a1f171bb7.zip
Merge pull request #327 from lucarosellini/master
Added ‘-i’ command line option to Spark REPL

We had to create a new implementation of both scala.tools.nsc.CompilerCommand and scala.tools.nsc.Settings, because using scala.tools.nsc.GenericRunnerSettings would bring in other options (-howtorun, -save and -execute) which don’t make sense in Spark. Any new Spark-specific command line option can now be added to the org.apache.spark.repl.SparkRunnerSettings class.

Since loading a script from the command line should behave the same as loading it with the “:load” command inside the shell, the script has to be loaded once the SparkContext is available; that is why the call to ‘loadfiles(settings)’ was moved _after_ the call to postInitialization(). This still doesn’t work if ‘isAsync = true’.
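The changed sources sit outside core/src (hence the empty diffstat below). As a rough illustration of the approach the message describes, a Settings subclass exposing only the Spark-relevant ‘-i’ option might look like this; the option description string and exact member name are assumptions, not quoted from the patch:

  import scala.tools.nsc.Settings

  // Sketch: define only the '-i <file>' option instead of reusing
  // GenericRunnerSettings, which would also pull in -howtorun, -save
  // and -execute.
  class SparkRunnerSettings(error: String => Unit) extends Settings(error) {
    // '-i file' queues a script to be loaded (via the ":load" path)
    // once the SparkContext is available, i.e. after postInitialization().
    val loadfiles = MultiStringSetting(
      "-i",
      "file",
      "load a file (assumes the code is given interactively)")
  }

Further Spark-specific REPL options would be added as additional settings on this class.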
Diffstat (limited to 'core/src')
0 files changed, 0 insertions, 0 deletions