| Commit message | Author | Age | Files | Lines |
I tested the change locally with Spark 0.9.1, but couldn't test with 1.0.0 because there was no AMI for it at the time. It's a trivial fix, so it shouldn't cause any problems.
Author: msiddalingaiah <madhu@madhu.com>
Closes #641 from msiddalingaiah/master and squashes the following commits:
a4f7404 [msiddalingaiah] Address SPARK-1717
This is especially important because some ssh errors are raised as UsageError, preventing automated uses of the script from detecting the failure.
Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>
Closes #638 from douglaz/ec2_exit_code_fix and squashes the following commits:
5915e6d [Allan Douglas R. de Oliveira] EC2 script should exit with non-zero code on UsageError
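A minimal sketch of the fix this commit describes, assuming a `UsageError` class like the one spark_ec2.py defines; the `run` and `broken` helpers here are hypothetical, introduced only to make the behaviour concrete:

```python
import sys

class UsageError(Exception):
    """Stand-in for the script's own UsageError (raised on bad usage
    and on some ssh failures, per the commit message above)."""

def run(main):
    """Invoke main(), mapping UsageError to a non-zero exit status so
    that wrapping automation (cron jobs, CI) can detect the failure."""
    try:
        main()
        return 0
    except UsageError as e:
        # Report the problem, then fail loudly instead of exiting 0.
        sys.stderr.write("ERROR: %s\n" % e)
        return 1

def broken():
    raise UsageError("missing required option")
```

The real script would pass the returned code to `sys.exit()`; returning it keeps the sketch testable.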
Added option to configure number of worker instances and to set SPARK_MASTER_OPTS
Depends on: https://github.com/mesos/spark-ec2/pull/46
Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>
Closes #612 from douglaz/ec2_configurable_workers and squashes the following commits:
d6c5d65 [Allan Douglas R. de Oliveira] Added master opts parameter
6c34671 [Allan Douglas R. de Oliveira] Use number of worker instances as string on template
ba528b9 [Allan Douglas R. de Oliveira] Added SPARK_WORKER_INSTANCES parameter
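A sketch of how these options could flow into the cluster configuration; the template fragment and `render` helper are hypothetical, but they mirror the "number of worker instances as string on template" commit above:

```python
from string import Template

# Hypothetical fragment of a spark-env-style template.
TEMPLATE = Template(
    'export SPARK_WORKER_INSTANCES=$worker_instances\n'
    'export SPARK_MASTER_OPTS="$master_opts"\n'
)

def render(worker_instances, master_opts):
    # The worker count is stringified explicitly so it drops into the
    # template as plain text.
    return TEMPLATE.substitute(
        worker_instances=str(worker_instances),
        master_opts=master_opts,
    )
```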
Mainly ported from branch-0.9.
Author: Harvey Feng <hyfeng224@gmail.com>
Closes #385 from harveyfeng/0.9.1-ec2 and squashes the following commits:
769ac2f [Harvey Feng] Add Spark v0.9.1 to ec2 launch script and use it as the default
Reported in https://spark-project.atlassian.net/browse/SPARK-1156
The current spark-ec2 script doesn't allow the user to log in to a cluster without slaves. One of the issues caused by this behaviour is that when all the workers have died, the user cannot even log in to the cluster for debugging, etc.
Author: CodingCat <zhunansjtu@gmail.com>
Closes #58 from CodingCat/SPARK-1156 and squashes the following commits:
104af07 [CodingCat] output ERROR to stderr
9a71769 [CodingCat] do not allow user to start 0-slave cluster
24a7c79 [CodingCat] allow user to login into a cluster without slaves
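The behaviour above can be sketched as a single check (the function name and message are hypothetical): launching a 0-slave cluster is rejected, with the error on stderr, while actions like login still succeed so a cluster whose workers have all died remains reachable:

```python
import sys

def validate_slave_count(action, num_slaves):
    """Refuse to *launch* with zero slaves; allow other actions."""
    if action == "launch" and num_slaves <= 0:
        # Errors go to stderr so scripted callers can tell them apart
        # from normal output.
        sys.stderr.write("ERROR: you have to start at least 1 slave\n")
        return False
    return True
```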
This removes some loose ends not caught by the other (incubating -> tlp) patches. @markhamstra this updates the version as you mentioned earlier.
Author: Patrick Wendell <pwendell@gmail.com>
Closes #51 from pwendell/tlp and squashes the following commits:
d553b1b [Patrick Wendell] Remove remaining references to incubation
I launched an EC2 cluster without providing a key name and an identity file. The error only showed up after two minutes. It would be good to check those options before launching, given that EC2 billing rounds up to hours.
JIRA: https://spark-project.atlassian.net/browse/SPARK-1106
Author: Xiangrui Meng <meng@databricks.com>
Closes #617 from mengxr/ec2 and squashes the following commits:
2dfb316 [Xiangrui Meng] check key name and identity file before launch a cluster
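A sketch of the kind of pre-flight validation this commit describes (the function name, option flags, and messages are hypothetical): fail before any instances are requested, since EC2 rounds billing up to the hour:

```python
import os
import sys

def check_launch_options(key_pair, identity_file):
    """Validate launch prerequisites before touching EC2."""
    problems = []
    if not key_pair:
        problems.append("key pair name is required (-k/--key-pair)")
    if not identity_file:
        problems.append("identity file is required (-i/--identity-file)")
    elif not os.path.isfile(identity_file):
        problems.append("identity file %r does not exist" % identity_file)
    for p in problems:
        sys.stderr.write("ERROR: %s\n" % p)
    return not problems
```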
Update spark_ec2 to use 0.9.0 by default
Backports change from branch-0.9
Author: Shivaram Venkataraman <shivaram@eecs.berkeley.edu>
Closes #598 and squashes the following commits:
f6d3ed0 [Shivaram Venkataraman] Update spark_ec2 to use 0.9.0 by default Backports change from branch-0.9
The number of disks for the c3 instance types taken from here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes
Author: Christian Lundgren <christian.lundgren@gameanalytics.com>
Closes #595 from chrisavl/branch-0.9 and squashes the following commits:
c8af5f9 [Christian Lundgren] Add c3 instance types to Spark EC2
(cherry picked from commit 19b4bb2b444f1dbc4592bf3d58b17652e0ae6d6b)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>
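The change itself is a small lookup table; a sketch (the dict name is hypothetical, the disk counts come from the AWS instance storage page linked above, where every c3 size has two instance-store SSDs):

```python
# Number of local (instance-store) disks per c3 instance type.
DISKS_BY_INSTANCE = {
    "c3.large": 2,
    "c3.xlarge": 2,
    "c3.2xlarge": 2,
    "c3.4xlarge": 2,
    "c3.8xlarge": 2,
}
```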
ssh commands need the -t argument repeated twice if there is no local
tty, e.g. if the process running spark-ec2 uses nohup and the parent
process exits.
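A sketch of the command construction (the helper and host are hypothetical); the point is only that `-t` appears twice, which forces ssh to allocate a pseudo-tty even when no local tty exists:

```python
def ssh_command(host, user="root"):
    # "-t -t" (equivalently "-tt") forces tty allocation even when
    # spark-ec2 itself has no controlling tty, e.g. under nohup.
    return ["ssh", "-t", "-t", "%s@%s" % (user, host)]
```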
EC2 SSH improvements
Under unknown, but occasional, circumstances, reservation.groups is empty
despite reservation.instances each having groups. This means that the
spark_ec2 get_existing_clusters() method would fail to find any instances.
To fix it, we simply use the instances' groups as the source of truth.
Note that this is actually just a revival of PR #827, now that the issue
has been reproduced.
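A minimal sketch of the workaround, assuming boto-style objects where each instance carries its own `.groups` list (the `group_names` helper is hypothetical):

```python
def group_names(reservations):
    """Collect security-group names from the instances themselves,
    since reservation.groups is occasionally empty."""
    names = set()
    for res in reservations:
        for inst in res.instances:
            for group in inst.groups:  # instances are the source of truth
                names.add(group.name)
    return names
```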
Right now, when this message is printed it looks as though something has gone wrong, when in fact it is a normal condition, so I changed the message a bit.
Conflicts:
ec2/spark_ec2.py
- Use SPARK_PUBLIC_DNS environment variable if set (for EC2)
- Use a non-ephemeral port (3030 instead of 33000) by default
- Updated test to use non-ephemeral port too
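The first bullet can be sketched as follows (the helper name is hypothetical):

```python
import os
import socket

def public_hostname():
    # Prefer SPARK_PUBLIC_DNS when set (on EC2 the public DNS name
    # differs from what gethostname() reports); otherwise fall back
    # to the local hostname.
    return os.environ.get("SPARK_PUBLIC_DNS") or socket.gethostname()
```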
Conflicts:
ec2/deploy.generic/root/mesos-ec2/ec2-variables.sh
portable (JIRA Ticket SPARK-817)