author     hyukjinkwon <gurwls223@gmail.com>   2016-11-22 11:26:10 +0000
committer  Sean Owen <sowen@cloudera.com>      2016-11-22 11:26:10 +0000
commit     4922f9cdcac8b7c10320ac1fb701997fffa45d46 (patch)
tree       19aef99f2007d7dd68c4972b4dc62e169031810f /R/pkg
parent     acb97157796231fef74aba985825b05b607b9279 (diff)
[SPARK-18514][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that` across R API documentation
## What changes were proposed in this pull request?

It seems that in R there are

- `Note:`
- `NOTE:`
- `Note that`

This PR proposes to fix those to `Note:` to be consistent.

**Before**

![2016-11-21 11 30 07](https://cloud.githubusercontent.com/assets/6477701/20468848/2f27b0fa-afde-11e6-89e3-993701269dbe.png)

**After**

![2016-11-21 11 29 44](https://cloud.githubusercontent.com/assets/6477701/20468851/39469664-afde-11e6-9929-ad80be7fc405.png)

## How was this patch tested?

The notes were found via

```bash
grep -r "NOTE: " .
grep -r "Note that " .
```

and then fixed one by one, comparing with the API documentation. After that, manually tested via `sh create-docs.sh` under `./R`.

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #15952 from HyukjinKwon/SPARK-18514.
Diffstat (limited to 'R/pkg')
-rw-r--r--  R/pkg/R/DataFrame.R  6
-rw-r--r--  R/pkg/R/functions.R  7
2 files changed, 8 insertions, 5 deletions
diff --git a/R/pkg/R/DataFrame.R b/R/pkg/R/DataFrame.R
index 4e3d97bb3a..9a51d530f1 100644
--- a/R/pkg/R/DataFrame.R
+++ b/R/pkg/R/DataFrame.R
@@ -2541,7 +2541,8 @@ generateAliasesForIntersectedCols <- function (x, intersectedColNames, suffix) {
#'
#' Return a new SparkDataFrame containing the union of rows in this SparkDataFrame
#' and another SparkDataFrame. This is equivalent to \code{UNION ALL} in SQL.
-#' Note that this does not remove duplicate rows across the two SparkDataFrames.
+#'
+#' Note: This does not remove duplicate rows across the two SparkDataFrames.
#'
#' @param x A SparkDataFrame
#' @param y A SparkDataFrame
@@ -2584,7 +2585,8 @@ setMethod("unionAll",
#' Union two or more SparkDataFrames
#'
#' Union two or more SparkDataFrames. This is equivalent to \code{UNION ALL} in SQL.
-#' Note that this does not remove duplicate rows across the two SparkDataFrames.
+#'
+#' Note: This does not remove duplicate rows across the two SparkDataFrames.
#'
#' @param x a SparkDataFrame.
#' @param ... additional SparkDataFrame(s).
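
The note reworded in the two hunks above describes behaviour that is easy to check interactively: `unionAll` keeps duplicate rows, matching SQL `UNION ALL`. A minimal SparkR sketch, assuming a local Spark installation and an active `sparkR.session()`; the sample data frames `df1`/`df2` are invented for illustration:

```r
library(SparkR)
sparkR.session()  # assumes a local Spark installation is available

# Two small SparkDataFrames that share one identical row
df1 <- createDataFrame(data.frame(id = c(1L, 2L), name = c("a", "b")))
df2 <- createDataFrame(data.frame(id = c(2L, 3L), name = c("b", "c")))

u <- unionAll(df1, df2)   # UNION ALL semantics: no deduplication
count(u)                  # 4 rows; the duplicate (2, "b") is kept
count(distinct(u))        # 3 rows once duplicates are dropped explicitly
```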
diff --git a/R/pkg/R/functions.R b/R/pkg/R/functions.R
index f8a9d3ce5d..bf5c96373c 100644
--- a/R/pkg/R/functions.R
+++ b/R/pkg/R/functions.R
@@ -2296,7 +2296,7 @@ setMethod("n", signature(x = "Column"),
#' A pattern could be for instance \preformatted{dd.MM.yyyy} and could return a string like '18.03.1993'. All
#' pattern letters of \code{java.text.SimpleDateFormat} can be used.
#'
-#' NOTE: Use when ever possible specialized functions like \code{year}. These benefit from a
+#' Note: Use when ever possible specialized functions like \code{year}. These benefit from a
#' specialized implementation.
#'
#' @param y Column to compute on.
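
The `Note:` in this hunk recommends specialized extractors over format patterns. A short SparkR sketch of that preference, assuming an active `sparkR.session()`; the column `d` and its values are made up for illustration:

```r
df <- createDataFrame(data.frame(d = as.Date(c("1993-03-18", "2016-11-22"))))

# Both expressions yield the year, but the specialized extractor
# is the documented preference over a format pattern.
collect(select(df, year(df$d), date_format(df$d, "yyyy")))
```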
@@ -2341,7 +2341,7 @@ setMethod("from_utc_timestamp", signature(y = "Column", x = "character"),
#' Locate the position of the first occurrence of substr column in the given string.
#' Returns null if either of the arguments are null.
#'
-#' NOTE: The position is not zero based, but 1 based index. Returns 0 if substr
+#' Note: The position is not zero based, but 1 based index. Returns 0 if substr
#' could not be found in str.
#'
#' @param y column to check
@@ -2779,7 +2779,8 @@ setMethod("window", signature(x = "Column"),
#' locate
#'
#' Locate the position of the first occurrence of substr.
-#' NOTE: The position is not zero based, but 1 based index. Returns 0 if substr
+#'
+#' Note: The position is not zero based, but 1 based index. Returns 0 if substr
#' could not be found in str.
#'
#' @param substr a character string to be matched.
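
The `instr` and `locate` notes changed above describe the same contract: positions are 1-based, and 0 signals that the substring was not found. A minimal SparkR sketch, again assuming an active `sparkR.session()`; the column `s` and the search strings are illustrative:

```r
df <- createDataFrame(data.frame(s = c("spark", "flink")))

res <- select(df,
              instr(df$s, "ar"),   # 3 for "spark" (1-based), 0 for "flink" (not found)
              locate("in", df$s))  # 0 for "spark", 3 for "flink"
collect(res)
```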