<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head><title>R: dropna</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link rel="stylesheet" type="text/css" href="R.css">

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js"></script>
<script>hljs.initHighlightingOnLoad();</script>
</head><body>

<table width="100%" summary="page for dropna,DataFrame-method {SparkR}"><tr><td>dropna,DataFrame-method {SparkR}</td><td align="right">R Documentation</td></tr></table>

<h2>dropna</h2>

<h3>Description</h3>

<p><code>dropna</code> returns a new DataFrame omitting rows with null values.
</p>
<p><code>fillna</code> returns a new DataFrame with null values replaced by a specified value.
</p>


<h3>Usage</h3>

<pre>
## S4 method for signature 'DataFrame'
dropna(x, how = c("any", "all"), minNonNulls = NULL,
  cols = NULL)

## S4 method for signature 'DataFrame'
fillna(x, value, cols = NULL)

na.omit(x, how = c("any", "all"), minNonNulls = NULL, cols = NULL)
</pre>


<h3>Arguments</h3>

<table summary="R argblock">
<tr valign="top"><td><code>x</code></td>
<td>
<p>A SparkSQL DataFrame.</p>
</td></tr>
<tr valign="top"><td><code>how</code></td>
<td>
<p>&quot;any&quot; or &quot;all&quot;.
if &quot;any&quot;, drop a row if it contains any nulls.
if &quot;all&quot;, drop a row only if all its values are null.
if minNonNulls is specified, how is ignored.</p>
</td></tr>
<tr valign="top"><td><code>minNonNulls</code></td>
<td>
<p>If specified, drop rows that have less than
minNonNulls non-null values.
This overwrites the how parameter.</p>
</td></tr>
<tr valign="top"><td><code>cols</code></td>
<td>
<p>Optional list of column names to consider.</p>
</td></tr>
<tr valign="top"><td><code>value</code></td>
<td>
<p>Value to replace null values with.
Should be an integer, numeric, character or named list.
If the value is a named list, then cols is ignored and
value must be a mapping from column name (character) to
replacement value. The replacement value must be an
integer, numeric or character.</p>
</td></tr>
<tr valign="top"><td><code>x</code></td>
<td>
<p>A SparkSQL DataFrame.</p>
</td></tr>
<tr valign="top"><td><code>cols</code></td>
<td>
<p>optional list of column names to consider.
Columns specified in cols that do not have matching data
type are ignored. For example, if value is a character, and
subset contains a non-character column, then the non-character
column is simply ignored.</p>
</td></tr>
</table>
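
<p>To make the interaction between <code>how</code>, <code>minNonNulls</code> and
<code>cols</code> concrete, the following illustrative sketch assumes a DataFrame
<code>df</code> with an <code>age</code> and a <code>name</code> column, created as in
the Examples section below (the column names are assumptions used only for
illustration):
</p>

<pre><code class="r"># illustrative only: assumes sqlCtx and df already exist, e.g.
# df &lt;- jsonFile(sqlCtx, &quot;path/to/file.json&quot;)

# how = &quot;any&quot; (the default): drop rows containing any null value
dropna(df, how = &quot;any&quot;)

# how = &quot;all&quot;: drop a row only when every one of its values is null
dropna(df, how = &quot;all&quot;)

# restrict the null check to the &quot;age&quot; column
dropna(df, cols = &quot;age&quot;)

# minNonNulls overrides how: keep only rows with at least 2 non-null values
dropna(df, minNonNulls = 2)
</code></pre>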


<h3>Value</h3>

<p>A DataFrame.
</p>


<h3>Examples</h3>

<pre><code class="r">## Not run: 
##D sc &lt;- sparkR.init()
##D sqlCtx &lt;- sparkRSQL.init(sc)
##D path &lt;- &quot;path/to/file.json&quot;
##D df &lt;- jsonFile(sqlCtx, path)
##D dropna(df)
## End(Not run)
## Not run: 
##D sc &lt;- sparkR.init()
##D sqlCtx &lt;- sparkRSQL.init(sc)
##D path &lt;- &quot;path/to/file.json&quot;
##D df &lt;- jsonFile(sqlCtx, path)
##D fillna(df, 1)
##D fillna(df, list(&quot;age&quot; = 20, &quot;name&quot; = &quot;unknown&quot;))
## End(Not run)
</code></pre>
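
<p>As a further illustrative sketch (not part of the generated examples above), the
snippet below shows how <code>cols</code> limits which columns <code>fillna</code>
touches, and how columns whose type does not match <code>value</code> are skipped.
It assumes <code>df</code> has a numeric column <code>age</code> and a character
column <code>name</code>; the column names are assumptions used only for illustration:
</p>

<pre><code class="r"># replace nulls only in the &quot;age&quot; column
fillna(df, 0, cols = &quot;age&quot;)

# a character replacement value only applies to character columns;
# the numeric &quot;age&quot; column is simply ignored
fillna(df, &quot;unknown&quot;, cols = c(&quot;age&quot;, &quot;name&quot;))
</code></pre>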


<hr><div align="center">[Package <em>SparkR</em> version 1.4.0 <a href="00Index.html">Index</a>]</div>
</body></html>