Pandas is a powerful library that provides data manipulation, analysis, and visualization capabilities. One of the most fundamental tasks in working with data is filtering, which means selecting only the relevant rows from your dataset.

The Pandas notnull() function is an incredibly useful tool for filtering out missing data. In this article, we will explore different ways to filter data using notnull(), and how to create and view a sample DataFrame.

## Data Filtering using the Pandas notnull() Function:

The notnull() function returns True for values that are not null, which lets us filter rows based on whether they contain null values. We can apply it in several ways, depending on our requirements.
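To see the basic behavior, here is a minimal sketch on a small hypothetical Series with one missing value:

```python
import pandas as pd
import numpy as np

# Hypothetical Series containing a missing value
s = pd.Series([1.0, np.nan, 3.0])

mask = s.notnull()
print(mask.tolist())  # [True, False, True]
```

The boolean mask produced here is what drives every filtering idiom in the rest of the article.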

## Filtering for rows with no null values in any column:

This type of filtering is useful when you want to remove rows that contain null values in any column. To do this, we can use the notnull() function with the all() function as shown below:

df = df[df.notnull().all(axis=1)]

This filters out every row that contains a null value in any column, returning a new DataFrame with only the complete rows.
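As a runnable sketch on hypothetical data, where two of the three rows contain a NaN:

```python
import pandas as pd
import numpy as np

# Hypothetical DataFrame: rows 1 and 2 each contain a NaN somewhere
df = pd.DataFrame({
    "A": [1.0, np.nan, 3.0],
    "B": [4.0, 5.0, np.nan],
    "C": [7.0, 8.0, 9.0],
})

# Keep only rows where every column is non-null
complete_rows = df[df.notnull().all(axis=1)]
print(complete_rows.index.tolist())  # [0]
```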

## Filtering for rows with no null values in a specific column:

Sometimes we might need to filter rows based on the presence or absence of null values in a specific column.

We can achieve this by using the notnull() function on a specific column, as shown below:

df = df[df['column_name'].notnull()]

This line of code filters out all rows in which the specified column has a null value, returning a new DataFrame with rows that have no null values in that column.
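A short sketch with hypothetical data shows that nulls in *other* columns are ignored:

```python
import pandas as pd
import numpy as np

# Hypothetical data: row 1 has NaN in "A", row 2 has NaN only in "B"
df = pd.DataFrame({
    "A": [1.0, np.nan, 3.0],
    "B": [4.0, 5.0, np.nan],
})

# Filter only on column "A"; row 2 is kept despite its NaN in "B"
filtered = df[df["A"].notnull()]
print(filtered.index.tolist())  # [0, 2]
```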

## Count number of non-null values in each column:

We can use notnull() combined with the sum() function to count the number of non-null values in each column. This is useful when we want to find out how much data is missing from our dataset.

To count non-null values in each column, we can use the following code:

df.notnull().sum()

This line of code returns a Pandas Series in which each column name is paired with the count of non-null values in that column.

## Count number of non-null values in entire DataFrame:

If we want to count the number of non-null values in the entire DataFrame, we can use the sum() function twice, as shown below:

df.notnull().sum().sum()

This line of code returns a single integer value that represents the total number of non-null values in the DataFrame.
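Both counting idioms can be sketched together on a small hypothetical DataFrame:

```python
import pandas as pd
import numpy as np

# Hypothetical data: "A" and "B" each have one NaN, "C" has none
df = pd.DataFrame({
    "A": [1.0, np.nan, 3.0],
    "B": [4.0, 5.0, np.nan],
    "C": [7.0, 8.0, 9.0],
})

per_column = df.notnull().sum()    # Series: non-null count per column
total = df.notnull().sum().sum()   # scalar: non-null count for the whole frame

print(per_column.tolist())  # [2, 2, 3]
print(int(total))           # 7
```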

## Example DataFrame:

Before we can apply the filtering and counting functions, we need to create a sample DataFrame. To create a sample DataFrame in Pandas, we can use the following code:

import pandas as pd

import numpy as np

df = pd.DataFrame(np.random.rand(5, 4), columns=list('ABCD'))

This code creates a DataFrame with five rows and four columns, where all the values are randomly generated using the numpy.random.rand() function.
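As a self-contained sketch, seeding the random generator (an assumption added here purely for reproducibility) makes the sample DataFrame identical on every run:

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # assumption: fixed seed so the example is reproducible
df = pd.DataFrame(np.random.rand(5, 4), columns=list('ABCD'))

print(df.shape)             # (5, 4)
print(df.columns.tolist())  # ['A', 'B', 'C', 'D']
```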

The columns are labeled 'A', 'B', 'C', and 'D'.

## Viewing the DataFrame:

Once we have created a DataFrame, we might want to view it to check that everything looks right. Pandas provides several ways to view a DataFrame:

df.head() # returns first 5 rows of DataFrame

df.tail() # returns last 5 rows of DataFrame

df.sample(3) # returns 3 random rows from DataFrame

df.info() # returns a concise summary of the DataFrame

## Conclusion:

In this article, we have explored different ways to filter data using the Pandas notnull() function. We have learned how to filter for rows with no null values in any column or in a specific column, and how to count the number of non-null values in each column or in the entire DataFrame.

We have also seen how to create a sample DataFrame and view it using Pandas. By mastering these skills, we can become more efficient in working with our data and perform more advanced analyses.

## Additional Resource: Common Filtering Operations in Pandas

Besides filtering with the Pandas notnull() function, there are several other filtering operations in Pandas that are commonly used. In this section, we will explore some of them in detail.

## Filtering using boolean indexing:

Boolean indexing is a powerful and flexible mechanism for selecting data from a Pandas DataFrame. It is based on the principle of filtering data using conditions.

For instance, suppose we want to filter rows of a DataFrame whose values in column A are greater than 0.5. We can use the following code:

df[df['A'] > 0.5]

The above code keeps all rows that satisfy the condition 'A > 0.5'. Note that df['A'] > 0.5 returns a Boolean Series of True and False values.

In the filter expression, rows aligned with True values are kept, whereas rows aligned with False values are dropped. We can combine multiple conditions using the logical operators & (and) and | (or); each condition must be wrapped in parentheses.

Consider a scenario where we want to select rows whose values in column A are greater than 0.5 and whose values in column B are less than 0.3. The code below demonstrates how to accomplish this.

df[(df['A'] > 0.5) & (df['B'] < 0.3)]

The filter operation returns only the rows that satisfy both parenthesized conditions.
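The combined condition can be sketched end-to-end on small hypothetical data:

```python
import pandas as pd

# Hypothetical values chosen so only rows 0 and 3 satisfy both conditions
df = pd.DataFrame({
    "A": [0.9, 0.2, 0.7, 0.6],
    "B": [0.1, 0.4, 0.5, 0.2],
})

# Parentheses around each condition are required when combining with &
result = df[(df["A"] > 0.5) & (df["B"] < 0.3)]
print(result.index.tolist())  # [0, 3]
```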

## Filtering using isin():

The isin() function in pandas is used to filter data based on a list of values. Suppose we want to filter rows whose value in column C is 'cat' or 'dog'.

We can use the following code:

df[df['C'].isin(['cat', 'dog'])]
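As a quick sanity check, here is the same call on a small hypothetical DataFrame:

```python
import pandas as pd

# Hypothetical column of animal labels
df = pd.DataFrame({"C": ["cat", "bird", "dog", "fish"]})

pets = df[df["C"].isin(["cat", "dog"])]
print(pets["C"].tolist())  # ['cat', 'dog']
```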

The above code keeps all rows that have the value 'cat' or 'dog' in column C.

## Filtering using str.contains():

The str.contains() method, available through a column's .str accessor, is used to filter data based on substring matches.

For instance, consider a DataFrame that contains information about different cities. Running the code below will fetch details about cities in the state of Florida.

df[df['state'].str.contains('FL')]

The code above keeps all rows whose 'state' column contains the substring 'FL'. Note that 'FL' may match only a portion of the string in the specified column.
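A minimal sketch with made-up city data (the column names here are illustrative, not from a real dataset):

```python
import pandas as pd

# Hypothetical city data; two rows have 'FL' in the state column
df = pd.DataFrame({
    "city": ["Miami", "Austin", "Orlando"],
    "state": ["FL", "TX", "FL"],
})

florida = df[df["state"].str.contains("FL")]
print(florida["city"].tolist())  # ['Miami', 'Orlando']
```

Note that if the column contained NaN values, str.contains would propagate them; its na parameter controls how missing entries are treated.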

## Filtering using query():

The query() function in pandas simplifies long filter expressions involving multiple columns. The code below demonstrates how we can query our DataFrame.

df.query('A > 0.5 and B < 0.3')

In the query above, we provided the filter expression as a string. Note that, within the string, column names are referenced directly as variables; to reference a Python variable defined outside the expression, prefix its name with @.
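A runnable sketch on hypothetical data, also showing the @ prefix for an external variable:

```python
import pandas as pd

# Hypothetical values; rows 0 and 2 satisfy both conditions
df = pd.DataFrame({
    "A": [0.9, 0.2, 0.7],
    "B": [0.1, 0.4, 0.2],
})

threshold = 0.3  # external Python variable, referenced via @ inside the query
result = df.query("A > 0.5 and B < @threshold")
print(result.index.tolist())  # [0, 2]
```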

## Filtering using loc[] and iloc[]:

The loc[] and iloc[] indexers in pandas provide a way of selecting rows by label or by integer position, respectively. For instance, we can filter all rows whose index is greater than 1 as follows.

df.loc[df.index > 1]

The code above keeps all rows whose index label is greater than 1. To select by position instead, replace df.loc with df.iloc and use integer positions.
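With a default RangeIndex, label-based and position-based selection happen to coincide, as this small sketch shows:

```python
import pandas as pd

# Hypothetical data with the default integer index 0..3
df = pd.DataFrame({"A": [10, 20, 30, 40]})

by_label = df.loc[df.index > 1]  # label-based: rows with index label > 1
by_position = df.iloc[2:]        # position-based: rows from position 2 onward

print(by_label["A"].tolist())    # [30, 40]
print(by_position["A"].tolist()) # [30, 40]
```

With a non-default index (e.g. string labels), the two indexers would select different things, which is why pandas keeps them separate.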

## Conclusion:

This additional resource has explored common filtering operations in pandas. We have seen how to perform boolean indexing and how to filter with isin(), str.contains(), query(), loc[], and iloc[].

Filtering is a crucial aspect of data preparation and helps in performing accurate and efficient data analysis. The more we understand the various filtering operations in pandas, the more flexible and powerful our data analysis skills become.

In conclusion, filtering data is a critical skill in data analysis, and the Pandas notnull() function provides a powerful means to achieve it. We have learned different ways to filter data with notnull(), including filtering for rows with no null values in any column or in a specific column, and counting the number of non-null values in each column or in the entire DataFrame.

Additionally, we explored various common filtering operations in Pandas, including boolean indexing, isin(), str.contains(), query(), loc[], and iloc[]. By mastering these skills, we can become more efficient in working with data and perform advanced analyses.

Effective filtering can lead to more accurate and insightful data analysis that paves the way for better decision-making.