The What, Why and How of Feature Engineering Shanawaz sheriff April 15, 2020


Artificial intelligence and machine learning have pervaded every industry, yielding substantial returns to those invested in them. As machine learning technologies grow more powerful and proliferate, companies see implementing them as imperative to gaining a competitive edge. While machine learning involves training computers to perform tasks without explicit instruction, putting together coherent data to successfully train an ML model is itself a challenge. Collecting, cleaning and engineering data is the most cumbersome part of the machine learning process.

All machine learning algorithms use data as the input to calibrate and generate output. Data is initially in its crudest form, requiring enhancement before it is fed to the algorithm. This input data comprises features, which are measurable properties of a process, often represented in the form of structured columns. The process of extracting relevant features from the data to train ML algorithms is called feature engineering. Feature engineering is vital to data science because it produces reliable and accurate data, and algorithms are only as good as the data fed to them.

What Is Feature Engineering?

What Is A Feature?

In machine learning, a feature is an individual property or characteristic of a process under study. To effectively train and calibrate algorithms, choosing appropriate features is a crucial step.

Features are the fundamental elements of datasets. The quality of features in the dataset bears a strong influence on the quality of the output derived from machine learning algorithms.

Feature engineering spans across diverse applications. In speech recognition for instance, features for recognizing phonemes can include noise ratios, length of sounds, relative power, filter matches and many others. In building an algorithm to classify spam and legitimate mails, some of the features include presence of particular topics, length, presence of URLs, structure of the URL, number of exclamation points, number of misspellings, information extracted from the header and so on.

So, What Is Feature Engineering?

Feature engineering involves applying domain knowledge and data mining techniques to extract features from raw data. It is used to improve the performance of machine learning algorithms and is often considered applied machine learning.

Features are also referred to as ‘variables’ or ‘attributes’ as they affect the output of a process.

Feature engineering involves several processes. Feature selection, construction, transformation, and extraction are some key aspects of feature engineering. Let’s understand what each process involves:

  • Feature selection involves choosing a set of features from a large collection. Selecting the important features and reducing the size of the feature set makes computation in machine learning and data analytic algorithms more feasible. Feature selection also improves the quality of the output obtained from algorithms. 
  • Feature transformation involves creating features using existing data by the use of mathematical operations. For example, to ascertain the body type of a person a feature called BMI (Body Mass Index) is needed. If the dataset captures the person’s weight and height, BMI can be derived using a mathematical formula.
  • Feature construction is the process of developing new features, beyond those generated by feature transformation, that are appropriate variables of the process under study.
  • Feature extraction is a process of reducing the dimensionality of a dataset. Feature extraction involves combining the existing features into new ones thereby reducing the number of features in the dataset. This reduces the amount of data into manageable sizes for algorithms to process, without distorting the original relationships or relevant information.
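The BMI example under feature transformation can be sketched in pandas. This is a minimal illustration with hypothetical data; the column names and values are assumptions, not part of any real dataset:

```python
import pandas as pd

# Hypothetical dataset capturing each person's weight (kg) and height (m)
df = pd.DataFrame({
    "weight_kg": [70.0, 85.0, 55.0],
    "height_m": [1.75, 1.80, 1.60],
})

# Feature transformation: derive BMI from existing columns
# using the formula weight / height^2
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2
```

The new `bmi` column can then be binned into body-type categories or fed directly to a model, while the raw weight and height columns may be dropped if they carry no additional signal.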

Why is Feature Engineering Required?

The intention of feature engineering is to achieve two primary goals:

  1. Preparing an input dataset that is compatible with and best fits the machine learning algorithm.
  2. Improving the performance of machine learning models 

According to a survey in Forbes, data scientists spend 80% of their time on data preparation. Well-executed feature engineering makes this preparation more time-efficient and yields more consistent output.

When feature engineering processes are executed well, the resulting dataset will be optimal and contain all the essential factors that bear an impact on the business problem. These datasets in turn yield the best possible predictive models and the most useful insights.

Common Feature Engineering Techniques Used

  • Imputation

One of the most common problems in machine learning is the absence of values in datasets. Missing values can be caused by numerous issues such as human error, privacy concerns and interruptions in the flow of data, among many others. Irrespective of the cause, the absence of values affects the performance of machine learning algorithms.

Rows with missing values are sometimes dropped by machine learning platforms, and some platforms do not accept datasets with missing data at all. Dropping rows decreases the performance of the algorithm due to the reduced data size. Imputation instead introduces values into the dataset that are coherent with the existing values. Although there are many imputation methods, replacing missing values in a numerical column with its median, or in a categorical column with its most frequently occurring value, are common approaches.
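The two common imputation strategies above can be sketched in pandas. The dataset here is hypothetical, chosen only to show a numerical and a categorical column side by side:

```python
import pandas as pd

# Hypothetical dataset with missing values in both column types
df = pd.DataFrame({
    "age": [25.0, None, 40.0, 31.0],
    "city": ["NY", "LA", None, "NY"],
})

# Numerical column: impute with the median of the observed values
df["age"] = df["age"].fillna(df["age"].median())

# Categorical column: impute with the most frequent value (the mode)
df["city"] = df["city"].fillna(df["city"].mode()[0])
```

The median is preferred over the mean for skewed numerical columns, since it is not pulled toward outliers.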

  • One-Hot Encoding

This is one of the most common encoding methods used in feature engineering. One-hot encoding converts a categorical column into a set of binary columns (0s and 1s): each distinct category gets its own column, and a row has a 1 in the column corresponding to its category and 0s everywhere else. This changes the feature values to a numerical format that is much easier for algorithms to process, without compromising the information or the relationship between the variables and the feature.
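One-hot encoding is a one-liner in pandas via `get_dummies`. The `colour` column and its values below are illustrative assumptions:

```python
import pandas as pd

# Hypothetical categorical feature
df = pd.DataFrame({"colour": ["red", "green", "red"]})

# One binary column per category; dtype=int gives 0/1 instead of booleans
encoded = pd.get_dummies(df, columns=["colour"], dtype=int)
```

Each original value is now represented by a 1 in exactly one of the `colour_*` columns, which most algorithms can consume directly.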

  • Grouping Operations

In tabular datasets, each row typically represents an instance and each column a feature. Real-world data rarely fits this tidy arrangement: a single instance is often spread across multiple rows. To handle such cases, the data is grouped so that every instance is represented by only one row. The aim of grouping operations is to arrive at an aggregation that best preserves the relationship with the features.
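A grouping operation can be sketched with a pandas `groupby` aggregation. The transaction log below is a hypothetical example of an instance (a customer) spread over several rows:

```python
import pandas as pd

# Hypothetical transaction log: several rows per customer
df = pd.DataFrame({
    "customer": ["A", "A", "B", "B", "B"],
    "amount": [10.0, 20.0, 5.0, 5.0, 15.0],
})

# Aggregate so that each customer is represented by a single row
features = (
    df.groupby("customer")["amount"]
      .agg(["sum", "mean", "count"])
      .reset_index()
)
```

The choice of aggregations (sum, mean, count, max, and so on) is itself a feature-engineering decision, guided by which summary best captures the behaviour being modelled.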

  • Log Transformation

Skewness is a measure of asymmetry in a dataset: the extent to which a given distribution deviates from a normal distribution. Skewed data affects the prediction models in ML algorithms. To resolve this, log transformations are used to reduce the skewness of the data. The less skewed a distribution is, the better algorithms can interpret its patterns.
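A log transformation can be sketched with NumPy. The values below are a hypothetical right-skewed sample; `log1p` (log of 1 + x) is a common variant because it also handles zeros gracefully:

```python
import numpy as np

# Hypothetical right-skewed values (e.g. incomes with one large outlier)
x = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 100.0])

# log1p compresses the long right tail while preserving order
x_log = np.log1p(x)
```

After the transformation the largest value is only a few times bigger than the smallest, rather than a hundred times, so the outlier no longer dominates distance-based or linear models.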

  • Bag of Words

Bag of Words (BoW) is a counting technique that represents a document by the number of occurrences of each word in it, ignoring word order. This representation is useful for identifying similarities and differences between documents in applications like search and document classification.
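A minimal bag-of-words representation can be built with the standard library's `Counter`; the two example documents are assumptions for illustration (real pipelines would also lowercase, strip punctuation and so on):

```python
from collections import Counter

# Hypothetical documents
docs = ["the cat sat", "the cat ate the fish"]

# Count the occurrences of each word per document, ignoring order
bags = [Counter(doc.split()) for doc in docs]
```

Comparing the two `Counter` objects (e.g. via their overlap in keys and counts) gives a crude document-similarity measure; libraries such as scikit-learn provide the same idea as a vectorizer producing a document-term matrix.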

  • Feature Hashing

Feature hashing is an important technique used to scale up machine learning algorithms by vectorizing features. It is commonly used in document classification and sentiment analysis, where tokens are converted into integers. Hash values are derived by applying hash functions to features; those values are then used as indices into a fixed-size vector.
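The hashing trick can be sketched in a few lines. This uses `hashlib.md5` purely to get a deterministic hash; the bucket count and tokens are assumptions, and production code would typically use a library implementation such as scikit-learn's `FeatureHasher`:

```python
import hashlib

def hashed_vector(tokens, n_buckets=8):
    # Map each token to a bucket index via a deterministic hash function,
    # producing a fixed-length count vector regardless of vocabulary size
    vec = [0] * n_buckets
    for tok in tokens:
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % n_buckets
        vec[idx] += 1
    return vec

v = hashed_vector(["spam", "offer", "spam"])
```

The vector length is fixed up front, so no vocabulary needs to be stored; the trade-off is that distinct tokens can collide into the same bucket.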

Automated Feature Engineering

Automated feature engineering is a newer technique that is becoming a standard part of the machine learning workflow. The traditional approach is time-consuming and error-prone, is specific to the problem at hand, and has to change with every new dataset. Automated feature engineering instead extracts useful and meaningful features using a framework that can be applied to any problem. This increases the efficiency of data scientists by letting them spend more time on other elements of machine learning, and enables citizen data scientists to do feature engineering using a framework-based approach.
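The core idea can be sketched in plain pandas: rather than hand-crafting each feature, systematically apply a fixed set of aggregations to a table. This is a simplified illustration with a hypothetical transactions table, not a full framework (dedicated libraries go much further, e.g. stacking aggregations across related tables):

```python
import pandas as pd

# Hypothetical transactions table, one row per transaction
transactions = pd.DataFrame({
    "customer": ["A", "A", "B"],
    "amount": [10.0, 30.0, 5.0],
})

# Automatically generate candidate features by applying every
# aggregation in a fixed list to the numeric column
aggs = ["min", "max", "mean", "sum"]
auto_features = transactions.groupby("customer").agg({"amount": aggs})
auto_features.columns = [f"amount_{a}" for a in aggs]
```

A feature-selection step would then prune the generated candidates down to the ones that actually help the model.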

Conclusion

Despite being in its nascent stages, feature engineering has immense usefulness for data scientists to prepare data in a hassle-free and accelerated manner. 

Feature engineering is an essential process in data science to reap utmost benefits from the data available. Various methods of feature engineering aim at arriving at a coherent set of data that is comprehensible and easy to process for machine learning algorithms to obtain accurate and reliable results.

Features affect the quality of output from machine learning algorithms and feature engineering aims at improving the features that go into training the algorithms. 

If you’d like to learn more about this topic, please feel free to get in touch with one of our AI and data science experts for a personalized consultation.
