Learn The Different Test Techniques In Detail


Introduction to Test techniques


List of Test techniques

There are various techniques available; each has its own strengths and weaknesses. Each technique is good at finding particular types of defects and relatively poor at finding other types. In this section, we are going to discuss the various techniques.

1. Static testing techniques

2. Specification-based test techniques

All specification-based techniques share the common characteristic that they are based on a model of some aspect of the specification, enabling test cases to be derived systematically. There are four specification-based sub-techniques, which are as follows:

Equivalence partitioning: It is a specification-based technique in which test cases are designed to execute representatives from equivalence partitions. In principle, cases are designed to cover each partition at least once.

Boundary value analysis: It is a technique in which cases are designed based on boundary values. A boundary value is an input or output value on the edge of an equivalence partition, or at the smallest incremental distance on either side of an edge; for example, the minimum and maximum values (a small code sketch follows this list).

Decision table testing: It is a technique in which cases are designed to execute the combination of inputs and causes shown in a decision table.

State transition testing: It is a technique in which cases are designed to execute valid and invalid state transitions.
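As a minimal sketch of the first two techniques above, assume a hypothetical is_valid_age check whose valid equivalence partition is ages 18 to 60 (the function and partitions are invented purely for illustration). Equivalence partitioning picks one representative per partition, and boundary value analysis adds the edge values:

# Hypothetical system under test: ages 18..60 form the valid partition.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence partitioning: one representative per partition
# (below the range, inside the range, above the range).
equivalence_cases = [(10, False), (35, True), (70, False)]

# Boundary value analysis: values on each edge and just outside it.
boundary_cases = [(17, False), (18, True), (60, True), (61, False)]

for value, expected in equivalence_cases + boundary_cases:
    assert is_valid_age(value) == expected
print("All equivalence and boundary cases passed")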

3. Structure-based testing

Test coverage: It is the degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

Statement coverage: It is a percentage of executable statements that the test suite has exercised.

Decision Coverage: It is a percentage of decision outcomes that a test suite has exercised. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.

Branch coverage: It is a percentage of the branches that the test suite has exercised. 100% branch coverage implies both 100% decision coverage and 100% statement coverage (see the sketch after this list).
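As a rough illustration of the difference between statement coverage and decision/branch coverage, consider this small invented Python function (not tied to any particular coverage tool):

def grant_discount(is_member: bool, total: float) -> float:
    # A single decision with two outcomes.
    discount = 0.0
    if is_member and total > 100:
        discount = 10.0
    return total - discount

# grant_discount(True, 150) alone executes every statement (100% statement
# coverage) but exercises only the True outcome of the decision.
# Adding grant_discount(False, 150) exercises the False outcome as well,
# which is what 100% decision/branch coverage requires.
assert grant_discount(True, 150) == 140.0
assert grant_discount(False, 150) == 150.0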

4. Experience-based testing

The experience-based technique is a procedure to derive and select test cases based on the experience and knowledge of the tester. All experience-based techniques share the common characteristic that they are based on human experience and knowledge, both of the system itself and of its likely defects. Cases are derived less systematically but may be more effective. The experience of both technical people and business people is a key factor in an experience-based technique.

Conclusion

The most important thing to understand here is that there is no single best testing technique, as each technique is good at finding one specific class of defect. Using just a single technique will help ensure that defects of that particular class are found, but defects of other classes may be missed. So using a variety of techniques will help ensure that a variety of defects are found and will result in more effective testing.

Recommended Articles

This is a guide to Test Techniques. Here we discuss the list of various test techniques along with their strengths and weaknesses. You may also have a look at the following articles to learn more –


Learn The Dataset Processing Techniques

Introduction to dataset preprocessing

In the real world, data is frequently incomplete: it lacks attribute values, specific attributes of relevance are missing, or it contains only aggregate data. Errors or outliers make the data noisy, and inconsistencies in codes or names make it inconsistent. The Keras dataset pre-processing utilities assist us in converting raw data on disk into a tf.data.Dataset that can be used to train a model. A dataset is a collection of data that may be used to train a model. In this topic, we are going to learn about dataset preprocessing.


Why use dataset pre-processing?

By pre-processing data, we can:

Improve the accuracy of our data. We remove any values that are wrong or missing as a consequence of human error or software problems.

Improve consistency. The accuracy of the results is harmed when there are data discrepancies or duplicates.

Make the dataset as complete as possible. If necessary, we can fill in the missing attributes.

Smooth the data. This makes it easier to use and interpret.

We have a few dataset pre-processing utilities:

Image

Text

Time series

Importing datasets for pre-processing

Steps for Importing a dataset in Python:

Importing appropriate Libraries

import numpy as np
import pandas as pd
import matplotlib.pyplot as mpt

Import Datasets

The dataset is in CSV format. A CSV file is a plain text file that contains tabular data; each line in the file represents a data record.

dataset = pd.read_csv('Data.csv')

We’ll use pandas’ iloc (integer-location based indexing) to select the columns; it takes two parameters: [row selection, column selection].

X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, -1].values

Let’s have the following incomplete datasets

Name    Pay      Managers
AAA     40000    Yes
BBB     90000
        60000    No
CCC              Yes
DDD     30000    Yes

As we can see, a few cells in the table are missing. To fill these, we need to follow a few steps:

from sklearn.impute import SimpleImputer

Next, by importing this class (in older scikit-learn versions it was called Imputer and lived in sklearn.preprocessing), we create an imputer object and fit-transform the columns that contain missing values, as shown in the sketch below.
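A minimal sketch of the imputation step, assuming scikit-learn's SimpleImputer and a small data frame that mirrors the incomplete table above (NaN marks the missing cells):

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Sample data mirroring the incomplete table above.
data = pd.DataFrame({
    "Name": ["AAA", "BBB", np.nan, "CCC", "DDD"],
    "Pay": [40000, 90000, 60000, np.nan, 30000],
    "Managers": ["Yes", np.nan, "No", "Yes", "Yes"],
})

# Numeric column: replace missing values with the column mean.
pay_imputer = SimpleImputer(strategy="mean")
data[["Pay"]] = pay_imputer.fit_transform(data[["Pay"]])

# Text columns: replace missing values with the most frequent value.
cat_imputer = SimpleImputer(strategy="most_frequent")
data[["Name", "Managers"]] = cat_imputer.fit_transform(data[["Name", "Managers"]])

print(data)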

Splitting the dataset into training and test sets.

Importing the splitting function (the older sklearn.cross_validation module has been removed from current scikit-learn versions, so model_selection is used instead):

from sklearn.model_selection import train_test_split

A_train, A_test, B_train, B_test = train_test_split(X, Y, test_size = 0.2)

Feature Scaling

A_test = scale_A.transform(A_test)
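The single line above assumes a fitted scaler named scale_A already exists. A self-contained sketch of the usual pattern, using StandardScaler and toy arrays standing in for the split data, might look like this:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy feature matrices standing in for the split produced by train_test_split.
A_train = np.array([[25, 40000], [31, 90000], [19, 60000], [29, 45000]], dtype=float)
A_test = np.array([[26, 50000]], dtype=float)

scale_A = StandardScaler()
A_train = scale_A.fit_transform(A_train)  # learn mean and std from the training set only
A_test = scale_A.transform(A_test)        # apply the same parameters to the test set
print(A_train)
print(A_test)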

Example #1

names = ['sno', 'sname', 'age', 'Type', 'diagnosis', 'in', 'out', 'consultant', 'class']
X = array[:, 0:8]
Y = array[:, 8]

Explanation

All of the data preprocessing procedures are combined in the above code.

Output:

Feature datasets pre-processing

Outliers are removed during pre-processing, and the features are scaled to an equivalent range.

Steps Involved in Data Pre-processing

Data cleaning: Data can contain a lot of useless and missing information. Data cleaning is carried out to handle this component. It entails dealing with missing data, noisy data, and so on. The purpose of data cleaning is to give machine learning simple, full, and unambiguous collections of examples.

a) Missing Data: This occurs when some values in the data are missing. It can be handled in several ways.

Here are a few examples:

Ignore the tuples: This method is only appropriate when the dataset is huge and many values are missing within a tuple.

Fill in the blanks: There are several options for completing this task. You can fill the missing values manually, use the attribute mean, or use the most likely value (see the sketch below).
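A small sketch of both filling strategies using pandas (the column names and values here are invented for illustration):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [23, 31, np.nan, 26, 29],
    "city": ["Mumbai", None, "Pune", "Mumbai", "Delhi"],
})

# Fill a numeric attribute with its mean.
df["age"] = df["age"].fillna(df["age"].mean())

# Fill a categorical attribute with its most likely (most frequent) value.
df["city"] = df["city"].fillna(df["city"].mode()[0])

print(df)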

b) Noisy Data: Data with a lot of noise

The term “noise” refers to a great volume of additional worthless data.

Duplicates or semi-duplicates of data records; data segments with no value for certain research; and needless information fields for each of the variables are examples of this.

Method of Binning:

This approach smooths data that has been sorted. The data is divided into equal-sized parts, and each part is then smoothed using one of several approaches, for example by replacing the values in a bin with the bin mean, as sketched below.
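A minimal sketch of smoothing by bin means with pandas, assuming equal-frequency bins and an invented price series:

import pandas as pd

prices = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]).sort_values()

# Equal-frequency bins, then smoothing each bin by its mean.
bins = pd.qcut(prices, q=3, labels=["low", "mid", "high"])
smoothed = prices.groupby(bins).transform("mean")

print(pd.DataFrame({"price": prices, "bin": bins, "smoothed": smoothed}))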

Regression:

Regression analysis aids in determining which variables do have an impact. To smooth massive amounts of data, use regression analysis. This will help to focus on the most important qualities rather than trying to examine a large number of variables.

Clustering: In this method, needed data is grouped in a cluster. Outliers may go unnoticed, or they may fall outside of clusters.

Data Transformation

We’ve already started modifying our data with data cleaning, but data transformation will start the process of transforming the data into the right format(s) for analysis and other downstream operations. This usually occurs in one or more of the following situations:

Aggregation

Normalization (see the sketch after this list)

Selection of features

Discretization

The creation of a concept hierarchy
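As a sketch of the normalization step listed above, assuming scikit-learn's MinMaxScaler and a toy feature matrix:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[40000.0, 23], [90000.0, 31], [60000.0, 19], [30000.0, 26]])

scaler = MinMaxScaler()            # rescales every feature to the [0, 1] range
X_normalized = scaler.fit_transform(X)
print(X_normalized)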

Data Reduction:

Data mining is a strategy for dealing with large amounts of data, and analysis becomes complicated as data volumes grow. We employ data reduction techniques to overcome this problem. The goal is to improve storage efficiency and reduce analysis costs. Data reduction not only simplifies and improves analysis but also reduces data storage.

The following are the steps involved in data reduction:

Attribute selection: Like discretization, this can help us fit the data into smaller groups. It essentially combines tags or traits, such as male/female and manager, to create a combined attribute like male manager/female manager.

Reduced quantity: This will aid data storage and transmission. A regression model, for example, can be used to employ only the data and variables that are relevant to the investigation at hand.

Reduced dimensionality: This, too, helps to improve analysis and downstream processes by reducing the amount of data used. Pattern recognition is used by algorithms like K-nearest neighbors to merge similar data and make it more useful (a dimensionality-reduction sketch follows this list).
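As one common illustration of dimensionality reduction (PCA here is just an example technique, not the only option), a short sketch with scikit-learn:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 10))          # 100 samples with 10 features

pca = PCA(n_components=3)               # keep only the 3 strongest directions
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                  # (100, 3)
print(pca.explained_variance_ratio_)    # share of variance kept by each component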

Conclusion – dataset preprocessing

To conclude, we have seen dataset preprocessing techniques and their libraries in detail. The dataset should be organized in such a way that many Machine Learning and Deep Learning algorithms can be run on it in parallel, so that the best one can be chosen.

Recommended Articles

This is a guide to dataset preprocessing. Here we discuss the Dataset processing techniques and their libraries in detail. You may also have a look at the following articles to learn more –

Learning Different Techniques Of Anomaly Detection


Introduction

As a data scientist, you will meet anomaly detection in many situations, such as detecting fraudulent bank transactions or spotting anomalies in smart meter readings.

Have you ever thought about the bank where you make several transactions and how the bank helps you by identifying fraud?

Someone logs in to your account, and a message is sent to you immediately after the bank notices suspicious activity from an unusual place, asking you to confirm whether it is you or someone else.

What is Anomaly Detection?

Suppose we own a company and notice errors in the front-end data: even though our company supplies the same service, the sales are declining. The data points behind such unexpected behaviour are termed anomalies or outliers.

Let’s take an example that will further clarify what an anomaly means.

Source: Canvas

 Here in this example, a bird is an outlier or noise.


If the bank manager notices unusual behavior in your account, the bank can block the card. For example, spending a large amount in one day, or another unusual sum on another day, will trigger an alert message or a card block, because it does not match how you were spending previously.

Two AI firms provide anomaly detection solutions for banks: one is from Fedzai, and another is Ayasdi’s solution.

Let’s take another example of shopping. At the end of the month, the shopkeeper puts certain items on sale and offers you a scheme where you can buy two at a lower rate.

Now, how do we describe the sale data compared to the start-of-month data? Is the sale data valid with respect to the monthly sales at the start of the month? It is not valid; it comes from a different process.

Outliers are often treated as “something I should remove from the dataset so that it doesn’t skew the model I’m building,” typically because the analyst suspects that the data in question is flawed and that the model shouldn’t need to take it into account.

Outliers are most commonly caused by:

Intentional (dummy outliers created to test detection methods)

Data processing errors (data manipulation or data set unintended mutations)

Sampling errors (extracting or mixing data from wrong or various sources)

Natural (not an error, novelties in the data)

An actual data point significantly outside a distribution’s mean or median is an outlier.

An anomaly is a false data point made by a different process than the rest of the data.

If you construct a linear regression model, points far from the regression line are less likely to have been generated by the model. This is also called the likelihood of the data.

Outliers are data points with a low likelihood, according to your model. They are identical from the perspective of modeling.

For instance, you could construct a model that describes a trend in the data and then actively looks for existing or new values with a very low likelihood. When people say “anomalies,” they mean these things. The anomaly detection of one person is the outlier of another!

Extreme values in your data series are called outliers. They are questionable, but possible: one student can be much more brilliant than the other students in the same class.

However, anomalies are unquestionably errors. For example, a reading of one million degrees outside, or air temperature that stays exactly the same for two weeks, is simply wrong. As a result, you disregard this data.

An outlier is a valid data point, and can’t be ignored or removed, whereas noise is garbage that needs removal. Let’s take another example to understand noise.

Suppose you wanted to compute the average salary of employees and the data accidentally included the pay of Ratan Tata or Bill Gates; the average salary would be inflated, which is incorrect.

1.

2. Uni-variate – It is defined by the dataset having a single variable with different values.

3. Multi-variate – It is defined by the dataset having more than one variable, each with a different set of values.

We will now use various techniques which will help us to find outliers.

Anomaly Detection by Scikit-learn

We will import the required library and read our data.

import seaborn as sns
import numpy as np
import pandas as pd

titanic = pd.read_csv('titanic.csv')
titanic.head()



We can see in the image many null values. We will fill the null values with mode.

titanic['age'].fillna(titanic['age'].mode()[0], inplace=True)
titanic['cabin'].fillna(titanic['cabin'].mode()[0], inplace=True)
titanic['boat'].fillna(titanic['boat'].mode()[0], inplace=True)
titanic['body'].fillna(titanic['body'].mode()[0], inplace=True)
titanic['sex'].fillna(titanic['sex'].mode()[0], inplace=True)
titanic['survived'].fillna(titanic['survived'].mode()[0], inplace=True)
titanic['home.dest'].fillna(titanic['home.dest'].mode()[0], inplace=True)

Let’s see our data in more detail. When we look at our data in statistics, we prefer to know its distribution types, whether binomial or other distributions.

titanic['age'].plot.hist( bins = 50, title = "Histogram of the age" )

This distribution is Gaussian distribution and is often called a normal distribution.

Mean and Standard Deviation are considered the two parameters. With the change in mean values, the distribution curve changes to left or right depending on the mean values.

Standard Normal distribution means mean(μ = 0) and standard deviation (σ) is one. To know the probability Z-table is already available.

Z-Scores

We can calculate Z-scores by the formula z = (x - μ) / σ, where x is a random variable, μ is the mean, and σ is the standard deviation.

Why do we need Z-Scores to be calculated?

It helps to know how a single or individual value lies in the entire distribution.

For example, if the mean of the maths scores is 82 and the standard deviation σ is 4, and we have a value x of 75, then the Z-score is (75 - 82) / 4 = -1.75. A Z-score of -1.75 shows that the value 75 lies below the mean. It helps to determine whether values are higher than, lower than, or equal to the mean, and how far away they are.

Now, we will calculate Z-Score in python and look at outliers.

We imported Z-Scores from Scipy. We calculated Z-Score and then filtered the data by applying lambda. It gives us the number of outliers ranging from the age of 66 to 80.

from scipy.stats import zscore

titanic["age_zscore"] = zscore(titanic["age"])
titanic["outlier"] = titanic["age_zscore"].apply(lambda x: x >= 2.8)
titanic[titanic["outlier"]]

We will now look at another method based on clustering called Density-based spatial clustering of applications with noise (DBSCAN).

DBSCAN

As the name indicates, this outlier detection method is based on clustering. In this method, we calculate the distance between points.

Let’s continue our titanic data and plot a graph between fare and age. We made a scatter graph between age and fare variables. We found three dots far away from the others.

Before we proceed further, we will normalize our data variables.

There are many ways to make our data normalize. We can import standard scaler by sklearn or min max scaler.

titanic['fare'].fillna(titanic['fare'].mean(), inplace=True)

from sklearn.preprocessing import StandardScaler

fage = titanic[["age", "fare"]]
scale = StandardScaler()
fage = scale.fit_transform(fage)
fage = pd.DataFrame(fage, columns = ["age", "fare"])
fage.plot.scatter(x = "age", y = "fare")

We used Standard Scaler to make our data normal and plotted a scatter graph.

Now we will import DBSCAN to give points to the clusters. If it fails, it will show -1.

from sklearn.cluster import DBSCAN

outlier = DBSCAN(eps = 0.5, metric="euclidean", min_samples = 3, n_jobs = -1)
clusters = outlier.fit_predict(fage)
clusters

array([0, 1, 1, ..., 1, 1, 1])

Now we have the results, but how do we check which value is the minimum, the maximum, and whether we have -1 values? We will use argmin to locate the smallest value in the cluster labels.

value = -1
index = clusters.argmin()
print("The element is at ", index)
small_num = np.min(clusters)
print("The small number is : ", small_num)
print(np.where(clusters == small_num))

The element is at: 14
The small number is : -1
(array([ 14, 50, 66, 94, 285, 286], dtype=int64),)

We can see from the result six values which are -1.

Let’s now plot a scatter graph.

from matplotlib import cm c = cm.get_cmap('magma_r') fage.plot.scatter( x = "age", y = "fare", c = clusters, cmap = c, colorbar = True )

The above methods we applied are on uni-variate outliers.

For Multi-variates outliers detections, we need to understand the multi-variate outliers.

For example, we take car readings. We have two meters: the speedometer, which records or measures the speed at which the vehicle is moving, and the rpm meter, which records the number of rotations made by the car wheel per minute.

Suppose the speedometer shows values in the range of 0-60 mph and the rpm in 0-750. We assume that all the values that come in should correlate with each other. If the speedometer shows a speed of 50 and the rpm shows 0, the readings are incorrect. If the speedometer shows a value greater than zero, that means the car was moving, so the rpm should have higher values; but in our case, it shows a 0 value. That is a multivariate outlier.

Mahalanobis Distance Method

In DBSCAN, we used euclidean distance metrics, but in this case, we are talking about the Mahalanobis distance method. We can also use Mahalanobis distance with DBSCAN.

DBSCAN(eps=0.5, min_samples=3, metric='mahalanobis', metric_params={'V':np.cov(X)}, algorithm='brute', leaf_size=30, n_jobs=-1)

Why is Euclidean distance unfit for features that are correlated with each other? Euclidean distance cannot capture, or will give a misleading picture of, how close two points really are.

The Mahalanobis method measures the distance between a point and a distribution, that is, the clean data. Euclidean distance is measured between two points, and its z-score is calculated as x minus the mean, divided by the standard deviation. In the Mahalanobis distance, the z-score is x minus the mean, divided by the covariance matrix (that is, multiplied by its inverse).

Therefore, what effect does dividing by the covariance matrix have? The covariance values will be high if the variables in your dataset are highly correlated.

Similarly, if the covariance values are low, the distance is not significantly reduced if the data are not correlated. It does so well that it addresses both the scale and correlation of the variables issues.

Code

df = pd.read_csv('caret.csv').iloc[:, [0,4,6]]
df.head()

We define the function distance with parameters x=None, data=None, and cov=None. Inside the function, we take the mean of the data; if a covariance matrix is passed in, we use it, otherwise we calculate the covariance matrix from the data. T stands for transpose.

For example, if the array size is five or six and you want it to be in two variables, then we need to transpose the matrix.

np.random.multivariate_normal(mean, cov, size = 5) array([[ 0.0509196, 0.536808 ], [ 0.1081547, 0.9308906], [ 0.4545248, 1.4000731], [ 0.9803848, 0.9660610], [ 0.8079491 , 0.9687909]]) np.random.multivariate_normal(mean, cov, size = 5).T array([[ 0.0586423, 0.8538419, 0.2910855, 5.3047358, 0.5449706], [ 0.6819089, 0.8020285, 0.7109037, 0.9969768, -0.7155739]])

We use sp.linalg, SciPy’s linear algebra module, which provides different linear algebra routines; its inv function inverts a matrix. NumPy’s dot is used for matrix multiplication.

import scipy as sp
import scipy.linalg  # make sure the linalg submodule is loaded

def distance(x=None, data=None, cov=None):
    x_m = x - np.mean(data)
    if not cov:
        cov = np.cov(data.values.T)
    inv_cov = sp.linalg.inv(cov)
    left = np.dot(x_m, inv_cov)
    m_distance = np.dot(left, x_m.T)
    return m_distance.diagonal()

df_g = df[['carat', 'depth', 'price']].head(50)
df_g['m_distance'] = distance(x=df_g, data=df[['carat', 'depth', 'price']])
df_g.head()

B. Tukey’s method for outlier detection

Tukey method is also often called Box and Whisker or Box plot method.

Tukey method utilizes the Upper and lower range.

Upper range = 75th percentile + k * IQR

Lower range = 25th percentile - k * IQR

Let us see our Titanic data with age variable using a box plot.

sns.boxplot(titanic['age'].values)

We can see in the image that the box plot created by Seaborn shows many dots between the ages of 55 and 80; these are outliers because they fall outside the quartile range. We will detect the lower and upper ranges by writing a function outliers_detect.

def outliers_detect(x, k = 1.5):
    x = np.array(x).copy().astype(float)
    first = np.quantile(x, .25)
    third = np.quantile(x, .75)
    # IQR calculation
    iqr = third - first
    # Upper range and lower range
    lower = first - (k * iqr)
    upper = third + (k * iqr)
    return lower, upper

outliers_detect(titanic['age'], k = 1.5)

(2.5, 54.5)

Detection by PyCaret

We will be using the same dataset for detection by PyCaret.

from pycaret.anomaly import *

setup_anomaly_data = setup(df)

PyCaret is an open-source machine learning library that uses unsupervised learning models to detect outliers. It has a get_data method for using datasets bundled with PyCaret itself, and a setup function for the pre-processing tasks before detection; setup usually takes a data frame but also has many other parameters, such as ignore_features.

Another method, create_model, selects an algorithm to use. We will first use Isolation Forest.

ifor = create_model("iforest")
plot_model(ifor)
ifor_predictions = predict_model(ifor, data = df)
print(ifor_predictions)
ifor_anomaly = ifor_predictions[ifor_predictions["Anomaly"] == 1]
print(ifor_anomaly.head())
print(ifor_anomaly.shape)

Anomaly 1 indicates outliers, and Anomaly 0 shows no outliers.

The yellow color here indicates outliers.

Now let us see another algorithm, K Nearest Neighbors (KNN)

knn = create_model("knn")
plot_model(knn)
knn_pred = predict_model(knn, data = df)
print(knn_pred)
knn_anomaly = knn_pred[knn_pred["Anomaly"] == 1]
knn_anomaly.head()
knn_anomaly.shape

Now we will use a clustering algorithm.

clus = create_model("cluster")
plot_model(clus)
clus_pred = predict_model(clus, data = df)
print(clus_pred)
clus_anomaly = clus_pred[clus_pred["Anomaly"] == 1]
print(clus_anomaly.head())
clus_anomaly.shape

Anomaly Detection by PyOD

PyOD is a python library for the detection of outliers in multivariate data. It is good both for supervised and unsupervised learning.

from pyod.models.iforest import IForest
from pyod.models.knn import KNN

We imported the library and algorithm.

from pyod.utils.data import generate_data
from pyod.utils.data import evaluate_print
from pyod.utils.example import visualize

train = 300
test = 100
contaminate = 0.1
X_train, X_test, y_train, y_test = generate_data(n_train=train, n_test=test, n_features=2, contamination=contaminate, random_state=42)
cname_alg = 'KNN'  # the name of the algorithm is K Nearest Neighbors
c = KNN()
c.fit(X_train)  # Fit the algorithm
y_train_pred = c.labels_
y_train_scores = c.decision_scores_
y_test_pred = c.predict(X_test)
y_test_scores = c.decision_function(X_test)
print("Training Data:")
evaluate_print(cname_alg, y_train, y_train_scores)
print("Test Data:")
evaluate_print(cname_alg, y_test, y_test_scores)
visualize(cname_alg, X_train, y_train, X_test, y_test, y_train_pred, y_test_pred, show_figure=True, save_figure=True)

We will use the IForest algorithm.

fname_alg = 'IForest'  # the name of the algorithm is Isolation Forest
f = IForest()
f.fit(X_train)  # Fit the algorithm
y_train_pred = f.labels_
y_train_scores = f.decision_scores_
y_test_pred = f.predict(X_test)
y_test_scores = f.decision_function(X_test)
print("Training Data:")
evaluate_print(fname_alg, y_train, y_train_scores)
print("Test Data:")
evaluate_print(fname_alg, y_test, y_test_scores)
visualize(fname_alg, X_train, y_train, X_test, y_test, y_train_pred, y_test_pred, show_figure=True, save_figure=True)

Anomaly Detection by Prophet

import prophet
from prophet import forecaster
from prophet import Prophet

m = Prophet()
data = pd.read_csv('air_pass.csv')
data.head()
data.columns = ['ds', 'y']
data['y'] = np.where(data['y'] != 0, np.log(data['y']), 0)

Taking the log of the y column ensures there are no negative values. We split our data into train and test sets and store the prediction in the variable forecast.

from sklearn.model_selection import train_test_split

train, test = train_test_split(data, random_state = 42)
m.fit(train[['ds', 'y']])
forecast = m.predict(test)

def detect(forecast):
    forcast = forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].copy()
    forcast['real'] = data['y']
    forcast['anomaly'] = 0
    forcast.loc[forcast['real'] > forcast['yhat_upper'], 'anomaly'] = 1   # flag values above the upper band
    forcast.loc[forcast['real'] < forcast['yhat_lower'], 'anomaly'] = -1  # flag values below the lower band
    forcast['imp'] = 0
    in_range = forcast['yhat_upper'] - forcast['yhat_lower']
    forcast.loc[forcast['anomaly'] == 1, 'imp'] = (forcast['real'] - forcast['yhat_upper']) / in_range
    forcast.loc[forcast['anomaly'] == -1, 'imp'] = (forcast['yhat_lower'] - forcast['real']) / in_range
    return forcast

detect(forecast)

Values below the lower band are marked with an anomaly value of -1 (and values above the upper band with 1).

Conclusion

The process of finding outliers in a given dataset is called anomaly detection. Outliers are data objects that stand out from the rest of the object values in the dataset and don’t behave normally.

Anomaly detection tasks can use distance-based and density-based clustering methods to identify outliers as a cluster.

Here we discussed various anomaly detection methods and explained them using code on three datasets: Titanic, Air Passengers, and Caret.

Key Points

1. Outliers or anomalies can be detected using the Box-Whisker (Tukey) method or by DBSCAN.

2. The Euclidean distance method is used when the features are not correlated with each other.

3. Mahalanobis method is used with Multivariate outliers.

4. Not all extreme values are outliers. Some are noise, which is garbage that ought to be removed, whereas outliers are valid data points that need to be handled carefully.

5. We used PyCaret for outlier detection with different algorithms; points where Anomaly is 1 are outliers (shown in yellow), and points where Anomaly is 0 are not outliers.

6. We used PyOD, which is the Python Outlier Detection library. It has more than 40 algorithms and is used for both supervised and unsupervised techniques.

7. We used Prophet and defined the detect function to flag the outliers.



Learn The Different Examples Of Sqlite Function

Introduction to SQLite functions

SQLite provides different kinds of functions to the user. Basically, SQLite has different types of inbuilt functions that we can easily use whenever we require them. All SQLite functions work on string and numeric type data. All SQLite function names are case-insensitive, which means we can write them in either uppercase or lowercase. By using SQLite functions, we process data as per the user's requirements. SQLite functions fall into different categories, such as aggregate functions, date functions, string functions, and window functions, and we can use them as per the requirement.


SQLite functions

Now let’s see the different functions in SQLite as follows.

1. Aggregate Functions

AVG: It is used to calculate the average value of a non-null column in a group.

COUNT: It is used to return how many rows are in the table.

MAX: It is used to return the maximum value from a specified column.

MIN: It is used to return the minimum value from a specified column.

SUM: It is used to calculate the sum of a non-null column from the specified table.

GROUP_CONCAT: It is used to concatenate all non-null values of a column into a single string.

2. String Functions

SUBSTR: It is used to extract and return a substring of a predefined length from a specified position in the string.

TRIM: It is used to return a copy of the string with characters removed from both the start and the end.

LTRIM: It is used to return a copy of the string with characters removed from the start of the string.

RTRIM: It is used to return a copy of the string with characters removed from the end of the string.

LENGTH: It is used to return how many characters are in the string.

REPLACE: It is used to return a copy of the string in which every instance of a substring is replaced by another specified string.

UPPER: It is used to return the string in uppercase, meaning it converts all characters into upper case.

LOWER: It is used to return the string in lowercase, meaning it converts all characters into lower case.

INSTR: It is used to return an integer that indicates the position of the very first occurrence of a substring.
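If you want to try these string functions quickly without a separate SQLite shell, Python's built-in sqlite3 module can execute them directly; a small sketch (the sample string 'educba' is arbitrary):

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
row = conn.execute(
    "SELECT UPPER('educba'), LOWER('EDUCBA'), LENGTH('educba'), "
    "SUBSTR('educba', 1, 3), TRIM('  educba  '), REPLACE('educba', 'a', 'A'), "
    "INSTR('educba', 'cba')"
).fetchone()
print(row)   # ('EDUCBA', 'educba', 6, 'edu', 'educba', 'educbA', 4)
conn.close()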

3. Control Flow Functions

COALESCE: It is used to display the first non-null argument.

IFNULL: It is used to implement if-else statements with the null values.

IIF: By using this, we can add if – else into the queries.

NULLIF: It is used to return NULL if the first and second arguments are equal.

4. Data and Time Function

DATE: It is used to determine the date based on multiple date modifiers.

TIME: It is used to determine the time based on multiple time modifiers.

DATETIME: It is used to determine the date and time based on multiple modifiers.

STRFTIME: It returns the date and time in a specified format.

5. Math Functions

ABS: It is used to return the absolute value of the number.

RANDOM: It is used to return a pseudo-random integer between the minimum and maximum 64-bit signed integer values.

ROUND: It is used to round a floating-point value to a specified precision.

Examples

Now let’s see the different examples of SQLite functions as follows.

create table comp_worker(worker_id integer primary key, worker_name text not null, worker_age text, worker_address text, worker_salary text);

Explanation

In the above example, we use the create table statement to create a new table named comp_worker with different attributes such as worker_id, worker_name, worker_age, worker_address, and worker_salary, with different data types, as shown in the above example.

Now insert some records for the function implementation by using the following insert into statement as follows.

insert into comp_worker(worker_id, worker_name, worker_age, worker_address, worker_salary) values(1, "Jenny", "23", "Mumbai", "21000.0"), (2, "Sameer", "31", "Pune", "25000.0"), (3, "John", "19", "Mumbai", "30000.0"), (4, "Pooja", "26", "Ranchi", "50000.0"), (5, "Mark", "29", "Delhi", "45000.0");

Explanation

In the above statement, we use the insert into statement. The end output of the above statement is illustrated in the following screenshot.

Now we can perform the SQLite different functions as follows.

a. COUNT Function

Suppose users need to know how many rows are present in the table at that time; we can use the following statement.

select count(*) from comp_worker;

Explanation

In the above example, we use the count function. The end output of the above statement we illustrate by using the following screenshot.

b. MAX Function

Suppose we need to know the highest salary of the worker so that we can use the following statement as follows.

select max(worker_salary) from comp_worker;

Explanation

In the above example, we use the max function to know the max salary of a worker from the comp_worker table. The end output of the above statement we illustrate by using the following screenshot.

c. MIN Function

select min(worker_salary) from comp_worker;

Explanation

The end output of the above statement we illustrate by using the following screenshot.

d. AVG Function

Suppose users need to know the total average salary of a worker from comp_worker at that time; we can use the following statement as follows.

select avg(worker_salary) from comp_worker;

Explanation

The end output of the above statement we illustrate by using the following screenshot.

e. SUM Function

Suppose users need to know the total sum salary of a worker from comp_worker at that time; we can use the following statement as follows.

select sum(worker_salary) from comp_worker;

Explanation

The end output of the above statement we illustrate by using the following screenshot.

f. Random Function

select random() AS Random;

The end output of the above statement we illustrate by using the following screenshot.

g. Upper Function

Suppose we need to return the worker_name column in the upper case at that time, we can use the following statement as follows.

select upper(worker_name) from comp_worker;

Explanation

The end output of the above statement we illustrate by using the following screenshot.

h. Length Function

select worker_name, length(worker_name) from comp_worker;

Explanation

The end output of the above statement we illustrate by using the following screenshot.

Conclusion

We hope from this article you have understood the SQLite functions. From the above article, we have learned the basic syntax of the function statements, and we have also seen different examples of these functions. From this article, we learned how and when to use SQLite functions.

 Recommended Articles

We hope that this EDUCBA information on “SQLite functions” was beneficial to you. You can view EDUCBA’s recommended articles for more information.

Learn The Different Examples Of Bind Function

Introduction to DB2 bind

Database management systems provide different kinds of functions to the user; the DB2 bind function is one of the functions provided by the database management system. The bind function is used to establish the relationship between an application program and relational data. When we execute the bind function, that is, during the execution of the binding process, it performs different kinds of actions: it validates all objects referenced in the SQL statements of the specified program, meaning the user-created tables, views, and column names, against the DB2 catalog.


Syntax

Explanation

In the above syntax, we select statements with different parameters as follows.

colm 1, 2, and N: It is a column name that is created inside the table.

specified schema name: It is a user-defined schema.

specified table name: Specified table name means an actual table that the user creates.

where: It is used for the condition, and it contains an input bind variable.

How does the bind function work in DB2?

Now let’s see how the bind function works in DB2 as follows.

Utilizing input bind variables, we can improve performance. A query can be prepared once and executed many times, changing the bind variable values between each execution of the SQL statement on the server side.

Utilizing bind variables, we can also improve the cache hit rate for databases that store prepared queries. Databases that support bind variables parse the query once and then plug the input bind values into the already parsed code. If a similar query is run many times, even with different values for the input bind variables, the database will have the code cached and will not need to parse the query again. If you don't use input bind variables, the database will parse the query each time, because the WHERE condition will be slightly different each time, and the code for all of those slightly different queries will clog up the cache.

As a reliable guideline, you should use input bind variables rather than literal substitutions in the WHERE clause of SELECT statements whenever you can.

Output bind variables permit values to be passed directly from procedural code into buffers in your program, that is, the user-defined program. For the most part, this is more convenient and efficient than building a query that calls procedural code, or building procedural code that constructs a result set.

SQL Relay will fake input bind variables for database APIs that don't natively support them. Presently that is only the MDB Tools connection. PostgreSQL 8, MySQL 4.1.2, and current SQLite support bind variables, but older versions do not. SQL Relay fakes input binds for PostgreSQL, MySQL, and SQLite versions that don't support them. For versions that do, the "fakebinds" connect string parameter can be used to force SQL Relay to fake binds instead of using the database's built-in support. You can use either Oracle-style or DB2/Firebird-style bind variables with those databases. Output binds are not supported when using "fakebinds".

When using a database for which SQL Relay fakes bind variables, you should make a point not to pass the wrong type of data into a bind variable.

The SQL statement in the previous section is an illustration of a named bind. Every placeholder in the statement has a name associated with it, for example ':emp_name' or ':emp_sal'. When this statement is prepared and the placeholders are associated with values in the application, the association of each placeholder is made by name, using the OCIBindByName() call with the name of the placeholder passed in the placeholder parameter.

The second kind of bind is known as a positional bind. In a positional bind, the placeholders are referred to by their position in the statement instead of their names. For binding purposes, an association is made between an input value and the position of the placeholder.

Insert into employee values(:emp_no, :emp_name, :emp_job, :emp_sal)
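The named versus positional distinction can also be sketched with Python's DB-API. The example below uses the built-in sqlite3 driver purely to illustrate the binding concept (DB2 itself would typically be accessed through its own client library, but the idea of preparing a statement once and binding values per execution is the same):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_no INTEGER, emp_name TEXT, emp_job TEXT, emp_sal REAL)")

# Named bind: placeholders are matched by name, as in :emp_no, :emp_name ...
conn.execute(
    "INSERT INTO employee VALUES (:emp_no, :emp_name, :emp_job, :emp_sal)",
    {"emp_no": 1, "emp_name": "John", "emp_job": "Clerk", "emp_sal": 21000.0},
)

# Positional bind: placeholders are matched by their position in the statement.
conn.execute("INSERT INTO employee VALUES (?, ?, ?, ?)", (2, "Pooja", "Manager", 50000.0))

# The statement text stays the same; only the bound value changes between executions.
for job in ("Clerk", "Manager"):
    print(conn.execute("SELECT count(*) FROM employee WHERE emp_job = :job", {"job": job}).fetchone())
conn.close()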

Examples

Now let’s see the different examples of the bind function to better understand as follows.

First, create a new table by using the following create table statement as follows.

create table company(comp_id integer, comp_name text, comp_address text);

Explanation

In the above example, we use a create table statement to create a new table name as a company with different attributes such as Comp_id, comp_name, and comp_address with different data types and different sizes as shown in the above statement.

For confirmation, insert some records into the table, and then display them by using the following select statement as follows.

select * from company;

In the above example, we use the select statement to display the inserted records. The end output is illustrated in the following screenshot.

Select count(*) from company where comp_name = :Dell and comp_address = :Pune;

Explanation

In the above example, we use a select statement with a count function as shown in the above statement; here, we use a bind variable such as comp_name and comp_address in the above statement. The end out we illustrate by using the following screenshot as follows.

Now execute the following statement as follows.

select count(*) from company where comp_name = 'Dell' AND comp_address = 'Pune';

After checking the performance of the above statement, you will get a difference between them.

Conclusion

We hope from this article you have understood the DB2 bind function. From the above article, we have learned the basic syntax of the bind function, and we have also seen different examples of the bind function. We also learned the rules for the bind function. From this article, we learned how and when to use the DB2 bind function.

Recommended Articles

This is a guide to DB2 bind. Here we discuss the basic syntax of the bind function, and we also see different examples of bind function. You may also have a look at the following articles to learn more –

Learn The Different Examples Of Matlab Exponent

Introduction to MATLAB exponent

MATLAB offers us different types of exponent functions that can compute the exponential of an array or matrix. These functions can be used to compute basic exponential, matrix exponential, or exponential integral as per our requirement. In this article, we will learn about 3 exponent functions offered by MATLAB: exp, expint, and expm. With the help of these functions, we can compute the solution when our input is an array, matrix, or a complex number and ‘e’ is raised to this power.


Syntax of exponent function:

E = exp (I)

E = expint (I)

E = expm (I)

Description of the syntax:

E = exp (I) is used to return the exponential ‘e’ raised to the power (I). For an array, it will compute the exponential of every element

E = expint (I) is used to return the exponential integral of every element in the input ‘I’

E = expm (I) is used to compute the matrix exponential for our input matrix ‘I’. Please note that the matrix exponential is given by the power series expm(I) = eye(size(I)) + I + I^2/2! + I^3/3! + …

Examples of Matlab exponent

Let us now understand the code for these exponent functions in MATLAB

Example #1 (exp (I))

In this example, we will find the exponent of an array in MATLAB by using the exp (I) function. Below are the steps to be followed:

Initialize the array whose exponent we need to compute

Pass the input array as an argument to the exp function

Code:

I = [2 4 6 2 6] [Initializing the array whose exponent we need to compute]
E = exp(I) [Passing the input array as an argument to the exp function]

This is how our input and output will look like in MATLAB:

Input:

Output:

As we can see in the output, the exp function has computed the exponential of every element in our input array ‘I’.

 Next, we will learn the use of the expint function

Example #2 (expint (I))

In this example, we will find the integral exponent of an array in MATLAB by using the expint (I) function. Below are the steps to be followed:

Initialize the array whose integral exponent we need to compute

Pass the input array as an argument to the expint function

Code:

I = [2 3 5 1 6] [Initializing the array whose integral exponent we need to compute]
E = expint(I) [Passing the input array as an argument to the expint function]

This is how our input and output will look like in MATLAB:

Input:

Output:

As we can see in the output, the expint function has computed the integral exponential of every element in our input array ‘I’.

Next, we will learn the use of the expm function

Example #3 (expm (I))

In this example, we will find the matrix exponent of a matrix in MATLAB by using the expm (I) function. Below are the steps to be followed:

Initialize the matrix whose matrix exponent we need to compute

Pass the input matrix as an argument to the expm function

Code:

I = [2 3 5; 1 0 3; 2 5 7] [Initializing the matrix whose matrix exponent we need to compute]
E = expm(I) [Passing the input matrix as an argument to the expm function]

This is how our input and output will look like in MATLAB:

Input:

Output:

As we can see in the output, the expm function has computed the matrix exponential of our input matrix ‘I’.

The exponent functions discussed above can also be used to compute the exponential of complex numbers. Let us understand this with an example.

Example #4 (exp (I))

In this example, we will find the exponent of an array of complex numbers in MATLAB by using the exp (I) function. Below are the steps to be followed:

Initialize the array of complex numbers whose exponent we need to compute

Pass the input array as an argument to the exp function

Code:

I = [2 + 3i   5 - 1i   4 + 5i] [Initializing the array of complex numbers whose exponent we need to compute]
E = exp(I) [Passing the input array as an argument to the exp function]

This is how our input and output will look like in MATLAB:

Input:

Output:

As we can see in the output, the exp function has computed the exponential of every complex element in our input array ‘I’.

Conclusion

Different forms of Exponent function can be used to compute the exponentials as per our requirement.

Basic exponential, integral exponential, matrix exponential are the types of exponentials we can compute using the exponent functions.

The exponential of complex numbers can also be calculated using the exponent functions.

Recommended Articles

This is a guide to Matlab exponent. Here we discuss the 3 exponent functions offered by MATLAB: exp, expint and expm, along with the examples. You may also have a look at the following articles to learn more –
