

Introduction to Classification Algorithms

This article gives an overview of classification algorithms commonly used in data mining, each based on a different principle. Classification is a technique that sorts data into a distinct number of classes, with a label assigned to each class. The goal of classification is to determine the class of new data by analyzing the training set and finding proper decision boundaries. In short, predicting the target class of new data is what we call classification.


Classification Algorithms in Detail

Classification can be performed on both structured and unstructured data. Classification algorithms can be categorized into:

Naive Bayes classifier

Decision Trees

Support Vector Machine

Random Forest

K- Nearest Neighbors

1. Naive Bayes classifier

It’s an algorithm based on Bayes’ theorem, one of the statistical classification methods, and it requires only a small amount of training data to estimate its parameters; such models are also known as probabilistic classifiers. It is considered among the fastest classifiers, is highly scalable, and handles both discrete and continuous data, which makes it suitable for real-time prediction. There are different types of Naive Bayes classifiers: Multinomial Naive Bayes, Bernoulli Naive Bayes, and Gaussian Naive Bayes.

Bayesian classification is based on the posterior probability given by Bayes’ theorem:

P(A|B) = P(B|A) P(A) / P(B)

If two events A and B are independent of each other, then

P(A, B) = P(A) P(B)

Naive Bayes models can be built using Python libraries such as scikit-learn. The algorithm assumes its predictors are independent; even so, it is used in recommendation systems and many other real-time applications, and it is especially well known for document classification.
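A minimal sketch of this idea with scikit-learn’s Gaussian Naive Bayes on the classic iris dataset (illustrative only, not a benchmark):

```python
# Gaussian Naive Bayes: estimate per-class mean/variance of each feature,
# then classify new points by the highest posterior probability.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = GaussianNB()
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```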


2. Decision tree

It’s a top-down model with a flowchart-like structure that handles high-dimensional data. Outcomes are predicted from the given input variables. A decision tree is composed of the following elements: a root, internal nodes, branches, and leaves. The root node performs the first partition based on an attribute value; each internal node tests an attribute for further classification; branches encode the decision rules that split nodes; and finally, the leaf nodes give the final outcome. The time complexity of building a decision tree depends on the number of records and attributes in the training data. If the tree grows too deep, it becomes difficult to get the desired results.

Advantage: Decision trees are applied in predictive analytics and in everyday decision analysis to choose a target. They automatically build a model from the source data and are among the best methods at handling missing values.
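The root/internal-node/leaf structure described above can be made concrete with a small scikit-learn tree; printing it shows exactly which attribute each node tests (a sketch, with depth capped so the tree stays readable):

```python
# Fit a shallow decision tree and print its flowchart-like rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # cap depth: overly long trees are hard to use
tree.fit(X, y)

# each indented line is an internal-node test; "class:" lines are the leaves
print(export_text(tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))
```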

3. Support Vector Machine

This algorithm plays a vital role in classification problems and is one of the most popular supervised machine learning algorithms; it’s an important tool for researchers and data scientists. The idea behind SVM is simple: find a hyperplane in an N-dimensional space that separates the data points. Hyperplanes are decision boundaries that classify the data points. The vectors that fall closest to the hyperplane determine the margin of the classifier, which SVM maximizes; the larger the margin, the lower the generalization error. SVMs can be implemented in Python with various kernels on a training dataset. The goal of SVM is to assign each object to a particular class, and it is not restricted to being a linear classifier. SVM is often preferred over other classification models because of its kernel functions, which improve computational efficiency.

Advantage: SVMs are preferred for their modest computational requirements and effective accuracy. They are effective in high-dimensional spaces and have good memory efficiency.
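A short sketch of the margin-maximizing classifier with an RBF kernel (synthetic data; scaling first, since SVMs are distance-based):

```python
# SVM with an RBF kernel: the kernel implicitly maps points into a
# higher-dimensional space where a separating hyperplane can be found.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
svm_acc = clf.score(X_test, y_test)
print(f"Test accuracy: {svm_acc:.2f}")
```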

4. Random Forest

It’s a powerful machine-learning algorithm based on the ensemble learning approach. The basic building block of a random forest is the decision tree, used to build predictive models. A random forest creates a collection of random decision trees, with pruning performed by setting stopping criteria for splits to yield a better result. Random forest is implemented using a technique called bagging: each tree is trained on a bootstrap sample of the data, which prevents overfitting by reducing variance and helps the forest achieve better accuracy. The final prediction is taken as the average (or majority vote) of the individual trees’ predictions. Use cases of random forests include stock market prediction, fraud detection, and news classification.


It doesn’t require heavy preprocessing of the datasets and is a very easy model to build. In addition, it provides greater accuracy and helps in solving predictive problems.

Works well in handling missing values and automatically detects an outlier.

Requires high computational cost and high memory.

Requires much more training time.
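The bagging-and-averaging idea above can be sketched in a few lines; cross-validation gives a fairer accuracy estimate than a single split (synthetic data, illustrative settings):

```python
# Random forest: each tree is grown on a bootstrap sample of the data,
# and the forest votes over the trees' predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=1)

forest = RandomForestClassifier(n_estimators=100, random_state=1)
scores = cross_val_score(forest, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```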

5. K- Nearest Neighbors

Here we will discuss the K-NN algorithm, a supervised learning method used for classification and regression. It uses a small positive integer K; an object is assigned to the class most common among its K nearest neighbors, i.e. we assign it to the group in which most of its neighbors lie. Neighbors are found with a distance measure such as Euclidean distance, typically via brute-force search. A good value of K can be found through a tuning process. KNN does not learn an explicit model during training, and it benefits from normalization to rescale the data.

Advantage: Produces effective results if the training data is huge.
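A minimal KNN sketch showing the rescaling step the paragraph mentions (K = 5 here is just a placeholder; K is normally tuned):

```python
# KNN: rescale features first so Euclidean distance is not dominated
# by any single large-valued feature.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(scaler.transform(X_train), y_train)
knn_acc = knn.score(scaler.transform(X_test), y_test)
print(f"Test accuracy: {knn_acc:.2f}")
```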


In conclusion, we have gone through the capabilities of different classification algorithms. They remain powerful tools in feature engineering and image classification, a great resource for machine learning, and are capable of solving hard problems.

Recommended Articles

This is a guide to Classification Algorithms. Here we discussed how classification can be performed on both structured and unstructured data, along with the pros and cons of each algorithm. You can also go through our other suggested articles –


9 Amazing Pieces Of 3-D Printed Art


The 2014 New York 3D Printshow launched at the Metropolitan Pavilion on Wednesday, bringing together big names from the 3-D printing industry with creative individuals who apply the technology to art and fashion. The annual event featured a trade show, runway fashions, and an art gallery, where we saw creations from architectural models to life-size comic book heroes. Here are some artworks that blew our mind.

21st Century Self-Portrait

That’s right. You are looking at a geometrified 3D model of artist Joshua Harker’s skull. Fancy your own self-portrait? Head to the Museum of Arts and Design in New York for a full body scan.


Creepy 3D-printed baby born in a Matrix-inspired eggshell, presented by sculptor Dann Chetrit. We dig it.

Noisy Boy

Remember this badass robot fighter Hugh Jackman’s character owned in the movie Real Steel? It helped its creator, the wizard studio Legacy Effects, earn a 2012 Oscar nomination for visual effects.

Memorial Bust Of A Woman

Things change. Memories fade. Artist Sophie Kahn makes the statement in this deconstructed memorial sculpture, which she built using both cutting-edge 3D laser scanning and ancient bronze casting technique.

1927 Miller 91 “Perfect Choice”

C.ideas, a 3D printing consulting company, built this nickel-plated scale model over 700 combined hours. Watch the birth of this cool mini-racer here.

Kabutomushi Yoroi

Designed by mixed media artist Russ Ogi, this Samurai-inspired armor suit will keep you out of harm’s way in a feud.

Bear [Animal Lace]

How about this wild spirit for a head display? It doesn’t involve killing, and you can put a bulb in it.

The Natural Column Project

This free-forming column not only shares similarities with bones and trees, but also represents architect Daniel Büning’s vision for an environmentally friendly manufacturing process.


Manufacturer Voxeljet produced this massive, terrifying Batman statue, complete with his signature cape, cowl, and utility belt.

Classification Of Crops Using Machine Learning!

This article was published as a part of the Data Science Blogathon


The idea is to get you to think about such projects, rather than just their implementation, since many of us have trouble starting and finishing projects.

In this article, we build a simple classification model and try to get good accuracy.

You can download the dataset from here.

Aim:

To determine the outcome of the harvest season, i.e. whether the crop would be healthy (alive), damaged by pesticides, or damaged by other reasons.

Data Description:

We have two datasets given to train and test.

ID: Unique ID

Estimated_Insects_Count: Estimated insect count per square meter

Crop_Type: Category of crop (0, 1)

Soil_Type: Category of soil (0, 1)

Pesticide_Use_Category: Type of pesticide use (1 = Never, 2 = Previously Used, 3 = Currently Using)

Number_Doses_Week: Number of doses per week

Number_Weeks_Used: Number of weeks used

Number_Weeks_Quit: Number of weeks quit

Season: Season category (1, 2, 3)

Crop_Damage: Crop damage category (0 = alive, 1 = damage due to other causes, 2 = damage due to pesticides)

1. Importing Libraries and Dataset

Then we load the dataset from CSV format, convert it into a pandas DataFrame, and check the top five rows to analyze the data.
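A sketch of this loading step. In the project the data would come from the downloaded CSV (e.g. pd.read_csv("train.csv") — the file name is an assumption); here a tiny inline sample with the same columns stands in so the snippet is self-contained:

```python
import io
import pandas as pd

# small stand-in for the real train.csv (note the empty Number_Weeks_Used cells)
csv_data = """ID,Estimated_Insects_Count,Crop_Type,Soil_Type,Pesticide_Use_Category,Number_Doses_Week,Number_Weeks_Used,Number_Weeks_Quit,Season,Crop_Damage
F00000001,188,1,0,1,0,,0,1,0
F00000003,209,1,0,1,0,,0,2,1
F00000004,257,1,0,1,0,,0,2,1
"""
df = pd.read_csv(io.StringIO(csv_data))
print(df.head())  # inspect the top rows, as in the article
```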

2. Cleaning Dataset

1. Checking Null Values: Using dataset.isnull().sum(), we find that there are 9000 missing values in the dataset, all in the Number_Weeks_Used variable.

2. Checking Datatypes: We checked the datatypes of all columns to spot any inconsistencies in the data.

3. Checking Unique Values: Next we examine the unique values present in each column, which will help reduce dimensionality in later processing.

4. Replacing missing values: As there are 9000 missing values in the Number_Weeks_Used column, we replace them with the mode of the column. Checking again, we see that no null values remain in our dataset.
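The mode-imputation step above can be sketched on a toy column (values are illustrative, not from the real dataset):

```python
import numpy as np
import pandas as pd

# toy column with missing values standing in for Number_Weeks_Used
df = pd.DataFrame({"Number_Weeks_Used": [20.0, np.nan, 30.0, 20.0, np.nan]})
print(df["Number_Weeks_Used"].isnull().sum())  # missing values before

# replace the missing entries with the mode (most frequent value) of the column
mode_value = df["Number_Weeks_Used"].mode()[0]
df["Number_Weeks_Used"] = df["Number_Weeks_Used"].fillna(mode_value)
print(df["Number_Weeks_Used"].isnull().sum())  # missing values after
```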

3. Exploratory Data Analysis :

First, we get an overview of the data using the info() method.



Checking correlation with sns.heatmap().

Inferences drawn from heatmap:

Number_Weeks_Quit is highly negatively correlated with Pesticide_Use_Category and Number_Weeks_Used.

Data Visualization:

For gathering insights, data visualization is a must.

a. Univariate Analysis:

For univariate, I plotted countplot of Crop_Damage.


Crop damage due to pesticides is less in comparison to damage due to other causes.

Crop type 0 has a higher chance of survival compared to crop type 1.

Now I plotted countplot for Crop_Damage vs Insect count for Crop Type.


1. Type 2 pesticide is much safer to use compared to Type 3 pesticide.

2. Type 3 pesticide shows the most pesticide-related damage to crops.

Another plot in univariate analysis for gathering more insights.


1. From Graph 1 we can conclude that up to 20-25 weeks, damage due to pesticide is negligible.

2. From Graph 3 we can see that after 20 weeks, damage due to the use of pesticide increases significantly.


b. Bivariate Analysis: 

Plotted barplot between Crop_Damage vs Estimated_Insects_Count.

It is clearly observed from the above plot that most insect attacks occur on crop type 0.

Barplot between Crop_Type vs Number_Weeks_Used.


1. Crop Type 0 is more vulnerable to pesticide-related and other damage compared to Type 1.

2. The average duration of pesticide-related damage is lower for Crop Type 1.

4. Data Pre-processing :

Outliers Analysis : 

Now we will check for outliers using Boxplot. 

Now, to remove these outliers, I simply replace each outlier value with the mean value of its column.
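A sketch of this mean-replacement step. The article flags outliers with a boxplot; here the 1.5 × IQR rule (the same rule a boxplot uses for its whiskers) is assumed to flag them programmatically:

```python
import pandas as pd

s = pd.Series([10.0, 12.0, 11.0, 13.0, 12.0, 95.0])  # 95 is an obvious outlier

# flag outliers with the 1.5 * IQR rule, then replace them with the
# mean of the remaining (non-outlier) values
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
mask = (s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)
s.loc[mask] = s[~mask].mean()
print(s.tolist())  # [10.0, 12.0, 11.0, 13.0, 12.0, 11.6]
```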

After removing outliers,

Skew Analysis : 

Checking the skewness of our data using histplot, we observe that the data is approximately normally distributed.

Now, our dataset is ready to be put into the machine learning model for classification analysis.


5. Building Machine Learning Model: Scaling Dataset:

As usual, the first step is to separate out the target variable and then scale the features using StandardScaler, which standardizes the data to zero mean and unit variance.
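The scaling step in isolation (toy matrix; each column ends up with zero mean and unit variance):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

# standardize each feature to zero mean and unit variance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
print(X_scaled.mean(axis=0).round(6))  # per-column means, ~0
print(X_scaled.std(axis=0).round(6))   # per-column std devs, ~1
```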

Splitting Dataset:

After preprocessing, we now split the data into training/testing subsets.
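A sketch of the split (synthetic data; the 20% test size and stratification are illustrative choices, not stated in the article):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100, random_state=0)

# hold out 20% for testing; stratify keeps the class balance in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
print(X_train.shape, X_test.shape)
```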

Evaluating Models:

We now evaluate various classification models and calculate metrics such as precision, recall, and F1 score.

The models we will run here are:

Random Forest

K Nearest Neighbor(KNN)

Decision Tree Classifier

Gaussian NB

From the initial model accuracy values, we see that KNN performs better than the others, with 83% accuracy. It has the maximum accuracy score and the minimum standard deviation.

Now I find the best parameter, n_neighbors, using GridSearchCV with the range (2, 30), cv = 5, and scoring = ‘accuracy’ for our KNN model, and obtain n_neighbors = 22.

We then rerun the KNN model with its best parameter, n_neighbors = 22.
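The grid search described above looks roughly like this (iris data stands in for the crop dataset, so the best K found here will differ from the article’s 22):

```python
# Search n_neighbors over (2, 30), scoring by 5-fold cross-validated accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": range(2, 30)},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```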

Result:

To check model performance, we will now plot different performance metrics.

a. Confusion Matrix:


From these results, we find decent accuracy (~0.84), precision, and recall for the model, which indicates that the model is a good fit for this prediction task.
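The metrics themselves can be produced as follows (again sketched on iris rather than the crop data):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
y_pred = model.predict(X_test)

cm = confusion_matrix(y_test, y_pred)       # rows: true class, columns: predicted class
print(cm)
print(classification_report(y_test, y_pred))  # precision, recall, F1 per class
```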

For better results, one can do hyperparameter tuning which will help in increasing the accuracy of the model.

Overall, KNN gives the best accuracy among all the models, so we save it as our final model.

For your reference you can check my complete solution and dataset used  from this link:

Thanks for reading !!! Cheers!

About the author:

Priyal Agarwal

Please feel free to contact me on Linkedin, Email.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


Classification Of Mental Disorders: The Guidebook

Research into the field of psychopathology has advanced by leaps and bounds. The development of psychotropic drugs, treatment methods, approaches, and even how we understand psychological disorders has changed. Gone are the days when mental disturbance was attributed to the anger of gods and demons. As scientific inquiry into the subject matter progressed, newfound discoveries changed the landscape of psychology. One more thing that changed was the method of diagnosis. An important addition to this landscape was the development of the Diagnostic and Statistical Manual of Mental Disorders, or DSM. The DSM established itself at the core of western psychiatry, forever changing how clinicians, researchers, and psychologists diagnose and treat psychological disorders.

A Demand for a Classification System

After psychology attained a scientific grounding in the 19th century, demands for a classification system grew louder. In theory, there are three ways to create a classification system of diseases, or nosology: authority, consensus, and the medical model. Initially, the classification of disorders was done by authority; in this case, the authority was Emil Kraepelin, a German nosologist. His first textbook came out in 1883 and ran through eight editions. Kraepelin’s work formed a solid backdrop against which the DSM was later built.

The origin of the DSM

Technically, the DSM started in October 1945, when psychoanalyst William Menninger created his diagnostic roster titled “Technical Medical Bulletin no. 203”; the creation of this roster can be regarded as the inception of the DSM. The first edition of the DSM came into being in 1952, with little to no influence on the international psychology scene, and DSM-II came out in 1968. Later, in 1980, DSM-III was published, which marked a turning point. For one, it dethroned psychoanalysis from its high pedestal within psychology; it also relinked psychiatry with other medical fields by adopting a medical-model approach to classification. Subsequently, DSM-III-R (Revised) was published in 1987 and DSM-IV in 1994, followed by DSM-IV-TR in 2000.


Published in May 2013, DSM-V marked the first major revision of the manual since DSM-IV in 1994. The task force that worked on DSM-V came together in 2007, intending to incorporate the latest scientific and clinical research and to improve the manual’s usability for clinicians and researchers. Priority was also given to providing the best possible care for patients. The project involved more than 400 experts from over 13 countries, representing psychology, psychiatry, neurology, pediatrics, epidemiology, primary care, research methodology, and statistics. Thirteen international conferences were held between 2003 and 2008, and information was compiled to fill the gaps in current knowledge. To ensure diverse representation, each task force and work group had at least one international member. DSM-V actively encouraged detailed studies of transcultural psychiatry and of the social and environmental factors linked to disease risk, heritability, and resilience.

Classification of disorders in DSM-V


Source: The DSM-5: Classification And Criteria Changes By Darrel A. Regier, Emily A. Kuhl, David J. Kupfer

Changes in DSM-V

For certain disorders, diagnostic criteria were combined, as in the case of Autism Spectrum Disorder (ASD), Somatic Symptom Disorder, Specific Learning Disorder, and Substance Abuse Disorder. Other disorders were split into independent disorders; for example, emotionally withdrawn/inhibited and indiscriminately social/disinhibited disorders were separated from reactive attachment disorder. Another change was replacing the phrase “general medical condition” with “another medical condition” wherever necessary. The term “mental retardation” from DSM-IV was replaced with “intellectual disability.” Furthermore, similar changes were made in the naming and diagnostic criteria of multiple disorders, including communication disorders, ADHD, Schizophrenia, Schizoaffective Disorder, Catatonia, Bipolar and Related Disorders, Depressive Disorders, and so on.


The turning point in the history of psychology’s nosology came with DSM-III; however, even DSM-III did not have a large international reach and participation. Around this time, the International Classification of Diseases (ICD) was out with its eighth edition. The ICD goes back to 1893 and is published by the World Health Organization (WHO). ICD-8 and DSM-III had little in common at the time, but the two nosological manuals have grown similar over the years, largely due to collaborative agreements between the two organizations. Regardless, differences between the two exist.

For one, ICD is produced by an international health agency, whereas a single nation’s professional association publishes DSM.

Second, the primary focus of the WHO is to reduce the mental health burden across countries; it is global in its orientation. The DSM, by contrast, is produced by and for U.S. psychiatrists specifically.

Third, the ICD needs approval from the World Health Assembly, whereas the DSM needs only the approval of the American Psychiatric Association’s assembly.

Lastly, the sales and distribution of the two manuals are very different – ICD is distributed as widely as possible at cheaper rates, whereas DSM makes up a significant portion of the American Psychiatric Association’s revenue.

Keeping the DSM separate from the ICD is impossible in the long run, given the agreement between the two organizations. However, the DSM will likely remain an important reference for psychiatric diagnosis and hold its standing alongside the ICD.


Despite significant changes that ensure better and smoother research, application, treatment, and diagnosis of psychological disorders, some criticisms have been raised concerning DSM-V. One of them is the exclusion of bereavement. This was done to ensure that individuals experiencing grief over the loss of a loved one are not wrongly labeled as mentally ill. However, the step backfired, as it kept individuals who develop major depressive disorder after a loss from being diagnosed aptly. Recently, DSM-V-TR, a revised version of DSM-V, was published. It includes new clarifications, references, and diagnostic criteria, and it is updated to match the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes.

Types And Purpose Of Derivative

Definition of Derivative


Purpose of Derivatives

The various purposes of entering into derivative contracts are as follows:

Earning Profits: The main aim of entering into a derivative contract is to earn profits by speculating on the future price of an underlying asset. Market prices are volatile, and shares may go up or down; a fall in the share price can mean a loss, and in this situation entering into a derivative contract with an accurate bet may help in earning gains.

Arbitrage Advantage: Arbitrage trading involves buying a security in one market at a low price and selling it in another market at a higher price. The difference between the selling price and the buying price is the trader’s profit.

To Get Access to Unavailable Markets or Assets: Derivatives help traders or organizations access markets or assets that are otherwise unavailable. For example, interest rate swaps can provide a more favorable rate of interest than direct borrowing.

Types of Derivatives

1. Futures

Futures are financial derivatives in which a legal agreement is entered to buy or sell a particular underlying asset at a predefined price at an agreed time in the future. Futures contracts are standardized to facilitate trading on an exchange. The buyer is obligated to buy the underlying asset upon expiry of the contract; conversely, the seller is obligated to deliver the underlying asset to the buyer upon expiry. Futures allow an investor to speculate on the direction of movement of the underlying stock, and they can be used as a hedging tool against losses in a stock by taking a long or short futures position, depending on the stock position held. “Futures” and “futures contracts” refer to the same thing. Contracts must be squared off on or before the expiry date; anyone who wants to keep the same position after expiry can roll over the transaction to a new expiry date.

2. Options

An option is a contract that provides a right, but not an obligation, to buy or sell an underlying security at a predetermined price (the strike price) within a specified time period. The buyer of the option pays a premium to purchase this right from the seller, while the seller, also known as the option writer, receives the premium and is obligated to deliver the underlying security if the buyer exercises the right. Options are traded both over the counter and on exchanges. There are two types of options: call options and put options. A call option is an upside bet with no downside risk beyond the premium paid; likewise, a put option is a downside bet with no risk from upward movement beyond the premium paid. Options may be bought or written, depending on the risk appetite of the investor. A bought option carries a maximum risk of the premium paid, with unlimited profit potential; a written (sold) option carries a maximum profit of the premium received and is subject to unlimited risk.
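The capped-loss, open-ended-upside asymmetry described above can be sketched with a tiny payoff calculation at expiry (hypothetical strike and premium, not figures from the article):

```python
# Profit/loss at expiry for a long call and a long put, net of the premium paid.
def long_call_pnl(spot, strike, premium):
    return max(spot - strike, 0.0) - premium

def long_put_pnl(spot, strike, premium):
    return max(strike - spot, 0.0) - premium

# strike 100, premium 5: the loss is capped at the premium, the upside is open-ended
print(long_call_pnl(spot=90, strike=100, premium=5))   # -5.0 (option expires worthless)
print(long_call_pnl(spot=120, strike=100, premium=5))  # 15.0
print(long_put_pnl(spot=80, strike=100, premium=5))    # 15.0
```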

3. Forwards

4. Swaps

A swap is a derivative contract between two counterparties to exchange financial instruments, payments, or cash flows for a certain time. The underlying instrument can be anything, but in most cases it involves cash flows based on a notional principal amount. Each stream of cash flows is known as a leg. Swaps can be used to hedge risk and reduce the uncertainty of certain operations. They are traded over the counter, not on an exchange, and counterparty default risk in swap contracts is high, so they are mainly used by financial institutions and companies. The most popular types of swap include interest rate swaps, currency swaps, commodity swaps, and credit default swaps.
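One settlement period of a plain-vanilla fixed-for-floating interest rate swap can be sketched as follows (hypothetical notional and rates, for illustration only):

```python
# Fixed-for-floating interest rate swap, one semi-annual settlement.
notional = 1_000_000   # notional principal: never exchanged, only used to size the payments
fixed_rate = 0.05      # the fixed leg pays 5% per year
floating_rate = 0.045  # the floating leg resets each period (e.g. to a reference rate)
period = 0.5           # semi-annual settlement, in years

fixed_payment = notional * fixed_rate * period        # fixed leg for this period
floating_payment = notional * floating_rate * period  # floating leg for this period

# in practice only the net difference changes hands
net = fixed_payment - floating_payment
print(f"Fixed payer transfers net: {net:.2f}")  # 2500.00
```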

Conclusion – Derivative Types

Thus, derivatives are financial contracts whose value is derived from an underlying asset such as stocks, bonds, currencies, or market indices. The value of the underlying asset keeps changing with market conditions, and the main aim of derivatives is to make profits by speculating on the value of the asset in the future.

Recommended Articles

This is a guide to Derivative Types. Here we also discuss the definition and types of derivatives along with the various purpose of entering into derivative contracts. You may also have a look at the following articles to learn more –

Types Of Business Insurance Risk

Running a business is inherently risky, and while you can’t protect your business against every threat it faces, it’s important to protect yourself and your company in any way possible. Purchasing business insurance is a way to mitigate risk and protect your company against unforeseen events. Here’s a look at the concept of risk in business insurance, how insurance companies assess risk and what you can do to reduce risk as much as possible.   

What is an insurance risk?

As a business owner, you’re likely familiar with how to file an insurance claim. But many owners aren’t aware of how insurance companies view risk and how this factors into your coverage and costs. 


How do insurance companies assess risk?

Insurance companies assess risk through underwriting and claims data. 

The insurance company gathers relevant data.

During an insurance applicant’s review process, underwriters use objective and subjective information to assess the risk associated with the applicant. For example, does the business have a security system (objective information)? Does the building look secure (subjective information)? 

The underwriter also gains objective information from computer-generated loss runs, meaning it looks at your business’s claims history and experience rating mods and worksheets. For instance, the Workers’ Compensation Insurance Rating Bureau of California (WCIRB) says it provides a merit rating percentage to qualified policyholders. “The rating percentage is calculated based upon the policyholder’s audited payroll and losses for three consecutive policy periods, as reported to the WCIRB by the policyholder’s insurance company,” the bureau notes. 

The data-gathering process might be specific to the insurance coverage being sought. For example, the property underwriter may obtain an Insurance Services Office (ISO) property report. Your underwriter will also evaluate business-specific data. According to Elliot Whittier Insurance, your insurance underwriter may review: 

Payroll data

Sales data

Vehicle counts and/or mileage data

A description of your operations

Information about the officers and owners of the company

Information about job duties, names of subcontractors, certificates of insurance for subcontractors, and tax documents

What data does the business need to track? 

Elliot Whittier Insurance recommends that business owners track time and payroll for different work and job categories, which allows for the lowest workers’ compensation premiums that still protect both workers and the company. In addition, it offers the following suggestions:  

Get certificates of insurance for your subcontractors, including general liability insurance and workers’ compensation.

Ensure a responsible and informed individual is present and available for onsite audits.

If you own a restaurant, keep tip records.

Inform your agent right away of any significant changes to your payroll, whether they’re increases or decreases.

The insurance company issues a rating.

After the insurance company gathers all the relevant data, the next factor is rating. The rating system assigns a price based on what the insurer believes it will cost to assume the financial responsibility for the applicant’s potential claim.

Underwriting will sort applicants into groups (risk pools) that present similar risk levels and then accept, deny or limit coverage for each applicant group. Underwriting sets a rate for each pool based on claims data for the group’s applicants. If a pool has claims data with higher average losses, it will have higher assigned premiums. 


Underwriting is not a one-size-fits-all approach. Each insurance company has its own determining factors when evaluating a pool.

What are the types of insurance risks in business? 

Now that we’ve examined how underwriters deny or limit coverage for a group of applicants, here are some examples of common insurance risk types in business. 

What are the costliest claims?

Below is a list of the costliest claims reported for small businesses, according to claims data from insurer The Hartford. [Learn more about this provider in our in-depth review of The Hartford.]

We also include suggestions for the business insurance coverage type that could help mitigate this risk. (Be sure to check for policy restrictions or coverage waivers.)               

Reputational harm: Average claim cost is $50,000. Consider reputational harm (risk) insurance or commercial general liability insurance.

Vehicle accidents: Average claim cost is $45,000. Consider commercial auto insurance.

Fire: Average claim cost is $35,000. Consider a business owners policy (BOP), commercial property insurance (business hazard insurance), commercial fire insurance or business interruption insurance.

Product liability: Average claim cost is $35,000. Consider product liability insurance. 

Customer injury or damage: Average claim cost is $30,000. Consider a BOP or commercial general liability insurance.

Wind and hail damage: Average claim cost is $26,000. Consider commercial property insurance (business hazard insurance).

Customers slipping and falling: Average claim cost is $20,000. Consider a BOP or commercial general liability insurance.

Water and freezing damage: Average claim cost is $17,000. Consider business property (business hazard) insurance.

Struck by object: Average claim cost is $10,000. Coverage depends on where the incident occurred and if the injured party is an employee or a third party. Consider workers’ compensation insurance, general liability insurance or a BOP. 

Theft and burglary: Average claim cost is $8,000. Consider a BOP or commercial general liability insurance.

These recommended policies are examples, but it’s critical to check with your insurer for policy details and speak with an insurance agent to address your specific business needs. 

Did You Know?

A business owners policy often combines general liability, business income insurance and commercial property insurance into one policy.
