# Getting Preferred Combination Of Attributes With Conjoint Analysis


Introduction

We often have to decide between two or more options when there are some things we like about one option and some things we like about the other.

Now, if we think of all the companies trying to create successful products, they can’t afford to make educated guesses about choosing the most appealing features for the customers.

This is where a proven approach called conjoint analysis comes in.

This article will take us through the essential concepts of conjoint analysis. We will learn how to prepare a conjoint design, understand the differences between conjoint analysis survey techniques, learn the key conjoint analysis terminology, implement conjoint analysis in Python, and interpret the results to determine the best combination of attributes in a product.

So, let’s get started.

This article was published as a part of the Data Science Blogathon.

What is Conjoint Analysis?

Companies win over consumers by using the right features and charging the right price. For example, smartphone manufacturers are packing more and more capabilities into these tiny devices, with billions of dollars at stake, if they get the right combinations of features and price.

Hotels and resorts fine-tune their facilities and service levels to appeal to specific target markets, such as people traveling in business class or luxury vacationers.

Consumer packaged goods companies tweak their packaging, flavors, and nutritional contents to appeal to new customer segments and create successful line extensions.

Conjoint Design

We can describe a product or service in terms of several attributes, each further broken down into several levels. E.g., a Mobile Phone may have attributes like screen size, color, brand, price, and so on, and the levels for screen size may be 5 inches, 5.5 inches, or 6 inches.

Attributes should be relevant to managerial decision-making, have varying levels in real life (at least two levels), be expected to influence preferences, be clearly defined and communicable, and not exhibit strong correlations (price and brand are an exception).

Levels of attributes should be unambiguous, mutually exclusive, and realistic.

A profile is a unique combination of attribute levels.
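To make the idea of a profile concrete, here is a minimal sketch (the brand and price levels are illustrative, not from the article) that enumerates every possible profile as a combination of attribute levels using itertools.product:

from itertools import product

# Hypothetical attributes and levels for a mobile phone
attributes = {
    "screen_size": ["5 inch", "5.5 inch", "6 inch"],
    "brand": ["Brand A", "Brand B"],
    "price": ["$300", "$500"],
}

# Every element of the cartesian product is one profile
profiles = list(product(*attributes.values()))
print(len(profiles))   # 3 * 2 * 2 = 12 candidate profiles
print(profiles[0])     # ('5 inch', 'Brand A', '$300')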

Types of Conjoint Analysis

Based on the response type of the survey questionnaire, conjoint analysis is classified as follows:

1. Ranking-based conjoint: Also called Preference-based conjoint analysis. Respondents rank the profiles from best to worst. It is similar to best-worst scaling, but respondents must allocate rankings to the intermediate options.

2. Rating-based conjoint: Also called Score-Based conjoint analysis. Respondents give ratings to the product profiles they are shown. Ratings can be on a scale of 0 to 5, 0 to 10, or 0 to 100. In some designs, respondents must allocate scores so that they add up to a fixed total (e.g., all scores in each question must add up to 100).

3. Choice-based conjoint: Respondents choose which option to buy or otherwise select. The choice-based method is the most theoretically sound, practical, and widely used.

Conjoint Analysis Process

1. Describe your research objective and the target product. List down the research questions to answer.

2. Create the combination or product profiles (Specify Attributes & Levels).

3. Select the controlled set of “product profiles” or “combination of attributes & levels” for the consumer to choose from.

4. Design the Questionnaire (Based on the abovementioned types) and collect responses.

5. Perform conjoint analysis

6. Report and Interpret results

Implementing Conjoint Analysis

Let’s take the example of pizza. We want to understand which combination of attributes & levels is most and least preferred by customers while choosing or ordering pizza so that the marketing team can enter the market with the best combinations.

The first step is to define the attributes and levels of the product.

We will take eight different attributes, namely ‘brand,’ ‘price,’ ‘weight,’ ‘crust,’ ‘cheese,’ ‘size,’ ‘toppings,’ and ‘spicy,’ where brand, price, and weight have four levels each and the rest of the attributes have two levels each.

The next step is to select the number of combinations or profiles. Here, we have a total of 4*4*4*2*2*2*2*2 = 2,048 possible combinations. But we will not use all of them, since the company may not be able to produce some combinations and customers may not prefer others. So, we will go with 16 selected combinations and their rankings from a survey. We will load the dataset in the proper format.

Python Code:
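The embedded data-loading snippet did not survive extraction, so here is a minimal sketch of what that step typically looks like. The file name pizza_data.csv is an assumption; the later snippets expect a DataFrame named df with the eight attribute columns plus a ranking column, and the plotting snippets assume matplotlib and seaborn are imported.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# 16 selected profiles with their survey rankings (file name is assumed)
df = pd.read_csv("pizza_data.csv")
print(df.head())
print(df.shape)   # expected: (16, 9) -> 8 attribute columns + 'ranking'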



We will now estimate each attribute level’s effect using a linear regression model.

import statsmodels.api as sm
import statsmodels.formula.api as smf

model = 'ranking ~ C(brand,Sum)+C(price,Sum)+C(weight,Sum)+C(crust,Sum)+C(cheese,Sum)+C(size,Sum)+C(toppings,Sum)+C(spicy,Sum)'
model_fit = smf.ols(model, data=df).fit()
print(model_fit.summary())

We can analyze the model’s fitness using parameters like R-squared, p-values, etc. The coefficients of each attribute level define its effect on the overall choice model.
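As a quick illustration (assuming the model_fit object from the snippet above), the same fit statistics can also be pulled out individually from the statsmodels results object:

# Inspecting fit statistics directly from the fitted OLS results
print(model_fit.rsquared)       # coefficient of determination
print(model_fit.rsquared_adj)   # adjusted R-squared
print(model_fit.pvalues)        # p-value for each coefficient
print(model_fit.params)         # the estimated part-worth coefficients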

Now, we will create the list of conjoint attributes.

conjoint_attributes = ['brand','price','weight','crust','cheese','size','toppings','spicy']

Before going ahead, we need to understand these conjoint analysis terminologies:

Relative importance: It depicts which attributes are more or less important when purchasing. E.g., a Mobile Phone’s Relative importance could be Brand 30%, Price 30%, Size 20%, Battery Life 10%, and Color 10%.

Part-Worths/Utility values: The amount of weight an attribute level carries with a respondent. These factors lead to a product’s overall value to consumers.
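To see how the two ideas connect: an attribute's relative importance is usually computed from its part-worth range (maximum minus minimum part-worth across its levels) as a share of the sum of all ranges. A tiny sketch with made-up ranges chosen to reproduce the mobile-phone percentages quoted above:

# Illustrative only: hypothetical part-worth ranges
ranges = {"brand": 1.5, "price": 1.5, "size": 1.0, "battery_life": 0.5, "color": 0.5}
total = sum(ranges.values())
importance = {k: round(100 * v / total, 2) for k, v in ranges.items()}
print(importance)
# {'brand': 30.0, 'price': 30.0, 'size': 20.0, 'battery_life': 10.0, 'color': 10.0}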

Next, we will build part-worths information and calculate attribute-wise importance level.

level_name = []
part_worth = []
part_worth_range = []
important_levels = {}
end = 1  # Initialize index for coefficients in model_fit.params

for item in conjoint_attributes:
    nlevels = len(list(np.unique(df[item])))
    level_name.append(list(np.unique(df[item])))
    begin = end
    end = begin + nlevels - 1
    new_part_worth = list(model_fit.params[begin:end])
    new_part_worth.append((-1) * sum(new_part_worth))
    important_levels[item] = np.argmax(new_part_worth)
    part_worth.append(new_part_worth)
    print(item)
    #print(part_worth)
    part_worth_range.append(max(new_part_worth) - min(new_part_worth))
    # next iteration

print("level name:")
print(level_name)
print("npw with sum element:")
print(new_part_worth)
print("imp level:")
print(important_levels)
print("part worth:")
print(part_worth)
print("part_worth_range:")
print(part_worth_range)
print(len(part_worth))
print("important levels:")
print(important_levels)

Now, we will calculate the importance of each attribute.

attribute_importance = []
for i in part_worth_range:
    #print(i)
    attribute_importance.append(round(100 * (i / sum(part_worth_range)), 2))
print(attribute_importance)

Now, we will calculate the part-worths of each attribute level.

part_worth_dict = {}
attrib_level = {}
for item, i in zip(conjoint_attributes, range(0, len(conjoint_attributes))):
    print("Attribute :", item)
    print("    Relative importance of attribute ", attribute_importance[i])
    print("    Level wise part worths: ")
    for j in range(0, len(level_name[i])):
        print(i)
        print(j)
        print("        {}:{}".format(level_name[i][j], part_worth[i][j]))
        part_worth_dict[level_name[i][j]] = part_worth[i][j]
        attrib_level[item] = (level_name[i])
part_worth_dict

In the next step, we will plot the relative importance of attributes.

plt.figure(figsize=(10, 5))
sns.barplot(x=conjoint_attributes, y=attribute_importance)
plt.title('Relative importance of attributes')
plt.xlabel('Attributes')
plt.ylabel('Importance')

We can see that weight is the attribute with the highest relative importance at 51%, followed by crust at 16% and toppings at 10%. Brand, cheese, and size are the least important attributes, each at 2.38%.

Now, we will calculate the utility score for each profile.

utility = []
for i in range(df.shape[0]):
    score = (part_worth_dict[df['brand'][i]] + part_worth_dict[df['price'][i]]
             + part_worth_dict[df['weight'][i]] + part_worth_dict[df['crust'][i]]
             + part_worth_dict[df['cheese'][i]] + part_worth_dict[df['size'][i]]
             + part_worth_dict[df['toppings'][i]] + part_worth_dict[df['spicy'][i]])
    utility.append(score)
df['utility'] = utility
utility

Plotting the utility score
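The plotting snippet was lost in extraction; a minimal sketch, assuming the df and utility column built above, could look like this:

plt.figure(figsize=(10, 5))
sns.barplot(x=list(range(1, df.shape[0] + 1)), y=df['utility'])
plt.title('Utility score of each profile')
plt.xlabel('Profile number')
plt.ylabel('Utility')
plt.show()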

We can see that combination number 9 has the maximum utility, followed by combination numbers 13 and 5. Combination number 14 is the least desirable because of the most negative utility score.

Now, we will find the combination with maximum utility.

print("The profile that has the highest utility score :",'n', df.iloc[np.argmax(utility)])

Now, we will determine the levels being preferred in each attribute.

for i, j in zip(attrib_level.keys(), range(0, len(conjoint_attributes))):
    #print(i)
    #level_name[j]
    print("Preferred level in {} is :: {}".format(i, level_name[j][important_levels[i]]))

Conclusion

Conjoint analysis is a type of statistical analysis used in market research to determine how customers value various components or qualities of a company’s products or services. It is founded on the idea that any product can be broken down into a set of features that ultimately influence users’ perceptions of an item or service’s value.

Conjoint analysis is an effective technique for extracting consumer preferences during the purchasing decision.

An effective conjoint design requires properly defined product attributes and levels and choosing the limited number of profiles or combinations of attributes & levels to be presented to the survey respondents.

The profile preference response can be collected in different ways, i.e., ranking-based, rating-based, or choice-based.

The conjoint analysis involves the evaluation of the linear regression model, part-worth calculation, and analysis of utility score and relative importance of features to determine the best product profile.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Related


Getting Started With Kubernetes Quickstart

Definition of Kubernetes Quickstart

In this tutorial, we will discuss the quick-start guide for using Kubernetes and the prerequisites required to set it up. But before we dive into the setup, let’s first understand what Kubernetes is: it is an open-source platform that helps us manage containerized services and workloads, enabling both declarative configuration and automation. It is also extensible and portable, with good support for tools and services because it is so widely available, and it has one of the fastest-growing ecosystems. In the coming sections of the tutorial, we will discuss its working, implementation, and the quick-start setup, along with all the prerequisites required, in more detail for better understanding and clarity.


Overview of Kubernetes Quickstart

We have already seen an overview of Kubernetes in the last section; before starting, let’s go through the points below:

1) We need to have a few prerequisites in place in order to start with Kubernetes; let’s look at the key points and steps required for the quick start.

2) We need a machine running macOS or Linux.

3) Before starting, we also require a few tools, listed below:

2) golang

3) python

4) make

5) Docker

6) Pyyaml

7) gcc compiler

8) pip

These are the tools we require before we start with Kubernetes. This is only an overview; we will have a closer look at the whole setup in detail in the coming sections of the tutorial, for easier use and setup of Kubernetes on your machine.

Configure SPIRE Server

We need to execute the below sets of commands which are as follows;

1) Create namespaces using the below command;

kubectl apply -f spire-namespace.yaml

2) Verify the namespace by executing the below commands;

kubectl get namespaces

3) configure SPIRE as below;

e.g. :

kubectl apply -f server-account.yaml -f spire-bundle-configmap.yaml -f server-cluster-role.yaml

Kubernetes Quickstart Configuration File Format

In this section, we will discuss in more detail the format we have to follow once we start the Kubernetes setup on our machine. We have to maintain a configuration file that should follow the points below; let’s have a closer look at them:

1) First, when defining the configuration file, we have to specify the stable version of the API that is needed.

2) After this, we should store our configuration file in version control before pushing it to the cluster. This helps us easily and quickly revert changes if we want to, and also aids cluster restoration and re-creation.

3) We can write our configuration file in either JSON or YAML, but it is always recommended to use YAML rather than JSON because the YAML format is more readable and user-friendly. Both formats can be used interchangeably.

4) We should always try to group related objects inside a single file to improve readability, because it is easier to maintain one file than to look through several files.

5) Also note that kubectl commands can be called directly on a directory of configuration files.

7) Also, give objects descriptions using annotations; this enables better introspection.

Here is a sample configuration file, which can help you set up your own configuration file; see below:

e.g.:

apiVersion: version here
kind: type
metadata:
  name: your name
  labels:
    app: your name
    tier: your name
    role: yourname
spec:
  ports:
    - port: your port 6379
      targetPort: your port 6379
  selector:
    app: same as above
    tier: same as above
    role: same as above

Download and Install the Delegate

In this section, we will see the installation of the Delegate and launch it in the cluster. For this, we have to follow a few steps; let’s take a closer look:

1) First step is to login in to Harness.

5) Give the name exactly as k8s-delegate.

6) We will choose the Primary Profile.

8) Open a command prompt and navigate to the Delegate path on your machine.

9) Now extract the folder we downloaded; after that, navigate to the harness-delegate-kubernetes folder we just extracted.

Create a Harness Application and Service

Follow below steps to create Application and Service;

For Service:

4) It will create the Service for you.

Your Target Kubernetes Cluster

Using this, we can represent our infrastructure, like dev, production, QA, stage, etc. Use the steps below to configure it:

1) Breadcrumb will navigate you to Environments.

2) Here we can add Environments. Fill in the details such as Name, Description, Type, etc.

4) Now go to Infrastructure Settings and provide details like Name, Description, Type, Release Name, etc.

5) Submit it.

Conclusion – Kubernetes Quickstart

We have now seen all the steps, and the configuration file format, for setting up Kubernetes on our machine. Follow the whole article and its steps for better understanding and clarity; the setup is easy if you follow the steps exactly as mentioned.


Getting Started With Culturally Responsive Teaching

The world of education is buzzing with talk of being more culturally responsive, but what does that mean, and how important is it really?

When I talk about culture, I’m talking about norms, beliefs, and behaviors that are passed down from one generation to the next—the things that explain why a student might answer a question the way he does or why another might not feel comfortable looking you in the eye when you’re speaking to her. These aspects of culture are among the most misunderstood in the teacher-student dynamic and are often the things that cause students to get into the most trouble in the school discipline system. Culturally responsive teaching (CRT) attempts to bridge the gap between teacher and student by helping the teacher understand the cultural nuances that may cause a relationship to break down—which ultimately causes student achievement to break down as well.

In her book Culturally Responsive Teaching and the Brain, Zaretta Hammond writes that “by third grade, many culturally and linguistically diverse students are one or more years behind in reading.” CRT is one of the most impactful tools for empowering students to find their way out of that achievement gap. This alone makes being culturally responsive one of the most important things you can learn at this moment.

Getting Started

The first step in being culturally responsive is to do an internal audit—yes, you read that right, an audit: truly digging deep inside of ourselves and recognizing and naming those things we don’t want to look at or talk about. The experiences we’ve had along our journey in life have formed stereotypes which have then turned into implicit bias. These unintentional, unconscious attitudes impact how we relate to our students and their parents, and how we choose curriculum, assess learning, and plan lessons. Harvard University’s Project Implicit has an online test you can take to examine your implicit bias.

Culturally responsive teachers also have to be aware of the sociopolitical context schools operate in and dare to go against that status quo. Students need to understand the system that is working around them in schools. Give them context and don’t be afraid to talk about the tough subjects that may not be addressed in your school. In addition to Hammond’s Culturally Responsive Teaching and the Brain, another great resource is Affirming Diversity by Sonia Nieto. The most important part of this work is a willingness to do something different to get different results, with the goal of increasing academic achievement.

For your audit, take some time to ask yourself hard questions and reflect on past and current practices. Are you operating from a place of critical care within your classroom—a place that marries high expectations with empathy and compassion?  Are your students, regardless of socioeconomic status or background, being held to high standards? Has your past interaction with a particular race of people impacted your ability to communicate with parents? Identify those places in your instructional planning where you might have allowed your implicit biases to prevent you from pushing your students to achieve at optimal levels. Answering questions like these might be hard, but in order to create change, you have to identify and unearth the roots of your teaching practice.

Next Steps

Now that you have conducted an internal self-audit, your curriculum will need one as well. What books are students reading? Do they have a voice in what they read, where they sit, how they interact with each other?

Empowering students to take ownership of not just their learning but the environment itself is another critical component of CRT. One strategy for fostering a student-centered environment is having students create a classroom agreement that answers the question: “How will we be together?” Allowing students to answer this question will give you a window into how their cultures dictate the ways in which they want to feel respected, heard, safe, and included in the classroom and in their interactions with one another and with you. This reinforces the idea not only that they belong but that the way they show up at school every day, with all of their outside experiences in tow, has value.

Finally, put some thought into your lesson planning. You have taken the time to reflect and really look into your own biases that may have been getting in your way. You have revamped your classroom environment to reflect your students’ voices, their various cultural needs, and their choice. Now let’s have some fun. For example:

Encourage students to make a social media campaign that champions their favorite cause, and have them bring evidence of their results to class to discuss the role social media plays in social change.

Use current songs that students might love to analyze the use of literary techniques and imagery in music videos. Taylor Swift’s “Wildest Dreams” is a great one. Better yet, instead of assigning a song, ask students for their suggestions.

Watch and discuss documentaries like Race: The Power of an Illusion.

Zaretta Hammond shared three simple strategies you can use to make lessons in any subject more culturally responsive.

Our students need us now more than ever, and we have to roll up our sleeves and do what we must to close the achievement gap. Culturally responsive teaching is one step in the right direction. The outcome is a student body that loves learning, excels academically, and has teachers who respond to their needs.

Being culturally responsive encourages students to feel a sense of belonging and helps create a safe space where they feel safe, respected, heard, and challenged.

Getting At The Root Of Security Problems

“Why are our jobs moving overseas?”

What are we talking about — manufacturing of the 1980s or information technology today?

In the same way that the manufacturing world had a wake-up call from Japan, China and Korea, the US IT industry is having one today. Quality and cost concerns are now causing global shifts in the way information technology is organized. At the same time, IT is being challenged with regulatory compliance. All of this together creates a challenging environment.

One thing to note is that in the same way that manufacturing tried to inspect quality into products, IT is trying to inspect quality into systems and services. You can see it in regards to IT security spending.

For years and years now, security practitioners have known that there is a direct relationship between errors and security problems. Simply put, the more errors in a system, the greater the probability of a security problem.

Yet, even though this is well known, nobody addresses the root problem.

Instead, what do they do? They go out and buy expensive hardware and software, retain consultants and hire staff all to try and compensate for poor initial quality.

There is something fundamentally wrong with this.

Quality, as well evidenced by the manufacturing industry, must be built into the products. This is done by addressing process issues. If manufacturing followed IT’s approach, costs would be through the roof, the trash bins would be full and customers would be disappointed. Instead of spending more on technology and after-the-fact add-ons that mainly focus on symptoms, IT must change its focus and look at its core processes.

In 2003, a CompTIA study found that 63 percent of security breaches were attributable to human factors. In this year’s study that number rose to 84 percent despite heightened awareness.

Today’s IT security model is broken and this is not a technology issue.

Yes, there are clearly offensive threats that must be mitigated by firewalls, antivirus applications, and so on, but this does not diminish the fact that the processes are in dire need of attention. Not only must IT’s processes mature and benefit security, but they must clearly add value to the entire IT group and overall parent organizations as well.

Quality is not achieved in a vacuum.

Starting to Fix the Problem

So what do we do first to address quality?

Stop. Do not run out to hire consultants and buy software to improve quality. Instead, focus on your processes and ask three questions.

Are the right processes formally documented?

Is there proof that people are actually following the documented processes?

Are you focusing on continuous improvement through benchmarking and audits?

These three questions are basic to almost any form of quality initiative. You have to reduce variations in order to identify the key aspects that need improving. If each person builds a server their own way and one person’s server has higher availability, then it takes an inordinate amount of time to try and decipher what the beneficial differences are.

Take your best people, not the one sitting on the bench because he’s worthless, and document the best practices in the organization. Benchmark the processes and seek further guidance from the Information Technology Infrastructure Library (ITIL), the ITPI’s Visible Ops methodology, and the Microsoft Operations Framework (MOF).

Now it’s the Vendors’ Turn

It is not enough to focus quality improvement efforts solely in-house. Today, a large percentage of most firms’ software is either outsourced or purchased off-the-shelf. Vendors must be made to understand that quality is job one. Is it any wonder that car companies mandate quality programs in their suppliers and even provide training and auditing programs to that end?

At the minimum, consider four simple steps.

Then establish metrics; performance studies must be objective. Next, regularly review performance and provide feedback to the vendors. And finally, mandate continuous improvement by setting expectations.

Why must IT accept substandard quality from their vendors?

The answer is that many companies simply do not understand the causal relationship between poor IT products, security expenditures and total costs. Manufacturers wouldn’t stand to shoulder the costs of poor quality and neither should IT.

Poor security is the symptom of poor processes and can not be effectively remedied by pouring money into technology and staff. The true problem that must be addressed lies with processes that must be scrutinized, formalized and continuously improved, not just within IT, but within IT’s entire supply chain as well.

To improve security and overall operations, IT must go after the root cause and not just the symptom.

Getting Started With Restful Apis And Fast Api

This article was published as a part of the Data Science Blogathon.

Introduction

In this article, we will explore the details of RESTful architecture, the merits of Fast API, and how to create a simple API with Fast API. But before that, we shall discuss some basics of APIs like their functionalities, uses, and architectures, and then we will move on to more exciting stuff related to REST and Fast API.

What are APIs?

Imagine you went to a restaurant with your partner for a dinner date, and you want to place an order. How would you do it? Call the waiter and place an order. Now, what happens at the backend? The waiter goes to the chef and gives the order details. When the dishes are ready, he brings them to you.

Well, you just learned the way APIs typically work. In technical terms, the API is an intermediary between two software components to communicate with each other. API stands for Application Programming Interface. An interface between two entities.

APIs interact via a communication medium. As the internet is the most popular communication medium, an API refers to a web API. But APIs existed even before the web. So, not every API is a web API. The web APIs typically use HTTP to request response messages from the server, and these response messages could be of any supported format like XML, JSON, CSV, etc.

As the use of APIs grew, the need for standard data exchange between web services grew. In this way, APIs written in different languages can interact with each other. There are several protocols and architecture developed to address the problem. The most popular of them are SOAP and REST.

SOAP stands for Simple Object Access Protocol and uses HTTP and SMTP to communicate between different systems. The response messages are typically in XML format. On the other hand, REST (Representational State Transfer) APIs aren’t protocols but an architectural style. As long as an API follows the architecture constraints, we are good to go.

RESTful APIs

As REST is not a protocol but an architecture, developers do not need to worry about built-in rules while creating one, unlike SOAP. REST is lightweight, fast, and flexible. The communication medium is HTTP. The message responses can be of various formats such as JSON, XML, plain text, and HTML. JSON is the most popular as it is language agnostic. The REST architecture specifies some guidelines. API adhering to these guidelines is a REST API. These guidelines are,

Client Server design: A client-server architecture decouples the user interface from the storage interface. This decoupling of clients and servers helps them to evolve independently.

Stateless: Each request from the client must contain all the information the server needs to handle it; the server does not store any client session state between requests.

Cacheable: This constraint requires that the response data from the server to a request made by a client be labeled as cacheable or not. If the data is cacheable, the client has the privilege to reuse it later.

Uniform Interface: A uniform interface ensures data transfer in a standardized format instead of specific to an application’s needs.

Layered system: A layered system allows the architecture to be composed of hierarchical layers. The components cannot see any layer beyond their immediate layers.

Code on demand: It allows the clients to extend their functionalities by downloading code blocks from servers as applets or scripts.

So, these are the six constraints that make an API a RESTful API.

One can develop REST APIs with any programming language. Examples are Javascript (Node js), Go lang (Gin, Martini), Ruby(Roda, Sinatra), Python (Flask, Django, FASTapi), Java (spring boot), PHP (Laravel), etc.

For this article, we will be focussing on Python’s FASTapi.

FAST API

Fast API is a Python web framework for creating APIs and web services. It was released in 2018 as an open-source Python web framework. Despite being relatively new, Fast API has garnered a strong reputation among developers. Tech behemoths like Microsoft, Uber, and many more have started using Fast API in their tech stacks.

The unique selling point of Fast API is its speed. Python is often dubbed a slow programming language and is sometimes considered unsuitable for applications where execution speed is the prime need. But Fast API, as its name suggests, is among the fastest Python frameworks, on par with Go and Node.js frameworks, thanks to ASGI (Asynchronous Server Gateway Interface). ASGI allows Fast API to support concurrency and async code, and it fundamentally separates Fast API from the Flask web framework, which uses WSGI (Web Server Gateway Interface).

So, what are ASGI and WSGI? WSGI handles requests from clients synchronously: each request has to wait until the previous one is complete, which makes the entire process slow. ASGI handles requests asynchronously, so a request does not need to wait for the completion of the previous one, making execution faster. Flask, Bottle, and Django are examples of WSGI-based frameworks; Fast API is an ASGI-based framework.

Now, let’s point out some of the prime aspects of Fast API.

Excellent Performance: Like we already discussed, Fast API is the fastest Python web framework in the market now.

Concurrency Support: Fast API supports concurrent programming.

In-built documentation: Swagger UI GUI allows automatic browser-based documentation of API.

In-built data validation: Fast API uses Pydantic for data validation, which saves a lot of time. It returns a JSON with the reason for any wrong data type.

We now have some initial ideas for Fast API. Now, we will move to the part where we do all the code stuff.

Install Requirements

First, We will install all the requirements to run Fast API applications. Like any Python project, we will create a virtual environment where we will install all the dependencies and libraries for the project. The reason for using a virtual environment is Python is not very good at resolving dependency issues. Installing packages to the operating system’s global python environment might conflict with system-relevant packages. So, we create virtual environments.

We can do that by running simple scripts.

python -m venv fastapi

We created a virtual environment named ‘fastapi’ in the current directory. Then we will launch it by typing the following code.

fastapi/Scripts/activate

It will activate our virtual environment. The next step is to install the required libraries.

Install Fast API with pip command. Open up your shell and type in the following code.

python -m pip install fastapi uvicorn[standard]

So, we installed the fastapi and uvicorn server in our virtual environment. We will see what it does in a few moments.

Creating Simple API

Create a python file for your project. And type in the below codes. You may do it in any IDE.

from fastapi import FastAPI app = FastAPI() @app.get("/") async def root(): return {"message": "to be or not to be"}

In the above code snippet, we imported the FastAPI class from the fastapi module we just installed.

Then we created an app object of the class FastAPI.

The @app.get(“/”) is responsible for path operation, which is a decorator. It indicates that the below function is responsible for handling requests that go to the “/” path using the get operation.

The get operation is one of the HTTP operations along with Post, put and Patch, etc.

Next is the async function root, which returns a dictionary.

Note: The difference between the async/await function and a normal function is that in an async function, the function pauses while awaiting its results and let other such functions run in the meantime. For more reading on asynchronous functions, visit here.

In the above code, we can also write a regular function.
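For instance, a minimal sketch of the same endpoint written as a regular (synchronous) function, which FastAPI also accepts (plain def path functions are run in a threadpool):

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    # A plain def works too; FastAPI runs it in a threadpool instead of the event loop
    return {"message": "to be or not to be"}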

Next, in the shell, type in the below script.

uvicorn main:app --reload

In the above script, main is the name of the Python file, and app is the name of the FastAPI instance we created in our code. The --reload flag is for development purposes: when you hit ctrl+s after changing your code, the uvicorn server picks up the change automatically, so we don’t have to rerun the shell command.

The output of the above script is

INFO:     Will watch for changes in these directories: ['D:']
INFO:     Started reloader process [24796] using WatchFiles
INFO:     Started server process [28020]
INFO:     Waiting for application startup.
INFO:     Application startup complete.

{"message":"Hello World"} Path Parameters

As per Open API standards, a path parameter is the variable part of a URL path. It points to a specific resource location within a collection. See the below code for a better understanding,

from fastapi import FastAPI app = FastAPI() @app.get("/user/{user_id}") async def read_item(user_id:int): return {"user id": user_id}

The path parameter value goes to the read_item function. Notice we have mentioned the type of user_id, which means user_id has to be an integer. By default, the value is a string.

{"user id":100}

If you provide anything other than an integer, the output will be

{"detail":[{"loc":["path","user_id"],"msg":"value is not a valid integer","type":"type_error.integer"}]}

Pydantic is responsible for all the under-the-hood data validation.

The Swagger UI

Query Parameters

Any parameters other than the path parameters are query parameters. These are the most common types of parameters. The query is the set of key-value pairs that go after the ? in a URL, separated by & characters. Consider the below example.

from fastapi import FastAPI app = FastAPI() @app.get("/items/{item_id}") async def read_user_item(item_id: str, needy: str): item = {"item_id": item_id, "needy": needy} return item {"detail":[{"loc":["query","needy"],"msg":"field required","type":"value_error.missing"}]} {"item_id":"foo-item","needy":"needy"}

Swagger UI

Request Body

A client sends data to the API through a request body and receives a response body in return. Clients don’t always need to send a request body, but the API almost always must return a response body to the clients.

We will declare our data model as a user_details class that inherits from the Pydantic base model. We will then use Python data types to define attributes. See the below example.

from fastapi import FastAPI
from pydantic import BaseModel
from typing import Union

app = FastAPI()

class user_details(BaseModel):
    user_id: str
    user_name: str
    income: int
    age: Union[int, None] = None

@app.post('/users/')
async def user_func(user: user_details):
    # Note: the condition of the original if-branch was lost during extraction;
    # a simple positive-income check is assumed here purely to keep the code runnable.
    if user.income > 0:
        tax = (user.income / 100) * 30
        return f'Mr/Ms {user.user_name}, user Id {user.user_id} will pay a sum of {tax} as income tax'
    else:
        return f'Mr/Ms {user.user_name}, user Id {user.user_id} will pay a sum of {0} as income tax'

The request body user_details is the child class of Pydantic’s BaseModel. We then defined attributes such as user name, Id etc. In the user_func function, we declared the user as user_details type just as we did in path and query parameters.

Inside the function, we can access all the methods directly.

We can also use path, query, and request body together.

@app.put("/users/{user_add}") async def create_item(user_add: str, user: user_details, x: Union[str, None] = None): result = {"user_add": user_add, **user.dict()} if x: result.update({"x": x}) return result

The function parameters will be recognized as follows (an example request is shown after this list):

If a parameter is declared in the path URL, it will be a path parameter. (user_add)

If the parameter is of a singular type (like int, float, str, bool) it will be interpreted as a query parameter. (x)

If the parameter is declared to be of the type of a Pydantic model, it will be a request body. (user)
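Here is a hedged sketch of calling that combined endpoint with the requests library (the address is the uvicorn default and the values are purely illustrative):

import requests

payload = {"user_id": "u1", "user_name": "Asha", "income": 500000, "age": 30}
resp = requests.put(
    "http://127.0.0.1:8000/users/new-address",   # user_add is the path parameter
    params={"x": "hello"},                        # x is a query parameter
    json=payload,                                 # user is the request body
)
print(resp.json())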

ML Model as a Web Service

Now that we know the nuts and bolts of Fast API, we will create a predictive model and deploy it as a web service. All we need to do is put together all the pieces we have learned so far.

As discussed earlier, create a virtual environment, install the necessary libraries, and create two Python files: one for model creation and the other for the Fast API app.

The code for our model:

import pandas as pd

df = pd.read_csv('D:/Data Sets/penguins_size.csv')
df.dropna(inplace=True)  # removing Null values
df.drop('island', axis=1, inplace=True)

# label encoding
from sklearn.preprocessing import LabelEncoder
enc = LabelEncoder()
for col in df.select_dtypes(include='object'):
    df[col] = enc.fit_transform(df[col])

# train test split
from sklearn.model_selection import train_test_split
y = df.species
df.drop('species', axis=1, inplace=True)
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.15)

# model train
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

from sklearn.metrics import accuracy_score
acc = accuracy_score(y_pred, y_test)
print(f'The accuracy of model is {acc}')

# save model
from joblib import dump
dump(model, 'penguin_model')

It is a simple model where we used a penguin dataset. After training the data, we saved the model using Joblib so that we will be able to use the model later in our fast API. Let’s create our API.

1. Import necessary libraries and load the saved model

from fastapi import FastAPI
from pydantic import BaseModel
from joblib import load

model = load('penguin_model')
app = FastAPI()  # FastAPI instance used by the route below (not shown in the original snippet)

2. Define the input class

class my_input(BaseModel):
    culmen_length_mm: float
    culmen_depth_mm: float
    flipper_length_mm: float
    body_mass_g: float
    sex: int

3.  Define request body

@app.post('/predict/')
async def main(input: my_input):
    data = input.dict()
    data_ = [[data['culmen_length_mm'], data['culmen_depth_mm'],
              data['flipper_length_mm'], data['body_mass_g'], data['sex']]]
    species = model.predict(data_)[0]
    probability = model.predict_proba(data_).max()
    if species == 0:
        species_name = 'Adelie'
    elif species == 1:
        species_name = 'Chinstrap'
    else:
        species_name = 'Gentoo'
    return {
        'prediction': species_name,
        'probability': float(probability)
    }

Now run the Python file where we defined our model. After successful execution, you will see a Joblib model in the same directory. Now run the application through uvicorn as we discussed earlier.

uvicorn app_name:app --reload
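Once the server is up, a hedged sketch of calling the prediction endpoint with the requests library (the feature values are illustrative, and the default uvicorn address is assumed):

import requests

sample = {
    "culmen_length_mm": 44.5,
    "culmen_depth_mm": 17.1,
    "flipper_length_mm": 200.0,
    "body_mass_g": 4200.0,
    "sex": 1,
}
resp = requests.post("http://127.0.0.1:8000/predict/", json=sample)
print(resp.json())   # returns {'prediction': <species_name>, 'probability': <float>}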

It’s working.

Conclusion

Fast API is a new addition to Python web frameworks. Despite being a new entrant, it is already gaining traction in the developer’s community. The execution speed of async programming with Python’s easiness is what sets it apart. Throughout the article, we touched on several topics essential to get you started with Fast API.

 Here are the key takeaways from the article.

Restful APIs are the APIs that follow REST architectural constraints (client-server, cacheable, stateless etc)

Fast API is the fastest and easiest for API design among all the Python web frameworks.

Prime aspects of Fast API are in-built data validation, In-built documentation, concurrency, and performance.

Throughout the article, we went through some key concepts of Fast API, such as path parameters, query parameters, request body etc. And built an API to serve an ML model.

So, this was all about Fast API initial steps. I hope you enjoyed the article.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

Related

Google My Business Adds 4 New Attributes

Google is rolling out four new attributes that businesses can use to make their Google My Business listing stand out in search results.

Carrie Hill, a local search analyst with Sterling Sky, Inc., reports the following four attributes have started appearing in business’s knowledge panels:

Online Care

Online Appointment

Online Estimates

Online Classes

Businesses can add these, and any other existing attributes, to their GMB profile and Google may show them in search results for relevant queries.

Here’s an example of what the new “Online Care” attribute looks like in search results.

— Tom Waddington (@tomwaddington8) June 15, 2023

Attributes in a Google My Business profile are designed to grab searchers’ attention by highlighting important service offerings.

Searchers can use attributes to make more informed decisions about where to visit.

Attributes have traditionally been tailored toward people visiting the location in person, such as “WiFi,” “outdoor seating,” and things of that nature.

With many businesses still being forced to remain closed, there’s been a shift toward offering online services. In some cases, businesses are serving clients online for the first time ever.

Similarly, people are seeking online alternatives to services they can no longer access in person. This may include doctors, fitness instructors, therapists, and others.

Given the sudden change in services businesses are offering, and the change in services people are looking for, it’s time for Google My Business to be updated accordingly.

Related: How to Completely Optimize Your Google My Business Listing

How to Add Attributes to Your Google My Business Listing

Businesses can add the new attributes, or existing attributes, to their Google My Business listing by following the steps below:

Sign in to Google My Business.

Open the location you’d like to manage.

You can search for the attribute you want to add, or scroll through all the available options for your business.

Note that not all businesses have access to all attributes. Available attributes vary according to the category of business.

For example, a pizza delivery place is unlikely to be able to add attributes such as “online care” and “online classes” to their GMB listing.

If your business is one that would have any of the four new attributes as service offerings, then it’s likely you’ll have access to them in GMB.

Google has been rolling out a steady stream of updates to its GMB platform ever since the pandemic hit. This is the second time new attributes have been added in less than a month.

Back in May, Google My Business added three new attributes to help restaurants highlight whether they’re offering dine-in, takeout, or delivery services.

As businesses start to re-open now, I predict the next wave of attributes will be related to businesses’ safety measures.

In the future, businesses may be able to highlight whether masks are mandatory or optional, for example.

It may become important for businesses to highlight their maximum capacity as well. Then, searchers can use the in-store traffic estimates to gauge whether it’s a good time to visit.

That’s just me brainstorming, though I imagine it won’t be long before we see more GMB updates as businesses adjust to the “new normal.”

Source: Local University

Related: How to Get More from Your Google My Business Listing
