Mastering macOS Mojave’s New Screenshot Tools


macOS Mojave significantly changed the way screenshots work on macOS. In this article we’ll review those changes, explain how to accomplish tasks previously handled by the now-defunct Grab utility, and describe how to use the new Screenshots application most effectively.

In the past, Mac users could press Command + Shift + 3 and Command + Shift + 4 to capture screenshots of the full screen and a region, respectively. Those shortcuts are still available, so you don’t need to retrain your muscle memory. But Command + Shift + 5 invokes the new Screenshots application, which provides a screenshot GUI and more options, especially for post-capture editing.

The Screenshots Toolbar

Pressing Command + Shift + 5 will pull up the Screenshots toolbar.

The buttons on the toolbar perform the following actions, from left to right:

Capture Entire Screen: take a screenshot of everything on the screen.

Capture Selected Window: take a screenshot of only the foremost window.

Capture a Selected Area: drag a box around a region to capture.

Record Entire Screen: record a video of the entire screen.

Record Selected Area: record a video of the selected region.

The first three capture still images and correspond to the keyboard shortcuts Command + Shift + 3, Command + Shift + 4 + Space, and Command + Shift + 4, respectively. The last two record videos, which is new in Mojave. If you’ve ever used QuickTime to record your screen, you’ll recognize the functionality: it has essentially been moved from QuickTime’s screen-recording feature to the Screenshots toolbar.

To start a video, select either of the two record options and press the “Record” button in the toolbar. To stop the recording, either press the Stop button in the Touch Bar (if applicable) or in the menu bar.

Screenshot Options

The “Options” menu reveals more settings.

Long-time macOS screenshot pros may recognize these options: they were once confined to Terminal commands, but now they can be set through this menu.

Save to

The first section allows you to select the destination of your screenshot. The screenshot won’t be saved there immediately by default: if you don’t interact with the floating screenshot thumbnail, the screenshot will then be saved to this location.

“Clipboard” will copy the screenshot to the clipboard after capture. Use the “Paste” command (or the Command + V keyboard shortcut) to insert the screenshot into an editable field. Choosing an application (Mail, Messages, or Preview) will open the screenshot in that application immediately. “Other Location …” allows you to set a specific folder as the screenshot’s destination. If you select a location, it will be included in the destination menu later.


Timer

The timer functions as expected: the “None” preset captures the screenshot immediately, while the five- and ten-second presets delay the capture by that many seconds.

Additional Options

At the bottom are miscellaneous options. “Show Floating Thumbnail” controls the post-capture behavior: keep it checked, and macOS will display a temporary thumbnail after capture and before saving to disk. “Remember last selection” reuses the region selection box from your last capture, allowing for easier repeatable screen captures – if you’re capturing the same window repeatedly, this is perfect. “Show mouse pointer” controls whether or not your cursor appears in the screenshot.

Editing Screenshots with Markup

In the Markup window you can use all of the annotation tools from Preview on your images. While the set might not approach the usefulness of professional annotation programs, it’s more than adequate for simple labeling or editing. Applying a signature or circling an object is especially useful.


The new Screenshots tool in Mojave is a major upgrade from the previous tool. With the benefit of a graphical interface, taking a screenshot is easier, clearer, and more robust. While third-party tools like Snagit still offer significantly more editing and markup tools, Screenshots is a major upgrade for all Mac users.

Alexander Fox

Alexander Fox is a tech and science writer based in Philadelphia, PA with one cat, three Macs and more USB cables than he could ever use.



macOS 12 Monterey: New Features, Compatibility, Release Date

macOS 12 Monterey received its first preview during the WWDC21 keynote. The Mac software focuses on productivity features, with a new Shortcuts app, Universal Control, AirPlay to Mac, Safari updates, and more. Here’s everything about Apple’s latest macOS update.

New features in macOS 12 Monterey

With the release of macOS 11 Big Sur in 2020, Apple gave the Mac software a nice redesign. With its neo-skeuomorphic UI, macOS Big Sur introduced a new Control Center and Notification Center, revamped the Safari and iMessage apps, and more.

Now, with macOS 12 Monterey, Apple is refining the Big Sur experience with a redesigned Safari, Universal Control feature, AirPlay to Mac, and much more.

Universal Control

macOS 12 Monterey brings Universal Control, which allows the user to easily use an iPad with a Mac, with the keyboard and mouse moving seamlessly between devices.

It’s possible, for example, to drag a Procreate drawing on the iPad into a Final Cut project on the Mac. Universal Control uses Bluetooth beaconing, peer-to-peer Wi-Fi, and the iPad’s touchpad support to let the devices know they’re close to one another.

Shortcuts app on macOS 12 Monterey

Apple called Shortcuts the future of automation on Mac, and the first steps in Monterey are part of a multi-year transition. Automator will continue to be supported in this release and Automator workflows can be imported into Shortcuts.

The app on the Mac looks similar to Shortcuts on the iPad. You can build new shortcuts, access existing ones, and more. The Shortcuts app on the Mac also integrates with Spotlight, appears in Finder, supports multitasking, and lives in the menu bar.

FaceTime and SharePlay

FaceTime is receiving a lot of love from Apple with macOS 12 Monterey. The video call app will feature, for example, Spatial Audio. With this function, Apple says it creates a “sound field that helps conversations flow as easily as they do face to face.”

Another function available is Portrait mode in calls. With this on, you can blur your background and put the focus on yourself. It works exactly as it does on the iPhone camera.

Voice Isolation mode isolates your voice from all other noises, while Wide Spectrum mode lets you hear everything happening in your friend’s surroundings.

FaceTime is also receiving a Grid view, and with FaceTime links you can invite friends into a FaceTime call via a web link, even on Windows and Android devices.

With SharePlay, here’s everything you can do:

Watch together: Bring TV shows and movies to your FaceTime call

Listen together: Share music with friends

Shared music queue: When listening together, anyone in the call can add songs to the shared queue

Share your screen: Bring web pages, apps, and more into your conversation

Synced playback: Pause, rewind, fast-forward, or jump to a different scene while in perfect sync with everyone else

Smart volume: Dynamically responsive volume controls automatically adjust audio so you can hear your friends even during a loud scene or climactic chorus

Multiple device support: Connect over FaceTime on the iPhone while watching video on the Apple TV or listening to music on the HomePod

Connect through audio, video, and text: Access the group’s Message thread right from the FaceTime controls and choose the mode of communication that matches the moment

AirPlay to Mac

AirPlay to Mac lets your Mac be an AirPlay destination, so you can quickly play content from your iPhone on your Mac’s display.

For the first time, the Mac can be used as a speaker for multiroom audio, just as the HomePod can. AirPlay works both wirelessly and wired over USB. A wired connection is useful when you want to ensure there’s no latency or you don’t have access to Wi-Fi.

Redesigned Safari

Safari also received a major redesign with macOS 12 Monterey. Here are some of the functions coming to the new Safari:

Streamlined tab bar: The streamlined tab bar takes up less room on the page and adjusts to match the colors of each site, extending your web page to the edge of the window.

Tab Groups: Save your tabs and access them across your devices.

Improved Intelligent Tracking Prevention: Intelligent Tracking Prevention now also prevents trackers from profiling you using your IP address.

Redesigned tabs: Tabs have a rounder and more defined appearance, making them easier to work with. Tabs fluidly adapt as you add more, shrinking or stretching to fit the page.

iCloud Private Relay: iCloud Private Relay is a service that lets you connect to virtually any network and browse with Safari in an even more secure and private way. It ensures that the traffic leaving your device is encrypted so no one can intercept and read it. 

Safari’s new design caused a lot of controversy, which Apple gradually addressed, although the new UI remains.

macOS 12.1 features

Here’s everything new with macOS 12.1:


SharePlay

SharePlay is a new way to share synchronized experiences in FaceTime with content from the Apple TV app, Apple Music, and other supported apps

Shared controls give everyone the ability to pause, play, rewind or fast forward

Smart volume automatically lowers the audio of a movie, TV show, or song when you or your friends speak

Screen sharing lets everyone on a FaceTime call look at photos, browse the web, or help each other out

Apple Music Voice Plan

Apple Music Voice Plan is a new subscription tier that gives you access to all songs, playlists, and stations in Apple Music using Siri

Just Ask Siri suggests music based on your listening history and likes or dislikes

Play it Again lets you access a list of your recently played music


Photos

Memories has been redesigned with a new interactive interface, new animation and transition styles, and multiple image collages

New Memory types include additional international holidays, child-focused memories, trends over time, and improved pet memories


Messages

Communication safety setting gives parents the ability to enable warnings for children when they receive or send photos that contain nudity

Safety warnings contain helpful resources for children when they receive photos that contain nudity

Siri and Search

Expanded guidance in Siri, Spotlight, and Safari Search to help children and parents stay safe online and get help with unsafe situations

Apple ID

Digital Legacy allows you to designate people as Legacy Contacts so they can access your iCloud account and personal information in the event of your death

TV App

Store tab lets you browse, buy, and rent movies and TV Shows all in one place

This release also includes the following enhancements for your Mac:

Hide My Email is available in the Mail app for iCloud+ subscribers to create unique, random email addresses

Stocks lets you view the currency for a ticker and see year-to-date performance when viewing charts

Reminders and Notes now allow you to delete or rename tags

macOS 12.2 features

Unlike macOS 12.1, macOS 12.2 didn’t bring lots of new features. This software update includes a security fix for a serious Safari flaw, along with a few other improvements and fixes.

The Safari exploit can leak users’ browsing history as well as Google account IDs. Another fix with macOS 12.2 improves scrolling in Safari with ProMotion on the new MacBook Pro. There’s also a new, native Apple Music app.

macOS 12.3 features

macOS 12.3 finally brings the long-awaited Universal Control feature. Here’s everything new with this update:

Universal Control;

40+ fresh emoji;

New, non-binary Siri American voice;

Dynamic head tracking for Spatial Audio;

Add notes to saved passwords in Safari;

Support for the PS5 DualSense adaptive triggers;

ScreenCaptureKit framework for “high-performance screen recording.”

macOS 12.4 features

macOS 12.4 brings small improvements compared to the previous versions of the operating system:

Apple Podcasts includes a new setting to limit episodes stored on your Mac and automatically delete older ones

Support for Studio Display Firmware Update 15.5, available as a separate update, refines camera tuning, including improved noise reduction, contrast, and framing.

macOS 12.5 features

macOS Monterey 12.5 includes enhancements, bug fixes, and security updates.

The TV app adds the option to restart a live sports game already in progress and pause, rewind, or fast-forward

Fixes an issue in Safari where a tab may revert back to a previous page

macOS 12.5 is likely the last macOS 12 Monterey update before Apple releases macOS 13 Ventura.

macOS 12 Monterey exclusive features to the M1 Macs

With macOS 12 Monterey, Apple is focusing on a bunch of features exclusively for Macs with Apple silicon. Here’s everything exclusive to the M1 Mac mini, M1 MacBook Air, M1 MacBook Pro, and 24-inch iMac:

Portrait Mode on FaceTime: Only M1 Macs on macOS Monterey will be able to blur the background of a video call using FaceTime.

Object Capture: With macOS Monterey, users will be able to turn a series of 2D images into a photo-realistic 3D object that’s optimized for AR in just minutes using the power of Mac.

Siri: Neural text-to-speech voice in more languages is only available to the M1 Macs. With macOS Monterey, this feature will be available in more languages: Swedish (Sweden), Danish (Denmark), Norwegian (Norway), and Finnish (Finland).

On-device dictation: Keyboard dictation helps protect the user’s privacy by performing all processing completely offline. With macOS Monterey, users can dictate texts of any length without a timeout.

Previously, the Live Text in Photos feature, which gives users the ability to interact with text in photos, such as copy and paste, lookup, and translate, was only available for M1 Macs, but then Apple made it available to Intel Macs as well.

macOS 12 Monterey device compatibility

Here’s the full list of Macs compatible with macOS Monterey:

Early 2016 and later MacBook

Early 2015 and later MacBook Air

Early 2015 and later MacBook Pro

Late 2014 and later Mac mini

Late 2015 and later iMac

2017 and later iMac Pro

Late 2013 and later Mac Pro

Hands-on with macOS 12 Monterey

Watch our hands-on video walkthrough as we step through over 100 macOS Monterey changes and features.

macOS 12 Monterey was introduced at Apple’s annual Worldwide Developers Conference on June 7, 2021.

The macOS 12 Monterey developer beta program started on the same day as the keynote, June 7. On July 1, Apple released the public beta of macOS 12 Monterey.

Apple officially released macOS 12 Monterey on October 25.

macOS 12 Monterey: here’s how to install the public beta version


Putting Learning First With New Tech Tools

Five areas you can focus on to ensure that the digital tools transforming education serve your learning objectives.

It’s wild to think about how new technologies are changing the way we think about teaching and learning. The digital tools many students have access to both inside and outside the classroom require us all to take a hard look at the way we use these tools in the context of learning experiences. It’s easy to get caught up with the shiniest, brightest, or most attention-grabbing digital device or website, but it is possible to pause, reflect, and prioritize tasks over digital tools in the classroom. Are we putting the learning first?

Designing rigorous learning experiences in a tech-rich classroom requires us to take a step back and think about the ways technology can elevate and energize students. Prioritizing learning experiences means identifying our objectives and pausing to explore how digital tools can help students dive into these learning experiences like never before.

Digital tools let students collaborate in new ways, question the world around them, connect their work with the world, create products that demonstrate their understanding, and wonder about new topics they encounter. These strategies can help you integrate technology into a lesson as you design learning activities.


Although digital tools have changed the way we think about creating in the classroom, collaboration means more than accessing the same document from different devices. Students can work together as they dive into new content and apply what they’ve learned in the classroom.

You might ask students to log in to a shared presentation outside of school hours to combine their research efforts in one place. Remote collaboration on a shared document is powerful and gives students a new way to provide feedback to one another. Designing learning activities that leverage the collaborative nature of digital tools allows students to explore a topic while sitting in different classrooms or time zones.

And when students share a screen—leaning over to discuss, record, and dive into media together—they also build transferable skills. This shared-screen collaboration gives students an opportunity to compromise, work toward a common goal, and think critically as they dive into course content with their peers.


Typing a question into a search bar won’t always give students the full answer. We can leverage the power of digital tools to help students explore the world and answer questions that haven’t been asked yet or don’t have one correct answer. These deep-dive questions require educators to model their own searches and strategies for evaluating websites and online resources.

Modeling how to evaluate sources to answer deep-dive questions helps students develop this skill in any subject area, for any learning objective. As students navigate the internet from a personal or school device, we can create experiences for them to pose questions, share their findings, and build an appreciation of lifelong learning online.


An authentic audience breathes life into both tech-rich and low-tech tasks. What makes the use of technology especially powerful is how digital tools can connect students with readers, listeners, and viewers of their work.

Using online spaces to share student work with the world helps students connect their learning with an audience. Students can tweet a video they’ve created to share their opinion about a novel, or share the steps to solve a math problem on a classroom blog.


Open-ended creation tools give students a space to demonstrate their understanding. They can capture their voice, record video, and tell the story of their learning. A tool like Spark Video might be perfect for students to narrate images they’ve collected during a community walk as they create a public service announcement to share with their school board. Helping students determine the product that will showcase their learning can take many forms.

Students who are going to share their new knowledge about a topic might use an audio tool like Soundtrap to create a podcast, and ones who will gather a handful of videos they recorded during a science experiment might use Book Creator to share their learning. Focusing on the features students need in order to share their learning with the world can help us place tasks before apps.


Providing a safe space to ask questions gives purpose to learning activities inside the classroom. Students have interests they can explore in the context of your learning goals. They might wonder why some animals are endangered and others are not, or they might wonder why an author chose to write about a topic.

Digital tools can help students discover new things, explore topics that pique their curiosity, and empower them as content consumers and creators. As they wonder about the world around them, students have access to online resources to help them harness their curiosity.

As you work to prioritize learning experiences over technology this school year, pause to ask:

At the end of this learning experience, what should students understand?

How will I know for sure if students understand?

What would I like students to accomplish?

In my new book, Tasks Before Apps: Designing Rigorous Learning in a Tech-Rich Classroom, I share more strategies for placing learning goals front and center and dive into creation, collaboration, and curiosity in the classroom.

Mastering 3D Lighting In Blender

Making images in Blender 3D has a lot in common with photography. In fact, if you have any photographic skills, these will transfer nicely into 3D software like Blender.

In previous articles we have discussed how you light a scene on a basic level. But how can you use all the different kinds of lamp for something approaching real cinematography?

Types of Lamp

Cinematography is all about choosing the right lights. In the virtual world of a 3D program the lights are all computed rather than real, but they perform the same function as real world lights. To get good lighting in 3D graphics images, you need to have a grasp of lighting in the real world, so a good tip is to learn how to light photographs from photography tutorials out there on the Internet.

The basic types of lamps in Blender are as follows: Point, Sun, Spot, Hemi and Area.


Point

Point lights are a tiny ball of light and omnidirectional – that is to say, scattering light in all directions like a lightbulb. Shadows fan out from the source centre in radiating lines.


Sun

Sun lights emulate the light you get from the sun; the light comes down from the source in parallel lines. Shadows cast straight down from the source and are soft.


Spot

Spotlights have a point source, but they fan out at a particular angle set in the properties, and they have a soft transition from the middle to the outer radius, the same as a real spotlight. Shadows are hard-edged and follow the angle of the beam.


Hemi

Hemi lights are like spotlights, but the difference is that the source is a half sphere and the light focusses in straight lines, like a lighting brolly. Shadows are hard-edged.


Area

Area lights are flat planes which cast light like a softbox or light reflected from a large reflective surface. Shadows are sharp when the objects are close to a surface but softer when they are distant.

Emission Surfaces

Another kind of light you can have in Blender is to turn an object into a light by selecting a surface texture of Emission. The texture emits light, meaning you can make a ball, cube or plane be a light emitter. The light is soft and the shadows smooth.

You can turn objects into lights, the benefit being that you can see the lights. The standard lights in Blender are invisible to the camera, but lights which are objects can be seen. The only light sources in this scene are the objects themselves.

Basic Setup

The basic lighting setup taught by all photography courses is to have a key, fill and rim or edge light.

The key light is either a strong, sun-like light or spotlight shining on the front of the object being lit. This casts light on the front and top of the object and shadows on the surface over any undercuts. In this example we used a sun light above and to the right of the camera. Strength is set to 700.

The fill light is positioned opposite to the key light to fill in any shadows. In this example, an Area light is positioned below the camera and to the left pointing up at the object. Strength is set to 75.

The rim or edge light is positioned behind the object pointing towards the object and the camera to highlight the edge of the object to separate it from its background. In this example, a Hemi light is positioned above, to the left and behind the skull pointing forwards towards the camera. The Strength is set to 2.

And that is how you light something perfectly.

Lighting Tips

The main tip for setting up lights and even textures in Blender is to use a rendered viewport. This makes a draft-quality rendering of the light that you can see updated in real time to allow you to position lights and shadows perfectly while seeing the effects of your light positions live on the screen.


Learn as much as you can about real world lighting for photography and transfer that knowledge to the 3D virtual world of Blender for fantastic lighting.

Image Credit: Cole Harris

Phil South

Phil South has been writing about tech subjects for over 30 years. Starting out with Your Sinclair magazine in the 80s, and then MacUser and Computer Shopper. He’s designed user interfaces for groundbreaking music software, been the technical editor on film making and visual effects books for Elsevier, and helped create the MTE YouTube Channel. He lives and works in South Wales, UK.


Mastering Multiple Linear Regression: A Comprehensive Guide


Interested in predictive analytics? Then research artificial intelligence, machine learning, and deep learning.

Let’s take a brief look at what linear regression in sklearn is. Regression is a statistical method used to determine the strength of the relationship between independent and dependent variables. Generally, independent variables are those whose values are used to produce the output, and dependent variables are those whose values depend on the independent ones. When discussing regression algorithms, you should know some of the linear regression algorithms commonly used in Python to train machine learning models, like simple linear regression, lasso, ridge, etc.

In the following tutorial, we will talk about the multiple linear regression model, or multilinear regression, and understand how simple linear regression differs from multiple linear regression (MLR) in Python.

Learning objectives

Understand the difference between simple linear regression and multiple linear regression in Python’s Scikit-learn library.

Learn how to read datasets and handle categorical variables for multiple linear regression using Scikit-learn.

Apply Scikit-learn’s linear regression algorithm to train a model for multiple linear regression.

This article was published as a part of the Data Science Blogathon.

What Is Machine Learning?

If you are on the path of learning data science, then you definitely have an understanding of what machine learning is. In today’s digital world, everyone knows what Machine Learning is because it is a trending digital technology across the world. Every step towards the adaptation of the future world is led by this current technology, which in turn, is led by data scientists like you and me.

Now, for those of you who don’t know what machine learning is, here’s a brief introduction:

Machine learning is the study of computer algorithms that improve automatically through experience and the use of data. A machine learning algorithm builds a model based on the data we provide during training. That is the simple definition of machine learning; when we go deeper, we find that a huge number of algorithms are used in model building. Generally, the most commonly used machine learning algorithms are chosen based on the type of problem, such as regression, classification, etc. But today, we will only talk about sklearn’s linear regression algorithms.

Simple Linear Regression vs Multiple Linear Regression

Now, before moving ahead, let’s discuss the intuition behind simple linear regression. Later, we will compare multiple and simple linear regression based on that intuition as we solve our machine learning problem.

What Is Simple Linear Regression?

Let’s consider simple linear regression with an example.

Now, suppose we take a scenario of house prices where our x-axis is the size of the house, and the y-axis is the price of the house. In this example, we have two features – the first one is f1, and the second one is f2, where

 f1 refers to the size of the house and

f2 refers to the price of the house.

So, if f1 is the independent feature and f2 the dependent feature, we know that whenever the size of the house increases, the price increases too. Suppose we plot the data as scatter points. Through these points, we try to find the best-fit line, which is given by the equation:

                            equation:   y = A + Bx

Here, y is the price of the house and x is the size of the house. In this equation, the intercept A indicates the base price of a house when its size is 0, while the slope B (the coefficient) indicates the increase in price per unit increase in size.
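As a quick sketch of this idea (the house sizes, prices, and use of NumPy’s polyfit here are illustrative, not part of the original example), we can fit the best-fit line y = A + Bx:

```python
import numpy as np

# hypothetical data: house sizes (x) and prices (y); chosen so that y = 50 + 2*x exactly
x = np.array([50.0, 70.0, 90.0, 110.0, 130.0])
y = np.array([150.0, 190.0, 230.0, 270.0, 310.0])

# fit the best-fit line y = A + B*x; polyfit returns [slope, intercept]
B, A = np.polyfit(x, y, 1)

print(A, B)  # intercept A ≈ 50.0, slope B ≈ 2.0
```

Because the data lies exactly on a line, the fit recovers the intercept and slope we built in.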

Now, how is it different when compared to multiple linear regression?

What Is Multiple Linear Regression?

Multiple linear regression (MLR) basically means that we have many features, such as f1, f2, f3, and f4, with f5 as our output feature. If we take the same example as we discussed above, suppose:

f1 is the size of the house,

f2 is the number of bedrooms in the house,

f3 is the locality of the house,

f4 is the condition of the house, and

f5 is our output feature, which is the price of the house.

Now, you can see that multiple independent features also make a huge impact on the price of the house, meaning the price can vary from feature to feature. When we are discussing multiple linear regression, then the equation of simple linear regression y=A+Bx is converted to something like:

                            equation:  y = A + B1x1 + B2x2 + B3x3 + B4x4

“If we have one dependent feature and multiple independent features, then we basically call it multiple linear regression.”

Now, our aim in using multiple linear regression is to compute A, which is the intercept. The key parameters B1, B2, B3, and B4 are the slopes or coefficients for each independent feature. This basically means that if we increase the value of x1 by 1 unit, then B1 tells us how much it will affect the price of the house. B2, B3, and B4 work similarly.
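To make the role of the intercept and coefficients concrete, here is a small sketch (the coefficient values are made up for illustration) that recovers A and B1–B4 from synthetic data with a plain least-squares solve:

```python
import numpy as np

# synthetic data generated from y = A + B1*x1 + B2*x2 + B3*x3 + B4*x4
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(100, 4))   # four independent features
A_true = 5.0
B_true = np.array([2.0, -1.0, 0.5, 3.0])
y = A_true + X @ B_true                      # noise-free, so the fit is exact

# least-squares solve for [A, B1, B2, B3, B4]:
# prepend a column of ones so the intercept is estimated alongside the slopes
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(coef)  # ≈ [5.0, 2.0, -1.0, 0.5, 3.0]
```

This is exactly what scikit-learn’s LinearRegression does under the hood when fitting multiple features.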

So, this is a small theoretical description of multiple linear regression. Now we will use the scikit-learn linear regression library to solve the multiple linear regression problem.

How to Train a Model for Multiple Linear Regression?

Step 1: Reading the Dataset

Most datasets come in CSV file format; for reading such a file, we use the pandas library:

import pandas as pd

df = pd.read_csv('50_Startups.csv')
df

Here you can see that there are 5 columns in the dataset, where State stores categorical data points and the rest are numerical features.

Now, we have to classify independent and dependent features.

Independent and Dependent Variables

There are 5 features in total in the dataset, of which Profit is our dependent feature, and the rest are our independent features.

Python Code:
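A minimal sketch of this split, assuming the standard 50_Startups columns with Profit as the dependent feature (the inline DataFrame here is just a stand-in for the real CSV):

```python
import pandas as pd

# stand-in for pd.read_csv('50_Startups.csv'); these rows are illustrative only
df = pd.DataFrame({
    'R&D Spend':       [165349.2, 162597.7, 153441.5],
    'Administration':  [136897.8, 151377.6, 101145.6],
    'Marketing Spend': [471784.1, 443898.5, 407934.5],
    'State':           ['New York', 'California', 'Florida'],
    'Profit':          [192261.8, 191792.1, 191050.4],
})

# Profit is the dependent feature; everything else is independent
x = df.drop('Profit', axis=1)
y = df['Profit']

print(x.columns.tolist())  # ['R&D Spend', 'Administration', 'Marketing Spend', 'State']
```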

Step 2: Handling Categorical Variables

In our dataset, there is one categorical column, State. As part of data preprocessing, we must encode the categorical values inside this column. For that, we will use pandas’ get_dummies() function:

# handle the categorical variable
states = pd.get_dummies(x['State'], drop_first=True)

# dropping the original text column
x = x.drop('State', axis=1)

# concatenation of the independent variables and the new categorical variables
x = pd.concat([x, states], axis=1)
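As a self-contained illustration of what get_dummies() does at this step (using a toy frame rather than the real dataset):

```python
import pandas as pd

x = pd.DataFrame({
    'R&D Spend': [1.0, 2.0, 3.0],
    'State':     ['New York', 'California', 'Florida'],
})

# One-hot encode State; drop_first=True drops one dummy column
# to avoid the dummy-variable trap (perfect multicollinearity)
states = pd.get_dummies(x['State'], drop_first=True)

# Replace the original text column with the dummy columns
x = x.drop('State', axis=1)
x = pd.concat([x, states], axis=1)

print(x.columns.tolist())   # ['R&D Spend', 'Florida', 'New York']
```

Each remaining dummy column holds 1 where the row had that state and 0 otherwise; the dropped state ('California' here, the first alphabetically) is represented by all dummies being 0.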

Step 3: Splitting the Data

Now, we have to split the data into training and test sets using the scikit-learn train_test_split() function.
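Before applying it to our dataset, here is a quick self-contained check of how train_test_split divides the rows (using synthetic arrays in place of the real x and y):

```python
import numpy as np
from sklearn.model_selection import train_test_split

x = np.arange(50).reshape(25, 2)   # 25 samples, 2 features
y = np.arange(25)                  # 25 targets

# test_size=0.2 reserves 20% of the rows for testing;
# random_state makes the shuffle reproducible
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42
)

print(len(x_train), len(x_test))   # 20 5
```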

# importing train_test_split from sklearn
from sklearn.model_selection import train_test_split

# splitting the data
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

Step 4: Applying the Model

Now, we apply the linear regression model to our training data. First of all, we import LinearRegression from the scikit-learn library; the same class handles both simple and multiple linear regression, so no separate library is needed.

# importing module
from sklearn.linear_model import LinearRegression

# creating an object of the LinearRegression class
LR = LinearRegression()

# fitting the training data
LR.fit(x_train, y_train)

Finally, once we execute this, our model is ready. Now we can use the x_test data to predict profits.
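As a self-contained illustration of this fit-and-predict cycle, here is a sketch on synthetic data with a known, exactly linear relationship, so the recovered intercept and coefficients can be checked against the values used to generate the data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
Xs = rng.random((100, 4))               # 100 samples, 4 features
true_coef = np.array([2.0, 5.0, -1.0, 4.0])

# ys = 3 + 2*x1 + 5*x2 - 1*x3 + 4*x4, with no noise
ys = 3.0 + Xs @ true_coef

model = LinearRegression()
model.fit(Xs, ys)

print(round(model.intercept_, 6))   # recovers the intercept, A ≈ 3.0
print(np.round(model.coef_, 6))     # recovers B1..B4 ≈ [2, 5, -1, 4]
```

Because the data is noise-free, ordinary least squares recovers the generating parameters essentially exactly; on real data the fitted coefficients only approximate the underlying relationship.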

y_prediction = LR.predict(x_test)
y_prediction

Now, we have to compare the y_prediction values with the original values to evaluate the accuracy of our model, which we do with a metric called r2_score. Let’s briefly discuss r2_score:


It is a function inside the sklearn.metrics module. The score is often quoted as a percentage, with 100 percent (a score of 1.0) meaning a perfect fit, and it is closely related to the MSE.

r2 is basically calculated by the formula given below:

                            formula:  r2 = 1 − (SSres / SSmean)

Here SSres is the sum of squared residuals, Σ(y − ŷ)², and SSmean is the sum of squared deviations from the mean, Σ(y − ȳ)², where:

y = original values

ŷ = predicted values

ȳ = mean of the original values

From this formula we infer that when SSmean is greater than SSres, r2 is positive; if this condition is satisfied, our model is useful for prediction, and for such a model the score ranges from 0.0 to 1.0.

”The proportion of the variance in the dependent variable or target variable that is predictable from the independent variable(s) or predictor.”

The best possible score is 1.0, and the score can be negative, because the model can be arbitrarily worse. A constant model that always predicts the mean of y, disregarding the input features, gets an R2 score of 0.0.
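We can verify the formula r2 = 1 − (SSres / SSmean) against scikit-learn's r2_score on a small hand-made example:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# SSres: sum of squared residuals, sum of (y - y_hat)^2
ss_res = np.sum((y_true - y_pred) ** 2)

# SSmean: sum of squared deviations from the mean, sum of (y - y_bar)^2
ss_mean = np.sum((y_true - y_true.mean()) ** 2)

r2_manual = 1 - ss_res / ss_mean
print(r2_manual)                      # identical to r2_score(y_true, y_pred)
```

The manual computation and sklearn's r2_score agree exactly, confirming that r2_score implements this formula.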

# importing metric modules
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
import numpy as np

# computing the evaluation metrics
score = r2_score(y_test, y_prediction)

print('r2 score is', score)
print('mean_sqrd_error is==', mean_squared_error(y_test, y_prediction))
print('root_mean_squared error of is==', np.sqrt(mean_squared_error(y_test, y_prediction)))

You can see that the R2 score is greater than 0.8, which means the model fits this multiple linear regression problem well, and the mean squared error is also low.


Conclusion

Multiple Linear Regression is a statistical method used to study the linear relationship between a dependent variable and multiple independent variables. In the article above, we learned step-by-step how to implement MLR in Python using the Scikit-learn library. We used a simple example of predicting house prices to explain how simple linear regression works and then extended the example to multiple linear regression, which involves more than one independent variable. I hope now you have a better understanding of the topic.

Key Takeaways

Multiple linear regression is an extension of simple linear regression, where multiple independent variables are used to predict the dependent variable.

Scikit-learn, a machine learning library in Python, can be used to implement multiple linear regression models and to read, preprocess, and split data.

Categorical variables can be handled in multiple linear regression using one-hot encoding or label encoding.


