How Image Compression Works: The Basics

Methods, Approaches, Algorithms Galore.

It’s naive to think that there’s just one way to compress an image. There are different methods, each with a unique approach to a common problem, and each approach being used in different algorithms to reach a similar conclusion. Each algorithm is represented by a file format (PNG, JPG, GIF, etc.). For now, we’re going to talk about the methods that are generally used to compress images, which will explain why some of them take up so much less space.

Lossless Compression

When you think of the word “lossless” in the context of image compression, you probably think of a method that preserves quality while still keeping the file size relatively small. That’s exactly right, and then some: lossless compression introduces no distortion at all, so the decompressed image is bit-for-bit identical to the original. It does this by building an index of all the pixels and grouping same-colored pixels together. It’s kind of like how file compression works, except we’re dealing with smaller units of data.

DEFLATE is among the most common algorithms for this kind of job. It’s based on two other algorithms (Huffman and LZ77, if you’re a bookworm) and it has a very tried-and-true way of grouping data found within images. Instead of just running through the length of the data and storing multiple instances of a same-colored pixel as a single data unit (known as run-length encoding), it grabs duplicate strings found within the entire code and sets a “pointer” for each duplicate found. Wherever a particular string of data (pixels) is used frequently, it replaces that string with a short code, and the more frequent the string, the shorter its code, which compresses everything further.
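The run-length idea mentioned above is simple enough to sketch in a few lines. Here is a toy encoder and decoder for a row of pixels (the pixel values are made up for the example; real formats work on binary data, not strings):

```python
def run_length_encode(pixels):
    """Collapse runs of identical pixels into (pixel, count) pairs."""
    runs = []
    for pixel in pixels:
        if runs and runs[-1][0] == pixel:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([pixel, 1])   # start a new run
    return [(p, n) for p, n in runs]

def run_length_decode(runs):
    """Expand (pixel, count) pairs back into the original row."""
    return [p for p, n in runs for _ in range(n)]

row = ["red", "red", "red", "blue", "blue", "red"]
encoded = run_length_encode(row)
print(encoded)                              # [('red', 3), ('blue', 2), ('red', 1)]
assert run_length_decode(encoded) == row    # lossless: round-trips exactly
```

The round-trip assertion is the point: nothing is thrown away, which is precisely what makes this family of methods lossless.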

Notice how with run-length encoding and DEFLATE, none of the pixels are actually eaten up or forced to change color. Using this method purely results in an image that is identical to the raw original. The only difference between the two lies in how much space is actually taken up on your hard drive!

Lossy Compression

As the name implies, lossy compression makes an image lose some of its content. When taken too far, it can actually make the image unrecognizable. But lossy doesn’t imply that you’re eliminating pixels. There are actually two algorithms commonly used to compress images this way: transform encoding and chroma subsampling. The former is more common in images and the latter in video.

Transform encoding, the method behind JPG, works by converting small blocks of pixels into frequency data and discarding the fine detail we barely perceive. Chroma subsampling takes another approach. Instead of reducing detail across whole blocks, which also affects the brightness of an image, it keeps brightness information at full resolution and reduces only the color information. Because your eyes are far more sensitive to brightness than to color, this tricks them into not readily noticing any dip in quality. It’s actually great for the compression of animations, which is why it is used more in video streams. That’s not to say that images don’t also use this algorithm.

But wait, there’s more! Google also took a shot at a new lossy algorithm, known as WebP. Instead of averaging color information, it predicts the color of a pixel by looking at the neighboring fragments already decoded. The data that’s actually written into the resulting compressed image is the difference between the predicted color and the actual color. In the end, many of the predictions will be accurate, resulting in a zero. And instead of storing a whole run of zeroes, it compresses all of them into one symbol that represents them. According to Google, the result preserves image accuracy while reducing image size by an average of 25 percent compared to other lossy algorithms.
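The predict-and-store-the-difference idea can be sketched with a one-dimensional toy: predict each value from its left neighbor and keep only the residual. This is not WebP’s actual predictor (WebP uses several two-dimensional modes), just the core trick in miniature:

```python
def encode_residuals(row):
    """Predict each value from its left neighbor; store only the differences.
    Accurate predictions produce zeros, which compress very well."""
    prev = 0
    residuals = []
    for value in row:
        residuals.append(value - prev)
        prev = value
    return residuals

def decode_residuals(residuals):
    """Rebuild the original row by re-applying each difference."""
    prev = 0
    row = []
    for r in residuals:
        prev += r
        row.append(prev)
    return row

pixels = [200, 200, 200, 201, 201, 150]
res = encode_residuals(pixels)
print(res)   # [200, 0, 0, 1, 0, -51] -- mostly zeros in smooth regions
assert decode_residuals(res) == pixels
```

Notice how the smooth run of near-identical pixels collapses into zeros; a later entropy-coding stage is what actually shrinks those zeros down.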


Miguel Leiva-Gomez

Miguel has been a business growth and technology expert for more than a decade and has written software for even longer. From his little castle in Romania, he presents cold and analytical perspectives to things that affect the tech world.


How Bing’s Image & Video Algorithm Works

Meenaz Merchant’s official title is Principal Program Manager Lead, AI and Research, Bing.

He says simply: head of the multimedia team.

The first thing I learned is that the same team builds the algorithms for both images and videos.

That means understanding how to approach each gets a little simpler – they will have a similar approach and probably be looking at similar features in a similar manner.

They also run the camera search (reverse image search) algorithm. (Unfortunately, we didn’t have time to talk about that.)

What Triggers Video & Image Boxes?

Intent.

Obviously with very explicit intent such as “pictures of…” or “videos of…”, and more ambiguous explicitness (as it were) such as “show me a…”

But also implicit where the user probably wants and expects images or videos on the SERP – for example, movie stars.

Do Both Appear Side by Side in SERPS Regularly?

Merchant mentions a 10% overlap, where both video and images are relevant and helpful in satisfying the user’s intent.

So 10% of the time when they show one, they will also show the other.

From spot checks on the Brand SERPs I have collected, it had seemed to me that there was a bias in the SERPs toward an “either/or” basis.

Merchant tells me this is not so.

I really should have checked my data beforehand. The crossover is quite extensive.

A good third of brands that have images on their SERP also have videos, and a good quarter that have videos also have images.

These are stats from Google SERPs, not Bing, but they nicely illustrate the doubling up Merchant mentions.

And that data supports what Frédéric Dubut said during the interview that kicked this series off, and that Merchant reiterates.

Some queries have very strong implicit intent for images and videos – famous people, especially in the entertainment sphere. Dubut mentions Beyoncé.

And looking again at the Beyonce example above, the videos ranking above the images make even more sense.

In the fifth episode in this series, Nathan Chalmers from the whole page team also states that user behavior on the SERP affects where rich elements are placed.

This is some analysis I really need to do on Brand SERP data.

Not just which rich elements are present for brands and people, but also which are on the rise in rankings – i.e., more popular with searchers.

What Helps Images Rank?

Relevance is the single most important factor.

Merchant says, “Relevance – is this the right image for the query – trumps everything.”

They do want diversity, but won’t dilute the quality of the results in order to expand to different sources.

If one source gives multiple images that are deemed relevant, they will all rank.

Merchant uses the example of a query for “San Francisco city from Alcatraz Island” which is very specific.

If they have an image that shows that view, that is the most relevant.

The best webpage with the best image SEO that contains an image that looks almost the same, but is actually taken from a different view – a picture of the city from Golden Gate bridge, for example – isn’t relevant and the algo will attempt to figure that out and filter that result out.

Evaluating relevancy depends on understanding what’s in the image.

They use the “traditional” signals:

Alt tag.

Title tag.

File name.

Caption.

The content around the image.

But it turns out the core signal for relevancy is understanding what the image shows by analyzing using machine learning.

Progress in Machine Learning

As Fabrice Canel points out in episode 2 of this series, the progress Microsoft (and Google) are making using deep learning is exponential.

Their algorithms are improving at an exponential rate.

For images, in particular, the last three years have been the “take-off.”

Bing’s ability to understand the content around the image to understand the context has improved, but also their ability to analyze the image itself and understand the contents.

Famous faces, landmarks, animals, flowers…

Tom Hanks, the Eiffel tower, German shepherd dogs, and roses will have been part of the training set three years ago.

They progressively expanded that out to less well-tagged datasets and incorporated the identification of specific elements in images… to the point that now they can be very subtle.

The example Merchant uses is understanding that a picture of a skyline is a city.

But more than that: which specific city? (i.e., San Francisco)

And yet further: that the picture is taken from a specific vantage point (i.e., San Francisco skyline taken from Alcatraz Island).

That highlights just how good this analysis is.

As users, we make use of these capabilities (and are starting to take them for granted) and yet forget as marketers just how smart these machines are.

And take that a step further – their confidence that they have correctly understood is increasing exponentially.

That means less and less reliance on all those traditional signals.

Alt tags are no longer the indicator they once were.

Even content around the image can become pretty much redundant if the machine is very confident it has correctly identified what the image shows.

They Analyze Every Single Image

I had always assumed that running images through their algos had a financial cost that meant that money would dictate they couldn’t analyze every image they collect.

Not true.

Merchant states that they analyze every single image and identify what it shows.

That means the clues they see in the filename, alt tags, titles, captions, and even the content around the image are simply corroboration of what the machine has understood.

So there is truly no longer any point in cheating on those aspects.

Bing will spot the cheat and ignore it.

But worse. Trust.

Taking alt tags as an example, Merchant states that the algorithm will learn which sites are trustworthy and will apply historical trust to ranking.

And that appears to confirm the experiences I have had when submitting pages on my sites to Bing and Google.

Both pages and images get indexed very very fast (seconds). Other sites I have tested take minutes, hours, or even days.

Historically built trust would appear to be a major factor here.

And Merchant suggests that building a reputation over time also applies to authority. They look at a lot of signals to evaluate authority.

And that nicely illustrates just how important E-A-T is. They are looking at:

Expertise (quality content).

Authoritativeness (peer group support).

Trust (audience appreciation via interaction on the SERP).

Makes sense.

Both authority and trust are re-evaluated on every search.

The algo is constantly micro-altering its perception of which domains are most trustworthy and authoritative.

Staying honest over time is crucial to your future success.

Image Boxes on the Core SERP Are Simply the Top Results From Image Vertical

Merchant talks about the image vertical and points out that they can generate as many results as the core blue links.

Getting to the top of the image results not only gives visibility there, but also on the core SERPs.

To get seen on the blue links SERP, you simply need to rank near the top for the images for a query that shows image boxes on the core SERP.

If the query is very image-centric, ranking in the top dozen will do, since the image box is bigger.

What Helps Videos Rank?

The signals are similar to images.

First and foremost, relevancy… but then also popularity, authority, trust, and attractiveness (in that order, it seems).

What Triggers a Video Box on a Core SERP?

As with images, whether a video box gets shown on the core SERP depends on their relevancy to the explicit or implicit intent of the query.

Merchant cites two examples of implicit intent that will trigger video: news and entertainment.

Is There a Domain / Platform Bias?

The platform doesn’t matter as much as producing relevant (that word again!), quality video that your audience engages with. Hosting it on YouTube, Twitter, Facebook, or Vimeo are all options.

But think about the type of query and your vertical.

Different platforms dominate on different verticals.

YouTube is a great source for How-To, but news would tend to favor the BBC or another news site.

Authority in that niche can play a big role. So a small, specialized website would be seen by Bing as a perfect source for a video on a very niche search query.

They would look for:

Quality (aka being accurate and having decent production standards).

Authority within that industry (a.k.a., peer group approval).

Trust the domain has built over the years (a.k.a., providing results to Bing or Google that prove to be useful to their audience – a.k.a. SERP data).

Once again, all this looks suspiciously like E-A-T.

The Importance of E-A-T

The further I move forward with writing up the interviews in the Bing series, the more E-A-T stands out, and the more I am convinced that E-A-T is a good way to approach creating and presenting content to rank.

Watch the video of the interview this article is based on.

Read the Other Articles in the Bing Series

How Ranking Works at Bing – Frédéric Dubut, Senior Program Manager Lead, Bing

Discovering, Crawling, Extracting and Indexing at Bing – Fabrice Canel Principal Program Manager, Bing

How the Q&A / Featured Snippet Algorithm Works – Ali Alvi, Principal Lead Program Manager AI Products, Bing

How the Image and Video Algorithm Works – Meenaz Merchant, Principal Program Manager Lead, AI and Research, Bing

How the Whole Page Algorithm Works – Nathan Chalmers, Program Manager, Search Relevance Team, Bing

Image Credits

Featured & In-Post Images: Véronique Barnard, Kalicube.pro

How Blockchain Changed The Way Money Works?

What do you think the effects of Bitcoin technology will be? Don Tapscott, the CEO of the Tapscott Group, says that blockchains, the technology behind cryptocurrencies, could change the world economy. Tapscott talks with McKinsey’s Rik Kirkland about the possible benefits of blockchains for working together and keeping track of transactions. Blockchains are a type of open-source, distributed database that uses cutting-edge cryptography.

Every day, our global financial system deals with trillions of dollars, which helps billions of people worldwide. But the system has some flaws that lead to higher costs because of fees and delays, more trouble for everyone because of unnecessary paperwork and a higher risk of fraud and other bad behavior.

More specifically, economic crime affects 45% of financial intermediaries, such as payment networks, stock exchanges, and money transfer services, compared with 37% of the economy as a whole. It shouldn’t be a surprise that rising regulatory costs have been one of the biggest worries in the banking industry. All of this adds to costs, which the public pays for.

But let’s look into the crystal ball and think about how it might affect our everyday lives in the future, keeping in mind that proponents tend to be overly optimistic and set goals that are too high.

Secure Government Communications

Cryptocurrency has a lot of clear benefits and almost no drawbacks, so governments will eventually accept it and it will spread across countries. Soon, most countries will let you use some form of cryptocurrency on their territory.

Blockchain-Based Authentication

In the near future, our personal information will almost certainly be kept on the blockchain. This would give rise to a global standard for identity protection, one that gives you more privacy and more control. Companies are already working on it. With these crypto-identities, everything about us would be stored, from our medical records to our work history.

The Protocols of a Trillion Dollars

In their quest to become trillion-dollar businesses, companies are pushing the economy to heights never seen before. Because blockchains and their protocols cut transaction and data costs so sharply, trillion-dollar blockchain protocols would start to appear alongside those companies.

Blockchain-based global commerce

It is a good option when two people or groups want to send and receive money for almost nothing. In the future, blockchain technology will make it possible to do business all over the world, making it faster, safer, and more open.

Considerable increases in people’s average quality of life

It would be a side effect of all the better processes and procedures that Blockchain makes possible. People would be able to use better financial services more often. Getting rid of intermediaries and letting money flow freely would be very helpful to developing countries. There would be more resources for people worldwide, which would help billions of people.

These are the few possible bright spots in Blockchain’s future. We haven’t even scratched the surface of the potential for change in a few areas. It will affect a lot of people.

Future people will look back on our time as the Renaissance when everything got better because knowledge was shared with more people.

Three Ways in Which Blockchain Is Changing Lives

The ways that people pay for things worldwide are changing right in front of our eyes.

The Bretton Woods Agreement, signed in 1944, was a vital part of building the modern financial system. After World War II, it was important for money to move quickly and safely across the “western world.” To make this idea work, the participating countries agreed on common rules for their economic and financial exchanges under a monetary management system. It was a significant improvement over the gold-backed money system, under which sending money across oceans was too slow and inefficient.

The US dollar was the most crucial currency in the fiat monetary system. It made sending money and goods across borders much easier and cheaper.

Nearly 80 years later, more people worldwide are willing to do business with each other, not just in North America and Europe. The system wasn’t made to handle these kinds of transactions.

It should be as easy to send money as it is to send a text message.

It is hard and expensive to send money to an Indian developer through a bank account in Germany. Aside from the fact that the paperwork for the SWIFT system takes a lot of time, an Indian bank does not allow IBAN wire transfers. The transaction fee is at least $20, plus a percentage of the amount transferred in conversion fees. The money may take a few days to clear the recipient’s account. Use a global fintech’s third-party payment system and follow their thorough Know Your Customer (KYC) protocol. It is the safest way to pay.

It is common for transfers of more than $1,000 to be stopped and a questionnaire sent to the person receiving the money. The recipient is at the mercy of the authorities and must explain why he should get paid.

Digital money can be stored on any phone

People with skills from all over the world want to be paid in a cryptocurrency backed by the US dollar. Up to 38% of Web3 freelancers would rather be paid in cryptocurrencies like Bitcoin and Ethereum than in other ways. One reason is that local fiat currencies often have higher inflation rates and less buying power than the US dollar. When Iran’s official inflation rate goes over 36%, as it did in 2023, cryptocurrencies like Bitcoin are needed to protect buying power.

Conclusion

Because of the Internet, the world has shrunk. As a result of economic globalization, companies can now find skilled workers from all over the world, letting billions of people join a globally connected economy for the first time. Built on top of the internet, blockchain technology makes it possible for digital assets to be moved quickly, cheaply, and globally. It makes it possible to participate in the global economy and earn more in ways that weren’t possible before, even without a traditional bank account.

How To Crop An Image Along The Y

In this tutorial, we are going to learn how to crop an image along the y-axis using FabricJS. We can create an Image object by creating an instance of fabric.Image. Since it is one of the basic elements of FabricJS, we can also easily customize it by applying properties like angle, opacity etc. In order to crop an image along the y-axis, we use the cropY property.

Syntax and Parameters

element − This parameter accepts HTMLImageElement, HTMLCanvasElement, HTMLVideoElement or String which denotes the image element. The String should be a URL and would be loaded as an image.

options (optional) − This parameter is an Object which provides additional customizations to our object. Using this parameter origin, stroke width and a lot of other properties can be changed related to the image object of which cropY is a property.

callback (optional) − This parameter is a function which is to be called after eventual filters are applied.

Options Keys

cropY − This property accepts a Number value which denotes the image crop in pixels along the y-axis, from the original image size.

Default appearance of Image object Example

Let’s see a code example of how the Image object appears when cropY property is not used. As we can see, there is no image crop along the y-axis.

```javascript
var canvas = new fabric.Canvas("canvas");
canvas.setWidth(document.body.scrollWidth);
canvas.setHeight(250);

var imageElement = document.getElementById("img1");
var image = new fabric.Image(imageElement, {
   top: 50,
   left: 110,
});
canvas.add(image);
```

Using the cropY property Example

In this example, we have used the cropY property and assigned it a number value of 25. Therefore, the image crop is 25 pixels along the y-axis.

```javascript
var canvas = new fabric.Canvas("canvas");
canvas.setWidth(document.body.scrollWidth);
canvas.setHeight(250);

var imageElement = document.getElementById("img1");
var image = new fabric.Image(imageElement, {
   top: 50,
   left: 110,
   cropY: 25,
});
canvas.add(image);
```

Conclusion

In this tutorial, we used two examples to demonstrate how you can crop an Image along the Y-axis using FabricJS.

How Xml Works In Ansible?

Definition

The Ansible XML module is part of the community.general collection. It supports various parameters (described in the syntax section) to read the content of an XML file, add or remove child elements or values, print specific child attributes or nodes, change attribute values, and so on, given the correct xpath syntax.


Syntax:

For the non-windows target, ansible uses the module syntax community.general.xml, and for the windows target, ansible uses the win_xml module.

Parameters supported by the community.general.xml module.

Parameters:

add_children:

attribute:

The attribute to select when using the parameter value.

This is a string and not pretended with @.

backup:

choices: no (default) / yes

Create the backup file, including the timestamp of the backup. This module doesn’t take the backup automatically; we need to set the backup parameter to yes.

content: attribute/text

choices: attribute / text

count:

choices: no (default) / yes

Search for the given XPath and provide the count of any matches.

input_type:

choices: xml / yaml (default)

Type of the input for the add_children and set_children.

insertafter:

choices: no (default) / yes

It adds the additional child element(s) after the last element given in the Xpath.

Insertbefore:

choices: no (default) / yes

It adds the additional child element(s) before the first element given in the Xpath.

pretty_print:

choices: no (default) / yes

Pretty print xml output.

print_match:

choice: no (default) / yes

Search for the given Xpath and print any output that matches.

path:

aliases: dest / file

Path of the XML file as an input. If the XML string is not specified, then the path is required.

set_children:

To set the child element of the selected element from the given xpath.

state:

choices: absent/present

alias: ensure

Set or remove an xpath selection (node(s), attributes(s)).

value:

Desired state of the selected attribute.

xmlstring:

A string containing xml on which to operate. This parameter is necessary if the path is not provided.

xpath:

A valid xpath expression to describe the item(s) to manipulate, and it operates on the document root (/) by default.

How XML works in Ansible?

To deal with XML files, we need the Ansible XML module. It doesn’t come with the default Ansible installation, so we need to install it. It is available as part of the Ansible community collection.

To install the XML module, we can use the below command. This will install the XML for both the Unix and the Windows operating systems.

ansible-galaxy collection install community.general

For the Windows operating system, you can use the win_xml module, and for non-Windows targets, you can use the community.general.xml module.

When you work with the XML file, you need to provide the xPath to deal with the XML attributes and the values. You can learn more about the xPath notations from the websites below.

There are also many other websites you can refer to. For this example, we will use the sample XML file from Microsoft.

Playbook example:

Examples

Let us discuss examples of Ansible XML.

Example #1: Ansible XML module to count the number of attribute nodes

In this playbook, we will count the total number of author nodes.
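The original playbook isn’t reproduced here, but a minimal sketch of it might look like the following. The file path and the XML structure (a catalog of books, each with an author element) are assumptions based on the examples in this article:

```yaml
- hosts: localhost
  tasks:
    - name: Count the author nodes
      community.general.xml:
        path: /tmp/books.xml          # assumed location of the sample XML file
        xpath: /catalog/book/author
        count: yes
      register: hits

    - name: Print the result
      ansible.builtin.debug:
        msg: "There are a total of {{ hits.count }} author nodes."
```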

Output:

There are a total of 12 “author” nodes.

Example #2: Remove the specific node with the attribute value

This playbook will remove all the nodes with the book id attribute bk101.

xpath: /catalog/book[@id='bk101']
state: absent

Output:

Example #3: Playbook to remove attributes

This playbook will remove all the matching attributes. In this case, it will remove the ID attribute.

state: absent

Output:

Once you check the XML output, all the ids will be removed.

Example #4: Adding the new element with the value

This playbook will add the new element “Newbook” with the value “fiction”.

value: 'Fiction'
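Only a fragment of the playbook survives here; a hedged sketch of what the whole task might look like follows (the file path is an assumption, and the module creates the Newbook node if it doesn’t exist yet):

```yaml
- hosts: localhost
  tasks:
    - name: Add the Newbook element with the value Fiction
      community.general.xml:
        path: /tmp/books.xml       # assumed location of the sample XML file
        xpath: /catalog/Newbook    # created if missing, since state defaults to present
        value: 'Fiction'
```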

Output:

Example #5: Playbook to change the attribute value

value: 'ComputerTech'

Output:

XML file output:

Example #6: Ansible playbook to add multiple child elements

This playbook will add the multiple child elements to the specified attributes of the book node.

xpath: /catalog/book[@id='bk101']
- tag: '1001'

Output:

XML file output:

If you need to insert before the specific element, then use the insertbefore parameter. For example,

xpath: /catalog/book[@id='bk101']/genre[text() = "Computer"]
- tag: '1001'

XML Output:

And for the insert after some attribute, you need to specify insertafter parameter.

- tag: '1001'

Conclusion

XML files are used by various websites, software, and configurations. Ansible’s XML module uses various parameters to read the content, copy the XML file, and manipulate it as required. This helps when you configure a website or piece of software through XML, since you can use the built-in plugin to work with the file directly.

Recommended Articles

This is a guide to Ansible XML. Here we discuss the Definition, syntax, parameters, How XML works in Ansible? with Examples with code implementation. You may also have a look at the following articles to learn more –

How Automatic Hdmi Switching Works

Do you have more devices to connect to your TV than actual HDMI ports? In a world where people often have multiple game consoles, Blu-ray players and perhaps even a computer to connect to their display, that’s not uncommon.

Most mainstream TVs and monitors have multiple HDMI inputs, but sometimes it’s just not enough. An automatic HDMI switch might be just the ticket, but how do they work?

Table of Contents

What is HDMI?

HDMI or High Definition Multimedia Interface is a standard for modern digital display devices. It’s the most widespread type of connection and you’ll find it on everything from laptops to game consoles. There are three connection sizes: standard, mini and micro.

Mini and micro HDMI connectors are usually found on small devices such as action cameras, since their bodies are often too small for the chunky standard HDMI connector.

HDMI cables carry digital information: both sound and video, among other things. It’s also a two-way connection, which means that both devices connected by an HDMI cable can speak to each other. That’s an important feature when it comes to automatic HDMI switches.

HDMI Switches in a Nutshell

Switches that feature automatic HDMI switching can detect which device is active and then will switch the output to that device automatically. If it’s implemented well, you’ll never have to manually switch between inputs. However, bad automatic switches can be more trouble than they’re worth. If they constantly get it wrong, it’s much less frustrating to simply use a remote or onboard control to change from one device to the other. 

How Does Automatic Switching Work?

Because HDMI is a two-way street, it makes it possible for each device to actively indicate when it’s sending a signal. An automatic switcher detects which HDMI cable is currently active and switches to that one. 

It sounds simple in principle, but there needs to be a little bit of logic to the process. After all, you don’t want the switch changing inputs when it shouldn’t. Generally, automatic HDMI switching only works if just one device is currently powered on and sending a signal. If you want to switch between multiple devices that are all transmitting actively, you’ll have to do it manually using the remote or onboard control.
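The decision rule described above can be sketched as a few lines of code. This is a conceptual model of the logic, not firmware from any real switch:

```python
def choose_input(active_inputs, current):
    """Pick which HDMI input port to display.

    active_inputs: set of port numbers currently carrying a signal.
    current: the port that is currently selected.
    Switches automatically only when exactly one source is live;
    otherwise it holds the current port and leaves the choice to the
    remote or onboard control.
    """
    if len(active_inputs) == 1:
        return next(iter(active_inputs))  # one live device: follow it automatically
    return current                        # zero or several live devices: don't guess

print(choose_input({2}, current=1))       # -> 2 (only port 2 is live, so switch)
print(choose_input({1, 2}, current=1))    # -> 1 (ambiguous, hold the current port)
```

The "don't guess" branch is what separates a well-behaved automatic switch from the frustrating kind that constantly jumps to the wrong input.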

TVs With Automatic Switching

You might not need an automatic HDMI switch at all. Many recent TV models include an automatic HDMI switching function built in. Even better, you can often turn this on or off in the options if you don’t like it. Higher-end TVs also include more than the typical three HDMI ports, letting you hook up more devices without having to employ a switch.

Also keep in mind that some devices offer an HDMI passthrough option. They have both an IN and OUT port. For example, Xbox One consoles feature such a passthrough as do some Blu-ray players. Chaining devices together like this is another way to save on the number of HDMI ports you need. 

The Best Automatic HDMI Switches

With that basic explanation of how automatic HDMI switches work out of the way, we can look at some actual switches you can buy. Every person has different needs when it comes to these switches. So we’ve chosen examples for different use cases and budgets, to give you a good idea of the automatic HDMI switch options on offer.

The Zettaguard is a very affordable 4K switch that has fairly neutral styling and plenty of features. One design feature we immediately found attractive is the placement of all HDMI ports on the back of the unit, making it easy to have a neat and unobtrusive setup.

The Zettaguard supports a maximum UHD 60Hz signal, which covers just about everything, with the exception of PCs and next-generation console 120Hz modes. Then again, hardly anyone has a 4K 120Hz TV, so it’s not a major issue right now.

Finally, it’s nice to see a picture-in-picture function. When you don’t want to switch automatically between sources, but want to watch more than one at once, it’s a cool option to have.

Setting aside for a minute that the brand name of this switch is “Awake Lion” and that it’s styled to look like something from the NES era, it’s an amazing deal. A five-port switch with 4K 60Hz for under $50 is a steal in our view.

The automatic switching function of this switch has a memory feature, where it will go back to the last source you used when restarted. It does come with an ugly remote control if you want to take matters into your own hands, but for anyone on a budget who needs to connect five devices, it seems like a fine choice.

Amazon’s house brand has proven to be both a good line of products and genuinely light on the wallet. This 3-port switch is perfect for older TVs that may have only one or two HDMI ports. Perhaps a 1080p unit that’s been moved to a bedroom.

That being said, this switch supports 4K output and automatic switching. We especially like that it uses USB power rather than a wall adapter. Most TVs, even non-smart models, have a USB port of some sort. So you should have the option of using that to power this switch. It’s cheap, it’s basic, and it will do the job!

Coming in at just a few dollars more than the Amazon-brand switch, the Totu offers four ports rather than three. The Totu looks a bit more premium as well and promises especially wide compatibility with a range of HDMI devices. This also includes a comprehensive list of audio standards.

This five-port switch from the (unpronounceable) Sgeyr looks quite a bit like a USB hub at first glance, which means you’ll probably want to hide it away somewhere rather than display it openly as part of your entertainment setup. That is, unless you like that sort of thing.

The main appeal of this switch is having five ports with auto-switching for only $30. However, the caveat is that this is an older HDMI 1.4 product, so 4K content is limited to 30Hz. That makes it a no-no for 4K gaming, but it’s perfectly fine for 1080p output at higher refresh rates.
